15
typosaurus
140d

Writing a brainfuck interpreter is a lot of fun: mine does recursion within loops. I also added extensions: string literals (so you can cheat), stack dump (!), exit script (*), go to first cell (^), go to last cell (?), and nulling the current cell (0).

It parses this, for example: "retoor" ^[.>]. That dumps the string retoor. Explanation: the string literal writes its chars and moves the pointer to the sixth place. ^ resets the pointer to the first cell. [] is a loop that executes as long as there's data in the current cell. The "." prints the char of the current cell (the number if it's not alpha etc.). The ">" moves one cell to the right. [.>] will thus print until it has moved to an empty cell. To move back to the first cell, I could also have used my repeater function, which takes a repeat count after a command: <6 moves six places to the left. .>.>.>.>.>.> is also a way to print six chars. +[,.] works like the Linux program cat; , reads one char of keyboard input.
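
For the curious, here is a minimal sketch of such an interpreter in Python. The name bf_run, the 30000-cell tape, the 8-bit wrapping cells, and the exact parsing rules for string literals and repeat counts are assumptions, since the original implementation isn't shown; the real thing may differ in the details.

```python
import sys

def bf_run(code, tape_len=30000):
    tape = [0] * tape_len
    ptr = 0
    last = 0                      # highest cell touched so far, for '?'
    # Pre-match the loop brackets, ignoring any inside string literals.
    stack, jump, in_str = [], {}, False
    for pos, ch in enumerate(code):
        if ch == '"':
            in_str = not in_str
        elif not in_str and ch == '[':
            stack.append(pos)
        elif not in_str and ch == ']':
            opener = stack.pop()
            jump[pos], jump[opener] = opener, pos
    i = 0
    while i < len(code):
        c = code[i]
        i += 1
        if c == '"':              # string literal: write chars, moving right
            while code[i] != '"':
                tape[ptr] = ord(code[i])
                ptr += 1
                i += 1
            i += 1
            last = max(last, ptr)
        elif c in '+-<>.,':
            # A digit right after a command is read as its repeat count,
            # e.g. <6 or +65; put a space before a literal 0 command.
            n = 0
            while i < len(code) and code[i].isdigit():
                n = n * 10 + int(code[i])
                i += 1
            for _ in range(n or 1):
                if c == '+': tape[ptr] = (tape[ptr] + 1) % 256
                elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
                elif c == '>': ptr += 1
                elif c == '<': ptr -= 1
                elif c == '.': sys.stdout.write(chr(tape[ptr]))
                elif c == ',':
                    ch = sys.stdin.read(1)
                    tape[ptr] = ord(ch) if ch else 0
            last = max(last, ptr)
        elif c == '[' and tape[ptr] == 0:
            i = jump[i - 1] + 1   # zero cell: skip past the matching ]
        elif c == ']' and tape[ptr] != 0:
            i = jump[i - 1] + 1   # nonzero cell: loop back past the [
        elif c == '^': ptr = 0               # go to first cell
        elif c == '?': ptr = last            # go to last used cell
        elif c == '0': tape[ptr] = 0         # null the current cell
        elif c == '!': print(tape[:last + 1])    # stack dump
        elif c == '*': return                # exit script

bf_run('"retoor" ^[.>]')   # prints: retoor
```

With the same sketch, bf_run('+[,.]') behaves like cat, echoing keyboard input until EOF.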

Thanks for listening to my TED talk.

Comments
  • 3
    You are taking the fun away from brainfuck by implementing string literals 😂
  • 3
    @Lensflare yeah, it's optional to use. The repeat iterator is just very convenient: with +65 I have an A.

    I'm trying to figure out how to do an if-else using brainfuck. ChatGPT is wrong about it. Now I know why it's called brainfuck.
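
    A standard answer is the two-cell if/else idiom. Here it is as a sketch that runs on the bf_run sketch above; the cell layout (cell 0 = condition, cell 1 = a zeroed temp) and the Y/N branches are just an illustration, and it leans on the dialect's repeat counts to keep the branches short.

    ```python
    # Two-cell if/else: cell 0 holds the condition x, cell 1 is a zeroed
    # temp, cells 2+ are scratch. Spaces keep a literal 0 command from
    # being read as a repeat count by the sketch's parser.
    program = (
        "+3"              # x = 3 (nonzero, so THEN should fire)
        " >+< "           # temp = 1
        "["               # entered only when x != 0
        " >>+89. 0 <<"    #   THEN: print 'Y' (ASCII 89) via a scratch cell
        " >-< "           #   clear temp so the ELSE loop is skipped
        " 0 "             #   clear x so this loop exits after one pass
        "]"
        " > "             # move to temp
        "["               # entered only when x was 0 (temp still 1)
        " >>+78. 0 <<"    #   ELSE: print 'N' (ASCII 78)
        " - "             #   clear temp
        "]"
        " < "             # back to x
    )
    bf_run(program)       # prints: Y   (drop the "+3" and it prints: N)
    ```

    In plain brainfuck the same skeleton works with the counts spelled out by hand; both branches just have to leave the pointer on the cell where they started.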
  • 1
    @retoor

    It'd take writing ChatGPT in brainfuck.
  • 1
    let's do a nerd-outing: how old were you when you wrote your first brainfuck interpreter?

    i was 13.
  • 1
    Always wanted to write a bf interpreter.

    Writing one now, tempted to pivot.

    Curse you and your wily temptations.
  • 1
    @Wisecrack are you done yet?

    (also: i distinctly remember the interpreter taking up less than 300 chars in total)
  • 0
    @tosensei not doing a bf interpreter at the moment.

    Writing a neural net where each node is an interpreter.

    The thought occurred to me: rather than experimenting with different loss functions, divergences, architectures, and non-linear ops, why not let the network do that for you?

    If all that matters is the final loss, i.e. whether the network is accurate and precise, why not let everything else be decided internally?

    The most common problem with neural nets is that they converge, usually to a level below the ceiling set by the quality of the training/test/validation data.

    So the thinking is: if each node is an interpreter running some randomized code, and each node can decide which other nodes it connects to, then with enough training the network will converge on its own toward efficient loss functions and non-linear functions in the process.

    I'll do a write-up when I have more than scaffolding code, with some results to show for it.
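
    As a toy sketch of that idea under heavy assumptions (a single node, a made-up five-op instruction set, and hill-climbing by point mutation standing in for training; node-to-node connectivity is left out entirely), the only signal here is the final loss, exactly as described.

    ```python
    import random

    OPS = ['+1', '-1', '+x', '-x', '*2']   # made-up micro-instruction set

    def run_node(program, x):
        """A 'node' is just an interpreter: one register fed by its program."""
        acc = 0.0
        for op in program:
            if op == '+1': acc += 1
            elif op == '-1': acc -= 1
            elif op == '+x': acc += x
            elif op == '-x': acc -= x
            elif op == '*2': acc *= 2
        return acc

    def loss(program):
        """The only training signal: squared error against a target, 2x + 1."""
        return sum((run_node(program, x) - (2 * x + 1)) ** 2 for x in range(5))

    def mutate(program):
        """Point mutation: replace one randomly chosen op."""
        p = list(program)
        p[random.randrange(len(p))] = random.choice(OPS)
        return p

    random.seed(0)
    best = [random.choice(OPS) for _ in range(4)]   # start from random code
    for _ in range(2000):                           # hill-climb on final loss only
        cand = mutate(best)
        if loss(cand) <= loss(best):
            best = cand
    print(best, loss(best))   # often lands on an exact program, e.g. ['+1', '+x', '*2', '-1']
    ```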