Commit messages (newest first)
|
The REPL reads the input, sends it to the lexer, and prints the tokens to
STDOUT. For now nothing else is done, since we do not parse the tokens
yet.
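
A minimal sketch of what such a REPL loop could look like, assuming a `lexer` package with a `New` constructor and a `NextToken` method, a `token` package exposing an `EOF` type, and a `monkey` module path; all of these names are illustrative rather than taken from the actual code:

```go
package repl

import (
	"bufio"
	"fmt"
	"io"

	"monkey/lexer"
	"monkey/token"
)

const PROMPT = ">> "

// Start reads one line at a time from in, runs it through the lexer,
// and prints every token to out until the input is exhausted.
func Start(in io.Reader, out io.Writer) {
	scanner := bufio.NewScanner(in)

	for {
		fmt.Fprint(out, PROMPT)
		if !scanner.Scan() {
			return
		}

		l := lexer.New(scanner.Text())
		for tok := l.NextToken(); tok.Type != token.EOF; tok = l.NextToken() {
			fmt.Fprintf(out, "%+v\n", tok)
		}
	}
}
```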
|
The tokens for equal (`==`) and not equal (`!=`) are composed of two
characters. We introduce a new helper (`peekChar`) that we use when we
encounter `=` or `!` to check whether it is the start of a two-character
token.
Add some tests to ensure these tokens are lexed correctly.
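
A sketch of how the lexer might handle this, assuming fields named `input`, `readPosition`, and `ch`, a `newToken` helper, and token types named `EQ` and `NOT_EQ`; all of these names are assumptions:

```go
// peekChar returns the next character without consuming it, or 0 when
// the end of the input has been reached.
func (l *Lexer) peekChar() byte {
	if l.readPosition >= len(l.input) {
		return 0
	}
	return l.input[l.readPosition]
}
```

Inside `NextToken`, the `=` and `!` cases can then look one character ahead before deciding which token to emit:

```go
case '=':
	if l.peekChar() == '=' {
		ch := l.ch
		l.readChar()
		tok = token.Token{Type: token.EQ, Literal: string(ch) + string(l.ch)}
	} else {
		tok = newToken(token.ASSIGN, l.ch)
	}
case '!':
	if l.peekChar() == '=' {
		ch := l.ch
		l.readChar()
		tok = token.Token{Type: token.NOT_EQ, Literal: string(ch) + string(l.ch)}
	} else {
		tok = newToken(token.BANG, l.ch)
	}
```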
|
Ensure that the newly added keywords (`if`, `else`, `true`, `false`,
`return`) are lexed correctly.
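
The new checks are most likely extra entries in the existing expected-token table; a standalone sketch in that style, assuming constants such as `token.IF` and `token.TRUE` and a `monkey` module path (the input and all names are illustrative):

```go
package lexer_test

import (
	"testing"

	"monkey/lexer"
	"monkey/token"
)

func TestNextTokenKeywords(t *testing.T) {
	input := `if (x) { return true; } else { return false; }`

	// Expected token type and literal for each token, in order.
	tests := []struct {
		expectedType    token.TokenType
		expectedLiteral string
	}{
		{token.IF, "if"}, {token.LPAREN, "("}, {token.IDENT, "x"}, {token.RPAREN, ")"},
		{token.LBRACE, "{"}, {token.RETURN, "return"}, {token.TRUE, "true"},
		{token.SEMICOLON, ";"}, {token.RBRACE, "}"}, {token.ELSE, "else"},
		{token.LBRACE, "{"}, {token.RETURN, "return"}, {token.FALSE, "false"},
		{token.SEMICOLON, ";"}, {token.RBRACE, "}"}, {token.EOF, ""},
	}

	l := lexer.New(input)
	for i, tt := range tests {
		tok := l.NextToken()
		if tok.Type != tt.expectedType || tok.Literal != tt.expectedLiteral {
			t.Fatalf("tests[%d] - wrong token: want (%q, %q), got (%q, %q)",
				i, tt.expectedType, tt.expectedLiteral, tok.Type, tok.Literal)
		}
	}
}
```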
|
Add support for a few more keywords (`true`, `false`, `if`, `else`,
`return`).
All keywords are grouped together in the constant declaration.
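
After this change the keyword-related part of the token package might look roughly like the following; the `keywords` map and `LookupIdent` helper are assumptions about how the lexer tells keywords apart from identifiers, and `TokenType` and `IDENT` are assumed to be defined in the same package:

```go
// Keywords, grouped together in one block.
const (
	FUNCTION = "FUNCTION"
	LET      = "LET"
	TRUE     = "TRUE"
	FALSE    = "FALSE"
	IF       = "IF"
	ELSE     = "ELSE"
	RETURN   = "RETURN"
)

// keywords maps the literal spelling of each keyword to its token type.
var keywords = map[string]TokenType{
	"fn":     FUNCTION,
	"let":    LET,
	"true":   TRUE,
	"false":  FALSE,
	"if":     IF,
	"else":   ELSE,
	"return": RETURN,
}

// LookupIdent returns the keyword token type for ident, or IDENT when
// the word is an ordinary identifier.
func LookupIdent(ident string) TokenType {
	if tok, ok := keywords[ident]; ok {
		return tok
	}
	return IDENT
}
```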
|
The test `TestNextTokenBasic` did not cover anything beyond what
`TestNextTokenMonkey` already covered.
Rename `TestNextTokenMonkey` to `TestNextToken` for clarity.
|
Support in the lexer the operator tokens that were added to our token
constants. This also adds a few more tests to ensure we handle them
correctly.
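
The lexer change is most likely a handful of new single-character cases in the switch that `NextToken` runs on the current character; a sketch, not a standalone unit, assuming a `newToken` helper and the operator constants from the token package:

```go
// New cases inside the existing switch on l.ch in NextToken.
case '-':
	tok = newToken(token.MINUS, l.ch)
case '!':
	tok = newToken(token.BANG, l.ch) // later reworked by the != change above
case '*':
	tok = newToken(token.ASTERISK, l.ch)
case '/':
	tok = newToken(token.SLASH, l.ch)
case '<':
	tok = newToken(token.LT, l.ch)
case '>':
	tok = newToken(token.GT, l.ch)
```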
|
Support additional tokens for operators (`-`, `*`, etc.). This change
only adds the tokens to the list of constants, and groups all the
operator-related tokens together.
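
The grouped operator constants might then read something like this; the exact set of operators and their names are assumptions:

```go
// Operators.
const (
	ASSIGN   = "="
	PLUS     = "+"
	MINUS    = "-"
	BANG     = "!"
	ASTERISK = "*"
	SLASH    = "/"

	LT = "<"
	GT = ">"
)
```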
|
The initial lexer for the Monkey language. We only support a small
subset of the language at this stage.
Some simple tests ensure that we can tokenize a small snippet and that
the minimal set of tokens we need is handled correctly.
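
A sketch of the core of such a lexer, assuming it walks the input one byte at a time through a `readChar` helper; the field and function names are illustrative:

```go
package lexer

type Lexer struct {
	input        string
	position     int  // index of the character under examination
	readPosition int  // index of the next character to read
	ch           byte // character under examination
}

// New returns a Lexer primed on the first character of input.
func New(input string) *Lexer {
	l := &Lexer{input: input}
	l.readChar()
	return l
}

// readChar advances the lexer by one character, using 0 to signal the
// end of the input.
func (l *Lexer) readChar() {
	if l.readPosition >= len(l.input) {
		l.ch = 0
	} else {
		l.ch = l.input[l.readPosition]
	}
	l.position = l.readPosition
	l.readPosition++
}
```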
|
This is the initial tokenizer for the Monkey language. For now we
recognize a limited number of tokens.
We only have two keywords at this stage: `fn` and `let`. `fn` is used to
create functions, while `let` is used to bind variables.
The other tokens mostly let us recognize the structure of the source
code: brackets, parentheses, and so on.
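
A sketch of what that token package might contain, assuming a token is a type/literal pair; the constant names follow common conventions and may differ from the actual code:

```go
package token

// TokenType distinguishes the kinds of tokens the lexer can emit.
type TokenType string

// Token pairs a token type with the literal text it was read from.
type Token struct {
	Type    TokenType
	Literal string
}

const (
	ILLEGAL = "ILLEGAL" // a character we do not know how to tokenize
	EOF     = "EOF"     // end of input

	IDENT = "IDENT" // identifiers: add, foobar, x, ...
	INT   = "INT"   // integer literals: 42

	ASSIGN = "="
	PLUS   = "+"

	COMMA     = ","
	SEMICOLON = ";"

	LPAREN = "("
	RPAREN = ")"
	LBRACE = "{"
	RBRACE = "}"

	// Only two keywords exist at this stage.
	FUNCTION = "FUNCTION"
	LET      = "LET"
)
```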