Commit log (most recent first)
|
Add support for a few more keywords (`true`, `false`, `if`, `else`,
`return`).
All keywords are grouped together in the constant declaration.
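As a rough illustration, the grouped keyword constants and a lookup table might look like the sketch below; the `keywords` map and `LookupIdent` helper are assumptions about how the lexer resolves identifiers, not necessarily this repository's code.
```go
package token

// TokenType identifies the kind of a token as a plain string.
type TokenType string

const (
	IDENT = "IDENT" // non-keyword identifiers fall back to this

	// Keywords, grouped together in one constant block.
	FUNCTION = "FUNCTION" // fn
	LET      = "LET"      // let
	TRUE     = "TRUE"
	FALSE    = "FALSE"
	IF       = "IF"
	ELSE     = "ELSE"
	RETURN   = "RETURN"
)

// keywords maps the source spelling of each keyword to its token type.
var keywords = map[string]TokenType{
	"fn":     FUNCTION,
	"let":    LET,
	"true":   TRUE,
	"false":  FALSE,
	"if":     IF,
	"else":   ELSE,
	"return": RETURN,
}

// LookupIdent returns the keyword type for ident, or IDENT if the
// identifier is not a keyword.
func LookupIdent(ident string) TokenType {
	if tok, ok := keywords[ident]; ok {
		return tok
	}
	return IDENT
}
```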
|
The test `TestNextTokenBasic` was not exercising anything that
`TestNextTokenMonkey` did not already cover, so it is redundant.
Rename `TestNextTokenMonkey` to `TestNextToken` for clarity.
|
For now, automate running the tests.
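The message does not say how the run is automated; one minimal possibility, shown purely as an assumption, is a Makefile target that wraps the standard `go test` invocation.
```make
# Hypothetical Makefile; the commit only states that the test run is
# automated, not how. Recipe lines must be indented with a real tab.
.PHONY: test
test:
	go test ./...
```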
|
Support the operator tokens that were added to our tokenizer. This also
adds a few more tests to ensure we handle them correctly.
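Since the message does not show the tests, here is a sketch of what a check for the new operator tokens could look like; the test name, the import path, and the exact operator set are assumptions.
```go
package lexer

import (
	"testing"

	"monkey/token" // placeholder import path
)

// TestOperatorTokens feeds a string of single-character operators to the
// lexer and checks that each comes back with the expected token type.
func TestOperatorTokens(t *testing.T) {
	input := "-*/!<>"

	expected := []token.TokenType{
		token.MINUS,
		token.ASTERISK,
		token.SLASH,
		token.BANG,
		token.LT,
		token.GT,
		token.EOF,
	}

	l := New(input)
	for i, want := range expected {
		tok := l.NextToken()
		if tok.Type != want {
			t.Fatalf("token %d: expected %q, got %q (literal %q)",
				i, want, tok.Type, tok.Literal)
		}
	}
}
```
The lexer side of the change is presumably a handful of new `case` branches in `NextToken`, one per operator character.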
|
Support additional tokens for operators (`-`, `*`, etc.). This change
only adds the tokens to the list of constants and groups all the
operator-related tokens together.
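A sketch of what that grouping could look like in the `token` package; beyond `-` and `*`, the exact operator set and the constant names are assumptions.
```go
package token

// Operator token types, kept together in a single block so related
// constants are easy to find.
const (
	ASSIGN   = "="
	PLUS     = "+"
	MINUS    = "-"
	BANG     = "!"
	ASTERISK = "*"
	SLASH    = "/"
	LT       = "<"
	GT       = ">"
)
```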
|
The initial lexer for the monkey language. We only support a small
subset of the language at this stage.
We have some simple tests to ensure that we can tokenize a small
snippet, and that the minimal set of tokens we need is handled
correctly.
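Such a lexer typically wraps the input string with a little position bookkeeping. The sketch below assumes the `token` package outlined under the initial tokenizer commit further down; the struct fields, method names, and placeholder import path are assumptions, and identifier, number, and whitespace handling are left out to keep it short.
```go
package lexer

import "monkey/token" // placeholder import path; the real one may differ

// Lexer walks the input one byte at a time.
type Lexer struct {
	input        string
	position     int  // index of ch
	readPosition int  // index of the next byte to read
	ch           byte // current byte under examination
}

// New primes the lexer by reading the first byte of the input.
func New(input string) *Lexer {
	l := &Lexer{input: input}
	l.readChar()
	return l
}

// readChar advances to the next byte, using 0 to signal end of input.
func (l *Lexer) readChar() {
	if l.readPosition >= len(l.input) {
		l.ch = 0
	} else {
		l.ch = l.input[l.readPosition]
	}
	l.position = l.readPosition
	l.readPosition++
}

// NextToken returns the token starting at the current byte.
func (l *Lexer) NextToken() token.Token {
	var tok token.Token
	switch l.ch {
	case '=':
		tok = token.Token{Type: token.ASSIGN, Literal: "="}
	case '+':
		tok = token.Token{Type: token.PLUS, Literal: "+"}
	case ',':
		tok = token.Token{Type: token.COMMA, Literal: ","}
	case ';':
		tok = token.Token{Type: token.SEMICOLON, Literal: ";"}
	case '(':
		tok = token.Token{Type: token.LPAREN, Literal: "("}
	case ')':
		tok = token.Token{Type: token.RPAREN, Literal: ")"}
	case '{':
		tok = token.Token{Type: token.LBRACE, Literal: "{"}
	case '}':
		tok = token.Token{Type: token.RBRACE, Literal: "}"}
	case 0:
		tok = token.Token{Type: token.EOF, Literal: ""}
	default:
		tok = token.Token{Type: token.ILLEGAL, Literal: string(l.ch)}
	}
	l.readChar()
	return tok
}
```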
|
This is the initial tokenizer for the monkey language. For now we
recognize a limited number of tokens.
We only have two keywords at this stage: `fn` and `let`. `fn` is used to
create functions, while `let` is used to assign values to variables.
The other tokens mostly cover the punctuation needed to read the source
code: brackets, parentheses, and the like.
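A minimal sketch of what such a token package might contain, based only on the description above (two keywords plus punctuation); the constant names and the use of a plain string type are assumptions.
```go
package token

// TokenType identifies the kind of a token as a plain string, which
// keeps debugging output readable.
type TokenType string

// Token couples a type with the literal text that produced it.
type Token struct {
	Type    TokenType
	Literal string
}

const (
	ILLEGAL = "ILLEGAL" // a character we do not recognize
	EOF     = "EOF"     // end of input

	// Identifiers and literals.
	IDENT = "IDENT"
	INT   = "INT"

	// Operators.
	ASSIGN = "="
	PLUS   = "+"

	// Delimiters.
	COMMA     = ","
	SEMICOLON = ";"
	LPAREN    = "("
	RPAREN    = ")"
	LBRACE    = "{"
	RBRACE    = "}"

	// Keywords.
	FUNCTION = "FUNCTION" // fn
	LET      = "LET"      // let
)
```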
|
The project is named monkey. We add a go.mod file to ensure that the
tooling and dependencies are set up correctly when we import packages
from within this project.
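A module file for a project like this is only a couple of lines; the module path and Go version below are placeholders, since the commit does not show the actual values.
```
// go.mod (illustrative; the real module path and Go version may differ)
module monkey

go 1.21
```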
|
This commit is empty on purpose.