path: root/users/fcuny

* token: rewrite documentation for `LookupIdent`.
  franck cuny, 2020-01-11 (1 file changed: -3/+4)
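  For reference, a minimal sketch of what such a keyword lookup typically
  looks like; the `keywords` map and the constant names (`IDENT`,
  `FUNCTION`, `LET`) are assumptions based on the entries below, not the
  repository's exact code.

    // Illustrative sketch of the token package's keyword lookup; names
    // and layout are assumed, not copied from the repository.
    package token

    type TokenType string

    const (
        IDENT    TokenType = "IDENT"
        FUNCTION TokenType = "FUNCTION"
        LET      TokenType = "LET"
    )

    var keywords = map[string]TokenType{
        "fn":  FUNCTION,
        "let": LET,
    }

    // LookupIdent returns the keyword's token type when ident is a
    // reserved word, and IDENT otherwise.
    func LookupIdent(ident string) TokenType {
        if tok, ok := keywords[ident]; ok {
            return tok
        }
        return IDENT
    }
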
* lexer: delete redundant test.
  franck cuny, 2020-01-11 (1 file changed: -32/+1)

  The test `TestNextTokenBasic` did not cover anything that
  `TestNextTokenMonkey` was not already covering. Rename
  `TestNextTokenMonkey` to `TestNextToken` for clarity.
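  The consolidated table-driven test might look roughly like the sketch
  below; the input, the expected token list, and the import path
  `monkey/token` are assumptions, not the repository's actual test.

    // Hypothetical shape of the consolidated TestNextToken; the real test
    // exercises far more of the language than this short input.
    package lexer

    import (
        "testing"

        "monkey/token" // assumed import path, given the module name 'monkey'
    )

    func TestNextToken(t *testing.T) {
        input := `let five = 5;`

        tests := []struct {
            expectedType    token.TokenType
            expectedLiteral string
        }{
            {token.LET, "let"},
            {token.IDENT, "five"},
            {token.ASSIGN, "="},
            {token.INT, "5"},
            {token.SEMICOLON, ";"},
            {token.EOF, ""},
        }

        l := New(input)
        for i, tt := range tests {
            tok := l.NextToken()
            if tok.Type != tt.expectedType {
                t.Fatalf("tests[%d]: wrong type, want %q, got %q", i, tt.expectedType, tok.Type)
            }
            if tok.Literal != tt.expectedLiteral {
                t.Fatalf("tests[%d]: wrong literal, want %q, got %q", i, tt.expectedLiteral, tok.Literal)
            }
        }
    }
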
* Makefile: add a Makefile
  franck cuny, 2020-01-11 (1 file changed: -0/+4)

  For now, automate running the tests.
* lexer: support more operator tokens.
  franck cuny, 2020-01-11 (2 files changed: -1/+31)

  Support the operator tokens that were added to our tokenizer. This also
  adds a few more tests to ensure we handle them correctly.
* token: support more operator tokens
  franck cuny, 2020-01-11 (1 file changed: -3/+10)

  Support additional tokens for operators (`-`, `*`, etc.). This change only
  adds the tokens to the list of constants, and groups all the tokens
  related to operators together.
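  The grouped constant block might look roughly like this; the exact
  operator set and constant names are assumptions.

    // Illustrative grouping of the operator token constants; the exact
    // set and names in the repository may differ.
    package token

    type TokenType string

    // Operators.
    const (
        ASSIGN   TokenType = "="
        PLUS     TokenType = "+"
        MINUS    TokenType = "-"
        BANG     TokenType = "!"
        ASTERISK TokenType = "*"
        SLASH    TokenType = "/"
        LT       TokenType = "<"
        GT       TokenType = ">"
    )
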
* lexer: initial lexer
  franck cuny, 2020-01-11 (2 files changed: -0/+218)

  The initial lexer for the monkey language. We only support a small subset
  of the language at this stage. We have some simple tests to ensure that we
  can lex small snippets, and that the minimum set of tokens we need is
  supported correctly.
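  A heavily trimmed sketch of such a lexer, assuming the token package
  sketched in the entries below (including `LookupIdent`) and the import
  path `monkey/token`; the actual implementation handles many more tokens
  and cases than shown here.

    // Trimmed sketch of the lexer: read one byte at a time and emit tokens.
    package lexer

    import "monkey/token" // assumed import path

    type Lexer struct {
        input        string
        position     int  // index of ch in input
        readPosition int  // index of the next byte to read
        ch           byte // byte currently under examination
    }

    func New(input string) *Lexer {
        l := &Lexer{input: input}
        l.readChar()
        return l
    }

    // readChar advances by one byte, using 0 to signal the end of the input.
    func (l *Lexer) readChar() {
        if l.readPosition >= len(l.input) {
            l.ch = 0
        } else {
            l.ch = l.input[l.readPosition]
        }
        l.position = l.readPosition
        l.readPosition++
    }

    // NextToken skips whitespace and returns the next token in the input.
    func (l *Lexer) NextToken() token.Token {
        for l.ch == ' ' || l.ch == '\t' || l.ch == '\n' || l.ch == '\r' {
            l.readChar()
        }
        var tok token.Token
        switch l.ch {
        case '=':
            tok = token.Token{Type: token.ASSIGN, Literal: "="}
        case ';':
            tok = token.Token{Type: token.SEMICOLON, Literal: ";"}
        case 0:
            tok = token.Token{Type: token.EOF, Literal: ""}
        default:
            if isLetter(l.ch) {
                // Read a whole identifier or keyword, then classify it.
                start := l.position
                for isLetter(l.ch) {
                    l.readChar()
                }
                literal := l.input[start:l.position]
                return token.Token{Type: token.LookupIdent(literal), Literal: literal}
            }
            tok = token.Token{Type: token.ILLEGAL, Literal: string(l.ch)}
        }
        l.readChar()
        return tok
    }

    func isLetter(ch byte) bool {
        return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch == '_'
    }
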
* token: initial tokenizer.
  franck cuny, 2020-01-11 (1 file changed: -0/+48)

  This is the initial tokenizer for the monkey language. For now we
  recognize a limited number of tokens. We only have two keywords at this
  stage: `fn` and `let`. `fn` is used to create functions, while `let` is
  used to assign variables. The other tokens are mostly there to parse the
  source code and recognize things like brackets, parentheses, etc.
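  The initial token package likely centers on a small `Token` struct plus a
  handful of constants, along the lines of this sketch; the names follow
  the usual Monkey layout and are assumptions, not the repository's code.

    // Illustrative layout of the initial token package described above.
    package token

    type TokenType string

    // Token pairs a type with the literal text that produced it.
    type Token struct {
        Type    TokenType
        Literal string
    }

    const (
        ILLEGAL TokenType = "ILLEGAL"
        EOF     TokenType = "EOF"

        // Identifiers and literals.
        IDENT TokenType = "IDENT"
        INT   TokenType = "INT"

        // Delimiters and brackets.
        COMMA     TokenType = ","
        SEMICOLON TokenType = ";"
        LPAREN    TokenType = "("
        RPAREN    TokenType = ")"
        LBRACE    TokenType = "{"
        RBRACE    TokenType = "}"

        // Keywords.
        FUNCTION TokenType = "FUNCTION"
        LET      TokenType = "LET"
    )
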
* go.mod: create the module 'monkey'
  franck cuny, 2020-01-11 (1 file changed: -0/+3)

  The project is named monkey. We add a mod file to ensure that the tooling
  and dependencies are set up correctly when we import the various packages
  in this project.
* Add README.md, LICENSE.txt
  franck cuny, 2019-12-29 (2 files changed: -0/+21)