Commit log (path: root/users)
* lint: fix a few issues
  Franck Cuny, 2021-05-10 (3 files changed, +4/-1)
* git: ignore binary for the REPL
  Franck Cuny, 2021-05-10 (1 file changed, +1/-0)
* repl: support a simple REPL for some early testing
  franck cuny, 2020-01-11 (2 files changed, +41/-0)

  The REPL reads the input, sends it to the lexer, and prints the tokens
  to STDOUT. For now nothing else is done, since we still don't parse the
  tokens.
* lexer: support tokens for equal and not equal.
  franck cuny, 2020-01-11 (2 files changed, +39/-2)

  The tokens for equal (`==`) and not equal (`!=`) are composed of two
  characters. We introduce a new helper (`peekChar`) that we use when we
  encounter the token `=` or `!` to see if this is a token composed of two
  characters. Add some tests to ensure they are parsed correctly.
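  The `peekChar` technique can be sketched roughly as below; the `Lexer`
  fields and method names follow common Monkey-interpreter conventions and
  are assumptions about this repository, with token types reduced to plain
  strings for brevity:

```go
package main

import "fmt"

// Lexer tracks the current character and a read-ahead position.
type Lexer struct {
	input        string
	position     int  // index of the current char
	readPosition int  // index of the next char to read
	ch           byte // current char, 0 at end of input
}

func New(input string) *Lexer {
	l := &Lexer{input: input}
	l.readChar()
	return l
}

// readChar advances to the next character, using 0 as the EOF sentinel.
func (l *Lexer) readChar() {
	if l.readPosition >= len(l.input) {
		l.ch = 0
	} else {
		l.ch = l.input[l.readPosition]
	}
	l.position = l.readPosition
	l.readPosition++
}

// peekChar looks at the next character without consuming it; this is what
// lets us tell `=` from `==` and `!` from `!=`.
func (l *Lexer) peekChar() byte {
	if l.readPosition >= len(l.input) {
		return 0
	}
	return l.input[l.readPosition]
}

// NextToken handles only the operators relevant to this commit.
func (l *Lexer) NextToken() string {
	var tok string
	switch l.ch {
	case '=':
		if l.peekChar() == '=' {
			l.readChar() // consume the second '='
			tok = "EQ"
		} else {
			tok = "ASSIGN"
		}
	case '!':
		if l.peekChar() == '=' {
			l.readChar() // consume the '='
			tok = "NOT_EQ"
		} else {
			tok = "BANG"
		}
	case 0:
		tok = "EOF"
	default:
		tok = "ILLEGAL"
	}
	l.readChar()
	return tok
}

func main() {
	l := New("==!=")
	for tok := l.NextToken(); ; tok = l.NextToken() {
		fmt.Println(tok)
		if tok == "EOF" {
			break
		}
	}
}
```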
* token: add tokens for equal and not equal.
  franck cuny, 2020-01-11 (1 file changed, +3/-0)
* lexer: test the new keywords are parsed correctly.
  franck cuny, 2020-01-11 (1 file changed, +25/-3)

  Ensure that the new keywords added (`if`, `else`, `true`, `false`,
  `return`) are parsed correctly.
* token: support more keywords
  franck cuny, 2020-01-11 (1 file changed, +13/-2)

  Add support for a few more keywords (`true`, `false`, `if`, `else`,
  `return`). All keywords are grouped together in the constant declaration.
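  A sketch of how such a grouped constant declaration and a keyword lookup
  typically fit together; the constant names and the `LookupIdent` shape
  follow the usual Monkey implementation and are assumptions here:

```go
package main

import "fmt"

const (
	IDENT = "IDENT" // user-defined identifier

	// Keywords, grouped together as the commit describes.
	FUNCTION = "FUNCTION"
	LET      = "LET"
	TRUE     = "TRUE"
	FALSE    = "FALSE"
	IF       = "IF"
	ELSE     = "ELSE"
	RETURN   = "RETURN"
)

// keywords maps source-level spellings to token types.
var keywords = map[string]string{
	"fn":     FUNCTION,
	"let":    LET,
	"true":   TRUE,
	"false":  FALSE,
	"if":     IF,
	"else":   ELSE,
	"return": RETURN,
}

// LookupIdent returns the keyword token type when ident is a reserved
// word, and IDENT otherwise.
func LookupIdent(ident string) string {
	if tok, ok := keywords[ident]; ok {
		return tok
	}
	return IDENT
}

func main() {
	fmt.Println(LookupIdent("if"))     // IF
	fmt.Println(LookupIdent("foobar")) // IDENT
}
```

  With this layout, adding a keyword is a one-line change to the map plus
  its constant, and the lexer itself never needs to change.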
* token: rewrite documentation for `LookupIdent`.
  franck cuny, 2020-01-11 (1 file changed, +4/-3)
* lexer: delete redundant test.
  franck cuny, 2020-01-11 (1 file changed, +1/-32)

  The test `TestNextTokenBasic` was not testing anything that
  `TestNextTokenMonkey` was not already testing. Rename `TestNextTokenMonkey`
  to `TestNextToken` for clarity.
* Makefile: add a Makefile
  franck cuny, 2020-01-11 (1 file changed, +4/-0)

  For now, automate running the tests.
* lexer: support more operator tokens.
  franck cuny, 2020-01-11 (2 files changed, +31/-1)

  Support the operator tokens that were added to our tokenizer. This also
  adds a few more tests to ensure we handle them correctly.
* token: support more operator tokens
  franck cuny, 2020-01-11 (1 file changed, +10/-3)

  Support additional tokens for operators (`-`, `*`, etc.). This change only
  adds the tokens to the list of constants, and groups all the tokens
  related to operators together.
* lexer: initial lexer
  franck cuny, 2020-01-11 (2 files changed, +218/-0)

  The initial lexer for the monkey language. We only support a small subset
  of the language at this stage. Some simple tests ensure that we can lex a
  small snippet, and that the minimum set of tokens we need is supported
  correctly.
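  One building block of such a lexer, grouping a run of letters into a
  single identifier, could look like this; `isLetter` and `readIdentifier`
  are conventional names, and the code is an illustrative sketch rather
  than the repository's implementation:

```go
package main

import "fmt"

// isLetter reports whether ch can appear in an identifier; treating '_'
// as a letter allows names like `my_var`.
func isLetter(ch byte) bool {
	return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch == '_'
}

// readIdentifier consumes consecutive letters starting at pos and returns
// the identifier together with the position just past it.
func readIdentifier(input string, pos int) (string, int) {
	start := pos
	for pos < len(input) && isLetter(input[pos]) {
		pos++
	}
	return input[start:pos], pos
}

func main() {
	input := "let five = 5;"
	word, next := readIdentifier(input, 0)
	fmt.Println(word, next) // let 3
}
```

  The lexer would then hand the word to a keyword lookup to decide whether
  it is a reserved word or a plain identifier.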
* token: initial tokenizer.
  franck cuny, 2020-01-11 (1 file changed, +48/-0)

  This is the initial tokenizer for the monkey language. For now we
  recognize a limited number of tokens. We only have two keywords at this
  stage: `fn` and `let`. `fn` is used to create functions, while `let` is
  used for assigning variables. The other tokens are mostly there to parse
  the source code and recognize things like brackets, parentheses, etc.
* go.mod: create the module 'monkey'
  franck cuny, 2020-01-11 (1 file changed, +3/-0)

  The project is named monkey; we add a mod file to ensure that the tooling
  and dependencies are set up correctly when we import various modules in
  this project.
* Add README.md, LICENSE.txt
  franck cuny, 2019-12-29 (2 files changed, +21/-0)