path: root/users/fcuny/exp
Commit message | Author | Date | Files | Lines
* extract layers to a mounted loop device | Franck Cuny | 2022-06-11 | 3 | -1/+128
We create a loop device by pre-allocating space to a file on disk. This file is then formatted as an ext4 filesystem, which we mount at a temporary path on the host. Once this is done, we extract all the layers from the given container onto that mounted path. We also add a few extra files to the image (`/etc/hosts` and `/etc/resolv.conf`). When we're done extracting and writing, we run resize2fs to resize the image to a more reasonable size.
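A minimal sketch of that flow in Go, assuming the program shells out to `mkfs.ext4` and `mount(8)`; the function and variable names here are illustrative, not the commit's actual code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// prepareImage pre-allocates a raw file, formats it as ext4, and mounts it
// through a loop device so the container layers can be extracted into it.
func prepareImage(rawFile, mountPoint string, sizeMB int64) error {
	f, err := os.Create(rawFile)
	if err != nil {
		return err
	}
	// Pre-allocate the space for the disk image.
	if err := f.Truncate(sizeMB * 1024 * 1024); err != nil {
		f.Close()
		return err
	}
	f.Close()

	// Format the file as an ext4 filesystem.
	if out, err := exec.Command("mkfs.ext4", rawFile).CombinedOutput(); err != nil {
		return fmt.Errorf("mkfs.ext4: %v: %s", err, out)
	}

	if err := os.MkdirAll(mountPoint, 0o755); err != nil {
		return err
	}
	// `-o loop` lets mount allocate the loop device for us.
	if out, err := exec.Command("mount", "-o", "loop", rawFile, mountPoint).CombinedOutput(); err != nil {
		return fmt.Errorf("mount: %v: %s", err, out)
	}
	return nil
}
```

After the layers and extra files are written, the flow described above would unmount the image and run resize2fs (e.g. `resize2fs -M`) against the raw file to shrink it.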
* add a lease to the Go context | Franck Cuny | 2022-06-11 | 1 | -0/+6
This ensures that our container will not be deleted while we work on it. In addition, a default expiry of 24 hours is set on it once the lease is released (once the program has completed). See [1] for more details.

[1] https://github.com/containerd/containerd/blob/261c107ffc4ff681bc73988f64e3f60c32233b37/docs/garbage-collection.md

Signed-off-by: Franck Cuny <franck@fcuny.net>
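With containerd's Go client, attaching such a lease to the context looks roughly like this; a sketch based on the public leases API, not necessarily the commit's exact code:

```go
package main

import (
	"context"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/leases"
)

// withLease attaches a lease with a 24h expiry to ctx. The returned done
// function releases the lease; after that, containerd's garbage collector
// is free to clean up the resources we created.
func withLease(ctx context.Context, client *containerd.Client) (context.Context, func(context.Context) error, error) {
	return client.WithLease(ctx,
		leases.WithRandomID(),
		leases.WithExpiration(24*time.Hour),
	)
}
```

The caller would typically `defer done(ctx)` so the lease is released when the program exits.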
* pull a container into a namespace | Franck Cuny | 2022-06-11 | 4 | -0/+975
Pull an image provided as an argument to the program. We're only interested in images for Linux/amd64 at the moment. We set up a default namespace for containerd named `c2vm`. Images that are pulled by containerd will be stored inside that namespace. Once the program is built it can be run like this:

```
; sudo ./c2vm -container docker.io/library/redis:latest
pulled docker.io/library/redis:latest (38667897 bytes)
```

And the image is indeed in the namespace:

```
; sudo ctr -n c2vm images ls -q
docker.io/library/redis:latest
```
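A hedged sketch of what that pull looks like with containerd's Go client; the socket path and the exact options are assumptions, not a copy of the repo's code:

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	container := flag.String("container", "", "image reference to pull")
	flag.Parse()

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Everything we pull is stored under the dedicated c2vm namespace.
	ctx := namespaces.WithNamespace(context.Background(), "c2vm")

	// Restrict the pull to linux/amd64, the only platform we care about.
	image, err := client.Pull(ctx, *container,
		containerd.WithPlatform("linux/amd64"),
		containerd.WithPullUnpack,
	)
	if err != nil {
		log.Fatal(err)
	}

	size, err := image.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", image.Name(), size)
}
```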
* doc: update README | Franck Cuny | 2022-06-11 | 2 | -1/+15
* Add README.md, LICENSE.txt | Franck Cuny | 2022-06-11 | 2 | -0/+21
* fix(fcuny/monkey): remove unneeded files | Franck Cuny | 2022-05-29 | 2 | -21/+0
Change-Id: If26166f29f9b519b87e288b514d2c603ca9b4413
* readme: convert to org-mode | Franck Cuny | 2021-05-10 | 2 | -1/+3
* lint: fix a few issues | Franck Cuny | 2021-05-10 | 3 | -1/+4
* git: ignore binary for the REPL | Franck Cuny | 2021-05-10 | 1 | -0/+1
* repl: support a simple REPL for some early testing | franck cuny | 2020-01-11 | 2 | -0/+41
The REPL reads the input, sends it to the lexer, and prints the tokens to STDOUT. For now nothing else is done, since we don't parse the tokens yet.
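A sketch of such a REPL loop, following the common interpreter-book layout; the package paths (`monkey/lexer`, `monkey/token`) and names like `lexer.New` and `token.EOF` are assumptions about this repo:

```go
package repl

import (
	"bufio"
	"fmt"
	"io"

	"monkey/lexer"
	"monkey/token"
)

const PROMPT = ">> "

// Start reads a line, runs it through the lexer, and prints every token
// until EOF; nothing is parsed yet.
func Start(in io.Reader, out io.Writer) {
	scanner := bufio.NewScanner(in)
	for {
		fmt.Fprint(out, PROMPT)
		if !scanner.Scan() {
			return
		}
		l := lexer.New(scanner.Text())
		for tok := l.NextToken(); tok.Type != token.EOF; tok = l.NextToken() {
			fmt.Fprintf(out, "%+v\n", tok)
		}
	}
}
```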
* lexer: support tokens for equal and not equal. | franck cuny | 2020-01-11 | 2 | -2/+39
The tokens for equal (`==`) and not equal (`!=`) are composed of two characters. We introduce a new helper (`peekChar`) that we use when we encounter the token `=` or `!` to see if this is a token composed of two characters. Add some tests to ensure they are parsed correctly.
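The helper is likely a one-character lookahead plus a branch in the `=` and `!` cases of the lexer's switch, roughly like the fragment below; field names (`input`, `readPosition`, `ch`) and helpers (`newToken`, `token.EQ`, `token.NOT_EQ`) assume the usual Monkey layout:

```go
// peekChar looks at the next character without consuming it; 0 signals
// the end of the input.
func (l *Lexer) peekChar() byte {
	if l.readPosition >= len(l.input) {
		return 0
	}
	return l.input[l.readPosition]
}

// Excerpt from NextToken: when we see `=` or `!`, peek ahead to decide
// between the one- and two-character token.
switch l.ch {
case '=':
	if l.peekChar() == '=' {
		ch := l.ch
		l.readChar()
		tok = token.Token{Type: token.EQ, Literal: string(ch) + string(l.ch)}
	} else {
		tok = newToken(token.ASSIGN, l.ch)
	}
case '!':
	if l.peekChar() == '=' {
		ch := l.ch
		l.readChar()
		tok = token.Token{Type: token.NOT_EQ, Literal: string(ch) + string(l.ch)}
	} else {
		tok = newToken(token.BANG, l.ch)
	}
}
```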
* token: add tokens for equal and not equal. | franck cuny | 2020-01-11 | 1 | -0/+3
* lexer: test the new keywords are parsed correctly. | franck cuny | 2020-01-11 | 1 | -3/+25
Ensure that the new keywords added (`if`, `else`, `true`, `false`, `return`) are parsed correctly.
* token: support more keywords | franck cuny | 2020-01-11 | 1 | -2/+13
Add support for a few more keywords (`true`, `false`, `if`, `else`, `return`). All keywords are grouped together in the constant declaration.
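After this change the keyword table probably looks like the sketch below; `LookupIdent` is named at a later commit in this log, but the constant names and `TokenType` are assumptions following the usual Monkey layout:

```go
// keywords maps reserved words to their token types; all keywords live in
// one table next to the grouped constants.
var keywords = map[string]TokenType{
	"fn":     FUNCTION,
	"let":    LET,
	"true":   TRUE,
	"false":  FALSE,
	"if":     IF,
	"else":   ELSE,
	"return": RETURN,
}

// LookupIdent returns the keyword's token type when ident is a reserved
// word, and IDENT otherwise.
func LookupIdent(ident string) TokenType {
	if tok, ok := keywords[ident]; ok {
		return tok
	}
	return IDENT
}
```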
* token: rewrite documentation for `LookupIdent`. | franck cuny | 2020-01-11 | 1 | -3/+4
* lexer: delete redundant test. | franck cuny | 2020-01-11 | 1 | -32/+1
The test `TestNextTokenBasic` was not testing anything that `TestNextTokenMonkey` was not already testing. Rename `TestNextTokenMonkey` to `TestNextToken` for clarity.
* Makefile: add a Makefile | franck cuny | 2020-01-11 | 1 | -0/+4
For now, automate running the tests.
* lexer: support more operator tokens. | franck cuny | 2020-01-11 | 2 | -1/+31
Support the operator tokens that were added to our tokenizer. This also adds a few more tests to ensure we handle them correctly.
* token: support more operator tokens | franck cuny | 2020-01-11 | 1 | -3/+10
Support additional tokens for operators (`-`, `*`, etc.). This change only adds the tokens to the list of constants, and groups all the operator-related tokens together.
* lexer: initial lexer | franck cuny | 2020-01-11 | 2 | -0/+218
The initial lexer for the monkey language. We only support a small subset at this stage. We have some simple tests to ensure that we can lex some small snippets, and that the minimum set of tokens we need is supported correctly.
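The core of such a lexer is a cursor over the input string. A sketch of the state and its advance function, assuming the conventional interpreter-book structure rather than this repo's exact code:

```go
package lexer

// Lexer walks the input one character at a time.
type Lexer struct {
	input        string
	position     int  // index of the current character
	readPosition int  // index of the next character to read
	ch           byte // character under examination
}

// New primes the lexer so ch points at the first character.
func New(input string) *Lexer {
	l := &Lexer{input: input}
	l.readChar()
	return l
}

// readChar advances the cursor, using 0 (NUL) to signal end of input.
func (l *Lexer) readChar() {
	if l.readPosition >= len(l.input) {
		l.ch = 0
	} else {
		l.ch = l.input[l.readPosition]
	}
	l.position = l.readPosition
	l.readPosition++
}
```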
* token: initial tokenizer. | franck cuny | 2020-01-11 | 1 | -0/+48
This is the initial tokenizer for the monkey language. For now we recognize a limited number of tokens. We only have two keywords at this stage: `fn` and `let`. `fn` is used to create functions, while `let` is used for assigning variables. The other tokens are mostly there to parse the source code and recognize things like brackets, parentheses, etc.
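Given the description, the token package likely starts out close to this sketch; the constant names are assumptions based on the commit message, not the repo's actual file:

```go
package token

// TokenType is a string so new token kinds are cheap to add and easy to print.
type TokenType string

// Token pairs a type with the literal text that produced it.
type Token struct {
	Type    TokenType
	Literal string
}

const (
	ILLEGAL = "ILLEGAL"
	EOF     = "EOF"

	IDENT = "IDENT" // identifiers: add, x, y, ...
	INT   = "INT"   // integer literals

	ASSIGN = "="
	PLUS   = "+"

	COMMA     = ","
	SEMICOLON = ";"
	LPAREN    = "("
	RPAREN    = ")"
	LBRACE    = "{"
	RBRACE    = "}"

	// The only two keywords at this stage.
	FUNCTION = "FUNCTION"
	LET      = "LET"
)
```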
* go.mod: create the module 'monkey' | franck cuny | 2020-01-11 | 1 | -0/+3
The project is named monkey; we add a mod file to ensure that the tooling / dependencies are set up correctly when we import various modules in this project.
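The module file itself is likely minimal, something like the following; the Go version is a guess based on the commit date:

```go
module monkey

go 1.13
```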
* Add README.md, LICENSE.txt | franck cuny | 2019-12-29 | 2 | -0/+21