author     Franck Cuny <franck@fcuny.net>  2024-07-03 16:32:51 -0700
committer  Franck Cuny <franck@fcuny.net>  2024-07-03 16:32:51 -0700
commit     d481251d4148a9e90cf71aa1c11a8f8e077336a4 (patch)
tree       a5dd8993718db7cfc7b13ef2a24a3fa2e15b582f
parent     simplify the layout (diff)
download   fcuny.net-d481251d4148a9e90cf71aa1c11a8f8e077336a4.tar.gz
some more cleanup
-rw-r--r--  README.md                                          1
-rw-r--r--  archetypes/default.md                              1
-rw-r--r--  content/blog/1password-ssh-agent.md               13
-rw-r--r--  content/blog/git-link-and-sourcegraph.md          48
-rw-r--r--  content/blog/git-link-and-sourcegraph.org         49
-rw-r--r--  content/blog/google-doc-failure.md                45   (renamed from content/blog/google-doc-failure.org)
-rw-r--r--  content/blog/leaving-twitter.md                    6   (renamed from content/blog/leaving-twitter.org)
-rw-r--r--  content/blog/nix-raid-systemd-boot.md              7
-rw-r--r--  content/blog/no-ssh-to-prod.md                     7
-rw-r--r--  content/blog/tailscale-docker-https.md           119
-rw-r--r--  content/blog/tailscale-docker-https.org          121
-rw-r--r--  content/notes/containerd-to-firecracker.md        55
-rw-r--r--  content/notes/cpu-power-management.md             27
-rw-r--r--  content/notes/making-sense-intel-amd-cpus.md      88
-rw-r--r--  content/notes/stuff-about-pcie.md                125
-rw-r--r--  content/notes/working-with-go.md                  35
-rw-r--r--  content/notes/working-with-nix.md                 13
-rw-r--r--  layouts/_default/baseof.html                       4
-rw-r--r--  layouts/_default/single.html                      36
-rw-r--r--  layouts/index.html                                64
-rw-r--r--  layouts/partials/head.html                        23
-rw-r--r--  static/css/custom.css                             36
-rw-r--r--  static/resume.html                               465
-rw-r--r--  treefmt.nix                                        1
24 files changed, 738 insertions(+), 651 deletions(-)
diff --git a/README.md b/README.md
index ece2df0..e2874e9 100644
--- a/README.md
+++ b/README.md
@@ -7,4 +7,3 @@ The dependencies are managed with nix. Running `nix run` will start the hugo ser
 ## Deploy
 
 You can deploy the site by running `nix run .#deploy`.
-
diff --git a/archetypes/default.md b/archetypes/default.md
index 7ce2f1a..fdccff8 100644
--- a/archetypes/default.md
+++ b/archetypes/default.md
@@ -2,4 +2,3 @@
 title: "{{ replace .Name "-" " " | title }}"
 date: {{ .Date }}
 ---
-
diff --git a/content/blog/1password-ssh-agent.md b/content/blog/1password-ssh-agent.md
index 3571c19..0561137 100644
--- a/content/blog/1password-ssh-agent.md
+++ b/content/blog/1password-ssh-agent.md
@@ -1,11 +1,8 @@
 ---
 title: 1password's ssh agent and nix
 date: 2023-12-02
-tags:
-- ssh
-- git
-- nix
 ---
+
 [A while ago](https://blog.1password.com/1password-ssh-agent/), 1password introduced an SSH agent, and I've been using it for a while now. The following describes how I've configured it with `nix`. All my ssh keys are in 1password, and it's the only ssh agent I'm using at this point.
 
 ## Personal configuration
@@ -13,6 +10,7 @@ tags:
 I have a personal 1password account, and I've created a new SSH key in it that I use both to authenticate to github and to sign commits. I use [nix-darwin](http://daiderd.com/nix-darwin/) and [home-manager](https://github.com/nix-community/home-manager) to configure my personal machine.
 
 This is how I configure ssh:
+
 ```nix
 programs.ssh = {
   enable = true;
@@ -35,6 +33,7 @@ programs.ssh = {
 ```
 
 The configuration for git:
+
 ```nix
 { lib, pkgs, config, ... }:
 let
@@ -66,6 +65,7 @@ in
 In the repository with my nix configuration, I have a file `ssh-pubkeys.toml` that contains all the public ssh keys I keep track of (mine and a few other developers'). Keys from that file are used to create the file `~/.ssh/allowed_signers`, which is then used by `git` (for example `git log --show-signature`) when I want to ensure commits are signed with a valid key.
 
 `ssh-pubkeys.toml` looks like this:
+
 ```toml
 # yubikey key connected to the laptop
 ykey-laptop="ssh-ed25519 ..."
@@ -76,6 +76,7 @@ op="ssh-ed25519 ..."
 ```
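+
+Outside of nix, the equivalent plain-`git` wiring for signature verification would look something like this (a sketch; the path is an assumption, and in my setup home-manager generates the file):
+
+```bash
+# each line of allowed_signers maps an identity to a public key, e.g.:
+#   franck@fcuny.net ssh-ed25519 AAAA...
+git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
+# signature verification now works, e.g.:
+git log --show-signature -1
+```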
 
 And the following is for `zsh` so that I can use the agent for other commands that I run in the shell:
+
 ```nix
 programs.zsh.envExtra = ''
   # use 1password ssh agent
@@ -93,6 +94,7 @@ The work configuration is slightly different. Here I want to use both my work an
 I've imported my existing keys into 1password, and I keep the public keys on the disk: `$HOME/.ssh/work_gh.pub` and `$HOME/.ssh/personal_gh.pub`. I've removed the private keys from the disk.
 
 This is the configuration I use for work:
+
 ```nix
 programs.ssh = {
   enable = true;
@@ -133,6 +135,7 @@ programs.ssh = {
 ```
 
 I also create a configuration file for the 1password agent, to make sure I can use the keys from all the accounts:
+
 ```nix
  # Generate ssh agent config for 1Password - I want both my personal and work keys
  home.file.".config/1Password/ssh/agent.toml".text = ''
@@ -144,6 +147,7 @@ I also create a configuration file for the 1password agent, to make sure I can u
 ```
 
 Then the ssh configuration:
+
 ```nix
 { config, lib, pkgs, ... }:
 let
@@ -184,6 +188,7 @@ Now, when I clone a repository, instead of doing `git clone git@github.com/$WORK
 I've used yubikey to sign my commits for a while, but I find the 1password ssh agent a bit more convenient. The initial setup for yubikey was not as straightforward (granted, it's a one-time thing per key).
 
 On my personal machine, my `$HOME/.ssh` looks as follows:
+
 ```sh
 ➜  ~ ls -l ~/.ssh
 total 16
diff --git a/content/blog/git-link-and-sourcegraph.md b/content/blog/git-link-and-sourcegraph.md
new file mode 100644
index 0000000..affbe8b
--- /dev/null
+++ b/content/blog/git-link-and-sourcegraph.md
@@ -0,0 +1,48 @@
+---
+title: emacs' git-link and sourcegraph
+date: 2021-08-24
+---
+
+I use [sourcegraph](https://sourcegraph.com/) for searching code, and I sometimes need to share a link to the source code I'm looking at in a buffer. For this, the package [`git-link`](https://github.com/sshaw/git-link) is great.
+
+To integrate sourcegraph and `git-link`, the [documentation](https://github.com/sshaw/git-link#sourcegraph) recommends adding a remote entry named `sourcegraph` in the repository, like this:
+
+```bash
+git remote add sourcegraph https://sourcegraph.com/github.com/sshaw/copy-as-format
+```
+
+The next time you run `M-x git-link` in a buffer, it will use the URL associated with that remote. That works great, except that now you need to add this remote for every repository. Instead, for my usage, I came up with the following solution:
+
+    (use-package git-link
+      :ensure t
+      :after magit
+      :bind (("C-c g l" . git-link)
+             ("C-c g a" . git-link-commit))
+      :config
+      (defun fcuny/get-sg-remote-from-hostname (hostname)
+        (format "sourcegraph.<$domain>.<$tld>/%s" hostname))
+
+      (defun fcuny/git-link-work-sourcegraph (hostname dirname filename _branch commit start end)
+        ;;; For a given repository, build the proper link for sourcegraph.
+        ;;; Use the default branch of the repository instead of the
+        ;;; current one (we might be on a feature branch that is not
+        ;;; available on the remote).
+        (require 'magit-branch)
+        (let ((sg-base-url (fcuny/get-sg-remote-from-hostname hostname))
+              (main-branch (magit-main-branch)))
+          (git-link-sourcegraph sg-base-url dirname filename main-branch commit start end)))
+
+      (defun fcuny/git-link-commit-work-sourcegraph (hostname dirname commit)
+        (let ((sg-base-url (fcuny/get-sg-remote-from-hostname hostname)))
+          (git-link-commit-sourcegraph sg-base-url dirname commit)))
+
+      (add-to-list 'git-link-remote-alist '("twitter" fcuny/git-link-work-sourcegraph))
+      (add-to-list 'git-link-commit-remote-alist '("twitter" fcuny/git-link-commit-work-sourcegraph))
+
+      (setq git-link-open-in-browser 't))
+
+We use different domains to host various git repositories at work (e.g. `git.$work`, `gitfoo.$work`, etc.). Each of them maps to a different URI for sourcegraph (e.g. `sourcegraph.$work/gitfoo`).
+
+`git-link-commit-remote-alist` is an [association list](https://www.gnu.org/software/emacs/manual/html_node/elisp/Association-Lists.html) that takes a regular expression and a function. The custom function receives the hostname for the remote repository, which is then used to generate the URI for our sourcegraph instance. I then call `git-link-sourcegraph` replacing the hostname with the URI for sourcegraph.
+
+Now I can run `M-x git-link` in any repository where the host for the origin git repository matches `twitter`, without having to set up the custom remote first.
diff --git a/content/blog/git-link-and-sourcegraph.org b/content/blog/git-link-and-sourcegraph.org
deleted file mode 100644
index 3ab5f4b..0000000
--- a/content/blog/git-link-and-sourcegraph.org
+++ /dev/null
@@ -1,49 +0,0 @@
-#+TITLE: emacs' git-link and sourcegraph
-#+TAGS[]: emacs git
-#+DATE: <2021-08-24 Tue>
-
-I use [[https://sourcegraph.com/][sourcegraph]] for searching code, and I sometimes need to share a link to the source code I'm looking at in a buffer. For this, the package [[https://github.com/sshaw/git-link][=git-link=]] is great.
-
-To integrate sourcegraph and =git-link=, the [[https://github.com/sshaw/git-link#sourcegraph][documentation]] recommends adding a remote entry named =sourcegraph= in the repository, like this:
-
-#+begin_src sh
-git remote add sourcegraph https://sourcegraph.com/github.com/sshaw/copy-as-format
-#+end_src
-
-The next time you run =M-x git-link= in a buffer, it will use the URL associated with that remote. That's works great, except that now you need to add this for every repository. Instead, for my usage, I came up with the following solution:
-
-#+begin_src elisp
-(use-package git-link
-  :ensure t
-  :after magit
-  :bind (("C-c g l" . git-link)
-         ("C-c g a" . git-link-commit))
-  :config
-  (defun fcuny/get-sg-remote-from-hostname (hostname)
-    (format "sourcegraph.<$domain>.<$tld>/%s" hostname))
-
-  (defun fcuny/git-link-work-sourcegraph (hostname dirname filename _branch commit start end)
-    ;;; For a given repository, build the proper link for sourcegraph.
-    ;;; Use the default branch of the repository instead of the
-    ;;; current one (we might be on a feature branch that is not
-    ;;; available on the remote).
-    (require 'magit-branch)
-    (let ((sg-base-url (fcuny/get-sg-remote-from-hostname hostname))
-          (main-branch (magit-main-branch)))
-      (git-link-sourcegraph sg-base-url dirname filename main-branch commit start end)))
-
-  (defun fcuny/git-link-commit-work-sourcegraph (hostname dirname commit)
-    (let ((sg-base-url (fcuny/get-sg-remote-from-hostname hostname)))
-      (git-link-commit-sourcegraph sg-base-url dirname commit)))
-
-  (add-to-list 'git-link-remote-alist '("twitter" fcuny/git-link-work-sourcegraph))
-  (add-to-list 'git-link-commit-remote-alist '("twitter" fcuny/git-link-commit-work-sourcegraph))
-
-  (setq git-link-open-in-browser 't))
-#+end_src
-
-We use different domains to host various git repositories at work (e.g. =git.$work=, =gitfoo.$work=, etc). Each of them map to a different URI for sourcegraph (e.g. =sourcegraph.$work/gitfoo=).
-
-=git-link-commit-remote-alist= is an [[https://www.gnu.org/software/emacs/manual/html_node/elisp/Association-Lists.html][association list]] that takes a regular expression and a function. The custom function receives the hostname for the remote repository, which is then used to generate the URI for our sourcegraph instance. I then call =git-link-sourcegraph= replacing the hostname with the URI for sourcegraph.
-
-Now I can run =M-x git-link= in any repository where the host for the origin git repository matches =twitter= without having to setup the custom remote first.
diff --git a/content/blog/google-doc-failure.org b/content/blog/google-doc-failure.md
index b4d449d..8262767 100644
--- a/content/blog/google-doc-failure.org
+++ b/content/blog/google-doc-failure.md
@@ -1,22 +1,28 @@
-#+TITLE: Google Doc Failures
-#+TAGS[]: documentation process
-#+DATE: <2021-04-11 Sun>
+---
+title: Google Doc Failures
+date: 2021-04-11
+---
 
 In most use cases, Google Doc is an effective tool to create "write once, read never" documents.
 
-* Convenience
-Google Doc (GDoc from now on) is the most common way of writing and sharing documents at my current job. It's very easy to start a new document, even more since we can now point our browser to https://doc.new and start typing right away.
+## Convenience
 
-Like most of my co-workers, I use it frequently during the day. Some of these documents are draft for some communication that I want others to review before I share with a broader audience; it can be a [[https://en.wikipedia.org/wiki/Request_for_Comments][Request For Comments]] for a project; meeting notes for others to read; information that I need to capture during an incident or a debugging session; interviews notes; etc.
+Google Doc (GDoc from now on) is the most common way of writing and sharing documents at my current job. It's very easy to start a new document, even more so since we can now point our browser to <https://doc.new> and start typing right away.
+
+Like most of my co-workers, I use it frequently during the day. Some of these documents are drafts for communications that I want others to review before I share them with a broader audience; it can be a [Request For Comments](https://en.wikipedia.org/wiki/Request_for_Comments) for a project; meeting notes for others to read; information that I need to capture during an incident or a debugging session; interview notes; etc.
 
 I would not be surprised if the teams I work closely with generate 50 new documents each week.
-* ETOOMANYTABS
+
+## ETOOMANYTABS
+
 I have a tendency to have hundreds of open tabs in my browser during the week. A majority of these tabs are GDocs, and I think this is one of the true failures of the product. Why do I have so many tabs? There are mainly two reasons.
 
 The first reason is a problem with Chrome's UX itself: it happily lets me open the same URL as many times as I want in as many tabs, instead of sending me to the already opened tab if the document is loaded. It's not uncommon that I find the same document opened in 5 different tabs.
 
 The second reason, and the most important one, is that I know that if I need to read or comment on a doc and I close the tab, I'll likely never find that document again, or will completely forget about it.
-* Discoverability
+
+## Discoverability
+
 In 'the old days', you'd start a new document in Word or LibreOffice, and as you hit "save" for the first time, you have two decisions to make: how am I going to name that file, and where am I going to save it on disk.
 
 With GDoc these questions don't have to be answered: you don't have to name the file, and it does not matter where it lives. I likely have hundreds of docs named 'untitled' in my "drive". I also don't have to think about where they will live, because they are saved automatically for me. I'm sure there are hundreds of studies showing that these two simple steps are actually complex for many users and create useless friction (in which folder do I store it; should I organize the documents by team, years, projects; do I name it with the date and the current project; etc.).
@@ -25,19 +31,23 @@ GDoc being a Google product, it seems pretty obvious that they would come up wit
 
 Unfortunately, GDoc's search is really poor (and I'm being kind). By default most of us start by looking for some words we know are in the doc, maybe even in the title. But when working on multiple projects that are related to the same technology, you suddenly get hundreds of documents matching your query. It's unclear how the returned set is ordered (by date? by author? by some scoring that is invisible to me?).
 
-You can also search by owners, but here is another annoying bit: I think about owner as author, so I usually type =author:foo= before realizing it does not work. And that implies you already know who's the owner of the document. In the case of TDDs (Technical Design Document), I might know which team is behind it, but rarely who's the actual author.
+You can also search by owners, but here is another annoying bit: I think of the owner as the author, so I usually type `author:foo` before realizing it does not work. And that implies you already know who the owner of the document is. In the case of TDDs (Technical Design Documents), I might know which team is behind it, but rarely who the actual author is.
 
 I could search for the title, but I rarely remember or know the name of the document I'm looking for. I could also search by keywords, but when working on a project with tens of related documents, you have to open all the returned docs to see which one is the correct one.
 
 And then what about new members joining your team? They don't know which docs exist, who wrote them, and how they are named. They end up searching and hoping that something good will be returned.
-* Workflows
+
+## Workflows
+
 More and more we create workflows around these documents: some of the docs are TDDs that are going through reviews; others are decision documents that require input from multiple teams and are pending approval; others are road map documents that also go through some review process.
 
 As a result we create templates for all kinds of documents, usually with something like "draft → reviews → approved/rejected" at the top. We expect the owner of the doc to mark its status in bold, to help the reader understand what state the document is in. It's difficult to keep track of open actions and comments. Yes, there's a way to get a list of all of them, but it's not in an obvious place.
 
 As a result, some engineers in my team built an external dashboard with swim lanes which captures the state of each document. We add new documents with their URLs, note who the reviewers are, and we move the docs between the lanes. Now we have to operate a service and a database to keep track of the status of documents in GDoc.
-* Alternatives
-When it comes to technical document, I find that [[https://caitiem.com/2020/03/29/design-docs-markdown-and-git/][approach]] much more interesting. Some open source projects have adopted a similar workflow ([[https://github.com/kubernetes/enhancements/tree/master/keps][Kubernetes]], [[https://github.com/golang/proposal][Go]]).
+
+## Alternatives
+
+When it comes to technical documents, I find this [approach](https://caitiem.com/2020/03/29/design-docs-markdown-and-git/) much more interesting. Some open source projects have adopted a similar workflow ([Kubernetes](https://github.com/kubernetes/enhancements/tree/master/keps), [Go](https://github.com/golang/proposal)).
 
 A new document starts its life as a text file (using whatever markup language your team/company prefers). The document is submitted for review, and the people who need to be consulted are added as reviewers. They can now comment on the document, and the author can address the comments and mark them as resolved. It's clear which state the document is in: it's either in review, committed, or rejected. With this approach you also end up with a clear history: as time moves on, you can amend the document by submitting a change, and the change goes through the same process.
 
@@ -46,11 +56,12 @@ New comers will find the document in the repository, and if they want to see the
 One of the things that I think is critical is that all of this is done using the tools engineers are already using for their day-to-day job: a text editor, a version control system, a code review tool.
 
 There are obviously challenges with this approach too:
-+ *it's more heavy handed*: not every one likes to write in a text editor using a markup language. It can requires some time to learn or get used to the syntax
-+ *it's harder to integrate schema / visuals*: but having them checked in in the repository also improves the discoverability
+
-   **it's more heavy-handed**: not everyone likes to write in a text editor using a markup language. It can require some time to learn or get used to the syntax
-   **it's harder to integrate schemas / visuals**: but having them checked in to the repository also improves their discoverability
 
 It's also true that not all documents suffer the same discoverability challenges:
-+ meeting notes are usually linked to meeting invites (however if you were not part of the meeting, you end up with the same challenges to discover them)
-+ drafts for communications are usually not relevant once the communication has been sent
-+ interview notes are usually transferred to some tools for HR when the feedback is submitted
 
-   meeting notes are usually linked to meeting invites (however, if you were not part of the meeting, you end up with the same discoverability challenges)
+-   drafts for communications are usually not relevant once the communication has been sent
+-   interview notes are usually transferred to some tools for HR when the feedback is submitted
diff --git a/content/blog/leaving-twitter.org b/content/blog/leaving-twitter.md
index 9bc5027..cb8e5c8 100644
--- a/content/blog/leaving-twitter.org
+++ b/content/blog/leaving-twitter.md
@@ -1,5 +1,7 @@
-#+TITLE: Leaving Twitter
-#+DATE: <2022-01-15 Sat>
+---
+title: Leaving Twitter
+date: 2022-01-15
+---
 
 January 7th 2022 was my last day at Twitter, after more than 7 years at the company.
 
diff --git a/content/blog/nix-raid-systemd-boot.md b/content/blog/nix-raid-systemd-boot.md
index b31d29c..cd020f2 100644
--- a/content/blog/nix-raid-systemd-boot.md
+++ b/content/blog/nix-raid-systemd-boot.md
@@ -1,13 +1,14 @@
 ---
 title: Workaround md raid boot issue in NixOS 22.11
 date: 2023-01-10
-tags:
-- nixos
 ---
+
 For about a year now I've been running [NixOS](https://nixos.org/ "NixOS") on my personal machines. Yesterday I decided to go ahead and upgrade my NAS from NixOS 22.05 to [22.11](https://nixos.org/blog/announcements.html#nixos-22.11). On that machine, all the disks are encrypted, and there are two RAID0 devices. To unlock the drives, I log into the [SSH daemon running in `initrd`](https://nixos.wiki/wiki/Remote_LUKS_Unlocking), where I can type my passphrase. This time, however, instead of a prompt to unlock the disk, I see the following message:
+
 ```
 waiting for device /dev/disk/by-uuid/66c58a92-45fe-4b03-9be0-214ff67c177c to appear...
 ```
+
 followed by a timeout, and then I'm asked if I want to reboot the machine. I do reboot the machine, and the same thing happens.
 
 Now, and this is something really great about NixOS, I can boot to the previous generation (on 22.05), and this time I'm prompted for my password, the disks are unlocked, and I can log into my machine. This eliminates the possibility of a hardware failure! I also have a way to get a working machine to do more builds if needed. Knowing that I can easily switch from a broken generation to a working one gives me more confidence in making changes to my system.
@@ -17,10 +18,12 @@ I then reboot again in the broken build, and drop into a `busybox` shell. I look
 My laptop has a similar setup, but without RAID devices. I had already updated to 22.11, and had rebooted the laptop without issues. To be sure, I ran another update and rebooted, and I was able to unlock the drive and log into the machine without problem.
 
 From here I have enough information to start searching for an issue similar to this. I got pretty lucky, and the two issues I found were:
+
 - [Since systemd-251.3 mdadm doesn't start at boot time #196800](https://github.com/nixoS/nixpkgs/issues/196800)
 - [Won't boot when root on raid0 with boot.initrd.systemd=true #199551](https://github.com/nixoS/nixpkgs/issues/199551)
 
 The proposed solution was easy:
+
 ```diff
 @@ -43,7 +43,7 @@
    };
diff --git a/content/blog/no-ssh-to-prod.md b/content/blog/no-ssh-to-prod.md
index bc958fc..71ad595 100644
--- a/content/blog/no-ssh-to-prod.md
+++ b/content/blog/no-ssh-to-prod.md
@@ -1,10 +1,8 @@
 ---
 title: No SSH to production
 date: 2022-11-28
-tags:
-- operation
-- security
 ---
+
 It's not uncommon to hear talk about preventing engineers from SSHing to production machines. While it's a noble goal, I think most organizations are not ready for it in the short or even medium term.
 
 Why do we usually need to get a shell on a machine? The most common reason is to investigate a system that is behaving in an unexpected way, and we need to collect information, maybe using `strace`, `tcpdump`, `perf` or one of the BCC tools. Another reason might be to validate that a change deployed to a single machine is applied correctly, before rolling it out to a large portion of the fleet.
@@ -14,12 +12,13 @@ If you end up writing a postmortem after the investigation session, one of the r
 In most cases, I think we would be better off by breaking down the problem into smaller chunks, and focusing on iterative improvements. "No one gets to SSH to machines in production" is a poorly framed problem.
 
 What I think is better is to ask the following questions:
+
 - who has access to the machines
 - who actually SSH to the machines
 - why do they need to SSH to the machines
 - was the state of the machine altered after someone logged into the machine
 
-For the first question, I'd recommend that we don't create user accounts and don't distribute engineers' SSH public keys on the machines. I'd create an 'infra' user account, and use signed SSH certificates (for example with [vault](https://www.hashicorp.com/products/vault/ssh-with-vault)). Only engineers who *have* to have access should be able to sign their SSH key. That way you've limited the risks to a few engineers, and you have an audit trail of who requested access. You can build reports from these audit logs, to see how frequently engineer request access. For the 'infra' user, I'd limit it's privileges, and make sure it can only run commands required for debugging/troubleshooting.
+For the first question, I'd recommend that we don't create user accounts and don't distribute engineers' SSH public keys on the machines. I'd create an 'infra' user account, and use signed SSH certificates (for example with [vault](https://www.hashicorp.com/products/vault/ssh-with-vault)). Only engineers who _have_ to have access should be able to sign their SSH key. That way you've limited the risks to a few engineers, and you have an audit trail of who requested access. You can build reports from these audit logs, to see how frequently engineers request access. For the 'infra' user, I'd limit its privileges, and make sure it can only run commands required for debugging/troubleshooting.
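+
+With vault's SSH secrets engine, signing a key is a single call (a sketch; the `ssh` mount path and the `infra-debug` role name are hypothetical):
+
+```bash
+# exchange a public key for a short-lived certificate trusted by the hosts
+vault write -field=signed_key ssh/sign/infra-debug \
+  public_key=@"$HOME/.ssh/id_ed25519.pub" > ~/.ssh/id_ed25519-cert.pub
+```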
 
 Using Linux audit logs, you can also generate reports on which commands are run. You can learn why the engineers needed to get on the host, and it can be used by the SRE organization to build services and tools that will enable new capabilities (for example, a service to collect traces, or to do network captures remotely).
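+
+For example, `aureport` from the audit userspace tools can summarize which commands were executed (assuming `auditd` is configured to record `execve` events):
+
+```bash
+# summary of executed commands, built from the audit logs
+aureport -x --summary
+```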
 
diff --git a/content/blog/tailscale-docker-https.md b/content/blog/tailscale-docker-https.md
new file mode 100644
index 0000000..4a60fac
--- /dev/null
+++ b/content/blog/tailscale-docker-https.md
@@ -0,0 +1,119 @@
+---
+title: Tailscale, Docker and HTTPS
+date: 2021-12-29
+---
+
+I run a number of services in my home network. For the majority of these services, I don't want to make them available on the internet; I only want to be able to access them when I'm on my home network. However, sometimes I'm not at home and I still want to access them. So far I've been using plain [wireguard](https://www.wireguard.com/) to achieve this. While the initial configuration for wireguard is pretty simple, it starts to be a bit more cumbersome as I add more hosts/containers. It's also not easy to share keys with other folks if I want to give access to some of the machines or services. For that reason I decided to take a look at [tailscale](https://tailscale.com/).
+
+There are already a lot of articles about tailscale and how to use and configure it. Their [documentation](https://tailscale.com/kb/) is also pretty good, so I won't cover the initial setup.
+
+As stated above, I want to access some of my services that are running as docker containers from anywhere. For web services, I want to use them through HTTPS, with a valid certificate, and without having to remember on which port the service is listening. I also don't want to set up a PKI in my home lab for that (and I'm also not interested in configuring split DNS); instead I prefer to use [let's encrypt](https://letsencrypt.org/) with a proper subdomain that is unique for each service.
+
+The [tailscale documentation](https://tailscale.com/kb/1054/dns/) has two suggestions for this:
+
+-   use their magicDNS feature / split DNS
+-   setup a subdomain on a public domain
+
+Since I already have a public domain that I use for my home network, I decided to go with the second option (I'm also uncertain how to achieve my goal using magicDNS without running tailscale inside the container).
+
+The public domain I'm using is managed through [Google Cloud Domain](https://cloud.google.com/dns/docs/tutorials/create-domain-tutorial). I create a new record for the services I want to run (for example, `dash` for my instance of grafana), using the IP address from the tailscale node the service runs on (e.g. 100.83.51.12).
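+
+Creating such a record from the CLI could look like this (a sketch; the managed zone name `home-zone` is an assumption, and `example.net` is the placeholder domain used later in this post):
+
+    gcloud dns record-sets create dash.example.net. \
+      --zone=home-zone --type=A --ttl=300 \
+      --rrdatas=100.83.51.12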
+
+For routing the traffic I use [traefik](https://traefik.io/). The configuration for traefik looks like this:
+
+    global:
+      sendAnonymousUsage: false
+    providers:
+      docker:
+        exposedByDefault: false
+    entryPoints:
+      http:
+        address: ":80"
+      https:
+        address: ":443"
+    certificatesResolvers:
+      dash:
+        acme:
+          email: franck@fcuny.net
+          storage: acme.json
+          dnsChallenge:
+            provider: gcloud
+
+The important bit here is the `certificatesResolvers` part. I'll be using the [dnsChallenge](https://doc.traefik.io/traefik/user-guides/docker-compose/acme-dns/) instead of the [httpChallenge](https://doc.traefik.io/traefik/user-guides/docker-compose/acme-http/) to obtain the certificate from let's encrypt. For this to work, I need to specify the `provider` to be [gcloud](https://go-acme.github.io/lego/dns/gcloud/). I'll also need a service account (see [this doc](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application) to create it). I run `traefik` in a docker container, and the `systemd` unit file is below. The required bits for using the `dnsChallenge` with `gcloud` are:
+
+-   the environment variable `GCE_SERVICE_ACCOUNT_FILE`: it contains the credentials so that `traefik` can update the DNS record for the challenge
+-   the environment variable `GCE_PROJECT`: the name of the GCP project
+-   mounting the service account file inside the container (I store it on the host under `/data/containers/traefik/config/sa.json`)
+
+    [Unit]
+    Description=traefik proxy
+    Documentation=https://doc.traefik.io/traefik/
+    After=docker.service
+    Requires=docker.service
+
+    [Service]
+    Restart=on-failure
+    ExecStartPre=-/usr/bin/docker kill traefik
+    ExecStartPre=-/usr/bin/docker rm traefik
+    ExecStartPre=/usr/bin/docker pull traefik:latest
+
+    ExecStart=/usr/bin/docker run \
+      -p 80:80 \
+      -p 9080:8080 \
+      -p 443:443 \
+      --name=traefik \
+      -e GCE_SERVICE_ACCOUNT_FILE=/var/run/gcp-service-account.json \
+      -e GCE_PROJECT=gcp-super-project \
+      --volume=/data/containers/traefik/config/acme.json:/acme.json \
+      --volume=/data/containers/traefik/config/traefik.yml:/etc/traefik/traefik.yml:ro \
+      --volume=/data/containers/traefik/config/sa.json:/var/run/gcp-service-account.json \
+      --volume=/var/run/docker.sock:/var/run/docker.sock:ro \
+      traefik:latest
+    ExecStop=/usr/bin/docker stop traefik
+
+    [Install]
+    WantedBy=multi-user.target
+
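+With the unit in place (saved, say, as `/etc/systemd/system/traefik.service`; the path is an assumption), it's enabled like any other service:
+
+    sudo systemctl daemon-reload
+    sudo systemctl enable --now traefik.service
+    journalctl -u traefik.service -f    # watch the ACME challenge complete
+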
+As an example, I run [grafana](https://grafana.com/) on my home network to view metrics from the various containers / hosts. Let's pretend I use `example.net` as my domain. I want to be able to access `grafana` via <https://dash.example.net>. Here's the `systemd` unit configuration I use for this:
+
+    [Unit]
+    Description=Grafana in a docker container
+    Documentation=https://grafana.com/docs/
+    After=docker.service
+    Requires=docker.service
+
+    [Service]
+    Restart=on-failure
+    RuntimeDirectory=grafana
+    ExecStartPre=-/usr/bin/docker kill grafana-server
+    ExecStartPre=-/usr/bin/docker rm grafana-server
+    ExecStartPre=-/usr/bin/docker pull grafana/grafana:latest
+
+    ExecStart=/usr/bin/docker run \
+      -p 3000:3000 \
+      -e TZ='America/Los_Angeles' \
+      --name grafana-server \
+      -v /data/containers/grafana/etc/grafana:/etc/grafana \
+      -v /data/containers/grafana/var/lib/grafana:/var/lib/grafana \
+      -v /data/containers/grafana/var/log/grafana:/var/log/grafana \
+      --user=grafana \
+      --label traefik.enable=true \
+      --label traefik.http.middlewares.grafana-https-redirect.redirectscheme.scheme=https \
+      --label traefik.http.middlewares.grafana-https-redirect.redirectscheme.permanent=true \
+      --label traefik.http.routers.grafana-http.rule=Host(`dash.example.net`) \
+      --label traefik.http.routers.grafana-http.entrypoints=http \
+      --label traefik.http.routers.grafana-http.service=grafana-svc \
+      --label traefik.http.routers.grafana-http.middlewares=grafana-https-redirect \
+      --label traefik.http.routers.grafana-https.rule=Host(`dash.example.net`) \
+      --label traefik.http.routers.grafana-https.entrypoints=https \
+      --label traefik.http.routers.grafana-https.tls=true \
+      --label traefik.http.routers.grafana-https.tls.certresolver=dash \
+      --label traefik.http.routers.grafana-https.service=grafana-svc \
+      --label traefik.http.services.grafana-svc.loadbalancer.server.port=3000 \
+      grafana/grafana:latest
+
+    ExecStop=/usr/bin/docker stop grafana-server
+
+    [Install]
+    WantedBy=multi-user.target
+
+Now I can access my grafana instance via HTTPS (and <http://dash.example.net> would redirect to HTTPS) while my tailscale interface is up on the machine I'm using (e.g. my desktop or my phone).
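+
+A quick end-to-end check from a machine on the tailnet (a sketch):
+
+    curl -sI http://dash.example.net      # should redirect to HTTPS
+    curl -vI https://dash.example.net     # should show the let's encrypt certificate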
diff --git a/content/blog/tailscale-docker-https.org b/content/blog/tailscale-docker-https.org
deleted file mode 100644
index 14e4cf1..0000000
--- a/content/blog/tailscale-docker-https.org
+++ /dev/null
@@ -1,121 +0,0 @@
-#+TITLE: Tailscale, Docker and HTTPS
-#+TAGS[]: docker tailscale traefik
-#+DATE: <2021-12-29 Wed>
-
-I run a number of services in my home network. For the majority of these services, I don't want to make them available on the internet, I want to only be able to access them when I'm on my home network. However, sometimes I'm not at home and I still want to access them. So far I've been using plain [[https://www.wireguard.com/][wireguard]] to achieve this. While the initial configuration for wireguard is pretty simple, it starts to be a bit more cumbersome as I add more hosts/containers. It's also not easy to share keys with other folks if I want to give access to some of the machines or services. For that reason I decided to give a look at [[https://tailscale.com/][tailscale]].
-
-There's already a lot of articles about tailscale and how to use and configure it. Their [[https://tailscale.com/kb/][documentation]] is also pretty good, so I won't cover the initial setup.
-
-As stated above, I want to access some of my services that are running as docker containers from anywhere. For web services, I want to use them through HTTPS, with a valid certificate, and without having to remember on which port the service it's listening. I also don't want to setup a PKI in my home lab for that (and I'm also not interested in configuring split DNS), and instead I prefer to use [[https://letsencrypt.org/][let's encrypt]] with a proper subdomain that is unique for each service.
-
-The [[https://tailscale.com/kb/1054/dns/][tailscale documentation]] has two suggestions for this:
-- use their magicDNS feature / split DNS
-- setup a subdomain on a public domain
-
-Since I already have a public domain that I use for my home network, I decided to go with the second option (I'm also uncertain how to achieve my goal using magicDNS without running tailscale inside the container).
-
-The public domain I'm using is managed through [[https://cloud.google.com/dns/docs/tutorials/create-domain-tutorial][Google Cloud Domain]]. I create a new record for the services I want to run (for example, ~dash~ for my instance of grafana), using the IP address from the tailscale node the service runs on (e.g. 100.83.51.12).
-
-For routing the traffic I use [[https://traefik.io/][traefik]]. The configuration for traefik looks like this:
-#+begin_src yaml
-global:
-  sendAnonymousUsage: false
-providers:
-  docker:
-    exposedByDefault: false
-entryPoints:
-  http:
-    address: ":80"
-  https:
-    address: ":443"
-certificatesResolvers:
-  dash:
-    acme:
-      email: franck@fcuny.net
-      storage: acme.json
-      dnsChallenge:
-        provider: gcloud
-#+end_src
-
-The important bit here is the ~certificatesResolvers~ part. I'll be using the [[https://doc.traefik.io/traefik/user-guides/docker-compose/acme-dns/][dnsChallenge]] instead of the [[https://doc.traefik.io/traefik/user-guides/docker-compose/acme-http/][httpChallenge]] to obtain the certificate from let's encrypt. For this to work, I need to specify the ~provider~ to be [[https://go-acme.github.io/lego/dns/gcloud/][gcloud]]. I'll also need a service account (see [[https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application][this doc]] to create it). I run ~traefik~ in a docker container, and the ~systemd~ unit file is below. The required bits for using the ~dnsChallenge~ with ~gcloud~ are:
-- the environment variable ~GCP_SERVICE_ACCOUNT_FILE~: it contains the credentials so that ~traefik~ can update the DNS record for the challenge
-- the environment variable ~GCP_PROJECT~: the name of the GCP project
-- mounting the service account file inside the container (I store it on the host under ~/data/containers/traefik/config/sa.json~)
-
-#+begin_src systemd
-[Unit]
-Description=traefik proxy
-Documentation=https://doc.traefik.io/traefik/
-After=docker.service
-Requires=docker.service
-
-[Service]
-Restart=on-failure
-ExecStartPre=-/usr/bin/docker kill traefik
-ExecStartPre=-/usr/bin/docker rm traefik
-ExecStartPre=/usr/bin/docker pull traefik:latest
-
-ExecStart=/usr/bin/docker run \
-  -p 80:80 \
-  -p 9080:8080 \
-  -p 443:443 \
-  --name=traefik \
-  -e GCE_SERVICE_ACCOUNT_FILE=/var/run/gcp-service-account.json \
-  -e GCE_PROJECT= gcp-super-project \
-  --volume=/data/containers/traefik/config/acme.json:/acme.json \
-  --volume=/data/containers/traefik/config/traefik.yml:/etc/traefik/traefik.yml:ro \
-  --volume=/data/containers/traefik/config/sa.json:/var/run/gcp-service-account.json \
-  --volume=/var/run/docker.sock:/var/run/docker.sock:ro \
-  traefik:latest
-ExecStop=/usr/bin/docker stop traefik
-
-[Install]
-WantedBy=multi-user.target
-#+end_src
-
-As an example, I run [[https://grafana.com/][grafana]] on my home network to view metrics from the various containers / hosts. Let's pretend I use ~example.net~ as my domain. I want to be able to access ~grafana~ via https://dash.example.net. Here's the ~systemd~ unit configuration I use for this:
-
-#+begin_src systemd
-[Unit]
-Description=Grafana in a docker container
-Documentation=https://grafana.com/docs/
-After=docker.service
-Requires=docker.service
-
-[Service]
-Restart=on-failure
-RuntimeDirectory=grafana
-ExecStartPre=-/usr/bin/docker kill grafana-server
-ExecStartPre=-/usr/bin/docker rm grafana-server
-ExecStartPre=-/usr/bin/docker pull grafana/grafana:latest
-
-ExecStart=/usr/bin/docker run \
-  -p 3000:3000 \
-  -e TZ='America/Los_Angeles' \
-  --name grafana-server \
-  -v /data/containers/grafana/etc/grafana:/etc/grafana \
-  -v /data/containers/grafana/var/lib/grafana:/var/lib/grafana \
-  -v /data/containers/grafana/var/log/grafana:/var/log/grafana \
-  --user=grafana \
-  --label traefik.enable=true \
-  --label traefik.http.middlewares.grafana-https-redirect.redirectscheme.scheme=https \
-  --label traefik.http.middlewares.grafana-https-redirect.redirectscheme.permanent=true \
-  --label traefik.http.routers.grafana-http.rule=Host(`dash.example.net`) \
-  --label traefik.http.routers.grafana-http.entrypoints=http \
-  --label traefik.http.routers.grafana-http.service=grafana-svc \
-  --label traefik.http.routers.grafana-http.middlewares=grafana-https-redirect \
-  --label traefik.http.routers.grafana-https.rule=Host(`dash.example.net`) \
-  --label traefik.http.routers.grafana-https.entrypoints=https \
-  --label traefik.http.routers.grafana-https.tls=true \
-  --label traefik.http.routers.grafana-https.tls.certresolver=dash \
-  --label traefik.http.routers.grafana-https.service=grafana-svc \
-  --label traefik.http.services.grafana-svc.loadbalancer.server.port=3000 \
-  grafana/grafana:latest
-
-ExecStop=/usr/bin/docker stop unifi-controller
-
-[Install]
-WantedBy=multi-user.target
-#+end_src
-
-Now I can access my grafana instance via HTTPS (and http://dash.example.net would redirect to HTTPS) while my tailscale interface is up on the machine I'm using (e.g. my desktop or my phone).
diff --git a/content/notes/containerd-to-firecracker.md b/content/notes/containerd-to-firecracker.md
index 52ab201..9716735 100644
--- a/content/notes/containerd-to-firecracker.md
+++ b/content/notes/containerd-to-firecracker.md
@@ -1,11 +1,6 @@
 ---
 title: containerd to firecracker
 date: 2021-05-15
-tags:
-  - linux
-  - firecracker
-  - containerd
-  - go
 ---
 
 fly.io had an [interesting
@@ -34,7 +29,7 @@ code is available [here](https://git.fcuny.net/containerd-to-vm/).
 documentation](https://pkg.go.dev/github.com/containerd/containerd).
 From the main page we can see the following example to create a client.
 
-``` go
+```go
 import (
   "github.com/containerd/containerd"
   "github.com/containerd/containerd/cio"
@@ -49,7 +44,7 @@ func main() {
 
 And pulling an image is also pretty straightforward:
 
-``` go
+```go
 image, err := client.Pull(context, "docker.io/library/redis:latest")
 ```
 
@@ -60,7 +55,7 @@ and there's a few methods associated with it.
 As `containerd` has namespaces, it's possible to specify the namespace
 we want to use when working with the API:
 
-``` go
+```go
 ctx := namespaces.WithNamespace(context.Background(), "c2vm")
 image, err := client.Pull(ctx, "docker.io/library/redis:latest")
 ```
@@ -68,7 +63,7 @@ image, err := client.Pull(ctx, "docker.io/library/redis:latest")
 The image will now be stored in the `c2vm` namespace. We can verify this
 with:
 
-``` bash
+```bash
 ; sudo ctr -n c2vm images ls -q
 docker.io/library/redis:latest
 ```
@@ -89,7 +84,7 @@ There's two commons ways to pre-allocate space to a file: `dd` and
 First, to be safe, we create a temporary file, and use `renameio` to
 handle the renaming (I recommend reading the doc of the module).
 
-``` go
+```go
 f, err := renameio.TempFile("", rawFile)
 if err != nil {
     return err
@@ -101,7 +96,7 @@ Now to do the pre-allocation (we're making an assumption here that 2GB
 is enough, we can likely check what's the size of the container before
 doing this):
 
-``` go
+```go
 command := exec.Command("fallocate", "-l", "2G", f.Name())
 if err := command.Run(); err != nil {
     return fmt.Errorf("fallocate error: %s", err)
@@ -110,7 +105,7 @@ if err := command.Run(); err != nil {
 
 We can now convert that file to ext4:
 
-``` go
+```go
 command = exec.Command("mkfs.ext4", "-F", f.Name())
 if err := command.Run(); err != nil {
     return fmt.Errorf("mkfs.ext4 error: %s", err)
@@ -119,13 +114,13 @@ if err := command.Run(); err != nil {
 
 Now we can safely rename the temporary file to the file we want:
 
-``` go
+```go
 f.CloseAtomicallyReplace()
 ```
 
 And to mount that file
 
-``` go
+```go
 command = exec.Command("mount", "-o", "loop", rawFile, mntDir)
 if err := command.Run(); err != nil {
     return fmt.Errorf("mount error: %s", err)
@@ -137,7 +132,7 @@ if err := command.Run(); err != nil {
 Extracting the container using `containerd` is pretty simple. Here's the
 function that I use:
 
-``` go
+```go
 func extract(ctx context.Context, client *containerd.Client, image containerd.Image, mntDir string) error {
     manifest, err := images.Manifest(ctx, client.ContentStore(), image.Target(), platform)
     if err != nil {
@@ -185,10 +180,10 @@ Let's refer to the [specification for the
 config](https://github.com/opencontainers/image-spec/blob/master/config.md).
 The elements that are of interest to me are:
 
--   `Env`, which is array of strings. They contain the environment
-    variables that likely we need to run the program
--   `Cmd`, which is also an array of strings. If there's no entry point
-    provided, this is what is used.
+- `Env`, which is an array of strings. They contain the environment
+  variables that we likely need to run the program
+- `Cmd`, which is also an array of strings. If there's no entry point
+  provided, this is what is used.
 
 At this point, for this experiment, I'm going to ignore exposed ports,
 working directory, and the user.
@@ -196,7 +191,7 @@ working directory, and the user.
 First we need to read the config from the container. This is easily
 done:
 
-``` go
+```go
 config, err := images.Config(ctx, client.ContentStore(), image.Target(), platform)
 if err != nil {
     return err
@@ -205,7 +200,7 @@ if err != nil {
 
 This needs to be read and decoded:
 
-``` go
+```go
 configBlob, err := content.ReadBlob(ctx, client.ContentStore(), config)
 var imageSpec ocispec.Image
 json.Unmarshal(configBlob, &imageSpec)
@@ -221,7 +216,7 @@ for now) with the environment variables and the command.
 
 Naively, this can be done like this:
 
-``` go
+```go
 initPath := filepath.Join(mntDir, "init.sh")
 f, err := renameio.TempFile("", initPath)
 if err != nil {
@@ -262,7 +257,7 @@ we're done manipulating the image.
 
 Within a function, we can do the following:
 
-``` go
+```go
 command := exec.Command("/usr/bin/e2fsck", "-p", "-f", rawFile)
 if err := command.Run(); err != nil {
     return fmt.Errorf("e2fsck error: %s", err)
@@ -277,7 +272,7 @@ if err := command.Run(); err != nil {
 I'm using `docker.io/library/redis:latest` for my test, and I end up
 with the following size for the image:
 
-``` bash
+```bash
 -rw------- 1 root root 216M Apr 22 14:50 /tmp/fcuny.img
 ```
 
@@ -289,7 +284,7 @@ with the process, the firecracker team has [documented how to do
 this](https://github.com/firecracker-microvm/firecracker/blob/main/docs/rootfs-and-kernel-setup.md#creating-a-kernel-image).
 In my case all I had to do was:
 
-``` bash
+```bash
 git clone https://github.com/torvalds/linux.git linux.git
 cd linux.git
 git checkout v5.8
@@ -322,7 +317,7 @@ this to work we need to install the `tc-redirect-tap` CNI plugin
 Based on that documentation, I'll start with the following configuration
 in `/etc/cni/conf.d/50-c2vm.conflist`:
 
-``` json
+```json
 {
   "name": "c2vm",
   "cniVersion": "0.4.0",
@@ -365,7 +360,7 @@ The first thing is to configure the list of devices. In our case we will
 have a single device, the boot drive that we've created in the previous
 step.
 
-``` go
+```go
 devices := make([]models.Drive, 1)
 devices[0] = models.Drive{
     DriveID:      firecracker.String("1"),
@@ -377,7 +372,7 @@ devices[0] = models.Drive{
 
 The next step is to configure the VM:
 
-``` go
+```go
 fcCfg := firecracker.Config{
     LogLevel:        "debug",
     SocketPath:      firecrackerSock,
@@ -403,7 +398,7 @@ fcCfg := firecracker.Config{
 
 Finally we can create the command to start and run the VM:
 
-``` go
+```go
 command := firecracker.VMCommandBuilder{}.
     WithBin(firecrackerBinary).
     WithSocketPath(fcCfg.SocketPath).
@@ -670,7 +665,7 @@ The end result:
 
 We can do a quick test with the following:
 
-``` bash
+```bash
 ; sudo docker run -it --rm redis redis-cli -h 192.168.128.9
 192.168.128.9:6379> get foo
 (nil)
diff --git a/content/notes/cpu-power-management.md b/content/notes/cpu-power-management.md
index bcb14b7..bbbd2e6 100644
--- a/content/notes/cpu-power-management.md
+++ b/content/notes/cpu-power-management.md
@@ -1,11 +1,6 @@
 ---
 title: CPU power management
 date: 2023-01-22
-tags:
-  - harwdare
-  - amd
-  - intel
-  - cpu
 ---
 
 ## Maximum power consumption of a processor
@@ -17,30 +12,34 @@ The Intel CPU has 80 cores while the AMD one has 128 cores. For Intel, this give
 The TDP is the average power the processor can sustain forever, and it is the power the cooling solution needs to be designed to handle for reliability. The TDP is measured under worst-case load, with all cores running at 1.8GHz (the base frequency).
 
 ## C-State vs. P-State
+
 We have two ways to control the power consumption:
+
 - disabling a subsystem
 - decreasing the voltage
 
 This is done by using:
-- *C-State* is for optimization of power consumption
-- *P-State* is for optimization of the voltage and CPU frequency
 
-*C-State* means that one or more subsystem are executing nothing, one or more subsystem of the CPU is at idle, powered down.
+- _C-State_ is for optimization of power consumption
+- _P-State_ is for optimization of the voltage and CPU frequency
 
-*P-State* the subsystem is actually running, but it does not require full performance, so the voltage and/or frequency it operates is decreased.
+_C-State_ means that one or more subsystems are executing nothing: one or more subsystems of the CPU are idle, powered down.
+
+_P-State_ means the subsystem is actually running, but it does not require full performance, so the voltage and/or frequency at which it operates is decreased.
 
 The states are numbered starting from 0. The higher the number, the more power is saved. `C0` means no power saving. `P0` means maximum performance (thus maximum frequency, voltage and power used).
 
 ### C-state
 
 A timeline of power saving using C states is as follows:
+
 1. normal operation is at C0
 2. the clock of an idle core is stopped (C1)
 3. the local caches (L1/L2) of the core are flushed and the core is powered down (C3)
 4. when all the cores are powered down, the shared cache of the package (L3/LLC) is flushed and the whole package/CPU can be powered down
 
 | state | description                                                                                                                 |
-|-------|-----------------------------------------------------------------------------------------------------------------------------|
+| ----- | --------------------------------------------------------------------------------------------------------------------------- |
 | C0    | operating state                                                                                                             |
 | C1    | a state where the processor is not executing instructions, but can return to an executing state essentially instantaneously |
 | C2    | a state where the processor maintains all software-visible state, but may take longer to wake up                            |
@@ -65,6 +64,7 @@ Running `cpuid` we can find all the supported C-states for a processor (Intel(R)
 ```
 
 If I interpret this correctly:
+
 - there's one `C0`
 - there's two sub C-states for `C1`
 - there's two sub C-states for `C3`
@@ -78,7 +78,7 @@ P-states allow to change the voltage and frequency of the CPU core to decrease t
 A P-state refers to different frequency-voltage pairs. The highest operating point is the maximum state which is `P0`.
 
 | state | description                                |
-|-------|--------------------------------------------|
+| ----- | ------------------------------------------ |
 | P0    | maximum power and frequency                |
 | P1    | less than P0, voltage and frequency scaled |
 | P2    | less than P1, voltage and frequency scaled |
@@ -88,7 +88,7 @@ A P-state refers to different frequency-voltage pairs. The highest operating poi
 The ACPI Specification defines the following four global "Gx" states and six sleep "Sx" states
 
 | GX   | name           | Sx   | description                                                                       |
-|------|----------------|------|-----------------------------------------------------------------------------------|
+| ---- | -------------- | ---- | --------------------------------------------------------------------------------- |
 | `G0` | working        | `S0` | The computer is running and executing instructions                                |
 | `G1` | sleeping       | `S1` | Processor caches are flushed and the CPU stop executing instructions              |
 | `G1` | sleeping       | `S2` | CPU powered off, dirty caches flushed to RAM                                      |
@@ -102,10 +102,12 @@ When we are in any C-states, we are in `G0`.
 ## Speed Select Technology
 
 [Speed Select Technology](https://en.wikichip.org/wiki/intel/speed_select_technology) is a set of power management controls that allows a system administrator to customize per-core performance. By configuring the performance of specific cores and affinitizing workloads to those cores, higher software performance can be achieved. SST supports multiple types of customization:
+
 - Frequency Prioritization (SST-CP) - allows specific cores to clock higher by reducing the frequency of cores running lower-priority software.
 - Speed Select Base Freq (SST-BF) - allows specific cores to run higher base frequency (P1) by reducing the base frequencies (P1) of other cores.
 
 ## Turbo Boost
+
 TDP is the maximum power consumption the CPU can sustain. When the power consumption is low (e.g. many cores are in P1+ states), the CPU frequency can be increased beyond base frequency to take advantage of the headroom, since this condition does not increase the power consumption beyond TDP.
 
 Modern CPUs are heavily reliant on "Turbo (Intel)" or "Boost (AMD)" ([TBT](https://en.wikichip.org/wiki/intel/turbo_boost_technology) and [TBTM](https://en.wikichip.org/wiki/intel/turbo_boost_max_technology)).
@@ -113,4 +115,5 @@ Modern CPUs are heavily reliant on "Turbo(Intel)" or "boost (AMD)" ([TBT](https:
 In our case, the Intel 6122 is rated at 1.8GHz, A.K.A "stamp speed". If we want to run the CPU at a consistent frequency, we'd have to choose 1.8GHz or below, and we'd lose significant performance if we were to disable turbo/boost.
 
 ### Turbo boost max
+
 During the manufacturing process, Intel is able to test each die and determine which cores possess the best overclocking capabilities. That information is then stored in the CPU in order from best to worst.
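+
+On CPUs with Turbo Boost Max, this ranking is visible from Linux: the favored cores report a higher maximum frequency in sysfs (a quick check, assuming the `cpufreq` driver is loaded):
+
+```bash
+grep . /sys/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq
+```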
diff --git a/content/notes/making-sense-intel-amd-cpus.md b/content/notes/making-sense-intel-amd-cpus.md
index 2d7bb8a..75392c6 100644
--- a/content/notes/making-sense-intel-amd-cpus.md
+++ b/content/notes/making-sense-intel-amd-cpus.md
@@ -1,10 +1,6 @@
 ---
 title: Making sense of Intel and AMD CPUs naming
 date: 2021-12-29
-tags:
-  - amd
-  - intel
-  - cpu
 ---
 
 ## Intel
@@ -14,22 +10,24 @@ tags:
 The lineup for the core family is i3, i5, i7 and i9. As of January 2023, the current generation is [Raptor Lake](https://en.wikipedia.org/wiki/Raptor_Lake) (13th generation).
 
 The brand modifiers are:
--   **i3**: laptops/low-end desktop
--   **i5**: mainstream users
--   **i7**: high-end users
--   **i9**: enthusiast users
+
+- **i3**: laptops/low-end desktop
+- **i5**: mainstream users
+- **i7**: high-end users
+- **i9**: enthusiast users
 
 How to read a SKU? Let's use the [i7-12700K](https://ark.intel.com/content/www/us/en/ark/products/134594/intel-core-i712700k-processor-25m-cache-up-to-5-00-ghz.html) processor:
--   **i7**: high end users
--   **12**: 12th generation
--   **700**: SKU digits, usually assigned in the order the processors
-    are developed
--   **K**: unlocked
+
+- **i7**: high end users
+- **12**: 12th generation
+- **700**: SKU digits, usually assigned in the order the processors
+  are developed
+- **K**: unlocked
 
 List of suffixes:
 
 | suffix | meaning                                |
-|--------|----------------------------------------|
+| ------ | -------------------------------------- |
 | G..    | integrated graphics                    |
 | E      | embedded                               |
 | F      | require discrete graphic card          |
@@ -48,12 +46,12 @@ List of suffixes:
 
 #### Raptor Lake (13th generation)
 
-Raptor lake is an hybrid architecture, featuring both P-cores (performance cores) and E-cores (efficient cores), similar to Alder lake. P-cores are based on the [Raptor cove](https://en.wikipedia.org/wiki/Golden_Cove#Raptor_Cove) architecture, while the E-cores are based on the [Gracemont](https://en.wikipedia.org/wiki/Gracemont_(microarchitecture)) architecture (same as for Alder lake).
+Raptor lake is a hybrid architecture, featuring both P-cores (performance cores) and E-cores (efficient cores), similar to Alder lake. P-cores are based on the [Raptor cove](https://en.wikipedia.org/wiki/Golden_Cove#Raptor_Cove) architecture, while the E-cores are based on the [Gracemont](<https://en.wikipedia.org/wiki/Gracemont_(microarchitecture)>) architecture (same as for Alder lake).
 
 Available processors:
 
 | model      | p-cores | e-cores | GHz (base) | GHz (boosted) | TDP      |
-|------------|---------|---------|------------|---------------|----------|
+| ---------- | ------- | ------- | ---------- | ------------- | -------- |
 | i9-13900KS | 8 (16)  | 16      | 3.2/2.4    | 6/4.3         | 150/253W |
 | i9-13900K  | 8 (16)  | 16      | 3.0/2.0    | 5.8/4.3       | 125/253W |
 | i9-13900KF | 8 (16)  | 16      | 3.0/2.0    | 5.8/4.3       | 125/253W |
@@ -71,19 +69,19 @@ Available processors:
 For the Raptor Lake generation, as for the Alder lake generation, the supported socket is the [LGA 1700](https://en.wikipedia.org/wiki/LGA_1700).
 
 List of Raptor lake chipsets:
-| feature                     | b760[^7] | h770[^8] | z790[^9] |
+| feature                     | b760[^7] | h770[^8] | z790[^9] |
 |-----------------------------|----------|----------|----------|
-| P and E cores over clocking | no       | no       | yes      |
-| memory over clocking        | yes      | yes      | yes      |
-| DMI 4 lanes                 | 4        | 8        | 8        |
-| chipset PCIe 5.0 lanes      |          |          |          |
-| chipset PCIe 4.0 lanes      |          |          |          |
-| chipset PCIe 3.0 lanes      |          |          |          |
-| SATA 3.0 ports              | up to 4  | up to 8  | up to 8  |
+| P and E cores overclocking  | no       | no       | yes      |
+| memory overclocking         | yes      | yes      | yes      |
+| DMI 4 lanes                 | 4        | 8        | 8        |
+| chipset PCIe 5.0 lanes      |          |          |          |
+| chipset PCIe 4.0 lanes      |          |          |          |
+| chipset PCIe 3.0 lanes      |          |          |          |
+| SATA 3.0 ports              | up to 4  | up to 8  | up to 8  |
 
 #### Alder Lake (12th generation)
 
-Alder lake is an hybrid architecture, featuring both P-cores (performance cores) and E-cores (efficient cores). P-cores are based on the [Golden Cove](https://en.wikipedia.org/wiki/Golden_Cove) architecture, while the E-cores are based on the [Gracemont](https://en.wikipedia.org/wiki/Gracemont_(microarchitecture)) architecture.
+Alder lake is a hybrid architecture, featuring both P-cores (performance cores) and E-cores (efficient cores). P-cores are based on the [Golden Cove](https://en.wikipedia.org/wiki/Golden_Cove) architecture, while the E-cores are based on the [Gracemont](<https://en.wikipedia.org/wiki/Gracemont_(microarchitecture)>) architecture.
 
 This is a [good article](https://www.anandtech.com/show/16881/a-deep-dive-into-intels-alder-lake-microarchitectures/2) to read about this model. Inside the processor there's a microcontroller that monitors what each thread is doing. The OS scheduler can use this to decide which kind of core (performance or efficiency) a thread should be scheduled on.
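+
+On recent kernels the two core types are exposed in sysfs, which is a quick way to see the P-core/E-core split (a sketch; these files only exist on hybrid CPUs):
+
+```bash
+cat /sys/devices/cpu_core/cpus   # P-cores
+cat /sys/devices/cpu_atom/cpus   # E-cores
+```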
 
@@ -92,7 +90,7 @@ As of December 2021 this is not yet properly supported by the Linux kernel.
 Available processors:
 
 | model      | p-cores | e-cores | GHz (base) | GHz (boosted) | TDP  |
-|------------|---------|---------|------------|---------------|------|
+| ---------- | ------- | ------- | ---------- | ------------- | ---- |
 | i9-12900K  | 8 (16)  | 8       | 3.2/2.4    | 5.1/3.9       | 241W |
 | i9-12900KF | 8 (16)  | 8       | 3.2/2.4    | 5.1/3.9       | 241W |
 | i7-12700K  | 8 (16)  | 4       | 3.6/2.7    | 4.9/3.8       | 190W |
@@ -100,15 +98,15 @@ Available processors:
 | i5-12600K  | 6 (12)  | 4       | 3.7/2.8    | 4.9/3.6       | 150W |
 | i5-12600KF | 6 (12)  | 4       | 3.7/2.8    | 4.9/3.6       | 150W |
 
--   support DDR4 and DDR5 (up to DDR5-4800)
--   support PCIe 4.0 and 5.0 (16 PCIe 5.0 and 4 PCIe 4.0)
+- support DDR4 and DDR5 (up to DDR5-4800)
+- support PCIe 4.0 and 5.0 (16 PCIe 5.0 and 4 PCIe 4.0)
 
 For the Alder Lake generation, the supported socket is the [LGA 1700](https://en.wikipedia.org/wiki/LGA_1700).
 
 For now the only supported chipsets for Alder Lake are:
 
 | feature                     | z690[^1] | h670[^2] | b660[^3] | h610[^4] | q670[^6] | w680[^5] |
-|-----------------------------|----------|----------|----------|----------|----------|----------|
+| --------------------------- | -------- | -------- | -------- | -------- | -------- | -------- |
 | P and E cores overclocking  | yes      | no       | no       | no       | no       | yes      |
 | memory overclocking         | yes      | yes      | yes      | no       | -        | yes      |
 | DMI 4 lanes                 | 8        | 8        | 4        | 4        | 8        | 8        |
@@ -121,37 +119,38 @@ For now only supported chipset for Alder Lake are:
 Xeon is Intel's brand of processors designed for servers and workstations. The most recent generations are:
 
 | name            | availability |
-|-----------------|--------------|
+| --------------- | ------------ |
 | Skylake         | 2015         |
 | Cascade lake    | 2019         |
 | Cooper lake     | 2022         |
 | Sapphire rapids | 2023         |
 
 The following brand identifiers are used:
--   platinium
--   gold
--   silver
--   bronze
+
+- platinum
+- gold
+- silver
+- bronze
 
 ## AMD
 
 ### Ryzen
 
-There are multiple generation for this brand of processors. They are based on the [zen micro architecture](https://en.wikipedia.org/wiki/Zen_(microarchitecture)).
+There are multiple generations of this brand of processors. They are based on the [zen micro architecture](<https://en.wikipedia.org/wiki/Zen_(microarchitecture)>).
 
 The current (as of January 2023) generation is Ryzen 7000.
 
 The brand modifiers are:
 
--   ryzen 3: entry level
--   ryzen 5: mainstream
--   ryzen 9: high end performance
--   ryzen 9: enthusiast
+- ryzen 3: entry level
+- ryzen 5: mainstream
+- ryzen 7: high end performance
+- ryzen 9: enthusiast
 
 List of suffixes:
 
 | suffix | meaning                                                                         |
-|--------|---------------------------------------------------------------------------------|
+| ------ | ------------------------------------------------------------------------------- |
 | X      | high performance                                                                |
 | G      | integrated graphics                                                             |
 | T      | power optimized lifecycle                                                       |
@@ -184,7 +183,7 @@ The threadripper processors use the TR4, sTRX4 and sWRX8 sockets.
 Zen 3 was released in November 2020.
 
 | model         | cores   | GHz (base) | GHz (boosted) | PCIe lanes | TDP  |
-|---------------|---------|------------|---------------|------------|------|
+| ------------- | ------- | ---------- | ------------- | ---------- | ---- |
 | ryzen 5 5600x | 6 (12)  | 3.7        | 4.6           | 24         | 65W  |
 | ryzen 7 5800  | 8 (16)  | 3.4        | 4.6           | 24         | 65W  |
 | ryzen 7 5800x | 8 (16)  | 3.8        | 4.7           | 24         | 105W |
@@ -192,8 +191,8 @@ Zen 3 was released in November 2020.
 | ryzen 9 5900x | 12 (24) | 3.7        | 4.8           | 24         | 105W |
 | ryzen 9 5950x | 16 (32) | 3.4        | 4.9           | 24         | 105W |
 
--   support PCIe 3.0 and PCIe 4.0 (except for the G series)
--   only support DDR4 (up to DDR4-3200)
+- support PCIe 3.0 and PCIe 4.0 (except for the G series)
+- only support DDR4 (up to DDR4-3200)
 
 ### Zen 4
 
@@ -204,7 +203,7 @@ Zen 4 was released in September 2022.
 - all desktop processors feature 2 x 4 lane PCIe interfaces (mostly for M.2 storage devices)
 
 | model           | cores   | GHz (base) | GHz (boosted) | TDP  |
-|-----------------|---------|------------|---------------|------|
+| --------------- | ------- | ---------- | ------------- | ---- |
 | ryzen 5 7600x   | 6 (12)  | 4.7        | 5.3           | 105W |
 | ryzen 5 7600    | 6 (12)  | 3.8        | 5.1           | 65W  |
 | ryzen 7 7800X3D | 8 (16)  |            | 5.0           | 120W |
@@ -216,7 +215,6 @@ Zen 4 was released in September 2022.
 | ryzen 9 7950X   | 16 (32) | 4.5        | 5.7           | 170W |
 | ryzen 9 7950X3D | 16 (32) | 4.2        | 5.7           | 120W |
 
-
 [^1]: https://ark.intel.com/content/www/us/en/ark/products/218833/intel-z690-chipset.html
 
 [^2]: https://www.intel.com/content/www/us/en/products/sku/218831/intel-h670-chipset/specifications.html
diff --git a/content/notes/stuff-about-pcie.md b/content/notes/stuff-about-pcie.md
index b783924..b540d24 100644
--- a/content/notes/stuff-about-pcie.md
+++ b/content/notes/stuff-about-pcie.md
@@ -1,9 +1,6 @@
 ---
 title: Stuff about PCIe
 date: 2022-01-03
-tags:
-  - linux
-  - harwdare
 ---
 
 ## Speed
@@ -12,7 +9,7 @@ The most common versions are 3 and 4, while 5 is starting to be
 available with newer Intel processors.
 
 | ver | encoding  | transfer rate | x1         | x2          | x4         | x8         | x16         |
-|-----|-----------|---------------|------------|-------------|------------|------------|-------------|
+| --- | --------- | ------------- | ---------- | ----------- | ---------- | ---------- | ----------- |
 | 1   | 8b/10b    | 2.5GT/s       | 250MB/s    | 500MB/s     | 1GB/s      | 2GB/s      | 4GB/s       |
 | 2   | 8b/10b    | 5.0GT/s       | 500MB/s    | 1GB/s       | 2GB/s      | 4GB/s      | 8GB/s       |
 | 3   | 128b/130b | 8.0GT/s       | 984.6 MB/s | 1.969 GB/s  | 3.94 GB/s  | 7.88 GB/s  | 15.75 GB/s  |
@@ -76,12 +73,14 @@ An easy way to see the PCIe topology is with `lspci`:
                \-18.7  Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
 
 Now, how do we read this?
+
 ```
 +-[10000:00]-+-02.0-[01]----00.0  Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller]
 |            \-03.0-[02]----00.0  Intel Corporation NVMe Datacenter SSD [3DNAND, Beta Rock Controller]
 ```
 
 This is a lot of information; let's break it down:
+
 - The first part, in brackets (`[10000:00]`), is the domain and the bus.
 - The second part (`02.0`) is the device and function of the bridge on that bus.
 - The third part, in brackets (`[01]`), is the bus behind that bridge, and the trailing `00.0` is the device and function of the endpoint on it.
@@ -171,18 +170,18 @@ lspci -v -s 0000:01:00.0
 
 A few things to note from this output:
 
--   **GT/s** is the number of transactions supported (here, 8 billion
-    transactions / second). This is gen3 controller (gen1 is 2.5 and
-    gen2 is 5)xs
--   **LNKCAP** is the capabilities which were communicated, and
-    **LNKSTAT** is the current status. You want them to report the same
-    values. If they don't, you are not using the hardware as it is
-    intended (here I'm assuming the hardware is intended to work as a
-    gen3 controller). In case the device is downgraded, the output will
-    be like this: `LnkSta: Speed 2.5GT/s (downgraded), Width x16 (ok)`
--   **width** is the number of lanes that can be used by the device
-    (here, we can use 4 lanes)
--   **MaxPayload** is the maximum size of a PCIe packet
+- **GT/s** is the number of transfers supported (here, 8 billion
+  transfers / second). This is a gen3 controller (gen1 is 2.5 and
+  gen2 is 5)
+- **LnkCap** is the capabilities the device advertised, and
+  **LnkSta** is the current link status. You want them to report the
+  same values. If they don't, you are not using the hardware as it is
+  intended (here I'm assuming the hardware is intended to work as a
+  gen3 controller). In case the device is downgraded, the output will
+  be like this: `LnkSta: Speed 2.5GT/s (downgraded), Width x16 (ok)`
+  (see the quick check after this list)
+- **width** is the number of lanes that can be used by the device
+  (here, we can use 4 lanes)
+- **MaxPayload** is the maximum size of a PCIe packet
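+
+A quick check for the downgraded case described above is to compare the two fields directly (using the same device address as in the output above):
+
+```bash
+# LnkCap is what the device can do, LnkSta is what was negotiated
+sudo lspci -vv -s 0000:01:00.0 | grep -E 'LnkCap:|LnkSta:'
+```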
 
 ## Debugging
 
@@ -213,53 +212,53 @@ that have not been completed).
                     CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                     AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
 
--   The Uncorrectable Error Status (UESta) reports error status of
-    individual uncorrectable error sources (no bits are set above):
-    -   Data Link Protocol Error (DLP)
-    -   Surprise Down Error (SDES)
-    -   Poisoned TLP (TLP)
-    -   Flow Control Protocol Error (FCP)
-    -   Completion Timeout (CmpltTO)
-    -   Completer Abort (CmpltAbrt)
-    -   Unexpected Completion (UnxCmplt)
-    -   Receiver Overflow (RxOF)
-    -   Malformed TLP (MalfTLP)
-    -   ECRC Error (ECRC)
-    -   Unsupported Request Error (UnsupReq)
-    -   ACS Violation (ACSViol)
--   The Uncorrectable Error Mask (UEMsk) controls reporting of
-    individual errors by the device to the PCIe root complex. A masked
-    error (bit set) is not recorded or reported. Above shows no errors
-    are being masked)
--   The Uncorrectable Severity controls whether an individual error is
-    reported as a Non-fatal (clear) or Fatal error (set).
--   The Correctable Error Status reports error status of individual
-    correctable error sources: (no bits are set above)
-    -   Receiver Error (RXErr)
-    -   Bad TLP status (BadTLP)
-    -   Bad DLLP status (BadDLLP)
-    -   Replay Timer Timeout status (Timeout)
-    -   REPLAY NUM Rollover status (Rollover)
-    -   Advisory Non-Fatal Error (NonFatalIErr)
--   The Correctable Erro Mask (CEMsk) controls reporting of individual
-    errors by the device to the PCIe root complex. A masked error (bit
-    set) is not reported to the RC. Above shows that Advisory Non-Fatal
-    Errors are being masked - this bit is set by default to enable
-    compatibility with software that does not comprehend Role-Based
-    error reporting.
--   The Advanced Error Capabilities and Control Register (AERCap)
-    enables various capabilities (The above indicates the device capable
-    of generating ECRC errors but they are not enabled):
-    -   First Error Pointer identifies the bit position of the first
-        error reported in the Uncorrectable Error Status register
-    -   ECRC Generation Capable (GenCap) indicates if set that the
-        function is capable of generating ECRC
-    -   ECRC Generation Enable (GenEn) indicates if ECRC generation is
-        enabled (set)
-    -   ECRC Check Capable (ChkCap) indicates if set that the function
-        is capable of checking ECRC
-    -   ECRC Check Enable (ChkEn) indicates if ECRC checking is enabled
+- The Uncorrectable Error Status (UESta) reports error status of
+  individual uncorrectable error sources (no bits are set above):
+  - Data Link Protocol Error (DLP)
+  - Surprise Down Error (SDES)
+  - Poisoned TLP (TLP)
+  - Flow Control Protocol Error (FCP)
+  - Completion Timeout (CmpltTO)
+  - Completer Abort (CmpltAbrt)
+  - Unexpected Completion (UnxCmplt)
+  - Receiver Overflow (RxOF)
+  - Malformed TLP (MalfTLP)
+  - ECRC Error (ECRC)
+  - Unsupported Request Error (UnsupReq)
+  - ACS Violation (ACSViol)
+- The Uncorrectable Error Mask (UEMsk) controls reporting of
+  individual errors by the device to the PCIe root complex. A masked
+  error (bit set) is not recorded or reported. Above shows no errors
+  are being masked.
+- The Uncorrectable Severity controls whether an individual error is
+  reported as a Non-fatal (clear) or Fatal error (set).
+- The Correctable Error Status reports error status of individual
+  correctable error sources: (no bits are set above)
+  - Receiver Error (RXErr)
+  - Bad TLP status (BadTLP)
+  - Bad DLLP status (BadDLLP)
+  - Replay Timer Timeout status (Timeout)
+  - REPLAY NUM Rollover status (Rollover)
+  - Advisory Non-Fatal Error (NonFatalIErr)
+- The Correctable Error Mask (CEMsk) controls reporting of individual
+  errors by the device to the PCIe root complex. A masked error (bit
+  set) is not reported to the RC. Above shows that Advisory Non-Fatal
+  Errors are being masked - this bit is set by default to enable
+  compatibility with software that does not comprehend Role-Based
+  error reporting.
+- The Advanced Error Capabilities and Control Register (AERCap)
+  enables various capabilities (the above indicates the device is
+  capable of generating ECRC errors, but they are not enabled):
+  - First Error Pointer identifies the bit position of the first
+    error reported in the Uncorrectable Error Status register
+  - ECRC Generation Capable (GenCap) indicates if set that the
+    function is capable of generating ECRC
+  - ECRC Generation Enable (GenEn) indicates if ECRC generation is
+    enabled (set)
+  - ECRC Check Capable (ChkCap) indicates if set that the function
+    is capable of checking ECRC
+  - ECRC Check Enable (ChkEn) indicates if ECRC checking is enabled
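+
+In practice, when any of these errors fire they also land in the kernel log, so a quick first check is:
+
+```bash
+# AER events are logged by the kernel with an "AER" prefix
+sudo dmesg | grep -iE 'AER|PCIe Bus Error'
+```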
 
 ## Compute Express Link (CXL)
 
-[Compute Express Link](https://en.wikipedia.org/wiki/Compute_Express_Link) (CXL) is an open standard for high-speed central processing unit (CPU)-to-device and CPU-to-memory connections, designed for high performance data center computers. The standard is built on top of the PCIe physical  interface with protocols for I/O, memory, and cache coherence.
+[Compute Express Link](https://en.wikipedia.org/wiki/Compute_Express_Link) (CXL) is an open standard for high-speed central processing unit (CPU)-to-device and CPU-to-memory connections, designed for high performance data center computers. The standard is built on top of the PCIe physical interface with protocols for I/O, memory, and cache coherence.
diff --git a/content/notes/working-with-go.md b/content/notes/working-with-go.md
index af7bf20..fbfba88 100644
--- a/content/notes/working-with-go.md
+++ b/content/notes/working-with-go.md
@@ -1,12 +1,9 @@
 ---
 title: Working with Go
 date: 2021-08-05
-tags:
-  - emacs
-  - go
 ---
 
-*This document assumes go version \>= 1.16*.
+_This document assumes go version >= 1.16_.
 
 ## Go Modules
 
@@ -22,22 +19,22 @@ create two files: `go.mod` and `go.sum`.
 
 In the `go.mod` file you'll find (a minimal example follows the list):
 
--   the module import path (prefixed with `module`)
--   the list of dependencies (within `require`)
--   the version of go to use for the module
+- the module import path (prefixed with `module`)
+- the list of dependencies (within `require`)
+- the version of go to use for the module
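+
+A minimal sketch, reusing the module path from the examples below:
+
+```bash
+$ go mod init golang.fcuny.net/m
+go: creating new go.mod: module golang.fcuny.net/m
+$ cat go.mod
+module golang.fcuny.net/m
+
+go 1.16
+```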
 
 ### Versioning
 
 To bump the version of a module:
 
-``` bash
+```bash
 $ git tag v1.2.3
 $ git push --tags
 ```
 
 Then as a user:
 
-``` bash
+```bash
 $ go get -d golang.fcuny.net/m@v1.2.3
 ```
 
@@ -52,7 +49,7 @@ workspace (`git clone <module URL>`).
 
 Edit the `go.mod` file to add
 
-``` go
+```go
 replace <module URL> => <path of the local checkout>
 ```
 
@@ -85,14 +82,14 @@ There's a few special URLs (better documentation
 [here](https://golang.org/ref/mod#goproxy-protocol)):
 
 | path                  | description                                                                              |
-|-----------------------|------------------------------------------------------------------------------------------|
+| --------------------- | ---------------------------------------------------------------------------------------- |
 | $mod/@v/list          | Returns the list of known versions - there's one version per line and it's in plain text |
 | $mod/@v/$version.info | Returns metadata about a version in JSON format                                          |
 | $mod/@v/$version.mod  | Returns the `go.mod` file for that version                                               |
 
 For example, looking at the most recent versions for `gopls`:
 
-``` bash
+```bash
 ; curl -s -L https://proxy.golang.org/golang.org/x/tools/gopls/@v/list|sort -r|head
 v0.7.1-pre.2
 v0.7.1-pre.1
@@ -108,7 +105,7 @@ v0.6.8-pre.1
 
 Let's check the details for the most recent version
 
-``` bash
+```bash
 ; curl -s -L https://proxy.golang.org/golang.org/x/tools/gopls/@v/list|sort -r|head
 v0.7.1-pre.2
 v0.7.1-pre.1
@@ -124,7 +121,7 @@ v0.6.8-pre.1
 
 And let's look at the content of the `go.mod` for that version too:
 
-``` bash
+```bash
 ; curl -s -L https://proxy.golang.org/golang.org/x/tools/gopls/@v/v0.7.1-pre.2.mod
 module golang.org/x/tools/gopls
 
@@ -183,7 +180,7 @@ starting point.
 
 The configuration is straightforward; this is what I use:
 
-``` elisp
+```elisp
 ;; for go's LSP I want to use staticcheck and placeholders for completion
 (customize-set-variable 'eglot-workspace-configuration
                         '((:gopls .
@@ -206,7 +203,7 @@ flymake, eldoc.
 [pprof](https://github.com/google/pprof) is a tool to visualize
 performance data. Let's start with the following test:
 
-``` go
+```go
 package main
 
 import (
@@ -228,7 +225,7 @@ func BenchmarkStringJoin(b *testing.B) {
 Let's run a benchmark with
 `go test . -bench=. -cpuprofile cpu_profile.out`:
 
-``` go
+```go
 goos: linux
 goarch: amd64
 pkg: golang.fcuny.net/m
@@ -241,7 +238,7 @@ ok      golang.fcuny.net/m      1.327s
 And let's take a look at the profile with
 `go tool pprof cpu_profile.out`
 
-``` bash
+```bash
 File: m.test
 Type: cpu
 Time: Aug 15, 2021 at 3:01pm (PDT)
@@ -265,7 +262,7 @@ Showing top 10 nodes out of 41
 
 We can get a breakdown of the data for our module:
 
-``` bash
+```bash
 (pprof) list golang.fcuny.net
 Total: 1.17s
 ROUTINE ======================== golang.fcuny.net/m.BenchmarkStringJoin in /home/fcuny/workspace/gobench/app_test.go
diff --git a/content/notes/working-with-nix.md b/content/notes/working-with-nix.md
index 3d208e4..7da8ec7 100644
--- a/content/notes/working-with-nix.md
+++ b/content/notes/working-with-nix.md
@@ -1,9 +1,6 @@
 ---
 title: working with nix
 date: 2022-05-10
-tags:
-  - linux
-  - nix
 ---
 
 ## the `nix develop` command
@@ -17,7 +14,7 @@ sub-commands.
 they map as follows:
 
 | phase          | default to     | command                   | note |
-|----------------|----------------|---------------------------|------|
+| -------------- | -------------- | ------------------------- | ---- |
 | configurePhase | `./configure`  | `nix develop --configure` |      |
 | buildPhase     | `make`         | `nix develop --build`     |      |
 | checkPhase     | `make check`   | `nix develop --check`     |      |
@@ -40,7 +37,7 @@ phase](https://github.com/NixOS/nixpkgs/blob/fb7287e6d2d2684520f756639846ee07f62
 
 ## `buildInputs` or `nativeBuildInputs`
 
--   `nativeBuildInputs` is intended for architecture-dependent
-    build-time-only dependencies
--   `buildInputs` is intended for architecture-independent
-    build-time-only dependencies
+- `nativeBuildInputs` is intended for dependencies that run on the
+  build machine at build time (compilers, code generators, ...)
+- `buildInputs` is intended for dependencies of the produced output:
+  libraries it links against or needs at run time, built for the
+  host platform
diff --git a/layouts/_default/baseof.html b/layouts/_default/baseof.html
index 6406b4e..b2fbbc1 100644
--- a/layouts/_default/baseof.html
+++ b/layouts/_default/baseof.html
@@ -2,8 +2,6 @@
 <html lang="en">
   {{ partial "head.html" . }}
   <body>
-    <main>
-     {{ block "main" . }}{{ end }}
-    </main>
+    <main>{{ block "main" . }}{{ end }}</main>
   </body>
 </html>
diff --git a/layouts/_default/single.html b/layouts/_default/single.html
index a360c93..2919b8f 100644
--- a/layouts/_default/single.html
+++ b/layouts/_default/single.html
@@ -1,30 +1,28 @@
 {{ define "main" }}
 
 <div>
+  <h1>{{ .Title }}</h1>
 
-<h1>{{ .Title }}</h1>
+  <div id="meta">
+    {{- $pub := .Date.Format "Jan 2, 2006" -}}
+    {{- $mod := "" -}}
 
-<div id="meta">
-  {{- $pub := .Date.Format "Jan 2, 2006" -}}
-  {{- $mod := "" -}}
-  {{- if (not .GitInfo) }}
-  {{- $mod = .Lastmod.Format "Jan 2, 2006" -}}
-  {{ else }}
-  {{- $mod = .Page.GitInfo.CommitDate.Format "Jan 2, 2006" -}}
-  {{ end -}}
-  {{ if eq $pub $mod }}
-  <span id="meta_date">posted on {{ $pub }}</span>
-  {{ else }}
-  <span id="meta_date">posted on {{ $pub }} - last modified {{ $mod }}</span>
-  {{ end }}
-</div>
+    {{- if (not .GitInfo) }}
+      {{- $mod = .Lastmod.Format "Jan 2, 2006" -}}
+    {{ else }}
+      {{- $mod = .Page.GitInfo.CommitDate.Format "Jan 2, 2006" -}}
+    {{ end -}}
 
-<article>
-{{ .Content }}
-</article>
+    {{ if eq $pub $mod }}
+    <span id="meta_date">posted on {{ $pub }}</span>
+    {{ else }}
+    <span id="meta_date">posted on {{ $pub }} - last modified {{ $mod }}</span>
+    {{ end }}
+  </div>
 
-<a href="/">↑ back home</a>
+  <article>{{ .Content }}</article>
 
+  <a href="/">↑ back home</a>
 </div>
 
 {{ end }}
diff --git a/layouts/index.html b/layouts/index.html
index e93f139..cbf00cc 100644
--- a/layouts/index.html
+++ b/layouts/index.html
@@ -2,27 +2,49 @@
 
 <p>My name is Franck Cuny and this is my little corner on the web.</p>
 
-<p>I currently work as a <a href="https://en.wikipedia.org/wiki/Site_reliability_engineering">Site Reliability Engineer</a> (SRE) at <a href="https://www.roblox.com/" target="_blank">Roblox</a>. Previously I worked as a SRE at <a href="https://twitter.com/TwitterEng" target="_blank">Twitter</a>, and my focus was on the infrastructure.</p>
+<p>
+  I currently work as a
+  <a href="https://en.wikipedia.org/wiki/Site_reliability_engineering"
+    >Site Reliability Engineer</a
+  >
+  (SRE) at <a href="https://www.roblox.com/" target="_blank">Roblox</a>.
+  Previously I worked as an SRE at
+  <a href="https://twitter.com/TwitterEng" target="_blank">Twitter</a>, where my
+  focus was on the infrastructure.
+</p>
 
-<p>I'm interested in building sustainable teams, improving the management and operation of large infrastructure, and to work with different teams to implement best practices around reliability and security.</p>
+<p>
+  I'm interested in building sustainable teams, improving the management and
+  operation of large infrastructure, and working with different teams to
+  implement best practices around reliability and security.
+</p>
 
 <ul>
-  <li>Some of my code is shared on <a href="https://github.com/fcuny">GitHub</a></li>
-  <li>Email: <a href="mailto:franck@fcuny.net" title="franck@fcuny.net">franck@fcuny.net</a></li>
+  <li>
+    Some of my code is shared on <a href="https://github.com/fcuny">GitHub</a>
+  </li>
+  <li>
+    Email:
+    <a href="mailto:franck@fcuny.net" title="franck@fcuny.net"
+      >franck@fcuny.net</a
+    >
+  </li>
 </ul>
 
 <h2>Articles</h2>
 
 <article>
   <ul>
-  {{ range (where .Site.Pages "Section" "blog") }}
-  {{ range .Pages }}
-  <li>
-    <span class="content-title"><a href="{{ .Permalink }}">{{ .Title }}</a></span>
-    <span class="content-date"><em>posted on {{ .Date.Format "Jan 2, 2006" }}</em></span>
-  </li>
-  {{ end }}
-  {{ end }}
+    {{ range (where .Site.Pages "Section" "blog") }} {{ range .Pages }}
+    <li>
+      <span class="content-title"
+        ><a href="{{ .Permalink }}">{{ .Title }}</a></span
+      >
+      <span class="content-date"
+        ><em>posted on {{ .Date.Format "Jan 2, 2006" }}</em></span
+      >
+    </li>
+    {{ end }} {{ end }}
   </ul>
 </article>
 
@@ -30,14 +52,16 @@
 
 <article>
   <ul>
-  {{ range (where .Site.Pages "Section" "notes") }}
-  {{ range .Pages }}
-  <li>
-    <span class="content-title"><a href="{{ .Permalink }}">{{ .Title }}</a></span>
-    <span class="content-date"><em>posted on {{ .Date.Format "Jan 2, 2006" }}</em></span>
-  </li>
-  {{ end }}
-  {{ end }}
+    {{ range (where .Site.Pages "Section" "notes") }} {{ range .Pages }}
+    <li>
+      <span class="content-title"
+        ><a href="{{ .Permalink }}">{{ .Title }}</a></span
+      >
+      <span class="content-date"
+        ><em>posted on {{ .Date.Format "Jan 2, 2006" }}</em></span
+      >
+    </li>
+    {{ end }} {{ end }}
   </ul>
 </article>
 
diff --git a/layouts/partials/head.html b/layouts/partials/head.html
index 39ef8aa..1fa0b42 100644
--- a/layouts/partials/head.html
+++ b/layouts/partials/head.html
@@ -1,16 +1,25 @@
 <head>
-  <meta charset="utf-8">
+  <meta charset="utf-8" />
   <meta name="viewport" content="width=device-width, initial-scale=1.0" />
 
-  <link rel="canonical" href="{{ .Permalink }}">
+  <link rel="canonical" href="{{ .Permalink }}" />
 
-  {{ $css := "/css/custom.css"  }}
-  <link rel="stylesheet" href="{{ $css }}">
-  <link rel="alternate" href="{{ "/feed.xml" | relURL }}" type="application/atom+xml" title="ATOM feed">
+  {{ $css := "/css/custom.css" }} {{ $feed := "/feed.xml" }}
+
+  <link rel="stylesheet" href="{{ $css }}" />
+  <link
+    rel="alternate"
+    href="{{ $feed | relURL }}"
+    type="application/atom+xml"
+    title="ATOM feed"
+  />
   <link rel="author" href="humans.txt" />
 
-  <meta name="description" content="Franck Cuny's website, with articles about computers stuff.">
-  <meta name="author" content="Franck Cuny">
+  <meta
+    name="description"
+    content="Franck Cuny's website, with articles about computers stuff."
+  />
+  <meta name="author" content="Franck Cuny" />
 
   <title>{{ .Title }}</title>
 </head>
diff --git a/static/css/custom.css b/static/css/custom.css
index 714f08c..46e75b9 100644
--- a/static/css/custom.css
+++ b/static/css/custom.css
@@ -1,19 +1,19 @@
 html {
-    font-size: 20px;
+  font-size: 20px;
 }
 
 @font-face {
-    font-family: 'Gentium';
-    font-style: normal;
-    font-weight: 400;
-    src: url(/fonts/gentium-basic-v11-latin-ext_latin-regular.woff) format('woff');
+  font-family: "Gentium";
+  font-style: normal;
+  font-weight: 400;
+  src: url(/fonts/gentium-basic-v11-latin-ext_latin-regular.woff) format("woff");
 }
 
 @font-face {
-    font-family: 'Argon';
-    font-style: normal;
-    font-weight: 400;
-    src: url(/fonts/MonaspaceArgon-Light.woff) format('woff');
+  font-family: "Argon";
+  font-style: normal;
+  font-weight: 400;
+  src: url(/fonts/MonaspaceArgon-Light.woff) format("woff");
 }
 
 body {
@@ -34,7 +34,7 @@ h2 {
 }
 
 a {
-    color: #473A2F;
+  color: #473a2f;
 }
 
 a:link,
@@ -52,7 +52,7 @@ code {
   margin: 0;
   overflow-x: auto;
   word-wrap: normal;
-    font-size: 0.8rem;
+  font-size: 0.8rem;
 }
 
 p code {
@@ -93,7 +93,7 @@ td {
   padding-right: 0.7em;
   padding-top: 0.4em;
   padding-bottom: 0.4em;
-  padding-left: : 0.7em;
+  padding-left: 0.7em;
 }
 
 thead {
@@ -102,8 +102,10 @@ thead {
   text-align: left;
 }
 
-table, th, td {
-    font-size: 0.8em;
+table,
+th,
+td {
+  font-size: 0.8em;
   border-collapse: collapse;
   color: #000;
   border: 1px solid #cdcdcd;
@@ -114,11 +116,11 @@ blockquote {
   font-style: italic;
   margin: 0 0 1.5em;
   padding-left: 1em;
-  border-left: .2em solid #bdbdbd
+  border-left: 0.2em solid #bdbdbd;
 }
 
 ul {
-    list-style-type: disc;
+  list-style-type: disc;
 }
 
 ul.list-content {
@@ -128,5 +130,5 @@ ul.list-content {
 }
 
 article {
-    text-align: justify;
+  text-align: justify;
 }
diff --git a/static/resume.html b/static/resume.html
index 0f1cd83..2a4d804 100644
--- a/static/resume.html
+++ b/static/resume.html
@@ -1,209 +1,260 @@
-<!DOCTYPE html>
+<!doctype html>
 <html xmlns="http://www.w3.org/1999/xhtml" lang xml:lang>
-<head>
-  <meta charset="utf-8" />
-  <meta name="generator" content="pandoc" />
-  <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
-  <meta name="author" content="franck@fcuny.net" />
-  <title>Franck Cuny</title>
-  <style>
-    code{white-space: pre-wrap;}
-    span.smallcaps{font-variant: small-caps;}
-    span.underline{text-decoration: underline;}
-    div.column{display: inline-block; vertical-align: top; width: 50%;}
-    div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
-    ul.task-list{list-style: none;}
-    .display.math{display: block; text-align: center; margin: 0.5rem auto;}
-  </style>
-  <style type="text/css">body {
-font-family: sans-serif;
-font-size: 1em;
-line-height: 1.8em;
-color: #0e0e0b;
-margin: 1em auto;
-padding: 0 0.55em;
-max-width: 50rem;
-}
-h1 {
-color: #0e0e0b;
-font-size: 1.3rem;
-}
-h2, h3 {
-border-bottom: 1px solid #eee;
-font-style: italic;
-}
-h2 {
-margin-top: 1.25em;
-margin-bottom: 0.41em;
-font-size: 1.2rem;
-}
-h3 {
-margin-top: 1.5em;
-margin-bottom: 0.5em;
-font-size: 1rem;
-}
-hr{
-color:#000111;
-background-color:#000111;
-border:none;
-height:1px
-}
-a {
-color:#047bc2;
-transition:color .1s ease-in-out;
-}
-table {
-width: 100%;
-border-spacing: 0px;
-outline: none;
-}
-td{
-padding-right: 0.7em;
-}
-td:last-child {
-text-align: right;
-}
-table, th, td {
-font-family: monospace;
-color: #000;
-}
-#title-block-header {
-padding-right: 10px;
-font-size: 1.4em;
-display: flex;
-font-family: monospace;
-justify-content: space-between;
-align-items: center;
-padding-top: 0.5rem;
-border-bottom: 1px;
-}
-#experience {
-padding-top: 20px;
-}
-</style>
-  <!--[if lt IE 9]>
-    <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
-  <![endif]-->
-</head>
-<body>
-<header id="title-block-header">
-<h1 class="title">Franck Cuny</h1>
-<p class="author"><a href="mailto:franck@fcuny.net">franck@fcuny.net</a></p>
-</header>
-<p>I&#39;m a seasoned Site Reliability Engineer with experience in large
-scale distributed systems. I&#39;m invested in mentoring junior and senior
-engineers to help them increase their impact. I&#39;m always looking to
-learn from those around me.</p>
-<p><strong>Specializations</strong>: distributed systems,
-containerization, debugging, software development, reliability.</p>
-<h1 id="experience">Experience</h1>
-<h2 id="roblox-san-mateo">Roblox, San Mateo</h2>
-<table>
-<tbody>
-<tr class="odd">
-<td>Site Reliability Engineer</td>
-<td>Principal (IC6)</td>
-<td>SRE Group</td>
-<td>Feb 2022 - to date</td>
-</tr>
-</tbody>
-</table>
-<p>I&#39;m the Team Lead for the Site Reliability group that was started at
-the end of 2021.</p>
-<p>I&#39;m defining the road-map and identify areas where SREs can partner
-with different team to improve overall reliability of our services.</p>
-<h2 id="twitter-san-francisco">Twitter, San Francisco</h2>
-<h3 id="compute">Compute</h3>
-<table>
-<tbody>
-<tr class="odd">
-<td>Software Engineer</td>
-<td>Senior Staff</td>
-<td>Compute Info</td>
-<td>Aug 2021 - Jan 2022</td>
-</tr>
-<tr class="even">
-<td>Site Reliability Engineer</td>
-<td>Senior Staff</td>
-<td>Compute SREs</td>
-<td>Jan 2018 - Aug 2021</td>
-</tr>
-</tbody>
-</table>
-<p>Initially the Tech Lead of a team of 6 SREs supporting the Compute
-infrastructure. In August 2021 I changed to be a Software Engineer and
-was leading one of the effort to adopt Kubernetes for our on-premise
-infrastructure. As a Tech Lead I helped define number of internal
-processes for the team, from on-call rotations to postmortem
-processes.</p>
-<p>Twitter&#39;s Compute is one of the largest Mesos cluster in the world
-(XXX thousands of nodes across multiple data centers). The team defined
-KPIs, improved automation to mange the large fleet of bare metal
-machines, defined APIs for maintenance with partner teams.</p>
-<p>In addition to supporting Aurora/Mesos, I also lead a number of
-effort related to Kubernetes, both on-premise and in the cloud.</p>
-<p>Finally, I&#39;ve helped Twitter save XX of millions of dollar in
-hardware by designing and implementing strategies to significantly
-improve the hardware utilization of our bare metal infrastructure.</p>
-<h3 id="storage">Storage</h3>
-<table>
-<tbody>
-<tr class="odd">
-<td>Site Reliability Engineer</td>
-<td>Staff</td>
-<td>Storage SREs</td>
-<td>Aug 2014 - Jan 2018</td>
-</tr>
-</tbody>
-</table>
-<p>For 4 years I supported the Messaging and Manhattan teams. I moved
-all the pub-sub systems from bare-metal deployment to Aurora/Mesos,
-being the first storage team to adopt the Compute orchestration
-platform. This helped reducing operations, time to deploy, and improve
-overall reliability. I pushed for adopting 10Gb+ networking in our data
-center to help our team to scale. I was the SRE Tech Lead for the
-Manhattan team, helping with performance, operation and automation.</p>
-<h2 id="senior-software-engineer---say-media-san-francisco">Senior
-Software Engineer - Say Media, San Francisco</h2>
-<table>
-<tbody>
-<tr class="odd">
-<td>Software Engineer</td>
-<td>Senior SWE</td>
-<td>Infrastructure</td>
-<td>Aug 2011 - Aug 2014</td>
-</tr>
-</tbody>
-</table>
-<p>During my time at Say Media, I worked on two different teams. I
-started as a software engineer in the platform team building the various
-APIs; I then transitioned to the operation team, to develop tooling to
-increase the effectiveness of the engineering organization.</p>
-<h2 id="senior-software-engineer---linkfluence-paris">Senior Software
-Engineer - Linkfluence, Paris</h2>
-<table>
-<tbody>
-<tr class="odd">
-<td>Software Engineer</td>
-<td>Senior SWE</td>
-<td>Infrastructure</td>
-<td>July 2007 - July 2011</td>
-</tr>
-</tbody>
-</table>
-<p>I was one of the early engineers joining Linkfluence in 2007. I led
-the development of the company&#39;s crawler (web, feeds). I was responsible
-for defining the early architecture of the company, and designed the
-internal platforms (Service Oriented Architecture). I helped the company
-to contribute to open source projects; contributed to open source
-projects on behalf of the company; represented the company at numerous
-open sources conferences in Europe.</p>
-<h1 id="technical-skills">Technical Skills</h1>
-<ul>
-<li><strong>Languages</strong> Python, Go, Ruby, Perl</li>
-<li><strong>Frameworks</strong> Kubernetes, Aurora, Mesos</li>
-<li><strong>Databases</strong> RDBMS, NOSql</li>
-<li><strong>Dev tools</strong> Git</li>
-</ul>
-</body>
+  <head>
+    <meta charset="utf-8" />
+    <meta name="generator" content="pandoc" />
+    <meta
+      name="viewport"
+      content="width=device-width, initial-scale=1.0, user-scalable=yes"
+    />
+    <meta name="author" content="franck@fcuny.net" />
+    <title>Franck Cuny</title>
+    <style>
+      code {
+        white-space: pre-wrap;
+      }
+      span.smallcaps {
+        font-variant: small-caps;
+      }
+      span.underline {
+        text-decoration: underline;
+      }
+      div.column {
+        display: inline-block;
+        vertical-align: top;
+        width: 50%;
+      }
+      div.hanging-indent {
+        margin-left: 1.5em;
+        text-indent: -1.5em;
+      }
+      ul.task-list {
+        list-style: none;
+      }
+      .display.math {
+        display: block;
+        text-align: center;
+        margin: 0.5rem auto;
+      }
+    </style>
+    <style type="text/css">
+      body {
+        font-family: sans-serif;
+        font-size: 1em;
+        line-height: 1.8em;
+        color: #0e0e0b;
+        margin: 1em auto;
+        padding: 0 0.55em;
+        max-width: 50rem;
+      }
+      h1 {
+        color: #0e0e0b;
+        font-size: 1.3rem;
+      }
+      h2,
+      h3 {
+        border-bottom: 1px solid #eee;
+        font-style: italic;
+      }
+      h2 {
+        margin-top: 1.25em;
+        margin-bottom: 0.41em;
+        font-size: 1.2rem;
+      }
+      h3 {
+        margin-top: 1.5em;
+        margin-bottom: 0.5em;
+        font-size: 1rem;
+      }
+      hr {
+        color: #000111;
+        background-color: #000111;
+        border: none;
+        height: 1px;
+      }
+      a {
+        color: #047bc2;
+        transition: color 0.1s ease-in-out;
+      }
+      table {
+        width: 100%;
+        border-spacing: 0px;
+        outline: none;
+      }
+      td {
+        padding-right: 0.7em;
+      }
+      td:last-child {
+        text-align: right;
+      }
+      table,
+      th,
+      td {
+        font-family: monospace;
+        color: #000;
+      }
+      #title-block-header {
+        padding-right: 10px;
+        font-size: 1.4em;
+        display: flex;
+        font-family: monospace;
+        justify-content: space-between;
+        align-items: center;
+        padding-top: 0.5rem;
+        border-bottom: 1px;
+      }
+      #experience {
+        padding-top: 20px;
+      }
+    </style>
+    <!--[if lt IE 9]>
+      <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+    <![endif]-->
+  </head>
+  <body>
+    <header id="title-block-header">
+      <h1 class="title">Franck Cuny</h1>
+      <p class="author">
+        <a href="mailto:franck@fcuny.net">franck@fcuny.net</a>
+      </p>
+    </header>
+    <p>
+      I&#39;m a seasoned Site Reliability Engineer with experience in
+      large-scale distributed systems. I&#39;m invested in mentoring junior and senior
+      engineers to help them increase their impact. I&#39;m always looking to
+      learn from those around me.
+    </p>
+    <p>
+      <strong>Specializations</strong>: distributed systems, containerization,
+      debugging, software development, reliability.
+    </p>
+    <h1 id="experience">Experience</h1>
+    <h2 id="roblox-san-mateo">Roblox, San Mateo</h2>
+    <table>
+      <tbody>
+        <tr class="odd">
+          <td>Site Reliability Engineer</td>
+          <td>Principal (IC6)</td>
+          <td>SRE Group</td>
+          <td>Feb 2022 - to date</td>
+        </tr>
+      </tbody>
+    </table>
+    <p>
+      I&#39;m the Team Lead for the Site Reliability group that was started at
+      the end of 2021.
+    </p>
+    <p>
+      I&#39;m defining the road-map and identifying areas where SREs can
+      partner with different teams to improve the overall reliability of our
+      services.
+    </p>
+    <h2 id="twitter-san-francisco">Twitter, San Francisco</h2>
+    <h3 id="compute">Compute</h3>
+    <table>
+      <tbody>
+        <tr class="odd">
+          <td>Software Engineer</td>
+          <td>Senior Staff</td>
+          <td>Compute Info</td>
+          <td>Aug 2021 - Jan 2022</td>
+        </tr>
+        <tr class="even">
+          <td>Site Reliability Engineer</td>
+          <td>Senior Staff</td>
+          <td>Compute SREs</td>
+          <td>Jan 2018 - Aug 2021</td>
+        </tr>
+      </tbody>
+    </table>
+    <p>
+      Initially the Tech Lead of a team of 6 SREs supporting the Compute
+      infrastructure. In August 2021 I moved to a Software Engineer role and
+      led one of the efforts to adopt Kubernetes for our on-premise
+      infrastructure. As a Tech Lead I helped define a number of internal
+      processes for the team, from on-call rotations to postmortems.
+    </p>
+    <p>
+      Twitter&#39;s Compute is one of the largest Mesos clusters in the world
+      (XXX thousands of nodes across multiple data centers). The team defined
+      KPIs, improved automation to manage the large fleet of bare metal
+      machines, and defined APIs for maintenance with partner teams.
+    </p>
+    <p>
+      In addition to supporting Aurora/Mesos, I also led a number of efforts
+      related to Kubernetes, both on-premise and in the cloud.
+    </p>
+    <p>
+      Finally, I&#39;ve helped Twitter save XX of millions of dollars in hardware
+      by designing and implementing strategies to significantly improve the
+      hardware utilization of our bare metal infrastructure.
+    </p>
+    <h3 id="storage">Storage</h3>
+    <table>
+      <tbody>
+        <tr class="odd">
+          <td>Site Reliability Engineer</td>
+          <td>Staff</td>
+          <td>Storage SREs</td>
+          <td>Aug 2014 - Jan 2018</td>
+        </tr>
+      </tbody>
+    </table>
+    <p>
+      For 4 years I supported the Messaging and Manhattan teams. I moved all the
+      pub-sub systems from bare-metal deployment to Aurora/Mesos, being the
+      first storage team to adopt the Compute orchestration platform. This
+      helped reduce operations and time to deploy, and improved overall
+      reliability. I pushed for adopting 10Gb+ networking in our data center to
+      help our team scale. I was the SRE Tech Lead for the Manhattan team,
+      helping with performance, operations and automation.
+    </p>
+    <h2 id="senior-software-engineer---say-media-san-francisco">
+      Senior Software Engineer - Say Media, San Francisco
+    </h2>
+    <table>
+      <tbody>
+        <tr class="odd">
+          <td>Software Engineer</td>
+          <td>Senior SWE</td>
+          <td>Infrastructure</td>
+          <td>Aug 2011 - Aug 2014</td>
+        </tr>
+      </tbody>
+    </table>
+    <p>
+      During my time at Say Media, I worked on two different teams. I started as
+      a software engineer on the platform team, building the various APIs; I
+      then transitioned to the operations team to develop tooling to increase
+      the effectiveness of the engineering organization.
+    </p>
+    <h2 id="senior-software-engineer---linkfluence-paris">
+      Senior Software Engineer - Linkfluence, Paris
+    </h2>
+    <table>
+      <tbody>
+        <tr class="odd">
+          <td>Software Engineer</td>
+          <td>Senior SWE</td>
+          <td>Infrastructure</td>
+          <td>July 2007 - July 2011</td>
+        </tr>
+      </tbody>
+    </table>
+    <p>
+      I was one of the early engineers joining Linkfluence in 2007. I led the
+      development of the company&#39;s crawler (web, feeds). I was responsible
+      for defining the early architecture of the company, and designed the
+      internal platforms (Service Oriented Architecture). I helped the company
+      contribute to open source projects, contributed to open source projects
+      on its behalf, and represented it at numerous open source conferences in
+      Europe.
+    </p>
+    <h1 id="technical-skills">Technical Skills</h1>
+    <ul>
+      <li><strong>Languages</strong> Python, Go, Ruby, Perl</li>
+      <li><strong>Frameworks</strong> Kubernetes, Aurora, Mesos</li>
+      <li><strong>Databases</strong> RDBMS, NoSQL</li>
+      <li><strong>Dev tools</strong> Git</li>
+    </ul>
+  </body>
 </html>
diff --git a/treefmt.nix b/treefmt.nix
index 89a0c40..d9dc0e6 100644
--- a/treefmt.nix
+++ b/treefmt.nix
@@ -4,5 +4,6 @@
     nixpkgs-fmt.enable = true; # nix
     taplo.enable = true; # toml
     yamlfmt.enable = true; # yaml
+    prettier.enable = true; # css, html, markdown
   };
 }