update images

This commit is contained in:
Gabriel Augendre 2023-05-12 21:33:01 +02:00
parent 1797ead00c
commit 6bf216f094
68 changed files with 2606 additions and 2564 deletions

.gitattributes

@@ -1,3 +1,6 @@
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.stl filter=lfs diff=lfs merge=lfs -text
*.JPG filter=lfs diff=lfs merge=lfs -text
*.svg filter=lfs diff=lfs merge=lfs -text

content/posts/about-me/39.jpg (Stored with Git LFS, binary file not shown)

content/posts/about-me/40.jpg (Stored with Git LFS, binary file not shown)

content/posts/about-me/41.jpg (Stored with Git LFS, binary file not shown)

@@ -4,21 +4,21 @@ tags: ['Gabriel Augendre', 'blog', 'homepage']
date: 2020-05-12T11:29:02+00:00
aliases: ["/about-me"]
---
I'm Gabriel, a Go and Python software developer at [OVHcloud](https://www.ovhcloud.com/) (Lyon, France) and a [FOSS](https://en.wikipedia.org/wiki/Free_and_open-source_software) enthusiast.
[![A picture of Gabriel Augendre](39.jpg)](/attachments/40/original/)
[![His usual avatar for online profiles, the dwarf from a medieval fantasy audio story named "Le Donjon de Naheulbeuk" (French)](41.jpg)](/attachments/41/original/)
{: .profile }
## Where to find me {#contact}
| Code | Contact |
|-----------------------------------------------|--------------------------------------------------------------------|
| [Gitea](https://git.augendre.info/gaugendre) | <a rel="me" href="https://fosstodon.org/@Crocmagnon">Mastodon</a> |
| [GitHub](https://github.com/Crocmagnon) | <a href="mailto:ga-contact@augendre.info">Email</a> |
| [GitLab](https://gitlab.com/gaugendre) | |
My resume is available as [a pdf](https://cv-gabriel.augendre.info) (in French) if you want to take a look.
## Analytics
This website uses GoatCounter, a privacy-respecting analytics tool. The stats are [publicly available](https://gc.gabnotes.org).

content/posts/adding-time-to-my-blogs-statistics/19.png (Stored with Git LFS, binary file not shown)

content/posts/adding-time-to-my-blogs-statistics/9.png (Stored with Git LFS, binary file not shown)

@@ -4,179 +4,179 @@ tags: ['analytics', 'blog', 'self-hosting']
date: 2020-06-15T16:27:27+00:00
aliases: ["/adding-time-to-my-blogs-statistics"]
---
[![Two Grafana graphs: one to view the raw view counts evolving, the other to display the difference between two data points](9.png)](/attachments/9/original/)
Recently I added some statistics to my blog. I wanted to do that with minimum effort and without serving any crappy proprietary JavaScript to my users.
!!! info "Update 2020-01-03"
    I now use [my own blog platform](/blog-platform-update-quick-note/) with [Plausible Analytics](/about-me/#analytics).
#100DaysToOffload No. 11
<!--more-->
My blog is powered by [WriteFreely](https://writefreely.org/), free software written in Go. It already includes a per-page view counter: the numbers appear in the admin interface, so you can see which pages perform better than others.
[![My blog stats as displayed in WriteFreely writer interface](19.png)](/attachments/19/original/)
The thing is, there's no notion of time: you can't tell a post that got 1000 views in 3 days from one that got 1000 views in 2 months.
As I said, I wanted to fix that with minimum effort and have some nice graphs. I already have:
* [InfluxDB](https://www.influxdata.com/) and [Grafana](https://grafana.com/) on my Raspberry Pi
    * InfluxDB is a time series database, meaning it's optimized to store series of data or events which occurred at a given time. Seems perfect for my blog analytics: I could regularly store the current view count of all posts and observe the evolution.
    * Grafana is basically a service that draws graphs in a browser based on data from time series databases (and many other things, but that's my use case here). With that, I can graph the evolution of my view counts.
* The data on my blog server (not my Raspberry Pi), just waiting to be pushed
Based on this inventory, I assumed I needed at least one thing, maybe two:
* (mandatory) A script to retrieve the data from WriteFreely's database and push it to my Raspberry Pi
* (not mandatory) A service to receive the data, parse it and send it to InfluxDB. I say it's not mandatory because I could allow my blog server to push directly to InfluxDB, but I don't want that since it also exposes the existing data to a potential attacker.
I ended up writing both of these, in an hour and a half, using SQLite, cURL, Python and Flask.
## Retrieving
My blog data is stored in an SQLite database on my server, which makes the access very easy. A simple one-liner exports the data I'm interested in to CSV:
```bash
sqlite3 /srv/writefreely/writefreely.db ".mode csv" ".header on" "select id, slug, view_count, title from posts order by view_count desc;"
```
Let's quickly review all these parts:
* `sqlite3 /srv/writefreely/writefreely.db`: connect to the database. All strings enclosed in quotes after this part are commands sent to SQLite.
* `".mode csv"`: render data as CSV
* `".header on"`: add column names to the CSV, useful if I later change the order of the columns
* `"select id, slug, view_count, title from posts order by view_count desc;"`: the SQL query I use to retrieve the view counts from my database. Of course this might break if WriteFreely changes its data schema, but I'll try to carefully read the changelog. Anyway, this retrieves the id, the slug, the view count and the title of all posts, sorted by view count from biggest to smallest. The slug is basically the URL of the post, the id is a unique identifier for the post, and the other fields are pretty self-explanatory.
This command gives the following output:
```
$ sqlite3 /srv/writefreely/writefreely.db ".mode csv" ".header on" "select id, slug, view_count, title from posts order by view_count desc;"
id,slug,view_count,title
i3js9560vz,about-me,354,"About me"
ljp31nabom,im-starting-a-blog,160,"I'm starting a blog"
vf28dd7qao,using-your-yubikey-with-ssh-and-achieving-multifactor-authentication,144,"Using Your YubiKey With SSH and Achieving Multifactor Authentication"
vvci3b8p7d,raspberry-pi,119,"Raspberry Pi"
7326h2oj6s,synology-nas-stream-and-backup,109,"Synology NAS: Stream & Backup"
epkzgogx3q,my-setup-at-home,106,"My Setup at Home"
khojwbz2yv,youtube-on-apple-tv,70,"YouTube on Apple TV"
fdqlkla5qk,codingame-spring-challenge-2020-pacman,68,"CodinGame Spring Challenge 2020 - Pacman"
7x01zg6f0y,how-to-subscribe,65,"How to subscribe"
p3yf5p51rv,upgraded-writefreely-quick-note,62,"Upgraded writefreely - Quick Note"
bag5783dgc,bitwarden-and-password-managers,61,"Bitwarden and Password Managers"
3r1su2zj2q,cleantoots-clean-your-toot-history,39,"Cleantoots - Clean Your Toot History"
0cowcmcwur,pi-hole,36,"Pi Hole"
```
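For illustration, here's the same export reproduced with Python's built-in `sqlite3` module against a throwaway in-memory database (the table is a hypothetical subset of WriteFreely's real schema, seeded with two rows from the output above):

```python
import csv
import io
import sqlite3

# In-memory stand-in for WriteFreely's posts table (hypothetical subset
# of the real schema, for demonstration only).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE posts (id TEXT, slug TEXT, view_count INTEGER, title TEXT)")
con.executemany(
    "INSERT INTO posts VALUES (?, ?, ?, ?)",
    [
        ("ljp31nabom", "im-starting-a-blog", 160, "I'm starting a blog"),
        ("i3js9560vz", "about-me", 354, "About me"),
    ],
)

# Same query as the CLI one-liner, rendered as CSV with a header row.
rows = con.execute(
    "select id, slug, view_count, title from posts order by view_count desc"
).fetchall()
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "slug", "view_count", "title"])
writer.writerows(rows)
csv_export = buf.getvalue()
print(csv_export)
```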
Now, I need to send this data to my server. Since I had already decided not to expose InfluxDB directly but only a small app that accepts this data, I chose not to transform the data on the blog server: I would parse it on the Raspberry Pi after receiving it.
## Sending
`curl` is the perfect tool to make HTTP requests on the command line. I wanted to make a POST request with the output of the previous command as the body.
After some research and a *lot* of trial and error, here's what I came up with:
```bash
curl -X POST -H "Content-Type: text/csv" -m 30 --data-binary @- "http://<ip>:<port>/stats"
```
This expects data on the standard input, so I have to pipe the two commands:
```bash
sqlite3 /srv/writefreely/writefreely.db ".mode csv" ".header on" "select id, slug, view_count, title from posts order by view_count desc;" | curl -X POST -H "Content-Type: text/csv" -m 30 --data-binary @- "http://<ip>:<port>/stats"
```
Again, some explanation:
* `curl -X POST`: make a POST request
* `-H "Content-Type: text/csv"`: add a header declaring what I'm sending
* `-m 30`: use a sensible timeout in case my Raspberry Pi doesn't respond
* `--data-binary @-`: read data from stdin and send it as the body of the request
* `"http://<ip>:<port>/stats"`: the endpoint where to send the data
I just needed to add this to my crontab to run this periodically. I originally chose a resolution of 10 minutes but found myself needing more data quickly, so I changed it to 1 minute. This also makes the loss of a data point less harmful.
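For reference, the cron entry looks something like this (a sketch: the database path matches the one above, `<ip>` and `<port>` stay placeholders, and the output redirection is my addition here, just to avoid cron mail):

```bash
# m h dom mon dow  command (runs every minute)
* * * * * sqlite3 /srv/writefreely/writefreely.db ".mode csv" ".header on" "select id, slug, view_count, title from posts order by view_count desc;" | curl -X POST -H "Content-Type: text/csv" -m 30 --data-binary @- "http://<ip>:<port>/stats" >/dev/null 2>&1
```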
At this point, we have a working script that reads my blog stats and sends them to my Raspberry Pi. Now we need to be able to receive them.
## Receiving
The code for this part is free software, you can find it on [my Gitea](https://git.augendre.info/gaugendre/writefreely_stats). The meat of this is a small Python Flask app:
```python
from flask import Flask, request
import csv
import datetime
import os

from influxdb import InfluxDBClient

INFLUX_USER = os.getenv("INFLUX_USER")
INFLUX_PASSWORD = os.getenv("INFLUX_PASSWORD")
INFLUX_DB = os.getenv("INFLUX_DB")

app = Flask(__name__)


@app.route("/stats", methods=["POST"])
def hello_world():
    client = InfluxDBClient("localhost", 8086, INFLUX_USER, INFLUX_PASSWORD, INFLUX_DB)
    influx_data = []
    time = datetime.datetime.now().astimezone().isoformat()
    data = request.data.decode("utf-8").split("\r\n")
    reader = csv.DictReader(data, delimiter=",", quotechar='"')
    for line in reader:
        try:
            view_count = int(line["view_count"])
            id_ = line["id"]
            slug = line["slug"]
            title = line["title"]
        except (ValueError, KeyError):
            continue
        influx_data.append({
            "measurement": "blog_stats",
            "time": time,
            "tags": {
                "id": id_,
                "slug": slug,
                "title": title,
            },
            "fields": {
                "value": view_count,
            },
        })
    if influx_data:
        client.write_points(influx_data)
        return "written"
    return "nothing to write"
```
It expects data to be sent to its `/stats` endpoint as a POST request, which we did earlier. It then parses the body of the request as CSV, builds data points to send to InfluxDB and writes them. That's all!
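To sanity-check the parsing step in isolation, here's a sketch that runs the same `csv.DictReader` logic on a fragment of the payload from earlier, with no Flask or InfluxDB involved:

```python
import csv

# A fragment of the CSV body the blog server POSTs (rows taken from the
# sample output above); curl sends it with \r\n line endings.
payload = (
    "id,slug,view_count,title\r\n"
    'i3js9560vz,about-me,354,"About me"\r\n'
    'ljp31nabom,im-starting-a-blog,160,"I\'m starting a blog"\r\n'
)

lines = payload.split("\r\n")
reader = csv.DictReader(lines, delimiter=",", quotechar='"')

# Keep only rows with a usable view count, like the Flask view does.
points = []
for row in reader:
    try:
        points.append({"slug": row["slug"], "value": int(row["view_count"])})
    except (ValueError, TypeError, KeyError):
        continue
print(points)
```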
## Graphing
I made two panels on my Grafana dashboard: one to view the raw view counts evolving, the other to display the difference between two data points. That is: the number of views in one minute.
Here are the two graphs in action:
[![Two Grafana graphs: one to view the raw view counts evolving, the other to display the difference between two data points](9.png)](/attachments/9/original/)
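For what it's worth, the second panel can be expressed with an InfluxQL query along these lines (a sketch, not my exact dashboard definition; it assumes the `blog_stats` measurement and `value` field written by the Flask app, and `$timeFilter` is Grafana's time-range macro):

```sql
SELECT non_negative_difference(last("value"))
FROM "blog_stats"
WHERE $timeFilter
GROUP BY time(1m), "slug"
```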
## Allowing communication between devices
Lastly, I needed to enable communication between the two devices. My Raspberry Pi is protected with `ufw` (Uncomplicated Firewall), in which I only allow access to the ports I need from the IP addresses I need:
```bash
ufw allow in proto tcp from 10.64.168.93/32 to any port <PORT> comment "Blog stats"
ufw allow in proto tcp from 2001:bc8:47a8:42e::1/128 to any port <PORT> comment "Blog stats"
```
## Limitations
This only counts page displays; it doesn't distinguish unique visitors from web crawlers, etc. It does exclude logged-in users, however (me, in this case). It's more than enough for my needs: I don't need to know which country my visitors come from or how old they are.
Keywords/tags:
#blog #stats #python #dev #flask #grafana #influxdb #raspberrypi #raspi #100DaysToOffload

@@ -4,29 +4,29 @@ tags: ['bisect', 'bugs', 'git']
date: 2021-12-22T22:08:07.191735+00:00
aliases: ["/automate-bug-findings-with-git"]
---
## 🔍 Git bisect
When you're trying to troubleshoot a bug, finding the original commit is often helpful as it gives you context.
Enter `git bisect`! If you haven't already, consider taking a short detour to the [documentation](https://git-scm.com/docs/git-bisect).
> This command uses a binary search algorithm to find which commit in your project's history introduced a bug.
## 🤙 The manual way
`git bisect` works by taking a "good" commit and a "bad" one; from there it will check out a commit in between. You then check whether your bug is still present and tell git about it. It repeats this process, narrowing down its search until it finds the first "bad" commit.
Manually checking the presence of the bug may be tedious if the process is involved or if there's a very large number of commits to go through.
## 🤖 Automating it for fun and profit
If you know how to script the detection of the bug, let's say you can reproduce it in a unit test, then you can use the `run` subcommand.
It takes a command that should exit with 0 if the commit is good and anything else if it's bad. Conveniently, most test runners behave this way, so you should be able to use the tools you already know.
Git will then do all the heavy lifting for you, running your test script after each checkout and swiftly finding the culprit.
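Here's a self-contained sketch of what that looks like, using a throwaway repository with a hypothetical four-commit history (the "bug" is a file's content flipping from 0 to 1):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Four commits; the bug appears in the third one.
echo 0 > value; git add value; git commit -qm "good: initial"
git commit -q --allow-empty -m "good: unrelated work"
echo 1 > value; git commit -aqm "bad: bug introduced"
git commit -q --allow-empty -m "bad: later work"

# bad = HEAD, good = HEAD~3; the run command exits 0 when the bug is absent.
git bisect start HEAD HEAD~3
result=$(git bisect run sh -c 'test "$(cat value)" = 0')
git bisect reset
echo "$result"
```

The captured output names the first bad commit, here the "bug introduced" one.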
!!! Info "☝️ Pro tip"
    If the script you want to run is versioned, then when git checks out previous commits your test script might change.
    In order to avoid that, a possible workaround is to run a non-versioned copy of the script so it's not changed when git switches to an earlier version of your repository.

content/posts/bitwarden-and-password-managers/3.svg (Stored with Git LFS, binary file not shown)

@@ -4,47 +4,15 @@ tags: ['free software', 'passwords', 'passwords manager']
date: 2020-06-05T15:10:47+00:00
aliases: ["/bitwarden-and-password-managers"]
---
[TL;DR](https://en.wiktionary.org/wiki/tl;dr): I use a password manager and you should too. [Bitwarden](http://bitwarden.com/) is a Free Software alternative.
![Bitwarden Logo](/attachments/3/original/)
This is my 10th post of #100DaysToOffload.<!--more-->
## Password managers
We all have a *lot* of online accounts now, between banks, emails, marketplaces, public institutions, games, transport, storage, food delivery, … The list goes on.
And the vast majority of these accounts should all have different passwords. And by different I mean very different.
For example, these two passwords are not different enough:
| Service | Password |
|---------|----------------------|
| Dropbox | `!B2F#czqpnKB:dropbox` |
| GitHub | `!B2F#czqpnKB:github` |
They are based on the same root and just use the service name as a suffix. That's what I used to do before using a password manager because it was easy to remember: once you memorised the root, you just had to know where you wanted to log in.
I had a complex root composed of random letters, digits and special characters. But if any of the services were compromised, my root was compromised too and the attacker could gain access to my other accounts with very little effort.
A password manager abstracts everything for you and makes it **very easy** to generate unique, long and complex passwords **without the need to remember them all**. You just have to remember the password to your password manager and let it handle the rest for you.
Most password managers also allow you to store other items such as notes, credit cards or identities to help you fill forms.
## Bitwarden
Bitwarden is my password manager of choice because it's **free and open source software**. The clients are under the GPL and the server code is under the AGPL; both guarantee you access to the source code and the freedom to modify it yourself for your own use or to redistribute your modified version.
It offers a **free plan** that is very convenient and in no way limited like others (looking at you Dashlane, making us pay $40/year to sync passwords). With the free version of Bitwarden, you can store as many passwords as you want, you can sync them between as many devices as you want (smartphone, laptop, desktop, tablet, …). It also generates random passwords for you so you don't have to roll your face over your keyboard when creating an account.
Finally, you can host it yourself, meaning that if you don't trust the online Bitwarden service to store your passwords, you can run it on your own server or at home. I see this as more useful for **enterprises** that want to store their passwords on-premise, but technical individuals can also take advantage of it and not depend on a third-party service for their password management.
Oh, I mentioned a free plan but to support Bitwarden you can subscribe to a Premium offer. It costs **less than a dollar per month** ($10/year) and it offers:
* reports on the content of your vault (e.g.: exposed or weak passwords)
* 1GB of encrypted file storage
* TOTP code generation
You also get the nice feeling of supporting a free software project 🎉
## Get involved
Since Bitwarden is an open source project, it's quite easy to get involved. You can contribute code on [GitHub](https://github.com/bitwarden) or translations on [Crowdin](https://crowdin.com/profile/kspearrin). From my experience, contributing code was very pleasant. Kyle (the main developer) was helpful in his feedback and I could improve credit card filling on some of the sites I use regularly.
I'm also involved as a proofreader of the French translation on [Crowdin](https://crowdin.com/profile/kspearrin), meaning that I have the final say on what will be included in the French translation of the apps. It's a responsibility I take seriously and we welcome all participation from fellow translators 😊

@@ -4,14 +4,14 @@ tags: ['blog', 'self-hosting']
date: 2020-08-17T15:45:55+00:00
aliases: ["/blog-platform-update-quick-note"]
---
I just moved from WriteFreely to a custom-made blog engine. WriteFreely is great, but I wanted more control over every bit of the template and style, and I couldn't easily achieve what I wanted.
<!--more-->
#100DaysToOffload No. 17
The code powering this website is available at [Gitea](https://git.augendre.info/gaugendre/blog/) and is free software. Feel free to use it for your own needs as long as you stick with the license.
During the migration, I lost the feature that pushed every article on Mastodon. If you followed the blog using this mechanism, I encourage you to switch to the RSS feed.
If you subscribed with the RSS feed, your reader might redownload all the previous articles, if that's the case I'm really sorry but I don't know how to prevent it. If you happen to know, please [reach out to me](/about-me)!

content/posts/blog-statistics-part-2/8.png (Stored with Git LFS, binary file not shown)

@@ -4,37 +4,37 @@ tags: ['analytics', 'blog', 'self-hosting']
date: 2020-06-26T15:34:46+00:00
aliases: ["/blog-statistics-part-2"]
---
In a [recent post](/adding-time-to-my-blogs-statistics), I talked about how I tracked my posts' views over time using Grafana and InfluxDB.
Today, I went another way and configured [GoAccess](http://goaccess.io/) to provide me with an HTML report with data extracted from my webserver's logs.
!!! info "Update 2020-01-03"
    I now use [my own blog platform](/blog-platform-update-quick-note/) with [GoatCounter](/about-me/#analytics).
[![GoAccess Screenshot](/attachments/8/processed/)](/attachments/8/original/)
#100DaysToOffload No. 14<!--more-->
As I said, GoAccess works by parsing my webserver's logs and extracting data from there. There's a surprising amount of data hidden in them; here's an example line:
```text
148.251.134.157 - - [26/Jun/2020:16:28:58 +0200] "POST /api/collections/gaugendre/inbox HTTP/1.1" 200 0 "-" "http.rb/4.4.1 (Mastodon/3.1.4; +https://mastodon.social/)"
```
* `148.251.134.157`: the IP address of the client making the request
* `- -`: two dashes for the logged-in user's identity. I don't use this feature, hence the dashes.
* `[26/Jun/2020:16:28:58 +0200]`: date and time with timezone
* `POST /api/collections/gaugendre/inbox HTTP/1.1`: [HTTP method](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods) with path and protocol version
* `200`: The HTTP [status code](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes), 200 means "Ok"
* `0`: The size of the response, in bytes
* `"-"`: The referrer (what page the user was on before)
* `"http.rb/4.4.1 (Mastodon/3.1.4; +https://mastodon.social/)"`: The user agent. It describes the browser or the program used to make the request. Here it's a Mastodon server.
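The fields above can be pulled apart with a short regular expression. Here's a minimal sketch (the pattern is my own illustration, not the one GoAccess uses internally):

```python
import re

# Illustrative pattern for the log format described above; GoAccess ships
# its own parsers, this only shows where each field lives in the line.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\d+) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = (
    '148.251.134.157 - - [26/Jun/2020:16:28:58 +0200] '
    '"POST /api/collections/gaugendre/inbox HTTP/1.1" 200 0 "-" '
    '"http.rb/4.4.1 (Mastodon/3.1.4; +https://mastodon.social/)"'
)

fields = LOG_PATTERN.match(line).groupdict()
print(fields["method"], fields["path"], fields["status"])
```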
Every single request to the server is logged, which makes it easy to determine which pages are requested the most and, most importantly, by whom. Do my 100 views come from an indexing bot or from 100 different people? That's not the same thing! While the simple count from WriteFreely can't answer these questions, the logs can!
I might take it a step further and dump everything to an Elastic stack someday, but in the meantime you can access the reports generated by GoAccess for my blog at https://reports.augendre.info. I'm currently facing some issues that I don't yet understand with generating the reports regularly via crontab, but I'm on it :)
**Update 2020-06-26 18:07**: The crontab issue is resolved, reports are now updated automatically every 5 minutes.
Keywords/tags:
#blog #stats #nginx #goaccess


@ -4,36 +4,36 @@ tags: ['Certificates', 'OpenSSL', 'Quick Note', 'TLS']
date: 2022-08-04T09:35:07.590872+00:00
aliases: ["/certificates-chain-correct-order"]
---
Living up to the name of this blog, here's a very quick note for my future self.
Certificates in a chain file should start with the "final" certificate (the cert issued to you). Each following certificate should be the issuer of the previous one.
```plain
-----BEGIN CERTIFICATE-----
... # 0
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... # 1
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... # 2
-----END CERTIFICATE-----
```
```plain
$ openssl storeutl -certs -noout -text gabnotes.org.crt | grep -E "Certificate:|: Certificate|Issuer:|Subject:"
0: Certificate
Certificate:
Issuer: C=US, O=Let's Encrypt, CN=R3
Subject: CN=gabnotes.org
1: Certificate
Certificate:
Issuer: C=US, O=Internet Security Research Group, CN=ISRG Root X1
Subject: C=US, O=Let's Encrypt, CN=R3
2: Certificate
Certificate:
Issuer: O=Digital Signature Trust Co., CN=DST Root CA X3
Subject: C=US, O=Internet Security Research Group, CN=ISRG Root X1
```
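The rule is easy to check programmatically: walking the list from the leaf, each certificate's issuer must equal the next certificate's subject. A small sketch against the `openssl storeutl` output above (plain string comparison, which is a simplification of real distinguished-name matching):

```python
def chain_is_ordered(certs: list[dict]) -> bool:
    """certs holds {"subject": ..., "issuer": ...} entries, leaf first."""
    return all(
        certs[i]["issuer"] == certs[i + 1]["subject"]
        for i in range(len(certs) - 1)
    )

# The three certificates from the storeutl output, leaf first.
chain = [
    {"subject": "CN=gabnotes.org",
     "issuer": "C=US, O=Let's Encrypt, CN=R3"},
    {"subject": "C=US, O=Let's Encrypt, CN=R3",
     "issuer": "C=US, O=Internet Security Research Group, CN=ISRG Root X1"},
    {"subject": "C=US, O=Internet Security Research Group, CN=ISRG Root X1",
     "issuer": "O=Digital Signature Trust Co., CN=DST Root CA X3"},
]
print(chain_is_ordered(chain))  # True: the chain file is correctly ordered
```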


@ -4,91 +4,91 @@ tags: ['Docker', 'ITSF', 'chown', 'history', 'layer', 'multi-stage builds']
date: 2021-03-02T16:21:06.172437+00:00
aliases: ["/chowning-files-dockerfile-can-take-lot-space"]
---
Today I learned that recursively changing the owner of a directory tree in a Dockerfile can result in a serious increase in image size.
## 🚛 The issue
You may remember how in a [previous post](/docker-images-layers-and-cache/) we used a small example to discuss layers and final image size. Well, here's our example again, slightly modified.
```Dockerfile hl_lines="5"
# Dockerfile
FROM ubuntu
WORKDIR /app
RUN fallocate -l 100M example
RUN chown 33:33 example
```
Given that the base image weighs ~75MB, we could expect the final image to weigh ~175MB (~75 from the base image + ~100 from the big file we generated).
It turns out that since `chown`ing the file modifies it, the `example` file will count twice: once in the `fallocate` layer, and once in the `chown` layer, resulting in an image size of ~275MB.
## 📉 Workaround
Since creating "large" amounts of data in a Docker image can be quite common (think about dependencies, static files, etc), I guess that workaround strategies are required. Fortunately, our backs are covered.
Let's take a slightly more complex example to illustrate some real life situations you might encounter:
```Dockerfile
FROM ubuntu AS build
WORKDIR /build
RUN fallocate -l 100M binary
FROM ubuntu
WORKDIR /app
RUN fallocate -l 100M example
COPY --from=build /build/binary /app/binary
RUN chown -R 33:33 /app
```
This results in an image weighing 492MB. Let's bring it down to 283MB! (2x~100MB + ~75MB)
```Dockerfile hl_lines="9 14 15 19"
FROM ubuntu AS build
WORKDIR /build
RUN fallocate -l 100M binary
FROM ubuntu
WORKDIR /app
# /app is empty so only the folder is modified.
RUN chown -R 33:33 /app
# Running these in the same step prevents docker
# from generating an intermediate layer with the
# wrong permissions and taking precious space.
RUN fallocate -l 100M example \
&& chown 33:33 example
# Using --chown with COPY or ADD copies the files
# with the right permissions in a single step.
COPY --chown=33:33 --from=build /build/binary /app/binary
```
There you go! By being smart about when to run the permission changes, we just saved ourselves 200MB of disk space and network bandwidth. That's about 60% for this specific image!
In the specific case I was investigating at [ITSF](https://itsf.io), the image went from ~1.6GB to ~0.95GB just from this `chown` trick. We were copying a bunch of files in a directory and at the end we chowned the whole directory recursively. That directory weighed about 650MB, which counted twice in the final image size.
!!! info "Info"
    Of course this also works with "simple" `COPY` and `ADD` instructions. It's not reserved to copying files from other stages.
## 📓 Don't forget history!
I discovered that the `chown` was taking that much space using the underrated `docker history` command. I already briefly [introduced](/docker-images-layers-and-cache/#cache-invalidation) it previously but now felt like a good time to remind you of its existence 🙂
Running it with our big 492MB image, here's the output:
```
$ docker history fat-image
IMAGE CREATED CREATED BY SIZE COMMENT
ec7efd2f2855 20 minutes ago /bin/sh -c chown -R 33:33 /app 210MB
562cdd7db0dd 21 minutes ago /bin/sh -c #(nop) COPY file:3de744e61c00e7ca… 105MB
e2b74aa6952e 30 minutes ago /bin/sh -c fallocate -l 100M example 105MB
8637829f8e9b 2 months ago /bin/sh -c #(nop) WORKDIR /app 0B
f643c72bc252 3 months ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 3 months ago /bin/sh -c mkdir -p /run/systemd && echo 'do… 7B
<missing> 3 months ago /bin/sh -c [ -z "$(apt-get indextargets)" ] 0B
<missing> 3 months ago /bin/sh -c set -xe && echo '#!/bin/sh' > /… 811B
<missing> 3 months ago /bin/sh -c #(nop) ADD file:4f15c4475fbafb3fe… 72.9MB
```
All the `<missing>` rows plus the first row with a real ID above (`f643c72bc252`) are the layers of the base image. All the layers above are the ones that compose our image. We can clearly see that the `chown` layer weighs 210MB by itself.
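To see at a glance which layers dominate, you can sum the SIZE column. A throwaway sketch (the helper and the unit table are mine, not part of Docker):

```python
# Docker prints sizes with SI units (1 MB = 1000**2 bytes).
UNITS = {"B": 1, "kB": 1000, "MB": 1000**2, "GB": 1000**3}

def parse_size(text: str) -> float:
    # Try the longest suffixes first so "B" doesn't shadow "MB".
    for unit, factor in sorted(UNITS.items(), key=lambda kv: -len(kv[0])):
        if text.endswith(unit):
            return float(text[: -len(unit)]) * factor
    raise ValueError(f"unknown size: {text!r}")

# SIZE column of the `docker history fat-image` output above.
sizes = ["210MB", "105MB", "105MB", "0B", "0B", "7B", "0B", "811B", "72.9MB"]
total = sum(parse_size(s) for s in sizes)
# Prints 493: close to the reported ~492MB, since displayed sizes are rounded.
print(round(total / 1000**2))
```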
That wraps it up for today! As always, I hope you learned something along the way 😊


@ -4,31 +4,31 @@ tags: ['cleanup', 'history', 'mastodon', 'toots']
date: 2020-05-25T17:08:28+00:00
aliases: ["/cleantoots-clean-your-toot-history"]
---
Since I consider my messages on social media to be valid at the moment they're posted, I like to have them deleted after some time. When I still used Twitter, I also used a CLI tool called [cleantweets](https://github.com/magnusnissel/cleantweets) that helped with this.
A few months ago, after having created an account on [Fosstodon](https://fosstodon.org), I wrote a simple command line utility to help you achieve the same thing but with toots: [cleantoots](https://git.augendre.info/gaugendre/cleantoots) (notice how much effort I put into naming it).<!--more-->
As with most of my side-projects, the source code is released under a free software and open source license; here I used the GPL-3.0.
Since it's written in Python, I also released a [Python package](https://pypi.org/project/cleantoots/) to help with the installation process, so a simple `python -m pip install cleantoots` should do the trick. The rest of the configuration and setup process can be found in the project's [README](https://git.augendre.info/gaugendre/cleantoots/src/branch/master/README.md).
You can easily configure the tool to:
* Delete old toots
* Keep popular toots (with a minimum number of favorites/boosts)
* Keep toots given their ID
* Keep toots containing a hashtag (recent addition)
And as Mastodon is a federated social network and you may have multiple accounts on multiple instances, the configuration file allows that too.
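The real configuration format is documented in the README; purely as an illustration, the keep/delete rules above boil down to a decision function like this (field names loosely follow the Mastodon API, this is not cleantoots' actual code):

```python
from datetime import datetime, timedelta, timezone

def should_delete(toot, *, max_age_days, min_boosts, min_favs,
                  protected_ids, protected_tags):
    """Return True if the toot matches none of the 'keep' rules and is old."""
    if toot["id"] in protected_ids:
        return False  # explicitly protected by ID
    if toot["reblogs_count"] >= min_boosts or toot["favourites_count"] >= min_favs:
        return False  # popular toots are kept
    if any(tag in protected_tags for tag in toot["tags"]):
        return False  # carries a protected hashtag
    age = datetime.now(timezone.utc) - toot["created_at"]
    return age > timedelta(days=max_age_days)

old_toot = {
    "id": "1", "reblogs_count": 0, "favourites_count": 2, "tags": [],
    "created_at": datetime.now(timezone.utc) - timedelta(days=90),
}
print(should_delete(old_toot, max_age_days=30, min_boosts=5, min_favs=5,
                    protected_ids=set(), protected_tags={"100DaysToOffload"}))
```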
I think the project is usable but it may be improved at least in two ways:
* Increase the test coverage
* Refactor the code to make it clearer and add some comments to help contributors
Please don't hesitate to open an [issue on the repository](https://git.augendre.info/gaugendre/cleantoots/issues) if you have any question, need help using it, find a bug or request a feature! 😀
I wrote this as part of [100DaysToOffload](https://100daystooffload.com/), this is the 8th post.
Keywords/tags:
#100DaysToOffload #cleantoots #

BIN
content/posts/codingame-spring-challenge-2020-pacman/2.jpg (Stored with Git LFS) Normal file

Binary file not shown.


@ -4,72 +4,72 @@ tags: ['challenge', 'codingame', 'programming']
date: 2020-05-21T19:13:49+00:00
aliases: ["/codingame-spring-challenge-2020-pacman"]
---
I participated in the [latest CodinGame challenge](https://www.codingame.com/multiplayer/bot-programming/spring-challenge-2020) and quickly got out of the beginner's league. In this article, I'll explain my approach.
#100DaysToOffload No. 7
[![Cover picture for CodinGame Spring Challenge 2020. There is one yellow Pacman and one blue Pacman fighting against each other](/attachments/2/processed/)](/attachments/2/original/)<!--more-->
CodinGame is a coding platform on which you can solve puzzles by writing a program in the language of your choice. They also often host contests in which your code fights against that of other developers. The more matches you win, the more points you get, and the more points you get, the higher you climb on the leaderboard. I like to participate in these challenges, so I signed up for the latest one: CodinGame Spring Challenge 2020.
In all CodinGame challenges, there is a league system. You start in the Wood 1 league with an easier version of the game and a subset of the rules, just to get you started. When your code is good enough, you get to Wood 2 where you have more rules and more mechanisms. After passing Wood 2, you end up in the Bronze league with the full version of the game and all the rules.
I didn't go further than Bronze league because I didn't want to spend much more time working on my bot but you can do that if you want 😊
## The rules
In this contest, each participant has a team of Pacmans and has to eat more pellets than the opponent. There are standard pellets worth 1 point and super pellets worth 10 points. At each turn, you're given the position of everything you can see (pellets, allies and enemies) and you have to give an instruction (telling your Pacs what to do). The game stops when there are not enough pellets left to change the outcome (one of the opponents has eaten more than half of the points available on the board) or after 200 turns.
League Wood 1:
* You control only one Pac
* You can see the whole map (everything's given to you at each turn)
* When colliding, the movement is canceled
League Wood 2:
* You control many Pacs (2-5)
League Bronze:
* Speed boost
* Each Pac has a type and you can change it in a game turn. When colliding, the Pacs fight. Each one can be one of rock, paper or scissors and the winner is chosen according to [the game](https://en.wikipedia.org/wiki/Rock_paper_scissors).
* You can only see what's in your Pacs line of sight. Everything else is in the fog. You can see super pellets from anywhere, though.
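The fight resolution follows the classic cycle, so picking a type that beats a visible enemy is a simple lookup (the type names here are assumptions; check the referee for the exact strings):

```python
# Each type is beaten by the next one in the rock-paper-scissors cycle.
COUNTER = {"ROCK": "PAPER", "PAPER": "SCISSORS", "SCISSORS": "ROCK"}

def winning_type(enemy_type: str) -> str:
    """Type to switch to in order to win a head-on collision."""
    return COUNTER[enemy_type]

print(winning_type("SCISSORS"))  # ROCK beats SCISSORS
```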
## Start small, submit early
As you can see from the rules, there is a big gap between the woods and bronze league. The fog is the main thing preventing you from optimizing your path since you don't know what's happening outside your lines of sight.
When talking with other participants, I noticed that some people were over-engineering their program right from the wood 1 league. They were talking about path-finding algorithms, path optimization and so on. This could have worked, had the ability to view the whole map not been taken away later.
My advice for this kind of challenge is to start with a **simple algorithm** that does uncomplicated things in a few lines of code. Just enough to get out of wood leagues. For example, the simplest algorithm I can think of for the first level would be: "go eat the nearest super pellet; if there aren't any, go eat the nearest standard pellet". **Submit early** and **iterate** on your code. Some things won't work but it's better to see that while running your algorithm than to spend two days working on it and discover later that you can't see the whole map anymore.
That's how I got out of wood 1. No optimization, no over-engineering, just "go eat". Of course, this was not optimal. Since I never changed my target pellet until it disappeared (because I ate it or because the opponent did), sometimes I ended up in an infinite collision because my opponent and I were too stubborn. But that allowed me to win most of my matches and beat the boss (which is how you leave wood leagues).
## Grow later
The wood leagues are here to make you comfortable with the contest rules.
When entering the Bronze league, you unlock all the rules of the game. In our case, this means that we only see in direct line of sight (we don't see behind walls or in diagonal), and that we have abilities we can use other than just moving towards a target: speed boost and change type.
These two abilities can really change the course of a match so you'd better take them into account.
Of course, not seeing the whole map means you have to keep a state between each turn to remember the position of the pellets you've already seen. That way, you can go back to them later if you don't see anything anymore.
My code for this challenge is available on [Gitea](https://git.augendre.info/gaugendre/codingame/src/branch/master/challenges/2020-spring.py). It's written in Python, feel free to check it and drop me an [email](/about-me/) or a message on [Mastodon](/about-me/) if you have any question! It's not written in a very maintainable way since its expected life span was that of the contest, so 11 days. I could've spent more time refactoring stuff but *in this context* this would have likely been wasted time.
Basically, here's what I do:
* Go eat the super pellets
* Use boost nearly whenever possible
* Change to the winning type if an enemy is nearby and chase it
* Go eat the nearest pellet (actually second nearest because I want to make the most use of the boost)
* When nothing in sight, go to a remembered pellet position or go discover new areas
The core method is `get_action` ([line 221](https://git.augendre.info/gaugendre/codingame/src/branch/master/challenges/2020-spring.py#L221)). It returns the action a given Pac should take for this turn. This method is run for every Pac at every turn.
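Stripped of all the details, the priority list above looks roughly like this (my own simplification, not the contest code; positions are `(x, y)` grid tuples):

```python
def dist(a, b):
    """Manhattan distance: a cheap, decent heuristic on a maze grid."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def get_action(pac, super_pellets, pellets, remembered):
    # Priority: super pellets, then visible pellets, then remembered ones.
    for targets in (super_pellets, pellets, remembered):
        if targets:
            return ("MOVE", min(targets, key=lambda p: dist(pac, p)))
    return ("EXPLORE", None)  # nothing known: go discover new areas

print(get_action((0, 0), [], [(3, 1), (1, 1)], [(9, 9)]))  # ('MOVE', (1, 1))
```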
My final position was 1758th/4103 total, or 121st/2466 in the Bronze league.
There were 118 in Legend, 535 in Gold, 984 in Silver, 2466 in Bronze and 904 in the woods league at the time of writing this article (a few days after the end of the contest).
A strategy I could have applied is to try to predict my opponent's movements. In a game where you can see the whole map, you can run your own algorithm on the opponent's units, predict what you would do in their position, and then use this knowledge to your advantage. With only line-of-sight information, though, I don't think that would have worked well in this case.
#programming #dev #coding #codingame #100DaysToOffload


@ -4,40 +4,40 @@ tags: ['Certificates', 'OpenSSL', 'Quick Note', 'TLS']
date: 2022-12-20T13:56:56.467033+00:00
aliases: ["/create-a-csr"]
---
Another quick note today: how to generate a CSR for a basic certificate supported by modern browsers (includes `Subject Alternative Name`).
## Config file
```toml
# example.conf
[req]
prompt = no
distinguished_name = dn
req_extensions = req_ext
[dn]
CN = example.com
O = Company Name
L = Lyon
C = FR
[req_ext]
subjectAltName = DNS: example.com, IP: 192.168.1.1
```
Of course, remember to adjust the settings according to the organization you're creating the CSR for:
* `[dn]` (distinguished name) section
* `subjectAltName` line (DNS and IP)
## Private key
```bash
openssl genrsa -out example.key 4096
```
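RSA 4096 is a safe default. As an aside (not part of the original steps), if your CA and clients support it, an EC key is much faster to generate and smaller:

```bash
# Alternative to the RSA key above: a P-256 EC private key.
openssl ecparam -name prime256v1 -genkey -noout -out example.key
```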
## CSR
```bash
openssl req -new -config example.conf -key example.key -out example.csr
```
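Putting it all together, here's the whole pipeline as a runnable script, with one extra sanity check at the end (`openssl req -verify`) that I'd suggest running before sending the CSR to your CA:

```bash
set -e
workdir="$(mktemp -d)"
cd "$workdir"

# Same config as above.
cat > example.conf <<'EOF'
[req]
prompt = no
distinguished_name = dn
req_extensions = req_ext

[dn]
CN = example.com
O = Company Name
L = Lyon
C = FR

[req_ext]
subjectAltName = DNS: example.com, IP: 192.168.1.1
EOF

openssl genrsa -out example.key 4096
openssl req -new -config example.conf -key example.key -out example.csr

# Check the self-signature and review the subject and SANs
# before sending example.csr off.
openssl req -in example.csr -noout -text -verify
```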
tags: ['3D printing', 'CAD', 'FreeCAD', 'Freebox', 'wall mounting']
date: 2020-08-13T21:38:37+00:00
aliases: ["/designing-and-3d-printing-a-wall-mount-for-my-isp-modem"]
---
I recently moved to a new apartment and now I have a little closet which contains the electrical board. It's also where the optical fiber comes in and so it's where I put my ISP-provided modem.
For a few weeks, it sat on the floor in this closet since I didn't have any room for it anywhere else. I wasn't satisfied, so I decided to design some small pieces to create a wall mount for the beast.
![Freebox Delta S Server](/attachments/7/processed/)
#100DaysToOffload No. 16<!--more-->
## 3D Computer Assisted Design
### Finding a software
I used various CAD software during my studies, mainly SolidWorks and SolidEdge, but these are both expensive and way overkill pieces of software. I had to find a free alternative (as in free beer), preferably free (as in [free software](https://en.wikipedia.org/wiki/Free_software)) and open source.
I first tried [FreeCAD](https://www.freecadweb.org/), which ticks all the boxes. It also has a rich community, great [documentation](https://wiki.freecadweb.org/) and tutorials. I ran into a couple of issues (visual glitches and a crash) which made me look for another CAD software.
Then I found Fusion 360. It's a product from Autodesk, famous for AutoCAD, a widely used CAD software. I didn't experience any glitches, but it required me to create an online account to download and use the software, the files were stored on their cloud rather than on my hard drive, and it felt slower than FreeCAD. All things considered, I uninstalled Fusion 360 and went back to FreeCAD.
## Designing the parts
My modem has a triangular shape with a screen on one edge and ports on the opposite angle. If you want to see what it looks like, search "freebox delta S" on your favorite search engine, it's the smallest of the two triangles you'll find 😉. I wanted to put it on my wall, ports facing down and screen facing up. So I designed two pieces to support the weight of the box and one to put at the top to prevent it from tumbling.
Here's what they look like:
![Bottom piece](/attachments/14/processed/)
![Top piece](/attachments/15/processed/)
The piece looking like a U will be printed twice and will support the modem on the bottom and the S shape will prevent it from tumbling. The holes with two diameters are there to let a screw pass and hide its head. The other holes on the opposite side of the U are there to make room for my screwdriver when mounting them.
You can download them here:
* [U shape (bottom)](/attachments/23/original/)
* [S shape (top)](/attachments/24/original/)
You can also find them [on Printables](https://www.printables.com/model/171644-support-mural-freebox-delta).
## Printing
I tried to find someone with a 3D printer among my colleagues and friends but no one was available. I also looked for 3D printing as a service but it was very expensive (between 20 and 30 euros for my 3 small pieces).
I finally settled on a workshop in a nearby city. I ruled it out at first because it was too far away to go there by bike but it was the only available option so I took my car and went there. It's managed by a non-profit association called [Fabrique d'Objets Libres](http://www.fablab-lyon.fr/) (*Free Objects Factory*, with free as in [free software](https://en.wikipedia.org/wiki/Free_software)). If you want to get access to the workshop, you need to become a member. After that you can access the machines during specific times where volunteers from the association come and open the workshop. They provide help using the machines and check if you're doing things safely.
I had a little 3D printing experience from a year ago but I had forgotten nearly everything. I knew you had to run your model through another piece of software called a *slicer*. It's tailored to a specific printer and will ultimately turn your model into a path for the print head to follow. For this, it creates *slices* (well-chosen name, huh?). Each slice is a layer of plastic put on top of the previous one, gradually leading to your final piece. The slicer is specific to the 3D printer model so I won't go into details here, but you can basically pick the pattern the head will follow and the filling density. All of these parameters affect the quantity of material used to print your work.
3D printing is a very long process. We added my three pieces to the printing job and the slicer predicted a printing time of five hours. I came back the day after to pick the result and it looked really nice! I just had some cleaning to do because the slicer added some plastic between my wall mount and the board and also to support the holes.
## Mounting
When I came back home, I tried putting the screw inside the holes I made for it during the design phase. It didn't work. I was sad. I took a little margin but it was not enough and 3D printing is not a very precise process. I had to enlarge the holes with a knife.
After that, mounting was fairly straightforward. We put the bottom pieces on the modem, used a paper and pen to mark the position of the holes and put it on the wall as a template before drilling. We put screw anchors in the wall, screwed the bottom pieces, slid the modem, drilled holes for the top piece, screwed it, and voila!
I was a bit worried about the strength of the bottom pieces but they seem to handle the load very well, thanks to the top piece which helps keep the weight close to the wall, where they're strongest.
Here's what it looks like once mounted and plugged:
![Large view of my closet with the modem mounted on the wall](/attachments/6/processed/)
![Close view of my closet with the modem mounted on the wall](/attachments/5/processed/)
I'm quite proud of this since it's one of my first "hardware" projects from start to finish and it looks quite gorgeous in that red plastic 😍
## Key takeaways
Here are some things I learned during this short adventure:
* Take the manufacturing process into account during the design.
* Different manufacturing processes will impose different constraints on your design.
* Take the assembling process into account during the design.
* If you need to screw something, then you'll likely need to use a screwdriver, which is basically a long piece of metal. Create room for your tools!
* 3D printing is not precise.
* Free software rules.
* CAD and 3D printing can help you cheaply create small custom things for your home.
Keywords/tags:
#3dprinting #workshop #hardware #home #cad #freecad
tags: ['Docker', 'cache', 'layer']
date: 2020-12-28T07:55:41.393608+00:00
aliases: ["/docker-images-layers-and-cache"]
---
In this post, we'll walk through Docker image layers and the caching around them from the point of view of a Docker user. I'll assume you're already familiar with Dockerfiles and Docker concepts in general.
![Docker logo](/attachments/32/processed/)
## ✌️ The two axioms of Docker layers
There are two key concepts to understand, from which everything else is deduced. Let's call them our axioms.
Axiom 1
: Every instruction in a Dockerfile results in a layer[^1]. Each layer is stacked onto the previous one and depends upon it.
Axiom 2
: Layers are cached and this cache is invalidated whenever the layer or its parent change. The cache is reused on subsequent builds.
[^1]: Well, that's not true anymore, see [Best practices for writing Dockerfiles: Minimize the number of layers (Docker docs)](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#minimize-the-number-of-layers). But since it's easier to understand this way, I'm willing to make this compromise for this article.
So, what happens when we build a small Docker image?
```Dockerfile
# Dockerfile
FROM ubuntu
WORKDIR /app
COPY somefile ./
RUN md5sum somefile > somefile.md5
```
```text
$ echo "helloworld" > somefile
$ docker build -t gabnotes-example .
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM ubuntu
latest: Pulling from library/ubuntu
da7391352a9b: Pull complete
14428a6d4bcd: Pull complete
2c2d948710f2: Pull complete
Digest: sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c
Status: Downloaded newer image for ubuntu:latest
---> f643c72bc252
Step 2/4 : WORKDIR /app
---> Running in 0d58fcc66d8d
Removing intermediate container 0d58fcc66d8d
---> 8637829f8e9b
Step 3/4 : COPY somefile ./
---> 5edc5d0aab9d
Step 4/4 : RUN md5sum somefile > somefile.md5
---> Running in 8c54bb3e4453
Removing intermediate container 8c54bb3e4453
---> c2d34241963a
Successfully built c2d34241963a
Successfully tagged gabnotes-example:latest
```
1. Docker first downloads our base image since it doesn't exist in the local registry.
2. It creates the `/app` directory. Subsequent commands will run inside this directory.
3. It copies the file from our local directory to the image.
4. It stores the MD5 hash of our file inside a file named `somefile.md5`.
Now if we try to build the image again, without changing anything, here's what happens:
```text
$ docker build -t gabnotes-example .
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM ubuntu
---> f643c72bc252
Step 2/4 : WORKDIR /app
---> Using cache
---> 8637829f8e9b
Step 3/4 : COPY somefile ./
---> Using cache
---> 5edc5d0aab9d
Step 4/4 : RUN md5sum somefile > somefile.md5
---> Using cache
---> c2d34241963a
Successfully built c2d34241963a
Successfully tagged gabnotes-example:latest
```
For every step, Docker says it's "using cache." Remember our axioms? Well, each step of our first build generated a layer which is cached locally and was reused for our second build.
## 🔄 Cache invalidation
We can get some information about the layers of our image using `docker history`:
```text
$ docker history gabnotes-example
IMAGE CREATED CREATED BY SIZE COMMENT
c2d34241963a 23 minutes ago /bin/sh -c md5sum somefile > somefile.md5 43B
5edc5d0aab9d 23 minutes ago /bin/sh -c #(nop) COPY file:b87a7968d4d0a6b7… 11B
8637829f8e9b 23 minutes ago /bin/sh -c #(nop) WORKDIR /app 0B
f643c72bc252 4 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 4 weeks ago /bin/sh -c mkdir -p /run/systemd && echo 'do… 7B
<missing> 4 weeks ago /bin/sh -c [ -z "$(apt-get indextargets)" ] 0B
<missing> 4 weeks ago /bin/sh -c set -xe && echo '#!/bin/sh' > /… 811B
<missing> 4 weeks ago /bin/sh -c #(nop) ADD file:4f15c4475fbafb3fe… 72.9MB
```
This output should be read as a stack: the first layer is at the bottom and the last layer of the image is at the top. This illustrates the dependencies between layers: if a "foundation" layer changes, Docker has to rebuild it and all the layers built on top of it.
It's natural: your layers 2 and 3 may depend on the output of layer 1, so they should be rebuilt when layer 1 changes.
In our example:
```Dockerfile
# Dockerfile
FROM ubuntu
WORKDIR /app
COPY somefile ./
RUN md5sum somefile > somefile.md5
```
* the `COPY` instruction depends on the previous layer because if the working directory were to change, we would need to change the location of the file.
* the `RUN` instruction must be replayed if the file changes or if the working directory changes because then the output file would be placed elsewhere. It also depends on the presence of the `md5sum` command, which exists in the `ubuntu` image but might not exist in another one.
So if we change the content of `somefile`, the `COPY` will be replayed as well as the `RUN`. If after that we change the `WORKDIR`, it will be replayed as well as the other two.[^docs]
[^docs]: Read more about how Docker detects when the cache should be invalidated: [Leverage build cache](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache)
Let's try this:
```text
$ echo "good bye world" > somefile
$ docker build -t gabnotes-example .
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM ubuntu
---> f643c72bc252
Step 2/4 : WORKDIR /app
---> Using cache
---> 8637829f8e9b
Step 3/4 : COPY somefile ./
---> ba3ed4869a32
Step 4/4 : RUN md5sum somefile > somefile.md5
---> Running in c66d26f47038
Removing intermediate container c66d26f47038
---> c10782060ad4
Successfully built c10782060ad4
Successfully tagged gabnotes-example:latest
```
See, Docker detected that our file had changed, so it ran the copy again as well as the `md5sum` but used the `WORKDIR` from the cache.
This mechanism is especially useful for builds that take time, like installing your app's dependencies.
## 🏃‍♂️ Speed up your builds
Let's consider another example:
```text
# requirements.txt
requests==2.25.1
```
```python
# main.py
import requests
res = requests.get("https://httpbin.org/get")
print(res.json())
```
```Dockerfile
# Dockerfile
FROM python:3.8.6-buster
WORKDIR /app
COPY . ./
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
```
Let's build this.
```text
$ docker build -t gabnotes-example2 .
Sending build context to Docker daemon 4.096kB
Step 1/5 : FROM python:3.8.6-buster
3.8.6-buster: Pulling from library/python
Digest: sha256:6a25504ef508896ab6682c1696b53ea7a5247b45ca1466e708717ce675831c03
Status: Downloaded newer image for python:3.8.6-buster
---> d1bfb3dd9268
Step 2/5 : WORKDIR /app
---> Running in b07bbed274c2
Removing intermediate container b07bbed274c2
---> 21cbb4d03bf2
Step 3/5 : COPY . ./
---> 0cf5413cb6a1
Step 4/5 : RUN pip install -r requirements.txt
---> Running in 50147c21a8fa
Collecting requests==2.25.1
Downloading requests-2.25.1-py2.py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB)
Collecting chardet<5,>=3.0.2
Downloading chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting idna<3,>=2.5
Downloading idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.2-py2.py3-none-any.whl (136 kB)
Installing collected packages: urllib3, idna, chardet, certifi, requests
Successfully installed certifi-2020.12.5 chardet-4.0.0 idna-2.10 requests-2.25.1 urllib3-1.26.2
Removing intermediate container 50147c21a8fa
---> 8dfa79cbad2a
Step 5/5 : CMD ["python", "main.py"]
---> Running in 75c230e0f09d
Removing intermediate container 75c230e0f09d
---> 5e39bbc5e639
Successfully built 5e39bbc5e639
Successfully tagged gabnotes-example2:latest
```
Running this image gives us:
```text
$ docker run gabnotes-example2
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.25.1', 'X-Amzn-Trace-Id': 'Root=1-5fe8b55a-57c890566cc87a0d342aff84'}, 'origin': '82.65.167.55', 'url': 'https://httpbin.org/get'}
```
That's ok but we'd prefer a nicer output. What about using `pprint`? Easy! We just need to edit our `main.py` and rebuild.
```python
# main.py
from pprint import pprint
import requests
res = requests.get("https://httpbin.org/get")
pprint(res.json())
```
```text
$ docker build -t gabnotes-example2 .
Sending build context to Docker daemon 4.096kB
Step 1/5 : FROM python:3.8.6-buster
---> d1bfb3dd9268
Step 2/5 : WORKDIR /app
---> Using cache
---> 21cbb4d03bf2
Step 3/5 : COPY . ./
---> e6da245ea865
Step 4/5 : RUN pip install -r requirements.txt
---> Running in ed461f60b4f4
Collecting requests==2.25.1
Downloading requests-2.25.1-py2.py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB)
Collecting chardet<5,>=3.0.2
Downloading chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting idna<3,>=2.5
Downloading idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.2-py2.py3-none-any.whl (136 kB)
Installing collected packages: urllib3, idna, chardet, certifi, requests
Successfully installed certifi-2020.12.5 chardet-4.0.0 idna-2.10 requests-2.25.1 urllib3-1.26.2
Removing intermediate container ed461f60b4f4
---> 7172609dd81e
Step 5/5 : CMD ["python", "main.py"]
---> Running in de0e3e5df424
Removing intermediate container de0e3e5df424
---> ff3202516475
Successfully built ff3202516475
Successfully tagged gabnotes-example2:latest
```
See? Because we chose to add all of our files in one command, whenever we modify our source code, Docker has to invalidate all the subsequent layers including the dependencies installation.
In order to speed up our builds locally, we may want to skip the dependency installation when the dependencies haven't changed. It's quite easy: add the `requirements.txt` first, install the dependencies, and then add our source code.
```Dockerfile
# Dockerfile
FROM python:3.8.6-buster
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY main.py ./
CMD ["python", "main.py"]
```
After a first successful build, changing the source code will not trigger the dependencies installation again. Dependencies will only be re-installed if:
1. You pull a newer version of `python:3.8.6-buster`
2. The `requirements.txt` file is modified
3. You change any instruction in the Dockerfile from the `FROM` to the `RUN pip install` (inclusive). For example, if you change the working directory, decide to copy another file along with the requirements, or change the base image.
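If you build with BuildKit (the default builder on recent Docker versions), you can soften point 2 with a cache mount. This goes beyond the original setup and is only a sketch: the layer is still rebuilt when `requirements.txt` changes, but pip reuses its download cache instead of fetching every wheel again.

```Dockerfile
# syntax=docker/dockerfile:1
FROM python:3.8.6-buster
WORKDIR /app
COPY requirements.txt ./
# The cache mount persists /root/.cache/pip across builds
# without ending up in any image layer.
RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt
COPY main.py ./
CMD ["python", "main.py"]
```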
## ⏬ Reduce your final image size
Now you may also want to keep your images small. Since an image's size is the sum of the sizes of its layers, if you create files in one layer and delete them in a subsequent layer, these files still count toward the total image size, even though they are not present in the final filesystem.
Let's consider a last example:
```Dockerfile
# Dockerfile
FROM ubuntu
WORKDIR /app
RUN fallocate -l 100M example
RUN md5sum example > example.md5
RUN rm example
```
Pop quiz! Given the following:
* The ubuntu image I'm using weighs 73MB
* The file created by `fallocate` is actually 104857600 bytes, or about 105MB
* The md5 sum file size is negligible
What will be the final size of the image?
1. 73MB
2. 105MB
3. 178MB
4. zzZZZzz... Sorry, you were saying?
Well, I'd like the answer to be 73MB but the image will actually weigh the full 178MB. Because we created the big file in its own layer, it counts toward the total image size even though it's deleted afterwards.
What we could have done instead, is combine the three `RUN` instructions into one, like so:
```Dockerfile
# Dockerfile
FROM ubuntu
WORKDIR /app
RUN fallocate -l 100M example \
&& md5sum example > example.md5 \
&& rm example
```
This Dockerfile produces a final image that looks exactly the same as the previous one but without the 105MB overweight. Of course, this has the downside of making you recreate the big file every time this layer is invalidated, which could be annoying if creating this file is a costly operation.
This pattern is often used in official base images that try to stay small wherever they can. For example, consider this snippet from the [`python:3.8.7-buster`](https://github.com/docker-library/python/blob/756285c50c055d06052dd5b6ac34ea965b499c15/3.8/buster/Dockerfile#L28,L37) image (MIT License):
```Dockerfile
RUN set -ex \
\
&& wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" \
&& wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEY" \
&& gpg --batch --verify python.tar.xz.asc python.tar.xz \
&& { command -v gpgconf > /dev/null && gpgconf --kill all || :; } \
&& rm -rf "$GNUPGHOME" python.tar.xz.asc \
&& mkdir -p /usr/src/python \
&& tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz \
&& rm python.tar.xz
```
See how `python.tar.xz` is downloaded and then deleted all in the same step? That's to prevent it from adding weight to the final image. It's quite useful! But don't overuse it or your Dockerfiles might become unreadable.
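A complementary size-reduction technique, which this post doesn't cover, is a multi-stage build: do the heavy work in one image, then copy only the results into a smaller final image. Here's a sketch based on the earlier Python example (image tags and paths are illustrative):

```Dockerfile
# Build stage: install dependencies into an isolated prefix.
FROM python:3.8.6-buster AS builder
WORKDIR /app
COPY requirements.txt ./
RUN pip install --prefix=/install -r requirements.txt

# Final stage: a slimmer base image, plus only what we need to run.
FROM python:3.8.6-slim-buster
COPY --from=builder /install /usr/local
WORKDIR /app
COPY main.py ./
CMD ["python", "main.py"]
```

Only the layers of the final stage count toward the image size; everything created in `builder` is left behind.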
## 🗒 Key takeaways
* Every instruction in a Dockerfile results in a layer[^1]. Each layer is stacked onto the previous one and depends upon it.
* Layers are cached and this cache is invalidated whenever the layer or its parent change. The cache is reused on subsequent builds.
* Use `docker history` to know more about your image's layers.
* Reduce your build duration by adding only the files you need when you need them. Push files that might change a lot to the bottom of your Dockerfile (dependencies installation example).
* Reduce your image size by combining multiple `RUN` instructions into one if you create files and delete them shortly after (big file deletion example).
Well that wraps it up for today! It was quite technical but I hope you learned something along the way 🙂
As always, please [contact me](/about-me#contact) if you have comments or questions!
## 📚 Further reading
* [About storage drivers (Docker docs)](https://docs.docker.com/storage/storagedriver/)
In this post, we'll walk through Docker image layers and the caching around them from the point of view of a Docker user. I'll assume you're already familiar with Dockerfiles and Docker concepts in general.
![Docker logo](32.png)
## ✌️ The two axioms of Docker layers
There are two key concepts to understand, from which everything else is deduced. Let's call them our axioms.
Axiom 1
: Every instruction in a Dockerfile results in a layer[^1]. Each layer is stacked onto the previous one and depends upon it.
Axiom 2
: Layers are cached and this cache is invalidated whenever the layer or its parent change. The cache is reused on subsequent builds.
[^1]: Well, that's not true anymore, see [Best practices for writing Dockerfiles: Minimize the number of layers (Docker docs)](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#minimize-the-number-of-layers). But since it's easier to understand this way, I'm willing to make this compromise for this article.
So, what happens when we build a small Docker image?
```Dockerfile
# Dockerfile
FROM ubuntu
WORKDIR /app
COPY somefile ./
RUN md5sum somefile > somefile.md5
```
```text
$ echo "helloworld" > somefile
$ docker build -t gabnotes-example .
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM ubuntu
latest: Pulling from library/ubuntu
da7391352a9b: Pull complete
14428a6d4bcd: Pull complete
2c2d948710f2: Pull complete
Digest: sha256:c95a8e48bf88e9849f3e0f723d9f49fa12c5a00cfc6e60d2bc99d87555295e4c
Status: Downloaded newer image for ubuntu:latest
---> f643c72bc252
Step 2/4 : WORKDIR /app
---> Running in 0d58fcc66d8d
Removing intermediate container 0d58fcc66d8d
---> 8637829f8e9b
Step 3/4 : COPY somefile ./
---> 5edc5d0aab9d
Step 4/4 : RUN md5sum somefile > somefile.md5
---> Running in 8c54bb3e4453
Removing intermediate container 8c54bb3e4453
---> c2d34241963a
Successfully built c2d34241963a
Successfully tagged gabnotes-example:latest
```
1. Docker first downloads our base image since it doesn't exist in the local registry.
2. It creates the `/app` directory. Subsequent commands will run inside this directory.
3. It copies the file from our local directory to the image.
4. It stores the MD5 hash of our file inside a file named `somefile.md5`.
Now if we try to build the image again, without changing anything, here's what happens:
```text
$ docker build -t gabnotes-example .
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM ubuntu
---> f643c72bc252
Step 2/4 : WORKDIR /app
---> Using cache
---> 8637829f8e9b
Step 3/4 : COPY somefile ./
---> Using cache
---> 5edc5d0aab9d
Step 4/4 : RUN md5sum somefile > somefile.md5
---> Using cache
---> c2d34241963a
Successfully built c2d34241963a
Successfully tagged gabnotes-example:latest
```
For every step, Docker says it's "using cache." Remember our axioms? Well, each step of our first build generated a layer which is cached locally and was reused for our second build.
## 🔄 Cache invalidation
We can get some information about the layers of our image using `docker history`:
```text
$ docker history gabnotes-example
IMAGE CREATED CREATED BY SIZE COMMENT
c2d34241963a 23 minutes ago /bin/sh -c md5sum somefile > somefile.md5 43B
5edc5d0aab9d 23 minutes ago /bin/sh -c #(nop) COPY file:b87a7968d4d0a6b7… 11B
8637829f8e9b 23 minutes ago /bin/sh -c #(nop) WORKDIR /app 0B
f643c72bc252 4 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 4 weeks ago /bin/sh -c mkdir -p /run/systemd && echo 'do… 7B
<missing> 4 weeks ago /bin/sh -c [ -z "$(apt-get indextargets)" ] 0B
<missing> 4 weeks ago /bin/sh -c set -xe && echo '#!/bin/sh' > /… 811B
<missing> 4 weeks ago /bin/sh -c #(nop) ADD file:4f15c4475fbafb3fe… 72.9MB
```
This output should be read as a stack: the first layer is at the bottom and the last layer of the image is at the top. This illustrates the dependencies between layers: if a "foundation" layer changes, Docker has to rebuild it and all the layers that were built upon.
It's natural: your layers 2 and 3 may depend on the output of the layer 1, so they should be rebuilt when layer 1 changes.
In our example:
```Dockerfile
# Dockerfile
FROM ubuntu
WORKDIR /app
COPY somefile ./
RUN md5sum somefile > somefile.md5
```
* the `COPY` instruction depends on the previous layer because if the working directory were to change, we would need to change the location of the file.
* the `RUN` instruction must be replayed if the file changes or if the working directory changes because then the output file would be placed elsewhere. It also depends on the presence of the `md5sum` command, which exists in the `ubuntu` image but might not exist in another one.
So if we change the content of `somefile`, the `COPY` will be replayed as well as the `RUN`. If after that we change the `WORKDIR`, it will be replayed as well as the other two.[^docs]
[^docs]: Read more about how Docker detects when the cache should be invalidated: [Leverage build cache](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache)
Let's try this:
```text
$ echo "good bye world" > somefile
$ docker build -t gabnotes-example .
Sending build context to Docker daemon 3.072kB
Step 1/4 : FROM ubuntu
---> f643c72bc252
Step 2/4 : WORKDIR /app
---> Using cache
---> 8637829f8e9b
Step 3/4 : COPY somefile ./
---> ba3ed4869a32
Step 4/4 : RUN md5sum somefile > somefile.md5
---> Running in c66d26f47038
Removing intermediate container c66d26f47038
---> c10782060ad4
Successfully built c10782060ad4
Successfully tagged gabnotes-example:latest
```
See? Docker detected that our file had changed, so it replayed the copy as well as the `md5sum`, but reused the `WORKDIR` layer from the cache.
This mechanism is especially useful for builds that take time, like installing your app's dependencies.
## 🏃‍♂️ Speed up your builds
Let's consider another example:
```text
# requirements.txt
requests==2.25.1
```
```python
# main.py
import requests
res = requests.get("https://httpbin.org/get")
print(res.json())
```
```Dockerfile
# Dockerfile
FROM python:3.8.6-buster
WORKDIR /app
COPY . ./
RUN pip install -r requirements.txt
CMD ["python", "main.py"]
```
Let's build this.
```text
$ docker build -t gabnotes-example2 .
Sending build context to Docker daemon 4.096kB
Step 1/5 : FROM python:3.8.6-buster
3.8.6-buster: Pulling from library/python
Digest: sha256:6a25504ef508896ab6682c1696b53ea7a5247b45ca1466e708717ce675831c03
Status: Downloaded newer image for python:3.8.6-buster
---> d1bfb3dd9268
Step 2/5 : WORKDIR /app
---> Running in b07bbed274c2
Removing intermediate container b07bbed274c2
---> 21cbb4d03bf2
Step 3/5 : COPY . ./
---> 0cf5413cb6a1
Step 4/5 : RUN pip install -r requirements.txt
---> Running in 50147c21a8fa
Collecting requests==2.25.1
Downloading requests-2.25.1-py2.py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB)
Collecting chardet<5,>=3.0.2
Downloading chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting idna<3,>=2.5
Downloading idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.2-py2.py3-none-any.whl (136 kB)
Installing collected packages: urllib3, idna, chardet, certifi, requests
Successfully installed certifi-2020.12.5 chardet-4.0.0 idna-2.10 requests-2.25.1 urllib3-1.26.2
Removing intermediate container 50147c21a8fa
---> 8dfa79cbad2a
Step 5/5 : CMD ["python", "main.py"]
---> Running in 75c230e0f09d
Removing intermediate container 75c230e0f09d
---> 5e39bbc5e639
Successfully built 5e39bbc5e639
Successfully tagged gabnotes-example2:latest
```
Running this image gives us:
```text
$ docker run gabnotes-example2
{'args': {}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.25.1', 'X-Amzn-Trace-Id': 'Root=1-5fe8b55a-57c890566cc87a0d342aff84'}, 'origin': '82.65.167.55', 'url': 'https://httpbin.org/get'}
```
That's okay, but we'd prefer a nicer output. What about using `pprint`? Easy! We just need to edit our `main.py` and rebuild.
```python
# main.py
from pprint import pprint
import requests
res = requests.get("https://httpbin.org/get")
pprint(res.json())
```
```text
$ docker build -t gabnotes-example2 .
Sending build context to Docker daemon 4.096kB
Step 1/5 : FROM python:3.8.6-buster
---> d1bfb3dd9268
Step 2/5 : WORKDIR /app
---> Using cache
---> 21cbb4d03bf2
Step 3/5 : COPY . ./
---> e6da245ea865
Step 4/5 : RUN pip install -r requirements.txt
---> Running in ed461f60b4f4
Collecting requests==2.25.1
Downloading requests-2.25.1-py2.py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
Downloading certifi-2020.12.5-py2.py3-none-any.whl (147 kB)
Collecting chardet<5,>=3.0.2
Downloading chardet-4.0.0-py2.py3-none-any.whl (178 kB)
Collecting idna<3,>=2.5
Downloading idna-2.10-py2.py3-none-any.whl (58 kB)
Collecting urllib3<1.27,>=1.21.1
Downloading urllib3-1.26.2-py2.py3-none-any.whl (136 kB)
Installing collected packages: urllib3, idna, chardet, certifi, requests
Successfully installed certifi-2020.12.5 chardet-4.0.0 idna-2.10 requests-2.25.1 urllib3-1.26.2
Removing intermediate container ed461f60b4f4
---> 7172609dd81e
Step 5/5 : CMD ["python", "main.py"]
---> Running in de0e3e5df424
Removing intermediate container de0e3e5df424
---> ff3202516475
Successfully built ff3202516475
Successfully tagged gabnotes-example2:latest
```
See? Because we chose to add all of our files in one command, whenever we modify our source code Docker has to invalidate all the subsequent layers, including the dependency installation.
In order to speed up our builds locally, we may want to skip the dependency installation when the dependencies haven't changed. It's quite easy: copy the `requirements.txt` first, install the dependencies, and only then add our source code.
```Dockerfile
# Dockerfile
FROM python:3.8.6-buster
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY main.py ./
CMD ["python", "main.py"]
```
After a first successful build, changing the source code will not trigger the dependency installation again. Dependencies will only be re-installed if:
1. You pull a newer version of `python:3.8.6-buster`
2. The `requirements.txt` file is modified
3. You change any instruction in the Dockerfile from the `FROM` up to and including the `RUN pip install`. For example, if you change the working directory, decide to copy another file along with the requirements, or switch the base image.
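Condition 2 works because, for `COPY` instructions, Docker's cache check also takes a checksum of the copied files' contents into account, not just the instruction text. A rough sketch of the idea (illustrative only, not Docker's real algorithm):

```python
import hashlib

def copy_cache_key(parent_key: str, instruction: str, file_contents: bytes) -> str:
    # The key covers the parent layer, the instruction text AND a
    # checksum of the copied file, so the same `COPY` line yields a
    # different key when the file's contents change.
    h = hashlib.sha256()
    h.update(parent_key.encode())
    h.update(instruction.encode())
    h.update(hashlib.sha256(file_contents).digest())
    return h.hexdigest()

unchanged = copy_cache_key("parent", "COPY requirements.txt ./", b"requests==2.25.1\n")
changed = copy_cache_key("parent", "COPY requirements.txt ./", b"requests==2.25.1\nflask\n")
# Same instruction, different file contents: different keys, cache miss.
```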
## ⏬ Reduce your final image size
Now you may also want to keep your images small. Since an image's size is the sum of the sizes of its layers, if you create files in one layer and delete them in a subsequent layer, those files still count toward the total image size, even though they are absent from the final filesystem.
Let's consider a last example:
```Dockerfile
# Dockerfile
FROM ubuntu
WORKDIR /app
RUN fallocate -l 100M example
RUN md5sum example > example.md5
RUN rm example
```
Pop quiz! Given the following:
* The ubuntu image I'm using weighs 73MB
* The file created by `fallocate` is actually 104857600 bytes, or about 105MB
* The md5 sum file size is negligible
What will be the final size of the image?
1. 73MB
2. 105MB
3. 178MB
4. zzZZZzz... Sorry, you were saying?
Well, I'd like the answer to be 73MB, but the image will actually weigh the full 178MB. Because we created the big file in its own layer, it counts toward the total image size even though it's deleted afterwards.
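As a quick sanity check, here's the arithmetic, using the rounded sizes from above:

```python
# An image's size is the sum of all its layer sizes; deleting a file
# in a later layer doesn't shrink the layer that created it.
layer_sizes_mb = {
    "FROM ubuntu (base image)": 73,
    "WORKDIR /app": 0,
    "RUN fallocate -l 100M example": 105,  # the big file lives here
    "RUN md5sum example > example.md5": 0,  # negligible
    "RUN rm example": 0,  # only records a deletion marker
}
total_mb = sum(layer_sizes_mb.values())  # 178
```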
What we could have done instead, is combine the three `RUN` instructions into one, like so:
```Dockerfile
# Dockerfile
FROM ubuntu
WORKDIR /app
RUN fallocate -l 100M example \
&& md5sum example > example.md5 \
&& rm example
```
This Dockerfile produces a final image that looks exactly the same as the previous one but without the extra 105MB. Of course, this has the downside of making you recreate the big file every time this layer is invalidated, which could be annoying if creating it is a costly operation.
This pattern is often used in official base images that try to stay small whenever they can. For example, consider this snippet from the [`python:3.8.7-buster`](https://github.com/docker-library/python/blob/756285c50c055d06052dd5b6ac34ea965b499c15/3.8/buster/Dockerfile#L28,L37) image (MIT License):
```Dockerfile
RUN set -ex \
\
&& wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" \
&& wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEY" \
&& gpg --batch --verify python.tar.xz.asc python.tar.xz \
&& { command -v gpgconf > /dev/null && gpgconf --kill all || :; } \
&& rm -rf "$GNUPGHOME" python.tar.xz.asc \
&& mkdir -p /usr/src/python \
&& tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz \
&& rm python.tar.xz
```
See how `python.tar.xz` is downloaded and then deleted, all in the same step? That's to prevent it from counting toward the final image size. It's quite useful! But don't overuse it or your Dockerfiles might become unreadable.
## 🗒 Key takeaways
* Every instruction in a Dockerfile results in a layer[^1]. Each layer is stacked onto the previous one and depends upon it.
* Layers are cached, and this cache is invalidated whenever the layer or its parent changes. The cache is reused on subsequent builds.
* Use `docker history` to know more about your image's layers.
* Reduce your build duration by adding only the files you need, when you need them. Move instructions involving files that change often towards the bottom of your Dockerfile (dependency installation example).
* Reduce your image size by combining multiple `RUN` instructions into one if you create files and delete them shortly after (big file deletion example).
Well that wraps it up for today! It was quite technical but I hope you learned something along the way 🙂
As always, please [contact me](/about-me#contact) if you have comments or questions!
## 📚 Further reading
* [About storage drivers (Docker docs)](https://docs.docker.com/storage/storagedriver/)
* [Best practices for writing Dockerfiles (Docker docs)](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
tags: []
date: 2021-01-03T18:08:52.170212+00:00
aliases: ["/example-should-never-be-published"]
---
This is a paragraph and should look like it. It is probably left-aligned, not justified. After all, we're on the web, not in a book.
[TOC]
# Heading 1
## Heading 2
### Heading 3
#### Heading 4
##### Heading 5
###### Heading 6
This is a [link](#).
Here's an abbreviation: HTML.
*[HTML]: Hyper Text Markup Language
Here's a button
[<button>Button</button>](#)
* **Bold text**
* *Italic text*
* <u>Underlined text</u>
* <mark>Highlighted text</mark>
* <code>Inline code</code>
* <kbd>Alt</kbd> + <kbd>F4</kbd>
!!! info "Info"
This is an info admonition.
!!! success "Success"
This is a success admonition.
!!! warning "Warning"
This is a warning admonition.
!!! danger "Danger"
This is a danger admonition.
```python hl_lines="1 3"
# main.py
def main():
print("Hello world")
if __name__ == "__main__":
main()
```
* Unordered
* list
* of items
Breaking paragraph
1. Ordered
2. list
3. of items
> *This quote was told by someone very famous.*
>
> \- Someone very famous
This should be an image:
![Image alt text](/attachments/32/processed/)
<details>
<summary>Spoiler alert!</summary>
<p>Some text. 🙂</p>
</details>
| Heading 1 | Heading 2 |
|-----------|-----------|
| Table item 1 | Table item 2 |
| Table item 1 | Table item 2 |
| Table item 1 | Table item 2 |
| Table item 1 | Table item 2 |
Now onto a somewhat real example:
Notice the second `FROM` instruction? It tells Docker to start again from a new image, like at the beginning of a build, except that it will have access to the last layers of all the previous stages.
Then, the `COPY --from` is used to retrieve the built binary from the first stage.
In this extreme case, the final image weighs nothing more than the binary itself since `scratch` is a special empty image with no operating system.
Link to another section: [link](#python_poetry)
## 🐍 Applying to Python & Poetry {: id="python_poetry" }
### Install the dependencies
Let's start with a basic Dockerfile with a single stage that will just install this blog's dependencies and run the project.[^blog]
[^blog]: The source code is available [on sourcehut](https://git.augendre.info/gaugendre/blog).
Basically a multi-stage build allows you to sequentially use multiple images in one Dockerfile and pass data between them.
This is especially useful for projects in statically compiled languages such as Go, in which the output is a completely standalone binary: you can use an image containing the Go toolchain to build your project and copy your binary to a barebones image to distribute it.
tags: []
date: 2020-05-14T10:18:31+00:00
aliases: ["/how-to-subscribe"]
---
This blog offers an [RSS](https://en.wikipedia.org/wiki/RSS) feed.
The feed is accessible by appending `/feed/` to the URL of the blog.
It can be used in an RSS feed aggregator or you can have posts delivered to your email inbox with IFTTT. [This applet](https://ifttt.com/applets/147561p-rss-feed-to-email) will regularly poll the RSS feed and send all new posts to you via email.
* Main RSS feed: `https://gabnotes.org/feed/`
tags: ['pipx', 'pyenv', 'python']
date: 2020-12-30T08:40:58.362871+00:00
aliases: ["/how-use-pipx-pyenv"]
---
## 👨🏻‍💻 TL;DR
In my case with `fish` I added this line to my `config.fish`:
```fish
set -gx PIPX_DEFAULT_PYTHON "$HOME/.pyenv/versions/3.8.5/bin/python"
```
It's roughly the bash equivalent for
```bash
export PIPX_DEFAULT_PYTHON="$HOME/.pyenv/versions/3.8.5/bin/python"
```
## 📖 Backstory
As a Python developer, my workflow often involves running multiple versions of Python on different projects. To help me in this task, I use [`pyenv`](https://github.com/pyenv/pyenv) which is a wonderful tool to easily install and manage multiple Python versions on your system.
I also have a Python version installed via [Homebrew][brew] for some formulae that require it. This version is the default for any script if pyenv doesn't specify any version.
[brew]: https://brew.sh/
Finally, I also use [`pipx`](https://pipxproject.github.io/pipx/) which allows me to install python packages in their own virtual environment without messing with my system installation and still have them ready for use on the command line.
My problem is that `pipx` will by default use the Python version provided by Homebrew to install the executables. As I keep my system up to date with Homebrew, the Python version often updates. When that happens, all of my `pipx` packages break and I have to reinstall them all. Granted, `pipx` provides a command to do that easily but I'd still like to avoid the operation.
Thankfully, as [documented](https://pipxproject.github.io/pipx/docs/)[^1], `pipx` supports an environment variable called `PIPX_DEFAULT_PYTHON`. You just need to point it to your preferred Python interpreter and be done with it!
[^1]: Also available when running `pipx --help`
!!! info "Update"
I've now [switched](/setup-rtx-with-direnv-for-python/) from `pyenv` to `rtx` but the concept remains the same: install python with `rtx`, then set `PIPX_DEFAULT_PYTHON`.
tags: ['YNAB', 'budgeting', 'four rules', 'personal finances']
date: 2020-06-25T16:05:51+00:00
aliases: ["/i-needed-a-budget-with-four-rules"]
---
In January 2020, I discovered a budget app called You Need A Budget ([YNAB](https://youneedabudget.com/)). I've been using it since then and I'm very happy with it, here's why.
![YNAB Logo](/attachments/20/original/){: width=640 }
#100DaysToOffload No. 13<!--more-->
At first, I didn't think I needed a budget. After all, I didn't have any "money problems". I could buy what I needed without having to pay attention to every euro and I could also afford some extras sometimes. But I didn't have any plan for my money. It just sat in my savings and checking accounts.
YNAB's four rules (that you can apply without using the software at all) brought more visibility and more clarity over what my money could do for me. They have a really nice page on their website that explains everything in detail, you can check it out [here](https://www.youneedabudget.com/the-four-rules/). Here's a quick summary of what these rules mean to me.
!!! info "Edit 2021-01-02"
I recently discovered a free software and open source alternative: [Firefly III](https://www.firefly-iii.org/). I haven't tried it though.
## YNAB's four rules
### Rule 1: Give Every Dollar a Job.
It's pretty simple. You sum up all the money you have in your accounts (and wallets) and you give each of these dollars/euros/yen/pound/yougetit a purpose. You may need to pay bills, to buy food, to pay for gas, ... These are all jobs you could give to your money. But you can't give a job to money you don't have yet. With YNAB, you don't budget for future income. You only budget what you have now.
All the other rules are variations of the first rule.
### Rule 2: Embrace Your True Expenses
Some expenses are not so regular. For example you might go to the doctor every now and then but you don't (hopefully) have an appointment every month. Or you may have a car and sometimes it needs to be repaired. Or you may want to buy presents for Christmas. These are what YNAB calls "true expenses". They are expenses that happen, but not on a monthly basis, so it's easy to forget about them and be "surprised" when November comes and you start buying 500€ of presents for everybody.
These expenses should also be taken into account, which is why you should set aside some money each month to pay for your Christmas presents, your car maintenance or your visits to the doctor.
### Rule 3: Roll With the Punches
Let's say you followed rules 1 and 2 and you carefully planned your month with $200 for groceries. But you forgot about your family that will arrive next Sunday and you have to cook for them. That's an unexpected expense and you may have to overspend on your groceries because you want to treat your family well. Life happens, and rule 3 is here to help. If you spend $220 this month on groceries, cover your overspending with leftover money from other categories! Maybe you didn't need to buy clothes this month, so you can take money from there and move it to your groceries.
### Rule 4: Age Your Money
Let's say that when you receive money, you put it in a box. You mark this box with the date on which you received the money. After a while, you end up with multiple boxes, each with some amount of money inside. When you spend money, you always take bills from the oldest box. You can then tell how old it is by looking at the date on the box. That's your *age of money*.
The older your money is, the better. If you spend money on June 25th and it's 30 days old, it means that you earned this money around May 25th, and that's great! It means you're breaking the "paycheck-to-paycheck" cycle, where you worry about how you'll pay your bills because you might not have enough money by the end of the month.
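For the curious, the box metaphor is essentially a first-in, first-out queue. Here's a tiny sketch of how an age of money could be computed (the dates and amounts are made up, and this is not YNAB's actual algorithm):

```python
from datetime import date

# Each "box" is (date received, amount left); spending always drains
# the oldest box first.
boxes = [(date(2020, 5, 25), 300), (date(2020, 6, 10), 500)]

def spend(boxes, amount, today):
    """Spend `amount` from the oldest boxes and return the age, in days,
    of the box the next expense would come from."""
    while amount > 0:
        received, left = boxes[0]
        take = min(left, amount)
        amount -= take
        if take == left:
            boxes.pop(0)  # box emptied, move on to the next oldest
        else:
            boxes[0] = (received, left - take)
    return (today - boxes[0][0]).days if boxes else 0

age = spend(boxes, 300, date(2020, 6, 25))  # empties the May box
```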
## What YNAB can help you with
* Plan for your wedding
* Plan for a move
* Plan for your vacations
* Pay off debt
* Grow an emergency fund
* Break the paycheck-to-paycheck cycle
* ...
They have a web-based app as well as native apps for iOS and Android. The web-based app is for when you have time to sit down and think about your budget - which you will need to do once in a while - while the mobile apps allow you to check your budget or enter transactions on the go.
## Support
I could talk for hours about YNAB and how I use it. It helped me plan my move to a new apartment during the lockdown while having enough money for works, a washing machine, a fridge, and still be able to pay our current bills.
Maybe the best part about YNAB is the support you can get from the team. They're all really great people and will try to help you with your issues when you contact them. Be it about the software itself or the way you use it, they'll surely have a piece of advice!
They also have group video sessions very regularly to help you get started or take advantage of the multiple features the software has to offer. I don't even think you need to be a customer to participate in these video chats but I'm not sure. And lastly, there is a very active [community](https://support.youneedabudget.com/category/community).
## Conclusion
I really encourage you to take a look at [YNAB](https://www.youneedabudget.com/). If you want to use their software, they offer a 34-day trial. If you're a student, you can get a whole year for free.
If you do try it, please [drop me a line](/about-me)! I'd be more than happy to know if you find it interesting and whether it works for you or not 😊
After the trial, the service is billed $84/year (that's $7/month). It may seem steep but you should really give it a try before stopping for the price. If at the end of your trial you still don't think it's the right tool for you, then you can still take the advice they offer, read their website/blog or ask questions to the community and apply it on your own budget!
I really don't have any incentive to push you towards them other than me being a happy customer 😀
Keywords/tags:
#ynab #budgeting #personalfinances
tags: ['blog', 'introduction']
date: 2020-05-12T10:39:41.193174+00:00
aliases: ["/im-starting-a-blog"]
---
I came across two posts recently that got me thinking:
* Kev Quirk's [*100 Days To Offload*](https://100daystooffload.com)
* Drew Devault's [*You should make a blog!*](https://drewdevault.com/make-a-blog)
That was it, I needed a place to write my thoughts too.
So here it is! This blog will be my little space on the internet where I can share whatever I want to. These posts are meant to stay, unlike my other social media accounts where I tend to automatically delete everything older than 30 days.
<!--more-->
## Content
I'll mostly write about tech stuff, my personal side projects (I'm a developer) but I won't shut the door to any other subjects like music, art, books, or anything else.
I'm making no promises regarding the length or frequency of the posts, though I'll try to stick to #100DaysToOffload for the moment. So you can expect more publications in the coming days.
## Technical details
I chose to use [WriteFreely](http://writefreely.org/), which is what powers [Write.as](https://write.as). It seemed to fit my need for simplicity: a pre-built binary, some static files, and you're good to go.
At first I considered writing on Write.as directly, but I would have had to pay for the premium version to use a custom domain, and I'm not afraid of a little system administration. A cheap instance at Scaleway is, well, cheaper than the monthly fee for Write.as. Bonus point: I can do other things with this server.
## Next topics
I already have a list of topics I'd like to share:
* My home setup ([Raspberry Pi](/raspberry-pi), [Synology NAS](/synology-nas-stream-and-backup))
* Home Assistant
* [Pi Hole](/pi-hole)
* [SSH & Yubikeys](/using-your-yubikey-with-ssh-and-achieving-multifactor-authentication)
* Some projects ([CodinGame](/codingame-spring-challenge-2020-pacman))
## Where to reach me
Up-to-date info should appear on the [About me](/about-me) page on this site.
Any feedback is highly appreciated!
This concludes my first post of #100DaysToOffload.
Keywords/tags:
#introduction #tech #mastodon #100DaysToOffload #HundredDaysToOffload
---
tags: ['Docker', 'multi-stage builds', 'poetry', 'python']
date: 2021-01-02T10:37:29.021773+00:00
aliases: ["/lighten-your-python-image-docker-multi-stage-builds"]
---
In previous posts we talked about [poetry][poetry] and [Docker images layers][docker-cache] and I promised I would write about Docker multi-stage builds, so here we go!
[poetry]: /poetry-python-dependencies-management/
[docker-cache]: /docker-images-layers-and-cache/
!!! info "Note"
I will explain the basics of Docker multi-stage builds required to understand the post but I won't repeat the documentation (see [further reading](#further-reading)).
## ⚙️ Multi-stage builds
Basically, a multi-stage build allows you to use multiple images sequentially in one Dockerfile and pass data between them.
This is especially useful for projects in statically compiled languages such as Go, in which the output is a completely standalone binary: you can use an image containing the Go toolchain to build your project and copy your binary to a barebones image to distribute it.
```go
package main
import "fmt"
func main() {
fmt.Println("Hello Gab!")
}
```
```Dockerfile
# Dockerfile
FROM golang:alpine as builder
RUN mkdir /build
ADD . /build/
WORKDIR /build
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags '-extldflags "-static"' -o main .
FROM scratch
COPY --from=builder /build/main /app/
WORKDIR /app
CMD ["./main"]
```
This example[^cloudreach] produces a working Docker image containing only the binary built from the project. It also perfectly illustrates the basics of multi-stage builds.
[^cloudreach]: Thanks to [Cloudreach](https://www.cloudreach.com/en/technical-blog/containerize-this-how-to-build-golang-dockerfiles/) for the example.
Notice the second `FROM` instruction? It tells Docker to start again from a new image, like at the beginning of a build, except that it will have access to the last layers of all the previous stages.
Then, the `COPY --from` is used to retrieve the built binary from the first stage.
In this extreme case, the final image weighs nothing more than the binary itself since `scratch` is a special empty image with no operating system.
## 🐍 Applying to Python & Poetry
### Install the dependencies
Let's start with a basic Dockerfile with a single stage that will just install this blog's dependencies and run the project.[^blog]
[^blog]: The source code is available [on Gitea](https://git.augendre.info/gaugendre/blog).
```Dockerfile
# Dockerfile
## Build venv
FROM python:3.8.6-buster
# Install poetry, see https://python-poetry.org/docs/#installation
ENV POETRY_VERSION=1.1.4
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
ENV PATH /root/.poetry/bin:$PATH
# Install dependencies
WORKDIR /app
RUN python -m venv /app/venv
COPY pyproject.toml poetry.lock ./
RUN . /app/venv/bin/activate && poetry install
ENV PATH /app/venv/bin:$PATH
# Add code
COPY . ./
HEALTHCHECK --start-period=30s CMD python -c "import requests; requests.get('http://localhost:8000', timeout=2)"
CMD ["gunicorn", "blog.wsgi", "-b 0.0.0.0:8000", "--log-file", "-"]
```
It's already not that bad! We are taking advantage of the [cache][docker-cache] by copying only the files that describe our dependencies before installing them, and the Dockerfile is easy to read.
Still, our final image's attack surface could be reduced: we're using a full Debian Buster with all the build tools included, and we have `poetry` installed in our image even though we don't need it at runtime.
We'll add another stage to this build. First, we will install poetry and the project's dependencies, and in a second stage we will copy the virtual environment and our source code.
### Multi-staged dependencies & code
```Dockerfile hl_lines="15 22 24"
# Dockerfile
## Build venv
FROM python:3.8.6-buster AS venv
ENV POETRY_VERSION=1.1.4
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
ENV PATH /root/.poetry/bin:$PATH
WORKDIR /app
COPY pyproject.toml poetry.lock ./
# The `--copies` option tells `venv` to copy libs and binaries
# instead of using links (which could break since we will
# extract the virtualenv from this image)
RUN python -m venv --copies /app/venv
RUN . /app/venv/bin/activate && poetry install
## Beginning of runtime image
# Remember to use the same python version
# and the same base distro as the venv image
FROM python:3.8.6-slim-buster as prod
COPY --from=venv /app/venv /app/venv/
ENV PATH /app/venv/bin:$PATH
WORKDIR /app
COPY . ./
HEALTHCHECK --start-period=30s CMD python -c "import requests; requests.get('http://localhost:8000', timeout=2)"
CMD ["gunicorn", "blog.wsgi", "-b 0.0.0.0:8000", "--log-file", "-"]
```
See? We didn't have to change much but our final image is already much slimmer!
Without accounting for what we install or add inside, the base `python:3.8.6-buster` weighs 882MB vs 113MB for the `slim` version. Of course it's at the expense of many tools such as build toolchains[^builds] but you probably don't need them in your production image.[^toolchain]
[^builds]: You often need these tools to install some python dependencies which require compiling. That's why I don't use the `slim` version to install my dependencies.
[^toolchain]: Except of course if your goal is to compile stuff on the go or provide a platform for people to build their code.
Your ops teams should be happier with these lighter images: a smaller attack surface, less code that can break, shorter transfer times, less disk space used... And our Dockerfile is still readable, so it should be easy to maintain.
### Final form
For this blog, I use a slightly modified version of what we just saw:
```Dockerfile hl_lines="15 17 21 27 33 34 40 41 42 44 45 46"
# Dockerfile
## Build venv
FROM python:3.8.6-buster AS venv
ENV POETRY_VERSION=1.1.4
RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
ENV PATH /root/.poetry/bin:$PATH
WORKDIR /app
COPY pyproject.toml poetry.lock ./
RUN python -m venv --copies /app/venv
# Allows me to tweak the dependency installation.
# See below.
ARG POETRY_OPTIONS
RUN . /app/venv/bin/activate \
&& poetry install $POETRY_OPTIONS
## Get git versions
FROM alpine/git:v2.26.2 AS git
ADD . /app
WORKDIR /app
# I use this file to provide the git commit
# in the footer without having git present
# in my production image
RUN git rev-parse HEAD | tee /version
## Beginning of runtime image
FROM python:3.8.6-slim-buster as prod
RUN echo "Europe/Paris" > /etc/timezone \
&& mkdir /db
COPY --from=venv /app/venv /app/venv/
ENV PATH /app/venv/bin:$PATH
WORKDIR /app
COPY manage.py LICENSE pyproject.toml ./
COPY docker ./docker/
COPY blog ./blog/
# These are the two folders that change the most.
COPY attachments ./attachments/
COPY articles ./articles/
COPY --from=git /version /app/.version
ENV SECRET_KEY "changeme"
ENV DEBUG "false"
ENV HOST ""
ENV DB_BASE_DIR "/db"
HEALTHCHECK --start-period=30s CMD python -c "import requests; requests.get('http://localhost:8000', timeout=2)"
CMD ["/app/docker/run.sh"]
```
There aren't many differences between this and the previous version, except for an added stage to retrieve the git commit hash and some tweaking when copying the code.
There is also the addition of the `POETRY_OPTIONS` build argument. It allows me to build the same Dockerfile with two different outputs: one with the development dependencies like `pytest` or `pre-commit` and the other without.
I use it like this:
```bash
# with pytest
docker build --pull --build-arg POETRY_OPTIONS="" -t blog-test .
# without pytest
docker build --pull --build-arg POETRY_OPTIONS="--no-dev" -t blog .
```
Again, this is in the spirit of minimizing the production image.
## 🗒 Closing thoughts
Docker multi-stage builds helped me reduce my image sizes and attack surface - sometimes by *a lot* - without compromising on features.
I hope that you enjoyed reading this article and that you found it interesting or helpful! Please feel free to [contact me](/about-me/) if you want to comment on the subject.
In a future post, I'll talk about reducing Docker images build time in a CI environment where the filesystem isn't guaranteed to stay between runs.
## 📚 Further reading
* [*Multi-Stage Builds* - Docker blog](https://www.docker.com/blog/multi-stage-builds/)
* [*Use multi-stage builds* - Docker documentation](https://docs.docker.com/develop/develop-images/multistage-build/)
---
tags: ['blog', 'self-hosting']
date: 2020-11-11T07:38:24.261889+00:00
aliases: ["/migration-complete"]
---
I just moved this blog to my self-hosted server at home. Hopefully nothing is broken but if you see something strange or missing, please [contact me](/about-me/)! 😃
I also hope we won't experience any major outages like power (both the server and the router are on UPS) or network. 🤔
I still have a few things to move before decommissioning my VPS but I'd say the hard work is done 👌
---
tags: ['devices', 'self-hosting']
date: 2021-12-28T09:33:00.205309+00:00
aliases: ["/my-setup-at-home-12-2021-update"]
---
In this post, I'm presenting my updated setup at home. The [previous version](/my-setup-at-home/) was from May 2020.
## 👨🏻‍💻 Desk
* MacBook Pro 15-inch 2017 (personal laptop)
* development side projects
* writing
* office stuff
* MacBook Pro 15-inch 2019 (work laptop)
* development, research
* meetings
* Two 27-inch QHD displays (Dell P2720DC)
* nice 3 monitors setup, including the laptop's display
* I regret not [going 4K](https://tonsky.me/blog/monitors/) since I basically spend my days reading text.
* Thunderbolt dock ([CalDigit TS3+](https://www.caldigit.com/ts3-plus/))
* Standing desk (from [Autonomous](https://www.autonomous.ai))
## 📱 Mobility
* iPhone 13 Pro
* reading (Miniflux, Wallabag)
* media consumption (YouTube)
* pictures
* iPad Air 3 & Apple Pencil
* reading (Miniflux, Wallabag)
* note-taking, annotating documents, sketching
* accessing my recipes (I use [Paprika](https://www.paprikaapp.com/))
* media consumption (Plex, YouTube)
* writing, sometimes
* Kobo Libra H2O (e-ink reader)
* reading (ePub books, Wallabag)
## 💾 Servers
* Raspberry Pi 4 model B, 8GB
* Home Assistant
* Raspberry Pi 3 model B+
* display for Home Assistant in the hall
* [Self-built server](/replacing-my-synology-nas-with-a-self-built-server/)
* proxmox with one VM (for the moment)
* host all my services in docker containers
## 🍿 Media
* Apple TV 4K
* media consumption (Plex, Netflix, YouTube)
* Apple Home hub (control HomeAssistant devices with Siri & Apple Home app)
* HomePod
* play music through AirPlay or with Siri
* answer Siri requests
## 🏡 Home
* a set of connected light bulbs, plugs and wall switches
* automate some things like "don't kill your eyes in the bathroom at night but see where you pee during the day"
As you can see, I own quite the collection of Apple devices. They're not perfect of course and not open or hackable by any contemporary standard - which is kind of a bummer for me - but they play nicely together and are easy to use for me and my SO.
They are also quite low-maintenance, which means I can spend time on more fun projects like automating my Christmas decorations or adding a Raspberry Pi with a touch screen in my hall to provide a display for Home Assistant.
---
tags: ['self-hosting']
date: 2020-05-13T11:32:31+00:00
aliases: ["/my-setup-at-home"]
---
This post will serve as an introduction to others that will follow. I will try to remember to update this publication to add a reference to the new ones but I can't promise I won't forget. Meanwhile, feel free to use the tags at the end to scroll through related articles.
Let's begin #100DaysToOffload publication no. 2, shall we?
<!--more-->
## What computer-like devices do I use and what do I do with them?
Let's try not to forget anything:
* personal laptop
* work laptop (currently home because of the pandemic, otherwise it usually stays at the office)
* smartphone
* tablet (iPad Air 3 with Apple Pencil)
* media consumption (Plex, YouTube)
* note-taking, annotating documents, sketching
* accessing my recipes (I use [Paprika](https://www.paprikaapp.com/))
* writing, sometimes
* [Raspberry Pi 3 model B+](/raspberry-pi)
* [Pi-Hole](/pi-hole)
* Home Assistant*
* Monitoring of the other things*
* [Synology NAS](/synology-nas-stream-and-backup) (DS216play)*
* Backups (personal laptop and Raspberry Pi)
* Plex
* "smart" TV
* Play Plex content
* "smart" speaker
* Play smartphone music or smart TV sound
* a set of connected light bulbs, plugs and wall switches*
* Automate some things like "don't kill your eyes in the bathroom at night but see where you pee during the day"
At this point, you should have a global view of the devices I own and use daily. This will be a valuable reference for later posts when I talk about how these devices and services integrate. I'll especially focus on items marked with an asterisk (&ast;).
This post feels slightly hollow, but I consider it necessary to properly introduce the next ones and give a bit of context. Hopefully, the following articles will be denser.
The next story will probably be about my Synology NAS and what I use it for.
## Related reading
* [How Many Devices Is Too Many?](https://kevq.uk/how-many-devices-is-too-many/) by Kev Quirk
Tags/keywords:
#tech #100DaysToOffload #HundredDaysToOffload #home
---
tags: ['Django', 'ITSF', 'python']
date: 2022-03-31T19:50:35.048724+00:00
aliases: ["/optimize-large-django-migrations"]
---
## 📖 Backstory
Today, while working on a project at [ITSF](https://itsf.io), I needed to add a new field to an existing model in a Django project. This field had to initially be computed from other values in the same model, so I couldn't use a constant default value for all the existing objects.
## 🧒🏻 First try
So I sat down, thought about it, and here's the migration I first came up with:
```{ .python .large }
def forwards(apps, schema_editor):
Model = apps.get_model('app', 'Model')
db_alias = schema_editor.connection.alias
instances = Model.objects.using(db_alias).all()
for instance in instances:
instance.new_field = compute_new_field(instance)
Model.objects.using(db_alias).bulk_update(instances, ["new_field"])
```
The `compute_new_field` function takes multiple other fields into account to produce the new value, it's not just a matter of repeating the value of an existing field (which would have been simpler, using [`F` expressions](https://docs.djangoproject.com/en/4.0/ref/models/expressions/#django.db.models.F) and `queryset.update`).
I was quite happy with this migration. I thought it solved my problem in a rather elegant way and involved only two database queries: one to fetch the initial queryset and a second to save the updates in the database.
## ✋🏻 Not so fast!
Thankfully, one of my colleagues brought me back to reality:
> *There are 252320 objects in this table.*
>
> \- A very smart coworker
And that's just the staging environment! In production we have a whopping 1.7 million of these, and the number is growing.
Our migrations run in an environment with limited CPU and RAM. Running this code would have loaded all 1.7M objects into memory, crashing our migration process and causing some head scratching to find out what had happened, plus the added stress of a failing production deploy. Hopefully we would have caught the issue in staging with its 250k objects, but that's not a certainty.
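A quick back-of-envelope check shows why this blows up. The per-instance size here is an assumption for illustration only; the real footprint depends on the model's fields:

```python
# Rough estimate; ~1 KB per loaded Django model instance is an
# assumed average, counting field values and Python object overhead.
objects = 1_700_000
bytes_per_instance = 1_000  # assumption
total_gb = objects * bytes_per_instance / 1e9
print(f"~{total_gb:.1f} GB just to hold the queryset in memory")  # ≈ 1.7 GB
```

On a container with a few hundred MB of RAM, that's a guaranteed crash.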
## 😈 Optimizing the queries
Fortunately, Django comes with batteries included and provides a pagination mechanism. It's mostly advertised to facilitate paginating list views, but the [`Paginator`](https://docs.djangoproject.com/en/4.0/ref/paginator/) class can be instantiated manually.
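The core idea can be sketched in plain Python (standalone, no Django required); `batched` below is a hypothetical stand-in for what `Paginator` gives you out of the box:

```python
def batched(items, per_page):
    """Yield successive slices of `items`, `per_page` elements at a time."""
    for start in range(0, len(items), per_page):
        yield items[start:start + per_page]

# Only one slice is materialized per iteration instead of all rows at once.
sizes = [len(page) for page in batched(list(range(2500)), per_page=1000)]
print(sizes)  # [1000, 1000, 500]
```

Django's `Paginator` adds niceties on top of this, such as an `orphans` argument that folds a too-small last page into the previous one.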
After re-engineering, here's the updated version which will obviously make many more DB queries but will hopefully not send our RAM to a black hole 😁
```{ .python .large hl_lines="5 6 7" }
def forwards(apps, schema_editor):
Model = apps.get_model('app', 'Model')
db_alias = schema_editor.connection.alias
instances = Model.objects.using(db_alias).all()
pages = Paginator(instances, per_page=1000, orphans=400)
for page in pages:
instances = page.object_list
for instance in instances:
instance.new_field = compute_new_field(instance)
Model.objects.using(db_alias).bulk_update(instances, ["new_field"])
```
I could spend time tweaking the page size but I know our migrations job can handle batches of 1000 objects, so I didn't put too much effort into that.
## 📝 Key takeaways
🤓 Choose your optimization metric wisely. We often want to reduce the number of DB queries, but as a matter of fact it's sometimes a good idea to *increase* it, if it means that your process actually finishes 😅
## 📖 Backstory
Today, while working on a project at [ITSF](https://itsf.io), I needed to add a new field to an existing model in a Django project. This field had to initially be computed from other values in the same model, so I couldn't use a constant default value for all the existing objects.
## 🧒🏻 First try
So I sat down, thought about it, and here's the migration I first came up with:
```{ .python .large }
def forwards(apps, schema_editor):
Model = apps.get_model('app', 'Model')
db_alias = schema_editor.connection.alias
instances = Model.objects.using(db_alias).all()
for instance in instances:
instance.new_field = compute_new_field(instance)
Model.objects.using(db_alias).bulk_update(instances, ["new_field"])
```
The `compute_new_field` function takes multiple other fields into account to produce the new value; it's not just a matter of repeating the value of an existing field (which would have been simpler, using [`F` expressions](https://docs.djangoproject.com/en/4.0/ref/models/expressions/#django.db.models.F) and `queryset.update`).
I was quite happy with this migration. I thought it solved my problem elegantly and only involved two database queries: one to fetch the initial queryset and one to save the updates to the database.
## ✋🏻 Not so fast!
Thankfully, one of my colleagues brought me back to reality:
> *There are 252320 objects in this table.*
>
> \- A very smart coworker
And I just realized that's only the staging environment! In production we have a whopping 1.7 million of these, and the number is growing.
Our migrations run in an environment with limited CPU and RAM. Running this code would have loaded all 1.7M objects into memory, which would have crashed our migration process, left us scratching our heads to figure out what had happened, and added the stress of a failing production deploy. Hopefully we would have caught the issue in staging with the 250k objects, but that's not a certainty.
## 😈 Optimizing the queries
Fortunately, Django comes with batteries included and provides a pagination mechanism. It's mostly advertised to facilitate paginating list views, but the [`Paginator`](https://docs.djangoproject.com/en/4.0/ref/paginator/) class can be instantiated manually.
After re-engineering, here's the updated version which will obviously make many more DB queries but will hopefully not send our RAM to a black hole 😁
```{ .python .large hl_lines="7 8 9" }
from django.core.paginator import Paginator

def forwards(apps, schema_editor):
Model = apps.get_model('app', 'Model')
db_alias = schema_editor.connection.alias
instances = Model.objects.using(db_alias).all()
pages = Paginator(instances, per_page=1000, orphans=400)
for page in pages:
instances = page.object_list
for instance in instances:
instance.new_field = compute_new_field(instance)
Model.objects.using(db_alias).bulk_update(instances, ["new_field"])
```
I could spend time tweaking the page size but I know our migrations job can handle batches of 1000 objects, so I didn't put too much effort into that.
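To see how `per_page` and `orphans` interact, here's a pure-Python sketch of the sizing logic (my own reimplementation for illustration, not Django's code; `page_sizes` and the counts are made up). The assumption: a last page holding at most `orphans` items gets merged into the previous page.

```python
import math

def page_sizes(count: int, per_page: int, orphans: int) -> list[int]:
    """Mimic Paginator's sizing: a last page holding at most
    `orphans` items is merged into the previous page."""
    num_pages = max(1, math.ceil((count - orphans) / per_page))
    sizes = [per_page] * (num_pages - 1)
    sizes.append(count - per_page * (num_pages - 1))
    return sizes

print(page_sizes(2300, 1000, 400))  # [1000, 1300] -- last 300 rows merged
print(page_sizes(2500, 1000, 400))  # [1000, 1000, 500] -- 500 > orphans, kept
```

With 1.7M rows the exact split barely matters, but it's reassuring to know the last batch can't be a pathological handful of rows.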
## 📝 Key takeaways
🤓 Choose your optimization metric wisely. We often want to reduce the number of DB queries, but as a matter of fact it's sometimes a good idea to *increase* it, if it means that your process actually finishes 😅
🎉 Django is awesome. Notice how little effort it took me! I only had to add three lines and reindent three other to paginate my queries and be confident that my migration will run and won't break our next deploy.



tags: ['DNS', 'Raspberry Pi', 'self-hosting']
date: 2020-05-28T15:54:13+00:00
aliases: ["/pi-hole"]
---
!!! info "👴 Old post"
I don't use Pi Hole anymore but this post should still be relevant. I've switched to [AdGuard](https://adguard.com/en/welcome.html). There is an easy to install Home Assistant addon available and it allows me to easily configure local-only DNS entries.
![Pi Hole logo](/attachments/11/original/)
{: .svg-container }
As I was telling you [before](https://gabnotes.org/raspberry-pi), I own a Raspberry Pi on which I set up a [Pi Hole](https://pi-hole.net/).
As they advertise it, Pi Hole is "a black hole for internet advertisement". It's basically an ad blocker for your whole network.<!--more-->
Again, I won't go into the details of how to set up your own Pi Hole; there are plenty of good resources for that, starting with Pi Hole's own website. Just remember to configure your DHCP server to advertise your Pi Hole's IP as the preferred DNS resolver, so that every device connected to your network automatically benefits from it.
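If your DHCP server happens to run dnsmasq, that hint can look like this (illustrative; `192.168.1.10` stands in for your Pi Hole's address, adjust to your network):

```plain
# dnsmasq.conf -- tell DHCP clients to use the Pi Hole for DNS
dhcp-option=option:dns-server,192.168.1.10
```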
I also won't explain how Pi Hole blocks your ads, but if you'd like me to, please [let me know](/about-me).
Today's quick post is about how great Pi Hole is, in a few bullet points:
* It's easy to set up.
* It's very low maintenance. I basically check for an update every now and then, but otherwise I pretty much let it live its own life.
* You can set it up in a privacy preserving way by choosing what statistics/logs you want to collect:
[![Privacy options in Pi Hole. There are five options ranging from "log every request with the client and the requested domain" to "no logs, no stats".](/attachments/10/processed/)](/attachments/10/original/)
* Did I mention it blocks ads? That works everywhere, and especially well coupled with other solutions such as [uBlock Origin](https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/), [Privacy Badger](https://addons.mozilla.org/en-US/firefox/addon/privacy-badger17/) or [Ghostery](https://addons.mozilla.org/en-US/firefox/addon/ghostery/) in your browser, but it also works in some of your smartphone apps 😁
* Since it caches DNS responses, it also improves your browsing speed a tiny bit and prevents your ISP or another DNS server from spying on you too much. And it reduces your footprint by sending less data over the internet. That's not why I primarily use it, but it's a nice little bonus.
That's about it for today! I encourage you to set up a Pi Hole on one of your computers and try it, it's quite nice! They rely on donations to sustain the development, so if you enjoy it, [consider donating](https://docs.pi-hole.net/#pi-hole-is-free-but-powered-by-your-support).
I wrote this as part of 100DaysToOffload; this is the 9th post.
Keywords/tags:
#pihole #raspberrypi #tech #home #100DaysToOffload


tags: ['dependencies', 'pip-tools', 'python']
date: 2023-03-02T13:07:26.381756+00:00
aliases: ["/pip-tools-for-python-dependencies-management"]
---
## 📖 Story time
At the end of 2020, I wrote an article entitled [Poetry for Python dependencies management](/poetry-python-dependencies-management). I described ITSF's requirements for a dependencies management system and how we found Poetry useful. After updating our internal projects, I tackled my side projects and moved them to Poetry.
The requirements from late 2020 were the following:
> * It must **manage transitive dependencies**[^transitive] for us. Developers should only have to specify the direct dependencies of their projects. Transitive dependencies and the exact versions to install must be defined by the tool.
> * Any **conflicts** in dependencies must prevent their installation and break the build.
> * Adding a dependency after the initial setup must check for **compatibility with existing direct and transitive dependencies**.
> * It must **fit into a Docker-based workflow** while still being **easy to use** in a development environment.
> * It must permit **reproducible builds**. We must be able to checkout the source code at a specific version and build the app as it was released this day with the exact same versions of all the packages.
> * It should have features for **building and publishing packages** on PyPI (or a private package registry).
> * Bonus point if it can export to the requirements.txt format for compatibility.
[^transitive]: A transitive dependency is the dependency of a dependency. For example, my app relies on the `requests` package for HTTP requests. This package relies on several others to do its job, like `certifi`, `chardet`, etc. These are transitive dependencies because my app doesn't rely directly on them.
At the time, we considered `pipenv`, `poetry` and `pip-tools`, and chose Poetry because it was the only one checking all the boxes.
Two years later, these requirements haven't changed but I now find Poetry to be too complex and moved all my active side projects to `pip-tools`.
`pip-tools` bundles two utilities: `pip-compile` generates a pinned `requirements.txt` from a source file, and `pip-sync` takes the generated file and syncs the virtual environment to the described state. It fits well in the UNIX philosophy of one tool doing one thing and doing it well.
It doesn't try to manage your virtual environments, it doesn't try to build and publish packages... It just manages your dependencies.
## 👨🏻‍🔧 My new workflow
I'm using two source requirements files:
* `requirements.in` for the production dependencies
* `requirements-dev.in` for the development dependencies
Then, I use `pip-compile` to generate three files:
* `requirements.txt` for the production dependencies, generated from `requirements.in`
* `constraints.txt` for the constraints the production dependencies must impose on development dependencies, generated from `requirements.in`[^constraints]
* `requirements-dev.txt` for the development dependencies, generated from `requirements-dev.in` and `constraints.txt`.
[^constraints]: I first used `requirements.txt` as the constraint file but I had some issues with extras. I now generate a separate `constraints.txt` file with `--strip-extras` to avoid this.
And finally `pip-sync` updates my virtualenv, adding missing packages and removing old ones.
### 📑 Samples
Here's what the source files look like for this blog:
```plain
# requirements.in
django[argon2]>=4.1,<5.0
django-cleanup>=6.0
django-environ>=0.9.0
# ...
```
```plain
# requirements-dev.in
-c constraints.txt
pre-commit>=2.7
pytest>=7.0
# ...
```
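For comparison, here's the shape of a compiled file (an illustrative excerpt; the versions are assumptions and the hashes are truncated placeholders — the real file pins whatever the resolver picked):

```plain
# requirements.txt (generated by pip-compile)
asgiref==3.6.0 \
    --hash=sha256:<hash>
    # via django
django[argon2]==4.1.7 \
    --hash=sha256:<hash>
    # via -r requirements.in
```

The `# via` annotations are handy for answering "why is this package even installed?" months later.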
### 🪄 Invoke
The commands used to compile the three files are:
```{ .shell .large }
pip-compile -q --allow-unsafe --resolver=backtracking --generate-hashes requirements.in
pip-compile -q --allow-unsafe --resolver=backtracking --strip-extras -o constraints.txt requirements.in
pip-compile -q --allow-unsafe --resolver=backtracking --generate-hashes requirements-dev.in
```
This is a lot to remember and I have a terrible memory, so I'm using [invoke](https://www.pyinvoke.org/) to call the commands for me.
```{ .python .large }
# tasks.py
from pathlib import Path
from invoke import Context, task
BASE_DIR = Path(__file__).parent.resolve(strict=True)
@task
def update_dependencies(ctx: Context, *, sync: bool = True) -> None:
return compile_dependencies(ctx, update=True, sync=sync)
@task
def compile_dependencies(
ctx: Context, *, update: bool = False, sync: bool = False
) -> None:
common_args = "-q --allow-unsafe --resolver=backtracking"
if update:
common_args += " --upgrade"
with ctx.cd(BASE_DIR):
ctx.run(
f"pip-compile {common_args} --generate-hashes requirements.in",
pty=True,
echo=True,
)
ctx.run(
f"pip-compile {common_args} --strip-extras -o constraints.txt requirements.in",
pty=True,
echo=True,
)
ctx.run(
f"pip-compile {common_args} --generate-hashes requirements-dev.in",
pty=True,
echo=True,
)
if sync:
sync_dependencies(ctx)
@task
def sync_dependencies(ctx: Context) -> None:
with ctx.cd(BASE_DIR):
ctx.run("pip-sync requirements.txt requirements-dev.txt", pty=True, echo=True)
```
Invoke is a sort of Makefile, but written in Python.
```shell
$ invoke --list  # shorter: inv -l
Available tasks:

  compile-dependencies
  sync-dependencies
  update-dependencies

$ inv compile-dependencies
# Runs the commands...
```
### 📦 Add/update dependencies
Now when I need to add a dependency, I first edit the relevant `*.in` file, then run `inv compile-dependencies` to compile all the files without updating the existing dependencies, and finally `inv sync-dependencies` to actually install them locally (or, shorter, `inv compile-dependencies -s`).
If I want to update my dependencies, a simple `inv update-dependencies` is all I need.
After that, I commit all the `*.in` and `*.txt` files so that my future self and other people can reproduce my build.
In other words:
```shell
echo "new_package>=1.2" >> requirements.in
inv compile-dependencies -s
git add *.in *.txt
git commit -m "Add new_package"
```
### 👀 Missing compilation?
In order to avoid forgetting to compile my dependencies, I added a few pre-commit hooks to my projects:
```{ .yaml .large }
repos:
- repo: https://github.com/jazzband/pip-tools
rev: 6.12.2
hooks:
- id: pip-compile
name: pip-compile requirements.txt
args: [-q, --allow-unsafe, --resolver=backtracking, --generate-hashes, requirements.in]
files: ^requirements\.(in|txt)$
- id: pip-compile
name: pip-compile constraints.txt
args: [-q, --allow-unsafe, --resolver=backtracking, --strip-extras, --output-file=constraints.txt, requirements.in]
files: ^requirements\.in|constraints\.txt$
- id: pip-compile
name: pip-compile requirements-dev.txt
args: [-q, --allow-unsafe, --resolver=backtracking, --generate-hashes, requirements-dev.in]
files: ^requirements-dev\.(in|txt)$
```
These will run the `pip-compile` commands whenever the source or compiled files have changed, to ensure the compiled files are up to date with the sources before committing. They won't update the dependencies though, as that's not desirable here.
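As a quick sanity check, the `files` pattern of the first hook can be exercised with Python's `re` module (a throwaway sketch; the file names are illustrative):

```python
import re

# `files` pattern from the first hook above.
pattern = re.compile(r"^requirements\.(in|txt)$")

for name in ("requirements.in", "requirements.txt", "requirements-dev.in"):
    print(name, "->", bool(pattern.fullmatch(name)))  # True, True, False
```

Note that `requirements-dev.in` doesn't match, which is exactly why the dev files get their own hook with their own pattern.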
## 🔀 Transition
Moving from Poetry to pip-tools was really easy since all the hard work was already done. I just took the dependencies listed in `pyproject.toml`, copied them over to the `*.in` files and compiled the `*.txt` files.
## 👨🏻‍💻 Developer experience
The beauty with all this is that I now have regular `requirements.txt` files that I can pass to `pip install -r`. It means potential collaborators don't need to worry about `pip-tools`. Or that I don't need yet another binary in my Docker images.
> *I can just rely on `pip`, and it's relaxing.*


tags: ['ITSF', 'dependencies', 'poetry', 'python']
date: 2020-12-30T20:44:05.549630+00:00
aliases: ["/poetry-python-dependencies-management"]
---
At [ITSF](https://itsf.io), teams using the same languages/frameworks regularly meet to share experience and decide on common guidelines. With the Python teams, we recently decided to change our dependencies management system to something more robust because we were not satisfied with `pip` alone.
## ✅ Requirements
Here's a list of the features we expected from our new tool:
* It must **manage transitive dependencies**[^transitive] for us. Developers should only have to specify the direct dependencies of their projects. Transitive dependencies and the exact versions to install must be defined by the tool.
* Any **conflicts** in dependencies must prevent their installation and break the build.
* Adding a dependency after the initial setup must check for **compatibility with existing direct and transitive dependencies**.
* It must **fit into a Docker-based workflow** while still being **easy to use** in a development environment.
* It must permit **reproducible builds**. We must be able to checkout the source code at a specific version and build the app as it was released this day with the exact same versions of all the packages.
* It should have features for **building and publishing packages** on PyPI (or in our case a private package registry).
* Bonus point if it can export to the requirements.txt format for compatibility.
[^transitive]: A transitive dependency is the dependency of a dependency. For example, my app relies on the `requests` package for HTTP requests. This package relies on several others to do its job, like `certifi`, `chardet`, etc. These are transitive dependencies because my app doesn't rely directly on them.
While `pip` can provide a semblance of build reproducibility and can easily be used in a Docker-based workflow, at the time we did our research (Sept. 2020) it did not fit the other use cases[^newpip].
[^newpip]: Since then, the team behind `pip` switched to a new dependency resolver. We did not research this further but it seems to solve the dependency conflicts issues. See [the blog post](https://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html) introducing the new pip resolver.
Our main contenders were:
* [poetry](https://python-poetry.org/)
* [pipenv](https://pipenv.pypa.io/en/latest/)
* [pip-tools](https://pypi.org/project/pip-tools/)
Among them, only poetry ticks all the boxes. pip-tools and pipenv don't have features for publishing packages, but based on our research they seemed to provide all the other features we required.
## ⚙️ Poetry
### Version constraints
![Poetry logo](/attachments/35/original/)
{: .svg-container }
Poetry lets you specify your direct dependencies in a standard file called `pyproject.toml`. You can either edit this file manually or use the `poetry` command line tool.
After specifying your dependencies, you need to `lock` them. The locking phase is a crucial part of the workflow because that's when poetry decides which version of each package it needs to install. It does that on its own and breaks if there are incompatible dependencies in your tree. It generates a file called `poetry.lock`, which is meant to be checked into your VCS so that other developers get this file when checking out the project.
When installing dependencies, poetry will read the lock file and install the locked versions of the packages.
This workflow allows you to specify your real dependencies in `pyproject.toml`, with non-strict version constraints like `django = "^3.1"`. This specific example specifies that we rely on `django`, at least in version `3.1` but we accept any upgrade up to version `4.0`. There are other version constraints you can use, they are documented [here](https://python-poetry.org/docs/versions/).
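For illustration, here's how a few constraint styles translate into version ranges (the packages and bounds shown are examples, not taken from a real project):

```toml
[tool.poetry.dependencies]
python = "^3.9"     # >=3.9.0, <4.0.0
django = "^3.1"     # >=3.1.0, <4.0.0 -- accepts 3.x upgrades, refuses 4.0
requests = "~2.25"  # >=2.25.0, <2.26.0 -- patch releases only
```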
When you add a new package to your dependencies list, poetry automatically checks for dependencies compatibility and breaks if there is a clash. Adding a new package doesn't update all your existing pinned dependencies.
### Using a private registry
Poetry makes it very easy to use a private registry to fetch packages that you may have built and distributed internally. Instead of having to create a configuration file in the virtualenv of every machine that needs to access said registry, you just add your registries to the `pyproject.toml`. Since this file is checked into your VCS, all of your developers and all of your build environments get the configuration they need out of the box.
```toml
[[tool.poetry.source]]
name = "pypi-mirror"
url = "https://devpi.example.com/root/pypi/+simple/"
default = true
[[tool.poetry.source]]
name = "internal"
url = "https://devpi.example.com/root/internal/+simple/"
secondary = true
```
### Publishing packages
Building and publishing a package can be done in a single command:
```bash
poetry publish --build -r internal -u username -p password
```
You don't need to go through a complicated configuration process: all the configuration is available in a committed file.
## 🔀 Transition
Transitioning to poetry is easy but requires some manual work if you want the full benefits. Indeed, you need to extract your direct dependencies from the `requirements.txt` you already have. If they are documented somewhere, you're in luck. If not, you need to spend some time extracting them properly.
To help me in this task, I used `pipdeptree`. After installing it in the virtualenv with all the dependencies, I ran the CLI tool. It renders the installed packages in a tree, like so:
```
model-bakery==1.2.1
- django [required: >=1.11.0<3.2, installed: 3.1.4]
- asgiref [required: >=3.2.10,<4, installed: 3.3.1]
- pytz [required: Any, installed: 2020.5]
- sqlparse [required: >=0.2.2, installed: 0.4.1]
pipdeptree==2.0.0
- pip [required: >=6.0.0, installed: 20.2.2]
pre-commit==2.9.3
- cfgv [required: >=2.0.0, installed: 3.2.0]
- identify [required: >=1.0.0, installed: 1.5.10]
- nodeenv [required: >=0.11.1, installed: 1.5.0]
- pyyaml [required: >=5.1, installed: 5.3.1]
- toml [required: Any, installed: 0.10.2]
- virtualenv [required: >=20.0.8, installed: 20.2.2]
- appdirs [required: >=1.4.3,<2, installed: 1.4.4]
- distlib [required: >=0.3.1,<1, installed: 0.3.1]
- filelock [required: >=3.0.0,<4, installed: 3.0.12]
- six [required: >=1.9.0,<2, installed: 1.15.0]
```
Unfortunately, it sometimes marks some packages as transitive dependencies when you really need them listed as direct dependencies. In my experience, it was often the case for `requests`, which other packages also rely upon. Therefore, you can't trust it blindly, hence the manual work.
## 🧑‍💻 Developer experience
I've been personally very satisfied with this transition to poetry on the projects I maintain. It was a bit of work to make the switch but so far I've only been enjoying benefits.
The setup in a Docker image is also quite straightforward if you accept having Poetry in your final image. If you prefer lightweight images, you can use multi-stage builds to install your dependencies in a first stage and retrieve the virtualenv containing only your project's dependencies in a later one. If you're interested, check out [this article](/lighten-your-python-image-docker-multi-stage-builds) I wrote on the subject!
## 🗒 Closing thoughts
Poetry is very **pleasant** to work with and we feel **safer** adding dependencies, knowing that there won't be any surprise conflict after the installation. We can also easily **build and publish** packages for internal use with the same tool, it's just a new command to learn.
At [ITSF](https://itsf.io), teams using the same languages/frameworks regularly meet to share experience and decide on common guidelines. With the Python teams, we recently decided to change our dependencies management system to something more robust because we were not satisfied with `pip` alone.
## ✅ Requirements
Here's a list of the features we expected from our new tool:
* It must **manage transitive dependencies**[^transitive] for us. Developers should only have to specify the direct dependencies of their projects. Transitive dependencies and the exact versions to install must be defined by the tool.
* Any **conflicts** in dependencies must prevent their installation and break the build.
* Adding a dependency after the initial setup must check for **compatibility with existing direct and transitive dependencies**.
* It must **fit into a Docker-based workflow** while still being **easy to use** in a development environment.
* It must permit **reproducible builds**. We must be able to checkout the source code at a specific version and build the app as it was released this day with the exact same versions of all the packages.
* It should have features for **building and publishing packages** on PyPI (or in our case a private package registry).
* Bonus point if it can export to the requirements.txt format for compatibility.
[^transitive]: A transitive dependency is the dependency of a dependency. For example, my app relies on the `requests` package for HTTP requests. This package relies on several others to do its job, like `certifi`, `chardet`, etc. These are transitive dependencies because my app doesn't rely directly on them.
While `pip` can provide a semblance of build reproducibility and can easily be used in a Docker-based workflow, at the time of our research (September 2020) it did not fit the other use cases[^newpip].
[^newpip]: Since then, the team behind `pip` switched to a new dependency resolver. We did not research this further but it seems to solve the dependency conflicts issues. See [the blog post](https://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html) introducing the new pip resolver.
Our main contenders were:
* [poetry](https://python-poetry.org/)
* [pipenv](https://pipenv.pypa.io/en/latest/)
* [pip-tools](https://pypi.org/project/pip-tools/)
Among them, only poetry ticks all the boxes. pip-tools and pipenv don't have features for publishing packages, but based on our research they seemed to provide all the other features we required.
## ⚙️ Poetry
### Version constraints
![Poetry logo](35.svg)
{: .svg-container }
Poetry lets you specify your direct dependencies in a standard file called `pyproject.toml`. You can either edit this file manually or use the `poetry` command line tool.
After specifying your dependencies, you need to `lock` them. The locking phase is a crucial part of the workflow because that's when poetry decides which version of each package it needs to install. It does that on its own and breaks if there are incompatible dependencies in your tree. It generates a file called `poetry.lock` which is meant to be checked in your VCS so that other developers get this file when checking out the project.
When installing dependencies, poetry will read the lock file and install the locked versions of the packages.
This workflow allows you to specify your real dependencies in `pyproject.toml`, with non-strict version constraints like `django = "^3.1"`. This example declares that we rely on `django`, at least in version `3.1`, and accept any upgrade up to (but excluding) version `4.0`. The other version constraints you can use are documented [here](https://python-poetry.org/docs/versions/).
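To make the caret constraint concrete, here is a small illustration of the specifier set that `^3.1` expands to. It uses the `packaging` library, not poetry itself, purely to demonstrate the range:

```python
# What poetry's caret constraint `django = "^3.1"` expands to,
# expressed with the `packaging` library (not poetry itself).
from packaging.specifiers import SpecifierSet

caret_3_1 = SpecifierSet(">=3.1,<4.0")  # equivalent of "^3.1"

print("3.1" in caret_3_1)    # the minimum version is accepted
print("3.2.5" in caret_3_1)  # any upgrade below 4.0 is accepted
print("4.0" in caret_3_1)    # a major version bump is rejected
```

The locking step then picks one concrete version inside that range and pins it in `poetry.lock`.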
When you add a new package to your dependencies list, poetry automatically checks for dependencies compatibility and breaks if there is a clash. Adding a new package doesn't update all your existing pinned dependencies.
### Using a private registry
Poetry makes it very easy to use a private registry to fetch packages that you may have built and distributed internally. Instead of creating a configuration file in the virtualenv of every machine that needs to access said registry, you just add your registries to `pyproject.toml`. Since this file is checked into your VCS, all of your developers and build environments get the configuration they need out of the box.
```toml
[[tool.poetry.source]]
name = "pypi-mirror"
url = "https://devpi.example.com/root/pypi/+simple/"
default = true
[[tool.poetry.source]]
name = "internal"
url = "https://devpi.example.com/root/internal/+simple/"
secondary = true
```
### Publishing packages
Building and publishing a package can be done in a single command:
```bash
poetry publish --build -r internal -u username -p password
```
You don't need to go through a complicated configuration process: all the configuration is available in a committed file.
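For reference, building a package also requires the usual metadata in `pyproject.toml`. A minimal sketch, where every name, version and email is illustrative:

```toml
[tool.poetry]
name = "my-internal-package"  # illustrative name
version = "0.1.0"
description = "Example package published on the internal registry"
authors = ["Jane Doe <jane@example.com>"]

[tool.poetry.dependencies]
python = "^3.9"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
```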
## 🔀 Transition
Transitioning to poetry is easy but requires some manual work if you want to get the full benefits: you need to extract your direct dependencies from the `requirements.txt` you already have. If they are documented somewhere, you're in luck. If not, you need to spend some time extracting them properly.
To help me in this task, I used `pipdeptree`. After installing it in the virtualenv with all the dependencies, I ran the CLI tool. It renders the installed packages in a tree, like so:
```
model-bakery==1.2.1
  - django [required: >=1.11.0,<3.2, installed: 3.1.4]
    - asgiref [required: >=3.2.10,<4, installed: 3.3.1]
    - pytz [required: Any, installed: 2020.5]
    - sqlparse [required: >=0.2.2, installed: 0.4.1]
pipdeptree==2.0.0
  - pip [required: >=6.0.0, installed: 20.2.2]
pre-commit==2.9.3
  - cfgv [required: >=2.0.0, installed: 3.2.0]
  - identify [required: >=1.0.0, installed: 1.5.10]
  - nodeenv [required: >=0.11.1, installed: 1.5.0]
  - pyyaml [required: >=5.1, installed: 5.3.1]
  - toml [required: Any, installed: 0.10.2]
  - virtualenv [required: >=20.0.8, installed: 20.2.2]
    - appdirs [required: >=1.4.3,<2, installed: 1.4.4]
    - distlib [required: >=0.3.1,<1, installed: 0.3.1]
    - filelock [required: >=3.0.0,<4, installed: 3.0.12]
    - six [required: >=1.9.0,<2, installed: 1.15.0]
```
Unfortunately, it sometimes marks some packages as transitive dependencies when you really need them listed as direct dependencies. In my experience, it was often the case for `requests`, which other packages also rely upon. Therefore, you can't trust it blindly, hence the manual work.
## 🧑‍💻 Developer experience
I've been personally very satisfied with this transition to poetry on the projects I maintain. It was a bit of work to make the switch but so far I've only been enjoying benefits.
The setup in a Docker image is also quite straightforward if you accept having poetry in your final image. If you prefer lightweight images, you can use multi-stage builds to install your dependencies in a first stage and retrieve the virtualenv, containing only your project's dependencies, in a later one. If you're interested, check out [this article](/lighten-your-python-image-docker-multi-stage-builds) I wrote on the subject!
## 🗒 Closing thoughts
Poetry is very **pleasant** to work with and we feel **safer** adding dependencies, knowing that there won't be any surprise conflict after the installation. We can also easily **build and publish** packages for internal use with the same tool; it's just one more command to learn.
Anyway, I hope you learned something in this post about our experience with poetry! As always, please [contact me](/about-me#contact) if you have comments or questions!
tags: ['Raspberry Pi', 'backup', 'monitoring', 'self-hosting']
date: 2020-05-16T14:43:38+00:00
aliases: ["/raspberry-pi"]
---
!!! info "👴 Old post"
This post has grown old and I now don't use my Raspberry Pi like this anymore.
I bought a Raspberry Pi 3 model B+ about a year ago. I had to check the order date because I was pretty sure I'd had it for longer, but I only got it at the end of March 2019.
Now feels like a good time to talk about my experience with the little guy. This will be my fifth #100DaysToOffload post.
[![Illustration of a Raspberry Pi 3 model B](/attachments/13/processed/)](/attachments/13/original/)<!--more-->
Illustration of a Raspberry Pi 3 model B by Jose.gil on Wikipedia, CC-BY-SA.
## The Raspberry Pi
According to [Wikipedia](https://en.wikipedia.org/wiki/Raspberry_Pi):
> The Raspberry Pi is a series of small single-board computers developed in the United Kingdom by the Raspberry Pi Foundation to promote teaching of basic computer science in schools and in developing countries.
So in other words, when you buy a Raspberry Pi, you get a full-fledged low-spec computer with which you can do basically anything you want, as long as it stays within its computing power. You can either attach a keyboard, a mouse and a screen and use it as a desktop computer; or you can just connect it to the network (wired or not) and use it as a small server. That's how I use it.
## Securing your connection
In order to use any computer without a mouse and a keyboard, you need to attach it to the network and have a way to access it remotely. Since your device will likely be connected to the internet, and since you might wish to access it when away from your home network, you'll also want to secure this access.
SSH is perfect for this. It allows you to remotely connect to any computer you're supposed to have access to. You can either use a password or a [public/private key pair](https://en.wikipedia.org/wiki/Public-key_cryptography). I won't go into details of how SSH and public-key cryptography work in this article but you can find some good resources on the internet, starting with the Wikipedia page I linked just above.
A good thing to do when enabling SSH on your machine is to change the port it listens on. By default, SSH expects you to connect on port 22. The bad guys know this and might try to force their way into your machine if you leave it unprotected, accessible from the internet, and listening on port 22. To change the SSH port on Debian, edit a file and restart the SSH daemon:
```{ .bash .large }
# Uncomment and change the Port directive, keeping a .bak copy of the config.
sed -i.bak "s/^#\?Port 22/Port 23574/" /etc/ssh/sshd_config
systemctl restart ssh
```
I also usually install `ufw` and restrict the ports I open on the machine. This prevents unexpected programs from receiving connections unless I authorize them.
```{ .bash .large }
# Let's not prevent ourselves from logging in before enabling the rules.
ufw allow 23574
ufw enable
```
Of course these aren't the only measures you should take to protect your machine from unwanted access. Disallowing root login, forbidding password login (both SSH config options), preventing all connections from outside of your LAN (ufw/firewall) are also recommended.
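For the record, the two `sshd_config` options mentioned above look like this (a sketch of entries in `/etc/ssh/sshd_config`; restart the SSH service after editing):

```{ .text .large }
PermitRootLogin no
PasswordAuthentication no
```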
## Services
I run a couple of services on my Pi:
* [Home Assistant](https://home-assistant.io/)
* [Pi Hole](/pi-hole)
* Monitoring (InfluxDB & Grafana)
I currently use the hosted version of [Miniflux](https://miniflux.app/) but I plan to migrate it to my Pi someday.
I will talk in more detail about all these services in later articles; for now I'll just focus on how I back this up.
## Backup scripts
Sooner or later, the need for a backup strategy arises when you start to run more than just a couple of scripts on your server. In my case, I realized this when I started to rely on Home Assistant for my home automation system.
As promised in [the article about my Synology NAS and the backups](/synology-nas-stream-and-backup), here is the script I use to regularly back up my Raspberry Pi's important files. This is the Home Assistant version; the other script is nearly identical except for the files included in the zip archive.
```{ .bash .large }
#!/bin/bash
set -e
LOCAL_BACKUP_FOLDER="/home/homeassistant/backup"
MOUNTED_BACKUP_FOLDER="/mnt/synology-backup/homeassistant"
LOG_FILE="/home/homeassistant/backup.log"
echo $(date -Iseconds) "Starting backup"
touch $LOG_FILE
chown homeassistant:homeassistant $LOG_FILE
mkdir -p $LOCAL_BACKUP_FOLDER
echo $(date -Iseconds) "Zipping files"
# zip quietly, recursively, and with max compression
zip -q -r -9 $LOCAL_BACKUP_FOLDER/"hass-$(date +'%Y-%m-%d-%H-%M-%S').zip" \
/home/homeassistant/.homeassistant \
-x /home/homeassistant/.homeassistant/home-assistant_v2.db
chown -R homeassistant:homeassistant $LOCAL_BACKUP_FOLDER
# cp could fail if the target directory is not mounted.
# In this case, we don't rm the files (see set -e at the top).
echo $(date -Iseconds) "Copying backups"
cp -Rn $LOCAL_BACKUP_FOLDER/* $MOUNTED_BACKUP_FOLDER/
echo $(date -Iseconds) "Deleting backups"
rm -rf $LOCAL_BACKUP_FOLDER
echo $(date -Iseconds) "Backup finished"
```
This script:
1. Zips everything we want to back up with maximum compression, using the date in the filename so the archive is easy to identify when a restore is needed.
2. Copies the files to a folder where the backup volume of my NAS is mounted.
3. Removes the local zip files if everything went smoothly.
For this simple script to work, I had to mount a volume of my NAS on the Raspberry Pi. This is done by adding a new line to the [`/etc/fstab`](https://en.wikipedia.org/wiki/Fstab) file:
```{ .text .large }
//<ip_address>/backup /mnt/synology-backup cifs username=<username>,password=<password> 0 0
```
This mounts the `/backup` volume of my NAS to the `/mnt/synology-backup` folder on my Raspberry Pi, so that when I write files to the folder on the Pi, the contents are actually written to my NAS.
And the last piece of the puzzle: to run the script periodically I had to edit a [`crontab`](https://en.wikipedia.org/wiki/Cron). `cron` is a Linux program designed to run periodic tasks. It's based on a file called a `crontab`, which tells it what to run and when. Each user has their own `crontab`. I decided to use root's to run my scripts so that I wouldn't run into permission issues:
```{ .text .large }
30 3 * * * /home/homeassistant/backup.sh >> /home/homeassistant/backup.log
```
It may seem cryptic at first but it's quite simple: it runs my script daily at 03:30, redirecting all echoes and other output to the `backup.log` file. That allows me to check what happened during the backup if I notice something odd.
With this setup, I have a snapshot of my most important files taken every day and synced up to my NAS. I'll write another script later to automate the cleaning of old backups to prevent infinite growth of the backup size. But it's not much of a concern for the moment since a month of backups weighs only 1.7 GB.
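As a sketch of what that cleanup script could look like (assuming the same folder layout as above), a single `find` invocation is enough to delete archives older than 30 days:

```{ .bash .large }
#!/bin/bash
set -e

# Delete zip archives last modified more than 30 days ago.
# The default folder matches the mount point used in the backup script.
cleanup_backups() {
    local folder="${1:-/mnt/synology-backup/homeassistant}"
    find "$folder" -name '*.zip' -mtime +30 -delete
}
```

Scheduled from the same `crontab`, this would keep the backup size bounded without any manual intervention.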
Keywords/tags:
#tech #home #backup #raspberrypi #100DaysToOffload #HundredDaysToOffload
tags: ['Django', 'Javascript', 'frontend', 'programming', 'python']
date: 2022-09-26T11:49:45.812120+00:00
aliases: ["/reduce-your-page-load-time-with-htmx-in-less-than-an-hour"]
---
## Presentation
During DjangoCon EU 2022, a talk reminded me of [htmx](https://htmx.org/). As the authors put it:
> Htmx is a library that allows you to access modern browser features directly from HTML, rather than using javascript.
>
> \- [htmx docs](https://htmx.org/docs/#introduction)
Since I really don't like javascript that much, this promise is very tempting to me&nbsp;😁
## Context
I'm working on a small cash register web app to help during events and reduce the risk of human error when counting products. If you're curious, you can find it [here](https://git.augendre.info/gaugendre/checkout).
This app also has a reporting section, where I'm generating graphs using matplotlib. Some graphs are heavy to produce, but they're only a portion of the full page. In the screenshot below, I've highlighted them:
[![Checkout reports](/attachments/42/processed/)](/attachments/42/original/)
Initially, the page took about 6 to 7 seconds to fully load. Before that, nothing was displayed on screen.
| | Before htmx | After htmx |
|---|-------------|------------|
|[LCP](https://web.dev/lcp/)| 6 seconds | ? |
Here's what the template looked like:
```jinja hl_lines="32 36 37 38"
{% extends "common/base.html" %}
{% load static %}
{% load i18n %}
{% load purchase %}
{% block extrahead %}
<link rel="stylesheet" href="{% static "purchase/css/reports.css" %}">
{% endblock %}
{% block content %}
<h1>{% translate "Reports" %}</h1>
<h2>{% translate "General" %}</h2>
<ul>
<li>{% translate "Total turnover:" %} {{ turnover|currency }}</li>
<li>{% translate "Average basket:" %} {{ average_basket|currency }}</li>
</ul>
<h3>{% translate "By day" %}</h3>
<h4>{% translate "Turnover" %}</h4>
<ul>
{% for date, turnover in turnover_by_day.items %}
<li>{{ date }} : {{ turnover|currency }}</li>
{% endfor %}
</ul>
<h4>{% translate "Average basket" %}</h4>
<ul>
{% for date, average in average_basket_by_day.items %}
<li>{{ date }} : {{ average|currency }}</li>
{% endfor %}
</ul>
{{ by_hour_plot|safe }}
<h2>{% translate "Products" %}</h2>
{% include "purchase/snippets/report_products.html" %}
{{ products_plot|safe }}
{{ products_sold_pie|safe }}
{{ products_turnover_pie|safe }}
<h2>{% translate "Turnover by payment method" %}</h2>
{% include "purchase/snippets/report_payment_methods.html" %}
<h2>{% translate "Baskets without payment method" %}</h2>
{% include "purchase/snippets/report_no_payment_method.html" %}
{% endblock %}
```
## Implementation
Here's what the template looks like with htmx. I've removed the non-relevant parts for brevity.
```jinja hl_lines="2 8 12 17 18 19 20"
{% extends "common/base.html" %}
{% load static i18n purchase django_htmx %}
{# ... #}
{% block content %}
{# ... #}
{% include "purchase/snippets/htmx_plot.html" with url='purchase:by_hour_plot' %}
<h2>{% translate "Products" %}</h2>
{% include "purchase/snippets/report_products.html" %}
{% include "purchase/snippets/htmx_plot.html" with url='purchase:products_plots' %}
{# ... #}
{% endblock %}
{% block extrascript %}
<script src="{% static 'vendor/htmx-1.8.0/htmx.min.js' %}" defer></script>
{% django_htmx_script %}
{% endblock %}
```
```jinja
{% load static %}
<div hx-get="{% url url %}"
hx-trigger="load"
hx-swap="outerHTML"
>
<img class="htmx-indicator" src="{% static 'purchase/spinner.gif' %}" alt="Spinner">
</div>
```
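Concretely, once Django renders that snippet, the browser receives plain HTML along these lines (the URL and static path are illustrative). On the `load` trigger, htmx issues a GET request to the `hx-get` URL and, because of `hx-swap="outerHTML"`, replaces the entire `<div>` (spinner included) with the returned fragment:

```html
<!-- Initial server-rendered markup (illustrative URL and paths) -->
<div hx-get="/reports/by-hour-plot/"
     hx-trigger="load"
     hx-swap="outerHTML">
  <img class="htmx-indicator" src="/static/purchase/spinner.gif" alt="Spinner">
</div>
```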
!!! info "django-htmx"
The Django htmx part is not mandatory. It's from [django-htmx](https://django-htmx.readthedocs.io/en/latest/index.html), by [Adam Johnson](https://adamj.eu/). It provides a nicer integration between Django and htmx. I encourage you to check out Adam's works, you'll most likely learn a thing or two.
I moved the graph generation part from my main view to two separate views which are called after the DOM is loaded. I didn't have to do much: since I was already rendering everything server-side I only had to render a partial template instead of a complete page.
## Results
The page now renders very fast with all the text reports, and spinners are displayed while the graphs are loaded asynchronously.
It still takes 5-6 seconds for the graphs to load, but the user experience is much nicer since the LCP happens during the first second instead of having to wait for the graphs to load.
| | Before htmx | After htmx |
|---|-------------|------------|
|LCP| 6 seconds | 0.17s |
So performance-wise and UX-wise, I consider it a complete win. I'd also like to point out that it only took me around 30 to 45 minutes to implement this, without prior working knowledge of the library! Another win for free software 🎉
I have a few ideas where this could be useful on projects at ITSF, especially one where we implemented a very similar logic with custom code.
tags: ['commons', 'free software', 'licensing']
date: 2022-01-02T15:40:02.921181+00:00
aliases: ["/releasing-my-works-into-the-public-domain"]
---
## Context
I recently read an [article by Jake Bauer](https://www.paritybit.ca/blog/free-software-is-an-abject-failure) that changed my mind about Free Software and the GNU General Public Licenses.
First, I'd like to point out that, like Jake, I used to be a firm believer in the benefits of Free Software, although I've always found Richard Stallman too extreme. All my software projects were licensed under either the [GPL-3.0-or-later](https://spdx.org/licenses/GPL-3.0-or-later.html) or the [MIT](https://spdx.org/licenses/MIT.html) licenses.
While the latter is straightforward and easy to read, I never took the time to read the former in its entirety. I also know that I will never take the time to enforce either of these licenses, because I don't want to spend time or money on legal matters.
After much thinking, I realized that what I cared about the most were the public commons. That's what I want to contribute to. If my work is shared without my name being kept, well that's OK! Maybe it will help somebody and that's all I want. And maybe someone will reuse parts of some code I wrote in a proprietary commercial platform. That's part of the game.
## Public domain & freedoms
That's why, as of January 1<sup>st</sup> 2022, **all my [software projects](https://git.augendre.info/gaugendre) and the articles in this blog** have been released into the **public domain**. The code was released under [The Unlicense](https://spdx.org/licenses/Unlicense.html)[^unlicense] and the articles under the [CC0 1.0](https://spdx.org/licenses/CC0-1.0.html).
[^unlicense]: More info about The Unlicense on the [dedicated website](https://unlicense.org/).
This gives me the peace of mind that my works will always be free and easy to use by anyone, from the solo developer to a big tech company. I don't want to care or be stressed about any of my publications being reused incompatibly with their license.
I still believe that users should be treated as first-party when building software. I still believe in [the four essential freedoms](https://www.gnu.org/philosophy/free-sw.en.html#four-freedoms) listed by the GNU project:
> * The freedom to run the program as you wish, for any purpose.
> * The freedom to study how the program works, and change it so it does your computing as you wish. Access to the source code is a precondition for this.
> * The freedom to redistribute copies so you can help others.
> * The freedom to distribute copies of your modified versions to others. By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
>
> \- *The Free Software Foundation*
And to me, releasing my works into the public domain is the easiest and most comprehensible way of giving these freedoms to my users. Of course this doesn't ensure that all derivatives will be released under the same terms, but frankly, I don't think I care.
## Reusing my work
If you ever want to reuse my work, I kindly ask you to link back to the original and mention my name. However, I gave up the right of enforcing this with the licenses I chose, so you're free to completely ignore this.
Finally, I don't restrict myself on using different licenses in the future if I think there's a better option for some project.

---
tags: ['FreeBSD', 'FreeNAS', 'NAS', 'TrueNAS', 'self-hosting', 'server']
date: 2020-11-11T12:31:38.826817+00:00
aliases: ["/replacing-my-synology-nas-with-a-self-built-server"]
---
!!! info "👴 Old post"
I don't use TrueNAS anymore but I still have this machine and I even upgraded the RAM to a more comfortable 56GB.
A few weeks ago, I replaced my trusty [Synology DS216play](/synology-nas-stream-and-backup/) with a more powerful station that I built myself. I hadn't built a computer in a long time and it was a lot of fun!
![TrueNAS Logo](/attachments/26/original/)<!--more-->
## Build time!
I chose to go team red this time because I had never used an AMD processor. I picked a Ryzen 5 3400G, which should be more than enough for the next few years. A quick rundown of the specs:
* 4 cores / 8 threads AMD CPU
* 8GB DDR4 RAM
* 550W power supply
* a motherboard, case and a fan
Bringing us to a total of 440€. I decided to add another 8GB of RAM afterwards; the final price was around 475€.
I'm not 100% happy with the case. First, it's way bigger than the Synology enclosure was, though it still fits in the furniture under my TV after removing the front panel. Second, it can only hold two 3.5" HDDs. I only have two at this time, but it may prevent me from easily expanding my storage later. I'll see; for the moment it's enough.
## Backup
Before unplugging the Synology NAS, I picked an external USB HDD and backed up what I wanted to transfer. I didn't have much data on my 2x2TB drives so it didn't take long.
## OS Installation & configuration
Following the recommendations of a colleague, I decided to go with [FreeNAS](https://www.truenas.com/) (which was recently renamed TrueNAS, I will use both in this article). The installation is straightforward with very few options. I used a spare USB key for the OS boot drive since TrueNAS prevented me from using the storage disks for that.
Once the installation was complete, I could reboot, unplug the keyboard and monitor, and use another computer to access the web interface through which FreeNAS is configured and managed. A few settings later, I had a working home server! I just had to dump the external drive to the newly created [ZFS](https://en.wikipedia.org/wiki/ZFS) pool which would hold my data, set up some shares and get going!
Overall the complete setup including building the PC and backing up data took me something like 3-4 hours. The data restoration was done overnight.
Day-to-day management is also quite easy: after setting up some automated tasks like ZFS snapshots, you pretty much don't have anything left to do.
I experienced some issues with my UPS though. TrueNAS lets you monitor the state of a USB UPS and gracefully shut down the system when it's running on battery or reaches a low battery level. This is very useful to prevent data loss. It also emails you when something bad happens (UPS in a bad state, UPS not responding, etc.). The state is polled every 2 seconds by default. My UPS decided that was a bit too much, and the monitoring service *in the UPS* failed in the night after running for a couple of hours. I woke up to an inbox full of alert emails (*one per minute* of supposed outage). The fix was relatively easy: set the polling interval to a higher value like 15 seconds, so the UPS doesn't feel DOSed. I'm not running a datacenter and my server doesn't draw much power compared to the UPS capacity, so I'm not concerned about the UPS failing within 15 seconds of a power outage.
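Under the hood, TrueNAS relies on [Network UPS Tools](https://networkupstools.org/) for this monitoring. Assuming a standard NUT setup, the raw equivalent of that GUI change looks roughly like this (the section name and driver below are placeholders, not my exact configuration):

```ini
# /usr/local/etc/nut/ups.conf -- sketch only
[ups]
    driver = usbhid-ups   # common driver for USB HID UPSes
    port = auto
    pollinterval = 15     # seconds between polls; NUT's default is 2
```

In the TrueNAS web UI this is simply the UPS service's polling interval field, so no manual file editing is required.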
## Experimentations
![FreeBSD Logo](/attachments/28/processed/)
This server will allow me to experiment more freely. I used to rent a VPS for always-on services, but now I can just pop them onto this machine. One detail though: FreeNAS is based on [FreeBSD](https://en.wikipedia.org/wiki/FreeBSD). FreeBSD is *not* a GNU/Linux distribution, though there are some similarities. It's a system I've never administered before and it can be a bit confusing when popping into the command line. On the other hand, if you have a pretty standard setup, you never see FreeBSD as everything can be managed through the GUI.
You can also create a virtual machine from the GUI and install a custom OS inside to run services that FreeBSD can't run. That's exactly what I needed since I had a few services that required Docker and Docker is not available on FreeBSD. I set up a VM with Ubuntu server, which I only use for my Docker services.
Fortunately, VMs are not the only way to isolate your services. FreeBSD has an integrated containerization mechanism called *jails*. It pretty much looks like a Docker container except that you can't pick a GNU/Linux distribution. It shares the kernel of the host system, leading to much smaller overhead compared to a full-blown VM. TrueNAS comes with a plugin mechanism that creates a jail to run a custom service, like NextCloud, Deluge, Plex, etc. You can also create jails manually.
My current setup includes:
* Some jails:
    * nginx (handles the incoming HTTP(S) traffic and dispatches to the appropriate service)
    * NextCloud
    * miniflux (moved from my Raspberry Pi)
    * postgresql
    * [cleantoots](/cleantoots-clean-your-toot-history/)
    * Deluge
    * Plex
* A VM with some Docker services:
    * This blog (migrated from the VPS)
    * Collabora Online server (provides online collaboration on LibreOffice/Microsoft Office documents in NextCloud)
    * [Plausible Analytics](/about-me/#analytics)
The Raspberry Pi still runs Home Assistant but I disabled InfluxDB and Grafana since I didn't use them. The VPS is shut down and I plan on deleting it by the end of the month since I don't need it anymore.
## Closing thoughts
I'm really happy with this setup! I don't know yet what I'm going to do with my new NextCloud though. I tried NextCloud talk but I couldn't get it working properly for video calls. I might replace pCloud (I have a lifetime 2To plan) and use it as a backup service, who knows! If you have suggestions, please feel free to [contact me](/about-me/). 😀

---
tags: ['development', 'python', 'shell', 'virtualenv']
date: 2023-02-28T23:58:39.540937+00:00
aliases: ["/setup-rtx-with-direnv-for-python"]
---
## 👨🏻‍💻 TL;DR
```shell
# install rtx & direnv
brew install rtx direnv

# configure rtx & direnv
echo "direnv hook fish | source" >> ~/.config/fish/config.fish
# don't enable rtx's hook because it doesn't work well with direnv's python layout
# see https://github.com/jdxcode/rtx/discussions/235#discussioncomment-5159938
mkdir -p ~/.config/direnv/lib/
rtx direnv activate > ~/.config/direnv/lib/use_rtx.sh
echo "use rtx" > ~/.config/direnv/direnvrc
touch ~/.envrc

# project setup
mkdir top-secret-project
cd top-secret-project
echo "python 3.11.1" > .tool-versions
rtx install
echo "layout python3" > .envrc
echo ".direnv" >> .gitignore
direnv allow
```
These are the latest versions of both tools, at the time of writing:
```plain
direnv --version
2.32.2
rtx --version
1.19.0 macos-arm64 (built 2023-02-28)
```
## 📖 Context
I was happily using [fish shell](https://fishshell.com/), Python, [direnv](https://direnv.net/), [pyenv](https://github.com/pyenv/pyenv) and [pyenv-virtualenv](https://github.com/pyenv/pyenv-virtualenv) to manage my projects and local Python versions. I heard about [asdf](https://asdf-vm.com/) in the past but didn't feel the need to change my tooling because I mainly work on Python-only projects.
Pyenv uses **shims**, which are a problem because when running `which <program>` you get a path to a **wrapper script** and not the real binary. It also means that binaries installed in one virtualenv are globally "accessible" (because the shims are always in your PATH) but when you try to run them you get an error asking you to activate the proper virtualenv.
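To make the shim problem concrete, here's a toy reproduction — everything below (paths, error message) is invented for illustration; pyenv's real shims live under `~/.pyenv/shims`:

```shell
# A "shim" is just a wrapper script placed early in PATH.
mkdir -p /tmp/demo-shims
cat > /tmp/demo-shims/python <<'EOF'
#!/bin/sh
echo "error: no virtualenv active for this shim" >&2
exit 1
EOF
chmod +x /tmp/demo-shims/python
export PATH="/tmp/demo-shims:$PATH"

# `which` now reports the wrapper, not a real interpreter,
# and running it fails until the "right" virtualenv is active.
which python   # -> /tmp/demo-shims/python
```

This is, as far as I understand, exactly what `rtx` avoids: it puts the real install directories on your PATH instead of interposing wrappers.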
My requirements for a new tool are the following:
* The tool MUST be globally available;
* The tool SHOULD provide globally available versions of Python (and other tools like node as a bonus);
* I MUST be able to manage the Python version per project (and other tools like node as a bonus);
* The virtualenv MUST activate itself when I `cd` into the project's directory;
* The tool MUST play nice with `direnv`.
pyenv already does all of this, but:
* It adds a noticeable delay when opening a new shell;
* It adds delay when calling shimmed binaries (python, but also any other binary installed in a venv);
* It uses shims, which as explained above don't provide a great `which` experience.
I was watching [this YouTube video](https://www.youtube.com/watch?v=dFkGNe4oaKk) and the host mentioned a newcomer called `rtx`. It promises to be a drop-in replacement for `asdf`, written in Rust and without shims, so that's what I'm currently trying.
As I didn't find proper & clear setup instructions in the documentation, I wrote this post.
## 🏅 Achievements & improvements
* My shell startup time is noticeably reduced
* I don't have virtualenv binaries polluting my global PATH anymore
* All requirements are satisfied, except for the globally available tools
* I can now manage other language tooling with this setup
* I don't have to manually set up my virtualenv anymore; direnv does it for me
On the other hand, my tools are only available under `$HOME`. Since that's where I normally work, I'm ok with this limitation for now.
That's a clear win! I'll keep using `rtx` and update this post if I find anything to complain about.

---
tags: ['GitHub alternative', 'git', 'software forge']
date: 2020-07-01T20:26:41+00:00
aliases: ["/sourcehut"]
---
!!! Info "🧓🏻 Old post"
This post is quite old now. Since then, I've moved my git repositories to a self-hosted [gitea](https://gitea.io/) instance and I cancelled my subscription. I still stand behind this post though. Sourcehut is a solid git forge and keeps improving.
For a few weeks now I've been hosting my new projects at [sourcehut](https://sr.ht/~crocmagnon) instead of my usual GitHub account. So far, the experience has been very pleasant but it also has some drawbacks. Let's talk about it!
#100DaysToOffload No. 15<!--more-->
[sourcehut](https://sourcehut.org/) is a software forge grouping several tools in a **free and open source** suite:
* git/mercurial repository hosting
* mailing lists
* ticket system
* wiki
* builds / CI
It's blazing fast (see [this comparison](https://forgeperf.org/); disclaimer: it's maintained by sourcehut) and very [accessible](https://en.wikipedia.org/wiki/Web_accessibility). Every page is also usable with a browser on which you disabled JavaScript, which I think is a very nice feature since it allows any kind of web browser to use the website, not just the ones supporting the latest [ECMAScript](https://en.wikipedia.org/wiki/ECMAScript) bells and whistles.
## Collaboration
### Email
Collaboration is mainly achieved through email. After being caught in GitHub and GitLab for years, I recently discovered that `git` has some integrated tools to collaborate, send patches, etc. You can try generating your first patch with `git send-email` with [this tutorial](https://git-send-email.io) (brought to you by the sourcehut team). I even managed to create and send a patch to sourcehut's ticketing system so that you receive your attributed issue number when you submit by email. It's been merged and published [last month](https://lists.sr.ht/~sircmpwn/sr.ht-announce/%3CC3HPI7MYB0VU.A8FD2OLYNAG6%40homura%3E)! 🎉
I still haven't made up my mind about this. I think it might discourage external contributions because I believe more people know the "GitHub flow" (fork to your account, push your changes and make a pull request) than the "pure" git flow. However, as Drew pointed out[^drew], one also had to learn the GitHub flow at some point.
[^drew]: In a toot on his Mastodon server, which is now offline, sorry about that.
On the other hand, it's like going back to the basics, which I appreciate. Also, many of my projects didn't have external contributors even on GitHub, so it's not like it would change the world for my use case.
The basic contribution workflow at sourcehut looks like this:
1. Clone the project locally
2. Make changes
3. Use git send-email to send them to the right mailing list
Finding the mailing list, configuring your git client to send emails... all of these may be obstacles to contributors. Though I see one (big) advantage: **if you have git and an email account, you have everything you need to contribute to projects on sourcehut**.
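For what it's worth, the git-side configuration boils down to a few one-time `git config` calls. The SMTP values below are placeholders for your own provider's, and the list address is made up:

```shell
# One-time email setup for git send-email (placeholder SMTP values):
git config --global sendemail.smtpserver mail.example.com
git config --global sendemail.smtpuser me@example.com
git config --global sendemail.smtpencryption tls

# Then, from a repository with your change committed on top, a submission
# to a (made-up) sourcehut mailing list would look like:
#   git send-email --to="~someone/some-project-devel@lists.sr.ht" -1
```

`-1` sends only the most recent commit as a patch; `git send-email` prompts for the SMTP password unless you configure that too.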
You don't need to create an account, fork the repository or own any resource. You just need an email address to send your patches to sourcehut mailing lists and receive feedback.
Another advantage is that "issues" are not the only way to communicate around a project. If you want to start a discussion, just send an email to the project's appropriate mailing list!
### Code review
I also tried reviewing some code since it's a big part of collaboration around software projects.
At first I found it a bit hard since there's no guidance in the web UI. But once you understand that you can do that via email, then everything falls into place.
You just need to reply to the email you received when the contributor sent the patch. You can even add inline comments and they'll be shown inline in the web UI. See an example of such discussion [here](https://lists.sr.ht/~sircmpwn/email-test-drive/patches/10576).
I found that to be very clever, though there is still room for improvement. Especially on the docs or UI to guide people.
## Documentation
The documentation is still quite Spartan and even sometimes incomplete, but that's to be expected since sourcehut is [still an alpha product](https://sourcehut.org/alpha-details/).
The team is making good progress though! They send a monthly email describing the changes they pushed during the last month. It's always filled with good new stuff, improvements, performance upgrades, etc.
## Business model
Their business model is based on paid users. They [don't have any investors](https://man.sr.ht/billing-faq.md#why-do-i-have-to-pay-for-srht-when-github-gitlab-etc-are-free), so they're entirely driven by what their users would want. Not by how to make money fast under big companies' pressure. After the alpha, users owning resources (repositories, mailing lists, bug trackers, etc) will have to subscribe to a paid plan.
So if you just want an account to use the web UI to publish issues for example, you won't have to pay. But as I said, you don't even need an account for that: you can use email! You would need an account to get write access to any repository or read access to private repos though.
During the alpha you don't need to pay but I figured it would be a nice way to contribute to the free and open source software environment so I decided to subscribe.
## CI / Builds
I also wanted to talk a bit about their CI system. It's as simple as writing a YAML file (called a manifest) and submitting it either through the API or using a web form.
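For illustration, a minimal manifest is plain YAML along these lines — the image, package, repository URL and task bodies are placeholders; the exact schema is documented in the build.sr.ht manual:

```yaml
image: alpine/latest
packages:
  - go
sources:
  - https://git.sr.ht/~someuser/some-project
tasks:
  - build: |
      cd some-project
      go build ./...
  - test: |
      cd some-project
      go test ./...
```

Since it's just a standalone file, you can paste it into the web form and run it without any repository attached.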
That allows you to decouple your build jobs from your repository if you want to. Or test some changes before committing them. Too many times I've seen (and made!) commits like "try to fix CI", "fix CI 2", "this should finally fix CI" because the only way to submit jobs on GitLab CI or GitHub Actions is by committing and pushing to your repo.
The final bonus that kills every other CI system: sourcehut automatically enables SSH access to the build machine if your job fails. You can then connect to the machine that ran your scripts and investigate what happened.
Edit (2020-07-06): sourcehut's build system lacks Windows and macOS images, making it less suitable than some others if you need to target these platforms.
## Closing words
All of this makes sourcehut a far superior software forge than GitHub or GitLab in my opinion. I especially like their full commitment to free and open source software, unlike GitLab which only has a free core and GitHub which is entirely proprietary.
I might not migrate my existing projects (yet) but I will at least continue creating new ones on sourcehut rather than GitHub. And maybe someday I'll find the courage to migrate everything.
Thanks sourcehut, for contributing to this ecosystem with such great tools.
## Related reading
* [Why not GitHub?](https://sanctum.geek.nz/why-not-github.html)
!!! Info "🧓🏻 Old post"
This post is quite old now. Since then, I've moved my git repositories to a self-hosted [gitea](https://gitea.io/) instance and I cancelled my subscription. I still stand behind this post though. Sourcehut is a solid git forge and keeps improving.
For a few weeks now I've been hosting my new projects at [sourcehut](https://sr.ht/~crocmagnon) instead of my usual GitHub account. So far, the experience has been very pleasant but it also has some drawbacks. Let's talk about it!
#100DaysToOffload No. 15<!--more-->
[sourcehut](https://sourcehut.org/) is a software forge grouping several tools in a **free and open source** suite:
* git/mercurial repository hosting
* mailing lists
* ticket system
* wiki
* builds / CI
It's blazing fast (see [this comparison](https://forgeperf.org/) ; disclaimer : it's maintained by sourcehut) and very [accessible](https://en.wikipedia.org/wiki/Web_accessibility). Every page is also usable with a browser on which you disabled Javascript, which I think is a very nice feature since it allows for any kind of web browser to use your website. Not just the ones supporting the latest [ECMAScript](https://en.wikipedia.org/wiki/ECMAScript) bells and whistles.
## Collaboration
### Email
Collaboration is mainly achieved through email. After being caught in GitHub and Gitlab for years, I recently discovered that `git` has some integrated tools to collaborate, send patches, etc. You can try generating your first patch with `git send-email` with [this tutorial](https://git-send-email.io) (brought to you by the sourcehut team). I even managed to create and send a patch to sourcehut's ticketing system so that you receive your attributed issue number when you submit by email. It's been merged and published [last month](https://lists.sr.ht/~sircmpwn/sr.ht-announce/%3CC3HPI7MYB0VU.A8FD2OLYNAG6%40homura%3E)! 🎉
I still haven't made up my mind about this. I think it might discourage external contributions because I believe more people know the "GitHub flow" (fork to your account, push your changes and make a pull request) while fewer have learned the "pure" git flow. However, as Drew pointed out[^drew], one also had to learn the GitHub flow at some point.
[^drew]: In a toot on his Mastodon server, which is now offline, sorry about that.
On the other hand, it's like going back to the basics, which I appreciate. Also, many of my projects didn't have external contributors even on GitHub, so it's not like it would change the world for my use case.
The basic contribution workflow at sourcehut looks like this:
1. Clone the project locally
2. Make changes
3. Use git send-email to send them to the right mailing list
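The patch in step 3 can be generated and inspected offline before anything is sent, since `git format-patch` produces the same email-formatted file that `git send-email` transmits. A minimal throwaway sketch (repository name, file contents and commit messages are all made up):

```shell
# Throwaway repo to demonstrate patch generation (everything here is made up).
git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name "You"
echo "helo" > README && git add README && git commit -qm "Add README"
echo "hello" > README && git commit -qam "Fix greeting in README"
# Write the last commit as an email-formatted patch file:
git format-patch -1 HEAD
# Sending it to a project's list is then a single command, e.g.:
# git send-email --to="~user/project-devel@lists.sr.ht" 0001-*.patch
head -n 20 0001-Fix-greeting-in-README.patch
```

The mailing list address in the comment is hypothetical; each sourcehut project documents its own.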
Finding the mailing list, configuring your git client to send emails... all of these may be obstacles for contributors. Though I see one (big) advantage: **if you have git and an email account, you have everything you need to contribute to projects on sourcehut**.
You don't need to create an account, fork the repository or own any resource. You just need an email address to send your patches to sourcehut mailing lists and receive feedback.
Another advantage is that "issues" are not the only way to communicate around a project. If you want to start a discussion, just send an email to the project's appropriate mailing list!
### Code review
I also tried reviewing some code since it's a big part of collaboration around software projects.
At first I found it a bit hard since there's no guidance in the web UI. But once you understand that you can do that via email, then everything falls into place.
You just need to reply to the email you received when the contributor sent the patch. You can even add inline comments and they'll be shown inline in the web UI. See an example of such a discussion [here](https://lists.sr.ht/~sircmpwn/email-test-drive/patches/10576).
I found that to be very clever, though there is still room for improvement. Especially on the docs or UI to guide people.
## Documentation
The documentation is still quite spartan and sometimes incomplete, but that's to be expected since sourcehut is [still an alpha product](https://sourcehut.org/alpha-details/).
The team is making good progress though! They send a monthly email describing the changes they pushed during the last month. It's always filled with good new stuff: improvements, performance upgrades, etc.
## Business model
Their business model is based on paid users. They [don't have any investors](https://man.sr.ht/billing-faq.md#why-do-i-have-to-pay-for-srht-when-github-gitlab-etc-are-free), so they're entirely driven by what their users would want. Not by how to make money fast under big companies' pressure. After the alpha, users owning resources (repositories, mailing lists, bug trackers, etc) will have to subscribe to a paid plan.
So if you just want an account to use the web UI to publish issues for example, you won't have to pay. But as I said, you don't even need an account for that: you can use email! You would need an account to get write access to any repository or read access to private repos though.
During the alpha you don't need to pay but I figured it would be a nice way to contribute to the free and open source software environment so I decided to subscribe.
## CI / Builds
I also wanted to talk a bit about their CI system. It's as simple as writing a YAML file (called a manifest) and submitting it either through the API or using a web form.
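A minimal manifest might look like the following sketch. The image name, package list, repository URL and tasks are illustrative assumptions, not taken from a real project:

```yaml
image: alpine/edge
packages:
  - go
sources:
  - https://git.sr.ht/~user/project
tasks:
  - build: |
      cd project
      go build ./...
  - test: |
      cd project
      go test ./...
```

Submitting this file, via the web form or the API, queues a build without touching any repository.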
That allows you to decouple your build jobs from your repository if you want to. Or test some changes before committing them. Too many times I've seen (and made!) commits like "try to fix CI", "fix CI 2", "this should finally fix CI" because the only way to submit jobs on GitLab CI or GitHub Actions is by committing and pushing to your repo.
The final bonus that kills every other CI system is that sourcehut automatically enables an SSH access if your job fails. You can then connect to the machine that ran your scripts and investigate what happened.
Edit (2020-07-06): sourcehut lacks Windows and macOS images in its builds system, making it less suitable than some others if you need to target these platforms.
## Closing words
All of this makes sourcehut a software forge far superior to GitHub or GitLab, in my opinion. I especially like their full commitment to free and open source software, unlike GitLab, which only has a free core, and GitHub, which is entirely proprietary.
I might not migrate my existing projects (yet) but I will at least continue creating new ones on sourcehut rather than GitHub. And maybe someday I'll find the courage to migrate everything.
Thanks sourcehut, for contributing to this ecosystem with such great tools.
## Related reading
* [Why not GitHub?](https://sanctum.geek.nz/why-not-github.html)
* [Github is sinking](https://yarmo.eu/post/github-sinking)

BIN content/posts/synology-nas-stream-and-backup/16.png (Stored with Git LFS) Normal file
BIN content/posts/synology-nas-stream-and-backup/17.png (Stored with Git LFS) Normal file
BIN content/posts/synology-nas-stream-and-backup/18.png (Stored with Git LFS) Normal file
@ -4,80 +4,80 @@ tags: ['NAS', 'Plex', 'Raspberry Pi', 'Synology', 'backup', 'self-hosting', 'str
date: 2020-05-14T05:46:00+00:00
aliases: ["/synology-nas-stream-and-backup"]
---
This is the third issue of my #100DaysToOffload
My Synology NAS (DS 216play) has two 2 TB hard drives and serves two main purposes in my setup:
* Store media files (mainly movies and TV shows) & stream them
* Hold some backups
<!--more-->
## What I don't really use
I sporadically need to share files between devices when I don't have a USB key nearby or keep large files like some Linux images (I like to have at least one ready to use in case I have to troubleshoot a computer).
I also tried the collaborative editing features (NextCloud-like) but the lack of support for editing spreadsheets on the iPad was a deal breaker for me.
So, after having talked about what I don't use on my NAS, let's move on to the things I do with it!
## Media storage & streaming
I keep my media files in a volume that everyone can access through their computer (either from Windows Explorer or the macOS Finder), but the preferred way of consuming these files is to use Plex.
I set up a Plex account, installed the package on the NAS and linked everything. I remember having to work a bit to allow external access but nothing too hard.
Now I can watch my content from anywhere and since I subscribed to the Plex Pass, I can also download it offline to my devices.
I also installed the Plex app on my Apple TV in order to have easy access to my content there. Nothing more to say really: once you connect to your account, your library shows up and you can select a media and hit play.
I'm nearly at "drop a video and play it", but in some cases Plex needs to transcode the media to stream it to the Apple TV app and apparently my DS 216play's CPU [isn't powerful enough to do so on the fly](https://docs.google.com/spreadsheets/d/1MfYoJkiwSqCXg8cm5-Ac4oOLPRtCkgUxU0jdj3tmMPc/edit) ([source](https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/)). For these cases, I either try to convert the file beforehand with ffmpeg or I use VLC (there's also an Apple TV app).
I really like Plex: it has support for various devices, you can easily download subtitles for something you're playing, you can access your library from anywhere, it's very low maintenance. I'm a little annoyed by the "CPU not powerful enough" though, I'll work on that.
### Video files encoding, transcoding and streaming
*This part is more technical, feel free to jump to the backups if you want.*
Why wouldn't the NAS be able to play a video "in some cases"? In fact, every Plex player has a compatibility matrix of content you can play with it. The matrix has 5 dimensions:
* container file (e.g., MKV, AVI)
* video codec (e.g., mpeg4, H264)
* audio codec (e.g., mp3, aac)
* resolution
* bitrate
If your file has a combination of these that matches the compatibility list of your player, then your media can be *direct played*. If everything's compatible except for the container, you can use *direct streaming*. Neither of these features requires high CPU usage so I'm good to go, but if the media doesn't fall into one of these two categories, then the transcoding has to take place and the NAS is not powerful enough.[^1]
I'm no expert in transcoding and all but I understood that there are basically two types of operations: you can either "just" change the container of your media without having to re-encode every frame OR you can re-encode every frame. The first is light and can be done in a couple of seconds even on low-spec hardware (that's what Plex does by itself when it *direct streams*). The latter, though, would take hours or even days on my NAS for a single 4 GB movie; so I'm not considering it.
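In ffmpeg terms, these two operations correspond to stream copy versus a full re-encode. A hedged sketch (filenames are placeholders, and the target codecs depend on what your player can direct play):

```shell
# Remux only: swap the MKV container for MP4 without touching the
# video/audio streams. Takes seconds even on weak hardware:
ffmpeg -i movie.mkv -c copy movie.mp4

# Full transcode: re-encode every frame to H264/AAC. This is the
# operation that takes hours or days on a low-power NAS CPU:
ffmpeg -i movie.mkv -c:v libx264 -c:a aac movie_reencoded.mp4
```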
I will probably have a look at [HandBrake](https://handbrake.fr/), which I discovered while writing this article. It's a free and open source tool to transcode videos. That should help me clean up my library and make everything compatible with the Apple TV.
[^1]: Plex documentation on direct play and direct stream: https://support.plex.tv/articles/200250387-streaming-media-direct-play-and-direct-stream/
## Backups
### MacBook
The other main thing I use my NAS for is keeping backups. The configuration here is really easy too, at least for Time Machine on macOS:
1. Enable a file-sharing service like SMB
[![Screenshot of the "File Services" setting screen in Synology DSM](16.png)](/attachments/16/original/)
2. Advertise Time Machine on the protocol you enabled
[![Screenshot of the "File Services - Advanced" setting screen in Synology DSM](17.png)](/attachments/17/original/)
3. Configure your Mac to use your network drive as a Time Machine backup
This setup served me well for many months but it recently started to fail and I don't understand why yet. I just get a cryptic error on my Mac telling me that the backup couldn't complete.
[![Screenshot of the Time Machine error on macOS](18.png)](/attachments/18/original/)
I'll have to investigate this issue further, but I'm not too stressed either. All of my documents live in the cloud and my code projects are pushed on remote git repositories. The only thing I'll lose if my computer gets stolen is a bunch of stickers I really like and a few hours to set a new one up.
### Raspberry Pi
I also recently started to use my Raspberry Pi more, so I wanted to keep a backup to prevent loss due to SD card failure.
For this one I had to get somewhat creative. I first listed the files and folders I needed to back up. For me it was nearly everything in the Home Assistant configuration directory (except for the database: 1.5 GB, no thank you) and some other files scattered around the disk for various other scripts and configuration files.
I then created two very similar scripts which would be executed periodically. The first is all about Home Assistant and the second takes care of the rest.
Since this article is already long enough, I'll stop writing here and detail the scripts in [a later post](https://blog.augendre.info/raspberry-pi). Stay tuned! 😉
Keywords/tags:
#tech #home #synology #nas #plex #backup #raspberrypi #100DaysToOffload #HundredDaysToOffload

BIN content/posts/to-a-new-adventure/44.png (Stored with Git LFS) Normal file
BIN content/posts/to-a-new-adventure/45.jpg (Stored with Git LFS) Normal file
@ -4,29 +4,25 @@ tags: ['Career', 'OVH', 'OVHcloud', 'Work']
date: 2023-03-24T11:01:05.142656+00:00
aliases: ["/to-a-new-adventure"]
---
After leaving ITSF in January, I spent February at Gatewatcher. The company is in Paris so the position was fully remote, with 2-3 days per month on site. Unfortunately and despite my hopes, it didn't work for me.
After a month of job hunting, I'm happy to report that I'll be joining [OVHcloud](https://ovhcloud.com) in April! They have offices in Lyon so the remoteness shouldn't be an issue, though the people I'll be teaming up with are distributed in France and Canada. I've been hired as a software developer & site reliability engineer. I'll be mostly developing in Go and Python.
![OVHcloud logo on a wall, slightly blurry](45.jpg)
During this month I met several other organizations. All of them had interesting projects to share and it was very rewarding to have the opportunity to discuss with them.
Here are some links (in alphabetical order) so you can check their work:
* [Alstom](https://www.alstom.com/) - Very large worldwide group specialized in rail transport.
* The project they were hiring for: [on LinkedIn](https://www.linkedin.com/jobs/view/project-software-designer-netbox-1-at-alstom-3495964133)
* CNRS - large state research organization
* [LBBE team](https://lbbe.univ-lyon1.fr/)
* The project they were hiring for: [Lifemap](https://lifemap.univ-lyon1.fr/)
* [Gatewatcher](https://www.gatewatcher.com/) - small/medium cybersecurity company
* [Hashbang](https://hashbang.fr/) - small worker cooperative
* [Smartfire](https://www.smartfire.pro/) - small tech agency
*[CNRS]: French National Centre for Scientific Research
*[LBBE]: Laboratory of Biometry and Evolutionary Biology
Anyway, I'm really excited to start my new job! Wish me luck 😊
<!-- ![OVHcloud logo](44.png)
{: .logo } -->

@ -4,27 +4,27 @@ tags: ['ssh', 'yubikey']
date: 2020-05-18T19:07:08+00:00
aliases: ["/using-your-yubikey-with-ssh-and-achieving-multifactor-authentication"]
---
In my [introductory article](/im-starting-a-blog), I teased about using SSH with a YubiKey. Here's the post that expands on the idea.
![YubiKeys](21.jpeg)
As you can see, I *like* YubiKeys.
This is part of my #100DaysToOffload series, issue No. 6.<!--more-->
I won't go into much detail as to how to set them up; other people already did. [Here's the guide I followed](https://florin.myip.org/blog/easy-multifactor-authentication-ssh-using-yubikey-neo-tokens)[^guide]. Instead, I just wanted to talk about how cool it is!
[^guide]: Sadly the website seems to have been taken down 😕
First, even if you need to access your server from multiple machines, you only have one public key to authorize. No more "oh no I didn't allow this key, I have to log in with the password instead" (you should have disabled it so it's more "oh no I have to get my other computer to allow this one"). Saving time already 😊
Second, it's portable. More than a laptop. You can even plug it in on a colleague's computer and use it from there without having to compromise your private key or create another pair.
You can also generate/store your private encryption and signature keys for PGP on a YubiKey ([*Why have separate encryption subkey?*](https://security.stackexchange.com/questions/43590/pgp-why-have-separate-encryption-subkey), [*Improve the Security of Your OpenPGP Key by Using Subkeys*](http://www.connexer.com/articles/openpgp-subkeys)). That also requires you to plug in the key to decrypt/sign messages, hence increasing the security of your setup.
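For the SSH side, once gpg-agent is configured as described in the guide above, the YubiKey-backed key behaves like any other identity. A sketch of the moving parts (the key ID is made up, and socket paths vary by system):

```shell
# Tell SSH to talk to gpg-agent instead of ssh-agent:
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
# List the public key backed by the YubiKey. This is the single
# key you authorize on every server:
ssh-add -L
# Or export it straight from GnuPG (ABCD1234 is a hypothetical key ID):
gpg --export-ssh-key ABCD1234 >> ~/.ssh/authorized_keys
```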
These are features I use every day: my key is plugged pretty much all day on my computer at work since I'm using PGP to sign my commits and SSH to push them on our shared git repo. That's also what I use to log into my [Raspberry Pi](/raspberry-pi) or this blog's host.
One thing to keep in mind though is that if you lose your key and it's your only access to a remote machine, you may be screwed. Remember to always keep a backup access with an offline key[^1] or a good old keyboard & screen if you have physical access to your server!
I hope this post helps you see the coolness of these little keys! You can of course use them for 2FA on websites but hey, why not also have 2FA for SSH 😉
[^1]: "offline key" here means a key pair stored on a device disconnected from internet and that you rarely use if ever. Maybe it's a USB drive in a Swiss safe, maybe it's in your nightstand, whatever. Just keep it offline as much as possible to not defeat the purpose of additional security brought by the YubiKey. For PGP, the certification key should be kept offline as it holds the power to certify other keys to allow them to sign/decrypt.

@ -4,45 +4,45 @@ tags: ['apple tv', 'homepod', 'rant', 'youtube']
date: 2020-05-15T11:22:59+00:00
aliases: ["/youtube-on-apple-tv"]
---
Today's post will mostly be me ranting about the YouTube app on Apple TV.<!--more--> See, I have a pretty typical Apple user setup for my TV (and as you'll maybe learn in future posts, it's not only for the TV). I have:
* an **Apple TV**: an internet connected device on which you can install apps like Netflix, YouTube, VLC, Plex, Molotov, etc.
* a **HomePod**: Apple's smart speaker
* a TV (didn't see that coming, did you?)
The sound of the Apple TV media is supposed to come out of the HomePod via the magic of **AirPlay**.
I don't know much about AirPlay except that:
* Apple created it, so it's supposed to work well within "[the ecosystem](https://www.youtube.com/watch?v=KB4_WIPE7vo)"
* it's a wireless application protocol to transmit sound, video and controls (play, pause, next, ...)
* the 2nd version supports casting the same media to multiple devices
And indeed it works really well! I can pull my iPhone out of my pocket, play some music and stream it to my HomePod. No issue here. I can also flawlessly launch a Netflix movie or show and have the sound come out of the HomePod because that's how I left it the day before.
But when I open the YouTube app, everything falls apart.
When launching a video, the sound consistently comes out of the TV even if the HomePod is selected in the "sound output" settings and I've been watching Netflix all day long.
Because I'm stubborn and I don't want to use the TV speakers, here's what I **must** do every time I launch my first YouTube video:
* Click on the video, notice the sound comes from the TV
* Change the sound output from HomePod to TV
* Play the video again
* Change the sound output from TV to HomePod
* Play the video, now the sound comes from the HomePod but it has a 2-second delay
* Go to the previous menu
* Open *another* video, notice the sound comes from the HomePod and is in sync
* Go to the previous menu
* Finally watch my video in peace.
That's a bit of an instruction list, huh? Well, I don't know how crappy this app's code is, but it's *the only one* that does that. Netflix, Plex, VLC, Molotov... They all manage to handle this without making me crazy. But not YouTube. Which is a pain in the neck, really.
The only "cool" thing is that once it's fixed, it's fixed for all videos I play in a row. I know: wow. I don't know when it resets but I know that if I watch YT every day, I'll have to do this every day.
So if by any chance a Google engineer comes across this blog post, I'd be more than happy to share my experience with them to help improve the app experience.
And if you have any suggestions to peacefully watch YouTube on the Apple TV with an AirPlay speaker, please [reach out to me](/about-me) 😊
This concludes my 4th post of #100DaysToOffload.
Keywords/tags:
#appletv #home #youtube #homepod #airplay #100DaysToOffload #HundredDaysToOffload