+++
title = "Blog: Hacking my Linux server at home - Part 3"
description = "MicroK8s + ArgoCD + Vault + Kustomize <3"
date = 2026-03-31
[taxonomies]
tags = ["blog", "server", "sysadmin", "dev-ops", "k8s"]
+++

### See also
* [Part One][0]
* [Part Two][1]

## Context
In my previous post, I described how I set up Gitea, Act, Minikube and a Cloudflare Tunnel to host my own web apps on my server, and ended the post with a list of things to do.

Let's take a look at the "roadmap" at the end of the previous post:
* Setting up ArgoCD (GitOps),
* Setting up Grafana LGTM for Observability, Logs, Tracing (exciting!),
* Setting up SOPS for secrets management (would allow storing the ingress certificate in VCS securely),
* Adding a new machine as a cluster node (ok, this one is definitely not next, I need to buy the hardware) (and swapping Minikube with MicroK8s?),
* Server hardening.

Long time, no see. I've done a couple of things since the last update :-) and today, I'll detail how I set up HashiCorp's Vault and ArgoCD, along with some minor tweaks, and more importantly, why I made those choices over others.

## Swapping Minikube with MicroK8s
Let's evacuate the less interesting things first. When it comes to deploying Kubernetes on your own hardware, several options stand out. One of them is Minikube, another is MicroK8s. The first one worked for me, but didn't offer multi-node support, so I figured migrating would bring a few benefits, including some "builtin" addons such as Helm or ArgoCD.

There are no Arch Linux packages for MicroK8s; it's distributed mainly as a `snap`, Canonical's universal Linux package format. Canonical's documentation proved relatively easy to follow, and MicroK8s came with its own systemd units to manage its various services. Nice.

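For reference, the whole install is roughly this (addon availability varies across MicroK8s versions):

```sh
sudo snap install microk8s --classic
sudo usermod -aG microk8s "$USER"
microk8s status --wait-ready
# The builtin registry listens on localhost:32000
microk8s enable dns registry
# ArgoCD ships as a community addon
microk8s enable community && microk8s enable argocd
```
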
I did encounter some obstacles however. Since the cluster IP was different, I had to figure out how to rewire my various `socat` port forwards to correctly re-route traffic between Docker (for the registry and the CI on Gitea), MicroK8s, and incoming traffic from Cloudflare.

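Those forwards are simple `socat` one-liners of roughly this shape (the addresses here are hypothetical, not my actual topology):

```sh
# Accept connections on the Docker bridge IP and relay them to the
# MicroK8s registry on localhost:32000 (addresses hypothetical).
socat TCP-LISTEN:32000,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:32000
```
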
But the biggest issue I ultimately faced was random failures on my `socat` port forward between containers running in my CI and my Docker registry, now managed by MicroK8s. For no apparent reason, `docker push` would randomly fail when using the Docker bridge IP.

I eventually resorted to using the host network for Act's job containers, which is... arguably not great, but probably not that terrible either, since I'm the only person pushing code and running CI jobs.
```yaml
# /etc/act_runner/config.yaml
container:
  # Run CI job containers on the host network instead of the Docker bridge.
  network: "host"
```

Definitely not a setting you would toggle in a serious production environment, but I needed a stable base to improve on. I did investigate socat's TTL settings, IPv6 vs IPv4, and a couple of other parameters. I'm still unsure what's going on.

Let's move on to the main dishes :)

## Handling Secrets
Storing and managing passwords (and, more generally, secrets) has always been a great source of smiles or friendly bullying in some workplaces, with stories ranging from "somebody guessed my password and posted nasty things on Slack" to "somebody stole my whole crypto wallet and spent $5000 with my OpenAI key".

Mismanaging secrets has always been terrible, but with the rise of malware and, more recently, the adoption of AI tools that basically index your folders and send them along in requests, it's definitely something you want to handle with a minimum of discipline.

There are usually two types of configuration: non-sensitive data (identifiers, usernames, URLs without credentials) and secrets, such as passwords, private keys (for asymmetric algorithms), or tokens. We're interested in the latter here.

Secrets can be short- or long-lived. Things like OAuth2 tokens, valid for a relatively short period of time, are usually stored in application state, caches or databases, and refreshed often.
Long-lived secrets usually do not receive that much attention though, especially when you start building a service. You just put things into environment variables (either directly or through your PaaS abstraction, like `fly secrets`), and they stay there for 5 years before seeing their first-ever renewal.

Let's say we need a couple of secret keys to deploy our app, and we want to generate a Kubernetes secret that our app will have access to.
We could:
* Store the `secret.yaml` file with plain values in code. It would completely defeat the purpose of secrecy.
* Keep our `secret.yaml` file aside, in a safe place, such as our password manager. Not great, not lean, tied to one account (that would become one more SPOF in your organization), but at least confidential.
* Set up environment variables in our CI, and substitute values just before deployment. Sure, it works, but anyone able to run the CI can extract those values.
* Commit the file encrypted in our repository, and decrypt it during deployment. That's basically how SOPS[2] works.
* Store secrets in a database. This works for scoped configuration (e.g. user tokens), but not for config you expect to be set before your app starts. Abusing it would break your apps' lifecycles and prevent K8s from restarting or redeploying apps when needed.
* Or, use a Kubernetes operator providing `External Secrets` and combine it with a password vault.

I chose the last option for the following reasons:
1. Using SOPS requires you to manually re-encrypt, commit and push any file you want to handle as a secret.
2. `External Secrets` are represented by Kubernetes CRDs (custom resource definitions) whose state you can track: you can see if something went wrong when trying to update a secret.
3. And more importantly, `External Secrets` let you manage the lifecycle of secrets at "the top of the pipeline", or, so to speak, in the "SecOps domain", meaning you can just update the source of truth and let "third parties" pull updates, without having to run between heterogeneous apps to figure out how they store their secrets.

You can leverage AWS or GCP infrastructure here. I chose the pure self-hosted approach and decided to go with HashiCorp Vault :)

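With the External Secrets Operator, that translates into two small manifests: a store pointing at Vault, and an `ExternalSecret` describing what to pull. Here's a minimal sketch, where the Vault address, auth method, paths and key names are all hypothetical:

```yaml
# secret-store.yaml -- a SecretStore pointing at Vault (details hypothetical)
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-store
  namespace: suto
spec:
  provider:
    vault:
      server: "http://vault.vault.svc:8200"  # wherever Vault lives
      path: "secret"                         # KV engine mount point
      version: "v2"
      auth:
        tokenSecretRef:
          name: vault-token
          key: token
---
# secret.yaml -- the ExternalSecret the operator reconciles into a K8s Secret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: suto-secrets
  namespace: suto
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-store
    kind: SecretStore
  target:
    name: suto-secrets            # name of the generated K8s Secret
  data:
    - secretKey: SECRET_KEY_BASE  # key in the generated Secret
      remoteRef:
        key: suto/config          # path in Vault
        property: secret_key_base # field at that path
```
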
## HashiCorp Vault
HV is essentially an infinite strongbox allowing you to store secrets in various forms, with several engines offering "multi-modal" patterns (for example, Cubbyhole[3] for a token-scoped key-value store). You can access it with the user account defined at setup, but unsealing the strongbox requires a threshold of the unseal keys generated at initialization (be sure to save those 5 setup keys in a safe place).

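Initialization is where those keys come from; roughly:

```sh
# Generates 5 unseal key shares, 3 of which are needed to unseal.
vault operator init -key-shares=5 -key-threshold=3
# Run three times with three different key shares to unseal.
vault operator unseal
```
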
If you suspect an intrusion or unauthorized access, you can seal your vault from the UI, preventing data from being pulled. HV also has a couple of enterprise features, such as recovery, LDAP integration, ACLs, and so on.

And after a bit of setup, you end up with something like this :)
{{ picture(path='/assets/screenshots/2026-03-31-vault.png', class="article-picture") }}

HV also exposes a handy CLI:
```
$ VAULT_ADDR=http://127.0.0.1:PORT vault status
Key             Value
---             -----
Seal Type       [REDACTED]
Initialized     true
Sealed          false
Total Shares    5
Threshold       3
Version         [REDACTED]
Build Date      [REDACTED]
Storage Type    file
Cluster Name    [REDACTED]
Cluster ID      [REDACTED]
HA Enabled      false
```
And this CLI also allows you to pull secrets from the command line.

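For instance, assuming a KV v2 engine mounted at `secret/` and a hypothetical `suto/config` path:

```sh
$ VAULT_ADDR=http://127.0.0.1:PORT vault kv get -field=secret_key_base secret/suto/config
```
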
But let's move on to the really exciting and squishy part!

## ArgoCD: Why, How, WTF?
{{ picture(path='/assets/screenshots/2026-03-31-argo-splashscreen.png', class="article-picture") }}

Is there anything you could refuse that face :)?

Here's a reminder of my previous setup:
- A CI Runner (Act) pulling `kubectl` and applying my Kubernetes manifests directly.
- Two different CI pipelines: one for testing (CI), one automatically deploying `main` builds to my cluster (Continuous Delivery).

Whether you replace "raw" K8s files with Helm or Kustomize, this flow stays essentially the same. And it would work.

But here are a few things that can go (or seem) wrong with this approach:
1. You have to monitor a successful deployment manually.
2. It can be hard to track or debug failures: you would add logs to your CI, rely on `kubectl` or a TUI to check your K8s resources manually...
3. You have to explicitly declare all the necessary steps for your app to deploy: update config maps, secrets, migrate your database...

What if I told you that you can have, at the same time:
1. A nice UI to visually monitor your K8s deployments, to see their status, failure reasons and logs.
2. A process that'll automatically sync your app to the cluster.
3. A great separation between your actual application code, and the CD configuration to deploy it.

Argo[4] is basically the "Cluster Admin Dashboard", or at least one of them :) It leverages the GitOps approach to deploy software, adopting the semantics of Git to make the experience flawless and concentrate everything related to CD in one place.

After a quick setup (as a MicroK8s addon, in my case: it lives in a specific namespace in my cluster), I just created a new application, specified the Git repository that Argo would listen to, and started debugging and adapting my configuration.
It took a bit of time to get it right, and also to figure out what was going wrong with my secrets configuration, but I eventually ended up with a green light :)

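For reference, here's a minimal sketch of the Application resource behind that kind of setup (I created mine through the UI; the repo URL here is hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: suto
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/MakkuSoft/cd-suto.git  # hypothetical URL
    targetRevision: main
    path: .
  destination:
    server: https://kubernetes.default.svc  # deploy into the local cluster
    namespace: suto
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift
```
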
I'm *almost* sure you can set up apps on Argo just by pointing to your app repo directly, since you can specify paths and branches. But in my case, I chose to separate things in a strict manner, so I ended up with:
* A new repo, `cd-suto`, containing my Kustomize deployment config and a lightweight CI to update my deployment Docker image and push it to the repo,
* My app repo `suto`, with a `test` CI, and a `deploy` CI triggering the `update-image` CI using a Gitea webhook.

That additional step in my deploy CI did the trick:

```yaml
- name: Trigger ArgoCD deployment
  run: |
    curl -s -X POST \
      -H "Content-Type: application/json" \
      -H "Authorization: token ${{ secrets.CD_TOKEN }}" \
      "${{ vars.FORGE_URL }}/api/v1/repos/MakkuSoft/cd-suto/actions/workflows/update-image.yaml/dispatches" \
      -d "{\"ref\": \"main\", \"inputs\": {\"image_sha\": \"${{ env.DEPLOY_SHA }}\"}}"
```

And so, my workflow on `cd-suto` would trigger, build and push my Docker image, run `kustomize edit set image tinker:32000/suto:${{ inputs.image_sha }}`, commit and push to the repo, and Argo would then pick up the change and sync!

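For the curious, the receiving workflow could look roughly like this; apart from the `kustomize edit` command above, everything here is a hypothetical sketch (image build and push omitted):

```yaml
# .gitea/workflows/update-image.yaml on cd-suto -- a sketch
name: update-image
on:
  workflow_dispatch:
    inputs:
      image_sha:
        description: "Commit SHA to deploy as the image tag"
        required: true

jobs:
  update-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...build & push of tinker:32000/suto:${{ inputs.image_sha }} would go here...
      - name: Point the deployment at the new image
        run: |
          kustomize edit set image tinker:32000/suto:${{ inputs.image_sha }}
          git -c user.name=cd-bot -c user.email=cd-bot@example.invalid \
            commit -am "deploy: suto ${{ inputs.image_sha }}"
          git push
```
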
I'm still a very early user of Argo and do not have much experience with it, but after 3 years with custom CI/CD and CLI tools, Argo's UX already looks frankly fantastic to me. And of course, in the context of a large organization, Argo also allows you to use one monorepo for deployment, with paths and branches dedicated to your apps.

One specific thing that Argo allows is the separation of "deployment steps" into "waves". Let's say you have a typical Elixir or Rails application that requires running database migrations before starting. With a custom GitLab CI, what I would typically do previously was run the K8s job separately, then apply my deployment.

Argo allows you to define waves as metadata, so it rolls deployments in a very, very simple way:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: suto-migrate
  namespace: suto
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
    argocd.argoproj.io/sync-wave: "-1"
  labels:
    app: suto
    component: migration
spec:
  ...
```

This is reflected in the Argo UI, where you can tell the hierarchy between your top dependencies (config map, Vault store) and intermediate ones (external secret). Extremely nice!

{{ picture(path='/assets/screenshots/2026-03-31-argo-suto.png', class="article-picture") }}

I'll definitely try to blog about Argo tips and tricks when I get more used to it :)

## A quick note about K8s manifests: Kustomize vs Helm
I briefly mentioned that I migrated my Kubernetes files (which were just plain YAML files). This initial setup had a couple of issues:
1. My config map was not versioned (name set to `suto-config`), which meant my app wouldn't restart when my config map was updated.
2. It required a bit of `sed` to update the hardcoded Docker image in several manifests. Which is a terrible idea.
3. There was no guaranteed consistency between my manifests' labels, annotations...

Helm is the "historical" way to "customize" K8s manifests. Kustomize is a newer one, and I prefer it for the following reasons:
1. It uses YAML (a "kustomization") to generate more YAML, instead of using templating and variable substitution like Helm does; it's basically a PHP engine versus a Markdown static site generator: the latter is simpler, less prone to errors, and more elegant. I only have to think about YAML entries, not a templating DSL.
2. It offers a centralized and clear way to correctly manage things like config maps (with config map generators), with built-in versioning, common labels and annotations.
3. It does not come with an additional layer of CRDs like Helm (Helm Releases), which seems completely redundant once Argo is in the mix.
4. Generally speaking, I think it's better suited to small codebases.

I still do like Helm a lot for being able to deploy pre-built software, like Traefik, Argo, and so on, but that's it.

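To illustrate, deploying something like Traefik is a two-liner with its public chart (chart and repo names per upstream docs):

```sh
helm repo add traefik https://traefik.github.io/charts
helm install traefik traefik/traefik --namespace traefik --create-namespace
```
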
To give you a short example, here's what a Kustomization file may look like:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: suto

configMapGenerator:
  - envs:
      - config.properties
    name: suto-config
    namespace: suto
    options:
      annotations:
        argocd.argoproj.io/hook: PreSync
        argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
        argocd.argoproj.io/sync-wave: "-2"

labels:
  - includeSelectors: true
    pairs:
      app: suto

commonAnnotations:
  deploy.suto/commit-sha: 42ed863182efffb470a64554c3a77109a9aacb5e

resources:
  - secret-store.yaml
  - secret.yaml
  - namespace.yaml
  - migrate-job.yaml
  - headless-service.yaml
  - deployment.yaml
  - service.yaml
  - ingress.yaml

images:
  - name: [REDACTED]/suto
    newTag: 42ed863182efffb470a64554c3a77109a9aacb5e
```

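If you want to eyeball the result before Argo does, Kustomize renders the final manifests locally:

```sh
# Print the fully generated manifests (hashed config map names, labels, images...)
kubectl kustomize .
```
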
## Wrap-up
Anyway, we're starting to have a really nice setup here: still very fragile in a couple of areas, but already more elaborate (or closer to the state of the art) than a lot of software companies' own architecture :)

On this server, we now have:
* Remote access with SSH,
* A K8s cluster,
* A Git Forge (Gitea),
* A CI Runner (Act),
* A Postgres server,
* Vault to store secrets,
* Argo to manage deployments.

There's a couple of things that I want to tackle from now on:
* Buy another server, and add it to the K8s cluster.
* Set up an OpenTelemetry stack (Grafana's LGTM :) and start using Tracing and Logs.
* Host Postgres inside K8s using the CloudNativePG operator, to minimize risks of breaking updates and segregate services, and also enable replication.
* Swap Gitea with Forgejo: not the most useful move, but Forgejo is the community-driven fork of Gitea, and will probably receive more updates in the long run.
* Migrate to a "dedicated" CI instead of using Act. Act is directly compatible with GitHub Actions, but this proximity creates space for inaccurate implementations or missing documentation. Maybe Woodpecker CI?
* Possibly host Gitea/Forgejo on a separate VM, to separate source code from runtime.
* Add automatic backup/syncing of configuration and data to Proton Drive (I already track my server config files using Git).

Thank you if you had the patience to read through all of this; do not hesitate to post or DM me recommendations or questions.

Stay tuned!

[0]: /articles/20251110-blog-server-hacking-part-one
[1]: /articles/20260120-blog-server-hacking-part-two
[2]: https://github.com/getsops/sops
[3]: https://developer.hashicorp.com/vault/docs/secrets/cubbyhole
[4]: https://argoproj.github.io/cd/
templates/shortcodes/picture.html

```html
{% set image_url = get_url(path=path) %}
<div {% if class %}class="{{class}}" {% endif %}>
  <img src="{{ image_url }}">
</div>
```
