1 Commits

Commit ca1ec073dc: fix(deps): update tailwindcss monorepo to v4 (2026-01-21 01:40:28 +00:00)
Some checks failed:
continuous-integration/drone/push: Build encountered an error
continuous-integration/drone/pr: Build encountered an error
6 changed files with 432 additions and 824 deletions

View File

@@ -8,7 +8,7 @@ edition = "2021"
[dependencies]
chrono = "0.4.26"
color-eyre = "0.6.2"
-dagger-sdk = "0.20.0"
+dagger-sdk = "0.18.3"
eyre = "0.6.8"
tokio = { version = "1.31.0", features = ["full"] }
tokio-scoped = "0.2.0"

View File

@@ -16,7 +16,7 @@ We're now well into 2026, and it has been a while since I've written a blog post
In this post I am going over a few projects I've written for myself and for work, often because I have difficulty separating the two. I simply like to produce stuff that is useful, and that can often be used at work as well.
-As it has been a while since I've posted anything, and if you've read previous posts of mine, you might be surprised to know that I am now working in the data space, and that is where my focus has been since end 2024 until now.
+As it has been a while since I've posted anything, and if you've read previous posts of mine, you might be surprised to know that I am not working in the data space, and that is where my focus has been since end 2024 until now.
Let's jump straight into the projects, shall we?

View File

@@ -1,127 +0,0 @@
---
type: blog-post
title: "Tales of the Homelab I: Moving is fun"
description: We all make mistakes, here is one of mine as I shared the tales of my homelab hobby. Revenge of the SSDs
draft: false
date: 2026-01-23
updates:
- time: 2026-01-23
description: first iteration
tags:
- "#blog"
- "#homelab"
---
I love my homelab. It is an amalgamation of random machines, both efficient and not, hosted and not, pretty janky overall. A homelab reflects a lot about what kind of operator you are. It's a hobby, and we all come from different backgrounds with different interests.
Some like to replace applications when Google kills them, some like to tinker and nerd out about performance, others like to build applications. I like to own my data, kid myself into believing it's cheaper (it isn't; electricity and hardware ain't cheap, y'all), and I like to just build stuff, if that wasn't apparent from the previous post.
A homelab is a term that isn't clearly defined. To me, it's basically the meme:
> Web: here is the cloud
> Hobbyist: cloud at home
It can be anything from a Raspberry Pi, to an old Lenovo ThinkPad, to a full-scale rack with enterprise gear and often several of those states exist at the same time.
My homelab is definitely in that state: various Raspberry Pis, mini PCs, old workstations, network gear, etc. I basically have two sides to my homelab. One is my media / home-related stuff; the other is my software brain, with PCs running Kubernetes, Docker, this blog, and so on.
It all started with one of my mini PCs. It has a few NVMe drives and runs Proxmox (basically a virtual machine hypervisor datacenter at home). It runs:
* Home Assistant, where it all started: I needed an upgrade from running it on a Raspberry Pi
* MinIO (S3 server)
* Vault (secrets provider)
* Drone (CI runner)
* Harbor...
* Renova...
* Zitadel...
* Todo...
* Blo...
* Gi...
* P...
In total: **19 VMs**.
You might be saying, and I don't want to hear it, that this is simply too many. A big, glaring single point of failure. Foreshadowing, right there.
My other nodes run highly available Kubernetes with replicated storage and so on. They do, however, depend on the central node for database and secrets.
## Moving
So, I was moving, and a little bit stressed because I was starting a new job at the same time (the same day, idiot). I basically packed everything into boxes / the back of my car and moved it.
It took about a week before I got around to setting up my central mini PC again, as I simply began to miss my Jellyfin media center filled with legally procured media, I assure you.
I didn't think too much of it. I plugged it in on top of a kitchen counter, heard it spin up... and nothing came online. I've got monitoring for all my services, and none of it resolved. Curious.
I grabbed a spare screen and plugged it in.
```bash
systemd zfs-import.want: zfs pool unable to mount zfs-clank-pool
```
Hmm. Very much *hmm*. Smells like hardware failure, but no panic yet.
I had an SSD in the box, the one used for all the VM volumes. I'd noticed it had been a little loose before, but it hadn't been a problem. The enclosure is meant for a full-size HDD, not a smaller SSD.
I tried reseating the SSD. No luck.
Slightly panicky now, I found another PC and plugged the SSD into that to check whether it was just the internal connector.
Nope. Nope. Dead SSD. Absolutely dead.
The box wouldn't boot without the ZFS pool, so I needed a way to stop that from happening. Using a live-boot Linux USB, I could disable the ZFS import and reboot.
The Proxmox UI, however, was a bloodbath.
**0/19 VMs running.**
F@ck.
As it turns out, there's sometimes a reason we build the contingencies we do professionally: high-availability setups, 3-2-1 backup strategies, etc. Even though my services had enjoyed ~99% uptime until then, the single point of failure struck, leaving a lot of damage.
The way I had `designed` my VM installations was by using a separate boot drive and volume drive. This is a feature of KVM / Proxmox and allows sharing a base OS boot disk while separating actual data. It's quite convenient and keeps VMs slim.
My Debian base image was about 20 GB. That would've been 20 GB × 19 VMs. Not terrible, and honestly I would've paid that cost if I'd been paying attention.
Instead, I was left with VMs that wouldnt boot because their boot disk was gone. Like a head without a body. [A dog without a bone](https://youtu.be/ubWL8VAPoYw?si=iDd3Xk6NCkF1UkRV).
After a brief panic, actually quite brief, I checked what mattered first: backups. And yes, the important things (code in Gitea, family data) were all backed up and available. I should've tested my contingencies better, but at least monitoring worked.
I restored the most important services on one of my old workstations that I use for development.
I *did* have backups of the VMs... but they were backed up to the same extra drive that had failed.
That was dumb.
However, I had a theory. I could replace the missing boot disks with new ones and reattach them to the existing VM data disks. Basically, give the dog its bone back.
It was not fun but I managed to restore Matrix, Home Assistant, this blog, Drone, PostgreSQL, and Gitea. Those were the ones I cared about most and that were actually recoverable. The rest had their data living exclusively on the dead disk.
I may or may not share how I fixed it. It's been a while, and I'd have to reconstruct all the steps. So probably not.
At this point, my Kubernetes cluster was basically *borked* (if you know, you know). All the data was there, but none of the services worked; most of them depended on secrets from Vault, which was gone.
So I had to start over. Pretty much.
It wasnt a huge loss, though. All my data lived in Postgres backups, and all configuration was stored GitOps-style in Gitea.
## Postmortem
I never fully restored all the VMs, and that's fine. I *could* have, but this was also a good opportunity to improve my setup and finally move more things onto highly available compute. It was also a chance to replace components I wasn't happy with. Basically the eternal cycle of a homelab.
Harbor was one of them. It's heavy and fragile. Basically, all my Java services had to go. Not because I hate Java, but because they're often far too resource-intensive for a homelab running on mini PCs. I can't have services consuming all RAM and CPU for very little benefit.
Since then, I've significantly improved my backup setup. I now use proper mirrored RAID setups on my workstations for both workloads and backups, plus an offsite backup.
> Fun fact: as I was building my new backup setup, I had another of these SSDs fail on me. That is 2/3 of my Samsung EVO SSDs; I don't think I am going to be buying these again.
* ZFS with zrepl
* Borgmatic / BorgBackup for offsite
* PostgreSQL incremental backups with pgBackRest
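As a concrete illustration of the offsite leg, here is a minimal borgmatic config sketch in the flat format of borgmatic 1.8+; the source paths, repository URL, label, and retention numbers are hypothetical placeholders, not the actual setup described above.

```yaml
# /etc/borgmatic/config.yaml -- minimal sketch (borgmatic >= 1.8 flat format).
# All paths and the repository URL are hypothetical placeholders.
source_directories:
    - /tank/services
    - /tank/family

repositories:
    - path: ssh://user@offsite.example.com/./backups.borg
      label: offsite

# Retention: how many pruned archives to keep per period.
keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```

A config like this, run from cron or a systemd timer, gives the create/prune cycle that makes the offsite copy in a 3-2-1 strategy actually happen automatically.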
Everything is monitored. I also replaced five different Grafana services with a single monitoring platform built on OpenTelemetry and SigNoz. It works well, though replacing PromQL with SQL definitely has some growing pains.
In the next post, I'll probably share how I do compute and Kubernetes from home, and maybe another homelab oops, like the time I nearly lost all my family's Christmas wishes 😉
I swear I'm a professional. But we all make mistakes sometimes. What matters is learning from them and fixing problems even when they seem impossible. I am also not a millionaire, so for my homelab I have neither the budget nor the time to build fault-tolerant services. I try my best, especially for my own software, which I've never had problems with, but many other services just aren't built for high availability, require very high resources, complex setups, or simply an enterprise license. I put in the effort where it is most fun and rewarding to work, and that is what having a homelab is all about.
Have a great Friday, and I hope to see you in the next post.

View File

@@ -1,157 +0,0 @@
---
type: blog-post
title: "Incidental Complexity"
description: |
  Complexity is a required aspect of developing products, especially in software, where complexity is sometimes inherited, molded, and incidental.
  In this post we discuss incidental complexity in software development: history, present, and speculation on the future.
draft: false
date: 2026-01-25
updates:
- time: 2026-01-25
description: first iteration
tags:
- "#blog"
- "#softwaredevelopment"
- "#ai"
- "#complexity"
- "#techdebt"
---
Complexity is a required side effect of developing products, whether in processes, human relations, or especially software.
Complexity is sometimes inherited. You have to fix an issue in a legacy system, make it fit for purpose, and either expand or refactor its capabilities and internal workings.
Or it is incidental, which we will cover in this post: when you add more than you should to cover a need you think you have, only to discover that the result is over-engineered or simply redundant. That is not to say the code is unused, but it represents extra details that could have been simpler.
## Incidental complexity has always existed
Complexity is a feature, not a bug. One of the main benefits of software is that it is so malleable. It can be changed much more easily than hardware. It is far easier to change a line of code than to redesign a production line.
Software also has no real permanence outside of organizational or environmental factors. It must live up to requirements, both functional ones, such as fulfilling a purpose, and non-functional ones, such as handling certain constraints reliably. A trading algorithm, for example, has a specific shape because it must be fast enough to beat competitors.
Complexity is born when a decision, whether incidental or not, is added to a product. It might be a choice of programming language, coding style, infrastructure, and so on. None of these are inherently good or bad, but they exist on a scale of how easy they are to discover and change. This varies depending on context: firmware distributed once, over-the-air updates, data center services, or websites that change many times per day.
I define two subsets of complexity: emergent and incidental. The goal of most software is to be the minimal amount required to fulfill its requirements. This is not about lines of code. Rather, it is about how much context is embedded in the system for it to function. That is emergent complexity.
Incidental complexity arises when we apply too much context. These are details that are unnecessary or irrelevant to solving the problem.
### Example
A requirement might be to show a list of users on a webpage. The list comes from a database, and there are many ways to solve this.
- A simple solution is to fetch users one by one using the database client until the page is populated. If the service will never have more than ten users, this is fast enough, easy to change, and perfectly adequate.
- A complex solution might cache the total number of users, split queries into batches, fetch them in parallel, and stitch the results together.
Both solutions are valid. They produce the same output using different approaches.
If the database never has more than ten users, then much of the complex solution becomes unnecessary. Batch processing, parallelization, counting, and stitching are never used in practice.
All the logic required to handle thousands of users exists but is never actually exercised. If the batch size is 500, then no parallel processing occurs, no splitting is needed, and no stitching is required; the code still runs, but was never required in the first place.
There is no right or wrong answer. The first solution was built knowing that ten users was the maximum, so it was designed to handle that within reason. It is also simple enough to change if this assumption ever changes. The second solution may have been built with the ambition of handling thousands of users and providing fast page loads.
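To make the contrast concrete, here is a minimal runnable sketch of both approaches; a plain dict stands in for the database, and all names (`DB`, `fetch_simple`, `fetch_batched`) are illustrative, not from any real client library.

```python
# Sketch of the two approaches: one-by-one fetching vs. count/batch/
# parallelize/stitch. The "database" is a plain dict for illustration.
from concurrent.futures import ThreadPoolExecutor

DB = {i: f"user-{i}" for i in range(10)}  # ten users, as in the example

def fetch_simple():
    """Fetch users one by one until the page is populated."""
    return [DB[uid] for uid in sorted(DB)]

def fetch_batched(batch_size=500):
    """Count, split into batches, fetch batches in parallel, stitch."""
    ids = sorted(DB)
    batches = [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]
    with ThreadPoolExecutor() as pool:
        chunks = pool.map(lambda batch: [DB[uid] for uid in batch], batches)
    return [user for chunk in chunks for user in chunk]

# With ten users and a batch size of 500 there is exactly one batch: the
# splitting, parallelism, and stitching all execute, but none are needed.
assert fetch_simple() == fetch_batched()
```

Both functions return the same ten users; the second simply carries machinery that this data set never exercises, which is the incidental part.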
Incidental complexity consists of additional details that have little or no effect on the actual output, whether functional or non-functional. You might argue that this is simply premature complexity. I define premature complexity as a step in the process that often leads to incidental complexity.
It is impossible to hit the mark perfectly. Complexity is subjective. For an experienced team with a wide toolbox, fetching one thousand items at once may be trivial and acceptable. For others, fetching items one by one may be simpler and safer.
There is no universally correct answer. What matters is that software is built to its requirements, with minimal additional details. A good solution should feel natural given the organizational context and problem domain.
Incidental complexity often appears in questions such as: Why does this use batching when there is only one item to fetch? Why does this use a strategy pattern when there is only one option?
Some of these complexities may become useful in the future, but at present they do not contribute meaningfully to solving the problem. You can likely think of many examples from your own career, whether created by you or inherited from others.
## Devil in the details
It can be very difficult to distinguish between emergent and incidental complexity. This is truly where the devil is in the details.
Continuing the previous example, it might be that every Easter, volunteers across Denmark are added to the database. Instead of 10 users, there are suddenly 1000. In that situation, the page might load with unsatisfactory performance without a sufficiently complex solution.
This gives complexity a form of permanence. It becomes difficult to remove because similar situations may have caused failures in the past. When performing rewrites or maintenance, teams often hesitate to remove such logic because it may have been added for a good reason.
Sometimes, you cannot know for certain. You have to take a risk. Doing so requires skill, experience, and intention. Risk is uncomfortable, especially in production systems.
In general, teams are more likely to preserve inherited complexity than complexity in newly written code. This is one reason why solving a subset of a problem from scratch is often easier than modifying an existing solution, but also often leaves out crucial details.
## Human vs. AI (LLM)
Humans and AI (Agents and LLMs) produce incidental complexity in similar ways, but with important differences.
A human applies personal knowledge and programming style to a problem and arrives at a solution. Humans are usually aware of the requirements and context, even if imperfectly. Incidental complexity arises from habits, preferences, and experience.
An AI has wide but fuzzy knowledge. It does not have taste, but it does have style, derived from aggregated training data and optimization for producing acceptable answers. Solving problems is part of that optimization, but so are clarity, verbosity, and perceived completeness.
AI systems often have limited understanding of the real context. Conveying full organizational and business context through text alone is extremely difficult. A large part of engineering work consists of deriving concrete requirements from incomplete information.
As a result, AI systems often produce solutions that appear to work but either omit crucial details or introduce functional and non-functional requirements that were never requested.
There is also the issue of ephemeral context. AI systems forget details over time, which can lead to severe degradation in performance. Even if this problem is solved, AI agents are still likely to produce different forms of incidental complexity than humans.
Humans and AI are tuned to different incentives. Humans are shaped by values, experience, and feedback, which leads to effective but relatively slow development. Large language models are trained on vast amounts of software of varying quality. They are optimized to generate outputs that are accepted, not necessarily correct.
This distinction is crucial. An acceptable answer is not the same as a correct one. This is comparable to a student lying about failing a class. The behavior exists because of incentives, and it is unlikely to disappear entirely.
AI systems are biased toward producing an answer even with limited context. They are also optimized to minimize follow-up questions. This often results in confident but incomplete solutions.
This tendency often appears in how AI systems structure their responses. They frequently add extensive comments explaining obvious behavior, regardless of complexity. When producing text, they default to a familiar pattern: an introduction, several bullet points, and a conclusion.
These tendencies can be influenced, but the underlying style remains consistent. The same applies to programming output. Given no examples, an agent will typically produce an average, generic solution.
At the current stage of development in 2026, AI systems often add details that are irrelevant to the actual context. Because they operate in an information vacuum, they compensate by over-specifying solutions.
A similar outcome would occur if you gave a contractor two paragraphs of instructions and then sent them away for a month without feedback. The resulting solution would likely contain many assumptions and unnecessary features.
AI agents behave in a comparable way. Because of their training, they develop a distinctive style that often introduces superfluous components. Even more problematically, they tend to treat their own incidental complexity as a feature rather than as a liability.
They also struggle to distinguish between emergent and incidental complexity.
This often forces operators to continuously refine and correct the agent's output by providing more context. In practice, this interaction often looks like this:
- Human: Create a web page to display a list of users.
- Agent: Here is a webpage that shows a list of users, including name, birthdate, username, and email, with a details page.
- Human: I did not ask for a details page, and users do not have usernames or emails. They only have a name and an ID.
- Agent: Compacts context.
- Agent: Here is a list with name and ID, and no details page.
The result appears correct, but hidden complexity may remain. The database schema may still contain unused fields. Migration logic may exist for data that will never be populated. Authentication hooks or unused endpoints may still be present.
That is incidental complexity.
The agent may eventually arrive at the correct solution through repeated refinement, either with human guidance or assistance from other agents. However, this raises the question of cost.
## The Cost
It is not uncommon for such workflows to leave behind multiple unused tables, unconnected components, dormant endpoints, and unnecessary abstractions.
I generally avoid focusing on lines of code, but in practice AI-generated systems often contain significantly more source code, comments, helper functions, and documentation than necessary. Much of this material reflects a distorted view of what is important for future maintainers, whether human or machine.
Ironically, this also makes the system harder for future AI agents to work with. Context is critical for large language models. Filling a codebase with irrelevant details consumes context window capacity and reduces the effectiveness of future interactions.
AI agents are a genuine productivity multiplier. They can produce text and code far faster than humans. They can explore solution spaces quickly and enable many new groups to build software.
This includes small organizations that need custom tools but cannot afford dedicated engineers, as well as engineers who can now serve smaller customers efficiently.
However, the solutions produced by AI agents frequently contain hidden technical debt in the form of incidental complexity. In many cases, this debt outweighs the initial productivity gains by constraining future development.
Systems become harder to understand, harder to modify, and harder to extend. Over time, the accumulated friction erodes the benefits that automation originally provided.
The terms are often conflated, but with AI systems incidental complexity becomes accidental complexity: something that happens because of a mistake or negligence, or, in the case of agents, unintentionally.
## My view of the future
AI agents are not going away. They solve problems at lower marginal cost than human engineers, and this advantage is decisive.
However, their output requires intentional correction and refinement. This can be done by humans, by other agents, or by hybrid workflows. Without such intervention, complexity will continue to accumulate.
I believe that, without additional context and governance, autonomous systems will tend to produce more entropy. Problems may be solved in the short term, but systems will degrade in quality over time.
This is similar to sending a contractor to a remote cabin for two months with minimal guidance and then offering only a paragraph of feedback at the end. The solution may function, but its internal structure will reflect distorted priorities and hidden assumptions. Sending a village doesn't work either: you might end up with amazing, well-documented code, but if it is over-engineered and full of superfluous details, it doesn't matter.
Such systems often contain layers of incidental complexity that require deliberate effort to untangle.
For this reason, I believe software engineers will continue to play a central role in product development for the foreseeable future. That role will evolve, but it will not disappear.
Nothing so far suggests that autonomous systems can reliably produce sustained order from complexity. Until they can operate at the level of entire organizations, with deep contextual awareness and accountability, they will remain dependent on human input.
As long as AI systems must interact with human-created systems to fill their information gaps, engineers will remain essential to maintaining coherence, intent, and long-term quality. A side effect, however, is that we will have much more code going forward, with many bespoke components: as the cost of producing functionality goes to zero, so does the need for centralized services, and in turn the number of bespoke components will continue to climb if left unchallenged.
In the coming years and decades, I expect that software products and interactions will accumulate so much complexity that they will become indistinguishable from biological systems, even more so than they already are today. We will need agents to untangle the mess, but also surgical knowledge for when to refine a component to something known.

View File

@@ -14,10 +14,10 @@
},
"dependencies": {
"@tailwindcss/typography": "^0.5.9",
-"tailwindcss": "^3.3.1"
+"tailwindcss": "^4.0.0"
},
"devDependencies": {
"@catppuccin/tailwindcss": "^0.1.1",
-"@tailwindcss/cli": "^0.1.2"
+"@tailwindcss/cli": "^4.0.0"
}
}

yarn.lock (964 lines changed)
File diff suppressed because it is too large