---
type: blog-post
title: "Incidental Complexity"
description: |
  Complexity is an unavoidable aspect of developing products, especially software. Complexity is sometimes inherited, sometimes molded, and sometimes incidental.
  In this post we discuss incidental complexity in software development: its history, its present, and some speculation about its future.
draft: false
date: 2026-01-25
updates:
  - time: 2026-01-25
    description: first iteration
tags:
  - "#blog"
  - "#softwaredevelopment"
  - "#ai"
  - "#complexity"
  - "#techdebt"
---

Complexity is an unavoidable side effect of developing products, whether in processes, human relations, or, as we will discuss in this post, software.

Complexity is sometimes inherited: you have to fix an issue in a legacy system, make it fit for purpose, and expand or refactor its capabilities and internal workings. And sometimes it is incidental, which is what this post covers: you add more than you should to cover a need you think you have, and it turns out to be over-engineered or simply redundant. That is not to say the code isn't used at all, but it is extra work that could have been simpler: handling of edge cases that are never tested, and so on.

## Incidental complexity has always been a thing, and always will be

Complexity is a feature, not a bug. One of the main benefits of software is that it is so malleable: it can be changed much more easily than, say, a hardware product. It is simply much easier to change a line of code or flash some firmware than to redo a production line for a hardware trace. Software also has no permanency outside of organisational or environmental factors; it merely has to live up to some requirement. That requirement may be functional (it has to fulfill a purpose) or non-functional (it has to take a certain shape to reliably handle a certain issue, like a trading algorithm shaped by the need to be fast enough to beat its competitors).

Complexity is born when a decision, incidental or not, is added to a product. It might be a choice of programming language, coding style, infrastructure, and so on. None of these are inherently good or bad, but they are choices that exist on a scale of how easy they are to discover and change for that given product. Where they fall on that scale depends on many factors: whether it is firmware distributed once or over the air, software living in a data center, or a website that can change many times per day.

I define two subsets of complexity: emergent and incidental. The goal of most software is to be the minimal amount of software required to fulfill the set of requirements applied to the product. This is not measured in lines of code! Rather, it is a measure of how much context we apply to the code for it to do its job. That is emergent complexity. Incidental complexity is when we apply too many details to that context, details that are not relevant or necessary to satisfy the requirements.

For example: a requirement might be to show a page of users on a webpage. The list is designed to come from a database, and there are infinite ways of solving this problem.

1. A casual solution might simply use the database client to fetch items one by one until we can populate the webpage. In this context it fits fine, because the service will never have more than 10 users: it is fast enough to be valid, small enough to be changeable, and just right for this use-case.
2. A complex solution would cache the number of users in the database, divide it by the maximum number of items the database supports returning at once, query all the batches in parallel, and then stitch the results back together to populate the webpage.

You might like one solution better than the other, or prefer a third; it doesn't matter. In this case both solutions are valid answers to the problem, and both produce the same result, just not with the same calculations. If we follow the scenario that the database will never have more than 10 users, all the logic for handling many thousands of users in the complex solution, while executed, is never actually exercised. If the batch size is 500 or so, we never do any parallel processing, we never need the count because we never need to split, and we never need to stitch anything together.
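
To make the comparison concrete, here is a small sketch of both solutions against a hypothetical in-memory database client. `FakeUserDb` and its `count`/`fetch` methods are invented for illustration; a real client's API will differ.

```python
from concurrent.futures import ThreadPoolExecutor

class FakeUserDb:
    """Hypothetical stand-in for a real database client."""
    def __init__(self, users):
        self._users = users

    def count(self):
        return len(self._users)

    def fetch(self, offset, limit):
        return self._users[offset:offset + limit]

def fetch_users_simple(db):
    """Solution 1: fetch users one by one until the database runs out."""
    users, offset = [], 0
    while True:
        batch = db.fetch(offset, limit=1)
        if not batch:
            return users
        users.extend(batch)
        offset += 1

def fetch_users_batched(db, batch_size=500):
    """Solution 2: count, split into batches, fetch in parallel, stitch."""
    offsets = range(0, db.count(), batch_size)
    with ThreadPoolExecutor() as pool:
        chunks = pool.map(lambda o: db.fetch(o, limit=batch_size), offsets)
    return [user for chunk in chunks for user in chunk]

db = FakeUserDb([{"id": i, "name": f"user-{i}"} for i in range(10)])
assert fetch_users_simple(db) == fetch_users_batched(db)
```

With only 10 users and a batch size of 500, the second function's counting, splitting, and stitching machinery all runs but never earns its keep: there is exactly one batch.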

There is no right or wrong answer. The first solution was built knowing that 10 users was the maximum, so it handles that within reason, and it is simple enough to be changed if the situation ever changes. The second solution could have been built with the ambition of handling thousands of users at once while needing a performant page load.

Incidental complexity is when there are additional details that have minimal or no effect on the actual output, non-functional or otherwise. You might be screaming at your computer that this is just premature complexity; I define premature complexity as a step in the process of ending up with incidental complexity.

It is impossible to hit the mark 100% correctly, as complexity is subjective. To an experienced team with a wide toolbox, simply fetching 1000 items because they know it is within budget might be a small price; to others it might be simpler to just fetch one by one. There is no single correct answer. What is important is that software is built to requirements, with minimal additional detail. It should feel like the "natural" solution to the problem given the organisational context and problem domain.

Incidental complexity is those details that show up as questions like: why is the solution using batching when there is only one item to fetch? Why does this use a strategy pattern when there is only one option? Some of these complexities may eventually come to fruition, but at the moment they are not a useful part of the solution. You can probably think of endless examples that you've produced over your career, or inherited.
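
As a hypothetical illustration (all names invented), here is what the strategy-pattern question looks like in code, next to the version the requirement actually asks for:

```python
# Incidental: a full strategy pattern, with exactly one strategy ever implemented.
class SortStrategy:
    def sort(self, items):
        raise NotImplementedError

class AlphabeticalSort(SortStrategy):
    def sort(self, items):
        return sorted(items)

class UserList:
    def __init__(self, strategy=None):
        # The indirection exists "in case we need another ordering someday".
        self.strategy = strategy or AlphabeticalSort()

    def render(self, users):
        return self.strategy.sort(users)

# Emergent: the requirement is only "show the users sorted by name".
def render_users(users):
    return sorted(users)

assert UserList().render(["bo", "al"]) == render_users(["bo", "al"])
```

Both produce the same output; the first just carries three extra names and a layer of indirection that every future reader has to hold in their head.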

## Devil in the details

It can be very difficult to tell the difference between emergent and incidental complexity; the devil is literally in the details. To continue the previous example: it might be that at Easter every year, volunteers across Denmark are added to the database, so instead of 10 users there are now 10,000, and the page would be impossible to load unless a sufficiently complex solution exists.

This leaves complexity with some permanency. It can be very difficult to reduce, because we've often been bitten in the past when doing rewrites or simple house-keeping: the previous team probably left that code there for some reason.

You can never know for sure, and sometimes you have to take the gamble; it takes real skill, luck, and intention to do so. It is a risk, and risk is hard to stomach.

In general we are more prone to leave complexity in as a feature of the system when it is inherited than when we are writing new code, which is one of the reasons it is often easier to solve a subset of a problem from scratch than to update an inherited solution.

## Human vs. AI (LLM)

Humans and AI produce incidental complexity in similar but crucially different ways. A human will apply their knowledge and style of programming to a problem and end up with a given solution. A human is often keenly aware of the requirements and the context, though of course still fuzzy on the exact details. A human will produce incidental complexity simply because they have a certain style of solving problems given their knowledge.

An AI has wide, fuzzy knowledge. It doesn't have taste, but it has a style, because it carries an amalgam of knowledge and has been tuned to produce the most readily acceptable answers. AIs have been optimized for getting their answers chosen; solving the problem is part of that selection, but so are clarity, verbosity, and the degree to which the problem should be solved. An AI often has, and likely will continue to have, a weak grasp of the requirements and context at hand. It is simply too difficult to convey context over the text media we have today; a large part of the engineering profession is the engineer deriving concrete requirements from the client's context.

This means that, given their information vacuum, AIs often end up producing solutions that at the outset look like they work, and sort of do, but either leave out crucial details or introduce functional or non-functional behavior that wasn't part of the original ask.

> Currently there is also the problem of AIs' ephemeral context: an AI simply forgets details after a while, which often leads to a catastrophic loss of performance. I won't go into this here; even if it is solved, I still see AI agents producing incidental complexity that differs from what humans produce.

AIs and humans are tuned to different needs. A human is roughly tuned by their values, experience, input, and context, which makes them an effective but relatively slow producer of software. LLMs, on the other hand, are trained on all the software their producers could get their hands on, no matter the quality, and are tuned to produce answers that get accepted, whether code or text. This is often glossed over but is crucially important: they need an answer that can be accepted, not the correct answer. It is the same as a human lying to their parents about failing a grade. Lying is a feature of how LLMs are produced, and it won't go away (speculation, but I've seen nothing that would stop this behavior). They're biased towards action even with little to no context, and they're tuned to require as little follow-up as possible to arrive at an answer.

We often see this show up as LLMs sprinkling comments everywhere explaining what the code is doing, no matter its complexity; or, when prompted for text, defaulting to a paragraph, a few bullet points, and then a closing paragraph. These habits can be influenced, but the style of communication is pretty much always followed. The same goes for programming. Given no examples, the agent will produce a fairly average solution to the problem; however, from what I've found at this point in time (2026), AIs will also sprinkle additional details onto a solution that have no relevance to the context, because the agent is operating in a near-total information vacuum.
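
A hypothetical (and only slightly exaggerated) example of the comment-sprinkling style, with both functions invented for illustration:

```python
# Agent-flavored: every line narrated, regardless of complexity.
def add_user(users, user):
    # Create a copy of the list to avoid mutating the input list
    result = list(users)
    # Append the new user to the copied list
    result.append(user)
    # Return the resulting list containing the new user
    return result

# The same behavior, with comments reserved for things that earn them.
def add_user_plain(users, user):
    return [*users, user]

assert add_user(["al"], "bo") == add_user_plain(["al"], "bo")
```

Neither version is wrong, but the narrated one adds tokens a future reader, human or LLM, has to wade through without learning anything.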

It is a bit like telling a contractor two paragraphs about a problem before sending them away for a month to work on it with no feedback: you're going to end up with some interesting solutions. Agents are similar, but given their tuning they produce a distinctive style that often leads to superfluous details. Especially today, given their limited context, they accept their own incidental complexity as a feature of the system, and, even more catastrophically, they can't differentiate well between emergent and incidental complexity.

This often leads to the operator having to give more context to an agent as it is running. It tends to show up like this:

1. Human: Create a web page to display a list of users
2. Agent: Here is a webpage that shows a list of users (including name, birthdate, username, and email) with a details page
3. Human: I didn't ask for a details page, and users don't have a username or email, just a name and an id.
4. Agent: compacting context
5. Agent: Here is a list of names and ids, with no details page

The agent looks to have come up with a workable solution, but it might have kept the birthdates in the database, just never filled out, because it assumed they were part of an existing migration. Incidental complexity right there. Or it might not have thought of authentication at all.

The agent might have come up with the right solution given additional context and refinement from humans or another agent, but at what cost? There might now be five tables of user ids, a details-page component that isn't hooked up anywhere, username and email endpoints for users, and much more. I don't like to talk about lines of code, but I've seen that AIs often lead to much more code, not just in lines of source but in comments, functions, documentation, and so on, often with a skewed view of what is important for whoever needs to pick up the solution next, whether that is an LLM or a human.

Funnily enough, LLMs end up producing code that is often more difficult for another LLM to pick up in another session. Context is king for an LLM, so filling the code with superfluous details makes the next LLM less effective by filling up its context window with superfluous tokens that much faster.

## The cost

AI agents are a superpower: they can type much faster than humans and come up with solutions to problems much faster than humans. Not to mention that they open the software space to many other demographics: a small shop wanting a custom tool but lacking the capital for a software engineer on staff, a software engineer becoming able to solve problems for smaller customers, and so on.

But the solutions they produce carry incidental complexity and hidden technical debt, sometimes outstripping the initial gain of solving the problem by shackling the solution's future potential.

## My view of the future

AI agents are here to stay; they simply solve problems at a lower cost than engineers. But the side effects of their solutions require intentional effort to clean up, whether that is an engineer going in and solving the problem or agents doing the same. I believe, however, that without additional context, autonomous systems will only produce more entropy: a problem might initially be solved, but it will become far worse over time if only autonomous systems are used.

Again, it is similar to sending a contractor to a cabin for two months over the winter to work away at a problem, and then giving only one paragraph of feedback on their solution. The hidden complexities and subtleties of their solution will be skewed, and it will potentially contain tons of hidden incidental complexity that requires intention to resolve.

I believe that software engineers will have a role in the development of software products for a long time. The role won't look the same, but nothing has shown me that autonomous systems have learned to produce order from entropy, no matter their capabilities. Until autonomous AI systems are able to work alongside humans one to one, or replace organisations completely, they will always have to interact with the work of humans to fill their information vacuum.