Signed-off-by: kjuulh <contact@kjuulh.io>
---
type: blog-post
title: "Incidental Complexity"
description: |
  Complexity is a required aspect of developing products, especially in software, where complexity is sometimes inherited, molded, and incidental.
  In this post we discuss incidental complexity in software development: its history, its present, and speculation on its future.
draft: false
date: 2026-01-25
updates:
  - time: 2026-01-25
    description: first iteration
tags:
  - "#blog"
  - "#softwaredevelopment"
  - "#ai"
  - "#complexity"
  - "#techdebt"
---

Complexity is a required side effect of developing products, whether in processes, human relations, or especially software.

Complexity is sometimes inherited. You have to fix an issue in a legacy system, make it fit for purpose, and either expand or refactor its capabilities and internal workings.

Or it is incidental, which we will cover in this post: you add more than you should to cover a need you think you have, only to discover that the result is over-engineered or simply redundant. That is not to say the code is unused, but it represents extra details that could have been simpler.

## Incidental complexity has always existed

Complexity is a feature, not a bug. One of the main benefits of software is that it is so malleable. It can be changed much more easily than hardware. It is far easier to change a line of code than to redesign a production line.

Software also has no real permanence outside of organizational or environmental factors. It must live up to requirements, both functional ones, such as fulfilling a purpose, and non-functional ones, such as handling certain constraints reliably. A trading algorithm, for example, has a specific shape because it must be fast enough to beat competitors.

Complexity is born when a decision, whether incidental or not, is added to a product. It might be a choice of programming language, coding style, infrastructure, and so on. None of these are inherently good or bad, but they exist on a scale of how easy they are to discover and change. This varies depending on context: firmware distributed once, over-the-air updates, data center services, or websites that change many times per day.

I define two subsets of complexity: emergent and incidental. The goal of most software is to be the minimal amount required to fulfill its requirements. This is not about lines of code. Rather, it is about how much context is embedded in the system for it to function. That is emergent complexity.

Incidental complexity arises when we apply too much context. These are details that are unnecessary or irrelevant to solving the problem.

### Example

A requirement might be to show a list of users on a webpage. The list comes from a database, and there are many ways to solve this.

- A simple solution is to fetch users one by one using the database client until the page is populated. If the service will never have more than ten users, this is fast enough, easy to change, and perfectly adequate.
- A complex solution might cache the total number of users, split queries into batches, fetch them in parallel, and stitch the results together.

Both solutions are valid. They produce the same output using different approaches.

If the database never has more than ten users, then much of the complex solution becomes unnecessary. Batch processing, parallelization, counting, and stitching are never used in practice.

All the logic required to handle thousands of users exists but is never actually exercised. If the batch size is 500, then no parallel processing occurs, no splitting is needed, and no stitching is required. The code still runs, but it was never required in the first place.

There is no right or wrong answer. The first solution was built knowing that ten users was the maximum, so it was designed to handle that within reason. It is also simple enough to change if this assumption ever changes. The second solution may have been built with the ambition of handling thousands of users and providing fast page loads.
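
The contrast can be sketched in Python. This is a minimal, illustrative sketch, not a real implementation: `FakeDb` and its `get_user` method are stand-ins for whatever database client the service actually uses.

```python
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 500


class FakeDb:
    """Stand-in for a real database client (illustrative only)."""

    def get_user(self, uid):
        return {"id": uid, "name": f"user-{uid}"}


def fetch_users_simple(db, user_ids):
    # Fetch users one by one; trivially correct for a handful of users.
    return [db.get_user(uid) for uid in user_ids]


def fetch_users_batched(db, user_ids):
    # Count, split ids into batches, fetch each batch in parallel,
    # then stitch the per-batch results back together.
    batches = [user_ids[i:i + BATCH_SIZE]
               for i in range(0, len(user_ids), BATCH_SIZE)]
    with ThreadPoolExecutor() as pool:
        per_batch = pool.map(lambda b: [db.get_user(uid) for uid in b], batches)
    # With ten users there is exactly one batch: the parallelism and
    # stitching below execute, but never do any real work.
    return [user for batch in per_batch for user in batch]
```

With ten users, both functions return identical lists; the batched version's extra machinery is only ever exercised syntactically. That unused headroom is the incidental complexity.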

Incidental complexity consists of additional details that have little or no effect on the actual output, whether functional or non-functional. You might argue that this is simply premature complexity. I define premature complexity as a step in the process that often leads to incidental complexity.

It is impossible to hit the mark perfectly. Complexity is subjective. For an experienced team with a wide toolbox, fetching one thousand items at once may be trivial and acceptable. For others, fetching items one by one may be simpler and safer.

There is no universally correct answer. What matters is that software is built to its requirements, with minimal additional details. A good solution should feel natural given the organizational context and problem domain.

Incidental complexity often appears in questions such as: Why does this use batching when there is only one item to fetch? Why does this use a strategy pattern when there is only one option?
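
The strategy-pattern question can be made concrete with a deliberately over-built sketch. All names here are hypothetical, chosen only to illustrate the shape of the problem:

```python
from abc import ABC, abstractmethod


# Incidental complexity: a strategy interface with exactly one implementation.
class SortStrategy(ABC):
    @abstractmethod
    def sort(self, items):
        ...


class DefaultSort(SortStrategy):
    def sort(self, items):
        return sorted(items)


class Report:
    # The injection point exists, but nothing else is ever injected.
    def __init__(self, strategy: SortStrategy = DefaultSort()):
        self.strategy = strategy

    def render(self, items):
        return self.strategy.sort(items)


# The same output, without the indirection:
def render_report(items):
    return sorted(items)
```

Both produce identical output. The interface, the subclass, and the injection point add context a reader must absorb while contributing nothing to the result, at least until a second strategy genuinely exists.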

Some of these complexities may become useful in the future, but at present they do not contribute meaningfully to solving the problem. You can likely think of many examples from your own career, whether created by you or inherited from others.

## Devil in the details

It can be very difficult to distinguish between emergent and incidental complexity. This is truly where the devil is in the details.

Continuing the previous example, it might be that every Easter, volunteers across Denmark are added to the database. Instead of 10 users, there are suddenly 1000. In that situation, the page might load with unsatisfactory performance without a sufficiently complex solution.

This gives complexity a form of permanence. It becomes difficult to remove because similar situations may have caused failures in the past. When performing rewrites or maintenance, teams often hesitate to remove such logic because it may have been added for a good reason.

Sometimes, you cannot know for certain. You have to take a risk. Doing so requires skill, experience, and intention. Risk is uncomfortable, especially in production systems.

In general, teams are more likely to preserve inherited complexity than complexity in newly written code. This is one reason why solving a subset of a problem from scratch is often easier than modifying an existing solution, though it also often leaves out crucial details.

## Human vs. AI (LLM)

Humans and AI (Agents and LLMs) produce incidental complexity in similar ways, but with important differences.

A human applies personal knowledge and programming style to a problem and arrives at a solution. Humans are usually aware of the requirements and context, even if imperfectly. Incidental complexity arises from habits, preferences, and experience.

An AI has wide but fuzzy knowledge. It does not have taste, but it does have style, derived from aggregated training data and optimization for producing acceptable answers. Solving problems is part of that optimization, but so are clarity, verbosity, and perceived completeness.

AI systems often have limited understanding of the real context. Conveying full organizational and business context through text alone is extremely difficult. A large part of engineering work consists of deriving concrete requirements from incomplete information.

As a result, AI systems often produce solutions that appear to work but either omit crucial details or introduce functional and non-functional requirements that were never requested.

There is also the issue of ephemeral context. AI systems forget details over time, which can lead to severe degradation in performance. Even if this problem is solved, AI agents are still likely to produce different forms of incidental complexity than humans.

Humans and AI are tuned to different incentives. Humans are shaped by values, experience, and feedback, which leads to effective but relatively slow development. Large language models are trained on vast amounts of software of varying quality. They are optimized to generate outputs that are accepted, not necessarily correct.

This distinction is crucial. An acceptable answer is not the same as a correct one. This is comparable to a student lying about failing a class. The behavior exists because of incentives, and it is unlikely to disappear entirely.

AI systems are biased toward producing an answer even with limited context. They are also optimized to minimize follow-up questions. This often results in confident but incomplete solutions.

This tendency often appears in how AI systems structure their responses. They frequently add extensive comments explaining obvious behavior, regardless of complexity. When producing text, they default to a familiar pattern: an introduction, several bullet points, and a conclusion.

These tendencies can be influenced, but the underlying style remains consistent. The same applies to programming output. Given no examples, an agent will typically produce an average, generic solution.

At the current stage of development in 2026, AI systems often add details that are irrelevant to the actual context. Because they operate in an information vacuum, they compensate by over-specifying solutions.

A similar outcome would occur if you gave a contractor two paragraphs of instructions and then sent them away for a month without feedback. The resulting solution would likely contain many assumptions and unnecessary features.

AI agents behave in a comparable way. Because of their training, they develop a distinctive style that often introduces superfluous components. Even more problematically, they tend to treat their own incidental complexity as a feature rather than as a liability.

They also struggle to distinguish between emergent and incidental complexity.

This often forces operators to continuously refine and correct the agent’s output by providing more context. In practice, this interaction often looks like this:

- Human: Create a web page to display a list of users.
- Agent: Here is a webpage that shows a list of users, including name, birthdate, username, and email, with a details page.
- Human: I did not ask for a details page, and users do not have usernames or emails. They only have a name and an ID.
- Agent: Compacts context.
- Agent: Here is a list with name and ID, and no details page.

The result appears correct, but hidden complexity may remain. The database schema may still contain unused fields. Migration logic may exist for data that will never be populated. Authentication hooks or unused endpoints may still be present.
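
As a hypothetical sketch of what "appears correct" can leave behind, imagine the agent fixed the page but not the schema. The final query touches only `id` and `name`, while the table still carries the abandoned fields from the first attempt:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        -- Leftovers from the first attempt; never populated, never read:
        username TEXT,
        email TEXT,
        birthdate TEXT
    )
""")
conn.execute("INSERT INTO users (name) VALUES ('Ada')")

# The page only ever needs id and name, yet the schema (and any
# migrations, validators, and serializers built around it) still
# carries the unused columns.
rows = conn.execute("SELECT id, name FROM users").fetchall()
```

The page renders correctly, so nothing prompts a cleanup; the unused columns quietly acquire the same permanence as any other inherited complexity.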

That is incidental complexity.

The agent may eventually arrive at the correct solution through repeated refinement, either with human guidance or assistance from other agents. However, this raises the question of cost.

## The Cost

It is not uncommon for such workflows to leave behind multiple unused tables, unconnected components, dormant endpoints, and unnecessary abstractions.

I generally avoid focusing on lines of code, but in practice AI-generated systems often contain significantly more source code, comments, helper functions, and documentation than necessary. Much of this material reflects a distorted view of what is important for future maintainers, whether human or machine.

Ironically, this also makes the system harder for future AI agents to work with. Context is critical for large language models. Filling a codebase with irrelevant details consumes context window capacity and reduces the effectiveness of future interactions.

AI agents are a genuine productivity multiplier. They can produce text and code far faster than humans. They can explore solution spaces quickly and enable many new groups to build software.

This includes small organizations that need custom tools but cannot afford dedicated engineers, as well as engineers who can now serve smaller customers efficiently.

However, the solutions produced by AI agents frequently contain hidden technical debt in the form of incidental complexity. In many cases, this debt outweighs the initial productivity gains by constraining future development.

Systems become harder to understand, harder to modify, and harder to extend. Over time, the accumulated friction erodes the benefits that automation originally provided.

The two are often conflated, but with AI systems, incidental complexity shades into accidental complexity: complexity that arises from a mistake or negligence, or, in the case of agents, from a lack of intent.

## My view of the future

AI agents are not going away. They solve problems at lower marginal cost than human engineers, and this advantage is decisive.

However, their output requires intentional correction and refinement. This can be done by humans, by other agents, or by hybrid workflows. Without such intervention, complexity will continue to accumulate.

I believe that, without additional context and governance, autonomous systems will tend to produce more entropy. Problems may be solved in the short term, but systems will degrade in quality over time.

This is similar to sending a contractor to a remote cabin for two months with minimal guidance and then offering only a paragraph of feedback at the end. The solution may function, but its internal structure will reflect distorted priorities and hidden assumptions. Sending a village doesn't work either: you might end up with amazing, well-documented code, but if it is over-engineered and full of superfluous details, it doesn't matter.

Such systems often contain layers of incidental complexity that require deliberate effort to untangle.

For this reason, I believe software engineers will continue to play a central role in product development for the foreseeable future. That role will evolve, but it will not disappear.

Nothing so far suggests that autonomous systems can reliably produce sustained order from complexity. Until they can operate at the level of entire organizations, with deep contextual awareness and accountability, they will remain dependent on human input.

As long as AI systems must interact with human-created systems to fill their information gaps, engineers will remain essential to maintaining coherence, intent, and long-term quality. A side effect, however, is that we will have much more code going forward, with many bespoke components. As the cost of producing functionality approaches zero, the need for centralized services will shrink, and the number of bespoke components will continue to climb if left unchallenged.

In the coming years and decades, I expect that software products and interactions will accumulate so much complexity that they will become indistinguishable from biological systems, even more so than they already are today. We will need agents to untangle the mess, but also surgical knowledge for when to refine a component into something known.