---
type: "blog-post"
title: "Evolving Software: Embracing AI-Driven Development"
description: "Dive into the world of AI-driven software development as we explore a system that evolves with AI capabilities and context. Learn how starting with a minimal viable product and gradually increasing responsibility can lead to AI-managed software systems. This blog post delves into the challenges of context limitations and expansion, and discusses potential solutions and strategies to optimize AI-generated code. Join us as we envision the future of AI-managed software systems and their potential for transforming the software development landscape."
draft: false
date: "2023-04-08"
updates:
  - time: "2023-04-08"
    description: "first iteration"
tags:
  - '#blog'
  - '#ai'
  - '#software-development'
  - '#ai-driven-development'
  - '#evolving-software'
  - '#GPT-4'
authors:
  - "kjuulh"
---

## Introduction
|
||||
|
||||
In this post, we'll explore a system of software development that allows for
|
||||
evolution and greater responsibilities as AI capabilities and context grow.
|
||||
Unlike emergent AI functions, where native functions interact with AI
|
||||
capabilities through an interface, this approach enables AI to build and
|
||||
maintain its own responsibilities based on a set of goals or directives.
|
||||
|
||||
## Initial thoughts

The AI generative model/system would have a set of goals and requirements to
fulfill. It would build the initial version of the code, or spawn sub-AIs to
build capabilities for it.

It would handle requirements as they come in, and might even be set up to
improve the code automatically: updating to newer libraries, improving
performance, and making the code more maintainable.

## Starting Small: A Minimal Viable Product

Let's begin with a single function or unit of work, similar to what you'd find
in a unit test. The AI generative model would be responsible for this function,
but not as a black box. Instead, it would resemble the following:

```rust
enum SomeFunctionError {
    ...
}

struct SomeModel {
    ...
}

fn some_function() -> Result<SomeModel, SomeFunctionError> {
    let resp = call_api().map_err(SomeFunctionError::ServerErr)?;

    let content: SomeModel = resp.content()?;

    Ok(content)
}
```

As more requirements arise, the generative system can be informed of them,
allowing it to evolve the function or system accordingly:

```rust
enum SomeFunctionError {
    ...
}

struct SomeModel {
    ...
}

fn some_function() -> Result<SomeModel, SomeFunctionError> {
    let resp = call_api().map_err(SomeFunctionError::ServerErr)?;

    let content: SomeModel = resp.content()?;

    log_metrics(&content)?; // new

    Ok(content)
}
```

The generative model would automatically refresh its context when it cycles,
allowing developers to directly modify the code without any runtime magic.

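That refresh cycle could be sketched roughly as follows. This is a minimal, hypothetical sketch: `GenerativeCycle`, `cycle`, and the marker it appends are all illustrative stand-ins, and a real system would prompt a model instead of appending a comment.

```rust
// Hypothetical sketch of the refresh cycle: before generating, the system
// re-reads the current on-disk source, so edits made by developers in the
// meantime become part of the next cycle's context. All names are illustrative.
struct GenerativeCycle {
    context: String,
}

impl GenerativeCycle {
    // Refresh context from the file's current contents, then produce the
    // next revision. A real system would prompt a model here; this stand-in
    // only appends a marker for the satisfied requirement.
    fn cycle(&mut self, on_disk_source: &str, requirement: &str) -> String {
        self.context = on_disk_source.to_string(); // no runtime magic: just re-read
        format!("{}\n// satisfied: {}", self.context, requirement)
    }
}
```
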
## Scaling Up: Introducing More Responsibility

As the capabilities and context of the AI model evolve, abstraction levels can
be increased, allowing each AI layer to manage its own capabilities. The
hierarchy would look like this:

`a service has modules, which have files.`

Each file maintains its own context and responsibility within a module, which
itself is a single AI instance. The primary AI module can direct and query
sub-AIs for their capabilities, prompting them to fix bugs, add features, and
even spawn new AIs for emerging requirements.

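That hierarchy could be modeled with a few simple types. This is a hypothetical sketch, not a real implementation: `AiFile`, `AiModule`, `AiService`, and `capabilities` are all illustrative names I've assumed for the idea above.

```rust
// Hypothetical sketch of the service -> module -> file hierarchy, where each
// file carries its own context, and the service-level AI can query its
// sub-AIs for their capabilities.
struct AiFile {
    path: String,
    context: String, // this file's own context and responsibility
}

struct AiModule {
    name: String,
    files: Vec<AiFile>,
}

struct AiService {
    modules: Vec<AiModule>,
}

impl AiService {
    // The primary AI querying its sub-AIs for what they are responsible for.
    fn capabilities(&self) -> Vec<String> {
        self.modules
            .iter()
            .flat_map(|m| m.files.iter().map(move |f| format!("{}::{}", m.name, f.path)))
            .collect()
    }
}
```
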
## Interaction: System Level and Public API

Interaction with the AI should be possible both at the system level and via a
public API. Primary engineers can prompt the AI directly, enabling it to update
its goals and delegate tasks to its child systems.

Through a public API like GitHub, the AI would have its own user account,
allowing developers to mention it or assign it to issues. The AI would then
handle the issue directly: offering help, closing it, or fixing the submitted
bug.

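The issue-handling side of that public API could be sketched as a small dispatch. This is purely illustrative: the event and action types are assumptions for the sake of the example, not a real GitHub API.

```rust
// Hypothetical sketch of the public-API side: the AI's own account receives
// issue events and decides how to respond. Event and action names are
// illustrative, not a real GitHub API.
enum IssueEvent {
    Mentioned { issue: u64, body: String },
    Assigned { issue: u64 },
}

enum AiAction {
    Comment { issue: u64, body: String },
    OpenFixPr { issue: u64 },
}

fn handle(event: IssueEvent) -> AiAction {
    match event {
        // Mentions that read like questions get a helpful comment.
        IssueEvent::Mentioned { issue, body } if body.contains('?') => AiAction::Comment {
            issue,
            body: "Here's how you can work around this for now.".to_string(),
        },
        // Other mentions are acknowledged with a status comment.
        IssueEvent::Mentioned { issue, .. } => AiAction::Comment {
            issue,
            body: "Thanks, I'll look into this.".to_string(),
        },
        // Being assigned a bug leads straight to a fix PR.
        IssueEvent::Assigned { issue } => AiAction::OpenFixPr { issue },
    }
}
```
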
## A Thought Experiment: Real-World Viability

While this concept requires testing in real-world scenarios, tools like AutoGPT
and GPT4All could potentially be adapted for this purpose. The groundwork laid
by AutoGPT makes integration with existing systems like Git, GitHub, and web
search feasible, along with delegation and supervision tasks.

## The Future of AI-Managed Software Systems

An automated AI-managed software system may soon become a reality, and this post
outlines a potential model for incrementally increasing AI responsibility as its
capabilities grow. Although AI models are currently intelligent and capable,
their context and long-term memory are not as mature, making a gradual model
more suitable for implementation.

A practical example will follow as I experiment more.

### Reflecting on the AI Experience

Working with these AI models has yielded surprising results. Initially, I
anticipated that AI would generate obscure and difficult-to-maintain code.
However, the opposite has proven true: AI can create incredibly readable and
maintainable code. The key is providing concise and directed requirements, as
the AI is quite adept at discerning nuances within them and taking appropriate
action.

The primary challenges I face involve context limitations and context expansion
(acquiring new knowledge). The current context for models like ChatGPT or GPT-4
is quite restricted, with a maximum of 32k tokens (around 20k words). This
constraint must accommodate all the directives driving the generative software
system, its acquired knowledge, and any new requirements.

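The arithmetic of that constraint is simple but unforgiving. As a back-of-the-envelope sketch (the split between directives and knowledge is an illustrative assumption, not a measured figure):

```rust
// Directives, acquired knowledge, and new requirements all share one
// 32k-token window; whatever the first two consume is lost to the third.
const CONTEXT_TOKENS: usize = 32_000;

// Tokens left over for new requirements once directives and knowledge are in.
fn tokens_left_for_requirements(directive_tokens: usize, knowledge_tokens: usize) -> usize {
    CONTEXT_TOKENS.saturating_sub(directive_tokens + knowledge_tokens)
}
```
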
The central issue is the lack of an easy way for AI to gain knowledge without
exceeding its context cache. While GPT could read an entire library's source
code to understand it, doing so would result in a biased perspective based on
that specific implementation. Alternatively, GPT could read a library's API, but
there is no standard method that's general enough for our use case. Developing
an ingestion function for each language, package manager, and documentation
system would be necessary.

A practical solution involves using AI to optimize context for another AI. In
other words, one AI fetches and digests the documentation, then compresses it as
succinctly as possible for another AI to use. While this approach may not be
perfect, as the AI is not specifically designed to optimize for another AI, it
offers a promising workaround.

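The two-stage workaround could be sketched like this. The summarizer is a stand-in closure I've assumed for illustration; in a real system it would be a model call that digests each documentation chunk.

```rust
// Hypothetical sketch: one "AI" digests raw documentation chunks into a
// compressed summary, which is all the second AI ever sees.
fn compress_for_peer<F>(raw_docs: &[&str], summarize: F) -> String
where
    F: Fn(&str) -> String,
{
    raw_docs
        .iter()
        .map(|&doc| summarize(doc)) // stage one: digest each chunk
        .collect::<Vec<_>>()
        .join("\n") // the compressed context handed to the second AI
}
```
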
Long-term storage is another viable option that I plan to explore. However, its
effectiveness in practice and the extent of context it can restore remain to be
seen.