Configuration is King
The difference between drowning in complexity and thriving with innovation
On the surface, an AI project is no different from a classical software project. In both cases, success is a function of delivery speed, quality, and problem-solution fit. In other words: building the thing in time, building the thing right, and building the right thing. Screw any one up, and you’ve got a failed project on your hands.
In both domains, fast iteration cycles are the key to success. Build, measure, learn, repeat. It’s the agile mantra which is incredibly effective in the right environments. The critical difference is how one achieves fast iteration speeds in AI projects.
Working backwards, there are a few key killers of iteration speed in classical systems:
- Technical debt, which increases development time per feature and reduces the ability for asynchronous work on the code base.
- Red tape, where changes get endlessly debated, documented, and approved until the process becomes a bottleneck for actual development and deployment
- Ambiguity, where changes are requested, revoked, corrected, and misunderstood, leading to redundant work and frustration
Many more will come to mind, but they tend to be related to these core issues. What leadership often misunderstands is that the key to fast iteration is usually not pushing the gas pedal harder (more hours, more people), but releasing the brakes (reducing friction).
So how are AI projects different? Two answers.
The first answer: they aren’t any different. AI projects are in large part classical projects. They perform CRUD operations on databases, have authentication, integrations, infrastructure, pipelines, and all the other bells and whistles of a classical project. Just because a project has AI doesn’t mean you get to skip the fundamentals. If anything, they are even more important, because…
… AI projects have their own unique set of iteration speed killers. What makes AI projects different is the intense focus on experimentation and quality control. Any call to an LLM is effectively a call to a stochastic black box. You don’t know how it arrived at its result, and you can’t be sure you’ll get the same result again. The icing on the cake is that “best practices” change frequently: new patterns for working with LLMs emerge, old ones are invalidated, and model capabilities keep shifting.
The additional iteration speed killers for AI projects are:
- Lack of tools and processes, as organizations struggle to make the latest models available, define and communicate quality requirements, and come up with a way to manage risk in the AI environment
- Rigid architecture, because rather than simply adding to a system, AI features often need to replace, augment, or modify existing behavior, which requires a flexible architecture and implementation discipline
- Cumbersome experimentation, because more often than not, the only way to tell whether something will improve or degrade your system is to implement and test a version of it. Isolating changes, testing their impact, evaluating the results, and acting on them demands a high degree of engineering discipline
So, what does the title “Configuration is King” have to do with anything?
Configurability means that the core behavior of your system is not hard-coded, but controlled via configuration instead. For example, the configuration of an AI flow might look like:
- Rewrite the query using a configurable prompt template and model
- Perform semantic search with configurable parameters
- Re-rank results with a configurable prompt template and parameters
- Generate the final output using a configurable prompt template and model
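The four-step flow above could be captured in a declarative config object. Here is a minimal sketch; the dataclass names, prompt-template filenames, model names, and parameter values are all illustrative, not a real API:

```python
from dataclasses import dataclass, field

@dataclass
class StepConfig:
    """One configurable LLM step: which prompt template and model to use."""
    prompt_template: str
    model: str
    params: dict = field(default_factory=dict)

@dataclass
class FlowConfig:
    """The whole flow, mirroring the four steps described above."""
    rewrite: StepConfig       # query rewriting
    search_params: dict       # semantic search parameters
    rerank: StepConfig        # re-ranking of search results
    generate: StepConfig      # final answer generation

# A hypothetical configuration instance; every value here is a placeholder.
config = FlowConfig(
    rewrite=StepConfig("rewrite_query_v2.txt", "small-model"),
    search_params={"top_k": 20, "min_score": 0.35},
    rerank=StepConfig("rerank_v1.txt", "small-model", {"keep_top": 5}),
    generate=StepConfig("final_answer_v3.txt", "large-model"),
)
```

The same structure could just as well live in a YAML or JSON file; the point is that swapping a model or a prompt template touches only this object, not the pipeline code.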
When configurability is a key architectural priority from day one, you immediately solve two of the major AI iteration speed killers. Experimentation becomes a matter of modifying the configuration, or of making a new component available and adding it to the configuration. Results naturally tie back to the configuration used to run the experiment, making them self-documenting to some degree. Changes are handled the same way: replacing a model, adding or removing intermediate steps, or modifying behavior all come down to a configuration change.
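One way the self-documenting property can be realized is by fingerprinting each configuration and keying experiment results to that fingerprint. A sketch, with invented model names and values:

```python
import hashlib
import json

# Baseline and experiment differ by exactly one isolated config change.
baseline = {"generate_model": "model-a", "rerank": True, "top_k": 20}
experiment = {**baseline, "generate_model": "model-b"}

def config_id(cfg: dict) -> str:
    """Stable fingerprint so results tie back to the exact config used."""
    payload = json.dumps(cfg, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

# Results logged under the config fingerprint are self-documenting:
# anyone can look up precisely which configuration produced them.
run_log = {
    config_id(baseline): baseline,
    config_id(experiment): experiment,
}
```

Because the fingerprint is computed over the sorted, serialized config, the same configuration always maps to the same identifier, and any change, however small, produces a new one.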
In a domain with high degrees of uncertainty, constantly evolving technical capabilities, and turmoil in best-practices and pitfalls, configurability is the difference between being drowned by complexity or being lifted up by innovation.
Ready to be lifted up by innovation?
Designing systems around configurability is an incredible unlock, but it requires clear engineering guidelines and a solid architecture. What may sound daunting doesn't have to cause sleepless nights.
At v9Labs, we specialize in building highly reliable AI systems. Configurability is one of our key concerns, and architecting such solutions is our primary expertise. We're happy to discuss your project and come up with an action plan for your success.