Two modes of building software with AI

Software development is splitting into two modes: AI-augmented and fully agentic. One keeps humans at the wheel. The other hands the wheel over. Both work -- in different places.

There's an analogy from the hardware world that maps surprisingly well onto what's happening in software development right now.

Augmented reality keeps you in the real world and layers information on top. Virtual reality replaces the world entirely. Both exist. Both sell. Neither killed the other, because they solve different problems.

Software development is going through the same split.

AI-augmented: humans drive, AI rides shotgun

This is what most teams are doing today, whether they call it that or not. A developer writes code, and an AI assistant suggests completions, catches mistakes, drafts tests, refactors a function. The human decides what to build, how to structure it, and when it's done. The AI makes each of those steps faster.

It's a productivity multiplier, not a replacement. The developer still reads the code, still reasons about architecture, still makes judgment calls about trade-offs. The AI handles the tedious parts: boilerplate, repetitive patterns, looking up API signatures, generating the eighth unit test that follows the same shape as the first seven.

This mode works well when the problem is messy, ambiguous, or deeply tied to domain knowledge that isn't in the training data. A billing system with seventeen edge cases inherited from a decade of business decisions. A migration that touches four services and two teams. A feature where the hardest part is figuring out what to build, not building it.

Here, the AI is a fast pair programmer who never gets tired and never complains about writing YAML. Useful. Not autonomous.

Fully agentic: AI builds, you supervise

The second mode is newer and less common, but it's real and growing fast.

In a fully agentic setup, you describe what you want at a higher level -- a spec, a set of constraints, a test suite to satisfy -- and an AI agent writes the code, runs the tests, fixes failures, and iterates until the thing works. The human's role shifts from writing code to defining intent and reviewing output.
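That spec-to-iteration loop can be sketched in a few lines. This is a hedged illustration, not any particular product's API: `call_agent` stands in for a real code-generation model call and returns canned attempts here so the example runs end to end.

```python
# Minimal sketch of the agentic loop: spec in, tested code out.
# `call_agent` is a hypothetical stand-in for an LLM call; a real system
# would send the spec plus test feedback to a model and get source back.

ATTEMPTS = [
    "def slugify(s):\n    return s.lower()",                    # fails: keeps spaces
    "def slugify(s):\n    return s.lower().replace(' ', '-')",  # passes
]

def call_agent(spec: str, feedback: str, attempt: int) -> str:
    # Canned responses so the sketch is runnable without a model.
    return ATTEMPTS[min(attempt, len(ATTEMPTS) - 1)]

def run_tests(source: str):
    # The quality gate: the spec is expressed as executable tests.
    ns = {}
    exec(source, ns)
    try:
        assert ns["slugify"]("Hello World") == "hello-world"
        return True, ""
    except AssertionError:
        return False, "slugify('Hello World') should be 'hello-world'"

def agent_loop(spec: str, max_iters: int = 5) -> str:
    feedback = ""
    for attempt in range(max_iters):
        source = call_agent(spec, feedback, attempt)
        ok, feedback = run_tests(source)
        if ok:
            return source  # tests pass: hand back for human review
    raise RuntimeError("agent did not converge; escalate to a human")

code = agent_loop("slugify: lowercase the input, turn spaces into hyphens")
```

The structure is the point, not the stub: the human writes the spec and the tests, the loop runs unattended, and a failure to converge escalates rather than ships.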

We've been working this way for certain tasks and it genuinely changes the rhythm of the day. Instead of writing an implementation and debugging it, we write a clear description of what the system should do, hand it off, and come back to review what the agent produced. Sometimes the result is good on the first pass. Sometimes we send it back with corrections. Either way, we're spending more time thinking about what to build and less time on how to type it out.

This mode works well when the task is well-defined, the quality criteria are clear, and the feedback loop is tight. Generating a CRUD API from a schema. Scaffolding infrastructure. Writing migration scripts. Building a UI component that matches a design spec. Tasks where "done" is unambiguous.

It falls apart when requirements are fuzzy, when context lives in people's heads, or when the cost of a subtle mistake is high. An agent that confidently ships the wrong business logic is worse than no agent at all.

They coexist, and the boundary moves

The interesting part is where one mode ends and the other begins. That boundary isn't fixed. It moves as the tools get better, as teams learn to write better specs, and as codebases become more agent-friendly (clearer contracts, better test coverage, more structured conventions).

A year ago, we wouldn't have trusted an agent to do much beyond autocomplete. Today, we hand off entire modules when the scope is tight enough. A year from now, the scope that counts as "tight enough" will be larger.

But we don't think the augmented mode disappears. Some work is irreducibly human: understanding users, making trade-offs between competing priorities, deciding what not to build. AI can inform those decisions. It can't make them -- not because the technology is immature, but because the decisions are about values and context that no model has access to.

So you end up with a team that operates in both modes, sometimes in the same day. Morning: review what the agent built overnight. Afternoon: pair with an AI assistant on the gnarly integration problem that requires human judgment at every step. The skill isn't picking one mode. It's knowing which mode fits which task.

Engineering becomes orchestration

This is the part that takes getting used to.

If you've spent years getting good at writing code, it feels strange to spend more time writing specs and reviewing output than typing implementations. But that's the direction things are moving for a growing share of the work.

The best developers we know are already shifting. They think in systems: which parts can be delegated, which need human attention, how to structure the codebase so agents can work effectively, how to write constraints that produce correct results instead of plausible-looking ones.

It's closer to managing a team than to solo programming. You define the work, set quality standards, review deliverables, and intervene when something goes off track. The fact that half your "team" runs on GPUs instead of coffee doesn't change the management pattern.

This doesn't mean "anyone can code now" or "developers are obsolete." The opposite, actually. Orchestrating AI agents well requires deep engineering judgment. You need to know what good code looks like to evaluate whether the agent produced it. You need to understand system design to write specs that don't leave critical gaps. You need debugging skills for when the agent confidently does the wrong thing.

The job description changes. The expertise doesn't become less valuable -- it becomes more leveraged.

Where this goes

We don't know exactly how fast the boundary between augmented and agentic will shift, or where it'll settle. Nobody does, and anyone who claims otherwise is selling something.

What we do know is that teams who figure out how to operate in both modes -- and who invest in making their codebases, processes, and specs agent-ready -- will ship faster than teams stuck in either mode alone. The augmented-only teams will hit a ceiling on throughput. The agentic-only teams will hit a wall on quality and judgment. The ones running both, matched to the right tasks, will outpace everyone.


At Unllmited, we help engineering teams work effectively in both modes -- from hands-on vibe engineering workshops to identifying where agentic workflows fit in your specific stack. If you're figuring out how to make this shift, let's talk.

About Unllmited

Unllmited is a generative AI product studio that helps teams design, build, and control AI workflows and copilots that people actually use.

If you're exploring AI control or bringing generative AI into real-world workflows, get in touch or explore our projects.