In a previous post, we argued that the next wave of SaaS would be wrappers around LLMs rather than wrappers around databases. The wrapper shapes the model's behavior, adds guardrails, and makes it usable for a specific job.
That still holds. But there is a second move happening underneath it: the wrapper is coming apart.
The headless pattern
"Headless" is not a new idea. It happened with content management: Contentful and Strapi stripped the CMS down to an API and let you bring your own frontend. It happened with commerce: Shopify's Storefront API let developers build custom storefronts against the same catalog and checkout engine. The pattern is always the same: separate the capability from the interface, expose it through a standard protocol, and let the consumer decide how to use it.
Now it is happening with AI capabilities.
Instead of one product that wraps an LLM and gives you a dashboard, you get individual capabilities -- classification, extraction, summarization, evaluation, generation -- each exposed as a standalone service. No UI. No opinion about how you consume the output. Just a well-defined interface that does one thing and does it reliably.
Three interfaces, same idea
The way you call these headless capabilities matters less than the fact that they exist. But in practice, three patterns are emerging.
MCP. The Model Context Protocol lets an AI agent discover and call external tools at runtime. You describe what the tool does, what parameters it takes, and the agent decides when to use it. If you have a classification service, you can expose it as an MCP tool and any agent that speaks the protocol can call it without custom integration code. The agent figures out when classification is the right move based on context.
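As a concrete sketch: an MCP server advertises each tool with a name, a description, and a JSON Schema for its inputs (the `inputSchema` field of the protocol's tool listing). The descriptor below follows that shape, but the `classify_document` tool and its keyword-matching handler are invented stand-ins for a real classification service:

```python
import json

# Tool descriptor in the shape MCP uses when listing tools: a name, a
# human-readable description, and a JSON Schema for the inputs. The agent
# reads this to decide when and how to call the tool.
CLASSIFY_TOOL = {
    "name": "classify_document",
    "description": "Assign a document to one of a fixed set of categories.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "text": {"type": "string"},
            "labels": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["text", "labels"],
    },
}

def classify(text: str, labels: list[str]) -> dict:
    """Stand-in handler: pick the first label that appears in the text,
    falling back to the first label. A real service would call a model."""
    match = next((l for l in labels if l.lower() in text.lower()), labels[0])
    return {"label": match}

# An MCP server would advertise CLASSIFY_TOOL and route incoming tool
# calls to classify(); the agent never needs custom integration code.
print(json.dumps(classify("Please refund my order", ["refund", "shipping"])))
```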
OpenAPI. A REST API with a spec. Nothing new here, except that the consumer is increasingly not a person clicking buttons in a UI but an agent or an orchestration script. The OpenAPI spec becomes the contract: here is what this capability does, here is how you call it, here are the inputs and outputs. Agents that can read specs can call the API without any human wiring it up.
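To make the "spec as contract" idea concrete, here is a hypothetical fragment of such a spec and the lookup an agent or orchestration script performs to resolve an `operationId` into an HTTP method and path before issuing the call. The `/classify` endpoint and operation name are invented for illustration:

```python
# Minimal, hypothetical OpenAPI fragment for a classification capability.
SPEC = {
    "openapi": "3.0.3",
    "paths": {
        "/classify": {
            "post": {
                "operationId": "classifyDocument",
                "summary": "Assign a document to a category.",
            }
        }
    },
}

def find_operation(spec: dict, operation_id: str) -> tuple[str, str]:
    """Resolve an operationId to (METHOD, path) -- the lookup a spec-reading
    agent does before constructing the actual HTTP request."""
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            if op.get("operationId") == operation_id:
                return method.upper(), path
    raise KeyError(operation_id)

print(find_operation(SPEC, "classifyDocument"))
```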
CLI. A command-line tool. The simplest interface of the three. An agent running in a terminal can invoke a CLI tool the same way a developer would. No SDK, no API client, no authentication dance (beyond what the tool itself handles). If the capability is packaged as a binary or a script, any agent with shell access can use it. Claude Code, Cursor agents, aider -- they all have shell access.
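A minimal sketch of that packaging, using Python's argparse with the same stand-in keyword matcher in place of a real model call. The command name and flags are illustrative:

```python
import argparse
import json

def classify(text: str, labels: list[str]) -> str:
    """Stand-in classifier: first label found in the text, else the first."""
    return next((l for l in labels if l.lower() in text.lower()), labels[0])

def main(argv=None) -> None:
    parser = argparse.ArgumentParser(
        prog="classify",
        description="Classify text into one of the given labels.",
    )
    parser.add_argument("text", help="The text to classify.")
    parser.add_argument("--labels", required=True,
                        help="Comma-separated candidate labels.")
    args = parser.parse_args(argv)
    # JSON on stdout: trivially parseable by whatever invoked the tool.
    print(json.dumps({"label": classify(args.text, args.labels.split(","))}))

if __name__ == "__main__":
    main()
```

An agent with shell access would invoke it the way a developer would, e.g. `classify "Please refund my order" --labels refund,shipping`, and parse the JSON it prints.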
The point is not that one of these is better. The point is that a headless capability can support all three simultaneously, and the orchestrator picks whichever fits.
The orchestrator is the new wrapper
Here is where it gets interesting. If the capabilities are headless, something still has to decide which ones to call, in what order, and what to do with the results. That job is shifting to orchestration tools.
Claude Code can call an MCP tool to classify a document, then hit a REST API to extract structured data from it, then run a CLI command to push the result into a database. Cursor agents can do similar things inside a codebase. These tools are becoming the "head" that sits on top of headless capabilities.
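That three-step flow can be sketched as a plain function chain, with stubs standing in for the MCP tool call, the REST request, and the CLI push. All names and the toy logic are illustrative:

```python
def classify(doc: str) -> str:
    # Stand-in for an MCP tool call to a classification capability.
    return "invoice" if "invoice" in doc.lower() else "other"

def extract(doc: str, doc_type: str) -> dict:
    # Stand-in for a REST call that pulls structured fields out of the text.
    amount = next((w for w in doc.split() if w.startswith("$")), None)
    return {"type": doc_type, "amount": amount}

def store(record: dict, db: list) -> None:
    # Stand-in for a CLI command that pushes the result into a database.
    db.append(record)

def run_pipeline(doc: str, db: list) -> dict:
    """The orchestrator's job: decide the order, pass results along."""
    record = extract(doc, classify(doc))
    store(record, db)
    return record
```

The orchestrator owns only this glue; each capability behind the stubs could be swapped for another vendor's without touching the others.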
This is a different model from the traditional SaaS approach, where the vendor owns the entire stack from capability to interface. In the headless model, you assemble your own stack. The vendor provides the capability. You (or your agent) provide the orchestration.
It is closer to how backend systems have worked for years: microservices that expose APIs, composed by an application layer. The difference is that the application layer is increasingly an AI agent rather than code you wrote by hand.
What this changes for builders
If you are building AI capabilities, the implication is straightforward: build the capability first, not the UI. Define the interface. Make it callable. Ship an MCP descriptor, an OpenAPI spec, and a CLI binary. The UI can come later, or not at all, because the people who need your capability might already have an orchestrator that can use it directly.
This does not mean UI is worthless. For non-technical users, a well-designed interface is still the product. But for developers and for agents, the interface is the protocol. If your capability is only accessible through a web dashboard, it is invisible to the tools that are increasingly doing the work.
If you are buying or integrating AI capabilities: ask whether the tool can be called programmatically. Not "does it have an API" in the vague sense. Does it have a documented, stable interface that an agent can discover and use without a human in the loop? If the answer is no, you are locked into whatever workflow the vendor designed. If the answer is yes, you can compose it with everything else in your stack.
Where the value moves
When capabilities become headless and interchangeable, the value shifts to two places: the quality of the individual capability and the intelligence of the orchestration.
A mediocre classification service that is easy to call will lose to a good one that is just as easy to call. Convenience alone stops being a moat when every capability speaks the same protocol. So capability builders need to be genuinely good at the thing they do.
On the orchestration side, the value is in knowing which capabilities to compose, how to handle failures, and how to keep humans informed without requiring them to micromanage every step. That is an engineering problem, and it is a hard one. The orchestrator needs to understand context, manage state across multiple calls, and degrade gracefully when a capability is slow or wrong.
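One small piece of that engineering problem, degrading gracefully when a capability fails, can be sketched as a retry-then-fallback helper. The names and the retry policy are illustrative, not a prescription:

```python
import time

def call_with_fallback(primary, fallback, attempts=3, delay=0.0):
    """Try the primary capability a few times; on repeated failure,
    degrade to the fallback instead of aborting the whole workflow."""
    for _ in range(attempts):
        try:
            return primary()
        except Exception:
            time.sleep(delay)  # back off before retrying
    return fallback()
```

A real orchestrator layers more on top of this (timeouts, per-capability state, surfacing failures to a human), but the shape is the same: the orchestration, not the capability, decides what failure means for the workflow.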
Right now, tools like Claude Code are early versions of general-purpose orchestrators. Over time, domain-specific orchestration will matter more: an orchestrator that knows how to run a legal review pipeline, or a data processing workflow, or a customer support triage flow. The capabilities are the building blocks. The orchestration is the architecture.
The wrapper is not dead
To be clear: this does not kill the monolithic AI product. Plenty of users want a complete solution, not a bag of capabilities to assemble themselves. The headless pattern matters most for teams that build software, integrate systems, and have the engineering capacity to compose their own workflows.
But if you are one of those teams, the shift is real. The tools to orchestrate headless AI capabilities exist today. The protocols are stabilizing. And the gap between "we built an internal tool that calls three AI services" and "we have an agent that calls three AI services and decides what to do next" is closing fast.
At Unllmited, we help teams design AI capabilities that are headless from the start -- callable, composable, and ready for whatever orchestrator comes next. If you are building AI-powered workflows or figuring out how to expose your capabilities beyond a dashboard, reach out or explore our services.