
The Deployment Epoch: Why OpenAI and Anthropic Just Became Implementers

Two announcements on the same day ended the model era. Forward Deployed Engineering, the ERP echo, and what happens to a global services industry built on offshore labor arbitrage.

July 11, 2026 · 15 min read
On the same afternoon, OpenAI announced a four-billion-dollar deployment vehicle and Anthropic announced a one-and-a-half-billion-dollar response. Both reached for the same playbook. Palantir's. The era of selling tokens is over. What replaces it reshapes systems integration, vendor consolidation, and the architecture of the enterprise itself.

The week the model era ended did not announce itself as such. It announced itself as two press releases, hours apart, from the two companies most closely identified with the frontier of artificial intelligence.

OpenAI revealed a four-billion-dollar joint venture, internally called "DeployCo," anchored by TPG, Brookfield, Advent, and Bain Capital, valued at ten billion at formation, with a guaranteed seventeen-and-a-half-percent annual return to its private equity backers and super-voting shares retained by OpenAI. Anthropic responded the same afternoon with a one-and-a-half-billion-dollar deployment vehicle, co-anchored by Blackstone, Hellman and Friedman, Goldman Sachs, Apollo, General Atlantic, and Sequoia. Together, the two announcements committed more than five-and-a-half billion dollars of new capital, not to training, not to chips, not to research, but to the act of putting models inside other people's companies.

The framing is what matters. Neither announcement was about a new model. Neither was about a benchmark. Both were about distribution, integration, and the realization that the foundation model itself is no longer the bottleneck. The bottleneck is the customer.

I want to make the strongest version of this argument because the implications are larger than the announcements themselves. The structural shift here is the same one that defined the rise of Palantir, the same one that produced the global systems integration industry around SAP and Oracle in the 1990s, and the same one that, every time it has happened, has rewritten which companies make money and which companies disappear.

01

the same-day announcement

For most of the last three years, the operating assumption inside the foundation model labs has been that capability would do the work. Get the model good enough, expose it through an API, let the developer ecosystem build the applications, and let enterprise IT figure out the integration. The labs were software vendors. The customers were buyers. The shape of the transaction was familiar.

That assumption has quietly collapsed. The numbers tell the story. OpenAI is finalizing a financing round at a valuation north of eight hundred and fifty billion dollars. Anthropic recently closed at three hundred and eighty billion. Combined, the two companies command more than one-point-two trillion dollars of private market value, one of the most concentrated accumulations of capital in the history of the technology industry. That kind of valuation requires more than chat subscriptions. It requires capturing a meaningful slice of the operational core of the Fortune 500.

The problem is that the operational core of the Fortune 500 is not a place that absorbs API endpoints gracefully. It is a place of legacy systems, regulatory perimeters, change-control boards, brittle data pipelines, and political org charts. A model that can pass the bar exam in twenty seconds cannot, on its own, get itself approved by the procurement team at a large bank. Capability and adoption have decoupled. The frontier labs spent two years confronting this gap by hoping the developer ecosystem would close it, and it has not closed.

So the labs reached for the only model that has been demonstrated to work at this kind of complexity. They reached for Palantir's.

02

the Palantir playbook

Palantir's Forward Deployed Engineer model is more talked about than understood. The headline version is "engineers go to the customer site." The actual operational structure is more interesting and harder to copy.

Inside Palantir, the field organization is built around two roles. The "Delta" is the Forward Deployed Engineer. They pass the same technical interviews as the platform engineers in Palo Alto. They write production code. They build data pipelines. They design ontology models. They architect agents. They are not solutions consultants. They are software engineers who happen to be embedded inside a customer's office for months or years at a time. The "Echo" is the Deployment Strategist. They are usually former intelligence officers, former clinicians, former forensic accountants, people who understand from the inside how the customer's institution actually works, which is often quite different from how it works on paper.

The Delta-Echo pair is the unit of work. The Delta writes code. The Echo navigates the bureaucracy. Leadership defers to both, because the field-level philosophy is "Auftragstaktik," the German military doctrine in which senior leaders set the objective and the field decides the execution. The result is a development culture that looks chaotic from the outside: competing solutions, locally invented platforms, code written for one customer that may or may not generalize. It works because the company treats this entire operation as research and development, not as cost of goods sold. The deployments are unprofitable on purpose. The point of the unprofitable deployment is to learn what the frontier problem actually looks like, then bring the learnings back to the core platform.

This is the part that imitators have historically gotten wrong. They see the embedded engineers and assume the model is "hire smart people, send them to customers." They miss that the model only works if the company has a strong, opinionated product spine to migrate field learnings back into, and if the company is willing to take massive negative gross margin on individual customers for a multi-year period. Without both, the FDE organization decays into a high-priced consulting firm with software branding, which is the worst of both worlds.

This is the playbook OpenAI and Anthropic just bought. OpenAI's DeployCo is structured almost identically: senior engineers, embedded long-term, authorized to invent products at the edge, with the explicit mandate to feed reusable patterns back into the core model and platform organization. The OpenAI acquisition of Tomoro, which brought roughly one hundred and fifty Forward Deployed Engineers in one transaction, is the unmistakable signal that this is not a sales motion. It is an operational transformation.

03

the ERP echo

The closest historical analogue to what is now happening in enterprise AI is not the rise of cloud, and it is not the rise of mobile. It is the rise of ERP.

In the 1980s and 1990s, the promise of SAP and Oracle was simple. A single system of record for the entire enterprise. Manufacturing, procurement, finance, human resources, supply chain, all running on a shared data model. The technological pitch was straightforward. The implementation reality was a nightmare.

ERP rollouts ran for years. They ran over budget. They failed at a startling rate. The literature from the period is full of horror stories about midsize manufacturers that nearly went insolvent trying to install software that, on paper, was supposed to make them more efficient. The reason was not that the software was bad. The reason was that ERP is not really software. ERP is the act of taking a company's actual business processes and writing them, line by line, into a configurable system. That is a cultural transformation disguised as a procurement decision.

Two patterns emerged from that era that are worth holding in mind right now. The first is that the implementation became the business. The global systems integration industry, the Accentures and Deloittes and Capgeminis and Infosyses of the world, was built primarily on the back of ERP. The software vendors sold licenses. The integrators sold the multi-year transformation. The two industries grew up in lockstep because neither could deliver value without the other.

The second is that the choice of vendor became a permanent commitment. If you built your business on SAP, you were on SAP for twenty years, because the data model, the customizations, the integrations, and the muscle memory of every employee were now encoded into the system. The lock-in was not contractual. It was architectural.

Enterprise AI is now confronting both of these dynamics, but with one important inversion. In ERP, the systems integrator did the implementation. In enterprise AI, the model vendor is trying to do the implementation. The labs have looked at the SI industry and concluded, correctly, that the integration layer is where most of the value will be captured, and that letting third parties own that layer would be a strategic catastrophe. Hence the joint ventures. Hence the Forward Deployed Engineers.

The labs are not just selling models anymore. They are trying to be both SAP and Accenture at the same time.

Whether that is sustainable is a separate question. The history of enterprises that have tried to own both the product and the implementation is, generously, mixed.

04

the multi-model reality

There is a tempting story to tell about the DeployCo and Anthropic announcements that runs as follows. The labs are racing to embed themselves so deeply in customer operations that they will create irreversible vendor lock-in. The winner takes the enterprise. The loser is relegated to consumer chat.

I do not think this is what will happen. The evidence runs the other way.

Ramp's spending data, as of early 2026, shows that seventy-nine percent of companies paying for Anthropic are also paying for OpenAI. The number of businesses paying for both major platforms doubled in a single year. F5's State of Application Strategy Report found that the average enterprise is operating or evaluating seven different AI models simultaneously, and that seventy-eight percent of organizations now run their own inference services.

Whatever else is true about enterprise AI in 2026, it is not converging on a single vendor.

The driver is cost. The price differential between a top-tier reasoning model and a fast, cheap classification model is often more than an order of magnitude. An enterprise that routes every task to its most expensive model is paying a tax that compounds across millions of inferences per month. The financially rational architecture is to use the cheap model for the easy work, reserve the expensive model for the hard work, and dynamically decide which is which. That is "task-specific routing," and it requires a layer above the model that knows how to make the decision.
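To make the economics concrete, here is a minimal sketch of task-specific routing in Python. Everything in it is illustrative: the model tiers, the per-million-token prices, and the keyword heuristic are stand-ins for the learned classifiers and live price sheets a real orchestration layer would use.

```python
from dataclasses import dataclass

@dataclass
class ModelTier:
    name: str             # illustrative identifier, not a real endpoint
    cost_per_mtok: float  # assumed blended dollars per million tokens

# Hypothetical price points; the order-of-magnitude spread is the point,
# not the exact figures.
CHEAP = ModelTier("fast-classifier", cost_per_mtok=0.25)
FRONTIER = ModelTier("top-reasoner", cost_per_mtok=15.00)

def estimate_difficulty(task: str) -> float:
    """Stand-in for a real router, which would be a learned classifier
    or a cheap model scoring the task. Here, a crude keyword heuristic."""
    hard_markers = ("multi-step", "reconcile", "prove", "debug", "legal")
    return 0.9 if any(m in task.lower() for m in hard_markers) else 0.2

def route(task: str, threshold: float = 0.5) -> ModelTier:
    """Cheap model for the easy work, expensive model for the hard work."""
    return FRONTIER if estimate_difficulty(task) >= threshold else CHEAP

# The compounding tax: 10M monthly calls, ~500 tokens each, 80% easy.
calls, tokens_per_call, easy_share = 10_000_000, 500, 0.8
mtok = calls * tokens_per_call / 1e6
all_frontier = mtok * FRONTIER.cost_per_mtok
routed = mtok * (easy_share * CHEAP.cost_per_mtok
                 + (1 - easy_share) * FRONTIER.cost_per_mtok)
print(f"all-frontier: ${all_frontier:,.0f}/mo   routed: ${routed:,.0f}/mo")
```

Even with these toy numbers, the routed bill comes out to roughly a fifth of the all-frontier bill, which is why a layer that exists only to make this decision can justify its own budget line.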

This is the orchestration layer, and it is the most interesting structural development of the current cycle. The market for AI orchestration is projected to grow from roughly fourteen billion dollars in 2026 to more than sixty billion by 2034. IDC's FutureScape projects that by 2028, seventy percent of leading AI-driven enterprises will use multi-tool architectures to manage model routing autonomously. The orchestration layer is not a feature of a model. It is a separate, growing, increasingly load-bearing piece of enterprise infrastructure.

The implication, which the labs would prefer not to acknowledge publicly, is that the Forward Deployed Engineer is being deployed into a structurally agnostic environment. The DeployCo engineer who is hand-building an agentic workflow inside a major bank is, in all likelihood, building it on top of an orchestration layer that will eventually route some fraction of those calls to Claude, or to Gemini, or to an open-source model running on the bank's own GPUs. The labs are competing for the majority of routed inference, not for exclusivity. The economics still work, but the moat is shallower than the ten-billion-dollar valuations imply.

05

OpenAI and Anthropic on the floor

Inside the orchestration layer, OpenAI and Anthropic are making structurally different bets, and the bets reveal something about how each company sees the world.

OpenAI is, in essence, a consumer-scale company manufacturing enterprise products. ChatGPT created a brand that arrived at the procurement meeting before the salesperson did. The product strategy follows the brand. GPT-5.3-Codex, OpenAI's flagship coding system, optimizes for speed and token efficiency. It uses roughly a third as many tokens as Anthropic's equivalent on similar tasks, which translates into lower latency and lower API costs. Multimodality (text, images, audio, video) is more developed at OpenAI than anywhere else. The integration path through Microsoft Azure is, for any large company that already runs on the Microsoft stack, the path of least resistance for procurement and security review. OpenAI's enterprise pitch is "you already trust us; let us build on top of what you already have."
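The token-efficiency claim is doing real work in that paragraph, so it is worth spelling out the arithmetic. The prices below are hypothetical; the only point is that per-task cost is tokens times price, so a model with a higher per-token rate can still be the cheaper model per task.

```python
# Effective cost per task = tokens consumed x price per token.
# Illustrative figures only; the 3x token ratio mirrors the comparison
# above, the per-Mtok prices are invented for the example.
tokens_a, price_a = 2_000, 10.00  # efficient model: fewer tokens, higher $/Mtok
tokens_b, price_b = 6_000, 5.00   # verbose model: 3x the tokens, cheaper rate

cost_a = tokens_a / 1e6 * price_a  # $0.020 per task
cost_b = tokens_b / 1e6 * price_b  # $0.030 per task
print(f"efficient: ${cost_a:.3f}/task   verbose: ${cost_b:.3f}/task")
```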

Anthropic is the inverse. It is an enterprise-first company that happens to have a consumer chat product. Its defining architectural choice, Constitutional AI, is a rule-based safety framework that produces auditable reasoning trails, which is what compliance officers in finance, healthcare, and government actually need. Claude Opus 4.6, paired with Claude Code, leads OpenAI by more than twenty-three percentage points on the SWE-bench coding benchmark, which has become the closest thing the industry has to a real measure of autonomous engineering capability. Anthropic's deployment with FIS in financial crimes detection, where Forward Deployed Engineers built an agent that compresses anti-money-laundering case work from days to minutes, is a template for what the enterprise-first strategy looks like in production.

The financial trajectories make this even more interesting. OpenAI is on track to lose fourteen billion dollars in 2026 alone, with cumulative losses projected to reach one hundred and fifteen billion by 2029. Anthropic is projecting positive cash flow by 2027. Inside Anthropic, gross margins on inference have expanded from thirty-eight percent to seventy percent in roughly a year, which is the kind of curve that compounds. For a procurement team thinking about a twenty-year relationship, that difference matters.

The cleanest way to summarize the contrast is that OpenAI is competing on velocity and ecosystem reach, while Anthropic is competing on depth and regulatory fit. Both strategies can work. They almost certainly both will, in different segments. The companies are not really competing for the same customer. They are competing for different parts of the same customer.

06

the systems integrator extinction event

The most underdiscussed consequence of the deployment epoch is what it does to the global IT services industry. The Accentures and Deloittes and Cognizants and Infosyses and Capgeminis built their multi-hundred-billion-dollar collective valuation on a labor arbitrage model: enormous pyramids of junior and mid-level engineers, mostly offshore, billed on time-and-materials contracts that ran for years. The model works as long as there is a vast amount of rote integration work that can only be done by humans.

That assumption is, right now, being demolished from both ends.

From the top, foundation model labs are doing the high-end systems integration work themselves through their FDE vehicles. The most sophisticated, most strategic, most lucrative transformation engagements, the ones that used to anchor a major SI's relationship with a Fortune 500 account, are now being run directly by Anthropic or OpenAI Forward Deployed Engineers, supported by private equity-backed delivery capital. The SI is not in the room.

From the bottom, the same agentic AI tools the labs are deploying are also obliterating the economics of the offshore pyramid. A single mid-level engineer using Claude Code can now do work that previously required four or five offshore full-time equivalents. Clients have started to notice. Some of them are now using these tools inside their own Global Capability Centers, particularly in India, to bring transformation work entirely in-house and bypass the integrators altogether. The "agentified GCC" is a real thing now, and there are more than seventeen hundred of them.

The SI industry is reacting in two directions. Accenture has trained thirty thousand practitioners on Claude and announced a major ChatGPT Enterprise partnership. Deloitte has rolled Claude out to its full four-hundred-and-seventy-thousand-person global workforce. These are the firms making the bet that the only survivable position is to be a deeply specialized, AI-native delivery partner with proprietary industry accelerators on top of someone else's foundation model. The firms that are not making this bet, that are continuing to staff offshore engineers against generic implementation work, are, in my read, looking at an extinction-level economic event over the next three to five years.

There is an interesting parallel to draw with what happened to retail when Amazon arrived. The retailers that survived were the ones that built something Amazon could not: a specialized brand experience, a private-label business, or a logistical capability that operated below Amazon's cost floor. The retailers that built none of those vanished. The SI industry is at the same inflection point, and the surviving firms will be the ones with proprietary intellectual property layered on top of foundation models, not the ones renting out engineers at a markup.

07

where the value actually lands

If the foundation models are commoditizing inside a multi-model orchestration architecture, and the implementation layer is being captured partly by the labs themselves and partly by AI-native SIs, the natural question is who actually captures the long-term value of the deployment epoch.

Four answers, in order of confidence.

The first is the enterprise that successfully redesigns its operating model. Most companies will use AI for marginal productivity gains, saving an hour on emails, summarizing a meeting, drafting a memo. That is real but not transformative. The companies that capture outsized returns will be the ones that use the deployment epoch to fundamentally rebuild how work flows through the organization. Agents do not just speed up existing processes. They permit processes that were previously infeasible. TELUS reportedly saved more than five hundred thousand staff hours by re-architecting workflows around Claude. HackerOne cut vulnerability response time by forty-four percent. These are not productivity stories. They are operating model stories. The enterprises that get this right will have a structural cost advantage their competitors cannot close by hiring more people.

The second is the orchestration and routing layer. Because enterprises will refuse single-vendor lock-in at the model layer, the layer that sits above the models, governs the routing, enforces the policies, and owns the customer relationship, becomes the new operating system of the enterprise stack. The dollars will be smaller than the foundation model dollars in absolute terms, but the strategic position will be stronger because the orchestration layer holds the customer contract. This is the closest analog in the current cycle to what Windows was in the PC era and what iOS is in the mobile era.
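Here is a minimal sketch of what "governs the routing, enforces the policies" means in code, assuming a hypothetical orchestration interface. None of the target names or policy rules come from a real product; they stand in for the data-residency and compliance logic such a layer actually carries.

```python
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    contains_pii: bool  # assumed to be set by an upstream classifier
    regulated: bool     # e.g., flagged by a compliance tagger

# Hypothetical deployment targets behind one orchestration interface.
ON_PREM = "open-model-on-own-gpus"
FRONTIER_A = "frontier-vendor-a"
FRONTIER_B = "frontier-vendor-b"

def enforce_and_route(req: Request) -> str:
    """The orchestration layer's job: policy first, preference second.
    Regulated or PII-bearing work never leaves the building; everything
    else goes to whichever external model is preferred for the task."""
    if req.contains_pii or req.regulated:
        return ON_PREM
    # Preference logic lives here too: cost, latency, benchmark fit.
    return FRONTIER_A if len(req.text) > 2_000 else FRONTIER_B

req = Request("summarize this filing", contains_pii=True, regulated=True)
print(enforce_and_route(req))  # -> open-model-on-own-gpus
```

The strategic point is in the return values: the layer that holds this function holds the customer contract, and every model vendor is reduced to a string it can swap out.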

The third is the hyperscaler. AWS, Azure, and Google Cloud are the unavoidable tollbooths for almost all enterprise AI inference at scale. They have invested directly in the labs (Microsoft in OpenAI, Amazon in Anthropic, Google in everything Google), they run their own model families, and they own the underlying GPU capacity. Regardless of which lab wins which deal, the inference runs on their hardware and accrues to their revenue. This is the safest bet on the page and also probably the least exciting one, which is usually a sign that the bet is correct.

The fourth is the specialized human worker. This is the answer that is most often misunderstood. Agentic AI will displace large amounts of transactional knowledge work, and the labor market consequences will be real and uncomfortable. But the work that survives, and the new work that gets created, will be the highest-leverage work in the economy. The Forward Deployed Engineer is the obvious example: a single person with elite technical skill and C-suite credibility, embedded in a Fortune 500 operating reality, generating value at a multiple no traditional consulting model can match. The less obvious examples are the new oversight roles. As more business processes are run by agents, the people who design the agentic workflow, validate the output, audit the bias, and intervene at the chokepoint become structurally important. NIST's AI risk management framework already gestures at this. Within five years, in regulated industries, these roles will be required by law.

08

the pattern beneath

Step back from the DeployCo press release and the Anthropic counter-move and the FIS deployment and the SI training programs, and the shape of the moment becomes clearer.

Every major computing era has gone through the same arc. A new technology arrives. The early phase rewards capability. The capability becomes good enough. The bottleneck shifts from capability to integration. The vendors that built the capability discover that they cannot simply hand it off, because the customer cannot consume it without help. So the vendors move into the integration business, sometimes acquiring it, sometimes building it, sometimes joint-venturing it. Eventually the integration layer matures, becomes a market in its own right, and the original vendor's strategic position depends on whether the integration layer locks the customer in or routes around them.

This is what happened with mainframes, with PCs, with client-server, with the web, with mobile, and with cloud. It is now happening with AI. The DeployCo and Anthropic announcements are not the start of the deployment epoch. They are the moment the deployment epoch became impossible to ignore.

The honest summary of the next five years is this. The model layer will commoditize, but slowly enough that OpenAI and Anthropic will both make a great deal of money along the way. The orchestration layer will become structurally important and will host a new generation of platform companies. The hyperscalers will collect the infrastructure rent. The systems integration industry will bifurcate into AI-native specialists and a long tail of disappearing labor arbitrage shops. The enterprises that use this moment to redesign their operations will pull away from their peers. The enterprises that treat AI as a feature will fall behind.

The model era is over. The deployment epoch has begun. The fights worth watching are not about which model is best. They are about who owns the customer's workflow when the dust settles.

Written by Nitin

Technologist and writer. Co-founded Nvision Technologies (1998) and Cask Data (acquired by Google in 2018). Working in AI and distributed systems. Writing here is how I think out loud, somewhere between Stratechery and Marcus Aurelius.

