Where the Value Accrues: Vertical Integration, Horizontal Orchestration, and the Real Architecture of AI

The oldest pattern in computing is playing out again in AI. Vertical titans, horizontal orchestrators, and the law that explains who wins.

June 25, 2026 · By Nitin · 14 min read

In 1964, if you wanted to compute, you called IBM. They sold you the hardware, the operating system, the database, the networking protocol, the support contract, and probably the punch cards. One vendor, one stack, one throat to choke. It was expensive, it locked you in, and it worked.

Then the PC happened, and the stack shattered. Suddenly you had Intel making chips, Microsoft writing the OS, Lotus building the spreadsheet, Dell assembling the box. Nobody owned everything. Everybody owned a layer. The horizontal era had arrived, and the value migrated from the vertically integrated monopolist to the owner of the thinnest, stickiest horizontal layer: the operating system.

I have spent the last several months watching the AI industry, and I keep seeing the same structural argument play out, with the same stakes and the same confusion about where the money will actually land. The question is not whether AI is transformative. That debate is over. The question is architectural: will the winners be the companies that own the entire stack from silicon to user interface, or the ones that own the abstraction layer that sits between models and the work they do?

I think the answer is both, but in very different markets, and the framework that explains why is one that Clayton Christensen articulated decades ago.

01

the pattern that keeps repeating

Every major computing era has followed the same arc. A vertically integrated pioneer dominates the early market. Then, as the technology matures and interfaces standardize, the stack modularizes. Value migrates from the integrated whole to the layer that is hardest to replicate. Then a new technology arrives, the stack re-integrates around the new constraint, and the cycle starts again.

IBM owned mainframes end to end. The PC era broke the stack apart, and Microsoft and Intel captured the value at the OS and chip layers. Mobile re-integrated: Apple built the phone, the chip, the OS, the App Store, and the services layer into a single, tightly coupled system. Google did nearly the same with Android plus its own silicon plus cloud services, just with a different business model on top.

Now AI is forcing yet another re-integration. The question is whether it follows the Apple pattern (total vertical control wins) or the PC pattern (horizontal specialization wins). The honest answer, looking at the evidence from the first half of 2026, is that it depends entirely on whether you are talking about consumer hardware or enterprise software. The two markets are diverging sharply, and treating them as one story leads to the wrong conclusions.

02

Christensen's law and why it matters now

Clayton Christensen's Law of Conservation of Attractive Profits is one of the most underappreciated ideas in technology strategy. The basic insight is this: in a technology stack, profits concentrate at the layer where supply is constrained and customers have no alternative. When one layer of the stack becomes commoditized (plentiful, interchangeable, cheap), the profits do not disappear. They migrate to an adjacent layer that is still scarce.

Think about what happened with PCs. Hardware became a commodity. Margins collapsed for everyone assembling boxes. But the operating system layer stayed proprietary, stayed scarce, and absorbed all the margin that hardware lost. Microsoft did not need to make PCs. It needed to own the layer adjacent to the commodity.

Apply this to AI in 2026. Foundation models are commoditizing fast. There are now dozens of models that can pass the bar exam, write competent code, and summarize documents. The raw reasoning capability is no longer scarce. So where do the profits migrate?

They migrate in two directions simultaneously. In consumer and physical-world applications, they migrate downward into proprietary silicon and tightly coupled hardware-software systems. In enterprise and B2B, they migrate upward into the orchestration layer that sits between commoditized models and the specific workflows businesses actually need.

This bifurcation is the key insight. Anyone telling you "vertical integration wins" or "horizontal platforms win" without specifying which market is giving you an incomplete answer.

03

the vertical integration titans

Three companies are making the most aggressive vertical integration bets in AI right now, and each one illustrates a different version of the strategy.

Google is perhaps the most complete example. At Google Cloud Next 2026, they unveiled Ironwood, their seventh-generation TPU, purpose-built for inference at massive scale. They are running their own silicon, their own networking fabric, their own model training infrastructure, and their own model family (Gemini) on top of it. Ironwood delivers roughly twice the compute per chip of the previous generation, and it is designed to scale into clusters Google calls "AI Hypercomputers," spanning tens of thousands of chips.

What makes this a genuine vertical play, rather than just a cloud provider buying GPUs, is that Google optimizes across every layer simultaneously. Their compiler understands their hardware. Their models are designed for their inference stack. Their products (Search, Gmail, Workspace, Android) are the distribution layer. When you optimize across that many layers at once, you get compounding performance advantages that no horizontal competitor can replicate by assembling best-of-breed components.

Apple is doing the vertical play in consumer hardware. Their Apple Foundation Models run on Apple Silicon, using a hybrid on-device and Private Cloud Compute architecture. The key differentiator is not the model quality in isolation. It is the tight coupling between the model, the neural engine on the chip, the operating system's privacy guarantees, and the user's personal context (contacts, calendars, location, app usage). Apple's models are not the most powerful on any benchmark. They do not need to be. They win by having access to data and hardware integration that no cloud-only model can match.

Tesla and xAI represent the most ambitious version of the thesis: vertical integration across the physical-digital boundary. Tesla's vehicles generate real-world sensor data. xAI's Grok models process that data. The vehicles then act on the model's outputs in the physical world. This is not a software company with a hardware division. It is a closed loop where the AI model improves the product (autonomous driving, Optimus robotics), and the product generates the training data that improves the model. Musk's decision to merge Tesla's AI efforts with xAI's Colossus supercomputer, one of the world's largest GPU clusters, is a bet that controlling the entire loop from data generation to model training to physical deployment creates a moat that no modular competitor can cross.

The common thread across all three is that vertical integration works when the performance frontier still matters. In consumer devices, autonomous vehicles, and infrastructure-scale inference, we have not yet hit "good enough." Every percentage point of latency, efficiency, or accuracy translates into user experience or cost savings. As long as that is true, the integrated player who can optimize across layers has a structural advantage over the one assembling components.

04

the orchestration layer and the horizontal bet

Enterprise software tells a very different story. In the B2B world, the question is not "which model is 3% better at reasoning?" The question is "how do I connect AI to my existing systems, data, and workflows without rewriting everything?"

This is where the orchestration layer becomes the critical abstraction.

I have written before about MCP (Model Context Protocol) and where it earns its keep versus where direct tool calling wins. The orchestration story has matured significantly since then. The pattern emerging in 2026 is that enterprises are deploying multi-agent systems where a supervisor agent coordinates specialized sub-agents, each connected to different data sources and tools through standardized protocols. The orchestration layer, the thing that manages agent communication, tool discovery, context passing, and error handling, is becoming the new operating system of the enterprise.
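
To make the shape concrete, here is a minimal sketch of that supervisor pattern in Python. Every name in it is invented for illustration, and a real system would route with a model call and pass structured context rather than strings, but the division of labor is the same: the supervisor owns routing and error handling; each sub-agent owns one capability and its own tools.

```python
from dataclasses import dataclass
from typing import Callable

# Each sub-agent wraps one capability: in a real deployment, a model
# call plus its own tools and data, reached over a protocol like MCP.
@dataclass
class SubAgent:
    name: str
    can_handle: Callable[[str], bool]  # cheap routing predicate
    run: Callable[[str], str]          # does the actual work

def research(task: str) -> str:
    # Stub: stands in for a model + retrieval-tool call.
    return f"[research] findings for: {task}"

def report(task: str) -> str:
    # Stub: stands in for a model + document-tool call.
    return f"[report] draft for: {task}"

AGENTS = [
    SubAgent("research", lambda t: "find" in t.lower(), research),
    SubAgent("report", lambda t: "draft" in t.lower(), report),
]

def supervisor(task: str) -> str:
    """Route the task to the first sub-agent that claims it."""
    for agent in AGENTS:
        if agent.can_handle(task):
            return agent.run(task)
    raise ValueError(f"no agent can handle: {task!r}")

if __name__ == "__main__":
    print(supervisor("Find recent TPU benchmarks"))
    print(supervisor("Draft a summary for the board"))
```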

Anthropic's bet on MCP is the clearest expression of this horizontal strategy. By publishing an open protocol for model-tool communication, they are doing what Microsoft did with Windows: defining the interface layer that everything else plugs into. If MCP becomes the standard way agents talk to tools, then the models themselves become more interchangeable (you can swap Claude for Gemini for Llama behind the same orchestration layer), and the value concentrates at the protocol and tooling level.
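
The strategic consequence is easy to see in code. Here is a hedged sketch (the adapter names and stub bodies are invented; real adapters would wrap each vendor's SDK or an MCP-style protocol): once the orchestration layer programs against a stable interface, the model underneath becomes a one-line swap.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only thing the orchestration layer needs from a model."""
    def complete(self, prompt: str) -> str: ...

# Stub adapters: real ones would call each vendor's actual API.
class ClaudeAdapter:
    def complete(self, prompt: str) -> str:
        return f"(claude) {prompt[:30]}..."

class GeminiAdapter:
    def complete(self, prompt: str) -> str:
        return f"(gemini) {prompt[:30]}..."

def handle_ticket(model: ChatModel, ticket: str) -> str:
    # The workflow lives in the orchestration layer, not the model.
    triage = model.complete(f"Classify this support ticket: {ticket}")
    return model.complete(f"Draft a reply given triage: {triage}")

# Swapping vendors is a one-line change; the workflow never moves.
print(handle_ticket(ClaudeAdapter(), "Export job hangs at 99%"))
print(handle_ticket(GeminiAdapter(), "Export job hangs at 99%"))
```

That one-line swap is what commoditization looks like from the buyer's side, and it is exactly the dynamic Christensen's law predicts: the margin leaves the interchangeable layer and moves to the interface above it.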

Meta's open-source strategy with Llama is the complement to this. By commoditizing the model layer, Meta pushes value upward toward the orchestration and application layers, and downward toward the infrastructure (where Meta itself runs some of the world's largest GPU clusters). It is a textbook "commoditize your complement" play. Meta does not need to sell models. It needs AI to be cheap and ubiquitous so that its social platforms, advertising infrastructure, and metaverse ambitions benefit from the deflation.

The enterprise orchestration market is projected to grow at roughly 44% annually through 2030. Compounding from 2026, that is better than a fourfold expansion in four years (1.44^4 ≈ 4.3x). That growth rate tells you something important: this is a layer where customers are actively spending because the problem is real and unsolved. Connecting models to enterprise workflows is hard, messy, security-sensitive work. The companies that own the best tooling for this, whether through open protocols like MCP or through proprietary platforms, are positioned to capture margin that the model layer itself is losing to commoditization.

05

OpenAI's strategic crisis

OpenAI occupies an increasingly uncomfortable position in this landscape, and it is worth examining why.

They are the company most associated with the AI revolution, but they are structurally exposed on both fronts. They do not own silicon (they depend on NVIDIA and Microsoft's Azure infrastructure). They do not own distribution at the OS level (no phones, no browsers, no productivity suites with a billion users). And they have not established a dominant position in the orchestration layer (MCP is Anthropic's; LangChain and LlamaIndex are independent; the major clouds have their own agent frameworks).

What OpenAI has is brand, developer mindshare, and a strong position in the consumer chat interface through ChatGPT. But brand and mindshare are the most fragile moats in technology. The moment another model reaches comparable quality at a lower price, the switching costs for most users are close to zero.

The conversion from its original nonprofit structure to a for-profit one, and the associated governance controversy, further complicates the picture. OpenAI is simultaneously trying to raise capital at valuations that assume platform-level dominance, while lacking the structural advantages that historically justify platform-level margins.

Benedict Evans framed this well earlier this year: OpenAI's fundamental challenge is that it built a product (ChatGPT) rather than a platform. Products can be very valuable. But in technology, platforms tend to capture and hold value in ways that products do not, because platforms benefit from network effects and ecosystem lock-in that products must continuously earn through superior performance.

Contrast this with Google, which has the platform (Search, Android, Cloud), the infrastructure (TPUs, hypercomputers), and the distribution (billions of daily users). Or with Anthropic, which is building the protocol layer (MCP) alongside the model. OpenAI has the model and the chat app. In a world where models are commoditizing, that is a narrower strategic position than it appears.

06

the macro picture

Three macro dynamics are worth tracking as this plays out.

The bubble comparison. AI capital expenditure in 2026 is massive. Google, Microsoft, Amazon, and Meta are each spending in the range of $60 to $80 billion annually on AI infrastructure. The historical parallel everyone reaches for is the dot-com bubble. But there is an important structural difference: the dot-com buildout created infrastructure (fiber, data centers, internet exchanges) that was underutilized for years before demand caught up. AI infrastructure has existing, revenue-generating workloads from day one. The risk is not that the infrastructure sits idle. The risk is that the returns do not justify the capital costs at the pace investors expect. Over-investment is possible. Total write-off is less likely than it was in 2000.

Vertical AI roll-ups. One of the most interesting patterns is the emergence of AI-native companies that skip the horizontal platform entirely and build deep vertical solutions for specific industries: legal document analysis, medical imaging, drug discovery, insurance underwriting. These companies use foundation models as components but add proprietary data, domain expertise, regulatory compliance, and workflow integration that general-purpose models cannot replicate. The "vertical AI" category is attracting significant venture capital because the margins are better than building another chatbot, and the moats (data, regulation, domain trust) are more durable. From my years in data infrastructure, this pattern is familiar. The companies that won in analytics were not the ones building general-purpose query engines. They were the ones who embedded analytics into specific industry workflows and made the data layer invisible.

Geographic diffusion. AI development is spreading beyond the US-China axis. The EU AI Act is creating a regulatory framework that shapes product design globally. India, the Gulf states, Southeast Asia, and Latin America are emerging as significant markets with distinct requirements around data sovereignty, language support, and regulatory compliance. The vertical integration strategy works best in markets with massive domestic demand (US, China). The horizontal orchestration strategy works better for geographic expansion, because it allows local customization without rebuilding the stack.

07

where I think this lands

I keep coming back to Christensen's law because it cuts through the noise. Profits do not disappear when a layer commoditizes. They migrate. The question is always: migrate to where?

In consumer and physical-world AI, they migrate to vertically integrated systems. Google, Apple, and Tesla/xAI are building these. The performance frontier still matters here, and tight coupling across silicon, models, and products creates compounding advantages. This market will likely consolidate around a small number of very large players, much like mobile consolidated around Apple and Google.

In enterprise AI, they migrate to the orchestration layer. The model becomes a commodity component. The value is in the tooling, protocols, and middleware that connect models to business workflows. MCP, agent frameworks, and multi-agent coordination systems are the early forms of what will eventually become the "enterprise AI operating system." This market will be larger and more fragmented, with room for multiple winners across different industry verticals.

The companies in the most precarious position are the ones stuck in the middle: good models but no silicon, no distribution, no protocol layer, and no vertical depth. That is not a sustainable position when the layer you occupy is the one being commoditized.

For anyone building in this space, the strategic question is clear: are you building toward the silicon (vertical integration) or toward the workflow (horizontal orchestration)? The worst place to be is the model layer alone, betting that raw capability will remain scarce when every indicator suggests the opposite.

The pattern has repeated across six decades of computing. The technology changes. The architecture of value does not.