When AI Infrastructure Companies Become Their Own Competitors: The CDN Reckoning
The Setup That Didn't Hold
Picture this: it's early 2026 and you hold Fastly stock. The company just posted its first profitable year. The CEO attributes growth to AI agent traffic. The stock is up 233% year-to-date. Things are looking... good.
Then April 8 happens.
Anthropic announces Claude Managed Agents, a hosted runtime for AI agents built on Claude infrastructure. Fastly drops 18% in a single trading session. Cloudflare falls 11%. Akamai follows with a 12% decline. All within days of each other.
What went wrong? Nothing operationally. All three companies had positive forward guidance. The issue wasn't earnings or technical execution. It was something more fundamental: the market's belief in their value proposition evaporated.
How CDNs Tried to Own the AI Layer
To understand the selloff, you need to understand the pitch.
Over the past two years, CDN and edge infrastructure companies repositioned themselves as the natural compute layer for AI agents. The logic made sense: autonomous AI agents execute code, call APIs, and process data. That execution needs to happen somewhere. Who better than globally distributed infrastructure companies?
Cloudflare built Workers AI with a dedicated Agents SDK. Fastly positioned its edge compute for AI workloads. Akamai followed suit. The shared thesis was clean: bring your AI model to us, we'll run the execution on our infrastructure.
It was a compelling story. CDN companies already had:
- Global edge networks
- Proven reliability at scale
- Experience handling unpredictable traffic patterns
- Existing developer relationships
For a developer, the idea of offloading agent execution to a specialized infrastructure company felt like a natural architectural decision. Why reinvent the wheel?
The Inversion
Then Anthropic inverted the entire thesis.
Claude Managed Agents flips the script. Instead of "bring your model to our infrastructure," the pitch becomes "bring your application logic to us, and we handle everything else." Developers specify the model, tools, and logic. Anthropic handles:
- Session management
- State checkpointing
- Crash recovery
- Multi-agent coordination
- Container orchestration
Each agent runs in a disposable Linux container. Pricing is $0.08 per runtime hour on top of token costs.
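To put that pricing in concrete terms, here is a rough back-of-the-envelope sketch using the $0.08 per runtime hour figure from the announcement. The fleet size and usage numbers are illustrative assumptions, and token charges are excluded since they depend entirely on model and workload:

```python
# Rough runtime-cost sketch for a managed-agent deployment.
# The $0.08/hour rate comes from the announcement; the agent
# count and hours/day below are purely illustrative assumptions.
# Token costs are NOT included here.
RUNTIME_RATE_USD_PER_HOUR = 0.08

def monthly_runtime_cost(agents: int, hours_per_day: float, days: int = 30) -> float:
    """Runtime cost only, excluding token charges."""
    return agents * hours_per_day * days * RUNTIME_RATE_USD_PER_HOUR

# Example: 10 agents, each active ~4 hours per day.
cost = monthly_runtime_cost(agents=10, hours_per_day=4)
print(f"${cost:.2f}/month")  # 10 * 4 * 30 * 0.08 = $96.00/month
```

The point of the arithmetic: for many workloads the runtime fee is small relative to token spend, which is exactly why bundling it into the model provider's bill is an easy sell.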
For Claude-based deployments, the structural reason to add a third-party infrastructure layer suddenly disappears.
You don't need Cloudflare's edge compute if Anthropic is already handling the runtime. You don't need Fastly's agent optimization if Anthropic manages the execution. The middleman—the entire value proposition—gets removed from the equation.
OpenAI had already moved in this direction two months earlier with the Responses API, providing hosted shell containers with full terminal access. But Anthropic's announcement crystallized the market's new understanding: AI model companies were becoming infrastructure companies.
Why This Matters Beyond the Stock Price
This moment reveals something important about tech market dynamics that's worth understanding, especially if you're building on top of cloud platforms or considering where to host critical infrastructure.
The commoditization of infrastructure layers is accelerating. In the past, specialized infrastructure companies could build defensible positions around performance, reliability, or features. CDNs built moats through their edge networks. That advantage still exists—but it matters less when the model provider handles all the coordination.
Model companies have a structural advantage. When you control the model, you understand its execution patterns intimately. You can optimize the runtime specifically for your model's behavior. You can integrate billing seamlessly. You can iterate on the product without API negotiations. Third parties can't match that efficiency easily.
Developer convenience is winning over optimization. A single integrated platform—model + runtime + orchestration—beats a patchwork of specialized services, even if that patchwork might theoretically be more flexible. Developers want solutions, not components.
What This Means for Your Infrastructure Choices
If you're building AI-powered applications, this shift has practical implications:
Evaluate vertical integration. If you're using Claude or another hosted model, consider whether using their managed agents makes sense for your use case. You're trading flexibility for reliability and simplicity. That's often the right trade-off.
Understand your leverage. If you're deeply integrated with one model provider's infrastructure, you're betting on that provider's roadmap and pricing. That's not inherently bad, but it's worth acknowledging. Diversification has real value, even if it means more operational complexity.
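One way to preserve that leverage is a thin abstraction over the agent runtime, so switching providers is a code change rather than a rewrite. A minimal sketch, with hypothetical class names (nothing here is a real SDK):

```python
# Sketch of a provider-agnostic runtime boundary. All names are
# hypothetical; the real calls would go to whichever managed-agent
# API or self-hosted orchestrator you actually use.
from abc import ABC, abstractmethod

class AgentRuntime(ABC):
    """The only interface the rest of your application depends on."""
    @abstractmethod
    def run(self, task: str) -> str: ...

class ManagedRuntime(AgentRuntime):
    """Would wrap a model provider's hosted agent endpoint."""
    def run(self, task: str) -> str:
        # provider API call would go here
        return f"[managed] {task}"

class SelfHostedRuntime(AgentRuntime):
    """Fallback: your own orchestration on generic compute."""
    def run(self, task: str) -> str:
        # local container orchestration would go here
        return f"[self-hosted] {task}"

def execute(runtime: AgentRuntime, task: str) -> str:
    # Application code never names a provider directly.
    return runtime.run(task)
```

The trade-off is real: the abstraction costs you access to provider-specific features (checkpointing, multi-agent coordination) that are precisely the value of the managed offering. Whether that insurance is worth it depends on how much you trust the provider's roadmap and pricing.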
Watch the next layer. This doesn't mean CDNs are suddenly worthless. There will be workloads—especially multi-model, heterogeneous AI services—where specialized infrastructure still adds value. But that value is now constrained and defensive, not expansive.
The Bigger Picture
The CDN sector selloff wasn't irrational. The market correctly identified that a major value proposition—hosting AI agent execution—had been absorbed upstream by model providers with better leverage.
This pattern will repeat. As AI infrastructure matures, specialized layers will either integrate upward (into model providers) or focus downward (on edge cases and non-AI workloads). The companies that thrive will be those that either:
- Control the model (Anthropic, OpenAI)
- Control irreplaceable infrastructure (AWS, Azure for raw compute; legacy CDN relationships for specific customers)
- Build genuinely differentiated services (not generic compute)
For developers, the lesson is clear: when integrating infrastructure, ask yourself whether this layer is adding defensible value or simply sitting between you and what you actually need. Sometimes it's adding that value. Sometimes it's not.
And that's okay. Sometimes the best infrastructure is the one you don't have to think about.
What's your take? Are you using managed agent platforms, or building custom orchestration? Share your thoughts in the comments—we'd like to hear how you're thinking about infrastructure choices in the age of AI agents.