
Four Frontier AI Models Shipped in Eight Days

OpenAI, Anthropic, and DeepSeek all shipped frontier AI models in eight days. The feature set is identical. The real story is where the margin went.

4 min read


Between April 16 and April 24, three of the biggest AI labs in the world shipped frontier models. Claude Opus 4.7. GPT-5.5. DeepSeek V4. Anthropic's Mythos Preview landed a week before that, gated behind an invite list. Google's Gemini 3.1 Pro set the pace back in February with 77.1% on ARC-AGI-2. [1] [2] [3] [4] [5]

TL;DR. Four frontier AI models, eight days, one feature set. Agentic, 1M context, better coding, priced within a few multiples of each other. This is what Matt Ridley calls simultaneous invention and what I wrote about earlier this month as good products being hard to vary: when the constraints are ripe, four labs on three continents independently arrive at the same form. Convergence, not competition. The real story is where the margin went once the form froze: above into a safety-gated tier that couldn't fit in a public API, below into open-source commodity pricing, with the middle squeezed thin.

What Actually Shipped in Those Eight Days?

Claude Opus 4.7 (April 16). Software-engineering gains, high-resolution vision up to 2576px, and a new concept called "task budgets," a rough token target for an entire agentic loop. Same $5/$25 pricing as 4.6. [1]

GPT-5.5 (April 23). OpenAI's "smartest and most intuitive" model, shipping six weeks after GPT-5.4. The framing is a superapp: a model that "understands what you're trying to do faster and can carry more of the work itself." Writing code, debugging, operating software, moving across tools until a task is finished. [2]

DeepSeek V4 (April 24). V4 Pro at 1.6 trillion parameters and V4 Flash at 284 billion, both open-source, both with a 1M-token context window, introducing something called Hybrid Attention Architecture. Pricing: $0.30 per million input, $0.50 per million output. Reviewers report V4 Pro beating Claude Sonnet 4.5 and approaching Opus 4.6 non-thinking quality. [3]

Backdrop. Anthropic's Mythos Preview dropped April 8, but only through Project Glasswing: twelve founding organizations, about forty critical-infrastructure operators, $25 input and $125 output per million tokens. Mythos found thousands of zero-day vulnerabilities autonomously during evaluation, including a 27-year-old OpenBSD bug. [4]

Why Do These Frontier AI Models All Look the Same?

Read the announcements in sequence and the product pitches are interchangeable.

  • Anthropic (Opus 4.7): "complex, long-running tasks with rigor."
  • OpenAI (GPT-5.5): "move across tools until a task is finished."
  • DeepSeek (V4): the best open-source model for "Agentic Coding."

Three labs, three continents, one sentence. Meta, xAI, and Mistral didn't ship in this window, but their recent releases point at the same agentic-coding target. The form is industry-wide, not a three-lab coincidence.

The shared feature set is no longer differentiated. Context windows converged on 1M tokens. Coding benchmarks sit within a few points of each other. Every release now ships the same agentic scaffolding: budgets, tool loops, browser operation. OpenAI shipped GPT-5.5 six weeks after 5.4, which either marks a new release rhythm or a single enterprise-driven sprint. Either way, the motion has moved from "release when you beat something" to "release to stay in the conversation."

Simultaneous invention is the historical pattern

Matt Ridley calls this pattern simultaneous invention in The Evolution of Everything. [6] Ripeness produces parallel discovery:

  • Calculus. Invented twice in the same decade, by Newton and Leibniz.
  • The lightbulb. Patented by 23 different people before Edison.
  • The telephone. Bell and Elisha Gray filed on the same day; Bell's paperwork won by a few hours.

Not copying. Convergence under shared constraint.

I wrote about this pattern earlier this month in Good Products Are Hard to Vary. Cars designed by rival teams come out of the same wind tunnel with the same shape. Boeing, Airbus, and Embraer wings converge on the same curve.

The April 2026 frontier is that wind tunnel. The air is training gradients. The constraint is the transformer. Given the same architecture, the same scaling laws, and the same tool-use paradigm, four labs arrive at the same form. 1M context. Agentic loop. Coding focus. Nothing else survived the filter.

That isn't the rhythm of breakthroughs. That's the rhythm of a form freezing.

Where Did the Margin Actually Go?

Here's the contrarian read: the public frontier is commoditizing, and the margin has already moved elsewhere. Ridley's evolutionary lens makes this specific. When a form freezes, two things happen at once. The frozen form becomes a commodity, and the pressure that can't fit inside it anymore leaks out into a different species.

Upward, a speciation event. Mythos is the tell. Anthropic admits Opus 4.7 "trails" Mythos, but Mythos is not for sale. It sits behind Glasswing, behind identity verification, priced 5x Opus. [4] That's the "what can't go home" moment: capability that outgrew the public-API form and had to leave it. Gemini 3.1 Pro's ARC-AGI-2 jump (77.1% versus 31.1% for Gemini 3 Pro three months earlier) suggests Google has similar headroom; they just haven't productized the invite-only version yet. [5] The next real frontier isn't the next number after 5.5. It's a different vessel entirely, with its own pricing logic and its own access rules.

Downward, commodity pricing. DeepSeek V4 at $0.30 input is roughly 16x cheaper than Opus 4.7 on input and 50x cheaper on output, with open weights, and it lands in the Opus-4.6-non-thinking capability band (not Opus 4.7 thinking mode, not Mythos). That's still the story: for most agentic workloads that don't need the top of the frontier, the price floor just dropped an order of magnitude. This is what happens to a form after it freezes. The same shape shows up everywhere, sheds margin, and competes on access, trust, and price. Like tires. Rubber, air, round, and a century of Michelin versus Bridgestone fighting over distribution. [3]

The middle is where margin dies. GPT-5.5 and Opus 4.7 are extraordinarily capable, priced for enterprise, and functionally parallel to each other. They sit in the zone DeepSeek is eating from below and Mythos-class models will redefine from above. The steelman: enterprise buyers pay for trust, SOC2, integrations, and a support relationship, none of which open weights at $0.30 can replicate tomorrow. True. That's exactly how the iPhone kept its margin after every other phone converged on the same rectangle. Distribution and trust do the work the product stopped doing. Fine for now. Uncomfortable in a year.
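The spread between those tiers is easy to make concrete. Here is a minimal Python sketch using the per-million-token rates quoted above; the workload shape (400k tokens in, 60k out) is a hypothetical chosen for scale, not a measured agentic trace.

```python
# USD per 1M tokens, as quoted in this article (not an official price sheet).
PRICES = {
    "Mythos Preview":  {"input": 25.00, "output": 125.00},
    "Claude Opus 4.7": {"input": 5.00,  "output": 25.00},
    "DeepSeek V4":     {"input": 0.30,  "output": 0.50},
}

def multiple(expensive: str, cheap: str, direction: str) -> float:
    """How many times more the expensive tier costs per token."""
    return PRICES[expensive][direction] / PRICES[cheap][direction]

# The spreads the article cites:
opus_vs_deepseek_in = multiple("Claude Opus 4.7", "DeepSeek V4", "input")    # ~16.7x
opus_vs_deepseek_out = multiple("Claude Opus 4.7", "DeepSeek V4", "output")  # 50x
mythos_vs_opus_in = multiple("Mythos Preview", "Claude Opus 4.7", "input")   # 5x

def task_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    """Cost in USD for one agentic run with the given token counts."""
    p = PRICES[model]
    return (in_tokens * p["input"] + out_tokens * p["output"]) / 1_000_000

# Hypothetical run: read 400k tokens, emit 60k.
print(f"Opus/DeepSeek input multiple:  {opus_vs_deepseek_in:.1f}x")
print(f"Opus/DeepSeek output multiple: {opus_vs_deepseek_out:.0f}x")
print(f"Opus 4.7 per run:    ${task_cost('Claude Opus 4.7', 400_000, 60_000):.2f}")
print(f"DeepSeek V4 per run: ${task_cost('DeepSeek V4', 400_000, 60_000):.2f}")
```

Swap in real contract rates and real token traces and the same few lines tell you which tier a given workload actually needs.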

What to Watch Over the Next Six Weeks

Three things.

One, whether mid-tier frontier pricing holds. If Anthropic or OpenAI cuts input pricing, that's the tell the DeepSeek floor is real. If they don't, watch enterprise contracts instead. The discounts will move before the list price does.

Two, whether Glasswing-style gating becomes the industry default. OpenAI already has Trusted Access for Cyber. If Google or Meta announces its own tiered, identity-verified program, the "frontier as public product" era is quietly over.

Three, how fast the open-source gap closes on agentic coding specifically. DeepSeek claims V4 matches Claude Sonnet 4.5 on real tasks. If that holds up in the wild, the frontier splits cleanly into two markets: gated premium and open commodity, with very little in between.


If you work with these models every day, pay attention to which tier your workload actually needs. Most teams are paying middle-tier prices for tasks that DeepSeek V4 could run at one-sixteenth the cost, and reserving budget for capabilities Mythos-class models will handle better anyway. That mismatch is about to get expensive.

I write about this stuff more casually on X, and do breakdowns on Instagram under The Simple Take. Longer takes land on LinkedIn.

Sources

  1. Introducing Claude Opus 4.7 (Anthropic)

  2. Introducing GPT-5.5 (OpenAI); OpenAI launches GPT-5.5 (Fortune)

  3. DeepSeek unveils newest flagship AI model (Bloomberg); DeepSeek V4 released (SitePoint); DeepSeek API Pricing 2026 (NxCode)

  4. Claude Mythos Preview (Anthropic); Anthropic releases Claude Opus 4.7, concedes it trails unreleased Mythos (Axios)

  5. Gemini 3 (Google DeepMind); Gemini 3.1 Pro complete guide (ALM Corp)

  6. Matt Ridley, The Evolution of Everything (2015), and How Innovation Works (2020). Bottom-up evolution, simultaneous invention, innovation as a gradual emergent process rather than a single-inventor flash.

The Simple Take

Complex ideas, one clear thought. I write when it’s worth your time.