Anthropic built what it calls the most capable AI model ever created. Then it locked the door, granting access to exactly 52 organizations. Not 52,000. Not 520. Fifty-two. The rest of us got a press release.
The stated reason is cybersecurity. Mythos Preview has found thousands of high-severity vulnerabilities in every major operating system and web browser, some hiding for over two decades. Anthropic committed up to $100 million in usage credits and $4 million in donations to open-source security groups. The post-preview price tag: $25 per million input tokens and $125 per million output tokens. For context, Claude 3.5 Sonnet runs at $3 per million input tokens. That is an 8x markup.
I think the security rationale is real. But the pricing and access structure reveal something bigger. Anthropic is running a masterclass in offer construction. And every company selling AI products should be taking notes.
The Velvet Rope Principle
Here is the framework, which I call the Velvet Rope Principle: restrict access to signal value. Price high to anchor perception. Release broadly only after the market has already decided what the product is worth.
[Chart: the numbers behind Anthropic's velvet rope strategy and the broader frontier race.]
This is not new. Luxury brands have done it for centuries. But Anthropic is applying it to enterprise AI infrastructure, and that changes the math for the entire industry.
The Velvet Rope Principle has three stages. Stage one: Curate the guest list. Stage two: Let the guest list do your marketing. Stage three: Open the doors at a price the market already accepts.
Anthropic is in stage one. The guest list includes Apple, Google, Microsoft, Amazon Web Services, NVIDIA, JPMorgan Chase, CrowdStrike, Cisco, Broadcom, Palo Alto Networks, and the Linux Foundation. These are not customers. They are walking case studies. When Anthropic eventually opens broad access, the sales conversation is already over. "The same model Apple uses to secure its operating system" is not a pitch. It is a fait accompli.
The $125 Output Token and the Art of Price Anchoring
Let's talk about the money, because the money tells the real story.
Anthropic set Mythos Preview at $25 per million input tokens and $125 per million output tokens. Compare that to its existing public models: Mythos is priced at roughly 8x on both sides.
Now here is what most people miss about pricing. The price is not the price. The price is a story you tell about what the product is worth. When you charge 8x more, you are not saying "this costs more to run." You are saying "this is a fundamentally different category of product." Anthropic is not competing with its own Sonnet tier. It is creating a new tier above everything else in the market.
This is textbook price anchoring. Once the market absorbs that the "best" AI model costs $125 per million output tokens, every future Anthropic model priced below that looks like a bargain. Even if the eventual broad-release price drops to $50 or $60 per million output tokens, that number feels reasonable because the anchor is $125. The first number the market sees becomes the ceiling against which all future numbers are judged.
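The anchoring math is easy to check. Here is a back-of-envelope sketch in Python using the prices quoted above; the workload size is an arbitrary illustration, and Sonnet's $15 output rate is its public list price, not a figure from this article:

```python
# Back-of-envelope token cost at each tier. Prices are dollars per million
# tokens; the 10M-in / 2M-out workload is made up for illustration.
def cost(m_in: float, m_out: float, price_in: float, price_out: float) -> float:
    return m_in * price_in + m_out * price_out

sonnet = cost(10, 2, price_in=3.0, price_out=15.0)    # Claude 3.5 Sonnet
mythos = cost(10, 2, price_in=25.0, price_out=125.0)  # Mythos Preview
print(f"Sonnet: ${sonnet:,.0f}  Mythos: ${mythos:,.0f}  "
      f"multiple: {mythos / sonnet:.1f}x")
# -> Sonnet: $60  Mythos: $500  multiple: 8.3x
```

Same workload, same tokens, 8.3x the bill. The delta is the story the price tells, not the compute behind it.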
Consider the math from the buyer's side. Anthropic reported annualized revenue of $30 billion and holds roughly 40% of enterprise LLM API spend, according to recent industry estimates. If even 30 of those 52 organizations become paying customers at the post-preview rate, the revenue per customer dwarfs anything in the standard API tier. But the revenue is almost beside the point. The real asset is the positioning.
There is a concept I keep coming back to: the golden goose versus the golden eggs. The eggs are the token revenue from 52 customers. The goose is the market perception that Anthropic's best model is so powerful it had to be restricted for safety reasons. That perception is worth more than any single quarter of API revenue.
The damaging admission here is worth noting. Anthropic is essentially telling the world: "We built something so dangerous we cannot release it." That is a risk. If Mythos Preview underdelivers, or if the vulnerabilities it finds turn out to be mostly noise, the credibility damage is severe. It is unclear whether Mythos Preview's vulnerability detection is genuinely different in quality, or just different in scale.
But if the capabilities hold up, the damaging admission becomes the most powerful trust signal in enterprise AI. "We could have released this to everyone and made a fortune. We chose not to because the risks were too high." That sentence, if believed, is worth billions in enterprise trust.
My read on this: Anthropic is selling safety as a premium feature. Not safety of the model. Safety of access to the model. The scarcity is the product.
There is a real irony buried in the partner list. Anthropic reportedly refused to work with the Pentagon on certain AI applications, citing safety concerns. Yet it handed Mythos Preview to 52 organizations, including defense contractors and financial institutions, with the understanding that this model can "surpass all but the most skilled humans at finding and exploiting software vulnerabilities." The line between defensive and offensive use is a policy document, not a technical constraint.
The pricing also reveals something about Anthropic's competitive positioning against OpenAI. OpenAI has leaned into consumer distribution with ChatGPT. Anthropic is leaning into enterprise exclusivity. These are fundamentally different go-to-market strategies. OpenAI wants to be the Google of AI. Anthropic wants to be the McKinsey of AI: expensive, exclusive, and trusted precisely because it is expensive and exclusive.
2031: Three Signals Inside the Same Shift
Anthropic prices Mythos at 8x its existing tiers to create a new category ceiling.
At $25 per million input tokens and $125 per million output tokens, Mythos Preview is not competing with Sonnet. It is anchoring the market's perception of what frontier AI costs. Every future model priced below $125 output will feel like a bargain by comparison.
Only 52 organizations get access to the most powerful vulnerability scanner ever built.
The partner list reads like a Fortune 10 roster: Apple, Google, Microsoft, NVIDIA, JPMorgan Chase. Meanwhile a 50-person healthcare company running outdated FreeBSD gets nothing. If AI-powered vulnerability detection becomes essential, the security gap between large and small enterprises widens dangerously.
GPT-5.5 sits at 96% odds for April release as DeepSeek V4 trails at 82%.
Manifold prediction markets give GPT-5.5 a 96% probability of launching before April 30, while DeepSeek V4 sits at 82% for an April release. Anthropic's restricted launch is a preemptive positioning move against a wave of frontier competitors arriving within weeks.
Fast-forward five years. What does the Velvet Rope Principle look like when every major AI lab copies it?
Here is the asymmetric bet. If Anthropic's restricted-access model works, every frontier lab will adopt some version of it. New model capabilities will be previewed to 50 to 100 enterprise partners before broad release. Pricing will be tiered not just by capability but by access timing. "Early access" becomes a product category of its own, the way "priority boarding" turned a free activity into a $50 fee.
The compounding effect matters. Each restricted release builds a network of enterprise relationships that feed data, feedback, and credibility back into the next model. Apple finds vulnerabilities with Mythos Preview. Anthropic uses those findings to improve the next model. The next model gets previewed to Apple again. The flywheel spins.
But there is a risk on the other side of the rope. The 52 organizations in Project Glasswing are among the largest companies on earth. The other 99.9% of businesses get nothing. If AI-powered vulnerability detection becomes essential to cybersecurity, and only the largest companies have access, the security gap between big and small widens. Attackers will not restrict themselves to targeting companies on Anthropic's guest list. A 50-person healthcare company running outdated FreeBSD is exactly the kind of target that benefits most from automated vulnerability scanning and is least likely to get access.
The 5-year question is whether restricted access creates a durable advantage or a temporary window. AI capabilities tend to proliferate fast. Open-source models are 12 to 18 months behind frontier models on most benchmarks. If an open-source model reaches Mythos-level vulnerability detection by 2028, the Velvet Rope collapses. Anthropic's bet is that the window is long enough to build enterprise relationships that outlast the capability gap.
I think the window is real but shorter than Anthropic hopes. The strategic value is not in the model. It is in the relationships and the data those relationships generate. The companies that use Mythos Preview for 18 months will have shared vulnerability data, workflow integrations, and institutional trust that cannot be replicated by switching to an open-source alternative. That is the moat. Not the model weights. The switching costs.
What to Build This Weekend
You are not one of the 52 organizations. Neither am I. Here is what you can do anyway.
First, build a pricing tier experiment for your own product or service. Use Gumloop to chain together a simple workflow: pull your current pricing, generate 3 alternative tier structures using an AI model, and output a comparison table. The goal is not to find the perfect price. The goal is to practice thinking about pricing as a signal, not just a number. Gumloop's visual node editor makes this a 30-minute project, not a weekend one.
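If you would rather see the logic than wire up nodes, the core of that experiment fits in a few lines of plain Python. This is a sketch of the anchoring structure described above; the tier names, the 8x anchor multiple, and the $3 base price are illustrative placeholders, not recommendations:

```python
# Sketch of an anchored tier structure: the top tier exists mostly to make
# the middle tier feel cheap. All numbers are illustrative placeholders.
def anchored_tiers(current_price: float, anchor_multiple: float = 8.0) -> dict:
    anchor = current_price * anchor_multiple
    return {
        "anchor":  anchor,        # the new ceiling; sold on exclusivity
        "premium": anchor * 0.5,  # "a bargain" only relative to the anchor
        "base":    current_price, # unchanged entry point
    }

for name, price in anchored_tiers(3.0).items():
    print(f"{name:>8}: ${price:.2f} per million tokens")
```

Run it with your own current price and ask which tier you would actually want customers to land on. The exercise is the point: each number is a signal, and the anchor's job is to reposition the others.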
Second, apply the Velvet Rope Principle to your own launches. Next time you ship something, restrict access to 10 people for the first week. Not because your product is dangerous. Because scarcity forces you to choose your best 10 customers, and those 10 customers become your case studies. Use LinkedGrow to turn their feedback into LinkedIn content that does your marketing for you.
Third, audit your own security posture. You do not have Mythos Preview, but you have free tools. Run a dependency scan on your codebase using npm audit, pip-audit, or Trivy. These tools catch known vulnerabilities, not zero-days, but known vulnerabilities account for the vast majority of actual breaches. Do the boring work first.
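A minimal shell sketch of that first pass: it runs whichever of those scanners you happen to have installed and skips the rest. The `audit_deps` helper and its `DRY_RUN` flag are my own scaffolding, not features of any of these tools:

```shell
# Run whichever free dependency scanners are installed; skip the rest.
# Set DRY_RUN=1 to list what would run without running anything.
audit_deps() {
  for tool in "npm audit" "pip-audit" "trivy fs ."; do
    cmd=${tool%% *}   # first word of the command line
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "skipped: $cmd not installed"
    elif [ "${DRY_RUN:-0}" = "1" ]; then
      echo "would run: $tool"
    else
      echo "running: $tool"
      $tool || true   # non-zero exit usually means findings, not failure
    fi
  done
}

DRY_RUN=1 audit_deps
```

Drop the `DRY_RUN=1` to actually scan, and run it from the root of the repository you want audited.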
The lesson from Project Glasswing is not about cybersecurity. It is about how you position what you sell. Price tells a story. Access tells a story. Anthropic is telling both stories at once, and the market is listening. You can tell the same stories at any scale. You just need to decide who gets behind the velvet rope.
Apply the Velvet Rope Principle to your own product this weekend.
- Build a pricing tier experiment. Use Gumloop (plans start at $13/month) to chain a workflow that pulls your current pricing, generates 3 alternative tier structures via an AI model, and outputs a comparison table. Think of pricing as a signal, not just a number.
- Restrict your next launch to 10 people. Pick your best 10 customers and give them exclusive early access for one week. Use their feedback as case studies and turn those stories into LinkedIn content with LinkedGrow. Scarcity forces you to choose your audience deliberately.
- Audit your security posture without Mythos. Run free open-source vulnerability scanners like OpenVAS or Trivy against your infrastructure this weekend. You do not need a $25-per-million-token model to find the low-hanging fruit that attackers exploit first.
Anthropic is not selling a model. It is selling the right to use one.
The Velvet Rope Principle works because scarcity is a signal and price is a narrative. Anthropic locked Mythos Preview behind a 52-company firewall, priced it at 8x existing tiers, and turned a product launch into a trust exercise. The real moat is not the model weights. It is the switching costs, the shared vulnerability data, and the institutional relationships that compound over 18 months of exclusive access. If the capabilities hold, every frontier lab will copy this playbook. If they do not, Anthropic just taught the industry an expensive lesson about the distance between perception and performance.