Anthropic officially bans OpenClaw, causing a 24-hour crash among global developers

Just now, an “open-door to close the gates” incident happened in the AI community—an event so significant it could be recorded in the history books.

Anthropic has officially banned using its own subscription plans to access OpenClaw!!!

Claude Code creator Boris Cherny announced:

Starting at 3:00 p.m. Eastern Time on April 4 (3:00 a.m. Beijing Time on April 5), Claude will block all third-party tools—only allowing you to use them with additional bundles or via the API.

This means that the thousands upon thousands of developers and startup teams who relied on OpenClaw for efficiency lost their “unlimited” usage benefit overnight, and were forced onto an extremely expensive pay-as-you-go model.

OpenClaw’s loyal Claude users took a heavy blow.

The timing of this announcement is also very thought-provoking—OpenClaw’s founder Peter Steinberger had just switched to OpenAI not long ago, and what Anthropic wants is crystal clear!

You could say this is business revenge disguised as policy.

The news quickly shot to the top of Hacker News, the developer community’s front page.

Let’s remember this day: April 4, 2026—starting from this day, the AI industry has shifted from open collaboration to big players carving up the market.

An official email from Claude confirms it: this time, Anthropic is coming for OpenClaw!

This policy will be enforced first on OpenClaw starting April 4, but it applies to all third-party toolchains and will soon be rolled out to more tools.

As an appeasement, Anthropic offered a one-time credit equal to exactly one month’s subscription fee. Claims are valid until April 17.

A heavy hammer falls

Note that the timing of Anthropic’s email was anything but innocent.

On April 3, Friday evening—this is exactly the time internet companies love to break bad news.

Anthropic sent notifications to tens of thousands of OpenClaw users: starting tomorrow, your Claude subscription quota can no longer be used on OpenClaw. If you want to continue, then pay-as-you-go.

After a three-month siege, the last blow has finally landed.

The fuse: a “defection” capable of changing the landscape

Why would Anthropic, at this particular moment, drop all pretense and go straight for an open-source tool?

Because OpenClaw’s key figure—the “Lobster King,” Peter Steinberger—joined their “deadly rival,” OpenAI.

In the past, Peter Steinberger was one of the developers who understood the Claude ecosystem best, and his OpenClaw made Claude incredibly useful.

Now, for Anthropic, OpenClaw has become a “Trojan horse” from the enemy camp.

Anthropic believes OpenClaw is no longer just a pure efficiency tool, but an “intelligence-gathering device” reaching into their backyard.

Since the founder is now at OpenAI, don’t expect your tool to keep “riding along” on our subscription quota.

Peter himself also spoke up, hinting that Anthropic was “shutting the door to beat the dog,” as the idiom goes, while “freeloading” off the open-source community:

“Dave Morin (a member of OpenClaw’s board) and I tried to persuade Anthropic to stay calm.

But in the end, all we could manage was to get this day delayed by just one week.”

**Widespread developer despair: budgets explode overnight**

For ordinary developers, this crackdown is nothing short of an attack from a higher dimension.

Previously, many developers used a fixed monthly subscription for Claude and paired it with OpenClaw’s powerful interfaces to achieve extremely low-cost automation workflows.

Buy a $20 Claude Pro plan, and the lobster could keep Claude working 24/7. Route the same volume through the API, and the bill could easily run into thousands of dollars.

On one side, a Max subscription capped at $200; on the other, four-figure API expenses.
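To make that gap concrete, here is a back-of-the-envelope sketch. The per-token prices and token volumes below are illustrative assumptions for the sake of the arithmetic, not Anthropic’s actual rates:

```python
# Flat subscription vs. pay-as-you-go: a rough, illustrative comparison.
# All prices and volumes here are assumptions, not real published rates.

SUBSCRIPTION_USD = 200.0      # hypothetical Max plan, flat per month
PRICE_IN_PER_MTOK = 3.0       # assumed $ per million input tokens
PRICE_OUT_PER_MTOK = 15.0     # assumed $ per million output tokens

def api_bill(input_mtok: float, output_mtok: float) -> float:
    """Monthly API cost for a given token volume (millions of tokens)."""
    return input_mtok * PRICE_IN_PER_MTOK + output_mtok * PRICE_OUT_PER_MTOK

# An always-on agent easily chews through hundreds of millions of tokens:
heavy_month = api_bill(input_mtok=400, output_mtok=80)
print(f"API route: ${heavy_month:,.0f} vs. subscription: ${SUBSCRIPTION_USD:,.0f}")
# → API route: $2,400 vs. subscription: $200
```

Under these assumed rates, the same workload costs an order of magnitude more through the API, which is exactly the gap the article describes.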

Now, this path has been cut off by Anthropic itself.

Pay-as-you-go means it’s no longer a bundled monthly plan, and costs are wildly uncontrollable. Many small and medium-sized AI teams’ budgets were originally locked in month by month; now they could blow up at any time.

What’s even harsher is that if you don’t want to pay this sky-high toll, you have to painfully rebuild your entire business logic within 24 hours.

In a word: brutal.

Lobsters stir the waters, and the grudges started long ago

The feud between the “Lobster King” and Anthropic began long ago.

Steinberger once publicly complained that Anthropic’s dealings with him were “basically all lawyer letters.”

**First move: brand separation.**

At the end of January, the lawyer letters forced Clawdbot to change its name.

**Second move: technical blocking.**

On January 9, Anthropic quietly deployed a server-side check: any subscription token not issued by the official Claude Code client would be rejected outright.
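The kind of gate described above can be sketched in a few lines. Everything here is hypothetical: the field names, header, and client identifier are invented for illustration, not taken from Anthropic’s actual implementation:

```python
# Hypothetical sketch of a server-side client gate: subscription (OAuth)
# tokens are only honored when the request claims to come from the
# official client. All names below are invented for illustration.

ALLOWED_CLIENTS = {"claude-code"}   # assumed official client identifier

def is_request_allowed(token_claims: dict, headers: dict) -> bool:
    """Let API-key traffic through; gate subscription tokens by client id."""
    if token_claims.get("kind") != "oauth_subscription":
        return True                              # pay-as-you-go keys pass
    client = headers.get("x-client-id", "")
    return client in ALLOWED_CLIENTS             # third parties rejected

# A third-party tool presenting a subscription token is turned away:
print(is_request_allowed({"kind": "oauth_subscription"},
                         {"x-client-id": "openclaw"}))
# → False
```

Note that a real check would need more than a self-reported header (which is trivially spoofable), which is presumably why the article later mentions third-party tools “disguising client identities.”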

Overnight, OpenClaw’s core play went to zero.

**Third move: rewriting the terms.**

In mid-February, the terms of service were updated: using OAuth tokens from Free, Pro, or Max accounts in any third-party tool now counts as a violation.

**Fourth move, the harshest: copying the features outright.**

Claude Cowork launched Dispatch, allowing you to remotely control the desktop version of Claude from your phone; Claude Code rolled out Channels, connecting Telegram and Discord.

Within four weeks, OpenClaw’s core functions had been replicated one-for-one by the official product!

As AI blogger Matthew Berman put it: “They directly built OpenClaw a version of themselves.”

Tech outlet Semafor reported this morning that, when asked whether customers had been requesting that the company build its own OpenClaw, Anthropic’s Chief Business Officer Paul Smith admitted that this was indeed the case.

And today’s email, dated April 4, is the final cut.

The play: promoting their own “favorite child”

Claude Cowork

While cracking down on OpenClaw, Anthropic has also made its real intentions plain.

They began to relentlessly hint: Stop using those unreliable third parties—come try our native Claude Cowork!

Claude Cowork can enable Claude to control the coding environment and the computer interface more deeply.

Actually, this is not an Anthropic-exclusive play.

This is exactly the “platform lock-in” strategy that big players are best at:

  1. Use third-party open-source tools to attract developers and build up the ecosystem.

  2. Find excuses (such as safety concerns or infrastructure pressure) to ban third parties.

  3. Force users to migrate to the platform’s own paid native tools, which are more expensive and more tightly controlled.

This typical “vertical integration” play lets them firmly control the entry points and user experience, while gradually stripping away the advantages of those “uncontrolled connectors” in the ecosystem.

Interestingly, OpenAI chose an almost completely opposite path in roughly the same period.

OpenAI explicitly allows Codex subscriptions to be used in third-party clients like OpenClaw.

In March, it went a step further, announcing free ChatGPT Pro access for maintainers of open-source projects, with OpenClaw explicitly named on the beneficiary list. The very people Anthropic drove away, OpenAI took in!

No one is completely innocent

To be fair, Anthropic has its reasons.

Every user who burns thousands of dollars’ worth of API usage on a $200 subscription is losing the company money.

Third-party tools bypass official telemetry, disguise client identities, and create monitoring blind spots—these are real engineering and security risks.

What’s more, Steinberger now sits in an OpenAI office. In Anthropic’s eyes, OpenClaw has gone from a “slightly annoying freeloader” to “a spy sent from the other side.”

And OpenClaw itself isn’t clean either.

It was found to have a critical vulnerability, CVE-2026-25253, with a CVSS score of 8.8: attackers could steal users’ authentication tokens through a single malicious link.

On the public internet, security researchers have already found more than 30,000 OpenClaw instances with openly exposed portals.

But these rationales can’t dispel developers’ feeling of being played.

The platform first leaves the doors wide open, waits for people to come in and build the building, and only then announces that the rules have changed. This script has played out too many times.

X killing third-party clients, Apple tightening the App Store, and Google cutting free APIs—all are the same story.

Every time, it’s developers who get hurt.

**The open ecosystem enters twilight: AI giants close the gates**

Today’s most decisive “killer move” from Anthropic makes one heartbreaking fact clear—

That golden age of AI—belonging to developers, free and open—is coming to an end.

We once thought AI would be like the early internet: shared through open protocols and evolved through community effort. But the reality is that the model owners hold the power of life and death, and can wipe out the work of tens of thousands of people at any time.

Now, with the final ultimatum at 3:00 p.m. on April 4 already behind us, the countdown has begun.

Developers are faced with three brutal choices—

To swallow the bitter pill, endure the sky-high pay-as-you-go charges, and keep using OpenClaw until the budget runs out?

Or to migrate everything at once, kneel under Anthropic’s native tools, and accept platform lock-in?

Or walk away in anger and abandon Claude entirely?

In the end, while the giants are busy with their battle of the gods, don’t forget: it was tens of thousands of developers, line by line of code, who built the thrones beneath those billion-dollar valuations.

This time, Anthropic won the competition, but completely lost the community’s trust!
