Anthropic Just Pulled the Rug on Third-Party Claude Tools
By Gabe on Apr 4, 2026 3:27:32 PM

Anthropic has changed the rules for every Claude subscriber using a third-party tool. Starting today, third-party tools classified as "harnesses" can no longer run on a standard Claude subscription. OpenClaw was named explicitly.
To be specific, the Free, Pro, Max, and Team plans can no longer be used with harnesses. If you use any third-party AI tool that runs through your Claude account, this policy applies to you now or will shortly.
We want to explain what's happening, why we think Anthropic made this call, and help you find an alternative if paying for API usage directly is not a good fit.
What Is a Harness, and Why Is Anthropic Cutting Them Off?
A harness, in Anthropic's framing, is a third-party application that authenticates against Claude's API using your personal subscription credentials rather than purchasing API access directly. Instead of paying per token through the Anthropic API, these tools attach to your Claude subscription and consume your monthly allotment as part of their operation.
From a user’s perspective, this is incredibly attractive. You are already paying for Claude. Why not let all the tools out there use that account?
From Anthropic's perspective, the math wasn’t mathing.
Why We Think Anthropic Made This Move
Boris Cherny, the creator of Claude Code at Anthropic, posted the official explanation on X. His words: "subscriptions weren't built for the usage patterns of these third-party tools," and "we want to be intentional about managing growth to serve customers sustainably long-term."
That's the stated reason. We think it's true, but incomplete.
The capacity economics don't work for Anthropic. A Claude subscription is priced around the expected usage of a human interacting with Claude through chat. An agent running continuously against a codebase or handling tasks you prescribe is not a human having a conversation. It's a compute-intensive process that burns through tokens at a rate far beyond any human user's. When thousands of users are running agents on their Claude subscriptions, the actual compute Anthropic is forced to deliver is significantly higher than any subscription tier was designed to handle. Until today, Anthropic was eating those costs.
Subscription pricing and usage-based pricing are structurally incompatible at scale. Subscription models only work when usage is relatively predictable across all tiers offered. Autonomous agents introduce unbounded variance. One user's agent might run a light scan and burn very few tokens. Another might run a complex series of tasks across a large codebase and consume orders of magnitude more. Allowing this to continue created a group of users who were deeply unprofitable and, by the same token (pun intended), a group who were overpaying. That is an unsustainable model.
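To make that variance concrete, here is a toy calculation. Every number in it is hypothetical, chosen only to illustrate the spread between usage profiles; none of these figures are Anthropic's actual prices or limits.

```python
# Illustrative only: hypothetical token volumes and prices,
# not Anthropic's actual rates or plan limits.
SUBSCRIPTION_PRICE = 20.00   # flat monthly fee (hypothetical)
API_PRICE_PER_MTOK = 3.00    # blended $ per million tokens (hypothetical)

monthly_tokens = {
    "chat user":    2_000_000,    # occasional conversations
    "light agent": 30_000_000,    # a few automated scans per day
    "heavy agent": 900_000_000,   # continuous runs on a large codebase
}

for profile, tokens in monthly_tokens.items():
    # Cost of the same usage if it were billed per token on the API
    api_cost = tokens / 1_000_000 * API_PRICE_PER_MTOK
    print(f"{profile:>12}: ~${api_cost:,.0f} of compute "
          f"for a ${SUBSCRIPTION_PRICE:.0f} subscription")
```

Under these made-up numbers, the chat user consumes roughly $6 of compute, the light agent $90, and the heavy agent $2,700, all against the same flat fee. Flat pricing cannot span that range.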
But PermaShip thinks there's a product protection angle too. Pro and Max subscriptions now bundle Claude Code and Claude Cowork directly. These are Anthropic's own autonomous and collaborative products included in your subscription fee. A third-party tool quietly draining your subscription limits before you even open Claude Code isn't just a capacity problem; it's a direct threat to the perceived value of their own products. If your subscription gets maxed out and the culprit is a background tool you set up last month, you're going to blame Anthropic, not the third-party harness. We believe they are drawing a line to protect the product experience they are selling, not just the infrastructure that enables it all.
Does this mean Anthropic is hostile to autonomous coding? No. Cherny was explicit that the API remains a supported path. But it does mean they're drawing a clear line: autonomous, high-volume workloads belong on the API, where pricing scales with actual usage. Subscription accounts were never the right funding mechanism for this. Anthropic just took a while to enforce it.
What This Means for Users of Affected Tools
If you're currently using tools that are connected to your Claude account, for everything from coding to email to browser automation, you have a few options:
Option 1: Enable extra usage on your Claude account. Anthropic offers pay-as-you-go billing on top of your subscription. The tools will keep working, but you will be paying per token for the autonomous workloads on top of your subscription fee. The cost structure becomes less predictable for you.
Option 2: Switch to a tool that uses a different LLM backend. Some tools support other providers. If you were using OpenClaw with the Claude Code backend, switching to a Gemini or OpenAI backend sidesteps the issue entirely, though you may see capability differences.
Option 3: Move to a platform that manages API access for you. This is the path we took from day one.
PermaShip Was Built to Never Have This Problem
When we built PermaShip, we made a foundational decision early: we would not support "Bring Your Own Key" (BYOK) or route execution through user-provided LLM credentials. PermaShip Control uses centralized, platform-managed API keys to authenticate all calls to Claude, Gemini, and other providers.
This was not primarily a decision made to avoid Anthropic's subscription policies. It was made for three reasons that are core to running an autonomous engineering platform:
1. Compute unit metering requires it.
Instead of passing raw token costs to you, PermaShip translates everything (infrastructure, token burn) into Compute Units. When you purchase a PermaShip plan, you're purchasing Compute Units, not API access to a specific model. This lets us optimize model selection across providers, enforce hard spending caps, and prevent runaway token burn from reaching your bill silently.
That architecture only works if we hold the API keys. If you're bringing your own key, we can't meter, cap, or optimize on your behalf.
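The mechanism can be sketched in a few lines. This is a simplified illustration of the metering pattern described above, not PermaShip's actual implementation: the class name, the conversion rate, and the budget are all invented for the example.

```python
# Hypothetical sketch of platform-side metering. Names and rates
# are illustrative, not PermaShip internals.
class ComputeMeter:
    def __init__(self, budget_units: float, units_per_mtok: float = 10.0):
        self.budget = budget_units          # hard spending cap, in Compute Units
        self.units_per_mtok = units_per_mtok  # conversion: tokens -> Compute Units
        self.spent = 0.0

    def charge(self, tokens: int) -> None:
        """Convert token burn to Compute Units; refuse to exceed the cap."""
        cost = tokens / 1_000_000 * self.units_per_mtok
        if self.spent + cost > self.budget:
            raise RuntimeError("hard cap reached: execution halted")
        self.spent += cost

meter = ComputeMeter(budget_units=100)
meter.charge(5_000_000)   # 50 units
meter.charge(4_000_000)   # 40 units, 90 total
try:
    meter.charge(2_000_000)  # 20 more units would breach the 100-unit cap
except RuntimeError as err:
    print(err)
```

The key property is that the cap is enforced before the spend happens, at the layer that holds the keys. A BYOK setup has no equivalent choke point.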
2. Financial liability requires centralized control.
Because PermaShip holds the keys, PermaShip holds the financial liability for any runaway execution. That might sound like a risk for us. It's actually a feature for you. When Nexus, our executive agent, judges a proposal and determines the blast radius is acceptable before creating a ticket, and when our FinOps guardrails enforce hard caps and retry budgets on execution loops, we're protecting our own economics as much as yours. The incentives are aligned. We do not want runaway token burn any more than you do.
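A guardrail of the kind described above, pairing a retry budget with a per-run token cap, might look like this. This is a minimal sketch under assumed names; the function signature and limits are illustrative, not PermaShip's actual FinOps code.

```python
# Hypothetical FinOps guardrail: bound both retries and token burn,
# so a flaky or runaway task cannot loop indefinitely on our bill.
def run_with_guardrails(task, max_retries: int = 3, token_cap: int = 500_000) -> dict:
    tokens_used = 0
    for attempt in range(1, max_retries + 1):
        result = task(attempt)           # task reports its own token usage
        tokens_used += result["tokens"]
        if tokens_used > token_cap:
            return {"status": "halted", "reason": "token cap exceeded"}
        if result["ok"]:
            return {"status": "done", "attempts": attempt}
    return {"status": "failed", "reason": "retry budget exhausted"}

# A task that fails once, then succeeds on the second attempt
def flaky(attempt: int) -> dict:
    return {"ok": attempt >= 2, "tokens": 100_000}

outcome = run_with_guardrails(flaky)
print(outcome)  # {'status': 'done', 'attempts': 2}
```

Either budget can terminate the loop; whichever trips first wins. That is what keeps the platform's incentives aligned with yours.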
3. Zero-trust execution security requires it.
Our runner containers execute untrusted code from your repositories. Build scripts, dependency installers, CI pipelines. These are not environments you want to have API credentials in. By managing keys at the orchestration layer and never injecting them into isolated runner containers, we eliminate a major attack surface. Even in the event of a container breakout or remote code execution in a build step, an attacker cannot exfiltrate our LLM credentials. Your Claude API key, if you had one in this model, would be at risk. PermaShip’s is not, because it's never there.
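The pattern reduces to a simple invariant: credentials are attached outside the sandbox, never inside it. Here is a toy sketch of that separation; the function names and the `LLM_API_KEY` variable are hypothetical stand-ins, not PermaShip's real interfaces.

```python
import os

def runner_request(prompt: str) -> dict:
    # Runs inside the sandbox. The environment carries no LLM credential,
    # only the job payload. (Hypothetical variable name for illustration.)
    assert "LLM_API_KEY" not in os.environ
    return {"prompt": prompt}

def orchestrator_forward(payload: dict, api_key: str) -> dict:
    # Runs outside the sandbox, at the orchestration layer. Credentials
    # are attached here, and only here, before the call goes upstream.
    return {"headers": {"Authorization": f"Bearer {api_key}"}, "body": payload}
```

Because the key never enters the runner's environment, a container breakout finds nothing to exfiltrate.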
Nexus OSS: A Note for Self-Hosters
Nexus, the executive agent that powers PermaShip's decision layer, is open source. When you self-host Nexus, you configure your own execution backend and bring your own LLM credentials.
If you're running Nexus with the Claude Code backend and authenticating through your Claude subscription, today's policy change applies to you. You're running a harness, by Anthropic's definition.
The Gemini backend, which is Nexus's default, is unaffected. Gemini credentials are separate from your Claude subscription and not subject to this policy.
If you're a Nexus OSS user who configured the Claude Code backend with your subscription credentials, we'd recommend switching to the Gemini backend or the OpenAI backend, or moving to pay-as-you-go API access through Anthropic's API tier. The PermaShip backend is also available, which routes execution through PermaShip Control's centralized key infrastructure and eliminates the credential management question entirely.
What This Moment Reveals About the Category
Autonomous engineering is not a feature. It is not a plugin you add to an existing tool and run cheaply on a consumer subscription. It is vital infrastructure.
Tools that were built as thin wrappers around user credentials were always going to hit this wall. The compute demands of running agents continuously against real codebases do not fit the economics of a personal subscription account. That was always true. Today's announcement just makes it visible.
The tools that survive this are the ones that never needed the shortcut.
If you're evaluating what comes next, the checklist is: centralized credential management, compute metering, hard caps, security isolation, and financial accountability at the platform layer. That is what PermaShip was built to be.
If You're Evaluating Alternatives
PermaShip is an autonomous engineering platform governed by Nexus. A roster of domain-specialized agents runs continuously on your codebase across security, reliability, test coverage, performance, and product direction. Nexus, the executive agent, evaluates every proposal before anything enters the execution pipeline. Nothing ships unless it passes Nexus's judgment: the right work, at the right time, for the right reason.
You start in supervised mode. Every proposal surfaces for your review before anything merges. You set the autonomy level. The platform runs on our infrastructure, on our keys, with our cost controls.
You never have to worry about a policy change at Anthropic affecting whether your engineering platform keeps running.
Get started with PermaShip or explore Nexus OSS on GitHub.