
A growing wave of developers and heavy AI users say Anthropic’s flagship Claude models aren’t performing the way they used to — and they’re taking that frustration public.
Over recent weeks, users of Claude Opus 4.6 and Claude Code have turned to GitHub, X and Reddit to allege that the systems feel noticeably weaker, less consistent and more wasteful with tokens than they did only weeks earlier. The emerging narrative: Claude is losing its edge just as demand for powerful AI coding assistants is accelerating.
Complaints of ‘AI shrinkflation’ and throttling
The criticism centers on several specific changes that users say they are seeing in day-to-day work:
- Reduced ability to maintain strong, sustained reasoning over longer tasks
- A higher tendency to drop or abandon tasks midway through a response
- More frequent hallucinations or self-contradictory answers
- Increased, and to some users unnecessary, token usage
These claims have been amplified through high-visibility posts across social platforms and issue threads, including on Anthropic’s own GitHub repository for Claude Code. A dedicated Reddit megathread on Claude performance and bugs has also become a focal point for ongoing reports and discussion.
Some users have described the situation as a form of “AI shrinkflation” — arguing that they are effectively paying the same price for a product they perceive as weaker. Others speculate that Anthropic might be throttling or tuning Claude models downward during periods of heavy demand to stay within compute constraints.
So far, these more serious accusations remain unproven. They also run counter to public statements from Anthropic employees, who have denied that the company deliberately degrades models to manage capacity.
What Anthropic has acknowledged are recent changes in how its systems are configured and made available. According to the company, it has adjusted usage limits and reasoning defaults in recent weeks. While details of those changes are not laid out publicly in full, the admission alone has added fuel to an already heated debate: users are asking whether configuration shifts, rather than model quality itself, might be driving the perceived drop in performance.
That tension goes beyond any single model release. Power users increasingly want clarity on:
- How much reasoning capability is actually enabled by default at a given time
- How context windows are handled and whether that behaviour has recently changed
- What kinds of throttling or rate-limiting might kick in under heavy load
- Which inference parameters are being tuned behind the scenes
- How benchmark scores are produced and updated over time
VentureBeat reports that it asked Anthropic whether any recent changes to reasoning defaults, context handling, throttling behaviour, inference parameters or benchmark methodology could help explain the spike in user complaints. It also requested an explanation of recent benchmark-related claims and asked whether Anthropic plans to release additional data that might reassure customers.
According to VentureBeat, an Anthropic spokesperson did not address these questions.
That silence leaves the broader conversation playing out largely in public forums, where individual anecdotes and side-by-side comparisons are driving perceptions of model quality. With no fresh benchmark data or detailed technical explanations from Anthropic addressing the current controversy, users are left to infer whether they are seeing genuine regression, side effects of new defaults, or normal variance now being scrutinized more intensely.
For Anthropic, which positions Claude as a premium AI assistant and coding companion, the controversy highlights how quickly trust can become a competitive differentiator. Developers and enterprises building workflows on top of these systems increasingly want assurance not only about baseline model quality, but also about stability over time and transparency when underlying behaviour changes.