Anthropic Faces User Backlash Over Alleged ‘Nerfing’ of Claude Models

By TechBooky
April 14, 2026
in Artificial Intelligence

A growing wave of developers and heavy AI users say Anthropic’s flagship Claude models aren’t performing the way they used to — and they’re taking that frustration public.

Over recent weeks, users of Claude Opus 4.6 and Claude Code have turned to GitHub, X and Reddit to allege that the systems feel noticeably weaker, less consistent and more wasteful with tokens than they did only weeks earlier. The emerging narrative: Claude is losing its edge just as demand for powerful AI coding assistants is accelerating.

Anthropic's Boris Cherny pushed back publicly on the suggestion that the company had quietly downgraded reasoning defaults:

"This is false. We defaulted to medium as a result of user feedback about Claude using too many tokens. When we made the change, we (1) included it in the changelog and (2) showed a dialog when you opened Claude Code so you could choose to opt out. Literally nothing sneaky about…"

— Boris Cherny (@bcherny) April 12, 2026

Complaints of ‘AI shrinkflation’ and throttling

The criticism centers on several specific changes that users say they are seeing in day-to-day work:

  • Reduced ability to maintain strong, sustained reasoning over longer tasks
  • A higher tendency to drop or abandon tasks midway through a response
  • More frequent hallucinations or self-contradictory answers
  • Increased, and to some users unnecessary, token usage

These claims have been amplified through high-visibility posts across social platforms and issue threads, including on Anthropic’s own GitHub repository for Claude Code. A dedicated Reddit megathread on Claude performance and bugs has also become a focal point for ongoing reports and discussion.

Some users have described the situation as a form of “AI shrinkflation” — arguing that they are effectively paying the same price for a product they perceive as weaker. Others speculate that Anthropic might be throttling or tuning Claude models downward during periods of heavy demand, in order to stay within compute constraints.

So far, these more serious accusations remain unproven. They also run counter to public statements from Anthropic employees, who have denied that the company deliberately degrades models to manage capacity.

What Anthropic has acknowledged is a set of recent changes to how its systems are configured and made available. According to the company, it has adjusted usage limits and reasoning defaults in recent weeks. While the full details of those changes have not been laid out publicly, the admission alone has added fuel to an already heated debate: users are asking whether configuration shifts, rather than model quality itself, might be driving the perceived drop in performance.

That tension goes beyond any single model release. Power users increasingly want clarity on:

  • How much reasoning capability is actually enabled by default at a given time
  • How context windows are handled and whether that behaviour has recently changed
  • What kinds of throttling or rate-limiting might kick in under heavy load
  • Which inference parameters are being tuned behind the scenes
  • How benchmark scores are produced and updated over time

In this case, VentureBeat reports that it has asked Anthropic whether any such recent changes to reasoning defaults, context handling, throttling behaviour, inference parameters or benchmark methodology could help explain the spike in user complaints. It has also requested an explanation of recent benchmark-related claims and asked if Anthropic plans to release additional data that might reassure customers.

According to VentureBeat, an Anthropic spokesperson did not address these questions.

That silence leaves the broader conversation playing out largely in public forums, where individual anecdotes and side-by-side comparisons are driving perceptions of model quality. With no fresh benchmark data or detailed technical explanations from Anthropic addressing the current controversy, users are left to infer whether they are seeing genuine regression, side effects of new defaults, or normal variance now being scrutinized more intensely.

For Anthropic, which positions Claude as a premium AI assistant and coding companion, the controversy highlights how quickly trust can become a competitive differentiator. Developers and enterprises building workflows on top of these systems increasingly want assurance not only about baseline model quality, but also about stability over time and transparency when underlying behaviour changes.

Tags: AI models, Anthropic, Claude