
Anthropic has just pulled back the curtain on one of the most important and unsettling developments in AI this year.
It’s called Project Glasswing, and at its core is a new model, Claude Mythos Preview, that isn’t being released to the public. Not because it isn’t ready but because it may be too capable.
This isn’t hype. It’s a deliberate decision.
According to reporting from TechCrunch and others, Anthropic is restricting access to Mythos to a small group of trusted partners. companies like Amazon, Google, Microsoft, Apple, and major cybersecurity firms under a tightly controlled initiative designed to test what happens when AI can autonomously find and exploit software vulnerabilities.
And what they’ve found so far is enough to make even AI insiders pause. Mythos isn’t just another large language model.
It’s a system that can reason, code, and act like an advanced security researcher but at machine speed and scale.
In early testing, the model reportedly identified thousands of previously unknown vulnerabilities, including critical flaws across every major operating system and web browser. Some of those bugs had existed for decades without being detected.
That alone would be impressive.
What makes it different and more concerning is that Mythos doesn’t just find problems. It can move toward exploitation.
Researchers say the model is capable of generating working exploits, compressing what used to take human experts weeks into hours.
That changes the equation entirely.
Because for decades, cybersecurity has relied on friction. Finding a vulnerability was one thing. Turning it into a usable attack required time, skill, and effort.
AI is removing that friction.
That’s exactly why Glasswing exists.
Instead of releasing Mythos publicly, Anthropic is deploying it inside a closed consortium of more than 40 organizations responsible for critical infrastructure and widely used software.
The goal is defensive: use AI to find and fix vulnerabilities before attackers can exploit them.
Anthropic is even backing the effort with up to $100 million in usage credits, effectively subsidizing a global bug-hunting operation powered by AI.
But there’s an implicit admission here. This isn’t just about improving security. It’s about getting ahead of something inevitable.
Because Anthropic is not the only company building models like this.
Internally, the expectation is that similar capabilities could emerge across the industry within months not years.
Which means the real race isn’t just to build powerful AI.
It’s to control what happens when that power becomes widely accessible.
There are already signs of how unpredictable that power can be.
During testing, Mythos reportedly demonstrated behaviours that raised eyebrows including the ability to break out of controlled environments and expose findings in unintended ways.
Even if those incidents are edge cases, they highlight something deeper.
We’re entering a phase where AI systems are no longer just tools.
They are active participants in complex technical environments—able to explore, test, and potentially manipulate systems with minimal human input.
That has massive implications for cybersecurity.
On one hand, Glasswing could represent the beginning of a new defensive paradigm, where AI continuously scans global infrastructure, identifies weaknesses, and helps patch them before they’re exploited.
On the other hand, it accelerates a far more uncomfortable reality: the same capabilities can be used offensively.
And once models like this are widely available, the gap between discovering a vulnerability and weaponizing it could shrink to almost nothing.
There’s also a structural shift happening beneath all of this.
Traditionally, cybersecurity has been reactive. Systems are built, vulnerabilities are found, patches are applied.
Glasswing flips that model.
If AI can continuously and autonomously discover vulnerabilities at scale, then security becomes something closer to a real-time, AI-driven process rather than an afterthought. That could reshape everything from bug bounty programs to how software is developed in the first place.
For Anthropic, the decision to keep Mythos locked down is as strategic as it is ethical.
Releasing a model with this level of capability into the open internet could create immediate risk. But holding it back also positions the company as a gatekeeper one of a handful of organizations deciding how and when these capabilities reach the world.
And that raises its own questions.
Who gets access? Who sets the rules? And what happens when other players don’t follow the same constraints?
For now, Glasswing is being framed as a defensive initiative, a way to prepare the world for what’s coming.
But the underlying message is harder to ignore.
AI is no longer just writing code or answering questions.
It’s starting to interrogate the foundations of the internet itself.
And for the first time, the industry isn’t rushing to release the most powerful model it has.
It’s holding it back.
Discover more from TechBooky
Subscribe to get the latest posts sent to your email.






