
Anthropic is investigating a report that an unknown group has gained unauthorized access to Claude Mythos, the experimental AI model the company itself has described as too dangerous to release.
In a statement to Bloomberg, an Anthropic spokesperson said the company is “investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments.” The company did not name the vendor or describe the technical details of the incident.
According to Bloomberg’s account, the claim comes from an anonymous source who said they are part of a group that obtained the model through their role “as a worker at a third-party contractor for Anthropic.” The source reportedly told Bloomberg that the group used “commonly used internet sleuthing tools often employed by cybersecurity researchers” to gain some form of access to the Mythos model.
Bloomberg reportedly corroborated aspects of the claim by viewing a live demo and screenshots provided by the source, which purported to show interaction with Claude Mythos. The group behind the reported access has not been named, and Bloomberg’s description of both the people involved and the specific technical pathway into Anthropic’s systems is deliberately vague.
The source framed the group’s motives as experimental rather than malicious, telling Bloomberg they are “interested in playing around with new models, not wreaking havoc with them.” That characterization, however, comes solely from the unnamed individual speaking to the publication; Anthropic has not publicly commented on the intent or identity of those allegedly involved.
Claude Mythos has been described by Anthropic as too dangerous to release and framed as one of the most concerning AI systems currently in development. The company has positioned Mythos as powerful enough that it must remain tightly controlled, and, according to the source material, “a whole lot of powerful institutions seem to believe it.”
The emerging picture, based on the limited information shared so far, is that:
- Anthropic maintains a restricted preview of Claude Mythos.
- Access to that preview appears to rely in part on third-party vendor environments and contractors.
- An unnamed individual claims to have used their contractor role and common online investigation tools to gain unauthorized access.
- Bloomberg reports having seen evidence of live interaction with the model, supplied by this individual.
- Anthropic has acknowledged receiving a report and says it is investigating.
The situation raises obvious questions about how much protection can realistically surround AI systems that are marketed as exceptionally dangerous or sensitive, especially when those systems depend on complex chains of partners and vendors. The account reported by Bloomberg suggests that whatever controls Anthropic has put in place around Mythos may be vulnerable if someone inside a contractor organization is willing to probe for weaknesses using readily available tools.
The broader context around Mythos also appears to be intensifying. The source material references “the government’s sudden interest in Mythos” and the possibility that Anthropic’s model “might soon be deployed across the federal government,” though it does not provide details about specific agencies, timelines, or programs. It also notes that for years, “dangerous rhetoric” around these technologies has been “out of control,” and that tensions are “turning violent,” without tying those developments to particular incidents.
At the same time, Anthropic is described as promoting a new release as “less broadly capable” than other options, a “bold strategy” in the current AI race. That positioning underscores a growing split between models that are marketed for broad use and those that companies say must remain behind closed doors due to potential harms. Mythos sits firmly in the latter category.
The report also lands against a wider backdrop in which technology companies, governments, and health and fitness platforms are expanding the collection and analysis of intimate user data. As one line in the source material puts it: if “every step taken, hour slept, and heartbeat can be measured, recorded, and analysed,” it is not clear whether this represents greater control for individuals or a loss of control as systems become more intrusive and opaque. Though that reflection is not specifically about Anthropic, it echoes the core tension in the Mythos story: powerful systems, limited transparency, and the challenge of ensuring that access and usage stay within agreed boundaries.
For now, the known facts are narrow. Anthropic confirms only that it is looking into a report of unauthorized access to Claude Mythos Preview through a third-party vendor environment. Bloomberg’s reporting, based on an unnamed source who claims to be part of the group that accessed the system, suggests that the model may already be in the hands of at least one unsanctioned group that says it is “playing around with new models.” The exact scope of that access, how long it has been going on, and what, if any, remedial steps Anthropic or its partners have taken have not been publicly disclosed.