
OpenAI has unveiled “Daybreak,” a new initiative that CEO Sam Altman says is designed to “accelerate cyber defense and continuously secure software.” The move places OpenAI in direct competition with Anthropic’s recently announced Project Glasswing, with both companies pitching their most advanced AI models as tools for finding and fixing security vulnerabilities.
Altman framed the effort as a broad, collaborative push with industry, saying AI is already strong at cybersecurity and “about to get super good,” and that OpenAI wants to start working with “as many companies as possible” to help them continuously secure their systems.
While OpenAI’s announcement doesn’t use the word “project,” Daybreak closely resembles Anthropic’s Project Glasswing in structure and ambition. In both cases, a leading AI firm is seeking partnerships with corporate and government organizations to uncover security weaknesses using frontier models, with the shared goal of “seeing risk earlier, acting sooner, and helping make software resilient by design.”
Anthropic introduced Project Glasswing alongside its Claude Mythos Preview model, which the company has described as so capable in cybersecurity that it chose not to release it broadly. Anthropic’s own system card for Claude Mythos Preview notes that the model’s “large increase in capabilities” led the company to keep it from general availability and instead deploy it only within a defensive cybersecurity program for a limited set of partners. Anthropic calls Claude Mythos Preview “the most cyber-capable model” it has ever built, effectively locking it away for everyone except select organizations. Software developer Daniel Stenberg has described this rollout as an “amazingly successful marketing stunt for sure.”
Reports of a similar OpenAI effort surfaced shortly after Anthropic’s announcement. An anonymously sourced Axios story described OpenAI’s work as a “product with advanced cybersecurity capabilities” slated for release to a small set of partners. Daybreak now appears to be the public face of that effort, but with a significantly more open posture than Anthropic’s tightly controlled Glasswing program.
OpenAI’s Daybreak launch emphasizes accessibility rather than secrecy. The official page presents two prominent calls to action: “Request a vulnerability scan” and “Contact sales.” The vulnerability scan button leads to a short, straightforward form, reinforcing the idea that OpenAI wants a wide pool of organizations to engage with the service rather than a select, invitation-only group.
According to the announcement, Daybreak relies on Codex Security, which OpenAI previously introduced as a research preview in March. Codex Security is used to generate a “threat model” for a given system, mapping out how that system operates, who is trusted within it, and what weaknesses follow from those trust relationships and functions. Once that context is established, the system then examines the actual codebase to identify potential real-world exploits based on the threat model.
This approach positions Daybreak as both an architectural analysis tool and a code-level vulnerability hunter, combining high-level understanding of how software is structured and used with deeper inspection of the implementation details. The intention, as framed by OpenAI, is to help organizations detect and remediate risk earlier in the development and deployment lifecycle, moving closer to the ideal of software that is “resilient by design.”
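OpenAI has not published Codex Security’s interface, so the two-stage pattern described above can only be illustrated schematically. The toy sketch below (all names and heuristics are hypothetical, not OpenAI’s API) shows the general shape: first derive a threat model that records who is trusted, then scan code for paths where untrusted input reaches a dangerous sink.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """Toy threat model: which actors are trusted, and which are not."""
    trusted_actors: set = field(default_factory=set)
    untrusted_inputs: set = field(default_factory=set)

def build_threat_model(system_description: dict) -> ThreatModel:
    """Stage 1: derive a threat model from a structural description
    of the system (here, just a map of actor name -> trusted?)."""
    tm = ThreatModel()
    for actor, trusted in system_description["actors"].items():
        (tm.trusted_actors if trusted else tm.untrusted_inputs).add(actor)
    return tm

def scan_code(tm: ThreatModel, codebase: dict) -> list:
    """Stage 2: flag files where data tied to an untrusted actor
    appears alongside a dangerous sink (a crude stand-in for the
    model-driven code analysis described in the announcement)."""
    DANGEROUS_SINKS = ("eval(", "os.system(", "subprocess.call(")
    findings = []
    for path, source in codebase.items():
        for actor in tm.untrusted_inputs:
            if actor in source and any(s in source for s in DANGEROUS_SINKS):
                findings.append((path, actor))
    return findings

# Usage: an untrusted "web_user" value flowing near os.system() is flagged.
description = {"actors": {"admin": True, "web_user": False}}
codebase = {"handler.py": "cmd = request_from_web_user\nos.system(cmd)"}
model = build_threat_model(description)
findings = scan_code(model, codebase)  # → [('handler.py', 'web_user')]
```

A real frontier-model pipeline would reason about trust relationships and exploitability in natural language rather than via string matching; the sketch only fixes the ordering the announcement describes: context first, code inspection second.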
Where Anthropic’s Project Glasswing has been associated with behind-the-scenes consultations with governments and reported concern among them, OpenAI’s messaging around Daybreak leans toward openness and scale. The contrast in tone is notable: Anthropic’s rollout, tied to claims about the danger of its Claude Mythos Preview model, has struck some observers as ominous and exclusive, while Daybreak is pitched as a widely available service that companies can opt into via a simple form and a sales contact.
Both initiatives, however, rest on the same core idea: that cutting-edge AI systems can be highly effective at discovering, modeling, and exploiting vulnerabilities, and that those capabilities should, at least for now, be channeled into defensive programs overseen by the companies that built the models. The tension between demonstrating power and limiting access is evident in Anthropic’s restricted deployment of Claude Mythos Preview and in OpenAI’s decision to frame Daybreak as a collaborative security effort rather than a broadly unleashed hacking tool.
