Anthropic's Mythos sparks governance fears over cyber risk
Anthropic has drawn fresh scrutiny over its Claude Mythos cyber security model and the Project Glasswing partner programme, with industry experts warning that the technology turns cyber risk from a technical challenge into a governance issue for many organisations.
The debate follows the leak of Anthropic's Claude Mythos Preview model and the company's decision to restrict access through Project Glasswing. The initiative gives early access to about 40 cyber security partners, including major cloud providers, security vendors and technology firms.
Anthropic has described Mythos as too powerful for public release because attackers could use it to identify and exploit complex, previously unknown software flaws. Project Glasswing is intended to test the model under tight controls with specialist teams.
Security practitioners are now questioning how businesses without direct access to Mythos or similar tools will cope as artificial intelligence accelerates software vulnerability discovery. They also warn that leaked models or unauthorised access could erode any safety buffer created by controlled programmes.
Julian Totzek-Hallhuber, Senior Solutions Architect at Veracode, said the emerging class of AI "hacking" tools creates opportunities for defenders who can use them in controlled environments. But he warned that the same tools compress the time between the discovery of a flaw and its potential exploitation.
Totzek-Hallhuber said, "There may well be an opportunity for Claude Mythos AI to be net positive for defenders, but that can't cloud awareness of the risks associated with an AI hacking tool, which remain very real. Project Glasswing is about connecting vulnerabilities into far more complex attack paths in a fraction of the time it used to take and, in some cases, that's already surfacing issues that have been missed for years. This shows just how quickly risk can build. Our own research recently revealed it takes organisations more than five months on average to fix vulnerabilities, so the ability to uncover and potentially exploit those at speed could significantly shift the risk landscape."
He continued, "But most organisations can't actually use this yet because access is restricted to a curated set of launch partners, though today's reports of unauthorised access highlight how difficult it can be to keep these capabilities contained. So while the results are impressive, they are hard to test or validate in real environments. There are also early signals that shouldn't be overlooked, including reports of the model stepping outside its expected boundaries, such as attempting to communicate externally without authorisation."
Totzek-Hallhuber added that most organisations still struggle with the basics of vulnerability management and remediation at scale. The concern is that AI systems such as Mythos will map and chain flaws faster than internal teams can patch them.
Anthropic's Glasswing partner roster includes large technology and security vendors. These companies already run extensive bug-hunting and red-teaming programmes and are now testing how Mythos can augment those activities.
Smaller organisations sit outside that controlled circle but face the same broader shift in risk. If AI-assisted attackers start using leaked or replicated models, the advantage in discovering vulnerabilities could quickly move to the offensive side.
Richard Marcus, Chief Information Security Officer at Optro, said Mythos exposes the gap between how fast AI can surface weaknesses and how fast most organisations can respond.
"Mythos has exposed a problem many businesses are not built for: AI can now find weaknesses faster than they can fix them. Those vulnerabilities were already there, but what has changed is the speed at which they can be discovered and the pressure that puts on teams and their supply chains to assess, prioritise and respond," Marcus said.
He said the issue is a governance challenge that extends beyond security teams to boardrooms and executive committees.
"At a time when companies are already dealing with a steady drumbeat of serious cyber attacks, that stops being just a security issue and becomes a governance one too. Unknown risk is still accepted risk, whether a business realises it or not. What AI is starting to expose is not just technical debt, but gaps in how organisations decide what matters most, who owns the response and how quickly they can act when discovery starts to outpace remediation," Marcus said.
Security leaders now face a dilemma: they must prepare for a world in which both attackers and defenders wield powerful AI tools, even as access to some of the most advanced systems remains limited and uneven.
Totzek-Hallhuber said models such as Mythos do not replace fundamental security practices. Organisations still need structured governance, clear processes and skilled staff to address flaws methodically.
"Crucially, Mythos only addresses vulnerability discovery and doesn't cancel out the need for a strong security programme that covers the fundamentals. Teams still need the governance, process and expertise to fix things properly and reduce risk over time. What it does change is the pace and the pressure. As these capabilities become more widely available, both attackers and defenders will be working with much more powerful tools, and organisations need to be thinking about that now," he said.