IT Brief Asia - Technology news for CIOs & IT decision-makers
Flux result 1449a80a d271 47ab a1ef 916b32f14374

AI vulnerability discovery forces boards to rethink cyber risk

Tue, 21st Apr 2026 (Yesterday)

Cybersecurity leaders warn that new artificial intelligence models with automated vulnerability discovery are reshaping corporate cyber risk and forcing boards to reassess long-held assumptions about software security and system design.

The concerns follow recent disclosures about Anthropic's Claude Mythos and OpenAI's GPT-5.4 cyber-focused model, as well as growing scrutiny of Anthropic's Model Context Protocol (MCP).

Mike Maddison, chief executive of NCC Group, said recent discussions with boards and executives showed a rapid shift in how they view AI-linked risk. Clients are focusing in particular on models that can identify, chain and exploit software flaws without human intervention.

He called the latest details on Claude Mythos a notable inflection point. "Anthropic's announcement of the latest capabilities of Claude Mythos suggests a step-change in the cyber risk landscape. We've seen clear evidence that AI can identify, chain and exploit zero-day vulnerabilities across major operating systems and browsers. Modern AI models can now analyse source code, reason about complex interactions, and uncover exploitable vulnerabilities at great speed."

In his view, that shift undermines several long-standing assumptions: that code written decades ago and assumed stable is now potentially exploitable by AI; that vulnerability discovery is no longer constrained by human review cycles; that responsible disclosure timelines designed around human research no longer reflect reality; and that the accepted risk window to address vulnerabilities has shrunk, making periodic patch cycles insufficient.

Maddison noted that Anthropic has restricted Mythos through a controlled programme, but said that does not alter the wider direction of travel. "Vulnerability discovery is accelerating, attack surfaces are becoming wider and more visible, and for security leaders patching at scale is about to get a lot harder. This has implications for system design, organisational resilience, software development and therefore risk management. It is likely particularly acute in critical national infrastructure where stability and availability has been the dominant consideration."

Chief executives are already looking for concrete answers on exposure and accountability, he said, asking questions such as where they are exposed because of legacy code and technical debt, how to reduce the time between discovery and remediation, and how to explain defensibility to boards, regulators and insurers in an AI-accelerated world.

He urged boards to move from point-in-time testing to continuous, AI-augmented assurance approaches combining automated discovery and increasingly automated remediation. He also called on organisations to reframe supply chain assurance beyond compliance checklists, and to get ahead of regulatory and insurance expectations. "Assurance that may have been enough for insurance yesterday isn't going to cut it anymore."

Maddison also highlighted digital sovereignty and public-sector oversight as pressure points, especially regarding critical infrastructure, and said organisations should begin red-teaming with AI-augmented adversary simulation now. "Simulations from even six or 12 months ago may already be irrelevant."

Despite the escalating rhetoric, he cautioned against overreaction. "It is important to take this potential 'paradigm shift' seriously, but equally not to overreact to the hype. Many of the major breaches we have seen in recent years involved compromising humans through social engineering, not finding zero-day vulnerabilities. Doing the basics is the most important defence and can be done right now."

Specialists at identity security firm Semperis echoed that view, describing AI as a "double-edged sword" that strengthens both attack and defence.

Eric Woodruff, chief identity architect at Semperis, said opinion across the security sector remains split on the immediate impact of frontier models such as Mythos and OpenAI's GPT-5.4-Cyber. "Currently, most of what I am reading within the industry is polarised: it's either 'this is all marketing hype' or 'this is a catastrophe in the making.' For me, it's neither of those - yet."

Woodruff said threat actors will eventually gain access to similar tools and linked that prospect to existing routes for compromising identity infrastructure, such as Active Directory. "Most identity compromises today still come down to boring but deadly basics: misconfigurations, poor hygiene, and organisations not taking fundamental security seriously. What I'm curious about is whether these new models will be any good at mapping and exploring identity attack paths - or even deriving novel attack paths - from the perspective of an organisation's identity system architecture. That's where things could get very interesting, very fast."

Guido Grillenmeier, principal technologist for EMEA at Semperis, noted that US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with Wall Street leaders after being briefed on Mythos's capabilities. He warned that similar tools will emerge from other providers. "These models do not only create new code well; they also understand existing code, can find vulnerabilities autonomously and even write the proper exploits to attack unpatched systems, often focusing on routines responsible for handling user authentication. Once you breach authentication, you can take over the entire system. This logic hasn't changed - just the speed of finding new vulnerabilities has increased dramatically."

Sarah Cecchetti, director of product management at Semperis, said the most critical assets in this environment remain identity and access systems. "Modern models are good at coding, and they're good at reasoning, and when they do a really good job of reasoning about code, you find the places where some code paths fail to follow the intent of their programmers and wind up causing outages or breaches."

She argued the right response is to direct new AI security budgets toward identity solutions that have long been underfunded. "We know what the most critical systems are - everything that touches identity and access management. That's where entry happens, that's where escalation and lateral movement happen, and those systems are involved in a vast majority of breaches. Is it bad news that attackers are going to use Mythos to find a bunch of holes we didn't know about? Sure. But the good news is that we know exactly what to do to defend against attacks and recover if an attack succeeds."

Alongside concerns about vulnerability discovery, new research has also raised questions about architectural weaknesses in Anthropic's Model Context Protocol, designed to connect AI agents to external data and tools across enterprise environments.

James Wickett, chief executive of DryRun Security, said the core issue is not a single exploitable bug but a design pattern that weakens established security boundaries. "We are allowing untrusted input to flow through a model and directly influence real system actions, collapsing a boundary that the cybersecurity industry has spent decades trying to enforce. MCP blurs that line in a way that feels efficient, but quietly removes one of the core safety constraints that made modern systems defensible. When you blur the line between influence and execution, you create a class of context-based risk that does not show up in isolated testing and cannot be addressed with static controls."