IT Brief Asia - Technology news for CIOs & IT decision-makers

Gartner warns misconfigured AI could halt G20 power

Fri, 13th Feb 2026

Gartner forecasts that a misconfigured artificial intelligence system inside cyber-physical systems could shut down national critical infrastructure in a G20 country by 2028. The warning raises new questions about how governments and operators manage safety controls in increasingly automated environments.

Misconfigured AI systems can misread sensor data, trigger unsafe actions, or shut down vital services without human direction, Gartner said. Those failures could cause physical damage and widespread service disruption, threatening public safety and economic stability if control of systems such as power grids and manufacturing plants is compromised.

Cyber-physical systems combine sensing, computing, control, networking and analytics with direct interaction in the physical world. They span operational technology, industrial control systems, industrial automation and control systems, the industrial internet of things, robots and drones, and what is often described as Industrie 4.0.

Critical infrastructure operators have long planned for cyberattacks, equipment failure and natural disasters, but Gartner's warning shifts attention to configuration and change control in AI-driven environments. AI is now used for tasks close to physical processes, including industrial optimisation, predictive control and automated safety decisions. A configuration error or unexpected model behaviour can quickly move from software into the real world.

Gartner attributes the risk to complexity, opacity and the speed at which updates and configuration changes can be applied across connected systems. It also argues that failures may stem from internal decisions rather than malicious interference.

"The next great infrastructure failure may not be caused by hackers or natural disasters but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal," said Wam Voster, VP Analyst at Gartner. "A secure 'kill-switch' or override mode accessible only to authorised operators is essential for safeguarding national infrastructure from unintended shutdowns caused by an AI misconfiguration."

Override control

The recommendation centres on what Gartner calls a safe override mode for AI systems in critical infrastructure. In practice, this means an explicit mechanism that allows authorised operators to take control, stop automated actions, or place systems into a known safe state when behaviour deviates from expectations.
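The article does not describe how such a mechanism would be built, but the idea can be illustrated with a minimal sketch: a gate that sits between an AI controller and the physical actuators, where only operator IDs on an authorised list can engage an override that holds the system in a known safe state. All names here (`OverrideGate`, `engage_override`, the safe-state value) are hypothetical illustrations, not Gartner's specification.

```python
from enum import Enum

class Mode(Enum):
    AUTOMATED = "automated"
    SAFE_OVERRIDE = "safe_override"

class OverrideGate:
    """Hypothetical gate between an AI controller and physical actuators.

    While in SAFE_OVERRIDE, AI-issued commands are ignored and the
    system holds a predefined safe state; only authorised operator
    IDs may toggle the mode.
    """

    def __init__(self, authorised_operators, safe_state):
        self.authorised = set(authorised_operators)
        self.safe_state = safe_state
        self.mode = Mode.AUTOMATED

    def engage_override(self, operator_id):
        """Place the system in its known safe state (authorised operators only)."""
        if operator_id not in self.authorised:
            raise PermissionError("operator not authorised for override")
        self.mode = Mode.SAFE_OVERRIDE
        return self.safe_state

    def release_override(self, operator_id):
        """Return control to the AI (authorised operators only)."""
        if operator_id not in self.authorised:
            raise PermissionError("operator not authorised for override")
        self.mode = Mode.AUTOMATED

    def apply(self, ai_command):
        """Forward an AI command only while automation is active."""
        if self.mode is Mode.SAFE_OVERRIDE:
            return self.safe_state  # drop the AI output, hold the safe state
        return ai_command
```

The key design point, in line with Gartner's recommendation, is that the override path is independent of the AI itself: engaging it never consults the model, so a misbehaving model cannot block the intervention.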

The approach mirrors long-standing principles in industrial safety engineering, where manual intervention and fail-safe designs are built into automated plants. Gartner argues the difference is that advanced AI models can change system behaviour in ways that are harder to predict through traditional testing and documentation.

"Modern AI models are so complex they often resemble 'black boxes,'" Voster said. "Even developers cannot always predict how small configuration changes will impact the emergent behaviour of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed."

The warning comes as national infrastructure operators adopt AI for grid management, transport optimisation and industrial operations. Many deployments sit alongside legacy operational technology and safety systems designed for deterministic control logic and slower change cycles. Introducing AI can increase dependencies between systems and widen the impact of configuration changes.

Power grid example

Gartner cited modern power networks as an example where misconfiguration could trigger rapid disruption. Grid operators use AI for real-time balancing between generation and consumption, with models influencing decisions about isolation, load shedding and other actions that affect large populations.

In Gartner's scenario, a misconfigured predictive model could interpret demand patterns as instability and trigger unnecessary grid isolation or load shedding across regions, or even entire countries. Those actions could cascade into additional failures if they disrupt industrial processes, communications networks, or health and emergency services.
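Voster's "misplaced decimal" makes this scenario concrete. As a purely illustrative sketch (the function, parameter names, and numbers are assumptions, not from Gartner), consider a load-shedding rule keyed to a reserve margin: mistyping a 10% margin of `0.10` as `1.0` collapses the shedding threshold to zero, so normal demand is read as instability.

```python
def should_shed_load(predicted_demand_mw, capacity_mw, margin_fraction):
    """Shed load when forecast demand eats into the reserve margin.

    margin_fraction is the fraction of capacity held in reserve,
    e.g. 0.10 for a 10% margin. A misconfigured value silently
    changes where shedding triggers.
    """
    reserve_threshold_mw = capacity_mw * (1.0 - margin_fraction)
    return predicted_demand_mw > reserve_threshold_mw
```

With a 100 MW capacity and the intended `0.10` margin, an 85 MW forecast stays comfortably below the 90 MW threshold; with the mistyped `1.0`, the threshold is 0 MW and every forecast triggers shedding. Nothing in the code is wrong, only the configuration, which is precisely the failure mode the warning describes.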

Risk mitigation

Gartner said chief information security officers should help address these risks, which sit at the intersection of cybersecurity, safety engineering, operational resilience and governance. It recommended implementing safe override modes across critical-infrastructure cyber-physical systems and limiting their use to authorised operators.

It also recommended using digital twins to test updates and configuration changes before deployment. A full-scale digital twin provides a controlled environment that reflects real-world system behaviour, allowing operators to assess how AI-driven decisions interact with sensors, controllers and network conditions.
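One simple way to use a twin for this purpose, sketched here under assumed names (`safe_to_deploy`, `evaluate_in_twin`, and the toy threshold controller are illustrations, not a Gartner or vendor API), is to replay recorded sensor data through both the current and the candidate configuration and approve the change only if the decisions barely diverge:

```python
def evaluate_in_twin(controller, config, sensor_trace):
    """Replay recorded sensor readings through a controller in the twin."""
    return [controller(reading, config) for reading in sensor_trace]

def safe_to_deploy(controller, current_cfg, candidate_cfg, sensor_trace,
                   max_changed_actions=0):
    """Approve a candidate config only if, on the replayed trace, it changes
    the controller's decisions on no more than max_changed_actions readings."""
    baseline = evaluate_in_twin(controller, current_cfg, sensor_trace)
    candidate = evaluate_in_twin(controller, candidate_cfg, sensor_trace)
    changed = sum(1 for a, b in zip(baseline, candidate) if a != b)
    return changed <= max_changed_actions

# Toy controller: shed load above a configured limit.
threshold_controller = lambda reading, cfg: (
    "shed" if reading > cfg["limit"] else "hold")
```

A small legitimate tweak to the limit leaves the replayed decisions unchanged and passes, while a misplaced decimal in the same parameter flips decisions across the trace and is rejected before it ever reaches production hardware.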

Gartner also called for real-time monitoring and rollback mechanisms for AI changes within cyber-physical systems. That includes visibility into model behaviour, configuration drift and system performance in production. It also recommended creating national AI incident response teams, reflecting the possibility that AI-related failures could cross organisational and sector boundaries during large-scale disruptions.
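The drift-detection and rollback idea can also be sketched briefly. This is a minimal illustration under assumed names (`ConfigHistory`, `drift`, `rollback`), not a description of any specific product: keep a trail of deployed configurations, report which parameters have drifted from the approved baseline, and allow a one-step revert.

```python
import copy

class ConfigHistory:
    """Keep a rollback trail of deployed configurations and detect
    drift from the last approved baseline."""

    def __init__(self, baseline):
        self.baseline = copy.deepcopy(baseline)
        self.history = [copy.deepcopy(baseline)]

    def deploy(self, new_config):
        """Record a configuration as it goes live."""
        self.history.append(copy.deepcopy(new_config))

    def drift(self):
        """Return the parameter names whose live value differs from baseline."""
        live = self.history[-1]
        return {k for k in self.baseline if live.get(k) != self.baseline[k]}

    def rollback(self):
        """Revert to the previous configuration (keeping at least one entry)."""
        if len(self.history) > 1:
            self.history.pop()
        return self.history[-1]
```

In practice the `drift()` check would run continuously against production, so a misconfiguration is flagged, and reversible, within one change cycle rather than after a cascade has begun.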

Gartner analysts are due to discuss related security and risk management issues at a series of industry events across multiple regions later this year, including a stop in London.