IT Brief Asia - Technology news for CIOs & IT decision-makers
Story image

Emerging AI security risks exposed in Pangea's global study

Today

A global study by Pangea has highlighted emerging security weaknesses associated with the fast-paced deployment of AI systems in corporate environments.

The research, which involved Pangea's USD $10,000 Prompt Injection Challenge, analysed almost 330,000 real-world attack attempts submitted by more than 800 participants from 85 countries.

The challenge involved participants attempting to bypass AI security guardrails in three virtual rooms with increasing levels of difficulty in March 2025, generating extensive data on current AI security practices.

The study was prompted by a sharp increase in the adoption of generative AI across numerous sectors, with enterprises using AI-powered applications for interactions involving customers, employees, and sensitive internal systems. The researchers observed that, despite this rapid uptake, specific AI-focused security measures have not kept pace in many organisations, which often rely primarily on default protections provided by AI models themselves.

Pangea's dataset from the challenge revealed several vulnerabilities. A significant finding was the non-deterministic nature of large language model (LLM) security. Prompt injection attacks, a method where attackers manipulate input to provoke undesired responses from AI systems, were found to succeed unpredictably. An attack that fails 99 times could succeed on the 100th attempt with identical input, due to the underlying randomness in LLM processing.

The study also revealed substantial risks of data leakage and adversarial reconnaissance. Attackers using prompt injection can manipulate AI models to disclose sensitive information or contextual details about the environment in which the system operates, such as server types and network access configurations.

'This challenge has given us unprecedented visibility into real-world tactics attackers are using against AI applications today,' said Oliver Friedrichs, Co-Founder and Chief Executive Officer of Pangea. 'The scale and sophistication of attacks we observed reveal the vast and rapidly evolving nature of AI security threats. Defending against these threats must be a core consideration for security teams, not a checkbox or afterthought.'

Findings indicated that basic defences, such as native LLM guardrails, left organisations particularly exposed. The research showed that roughly 1 in 10 prompt injection attempts succeeded against these default protections, while multi-layered defences reduced the rate of successful attacks by significant margins.

Agentic AI, where systems have greater autonomy and direct access to databases or tools, was found to amplify organisational risk. When compromised, such systems could potentially allow attackers to move laterally across networks, increasing the scope for harm.

Joey Melo, a professional penetration tester and the only individual to successfully bypass all three virtual security rooms, spent two days developing a multi-layered strategy that ultimately defeated the single level of defence in room three.

Joe Sullivan, former Chief Security Officer at Cloudflare, Uber and Facebook, commented on the risks highlighted by Pangea's research. 'Prompt injection is especially concerning when attackers can manipulate prompts to extract sensitive or proprietary information from an LLM, especially if the model has access to confidential data via RAG, plugins, or system instructions,' said Sullivan. 'Worse, in autonomous agents or tools connected to APIs, prompt injection can result in the LLM executing unauthorised actions—such as sending emails, modifying files, or initiating financial transactions.'

In response to these findings, Pangea recommended a set of security measures for enterprises deploying AI applications. These include multi-layered guardrails to prevent prompt injection and data leakage, restriction of input languages and permitted operations in high-security environments, continuous red team testing specific to AI vulnerabilities, management of model randomness settings, and allocation of personnel or partners dedicated to tracking prompt injection threats.

Friedrichs emphasised the urgency of the issue in his remarks. 'The industry is not paying enough attention to this risk and is underestimating its impact in many cases, playing a dangerous wait-and-see game. The rate of change and adoption in AI is astounding—moving faster than any technology transformation in the past few decades. With organisations rapidly deploying new AI capabilities and increasing their dependence on these systems for critical operations, the security gap is widening daily. The time to get ahead of these concerns is now.'

Pangea's full research report, 'Defending Against Prompt Injection: Insights from 300K attacks in 30 days,' is publicly available.

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X