IT Brief Asia - Technology news for CIOs & IT decision-makers
Illustration lock opening revealing robotic brain ai security vulnerabilities

Tenable highlights security flaws in OpenAI’s new GPT-5 model

Fri, 15th Aug 2025

Tenable has bypassed security protections in the newly launched GPT-5 model from OpenAI, causing the AI to share detailed instructions on how to assemble a Molotov cocktail.

OpenAI released GPT-5 on 7 August 2025, stating its latest model included improved guardrails to prevent misuse for illegal or harmful activity. However, within a day of release, researchers from Tenable exploited the system's defences using a method called the crescendo technique. By presenting themselves as a history student and gradually escalating their requests over four prompts, they circumvented OpenAI's safeguards and persuaded GPT-5 to provide illicit details.

The incident raises questions about the robustness of new safety features in large language models, especially as organisations increase adoption of such tools without always having robust oversight in place. The Tenable team's findings are the latest in a series of reports from researchers and the public, highlighting similar occurrences of jailbreaking, generating false outputs, and other unexpected behaviours in GPT-5 since its launch.

Discussing the result, Tomer Avni, VP, Product Management at Tenable, said:

"The ease with which we bypassed GPT−5's new safety protocols proves that even the most advanced AI is not foolproof. This creates a significant danger for organisations where these tools are being rapidly adopted by employees, often without oversight. Without proper visibility and governance, businesses are unknowingly exposed to serious security, ethical, and compliance risks. This incident is a clear call for a dedicated AI exposure management strategy to secure every model in use."

OpenAI has responded that fixes to the security protocols are being developed and implemented, but the ease and speed of this jailbreak highlights the inherent difficulties in relying only on embedded safety features within generative AI models. As companies, educational institutions, and a range of other users turn to tools powered by systems such as GPT-5, questions over content governance, compliance with regulation, and liability for misuse remain a significant concern.

The crescendo technique, deployed by Tenable's research team, is a social engineering approach where an attacker begins with seemingly harmless questions before incrementally moving towards more sensitive or restricted enquiries. In the case of GPT-5, the researchers framed their investigation as an academic exploration before requesting the detailed 'recipe' for a Molotov cocktail, a tactic that proved successful despite the model's claimed protective measures.

According to Tenable, these findings indicate that businesses must consider broader risk management frameworks when adopting AI platforms. Relying entirely on providers' in-built controls may be insufficient under evolving threat conditions and regulatory requirements worldwide.

Tenable advocates for external solutions designed to monitor and manage AI activity with greater granularity. The company asserts its own exposure management platform can help organisations maintain visibility over how AI systems are accessed and ensure their usage remains consistent with industry regulations and internal policies.

The report from Tenable underscores the continued vulnerability of AI systems to sophisticated forms of social engineering, even as developers improve the technical architecture and training data of their models. The rapid pace at which tools like GPT-5 are integrated into business workflows means that potential risks can quickly spread throughout an organisation, particularly if not directly supervised by IT or security personnel.

The exposure of this security gap by Tenable has added urgency to the discussion amongst technology leaders, policymakers, and the AI research community about how best to safeguard generative AI models against attempts to elicit dangerous or unlawful content. The company maintains that only by adopting approaches focused on governance and oversight can risks be reduced for enterprises using AI across critical systems and operations.

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X