Study finds 8.5% of GenAI prompts risk data exposure

A recent study by Harmonic Security reveals that 8.5% of prompts entered into generative AI tools potentially expose sensitive data.

The research analysed tens of thousands of user prompts across GenAI platforms including Microsoft Copilot, OpenAI's ChatGPT, Google Gemini, Anthropic's Claude, and Perplexity during the last quarter of 2024. It found that a range of sensitive data types is at risk, depending on how employees use these tools.

Most users engage with GenAI tools for basic tasks such as summarising text, editing blog posts, or creating documentation. However, the study finds that 45.8% of the potentially sensitive prompts involve customer data, including billing and authentication information. A further 26.8% contain employee data, such as payroll details, personally identifiable information (PII), and employment records. Some prompts even involve conducting employee performance reviews through the AI.

Legal and financial data account for 14.9% of the sensitive prompts, encompassing sales pipeline information, investment portfolios, and matters related to mergers and acquisitions. Security-related data makes up 6.9% of the prompts and is of particular concern, as it includes penetration test results, network designs, and incident reports; such disclosures could give malicious actors detailed insight into network vulnerabilities. The remaining 5.6% involves sensitive code, with access keys and proprietary source code shared through these prompts.

Another significant concern raised by Harmonic Security is employees' use of free-tier GenAI services. These versions typically lack the security controls found in enterprise editions, and many state outright that they train on customer data, meaning any sensitive data entered could be used to improve the underlying AI models. According to the study, 63.8% of ChatGPT users, 58.6% of Gemini users, 75% of Claude users, and 50.5% of Perplexity users are on these free tiers.

Alastair Paterson, CEO and co-founder of Harmonic Security, remarked, "Most GenAI use is mundane but the 8.5% of prompts we analyzed potentially put sensitive personal and company information at risk. In most cases, organisations were able to manage this data leakage by blocking the request or warning the user about what they were about to do. But not all firms have this capability yet. The high number of free subscriptions is also a concern, the saying that 'if the product is free, then you are the product' applies here and despite the best efforts of the companies behind GenAI tools there is a risk of data disclosure."

Harmonic Security has made several recommendations to address these concerns. These include implementing real-time monitoring systems to manage data inputs in GenAI tools, ensuring the usage of paid or non-training plans, and gaining granular visibility into the data shared through these platforms. The company also advises creating guidelines for how different departments should interact with GenAI resources, and training employees on the risks and responsible practices associated with their use.
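
To illustrate the first of those recommendations, real-time monitoring can be as simple in concept as screening each prompt for sensitive patterns before it leaves the organisation, then blocking the request or warning the user, as Paterson describes. The Python sketch below is illustrative only: the pattern set, the function names screen_prompt and submit_prompt, and the blocking behaviour are assumptions for demonstration, not Harmonic Security's actual product.

```python
import re

# Illustrative detectors only -- a production DLP/monitoring tool of the kind
# Harmonic Security describes would use far more robust classification.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_prompt(prompt: str, send_fn) -> str:
    """Block the request if the prompt looks sensitive, else forward it."""
    findings = screen_prompt(prompt)
    if findings:
        # Mirrors the article's "blocking the request or warning the user".
        raise ValueError(f"Prompt blocked: possible {', '.join(findings)}")
    return send_fn(prompt)

if __name__ == "__main__":
    try:
        submit_prompt("Summarise this: customer card 4111 1111 1111 1111",
                      send_fn=lambda p: "(sent to GenAI provider)")
    except ValueError as err:
        print(err)  # Prompt blocked: possible credit_card
```

A real deployment would sit at a network or browser layer rather than inside application code, and would log matches to provide the granular visibility the study recommends.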
