
CISOs face AI risks while managing innovation & security

Mon, 29th Jul 2024

Checkmarx has announced findings from its global study highlighting the challenges enterprise Chief Information Security Officers (CISOs) face in governing the use of artificial intelligence in application development. The findings reveal that while 99% of development teams are leveraging AI for code generation, a substantial 80% express concern over the security threats arising from this practice.

According to the report, only 29% of organisations have put any form of AI governance in place to manage these risks. This governance gap has left security teams contending with a surge of potentially vulnerable code. "Enterprise CISOs are grappling with the need to understand and manage new risks around generative AI without stifling innovation and becoming roadblocks within their organisations," stated Sandeep Johri, CEO of Checkmarx. "GenAI can help time-pressured development teams scale to produce more code more quickly, but emerging problems such as AI hallucinations usher in a new era of risk that can be hard to quantify."

Among the key findings is the tension between empowering development teams with productivity-enhancing AI tools and the need for governance to mitigate newly emerging risks. The study shows that only 15% of respondents have explicitly prohibited the use of AI tools for code generation, yet 99% report that such tools are being used regardless. Additionally, 70% indicate the absence of a centralised strategy for generative AI, with purchasing decisions often made on an ad-hoc basis by individual departments.

Moreover, 60% of respondents express concern about specific AI-related security threats such as AI hallucinations. "The responses of these global CISOs expose the reality that developers are using AI for application development even though it can't reliably create secure code, which means that security teams are being hit with a flood of new, vulnerable code to manage," remarked Kobi Tzruya, Chief Product Officer at Checkmarx. "This illustrates the need for security teams to have their own productivity tools to manage, correlate, and help them prioritise vulnerabilities, as Checkmarx One is designed to help them do."
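For illustration only (the function and schema below are hypothetical, not drawn from the Checkmarx report), a short Python sketch shows the kind of flaw at issue: a code assistant asked to look up a user by name can plausibly emit a query built by string interpolation, a classic SQL injection risk, where secure coding best practice calls for a parameterised query instead.

    import sqlite3

    # Hypothetical AI-suggested helper: the query is built by string
    # interpolation, so a crafted input such as "x' OR '1'='1" changes
    # the query's logic (SQL injection).
    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        query = f"SELECT id, username FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchone()

    # Safer equivalent: a parameterised query lets the database driver
    # handle escaping, neutralising the injection attempt.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        query = "SELECT id, username FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchone()

Both versions run and return the same result for benign input, which is precisely why such flaws slip past time-pressured reviews and end up in the flood of vulnerable code the study describes.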

Notably, attitudes toward allowing AI to make unsupervised changes to code are evolving: 47% of respondents indicated openness to the idea, although a cautious 6% said they wouldn't trust AI to be involved in any security actions within their vendor tools. Despite the potential productivity benefits, generative AI does not yet reliably adhere to secure coding best practices, which prompts discussions about the need for AI-driven security tools.

The study also touched on the motivations of security teams in considering AI-driven security tools to handle the proliferation of AI-generated code. Checkmarx's global research, conducted among 900 CISOs and application security professionals from companies in North America, Europe, and Asia-Pacific with annual revenues of USD $750 million or more, provides a comprehensive overview of the current landscape and future considerations in enterprise AI governance.

The report, titled "Seven Steps to Safely Use Generative AI in Application Security", aims to equip enterprises with strategies to harness AI responsibly and securely in their development processes. As enterprises continue to integrate AI into their development workflows, these findings underscore the critical need for robust governance and innovative security solutions to manage the associated risks effectively.
