Singapore security teams under pressure to loosen AI identity controls
Delinea has published research showing that in 95% of organisations in Singapore, security teams face pressure to loosen identity controls for AI. The study highlights growing concern over oversight of AI-related identities in corporate systems.
The findings are based on a global survey of more than 2,000 IT decision-makers using or testing AI, along with Delinea Labs analysis of cyber incidents. In Singapore, 93% of respondents reported at least one visibility gap around identities. The largest gap involved machine and non-human identities, including accounts used by AI agents.
Those gaps were more pronounced in AI-related environments than in older systems. Half of Singapore respondents said discovery gaps were most likely to persist in AI environments, compared with 31% for legacy or on-premises systems.
The research adds to the broader debate over whether companies are moving faster on AI deployment than on security controls. In Singapore, 86% of organisations said they could not always explain why a non-human identity performed a privileged action, raising questions about traceability when automated systems gain elevated access.
Another concern was the use of standing privileged access. In Singapore, 58% of organisations said they lacked workable alternatives for non-human identities and AI agents, meaning automated accounts can retain elevated permissions indefinitely.
Confidence Gap
The report also described what it called an AI security confidence paradox. While 87% of Singapore respondents said their identity security posture was ready to support AI-driven automation, 47% also said their identity governance around AI systems was deficient.
That mismatch was reflected in operational practice. Although 84% said they were confident in their ability to discover non-human identities with access to production systems, fewer than one in four validated non-human identity or AI agent activity in real time to check whether discovery processes were actually working.
Singapore also stood out on several measures compared with the global sample. It ranked highest globally for measurable business impact from identity friction, at 93% versus 84% globally.
Respondents in Singapore also reported greater operational complexity from fragmented tools, at 46% compared with 37% globally. At the same time, 23% said they were using just-in-time authorisation, above the global average of 17%.
Even so, Singapore organisations ranked lowest globally in explaining why an AI agent had taken a privileged action. Only 14% said they were always able to do so, below the 20% global average.
The study identified AI expansion as a leading factor behind rising non-human identity risk in Singapore. Some 37% of organisations cited AI growth as one of the top drivers of that risk over the past year, ahead of increased automation and CI/CD velocity at 29% and growth in cloud-native workloads at 26%.
These findings are likely to resonate in Singapore, where companies are investing heavily in AI while facing close scrutiny over cyber security, access controls and data governance. As more software agents and machine accounts gain access to enterprise systems, security teams face the challenge of tracking who or what has access, why that access was granted, and how it is being used.
Art Gilliland, Chief Executive Officer of Delinea, said the issue is moving up the corporate agenda as AI systems spread through businesses.
"The pressure to move fast on AI is real, but identity governance has not kept pace, which exposes enterprises to significant risk," Gilliland said.
"As AI agents multiply across enterprise environments, these identities often have the least oversight. The organisations that will succeed in the AI era will be the ones that enforce real-time, contextual access across every human, machine, and agentic AI identity."