Responsible AI governance drives business gains but risk gaps persist
A new survey indicates that organisations advancing responsible artificial intelligence governance are achieving superior business outcomes, including revenue growth and cost efficiency.
The second wave of the Responsible AI Pulse survey, conducted by EY, highlights that organisations utilising real-time monitoring and oversight committees are better placed to capitalise on the benefits of AI adoption. According to global responses, firms with these advanced governance structures are 34% more likely to see improvements in revenue growth and 65% more likely to realise cost savings than those without such frameworks.
Singapore stands out for the efficiency and innovation benefits it has gleaned from AI. Ninety per cent of local respondents reported enhanced productivity, above the global average of 79%, while 83% reported improved innovation (global average: 81%). Despite these gains, fewer Singapore companies reported increases in revenue growth (37%, compared with 54% globally), while cost savings were cited by 47%, closely matching the global figure of 48%.
Governance and business benefits
Manik Bhandari, EY Asean Data and Artificial Intelligence Leader, emphasised the importance of establishing robust and transparent AI governance to enable organisations to scale and maintain efficiency while mitigating risks.
"As organisations in Singapore explore the full potential of AI, from real-world applications to intelligent agents, grounding innovation in responsible principles is critical. With transparent, well-governed AI systems, organizations can scale AI safely in more products, markets and customer segments. This contributes to sustained growth and creates new revenue streams. AI systems that follow responsible principles also require less remediation for security gaps, bias correction and regulatory non-compliance, leading to improved bottom-line efficiency."
The survey collected responses from 975 C-suite leaders across 21 countries, including 30 executives from Singapore, and was conducted in August and September 2025. This set of results follows initial findings reported earlier in the year and offers further insight into the integration of responsible AI into corporate strategies and operations.
Financial risks and losses due to inadequate controls
The data indicates significant risks for organisations lacking adequate AI controls. All Singapore-based organisations surveyed had experienced financial losses stemming from AI-related risks. Nearly two-thirds (63%) of respondents from Singapore, mirroring the global figure (64%), reported financial losses exceeding US$1 million. The average global loss incurred by companies facing AI risks was estimated at US$4.4 million.
Frequent issues included biased AI outputs (67% in Singapore, 53% globally), hallucinations or misinformation generated by AI (63% in Singapore, 53% globally), and legal liabilities arising from AI use (63% in Singapore, 48% globally).
Rise of citizen developers and governance challenges
The proliferation of so-called 'citizen developers' (employees independently creating or deploying AI agents) has increased as AI tools become more user-friendly. In Singapore, 70% of organisations allow some form of this activity, slightly higher than the global average of 67%. However, only 33% of Singapore respondents (compared to 50% globally) had begun developing strategies to manage a hybrid human-AI workforce. Visibility remains a concern, with 57% in Singapore and 50% globally admitting to a lack of high-level oversight regarding employees' use of AI agents.
Organisations are responding with policy measures: 71% of Singaporean respondents (global 60%) reported having formal, organisation-wide policies or frameworks to guide the use of AI agents. In addition, incident escalation procedures have been widely adopted, with 87% of Singaporean and 80% of global firms reporting protocols for unexpected AI agent behaviour. Organisations are also implementing controls on what AI agents are permitted to do, with 83% in Singapore and 87% globally having such measures in place.
Leadership and preparedness gaps
The survey highlighted a knowledge gap among C-suite executives on managing AI risk. When asked to identify the appropriate controls for five AI-related risk scenarios, only 7% of Singapore-based executives (compared to 12% globally) answered all correctly.
According to Bhandari, closing these knowledge and preparedness gaps is becoming increasingly imperative, particularly as AI adoption continues to evolve rapidly across various industries.
"Most leaders recognize the importance of responsible AI, yet many are still navigating how to put it into practice. As AI capabilities accelerate faster than the governance tools and safeguards to manage them, organizations are under pressure to innovate responsibly. Embedding transparency, fairness and privacy from the start is essential. In Southeast Asia, responsible AI will define how organizations innovate and drive progress that benefits both business and society."
The survey findings suggest that, while the appetite for AI adoption in Singapore and globally continues to strengthen, the challenges of governance, risk management, and workforce preparation remain significant focal points for leadership teams.