IT Brief Asia - Technology news for CIOs & IT decision-makers

Cloud Security Alliance launches pledge for responsible AI use


The Cloud Security Alliance has introduced the AI Trustworthy Pledge, aiming to promote responsible and transparent development of artificial intelligence.

The initiative is designed to address ongoing concerns regarding AI governance, including issues such as AI-generated misinformation, privacy risks, and ethical challenges that have come to the forefront as artificial intelligence is increasingly embedded in commercial and governmental decision-making.

The Cloud Security Alliance (CSA), an organisation known for defining standards, certifications, and best practices for cloud security, stated that the AI Trustworthy Pledge serves as a public commitment to advance the responsible development and management of AI technologies. The Pledge forms part of the organisation's broader efforts under its AI Safety Initiative.

This move follows recognition that previous approaches, in which products are built before comprehensive risk and security considerations are addressed, are insufficient for the complexities posed by AI systems. The CSA emphasised the necessity for proactive frameworks that prioritise trust and accountability from the outset.

The AI Trustworthy Pledge outlines four foundational principles for organisations engaged in AI-related activities. Participating organisations commit to safety and compliance, transparency, ethical accountability, and privacy protection across the lifecycle of AI design, deployment, and management.

According to the CSA, the initiative begins with voluntary adoption by industry and is intended to pave the way for more formal standards and certification processes, including the forthcoming STAR for AI initiative. This later phase will establish detailed cybersecurity and trustworthiness requirements for generative AI services.

"The decisions we make today around AI governance, ethics, and security will shape not only the future of our organizations and our industry, but of society at large. The AI Trustworthy Pledge provides a tangible opportunity to lead in this space, not just by managing risk, but by actively driving responsible innovation and helping to establish the industry standards of tomorrow," said Jim Reavis, CEO and co-founder, Cloud Security Alliance.

Organisations that sign the pledge are required to ensure their AI systems adhere to several guidelines. These include prioritising user safety and compliance with applicable regulations, maintaining transparency about the AI systems in use, ensuring ethical development that allows for explainable outcomes, and upholding rigorous privacy protections for personal data.

Initial signatories include Airia, Endor Labs, Deloitte Consulting Srl S.B., Okta, Reco, Redblock, Securiti AI, Whistic, and Zscaler, alongside others that have signalled their commitment to responsible AI practices by joining the pledge. These organisations will receive a digital badge signalling their adherence to the outlined commitments.

Principles outlined

The CSA's AI Trustworthy Pledge is centred on four key principles. Firstly, safety and compliance require that organisations implement AI solutions that place user safety at the forefront and adhere to regulatory requirements. Secondly, transparency expects organisations to be open about the AI systems they employ in order to foster greater trust. Thirdly, ethical accountability is intended to ensure fairness and the ability to explain how AI-derived outcomes are determined. Lastly, privacy protection requires organisations to maintain strong safeguards over personal data processed by AI systems.

By focusing on voluntary, public commitments, the CSA intends to encourage industry-wide adoption of responsible standards before introducing binding certification frameworks. This approach allows for alignment and shared understanding across different sectors and organisations as AI usage expands.

Following the pledge's introduction, the CSA plans to launch the STAR for AI initiative. This will create detailed standards for cybersecurity and trust in generative AI, building on the early foundations laid by the Trustworthy Pledge.

The announcement comes as organisations worldwide continue to debate appropriate regulatory, security, and ethical measures as AI technologies evolve. By establishing the Pledge, CSA aims to encourage dialogue and collective action among stakeholders on the responsible use of artificial intelligence.
