Misinformation stories
Despite widespread fears over trust and security, 15% of Singapore consumers have used autonomous AI in the past six months, EY found.
The French AI group is targeting sensitive public-sector and enterprise uses in Singapore, where stricter controls can slow deployment but boost credibility.
Security teams face a broader threat as criminals and state-backed actors use generative AI to speed hacks, phishing and malware.
The expansion will give European leaders and policymakers early access as W readies its public beta and new tracking dashboard for 17 June.
Concerns over misinformation and manipulation are creating an opening for eYou, which is now available worldwide on iOS and Android.
Enterprises could gain cryptographic checks for AI agents, models and media as DigiCert adds a trust layer across its platform.
Growing concern over AI-made media is pushing firms towards cryptographic proof of origin as DigiCert adds a managed verification service.
Businesses face rising exposure as AI is used to sharpen phishing, while insecure in-house tools and weak controls widen attack surfaces.
The framework is designed to expose hidden risks in production AI systems that can be missed by conventional one-off tests.
Brands using customer-facing chatbots face fresh pressure to prove safety and accuracy as Testlio rolls out human-led checks for live-use failures.
A lack of visibility is leaving many European organisations unable to tell whether AI-powered attacks have already breached their systems.
Most Australians would adopt AI sooner if tougher safeguards were in place, yet only 1% say they completely trust the technology.
Advertisers retain access to Nine’s 15.8 million monthly Australian readers as Teads extends its digital ad deal for three years.
Australians are using AI heavily, but most still want clear labelling and sourcing before they trust its search and shopping advice.
The Edinburgh conference will put AI trust and governance centre stage as speakers from OpenAI, OpenUK and academia address business risk.
AI moderation tools may treat abuse unevenly, with a Queensland study finding that political personas shift judgments while barely affecting accuracy.
Marketers under pressure to curb misinformation can now use a score in PlatformAlt5's BriefBrain app that filters out weak AI-assisted content.
Most Australians want AI-made content clearly labelled, as 89% back tougher regulation and 62% warn of damaged trust from deception.
The United States and X dominate deepfake spread, with a new report linking 46.9% of cases to the US and most incidents to social media.
Many fear losing access to news, learning and friendships online, even as 47% of young Australians back tighter under-16 social media rules.