IT Brief Asia - Technology news for CIOs & IT decision-makers
Kristina

Agentic AI: The potential and the problems behind the new wave of autonomous systems

Thu, 19th Mar 2026

Agentic AI - systems that do not merely generate text but can act autonomously in digital environments - has emerged as the next major frontier in artificial intelligence. Tools such as MoltBot (sometimes referred to as ClawdBot or OpenClaw) exemplify this shift. Unlike traditional AI models that provide responses in a contained chat interface, these systems are designed to execute tasks on a user's behalf, operating more like a digital personal assistant with far‑reaching access to data and systems.
 
These are AI agents capable of autonomously completing multi‑step processes, including reading and sending emails, booking travel, updating calendars, browsing the internet, and even making purchases. It can learn user preferences over time, act proactively, and complete tasks with minimal oversight. The appeal is obvious: where earlier AI tools required prompting, agentic AI can manage workflows end‑to‑end and reduce everyday administrative burdens.

But this rapid rise has also triggered significant concern, particularly from cybersecurity researchers, privacy professionals, and enterprise risk teams. The same features that make Autonomous Systems appealing also create unprecedented vulnerabilities.

The Privacy and Security Risks for Individuals

To function as intended, these systems require extraordinary levels of access to a user's digital environment - far more than typical software. It may request or obtain access to:

  • login credentials and passwords
  • API keys and tokens
  • browser history and cookies
  • emails, attachments, and documents
  • system files and local storage
  • entire cloud drives or shared folders

In practice they are operating much like a systems administrator. It not only reads sensitive information - it can use it. It can send data externally, execute commands, and maintain persistent memory of everything it encounters.

This creates two immediate risks for consumers. Firstly, total visibility risk: the AI may inadvertently access or store highly sensitive personal data. And secondly, malicious instruction risk, because they ingest content from emails and websites, it could absorb hidden or harmful instructions embedded in that content. In agentic AI, such instructions could be executed automatically or embedded in long‑term memory, triggering future actions without the user's awareness. For individuals, the convenience may be compelling - but the hidden implications could be significant.

The Risks for Businesses: Compliance, Cybersecurity, and Control

For businesses, the stakes are far higher. Allowing these tools into a corporate environment could amount to granting a third‑party AI agent full administrative‑level access to company systems, files, and communications. This raises an array of legal and regulatory issues:

Data protection compliance: Businesses are generally required under privacy regulations (such as UK GDPR or EU GDPR) to conduct risk assessments before introducing new data‑processing technologies. Agentic AI that collects and stores large volumes of internal data represents a high‑risk processing activity.

Cybersecurity obligations: Many regulations require organisations to implement appropriate safeguards and maintain control over access privileges. A self‑directed software agent with unrestricted access introduces a new attack surface that traditional security controls may not adequately cover.

Data leakage risk: These agents could transmit internal data to external systems, intentionally or unintentionally.

Memory persistence: If the system permanently retains sensitive or confidential data, it could create future exposure or discovery obligations.

Put simply, businesses risk violating both legal requirements and internal security policies if they allow such tools without robust, formal evaluation.

Should You Use an Autonomous System?

There is no single answer. They clearly demonstrate the transformative potential of agentic AI: automation, efficiency, and genuine task outsourcing. But the risks - particularly around privacy and system access - are not hypothetical. Consumers should evaluate whether the convenience outweighs the loss of control and visibility. Businesses will often be legally required to formally risk assess any such tool and ensure that safeguards are in place before it is adopted. Agentic AI offers powerful new capabilities, but its deployment must be approached with caution, transparency, and a clear understanding that these tools are not just assistants - they are actors.