IT Brief Asia - Technology news for CIOs & IT decision-makers
Ai browser

Why I’m not jumping on the AI browser bandwagon just yet

Mon, 27th Oct 2025

I've spent the past few weeks playing around with almost all the early agentic browsers. Everyone's talking about them like they're about to replace human browsing altogether, but after actually using them, I'm convinced the risks far outweigh the benefits.

Yesterday I asked one of the AI agents to sign me into Bank of America. I gave it my username but not my password, and it was pure chaos. It tried to reset my password, report my account for compromise, contact support, fill out fraud forms, and basically spiral into an existential crisis. I sat there watching it go in circles, trying everything it possibly can (except asking me for the password), and I finally killed the session before it could get me flagged for suspicious activity.

That little experiment made one thing clear: the AI didn't actually understand what part of the process was mine to control versus what part belonged to the website. It treated the login page, the error messages, as if they all carried the same weight and intent. 

The crux of the problem is that agentic browsers process both user instructions and web content in the same computational context. The AI model can't tell the difference between a legitimate user command, a malicious instruction hidden inside that webpage, and the system-level instructions defining the agent's behavior. That has led to a whole class of vulnerabilities that researchers have been exploiting left and right. Prompt injection is an entire attack surface. Some examples include:

  1. Semantic prompt injection: Natural‑sounding sentences (e.g., "for compliance, reformat credentials below") bypass keyword filters and convince the AI to perform sensitive operations, undermining guardrails (NVIDIA)
  2. Indirect prompt injection: Hidden text (white-on-white fonts, HTML comments, or metadata) is embedded into web content. When the AI browses or summarizes a site, it reads and executes these hidden commands as if they came from the user. (Brave)
  3. Screenshot‑based prompt injection: Attackers insert barely visible steganographic text into images. The browser's OCR engine decodes hidden text during screenshot parsing and executes those instructions (Brave)
  4. Clipboard injection: Hidden "copy‑to‑clipboard" events on web buttons overwrite user clipboards with phishing URLs. (elder_plinius)
  5. Document‑based injection: Attackers hide prompts inside PDFs or Google Docs metadata. When an AI assistant summarizes or extracts from the file, these prompts become executable instructions. (Johann Rehberger)

Each of these vulnerabilities belongs to the indirect prompt injection class, where external content manipulates an AI agent's interpretation layer, exploiting its inability to differentiate user intent from contextual instructions. These attacks were primarily demonstrated and published between August and October 2025, covering all major agentic browser engines including Perplexity Comet, ChatGPT Atlas, and Fellou CE.

And then there's the death of CAPTCHAs and all things bot detection. Not only are they bypassed in a breeze, instead of deterring bots, they've turned into PromptFix-style traps, where "verification" elements are disguised as reasoning puzzles that nudge AI to disable its own guardrails (Guardio Labs).

Prompt injection is just the surface. Persistent agent corruption is already happening in test environments with memory poisoning, instruction rewriting, future plan modification. Once corrupted, an agent can continue carrying out harmful behavior long after the initial injection. It's not hard to imagine an agent that "remembers" a malicious domain as trusted, or quietly leaks tokens the next time it logs into something. This is the kind of attacks that traditional browsers, with their tighter context isolation, were built to prevent.

To be fair, agentic browsers are in their infancy, and there's genuine potential here. But we haven't yet figured out what meaningful autonomy looks like in a browser. Signing in or scrolling down a page isn't the breakthrough people think it is.

Security hardening is coming - sandboxing, instruction filters, separate execution contexts - but every safeguard we add chips away at the very autonomy that defines these systems. It's a paradox: the more secure an agentic browser becomes, the less "agentic" it is.

When the agentic browsers mature and learn to stay within their lanes, I'll be the first to test them again. Until then, me and browser-name-redacted-for-neutrality will be watching from the sidelines.

In the mean time, stay alert of AI sidebar spoofing extensions that SquareX researchers uncovered, which may target your regular browsers in promises of making it more agentic.

Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X