IT Brief Asia - Technology news for CIOs & IT decision-makers

DTEX warns Telegram & WhatsApp AI agents risk exfiltration

Fri, 24th Apr 2026

DTEX has published an advisory on endpoint indicators linked to AI agents receiving instructions through Telegram and WhatsApp. The research focuses on insider risk and data exfiltration from user devices.

Produced by DTEX's i3 Insider Investigations team, the advisory examines how locally executed AI agents can be directed through personal messaging apps, then operate in the background without ongoing human interaction. Host-based and containerised agents can access files, mounted network drives and external AI services with the same permissions as the user.

DTEX argues the issue lies at the endpoint, not only in debates around model behaviour, prompt injection and policy. According to the research, these agents can act without malware, exploits or the traditional indicators of compromise that security teams often rely on.

The findings draw on work covering OpenClaw, NemoClaw and NanoClaw. Investigators mapped a range of host-level indicators tied to setup, persistence, credential exposure, shell-snapshot behaviour, exfiltration preparation, message-driven execution and communication with external services.

One area of concern is the use of messaging platforms as an instruction channel. In the scenarios described, a user can configure an agent to receive instructions through a messaging app and then carry out tasks locally on the endpoint, including reading files, interacting with directories and preparing data for transfer.

That activity can resemble legitimate user behaviour unless defenders know which signals to monitor. The advisory says this creates a visibility gap for organisations trying to manage sensitive data while staff experiment with AI tools on their devices.

"What this research shows is that risk is already on the endpoint," said Jamie Lindsay, VP, APAC and Japan at DTEX. "These agents can run in the background, access files using legitimate permissions, and in some cases take instructions through messaging apps that sit outside normal security workflows. That gives security teams a visibility problem they cannot afford to ignore."

Host signals

The advisory lists several indicators that investigators say can reveal the presence or activity of these tools. They include OpenShell sandbox activity linked to NemoClaw, SSH-based agent invocations from a Telegram bridge, Docker-based installation and rebuild activity, long-lived agent processes, credentials exposed in process parameters, and outbound network connections to known large language model API endpoints.
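One of those indicators, credentials exposed in process parameters, lends itself to simple command-line inspection. The sketch below is illustrative only: the regexes are our assumptions about common secret shapes (generic `key=`/`token=` arguments, OpenAI-style `sk-` keys, Telegram bot tokens), not patterns taken from the DTEX advisory.

```python
import re

# Hypothetical patterns for secrets that sometimes leak into process
# command lines; these are illustrative assumptions, not DTEX's list.
SECRET_PATTERNS = [
    re.compile(r"(?:api[_-]?key|token|secret)=\S+", re.IGNORECASE),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),        # OpenAI-style key shape
    re.compile(r"\d{6,}:[A-Za-z0-9_-]{30,}"),  # Telegram bot token shape
]

def credential_findings(cmdline: str) -> list[str]:
    """Return substrings of a process command line that look like
    exposed credentials (indicator: secrets in process parameters)."""
    hits = []
    for pat in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pat.finditer(cmdline))
    return hits

# Example: an agent launched with a bot token on its command line.
cmd = "node agent.js --telegram-token=123456789:AAEexampleexampleexampleexample12"
print(credential_findings(cmd))
```

In practice a scan like this would be fed from endpoint process telemetry rather than a hard-coded string, and the pattern list would need tuning to local tooling.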

DTEX also described a multi-step workflow tied to mounting host directories into a NanoClaw container, saying the pattern can indicate preparation for local data access and exfiltration.
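One observable step in that workflow is the bind mount itself. A minimal sketch of checking a `docker run` invocation for mounts of sensitive host directories follows; the prefix list and the container name are assumptions for illustration, not part of DTEX's published pattern.

```python
import shlex

# Host-path prefixes treated as sensitive; an assumption, tune per estate.
SENSITIVE_PREFIXES = ("/home", "/Users", "/etc", "/root", "/mnt")

def risky_mounts(docker_cmd: str) -> list[str]:
    """Return host paths bind-mounted into a container via -v/--volume
    that fall under a sensitive prefix (does not handle --mount syntax)."""
    args = shlex.split(docker_cmd)
    risky = []
    for i, tok in enumerate(args):
        if tok in ("-v", "--volume") and i + 1 < len(args):
            host_path = args[i + 1].split(":", 1)[0]
            if host_path.startswith(SENSITIVE_PREFIXES):
                risky.append(host_path)
    return risky

# Hypothetical example command; the image name is illustrative.
print(risky_mounts("docker run -v /home/alice:/data nanoclaw:latest"))
```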

In another example, the research found that process parameters can reveal the content of instructions routed from an external messaging channel to a local OpenClaw agent node. That can give investigators visibility into intent that may not be visible through network inspection alone.

Observed local ports included 11434 for Ollama inference and 3128 for HTTP proxy activity. DTEX said those observations confirmed both AI agent-to-LLM and AI agent-to-network communication during task execution.

Risk gap

DTEX linked the findings to its 2026 Cost of Insider Risks Global Report, which found that 73% of organisations worry unauthorised AI use is creating invisible data exfiltration paths, while 19% classify AI agents as equivalent to human insiders.

The company has also added an AI Factors category to its risk model to classify and detect AI behaviour on endpoints. DTEX said the move reflects a need to track what AI agents do inside working environments, rather than only whether AI use is allowed in principle.

The advisory also argues that containerised deployments do not always reduce visibility for defenders. In some tests, containerised agentic AI produced clearer detection opportunities than host-level deployments because orchestration tools such as Claude Code, Telegram bridges, SSH tunnels and OpenShell generated process telemetry on the host.

By contrast, some host-level deployments appeared as long-lived Node processes with fewer obvious behavioural artefacts. That distinction matters for investigators because the most direct route to detection may vary depending on how the agent has been deployed.
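The long-lived Node process shape is one of the few signals available in that quieter deployment mode. A minimal check is sketched below; the 24-hour threshold is an assumption for illustration, not a figure from the advisory.

```python
import time

# Assumed threshold: flag processes running longer than a day.
LONG_LIVED_SECONDS = 24 * 3600

def is_long_lived_node(name: str, start_time: float, now: float) -> bool:
    """True if a process named like 'node' has been running past the
    threshold (indicator: long-lived host-level agent process)."""
    return name.lower().startswith("node") and (now - start_time) > LONG_LIVED_SECONDS

now = time.time()
print(is_long_lived_node("node", now - 3 * 24 * 3600, now))
```

On its own this check would be noisy on developer machines; in practice it would be combined with other signals such as the outbound LLM API connections the advisory describes.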

"A lot of the discussion around AI risk is still happening at the policy level, but this advisory shows why the endpoint matters," Lindsay said. "If an agent can install locally, mount directories, expose credentials and interact with external services, organisations need to be able to detect that behaviour in practice, not just assume existing controls will catch it."

Immediate steps

DTEX said risk is highest where organisations encourage rapid AI experimentation on user devices without comparable investment in monitoring, governance and detection. It also warned that AI agents may introduce vulnerabilities or risky behaviour even when a user has no malicious intent, making technical monitoring more important than assumptions about motive.

The company set out three immediate actions for security teams: monitor the human prompts used to configure an AI agent alongside the behaviour the agent exhibits on the endpoint; detect and restrict credential exposure by monitoring how agents access, store, reuse or transmit secrets and authentication material; and limit data exfiltration paths by identifying sensitive information, validating the true scope of agent access, and reviewing how users and AI agents move across directories and systems.
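Those three actions can be read as a simple triage over agent-related events. The sketch below encodes them as one check; the field names are our assumed schema for illustration, not DTEX's.

```python
# Hypothetical event schema: 'prompt' (configuring instruction text),
# 'secrets_in_cmdline' (bool), 'paths_outside_scope' (list of paths).
def triage(event: dict) -> list[str]:
    """Return which of the three review areas an agent event touches."""
    flags = []
    if event.get("prompt"):               # 1. monitor configuring prompts
        flags.append("review prompt against endpoint behaviour")
    if event.get("secrets_in_cmdline"):   # 2. credential exposure
        flags.append("restrict credential exposure")
    if event.get("paths_outside_scope"):  # 3. data exfiltration paths
        flags.append("validate agent access scope")
    return flags

print(triage({"prompt": "summarise shared drive", "secrets_in_cmdline": True,
              "paths_outside_scope": []}))
```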