The lethal trifecta for AI agents: private data, untrusted content, and external communication

July 10, 2025

Venn diagram titled "The lethal trifecta" with three overlapping ovals labeled "Access to Private Data" (orange), "Ability to Externally Communicate" (green), and "Exposure to Untrusted Content" (pink).

If you are a user of LLM systems that use tools (you can call them “AI agents” if you like) it is critically important that you understand the risk of combining tools with the following three characteristics. Failing to understand this can let an attacker steal your data.

The lethal trifecta of capabilities is:

  • The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)
  • Access to your private data—one of the most common purposes of tools in the first place!
  • Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM

Source: The lethal trifecta for AI agents: private data, untrusted content, and external communication

Agentic coding tools are the hotness right now–but even a basic understanding of the architecture gives rise to security concerns.

Here Simon Willison outlines a security anti-pattern he calls the ‘The lethal trifecta for AI agents’.