Agentic AI and Security
October 31, 2025
Agentic AI systems present unique security challenges. The fundamental security weakness of LLMs is that there is no rigorous way to separate instructions from data, so anything they read is potentially an instruction. This leads to the “Lethal Trifecta” of sensitive data, untrusted content, and external communication: the risk that the LLM will read hidden instructions and leak sensitive data to attackers. We need to take explicit steps to mitigate this risk by minimizing access to each of these three elements. It is valuable to run LLMs inside controlled containers and to break up tasks so that each sub-task blocks at least one leg of the trifecta. Above all, take small steps that can be controlled and reviewed by humans.
Source: Agentic AI and Security
With the rapid rise of the MCP protocol and agentic software development (that is, where we largely set up software agents to complete a task and let them carry it out, calling resources locally and online), I’ve also seen serious concerns about the security of such processes.
Here Korny Sietsma gives a detailed overview of the problem and of ways to mitigate, rather than eliminate, the associated risks.
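
To make the “break up tasks” idea concrete, here is a minimal Python sketch of the kind of guard an agent orchestrator could apply: each sub-task declares which trifecta elements it touches, and any sub-task combining all three is refused. The `SubTask` shape, flag names, and `run_agent` function are hypothetical illustrations of the principle, not code from the article or any real framework.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SubTask:
    """Hypothetical capability declaration for one agent sub-task."""
    name: str
    reads_sensitive_data: bool        # private files, credentials, user data
    reads_untrusted_content: bool     # web pages, issues, incoming email
    communicates_externally: bool     # network calls, posting, exfiltration paths


def has_lethal_trifecta(task: SubTask) -> bool:
    # A sub-task is unsafe only when it combines all three elements;
    # blocking any one of them breaks the exfiltration chain.
    return (task.reads_sensitive_data
            and task.reads_untrusted_content
            and task.communicates_externally)


def run_agent(task: SubTask) -> None:
    if has_lethal_trifecta(task):
        raise PermissionError(
            f"refusing {task.name!r}: a sub-task must block at least one of "
            "sensitive data, untrusted content, or external communication"
        )
    print(f"running {task.name!r} inside its controlled container")


# A research step may read untrusted web content and use the network,
# but sees no sensitive data; a summarising step may read sensitive
# notes but gets neither untrusted input nor network access.
run_agent(SubTask("fetch-web-research", False, True, True))
run_agent(SubTask("summarise-private-notes", True, False, False))
```

In practice such a check is only as trustworthy as its enforcement, which is why the summary pairs it with running each sub-task in a controlled container, so that a sub-task declared network-free genuinely has no network interface rather than a flag the model is trusted to respect.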