The Promptware Kill Chain | Lawfare

February 23, 2026

Red-tinged computer screen displaying HTML and JavaScript code with multiple lines and syntax highlighting.

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape.

Source

Anybody who works as a software engineer with large language models is at least peripherally aware of the novel security challenges that they can present, perhaps best summarised as the “Lethal Trifecta”, a term Simon Willison coined for the dangerous combination of prompt injection, access to sensitive data, and the ability for an LLM to take actions.

Here the legendary Bruce Schneider and collaborators detail the “Promptware kill chain”, and how attacks on LLM based systems can escalate dramatically.