LLMs blur instructions and data. The UK NCSC and OWASP put prompt injection at LLM01:2025. Designs like CaMeL separate control flow from untrusted data.
High-risk ingress: user prompts, README, issues, web content. Mitigations: sandbox tools, allowlists, label untrusted text.
Playbook: least-privilege tools, explicit approvals, and hard fail on unsanitized commands.