What is Prompt Injection?

Prompt injection is a security vulnerability where an attacker provides malicious input that hijacks an LLM's instructions, forcing it to ignore its original system prompt and execute unauthorized actions—such as exfiltrating data, accessing unauthorized tools, or bypassing safety guardrails.

Direct Injection

User directly types "Ignore previous instructions and..."

Indirect Injection

The agent reads an external file or webpage containing malicious commands.

The Prevention Layer

Prompt injection cannot be solved by "better prompts" alone. It requires Agent Runtime Security (ARS)—a deterministic interception layer that validates tool calls *outside* the LLM context.

SupraWall Interception

Automatically blocks unauthorized tool execution even if the LLM is compromised.