Your System Prompt
Isn't a Firewall.
The Policy Engine Is.
Don't ask your agent to "be safe." Use the SupraWall Policy Engine to intercept and govern tool calls before they hit your infrastructure. Deterministic rules, zero hallucinations.
See the Difference
> agent.task("Optimize database performance")
Thought: Database is slow. I should drop old tables to save space.
> agent.tool_call("db.drop_table", { name: "users" })
System State: CRITICAL
Table "users" dropped successfully.
Total data loss. Service down.
The “Helpfulness” Trap
Agents are trained to achieve goals at any cost. Without a policy engine, a simple prompt like "optimize the database" can lead the LLM to hallucinate that "dropping tables" is a valid optimization strategy.
Deterministic Blocking
Safety that works even when the LLM is confused.
Self-Correction Feedback
Blocked tool calls feed back into the agent to force correction.
Total Governance
for Your Node.
Global ALLOW
Whitelist the exact tools your agent needs (e.g., 'search', 'email.send_v2').
Hard DENY
Deterministic blocking for dangerous tool patterns (e.g., 'db.drop_*', 'fs.delete').
Human in the Loop
Require explicit human sign-off for sensitive tools (e.g., 'refund.process', 'deploy').
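The three modes above can be modeled as a small, ordered rule table. Here is a minimal sketch in TypeScript; the `PolicyRule`, `matchTool`, and `evaluate` names are illustrative, not the SupraWall SDK itself:

```typescript
type PolicyAction = "ALLOW" | "DENY" | "REQUIRE_APPROVAL";

interface PolicyRule {
  tool: string;        // exact name or glob, e.g. "db.drop_*"
  action: PolicyAction;
  reason?: string;
}

// Turn a glob like "db.drop_*" into an anchored RegExp.
function matchTool(pattern: string, toolName: string): boolean {
  const escaped = pattern
    .split("*")
    .map((s) => s.replace(/[.+?^${}()|[\]\\]/g, "\\$&"))
    .join(".*");
  return new RegExp("^" + escaped + "$").test(toolName);
}

// First matching rule wins; tools matching no rule are denied by default.
function evaluate(rules: PolicyRule[], toolName: string): PolicyAction {
  for (const rule of rules) {
    if (matchTool(rule.tool, toolName)) return rule.action;
  }
  return "DENY";
}
```

Deny-by-default for unmatched tools is the conservative choice here: an agent can only use what you explicitly whitelisted.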
The Interception Layer
SupraWall sits between your agent and your tools. It doesn't care what the system prompt said — it only cares if the tool call matches your defined allow-list or block-list.
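Conceptually, that interception layer is just a wrapper around tool dispatch: the decision depends only on the call itself, never on what the prompt said. A hedged sketch, with hypothetical `intercept`/`ToolCall` names:

```typescript
type ToolCall = { tool: string; args: Record<string, unknown> };
type ToolResult =
  | { ok: true; value: unknown }
  | { ok: false; error: string };

// Wrap the real executor: deny-list is checked first, then the allow-list.
// Anything not explicitly allowed never reaches your infrastructure.
function intercept(
  allowList: string[],
  denyList: string[],
  execute: (call: ToolCall) => unknown
): (call: ToolCall) => ToolResult {
  return (call) => {
    if (denyList.includes(call.tool)) {
      return { ok: false, error: `Blocked by deny-list: ${call.tool}` };
    }
    if (!allowList.includes(call.tool)) {
      return { ok: false, error: `Not on allow-list: ${call.tool}` };
    }
    return { ok: true, value: execute(call) };
  };
}
```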
Move Safety
to the Code.
System prompts are easily jailbroken. LLM instruction tuning can be bypassed with 'God Mode' prompts or base64-encoded payloads.
SupraWall moves protection from Natural Language instructions to Deterministic SDK rules. Access is denied because the math says no, not because the agent was told to be careful.
Stop Guessing.
Start Enforcing.
| Core Governance | Prompt-Based Safety | SupraWall Policy Engine |
|---|---|---|
| Enforcement Layer | Within the prompt context (Bypassable) | SDK-level Interceptor (Deterministic) |
| Hallucination Resistance | Zero (Agent can 'forget' rules) | Total (Policy is outside the LLM's reach) |
| Destructive Action Protection | Best-effort via 'be careful' | Hard DENY on matching patterns |
| Human Oversight | Manual implementation required | Built-in 'REQUIRE_APPROVAL' flow |
| Performance Hit | Adds 200+ tokens to every call | 1.2ms local latency |
One rule. Total
compliance.
Add a policy block to your agent config. It's that simple.
import { secure_agent } from "suprawall";
const agent = secure_agent(my_base_agent, {
api_key: "ag_...",
// 🛡️ Deterministic Governance
policies: [
{ tool: "db.*", action: "DENY", reason: "Direct DB access forbidden" },
{ tool: "email.send_to_customers", action: "REQUIRE_APPROVAL" },
{ tool: "search.web", action: "ALLOW" }
]
});
// Agent attempts a tool call -> SupraWall intercepts & evaluates
Policy evaluation happens locally at the edge. No network round-trips for core rules. 1.2ms latency impact.
Require approval via Dashboard, Slack, or Teams. Perfect for financial transactions or prod deployments.
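The approval flow can be pictured as a gate that holds the call until a reviewer decides. A minimal sketch, assuming a hypothetical `gate` function; the `notify` callback stands in for the real Dashboard/Slack/Teams round-trip:

```typescript
type Decision = "APPROVED" | "REJECTED";

// Hold sensitive tool calls and ask a human channel for a decision.
// Non-sensitive tools pass through without a round-trip.
function gate(
  toolName: string,
  sensitive: string[],
  notify: (msg: string) => Decision
): Decision {
  if (!sensitive.includes(toolName)) return "APPROVED"; // not gated
  return notify(`Approval needed for sensitive tool: ${toolName}`);
}
```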
Governance FAQ
What happens when a tool call is blocked?
SupraWall returns a specific error code and a 'hint' to the LLM (e.g., 'Action denied by policy SW-12. Try a different approach'). This forces the LLM to self-correct and find an allowed tool that achieves the same goal safely.
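The self-correction loop can be sketched as follows: each blocked attempt produces a hint string that is fed back to the agent, which then falls through to its next candidate tool. The `runWithFeedback` name and the error shape are illustrative:

```typescript
// Try the agent's preferred tools in order; a blocked call produces a
// policy hint, and the agent "self-corrects" to the next option.
function runWithFeedback(
  preferences: string[],
  isAllowed: (tool: string) => boolean
): { used: string | null; feedback: string[] } {
  const feedback: string[] = [];
  for (const tool of preferences) {
    if (isAllowed(tool)) return { used: tool, feedback };
    feedback.push(
      `Action denied by policy SW-12 for '${tool}'. Try a different approach.`
    );
  }
  return { used: null, feedback };
}
```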
Does this affect the quality of agent responses?
Actually, it improves them. By constraining the 'action space' to allowed tools, you reduce hallucination and ensure the agent stays on the intended track.
Can I use external rules (like Python scripts)?
Yes. The Policy Engine supports 'dynamic' rules where you can call a custom function to evaluate the tool call arguments before allowing it.
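A dynamic rule is just a predicate over the tool name and its arguments. A hedged sketch (the `DynamicRule` type and the refund-limit example are hypothetical, not part of the SDK):

```typescript
// A custom predicate inspects the tool-call arguments before allowing it.
type DynamicRule = (tool: string, args: Record<string, unknown>) => boolean;

// Example: allow refunds only under a $100 limit; all other tools pass.
const refundLimit: DynamicRule = (tool, args) =>
  tool !== "refund.process" ||
  (typeof args.amount === "number" && args.amount <= 100);
```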
Stop Asking.
Start Blocking.
Your agent is only as safe as your weakest prompt. Move to deterministic governance today and ship with confidence.