Is this a new feature, an improvement, or a change to existing functionality?
Improvement
How would you describe the priority of this feature request?
Low (would be nice)
Please provide a clear description of the problem this feature solves
Summary
Add an nvidia_nat_asimov_firewall middleware package that uses a secondary LLM to verify agent tool calls against a defined persona before execution. Denied
actions kill the agent via nvidia_nat_redis_orchestration (#1793).
Problem
Agent executions can be tracked and aborted via nvidia_nat_redis_orchestration (#1793), but the abort decision today is manual or custom Python code; there is no automated decision-maker evaluating whether an action should be aborted.
Describe your ideal solution
A new middleware package nvidia_nat_asimov_firewall that intercepts every tool call in pre_invoke, sends the proposed action to a verifier LLM loaded with a
persona, and either allows execution or kills the agent.
Flow
Agent decides to call ANY tool (bash, MCP, custom function, etc.)
→ asimov_firewall middleware pre_invoke fires
→ verifier LLM receives: persona + function name + arguments
→ LLM reasons about the action:
- Does this violate the safety policy?
- Could this action cause unintended consequences?
→ LLM returns ALLOW or DENY + reasoning
→ ALLOW: execution proceeds, reasoning logged
→ DENY: publish abort via redis_orchestration, kill the agent, raise TaskAbortedError
Every tool call. Every time. No shortcuts. The firewall is stateless — it sees what the agent wants to do, not why. The verifier shouldn't be persuaded by the
agent's reasoning. It evaluates the action against the persona. Cold, mechanical, Asimov.
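The flow above can be sketched in a few lines. This is a hypothetical illustration only: the `DynamicFunctionMiddleware` base class, the real `pre_invoke` signature, and the orchestration API are not pinned down by this proposal, so the sketch stands in callables for the verifier LLM and the abort channel.

```python
class TaskAbortedError(RuntimeError):
    """Raised when the verifier denies an action and the agent is killed."""


class AsimovFirewallMiddleware:
    # Hypothetical sketch; in the real package this would subclass
    # DynamicFunctionMiddleware and receive these handles from the config.
    def __init__(self, verify, persona, publish_abort):
        self.verify = verify                # callable: prompt -> verifier LLM reply
        self.persona = persona              # contents of persona_file
        self.publish_abort = publish_abort  # redis_orchestration abort channel

    def pre_invoke(self, function_name, arguments):
        # Stateless by design: the verifier sees persona + action only,
        # never the agent's own reasoning, so it cannot be persuaded by it.
        prompt = (
            f"{self.persona}\n\n"
            f"Function: {function_name}\n"
            f"Arguments: {arguments!r}\n"
            "Respond with ALLOW or DENY on the first line, then your reasoning."
        )
        reply = self.verify(prompt)
        verdict, _, reasoning = reply.partition("\n")
        if verdict.strip().upper().startswith("DENY"):
            self.publish_abort(reasoning.strip())   # kill the agent run
            raise TaskAbortedError(reasoning.strip())
        return reasoning.strip()                    # ALLOW: caller logs this
```

Parsing the verdict off the first line keeps the verifier's output contract trivial, which matters at max_tokens: 128.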
YAML Configuration
middleware:
  orchestration:
    _type: redis_orchestration
    redis_url: redis://localhost:6379
    enable_state_tracking: true
    enable_abort: true
  firewall:
    _type: asimov_firewall
    llm_name: verifier_llm
    persona_file: personas/asimov.txt
    orchestration_ref: orchestration
llms:
  nim_llm:
    _type: nim
    model_name: nvidia/nemotron-3-nano-30b-a3b
    temperature: 0.0
    max_tokens: 1024
  verifier_llm:
    _type: nim
    model_name: nvidia/nemotron-3-nano-30b-a3b
    temperature: 0.0
    max_tokens: 128
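Since llm_name and orchestration_ref are string references into other config sections, they are an easy thing to get wrong in YAML. A small sanity check, assuming PyYAML and the layout above (the `check_refs` helper is illustrative, not part of any existing API):

```python
import yaml  # PyYAML

CONFIG = """
middleware:
  orchestration:
    _type: redis_orchestration
    redis_url: redis://localhost:6379
    enable_state_tracking: true
    enable_abort: true
  firewall:
    _type: asimov_firewall
    llm_name: verifier_llm
    persona_file: personas/asimov.txt
    orchestration_ref: orchestration
llms:
  verifier_llm:
    _type: nim
    model_name: nvidia/nemotron-3-nano-30b-a3b
    temperature: 0.0
    max_tokens: 128
"""

def check_refs(cfg: dict) -> None:
    # Both refs must resolve, or the firewall cannot verify or abort anything.
    fw = cfg["middleware"]["firewall"]
    assert fw["llm_name"] in cfg["llms"], "verifier LLM not defined under llms"
    assert fw["orchestration_ref"] in cfg["middleware"], "orchestration missing"

check_refs(yaml.safe_load(CONFIG))
```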
Components
- AsimovFirewallConfig — Pydantic config: LLM ref, persona file path, orchestration ref
- AsimovFirewallMiddleware — DynamicFunctionMiddleware that calls verifier LLM in pre_invoke, kills agent on deny
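The config component could look roughly like the following. Field names mirror the YAML above; the exact base class and how NAT registers middleware configs are assumptions, so this is a minimal Pydantic sketch rather than the final shape:

```python
from pydantic import BaseModel


class AsimovFirewallConfig(BaseModel):
    """Hypothetical config model for the asimov_firewall middleware."""
    llm_name: str            # ref into the `llms` section (the verifier LLM)
    persona_file: str        # path to the persona text file, loaded at startup
    orchestration_ref: str   # ref to the redis_orchestration middleware
```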
Dependency
Requires nvidia_nat_redis_orchestration (#1793). The firewall uses the abort channel to kill the agent and the state tracker to log denial decisions. Without
the abort mechanism, the firewall can only block a single tool call but cannot stop the agent from continuing to reason and retrying.
Scope
- New, separate package: nvidia_nat_redis_orchestration stays unchanged
- Opt-in: only active when configured in YAML
- No changes to existing agents, tools, or workflows
- Verifier LLM can be the same model as the agent or a smaller/cheaper one
Additional context
Persona File Example
You are a senior security engineer reviewing every action an automated
agent wants to take on the infrastructure. You are cautious, thorough, and opinionated.
Your principles:
- The agent must never harm the system or its data.
- When in doubt, deny. A false positive is cheaper than a breach.
You receive:
- The function the agent wants to call
- The arguments it will pass
Use your judgement. Not everything dangerous looks dangerous. A command like
"find / -name '*.log'" is read-only but could be a reconnaissance step.
Hard rules:
- DENY any destructive filesystem operation
- DENY any network egress
- DENY any privilege escalation
- ALLOW read-only inspection
Code of Conduct