The constraint
The team builds payment infrastructure. Their security policy forbids any agent process that can make arbitrary outbound network calls or access files outside the project directory. Standard Python AI frameworks do not enforce this at the OS level — they rely on the developer not calling the wrong API, which is not a control that passes a SOC 2 audit.
Two separate vendor evaluations ended the same way: “the security team will not approve this.”
What changed with Kernex
kernex-sandbox wraps macOS Seatbelt and Linux Landlock. The agent process declares its required filesystem paths and network endpoints at startup. The OS enforces the boundaries. If the model hallucinates a network call outside the allowlist, the syscall fails before the packet leaves the machine.
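What that startup declaration looks like is not shown in the post, so here is a minimal sketch. The kernex_sandbox module, its Policy class, and the allowed_paths / allowed_endpoints / apply() names are illustrative assumptions, not the library's actual API; the path and endpoint values are placeholders.

```python
# Illustrative sketch only: every kernex_sandbox name below is assumed,
# not the real kernex-sandbox API.
from kernex_sandbox import Policy  # hypothetical import

policy = Policy(
    # Filesystem access limited to the checked-out project directory.
    allowed_paths=["/workspace/payments-repo"],
    # Outbound network access limited to the model provider's endpoint.
    allowed_endpoints=["llm.provider.example:443"],
)

# Applied at process startup, before the agent loop begins. From this point
# on, the OS (Seatbelt on macOS, Landlock on Linux) rejects syscalls that
# touch anything outside the allowlist.
policy.apply()
```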
This is a material difference from application-level controls. The auditor’s question shifted from “how do you prevent the agent from leaking data?” to “show me the syscall policy.”
Pipeline structure
Three sequential stages defined in a TOML topology file:
```toml
[[pipeline.stages]]
name = "static-analysis"
prompt_file = "prompts/static.md"

[[pipeline.stages]]
name = "security-review"
prompt_file = "prompts/security.md"
depends_on = ["static-analysis"]

[[pipeline.stages]]
name = "summary"
prompt_file = "prompts/summary.md"
depends_on = ["static-analysis", "security-review"]
```
Each stage runs as an isolated agent turn. The summary stage receives the outputs from both prior stages as context. Total wall time per PR: 18-26 seconds depending on diff size.
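The scheduler itself is not described here, but resolving that depends_on graph into an execution order is a plain topological sort. A self-contained sketch, assuming the topology lives in a file named pipeline.toml (the filename and the stage_order helper are illustrative, not Kernex internals):

```python
import tomllib  # Python 3.11+; reads the topology file shown above
from graphlib import TopologicalSorter

def stage_order(path: str) -> list[str]:
    """Return stage names in an order that respects depends_on.

    Illustrative sketch of the scheduling step, not Kernex's scheduler.
    """
    with open(path, "rb") as f:
        topology = tomllib.load(f)

    # Map each stage to the set of stages it depends on.
    graph = {
        stage["name"]: set(stage.get("depends_on", []))
        for stage in topology["pipeline"]["stages"]
    }
    return list(TopologicalSorter(graph).static_order())

# For the topology above: ['static-analysis', 'security-review', 'summary']
print(stage_order("pipeline.toml"))
```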
Audit trail
Every LLM call is logged to SQLite via kernex-memory: timestamp, provider, model, input hash, output hash, latency. The log is append-only from the agent’s perspective. The security team exports it weekly as SOC 2 evidence.
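kernex-memory's schema is not reproduced in the post; a minimal sketch of an equivalent append-only table with the fields listed above might look like the following. The table and column names, the SHA-256 choice for hashing, and the audit.db path are assumptions.

```python
import hashlib
import sqlite3
import time

conn = sqlite3.connect("audit.db")  # path is illustrative

# Columns mirror the fields listed above; names are assumptions, not
# kernex-memory's actual schema.
conn.execute("""
    CREATE TABLE IF NOT EXISTS llm_calls (
        ts          REAL NOT NULL,   -- Unix timestamp of the call
        provider    TEXT NOT NULL,
        model       TEXT NOT NULL,
        input_hash  TEXT NOT NULL,   -- SHA-256 of the prompt
        output_hash TEXT NOT NULL,   -- SHA-256 of the completion
        latency_ms  INTEGER NOT NULL
    )
""")

def log_call(provider, model, prompt, completion, latency_ms):
    """Append one LLM call; the agent only ever inserts, never updates."""
    conn.execute(
        "INSERT INTO llm_calls VALUES (?, ?, ?, ?, ?, ?)",
        (
            time.time(),
            provider,
            model,
            hashlib.sha256(prompt.encode()).hexdigest(),
            hashlib.sha256(completion.encode()).hexdigest(),
            latency_ms,
        ),
    )
    conn.commit()
```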
Result
The tool is in production. It runs on every PR to their core payments repository. The security team approved it in the first review because the sandboxing controls were provable at the OS level, not a matter of policy.