Python AI frameworks: no OS-level isolation, unbounded memory, stateless conversations.
Kernex fixes all three.
The Rust runtime for AI agents.
Sandboxed. Provider-agnostic. Memory-persistent.
No Python. No GC pauses. No containers.
use kernex_runtime::RuntimeBuilder;
use kernex_providers::factory::ProviderFactory;
use kernex_core::message::Request;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Build the runtime from environment variables (API keys, hosts).
    let runtime = RuntimeBuilder::from_env().build().await?;
    // Pick a provider by name; None uses that provider's default config.
    let provider = ProviderFactory::create("ollama", None)?;
    let request = Request::text("user-1", "Explain this codebase");
    let response = runtime.complete(&provider, &request).await?;
    println!("{}", response.text);
    Ok(())
}

cargo add kernex-runtime kernex-providers · docs.rs/kernex-runtime
Everything production requires.
Not a thin wrapper. Not a prototype framework. Focused crates that compose into a production system.
OS-Level Sandbox
Seatbelt on macOS. Landlock on Linux. The OS enforces the boundary even if the model is deceived. No other Rust AI framework ships this.
11 AI Providers
Claude Code CLI, Anthropic, OpenAI, Gemini, Ollama, OpenRouter, Groq, Mistral, DeepSeek, Fireworks, xAI. Swap providers with one line. No lock-in.
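Swapping backends is the one-line change sketched below, reusing the provider identifiers ("claude", "ollama") from the examples on this page:

// Same runtime, same request code; only the provider name changes.
let provider = ProviderFactory::create("claude", None)?;
// let provider = ProviderFactory::create("ollama", None)?; // local, offline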
Persistent Memory
SQLite-backed facts, session lessons, and reward-based learning. Context survives between runs. Your agent remembers.
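A minimal sketch of what persistence looks like in agent code; the accessor and method names here (memory(), remember, recall) are illustrative assumptions, not confirmed kernex identifiers:

// Hypothetical sketch — method names are illustrative assumptions.
let memory = runtime.memory();                      // SQLite-backed store
memory.remember("deploy-target", "fly.io").await?;  // persist a fact
// ...the process exits and restarts...
let target = memory.recall("deploy-target").await?; // still there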
TOML Pipelines
Multi-agent topologies defined in TOML. Corrective loops, parallel phases, conditional routing. Crash-safe checkpointing resumes pipelines after failure. Version-controlled alongside your code.
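A sketch of what a pipeline file could look like; the actual schema lives in the docs, so treat every key below as an illustrative assumption:

# pipeline.toml — hypothetical sketch; key names are illustrative assumptions
[[phase]]
name = "review"
agents = ["security-reviewer", "style-reviewer"]  # run in parallel

[[phase]]
name = "fix"
depends_on = ["review"]
retry_on_failure = true  # corrective loop
checkpoint = true        # resume here after a crash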
21 Built-in Skills
9 tool integrations (filesystem, git, browser, GitHub, databases) and 12 agent personas included. Load from SKILL.md files. Ship conventions with your repo.
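A sketch of a repo-local skill file; the SKILL.md format shown here is an assumption, kept deliberately minimal:

# SKILL.md — hypothetical sketch; the real format may differ
## migrations
Never rewrite an applied migration under ./migrations.
Add a new file and record the reason in the commit message.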
Single Binary
No virtualenv. No container. No runtime dependency. One compiled binary under 15 MB. Deploy it anywhere.
Benchmarks.
Preliminary: Rust's advantage is real, but the numbers depend on the task. Cold start and memory are the most consistent signals. Full methodology and raw results on GitHub.
| Framework | Language | Cold start | Peak memory | Throughput |
|---|---|---|---|---|
| Kernex | Rust | 12ms | 24 MB | 185 req/s |
| LangChain | Python | 2,200ms | 310 MB | 43 req/s |
| LangGraph | Python | 2,800ms | 385 MB | 36 req/s |
| CrewAI | Python | 3,100ms | 340 MB | 29 req/s |
Sequential text completions via local Ollama (codellama:13b) on an Apple M2 Pro with 32 GB RAM. Cold start: process launch to first API call. Peak memory: 10 concurrent agents. Full methodology: github.com/kernex-dev/kernex/bench
Security the OS enforces.
No other Rust AI framework has Layer 1, the OS-enforced sandbox. Not Rig. Not AutoAgents. Kernex is the only one where a prompt injection cannot read your SSH keys.
Layer 3 · Prompt-level controls
System prompt instructions, allowed tool calls, response filters. Useful, but bypassable by a sufficiently crafted injection.
Layer 2 · SandboxProfile (code-level)
Rust-level path validation before any file operation reaches the OS. Structured allowlists and blocklists per profile (see the sketch after this list).
Layer 1 · OS-level isolation
Seatbelt (macOS) and Landlock (Linux) enforce filesystem and network boundaries at the kernel level. Even if Layers 2 and 3 fail, the OS refuses the syscall.
All three layers run simultaneously. GuardrailRunner adds semantic input and output filtering at the pipeline layer. Audit logs are written to SQLite as compliance evidence.
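A sketch of a code-level (Layer 2) profile mirroring the sandbox.toml shown next; the builder methods are illustrative assumptions rather than the confirmed SandboxProfile API:

// Hypothetical sketch — builder method names are illustrative assumptions.
let profile = SandboxProfile::default()
    .allow_read(["./src", "./Cargo.toml", "./Cargo.lock"])
    .deny_read(["~/.ssh", "~/.aws", "/etc", "/var"])
    .deny_write(["*"]); // read-only by default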
cat .kernex/sandbox.toml
[sandbox]
# Allowed read paths
allow_read = ["./src", "./Cargo.toml", "./Cargo.lock"]
# Blocked paths — OS enforces this, not the model
deny_read = ["~/.ssh", "~/.aws", "/etc", "/var"]
deny_write = ["*"] # read-only by default
# Network: only the provider endpoint
allow_network = ["api.anthropic.com:443"]

Who builds with Kernex.
Three teams with different pressures. One framework that answers each of them.
Series A-C backend teams
AI Infrastructure Engineer
// the pain
Can prototype with LangChain. Cannot trust it in production. Memory leaks, GIL slowdowns, zero OS-level isolation.
// what kernex gives them
Types, predictable memory, and kernel-enforced sandboxing for a system that touches production data.
Compliance-driven engineering orgs
Security-Conscious Platform Team
// the pain
Legal pressure to demonstrate AI subsystems cannot access sensitive paths or exfiltrate credentials. Python offers no answer.
// what kernex gives them
OS-enforced boundaries plus HookRunner audit logs. A story that passes a security review.
Independent builders
Solo Rust Developer
// the pain
Every Rust AI library is a thin API wrapper with no orchestration. Or it requires pulling Python in.
// what kernex gives them
A complete Rust-native solution: provider, memory, pipelines, and CLI. One compiled binary that works offline.
Case studies.
Production deployments, real constraints, real results.
Secure code review pipeline for a fintech team
The security team had rejected every Python-based solution. Kernex's kernel-enforced sandbox got the tool approved where any Python alternative would have been blocked.
3-phase pipeline, under 30 seconds per PR. Audit logs satisfy SOC 2 evidence.
Documentation bot with persistent memory
A solo maintainer with 40k lines of Rust and 15 repos. kx remembered a breaking-change decision from March and surfaced it unprompted in April.
Support question response time dropped from 15 minutes to 4 minutes.
Local AI assistant for air-gapped development
Full offline operation with Ollama. OS-level sandboxing used as a compliance feature. No data leaves the machine, ever.
Approved for use in a classified development environment.
Quick start.
Running in under five minutes.
Add to Cargo.toml
[dependencies]
kernex-runtime = "0.4"
kernex-providers = "0.4"
tokio = { version = "1", features = ["full"] }

or: cargo add kernex-runtime kernex-providers
Write your agent
use kernex_runtime::RuntimeBuilder;
use kernex_providers::factory::ProviderFactory;
use kernex_core::message::Request;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Set ANTHROPIC_API_KEY (or OPENAI_API_KEY, OLLAMA_HOST, etc.)
    let runtime = RuntimeBuilder::from_env().build().await?;
    let provider = ProviderFactory::create("claude", None)?;
    let request = Request::text("user-1", "Summarize the repo at ./src");
    let response = runtime.complete(&provider, &request).await?;
    println!("{}", response.text);
    Ok(())
}

Set your provider and run
ANTHROPIC_API_KEY=sk-... cargo run
OLLAMA_HOST=localhost cargo run
Common questions.
- What is Kernex?
- Kernex is a Rust runtime for building production AI agents. It provides OS-level sandboxing via Seatbelt (macOS) and Landlock (Linux), persistent SQLite-backed memory, support for 11 AI providers (Claude Code CLI, Anthropic, OpenAI, Gemini, Ollama, OpenRouter, Groq, Mistral, DeepSeek, Fireworks, xAI), and TOML-defined multi-agent pipelines. It compiles to a single binary with no runtime dependencies.
- How is Kernex different from LangChain?
- Kernex differs from LangChain in three key ways: (1) OS-level isolation via kernel-enforced Seatbelt and Landlock sandboxes, not available in any Python framework; (2) persistent SQLite-backed memory that survives between sessions; (3) 12ms cold start and 24 MB peak memory vs LangChain's 2,200ms and 310 MB. Kernex requires no Python, virtualenv, or containers.
- What AI providers does Kernex support?
- Kernex ships 11 built-in providers: Claude Code CLI, Anthropic, OpenAI, Google Gemini, Ollama (local/offline), OpenRouter, Groq, Mistral, DeepSeek, Fireworks, and xAI. AWS Bedrock is available as an optional feature. Switching providers is a one-line configuration change. Anthropic users get prompt caching support to reduce token costs on long sessions. Ollama support enables fully offline operation with no data leaving the machine.
- Does Kernex work offline?
- Yes. Kernex runs fully offline with a local Ollama model. No data leaves the machine. This makes it suitable for air-gapped environments, classified development setups, and compliance-constrained organizations.
- What is kx?
- kx is a terminal-native coding assistant built on Kernex. It provides persistent per-project memory stored in SQLite, loads context from SKILL.md files at startup, supports reward-based learning, and runs inside an OS-level sandbox. Install via: cargo install kernex-agent.
- Is Kernex production-ready?
- Kernex is used in production by fintech security teams, defense contractors, and open source maintainers. It has been approved for use in classified development environments, and its audit logs have satisfied SOC 2 evidence requirements. The current stable version is kernex-runtime 0.4.
Want Kernex without writing Rust?
kx is a terminal-native coding assistant built on Kernex. It remembers what you decided, knows your stack, and stays out of your way. Project memory persists across sessions in SQLite.
install
cargo install kernex-agent # installs kx