The requirement
The development environment is classified. No external network connections. No cloud services. No telemetry. The AI assistant either runs entirely on-device or it does not run at all.
This eliminates every cloud-based AI coding tool, every hosted API, and every framework that phones home for licensing or analytics.
How Kernex runs fully offline
Kernex supports Ollama as a provider via --provider ollama. Ollama serves models such as Llama, Mistral, and CodeLlama entirely on-device, and the kx CLI connects to it at localhost:11434 with no external network calls.
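A quick way to confirm the path is local is to hit the same endpoint directly with nothing but the Python standard library. The model name below is an assumption; use whatever ollama list reports on the machine.

```python
# Minimal offline smoke test: talk to the localhost endpoint kx uses.
# Assumes Ollama is running and a model has been pulled ("codellama"
# here is an assumption; substitute what `ollama list` shows).
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:11434/api/generate",
    data=json.dumps({
        "model": "codellama",
        "prompt": "Summarize what a Landlock ruleset is in one sentence.",
        "stream": False,  # return one JSON object instead of a stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req, timeout=120) as resp:
    body = json.load(resp)

print(body["response"])
```

If this round-trips with the network cable unplugged, the inference path is provably local.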
The full stack is local:
- Ollama serving the model on-device
- kx running in the project directory
- kernex-sandbox enforcing that the agent process has no network access outside localhost
- kernex-memory storing conversation history in a local SQLite file
Network access for the agent process is declared at startup. In this deployment, the allowlist contains only 127.0.0.1:11434. The OS enforces it. Outbound connections to anything else fail at the syscall level.
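kernex-sandbox's own source isn't reproduced here, but the mechanism is small enough to sketch. The following is a minimal stand-in, assuming Linux 6.7+ (Landlock ABI v4) on x86_64: it builds a Landlock ruleset that handles outbound TCP connect, allowlists only port 11434, and restricts the current process. One caveat: Landlock network rules are scoped by TCP port, not by address, so pinning to 127.0.0.1 specifically takes an additional layer such as a network namespace.

```python
# Sketch of the enforcement mechanism (not kernex-sandbox's actual code):
# a Landlock ruleset that governs outbound TCP connect and allowlists only
# port 11434. Requires Linux >= 6.7 (Landlock ABI v4); syscall numbers are
# x86_64-specific.
import ctypes
import ctypes.util
import socket

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

SYS_landlock_create_ruleset = 444   # x86_64 syscall numbers
SYS_landlock_add_rule = 445
SYS_landlock_restrict_self = 446
LANDLOCK_ACCESS_NET_CONNECT_TCP = 1 << 1
LANDLOCK_RULE_NET_PORT = 2
PR_SET_NO_NEW_PRIVS = 38

class RulesetAttr(ctypes.Structure):
    _fields_ = [("handled_access_fs", ctypes.c_uint64),
                ("handled_access_net", ctypes.c_uint64)]

class NetPortAttr(ctypes.Structure):
    _fields_ = [("allowed_access", ctypes.c_uint64),
                ("port", ctypes.c_uint64)]

# Declare which access types this policy governs: outbound TCP connect.
# Anything governed but not explicitly allowed below is denied.
attr = RulesetAttr(0, LANDLOCK_ACCESS_NET_CONNECT_TCP)
fd = libc.syscall(SYS_landlock_create_ruleset,
                  ctypes.byref(attr), ctypes.sizeof(attr), 0)
assert fd >= 0, "Landlock ABI v4 not available on this kernel"

# The entire allowlist: connect() to TCP port 11434 (Ollama).
rule = NetPortAttr(LANDLOCK_ACCESS_NET_CONNECT_TCP, 11434)
assert libc.syscall(SYS_landlock_add_rule, fd,
                    LANDLOCK_RULE_NET_PORT, ctypes.byref(rule), 0) == 0

# Enforce for this process and every child it spawns; irrevocable.
libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
assert libc.syscall(SYS_landlock_restrict_self, fd, 0) == 0

def try_connect(port):
    try:
        socket.create_connection(("127.0.0.1", port), timeout=2).close()
        print(f"port {port}: connected")
    except PermissionError:
        print(f"port {port}: denied by the kernel (EACCES)")
    except OSError as e:
        print(f"port {port}: allowed by policy, but {e}")

try_connect(11434)  # allowlisted; succeeds if Ollama is listening
try_connect(443)    # everything else fails at the connect() syscall
```

Once landlock_restrict_self returns, no userspace action can widen the policy; that is what "fails at the syscall level" means in practice.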
Sandboxing as a compliance argument
The security approval process required demonstrating that the tool could not exfiltrate data even if the model were prompted to try. The Landlock profile provided that demonstration concretely: the policy is inspectable, the enforcement happens in the kernel, and the restriction applies even to the model's tool calls.
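The actual review procedure isn't published, but the demonstration reduces to an executable negative test: run from inside the sandboxed process, every destination off the allowlist must fail at the syscall, regardless of what a tool call attempts. A sketch, with illustrative hosts and ports:

```python
# Negative test backing the compliance claim. Run inside the sandboxed
# agent process; all destinations below are illustrative. Under the
# Landlock profile, each connect() is denied in the kernel with EACCES.
import socket

EXFIL_ATTEMPTS = [
    ("93.184.216.34", 443),  # arbitrary external host, HTTPS
    ("10.0.0.1", 80),        # internal network, HTTP
    ("127.0.0.1", 22),       # even localhost, on an off-allowlist port
]

for host, port in EXFIL_ATTEMPTS:
    try:
        socket.create_connection((host, port), timeout=2)
        raise SystemExit(f"FAIL: {host}:{port} was reachable")
    except PermissionError:
        print(f"OK: {host}:{port} denied by the kernel")
```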
The approving officer’s comment was that this was the first AI tool reviewed where the sandboxing was a property of the tool rather than a claim about the vendor.
Operational details
Models run on a local GPU. Response latency is higher than with cloud alternatives, typically 3-8 seconds per turn depending on model size and hardware. For the use case (code review, documentation, architecture questions), this is acceptable.
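To size that expectation for specific hardware, per-turn latency can be measured directly against the local endpoint. A rough sketch, again with the model name assumed:

```python
# Rough per-turn latency measurement against the local model, to check
# the 3-8 s expectation on your own hardware. Model name is an assumption.
import json
import time
import urllib.request

def turn(prompt, model="codellama"):
    req = urllib.request.Request(
        "http://127.0.0.1:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req, timeout=300) as resp:
        json.load(resp)  # wait for the full completion
    return time.perf_counter() - start

samples = [turn("Explain this in one sentence: SELECT 1;") for _ in range(3)]
print(f"per-turn latency: {min(samples):.1f}-{max(samples):.1f} s")
```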
Memory persistence works identically to the cloud-connected configuration. Facts and conversation history accumulate in ~/.kx/projects/{project}/.
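Because memory is an ordinary SQLite file on local disk, it can be audited with the standard library. The sketch below globs for the database rather than assuming a filename, since only the directory layout is documented:

```python
# Audit what the assistant has persisted: find SQLite files under the
# documented memory directory and list their tables, read-only. The
# filename pattern is a guess; only ~/.kx/projects/{project}/ is documented.
import glob
import os
import sqlite3

pattern = os.path.expanduser("~/.kx/projects/*/*")
for path in glob.glob(pattern):
    if not path.endswith((".db", ".sqlite", ".sqlite3")):
        continue
    con = sqlite3.connect(f"file:{path}?mode=ro", uri=True)  # read-only
    tables = [row[0] for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    print(path, "->", tables)
    con.close()
```

Everything the tool remembers is inspectable the same way any other local file is, which matters in an environment where data at rest is itself subject to review.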