<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Kernex Blog</title>
    <description>Technical posts on Rust AI agents, OS-level sandboxing, and production deployments.</description>
    <link>https://kernex.dev/</link>
    <language>en-us</language>
    <item>
      <title>Benchmark methodology: how we measured cold start, memory, and throughput</title>
      <link>https://kernex.dev/blog/benchmark-methodology/</link>
      <guid isPermaLink="true">https://kernex.dev/blog/benchmark-methodology/</guid>
      <description>The numbers on the homepage need a paper trail. This post is the methodology companion to "Why we built an AI agent framework in Rust" (/blog/why-rust-for-ai-agents). Here is the exact setup, what each metric measures, what the tests do not cover, and where to find the raw results.</description>
      <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Why we built an AI agent framework in Rust</title>
      <link>https://kernex.dev/blog/why-rust-for-ai-agents/</link>
      <guid isPermaLink="true">https://kernex.dev/blog/why-rust-for-ai-agents/</guid>
      <description>Python is the default for AI tooling. We chose Rust. Here is the full reasoning, including the parts that made us uncomfortable.</description>
      <pubDate>Thu, 02 Apr 2026 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>