Build resilient guardrails for OpenClaw AI agents on Kubernetes
Learn how to secure AI agents like OpenClaw using containers for isolation, RBAC for access control, secrets management, and full observability on Kubernetes.
Here's all of my longer content from around the web, collected in chronological order.
Learn how Model Context Protocol (MCP) standardizes how AI models discover and call tools, enabling the shift from simple chatbots to reliable agentic AI applications.
A look at the open source AI model landscape in 2025, from DeepSeek and Qwen to gpt-oss and small language models, and how to run them on your own hardware.
Learn how to run AI models locally with RamaLama, an open source project that simplifies running AI models in containers for local inference, serving, and RAG.
Explore the three most useful Model Context Protocol (MCP) servers for developers, including Kubernetes, Context7, and GitHub, along with essential safety guardrails.
Learn how to run OpenAI gpt-oss open-weight models locally and securely using RamaLama, a CLI tool that leverages OCI containers and automatic GPU detection.
Discover llm-d, an open source Kubernetes-native framework for distributed AI inference that improves performance and reduces costs through disaggregation and intelligent scheduling.
Learn how model compression techniques like quantization and sparsity can significantly accelerate inference, reduce costs, and enable efficient AI deployment.
Explore how AI-powered tools like Cursor are changing software development through vibe coding, and what you need to know before diving in.
Explore alignment tuning and retrieval-augmented generation (RAG), two strategies for customizing large language models for enterprise use cases.
Learn how to create and manage bootable containers using Podman Desktop and the bootc extension to build workloads deployable from bare metal to cloud environments.
Learn about the three stages of AI adoption—utilization, adoption, and customization—and how open frameworks make generative AI accessible for enterprises.
Explore IBM’s Granite family of models, with a transparent look at its training datasets and the enhancements made to the Granite 13b models.
Add knowledge to large language models with InstructLab and streamline MLOps using KitOps for efficient model improvement and deployment.
InstructLab is a community-driven project designed to simplify the process of contributing to and enhancing large language models (LLMs) through synthetic data generation.
Learn how to integrate powerful AI code assistants into your IDE using open source tools like Continue, Ollama, and the Granite family of code models.
Discover how Podman AI Lab's Playground feature accelerates your AI application development workflow.
Learn how to get started with InstructLab, an open source project that enables community-driven alignment of large language models on consumer hardware.
Learn how to install, set up, and use Podman Desktop on Windows, allowing you to manage and run containers on your Windows machine.
Deploy and test a containerized application from your desktop to the no-cost Developer Sandbox for Red Hat OpenShift using Podman Desktop's Developer Sandbox extension.
Podman is an alternative to the Docker command-line interface that lets you run standalone, daemonless containers. See examples of how easy it is to use Podman.
Use Buildah to create a working Open Container Initiative container image from scratch, or from a pre-existing Dockerfile, before running it with Podman.
Let's get started with Tekton, an open source, cloud-native CI/CD solution, by creating a real-world CI/CD pipeline.
Are there alternatives to Docker? Yes, and let's take a look at how we can get started with Podman, Buildah, and Skopeo.