Run production-grade GenAI workloads by containerizing, serving, and scaling LLMs, agents, and multi-model pipelines with Docker, MCP, and Kubernetes for cloud platforms
Key Features
- Deploy and operate local and edge-friendly LLM inference using Docker Model Runner and an OpenAI-compatible API (see the sketch after this list)
- Orchestrate multi-model and multi-agent workloads with Docker Compose and Kubernetes patterns used by platform teams
- Purchase of the print or Kindle book includes a free PDF eBook
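Because Model Runner exposes an OpenAI-compatible API, a stock client can talk to a locally served model. A minimal sketch, assuming Model Runner's TCP endpoint is enabled on its default port (12434 here, with an `/engines/v1` path) and that a model such as `ai/smollm2` has already been pulled; both the base URL and the model name are assumptions to adapt to your setup:

```python
# Minimal sketch: calling a locally served model through an
# OpenAI-compatible endpoint such as the one Docker Model Runner exposes.
# The base_url (port 12434, /engines/v1 path) and the model name
# "ai/smollm2" are assumptions; adjust them to your environment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed Model Runner endpoint
    api_key="not-needed",  # local endpoint; the client still requires a non-empty key
)

resp = client.chat.completions.create(
    model="ai/smollm2",  # illustrative model, pulled beforehand with `docker model pull`
    messages=[{"role": "user", "content": "Summarize what an OCI artifact is."}],
)
print(resp.choices[0].message.content)
```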
Book Description

The book blends hands-on Docker and Kubernetes fundamentals with practical AI/ML deployment workflows, showing not just how containers work but why they are essential for modern machine learning pipelines. Its unique selling point is its focus on Docker as the central model runner, with end-to-end examples that run from building and containerizing models to orchestrating large-scale inference and training workloads, all backed by real, reproducible demos.

What you will learn
- Containerize GenAI services using Docker images, registries, and Compose-based deployment stacks
- Package and distribute models as OCI artifacts for repeatable builds and controlled promotions across environments
- Choose GGUF quantization levels to balance cost, latency, and accuracy for cloud and hybrid runtimes (see the sketch after this list)
- Serve LLMs via Docker Model Runner with an OpenAI-compatible API suitable for internal platforms
- Integrate tools and data securely using MCP and Docker MCP Gateway with least-privilege access patterns
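To see why the quantization choice matters, here is a back-of-the-envelope sketch of weight memory at different GGUF levels; the bits-per-weight constants are approximate figures for common llama.cpp quant types, and the 8B parameter count is illustrative:

```python
# Back-of-the-envelope sketch: estimating the weight-memory footprint of a
# model at different GGUF quantization levels. The bits-per-weight figures
# are approximations for common llama.cpp quant types, not exact values.
BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,     # ~8.5 bpw once block scales are included
    "Q4_K_M": 4.85,  # ~4.85 bpw, a common quality/size compromise
}

def weight_memory_gib(params_billion: float, quant: str) -> float:
    """Approximate GiB needed for weights alone (excludes KV cache and runtime overhead)."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billion * 1e9 * bits / 8 / 2**30

for q in BITS_PER_WEIGHT:
    print(f"8B model @ {q}: ~{weight_memory_gib(8, q):.1f} GiB")
```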
Who this book is for

Cloud engineers, DevOps engineers, SREs, and platform engineers who need to deploy, operate, and scale GenAI workloads with Docker and Kubernetes in cloud, hybrid, or edge environments. You should be comfortable with the command line and basic service operations; prior Docker or Kubernetes exposure is helpful but not required.