Distributed AI Systems: A practical guide to building scalable training, inference, and serving systems for production AI [Paperback]

  • Format: Paperback / softback, height x width: 235x191 mm
  • Publication date: 29-Jun-2026
  • Publisher: Packt Publishing Limited
  • ISBN-10: 1807301710
  • ISBN-13: 9781807301712
  • Paperback
  • Price: 69,29 €
  • This book has not yet been published. Delivery is expected to take approximately 3-4 weeks after release.
Learn distributed AI through hands-on experience with training frameworks, inference engines, and orchestration tools to build production-ready training, inference and serving systems for modern large-scale AI.

Key Features

  • Understand GPU hardware, high-speed interconnects, and parallelism strategies
  • Learn distributed training with resource-optimized techniques
  • Deploy high-performance inference with advanced optimization and memory management
  • Build production serving stacks with job schedulers, orchestration, and observability
  • Purchase of the print or Kindle book includes a free PDF eBook

Book Description

As AI models grow to billions and trillions of parameters, distributed systems are essential for training and serving them. Many resources cover fragments of this domain, but none provide a full path from distributed training to inference and production deployment. This book fills that gap with practical, production-focused examples.

It starts with GPU and memory estimation, data preparation, and an overview of GPU architecture, interconnects, and core parallelism strategies. You'll learn training techniques including data parallelism for single- and multi-node setups, parameter sharding for memory-efficient scaling, and methods to reduce memory usage in large models.

The next section covers distributed inference and deployment. You'll build high-performance systems using optimized attention, caching, operator fusion, and router-based designs. You'll deploy on schedulers and container platforms with GPU-aware orchestration, and assemble production stacks emphasizing reliability, scalability, and observability.

The final section covers benchmarking, performance tuning, and trends such as MoE models, edge-cloud coordination, and advanced parallelism. Each chapter includes tested code and debugging guidance. By the end, you'll be able to build distributed AI systems that scale from a single GPU to large clusters.

What you will learn

  • Estimate memory and compute requirements for training and inference
  • Understand GPU hardware, interconnects, and parallelism strategies
  • Implement distributed training with parallel and sharded techniques
  • Build production inference systems with batching and memory management
  • Deploy via cluster orchestration with optimized GPU scheduling
  • Create production serving stacks with routing and observability
  • Benchmark distributed systems using industry-standard methodologies
  • Explore emerging model trends, distribution strategies, and future paths
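To give a flavor of the resource-estimation skills listed above, here is a back-of-the-envelope sketch of the kind of sizing arithmetic involved. The model shapes and the ~16-bytes-per-parameter figure are common rules of thumb (fp16/bf16 weights and gradients plus fp32 Adam state), not figures taken from the book:

```python
def training_memory_gb(n_params: float, dtype_bytes: int = 2) -> float:
    """Rough per-GPU memory for mixed-precision Adam training.

    Counts fp16/bf16 weights and gradients plus fp32 Adam state
    (master weights, momentum, variance) -- roughly 16 bytes per
    parameter. Activations and framework overhead are excluded.
    """
    weights = n_params * dtype_bytes       # fp16/bf16 parameters
    grads = n_params * dtype_bytes         # fp16/bf16 gradients
    adam_state = n_params * (4 + 4 + 4)    # fp32 master copy + two moments
    return (weights + grads + adam_state) / 1e9


def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                seq_len: int, batch: int, dtype_bytes: int = 2) -> float:
    """Key/value-cache size for a decoder-only transformer at inference.

    The factor of 2 covers keys and values; one entry is stored per
    layer, KV head, and token position.
    """
    return (2 * n_layers * n_kv_heads * head_dim
            * seq_len * batch * dtype_bytes) / 1e9


# A hypothetical 7B-parameter model needs ~112 GB of training state
# before activations -- beyond any single 80 GB GPU, which is why
# parameter-sharding schemes such as FSDP and ZeRO exist.
print(f"training: {training_memory_gb(7e9):.0f} GB")   # -> training: 112 GB

# Illustrative 32-layer, 32-KV-head, head_dim-128 model serving one
# 4096-token sequence in fp16.
print(f"kv cache: {kv_cache_gb(32, 32, 128, 4096, 1):.1f} GB")  # -> kv cache: 2.1 GB
```

Estimates like these determine whether a workload fits on one GPU or must be sharded, which is the starting point of the book's first section.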

Who this book is for

This book is designed for ML engineers, AI researchers, and DevOps professionals who need to train or serve large AI models at scale. Platform engineers, HPC cluster administrators, and cloud architects will also find it valuable for advancing their skill sets. A basic understanding of Python and PyTorch is required to get started. Prior experience with distributed systems, cluster schedulers, or container orchestration is helpful but not necessary: the book introduces these concepts from the ground up, beginning with resource estimation, data preparation, and hardware fundamentals.
Table of Contents

Introduction to Modern Distributed AI
GPU Hardware, Networking, and Parallelism Strategies
Distributed Training with PyTorch DDP
Scaling with Fully Sharded Data Parallel (FSDP)
DeepSpeed and ZeRO Optimization
Distributed Inference Fundamentals and vLLM
SGLang and Advanced Inference Architectures
Kubernetes for AI Workloads
Production LLM Serving Stack
Distributed Benchmarking and Performance Optimization
About the Author

Henry (Fuheng) Wu is a Principal Machine Learning Tech Lead at Oracle's Generative AI organization, specializing in distributed training, large-scale inference, and GPU systems. He has delivered core components of Oracle's Vision and Document Understanding AI services, and co-authored a Microsoft and Oracle blog on high-performance deep learning. He has contributed to open-source projects including SGLang, genai-bench, pyLLaMA, chatLLaMA, and Oracle's HiQ observability system. With hands-on experience across PyTorch Distributed, DeepSpeed, Kubernetes GPU clusters, and production LLM serving, he focuses on building practical, scalable AI systems used in real-world enterprise workloads.