
E-book: Hands-On LLM Serving and Optimization: Hosting LLMs at Scale

  • Format: EPUB+DRM
  • Publication date: 28-Apr-2026
  • Publisher: O'Reilly Media
  • Language: eng
  • ISBN-13: 9798341621466
  • Price: 63,77 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Large language models (LLMs) are rapidly becoming the backbone of AI-driven applications. Without proper optimization, however, LLMs can be expensive to run, slow to serve, and prone to performance bottlenecks. As the demand for real-time AI applications grows, Hands-On LLM Serving and Optimization offers a comprehensive guide to the complexities of deploying and optimizing LLMs at scale.

In this hands-on book, authors Chi Wang and Peiheng Hu take a real-world approach backed by practical examples and code, and assemble essential strategies for designing robust infrastructures that are equal to the demands of modern AI applications. Whether you're building high-performance AI systems or looking to enhance your knowledge of LLM optimization, this indispensable book will serve as a pillar of your success.





  • Learn the key principles for designing a model-serving system tailored to popular business scenarios
  • Understand the common challenges of hosting LLMs at scale while minimizing costs
  • Pick up practical techniques for optimizing LLM serving performance
  • Build a model-serving system that meets specific business requirements
  • Improve LLM serving throughput and reduce latency
  • Host LLMs in a cost-effective manner, balancing performance and resource efficiency
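As a flavor of the throughput techniques the book's topics point at, one common way to raise LLM serving throughput is to batch waiting requests into a single model call. The sketch below is purely illustrative and not taken from the book; the `MicroBatcher` class, its parameters, and the stand-in "model" are all hypothetical.

```python
from collections import deque

class MicroBatcher:
    """Collect incoming requests into small batches before model execution.

    Batching amortizes per-call overhead across requests, which is one
    standard lever for improving LLM serving throughput. Illustrative
    sketch only; names and defaults are hypothetical.
    """

    def __init__(self, model_fn, max_batch_size=8):
        self.model_fn = model_fn          # callable: list[str] -> list[str]
        self.max_batch_size = max_batch_size
        self.queue = deque()              # pending prompts, FIFO

    def submit(self, prompt):
        """Enqueue a prompt for the next batch."""
        self.queue.append(prompt)

    def flush(self):
        """Drain up to max_batch_size prompts and run them in one model call."""
        batch = []
        while self.queue and len(batch) < self.max_batch_size:
            batch.append(self.queue.popleft())
        return self.model_fn(batch) if batch else []

# Usage with a stand-in "model" that just uppercases its inputs:
batcher = MicroBatcher(model_fn=lambda prompts: [p.upper() for p in prompts])
for p in ["hello", "world", "llm"]:
    batcher.submit(p)
print(batcher.flush())  # → ['HELLO', 'WORLD', 'LLM'] from a single model call
```

Real serving stacks refine this idea considerably (for example with time-based flush deadlines and continuous batching at the token level), but the cost intuition is the same: one larger call is cheaper than many small ones.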
Chi Wang is a director of engineering at Salesforce's Einstein AI group, with over 18 years of experience in artificial intelligence and distributed systems. He leads the development of large-scale AI platforms that enable model training, inference, and optimization for hundreds of internal teams and power AI capabilities used by millions of Salesforce customers. At Salesforce, Chi oversees multiple engineering teams focused on model inference and optimization, and data science platforms. His work spans building multi-tenant AI infrastructure, scaling distributed compute systems, and improving the performance and cost-efficiency of large language model workloads in production. Chi is the lead inventor on 12 patents across areas including model serving and optimization, data access control, and large-scale system design. He is also a passionate technical writer, focused on making complex AI systems practical and accessible for engineers.

Peiheng Hu is an accomplished machine learning engineer with over 10 years of industry experience and expertise in building large-scale AI systems. He currently works at NVIDIA, where he focuses on cutting-edge distributed LLM inference, pushing the boundaries of high-performance inference engines on the latest NVIDIA GPUs. He holds a master of science in computational science and engineering from Harvard University and a bachelor of science in industrial engineering operations research from Georgia Institute of Technology. Previously, Peiheng served as a principal member of technical staff at Salesforce, where he led the development of the company's only unified serving platform, handling thousands of per-tenant models and LLM optimizations for Agentforce that saved millions in AI infrastructure expenses. Prior to that, he was a senior ML engineer at Microsoft Azure, where he architected distributed ML processing solutions for cloud security detection and analytics, handling billions of transactions per hour.