
Large Language Model Recipes: A Hands-On Guide to Fine-Tuning, Optimization, Deployment, and Real-World Applications [Paperback]

  • Format: Paperback / softback, 219 pages, height x width: 254x178 mm, 69 color illustrations; 1 black-and-white illustration
  • Publication date: 13-Jun-2026
  • Publisher: Apress
  • ISBN-13: 9798868826061
  • Paperback
  • Price: 58,13 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 68,39 €
  • You save 15%
  • This book has not yet been published. Delivery takes approximately 3-4 weeks after the book's release.
  • Free shipping
  • Delivery time: 2-4 weeks
The Large Language Model Recipes book is a comprehensive, practical guide designed to help developers, data scientists, and AI engineers navigate the rapidly evolving landscape of Large Language Models (LLMs). Moving beyond theory, this book provides a hands-on, recipe-based approach to mastering the entire LLM lifecycle, from selecting the right open-source model to fine-tuning it on custom data and deploying it for production at scale.



Starting with the fundamentals of setting up a robust development environment, the book guides you through the critical decisions of model selection (Llama, Mistral, Falcon) and data preparation. It offers deep dives into advanced training techniques, including full fine-tuning, instruction tuning, and parameter-efficient methods like LoRA and QLoRA that make training accessible on consumer hardware.
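To give a flavor of why methods like LoRA make fine-tuning tractable on consumer hardware, here is a minimal NumPy sketch of the low-rank update idea (our illustration, not code from the book): the pretrained weight matrix stays frozen and only a small rank-r correction is trained.

```python
import numpy as np

# Illustrative LoRA sketch: instead of updating a frozen weight matrix W,
# train a low-rank correction B @ A with rank r much smaller than the layer dims.
rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4                  # full dims vs. low rank
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable "down" projection
B = np.zeros((d_out, r))                    # trainable "up" projection, zero-init

alpha = 8.0                                 # LoRA scaling hyperparameter
x = rng.standard_normal(d_in)

# Because B starts at zero, the adapted layer initially matches the base layer.
base = W @ x
adapted = (W + (alpha / r) * B @ A) @ x
print(np.allclose(base, adapted))           # True

# Trainable parameters shrink from d_out*d_in to r*(d_in + d_out).
print(W.size, A.size + B.size)              # 4096 512
```

In real training (e.g. with Hugging Face PEFT, which the book's appendix covers) only A and B receive gradients, which is what keeps memory use within reach of a single consumer GPU.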



The book doesn't stop at training. It tackles the crucial "last mile" of AI development: deployment and optimization. You will learn how to shrink models with quantization, serve them with high-throughput engines like vLLM and TGI, and evaluate their performance using industry-standard benchmarks. Finally, it explores cutting-edge frontiers, including Retrieval-Augmented Generation (RAG) for grounding models in real-time data, building multimodal vision-language applications, and designing autonomous AI agents.
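As a taste of the quantization chapter's subject matter, a hedged sketch of symmetric 8-bit weight quantization in NumPy (production tools such as bitsandbytes or GPTQ are far more sophisticated, e.g. per-group 4-bit schemes with calibration):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric int8 quantization: map the largest |weight| to 127."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(4096).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes, w.nbytes)      # 4096 vs 16384 bytes: a 4x size reduction
# Rounding error per weight is bounded by half a quantization step.
print(float(np.abs(w - w_hat).max()) <= scale / 2 + 1e-6)
```

The trade-off the book explores is exactly this one: smaller, faster models in exchange for a bounded loss of precision.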



Whether you are building a specialized chatbot, a code assistant, or a complex reasoning agent, this book provides the tested recipes and code you need to develop efficient, scalable, and robust AI solutions today.



 



What you will learn:







  • Design production-ready LLM systems using the Feature/Training/Inference (FTI) framework
  • Apply advanced fine-tuning methods, including LoRA and QLoRA, for efficient model adaptation
  • Build and optimize RAG pipelines with effective retrieval strategies and vector databases
  • Deploy optimized LLMs using quantization techniques and scalable inference frameworks
  • Develop multimodal and agentic AI applications with vision-language models and autonomous agents
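The RAG pipelines mentioned above rest on one simple retrieval step, sketched here with toy vectors standing in for a real embedding model and vector database (our illustration, not the book's code): rank documents by cosine similarity to the query embedding and keep the top-k as context.

```python
import numpy as np

def cosine_top_k(query: np.ndarray, docs: np.ndarray, k: int = 2):
    """Return indices and scores of the k documents most similar to the query."""
    q = query / np.linalg.norm(query)
    d = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    scores = d @ q                         # cosine similarity per document
    top = np.argsort(scores)[::-1][:k]     # indices of the k best matches
    return top, scores[top]

# Three toy "document embeddings"; document 1 points the same way as the query.
docs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.7, 0.7, 0.0]])
query = np.array([0.0, 1.0, 0.1])

idx, scores = cosine_top_k(query, docs, k=2)
print(idx)    # document 1 first, then document 2
```

A production pipeline swaps the toy arrays for learned embeddings and an indexed vector store, but the ranking logic is the same.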



 



Who this book is for:



This book is ideal for software developers, machine learning engineers, data scientists, and technical researchers who want to move beyond using API endpoints and start building, fine-tuning, and deploying their own LLM solutions.
Table of contents:

Part I: Setting Up Your AI Culinary Station.
Chapter 1: An Introduction.
Chapter 2: Environment Setup.
Part II: Sourcing & Preparing Ingredients: Models & Data.
Chapter 3: Open Source vs. Closed Source.
Chapter 4: Data Handling & Tokenization.
Part III: Mastering Core Techniques: Prompting & Fine-Tuning.
Chapter 5: Prompt Engineering Mastery.
Chapter 6: LLM Full Fine-Tuning.
Chapter 7: Precision Seasoning: Instruction Fine-Tuning.
Chapter 8: Parameter-Efficient Fine-Tuning (PEFT).
Chapter 9: Augmenting with Synthetic Data.
Part IV: Optimization, Serving & Evaluation.
Chapter 10: Making Models Leaner: Quantization Techniques.
Chapter 11: LLM Deployment Strategies.
Chapter 12: Evaluation Metrics & Benchmarks.
Part V: Advanced Recipes & Future Flavors.
Chapter 13: Retrieval-Augmented Generation (RAG).
Chapter 14: Exploring Multimodal Models.
Chapter 15: Future Trends & Responsible AI.
Appendix A: Glossary of LLM Terminology.
Appendix B: Tooling Cheat Sheets (Hugging Face CLI & Libraries, PyTorch Essentials, LangChain Basics).
Appendix C: Curated List of Datasets, Model Hubs, and Further Reading.
Bharath Kumar Bolla is a highly accomplished Data Science leader with over 15 years of experience, specializing in AI, NLP, and Deep Learning for the past decade. He holds an M.S. in Data Science (The University of Arizona) and an Executive MBA (Product Management).



As an Associate Director at Novartis, he currently drives strategic MLOps initiatives, designing and scaling automated pipelines and deploying cutting-edge Generative AI solutions across multiple European markets. His commercial impact is notable: he architected a Salesforce recommendation system (5-10% conversion boost) and developed an ML pricing optimization product that generated $1.2M in additional revenue at Verizon.



Beyond corporate leadership, Bharath is a prolific academic with over 30 peer-reviewed publications and multiple best paper awards. He is recognized as a "40 Under 40 Data Scientist" (2022) and an "AI Changemaker," and actively supports the community by reviewing AI books and mentoring students.



Kalpa Subbaiah is a leading Data Scientist and AI expert with over 17 years of experience, including more than a decade in Data Science and Machine Learning. She holds a Master's degree in Machine Learning and Artificial Intelligence from Liverpool John Moores University and specializes in building end-to-end AI solutions across Azure, Databricks, and AWS.



Kalpa is a GenAI expert, building production-grade LLM and RAG applications, fine-tuning models via Hugging Face, and architecting scalable multi-agent AI systems with robust evaluation frameworks. Her strong experience in finance and manufacturing drives projects in financial AI platforms, smart city solutions, and enterprise analytics, leveraging capabilities such as document intelligence and object detection.



As Vice President and Lead Data Scientist at JPMorgan Chase & Co., she drives large-scale AI/GenAI transformation across the enterprise. Highly certified (Azure Data Scientist/AI Engineer, AWS ML Specialist), she actively delivers global corporate and academic training as a technical trainer and mentor.



 



Sashi Kiran Kaata is a seasoned cloud data engineer, researcher, and technology leader with over 10 years of experience architecting large-scale data platforms and real-time analytics solutions. Holding a Master's degree in Information Science and Technology, he has core expertise in AWS, Snowflake, Databricks, and modern streaming frameworks. At First Citizens Bank, he led enterprise data modernization, designing resilient ingestion and governance architectures, including Data Movement Controls (DMC), which significantly improved platform performance, reliability, and regulatory compliance.



Sashi blends practical engineering with research-driven innovation, actively contributing to the community as a technical conference speaker on distributed systems and scalable cloud architectures. His work extends into blockchain-based workflow modernization, sustainable AI pipelines, and adaptive ETL systems designed to support the next generation of intelligent data platforms. He is a prolific author of numerous peer-reviewed publications on critical areas, including cloud cost optimization, self-healing systems, Green AI, and MicroLLMs.