Domain-Specific Small Language Models [Hardback]

  • Format: Hardback, 347 pages, weight: 358 g
  • Publication date: 06-Apr-2026
  • Publisher: Manning Publications
  • ISBN-10: 1633436705
  • ISBN-13: 9781633436701
Bigger isn’t always better. Train and tune highly focused language models optimized for domain-specific tasks.

When you need a language model to respond accurately and quickly about a specific field of knowledge, the sprawling capacity of an LLM may hurt more than it helps. Domain-Specific Small Language Models teaches you to build generative AI models optimized for specific fields.

In Domain-Specific Small Language Models you’ll discover:

 • Model sizing best practices
 • Open source libraries, frameworks, utilities, and runtimes
 • Fine-tuning techniques for custom datasets
 • Hugging Face’s libraries for SLMs
 • Running SLMs on commodity hardware
 • Model optimization and quantization
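
To make the quantization bullet concrete, here is a minimal sketch of symmetric int8 weight quantization, the kind of optimization used to shrink models for commodity hardware. All names and values here are illustrative, not code from the book.

```python
# Symmetric int8 quantization: map float weights onto [-128, 127]
# using a single scale factor, then recover approximate floats.
# quantize_int8 / dequantize are hypothetical helper names.

def quantize_int8(weights):
    """Map float weights to int8 values with one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    return [max(-128, min(127, round(w / scale))) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.05, 0.90]
q, scale = quantize_int8(weights)      # q == [42, -127, 5, 90]
restored = dequantize(q, scale)        # each value within one scale step
```

Storing one byte per weight instead of four is the basic saving; the book's later chapters cover more sophisticated schemes than this single-scale toy.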

Perfect for cost- or hardware-constrained environments, Small Language Models (SLMs) train on domain-specific data for high-quality results in specific tasks. In Domain-Specific Small Language Models you’ll develop SLMs that can generate everything from Python code to protein structures and antibody sequences—all on commodity hardware.

About the book

Domain-Specific Small Language Models teaches you how to create language models that deliver the power of LLMs for specific areas of knowledge. You’ll learn to minimize the computational horsepower your models require while keeping response times and output quality high. You’ll appreciate the clear explanations of complex technical concepts alongside working code samples you can run and replicate on your laptop. Plus, you’ll learn to develop and deliver RAG systems and AI agents that rely solely on SLMs, without the costs of foundation model access.
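
The RAG systems mentioned above pair a retriever with a local SLM. As a toy illustration of the retrieval step, here is a word-overlap ranker that picks context for a prompt; the corpus, scoring, and prompt template are illustrative assumptions, not code from the book.

```python
# Toy retrieval step for an SLM-only RAG pipeline: score documents by
# word overlap with the query and prepend the best match to the prompt.

def retrieve(query, docs):
    """Return the document sharing the most lowercase words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

docs = [
    "ONNX Runtime executes quantized models on CPUs.",
    "Protein structures can be generated by fine-tuned SLMs.",
]
question = "How do I run a quantized model on a CPU?"
context = retrieve(question, docs)
prompt = f"Context: {context}\nQuestion: {question}"
```

A production system would swap the word-overlap score for embedding similarity and feed `prompt` to a locally served SLM, but the shape of the pipeline is the same.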

About the reader

For machine learning engineers familiar with Python.

About the author

Guglielmo Iozzia is Director of ML/AI and Applied Mathematics at MSD. He studied Electronic and Biomedical Engineering at the University of Bologna and has an extensive background in software and ML/AI engineering applied to real-life use cases across industries such as biotech manufacturing, healthcare, cloud operations, and cybersecurity.

Get a free eBook (PDF or ePub) from Manning as well as access to the online liveBook format (and its AI assistant that will answer your questions in any language) when you purchase the print book.

Reviews

The balance between theoretical concepts and hands-on application is excellent, while the specialized domain examples in chemistry and code generation provide unique insight not easily found elsewhere. 

Samuel Lawrence, Software Developer 

Excellent collection of tips and techniques for optimizing an LLM to run on your own hardware. 

Andrew R. Freed, Distinguished Engineer, IBM 

PART 1: FIRST STEPS 

1 LARGE LANGUAGE MODELS 

PART 2: CORE DOMAIN-SPECIFIC LLMS 

2 TUNING FOR A SPECIFIC DOMAIN 

3 RUNNING INFERENCE 

4 EXPLORING ONNX 

5 QUANTIZING FOR YOUR PRODUCTION ENVIRONMENT 

PART 3: REAL-WORLD USE CASES 

6 GENERATING PYTHON CODE 

7 GENERATING PROTEIN STRUCTURES 

PART 4: ADVANCED CONCEPTS 

8 ADVANCED QUANTIZATION TECHNIQUES 

9 PROFILING INSIGHTS 

10 DEPLOYMENT AND SERVING 

11 RUNNING ON YOUR LAPTOP 

12 CREATING END-TO-END LLM APPLICATIONS 

13 ADVANCED COMPONENTS FOR LLM APPLICATIONS 

14 TEST-TIME COMPUTE AND SMALL LANGUAGE MODELS 