
Minimizing Data Movement and Parameter Count Across the Machine Learning Stack: Everything is a Matrix [Hardback]

  • Format: Hardback, 107 pages, height x width: 240x168 mm, 1 illustration, black and white
  • Series: Synthesis Lectures on Computer Science
  • Publication date: 31-May-2026
  • Publisher: Springer Nature Switzerland AG
  • ISBN-10: 3032230993
  • ISBN-13: 9783032230997
  • Hardback
  • Price: 39,60 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 46,59 €
  • You save 15%
  • This book has not yet been published. Delivery takes approximately 3-4 weeks after publication.
  • Free shipping
  • Order lead time 2-4 weeks
This book provides a focused, research-forward guide to making large AI models efficient in practice, presenting an array of novel techniques that reduce memory footprint, accelerate computation, and improve overall hardware utilization. The author demonstrates that substantial efficiency gains can be achieved by rethinking how data is computed, stored, and compressed, with a special focus on matrices, the core computational structure underpinning both scientific computing and neural networks.

Modern AI models run on huge grids of numbers (matrices/tensors), and their speed and affordability depend on how those numbers are arranged and processed on real hardware (GPUs/TPUs/CPUs). This book explains practical methods to skip unnecessary work (structured sparsity), move data efficiently (gather/scatter), and shrink models without losing accuracy (block distillation), so that AI systems can use less memory, less time, and less energy without sacrificing quality. In addition, the book shows how to turn algorithmic ideas into hardware-aware speedups on GPUs/TPUs.

Readers will learn when sparsity pays off, how to schedule irregular workloads, and how to recover accuracy in compressed models. Case studies illustrate end-to-end design choices, evaluation, and pitfalls. The result is a coherent perspective that bridges theory, compilers/runtimes, and real-world deployment.
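To make the "shrink models without losing accuracy" idea concrete, here is a minimal, self-contained sketch (not taken from the book) of low-rank compression via truncated SVD, the technique behind the "Low-Rank Models via SVD" chapter. All names and sizes below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: replace a dense weight matrix W with two thin
# factors A (m x r) and B (r x n), cutting parameter count from m*n
# to r*(m + n) while (here) preserving W exactly, since W has rank r.
rng = np.random.default_rng(0)
m, n, r = 64, 64, 8                                  # assumed shapes
W = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

U, S, Vt = np.linalg.svd(W, full_matrices=False)     # truncated SVD
A = U[:, :r] * S[:r]                                 # m x r factor
B = Vt[:r, :]                                        # r x n factor

full_params = m * n                                  # 4096
lowrank_params = m * r + r * n                       # 1024 (4x fewer)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)  # near machine zero here
print(full_params, lowrank_params, err)
```

For real weight matrices the singular values decay rather than vanish, so choosing r trades reconstruction error against parameter count.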
  • Introduction and Roadmap
  • CAKE: Memory-Aware Block Shaping for GEMM
  • mCAKE: From Matrices to Tensors
  • Rosko: Structured Sparsity for ML Workloads
  • Gather/Scatter for Rank-Sliced Activations
  • Low-Rank Models via SVD
  • Blockwise Knowledge Distillation
  • Privacy-Preserving Split Inference (Edge/Cloud)
  • Conclusion: Design Rules, Evaluation, and Outlook
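The structured-sparsity chapter title above points at the core "skip unnecessary work" idea. As a hedged first-principles sketch (not the book's Rosko implementation), a block-sparse matrix-vector product can store only nonzero blocks and never touch the zero ones; the function and block layout below are assumptions for illustration.

```python
import numpy as np

def block_sparse_matvec(blocks, x, block_size, n_rows):
    """Multiply a block-sparse matrix by x, skipping absent (zero) blocks.

    `blocks` maps (block_row, block_col) -> dense (block_size x block_size)
    array; any key not present is an all-zero block and costs nothing.
    """
    y = np.zeros(n_rows)
    b = block_size
    for (i, j), blk in blocks.items():
        y[i * b:(i + 1) * b] += blk @ x[j * b:(j + 1) * b]
    return y

# Usage: a 4x4 matrix stored as two 2x2 blocks; the two off-diagonal
# blocks are zero and are simply not stored or computed.
blocks = {(0, 0): np.array([[1.0, 2.0], [3.0, 4.0]]),
          (1, 1): np.eye(2)}
x = np.array([1.0, 1.0, 5.0, 7.0])
y = block_sparse_matvec(blocks, x, block_size=2, n_rows=4)
print(y)  # [3. 7. 5. 7.]
```

Keeping the nonzeros in dense blocks (rather than scattered entries) is what keeps such kernels friendly to real hardware: each stored block is still a dense multiply.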
Andrew Sabot, Ph.D., is a Software Engineer working on Machine Learning at Google. He received his Ph.D. (2025) and M.S. (2021) in Computer Science from Harvard University. Dr. Sabot's work focuses on the intersection of hardware-aware kernels, model compression, and transformer inference acceleration to enable the sustainable deployment of state-of-the-art AI.