
Deep Learning Generalization: Theoretical Foundations and Practical Strategies [Paperback]

  • Format: Paperback / softback, 220 pages, height x width: 234x156 mm, 62 line drawings, black and white; 62 illustrations, black and white
  • Publication date: 12-Sep-2025
  • Publisher: Chapman & Hall/CRC
  • ISBN-10: 1032841893
  • ISBN-13: 9781032841892
  • Paperback
  • Price: 68,94 €

This book provides a comprehensive exploration of generalization in deep learning, focusing on both theoretical foundations and practical strategies. It delves deeply into how machine learning models, particularly deep neural networks, achieve robust performance on unseen data. Key topics include balancing model complexity, addressing overfitting and underfitting, and understanding modern phenomena such as the double descent curve and implicit regularization.

The book offers a holistic perspective by addressing the four critical components of model training: data, model architecture, objective functions, and optimization processes. It combines mathematical rigor with hands-on guidance, introducing practical implementation techniques using PyTorch to bridge the gap between theory and real-world applications. For instance, the book highlights how regularized deep learning models not only achieve better predictive performance but also occupy a more compact and efficient parameter space. Structured to support a progressive learning curve, the content spans from foundational concepts such as statistical learning theory to advanced topics such as Neural Tangent Kernels and overparameterization paradoxes.
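To give a flavor of the kind of PyTorch-based regularization technique referred to above, the following is a minimal sketch, not taken from the book: L2 regularization applied through the optimizer's weight-decay term. The toy model, dummy data, and hyperparameters are placeholder assumptions for illustration only.

import torch
import torch.nn as nn

# Illustrative sketch (not from the book): L2 regularization via weight decay.
# Model architecture, data, and hyperparameters are placeholder assumptions.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(128, 20)  # dummy inputs
y = torch.randn(128, 1)   # dummy targets

for _ in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Weight decay shrinks parameter norms during each update, nudging the fit
# toward the more compact parameter space described above.
print("total parameter norm:", sum(p.norm().item() for p in model.parameters()))

Here the weight-decay coefficient penalizes large weights at every optimizer step, which is one concrete way regularization trades a small amount of training-set fit for better behavior on unseen data.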

By synthesizing classical and modern views of generalization, the book equips readers to develop a nuanced understanding of key concepts while mastering practical applications.

For academics, the book serves as a definitive resource to solidify theoretical knowledge and explore cutting-edge research directions. For industry professionals, it provides actionable insights to enhance model performance systematically. Whether you're a beginner seeking foundational understanding or a practitioner exploring advanced methodologies, this book offers an indispensable guide to achieving robust generalization in deep learning.




1. Unveiling Generalization in Deep Learning
2. Introduction to Statistical Learning Theory
3. Classical Perspectives on Generalization
4. Modern Perspectives on Generalization
5. Fundamentals of Deep Neural Networks
6. A Concluding Perspective

Liu Peng is currently an Assistant Professor of Quantitative Finance at Singapore Management University (SMU). His research interests include generalization in deep learning, sparse estimation, and Bayesian optimization.