
Normalization Techniques in Deep Learning, 2022 ed. [Paperback]

  • Format: Paperback / softback, XI + 110 pages, height x width: 240x168 mm, weight: 223 g, 26 illustrations (21 in color, 5 black and white)
  • Series: Synthesis Lectures on Computer Vision
  • Publication date: 10-Oct-2023
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3031145976
  • ISBN-13: 9783031145971
  • Paperback
  • Price: 53,33 €*
  • * the price is final, i.e., no further discounts apply
  • Regular price: 62,74 €
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks
  • Free shipping
  • Order lead time 2-4 weeks
This book presents and surveys normalization techniques with a deep analysis of their role in training deep neural networks. In addition, the author provides technical details for designing new normalization methods and network architectures tailored to specific tasks. Normalization methods can improve the training stability, optimization efficiency, and generalization ability of deep neural networks (DNNs) and have become basic components in most state-of-the-art DNN architectures. The author provides guidelines for elaborating, understanding, and applying normalization methods. This book is ideal for readers working on the development of novel deep learning algorithms and/or their applications to solve practical problems in computer vision and machine learning tasks. The book also serves as a resource for researchers, engineers, and students who are new to the field and need to understand and train DNNs.
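As a loose illustration of the kind of technique the book surveys (not an excerpt from the book), the sketch below shows batch normalization over a mini-batch in NumPy; the function name, epsilon value, and array shapes are illustrative assumptions, not taken from the text.

    import numpy as np

    def batch_norm_forward(x, gamma, beta, eps=1e-5):
        # x: mini-batch of activations, shape (N, D)
        # gamma, beta: learnable per-feature scale and shift, shape (D,)
        mean = x.mean(axis=0)                    # per-feature mean over the batch
        var = x.var(axis=0)                      # per-feature variance over the batch
        x_hat = (x - mean) / np.sqrt(var + eps)  # standardize each feature
        return gamma * x_hat + beta              # learnable rescale and shift

    # Illustrative usage with random data (shapes are arbitrary).
    x = np.random.randn(32, 8)
    y = batch_norm_forward(x, np.ones(8), np.zeros(8))

Such per-batch standardization of activations is one of the normalization families analyzed in the chapters listed below.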
  • Introduction
  • Motivation and Overview of Normalization in DNNs
  • A General View of Normalizing Activations
  • A Framework for Normalizing Activations as Functions
  • Multi-Mode and Combinational Normalization
  • BN for More Robust Estimation
  • Normalizing Weights
  • Normalizing Gradients
  • Analysis of Normalization
  • Normalization in Task-specific Applications
  • Summary and Discussion
Lei Huang, Ph.D., is an Associate Professor at Beihang University. His current research interests include normalization techniques for training deep neural networks (DNNs), covering methods, theories, and applications. He also has broad interests in representation and optimization in deep learning theory and in computer vision tasks. Dr. Huang serves as a reviewer for top-tier conferences and journals in machine learning and computer vision.