
E-book: Deep Learning at Scale: At the Intersection of Hardware, Software, and Data

  • Length: 448 pages
  • Publication date: 18-Jun-2024
  • Publisher: O'Reilly Media
  • Language: English
  • ISBN-13: 9781098145248
  • Format: EPUB+DRM
  • Price: €56.15*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Bringing a deep-learning project into production at scale is quite challenging. To successfully scale your project, a foundational understanding of full stack deep learning, including the knowledge that lies at the intersection of hardware, software, data, and algorithms, is required.

This book illustrates complex concepts of full stack deep learning and reinforces them through hands-on exercises to arm you with tools and techniques to scale your project. A scaling effort is only beneficial when it's effective and efficient. To that end, this guide explains the intricate concepts and techniques that will help you scale effectively and efficiently.

You'll gain a thorough understanding of:

  • How data flows through the deep-learning network and the role the computation graphs play in building your model
  • How accelerated computing speeds up your training and how best you can utilize the resources at your disposal
  • How to train your model using distributed training paradigms, i.e., data, model, and pipeline parallelism
  • How to leverage PyTorch ecosystems in conjunction with NVIDIA libraries and Triton to scale your model training
  • How to debug, monitor, and investigate the bottlenecks that slow down your model training
  • How to expedite the training lifecycle and streamline your feedback loop to iterate model development
  • Data tricks and techniques, and how to apply them to scale your model training
  • How to select the right tools and techniques for your deep-learning project
  • Options for managing the compute infrastructure when running at scale
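One of the distributed-training paradigms listed above, data parallelism, can be illustrated with a minimal sketch: each worker holds a full copy of the model parameters, computes gradients on its own shard of the batch, and the gradients are then averaged (an "all-reduce") so every replica applies the identical update. The toy linear model, data, and function names below are hypothetical; plain Python stands in for a real framework such as PyTorch:

```python
def grad_mse(w, xs, ys):
    # Gradient of mean squared error for a 1-D linear model y = w * x.
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_step(w, batch_x, batch_y, num_workers, lr=0.1):
    # Shard the global batch across workers (round-robin split).
    shards = [(batch_x[i::num_workers], batch_y[i::num_workers])
              for i in range(num_workers)]
    # Each worker computes a local gradient on its own shard.
    local_grads = [grad_mse(w, xs, ys) for xs, ys in shards]
    # "All-reduce": average the local gradients across workers.
    g = sum(local_grads) / num_workers
    # Every replica applies the same update, keeping weights in sync.
    return w - lr * g

# Hypothetical data for the target relation y = 2 * x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, xs, ys, num_workers=2)
print(round(w, 3))  # converges toward 2.0
```

In a real PyTorch setup the same averaging is performed by collective communication (e.g. an all-reduce over process groups) rather than a local loop, but the invariant is the one shown here: after each step, all replicas hold identical weights.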

Suneeta holds a Ph.D. in applied science and has a background in computer science engineering. She has worked extensively on distributed and scalable computing and machine learning at IBM Software Labs, Expedita, USyd, and Nearmap. She currently leads the development of Nearmap's AI model system, which produces high-quality AI datasets, and builds and manages a system that trains deep learning models efficiently. She is an active community member and speaker and enjoys learning and mentoring. She has presented at several top technical and academic conferences, including SPIE, KubeCon, Knowledge Graph Conference, RE-Work, Kafka Summit, AWS Events, and YOW DATA. She holds patents granted by the USPTO, serves as a peer reviewer for journals, and has published papers in deep learning. She also writes for O'Reilly and the Towards Data Science blog and maintains her website at http://suneeta-mall.github.io