E-book: Introduction to Deep Learning

3.85/5 (39 ratings by Goodreads)
Eugene Charniak (Brown University)
  • Format: EPUB+DRM
  • Series: The MIT Press
  • Pub. Date: 19-Feb-2019
  • Publisher: MIT Press
  • Language: English
  • ISBN-13: 9780262351645
  • Price: 33,05 €* (* the price is final, i.e. no additional discount will apply)
  • This ebook is for personal use only. E-books are non-refundable.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has supplied this book in encrypted form, which means you need to install free software in order to unlock and read it. To read this e-book you need to create an Adobe ID. The ebook can be read and downloaded on up to 6 devices (single user with the same Adobe ID).

    Required software
    To read this ebook on a mobile device (phone or tablet) you'll need to install this free app: PocketBook Reader (iOS / Android).

    To download and read this eBook on a PC or Mac you need Adobe Digital Editions (This is a free app specially developed for eBooks. It's not the same as Adobe Reader, which you probably already have on your computer.)

    You can't read this ebook on an Amazon Kindle.

A project-based guide to the basics of deep learning.

This concise, project-driven guide to deep learning takes readers through a series of program-writing tasks that introduce them to the use of deep learning in such areas of artificial intelligence as computer vision, natural-language processing, and reinforcement learning. The author, a longtime artificial intelligence researcher specializing in natural-language processing, covers feed-forward neural nets, convolutional neural nets, word embeddings, recurrent neural nets, sequence-to-sequence learning, deep reinforcement learning, unsupervised models, and other fundamental concepts and techniques. Students and practitioners learn the basics of deep learning by working through programs in TensorFlow, an open-source machine learning framework. “I find I learn computer science material best by sitting down and writing programs,” the author writes, and the book reflects this approach.

Each chapter includes a programming project, exercises, and references for further reading. An early chapter is devoted to TensorFlow and its interface with Python, the widely used programming language. Familiarity with linear algebra, multivariate calculus, and probability and statistics is required, as is a rudimentary knowledge of programming in Python. The book can be used in both undergraduate and graduate courses; practitioners will find it an essential reference.
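
The flavor of those programs can be suggested with a short sketch. The example below is not a listing from the book (the book's own code predates the tf.keras API and builds its computation graphs by hand); the dataset (MNIST), layer sizes, and training settings here are illustrative assumptions, chosen only to show the kind of feed-forward classifier that chapters 1 and 2 develop.

    # A minimal feed-forward digit classifier in TensorFlow (tf.keras API).
    # Illustrative only; not code from the book.
    import tensorflow as tf

    # Load the MNIST digits and scale pixel values to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # One ReLU hidden layer and a softmax output over the 10 digit classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Cross-entropy loss minimized by stochastic gradient descent,
    # the combination introduced in sections 1.2 and 1.3.
    model.compile(optimizer="sgd",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)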

Preface xi
1 Feed-Forward Neural Nets 1
1.1 Perceptrons 3
1.2 Cross-entropy Loss Functions for Neural Nets 9
1.3 Derivatives and Stochastic Gradient Descent 14
1.4 Writing Our Program 18
1.5 Matrix Representation of Neural Nets 21
1.6 Data Independence 24
1.7 References and Further Readings 25
1.8 Written Exercises 26
2 TensorFlow 29
2.1 TensorFlow Preliminaries 29
2.2 A TF Program 33
2.3 Multilayered NNs 38
2.4 Other Pieces 42
2.4.1 Checkpointing 42
2.4.2 tensordot 43
2.4.3 Initialization of TF Variables 44
2.4.4 Simplifying TF Graph Creation 47
2.5 References and Further Readings 48
2.6 Written Exercises 49
3 Convolutional Neural Networks 51
3.1 Filters, Strides, and Padding 52
3.2 A Simple TF Convolution Example 57
3.3 Multilevel Convolution 61
3.4 Convolution Details 64
3.4.1 Biases 64
3.4.2 Layers with Convolution 65
3.4.3 Pooling 66
3.5 References and Further Readings 67
3.6 Written Exercises 68
4 Word Embeddings and Recurrent NNs 71
4.1 Word Embeddings for Language Models 71
4.2 Building Feed-Forward Language Models 76
4.3 Improving Feed-Forward Language Models 78
4.4 Overfitting 79
4.5 Recurrent Networks 82
4.6 Long Short-Term Memory 88
4.7 References and Further Readings 92
4.8 Written Exercises 92
5 Sequence-to-Sequence Learning 95
5.1 The Seq2Seq Paradigm 96
5.2 Writing a Seq2Seq MT Program 99
5.3 Attention in Seq2Seq 102
5.4 Multilength Seq2Seq 107
5.5 Programming Exercise 108
5.6 Written Exercises 110
5.7 References and Further Readings 111
6 Deep Reinforcement Learning 113
6.1 Value Iteration 114
6.2 Q-learning 117
6.3 Basic Deep-Q Learning 119
6.4 Policy Gradient Methods 124
6.5 Actor-Critic Methods 130
6.6 Experience Replay 133
6.7 References and Further Readings 134
6.8 Written Exercises 134
7 Unsupervised Neural-Network Models 137
7.1 Basic Autoencoding 137
7.2 Convolutional Autoencoding 140
7.3 Variational Autoencoding 144
7.4 Generative Adversarial Networks 152
7.5 References and Further Readings 157
7.6 Written Exercises 157
A Answers to Selected Exercises 159
A.1 Chapter 1 159
A.2 Chapter 2 160
A.3 Chapter 3 160
A.4 Chapter 4 161
A.5 Chapter 5 161
A.6 Chapter 6 162
A.7 Chapter 7 162
Bibliography 165
Index 169