E-book: Grokking Deep Learning

  • Pages: 336
  • Publication date: 23-Jan-2019
  • Publisher: Manning Publications
  • Language: English
  • ISBN-13: 9781638357209
  • Format: EPUB+DRM
  • Price: 45,77 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed
  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, you need to install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Artificial Intelligence is the most exciting technology of the century, and Deep Learning is the brain behind the world's smartest Artificial Intelligence systems.

  Grokking Deep Learning is the perfect place to begin the deep learning journey. Rather than just learning the black box API of some library or framework, readers will actually understand how to build these algorithms completely from scratch.
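
To give a flavor of that from-scratch approach (chapter 3 walks through "A simple neural network making a prediction"), here is a minimal sketch of the idea in Python; the weight and input values are illustrative, not taken from the book.

    # A one-weight "neural network" making a prediction, in the spirit of
    # chapter 3. All values here are illustrative.
    weight = 0.1  # the network's single learned parameter (hypothetical value)

    def neural_network(input_value, weight):
        # The prediction is just the input scaled by the weight.
        return input_value * weight

    number_of_toes = [8.5, 9.5, 10, 9]  # toy input data
    pred = neural_network(number_of_toes[0], weight)
    print(pred)  # 0.85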

 

Key Features:
  • Build neural networks that can see and understand images
  • Build an A.I. that will learn to defeat you in a classic Atari game
  • Hands-on learning

Written for readers with high school-level math and intermediate programming skills. Experience with calculus is helpful but not required.

ABOUT THE TECHNOLOGY

Deep Learning is a subset of Machine Learning, which is a field dedicated to the study and development of machines that can learn, often with the goal of eventually attaining general artificial intelligence.
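
As a small illustration of "machines that can learn" (chapter 4 covers one iteration of gradient descent), the sketch below nudges a single weight to reduce prediction error; all numbers and variable names are illustrative, not from the book.

    # Minimal gradient descent on one weight, in the spirit of chapter 4.
    # All values here are illustrative.
    weight = 0.5        # initial parameter (hypothetical)
    goal_pred = 0.8     # target output
    input_value = 2.0   # single input

    for iteration in range(20):
        pred = input_value * weight         # predict
        error = (pred - goal_pred) ** 2     # compare: squared error
        delta = pred - goal_pred
        weight_delta = delta * input_value  # direction and amount of change
        weight -= 0.1 * weight_delta        # learn, with learning rate alpha = 0.1
        print(f"Error: {error:.6f}  Prediction: {pred:.6f}")

The error shrinks each iteration because the weight moves opposite the gradient of the squared error.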

 
TABLE OF CONTENTS

Preface
Acknowledgments
About this book
About the author

1 Introducing deep learning: why you should learn it
  Welcome to Grokking Deep Learning
  Why you should learn deep learning
  Will this be difficult to learn?
  Why you should read this book
  What you need to get started
  You'll probably need some Python knowledge
  Summary

2 Fundamental concepts: how do machines learn?
  What is deep learning?
  What is machine learning?
  Supervised machine learning
  Unsupervised machine learning
  Parametric vs. nonparametric learning
  Supervised parametric learning
  Unsupervised parametric learning
  Nonparametric learning
  Summary

3 Introduction to neural prediction: forward propagation
  Step 1: Predict
  A simple neural network making a prediction
  What is a neural network?
  What does this neural network do?
  Making a prediction with multiple inputs
  Multiple inputs: What does this neural network do?
  Multiple inputs: Complete runnable code
  Making a prediction with multiple outputs
  Predicting with multiple inputs and outputs
  Multiple inputs and outputs: How does it work?
  Predicting on predictions
  A quick primer on NumPy
  Summary

4 Introduction to neural learning: gradient descent
  Predict, compare, and learn
  Compare
  Learn
  Compare: Does your network make good predictions?
  Why measure error?
  What's the simplest form of neural learning?
  Hot and cold learning
  Characteristics of hot and cold learning
  Calculating both direction and amount from error
  One iteration of gradient descent
  Learning is just reducing error
  Let's watch several steps of learning
  Why does this work? What is weight_delta, really?
  Tunnel vision on one concept
  A box with rods poking out of it
  Derivatives: Take two
  What you really need to know
  What you don't really need to know
  How to use a derivative to learn
  Look familiar?
  Breaking gradient descent
  Visualizing the overcorrections
  Divergence
  Introducing alpha
  Alpha in code
  Memorizing

5 Learning multiple weights at a time: generalizing gradient descent
  Gradient descent learning with multiple inputs
  Gradient descent with multiple inputs explained
  Let's watch several steps of learning
  Freezing one weight: What does it do?
  Gradient descent learning with multiple outputs
  Gradient descent with multiple inputs and outputs
  What do these weights learn?
  Visualizing weight values
  Visualizing dot products (weighted sums)
  Summary

6 Building your first deep neural network: introduction to backpropagation
  The streetlight problem
  Preparing the data
  Matrices and the matrix relationship
  Creating a matrix or two in Python
  Building a neural network
  Learning the whole dataset
  Full, batch, and stochastic gradient descent
  Neural networks learn correlation
  Up and down pressure
  Edge case: Overfitting
  Edge case: Conflicting pressure
  Learning indirect correlation
  Creating correlation
  Stacking neural networks: A review
  Backpropagation: Long-distance error attribution
  Backpropagation: Why does this work?
  Linear vs. nonlinear
  Why the neural network still doesn't work
  The secret to sometimes correlation
  A quick break
  Your first deep neural network
  Backpropagation in code
  One iteration of backpropagation
  Putting it all together
  Why do deep networks matter?

7 How to picture neural networks: in your head and on paper
  It's time to simplify
  Correlation summarization
  The previously overcomplicated visualization
  The simplified visualization
  Simplifying even further
  Let's see this network predict
  Visualizing using letters instead of pictures
  Linking the variables
  Everything side by side
  The importance of visualization tools

8 Learning signal and ignoring noise: introduction to regularization and batching
  Three-layer network on MNIST
  Well, that was easy
  Memorization vs. generalization
  Overfitting in neural networks
  Where overfitting comes from
  The simplest regularization: Early stopping
  Industry standard regularization: Dropout
  Why dropout works: Ensembling works
  Dropout in code
  Dropout evaluated on MNIST
  Batch gradient descent
  Summary

9 Modeling probabilities and nonlinearities: activation functions
  What is an activation function?
  Standard hidden-layer activation functions
  Standard output layer activation functions
  The core issue: Inputs have similarity
  softmax computation
  Activation installation instructions
  Multiplying delta by the slope
  Converting output to slope (derivative)
  Upgrading the MNIST network

10 Neural learning about edges and corners: intro to convolutional neural networks
  Reusing weights in multiple places
  The convolutional layer
  A simple implementation in NumPy
  Summary

11 Neural networks that understand language: king - man + woman == ?
  What does it mean to understand language?
  Natural language processing (NLP)
  Supervised NLP
  IMDB movie reviews dataset
  Capturing word correlation in input data
  Predicting movie reviews
  Intro to an embedding layer
  Interpreting the output
  Neural architecture
  Comparing word embeddings
  What is the meaning of a neuron?
  Filling in the blank
  Meaning is derived from loss
  King - Man + Woman almost = Queen
  Word analogies
  Summary

12 Neural networks that write like Shakespeare: recurrent layers for variable-length data
  The challenge of arbitrary length
  Do comparisons really matter?
  The surprising power of averaged word vectors
  How is information stored in these embeddings?
  How does a neural network use embeddings?
  The limitations of bag-of-words vectors
  Using identity vectors to sum word embeddings
  Matrices that change absolutely nothing
  Learning the transition matrices
  Learning to create useful sentence vectors
  Forward propagation in Python
  How do you backpropagate into this?
  Let's train it!
  Setting things up
  Forward propagation with arbitrary length
  Backpropagation with arbitrary length
  Weight update with arbitrary length
  Execution and output analysis
  Summary

13 Introducing automatic optimization: let's build a deep learning framework
  What is a deep learning framework?
  Introduction to tensors
  Introduction to automatic gradient computation (autograd)
  A quick checkpoint
  Tensors that are used multiple times
  Upgrading autograd to support multiuse tensors
  How does addition backpropagation work?
  Adding support for negation
  Adding support for additional functions
  Using autograd to train a neural network
  Adding automatic optimization
  Adding support for layer types
  Layers that contain layers
  Loss-function layers
  How to learn a framework
  Nonlinearity layers
  The embedding layer
  Adding indexing to autograd
  The embedding layer (revisited)
  The cross-entropy layer
  The recurrent neural network layer
  Summary

14 Learning to write like Shakespeare: long short-term memory
  Character language modeling
  The need for truncated backpropagation
  Truncated backpropagation
  A sample of the output
  Vanishing and exploding gradients
  A toy example of RNN backpropagation
  Long short-term memory (LSTM) cells
  Some intuition about LSTM gates
  The long short-term memory layer
  Upgrading the character language model
  Training the LSTM character language model
  Tuning the LSTM character language model
  Summary

15 Deep learning on unseen data: introducing federated learning
  The problem of privacy in deep learning
  Federated learning
  Learning to detect spam
  Let's make it federated
  Hacking into federated learning
  Secure aggregation
  Homomorphic encryption
  Homomorphically encrypted federated learning
  Summary

16 Where to go from here: a brief guide
  Congratulations!
  Step 1: Start learning PyTorch
  Step 2: Start another deep learning course
  Step 3: Grab a mathy deep learning textbook
  Step 4: Start a blog, and teach deep learning
  Step 5: Twitter
  Step 6: Implement academic papers
  Step 7: Acquire access to a GPU (or many)
  Step 8: Get paid to practice
  Step 9: Join an open source project
  Step 10: Develop your local community

Index
ABOUT THE AUTHOR

Andrew Trask is a PhD student at Oxford University, funded by the Oxford-DeepMind Graduate Scholarship, where he researches deep learning approaches with special emphasis on human language. Previously, Andrew was a researcher and analytics product manager at Digital Reasoning, where he trained the world's largest artificial neural network, with over 160 billion parameters, and helped guide the analytics roadmap for the Synthesys cognitive computing platform, which tackles some of the most complex analysis tasks across the government intelligence, finance, and healthcare industries.