
E-book: Pro Deep Learning with TensorFlow: A Mathematical Approach to Advanced Artificial Intelligence in Python

  • Format: EPUB+DRM
  • Publication date: 06-Dec-2017
  • Publisher: APress
  • Language: English
  • ISBN-13: 9781484230961
  • Price: 67.91 €*
  • * The price is final, i.e., no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not permitted
  • Printing: not permitted

  • Usage:

    Digital rights management (DRM)
    The publisher has supplied this e-book in encrypted form, which means you need to install special software to read it. You will also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Deploy deep learning solutions in production with ease using TensorFlow, while developing the mathematical understanding and intuition required to invent new deep learning architectures and solutions on your own.

Pro Deep Learning with TensorFlow provides practical, hands-on expertise so you can learn deep learning from scratch and deploy meaningful deep learning solutions. This book will allow you to get up to speed quickly using TensorFlow and to optimize different deep learning architectures.

All of the practical aspects of deep learning that are relevant in any industry are emphasized in this book. You will be able to use the prototypes demonstrated to build new deep learning applications. The code presented in the book is available in the form of IPython notebooks and scripts, which allow you to try out the examples and extend them in interesting ways.

You will be equipped with the mathematical foundation and scientific knowledge to pursue research in this field and give back to the community.

What You'll Learn
  • Understand full stack deep learning using TensorFlow and gain a solid mathematical foundation for deep learning
  • Deploy complex deep learning solutions in production using TensorFlow
  • Carry out research on deep learning and perform experiments using TensorFlow
Who This Book Is For

Data scientists and machine learning professionals, software developers, graduate students, and open source enthusiasts
About the Author xiii
About the Technical Reviewer xv
Acknowledgments xvii
Introduction xix
Chapter 1 Mathematical Foundations 1(88)
Linear Algebra 2(21)
Vector 3(1)
Scalar 4(1)
Matrix 4(1)
Tensor 5(1)
Matrix Operations and Manipulations 5(4)
Linear Independence of Vectors 9(1)
Rank of a Matrix 10(1)
Identity Matrix or Operator 11(1)
Determinant of a Matrix 12(2)
Inverse of a Matrix 14(1)
Norm of a Vector 15(1)
Pseudo Inverse of a Matrix 16(1)
Unit Vector in the Direction of a Specific Vector 17(1)
Projection of a Vector in the Direction of Another Vector 17(1)
Eigenvectors 18(5)
Calculus 23(11)
Differentiation 23(1)
Gradient of a Function 24(1)
Successive Partial Derivatives 25(1)
Hessian Matrix of a Function 25(1)
Maxima and Minima of Functions 26(2)
Local Minima and Global Minima 28(1)
Positive Semi-Definite and Positive Definite 29(1)
Convex Set 29(1)
Convex Function 30(1)
Non-convex Function 31(1)
Multivariate Convex and Non-convex Functions Examples 31(3)
Taylor Series 34(1)
Probability 34(21)
Unions, Intersection, and Conditional Probability 35(2)
Chain Rule of Probability for Intersection of Events 37(1)
Mutually Exclusive Events 37(1)
Independence of Events 37(1)
Conditional Independence of Events 38(1)
Bayes Rule 38(1)
Probability Mass Function 38(1)
Probability Density Function 39(1)
Expectation of a Random Variable 39(1)
Variance of a Random Variable 39(1)
Skewness and Kurtosis 40(4)
Covariance 44(1)
Correlation Coefficient 44(1)
Some Common Probability Distributions 45(6)
Likelihood Function 51(1)
Maximum Likelihood Estimate 52(1)
Hypothesis Testing and p Value 53(2)
Formulation of Machine-Learning Algorithm and Optimization Techniques 55(24)
Supervised Learning 56(9)
Unsupervised Learning 65(1)
Optimization Techniques for Machine Learning 66(11)
Constrained Optimization Problem 77(2)
A Few Important Topics in Machine Learning 79(8)
Dimensionality Reduction Methods 79(5)
Regularization 84(2)
Regularization Viewed as a Constraint Optimization Problem 86(1)
Summary 87(2)
Chapter 2 Introduction to Deep-Learning Concepts and TensorFlow 89(64)
Deep Learning and Its Evolution 89(3)
Perceptrons and Perceptron Learning Algorithm 92(26)
Geometrical Interpretation of Perceptron Learning 96(1)
Limitations of Perceptron Learning 97(2)
Need for Non-linearity 99(1)
Hidden Layer Perceptrons' Activation Function for Non-linearity 100(2)
Different Activation Functions for a Neuron/Perceptron 102(6)
Learning Rule for Multi-Layer Perceptrons Network 108(1)
Backpropagation for Gradient Computation 109(2)
Generalizing the Backpropagation Method for Gradient Computation 111(7)
TensorFlow 118(34)
Common Deep-Learning Packages 118(1)
TensorFlow Installation 119(1)
TensorFlow Basics for Development 119(4)
Gradient-Descent Optimization Methods from a Deep-Learning Perspective 123(6)
Learning Rate in Mini-batch Approach to Stochastic Gradient Descent 129(1)
Optimizers in TensorFlow 130(8)
XOR Implementation Using TensorFlow 138(5)
Linear Regression in TensorFlow 143(3)
Multi-class Classification with SoftMax Function Using Full-Batch Gradient Descent 146(3)
Multi-class Classification with SoftMax Function Using Stochastic Gradient Descent 149(3)
GPU 152(1)
Summary 152(1)
Chapter 3 Convolutional Neural Networks 153(70)
Convolution Operation 153(5)
Linear Time Invariant (LTI) / Linear Shift Invariant (LSI) Systems 153(2)
Convolution for Signals in One Dimension 155(3)
Analog and Digital Signals 158(3)
2D and 3D signals 160(1)
2D Convolution 161(8)
Two-dimensional Unit Step Function 161(2)
2D Convolution of a Signal with an LSI System Unit Step Response 163(2)
2D Convolution of an Image to Different LSI System Responses 165(4)
Common Image-Processing Filters 169(9)
Mean Filter 169(2)
Median Filter 171(2)
Gaussian Filter 173(1)
Gradient-based Filters 174(1)
Sobel Edge-Detection Filter 175(2)
Identity Transform 177(1)
Convolution Neural Networks 178(1)
Components of Convolution Neural Networks 179(3)
Input Layer 180(1)
Convolution Layer 180(2)
Pooling Layer 182(1)
Backpropagation Through the Convolutional Layer 182(4)
Backpropagation Through the Pooling Layers 186(1)
Weight Sharing Through Convolution and Its Advantages 187(1)
Translation Equivariance 188(1)
Translation Invariance Due to Pooling 189(1)
Dropout Layers and Regularization 190(2)
Convolutional Neural Network for Digit Recognition on the MNIST Dataset 192(4)
Convolutional Neural Network for Solving Real-World Problems 196(8)
Batch Normalization 204(2)
Different Architectures in Convolutional Neural Networks 206(5)
LeNet 206(2)
AlexNet 208(1)
VGG16 209(1)
ResNet 210(1)
Transfer Learning 211(10)
Guidelines for Using Transfer Learning 212(1)
Transfer Learning with Google's InceptionV3 213(3)
Transfer Learning with Pre-trained VGG16 216(5)
Summary 221(2)
Chapter 4 Natural Language Processing Using Recurrent Neural Networks 223(56)
Vector Space Model (VSM) 223(4)
Vector Representation of Words 227(1)
Word2Vec 228(24)
Continuous Bag of Words (CBOW) 228(3)
Continuous Bag of Words Implementation in TensorFlow 231(4)
Skip-Gram Model for Word Embedding 235(2)
Skip-gram Implementation in TensorFlow 237(3)
Global Co-occurrence Statistics-based Word Vectors 240(5)
GloVe 245(4)
Word Analogy with Word Vectors 249(3)
Introduction to Recurrent Neural Networks 252(26)
Language Modeling 254(1)
Predicting the Next Word in a Sentence Through RNN Versus Traditional Methods 255(1)
Backpropagation Through Time (BPTT) 256(3)
Vanishing and Exploding Gradient Problem in RNN 259(1)
Solution to Vanishing and Exploding Gradients Problem in RNNs 260(2)
Long Short-Term Memory (LSTM) 262(1)
LSTM in Reducing Exploding- and Vanishing-Gradient Problems 263(2)
MNIST Digit Identification in TensorFlow Using Recurrent Neural Networks 265(9)
Gated Recurrent Unit (GRU) 274(2)
Bidirectional RNN 276(2)
Summary 278(1)
Chapter 5 Unsupervised Learning with Restricted Boltzmann Machines and Auto-encoders 279(66)
Boltzmann Distribution 279(2)
Bayesian Inference: Likelihood, Priors, and Posterior Probability Distribution 281(5)
Markov Chain Monte Carlo Methods for Sampling 286(8)
Metropolis Algorithm 289(5)
Restricted Boltzmann Machines 294(28)
Training a Restricted Boltzmann Machine 299(5)
Gibbs Sampling 304(1)
Block Gibbs Sampling 305(1)
Burn-in Period and Generating Samples in Gibbs Sampling 306(1)
Using Gibbs Sampling in Restricted Boltzmann Machines 306(2)
Contrastive Divergence 308(1)
A Restricted Boltzmann Implementation in TensorFlow 309(4)
Collaborative Filtering Using Restricted Boltzmann Machines 313(4)
Deep Belief Networks (DBNs) 317(5)
Auto-encoders 322(18)
Feature Learning Through Auto-encoders for Supervised Learning 325(2)
Kullback-Leibler (KL) Divergence 327(2)
Sparse Auto-Encoder Implementation in TensorFlow 329(4)
Denoising Auto-Encoder 333(1)
A Denoising Auto-Encoder Implementation in TensorFlow 333(7)
PCA and ZCA Whitening 340(3)
Summary 343(2)
Chapter 6 Advanced Neural Networks 345(48)
Image Segmentation 345(28)
Binary Thresholding Method Based on Histogram of Pixel Intensities 345(1)
Otsu's Method 346(3)
Watershed Algorithm for Image Segmentation 349(3)
Image Segmentation Using K-means Clustering 352(3)
Semantic Segmentation 355(1)
Sliding-Window Approach 355(1)
Fully Convolutional Network (FCN) 356(2)
Fully Convolutional Network with Downsampling and Upsampling 358(6)
U-Net 364(1)
Semantic Segmentation in TensorFlow with Fully Connected Neural Networks 365(8)
Image Classification and Localization Network 373(2)
Object Detection 375(3)
R-CNN 376(1)
Fast and Faster R-CNN 377(1)
Generative Adversarial Networks 378(11)
Maximin and Minimax Problem 379(2)
Zero-sum Game 381(1)
Minimax and Saddle Points 382(1)
GAN Cost Function and Training 383(3)
Vanishing Gradient for the Generator 386(1)
TensorFlow Implementation of a GAN Network 386(3)
TensorFlow Models' Deployment in Production 389(3)
Summary 392(1)
Index 393
Santanu Pattanayak currently works at GE Digital as a Senior Data Scientist. He has 10 years of overall work experience, six of them in the data analytics/data science field, and also has a background in development and database technologies. Prior to joining GE, Santanu worked at companies such as RBS, Capgemini, and IBM. He graduated with a degree in electrical engineering from Jadavpur University, Kolkata, and is an avid math enthusiast. Santanu is currently pursuing a master's degree in data science at the Indian Institute of Technology (IIT), Hyderabad. He also devotes his time to data science hackathons and Kaggle competitions, where he ranks within the top 500 across the globe. Santanu was born and brought up in West Bengal, India, and currently resides in Bangalore, India, with his wife.