Introduction to Machine Learning, 2nd Revised edition [Hardback]

  • Format: Hardback, 584 pages, height x width x thickness: 229x203x23 mm, weight: 1203 g, 172 figures, 10 tables
  • Series: Adaptive Computation and Machine Learning Series
  • Publication date: 01-Feb-2010
  • Publisher: MIT Press
  • ISBN-10: 026201243X
  • ISBN-13: 9780262012430
  • Hardback
  • Price: 66.63 €*
  • * We will send you an offer for a used copy; its price may differ from the price listed on the website.
  • This book is out of print, but we will send you an offer for a used copy.
  • Free shipping
A new edition of an introductory text in machine learning that gives a unified treatment of machine learning problems and solutions.

The goal of machine learning is to program computers to use example data or past experience to solve a given problem. Many successful applications of machine learning exist already, including systems that analyze past sales data to predict customer behavior, optimize robot behavior so that a task can be completed using minimum resources, and extract knowledge from bioinformatics data. Introduction to Machine Learning is a comprehensive textbook on the subject, covering a broad array of topics not usually included in introductory machine learning texts. In order to present a unified treatment of machine learning problems and solutions, it discusses many methods from different fields, including statistics, pattern recognition, neural networks, artificial intelligence, signal processing, control, and data mining. All learning algorithms are explained so that the student can easily move from the equations in the book to a computer program.
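
As a taste of that equation-to-program step, here is a minimal sketch in Python (our illustration, not code from the book) of the kind of Gaussian parametric classifier developed in the parametric methods chapters; the helper names fit and predict are our own.

    # Minimal sketch (our illustration, not the book's code) of a univariate
    # Gaussian parametric classifier: fit class priors, means, and variances
    # by maximum likelihood, then classify by the largest log posterior.
    import numpy as np

    def fit(X, y):
        """Per-class maximum likelihood estimates: prior, mean, variance."""
        params = {}
        for c in np.unique(y):
            xc = X[y == c]
            params[c] = (len(xc) / len(X),  # prior P(C_i)
                         xc.mean(),         # ML estimate of the mean
                         xc.var())          # ML estimate of the variance
        return params

    def predict(params, x):
        """Pick the class maximizing g_i(x) = log p(x|C_i) + log P(C_i)."""
        def g(c):
            prior, m, v = params[c]
            return np.log(prior) - 0.5 * np.log(2 * np.pi * v) - (x - m) ** 2 / (2 * v)
        return max(params, key=g)

    # Toy data: two classes with different means.
    rng = np.random.default_rng(0)
    X = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
    y = np.array([0] * 100 + [1] * 100)
    print(predict(fit(X, y), 2.5))  # expected: class 1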

The text covers such topics as supervised learning, Bayesian decision theory, parametric methods, multivariate methods, multilayer perceptrons, local models, hidden Markov models, assessing and comparing classification algorithms, and reinforcement learning. New to the second edition are chapters on kernel machines, graphical models, and Bayesian estimation; expanded coverage of statistical tests in a chapter on design and analysis of machine learning experiments; case studies available on the Web (with downloadable results for instructors); and many additional exercises. All chapters have been revised and updated.

Introduction to Machine Learning can be used by advanced undergraduates and graduate students who have completed courses in computer programming, probability, calculus, and linear algebra. It will also be of interest to engineers in the field who are concerned with the application of machine learning methods.

Adaptive Computation and Machine Learning series
Series Foreword  xvii
Figures  xix
Tables  xxix
Preface
Acknowledgments  xxxiii
Notes for the Second Edition  xxxv
Notations  xxxix
Introduction  1(20)
  What Is Machine Learning?  1(3)
  Examples of Machine Learning Applications  4(10)
    Learning Associations  4(1)
    Classification  5(4)
    Regression  9(2)
    Unsupervised Learning  11(2)
    Reinforcement Learning  13(1)
  Notes  14(2)
  Relevant Resources  16(2)
  Exercises  18(1)
  References  19(2)
Supervised Learning  21(26)
  Learning a Class from Examples  21(6)
  Vapnik-Chervonenkis (VC) Dimension  27(2)
  Probably Approximately Correct (PAC) Learning  29(1)
  Noise  30(2)
  Learning Multiple Classes  32(2)
  Regression  34(3)
  Model Selection and Generalization  37(4)
  Dimensions of a Supervised Machine Learning Algorithm  41(1)
  Notes  42(1)
  Exercises  43(1)
  References  44(3)
Bayesian Decision Theory  47(14)
  Introduction  47(2)
  Classification  49(2)
  Losses and Risks  51(2)
  Discriminant Functions  53(1)
  Utility Theory  54(1)
  Association Rules  55(3)
  Notes  58(1)
  Exercises  58(1)
  References  59(2)
Parametric Methods  61(26)
  Introduction  61(1)
  Maximum Likelihood Estimation  62(3)
    Bernoulli Density  63(1)
    Multinomial Density  64(1)
    Gaussian (Normal) Density  64(1)
  Evaluating an Estimator: Bias and Variance  65(1)
  The Bayes' Estimator  66(3)
  Parametric Classification  69(4)
  Regression  73(3)
  Tuning Model Complexity: Bias/Variance Dilemma  76(4)
  Model Selection Procedures  80(4)
  Notes  84(1)
  Exercises  84(1)
  References  85(2)
Multivariate Methods  87(22)
  Multivariate Data  87(1)
  Parameter Estimation  88(1)
  Estimation of Missing Values  89(1)
  Multivariate Normal Distribution  90(4)
  Multivariate Classification  94(5)
  Tuning Complexity  99(3)
  Discrete Features  102(1)
  Multivariate Regression  103(2)
  Notes  105(1)
  Exercises  106(1)
  References  107(2)
Dimensionality Reduction  109(34)
  Introduction  109(1)
  Subset Selection  110(3)
  Principal Components Analysis  113(7)
  Factor Analysis  120(5)
  Multidimensional Scaling  125(3)
  Linear Discriminant Analysis  128(5)
  Isomap  133(2)
  Locally Linear Embedding  135(3)
  Notes  138(1)
  Exercises  139(1)
  References  140(3)
Clustering  143(20)
  Introduction  143(1)
  Mixture Densities  144(1)
  k-Means Clustering  145(4)
  Expectation-Maximization Algorithm  149(5)
  Mixtures of Latent Variable Models  154(1)
  Supervised Learning after Clustering  155(2)
  Hierarchical Clustering  157(1)
  Choosing the Number of Clusters  158(2)
  Notes  160(1)
  Exercises  160(1)
  References  161(2)
Nonparametric Methods  163(22)
  Introduction  163(2)
  Nonparametric Density Estimation  165(5)
    Histogram Estimator  165(2)
    Kernel Estimator  167(1)
    k-Nearest Neighbor Estimator  168(2)
  Generalization to Multivariate Data  170(1)
  Nonparametric Classification  171(1)
  Condensed Nearest Neighbor  172(2)
  Nonparametric Regression: Smoothing Models  174(4)
    Running Mean Smoother  175(1)
    Kernel Smoother  176(1)
    Running Line Smoother  177(1)
  How to Choose the Smoothing Parameter  178(2)
  Notes  180(1)
  Exercises  181(1)
  References  182(3)
Decision Trees  185(24)
  Introduction  185(2)
  Univariate Trees  187(7)
    Classification Trees  188(4)
    Regression Trees  192(2)
  Pruning  194(3)
  Rule Extraction from Trees  197(1)
  Learning Rules from Data  198(4)
  Multivariate Trees  202(2)
  Notes  204(3)
  Exercises  207(1)
  References  207(2)
Linear Discrimination  209(24)
  Introduction  209(2)
  Generalizing the Linear Model  211(1)
  Geometry of the Linear Discriminant  212(4)
    Two Classes  212(2)
    Multiple Classes  214(2)
  Pairwise Separation  216(1)
  Parametric Discrimination Revisited  217(1)
  Gradient Descent  218(2)
  Logistic Discrimination  220(8)
    Two Classes  220(4)
    Multiple Classes  224(4)
  Discrimination by Regression  228(2)
  Notes  230(1)
  Exercises  230(1)
  References  231(2)
Multilayer Perceptrons  233(46)
  Introduction  233(4)
    Understanding the Brain  234(1)
    Neural Networks as a Paradigm for Parallel Processing  235(2)
  The Perceptron  237(3)
  Training a Perceptron  240(3)
  Learning Boolean Functions  243(2)
  Multilayer Perceptrons  245(3)
  MLP as a Universal Approximator  248(1)
  Backpropagation Algorithm  249(7)
    Nonlinear Regression  250(2)
    Two-Class Discrimination  252(2)
    Multiclass Discrimination  254(2)
    Multiple Hidden Layers  256(1)
  Training Procedures  256(7)
    Improving Convergence  256(1)
    Overtraining  257(1)
    Structuring the Network  258(3)
    Hints  261(2)
  Tuning the Network Size  263(3)
  Bayesian View of Learning  266(1)
  Dimensionality Reduction  267(3)
  Learning Time  270(1)
    Time Delay Neural Networks  270(4)
    Recurrent Networks  271(1)
  Notes  272(2)
  Exercises  274(1)
  References  275(4)
Local Models  279(30)
  Introduction  279(1)
  Competitive Learning  280(8)
    Online k-Means  280(5)
    Adaptive Resonance Theory  285(1)
    Self-Organizing Maps  286(2)
  Radial Basis Functions  288(6)
  Incorporating Rule-Based Knowledge  294(1)
  Normalized Basis Functions  295(2)
  Competitive Basis Functions  297(3)
  Learning Vector Quantization  300(1)
  Mixture of Experts  300(3)
    Cooperative Experts  303(2)
    Competitive Experts  304(1)
  Hierarchical Mixture of Experts  304(1)
  Notes  305(1)
  Exercises  306(1)
  References  307(2)
Kernel Machines  309(32)
  Introduction  309(2)
  Optimal Separating Hyperplane  311(4)
  The Nonseparable Case: Soft Margin Hyperplane  315(3)
  ν-SVM  318(1)
  Kernel Trick  319(2)
  Vectorial Kernels  321(3)
  Defining Kernels  324(1)
  Multiple Kernel Learning  325(2)
  Multiclass Kernel Machines  327(1)
  Kernel Machines for Regression  328(5)
  One-Class Kernel Machines  333(2)
  Kernel Dimensionality Reduction  335(2)
  Notes  337(1)
  Exercises  338(1)
  References  339(2)
Bayesian Estimation  341(22)
  Introduction  341(2)
  Estimating the Parameter of a Distribution  343(5)
    Discrete Variables  343(2)
    Continuous Variables  345(3)
  Bayesian Estimation of the Parameters of a Function  348(8)
    Regression  348(4)
    The Use of Basis/Kernel Functions  352(1)
    Bayesian Classification  353(3)
  Gaussian Processes  356(3)
  Notes  359(1)
  Exercises  360(1)
  References  361(2)
Hidden Markov Models  363(24)
  Introduction  363(1)
  Discrete Markov Processes  364(3)
  Hidden Markov Models  367(2)
  Three Basic Problems of HMMs  369(1)
  Evaluation Problem  369(4)
  Finding the State Sequence  373(2)
  Learning Model Parameters  375(3)
  Continuous Observations  378(1)
  The HMM with Input  379(1)
  Model Selection in HMM  380(2)
  Notes  382(1)
  Exercises  383(1)
  References  384(3)
Graphical Models  387(32)
  Introduction  387(2)
  Canonical Cases for Conditional Independence  389(7)
  Example Graphical Models  396(6)
    Naive Bayes' Classifier  396(2)
    Hidden Markov Model  398(3)
    Linear Regression  401(1)
  d-Separation  402(1)
  Belief Propagation  402(8)
    Chains  403(2)
    Trees  405(2)
    Polytrees  407(2)
    Junction Trees  409(1)
  Undirected Graphs: Markov Random Fields  410(3)
  Learning the Structure of a Graphical Model  413(1)
  Influence Diagrams  414(5)
  Notes  414(3)
  Exercises  417(1)
  References  417(2)
Combining Multiple Learners  419(28)
  Rationale  419(1)
  Generating Diverse Learners  420(3)
  Model Combination Schemes  423(1)
  Voting  424(3)
  Error-Correcting Output Codes  427(3)
  Bagging  430(1)
  Boosting  431(3)
  Mixture of Experts Revisited  434(1)
  Stacked Generalization  435(2)
  Fine-Tuning an Ensemble  437(1)
  Cascading  438(2)
  Notes  440(2)
  Exercises  442(1)
  References  443(4)
Reinforcement Learning  447(28)
  Introduction  447(2)
  Single State Case: K-Armed Bandit  449(1)
  Elements of Reinforcement Learning  450(3)
  Model-Based Learning  453(1)
    Value Iteration  453(1)
    Policy Iteration  454(1)
  Temporal Difference Learning  454(7)
    Exploration Strategies  455(1)
    Deterministic Rewards and Actions  456(1)
    Nondeterministic Rewards and Actions  457(2)
    Eligibility Traces  459(2)
  Generalization  461(3)
  Partially Observable States  464(6)
    The Setting  464(1)
    Example: The Tiger Problem  465(5)
  Notes  470(2)
  Exercises  472(1)
  References  473(2)
Design and Analysis of Machine Learning Experiments  475(42)
  Introduction  475(3)
  Factors, Response, and Strategy of Experimentation  478(3)
  Response Surface Design  481(1)
  Randomization, Replication, and Blocking  482(1)
  Guidelines for Machine Learning Experiments  483(3)
  Cross-Validation and Resampling Methods  486(3)
    K-Fold Cross-Validation  487(1)
    5×2 Cross-Validation  488(1)
    Bootstrapping  489(1)
  Measuring Classifier Performance  489(4)
  Interval Estimation  493(3)
  Hypothesis Testing  496(2)
  Assessing a Classification Algorithm's Performance  498(3)
    Binomial Test  499(1)
    Approximate Normal Test  500(1)
    t Test  500(1)
  Comparing Two Classification Algorithms  501(3)
    McNemar's Test  501(1)
    K-Fold Cross-Validated Paired t Test  501(1)
    5×2 cv Paired t Test  502(1)
    5×2 cv Paired F Test  503(1)
  Comparing Multiple Algorithms: Analysis of Variance  504(4)
  Comparison over Multiple Datasets  508(4)
    Comparing Two Algorithms  509(2)
    Multiple Algorithms  511(1)
  Notes  512(1)
  Exercises  513(1)
  References  514(3)
Appendix A: Probability  517(12)
  Elements of Probability  517(2)
    Axioms of Probability  518(1)
    Conditional Probability  518(1)
  Random Variables  519(4)
    Probability Distribution and Density Functions  519(1)
    Joint Distribution and Density Functions  520(1)
    Conditional Distributions  520(1)
    Bayes' Rule  521(1)
    Expectation  521(1)
    Variance  522(1)
    Weak Law of Large Numbers  523(1)
  Special Random Variables  523(4)
    Bernoulli Distribution  523(1)
    Binomial Distribution  524(1)
    Multinomial Distribution  524(1)
    Uniform Distribution  524(1)
    Normal (Gaussian) Distribution  525(1)
    Chi-Square Distribution  526(1)
    t Distribution  527(1)
    F Distribution  527(1)
  References  527(2)
Index  529