
E-book: Introduction to Machine Learning

  • Format: PDF+DRM
  • Publication date: 15-Jul-2015
  • Publisher: Springer International Publishing AG
  • Language: English
  • ISBN-13: 9783319200101
  • Price: 55,56 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights protection (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

This book presents the basic ideas of machine learning in a way that is easy to understand, by providing hands-on practical advice, using simple examples, and motivating students with discussions of interesting applications. The main topics include Bayesian classifiers, nearest-neighbor classifiers, linear and polynomial classifiers, decision trees, neural networks, and support vector machines. Later chapters show how to combine these simple tools by way of "boosting," how to exploit them in more complicated domains, and how to deal with diverse advanced practical issues. One chapter is dedicated to the popular genetic algorithm.

A Simple Machine-Learning Task.- Probabilities: Bayesian Classifiers.- Similarities: Nearest-Neighbor Classifiers.- Inter-Class Boundaries: Linear and Polynomial Classifiers.- Artificial Neural Networks.- Decision Trees.- Computational Learning Theory.- A Few Instructive Applications.- Induction of Voting Assemblies.- Some Practical Aspects to Know About.- Performance Evaluation.- Statistical Significance.- The Genetic Algorithm.- Reinforcement Learning.

Reviews

Miroslav Kubat's Introduction to Machine Learning is an excellent overview of a broad range of Machine Learning (ML) techniques. It fills a longstanding need for texts that cover the middle ground: neither oversimplifying nor giving overly technical explanations of the key concepts of Machine Learning algorithms. All in all, it is a very informative and instructive read, well suited for undergraduate students and aspiring data scientists. (Holger K. von Joua, Google+, plus.google.com, December, 2016)

It is superbly organized: each section includes a 'what have you learned' summary, and every chapter has a short summary, accompanying (brief) historical remarks, and a slew of exercises. In most of the chapters, there are very clear examples, well chosen and illustrated, that really help the reader understand each concept. I did learn quite a bit about very basic machine learning by reading this book. (Jacques Carette, Computing Reviews, January, 2016)

1 A Simple Machine-Learning Task
1.1 Training Sets and Classifiers
1.2 Minor Digression: Hill-Climbing Search
1.3 Hill Climbing in Machine Learning
1.4 The Induced Classifier's Performance
1.5 Some Difficulties with Available Data
1.6 Summary and Historical Remarks
1.7 Solidify Your Knowledge
2 Probabilities: Bayesian Classifiers
2.1 The Single-Attribute Case
2.2 Vectors of Discrete Attributes
2.3 Probabilities of Rare Events: Exploiting the Expert's Intuition
2.4 How to Handle Continuous Attributes
2.5 Gaussian "Bell" Function: A Standard pdf
2.6 Approximating PDFs with Sets of Gaussians
2.7 Summary and Historical Remarks
2.8 Solidify Your Knowledge
3 Similarities: Nearest-Neighbor Classifiers
3.1 The k-Nearest-Neighbor Rule
3.2 Measuring Similarity
3.3 Irrelevant Attributes and Scaling Problems
3.4 Performance Considerations
3.5 Weighted Nearest Neighbors
3.6 Removing Dangerous Examples
3.7 Removing Redundant Examples
3.8 Summary and Historical Remarks
3.9 Solidify Your Knowledge
4 Inter-Class Boundaries: Linear and Polynomial Classifiers
4.1 The Essence
4.2 The Additive Rule: Perceptron Learning
4.3 The Multiplicative Rule: WINNOW
4.4 Domains with More than Two Classes
4.5 Polynomial Classifiers
4.6 Specific Aspects of Polynomial Classifiers
4.7 Numerical Domains and Support Vector Machines
4.8 Summary and Historical Remarks
4.9 Solidify Your Knowledge
5 Artificial Neural Networks
5.1 Multilayer Perceptrons as Classifiers
5.2 Neural Network's Error
5.3 Backpropagation of Error
5.4 Special Aspects of Multilayer Perceptrons
5.5 Architectural Issues
5.6 Radial Basis Function Networks
5.7 Summary and Historical Remarks
5.8 Solidify Your Knowledge
6 Decision Trees
6.1 Decision Trees as Classifiers
6.2 Induction of Decision Trees
6.3 How Much Information Does an Attribute Convey?
6.4 Binary Split of a Numeric Attribute
6.5 Pruning
6.6 Converting the Decision Tree into Rules
6.7 Summary and Historical Remarks
6.8 Solidify Your Knowledge
7 Computational Learning Theory
7.1 PAC Learning
7.2 Examples of PAC Learnability
7.3 Some Practical and Theoretical Consequences
7.4 VC-Dimension and Learnability
7.5 Summary and Historical Remarks
7.6 Exercises and Thought Experiments
8 A Few Instructive Applications
8.1 Character Recognition
8.2 Oil-Spill Recognition
8.3 Sleep Classification
8.4 Brain-Computer Interface
8.5 Medical Diagnosis
8.6 Text Classification
8.7 Summary and Historical Remarks
8.8 Exercises and Thought Experiments
9 Induction of Voting Assemblies
9.1 Bagging
9.2 Schapire's Boosting
9.3 Adaboost: Practical Version of Boosting
9.4 Variations on the Boosting Theme
9.5 Cost-Saving Benefits of the Approach
9.6 Summary and Historical Remarks
9.7 Solidify Your Knowledge
10 Some Practical Aspects to Know About
10.1 A Learner's Bias
10.2 Imbalanced Training Sets
10.3 Context-Dependent Domains
10.4 Unknown Attribute Values
10.5 Attribute Selection
10.6 Miscellaneous
10.7 Summary and Historical Remarks
10.8 Solidify Your Knowledge
11 Performance Evaluation
11.1 Basic Performance Criteria
11.2 Precision and Recall
11.3 Other Ways to Measure Performance
11.4 Performance in Multi-label Domains
11.5 Learning Curves and Computational Costs
11.6 Methodologies of Experimental Evaluation
11.7 Summary and Historical Remarks
11.8 Solidify Your Knowledge
12 Statistical Significance
12.1 Sampling a Population
12.2 Benefiting from the Normal Distribution
12.3 Confidence Intervals
12.4 Statistical Evaluation of a Classifier
12.5 Another Kind of Statistical Evaluation
12.6 Comparing Machine-Learning Techniques
12.7 Summary and Historical Remarks
12.8 Solidify Your Knowledge
13 The Genetic Algorithm
13.1 The Baseline Genetic Algorithm
13.2 Implementing the Individual Modules
13.3 Why it Works
13.4 The Danger of Premature Degeneration
13.5 Other Genetic Operators
13.6 Some Advanced Versions
13.7 Selections in k-NN Classifiers
13.8 Summary and Historical Remarks
13.9 Solidify Your Knowledge
14 Reinforcement Learning
14.1 How to Choose the Most Rewarding Action
14.2 States and Actions in a Game
14.3 The SARSA Approach
14.4 Summary and Historical Remarks
14.5 Solidify Your Knowledge
Bibliography
Index

Miroslav Kubat, Associate Professor at the University of Miami, has been teaching and studying machine learning for more than a quarter century. Over the years, he has published more than 100 peer-reviewed papers, co-edited two books, served on the program committees of some 60 conferences and workshops, and is a member of the editorial boards of three scientific journals. He is widely credited with having co-pioneered research in two major branches of the discipline: induction of time-varying concepts and learning from imbalanced training sets. Apart from that, he has contributed to induction from multi-label examples, induction of hierarchically organized classes, genetic algorithms, initialization of neural networks, and other problems.