Preface  p. vii

1 Introduction to Machine Learning  p. 1
    1.1.2 The Classification Task  p. 3
    1.1.3 Mathematical Notation for Supervised Learning  p. 6
    1.5.3 Other Bayesian Methods  p. 17
  1.6 Other Induction Methods  p. 17
    1.6.3 Instance-based Learning  p. 20
    1.6.4 Support Vector Machines  p. 21
|
2 Classification and Regression Trees  p. 23
  2.1 Training a Decision Tree  p. 26
  2.4 Characteristics of Classification Trees  p. 32
    2.4.2 The Hierarchical Nature of Decision Trees  p. 34
  2.5 Overfitting and Underfitting  p. 35
  2.6 Beyond Classification Tasks  p. 37
  2.7 Advantages of Decision Trees  p. 38
  2.8 Disadvantages of Decision Trees  p. 39
  2.9 Decision Forest for Mitigating Learning Challenges  p. 41
  2.10 Relation to Rule Induction  p. 43
  2.11 Using Decision Trees in R  p. 44
|
3 Introduction to Ensemble Learning  p. 51
  3.3 The Bagging Algorithm  p. 54
  3.4 The Boosting Algorithm  p. 60
  3.5 The AdaBoost Algorithm  p. 61
  3.6 Occam's Razor and AdaBoost's Training and Generalization Error  p. 66
  3.7 No-Free-Lunch Theorem and Ensemble Learning  p. 71
  3.8 Bias-Variance Decomposition and Ensemble Learning  p. 73
  3.9 Classifier Dependency  p. 75
    3.9.1 Dependent Methods  p. 75
    3.9.2 Independent Methods  p. 86
    3.9.3 Extremely Randomized Trees  p. 90
    3.9.6 Nonlinear Boosting Projection (NLBP)  p. 91
    3.9.7 Cross-Validated Committees  p. 93
  3.10 Ensemble Methods for Advanced Classification Tasks  p. 97
    3.10.1 Cost-Sensitive Classification  p. 97
    3.10.2 Ensemble for Learning Concept Drift  p. 99
    3.10.3 Reject Driven Classification  p. 99
  3.11 Using R for Training a Decision Forest  p. 99
    3.11.1 Training a Random Forest with the Party Package  p. 100
    3.11.2 RandomForest Package  p. 100
  3.12 Scaling Up Decision Forest Methods  p. 101
  3.13 Ensemble Methods and Deep Neural Networks  p. 103
|
4 Ensemble Classification  p. 105
    4.1.3 Performance Weighting  p. 107
    4.1.4 Distribution Summation  p. 108
    4.1.5 Bayesian Combination  p. 108
    4.1.10 Density-based Weighting  p. 110
    4.1.11 DEA Weighting Method  p. 110
    4.1.12 Logarithmic Opinion Pool  p. 110
  4.2 Selecting Classifiers  p. 111
    4.2.1 Partitioning the Instance Space  p. 114
  4.3 Mixture of Experts and Metalearning  p. 121
|
5 Gradient Boosting Machines  p. 131
  5.2 Gradient Boosting for Regression Tasks  p. 132
  5.3 Adjusting Gradient Boosting for Classification Tasks  p. 133
  5.4 Gradient Boosting Trees  p. 135
  5.5 Regularization Methods for Gradient Boosting Machines  p. 136
    5.5.3 Stochastic Gradient Boosting  p. 137
    5.5.4 Decision Tree Regularization  p. 138
  5.6 Gradient Boosting Trees vs. Random Forest  p. 138
  5.8 Other Popular Gradient Boosting Tree Packages: LightGBM and CatBoost  p. 141
  5.9 Training GBMs in R Using the XGBoost Package  p. 143
|
|
6 Ensemble Diversity  p. 149
  6.2 Manipulating the Inducer  p. 151
    6.2.1 Manipulation of the Algorithm's Hyperparameters  p. 151
    6.2.2 Starting Point in Hypothesis Space  p. 151
    6.2.3 Hypothesis Space Traversal  p. 152
  6.3 Manipulating the Training Samples  p. 152
  6.4 Manipulating the Target Attribute Representation  p. 157
  6.5 Partitioning the Search Space  p. 159
    6.5.2 Feature Subset-based Ensemble Methods  p. 161
  6.7 Measuring the Diversity  p. 170
|
|
7 Ensemble Selection  p. 173
  7.2 Preselection of the Ensemble Size  p. 174
  7.3 Selection of the Ensemble Size During Training  p. 174
  7.4 Pruning -- Postselection of the Ensemble Size  p. 175
    7.4.1 Ranking-based Methods  p. 176
    7.4.2 Search-based Methods  p. 176
    7.4.3 Clustering-based Methods  p. 181
  7.5 Back to a Single Model: Ensemble Derived Models  p. 185
|
8 Error Correcting Output Codes  p. 187
  8.1 Code Matrix Decomposition of Multiclass Problems  p. 189
  8.2 Type I -- Training an Ensemble Given a Code Matrix  p. 190
    8.2.1 Error-Correcting Output Codes  p. 192
    8.2.2 Code Matrix Framework  p. 193
    8.2.3 Code Matrix Design Problem  p. 194
    8.2.4 Orthogonal Arrays (OA)  p. 198
    8.2.6 Probabilistic Error-Correcting Output Code  p. 200
    8.2.7 Other ECOC Strategies  p. 201
  8.3 Type II -- Adapting Code Matrices to Multiclass Problems  p. 202
|
9 Evaluating Ensembles of Classifiers  p. 207
    9.1.1 Theoretical Estimation of Generalization Error  p. 208
    9.1.2 Empirical Estimation of Generalization Error  p. 209
    9.1.3 Alternatives to the Accuracy Measure  p. 212
    9.1.6 Classifier Evaluation Under Limited Resources  p. 215
    9.1.7 Statistical Tests for Comparing Ensembles  p. 227
  9.2 Computational Complexity  p. 230
  9.3 Interpretability of the Resulting Ensemble  p. 231
  9.4 Scalability to Large Datasets  p. 232
  9.9 Software Availability  p. 235
  9.10 Which Ensemble Method Should Be Used?  p. 235
Bibliography  p. 239
Index  p. 281