
E-book: Ensemble Learning: Pattern Classification Using Ensemble Methods (Second Edition)

Lior Rokach (Ben-Gurion University of the Negev, Israel)
  • Format: PDF+DRM
  • Price: 99,45 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital Rights Management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You will also need to create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you need to install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

This updated compendium provides a methodical introduction to, and a coherent and unified repository of, ensemble methods, theories, trends, challenges, and applications. More than a third of this edition consists of new material, including descriptions of the classic methods as well as extensions and novel approaches that have recently been introduced. Along with an algorithmic description of each method, the settings in which it is applicable and the consequences and tradeoffs incurred by using it are succinctly featured. R code for implementing the algorithms is also emphasized. This unique volume provides researchers, students, and practitioners in industry with a comprehensive, concise, and convenient resource on ensemble learning methods.
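As a taste of the style of R code the book emphasizes, below is a minimal sketch (illustrative only, not taken from the book) that trains two of the ensembles covered in the table of contents: a random forest via the randomForest package (cf. Section 3.11.2) and a gradient boosting machine via the xgboost package (cf. Section 5.9). The iris dataset and all parameter values are assumptions made for the example.

    # Illustrative sketch only -- assumes the randomForest and xgboost
    # packages are installed; not code from the book itself.
    library(randomForest)  # bagging-style decision forest
    library(xgboost)       # gradient boosting machine

    data(iris)
    set.seed(42)

    # Random forest: 500 bootstrap-aggregated decision trees
    rf <- randomForest(Species ~ ., data = iris, ntree = 500)
    print(rf)  # reports the out-of-bag error estimate

    # xgboost expects a numeric matrix and zero-based integer labels
    X <- as.matrix(iris[, 1:4])
    y <- as.integer(iris$Species) - 1
    bst <- xgboost(data = X, label = y, nrounds = 50,
                   objective = "multi:softmax", num_class = 3,
                   verbose = 0)
    mean(predict(bst, X) == y)  # resubstitution accuracy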

Preface vii
1 Introduction to Machine Learning 1
1.1 Supervised Learning 3
1.1.1 Overview 3
1.1.2 The Classification Task 3
1.1.3 Mathematical Notation for Supervised Learning 6
1.2 Induction Algorithms 9
1.3 Rule Induction 10
1.4 Decision Trees 11
1.5 Bayesian Methods 13
1.5.1 Overview 13
1.5.2 Naive Bayes 14
1.5.3 Other Bayesian Methods 17
1.6 Other Induction Methods 17
1.6.1 Neural Networks 17
1.6.2 Genetic Algorithms 20
1.6.3 Instance-based Learning 20
1.6.4 Support Vector Machines 21
1.6.5 Ensemble Learning 21
2 Classification and Regression Trees 23
2.1 Training a Decision Tree 26
2.2 Illustrative Example 27
2.3 Stopping Criteria 31
2.4 Characteristics of Classification Trees 32
2.4.1 Tree Size 33
2.4.2 The Hierarchical Nature of Decision Trees 34
2.5 Overfitting and Underfitting 35
2.6 Beyond Classification Tasks 37
2.7 Advantages of Decision Trees 38
2.8 Disadvantages of Decision Trees 39
2.9 Decision Forest for Mitigating Learning Challenges 41
2.10 Relation to Rule Induction 43
2.11 Using Decision Trees in R 44
2.11.1 Party Package 45
2.11.2 The Rpart Package 48
3 Introduction to Ensemble Learning 51
3.1 Back to the Roots 52
3.2 The Wisdom of Crowds 53
3.3 The Bagging Algorithm 54
3.4 The Boosting Algorithm 60
3.5 The AdaBoost Algorithm 61
3.6 Occam's Razor and AdaBoost's Training and Generalization Error 66
3.7 No-Free-Lunch Theorem and Ensemble Learning 71
3.8 Bias-Variance Decomposition and Ensemble Learning 73
3.9 Classifier Dependency 75
3.9.1 Dependent Methods 75
3.9.2 Independent Methods 86
3.9.3 Extremely Randomized Trees 90
3.9.4 Rotation Forest 90
3.9.5 Random Projections 91
3.9.6 Nonlinear Boosting Projection (NLBP) 91
3.9.7 Cross-Validated Committees 93
3.9.8 Robust Boosting 94
3.9.9 IPGA-Forest 96
3.9.10 Switching Classes 97
3.10 Ensemble Methods for Advanced Classification Tasks 97
3.10.1 Cost-Sensitive Classification 97
3.10.2 Ensemble for Learning Concept Drift 99
3.10.3 Reject Driven Classification 99
3.11 Using R for Training a Decision Forest 99
3.11.1 Training a Random Forest with the Party Package 100
3.11.2 RandomForest Package 100
3.12 Scaling Up Decision Forests Methods 101
3.13 Ensemble Methods and Deep Neural Networks 103
4 Ensemble Classification 105
4.1 Fusion Methods 105
4.1.1 Weighting Methods 105
4.1.2 Majority Voting 106
4.1.3 Performance Weighting 107
4.1.4 Distribution Summation 108
4.1.5 Bayesian Combination 108
4.1.6 Dempster-Shafer 108
4.1.7 Vogging 109
4.1.8 Naive Bayes 109
4.1.9 Entropy Weighting 109
4.1.10 Density-based Weighting 110
4.1.11 DEA Weighting Method 110
4.1.12 Logarithmic Opinion Pool 110
4.1.13 Order Statistics 110
4.2 Selecting Classifiers 111
4.2.1 Partitioning the Instance Space 114
4.3 Mixture of Experts and Metalearning 121
4.3.1 Stacking 121
4.3.2 Arbiter Trees 124
4.3.3 Combiner Trees 126
4.3.4 Grading 127
4.3.5 Gating Network 128
5 Gradient Boosting Machines 131
5.1 Introduction 131
5.2 Gradient Boosting for Regression Tasks 132
5.3 Adjusting Gradient Boosting for Classification Tasks 133
5.4 Gradient Boosting Trees 135
5.5 Regularization Methods for Gradient Boosting Machines 136
5.5.1 Number of Models 137
5.5.2 Shrinkage 137
5.5.3 Stochastic Gradient Boosting 137
5.5.4 Decision Tree Regularization 138
5.6 Gradient Boosting Trees vs. Random Forest 138
5.7 XGBoost Algorithm 139
5.8 Other Popular Gradient Boosting Tree Packages: LightGBM and CatBoost 141
5.9 Training GBMs in R Using the XGBoost Package 143
6 Ensemble Diversity 149
6.1 Overview 149
6.2 Manipulating the Inducer 151
6.2.1 Manipulation of the Algorithm's Hyperparameters 151
6.2.2 Starting Point in Hypothesis Space 151
6.2.3 Hypothesis Space Traversal 152
6.3 Manipulating the Training Samples 152
6.3.1 Resampling 153
6.3.2 Creation 154
6.3.3 Partitioning 156
6.4 Manipulating the Target Attribute Representation 157
6.4.1 Label Switching 158
6.5 Partitioning the Search Space 159
6.5.1 Divide and Conquer 160
6.5.2 Feature Subset-based Ensemble Methods 161
6.6 Multi-inducers 167
6.7 Measuring the Diversity 170
7 Ensemble Selection 173
7.1 Ensemble Selection 173
7.2 Preselection of the Ensemble Size 174
7.3 Selection of the Ensemble Size During Training 174
7.4 Pruning -- Postselection of the Ensemble Size 175
7.4.1 Ranking-based Methods 176
7.4.2 Search-based Methods 176
7.4.3 Clustering-based Methods 181
7.4.4 Pruning Timing 182
7.5 Back to a Single Model: Ensemble Derived Models 185
8 Error Correcting Output Codes 187
8.1 Code Matrix Decomposition of Multiclass Problems 189
8.2 Type I -- Training an Ensemble Given a Code Matrix 190
8.2.1 Error-Correcting Output Codes 192
8.2.2 Code Matrix Framework 193
8.2.3 Code Matrix Design Problem 194
8.2.4 Orthogonal Arrays (OA) 198
8.2.5 Hadamard Matrix 200
8.2.6 Probabilistic Error-Correcting Output Code 200
8.2.7 Other ECOC Strategies 201
8.3 Type II -- Adapting Code Matrices to Multiclass Problems 202
9 Evaluating Ensembles of Classifiers 207
9.1 Generalization Error 207
9.1.1 Theoretical Estimation of Generalization Error 208
9.1.2 Empirical Estimation of Generalization Error 209
9.1.3 Alternatives to the Accuracy Measure 212
9.1.4 The F-Measure 213
9.1.5 Confusion Matrix 214
9.1.6 Classifier Evaluation Under Limited Resources 215
9.1.7 Statistical Tests for Comparing Ensembles 227
9.2 Computational Complexity 230
9.3 Interpretability of the Resulting Ensemble 231
9.4 Scalability to Large Datasets 232
9.5 Robustness 234
9.6 Stability 234
9.7 Flexibility 234
9.8 Usability 235
9.9 Software Availability 235
9.10 Which Ensemble Method Should Be Used? 235
Bibliography 239
Index 281