E-book: Boosting – Foundations and Algorithms

Yoav Freund (University of California, San Diego), Robert E. Schapire (Microsoft Research)
  • Format: PDF+DRM
  • Price: 52,42 €*
  • * The price is final, i.e. no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More info here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books. It should not be confused with Adobe Reader, which is most likely already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
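
The blurb above describes boosting's core recipe only in words. As a concrete illustration, here is a minimal sketch of the generic AdaBoost loop over one-dimensional decision stumps in Python with NumPy; it is written for this page rather than taken from the book, and the function names, the toy data, and the small numerical guard are assumptions of the sketch.

    import numpy as np

    def fit_stump(X, y, w):
        # Exhaustively pick the threshold rule ("rule of thumb") with the
        # smallest weighted error; labels y are in {-1, +1} and w is a
        # probability distribution over the training examples.
        best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best[0]:
                        best = (err, j, thr, pol)
        return best

    def adaboost(X, y, rounds=10):
        # The core AdaBoost loop: fit a weak rule, weight it by its accuracy,
        # then reweight the examples so the next rule focuses on the mistakes.
        n = len(y)
        w = np.full(n, 1.0 / n)
        ensemble = []
        for _ in range(rounds):
            err, j, thr, pol = fit_stump(X, y, w)
            err = max(err, 1e-12)  # guard against a perfect stump (sketch assumption)
            alpha = 0.5 * np.log((1.0 - err) / err)
            pred = pol * np.where(X[:, j] <= thr, 1, -1)
            w = w * np.exp(-alpha * y * pred)  # upweight misclassified examples
            w = w / w.sum()
            ensemble.append((alpha, j, thr, pol))
        return ensemble

    def predict(ensemble, X):
        # Final prediction: the sign of the weighted vote of all weak rules.
        score = sum(a * p * np.where(X[:, j] <= t, 1, -1)
                    for a, j, t, p in ensemble)
        return np.sign(score)

    # Toy check: the positive class is a middle interval, which no single
    # threshold rule can represent, but a weighted vote of a few rules can.
    X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
    y = np.array([-1, 1, 1, 1, -1])
    print(predict(adaboost(X, y), X))  # expected: [-1.  1.  1.  1. -1.]

The toy run shows the point of the paragraph above: each stump alone is a weak, inaccurate rule, yet after a few rounds of reweighting, their weighted vote classifies every example correctly.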

This book, written by the inventors of the method, brings together, organizes, simplifies, and substantially extends two decades of research on boosting, presenting both theory and applications in a way that is accessible to readers from diverse backgrounds while also providing an authoritative reference for advanced researchers. With its introductory treatment of all material and its inclusion of exercises in every chapter, the book is appropriate for course use as well. The book begins with a general introduction to machine learning algorithms and their analysis; then explores the core theory of boosting, especially its ability to generalize; examines some of the myriad other theoretical viewpoints that help to explain and understand boosting; provides practical extensions of boosting for more complex learning problems; and finally presents a number of advanced theoretical topics. Numerous applications and practical illustrations are offered throughout.

Other information

Selected as a Best of 2012 by Computing Reviews.
Table of contents

Series Foreword
Preface
1 Introduction and Overview
  1.1 Classification Problems and Machine Learning
  1.2 Boosting
  1.3 Resistance to Overfitting and the Margins Theory
  1.4 Foundations and Algorithms
  Summary
  Bibliographic Notes
  Exercises
I CORE ANALYSIS
2 Foundations of Machine Learning
  2.1 A Direct Approach to Machine Learning
  2.2 General Methods of Analysis
  2.3 A Foundation for the Study of Boosting Algorithms
  Summary
  Bibliographic Notes
  Exercises
3 Using AdaBoost to Minimize Training Error
  3.1 A Bound on AdaBoost's Training Error
  3.2 A Sufficient Condition for Weak Learnability
  3.3 Relation to Chernoff Bounds
  3.4 Using and Designing Base Learning Algorithms
  Summary
  Bibliographic Notes
  Exercises
4 Direct Bounds on the Generalization Error
  4.1 Using VC Theory to Bound the Generalization Error
  4.2 Compression-Based Bounds
  4.3 The Equivalence of Strong and Weak Learnability
  Summary
  Bibliographic Notes
  Exercises
5 The Margins Explanation for Boosting's Effectiveness
  5.1 Margin as a Measure of Confidence
  5.2 A Margins-Based Analysis of the Generalization Error
  5.3 Analysis Based on Rademacher Complexity
  5.4 The Effect of Boosting on Margin Distributions
  5.5 Bias, Variance, and Stability
  5.6 Relation to Support-Vector Machines
  5.7 Practical Applications of Margins
  Summary
  Bibliographic Notes
  Exercises
II FUNDAMENTAL PERSPECTIVES
6 Game Theory, Online Learning, and Boosting
  6.1 Game Theory
  6.2 Learning in Repeated Game Playing
  6.3 Online Prediction
  6.4 Boosting
  6.5 Application to a "Mind-Reading" Game
  Summary
  Bibliographic Notes
  Exercises
7 Loss Minimization and Generalizations of Boosting
  7.1 AdaBoost's Loss Function
  7.2 Coordinate Descent
  7.3 Loss Minimization Cannot Explain Generalization
  7.4 Functional Gradient Descent
  7.5 Logistic Regression and Conditional Probabilities
  7.6 Regularization
  7.7 Applications to Data-Limited Learning
  Summary
  Bibliographic Notes
  Exercises
8 Boosting, Convex Optimization, and Information Geometry
  8.1 Iterative Projection Algorithms
  8.2 Proving the Convergence of AdaBoost
  8.3 Unification with Logistic Regression
  8.4 Application to Species Distribution Modeling
  Summary
  Bibliographic Notes
  Exercises
III ALGORITHMIC EXTENSIONS
9 Using Confidence-Rated Weak Predictions
  9.1 The Framework
  9.2 General Methods for Algorithm Design
  9.3 Learning Rule-Sets
  9.4 Alternating Decision Trees
  Summary
  Bibliographic Notes
  Exercises
10 Multiclass Classification Problems
  10.1 A Direct Extension to the Multiclass Case
  10.2 The One-against-All Reduction and Multi-label Classification
  10.3 Application to Semantic Classification
  10.4 General Reductions Using Output Codes
  Summary
  Bibliographic Notes
  Exercises
11 Learning to Rank
  11.1 A Formal Framework for Ranking Problems
  11.2 A Boosting Algorithm for the Ranking Task
  11.3 Methods for Improving Efficiency
  11.4 Multiclass, Multi-label Classification
  11.5 Applications
  Summary
  Bibliographic Notes
  Exercises
IV ADVANCED THEORY
12 Attaining the Best Possible Accuracy
  12.1 Optimality in Classification and Risk Minimization
  12.2 Approaching the Optimal Risk
  12.3 How Minimizing Risk Can Lead to Poor Accuracy
  Summary
  Bibliographic Notes
  Exercises
13 Optimally Efficient Boosting
  13.1 The Boost-by-Majority Algorithm
  13.2 Optimal Generalization Error
  13.3 Relation to AdaBoost
  Summary
  Bibliographic Notes
  Exercises
14 Boosting in Continuous Time
  14.1 Adaptiveness in the Limit of Continuous Time
  14.2 BrownBoost
  14.3 AdaBoost as a Special Case of BrownBoost
  14.4 Experiments with Noisy Data
  Summary
  Bibliographic Notes
  Exercises
Appendix: Some Notation, Definitions, and Mathematical Background
  A.1 General Notation
  A.2 Norms
  A.3 Maxima, Minima, Suprema, and Infima
  A.4 Limits
  A.5 Continuity, Closed Sets, and Compactness
  A.6 Derivatives, Gradients, and Taylor's Theorem
  A.7 Convexity
  A.8 The Method of Lagrange Multipliers
  A.9 Some Distributions and the Central Limit Theorem
Bibliography
Index of Algorithms, Figures, and Tables
Subject and Author Index