
E-book: Ensemble Methods: Foundations and Algorithms

Zhi-Hua Zhou (Nanjing University, China)
  • Format: EPUB+DRM
  • Price: €84.49*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You must also create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you must install Adobe Digital Editions (this is a free application made specifically for reading e-books; do not confuse it with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Ensemble methods that train multiple learners and then combine them to use, with Boosting and Bagging as representatives, are well-known machine learning approaches. It has become common sense that an ensemble is usually significantly more accurate than a single learner, and ensemble methods have already achieved great success in various real-world tasks.

Twelve years have passed since the publication of the first edition of the book in 2012 (Japanese and Chinese versions published in 2017 and 2020, respectively). Many significant advances have been made in this field. First, many theoretical issues have been tackled; for example, the fundamental question of why AdaBoost seems resistant to overfitting has been addressed, so that we now understand much more about the essence of ensemble methods. Second, ensemble methods have been well developed in more machine learning fields, e.g., isolation forest in anomaly detection, so that we now have powerful ensemble methods for tasks beyond conventional supervised learning.

Third, ensemble mechanisms have also been found helpful in emerging areas such as deep learning and online learning. This edition expands on the previous one with additional content to reflect the significant advances in the field, and is written in a concise but comprehensive style to be approachable to readers new to the subject.
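The core idea the description keeps returning to, training multiple learners on resampled data and combining their outputs by voting, can be sketched in a few lines of plain Python. This is an illustrative toy under my own assumptions (the decision-stump base learner and all function names here are invented for the example, not taken from the book):

```python
import random
from collections import Counter

def bagging_train(data, learner, n_estimators=11, seed=0):
    """Bagging: fit each base learner on a bootstrap sample of the data."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        # Bootstrap sample: draw len(data) points with replacement.
        sample = [rng.choice(data) for _ in data]
        models.append(learner(sample))
    return models

def bagging_predict(models, x):
    """Combine the base learners' predictions by majority vote."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

def train_stump(sample):
    """Toy base learner: a 1-D decision stump (threshold + sign)
    that minimizes training error on its bootstrap sample."""
    best = None
    for t in sorted({x for x, _ in sample}):
        for sign in (1, -1):
            err = sum(1 for x, y in sample
                      if (sign if x >= t else -sign) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x, t=t, sign=sign: sign if x >= t else -sign

# A tiny separable 1-D dataset: (feature, label) with labels in {-1, +1}.
data = [(0.1, -1), (0.2, -1), (0.4, -1), (0.6, 1), (0.8, 1), (0.9, 1)]
models = bagging_train(data, train_stump)
preds = [bagging_predict(models, x) for x, _ in data]
```

Each stump sees a slightly different resample of the data, so the ensemble's majority vote is more stable than any single stump; this variance reduction is the intuition behind Bagging that the book develops formally.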




Preface
Notations
1. Introduction
2. Boosting
3. Bagging
4. Combination Methods
5. Diversity
6. Ensemble Pruning
7. Clustering Ensemble
8. Anomaly Detection and Isolation Forest
9. Semi-Supervised Ensemble
10. Class-Imbalance and Cost-Sensitive Ensemble
11. Deep Learning and Deep Forest
12. Advanced Topics
References
Index

Zhi-Hua Zhou is Professor of Computer Science and Artificial Intelligence at Nanjing University, a Trustee of IJCAI, and a Fellow of the ACM, AAAI, AAAS, and IEEE. He is a recipient of the IEEE Computer Society Edward J. McCluskey Technical Achievement Award and the CCF-ACM Artificial Intelligence Award.