
Learning with Partially Labeled and Interdependent Data, 2015 ed. [Hardback]

  • Format: Hardback, XIII + 106 pages, 12 black-and-white illustrations, height x width: 235x155 mm, weight: 454 g
  • Publication date: 21-May-2015
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319157256
  • ISBN-13: 9783319157252
  • Hardback
  • Price: 48,70 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 57,29 €
  • You save 15%
  • Delivery from the publisher takes approximately 2-4 weeks
  • Free shipping
  • Order lead time: 2-4 weeks
This book develops two key machine learning principles: the semi-supervised paradigm and learning with interdependent data. It presents new applications, primarily web-related, that go beyond the classical machine learning framework by learning with interdependent data.

The book traces how the semi-supervised and learning-to-rank paradigms emerged from new web applications, which produce massive amounts of heterogeneous textual data. It explains that, although semi-supervised learning techniques are widely used, they allow only a limited analysis of the information content and thus do not meet the demands of many web-related tasks.
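
The basic semi-supervised idea can be illustrated with a short, self-contained sketch. The self-training loop below is one common pseudo-labeling technique, related in spirit to the methods of Section 3.4; the toy dataset, the logistic regression base learner, and the 0.95 confidence threshold are illustrative assumptions, not the algorithms developed in the book.

    # Minimal self-training sketch (a common pseudo-labeling technique).
    # Toy data, base learner, and threshold are illustrative assumptions only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy data: a handful of labeled points and many unlabeled ones from two blobs.
    X_lab = np.vstack([rng.normal(-2, 1, (5, 2)), rng.normal(2, 1, (5, 2))])
    y_lab = np.array([0] * 5 + [1] * 5)
    X_unl = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

    clf = LogisticRegression()
    for _ in range(5):  # a few self-training rounds
        clf.fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unl)
        confident = proba.max(axis=1) > 0.95  # pseudo-label only high-confidence points
        if not confident.any():
            break
        X_lab = np.vstack([X_lab, X_unl[confident]])
        y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
        X_unl = X_unl[~confident]  # the rest stays unlabeled for the next round
    print(f"{len(y_lab)} labeled examples after self-training")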

Later chapters deal with the development of learning methods for ranking entities in a large collection with respect to a precise information need. In some cases, learning a ranking function can be reduced to learning a classification function over pairs of examples. The book shows that this task can be efficiently tackled in a new framework: learning with interdependent data.
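
As a rough illustration of that reduction, the sketch below turns a ranking problem into binary classification over pairs: each pair (x_i, x_j) with different relevance labels becomes one training example with feature vector x_i - x_j, in the style of RankSVM. The toy data and the choice of a linear SVM are assumptions for illustration, not the book's construction.

    # RankSVM-style sketch: ranking reduced to classification of pairs.
    # Toy data and the linear SVM are illustrative assumptions only.
    import numpy as np
    from itertools import combinations
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 4))              # 30 documents, 4 features
    w_true = np.array([1.0, -2.0, 0.5, 0.0])  # hidden "relevance" direction
    y = (X @ w_true > 0).astype(int) + (X @ w_true > 1).astype(int)  # graded relevance {0,1,2}

    # Build pairwise examples: x_i - x_j, labeled by which document is more relevant.
    pair_X, pair_y = [], []
    for i, j in combinations(range(len(X)), 2):
        if y[i] == y[j]:
            continue                          # ties carry no ordering information
        pair_X.append(X[i] - X[j])
        pair_y.append(1 if y[i] > y[j] else 0)

    clf = LinearSVC().fit(np.array(pair_X), np.array(pair_y))

    # The pairwise classifier's weight vector induces a scoring function on single
    # documents; sorting by this score yields the learned ranking.
    ranking = np.argsort(-(X @ clf.coef_.ravel()))
    print("Top-5 documents by learned score:", ranking[:5])

Note that the constructed pairs share underlying documents, so they are not independent; handling this interdependence rigorously is exactly what the framework of learning with interdependent data addresses.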

Researchers and professionals in machine learning will find these new perspectives and solutions valuable. Learning with Partially Labeled and Interdependent Data is also useful for advanced-level students of computer science, particularly those focused on statistics and learning.
1 Introduction 1
1.1 New Learning Frameworks 1
1.2 Outline 2
2 Introduction to Learning Theory 5
2.1 Empirical Risk Minimization 5
2.1.1 Assumption and Definitions 6
2.1.2 The Statement of the ERM Principle 7
2.2 Consistency of the ERM Principle 8
2.2.1 Estimation of the Generalization Error Over a Test Set 10
2.2.2 A Uniform Generalization Error Bound 11
2.2.3 Structural Risk Minimization 21
2.3 Data-Dependent Generalization Error Bounds 22
2.3.1 Rademacher Complexity 22
2.3.2 Link Between the Rademacher Complexity and the VC Dimension 23
2.3.3 Different Steps for Obtaining a Generalization Bound with the Rademacher Complexity 26
2.3.4 Properties of the Rademacher Complexity 30
3 Semi-Supervised Learning 33
3.1 Assumptions 33
3.2 Semi-Supervised Algorithms 35
3.2.1 Graphical Approaches 35
3.2.2 Generative Methods 40
3.2.3 Discriminant Models 41
3.3 Transductive Learning 44
3.3.1 Transductive Support Vector Machines 45
3.3.2 A Transductive Bound for the Voted Classifier 47
3.4 Multiview Learning Based on Pseudo-Labeling 51
3.4.1 Learning with Partially Observed Multiview Data 52
3.4.2 Multiview Self-Training 59
4 Learning with Interdependent Data 63
4.1 Pairwise Ranking Tasks 65
4.1.1 Ranking of Instances 66
4.1.2 Ranking of Alternatives 69
4.1.3 Ranking as Classification of Pairs 72
4.1.4 Other Ranking Frameworks 74
4.2 Classification of Interdependent Data 76
4.2.1 Formal Framework of Classification with Interdependent Data 76
4.2.2 Janson's Theorem and Interpretation 80
4.2.3 Generic Test Bounds 83
4.3 Generalization Bounds for Learning with Interdependent Data 84
4.3.1 Extension of McDiarmid's Theorem 85
4.3.2 The Fractional Rademacher Complexity 87
4.3.3 Estimation of the Fractional Rademacher Complexity 91
4.3.4 Application to Bipartite Ranking 94
4.3.5 Application to Ranking of Alternatives for Multiclass Data 95
References 99
Index 105