E-book: Learning with Kernels – Support Vector Machines, Regularization, Optimization, and Beyond

(Max Planck Institute for Intelligent Systems)
  • Pages: 648
  • Series: Learning with Kernels
  • Publication date: 14-May-2014
  • Publisher: MIT Press
  • ISBN-13: 9780262256933
  • Format: PDF+DRM
  • Price: 109,82 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has supplied this e-book in encrypted form, which means that you need to install special software to read it. You also need to create an Adobe ID (more info here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

A comprehensive introduction to Support Vector Machines and related kernel methods.

In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs, kernels, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics.

Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.
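The modularity the blurb describes, that a kernel machine is assembled from a base algorithm plus an interchangeable kernel function, can be illustrated with a minimal sketch. The example below is not taken from the book; it is a hypothetical numpy-only implementation of kernel ridge regression (one of the simplest kernel algorithms), where the kernel is passed in as a plain function, so a linear kernel and a Gaussian (RBF) kernel plug into the same fitting routine unchanged. The helper names (`kernel_ridge_fit`, `rbf_kernel`) are illustrative, not from any library.

```python
import numpy as np

# Two interchangeable kernel functions: k(x, z) computed for all pairs.
def linear_kernel(X, Z):
    return X @ Z.T

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian kernel: k(x, z) = exp(-gamma * ||x - z||^2)
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def kernel_ridge_fit(X, y, kernel, lam=1e-2):
    # Solve (K + lam * I) alpha = y; the fitted function is
    # f(z) = sum_i alpha_i k(x_i, z).  Swapping `kernel` changes the
    # hypothesis space without touching this base algorithm.
    K = kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Z: kernel(Z, X) @ alpha

# Toy regression problem: noisy samples of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

predict = kernel_ridge_fit(X, y, rbf_kernel)   # nonlinear fit
predict_lin = kernel_ridge_fit(X, y, linear_kernel)  # same algorithm, linear fit
print(predict(np.array([[0.0]])))  # should be close to sin(0) = 0
```

The same two-line swap is what the book generalizes: keep the regularized base algorithm fixed and adapt to a new domain (strings, images, and so on) purely through the choice of kernel.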
Table of Contents

Series Foreword xiii
Preface xv
A Tutorial Introduction 1(22)
Data Representation and Similarity 1(3)
A Simple Pattern Recognition Algorithm 4(2)
Some Insights From Statistical Learning Theory 6(5)
Hyperplane Classifiers 11(4)
Support Vector Classification 15(2)
Support Vector Regression 17(2)
Kernel Principal Component Analysis 19(2)
Empirical Results and Implementations 21(2)
I CONCEPTS AND TOOLS 23(164)
Kernels 25(36)
Product Features 26(3)
The Representation of Similarities in Linear Spaces 29(16)
Examples and Properties of Kernels 45(3)
The Representation of Dissimilarities in Linear Spaces 48(7)
Summary 55(1)
Problems 55(6)
Risk and Loss Functions 61(26)
Loss Functions 62(3)
Test Error and Expected Risk 65(3)
A Statistical Perspective 68(7)
Robust Estimators 75(8)
Summary 83(1)
Problems 84(3)
Regularization 87(38)
The Regularized Risk Functional 88(1)
The Representer Theorem 89(3)
Regularization Operators 92(4)
Translation Invariant Kernels 96(9)
Translation Invariant Kernels in Higher Dimensions 105(5)
Dot Product Kernels 110(3)
Multi-Output Regularization 113(2)
Semiparametric Regularization 115(3)
Coefficient Based Regularization 118(3)
Summary 121(1)
Problems 122(3)
Elements of Statistical Learning Theory 125(24)
Introduction 125(3)
The Law of Large Numbers 128(3)
When Does Learning Work: the Question of Consistency 131(1)
Uniform Convergence and Consistency 131(3)
How to Derive a VC Bound 134(10)
A Model Selection Example 144(2)
Summary 146(1)
Problems 146(3)
Optimization 149(38)
Convex Optimization 150(4)
Unconstrained Problems 154(11)
Constrained Problems 165(10)
Interior Point Methods 175(4)
Maximum Search Problems 179(4)
Summary 183(1)
Problems 184(3)
II SUPPORT VECTOR MACHINES 187(218)
Pattern Recognition 189(38)
Separating Hyperplanes 189(3)
The Role of the Margin 192(4)
Optimal Margin Hyperplanes 196(4)
Nonlinear Support Vector Classifiers 200(4)
Soft Margin Hyperplanes 204(7)
Multi-Class Classification 211(3)
Variations on a Theme 214(1)
Experiments 215(7)
Summary 222(1)
Problems 222(5)
Single-Class Problems: Quantile Estimation and Novelty Detection 227(24)
Introduction 228(1)
A Distribution's Support and Quantiles 229(1)
Algorithms 230(4)
Optimization 234(2)
Theory 236(5)
Discussion 241(2)
Experiments 243(4)
Summary 247(1)
Problems 248(3)
Regression Estimation 251(28)
Linear Regression with Insensitive Loss Function 251(3)
Dual Problems 254(6)
ν-SV Regression 260(6)
Convex Combinations and l1-Norms 266(3)
Parametric Insensitivity Models 269(3)
Applications 272(1)
Summary 273(1)
Problems 274(5)
Implementation 279(54)
Tricks of the Trade 281(7)
Sparse Greedy Matrix Approximation 288(7)
Interior Point Algorithms 295(5)
Subset Selection Methods 300(5)
Sequential Minimal Optimization 305(7)
Iterative Methods 312(15)
Summary 327(2)
Problems 329(4)
Incorporating Invariances 333(26)
Prior Knowledge 333(2)
Transformation Invariance 335(2)
The Virtual SV Method 337(6)
Constructing Invariance Kernels 343(11)
The Jittered SV Method 354(2)
Summary 356(1)
Problems 357(2)
Learning Theory Revisited 359(46)
Concentration of Measure Inequalities 360(6)
Leave-One-Out Estimates 366(15)
PAC-Bayesian Bounds 381(10)
Operator-Theoretic Methods in Learning Theory 391(12)
Summary 403(1)
Problems 404(1)
III KERNEL METHODS 405(164)
Designing Kernels 407(20)
Tricks for Constructing Kernels 408(4)
String Kernels 412(2)
Locality-Improved Kernels 414(4)
Natural Kernels 418(5)
Summary 423(1)
Problems 423(4)
Kernel Feature Extraction 427(30)
Introduction 427(2)
Kernel PCA 429(8)
Kernel PCA Experiments 437(5)
A Framework for Feature Extraction 442(5)
Algorithms for Sparse KFA 447(3)
KFA Experiments 450(1)
Summary 451(1)
Problems 452(5)
Kernel Fisher Discriminant 457(12)
Introduction 457(1)
Fisher's Discriminant in Feature Space 458(2)
Efficient Training of Kernel Fisher Discriminants 460(4)
Probabilistic Outputs 464(2)
Experiments 466(1)
Summary 467(1)
Problems 468(1)
Bayesian Kernel Methods 469(48)
Bayesics 470(5)
Inference Methods 475(5)
Gaussian Processes 480(8)
Implementation of Gaussian Processes 488(11)
Laplacian Processes 499(7)
Relevance Vector Machines 506(5)
Summary 511(2)
Problems 513(4)
Regularized Principal Manifolds 517(26)
A Coding Framework 518(4)
A Regularized Quantization Functional 522(4)
An Algorithm for Minimizing Rreg[f] 526(3)
Connections to Other Algorithms 529(4)
Uniform Convergence Bounds 533(4)
Experiments 537(2)
Summary 539(1)
Problems 540(3)
Pre-Images and Reduced Set Methods 543(26)
The Pre-Image Problem 544(3)
Finding Approximate Pre-Images 547(5)
Reduced Set Methods 552(2)
Reduced Set Selection Methods 554(7)
Reduced Set Construction Methods 561(3)
Sequential Evaluation of Reduced Set Expansions 564(2)
Summary 566(1)
Problems 567(2)
A Addenda 569(6)
Data Sets 569(3)
Proofs 572(3)
B Mathematical Prerequisites 575(16)
Probability 575(5)
Linear Algebra 580(6)
Functional Analysis 586(5)
References 591(26)
Index 617(8)
Notation and Symbols 625