
Probabilistic Theory of Pattern Recognition, Softcover reprint of the original 1st ed. 1996 [Paperback]

  • Format: Paperback / softback, XV + 638 pages, height x width: 235x155 mm, weight: 997 g
  • Series: Stochastic Modelling and Applied Probability 31
  • Publication date: 22-Nov-2013
  • Publisher: Springer-Verlag New York Inc.
  • ISBN-10: 146126877X
  • ISBN-13: 9781461268772
  • Paperback
  • Price: 87,57 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 103,03 €
  • You save 15%
  • Free shipping
  • Delivery from the publisher takes approximately 2-4 weeks
Pattern recognition presents one of the most significant challenges for scientists and engineers, and many different approaches have been proposed. The aim of this book is to provide a self-contained account of probabilistic analysis of these approaches. The book includes a discussion of distance measures, nonparametric methods based on kernels or nearest neighbors, Vapnik-Chervonenkis theory, epsilon entropy, parametric classification, error estimation, tree classifiers, and neural networks. Wherever possible, distribution-free properties and inequalities are derived. A substantial portion of the results or the analysis is new. Over 430 problems and exercises complement the material.
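The nearest neighbor rules analyzed in the book can be illustrated with a minimal sketch (this example is not from the book; the function name `knn_classify`, the toy data, and the choice k = 3 are illustrative assumptions):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k training points nearest to `query`.

    `train` is a list of (point, label) pairs; points are tuples of floats.
    Squared Euclidean distance is used, so no square root is needed.
    """
    neighbors = sorted(
        train,
        key=lambda pl: sum((a - b) ** 2 for a, b in zip(pl[0], query)),
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B"), ((1.1, 0.9), "B")]
print(knn_classify(train, (0.1, 0.1), k=3))   # A
print(knn_classify(train, (1.0, 0.95), k=3))  # B
```

A central theme of the book is what can be said about the error probability of such rules without any assumption on the distribution of the data.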

Other information

Springer Book Archives
  • Preface
  • Introduction
  • The Bayes Error
  • Inequalities and alternate distance measures
  • Linear discrimination
  • Nearest neighbor rules
  • Consistency
  • Slow rates of convergence
  • Error estimation
  • The regular histogram rule
  • Kernel rules
  • Consistency of the k-nearest neighbor rule
  • Vapnik-Chervonenkis theory
  • Combinatorial aspects of Vapnik-Chervonenkis theory
  • Lower bounds for empirical classifier selection
  • The maximum likelihood principle
  • Parametric classification
  • Generalized linear discrimination
  • Complexity regularization
  • Condensed and edited nearest neighbor rules
  • Tree classifiers
  • Data-dependent partitioning
  • Splitting the data
  • The resubstitution estimate
  • Deleted estimates of the error probability
  • Automatic kernel rules
  • Automatic nearest neighbor rules
  • Hypercubes and discrete spaces
  • Epsilon entropy and totally bounded sets
  • Uniform laws of large numbers
  • Neural networks
  • Other error estimates
  • Feature extraction
  • Appendix
  • Notation
  • References
  • Index