E-book: Graphical Models for Machine Learning and Digital Communication

Brendan Frey (University of Toronto)
  • Format: PDF+DRM
  • Price: 58.66 €*
  • * The price is final; no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed

  • Printing: not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You will also need to create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions. (This is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

A variety of problems in machine learning and digital communication deal with complex but structured natural or artificial systems. In this book, Brendan Frey uses graphical models as an overarching framework to describe and solve problems of pattern classification, unsupervised learning, data compression, and channel coding. Using probabilistic structures such as Bayesian belief networks and Markov random fields, he is able to describe the relationships between random variables in these systems and to apply graph-based inference techniques to develop new algorithms. Among the algorithms described are the wake-sleep algorithm for unsupervised learning, the iterative turbodecoding algorithm (currently the best error-correcting decoding algorithm), the bits-back coding method, the Markov chain Monte Carlo technique, and variational inference.
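
The sum-product algorithm (probability propagation) named above, and covered in section 2.1 of the book, is easy to illustrate in a few lines of code. Below is a minimal sketch, not taken from the book: it runs message passing on a toy chain-structured factor graph of three binary variables and checks the resulting marginals against brute-force enumeration. The variable names and the toy factor tables are assumptions made for this example.

    # Minimal sketch of the sum-product algorithm on a 3-variable chain
    # x0 -- f01 -- x1 -- f12 -- x2, with unary factors g0, g1, g2.
    # Illustrative only; factor values are made up for the example.
    import numpy as np

    g = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.3, 0.7])]
    f01 = np.array([[0.9, 0.1], [0.2, 0.8]])  # f01[x0, x1]
    f12 = np.array([[0.7, 0.3], [0.4, 0.6]])  # f12[x1, x2]

    # Forward messages: sum out the upstream variable at each factor.
    m01 = (g[0][:, None] * f01).sum(axis=0)          # message into x1
    m12 = ((g[1] * m01)[:, None] * f12).sum(axis=0)  # message into x2

    # Backward messages.
    m21 = (f12 * g[2][None, :]).sum(axis=1)          # message into x1
    m10 = (f01 * (g[1] * m21)[None, :]).sum(axis=1)  # message into x0

    def normalize(v):
        return v / v.sum()

    # Marginal at each variable: local factor times incoming messages.
    p0 = normalize(g[0] * m10)
    p1 = normalize(g[1] * m01 * m21)
    p2 = normalize(g[2] * m12)

    # Brute-force check over all 8 joint configurations.
    joint = np.einsum('i,j,k,ij,jk->ijk', g[0], g[1], g[2], f01, f12)
    joint /= joint.sum()
    assert np.allclose(p0, joint.sum(axis=(1, 2)))
    assert np.allclose(p1, joint.sum(axis=(0, 2)))
    assert np.allclose(p2, joint.sum(axis=(0, 1)))
    print("x0:", p0, " x1:", p1, " x2:", p2)

On a tree-structured graph such as this chain, two sweeps of messages give exact marginals in time linear in the number of variables, rather than exponential as in the brute-force sum; the same message-passing schedule applied to graphs with cycles yields the iterative turbodecoding algorithm discussed in chapter 6.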

Contents

Series Foreword  ix
Preface  xi
1 Introduction  1
1.1 A probabilistic perspective  2
1.2 Graphical models: Factor graphs, Markov random fields and Bayesian belief networks  8
1.3 Organization of this book  25
2 Probabilistic Inference in Graphical Models  27
2.1 Exact inference using probability propagation (the sum-product algorithm)  27
2.2 Monte Carlo inference: Gibbs sampling and slice sampling  38
2.3 Variational inference  43
2.4 Helmholtz machines  49
3 Pattern Classification  55
3.1 Bayesian networks for pattern classification  58
3.2 Autoregressive networks  59
3.3 Estimating latent variable models using the EM algorithm  64
3.4 Multiple-cause networks  68
3.5 Classification of handwritten digits  80
4 Unsupervised Learning  89
4.1 Extracting structure from images using the wake-sleep algorithm  89
4.2 Simultaneous extraction of continuous and categorical structure  96
4.3 Nonlinear Gaussian Bayesian networks (NLGBNs)  102
5 Data Compression  109
5.1 Fast compression with Bayesian networks  110
5.2 Communicating extra information through the codeword choice  111
5.3 Relationship to maximum likelihood estimation  117
5.4 The "bits-back" coding algorithm  119
5.5 Experimental results  123
5.6 Integrating over model parameters using bits-back coding  128
6 Channel Coding  129
6.1 Review: Simplifying the playing field  131
6.2 Graphical models for error correction: Turbocodes, low-density parity-check codes and more  139
6.3 "A code by any other network would not decode as sweetly"  154
6.4 Trellis-constrained codes (TCCs)  155
6.5 Decoding complexity of iterative decoders  162
6.6 Parallel iterative decoding  162
6.7 Speeding up iterative decoding by detecting variables early  165
7 Future Research Directions  171
7.1 Modularity and abstraction  171
7.2 Faster inference and learning  172
7.3 Scaling up to the brain  173
7.4 Improving model structures  174
7.5 Iterative decoding  175
7.6 Iterative decoding in the real world  177
7.7 Unification  177
References  179
Index  191