
Graphical Models for Machine Learning and Digital Communication [Hardback]

Brendan J. Frey (University of Toronto)
  • Hardback
  • Price: 22.08 €*
  • * We will send you an offer for a used copy; its price may differ from the price shown on the website.
  • This book is out of print, but we will send you an offer for a used copy.
  • Free shipping

A variety of problems in machine learning and digital communication deal with complex but structured natural or artificial systems. In this book, Brendan Frey uses graphical models as an overarching framework to describe and solve problems of pattern classification, unsupervised learning, data compression, and channel coding. Using probabilistic structures such as Bayesian belief networks and Markov random fields, he is able to describe the relationships between random variables in these systems and to apply graph-based inference techniques to develop new algorithms. Among the algorithms described are the wake-sleep algorithm for unsupervised learning, the iterative turbodecoding algorithm (currently the best error-correcting decoding algorithm), the bits-back coding method, the Markov chain Monte Carlo technique, and variational inference.
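
For readers new to the framework, the sum-product (probability propagation) algorithm that the book builds on can be seen in miniature on a toy factor graph. The sketch below is not taken from the book; the chain structure, the binary variables, and the factor tables are assumptions chosen purely for illustration. It passes messages toward the middle variable of a three-variable chain and checks the resulting marginal against brute-force enumeration of the joint.

# A minimal sum-product sketch on a chain x1 -- f12 -- x2 -- f23 -- x3.
# All variables are binary; the factor tables below are made up for
# illustration and do not come from the book.
import numpy as np

g1  = np.array([0.9, 0.1])           # unary factor on x1
f12 = np.array([[0.8, 0.2],          # pairwise factor f12[x1, x2]
                [0.3, 0.7]])
f23 = np.array([[0.6, 0.4],          # pairwise factor f23[x2, x3]
                [0.1, 0.9]])

# Message from the x1 side into x2: sum over x1 of g1(x1) * f12(x1, x2).
m_from_x1 = g1 @ f12

# Message from the x3 side into x2: sum over x3 of f23(x2, x3).
m_from_x3 = f23.sum(axis=1)

# The marginal of x2 is the normalized product of its incoming messages.
p_x2 = m_from_x1 * m_from_x3
p_x2 /= p_x2.sum()
print("sum-product p(x2):", p_x2)

# Sanity check: enumerate the full joint g1(x1) f12(x1,x2) f23(x2,x3).
joint = g1[:, None, None] * f12[:, :, None] * f23[None, :, :]
print("brute force p(x2):", joint.sum(axis=(0, 2)) / joint.sum())

On trees such as this chain, the two computations agree exactly. On graphs with cycles, the same message-passing rules applied iteratively yield approximate marginals; turbodecoding, discussed in Chapter 6, is the best-known instance of this.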

Table of Contents

Series Foreword  ix
Preface  xi
1 Introduction  1
1.1 A probabilistic perspective  2
1.2 Graphical models: Factor graphs, Markov random fields and Bayesian belief networks  8
1.3 Organization of this book  25
2 Probabilistic Inference in Graphical Models  27
2.1 Exact inference using probability propagation (the sum-product algorithm)  27
2.2 Monte Carlo inference: Gibbs sampling and slice sampling  38
2.3 Variational inference  43
2.4 Helmholtz machines  49
3 Pattern Classification  55
3.1 Bayesian networks for pattern classification  58
3.2 Autoregressive networks  59
3.3 Estimating latent variable models using the EM algorithm  64
3.4 Multiple-cause networks  68
3.5 Classification of handwritten digits  80
4 Unsupervised Learning  89
4.1 Extracting structure from images using the wake-sleep algorithm  89
4.2 Simultaneous extraction of continuous and categorical structure  96
4.3 Nonlinear Gaussian Bayesian networks (NLGBNs)  102
5 Data Compression  109
5.1 Fast compression with Bayesian networks  110
5.2 Communicating extra information through the codeword choice  111
5.3 Relationship to maximum likelihood estimation  117
5.4 The "bits-back" coding algorithm  119
5.5 Experimental results  123
5.6 Integrating over model parameters using bits-back coding  128
6 Channel Coding  129
6.1 Review: Simplifying the playing field  131
6.2 Graphical models for error correction: Turbocodes, low-density parity-check codes and more  139
6.3 "A code by any other network would not decode as sweetly"  154
6.4 Trellis-constrained codes (TCCs)  155
6.5 Decoding complexity of iterative decoders  162
6.6 Parallel iterative decoding  162
6.7 Speeding up iterative decoding by detecting variables early  165
7 Future Research Directions  171
7.1 Modularity and abstraction  171
7.2 Faster inference and learning  172
7.3 Scaling up to the brain  173
7.4 Improving model structures  174
7.5 Iterative decoding  175
7.6 Iterative decoding in the real world  177
7.7 Unification  177
References  179
Index  191