
E-book: Probabilistic Graphical Models: Principles and Applications

  • Format: EPUB+DRM
  • Price: 55.56 €*
  • * the price is final, i.e., no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

This fully updated new edition of a uniquely accessible textbook/reference provides a general introduction to probabilistic graphical models (PGMs) from an engineering perspective.  It features new material on partially observable Markov decision processes, causal graphical models, causal discovery and deep learning, as well as an even greater number of exercises; it also incorporates a software library for several graphical models in Python.

The book covers the fundamentals for each of the main classes of PGMs, including representation, inference and learning principles, and reviews real-world applications for each type of model. These applications are drawn from a broad range of disciplines, highlighting the many uses of Bayesian classifiers, hidden Markov models, Bayesian networks, dynamic and temporal Bayesian networks, Markov random fields, influence diagrams, and Markov decision processes.
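As an illustrative taste of the simplest model in that list, here is a minimal sketch of a naive Bayesian classifier with discrete features and Laplace smoothing. This is the author of this summary's own toy example, not code from the book's accompanying Python library:

```python
# Minimal naive Bayes sketch: discrete features, Laplace smoothing.
from collections import Counter, defaultdict

def train(data):
    """data: list of (feature_tuple, label). Returns class counts and
    per-(feature index, label) value counts."""
    labels = Counter(label for _, label in data)
    counts = defaultdict(Counter)
    for feats, label in data:
        for i, v in enumerate(feats):
            counts[(i, label)][v] += 1
    return labels, counts

def predict(feats, labels, counts):
    """Pick the label maximizing P(label) * prod_i P(feat_i | label)."""
    total = sum(labels.values())
    best, best_p = None, -1.0
    for label, n in labels.items():
        p = n / total  # class prior
        for i, v in enumerate(feats):
            c = counts[(i, label)]
            p *= (c[v] + 1) / (n + len(c) + 1)  # Laplace-smoothed likelihood
        if p > best_p:
            best, best_p = label, p
    return best

# Toy training set: (weather, temperature) -> play?
data = [(("sunny", "hot"), "no"), (("rainy", "mild"), "yes"),
        (("sunny", "mild"), "yes"), (("sunny", "hot"), "no")]
labels, counts = train(data)
print(predict(("sunny", "hot"), labels, counts))  # prints "no"
```

The "naive" part is the conditional-independence assumption in `predict`: each feature contributes an independent likelihood factor given the class, which keeps both training and prediction linear in the number of features.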

Topics and features:

  • Presents a unified framework encompassing all of the main classes of PGMs
  • Explores the fundamental aspects of representation, inference and learning for each technique
  • Examines new material on partially observable Markov decision processes and causal graphical models
  • Includes a new chapter introducing deep neural networks and their relation with probabilistic graphical models
  • Covers multidimensional Bayesian classifiers, relational graphical models, and causal models
  • Provides substantial chapter-ending exercises, suggestions for further reading, and ideas for research or programming projects
  • Describes classifiers such as Gaussian Naive Bayes, Circular Chain Classifiers, and Hierarchical Classifiers with Bayesian Networks
  • Outlines the practical application of the different techniques
  • Suggests possible course outlines for instructors

This classroom-tested work is suitable as a textbook for an advanced undergraduate or a graduate course in probabilistic graphical models for students of computer science, engineering, and physics. Professionals wishing to apply probabilistic graphical models in their own field, or interested in the basis of these techniques, will also find the book to be an invaluable reference.

Dr. Luis Enrique Sucar is a Senior Research Scientist at the National Institute for Astrophysics, Optics and Electronics (INAOE), Puebla, Mexico. He received the National Science Prize in 2016.
Part I Fundamentals
1 Introduction 3(12)
1.1 Uncertainty 3(1)
1.1.1 Effects of Uncertainty 4(1)
1.2 A Brief History 4(1)
1.3 Basic Probabilistic Models 5(3)
1.3.1 An Example 7(1)
1.4 Probabilistic Graphical Models 8(2)
1.5 Representation, Inference and Learning 10(1)
1.6 Applications 11(1)
1.7 Overview of the Book 12(1)
1.8 Additional Reading 13(1)
References 13(2)
2 Probability Theory 15(12)
2.1 Introduction 15(2)
2.2 Basic Rules 17(1)
2.3 Random Variables 18(5)
2.3.1 Two Dimensional Random Variables 22(1)
2.4 Information Theory 23(2)
2.5 Additional Reading 25(1)
2.6 Exercises 25(1)
References 26(1)
3 Graph Theory 27(16)
3.1 Definitions 27(1)
3.2 Types of Graphs 28(1)
3.3 Trajectories and Circuits 29(2)
3.4 Graph Isomorphism 31(1)
3.5 Trees 31(2)
3.6 Cliques 33(1)
3.7 Perfect Ordering 34(1)
3.8 Ordering and Triangulation Algorithms 35(2)
3.8.1 Maximum Cardinality Search 35(1)
3.8.2 Graph Filling 36(1)
3.9 Additional Reading 37(1)
3.10 Exercises 37(2)
References 39(4)
Part II Probabilistic Models
4 Bayesian Classifiers 43(28)
4.1 Introduction 43(2)
4.1.1 Classifier Evaluation 44(1)
4.2 Bayesian Classifier 45(4)
4.2.1 Naive Bayesian Classifier 46(3)
4.3 Gaussian Naive Bayes 49(1)
4.4 Alternative Models: TAN, BAN 50(2)
4.5 Semi-naive Bayesian Classifiers 52(2)
4.6 Multidimensional Bayesian Classifiers 54(5)
4.6.1 Multidimensional Bayesian Network Classifiers 55(1)
4.6.2 Chain Classifiers 56(3)
4.7 Hierarchical Classification 59(3)
4.7.1 Chained Path Evaluation 59(2)
4.7.2 Hierarchical Classification with Bayesian Networks 61(1)
4.8 Applications 62(4)
4.8.1 Visual Skin Detection 63(2)
4.8.2 HIV Drug Selection 65(1)
4.9 Additional Reading 66(1)
4.10 Exercises 66(2)
References 68(3)
5 Hidden Markov Models 71(22)
5.1 Introduction 71(1)
5.2 Markov Chains 72(4)
5.2.1 Parameter Estimation 74(1)
5.2.2 Convergence 75(1)
5.3 Hidden Markov Models 76(10)
5.3.1 Evaluation 78(2)
5.3.2 State Estimation 80(2)
5.3.3 Learning 82(2)
5.3.4 Gaussian Hidden Markov Models 84(1)
5.3.5 Extensions 84(2)
5.4 Applications 86(3)
5.4.1 PageRank 86(1)
5.4.2 Gesture Recognition 87(2)
5.5 Additional Reading 89(1)
5.6 Exercises 89(1)
References 90(3)
6 Markov Random Fields 93(18)
6.1 Introduction 93(2)
6.2 Markov Random Fields 95(3)
6.2.1 Regular Markov Random Fields 96(2)
6.3 Gibbs Random Fields 98(1)
6.4 Inference 99(2)
6.5 Parameter Estimation 101(1)
6.5.1 Parameter Estimation with Labeled Data 101(1)
6.6 Conditional Random Fields 102(2)
6.7 Applications 104(4)
6.7.1 Image Smoothing 104(2)
6.7.2 Improving Image Annotation 106(2)
6.8 Additional Reading 108(1)
6.9 Exercises 109(1)
References 110(1)
7 Bayesian Networks: Representation and Inference 111(42)
7.1 Introduction 111(1)
7.2 Representation 112(10)
7.2.1 Structure 113(4)
7.2.2 Parameters 117(5)
7.3 Inference 122(21)
7.3.1 Singly Connected Networks: Belief Propagation 123(5)
7.3.2 Multiple Connected Networks 128(10)
7.3.3 Approximate Inference 138(3)
7.3.4 Most Probable Explanation 141(1)
7.3.5 Continuous Variables 141(2)
7.4 Applications 143(6)
7.4.1 Information Validation 143(4)
7.4.2 Reliability Analysis 147(2)
7.5 Additional Reading 149(1)
7.6 Exercises 150(1)
References 151(2)
8 Bayesian Networks: Learning 153(28)
8.1 Introduction 153(1)
8.2 Parameter Learning 153(7)
8.2.1 Smoothing 154(1)
8.2.2 Parameter Uncertainty 154(1)
8.2.3 Missing Data 155(3)
8.2.4 Discretization 158(2)
8.3 Structure Learning 160(10)
8.3.1 Tree Learning 161(1)
8.3.2 Learning a Polytree 162(2)
8.3.3 Search and Score Techniques 164(5)
8.3.4 Independence Tests Techniques 169(1)
8.4 Combining Expert Knowledge and Data 170(1)
8.5 Transfer Learning 171(1)
8.6 Applications 172(4)
8.6.1 Air Pollution Model for Mexico City 172(3)
8.6.2 Agricultural Planning Using Bayesian Networks 175(1)
8.7 Additional Reading 176(1)
8.8 Exercises 177(1)
References 178(3)
9 Dynamic and Temporal Bayesian Networks 181(24)
9.1 Introduction 181(1)
9.2 Dynamic Bayesian Networks 182(7)
9.2.1 Inference 183(1)
9.2.2 Sampling 183(4)
9.2.3 Learning 187(1)
9.2.4 Dynamic Bayesian Network Classifiers 188(1)
9.3 Temporal Event Networks 189(5)
9.3.1 Temporal Nodes Bayesian Networks 189(5)
9.4 Applications 194(5)
9.4.1 DBN: Gesture Recognition 194(3)
9.4.2 TNBN: Predicting HIV Mutational Pathways 197(2)
9.5 Additional Reading 199(1)
9.6 Exercises 200(2)
References 202(3)
Part III Decision Models
10 Decision Graphs 205(24)
10.1 Introduction 205(1)
10.2 Decision Theory 206(3)
10.2.1 Fundamentals 206(3)
10.3 Decision Trees 209(2)
10.4 Influence Diagrams 211(9)
10.4.1 Modeling 211(1)
10.4.2 Evaluation 212(7)
10.4.3 Extensions 219(1)
10.5 Applications 220(6)
10.5.1 Decision Support System for Lung Cancer 220(3)
10.5.2 Decision-Theoretic Caregiver 223(3)
10.6 Additional Reading 226(1)
10.7 Exercises 226(2)
References 228(1)
11 Markov Decision Processes 229(20)
11.1 Introduction 229(1)
11.2 Modeling 230(3)
11.3 Evaluation 233(2)
11.3.1 Value Iteration 233(1)
11.3.2 Policy Iteration 234(1)
11.3.3 Complexity Analysis 234(1)
11.4 Factored MDPs 235(4)
11.4.1 Abstraction 237(1)
11.4.2 Decomposition 238(1)
11.5 Applications 239(6)
11.5.1 Power Plant Operation 239(3)
11.5.2 Robot Task Coordination 242(3)
11.6 Additional Reading 245(1)
11.7 Exercises 246(1)
References 247(2)
12 Partially Observable Markov Decision Processes 249(20)
12.1 Introduction 249(1)
12.2 Representation 250(1)
12.3 Solution Techniques 251(8)
12.3.1 Value Functions 253(3)
12.3.2 Solution Algorithms 256(3)
12.4 Applications 259(6)
12.4.1 Automatic Adaptation in Virtual Rehabilitation 259(3)
12.4.2 Hierarchical POMDPs for Task Planning in Robotics 262(3)
12.5 Additional Reading 265(1)
12.6 Exercises 265(1)
References 266(3)
Part IV Relational, Causal and Deep Models
13 Relational Probabilistic Graphical Models 269(18)
13.1 Introduction 269(1)
13.2 Logic 270(3)
13.2.1 Propositional Logic 271(1)
13.2.2 First-Order Predicate Logic 272(1)
13.3 Probabilistic Relational Models 273(2)
13.3.1 Inference 275(1)
13.3.2 Learning 275(1)
13.4 Markov Logic Networks 275(3)
13.4.1 Inference 277(1)
13.4.2 Learning 278(1)
13.5 Applications 278(5)
13.5.1 Student Modeling 278(3)
13.5.2 Visual Grammars 281(2)
13.6 Additional Reading 283(1)
13.7 Exercises 284(1)
References 285(2)
14 Graphical Causal Models 287(20)
14.1 Introduction 287(1)
14.1.1 Definition of Causality 288(1)
14.2 Causal Bayesian Networks 288(4)
14.2.1 Gaussian Linear Models 290(2)
14.3 Causal Reasoning 292(3)
14.3.1 Prediction 292(2)
14.3.2 Counterfactuals 294(1)
14.4 Front Door and Back Door Criterion 295(2)
14.4.1 Back Door Criterion 295(1)
14.4.2 Front Door Criterion 295(2)
14.5 Applications 297(5)
14.5.1 Characterizing Patterns of Unfairness 297(1)
14.5.2 Accelerating Reinforcement Learning with Causal Models 298(4)
14.6 Additional Reading 302(1)
14.7 Exercises 302(3)
References 305(2)
15 Causal Discovery 307(20)
15.1 Introduction 307(2)
15.2 Types of Graphs 309(3)
15.2.1 Markov Equivalence Classes Under Causal Sufficiency 310(1)
15.2.2 Markov Equivalence Classes with Unmeasured Variables 311(1)
15.3 Causal Discovery Algorithms 312(8)
15.3.1 Score-Based Causal Discovery 313(1)
15.3.2 Constraint-Based Causal Discovery 314(4)
15.3.3 Causal Discovery with Linear Models 318(2)
15.4 Applications 320(3)
15.4.1 Learning a Causal Model for ADHD 320(1)
15.4.2 Decoding Brain Effective Connectivity Based on fNIRS 321(2)
15.5 Additional Reading 323(1)
15.6 Exercises 323(1)
References 324(3)
16 Deep Learning and Graphical Models 327(20)
16.1 Introduction 327(1)
16.2 Review of Neural Networks and Deep Learning 328(4)
16.2.1 A Brief History 328(2)
16.2.2 Deep Neural Networks 330(2)
16.3 Graphical Models and Neural Networks 332(1)
16.3.1 Naive Bayes Classifiers Versus Perceptrons 332(3)
16.3.2 Bayesian Networks Versus Multi-layer Neural Networks 334(1)
16.4 Hybrid Models 335(4)
16.4.1 Testing Bayesian Networks 335(2)
16.4.2 Integrating Graphical and Deep Models 337(2)
16.5 Applications 339(5)
16.5.1 Human Body Pose Tracking 339(2)
16.5.2 Neural Enhanced Belief Propagation for Error Correction 341(3)
16.6 Additional Reading 344(1)
16.7 Exercises 345(1)
References 345(2)
Appendix A: A Python Library for Inference and Learning 347(2)
Glossary 349(4)
Index 353