
Bayesian Artificial Intelligence, 2nd Edition [Hardback]

Kevin B. Korb (Monash University, Clayton, Victoria, Australia), Ann E. Nicholson (Monash University, Clayton, Victoria, Australia)
  • Format: Hardback, 492 pages, height x width: 234x156 mm, weight: 884 g, 44 tables, black and white; 159 illustrations, black and white
  • Series: Chapman & Hall/CRC Computer Science & Data Analysis
  • Publication date: 16-Dec-2010
  • Publisher: CRC Press Inc
  • ISBN-10: 1439815917
  • ISBN-13: 9781439815915
"The second edition of this bestseller provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. This edition contains a new chapter on Bayesian network classifiers and a new section on object-oriented Bayesian networks, along with new applications and case studies. It includes a new section that addresses foundational problems with causal discovery and Markov blanket discovery and a new section that covers methods of evaluating causal discovery programs. The book also offers more coverage on the uses of causal interventions to understand and reason with causal Bayesian networks. Supplemental materials are available on the book's website"--Provided by publisher.

"Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundation, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the Second Edition New chapter on Bayesian network classifiers New section on object-oriented Bayesian networks New section that addresses foundational problems with causal discovery and Markov blanket discovery New section that covers methods of evaluating causal discovery programs Discussions of many common modeling errors New applications and case studies More coverage on the uses of causal interventions to understand and reason with causal Bayesian networks Illustrated with real case studies, the second edition of this bestseller continues to cover the groundwork of Bayesian networks. It presents the elements of Bayesian network technology, automated causal discovery, and learning probabilities from data and shows how to employ these technologies to develop probabilistic expert systems. Web Resource The books website at www.csse.monash.edu.au/bai/book/book.html offers a variety of supplemental materials, including example Bayesian networks and data sets. Instructors can email the authors for samplesolutions to many of the problems in the text"--Provided by publisher.


Reviews

"… useful insights on Bayesian reasoning. There are extensive examples of applications and case studies. The exposition is clear, with many comments that help set the context for the material that is covered. The reader gets a strong sense that Bayesian networks are a work in progress." --John H. Maindonald, International Statistical Review (2011), 79

Praise for the First Edition: "… this excellent book would also serve well for final year undergraduate courses in mathematics or statistics and is a solid first reference text for researchers wanting to implement Bayesian belief network (BBN) solutions for practical problems. … beautifully presented, nicely written, and made accessible. Mathematical ideas, some quite deep, are presented within the flow but do not get in the way. This has the advantage that students can see and interpret the mathematics in the practical context, whereas practitioners can acquire, to personal taste, the mathematical seasoning. If you are interested in applying BBN methods to real-life problems, this book is a good place to start." --Journal of the Royal Statistical Society, Series A, Vol. 157(3)

List of Figures xvii
List of Tables xxi
Preface xxiii
About the Authors xxvii
I PROBABILISTIC REASONING 1
1 Bayesian Reasoning 3
1.1 Reasoning under uncertainty 3
1.2 Uncertainty in AI 4
1.3 Probability calculus 5
1.3.1 Conditional probability theorems 8
1.3.2 Variables 9
1.4 Interpretations of probability 10
1.5 Bayesian philosophy 12
1.5.1 Bayes' theorem 12
1.5.2 Betting and odds 14
1.5.3 Expected utility 15
1.5.4 Dutch books 16
1.5.5 Bayesian reasoning examples 17
1.5.5.1 Breast cancer 17
1.5.5.2 People v. Collins 18
1.6 The goal of Bayesian AI 21
1.7 Achieving Bayesian AI 22
1.8 Are Bayesian networks Bayesian? 22
1.9 Summary 23
1.10 Bibliographic notes 24
1.11 Technical notes 24
1.12 Problems 25
2 Introducing Bayesian Networks 29
2.1 Introduction 29
2.2 Bayesian network basics 29
2.2.1 Nodes and values 30
2.2.2 Structure 31
2.2.3 Conditional probabilities 32
2.2.4 The Markov property 33
2.3 Reasoning with Bayesian networks 33
2.3.1 Types of reasoning 34
2.3.2 Types of evidence 35
2.3.3 Reasoning with numbers 36
2.4 Understanding Bayesian networks 37
2.4.1 Representing the joint probability distribution 37
2.4.2 Pearl's network construction algorithm 37
2.4.3 Compactness and node ordering 38
2.4.4 Conditional independence 39
2.4.4.1 Causal chains 39
2.4.4.2 Common causes 40
2.4.4.3 Common effects 40
2.4.5 d-separation 41
2.5 More examples 43
2.5.1 Earthquake 43
2.5.2 Metastatic cancer 44
2.5.3 Asia 44
2.6 Summary 45
2.7 Bibliographic notes 45
2.8 Problems 50
3 Inference in Bayesian Networks 55
3.1 Introduction 55
3.2 Exact inference in chains 56
3.2.1 Two node network 56
3.2.2 Three node chain 57
3.3 Exact inference in polytrees 58
3.3.1 Kim and Pearl's message passing algorithm 59
3.3.2 Message passing example 62
3.3.3 Algorithm features 64
3.4 Inference with uncertain evidence 64
3.4.1 Using a virtual node 65
3.4.2 Virtual nodes in the message passing algorithm 67
3.4.3 Multiple virtual evidence 67
3.5 Exact inference in multiply-connected networks 69
3.5.1 Clustering methods 69
3.5.2 Junction trees 71
3.6 Approximate inference with stochastic simulation 74
3.6.1 Logic sampling 75
3.6.2 Likelihood weighting 76
3.6.3 Markov Chain Monte Carlo (MCMC) 78
3.6.4 Using virtual evidence 78
3.6.5 Assessing approximate inference algorithms 78
3.7 Other computations 80
3.7.1 Belief revision 80
3.7.2 Probability of evidence 80
3.8 Causal inference 81
3.8.1 Observation vs. intervention 81
3.8.2 Defining an intervention 83
3.8.3 Categories of intervention 84
3.8.4 Modeling effectiveness 86
3.8.5 Representing interventions 87
3.9 Summary 89
3.10 Bibliographic notes 89
3.11 Problems 91
4 Decision Networks 97
4.1 Introduction 97
4.2 Utilities 97
4.3 Decision network basics 99
4.3.1 Node types 99
4.3.2 Football team example 100
4.3.3 Evaluating decision networks 101
4.3.4 Information links 102
4.3.5 Fever example 104
4.3.6 Types of actions 104
4.4 Sequential decision making 106
4.4.1 Test-action combination 106
4.4.2 Real estate investment example 107
4.4.3 Evaluation using a decision tree model 109
4.4.4 Value of information 111
4.4.5 Direct evaluation of decision networks 112
4.5 Dynamic Bayesian networks 112
4.5.1 Nodes, structure and CPTs 113
4.5.2 Reasoning 115
4.5.3 Inference algorithms for DBNs 117
4.6 Dynamic decision networks 118
4.6.1 Mobile robot example 119
4.7 Object-oriented Bayesian networks 120
4.7.1 OOBN basics 120
4.7.2 OOBN inference 122
4.7.3 OOBN examples 123
4.7.4 "is-A" relationship: Class inheritance 125
4.8 Summary 126
4.9 Bibliographic notes 127
4.10 Problems 127
5 Applications of Bayesian Networks 133
5.1 Introduction 133
5.2 A brief survey of BN applications 134
5.2.1 Types of reasoning 134
5.2.2 Medical applications 135
5.2.3 Ecological and environmental applications 138
5.2.4 Other applications 140
5.3 Cardiovascular risk assessment 145
5.3.1 Epidemiology models for cardiovascular heart disease 145
5.3.2 The Busselton network 146
5.3.2.1 Structure 146
5.3.2.2 Parameters and discretization 147
5.3.3 The PROCAM network 148
5.3.3.1 Structure 148
5.3.3.2 Parameters and discretization 148
5.3.3.3 Points 150
5.3.3.4 Target variables 151
5.3.4 Evaluation 152
5.4 Goulburn Catchment Ecological Risk Assessment 152
5.4.1 Background: Goulburn Catchment 153
5.4.2 The Bayesian network 154
5.4.2.1 Structure 155
5.4.2.2 Parameterization 155
5.4.2.3 Evaluation 156
5.5 Bayesian poker 156
5.5.1 Five-card stud poker 156
5.5.2 A decision network for poker 158
5.5.2.1 Structure 158
5.5.2.2 Node values 159
5.5.2.3 Conditional probability tables 159
5.5.2.4 Belief updating 159
5.5.2.5 Decision node 160
5.5.2.6 The utility node 160
5.5.3 Betting with randomization 161
5.5.4 Bluffing 162
5.5.5 Experimental evaluation 162
5.6 Ambulation monitoring and fall detection 163
5.6.1 The domain 163
5.6.2 The DBN model 164
5.6.2.1 Nodes and values 164
5.6.2.2 Structure and CPTs 165
5.6.3 Case-based evaluation 167
5.6.4 An extended sensor model 167
5.7 A Nice Argument Generator (NAG) 169
5.7.1 NAG architecture 170
5.7.2 Example: An asteroid strike 172
5.7.3 The psychology of inference 173
5.7.4 Example: The asteroid strike continues 174
5.7.5 The future of argumentation 175
5.8 Summary 176
5.9 Bibliographic notes 177
5.10 Problems 178
II LEARNING CAUSAL MODELS 181
6 Learning Probabilities 185
6.1 Introduction 185
6.2 Parameterizing discrete models 185
6.2.1 Parameterizing a binomial model 185
6.2.1.1 The beta distribution 186
6.2.2 Parameterizing a multinomial model 188
6.3 Incomplete data 190
6.3.1 The Bayesian solution 191
6.3.2 Approximate solutions 192
6.3.2.1 Gibbs sampling 192
6.3.2.2 Expectation maximization 194
6.3.3 Incomplete data: summary 196
6.4 Learning local structure 196
6.4.1 Causal interaction 197
6.4.2 Noisy-or connections 197
6.4.3 Classification trees and graphs 198
6.4.4 Logit models 200
6.5 Summary 201
6.6 Bibliographic notes 201
6.7 Technical notes 202
6.8 Problems 202
7 Bayesian Network Classifiers 205
7.1 Introduction 205
7.2 Naive Bayes models 206
7.3 Semi-naive Bayes models 208
7.4 Ensemble Bayes prediction 209
7.5 The evaluation of classifiers 211
7.5.1 Predictive accuracy 211
7.5.2 Bias and variance 213
7.5.3 ROC curves and AUC 215
7.5.4 Calibration 217
7.5.5 Expected value 220
7.5.6 Proper scoring rules 222
7.5.7 Information reward 223
7.5.8 Bayesian information reward 225
7.6 Summary 227
7.7 Bibliographic notes 227
7.8 Technical notes 228
7.9 Problems 229
8 Learning Linear Causal Models 231
8.1 Introduction 231
8.2 Path models 233
8.2.1 Wright's first decomposition rule 235
8.2.2 Parameterizing linear models 238
8.2.3 Learning linear models is complex 239
8.3 Constraint-based learners 241
8.3.1 Markov equivalence 244
8.3.1.1 Arc reversal 245
8.3.1.2 Markov equivalence summary 247
8.3.2 PC algorithm 247
8.3.3 Causal discovery versus regression 249
8.4 Summary 250
8.5 Bibliographic notes 250
8.6 Technical notes 250
8.7 Problems 252
9 Learning Discrete Causal Structure 255
9.1 Introduction 255
9.2 Cooper and Herskovits's K2 256
9.2.1 Learning variable order 258
9.3 MDL causal discovery 259
9.3.1 Lam and Bacchus's MDL code for causal models 261
9.3.2 Suzuki's MDL code for causal discovery 263
9.4 Metric pattern discovery 264
9.5 CaMML: Causal discovery via MML 265
9.5.1 An MML code for causal structures 266
9.5.1.1 Totally ordered models (TOMs) 267
9.5.2 An MML metric for linear models 268
9.6 CaMML stochastic search 269
9.6.1 Genetic algorithm (GA) search 269
9.6.2 Metropolis search 270
9.6.3 Expert priors 272
9.6.4 An MML metric for discrete models 274
9.6.5 Learning hybrid models 275
9.7 Problems with causal discovery 276
9.7.1 The causal Markov condition 276
9.7.2 A lack of faith 279
9.7.3 Learning in high-dimensional spaces 284
9.8 Evaluating causal discovery 285
9.8.1 Qualitative evaluation 285
9.8.2 Quantitative evaluation 286
9.8.3 Causal Kullback-Leibler (CKL) 287
9.9 Summary 289
9.10 Bibliographic notes 289
9.11 Technical notes 290
9.12 Problems 290
III KNOWLEDGE ENGINEERING 293
10 Knowledge Engineering with Bayesian Networks 297
10.1 Introduction 297
10.1.1 Bayesian network modeling tasks 297
10.2 The KEBN process 299
10.2.1 KEBN lifecycle model 299
10.2.2 Prototyping and spiral KEBN 300
10.2.3 Boneh's KEBN process 302
10.2.4 Are BNs suitable for the domain problem? 303
10.2.5 Process management 303
10.3 Stage 1: BN structure 304
10.3.1 Nodes and values 305
10.3.1.1 Understanding the problem context 305
10.3.1.2 Types of node 305
10.3.1.3 Types of values 306
10.3.1.4 Discretization 306
10.3.2 Common modeling mistakes: nodes and values 307
10.3.3 Causal relationships 311
10.3.4 Dependence and independence relationships 312
10.3.5 Other relationships 317
10.3.5.1 Representing time 318
10.3.6 Controlling the number of arcs 319
10.3.7 Combining discrete and continuous variables 320
10.3.8 Using other knowledge representations 321
10.3.9 Common modeling mistakes: arcs 321
10.3.10 Structure evaluation 323
10.3.10.1 Elicitation review 324
10.4 Stage 2: Probability parameters 324
10.4.1 Parameter sources 325
10.4.2 Probability elicitation for discrete variables 327
10.4.3 Probability elicitation for continuous variables 330
10.4.4 Support for probability elicitation 330
10.4.5 Local structure 332
10.4.6 Case-based evaluation 334
10.4.6.1 Explanation methods 334
10.4.7 Validation methods 335
10.4.8 Sensitivity analysis 337
10.4.8.1 Sensitivity to evidence 337
10.4.8.2 Sensitivity to changes in parameters 339
10.5 Stage 3: Decision structure 341
10.6 Stage 4: Utilities (preferences) 342
10.6.1 Sensitivity of decisions 343
10.6.1.1 Disease treatment example 347
10.7 Modeling example: missing car 347
10.8 Incremental modeling 353
10.8.1 Divide-and-conquer 353
10.8.2 Top-down vs. bottom-up 354
10.9 Adaptation 354
10.9.1 Adapting parameters 355
10.9.2 Structural adaptation 357
10.10 Summary 357
10.11 Bibliographic notes 357
10.12 Problems 358
11 KEBN Case Studies 361
11.1 Introduction 361
11.2 Bayesian poker revisited 361
11.2.1 The initial prototype 361
11.2.2 Developments 1995-2000 363
11.2.3 Adaptation to Texas Hold 'em, 2003 363
11.2.4 Hybrid model, 2003 365
11.2.5 Improved opponent modeling 2005-2007 368
11.2.6 Ongoing Bayesian poker 368
11.2.7 KEBN aspects 368
11.3 An intelligent tutoring system for decimal understanding 369
11.3.1 The ITS domain 370
11.3.2 ITS system architecture 371
11.3.3 Expert elicitation 373
11.3.3.1 Nodes 374
11.3.3.2 Structure 374
11.3.3.3 Parameters 376
11.3.3.4 The evaluation process 377
11.3.3.5 Empirical evaluation 378
11.3.4 Automated methods 380
11.3.4.1 Classification 380
11.3.4.2 Parameters 381
11.3.4.3 Structure 381
11.3.5 Field trial evaluation 382
11.3.6 KEBN aspects 383
11.4 Goulburn Catchment Ecological Risk Assessment 384
11.4.1 Conceptual modeling 384
11.4.2 The KEBN process used for parameterization 385
11.4.3 Parameter estimation 386
11.4.4 Quantitative evaluation 389
11.4.4.1 Evaluation by domain expert 389
11.4.4.2 Sensitivity to findings analysis 390
11.4.4.3 Sensitivity to parameters analysis 391
11.4.5 Conclusions 393
11.4.6 KEBN aspects 394
11.5 Cardiovascular risk assessment 394
11.5.1 Learning CHD BNs 394
11.5.1.1 Experimental methodology 396
11.5.1.2 Results 397
11.5.2 The clinical support tool: TakeHeart II 398
11.5.3 KEBN aspects 402
11.6 Summary 403
A NOTATION 405
B SOFTWARE PACKAGES 409
B.1 Introduction 409
B.2 History 411
B.3 BN Software Package Survey 412
Bibliography 417
Index 453
Kevin B. Korb is a Reader in the Clayton School of Information Technology at Monash University in Australia. He earned his Ph.D. from Indiana University. His research encompasses causal discovery, probabilistic causality, evaluation theory, informal logic and argumentation, artificial evolution, and philosophy of artificial intelligence.

Ann E. Nicholson is an Associate Professor in the Clayton School of Information Technology at Monash University in Australia. She earned her Ph.D. from the University of Oxford. Her research interests include artificial intelligence, probabilistic reasoning, Bayesian networks, knowledge engineering, plan recognition, user modeling, evolutionary ethics, and data mining.