
E-book: Advanced Markov Chain Monte Carlo Methods: Learning from Past Samples [Wiley Online]

Faming Liang (Texas A&M University, USA), Chuanhai Liu (Department of Statistics, Purdue University, USA), Raymond J. Carroll (Texas A&M University, USA)
Other books on this subject:
  • Wiley Online
  • Price: €139.56*
  • * the price grants access for an unlimited number of concurrent users for an unlimited period
Markov Chain Monte Carlo (MCMC) methods are now an indispensable tool in scientific computing. This book discusses recent developments of MCMC methods with an emphasis on those making use of past sample information during simulations. The application examples are drawn from diverse fields such as bioinformatics, machine learning, social science, combinatorial optimization, and computational physics.

This volume can be used as a textbook or a reference guide for a one-semester graduate course in statistics, computational biology, engineering, and computer sciences. Applied or theoretical researchers will also find this book beneficial.



This book provides comprehensive coverage of simulation of complex systems using Monte Carlo methods.

Developing algorithms that are immune to the local-trap problem has long been considered one of the most important topics in MCMC research. Various advanced MCMC algorithms that address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables, and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples.
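To see what the local-trap problem looks like in practice, here is a minimal illustrative sketch (not taken from the book): a plain random-walk Metropolis sampler on a two-mode target. With a local proposal, the chain stays in whichever mode it starts in and never discovers the other one.

```python
import math
import random

def log_target(x):
    """Log-density of an equal mixture of N(-10, 1) and N(10, 1), via log-sum-exp."""
    a = -0.5 * (x + 10.0) ** 2
    b = -0.5 * (x - 10.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def rw_metropolis(x0, n_iter, step, seed=0):
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)           # local random-walk proposal
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y                              # Metropolis accept
        samples.append(x)
    return samples

samples = rw_metropolis(-10.0, 20000, 0.5)
# the chain starts in the left mode; crossing the deep low-density
# valley near 0 is essentially impossible with this step size
frac_left = sum(s < 0 for s in samples) / len(samples)
```

The target, start point, and step size here are arbitrary choices made for the demonstration; the point is only that a correct MCMC algorithm can still fail to explore a multimodal distribution in any realistic number of iterations.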

This book covers the multicanonical algorithm, dynamic weighting, dynamically weighted importance sampling, the Wang-Landau algorithm, the equi-energy sampler, stochastic approximation Monte Carlo, adaptive MCMC algorithms, conjugate gradient Monte Carlo, adaptive direction sampling, the sample Metropolis-Hastings algorithm, and the multiple-try sampler.
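As a taste of the population-based methods listed in the table of contents (Chapter 5), here is a hedged sketch of parallel tempering on the same kind of two-mode target, again illustrative rather than the book's own code: auxiliary chains at higher temperatures see a flattened landscape, and occasional state swaps let the cold chain escape the local trap that defeats a single random-walk chain.

```python
import math
import random

def log_target(x):
    """Log-density of an equal mixture of N(-10, 1) and N(10, 1), via log-sum-exp."""
    a = -0.5 * (x + 10.0) ** 2
    b = -0.5 * (x - 10.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def parallel_tempering(n_iter, temps=(1.0, 4.0, 16.0, 64.0), step=1.0, seed=1):
    rng = random.Random(seed)
    xs = [-10.0] * len(temps)          # every chain starts in the left mode
    cold_samples = []
    for _ in range(n_iter):
        # random-walk Metropolis update for each tempered chain pi(x)^(1/T)
        for i, t in enumerate(temps):
            y = xs[i] + rng.gauss(0.0, step * math.sqrt(t))
            if math.log(rng.random()) < (log_target(y) - log_target(xs[i])) / t:
                xs[i] = y
        # attempt a swap between one random adjacent pair of temperatures
        j = rng.randrange(len(temps) - 1)
        log_alpha = (1.0 / temps[j] - 1.0 / temps[j + 1]) * (
            log_target(xs[j + 1]) - log_target(xs[j]))
        if math.log(rng.random()) < log_alpha:
            xs[j], xs[j + 1] = xs[j + 1], xs[j]
        cold_samples.append(xs[0])
    return cold_samples

cold = parallel_tempering(30000)
# unlike the single random-walk chain, the cold chain reaches both modes
frac_right = sum(s > 0 for s in cold) / len(cold)
```

The temperature ladder, step scale, and swap schedule are assumptions chosen for this toy problem; tuning them is a substantial topic in its own right, which the book treats in depth.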

Preface xiii
Acknowledgments xvii
Publisher's Acknowledgments xix
1 Bayesian Inference and Markov Chain Monte Carlo 1
1.1 Bayes 1
1.1.1 Specification of Bayesian Models 2
1.1.2 The Jeffreys Priors and Beyond 2
1.2 Bayes Output 4
1.2.1 Credible Intervals and Regions 4
1.2.2 Hypothesis Testing: Bayes Factors 5
1.3 Monte Carlo Integration 8
1.3.1 The Problem 8
1.3.2 Monte Carlo Approximation 9
1.3.3 Monte Carlo via Importance Sampling 9
1.4 Random Variable Generation 10
1.4.1 Direct or Transformation Methods 11
1.4.2 Acceptance-Rejection Methods 11
1.4.3 The Ratio-of-Uniforms Method and Beyond 14
1.4.4 Adaptive Rejection Sampling 18
1.4.5 Perfect Sampling 18
1.5 Markov Chain Monte Carlo 18
1.5.1 Markov Chains 18
1.5.2 Convergence Results 20
1.5.3 Convergence Diagnostics 23
Exercises 24
2 The Gibbs Sampler 27
2.1 The Gibbs Sampler 27
2.2 Data Augmentation 30
2.3 Implementation Strategies and Acceleration Methods 33
2.3.1 Blocking and Collapsing 33
2.3.2 Hierarchical Centering and Reparameterization 34
2.3.3 Parameter Expansion for Data Augmentation 35
2.3.4 Alternating Subspace-Spanning Resampling 43
2.4 Applications 45
2.4.1 The Student-t Model 45
2.4.2 Robit Regression or Binary Regression with the Student-t Link 47
2.4.3 Linear Regression with Interval-Censored Responses 50
Exercises 54
Appendix 2A The EM and PX-EM Algorithms 56
3 The Metropolis-Hastings Algorithm 59
3.1 The Metropolis-Hastings Algorithm 59
3.1.1 Independence Sampler 62
3.1.2 Random Walk Chains 63
3.1.3 Problems with Metropolis-Hastings Simulations 63
3.2 Variants of the Metropolis-Hastings Algorithm 65
3.2.1 The Hit-and-Run Algorithm 65
3.2.2 The Langevin Algorithm 65
3.2.3 The Multiple-Try MH Algorithm 66
3.3 Reversible Jump MCMC Algorithm for Bayesian Model Selection Problems 67
3.3.1 Reversible Jump MCMC Algorithm 67
3.3.2 Change-Point Identification 70
3.4 Metropolis-Within-Gibbs Sampler for ChIP-chip Data Analysis 75
3.4.1 Metropolis-Within-Gibbs Sampler 75
3.4.2 Bayesian Analysis for ChIP-chip Data 76
Exercises 83
4 Auxiliary Variable MCMC Methods 85
4.1 Simulated Annealing 86
4.2 Simulated Tempering 88
4.3 The Slice Sampler 90
4.4 The Swendsen-Wang Algorithm 91
4.5 The Wolff Algorithm 93
4.6 The Møller Algorithm 95
4.7 The Exchange Algorithm 97
4.8 The Double MH Sampler 98
4.8.1 Spatial Autologistic Models 99
4.9 Monte Carlo MH Sampler 103
4.9.1 Monte Carlo MH Algorithm 103
4.9.2 Convergence 107
4.9.3 Spatial Autologistic Models (Revisited) 110
4.9.4 Marginal Inference 111
4.10 Applications 113
4.10.1 Autonormal Models 114
4.10.2 Social Networks 116
Exercises 121
5 Population-Based MCMC Methods 123
5.1 Adaptive Direction Sampling 124
5.2 Conjugate Gradient Monte Carlo 125
5.3 Sample Metropolis-Hastings Algorithm 126
5.4 Parallel Tempering 127
5.5 Evolutionary Monte Carlo 128
5.5.1 Evolutionary Monte Carlo in Binary-Coded Space 129
5.5.2 Evolutionary Monte Carlo in Continuous Space 132
5.5.3 Implementation Issues 133
5.5.4 Two Illustrative Examples 134
5.5.5 Discussion 139
5.6 Sequential Parallel Tempering for Simulation of High Dimensional Systems 140
5.6.1 Build-up Ladder Construction 141
5.6.2 Sequential Parallel Tempering 142
5.6.3 An Illustrative Example: the Witch's Hat Distribution 142
5.6.4 Discussion 145
5.7 Equi-Energy Sampler 146
5.8 Applications 148
5.8.1 Bayesian Curve Fitting 148
5.8.2 Protein Folding Simulations: 2D HP Model 153
5.8.3 Bayesian Neural Networks for Nonlinear Time Series Forecasting 156
Exercises 162
Appendix 5A Protein Sequences for 2D HP Models 163
6 Dynamic Weighting 165
6.1 Dynamic Weighting 165
6.1.1 The IWIW Principle 165
6.1.2 Tempering Dynamic Weighting Algorithm 167
6.1.3 Dynamic Weighting in Optimization 171
6.2 Dynamically Weighted Importance Sampling 173
6.2.1 The Basic Idea 173
6.2.2 A Theory of DWIS 174
6.2.3 Some IWIWp Transition Rules 176
6.2.4 Two DWIS Schemes 179
6.2.5 Weight Behavior Analysis 180
6.2.6 A Numerical Example 183
6.3 Monte Carlo Dynamically Weighted Importance Sampling 185
6.3.1 Sampling from Distributions with Intractable Normalizing Constants 185
6.3.2 Monte Carlo Dynamically Weighted Importance Sampling 186
6.3.3 Bayesian Analysis for Spatial Autologistic Models 191
6.4 Sequentially Dynamically Weighted Importance Sampling 195
Exercises 197
7 Stochastic Approximation Monte Carlo 199
7.1 Multicanonical Monte Carlo 200
7.2 1/k-Ensemble Sampling 202
7.3 The Wang-Landau Algorithm 204
7.4 Stochastic Approximation Monte Carlo 207
7.5 Applications of Stochastic Approximation Monte Carlo 218
7.5.1 Efficient p-Value Evaluation for Resampling-Based Tests 218
7.5.2 Bayesian Phylogeny Inference 222
7.5.3 Bayesian Network Learning 227
7.6 Variants of Stochastic Approximation Monte Carlo 233
7.6.1 Smoothing SAMC for Model Selection Problems 233
7.6.2 Continuous SAMC for Marginal Density Estimation 239
7.6.3 Annealing SAMC for Global Optimization 244
7.7 Theory of Stochastic Approximation Monte Carlo 253
7.7.1 Convergence 253
7.7.2 Convergence Rate 267
7.7.3 Ergodicity and its IWIW Property 271
7.8 Trajectory Averaging: Toward the Optimal Convergence Rate 275
7.8.1 Trajectory Averaging for a SAMCMC Algorithm 277
7.8.2 Trajectory Averaging for SAMC 279
7.8.3 Proof of Theorems 7.8.2 and 7.8.3 281
Exercises 296
Appendix 7A Test Functions for Global Optimization 298
8 Markov Chain Monte Carlo with Adaptive Proposals 305
8.1 Stochastic Approximation-Based Adaptive Algorithms 306
8.1.1 Ergodicity and Weak Law of Large Numbers 307
8.1.2 Adaptive Metropolis Algorithms 309
8.2 Adaptive Independent Metropolis-Hastings Algorithms 312
8.3 Regeneration-Based Adaptive Algorithms 315
8.3.1 Identification of Regeneration Times 315
8.3.2 Proposal Adaptation at Regeneration Times 317
8.4 Population-Based Adaptive Algorithms 317
8.4.1 ADS, EMC, NKC and More 317
8.4.2 Adaptive EMC 318
8.4.3 Application to Sensor Placement Problems 323
Exercises 324
References 327
Index 353
Faming Liang, Associate Professor, Department of Statistics, Texas A&M University.

Chuanhai Liu, Professor, Department of Statistics, Purdue University.

Raymond J. Carroll, Distinguished Professor, Department of Statistics, Texas A&M University.