
E-book: Surrogates: Gaussian Process Modeling, Design, and Optimization for the Applied Sciences [Taylor & Francis e-book]

Robert B. Gramacy (Virginia Tech Department of Statistics, USA)
  • Taylor & Francis e-book
  • Price: 170,80 €*
  • * this price grants an unlimited number of concurrent users access for an unlimited period
  • List price: 244,00 €
  • You save 30%

Surrogates is a graduate textbook, or professional handbook, on topics at the interface between machine learning, spatial statistics, computer simulation, meta-modeling (i.e., emulation), design of experiments, and optimization. Experimentation through simulation, "human out-of-the-loop" statistical support (focusing on the science), management of dynamic processes, online and real-time analysis, automation, and practical application are at the forefront.

Topics include:

  • Gaussian process (GP) regression for flexible nonparametric and nonlinear modeling (a minimal base-R sketch follows this list).
  • Applications to uncertainty quantification, sensitivity analysis, calibration of computer models to field data, sequential design/active learning, and (blackbox/Bayesian) optimization under uncertainty (see the expected-improvement sketch after this description).
  • Advanced topics include treed partitioning, local GP approximation, modeling of simulation experiments (e.g., agent-based models) with coupled nonlinear mean and variance (heteroskedastic) models.
  • Treatment appreciates historical response surface methodology (RSM) and canonical examples, but emphasizes contemporary methods and implementation in R at modern scale.
  • Rmarkdown facilitates a fully reproducible tour, complete with motivation from, application to, and illustration with, compelling real-data examples.
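
As a concrete taste of the first bullet, here is a minimal sketch of GP prediction in base R, loosely in the spirit of the book's Chapter 5 treatment. The toy sine data, unit lengthscale, and noise-free setup are illustrative assumptions for this listing, not the book's own code.

```r
## Minimal GP regression sketch in base R (illustrative assumptions:
## 1-d inputs, squared-exponential kernel with unit lengthscale, no noise).
eps <- sqrt(.Machine$double.eps)   # jitter for numerical stability

## Squared-exponential kernel for 1-d inputs.
k <- function(x, y) exp(-outer(x, y, "-")^2)

## Toy training data: a few noise-free sine evaluations.
X <- seq(0, 2*pi, length.out = 8)
y <- sin(X)

## GP predictive equations at a grid of new locations XX.
XX <- seq(-0.5, 2*pi + 0.5, length.out = 100)
Ki <- solve(k(X, X) + diag(eps, length(X)))   # inverse training covariance
KX <- k(XX, X)                                # cross-covariances
mu <- KX %*% Ki %*% y                         # posterior predictive mean
s2 <- pmax(1 - rowSums((KX %*% Ki) * KX), 0)  # pointwise predictive variance

## Visualize the fit with a 95% predictive band.
plot(X, y, pch = 19, xlab = "x", ylab = "y")
lines(XX, mu)
lines(XX, mu + 1.96*sqrt(s2), lty = 2)
lines(XX, mu - 1.96*sqrt(s2), lty = 2)
```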

Presentation targets numerically competent practitioners in engineering, physical, and biological sciences. Writing is statistical in form, but the subjects are not about statistics. Rather, they’re about prediction and synthesis under uncertainty; about visualization and information, design and decision making, computing and clean code.
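
And as a taste of the Bayesian optimization bullet, the closed-form expected improvement (EI) criterion of Jones, Schonlau and Welch (1998), which the book develops in Chapter 7, can be sketched on top of the predictive quantities from the block above. Treating the toy sine values as objective evaluations is again our own illustrative assumption.

```r
## Expected improvement (EI) for minimization, reusing mu, s2, y and XX
## from the sketch above (illustrative only; Chapter 7 develops this fully).
fmin <- min(y)                                # best objective value so far
s <- sqrt(s2)
z <- (fmin - mu) / s
ei <- (fmin - mu) * pnorm(z) + s * dnorm(z)   # closed-form EI under a GP
ei[s == 0] <- 0                               # no improvement without uncertainty
XX[which.max(ei)]                             # next input to evaluate
```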

Preface xi
1 Historical Perspective 1
1.1 Response surface methodology 3
1.1.1 What is it? 3
1.1.2 Low-order polynomials 6
1.1.3 General models, inference and sequential design 12
1.2 Computer experiments 14
1.2.1 Aircraft wing weight example 16
1.2.2 Surrogate modeling and design 19
1.3 A road map 25
1.4 Homework exercises 27
2 Four Motivating Datasets 31
2.1 Rocket booster dynamics 31
2.1.1 Data 32
2.1.2 Sequential design and nonstationary surrogate modeling 34
2.2 Radiative shock hydrodynamics 38
2.2.1 Data 39
2.2.2 Computer model calibration 41
2.3 Predicting satellite drag 45
2.3.1 Simulating drag 46
2.3.2 Surrogate drag 48
2.3.3 Big tpm runs 52
2.4 Groundwater remediation 52
2.4.1 Optimization and search 55
2.4.2 Where from here? 58
2.5 Data out there 59
2.6 Homework exercises 60
3 Steepest Ascent and Ridge Analysis 63
3.1 Path of steepest ascent 64
3.1.1 Signs and magnitudes 64
3.1.2 Confidence regions 73
3.1.3 Constrained ascent 76
3.2 Second-order response surfaces 82
3.2.1 Canonical analysis 82
3.2.2 Ridge analysis 91
3.2.3 Sampling properties 101
3.2.4 Confidence in the stationary point 104
3.2.5 Intervals on eigenvalues 110
3.3 Homework exercises 113
4 Space-filling Design 117
4.1 Latin hypercube sample 118
4.1.1 LHS properties 121
4.1.2 LHS variations and extensions 124
4.2 Maximin designs 129
4.2.1 Calculating maximin designs 130
4.2.2 Sequential maximin design 134
4.3 Libraries and hybrids 137
4.4 Homework exercises 139
5 Gaussian Process Regression 143
5.1 Gaussian process prior 144
5.1.1 Gaussian process posterior 146
5.1.2 Higher dimension? 151
5.2 GP hyperparameters 155
5.2.1 Scale 155
5.2.2 Noise and nuggets 162
5.2.3 Derivative-based hyperparameter optimization 166
5.2.4 Lengthscale: rate of decay of correlation 168
5.2.5 Anisotropic modeling 172
5.2.6 Library 180
5.2.7 A bakeoff 183
5.3 Some interpretation and perspective 187
5.3.1 Bayesian linear regression? 187
5.3.2 Latent random field 189
5.3.3 Stationary kernels 191
5.3.4 Signal-to-noise 200
5.4 Challenges and remedies 204
5.4.1 GP by convolution 205
5.4.2 Limits of stationarity 212
5.4.3 Functional and other outputs 215
5.5 Homework exercises 216
6 Model-based Design for GPs 223
6.1 Model-based design 224
6.1.1 Maximum entropy design 224
6.1.2 Minimizing predictive uncertainty 229
6.2 Sequential design/active learning 235
6.2.1 Whack-a-mole: active learning MacKay 237
6.2.2 A more aggregate criterion: active learning Cohn 246
6.2.3 Other sequential criteria 251
6.3 Fast GP updates 254
6.4 Homework exercises 255
7 Optimization 261
7.1 Surrogate-assisted optimization 262
7.1.1 A running example 263
7.1.2 A classical comparator 270
7.2 Expected improvement 272
7.2.1 Classic EI illustration 274
7.2.2 EI on our running example 277
7.2.3 Conditional improvement 285
7.2.4 Noisy objectives 286
7.2.5 Illustrating conditional improvement and noise 288
7.3 Optimization under constraints 291
7.3.1 Known constraints 292
7.3.2 Blackbox binary constraints 296
7.3.3 Real-valued constraints 305
7.3.4 Augmented Lagrangian 308
7.3.5 Augmented Lagrangian Bayesian optimization (ALBO) 314
7.3.6 ALBO implementation details 318
7.3.7 ALBO in action 321
7.3.8 Equality constraints and more 327
7.4 Homework exercises 329
8 Calibration and Sensitivity 333
8.1 Calibration 333
8.1.1 Kennedy and O'Hagan framework 335
8.1.2 Modularization 338
8.1.3 Calibration as optimization 345
8.1.4 Removing bias 350
8.1.5 Back to Bayes 356
8.2 Sensitivity analysis 361
8.2.1 Uncertainty distribution 362
8.2.2 Main effects 363
8.2.3 First-order and total sensitivity 366
8.2.4 Bayesian sensitivity 372
8.3 Homework exercises 376
9 GP Fidelity and Scale 379
9.1 Compactly supported kernels 381
9.1.1 Working with CSKs 382
9.1.2 Sharing load between mean and variance 384
9.1.3 Practical Bayesian inference and UQ 388
9.2 Partition models and regression trees 395
9.2.1 Divide-and-conquer regression 396
9.2.2 Treed Gaussian process 403
9.2.3 Regression tree extensions, off-shoots and fix-ups 415
9.3 Local approximate GPs 418
9.3.1 Local subdesign 419
9.3.2 Illustrating LAGP: ALC v. MSPE 422
9.3.3 Global LAGP surrogate 427
9.3.4 Global/local multi-resolution effect 434
9.3.5 Details and variations 436
9.3.6 Two applications 441
9.4 Homework exercises 452
10 Heteroskedasticity 457
10.1 Replication and stochastic kriging 458
10.1.1 Woodbury trick 459
10.1.2 Efficient inference and prediction under replication 460
10.2 Coupled mean and variance GPs 463
10.2.1 Latent variance process 465
10.2.2 Illustrations with hetGP 467
10.3 Sequential design 476
10.3.1 Integrated mean-squared prediction error 476
10.3.2 Lookahead over replication 481
10.3.3 Examples 488
10.3.4 Optimization, level sets, calibration and more 492
10.4 Homework exercises 495
Appendix 499
A Numerical Linear Algebra for Fast GPs 499
A.1 Intel MKL and OSX Accelerate 499
A.2 Stochastic approximation 503
B An Experiment Game 505
B.1 A shiny update to an old game 505
B.2 Benchmarking play in real-time 507
Bibliography 511
Index 531
Robert B. Gramacy is a professor of Statistics in the College of Science at Virginia Tech. His research interests include Bayesian modeling methodology, statistical computing, Monte Carlo inference, nonparametric regression, sequential design, and optimization under uncertainty. Bobby enjoys cycling and ice hockey, and watching his kids grow up too fast.