Preface  xi
|
1 Historical Perspective  1
1.1 Response surface methodology  3
1.1.1 What is it?  3
1.1.2 Low-order polynomials  6
1.1.3 General models, inference and sequential design  12
1.2 Computer experiments  14
1.2.1 Aircraft wing weight example  16
1.2.2 Surrogate modeling and design  19
1.2.3 Machine learning  25
1.3 Homework exercises  27
|
2 Four Motivating Datasets  31
2.1 Rocket booster dynamics  31
2.1.1 Data  32
2.1.2 Sequential design and nonstationary surrogate modeling  34
2.2 Radiative shock hydrodynamics  38
2.2.1 Data  39
2.2.2 Computer model calibration  41
2.3 Predicting satellite drag  45
2.3.1 …  46
2.3.2 …  48
2.3.3 …  52
2.4 Groundwater remediation  52
2.4.1 Optimization and search  55
2.4.2 …  58
2.5 …  59
2.6 Homework exercises  60
|
3 Steepest Ascent and Ridge Analysis  63
3.1 Path of steepest ascent  64
3.1.1 Signs and magnitudes  64
3.1.2 …  73
3.1.3 …  76
3.2 Second-order response surfaces  82
3.2.1 …  82
3.2.2 Ridge analysis  91
3.2.3 Sampling properties  101
3.2.4 Confidence in the stationary point  104
3.2.5 Intervals on eigenvalues  110
3.3 Homework exercises  113
|
4 Space-filling Design  117
4.1 Latin hypercube sample  118
4.1.1 …  121
4.1.2 LHS variations and extensions  124
4.2 Maximin design  129
4.2.1 Calculating maximin designs  130
4.2.2 Sequential maximin design  134
4.3 Libraries and hybrids  137
4.4 Homework exercises  139
|
5 Gaussian Process Regression  143
5.1 Gaussian process prior  144
5.1.1 Gaussian process posterior  146
5.1.2 …  151
5.2 GP hyperparameters  155
5.2.1 …  155
5.2.2 …  162
5.2.3 Derivative-based hyperparameter optimization  166
5.2.4 Lengthscale: rate of decay of correlation  168
5.2.5 Anisotropic modeling  172
5.2.6 …  180
5.2.7 …  183
5.3 Some interpretation and perspective  187
5.3.1 Bayesian linear regression?  187
5.3.2 Latent random field  189
5.3.3 …  191
5.3.4 …  200
5.4 Challenges and remedies  204
5.4.1 …  205
5.4.2 Limits of stationarity  212
5.4.3 Functional and other outputs  215
5.5 Homework exercises  216
|
6 Model-based Design for GPs  223
6.1 …  224
6.1.1 Maximum entropy design  224
6.1.2 Minimizing predictive uncertainty  229
6.2 Sequential design/active learning  235
6.2.1 Whack-a-mole: active learning MacKay  237
6.2.2 A more aggregate criterion: active learning Cohn  246
6.2.3 Other sequential criteria  251
6.3 …  254
6.4 Homework exercises  255
|
7 Optimization  261
7.1 Surrogate-assisted optimization  262
7.1.1 …  263
7.1.2 A classical comparator  270
7.2 Expected improvement  272
7.2.1 Classic EI illustration  274
7.2.2 EI on our running example  277
7.2.3 Conditional improvement  285
7.2.4 …  286
7.2.5 Illustrating conditional improvement and noise  288
7.3 Optimization under constraints  291
7.3.1 …  292
7.3.2 Blackbox binary constraints  296
7.3.3 Real-valued constraints  305
7.3.4 Augmented Lagrangian  308
7.3.5 Augmented Lagrangian Bayesian optimization (ALBO)  314
7.3.6 ALBO implementation details  318
7.3.7 …  321
7.3.8 Equality constraints and more  327
7.4 Homework exercises  329
|
8 Calibration and Sensitivity  333
8.1 Calibration  333
8.1.1 Kennedy and O'Hagan framework  335
8.1.2 …  338
8.1.3 Calibration as optimization  345
8.1.4 …  350
8.1.5 …  356
8.2 Sensitivity analysis  361
8.2.1 Uncertainty distribution  362
8.2.2 …  363
8.2.3 First-order and total sensitivity  366
8.2.4 Bayesian sensitivity  372
8.3 Homework exercises  376
|
9 GP Fidelity and Scale  379
9.1 Compactly supported kernels  381
9.1.1 …  382
9.1.2 Sharing load between mean and variance  384
9.1.3 Practical Bayesian inference and UQ  388
9.2 Partition models and regression trees  395
9.2.1 Divide-and-conquer regression  396
9.2.2 Treed Gaussian process  403
9.2.3 Regression tree extensions, off-shoots and fix-ups  415
9.3 Local approximate GPs  418
9.3.1 …  419
9.3.2 Illustrating LAGP: ALC v. MSPE  422
9.3.3 Global LAGP surrogate  427
9.3.4 Global/local multi-resolution effect  434
9.3.5 Details and variations  436
9.3.6 …  441
9.4 Homework exercises  452
|
10 Heteroskedasticity  457
10.1 Replication and stochastic kriging  458
10.1.1 …  459
10.1.2 Efficient inference and prediction under replication  460
10.2 Coupled mean and variance GPs  463
10.2.1 Latent variance process  465
10.2.2 Illustrations with hetGP  467
10.3 Sequential design  476
10.3.1 Integrated mean-squared prediction error  476
10.3.2 Lookahead over replication  481
10.3.3 …  488
10.3.4 Optimization, level sets, calibration and more  492
10.4 Homework exercises  495
|
Appendices  499

A Numerical Linear Algebra for Fast GPs  499
A.1 Intel MKL and OSX Accelerate  499
A.2 Stochastic approximation  503
|
B An Experiment Game  505
B.1 A shiny update to an old game  505
B.2 Benchmarking play in real-time  507

Bibliography  511
Index  531