Preface    ix
Acknowledgments    xiv
Common Optimization Techniques, Equations, Symbols, and Acronyms    xv

Part I  Dimensionality Reduction and Transforms    1

1  Singular Value Decomposition (SVD)    3
   1.1  Overview    3
   1.2  Matrix Approximation    7
   1.3  Mathematical Properties and Manipulations    12
   1.4  Pseudo-Inverse, Least-Squares, and Regression    16
   1.5  Principal Component Analysis (PCA)    23
   1.6  Eigenfaces Example    28
   1.7  Truncation and Alignment    35
   1.8  Randomized Singular Value Decomposition    40
   1.9  Tensor Decompositions and N-Way Data Arrays    46
|
2  Fourier and Wavelet Transforms    53
   2.1  Fourier Series and Fourier Transforms    53
   2.2  Discrete Fourier Transform (DFT) and Fast Fourier Transform (FFT)    63
   2.3  Transforming Partial Differential Equations    70
   2.4  Gabor Transform and the Spectrogram    76
   2.5  Laplace Transform    81
   2.6  Wavelets and Multi-Resolution Analysis    85
   2.7  Two-Dimensional Transforms and Image Processing    87
|
3  Sparsity and Compressed Sensing    97
   3.1  Sparsity and Compression    97
   3.2  Compressed Sensing    101
   3.3  Compressed Sensing Examples    105
   3.4  The Geometry of Compression    109
   3.5  Sparse Regression    113
   3.6  Sparse Representation    117
   3.7  Robust Principal Component Analysis (RPCA)    120
   3.8  Sparse Sensor Placement    123

Part II  Machine Learning and Data Analysis    131

4  Regression and Model Selection    133
   4.1  Classic Curve Fitting    134
   4.2  Nonlinear Regression and Gradient Descent    140
   4.3  Regression and Ax = b: Over- and Under-Determined Systems    145
   4.4  Optimization as the Cornerstone of Regression    151
   4.5  The Pareto Front and Lex Parsimoniae    155
   4.6  Model Selection: Cross-Validation    158
   4.7  Model Selection: Information Criteria    162
|
5  Clustering and Classification    168
   5.1  Feature Selection and Data Mining    169
   5.2  Supervised versus Unsupervised Learning    174
   5.3  Unsupervised Learning: k-Means Clustering    178
   5.4  Unsupervised Hierarchical Clustering: Dendrogram    182
   5.5  Mixture Models and the Expectation-Maximization Algorithm    186
   5.6  Supervised Learning and Linear Discriminants    189
   5.7  Support Vector Machines (SVM)    193
   5.8  Classification Trees and Random Forest    198
   5.9  Top 10 Algorithms of Data Mining circa 2008 (Before the Deep Learning Revolution)    203
|
6  Neural Networks and Deep Learning    208
   6.1  Neural Networks: Single-Layer Networks    209
   6.2  Multi-Layer Networks and Activation Functions    214
   6.3  The Backpropagation Algorithm    219
   6.4  The Stochastic Gradient Descent Algorithm    222
   6.5  Deep Convolutional Neural Networks    224
   6.6  Neural Networks for Dynamical Systems    228
   6.7  Recurrent Neural Networks    233
   6.8  Autoencoders    236
   6.9  Generative Adversarial Networks (GANs)    240
   6.10 The Diversity of Neural Networks    242

Part III  Dynamics and Control    251

7  Data-Driven Dynamical Systems    253
   7.1  Overview, Motivations, and Challenges    254
   7.2  Dynamic Mode Decomposition (DMD)    260
   7.3  Sparse Identification of Nonlinear Dynamics (SINDy)    275
   7.4  Koopman Operator Theory    286
   7.5  Data-Driven Koopman Analysis    296
|
|
8  Linear Control Theory    311
   8.1  Closed-Loop Feedback Control    312
   8.2  Linear Time-Invariant Systems    317
   8.3  Controllability and Observability    322
   8.4  Optimal Full-State Control: Linear-Quadratic Regulator (LQR)    328
   8.5  Optimal Full-State Estimation: the Kalman Filter    332
   8.6  Optimal Sensor-Based Control: Linear-Quadratic Gaussian (LQG)    335
   8.7  Case Study: Inverted Pendulum on a Cart    336
   8.8  Robust Control and Frequency-Domain Techniques    346
|
9  Balanced Models for Control    360
   9.1  Model Reduction and System Identification    360
   9.2  Balanced Model Reduction    361
   9.3  System Identification    375

Part IV  Advanced Data-Driven Modeling and Control    387

10  Data-Driven Control    389
    10.1  Model Predictive Control (MPC)    390
    10.2  Nonlinear System Identification for Control    392
    10.3  Machine Learning Control    398
    10.4  Adaptive Extremum-Seeking Control    408
|
11  Reinforcement Learning    419
    11.1  Overview and Mathematical Formulation    419
    11.2  Model-Based Optimization and Control    426
    11.3  Model-Free Reinforcement Learning and Q-Learning    429
    11.4  Deep Reinforcement Learning    436
    11.5  Applications and Environments    440
    11.6  Optimal Nonlinear Control    444
|
12  Reduced-Order Models (ROMs)    449
    12.1  Proper Orthogonal Decomposition (POD) for Partial Differential Equations    449
    12.2  Optimal Basis Elements: the POD Expansion    455
    12.3  POD and Soliton Dynamics    461
    12.4  Continuous Formulation of POD    465
    12.5  POD with Symmetries: Rotations and Translations    470
    12.6  Neural Networks for Time-Stepping with POD    475
    12.7  Leveraging DMD and SINDy for Galerkin-POD    479
|
13  Interpolation for Parametric Reduced-Order Models    485
    13.1  Gappy POD    485
    13.2  Error and Convergence of Gappy POD    490
    13.3  Gappy Measurements: Minimize Condition Number    493
    13.4  Gappy Measurements: Maximal Variance    497
    13.5  POD and the Discrete Empirical Interpolation Method (DEIM)    500
    13.6  DEIM Algorithm Implementation    504
    13.7  Decoder Networks for Interpolation    508
    13.8  Randomization and Compression for ROMs    512
    13.9  Machine Learning ROMs    513
|
14  Physics-Informed Machine Learning    520
    14.1  Mathematical Foundations    520
    14.2  SINDy Autoencoder: Coordinates and Dynamics    523
    14.3    526
    14.4  Learning Nonlinear Operators    529
    14.5  Physics-Informed Neural Networks (PINNs)    533
    14.6  Learning Coarse-Graining for PDEs    535
    14.7  Deep Learning and Boundary Value Problems    539

Glossary    542
References    552
Index    588