E-book: Nonlinear System Identification: From Classical Approaches to Neural Networks, Fuzzy Models, and Gaussian Processes

  • Format: PDF+DRM
  • Publication date: 09-Sep-2020
  • Publisher: Springer Nature Switzerland AG
  • Language: English
  • ISBN-13: 9783030474393
  • Price: 110,53 €*
  • * the price is final, i.e., no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you must install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, you must install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

This book provides engineers and scientists in academia and industry with a thorough understanding of the underlying principles of nonlinear system identification. It equips them to apply the models and methods discussed to real problems with confidence, while also making them aware of potential difficulties that may arise in practice. 

Moreover, the book is self-contained, requiring only a basic grasp of matrix algebra, signals and systems, and statistics. Accordingly, it can also serve as an introduction to linear system identification, and provides a practical overview of the major optimization methods used in engineering. The focus is on gaining an intuitive understanding of the subject and the practical application of the techniques discussed. The book is not written in a theorem/proof style; instead, the mathematics is kept to a minimum, and the ideas covered are illustrated with numerous figures, examples, and real-world applications. 

In the past, nonlinear system identification was a field characterized by a variety of ad-hoc approaches, each applicable only to a very limited class of systems. With the advent of neural networks, fuzzy models, Gaussian process models, and modern structure optimization techniques, a much broader class of systems can now be handled. Although a defining characteristic of nonlinear systems is that virtually every one is unique, tools are now available that allow each of these approaches to be applied to a wide variety of systems.


1 Introduction
1(22)
1.1 Relevance of Nonlinear System Identification
1(5)
1.1.1 Linear or Nonlinear?
2(1)
1.1.2 Prediction
2(2)
1.1.3 Simulation
4(1)
1.1.4 Optimization
4(1)
1.1.5 Analysis
5(1)
1.1.6 Control
5(1)
1.1.7 Fault Detection
6(1)
1.2 Views on Nonlinear System Identification
6(1)
1.3 Tasks in Nonlinear System Identification
7(10)
1.3.1 Choice of the Model Inputs
9(2)
1.3.2 Choice of the Excitation Signals
11(1)
1.3.3 Choice of the Model Architecture
11(1)
1.3.4 Choice of the Dynamics Representation
12(1)
1.3.5 Choice of the Model Order
13(1)
1.3.6 Choice of the Model Structure and Complexity
13(1)
1.3.7 Choice of the Model Parameters
14(1)
1.3.8 Model Validation
14(1)
1.3.9 The Role of Fiddle Parameters
15(2)
1.4 White Box, Black Box, and Gray Box Models
17(1)
1.5 Outline of the Book and Some Reading Suggestions
18(2)
1.6 Terminology
20(3)
Part I Optimization
2 Introduction to Optimization
23(12)
2.1 Overview of Optimization Techniques
25(1)
2.2 Kangaroos
25(3)
2.3 Loss Functions for Supervised Methods
28(6)
2.3.1 Maximum Likelihood Method
30(2)
2.3.2 Maximum A Posteriori and Bayes Method
32(2)
2.4 Loss Functions for Unsupervised Methods
34(1)
3 Linear Optimization
35(58)
3.1 Least Squares (LS)
37(37)
3.1.1 Covariance Matrix of the Parameter Estimate
45(2)
3.1.2 Errorbars
47(3)
3.1.3 Orthogonal Regressors
50(1)
3.1.4 Regularization/Ridge Regression
51(6)
3.1.5 Ridge Regression: Alternative Formulation
57(3)
3.1.6 L1 Regularization
60(1)
3.1.7 Noise Assumptions
61(2)
3.1.8 Weighted Least Squares (WLS)
63(2)
3.1.9 Robust Regression
65(1)
3.1.10 Least Squares with Equality Constraints
66(1)
3.1.11 Smoothing Kernels
67(3)
3.1.12 Effective Number of Parameters
70(1)
3.1.13 L2 Boosting
71(3)
3.2 Recursive Least Squares (RLS)
74(5)
3.2.1 Reducing the Computational Complexity
75(2)
3.2.2 Tracking Time-Variant Processes
77(1)
3.2.3 Relationship Between the RLS and the Kalman Filter
78(1)
3.3 Linear Optimization with Inequality Constraints
79(1)
3.4 Subset Selection
80(10)
3.4.1 Methods for Subset Selection
81(4)
3.4.2 Orthogonal Least Squares (OLS) for Forward Selection
85(4)
3.4.3 Ridge Regression or Subset Selection?
89(1)
3.5 Summary
90(1)
3.6 Problems
91(2)
4 Nonlinear Local Optimization
93(36)
4.1 Batch and Sample Adaptation
95(4)
4.1.1 Mini-Batch Adaptation
97(1)
4.1.2 Sample Adaptation
97(2)
4.2 Initial Parameters
99(2)
4.3 Direct Search Algorithms
101(4)
4.3.1 Simplex Search Method
101(3)
4.3.2 Hooke-Jeeves Method
104(1)
4.4 General Gradient-Based Algorithms
105(12)
4.4.1 Line Search
106(2)
4.4.2 Finite Difference Techniques
108(1)
4.4.3 Steepest Descent
109(2)
4.4.4 Newton's Method
111(2)
4.4.5 Quasi-Newton Methods
113(2)
4.4.6 Conjugate Gradient Methods
115(2)
4.5 Nonlinear Least Squares Problems
117(5)
4.5.1 Gauss-Newton Method
119(2)
4.5.2 Levenberg-Marquardt Method
121(1)
4.6 Constrained Nonlinear Optimization
122(4)
4.7 Summary
126(2)
4.8 Problems
128(1)
5 Nonlinear Global Optimization
129(24)
5.1 Simulated Annealing (SA)
132(4)
5.2 Evolutionary Algorithms (EA)
136(13)
5.2.1 Evolution Strategies (ES)
139(4)
5.2.2 Genetic Algorithms (GA)
143(4)
5.2.3 Genetic Programming (GP)
147(2)
5.3 Branch and Bound (B&B)
149(2)
5.4 Tabu Search (TS)
151(1)
5.5 Summary
151(1)
5.6 Problems
152(1)
6 Unsupervised Learning Techniques
153(22)
6.1 Principal Component Analysis (PCA)
155(3)
6.2 Clustering Techniques
158(14)
6.2.1 K-Means Algorithm
160(3)
6.2.2 Fuzzy C-Means (FCM) Algorithm
163(2)
6.2.3 Gustafson-Kessel Algorithm
165(1)
6.2.4 Kohonen's Self-Organizing Map (SOM)
166(3)
6.2.5 Neural Gas Network
169(1)
6.2.6 Adaptive Resonance Theory (ART) Network
170(1)
6.2.7 Incorporating Information About the Output
171(1)
6.3 Summary
172(1)
6.4 Problems
173(2)
7 Model Complexity Optimization
175(58)
7.1 Introduction
176(1)
7.2 Bias/Variance Tradeoff
177(13)
7.2.1 Bias Error
178(2)
7.2.2 Variance Error
180(3)
7.2.3 Tradeoff
183(7)
7.3 Evaluating the Test Error and Alternatives
190(15)
7.3.1 Training, Validation, and Test Data
191(1)
7.3.2 Cross-Validation (CV)
192(5)
7.3.3 Information Criteria
197(3)
7.3.4 Multi-Objective Optimization
200(2)
7.3.5 Statistical Tests
202(2)
7.3.6 Correlation-Based Methods
204(1)
7.4 Explicit Structure Optimization
205(2)
7.5 Regularization: Implicit Structure Optimization
207(11)
7.5.1 Effective Parameters
208(1)
7.5.2 Regularization by Non-Smoothness Penalties
209(2)
7.5.3 Regularization by Early Stopping
211(2)
7.5.4 Regularization by Constraints
213(2)
7.5.5 Regularization by Staggered Optimization
215(1)
7.5.6 Regularization by Local Optimization
216(2)
7.6 Structured Models for Complexity Reduction
218(11)
7.6.1 Curse of Dimensionality
218(3)
7.6.2 Hybrid Structures
221(4)
7.6.3 Projection-Based Structures
225(1)
7.6.4 Additive Structures
226(1)
7.6.5 Hierarchical Structures
227(1)
7.6.6 Input Space Decomposition with Tree Structures
228(1)
7.7 Summary
229(1)
7.8 Problems
230(3)
8 Summary of Part I
233(6)
Part II Static Models
9 Introduction to Static Models
239(10)
9.1 Multivariate Systems
240(1)
9.2 Basis Function Formulation
241(4)
9.2.1 Global and Local Basis Functions
241(2)
9.2.2 Linear and Nonlinear Parameters
243(2)
9.3 Extended Basis Function Formulation
245(1)
9.4 Static Test Process
246(1)
9.5 Evaluation Criteria
247(2)
10 Linear, Polynomial, and Look-Up Table Models
249(30)
10.1 Linear Models
249(2)
10.2 Polynomial Models
251(11)
10.2.1 Regularized Polynomials
254(4)
10.2.2 Orthogonal Polynomials
258(3)
10.2.3 Summary: Polynomials
261(1)
10.3 Look-Up Table Models
262(14)
10.3.1 One-Dimensional Look-Up Tables
263(2)
10.3.2 Two-Dimensional Look-Up Tables
265(3)
10.3.3 Optimization of the Heights
268(1)
10.3.4 Optimization of the Grid
269(2)
10.3.5 Optimization of the Complete Look-Up Table
271(1)
10.3.6 Incorporation of Constraints
271(3)
10.3.7 Properties of Look-Up Table Models
274(2)
10.4 Summary
276(1)
10.5 Problems
277(2)
11 Neural Networks
279(68)
11.1 Construction Mechanisms
282(5)
11.1.1 Ridge Construction
283(1)
11.1.2 Radial Construction
283(3)
11.1.3 Tensor Product Construction
286(1)
11.2 Multilayer Perceptron (MLP) Network
287(22)
11.2.1 MLP Neuron
288(2)
11.2.2 Network Structure
290(3)
11.2.3 Backpropagation
293(1)
11.2.4 MLP Training
294(4)
11.2.5 Simulation Examples
298(2)
11.2.6 MLP Properties
300(2)
11.2.7 Projection Pursuit Regression (PPR)
302(1)
11.2.8 Multiple Hidden Layers
303(2)
11.2.9 Deep Learning
305(4)
11.3 Radial Basis Function (RBF) Networks
309(24)
11.3.1 RBF Neuron
309(4)
11.3.2 Network Structure
313(2)
11.3.3 RBF Training
315(8)
11.3.4 Simulation Examples
323(2)
11.3.5 RBF Properties
325(3)
11.3.6 Regularization Theory
328(2)
11.3.7 Normalized Radial Basis Function (NRBF) Networks
330(3)
11.4 Other Neural Networks
333(10)
11.4.1 General Regression Neural Network (GRNN)
334(1)
11.4.2 Cerebellar Model Articulation Controller (CMAC)
335(4)
11.4.3 Delaunay Networks
339(1)
11.4.4 Just-In-Time Models
340(3)
11.5 Summary
343(1)
11.6 Problems
344(3)
12 Fuzzy and Neuro-Fuzzy Models
347(46)
12.1 Fuzzy Logic
348(4)
12.1.1 Membership Functions
349(1)
12.1.2 Logic Operators
350(1)
12.1.3 Rule Fulfillment
351(1)
12.1.4 Accumulation
352(1)
12.2 Types of Fuzzy Systems
352(7)
12.2.1 Linguistic Fuzzy Systems
353(2)
12.2.2 Singleton Fuzzy Systems
355(2)
12.2.3 Takagi-Sugeno Fuzzy Systems
357(2)
12.3 Neuro-Fuzzy (NF) Networks
359(12)
12.3.1 Fuzzy Basis Functions
359(2)
12.3.2 Equivalence Between RBF Networks and Fuzzy Models
361(1)
12.3.3 What to Optimize?
362(3)
12.3.4 Interpretation of Neuro-Fuzzy Networks
365(5)
12.3.5 Incorporating and Preserving Prior Knowledge
370(1)
12.3.6 Simulation Examples
371(1)
12.4 Neuro-Fuzzy Learning Schemes
371(18)
12.4.1 Nonlinear Local Optimization
373(1)
12.4.2 Nonlinear Global Optimization
374(1)
12.4.3 Orthogonal Least Squares Learning
375(2)
12.4.4 Fuzzy Rule Extraction by a Genetic Algorithm (FUREGA)
377(10)
12.4.5 Adaptive Spline Modeling of Observation Data (ASMOD)
387(2)
12.5 Summary
389(1)
12.6 Problems
390(3)
13 Local Linear Neuro-Fuzzy Models: Fundamentals
393(54)
13.1 Basic Ideas
394(11)
13.1.1 Illustration of Local Linear Neuro-Fuzzy Models
396(4)
13.1.2 Interpretation of the Local Linear Model Offsets
400(1)
13.1.3 Interpretation as Takagi-Sugeno Fuzzy System
401(3)
13.1.4 Interpretation as Extended NRBF Network
404(1)
13.2 Parameter Optimization of the Rule Consequents
405(14)
13.2.1 Global Estimation
405(2)
13.2.2 Local Estimation
407(3)
13.2.3 Global Versus Local Estimation
410(5)
13.2.4 Robust Regression
415(1)
13.2.5 Regularized Regression
416(1)
13.2.6 Data Weighting
417(2)
13.3 Structure Optimization of the Rule Premises
419(25)
13.3.1 Local Linear Model Tree (LOLIMOT) Algorithm
421(9)
13.3.2 Different Objectives for Structure and Parameter Optimization
430(2)
13.3.3 Smoothness Optimization
432(2)
13.3.4 Splitting Ratio Optimization
434(2)
13.3.5 Merging of Local Models
436(2)
13.3.6 Principal Component Analysis for Preprocessing
438(2)
13.3.7 Models with Multiple Outputs
440(4)
13.4 Summary
444(1)
13.5 Problems
445(2)
14 Local Linear Neuro-Fuzzy Models: Advanced Aspects
447(136)
14.1 Different Input Spaces for Rule Premises and Consequents
448(6)
14.1.1 Identification of Processes with Direction-Dependent Behavior
451(3)
14.1.2 Piecewise Affine (PWA) Models
454(1)
14.2 More Complex Local Models
454(8)
14.2.1 From Local Neuro-Fuzzy Models to Polynomials
454(3)
14.2.2 Local Quadratic Models for Input Optimization
457(3)
14.2.3 Different Types of Local Models
460(2)
14.3 Structure Optimization of the Rule Consequents
462(4)
14.4 Interpolation and Extrapolation Behavior
466(8)
14.4.1 Interpolation Behavior
466(3)
14.4.2 Extrapolation Behavior
469(5)
14.5 Global and Local Linearization
474(4)
14.6 Online Learning
478(11)
14.6.1 Online Adaptation of the Rule Consequents
480(6)
14.6.2 Online Construction of the Rule Premise Structure
486(3)
14.7 Oblique Partitioning
489(7)
14.7.1 Smoothness Determination
489(1)
14.7.2 Hinging Hyperplanes
490(2)
14.7.3 Smooth Hinging Hyperplanes
492(2)
14.7.4 Hinging Hyperplane Trees (HHT)
494(2)
14.8 Hierarchical Local Model Tree (HILOMOT) Algorithm
496(40)
14.8.1 Forming the Partition of Unity
497(3)
14.8.2 Split Parameter Optimization
500(5)
14.8.3 Building up the Hierarchy
505(5)
14.8.4 Smoothness Adjustment
510(3)
14.8.5 Separable Nonlinear Least Squares
513(5)
14.8.6 Analytic Gradient Derivation
518(8)
14.8.7 Analyzing Input Relevance from Partitioning
526(4)
14.8.8 HILOMOT Versus LOLIMOT
530(6)
14.9 Errorbars, Design of Excitation Signals, and Active Learning
536(7)
14.9.1 Errorbars
537(3)
14.9.2 Detecting Extrapolation
540(1)
14.9.3 Design of Excitation Signals
541(2)
14.10 Design of Experiments
543(24)
14.10.1 Unsupervised Methods
543(3)
14.10.2 Model Variance-Oriented Methods
546(4)
14.10.3 Model Bias-Oriented Methods
550(4)
14.10.4 Active Learning with HILOMOT DoE
554(13)
14.11 Bagging Local Model Trees
567(8)
14.11.1 Unstable Models
569(1)
14.11.2 Bagging with HILOMOT
569(2)
14.11.3 Bootstrapping for Confidence Assessment
571(2)
14.11.4 Model Weighting
573(2)
14.12 Summary and Conclusions
575(5)
14.13 Problems
580(3)
15 Input Selection for Local Model Approaches
583(56)
15.1 Test Processes
586(4)
15.1.1 Test Process One (TP1)
586(1)
15.1.2 Test Process Two (TP2)
587(1)
15.1.3 Test Process Three (TP3)
588(1)
15.1.4 Test Process Four (TP4)
588(2)
15.2 Mixed Wrapper-Embedded Input Selection Approach: Authored by Julian Belz
590(14)
15.2.1 Investigation with Test Processes
593(1)
15.2.2 Test Process Two
594(1)
15.2.3 Extensive Simulation Studies
595(9)
15.3 Regularization-Based Input Selection Approach: Authored by Julian Belz
604(16)
15.3.1 Normalized L1 Split Regularization
606(5)
15.3.2 Investigation with Test Processes
611(9)
15.4 Embedded Approach: Authored by Julian Belz
620(6)
15.4.1 Partition Analysis
621(2)
15.4.2 Investigation with Test Processes
623(3)
15.5 Visualization: Partial Dependence Plots
626(5)
15.5.1 Investigation with Test Processes
628(3)
15.6 Miles per Gallon Data Set
631(8)
15.6.1 Mixed Wrapper-Embedded Input Selection
632(1)
15.6.2 Regularization-Based Input Selection
633(2)
15.6.3 Visualization: Partial Dependence Plot
635(1)
15.6.4 Critical Assessment of Partial Dependence Plots
636(3)
16 Gaussian Process Models (GPMs)
639(70)
16.1 Overview on Kernel Methods
640(4)
16.1.1 LS Kernel Methods
644(1)
16.1.2 Non-LS Kernel Methods
644(1)
16.2 Kernels
644(2)
16.3 Kernel Ridge Regression
646(3)
16.3.1 Transition to Kernels
647(2)
16.4 Regularizing Parameters and Functions
649(4)
16.4.1 Discrepancy in Penalty Terms
651(2)
16.5 Reproducing Kernel Hilbert Spaces (RKHS)
653(6)
16.5.1 Norms
653(1)
16.5.2 RKHS Objective and Solution
654(2)
16.5.3 Equivalent Kernels and Locality
656(2)
16.5.4 Two Points of View
658(1)
16.6 Gaussian Processes/Kriging
659(18)
16.6.1 Key Idea
659(1)
16.6.2 Some Basics
660(1)
16.6.3 Prior
661(5)
16.6.4 Posterior
666(5)
16.6.5 Incorporating Output Noise
671(1)
16.6.6 Model Variance
672(1)
16.6.7 Incorporating a Base Model
673(2)
16.6.8 Relationship to RBF Networks
675(1)
16.6.9 High-Dimensional Kernels
676(1)
16.7 Hyperparameters
677(29)
16.7.1 Influence of the Hyperparameters
678(8)
16.7.2 Optimization of the Hyperparameters
686(6)
16.7.3 Marginal Likelihood
692(12)
16.7.4 A Note on the Prior Variance
704(2)
16.8 Summary
706(2)
16.9 Problems
708(1)
17 Summary of Part II
709(6)
Part III Dynamic Models
18 Linear Dynamic System Identification
715(116)
18.1 Overview of Linear System Identification
716(1)
18.2 Excitation Signals
717(4)
18.3 General Model Structure
721(16)
18.3.1 Terminology and Classification
723(6)
18.3.2 Optimal Predictor
729(4)
18.3.3 Some Remarks on the Optimal Predictor
733(2)
18.3.4 Prediction Error Methods
735(2)
18.4 Time Series Models
737(4)
18.4.1 Autoregressive (AR)
738(1)
18.4.2 Moving Average (MA)
739(1)
18.4.3 Autoregressive Moving Average (ARMA)
740(1)
18.5 Models with Output Feedback
741(29)
18.5.1 Autoregressive with Exogenous Input (ARX)
741(11)
18.5.2 Autoregressive Moving Average with Exogenous Input (ARMAX)
752(5)
18.5.3 Autoregressive Autoregressive with Exogenous Input (ARARX)
757(3)
18.5.4 Output Error (OE)
760(4)
18.5.5 Box-Jenkins (BJ)
764(2)
18.5.6 State Space Models
766(2)
18.5.7 Simulation Example
768(2)
18.6 Models Without Output Feedback
770(33)
18.6.1 Finite Impulse Response (FIR)
771(4)
18.6.2 Regularized FIR Models
775(4)
18.6.3 Bias and Variance of Regularized FIR Models
779(1)
18.6.4 Impulse Response Preservation (IRP) FIR Approach
780(10)
18.6.5 Orthonormal Basis Functions (OBF)
790(9)
18.6.6 Simulation Example
799(4)
18.7 Some Advanced Aspects
803(8)
18.7.1 Initial Conditions
803(2)
18.7.2 Consistency
805(1)
18.7.3 Frequency-Domain Interpretation
806(2)
18.7.4 Relationship Between Noise Model and Filtering
808(1)
18.7.5 Offsets
809(2)
18.8 Recursive Algorithms
811(5)
18.8.1 Recursive Least Squares (RLS) Method
812(1)
18.8.2 Recursive Instrumental Variables (RIV) Method
812(2)
18.8.3 Recursive Extended Least Squares (RELS) Method
814(1)
18.8.4 Recursive Prediction Error Methods (RPEM)
815(1)
18.9 Determination of Dynamic Orders
816(1)
18.10 Multivariate Systems
817(6)
18.10.1 P-Canonical Model
819(1)
18.10.2 Matrix Polynomial Model
820(3)
18.10.3 Subspace Methods
823(1)
18.11 Closed-Loop Identification
823(5)
18.11.1 Direct Methods
824(2)
18.11.2 Indirect Methods
826(1)
18.11.3 Identification for Control
827(1)
18.12 Summary
828(1)
18.13 Problems
829(2)
19 Nonlinear Dynamic System Identification
831(62)
19.1 From Linear to Nonlinear System Identification
832(2)
19.2 External Dynamics
834(17)
19.2.1 Illustration of the External Dynamics Approach
834(7)
19.2.2 Series-Parallel and Parallel Models
841(2)
19.2.3 Nonlinear Dynamic Input/Output Model Classes
843(6)
19.2.4 Restrictions of Nonlinear Input/Output Models
849(2)
19.3 Internal Dynamics
851(1)
19.4 Parameter Scheduling Approach
851(1)
19.5 Training Recurrent Structures
852(4)
19.5.1 Backpropagation-Through-Time (BPTT) Algorithm
853(2)
19.5.2 Real-Time Recurrent Learning
855(1)
19.6 Multivariate Systems
856(3)
19.6.1 Issues with Multiple Inputs
857(2)
19.7 Excitation Signals
859(14)
19.7.1 From PRBS to APRBS
860(4)
19.7.2 Ramp
864(1)
19.7.3 Multisine
865(1)
19.7.4 Chirp
866(1)
19.7.5 APRBS
867(2)
19.7.6 NARX and NOBF Input Spaces
869(2)
19.7.7 MISO Systems
871(1)
19.7.8 Tradeoffs
872(1)
19.8 Optimal Excitation Signal Generator: Coauthored by Tim O. Heinz
873(14)
19.8.1 Approaches with Fisher Information
874(2)
19.8.2 Optimized Nonlinear Input Signal (OMNIPUS) for SISO Systems
876(2)
19.8.3 Optimized Nonlinear Input Signal (OMNIPUS) for MISO Systems
878(9)
19.9 Determination of Dynamic Orders
887(3)
19.10 Summary
890(1)
19.11 Problems
890(3)
20 Classical Polynomial Approaches
893(10)
20.1 Properties of Dynamic Polynomial Models
894(1)
20.2 Kolmogorov-Gabor Polynomial Models
895(1)
20.3 Volterra-Series Models
896(1)
20.4 Parametric Volterra-Series Models
897(1)
20.5 NDE Models
898(1)
20.6 Hammerstein Models
898(2)
20.7 Wiener Models
900(1)
20.8 Problems
901(2)
21 Dynamic Neural and Fuzzy Models
903(16)
21.1 Curse of Dimensionality
904(1)
21.1.1 MLP Networks
904(1)
21.1.2 RBF Networks
905(1)
21.1.3 Singleton Fuzzy and NRBF Models
905(1)
21.2 Interpolation and Extrapolation Behavior
905(2)
21.3 Training
907(2)
21.3.1 MLP Networks
908(1)
21.3.2 RBF Networks
908(1)
21.3.3 Singleton Fuzzy and NRBF Models
909(1)
21.4 Integration of a Linear Model
909(1)
21.5 Simulation Examples
910(6)
21.5.1 MLP Networks
911(2)
21.5.2 RBF Networks
913(2)
21.5.3 Singleton Fuzzy and NRBF Models
915(1)
21.6 Summary
916(1)
21.7 Problems
917(2)
22 Dynamic Local Linear Neuro-Fuzzy Models
919(52)
22.1 One-Step Prediction Error Versus Simulation Error
923(1)
22.2 Determination of the Rule Premises
924(2)
22.3 Linearization
926(6)
22.3.1 Static and Dynamic Linearization
927(1)
22.3.2 Dynamics of the Linearized Model
928(2)
22.3.3 Different Rule Consequent Structures
930(2)
22.4 Model Stability
932(6)
22.4.1 Influence of Rule Premise Inputs on Stability
933(2)
22.4.2 Lyapunov Stability and Linear Matrix Inequalities (LMIs)
935(2)
22.4.3 Ensuring Stable Extrapolation
937(1)
22.5 Dynamic LOLIMOT Simulation Studies
938(9)
22.5.1 Nonlinear Dynamic Test Processes
939(1)
22.5.2 Hammerstein Process
940(2)
22.5.3 Wiener Process
942(2)
22.5.4 NDE Process
944(2)
22.5.5 Dynamic Nonlinearity Process
946(1)
22.6 Advanced Local Linear Methods and Models
947(5)
22.6.1 Local Linear Instrumental Variables (IV) Method
948(3)
22.6.2 Local Linear Output Error (OE) Models
951(1)
22.6.3 Local Linear ARMAX Models
951(1)
22.7 Local Regularized Finite Impulse Response Models: Coauthored by Tobias Münker
952(5)
22.7.1 Structure
952(2)
22.7.2 Local Estimation
954(1)
22.7.3 Hyperparameter Tuning
954(1)
22.7.4 Evaluation of Performance
955(2)
22.8 Local Linear Orthonormal Basis Functions Models
957(5)
22.9 Structure Optimization of the Rule Consequents
962(4)
22.10 Summary and Conclusions
966(4)
22.11 Problems
970(1)
23 Neural Networks with Internal Dynamics
971(14)
23.1 Fully Recurrent Networks
972(1)
23.2 Partially Recurrent Networks
973(1)
23.3 State Recurrent Networks
973(2)
23.4 Locally Recurrent Globally Feedforward Networks
975(1)
23.5 Long Short-Term Memory (LSTM) Networks
976(3)
23.6 Internal Versus External Dynamics
979(3)
23.7 Problems
982(3)
Part IV Applications
24 Applications of Static Models
985(22)
24.1 Driving Cycle
986(20)
24.1.1 Process Description
986(1)
24.1.2 Smoothing of a Driving Cycle
987(1)
24.1.3 Improvements and Extensions
988(1)
24.1.4 Differentiation
989(1)
24.1.5 The Role of Look-Up Tables in Automotive Electronics
990(4)
24.1.6 Modeling of Exhaust Gases
994(3)
24.1.7 Optimization of Exhaust Gases
997(7)
24.1.8 Outlook: Dynamic Models
1004(2)
24.2 Summary
1006(1)
25 Applications of Dynamic Models
1007(36)
25.1 Cooling Blast
1007(6)
25.1.1 Process Description
1008(1)
25.1.2 Experimental Results
1009(4)
25.2 Diesel Engine Turbocharger
1013(10)
25.2.1 Process Description
1015(2)
25.2.2 Experimental Results
1017(6)
25.3 Thermal Plant
1023(18)
25.3.1 Process Description
1024(1)
25.3.2 Transport Process
1025(5)
25.3.3 Tubular Heat Exchanger
1030(5)
25.3.4 Cross-Flow Heat Exchanger
1035(6)
25.4 Summary
1041(2)
26 Design of Experiments
1043(52)
26.1 Practical DoE Aspects: Authored by Julian Belz
1044(26)
26.1.1 Function Generator
1044(3)
26.1.2 Order of Experimentation
1047(2)
26.1.3 Biggest Gap Sequence
1049(1)
26.1.4 Median Distance Sequence
1050(1)
26.1.5 Intelligent-Means Sequence
1050(1)
26.1.6 Other Determination Strategies
1051(2)
26.1.7 Comparison on Synthetic Functions
1053(3)
26.1.8 Summary
1056(3)
26.1.9 Corner Measurement
1059(5)
26.1.10 Comparison of Space-Filling Designs
1064(6)
26.2 Active Learning for Structural Health Monitoring
1070(7)
26.2.1 Simulation Results
1072(2)
26.2.2 Experimental Results
1074(3)
26.3 Active Learning for Engine Measurement
1077(10)
26.3.1 Problem Setting
1077(3)
26.3.2 Operating Point-Specific Engine Models
1080(4)
26.3.3 Global Engine Model
1084(3)
26.4 Nonlinear Dynamic Excitation Signal Design for Common Rail Injection
1087(8)
26.4.1 Example: High-Pressure Fuel Supply System
1087(1)
26.4.2 Identifying the Rail Pressure System
1088(1)
26.4.3 Results
1089(6)
27 Input Selection Applications
1095(30)
27.1 Air Mass Flow Prediction
1096(5)
27.1.1 Mixed Wrapper-Embedded Input Selection
1098(2)
27.1.2 Partition Analysis
1100(1)
27.2 Fan Metamodeling: Authored by Julian Belz
1101(14)
27.2.1 Centrifugal Impeller Geometry
1102(1)
27.2.2 Axial Impeller Geometry
1103(1)
27.2.3 Why Metamodels?
1103(2)
27.2.4 Design of Experiments: Centrifugal Fan Metamodel
1105(1)
27.2.5 Design of Experiments: Axial Fan Metamodel
1105(1)
27.2.6 Order of Experimentation
1106(2)
27.2.7 Goal-Oriented Active Learning
1108(3)
27.2.8 Mixed Wrapper-Embedded Input Selection
1111(1)
27.2.9 Centrifugal Fan Metamodel
1111(2)
27.2.10 Axial Fan Metamodel
1113(2)
27.2.11 Summary
1115(1)
27.3 Heating, Ventilating, and Air Conditioning System
1115(10)
27.3.1 Problem Configuration
1116(1)
27.3.2 Available Data Sets
1117(1)
27.3.3 Mixed Wrapper-Embedded Input Selection
1118(1)
27.3.4 Results
1119(6)
28 Applications of Advanced Methods
1125(26)
28.1 Nonlinear Model Predictive Control
1126(4)
28.2 Online Adaptation
1130(10)
28.2.1 Variable Forgetting Factor
1131(1)
28.2.2 Control and Adaptation Models
1132(1)
28.2.3 Parameter Transfer
1133(2)
28.2.4 Systems with Multiple Inputs
1135(1)
28.2.5 Experimental Results
1136(4)
28.3 Fault Detection
1140(6)
28.3.1 Methodology
1141(2)
28.3.2 Experimental Results
1143(3)
28.4 Fault Diagnosis
1146(3)
28.4.1 Methodology
1146(2)
28.4.2 Experimental Results
1148(1)
28.5 Reconfiguration
1149(2)
29 LMN Toolbox
1151(14)
29.1 Termination Criteria
1152(4)
29.1.1 Corrected AIC
1153(1)
29.1.2 Corrected BIC
1153(1)
29.1.3 Validation
1154(1)
29.1.4 Maximum Number of Local Models
1154(1)
29.1.5 Effective Number of Parameters
1155(1)
29.1.6 Maximum Training Time
1156(1)
29.2 Polynomial Degree of Local Models
1156(1)
29.3 Dynamic Models
1157(3)
29.3.1 Nonlinear Orthonormal Basis Function Models
1160(1)
29.4 Different Input Spaces x and z
1160(1)
29.5 Smoothness
1160(1)
29.6 Data Weighting
1161(1)
29.7 Visualization and Simplified Tool
1161
Correction to: Nonlinear System Identification
1
A Vectors and Matrices
1165(4)
A.1 Vector and Matrix Derivatives
1165(2)
A.2 Gradient, Hessian, and Jacobian
1167(2)
B Statistics
1169(20)
B.1 Deterministic and Random Variables
1169(2)
B.2 Probability Density Function (pdf)
1171(2)
B.3 Stochastic Processes and Ergodicity
1173(3)
B.4 Expectation
1176(3)
B.5 Variance
1179(1)
B.6 Correlation and Covariance
1180(3)
B.7 Properties of Estimators
1183(6)
References 1189(28)
Index 1217
Oliver Nelles was born in Frankfurt (Main), Germany, and received his Master's and Ph.D. degrees in Electrical Engineering and Automatic Control from the Technical University of Darmstadt. After a postdoctoral position in the Department of Mechanical Engineering at UC Berkeley, he worked for Siemens VDO Automotive in Regensburg, where he spent five years as a project and group leader in the field of transmission control. Since 2004 he has held a position as Professor for Automatic Control Mechatronics at the University of Siegen. Oliver Nelles' key research areas are machine learning, system identification, nonlinear dynamic systems and control, design of experiments (DoE), and fault diagnosis.