
E-book: Control Systems: Classical, Modern, and AI-Based Approaches

Jitendra R. Raol (M. S. Ramaiah Institute of Technology, India), Ramakalyan Ayyagari (National Institute of Technology (NIT), India)
  • Format: 668 pages
  • Publication date: 12-Jul-2019
  • Publisher: CRC Press Inc
  • ISBN-13: 9781351170789
  • Format: EPUB+DRM
  • Price: 182,00 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free application: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Control Systems: Classical, Modern, and AI-Based Approaches provides a broad and comprehensive study of the principles, mathematics, and applications for those studying basic control in mechanical, electrical, aerospace, and other engineering disciplines. The text builds a strong mathematical foundation of control theory for linear, nonlinear, optimal, model predictive, robust, digital, and adaptive control systems, and it addresses applications in several emerging areas, such as aircraft, electro-mechanical, and some non-engineering systems: DC motor control, steel beam thickness control, drum boiler, motion control system, chemical reactor, head-disk assembly, pitch control of an aircraft, yaw-damper control, helicopter control, and tidal power control. Decentralized control, game-theoretic control, and control of hybrid systems are discussed. Control systems based on artificial neural networks, fuzzy logic, and genetic algorithms, termed AI-based systems, are also studied and analyzed with applications such as auto-landing aircraft, industrial process control, active suspension system, fuzzy gain scheduling, PID control, and adaptive neuro control. Numerical coverage with MATLAB® is integrated, and numerous examples and exercises are included for each chapter. Associated MATLAB® code will be made available.
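As a small taste of the classical time-response material listed above (second-order systems, proportional/PID feedback), here is a minimal sketch that is not taken from the book: the book's own worked examples use MATLAB®, whereas this is an illustrative Python snippet with purely assumed parameter values (wn, zeta, Kp, dt are all hypothetical). It simulates the closed-loop unit-step response of a hypothetical second-order plant under unity proportional feedback and reports the overshoot and steady-state error.

# Illustrative sketch only (not from the book): unit-step response of a
# hypothetical second-order plant G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
# under unity proportional feedback u = Kp*(r - y), integrated by forward Euler.
# All parameter values below are assumptions chosen for illustration.
import numpy as np

wn, zeta = 2.0, 0.3        # assumed natural frequency (rad/s) and damping ratio
Kp = 4.0                   # assumed proportional gain
dt, T = 1e-3, 10.0         # integration step and simulation horizon (s)
r = 1.0                    # unit-step reference

x = np.zeros(2)            # state vector [y, y_dot]
y = np.zeros(int(T / dt))  # logged plant output

for k in range(y.size):
    u = Kp * (r - x[0])                                           # proportional control law
    y_ddot = -2.0 * zeta * wn * x[1] - wn**2 * x[0] + wn**2 * u   # plant dynamics
    x = x + dt * np.array([x[1], y_ddot])                         # forward-Euler update
    y[k] = x[0]

# Time-response figures of merit of the kind treated in the second-order
# response and PID design chapters: percent overshoot (relative to the
# settled value) and steady-state error.
overshoot = 100.0 * (y.max() - y[-1]) / y[-1]
print(f"percent overshoot ~ {overshoot:.1f} %, steady-state error ~ {r - y[-1]:.3f}")

With these assumed numbers the proportional gain raises the closed-loop natural frequency but lowers the effective damping, so the run shows a sizable overshoot and a nonzero steady-state error, which is exactly the trade-off that motivates the lead-lag and PID designs covered in Chapter 1.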

Preface xv
Acknowledgments xvii
Authors xix
Introduction xxi
Section I Linear and Nonlinear Control
1 Linear Systems and Control
3(66)
1.1 Dynamic Systems and Feedback Control
3(4)
1.1.1 Balancing a Stick
3(1)
1.1.2 Simple Day-to-Day Observations
4(1)
1.1.3 Position Control System
4(1)
1.1.4 Temperature Control System
5(1)
1.1.5 Mathematical Modeling of Systems
5(2)
1.1.6 Linear, Time-Invariant, and Lumped Systems
7(1)
1.2 Transfer Functions and State Space Representations
7(23)
1.2.1 Definition: Dynamical Systems
7(1)
1.2.2 Definition: Causal Systems
8(1)
1.2.3 Definition: Linear Systems
8(2)
1.2.4 Time and Frequency Domains
10(3)
1.2.4.1 Definition: Time-Constant
11(1)
1.2.4.2 First-Order Systems
12(1)
1.2.4.3 The Role of Time-Constant
12(1)
1.2.5 Response of Second-Order Systems
13(6)
1.2.5.1 Underdamped Systems
14(1)
1.2.5.2 Critically Damped Systems
14(1)
1.2.5.3 Overdamped Systems
14(2)
1.2.5.4 Higher Order Systems
16(1)
1.2.5.5 A Time Response Analysis Example
17(1)
1.2.5.6 Frequency Response
18(1)
1.2.6 Bode Plots
19(7)
1.2.6.1 Definition: Decibel
20(1)
1.2.6.2 Construction of Bode Plots
21(5)
1.2.7 State Space Representation of Systems
26(4)
1.2.7.1 Two Examples
26(2)
1.2.7.2 Definition: State
28(1)
1.2.7.3 Solution of the State Equation
28(2)
1.3 Stability of Linear Control Systems
30(16)
1.3.1 Bounded Signals
30(1)
1.3.1.1 Definition (a): BIBO Stability
30(1)
1.3.1.2 Definition (b): BIBO Stability
30(1)
1.3.2 Routh-Hurwitz Criterion
31(3)
1.3.2.1 Special Cases
32(2)
1.3.3 Nyquist Criterion
34(6)
1.3.3.1 Polar and Nyquist Plots
34(4)
1.3.3.2 Gain and Phase Margins
38(1)
1.3.3.3 Definition: Gain Crossover Frequency
39(1)
1.3.3.4 Definition: Phase Crossover Frequency
39(1)
1.3.3.5 The Margins on a Bode Plot
39(1)
1.3.4 The Root Locus
40(6)
1.3.4.1 Definition: Root Locus
40(5)
1.3.4.2 The Stability Margin
45(1)
1.4 Design of Control Systems
46(23)
1.4.1 Development of Classical PID Control
46(15)
1.4.1.1 Controller Design Using Root Locus
46(1)
1.4.1.2 Magnitude Compensation
47(1)
1.4.1.3 Angle Compensation
48(2)
1.4.1.4 Validity of Design
50(2)
1.4.1.5 Controller Design Using Bode Plots
52(1)
1.4.1.6 Definition: Bandwidth
52(1)
1.4.1.7 The Design Perspective
52(4)
1.4.1.8 The Lead-Lag Compensator
56(4)
1.4.1.9 PID Implementation
60(1)
1.4.1.10 Reset Windup
60(1)
1.4.2 Modern Pole-Placement
61(8)
1.4.2.1 Controllability
61(1)
1.4.2.2 Definition: Controllability
62(2)
1.4.2.3 Definition: Similarity
64(2)
1.4.2.4 Algorithm: Pole Assignment - SISO Case
66(3)
2 Nonlinear Systems
69(12)
2.1 Nonlinear Phenomena and Nonlinear Models
69(4)
2.1.1 Limit Cycles
70(1)
2.1.2 Bifurcations
71(1)
2.1.3 Chaos
71(2)
2.2 Fundamental Properties of ODEs
73(5)
2.2.1 Autonomous Systems
73(2)
2.2.1.1 Stability of Equilibria
73(2)
2.2.2 Non-Autonomous Systems
75(1)
2.2.2.1 Equilibrium Points
76(1)
2.2.3 Existence and Uniqueness
76(2)
2.3 Contraction Mapping Theorem
78(3)
3 Nonlinear Stability Analysis
81(32)
3.1 Phase Plane Techniques
81(8)
3.1.1 Equilibria of Nonlinear Systems
85(4)
3.2 Poincare-Bendixson Theorem
89(3)
3.2.1 Existence of Limit Cycles
90(2)
3.3 Hartman-Grobman Theorem
92(1)
3.4 Lyapunov Stability Theory
93(15)
3.4.1 Lyapunov's Direct Method
94(3)
3.4.1.1 Positive Definite Lyapunov Functions
94(1)
3.4.1.2 Equilibrium Point Theorems
95(1)
3.4.1.3 Lyapunov Theorem for Local Stability
95(1)
3.4.1.4 Lyapunov Theorem for Global Stability
96(1)
3.4.2 La Salle's Invariant Set Theorems
97(2)
3.4.3 Krasovskii's Method
99(1)
3.4.4 The Variable Gradient Method
100(1)
3.4.5 Stability of Non-Autonomous Systems
101(3)
3.4.6 Instability Theorems
104(1)
3.4.7 Passivity Framework
105(3)
3.4.7.1 The Passivity Formalism
106(2)
3.5 Describing Function Analysis
108(5)
3.5.1 Applications of Describing Functions
109(1)
3.5.2 Basic Assumptions
110(3)
4 Nonlinear Control Design
113(21)
4.1 Full-State Linearization
113(9)
4.1.1 Handling Multi-input Systems
121(1)
4.2 Input-Output Linearization
122(3)
4.2.1 Definition: Relative Degree
123(1)
4.2.2 Zero Dynamics and Non-Minimum Phase Systems
124(19)
4.2.2.1 Definition: Partially State Feedback Linearizable
124(1)
4.3 Stabilization
125(2)
4.4 Backstepping Control
127(3)
4.5 Sliding Mode Control
130(3)
4.6 Chapter Summary
133(1)
Appendix IA
134(1)
Appendix IB
135(1)
Appendix IC
135(2)
Appendix ID
137(2)
Exercises for Section I
139(1)
References for Section I
140(3)
Section II Optimal and H-Infinity Control
5 Optimization-Extremization of Cost Function
143(16)
5.1 Optimal Control Theory: An Economic Interpretation
143(2)
5.1.1 Solution for the Optimal Path
144(1)
5.1.2 The Hamiltonian
145(1)
5.2 Calculus of Variation
145(1)
5.2.1 Sufficient Conditions
145(1)
5.2.1.1 Weierstrass Result
146(1)
5.2.2 Necessary Conditions
146(1)
5.3 Euler-Lagrange Equation
146(1)
5.4 Constraint Optimization Problem
147(1)
5.5 Problems with More Variables
148(1)
5.5.1 With Higher Order Derivatives
148(1)
5.5.2 With Several Unknown Functions
148(1)
5.5.3 With More Independent Variables
148(1)
5.6 Variational Aspects
148(1)
5.7 Conversion of BVP to Variational Problem
149(1)
5.7.1 Solution of a Variational Problem Using a Direct Method
150(1)
5.8 General Variational Approach
150(5)
5.8.1 First Order Necessary Conditions
151(1)
5.8.2 Mangasarian Sufficient Conditions
152(1)
5.8.3 Interpretation of the Co-State Variables
152(1)
5.8.4 Principle of Optimality
153(1)
5.8.5 General Terminal Constraints
154(5)
5.8.5.1 Necessary Conditions for Equality Terminal Constraints
154(1)
Appendix 5A
155(4)
6 Optimal Control
159(36)
6.1 Optimal Control Problem
159(2)
6.1.1 Dynamic System and Performance Criterion
159(1)
6.1.2 Physical Constraints
160(1)
6.1.2.1 Point Constraints
160(1)
6.1.2.2 Isoperimetric Constraints
160(1)
6.1.2.3 Path Constraints
160(1)
6.1.3 Optimality Criteria
160(1)
6.1.4 Open Loop and Closed Loop Optimal Control
161(1)
6.2 Maximum Principle
161(4)
6.2.1 Hamiltonian Dynamics
161(2)
6.2.2 Pontryagin Maximum Principle
163(1)
6.2.2.1 Fixed Time, Free Endpoint Problem
163(1)
6.2.2.2 Free Time, Fixed Endpoint Problem
164(1)
6.2.3 Maximum Principle with Transversality Conditions
164(1)
6.2.4 Maximum Principle with State Constraints
164(1)
6.3 Dynamic Programming
165(4)
6.3.1 Dynamic Programming Method
166(1)
6.3.2 Verification of Optimality
166(1)
6.3.3 Dynamic Programming and Pontryagin Maximum Principle
167(2)
6.3.3.1 Characteristic Equations
167(1)
6.3.3.2 Relation between Dynamic Programming and the Maximum Principle
167(2)
6.4 Differential Games
169(1)
6.4.1 Isaacs's Equations and Maximum Principle/Dynamic Programming in Games
169(1)
6.5 Dynamic Programming in Stochastic Setting
170(2)
6.6 Linear Quadratic Optimal Regulator for Time-Varying Systems
172(1)
6.6.1 Riccati Equation
173(1)
6.6.2 LQ Optimal Regulator for Mixed State and Control Terms
173(1)
6.7 Controller Synthesis
173(9)
6.7.1 Dynamic Models
173(1)
6.7.2 Quadratic Optimal Control
174(2)
6.7.2.1 Linear Quadratic Optimal State Regulator
174(2)
6.7.2.2 Linear Quadratic Optimal Output Regulator
176(1)
6.7.3 Stability of the Linear Quadratic Controller/Regulator
176(1)
6.7.4 Linear Quadratic Gaussian (LQG) Control
177(1)
6.7.4.1 State Estimation and LQ Controller
177(1)
6.7.4.2 Separation Principle and Nominal Closed Loop Stability
178(1)
6.7.5 Tracking and Regulation with Quadratic Optimal Controller
178(5)
6.7.5.1 Transformation of the Model for Output Regulation and Tracking
179(1)
6.7.5.2 Unmeasured Disturbances and Model Mismatch
180(1)
6.7.5.3 Innovations Bias Approach
180(1)
6.7.5.4 State Augmentation Approach
181(1)
6.8 Pole Placement Design Method
182(1)
6.9 Eigenstructure Assignment
183(2)
6.9.1 Problem Statement
183(1)
6.9.2 Closed Loop Eigenstructure Assignment
184(1)
6.10 Minimum Time and Minimum-Fuel Trajectory Optimization
185(4)
6.10.1 Problem Definition
185(1)
6.10.2 Parameterization of the Control Problem
186(2)
6.10.3 Control Profile for Small α
188(1)
6.10.4 Determination of Critical α
188(1)
Appendix 6A
189(6)
7 Model Predictive Control
195(20)
7.1 Model-Based Prediction of Future Behavior
195(1)
7.2 Innovations Bias Approach
196(1)
7.3 State Augmentation Approach
197(1)
7.4 Conventional Formulation of MPC
197(1)
7.5 Tuning Parameters
198(1)
7.6 Unconstrained MPC
199(2)
7.7 Quadratic Programming (QP) Formulation of MPC
201(1)
7.8 State-Space Formulation of the MPC
202(1)
7.9 Stability
202(1)
Appendix 7A
203(5)
Appendix 7B
208(7)
8 Robust Control
215(39)
8.1 Robust Control of Uncertain Plants
216(1)
8.1.1 Robust Stability and Hinfinity Norm
216(1)
8.1.2 Disturbance Rejection and Loop-Shaping Using Hinfinity Control
217(1)
8.2 H2 Optimal Control
217(6)
8.2.1 The Optimal State Feedback Problem
218(1)
8.2.2 The Optimal State Estimation Problem
219(1)
8.2.3 The Optimal Output Feedback Problem
220(1)
8.2.4 H2 Optimal Control against General Deterministic Inputs
221(1)
8.2.5 Weighting Matrices in H2 Optimal Control
222(1)
8.3 Hinfinity Control
223(3)
8.3.1 Hinfinity Optimal State Feedback Control
223(2)
8.3.2 Hinfinity Optimal State Estimation
225(1)
8.3.3 Hinfinity Optimal Output Feedback Problem
225(1)
8.3.4 The Relation between S, P and Z
226(1)
8.4 Robust Stability and Hinfinity Norm
226(2)
8.5 Structured Uncertainties and Structured Singular Values
228(2)
8.6 Robust Performance Problem
230(3)
8.6.1 The Robust Hinfinity Performance Problem
230(1)
8.6.2 The Robust H2 Performance Problem
231(2)
8.7 Design Aspects
233(7)
8.7.1 Some Considerations
234(1)
8.7.2 Basic Performance Limitations
235(1)
8.7.3 Application of Hinfinity Optimal Control to Loop Shaping
236(4)
Appendix 8A
240(14)
Appendix IIA
254(7)
Appendix IIB
261(6)
Appendix IIC
267(17)
Exercises for Section II
284(1)
References for Section II
285(4)
Section III Digital and Adaptive Control
9 Discrete Time Control Systems
289(20)
9.1 Representation of Discrete Time System
290(2)
9.1.1 Numerical Differentiation
291(1)
9.1.2 Numerical Integration
291(1)
9.1.3 Difference Equations
291(1)
9.2 Modeling of the Sampling Process
292(2)
9.2.1 Finite Pulse Width Sampler
292(1)
9.2.2 An Approximation of the Finite Pulse Width Sampling
293(1)
9.2.3 Ideal Sampler
294(1)
9.3 Reconstruction of the Data
294(1)
9.3.1 Zero Order Hold
294(1)
9.3.2 First Order Hold
295(1)
9.4 Pulse Transfer Function
295(3)
9.4.1 Pulse Transfer Function of the ZOH
296(1)
9.4.2 Pulse Transfer Function of a Closed Loop System
297(1)
9.4.3 Characteristic Equation
298(1)
9.5 Stability Analysis in z-Plane
298(1)
9.5.1 Jury Stability Test
298(1)
9.5.2 Singular Cases
299(1)
9.5.3 Bilinear Transformation and Routh Stability Criterion
299(1)
9.5.4 Singular Cases
299(1)
9.6 Time Responses of Discrete Time Systems
299(4)
9.6.1 Transient Response Specifications and Steady-State Error
299(1)
9.6.2 Type-n Discrete Time Systems
300(1)
9.6.3 Study of a Second Order Control System
301(1)
9.6.4 Correlation between Time Response and Root Locations in s- and z-Planes
302(1)
9.6.5 Dominant Closed Loop Pole Pairs
302(1)
Appendix 9A
303(6)
10 Design of Discrete Time Control Systems
309(18)
10.1 Design Based on Root Locus Method
309(2)
10.1.1 Rules for Construction of the Root Locus
309(1)
10.1.2 Root Locus of a Digital Control System
310(1)
10.1.3 Effect of Sampling Period T
310(1)
10.1.4 Design Procedure
311(1)
10.2 Frequency Domain Analysis
311(1)
10.2.1 Nyquist Plot
311(1)
10.2.2 Bode Plot, and Gain and Phase Margins
312(1)
10.3 Compensator Design
312(1)
10.3.1 Phase Lead, Phase Lag, and Lag-Lead Compensators
312(1)
10.3.2 Compensator Design Using Bode Plot
313(1)
10.3.2.1 Phase Lead Compensator
313(1)
10.3.2.2 Phase Lag Compensator
313(1)
10.3.2.3 Lag-Lead Compensator
313(1)
10.4 Design with Deadbeat Response
313(2)
10.4.1 DBR Design of a System When the Poles and Zeros Are in the Unit Circle
314(1)
10.4.1.1 Physical Realizability of the Controller Dc(z)
314(1)
10.4.2 DBR When Some of the Poles and Zeros Are on or outside the Unit Circle
315(1)
10.4.3 Sampled Data Control Systems with DBR
315(1)
10.5 State Feedback Controller
315(4)
10.5.1 Designing K by Transforming the State Model into Controllable Canonical Form
316(1)
10.5.2 Designing K by Ackermann's Formula
317(1)
10.5.3 Set Point Tracking
317(1)
10.5.4 State Feedback with Integral Control
318(1)
10.6 State Observers
319(4)
10.6.1 Full Order Observers
319(1)
10.6.1.1 Open Loop Estimator
319(1)
10.6.1.2 Luenberger State Observer
319(1)
10.6.1.3 Controller with Observer
320(1)
10.6.2 Reduced Order Observers
320(1)
10.6.3 Controller with Reduced Order Observer
321(1)
10.6.4 Deadbeat Control by State Feedback and Deadbeat Observer
322(1)
10.6.5 Incomplete State Feedback
322(1)
10.6.6 Output Feedback Design
322(1)
10.7 Optimal Control
323(4)
10.7.1 Discrete Euler-Lagrange Equation
323(2)
10.7.2 Linear Quadratic Regulator
325(2)
11 Adaptive Control
327(44)
11.1 Direct and Indirect Adaptive Control Methods
328(3)
11.1.1 Adaptive Control and Adaptive Regulation
330(1)
11.2 Gain Scheduling
331(2)
11.2.1 Classical GS
332(1)
11.2.2 LPV and LFT Synthesis
333(1)
11.2.3 Fuzzy Logic-Based Gain Scheduling (FGS)
333(1)
11.3 Parameter Dependent Plant Models
333(3)
11.3.1 Linearization Based GS
334(1)
11.3.2 Off Equilibrium Linearizations
334(1)
11.3.3 Quasi LPV Method
335(1)
11.3.4 Linear Fractional Transformation
335(1)
11.4 Classical Gain Scheduling
336(1)
11.4.1 LTI Design
336(1)
11.4.2 GS Controller Design
336(1)
11.4.2.1 Linearization Scheduling
336(1)
11.4.2.2 Interpolation Methods
336(1)
11.4.2.3 Velocity Based Scheduling
337(1)
11.4.3 Hidden Coupling Terms
337(1)
11.4.4 Stability Properties
337(1)
11.5 LPV Controller Synthesis
337(2)
11.5.1 LPV Controller Synthesis Set Up
338(1)
11.5.1.1 Stability and Performance Analysis
338(1)
11.5.2 Lyapunov Based LPV Control Synthesis
338(1)
11.5.3 LFT Synthesis
338(1)
11.5.4 Mixed LPV-LFT Approaches
339(1)
11.6 Fuzzy Logic-Based Gain Scheduling
339(1)
11.7 Self-Tuning Control
340(3)
11.7.1 Minimum Variance Regulator/Controller
341(1)
11.7.2 Pole Placement Control
342(1)
11.7.3 A Bilinear Approach
343(1)
11.8 Adaptive Pole Placement
343(1)
11.9 Model Reference Adaptive Control/Systems (MRACS)
344(3)
11.9.1 MRAC Design of First Order System
344(1)
11.9.2 Adaptive Dynamic Inversion (ADI) Control
345(1)
11.9.3 Parameter Convergence and Comparison
346(1)
11.9.4 MRAC for n-th Order System
346(1)
11.9.5 Robustness of Adaptive Control
347(1)
11.10 A Comprehensive Example
347(4)
11.10.1 The Underlying Design Problem for Known Systems
348(1)
11.10.2 Parameter Estimation
349(1)
11.10.3 An Explicit Self-Tuner
349(1)
11.10.4 An Implicit Self-Tuner
350(1)
11.10.5 Other Implicit Self-Tuners
350(1)
11.11 Stability, Convergence, and Robustness Aspects
351(2)
11.11.1 Stability
351(1)
11.11.2 Convergence
351(2)
11.11.2.1 Martingale Theory
352(1)
11.11.2.2 Averaging Methods
352(1)
11.12 Use of the Stochastic Control Theory
353(1)
11.13 Uses of Adaptive Control Approaches
353(2)
11.13.1 Auto Tuning
353(1)
11.13.2 Automatic Construction of Gain Schedulers and Adaptive Regulators
353(1)
11.13.3 Practical Aspects and Applications
354(18)
11.13.3.1 Parameter Tracking
354(1)
11.13.3.2 Estimator Windup and Bursts
354(1)
11.13.3.3 Robustness
354(1)
11.13.3.4 Numerics and Coding
355(1)
11.13.3.5 Integral Action
355(1)
11.13.3.6 Supervisory Loops
355(1)
11.13.3.7 Applications
355(1)
Appendix 11A
355(8)
Appendix 11B
363(3)
Appendix 11C
366(5)
12 Computer-Controlled Systems
371(10)
12.1 Computers in Measurement and Control
371(1)
12.2 Components in Computer-Based Measurement and Control System (CMCS)
372(1)
12.3 Architectures
372(1)
12.3.1 Centralized Computer Control System
372(1)
12.3.2 Distributed Computer Control Systems (DDCS)
372(1)
12.3.3 Hierarchical Computer Control Systems
372(1)
12.3.4 Tasks of Computer Control Systems and Interfaces
373(1)
12.3.4.1 HMI-Human Machine Interface
373(1)
12.3.4.2 Hardware for Computer-Based Process/Plant Control System
373(1)
12.3.4.3 Interfacing Computer System with Plant
373(1)
12.4 Smart Sensor Systems
373(2)
12.4.1 Components of Smart Sensor Systems
374(1)
12.5 Control System Software and Hardware
375(1)
12.5.1 Embedded Control Systems
375(1)
12.5.2 Building Blocks
375(1)
12.5.2.1 Software and Hardware Building Blocks
375(1)
12.5.2.2 Appliance/System Building Blocks
375(1)
12.6 ECS-Implementation
376(1)
12.7 Aspects of Implementation of a Digital Controller
376(43)
12.7.1 Representations and Realizations of the Digital Controller
377(1)
12.7.1.1 Pre-Filtering and Computational Delays
377(1)
12.7.1.2 Nonlinear Actuators
377(1)
12.7.1.3 Antiwindup with an Explicit Observer
377(1)
12.7.2 Operational and Numerical Aspects
378(1)
12.7.3 Realization of Digital Controllers
379(40)
12.7.3.1 Direct/Companion Forms
379(1)
12.7.3.2 Well-Conditioned Form
379(1)
12.7.3.3 Ladder Form
379(1)
12.7.3.4 Short-Sampling-Interval Modification and δ-Operator Form
380(1)
12.7.4 Programming
380(1)
Appendix III
381(33)
Exercises for Section III
414(1)
References for Section III
415(4)
Section IV AI-Based Control
13 Introduction to AI-Based Control
419(32)
13.1 Motivation for Computational Intelligence in Control
419(1)
13.2 Artificial Neural Networks
419(14)
13.2.1 An Intuitive Introduction
419(1)
13.2.2 Perceptrons
420(1)
13.2.3 Sigmoidal Neurons
421(2)
13.2.4 The Architecture of Neural Networks
423(1)
13.2.5 Learning with Gradient Descent
424(4)
13.2.5.1 Issues in Implementation
428(1)
13.2.6 Unsupervised and Reinforcement Learning
428(1)
13.2.7 Radial Basis Networks
429(2)
13.2.7.1 Information Processing of an RBF Network
429(2)
13.2.8 Recurrent Neural Networks
431(1)
13.2.9 Towards Deep Learning
432(1)
13.2.10 Summary
432(1)
13.3 Fuzzy Logic
433(11)
13.3.1 The Linguistic Variables
435(1)
13.3.2 The Fuzzy Operators
436(1)
13.3.3 Reasoning with Fuzzy Sets
437(1)
13.3.4 The Defuzzification
438(3)
13.3.4.1 Some Remarks
441(1)
13.3.5 Type II Fuzzy Systems and Control
441(2)
13.3.5.1 MATLAB Implementation
443(1)
13.3.6 Summary
443(1)
13.4 Genetic Algorithms and Other Nature Inspired Methods
444(5)
13.4.1 Genetic Algorithms
444(3)
13.4.2 Particle Swarm Optimization
447(1)
13.4.2.1 Accelerated PSO
448(1)
13.4.3 Summary
448(1)
13.5 Chapter Summary
449(2)
14 ANN-Based Control Systems
451(20)
14.1 Applications of Radial Basis Function Neural Networks
451(9)
14.1.1 Fully Tuned Extended Minimal Resource Allocation Network RBF
451(2)
14.1.2 Autolanding Problem Formulation
453(7)
14.2 Optimal Control Using Artificial Neural Network
460(2)
14.2.1 Neural Network LQR Control Using the Hamilton-Jacobi-Bellman Equation
460(1)
14.2.2 Neural Network Hinfinity Control Using the Hamilton-Jacobi-Isaacs Equation
461(1)
14.3 Historical Development
462(1)
Appendix 14A
463(3)
Appendix 14B
466(5)
15 Fuzzy Control Systems
471(22)
15.1 Simple Examples
471(3)
15.2 Industrial Process Control Case Study
474(5)
15.2.1 Results
476(3)
15.3 Chapter Summary
479(1)
Appendix 15A
479(4)
Appendix 15B
483(10)
16 Nature Inspired Optimization for Controller Design
493(18)
16.1 Control Application in Light Energy Efficiency
493(1)
16.1.1 A Control Systems Perspective
493(1)
16.2 PSO Aided Fuzzy Control System
494(2)
16.3 Genetic Algorithms (GAs) Aided Semi-Active Suspension System
496(3)
16.4 GA Aided Active Suspension System
499(1)
16.5 Training ANNs Using GAs
500(3)
16.6 Chapter Summary
503(1)
Appendix 16A
504(4)
Appendix 16B
508(3)
Appendix IVA
511(12)
Exercises for Section IV
523(1)
References for Section IV
524(5)
Section V System Theory and Control Related Topics
Appendix A
529(4)
Appendix B
533(6)
Appendix C
539(8)
Appendix D
547(10)
Appendix E
557(18)
Appendix F
575(12)
Appendix G
587(38)
Index 625
Jitendra R. Raol, PhD, is Emeritus Professor at the M. S. Ramaiah Institute of Technology in Bangalore, India. He previously served at the National Aerospace Laboratories (NAL) as Scientist-G and Head of the Flight Mechanics and Control Division (FMCD). He is a fellow of the IEE (UK), a senior member of the IEEE (US), a life-fellow of the Aeronautical Society of India, and a life member of the System Society of India. He has guided nearly a dozen doctoral research scholars and is a reviewer of many international journals.

Ramakalyan Ayyagari, PhD, is with the Department of Instrumentation and Control Engineering at the National Institute of Technology (a deemed university), Tiruchirappalli, India. He earned a master's degree at Andhra University, India, and a PhD at the Indian Institute of Technology, Delhi. Dr. Ayyagari's areas of specialty include cyber-physical systems, network flow control, modeling and control of big data systems, and path planning.