
E-book: Mathematical Control Theory: An Introduction

  • Format: PDF+DRM
  • Series: Modern Birkhauser Classics
  • Publication date: 03-Nov-2009
  • Publisher: Birkhauser Boston Inc
  • Language: eng
  • ISBN-13: 9780817647339
  • Price: 67,91 €*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Mathematical Control Theory: An Introduction presents, in a mathematically precise manner, a unified introduction to deterministic control theory. With the exception of a few more advanced concepts required for the final part of the book, this presentation requires only a knowledge of basic facts from linear algebra, differential equations, and calculus.



In addition to classical concepts and ideas, the author covers the stabilization of nonlinear systems using topological methods, realization theory for nonlinear systems, impulsive control and positive systems, the control of rigid bodies, the stabilization of infinite dimensional systems, and the solution of minimum energy problems.



The book will be ideal for a beginning graduate course in mathematical control theory, or for self-study by professionals needing a complete picture of the mathematical theory that underlies the applications of control theory.
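The controllability material treated in Part I (the controllability matrix and the rank condition) can be illustrated with a small numerical check. The following sketch is not an excerpt from the book; it assumes the standard linear system x' = Ax + Bu and uses NumPy, with the double-integrator example chosen purely for illustration.

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix C = [B, AB, A^2 B, ..., A^{n-1} B].

    The pair (A, B) is controllable iff rank(C) equals the state
    dimension n (the rank condition).
    """
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Double integrator: x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

C = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C))  # 2 -> the pair (A, B) is controllable
```

Here the controllability matrix is [B, AB] = [[0, 1], [1, 0]], which has full rank 2, so the double integrator is controllable from the scalar input u.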

Reviews

"Many textbooks and monographs in the existing literature focus on specific control problems or systems, such as linear or nonlinear, finite-dimensional or infinite-dimensional, continuous-time, discrete-time, or discrete-event dynamical systems... However, Mathematical Control Theory is of a different style, which makes it unique in the book market. This ambitious book sets its target at fundamental problems, including structural properties such as controllability and observability, for a variety of mathematical models. The 260-page book covers a remarkably wide range of materials... The contents of this well-organized book mainly include the analysis of control properties and optimization. I enjoyed reading the concise mathematical description with [its] clean logical structure. I also learned several new things or reviewed some materials from new angles...



I recommend the book to readers who are interested in the rigorous mathematical buildup of control systems and problems. Indeed, for mathematicians who look for the basic ideas or a general picture about the main branches of control theory, I believe this book can provide an excellent bridge to this area. Finally, for students who are ready for a more rigorous approach after grasping suitable mathematical preliminaries and control engineering background, this book can be helpful owing to its theoretical beauty and clarity."   IEEE Control Systems Magazine (Review of the Reprinted Softcover Edition)



"This introduction to Mathematical Control Theory was first published by Birkhäuser in 1992, then reprinted with corrections in 1995. It has now been reprinted in the Modern Birkhäuser Classics series...This is a worthy reprint of a worthy book." MAA Reviews (Review of the Reprinted Softcover Edition)



"This book is designed as a graduate text on the mathematical theory of deterministic control. It covers a remarkable number of topics... The book includes material on the realization of both linear and nonlinear systems, impulsive control, and positive linear systems, subjects not usually covered in an 'introductory' book... To get so much material in such a short space, the pace of the presentation is brisk. However, the exposition is excellent, and the book is a joy to read. A novel one-semester course covering both linear and nonlinear systems could be given... The book is an excellent one for introducing a mathematician to control theory. The book presents a large amount of material very well, and its use is highly recommended."   Bulletin of the AMS (Review of the Original Hardcover Edition)



"The book is very well written from a mathematical point of view of control theory. The author deserves much credit for bringing out such a book which is a useful and welcome addition to books on the mathematics of control theory."   Control Theory and Advanced Technology (Review of the Original Hardcover Edition)



"At last! We did need an introductory textbook on control which can be read, understood, and enjoyed by anyone."   Gian-Carlo Rota, The Bulletin of Mathematics Books (Review of the Original Hardcover Edition)

Preface
Introduction
§ 0.1. Problems of mathematical control theory
§ 0.2. Specific models
Bibliographical notes
PART I. Elements of classical control theory
Chapter 1. Controllability and observability
§ 1.1. Linear differential equations
§ 1.2. The controllability matrix
§ 1.3. Rank condition
§ 1.4. A classification of control systems
§ 1.5. Kalman decomposition
§ 1.6. Observability
Bibliographical notes
Chapter 2. Stability and stabilizability
§ 2.1. Stable linear systems
§ 2.2. Stable polynomials
§ 2.3. The Routh theorem
§ 2.4. Stability, observability, and Liapunov equation
§ 2.5. Stabilizability and controllability
§ 2.6. Detectability and dynamical observers
Bibliographical notes
Chapter 3. Realization theory
§ 3.1. Impulse response and transfer functions
§ 3.2. Realizations of the impulse response function
§ 3.3. The characterization of transfer functions
Bibliographical notes
Chapter 4. Systems with constraints
§ 4.1. Bounded sets of parameters
§ 4.2. Positive systems
Bibliographical notes
PART II. Nonlinear control systems
Chapter 1. Controllability and observability of nonlinear systems
§ 1.1. Nonlinear differential equations
§ 1.2. Controllability and linearization
§ 1.3. Lie brackets
§ 1.4. The openness of attainable sets
§ 1.5. Observability
Bibliographical notes
Chapter 2. Stability and stabilizability
§ 2.1. Differential inequalities
§ 2.2. The main stability test
§ 2.3. Linearization
§ 2.4. The Liapunov function method
§ 2.5. La Salle's theorem
§ 2.6. Topological stability criteria
§ 2.7. Exponential stabilizability and the robustness problem
§ 2.8. Necessary conditions for stabilizability
§ 2.9. Stabilization of the Euler equations
Bibliographical notes
Chapter 3. Realization theory
§ 3.1. Input-output maps
§ 3.2. Partial realizations
Bibliographical notes
PART III. Optimal control
Chapter 1. Dynamic programming
§ 1.1. Introductory comments
§ 1.2. Bellman's equation and the value function
§ 1.3. The linear regulator problem and the Riccati equation
§ 1.4. The linear regulator and stabilization
Bibliographical notes
Chapter 2. Dynamic programming for impulse control
§ 2.1. Impulse control problems
§ 2.2. An optimal stopping problem
§ 2.3. Iterations of convex mappings
§ 2.4. The proof of Theorem 2.1
Bibliographical notes
Chapter 3. The maximum principle
§ 3.1. Control problems with fixed terminal time
§ 3.2. An application of the maximum principle
§ 3.3. The maximum principle for impulse control problems
§ 3.4. Separation theorems
§ 3.5. Time-optimal problems
Bibliographical notes
Chapter 4. The existence of optimal strategies
§ 4.1. A control problem without an optimal solution
§ 4.2. Filippov's theorem
Bibliographical notes
PART IV. Infinite dimensional linear systems
Chapter 1. Linear control systems
§ 1.1. Introduction
§ 1.2. Semigroups of operators
§ 1.3. The Hille–Yosida theorem
§ 1.4. Phillips' theorem
§ 1.5. Important classes of generators and Lions' theorem
§ 1.6. Specific examples of generators
§ 1.7. The integral representation of linear systems
Bibliographical notes
Chapter 2. Controllability
§ 2.1. Images and kernels of linear operators
§ 2.2. The controllability operator
§ 2.3. Various concepts of controllability
§ 2.4. Systems with self-adjoint generators
§ 2.5. Controllability of the wave equation
Bibliographical notes
Chapter 3. Stability and stabilizability
§ 3.1. Various concepts of stability
§ 3.2. Liapunov's equation
§ 3.3. Stabilizability and controllability
Bibliographical notes
Chapter 4. Linear regulators in Hilbert spaces
§ 4.1. Introduction
§ 4.2. The operator Riccati equation
§ 4.3. The finite horizon case
§ 4.4. The infinite horizon case: Stabilizability and detectability
Bibliographical notes
Appendix
§ A.1. Metric spaces
§ A.2. Banach spaces
§ A.3. Hilbert spaces
§ A.4. Bochner's integral
§ A.5. Spaces of continuous functions
§ A.6. Spaces of measurable functions
References
Notations
Index