
Chance and decision. Stochastic control in discrete time 1996 ed. [Paperback]

  • Format: Paperback / softback, 185 pages, height x width: 240x170 mm
  • Series: Publications of the Scuola Normale Superiore
  • Publication date: 01-Oct-1996
  • Publisher: Scuola Normale Superiore
  • ISBN-10: 8876422420
  • ISBN-13: 9788876422423
The mathematical theory of discrete-time decision processes, also known as stochastic control, is based on two major ideas: backward induction and conditioning. It has a large number of applications in almost all branches of the natural sciences. The aim of these notes is to give a self-contained introduction to this theory and its applications. Our intention was to give a global and mathematically precise picture of the subject and to present well-motivated examples. We cover systems with complete or partial information as well as with complete or partial observation. We have tried to present in a unified way several topics, such as dynamic programming equations, stopping problems, stabilization, the Kalman-Bucy filter, the linear regulator, adaptive control and option pricing. The notes discuss a large variety of models rather than concentrating on general existence theorems.
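As a minimal illustration of the backward-induction idea (a sketch for this page, not code from the book), consider a toy optimal stopping problem: N offers drawn i.i.d. from Uniform(0,1) arrive one at a time, exactly one must be accepted, and accepting offer x yields reward x. The Bellman recursion v_t = E[max(X, v_{t+1})] has the closed form (1 + v^2)/2 for uniform offers, so the backward step is explicit:

```python
def stopping_values(n_offers: int) -> list[float]:
    """Backward induction for the toy stopping problem.

    Returns [v_1, ..., v_n], where v_t is the optimal expected reward
    when n - t + 1 offers remain.  For X ~ Uniform(0,1),
    E[max(X, v)] = (1 + v**2) / 2, which is the explicit Bellman step.
    """
    v = 0.0  # after the last offer there is nothing left to gain
    values = []
    for _ in range(n_offers):
        v = (1.0 + v * v) / 2.0  # Bellman backward step: v <- E[max(X, v)]
        values.append(v)
    values.reverse()  # values[0] = expected reward before the first offer
    return values

vals = stopping_values(3)
# The optimal policy is a threshold rule: accept offer x at time t
# iff x >= v_{t+1}, i.e. iff the offer beats the value of continuing.
```

The same backward pass, with expectations conditioned on the observed history instead of a closed form, is the structure behind the dynamic programming equations of Chapter 4 and the stopping problems of Chapters 4 and 7.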
Contents
Chapter 1 Introduction 1
1.1 Information and observation in decision models 2
1.2 Specific problems 3
1.2.1 Examination strategy 3
1.2.2 Packing problem 4
1.2.3 Marriage problem 4
1.2.4 Gambling system 4
1.2.5 Portfolio problem 5
1.2.6 Automobile replacement problem 6
1.2.7 Automatic regulation 6
Chapter 2 Probabilistic preliminaries 9
2.1 Probability spaces and random variables 9
2.2 Independence 12
2.3 Steinhaus construction 13
2.4 Conditioning 14
2.5 Martingales and stopping times 16
2.5.1 Supermartingales 17
2.5.2 Stopping times 20
Part I Models with complete observation and information
Chapter 3 Decision models 25
3.1 Constructions of controlled sequences 25
3.2 Markovian strategies 28
3.2.1 Gambling strategy 30
Chapter 4 Dynamic programming 35
4.1 Bellman's equation 35
4.2 Stochastic controllability 38
4.3 Optimal stopping 39
4.3.1 Examples 42
4.4 Packing problem 44
Chapter 5 Linear regulator problem 47
5.1 Solution of the problem 47
5.2 General linear systems 50
5.3 Automatic control 52
Chapter 6 Financial models 57
6.1 Portfolio selection 57
6.2 Pricing options 58
6.2.1 Formulation of the problem 58
6.2.2 Pricing with arbitrary hedging 60
6.2.3 Pricing with constraints on hedging 67
6.2.4 Models in continuous time 70
Chapter 7 Infinite horizon problems 73
7.1 Bellman's equation 73
7.2 Gambling problem 77
7.3 Stabilization of linear systems 79
7.3.1 Stabilizability conditions 79
7.3.2 Algebraic Riccati equation 81
7.4 Optimal stopping 85
7.5 Selection problems 90
7.5.1 Marriage problem 90
7.5.2 Apartment problem 91
Chapter 8 Ergodic problems 95
8.1 Invariant measures for Markov chains 96
8.1.1 General chains 96
8.1.2 Finite chains 97
8.2 Bellman-Howard's equations 101
8.3 Ergodic control of finite chains 107
8.3.1 Existence of an optimal strategy 107
8.3.2 Linear Bellman-Howard's equations 111
8.3.3 Policy improvement algorithm 113
8.4 Ergodic regulator problem 114
8.4.1 Bellman-Howard's equation 115
8.4.2 Ergodicity of the optimal control 116
Part II Models with partial observation
Chapter 9 Control of finite models 121
9.1 Filtering for Markov chains 121
9.2 Disorder problem 125
Chapter 10 General models 129
10.1 Conditional distributions 129
10.2 Separation principle 131
Chapter 11 Control of linear systems 135
11.1 Conditional Gaussian distributions 136
11.2 Kalman-Bucy filter 139
11.3 Estimation of the controlled process 143
11.4 Lifted model 145
11.5 Partially observed regulation 147
Part III Models with partial information
Chapter 12 Adaptive control of finite models 151
12.1 Formulation of the problem 151
12.2 Contrast function estimators 152
12.3 Self-tuning regulator 154
Chapter 13 Adaptive control of linear systems 157
13.1 Admissible strategies 157
13.2 Least-square estimation 161
13.3 Optimal adaptive strategies 166
Chapter 14 Adaptive stabilization 171
14.1 Formulation of the problem 171
14.2 Solution of the problem 172
Appendix 175
15.1 Complements on Markov chains 175
15.1.1 Two historical examples 175
15.1.2 An interpretation of invariant distributions 176
15.2 Metric spaces 178
15.3 Prokhorov theorem 179
Bibliography 181
Index 187