E-book: Control Systems and Reinforcement Learning

Sean Meyn (University of Florida)
  • Format: PDF+DRM
  • Publication date: 09-Jun-2022
  • Publisher: Cambridge University Press
  • Language: English
  • ISBN-13: 9781009063395
  • Price: 61,74 €*
  • * The price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means you must install special software to read it. You must also create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), you need to install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, you need to install Adobe Digital Editions. (This is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer.)

    This e-book cannot be read on an Amazon Kindle.

"A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of "deep" or "Q", or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by substituting random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rootedin stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning"--

Reviews

'Control Systems and Reinforcement Learning is a densely packed book with a vivid, conversational style. It speaks both to computer scientists interested in learning about the tools and techniques of control engineers and to control engineers who want to learn about the unique challenges posed by reinforcement learning and how to address these challenges. The author, a world-class researcher in control and probability theory, is not afraid of strong and perhaps controversial opinions, making the book entertaining and attractive for open-minded readers. Everyone interested in the "why" and "how" of RL will use this gem of a book for many years to come.' Csaba Szepesvári, Canada CIFAR AI Chair, University of Alberta, and Head of the Foundations Team at DeepMind

'This book is a wild ride, from the elements of control through to bleeding-edge topics in reinforcement learning. Aimed at graduate students and very good undergraduates who are willing to invest some effort, the book is a lively read and an important contribution.' Shane G. Henderson, Charles W. Lake, Jr. Chair in Productivity, Cornell University

'Reinforcement learning, now the de facto workhorse powering most AI-based algorithms, has deep connections with optimal control and dynamic programming. Meyn explores these connections in a marvelous manner and uses them to develop fast, reliable iterative algorithms for solving RL problems. This excellent, timely book from a leading expert on stochastic optimal control and approximation theory is a must-read for all practitioners in this active research area.' Panagiotis Tsiotras, David and Andrew Lewis Chair and Professor, Guggenheim School of Aerospace Engineering, Georgia Institute of Technology

Additional information

A how-to guide and scientific tutorial covering the universe of reinforcement learning and control theory for online decision making.
Preface xi
1 Introduction 1(6)
1.1 What You Can Find in Here 1(3)
1.2 What's Missing? 4(1)
1.3 Resources 5(2)
Part I Fundamentals without Noise 7(196)
2 Control Crash Course 9(42)
2.1 You Have a Control Problem 9(2)
2.2 What to Do about It? 11(1)
2.3 State Space Models 12(5)
2.4 Stability and Performance 17(12)
2.5 A Glance Ahead: From Control Theory to RL 29(3)
2.6 How Can We Ignore Noise? 32(1)
2.7 Examples 32(11)
2.8 Exercises 43(6)
2.9 Notes 49(2)
3 Optimal Control 51(33)
3.1 Value Function for Total Cost 51(1)
3.2 Bellman Equation 52(7)
3.3 Variations 59(4)
3.4 Inverse Dynamic Programming 63(1)
3.5 Bellman Equation Is a Linear Program 64(1)
3.6 Linear Quadratic Regulator 65(2)
3.7 A Second Glance Ahead 67(1)
3.8 Optimal Control in Continuous Time 68(2)
3.9 Examples 70(8)
3.10 Exercises 78(5)
3.11 Notes 83(1)
4 ODE Methods for Algorithm Design 84(75)
4.1 Ordinary Differential Equations 84(3)
4.2 A Brief Return to Reality 87(1)
4.3 Newton-Raphson Flow 88(2)
4.4 Optimization 90(7)
4.5 Quasistochastic Approximation 97(16)
4.6 Gradient-Free Optimization 113(5)
4.7 Quasi Policy Gradient Algorithms 118(5)
4.8 Stability of ODEs 123(8)
4.9 Convergence Theory for QSA 131(18)
4.10 Exercises 149(5)
4.11 Notes 154(5)
5 Value Function Approximations 159(44)
5.1 Function Approximation Architectures 160(8)
5.2 Exploration and ODE Approximations 168(3)
5.3 TD-Learning and Linear Regression 171(5)
5.4 Projected Bellman Equations and TD Algorithms 176(10)
5.5 Convex Q-Learning 186(5)
5.6 Q-Learning in Continuous Time 191(2)
5.7 Duality 193(3)
5.8 Exercises 196(3)
5.9 Notes 199(4)
Part II Reinforcement Learning and Stochastic Control 203(190)
6 Markov Chains 205(39)
6.1 Markov Models Are State Space Models 205(3)
6.2 Simple Examples 208(3)
6.3 Spectra and Ergodicity 211(4)
6.4 A Random Glance Ahead 215(1)
6.5 Poisson's Equation 216(2)
6.6 Lyapunov Functions 218(4)
6.7 Simulation: Confidence Bounds and Control Variates 222(8)
6.8 Sensitivity and Actor-Only Methods 230(3)
6.9 Ergodic Theory for General Markov Chains 233(3)
6.10 Exercises 236(7)
6.11 Notes 243(1)
7 Stochastic Control 244(36)
7.1 MDPs: A Quick Introduction 244(4)
7.2 Fluid Models for Approximation 248(3)
7.3 Queues 251(2)
7.4 Speed Scaling 253(4)
7.5 LQG 257(4)
7.6 A Queueing Game 261(2)
7.7 Controlling Rover with Partial Information 263(3)
7.8 Bandits 266(5)
7.9 Exercises 271(7)
7.10 Notes 278(2)
8 Stochastic Approximation 280(38)
8.1 Asymptotic Covariance 281(2)
8.2 Themes and Roadmaps 283(9)
8.3 Examples 292(5)
8.4 Algorithm Design Example 297(3)
8.5 Zap Stochastic Approximation 300(4)
8.6 Buyer Beware 304(3)
8.7 Some Theory 307(7)
8.8 Exercises 314(1)
8.9 Notes 315(3)
9 Temporal Difference Methods 318(44)
9.1 Policy Improvement 319(4)
9.2 Function Approximation and Smoothing 323(2)
9.3 Loss Functions 325(2)
9.4 TD(λ) Learning 327(3)
9.5 Return to the Q-Function 330(7)
9.6 Watkins's Q-Learning 337(7)
9.7 Relative Q-Learning 344(4)
9.8 GQ and Zap 348(5)
9.9 Technical Proofs 353(4)
9.10 Exercises 357(2)
9.11 Notes 359(3)
10 Setting the Stage, Return of the Actors 362(31)
10.1 The Stage, Projection, and Adjoints 363(4)
10.2 Advantage and Innovation 367(2)
10.3 Regeneration 369(2)
10.4 Average Cost and Every Other Criterion 371(5)
10.5 Gather the Actors 376(4)
10.6 SGD without Bias 380(2)
10.7 Advantage and Control Variates 382(2)
10.8 Natural Gradient and Zap 384(1)
10.9 Technical Proofs 385(4)
10.10 Notes 389(4)
Appendices 393(22)
A Mathematical Background 395(6)
A.1 Notation and Math Background 395(2)
A.2 Probability and Markovian Background 397(4)
B Markov Decision Processes 401(8)
B.1 Total Cost and Every Other Criterion 401(2)
B.2 Computational Aspects of MDPs 403(6)
C Partial Observations and Belief States 409(6)
C.1 POMDP Model 409(1)
C.2 A Fully Observed MDP 410(3)
C.3 Belief State Dynamics 413(2)
References 415(16)
Glossary of Symbols and Acronyms 431(2)
Index 433

About the author

Sean Meyn is a professor and holds the Robert C. Pittman Eminent Scholar Chair in the Department of Electrical and Computer Engineering, University of Florida. He is well known for his research on stochastic processes and their applications. His award-winning monograph Markov Chains and Stochastic Stability with R. L. Tweedie is now a standard reference. In 2015 he and Prof. Ana Busic received a Google Research Award recognizing research on renewable energy integration. He is an IEEE Fellow and IEEE Control Systems Society distinguished lecturer on topics related to both reinforcement learning and energy systems.