
E-book: Human-Robot Interaction Control Using Reinforcement Learning

(Cranfield University, Bedford, UK), (Instituto Politécnico Nacional (CINVESTAV-IPN), Mexico City, Mexico)
  • Format: EPUB+DRM
  • Price: €145.67*
  • * the price is final, i.e. no further discounts apply
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste):

    not allowed

  • Printing:

    not allowed

  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you must install special software to read it. You also need to create an Adobe ID. More information here. The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android)

    To read on a PC or Mac, install Adobe Digital Editions (this is a free application designed specifically for reading e-books; it should not be confused with Adobe Reader, which is probably already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

A comprehensive exploration of control schemes for human-robot interaction 

In Human-Robot Interaction Control Using Reinforcement Learning, an expert team of authors delivers a concise overview of human-robot interaction control schemes and insightful presentations of novel, model-free, reinforcement-learning-based controllers. The book begins with a brief introduction to state-of-the-art human-robot interaction control and reinforcement learning before moving on to describe the typical environment model. The authors also describe some of the best-known identification techniques for parameter estimation. 
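For orientation, the "environment model" in this literature is usually a mechanical impedance relation; a standard textbook form (the book's exact notation may differ) is

    M_d \ddot{x} + B_d \dot{x} + K_d x = f_h

where M_d, B_d, and K_d are the desired inertia, damping, and stiffness matrices, x is the end-effector position, and f_h is the interaction force applied by the human. The identification techniques mentioned above then estimate these parameters from measured motion and force data.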

Human-Robot Interaction Control Using Reinforcement Learning offers rigorous mathematical treatments and demonstrations that facilitate the understanding of control schemes and algorithms. It also describes stability and convergence analysis of human-robot interaction control and reinforcement learning based control. 

The authors also discuss advanced and cutting-edge topics, like inverse and velocity kinematics solutions, H2 neural control, and likely upcoming developments in the field of robotics. 

Readers will also enjoy:  

  • A thorough introduction to model-based human-robot interaction control 
  • Comprehensive explorations of model-free human-robot interaction control and human-in-the-loop control using Euler angles 
  • Practical discussions of reinforcement learning for robot position and force control, as well as continuous time reinforcement learning for robot force control 
  • In-depth examinations of robot control in worst-case uncertainty using reinforcement learning and the control of redundant robots using multi-agent reinforcement learning  
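For readers new to reinforcement learning, the tabular TD(0) value update underlying much of this material can be sketched in a few lines of generic Python. This is an illustrative toy, not code from the book; the 4-state chain MDP and all parameter values here are invented for the example:

```python
# Tabular TD(0) value estimation on a toy 4-state chain.
# States 0..3; the agent moves one step right per transition and
# receives reward 1.0 on reaching the terminal state 3.
def td0_chain(num_states=4, alpha=0.5, gamma=0.9, episodes=200):
    V = [0.0] * num_states  # value estimate per state
    for _ in range(episodes):
        s = 0
        while s < num_states - 1:
            s_next = s + 1
            r = 1.0 if s_next == num_states - 1 else 0.0
            # Terminal state has value 0 by convention.
            bootstrap = 0.0 if s_next == num_states - 1 else V[s_next]
            # TD(0) update: move V[s] toward the bootstrapped target.
            V[s] += alpha * (r + gamma * bootstrap - V[s])
            s = s_next
    return V
```

On this toy chain the estimates converge to the discounted returns 0.81, 0.9, and 1.0 for states 0, 1, and 2; the chapters listed above build far richer controllers on this same bootstrapping idea.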

Perfect for senior undergraduate and graduate students, academic researchers, and industrial practitioners studying and working in the fields of robotics, learning control systems, neural networks, and computational intelligence, Human-Robot Interaction Control Using Reinforcement Learning is also an indispensable resource for students and professionals studying reinforcement learning. 

Author Biographies xi
List of Figures xiii
List of Tables xvii
Preface xix
Part I Human-Robot Interaction Control 1
1 Introduction 3
1.1 Human-Robot Interaction Control 3
1.2 Reinforcement Learning for Control 6
1.3 Structure of the Book 7
References 10
2 Environment Model of Human-Robot Interaction 17
2.1 Impedance and Admittance 17
2.2 Impedance Model for Human-Robot Interaction 21
2.3 Identification of Human-Robot Interaction Model 24
2.4 Conclusions 30
References 30
3 Model Based Human-Robot Interaction Control 33
3.1 Task Space Impedance/Admittance Control 33
3.2 Joint Space Impedance Control 36
3.3 Accuracy and Robustness 37
3.4 Simulations 39
3.5 Conclusions 42
References 44
4 Model Free Human-Robot Interaction Control 45
4.1 Task-Space Control Using Joint-Space Dynamics 45
4.2 Task-Space Control Using Task-Space Dynamics 52
4.3 Joint Space Control 53
4.4 Simulations 54
4.5 Experiments 55
4.6 Conclusions 68
References 71
5 Human-in-the-loop Control Using Euler Angles 73
5.1 Introduction 73
5.2 Joint-Space Control 74
5.3 Task-Space Control 79
5.4 Experiments 83
5.5 Conclusions 92
References 94
Part II Reinforcement Learning for Robot Interaction Control 97
6 Reinforcement Learning for Robot Position/Force Control 99
6.1 Introduction 99
6.2 Position/Force Control Using an Impedance Model 100
6.3 Reinforcement Learning Based Position/Force Control 103
6.4 Simulations and Experiments 110
6.5 Conclusions 117
References 117
7 Continuous-Time Reinforcement Learning for Force Control 119
7.1 Introduction 119
7.2 K-means Clustering for Reinforcement Learning 120
7.3 Position/Force Control Using Reinforcement Learning 124
7.4 Experiments 130
7.5 Conclusions 136
References 136
8 Robot Control in Worst-Case Uncertainty Using Reinforcement Learning 139
8.1 Introduction 139
8.2 Robust Control Using Discrete-Time Reinforcement Learning 141
8.3 Double Q-Learning with k-Nearest Neighbors 144
8.4 Robust Control Using Continuous-Time Reinforcement Learning 150
8.5 Simulations and Experiments: Discrete-Time Case 154
8.6 Simulations and Experiments: Continuous-Time Case 161
8.7 Conclusions 170
References 170
9 Redundant Robots Control Using Multi-Agent Reinforcement Learning 173
9.1 Introduction 173
9.2 Redundant Robot Control 175
9.3 Multi-Agent Reinforcement Learning for Redundant Robot Control 179
9.4 Simulations and Experiments 183
9.5 Conclusions 187
References 189
10 Robot H2 Neural Control Using Reinforcement Learning 193
10.1 Introduction 193
10.2 H2 Neural Control Using Discrete-Time Reinforcement Learning 194
10.3 H2 Neural Control in Continuous Time 207
10.4 Examples 219
10.5 Conclusion 229
References 229
11 Conclusions 233
A Robot Kinematics and Dynamics 235
A.1 Kinematics 235
A.2 Dynamics 237
A.3 Examples 240
References 246
B Reinforcement Learning for Control 247
B.1 Markov Decision Processes 247
B.2 Value Functions 248
B.3 Iterations 250
B.4 TD Learning 251
Reference 258
Index 259
WEN YU, PhD, is Professor and Head of the Departamento de Control Automático with the Centro de Investigación y de Estudios Avanzados, Instituto Politécnico Nacional (CINVESTAV-IPN), Mexico City, Mexico. He is a co-author of Modeling and Control of Uncertain Nonlinear Systems with Fuzzy Equations and Z-Number.

ADOLFO PERRUSQUÍA, PhD, is a Research Fellow in the School of Aerospace, Transport, and Manufacturing at Cranfield University in Bedford, UK.