
E-book: Learning for Adaptive and Reactive Robot Control: A Dynamical Systems Approach

  • Format: PDF+DRM
  • Price: 176,80 €*
  • * The price is final, i.e., no further discounts apply.
  • This e-book is intended for personal use only. E-books cannot be returned.

DRM restrictions

  • Copying (copy/paste): not allowed
  • Printing: not allowed
  • Usage:

    Digital rights management (DRM)
    The publisher has issued this e-book in encrypted form, which means that you need to install special software in order to read it. You also need to create an Adobe ID (more information here). The e-book can be read by 1 user and downloaded to up to 6 devices (all authorized with the same Adobe ID).

    Required software
    To read on a mobile device (phone or tablet), install this free app: PocketBook Reader (iOS / Android).

    To read on a PC or Mac, install Adobe Digital Editions (a free application designed specifically for reading e-books; not to be confused with Adobe Reader, which is most likely already installed on your computer).

    This e-book cannot be read on an Amazon Kindle.

Methods by which robots can learn control laws that enable real-time reactivity using dynamical systems; with applications and exercises.

This book presents a wealth of machine learning techniques to make the control of robots more flexible and safe when interacting with humans. It introduces a set of control laws that enable reactivity using dynamical systems, a widely used method for solving motion-planning problems in robotics. These control approaches can replan in milliseconds to adapt to new environmental constraints and offer safe and compliant control of forces in contact. The techniques offer theoretical advantages, including convergence to a goal, non-penetration of obstacles, and passivity. The coverage of learning begins with low-level control parameters and progresses to higher-level competencies composed of combinations of skills. 
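To make the dynamical-systems idea concrete, here is a minimal, hedged sketch in Python (illustrative only; it is not taken from the book or its accompanying MATLAB material). It integrates a linear DS x_dot = A(x - x_target) whose matrix A has a negative-definite symmetric part, so every trajectory converges to the goal; reactivity comes for free, because the velocity command is simply re-evaluated at whatever state the robot is currently in. The goal coordinates and step sizes below are hypothetical.

```python
import numpy as np

# Illustrative sketch only (not the book's code): a linear dynamical system
#   x_dot = A @ (x - x_target)
# with A chosen so that its symmetric part is negative definite, which
# guarantees that every trajectory converges to x_target.
A = np.array([[-1.0,  0.5],
              [-0.5, -1.0]])
x_target = np.array([0.5, -0.2])   # goal state, e.g. a grasp position (hypothetical values)

def ds_step(x, dt=0.01):
    """One Euler step of the DS; 'replanning' is just re-evaluating x_dot at the current x."""
    x_dot = A @ (x - x_target)
    return x + dt * x_dot

x = np.array([2.0, 1.5])           # arbitrary start, or a perturbed state after a disturbance
for _ in range(2000):
    x = ds_step(x)

print(np.round(x, 4))              # ~ [0.5, -0.2]: the trajectory has converged to the goal
```

The book's contribution is to learn nonlinear counterparts of such systems from data while preserving guarantees of this kind; the toy example above only shows why a DS formulation reacts instantly to state changes.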
 

Learning for Adaptive and Reactive Robot Control is designed for graduate-level courses in robotics, with chapters that proceed from fundamentals to more advanced content. Techniques covered include learning from demonstration, optimization, and reinforcement learning, and using dynamical systems in learning control laws, trajectory planning, and methods for compliant and force control.
Features for teaching in each chapter:

  • applications, which range from arm manipulators to whole-body control of humanoid robots;
  • pencil-and-paper and programming exercises;
  • lecture videos, slides, and MATLAB code examples available on the author's website;
  • an eTextbook platform website offering protected material for instructors, including solutions.
 
 
Table of contents

Preface xiii
Notation xix
I Introduction 1(42)
1 Using and Learning Dynamical Systems for Robot Control: Overview 3(24)
1.1 Prerequisites and Additional Material 3(1)
1.2 Trajectory Planning under Uncertainty 4(5)
1.2.1 Planning a Path to Grasp an Object 5(1)
1.2.2 Updating the Plan Online 6(3)
1.3 Computing Paths with DSs 9(4)
1.3.1 Stabilizing the System 10(3)
1.4 Learning a Control Law to Plan Paths Automatically 13(1)
1.5 Learning How to Combine Control Laws 14(1)
1.6 Modifying a Control Law through Learning 15(3)
1.7 Coupling DSs 18(2)
1.8 Generating and Learning Compliant Control with DSs 20(2)
1.9 Control Architectures 22(5)
2 Gathering Data for Learning 27(16)
2.1 Approaches to Generate Data 27(2)
2.1.1 Which Method Should Be Used, and When? 28(1)
2.2 Interfaces for Teaching Robots 29(5)
2.2.1 Motion-Tracking Systems 29(1)
2.2.2 Correspondence Problem 30(1)
2.2.3 Kinesthetic Teaching 31(1)
2.2.4 Teleoperation 32(1)
2.2.5 Interface to Transfer Forces 33(1)
2.2.6 Combining Interfaces 33(1)
2.3 Desiderata for the Data 34(2)
2.4 Teaching a Robot How to Play Golf 36(4)
2.4.1 Teaching the Task with Human Demonstrations 36(2)
2.4.2 Learning from Failed and Good Demonstrations 38(2)
2.5 Gathering Data from Optimal Control 40(3)
II Learning a Controller 43(130)
3 Learning a Control Law 45(66)
3.1 Preliminaries 46(9)
3.1.1 Multivariate Regression for DS Learning 46(5)
3.1.2 Lyapunov Theory for Stable DSs 51(4)
3.2 Nonlinear DSs as a Mixture of Linear Systems 55(2)
3.3 Learning Stable, Nonlinear DSs 57(19)
3.3.1 Constrained Gaussian Mixture Regression 57(3)
3.3.2 Stable Estimator of DSs 60(4)
3.3.3 Evaluating the Learning of Nonlinear DS 64(1)
3.3.4 LASA Handwriting Dataset: Benchmark for Evaluating the Learning of Stable DS 64(9)
3.3.5 Robotic Implementation 73(2)
3.3.6 Shortcoming of the SEDS Formulation 75(1)
3.4 Learning Stable, Highly Nonlinear DSs 76(27)
3.4.1 Untied Linear Parameter Varying Formulation 77(3)
3.4.2 Physically Consistent Bayesian Nonparametric GMM 80(4)
3.4.3 Stable Estimator of LPV DSs 84(7)
3.4.4 Offline Learning Algorithm Evaluation 91(6)
3.4.5 Robotic Implementation 97(6)
3.5 Learning Stable, Second-Order DSs 103(6)
3.5.1 Second-Order LPV-DS Formulation 104(2)
3.5.2 Stable Estimator of Second-Order DSs 106(1)
3.5.3 Learning Algorithm Evaluation 106(2)
3.5.4 Robotic Implementation 108(1)
3.6 Conclusion 109(2)
4 Learning Multiple Control Laws 111(20)
4.1 Combining Control Laws through State-Space Partitioning 111(10)
4.1.1 Naive Approach 112(3)
4.1.2 Problem Formulation 115(3)
4.1.3 Scaling and Stability 118(1)
4.1.4 Precision of the Reconstruction 119(1)
4.1.5 Robotic Implementation 120(1)
4.2 Learning of DSs with Bifurcations 121(10)
4.2.1 DSs with Hopf Bifurcation 123(1)
4.2.2 Desired Shape for the DS 124(1)
4.2.3 Two Steps Optimization 125(3)
4.2.4 Extension to Nonlinear Limit Cycles 128(1)
4.2.5 Robotic Implementation 128(3)
5 Learning Sequences of Control Laws 131(42)
5.1 Learning Locally Active Globally Stable Dynamical Systems 133(21)
5.1.1 Linear LAGS-DS with a Single Locally Active Region 135(6)
5.1.2 Nonlinear LAGS-DS with Multiple Locally Active Regions 141(5)
5.1.3 Learning Nonlinear LAGS-DS 146(3)
5.1.4 Learning Algorithm Evaluation 149(2)
5.1.5 Robotic Implementation 151(3)
5.2 Learning Sequences of LPV-DS with Hidden Markov Models 154(21)
5.2.1 Inverse LPV-DS Formulation and Learning Approach 157(1)
5.2.2 Learning Stable Inverse LPV-DS with GMMs 158(7)
5.2.3 Learning Sequences of LPV-DS with HMMs 165(3)
5.2.4 Simulated and Robotic Implementation 168(5)
III Coupling and Modulating Controllers 173(94)
6 Coupling and Synchronizing Controllers 175(20)
6.1 Preliminaries 176(1)
6.2 Coupling Two Linear DSs 177(3)
6.2.1 Robot Cutting 178(2)
6.3 Coupling Arm-Hand Movement 180(9)
6.3.1 Formalism of the Coupling 181(1)
6.3.2 Learning the Dynamics 182(5)
6.3.3 Robotic Implementation 187(2)
6.4 Coupling Eye-Hand-Arm Movements 189(6)
7 Reaching for and Adapting to Moving Objects 195(24)
7.1 How to Reach for a Moving Object 196(2)
7.2 Unimanual Reaching for a Fixed Small Object 198(4)
7.2.1 Robotic Implementation 200(2)
7.3 Unimanual Reaching for a Moving Small Object 202(3)
7.4 Robotic Implementation 205(4)
7.5 Bimanual Reaching for a Moving Large Object 209(4)
7.6 Robotic Implementation 213(6)
7.6.1 Coordination Capabilities 213(1)
7.6.2 Grabbing a Large Moving Object 214(2)
7.6.3 Reaching for Fast-Flying Objects 216(3)
8 Adapting and Modulating an Existing Control Law 219(26)
8.1 Preliminaries 219(4)
8.1.1 Stability Properties 220(1)
8.1.2 Parametrizing a Modulation 221(2)
8.2 Learning an Internal Modulation 223(7)
8.2.1 Local Rotation and Norm-Scaling 223(1)
8.2.2 Gathering Data for Learning 224(4)
8.2.3 Robotic Implementation 228(2)
8.3 Learning an External Modulation 230(6)
8.3.1 Modulating Rotating and Speed-Scaling Dynamics 230(3)
8.3.2 Learning the External Activation Function 233(1)
8.3.3 Robotic Implementation 234(2)
8.4 Modulation to Transit from Free Space to Contact 236(9)
8.4.1 Formalism 236(4)
8.4.2 Simulated Examples 240(1)
8.4.3 Robotic Implementation 241(4)
9 Obstacle Avoidance 245(22)
9.1 Obstacle Avoidance: Formalism 246(11)
9.1.1 Obstacle Description 246(1)
9.1.2 Modulation for Obstacle Avoidance 247(1)
9.1.3 Stability Properties for Convex Obstacles 247(2)
9.1.4 Modulation for Concave Obstacles 249(2)
9.1.5 Impenetrability and Convergence 251(1)
9.1.6 Enclosing the DS in a Workspace 251(1)
9.1.7 Multiple Obstacles 252(2)
9.1.8 Avoiding Moving Obstacles 254(1)
9.1.9 Learning the Obstacle's Shape 255(2)
9.2 Self-Collision, Joint-Level Obstacle Avoidance 257(12)
9.2.1 Combining Inverse Kinematic and Self-Collision Constraints 257(1)
9.2.2 Learning an SCA Boundary 258(4)
9.2.3 SCA Dataset Construction 262(1)
9.2.4 Sparse Support Vector Machines for Large Data Sets 263(2)
9.2.5 Robotic Implementation 265(2)
IV Compliant and Force Control with Dynamical Systems 267(38)
10 Compliant Control 269(26)
10.1 When and Why Should a Robot Be Compliant? 269(4)
10.2 Compliant Motion Generators 273(12)
10.2.1 Variable Impedance Control 276(9)
10.3 Learning the Desired Impedance Profiles 285(2)
10.3.1 Learning VIC from Human Motions 285(1)
10.3.2 Learning VIC from Kinesthetic Teaching 286(1)
10.4 Passive Interaction Control with DSs 287(8)
10.4.1 Extension to Nonconservative Dynamical Systems 289(6)
11 Force Control 295(8)
11.1 Motion and Force Generation in Contact Tasks with DSs 295(12)
11.1.1 A DS-Based Strategy for Contact Task 298(2)
11.1.2 Robotic Experiments 300(3)
12 Conclusion and Outlook 303(2)
V Appendices 305(74)
A Background on Dynamical Systems Theory 307(8)
A.1 Dynamical Systems 307(1)
A.2 Visualization of Dynamical Systems 308(1)
A.3 Linear and Nonlinear Dynamical Systems 308(1)
A.4 Stability Definitions 309(2)
A.5 Stability Analysis and Lyapunov Stability 311(1)
A.6 Energy Conservation and Passivity 312(1)
A.7 Limit Cycles 313(1)
A.8 Bifurcations 314(1)
B Background on Machine Learning 315(42)
B.1 Machine Learning Problems 315(1)
B.1.1 Classification 315(1)
B.1.2 Clustering 315(1)
B.1.3 Regression 315(1)
B.2 Metrics 316(3)
B.2.1 Probabilistic Model Selection Metrics 316(1)
B.2.2 Classification Metrics 316(1)
B.2.3 Clustering Metrics 317(2)
B.2.4 Regression Metrics 319(1)
B.3 Gaussian Mixture Models 319(18)
B.3.1 Finite Gaussian Mixture Model with EM-Based Parameter Estimation 320(4)
B.3.2 Bayesian Gaussian Mixture Model with Sampling-Based Parameter Estimation 324(5)
B.3.3 Bayesian Nonparametric Gaussian Mixture Model with Sampling-Based Parameter Estimation 329(1)
B.3.4 GMM Applications 330(7)
B.4 Support Vector Machines 337(11)
B.4.1 Classification with SVM (C-SVM) 338(4)
B.4.2 Regression with SVM (ε-SVR) 342(3)
B.4.3 SVM Hyperparameter Optimization 345(3)
B.5 Gaussian Processes Regression 348(9)
B.5.1 Bayesian Linear Regression 348(2)
B.5.2 Estimation of Gaussian Process Regression 350(5)
B.5.3 GPR Hyperparameter Optimization 355(2)
C Background on Robot Control 357(4)
C.1 Multi-rigid Body Dynamics 357(1)
C.2 Motion Control 357(4)
C.2.1 Preliminaries 357(1)
C.2.2 Motion Control with Dynamical Systems (DSs) 358(1)
C.2.3 Inverse Kinematic 358(3)
D Proofs and Derivations 361(18)
D.1 Proofs and Derivations for Chapter 3 361(1)
D.1.1 Collapsed Gibbs Sampler and Sampling Equations 361(1)
D.2 Proofs and Derivations for Chapter 4 362(1)
D.2.1 Expansions for RBF Kernel 362(1)
D.3 Proofs and Derivations for Chapter 5 363(10)
D.3.1 Preliminaries for Stability Proofs 363(2)
D.3.2 Stability of Linear Locally Active Globally Stable Dynamical Systems 365(4)
D.3.3 Stability of Nonlinear Locally Active Globally Stable Dynamical Systems 369(4)
D.4 Proofs and Derivations for Chapter 9 373(6)
D.4.1 Proof of Theorem 9.1 373(1)
D.4.2 Proof of Theorem 9.2 374(5)
Notes 379(4)
Bibliography 383(8)
Index 391