
Tactile Sensing, Skill Learning, and Robotic Dexterous Manipulation [Paperback]

Edited by Qiang Li (Project Investigator, DEXMAN, Germany), Shan Luo (Associate Professor, Department of Engineering, King's College London, UK), Zhaopeng Chen, Chenguang Yang, and Jianwei Zhang (Professor, University of Hamburg, Faculty of Mathematics, Informatics and Natural Sciences, Department of Informatics, Hamburg, Germany)
  • Format: Paperback / softback, 372 pages, height x width: 229x152 mm, weight: 590 g, approx. 100 illustrations (100 in full color)
  • Publication date: 07-Apr-2022
  • Publisher: Academic Press Inc
  • ISBN-10: 0323904459
  • ISBN-13: 9780323904452

Tactile Sensing, Skill Learning and Robotic Dexterous Manipulation focuses on cross-disciplinary research and groundbreaking ideas along three lines: tactile sensing, skill learning, and dexterous control. The book introduces recent work on human dexterous skill representation and learning, together with tactile sensing and its application to recognizing and reconstructing the properties of unknown objects. It also introduces adaptive control schemes and their learning by imitation and exploration. Other chapters cover the fundamentals of the relevant research, highlighting the connections between fields and surveying the state of the art in related branches.

The book summarizes the different approaches and discusses the pros and cons of each. Chapters not only describe the research but also provide the background knowledge needed to understand the proposed work, making the book an excellent resource for researchers and professionals working in robotics, haptics, and machine learning.

  • Provides a review of tactile perception and the latest advances in robotic dexterous manipulation
  • Presents the most detailed work on synthesizing intelligent tactile perception, skill learning, and adaptive control
  • Introduces recent work on human dexterous skill representation and learning, as well as adaptive control schemes and their learning by imitation and exploration
  • Reveals and illustrates how robots can improve dexterity through modern tactile sensing, interactive perception, learning, and adaptive control approaches
Contributors
Preface
Part I Tactile sensing and perception
1 GelTip tactile sensor for dexterous manipulation in clutter
Daniel Fernandes Gomes and Shan Luo
1.1 Introduction
1.2 An overview of the tactile sensors
1.2.1 Marker-based optical tactile sensors
1.2.2 Image-based optical tactile sensors
1.3 The GelTip sensor
1.3.1 Overview
1.3.2 The sensor projective model
1.3.3 Fabrication process
1.4 Evaluation
1.4.1 Contact localization
1.4.2 Touch-guided grasping in a Blocks World environment
1.5 Conclusions and discussion
Acknowledgment
References
2 Robotic perception of object properties using tactile sensing
Jiaqi Jiang and Shan Luo
2.1 Introduction
2.2 Material properties recognition using tactile sensing
2.3 Object shape estimation using tactile sensing
2.4 Object pose estimation using tactile sensing
2.5 Grasping stability prediction using tactile sensing
2.6 Vision-guided tactile perception for crack reconstruction
2.6.1 Visual guidance for touch sensing
2.6.2 Guided tactile crack perception
2.6.3 Experimental setup
2.6.4 Experimental results
2.7 Conclusion and discussion
References
3 Multimodal perception for dexterous manipulation
Guanqun Cao and Shan Luo
3.1 Introduction
3.2 Visual-tactile cross-modal generation
3.2.1 "Touching to see" and "seeing to feel"
3.2.2 Experimental results
3.3 Spatiotemporal attention model for tactile texture perception
3.3.1 Spatiotemporal attention model
3.3.2 Spatial attention
3.3.3 Temporal attention
3.3.4 Experimental results
3.3.5 Attention distribution visualization
3.4 Conclusion and discussion
Acknowledgment
References
4 Capacitive material detection with machine learning for robotic grasping applications
Hannes Kisner, Yitao Ding, and Ulrike Thomas
4.1 Introduction
4.1.1 Motivation
4.1.2 Concept
4.1.3 Related work
4.2 Basic knowledge
4.2.1 Capacitance perception
4.2.2 Classification for material detection
4.3 Methods
4.3.1 Data preparation
4.3.2 Classifier configurations
4.4 Experiments
4.5 Conclusion
References
Part II Skill representation and learning
5 Admittance Control: Learning from humans through collaborating with humans
Ning Wang and Chenguang Yang
5.1 Introduction
5.2 Learning from human based on admittance control
5.2.1 Learning a task using dynamic movement primitives
5.2.2 Admittance control model
5.2.3 Learning of compliant movement profiles based on biomimetic control
5.3 Experimental validation
5.3.1 Simulation task
5.3.2 Handover task
5.3.3 Sawing task
5.4 Human robot collaboration based on admittance control
5.4.1 Principle of human arm impedance model
5.4.2 Estimation of stiffness matrix
5.4.3 Stiffness mapping between human and robot arm
5.5 Variable admittance control model
5.6 Experiments
5.6.1 Test of variable admittance control
5.6.2 Human-robot collaborative sawing task
5.7 Conclusion
References
6 Sensorimotor control for dexterous grasping - inspiration from human hand
Ke Li
6.1 Introduction of sensorimotor control for dexterous grasping
6.2 Sensorimotor control for grasping kinematics
6.3 Sensorimotor control for grasping kinetics
6.4 Conclusions
Acknowledgments
References
7 From human to robot grasping: force and kinematic synergies
Abdeldjallil Naceri, Nicolò Boccardo, Lorenzo Lombardi, Andrea Marinelli, Diego Hidalgo, Sami Haddadin, Matteo Laffranchi, and Lorenzo De Michieli
7.1 Introduction
7.1.1 Human hand synergies
7.1.2 The impact of the synergies approach on robotic hands
7.2 Experimental studies
7.2.1 Study 1: force synergies comparison between human and robot hands
7.2.2 Results of force synergies study
7.2.3 Study 2: kinematic synergies in both human and robot hands
7.2.4 Results of kinematic synergies study
7.3 Discussion
7.3.1 Force synergies: human vs. robot
7.3.2 Kinematic synergies: human vs. robot
7.4 Conclusions
Acknowledgments
References
8 Learning form-closure grasping with attractive region in environment
Rui Li, Zhenshan Bing, and Qi Qi
8.1 Background
8.2 Related work
8.2.1 Closure properties
8.2.2 Environmental constraints
8.2.3 Learning to grasp
8.3 Learning a form-closure grasp with attractive region in environment
8.3.1 Attractive region in environment for four-pin grasping
8.3.2 Learning to evaluate grasp quality with ARIE
8.3.3 Learning to grasp with ARIE
8.4 Conclusion
References
9 Learning hierarchical control for robust in-hand manipulation
Tingguang Li
9.1 Introduction
9.2 Related work
9.3 Methodology
9.3.1 Hierarchical structure for in-hand manipulation
9.3.2 Low-level controller
9.3.3 Mid-level controller
9.4 Experiments
9.4.1 Training mid-level policies and baseline
9.4.2 Dataset
9.4.3 Reaching desired object poses
9.4.4 Robustness analysis
9.4.5 Manipulating a cube
9.5 Conclusion
References
10 Learning industrial assembly by guided-DDPG
Yongxiang Fan
10.1 Introduction
10.2 From model-free RL to model-based RL
10.2.1 Guided policy search
10.2.2 Deep deterministic policy gradient
10.2.3 Comparison of DDPG and GPS
10.3 Guided deep deterministic policy gradient
10.4 Simulations and experiments
10.4.1 Parameter lists
10.4.2 Simulation results
10.4.3 Experimental results
10.5 Chapter summary
References
Part III Robotic hand adaptive control
11 Clinical evaluation of Hannes: measuring the usability of a novel polyarticulated prosthetic hand
Marianna Semprini, Nicolò Boccardo, Andrea Lince, Simone Traverso, Lorenzo Lombardi, Antonio Succi, Michele Canepa, Valentina Squeri, Jody A. Saglia, Paolo Ariano, Luigi Reale, Pericle Randi, Simona Castellano, Emanuele Gruppioni, Matteo Laffranchi, and Lorenzo De Michieli
11.1 Introduction
11.2 Preliminary study
11.2.1 Data collection
11.2.2 Outcomes
11.3 The Hannes system
11.3.1 Analysis of survey study and definition of requirements
11.3.2 System architecture
11.4 Pilot study for evaluating the Hannes hand
11.4.1 Materials and methods
11.4.2 Results
11.5 Validation of custom EMG sensors
11.5.1 Materials and methods
11.5.2 Results
11.6 Discussion and conclusions
References
12 A hand-arm teleoperation system for robotic dexterous manipulation
Shuang Li, Qiang Li, and Jianwei Zhang
12.1 Introduction
12.2 Problem formulation
12.3 Vision-based teleoperation for dexterous hand
12.3.1 Transteleop
12.3.2 Pair-wise robot-human hand dataset generation
12.4 Hand-arm teleoperation system
12.5 Transteleop evaluation
12.5.1 Network implementation details
12.5.2 Transteleop evaluation
12.5.3 Hand pose analysis
12.6 Manipulation experiments
12.7 Conclusion and discussion
References
13 Neural network-enhanced optimal motion planning for robot manipulation under remote center of motion
Hang Su and Chenguang Yang
13.1 Introduction
13.2 Problem statement
13.2.1 Kinematics modeling
13.2.2 RCM constraint
13.3 Control system design
13.3.1 Controller design method
13.3.2 RBFNN-based approximation
13.3.3 Control framework
13.4 Simulation results
13.5 Conclusion
References
14 Towards dexterous in-hand manipulation of unknown objects
Qiang Li, Robert Haschke, and Helge Ritter
14.1 Introduction
14.2 State of the art
14.3 Reactive object manipulation framework
14.3.1 Local manipulation controller - position part
14.3.2 Local manipulation controller - force part
14.3.3 Local manipulation controller - composite part
14.3.4 Regrasp planner
14.4 Finding optimal regrasp points
14.4.1 Grasp stability and manipulability
14.4.2 Object surface exploration controller
14.5 Evaluation in physics-based simulation
14.5.1 Local object manipulation
14.5.2 Large-scale object manipulation
14.6 Evaluation in a real robot experiment
14.6.1 Unknown object surface exploration by one finger
14.6.2 Unknown object local manipulation by two fingers
14.7 Summary and outlook
Acknowledgment
References
15 Robust dexterous manipulation and finger gaiting under various uncertainties
Yongxiang Fan
15.1 Introduction
15.2 Dual-stage manipulation and gaiting framework
15.3 Modeling of uncertain manipulation dynamics
15.3.1 State-space dynamics
15.3.2 Combining feedback linearization with modeling
15.4 Robust manipulation controller design
15.4.1 Design scheme
15.4.2 Design of weighting functions
15.4.3 Manipulation controller design
15.5 Real-time finger gaits planning
15.5.1 Grasp quality analysis
15.5.2 Position-level finger gaits planning
15.5.3 Velocity-level finger gaits planning
15.5.4 Similarities between position-level and velocity-level planners
15.5.5 Finger gaiting with jump control
15.6 Simulation and experiment studies
15.6.1 Simulation setup
15.6.2 Experimental setup
15.6.3 Parameter lists
15.6.4 RMC simulation results
15.6.5 RMC experiment results
15.6.6 Finger gaiting simulation results
15.7 Chapter summary
References
A Key components of dexterous manipulation: tactile sensing, skill learning, and adaptive control
Qiang Li, Shan Luo, Zhaopeng Chen, Chenguang Yang, and Jianwei Zhang
A.1 Introduction
A.2 Why sensing, why tactile sensing
A.3 Why skill learning
A.4 Why adaptive control
A.5 Conclusion
Index
Dr. Qiang Li received his PhD in Pattern Recognition and Intelligent Systems from the Shenyang Institute of Automation (SIA), Chinese Academy of Sciences (CAS), in 2010. He was awarded a stipend from the Honda Research Institute and carried out postdoctoral research at the CoR-Lab of Bielefeld University from 2009 to 2012. He is currently the Project Investigator of "DEXMAN", sponsored by the Deutsche Forschungsgemeinschaft (DFG), and works in the Neuroinformatics Group at Bielefeld University. His research interests include tactile servoing and recognition, sensory-based robotic dexterous manipulation, and robot calibration and dynamic control. He serves as an Associate Editor of the International Journal of Humanoid Robotics and Complex & Intelligent Systems, and as an Associate Editor for the leading robotics conferences ICRA, IROS, and Humanoids.

Dr. Shan Luo is an Associate Professor in the Department of Engineering at King's College London, where he leads the Robot Perception Lab (RPL). Shan received his PhD from King's College London for his work on robotic perception through tactile images. In 2016, he visited the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He worked as a Postdoctoral Research Fellow at the University of Leeds and Harvard University, followed by a Lecturer (Assistant Professor) position at the University of Liverpool from 2018 to 2021. His current research focuses on developing intelligent robots capable of safe and agile interaction with the physical environment. His primary interests lie in visuo-tactile sensors, machine learning models for visual and tactile representation learning, and robotic manipulation of challenging objects such as deformable and transparent items. He received the EPSRC New Investigator Award in 2021 and a UK-RAS Early Career Award in 2023.

Prof. Dr. Zhaopeng Chen is CEO and founder of Agile Robots AG, one of the fastest-growing high-tech robotics companies in Germany. He is also a professor in the Department of Informatics, University of Hamburg, within the Faculty of Mathematics, Informatics and Natural Sciences. He worked as Deputy Head of Lab at the Institute of Robotics and Mechatronics, German Aerospace Center (DLR), for over 10 years, where he led and worked on many robotics projects, including the DLR-ESA Mars rover ground-test robotic system, the DLR/HIT II dexterous robotic hand system, and the DLR robot astronaut Rollin' Justin. A robot he designed has been sent to the space station and is still in operation. Prof. Dr. Chen has published over 30 academic papers and received two best paper awards. He is currently leading two European projects and one DFG project, and supervises PhD students.

Dr. Chenguang Yang is a Professor of Robotics at the University of the West of England and leader of the Robot Teleoperation Group at the Bristol Robotics Laboratory. He received his PhD degree in control engineering from the National University of Singapore in 2010 and postdoctoral training in human robotics at Imperial College London, UK. His research interests lie in human-robot interaction and intelligent system design. Dr. Yang was awarded the EU Marie Curie International Incoming Fellowship, the UK EPSRC UKRI Innovation Fellowship, and the Best Paper Award of the IEEE Transactions on Robotics, as well as over ten international conference best paper awards. He is a Co-Chair of the Technical Committee on Bio-Mechatronics and Bio-Robotics Systems, IEEE Systems, Man, and Cybernetics Society, and a Co-Chair of the Technical Committee on Collaborative Automation for Flexible Manufacturing, IEEE Robotics and Automation Society. He serves as an Associate Editor of a number of IEEE Transactions and other leading international journals.

Jianwei Zhang is professor and director of TAMS, Department of Informatics, Universität Hamburg, Germany, and a Distinguished Visiting Professor of Tsinghua University, China. He received his Bachelor of Engineering (1986, with Distinction) and Master of Engineering (1989) from the Department of Computer Science of Tsinghua University, Beijing, China; his PhD (1994) from the Institute of Real-Time Computer Systems and Robotics, Department of Computer Science, University of Karlsruhe, Germany; and his Habilitation (2000) from the Faculty of Technology, University of Bielefeld, Germany. His research interests include sensor fusion, intelligent robotics, multimodal machine learning, and cognitive computing for Industry 4.0. In these areas he has published about 400 journal and conference papers and technical reports, six book chapters, and three research monographs. He is the coordinator of the DFG/NSFC Transregional Collaborative Research Centre SFB/TRR 169 "Crossmodal Learning" and of several EU robotics projects, and he has received multiple best paper awards. He was General Chair of IEEE MFI 2012, IEEE/RSJ IROS 2015, and the International Symposium on Human-Centered Robotics and Systems 2018. Jianwei Zhang is a life-long Academician of the Academy of Sciences in Hamburg.