
A Systematic Approach to Learning Robot Programming with ROS [Paperback]

Wyatt Newman (Case Western Reserve University, Cleveland, Ohio, USA)
  • Format: Paperback / softback, 502 pages, height x width: 254x178 mm, weight: 1210 g, 50 illustrations, color
  • Publication date: 15-Sep-2017
  • Publisher: Chapman & Hall/CRC
  • ISBN-10: 1498777821
  • ISBN-13: 9781498777827
  • Paperback
  • Price: 54.66 €*
  • * the price is final, i.e. no further discounts apply
  • Regular price: 76.89 €
  • You save 29%
  • Free shipping; delivery from the publisher takes approximately 2-4 weeks
A Systematic Approach to Learning Robot Programming with ROS provides a comprehensive introduction to the essential components of ROS through detailed explanations of simple code examples along with the corresponding theory of operation. The book explores the organization of ROS, how to understand ROS packages, how to use ROS tools, how to incorporate existing ROS packages into new applications, and how to develop new packages for robotics and automation. It also facilitates continuing education by preparing the reader to better understand the existing online documentation.

The book is organized into six parts. It begins with an introduction to ROS foundations, including writing ROS nodes and ROS tools. Messages, Classes, and Servers are also covered. The second part of the book features simulation and visualization with ROS, including coordinate transforms.

The next part of the book discusses perceptual processing in ROS. It includes coverage of using cameras in ROS, depth imaging and point clouds, and point cloud processing. Mobile-robot control and navigation in ROS are featured in the fourth part of the book.

The fifth section of the book contains coverage of robot arms in ROS. This section explores robot arm kinematics, arm motion planning, arm control with the Baxter Simulator, and an object-grabber package. The last part of the book focuses on system integration and higher-level control, including perception-based and mobile manipulation.

This accessible text includes examples throughout; the C++ code examples are also provided at https://github.com/wsnewman/learning_ros.
List of Figures xiii
Preface xix
Acknowledgements xxv
Author xxvii
Section I: ROS Foundations
Chapter 1 Introduction to ROS: ROS tools and nodes 5(32)
1.1 Some ROS Concepts 5(3)
1.2 Writing ROS Nodes 8(16)
1.2.1 Creating ROS packages 9(2)
1.2.2 Writing a minimal ROS publisher 11(3)
1.2.3 Compiling ROS nodes 14(1)
1.2.4 Running ROS nodes 15(1)
1.2.5 Examining running minimal publisher node 16(2)
1.2.6 Scheduling node timing 18(2)
1.2.7 Writing a minimal ROS subscriber 20(2)
1.2.8 Compiling and running minimal subscriber 22(1)
1.2.9 Minimal subscriber and publisher node summary 23(1)
1.3 More ROS Tools: catkin_simple, roslaunch, rqt_console, and rosbag 24(6)
1.3.1 Simplifying CMakeLists.txt with catkin_simple 24(2)
1.3.2 Automating starting multiple nodes 26(1)
1.3.3 Viewing output in a ROS console 27(1)
1.3.4 Recording and playing back data with rosbag 28(2)
1.4 Minimal Simulator and Controller Example 30(5)
1.5 Wrap-Up 35(2)
Chapter 2 Messages, Classes and Servers 37(58)
2.1 Defining Custom Messages 38(9)
2.1.1 Defining a custom message 38(4)
2.1.2 Defining a variable-length message 42(5)
2.2 Introduction to ROS Services 47(7)
2.2.1 Service messages 47(2)
2.2.2 ROS service nodes 49(2)
2.2.3 Manual interaction with ROS services 51(1)
2.2.4 Example ROS service client 52(1)
2.2.5 Running example service and client 53(1)
2.3 Using C++ Classes in ROS 54(6)
2.4 Creating Library Modules in ROS 60(4)
2.5 Introduction to Action Servers and Action Clients 64(20)
2.5.1 Creating an action server package 65(1)
2.5.2 Defining custom action-server messages 66(6)
2.5.3 Designing an action client 72(3)
2.5.4 Running the example code 75(9)
2.6 Introduction to Parameter Server 84(4)
2.7 Wrap-Up 88(7)
Section II: Simulation and Visualization in ROS
Chapter 3 Simulation in ROS 95(58)
3.1 Simple Two-Dimensional Robot Simulator 95(8)
3.2 Modeling for Dynamic Simulation 103(2)
3.3 Unified Robot Description Format 105(9)
3.3.1 Kinematic model 105(3)
3.3.2 Visual model 108(1)
3.3.3 Dynamic model 109(3)
3.3.4 Collision model 112(2)
3.4 Introduction to Gazebo 114(8)
3.5 Minimal Joint Controller 122(5)
3.6 Using Gazebo Plug-In for Joint Servo Control 127(6)
3.7 Building Mobile-Robot Model 133(8)
3.8 Simulating Mobile-Robot Model 141(4)
3.9 Combining Robot Models 145(3)
3.10 Wrap-Up 148(5)
Chapter 4 Coordinate Transforms in ROS 153(24)
4.1 Introduction to Coordinate Transforms in ROS 153(8)
4.2 Transform Listener 161(7)
4.3 Using Eigen Library 168(5)
4.4 Transforming ROS Datatypes 173(1)
4.5 Wrap-Up 174(3)
Chapter 5 Sensing and Visualization in ROS 177(46)
5.1 Markers and Interactive Markers in rviz 181(18)
5.1.1 Markers in rviz 182(3)
5.1.2 Triad display example 185(6)
5.1.3 Interactive markers in rviz 191(8)
5.2 Displaying Sensor Values in rviz 199(18)
5.2.1 Simulating and displaying LIDAR 199(6)
5.2.2 Simulating and displaying color-camera data 205(4)
5.2.3 Simulating and displaying depth-camera data 209(5)
5.2.4 Selection of points in rviz 214(3)
5.3 Wrap-Up 217(6)
Section III: Perceptual Processing in ROS
Chapter 6 Using Cameras in ROS 223(24)
6.1 Projective Transformation into Camera Coordinates 223(2)
6.2 Intrinsic Camera Calibration 225(6)
6.3 Intrinsic Calibration of Stereo Cameras 231(6)
6.4 Using OpenCV with ROS 237(8)
6.4.1 Example OpenCV: finding colored pixels 238(5)
6.4.2 Example OpenCV: finding edges 243(2)
6.5 Wrap-Up 245(2)
Chapter 7 Depth Imaging and Point Clouds 247(14)
7.1 Depth from Scanning LIDAR 247(5)
7.2 Depth from Stereo Cameras 252(6)
7.3 Depth Cameras 258(1)
7.4 Wrap-Up 259(2)
Chapter 8 Point Cloud Processing 261(28)
8.1 Simple Point-Cloud Display Node 261(5)
8.2 Loading and Displaying Point-Cloud Images from Disk 266(3)
8.3 Saving Published Point-Cloud Images to Disk 269(2)
8.4 Interpreting Point-Cloud Images with PCL Methods 271(9)
8.5 Object Finder 280(4)
8.6 Wrap-Up 284(5)
Section IV: Mobile Robots in ROS
Chapter 9 Mobile-Robot Motion Control 289(58)
9.1 Desired State Generation 290(18)
9.1.1 From paths to trajectories 290(4)
9.1.2 A trajectory builder library 294(5)
9.1.3 Open-loop control 299(1)
9.1.4 Desired state publishing 300(8)
9.2 Robot State Estimation 308(22)
9.2.1 Getting model state from Gazebo 308(3)
9.2.2 Odometry 311(8)
9.2.3 Combining odometry, GPS and inertial sensing 319(6)
9.2.4 Combining odometry and LIDAR 325(5)
9.3 Differential-Drive Steering Algorithms 330(10)
9.3.1 Robot motion model 331(1)
9.3.2 Linear steering of a linear robot 332(1)
9.3.3 Linear steering of a non-linear robot 332(1)
9.3.4 Non-linear steering of a non-linear robot 333(3)
9.3.5 Simulating non-linear steering algorithm 336(4)
9.4 Steering with Respect to Map Coordinates 340(5)
9.5 Wrap-Up 345(2)
Chapter 10 Mobile-Robot Navigation 347(24)
10.1 Map Making 347(6)
10.2 Path Planning 353(5)
10.3 Example Move-Base Client 358(2)
10.4 Modifying Navigation Stack 360(4)
10.5 Wrap-Up 364(7)
Section V: Robot Arms in ROS
Chapter 11 Low-Level Control 371(18)
11.1 A One-DOF Prismatic-Joint Robot Model 371(1)
11.2 Example Position Controller 372(3)
11.3 Example Velocity Controller 375(2)
11.4 Example Force Controller 377(4)
11.5 Trajectory Messages for Robot Arms 381(5)
11.6 Trajectory Interpolation Action Server for a Seven-DOF Arm 386(1)
11.7 Wrap-Up 386(3)
Chapter 12 Robot Arm Kinematics 389(12)
12.1 Forward Kinematics 390(4)
12.2 Inverse Kinematics 394(5)
12.3 Wrap-Up 399(2)
Chapter 13 Arm Motion Planning 401(12)
13.1 Cartesian Motion Planning 402(1)
13.2 Dynamic Programming for Joint-Space Planning 403(5)
13.3 Cartesian-Motion Action Servers 408(4)
13.4 Wrap-Up 412(1)
Chapter 14 Arm Control with Baxter Simulator 413(28)
14.1 Running Baxter Simulator 413(2)
14.2 Baxter Joints and Topics 415(3)
14.3 Baxter's Grippers 418(3)
14.4 Head Pan Control 421(1)
14.5 Commanding Baxter Joints 422(3)
14.6 Using ROS Joint Trajectory Controller 425(1)
14.7 Joint-Space Record and Playback Nodes 426(6)
14.8 Baxter Kinematics 432(2)
14.9 Baxter Cartesian Moves 434(4)
14.10 Wrap-Up 438(3)
Chapter 15 An Object-Grabber Package 441(28)
15.1 Object-Grabber Code Organization 441(2)
15.2 Object Manipulation Query Service 443(4)
15.3 Generic Gripper Services 447(2)
15.4 Object-Grabber Action Server 449(3)
15.5 Example Object-Grabber Action Client 452(12)
15.6 Wrap-Up 464(5)
Section VI: System Integration and Higher Level Control
Chapter 16 Perception-Based Manipulation 469(12)
16.1 Extrinsic Camera Calibration 469(3)
16.2 Integrated Perception and Manipulation 472(8)
16.3 Wrap-Up 480(1)
Chapter 17 Mobile Manipulation 481(6)
17.1 Mobile Manipulator Model 481(1)
17.2 Mobile Manipulation 482(4)
17.3 Wrap-Up 486(1)
Chapter 18 Conclusion 487(4)
Bibliography 491(4)
Index 495
Wyatt Newman is a professor in the Department of Electrical Engineering and Computer Science at Case Western Reserve University, where he has taught since 1988. His research is in the areas of mechatronics, robotics, and computational intelligence, in which he has 12 patents and over 150 technical publications. He received the S.B. degree from Harvard College in Engineering Science, the S.M. degree in Mechanical Engineering from M.I.T. in thermal and fluid sciences, the M.S.E.E. degree from Columbia University in control theory and network theory, and the Ph.D. degree in Mechanical Engineering from M.I.T. in design and control. A former NSF Young Investigator in robotics, Prof. Newman has also held appointments as a senior member of research staff at Philips Laboratories; visiting scientist at Philips Natuurkundig Laboratorium; visiting faculty at Sandia National Laboratories, Intelligent Systems and Robotics Center; NASA summer faculty fellow at NASA Glenn Research Center; visiting fellow in neuroscience at Princeton University; distinguished visiting fellow at Edinburgh University, School of Informatics; and the Hung Hing Ying Distinguished Visiting Professor at the University of Hong Kong. Prof. Newman led robotics teams competing in the 2007 DARPA Urban Challenge and in the 2015 DARPA Robotics Challenge, and he continues to be interested in wide-ranging aspects and applications of robotics.