
E-book: Intelligent Control of Robotic Systems [Taylor & Francis e-book]

Author affiliations: TATA Consultancy Services, New Delhi, India; General Electric, Bengaluru, India; Department of Electronics & Communication Engineering, IIIT, P; Department of Electrical Engineering, Indian Institute of Technology, Kanpur, India
  • Format: 674 pages; 31 tables, black and white; 702 illustrations, black and white
  • Publication date: 07-Apr-2020
  • Publisher: CRC Press
  • ISBN-13: 9780429486784
Other books on this subject:
  • Taylor & Francis e-book
  • Price: 276,97 €*
  • * price granting access for an unlimited number of concurrent users for an unlimited period
  • Regular price: 395,67 €
  • You save 30%
This book presents basic principles, along with the development of advanced algorithms, for realizing smart robotic systems. It describes strategies by which a robot (manipulator, mobile robot, or quadrotor) can learn its own kinematics and dynamics from data. In this context, two major issues are addressed: stability of the systems and experimental validation. The learning algorithms and techniques covered in this book extend readily to other robotic systems as well. The book contains MATLAB-based examples and C code under the Robot Operating System (ROS) for experimental validation, so that readers can replicate these algorithms on robotic platforms.
Preface xvii
Acknowledgment xxi
Authors xxiii
1 Introduction
1(28)
1.1 Vision-Based Control
3(3)
1.2 Kinematic Control of a Redundant Manipulator
6(5)
1.2.1 Redundancy Resolution using Null Space of the Pseudo-inverse
8(1)
1.2.2 Extended Jacobian Method
8(1)
1.2.3 Optimization Based Redundancy Resolution
9(1)
1.2.4 Redundancy Resolution with Global Optimization
9(1)
1.2.5 Neural Network Based Methods
10(1)
1.3 Visual Servoing
11(2)
1.3.1 Image Based Visual Servoing (IBVS)
12(1)
1.3.2 Position Based Visual Servoing (PBVS)
12(1)
1.3.3 2-1/2-D Visual Servoing
13(1)
1.4 Visual Control of a Redundant Manipulator: Research Issues
13(3)
1.5 Learning by Demonstration
16(5)
1.5.1 DS-Based Motion Learning
19(2)
1.6 Stability of Nonlinear Systems
21(1)
1.7 Optimization Techniques
22(5)
1.7.1 Genetic Algorithm
24(1)
1.7.2 Expectation Maximization for Gaussian Mixture Model
25(2)
1.8 Composition of the Book
27(2)
I Manipulators 29(454)
2 Kinematic and Dynamic Models of Robot Manipulators
31(24)
2.1 PowerCube Manipulator
31(1)
2.2 Kinematic Configuration of the Manipulator
32(3)
2.3 Estimating the Vision Space Motion with Camera Model
35(5)
2.3.1 Transformation from Cartesian Space to Vision Space
36(2)
2.3.2 The Camera Model
38(1)
2.3.3 Computation of Image Feature Velocity in the Vision Space
39(1)
2.4 Learning-Based Controller Architecture
40(1)
2.5 Universal Robot (UR 10)
41(4)
2.5.1 Mechatronic Design
41(2)
2.5.1.1 Platform
41(2)
2.5.1.2 End-Effector
43(1)
2.5.1.3 Perception Apparatus
43(1)
2.5.2 Kinematic Model
43(2)
2.6 Barrett WAM Manipulator
45(9)
2.6.1 Overview of the System
45(1)
2.6.2 Experimental Setup
46(1)
2.6.3 Dynamic Modeling
47(2)
2.6.4 System Description and Modeling
49(4)
2.6.5 State Space Representation
53(1)
2.7 Summary
54(1)
3 Hand-eye Coordination of a Robotic Arm using KSOM Network
55(58)
3.1 Kohonen Self Organizing Map
56(4)
3.1.1 Competitive Process
57(1)
3.1.2 Cooperative Process
57(1)
3.1.3 Adaptive Process
58(2)
3.2 System Identification using KSOM
60(6)
3.3 Introduction to Learning-Based Inverse Kinematic Control
66(23)
3.3.1 The Network
68(1)
3.3.2 The Learning Problem
69(1)
3.3.3 The Approach
69(1)
3.3.4 The Formulation of Cost Function
69(1)
3.3.5 Weight Update Laws
70(19)
3.4 Visual Motor Control of a Redundant Manipulator using KSOM Network
89(5)
3.4.1 The Problem
92(2)
3.5 KSOM with Sub-Clustering in Joint Angle Space
94(6)
3.5.1 Network Architecture
95(1)
3.5.2 Training Algorithm
96(1)
3.5.3 Testing Phase
97(1)
3.5.4 Redundancy Resolution
98(1)
3.5.5 Tracking a Continuous Trajectory
99(1)
3.6 Simulation and Results
100(11)
3.6.1 Network Architecture and Workspace Dimensions
100(1)
3.6.2 Training
101(1)
3.6.3 Testing
101(7)
3.6.3.1 Reaching Isolated Target Positions in the Workspace
103(2)
3.6.3.2 Tracking a Straight Line Trajectory
105(2)
3.6.3.3 Tracking an Elliptical Trajectory
107(1)
3.6.4 Real-Time Experiment
108(5)
3.6.4.1 Redundant Solutions
109(1)
3.6.4.2 Tracking a Circular and a Straight Line Trajectory
110(1)
3.6.4.3 Multi-Step Movement
111(1)
3.7 Summary
111(2)
4 Model-based Visual Servoing of a 7 DOF Manipulator
113(32)
4.1 Introduction
113(1)
4.2 Kinematic Control of a Manipulator
113(2)
4.2.1 Kinematic Control of Redundant Manipulator
114(1)
4.3 Visual Servoing
115(6)
4.3.1 Estimating the Vision Space Motion with Camera Model
116(1)
4.3.2 Transformation from Cartesian Space to Vision Space
117(2)
4.3.3 The Camera Model
119(1)
4.3.4 Computation of Image Feature Velocity in the Vision Space
120(1)
4.4 Kinematic Control of a Manipulator Directly from Vision Space
121(1)
4.5 Image Moments
122(4)
4.6 Image Moment Velocity
126(2)
4.7 A Pinhole Camera Projection
128(4)
4.8 Image Moment Interaction Matrix
132(7)
4.9 Experimental Results using a 7 DOF Manipulator
139(2)
4.10 Summary
141(4)
5 Learning-Based Visual Servoing
145(60)
5.1 Introduction
145(3)
5.2 Kinematic Control using KSOM
148(3)
5.2.1 KSOM Architecture
149(1)
5.2.2 KSOM: Weight Update
149(1)
5.2.3 Comments on Existing KSOM Based Kinematic Control Schemes
150(1)
5.3 Problem Definition
151(1)
5.4 Analysis of Solution Learned Using KSOM
151(5)
5.4.1 KSOM: An Estimate of Inverse Jacobian
152(1)
5.4.2 Empirical Verification
152(4)
5.4.2.1 Inverse Jacobian Evolution in Learning Phase
153(1)
5.4.2.2 Testing Phase: Inverse Jacobian Estimation at each Operating Zone
153(1)
5.4.2.3 Inference
154(2)
5.5 KSOM in Closed Loop Visual Servoing
156(3)
5.5.1 Stability Analysis
157(2)
5.6 Redundancy Resolution
159(1)
5.7 Results
160(12)
5.7.1 Learning Inverse Kinematic Relationship using KSOM
160(1)
5.7.2 Visual Servoing
161(3)
5.7.3 Redundancy Resolution
164(11)
5.7.3.1 Tracking a Straight Line
165(3)
5.7.3.2 Tracking an Elliptical Trajectory
168(4)
5.8 Summary
172(1)
5.9 Reinforcement Learning-Based Optimal Redundancy Resolution Directly from the Vision Space
172(1)
5.10 Introduction
172(2)
5.11 Redundancy Resolution Problem from the Vision Space
174(1)
5.12 SNAC Based Optimal Redundancy Resolution from Vision Space
175(4)
5.12.1 Selection of Cost Function
176(1)
5.12.2 Control Challenges
177(2)
5.13 T-S Fuzzy Model-Based Critic Neural Network for Redundancy Resolution from Vision Space
179(6)
5.13.1 Fuzzy Critic Model
179(2)
5.13.2 Weight Update Law
181(1)
5.13.3 Selection of Fuzzy Zones
182(1)
5.13.4 Initialization of the Fuzzy Network Control
183(2)
5.13.4.1 Remark
184(1)
5.14 KSOM Based Critic Network for Redundancy Resolution from Vision Space
185(5)
5.14.1 KSOM Critic Model
185(3)
5.14.2 KSOM: Weight Update
188(1)
5.14.3 Initialization of KSOM Network Control
188(2)
5.15 Simulation Results
190(5)
5.15.1 T-S Fuzzy Model
190(1)
5.15.2 Kohonen's Self-organizing Map
191(4)
5.16 Real-Time Experiment
195(7)
5.16.1 Tracking Elliptical Trajectory
196(5)
5.16.1.1 T-S Fuzzy Model
196(3)
5.16.1.2 KSOM
199(2)
5.16.2 Grasping a Ball with Hand-manipulator Setup
201(1)
5.17 Summary
202(3)
6 Visual Servoing using an Adaptive Distributed Takagi-Sugeno (T-S) Fuzzy Model
205(24)
6.1 T-S Fuzzy Model
206(2)
6.2 Adaptive Distributed T-S Fuzzy PD Controller
208(8)
6.2.1 Offline Learning Algorithm
209(3)
6.2.2 Online Adaptation Algorithm
212(2)
6.2.3 Stability Analysis
214(2)
6.3 Experimental Results
216(9)
6.3.1 Visual Servoing for a Static Target
220(2)
6.3.2 Compensation of Model Uncertainties
222(1)
6.3.3 Visual Servoing for a Moving Target
223(2)
6.4 Computational Complexity
225(1)
6.5 Summary
225(4)
7 Kinematic Control using Single Network Adaptive Critic
229(54)
7.1 Introduction
229(12)
7.1.1 Discrete-Time Optimal Control Problem
230(1)
7.1.2 Adaptive Critic Based Control
231(3)
7.1.2.1 Training of Action and Critic Network
232(2)
7.1.3 Single Network Adaptive Critic (DT-SNAC)
234(1)
7.1.4 Choice of Critic Network Model
235(6)
7.1.4.1 Costate Vector Modeling with MLN Critic Network
235(1)
7.1.4.2 Costate Vector Modeling with T-S Fuzzy Model-Based Critic Network
236(5)
7.2 Adaptive Critic Based Optimal Controller Design for Continuous-time Systems
241(16)
7.2.1 Continuous-time Single Network Adaptive Critic (CT-SNAC)
242(1)
7.2.2 Critic Network: Weight Update Law
243(2)
7.2.3 Choice of Critic Network
245(14)
7.2.3.1 Critic Network using MLN
245(1)
7.2.3.2 T-S Fuzzy Model-Based Critic Network with Cluster of Local Quadratic Cost Functions
246(2)
7.2.4 CT-SNAC
248(9)
7.3 Discrete-Time Input Affine System Representation of Forward Kinematics
257(2)
7.4 Modeling the Primary and Additional Tasks as an Integral Cost Function
259(2)
7.4.1 Quadratic Cost Minimization (Global Minimum Norm Motion)
260(1)
7.4.2 Joint Limit Avoidance
260(1)
7.5 Single Network Adaptive Critic Based Optimal Redundancy Resolution
261(3)
7.5.1 T-S Fuzzy Model-Based Critic Network for Closed Loop Positioning Task
262(1)
7.5.2 Training Algorithm
263(1)
7.6 Computational Complexity
264(1)
7.7 Simulation Results
265(11)
7.7.1 Global Minimum Norm Motion
266(6)
7.7.2 Joint Limit Avoidance
272(4)
7.8 Experimental Results
276(4)
7.8.1 Global Minimum Norm Motion
276(2)
7.8.2 Joint Limit Avoidance
278(2)
7.9 Conclusion
280(3)
8 Dynamic Control using Single Network Adaptive Critic
283(36)
8.1 Introduction
283(1)
8.2 Optimal Control Problem of Continuous Time Nonlinear System
284(7)
8.2.1 Linear Quadratic Regulator
285(2)
8.2.2 Hamilton-Jacobi-Bellman Equation
287(1)
8.2.3 Optimal Control Law for Input Affine System
288(1)
8.2.4 Adaptive Critic Concept
289(2)
8.3 Policy Iteration and SNAC for Unknown Continuous Time Nonlinear Systems
291(26)
8.3.1 Policy Iteration Scheme
291(1)
8.3.2 Optimal Control Problem of an Unknown Dynamic
292(3)
8.3.3 Model Representation and Learning Scheme
295(1)
8.3.3.1 TSK Fuzzy Representation of Nonlinear Dynamics
295(1)
8.3.3.2 Learning Scheme for the TSK Fuzzy Model
295(1)
8.3.4 Critic Design and Policy Update
296(5)
8.3.4.1 Construction of Initial Critic Network using Lyapunov Based LMI
296(1)
8.3.4.2 Lyapunov Function
297(1)
8.3.4.3 Conditions for Stabilization
298(3)
8.3.4.4 Design of Fitness Function
301(1)
8.3.5 Learning Near-Optimal Controller
301(6)
8.3.5.1 Update of Critic Network
304(1)
8.3.5.2 Fitness Function for PI Based Training
305(2)
8.3.6 Examples
307(13)
8.3.6.1 Simulated Model
307(3)
8.3.6.2 Example using Real Robot
310(7)
8.4 Summary
317(2)
9 Imitation Learning
319(66)
9.1 Introduction
319(1)
9.2 Dynamic Movement Primitives
320(4)
9.2.1 Mathematical Formulations
321(2)
9.2.1.1 Choice of Mean and Variance
322(1)
9.2.1.2 Spatial and Temporal Scaling
322(1)
9.2.2 Example
323(1)
9.3 Motion Encoding using Gaussian Mixture Regression
324(3)
9.3.1 SEDS: Stable Estimator of Dynamical Systems
326(1)
9.3.1.1 Learning Model Parameters
326(1)
9.3.1.2 Log-likelihood Cost
327(1)
9.4 FuzzStaMP: Fuzzy Controller Regulated Stable Movement Primitives
327(27)
9.4.1 Motion Modeling with C-FuzzStaMP
328(7)
9.4.1.1 Fuzzy Lyapunov Function
329(2)
9.4.1.2 Learning Fuzzy Controller Gains
331(2)
9.4.1.3 Design of Fitness Function
333(1)
9.4.1.4 Example
333(2)
9.4.2 Motion Modeling with R-FuzzStaMP
335(11)
9.4.2.1 Stability Analysis of the Motion System
339(3)
9.4.2.2 Design of the Fuzzy Controller
342(4)
9.4.3 Global Validity and Spatial Scaling
346(8)
9.4.3.1 Examples
348(6)
9.5 Learning Skills from Heterogeneous Demonstrations
354(31)
9.5.1 Stability Analysis
357(7)
9.5.1.1 Asymptotic Stability in the Demonstrated Region
361(2)
9.5.1.2 Ensuring Asymptotic Stability outside Demonstrated Region
363(1)
9.5.2 Learning Model Parameters from Demonstrations
364(7)
9.5.2.1 Motion Modeling using GMR
364(3)
9.5.2.2 Motion Modeling using LWPR
367(1)
9.5.2.3 Motion Modeling using ε-SVR
368(2)
9.5.2.4 Complete Pipeline
370(1)
9.5.3 Spatial Error Calculation
371(1)
9.5.4 Examples
371(11)
9.5.4.1 Example of Monotonic and Non-monotonic State Energy
372(3)
9.5.4.2 Example of Multitasking with Single and Multiple Task-equilibrium
375(7)
9.5.5 Summary
382(3)
10 Visual Perception
385(38)
10.1 Introduction
385(1)
10.2 Deep Neural Networks and Artificial Neural Networks
386(18)
10.2.1 Neural Networks
387(8)
10.2.1.1 Multi-layer Perceptron
389(3)
10.2.1.2 MLP Implementation using Tensorflow
392(3)
10.2.2 Deep Learning Techniques: An Overview
395(4)
10.2.2.1 Convolutional Neural Network (Flow and Training with Back-propagation)
395(4)
10.2.3 Different Architectures of Convolutional Neural Networks (CNNs)
399(5)
10.3 Examples of Vision-Based Object Detection Techniques
404(13)
10.3.1 Automatic Annotation of Object ROI
405(7)
10.3.1.1 Image Acquisition
407(1)
10.3.1.2 Manual Annotation
407(1)
10.3.1.3 Augmentation and Clutter Generation
407(2)
10.3.1.4 Two-class Classification Model using Deep Networks
409(2)
10.3.1.5 Experimental Results and Discussions
411(1)
10.3.2 Automatic Segmentation of Objects for Warehouse Automation
412(5)
10.3.2.1 Network Architecture
413(3)
10.3.2.2 Base Network
416(1)
10.3.2.3 Single Shot Detection
416(1)
10.3.3 Automatic Generation of Artificial Clutter
417(1)
10.3.4 Multi-Class Segmentation using Proposed Network
417(1)
10.4 Experimental Results
417(4)
10.4.1 System Description
417(1)
10.4.1.1 Server
418(1)
10.4.2 Ground Truth Generation
418(1)
10.4.3 Image Segmentation
419(2)
10.5 Summary
421(2)
11 Vision-Based Grasping
423(30)
11.1 Introduction
423(2)
11.2 Model-Based Grasping
425(8)
11.2.1 Problem Statement
425(1)
11.2.2 Hardware Setup
426(1)
11.2.3 Dataset
427(1)
11.2.4 Data Augmentation
427(1)
11.2.5 Network Architecture and Training
428(1)
11.2.6 Axis Assignment
428(1)
11.2.7 Grasp Decide Index (GDI)
428(3)
11.2.8 Final Pose Selection
431(1)
11.2.9 Overall Pipeline and Result
431(2)
11.3 Grasping without Object Models
433(19)
11.3.1 Problem Definition
433(1)
11.3.2 Proposed Method
434(4)
11.3.2.1 Creating Continuous Surfaces in 3D Point Cloud
434(4)
11.3.3 Finding Graspable Affordances
438(5)
11.3.4 Experimental Results
443(2)
11.3.4.1 Performance Measure
443(2)
11.3.5 Grasping of Individual Objects
445(1)
11.3.6 Grasping Objects in a Clutter
446(5)
11.3.7 Computation Time
451(1)
11.4 Summary
452(1)
12 Warehouse Automation: An Example
453(30)
12.1 Introduction
453(3)
12.2 Problem Definition
456(1)
12.3 System Architecture
457(2)
12.4 The Methods
459(17)
12.4.1 System Calibration
459(1)
12.4.2 Rack Detection
460(2)
12.4.3 Object Recognition
462(3)
12.4.4 Grasping
465(1)
12.4.5 Motion Planning
466(3)
12.4.6 End-Effector Design
469(2)
12.4.6.1 Suction-based End-effector
469(1)
12.4.6.2 Combining Gripping with Suction
470(1)
12.4.7 Robot Manipulator Model
471(5)
12.4.7.1 Null Space Optimization
473(1)
12.4.7.2 Inverse Kinematics as a Control Problem
474(1)
12.4.7.3 Damped Least Square Method
475(1)
12.5 Experimental Results
476(6)
12.5.1 Response Time
477(1)
12.5.2 Grasping and Suction
478(1)
12.5.3 Object Recognition
478(2)
12.5.4 Direction for Future Research
480(2)
12.6 Summary
482(1)
II Mobile Robotics 483(112)
13 Introduction to Mobile Robotics and Control
485(22)
13.1 Introduction
485(1)
13.2 System Model: Nonholonomic Mobile Robots
486(1)
13.3 Robot Attitude
487(3)
13.3.1 Rotation about Roll Axis
487(1)
13.3.2 Rotation about Pitch Axis
488(1)
13.3.3 Rotation About Yaw Axis
489(1)
13.4 Composite Rotation
490(1)
13.5 Coordinate System
491(1)
13.5.1 Earth-Centered Earth-Fixed (ECEF) Coordinate System
491(1)
13.6 Control Approaches
492(13)
13.6.1 Feedback Linearization
493(2)
13.6.2 Backstepping
495(1)
13.6.3 Sliding Mode Control
496(2)
13.6.4 Conventional SMC
498(1)
13.6.5 Terminal SMC
499(1)
13.6.6 Nonsingular TSMC (NTSMC)
500(1)
13.6.7 Fast Nonsingular TSMC (FNTSMC)
501(1)
13.6.8 Fractional Order SMC (FOSMC)
502(1)
13.6.9 Higher Order SMC (HOSMC)
503(2)
13.7 Summary
505(2)
14 Multi-robot Formation
507(30)
14.1 Introduction
507(2)
14.2 Path Planning Schemes
509(9)
14.3 Multi-Agent Formation Control
518(12)
14.3.1 Fast Adaptive Gain NTSMC
519(5)
14.3.2 Fast Adaptive Fuzzy NTSMC (FAFNTSMC)
524(3)
14.3.3 Fault Detection, Isolation and Collision Avoidance Scheme
527(3)
14.4 Experiments
530(5)
14.5 Summary
535(2)
15 Event Triggered Multi-Robot Consensus
537(18)
15.1 Introduction to Event Triggered Control
537(2)
15.2 Event Triggered Consensus
539(5)
15.2.1 Preliminaries
541(3)
15.2.2 Sliding Mode-Based Finite Time Consensus
544(1)
15.3 Event Triggered Sliding Mode-based Consensus Algorithm
544(8)
15.3.1 Consensus-based Tracking Control of Nonholonomic Multi-robot Systems
549(3)
15.4 Experiments
552(2)
15.5 Summary
554(1)
16 Vision-Based Tracking for a Human Following Mobile Robot
555(40)
16.1 Visual Tracking: Introduction
555(3)
16.1.1 Difficulties in Visual Tracking
555(1)
16.1.2 Required Features of Visual Tracking
555(1)
16.1.3 Feature Descriptors for Visual Tracking
556(2)
16.2 Human Tracking Algorithm using SURF Based Dynamic Object Model
558(9)
16.2.1 Problem Definition
559(1)
16.2.2 Object Model Description
560(2)
16.2.2.1 Maintaining a Template Pool of Descriptors
561(1)
16.2.3 The Tracking Algorithm
562(2)
16.2.3.1 Step 1: Target Initialization
563(1)
16.2.3.2 Step 2: Object Recognition and Template Pool Update
563(1)
16.2.3.3 Step 3: Occlusion Detection, Target Window Prediction
564(1)
16.2.4 SURF-Based Mean-Shift Algorithm
564(1)
16.2.5 Modified Object Model Description
565(1)
16.2.6 Modified Tracking Algorithm
566(1)
16.3 Human Tracking Algorithm with the Detection of Pose Change due to Out-of-plane Rotations
567(9)
16.3.1 Problem Definition
567(1)
16.3.2 Tracking Algorithm
568(1)
16.3.3 Template Initialization
569(1)
16.3.4 Tracking
570(1)
16.3.4.1 Scaling and Re-positioning the Tracking Window
571(1)
16.3.5 Template Update Module
571(1)
16.3.6 Error Recovery Module
572(4)
16.3.6.1 KD-tree Classifier
572(1)
16.3.6.2 Construction of KD-Tree
573(1)
16.3.6.3 Dealing with Pose Change
573(1)
16.3.6.4 Tracker Recovery from Full Occlusions
574(2)
16.4 Human Tracking Algorithm Based on Optical Flow
576(5)
16.4.1 The Template Pool and its Online Update
577(3)
16.4.1.1 Selection of New Templates
578(2)
16.4.2 Re-Initialization of Optical Flow Tracker
580(1)
16.4.3 Detection of Partial and Full Occlusion
580(1)
16.5 Visual Servo Controller
581(4)
16.5.1 Kinematic Model of the Mobile Robot
582(1)
16.5.2 Pinhole Camera Model
582(1)
16.5.3 Problem Formulation
582(1)
16.5.4 Visual Servo Control Design
583(1)
16.5.5 Simulation Results
584(1)
16.5.5.1 Example: Tracking an Object which Moves in a Circular Trajectory
584(1)
16.6 Experimental Results
585(8)
16.6.1 Experimental Results for the Human Tracking Algorithm Based on SURF-based Dynamic Object Model
585(1)
16.6.2 Tracking Results
586(3)
16.6.3 Human Following Robot
589(1)
16.6.4 Discussion on Performance Comparison
590(1)
16.6.5 Experimental Evaluation of Human Tracking Algorithm Based on Optical Flow
591(2)
16.7 Summary
593(2)
Exercises 595(8)
Bibliography 603(42)
Index 645
Authors: Laxmidhar Behera, Swagat Kumar, Prem Kumar Patchaikani, Ranjith Ravindranathan Nair, Samrat Dutta