Synopsis:
This book describes visual perception and control methods for robotic systems that need to interact with the environment. Multiple-view geometry is utilized to extract low-dimensional geometric information from abundant, high-dimensional image data, making it convenient to develop general solutions for robot perception and control tasks. In this book, multiple-view geometry is used for geometric modeling and scaled pose estimation; Lyapunov methods are then applied to design stabilizing control laws in the presence of model uncertainties and multiple constraints.
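As a rough illustration of the kind of multiple-view-geometry computation the book builds on, the sketch below is a minimal example, not taken from the book: it assumes Python with OpenCV and NumPy, a known intrinsic matrix K, and placeholder coplanar feature matches, and it recovers a scaled relative pose (rotation plus translation known only up to the plane depth) from a two-view homography.

import cv2
import numpy as np

# Hypothetical intrinsics and matched pixel coordinates of coplanar feature
# points seen in a reference view and a current view (placeholder values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
pts_ref = np.array([[100, 120], [400, 110], [420, 300], [90, 310],
                    [250, 200], [320, 260], [150, 240], [380, 180]], dtype=np.float64)
pts_cur = pts_ref + np.array([12.0, -8.0])  # stand-in for real feature matches

# Estimate the projective homography between the two views; RANSAC rejects
# outlier correspondences.
H, inlier_mask = cv2.findHomography(pts_ref, pts_cur, cv2.RANSAC, 3.0)

# Decompose the homography into candidate rotations R, scaled translations t/d,
# and plane normals n; the translation is recovered only up to the unknown
# plane depth d, which is what makes the estimated pose "scaled".
num_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(num_solutions, rotations[0], translations[0].ravel())

In the book, scaled pose estimates of this kind serve as feedback for the Lyapunov-based controllers developed in Part III.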
Table of Contents:
  Preface
  Authors
PART I: FOUNDATIONS
  1 Robotics
    1.1 Pose Representation
      1.1.1 Position and Translation
      1.1.2 Orientation and Rotation
      1.1.3 Homogeneous Pose Transformation
    1.2 Motion Representation
      1.2.1 Path and Trajectory
      1.2.2 Pose Kinematics
    1.3 Wheeled Mobile Robot Kinematics
      1.3.1 Wheel Kinematic Constraints
      1.3.2 Mobile Robot Kinematic Modeling
      1.3.3 Typical Nonholonomic Mobile Robot
  2 Multiple-View Geometry
    2.1 Projective Geometry
      2.1.1 Homogeneous Representation of Points and Lines
      2.1.2 Projective Transformation
    2.2 Single-View Geometry
      2.2.1 Pinhole Camera Model
      2.2.2 Camera Lens Distortion
    2.3 Two-View Geometry
      2.3.1 Homography for Planar Scenes
      2.3.2 Epipolar Geometry for Nonplanar Scenes
      2.3.3 General Scenes
    2.4 Three-View Geometry
      2.4.1 General Trifocal Tensor Model
      2.4.2 Pose Estimation with Planar Constraint
    2.5 Computation of Multiple-View Geometry
      2.5.1 Calibration of Single-View Geometry
      2.5.2 Computation of Two-View Geometry
        2.5.2.1 Computation of Homography
        2.5.2.2 Computation of Epipolar Geometry
      2.5.3 Computation of Three-View Geometry
        2.5.3.1 Direct Linear Transform
        2.5.3.2 Matrix Factorization
      2.5.4 Robust Approaches
  3 Vision-Based Robotic Systems
    3.1 System Overview
      3.1.1 System Architecture
      3.1.2 Physical Configurations
    3.2 Research Essentials
PART II: VISUAL PERCEPTION OF ROBOTICS
  4 Introduction to Visual Perception
    4.1 Road Reconstruction and Detection for Mobile Robots
      4.1.1 Previous Works
      4.1.2 A Typical Vehicle Vision System
        4.1.2.1 System Configuration
        4.1.2.2 Two-View Geometry Model
        4.1.2.3 Image Warping Model
        4.1.2.4 Vehicle-Road Geometric Model
        4.1.2.5 More General Configurations
    4.2 Motion Estimation of Moving Objects
    4.3 Scaled Pose Estimation of Mobile Robots
      4.3.1 Pose Reconstruction Based on Multiple-View Geometry
      4.3.2 Dealing with Field of View Constraints
        4.3.2.1 Key Frame Selection
        4.3.2.2 Pose Estimation
      4.3.3 Dealing with Measuring Uncertainties
        4.3.3.1 Robust Image Feature Extraction
        4.3.3.2 Robust Observers
        4.3.3.3 Robust Controllers
        4.3.3.4 Redundant Degrees of Freedom
  5 Road Scene 3D Reconstruction
    5.1 Introduction
    5.2 Algorithm Process
    5.3 3D Reconstruction
      5.3.1 Image Model
      5.3.2 Parameterization of Projective Parallax
      5.3.3 Objective Function Definition and Linearization
      5.3.4 Iterative Maximization
      5.3.5 Post Processing
    5.4 Road Detection
      5.4.1 Road Region Segmentation
      5.4.2 Road Region Diffusion
    5.5 Experimental Results
      5.5.1 Row-Wise Image Registration
      5.5.2 Road Reconstruction
      5.5.3 Computational Complexity
      5.5.4 Evaluation for More Scenarios
  6 Recursive Road Detection with Shadows
    6.1 Introduction
    6.2 Algorithm Process
    6.3 Illuminant Invariant Color Space
      6.3.1 Imaging Process
      6.3.2 Illuminant Invariant Color Space
      6.3.3 Practical Issues
    6.4 Road Reconstruction Process
      6.4.1 Image Fusion
      6.4.2 Geometric Reconstruction
        6.4.2.1 Geometric Modeling
      6.4.3 Road Detection
      6.4.4 Recursive Processing
    6.5 Experiments
      6.5.1 Illuminant Invariant Transform
      6.5.2 Road Reconstruction
      6.5.3 Comparisons with Previous Method
      6.5.4 Other Results
  7 Range Identification of Moving Objects
    7.1 Introduction
    7.2 Geometric Modeling
      7.2.1 Geometric Model of Vision Systems
      7.2.2 Object Kinematics
      7.2.3 Range Kinematic Model
    7.3 Motion Estimation
      7.3.1 Velocity Identification of the Object
      7.3.2 Range Identification of Feature Points
    7.4 Simulation Results
    7.5 Conclusions
  8 Motion Estimation of Moving Objects
    8.1 Introduction
    8.2 System Modeling
      8.2.1 Geometric Model
      8.2.2 Camera Motion and State Dynamics
    8.3 Motion Estimation
      8.3.1 Scaled Velocity Identification
      8.3.2 Range Identification
    8.4 Simulation Results
    8.5 Conclusions
PART III: VISUAL CONTROL OF ROBOTICS
  9 Introduction to Visual Control
    9.1 Previous Works
      9.1.1 Classical Visual Control Approaches
        9.1.1.1 Position Based Methods
        9.1.1.2 Image Based Methods
        9.1.1.3 Multiple-View Geometry Based Method
      9.1.2 Main Existing Problems
        9.1.2.1 Model Uncertainties
        9.1.2.2 Field of View Constraints
        9.1.2.3 Nonholonomic Constraints
    9.2 Task Descriptions
      9.2.1 Autonomous Robot Systems
      9.2.2 Semiautonomous Systems
    9.3 Typical Visual Control Systems
      9.3.1 Visual Control for General Robots
      9.3.2 Visual Control for Mobile Robots
  10 Visual Tracking Control of General Robotic Systems
    10.1 Introduction
    10.2 Visual Tracking with Eye-to-Hand Configuration
      10.2.1 Geometric Modeling
        10.2.1.1 Vision System Model
        10.2.1.2 Euclidean Reconstruction
      10.2.2 Control Development
        10.2.2.1 Control Objective
        10.2.2.2 Open-Loop Error System
        10.2.2.3 Closed-Loop Error System
        10.2.2.4 Stability Analysis
    10.3 Visual Tracking with Eye-in-Hand Configuration
      10.3.1 Geometric Modeling
      10.3.2 Control Development
        10.3.2.1 Open-Loop Error System
        10.3.2.2 Controller Design
    10.4 Simulation Results
    10.5 Conclusion
  11 Robust Moving Object Tracking Control
    11.1 Introduction
    11.2 Vision System Model
      11.2.1 Camera Geometry
      11.2.2 Euclidean Reconstruction
    11.3 Control Development
      11.3.1 Open-Loop Error System
      11.3.2 Control Design
      11.3.3 Closed-Loop Error System
    11.4 Stability Analysis
      11.4.1 Convergence of the Rotational Error
      11.4.2 Convergence of the Translational Error
    11.5 Simulation Results
      11.5.1 Simulation Configuration
      11.5.2 Simulation Results and Discussion
    11.6 Conclusion
  12 Visual Control with Field-of-View Constraints
    12.1 Introduction
    12.2 Geometric Modeling
      12.2.1 Euclidean Homography
      12.2.2 Projective Homography
      12.2.3 Kinematic Model of Vision System
    12.3 Image-Based Path Planning
      12.3.1 Pose Space to Image Space Relationship
      12.3.2 Desired Image Trajectory Planning
      12.3.3 Path Planner Analysis
    12.4 Tracking Control Development
      12.4.1 Control Development
      12.4.2 Controller Analysis
    12.5 Simulation Results
      12.5.1 Optical Axis Rotation
      12.5.2 Optical Axis Translation
      12.5.3 Camera y-Axis Rotation
      12.5.4 General Camera Motion
    12.6 Conclusions
  13 Visual Control of Mobile Robots
    13.1 Introduction
    13.2 Geometric Reconstruction
      13.2.1 Eye-to-Hand Configuration
      13.2.2 Eye-in-Hand Configuration
        13.2.2.1 Geometric Modeling
        13.2.2.2 Euclidean Reconstruction
    13.3 Control Development for Eye-to-Hand Configuration
      13.3.1 Control Objective
      13.3.2 Open-Loop Error System
      13.3.3 Closed-Loop Error System
      13.3.4 Stability Analysis
      13.3.5 Regulation Extension
    13.4 Control Development for Eye-in-Hand Configuration
      13.4.1 Open-Loop Error System
      13.4.2 Closed-Loop Error System
      13.4.3 Stability Analysis
    13.5 Simulation and Experimental Verifications
      13.5.1 Eye-to-Hand Case
      13.5.2 Eye-in-Hand Case
        13.5.2.1 Experimental Configuration
        13.5.2.2 Experimental Results
        13.5.2.3 Results Discussion
    13.6 Conclusion
  14 Trifocal Tensor Based Visual Control of Mobile Robots
    14.1 Introduction
    14.2 Geometric Modeling
    14.3 Control Development
      14.3.1 Error System Development
      14.3.2 Controller Design
      14.3.3 Stability Analysis
    14.4 Simulation Verification
      14.4.1 Pose Estimation
      14.4.2 Visual Trajectory Tracking and Pose Regulation
      14.4.3 Trajectory Tracking with Longer Range
    14.5 Conclusion
  15 Unified Visual Control of Mobile Robots with Euclidean Reconstruction
    15.1 Introduction
    15.2 Control Development
      15.2.1 Kinematic Model
      15.2.2 Open-Loop Error System
      15.2.3 Controller Design
      15.2.4 Stability Analysis
    15.3 Euclidean Reconstruction
    15.4 Simulation Results
    15.5 Conclusion
PART IV: APPENDICES
Appendix A: Chapter 7
    A.1 Proof of Theorem 7.1
Appendix B: Chapter 10
    B.1
    B.2
    B.3
Appendix C: Chapter 11
    C.1
    C.2 Proof of Property 11.1
    C.3 Proof of Lemma 11.1
    C.4 Proof of Property 11.2
Appendix D: Chapter 12
    D.1 Open-Loop Dynamics
    D.2 Open-Loop Dynamics
    D.2 Image Jacobian-Like Matrix
    D.3 Image Space Navigation Function
Appendix E: Chapter 13
Appendix F: Chapter 14
    F.1
    F.2 Proof of … for … and …
    F.3 Proof of …
    F.4 Proof of … in the Closed-Loop System (14.19)
References
Index