Brief paper

Range identification of features on an object using a single camera

Abstract

In this paper, an adaptive nonlinear estimator is developed to identify the range and the 3D Euclidean coordinates of feature points on a moving object using a single fixed camera. No explicit model is used to describe the movement of the object. Homography-based techniques are used in the development of the object kinematics, while an adaptive nonlinear estimator is designed by employing signal filters. Lyapunov design methods are utilized to facilitate the design of the estimator and filters as well as the convergence and stability analysis. The performance of the estimator is demonstrated by simulation results.

Introduction

The recovery of the range/depth and the three-dimensional (3D) Euclidean coordinates of feature points on a moving object from a sequence of two-dimensional (2D) images is a mainstream research problem with significant potential impact for applications such as autonomous vehicle/robotic guidance, navigation, autonomous surveillance, and path planning and control (de Ruiter & Benhabib, 2005; Dobrokhodov et al., 2006; Redding et al., 2006). Although the problem is inherently nonlinear, typical range identification results rely on linearization-based methods such as extended Kalman filtering (Broida et al., 1990; Oliensis, 2000). In recent publications, some researchers have recast the problem as the state estimation of a continuous-time perspective dynamic system, and have employed nonlinear system analysis tools in the development of state observers that identify motion and structure parameters (Abdursal et al., 2004; Chen and Kano, 2004a, 2004b; Hu and Ersson, 2004; Jankovic and Ghosh, 1995; Karagiannis and Astolfi, 2005; Ma et al., 2001). To summarize, these papers show that if the velocity of the moving object (or camera) is known and satisfies certain observability conditions, an estimator for the unknown Euclidean position of the feature points can be developed. In Chen and Kano (2004b), an observer for the estimation of camera motion was presented based on perspective observations of a single feature point from the (single) moving camera. The observer development was based on sliding-mode and adaptive control techniques, and it was shown that upon satisfaction of a persistent excitation condition (Slotine & Li, 1991), the rotational velocity could be fully recovered; furthermore, the translational velocity could be recovered up to a scale factor. The range/depth ambiguity attributed to the unknown scale factor was resolved by resorting to stereo vision. The aforementioned approach requires that a model for the object motion be known.

In this paper, we present a unique nonlinear estimation strategy to simultaneously estimate the velocity and structure of a moving object using a single camera. Roughly speaking, satisfaction of a persistent excitation condition (similar to Chen and Kano (2004b) and others) allows the determination of the inertial coordinates of feature points on the object. A homography-based approach is utilized to develop the object kinematics in terms of reconstructed Euclidean information and image-space information for the fixed camera system. The development of the object kinematics relies on the work presented in Chen, Dawson, Dixon, and Behal (2005) and Malis, Chaumette, and Boudet (1999), and requires a priori knowledge of a single vector associated with one feature point of the object. A novel nonlinear integral feedback estimation method developed in our previous effort (Chitrakaran, Dawson, Dixon, & Chen, 2005) is employed to identify the linear and angular velocity of the moving object. An adaptive nonlinear estimator, together with filters on measurable and unmeasurable signals, is then designed to reconstruct the range and the 3D Euclidean coordinates of the feature points. Identifying the velocities of the object facilitates the development of a measurable error system that can be used to formulate a nonlinear least-squares adaptive update law. A Lyapunov-based analysis is then presented which shows that if a persistent excitation condition is satisfied, the range and the 3D Euclidean coordinates of each feature point can be determined.

Geometric model of vision systems

We define an orthogonal coordinate frame, denoted by $\mathcal{F}$, attached to the object, and an inertial coordinate frame, denoted by $\mathcal{I}$, whose origin coincides with the optical center of the fixed camera (see Fig. 1). Let the 3D coordinates of the $i$th feature point on the object be denoted by the constant $s_i \in \mathbb{R}^3$ relative to the object reference frame $\mathcal{F}$, and by $\bar{m}_i(t) \triangleq [\,x_i \;\; y_i \;\; z_i\,]^T \in \mathbb{R}^3$ relative to the inertial coordinate system $\mathcal{I}$.
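For concreteness, the geometry above admits a short numerical sketch: a point fixed in $\mathcal{F}$ is carried into $\mathcal{I}$ by the object pose and then projected through a standard pinhole model. The pose variables x_f and R, the intrinsic matrix A, and all numerical values below are illustrative assumptions rather than quantities defined in this paper.

import numpy as np

def project_feature(s_i, R, x_f, A):
    # Euclidean coordinates of the feature point in I: m_bar = x_f + R @ s_i
    m_bar = x_f + R @ s_i
    # Pinhole projection: normalize by the depth z_i, then apply intrinsics A
    p = A @ (m_bar / m_bar[2])
    return m_bar, p[:2]  # (x_i, y_i, z_i) and pixel coordinates (u_i, v_i)

# Illustrative example: object origin 2 m along the optical axis
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
m_bar_1, uv_1 = project_feature(np.array([0.1, 0.0, 0.0]),
                                np.eye(3), np.array([0.0, 0.0, 2.0]), A)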

It is assumed that the object is always in the field of view of the camera, and

Object kinematics and velocity identification

To quantify the translation of $\mathcal{F}$ relative to the fixed reference coordinate system $\mathcal{F}^*$, we define $e_v(t) \in \mathbb{R}^3$ in terms of the image coordinates and the depth/range ratio of the feature point $O_1$ as follows $$e_v \triangleq [\,u_1 - u_1^* \;\;\; v_1 - v_1^* \;\;\; \ln(\alpha_1)\,]^T \qquad (9)$$ where $\ln(\cdot) \in \mathbb{R}$ denotes the natural logarithm. In (9) and in the subsequent development, any point $O_i$ on $\pi$ could have been utilized; however, to reduce the notational complexity, we have elected to select the feature point $O_1$. Following the development in Chen et al. (2005), the
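As a minimal sketch of this construction, assuming (as in the homography-based literature) that $\alpha_1 = z_1^*/z_1$ is the depth ratio recovered from the homography decomposition and that $u_1^*$, $v_1^*$ are the reference image coordinates:

import numpy as np

def translation_error(uv, uv_star, alpha):
    # e_v = [u_1 - u_1*, v_1 - v_1*, ln(alpha_1)]^T
    u, v = uv
    u_star, v_star = uv_star
    return np.array([u - u_star, v - v_star, np.log(alpha)])

# Illustrative values: 10 px image displacement, object twice as far as reference
e_v = translation_error((360.0, 240.0), (350.0, 240.0), 0.5)

Note that $e_v$ vanishes exactly when the feature point returns to its reference image location and reference range ($\alpha_1 = 1$).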

Range identification of feature points

To facilitate the development of the range estimator, we first define the extended image coordinates, denoted by $p_{ei}(t) \triangleq [\,u_i \;\; v_i \;\; \ln(\alpha_i)\,]^T \in \mathbb{R}^3$, for any feature point $O_i$. Following the development of (10) in Chitrakaran et al. (2005), it can be shown that the time derivative of $p_{ei}$ is given by $$\dot{p}_{ei} = \frac{\alpha_i}{z_i^*} A_{ei} R \left[ v_e + [\omega_e]_{\times} s_i \right] = W_i V_{vw} \theta_i$$ where $W_i(\cdot) \in \mathbb{R}^{3\times3}$, $V_{vw}(t) \in \mathbb{R}^{3\times4}$ and $\theta_i \in \mathbb{R}^4$ are defined as follows $$W_i \triangleq \alpha_i A_{ei} R, \qquad V_{vw} \triangleq [\,v_e \;\; [\omega_e]_{\times}\,], \qquad \theta_i \triangleq \left[\,\frac{1}{z_i^*} \;\; \frac{s_i^T}{z_i^*}\,\right]^T.$$ The elements of $W_i(\cdot)$ are known and bounded, and an estimate of $V_{vw}(t)$,
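The value of this linear-in-the-parameters structure is that everything in $W_i V_{vw}$ is either measurable or estimated, while the unknown constants enter only through $\theta_i$. The following sketch builds the regressor and takes one Euler step of a gradient-type update law; the prediction error e_i and the gain Gamma are illustrative stand-ins, not the filtered error system of the actual estimator.

import numpy as np

def skew(w):
    # Skew-symmetric matrix [w]_x, so that skew(w) @ s = w x s
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def regressor(alpha_i, A_ei, R, v_e, w_e):
    # W_i = alpha_i * A_ei * R (3x3) and V_vw = [v_e, [w_e]_x] (3x4)
    W_i = alpha_i * A_ei @ R
    V_vw = np.hstack([v_e.reshape(3, 1), skew(w_e)])
    return W_i @ V_vw  # 3x4 regressor Y_i, so that p_ei_dot = Y_i @ theta_i

def adapt_step(theta_hat, Y_i, e_i, Gamma, dt):
    # One Euler step of a gradient-type adaptation law
    # theta_hat_dot = Gamma @ Y_i^T @ e_i driven by a measurable error
    return theta_hat + dt * (Gamma @ Y_i.T @ e_i)

Once $\hat{\theta}_i$ converges, the range follows from its first entry ($z_i^* = 1/\hat{\theta}_{i,1}$) and the object-frame coordinates from the remaining entries ($s_i = \hat{\theta}_{i,2:4}/\hat{\theta}_{i,1}$).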

Simulation results

For the simulations, the image acquisition hardware and the image processing step were both replaced with a software component that generated feature trajectories using the object kinematics. We selected a planar object with four feature points, initially 2 m away along the optical axis of the camera, as the body undergoing motion. The velocity of the object along each of the six degrees of freedom was set to $0.2\sin(t)$. The coordinates of the object feature points in the object's coordinate frame $\mathcal{F}$
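A minimal sketch of such a trajectory generator, under the assumption of Euler integration, body-frame velocities, and an illustrative placement of the four coplanar feature points:

import numpy as np

def skew(w):
    # Skew-symmetric matrix [w]_x
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def generate_trajectory(s, t_final=20.0, dt=0.001):
    # Object starts 2 m along the optical axis; all six velocity
    # components follow 0.2*sin(t), as in the simulation setup above.
    x_f, R = np.array([0.0, 0.0, 2.0]), np.eye(3)
    tracks = []
    for k in range(int(t_final / dt)):
        t = k * dt
        v = 0.2 * np.sin(t) * np.ones(3)    # translational velocity
        w = 0.2 * np.sin(t) * np.ones(3)    # angular velocity
        x_f = x_f + dt * v                  # integrate position
        R = R @ (np.eye(3) + dt * skew(w))  # first-order rotation update
        tracks.append(x_f[None, :] + (R @ s.T).T)  # feature points in I
    return tracks

# Four coplanar feature points in the object frame F (illustrative)
s = np.array([[0.1, 0.1, 0.0], [-0.1, 0.1, 0.0],
              [-0.1, -0.1, 0.0], [0.1, -0.1, 0.0]])
feature_tracks = generate_trajectory(s)

The resulting Euclidean tracks would then be projected through the camera model to produce the synthetic image-space feature trajectories fed to the estimator.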

Conclusions

This paper presented an adaptive nonlinear estimator that identifies the range and the 3D Euclidean coordinates of feature points on a moving object using a single camera. The estimation error was proven to converge to zero provided that a persistent excitation condition holds. Lyapunov-based system analysis methods and homography-based vision techniques were used in the development and analysis of the identification algorithm. Simulation results demonstrated the performance of the estimator.

Jian Chen received a B.E. degree in Testing Technology and Instrumentation and an M.E. degree in Control Science and Engineering, both from Zhejiang University, Hangzhou, P.R. China, in 1998 and 2001, respectively, and a Ph.D. degree in Electrical Engineering from Clemson University, Clemson, South Carolina, in 2005. After completing his Ph.D. program in August of 2005, he worked on MEMS as a research associate for one year, and then on fuel cell modeling and control at the University of Michigan, Ann Arbor, for one and a half years. He now works on fuel cell power system modeling and control at IdaTech LLC. His research interests include fuel cell modeling and control, visual servo techniques, nonlinear control, and multi-vehicle navigation.

Vilas K. Chitrakaran received a Ph.D. in Electrical Engineering from Clemson University, SC, USA, in 2006. He has since been working as a Systems Engineer at OC Robotics, Bristol, UK, where he develops software for Snake-arm robots. His interests are in the fields of robotic system modeling, computer vision and non-linear control.

Darren M. Dawson received a B.S. degree in Electrical Engineering from the Georgia Institute of Technology in 1984. He then worked for Westinghouse as a control engineer from 1985 to 1987. In 1987, he returned to the Georgia Institute of Technology, where he received the Ph.D. degree in Electrical Engineering in March 1990. In July 1990, he joined the Electrical and Computer Engineering Department at Clemson University, where he has held the endowed position of McQueen Quattlebaum Professor since 2001. From 2005 to 2007, he also served as the ECE Department Graduate Coordinator. As of August 2007, he has held the position of ECE Department Chair. Since June 2004, he has served on the Methode Board of Directors, on which he currently serves on the Technical Committee and the Compensation Committee.

Dr. Dawson received the National Science Foundation Young Investigator Award and the Office of Naval Research Young Investigator Award. He has authored and/or co-authored one graduate textbook, seven research monographs, four book chapters, over 185 journal papers, and over 300 conference papers. His research group has presented over 300 talks at national/international conferences, universities and workshops. Professor Dawson has directed 34 completed Ph.D. dissertations and 53 completed master's theses.
