Object Movement Computation from Two Images

Abstract:

This paper presents a new method for computing object movement using ray vectors generated from two cameras. We compute each camera's internal and external parameters from partially overlapping input image frames that share the same fixed feature points. This is achieved by computing fixed points in the environment, camera distortion values, and internal and external parameters from stationary objects. Ray vectors cast from each camera to the feature points preserve the cameras' external-parameter values. Using the computed external parameters, the rigid movement of a tracked object is estimated by maximum likelihood estimation, with the projected intersection points between ray vectors forming part of the objective function. Our method is demonstrated, and the results are compared with those of another movement computation algorithm of ours.
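The ray-intersection term described above can be illustrated with a minimal sketch (this is not the authors' implementation; the midpoint-triangulation approach and all names below are our own assumptions). Given two rays cast from the camera centers toward a tracked feature point, the point minimizing the summed squared distance to both rays is the midpoint of the per-ray closest points, and the gap between those closest points can serve as one residual in a maximum-likelihood objective:

```python
import numpy as np

def ray_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation for two rays p_i(t) = o_i + t * d_i.

    Returns the 3-D point closest to both rays and the gap between
    the per-ray closest points (a candidate residual for the kind of
    maximum-likelihood objective described in the abstract).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    a = d1 @ d2                       # cosine of the angle between the rays
    denom = 1.0 - a * a
    if denom < 1e-12:                 # near-parallel rays: no unique midpoint
        p1 = o1
        p2 = o2 - (b @ d2) * d2       # point on ray 2 closest to o1
    else:
        # Normal equations of min_{t1,t2} |o1 + t1*d1 - o2 - t2*d2|^2
        t1 = (b @ d1 - a * (b @ d2)) / denom
        t2 = (a * (b @ d1) - b @ d2) / denom
        p1 = o1 + t1 * d1
        p2 = o2 + t2 * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)
```

Summing the squared gaps over all tracked feature points, as a function of the rigid transformation applied to the object, yields an objective that a nonlinear solver can minimize to recover the movement.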

Pages: 1085-1089

Online since: April 2015

© 2015 Trans Tech Publications Ltd. All Rights Reserved
