Robot Visual Localization Based on 4-DOF Ego-Motion Model and RANSAC

Abstract:

This paper introduces a monocular robot localization method for country-road environments based on a 4-DOF ego-motion model and RANdom SAmple Consensus (RANSAC). The algorithm uses as input only the images provided by a single camera mounted on the roof of the vehicle. Given the image motion, a stable vehicle localization is computed with the 4-DOF ego-motion model and the RANSAC algorithm, which efficiently estimate the instantaneous steering angle and traveling distance even when the vehicle runs on a bumpy road. The proposed algorithms are validated on videos from an automotive platform. Furthermore, the localization results are compared against ground truth measured with a joint GPS/INS system, and the trajectory recovered from the estimated localization parameters is superimposed onto a digital map. The experimental results indicate that the algorithms are efficient and encouraging.
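The preview does not spell out the 4-DOF ego-motion model, so the sketch below only illustrates the general RANSAC structure implied by the abstract: feature points matched between consecutive frames are used to hypothesize a motion from a minimal sample, outlier matches are rejected, and the steering angle and traveling distance are refit on the inlier set. The planar rotation-plus-translation model, the function names (`estimate_motion`, `ransac_motion`), and the thresholds are illustrative assumptions, not the authors' formulation.

```python
# Illustrative sketch only (not the paper's 4-DOF model): a generic RANSAC
# loop that estimates an instantaneous steering angle and traveling distance
# from matched feature points in two consecutive frames, assuming a simple
# planar rotation-plus-translation motion between the point sets.
import numpy as np

def estimate_motion(p_prev, p_curr):
    """Least-squares 2-D rigid motion (rotation angle, translation) via Kabsch."""
    c_prev, c_curr = p_prev.mean(axis=0), p_curr.mean(axis=0)
    H = (p_prev - c_prev).T @ (p_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_curr - R @ c_prev
    theta = np.arctan2(R[1, 0], R[0, 0])     # instantaneous steering angle (rad)
    return theta, t

def ransac_motion(p_prev, p_curr, iters=200, thresh=0.05, seed=0):
    """Reject outlier matches with RANSAC, then refit the motion on the inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(p_prev), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(p_prev), size=2, replace=False)   # minimal sample
        theta, t = estimate_motion(p_prev[idx], p_curr[idx])
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        residuals = np.linalg.norm(p_curr - (p_prev @ R.T + t), axis=1)
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if not best_inliers.any():               # fall back to all matches
        best_inliers[:] = True
    theta, t = estimate_motion(p_prev[best_inliers], p_curr[best_inliers])
    return theta, np.linalg.norm(t), best_inliers   # angle, distance, inlier mask
```

In practice, `p_prev` and `p_curr` would come from feature tracking between frames; the inlier mask is what makes the angle and distance estimates robust to mismatches and to the disturbances on bumpy roads mentioned in the abstract.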

Info:

Periodical:

Advanced Materials Research (Volumes 268-270)

Pages:

1494-1499

Online since:

July 2011

Copyright:

© 2011 Trans Tech Publications Ltd. All Rights Reserved
