Robot Visual Localization Based on 4-DOF Ego-Motion Model and RANSAC

Abstract:

This paper introduces a robot monocular localization method for country road environments based on a 4-DOF ego-motion model and RANdom SAmple Consensus (RANSAC). The algorithm takes as input only the images provided by a single camera mounted on the roof of the vehicle. Given the image motion, a stable localization of the vehicle is computed with the 4-DOF ego-motion model and the RANSAC algorithm, which can efficiently estimate the instantaneous steering angle and traveling distance even when the vehicle runs on a bumpy road. The proposed algorithms are successfully validated on videos from an automotive platform. Furthermore, the localization results of our method are compared against ground truth measured with a combined GPS/INS system, and the trajectory recovered from the estimated localization parameters is superimposed onto a digital map. These experimental results indicate that the algorithms are efficient and the outcomes are encouraging.
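To illustrate the kind of robust estimation the abstract describes, the sketch below shows a RANSAC loop over matched ground-plane feature points from consecutive frames, recovering an instantaneous steering angle and traveling distance. It is a simplified planar (yaw plus 2-D translation) stand-in, not the paper's full 4-DOF formulation; the function names, the 2-point sampling, and the inlier threshold are illustrative assumptions.

```python
import numpy as np

def fit_planar_motion(p_prev, p_curr):
    """Least-squares rigid 2-D motion (yaw + translation) between matched
    ground-plane points (Kabsch); a simplified stand-in for the 4-DOF model."""
    c_prev, c_curr = p_prev.mean(axis=0), p_curr.mean(axis=0)
    H = (p_prev - c_prev).T @ (p_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_curr - R @ c_prev
    return R, t

def ransac_ego_motion(p_prev, p_curr, iters=200, thresh=0.05, rng=None):
    """RANSAC over minimal 2-point samples: returns (steering angle, travel
    distance) robust to outlier matches, e.g. from vibration on a bumpy road."""
    rng = rng or np.random.default_rng(0)
    n = len(p_prev)
    best_inliers = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, size=2, replace=False)
        R, t = fit_planar_motion(p_prev[idx], p_curr[idx])
        resid = np.linalg.norm(p_curr - (p_prev @ R.T + t), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the motion on the final consensus set
    R, t = fit_planar_motion(p_prev[best_inliers], p_curr[best_inliers])
    steering_angle = np.arctan2(R[1, 0], R[0, 0])
    travel_distance = np.linalg.norm(t)
    return steering_angle, travel_distance, best_inliers
```

Under these assumptions, accumulating the per-frame (steering angle, distance) pairs would yield the recovered trajectory that the paper superimposes onto a digital map.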

Info:

Periodical:

Advanced Materials Research (Volumes 268-270)

Edited by:

Feng Xiong

Pages:

1494-1499

DOI:

10.4028/www.scientific.net/AMR.268-270.1494

Citation:

C. L. Wang et al., "Robot Visual Localization Based on 4-DOF Ego-Motion Model and RANSAC", Advanced Materials Research, Vols. 268-270, pp. 1494-1499, 2011

Online since:

July 2011

