A New INS/VNS Integrated Navigation Model for Planetary Rovers

Abstract:

Inertial/vision integrated navigation (INS/VNS) is an important autonomous navigation method for planetary rovers. Traditional INS/VNS methods build the state model on the inertial error model and treat the errors of the vision system as purely random. However, the calibration errors of the binocular cameras introduce constant errors into the vision system through the three-dimensional reconstruction process. A new INS/VNS model is presented that augments the state vector with the constant errors of the vision-derived relative motion. Simulation results demonstrate that the presented model effectively improves position and attitude accuracy compared with traditional methods.
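
To make the modeling idea concrete, the sketch below shows a toy Kalman filter in which the constant part of the vision relative-motion error is appended to the state as a random-constant bias and estimated online, instead of being lumped into the measurement noise. This is a minimal one-dimensional illustration of state augmentation, not the paper's actual model; the dimensions, dynamics, and noise values are all assumptions made for demonstration.

```python
# Minimal sketch (illustrative assumptions only): a Kalman filter whose state
# is augmented with a constant bias on the vision-derived relative motion.
import numpy as np

rng = np.random.default_rng(0)

# State x = [position error, velocity error, constant vision bias b].
# The bias row encodes the key idea: model the calibration-induced vision
# error as a random constant (db/dt = 0) rather than as white noise.
F = np.array([[1.0, 1.0, 0.0],   # position error integrates velocity error
              [0.0, 1.0, 0.0],   # velocity error: random walk
              [0.0, 0.0, 1.0]])  # bias: random constant
Q = np.diag([0.0, 1e-4, 0.0])    # process noise only drives the velocity error

# Vision measurement: stereo relative motion minus the INS prediction.
# It observes the position error plus the constant bias, plus random noise.
H = np.array([[1.0, 0.0, 1.0]])
R = np.array([[1e-2]])

x = np.zeros(3)
P = np.diag([1e-6, 1.0, 1.0])    # tight prior on the initial position error;
                                 # this is what makes the constant bias
                                 # separable from the initial position offset

true_bias = 0.05                 # constant error from camera calibration
true_pos_err, true_vel_err = 0.0, 0.02

for _ in range(200):
    # Propagate truth and filter prediction
    true_pos_err += true_vel_err
    x = F @ x
    P = F @ P @ F.T + Q

    # Simulated vision measurement: constant bias plus random noise
    z = true_pos_err + true_bias + rng.normal(0.0, np.sqrt(R[0, 0]))

    # Standard Kalman update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K[:, 0] * (z - (H @ x)[0])
    P = (np.eye(3) - K @ H) @ P

print(f"estimated bias: {x[2]:+.4f}   (true: {true_bias:+.4f})")
print(f"position error: {x[0]:+.4f}   (true: {true_pos_err:+.4f})")
```

Because the bias state carries zero process noise, the filter drives its variance down over time and separates the constant calibration error from the zero-mean measurement noise; treating that error as random, as in the traditional model, would instead leave it to corrupt the position and attitude estimates.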

Info:

Pages: 406-410

Online since: December 2014

Copyright: © 2015 Trans Tech Publications Ltd. All Rights Reserved
