Adaptive Searching and Kalman Filter Vision Compensation during Mobile Robot Docking Operation

Abstract:

Based on affine-transformation image kinematics and recurrence relations, a sub-region KLT feature extraction method was designed and its selection rules were given. Feature points were matched by minimizing the sum of absolute differences (SAD). A diamond search template was adopted to speed up matching, and an adaptive variable-size template scheme was proposed to resolve cases in which the minimum SAD is not unique; the detailed procedure is described. To ensure that the extracted feature points lie on the background rather than on the moving robot, an adaptive iterative scheme was proposed and the corresponding algorithm was designed. The motion parameters were obtained by solving the over-determined image kinematics equations with the least-squares algorithm. According to the self-developed jitter compensation model, unintended motion was reduced by a Kalman filter, and the filtered parameters were then used to reconstruct the image. An experiment was carried out with two autonomous robots; the results show that the proposed method meets the requirements of accuracy and real-time performance.
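
The abstract names two numerical steps that can be made concrete: solving the over-determined affine image kinematics equations by least squares to recover the inter-frame motion parameters, and Kalman-filtering the accumulated motion so that only the unintended (jitter) component is compensated. The sketch below is an illustrative reconstruction under those assumptions, not the authors' implementation; the constant-velocity state model and the noise covariances q and r are assumed values.

```python
# Minimal sketch of (1) least-squares affine motion estimation from matched
# feature points and (2) Kalman smoothing of the accumulated motion parameter.
# Model structure and noise covariances are illustrative assumptions.
import numpy as np

def estimate_affine_lsq(src, dst):
    """Solve the over-determined affine equations dst ~ A*src + t by least squares.

    src, dst : (N, 2) arrays of matched feature point coordinates (N >= 3).
    Returns the 2x2 matrix A and the translation vector t.
    """
    n = src.shape[0]
    M = np.zeros((2 * n, 6))          # each match contributes two equations
    b = np.zeros(2 * n)
    M[0::2, 0:2] = src
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src
    M[1::2, 5] = 1.0
    b[0::2] = dst[:, 0]
    b[1::2] = dst[:, 1]
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = np.array([p[4], p[5]])
    return A, t

class MotionKalman1D:
    """Constant-velocity Kalman filter for one accumulated motion parameter
    (e.g. horizontal translation); the smoothed estimate is treated as the
    intended motion and the residual as jitter to be compensated."""
    def __init__(self, q=1e-3, r=0.25):
        self.x = np.zeros(2)                          # [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model
        self.H = np.array([[1.0, 0.0]])               # we observe position only
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured accumulated motion z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ (np.array([z]) - self.H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                              # smoothed (intended) position

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_path = np.cumsum(np.full(100, 0.5))          # smooth intended motion
    measured = true_path + rng.normal(0, 1.0, 100)    # intended motion plus jitter
    kf = MotionKalman1D()
    corrections = [z - kf.step(z) for z in measured]  # per-frame compensation
```

In a stabilization loop of this kind, the per-frame correction is the difference between the measured accumulated motion and its Kalman-smoothed value, so the slow, intentional motion during docking is preserved while high-frequency jitter is removed before the image is reconstructed.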

Info:

Periodical:

Key Engineering Materials (Volumes 474-476)

Pages:

592-598

Online since:

April 2011

Copyright:

© 2011 Trans Tech Publications Ltd. All Rights Reserved
