Non-Uniform SURF Feature Point Detection and Matching


Abstract:

This paper presents a method for detecting feature points in an image and locating their matching correspondences across images. The proposed method leverages a novel, rapid LBP-based feature point detector to filter out texture-less SURF feature points. The detected feature points, called Non-Uniform SURF feature points, are matched against corresponding feature points in other frames to reliably locate the positions of moving objects. The method consists of two processing modules: Feature Point Extraction (FPE) and Feature Point Mapping (FPM). FPE first extracts salient feature points through Feature Transform and Feature Point Detection; FPM then generates a motion vector for each feature point through Feature Descriptor and Feature Point Matching. Experiments are conducted on both artificial template patterns and real scenes captured by a moving camera at different speed settings. The results show that the proposed method outperforms the commonly used SURF feature point detection and matching approach.
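The abstract does not detail how "texture-less" points are identified, but in the LBP literature a pattern's uniformity measure U (the number of 0/1 transitions in the circular bit string) separates flat or edge-like neighbourhoods (uniform, U ≤ 2) from richly textured ones (non-uniform). A minimal sketch of such a filter, under the assumption that this is the mechanism behind "Non-Uniform SURF" points (all function names and the toy patches are illustrative, not from the paper):

```python
import numpy as np

# Circular 8-neighbour offsets around a pixel, in ring order
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_bits(img, y, x):
    """8-neighbour LBP: threshold each neighbour against the centre pixel."""
    c = img[y, x]
    return [1 if img[y + dy, x + dx] >= c else 0 for dy, dx in OFFSETS]

def transitions(bits):
    """Uniformity measure U: 0/1 transitions in the circular pattern."""
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

def is_texture_less(img, y, x, max_transitions=2):
    """Uniform patterns (U <= 2) mark flat/edge-like, texture-poor points;
    a feature-point filter in this spirit would discard them and keep
    only non-uniform (textured) points for matching."""
    return transitions(lbp_bits(img, y, x)) <= max_transitions

# Toy 3x3 patches (illustrative assumptions, not data from the paper)
flat = np.full((3, 3), 10, dtype=np.uint8)            # constant -> uniform
textured = np.array([[10, 200, 10],
                     [200, 100, 200],
                     [10, 200, 10]], dtype=np.uint8)  # alternating -> non-uniform
```

Filtering SURF keypoints through such a check before descriptor matching would discard candidates on flat regions, which tend to produce ambiguous correspondences between frames.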


Info:

Pages: 3184-3188

Online since: January 2013


Copyright: © 2013 Trans Tech Publications Ltd. All Rights Reserved
