Automatic Extrapolation of User Intention for Internet Image Search without Duplication


Abstract:

Image search engines (e.g. Google Image Search, Bing Image Search) depend mostly on text features surrounding the query. Interpreting a user's search intention from a single keyword query is difficult, and this leads to ambiguous and noisy search results. To resolve this ambiguity in image search, visual information is considered together with the text features. In this approach the user clicks one image among the search returns, and the results are re-ranked based on their similarity in both visual and textual content. Our work captures user search intention from this one click in four steps. First, the query image is assigned to one of several predefined adaptive weight categories, which helps re-rank the text-based search results. Second, keywords are expanded based on the visual content of the selected query image, which helps capture user intention. Third, the expanded keywords are used to enlarge the image pool so that it contains more images relevant to the query image. Fourth, the expanded keywords are also used to expand the query image into multiple positive example images, from which similarity metrics are learned for re-ranking. Finally, the images similar to the query image are re-ranked using photo quality assessment to provide better search results.
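The adaptive-weight re-ranking step described above can be illustrated with a minimal sketch. Everything here is hypothetical: the toy feature vectors, the category names, and the weight values are placeholders, not the paper's actual features or learned parameters. The idea is that each query-image category fixes how much visual versus textual similarity contributes to the re-ranking score:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical adaptive weights: each predefined query-image category
# gets its own balance of visual vs. textual similarity.
CATEGORY_WEIGHTS = {
    "scenery":  {"visual": 0.7, "textual": 0.3},
    "portrait": {"visual": 0.5, "textual": 0.5},
}

def rerank(query, pool, category):
    """Re-rank candidate images by category-weighted visual + textual
    similarity to the clicked query image."""
    w = CATEGORY_WEIGHTS[category]
    def score(img):
        return (w["visual"] * cosine(query["visual"], img["visual"])
                + w["textual"] * cosine(query["textual"], img["textual"]))
    return sorted(pool, key=score, reverse=True)
```

In a full pipeline, the same weighted score would be computed against the expanded image pool, with the similarity metric refined from the multiple positive example images rather than fixed by hand as here.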


Info:

Periodical:

Pages: 447-452

Online since: June 2014


Copyright:

© 2014 Trans Tech Publications Ltd. All Rights Reserved

