Fights Behavior Detection Based on Space-Time Interest Points

Abstract:

Human action recognition is a high-level visual analysis task in computer vision that draws on image processing, artificial intelligence, and pattern recognition, and it has become one of the most active research topics in recent years. In this paper, building on a comparative analysis of current human action recognition methods, we propose a novel fight behavior detection method based on space-time interest points. Since most of the information about human action in a video is carried by its space-time interest points, we combine spatio-temporal features with the motion energy image to describe the video content, and apply local spatio-temporal features in a bag-of-words framework to build a fight behavior model. Experimental results show that the method achieves high accuracy and has practical value.
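The pipeline the abstract describes — quantizing local spatio-temporal descriptors into a visual vocabulary, then representing each video as a bag-of-words histogram for a classifier — can be sketched roughly as below. This is an illustrative outline only, not the authors' implementation: the descriptor data is random stand-in data, and the vocabulary size, helper names, and parameters are assumptions.

```python
import numpy as np

def build_vocabulary(descriptors, k=5, iters=20, seed=0):
    """Toy Lloyd-style k-means over local spatio-temporal descriptors;
    returns k cluster centers (the 'visual words')."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            if np.any(labels == j):
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize one video's descriptors against the vocabulary and
    return a normalized bag-of-words histogram."""
    dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    labels = dist.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Random stand-ins for real interest-point descriptors (e.g. HOG/HOF
# around detected space-time interest points).
rng = np.random.default_rng(1)
train_descriptors = rng.normal(size=(200, 16))
vocab = build_vocabulary(train_descriptors, k=5)
h = bow_histogram(rng.normal(size=(50, 16)), vocab)
```

In a full system, the per-video histograms `h` would then be fed to a classifier (the paper's references suggest a support vector machine) trained to separate fight from non-fight clips.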

Pages: 659-663

Online since: July 2014

© 2014 Trans Tech Publications Ltd. All Rights Reserved
