A Real Implementation of Dangerous Objects Capturing in Surveillance Web

Abstract:

The main focus of our research is capturing dangerous objects when they reappear under a surveillance camera after having performed dangerous activities elsewhere. Our solution is a two-phase method, comprising object learning and object capturing, within a classification framework. In the object learning phase, samples of objects and non-objects are collected to train a classifier with libSVM. In the object capturing phase, all moving objects are detected by background subtraction and then classified as dangerous or non-dangerous. To obtain an object representation that is robust to illumination, scale, rotation, etc., we fuse an HSV-space color feature with a multi-scale texture feature. Experimental results on real surveillance data validate the proposed method.
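The pipeline the abstract describes (background subtraction, then foreground feature extraction, then classification) can be sketched roughly as follows. This is a minimal illustration in plain NumPy, not the authors' code: the running-average background model, the 8-bin histograms, the gradient-energy texture measure, and all function names are my own assumptions. The paper trains the final dangerous/non-dangerous classifier with libSVM; the fused feature vector produced below is what such a classifier would consume.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorised RGB (uint8) -> HSV, all channels scaled to [0, 1]."""
    rgb = rgb.astype(np.float64) / 255.0
    mx, mn = rgb.max(-1), rgb.min(-1)
    d = mx - mn
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    safe = np.maximum(d, 1e-12)
    h = np.zeros_like(mx)
    h = np.where(mx == r, ((g - b) / safe) % 6.0, h)
    h = np.where(mx == g, (b - r) / safe + 2.0, h)
    h = np.where(mx == b, (r - g) / safe + 4.0, h)
    h = np.where(d < 1e-12, 0.0, h) / 6.0   # grey pixels: hue undefined, use 0
    s = np.where(mx > 0, d / np.maximum(mx, 1e-12), 0.0)
    return np.stack([h, s, mx], axis=-1)

class RunningAverageBG:
    """Toy background model: per-pixel running average. A stand-in for the
    mixture-of-Gaussians models typically used in surveillance systems."""
    def __init__(self, alpha=0.05, thresh=30.0):
        self.alpha, self.thresh, self.bg = alpha, thresh, None

    def apply(self, frame):
        f = frame.astype(np.float64)
        if self.bg is None:                  # first frame initialises the model
            self.bg = f.copy()
            return np.zeros(frame.shape[:2], dtype=bool)
        mask = np.abs(f - self.bg).max(-1) > self.thresh
        self.bg = (1.0 - self.alpha) * self.bg + self.alpha * f
        return mask

def extract_features(frame, mask, bins=8, scales=(1, 2, 4)):
    """Fuse an HSV colour histogram over foreground pixels with a crude
    multi-scale texture measure (mean gradient energy per scale)."""
    hsv = rgb_to_hsv(frame)
    n = max(int(mask.sum()), 1)
    color = np.concatenate([
        np.histogram(hsv[..., c][mask], bins=bins, range=(0.0, 1.0))[0] / n
        for c in range(3)])
    gray = frame.astype(np.float64).mean(-1)
    texture = np.array([
        np.abs(np.diff(gray[::s, ::s], axis=0)).mean()
        + np.abs(np.diff(gray[::s, ::s], axis=1)).mean()
        for s in scales])
    return np.concatenate([color, texture])  # 3*bins + len(scales) dimensions

# --- tiny demo on synthetic frames -------------------------------------
bg_frame = np.zeros((32, 32, 3), dtype=np.uint8)
model = RunningAverageBG(alpha=0.1, thresh=30.0)
m0 = model.apply(bg_frame)                   # empty scene: no foreground
moving = bg_frame.copy()
moving[8:16, 8:16] = 200                     # a bright 8x8 "object" appears
m1 = model.apply(moving)
feat = extract_features(moving, m1)          # vector an SVM would classify
```

In a real deployment the foreground mask would be split into connected components so that each moving object gets its own feature vector, and the classifier decision threshold would be tuned on the collected object/non-object training samples.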

Pages:

1032-1035

Online since:

January 2013

Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved
