Background Subtraction Based on Pulse Coupled Neural Network

Abstract:

Dynamic environments pose great challenges to moving object detection, and addressing them broadens the range of applications in which moving object detection can be used. Unlike background subtraction methods that build their background models on local features, and inspired by the integrity of human visual perception, we present a background subtraction method for moving object detection in dynamic environments whose background models are built from global features extracted by a pulse coupled neural network (PCNN). We employ the PCNN to exploit its global coupling characteristics and thereby imitate human biological visual activity. After sensing the images with the PCNN, we extract global information about the scene and then build background models, based on these global features, that are robust to background disturbances. Experimental results show that our method performs well in both visual and quantitative evaluations in dynamic environments.
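The preview does not spell out the model equations, but the pipeline it describes (sense each frame with a PCNN, treat the resulting global response as the feature, and maintain a background model over that feature) can be sketched roughly as follows. The Python snippet below is a minimal illustration only: it assumes a standard simplified PCNN with feeding, linking and dynamic-threshold fields, uses the per-pixel firing time as the global feature, and updates the background with an exponential moving average. The parameter values, the thresholding rule and the function names pcnn_firing_map and detect_foreground are all illustrative assumptions, not the authors' exact method.

import numpy as np

def pcnn_firing_map(img, iterations=20, beta=0.2,
                    alpha_f=0.1, alpha_l=1.0, alpha_e=1.0,
                    v_f=0.5, v_l=0.2, v_e=20.0):
    """Run a simplified PCNN on a grayscale image (values in [0, 1]) and
    return, for each pixel, the iteration at which it first fires.
    Neighbour coupling makes this firing-time map depend on regional
    structure rather than on isolated pixel intensities."""
    s = img.astype(np.float64)
    f = np.zeros_like(s)            # feeding field
    l = np.zeros_like(s)            # linking field
    e = np.ones_like(s) * v_e       # dynamic threshold
    y = np.zeros_like(s)            # binary pulse output
    fire_time = np.full(s.shape, iterations, dtype=np.float64)

    def neighbour_sum(a):
        # 8-neighbour coupling via array shifts, zeroing the wrapped borders
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                shifted = np.roll(np.roll(a, dy, axis=0), dx, axis=1)
                if dy == 1:  shifted[0, :] = 0
                if dy == -1: shifted[-1, :] = 0
                if dx == 1:  shifted[:, 0] = 0
                if dx == -1: shifted[:, -1] = 0
                out += shifted
        return out

    for n in range(iterations):
        w = neighbour_sum(y)
        f = np.exp(-alpha_f) * f + v_f * w + s      # feeding input
        l = np.exp(-alpha_l) * l + v_l * w          # linking input
        u = f * (1.0 + beta * l)                    # internal activity
        y = (u > e).astype(np.float64)              # pulse output
        e = np.exp(-alpha_e) * e + v_e * y          # threshold decay / reset
        first = (y > 0) & (fire_time == iterations)
        fire_time[first] = n
    return fire_time

def detect_foreground(frame, background_map, learning_rate=0.05, threshold=2.0):
    """Compare the current frame's firing-time map with a running background
    map; pixels that deviate strongly are labelled foreground, and the map
    is updated with an exponential moving average (illustrative choice)."""
    current_map = pcnn_firing_map(frame)
    foreground = np.abs(current_map - background_map) > threshold
    background_map = (1 - learning_rate) * background_map + learning_rate * current_map
    return foreground, background_map

if __name__ == "__main__":
    # usage sketch on synthetic data: initialise the background from the
    # first frame, then process subsequent frames one by one
    rng = np.random.default_rng(0)
    bg = pcnn_firing_map(rng.random((64, 64)))
    frame = rng.random((64, 64))
    mask, bg = detect_foreground(frame, bg)
    print("foreground pixels:", int(mask.sum()))

In this sketch the neighbour coupling term is what makes the feature global in spirit: whether and when a pixel fires depends on the pulses of its neighbours, so the firing-time map reflects regional scene structure, which is the property the abstract relies on for robustness to background disturbances.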

Info:

Pages: 293-296

Online since: December 2014

Copyright: © 2015 Trans Tech Publications Ltd. All Rights Reserved

