The Application of Visual Attention Mechanism in Road Disaster Identification and Early Warning System


Abstract:

A visual attention mechanism is introduced into the traditional road disaster monitoring and early warning system. In this system the disaster region is the focus of attention (FOA), which is exactly the object that needs to be processed. Itti's algorithm [1] is used to extract a saliency map, from which the regions that may contain a disaster are quickly located according to their saliency, so that disaster recognition and early warning can be completed rapidly. The method was tested on simulated snowstorms and falling rocks, and the corresponding experimental results are given. The results confirm the correctness and efficiency of introducing a visual attention mechanism into a road disaster monitoring and early warning system, and show that the approach is of real significance and practical value for reducing computation and improving the real-time performance of the overall system.
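The pipeline the abstract describes (compute a saliency map, then locate the focus of attention and candidate disaster regions) can be sketched as follows. This is a minimal, intensity-only stand-in for Itti's full model, which additionally uses color and orientation channels with iterative normalization; the function names, the choice of scales, and the threshold value here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image):
    """Simplified Itti-style intensity saliency:
    center-surround differences between Gaussian-blurred scales."""
    img = image.astype(float)
    # a small Gaussian scale space (sigmas chosen for illustration)
    scales = [gaussian_filter(img, sigma=s) for s in (1, 4, 8, 16)]
    sal = np.zeros_like(img)
    # center-surround contrast: |fine scale - coarse scale|
    for c in range(2):
        for s in range(c + 2, 4):
            sal += np.abs(scales[c] - scales[s])
    # normalize the conspicuity map to [0, 1]
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal

def focus_of_attention(sal, thresh=0.7):
    """Return the (row, col) of maximum saliency (the FOA)
    and a binary mask of candidate regions above `thresh`."""
    foa = np.unravel_index(np.argmax(sal), sal.shape)
    return foa, sal >= thresh

# usage: a synthetic road frame with one bright anomaly
frame = np.zeros((64, 64))
frame[30:34, 40:44] = 1.0          # simulated disaster region
sal = saliency_map(frame)
(foa_r, foa_c), mask = focus_of_attention(sal)
```

Because only the small masked region is passed on to recognition, the expensive classification step runs on a fraction of the frame, which is the source of the computational saving the abstract claims.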


Pages:

523-526

Online since:

August 2013

Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved

Citation:

[1] Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259.

DOI: 10.1109/34.730558

[2] Li Qun-bin, Huang Hao-feng, Xiao Nan-feng. The design and implementation of a video-image-based alarm system for road disasters [J]. Computer Engineering and Design, 2011, 32(10): 3442-3445. (in Chinese).

[3] Xu Shi, Wen Han, Xiao Nan-feng. Road disaster monitoring system based on wireless video transmission [J]. Journal of Chongqing University of Science and Technology (Natural Science), 2011, 25(10): 46-53. (in Chinese).

[4] Tian Ming-hui. Visual attention mechanism modeling and the study of its application [D]. Hefei: University of Science and Technology of China, 2010. (in Chinese).

[5] Tsotsos J K, Culhane S M, Wai W Y K, Lai Y, Davis N, Nuflo F. Modeling visual attention via selective tuning [J]. Artificial Intelligence, 1995, 78(1-2): 507-545.

DOI: 10.1016/0004-3702(95)00025-9

[6] Itti L. Models of bottom-up and top-down visual attention [D]. Pasadena: California Institute of Technology, 2000.

[7] Koch C, Ullman S. Shifts in selective visual attention: towards the underlying neural circuitry [J]. Human Neurobiology, 1985, 4(4): 219-227.

[8] Feng Song-he, Xu De, Yang Xu, Wu Ai-min. A novel region-based image retrieval algorithm using selective visual attention model [C]. Proc. of Int. Conf. on Advanced Concepts for Intelligent Vision Systems (ACIVS'05), Antwerp, Belgium, Sep. 2005, LNCS 3708: 235-242.

DOI: 10.1007/11558484_30

[9] Itti L, Koch C. Computational modelling of visual attention [J]. Nature Reviews Neuroscience, 2001, 2(3): 194-203.