Image Fusion Method for Strip Steel Surface Defect Based on Bandelet-PCNN

Abstract:

To suppress the pseudo-Gibbs phenomena that appear around singularities when fusing images of strip steel surface defects captured from different angles, a novel image fusion method based on the Bandelet transform and PCNN (pulse coupled neural network) is proposed. The low-pass sub-band coefficients obtained by applying the Bandelet transform to each source image are fed into a PCNN, and the fused coefficients are selected according to the ignition (firing) frequency of the neurons over the iterations. The fused image is then reconstructed by the inverse Bandelet transform from the selected coefficients and the geometric flow parameters. Experimental results on strip surface defects such as scratches, abrasions and pits demonstrate that the fused image effectively combines the defect information of the multiple source images. Compared with the classical wavelet transform and the plain Bandelet transform, the proposed method preserves more detailed and comprehensive defect information, and is therefore more effective.
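The coefficient-selection step described in the abstract (feeding sub-band coefficients into a PCNN and choosing, per position, the source whose neuron fires most often) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Bandelet decomposition and reconstruction are assumed to be provided elsewhere, the PCNN below is a common simplified variant of the Eckhorn model, and all parameter values (`beta`, `VT`, the iteration count, etc.) are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def normalize(x, eps=1e-12):
    """Scale a coefficient map into [0, 1) so it can drive the PCNN."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + eps)

def neighbor_sum(Y):
    """Sum of the 8 neighbours of each pixel (zero padding at the borders)."""
    p = np.pad(Y, 1)
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2] +               p[1:-1, 2:] +
            p[2:, :-2]   + p[2:, 1:-1] + p[2:, 2:])

def pcnn_firing_counts(S, iters=40, beta=0.2, aL=1.0, aT=0.2, VL=1.0, VT=20.0):
    """Iterate a simplified PCNN over stimulus S and count how often
    each neuron fires (the 'ignition frequency'). Model structure and
    parameters are illustrative assumptions."""
    L = np.zeros_like(S)      # linking channel
    theta = np.ones_like(S)   # dynamic threshold
    Y = np.zeros_like(S)      # pulse output
    fire = np.zeros_like(S)   # accumulated firing counts
    for _ in range(iters):
        link = neighbor_sum(Y)                 # linking input from fired neighbours
        L = np.exp(-aL) * L + VL * link        # leaky integration of linking input
        U = S * (1.0 + beta * L)               # internal activity (feeding = stimulus)
        Y = (U > theta).astype(S.dtype)        # pulse when activity exceeds threshold
        theta = np.exp(-aT) * theta + VT * Y   # threshold jumps after a pulse, then decays
        fire += Y
    return fire

def fuse_coefficients(c1, c2, **kw):
    """Per position, keep the coefficient whose neuron fired more often."""
    f1 = pcnn_firing_counts(normalize(c1), **kw)
    f2 = pcnn_firing_counts(normalize(c2), **kw)
    return np.where(f1 >= f2, np.asarray(c1, float), np.asarray(c2, float))
```

In a full pipeline this selection would be applied to the low-pass sub-band coefficients of each source image, with the fused image recovered by the inverse transform; stronger coefficients drive their neurons to fire earlier and more often, so the rule favours the source carrying more salient defect detail at each location.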

Info:

Periodical:

Advanced Materials Research (Volumes 546-547)

Pages:

806-810

Online since:

July 2012

Copyright:

© 2012 Trans Tech Publications Ltd. All Rights Reserved
