A Novel Blind Source Separation Method Based on the Extended Natural Gradient


Abstract:

In this paper, a new method is introduced to derive the extended natural gradient proposed by Lewicki and Sejnowski in [1]. Their original derivation relies on many approximations, and the proof is complicated. To give a more rigorous mathematical proof of this gradient, the Lie group invariance property is introduced, which makes the proof much simpler and more direct. In addition, an iterative algorithm based on Newton's method is given to estimate the sources efficiently. Experimental results confirm the efficiency of the proposed method.
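For context, the following is a minimal sketch of the standard natural-gradient ICA update of Amari [8], on which the extended gradient of Lewicki and Sejnowski [1] builds. It is an illustration of the baseline technique only, not the paper's extended method or its Newton iteration; the learning rate, iteration count, and the choice of tanh as the score function for super-Gaussian sources are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two super-Gaussian (Laplacian) sources, mixed by a random matrix A.
n, T = 2, 5000
S = rng.laplace(size=(n, T))
A = rng.normal(size=(n, n))
X = A @ S

# Standard natural-gradient update (Amari [8]):
#   W <- W + eta * (I - E[f(y) y^T]) W,   y = W x,
# with f(y) = tanh(y), a common score choice for super-Gaussian sources.
W = np.eye(n)
eta = 0.1
for _ in range(300):
    Y = W @ X
    W += eta * (np.eye(n) - np.tanh(Y) @ Y.T / T) @ W

# After convergence, W @ A should be close to a scaled permutation matrix,
# i.e. each estimated component recovers one source up to scale and order.
P = W @ A
print(np.round(P, 2))
```

The natural gradient pre-multiplies the ordinary gradient by W^T W, which makes the update equivariant (its behavior does not depend on the conditioning of the mixing matrix A) and avoids a matrix inversion at each step.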


Info:

Periodical:

Advanced Materials Research (Volumes 433-440)

Pages:

3570-3576

Online since:

January 2012

Copyright:

© 2012 Trans Tech Publications Ltd. All Rights Reserved

References:

[1] M. Lewicki and T. Sejnowski, Learning overcomplete representations, Neural computation, vol. 12, no. 2, p.337–365, (2000).

DOI: 10.1162/089976600300015826


[2] P. Comon, C. Jutten, and J. Herault, Blind separation of sources, part II: problems statement, Signal Processing, vol. 24, no. 1, p.11–20, (1991).

DOI: 10.1016/0165-1684(91)90080-3


[3] A. Cichocki and S. Amari, Adaptive Blind Signal and Image Processing: learning algorithms and applications. John Wiley and Sons, (2002).


[4] P. Comon, Independent component analysis, a new concept?, Signal Processing, vol. 36, no. 3, p.287–314, (1994).

DOI: 10.1016/0165-1684(94)90029-9


[5] A. Hyvarinen, J. Karhunen, and E. Oja, Independent component analysis. John Wiley and Sons, (2001).


[6] A. J. Bell and T. J. Sejnowski, An information-maximization approach to blind separation and blind deconvolution, Neural Computation, vol. 7, no. 6, p.1129–1159, (1995).

DOI: 10.1162/neco.1995.7.6.1129


[7] T. Lee, Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources, Neural Computation, vol. 11, no. 2, p.417–441, (1999).

DOI: 10.1162/089976699300016719


[8] S. Amari, Natural gradient works efficiently in learning, Neural Computation, vol. 10, no. 2, p.251–276, (1998).

DOI: 10.1162/089976698300017746


[9] Z. He, S. Xie, L. Zhang, and A. Cichocki, A note on Lewicki-Sejnowski gradient for learning overcomplete representations, Neural computation, vol. 20, no. 3, p.636–643, (2008).

DOI: 10.1162/neco.2007.07-06-296


[10] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley-Interscience New York, (2006).


[11] B. W. Silverman, Kernel density estimation using the fast Fourier transform, Applied Statistics, vol. 31, p.93–99, (1982).

DOI: 10.2307/2347084


[12] A. Yeredor, Blind separation of Gaussian sources via second-order statistics with asymptotically optimal weighting, IEEE Signal Processing Letters, vol. 7, no. 7, p.197–200, (2000).

DOI: 10.1109/97.847367


[13] E. Doron and A. Yeredor, Asymptotically optimal blind separation of parametric Gaussian sources, Independent Component Analysis and Blind Signal Separation, p.390–397, (2004).

DOI: 10.1007/978-3-540-30110-3_50


[14] P. Tichavsky, E. Doron, A. Yeredor, and J. Nielsen, A computationally affordable implementation of an asymptotically optimal BSS algorithm for AR sources, Proc. EUSIPCO-2006, p.4–8.


[15] A. Hyvarinen, Fast and robust fixed-point algorithms for independent component analysis, IEEE Transactions on Neural Networks, vol. 10, no. 3, p.626–634, (1999).

DOI: 10.1109/72.761722


[16] S. Amari, A. Cichocki, and H. H. Yang, A new learning algorithm for blind signal separation, in Advances in Neural Information Processing Systems, D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, Eds., vol. 8. MIT Press, 1996, p.757–763.
