An Efficient Greedy EM Algorithm for Gaussian Mixture for Adaptive Model Selection Using the Kurtosis and Skewness Criterion

Abstract:

For Gaussian mixture learning, the expectation-maximization (EM) algorithm and its modified versions are widely used, but two major limitations remain: (i) the number of components, i.e., Gaussians, must be known in advance, and (ii) there is no generally accepted initialization method that prevents the algorithm from being trapped in a local maximum of the likelihood function. To overcome these weaknesses, we propose a greedy EM algorithm based on a kurtosis and skewness criterion. Specifically, we start with a single component and add one component at a time within the EM framework so as to decrease the kurtosis and skewness measure, which provides an efficient index of how well the Gaussian mixture model fits the sample data. In this way, the number of components is selected adaptively during EM learning, and the estimated parameters can escape from local maxima.
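To make the greedy growing loop concrete, below is a minimal Python sketch for 1-D data. It is an assumption-laden illustration, not the paper's implementation: EM is delegated to scikit-learn's GaussianMixture (refit from scratch at each candidate size rather than inserting a component into the current solution and continuing EM, as the paper does), and since the preview does not give the exact form of the criterion, the fit index is taken to be a responsibility-weighted sum of |skewness| + |excess kurtosis| over components.

```python
# Sketch of the greedy component-adding idea, under the assumptions stated above.
import numpy as np
from sklearn.mixture import GaussianMixture

def kurtosis_skewness_measure(gmm, x):
    """Responsibility-weighted |skewness| + |excess kurtosis|, summed over components.
    This specific formula is an assumption; the paper's measure may differ."""
    r = gmm.predict_proba(x)                  # (n, k) posterior responsibilities
    means = gmm.means_.ravel()                # (k,)
    stds = np.sqrt(gmm.covariances_.ravel())  # (k,) for 1-D data, 'full' covariances
    total = 0.0
    for j in range(r.shape[1]):
        w = r[:, j] / r[:, j].sum()           # normalized responsibilities
        z = (x.ravel() - means[j]) / stds[j]  # standardize w.r.t. component j
        skew = np.sum(w * z ** 3)
        kurt = np.sum(w * z ** 4) - 3.0       # excess kurtosis; 0 for a Gaussian
        total += abs(skew) + abs(kurt)
    return total

def greedy_em(x, max_components=10):
    """Grow the mixture one component at a time while the measure keeps decreasing."""
    x = x.reshape(-1, 1)
    best, best_score = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
        score = kurtosis_skewness_measure(gmm, x)
        if score >= best_score:               # measure stopped decreasing: stop
            break
        best, best_score = gmm, score
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(2, 0.5, 500)])
    model = greedy_em(x)
    print("selected components:", model.n_components)
```

Because each component of a well-fitted mixture should look locally Gaussian, its responsibility-weighted skewness and excess kurtosis should both be near zero; the loop therefore stops at the first mixture size for which adding a component no longer reduces the measure.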

Info:

Periodical: Advanced Materials Research (Volumes 452-453)

Pages: 1501-1506

Online since: January 2012

Copyright: © 2012 Trans Tech Publications Ltd. All Rights Reserved
