L1-Norm Sparse Learning and its Application

Abstract:

The need for sparse representations has attracted growing research interest. Researchers have developed many approaches for encouraging sparse solutions (such as non-negativity constraints, l1-norm sparsity regularization, and sparse Bayesian learning with independent Gaussian priors) and have established conditions under which feasible solutions can be found by these approaches. This paper combines l1-norm regularization with Bayesian learning into a method called l1-norm sparse Bayesian learning, inspired by the RVM (relevance vector machine). L1-norm sparse Bayesian learning has found applications in many fields, such as MCR (multivariate curve resolution). We propose a new method, called BSMCR (Bayesian sparse MCR), to enhance the quality of the resolved result.
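For illustration only, the sketch below shows the kind of l1-norm sparsity penalty the abstract refers to, applied to an ordinary least-squares fit (the lasso of [13]) and solved by iterative soft-thresholding. It is not the authors' BSMCR algorithm; the function names, parameters, and toy data are hypothetical.

import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrinks each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_lasso(A, y, lam, n_iter=500):
    # Minimize 0.5*||A w - y||^2 + lam*||w||_1 by iterative soft-thresholding (ISTA).
    L = np.linalg.norm(A, ord=2) ** 2          # Lipschitz constant of the smooth data-fit term
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ w - y)               # gradient of the least-squares term
        w = soft_threshold(w - grad / L, lam / L)
    return w

# Toy demonstration: recover a sparse coefficient vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 120))
w_true = np.zeros(120)
w_true[[3, 40, 77]] = [1.5, -2.0, 0.8]
y = A @ w_true + 0.01 * rng.standard_normal(60)
w_hat = ista_lasso(A, y, lam=0.1)
print("indices of nonzero estimates:", np.flatnonzero(np.abs(w_hat) > 1e-3))

In the RVM-style sparse Bayesian view [9], the fixed penalty lam is replaced by per-coefficient hyperparameters that are re-estimated from the data, which is the combination of l1-norm regularization and Bayesian learning that the abstract describes.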

Info:

Periodical:

Pages: 379-385

Online since: August 2011

Copyright: © 2011 Trans Tech Publications Ltd. All Rights Reserved

References:

[1] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401: 788–791, (1999). DOI: 10.1038/44565

[2] Bernhard Schölkopf and Alex Smola. Learning with Kernels. MIT Press, Cambridge, MA, (2002).

[3] Simon Perkins, Kevin Lacker, and James Theiler. Grafting: fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3: 1333–1356, (2003).

[4] Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In Proceedings of the Neural Information Processing Systems (NIPS) 19, (2007).

[5] M. R. Osborne, Brett Presnell, and B. A. Turlach. A new approach to variable selection in least squares problems. IMA Journal of Numerical Analysis, 20(3): 389–403, (2000). DOI: 10.1093/imanum/20.3.389

[6] Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. Annals of Statistics, 32: 407–499, (2004). DOI: 10.1214/009053604000000067

[7] D. M. Malioutov, M. Cetin, and A. S. Willsky. Homotopy continuation for sparse signal representation. In Proc. ICASSP, (2005). DOI: 10.1109/icassp.2005.1416408

[8] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Scientific Computing, 20(1): 33–61, (1998). DOI: 10.1137/s1064827596304010

[9] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1: 211–244, (2001).

[10] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4: 415–447, (1992).

[11] D. Foresee and M. Hagan. Gauss-Newton approximation to Bayesian learning. In Proceedings of the 1997 International Joint Conference on Neural Networks, pages 1930–1935, (1997). DOI: 10.1109/icnn.1997.614194

[12] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381: 607–609, (1996). DOI: 10.1038/381607a0

[13] Robert Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1): 267–288, (1996). DOI: 10.1111/j.2517-6161.1996.tb02080.x

[14] E. J. Candès, M. Wakin, and S. Boyd. Enhancing sparsity by reweighted l1 minimization. Technical Report, California Institute of Technology, (2007).

[15] S. J. Wright. Primal-Dual Interior-Point Methods. Philadelphia, PA: SIAM, (1997).

[16] Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, (2003).