Learning a Discriminative Isotropic Space from Labeled and Unlabeled Data

Abstract:

The Euclidean distance measure is computationally simple and commonly used in classification. However, it does not exploit any discriminant information from the training samples, which is crucial to classification performance. In this paper, we learn a Discriminative Isotropic Space from both labeled and unlabeled data to improve classification accuracy as well as generalization ability. Our regularized objective function implicitly minimizes the mutual information between the input space and the transformed space. The algorithm has no local-optimum problem, and its closed-form solution is easily obtained. Experiments on real data sets demonstrate the efficacy of our method.
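
The preview does not give the paper's formulation, but a minimal sketch can illustrate one plausible shape for such a method: a linear transform obtained in closed form from a generalized eigenproblem that combines labeled scatter matrices with a graph-Laplacian regularizer built from the unlabeled data, after which the projected space is whitened to be isotropic. The function discriminative_isotropic_space, its parameters, and this specific objective are illustrative assumptions, not the authors' algorithm.

    # Hypothetical sketch (not the paper's actual method): a semi-supervised
    # linear transform with a closed-form solution, plus whitening so the
    # learned space is isotropic.
    import numpy as np
    from scipy.linalg import eigh

    def discriminative_isotropic_space(X_lab, y, X_unlab, k=5, reg=0.1, dim=2):
        X = np.vstack([X_lab, X_unlab])
        d = X.shape[1]

        # Within- and between-class scatter from the labeled samples.
        mu = X_lab.mean(axis=0)
        Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
        for c in np.unique(y):
            Xc = X_lab[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            diff = (mc - mu)[:, None]
            Sb += len(Xc) * (diff @ diff.T)

        # k-NN graph Laplacian over all samples (labeled + unlabeled)
        # serves as the smoothness regularizer on unlabeled data.
        n = X.shape[0]
        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, np.argsort(D2[i])[1:k + 1]] = 1.0
        A = np.maximum(A, A.T)
        L = np.diag(A.sum(axis=1)) - A

        # Closed-form solution via a generalized symmetric eigenproblem:
        # maximize between-class scatter while penalizing within-class
        # scatter and roughness over the data graph (no local optima).
        B = Sw + reg * (X.T @ L @ X) + 1e-6 * np.eye(d)
        evals, evecs = eigh(Sb, B)       # eigenvalues in ascending order
        W = evecs[:, -dim:][:, ::-1]     # top-'dim' discriminative directions

        # Whiten the projected data so the output space is isotropic
        # (identity covariance), making Euclidean distance well-behaved.
        Z = (X @ W) - (X @ W).mean(axis=0)
        C = np.cov(Z, rowvar=False)
        cvals, cvecs = np.linalg.eigh(C)
        Z = Z @ cvecs / np.sqrt(np.maximum(cvals, 1e-12))
        return W, Z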

Info:

Periodical:

Advanced Materials Research (Volumes 756-759)

Pages:

1819-1823

Online since:

September 2013

Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved
