Research on Image Feature Extraction Method Based on Orthogonal Projection Transformation of Multi-Task Learning Technology

Abstract:

When the number of labeled training samples is very small, very little sample information is available for learning, so the recognition rates of traditional image recognition methods are often unsatisfactory. To exploit related information that usually exists in other databases, which is helpful for feature extraction and can improve recognition rates, we apply multi-task learning to image feature extraction. Our approach is based on transferring the projection transformation across tasks. Experimental results on the public AR, FERET and CAS-PEAL databases demonstrate that the proposed approaches outperform comparable general feature extraction methods in classification performance.
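
The abstract does not spell out the algorithm, so the following is only a minimal Python sketch of the general idea it describes: learning an orthogonal projection from several related (auxiliary) databases and transferring it to a target task that has few labeled samples. The pooled-scatter formulation, the function names, and the choice of eigendecomposition are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch of transferring an orthogonal projection across related tasks.
# Assumptions (not from the paper): the shared projection is obtained from the
# pooled scatter of the auxiliary datasets, and orthogonality comes from using
# the eigenvectors of a symmetric matrix.
import numpy as np

def shared_orthogonal_projection(task_datasets, n_components):
    """Learn one orthogonal projection from several related datasets.

    task_datasets : list of arrays, each of shape (n_samples_i, n_features)
    n_components  : dimensionality of the shared subspace
    """
    n_features = task_datasets[0].shape[1]
    scatter = np.zeros((n_features, n_features))
    for X in task_datasets:
        Xc = X - X.mean(axis=0)          # center each auxiliary dataset
        scatter += Xc.T @ Xc             # pool information across tasks
    # Eigenvectors of a symmetric matrix are orthonormal, so W.T @ W = I.
    eigvals, eigvecs = np.linalg.eigh(scatter)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order]             # W: (n_features, n_components)

def transfer_projection(W, X_target):
    """Apply the projection learned on related tasks to the target images."""
    return (X_target - X_target.mean(axis=0)) @ W
```

In a small-sample setting, one would project both the few labeled target images and the test images with the shared W and then classify in the reduced space, e.g. with a nearest-neighbor rule; this is the usage pattern assumed here, not a reproduction of the paper's experiments.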

Info:

Periodical:

Advanced Materials Research (Volumes 760-762)

Pages:

1609-1614

Online since:

September 2013

Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved
