A Novel Real-Time Facial Expression Capturing Method


Abstract:

In this paper, we propose a new method for capturing facial expressions in real time. The proposed method consists of two processes: 1) an offline process and 2) an online process. The offline process uses a human face database to build a 2D face model and a 3D shape model, and then trains an expression feature model. The online process extracts feature points from face images and recognizes the facial expression with an SVM classifier trained during the offline process. The main contributions of our method are an effective face detection approach and an optimal evaluation method for facial expression recognition. Experimental results show that our approach can capture facial expressions accurately in real time.
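The abstract gives no implementation details, but the offline-train / online-classify split it describes can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example using scikit-learn: the feature-vector layout (flattened landmark coordinates), the expression labels, and the SVM hyperparameters are assumptions made for illustration only, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code): offline SVM training on
# landmark-derived feature vectors, then online prediction per frame.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# --- Offline process (illustrative) ---
# X_train: one row per face image, each row a flattened vector of tracked
# feature-point coordinates (e.g. 68 landmarks -> 136 values); y_train:
# expression labels. Random placeholders stand in for the face database.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 136))
y_train = rng.choice(["happy", "sad", "neutral"], size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)  # train the expression classifier offline

# --- Online process (illustrative) ---
# In the real system these points would come from the face detector and the
# fitted 2D/3D face model for the current video frame.
x_frame = rng.normal(size=(1, 136))
print(clf.predict(x_frame)[0])  # predicted expression label for the frame
```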


Info:


Pages:

826-830


Online since:

January 2013



Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved

