Facial Expression Recognition Based on Fused Spatio-Temporal Features


Abstract:

Although spatio-temporal (ST) features have recently been developed and shown to be effective for facial expression and behavior recognition in videos, the ST method represents each cuboid by directly flattening it into a vector and using that vector as the recognition feature, which makes the result potentially sensitive to small cuboid perturbations or noise. To overcome this drawback of spatio-temporal features, we propose a novel method, fused spatio-temporal features (FST), which uses separable linear filters to detect interest points and fuses two cuboid representations, a local histogrammed gradient descriptor and the flattened cuboid vector, into the cuboid descriptor. The proposed FST method is robust to small cuboid perturbations and noise while preserving both spatial and temporal positional information. Experimental results on two video-based facial expression databases demonstrate the effectiveness of the proposed method.
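As a rough illustration of the fusion step described in the abstract, the sketch below builds both cuboid representations, a flattened vector and a local gradient-orientation histogram, and concatenates them into one descriptor. This is a minimal numpy sketch under assumed parameters (8 orientation bins, L2 normalization, a 5x9x9 cuboid), not the authors' implementation; the interest-point detection with separable linear filters and the exact descriptor details of FST are not reproduced here.

```python
import numpy as np

def flattened_descriptor(cuboid):
    # Baseline ST descriptor: flatten the cuboid into one long vector.
    # Preserves spatial/temporal positions, but is sensitive to small
    # perturbations or noise, the drawback noted in the abstract.
    v = cuboid.ravel().astype(float)
    return v / (np.linalg.norm(v) + 1e-8)

def gradient_histogram_descriptor(cuboid, n_bins=8):
    # Local histogrammed gradient descriptor: bin per-frame spatial
    # gradient orientations, weighted by magnitude. Robust to small
    # shifts, but discards positional information.
    c = cuboid.astype(float)
    gy, gx = np.gradient(c, axis=(1, 2))        # gradients along H and W
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                    # orientation in [-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-8)

def fused_descriptor(cuboid, n_bins=8):
    # Fusion idea: concatenate both representations so the final feature
    # keeps positional detail and gains robustness from the histogram.
    return np.concatenate([flattened_descriptor(cuboid),
                           gradient_histogram_descriptor(cuboid, n_bins)])

cuboid = np.random.default_rng(0).random((5, 9, 9))  # T x H x W patch
d = fused_descriptor(cuboid)
print(d.shape)  # (413,) = 5*9*9 flattened values + 8 histogram bins
```

In a full pipeline, such descriptors would be extracted at detected interest points and fed to a classifier; the two parts could also be weighted before concatenation, a detail left open here.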



Pages:

3780-3785


Online since:

August 2013


Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved

