Efficient Feature Representation for Human Action Recognition in DS&SP

Abstract:

Accurate human action recognition with good diversity typically requires a large, computation-intensive representation, owing to the complexity and variability of actions and scenarios. In this paper, an efficient combined action representation is proposed to resolve the dilemma between accuracy and diversity. Two action features are extracted from a Kinect sensor and combined: the silhouette and the 3D skeleton. An improved Histogram of Oriented Gradients, named Interest-HOG, is proposed for silhouette representation, while the angles between skeleton points are computed as the 3D representation. Kernel Principal Component Analysis (KPCA) is also applied bidirectionally to the Interest-HOG descriptor to obtain a concise, normalized vector of the same length as the 3D one, so that the two can be combined successfully. A depth dataset named DS&SP, comprising 10 kinds of actions performed by 12 persons in 4 scenarios, is built as the benchmark for our approach, on which a Support Vector Machine (SVM) is employed for training and testing. Experimental results show that our approach performs well in accuracy, efficiency, and robustness to self-occlusion.
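As an illustration of the 3D skeleton representation described above, the angle at a joint can be computed from the positions of three skeleton points. This is a minimal sketch, not the paper's exact feature set: the joint names and coordinates below are hypothetical, and which point triples the authors actually use is not specified in the abstract.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (in radians) at joint b, formed by the vectors b->a and b->c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Example: elbow angle from hypothetical shoulder/elbow/wrist coordinates
shoulder, elbow, wrist = [0, 1, 0], [0, 0, 0], [1, 0, 0]
angle = joint_angle(shoulder, elbow, wrist)  # perpendicular limbs -> pi/2
```

A fixed-length vector of such angles over a chosen set of joint triples is scale- and translation-invariant by construction, which is one plausible reason angle features pair well with a KPCA-compressed silhouette descriptor of matching length.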

Info:

Periodical:

Advanced Materials Research (Volumes 889-890)

Pages:

1057-1064

Online since:

February 2014

Copyright:

© 2014 Trans Tech Publications Ltd. All Rights Reserved
