Key Points of Human Body Location Based on Single Depth Map


Abstract:

This paper presents a method for locating the key points of a frontal human body from a single depth map. Human motion capture is a prerequisite for motion analysis and understanding, and it has wide application prospects. Previous approaches to capturing human motion suffer from several problems: they cannot initialize automatically, cannot recover from tracking failure, cannot handle occlusion, and impose many constraints on the participant. This article uses Microsoft's Kinect to acquire depth maps and takes a single map as input for locating the key points of the human body. First, because a depth map encodes distance, background segmentation can be performed easily. Next, the skeleton of the body's silhouette is extracted. Finally, using the inherent connectivity of the human body, the key points are located on the skeleton. Locating the key points from a single depth map directly solves the problems of automatic initialization and recovery. Because the depth map encodes distance as grayscale, the body region is easy to separate from the background, and the depth information also helps resolve occlusion. Using a depth map removes some constraints on the subject and reduces the influence of clothing, ambient lighting, and similar factors. Experiments show that the method locates the key points of a frontal standing human body accurately and handles some cases of occlusion. It is well suited for automatic initialization and self-recovery after tracking failure in a motion-tracking system.
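The first two stages described above, depth-threshold background segmentation followed by key-point extraction from the silhouette, can be illustrated with a minimal sketch. The code below is not the paper's implementation: it uses a synthetic depth array, an assumed near/far threshold band, and crude extremal-pixel heuristics (topmost foreground pixel as a head candidate, leftmost/rightmost as hand candidates) in place of the paper's full skeleton-based connectivity analysis.

```python
import numpy as np

def segment_foreground(depth, near, far):
    """Foreground mask: keep pixels whose depth lies in [near, far).
    A depth map encodes distance, so the subject can be isolated
    with a simple band threshold."""
    return (depth >= near) & (depth < far)

def extremal_keypoints(mask):
    """Crude key-point guesses from a frontal silhouette:
    topmost pixel ~ head, leftmost/rightmost pixels ~ hands.
    (The paper instead locates key points on the extracted skeleton.)"""
    ys, xs = np.nonzero(mask)
    head = (int(ys.min()), int(xs[ys.argmin()]))
    left = (int(ys[xs.argmin()]), int(xs.min()))
    right = (int(ys[xs.argmax()]), int(xs.max()))
    return {"head": head, "left_hand": left, "right_hand": right}

# Synthetic 8x8 "depth map": body at depth 100, background at 200.
depth = np.full((8, 8), 200, dtype=np.uint16)
depth[1:7, 3:5] = 100   # vertical torso strip
depth[3, 1:7] = 100     # horizontal arm strip
mask = segment_foreground(depth, near=50, far=150)
keypoints = extremal_keypoints(mask)
```

On this toy input the head candidate is the top of the torso strip and the hand candidates are the two arm endpoints; on real Kinect data the thresholds would come from the scene, and a thinning/skeletonization step would precede key-point search.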


Info:

Pages: 76-82

Online since: October 2012

Copyright: © 2012 Trans Tech Publications Ltd. All Rights Reserved
