Visual-Servoing Control of Robot Hand with Estimation of Full Articulation of Human Hand

Abstract:

Depth sensors and depth cameras have become available at reasonable cost in recent years. Owing to the large dispersion of the depth values output by such cameras, however, changes in pose cannot be used directly for complicated hand pose estimation. The authors therefore propose a visual-servoing-controlled robotic hand driven by high-speed RGB cameras. Each of the two cameras has its own database in the system. Each data set contains proportional information about the hand image, image features for matching, and joint-angle data to be output as the estimated result. Once sequential hand images are recorded with the two high-speed RGB cameras, the system first selects the database of the camera whose recorded images contain the larger hand region. Second, a coarse screening is carried out according to the proportional information of the hand image, which roughly corresponds to wrist rotation and thumb or finger extension. Third, a detailed similarity search is performed among the selected candidates. The estimated results are transmitted to a robot hand so that the operator's motions are reproduced by the robot without time delay.
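The coarse-then-fine database search described above can be sketched as a two-stage nearest-neighbor lookup. The following is a minimal illustrative sketch, not the authors' implementation: the function name, array layouts, Euclidean distances, and the candidate-count parameter `coarse_k` are all assumptions made for the example.

```python
import numpy as np

def estimate_hand_pose(query_props, query_feats,
                       db_props, db_feats, db_angles, coarse_k=50):
    """Hypothetical two-stage database search for hand pose estimation.

    db_props  : (N, P) proportional information per stored hand image
                (roughly reflecting wrist rotation / finger extension)
    db_feats  : (N, F) image features used for the detailed match
    db_angles : (N, J) joint-angle sets returned as the estimate
    """
    # Stage 1 -- coarse screening: keep the coarse_k database entries
    # whose proportional information is closest to the query's.
    coarse_dist = np.linalg.norm(db_props - query_props, axis=1)
    candidates = np.argsort(coarse_dist)[:coarse_k]

    # Stage 2 -- detailed search: among the surviving candidates, pick
    # the entry with the most similar image features.
    fine_dist = np.linalg.norm(db_feats[candidates] - query_feats, axis=1)
    best = candidates[np.argmin(fine_dist)]

    # The matched entry's joint angles are the estimated pose, ready to
    # be transmitted to the robot hand.
    return db_angles[best]
```

The point of the coarse stage is that the cheap proportional comparison prunes most of the database, so the more expensive feature-level similarity is computed only for a short candidate list, which is what makes per-frame matching feasible at high camera rates.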

Info:

Pages: 728-735

Online since: August 2014

Copyright: © 2015 Trans Tech Publications Ltd. All Rights Reserved
