A New Type of Deaf-Mute Sign Language Recognition System Based on the Mobile Communication Platform and Terminal Equipment

Abstract:

This paper presents a new type of sign language recognition system for deaf-mute users that combines a mobile communication platform with mobile communication terminal equipment. The system implements vision-based sign language recognition and translation. A Standard Sign Language Database is established within the system; by training on this database, the system can recognize the sign languages of multiple nations and languages. To improve recognition accuracy for similar signs, an improved HMM-based sign language recognition method is used. On top of the visual approach, the system also incorporates the hand-angle information traditionally captured by a data glove, so that both recognition technologies are taken into account. Because the system can be implemented on ordinary mobile terminal equipment, a low-cost, widely deployable sign language recognition device can be realized, allowing deaf-mute users to communicate without barriers anytime and anywhere.
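The HMM-based recognition described above can be illustrated with a minimal sketch: each sign is modeled by its own HMM, and an observed feature sequence is classified by the model that assigns it the highest likelihood (computed with the scaled forward algorithm). The model parameters, sign names, and discrete observation symbols below are toy values for illustration only, not the paper's actual improved HMM or trained database.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM).

    obs -- sequence of discrete observation symbol indices
    pi  -- initial state probabilities, shape (N,)
    A   -- state transition matrix, shape (N, N)
    B   -- emission probabilities, shape (N, M)
    """
    alpha = pi * B[:, obs[0]]          # initialize forward variable
    c = alpha.sum()                    # scaling factor avoids underflow
    log_lik = np.log(c)
    alpha /= c
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]  # recursion step
        c = alpha.sum()
        log_lik += np.log(c)
        alpha /= c
    return log_lik

# Toy two-state HMMs for two hypothetical signs (parameters invented).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B_hello  = np.array([[0.8, 0.1, 0.1],   # sign "hello" mostly emits symbols 0, 1
                     [0.2, 0.7, 0.1]])
B_thanks = np.array([[0.1, 0.1, 0.8],   # sign "thanks" mostly emits symbol 2
                     [0.1, 0.2, 0.7]])

def classify(obs):
    """Pick the sign whose HMM best explains the observation sequence."""
    scores = {"hello": forward_log_likelihood(obs, pi, A, B_hello),
              "thanks": forward_log_likelihood(obs, pi, A, B_thanks)}
    return max(scores, key=scores.get)
```

In a full system, the discrete symbols would be replaced by quantized visual features fused with the data-glove angle information, and one trained HMM per sign in the Standard Sign Language Database would compete in the same maximum-likelihood decision.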

Info:

Periodical:

Advanced Materials Research (Volumes 734-737)

Pages:

2880-2886

Online since:

August 2013

Copyright:

© 2013 Trans Tech Publications Ltd. All Rights Reserved
