Controlling Robots Using Thai Speech Recognition

Abstract:

Reliable command recognition is essential for a robotic device to achieve optimum control. In this research, we present robotic motions controlled by speech recognition techniques using Thai Speech Recognition (TSR) commands. The command words are Sai, Khwa, Na, Lang, Khun, Long, and Yood, which correspond to the English commands turn left, turn right, forwards, backwards, upwards, downwards, and stop, respectively. The speech commands are speaker-independent, which makes the system highly beneficial for general and practical use. The system comprises three main stages: pre-processing, the Discrete Fourier Transform (DFT), and a Back-Propagation Neural Network (BPN). The experimental results show average recognition accuracies of 71.00% for female commands and 70.00% for male commands.
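The three-stage pipeline named in the abstract (pre-processing, DFT feature extraction, back-propagation training) can be illustrated with a minimal sketch. All sizes, the synthetic "commands", and the network shape below are illustrative assumptions, not the paper's actual configuration or data:

```python
import numpy as np

rng = np.random.default_rng(0)

N_FFT = 64       # assumed frame length for the DFT
N_CLASSES = 7    # Sai, Khwa, Na, Lang, Khun, Long, Yood

def preprocess(signal):
    """Stage 1: amplitude-normalize one frame and apply a Hamming window."""
    frame = signal[:N_FFT]
    frame = frame / (np.max(np.abs(frame)) + 1e-9)
    return frame * np.hamming(N_FFT)

def dft_features(frame):
    """Stage 2: magnitude spectrum (positive frequencies only) as features."""
    return np.abs(np.fft.rfft(frame))

# Synthetic stand-in for Thai speech: each "command" is a sine at a
# distinct frequency plus noise (a real system would use recordings).
def make_example(cls):
    t = np.arange(N_FFT)
    return np.sin(2 * np.pi * (cls + 2) * t / N_FFT) \
        + 0.1 * rng.standard_normal(N_FFT)

X = np.array([dft_features(preprocess(make_example(c)))
              for c in range(N_CLASSES) for _ in range(20)])
X = X / X.max()                       # scale features to [0, 1]
y = np.repeat(np.arange(N_CLASSES), 20)

# Stage 3: one-hidden-layer network trained with plain back-propagation.
H = 16
W1 = rng.standard_normal((X.shape[1], H)) * 0.1
W2 = rng.standard_normal((H, N_CLASSES)) * 0.1

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for epoch in range(500):
    h = np.tanh(X @ W1)               # forward pass
    p = softmax(h @ W2)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1   # d(cross-entropy)/d(logits)
    grad /= len(y)
    dW2 = h.T @ grad                  # backward pass
    dW1 = X.T @ ((grad @ W2.T) * (1 - h ** 2))
    W1 -= 1.0 * dW1                   # gradient-descent updates
    W2 -= 1.0 * dW2

pred = np.argmax(softmax(np.tanh(X @ W1) @ W2), axis=1)
accuracy = (pred == y).mean()
```

Because each synthetic class occupies a distinct DFT bin, the features are nearly linearly separable and the network converges quickly; real speech would need framing over whole utterances and far more training data.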

Info:

Periodical:

Advanced Materials Research (Volumes 931-932)

Pages:

1285-1290

Online since:

May 2014

Copyright:

© 2014 Trans Tech Publications Ltd. All Rights Reserved
