On the Development of a Voice and Gesture Based HMI for the Control of a Mobile Robot

Abstract:

This paper presents the development of a voice and gesture based human-machine interface (HMI) for the control of a mobile robot. The application uses the Microsoft Kinect sensor and its Visual Studio C# libraries for voice and gesture recognition. Once a specific voice command or gesture is recognized, the controller (a PC or laptop) sends the corresponding command to a microcontroller (an ATmega2560 on an Arduino Mega board). The microcontroller is mounted on the mobile robot and drives the robot's motors. Some limitations of the Kinect sensor that should be taken into consideration when using it are also presented.
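The control chain described in the abstract (recognized command on the PC, encoded command sent to the microcontroller) can be sketched as follows. This is a hypothetical illustration, not the paper's actual protocol: the command names, the single-byte opcodes, and the `encode_command` helper are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the PC-side command dispatch: a recognized voice
# command or gesture is translated into one byte that the ATmega2560
# firmware would interpret as a motor command. The mapping below is an
# assumption; the paper does not specify its serial protocol.
COMMANDS = {
    "forward": b"F",
    "backward": b"B",
    "left": b"L",
    "right": b"R",
    "stop": b"S",
}

def encode_command(recognized: str) -> bytes:
    """Return the byte to send for a recognized command; any unknown
    input falls back to the safe 'stop' opcode."""
    return COMMANDS.get(recognized.lower(), COMMANDS["stop"])

# On a real setup these bytes would be written to the Arduino's serial
# port (e.g. with a serial library); here we only show the encoding step.
print(encode_command("Forward"))  # b'F'
print(encode_command("wave"))     # b'S' (unrecognized -> stop)
```

Keeping the recognition layer and the wire format separated this way means the Kinect-side code can change without touching the microcontroller firmware.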

Info:

Pages: 201-204

Online since: May 2015

Copyright: © 2015 Trans Tech Publications Ltd. All Rights Reserved
