Control Hardware Design of a Robotic Follow Spot System


Abstract:

A follow spot, also known as a spotlight, is a stage lighting instrument that projects a beam of light onto a specific, moving subject, typically an actor. The developed system pans and tilts the follow spot using its actuator mechanism, manual operating mechanism, tracking sensors, and control system. The system operates in four modes: manual operation, pre-defined motion saving/playing, auto tracking, and console operation. In manual operation mode, a human operator drives the robotic follow spot through a force sensor (load cell); this mode is used in emergency situations and for generating pre-defined motions. The pre-defined motion saving/playing mode replays motions generated by a human operator; a specific trajectory, e.g. a circle or a rectangle, can be saved and played back for later use. The auto tracking mode makes the beam follow the actor, using a vision system to detect the actor's location. In console operation mode, the robotic follow spot communicates, through the DMX512 protocol, with the lighting console that controls the various lighting instruments in the theater. This paper explains in detail the system design that realizes the above four modes. In addition, experimental results are presented as snapshot photos.
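The abstract outlines a four-mode control structure. As a rough illustration only (not code from the paper), the C sketch below shows one way controller firmware might dispatch among those modes; the mode names follow the abstract, while every identifier and handler is a hypothetical placeholder.

/* Hypothetical sketch of a four-mode dispatcher for the follow spot
 * controller; mode names follow the abstract, everything else is assumed. */
#include <stdio.h>

typedef enum {
    MODE_MANUAL,        /* operator drives pan/tilt via the load cell */
    MODE_SAVE_PLAY,     /* record or replay a pre-defined trajectory */
    MODE_AUTO_TRACK,    /* follow the actor using the vision system */
    MODE_CONSOLE        /* accept commands from the lighting console over DMX512 */
} Mode;

/* Placeholder handlers; a real controller would convert its input source
 * into pan/tilt commands for the actuator mechanism here. */
static void run_manual(void)     { printf("manual: load-cell input -> pan/tilt\n"); }
static void run_save_play(void)  { printf("save/play: replay stored trajectory\n"); }
static void run_auto_track(void) { printf("auto track: vision target -> pan/tilt\n"); }
static void run_console(void)    { printf("console: DMX512 channels -> pan/tilt\n"); }

static void dispatch(Mode mode)
{
    switch (mode) {
    case MODE_MANUAL:     run_manual();     break;
    case MODE_SAVE_PLAY:  run_save_play();  break;
    case MODE_AUTO_TRACK: run_auto_track(); break;
    case MODE_CONSOLE:    run_console();    break;
    }
}

int main(void)
{
    /* In firmware this selection would run in a loop, driven by a mode
     * switch or by commands received from the console. */
    dispatch(MODE_AUTO_TRACK);
    return 0;
}

In the actual system, each handler would translate its input source (load cell, stored trajectory, vision tracker, or DMX512 console channels) into pan and tilt commands for the actuator mechanism.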


Info:


Pages: 1783-1787


Online since: January 2013


Copyright: © 2013 Trans Tech Publications Ltd. All Rights Reserved
