Omni-Directional Vision System for Mobile Robot Using Structured Lights

Abstract:

Vision systems can obtain rich environmental information, and vision is one of the most important sensing modalities for mobile robot navigation and positioning. An omni-directional vision system using laser illumination is presented. The system obtains the distance to the obstacles around the mobile robot from a single image. The principle of the vision system is expounded, and the mapping relationship between the image plane and the laser plane is derived from an analysis of the system model. The system combines the advantages of omni-directional vision and structured-light vision, such as wide-range environmental sensing, high precision, and robustness. Experimental results on distance measurement and map construction verify the vision system.
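
As an illustration of the image-plane-to-laser-plane mapping mentioned above, the sketch below triangulates the range and bearing of a laser point from a single omni-directional image. It is only a minimal sketch under assumed conditions, not the paper's actual model: it assumes a single-viewpoint catadioptric camera whose calibration expresses the viewing ray's elevation angle as a polynomial in the image radius, and a horizontal laser plane at a known vertical offset below the viewpoint. The function name, calibration polynomial, and all numeric values are illustrative assumptions.

```python
import numpy as np

def pixel_to_range_bearing(u, v, cx, cy, poly_coeffs, laser_plane_drop):
    """Map an image point on the laser stripe to (range, bearing).

    u, v             : pixel coordinates of the detected laser point
    cx, cy           : principal point (image centre) in pixels
    poly_coeffs      : assumed calibration polynomial, elevation = f(image radius)
    laser_plane_drop : assumed vertical offset (m) from the viewpoint down to
                       the laser plane
    """
    # Radial distance of the pixel from the image centre.
    r_img = np.hypot(u - cx, v - cy)

    # Elevation angle of the viewing ray below the horizontal (from calibration).
    theta = np.polyval(poly_coeffs, r_img)

    # Intersect the viewing ray with the laser plane: horizontal range by triangulation.
    rng = laser_plane_drop / np.tan(theta)

    # Bearing of the obstacle around the robot follows from the pixel's angular position.
    bearing = np.arctan2(v - cy, u - cx)
    return rng, bearing

if __name__ == "__main__":
    # Hypothetical calibration and geometry values, for illustration only.
    rng, bearing = pixel_to_range_bearing(420, 310, cx=320, cy=240,
                                          poly_coeffs=[1.2e-3, 0.05],
                                          laser_plane_drop=0.25)
    print(f"range = {rng:.2f} m, bearing = {np.degrees(bearing):.1f} deg")
```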

Info:

Pages: 114-120

Online since: February 2013

Copyright: © 2013 Trans Tech Publications Ltd. All Rights Reserved
