Machine Vision-Based Automatic Placement System for Solenoid Housing


Abstract:

This paper proposes a machine vision-based, servo-controlled delta robotic system for solenoid housing placement. The system consists of a charge-coupled device (CCD) camera and a delta robot. To begin the placement process, solenoid housing targets within the camera's field of view are identified and used to guide the delta robot to the grasping zone through a calibrated homography transformation. Image preprocessing is then applied to determine the orientation angle of the solenoid housing so that the target object can be rotated for assembly with the solenoid coil. Finally, the solenoid housing is grasped automatically and placed in the collecting box. Experimental results demonstrate that the proposed system reduces operator fatigue and achieves high-quality placements.
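As an illustrative aid (not part of the original paper), the sketch below shows how the two vision steps described in the abstract might be realized with OpenCV in Python: a calibrated planar homography maps the detected housing's pixel centroid into robot work-plane coordinates, and the minimum-area rectangle fitted to its contour provides the rotation angle used to orient the gripper. All names, calibration points, and thresholds (e.g. locate_housing, pixel_to_robot, PIXEL_POINTS, ROBOT_POINTS, housing.png) are assumptions for illustration; the paper does not specify its implementation.

"""
Illustrative sketch (assumed, not from the paper): map a detected solenoid
housing from camera pixel coordinates to delta-robot workspace coordinates
via a calibrated homography, and estimate its rotation angle from its contour.
Assumes OpenCV 4.x.
"""
import cv2
import numpy as np

# Hypothetical calibration correspondences: pixel positions of reference
# markers and their known (x, y) positions in the robot work plane (mm).
PIXEL_POINTS = np.array([[102, 88], [517, 95], [509, 402], [110, 396]], dtype=np.float32)
ROBOT_POINTS = np.array([[0, 0], [200, 0], [200, 150], [0, 150]], dtype=np.float32)

# Planar homography from the image plane to the robot work plane.
H, _ = cv2.findHomography(PIXEL_POINTS, ROBOT_POINTS)


def locate_housing(gray):
    """Return (pixel centroid, orientation angle in degrees) of the largest
    bright blob, assumed here to be the solenoid housing."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    housing = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(housing)  # angle of the fitted box
    return (cx, cy), angle


def pixel_to_robot(pixel_xy):
    """Map a pixel coordinate to robot work-plane coordinates using H."""
    src = np.array([[pixel_xy]], dtype=np.float32)  # shape (1, 1, 2)
    dst = cv2.perspectiveTransform(src, H)
    return dst[0, 0]                                # (x, y) in mm


if __name__ == "__main__":
    frame = cv2.imread("housing.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    if frame is not None:
        result = locate_housing(frame)
        if result is not None:
            (cx, cy), angle = result
            x_mm, y_mm = pixel_to_robot((cx, cy))
            print(f"Grasp at ({x_mm:.1f}, {y_mm:.1f}) mm, rotate gripper by {angle:.1f} deg")

In practice the calibration correspondences would come from fiducial markers measured in both the image and the robot base frame, and Otsu thresholding is only one plausible segmentation choice for recovering the housing's orientation.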


Info:

Pages: 9-13

Online since: June 2015

Copyright: © 2015 Trans Tech Publications Ltd. All Rights Reserved

