Interactive System Based on Leap Motion for 3D Medical Model

Abstract:

An interactive visualization of a patient's 3D anatomical model often serves as a helpful guide for doctors during complex surgery. However, the requirement to maintain a sterile operating environment imposes limitations: traditional human–computer interaction tools (mouse and keyboard) must be disinfected regularly and cannot be used during the procedure. This study proposes a noncontact, gesture-controlled medical model system based on Leap Motion. Gestures are recognized and localized through the binocular camera assembled in the Leap Motion device, without a mouse or keyboard, so the model can be controlled directly by hand gestures to perform rotation, zoom, and other operations. A 3D heart model is combined with pseudo-color processing to enhance the observability of its 3D structure, and gesture recognition is then used to rotate and zoom the rendered model. Experimental results show that the system recognizes circle, swipe, and other actions with complete accuracy. In addition, rotation is proposed as a new gesture that can be recognized stably and that benefits the usability, intuitiveness, and interaction efficiency of future system designs. Thanks to its stable recognition and small space footprint, the system is well suited to sterile operating environments.
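
As a rough illustration of the interaction described above, the Python sketch below shows one way per-frame hand-tracking values could drive rotation and zoom of a 4x4 model matrix. The palm-displacement and two-hand spread-ratio inputs are hypothetical stand-ins for tracker output and do not reflect the authors' implementation.

import numpy as np

def rotation_y(angle_rad):
    # 4x4 rotation about the vertical axis
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [-s, 0.0, c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def uniform_scale(factor):
    # 4x4 uniform scaling matrix
    m = np.eye(4)
    m[:3, :3] *= factor
    return m

def update_model(model, palm_dx_mm, spread_ratio, rot_gain=0.01):
    # palm_dx_mm:   horizontal palm displacement since the previous frame (assumed tracker output)
    # spread_ratio: current/previous distance between the two hands (assumed tracker output)
    model = rotation_y(palm_dx_mm * rot_gain) @ model  # swipe maps to rotation
    model = uniform_scale(spread_ratio) @ model        # spread/pinch maps to zoom
    return model

# Example: start from the identity and apply one frame of interaction.
model = np.eye(4)
model = update_model(model, palm_dx_mm=12.0, spread_ratio=1.05)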

Info:

Pages: 131-140

Online since: June 2021

Copyright: © 2021 Trans Tech Publications Ltd. All Rights Reserved
