Some Research on the Fusion of Virtual Scene and Video Image

Abstract:

Fusing a three-dimensional virtual scene and a video image into a seamless whole is a key technology in augmented reality, and it raises a number of problems that must be resolved. This paper investigates two of them: occlusion and illumination consistency between virtual and real objects. For occlusion, an approach to multilayer occlusion is presented. A special scene graph is designed, comprising dedicated nodes termed EMO nodes; according to the location of the real moving object, the corresponding EMO node is activated in real time, thereby realizing multilayer occlusion. For illumination consistency, two spheres with Lambertian surfaces are placed in the real scene. Through image analysis and three-dimensional analytic geometry, the illumination parameters of the real scene, which describe a point light source and ambient light, are estimated automatically. Using these estimated parameters, the synthetic scene is relit by inserting appropriate virtual light sources into the scene graph. This work explores an image-based, post-processing approach to rapidly improving illumination consistency in augmented reality.
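
The preview itself contains no source code; as a rough illustration of the occlusion mechanism the abstract describes, the sketch below models EMO nodes as depth-layered occluders in a toy scene graph. The class names, the depth-threshold layers, and the activation rule are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only: EMONode, SceneGraph, and the depth-layer
# activation rule are assumptions inferred from the abstract.

from dataclasses import dataclass, field
from typing import List


@dataclass
class EMONode:
    """Scene-graph node holding the occluder geometry of one depth layer.

    When active, its (invisible) geometry would be written to the depth
    buffer so that virtual objects behind the real moving object are hidden.
    """
    name: str
    near: float   # near bound of the layer's depth range (metres)
    far: float    # far bound of the layer's depth range
    active: bool = False


@dataclass
class SceneGraph:
    emo_nodes: List[EMONode] = field(default_factory=list)

    def update(self, real_object_depth: float) -> None:
        """Activate the EMO node whose layer contains the tracked real
        moving object and deactivate all others, once per frame."""
        for node in self.emo_nodes:
            node.active = node.near <= real_object_depth < node.far


# Usage: three layers; a person tracked 2.4 m from the camera activates
# only the middle layer's occluder.
graph = SceneGraph([EMONode("layer0", 0.0, 1.5),
                    EMONode("layer1", 1.5, 3.0),
                    EMONode("layer2", 3.0, 6.0)])
graph.update(real_object_depth=2.4)
print([(n.name, n.active) for n in graph.emo_nodes])
```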
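
The illumination-estimation step can be sketched in the same spirit. On a Lambertian sphere, image intensity varies with the cosine between the surface normal and the light direction, so the brightest pixel marks the point whose normal points at the light; each sphere thus yields a direction ray, and intersecting the rays from the two spheres locates the point light source. The helpers below assume roughly orthographic projection and use hypothetical variable names; they sketch the geometric idea under those assumptions, not the paper's actual algorithm.

```python
import numpy as np


def light_direction_from_sphere(center_xy, radius_px, brightest_xy):
    """Light direction from one Lambertian sphere: the unit surface normal
    at the brightest pixel, recovered from its 2-D offset from the sphere
    centre under an (assumed) orthographic projection."""
    nx = (brightest_xy[0] - center_xy[0]) / radius_px
    ny = (brightest_xy[1] - center_xy[1]) / radius_px
    nz = np.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))  # unit-sphere constraint
    n = np.array([nx, ny, nz])
    return n / np.linalg.norm(n)


def intersect_rays(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two non-parallel rays
    p1 + t*d1 and p2 + s*d2; used to place the point light source, since
    the two sphere rays rarely intersect exactly because of noise."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = p1 - p2
    b = d1 @ d2
    d, e = d1 @ w, d2 @ w
    denom = 1.0 - b * b          # nonzero for non-parallel unit directions
    t = (b * e - d) / denom
    s = (e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))


# Usage with made-up measurements: two spheres, each giving a ray from its
# known 3-D centre toward the light (the centre-to-surface offset is
# neglected for this sketch).
l1 = light_direction_from_sphere((320, 240), 50, (350, 215))
l2 = light_direction_from_sphere((450, 250), 50, (470, 228))
light_pos = intersect_rays(np.array([0.0, 0.0, 0.0]), l1,
                           np.array([0.4, 0.0, 0.0]), l2)
print(light_pos)
```

The ambient term can then be estimated from the intensity of sphere regions facing away from the recovered light direction, completing the point-light-plus-ambient model the abstract describes.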

Info:

Pages: 1-6

Online since: June 2011

Copyright: © 2011 Trans Tech Publications Ltd. All Rights Reserved
