Some Research on the Fusion of Virtual Scene and Video Image
Fusing a three-dimensional virtual scene and a video image into a seamless whole is a key technology in augmented reality, and it involves a number of problems that need to be resolved. This paper investigates occlusion and illumination consistency between virtual and real objects. For occlusion, an approach for realizing multilayer occlusion is presented. A special scene graph is designed, comprising dedicated nodes called EMO nodes. According to the location of the real moving object, a different EMO node is activated in real time, thereby realizing multilayer occlusion. For illumination consistency, two spheres with Lambertian surfaces are placed in the real scene. Through image analysis and three-dimensional analytical geometry, the illumination parameters of the real scene, which describe a point light source and ambient light, are estimated automatically. Using the estimated parameters, the synthetic scene is relit by inserting appropriate virtual light sources into the scene graph. This work explores a post-processing, image-based approach to rapidly improving illumination consistency in augmented reality.
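The abstract does not give implementation details for the EMO nodes, but the described behavior (activating one occlusion-layer node per frame according to the real object's position) can be sketched as a switch-like scene graph. All names here (`EMONode`, `SceneGraph`, the depth-layer bounds) are illustrative assumptions, not the paper's actual data structures:

```python
# Hypothetical sketch of the EMO-node idea: each EMO node holds the virtual
# geometry for one occlusion layer, and only the node whose depth layer
# contains the real moving object is activated each frame.
from dataclasses import dataclass, field

@dataclass
class EMONode:
    """One occlusion layer: drawn with occlusion handling when active."""
    name: str
    depth_range: tuple  # (near, far) bounds in camera space; assumed layout
    active: bool = False

@dataclass
class SceneGraph:
    emo_nodes: list = field(default_factory=list)

    def update(self, real_object_depth: float):
        """Activate the EMO node whose depth layer contains the real object."""
        activated = None
        for node in self.emo_nodes:
            near, far = node.depth_range
            node.active = near <= real_object_depth < far
            if node.active:
                activated = node.name
        return activated

graph = SceneGraph([
    EMONode("layer0", (0.0, 2.0)),
    EMONode("layer1", (2.0, 5.0)),
    EMONode("layer2", (5.0, 10.0)),
])
print(graph.update(3.1))  # real object in the middle layer -> layer1
```

In a real renderer the activated node would toggle which virtual geometry is composited in front of or behind the video layer; the per-frame `update` call mirrors the abstract's "activated in real time" behavior.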
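The illumination step can likewise be sketched. For a Lambertian sphere under a distant point light plus ambient light, the brightest visible point has its surface normal aligned with the light direction, and the shadowed side shows only the ambient term. The following self-contained Python sketch (all function names and the synthetic test image are assumptions, not the paper's method) recovers both parameters from one sphere image:

```python
# Sketch: estimate a point-light direction and an ambient term from the
# image of a Lambertian sphere, exploiting two facts of Lambert shading:
#   - the brightest pixel's surface normal points toward the light, and
#   - pixels with n.l <= 0 receive only ambient light.
import numpy as np

def sphere_normals(size, cx, cy, r):
    """Unit surface normals for pixels inside the sphere's image circle."""
    ys, xs = np.mgrid[0:size, 0:size]
    nx = (xs - cx) / r
    ny = (ys - cy) / r
    inside = nx**2 + ny**2 <= 1.0
    nz = np.sqrt(np.clip(1.0 - nx**2 - ny**2, 0.0, None))
    normals = np.stack([nx, ny, nz], axis=-1)
    normals[~inside] = np.nan
    return normals, inside

def estimate_light(img, cx, cy, r):
    """Estimate (light_direction, ambient) from one Lambertian sphere image."""
    normals, inside = sphere_normals(img.shape[0], cx, cy, r)
    # Light direction: the surface normal at the brightest visible pixel.
    vals = np.where(inside, img, -np.inf)
    iy, ix = np.unravel_index(np.argmax(vals), vals.shape)
    light_dir = normals[iy, ix]
    # Ambient: minimum intensity on the fully shadowed side (n.l <= 0).
    ndotl = np.nansum(normals * light_dir, axis=-1)
    shadowed = inside & (ndotl <= 0)
    ambient = float(img[shadowed].min()) if shadowed.any() else 0.0
    return light_dir, ambient

# Synthetic check: render a Lambertian sphere, then recover the light.
size, cx, cy, r = 129, 64, 64, 60
true_l = np.array([0.4, -0.3, 1.0]); true_l /= np.linalg.norm(true_l)
normals, inside = sphere_normals(size, cx, cy, r)
img = 0.1 + 0.9 * np.clip(np.nansum(normals * true_l, axis=-1), 0, None)
img[~inside] = 0.0
est_l, est_ambient = estimate_light(img, cx, cy, r)
```

With the two known sphere positions described in the abstract, two such direction estimates could be intersected via analytical geometry to place a finite point light source; this sketch covers only the single-sphere, distant-light case.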
Y. Feng, "Some Research on the Fusion of Virtual Scene and Video Image," Applied Mechanics and Materials, vols. 58-60, pp. 1-6, 2011.