With the emergence of online virtual reality applications, the 3D data of virtual scenes is delivered to heterogeneous end-user devices with relatively limited computing power, display resolution, and transmission rates. Yet many virtual scenes created by expert developers are composed of complex 3D models containing huge numbers of geometry primitives and appearance elements. This complexity causes numerous problems when such scenes are deployed on resource-constrained devices. To address this issue, we propose a virtual scene adaptation framework that transforms a given complex 3D model into new forms with less geometric and appearance data. Within the framework, complex virtual scenes are annotated with real-world semantics and, before deployment, are preprocessed with optimization strategies selected according to semantic features matching the capabilities of client devices.
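To make the capability-matching idea concrete, the following is a minimal sketch, not the framework's actual API: all class and function names (`DeviceProfile`, `SceneObject`, `plan_adaptation`) and the budget-by-importance heuristic are illustrative assumptions showing how semantic annotations could steer how much geometry each object retains on a constrained device.

```python
# Hypothetical sketch (not the paper's implementation): choosing per-object
# simplification targets from a device's capability and each object's
# semantic importance.

from dataclasses import dataclass

@dataclass
class DeviceProfile:
    max_triangles: int          # triangle budget of the client device (assumed metric)

@dataclass
class SceneObject:
    name: str
    triangles: int              # triangle count of the original model
    semantic_importance: float  # 0..1, from real-world semantic annotation

def plan_adaptation(objects, device):
    """Distribute the device's triangle budget across objects,
    weighted by semantic importance; return target triangle counts."""
    total_weight = sum(o.semantic_importance for o in objects) or 1.0
    plan = {}
    for o in objects:
        share = o.semantic_importance / total_weight
        budget = int(device.max_triangles * share)
        # Never "simplify" above the original complexity.
        plan[o.name] = min(o.triangles, budget)
    return plan
```

Under this sketch, semantically important objects (for example, a statue a user inspects closely) keep more of the triangle budget than background geometry; the resulting targets would then drive an actual mesh-decimation pass.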