Building Facades Extraction from on-Vehicle Camera Images in a Graph-Cut-Based Optimization Framework



In this paper, a method to extract building wall textures from on-vehicle camera images is proposed as an aid to constructing 3D maps. The building wall textures are required to be attached to 3D polygons which are obtained through 3D measurements in urban space. We assume that the on-vehicle camera is under linear uniform motion and that building walls are planar regions perpendicular to the optical axis. Under this assumption, the same building wall region has the same depth, and hence the same disparity, over the region among successive images. Since disparities derived from foreground objects differ from those derived from the building wall, we can use the disparity differences as a clue to effectively distinguish the building walls from the foreground objects. We formulate the extraction of building wall textures incorporating these considerations as an optimization problem which can be solved by a graph-cut algorithm. To show the effectiveness of the proposed method, it is applied to several sequential scenes.
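The optimization described above can be sketched as a binary labeling problem: each pixel is labeled "wall" or "foreground", with a data cost measuring how far its disparity is from the common wall disparity and a smoothness cost penalizing label changes between neighbors; such a binary energy is minimized exactly by a minimum s-t cut. The following is a minimal 1-D Python sketch under these assumptions, not the authors' implementation; the names (`segment_wall`, `max_flow`), the constant foreground cost, and the Edmonds-Karp solver are illustrative choices.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; 'cap' is mutated into the residual graph."""
    total = 0
    while True:
        parent = {s: None}           # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total
        v, bottleneck = t, float("inf")      # bottleneck along the path
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t                                # update residual capacities
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        total += bottleneck

def segment_wall(disparities, wall_disp, fg_cost, smooth):
    """Label pixels 'wall'/'fg' by minimizing
    sum_p D_p(l_p) + smooth * sum_{neighbors p,q} [l_p != l_q]."""
    n = len(disparities)
    cap = defaultdict(lambda: defaultdict(int))
    S, T = "s", "t"
    for p, d in enumerate(disparities):
        cap[S][p] += fg_cost                 # paid if p is cut to foreground
        cap[p][T] += abs(d - wall_disp)      # paid if p is cut to wall
    for p in range(n - 1):                   # Potts smoothness n-links
        cap[p][p + 1] += smooth
        cap[p + 1][p] += smooth
    max_flow(cap, S, T)
    # Pixels still reachable from the source in the residual graph
    # lie on the source side of the minimum cut: the wall region.
    reachable, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reachable:
                reachable.add(v)
                q.append(v)
    return ["wall" if p in reachable else "fg" for p in range(n)]
```

For example, `segment_wall([10, 10, 1, 1, 10], wall_disp=10, fg_cost=4, smooth=2)` labels the two low-disparity pixels as foreground and the rest as wall. A real system would use a 2-D pixel grid and disparities estimated between successive frames, but the cut construction is the same.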



Edited by: Qiancheng Zhao






H. Kawano et al., "Building Facades Extraction from on-Vehicle Camera Images in a Graph-Cut-Based Optimization Framework", Applied Mechanics and Materials, Vol. 103, pp. 641-648, 2012

Online since: September 2011




