Video Segmentation of Multiresolution Dynamic Spatiotemporal Model Based on Quaternion Wavelet Transform


Abstract:

Achieving a meaningful video representation is an important problem across several research communities, and automatically segmenting non-specific objects remains open. To address the erroneous segmentations produced by existing video algorithms under dynamic scenes, this paper proposes a dynamic spatiotemporal saliency model based on the quaternion wavelet transform for video segmentation, which can automatically segment salient objects from a moving background. The model is a dynamic combination of a temporal attention model and a static saliency model. In the temporal attention model, motion contrast is computed from the phase disparity between two consecutive frames, where the phase is extracted from a quaternionic pyramid. In the static saliency model, spatial attention is computed by an inverse quaternion wavelet transform over a set of scale-weighted center-surround responses; the scale-weighting function has been optimized to better replicate psychophysical data on color appearance. The two kinds of attention information are combined to obtain a preliminary result, which is finally refined with the GrabCut algorithm. Segmentation and comparison experiments demonstrate the validity of the proposed algorithm.
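The pipeline in the abstract (temporal phase-disparity saliency, scale-weighted center-surround static saliency, and a dynamic combination of the two) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it substitutes a global 2-D Fourier phase difference for the quaternionic-pyramid phase disparity, sums difference-of-Gaussians responses instead of applying an inverse quaternion wavelet transform, and the mixing weight `alpha`, the scales, and their weights are hypothetical choices. The final GrabCut refinement is omitted.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in plain NumPy; the radius is capped so
    the kernel never exceeds the image (keeps mode='same' shapes intact)."""
    r = max(1, min(int(3 * sigma), min(img.shape) // 2 - 1))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 0, rows, k, mode='same')

def motion_saliency(prev_frame, curr_frame):
    # Proxy for the paper's quaternionic-pyramid phase disparity:
    # inverse FFT of the inter-frame Fourier phase difference.
    d = np.angle(np.fft.fft2(curr_frame)) - np.angle(np.fft.fft2(prev_frame))
    m = np.abs(np.fft.ifft2(np.exp(1j * d))) ** 2
    return m / (m.max() + 1e-12)

def static_saliency(frame, scales=(1.0, 2.0, 4.0), weights=(0.5, 0.3, 0.2)):
    # Scale-weighted center-surround (difference-of-Gaussians) responses;
    # the paper aggregates them via an inverse quaternion wavelet
    # transform, here they are simply summed. Scales/weights are made up.
    s = np.zeros(frame.shape, dtype=float)
    for sigma, w in zip(scales, weights):
        s += w * np.abs(gaussian_blur(frame, sigma) - gaussian_blur(frame, 4 * sigma))
    return s / (s.max() + 1e-12)

def spatiotemporal_saliency(prev_frame, curr_frame, alpha=0.6):
    # Dynamic combination; alpha is a hypothetical mixing weight.
    return (alpha * motion_saliency(prev_frame, curr_frame)
            + (1 - alpha) * static_saliency(curr_frame))
```

Thresholding the combined map (e.g. at its mean) yields foreground/background seeds that could initialize GrabCut for the refinement step described in the abstract.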


Pages:

3822-3829

Online since:

February 2014

Copyright:

© 2014 Trans Tech Publications Ltd. All Rights Reserved
