Multi-Modality Video Scene Segmentation Algorithm with Shot Force Competition

Abstract:

To segment video scenes quickly and effectively, this paper proposes a multi-modality video scene segmentation algorithm based on shot force competition. The method takes full account of the temporally associated co-occurrence of multimodal media data: it computes the similarity between video shots by fusing low-level video features, and then segments the video into scenes using a shot-force-competition judgment. The authors' experiments show that the proposed method separates video scenes efficiently.
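The preview gives only this high-level pipeline, so the sketch below is one illustrative reading of it, not the authors' implementation. It assumes two modalities (visual and audio feature histograms) fused with fixed weights via histogram intersection, and interprets "shot force competition" as adjacent scenes pulling on each new shot with a force equal to their best shot-to-shot similarity attenuated by temporal distance. All function names (`shot_similarity`, `scene_force`, `segment_scenes`) and all parameter values (weights, decay, threshold) are assumptions for illustration only.

```python
import numpy as np

def shot_similarity(a, b, w_visual=0.7, w_audio=0.3):
    """Fuse per-modality similarities into one score in [0, 1].
    `a`, `b`: dicts mapping modality name -> L1-normalized feature vector.
    Histogram intersection stands in for the paper's low-level measures."""
    sim_v = np.minimum(a["visual"], b["visual"]).sum()
    sim_a = np.minimum(a["audio"], b["audio"]).sum()
    return w_visual * sim_v + w_audio * sim_a

def scene_force(shot_idx, shots, scene, decay=0.85):
    """'Force' a scene exerts on a candidate shot: its best shot-to-shot
    similarity, attenuated by temporal distance (farther shots pull less)."""
    return max(
        shot_similarity(shots[shot_idx], shots[j]) * decay ** (shot_idx - j - 1)
        for j in scene
    )

def segment_scenes(shots, threshold=0.5):
    """Greedy left-to-right grouping: the current and previous scenes
    compete for each new shot; the stronger pull wins if it clears
    `threshold`, otherwise the shot opens a new scene."""
    scenes = [[0]]                      # each scene is a list of shot indices
    for i in range(1, len(shots)):
        f_curr = scene_force(i, shots, scenes[-1])
        f_prev = scene_force(i, shots, scenes[-2]) if len(scenes) > 1 else 0.0
        if f_prev > f_curr and f_prev >= threshold:
            # earlier scene wins the competition: fold the current scene
            # back into it (handles interleaved shots, e.g. dialog cuts)
            scenes[-2].extend(scenes[-1] + [i])
            scenes.pop()
        elif f_curr >= threshold:
            scenes[-1].append(i)        # current scene keeps the shot
        else:
            scenes.append([i])          # no scene pulls hard enough: new scene
    return scenes
```

The merge branch is only one plausible reading of the competition rule: when an earlier scene pulls harder than the current one, the current run of shots is treated as part of that scene rather than as a spurious boundary, which is the usual motivation for scene-level (rather than shot-level) grouping.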

Pages: 514-517

Online since: February 2014

Copyright: © 2014 Trans Tech Publications Ltd. All Rights Reserved

