A Human-Attention Driven Video Capture Method for Wearable Visual Computing


Abstract:

In this paper, we propose a human-attention driven video capture method for wearable visual computing. The method helps a wearable vision system avoid collecting large amounts of meaningless visual imagery, conserving the platform's limited resources. We first compute background similarity and recognize the user's attention behavior, then detect whether the user's attention state is active or passive, and finally capture video automatically and selectively according to the user's attention interest. We describe the architecture of the proposed method and present an experiment that validates its feasibility.
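The pipeline described in the abstract (background-similarity computation → attention-state detection → selective capture) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the histogram-intersection measure, the thresholds, and the class and parameter names are all illustrative assumptions.

```python
import numpy as np

def background_similarity(frame_a, frame_b, bins=16):
    """Histogram-intersection similarity between two grayscale frames, in [0, 1].

    An assumed stand-in for the paper's background-similarity measure.
    """
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())

class AttentionDrivenCapture:
    """Hypothetical sketch of attention-driven capture: record frames only
    once the scene has been stable long enough to suggest the wearer is
    actively attending to it (active attention), and skip transitional
    frames where the background keeps changing (passive attention)."""

    def __init__(self, change_thresh=0.7, dwell_frames=3):
        self.change_thresh = change_thresh  # below this similarity => scene change
        self.dwell_frames = dwell_frames    # stable frames needed to call attention active
        self.prev = None
        self.stable = 0

    def process(self, frame):
        """Return True when this frame should be captured."""
        if self.prev is None:
            self.prev = frame
            return False
        sim = background_similarity(self.prev, frame)
        self.prev = frame
        if sim < self.change_thresh:
            self.stable = 0   # scene changed: attention is transitional/passive
        else:
            self.stable += 1  # scene stable: wearer may be dwelling on a target
        return self.stable >= self.dwell_frames
```

With `dwell_frames=3`, a run of identical frames starts being captured on the fourth frame, while a sudden scene change resets the dwell counter and suppresses capture until the new scene stabilizes.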


Info:

Periodical:

Key Engineering Materials (Volumes 467-469)

Pages:

1361-1366

Online since:

February 2011

Copyright:

© 2011 Trans Tech Publications Ltd. All Rights Reserved
