Research on the Peripheral Sound Visualization Using the Improved Ripple Mode

Abstract:

In this paper, we propose a peripheral sound visualization method for the deaf based on an improved ripple mode. In the proposed mode, we design the processes of transforming sound intensity and determining the locations of sound sources. We use the power spectrum to estimate sound intensity. An ART1 neural network is applied to classify the real-time input sound signals and to display the locations of the sound sources. We present software that aids the development of peripheral displays, and four sample peripheral displays are used to demonstrate our toolkit's capabilities. The results show that the proposed ripple mode correctly conveyed the combined information of sound intensity and sound-source location, and that the ART1 neural network accurately identified input audio signals. Moreover, we found that participants in the study were more likely to obtain information about the locations of sound sources.
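As a rough illustration of the two signal-processing steps named in the abstract (not the authors' implementation): sound intensity can be estimated from the power spectrum of an audio frame, i.e. the squared magnitude of its FFT, and an ART1 network clusters binary input patterns against stored category templates under a vigilance test. All function names, parameters, and the simplified category-choice rule below are assumptions for the sketch.

```python
import numpy as np

def frame_intensity_db(frame, ref=1.0):
    """Estimate the intensity of one audio frame from its power spectrum.

    The power spectrum is the squared FFT magnitude; summing it gives the
    frame's power, reported here in decibels relative to `ref`.
    """
    windowed = frame * np.hanning(len(frame))       # reduce spectral leakage
    spectrum = np.fft.rfft(windowed)
    power = (np.abs(spectrum) ** 2) / len(frame)    # power spectrum
    return 10.0 * np.log10(power.sum() / ref + 1e-12)

class ART1:
    """Minimal fast-learning ART1 sketch for binary pattern clustering.

    Simplification (assumption): categories are tried in order of raw
    overlap with the input rather than the full ART1 choice function.
    """
    def __init__(self, vigilance=0.7):
        self.vigilance = vigilance
        self.templates = []  # one binary template per learned category

    def classify(self, pattern):
        pattern = np.asarray(pattern, dtype=bool)
        norm = pattern.sum()
        order = sorted(
            range(len(self.templates)),
            key=lambda j: -np.logical_and(pattern, self.templates[j]).sum())
        for j in order:
            overlap = np.logical_and(pattern, self.templates[j])
            # Vigilance test: does the template match the input well enough?
            if norm and overlap.sum() / norm >= self.vigilance:
                self.templates[j] = overlap  # resonance: refine the template
                return j
        self.templates.append(pattern)       # no resonance: new category
        return len(self.templates) - 1

# Example: a louder tone frame yields a higher intensity estimate.
sr = 16000
t = np.arange(1024) / sr
loud = frame_intensity_db(0.5 * np.sin(2 * np.pi * 1000 * t))
quiet = frame_intensity_db(0.05 * np.sin(2 * np.pi * 1000 * t))
assert loud > quiet
```

A 10x amplitude ratio corresponds to a 100x power ratio, so `loud` exceeds `quiet` by about 20 dB; similar binary feature patterns fall into the same ART1 category, while a sufficiently different pattern spawns a new one.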

Pages:

123-126

Online since:

December 2011

Citation:

[1] Somervell J., An evaluation of information visualization in attention-limited environments, in Proceedings of IEEE TCVG Symposium on Visualization. (2002) 211-217.

[2] Information on http://www.hearingdogs.org.uk

[3] Dillon H., Hearing Aids, Thieme, New York, 2001.

[4] Hsieh G., Mankoff J., A Comparison of Two Peripheral Displays for Monitoring Email: Measuring Usability, Awareness, and Distraction, University of California, Berkeley Technotes. 3 (2003) 286-293.

[5] Matthews T., Fong J., Ho-Ching F.W., Mankoff J., Evaluating non-speech sound visualizations for the deaf, Behaviour and Information Technology. 25 (2006) 333-351.

DOI: 10.1080/01449290600636488

[6] Matthews T., Carter S., Pai C., Fong J., Mankoff J., Evaluating a mobile sound transcription tool for the deaf, in Proceedings of the International Conference on Ubiquitous Computing. (2006) 159-176.

DOI: 10.1007/11853565_10

[7] Grossberg S., Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors, Biological Cybernetics. 23 (1976) 121-134.

DOI: 10.1007/bf00344744

[8] Grossberg S., Adaptive pattern classification and universal recoding: II. Feedback, expectation, olfaction, illusions, Biological Cybernetics. 23 (1976) 187-202.

DOI: 10.1007/bf00340335