Hearing the Buildings: Smart Monitoring through Advanced Sonification Approaches

Abstract:

Risk prevention for historical buildings through sensor-network monitoring is a challenging yet promising step towards an optimized building protection plan. Because historical buildings are central to the history and identity of local and, by extension, national communities, such monitoring is nowadays considered a fundamental component of cultural preservation. Modern sensor networks allow the installation of wireless monitoring equipment able to collect a large volume and variety of data, which can be further processed to support prevention modelling techniques and strategies. Despite the value of this post-processing, in many real-time monitoring scenarios the accuracy of the monitoring engineer's analysis depends significantly on an audiovisual representation of the gathered data that is easily perceived and directly associated with well-known environmental conditions. Towards this aim, this work introduces an audio-based representation of the collected data that achieves high monitoring performance in terms of how the nature of the building vibrations is perceived. The proposed technique combines a number of advanced sonification methods with human-hearing simulation mechanisms, effectively increasing the volume of information that can be conveyed in real time. Subjective tests demonstrate a significant improvement in how monitoring personnel perceive the origin and/or nature of the acquired vibrations, making the proposed technique a strong supplement to efficient historical building risk prevention.
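The preview does not disclose the paper's actual sonification algorithm, so the sketch below illustrates one common approach consistent with the abstract's description: parameter-mapping sonification, in which a vibration time series drives the pitch of an audible tone so that an operator can literally hear changes in the structure's response. Everything here is an illustrative assumption; the function and parameter names (sonify_vibration, f_lo, f_hi, the 100 Hz sensor rate) are hypothetical and do not come from the paper.

    # Illustrative sketch only: generic parameter-mapping sonification of
    # structural vibration data, not the authors' published method.
    import numpy as np
    from scipy.io import wavfile

    FS = 44100  # audio sample rate (Hz)

    def sonify_vibration(samples, sensor_rate, f_lo=200.0, f_hi=2000.0):
        """Map a vibration time series to an audible tone.

        The instantaneous vibration level drives the pitch of a sine
        oscillator (larger response -> higher pitch), so a listener can
        hear when the structure's response grows.
        """
        x = np.asarray(samples, dtype=float)
        # Normalise the sensor data to [0, 1].
        x = (x - x.min()) / (np.ptp(x) + 1e-12)
        # Resample from the sensor rate to the audio rate (linear interpolation).
        t_sensor = np.arange(len(x)) / sensor_rate
        t_audio = np.arange(int(t_sensor[-1] * FS)) / FS
        x_audio = np.interp(t_audio, t_sensor, x)
        # Parameter mapping: vibration level -> oscillator frequency.
        freq = f_lo + (f_hi - f_lo) * x_audio
        phase = 2.0 * np.pi * np.cumsum(freq) / FS  # integrate frequency into phase
        return 0.5 * np.sin(phase)

    # Example: sonify one minute of synthetic 100 Hz accelerometer data.
    rng = np.random.default_rng(0)
    vibration = rng.normal(size=60 * 100)
    audio = sonify_vibration(vibration, sensor_rate=100)
    wavfile.write("vibration.wav", FS, (audio * 32767).astype(np.int16))

Pitch is only one mapping dimension; loudness, timbre, or binaural position could carry further data channels in real time, in the spirit of the human-hearing simulation mechanisms the abstract mentions.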

Info:

Pages: 231-236

Online since: August 2014

Copyright: © 2015 Trans Tech Publications Ltd. All Rights Reserved

