An Agent-Based Dialog System for Adaptive and Multimodal Interface: A Case Study


Article Preview

Graphical interfaces built on an agent-based dialog can handle errors and interruptions and can dynamically adapt to the current context and situation, the needs of the task being performed, and the user model. This is especially true for the design of multimodal interfaces, where interaction designers need to physically explore and prototype new interaction modalities and therefore require development environments that support the interactivity and dynamics of this creative development process. We argue that, in the domain of sophisticated human-machine interfaces, we can exploit the increasing tendency to design such interfaces as independent agents that themselves engage in an interactive dialogue (both graphical and linguistic) with their users. This paper focuses on the implementation of a flexible and robust dialogue system that integrates emotions and other influencing parameters into the dialogue flow. To achieve a higher degree of adaptability and multimodality, we present a Spoken Language Dialogue System (SLDS) architecture. The manufacturing process of an oil plant (GLZ: Gas Liquefying Zone) is selected as the application domain for this study.
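The adaptation idea described above can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the paper's implementation: the `UserModel`, `DialogAgent`, and the keyword-based emotion update are all illustrative assumptions standing in for the BDI-style agents and emotion parameters the abstract mentions.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Illustrative user model: emotion and expertise influence the dialogue flow."""
    emotion: str = "neutral"      # e.g. "neutral", "frustrated"
    expertise: str = "novice"     # e.g. "novice", "expert"

@dataclass
class DialogAgent:
    """Toy dialogue agent that adapts its next move to context and user model."""
    user: UserModel = field(default_factory=UserModel)
    history: list = field(default_factory=list)

    def perceive(self, utterance: str) -> None:
        # Crude stand-in for emotion recognition: reported errors
        # in the plant raise the estimated frustration level.
        self.history.append(utterance)
        if "error" in utterance.lower():
            self.user.emotion = "frustrated"

    def respond(self, task: str) -> str:
        # Select a dialogue strategy from the current user model.
        if self.user.emotion == "frustrated":
            # Switch to a guided, step-by-step strategy.
            return f"Let's go through '{task}' together, one step at a time."
        if self.user.expertise == "expert":
            return f"Proceeding with '{task}'."
        return f"Starting '{task}'. Say 'help' for guidance."

agent = DialogAgent()
agent.perceive("Valve error in the gas liquefying zone")
print(agent.respond("pressure check"))
```

The point of the sketch is the separation of concerns the abstract implies: perception updates the user model, and response generation consults that model, so the same task yields different dialogue moves for a calm novice, an expert, or a frustrated operator.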



Advanced Materials Research (Volumes 217-218)

Edited by:

Zhou Mark




N. Taghezout, "An Agent-Based Dialog System for Adaptive and Multimodal Interface: A Case Study", Advanced Materials Research, Vols. 217-218, pp. 578-583, 2011

Online since:

March 2011





[1] R. Carlson, J. Hirschberg and M. Swerts, Error handling in spoken dialogue systems, Speech Communication 45, no. 3 (2005) 207-209.


[2] M.E. Bratman, D.J. Israel and M.E. Pollack, Plans and resource-bounded practical reasoning, Computational Intelligence 4 (1988).

[3] E. P. Turunen, J. Salonen and Kanner, Mobile architecture for distributed multimodal dialogues, Proc. of ASIDE (2005).

[4] N. Taghezout, A. Adla and P. Zaraté, A Hybrid Approach for Designing an Adaptive User Interface: IDSS and BDI Agents, in: T.-H. Kim et al. (Eds.), CCIS 30, Springer-Verlag Berlin Heidelberg (2009) 164-182.


[5] J. Sturm and L. Boves, Effective error recovery strategies for multimodal form-filling applications, Speech Communication 45, no. 3 (2005) 289-303.


[6] A. Gupta, A Reference Model for Multimodal Input Interpretation, in: Proceedings of the Conference on Human Factors in Computing Systems (2003).

[7] C. Wootton, VoiceBrowse: The Dynamic Generation of Spoken Dialogue from Online Content, Ph.D. thesis, Faculty of Computing and Engineering, University of Ulster (October 2008).

[8] M.F. McTear, Spoken Dialogue Technology – Toward the Conversational User Interface, Springer (2004).

[9] R. López-Cózar, Z. Callejas, M. Gea and G. Montoro, Multimodal, multilingual and adaptive dialogue system for ubiquitous interaction in an educational space, Applied Spoken Language Interaction in Distributed Environments, Aalborg, Denmark (2005).

[10] M. Shahrokhi and A. Bernard, A framework to develop an analysis agent for evaluating human performance in manufacturing systems, CIRP Journal of Manufacturing Science and Technology 2 (2009) 55-60.


[11] S. Keizer and H. Bunt, Evaluating combinations of dialogue acts for generation, in: Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue, Antwerp (2007) 158-165.

[12] R. Cole et al., The Challenge of Spoken Language Systems: Research Directions for the Nineties, IEEE Trans. on Speech and Audio Processing 3, no. 1 (1995) 1-21.

[13] E. P. Salonen, M. Turunen, J. Hakulinen, L. Helin, P. Prusi and A. Kainulainen, Distributed Dialogue Management for Smart Terminal Devices, in: Proceeding of Interspeech (2005) 849–852.

[14] J. Bouchet and L. Nigay, ICARE: a component-based approach for the design and development of multimodal interfaces, in: CHI'04 Extended Abstracts on Human Factors in Computing Systems, ACM, New York (2004) 1325-1328.