DocumentCode
531001
Title
Design and Evaluation of Explainable BDI Agents
Author
Harbers, Maaike ; Van den Bosch, Karel ; Meyer, John-Jules
Author_Institution
Utrecht Univ., Utrecht, Netherlands
Volume
2
fYear
2010
fDate
Aug. 31 - Sept. 3, 2010
Firstpage
125
Lastpage
132
Abstract
It is widely acknowledged that providing explanations is an important capability of intelligent systems. Explanation capabilities are useful, for example, in scenario-based training systems with intelligent virtual agents: trainees learn more from scenario-based training when they understand why the virtual agents act the way they do. In this paper, we present a model for explainable BDI agents that enables the explanation of BDI agent behavior in terms of its underlying beliefs and goals. Different explanation algorithms can be specified within the model, each generating a different type of explanation. In a user study (n=20), we compare four explanation algorithms by asking trainees which explanations they consider most useful. Based on the results, we discuss which explanation types should be given under what conditions.
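To make the abstract's idea concrete, the following is a minimal sketch of explaining actions from beliefs and goals. It assumes a simple goal tree in which each action serves a goal, goals may have parent goals, and beliefs license goal adoption; the class names, explanation functions, and the rescue scenario are illustrative assumptions, not the paper's actual model or its four algorithms.

```python
# Hypothetical sketch, not the paper's model: two explanation styles over a
# goal tree, one citing the parent goal, one citing the enabling beliefs.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Goal:
    name: str
    parent: Optional["Goal"] = None                # higher-level goal this serves
    enabling_beliefs: List[str] = field(default_factory=list)


@dataclass
class Action:
    name: str
    goal: Goal                                     # goal the action directly serves


def explain_by_parent_goal(action: Action) -> str:
    """Explain an action by the goal one level up in the hierarchy."""
    g = action.goal
    target = g.parent.name if g.parent else g.name
    return f"I did '{action.name}' because I wanted to {target}."


def explain_by_belief(action: Action) -> str:
    """Explain an action by the beliefs that licensed its goal."""
    beliefs = ", ".join(action.goal.enabling_beliefs) or "no recorded beliefs"
    return f"I did '{action.name}' because I believed: {beliefs}."


if __name__ == "__main__":
    rescue = Goal("rescue the victim")
    enter = Goal("enter the building", parent=rescue,
                 enabling_beliefs=["the door is unlocked"])
    open_door = Action("open the door", goal=enter)

    print(explain_by_parent_goal(open_door))       # goal-based explanation
    print(explain_by_belief(open_door))            # belief-based explanation
```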
Keywords
explanation; learning (artificial intelligence); software agents; explainable BDI agents; explanation algorithms; intelligent systems; intelligent virtual agents; scenario-based training; Algorithm design and analysis; Artificial intelligence; Guidelines; History; Humans; Intelligent agent; Training; BDI agent; virtual training
fLanguage
English
Publisher
ieee
Conference_Titel
2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)
Conference_Location
Toronto, ON, Canada
Print_ISBN
978-1-4244-8482-9
Electronic_ISBN
978-0-7695-4191-4
Type
conf
DOI
10.1109/WI-IAT.2010.115
Filename
5614190