Self-Explaining Abilities of an Intelligent Agent for Transparency in a Collaborative Driving Context

Publisher:
IEEE (Institute of Electrical and Electronics Engineers)
Publication Type:
Journal Article
Citation:
IEEE Transactions on Human-Machine Systems, vol. 52, no. 6, pp. 1155-1165, Dec. 2022
Issue Date:
2022-12-01
A critical challenge in human-autonomy teaming is for humans to comprehend their nonhuman teammates (agents). Transparency in an agent's behavior is key to such comprehension and may be achieved by embedding a self-explanation ability into the agent so that it can explain its own behavior. Previous studies have relied on searching through the executed functions and logic to generate explanations for the behavior of goal-following, logic-based agents. As the number of functions and logic rules grows, current approaches, such as component- and process-based methods, become impractical. This article proposes a new method that exploits the agent's artificial situation awareness states to generate explanations, combining several techniques: a Bayesian network, fuzzy theory, and the Hamming distance. The new method is evaluated in a collaborative driving context, in which a significant number of accidents have recently occurred around the globe due to a lack of understanding of autopilot agents. Two typical collaborative driving scenarios, namely traffic-light and overtaking situations, are studied in the CARLA autonomous driving simulator. The findings show that the new method potentially reduces the search space for generating explanations and exhibits better computational performance and a lower cognitive workload. This work contributes to calibrating human trust and enhancing comprehension of the agent.
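
The abstract only names the techniques involved; as a purely illustrative sketch (not the authors' implementation), the snippet below shows how a Hamming distance could be used to match an agent's discretized situation-awareness (SA) state against a small library of annotated states and retrieve the explanation attached to the closest match. All feature names, state vectors, and explanation strings are hypothetical.

```python
# Hypothetical sketch: Hamming-distance matching of a discretized SA state
# to a library of annotated states, each carrying a natural-language explanation.

def hamming_distance(a, b):
    """Number of positions at which two equal-length state vectors differ."""
    if len(a) != len(b):
        raise ValueError("state vectors must have equal length")
    return sum(x != y for x, y in zip(a, b))

# Illustrative SA features (assumed): (traffic_light_red, obstacle_ahead, overtaking, lane_clear)
explanation_library = {
    (1, 0, 0, 1): "Stopping because the traffic light ahead is red.",
    (0, 1, 1, 1): "Changing lanes to overtake the slower vehicle ahead.",
    (0, 1, 0, 0): "Slowing down: obstacle ahead and the adjacent lane is occupied.",
}

def explain(current_state):
    """Return the explanation whose annotated SA state is nearest to the current one."""
    best_state = min(explanation_library,
                     key=lambda s: hamming_distance(s, current_state))
    return explanation_library[best_state]

if __name__ == "__main__":
    # Observed (discretized/fuzzified) SA state, e.g. derived from simulator sensors.
    observed = (1, 0, 0, 0)
    print(explain(observed))  # -> "Stopping because the traffic light ahead is red."
```

In this sketch, restricting the search to a precomputed library of SA states (rather than tracing executed functions and logic) is what would shrink the search space; the Bayesian network and fuzzy theory mentioned in the abstract would plausibly be used upstream to infer and discretize the SA state, but that step is not shown here.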