Explaining Imitation Learning Through Frames

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Intelligent Systems, 2024, PP, (99), pp. 1-9
Issue Date:
2024-01-01
As one of the prevalent methods for building autonomous systems, Imitation Learning (IL) has shown promising performance across a wide range of domains. However, despite considerable improvements in policy performance, research on the explainability of IL models remains limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the importance of individual frames with respect to overall policy performance. It iteratively retrains the black-box IL model on randomly masked demonstrations and uses the environment returns, the conventional evaluation outcome, as coefficients to build an importance map. We also conducted experiments to investigate three major questions concerning whether frames are equally important, the effectiveness of the importance map, and the connections between importance maps produced by different IL models. The results show that R2RISE effectively distinguishes important frames in the demonstrations.
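The RISE-style weighting scheme described in the abstract can be sketched in a few lines: sample random binary masks over the demonstration frames, retrain and evaluate the black-box IL policy on each masked demonstration, and accumulate each mask weighted by the resulting environment return. The sketch below is a minimal illustration under stated assumptions; `train_and_evaluate` stands in for the paper's actual retrain-and-rollout step, and the toy return function is purely hypothetical.

```python
import numpy as np

def r2rise_importance(demo_len, train_and_evaluate, n_masks=500,
                      keep_prob=0.5, seed=0):
    """RISE-style frame-importance estimate for an IL policy.

    Samples random binary masks over the demonstration's frames,
    calls the black-box `train_and_evaluate(mask)` (assumed to retrain
    the IL model on the kept frames and return an environment return),
    and builds an importance map as the return-weighted average of masks.
    """
    rng = np.random.default_rng(seed)
    importance = np.zeros(demo_len)
    total_weight = 0.0
    for _ in range(n_masks):
        # Keep each frame independently with probability `keep_prob`.
        mask = (rng.random(demo_len) < keep_prob).astype(float)
        ret = train_and_evaluate(mask)  # black-box retrain + rollout (assumption)
        importance += ret * mask        # frames kept in high-return runs gain weight
        total_weight += ret
    return importance / max(total_weight, 1e-12)

# Hypothetical stand-in for retraining and evaluation: pretend frames
# 2 and 5 carry the useful behaviour, so returns rise when they are kept.
def toy_train_and_evaluate(mask):
    return 1.0 + 2.0 * mask[2] + 2.0 * mask[5]

imp = r2rise_importance(demo_len=8, train_and_evaluate=toy_train_and_evaluate)
print(imp.argsort()[-2:])  # indices of the two highest-importance frames
```

With enough masks, the weighted average concentrates on the frames whose presence correlates with high returns, which is the intuition behind the importance map the abstract describes.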