Reinforced Path Reasoning for Counterfactual Explainable Recommendation

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Transactions on Knowledge and Data Engineering, vol. PP, no. 99, pp. 1-17, 2024
Issue Date:
2024-01-01
File:
Reinforced Path Reasoning for Counterfactual Explainable Recommendation.pdf (accepted version, Adobe PDF, 4.77 MB)
Abstract:
Counterfactual explanations interpret a recommendation mechanism by exploring how minimal alterations to items or users affect recommendation decisions. Existing counterfactual explainable approaches face a huge search space, and their explanations are either action-based (e.g., user clicks) or aspect-based (i.e., item descriptions). We believe item attribute-based explanations are more intuitive and persuasive for users, since they explain recommendations through fine-grained item attributes, e.g., brand. Moreover, counterfactual explanations can enhance recommendations by filtering out negative items. In this work, we propose a novel Counterfactual Explainable Recommendation (CERec) framework that generates item attribute-based counterfactual explanations while boosting recommendation performance. CERec optimizes an explanation policy by uniformly searching candidate counterfactuals within a reinforcement learning environment. We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph, and we deploy the explanation policy in a recommendation model to enhance recommendations. Extensive explainability and recommendation evaluations demonstrate CERec's ability to provide explanations consistent with user preferences while maintaining improved recommendation performance. We release our code and processed datasets at https://github.com/Chrystalii/CERec.
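To make the core idea concrete: an item attribute-based counterfactual explanation is a smallest set of attributes whose removal flips the item out of the recommendation. The sketch below is a simplified, hypothetical illustration (toy `score` function, brute-force enumeration), not the paper's method; CERec replaces this exhaustive search with a learned policy that samples paths over a knowledge graph.

```python
from itertools import combinations

def score(user_weights, item_attrs):
    """Toy relevance score: sum of the user's preference weights
    over the item's attributes (assumed linear model for illustration)."""
    return sum(user_weights.get(a, 0.0) for a in item_attrs)

def minimal_counterfactual(user_weights, item_attrs, threshold):
    """Brute-force search for a smallest attribute subset whose removal
    drops the item's score below the recommendation threshold.
    This enumeration is the 'huge search space' the paper's adaptive
    path sampler is designed to avoid."""
    attrs = list(item_attrs)
    for size in range(1, len(attrs) + 1):
        for removed in combinations(attrs, size):
            remaining = [a for a in attrs if a not in removed]
            if score(user_weights, remaining) < threshold:
                return set(removed)  # minimal flip found
    return None  # no counterfactual exists

# Hypothetical user/item: the user cares most about brand.
user = {"brand:Nike": 0.9, "color:red": 0.2, "category:shoes": 0.3}
item = ["brand:Nike", "color:red", "category:shoes"]
cf = minimal_counterfactual(user, item, threshold=1.0)
# cf names the attribute(s) whose removal de-recommends the item,
# i.e., the counterfactual explanation "recommended because of the brand".
```

The minimality requirement is what makes the explanation faithful: it isolates exactly the attributes that caused the recommendation, rather than listing everything the item has.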