Towards Graph-based Explainable Recommender Systems

Publication Type:
Thesis
Issue Date:
2024
Explainable recommender systems, which aim to provide accurate recommendations together with reliable explanations, have attracted significant research interest due to their ability to enhance a recommender system's transparency, build user trust, and bring additional benefits to the system. In this thesis, I study a critical yet under-explored question: explainable recommendation over graph structures. Graph-based explainable recommendation aims to predict users' preferences in a recommendation graph, producing accurate recommendations and reliable explanations by learning from users' historical behaviors. The research delves into the cutting-edge work on graph-based explainable recommendation and reveals that current approaches overlook the issue of imbalanced data structures, which hampers the effective modeling of users' historical behaviors over massive information and leads to less precise recommendation outcomes and explanations. Furthermore, most existing work fails to provide reliable explanations for its recommendation results because of the contested faithfulness of attention mechanisms. I have studied the above research problem in depth and addressed the following three technical challenges: 1) The imbalance of graph data distributions leads to imbalanced learning results and poor generalization on sparse but prevalent structures. 2) Recommendation graphs, constructed from heterogeneous, massive, and temporal data, pose a fundamental challenge for capturing users' historical behaviors for explanation due to their complex structure. 3) The black-box nature of graph neural networks makes it difficult to understand and reason about the graph structure, and further to generate meaningful explanations. Accordingly, I propose three research works to achieve satisfactory recommendation results and explainability.
For the first challenge, I study the imbalanced graph data distribution problem in a dynamic graph scenario and propose a novel fair dynamic graph embedding method that closes the gap between active and inactive items to achieve fair recommendations. For the second challenge, I propose a novel reinforcement learning framework for explainable path exploration, and then model users' historical behavior for accurate recommendation via the explored, meaningful paths. For the third challenge, I propose a post-hoc explanation module that leverages counterfactual learning to generate reliable explanations. Qualitative and quantitative experiments validate the state-of-the-art recommendation performance and explainability of the proposed methods. I am confident that this research will significantly advance graph-based explainable recommender systems and set the stage for the creation of more reliable explainable systems in the future.
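To make the counterfactual idea behind the third contribution concrete, the toy sketch below illustrates one common form of counterfactual explanation on an interaction graph: explain a recommendation by the historical interaction whose removal would most reduce the user-item relevance score. All names, the interaction data, and the simple path-counting score are illustrative assumptions, not the thesis's actual model.

```python
# Toy counterfactual explanation sketch (illustrative only, not the
# thesis's model): relevance of a user-item pair is approximated by the
# number of length-3 paths user -> item -> co-user -> target in a
# bipartite user-item interaction graph.

def path_score(interactions, user, target):
    """Count length-3 paths user -> item -> co-user -> target."""
    count = 0
    for item in interactions.get(user, set()):
        for other, items in interactions.items():
            if other != user and item in items and target in items:
                count += 1
    return count

def counterfactual_explanation(interactions, user, target):
    """Return the historical interaction whose removal most lowers the
    relevance score, plus the original score and the score drop."""
    base = path_score(interactions, user, target)
    best_item, best_drop = None, 0
    for item in interactions[user]:
        # Counterfactual world: the user never interacted with `item`.
        reduced = {u: (items - {item} if u == user else items)
                   for u, items in interactions.items()}
        drop = base - path_score(reduced, user, target)
        if drop > best_drop:
            best_item, best_drop = item, drop
    return best_item, base, best_drop

# Hypothetical interaction history: which movies each user has watched.
interactions = {
    "alice": {"matrix", "inception"},
    "bob":   {"matrix", "inception", "tenet"},
    "carol": {"inception", "tenet"},
}
item, base, drop = counterfactual_explanation(interactions, "alice", "tenet")
print(item, base, drop)  # inception 3 2
```

Here the explanation reads as: "tenet is recommended to alice mainly because she watched inception", since removing that single interaction cuts the relevance score from 3 to 1. A learned counterfactual module replaces this exhaustive search with an optimized perturbation of the graph.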