Constrained Off-policy Learning over Heterogeneous Information for Fairness-aware Recommendation

Publisher:
Association for Computing Machinery (ACM)
Publication Type:
Journal Article
Citation:
ACM Transactions on Recommender Systems
Fairness-aware recommendation eliminates discrimination issues to build trustworthy recommendation systems. Existing fairness-aware approaches fail to account for rich user and item attributes and thus cannot capture how these attributes affect recommendation fairness. In practice, such attributes drive unfair recommendations by favoring items with popular attributes, leading to unfair item exposure. Moreover, existing approaches mostly mitigate unfairness for static recommendation models, e.g., collaborative filtering. Static models cannot handle dynamic user interactions with the system, which reflect shifts in user preferences over time, and are therefore limited in their ability to adapt to behavior shifts and secure long-term user satisfaction. Since user and item attributes are central to modern recommenders and user interactions are naturally dynamic, it is essential to develop a method that eliminates attribute-induced unfairness while embracing dynamic modeling of user behavior shifts. In this paper, we propose Constrained Off-policy Learning over Heterogeneous Information for Fairness-aware Recommendation (Fair-HINpolicy), which uses recent advances in context-aware off-policy learning to produce fairness-aware recommendations enriched with attributes from a Heterogeneous Information Network (HIN). In particular, we formulate off-policy learning as a Constrained Markov Decision Process (CMDP) by dynamically constraining the fairness of item exposure at each iteration. We also design an attentive action-sampling strategy to reduce the search space for off-policy learning. Our solution adaptively receives HIN-augmented corrections for counterfactual risk minimization and ultimately yields an effective policy that maximizes long-term user satisfaction. We extensively evaluate our method through simulations on large-scale real-world datasets, obtaining favorable results compared with state-of-the-art methods.
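
To make the CMDP idea concrete, the following is a minimal Python sketch of a fairness-constrained off-policy policy-gradient update via a Lagrangian relaxation, one common way to handle exposure constraints. It is not the authors' implementation: the softmax policy, the uniform logging policy, the clipped importance weight, and all names such as exposure_budget and lam are illustrative assumptions for a toy setup.

import numpy as np

# Toy sketch: Lagrangian relaxation of a CMDP for off-policy learning
# with an item-exposure fairness constraint. All quantities are assumed,
# not taken from the paper.
rng = np.random.default_rng(0)
n_items, dim = 100, 16

theta = rng.normal(scale=0.01, size=(n_items, dim))  # target policy parameters
user_emb = rng.normal(size=dim)                      # a fixed toy user state
lam = 0.0                                            # Lagrange multiplier for the constraint
theta_lr, lam_lr = 0.1, 0.05
exposure_budget = 2.0 / n_items                      # assumed per-item exposure cap

def pi(theta, s):
    """Softmax policy over items given state s."""
    logits = theta @ s
    logits -= logits.max()                           # numerical stability
    p = np.exp(logits)
    return p / p.sum()

for step in range(1000):
    probs = pi(theta, user_emb)
    a = rng.choice(n_items, p=probs)                 # in practice: an action logged by the behavior policy
    beta_a = 1.0 / n_items                           # behavior-policy probability (uniform logging, assumed)
    reward = rng.binomial(1, 0.1 + 0.4 * (a < 10))   # toy reward: the first items are "popular"
    w = min(probs[a] / beta_a, 10.0)                 # clipped importance weight (CRM-style correction)

    # Constraint violation: exposure of the chosen item above the budget.
    violation = probs[a] - exposure_budget

    # REINFORCE gradient of log pi(a|s) for a softmax policy:
    # d log pi(a|s) / d theta_i = (1[i == a] - p_i) * s
    grad_log = -np.outer(probs, user_emb)
    grad_log[a] += user_emb

    # Primal ascent on the penalized objective, dual ascent on the multiplier.
    theta += theta_lr * w * (reward - lam * violation) * grad_log
    lam = max(0.0, lam + lam_lr * violation)

Early on, every item's probability sits below the budget and lam stays at zero; as the reward signal concentrates probability mass on popular items, the dual variable grows and pushes exposure back toward the cap, which is the basic dynamic a CMDP fairness constraint is meant to produce.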