Sequence Unlearning for Sequential Recommender Systems

Publisher:
SPRINGER-VERLAG SINGAPORE PTE LTD
Publication Type:
Chapter
Citation:
AI 2023: Advances in Artificial Intelligence, 2024, 14471 LNAI, pp. 403-415
Issue Date:
2024-01-01
Abstract:
Sequential recommender systems, which leverage clients' sequential product-browsing histories, have become an essential tool for delivering personalized product recommendations. As data protection regulations come into focus, certain clients may demand the removal of their data from the training sets used by these systems. In this paper, we focus on how specific client information can be efficiently removed from a pre-trained sequential recommender system without retraining, particularly when the change to the dataset is not substantial. We propose a novel sequence unlearning method for sequential recommender systems based on label noise injection. Intuitively, our method promotes unlearning by encouraging the system to produce random predictions for the sequences targeted for removal. To further prevent the model from overfitting to any single incorrect label, which could cause substantial changes to its parameters, our method incorporates a dynamic process in which the incorrect label is continually altered during unlearning. This encourages the model to lose confidence in the original label while discouraging it from fitting any specific incorrect label. To the best of our knowledge, this is the first work to tackle the unlearning problem in sequential recommender systems without accessing the remaining data. Our approach is general and can work with any sequential recommender system. Empirically, we demonstrate that our method effectively helps different recommender systems unlearn specific sequential data while maintaining strong generalization performance on the remaining data across multiple datasets.
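
To make the idea concrete, the following minimal PyTorch sketch shows what such a dynamic label-noise unlearning loop could look like for a generic sequential recommender that maps an item sequence to logits over the item catalog. The function name, model interface, and hyperparameters (unlearn_sequences, steps, lr) are illustrative assumptions, not the authors' published implementation.

# A minimal sketch of dynamic label-noise unlearning, assuming a generic
# PyTorch model: model(seq_batch) -> logits of shape (batch, num_items).
import torch
import torch.nn.functional as F

def unlearn_sequences(model, forget_seqs, forget_targets, num_items,
                      steps=50, lr=1e-4):
    """Fine-tune on the forget set only (no access to the remaining data).
    At every step, each forget sequence is paired with a freshly sampled
    incorrect label, so the model loses confidence in the true next item
    without converging to any single wrong one."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        # Resample a random label per sequence; shift any accidental
        # matches so the noisy target always differs from the true item.
        noisy = torch.randint(0, num_items, forget_targets.shape,
                              device=forget_targets.device)
        clash = noisy == forget_targets
        noisy[clash] = (noisy[clash] + 1) % num_items
        logits = model(forget_seqs)            # (batch, num_items)
        loss = F.cross_entropy(logits, noisy)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

Under these assumptions, resampling the noisy target at each step, rather than fixing a single wrong label, is what limits the parameter shift: gradients toward different random labels partially cancel, pushing the model's output distribution for the forget sequences toward uniform rather than toward a new confident prediction.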