All You See Is the Tip of the Iceberg: Distilling Latent Interactions Can Help You Find Treasures

Publisher:
Springer Nature
Publication Type:
Chapter
Citation:
Neural Information Processing, CCIS, vol. 1968, 2024, pp. 244-257
Issue Date:
2024-01-01
Abstract:
Recommender systems suffer severely from the data sparsity problem, which can be attributed to the combined action of several possible causes, such as increasingly strict privacy protection policies and exposure bias. In these cases, unobserved items do not always indicate items that users are uninterested in; they may instead result from the inaccessibility of interaction data or users' unawareness of those items. Blindly fitting all unobserved interactions as negative interactions during training therefore leads to incomplete modeling of user preferences. In this work, we propose a novel training strategy to distill latent interactions for recommender systems (abbreviated as DLI). Latent interactions are possible links between users and items that reflect user preferences but have not occurred. We first design a false-negative interaction selection module that dynamically distills latent interactions throughout the training process. We then devise two loss paradigms: Truncated Loss and Reversed Loss. The former reduces the detrimental effect of false-negative interactions by discarding false-negative samples when computing the loss, while the latter turns them into positive interactions to enrich the interaction data. Both loss functions can be further instantiated in a full mode and a partial mode to discriminate between different confidence levels of false-negative interactions. Extensive experiments on three benchmark datasets demonstrate the effectiveness of DLI in improving the recommendation performance of backbone models.
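The abstract only names the two loss paradigms, so the following is a minimal sketch of how they could be realized under a BCE-style implicit-feedback objective. The selection heuristic `select_false_negatives`, the confidence threshold `tau`, and the toy tensors are illustrative assumptions, not the paper's actual method or API.

```python
# Hedged sketch of the Truncated and Reversed loss paradigms.
# Assumption: unobserved pairs (label == 0) whose predicted score is
# confidently high are treated as suspected false negatives.
import torch
import torch.nn.functional as F

def select_false_negatives(scores, labels, tau=0.9):
    """Flag unobserved interactions (label == 0) whose predicted
    probability exceeds a confidence threshold tau (an assumed
    stand-in for the paper's dynamic selection module)."""
    return (labels == 0) & (torch.sigmoid(scores) > tau)

def truncated_loss(scores, labels, tau=0.9):
    """Truncated Loss: drop suspected false negatives from the BCE
    computation so they no longer push predictions toward 0."""
    keep = ~select_false_negatives(scores, labels, tau)
    return F.binary_cross_entropy_with_logits(scores[keep], labels[keep])

def reversed_loss(scores, labels, tau=0.9):
    """Reversed Loss: relabel suspected false negatives as positives,
    enriching the observed interaction data."""
    fn_mask = select_false_negatives(scores, labels, tau)
    relabeled = torch.where(fn_mask, torch.ones_like(labels), labels)
    return F.binary_cross_entropy_with_logits(scores, relabeled)

# Toy usage: logits for four user-item pairs; the third unobserved
# pair scores high and is treated as a likely false negative.
scores = torch.tensor([2.5, -1.0, 3.0, 0.2])
labels = torch.tensor([1.0, 0.0, 0.0, 0.0])
print(truncated_loss(scores, labels).item())
print(reversed_loss(scores, labels).item())
```

The full and partial modes described in the abstract would further split treatment by confidence level; the single threshold `tau` above collapses that distinction for brevity.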