Enhancing the robustness of neural collaborative filtering systems under malicious attacks

Publication Type:
Journal Article
Citation:
IEEE Transactions on Multimedia, 2019, 21 (3), pp. 555-565
Issue Date:
2019-03-01
Abstract:
© 2018 IEEE. Recommendation systems have become ubiquitous in online shopping in recent decades because they reduce the overwhelming number of choices facing customers and industries. Recent collaborative filtering methods based on deep neural networks have shown promising results owing to their ability to learn hidden representations of users and items. However, these methods have proven vulnerable to malicious user attacks: with knowledge of the collaborative filtering algorithm and its parameters, an attacker can easily degrade the performance of the recommendation system. Unfortunately, this problem has not been well addressed, and research on defending recommendation systems remains insufficient. In this paper, we aim to improve the robustness of recommendation systems based on two concepts: stage-wise hints training and randomness. To protect a target model, we introduce noise layers during its training to increase its resistance to adversarial perturbations. To reduce the noise layers' influence on model performance, we use intermediate layer outputs from a teacher model as hints to regularize the intermediate layers of a student target model. We consider white-box attacks, under which attackers have full knowledge of the target model. The generalizability and robustness of our method are examined analytically in experiments and discussions, and its computational cost is comparable to that of training a standard neural network based collaborative filtering model. Our investigation shows that the proposed defensive method reduces the success rate of malicious user attacks while keeping prediction accuracy comparable to that of standard neural recommendation systems.
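
The two defensive ideas in the abstract can be made concrete with a short sketch. What follows is a minimal illustration, not the authors' implementation: a noise layer that perturbs hidden activations only at training time (the randomness component), and a training step that adds a hints loss pulling the student's intermediate layer toward a pre-trained teacher's (the stage-wise hints-training component). The architecture, the noise scale sigma, and the hint weight lam are illustrative assumptions.

# Minimal sketch of the abstract's two ideas (assumed details, not the
# authors' released code): noise layers for randomness, plus a hints loss
# from a teacher model's intermediate outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseLayer(nn.Module):
    """Adds Gaussian noise to activations at train time; identity at eval."""
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            return x + self.sigma * torch.randn_like(x)
        return x

class NeuralCF(nn.Module):
    """A small MLP-based collaborative filtering model; sigma=0 disables noise."""
    def __init__(self, n_users, n_items, dim=32, sigma=0.0):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.hidden = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(), NoiseLayer(sigma))
        self.out = nn.Linear(64, 1)

    def forward(self, users, items, return_hint=False):
        h = self.hidden(torch.cat(
            [self.user_emb(users), self.item_emb(items)], dim=-1))
        score = self.out(h).squeeze(-1)
        return (score, h) if return_hint else score

def hint_training_step(student, teacher, optimizer, users, items, ratings,
                       lam=0.5):
    """One step: rating-prediction loss plus an MSE hint loss on hidden layers."""
    teacher.eval()
    with torch.no_grad():
        _, t_hint = teacher(users, items, return_hint=True)
    pred, s_hint = student(users, items, return_hint=True)
    loss = F.mse_loss(pred, ratings) + lam * F.mse_loss(s_hint, t_hint)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch the teacher would be a standard noise-free model trained first; the student is then trained with its noise layer active, with the hint term compensating for the accuracy lost to the injected noise, which matches the abstract's stated goal of robustness without degraded prediction accuracy.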