Learning to embed music and metadata for context-aware music recommendation

Publisher:
Springer Verlag
Publication Type:
Journal Article
Citation:
World Wide Web, 2018, pp. 1 - 25
Issue Date:
2018
© 2017 Springer Science+Business Media, LLC, part of Springer Nature
Abstract:
Contextual factors greatly influence users’ musical preferences, so they are remarkably beneficial to music recommendation and retrieval tasks. However, how to obtain and utilize contextual information remains an open problem. In this paper, we propose a context-aware music recommendation approach that recommends music pieces appropriate for users’ contextual preferences. In analogy to matrix factorization methods for collaborative filtering, the proposed approach does not require music pieces to be represented by features in advance; instead, it learns their representations from users’ historical listening records. Specifically, the approach first learns embeddings of music pieces (feature vectors in a low-dimensional continuous space) from listening records and the corresponding metadata. It then infers and models users’ global and contextual preferences for music from their listening records using the learned embeddings. Finally, it recommends music pieces that match the target user’s preferences, satisfying her/his real-time requirements. Experimental evaluations on a real-world dataset show that the proposed approach outperforms baseline methods in terms of precision, recall, F1 score, and hit rate. In particular, our approach performs better on sparse datasets.