A deep approach for multi-modal user attribute modeling
- Publication Type: Conference Proceeding
- Published In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2017, 10538 LNCS, pp. 217-230
This item is closed access and not available.
© 2017, Springer International Publishing AG.

With the explosive growth of user-generated content (e.g., texts, images, and videos) on social networks, analyzing and extracting people's interests from massive social media data is of great significance for providing more accurate personalized recommendations and services. In this paper, we propose a novel multi-modal deep learning algorithm for user profiling, dubbed the multi-modal User Attribute Model (mmUAM), which explores the intrinsic semantic correlations across different modalities. Our model builds on the Poisson Gamma Belief Network (PGBN), a deep topic model for count data in documents. By extending PGBN, we address the problem of learning a shared representation between texts and images, thereby obtaining both textual and visual attributes for users. To evaluate the effectiveness of the proposed method, we collect a real-world dataset from Sina Weibo. Experimental results demonstrate that the proposed algorithm achieves encouraging performance compared with several state-of-the-art methods.
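The abstract describes factorizing count data from two modalities into a single shared user representation. The paper's actual mmUAM/PGBN inference is not reproduced here; as a minimal sketch of the shared-representation idea, the toy code below concatenates hypothetical text and visual word counts per user and fits a single Poisson factor model via multiplicative NMF updates under the KL objective (which corresponds to maximizing a Poisson likelihood). All dimensions and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy counts: 20 users, each with 30 text-word counts and 10 visual-word
# counts. Concatenating the modalities lets one factorization act as a
# shared representation (an illustration only, not the paper's mmUAM).
n_users, n_text, n_img, k = 20, 30, 10, 5
X = rng.poisson(2.0, size=(n_users, n_text + n_img)).astype(float)

# Multiplicative updates for NMF with the KL divergence, equivalent to
# maximum-likelihood estimation under X ~ Poisson(W @ H).
W = rng.random((n_users, k)) + 0.1
H = rng.random((k, n_text + n_img)) + 0.1
for _ in range(200):
    WH = W @ H + 1e-10
    W *= ((X / WH) @ H.T) / H.sum(axis=1)
    WH = W @ H + 1e-10
    H *= (W.T @ (X / WH)) / W.sum(axis=0)[:, None]

# Each row of W is a user's shared (text + image) attribute vector;
# columns of H split into textual and visual topic profiles.
print(W.shape)  # (20, 5)
```

A deeper, PGBN-style model would stack further gamma-distributed layers on top of W and infer them with Gibbs sampling; the single-layer sketch above only shows how joint count factorization yields one latent vector per user across modalities.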