A Novel Dual-Pipeline based Attention Mechanism for Multimodal Social Sentiment Analysis

Publisher:
Association for Computing Machinery (ACM)
Publication Type:
Conference Proceeding
Citation:
WWW 2024 Companion - Companion Proceedings of the ACM Web Conference, 2024, pp. 1816-1822
Issue Date:
2024-05-13
Abstract:
Traditional sentiment analysis methods rely solely on either text or image data, yet most user-generated social media posts combine the two. In this study, we propose a novel Dual-Pipeline based Attentional method that draws on multiple modalities, text and images, to analyse and interpret the emotions and sentiments expressed in tweets. Our method simultaneously extracts meaningful local and global contextual features from these modalities: local fusion layers within each pipeline combine modality-specific features through an attention mechanism to enrich the joint multimodal representation, and a global fusion layer consolidates the collective sentiment representation by intermixing the outputs of both pipelines. We evaluate the method using accuracy and F1-score and, through extensive experiments on the MVSA dataset, show that it outperforms state-of-the-art techniques in identifying the sentiment conveyed in social media content.
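The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of a dual-pipeline attention-fusion architecture of the kind described: each pipeline fuses text and image features with a local attention-based fusion layer, and a global fusion layer mixes the two pipeline outputs before classification. All class names, encoder choices, feature dimensions, and the three-class sentiment output are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class LocalAttentionFusion(nn.Module):
    """Fuse text and image features inside one pipeline via cross-attention."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, context_feats):
        # Query tokens attend over the other modality; residual + norm keeps scale stable.
        fused, _ = self.attn(query=query_feats, key=context_feats, value=context_feats)
        return self.norm(query_feats + fused)


class DualPipelineSentimentModel(nn.Module):
    """Two pipelines (e.g. local- vs. global-context oriented) fused globally."""

    def __init__(self, dim: int = 256, num_classes: int = 3):
        super().__init__()
        # Hypothetical projections standing in for pretrained text/image encoders
        # (e.g. a BERT-style text encoder and a CNN/ViT image encoder).
        self.text_proj = nn.Linear(768, dim)
        self.image_proj = nn.Linear(2048, dim)
        self.local_fusion_a = LocalAttentionFusion(dim)  # pipeline A: text attends to image
        self.local_fusion_b = LocalAttentionFusion(dim)  # pipeline B: image attends to text
        self.global_fusion = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, num_classes)
        )

    def forward(self, text_tokens, image_regions):
        t = self.text_proj(text_tokens)      # (B, Lt, dim)
        v = self.image_proj(image_regions)   # (B, Lv, dim)
        a = self.local_fusion_a(t, v).mean(dim=1)  # pooled output of pipeline A
        b = self.local_fusion_b(v, t).mean(dim=1)  # pooled output of pipeline B
        # Global fusion layer intermixes both pipeline outputs into sentiment logits.
        return self.global_fusion(torch.cat([a, b], dim=-1))


# Example: a batch of 2 tweets with 32 text tokens and 49 image regions.
logits = DualPipelineSentimentModel()(torch.randn(2, 32, 768), torch.randn(2, 49, 2048))
print(logits.shape)  # torch.Size([2, 3]) -> e.g. negative / neutral / positive
```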