Multi-modality sensor data classification with selective attention

Publication Type:
Conference Proceeding
Citation:
IJCAI International Joint Conference on Artificial Intelligence, July 2018, pp. 3111–3117
Issue Date:
2018-01-01
Abstract:
© 2018 International Joint Conferences on Artificial Intelligence. All rights reserved.

Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications, from healthcare to entertainment. However, most existing work in this field employs domain-specific approaches and is therefore ineffective in complex situations where multi-modality sensor data are collected. Moreover, wearable sensor data are less informative than conventional data such as text or images. In this paper, to improve the adaptability of such classification methods across application domains, we cast the classification task as a game and apply a deep reinforcement learning scheme to handle complex situations dynamically. Additionally, we introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism helps capture extra information from the signal and thus significantly improves the discriminative power of the classifier. We carry out several experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach against several state-of-the-art baselines.
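The abstract combines two ideas: a selective attention mechanism that weights the crucial dimensions of a multimodal sensor frame, and a reinforcement-learning framing in which each class prediction is an action scored by a reward. The sketch below is a minimal, hypothetical PyTorch illustration of how those two pieces could fit together; the module names, layer sizes, channel count, and the ±1 reward scheme are assumptions for illustration, not the paper's actual architecture or training procedure.

```python
import torch
import torch.nn as nn

class SelectiveAttentionClassifier(nn.Module):
    """Illustrative sketch: per-dimension attention over a sensor frame,
    followed by a small classifier. Not the paper's actual model."""

    def __init__(self, num_dims: int, hidden: int, num_classes: int):
        super().__init__()
        # Produces one attention weight per input dimension (softmax-normalized).
        self.attention = nn.Sequential(
            nn.Linear(num_dims, num_dims),
            nn.Softmax(dim=-1),
        )
        self.classifier = nn.Sequential(
            nn.Linear(num_dims, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_dims), one flattened multimodal sensor frame per row.
        weights = self.attention(x)       # attend to the crucial dimensions
        return self.classifier(x * weights)

# RL framing (REINFORCE-style, assumed for illustration): the predicted
# class is an "action"; a +1/-1 reward for correct/incorrect labels
# drives a policy-gradient update instead of a domain-specific loss.
model = SelectiveAttentionClassifier(num_dims=9, hidden=64, num_classes=6)
frames = torch.randn(32, 9)               # e.g. 3-axis acc/gyro/mag readings
labels = torch.randint(0, 6, (32,))

logits = model(frames)
dist = torch.distributions.Categorical(logits=logits)
actions = dist.sample()                    # sampled class = RL action
rewards = (actions == labels).float() * 2 - 1
loss = -(dist.log_prob(actions) * rewards).mean()
loss.backward()
```

Because the classifier is trained only through sampled actions and scalar rewards, the same loop can be reused across application domains without redesigning a domain-specific objective, which is the adaptability argument the abstract makes.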