LGAttNet: Automatic micro-expression detection using dual-stream local and global attentions

Publisher:
ELSEVIER
Publication Type:
Journal Article
Citation:
Knowledge-Based Systems, 2021, 212
Issue Date:
2021-01-05
File:
1-s2.0-S095070512030695X-main.pdf (Published version, Adobe PDF, 1.28 MB)
© 2020 Elsevier B.V. Research in the field of micro-expressions has gained significance in recent years. Many researchers have concentrated on classifying micro-expressions into discrete emotion classes, while detecting the presence of a micro-expression in video frames is considered a prerequisite step in the recognition process. Hence, there is a need for more advanced micro-expression detection models. To address this, we propose a dual-attention-network-based micro-expression detection architecture called LGAttNet. LGAttNet is one of the first to combine a dual attention network with a 2-dimensional convolutional neural network to perform frame-wise automatic micro-expression detection. The method divides the feature extraction and enhancement task between two convolutional neural network modules: a sparse module and a feature enhancement module. A key component of our approach is the attention network, which extracts local and global facial features through two modules: a local attention module and a global attention module. The attention mechanism adopts the human strategy of focusing on specific regions of micro-movement, which enables LGAttNet to concentrate on particular facial regions alongside the full facial features when identifying micro-expressions in frames. Experiments performed on widely used publicly available databases demonstrate the robustness and superiority of LGAttNet compared to state-of-the-art approaches.
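To make the dual-stream idea concrete, the following is a minimal NumPy sketch of how a global attention branch (over the whole feature map) and local attention branches (over cropped facial regions) can be fused into one per-frame detection score. It is an illustration of the general mechanism only: the region coordinates, the softmax spatial attention, and the toy sigmoid head are assumptions for demonstration, not the actual LGAttNet architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention(feat):
    """Re-weight each spatial location of an (H, W, C) feature map by a
    softmax over its mean channel activation (a simple attention stand-in)."""
    scores = feat.mean(axis=-1)                       # (H, W) saliency scores
    weights = softmax(scores.ravel()).reshape(scores.shape)
    return feat * weights[..., None]                  # attended feature map

def frame_score(frame_feat, regions, rng_seed=0):
    """Fuse one global and several local attention branches into a single
    micro-expression probability for a frame (illustrative only).
    `regions` is a list of (row0, row1, col0, col1) crops, e.g. eyes/mouth."""
    global_feat = spatial_attention(frame_feat).sum(axis=(0, 1))
    local_feats = [
        spatial_attention(frame_feat[r0:r1, c0:c1]).sum(axis=(0, 1))
        for (r0, r1, c0, c1) in regions
    ]
    fused = np.concatenate([global_feat] + local_feats)
    # A fixed random linear head stands in for the trained detector.
    w = np.random.default_rng(rng_seed).standard_normal(fused.shape[0])
    return float(1.0 / (1.0 + np.exp(-(w @ fused))))  # sigmoid probability

# Toy 8x8 feature map with 4 channels; two hypothetical facial regions
# (upper half ~ eyes, lower half ~ mouth).
feat = np.random.default_rng(1).standard_normal((8, 8, 4))
score = frame_score(feat, regions=[(0, 4, 0, 8), (4, 8, 0, 8)])
```

In a real frame-wise detector this score would be thresholded per frame to flag micro-expression onset; here the linear head is random, so only the data flow (attend locally, attend globally, concatenate, classify) is meaningful.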