FlowFace: Semantic Flow-Guided Shape-Aware Face Swapping
- Publisher: AAAI
- Publication Type: Conference Proceeding
- Citation: Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI 2023), 2023, 37(3), pp. 3367-3375
- Issue Date: 2023-06-27
Closed Access
Filename | Description | Size
---|---|---
25444-Article Text-29507-1-2-20230626.pdf | Published version | 1.54 MB
This item is closed access and not available.
In this work, we propose a semantic flow-guided two-stage framework for shape-aware face swapping, namely FlowFace. Unlike most previous methods, which transfer the source's inner facial features but neglect facial contours, FlowFace transfers both to the target face, yielding more realistic face swapping. Concretely, FlowFace consists of a face reshaping network and a face swapping network. The face reshaping network addresses the shape differences between the source and target faces: it first estimates a semantic flow (i.e., a dense field of face-shape differences) between the source and target faces, and then explicitly warps the target face shape with the estimated flow. After reshaping, the face swapping network generates inner facial features that exhibit the identity of the source face. We employ a pre-trained face masked autoencoder (MAE) to extract facial features from both the source and target faces. In contrast to previous methods that rely on an identity embedding to preserve identity information, the features extracted by our encoder better capture both facial appearance and identity. We then develop a cross-attention fusion module that adaptively fuses the source's inner facial features with the target's facial attributes, leading to better identity preservation. Extensive quantitative and qualitative experiments on in-the-wild faces demonstrate that FlowFace significantly outperforms the state of the art.
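The abstract names two core operations: explicitly warping the target face with an estimated semantic flow, and injecting source identity features into target attributes via cross-attention. The sketch below illustrates both in PyTorch under stated assumptions; it is not the paper's implementation, and every name, shape, and hyperparameter here (e.g., `warp_with_semantic_flow`, 768-dim MAE tokens, 8 attention heads) is an illustrative assumption.

```python
# Minimal sketch of flow-based warping and cross-attention fusion as
# described in the abstract. NOT the authors' code: names, tensor shapes,
# flow conventions, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_with_semantic_flow(target: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a target image (B, C, H, W) with a dense flow field (B, 2, H, W).

    Assumes flow channels are (dx, dy) offsets in normalized [-1, 1]
    coordinates; they are added to an identity sampling grid and passed to
    grid_sample, mirroring "explicitly warps the target face shape with the
    estimated semantic flow".
    """
    b, _, h, w = target.shape
    # Identity grid in the normalized coordinates grid_sample expects.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=target.device),
        torch.linspace(-1, 1, w, device=target.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    offset = flow.permute(0, 2, 3, 1)  # (B, 2, H, W) -> (B, H, W, 2)
    return F.grid_sample(target, grid + offset, align_corners=True)


class CrossAttentionFusion(nn.Module):
    """Fuse source identity tokens into target attribute tokens.

    Target features act as queries and source features as keys/values, so
    identity information is pulled in adaptively per target token.
    """

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target_tokens: torch.Tensor, source_tokens: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=target_tokens, key=source_tokens, value=source_tokens)
        return self.norm(target_tokens + fused)  # residual keeps target attributes


if __name__ == "__main__":
    img = torch.randn(1, 3, 224, 224)
    flow = torch.zeros(1, 2, 224, 224)  # zero flow -> identity warp
    warped = warp_with_semantic_flow(img, flow)
    fusion = CrossAttentionFusion()
    tgt, src = torch.randn(1, 196, 768), torch.randn(1, 196, 768)  # 14x14 token grids
    print(warped.shape, fusion(tgt, src).shape)
```

Using the target tokens as queries (rather than the source) is one plausible reading of "adaptively fuse": the residual connection preserves target attributes such as pose and expression while the attention output injects source identity where relevant.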