Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation
- Publisher: AAAI
- Publication Type: Conference Proceeding
- Citation: 35th AAAI Conference on Artificial Intelligence, AAAI 2021, 2021, 3A, pp. 1911-1920
- Issue Date: 2021-01-01
Closed Access
| Filename | Description | Size |
|---|---|---|
| 16286-Article Text-19780-1-2-20210518.pdf | Published version | 1.55 MB |
This item is closed access and not available.
In this paper, we propose a novel text-based talking-head video generation framework that synthesizes high-fidelity facial expressions and head motions in accordance with contextual sentiments as well as speech rhythm and pauses. Specifically, our framework consists of a speaker-independent stage and a speaker-specific stage. In the speaker-independent stage, we design three parallel networks to generate animation parameters of the mouth, upper face, and head from text, separately. In the speaker-specific stage, we present a 3D face model guided attention network to synthesize videos tailored to different individuals. It takes the animation parameters as input and exploits an attention mask to manipulate facial expression changes for the input individuals. Furthermore, to better establish authentic correspondences between visual motions (i.e., facial expression changes and head movements) and audio, we leverage a high-accuracy motion capture dataset instead of relying on long videos of specific individuals. After attaining the visual and audio correspondences, we can effectively train our network in an end-to-end fashion. Extensive qualitative and quantitative experiments demonstrate that our algorithm produces high-quality photo-realistic talking-head videos with various facial expressions and head motions that follow speech rhythms, and that it outperforms the state of the art.
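As an illustration of the speaker-independent stage described in the abstract, the sketch below shows three parallel branches that map a shared text encoding to mouth, upper-face, and head animation parameters. This is a minimal hypothetical sketch, not the authors' implementation: the module choices (GRU plus linear projection), the parameter dimensions, and the names `ParamHead` and `SpeakerIndependentStage` are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of a speaker-independent stage with
# three parallel branches producing mouth, upper-face, and head animation
# parameters from a shared text/speech encoding. All sizes are assumed.
import torch
import torch.nn as nn


class ParamHead(nn.Module):
    """One parallel branch: a GRU over the text encoding followed by a
    linear projection to a stream of animation parameters."""

    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)      # (batch, time, hidden_dim)
        return self.proj(h)     # (batch, time, out_dim)


class SpeakerIndependentStage(nn.Module):
    """Three parallel networks for mouth, upper-face, and head parameters."""

    def __init__(self, text_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        self.mouth_net = ParamHead(text_dim, hidden_dim, out_dim=20)  # assumed size
        self.upper_net = ParamHead(text_dim, hidden_dim, out_dim=10)  # assumed size
        self.head_net = ParamHead(text_dim, hidden_dim, out_dim=6)    # e.g. rotation + translation

    def forward(self, text_feats: torch.Tensor):
        return (
            self.mouth_net(text_feats),
            self.upper_net(text_feats),
            self.head_net(text_feats),
        )


if __name__ == "__main__":
    feats = torch.randn(2, 100, 256)  # (batch, time steps, text feature dim)
    mouth, upper, head = SpeakerIndependentStage()(feats)
    print(mouth.shape, upper.shape, head.shape)
```

The speaker-specific stage (the 3D face model guided attention network that renders person-specific videos from these parameters) is not sketched here, since its design depends on details given in the paper rather than in this record.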