LipSync generation based on discrete cosine transform
- Publication Type: Conference Proceeding
- Citation: Proceedings - 2017 NICOGRAPH International, NICOInt 2017, 2017, pp. 76-79
- Issue Date: 2017-09-19
Closed Access
Filename | Description | Size
---|---|---
08047398.pdf | Published version | 846.16 kB
This item is closed access and not available.
© 2017 IEEE. Voice acting now plays an increasingly prominent role in video games, especially role-playing games, anime-based games, and serious games. To enhance communication, naturally synchronizing lip and mouth movements is an important part of a convincing 3D character performance [XFMS13]. In this paper, we propose a lightweight LipSync generation algorithm. Based on heuristic knowledge of in-game mouth movement, extracting frequency-domain values from the voice signal is the essential step for LipSync in games. We therefore cast the problem as a Discrete Cosine Transform (DCT), which extracts frequency-domain values with a simple absolute-value computation and thus avoids the redundant phase and modulus computations required by a Fourier Transform (FT) voice model. Our experimental results demonstrate that our DCT-based method achieves good performance for game production.
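The abstract's core idea — taking absolute values of real DCT coefficients instead of computing phases and moduli of complex FFT output — can be illustrated with a minimal sketch. This is not the paper's implementation (the full text is closed access); the `dct2`, `lip_energy` names, the frame length, and the coefficient band are illustrative assumptions, using a naive O(N²) DCT-II for self-containedness.

```python
import numpy as np

def dct2(frame):
    """Naive type-II DCT of a 1-D audio frame (illustrative, O(N^2))."""
    n = len(frame)
    k = np.arange(n)[:, None]          # output coefficient index
    i = np.arange(n)[None, :]          # input sample index
    return np.sum(frame * np.cos(np.pi * k * (2 * i + 1) / (2 * n)), axis=1)

def lip_energy(frame, low=1, high=8):
    """Mouth-opening proxy: summed |DCT| over a few low-frequency bins.
    DCT output is real, so a plain absolute value replaces the complex
    phase/modulus computation an FFT-based pipeline would need.
    (Band limits are assumptions for illustration only.)"""
    coeffs = np.abs(dct2(frame))
    return coeffs[low:high].sum()

# Toy frames: a 200 Hz tone sampled at 16 kHz versus silence.
t = np.arange(256) / 16000
frame = np.sin(2 * np.pi * 200 * t)
silence = np.zeros(256)
print(lip_energy(frame), lip_energy(silence))
```

A game loop could sample `lip_energy` per audio frame and map it to a mouth-open blend-shape weight; because only real-valued cosines and an absolute value are involved, the per-frame cost stays low, which matches the paper's "lightweight" framing.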