CS-GAN: Cross-Structure Generative Adversarial Networks for Chinese calligraphy translation
- Publication Type: Journal Article
- Journal: Knowledge-Based Systems, 2021, 229
- Issue Date:
- File: 1-s2.0-S0950705121005967-main.pdf (Published version, 2.75 MB)
This item is closed access and not available.
Generative Adversarial Networks (GANs) have made great progress in cross-domain image translation. In practice, however, image-to-image translation tasks often involve structural differences between the two domains, as in translation on unpaired Chinese calligraphy datasets. Existing models can only convert color and texture features while keeping structures unchanged (e.g., in an apples-to-oranges task, such models change the color of the apples but preserve their shape). To address cross-structure image translation, such as cross-structure translation of Chinese calligraphy, a novel Generative Adversarial Network (GAN) model, named CS-GAN, is proposed in this paper. In CS-GAN, a distribution transform, the reparameterization trick, and feature sampling are used to convert feature maps from domain S into domain T; images of domain T are then generated through feature concatenation. The proposed CS-GAN is evaluated on three sets of structurally distinct Chinese calligraphic data from three famous calligraphers: Yan Zhenqing, Zhao Mengfu, and Ouyang Xun. Extensive experimental results show that CS-GAN successfully translates Chinese calligraphy across different structures and outperforms state-of-the-art models.
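The abstract mentions that CS-GAN relies on the reparameterization trick to sample transformed feature maps. A minimal sketch of that trick in NumPy follows; the function name, feature-map shapes, and parameterization (mean and log-variance per feature) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def reparameterize(mu, log_var, rng=None):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    Writing the sample as a deterministic function of (mu, log_var)
    plus independent noise keeps the sampling step differentiable
    with respect to mu and log_var, which is what lets a generator
    be trained through it by backpropagation.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.exp(0.5 * log_var)          # log-variance -> std dev
    eps = rng.standard_normal(mu.shape)    # noise, independent of parameters
    return mu + sigma * eps

# Illustrative feature map: batch of 2, 4 channels, 8x8 spatial grid
mu = np.zeros((2, 4, 8, 8))
log_var = np.zeros((2, 4, 8, 8))           # sigma = 1 everywhere
z = reparameterize(mu, log_var, rng=np.random.default_rng(0))
print(z.shape)  # (2, 4, 8, 8)
```

In a framework such as PyTorch the same three lines would sit inside the generator, with `mu` and `log_var` produced by the encoder for the source-domain features.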