Random Reconstructed Unpaired Image-to-Image Translation

Publisher:
IEEE (Institute of Electrical and Electronics Engineers)
Publication Type:
Journal Article
Citation:
IEEE Transactions on Industrial Informatics, 2023, 19, (3), pp. 3144-3154
Issue Date:
2023-03-01
The goal of unpaired image-to-image translation is to learn a mapping from a source domain to a target domain without any paired training examples. The problem can be framed as learning the conditional distribution of source images in the target domain. A major limitation of existing unpaired image-to-image translation algorithms is that they generate unfaithful images that are over-colored and lack detail, whereas realistic translation demands rich detail. To address this limitation, this article proposes a random reconstructed unpaired image-to-image translation (RRUIT) framework based on a generative adversarial network, which uses random reconstruction to preserve high-level features of the source and adopts an adversarial strategy to learn the distribution of the target. The proposed objective function combines two loss terms: an auxiliary loss guides the generator to produce a coarse image, while a coarse-to-fine block following the generator refines it into an image that obeys the target-domain distribution. The coarse-to-fine block contains two submodules based on densely connected atrous spatial pyramid pooling, which enrich the details of the generated images. Extensive experiments on photorealistic stylization and artistic stylization confirm the superiority of the proposed RRUIT.
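The densely connected atrous spatial pyramid pooling underlying the coarse-to-fine block can be illustrated with a minimal single-channel sketch. The function names, dilation rates, and dense-connection scheme below are illustrative assumptions for exposition, not the paper's actual implementation: each branch applies a dilated ("atrous") convolution at a growing rate to enlarge the receptive field, and later branches also receive the earlier branches' outputs.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """2-D atrous convolution (single channel, stride 1), zero-padded so
    the output keeps the input's spatial size. Illustrative only."""
    kh, kw = kernel.shape
    # Effective kernel size once the holes introduced by dilation count.
    eff_h = (kh - 1) * dilation + 1
    eff_w = (kw - 1) * dilation + 1
    ph, pw = eff_h // 2, eff_w // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            for a in range(kh):
                for b in range(kw):
                    # Sample the input with gaps of `dilation` pixels.
                    out[i, j] += kernel[a, b] * xp[i + a * dilation,
                                                   j + b * dilation]
    return out

def dense_aspp(x, kernels):
    """Densely connected ASPP sketch (hypothetical): branch k uses
    dilation rate 2**k and sees the input plus all earlier outputs."""
    feats = x.astype(float)
    outputs = []
    for k, kern in enumerate(kernels):
        y = dilated_conv2d(feats, kern, dilation=2 ** k)
        outputs.append(y)
        feats = feats + y  # dense connection to the later branches
    return outputs
```

Because each branch widens the dilation rate while reusing earlier features, the combined module mixes fine and coarse context without shrinking the feature map, which is the property the abstract relies on for enriching image details.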