GAN2C: Information Completion GAN with Dual Consistency Constraints
- Publication Type: Conference Proceeding
- Citation: Proceedings of the International Joint Conference on Neural Networks (IJCNN), July 2018
- Issue Date: 2018-10-10
Closed Access
Filename | Description | Size
---|---|---
08489550.pdf | Published version | 657.08 kB
This item is closed access and not available.
© 2018 IEEE. This paper proposes an information completion technique, GAN2C, that imposes dual consistency constraints (2C) on a closed-loop encoder-decoder architecture based on generative adversarial nets (GAN). When deep neural networks are adopted as function approximators, GAN2C enables highly effective multi-modality image conversion with only sparse observations in the target modes. For empirical demonstration and model evaluation, we show that the deep neural networks trained in GAN2C can infer colors for grayscale images, as well as estimate rich 3D information of a scene by densely predicting depths. Experimental results show that in both tasks GAN2C, as a generic framework, is comparable to or advances the state-of-the-art performance achieved by highly specialized systems. Code is available at https://github.com/AdalinZhang/GAN2C.
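Since the full text is closed access here, the following is only a minimal, hedged sketch of what dual consistency constraints on a closed-loop encoder-decoder GAN could look like in PyTorch. The toy `Generator`/`Discriminator` modules, the cycle-style reconstruction terms, and the loss weights are illustrative assumptions, not the released GAN2C implementation (see the linked repository for that).

```python
# Sketch only: NOT the authors' GAN2C code. Networks, weights, and the
# cycle-style consistency terms below are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping one image modality to another."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy patch discriminator on the target modality."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Forward mapping G: source -> target (e.g. grayscale -> color) and
# backward mapping F: target -> source, closing the loop.
G, F = Generator(1, 3), Generator(3, 1)
D = Discriminator(3)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

x = torch.randn(4, 1, 32, 32)         # source-modality batch
y_sparse = torch.randn(1, 3, 32, 32)  # sparse observations in the target modality

# Adversarial term: G(x) should look like real target-modality data.
fake_y = G(x)
d_out = D(fake_y)
adv_loss = bce(d_out, torch.ones_like(d_out))

# Dual consistency terms (hypothetical weighting):
#  1) loop consistency on the source: F(G(x)) should reconstruct x;
#  2) loop consistency on the target: G(F(y)) should reconstruct y.
cyc_src = l1(F(fake_y), x)
cyc_tgt = l1(G(F(y_sparse)), y_sparse)
total_g_loss = adv_loss + 10.0 * cyc_src + 10.0 * cyc_tgt
print(float(total_g_loss))
```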