Very long natural scenery image prediction by outpainting
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- Proceedings of the IEEE International Conference on Computer Vision, 2019-October, pp. 10560-10569
- Issue Date:
- 2020
Closed Access
Filename | Description | Size
---|---|---
09010032.pdf | Published version | 2.32 MB
This item is closed access and not available.
© 2019 IEEE. Compared to image inpainting, image outpainting has received less attention because it poses two challenges. The first is how to keep the generated images spatially and semantically consistent with the original input. The second is how to maintain high quality in the generated results, especially in multi-step generation, where the generated regions are spatially far from the initial input. To address these problems, we devise two novel modules, named Skip Horizontal Connection and Recurrent Content Transfer, and integrate them into our encoder-decoder architecture. With this design, our network generates highly realistic outpainting predictions effectively and efficiently. In addition, our method can generate very long new images while preserving the style and semantic content of the given input. To evaluate the proposed architecture, we collect a new scenery dataset of diverse, complicated natural scenes. Experimental results on this dataset demonstrate the efficacy of the proposed network.
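The abstract does not spell out the internals of Skip Horizontal Connection or Recurrent Content Transfer, so the sketch below only illustrates the general multi-step outpainting loop it describes: an encoder-decoder predicts the next image strip from the most recent one, and the strips are concatenated to form an arbitrarily long result. The `OutpaintStep` network, `outpaint_long` helper, layer sizes, and `strip_width` are hypothetical choices in a PyTorch setting, not the paper's actual modules.

```python
# Minimal sketch of iterative (multi-step) image outpainting, assuming PyTorch.
# The network below is an illustrative stand-in, NOT the paper's Skip Horizontal
# Connection / Recurrent Content Transfer design, which the abstract does not detail.
import torch
import torch.nn as nn


class OutpaintStep(nn.Module):
    """Predicts the next image strip (same height and width as the input strip)
    to the right of the given strip, using a small encoder-decoder with one
    generic skip connection."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(channels, 32, 4, 2, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())
        # Decoder input: upsampled bottleneck concatenated with the first encoder
        # feature map (a plain skip connection, used here purely for illustration).
        self.dec2 = nn.ConvTranspose2d(64, channels, 4, 2, 1)

    def forward(self, strip: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(strip)                        # (B, 32, H/2, W/2)
        e2 = self.enc2(e1)                           # (B, 64, H/4, W/4)
        d1 = self.dec1(e2)                           # (B, 32, H/2, W/2)
        out = self.dec2(torch.cat([d1, e1], dim=1))  # (B, C, H, W)
        return torch.tanh(out)


def outpaint_long(model: nn.Module, image: torch.Tensor, steps: int,
                  strip_width: int = 64) -> torch.Tensor:
    """Multi-step outpainting: repeatedly predict the next strip from the most
    recently generated strip and concatenate the strips horizontally."""
    strips = [image]
    current = image[..., -strip_width:]              # right-most strip as the seed
    for _ in range(steps):
        with torch.no_grad():
            current = model(current)                 # predict the next strip
        strips.append(current)
    return torch.cat(strips, dim=-1)                 # width grows by steps * strip_width


if __name__ == "__main__":
    model = OutpaintStep()
    seed = torch.rand(1, 3, 128, 128)                # toy input image
    panorama = outpaint_long(model, seed, steps=4)
    print(panorama.shape)                            # torch.Size([1, 3, 128, 384])
```

In this toy loop only the last generated strip is fed back in; the paper's recurrent module presumably carries richer content across steps to keep distant regions consistent, which is exactly the second challenge the abstract highlights.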