Deep Multi-Resolution Mutual Learning for Image Inpainting

Publisher: Association for Computing Machinery (ACM)
Publication Type: Conference Proceeding
Citation: MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 6359-6367
Issue Date: 2022-10-10
File: Deep Multi-Resolution Mutual Learning for Image Inpainting.pdf (published version, Adobe PDF, 6.06 MB)
Deep image inpainting methods have greatly improved inpainting performance thanks to the powerful representation ability of deep learning. However, current deep inpainting networks still tend to produce unreasonable structures and blurry textures because of the ill-posed nature of the task, so image inpainting remains a challenging topic. In this paper, we therefore propose a novel deep multi-resolution mutual learning (DMRML) strategy, which fully exploits information from multiple resolutions. Specifically, we design a new image inpainting network, termed the multi-resolution mutual network (MRM-Net), which takes damaged images at different resolutions as input and then excavates and exploits the correlations among those resolutions to guide the inpainting process. Technically, we design two new modules, multi-resolution information interaction (MRII) and adaptive content enhancement (ACE): MRII discovers the correlations among multiple resolutions and exchanges information across them, while ACE enhances the content using the interacted features. We also present a memory preservation mechanism (MPM) to prevent information loss as the network depth increases. Extensive experiments on the Paris Street View, Places2, and CelebA-HQ datasets demonstrate that the proposed MRM-Net effectively recovers textures and structures and performs favorably against other state-of-the-art methods.
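The paper's code is not part of this record, so the following is only a minimal PyTorch-style sketch of the cross-resolution information-exchange idea described in the abstract, not the authors' MRII implementation: the class name MultiResolutionExchange, the bilinear resampling, and the single 1x1 fusion convolution are our own assumptions for illustration, and the module assumes at least two resolution branches with the same channel count.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResolutionExchange(nn.Module):
    # Hypothetical stand-in for the MRII idea: each resolution branch is
    # fused with context resampled from the other branches.
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feats):
        # feats: list of feature maps, one per resolution branch
        out = []
        for i, f in enumerate(feats):
            # resample the other branches to this branch's spatial size
            others = [F.interpolate(g, size=f.shape[-2:], mode='bilinear',
                                    align_corners=False)
                      for j, g in enumerate(feats) if j != i]
            # average them into a single context map, then fuse with this branch
            ctx = torch.stack(others, dim=0).mean(dim=0)
            out.append(self.fuse(torch.cat([f, ctx], dim=1)))
        return out

# Example usage with two branches at different resolutions:
# branches = [torch.randn(1, 32, 64, 64), torch.randn(1, 32, 128, 128)]
# fused = MultiResolutionExchange(channels=32)(branches)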