GCM-Net: Towards Effective Global Context Modeling for Image Inpainting

Publisher: ACM
Publication Type: Conference Proceeding
Citation: MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 2586-2594
Issue Date: 2021-10-17
Abstract: Deep learning based inpainting methods have achieved promising performance for image restoration; however, current image inpainting methods still tend to produce unreasonable structures and blurry textures when processing heavily corrupted images. In this paper, we propose a new image inpainting method termed Global Context Modeling Network (GCM-Net). By capturing global contextual information, GCM-Net can improve the recovery of missing regions in images damaged by irregular masks. Specifically, we first use four convolution layers to extract shallow features. We then design a progressive multi-scale fusion block (PMSFB) to extract and fuse multi-scale features for obtaining local features. A dense context extraction (DCE) module is also designed to aggregate the local features extracted by the PMSFBs. To improve information flow, a channel attention guided residual learning module is deployed in both the DCE module and the PMSFB; it reweights the learned residual features and refines the extracted information. To capture more global contextual information and enhance representation ability, a coordinate context attention (CCA) based module is also presented. Finally, the extracted features, which carry rich information, are decoded into the inpainting result. Extensive experiments on the Paris Street View, Places2, and CelebA-HQ datasets demonstrate that our method better recovers structures and textures and delivers significant improvements over related inpainting methods.