See more than once: Kernel-sharing atrous convolution for semantic segmentation

Elsevier BV
Journal Article
Neurocomputing, 2021, 443, pp. 26-34
State-of-the-art semantic segmentation solutions usually leverage different receptive fields via multiple parallel branches to handle objects of different sizes. However, employing separate kernels for individual branches may degrade the network's generalization to objects of different scales, and the computational cost grows with the number of branches. To tackle this problem, we propose a novel network structure, namely Kernel-Sharing Atrous Convolution (KSAC), in which branches with different receptive fields share the same kernel, i.e., a single kernel 'sees' the input feature maps more than once, each time with a different receptive field. Experiments conducted on the benchmark PASCAL VOC 2012 dataset show that our proposed sharing strategy not only boosts the network's generalization and representation abilities but also reduces the computational cost significantly. Specifically, on the validation set, compared with DeepLabv3+, about 2.7G FLOPs and 12.7G FLOPs are saved for output stride = 16 and 8, respectively. In addition, unlike the widely used ASPP structure, our proposed KSAC is able to further improve the mIOU by benefiting from the wider context afforded by larger atrous rates. Finally, our KSAC achieves mIOUs of 88.1%, 45.47% and 80.7% on the PASCAL VOC 2012 test set (Everingham et al., 2009), the ADE20K dataset (Zhou et al., 2017) and the Cityscapes dataset (Marius et al., 2016), respectively. Our full code will be released on Github:
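The kernel-sharing idea described above can be sketched in plain NumPy: one kernel is applied repeatedly at several atrous (dilation) rates and the branch responses are combined. This is a minimal single-channel sketch with hypothetical function names, not the paper's actual multi-channel implementation (which additionally involves batch normalization and learned weights).

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded 2-D convolution of a single-channel map `x`
    with `kernel` expanded by atrous (dilation) rate `rate`."""
    kh, kw = kernel.shape
    ph, pw = rate * (kh // 2), rate * (kw // 2)
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    H, W = x.shape
    out = np.zeros((H, W), dtype=float)
    # Each kernel tap is offset by `rate` pixels, enlarging the
    # receptive field without adding parameters.
    for i in range(kh):
        for j in range(kw):
            di, dj = i * rate, j * rate
            out += kernel[i, j] * xp[di:di + H, dj:dj + W]
    return out

def ksac(x, kernel, rates=(1, 2, 4)):
    """Kernel-Sharing Atrous Convolution sketch: the SAME kernel
    'sees' the input once per rate; branch outputs are summed."""
    return sum(dilated_conv2d(x, kernel, r) for r in rates)
```

Because every branch reuses the one kernel, the parameter count stays that of a single 3x3 convolution regardless of how many rates are used, which is the source of the FLOP and generalization gains claimed in the abstract.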