Looking Beyond Single Images for Weakly Supervised Semantic Segmentation Learning
- Publisher:
- Institute of Electrical and Electronics Engineers (IEEE)
- Publication Type:
- Journal Article
- Citation:
- IEEE Trans Pattern Anal Mach Intell, 2022, PP, (99), pp. 1-1
- Issue Date:
- 2022-04-19
Closed Access
| Filename | Description | Size |
|---|---|---|
| Looking_Beyond_Single_Images_for_Weakly_Supervised_Semantic_Segmentation_Learning.pdf | Published version | 10.66 MB |
This item is closed access and not available.
This article studies the problem of learning weakly supervised semantic segmentation (WSSS) from image-level supervision only. Rather than following previous efforts that focus primarily on intra-image information, we explore the value of cross-image semantic relations for comprehensive object pattern mining. To achieve this, two neural co-attentions are incorporated into the classifier to complementarily capture cross-image semantic similarities and differences. In particular, given a pair of training images, one co-attention enforces the classifier to recognize the common semantics from co-attentive objects, while the other, called contrastive co-attention, drives the classifier to identify the unique semantics of the remaining, unshared objects. This helps the classifier discover more object patterns and better ground semantics in image regions. More importantly, our algorithm provides a unified framework that handles different WSSS settings well, i.e., learning WSSS with (1) precise image-level supervision only, (2) extra simple single-label data, and (3) extra noisy web data. Without bells and whistles, it sets a new state of the art in all these settings. Moreover, our approach ranked 1st place in the WSSS Track of the CVPR 2020 LID Challenge. Extensive experimental results demonstrate the efficacy and high utility of our method.
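The core idea of pairing images and attending across them can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual architecture: it uses a bilinear affinity between two flattened feature maps to build cross-image attention (the "co-attention"), and a hypothetical cosine-based gating to suppress shared content (a stand-in for the learned contrastive co-attention branch). All function names, shapes, and the gating rule are illustrative choices, not taken from the article.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(f1, f2, W):
    """Toy cross-image co-attention.

    f1: (C, N1) and f2: (C, N2) are feature maps flattened over
    spatial locations (N = H*W); W is a learnable (C, C) bilinear
    weight. Returns co-attentive features for each image that
    emphasize semantics shared across the pair.
    """
    affinity = f1.T @ W @ f2              # (N1, N2) pairwise location similarity
    att1 = softmax(affinity, axis=1)      # for each image-1 location, weights over image-2
    att2 = softmax(affinity.T, axis=1)    # and vice versa
    co_f1 = f2 @ att1.T                   # (C, N1): image-1 re-expressed from image-2 features
    co_f2 = f1 @ att2.T                   # (C, N2)
    return co_f1, co_f2

def contrastive_co_attention(f, co_f):
    """Hypothetical contrastive branch: down-weight locations whose
    features resemble their co-attentive (shared) counterpart, so
    the unshared, image-specific semantics stand out."""
    num = (f * co_f).sum(axis=0)
    denom = np.linalg.norm(f, axis=0) * np.linalg.norm(co_f, axis=0) + 1e-8
    sim = num / denom                     # (N,) per-location cosine similarity
    return f * (1.0 - sim)                # keep what is unlike the shared content
```

In the paper, both attentions feed the image classifier, so the classification loss itself drives the network to ground common and unshared semantics in the right regions; the sketch above only shows the attention plumbing, not that training loop.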