In-N-Out Generative Learning for Dense Unsupervised Video Segmentation
- Publisher: Association for Computing Machinery (ACM)
- Publication Type: Conference Proceeding
- Citation: MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 1819-1827
- Issue Date: 2022-10-10
Closed Access
| Filename | Description | Size |
|---|---|---|
| _ACMMM22____In_N_Out_Generative_Learning_for_Dense_Unsupervised_Video_Segmentation_OPUS.pdf | Published version | 8.85 MB |
This item is closed access and not available.
In this paper, we focus on unsupervised learning for Video Object Segmentation (VOS), which learns visual correspondence (i.e., the similarity between pixel-level features) from unlabeled videos. Previous methods are mainly based on the contrastive learning paradigm and optimize at either the image level or the pixel level. Image-level optimization (e.g., on the spatially pooled features of a ResNet) learns robust high-level semantics but is sub-optimal because the pixel-level features are only optimized implicitly. By contrast, pixel-level optimization is more explicit, but it is sensitive to the visual quality of the training data and is not robust to object deformation. To perform these two levels of optimization complementarily in a unified framework, we propose In-aNd-Out (INO) generative learning, which takes a purely generative perspective and exploits the class tokens and patch tokens naturally present in the Vision Transformer (ViT). Specifically, for image-level optimization, we force out-view imagination from local to global views on the class tokens, which helps capture high-level semantics; we name this out-generative learning. For pixel-level optimization, we perform in-view masked image modeling on the patch tokens, which recovers the corrupted parts of an image by inferring its fine-grained structure; we term this in-generative learning. To better exploit temporal information, we additionally enforce inter-frame consistency at both the feature and affinity-matrix levels. Extensive experiments on DAVIS-2017 val and YouTube-VOS 2018 val show that our INO outperforms previous state-of-the-art methods by significant margins.
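
As a reading aid only, the sketch below illustrates in PyTorch how the three objectives described in the abstract (out-generative learning on class tokens, in-generative masked image modeling on patch tokens, and inter-frame feature/affinity consistency) could be combined into one training loss. It is not the authors' implementation: the toy encoder, the cosine/MSE loss choices, the 0.4 mask ratio, the equal loss weighting, and all names (`ToyViTEncoder`, `ino_style_losses`, `mask_image_patches`) are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyViTEncoder(nn.Module):
    """Hypothetical stand-in for a ViT backbone: returns a class token and patch tokens."""

    def __init__(self, patch_size=16, dim=64):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        tokens = self.proj(x).flatten(2).transpose(1, 2)               # [B, N, D]
        tokens = torch.cat([self.cls.expand(len(x), -1, -1), tokens], dim=1)
        tokens = self.blocks(tokens)
        return tokens[:, 0], tokens[:, 1:]                             # cls, patches


def mask_image_patches(img, mask, patch_size):
    """Zero out the image patches selected by the boolean mask of shape [B, N]."""
    B, C, H, W = img.shape
    p = patch_size
    keep = (~mask).reshape(B, 1, H // p, 1, W // p, 1).to(img.dtype)
    patches = img.reshape(B, C, H // p, p, W // p, p)
    return (patches * keep).reshape(B, C, H, W)


def ino_style_losses(model, frame_t, local_view, frame_t1, mask_ratio=0.4):
    """Illustrative combination of the three objectives sketched in the abstract."""
    # Out-generative learning (image level): predict the global view's class
    # token from a local crop ("out-view imagination from local to global").
    cls_global, patches_t = model(frame_t)
    cls_local, _ = model(local_view)
    loss_out = 1.0 - F.cosine_similarity(cls_local, cls_global.detach(), dim=-1).mean()

    # In-generative learning (pixel level): masked image modeling, reconstructing
    # the patch tokens of corrupted (masked-out) regions from the corrupted view.
    B, N, _ = patches_t.shape
    mask = torch.rand(B, N, device=frame_t.device) < mask_ratio
    corrupted = mask_image_patches(frame_t, mask, model.patch_size)
    _, patches_pred = model(corrupted)
    loss_in = F.mse_loss(patches_pred[mask], patches_t.detach()[mask])

    # Inter-frame consistency at the feature level and the affinity-matrix level.
    _, patches_t1 = model(frame_t1)
    f_t = F.normalize(patches_t, dim=-1)
    f_t1 = F.normalize(patches_t1, dim=-1)
    loss_feat = F.mse_loss(f_t, f_t1)
    loss_aff = F.mse_loss(f_t @ f_t.transpose(1, 2), f_t1 @ f_t1.transpose(1, 2))

    # Equal weighting is an arbitrary choice for this sketch.
    return loss_out + loss_in + loss_feat + loss_aff


# Toy usage with random tensors standing in for two adjacent frames and a local crop.
encoder = ToyViTEncoder()
frame_t = torch.randn(2, 3, 224, 224)    # current frame (global view)
frame_t1 = torch.randn(2, 3, 224, 224)   # adjacent frame
local = torch.randn(2, 3, 96, 96)        # local crop of the current frame
ino_style_losses(encoder, frame_t, local, frame_t1).backward()
```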