Active Learning for Point Cloud Semantic Segmentation via Spatial-Structural Diversity Reasoning
- Publisher:
- ACM
- Publication Type:
- Conference Proceeding
- Citation:
- MM 2022 - Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 2575-2585
- Issue Date:
- 2022-10-10
Closed Access
Filename | Description | Size
---|---|---
3503161.3547820.pdf | Published version | 4.05 MB
This item is closed access and not available.
The expensive annotation cost is notoriously the main constraint on the development of point cloud semantic segmentation techniques. Active learning methods endeavor to reduce this cost by selecting and labeling only a subset of the point clouds, yet previous attempts ignore the spatial-structural diversity of the selected samples, inducing the model to select clustered candidates with similar shapes in a local area while missing other representative ones in the global environment. In this paper, we propose a new 3D region-based active learning method to tackle this problem. Dubbed SSDR-AL, our method groups the original point clouds into superpoints and incrementally selects the most informative and representative ones for label acquisition. We achieve the selection mechanism via a graph reasoning network that considers both the spatial and structural diversities of superpoints. To deploy SSDR-AL in a more practical scenario, we design a noise-aware iterative labeling strategy to confront the "noisy annotation" problem introduced by the previous "dominant labeling" strategy in superpoints. Extensive experiments on two point cloud benchmarks demonstrate the effectiveness of SSDR-AL in the semantic segmentation task. In particular, SSDR-AL significantly outperforms the baseline method and reduces the annotation cost on the two benchmarks by up to 63.0% and 24.0%, respectively, while achieving 90% of the performance of fully supervised learning. Code is available at https://github.com/shaofeifei11/SSDR-AL.
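To illustrate the idea of diversity-aware superpoint selection described above, the following is a minimal, hypothetical sketch, not the authors' SSDR-AL implementation (see the linked repository for that). It scores each unlabeled superpoint by a stand-in uncertainty value and rewards candidates that are spatially and structurally far from already-selected ones, so picks spread across the scene rather than clustering locally. All data, names, and weights here are synthetic assumptions for illustration only; the paper instead uses a graph reasoning network for this step.

```python
# Toy, self-contained sketch of diversity-aware superpoint selection.
# NOT the SSDR-AL code: uncertainty, centroids, and shape features are synthetic.
import numpy as np

rng = np.random.default_rng(0)

num_superpoints = 200
centroids = rng.uniform(0, 10, size=(num_superpoints, 3))   # spatial position of each superpoint
shape_feats = rng.normal(size=(num_superpoints, 16))         # structural (shape) descriptor
uncertainty = rng.uniform(size=num_superpoints)               # stand-in for model uncertainty

def select_diverse(budget, w_spatial=0.5, w_struct=0.5):
    """Greedy pick: high uncertainty, far (spatially and structurally) from prior picks."""
    selected, remaining = [], list(range(num_superpoints))
    for _ in range(budget):
        if not selected:
            # First pick: the most uncertain superpoint.
            best = max(remaining, key=lambda i: uncertainty[i])
        else:
            sel_cent = centroids[selected]
            sel_feat = shape_feats[selected]

            def score(i):
                # Minimum distance to the already-selected set, in space and in shape.
                d_spatial = np.linalg.norm(sel_cent - centroids[i], axis=1).min()
                d_struct = np.linalg.norm(sel_feat - shape_feats[i], axis=1).min()
                return uncertainty[i] + w_spatial * d_spatial + w_struct * d_struct

            best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

picked = select_diverse(budget=10)
print("Superpoints queried for labels:", picked)
```

In an actual active learning loop, the queried superpoints would be annotated, added to the labeled pool, and the segmentation network retrained before the next selection round.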