Unsupervised Point Cloud Pre-Training Via Contrasting and Clustering

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Conference Proceeding
Citation:
Proceedings - International Conference on Image Processing, ICIP, 2022, 00, pp. 66-70
Issue Date:
2022-10-16
File:
Binder2.pdf (Accepted version, Adobe PDF, 429.49 kB)
Annotation of large-scale point clouds remains time-consuming and is unavailable for many complex real-world tasks. Point cloud pre-training is a promising direction for automatically extracting features without labeled data. This paper therefore proposes a general unsupervised approach, named ConClu, for point cloud pre-training that jointly performs contrasting and clustering. Specifically, the contrasting is formulated by maximizing the similarity between feature vectors produced by encoders fed with two augmentations of the same point cloud. The clustering simultaneously clusters the data while enforcing consistency between the cluster assignments produced for different augmentations. Experimental evaluations on downstream applications show that our framework outperforms state-of-the-art techniques, demonstrating its effectiveness.
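The abstract describes the two training signals only at a high level. The PyTorch sketch below illustrates one plausible instantiation of a joint contrasting-and-clustering objective over two augmented views of the same point clouds; it is not the authors' implementation. The function names (conclu_loss, sinkhorn), the use of InfoNCE for the contrasting term, the SwAV-style swapped prediction with Sinkhorn-Knopp assignments for the clustering term, and all hyperparameters are assumptions.

```python
# Hypothetical sketch of a joint contrasting + clustering loss.
# Names, loss choices, and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    """Sinkhorn-Knopp: turn prototype scores (B, K) into balanced soft assignments."""
    Q = torch.exp(scores / eps).t()  # (K, B)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True); Q /= K  # normalize over prototypes
        Q /= Q.sum(dim=0, keepdim=True); Q /= B  # normalize over samples
    return (Q * B).t()  # (B, K), each row is a soft cluster assignment


def conclu_loss(z1, z2, prototypes, temperature=0.1):
    """Joint loss for features z1, z2 (B, D) of two augmentations of the same clouds.

    prototypes: (K, D) learnable cluster centers, trained jointly with the encoder.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    protos = F.normalize(prototypes, dim=1)

    # Contrasting: InfoNCE — each view's positive is the other view of the same
    # cloud; the other clouds in the batch serve as negatives.
    logits = z1 @ z2.t() / temperature  # (B, B)
    labels = torch.arange(z1.size(0), device=z1.device)
    contrast = 0.5 * (F.cross_entropy(logits, labels)
                      + F.cross_entropy(logits.t(), labels))

    # Clustering: swapped prediction — each view predicts the cluster
    # assignments computed from the other view, enforcing consistency.
    s1, s2 = z1 @ protos.t(), z2 @ protos.t()  # prototype scores (B, K)
    q1, q2 = sinkhorn(s1), sinkhorn(s2)        # target assignments
    p1 = F.log_softmax(s1 / temperature, dim=1)
    p2 = F.log_softmax(s2 / temperature, dim=1)
    cluster = -0.5 * ((q2 * p1).sum(dim=1) + (q1 * p2).sum(dim=1)).mean()

    return contrast + cluster
```

In a full training loop the two terms would likely be weighted, and both the encoder and the prototype matrix would be updated by the same optimizer; those details are not specified in this record.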