Jo-SRC: A Contrastive Approach for Combating Noisy Labels

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5188-5197
Issue Date:
2021-11-13
File:
2103.13029.pdf (Published version, Adobe PDF, 2.62 MB)
Abstract:
Due to the memorization effect in Deep Neural Networks (DNNs), training with noisy labels usually results in inferior model performance. Existing state-of-the-art methods primarily adopt a sample selection strategy, selecting small-loss samples for subsequent training. However, prior literature tends to perform sample selection within each mini-batch, neglecting that noise ratios vary across mini-batches; moreover, the valuable knowledge within high-loss samples is wasted. To this end, we propose a noise-robust approach named Jo-SRC (Joint Sample Selection and Model Regularization based on Consistency). Specifically, we train the network in a contrastive learning manner. Predictions from two different views of each sample are used to estimate its "likelihood" of being clean or out-of-distribution. Furthermore, we propose a joint loss that improves the model's generalization performance by introducing consistency regularization. Extensive experiments validate the superiority of our approach over existing state-of-the-art methods. The source code and models are available at https://github.com/NUST-Machine-Intelligence-Laboratory/Jo-SRC.
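The core mechanism described in the abstract, scoring each sample by how well predictions from two views of it agree and adding a consistency term to the loss, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the Jensen-Shannon scoring, the clean_thresh parameter, and the function names js_divergence and select_and_regularize are assumptions made for exposition; Jo-SRC's actual selection criterion, out-of-distribution handling, and loss weighting differ in detail (see the paper and repository above).

import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    # Per-sample Jensen-Shannon divergence between two batches of
    # categorical distributions of shape (batch, num_classes).
    m = 0.5 * (p + q)
    kl_pm = (p * ((p + eps) / (m + eps)).log()).sum(dim=1)
    kl_qm = (q * ((q + eps) / (m + eps)).log()).sum(dim=1)
    return 0.5 * (kl_pm + kl_qm)

def select_and_regularize(logits_v1, logits_v2, labels, clean_thresh=0.3):
    # Score each sample by the agreement of its two views, train the
    # classifier only on samples judged likely clean, and add a
    # consistency term pulling the two views' predictions together.
    p1 = F.softmax(logits_v1, dim=1)
    p2 = F.softmax(logits_v2, dim=1)

    # Low divergence between views -> the sample is more likely clean.
    div = js_divergence(p1, p2)            # shape: (batch,)
    likely_clean = div < clean_thresh      # boolean selection mask

    # Classification loss restricted to the selected samples.
    ce = F.cross_entropy(logits_v1, labels, reduction="none")
    n_sel = likely_clean.float().sum().clamp(min=1)
    sel_loss = (ce * likely_clean.float()).sum() / n_sel

    # Consistency regularization over the whole batch.
    cons_loss = div.mean()

    return sel_loss + cons_loss, likely_clean

In a training loop, logits_v1 and logits_v2 would come from forwarding two augmentations of the same batch through the network; the returned mask could additionally route high-divergence samples to a relabeling or discard branch, in the spirit of the paper's clean/out-of-distribution estimation.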