Regularly Truncated M-Estimators for Learning With Noisy Labels.

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Trans Pattern Anal Mach Intell, 2024, 46(5), pp. 3522-3536
Issue Date:
2024-05
Abstract:
The sample selection approach is very popular in learning with noisy labels. Because deep networks "learn patterns first", prior methods built on sample selection share a similar training procedure: small-loss examples are regarded as clean and used to help generalization, while large-loss examples are treated as mislabeled and excluded from network parameter updates. However, such a procedure is debatable in two respects: (a) it ignores the harmful influence of the noisy labels that remain among the selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry meaningful information for generalization. In this paper, we propose regularly truncated M-estimators (RTME) to address these two issues simultaneously. Specifically, RTME alternately switches between truncated M-estimators and original M-estimators. The former adaptively selects small-loss examples without requiring the noise rate and reduces the side effects of the noisy labels among them. The latter lets the possibly clean but large-loss examples contribute to generalization. Theoretically, we demonstrate that our strategies are label-noise-tolerant. Empirically, comprehensive experimental results show that our method outperforms multiple baselines and is robust to a broad range of noise types and levels.
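As a rough illustration of the alternating scheme described in the abstract, the sketch below switches a per-example loss between a truncated mode (small-loss examples only) and the original mode (all examples) on a fixed epoch schedule. It assumes PyTorch; the quantile-based truncation threshold (keep_quantile), the alternation schedule (truncate_period), and the function name rtme_loss are all illustrative assumptions, not the paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def rtme_loss(logits, targets, epoch, truncate_period=2, keep_quantile=0.7):
    """Alternate between a truncated M-estimator and the original one.

    Illustrative sketch only: the quantile threshold and the alternation
    schedule are assumptions, not the paper's exact formulation.
    """
    # Per-example losses: the M-estimator applied example-wise.
    losses = F.cross_entropy(logits, targets, reduction="none")

    if epoch % truncate_period == 0:
        # Truncated mode: keep only small-loss examples, which are more
        # likely to be correctly labeled ("deep networks learn patterns first").
        threshold = torch.quantile(losses.detach(), keep_quantile)
        mask = (losses.detach() <= threshold).float()
        return (losses * mask).sum() / mask.sum().clamp(min=1.0)

    # Original mode: every example contributes, so possibly clean but
    # large-loss examples are not permanently discarded.
    return losses.mean()
```

In a training loop one would compute loss = rtme_loss(model(x), y, epoch) each step and backpropagate as usual; the alternation lets discarded large-loss examples re-enter training periodically.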