Taming Overconfident Prediction on Unlabeled Data From Hindsight.
- Publisher:
- IEEE (Institute of Electrical and Electronics Engineers)
- Publication Type:
- Journal Article
- Citation:
- IEEE Transactions on Neural Networks and Learning Systems, 2023, vol. PP, no. 99
- Issue Date:
- 2023-05-23
Closed Access
Filename | Description | Size
---|---|---
Taming_Overconfident_Prediction_on_Unlabeled_Data_From_Hindsight.pdf | Accepted version | 974.26 kB
This item is closed access and not available.
Minimizing prediction uncertainty on unlabeled data is a key factor in achieving good performance in semi-supervised learning (SSL). Prediction uncertainty is typically expressed as the entropy of the probabilities produced in the output space. Most existing works distill low-entropy predictions either by accepting the dominant class (the one with the largest probability) as the true label or by suppressing subtle predictions (those with smaller probabilities). Arguably, these distillation strategies are heuristic and less informative for model training. Based on this insight, this article proposes a dual mechanism, named adaptive sharpening (ADS), which first applies a soft threshold to adaptively mask out determinate and negligible predictions, and then seamlessly sharpens the remaining informed predictions, distilling certain predictions from the informed ones only. More importantly, we theoretically analyze the traits of ADS by comparing it with various distillation strategies. Numerous experiments verify that ADS significantly improves state-of-the-art SSL methods when used as a plug-in. Our proposed ADS forges a cornerstone for future distillation-based SSL research.
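The dual mechanism described in the abstract (soft-threshold masking followed by sharpening) can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's actual formulation: the function name `adaptive_sharpening_sketch`, the threshold `tau`, and the temperature-based sharpening step are all hypothetical stand-ins chosen for clarity.

```python
import numpy as np

def adaptive_sharpening_sketch(probs, tau=0.1, temperature=0.5):
    """Illustrative sketch (NOT the paper's exact ADS method).

    1) Soft-threshold: mask out negligible class probabilities below `tau`,
       so only the "informed" predictions survive.
    2) Sharpening: raise surviving probabilities to 1/temperature and
       renormalize, lowering the entropy of the distribution.
    """
    probs = np.asarray(probs, dtype=float)
    # Step 1: zero out negligible entries (soft-threshold mask).
    masked = np.where(probs >= tau, probs, 0.0)
    # Fall back to the original distribution if everything was masked.
    if masked.sum() == 0.0:
        masked = probs
    # Step 2: temperature sharpening on the informed entries only.
    sharpened = masked ** (1.0 / temperature)
    return sharpened / sharpened.sum()

# Example: a moderately confident prediction over four classes.
p = adaptive_sharpening_sketch([0.6, 0.3, 0.05, 0.05])
# The two negligible classes are masked out, and the
# remaining mass is sharpened toward the dominant class.
```

With `tau=0.1` and `temperature=0.5`, the input `[0.6, 0.3, 0.05, 0.05]` becomes `[0.8, 0.2, 0.0, 0.0]`: the negligible classes are removed and the dominant class is amplified, which is the low-entropy distillation behavior the abstract describes.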