Supervision Adaptation Balancing In-Distribution Generalization and Out-of-Distribution Detection

Publisher:
IEEE Computer Society
Publication Type:
Journal Article
Citation:
IEEE Trans. Pattern Anal. Mach. Intell., vol. 45, no. 12, pp. 15743-15758, Dec. 2023
Issue Date:
2023-12
Abstract:
The discrepancy between in-distribution (ID) and out-of-distribution (OOD) samples can create distributional vulnerability in deep neural networks, which in turn produces high-confidence predictions on OOD samples. This arises mainly because OOD samples are absent during training, so the network is never properly constrained on them. To tackle this issue, several state-of-the-art methods add extra OOD samples to training and assign them manually defined labels; however, this practice can introduce unreliable labeling that degrades ID classification. Distributional vulnerability thus presents a critical challenge for non-IID deep learning, which aims at OOD-tolerant ID classification by balancing ID generalization against OOD detection. In this paper, we introduce a novel supervision adaptation approach that generates adaptive supervision information for OOD samples, making them more compatible with ID samples. First, we measure the dependency between ID samples and their labels using mutual information, showing that the supervision information can be represented as negative probabilities over all classes. Second, we investigate the data correlations between ID and OOD samples by solving a series of binary regression problems, with the goal of refining the supervision information so that ID classes remain more distinctly separable. Extensive experiments on four advanced network architectures, two ID datasets, and eleven diversified OOD datasets demonstrate that our supervision adaptation approach improves both ID classification and OOD detection.
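
The following is a minimal PyTorch sketch of the kind of training objective the abstract describes: a standard supervised loss on ID samples combined with an adaptive soft-label term on OOD samples. It is illustrative only. The function name supervision_adaptation_loss, the weight lam, and the uniform soft target (standing in for the learned adaptive supervision) are assumptions; the paper's mutual-information derivation and per-class binary regressions are not reproduced here.

    # Illustrative sketch, not the authors' implementation. The uniform
    # OOD target below is a stand-in for the paper's adaptive supervision.
    import torch
    import torch.nn.functional as F

    def supervision_adaptation_loss(logits_id, labels_id, logits_ood, lam=0.5):
        """Cross-entropy on the ID batch plus a soft-label term that pushes
        OOD predictions toward an adaptive target distribution."""
        k = logits_id.size(1)
        # Standard supervised loss on in-distribution samples.
        id_loss = F.cross_entropy(logits_id, labels_id)
        # Adaptive supervision for OOD samples: a soft target over all K
        # classes (uniform here, as an illustrative assumption).
        ood_target = torch.full_like(logits_ood, 1.0 / k)
        ood_loss = F.kl_div(F.log_softmax(logits_ood, dim=1), ood_target,
                            reduction="batchmean")
        return id_loss + lam * ood_loss

    # Toy usage: random features through a linear 10-class classifier.
    model = torch.nn.Linear(128, 10)
    x_id, y_id = torch.randn(32, 128), torch.randint(0, 10, (32,))
    x_ood = torch.randn(32, 128)
    loss = supervision_adaptation_loss(model(x_id), y_id, model(x_ood))
    loss.backward()

In this simplified form, lam trades off ID generalization against the OOD constraint; the paper instead derives the OOD targets adaptively rather than fixing them a priori.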