CAFE: Learning to Condense Dataset by Aligning Features
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022, pp. 1-10
- Issue Date:
- 2022-09-27
This item is open access.
Dataset condensation aims at reducing the network training effort by condensing a cumbersome training set into a compact synthetic one. State-of-the-art approaches largely rely on learning the synthetic data by matching the gradients between the real and synthetic data batches. Despite the intuitive motivation and promising results, such gradient-based methods, by nature, easily overfit to a biased set of samples that produce dominant gradients, and thus lack a global supervision of the data distribution. In this paper, we propose a novel scheme to Condense dataset by Aligning FEatures (CAFE), which explicitly attempts to preserve the real-feature distribution as well as the discriminant power of the resulting synthetic set, lending itself to strong generalization capability across various architectures. At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales, while accounting for the classification of real samples. Our scheme is further backed up by a novel dynamic bi-level optimization, which adaptively adjusts parameter updates to prevent over- or under-fitting. We validate the proposed CAFE across various datasets and demonstrate that it generally outperforms the state of the art; on the SVHN dataset, for example, the performance gain is up to 11%. Extensive experiments and analysis verify the effectiveness and necessity of the proposed designs.
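To make the feature-alignment idea described in the abstract concrete, the snippet below gives a minimal sketch of matching class-wise mean features between real and synthetic batches at multiple scales (layers). It is only an illustration, not the authors' implementation; the function name `feature_alignment_loss`, the tensor shapes, and the toy tensors are assumptions made for the example.

```python
import torch

def feature_alignment_loss(real_feats, syn_feats):
    # real_feats / syn_feats: lists of per-layer feature tensors, each of
    # shape (batch, feat_dim), extracted for one class from the real and
    # synthetic batches respectively (shapes are illustrative assumptions).
    loss = torch.zeros(())
    for fr, fs in zip(real_feats, syn_feats):
        # Penalize the squared distance between the class-wise mean
        # features of real and synthetic data at this scale.
        loss = loss + ((fr.mean(dim=0) - fs.mean(dim=0)) ** 2).sum()
    return loss

# Toy usage: two "layers" of features for a single class.
real = [torch.randn(64, 128), torch.randn(64, 256)]
syn = [torch.randn(10, 128, requires_grad=True),
       torch.randn(10, 256, requires_grad=True)]
loss = feature_alignment_loss(real, syn)
loss.backward()  # gradients flow back into the synthetic features
```

In a full pipeline, a loss of this kind would be combined with a term that accounts for the classification of real samples and optimized within the bi-level scheme the abstract describes.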