Moderately Distributional Exploration for Domain Generalization
- Publication Type:
- Conference Proceeding
- Citation:
- Proceedings of Machine Learning Research, 2023, 202, pp. 6786-6817
- Issue Date:
- 2023-06-01
This item is open access.
- Abstract:
- Domain generalization (DG) aims to tackle the distribution shift between training domains and unknown target domains. Generating new domains is one of the most effective approaches, yet its performance gain depends on the distribution discrepancy between the generated and target domains. Distributionally robust optimization is promising for tackling distribution discrepancy by exploring domains in an uncertainty set. However, the uncertainty set may be overwhelmingly large, leading to low-confidence prediction in DG. This is because a large uncertainty set could introduce domains containing semantically different factors from the training domains. To address this issue, we propose to perform a moderately distributional exploration (MODE) for domain generalization. Specifically, MODE performs distribution exploration in an uncertainty subset that shares the same semantic factors as the training domains. We show that MODE can endow models with provable generalization performance on unknown target domains. The experimental results show that MODE achieves competitive performance compared to state-of-the-art baselines.
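The core idea in the abstract, exploring worst-case domains only within a bounded uncertainty set, can be illustrated with a toy distributionally robust inner maximization. This is a minimal sketch, not the authors' MODE algorithm: it perturbs an input by gradient ascent on a squared loss while projecting the perturbation back into an L2 ball, which plays the role of the bounded ("moderate") uncertainty set. All names (`worst_case_perturbation`, the linear model `w`, the radius `eps`) are hypothetical choices for illustration.

```python
import numpy as np

def worst_case_perturbation(x, y, w, rng, eps=0.5, steps=10, lr=0.1):
    """Gradient ascent on the squared loss w.r.t. the input, with the
    perturbation projected back into an L2 ball of radius eps.
    The ball stands in for a bounded uncertainty set around x."""
    delta = 1e-2 * rng.normal(size=x.shape)  # small random start (zero gradient at delta = 0)
    for _ in range(steps):
        # squared loss (w . (x + delta) - y)^2; gradient w.r.t. delta
        residual = w @ (x + delta) - y
        delta += lr * 2.0 * residual * w     # ascend: increase the loss
        norm = np.linalg.norm(delta)
        if norm > eps:                       # project onto the eps-ball
            delta *= eps / norm
    return x + delta

rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = rng.normal(size=3)
y = float(w @ x)                             # zero loss at the clean point

x_adv = worst_case_perturbation(x, y, w, rng)
assert np.linalg.norm(x_adv - x) <= 0.5 + 1e-9        # stayed inside the set
assert (w @ x_adv - y) ** 2 > (w @ x - y) ** 2        # loss strictly increased
```

Shrinking `eps` corresponds to the paper's motivation: a smaller uncertainty set keeps the explored "domains" closer to the training data, trading worst-case coverage for higher-confidence predictions.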