Personalization Disentanglement for Federated Learning
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2023 IEEE International Conference on Multimedia and Expo (ICME), July 2023, pp. 318-323
- Issue Date:
- 2023-01-01
Closed Access
Filename | Description | Size
---|---|---
1665047.pdf | Published version | 2.73 MB
This item is closed access and not available.
Personalized federated learning (PFL) jointly trains a variety of local models by balancing knowledge sharing across clients against model personalization per client. This paper addresses PFL by explicitly disentangling latent representations into two parts, one capturing the shared knowledge and the other the client-specific personalization, which leads to more reliable and effective PFL. The disentanglement is achieved by a novel Federated Dual Variational Autoencoder (FedDVA), which employs two encoders to infer the two types of representations. FedDVA yields a better understanding of the trade-off between global knowledge sharing and local personalization in PFL. Moreover, it can be integrated with existing FL methods, turning them into personalized models for heterogeneous downstream tasks. Extensive experiments validate the advantages brought by disentanglement and show that models trained with disentangled representations substantially outperform vanilla methods.
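The core idea in the abstract, two encoders that split each input into a shared code and a client-specific code, can be sketched as follows. This is a minimal, hypothetical NumPy illustration of the dual-encoder structure, not the authors' implementation; all layer sizes, the conditioning of the personal encoder on the shared code, and the parameter names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


class DualEncoder:
    """Hypothetical sketch of the dual-encoder idea in FedDVA: one encoder
    infers a shared representation z_s (its parameters would be aggregated
    across clients by the FL server), a second encoder infers a
    client-specific representation z_p (its parameters stay local)."""

    def __init__(self, d_in, d_shared, d_personal):
        # shared-encoder parameters (candidates for federated averaging)
        self.w_s = rng.normal(0.0, 0.1, (d_in, 2 * d_shared))
        self.b_s = np.zeros(2 * d_shared)
        # personal-encoder parameters (kept on the client); conditioning on
        # z_s is an assumption of this sketch
        self.w_p = rng.normal(0.0, 0.1, (d_in + d_shared, 2 * d_personal))
        self.b_p = np.zeros(2 * d_personal)

    def encode(self, x):
        # shared branch: predict mean / log-variance, then reparameterize
        h = x @ self.w_s + self.b_s
        mu_s, logvar_s = np.split(h, 2, axis=-1)
        z_s = mu_s + np.exp(0.5 * logvar_s) * rng.standard_normal(mu_s.shape)
        # personal branch: condition on the input and the shared code
        h = np.concatenate([x, z_s], axis=-1) @ self.w_p + self.b_p
        mu_p, logvar_p = np.split(h, 2, axis=-1)
        z_p = mu_p + np.exp(0.5 * logvar_p) * rng.standard_normal(mu_p.shape)
        return z_s, z_p


enc = DualEncoder(d_in=8, d_shared=4, d_personal=2)
z_s, z_p = enc.encode(rng.standard_normal((5, 8)))
print(z_s.shape, z_p.shape)  # -> (5, 4) (5, 2)
```

A full model would add decoders, reconstruction and KL terms, and server-side averaging of only the shared-encoder weights; the sketch shows just the representation split the paper centers on.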