Federated representation learning across heterogeneous clients
- Publication Type: Thesis
- Issue Date: 2024
This item is open access.
Federated learning (FL) is a machine learning paradigm that enables multiple devices to jointly train a global model while keeping their own data private. However, heterogeneity across clients usually hinders optimization convergence and generalization performance when distributed knowledge is aggregated in the gradient space. For example, clients may differ in data distribution, network latency, input/output space, and/or model architecture, which can easily lead to misalignment of their local gradients. To overcome these challenges, in this research we investigate representation-based federated learning approaches across heterogeneous clients, with the aim of establishing an FL framework that is more communication-efficient and more robust to heterogeneity. Specifically, 1) we propose a general prototype-based federated learning framework that allows clients to share knowledge by exchanging class-wise prototypes instead of model parameters or gradients, accommodating both statistical and model heterogeneity; 2) we further extend the framework from the training-from-scratch paradigm to pre-training-based paradigms, exploring the potential of learning from large foundation models in an FL manner; and 3) we consider a more challenging and realistic data setting and examine the robustness of the federated representation learning framework under different data-shift scenarios.
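To make the prototype-exchange idea concrete, the sketch below shows one common way such a scheme can be implemented: each client summarizes its data as one mean feature embedding per class, and the server averages these prototypes across clients, weighted by per-class sample counts. This is a minimal illustration of the general technique, not the thesis's exact algorithm; the function names, the mean-embedding choice, and the count-based weighting are all illustrative assumptions.

```python
from collections import defaultdict
import numpy as np

def local_prototypes(features: np.ndarray, labels: np.ndarray) -> dict:
    """Client side: compute one prototype per class as the mean of
    that client's local feature embeddings for the class."""
    return {int(c): features[labels == c].mean(axis=0)
            for c in np.unique(labels)}

def aggregate_prototypes(client_protos: list, client_counts: list) -> dict:
    """Server side: average each class prototype across clients,
    weighted by how many samples of that class each client holds.
    Only prototypes (not parameters or gradients) are communicated."""
    sums = defaultdict(lambda: 0.0)
    weights = defaultdict(float)
    for protos, counts in zip(client_protos, client_counts):
        for c, proto in protos.items():
            sums[c] = sums[c] + counts[c] * proto
            weights[c] += counts[c]
    return {c: sums[c] / weights[c] for c in sums}
```

Because only low-dimensional class prototypes are exchanged, such a scheme can tolerate clients with different model architectures (any encoder producing embeddings of the shared dimension suffices) and reduces communication relative to transmitting full parameter vectors.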