Efficient federated multi-view learning
- Publisher: ELSEVIER SCI LTD
- Publication Type: Journal Article
- Citation: Pattern Recognition, 2022, 131
- Issue Date: 2022-11-01
Closed Access
Filename | Description | Size
---|---|---
Efficient federated multi-view learning.pdf | Published version | 1.02 MB
This item is closed access and not available.
Multi-view learning aims to uncover a global common structure shared by different views collected from multiple individual sources. The nascent field of federated learning seeks to learn a global model over distributed networks of devices. This paper shows that multi-view learning is naturally suited to addressing the feature heterogeneity of the federated setting. We propose a novel model, robust federated multi-view learning (FedMVL), formulated as follows: given a dataset with M views, machine learning models must be trained while the M views remain distributed across M devices or nodes. To handle challenges unique to the federated setting, such as stragglers and fault tolerance, we derive an iterative federated optimization algorithm that gives each node the flexibility to solve its subproblem only approximately. To the best of our knowledge, our model is the first to jointly address high communication cost, fault tolerance, and stragglers in distributed multi-view learning. The proposed model also achieves encouraging performance on clustering tasks compared to closely related methods, as we illustrate through simulations on several real-world datasets.
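The setting the abstract describes — M views of the same samples held on M separate nodes, each node refining its view-specific factor and only approximately solving its local subproblem while a shared structure is aggregated centrally — can be illustrated with a minimal sketch. This is not the paper's FedMVL algorithm: it is a toy alternating-least-squares multi-view factorization with hypothetical names and synthetic data, where uneven local step counts stand in for stragglers and inexact local solves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n samples share one latent structure, observed
# through M views with different feature dimensions (feature heterogeneity).
n, k, dims = 60, 4, [8, 12, 5]          # samples, latent rank, per-view features
H_true = rng.random((n, k))
views = [H_true @ rng.random((k, d)) for d in dims]   # X_m ≈ H W_m

# Each "node" holds one view X_m and its factor W_m; the server keeps
# the shared factor H that plays the role of the global common structure.
H = rng.random((n, k))
Ws = [rng.random((k, d)) for d in dims]

def local_update(X, H, W, steps):
    """Node-side work: refine W by least squares with H fixed, then
    propose an H update with W fixed. `steps` varies per node to mimic
    stragglers doing less local work (approximate subproblem solves)."""
    for _ in range(steps):
        W = np.linalg.lstsq(H, X, rcond=None)[0]             # solve H W = X for W
        H_prop = np.linalg.lstsq(W.T, X.T, rcond=None)[0].T  # solve W^T H^T = X^T
    return W, H_prop

for rnd in range(30):                       # communication rounds
    proposals = []
    for m, X in enumerate(views):
        steps = 1 + (m % 2)                 # uneven local effort (straggler-like)
        Ws[m], H_prop = local_update(X, H, Ws[m], steps)
        proposals.append(H_prop)
    H = np.mean(proposals, axis=0)          # server aggregates the shared factor

err = sum(np.linalg.norm(X - H @ W) / np.linalg.norm(X)
          for X, W in zip(views, Ws)) / len(views)
print(f"mean relative reconstruction error: {err:.3f}")
```

Each round costs one exchange of the n-by-k shared factor per node, which is the kind of communication the paper's algorithm tries to keep low; the per-node step counts differ deliberately, so aggregation must tolerate nodes that contribute less-converged proposals.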