Personalized Federated Learning with Robust Clustering Against Model Poisoning

Publisher:
Springer Nature
Publication Type:
Conference Proceeding
Citation:
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2022, 13726 LNAI, pp. 238-252
Issue Date:
2022-01-01
Abstract:
Recently, federated learning (FL) has been widely used to protect clients' data privacy in distributed applications, but heterogeneous data and model poisoning remain two critical challenges. To tackle the first challenge, namely that the data of each client is usually not independent and identically distributed, personalized FL (PFL) learns multiple models across clients, and clustered FL can be viewed as a cluster-wise PFL method that learns one model per cluster. To detect anomalous clients or outliers, the local outlier factor is a popular method based on the density of data points. Building on these ideas, a nested bi-level optimization objective is constructed, and an algorithm of PFL with robust clustering, called FedPRC, is proposed to detect outliers while maintaining state-of-the-art performance. The breakdown point of FedPRC is at least 0.5. Experimental analysis demonstrates its effectiveness and superior performance compared with baselines on multiple benchmark datasets.
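As a rough illustration of the density-based outlier detection the abstract refers to (not the FedPRC algorithm itself), the sketch below applies scikit-learn's LocalOutlierFactor to flattened client model updates; the helper name flag_suspicious_clients and the synthetic data are purely hypothetical.

```python
# Minimal, illustrative sketch: flag anomalous (potentially poisoned) client
# updates with Local Outlier Factor (LOF), a density-based outlier detector.
# This is NOT the paper's FedPRC implementation, only the general LOF idea.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def flag_suspicious_clients(client_updates, n_neighbors=5):
    """client_updates: list of 1-D arrays, one flattened model update per client.
    Returns a boolean mask where True marks a density-based outlier."""
    X = np.stack(client_updates)                       # (num_clients, num_params)
    lof = LocalOutlierFactor(n_neighbors=n_neighbors)  # density-based scoring
    labels = lof.fit_predict(X)                        # -1 = outlier, 1 = inlier
    return labels == -1

# Toy usage: 9 benign clients with small updates, 1 client with a large,
# simulated poisoned update that LOF should flag.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.1, size=100) for _ in range(9)]
updates.append(rng.normal(5.0, 0.1, size=100))
print(flag_suspicious_clients(updates))  # last entry expected to be True
```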