A Game-theoretic Federated Learning Framework for Data Quality Improvement

Publisher:
IEEE COMPUTER SOC
Publication Type:
Journal Article
Citation:
IEEE Transactions on Knowledge and Data Engineering, 2023, 35, (11), pp. 10952-10966
Issue Date:
2023
Abstract:
Federated learning is a promising distributed machine learning paradigm that has been playing a significant role in privacy-preserving machine learning tasks. However, alongside all its achievements, the framework has limitations. First, traditional frameworks assume that all clients want to improve model accuracy and that participation is therefore voluntary. In reality, clients usually want to be appropriately compensated for the data and resources they must commit to the training process before contributing. Second, today's frameworks allow clients to perturb their parameter updates locally, which introduces a great deal of noise into the trained model and can seriously impact model accuracy. To address these concerns, we have developed a private reward game that incentivizes clients to contribute high-quality data to the training process. The game converges to a Nash equilibrium under the guarantee of joint differential privacy, and each client maximizes their reward by following an equilibrium strategy. The noise injected into the model is reduced by introducing a centralized differential privacy model that aggregates the parameters and compensates clients via a data trading market. Experimental simulations demonstrate the rationale behind, and the effectiveness of, the proposed game approach. Additionally, we present comparisons between different training models to demonstrate the performance of the proposed approach in real-world scenarios.
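The abstract's second concern — that local perturbation injects far more noise than centralized aggregation — can be illustrated with a minimal Monte Carlo sketch. This is not the paper's actual mechanism; the client count `N`, noise scale `SIGMA`, and scalar update `TRUE` are all hypothetical, and Gaussian noise stands in for whatever calibrated mechanism the framework uses. Under local differential privacy each of the N clients noises its own update, so the averaged model retains noise variance on the order of SIGMA^2/N; a trusted centralized aggregator can instead add a single noise draw scaled to the average's sensitivity, giving variance on the order of SIGMA^2/N^2.

```python
import random
import statistics

random.seed(42)

N = 50          # number of clients (hypothetical)
SIGMA = 1.0     # per-update noise scale required to privatize one client's update (hypothetical)
TRUE = 2.0      # each client's true scalar update, identical here for illustration
TRIALS = 2000   # Monte Carlo rounds

def local_dp_round():
    # Local DP: every client perturbs its own update before sending;
    # the server averages N independently noised values.
    return sum(TRUE + random.gauss(0, SIGMA) for _ in range(N)) / N

def central_dp_round():
    # Centralized DP: a trusted aggregator averages the raw updates,
    # then injects one noise draw scaled down by the client count.
    return sum(TRUE for _ in range(N)) / N + random.gauss(0, SIGMA / N)

# Mean squared error of the aggregated update under each scheme.
local_mse = statistics.fmean((local_dp_round() - TRUE) ** 2 for _ in range(TRIALS))
central_mse = statistics.fmean((central_dp_round() - TRUE) ** 2 for _ in range(TRIALS))

print(f"local-DP MSE:   {local_mse:.5f}")    # theory: SIGMA**2 / N    = 0.02
print(f"central-DP MSE: {central_mse:.5f}")  # theory: SIGMA**2 / N**2 = 0.0004
```

The roughly N-fold gap in mean squared error is the accuracy headroom the abstract attributes to moving noise injection from the clients to a centralized aggregation step.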