Locally Random Sampling for Practical Privacy Protection in Federated Learning
- Institute of Electrical and Electronics Engineers (IEEE)
- Publication Type:
- Conference Proceeding
- GLOBECOM 2022 - 2022 IEEE Global Communications Conference, 2022, pp. 528-533
- Issue Date: 2022
This item is closed access and not available.
Federated learning (FL) is an emerging solution for machine learning model training in edge/fog computing systems. Unlike traditional systems, which collect data and train models in the cloud, FL allows multiple edge/fog nodes to train a global model collaboratively without revealing their local data to the cloud, giving it inherently stronger privacy protection. Although this basic protection is built into FL, privacy leakage from the shared models remains unsolved. Existing solutions attempt to enhance the privacy of shared model parameters by adding differential privacy (DP) noise; however, they all suffer from accuracy loss and convergence problems owing to the injected noise. In this paper, we propose a novel federated learning protocol to solve this problem. A model trained on a carefully selected sampling subset can achieve the same level of privacy protection as DP while preserving model accuracy. Experiments show that, under the same privacy guarantee, our protocol achieves better model accuracy than noise-injecting DP methods.
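The core idea described in the abstract, each client training on a random subset of its local data rather than perturbing its update with noise, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual protocol: the paper's subset-selection criterion is not available here (the item is closed access), so the helpers `local_random_sample`, `client_update`, and `federated_round` are hypothetical names, and "training" is stood in for by a simple average.

```python
import random


def local_random_sample(data, sample_rate, rng):
    # Hypothetical helper: draw a uniformly random subset of the
    # client's local data. The paper's "carefully selected" subset
    # may use a different criterion.
    k = max(1, int(len(data) * sample_rate))
    return rng.sample(data, k)


def client_update(data, sample_rate, seed=0):
    # Each client computes its update from the sampled subset only;
    # the unsampled records never influence the shared model, which
    # is the source of the privacy protection in this sketch.
    subset = local_random_sample(data, sample_rate, random.Random(seed))
    return sum(subset) / len(subset)  # stand-in for local training


def federated_round(clients, sample_rate):
    # Server-side FedAvg-style aggregation of the client updates.
    updates = [client_update(d, sample_rate, seed=i)
               for i, d in enumerate(clients)]
    return sum(updates) / len(updates)
```

By contrast, a DP baseline would add calibrated noise to each client update before aggregation, which is what the abstract identifies as the cause of accuracy loss and convergence problems.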