High-accuracy low-cost privacy-preserving federated learning in IoT systems via adaptive perturbation

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Journal of Information Security and Applications, 2022, 70
Issue Date:
2022-11-01
With the rapid development of the Internet of Things (IoT), federated learning (FL) has been widely used to obtain insights from collected data while preserving data privacy. Differential privacy (DP), an additive-noise scheme, has been widely studied as a privacy-preserving approach for FL. However, privacy protection under DP usually comes at the cost of model accuracy for the underlying FL process. In this paper, we propose a novel low-cost (in both communication and computational overhead) adaptive noise perturbation/masking scheme that protects FL clients' privacy without degrading global model accuracy. In particular, the magnitude of the additive noise adapts to the magnitude of the local model updates. A direction-based filtering scheme is then used to accelerate the convergence of the FL model, and a maximum tolerable noise bound for local clients is derived using the central limit theorem (CLT). The designed noise maximizes privacy protection for clients while preserving the accuracy and convergence rate of the FL model, because the independent noise terms cancel out and form a more concentrated distribution after the server's aggregation step. We theoretically prove that FL with the proposed perturbation scheme retains the same accuracy and convergence rate (O(1/T) for convex loss functions and O(1/√T) for non-convex loss functions) as non-private FL with SGD. We also evaluate the proposed scheme in terms of convergence behavior, computational efficiency, and resilience to state-of-the-art privacy inference attacks on real-world datasets. Experimental results show that FL with our perturbation scheme outperforms DP in model accuracy and convergence rate in both the client-dropout and non-dropout scenarios, and it incurs no additional computational or communication overhead compared with DP. Our approach provides DP-comparable or better effectiveness in defending against privacy attacks at the same global model accuracy.
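The core intuition described above — per-client noise whose scale tracks the local update magnitude, with independent noise terms largely cancelling after server-side averaging (by the CLT) — can be illustrated with a minimal NumPy sketch. This is not the paper's algorithm; the `perturb` helper and the scaling knob `alpha` are hypothetical stand-ins for the adaptive perturbation mechanism, and the direction-based filtering step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(update, alpha=0.1, rng=rng):
    # Hypothetical adaptive masking: zero-mean Gaussian noise whose
    # standard deviation is proportional to the local update's norm
    # (alpha is an assumed scaling knob, not from the paper).
    scale = alpha * np.linalg.norm(update)
    return update + rng.normal(0.0, scale, size=update.shape)

n_clients, dim = 100, 10
true_update = rng.normal(size=dim)
# Each client's local update is a slightly noisy view of the true update.
local_updates = [true_update + 0.01 * rng.normal(size=dim)
                 for _ in range(n_clients)]

# Clients mask their updates before sending them to the server.
masked = [perturb(u) for u in local_updates]

# Server-side aggregation: averaging the independent zero-mean noise
# terms concentrates them around zero, so the aggregated masked update
# stays close to the aggregate of the unmasked updates.
agg_masked = np.mean(masked, axis=0)
agg_clean = np.mean(local_updates, axis=0)
residual = np.linalg.norm(agg_masked - agg_clean)
per_client_noise = np.linalg.norm(masked[0] - local_updates[0])
print(residual, per_client_noise)
```

Running the sketch shows the residual noise in the aggregate is roughly an order of magnitude smaller than any single client's mask, which is the property that lets per-client privacy noise be large while the global model's accuracy is preserved.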