Fed-DFE: A decentralized function encryption-based privacy-preserving scheme for federated learning
- Publisher:
- Computers, Materials and Continua (Tech Science Press)
- Publication Type:
- Journal Article
- Citation:
- Computers, Materials and Continua, 2022, 71, (1), pp. 1867-1886
- Issue Date:
- 2022-01-01
Open Access
This item is open access.
Federated learning is a distributed learning framework that trains global models by exchanging model parameters instead of raw data. However, this parameter-passing training mechanism is still threatened by gradient inversion, inference attacks, etc. With a lightweight encryption overhead, function encryption is a viable secure aggregation technique for federated learning, and it is often used in combination with differential privacy. Function encryption in federated learning still has the following problems: (a) Traditional function encryption usually requires a trusted third party (TTP) to assign the keys; if the TTP colludes with the server, the secure aggregation mechanism can be compromised. (b) When differential privacy is used in combination with function encryption, the evaluation metrics of the incentive mechanisms in traditional federated learning become invisible. In this paper, we propose a hybrid privacy-preserving scheme for federated learning, called Fed-DFE. Specifically, we present a decentralized multi-client function encryption algorithm. It replaces the TTP in traditional function encryption with an interactive key generation algorithm, avoiding the collusion problem. Then, an embedded incentive mechanism is designed for function encryption. It models the real parameters in federated learning and finds a balance between privacy preservation and model accuracy. Finally, we implement a prototype of Fed-DFE and evaluate the performance of the decentralized function encryption algorithm. The experimental results demonstrate the effectiveness and efficiency of our scheme.
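To give a flavor of TTP-free secure aggregation, below is a minimal toy sketch in Python. It is not the paper's Fed-DFE decentralized function encryption; it illustrates the simpler, well-known pairwise-masking idea, in which each pair of clients derives a shared random mask (here from a shared seed, standing in for an interactive key agreement), one client adds it and the other subtracts it, so individual updates look random to the server while the masks cancel in the aggregate. All function and variable names are illustrative.

```python
import random


def masked_updates(updates, seed=0):
    """Toy pairwise-masking secure aggregation (illustrative only).

    For each client pair (i, j) with i < j, both derive the same
    random mask from a shared seed; client i adds it, client j
    subtracts it. Each masked update alone reveals nothing useful,
    but the masks cancel when the server sums all updates.
    """
    n = len(updates)
    dim = len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # Shared pairwise seed stands in for interactive key agreement.
            rng = random.Random(seed * 1_000_000 + i * 1_000 + j)
            mask = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] += mask[k]
                masked[j][k] -= mask[k]
    return masked


def aggregate(masked):
    # The server sums coordinate-wise; pairwise masks cancel out.
    return [sum(col) for col in zip(*masked)]


updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
agg = aggregate(masked_updates(updates))
# agg recovers the true total of the updates (up to float rounding)
```

Real protocols replace the shared seeds with key exchange and handle dropouts; here the point is only that no single trusted party ever holds the unmasked updates or a master key.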