Better Together: Attaining the Triad of Byzantine-robust Federated Learning via Local Update Amplification

Publisher:
Association for Computing Machinery (ACM)
Publication Type:
Conference Proceeding
Citation:
ACM International Conference Proceeding Series, 2022, pp. 201-213
Issue Date:
2022-12-05
File:
22' ACSAC Shen-2022-Better-together-attaining-the-triad.pdf (Accepted version, 1.05 MB)
Abstract:
Manipulation of local training data and local updates, i.e., the Byzantine poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Many Byzantine-robust aggregation algorithms (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants at the central aggregator. However, they largely suffer from model quality degradation due to the over-removal of local updates, and/or from the inefficiency caused by the expensive analysis of high-dimensional local updates. In this work, we propose AgrAmplifier, which aims to simultaneously attain the triad of robustness, fidelity, and efficiency for FL. AgrAmplifier amplifies the "morality" of local updates to render their maliciousness or benignness clearly distinguishable. It re-organizes the local updates into patches and extracts the most activated features in each patch. This strategy effectively enhances the robustness of the aggregator while retaining high fidelity, as the amplified updates become more resistant to local translations. Furthermore, the significant dimension reduction in the feature space greatly benefits the efficiency of the aggregation. AgrAmplifier is compatible with any existing Byzantine-robust mechanism; in this paper, we integrate it with three mainstream ones: distance-based, prediction-based, and trust-bootstrapping-based mechanisms. Our extensive evaluation against five representative poisoning attacks on five datasets across diverse domains demonstrates consistent enhancement for all of them in terms of robustness, fidelity, and efficiency. We release the source code of AgrAmplifier and our artifacts to facilitate future research in this area: https://github.com/UQ-Trust-Lab/AgrAmplifier.
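The abstract does not spell out the amplification rule, so the sketch below shows only one plausible reading: patch-wise max pooling over a flattened local update, followed by a generic distance-based filter in the reduced space. The names amplify and robust_mean, the patch_size of 16, the max-absolute-value reduction, and the keep ratio are all illustrative assumptions, not the paper's specification.

```python
import numpy as np

def amplify(update, patch_size=16):
    """Patch-wise amplification sketch: split a flattened local update
    into fixed-size patches and keep one "most activated" feature per
    patch (here: the largest-magnitude entry). Illustrative only."""
    flat = np.asarray(update).ravel()
    pad = (-len(flat)) % patch_size          # zero-pad to a whole number of patches
    patches = np.pad(flat, (0, pad)).reshape(-1, patch_size)
    idx = np.abs(patches).argmax(axis=1)     # most activated feature per patch
    return patches[np.arange(len(patches)), idx]

def robust_mean(amped, keep=0.8):
    """Stand-in for a distance-based AGR in the amplified space: drop the
    clients farthest from the coordinate-wise median, average the rest."""
    med = np.median(amped, axis=0)
    dist = np.linalg.norm(amped - med, axis=1)
    kept = np.argsort(dist)[: max(1, int(keep * len(amped)))]
    return amped[kept].mean(axis=0)

# Toy usage: 10 clients with 100k-parameter updates shrink to a
# 6,250-dimensional amplified space before aggregation.
updates = [np.random.randn(100_000) for _ in range(10)]
amped = np.stack([amplify(u) for u in updates])
aggregate = robust_mean(amped)
```

In AgrAmplifier itself the amplified updates feed distance-based, prediction-based, or trust-bootstrapping-based AGRs; the median-anchored filter above merely stands in for the distance-based family, and the dimension reduction it operates on is what the abstract credits for the efficiency gain.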