Concept-Guided Interpretable Federated Learning

Publisher:
Springer Nature
Publication Type:
Conference Proceeding
Citation:
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2024, 14472 LNAI, pp. 160-172
Issue Date:
2024-01-01
Filename:
paper_96.pdf
Description:
Published version
Size:
966.61 kB
Format:
Adobe PDF
Interpretable federated learning is an emerging challenge: identifying explainable characteristics of each client-specific personalized model in a federated learning system. This paper proposes a novel federated concept bottleneck (FedCBM) method that introduces human-friendly concepts for client-wise model interpretation. Specifically, given a set of pre-defined concepts, all clients collaboratively train shared Concept Activation Vectors (CAVs) in the federated setting. The shared concepts act as an information carrier for aligning client-specific representations, and are also used to improve the model's accuracy through a supervised learning loss. Experimental analysis demonstrates the effectiveness of our method and of concept-level reasoning.
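The abstract describes the mechanism only at a high level: each client keeps its own backbone, the concept layer (the CAVs) is trained collaboratively and shared across clients, and concept supervision enters the training loss alongside the task loss. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; it assumes a linear CAV layer, FedAvg-style aggregation of only that layer, and hypothetical names (ConceptBottleneckModel, local_update, fed_avg) that do not come from the paper.

```python
# Sketch of a federated concept bottleneck, assuming: a client-specific
# encoder, a shared linear concept layer (the CAVs), and a label head that
# reads only concept scores. All names here are illustrative, not from FedCBM.
import copy
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())  # client-specific
        self.cavs = nn.Linear(64, n_concepts)        # shared Concept Activation Vectors
        self.head = nn.Linear(n_concepts, n_classes) # predicts labels from concepts only

    def forward(self, x):
        c = torch.sigmoid(self.cavs(self.encoder(x)))  # concept scores in [0, 1]
        return c, self.head(c)

def local_update(model, data, epochs=1, lam=0.5, lr=1e-2):
    """One client's round: task loss plus a supervised concept loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, concepts, y in data:
            c_hat, logits = model(x)
            loss = nn.functional.cross_entropy(logits, y) \
                 + lam * nn.functional.binary_cross_entropy(c_hat, concepts)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.cavs.state_dict()  # only the shared concept layer is uploaded

def fed_avg(states):
    """Plain FedAvg over the clients' CAV parameters."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
    return avg

# One communication round over four hypothetical clients with synthetic data:
clients = [ConceptBottleneckModel(16, n_concepts=8, n_classes=3) for _ in range(4)]
data = [[(torch.randn(32, 16),                    # inputs
          torch.rand(32, 8).round(),              # binary concept annotations
          torch.randint(0, 3, (32,)))]            # class labels
        for _ in clients]
shared = fed_avg([local_update(m, d) for m, d in zip(clients, data)])
for m in clients:
    m.cavs.load_state_dict(shared)  # broadcast the shared CAVs back to clients
```

Because the label head sees only concept scores, each client's predictions can be explained in terms of the shared, human-friendly concepts, which is the interpretability property the abstract emphasizes.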