Concept-Guided Interpretable Federated Learning
- Publisher: Springer Nature
- Publication Type: Conference Proceeding
- Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2024, 14472 LNAI, pp. 160-172
- Issue Date: 2024-01-01
Closed Access
Filename | Description | Size
---|---|---
paper_96.pdf | Published version | 966.61 kB
This item is closed access and not available.
Interpretable federated learning is an emerging challenge: identifying explainable characteristics of each client-specific personalized model in a federated learning system. This paper proposes a novel federated concept bottleneck (FedCBM) method that introduces human-friendly concepts for client-wise model interpretation. Specifically, given a set of pre-defined concepts, all clients collaboratively train shared Concept Activation Vectors (CAVs) in the federated setting. The shared concepts serve as the information carrier to align client-specific representations, and are also applied to enhance the model's accuracy under a supervised learning loss. Our experimental analysis demonstrates the effectiveness of the method and its concept-level reasoning.
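The abstract's idea of collaboratively trained CAVs feeding a concept bottleneck can be sketched roughly as follows. This is a minimal illustration, not the paper's actual algorithm: the local update rule, the FedAvg aggregation, and all names (`client_update`, `fed_avg`, the synthetic data) are assumptions introduced here for clarity.

```python
import numpy as np

# Illustrative sketch (not the paper's method): each client locally fits
# concept activation vectors (CAVs) to its own concept annotations, a
# server averages them FedAvg-style, and the shared CAVs act as a
# concept bottleneck: features -> concept scores -> class logits.

rng = np.random.default_rng(0)
N_CLIENTS, D_FEAT, N_CONCEPTS, N_CLASSES = 3, 8, 4, 2

def client_update(cavs, feats, concept_labels, lr=0.1, steps=20):
    """One local round: gradient steps on a least-squares loss so that
    feats @ cavs.T approximates the client's binary concept labels."""
    cavs = cavs.copy()
    for _ in range(steps):
        scores = feats @ cavs.T                       # (n, n_concepts)
        grad = (scores - concept_labels).T @ feats / len(feats)
        cavs -= lr * grad
    return cavs

def fed_avg(client_cavs):
    """Server step: average the clients' locally updated CAVs."""
    return np.mean(client_cavs, axis=0)

# Synthetic per-client data sharing one ground-truth concept structure.
true_cavs = rng.normal(size=(N_CONCEPTS, D_FEAT))
clients = []
for _ in range(N_CLIENTS):
    X = rng.normal(size=(32, D_FEAT))
    C = (X @ true_cavs.T > 0).astype(float)           # binary concept labels
    clients.append((X, C))

cavs = np.zeros((N_CONCEPTS, D_FEAT))
for _ in range(30):                                   # federated rounds
    updates = [client_update(cavs, X, C) for X, C in clients]
    cavs = fed_avg(updates)

# Concept bottleneck prediction: concept scores feed a linear head.
W_head = rng.normal(size=(N_CLASSES, N_CONCEPTS))
X_test = rng.normal(size=(5, D_FEAT))
logits = (X_test @ cavs.T) @ W_head.T
print(logits.shape)                                   # (5, 2)
```

The sketch keeps only the structural idea from the abstract: concepts are the shared, human-interpretable quantity exchanged between clients, while the class head on top of concept scores is what makes the model's reasoning inspectable concept by concept.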