Density-based reliable and robust explainer for counterfactual explanation
- Publisher: Pergamon-Elsevier Science Ltd
- Publication Type: Journal Article
- Citation: Expert Systems with Applications, 2023, 226
- Issue Date: 2023-09-15
Closed Access
Filename | Description | Size
---|---|---
26Density-based reliable and robust explainer for counterfactual explanation.pdf | Published version | 1.93 MB
This item is closed access and not available.
As an essential post-hoc explanatory method, counterfactual explanation enables people to understand and react to machine learning models. Work on counterfactual explanation generally aims at generating high-quality results, i.e., explanations that are close to the original sample and detailed enough for users. However, a counterfactual explainer trained on data is fragile in practice: even a small perturbation to the samples can lead to large differences in the explanation. In this work, we address this issue by analyzing and formalizing the robustness of counterfactual explainers under practical considerations. An explainer is considered robust if it generates relatively stable counterfactuals under various settings. To this end, we propose a robust and reliable explainer that searches for counterfactuals of classifier predictions using density gravity. To evaluate performance, we provide metrics that allow comparison of our proposed explainer with others, and we further demonstrate the importance of density in enhancing robustness. Extensive experiments on real-world datasets show that our method offers a significant improvement in explainer reliability and stability.
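The paper itself is closed access, so the exact "density gravity" algorithm is not reproduced here. As a rough illustration of the general idea the abstract describes — steering a counterfactual search toward dense regions of the training data so the resulting explanation stays plausible and stable — the following is a minimal sketch. All function names, the kernel density estimate, the candidate-sampling scheme, and the toy linear classifier are illustrative assumptions, not the authors' method:

```python
import numpy as np

def density(x, data, bandwidth=0.5):
    # Simple Gaussian kernel density estimate at point x over reference data.
    diffs = data - x
    return np.mean(np.exp(-np.sum(diffs ** 2, axis=1) / (2 * bandwidth ** 2)))

def find_counterfactual(x, predict, data, target=1, step=0.05, iters=500):
    """Greedy search: among random candidate steps, prefer ones that lie in
    dense data regions (plausibility) or already flip the classifier to
    `target`; stop as soon as the prediction flips."""
    rng = np.random.default_rng(0)
    cur = x.copy()
    for _ in range(iters):
        if predict(cur) == target:
            return cur
        cands = cur + step * rng.normal(size=(20, cur.size))
        # Score: data density plus a bonus for candidates in the target class.
        scores = [density(c, data) + (predict(c) == target) for c in cands]
        cur = cands[int(np.argmax(scores))]
    return cur

# Toy setup: 2D data clustered near (1, 1), linear decision boundary x0 + x1 > 1.
data = np.random.default_rng(1).normal(loc=[1.0, 1.0], scale=0.3, size=(200, 2))
predict = lambda p: int(p[0] + p[1] > 1.0)
x = np.array([0.0, 0.0])                      # currently predicted class 0
cf = find_counterfactual(x, predict, data)    # counterfactual with class 1
```

Because each step is pulled toward where the data actually lives, the returned counterfactual tends to land in a well-populated region rather than just barely across the decision boundary, which is the intuition behind using density to improve robustness.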