Effects of Fairness and Explanation on Trust in Ethical AI

Publisher:
Springer International Publishing AG
Publication Type:
Chapter
Citation:
Machine Learning and Knowledge Extraction, LNCS vol. 13480, 2022, pp. 51-67
Issue Date:
2022-01-01
Abstract:
AI ethics has been a much-discussed topic in recent years, and fairness and explainability are two important ethical principles for trustworthy AI. This paper investigates the impact of AI explainability and fairness on user trust in AI-assisted decisions. To this end, a user study was conducted that simulated AI-assisted decision making in a health insurance scenario. The results show that fairness affects user trust only when the fairness level is low, with a low fairness level reducing user trust. Adding explanations, however, helped users increase their trust in AI-assisted decision making. These findings indicate that the use of AI explanations and fairness statements in AI applications is complex: both the type of explanation and the level of fairness introduced need to be considered. This is a strong motivation for further work.