A Feasible Situation Awareness-Based Evaluation Framework for Quality of Machine Learning Explanations
- Publication Type: Thesis
- Issue Date: 2024
This item is open access.
Explainable Artificial Intelligence (XAI) has emerged as a critical domain that aims to enhance the transparency and interpretability of advanced machine learning (ML) models. As demand grows to deploy increasingly complex ML models across a broader range of industries, especially safety-sensitive ones such as finance and medicine, explanations of these models have attracted growing attention. A lack of methodology for in-context analysis of user needs, combined with low-level evaluations of explanations, remains the main obstacle preventing professionals in such industries from using advanced ML models confidently.
Taking actuarial insurance pricing, one such safety-sensitive field, as a case study, this research addresses a significant gap by focussing on user-centric evaluation of XAI explanations. The study unfolds in two main parts.
The first study applies the Actuarial Control Cycle (ACC) and the Goal-Directed Task Analysis (GDTA) framework to conduct a detailed analysis of user needs in insurance pricing. Focussing on the prediction of claim counts for Motor Third Party Liability Insurance (MTPL) using Generalised Linear Models (GLMs), this study establishes a robust foundation for understanding the nuanced requirements of actuarial professionals in complex pricing scenarios.
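To make the modelling setting concrete, the sketch below fits a Poisson GLM for claim counts with policy exposure as an offset, the standard formulation for claim-frequency models. It is a minimal illustration and not the thesis code: the toy data and rating factors (`DriverAge`, `VehPower`) are hypothetical placeholders for typical MTPL variables.

```python
# Minimal sketch (assumed, not the thesis code): a Poisson GLM for MTPL claim counts.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical policy-level data: one row per policy.
policies = pd.DataFrame({
    "ClaimNb":   [0, 1, 0, 2, 0, 1, 0, 0],                   # observed claim counts
    "Exposure":  [1.0, 0.5, 1.0, 0.8, 0.3, 1.0, 0.7, 1.0],   # policy years
    "DriverAge": [45, 23, 60, 31, 52, 38, 27, 49],
    "VehPower":  [6, 9, 5, 7, 6, 8, 10, 5],
})

# Poisson GLM with log link; `exposure` enters the model as a log-offset,
# so the fitted coefficients describe claim frequency per policy year.
model = smf.glm(
    "ClaimNb ~ DriverAge + VehPower",
    data=policies,
    family=sm.families.Poisson(),
    exposure=policies["Exposure"],
).fit()

print(model.summary())
```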
Building on the insights gained from the first study, the second study evaluates the effectiveness of XAI explanations, particularly those derived from SHAP values. A questionnaire completed by participating users, grounded in Endsley's 1995 Model of Situation Awareness, provides quantitative metrics for assessing users' Situation Awareness (SA). This study delves into the user-centric evaluation of XAI techniques in the specific context of insurance pricing, contributing to the evolving landscape of XAI.
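As a rough illustration of the kind of explanation being evaluated, the sketch below computes SHAP values for the hypothetical frequency model above using the model-agnostic Kernel SHAP estimator. The `model` and `policies` objects carry over from the previous sketch and are assumptions for illustration, not artefacts of the thesis.

```python
# Minimal sketch (assumed): SHAP attributions for the hypothetical GLM above.
import pandas as pd
import shap

features = policies[["DriverAge", "VehPower"]]

# Kernel SHAP treats the fitted GLM as a black box and attributes each
# predicted claim frequency to the individual rating factors.
explainer = shap.KernelExplainer(
    lambda X: model.predict(pd.DataFrame(X, columns=features.columns)),
    features,
)
shap_values = explainer.shap_values(features)

# One row per policy, one column per rating factor; positive values push the
# predicted frequency above the baseline (average) prediction.
print(pd.DataFrame(shap_values, columns=features.columns))
```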
Synthesising the results of both studies, the research challenges traditional assumptions about the limits of explaining ML models and highlights the importance of aligning XAI techniques with user needs, fostering transparency, trustworthiness, and effective decision-making in the intricate field of actuarial science. The discussion underscores implications for refining XAI methodologies, improving explanations, and increasing user satisfaction. The study acknowledges its limitations and challenges while emphasising the need for an iterative cycle of evaluating the effectiveness of XAI, enabling ongoing collaboration between model developers and users to refine explanations and promote a symbiotic dynamic within the actuarial control cycle.