Explainable exclusion in the life insurance using multi-label classifier
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2023 International Joint Conference on Neural Networks (IJCNN), 2023, 2023-June, pp. 1-8
- Issue Date:
- 2023-01-01
Embargoed
| Filename | Description | Size |
|---|---|---|
| Explainable exclusion in the life insurance using multi-label classifier.pdf | Accepted version | 394.53 kB |
This item is currently unavailable due to the publisher's embargo.
To reduce manual tasks and to minimise the risks posed by a customer, many insurance companies have applied artificial intelligence (AI) solutions, including, but not limited to, machine learning (ML) and deep learning (DL). Exclusion analysis is one of the primary tasks in minimising the risks a customer imposes on a life insurance company. Although a few research studies have made this their primary focus, they have yet to provide explainable research for exclusion analysis to assist the underwriting process (UP) using ML/DL methods. Therefore, this paper makes the process of exclusion classification, along with its explainability, its primary concentration, to assist underwriters in understanding the underwriting data taken from customer disclosure information. First, we explore this problem by applying a set of four multi-label classifiers (namely binary relevance, classifier chains, label powerset, and ensemble learning) blended with five ML techniques (namely multinomial Naive Bayes, support vector classifier, logistic regression, random forest, and decision tree), using data provided by one of the leading insurance companies in Australia. Then, we use the best-performing model's classification probability and feature importance as input to an explainable ML system, Shapley additive explanations (SHAP), and introduce the explainability outcome as a quality assurance report (QAR). This paper offers an extensive empirical evaluation comparing different metrics and human underwriters' reviews. Finally, the results demonstrate that the binary relevance algorithm combined with the decision tree classifier outperforms other existing methods for explainable exclusion, providing a better overview of the customer's risk profile.
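The best-performing combination reported above, binary relevance with a decision tree classifier, can be sketched with scikit-learn. This is a minimal illustration only: the features, labels, and thresholds below are synthetic stand-ins, not the paper's proprietary underwriting data, and per-label feature importances are shown as the kind of input a SHAP-based quality assurance report would consume.

```python
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.tree import DecisionTreeClassifier

# Hypothetical disclosure features and exclusion labels (synthetic stand-ins
# for the insurer's data; the real feature set is not public).
rng = np.random.default_rng(0)
X = rng.random((200, 6))
# Three exclusion labels, each derived from a different feature rule.
Y = np.column_stack([
    (X[:, 0] > 0.5).astype(int),
    (X[:, 1] > 0.3).astype(int),
    ((X[:, 2] + X[:, 3]) > 1.0).astype(int),
])

# Binary relevance: fit one independent binary classifier per exclusion label.
clf = MultiOutputClassifier(DecisionTreeClassifier(max_depth=3, random_state=0))
clf.fit(X, Y)

# Per-label classification probabilities for a new applicant
# (a list of arrays, one per label).
proba = clf.predict_proba(X[:1])

# Per-label feature importances, usable as input to a SHAP-style report.
importances = np.vstack([est.feature_importances_ for est in clf.estimators_])
```

Binary relevance treats each exclusion as an independent binary problem, which is why one decision tree per label suffices; classifier chains and label powerset would instead model label dependencies.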