A new method to promptly evaluate spatial earthquake probability mapping using an explainable artificial intelligence (XAI) model

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Gondwana Research, 2022
Issue Date:
2022-01-01
Machine learning (ML) models have been extensively used in several geological applications. Owing to the increase in model complexity, interpreting their outputs becomes quite challenging. Shapley additive explanation (SHAP) measures the importance of each input attribute to the model's output. This study implemented SHAP to estimate earthquake probability using two different types of ML approaches, namely, an artificial neural network (ANN) and random forest (RF). The two algorithms were first compared to evaluate the importance and effect of the input factors. SHAP was then applied to interpret the output of the models designed for earthquake probability estimation. This study aims not only to achieve high accuracy in probability estimation but also to rank the input parameters and select appropriate features for classification. SHAP was tested on earthquake probability assessment using eight factors for the Indian subcontinent. The models obtained an overall accuracy of 96% for ANN and 98% for RF. For the ANN, SHAP identified the highest-contributing factors as epicenter distance, depth density, intensity variation, and magnitude density, in that order. Finally, the authors argue that an explainable artificial intelligence (AI) model can aid earthquake probability estimation, which in turn opens avenues for building a transferable AI model.
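The core idea behind SHAP, as described in the abstract, is to attribute a model's prediction to its input features via Shapley values: each feature's contribution is its average marginal effect over all feature subsets. The sketch below computes exact Shapley values for a toy, hand-written "probability" model; the factor names and weights are illustrative assumptions, not the paper's actual model (the study used ANN and RF models over eight factors).

```python
from itertools import combinations
from math import factorial

# Hypothetical scoring model over three illustrative factors, each scaled
# to [0, 1]: (epicenter_distance, depth_density, intensity_variation).
# The weights are made up for demonstration only.
def model(x):
    return 0.5 * (1 - x[0]) + 0.3 * x[1] + 0.2 * x[2]

# Reference input used when a feature is treated as "absent".
BASELINE = (0.5, 0.5, 0.5)

def shapley_values(x, baseline=BASELINE):
    """Exact Shapley values: each feature's weighted average marginal
    contribution over all subsets of the remaining features, with absent
    features replaced by their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

point = (0.1, 0.9, 0.6)
phi = shapley_values(point)

# Efficiency property: the attributions sum to the gap between the
# prediction at `point` and the prediction at the baseline.
assert abs(sum(phi) - (model(point) - model(BASELINE))) < 1e-9
```

This exhaustive enumeration is only feasible for a handful of features; in practice, libraries such as the `shap` package use model-specific approximations (e.g. tree-based or sampling estimators) to scale the same idea to ANN and RF models with many inputs.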