An explainable AI (XAI) model for landslide susceptibility modeling

Publisher:
Elsevier
Publication Type:
Journal Article
Citation:
Applied Soft Computing, 2023, 142, pp. 110324
Issue Date:
2023-07-01
Landslides are among the most devastating natural hazards, severely impacting human lives and damaging property and infrastructure. Landslide susceptibility maps, which identify the regions within a given area at greater risk of landslide occurrence, are a key tool for effective mitigation. Research in this field has grown immensely, ranging from quantitative to deterministic approaches, with a recent surge in machine learning (ML)-based computational models. The development of ML models, in particular, has undergone a meteoric rise in the last decade, contributing to the successful development of accurate susceptibility maps. However, despite their success, these models are rarely used by stakeholders owing to their "black box" nature. Hence, it is crucial to explain their results, thus providing greater transparency for the use of such models. To address this gap, the present work introduces the use of an ML-based explainable algorithm, SHapley Additive exPlanations (SHAP), for landslide susceptibility modeling. A convolutional neural network model was applied to the CheongJu region in South Korea. A total of 519 landslide locations were examined with 16 landslide-affecting variables, of which 70% were used for training and 30% for testing, and the model achieved an accuracy of 89%. For comparison, a Support Vector Machine model was also evaluated, achieving an accuracy of 84%. The SHAP plots showed variations in feature interactions for both landslide and non-landslide locations, thus providing more clarity as to how the model arrives at a specific result. The SHAP dependence plots explained the relationship between altitude and slope, showing that susceptibility is negatively related to altitude and positively related to slope.
This is the first use of an explainable ML model in landslide susceptibility modeling, and we argue that future works should include aspects of explainability to open up the possibility of developing a transferable artificial intelligence model.
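To illustrate the core idea behind SHAP, the sketch below computes exact Shapley values for a toy additive "susceptibility" model over three hypothetical features (slope, altitude, rainfall). The model, feature values, and baseline are illustrative assumptions, not taken from the paper; real SHAP tooling (e.g. the `shap` library) approximates these values efficiently for large models such as the CNN used here. The key property shown is additivity: the per-feature contributions sum to the difference between the model output for the observation and for the baseline.

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for each feature of x.
    'Absent' features are replaced by their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Hypothetical linear model: weights for slope, altitude, rainfall (assumed values)
model = lambda v: 0.5 * v[0] - 0.2 * v[1] + 0.3 * v[2]
x = [30.0, 400.0, 120.0]    # hypothetical observation
base = [10.0, 600.0, 80.0]  # hypothetical baseline (expected feature values)

phi = shapley_values(model, x, base)
# Additivity: contributions sum to model(x) - model(baseline)
print(phi, sum(phi), model(x) - model(base))
```

For a linear model each Shapley value reduces to weight times feature deviation from baseline, so the altitude term (negative weight) pulls susceptibility down as altitude rises, mirroring the negative altitude relationship reported in the abstract.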