An Explanation Module for Deep Neural Networks Facing Multivariate Time Series Classification

Springer International Publishing
Publication Type: Conference Proceeding
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2022, 13151 LNAI, pp. 3-14
Deep neural networks currently achieve state-of-the-art performance on many multivariate time series classification (MTSC) tasks, which are crucial in a variety of real-world applications. However, the black-box nature of deep learning models prevents humans from gaining insight into the internal mechanisms and decisions of classifiers. Existing explainability research generally requires building separate explanation models that work alongside the deep learning models or post-process their results, calling for additional development effort. We propose a novel explanation module that can be plugged into existing deep neural networks to explore variable importance for explaining MTSC. We evaluate our module with popular deep neural networks on both real-world and synthetic datasets to demonstrate its effectiveness in generating explanations for MTSC. Our experiments also show that the module improves the classification accuracy of existing models, owing to its comprehensive incorporation of temporal features.
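To illustrate the general idea of a pluggable variable-importance module, the sketch below shows one common way such a component can be realized: a layer that learns one score per input variable, reweights each variable's channel before the downstream classifier, and exposes the normalized weights as an explanation. This is a minimal, hypothetical NumPy sketch for illustration only; the class name, the softmax-based weighting, and the fixed (untrained) scores are assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

class VariableImportanceModule:
    """Hypothetical sketch of a pluggable variable-importance layer.

    Holds one score per input variable; the softmax of the scores
    reweights each variable's time series before it reaches the
    downstream classifier, and the same weights double as a
    per-variable explanation. In a real model the scores would be
    trained jointly with the classifier.
    """
    def __init__(self, n_variables, seed=0):
        rng = np.random.default_rng(seed)
        # Stand-in for trainable parameters (assumption for this sketch).
        self.scores = rng.normal(size=n_variables)

    def importance(self):
        # Normalized weights: one importance value per variable, summing to 1.
        return softmax(self.scores)

    def __call__(self, x):
        # x: (n_variables, n_timesteps) multivariate series.
        w = self.importance()
        # Scale each variable's whole channel by its importance weight.
        return x * w[:, None]

# Toy usage: 3 variables, 5 time steps.
x = np.ones((3, 5))
mod = VariableImportanceModule(n_variables=3)
weighted = mod(x)          # reweighted input for the classifier
ranking = mod.importance() # explanation: per-variable importance
```

Because the module only rescales its input, it can in principle sit in front of any existing MTSC network without changing that network's interface, which matches the "pluggable" property the abstract describes.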