An Interpretable Deep Learning Framework for Health Monitoring Systems: A Case Study of Eye State Detection using EEG Signals
- Publisher:
- IEEE
- Publication Type:
- Conference Proceeding
- Citation:
- 2020 IEEE Symposium Series on Computational Intelligence (SSCI 2020), 2021, pp. 211-218
- Issue Date:
- 2021-01-05
Closed Access
Filename | Description | Size
---|---|---
An_Interpretable_Deep_Learning_Framework_for_Health_Monitoring_Systems_A_Case_Study_of_Eye_State_Detection_using_EEG_Signals.pdf | Published version | 774.42 kB
This item is closed access and not available.
Effective monitoring and early detection of deterioration in patients play an essential role in healthcare, helping to minimize emergency encounters, shorten hospital stays, and reduce patient re-admission rates. Cutting-edge methods in artificial intelligence (AI) can significantly improve these outcomes. However, the difficulty of interpreting such black-box models poses a serious problem for the healthcare industry: when selecting a model, one must often decide whether to sacrifice accuracy for interpretability. In this paper, we propose an interpretable framework capable of real-time prediction. To demonstrate the framework's predictive power, we present a case study on eye state detection from electroencephalogram (EEG) signals, investigating how a deep neural network (DNN) model makes a prediction and how that prediction can be interpreted. The promising results suggest that more advanced models can be employed in healthcare solutions without sacrificing interpretability.
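The abstract does not specify which interpretation technique the authors use, but the general idea of explaining a DNN's eye-state prediction can be illustrated with a simple post-hoc method: input-gradient saliency, which scores each EEG channel by how strongly it influences the model's output. The sketch below is a minimal, self-contained illustration with a toy randomly initialized network and a 14-channel input (a common EEG eye-state layout); it is not the paper's actual model or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one EEG sample: 14 channels (hypothetical layout, not the paper's data)
n_features = 14
x = rng.standard_normal(n_features)

# Tiny one-hidden-layer network with random weights (illustrative only, untrained)
W1 = rng.standard_normal((16, n_features)) * 0.1
b1 = np.zeros(16)
w2 = rng.standard_normal(16) * 0.1
b2 = 0.0

def forward(x):
    h = np.tanh(W1 @ x + b1)   # hidden activations
    logit = w2 @ h + b2        # eye-open vs. eye-closed score
    return logit, h

def saliency(x):
    """Gradient of the logit w.r.t. the input: a simple post-hoc explanation."""
    _, h = forward(x)
    dh = w2 * (1.0 - h ** 2)   # backprop through tanh
    grad = W1.T @ dh           # d(logit) / d(x), one value per channel
    return np.abs(grad)

s = saliency(x)
top = np.argsort(s)[::-1][:3]
print("most influential channels:", top)
```

Ranking channels by saliency gives a per-prediction explanation ("which electrodes drove this decision"), which is one way a framework can pair a DNN's accuracy with human-readable output.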