Explainable model of deep learning for outcomes prediction of in-hospital cardiac arrest patients

Abstract

Introduction:

Deep learning has outperformed traditional methods in predicting healthcare outcomes. However, deep learning models struggle with explainability and are often considered black boxes. This article demonstrates that the output of Shapley additive explanations (SHAP) analysis can provide meaningful insight into a model's predictions.

Methods:
Starting from the Taiwan National Health Insurance Research Database, we selected adults (>20 years) who experienced in-hospital cardiac arrest between 2003 and 2010, and built a dataset from de-identified Emergency Department (ED) and hospitalization claims. The final dataset contained 169,287 claims, randomly split into three sections: 70% training, 15% validation, and 15% test. Two outcomes were chosen: 30-day readmission and 30-day mortality. The deep learning system was constructed with the taxonomy mapping system Text2Node and a multilevel hierarchical model based on the Long Short-Term Memory (LSTM) architecture. An explainable model was then constructed by SHAP analysis.
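As a rough illustration of this pipeline, the sketch below trains a toy LSTM classifier on synthetic claim sequences with a 70/15/15 split and attaches a SHAP explainer. This is not the authors' code: the tensor shapes, the Keras model, and the choice of shap.GradientExplainer are all assumptions made for the example.

```python
# Illustrative sketch only: synthetic data, a toy LSTM, and a SHAP
# explainer, standing in for the paper's Text2Node + hierarchical
# LSTM system. All shapes and hyperparameters are assumptions.
import numpy as np
import shap
import tensorflow as tf
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12, 8)).astype("float32")  # (claims, visits, features)
y = rng.integers(0, 2, size=1000)                     # e.g. 30-day mortality label

# 70% train, 15% validation, 15% test, as in the paper
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(12, 8)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=2, verbose=0)

# Gradient-based SHAP values for the test set, using a training
# subsample as the background distribution
explainer = shap.GradientExplainer(model, X_train[:100])
shap_values = explainer.shap_values(X_test[:50])
```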

Results:
In the SHAP analysis, overall feature weights on the full test dataset were generated to assess the model's explainability. For predicting 30-day mortality, medication codes had the strongest impact, with roughly 10% of the weight, followed by diagnosis codes and test codes with roughly 6.5% and 6%, respectively (Fig. 1a). For 30-day readmission, hospital stay had the largest average impact on the prediction, followed by medications and total cost (Fig. 1b). Notably, hospital stay over the preceding months carried significantly more weight for readmission than for mortality prediction.
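A minimal sketch of how such per-feature weights might be derived, assuming illustrative feature-group names and a synthetic (samples, time steps, features) array standing in for the explainer output from the sketch above:

```python
# Hedged sketch: average |SHAP| over samples and time steps, then
# normalize, to get Fig. 1-style feature-group weights. Names and
# the stand-in SHAP array are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
shap_values = rng.normal(size=(50, 12, 8))  # stand-in for explainer output
feature_names = ["medication", "diagnosis", "test", "hospital_stay",
                 "total_cost", "age", "sex", "procedure"]  # illustrative

flat = np.abs(shap_values).reshape(-1, shap_values.shape[-1])
weights = flat.mean(axis=0)
weights /= weights.sum()  # normalize so weights read as fractions of impact

for name, w in sorted(zip(feature_names, weights), key=lambda p: -p[1]):
    print(f"{name:14s} {w:.1%}")
```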

Conclusion:
To apply deep learning models in a clinical setting, model explainability is required, both to justify a model's output and to signal the risk of the worst outcomes. We found that SHAP analysis provides a meaningful explanation when applied to a deep learning model. Future research should be performed to generate explanations at the level of individual medical codes.

