The importance of interpreting machine learning models for blood glucose prediction in diabetes: an analysis using SHAP

Prendin F.; Pavan J.; Cappon G.; Del Favero S.; Sparacino G.; Facchinetti A.
2023

Abstract

Machine learning has become a popular tool for learning models of complex dynamics from biomedical data. In Type 1 Diabetes (T1D) management, these models are increasingly being integrated into decision support systems (DSS) to forecast glucose levels and, accordingly, provide preventive therapeutic suggestions such as corrective insulin boluses (CIB). Typically, models are chosen based on their prediction accuracy. However, since patient safety is a concern in this application, the algorithm should also be physiologically sound and its outcome should be explainable. This paper discusses the importance of using tools to interpret the output of black-box models in T1D management by presenting a case study on the selection of the best prediction algorithm to integrate into a DSS for CIB suggestion. By retrospectively “replaying” real patient data, we show that two long short-term memory (LSTM) neural networks (named p-LSTM and np-LSTM) with similar prediction accuracy can lead to different therapeutic decisions. An analysis with SHAP, a tool for explaining the output of black-box models, unambiguously shows that only p-LSTM learnt the physiological relationship between inputs and glucose prediction, and should therefore be preferred. This is verified by showing that, when embedded in the DSS, only p-LSTM can improve patients’ glycemic control.
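No code accompanies this record, but the kind of audit the abstract describes (using SHAP to check whether a glucose forecaster has learnt physiologically plausible input-output relationships) can be illustrated. The sketch below is not the authors' implementation: it uses SHAP's model-agnostic KernelExplainer on a hypothetical stand-in predictor, and the feature names, the `glucose_model` function, and the synthetic data are all assumptions made for illustration only.

```python
# Minimal sketch of a SHAP-based physiological sanity check on a glucose
# forecaster. Everything named here is hypothetical; in practice the stand-in
# function would be replaced by the trained LSTM's predict method.
import numpy as np
import shap

rng = np.random.default_rng(0)

# Hypothetical inputs: recent CGM readings (standardized), insulin, carbs.
feature_names = ["CGM(t)", "CGM(t-5)", "CGM(t-10)", "insulin", "carbs"]
X_background = rng.normal(size=(50, len(feature_names)))  # reference set
X_explain = rng.normal(size=(10, len(feature_names)))     # instances to explain

def glucose_model(X):
    # Stand-in for a trained model's predict function, mapping input
    # features to a glucose forecast in mg/dL (purely illustrative).
    return 120.0 + 25.0 * X[:, 0] - 5.0 * X[:, 3] + 3.0 * X[:, 4]

# KernelExplainer approximates Shapley values for any black-box predictor,
# so it works regardless of the underlying architecture (e.g., an LSTM).
explainer = shap.KernelExplainer(glucose_model, X_background)
shap_values = explainer.shap_values(X_explain)

# The check suggested by the abstract: insulin should push the forecast
# down (negative mean SHAP value), carbohydrates up (positive).
print("mean SHAP per feature:",
      dict(zip(feature_names, np.round(shap_values.mean(axis=0), 3))))
```

A model whose SHAP attributions contradict these physiological expectations (for instance, insulin raising the predicted glucose) would be suspect even if its prediction accuracy matched that of a sounder model, which is the distinction the paper draws between p-LSTM and np-LSTM.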
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3507758
Citations
  • PMC: 1
  • Scopus: 4
  • Web of Science (ISI): 5