
Towards interpretable machine learning for clinical decision support

Walters, B, Ortega Martorell, S, Olier-Caparroso, I and Lisboa, P (2022) Towards interpretable machine learning for clinical decision support. In: Proceedings of the International Joint Conference on Neural Networks. (International Joint Conference on Neural Networks, Padua, Italy).

Text: WCCI_2022_Camera_Ready.pdf - Accepted Version (528kB)

Abstract

A major challenge in delivering reliable and trustworthy computational intelligence for practical applications in clinical medicine is interpretability. This aspect of machine learning is a major distinguishing factor compared with traditional statistical models for the stratification of patients, which typically use rules or a risk score identified by logistic regression.
We show how functions of one and two variables can be extracted from pre-trained machine learning models using anchored Analysis of Variance (ANOVA) decompositions. This enables complex interaction terms to be filtered out by aggressive regularisation using the Least Absolute Shrinkage and Selection Operator (LASSO), resulting in a sparse model with performance comparable to, or even better than, the original pre-trained black box.
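
A minimal sketch of how such components might be extracted and sparsified, assuming a fitted classifier whose score is exposed on the logit scale; the anova_components and sparsify helpers, the anchor choice and the L1 settings are illustrative assumptions, not the paper's exact pipeline:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def anova_components(f, anchor, X):
        # Anchored ANOVA: vary one input (or a pair of inputs) while holding
        # the remaining inputs at the anchor point, then subtract the
        # lower-order terms so each component is centred at the anchor.
        n, d = X.shape
        f0 = f(anchor[None, :])[0]                      # constant term
        F1 = np.zeros((n, d))                           # first-order effects
        for i in range(d):
            Xi = np.tile(anchor, (n, 1))
            Xi[:, i] = X[:, i]
            F1[:, i] = f(Xi) - f0
        F2, pairs = [], []                              # second-order effects
        for i in range(d):
            for j in range(i + 1, d):
                Xij = np.tile(anchor, (n, 1))
                Xij[:, [i, j]] = X[:, [i, j]]
                F2.append(f(Xij) - f0 - F1[:, i] - F1[:, j])
                pairs.append((i, j))
        return f0, F1, np.column_stack(F2), pairs

    def sparsify(F1, F2, y, C=0.05):
        # Aggressive L1 regularisation over the stacked components keeps only
        # a sparse set of univariate and pairwise terms (illustrative settings).
        Z = np.hstack([F1, F2])
        lasso = LogisticRegression(penalty="l1", solver="saga", C=C, max_iter=5000)
        lasso.fit(Z, y)
        return np.flatnonzero(lasso.coef_[0])           # indices of retained terms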
Besides being theoretically well-founded, the decomposition of a black-box multivariate probabilistic binary classifier into a Generalised Additive Model (GAM) comprising a linear combination of non-linear functions of one or two variables provides full interpretability. In effect, this extends logistic regression into non-linear modelling without the need for manual intervention by way of variable transformations, using the pre-trained model as a seed.
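
Concretely, the resulting model takes the familiar additive form on the logit scale (the notation below is assumed here for illustration):

    logit p(x) ≈ β0 + Σ_i β_i f_i(x_i) + Σ_{i<j} β_ij f_ij(x_i, x_j)

where each f_i and f_ij is a non-linear shape function recovered from the pre-trained model, and most of the β coefficients are driven to zero by the LASSO penalty, leaving a small set of directly plottable terms.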
The application of the proposed methodology to existing machine learning models is demonstrated using the Multi-Layer Perceptron (MLP), Support Vector Machine (SVM), Random Forests (RF) and Gradient Boosting Machines (GBM), to model a data frame from a well-known benchmark dataset available from PhysioNet, the Medical Information Mart for Intensive Care (MIMIC-III). Both the classification performance and the plausibility of the clinical interpretation compare favourably with other state-of-the-art sparse models, namely Sparse Additive Models (SAM) and the Explainable Boosting Machine (EBM).
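
End-to-end use on pre-trained models could look roughly like the following sketch, which reuses the anova_components and sparsify helpers above; make_classification stands in for the MIMIC-III data frame (the real data requires credentialed PhysioNet access), and all model choices and hyperparameters are illustrative:

    import numpy as np
    from scipy.special import logit
    from sklearn.datasets import make_classification     # stand-in for the MIMIC-III frame
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    anchor = np.median(X_tr, axis=0)                      # e.g. feature medians as the anchor point

    black_boxes = {
        "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
        "SVM": SVC(probability=True, random_state=0),
        "RF":  RandomForestClassifier(n_estimators=300, random_state=0),
        "GBM": GradientBoostingClassifier(random_state=0),
    }

    for name, model in black_boxes.items():
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        # expose the pre-trained model on the logit scale and decompose it
        f = lambda A, m=model: logit(np.clip(m.predict_proba(A)[:, 1], 1e-6, 1 - 1e-6))
        f0, F1, F2, pairs = anova_components(f, anchor, X_tr)
        kept = sparsify(F1, F2, y_tr)
        print(f"{name}: AUC={auc:.3f}, retained additive terms={len(kept)}")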

Item Type: Conference or Workshop Item (Paper)
Additional Information: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Uncontrolled Keywords: Interpretability; Generalised Additive Neural Networks; Self-Explaining Neural Networks; Sparse Additive Model; Machine explanation; Multi-Layer Perceptron
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Divisions: Computer Science & Mathematics
Publisher: IEEE
SWORD Depositor: A Symplectic
Date Deposited: 27 May 2022 08:54
Last Modified: 12 Oct 2022 10:02
DOI or ID number: 10.1109/IJCNN55064.2022.9892114
URI: https://researchonline.ljmu.ac.uk/id/eprint/16935