

White box radial basis function classifiers with component selection for clinical prediction models

Van Belle, V and Lisboa, P (2013) White box radial basis function classifiers with component selection for clinical prediction models. Artificial Intelligence in Medicine, 60 (1). pp. 53-64. ISSN 0933-3657

AIIM-D-12-00314R3.pdf - Accepted Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.



Objective: To propose a new flexible and sparse classifier that results in interpretable decision support systems.

Methods: Support vector machines (SVMs) for classification are very powerful methods for obtaining classifiers for complex problems. Although the performance of these methods is consistently high, and non-linearities and interactions between variables can be handled efficiently when using non-linear kernels such as the radial basis function (RBF) kernel, their use in domains where interpretability is an issue is hampered by their lack of transparency. Many feature selection algorithms have been developed to allow for some interpretation, but the impact of the individual input variables on the prediction still remains unclear. Alternative models using additive kernels are restricted to main effects, reducing their usefulness in many applications. This paper proposes a new approach to expand the RBF kernel into interpretable and visualizable components, including main and two-way interaction effects. In order to obtain a sparse model representation, an iterative ℓ1-regularized parametric model using the interpretable components as inputs is proposed.

Results: Results on toy problems illustrate the ability of the method to select the correct contributions and an improved performance over standard RBF classifiers in the presence of irrelevant input variables. For a 10-dimensional XOR problem, an SVM using the standard RBF kernel obtains an area under the receiver operating characteristic curve (AUC) of 0.947, whereas the proposed method achieves an AUC of 0.997 and additionally identifies the relevant components. In a second 10-dimensional artificial problem, where the underlying class probability follows a logistic regression model, an SVM with the RBF kernel results in an AUC of 0.975, as opposed to 0.994 for the presented method. The proposed method is applied to two benchmark datasets: the Pima Indians diabetes dataset and the Wisconsin breast cancer dataset. The AUC is in both cases comparable to that of the standard method (0.826 versus 0.826, and 0.990 versus 0.996) and to values reported in the literature. The selected components are consistent with those found by different approaches reported in other work. However, this method is additionally able to visualize the effect of each of the components, allowing interpretation of the learned logic by experts in the application domain.

Conclusions: This work proposes a new method to obtain flexible and sparse risk prediction models. The proposed method performs as well as a support vector machine using the standard RBF kernel, but has the additional advantage that the resulting model can be interpreted by experts in the application domain. © 2013 Elsevier B.V.
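As a simplified illustration of the general idea, not the authors' exact algorithm, the sketch below builds candidate components (main effects plus all two-way products) for a 10-dimensional XOR toy problem and lets an ℓ1-regularized logistic regression select the relevant ones. The paper's actual components are derived from an expansion of the RBF kernel rather than raw products, and all dataset sizes and regularization settings here are illustrative assumptions.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 10))      # 10 inputs; only the first two matter
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)  # XOR of the signs of x0 and x1

# Candidate components: 10 main effects plus all 45 two-way products
# (a crude stand-in for the paper's RBF-kernel components).
pairs = list(combinations(range(10), 2))
Z = np.hstack([X] + [(X[:, i] * X[:, j])[:, None] for i, j in pairs])

Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.5, random_state=0)

# The l1 penalty drives the weights of irrelevant components to exactly zero,
# yielding a sparse, inspectable model.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(Z_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(Z_te)[:, 1])
nonzero = np.flatnonzero(clf.coef_)              # indices of surviving components
print(f"AUC: {auc:.3f}")
print("selected components:", nonzero)           # index 10 is the x0*x1 product
```

Because the XOR label is determined by the sign of the product x0·x1, the interaction component at index 10 separates the classes, whereas a model restricted to main effects cannot; this is the motivation for going beyond additive kernels.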

Item Type: Article
Uncontrolled Keywords: 08 Information And Computing Sciences, 09 Engineering
Subjects: Q Science > QA Mathematics
Divisions: Applied Mathematics (merged with Comp Sci 10 Aug 20)
Publisher: Elsevier
Date Deposited: 08 Oct 2015 08:49
Last Modified: 04 Sep 2021 13:55
DOI or ID number: 10.1016/j.artmed.2013.10.001
URI: https://researchonline.ljmu.ac.uk/id/eprint/2137