Partial Responses: Unlocking Black Box AI Models

Walters, B (2026) Partial Responses: Unlocking Black Box AI Models. Doctoral thesis, Liverpool John Moores University.

2026waltersphd.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial.

Abstract

This thesis extends the partial response methodology to a range of non-linear black box models, such as Random Forests and Multi-Layer Perceptron neural networks. The outcome is a model-agnostic interpretability framework capable of maintaining the predictive power of the original black box models whilst offering full transparency into their decision-making processes. The proposed framework demonstrates competitive performance when evaluated against established interpretability techniques, both in terms of accuracy and explainability.
The framework enables the construction of intuitive univariate and bivariate visualisations derived from the partial response functions. These visual tools effectively communicate how individual variables, or pairs of variables, influence predictions across their entire respective domains. By providing a detailed, range-wide view of the variables, these plots support more comprehensive insights into model behaviour and facilitate informed decision-making.
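The thesis's own implementation is not reproduced here, but the kind of univariate visualisation described above can be sketched with a standard partial-dependence-style computation: sweep one feature across its observed range, hold the others at their observed values, and average the black box's predictions. The model choice (a Random Forest), the helper name `partial_response`, and all parameters below are illustrative assumptions, not the thesis's method.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

def partial_response(model, X, feature, grid_size=20):
    """Illustrative sketch: average model prediction as one feature sweeps
    its observed range, with the remaining features held at their values."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    responses = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value          # fix the feature at this grid point
        responses.append(model.predict(X_mod).mean())
    return grid, np.array(responses)

# Fit a black box model on synthetic data and trace feature 0's response.
X, y = make_regression(n_samples=300, n_features=4, noise=0.1, random_state=0)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
grid, pr = partial_response(rf, X, feature=0)
```

Plotting `pr` against `grid` yields the kind of range-wide univariate view the abstract describes; a bivariate version would sweep a grid over two features jointly.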
In addition, preliminary experimentation is presented in the area of bootstrapping, in which repeated resampling of the data is employed to assess the stability and reliability of the derived partial responses. This approach enhances the robustness of the interpretability outputs by incorporating measures of uncertainty, such as confidence intervals, thereby increasing user trust in the resulting explanations.
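The bootstrapping idea can be sketched as follows: refit the black box on resamples drawn with replacement, recompute the partial response curve each time, and take percentile bands as an uncertainty envelope. Everything here (the Random Forest, the partial-dependence-style `partial_response` helper, the 95% percentile interval) is an illustrative assumption standing in for the thesis's actual procedure.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

def partial_response(model, X, feature, grid):
    """Average prediction as `feature` is fixed at each grid value."""
    resp = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        resp.append(model.predict(X_mod).mean())
    return np.array(resp)

def bootstrap_partial_response(X, y, feature, n_boot=20, grid_size=15, seed=0):
    """Refit on bootstrap resamples and return the mean curve with a
    95% percentile band -- a sketch of the stability assessment described."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
    curves = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))   # resample with replacement
        model = RandomForestRegressor(
            n_estimators=30, random_state=0).fit(X[idx], y[idx])
        curves.append(partial_response(model, X[idx], feature, grid))
    curves = np.array(curves)
    lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
    return grid, curves.mean(axis=0), lo, hi

X, y = make_regression(n_samples=200, n_features=3, noise=0.1, random_state=1)
grid, mean_pr, lo, hi = bootstrap_partial_response(X, y, feature=0)
```

A wide band at some region of the grid would flag that the partial response is unstable there, which is precisely the trust signal the abstract attributes to the bootstrap intervals.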
All experimental analyses are conducted using a combination of synthetic datasets, designed to evaluate the methodology under controlled and interpretable conditions, and real-world datasets, which serve to examine the framework's efficacy in capturing complex, non-linear interactions among variables in noisy and heterogeneous environments. The use of both types of data ensures a comprehensive assessment of the method's generalisability and practical utility.

Item Type: Thesis (Doctoral)
Uncontrolled Keywords: Machine Learning; Explainability; XAI; Artificial Intelligence; Data Science
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Q Science > QA Mathematics > QA76 Computer software
Divisions: Computer Science and Mathematics
Date of acceptance: 23 March 2026
Date of first compliant Open Access: 20 April 2026
Date Deposited: 20 Apr 2026 12:57
Last Modified: 20 Apr 2026 12:58
DOI or ID number: 10.24377/LJMU.t.00028360
Supervisors: Ortega-Martorell, S, Olier, I and Lisboa, P
URI: https://researchonline.ljmu.ac.uk/id/eprint/28360