Salah, S, Elbatanouny, H, Sobuh, A, Almajali, E, Khan, W, Alaskar, H, Binbusayyis, A, Hassan, T, Yousaf, J and Hussain, A (2025) Explainable AI for Unraveling the Significance of Visual Cues in High Stakes Deception Detection. IEEE Access, 13. pp. 65839-65862.
Text: Explainable AI for Unraveling the Significance of Visual Cues in High Stakes Deception Detection.pdf - Published Version, available under a Creative Commons Attribution license.
Abstract
Deception, a widespread aspect of human behavior, has significant implications in fields such as law enforcement, security, judicial proceedings, and social settings. Detecting deception accurately, especially in high-stakes environments, is critical for ensuring justice and security. Recently, machine learning has significantly enhanced deception detection capabilities by analyzing various behavioral and visual cues. However, machine learning models often operate as opaque "black boxes," offering high predictive accuracy without explaining the reasoning behind their decisions. This lack of transparency necessitates the integration of Explainable Artificial Intelligence to make the models' decisions understandable and trustworthy. This study proposes the implementation of existing model-agnostic Explainable Artificial Intelligence techniques - Permutation Importance, Partial Dependence Plots, and SHapley Additive exPlanations - to showcase the contributions of visual features in deception detection. Using the Real-Life Trial dataset, recognized as the most valuable high-stakes dataset, we demonstrate that a Multi-layer Perceptron achieved the highest accuracy of 88% and a recall of 92.86%. Along with the aforementioned existing techniques, the Real-Life Trial dataset inspired us to develop a novel technique: 'set-of-features permutation importance'. Additionally, this study is novel in that it extensively applies XAI techniques to deception detection on the Real-Life Trial dataset. Experimental results show that the visual cues related to eyebrow movements are most indicative of deceptive behavior. Along with these new findings, our work underscores the importance of making machine learning models more transparent and explainable, thereby enhancing their utility for human-in-the-loop AI and ethical acceptability.
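The 'set-of-features permutation importance' named in the abstract can be read as a grouped variant of standard permutation importance: instead of shuffling one column at a time, a whole set of related feature columns is permuted jointly and the resulting drop in accuracy is measured. The paper's actual formulation, features, and classifier settings are not reproduced in this record, so the sketch below is only an illustration of that general idea on synthetic data; the feature grouping, the MLP configuration, and the function name are all assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Toy stand-in for a visual-cue feature matrix (real features in the paper
# are facial-action cues such as eyebrow movements; these are synthetic).
X, y = make_classification(n_samples=300, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hypothetical MLP configuration, not the one used in the study.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

def set_permutation_importance(model, X, y, feature_set, n_repeats=20, seed=0):
    """Mean drop in accuracy when a *group* of columns is permuted together.

    The columns in `feature_set` are shuffled with the same row permutation,
    so correlations within the set are preserved while the set's relation
    to the target (and the other features) is broken.
    """
    rng = np.random.default_rng(seed)
    drops = []
    for _ in range(n_repeats):
        X_perm = X.copy()
        idx = rng.permutation(len(X))
        X_perm[:, feature_set] = X_perm[idx][:, feature_set]  # joint shuffle
        drops.append(baseline - accuracy_score(y, model.predict(X_perm)))
    return float(np.mean(drops))

# E.g. treat columns 0-1 as one hypothetical cue group ("eyebrow movements").
importance = set_permutation_importance(model, X_test, y_test, [0, 1])
print(f"set-of-features importance: {importance:.3f}")
```

A larger drop for one cue group than another suggests the model leans more on that group, which is the kind of evidence the abstract reports for eyebrow-related cues.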
Item Type: | Article |
---|---|
Uncontrolled Keywords: | 46 Information and Computing Sciences; 4608 Human-Centred Computing; Behavioral and Social Science; Networking and Information Technology R&D (NITRD); Machine Learning and Artificial Intelligence; 16 Peace, Justice and Strong Institutions; 08 Information and Computing Sciences; 09 Engineering; 10 Technology; 40 Engineering; 46 Information and computing sciences |
Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
Divisions: | Computer Science and Mathematics |
Publisher: | Institute of Electrical and Electronics Engineers (IEEE) |
Date of acceptance: | 28 March 2025 |
Date of first compliant Open Access: | 28 May 2025 |
Date Deposited: | 28 May 2025 14:06 |
Last Modified: | 28 May 2025 14:15 |
DOI or ID number: | 10.1109/ACCESS.2025.3558875 |
URI: | https://researchonline.ljmu.ac.uk/id/eprint/26450 |