
Combining cardiovascular and pupil features using k-nearest neighbor classifiers to assess task demand, social context and sentence accuracy during listening

Plain, B, Pielage, H, Kramer, SE, Richter, M, Saunders, GH, Versfeld, NJ, Zekveld, AA and Bhuiyan, TA (2024) Combining cardiovascular and pupil features using k-nearest neighbor classifiers to assess task demand, social context and sentence accuracy during listening. Trends in Hearing, 28. ISSN 2331-2165

plain-et-al-2024-combining-cardiovascular-and-pupil-features-using-k-nearest-neighbor-classifiers-to-assess-task-demand.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial.

Abstract

In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults with hearing loss (mean age = 64.6 years, SD = 9.2). Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. K-fold cross-validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
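As a minimal illustrative sketch (not the authors' analysis code), the Python snippet below shows how seven trial-level pupil and cardiovascular features might feed a k-nearest neighbor classifier, evaluated both with group-level k-fold cross-validation and with individually calibrated per-participant models. The synthetic data, the scikit-learn pipeline, and the n_neighbors = 5 setting are assumptions introduced only so the example runs end to end.

# Illustrative sketch: kNN classification of a listening condition from seven
# trial-level features, with group-level and per-participant cross-validation.
# All data below are synthetic stand-ins, not the study's measurements.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

FEATURES = [
    "baseline_pupil_size", "peak_pupil_dilation", "mean_pupil_dilation",
    "interbeat_interval", "blood_volume_pulse_amplitude",
    "pre_ejection_period", "pulse_arrival_time",
]

# Assumed layout: 29 participants x 40 trials, 7 features per trial, and a
# binary label per trial (e.g., high vs. low task demand).
n_participants, n_trials = 29, 40
X = rng.normal(size=(n_participants * n_trials, len(FEATURES)))
y = rng.integers(0, 2, size=n_participants * n_trials)
participant = np.repeat(np.arange(n_participants), n_trials)

# Standardize before kNN: the features sit on very different physical scales
# (ms, mm, arbitrary units) and kNN relies on Euclidean distance.
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

# Group-level model: pool all trials and evaluate with stratified k-fold CV.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
group_acc = cross_val_score(knn, X, y, cv=cv)
print(f"group-level accuracy: {group_acc.mean():.3f}")

# Individually calibrated models: train and test within each participant.
per_subject_acc = []
for p in np.unique(participant):
    mask = participant == p
    scores = cross_val_score(knn, X[mask], y[mask], cv=cv)
    per_subject_acc.append(scores.mean())
print(f"mean per-participant accuracy: {np.mean(per_subject_acc):.3f}")

Standardizing the features before the distance computation is a common choice when measures differ in units and scale; the paper's actual preprocessing and hyperparameters may differ.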

Item Type: Article
Uncontrolled Keywords: Pupil; Humans; Speech Intelligibility; Speech Perception; Adult; classification; k-nearest neighbor; listening effort; physiological measures; social context; 1103 Clinical Sciences
Subjects: B Philosophy. Psychology. Religion > BF Psychology
R Medicine > RA Public aspects of medicine > RA0421 Public health. Hygiene. Preventive Medicine
Divisions: Psychology (from Sep 2019)
Publisher: SAGE Publishing
Date Deposited: 08 Apr 2024 11:19
Last Modified: 08 Apr 2024 11:30
DOI or ID number: 10.1177/23312165241232551
URI: https://researchonline.ljmu.ac.uk/id/eprint/22973