
Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening

Plain, B, Pielage, H, Kramer, SE, Richter, M, Saunders, G, Versfeld, NJ, Zekveld, AA and Bhuiyan, TA (2024) Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening. Trends in Hearing, 28. pp. 1-22. ISSN 2331-2165

57b177f3-d924-4ca8-af91-29c4ebb732cf.pdf - Published Version (1MB)
Available under License Creative Commons Attribution Non-commercial.

Abstract

In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults with hearing loss (mean age = 64.6 years, SD = 9.2). Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period, and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context, and sentence accuracy. K-fold cross-validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
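The pipeline the abstract describes (seven per-trial physiological features, a k-nearest neighbor classifier, group-level k-fold cross-validation) can be sketched as below. This is a minimal illustration, not the authors' code: the feature names follow the abstract, but the synthetic data, the choice of k, and the number of folds are assumptions for demonstration only.

```python
# Hypothetical sketch of the abstract's approach: kNN classification of
# seven physiological features with k-fold cross-validation. Synthetic
# data stands in for the real per-trial measurements.
import math
import random

# The seven per-trial features named in the abstract.
FEATURES = ["baseline_pupil_size", "peak_pupil_dilation", "mean_pupil_dilation",
            "interbeat_interval", "bvp_amplitude", "pre_ejection_period",
            "pulse_arrival_time"]

def knn_predict(train_X, train_y, x, k=5):
    """Predict a label by majority vote among the k nearest training trials."""
    dists = sorted((math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y))
    votes = [yi for _, yi in dists[:k]]
    return max(set(votes), key=votes.count)

def kfold_accuracy(X, y, k_neighbors=5, n_folds=5, seed=0):
    """Group-level k-fold cross-validation: train on k-1 folds, test on the rest."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]
    correct = 0
    for f, test_idx in enumerate(folds):
        train_idx = [i for g, fold in enumerate(folds) if g != f for i in fold]
        tX, ty = [X[i] for i in train_idx], [y[i] for i in train_idx]
        correct += sum(knn_predict(tX, ty, X[i], k_neighbors) == y[i]
                       for i in test_idx)
    return correct / len(X)

# Synthetic demo: two task-demand conditions with shifted feature means
# (assumed separability; real physiological data overlap far more).
rng = random.Random(1)
X = [[rng.gauss(mu, 1.0) for _ in FEATURES]
     for mu in [0.0] * 60 + [1.5] * 60]
y = ["low_demand"] * 60 + ["high_demand"] * 60
print(f"k-fold CV accuracy: {kfold_accuracy(X, y):.3f}")
```

Training one classifier per participant, as the abstract reports working best, would amount to calling `kfold_accuracy` on each participant's trials separately rather than on the pooled group data.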

Item Type: Article
Uncontrolled Keywords: 1103 Clinical Sciences
Subjects: B Philosophy. Psychology. Religion > BF Psychology
R Medicine > RF Otorhinolaryngology
Divisions: Psychology (from Sep 2019)
Publisher: SAGE Publishing
Date Deposited: 04 Jun 2024 10:13
Last Modified: 04 Jun 2024 10:15
DOI or ID number: 10.1177/23312165241232551
URI: https://researchonline.ljmu.ac.uk/id/eprint/23413