Alwashmi, K, Meyer, G, Rowe, F and Ward, R (2023) Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality. NeuroImage, 285. p. 120483. ISSN 1053-8119
Text: Published Version (PDF, 6 MB), available under a Creative Commons Attribution licence.
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in both real and virtual environments (VR). Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions. This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 min of daily VR training for four weeks. The task was an AV adaptation of a ‘scanning training’ paradigm commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time (RT) reduction in VR, improved significantly. In separate tests in a controlled laboratory environment, we showed that the performance gains from the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter showing faster response times supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks that were tested: a visual search task and an involuntary visual task. Our fMRI results reveal increased functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe and the cerebellum. These functional changes were observed only for the trained multisensory task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated with behavioural performance improvements.
This study demonstrates that incorporating spatial auditory cues into voluntary visual training in VR leads to augmented activation changes in multisensory integration regions, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
| Item Type | Article |
|---|---|
| Uncontrolled Keywords | Audio-visual; Eye-movement; Learning; Multisensory; Virtual-reality; fMRI; Humans; Magnetic Resonance Imaging; Brain; Visual Perception; Virtual Reality; Blindness; Auditory Perception; 11 Medical and Health Sciences; 17 Psychology and Cognitive Sciences; Neurology & Neurosurgery |
| Subjects | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Divisions | Computer Science & Mathematics |
| Publisher | Elsevier |
| SWORD Depositor | A Symplectic |
| Date Deposited | 07 May 2024 08:31 |
| Last Modified | 07 May 2024 08:31 |
| DOI or ID number | 10.1016/j.neuroimage.2023.120483 |
| URI | https://researchonline.ljmu.ac.uk/id/eprint/23179 |