Published August 12, 2013 | Supplemental Material + Published
Journal Article | Open Access

Involvement of Right STS in Audio-Visual Integration for Affective Speech Demonstrated Using MEG

Abstract

Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV > [unimodal auditory + unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously with speech: through the voice in the form of speech prosody, and through the face in the form of facial expression. Previous studies of AV nonverbal emotion integration showed a role for right (rather than left) STS. The current study therefore examined whether the integration of facial and prosodic signals of emotional speech is associated with supra-additive responses in left STS (cf. results for speech integration) or right STS (due to emotional content). As emotional displays are sometimes difficult to interpret, we also examined whether supra-additive responses were affected by emotional incongruence (i.e., ambiguity). Using magnetoencephalography, we continuously recorded eighteen participants as they viewed and heard AV congruent emotional and AV incongruent emotional speech stimuli. Significant supra-additive responses were observed in right STS within the first 250 ms for both emotionally incongruent and emotionally congruent AV speech stimuli, which further underscores the role of right STS in processing crossmodal emotive signals.
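The supra-additivity criterion mentioned in the abstract can be stated compactly. The sketch below is purely illustrative, with hypothetical response amplitudes; it is not the authors' MEG analysis pipeline, only a restatement of the AV > (A + V) comparison.

```python
# Minimal sketch of the supra-additivity criterion from the abstract:
# a crossmodal response is supra-additive when the audio-visual (AV)
# response exceeds the sum of the unimodal auditory (A) and visual (V)
# responses. The amplitudes below are hypothetical, for illustration only.

def is_supra_additive(av: float, a: float, v: float) -> bool:
    """Return True when AV > A + V."""
    return av > a + v

# Hypothetical evoked-response amplitudes (arbitrary units)
av_response, auditory_only, visual_only = 4.2, 1.8, 1.9
print(is_supra_additive(av_response, auditory_only, visual_only))  # True
```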

Additional Information

© 2013 Hagan et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Received: March 20, 2013; Accepted: June 20, 2013; Published: August 12, 2013.

The authors wish to thank all participants who partook in the study and those individuals who contributed to the creation and editing of the AV emotion video stimulus set: Jodie Davies-Thompson, Simon Fletcher, Lisa Henderson, Mark Hurlstone, Goran Lukic, Paul McLaughlin, Dean Mobbs, Joe Wherton, and Anna Wilkinson. A special thank you also to Cinly Ooi for assisting with figure creation.

Author Contributions: Conceived and designed the experiments: CCH SJ AWY. Performed the experiments: CCH. Analyzed the data: CCH WW SJ. Contributed reagents/materials/analysis tools: WW SJ GGRG. Wrote the paper: CCH WW GGRG AWY.

The authors have no support or funding to report. The authors have declared that no competing interests exist.

Attached Files

Published - journal.pone.0070648.PDF

Supplemental Material - journal.pone.0070648.s001.DOC

Files (1.3 MB)

Name                           Size      MD5
journal.pone.0070648.PDF       713.2 kB  md5:6ce07779d792cefc0f07d13118bf6768
journal.pone.0070648.s001.DOC  593.4 kB  md5:9e7ce44c759c3cb7bf36c68f3a715d11

Additional details

Created: August 19, 2023
Modified: October 19, 2023