Published December 9, 2008 | Journal Article | Open Access

Decoding face information in time, frequency and space from direct intracranial recordings of the human brain

Abstract

Faces are processed by a neural system with distributed anatomical components, but the roles of these components remain unclear. A dominant theory of face perception postulates independent representations of invariant aspects of faces (e.g., identity) in ventral temporal cortex, including the fusiform gyrus, and of changeable aspects of faces (e.g., emotion) in lateral temporal cortex, including the superior temporal sulcus. Here we recorded neuronal activity directly from the cortical surface in 9 neurosurgical subjects undergoing epilepsy monitoring while they viewed static and dynamic facial expressions. Applying novel decoding analyses to the power spectrograms of electrocorticograms (ECoG) from over 100 contacts in ventral and lateral temporal cortex, we found better representation of both invariant and changeable aspects of faces in ventral than in lateral temporal cortex. Critical information for discriminating faces from geometric patterns was carried by power modulations between 50 and 150 Hz. For both static and dynamic face stimuli, we obtained higher decoding performance in ventral than in lateral temporal cortex. For discriminating fearful from happy expressions, critical information was carried by power modulations between 60 and 150 Hz and below 30 Hz, and was again better decoded in ventral than in lateral temporal cortex. Task-relevant attention improved decoding accuracy by more than 10% across a wide frequency range in ventral temporal cortex, but not at all in lateral temporal cortex. Spatial searchlight decoding showed that decoding performance was highest around the middle fusiform gyrus. Finally, we found that the right hemisphere, in general, showed superior decoding to the left hemisphere. Taken together, our results challenge the dominant model of independent representation of invariant and changeable aspects of faces: information about both face attributes was better decoded from a single region in the middle fusiform gyrus.
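The abstract does not detail the decoding pipeline itself. As a loose illustration only, the sketch below shows one generic way to decode a two-class contrast from band-limited spectrogram power, using synthetic single-contact data. The sampling rate, band limits, classifier, and all data are assumptions made for the example, not the authors' method.

# Illustrative sketch only: generic band-power decoding on synthetic data,
# not the pipeline used in the paper. All parameters below are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, spectrogram
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_trials, n_samples = 1000, 120, 500   # assumed 1 kHz sampling, 0.5 s trials

# Synthetic single-contact trials: class-1 trials carry extra 50-150 Hz power,
# mimicking the high-frequency modulation the abstract describes.
labels = rng.integers(0, 2, n_trials)
trials = rng.standard_normal((n_trials, n_samples))
b, a = butter(4, [50, 150], btype="band", fs=fs)
hg = filtfilt(b, a, rng.standard_normal((n_trials, n_samples)), axis=1)
trials[labels == 1] += 0.5 * hg[labels == 1]

# One feature per trial: mean spectrogram power inside the 50-150 Hz band.
f, t, sxx = spectrogram(trials, fs=fs, nperseg=128, noverlap=96, axis=1)
band = (f >= 50) & (f <= 150)
features = sxx[:, band, :].mean(axis=(1, 2)).reshape(-1, 1)

# Cross-validated decoding accuracy; chance level is 0.5 for two classes.
scores = cross_val_score(LogisticRegression(), features, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")

A searchlight analysis of the kind mentioned in the abstract would repeat this scoring per contact (or per small neighborhood of contacts) and map the resulting accuracies across the cortical surface.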

Additional Information

© 2008 Tsuchiya et al. Received October 14, 2008; Accepted November 6, 2008; Published December 9, 2008. We thank Haiming Chen for overall technical assistance, Yota Kimura, Christopher Kovach and Joe Hitchon for their assistance during various phases of the experiment, Dirk Neumann and Ueli Rutishauser for helpful discussion on the analysis, and Alex Maier, Rufin VanRullen, Christof Koch and Fred Gosselin for their comments on the manuscript. We thank all subjects who participated in the study for their time. This work was supported by a fellowship from the Japan Society for the Promotion of Science (N.T.) and grants from NIH (R03 MH070497-01A2 to H.K.; R01 DC004290-06 to M.H.), the James S. McDonnell Foundation (R.A.) and the Gordon and Betty Moore Foundation (R.A.). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Conceived and designed the experiments: HK RA. Performed the experiments: HK. Analyzed the data: NT. Contributed reagents/materials/analysis tools: NT. Wrote the paper: NT HK RA. Initial data analyses: HK. Performed anatomical localization of the electrodes: HO. Performed the neurosurgery and oversaw all recordings: MAH.

Files

Published - TSUplosone08.pdf (1.0 MB)
md5:c882e2bb85674ac14200c236965b704d

Additional details

Created: August 20, 2023
Modified: October 18, 2023