Explaining face representation in the primate brain using different computational models
Abstract
Understanding how the brain represents the identity of complex objects is a central challenge of visual neuroscience. The principles governing object processing have been extensively studied in the macaque face patch system, a sub-network of inferotemporal (IT) cortex specialized for face processing. A previous study reported that single face patch neurons encode axes of a generative model called the "active appearance" model, which transforms 50D feature vectors separately representing facial shape and facial texture into facial images. However, a systematic investigation comparing this model to other computational models, especially convolutional neural network models that have shown success in explaining neural responses in the ventral visual stream, has been lacking. Here, we recorded responses of cells in the most anterior face patch, anterior medial (AM), to a large set of real face images and compared a large number of models in their ability to explain the neural responses. We found that the active appearance model explained responses better than any other model except CORnet-Z, a feedforward deep neural network trained on general object classification using non-face images, whose performance the active appearance model matched on some face image sets and exceeded on others. Surprisingly, deep neural networks trained specifically on facial identification did not explain neural responses well. A major reason is that, unlike neurons, units in these networks are only weakly modulated by face-related factors unrelated to facial identification, such as illumination.
Additional Information
© 2021 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Received 20 May 2020, Revised 22 March 2021, Accepted 8 April 2021, Available online 4 May 2021. This work was supported by NIH (EY030650-01), the Howard Hughes Medical Institute, and the Chen Center for Systems Neuroscience at Caltech. We are grateful to Nicole Schweers for help with animal training and MingPo Yang for help with implementing the CORnets. Author contributions: L.C. and D.Y.T. conceived the project and wrote the paper with the help of all other authors, L.C. performed the experiments and analyzed the data, D.Y.T. supervised the project, and B.E. and T.V. constructed the 3D morphable model used to compare with the neural data. The authors declare no competing interests.
Attached Files
Published - 1-s2.0-S0960982221005273-main.pdf
Submitted - 2020.06.07.111930v2.full.pdf
Supplemental Material - 1-s2.0-S0960982221005273-mmc1.pdf
Additional details
- Alternative title
- What computational model provides the best explanation of face representations in the primate brain?
- PMCID
- PMC8566016
- Eprint ID
- 103817
- Resolver ID
- CaltechAUTHORS:20200610-100834335
- Funders
- NIH (EY030650-01)
- Howard Hughes Medical Institute (HHMI)
- Tianqiao and Chrissy Chen Institute for Neuroscience
- Created
- 2020-06-10 (from EPrint's datestamp field)
- Updated
- 2023-07-07 (from EPrint's last_modified field)
- Caltech groups
- Tianqiao and Chrissy Chen Institute for Neuroscience, Division of Biology and Biological Engineering