Published August 27, 2021 | Supplemental Material + Submitted + Published
Journal Article Open

Four dimensions characterize attributions from faces using a representative set of English trait words

Abstract

People readily (but often inaccurately) attribute traits to others based on faces. While the details of attributions depend on the language available to describe social traits, psychological theories argue that two or three dimensions (such as valence and dominance) summarize social trait attributions from faces. However, prior work has used only a small number of trait words (12 to 18), limiting conclusions to date. In two large-scale, preregistered studies we asked participants to rate 100 faces (obtained from existing face stimulus sets) using a list of 100 English trait words, which we derived through deep neural network analysis of words used by participants in prior studies to describe faces. In study 1 we find that these attributions are best described by four psychological dimensions, which we interpret as "warmth", "competence", "femininity", and "youth". In study 2 we partially reproduce these four dimensions using the same stimuli among additional participant raters from multiple regions around the world, in both aggregated and individual-level data. These results provide a comprehensive characterization of trait attributions from faces, although our conclusions are limited by the scope of our study (in particular, only white faces and English trait words were included).
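The abstract describes summarizing a faces × trait-words rating matrix with a small number of latent dimensions. As an illustration only (the record page does not show the authors' actual analysis pipeline, and the data below are simulated, not the study's ratings), a minimal sketch of this kind of dimensionality analysis via PCA might look like:

```python
import numpy as np

# Hypothetical sketch: ratings of n_faces faces on n_traits trait words,
# simulated here from n_dims underlying dimensions plus noise. This is NOT
# the paper's method or data; it only illustrates how a few principal
# components can summarize a faces x traits rating matrix.
rng = np.random.default_rng(0)
n_faces, n_traits, n_dims = 100, 100, 4

latent = rng.normal(size=(n_faces, n_dims))        # latent face dimensions
loadings = rng.normal(size=(n_dims, n_traits))     # trait loadings
ratings = latent @ loadings + 0.1 * rng.normal(size=(n_faces, n_traits))

# PCA via SVD of the column-centered rating matrix.
centered = ratings - ratings.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance explained per component

# With four strong latent dimensions, the first four components dominate.
print(np.sum(explained[:4]))
```

In real data the drop-off after the leading components is less sharp, and deciding how many dimensions to retain (here, four) typically relies on criteria such as parallel analysis or cross-validated reconstruction rather than a single variance cutoff.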

Additional Information

© The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Received 12 July 2020; Accepted 12 August 2021; Published 27 August 2021.

Acknowledgments: We thank Dean Mobbs, Mark A. Thornton, R. Michael Alvarez, Mark Bowren, Antonio Rangel, Clare Sutherland, Uri Maoz, and William Revelle for their input; Remya Nair and Christopher J. Birtja for technology support; and Becky Santora for help with testing participants in foreign locations through Digital Divide Data. Funded in part by NSF grants BCS-1840756 and BCS-1845958, the Carver Mead New Adventures Fund, and the Simons Foundation Collaboration on the Global Brain (542941).

Data availability: All de-identified data generated in this study have been deposited in the Open Science Framework: https://osf.io/4mvyt/ and https://osf.io/xeb6w/. Source data are provided with this paper. All face images used in this study are from publicly available databases: https://www.chicagofaces.org/ (Chicago Face Database), https://figshare.com/articles/dataset/Face_Research_Lab_London_Set/5047666 (London Face Database), and https://sirileknes.com/oslo-face-database/ (Oslo Face Database).
Code availability: All data were collected via online experiments using custom code written in JavaScript. All data analyses were performed using R (version 3.5.1) and Python (version 3.6.9). All experiment and analysis code is available at the Open Science Framework: https://osf.io/4mvyt/ and https://osf.io/xeb6w/.

Author Contributions: C.L. and R.A. developed the study concept and designed the study; C.L. and U.K. prepared experimental materials; R.A. supervised the experiments and analyses; C.L. performed and supervised data collection; C.L. and U.K. performed data analyses; C.L. and R.A. drafted the manuscript; all authors revised and reviewed the manuscript and approved the final version for submission. The authors declare no competing interests.

Peer review information: Nature Communications thanks Nick Enfield, Aleix Martinez, Clare Sutherland, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Attached Files

Published - s41467-021-25500-y.pdf

Submitted - 10.31234osf.io87nex.pdf

Supplemental Material - 41467_2021_25500_MOESM1_ESM.pdf

Supplemental Material - 41467_2021_25500_MOESM2_ESM.pdf

Supplemental Material - 41467_2021_25500_MOESM3_ESM.pdf

Supplemental Material - 41467_2021_25500_MOESM4_ESM.zip


Additional details

Created: August 22, 2023
Modified: December 22, 2023