Published March 9, 2015 | Published + Supplemental Material
Journal Article | Open Access

Rate perception adapts across the senses: evidence for a unified timing mechanism

Abstract

The brain constructs a representation of temporal properties of events, such as duration and frequency, but the underlying neural mechanisms are under debate. One open question is whether these mechanisms are unisensory or multisensory. Duration perception studies provide some evidence for a dissociation between auditory and visual timing mechanisms; however, we found active crossmodal interaction between audition and vision for rate perception, even when vision and audition were never stimulated together. After exposure to 5 Hz adaptors, people perceived subsequent test stimuli centered around 4 Hz to be slower, and the reverse after exposure to 3 Hz adaptors. This aftereffect occurred even when the adaptor and test were different modalities that were never presented together. When the discrepancy in rate between adaptor and test increased, the aftereffect was attenuated, indicating that the brain uses narrowly-tuned channels to process rate information. Our results indicate that human timing mechanisms for rate perception are not entirely segregated between modalities and have substantial implications for models of how the brain encodes temporal features. We propose a model of multisensory channels for rate perception, and consider the broader implications of such a model for how the brain encodes timing.
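
The channel-based account proposed in the abstract lends itself to a simple population-coding illustration. The sketch below is not the authors' model; the Gaussian log-rate tuning curves, the 64-channel bank, and the adaptation strength k are illustrative assumptions chosen only to reproduce the direction of the reported aftereffect.

```python
import numpy as np

def channel_response(rate_hz, centers_hz, sigma=0.15):
    """Gaussian tuning over log rate (sigma is a hypothetical width)."""
    return np.exp(-((np.log(rate_hz) - np.log(centers_hz)) ** 2) / (2 * sigma ** 2))

def perceived_rate(test_hz, adaptor_hz=None, k=0.5):
    """Decode a test rate from a bank of rate-tuned channels,
    optionally after adapting the channels driven by an adaptor."""
    centers = np.geomspace(1.0, 16.0, 64)          # hypothetical channel centers
    gain = np.ones_like(centers)
    if adaptor_hz is not None:
        # adaptation: channels responsive to the adaptor lose gain
        gain -= k * channel_response(adaptor_hz, centers)
    resp = gain * channel_response(test_hz, centers)
    # population decode: response-weighted mean of channel centers (log domain)
    return np.exp(np.sum(resp * np.log(centers)) / np.sum(resp))

print(perceived_rate(4.0))                  # ~4.0 Hz (no adaptation)
print(perceived_rate(4.0, adaptor_hz=5.0))  # < 4 Hz: repelled away from 5 Hz
print(perceived_rate(4.0, adaptor_hz=3.0))  # > 4 Hz: repelled away from 3 Hz
```

Because the assumed channels are narrowly tuned, an adaptor far from the test rate barely overlaps the channels the test drives, so the simulated aftereffect shrinks as the adaptor-test discrepancy grows, consistent with the attenuation described above.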

Additional Information

© 2015 Macmillan Publishers Limited. This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder in order to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Received 14 October 2014; Accepted 4 February 2015; Published 9 March 2015.

We thank NIH 1R21EY023796-01, JST.ERATO, JST.CREST, and the Tamagawa-Caltech gCOE (MEXT, Japan) for funding the research; the Occidental College Undergraduate Research Center, Department of Cognitive Science, for providing support to YAB; and the National Science Foundation Graduate Research Fellowship Program and the Mary Louise Remy Endowed Scholar Award from the Philanthropic Educational Organization for providing support to NRS. Thanks to John Delacruz, Chess Stetson, and Juri Minxha for technical assistance, to Charlotte Yang for assisting with data collection and analysis, to Arthur Shapiro and Jeffrey Yau for helpful discussions, and to Johannes Burge, Jess Hartcher-O'Brien, Michael Landy, Cesare Parise, and Virginie van Wassenhove for helpful comments on the manuscript.

Author contributions: C.A.L., Y.A.B., N.R.B.S. and S.S. contributed to conceiving and designing the experiments. C.A.L. and Y.A.B. wrote the code. C.A.L., Y.A.B. and N.R.B.S. collected and analyzed the data. C.A.L., Y.A.B., N.R.B.S. and S.S. contributed to preparation of the manuscript.

Attached Files

Published - srep08857.pdf

Supplemental Material - srep08857-s1.pdf

Supplemental Material - srep08857-s2.xls

Files

Name              Size      MD5
srep08857.pdf     1.0 MB    76fad90950c6926b224c26770b95b2e6
srep08857-s1.pdf  732.3 kB  25343c954518034eb4d81b455defd67a
srep08857-s2.xls  38.9 kB   8c631ca0cf4ced15ba9c3c6920f8380d
Total size: 1.8 MB

Additional details

Created: August 20, 2023
Modified: March 5, 2024