Rate perception adapts across the senses: evidence for a unified timing mechanism
Abstract
The brain constructs a representation of temporal properties of events, such as duration and frequency, but the underlying neural mechanisms are under debate. One open question is whether these mechanisms are unisensory or multisensory. Duration perception studies provide some evidence for a dissociation between auditory and visual timing mechanisms; however, we found active crossmodal interaction between audition and vision for rate perception, even when vision and audition were never stimulated together. After exposure to 5 Hz adaptors, people perceived subsequent test stimuli centered around 4 Hz to be slower, and the reverse after exposure to 3 Hz adaptors. This aftereffect occurred even when the adaptor and test were different modalities that were never presented together. When the discrepancy in rate between adaptor and test increased, the aftereffect was attenuated, indicating that the brain uses narrowly-tuned channels to process rate information. Our results indicate that human timing mechanisms for rate perception are not entirely segregated between modalities and have substantial implications for models of how the brain encodes temporal features. We propose a model of multisensory channels for rate perception, and consider the broader implications of such a model for how the brain encodes timing.
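The channel account summarized in the abstract can be made concrete with a toy simulation. The sketch below is a minimal illustration, not the authors' model: it assumes Gaussian-tuned rate channels, and the channel spacing, 1 Hz bandwidth, 50% adaptation depth, and population-vector readout are all illustrative assumptions. It shows how suppressing channels near a 5 Hz adaptor biases the decoded rate of a 4 Hz test downward, the repulsive aftereffect described above.

import numpy as np

def channel_responses(rate_hz, preferred_rates, bandwidth_hz, gains):
    # Gaussian-tuned channel responses to a stimulus presented at rate_hz.
    return gains * np.exp(-0.5 * ((rate_hz - preferred_rates) / bandwidth_hz) ** 2)

def decode_rate(responses, preferred_rates):
    # Population-vector readout: response-weighted mean of preferred rates.
    return np.sum(responses * preferred_rates) / np.sum(responses)

preferred = np.linspace(1.0, 8.0, 29)   # channels tuned from 1 to 8 Hz (assumed)
bandwidth = 1.0                          # assumed tuning width in Hz
gains = np.ones_like(preferred)          # unadapted channel gains

# Adapt to 5 Hz: suppress the gain of channels tuned near the adaptor rate.
adaptor = 5.0
adaptation = 0.5 * np.exp(-0.5 * ((adaptor - preferred) / bandwidth) ** 2)
adapted_gains = gains * (1.0 - adaptation)

test = 4.0
baseline = decode_rate(channel_responses(test, preferred, bandwidth, gains), preferred)
adapted = decode_rate(channel_responses(test, preferred, bandwidth, adapted_gains), preferred)

print(f"4 Hz test decoded as {baseline:.2f} Hz before and {adapted:.2f} Hz after 5 Hz adaptation")
# The decoded rate falls below 4 Hz after 5 Hz adaptation (a repulsive
# aftereffect); a 3 Hz adaptor shifts the readout in the opposite direction.

Because adaptation in this toy model only depresses channels within roughly one tuning width of the adaptor, the simulated aftereffect fades as the adaptor-test discrepancy grows, consistent with the narrow channel tuning reported in the abstract.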
Additional Information
© 2015 Macmillan Publishers Limited. This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder in order to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Received 14 October 2014; Accepted 4 February 2015; Published 9 March 2015.

The authors thank NIH 1R21EY023796-01, JST.ERATO, JST.CREST, and Tamagawa-Caltech gCOE (MEXT, Japan) for funding the research; the Occidental College Undergraduate Research Center, Department of Cognitive Science, for providing support to Y.A.B.; and the National Science Foundation Graduate Research Fellowship Program and the Mary Louise Remy Endowed Scholar Award from the Philanthropic Educational Organization for providing support to N.R.B.S. Thanks to John Delacruz, Chess Stetson, and Juri Minxha for technical assistance, to Charlotte Yang for assisting with data collection and analysis, to Arthur Shapiro and Jeffrey Yau for helpful discussions, and to Johannes Burge, Jess Hartcher-O'Brien, Michael Landy, Cesare Parise, and Virginie van Wassenhove for helpful comments on the manuscript.

Author contributions: C.A.L., Y.A.B., N.R.B.S. and S.S. contributed to conceiving and designing the experiments. C.A.L. and Y.A.B. wrote the code. C.A.L., Y.A.B. and N.R.B.S. collected and analyzed the data. C.A.L., Y.A.B., N.R.B.S. and S.S. contributed to preparation of the manuscript.

Attached Files
Published - srep08857.pdf
Supplemental Material - srep08857-s1.pdf
Supplemental Material - srep08857-s2.xls
Additional details
- PMCID: PMC4894401
- Eprint ID: 56320
- Resolver ID: CaltechAUTHORS:20150402-152906978
- Funders: NIH (1R21EY023796-01); JST.ERATO; JST.CREST; Tamagawa-Caltech gCOE (MEXT); Occidental College; NSF Graduate Research Fellowship; Philanthropic Educational Organization
- Created: 2015-04-02 (from EPrint's datestamp field)
- Updated: 2021-11-10 (from EPrint's last_modified field)