Published October 22, 2015 | Published + Supplemental Material
Journal Article | Open Access

Auditory Sensory Substitution is Intuitive and Automatic with Texture Stimuli

Abstract

Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively-trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand blind capabilities.
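The encoding itself is simple to illustrate. In the vOICe's published scheme (Meijer), an image is scanned column by column from left to right, vertical position maps to pitch, and pixel brightness maps to loudness. Below is a minimal Python sketch of that general mapping; the sample rate, scan duration, frequency range, and function name are illustrative assumptions, not the device's actual parameters.

    import numpy as np

    def image_to_sound(image, sr=44100, duration=1.0, f_lo=500.0, f_hi=5000.0):
        """Illustrative vOICe-style encoder for a grayscale image (2-D
        uint8 array). Columns are scanned left to right over `duration`
        seconds; each row drives a sine oscillator whose frequency rises
        with height and whose amplitude follows pixel brightness.
        All parameters here are assumed values, not the device's own."""
        n_rows, n_cols = image.shape
        samples_per_col = int(sr * duration / n_cols)
        # Exponentially spaced frequencies: low pitch for the bottom row,
        # high pitch for the top row.
        freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_rows)[::-1] / (n_rows - 1))
        t = np.arange(samples_per_col) / sr
        tones = np.sin(2 * np.pi * freqs[:, None] * t)      # one sine per row
        out = []
        for c in range(n_cols):
            amps = image[:, c].astype(float) / 255.0        # brightness -> amplitude
            out.append((amps[:, None] * tones).sum(axis=0))
        audio = np.concatenate(out)
        return audio / (np.abs(audio).max() + 1e-9)         # normalize to [-1, 1]

Under this mapping, a bright diagonal running from the bottom-left to the top-right of the image renders as a rising frequency sweep, the kind of intuitive image-to-sound correspondence the study examines.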

Additional Information

© 2015 Macmillan Publishers Limited. This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third-party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

Received: 15 December 2014. Accepted: 29 September 2015. Published online: 22 October 2015.

We are grateful for a fellowship from the National Science Foundation (NSF) Graduate Research Fellowship Program, and for research grants from the Della Martin Fund for Discoveries in Mental Illness and from the Japan Science and Technology Agency, Core Research for Evolutional Science and Technology. We appreciate Yuqian Zheng's support with training participants on the vOICe device, and Carmel Levitan's and Armand R. Tanguay, Jr.'s comments on the manuscript. We would also like to thank Peter Meijer, Luis Goncalves, and Enrico Di Bernardo of MetaModal LLC for the use of several of the vOICe devices used in this study.

Contributions: N.S. designed experiments, collected and analyzed data, and drafted the paper. S.S. designed experiments, interpreted data, and drafted the paper.

The authors declare no competing financial interests.

Attached Files

Published - srep15628.pdf

Supplemental Material - srep15628-s1.pdf

Supplemental Material - srep15628-s2.mov

Files (9.7 MB)

md5:7663b1a0fac27604adc42df895cc429e (1.6 MB)
md5:b4e344211147b89605fb1254e47c30c6 (6.3 MB)
md5:83afdb81bbca6a91e0daeaefaf83afa0 (1.8 MB)
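To verify downloaded copies against the digests above, the following Python sketch streams each file through MD5. The record does not preserve which digest belongs to which attached file, so the script only checks that each computed digest appears in the expected set; the file names are taken from the Attached Files list above.

    import hashlib

    # Digests listed in this record.
    EXPECTED = {
        "7663b1a0fac27604adc42df895cc429e",
        "b4e344211147b89605fb1254e47c30c6",
        "83afdb81bbca6a91e0daeaefaf83afa0",
    }

    def md5_of(path, chunk_size=1 << 20):
        """Hash the file in 1 MiB chunks so large files never load fully into memory."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    for name in ("srep15628.pdf", "srep15628-s1.pdf", "srep15628-s2.mov"):
        digest = md5_of(name)
        status = "OK" if digest in EXPECTED else "MISMATCH"
        print(f"{name}  md5:{digest}  {status}")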

Additional details

Created: August 20, 2023
Modified: March 5, 2024