Published June 17, 2015 | Published + Supplemental Material
Journal Article | Open Access

Length and orientation constancy learning in 2-dimensions with auditory sensory substitution: the importance of self-initiated movement

Abstract

A subset of sensory substitution (SS) devices translate images into sounds in real time using a portable computer, camera, and headphones. Perceptual constancy is the key to understanding both functional and phenomenological aspects of perception with SS. In particular, constancies enable object externalization, which is critical to the performance of daily tasks such as obstacle avoidance and locating dropped objects. In order to improve daily task performance by the blind, and determine if constancies can be learned with SS, we trained blind (N = 4) and sighted (N = 10) individuals on length and orientation constancy tasks for 8 days at about 1 h per day with an auditory SS device. We found that both blind and sighted participants significantly improved at the constancy tasks and attained above-chance constancy performance. Furthermore, dynamic interactions with stimuli were critical to constancy learning with the SS device. In particular, improved task learning significantly correlated with the number of spontaneous left-right head-tilting movements while learning length constancy. The improvement from previous head-tilting trials even transferred to a no-head-tilt condition. Therefore, not only can SS learning be improved by encouraging head movement while learning, but head movement may also play an important role in learning constancies in the sighted. In addition, the learning of constancies by the blind and sighted with SS provides evidence that SS may be able to restore vision-like functionality to the blind in daily tasks.
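
The abstract describes devices that translate camera images into sounds in real time. As a rough illustration of that general principle only (this is not the device used in the study, nor its exact encoding), the sketch below scans a grayscale image column by column, mapping vertical position to pitch and pixel brightness to loudness. The function name, image size, scan duration, and frequency range are all assumptions chosen for the example.

```python
# Illustrative sketch only: a minimal image-to-sound mapping in the spirit of
# visual-to-auditory sensory substitution. NOT the encoding used by the device
# in the study; scan duration, frequency range, and image size are assumptions.

import numpy as np

def image_to_soundscape(image, sample_rate=44100, scan_seconds=1.0,
                        f_min=500.0, f_max=5000.0):
    """Map a 2-D grayscale image (values in [0, 1]) to a mono waveform.

    Columns are scanned left to right over `scan_seconds`; each row is a sine
    tone (higher rows -> higher pitch) whose amplitude follows pixel brightness.
    """
    n_rows, n_cols = image.shape
    samples_per_col = int(sample_rate * scan_seconds / n_cols)
    freqs = np.linspace(f_max, f_min, n_rows)          # row 0 (top) = highest pitch
    t = np.arange(samples_per_col) / sample_rate
    tones = np.sin(2 * np.pi * np.outer(freqs, t))     # (n_rows, samples_per_col)
    # For each column, mix the row tones weighted by that column's brightness.
    waveform = np.concatenate([image[:, col] @ tones for col in range(n_cols)])
    peak = np.max(np.abs(waveform))
    return waveform / peak if peak > 0 else waveform   # normalize to [-1, 1]

# Example: a bright diagonal line yields a pitch sweep across the one-second scan.
img = np.zeros((64, 64))
np.fill_diagonal(img, 1.0)
audio = image_to_soundscape(img)                       # 1-D float array at 44.1 kHz
```

Playing the returned array through any audio output at the given sample rate gives an audible rendering of the image's spatial structure over the scan.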

Additional Information

© 2015 Stiles, Zheng and Shimojo. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Received: 31 January 2015; Accepted: 03 June 2015; Published: 17 June 2015. The authors declare no competing financial interests.

We appreciate Carmel Levitan's and Armand R. Tanguay, Jr.'s feedback on the manuscript. We would also like to thank Peter Meijer, Luis Goncalves, and Enrico Di Bernardo from MetaModal LLC for the use of several of the vOICe devices used in this study. We are grateful for a fellowship from the National Science Foundation (NSF) Graduate Research Fellowship Program [Grant Number DGE-1144469] and research grants from the Della Martin Fund for Discoveries in Mental Illness [Grant Number 9900050]; Japan Science and Technology Agency, Core Research for Evolutional Science and Technology [Grant Number JST.CREST]; and the National Institutes of Health (NIH) [Grant Number R21EY023796A].

Author Contributions: The study design and concept were generated by NS and SS. Subject training and testing were performed by NS and YZ. NS and SS drafted the manuscript; YZ reviewed and commented on the manuscript.

Attached Files

Published - fpsyg-06-00842.pdf

Supplemental Material - video_1.mov

Supplemental Material - video_2.mov

Files (30.2 MB)

Name                 Size      MD5
fpsyg-06-00842.pdf   3.3 MB    5ca0acbfc8dcc4a8a8fd96acab6e13d6
video_1.mov          13.4 MB   c4b7b4aa62ee61b54914d4f13cea9490
video_2.mov          13.4 MB   8ac5c306d3959f11809be2d599c87f60
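
The checksums above can be used to confirm that a download is intact. A minimal sketch, assuming the file has been saved locally under the name listed in the table (the local path is an assumption; adjust as needed):

```python
# Minimal sketch: verify a downloaded file against an MD5 checksum from the
# table above. The local filename is an assumption.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "5ca0acbfc8dcc4a8a8fd96acab6e13d6"    # listed MD5 for fpsyg-06-00842.pdf
print(md5_of("fpsyg-06-00842.pdf") == expected)  # True if the download is intact
```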

Additional details

Created: August 20, 2023
Modified: October 23, 2023