Published November 28, 2018 | Supplemental Material + Submitted + Published
Journal Article | Open Access

Augmented Reality Powers a Cognitive Assistant for the Blind

Abstract

To restore vision for the blind, several prosthetic approaches have been explored that convey raw images to the brain. So far, these schemes all suffer from a lack of bandwidth. An alternative approach would restore vision at the cognitive level, bypassing the need to convey sensory data: a wearable computer captures video and other data, extracts the important scene knowledge, and conveys it to the user in compact form. Here, we implement an intuitive user interface for such a device using augmented reality: each object in the environment has a voice and communicates with the user on command. With minimal training, this system supports many aspects of visual cognition: obstacle avoidance, scene understanding, formation and recall of spatial memories, and navigation. Blind subjects can traverse an unfamiliar multi-story building on their first attempt. To spur further development in this domain, we developed an open-source environment for standardized benchmarking of visual assistive devices.
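As a concrete illustration of the interface described above (a minimal sketch, not the authors' implementation: the object list, coordinates, and the console-printing speak() stub are all hypothetical), the Python below captures the core idea of giving each detected object a voice that announces its label and distance on the user's command, nearest object first. The authors' actual code is linked under Additional Information below.

import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    """A detected object in the user-centric coordinate frame (meters)."""
    label: str
    x: float
    y: float
    z: float

    def distance(self) -> float:
        # Straight-line distance from the user, who sits at the origin.
        return math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)

def speak(text: str, obj: SceneObject) -> None:
    # Stand-in for spatialized text-to-speech: a real AR headset would
    # render this utterance as sound coming from the object's position.
    print(f"[voice at ({obj.x:.1f}, {obj.y:.1f}, {obj.z:.1f})] {text}")

def announce_scene(objects: list[SceneObject]) -> None:
    # On user command, every object names itself, nearest first,
    # so the user can build a mental map of the surroundings.
    for obj in sorted(objects, key=SceneObject.distance):
        speak(f"{obj.label}, {obj.distance():.1f} meters", obj)

if __name__ == "__main__":
    announce_scene([
        SceneObject("door", 3.0, 0.0, 1.2),
        SceneObject("chair", 1.0, -0.5, 0.0),
        SceneObject("stairs", 5.0, 2.0, 0.0),
    ])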

Additional Information

© 2017 Liu et al. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Received: 24 April 2018; Accepted: 27 October 2018; Published: 27 November 2018.

Data and materials availability: Data and code that produced the figures are available on the Dryad Digital Repository.

Data availability: All data generated or analysed during this study are included in the manuscript and supporting files. Source data files for all figures have been deposited in Dryad. Additional code is published in a GitHub repository (https://github.com/meisterlabcaltech/CARA_Public; copy archived at https://github.com/elifesciences-publications/CARA_Public).

Funding: Supported by grant 103212 from the Shurl and Kay Curci Foundation.

Acknowledgments: We thank Ralph Adolphs, David Anderson, and Shin Shimojo for comments on the manuscript, and Kristina Dylla for posing for Figure 1A and for field testing all of the tasks.

Author contributions: Yang Liu: Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing (review and editing). Noelle RB Stiles: Resources, Validation, Investigation, Methodology. Markus Meister: Conceptualization, Supervision, Funding acquisition, Writing (original draft), Project administration, Writing (review and editing).

Ethics: All procedures involving human subjects were reviewed and approved by the Institutional Review Board at Caltech (Human Subjects Protocol 16-0663). All subjects gave their informed consent to the experiments and, where applicable, to the publication of the videos that accompany this article.

Attached Files

Published - elife-37841-v1.pdf

Submitted - 321265.full.pdf

Supplemental Material - elife-37841-transrepform-v1.pdf

Files (13.5 MB)

md5:2f7c2b3a406b2ec179fd5d45d28eaed7 (10.2 MB)
md5:6ee5ae26466e9a58f3f96867576f0fcd (322.3 kB)
md5:dc04cc584f7bb67625f5d996a4d7d364 (3.0 MB)

Additional details

Created: August 19, 2023
Modified: October 23, 2023