Published June 23, 2021 | Supplemental Material
Journal Article | Open Access

A seasonally invariant deep transform for visual terrain-relative navigation

Abstract

Visual terrain-relative navigation (VTRN) is a localization method based on registering a source image taken from a robotic vehicle against a georeferenced target image. With high-resolution imagery databases of Earth and other planets now available, VTRN offers accurate, drift-free navigation for air and space robots even in the absence of external positioning signals. Despite its potential for high accuracy, however, VTRN remains extremely fragile to common and predictable seasonal effects, such as lighting, vegetation changes, and snow cover. Engineered registration algorithms are mature and have provable geometric advantages but cannot accommodate the content changes caused by seasonal effects, resulting in poor matching performance. Approaches based on deep learning can accommodate image content changes but produce opaque position estimates that either lack an interpretable uncertainty or require tedious human annotation. In this work, we address these issues with targeted use of deep learning within an image transform architecture, which converts seasonal imagery to a stable, invariant domain that can be used by conventional algorithms without modification. Our transform preserves the geometric structure and uncertainty estimates of legacy approaches and demonstrates superior performance under extreme seasonal changes while also being easy to train and highly generalizable. We show that classical registration methods perform exceptionally well for robotic visual navigation when stabilized with the proposed architecture and can consistently identify reliable imagery in advance. Gross mismatches were nearly eliminated in challenging and realistic visual navigation tasks that also included topographic and perspective effects.
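The abstract describes a two-stage pipeline: both the vehicle's source image and the georeferenced target image are pushed through a learned, seasonally invariant transform, and the results are then handed to an unmodified classical registration algorithm. The sketch below is a minimal, hypothetical Python illustration of that idea, not the authors' released code: the `seasonal_transform` callable is a stand-in for the paper's U-Net, and OpenCV's normalized cross-correlation is used as one example of the "conventional algorithms" the invariant domain is meant to feed.

```python
# Minimal sketch of transform-then-register VTRN (illustrative only).
# Assumptions: images are single-channel arrays, and `seasonal_transform`
# is a placeholder for a learned U-Net-style network.

import cv2
import numpy as np

def register(source_patch, target_map, seasonal_transform):
    """Estimate the pixel offset of source_patch within target_map.

    Both images are mapped into the invariant domain first, so seasonal
    content (snow, vegetation, lighting) is suppressed before an ordinary
    normalized cross-correlation match, exactly as a legacy pipeline would run.
    """
    src = seasonal_transform(source_patch).astype(np.float32)
    tgt = seasonal_transform(target_map).astype(np.float32)

    # Classical template matching; nothing downstream needs modification.
    scores = cv2.matchTemplate(tgt, src, cv2.TM_CCOEFF_NORMED)
    _, peak, _, peak_loc = cv2.minMaxLoc(scores)

    # The correlation surface doubles as an interpretable confidence measure:
    # a sharp, dominant peak indicates a reliable match.
    return peak_loc, peak, scores

if __name__ == "__main__":
    # Toy usage with an identity "transform" as the placeholder network.
    rng = np.random.default_rng(0)
    target = rng.integers(0, 255, (512, 512), dtype=np.uint8)
    source = target[100:164, 200:264].copy()
    loc, score, _ = register(source, target, lambda im: im)
    print(loc, score)  # expected: (200, 100) with score near 1.0
```

Because the learned network only re-renders the imagery and never produces the position estimate itself, the geometric guarantees and uncertainty of the classical matcher (the correlation peak and its sharpness) carry over unchanged.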

Additional Information

© 2021 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Submitted 19 October 2020; Accepted 1 June 2021; Published 23 June 2021.

Acknowledgments: We thank P. Tokumaru. This project was funded in part by the Boeing Company, with R. K. Li as Boeing Project Manager. C.T.L. acknowledges the National Science Foundation Graduate Research Fellowship under grant no. DGE1745301. A.S.M. was supported in part by Caltech's Summer Undergraduate Research Fellowship (SURF).

Author contributions: A.T.F. developed the deep transform, implemented the software, designed the test and training datasets, and designed the experiments. C.T.L. contributed to architecture development, implemented the software, and conducted the experiments. A.S.M. contributed to the VTRN demonstration and conducted the experiments. S.-J.C. contributed to development of the VTRN concept and robotics applications and directed the research activities. All authors participated in the preparation of the manuscript.

Competing interests: A.T.F., C.T.L., and S.-J.C. are inventors on a pending patent submitted by the California Institute of Technology that covers the material described herein. The authors declare that they have no other competing interests.

Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper or the Supplementary Materials. The imagery and the U-Net network used are publicly available at the relevant cited sources.

Attached Files

Supplemental Material - abf3320_Movie_S1.mp4

Supplemental Material - abf3320_Movie_S2.mp4

Supplemental Material - abf3320_Movie_S3.mp4

Supplemental Material - abf3320_SM.pdf

Files (68.3 MB total)

md5:f2f9d590ac8b81a9b49436f5f0dfca9d (7.7 MB)
md5:aa00adae111cfb81c37b1c7cef737269 (9.0 MB)
md5:89585d44a16b52b5d325a0638b3d685c (302.3 kB)
md5:acc799d07379e7ac879ecdf6c5f35db8 (51.4 MB)

Additional details

Created: August 20, 2023
Modified: October 23, 2023