Published September 2019 | Accepted Version
Conference Paper | Open Access

DGNSS-Vision Integration for Robust and Accurate Relative Spacecraft Navigation

Abstract

Relative spacecraft navigation based on the Global Navigation Satellite System (GNSS) has already been successfully demonstrated in low Earth orbit (LEO). Very high accuracy, on the order of millimeters, has been achieved in post-processing using carrier phase differential GNSS (CDGNSS) and resolving the integer number of wavelengths (the integer ambiguity) between the GNSS transmitters and the receiver. However, the performance achievable on board, in real time, above LEO and the GNSS constellation would be significantly lower due to limited computational resources, weaker signals, and worse geometric dilution of precision (GDOP). At the same time, monocular vision provides lower accuracy than CDGNSS when there is significant spacecraft separation, and its accuracy degrades further for larger baselines and wider fields of view (FOVs). To increase the robustness, continuity, and accuracy of a real-time, on-board GNSS-based relative navigation solution in GNSS-degraded environments such as geosynchronous and high Earth orbits, we propose a novel navigation architecture based on a tight fusion of carrier phase GNSS observations and monocular vision-based measurements. This architecture enables fast, autonomous relative pose estimation of cooperative spacecraft even under high GDOP and low GNSS visibility, where the GNSS signals are degraded, weak, or cannot be tracked continuously. In this paper we describe the architecture and implementation of a multi-sensor navigation solution and validate the proposed method in simulation. We use a dataset of images synthetically generated according to a chaser/target relative motion in geostationary Earth orbit (GEO), together with realistic carrier phase and code-based GNSS observations simulated at the receiver positions in the same orbits. We demonstrate that our fusion solution provides higher accuracy, higher robustness, and faster ambiguity resolution under degraded GNSS signal conditions, even when using wide-FOV cameras.
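As background for readers unfamiliar with CDGNSS, the abstract's "integer ambiguity" refers to the unknown whole number of carrier wavelengths in the phase observable. A standard (textbook, not paper-specific) double-differenced carrier phase model between receivers $a,b$ and satellites $i,j$ is:

```latex
% Double-differenced carrier phase observation (standard CDGNSS model;
% notation is generic background, not taken from this paper):
%   \lambda        carrier wavelength
%   \nabla\Delta   double-difference operator (across receivers and satellites)
%   \phi           carrier phase measurement (cycles)
%   \rho           geometric range
%   N              integer ambiguity (the quantity resolved in CDGNSS)
%   \epsilon       residual noise and multipath
\lambda \, \nabla\Delta\phi_{ab}^{ij}
  = \nabla\Delta\rho_{ab}^{ij}
  + \lambda \, \nabla\Delta N_{ab}^{ij}
  + \epsilon_{ab}^{ij}
```

Double differencing cancels satellite and receiver clock errors, so once $\nabla\Delta N_{ab}^{ij}$ is fixed to its integer value, the relative baseline can be estimated at the millimeter level noted in the abstract; the paper's contribution is to speed up and robustify this ambiguity resolution by tightly fusing vision measurements when GDOP is high or signals are weak.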

Additional Information

The first author was supported by the Swiss National Science Foundation (SNSF). This work was also supported in part by the Jet Propulsion Laboratory (JPL). Government sponsorship is acknowledged. The authors thank F. Y. Hadaegh, A. Rahmani, and S. R. Alimo.

Attached Files

Accepted Version - DGNSS_Vision_Integration_for_Robust_and_Accurate_Pose_Determination_of_Cooperative_Spacecraft__11_.pdf


Additional details

Created: August 19, 2023
Modified: October 18, 2023