Pose Estimation of Uncooperative Spacecraft from Monocular Images Using Neural Network Based Keypoints
Abstract
A novel method for monocular pose estimation of uncooperative spacecraft is presented, using keypoints specialized for a given target. A set of robust keypoints is created by examining the effectiveness of existing localization algorithms across simulated views from different perspectives. Feature extraction and matching are used to build a model of the spacecraft before the flight mission, employing the same feature extraction algorithms that can be used during the mission. Further, a visibility map is determined for each keypoint to aid outlier filtering, matching, and measurement covariance estimation. For initialization and matching, a Convolutional Neural Network (CNN) is trained to generate descriptors for the pre-computed keypoints that are robust to illumination, scale, and affine changes. The second part of the paper focuses on pose determination and filtering after keypoint-to-model matching. While several approaches for pose acquisition have been formulated, we propose a novel tracking method that uses a nonlinear filter based on the spacecraft's translational and rotational relative dynamics, which estimates the covariance of the vision-based observations from the keypoint preprocessing information. The estimated propagated covariance for each extracted feature is then used to aid feature matching.
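The abstract's final idea, using each keypoint's propagated covariance to aid feature matching, is commonly realized as Mahalanobis-distance gating: a detection is only considered a match for a predicted keypoint if it falls inside a chi-square confidence ellipse defined by that keypoint's 2x2 covariance. The sketch below is a minimal, hypothetical illustration of that gating step (the function name, data layout, and 95% gate value are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

# 95% gate for a 2-DoF chi-square distribution (assumed threshold)
CHI2_2DOF_95 = 5.991

def gate_matches(predicted, covariances, detections, thresh=CHI2_2DOF_95):
    """Covariance-aware nearest-neighbor matching.

    predicted   : (N, 2) array of propagated keypoint positions [px]
    covariances : (N, 2, 2) array of per-keypoint measurement covariances
    detections  : (M, 2) array of extracted feature positions [px]
    Returns a list of length N: index of the matched detection, or -1
    if no detection falls inside the chi-square gate.
    """
    matches = []
    for mu, P in zip(predicted, covariances):
        P_inv = np.linalg.inv(P)
        # Squared Mahalanobis distance from each detection to the prediction
        d2 = np.array([(z - mu) @ P_inv @ (z - mu) for z in detections])
        j = int(np.argmin(d2))
        matches.append(j if d2[j] < thresh else -1)
    return matches
```

A keypoint whose propagated covariance is large (e.g., poorly observed in the visibility map) thus accepts matches over a wider image region, while well-constrained keypoints reject distant candidates as outliers.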
Additional Information
© 2020 American Institute of Aeronautics and Astronautics. Published Online: 5 January 2020.
- Eprint ID
- 100664
- Resolver ID
- CaltechAUTHORS:20200113-084156386
- Created
- 2020-01-13 (from EPrint's datestamp field)
- Updated
- 2021-11-16 (from EPrint's last_modified field)
- Caltech groups
- GALCIT
- Other Numbering System Name
- AIAA Paper
- Other Numbering System Identifier
- 2020-1874