The next best touch for model-based localization
Abstract
This paper introduces a tactile sensing method by which an autonomous robot equipped with suitable touch sensors can choose its next sensing action so as to accurately localize an object in its environment. The method uses an information-gain metric based on the uncertainty of the object's pose to select the next best touch action: intuitively, the optimal action is the most informative one. The chosen action is then executed, and the estimate of the object's pose is updated with a state estimator. The method is further extended to choose the most informative action for simultaneously localizing the object and estimating its model parameters or model class. Results are presented both in simulation and in hardware experiments on the DARPA Autonomous Robotic Manipulation Software (ARM-S) robot.
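The selection criterion described in the abstract can be illustrated with a minimal sketch: maintain a particle belief over the object's pose, and for each candidate touch action compute the expected reduction in belief entropy under a binary contact/no-contact measurement, picking the action that maximizes it. This is an illustrative toy in one dimension, not the paper's implementation; the sigmoid sensor model `contact_prob` and the candidate action set are assumptions made here for the example.

```python
import math
import random

def entropy(weights):
    """Shannon entropy of a normalized weight vector."""
    return -sum(w * math.log(w) for w in weights if w > 0.0)

def contact_prob(pose, action):
    """Hypothetical binary sensor model: probability that touching at
    location `action` yields contact, given the object pose.
    A smooth step centred on the pose (illustrative only)."""
    return 1.0 / (1.0 + math.exp(-(pose - action) / 0.05))

def expected_info_gain(particles, weights, action):
    """Expected reduction in belief entropy after observing a binary
    contact/no-contact outcome at touch location `action`."""
    h_prior = entropy(weights)
    p_c = [contact_prob(x, action) for x in particles]
    # Marginal probability of observing "contact" under the current belief.
    pz1 = sum(w * p for w, p in zip(weights, p_c))
    gain = 0.0
    for pz, likelihood in ((pz1, p_c), (1.0 - pz1, [1.0 - p for p in p_c])):
        if pz <= 1e-12:
            continue
        # Bayes update of the particle weights for this outcome.
        post = [w * l for w, l in zip(weights, likelihood)]
        s = sum(post)
        post = [p / s for p in post]
        gain += pz * (h_prior - entropy(post))
    return gain

def next_best_touch(particles, weights, candidate_actions):
    """Pick the touch action with maximal expected information gain."""
    return max(candidate_actions,
               key=lambda a: expected_info_gain(particles, weights, a))

# Belief over a 1-D object position, concentrated around 0.5.
random.seed(0)
particles = [random.gauss(0.5, 0.1) for _ in range(200)]
weights = [1.0 / len(particles)] * len(particles)
actions = [0.0, 0.25, 0.5, 0.75, 1.0]
best = next_best_touch(particles, weights, actions)
```

As expected, the selected action is the one probing the most ambiguous region of the belief (near the particle mean), where the contact outcome is least predictable and therefore most informative; touches far from the belief, whose outcome is nearly certain, yield almost no gain.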
Additional Information
© 2013 European Union. The author wishes to acknowledge the support of this work by the Natural Sciences and Engineering Research Council of Canada (NSERC) and DARPA's Autonomous Robotic Manipulation Software Track (ARM-S) program through an agreement with NASA. © 2013. All rights reserved.

Additional details
- Eprint ID
- 96786
- Resolver ID
- CaltechAUTHORS:20190627-105232401
- Funders
- Natural Sciences and Engineering Research Council of Canada (NSERC)
- Defense Advanced Research Projects Agency (DARPA)
- NASA
- Created
- 2019-06-27 (from EPrint's datestamp field)
- Updated
- 2021-11-16 (from EPrint's last_modified field)