Published July 14, 2020 | Accepted Version + Published
Journal Article | Open Access

Automated Synthetic-to-Real Generalization

Abstract

Models trained on synthetic images often suffer degraded generalization to real data. By convention, these models are initialized from an ImageNet-pretrained representation, yet the role of this ImageNet knowledge is seldom discussed, despite common practices that leverage it to maintain generalization ability. One example is the careful hand-tuning of early stopping and layer-wise learning rates, which has been shown to improve synthetic-to-real generalization but is laborious and heuristic. In this work, we explicitly encourage the synthetically trained model to maintain representations similar to those of the ImageNet-pretrained model, and propose a learning-to-optimize (L2O) strategy to automate the selection of layer-wise learning rates. We demonstrate that the proposed framework can significantly improve synthetic-to-real generalization performance without seeing or training on real data, while also benefiting downstream tasks such as domain adaptation. Code is available at: https://github.com/NVlabs/ASG.
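The abstract describes two ingredients: keeping the synthetically trained model's representations close to those of an ImageNet-pretrained model, and automating layer-wise learning rates. The sketch below is not the authors' ASG implementation; it illustrates, under assumptions, one way to express the first idea in PyTorch by penalizing the L2 distance between the current backbone's features and those of a frozen ImageNet-pretrained copy. The choice of ResNet-50, the feature layer, the L2 distance, and the weight `lambda_reg` are all illustrative assumptions.

```python
# Minimal sketch (not the authors' ASG code): keep the synthetically trained
# backbone's features close to a frozen ImageNet-pretrained reference copy.
import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Task model trained on synthetic data, initialized from ImageNet weights.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).to(device)

# Frozen reference copy that retains the original ImageNet representation.
reference = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).to(device)
reference.eval()
for p in reference.parameters():
    p.requires_grad_(False)

def backbone_features(net: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Globally pooled features before the classifier head."""
    feats = nn.Sequential(*list(net.children())[:-1])(x)
    return torch.flatten(feats, 1)

task_loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
lambda_reg = 0.1  # weight of the representation-retention term (assumed value)

def training_step(images: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """One optimization step on a batch of synthetic images and labels."""
    logits = model(images)
    with torch.no_grad():
        ref_feats = backbone_features(reference, images)
    cur_feats = backbone_features(model, images)
    # Task loss on synthetic labels plus a penalty for drifting away from
    # the ImageNet-pretrained features.
    loss = task_loss_fn(logits, labels) + lambda_reg * nn.functional.mse_loss(
        cur_feats, ref_feats
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss
```

The paper's second ingredient, L2O-selected layer-wise learning rates, could in principle be wired in by assigning each layer its own entry in the optimizer's parameter groups (standard `torch.optim` parameter groups) and letting a learned policy set those per-group rates; the learned policy itself is beyond this sketch.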

Additional Information

© 2020 by the author(s). Work done during an internship at NVIDIA. We appreciate the computing resources provided by NVIDIA's GPU infrastructure. We also thank the four anonymous reviewers for their discussion and suggestions, and Yang Zou for his help with the domain adaptation experiments. The research of Z. Wang was partially supported by NSF Award RI-1755701.

Attached Files

Published - chen20x.pdf

Accepted Version - 2007.06965.pdf

Files

Total size: 9.1 MB

4.6 MB (md5:e3be99769a10dddc8f90d66306927511)
4.5 MB (md5:84abd1b16815d1637aa4c464b3173d7d)

Additional details

Created: August 19, 2023
Modified: October 20, 2023