Journal Article | Published October 2021 | Open
Online Optimization With Predictions and Switching Costs: Fast Algorithms and the Fundamental Limit
- Creators
- Li, Yingying
- Qu, Guannan
- Li, Na
Abstract
This article considers online optimization with a finite prediction window of cost functions and an additional switching cost on the decisions. We establish fundamental lower bounds on the dynamic regret of any online algorithm, for both the with-prediction and no-prediction cases. In addition, we propose two gradient-based online algorithms, receding horizon gradient descent (RHGD) and receding horizon accelerated gradient (RHAG), and provide their regret upper bounds. RHAG's upper bound is close to the lower bound, indicating that the lower bound is tight and that RHAG is near-optimal. Finally, we conduct numerical experiments to complement the theoretical results.
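As an illustration of the receding-horizon idea described in the abstract, the sketch below implements a simple receding-horizon gradient scheme (not the paper's exact RHGD/RHAG algorithms, whose update order and step sizes differ). All names, the quadratic tracking costs, and the parameter values are assumptions for illustration: at each time step the agent sees the next `W` predicted targets, runs a few gradient steps on the `W`-step lookahead objective with a quadratic switching cost, and commits only the first planned decision.

```python
import numpy as np

def receding_horizon_gd(thetas, W=3, beta=1.0, eta=0.1, inner_steps=5):
    """Illustrative receding-horizon gradient scheme for online optimization
    with switching costs.  Per-step cost (an assumed quadratic example):
        f_t(x) = 0.5 * (x - theta_t)^2  +  0.5 * beta * (x - x_{t-1})^2
    At each time t the agent sees the next W targets (the prediction window),
    minimizes the windowed objective with a few gradient steps, and commits
    only the first planned decision."""
    T = len(thetas)
    x_prev = 0.0          # previous committed decision
    decisions = []
    for t in range(T):
        window = thetas[t:t + W]                       # predicted targets
        plan = np.full(len(window), x_prev, dtype=float)  # warm start
        for _ in range(inner_steps):
            grad = np.zeros_like(plan)
            for k in range(len(window)):
                left = x_prev if k == 0 else plan[k - 1]
                # tracking term + switching cost to the left neighbor
                grad[k] += (plan[k] - window[k]) + beta * (plan[k] - left)
                # switching cost coupling with the right neighbor
                if k + 1 < len(window):
                    grad[k] -= beta * (plan[k + 1] - plan[k])
            plan -= eta * grad
        x_prev = plan[0]                               # commit first decision
        decisions.append(x_prev)
    return decisions
```

For example, on a target sequence that jumps from 0 to 1 halfway through, the committed decisions stay at 0 and then move gradually toward 1, with the switching cost damping how fast they move. A larger window `W` lets the plan anticipate the jump earlier, which is the mechanism behind the regret improvement the article quantifies.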
Additional Information
© 2020 IEEE. Manuscript received August 21, 2020; accepted October 28, 2020. Date of publication November 24, 2020; date of current version September 27, 2021. This work was supported in part by the NSF under CAREER Grant ECCS-1553407, in part by the AFOSR under YIP Grant FA9550-18-1-0150, and in part by the ONR under YIP Grant N00014-19-1-2217. Recommended by Associate Editor S. Grammatico.
Attached Files
Submitted - 1801.07780.pdf
Files
1801.07780.pdf (586.9 kB) — md5:829d9a410b0c1a8b50ac619753cb221e
Additional details
- Eprint ID
- 111619
- DOI
- 10.1109/tac.2020.3040249
- Resolver ID
- CaltechAUTHORS:20211022-223024246
- Funders
- NSF (ECCS-1553407); Air Force Office of Scientific Research (AFOSR) (FA9550-18-1-0150); Office of Naval Research (ONR) (N00014-19-1-2217)
- Created
- 2021-10-23 (from EPrints datestamp field)
- Updated
- 2021-10-25 (from EPrints last_modified field)