Journal Article | Published June 2000 | Open Access

The Early Restart Algorithm

Abstract

Consider an algorithm whose time to convergence is unknown (because of some random element in the algorithm, such as a random initial weight choice for neural network training). Consider the following strategy. Run the algorithm for a specific time T. If it has not converged by time T, cut the run short and rerun it from the start (and repeat the same strategy for every subsequent run). This so-called restart mechanism was proposed by Fahlman (1988) in the context of backpropagation training. It is advantageous in problems that are prone to local minima, or when there is large variability in convergence time from run to run, and may lead to a speed-up in such cases. In this article, we analyze the restart mechanism theoretically and obtain conditions on the probability density of the convergence time under which restart improves the expected convergence time. We also derive the optimal restart time. We apply the derived formulas to several cases, including steepest-descent algorithms.
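
As a concrete illustration (not taken from the article itself), the following minimal Python sketch implements the strategy described in the abstract: a hypothetical single_run whose convergence time is random and heavy-tailed is cut short after a budget T and restarted from scratch until one run converges. All names and the lognormal convergence-time model are assumptions made for the example.

import random

def restart_strategy(single_run, T, rng):
    # Run with budget T; if the run does not converge, discard it and
    # start over from scratch. Returns the total time spent until success.
    total_time = 0.0
    while True:
        converged, spent = single_run(T, rng)
        total_time += spent
        if converged:
            return total_time

def single_run(budget, rng):
    # Hypothetical stand-in for one training run with a fresh random
    # initialization: its convergence time is drawn from a heavy-tailed
    # (lognormal) distribution, mimicking large run-to-run variability.
    t = rng.lognormvariate(0.0, 2.0)
    return (True, t) if t <= budget else (False, budget)

if __name__ == "__main__":
    rng = random.Random(0)
    n = 20000
    plain = sum(rng.lognormvariate(0.0, 2.0) for _ in range(n)) / n
    restarted = sum(restart_strategy(single_run, 2.0, rng) for _ in range(n)) / n
    print(f"mean convergence time, no restarts:    {plain:.2f}")
    print(f"mean convergence time, restart at T=2: {restarted:.2f}")

For intuition (a standard renewal-style argument, not necessarily the article's exact notation): if F is the cumulative distribution of the single-run convergence time tau, restarting every T time units gives an expected total time of E[min(tau, T)] / F(T), so restarting pays off whenever this quantity falls below E[tau]; the article's conditions on the density and the optimal restart time refine this picture.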

Additional Information

© 2000 Massachusetts Institute of Technology. Received October 2, 1998; accepted April 13, 1999. Posted Online March 13, 2006. We thank Yaser Abu-Mostafa and the Caltech Learning Systems Group for their useful input. We also acknowledge the support of NSF's Engineering Research Center at Caltech.

Attached Files

Published - MAGnc00b.pdf (179.2 kB)
md5:8ff9dbc8a3da9ea3a1f3f4dc156b3126

Additional details

Created: August 19, 2023
Modified: October 24, 2023