Published 1993
Book Section - Chapter (Open Access)

A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization

Abstract

A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error, allowing a substantially faster learning speed. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm and present simulation results for dynamic trajectory learning in recurrent networks.
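The basic mechanism the abstract describes can be illustrated with a minimal sketch: perturb the entire parameter vector at random, measure the resulting change in error, and step the parameters against that measured change. The sketch below assumes the baseline weight-perturbation form of stochastic error descent, not the paper's modified update rule; the function names, step sizes, and the quadratic test error are illustrative and do not appear in the paper.

```python
import numpy as np

def stochastic_error_descent(error_fn, theta, sigma=0.01, mu=0.05, steps=2000):
    """Model-free stochastic error descent (hypothetical sketch).

    Each step applies a random +/- sigma perturbation pi to the whole
    parameter vector, measures the resulting change in error, and steps
    the parameters against that measured change. Only two error
    evaluations per step are needed; no gradients and no knowledge of
    the network's internal structure are required.
    """
    err = error_fn(theta)
    for _ in range(steps):
        # Random parallel perturbation of every parameter at once.
        pi = sigma * np.random.choice([-1.0, 1.0], size=theta.shape)
        delta_err = error_fn(theta + pi) - err
        # E[delta_err * pi] = sigma^2 * gradient, so dividing by
        # sigma^2 gives an unbiased stochastic gradient step on average.
        theta = theta - mu * delta_err * pi / sigma**2
        err = error_fn(theta)
    return theta

# Toy usage: minimize a simple quadratic error surface.
theta = stochastic_error_descent(lambda t: float(np.sum(t**2)),
                                 np.random.randn(5))
print(theta)  # should end up close to the zero vector
```

The division by sigma**2 makes each update an unbiased gradient estimate in expectation; the modified rule proposed in the paper goes further, arranging that each individual perturbation contributes a decrease in error, which this baseline sketch does not reproduce.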

Additional Information

© 1993 Morgan Kaufmann. We thank J. Alspector, P. Baldi, B. Flower, D. Kirk, M. van Putten, A. Yariv, and many other individuals for valuable suggestions and comments on the work presented here.

Attached Files

Published - 690-a-fast-stochastic-error-descent-algorithm-for-supervised-learning-and-optimization.pdf

Additional details

Created: August 20, 2023
Modified: January 13, 2024