Published May 1992
Journal Article | Open Access

A practical Bayesian framework for backpropagation networks

Abstract

A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian "evidence" automatically embodies "Occam's razor," penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.
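
For orientation, here is a minimal sketch of the quantities the abstract names, in the notation standard for this framework (w the network weights, D the data, α and β the weight-decay and noise hyperparameters, E_W and E_D the regularizer and data error, k the number of weights, N the number of data points); this is a summary of the usual presentation, not a substitute for the paper's derivations. The posterior over weights is

\[ P(\mathbf{w} \mid D, \alpha, \beta, \mathcal{H}) = \frac{\exp\!\bigl(-\alpha E_W - \beta E_D\bigr)}{Z_M(\alpha, \beta)} . \]

The evidence for the hyperparameters follows from a Gaussian approximation around the most probable weights \(\mathbf{w}_{\mathrm{MP}}\), with Hessian \(\mathbf{A} = \nabla\nabla(\alpha E_W + \beta E_D)\):

\[ \ln P(D \mid \alpha, \beta, \mathcal{H}) \simeq -\alpha E_W^{\mathrm{MP}} - \beta E_D^{\mathrm{MP}} - \tfrac{1}{2} \ln \det \mathbf{A} + \tfrac{k}{2} \ln \alpha + \tfrac{N}{2} \ln \beta + \text{const.} \]

The "effective number of well-determined parameters" of item (4) is then, with \(\lambda_i\) the eigenvalues of \(\beta \nabla\nabla E_D\),

\[ \gamma = \sum_{i=1}^{k} \frac{\lambda_i}{\lambda_i + \alpha} , \]

which counts how many weight directions the data actually constrain relative to the prior.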

Additional Information

© 1992 Massachusetts Institute of Technology. Received 21 May 1991; accepted 29 October 1991. Posted Online March 13, 2008. I thank Mike Lewicki, Nick Weir, and Haim Sompolinsky for helpful conversations, and Andreas Herz for comments on the manuscript. This work was supported by a Caltech Fellowship and a Studentship from SERC, UK.

Files

Published - MACnc92b.pdf (1.2 MB)
md5:cf481a66d256e04f122979374034ce02

Additional details

Created: August 20, 2023
Modified: October 18, 2023