Published September 1994
Book Section - Chapter

A learning algorithm for multi-layer perceptrons with hard-limiting threshold units

Abstract

We propose a novel learning algorithm to train multilayer networks of linear-threshold (hard-limiting) units. The learning scheme is based on standard backpropagation but uses "pseudo-gradient" descent, in which the gradient of a sigmoid function serves as a heuristic hint in place of that of the hard-limiting function. We provide a justification that, for networks with one hidden layer, the pseudo-gradient always points in the correct downhill direction on the error surface. Such networks have several advantages: their internal representations in the hidden layers are clearly interpretable, and well-defined classification rules can be easily extracted; classification after training requires only very simple calculations; and they are easily implementable in hardware. Comparative experimental results on several benchmark problems, using both conventional backpropagation networks and our learning scheme for multilayer perceptrons, are presented and analyzed.
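To make the pseudo-gradient idea concrete, here is a minimal NumPy sketch of the scheme the abstract describes: the forward pass uses hard-limiting threshold units throughout, while the backward pass runs standard backpropagation with the sigmoid's derivative substituted for the step function's (almost everywhere zero) derivative. The 0/1 step convention, layer sizes, learning rate, squared-error loss, and the XOR toy task are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def step(z):
    """Hard-limiting threshold unit: 1 if z >= 0, else 0 (0/1 convention assumed)."""
    return (z >= 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pseudo_grad(z):
    """Heuristic surrogate: the sigmoid's derivative, used in place of
    the derivative of the hard-limiting function."""
    s = sigmoid(z)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 2, 4, 1, 0.5

W1 = rng.normal(size=(n_hid, n_in)); b1 = np.zeros(n_hid)
W2 = rng.normal(size=(n_out, n_hid)); b2 = np.zeros(n_out)

# XOR as a toy classification task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

for epoch in range(2000):
    for x, y in zip(X, Y):
        # Forward pass: hard-limiting units only.
        z1 = W1 @ x + b1
        h = step(z1)
        z2 = W2 @ h + b2
        o = step(z2)

        # Backward pass: standard backprop deltas, with pseudo_grad
        # standing in for the step function's derivative.
        d2 = (o - y) * pseudo_grad(z2)
        d1 = (W2.T @ d2) * pseudo_grad(z1)

        W2 -= lr * np.outer(d2, h); b2 -= lr * d2
        W1 -= lr * np.outer(d1, x); b1 -= lr * d1

# Inference after training needs only weighted sums and comparisons.
pred = step(W2 @ step(W1 @ X.T + b1[:, None]) + b2[:, None])
print(pred.T.ravel())  # ideally [0, 1, 1, 0]
```

Note how the sketch reflects the advantages the abstract claims: after training, classification involves no sigmoid evaluations at all, only weighted sums and threshold comparisons, and each hidden unit realizes an interpretable linear decision rule.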

Additional Information

© 1994 IEEE. The research described in this paper was supported by ARPA under grant numbers AFOSR-90-0199 and N00014-92-5-1860.

Files

Published - 00366045.pdf (477.0 kB)
md5:04bfe8d96650e8783625be95173a8c07

Additional details

Created: August 20, 2023
Modified: October 20, 2023