Published August 1, 1986
Book Section - Chapter Open

VLSI architectures for implementation of neural networks

Abstract

A large-scale collective system implementing a specific model of associative memory was described by Hopfield [1]. A circuit model for this operation is illustrated in Figure 1 and consists of three major components. A collection of active gain elements (amplifiers, or "neurons") with gain function V = g(v) is connected by a passive interconnect matrix that provides unidirectional excitatory or inhibitory connections ("synapses") between the output of one neuron and the input of another. The strength of each interconnection is given by the conductance G_ij = G_0 T_ij. The requirements placed on the gain function g(v) are not severe [2] and are easily met by VLSI-realizable amplifiers. The third component is the set of capacitances that determine the time evolution of the system; these are modelled as lumped capacitances. This formulation leads to the equations of motion shown in Figure 2 and to a Liapunov energy function that governs the dynamics of the system and predicts the location of stable states (memories) when the matrix T is symmetric.
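The circuit model above can be sketched numerically. The following is a minimal simulation, assuming the standard analog Hopfield equation of motion C dv_i/dt = Σ_j T_ij g(v_j) − v_i/R and a tanh gain function for g(v); the Hebbian outer-product rule used to build T, and all parameter values, are illustrative assumptions, not details from the chapter.

```python
import numpy as np

# Sketch of the analog Hopfield dynamics described in the abstract.
# T is the symmetric interconnect ("synapse") matrix; g(v) = tanh(4v)
# is an assumed high-gain amplifier characteristic; C and R are the
# lumped capacitance and input resistance of each neuron.
rng = np.random.default_rng(0)
n = 16

# Store two random +/-1 patterns with an outer-product (Hebb) rule
# (an assumption; the abstract only requires T to be symmetric).
patterns = rng.choice([-1.0, 1.0], size=(2, n))
T = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(T, 0.0)           # no self-connections; T stays symmetric

C, R, dt = 1.0, 1.0, 0.05
v = rng.normal(scale=0.1, size=n)  # internal node voltages
for _ in range(400):
    V = np.tanh(4.0 * v)           # amplifier outputs V = g(v)
    # Equation of motion: C dv/dt = T @ V - v/R
    v += dt * (T @ V - v / R) / C

V = np.tanh(4.0 * v)
# Overlap of the settled state with each stored pattern (up to sign);
# for a symmetric T the Liapunov energy argument predicts convergence
# to a stable state.
overlaps = [abs(V @ p) / n for p in patterns]
residual = float(np.linalg.norm(T @ V - v / R))
```

With symmetric T and a monotone gain function, the energy-function argument cited in the abstract guarantees the continuous dynamics settle to a fixed point, which the small final residual reflects.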

Additional Information

© 1986 American Institute of Physics. This research was supported by the System Development Foundation.

