VLSI architectures for implementation of neural networks
- Other: Denker, John S.
Abstract
A large-scale collective system implementing a specific model for associative memory was described by Hopfield [1]. A circuit model for this operation is illustrated in Figure 1 and consists of three major components. A collection of active gain elements (called amplifiers or "neurons") with gain function V = g(v) is connected by a passive interconnect matrix which provides unidirectional excitatory or inhibitory connections ("synapses") between the output of one neuron and the input to another. The strength of this interconnection is given by the conductance G_ij = G_0 T_ij. The requirements placed on the gain function g(v) are not very severe [2] and are easily met by VLSI-realizable amplifiers. The third component is the set of lumped capacitances that determine the time evolution of the system. This formulation leads to the equations of motion shown in Figure 2 and to a Liapunov energy function that governs the dynamics of the system and, for a symmetric matrix T, predicts the location of the stable states (memories).
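For reference, the equations of motion and the Liapunov energy function mentioned above take the standard continuous Hopfield form [1, 2] sketched below. This is a sketch under stated assumptions rather than a reproduction of Figure 2: the lumped parameters C_i and R_i (input capacitance and resistance at the input of neuron i) and the omission of external bias currents are assumptions not spelled out in the abstract.

```latex
% Standard continuous Hopfield dynamics for the circuit of Figure 1 [1, 2].
% Assumptions: C_i and R_i are the lumped input capacitance and resistance of
% neuron i; external bias currents are omitted.
\begin{align}
  C_i \frac{dv_i}{dt} &= \sum_j G_0 T_{ij}\, V_j - \frac{v_i}{R_i},
  \qquad V_i = g(v_i) \\
  % Liapunov energy function: for symmetric T it is non-increasing along
  % trajectories, so its local minima are the stable states (memories).
  E &= -\frac{1}{2} \sum_i \sum_j G_0 T_{ij}\, V_i V_j
       + \sum_i \frac{1}{R_i} \int_0^{V_i} g^{-1}(V)\, dV
\end{align}
```

For symmetric T, dE/dt ≤ 0 along trajectories of the first equation, so the network relaxes into local minima of E, which serve as the stored memories.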
Additional Information
© 1986 American Institute of Physics. This research was supported by the System Development Foundation.
Additional details
- Eprint ID
- 52838
- Resolver ID
- CaltechAUTHORS:20141215-164438997
- Funders
- System Development Foundation
- Created
- 2014-12-16 (from EPrint's datestamp field)
- Updated
- 2021-11-10 (from EPrint's last_modified field)
- Series Name
- AIP Conference Proceedings
- Series Volume or Issue Number
- 151