Analog VLSI Implementation of Multi-dimensional Gradient Descent
- Creators
- Kirk, David B.
- Kerns, Douglas
Abstract
We describe an analog VLSI implementation of a multi-dimensional gradient estimation and descent technique for minimizing an on-chip scalar function f(). The implementation uses noise injection and multiplicative correlation to estimate derivatives, as in [Anderson, Kerns 92]. One intended application of this technique is setting circuit parameters on-chip automatically, rather than manually [Kirk 91]. Gradient descent optimization may be used to adjust synapse weights for a backpropagation or other on-chip learning implementation. The approach combines the features of continuous multi-dimensional gradient descent and the potential for an annealing style of optimization. We present data measured from our analog VLSI implementation.
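The abstract's estimation idea — inject noise into the parameters and multiplicatively correlate the resulting change in f() with that noise to recover the gradient — can be sketched in software. The names, the noise scale `sigma`, the sample count, and the quadratic test function below are illustrative assumptions; the paper realizes this with continuous-time analog circuits on-chip, not discrete sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_gradient(f, p, sigma=1e-3, samples=100):
    """Estimate grad f(p) by noise injection and multiplicative
    correlation: for zero-mean Gaussian noise n with variance
    sigma^2, E[(f(p+n) - f(p)) * n] ~= sigma^2 * grad f(p)."""
    f0 = f(p)
    g = np.zeros_like(p)
    for _ in range(samples):
        n = rng.normal(0.0, sigma, size=p.shape)  # injected noise
        g += (f(p + n) - f0) * n                  # correlate df with noise
    return g / (samples * sigma**2)               # normalize by noise power

def gradient_descent(f, p, lr=0.1, steps=200):
    """Descend the estimated gradient (software stand-in for the
    chip's continuous multi-dimensional descent)."""
    for _ in range(steps):
        p = p - lr * estimate_gradient(f, p)
    return p

# Illustrative scalar function: a simple quadratic bowl with minimum at 0.
f = lambda x: float(np.sum(x**2))
p_min = gradient_descent(f, np.array([1.0, -2.0]))
```

Raising `sigma` trades estimator accuracy for larger exploratory perturbations, which is one way to view the annealing-style optimization the abstract mentions.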
Additional Information
© 1993 Morgan Kaufmann. This work was supported in part by an AT&T Bell Laboratories Ph.D. Fellowship, and by grants from Apple, DEC, Hewlett Packard, and IBM. Additional support was provided by NSF (ASC-S9-20219), as part of the NSF/DARPA STC for Computer Graphics and Scientific Visualization. All opinions, findings, conclusions, or recommendations expressed in this document are those of the author and do not necessarily reflect the views of the sponsoring agencies.
Attached Files
Published - 632-analog-vlsi-implementation-of-gradient-descent.pdf
Additional details
- Eprint ID
- 64106
- Resolver ID
- CaltechAUTHORS:20160129-165404707
- AT&T Bell Laboratories
- Apple Computer
- DEC
- Hewlett-Packard
- IBM
- NSF
- ASC-S9-20219
- Created
- 2016-02-03 (from EPrint's datestamp field)
- Updated
- 2019-10-03 (from EPrint's last_modified field)
- Series Name
- Advances in Neural Information Processing Systems
- Series Volume or Issue Number
- 5