
Analog Computation and Learning in VLSI

Citation

Koosh, Vincent Frank (2001) Analog Computation and Learning in VLSI. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/9B65-TB43. https://resolver.caltech.edu/CaltechETD:etd-10022001-201911

Abstract

Nature has evolved highly advanced systems capable of performing complex computations, adaptation, and learning using analog components. Although digital systems have far surpassed analog systems at precise, high-speed mathematical computation, they cannot match analog systems in power efficiency. Furthermore, nature has evolved techniques to deal with imprecise analog components by using redundancy and massive connectivity. In this thesis, analog VLSI circuits are presented for performing arithmetic functions and for implementing neural networks. These circuits exploit the strengths of analog building blocks to perform low-power, parallel computations.

The arithmetic function circuits presented are based on MOS transistors operating in the subthreshold region, with capacitive dividers as inputs to their gates. Because the inputs to the gates of the transistors are floating, digital switches are used to dynamically reset the charges on the floating gates so that the computations can be performed. Circuits for squaring, square root, and multiplication/division are shown. A vector normalization circuit, built by cascading the preceding circuits, demonstrates how simpler blocks can be combined to obtain more complicated functions. Test results are shown for all of the circuits.
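
As a rough illustration of the style of computation described above, and not a formula taken from the thesis, a subthreshold MOS transistor's drain current depends exponentially on its floating-gate voltage, while a capacitive divider sums scaled input voltages onto that gate (the symbols I_0, kappa, U_T, and C_i follow common convention rather than the thesis's notation):

    % Subthreshold drain current as a function of floating-gate voltage
    I_d = I_0 \exp\!\left( \frac{\kappa V_{fg}}{U_T} \right),
    % Floating-gate voltage set by a capacitive divider
    % (after the digital switches reset the stored charge)
    \qquad V_{fg} = \sum_i \frac{C_i}{C_{\mathrm{tot}}} V_i .

Because weighted voltage sums on the gate become products and powers of currents through the exponential, cascading squaring, square-root, and multiply/divide stages can realize, for example, a normalization of the plausible form

    y_i = \frac{x_i}{\sqrt{\sum_j x_j^{2}}} .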

Two feedforward neural network implementations are also presented. The first uses analog synapses and neurons with a digital serial weight bus. The chip is trained in the loop, with a computer performing the control and weight updates. Training with the chip in the loop makes it possible to learn around circuit offsets. The second neural network also uses a computer for the global control operations, but all of the local operations are performed on chip. The weights are implemented digitally, and counters are used to adjust them. A parallel perturbative weight update algorithm is used: the chip uses multiple, locally generated pseudorandom bit streams to perturb all of the weights in parallel. If the perturbation causes the error function to decrease, the weight change is kept; otherwise it is discarded. Test results show both networks successfully learning digital functions such as AND and XOR.
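
As a concrete illustration of the parallel perturbative update rule described above, the following is a minimal software sketch in NumPy. The network topology (a small 2-2-1 tanh network with bias inputs), the perturbation size, and all function names are illustrative assumptions, not the thesis's circuit-level implementation; the chip replaces the software random numbers with locally generated pseudorandom bit streams and holds the weights in digital counters.

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(w, x):
        # Tiny 2-2-1 feedforward network with tanh units and bias inputs
        # (an illustrative stand-in for the analog synapses and neurons).
        xb = np.hstack([x, np.ones((x.shape[0], 1))])
        h = np.tanh(xb @ w["w1"])
        hb = np.hstack([h, np.ones((h.shape[0], 1))])
        return np.tanh(hb @ w["w2"])

    def error(w, x, t):
        # Mean squared output error over the training set.
        return np.mean((forward(w, x) - t) ** 2)

    def perturbative_step(w, x, t, delta=0.05):
        # Perturb every weight in parallel by a random +/- delta
        # (the chip derives these signs from pseudorandom bit streams).
        base = error(w, x, t)
        pert = {k: delta * rng.choice([-1.0, 1.0], size=v.shape)
                for k, v in w.items()}
        trial = {k: v + pert[k] for k, v in w.items()}
        # Keep the perturbation only if it lowers the error;
        # otherwise discard it, as in the accept/reject rule above.
        return trial if error(trial, x, t) < base else w

    # Example: XOR, one of the digital functions tested on the chip.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])
    w = {"w1": rng.normal(scale=0.5, size=(3, 2)),
         "w2": rng.normal(scale=0.5, size=(3, 1))}
    for _ in range(20000):
        w = perturbative_step(w, X, T)

Because each step only compares a scalar error before and after the perturbation, the rule maps naturally onto hardware with counters holding the weights; in software it is simply a stochastic hill climb and may need many iterations to converge.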

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: analog VLSI; neural network; translinear
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Electrical Engineering
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Goodman, Rodney M.
Thesis Committee:
  • Goodman, Rodney M. (chair)
  • Martin, Alain J.
  • Diorio, Christopher J.
  • Koch, Christof
  • Bruck, Jehoshua
Defense Date: 20 April 2001
Non-Caltech Author Email: darkd (AT) micro.caltech.edu
Record Number: CaltechETD:etd-10022001-201911
Persistent URL: https://resolver.caltech.edu/CaltechETD:etd-10022001-201911
DOI: 10.7907/9B65-TB43
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 3858
Collection: CaltechTHESIS
Deposited By: Imported from ETD-db
Deposited On: 06 Nov 2001
Last Modified: 30 Aug 2022 23:47

Thesis Files

PDF (thesis.pdf) - Final Version, 827 kB. See Usage Policy.
