
Intrinsic Gradient Networks

Citation

Rolfe, Jason Tyler (2012) Intrinsic Gradient Networks. Dissertation (Ph.D.), California Institute of Technology. doi:10.7907/YCB7-7X24. https://resolver.caltech.edu/CaltechTHESIS:04202012-133210844

Abstract

Artificial neural networks are computationally powerful and exhibit brain-like dynamics. Unfortunately, the conventional gradient-dependent learning algorithms used to train them are biologically implausible. The calculation of the gradient in a traditional artificial neural network requires a complementary network of fast training signals that are dependent upon, but must not affect, the primary output-generating network activity. In contrast, the network of neurons in the cortex is highly recurrent; a network of gradient-calculating neurons in the brain would certainly project back to and influence the primary network. We address this biological implausibility by introducing a novel class of recurrent neural networks, intrinsic gradient networks, for which the gradient of an error function with respect to the parameters is a simple function of the network state. These networks can be trained using only their intrinsic signals, much like the network of neurons in the brain.
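To make the contrast concrete, the sketch below shows where the complementary training network appears in ordinary backpropagation: the error signals (delta2, delta1) form a second, backward pathway that reads the forward activity but must not perturb it. This is a minimal illustration, not the thesis's construction; the two-layer architecture, dimensions, and variable names are all assumptions chosen for brevity.

    # Minimal backpropagation sketch (illustrative; not from the thesis).
    import numpy as np

    rng = np.random.default_rng(0)

    x = rng.standard_normal(4)             # input
    t = rng.standard_normal(2)             # target
    W1 = rng.standard_normal((3, 4)) * 0.1
    W2 = rng.standard_normal((2, 3)) * 0.1

    # Primary, output-generating activity (the forward pass).
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - t                              # gradient of 0.5*||y - t||^2 at the output

    # The gradient requires a *separate* backward pathway of error
    # signals that depends on the forward activity but must not feed
    # back into it -- the biological implausibility the abstract names.
    delta2 = e                             # error signal at the output layer
    delta1 = (W2.T @ delta2) * (1 - h**2)  # error signal propagated backward
    grad_W2 = np.outer(delta2, h)
    grad_W1 = np.outer(delta1, x)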

We derive a simple equation that characterizes intrinsic gradient networks, and construct a broad set of networks that satisfy this characteristic equation. The resulting set of intrinsic gradient networks includes many highly recurrent instances for which the gradient can be calculated by a simple, local, pseudo-Hebbian function of the network state, thus resolving a long-standing contradiction between artificial and biological neural networks. We demonstrate that these networks can learn to perform nontrivial tasks like handwritten digit recognition using only their intrinsic signals. Finally, we show that a cortical implementation of an intrinsic gradient network would have a number of characteristic computational, anatomical, and electrophysiological properties, and review experimental evidence suggesting the manifestation of these properties in the cortex.
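As a rough, hypothetical illustration of what it means for the gradient to be a simple, local, pseudo-Hebbian function of the network state, the sketch below settles a small recurrent network and then forms each weight's update purely from quantities available at its pre- and post-synaptic neurons. The local factors f and g here are placeholders: the thesis derives the actual functions from its characteristic equation, which this sketch does not reproduce.

    # Hedged sketch of a pseudo-Hebbian, state-local update form
    # (illustrative only; not the thesis's derivation).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5
    W = rng.standard_normal((n, n)) * 0.1
    x_ext = rng.standard_normal(n)        # external input

    # Run the recurrent dynamics to a settled state s
    # (hypothetical dynamics chosen for the example).
    s = np.zeros(n)
    for _ in range(50):
        s = np.tanh(W @ s + x_ext)

    # Pseudo-Hebbian form: grad_W[i, j] depends only on state local to
    # neurons i and j -- no separate backward error network is needed.
    f = np.tanh(s)                        # post-synaptic factor (placeholder)
    g = s                                 # pre-synaptic factor (placeholder)
    grad_W = np.outer(f, g)               # local outer-product form
    W -= 0.01 * grad_W                    # gradient-descent step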

Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: recurrent neural networks; biologically plausible learning; cortical computation; Hebbian learning; backpropagation
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Computation and Neural Systems
Thesis Availability: Public (worldwide access)
Research Advisor(s):
  • Cook, Matthew M. (advisor)
  • Perona, Pietro (advisor)
Thesis Committee:
  • Perona, Pietro (chair)
  • Cook, Matthew M.
  • Bruck, Jehoshua
  • Winfree, Erik
  • Koch, Christof
Defense Date: 9 June 2011
Record Number: CaltechTHESIS:04202012-133210844
Persistent URL: https://resolver.caltech.edu/CaltechTHESIS:04202012-133210844
DOI: 10.7907/YCB7-7X24
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 6953
Collection: CaltechTHESIS
Deposited By: Jason Rolfe
Deposited On: 07 May 2012 16:38
Last Modified: 08 Nov 2023 00:44

Thesis Files

PDF - Final Version (10MB). See Usage Policy.
