Published November 2016 | Submitted
Journal Article | Open Access

Deep vs. shallow networks: An approximation theory perspective

Abstract

The paper briefly reviews several recent results on hierarchical architectures for learning from examples that may formally explain the conditions under which deep convolutional neural networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU function, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks but not by shallow ones to drastically reduce the complexity required for approximation and learning.
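
As a purely illustrative aside (an editor's sketch under assumed notation, not material from the record itself): the kind of structure deep networks can exploit is a hierarchically compositional target function, for example

\[
f(x_1,\dots,x_8) = h_3\bigl(h_{21}\bigl(h_{11}(x_1,x_2),\,h_{12}(x_3,x_4)\bigr),\; h_{22}\bigl(h_{13}(x_5,x_6),\,h_{14}(x_7,x_8)\bigr)\bigr),
\]

where the functions \(h_{ij}\) and the binary-tree arrangement are hypothetical and each constituent function depends on only two variables. A deep network mirroring this tree only has to approximate bivariate pieces, whereas a shallow, one-hidden-layer network must treat \(f\) as a generic function of all eight variables, which in general demands far more units. The ReLU activation referred to in the abstract is \(\sigma(x) = \max(0, x)\).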

Additional Information

© 2016 World Scientific Publishing Co. Received: 7 July 2016; Accepted: 7 August 2016; Published: 14 October 2016. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. H.M. is supported in part by ARO Grant W911NF-15-1-0385.

Attached Files

Submitted - 1608.03287v1.pdf (983.3 kB, md5:b3df910478058b77adc2db162c167268)

Additional details

Created: August 22, 2023
Modified: October 23, 2023