Tensor Contraction Layers for Parsimonious Deep Nets
Abstract
Tensors offer a natural representation for many kinds of data frequently encountered in machine learning. Images, for example, are third-order tensors whose modes correspond to height, width, and channels. Tensor decompositions, in particular, are noted for their ability to discover multi-dimensional dependencies and to produce compact low-rank approximations of data. In this paper, we explore the use of tensor contractions as neural network layers and investigate several ways to apply them to activation tensors. Specifically, we propose the Tensor Contraction Layer (TCL), the first attempt to incorporate tensor contractions as end-to-end trainable neural network layers. Applied to existing networks, TCLs reduce the dimensionality of the activation tensors and thus the number of model parameters. We evaluate the TCL on the task of image recognition by augmenting two popular networks (AlexNet, VGG); the resulting models remain trainable end-to-end. Using the CIFAR100 and ImageNet datasets, we study the effect of this parameter reduction on performance, and we demonstrate significant model compression with little or no loss of accuracy and, in some cases, improved performance.
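As a concrete illustration of the idea described in the abstract, below is a minimal NumPy sketch of a tensor contraction layer: each non-batch mode of an activation tensor is contracted with a small factor matrix, shrinking the tensor before any subsequent fully connected layer. The `mode_dot` helper, the `TensorContractionLayer` class, the shapes, and the random initialization are illustrative assumptions for this sketch, not the paper's implementation; a trainable version would live in a deep learning framework where the factor matrices are learned by backpropagation.

```python
import numpy as np

def mode_dot(tensor, matrix, mode):
    """Contract `matrix` (shape: new_dim x old_dim) with `tensor` along axis `mode`."""
    tensor = np.moveaxis(tensor, mode, 0)           # bring the contracted mode to the front
    front, rest = tensor.shape[0], tensor.shape[1:]
    result = matrix @ tensor.reshape(front, -1)     # (new_dim, prod(rest))
    result = result.reshape((matrix.shape[0],) + rest)
    return np.moveaxis(result, 0, mode)             # restore the original mode order

class TensorContractionLayer:
    """Sketch of a TCL: contracts every non-batch mode of an activation
    tensor (batch, H, W, C) -> (batch, H', W', C'), one factor matrix per mode."""

    def __init__(self, in_shape, out_shape, seed=0):
        rng = np.random.default_rng(seed)
        # Illustrative initialization; a real layer would learn these factors.
        self.factors = [rng.standard_normal((o, i)) * 0.01
                        for i, o in zip(in_shape, out_shape)]

    def forward(self, x):
        for mode, factor in enumerate(self.factors, start=1):  # skip batch mode 0
            x = mode_dot(x, factor, mode)
        return x

# Example: shrink AlexNet-like activations (batch, 6, 6, 256) to (batch, 4, 4, 128).
x = np.random.default_rng(1).standard_normal((32, 6, 6, 256))
tcl = TensorContractionLayer(in_shape=(6, 6, 256), out_shape=(4, 4, 128))
print(tcl.forward(x).shape)  # (32, 4, 4, 128)
```

In this example the contraction shrinks the flattened activation size from 6 x 6 x 256 = 9216 to 4 x 4 x 128 = 2048, a factor of 4.5, which is where the savings in the parameter count of any following fully connected layer come from.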
Additional Information
© 2017 IEEE.

Attached Files
Name | Size
---|---
Submitted - 1706.00439.pdf (md5: 2ebc28d8cee30dfb4f7e6381181229b8) | 758.8 kB
Additional details
- Eprint ID: 85394
- DOI: 10.1109/CVPRW.2017.243
- Resolver ID: CaltechAUTHORS:20180321-103123441
- Created: 2018-03-26 (from EPrint's datestamp field)
- Updated: 2021-11-15 (from EPrint's last_modified field)