Robust Deep Networks with Randomized Tensor Regression Layers
Abstract
In this paper, we propose a novel randomized tensor decomposition for tensor regression. It allows the weights of tensor regression layers to be stochastically approximated by randomly sampling in the low-rank subspace. We theoretically and empirically establish the link between our proposed stochastic rank-regularization and dropout on low-rank tensor regression. This acts as an additional stochastic regularization on the regression weights, which, combined with the deterministic regularization imposed by the low-rank constraint, improves both the performance and robustness of neural networks augmented with it. In particular, it makes the model more robust to adversarial attacks and random noise, without requiring any adversarial training. We perform a thorough study of our method on synthetic data, object classification on the CIFAR100 and ImageNet datasets, and large-scale brain-age prediction on the UK Biobank brain MRI dataset. We demonstrate superior performance in all cases, as well as significant improvement in robustness to adversarial attacks and random noise.
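To make the idea of stochastic rank-regularization concrete, the following is a minimal NumPy sketch, not the paper's implementation: a rank-R CP-factorized regression weight whose rank components are randomly dropped (and rescaled, dropout-style) during training, so each forward pass samples a random low-rank subspace of the weight. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-rank regression weight W of shape (d1, d2),
# stored through CP factors U (d1 x R) and V (d2 x R): W = U @ V.T.
d1, d2, R = 8, 6, 4
U = rng.standard_normal((d1, R))
V = rng.standard_normal((d2, R))

def regression_weights(keep_prob=1.0, training=False):
    """Reconstruct W from its rank-R factors, optionally dropping rank
    components at random (a sketch of dropout over the rank dimension)."""
    if training and keep_prob < 1.0:
        mask = rng.random(R) < keep_prob   # sample which rank components survive
        scale = mask / keep_prob           # inverted-dropout rescaling
    else:
        scale = np.ones(R)                 # deterministic low-rank weight
    return (U * scale) @ V.T               # shape (d1, d2)

x = rng.standard_normal(d1)
y_det = x @ regression_weights()                           # full low-rank weight
y_sto = x @ regression_weights(keep_prob=0.5, training=True)  # sampled subspace
print(y_det.shape, y_sto.shape)
```

Because the mask acts on whole rank-one components rather than individual weight entries, each stochastic forward pass still uses a (smaller) low-rank weight, which is what ties the stochastic regularization to the deterministic low-rank constraint.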
Additional Information
This research has been conducted using the UK Biobank Resource under Application Number 18545.
Files
- Submitted - 1902.10758.pdf (2.4 MB, md5:3a15bf7b59e89d8e9234a8d12281ec6c)
Additional details
- Eprint ID: 98450
- Resolver ID: CaltechAUTHORS:20190905-154237568
- UK Biobank: 18545
- Created: 2019-09-06 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)