Published October 25, 2021 | Submitted
Report | Open Access

ZerO Initialization: Initializing Residual Networks with only Zeros and Ones

Abstract

Deep neural networks are usually initialized with random weights, with the initial variance chosen carefully to ensure stable signal propagation during training. However, there is no consensus on how to select the variance, and this becomes especially challenging as the number of layers grows. In this work, we replace the widely used random weight initialization with a fully deterministic initialization scheme, ZerO, which initializes residual networks with only zeros and ones. By augmenting the standard ResNet architectures with a few extra skip connections and Hadamard transforms, ZerO allows training to start entirely from zeros and ones. This has many benefits, such as improving reproducibility (by reducing the variance across experimental runs) and enabling network training without batch normalization. Surprisingly, we find that ZerO achieves state-of-the-art performance on various image classification datasets, including ImageNet, which suggests random weights may be unnecessary for modern network initialization.
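The construction described in the abstract can be sketched concretely. Below is a minimal Python illustration of what a "zeros and ones" initializer for a dense layer might look like: a (partial) identity when the layer preserves or reduces dimensionality, and a Hadamard transform mixed in when it expands. The function name zero_init, the scaling, and the exact placement of the Hadamard transform are assumptions made for illustration here, not the paper's exact algorithm.

```python
import math

import numpy as np
from scipy.linalg import hadamard


def zero_init(fan_out: int, fan_in: int) -> np.ndarray:
    """Deterministic initial weight for a dense layer, sketched from the
    abstract: identity-like matrices of zeros and ones, plus a Hadamard
    transform for dimension-expanding layers (details are assumptions)."""
    if fan_out <= fan_in:
        # Square or shrinking layer: a (partial) identity, i.e. only 0s and 1s.
        return np.eye(fan_out, fan_in)
    # Expanding layer: a plain partial identity would leave the extra output
    # channels identical, so mix rows with an orthonormal Hadamard matrix.
    # (Per the abstract, the transform augments the architecture itself,
    # keeping trainable weights at 0/1; folding it into the initial weight
    # here is a simplification for brevity.)
    m = 2 ** math.ceil(math.log2(fan_out))        # Hadamard order must be 2^k
    h = hadamard(m).astype(float) / math.sqrt(m)  # orthonormal rows
    return (h @ np.eye(m, fan_in))[:fan_out, :]
```

For a square layer, zero_init(n, n) is simply the n-by-n identity, so a residual block initialized this way computes the identity map at the start of training, consistent with the stable signal propagation the abstract appeals to.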

Files

2110.12661.pdf (Submitted version, 662.2 kB)
md5:76355b2d04e72f0fec7af5ecd81bd575

Additional details

Created: August 20, 2023
Modified: October 24, 2023