Published February 2, 2020
Book Section - Chapter
Improve Robustness of Deep Neural Networks by Coding
Abstract
Deep neural networks (DNNs) typically have many weights, which are usually stored in non-volatile memories. When errors appear in these weights, the network's performance can degrade significantly. We review two recently presented approaches that improve the robustness of DNNs in complementary ways. In the first approach, error-correcting codes serve as external redundancy to protect the weights from errors, and a deep reinforcement learning algorithm optimizes the redundancy-performance tradeoff. In the second approach, internal redundancy is added to the neurons themselves via coding, enabling them to perform robust inference in noisy environments.
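The external-redundancy idea can be illustrated with a toy repetition code. This is a hypothetical minimal sketch, not the codes or the reinforcement-learning optimization used in the chapter: each bit of a quantized weight is stored three times, and a majority vote corrects any single bit flip within a triple.

```python
def encode(bits, r=3):
    # External redundancy: store each weight bit r times.
    return [b for b in bits for _ in range(r)]

def decode(coded, r=3):
    # Majority vote over each group of r copies recovers the bit
    # as long as fewer than half the copies are flipped.
    return [1 if sum(coded[i * r:(i + 1) * r]) > r // 2 else 0
            for i in range(len(coded) // r)]

# A quantized weight represented as 8 bits (illustrative values).
weight_bits = [0, 1, 0, 1, 1, 0, 0, 1]
stored = encode(weight_bits)
stored[4] ^= 1          # simulate a single bit flip in memory
assert decode(stored) == weight_bits
```

Stronger codes (e.g., Hamming or BCH) protect more bits per unit of redundancy; the chapter's first approach tunes exactly this redundancy-performance tradeoff.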
Additional Information
© 2020 IEEE.
Additional details
- Eprint ID
- 106993
- DOI
- 10.1109/ita50056.2020.9244998
- Resolver ID
- CaltechAUTHORS:20201209-153308085
- Created
- 2020-12-10 (from EPrint's datestamp field)
- Updated
- 2021-11-16 (from EPrint's last_modified field)