Journal Article | Open
Published August 2021 | Supplemental Material + Accepted Version

Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors

Abstract

Although the quest for more accurate solutions is pushing deep learning research towards larger and more complex algorithms, edge devices demand efficient inference and therefore reduction in model size, latency and energy consumption. One technique to limit model size is quantization, which implies using fewer bits to represent weights and biases. Such an approach usually results in a decline in performance. Here, we introduce a method for designing optimally heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip. With a per-layer, per-parameter type automatic quantization procedure, sampling from a wide range of quantizers, model energy consumption and size are minimized while high accuracy is maintained. This is crucial for the event selection procedure in proton–proton collisions at the CERN Large Hadron Collider, where resources are strictly limited and a latency of O(1)μs is required. Nanosecond inference and a resource consumption reduced by a factor of 50 when implemented on field-programmable gate array hardware are achieved.
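The quantization the abstract describes amounts to representing each weight with a fixed, small number of bits. A minimal signed fixed-point quantizer can illustrate the idea; this sketch loosely follows the convention of QKeras's quantized_bits (one sign bit, a chosen number of integer bits, the rest fractional), but the exact rounding and saturation behavior here is an illustrative assumption, not the library's implementation:

```python
def quantize_fixed(x, bits=4, integer=0):
    """Quantize x to a signed fixed-point value with `bits` total bits:
    1 sign bit, `integer` integer bits, and the rest fractional bits."""
    frac_bits = bits - integer - 1
    scale = 2.0 ** frac_bits          # spacing between representable values
    q = round(x * scale) / scale      # round to the nearest grid point
    limit = 2.0 ** integer
    lo, hi = -limit, limit - 1.0 / scale  # saturate to the representable range
    return max(lo, min(hi, q))

# With 4 bits and 0 integer bits, values live on a grid of step 1/8 in [-1, 0.875]:
quantize_fixed(0.3, bits=4, integer=0)   # snaps to 0.25
quantize_fixed(1.5, bits=4, integer=0)   # saturates to 0.875
```

A heterogeneous scheme assigns a different (bits, integer) pair to each layer and parameter type, which is what the automatic procedure in the paper searches over.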

Additional Information

© The Author(s), under exclusive licence to Springer Nature Limited 2021. Received 23 November 2020; Accepted 06 May 2021; Published 21 June 2021.

M.P. and S.S. are supported by, and V.L. and A.A.P. are partially supported by, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant no. 772369). V.L. is supported by Zenseact under the CERN Knowledge Transfer Group. A.A.P. is supported by CEVA under the CERN Knowledge Transfer Group. We acknowledge the Fast Machine Learning collective as an open community of multi-domain experts and collaborators; this community was important for the development of this project.

Data availability: The data used in this study are openly available at Zenodo from https://doi.org/10.5281/zenodo.3602260.

Code availability: The QKeras library, which also includes AutoQKeras and QTools, is available from https://github.com/google/qkeras (the work presented here uses QKeras version 0.7.4). Examples of how to run the library are available in the notebook subdirectory. The hls4ml library is available at https://github.com/fastmachinelearning/hls4ml, and all versions ≥0.2.1 support QKeras models (the work presented here is based on version 0.2.1). For examples of how to use QKeras models in hls4ml, the notebook part4_quantization at https://github.com/fastmachinelearning/hls4ml-tutorial serves as a general introduction.

Author Contributions: C.N.C., A.K., S.L. and H.Z. conceived and designed the QKeras, AutoQKeras and QTools software libraries. T.A., V.L., M.P., A.A.P., S.S. and J.N. designed and implemented support for QKeras in hls4ml. S.S. conducted the experiments. T.A., A.A.P. and S.S. wrote the manuscript. The authors declare no competing interests.

Peer review information: Nature Machine Intelligence thanks Jose Nunez-Yanez, Stylianos Venieris and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
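The per-layer quantization search that AutoQKeras automates can be pictured as an optimization over bit-width assignments: minimize an energy estimate subject to an accuracy floor. The toy sketch below is purely illustrative — AutoQKeras uses Bayesian/hyperparameter optimization over real quantizers and QTools energy models, not an exhaustive scan, and the per-layer accuracy and energy numbers here are invented placeholders:

```python
from itertools import product

# Hypothetical estimates: fraction of full-precision accuracy retained
# and relative energy cost for each (layer, bit-width) choice.
accuracy_est = {
    ("dense1", 4): 0.97, ("dense1", 8): 0.99,
    ("dense2", 4): 0.95, ("dense2", 8): 0.98,
}
energy_est = {
    ("dense1", 4): 1.0, ("dense1", 8): 3.0,
    ("dense2", 4): 2.0, ("dense2", 8): 6.0,
}

def search(layers, widths, min_accuracy):
    """Exhaustively scan per-layer bit-width assignments and return the
    lowest-energy configuration whose estimated accuracy stays above
    the floor."""
    best = None
    for assign in product(widths, repeat=len(layers)):
        acc, energy = 1.0, 0.0
        for layer, bits in zip(layers, assign):
            acc *= accuracy_est[(layer, bits)]
            energy += energy_est[(layer, bits)]
        if acc >= min_accuracy and (best is None or energy < best[1]):
            best = (dict(zip(layers, assign)), energy, acc)
    return best

config, energy, acc = search(["dense1", "dense2"], [4, 8], min_accuracy=0.93)
# Heterogeneous result: wide first layer, narrow second layer wins on energy.
```

The point of the example is the shape of the trade-off: heterogeneous assignments (here 8 bits for one layer, 4 for another) can beat any uniform choice once both accuracy and energy enter the objective.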

Attached Files

Accepted Version - 2006.10159.pdf

Supplemental Material - 42256_2021_356_Fig4_ESM.webp

Supplemental Material - 42256_2021_356_Fig5_ESM.webp

Supplemental Material - 42256_2021_356_Fig6_ESM.webp

Supplemental Material - 42256_2021_356_Fig7_ESM.webp

Files (2.9 MB total)

md5:f6a5240f9408d058353fc70621dfa917 — 105.2 kB
md5:e609574f5f36ba93b5a29821c69a9ad2 — 524.3 kB
md5:9299b3ed46732af10a7a49dadee654df — 30.1 kB
md5:4ad0aab3463d06cfecfc1c0d606588b8 — 2.1 MB (2006.10159.pdf)
md5:e72ffde57c7a0040df01875a43e8fc11 — 132.8 kB

Additional details

Created: August 22, 2023
Modified: October 23, 2023