Published September 17, 2019 | Journal Article | Open Access

Large-Scale Distributed Training Applied to Generative Adversarial Networks for Calorimeter Simulation

Abstract

In recent years, several studies have demonstrated the benefit of using deep learning to solve typical tasks related to high energy physics data taking and analysis. In particular, generative adversarial networks are a good candidate to supplement the simulation of the detector response in a collider environment. Training neural network models has become tractable with improved optimization methods and the advent of general-purpose GPUs (GPGPUs), which are well suited to the highly parallelizable task of training neural nets. Despite these advances, training large models on large data sets can take days to weeks, and finding the best model architecture and settings can require many expensive trials. To get the best out of this new technology, it is important to scale up the available network-training resources and, consequently, to provide tools for optimal large-scale distributed training. In this context, we describe the development of a new training workflow that scales on multi-node/multi-GPU architectures, with an eye to deployment on high-performance computing machines. We describe the integration of hyperparameter optimization with a distributed training framework based on the Message Passing Interface (MPI), for models defined in Keras [12] or PyTorch [13]. We present results on the speedup obtained when training generative adversarial networks on a data set composed of the energy depositions of electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter.
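The authors' framework is not reproduced here; as a rough, hypothetical sketch of the synchronous MPI data-parallel pattern the abstract describes, the snippet below combines mpi4py with a toy PyTorch model. The model, data, and hyperparameters are placeholders for illustration, not the paper's code.

    # Hypothetical sketch of synchronous MPI data-parallel training (not the
    # authors' framework). Run with, e.g.: mpirun -np 4 python train_sketch.py
    from mpi4py import MPI
    import torch
    import torch.nn as nn

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Toy stand-in for the GAN generator; the real models operate on
    # fine-grained 3D calorimeter energy deposits.
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Broadcast rank 0's initial weights so every worker starts identically.
    for p in model.parameters():
        comm.Bcast(p.detach().numpy(), root=0)

    for step in range(100):
        # Each rank trains on its own shard of the data (random here).
        x, y = torch.randn(32, 64), torch.randn(32, 10)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()

        # Sum gradients across ranks in place, then average, so all
        # workers apply the same synchronous update.
        for p in model.parameters():
            if p.grad is not None:
                comm.Allreduce(MPI.IN_PLACE, p.grad.numpy(), op=MPI.SUM)
                p.grad /= size
        optimizer.step()

        if rank == 0 and step % 20 == 0:
            print(f"step {step}: loss {loss.item():.4f}")

Averaging gradients with one Allreduce per parameter keeps the workers in lockstep, which is the simplest synchronous variant of data parallelism; per the abstract, the paper's workflow additionally layers hyperparameter optimization on top of such MPI-distributed training.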

Additional Information

© 2019 The Authors, published by EDP Sciences. This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Published online 17 September 2019. Part of this work was conducted on Piz Daint at CSCS under the allocations d59 (2016) and cn01 (2018). Part of this work was conducted on Titan at OLCF under the allocation csc291 (2018). Part of this work was conducted at "iBanks", the AI GPU cluster at Caltech. We acknowledge NVIDIA, SuperMicro and the Kavli Foundation for their support of "iBanks". Part of the team is funded by ERC H2020 grant number 772369.

Attached Files

Published - epjconf_chep2018_06025.pdf (133.7 kB)
md5:4a02ba501cc0b4b6d10a10fe7f3342f2

Additional details

Created: August 19, 2023
Modified: October 19, 2023