Published July 28, 2020
Journal Article | Open Access

Evolutionary reinforcement learning of dynamical large deviations

Abstract

We show how to bound and calculate the likelihood of dynamical large deviations using evolutionary reinforcement learning. An agent, a stochastic model, propagates a continuous-time Monte Carlo trajectory and receives a reward conditioned upon the values of certain path-extensive quantities. Evolution produces progressively fitter agents, potentially allowing the calculation of a piece of a large-deviation rate function for a particular model and path-extensive quantity. For models with small state spaces, the evolutionary process acts directly on rates, and for models with large state spaces, the process acts on the weights of a neural network that parameterizes the model's rates. This approach shows how path-extensive physics problems can be considered within a framework widely used in machine learning.
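The procedure described in the abstract — an agent defined by its transition rates generates a continuous-time Monte Carlo trajectory, is rewarded according to a path-extensive observable, and the fittest agents seed the next generation — can be illustrated with a minimal toy sketch. This is not the paper's code: it assumes a two-state Markov jump process, takes the dynamical activity (jumps per unit time) as the path-extensive quantity, and uses simple truncation selection with multiplicative mutation of the rates. All names and parameter values are illustrative.

```python
import random
import math

def trajectory_activity(rates, t_max, rng):
    """Gillespie (continuous-time Monte Carlo) trajectory of a two-state
    jump process. Returns the dynamical activity k = jumps / t_max,
    a path-extensive observable."""
    state, t, jumps = 0, 0.0, 0
    while True:
        t += rng.expovariate(rates[state])  # waiting time ~ Exp(escape rate)
        if t >= t_max:
            break
        state = 1 - state                   # jump to the other state
        jumps += 1
    return jumps / t_max

def evolve(target_activity, generations=50, pop_size=20,
           t_max=200.0, seed=0):
    """Evolutionary loop acting directly on the rates (small state space):
    agents whose trajectory activity lies closest to `target_activity`
    earn the highest reward and parent the next generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.1, 2.0), rng.uniform(0.1, 2.0)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(
            pop,
            key=lambda r: abs(trajectory_activity(r, t_max, rng)
                              - target_activity))
        parents = scored[:pop_size // 2]    # truncation selection
        pop = [[r * math.exp(0.1 * rng.gauss(0, 1))  # multiplicative mutation
                for r in rng.choice(parents)]
               for _ in range(pop_size)]
    return sorted(pop, key=lambda r: abs(
        trajectory_activity(r, t_max, rng) - target_activity))[0]

best = evolve(target_activity=1.0)
```

For large state spaces the same loop would instead mutate the weights of a neural network that outputs the rates, but the selection step — reward conditioned on a path-extensive quantity — is unchanged.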

Additional Information

© 2020 Published under license by AIP Publishing. Submitted: 27 May 2020; Accepted: 31 May 2020; Published Online: 27 July 2020. We thank Hugo Touchette for comments. This work was performed as part of a user project at the Molecular Foundry, Lawrence Berkeley National Laboratory, supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. D.J. acknowledges support from the Department of Energy Computational Science Graduate Fellowship, under Contract No. DE-FG02-97ER25308. I.T. performed work at the National Research Council of Canada under the auspices of the AI4D Program.

Files

Published - 5.0015301.pdf (1.7 MB; md5:0abf88a556ce13b66cdb3ad5c150be80)

Additional details

Created: August 19, 2023
Modified: October 20, 2023