Published December 10, 2019 | Submitted
Journal Article (Open Access)

Regression-clustering for Improved Accuracy and Training Cost with Molecular-Orbital-Based Machine Learning

Abstract

Machine learning (ML) in the representation of molecular-orbital-based (MOB) features has been shown to be an accurate and transferable approach to the prediction of post-Hartree-Fock correlation energies. Previous applications of MOB-ML employed Gaussian Process Regression (GPR), which provides good prediction accuracy with small training sets; however, the cost of GPR training scales cubically with the amount of data and becomes a computational bottleneck for large training sets. In the current work, we address this problem by introducing a clustering/regression/classification implementation of MOB-ML. In a first step, regression clustering (RC) is used to partition the training data to best fit an ensemble of linear regression (LR) models; in a second step, each cluster is regressed independently, using either LR or GPR; and in a third step, a random forest classifier (RFC) is trained for the prediction of cluster assignments based on MOB feature values. Upon inspection, RC is found to recapitulate chemically intuitive groupings of the frontier molecular orbitals, and the combined RC/LR/RFC and RC/GPR/RFC implementations of MOB-ML are found to provide good prediction accuracy with greatly reduced wall-clock training times. For a dataset of thermalized (350 K) geometries of 7211 organic molecules of up to seven heavy atoms (QM7b-T), both RC/LR/RFC and RC/GPR/RFC reach chemical accuracy (1 kcal/mol prediction error) with only 300 training molecules, while providing 35000-fold and 4500-fold reductions in the wall-clock training time, respectively, compared to MOB-ML without clustering. The resulting models are also demonstrated to retain transferability for the prediction of large-molecule energies with only small-molecule training data. Finally, it is shown that capping the number of training datapoints per cluster leads to further improvements in prediction accuracy with negligible increases in wall-clock training time.
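The three-step pipeline described in the abstract can be sketched as follows. This is an illustrative toy reconstruction on synthetic 1-D data, not the authors' implementation: in MOB-ML, the feature vectors would be MOB features and the targets pair correlation energies, and the data, initialization, and hyperparameters below are all assumptions made for the sketch.

```python
# Hypothetical sketch of a clustering/regression/classification pipeline
# (regression clustering -> per-cluster LR -> random forest classifier),
# illustrated on synthetic 1-D data with two hidden linear regimes.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic data: y follows slope 3 for x > 0 and slope -2 for x < 0.
X = rng.uniform(-1, 1, size=(400, 1))
y = np.where(X[:, 0] > 0, 3.0 * X[:, 0], -2.0 * X[:, 0])
y = y + 0.01 * rng.normal(size=400)

# Step 1: regression clustering (RC) -- alternate between fitting one
# linear model per cluster and reassigning each point to the model with
# the smallest squared residual. A crude median split on the feature
# serves as the initial partition here (an assumption of this sketch).
n_clusters = 2
labels = (X[:, 0] > np.median(X[:, 0])).astype(int)
for _ in range(20):
    models = [LinearRegression().fit(X[labels == k], y[labels == k])
              for k in range(n_clusters)]
    residuals = np.stack([(y - m.predict(X)) ** 2 for m in models], axis=1)
    new_labels = residuals.argmin(axis=1)
    if np.array_equal(new_labels, labels):
        break  # assignments stable -> RC has converged
    labels = new_labels

# Step 2: each cluster keeps its own regressor (LR here; the paper also
# regresses each cluster independently with GPR).

# Step 3: train a random forest classifier to predict cluster membership
# from the features alone, so new points can be routed at prediction time.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

def predict(x):
    """Classify a new point into a cluster, then apply that cluster's LR."""
    k = clf.predict(x)[0]
    return models[k].predict(x)[0]
```

At prediction time the classifier replaces the regression-residual assignment used during training, since the target value is unknown for new molecules; this is why the RFC step is needed at all.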

Additional Information

© 2019 American Chemical Society. Received: September 4, 2019; Published: October 22, 2019. This work emerged from a CMS 273 class project at Caltech that also involved Dmitry Burov, Jialin Song, Ying Shi Teh, and Dr. Tamara Husch, as well as Professors Kaushik Bhattacharya and Richard Murray; we thank these individuals for their ideas and contributions. This work is supported by the US Air Force Office of Scientific Research (AFOSR) grant FA9550-17-1-0102. M.W. acknowledges a postdoctoral fellowship from the Resnick Sustainability Institute. N.B.K. is supported, in part, by the US National Science Foundation (NSF) grant DMS 1818977, the US Office of Naval Research (ONR) grant N00014-17-1-2079, and the US Army Research Office (ARO) grant W911NF-12-2-0022. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the DOE Office of Science under contract DE-AC02-05CH11231. The authors declare no competing financial interest.

Attached Files

Submitted - 1909.02041.pdf (1.6 MB; md5:236ecc22502dbea88a3c0b83ed47ba5e)

Additional details

Created: August 19, 2023
Modified: October 18, 2023