Published May 27, 2020 | Submitted | Report | Open Access

Model Reduction and Neural Networks for Parametric PDEs

Abstract

We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. Numerically we demonstrate the effectiveness of the method on a class of parametric elliptic PDE problems, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare our method with existing algorithms from the literature.
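The abstract summarizes the approach at a high level: reduce the (discretized) input and output function spaces to low-dimensional coordinates via model reduction, and learn a neural network between those coordinates, so that the learned map does not depend on the grid resolution. The following is a minimal illustrative sketch of that idea using PCA for the reduction and a small fully connected network, written in Python with NumPy and PyTorch; the data, grid sizes, and network architecture are placeholder assumptions for illustration, not the authors' implementation or experiments.

```python
# Illustrative sketch (assumed setup, not the paper's code): approximate a map
# between function spaces by (i) projecting discretized input/output functions
# onto PCA bases and (ii) learning a neural network between the coefficients.
import numpy as np
import torch
import torch.nn as nn

def pca_basis(X, d):
    """Return the mean and top-d principal directions of samples X (n_samples x n_grid)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:d]  # rows of Vt[:d] span the reduced subspace

# Placeholder data: n paired samples of an input field a and a "solution" u,
# each discretized on a grid of size n_grid (replace with real PDE solver output).
n, n_grid, d_in, d_out = 1000, 256, 20, 20
A = np.random.randn(n, n_grid)
U = np.cumsum(A, axis=1) / n_grid

a_mean, Va = pca_basis(A, d_in)    # reduced coordinates for inputs
u_mean, Vu = pca_basis(U, d_out)   # reduced coordinates for outputs
Za = torch.tensor((A - a_mean) @ Va.T, dtype=torch.float32)
Zu = torch.tensor((U - u_mean) @ Vu.T, dtype=torch.float32)

# Small fully connected network between PCA coefficient vectors.
net = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, d_out))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(Za), Zu)
    loss.backward()
    opt.step()

# Predict on a new input: project onto the input basis, map, lift back to the grid.
a_new = np.random.randn(n_grid)
z = torch.tensor((a_new - a_mean) @ Va.T, dtype=torch.float32)
u_pred = net(z).detach().numpy() @ Vu + u_mean
```

Because the network acts only on the reduced coefficients, refining the grid changes the projection step but not the learned map, which is the sense in which such an approximation can be robust to the discretization dimension.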

Additional Information

Submitted to the editors May 8, 2020. The authors are grateful to Anima Anandkumar, Kamyar Azizzadenesheli, Zongyi Li and Nicholas H. Nelsen for helpful discussions in the general area of neural networks for PDE-defined maps between Hilbert spaces. The work is supported by MEDE-ARL funding (W911NF-12-0022). AMS is also partially supported by NSF (DMS 1818977) and AFOSR (FA9550-17-1-0185). BH is partially supported by a Von Kármán instructorship at the California Institute of Technology.

Files

Submitted - 2005.03180.pdf (2.0 MB, md5:f9704e4458bc51a3e7bb557e77bda30d)

Additional details

Created: August 19, 2023
Modified: March 5, 2024