On universal approximation and error bounds for Fourier Neural Operators
Abstract
Fourier neural operators (FNOs) have recently been proposed as an effective framework for learning operators that map between infinite-dimensional spaces. We prove that FNOs are universal, in the sense that they can approximate any continuous operator to any desired accuracy. Moreover, we suggest a mechanism by which FNOs can efficiently approximate operators associated with PDEs. Explicit error bounds are derived showing that the size of the FNO needed to approximate operators associated with a Darcy-type elliptic PDE, and with the incompressible Navier-Stokes equations of fluid dynamics, grows only sub-(log)-linearly in the reciprocal of the error. Thus, FNOs are shown to efficiently approximate operators arising in a large class of PDEs.
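To make the object of the theorems concrete, the core FNO layer can be sketched as follows: transform the input function to Fourier space, apply a learned linear map to the lowest retained modes, transform back, add a pointwise linear term, and apply a nonlinearity. This is a minimal 1-D NumPy sketch, not the paper's implementation; the weights are random placeholders and ReLU stands in for the smooth activation used in practice.

```python
import numpy as np

def fourier_layer(v, R, W, k_max):
    """One FNO layer on a 1-D periodic grid.
    v: (n, d_v) array -- the input function sampled at n grid points, d_v channels.
    R: (k_max, d_v, d_v) complex weights acting on the lowest k_max Fourier modes.
    W: (d_v, d_v) pointwise linear map applied in physical space."""
    v_hat = np.fft.rfft(v, axis=0)                     # to Fourier space
    out_hat = np.zeros_like(v_hat)
    # multiply the retained low modes by learned weights; truncate the rest
    out_hat[:k_max] = np.einsum("kij,kj->ki", R, v_hat[:k_max])
    conv = np.fft.irfft(out_hat, n=v.shape[0], axis=0)  # back to physical space
    return np.maximum(conv + v @ W.T, 0.0)              # ReLU for brevity

# toy usage with random weights (illustration only)
rng = np.random.default_rng(0)
n, d_v, k_max = 64, 4, 8
v = np.sin(2 * np.pi * np.arange(n) / n)[:, None] * np.ones((1, d_v))
R = rng.standard_normal((k_max, d_v, d_v)) + 1j * rng.standard_normal((k_max, d_v, d_v))
W = rng.standard_normal((d_v, d_v))
out = fourier_layer(v, R, W, k_max)
print(out.shape)  # (64, 4)
```

The mode truncation at `k_max` is what makes the parameter count independent of the grid resolution, which is the discretization-invariance property the error bounds quantify.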
Additional Information
© 2021 Nikola Kovachki and Samuel Lanthaler and Siddhartha Mishra. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v22/21-0806.html. Submitted 7/21; Revised 10/21; Published 11/21.
Attached Files
Name | Version | Size | MD5
---|---|---|---
21-0806.pdf | Published | 723.4 kB | 2a1598801fc2ed807f0367ea114f24e0
2107.07562.pdf | Accepted Version | 708.2 kB | d10c5f42bcb426dbd108a80c0f0a68c3
Additional details
- Eprint ID: 112582
- Resolver ID: CaltechAUTHORS:20211221-002325597
- Created: 2021-12-21 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)