Application of Machine Learning to Hyperspectral Radiative Transfer Simulations
Abstract
Hyperspectral observations have become one of the most popular and powerful methods for atmospheric remote sensing, and are widely used for temperature, gas, aerosol, and cloud retrievals. However, accurate forward radiative transfer simulations are computationally expensive, since typical line-by-line approaches involve a large number of monochromatic radiative transfer calculations. This study explores the feasibility of machine learning techniques for fast hyperspectral radiative transfer (HRT) simulations, in which calculations are performed at a small fraction of the hyperspectral wavelengths and then extended across the entire spectral range. A neural network (NN) model is used as an example for the development of the fast HRT, and its results are compared with those from a principal component analysis (PCA) model, which shares a similar principle. We consider hyperspectral radiances from both actual satellite observations and accurate line-by-line simulations. The NN model reduces the computational burden by two to three orders of magnitude and generates radiances with small relative errors (generally less than 0.5% compared to exact calculations); its performance is better than that of the PCA model. The model can be further improved by optimizing the training procedure and parameters, the choice of representative wavelengths, and the machine learning technique itself.
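The core idea described above can be illustrated with a minimal NumPy sketch: learn a mapping from radiances at a small subset of representative wavelengths to the full spectrum, using a training set of precomputed spectra. All data below are synthetic stand-ins for line-by-line radiances, the subset selection is naive (evenly spaced), and a simple least-squares regression stands in for the paper's neural network; it is a toy illustration of the principle, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "hyperspectral" spectra: smooth combinations of a few broad
# basis shapes (a stand-in for line-by-line radiances; all synthetic).
n_train, n_test, n_wave, n_modes = 500, 100, 2000, 8
wave = np.linspace(0.0, 1.0, n_wave)
basis = np.stack([np.sin((m + 1) * np.pi * wave) for m in range(n_modes)])
train = rng.normal(size=(n_train, n_modes)) @ basis
test = rng.normal(size=(n_test, n_modes)) @ basis

# Representative wavelengths: a small, evenly spaced subset (a real
# fast-HRT model would choose these far more carefully).
k = 40
idx = np.linspace(0, n_wave - 1, k).astype(int)

# Fit a linear map from the k subset radiances to the full spectrum
# (a least-squares stand-in for the NN regression in the paper).
W, *_ = np.linalg.lstsq(train[:, idx], train, rcond=None)

# "Fast" forward model: full spectra from only k monochromatic values.
pred = test[:, idx] @ W
rel_err = np.abs(pred - test).max() / np.abs(test).max()
print(f"max relative error: {rel_err:.2e}")
```

Because the synthetic spectra are noise-free and lie in a low-dimensional subspace, the reconstruction here is essentially exact; real radiances would show the small residual errors quoted in the abstract, and a nonlinear NN would replace the linear map.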
Additional Information
© 2020 Elsevier Ltd. Received 30 September 2019, Revised 4 January 2020, Accepted 22 February 2020, Available online 28 February 2020.

Additional details
- Eprint ID
- 101698
- Resolver ID
- CaltechAUTHORS:20200304-091512556
- Created
- 2020-03-04 (from EPrint's datestamp field)
- Updated
- 2021-11-16 (from EPrint's last_modified field)
- Caltech groups
- Astronomy Department, Division of Geological and Planetary Sciences