A Framework for Evaluating Approximation Methods for Gaussian Process Regression
Abstract
Gaussian process (GP) predictors are an important component of many Bayesian approaches to machine learning. However, even a straightforward implementation of Gaussian process regression (GPR) requires O(n^2) space and O(n^3) time for a data set of n examples. Several approximation methods have been proposed, but there is a lack of understanding of the relative merits of the different approximations, and in what situations they are most useful. We recommend assessing the quality of the predictions obtained as a function of the compute time taken, and comparing to standard baselines (e.g., Subset of Data and FITC). We empirically investigate four different approximation algorithms on four different prediction problems, and make our code available to encourage future comparisons.
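The O(n^2)/O(n^3) cost and the Subset of Data baseline mentioned above can be illustrated with a minimal sketch. This is not the authors' released code: it is a standard exact-GPR predictor via a Cholesky factorization (the O(n^3) step), plus a Subset of Data baseline that simply runs exact GPR on m randomly chosen training points; the squared-exponential kernel and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def rbf(X1, X2, ell=1.0, sf2=1.0):
    # Squared-exponential kernel k(x, x') = sf2 * exp(-|x - x'|^2 / (2 ell^2)).
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def gpr_predict(X, y, Xs, noise=0.1):
    # Exact GPR: factorizing the n x n kernel matrix costs O(n^3) time
    # and O(n^2) memory, which is what the approximations aim to avoid.
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(Xs, X)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf(Xs, Xs).diagonal() + noise - np.sum(v**2, axis=0)
    return mean, var

def sod_predict(X, y, Xs, m=100, noise=0.1, seed=0):
    # Subset of Data baseline: exact GPR on m randomly chosen points,
    # reducing the cost to O(m^3) at the price of discarding data.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    return gpr_predict(X[idx], y[idx], Xs, noise)
```

Timing `gpr_predict` against `sod_predict` for increasing n gives exactly the kind of quality-versus-compute-time curve the paper recommends as an evaluation protocol.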
Additional Information
© 2013 Krzysztof Chalupka, Christopher K. I. Williams and Iain Murray. Submitted November 2011; Revised June 2012, November 2012; Published February 2013. We thank the anonymous referees whose comments helped improve the paper. We also thank Carl Rasmussen, Ed Snelson and Joaquin Quiñonero-Candela for many discussions on the comparison of GP approximation methods. This work is supported in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. This publication only reflects the authors' views.
Files
Name | Size
---|---
Chalupka_2013p333.pdf (Published; md5:7dd3567ce0839e8f26f851f5492526ea) | 317.0 kB
Additional details
- Eprint ID: 37769
- Resolver ID: CaltechAUTHORS:20130404-141235986
- Funding: European Community, IST-2007-216886
- Created: 2013-04-04 (from EPrint's datestamp field)
- Updated: 2019-10-03 (from EPrint's last_modified field)