Published July 15, 2019 | Submitted
Journal Article | Open Access

Kernel Flows: From learning kernels from data into the abyss

Abstract

Learning can be seen as approximating an unknown function by interpolating the training data. Although Kriging offers a solution to this problem, it requires the prior specification of a kernel and it is not scalable to large datasets. We explore a numerical approximation approach to kernel selection/construction based on the simple premise that a kernel must be good if the number of interpolation points can be halved without significant loss in accuracy (measured using the intrinsic RKHS norm ∥·∥ associated with the kernel). We first test and motivate this idea on a simple problem of recovering the Green's function of an elliptic PDE (with inhomogeneous coefficients) from the sparse observation of one of its solutions. Next we consider the problem of learning non-parametric families of deep kernels of the form K_1(F_n(x), F_n(x')) with F_{n+1} = (I_d + ϵG_{n+1}) ∘ F_n and G_{n+1} ∈ span{K_1(F_n(x_i), ·)}. With the proposed approach, constructing the kernel becomes equivalent to integrating a stochastic, data-driven dynamical system, which allows for the training of very deep (bottomless) networks and the exploration of their properties. These networks learn by constructing flow maps in the kernel and input spaces via incremental data-dependent deformations/perturbations (appearing as the cooperative counterpart of adversarial examples) and, at profound depths, they (1) can achieve accurate classification from only one data point per class, (2) appear to learn archetypes of each class, and (3) expand distances between points that are in different classes and contract distances between points in the same class. For kernels parameterized by the weights of Convolutional Neural Networks, minimizing the approximation error incurred by halving random subsets of interpolation points appears to outperform training (the same CNN architecture) with relative entropy and dropout.
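The halving criterion stated in the abstract has a convenient closed form for Kriging interpolants: if u interpolates all the data and v interpolates a random half, then v is the RKHS projection of u, so ∥u − v∥² / ∥u∥² = 1 − ∥v∥² / ∥u∥², and each squared norm reduces to y^T K^{-1} y. The sketch below illustrates computing this loss ρ for one random halving. It is a minimal illustration, not the authors' reference implementation; the Gaussian kernel, the jitter term reg, and all function names here are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, Z, length_scale):
    """Gaussian (RBF) kernel matrix; an illustrative kernel choice."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def rkhs_norm_sq(K, y, reg=1e-8):
    """Squared RKHS norm y^T K^{-1} y of the minimal-norm interpolant
    of the data; reg is a small stabilizing jitter (an assumption)."""
    return y @ np.linalg.solve(K + reg * np.eye(len(y)), y)

def kernel_flow_rho(X, y, length_scale, rng):
    """rho = ||u - v||^2 / ||u||^2 = 1 - ||v||^2 / ||u||^2, where u
    interpolates all points and v a random half. rho is small exactly
    when halving the interpolation points loses little accuracy in
    the intrinsic RKHS norm, i.e. the premise of the abstract."""
    n = len(X)
    half = rng.choice(n, size=n // 2, replace=False)
    K = gaussian_kernel(X, X, length_scale)
    num = rkhs_norm_sq(K[np.ix_(half, half)], y[half])
    den = rkhs_norm_sq(K, y)
    return 1.0 - num / den

# Usage sketch: draw a fresh halving per step and adjust the kernel
# parameter to decrease rho (stochastic gradient descent in practice).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
print(kernel_flow_rho(X, y, length_scale=1.0, rng=rng))
```

In the parametric variant this ρ is minimized over kernel parameters across random halvings; in the non-parametric variant described in the abstract, the same quantity drives the flow F_{n+1} = (I_d + ϵG_{n+1}) ∘ F_n in the input space.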

Additional Information

© 2019 Elsevier. Received 28 September 2018; Revised 17 March 2019; Accepted 20 March 2019; Available online 28 March 2019. The authors gratefully acknowledge support for this work from the Air Force Office of Scientific Research and the DARPA EQUiPS Program under award number FA9550-16-1-0054 (Computational Information Games) and from the Air Force Office of Scientific Research under award number FA9550-18-1-0271 (Games for Computation and Learning). We also thank Andrew Stuart and Yifan Chen for helpful discussions clarifying Section 7.4.

Attached Files

Submitted - 1808.04475.pdf (6.8 MB, md5:f391c87abb00bffc620c1883f667899f)

Additional details

Created: August 19, 2023
Modified: October 20, 2023