Published October 2013
Book Section - Chapter

The squared-error of generalized LASSO: A precise analysis

Abstract

We consider the problem of estimating an unknown but structured signal x_0 from its noisy linear observations y = Ax_0 + z ∈ ℝ^m. To the structure of x_0 is associated a structure-inducing convex function f(·). We assume that the entries of A are i.i.d. standard normal N(0, 1) and z ~ N(0, σ²I_m). As a measure of performance of an estimate x^* of x_0, we consider the "Normalized Squared Error" (NSE), ∥x^* − x_0∥²_2/σ². For sufficiently small σ, we characterize the exact performance of two different versions of the well-known LASSO algorithm. The first estimator is obtained by solving arg min_x ∥y − Ax∥_2 + λf(x). As a function of λ, we identify three distinct regions of operation; among them, we argue that R_ON is the most interesting one. When λ ∈ R_ON, we show that for small σ the NSE is D_f(x_0, λ)/(m − D_f(x_0, λ)), where D_f(x_0, λ) is the expected squared distance of an i.i.d. standard normal vector to the dilated subdifferential λ·∂f(x_0). Second, we consider the more popular estimator arg min_x (1/2)∥y − Ax∥²_2 + στf(x). We propose a formula for the NSE of this estimator by establishing a suitable mapping between it and the previous estimator over the region R_ON. As a useful side result, we find explicit formulae for the optimal estimation performance and the optimal penalty parameters λ^* and τ^*.
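As a quick numerical illustration (not part of the paper), the second estimator, arg min_x (1/2)∥y − Ax∥²_2 + στf(x), can be computed for the sparsity-inducing choice f = ∥·∥_1 with plain proximal-gradient (ISTA) iterations and its empirical NSE measured. The dimensions, sparsity level, noise level σ, and penalty τ below are arbitrary illustrative choices, not the paper's optimal τ^*:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 50, 5     # signal dimension, measurements, sparsity (arbitrary)
sigma = 0.05             # noise standard deviation
tau = 3.0                # penalty parameter (illustrative, not the optimal tau*)

# Sparse ground truth x0 and i.i.d. standard normal sensing matrix A
x0 = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = A @ x0 + sigma * rng.standard_normal(m)

# ISTA for min_x 0.5*||y - Ax||_2^2 + sigma*tau*||x||_1:
# gradient step on the quadratic, then soft-thresholding (prox of the l1 norm)
L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient: ||A||_2^2
t = 1.0 / L                     # step size
lam = sigma * tau               # effective l1 penalty weight
x = np.zeros(n)
for _ in range(5000):
    v = x - t * (A.T @ (A @ x - y))
    x = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

# Empirical Normalized Squared Error
nse = np.sum((x - x0) ** 2) / sigma ** 2
print(f"empirical NSE: {nse:.2f}")
```

Sweeping τ in such a simulation and plotting the empirical NSE against the paper's formula (via the mapping to the first estimator over R_ON) is a natural way to visualize the result.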

Additional Information

© 2013 IEEE. This work was supported in part by the National Science Foundation under grants CCF-0729203, CNS-0932428 and CIF-1018927, by the Office of Naval Research under the MURI grant N00014-08-1-0747, and by a grant from Qualcomm Inc.

Additional details

Created: August 19, 2023
Modified: March 5, 2024