Published March 3, 2010
Journal Article | Open Access

A Singular Value Thresholding Algorithm for Matrix Completion

Abstract

This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem and arises in many important applications as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. Two remarkable features make this algorithm attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Together, these facts allow the algorithm to make use of very minimal storage space and keep the computational cost of each iteration low. On the theoretical side, we provide a convergence analysis showing that the sequence of iterates converges. On the practical side, we provide numerical examples in which 1,000 × 1,000 matrices are recovered in less than a minute on a modest desktop computer. We also demonstrate that our approach is amenable to very large scale problems by recovering matrices of rank about 10 with nearly a billion unknowns from just about 0.4% of their sampled entries. Our methods are connected with the recent literature on linearized Bregman iterations for ℓ_1 minimization, and we develop a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
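The iteration described in the abstract can be sketched in a few lines of NumPy: soft-threshold the singular values of Y^k to get X^k, then take a step on the observed entries. This is a minimal dense toy version for illustration only; the function names `shrink` and `svt_complete`, the step size, and the threshold `tau` are assumptions on our part, and the published algorithm additionally exploits the sparsity of Y^k and partial SVDs, which this sketch omits.

```python
import numpy as np

def shrink(Y, tau):
    """Soft-threshold the singular values of Y at level tau (the D_tau operator)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def svt_complete(M, mask, tau, delta=1.2, n_iter=500, tol=1e-4):
    """Recover a low-rank matrix from the entries M[mask] by SVT-style iterations.

    Iterates  X^k = D_tau(Y^{k-1}),  Y^k = Y^{k-1} + delta * P_Omega(M - X^k),
    where P_Omega keeps only the observed entries (mask == True).
    """
    Y = np.zeros_like(M, dtype=float)
    X = Y
    norm_obs = np.linalg.norm(M[mask])
    for _ in range(n_iter):
        X = shrink(Y, tau)                      # soft-threshold singular values
        residual = np.where(mask, M - X, 0.0)   # error on the observed set only
        if np.linalg.norm(residual) <= tol * norm_obs:
            break
        Y = Y + delta * residual                # step on the observed entries
    return X
```

As a usage sketch, one would sample a low-rank matrix on a subset of entries and call `svt_complete(M, mask, tau=5 * n)`, with a large threshold (the paper's numerical experiments use thresholds on the order of several times the matrix dimension) so that the converged iterate approximates the minimum-nuclear-norm completion.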

Additional Information

© 2010 Society for Industrial and Applied Mathematics. Received by the editors October 23, 2008; accepted for publication (in revised form) January 5, 2010; published electronically March 3, 2010. The first author is supported by the Wavelets and Information Processing Programme under a grant from DSTA, Singapore. The second author is partially supported by the Waterman Award from the National Science Foundation and by ONR grant N00014-08-1-0749. The third author is supported in part by grant R-146-000-113-112 from the National University of Singapore. The second author would like to thank Benjamin Recht and Joel Tropp for fruitful conversations related to this project, and Stephen Becker for his help in preparing the computational results of section 5.2.2.

Attached Files

Published - Cai2010p10180Siam_J_Optimiz.pdf

Files

Cai2010p10180Siam_J_Optimiz.pdf (460.2 kB)
md5:a041b2eac0e8fd1c2ff645141610b754

Additional details

Created: August 21, 2023
Modified: October 20, 2023