Published August 29, 2014 | Accepted Version
Report | Open Access

Error Bounds for Random Matrix Approximation Schemes

Abstract

Randomized matrix sparsification has proven to be a fruitful technique for producing faster algorithms in applications ranging from graph partitioning to semidefinite programming. In the decade or so of research into this technique, the focus has been—with few exceptions—on ensuring the quality of approximation in the spectral and Frobenius norms. For certain graph algorithms, however, the ∞→1 norm may be a more natural measure of performance. This paper addresses the problem of approximating a real matrix A by a sparse random matrix X with respect to several norms. It provides the first results on approximation error in the ∞→1 and ∞→2 norms, and it uses a result of Latała to study approximation error in the spectral norm. These bounds hold for a reasonable family of random sparsification schemes, those which ensure that the entries of X are independent and average to the corresponding entries of A. Optimality of the ∞→1 and ∞→2 error estimates is established. Concentration results for the three norms hold when the entries of X are uniformly bounded. The spectral error bound is used to predict the performance of several sparsification and quantization schemes that have appeared in the literature; the results are competitive with the performance guarantees given by earlier scheme-specific analyses.
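The family of schemes the abstract describes — entries of X independent, with E[X] = A entrywise — includes the classic Bernoulli sparsifier: keep each entry with probability p and rescale the survivors by 1/p. The sketch below is an illustration of that generic scheme, not code from the paper; the function name and parameters are hypothetical.

```python
import numpy as np

def sparsify(A, p, seed=None):
    """Bernoulli sparsification: keep each entry of A independently
    with probability p and rescale by 1/p, so that the random sparse
    matrix X satisfies E[X] = A entrywise."""
    rng = np.random.default_rng(seed)
    mask = rng.random(A.shape) < p          # independent coin flips
    return np.where(mask, A / p, 0.0)       # survivors rescaled by 1/p

A = np.arange(12, dtype=float).reshape(3, 4)
X = sparsify(A, p=0.3, seed=0)
# Approximation error in the norms studied: e.g. the spectral norm
spectral_err = np.linalg.norm(A - X, 2)
```

Averaging many independent draws of X recovers A, which is the unbiasedness condition the paper's error bounds assume; the bounds then control how far a single draw strays from A in the ∞→1, ∞→2, and spectral norms.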

Attached Files

Accepted Version - GT09-Error-Bounds.pdf — 242.5 kB (md5:c363d02abad85f181f1f5355c23f53c2)

Accepted Version - GT14-Error-Bounds-TR.pdf — 362.3 kB (md5:9a30e00ad6e3b390b6c7490dea538e93)

Total: 604.8 kB

Additional details

Created: August 20, 2023
Modified: January 13, 2024