Published June 2015 | Submitted version
Book Section - Chapter | Open Access

Analysis of a Classical Matrix Preconditioning Algorithm

Abstract

We study a classical iterative algorithm for the problem of balancing matrices in the L∞ norm via a scaling transformation. This algorithm, which goes back to Osborne and Parlett & Reinsch in the 1960s, is implemented as a standard preconditioner in many numerical linear algebra packages. Surprisingly, despite its widespread use over several decades, no bounds were known on its rate of convergence. In this paper we prove that, for a large class of irreducible n × n (real or complex) input matrices A, a natural variant of the algorithm converges in O(n^3 log(nρ/ε)) elementary balancing operations, where ρ measures the initial imbalance of A and ε is the target imbalance of the output matrix. (The imbalance of A is max_i |log(a_i^(out)/a_i^(in))|, where a_i^(out), a_i^(in) are the maximum entries in magnitude in the ith row and column, respectively.) This bound is tight up to the log n factor. A balancing operation scales the ith row and column so that their maximum entries are equal, and requires O(m/n) arithmetic operations on average, where m is the number of non-zero elements in A. Thus the running time of the iterative algorithm is Õ(n^2 m). This is the first time bound of any kind on any variant of the Osborne-Parlett-Reinsch algorithm. The class of matrices for which the above analysis holds are those satisfying a condition we call Unique Balance, meaning that the limit of the iterative balancing process does not depend on the order in which balancing operations are performed. We also prove a combinatorial characterization of the Unique Balance property, which had earlier been conjectured by Chen.
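To make the elementary balancing operation concrete, below is a minimal sketch in Python (with NumPy) of the imbalance measure defined above and a round-robin sweep of L∞ balancing operations. The round-robin order, the exclusion of diagonal entries from the row/column maxima, the stopping rule, and all function names are illustrative assumptions; the paper analyzes a particular variant of the Osborne-Parlett-Reinsch iteration, which need not coincide with this one.

```python
import numpy as np

def imbalance(A):
    """Imbalance of A: max_i |log(a_i_out / a_i_in)|, where a_i_out and
    a_i_in are the largest magnitudes in row i and column i.
    Diagonal entries are excluded here (an assumption; the diagonal is
    left unchanged by balancing operations)."""
    worst = 0.0
    for i in range(A.shape[0]):
        r = np.abs(np.delete(A[i, :], i)).max(initial=0.0)
        c = np.abs(np.delete(A[:, i], i)).max(initial=0.0)
        if r > 0 and c > 0:
            worst = max(worst, abs(np.log(r / c)))
    return worst

def balance_op(A, i):
    """One elementary balancing operation: scale row i by s and column i
    by 1/s so their maximum magnitudes become equal. This is a diagonal
    similarity transformation D A D^{-1}, so eigenvalues are preserved."""
    r = np.abs(np.delete(A[i, :], i)).max(initial=0.0)
    c = np.abs(np.delete(A[:, i], i)).max(initial=0.0)
    if r > 0 and c > 0:
        s = np.sqrt(c / r)  # after scaling: s*r == c/s
        A[i, :] *= s        # A[i, i] is multiplied by s ...
        A[:, i] /= s        # ... then divided by s: net unchanged

def balance(A, eps=1e-8, max_sweeps=10_000):
    """Apply balancing operations in round-robin sweeps until the
    imbalance drops below eps (illustrative stopping rule)."""
    A = np.array(A, dtype=float)
    for _ in range(max_sweeps):
        if imbalance(A) <= eps:
            break
        for i in range(A.shape[0]):
            balance_op(A, i)
    return A
```

As the abstract notes, on a Unique Balance input the limit of this process does not depend on the order of operations; the O(n^3 log(nρ/ε)) bound counts elementary operations like `balance_op`, each costing O(m/n) arithmetic on average in a sparse representation, which yields the Õ(n^2 m) total running time.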

Additional Information

© 2015 ACM. This work was supported in part by NSF grants 1038578, 1319745, 1016896, and 1420934, and by the Simons Institute for the Theory of Computing, where some of the work was performed. We thank Tzu-Yi Chen and Jim Demmel for introducing us to this problem and for several helpful pointers, Martin Dyer for useful discussions, and Yuval Rabani for help in the early stages of this work, which led to the proof of Theorem 1. We also thank the anonymous referees of an earlier version for suggestions that improved the presentation of the paper.

Attached Files

Submitted - 1504.03026v2.pdf

Additional details

Created: August 20, 2023
Modified: October 23, 2023