Published February 2005 | public
Book Section - Chapter

Floating-Point Sparse Matrix-Vector Multiply for FPGAs

Abstract

Large, high-density FPGAs with high local distributed memory bandwidth surpass the peak floating-point performance of high-end, general-purpose processors. Microprocessors deliver nowhere near their peak floating-point performance on efficient algorithms that use the Sparse Matrix-Vector Multiply (SMVM) kernel; in fact, it is not uncommon for microprocessors to yield only 10–20% of their peak floating-point performance when computing SMVM. We develop and analyze a scalable SMVM implementation on modern FPGAs and show that it can sustain high-throughput, near-peak floating-point performance. For benchmark matrices from the Matrix Market suite, we project 1.5 double-precision Gflops per FPGA for a single Virtex II 6000-4 and 12 double-precision Gflops for 16 Virtex IIs (750 Mflops/FPGA).
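For readers unfamiliar with the kernel, the sketch below shows SMVM in software using compressed sparse row (CSR) storage. CSR is an assumption made here for illustration; the paper's FPGA design uses its own distributed storage scheme. The indirect, data-dependent index into the vector x is what defeats microprocessor caches and prefetchers and leads to the 10–20%-of-peak behavior the abstract cites.

```c
#include <stdio.h>

/* Minimal CSR sparse matrix-vector multiply: y = A*x.
 * row_ptr[i]..row_ptr[i+1] delimit the nonzeros of row i;
 * col_idx[k] and val[k] give each nonzero's column and value.
 * This is an illustrative software sketch, not the paper's
 * FPGA implementation. */
void smvm_csr(int n, const int *row_ptr, const int *col_idx,
              const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_idx[k]];  /* irregular access into x */
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example: A = [[2 0 1], [0 3 0], [4 0 5]], x = [1 1 1] */
    int row_ptr[] = {0, 2, 3, 5};
    int col_idx[] = {0, 2, 1, 0, 2};
    double val[]  = {2, 1, 3, 4, 5};
    double x[]    = {1, 1, 1};
    double y[3];

    smvm_csr(3, row_ptr, col_idx, val, x, y);
    for (int i = 0; i < 3; i++)
        printf("y[%d] = %g\n", i, y[i]);  /* expect 3, 3, 9 */
    return 0;
}
```

Each nonzero contributes one multiply and one add, so performance is bound by how fast the matrix data and the scattered x entries can be streamed in, which is where the FPGA's distributed local memory bandwidth pays off.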

Additional Information

© 2005 ACM. This work was supported by the Microelectronics Advanced Research Consortium (MARCO) and is part of the efforts of the Gigascale Systems Research Center (GSRC). Thanks to Keith Underwood for valuable editorial comments on this writeup.

Additional details

Created: August 19, 2023
Modified: October 23, 2023