Published October 10, 2019 | Submitted
Journal Article | Open Access

The SXS Collaboration catalog of binary black hole simulations

Abstract

Accurate models of gravitational waves from merging black holes are necessary for detectors to observe as many events as possible while extracting the maximum science. Near the time of merger, the gravitational waves from merging black holes can be computed only using numerical relativity. In this paper, we present a major update of the Simulating eXtreme Spacetimes (SXS) Collaboration catalog of numerical simulations for merging black holes. The catalog contains 2018 distinct configurations (a factor of 11 increase compared to the 2013 SXS catalog), including 1426 spin-precessing configurations, with mass ratios between 1 and 10, and spin magnitudes up to 0.998. The median length of a waveform in the catalog is 39 cycles of the dominant ℓ = m = 2 gravitational-wave mode, with the shortest waveform containing 7.0 cycles and the longest 351.3 cycles. We discuss improvements such as correcting for moving centers of mass and extended coverage of the parameter space. We also present a thorough analysis of numerical errors, finding typical truncation errors corresponding to a waveform mismatch of ~10⁻⁴. The simulations provide remnant masses and spins with uncertainties of 0.03% and 0.1% (90th percentile), about an order of magnitude better than analytical models for remnant properties. The full catalog is publicly available at www.black-holes.org/waveforms.
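The truncation error quoted above is expressed as a waveform mismatch, i.e. one minus the overlap between two waveforms after maximizing over a relative time shift and an overall phase. As a rough illustration only (not the collaboration's actual procedure, which uses detector noise curves and the full set of modes), the following sketch computes a flat-noise mismatch between two complex strain time series; the function name flat_noise_mismatch and the toy damped-sinusoid waveforms are assumptions made for this example.

import numpy as np

def flat_noise_mismatch(h1, h2, dt):
    """Flat-noise mismatch between two complex strain series h1, h2
    sampled on the same uniform time grid with spacing dt, maximized
    over a relative time shift and an overall phase."""
    n = len(h1)
    df = 1.0 / (n * dt)
    # Discrete approximations to the continuous Fourier transforms
    H1 = np.fft.fft(h1) * dt
    H2 = np.fft.fft(h2) * dt
    # Inner product <h1, h2> evaluated for every relative time shift at once
    overlaps = n * df * np.fft.ifft(H1 * np.conj(H2))
    norm1 = np.sqrt(df * np.sum(np.abs(H1) ** 2))
    norm2 = np.sqrt(df * np.sum(np.abs(H2) ** 2))
    # abs() maximizes over the overall phase; max() over the time shift
    match = np.max(np.abs(overlaps)) / (norm1 * norm2)
    return 1.0 - match

# Toy example: two damped sinusoids differing slightly in frequency,
# standing in for two numerical resolutions of the same simulation.
t = np.arange(0.0, 200.0, 0.1)
h_low = np.exp(2j * np.pi * 0.2000 * t) * np.exp(-t / 100.0)
h_high = np.exp(2j * np.pi * 0.2001 * t) * np.exp(-t / 100.0)
print(flat_noise_mismatch(h_low, h_high, dt=0.1))

The flat (white) noise weighting is used purely for simplicity; a detector analysis would instead weight the frequency-domain integrand by the inverse of the noise power spectral density.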

Additional Information

© 2019 IOP Publishing Ltd. Received 9 April 2019; revised 3 July 2019; accepted for publication 24 July 2019; published 11 September 2019. We are pleased to thank Davide Gerosa and Josh Smith for helpful discussions. This work was supported in part by the Sherman Fairchild Foundation; by NSF Grants PHY-1708212 and PHY-1708213 at Caltech; by NSF Grants PHY-1606654 and DGE-1650441 at Cornell; and by NSF Grants PHY-1606522, PHY-1654359, and AST-1559694 and by Dan Black and Goodhue-McWilliams at Cal State Fullerton. S. E. Field is partially supported by NSF Grant PHY-1806665. H. Fong and C. J. Woodford acknowledge support from NSERC of Canada grant PGSD3-504366-2017, and H. Fong further acknowledges support from the University of Toronto and the Japan Society for the Promotion of Science. P. Schmidt acknowledges support from NWO Veni Grant No. 680-47-460. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant No. ACI-1548562. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. Computations were performed on the GPC supercomputer at the SciNet HPC Consortium [232]; SciNet is funded by the Canada Foundation for Innovation (CFI) under the auspices of Compute Canada, the Government of Ontario, the Ontario Research Fund (ORF) - Research Excellence, and the University of Toronto. Computations were performed on the supercomputer Briarée from the Université de Montréal, managed by Calcul Québec and Compute Canada; the operation of these supercomputers is funded by the Canada Foundation for Innovation (CFI), NanoQuébec, RMGA, and the Fonds de recherche du Québec - Nature et Technologie (FRQ-NT). Computations were performed on the Wheeler cluster at Caltech, which is supported by the Sherman Fairchild Foundation and by Caltech; on NSF/NCSA Blue Waters under allocation NSF PRAC-1713694; and on XSEDE resources Bridges at the Pittsburgh Supercomputing Center, Comet at the San Diego Supercomputer Center, and Stampede and Stampede2 at the Texas Advanced Computing Center, through allocation TG-PHY990007N. Computations were performed on the Minerva high-performance computer cluster at the Max Planck Institute for Gravitational Physics in Potsdam.

Attached Files

Submitted - 1904.04831.pdf (2.9 MB)
md5:de61360bdaba6d90f72b3733d1bd9eb5

Additional details

Created: August 19, 2023
Modified: October 20, 2023