Published July 7, 2022 | Published + Submitted
Journal Article | Open

Lessons for adaptive mesh refinement in numerical relativity

Abstract

We demonstrate the flexibility and utility of the Berger–Rigoutsos adaptive mesh refinement (AMR) algorithm used in the open-source numerical relativity (NR) code GRChombo for generating gravitational waveforms from binary black-hole (BH) inspirals, and for studying other problems involving non-trivial matter configurations. We show that GRChombo can produce high-quality binary BH waveforms through a code comparison with the established NR code Lean. We also discuss some of the technical challenges involved in making use of full AMR (as opposed to, e.g., moving-box mesh refinement), including the numerical effects caused by using various refinement criteria when regridding. We suggest several 'rules of thumb' for when to use different tagging criteria for simulating a variety of physical phenomena. We demonstrate the use of these different criteria through example evolutions of a scalar field theory. Finally, we review the current status and general capabilities of GRChombo.
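The tagging criteria mentioned in the abstract are the heart of the regridding step: each cell is assigned a scalar criterion value, and cells whose value exceeds a threshold are tagged for refinement before the Berger–Rigoutsos algorithm clusters them into boxes. As a rough illustration, the following is a minimal standalone C++ sketch of a gradient-based criterion. It is not GRChombo's actual API; the one-dimensional grid, field, and threshold are invented for the example.

#include <cmath>
#include <cstdio>
#include <vector>

// Sketch of a gradient-based cell-tagging criterion of the kind used to
// drive Berger-Rigoutsos regridding. Cells where the (undivided) gradient
// of a field phi exceeds a threshold are tagged; the AMR driver would then
// cluster the tagged cells into refined boxes.
std::vector<bool> tag_cells(const std::vector<double>& phi, double threshold)
{
    const std::size_t n = phi.size();
    std::vector<bool> tags(n, false);
    for (std::size_t i = 1; i + 1 < n; ++i)
    {
        // Second-order central difference, left undivided (no division by
        // the grid spacing) so the criterion automatically weakens as the
        // grid is refined and the field becomes well resolved.
        const double grad = 0.5 * (phi[i + 1] - phi[i - 1]);
        if (std::fabs(grad) > threshold)
            tags[i] = true;
    }
    return tags;
}

int main()
{
    // Toy field: a Gaussian pulse, steep near its centre.
    const std::size_t n = 64;
    std::vector<double> phi(n);
    for (std::size_t i = 0; i < n; ++i)
    {
        const double x = (static_cast<double>(i) - 32.0) / 4.0;
        phi[i] = std::exp(-x * x);
    }

    const std::vector<bool> tags = tag_cells(phi, /*threshold=*/0.02);
    for (std::size_t i = 0; i < n; ++i)
        if (tags[i])
            std::printf("cell %zu tagged for refinement\n", i);
    return 0;
}

In a production NR code the same pattern is applied to quantities such as field gradients, curvature invariants, or the positions of the punctures, and the choice of tagged quantity and threshold is exactly the kind of trade-off the paper's 'rules of thumb' address.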

Additional Information

© 2022 The Author(s). Published by IOP Publishing Ltd. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Received 19 January 2022; revised 3 May 2022; accepted for publication 13 May 2022; published 6 June 2022.

We thank the rest of the GRChombo collaboration (www.grchombo.org) for their support and code development work, and for helpful discussions regarding their use of AMR. MR acknowledges support from a Science and Technology Facilities Council (STFC) studentship. KC acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 693024), and an STFC Ernest Rutherford Fellowship. AD is supported by a Junior Research Fellowship (JRF) at Homerton College, University of Cambridge. PF is supported by European Research Council Grant No. ERC-2014-StG 639022-NewNGR, and by Royal Society University Research Fellowship Grant Nos. UF140319, RGF\EA\180260, URF\R\201026 and RF\ERE\210291. JCA acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 693024), from the Beecroft Trust and from The Queen's College via an extraordinary Junior Research Fellowship (eJRF). TF is supported by a Royal Society Enhancement Award (Grant No. RGF\EA\180260). TH is supported by NSF Grant Nos. PHY-1912550, AST-2006538, PHY-090003 and PHY-20043, and NASA Grant Nos. 17-ATP17-0225, 19-ATP19-0051 and 20-LPS20-0011. JLR and US are supported by STFC Research Grant No. ST/V005669/1; US is supported by the H2020-ERC-2014-CoG Grant 'MaGRaTh' No. 646597.

The simulations presented in this paper used DiRAC resources under the projects ACSP218, ACTP186 and ACTP238. The work was performed using the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital Grants ST/P002307/1 and ST/R002452/1 and STFC operations Grant ST/R00689X/1. In addition, we have used the DiRAC at Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital Grants ST/P002293/1 and ST/R002371/1, Durham University and STFC operations Grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. XSEDE resources under Grant No. PHY-090003 were used on the San Diego Supercomputer Center's clusters Comet and Expanse and the Texas Advanced Computing Center's (TACC) Stampede2. Furthermore, we acknowledge the use of HPC resources on the TACC Frontera cluster [138]. PRACE resources under Grant No. 2020225359 were used on the GCS Supercomputer JUWELS at the Jülich Supercomputing Centre (JSC) through the John von Neumann Institute for Computing (NIC), funded by the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu). The Fawcett supercomputer at the Department of Applied Mathematics and Theoretical Physics (DAMTP), University of Cambridge was also used, funded by STFC Consolidated Grant ST/P000673/1.

Data availability statement

The data that support the findings of this study are available upon reasonable request from the authors.

Attached Files

Published - Radia_2022_Class._Quantum_Grav._39_135006.pdf

Submitted - 2112.10567.pdf

Files (6.0 MB)

Name                                           Size    MD5 checksum
Radia_2022_Class._Quantum_Grav._39_135006.pdf  3.7 MB  fc8df0fa88a8c8b3d947d5c63cae34d3
2112.10567.pdf                                 2.2 MB  fb7f98e520073b61b6dcedaa966bd4e5

Additional details

Created: August 22, 2023
Modified: October 24, 2023