#!/usr/bin/env python
# coding: utf-8

# # `BaikalETK`: NRPy+-Based BSSN Solvers for the Einstein Toolkit
#
# ## Author: Zach Etienne
#
# #### Special thanks to Roland Haas for help in answering many implementation questions
#
# ## This notebook generates `Baikal` and `BaikalVacuum`, [Einstein Toolkit](https://einsteintoolkit.org) thorns for solving Einstein's equations in the BSSN formalism, in Cartesian coordinates. These thorns are highly optimized for modern CPU architectures, featuring SIMD intrinsics and OpenMP support.
#
# **Notebook Status:** Validated against the Einstein Toolkit `McLachlan` BSSN thorn, both in the context of black hole binary simulations (excellent gravitational waveform agreement) and binary neutron star simulations (when the parameter `enable_stress_energy_source_terms` below is set to `True`). Once plots demonstrating this agreement are added to this tutorial notebook, the validation status will be set to Validated.
#
# **Validation Notes:** This tutorial notebook has been validated against a trusted Einstein Toolkit thorn, but plots demonstrating its validation have yet to be included in this notebook.
#
# ## Introduction
#
# ```
# How often did my soul cry out:
# Come back to Baikal once again?
# I still do not know this lake:
# To see does not mean to know.
# ```
# [Igor Severyanin](https://en.wikipedia.org/wiki/Igor_Severyanin), [[1]](https://1baikal.ru/en/istoriya/let’s-turn-to-baikal-a-poetic-view).
#
# [Lake Baikal](https://en.wikipedia.org/wiki/Lake_Baikal) is home to the [nerpa seal](https://en.wikipedia.org/wiki/Baikal_seal), NRPy+'s mascot.
#
# This notebook generates two thorns: `Baikal` and `BaikalVacuum`. `Baikal` contains stress-energy source terms (i.e., the $8\pi T^{\mu\nu}$ part of Einstein's equations) for general relativistic hydrodynamics (GRHD) and magnetohydrodynamics (GRMHD), while `BaikalVacuum` contains no such source terms, so it can be used for, e.g., black hole and binary black hole calculations in which no self-gravitating matter is considered.
#
# `Baikal` and `BaikalVacuum` are meant to reproduce the functionality of the `McLachlan` thorn, generated by the [Mathematica](https://www.wolfram.com/mathematica/)-based [Kranc](http://kranccode.org/) code, but using the fully open-source NRPy+/SymPy infrastructure.
#
# Unlike `McLachlan`, `Baikal` and `BaikalVacuum` expose only the most popular options as runtime parameters, such as finite-difference order and Kreiss-Oliger dissipation strength. Further, neither thorn yet supports symmetry options or the Jacobians needed for `Llama` grids; support for these options might be provided in a future update. As for BSSN gauge choice, `Baikal` and `BaikalVacuum` by default support only the BSSN moving-puncture gauge conditions, though this can be changed by selecting one of the [other gauge choices implemented within the NRPy+ infrastructure](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb).
#
# **Regarding the structure of this notebook:**
#
# As generating the optimized C code kernels needed for these thorns can *individually* take roughly 10 minutes per finite-difference order, there is a strong motivation to parallelize the code generation process.
#
# To this end, this Jupyter notebook makes use of Python's [`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html) module, which spawns an independent Python interpreter to construct individual C-code kernels.
On a modern, multi-core CPU, this greatly cuts down the time needed to generate the thorns (sometimes by 10x or more), all while ensuring a convenient user interface for adjusting the thorns to suit one's needs. # # One significant complication arises with the use of `multiprocessing`: the NRPy+ environment. Each Python interpreter inherits the NRPy+ environment of the parent interpreter from which it spawns. Within each interpreter NRPy+ registers an additional individual C function within NRPy+'s [`outputC`](outputC.py) module. But how to combine all of these individual NRPy+ environments back into the parent after `multiprocessing` is run? The answer lies in NRPy+'s [`pickling`](pickling.py) module, which uses Python's built-in [`pickle`](https://docs.python.org/3/library/pickle.html) module to store ("pickle") an entire NRPy+ environment and add it to the parent NRPy+ environment ("unpickle"). # # ### Associated NRPy+ Source Code & Tutorial Modules for this module: # * [BSSN/ADM_in_terms_of_BSSN.py](BSSN/ADM_in_terms_of_BSSN.py); [\[**tutorial**\]](Tutorial-ADM_in_terms_of_BSSN.ipynb): Constructs ADM quantities in terms of BSSN quantities (in arbitrary curvilinear coordinates, though we use Cartesian here). This is used to generate BaikalETK's BSSN$\to$ADM function, which make ADM variables available to diagnostic thorns within the ETK. # * [BSSN/BSSN_in_terms_of_ADM.py](BSSN/BSSN_in_terms_of_ADM.py); [\[**tutorial**\]](Tutorial-BSSN_in_terms_of_ADM.ipynb): Inverse of the above; reads ADM quantities and outputs BSSN quantities. # * [BSSN/BSSN_Ccodegen_library.py](BSSN/BSSN_Ccodegen_library.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-C_codegen_library.ipynb): BSSN C code generation library, which contains a library of Python functions for generating & registering BSSN C functions within NRPy+'s [`outputC`](outputC.py) module, including # * BSSN constraints (symbolic expression [Python module](BSSN/BSSN_constraints.py) and [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb)) # * BSSN time evolution equation right-hand sides (RHSs) for spacetime quantities ([Python module](BSSN/BSSN_RHSs.py) and [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb)) # * BSSN time evolution equation right-hand sides (RHSs) for gauge quantities ([Python module](BSSN/BSSN_gauge_RHSs.py) and [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) # # # # Table of Contents # $$\label{toc}$$ # # This notebook is organized as follows # # 1. [Step 1](#initializenrpy): Initialize needed Python/NRPy+ modules # 1. [Step 2](#bssn): Description of C codes generated for BSSN spacetime solve # 1. [Step 2.a](#bssnrhs): BSSN right-hand-side (RHS) expressions # 1. [Step 2.b](#hammomconstraints): Hamiltonian & momentum constraints # 1. [Step 2.c](#gamconstraint): Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (in Cartesian coordinates, $\det{\hat{\gamma}_{ij}}=1$) # 1. [Step 3](#kernel_codegen): Generation of C code functions for `Baikal` and `BaikalVacuum` # 1. [Step 3.a](#feature_choice): Set compile-time and runtime parameters for `Baikal` and `BaikalVacuum` # 1. [Step 3.b](#parallel_codegen): Generate all C-code kernels for `Baikal` and `BaikalVacuum`, in parallel if possible # 1. [Step 4](#cclfiles): CCL files - Define how this module interacts and interfaces with the wider Einstein Toolkit infrastructure # 1. [Step 4.a](#paramccl): `param.ccl`: specify free parameters within `Baikal`/`BaikalVacuum` # 1. 
[Step 4.b](#interfaceccl): `interface.ccl`: define needed gridfunctions; provide keywords denoting what this thorn provides and what it should inherit from other thorns # 1. [Step 4.c](#scheduleccl): `schedule.ccl`:schedule all functions used within `Baikal`/`BaikalVacuum`, specify data dependencies within said functions, and allocate memory for gridfunctions # 1. [Step 5](#cdrivers): Register library of C functions needed for interfacing with the `Einstein Toolkit` # 1. [Step 5.a](#etkfunctions): Needed ETK functions: Banner, Symmetry registration, Parameter sanity check, Method of Lines (`MoL`) registration, Boundary condition registration # 1. [Step 5.b](#bssnadmconversions): BSSN $\leftrightarrow$ ADM conversions # 1. [Step 5.b.i](#admtobssn): ADM $\to$ BSSN # 1. [Step 5.b.ii](#bssntoadm): BSSN $\to$ ADM # 1. [Step 5.c](#t4uu): `T4DD_to_T4UU()`: Compute $T^{\mu\nu}$ from `TmunuBase`'s $T_{\mu\nu}$, using BSSN quantities as inputs for the 4D raising operation # 1. [Step 6](#outcdrivers): Output all C driver functions needed for `Baikal`/`BaikalVacuum` # 1. [Step 6.a](#makecodedefn): `make.code.defn`: List of all C driver functions needed to compile `BaikalETK` # 1. [Step 7](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file # # # # Step 1: Initialize needed Python/NRPy+ modules \[Back to [top](#toc)\] # # $$\label{initializenrpy}$$ # In[1]: # Step 1: Import needed core NRPy+ modules from outputC import outC_function_master_list, indent_Ccode, lhrh, add_to_Cfunction_dict # NRPy+: Core C code output module from pickling import unpickle_NRPy_env # NRPy+: pickle/unpickle NRPy+ environment; needed for parallel codegen import finite_difference as fin # NRPy+: Finite difference C code generation module import NRPy_param_funcs as par # NRPy+: Parameter interface import grid as gri # NRPy+: Functions having to do with numerical grids import loop as lp # NRPy+: Generate C code loops import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support import reference_metric as rfm # NRPy+: Reference metric support import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface import os, sys, shutil # Standard Python modules for multiplatform OS-level functions # Needed for setting BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption & LapseEvolutionOption import BSSN.BSSN_gauge_RHSs as gaugerhss # Compile-time (i.e., NRPy+-time) parameters for both Baikal & BaikalVacuum: LapseCondition = "OnePlusLog" ShiftCondition = "GammaDriving2ndOrder_NoCovariant" enable_golden_kernels = True # In[2]: import reference_metric as rfm par.set_parval_from_str("reference_metric::CoordSystem", "Cartesian") rfm.reference_metric() # Sets up coordinate system within NRPy+; needed for generating BSSN RHSs, etc. par.set_parval_from_str("grid::GridFuncMemAccess", "ETK") par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", ShiftCondition) par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::LapseEvolutionOption", LapseCondition) # Output finite difference stencils as functions instead of inlined expressions. 
# Dramatically speeds up compile times (esp with higher-order finite differences # and GCC 9.3+) par.set_parval_from_str("finite_difference::enable_FD_functions", True) # In[3]: for thornname in ["Baikal", "BaikalVacuum"]: # First remove C code output directory if it exists # Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty shutil.rmtree(thornname, ignore_errors=True) cmd.mkdir(os.path.join(thornname)) cmd.mkdir(os.path.join(thornname, "src")) cmd.mkdir(os.path.join(thornname, "src", "SIMD")) import fileinput f = fileinput.input(os.path.join("SIMD", "SIMD_intrinsics.h")) with open(os.path.join(thornname, "src", "SIMD", "SIMD_intrinsics.h"),"w") as outfile: for line in f: outfile.write(line.replace("#define REAL_SIMD_ARRAY REAL", "#define REAL_SIMD_ARRAY CCTK_REAL")) # Create directory for rfm_files output cmd.mkdir(os.path.join(thornname, "src", "rfm_files")) # # # # Step 2: Description of C codes generated for BSSN spacetime solve \[Back to [top](#toc)\] # $$\label{bssn}$$ # # # # ## Step 2.a: BSSN RHS expressions \[Back to [top](#toc)\] # $$\label{bssnrhs}$$ # # `BaikalETK` implements a fully covariant version of the BSSN 3+1 decomposition of Einstein's equations of general relativity, which is fully documented within NRPy+ ([start here](Tutorial-BSSN_formulation.ipynb)). However, especially if you are a newcomer to the field of numerical relativity, you may also find the following lectures and papers useful for understanding the adopted formalism: # # * Mathematical foundations of BSSN and 3+1 initial value problem decompositions of Einstein's equations: # * [Thomas Baumgarte's lectures on mathematical formulation of numerical relativity](https://www.youtube.com/watch?v=t3uo2R-yu4o&list=PLRVOWML3TL_djTd_nsTlq5aJjJET42Qke) # * [Yuichiro Sekiguchi's introduction to BSSN](http://www2.yukawa.kyoto-u.ac.jp/~yuichiro.sekiguchi/3+1.pdf) # * Extensions to the standard BSSN approach used in NRPy+ # * [Brown's covariant "Lagrangian" formalism of BSSN](https://arxiv.org/abs/0902.3652) # * [BSSN in spherical coordinates, using the reference-metric approach of Baumgarte, Montero, Cordero-Carrión, and Müller (2012)](https://arxiv.org/abs/1211.6632) # * [BSSN in generic curvilinear coordinates, using the extended reference-metric approach of Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658) # # Here, we simply call the [BSSN.BSSN_RHSs](BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb) and [BSSN.BSSN_gauge_RHSs](BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb) NRPy+ Python modules to generate the symbolic expressions, add Kreiss-Oliger dissipation, and then output the finite-difference C code form of the equations using NRPy+'s [finite_difference](finite_difference.py) ([**tutorial**](Tutorial-Finite_Difference_Derivatives.ipynb)) C code generation module. # # # ## Step 2.b: Hamiltonian & momentum constraints \[Back to [top](#toc)\] # $$\label{hammomconstraints}$$ # # Next output the C code for evaluating the Hamiltonian & momentum constraints [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, this constraint should evaluate to zero. However it does not due to numerical (typically truncation and roundoff) error. 
Therefore it is useful to measure the Hamiltonian & momentum constraint violation to gauge the accuracy of our simulation, and, ultimately determine whether errors are dominated by numerical finite differencing (truncation) error as expected. # # # ## Step 2.c: Enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (in Cartesian coordinates, $\det{\hat{\gamma}_{ij}}=1$) \[Back to [top](#toc)\] # $$\label{gamconstraint}$$ # # Then enforce conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb) # # Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint: # # # # Step 3: Generation of C code functions for `Baikal` and `BaikalVacuum` \[Back to [top](#toc)\] # $$\label{kernel_codegen}$$ # # Here we generate the C code kernels (i.e., the C-code representation of the equations needed) for `Baikal` and `BaikalVacuum`. # # # # ## Step 3.a: Set compile-time and runtime parameters for `Baikal` and `BaikalVacuum` \[Back to [top](#toc)\] # $$\label{feature_choice}$$ # # NRPy+ is a code generation package that is designed to offer maximum flexibility *at the time of C code generation*. As a result, although NRPy+ can in principle output an infinite variety of C code kernels for solving Einstein's equations, generally free parameters in each kernel steerable at *runtime* are restricted to simple scalars. This leads to more optimized kernels (i.e., significantly improved performance as compared to `McLachlan`), but at the expense of flexibility in choosing e.g., different gauges at runtime (currently only the most commonly used gauge is supported), as well as the need to generate multiple kernels (one per finite-differencing order). Reducing the number of kernels and adding more flexibility at runtime will be a focus of future work. # # For now, `Baikal` and `BaikalVacuum` support the following runtime options: # # * `Baikal`: Enables $T^{\mu\nu}$ source terms; useful for general relativistic hydrodynamics (GRHD) and general relativistic magnetohydrodynamics (GRMHD) simulations. # * Finite differencing of order 2 and 4, via runtime parameter `FD_order` # * BSSN RHSs, Ricci tensor, Hamiltonian constraint, and $\hat{\gamma}=\bar{\gamma}$ constraint # * Kreiss-Oliger dissipation strength via runtime parameter `diss_strength`, which is the exact analogue of the `eps_diss` parameter of the `Dissipation` thorn # * `BaikalVacuum`: Disables $T^{\mu\nu}$ source terms; useful for generating dynamical black hole and binary black hole spacetimes, in which matter is not present. 
# * Finite differencing of orders 4, 6, and 8 via runtime parameter `FD_order`
# * BSSN RHSs, Ricci tensor, Hamiltonian constraint, and $\hat{\gamma}=\bar{\gamma}$ constraint
# * Kreiss-Oliger dissipation strength via runtime parameter `diss_strength`, which is the exact analogue of the `eps_diss` parameter of the `Dissipation` thorn
#
# Both thorns adopt the standard moving-puncture gauge conditions, which include the 1+log slicing condition for the lapse and the non-covariant $\Gamma$-driving shift condition, as documented [in this NRPy+ Tutorial notebook](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb), and implemented in [the corresponding NRPy+ Python module](BSSN/BSSN_gauge_RHSs.py). To make another gauge choice, you need only select the desired gauge from [the NRPy+ Python module](BSSN/BSSN_gauge_RHSs.py) and set the `LapseCondition` and `ShiftCondition` variables defined in Step 1 above; these choices are propagated to NRPy+'s BSSN gauge-RHS module and to the C-code generation calls below. You will find that adding user-defined gauge choices is a straightforward process, as the module is easily extended.
#
# Next we set up the list of C-code generation functions (and their arguments) for the `Baikal` and `BaikalVacuum` thorns; these functions are then called from within Python's `multiprocessing.Pool()` in the next step.

# In[4]:

from collections import namedtuple
funcarg = namedtuple('funcarg', 'name thornname fdorder')

import BSSN.BSSN_Ccodegen_library as BCL

BSSN_funcs = []
Baikal_FDorders_list = [2, 4]
BaikalVacuum_FDorders_list = [4, 6, 8]
for thornname in ["Baikal", "BaikalVacuum"]:
    # BSSN_funcs = [funcarg(name=BCL.add_to_Cfunction_dict_BSSN_to_ADM, thornname=thornname, fdorder=None)]
    BSSN_funcs += [funcarg(name=BCL.add_enforce_detgammahat_constraint_to_Cfunction_dict, thornname=thornname, fdorder=None)]
    fdorder_list = Baikal_FDorders_list
    if thornname == "BaikalVacuum":
        fdorder_list = BaikalVacuum_FDorders_list
    for fdorder in fdorder_list:
        BSSN_funcs += [funcarg(name=BCL.add_rhs_eval_to_Cfunction_dict, thornname=thornname, fdorder=fdorder)]
        BSSN_funcs += [funcarg(name=BCL.add_Ricci_eval_to_Cfunction_dict, thornname=thornname, fdorder=fdorder)]
        BSSN_funcs += [funcarg(name=BCL.add_BSSN_constraints_to_Cfunction_dict, thornname=thornname, fdorder=fdorder)]
        # BSSN_funcs = [funcarg(name=BCL.add_to_Cfunction_dict_ADM_to_BSSN, thornname=thornname, fdorder=fdorder)]


# ## Step 3.b: Generate all C-code kernels for `Baikal` and `BaikalVacuum`, in parallel if possible \[Back to [top](#toc)\]
# $$\label{parallel_codegen}$$
#
# As described above, a significant complication arises with the use of `multiprocessing`: the NRPy+ environment. Each spawned Python interpreter inherits the NRPy+ environment of the parent interpreter, and each registers its own additional C function within NRPy+'s [`outputC`](outputC.py) module. How do we combine all of these individual NRPy+ environments back into the parent once `multiprocessing` is done? The answer lies in NRPy+'s [`pickling`](pickling.py) module, which uses Python's built-in [`pickle`](https://docs.python.org/3/library/pickle.html) module to store ("pickle") an entire NRPy+ environment so that it can be merged back into the parent NRPy+ environment ("unpickled").
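# To make the pickle-and-merge strategy concrete before diving into the real code, here
# is a minimal, self-contained toy sketch (NOT BaikalETK code): each worker process
# builds up some "environment" state, returns it pickled, and the parent unpickles and
# merges everything. The names `toy_worker` and `merge_toy_envs` are purely illustrative
# stand-ins for NRPy+'s per-process C-function registration and for `unpickle_NRPy_env()`,
# respectively. Like the real code below, it assumes a fork-based (non-Windows)
# multiprocessing environment.

# In[ ]:

demonstrate_pickle_merge_pattern = False  # flip to True to run this toy example
if demonstrate_pickle_merge_pattern:
    import pickle
    from multiprocessing import Pool

    def toy_worker(task_id):
        # Stand-in for a spawned interpreter registering a C function in its own
        # copy of the NRPy+ environment:
        local_env = {"Cfunction_%d" % task_id: "/* C-code kernel %d */" % task_id}
        return pickle.dumps(local_env)             # "pickle" the worker's environment

    def merge_toy_envs(pickled_envs):
        parent_env = {}
        for blob in pickled_envs:
            parent_env.update(pickle.loads(blob))  # "unpickle" into the parent
        return parent_env

    with Pool() as pool:
        merged = merge_toy_envs(pool.map(toy_worker, range(4)))
    print(sorted(merged))  # ['Cfunction_0', 'Cfunction_1', 'Cfunction_2', 'Cfunction_3']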
# In[5]: def master_func(arg): enable_stress_energy = True par.set_parval_from_str("reference_metric::enable_rfm_precompute","True") par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir", os.path.join(BSSN_funcs[arg].thornname, "src", "rfm_files/")) rfm.reference_metric() if BSSN_funcs[arg].thornname == "BaikalVacuum": enable_stress_energy = False func_name_suffix = "_" + BSSN_funcs[arg].thornname if BSSN_funcs[arg].fdorder is not None: par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", BSSN_funcs[arg].fdorder) func_name_suffix += "_order_" + str(BSSN_funcs[arg].fdorder) if BSSN_funcs[arg].name.__name__ == "add_BSSN_constraints_to_Cfunction_dict": ret = BSSN_funcs[arg].name(includes=["cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"], rel_path_to_Cparams=os.path.join("."), enable_stress_energy_source_terms=enable_stress_energy, leave_Ricci_symbolic=False, output_H_only=False, enable_rfm_precompute=True, enable_golden_kernels=False, enable_SIMD=True, func_name_suffix=func_name_suffix) elif BSSN_funcs[arg].name.__name__ == "add_rhs_eval_to_Cfunction_dict": ret = BSSN_funcs[arg].name(includes=["cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"], rel_path_to_Cparams=os.path.join("."), enable_stress_energy_source_terms=enable_stress_energy, enable_rfm_precompute=True, enable_golden_kernels=enable_golden_kernels, enable_SIMD=True, ShiftCondition=ShiftCondition, enable_KreissOliger_dissipation=True, func_name_suffix=func_name_suffix) elif BSSN_funcs[arg].name.__name__ == "add_Ricci_eval_to_Cfunction_dict": ret = BSSN_funcs[arg].name(includes=["cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"], rel_path_to_Cparams=os.path.join("."), enable_rfm_precompute=True, enable_golden_kernels=enable_golden_kernels, enable_SIMD=True, func_name_suffix=func_name_suffix) elif BSSN_funcs[arg].name.__name__ == "add_enforce_detgammahat_constraint_to_Cfunction_dict": ret = BSSN_funcs[arg].name(includes=["math.h", "cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"], rel_path_to_Cparams=os.path.join("."), enable_rfm_precompute=True, func_name_suffix=func_name_suffix) else: print("ERROR: DID NOT RECOGNIZE FUNCTION " + BSSN_funcs[arg].name.__name__ + "\n") sys.exit(1) return ret NRPyEnvVars = [] raised_exception = False try: if os.name == 'nt': # It's a mess to get working in Windows, so we don't bother. :/ # https://medium.com/@grvsinghal/speed-up-your-python-code-using-multiprocessing-on-windows-and-jupyter-or-ipython-2714b49d6fac raise Exception("Parallel codegen currently not available in certain environments, e.g., Windows") # Step 2.d: Import the multiprocessing module. import multiprocessing # Step 2.e: Evaluate list of functions in parallel if possible; # otherwise fallback to serial evaluation: pool = multiprocessing.Pool() NRPyEnvVars.append(pool.map(master_func, range(len(BSSN_funcs)))) pool.terminate() pool.join() except: print("FAILED PARALLEL CODEGEN!") NRPyEnvVars = [] # Reset, as pickling/unpickling unnecessary for serial codegen (see next line) # Steps 2.d-e, alternate: As fallback, evaluate functions in serial. 
# This will happen on Android and Windows systems for i, func in enumerate(BSSN_funcs): master_func(i) raised_exception = True outCfunc_master_list = outC_function_master_list if not raised_exception: outCfunc_master_list = unpickle_NRPy_env(NRPyEnvVars) for el in outCfunc_master_list: if el not in outC_function_master_list: # in case there are duplicate funcs, which can happen # if finite_difference_functions = True outC_function_master_list += [el] # # # # Step 4: ETK `ccl` file generation \[Back to [top](#toc)\] # $$\label{cclfiles}$$ # # The Einstein Toolkit (ETK) ccl files contain runtime parameters (`param.ccl`), registered gridfunctions (`interface.ccl`), and function scheduling (`schedule.ccl`). As parameters and gridfunctions are registered with NRPy+ when the C-code kernels are generated, and this generation occurs on separate processes in parallel, we store the entire NRPy+ environment for *each* process. This results in a tremendous amount of duplication, which is sorted out next. Once all duplicated environment variables (e.g., registered gridfunctions) are removed, we replace the current NRPy+ environment with the new one, by setting `gri.glb_gridfcs_list[],par.glb_params_list[],par.glb_Cparams_list[]`. # In[6]: # Store all NRPy+ environment variables to file so NRPy+ environment from within this subprocess can be easily restored # Finally, output all functions needed for computing finite-difference stencils # to thornname/src/finite_difference_functions.h # Note that output_finite_difference_functions_h() deletes all the FD functions from # the Cfunction dictionary, so it can only be called once; we must copy the generated # file to BaikalVacuum: fin.output_finite_difference_functions_h(os.path.join("Baikal", "src")) shutil.copy(os.path.join("Baikal", "src", "finite_difference_functions.h"), os.path.join("BaikalVacuum", "src", "finite_difference_functions.h")) # # # ## Step 4.a: `param.ccl`: specify free parameters within `BaikalETK` \[Back to [top](#toc)\] # $$\label{paramccl}$$ # # All parameters necessary for the computation of the BSSN right-hand side (RHS) expressions are registered within NRPy+; we use this information to automatically generate `param.ccl`. NRPy+ also specifies default values for each parameter. # # More information on `param.ccl` syntax can be found in the [official Einstein Toolkit documentation](http://einsteintoolkit.org/usersguide/UsersGuidech12.html#x17-183000D2.3). # In[7]: def keep_param__return_type(paramtuple): keep_param = True # We'll not set some parameters in param.ccl; # e.g., those that should be #define'd like M_PI. 
typestring = "" # Separate thorns within the ETK take care of grid/coordinate parameters; # thus we ignore NRPy+ grid/coordinate parameters: if paramtuple.module in ('grid', 'reference_metric'): keep_param = False partype = paramtuple.type if partype == "bool": typestring += "BOOLEAN " elif partype == "REAL": if paramtuple.defaultval != 1e300: # 1e300 is a magic value indicating that the C parameter should be mutable typestring += "CCTK_REAL " else: keep_param = False elif partype == "int": typestring += "CCTK_INT " elif partype == "#define": keep_param = False elif partype == "char": # FIXME: char/string parameter types should in principle be supported print("Error: parameter "+paramtuple.module+"::"+paramtuple.parname+ " has unsupported type: \""+ paramtuple.type + "\"") sys.exit(1) else: print("Error: parameter "+paramtuple.module+"::"+paramtuple.parname+ " has unsupported type: \""+ paramtuple.type + "\"") sys.exit(1) return keep_param, typestring def output_param_ccl(ThornName="BaikalETK"): with open(os.path.join(ThornName,"param.ccl"), "w") as file: file.write(""" # This param.ccl file was automatically generated by NRPy+. # You are advised against modifying it directly; instead # modify the Python code that generates it. shares: ADMBase # Extends multiple ADMBase variables: EXTENDS CCTK_KEYWORD evolution_method "evolution_method" { "BaikalETK" :: "" } EXTENDS CCTK_KEYWORD lapse_evolution_method "lapse_evolution_method" { "BaikalETK" :: "" } EXTENDS CCTK_KEYWORD shift_evolution_method "shift_evolution_method" { "BaikalETK" :: "" } EXTENDS CCTK_KEYWORD dtshift_evolution_method "dtshift_evolution_method" { "BaikalETK" :: "" } EXTENDS CCTK_KEYWORD dtlapse_evolution_method "dtlapse_evolution_method" { "BaikalETK" :: "" } restricted: CCTK_INT FD_order "Finite-differencing order" {\n""".replace("BaikalETK",ThornName)) FDorders = Baikal_FDorders_list if ThornName == "BaikalVacuum": FDorders = BaikalVacuum_FDorders_list for order in FDorders: file.write(" "+str(order)+":"+str(order)+" :: \"finite-differencing order = "+str(order)+"\"\n") FDorder_default = 4 if FDorder_default not in FDorders: print("WARNING: 4th-order FD kernel was not output!?! Changing default FD order to "+str(FDorders[0])) FDorder_default = FDorders[0] file.write("} "+str(FDorder_default)+ "\n\n") # choose 4th order by default, consistent with ML_BSSN paramccl_str = "" for i in range(len(par.glb_Cparams_list)): # keep_param is a boolean indicating whether we should accept or reject # the parameter. singleparstring will contain the string indicating # the variable type. keep_param, singleparstring = keep_param__return_type(par.glb_Cparams_list[i]) if keep_param: parname = par.glb_Cparams_list[i].parname partype = par.glb_Cparams_list[i].type singleparstring += parname + " \""+ parname +" (see NRPy+ for parameter definition)\"\n" singleparstring += "{\n" if partype != "bool": singleparstring += " *:* :: \"All values accepted. NRPy+ does not restrict the allowed ranges of parameters yet.\"\n" singleparstring += "} "+str(par.glb_Cparams_list[i].defaultval)+"\n\n" paramccl_str += singleparstring file.write(paramccl_str) # # # ## Step 4.b: `interface.ccl`: define needed gridfunctions; provide keywords denoting what this thorn provides and what it should inherit from other thorns \[Back to [top](#toc)\] # $$\label{interfaceccl}$$ # # `interface.ccl` declares all gridfunctions and determines how `BaikalETK` interacts with other Einstein Toolkit thorns. 
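# For orientation, each NRPy+ gridfunction group is written to `interface.ccl` as a
# Cactus group declaration. Schematically (the gridfunction list is abridged here; the
# full, alphabetized list is assembled by `output_interface_ccl()` below), the
# evolved-variables group takes the form
#
# ```
# CCTK_REAL evol_variables type = GF Timelevels=3
# {
#   aDD00GF,aDD01GF,aDD02GF,...,vetU0GF,vetU1GF,vetU2GF
# } "BSSN evolved gridfunctions"
# ```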
# # The [official Einstein Toolkit (Cactus) documentation](http://einsteintoolkit.org/usersguide/UsersGuide.html) defines what must/should be included in an `interface.ccl` file [**here**](http://einsteintoolkit.org/usersguide/UsersGuidech12.html#x17-178000D2.2). # In[8]: # First construct lists of the basic gridfunctions used in NRPy+. # Each type will be its own group in BaikalETK. evol_gfs_list = [] auxevol_gfs_list = [] aux_gfs_list = [] for i in range(len(gri.glb_gridfcs_list)): if gri.glb_gridfcs_list[i].gftype == "EVOL": evol_gfs_list.append( gri.glb_gridfcs_list[i].name+"GF") if gri.glb_gridfcs_list[i].gftype == "AUX": aux_gfs_list.append( gri.glb_gridfcs_list[i].name+"GF") if gri.glb_gridfcs_list[i].gftype == "AUXEVOL": auxevol_gfs_list.append(gri.glb_gridfcs_list[i].name+"GF") # NRPy+'s finite-difference code generator assumes gridfunctions # are alphabetized; not sorting may result in unnecessary # cache misses. evol_gfs_list.sort() aux_gfs_list.sort() auxevol_gfs_list.sort() rhs_list = [] for gf in evol_gfs_list: rhs_list.append(gf.replace("GF","")+"_rhsGF") def output_interface_ccl(ThornName="BaikalETK",enable_stress_energy_source_terms=False): outstr = """ # This interface.ccl file was automatically generated by NRPy+. # You are advised against modifying it directly; instead # modify the Python code that generates it. # With "implements", we give our thorn its unique name. implements: BaikalETK # By "inheriting" other thorns, we tell the Toolkit that we # will rely on variables/function that exist within those # functions. inherits: ADMBase Boundary Grid MethodofLines CoordGauge\n""" if enable_stress_energy_source_terms: outstr += "inherits: TmunuBase\n" # Need a raw string here due to all the backslashes: outstr += r""" # Needed functions and #include's: USES INCLUDE: Symmetry.h USES INCLUDE: Boundary.h USES INCLUDE: Slicing.h # Needed Method of Lines function CCTK_INT FUNCTION MoLRegisterEvolvedGroup(CCTK_INT IN EvolvedIndex, \ CCTK_INT IN RHSIndex) REQUIRES FUNCTION MoLRegisterEvolvedGroup # Needed Boundary Conditions function CCTK_INT FUNCTION GetBoundarySpecification(CCTK_INT IN size, CCTK_INT OUT ARRAY nboundaryzones, CCTK_INT OUT ARRAY is_internal, CCTK_INT OUT ARRAY is_staggered, CCTK_INT OUT ARRAY shiftout) USES FUNCTION GetBoundarySpecification CCTK_INT FUNCTION SymmetryTableHandleForGrid(CCTK_POINTER_TO_CONST IN cctkGH) USES FUNCTION SymmetryTableHandleForGrid CCTK_INT FUNCTION Boundary_SelectVarForBC(CCTK_POINTER_TO_CONST IN GH, CCTK_INT IN faces, CCTK_INT IN boundary_width, CCTK_INT IN table_handle, CCTK_STRING IN var_name, CCTK_STRING IN bc_name) USES FUNCTION Boundary_SelectVarForBC CCTK_INT FUNCTION Driver_SelectVarForBC(CCTK_POINTER_TO_CONST IN GH, CCTK_INT IN faces, CCTK_INT IN boundary_width, CCTK_INT IN table_handle, CCTK_STRING IN group_name, CCTK_STRING IN bc_name) USES FUNCTION Driver_SelectVarForBC # Needed for EinsteinEvolve/NewRad outer boundary condition driver: CCTK_INT FUNCTION \ NewRad_Apply \ (CCTK_POINTER_TO_CONST IN cctkGH, \ CCTK_REAL ARRAY IN var, \ CCTK_REAL ARRAY INOUT rhs, \ CCTK_REAL IN var0, \ CCTK_REAL IN v0, \ CCTK_INT IN radpower) REQUIRES FUNCTION NewRad_Apply # Needed to convert ADM initial data into BSSN initial data (gamma extrapolation) CCTK_INT FUNCTION \ ExtrapolateGammas \ (CCTK_POINTER_TO_CONST IN cctkGH, \ CCTK_REAL ARRAY INOUT var) REQUIRES FUNCTION ExtrapolateGammas # Tell the Toolkit that we want all gridfunctions # to be visible to other thorns by using # the keyword "public". 
Note that declaring these # gridfunctions *does not* allocate memory for them; # that is done by the schedule.ccl file. # FIXME: add info for symmetry conditions: # https://einsteintoolkit.org/thornguide/CactusBase/SymBase/documentation.html public: """ # Next we declare gridfunctions based on their corresponding gridfunction groups as registered within NRPy+ def output_list_of_gfs(gfs_list, description="User did not provide description"): gfs_list_parsed = [] for j in range(len(gfs_list)): # Add all gridfunctions in the list... gfs_list_parsed.append(gfs_list[j]) # Then remove T4UU gridfunction from list if enable_stress_energy_source_terms==False: if "T4UU" in gfs_list_parsed[-1] and enable_stress_energy_source_terms==False: del gfs_list_parsed[-1] gfsstr = " " for j in range(len(gfs_list_parsed)): gfsstr += gfs_list_parsed[j] if j != len(gfs_list_parsed)-1: gfsstr += "," # This is a comma-separated list of gridfunctions else: gfsstr += "\n} \""+description+"\"\n\n" return gfsstr # First EVOL type: outstr += "CCTK_REAL evol_variables type = GF Timelevels=3\n{\n" outstr += output_list_of_gfs(evol_gfs_list, "BSSN evolved gridfunctions") # Second EVOL right-hand-sides outstr += "CCTK_REAL evol_variables_rhs type = GF Timelevels=1 TAGS=\'InterpNumTimelevels=1 prolongation=\"none\" checkpoint=\"no\"\'\n{\n" outstr += output_list_of_gfs(rhs_list, "right-hand-side storage for BSSN evolved gridfunctions") # Then AUX type: outstr += "CCTK_REAL aux_variables type = GF Timelevels=3\n{\n" outstr += output_list_of_gfs(aux_gfs_list, "Auxiliary gridfunctions for BSSN diagnostics") # Finally, AUXEVOL type: outstr += "CCTK_REAL auxevol_variables type = GF Timelevels=1 TAGS=\'InterpNumTimelevels=1 prolongation=\"none\" checkpoint=\"no\"'\n{\n" outstr += output_list_of_gfs(auxevol_gfs_list, "Auxiliary gridfunctions needed for evaluating the BSSN RHSs") with open(os.path.join(ThornName, "interface.ccl"), "w") as file: file.write(outstr.replace("BaikalETK", ThornName)) # # # ## Step 4.c: `schedule.ccl`: schedule all functions used within `BaikalETK`, specify data dependencies within said functions, and allocate memory for gridfunctions \[Back to [top](#toc)\] # $$\label{scheduleccl}$$ # # Official documentation on constructing ETK `schedule.ccl` files is found [here](http://einsteintoolkit.org/usersguide/UsersGuidech12.html#x17-186000D2.4). # In[9]: def output_schedule_ccl(ThornName="BaikalETK", enable_stress_energy_source_terms=False): FDorders = Baikal_FDorders_list if ThornName == "BaikalVacuum": FDorders = BaikalVacuum_FDorders_list outstr = """ # This schedule.ccl file was automatically generated by NRPy+. # You are advised against modifying it directly; instead # modify the Python code that generates it. # First allocate storage for one timelevel of ADMBase gridfunctions, which is the # bare minimum needed by NRPy+. If another thorn (e.g., ADMBase itself) requests # more timelevels of storage, Cactus automatically allocates the maximum requested. STORAGE: ADMBase::metric[1], ADMBase::curv[1], ADMBase::lapse[1], ADMBase::shift[1] # Next allocate storage for all 3 gridfunction groups used in BaikalETK STORAGE: evol_variables[3] # Evolution variables STORAGE: evol_variables_rhs[1] # Variables storing right-hand-sides STORAGE: aux_variables[3] # Diagnostics variables STORAGE: auxevol_variables[1] # Single-timelevel storage of variables needed for evolutions. 
# The following scheduler is based on Lean/LeanBSSNMoL/schedule.ccl schedule Banner_BaikalETK at STARTUP { LANG: C OPTIONS: meta } "Output ASCII art banner" schedule RegisterSlicing_BaikalETK at STARTUP after Banner_BaikalETK { LANG: C OPTIONS: meta } "Register 3+1 slicing condition" # This function registers the boundary conditions with PreSync. schedule specify_Driver_BoundaryConditions_BaikalETK in Driver_BoundarySelect { LANG: C OPTIONS: LEVEL } "Register boundary conditions with PreSync" schedule Symmetry_registration_oldCartGrid3D_BaikalETK at BASEGRID { LANG: C OPTIONS: Global } "Register symmetries, the CartGrid3D way." schedule zero_rhss_BaikalETK at BASEGRID after Symmetry_registration_oldCartGrid3D_BaikalETK { LANG: C WRITES: evol_variables_rhs(everywhere) } "Idea from Lean: set all rhs functions to zero to prevent spurious nans" schedule ADM_to_BSSN_BaikalETK at CCTK_INITIAL after ADMBase_PostInitial { LANG: C OPTIONS: Local READS: ADMBase::metric, ADMBase::shift, ADMBase::curv, ADMBase::dtshift, ADMBase::lapse WRITES: evol_variables SYNC: evol_variables } "Convert initial data into BSSN variables" # This scheduled function shouldn't be necessary, as there are no BCs scheduled to be # applied at this point. schedule GROUP ApplyBCs as BaikalETK_ApplyBCs at CCTK_INITIAL after ADM_to_BSSN_BaikalETK { } "Apply boundary conditions" # MoL: registration schedule MoL_registration_BaikalETK in MoL_Register { LANG: C OPTIONS: META } "Register variables for MoL" # MoL: compute RHSs, etc """ if enable_stress_energy_source_terms == True: outstr += """ schedule T4DD_to_T4UU_BaikalETK in MoL_CalcRHS as BaikalETK_T4UU before BSSN_to_ADM_BaikalETK { LANG: C READS: TmunuBase::stress_energy_scalar, TmunuBase::stress_energy_vector, TmunuBase::stress_energy_tensor, hDD00GF, hDD01GF, hDD02GF, hDD11GF, hDD12GF, hDD22GF, alphaGF, cfGF, vetU0GF, vetU1GF, vetU2GF WRITES: T4UU00GF(everywhere), T4UU01GF(everywhere), T4UU02GF(everywhere), T4UU03GF(everywhere), T4UU11GF(everywhere), T4UU12GF(everywhere), T4UU13GF(everywhere), T4UU22GF(everywhere), T4UU23GF(everywhere), T4UU33GF(everywhere) } "MoL: Compute T4UU from T4DD (provided in eT?? from TmunuBase), needed for BSSN RHSs." """ for order in FDorders: outstr += """ if(FD_order == """+str(order)+""") { schedule Ricci_eval_BaikalETK_order_"""+str(order)+""" in MoL_CalcRHS as BaikalETK_Ricci before BaikalETK_RHS { LANG: C READS: hDD00GF, hDD01GF, hDD02GF, hDD11GF, hDD12GF, hDD22GF, lambdaU0GF, lambdaU1GF, lambdaU2GF WRITES: RbarDD00GF, RbarDD01GF, RbarDD02GF, RbarDD11GF, RbarDD12GF, RbarDD22GF } "MoL: Compute Ricci tensor, needed for BSSN RHSs, at finite-differencing order: """+str(order)+"""" schedule rhs_eval_BaikalETK_order_"""+str(order)+""" in MoL_CalcRHS as BaikalETK_RHS after BaikalETK_Ricci { LANG: C READS: evol_variables(everywhere), auxevol_variables(interior) WRITES: evol_variables_rhs(interior) } "MoL: Evaluate BSSN RHSs, at finite-differencing order: """+str(order)+"""" }""" outstr += """ schedule specify_NewRad_BoundaryConditions_parameters_BaikalETK in MoL_CalcRHS after BaikalETK_RHS { LANG: C READS: evol_variables(everywhere) WRITES: evol_variables_rhs(boundary) } "NewRad boundary conditions, scheduled right after RHS eval." 
schedule floor_the_lapse_BaikalETK in MoL_PostStep before enforce_detgammahat_constraint_BaikalETK before BC_Update { LANG: C READS: alphaGF(everywhere) WRITES: alphaGF(everywhere) } "Set lapse = max(lapse_floor, lapse)" schedule enforce_detgammahat_constraint_BaikalETK in MoL_PostStep before BC_Update { LANG: C READS: hDD00GF, hDD01GF, hDD02GF, hDD11GF, hDD12GF, hDD22GF WRITES: hDD00GF(everywhere), hDD01GF(everywhere), hDD02GF(everywhere), hDD11GF(everywhere), hDD12GF(everywhere), hDD22GF(everywhere) } "Enforce detgammabar = detgammahat (= 1 in Cartesian)" # This schedule call is not required for PreSync but remains in the schedule for backward compatibility. schedule specify_BoundaryConditions_evolved_gfs_BaikalETK in MoL_PostStep { LANG: C OPTIONS: LEVEL SYNC: evol_variables } "Apply boundary conditions and perform AMR+interprocessor synchronization" # This schedule call is not required for PreSync but remains in the schedule for backward compatibility. schedule GROUP ApplyBCs as BaikalETK_ApplyBCs in MoL_PostStep after specify_BoundaryConditions_evolved_gfs_BaikalETK { } "Group for applying boundary conditions" # Next update ADM quantities schedule BSSN_to_ADM_BaikalETK in MoL_PostStep after BaikalETK_ApplyBCs before ADMBase_SetADMVars { LANG: C OPTIONS: Local READS: aDD00GF, aDD01GF, aDD02GF, aDD11GF, aDD12GF, aDD22GF, trKGF, cfGF, betU0GF, betU1GF, betU2GF, vetU0GF, vetU1GF, vetU2GF, hDD00GF, hDD01GF, hDD02GF, hDD11GF, hDD12GF, hDD22GF, alphaGF WRITES: ADMBase::metric(everywhere), ADMBase::shift(everywhere), ADMBase::curv(everywhere), ADMBase::dtshift(everywhere), ADMBase::lapse(everywhere) } "Perform BSSN-to-ADM conversion. Useful for diagnostics." # Compute Hamiltonian & momentum constraints """ if enable_stress_energy_source_terms == True: outstr += """ schedule T4DD_to_T4UU_BaikalETK in MoL_PseudoEvolution before BaikalETK_BSSN_constraints { LANG: C OPTIONS: Local READS: TmunuBase::stress_energy_scalar, TmunuBase::stress_energy_vector, TmunuBase::stress_energy_tensor, hDD00GF, hDD01GF,hDD02GF, hDD11GF, hDD12GF, hDD22GF, alphaGF, cfGF, vetU0GF, vetU1GF, vetU2GF WRITES: T4UU00GF(everywhere), T4UU01GF(everywhere), T4UU02GF(everywhere), T4UU03GF(everywhere), T4UU11GF(everywhere), T4UU12GF(everywhere), T4UU13GF(everywhere), T4UU22GF(everywhere), T4UU23GF(everywhere), T4UU33GF(everywhere) } "MoL_PseudoEvolution: Compute T4UU from T4DD (provided in eT?? from TmunuBase), needed for BSSN constraints" """ for order in FDorders: outstr += """ if(FD_order == """+str(order)+""") { schedule BSSN_constraints_BaikalETK_order_"""+str(order)+""" as BaikalETK_BSSN_constraints in MoL_PseudoEvolution { LANG: C OPTIONS: Local READS: aDD00GF, aDD01GF, aDD02GF, aDD11GF, aDD12GF, aDD22GF, trKGF, hDD00GF, hDD01GF, hDD02GF, hDD11GF, hDD12GF, hDD22GF, cfGF, lambdaU0GF, lambdaU1GF, lambdaU2GF""" if enable_stress_energy_source_terms == True: outstr += """ READS: alphaGF, vetU0GF, vetU1GF, vetU2GF, T4UU00GF, T4UU01GF, T4UU02GF, T4UU03GF""" outstr += """ WRITES: aux_variables } "Compute BSSN (Hamiltonian and momentum) constraints, at finite-differencing order: """+str(order)+"""" } """ outstr += """ # This schedule call is not required for PreSync but remains in the schedule for backward compatibility. 
schedule specify_BoundaryConditions_aux_gfs_BaikalETK in MoL_PseudoEvolution after BaikalETK_BSSN_constraints { LANG: C OPTIONS: LEVEL SYNC: aux_variables } "Enforce symmetry BCs in constraint computation" """ if enable_stress_energy_source_terms == True: outstr += """ schedule BSSN_to_ADM_BaikalETK in MoL_PseudoEvolution after specify_BoundaryConditions_aux_gfs_BaikalETK { LANG: C OPTIONS: Local READS: aDD00GF, aDD01GF, aDD02GF, aDD11GF, aDD12GF, aDD22GF, trKGF, cfGF, betU0GF, betU1GF, betU2GF, vetU0GF, vetU1GF, vetU2GF, hDD00GF, hDD01GF, hDD02GF, hDD11GF, hDD12GF, hDD22GF, alphaGF WRITES: ADMBase::metric(everywhere), ADMBase::shift(everywhere), ADMBase::curv(everywhere), ADMBase::dtshift(everywhere), ADMBase::lapse(everywhere) } "Perform BSSN-to-ADM conversion in MoL_PseudoEvolution. Needed for proper HydroBase integration." """ outstr += """ # This schedule call is not required for PreSync but remains in the schedule for backward compatibility. schedule GROUP ApplyBCs as BaikalETK_auxgfs_ApplyBCs in MoL_PseudoEvolution after specify_BoundaryConditions_aux_gfs_BaikalETK { } "Apply boundary conditions" """ with open(os.path.join(ThornName, "schedule.ccl"), "w") as file: file.write(outstr.replace("BaikalETK", ThornName)) # # # # Step 5: Generate library of C functions needed for interfacing with the `Einstein Toolkit` \[Back to [top](#toc)\] # $$\label{cdrivers}$$ # # Now that we have constructed the basic C code kernels and the needed Einstein Toolkit `ccl` files, we next write the driver functions for registering `Baikal`/`BaikalVacuum` within the Toolkit and the C code kernels. Each of these driver functions will be called directly from the thorn's [`schedule.ccl`](#scheduleccl) in the ETK. # # # ## Step 5.a: Needed ETK functions: Banner, Symmetry registration, Parameter sanity check, Method of Lines (`MoL`) registration, Boundary condition registration \[Back to [top](#toc)\] # $$\label{etkfunctions}$$ # # ### To-do: Parameter sanity check function. E.g., error should be thrown if `cctk_nghostzones[]` is set too small for the chosen finite-differencing order within NRPy+. 
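# A minimal sketch of how that to-do item might be implemented, following the same
# `add_to_Cfunction_dict` pattern used for the other ETK functions below. The function
# name `check_parameters_<ThornName>`, the ghost-zone requirement of `FD_order/2 + 1`
# (chosen to match the `bndsize` used when registering boundary conditions below), and
# the need to schedule it (e.g., at CCTK_PARAMCHECK or BASEGRID) in `schedule.ccl` are
# all assumptions; this function is not (yet) part of the generated thorns.

# In[ ]:

def add_to_Cfunction_dict_check_parameters(ThornName):
    includes = ["cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"]
    desc = "Sanity-check runtime parameters for NRPy+-generated thorn " + ThornName
    c_type = "void"
    name = "check_parameters_" + ThornName
    params = "CCTK_ARGUMENTS"
    body = r"""  DECLARE_CCTK_ARGUMENTS_""" + name + r""";
  DECLARE_CCTK_PARAMETERS;
  // An FD_order stencil reaches FD_order/2 points in each direction; one extra point
  // is assumed here for Kreiss-Oliger dissipation, consistent with the
  // bndsize = FD_order/2 + 1 ghost zones used for boundary-condition registration below.
  const CCTK_INT needed_nghostzones = FD_order / 2 + 1;
  for(int dirn = 0; dirn < 3; dirn++)
    if(cctk_nghostzones[dirn] < needed_nghostzones)
      CCTK_ERROR("cctk_nghostzones[] is too small for the chosen FD_order; need at least FD_order/2 + 1 ghost zones.");
"""
    add_to_Cfunction_dict(
        includes=includes, desc=desc, c_type=c_type, name=name, params=params,
        body=body, enableCparameters=False)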
# In[10]: import NRPy_logo as logo # First the ETK banner code, proudly showing the NRPy+ banner def add_to_Cfunction_dict_Banner(ThornName): logostr = logo.print_logo(print_to_stdout=False) body = " printf(\"BaikalETK: another Einstein Toolkit thorn generated by\\n\");\n" for line in logostr.splitlines(): body += " printf(\""+line+"\\n\");\n" add_to_Cfunction_dict( includes=["stdio.h"], desc="Output banner for NRPy+-generated thorn " + ThornName, c_type="void", name="Banner_" + ThornName, params="", body=body.replace("BaikalETK", ThornName), enableCparameters=False) # In[11]: def add_to_Cfunction_dict_RegisterSlicing(ThornName): add_to_Cfunction_dict( includes=["cctk.h", "Slicing.h"], desc="Register slicing condition for NRPy+-generated thorn " + ThornName, c_type="int", name="RegisterSlicing_" + ThornName, params="", body=r""" Einstein_RegisterSlicing ("BaikalETK"); return 0; """.replace("BaikalETK", ThornName), enableCparameters=False) # In[12]: def add_to_Cfunction_dict_Symmetry_registration_oldCartGrid3D(ThornName): enable_stress_energy_source_terms = True if ThornName == "BaikalVacuum": enable_stress_energy_source_terms = False # Next Symmetry_registration_BaikalETK(): Register symmetries includes = ["cctk.h", "cctk_Arguments.h", "cctk_Parameters.h", "Symmetry.h"] desc = "Register symmetries for NRPy+-generated thorn " + ThornName c_type = "void" name = "Symmetry_registration_oldCartGrid3D_" + ThornName params = "CCTK_ARGUMENTS" full_gfs_list = [] full_gfs_list.extend(evol_gfs_list) full_gfs_list.extend(auxevol_gfs_list) full_gfs_list.extend(aux_gfs_list) body = r""" DECLARE_CCTK_ARGUMENTS_"""+name+r"""; DECLARE_CCTK_PARAMETERS; // Stores gridfunction parity across x=0, y=0, and z=0 planes, respectively int sym[3]; // Next register parities for each gridfunction based on its name // (to ensure this algorithm is robust, gridfunctions with integers // in their base names are forbidden in NRPy+). 
""" body += "" for gfname in full_gfs_list: gfname_without_GFsuffix = gfname[:-2] # Do not add T4UU gridfunctions if enable_stress_energy_source_terms==False: if not (enable_stress_energy_source_terms == False and "T4UU" in gfname_without_GFsuffix): body += """ // Default to scalar symmetry: sym[0] = 1; sym[1] = 1; sym[2] = 1; // Now modify sym[0], sym[1], and/or sym[2] as needed // to account for gridfunction parity across // x=0, y=0, and/or z=0 planes, respectively """ # If gridfunction name does not end in a digit, by NRPy+ syntax, it must be a scalar if gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 1].isdigit() == False: body += " // (this gridfunction is a scalar -- no need to change default sym[]'s!)\n" elif len(gfname_without_GFsuffix) > 2: # Rank-1 indexed expression (e.g., vector) if gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 2].isdigit() == False: if int(gfname_without_GFsuffix[-1]) > 2: print("Error: Found invalid gridfunction name: "+gfname) sys.exit(1) symidx = gfname_without_GFsuffix[-1] if int(symidx) < 3: body += " sym[" + symidx + "] = -1;\n" # Rank-2 indexed expression elif gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 2].isdigit() == True: if len(gfname_without_GFsuffix) > 3 and gfname_without_GFsuffix[len(gfname_without_GFsuffix) - 3].isdigit() == True: print("Error: Found a Rank-3 or above gridfunction: "+gfname+", which is at the moment unsupported.") print("It should be easy to support this if desired.") sys.exit(1) symidx0 = gfname_without_GFsuffix[-2] if "T4UU" in gfname: symidx0 = str(int(symidx0)-1) # E.g., T4UU23 is T4UUyz, corresponding to directions 1,2 if int(symidx0) >= 0: body += " sym[" + symidx0 + "] *= -1;\n" symidx1 = gfname_without_GFsuffix[-1] if "T4UU" in gfname: symidx1 = str(int(symidx1)-1) # E.g., T4UU23 is T4UUyz, corresponding to directions 1,2 if int(symidx1) >= 0: body += " sym[" + symidx1 + "] *= -1;\n" else: print("Don't know how you got this far with a gridfunction named "+gfname+", but I'll take no more of this nonsense.") print(" Please follow best-practices and rename your gridfunction to be more descriptive") sys.exit(1) body += " SetCartSymVN(cctkGH, sym, \"BaikalETK::" + gfname + "\");\n" add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # In[13]: def add_to_Cfunction_dict_zero_rhss(ThornName): # Next zero_rhss_BaikalETK(): initialize RHSs of gridfunctions to zero. # TODO: should probably set to NaN. includes = ["cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"] desc = "Zero RHSs for NRPy+-generated thorn " + ThornName c_type = "void" name = "zero_rhss_" + ThornName params = "CCTK_ARGUMENTS" body = r""" DECLARE_CCTK_ARGUMENTS_"""+name+r"""; DECLARE_CCTK_PARAMETERS; """ set_rhss_to_zero = "" for gf in rhs_list: set_rhss_to_zero += gf+"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)] = 0.0;\n" set_rhss_to_zero = set_rhss_to_zero[:-1] body += lp.loop(["i2","i1","i0"],["0", "0", "0"], ["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"], ["1","1","1"], ["#pragma omp parallel for","","",],padding=" ",interior=set_rhss_to_zero) add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # In[14]: def add_to_Cfunction_dict_floor_the_lapse(ThornName): # Register the C parameter for the lapse floor. 
_lap_floor = par.Cparameters("REAL", ThornName, "lapse_floor", 1e-15) includes = ["cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"] desc = "Floor the lapse" c_type = "void" name = "floor_the_lapse_" + ThornName params = "CCTK_ARGUMENTS" body = r""" DECLARE_CCTK_ARGUMENTS_"""+name+r"""; DECLARE_CCTK_PARAMETERS; #ifndef MAX #define MAX(a,b) ( ((a) > (b)) ? (a) : (b) ) """ body += lp.loop(["i2","i1","i0"],["0", "0", "0"], ["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"], ["1","1","1"], ["#pragma omp parallel for","","",],padding=" ", interior="""alphaGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)] = MAX(alphaGF[CCTK_GFINDEX3D(cctkGH, i0,i1,i2)], lapse_floor);""") body += """ #undef MAX #endif """ add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # In[15]: def add_to_Cfunction_dict_MoL_registration(ThornName): # Next register gridfunctions with MoL includes = ["stdio.h", "cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"] desc = """Register evolved gridfunctions & RHSs with the Method of Lines timestepper MoL (the Einstein Toolkit Method of Lines thorn) (MoL thorn, found in arrangements/CactusBase/MoL). MoL documentation located in arrangements/CactusBase/MoL/doc """ c_type = "void" name = "MoL_registration_" + ThornName params = "CCTK_ARGUMENTS" body = r""" DECLARE_CCTK_ARGUMENTS_"""+name+r"""; DECLARE_CCTK_PARAMETERS; CCTK_INT ierr = 0, group, rhs; // Register evolution & RHS gridfunction groups with MoL, so it knows // how to perform the appropriate timestepping group = CCTK_GroupIndex("BaikalETK::evol_variables"); rhs = CCTK_GroupIndex("BaikalETK::evol_variables_rhs"); ierr += MoLRegisterEvolvedGroup(group, rhs); if (ierr) CCTK_ERROR("Problems registering with MoL"); """ add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # In[16]: def add_to_Cfunction_dict_specify_Driver_BoundaryConditions(ThornName): includes = ["stdio.h", "cctk.h", "cctk_Arguments.h", "cctk_Parameters.h", "cctk_Faces.h", "util_Table.h"] desc = """""" desc = """Set all boundary conditions for PreSync. BSSN constraints are set to use `flat` boundary conditions, similar to what Lean does. BSSN evolved variables are set to use `none` boundary conditions, as these are set via NewRad. Since we choose NewRad boundary conditions, we must register all evolved gridfunctions to have boundary type "none". This is because NewRad directly modifies the RHSs. This code is based on Kranc's McLachlan/ML_BSSN/src/Boundaries.cc code. 
""" c_type = "void" name = "specify_Driver_BoundaryConditions_" + ThornName params = "CCTK_ARGUMENTS" body = r""" DECLARE_CCTK_ARGUMENTS_"""+name+r"""; DECLARE_CCTK_PARAMETERS; CCTK_INT ierr CCTK_ATTRIBUTE_UNUSED = 0; const CCTK_INT bndsize = FD_order / 2 + 1; // <- bndsize = number of ghostzones """ for gf in aux_gfs_list: body += """ ierr = Driver_SelectVarForBC(cctkGH, CCTK_ALL_FACES, bndsize, -1, "BaikalETK::"""+gf+"""", "flat"); if (ierr < 0) CCTK_ERROR("Failed to register BC with Driver for BaikalETK::"""+gf+"""!"); """ for gf in evol_gfs_list: body += """ ierr = Driver_SelectVarForBC(cctkGH, CCTK_ALL_FACES, 1, -1, "BaikalETK::"""+gf+"""", "none"); if (ierr < 0) CCTK_ERROR("Failed to register BC with Driver for BaikalETK::"""+gf+"""!"); """ add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # In[17]: def add_to_Cfunction_dict_specify_BoundaryConditions_evolved_gfs(ThornName): includes = ["stdio.h", "cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"] desc = """Set `none` boundary conditions on BSSN evolved variables, as these are set via NewRad. Since we choose NewRad boundary conditions, we must register all evolved gridfunctions to have boundary type "none". This is because NewRad directly modifies the RHSs. This code is based on Kranc's McLachlan/ML_BSSN/src/Boundaries.cc code. """ c_type = "void" name = "specify_BoundaryConditions_evolved_gfs_" + ThornName params = "CCTK_ARGUMENTS" body = r""" DECLARE_CCTK_ARGUMENTS_"""+name+r"""; DECLARE_CCTK_PARAMETERS; CCTK_INT ierr CCTK_ATTRIBUTE_UNUSED = 0; """ for gf in evol_gfs_list: body += """ ierr = Boundary_SelectVarForBC(cctkGH, CCTK_ALL_FACES, 1, -1, "BaikalETK::"""+gf+"""", "none"); if (ierr < 0) CCTK_ERROR("Failed to register BC for BaikalETK::"""+gf+"""!"); """ add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # In[18]: def add_to_Cfunction_dict_specify_BoundaryConditions_aux_gfs(ThornName): includes = ["stdio.h", "cctk.h", "cctk_Arguments.h", "cctk_Parameters.h", "cctk_Faces.h", "util_Table.h"] desc = """Set `flat` boundary conditions on BSSN constraints, similar to what Lean does.""" c_type = "void" name = "specify_BoundaryConditions_aux_gfs_" + ThornName params = "CCTK_ARGUMENTS" body = r""" DECLARE_CCTK_ARGUMENTS_"""+name+r"""; DECLARE_CCTK_PARAMETERS; CCTK_INT ierr CCTK_ATTRIBUTE_UNUSED = 0; const CCTK_INT bndsize = FD_order / 2 + 1; // <- bndsize = number of ghostzones """ for gf in aux_gfs_list: body += """ ierr = Boundary_SelectVarForBC(cctkGH, CCTK_ALL_FACES, bndsize, -1, "BaikalETK::"""+gf+"""", "flat"); if (ierr < 0) CCTK_ERROR("Failed to register BC for BaikalETK::"""+gf+"""!"); """ add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # In[19]: def add_to_Cfunction_dict_specify_NewRad_BoundaryConditions_parameters(ThornName): includes = ["math.h", "cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"] desc = """ Set up NewRad boundary conditions. 
As explained in lean_public/LeanBSSNMoL/src/calc_bssn_rhs.F90, the function NewRad_Apply takes the following arguments: NewRad_Apply(cctkGH, var, rhs, var0, v0, radpower), which implement the boundary condition: var = var_at_infinite_r + u(r-var_char_speed*t)/r^var_radpower Obviously for var_radpower>0, var_at_infinite_r is the value of the variable at r->infinity. var_char_speed is the propagation speed at the outer boundary, and var_radpower is the radial falloff rate. """ c_type = "void" name = "specify_NewRad_BoundaryConditions_parameters_" + ThornName params = "CCTK_ARGUMENTS" body = r""" DECLARE_CCTK_ARGUMENTS_"""+name+r"""; DECLARE_CCTK_PARAMETERS; """ for gf in evol_gfs_list: var_at_infinite_r = "0.0" var_char_speed = "1.0" for reg_gf in gri.glb_gridfcs_list: if reg_gf.name+"GF" == gf: var_at_infinite_r = str(reg_gf.f_infinity) var_char_speed = str(reg_gf.wavespeed) var_radpower = "1.0" if "aDD" in gf or "trK" in gf: # consistent with Lean code. var_radpower = "2.0" body += " NewRad_Apply(cctkGH, "+gf+", "+gf.replace("GF","")+"_rhsGF, "+var_at_infinite_r+", "+var_char_speed+", "+var_radpower+");\n" add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # # # ## Step 5.b: BSSN $\leftrightarrow$ ADM conversions \[Back to [top](#toc)\] # $$\label{bssnadmconversions}$$ # # # # ### Step 5.b.i: ADM $\to$ BSSN \[Back to [top](#toc)\] # $$\label{admtobssn}$$ # # Initial data in the Einstein Toolkit are given in terms of [ADM quantities](https://en.wikipedia.org/wiki/ADM_formalism), so a conversion is necessary so the quantities are in terms of BSSN variables used for evolving the initial data forward in time. # In[20]: def add_to_Cfunction_dict_ADM_to_BSSN(ThornName): includes = ["math.h", "cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"] desc = """Converting from ADM to BSSN quantities is required in the Einstein Toolkit, as initial data are given in terms of ADM quantities, and Baikal evolves the BSSN quantities.""" c_type = "void" name = "ADM_to_BSSN_" + ThornName params = "CCTK_ARGUMENTS" # Set CoordSystem to Cartesian par.set_parval_from_str("reference_metric::CoordSystem", "Cartesian") rfm.reference_metric() import sympy as sp gammaSphorCartDD = ixp.declarerank2("gammaSphorCartDD", "sym01") KSphorCartDD = ixp.declarerank2("KSphorCartDD", "sym01") alphaSphorCart = sp.symbols("alphaSphorCart") betaSphorCartU = ixp.declarerank1("betaSphorCartU") BSphorCartU = ixp.declarerank1("BSphorCartU") import BSSN.BSSN_in_terms_of_ADM as BitoA BitoA.gammabarDD_hDD(gammaSphorCartDD) BitoA.trK_AbarDD_aDD(gammaSphorCartDD, KSphorCartDD) BitoA.cf_from_gammaDD(gammaSphorCartDD) BitoA.betU_vetU(betaSphorCartU, BSphorCartU) alpha = alphaSphorCart hDD = BitoA.hDD trK = BitoA.trK aDD = BitoA.aDD cf = BitoA.cf vetU = BitoA.vetU betU = BitoA.betU # Next compute lambda^i import BSSN.BSSN_quantities as Bq Bq.gammabar__inverse_and_derivs() # Provides gammabarUU and GammabarUDD gammabarUU = Bq.gammabarUU GammabarUDD = Bq.GammabarUDD # Next evaluate \bar{\Lambda}^i, based on GammabarUDD above and GammahatUDD # (from the reference metric): LambdabarU = ixp.zerorank1() for i in range(3): for j in range(3): for k in range(3): LambdabarU[i] += gammabarUU[j][k] * (GammabarUDD[i][j][k] - rfm.GammahatUDD[i][j][k]) # Finally apply rescaling: # lambda^i = Lambdabar^i/\text{ReU[i]} lambdaU = ixp.zerorank1() for i in range(3): lambdaU[i] = LambdabarU[i] / rfm.ReU[i] # Store the original list of registered 
gridfunctions; we'll want to unregister # all the *SphorCart* gridfunctions after we're finished with them below. orig_glb_gridfcs_list = [] for gf in gri.glb_gridfcs_list: orig_glb_gridfcs_list.append(gf) # We ignore the return values for the following register_gridfunctions...() calls, # as they are unused. gri.register_gridfunctions( "AUXEVOL", "alphaSphorCart") ixp.register_gridfunctions_for_single_rank1("AUXEVOL", "betaSphorCartU") ixp.register_gridfunctions_for_single_rank1("AUXEVOL", "BSphorCartU") ixp.register_gridfunctions_for_single_rank2("AUXEVOL", "gammaSphorCartDD", "sym01") ixp.register_gridfunctions_for_single_rank2("AUXEVOL", "KSphorCartDD", "sym01") # ADM to BSSN conversion, used for converting ADM initial data into a form readable by this thorn. # ADM to BSSN, Part 1: Set up function call and pointers to ADM gridfunctions body = """ DECLARE_CCTK_ARGUMENTS_"""+name+"""; DECLARE_CCTK_PARAMETERS; const CCTK_REAL *restrict alphaSphorCartGF = alp; """ # It's ugly if we output code in the following ordering, so we'll first # output to a string and then sort the string to beautify the code a bit. outstrtmp = [] for i in range(3): outstrtmp.append(" const CCTK_REAL *restrict betaSphorCartU"+str(i)+"GF = beta"+chr(ord('x')+i)+";\n") outstrtmp.append(" const CCTK_REAL *restrict BSphorCartU"+str(i)+"GF = dtbeta"+chr(ord('x')+i)+";\n") for j in range(i,3): outstrtmp.append(" const CCTK_REAL *restrict gammaSphorCartDD"+str(i)+str(j)+"GF = g"+chr(ord('x')+i)+chr(ord('x')+j)+";\n") outstrtmp.append(" const CCTK_REAL *restrict KSphorCartDD"+str(i)+str(j)+"GF = k"+chr(ord('x')+i)+chr(ord('x')+j)+";\n") outstrtmp.sort() for line in outstrtmp: body += line # ADM to BSSN, Part 2: Set up ADM to BSSN conversions for BSSN gridfunctions that do not require # finite-difference derivatives (i.e., all gridfunctions except lambda^i (=Gamma^i # in non-covariant BSSN)): # h_{ij}, a_{ij}, trK, vet^i=beta^i,bet^i=B^i, cf (conformal factor), and alpha # Output finite difference stencils as inlined expressions. # We do this instead of outputting as FD functions, as this function # does not take long to compile, and we have already output all the # FD functions to file, so if this one contains new FD functions, # the compile will fail. 
par.set_parval_from_str("finite_difference::enable_FD_functions", False) all_but_lambdaU_expressions = [ lhrh(lhs=gri.gfaccess("in_gfs", "hDD00"), rhs=hDD[0][0]), lhrh(lhs=gri.gfaccess("in_gfs", "hDD01"), rhs=hDD[0][1]), lhrh(lhs=gri.gfaccess("in_gfs", "hDD02"), rhs=hDD[0][2]), lhrh(lhs=gri.gfaccess("in_gfs", "hDD11"), rhs=hDD[1][1]), lhrh(lhs=gri.gfaccess("in_gfs", "hDD12"), rhs=hDD[1][2]), lhrh(lhs=gri.gfaccess("in_gfs", "hDD22"), rhs=hDD[2][2]), lhrh(lhs=gri.gfaccess("in_gfs", "aDD00"), rhs=aDD[0][0]), lhrh(lhs=gri.gfaccess("in_gfs", "aDD01"), rhs=aDD[0][1]), lhrh(lhs=gri.gfaccess("in_gfs", "aDD02"), rhs=aDD[0][2]), lhrh(lhs=gri.gfaccess("in_gfs", "aDD11"), rhs=aDD[1][1]), lhrh(lhs=gri.gfaccess("in_gfs", "aDD12"), rhs=aDD[1][2]), lhrh(lhs=gri.gfaccess("in_gfs", "aDD22"), rhs=aDD[2][2]), lhrh(lhs=gri.gfaccess("in_gfs", "trK"), rhs=trK), lhrh(lhs=gri.gfaccess("in_gfs", "vetU0"), rhs=vetU[0]), lhrh(lhs=gri.gfaccess("in_gfs", "vetU1"), rhs=vetU[1]), lhrh(lhs=gri.gfaccess("in_gfs", "vetU2"), rhs=vetU[2]), lhrh(lhs=gri.gfaccess("in_gfs", "betU0"), rhs=betU[0]), lhrh(lhs=gri.gfaccess("in_gfs", "betU1"), rhs=betU[1]), lhrh(lhs=gri.gfaccess("in_gfs", "betU2"), rhs=betU[2]), lhrh(lhs=gri.gfaccess("in_gfs", "alpha"), rhs=alpha), lhrh(lhs=gri.gfaccess("in_gfs", "cf"), rhs=cf)] outCparams = "preindent=0,outCfileaccess=a,outCverbose=False,includebraces=False" all_but_lambdaU_outC = fin.FD_outputC("returnstring", all_but_lambdaU_expressions, outCparams) body += lp.loop(["i2", "i1", "i0"], ["0", "0", "0"], ["cctk_lsh[2]", "cctk_lsh[1]", "cctk_lsh[0]"], ["1", "1", "1"], ["#pragma omp parallel for", "", ""], " ", all_but_lambdaU_outC) # ADM to BSSN, Part 3: Set up ADM to BSSN conversions for BSSN gridfunctions defined from # finite-difference derivatives: lambda^i, which is Gamma^i in non-covariant BSSN): body += """ const CCTK_REAL invdx0 = 1.0/CCTK_DELTA_SPACE(0); const CCTK_REAL invdx1 = 1.0/CCTK_DELTA_SPACE(1); const CCTK_REAL invdx2 = 1.0/CCTK_DELTA_SPACE(2); """ BSSN_FD_orders = list(Baikal_FDorders_list) if ThornName == "BaikalVacuum": BSSN_FD_orders = list(BaikalVacuum_FDorders_list) for FD_order in BSSN_FD_orders: # Store original finite-differencing order: orig_FD_order = par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER") # Set new finite-differencing order: par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", FD_order) outCparams = "preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False" lambdaU_expressions = [lhrh(lhs=gri.gfaccess("in_gfs","lambdaU0"),rhs=lambdaU[0]), lhrh(lhs=gri.gfaccess("in_gfs","lambdaU1"),rhs=lambdaU[1]), lhrh(lhs=gri.gfaccess("in_gfs","lambdaU2"),rhs=lambdaU[2])] lambdaU_expressions_FDout = fin.FD_outputC("returnstring",lambdaU_expressions, outCparams) lambdakernel = lp.loop(["i2","i1","i0"],["cctk_nghostzones[2]","cctk_nghostzones[1]","cctk_nghostzones[0]"], ["cctk_lsh[2]-cctk_nghostzones[2]","cctk_lsh[1]-cctk_nghostzones[1]","cctk_lsh[0]-cctk_nghostzones[0]"], ["1","1","1"],["#pragma omp parallel for","",""],padding=" ", interior=lambdaU_expressions_FDout) body += " if(FD_order == "+str(FD_order)+") {\n" body += indent_Ccode(lambdakernel) body += " }\n" # Restore original FD order par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER", orig_FD_order) body += """ ExtrapolateGammas(cctkGH, lambdaU0GF); ExtrapolateGammas(cctkGH, lambdaU1GF); ExtrapolateGammas(cctkGH, lambdaU2GF); """ # Unregister the *SphorCartGF's. 
gri.glb_gridfcs_list = orig_glb_gridfcs_list add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # # # ### Step 5.b.ii: BSSN $\to$ ADM \[Back to [top](#toc)\] # $$\label{bssntoadm}$$ # # All modules (thorns) in the Einstein Toolkit that deal with spacetime quantities do so via the core `ADMBase` module, which assumes variables are written in ADM form. Therefore, in order for `BaikalETK` to interface properly with the rest of the Toolkit, its native BSSN variables must be converted to ADM quantities. # In[21]: def add_to_Cfunction_dict_BSSN_to_ADM(ThornName): includes = ["math.h", "cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"] desc = """Converting from BSSN to ADM quantities is required in the Einstein Toolkit, as `ADMBase` (and the diagnostic thorns that read from it) expects ADM quantities, while Baikal evolves the BSSN quantities.""" c_type = "void" name = "BSSN_to_ADM_" + ThornName params = "CCTK_ARGUMENTS" import BSSN.ADM_in_terms_of_BSSN as btoa import BSSN.BSSN_quantities as Bq btoa.ADM_in_terms_of_BSSN() Bq.BSSN_basic_tensors() # Gives us betaU & BU body = """ DECLARE_CCTK_ARGUMENTS_"""+name+"""; DECLARE_CCTK_PARAMETERS; """ btoa_lhrh = [] for i in range(3): for j in range(i,3): btoa_lhrh.append(lhrh(lhs="g"+chr(ord('x')+i)+chr(ord('x')+j)+"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]", rhs=btoa.gammaDD[i][j])) for i in range(3): for j in range(i,3): btoa_lhrh.append(lhrh(lhs="k"+chr(ord('x')+i)+chr(ord('x')+j)+"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]", rhs=btoa.KDD[i][j])) btoa_lhrh.append(lhrh(lhs="alp[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]",rhs=Bq.alpha)) for i in range(3): btoa_lhrh.append(lhrh(lhs="beta"+chr(ord('x')+i)+"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]", rhs=Bq.betaU[i])) for i in range(3): btoa_lhrh.append(lhrh(lhs="dtbeta"+chr(ord('x')+i)+"[CCTK_GFINDEX3D(cctkGH,i0,i1,i2)]", rhs=Bq.BU[i])) outCparams = "preindent=0,outCfileaccess=a,outCverbose=False,includebraces=False" bssn_to_adm_Ccode = fin.FD_outputC("returnstring",btoa_lhrh, outCparams) body += lp.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"], ["1","1","1"],["#pragma omp parallel for","",""],padding=" ",interior=bssn_to_adm_Ccode[:-1]) add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False)
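#
# The ADM <-> BSSN algebra used by the two conversion functions above can be spot-checked in isolation.
# The sketch below is a minimal, self-contained sympy cross-check: it does not touch the NRPy+
# gridfunction machinery, it assumes Cartesian coordinates with det(gammahat)=1, and it works directly
# with e^{4 phi} purely for illustration (Baikal's `cf` gridfunction may correspond to a different
# conformal-factor choice, e.g. W = e^{-2 phi}); the lambda^i piece, which requires finite-difference
# derivatives, is omitted. All `*_ex` names are local to this sketch. It verifies that converting a
# simple ADM metric and extrinsic curvature to BSSN form and back returns the original data.

import sympy as sp

a, b, c = sp.symbols("a b c", positive=True)                        # diagonal ADM 3-metric entries
gammaDD_ex = sp.diag(a, b, c)
KDD_ex = sp.Matrix(3, 3, lambda i, j: sp.Symbol("K%d%d" % (min(i, j), max(i, j))))  # symmetric K_{ij}

detgamma = gammaDD_ex.det()
exp4phi = detgamma**sp.Rational(1, 3)                               # e^{4 phi} = det(gamma)^{1/3}
gammabarDD_ex = gammaDD_ex/exp4phi                                  # gammabar_{ij} = e^{-4 phi} gamma_{ij}
trK_ex = (gammaDD_ex.inv()*KDD_ex).trace()                          # K = gamma^{ij} K_{ij}
AbarDD_ex = (KDD_ex - sp.Rational(1, 3)*gammaDD_ex*trK_ex)/exp4phi  # Abar_{ij} = e^{-4 phi} A_{ij}

# BSSN -> ADM reconstruction (the standard inverse relations used by the conversion above):
gammaDD_back = exp4phi*gammabarDD_ex
KDD_back = exp4phi*AbarDD_ex + sp.Rational(1, 3)*gammaDD_back*trK_ex

assert sp.simplify(gammaDD_back - gammaDD_ex) == sp.zeros(3, 3)
assert sp.simplify(KDD_back - KDD_ex) == sp.zeros(3, 3)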
# # # ## Step 5.c: `T4DD_to_T4UU()`: Compute $T^{\mu\nu}$ from `TmunuBase`'s $T_{\mu\nu}$, using BSSN quantities as inputs for the 4D raising operation \[Back to [top](#toc)\] # $$\label{t4uu}$$ # # Here we implement $T^{\mu\nu} = g^{\mu \delta} g^{\nu \gamma} T_{\delta \gamma}.$ # In[22]: def add_to_Cfunction_dict_T4DD_to_T4UU(ThornName): includes = ["math.h", "cctk.h", "cctk_Arguments.h", "cctk_Parameters.h"] desc = """Compute T4UU from T4DD (provided by TmunuBase), using BSSN quantities as inputs for the 4D raising operation. WARNING: Do not enable SIMD here, as it is not guaranteed that cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2] is a multiple of SIMD_width! """ c_type = "void" name = "T4DD_to_T4UU_" + ThornName params = "CCTK_ARGUMENTS" # Declare T4DD as a set of gridfunctions. These won't # actually appear in interface.ccl, as interface.ccl # was set above. Thus before calling the code output # by FD_outputC(), we'll have to set pointers # to the actual gridfunctions they reference. # (In this case the eTab's.) T4DD = ixp.register_gridfunctions_for_single_rank2("AUXEVOL", "T4DD", "sym01", DIM=4) import BSSN.ADMBSSN_tofrom_4metric as AB4m AB4m.g4UU_ito_BSSN_or_ADM("BSSN") T4UUraised = ixp.zerorank2(DIM=4) for mu in range(4): for nu in range(4): for delta in range(4): for gamma in range(4): T4UUraised[mu][nu] += AB4m.g4UU[mu][delta]*AB4m.g4UU[nu][gamma]*T4DD[delta][gamma] T4UU_expressions = [ lhrh(lhs=gri.gfaccess("in_gfs","T4UU00"),rhs=T4UUraised[0][0]), lhrh(lhs=gri.gfaccess("in_gfs","T4UU01"),rhs=T4UUraised[0][1]), lhrh(lhs=gri.gfaccess("in_gfs","T4UU02"),rhs=T4UUraised[0][2]), lhrh(lhs=gri.gfaccess("in_gfs","T4UU03"),rhs=T4UUraised[0][3]), lhrh(lhs=gri.gfaccess("in_gfs","T4UU11"),rhs=T4UUraised[1][1]), lhrh(lhs=gri.gfaccess("in_gfs","T4UU12"),rhs=T4UUraised[1][2]), lhrh(lhs=gri.gfaccess("in_gfs","T4UU13"),rhs=T4UUraised[1][3]), lhrh(lhs=gri.gfaccess("in_gfs","T4UU22"),rhs=T4UUraised[2][2]), lhrh(lhs=gri.gfaccess("in_gfs","T4UU23"),rhs=T4UUraised[2][3]), lhrh(lhs=gri.gfaccess("in_gfs","T4UU33"),rhs=T4UUraised[3][3])] outCparams = "outCverbose=False,includebraces=False,preindent=0,enable_SIMD=False" T4UUstr = fin.FD_outputC("returnstring",T4UU_expressions, outCparams) T4UUstr_loop = lp.loop(["i2","i1","i0"],["0","0","0"],["cctk_lsh[2]","cctk_lsh[1]","cctk_lsh[0]"], ["1","1","1"],["#pragma omp parallel for","",""],padding=" ",interior=T4UUstr) body = r""" DECLARE_CCTK_ARGUMENTS_"""+name+r"""; DECLARE_CCTK_PARAMETERS; // First read in TmunuBase gridfunctions const CCTK_REAL *restrict T4DD00GF = eTtt; const CCTK_REAL *restrict T4DD01GF = eTtx; const CCTK_REAL *restrict T4DD02GF = eTty; const CCTK_REAL *restrict T4DD03GF = eTtz; const CCTK_REAL *restrict T4DD11GF = eTxx; const CCTK_REAL *restrict T4DD12GF = eTxy; const CCTK_REAL *restrict T4DD13GF = eTxz; const CCTK_REAL *restrict T4DD22GF = eTyy; const CCTK_REAL *restrict T4DD23GF = eTyz; const CCTK_REAL *restrict T4DD33GF = eTzz; // TmunuBase provides T_{\alpha \beta}, but BSSN needs T^{\alpha \beta}. // Here we perform the raising operation using BSSN quantities as input: """ + T4UUstr_loop # Now unregister T4DD gridfunctions new_glb_gridfcs_list = [] for gf in gri.glb_gridfcs_list: if "T4DD" not in gf.name: new_glb_gridfcs_list += [gf] gri.glb_gridfcs_list = list(new_glb_gridfcs_list) # creates a copy! add_to_Cfunction_dict( includes=includes, desc=desc, c_type=c_type, name=name, params=params, body=body.replace("BaikalETK", ThornName), enableCparameters=False) # # # # Step 6: Output all C driver functions needed for `Baikal`/`BaikalVacuum` \[Back to [top](#toc)\] # $$\label{outcdrivers}$$ # # First we call the above functions (output above to the `BaikalETK_validate.BaikalETK_C_drivers_codegen` Python module) to store all needed driver C functions in a Python dictionary; then we simply output the dictionary to the appropriate files. # In[23]: add_to_Cfunction_dict_T4DD_to_T4UU("Baikal") # BaikalVacuum does not deal with T4UU, as T4UU==zero in vacuum.
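#
# The raising operation performed in T4DD_to_T4UU() above is pure index algebra:
# T^{mu nu} = g^{mu delta} g^{nu gamma} T_{delta gamma}. As a quick standalone sanity check
# (illustrative only; it does not touch the NRPy+ gridfunction machinery, and all `*_sym` names are
# local to this sketch), the snippet below performs the same double contraction with generic symbolic
# symmetric matrices, then confirms that the raised tensor is again symmetric and that it reduces to
# T_{mu nu} itself when g^{mu nu} is the identity.
import sympy as sp

g4UU_sym = sp.Matrix(4, 4, lambda m, n: sp.Symbol("gUU%d%d" % (min(m, n), max(m, n))))
T4DD_sym = sp.Matrix(4, 4, lambda m, n: sp.Symbol("T%d%d" % (min(m, n), max(m, n))))

# Same quadruple loop as in add_to_Cfunction_dict_T4DD_to_T4UU() above:
T4UU_sym = sp.zeros(4, 4)
for mu in range(4):
    for nu in range(4):
        for delta in range(4):
            for gamma in range(4):
                T4UU_sym[mu, nu] += g4UU_sym[mu, delta]*g4UU_sym[nu, gamma]*T4DD_sym[delta, gamma]

assert sp.simplify(T4UU_sym - T4UU_sym.T) == sp.zeros(4, 4)   # symmetry of T_{mu nu} is preserved
assert T4UU_sym.subs({g4UU_sym[m, n]: int(m == n) for m in range(4) for n in range(4)}) == T4DD_sym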
for thorn in ["Baikal", "BaikalVacuum"]: add_to_Cfunction_dict_ADM_to_BSSN(thorn) add_to_Cfunction_dict_Banner(thorn) add_to_Cfunction_dict_BSSN_to_ADM(thorn) add_to_Cfunction_dict_floor_the_lapse(thorn) add_to_Cfunction_dict_MoL_registration(thorn) add_to_Cfunction_dict_RegisterSlicing(thorn) add_to_Cfunction_dict_specify_Driver_BoundaryConditions(thorn) add_to_Cfunction_dict_specify_BoundaryConditions_aux_gfs(thorn) add_to_Cfunction_dict_specify_BoundaryConditions_evolved_gfs(thorn) add_to_Cfunction_dict_specify_NewRad_BoundaryConditions_parameters(thorn) add_to_Cfunction_dict_Symmetry_registration_oldCartGrid3D(thorn) add_to_Cfunction_dict_zero_rhss(thorn) # In[24]: # Output CCL files. This step depends on all Cparameters & gridfunctions being defined first. for enable_stress_energy_source_terms in [True, False]: ThornName = "BaikalVacuum" if enable_stress_energy_source_terms: ThornName = "Baikal" output_param_ccl(ThornName) output_interface_ccl(ThornName, enable_stress_energy_source_terms) output_schedule_ccl(ThornName, enable_stress_energy_source_terms) # In[25]: from outputC import outC_function_dict def out_C_file(thorn, name): with open(os.path.join(thorn, "src", name+".c"), "w") as Cfile: Cfile.write(outC_function_dict[name]) # # # ## Step 6.a: List all registered C files needed to compile `Baikal`/`BaikalVacuum` in `make.code.defn` \[Back to [top](#toc)\] # $$\label{makecodedefn}$$ # # When registering each C function above, `outputC` stored the function to its `outC_function_master_list` list of named tuples. We'll now make use of `outC_function_master_list` to list each of those files in `Baikal`/`BaikalVacuum`'s `make.code.defn` file, needed by the Einstein Toolkit's build system. # In[26]: # Finally, output [thornname]/src/make.code.defn for thorn in ["Baikal", "BaikalVacuum"]: with open(os.path.join(thorn, "src", "make.code.defn"), "w") as file: file.write("# Main make.code.defn file for thorn " + thorn + "\n\n") file.write("# Source files in this directory that need compiled:\n") file.write("SRCS = \\\n") outstr = "" for item in outC_function_master_list: if thorn == "BaikalVacuum": if "_" + thorn in item.name: outstr += item.name + ".c \\\n" out_C_file(thorn, item.name) else: if "_" + thorn in item.name and not "BaikalVacuum" in item.name: outstr += item.name + ".c \\\n" out_C_file(thorn, item.name) outstr = outstr[:-2] + "\n" file.write(indent_Ccode(outstr, " ")) # # # # Step 7: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\] # $$\label{latex_pdf_output}$$ # # The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename # [Tutorial-ETK_thorn-BaikalETK.pdf](Tutorial-ETK_thorn-BaikalETK.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.) # In[27]: import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-ETK_thorn-BaikalETK")