Introduction

Here you will find a step-by-step guide to downloading, configuring, and running the Einstein Toolkit. You may use this tutorial on a workstation or laptop, or on a supported cluster. Configuring the Einstein Toolkit on an unsupported cluster is beyond the scope of this tutorial. If you find something that does not work, please feel free to mail [email protected]

Prerequisites

When using the Einstein Toolkit on a laptop or workstation, you will want a number of packages installed in order to download, compile, and use the Einstein Toolkit components. If this is a machine that you control (i.e. you have root access), you can install them using one of the recipes that follow:

On Mac, please first:

  • Install Xcode from the Apple App Store. In addition, agree to the Xcode license in Terminal:
    sudo xcodebuild -license
    
  • Install MacPorts for your version of the Mac operating system, if you have not already installed it (https://www.macports.org/install.php).
  • Next, please install the following packages using these commands:
    sudo port -N install pkgconfig gcc8 openmpi fftw-3 gsl jpeg zlib hdf5 +fortran +gfortran openssl ld64 +ld64_xcode
    sudo port select mpi openmpi-mp-fortran
    

On Debian/Ubuntu/Mint use this command:

sudo apt-get install -y subversion gcc git numactl libgsl-dev libpapi-dev python libhwloc-dev make libopenmpi-dev libhdf5-openmpi-dev libfftw3-dev libssl-dev liblapack-dev g++ curl gfortran patch pkg-config libhdf5-dev libjpeg-turbo?-dev

On Fedora use this command:

sudo dnf install -y libjpeg-turbo-devel gcc git lapack-devel make subversion gcc-c++ which papi-devel python hwloc-devel openmpi-devel hdf5-openmpi-devel openssl-devel libtool-ltdl-devel numactl-devel gcc-gfortran findutils hdf5-devel fftw-devel patch gsl-devel pkgconfig
module load mpi/openmpi-x86_64

You will have to repeat the module load command each time you want to compile or run the code.
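If you prefer not to repeat it, you could instead append it to your shell startup file (this assumes you use the bash shell):

echo 'module load mpi/openmpi-x86_64' >> ~/.bashrc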

On CentOS use this command:

sudo yum install -y epel-release
sudo yum install -y libjpeg-turbo-devel gcc git lapack-devel make subversion gcc-c++ which papi-devel hwloc-devel openmpi-devel hdf5-openmpi-devel openssl-devel libtool-ltdl-devel numactl-devel gcc-gfortran hdf5-devel fftw-devel patch gsl-devel

On openSUSE use this command:

sudo zypper install -y curl gcc git lapack-devel make subversion gcc-c++ which papi-devel hwloc-devel openmpi-devel libopenssl-devel libnuma-devel gcc-fortran hdf5-devel libfftw3-3 patch gsl-devel pkg-config

Download

A script called GetComponents is used to fetch the components of the Einstein Toolkit. GetComponents serves as a convenient wrapper around lower-level tools like git and svn to download the codes that make up the Einstein Toolkit from their individual repositories. You may download it and make it executable as follows:

Note: By default, the cells in this notebook are Python 3 commands. However, lines that begin with the ! character are run inside a bash shell. Bash commands that set environment variables or change the working directory will not work if executed with ! (their effects are forgotten immediately). Therefore, when changing the working directory, the special magic %cd command is used. If you wish to run these commands outside the notebook in a bash shell, cut and paste only the characters following the initial ! or %.
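The two cells below illustrate the difference (the /tmp path is only for illustration; nothing in this tutorial depends on it):

In [ ]:
# Each ! line runs in its own throw-away bash shell, so the cd does not persist:
!cd /tmp
!pwd    # still prints the notebook's original working directory
In [ ]:
# The %cd magic changes the notebook's own working directory:
%cd /tmp
%cd -   # and %cd - returns to the previous directory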

In [ ]:
%cd ~/
In [ ]:
!curl -kLO https://raw.githubusercontent.com/gridaphobe/CRL/ET_2019_03/GetComponents
!chmod a+x GetComponents

GetComponents accepts a thorn list as an argument. To check out the needed components:

In [ ]:
!./GetComponents --parallel https://bitbucket.org/einsteintoolkit/manifest/raw/ET_2019_03/einsteintoolkit.th
In [ ]:
%cd ~/Cactus

Configure and build

The recommended way to compile the Einstein Toolkit is to use the Simulation Factory ("SimFactory").

Configuring SimFactory for your machine

The ET depends on various libraries, and needs to interact with machine-specific queueing systems and MPI implementations. As such, it needs to be configured for a given machine; this is what SimFactory is used for. Generally, configuring SimFactory means providing an optionlist (specifying library locations and build options), a submit script (for using the batch queueing system), and a runscript (specifying how Cactus should be run, e.g. which mpirun command to use).

In [ ]:
!./simfactory/bin/sim setup-silent
# If you are on Mac, uncomment and use the next line instead:
# !./simfactory/bin/sim setup-silent --optionlist osx-macports.cfg --runscript osx-macports.run

After this step is complete you will find your machine's default setup under ./simfactory/mdb/machines/<hostname>.ini. You can edit some of these settings freely, such as "description", "basedir", etc. Other entries, such as "ppn" (processors per node, i.e. the number of cores on your machine) and "num-threads" (number of threads per core), can cause simulation start-up warnings and/or errors if changed incorrectly, so edit them with care.
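For illustration, such a file contains entries along these lines (the values are hypothetical and only the keys mentioned above are shown; your generated file will contain more entries and your machine's actual values):

[myhostname]
description     = My laptop
basedir         = /home/myuser/simulations
ppn             = 4
num-threads     = 1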

Building the Einstein Toolkit

Assuming that SimFactory has been successfully set up on your machine, you should be able to build the Einstein Toolkit with the command below. The option "--mdbkey make 'make -j1'" sets the make command that will be used by the script; the number after -j is the number of parallel processes used when building. Even in parallel, this step may take a while, as it compiles all the thorns specified in manifest/einsteintoolkit.th.

Note that the "cat" command on the end of the line is to prevent problems with the display in Jupyter when output comes from multiple threads.

Note: Using too many threads to compile on the test machine may result in compiler failures.

In [ ]:
!./simfactory/bin/sim build --mdbkey make 'make -j1' --thornlist ../einsteintoolkit.th | cat

Running a simple example

You can now run the Einstein Toolkit with a simple test parameter file.

In [ ]:
!./simfactory/bin/sim create-submit helloworld \
    --parfile arrangements/CactusExamples/HelloWorld/par/HelloWorld.par --walltime 0:5:0

The above command will submit the simulation to the queue, naming it "helloworld", and ask for a five-minute job time if you are running on a cluster, or run it immediately in the background if you are on a personal laptop or workstation. You can check the status of the simulation with the command below. You can run this command repeatedly until the job shows

[ACTIVE (FINISHED)...
as its state. Prior to that, it may show up as QUEUED or RUNNING.
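If you prefer not to re-run the status cell by hand, a small polling loop like the one below could be used instead. This is just a convenience sketch, not part of SimFactory; it assumes the notebook's working directory is still ~/Cactus and simply looks for the string FINISHED in the output.

In [ ]:
# Poll the simulation status every 10 seconds until it reports FINISHED.
import subprocess, time
while True:
    out = subprocess.run(["simfactory/bin/sim", "list-simulations", "helloworld"],
                         capture_output=True, text=True).stdout
    print(out)
    if "FINISHED" in out:
        break
    time.sleep(10)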

In [ ]:
!simfactory/bin/sim list-simulations helloworld

Once it has finished, you can look at the output with the command below.

In [ ]:
!simfactory/bin/sim show-output helloworld 

If you see

INFO (HelloWorld): Hello World!
anywhere in the above output, then congratulations: you have successfully downloaded, compiled, and run the Einstein Toolkit! You may now want to try some of the other tutorials to explore some interesting physics examples.

Running a single star simulation

What follows is a much more computationally intensive example: simulating a static TOV star. Just below this cell you can see the contents of a Cactus parameter file to simulate a single, spherically symmetric star using the Einstein Toolkit. The parameter file has been set up to run to completion in about 10 minutes, making it useful for a tutorial but too coarsely resolved to do science with.

In [ ]:
%%writefile par/tov_ET.par
# Example parameter file for a static TOV star. Everything is evolved, but
# because this is a solution to the GR and hydro equations, nothing changes
# much. What can be seen is the initial perturbation (due to numerical errors)
# ringing down (look at the density maximum), and later numerical errors
# governing the solution. Try higher resolutions to decrease this error.

# Some basic stuff
ActiveThorns = "Time MoL"
ActiveThorns = "Coordbase CartGrid3d Boundary StaticConformal"
ActiveThorns = "SymBase ADMBase TmunuBase HydroBase InitBase ADMCoupling ADMMacros"
ActiveThorns = "IOUtil"
ActiveThorns = "Formaline"
ActiveThorns = "SpaceMask CoordGauge Constants LocalReduce aeilocalinterp LoopControl"
ActiveThorns = "Carpet CarpetLib CarpetReduce CarpetRegrid2 CarpetInterp"
ActiveThorns = "CarpetIOASCII CarpetIOScalar CarpetIOHDF5 CarpetIOBasic"

# Finalize
Cactus::terminate           = "time"
Cactus::cctk_final_time     = 400 #800 # divide by ~203 to get ms

# Termination Trigger
ActiveThorns = "TerminationTrigger"
TerminationTrigger::max_walltime = 24          # hours
TerminationTrigger::on_remaining_walltime = 0  # minutes
TerminationTrigger::check_file_every = 512
TerminationTrigger::termination_file = "TerminationTrigger.txt"
TerminationTrigger::termination_from_file   = "yes"
TerminationTrigger::create_termination_file = "yes"

# grid parameters
Carpet::domain_from_coordbase = "yes"
CartGrid3D::type         = "coordbase"
CartGrid3D::domain       = "full"
CartGrid3D::avoid_origin = "no"
CoordBase::xmin =  0.0
CoordBase::ymin =  0.0
CoordBase::zmin =  0.0
CoordBase::xmax = 24.0
CoordBase::ymax = 24.0
CoordBase::zmax = 24.0
# Change these parameters to change resolution. The ?max settings above
# have to be multiples of these. 'dx' is the size of one cell in x-direction.
# Making this smaller means using higher resolution, because more points will
# be used to cover the same space.
CoordBase::dx   =   2.0
CoordBase::dy   =   2.0
CoordBase::dz   =   2.0

CarpetRegrid2::regrid_every =   0
CarpetRegrid2::num_centres  =   1
CarpetRegrid2::num_levels_1 =   2
CarpetRegrid2::radius_1[1]  = 12.0


CoordBase::boundary_size_x_lower        = 3
CoordBase::boundary_size_y_lower        = 3
CoordBase::boundary_size_z_lower        = 3
CoordBase::boundary_size_x_upper        = 3
CoordBase::boundary_size_y_upper        = 3
CoordBase::boundary_size_z_upper        = 3
CoordBase::boundary_shiftout_x_lower    = 1
CoordBase::boundary_shiftout_y_lower    = 1
CoordBase::boundary_shiftout_z_lower    = 1
CoordBase::boundary_shiftout_x_upper    = 0
CoordBase::boundary_shiftout_y_upper    = 0
CoordBase::boundary_shiftout_z_upper    = 0


ActiveThorns = "ReflectionSymmetry"

ReflectionSymmetry::reflection_x = "yes"
ReflectionSymmetry::reflection_y = "yes"
ReflectionSymmetry::reflection_z = "yes"
ReflectionSymmetry::avoid_origin_x = "no"
ReflectionSymmetry::avoid_origin_y = "no"
ReflectionSymmetry::avoid_origin_z = "no"

# storage and coupling
TmunuBase::stress_energy_storage = yes
TmunuBase::stress_energy_at_RHS  = yes
TmunuBase::timelevels            =  1
TmunuBase::prolongation_type     = none


HydroBase::timelevels            = 3

ADMMacros::spatial_order = 4

SpaceMask::use_mask      = "yes"

Carpet::enable_all_storage       = no
Carpet::use_buffer_zones         = "yes"

Carpet::poison_new_timelevels    = "yes"
Carpet::check_for_poison         = "no"

Carpet::init_3_timelevels        = no
Carpet::init_fill_timelevels     = "yes"

CarpetLib::poison_new_memory = "yes"
CarpetLib::poison_value      = 114

# system specific Carpet parameters
Carpet::max_refinement_levels    = 10
driver::ghost_size               = 3
Carpet::prolongation_order_space = 3
Carpet::prolongation_order_time  = 2

# Time integration
time::dtfac = 0.25

MoL::ODE_Method             = "rk4"
MoL::MoL_Intermediate_Steps = 4
MoL::MoL_Num_Scratch_Levels = 1

# check all physical variables for NaNs
#  This can save you computing time, so it's not a bad idea to do this
#  once in a while.
ActiveThorns = "NaNChecker"
NaNChecker::check_every = 16384
NaNChecker::action_if_found = "terminate" #"terminate", "just warn", "abort"
NaNChecker::check_vars = "ADMBase::metric ADMBase::lapse ADMBase::shift HydroBase::rho HydroBase::eps HydroBase::press HydroBase::vel"

# Hydro parameters

ActiveThorns = "EOS_Omni GRHydro"

HydroBase::evolution_method      = "GRHydro"

GRHydro::riemann_solver            = "HLLE"
GRHydro::GRHydro_eos_type          = "General"
GRHydro::GRHydro_eos_table         = "Ideal_Fluid"
GRHydro::recon_method              = "ppm"
GRHydro::GRHydro_stencil            = 3
GRHydro::bound                     = "none"
GRHydro::rho_abs_min               = 1.e-10
GRHydro::GRHydro_atmo_tolerance    = 1.e-3
GRHydro::sources_spatial_order     = 4

# Curvature evolution parameters

ActiveThorns = "GenericFD NewRad"
ActiveThorns = "ML_BSSN ML_BSSN_Helper"
ADMBase::evolution_method        = "ML_BSSN"
ADMBase::lapse_evolution_method  = "ML_BSSN"
ADMBase::shift_evolution_method  = "ML_BSSN"
ADMBase::dtlapse_evolution_method= "ML_BSSN"
ADMBase::dtshift_evolution_method= "ML_BSSN"

ML_BSSN::timelevels = 3

ML_BSSN::harmonicN           = 1      # 1+log
ML_BSSN::harmonicF           = 2.0    # 1+log
ML_BSSN::ShiftBCoeff         = 1
ML_BSSN::ShiftGammaCoeff     = 0.75
ML_BSSN::BetaDriver          = 2.66
ML_BSSN::LapseAdvectionCoeff = 0.0
ML_BSSN::ShiftAdvectionCoeff = 0.0

ML_BSSN::my_initial_boundary_condition = "extrapolate-gammas"
ML_BSSN::my_rhs_boundary_condition     = "NewRad"

# Some dissipation to get rid of high-frequency noise
ActiveThorns = "SphericalSurface Dissipation"
Dissipation::verbose   = "no"
Dissipation::epsdis   = 0.01
Dissipation::vars = "
        ML_BSSN::ML_log_confac
        ML_BSSN::ML_metric
        ML_BSSN::ML_curv
        ML_BSSN::ML_trace_curv
        ML_BSSN::ML_Gamma
        ML_BSSN::ML_lapse
        ML_BSSN::ML_shift
"


# init parameters
InitBase::initial_data_setup_method = "init_some_levels"

# Use TOV as initial data
ActiveThorns = "TOVSolver"

HydroBase::initial_hydro         = "tov"
ADMBase::initial_data            = "tov"
ADMBase::initial_lapse           = "tov"
ADMBase::initial_shift           = "tov"
ADMBase::initial_dtlapse         = "zero"
ADMBase::initial_dtshift         = "zero"

# Parameters for initial star
TOVSolver::TOV_Rho_Central[0] = 1.28e-3
TOVSolver::TOV_Gamma          = 2
TOVSolver::TOV_K              = 100

# Set equation of state for evolution
EOS_Omni::poly_gamma                   = 2
EOS_Omni::poly_k                       = 100
EOS_Omni::gl_gamma                     = 2
EOS_Omni::gl_k                         = 100

# I/O

cactus::cctk_timer_output = "full"

# Use (create if necessary) an output directory named like the
# parameter file (minus the .par)
IO::out_dir             = ${parfile}

# Write one file overall per output (variable/group)
# In production runs, comment this or set to "proc" to get one file
# per MPI process
# RH: 2018-02-10 disable until ticket is addressed: https://trac.einsteintoolkit.org/ticket/2117
#IO::out_mode            = "onefile"

# Some screen output
IOBasic::outInfo_every = 512
IOBasic::outInfo_vars  = "Carpet::physical_time_per_hour HydroBase::rho{reductions='maximum'}"

# Scalar output
IOScalar::outScalar_every    = 512
IOScalar::one_file_per_group = "yes"
IOScalar::outScalar_reductions = "norm1 norm2 norm_inf sum maximum minimum"
IOScalar::outScalar_vars     = "
 HydroBase::rho{reductions='maximum'}
 HydroBase::press{reductions='maximum'}
 HydroBase::eps{reductions='minimum maximum'}
 HydroBase::vel{reductions='minimum maximum'}
 HydroBase::w_lorentz{reductions='minimum maximum'}
 ADMBase::lapse{reductions='minimum maximum'}
 ADMBase::shift{reductions='minimum maximum'}
 ML_BSSN::ML_Ham{reductions='norm1 norm2 maximum minimum norm_inf'}
 ML_BSSN::ML_mom{reductions='norm1 norm2 maximum minimum norm_inf'}
 GRHydro::dens{reductions='minimum maximum sum'}
 Carpet::timing{reductions='average'}
"

# 1D ASCII output. Disable for production runs!
IOASCII::out1D_every        = 2048
IOASCII::one_file_per_group = yes
IOASCII::output_symmetry_points = no
IOASCII::out1D_vars         = "
 HydroBase::rho
 HydroBase::press
 HydroBase::eps
 HydroBase::vel
 ADMBase::lapse
 ADMBase::metric
 ADMBase::curv
 ML_BSSN::ML_Ham
 ML_BSSN::ML_mom
"

# 2D HDF5 output
CarpetIOHDF5::output_buffer_points = "no"

CarpetIOHDF5::out2D_every = 2048
CarpetIOHDF5::out2D_vars = "
 HydroBase::rho
 HydroBase::eps
 HydroBase::vel
 HydroBase::w_lorentz
 ADMBase::lapse
 ADMBase::shift
 ADMBase::metric
 ML_BSSN::ML_Ham
 ML_BSSN::ML_mom
 "

# Checkpointing options
IOHDF5::checkpoint                  = "no" # disable checkpointing on tutorial server
IO::checkpoint_dir                  = $parfile
IO::recover_dir                     = $parfile
IO::recover                         = "autoprobe"
IO::checkpoint_ID                   = "yes"
IO::checkpoint_every                = 1048576
IO::checkpoint_keep                 = 3
IO::checkpoint_on_terminate         = "yes"

# Enable to get detailed timing information
#ActiveThorns = "TimerReport"
#TimerReport::out_every    = 1024
#TimerReport::out_filename = "TimerReport"
#TimerReport::output_all_timers_readable = "yes"
#TimerReport::output_all_timers = "yes"

# Enable for profiling
#Carpet::output_timers_every = 1024
In [ ]:
# start simulation, watch log output
!./simfactory/bin/sim create-run tov_ET \
  --parfile=par/tov_ET.par --procs=2 --num-threads=1 --walltime=0:20:0

Plotting the Output

The following commands will generate a simple line plot of the data. They will work in a Python script as easily as they do in the notebook (just remove the "%matplotlib inline" directive).

In [ ]:
# This cell enables inline plotting in the notebook
%matplotlib inline

import matplotlib
import numpy as np
import matplotlib.pyplot as plt

NumPy has a routine called genfromtxt() which is an efficient reader of textual arrays of floating point numbers. It is well-suited to Cactus .asc files: the header lines Cactus writes begin with #, which genfromtxt skips by default.

In [ ]:
import os
home = os.environ["HOME"]
lin_data = np.genfromtxt(home+"/simulations/tov_ET/output-0000/tov_ET/hydrobase-rho.maximum.asc")

This is all you need to do to plot the data once you've loaded it. Note that this uses Python array notation to grab columns 1 and 2 of the data file.

In [ ]:
plt.plot(lin_data[:,1],lin_data[:,2])
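To make the plot easier to read you could also label the axes. In this scalar output file column 1 should be the coordinate time and column 2 the maximum of the rest-mass density, both in Cactus code units; the header comments at the top of the .asc file list the column format, so you can verify this before relying on it.

In [ ]:
# Same plot as above, with axis labels and an explicit show() for use in scripts
plt.plot(lin_data[:,1], lin_data[:,2])
plt.xlabel("time [code units]")
plt.ylabel("maximum of rho [code units]")
plt.show()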