# Getting started with Parcels: general structure

There are many different ways to use Parcels for research. The flexibility of the Parcels code enables this wide range of applications and allows you to build complex simulations. But it also means that you have to know what you're doing, so please take some time to learn how to use Parcels, starting with this tutorial.

For a smooth programming experience with Parcels, it is recommended that you follow a general structure when setting up your simulations. Here we give an overview of the main components of a Parcels simulation. These components provide the basic requirements for your first simulation, while the structure helps you keep track of more complex simulations later on. A good practice is to separate the different parts of your code into sections:

1. FieldSet. Load and set up the (velocity) fields that your particles need to access.
2. ParticleSet. Define the type of particles you want to release, what Variables they have and what their initial conditions are.
3. Execute kernels. Define and compile the kernels that encode what your particles need to do each timestep and execute them.
4. Output. Write and store the output to a NetCDF file. Ideas on what you can do with the output in terms of analysis are documented here.

These four components are used in the Python cell below. Further on in this notebook, we'll focus on each component separately.

In [1]:
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4

# 1. Setting up the velocity fields in a FieldSet object

fname = 'GlobCurrent_example_data/*.nc'
filenames = {'U': fname, 'V': fname}
variables = {'U': 'eastward_eulerian_current_velocity', 'V': 'northward_eulerian_current_velocity'}
dimensions = {'U': {'lat': 'lat', 'lon': 'lon', 'time': 'time'},
              'V': {'lat': 'lat', 'lon': 'lon', 'time': 'time'}}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)

# 2. Defining the particles type and initial conditions in a ParticleSet object

pset = ParticleSet(fieldset=fieldset,   # the fields on which the particles are advected
                   pclass=JITParticle,  # the type of particles (JITParticle or ScipyParticle)
                   lon=28,              # release longitude
                   lat=-33)             # release latitude

# 3. Executing an advection kernel on the given fieldset

output_file = pset.ParticleFile(name="GCParticles.nc", outputdt=3600) # the file name and the time step of the outputs
pset.execute(AdvectionRK4,                 # the kernel (which defines how particles move)
             runtime=86400*6,              # the total length of the run in seconds
             dt=300,                       # the timestep of the kernel
             output_file=output_file)

# 4. Exporting the simulation output to a netcdf file

output_file.export()

WARNING: Casting lon data to np.float32
WARNING: Casting lat data to np.float32
WARNING: Casting depth data to np.float32
WARNING: Casting field data to np.float32


When you start making your Parcels simulations more complex, it is a good idea to keep these different steps separate, both to keep a clear overview and to find bugs more easily.

# 1. FieldSet

Parcels provides a framework to simulate the movement of particles within an existing flow-field environment. To start a Parcels simulation we must define this environment with the FieldSet class. The minimal requirement is that the FieldSet contains the 'U' and 'V' fields: the 2D hydrodynamic data that will move the particles.

The general method to use is FieldSet.from_netcdf, which requires filenames, variables and dimensions. Each of these is a dictionary, and variables requires at least a U and V, but any other variable can be added too (e.g. vertical velocity, temperature, mixed layer depth, etc.). Note also that filenames can contain wildcards. For example, the GlobCurrent data that can be downloaded using the parcels_get_examples script (see step 4 of the installation guide) can be read with:

In [2]:
fname = 'GlobCurrent_example_data/*.nc'
filenames = {'U': fname, 'V': fname}
variables = {'U': 'eastward_eulerian_current_velocity', 'V': 'northward_eulerian_current_velocity'}
dimensions = {'U': {'lat': 'lat', 'lon': 'lon', 'time': 'time'}, # In the GlobCurrent data the dimensions are also called 'lon', 'lat' and 'time'
              'V': {'lat': 'lat', 'lon': 'lon', 'time': 'time'}}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
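Since variables accepts any number of extra fields, you could for example add a temperature field alongside U and V. A hypothetical sketch (the NetCDF variable name 'analysed_sst' and the field name 'temp' are assumptions, not part of the GlobCurrent velocity files above; the from_netcdf call is commented out since it needs the data on disk):

```python
# Hypothetical sketch: adding an extra field alongside U and V
fname = 'GlobCurrent_example_data/*.nc'
filenames = {'U': fname, 'V': fname, 'temp': fname}
variables = {'U': 'eastward_eulerian_current_velocity',
             'V': 'northward_eulerian_current_velocity',
             'temp': 'analysed_sst'}  # assumed NetCDF variable name
dimensions = {'U': {'lat': 'lat', 'lon': 'lon', 'time': 'time'},
              'V': {'lat': 'lat', 'lon': 'lon', 'time': 'time'},
              'temp': {'lat': 'lat', 'lon': 'lon', 'time': 'time'}}
# fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
# The extra field would then be available in kernels as fieldset.temp
```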


# 2. ParticleSet

Once you have set up the environment with the FieldSet object, you can start defining your particles in a ParticleSet object. This object requires:

1. The FieldSet on which the particles live.
2. The type of Particle, which contains the information each particle will store.
3. The initial conditions for each Variable defined in the Particle, most notably the release locations in lon and lat.
In [3]:
pset = ParticleSet(fieldset=fieldset,   # the fields on which the particles are advected
                   pclass=JITParticle,  # the type of particles (JITParticle or ScipyParticle)
                   lon=28,              # release longitude
                   lat=-33)             # release latitude
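The lon and lat arguments also accept lists or arrays, so you can release many particles at once. A minimal sketch building a regular release grid with NumPy (the grid extent is made up for illustration, and the ParticleSet call is commented out since it needs the FieldSet defined above):

```python
import numpy as np

# Hypothetical 5 x 5 grid of release locations around (28E, 33S)
lons, lats = np.meshgrid(np.linspace(27.5, 28.5, 5),
                         np.linspace(-33.5, -32.5, 5))
lons, lats = lons.flatten(), lats.flatten()  # ParticleSet expects 1D sequences

# pset = ParticleSet(fieldset=fieldset, pclass=JITParticle,
#                    lon=lons, lat=lats)
print(len(lons))  # 25 release locations
```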


The different Particle types available are the JITParticle and the ScipyParticle, but it is very easy to create your own particle class which includes other Variables:

In [4]:
from parcels import Variable

class PressureParticle(JITParticle):         # Define a new particle class
    p = Variable('p', initial=0)             # Variable 'p' with initial value 0.


#### For more advanced tutorials on how to set up your ParticleSet:

For more information on how to implement Particle types with specific behaviour, see the section on writing your own kernels.

# 3. Kernel execution

After defining the flow-field environment with FieldSet and the particle information with ParticleSet, we can move on to actually running the Parcels simulation with ParticleSet.execute(). Running a simulation in Parcels means executing kernels: little snippets of code that are run for each particle at each timestep. The most basic kernels are advection kernels, which calculate the movement of each particle based on the FieldSet in which the ParticleSet lives. A few different advection kernels are included in Parcels.

If you want to store the particle data generated in the simulation, you usually first want to define the ParticleFile to which the output of the kernel execution will be written. Then, on the ParticleSet you have defined, you can use the method ParticleSet.execute() which requires the following arguments:

1. The kernels to be executed.
2. The runtime defining how long the execution loop runs. Alternatively, you may define the endtime at which the execution loop stops.
3. The timestep dt at which to execute the kernels.
4. (Optional) The ParticleFile object to write the output to.
In [5]:
output_file = pset.ParticleFile(name="GCParticles.nc", outputdt=3600) # the file name and the time step of the outputs

pset.execute(AdvectionRK4,                 # the kernel (which defines how particles move)
             runtime=86400*6,              # the total length of the run in seconds
             dt=300,                       # the timestep of the kernel
             output_file=output_file)

INFO: Compiled JITParticleAdvectionRK4 ==> C:\Users\GEBRUI~1\AppData\Local\Temp\parcels-tmp\a23c56b16e735df652d3b3c4b045eabb_0.dll
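Instead of raw seconds, runtime, dt and outputdt also accept datetime.timedelta objects, which often read more clearly. A sketch of the equivalent values (the ParticleFile and execute calls are commented out since they need the ParticleSet defined above):

```python
from datetime import timedelta

runtime = timedelta(days=6)     # same as 86400 * 6 seconds
dt = timedelta(minutes=5)       # same as 300 seconds
outputdt = timedelta(hours=1)   # same as 3600 seconds

# output_file = pset.ParticleFile(name="GCParticles.nc", outputdt=outputdt)
# pset.execute(AdvectionRK4, runtime=runtime, dt=dt, output_file=output_file)
print(runtime.total_seconds())
```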


One of the most powerful features of Parcels is the ability to write custom kernels (see e.g. this example to add the vertical movements of an Argo float). You will probably want to define these kernels at this point: after defining your ParticleSet and before executing it. If your kernels become very large or complex, you might want to store them in a separate Python file and import them into your simulation script.

In [6]:
def WestVel(particle, fieldset, time):
    if time > 86400 and time < 2*86400:
        uvel = -0.02
        particle.lon += uvel * particle.dt

WV_kernel = pset.Kernel(WestVel)

output_file = pset.ParticleFile(name="GC_WestVel.nc", outputdt=3600)

pset.execute(AdvectionRK4 + WV_kernel,     # the two kernels, combined with the + operator
             runtime=86400*6,              # the total length of the run in seconds
             dt=300,                       # the timestep of the kernel
             output_file=output_file)

INFO: Compiled JITParticleAdvectionRK4WestVel ==> C:\Users\GEBRUI~1\AppData\Local\Temp\parcels-tmp\2d3461f1f21e8976d23b053f3bf517e8_0.dll


However, there are some key limitations to the Kernels that everyone who wants to write their own should be aware of:

• Every Kernel must be a function with the following (and only those) arguments: (particle, fieldset, time)
• In order to run successfully in JIT mode, Kernel definitions can only contain the following types of commands:

• Basic arithmetical operators (+, -, *, /, **) and assignments (=).
• Basic logical operators (<, ==, !=, >, &, |). Note that you can use a statement like particle.lon != particle.lon to check if particle.lon is NaN (since math.nan != math.nan).
• if and while loops, as well as break statements. Note that for-loops are not supported in JIT mode.
• Interpolation of a Field from the FieldSet at a [time, depth, lat, lon] point, using square brackets notation. For example, to interpolate the zonal velocity (U) field at the particle location, use the following statement: value = fieldset.U[time, particle.depth, particle.lat, particle.lon]
• Functions from the math standard library and from the custom ParcelsRandom library at parcels.rng
• Simple print statements, such as:

• print("Some print")
• print(particle.lon)
• print("particle id: %d" % particle.id)
• print("lon: %f, lat: %f" % (particle.lon, particle.lat))

Note, however, that these print statements are not shown in Jupyter notebooks in JIT mode; see this long-standing Issue.

• Local variables can be used in Kernels, and these variables will be accessible in all concatenated Kernels. Note that these local variables are not shared between particles, and also not between time steps.
• Note that one has to be careful when writing kernels for vector fields on curvilinear grids. While Parcels automatically rotates the U and V fields when necessary, this is not the case for, for example, wind data. In that case, a custom rotation function will have to be written.
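To illustrate these rules, here is a minimal sketch of a sampling kernel that uses only the allowed constructs: square-bracket field interpolation, an assignment, and the x != x NaN check. The field name T and the particle Variable temp are assumptions for illustration, not part of the GlobCurrent example above:

```python
def SampleTemperature(particle, fieldset, time):
    # Interpolate an (assumed) temperature field at the particle location
    particle.temp = fieldset.T[time, particle.depth, particle.lat, particle.lon]
    # Guard against NaN using the x != x trick, which works in JIT mode
    if particle.temp != particle.temp:
        particle.temp = 0.0
```

In Scipy mode a kernel is a plain Python function, which also makes it easy to unit-test with mock particle and fieldset objects before running a full simulation.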

# 4. Output

While executing the ParticleSet, Parcels stores the data in npy files in an output folder. To collect all the data and store it in a NetCDF file, use ParticleFile.export() if you want to keep the folder with npy files, or ParticleFile.close() if you only want to keep the NetCDF file:

In [7]:
output_file.export()
output_file.close()


After running your simulation, you probably want to analyse the output. Although there is some simple plotting functionality built into Parcels, we recommend writing your own code to analyse your specific output; keeping the analysis separate from the simulation is usually good practice.
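As an example of such separate analysis code, here is a minimal sketch that computes the along-track distance of one trajectory with the haversine formula, assuming you have already loaded the particle's lon and lat from the output file into NumPy arrays (e.g. via xarray); the three-point trajectory below is made up for illustration:

```python
import numpy as np

def track_distance_km(lon, lat):
    """Along-track distance (km) of one trajectory, via the haversine formula."""
    R = 6371.0  # mean Earth radius in km
    lon, lat = np.radians(lon), np.radians(lat)
    dlon, dlat = np.diff(lon), np.diff(lat)
    a = np.sin(dlat / 2)**2 + np.cos(lat[:-1]) * np.cos(lat[1:]) * np.sin(dlon / 2)**2
    return np.sum(2 * R * np.arcsin(np.sqrt(a)))

# Hypothetical short trajectory (degrees); real values would come from the output file
print(track_distance_km(np.array([28.0, 28.1, 28.2]),
                        np.array([-33.0, -33.0, -33.0])))
```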

#### For more tutorials on the Parcels output:
