#!/usr/bin/env python
# coding: utf-8

# # Getting started with Parcels: general structure

# There are many different ways in which to use Parcels for research. The flexibility of the Parcels code enables this wide range of applications and allows you to build complex simulations. But this also means that you have to know what you're doing, so please take some time to learn how to use Parcels, starting with this tutorial.
#
# For a smooth programming experience with Parcels, it is recommended you follow a general structure in setting up your simulations. Here we give you an overview of the main components of a Parcels simulation. These components provide you with the basic requirements for your first simulation, while the structure allows you to keep track of more complex simulations later on. A good practice is to separate the different parts of your code into sections:
# 1. [**FieldSet**](#1.-FieldSet). Load and set up the (velocity) fields that your particles need to access.
# 2. [**ParticleSet**](#2.-ParticleSet). Define the type of particles you want to release, what `Variables` they have and what their initial conditions are.
# 3. [**Execute kernels**](#3.-Kernel-execution). Define and compile the kernels that encode what your particles need to do each timestep, and execute them.
# 4. [**Output**](#4.-Output). Write and store the output to a NetCDF file. Ideas on what you can do with the output in terms of analysis are documented [**here**](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_output.ipynb).

# These four components are used in the Python cell below. Further on in this notebook, we'll focus on each component separately.

# In[1]:

from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4

# 1. Setting up the velocity fields in a FieldSet object
fname = 'GlobCurrent_example_data/*.nc'
filenames = {'U': fname, 'V': fname}
variables = {'U': 'eastward_eulerian_current_velocity',
             'V': 'northward_eulerian_current_velocity'}
dimensions = {'U': {'lat': 'lat', 'lon': 'lon', 'time': 'time'},
              'V': {'lat': 'lat', 'lon': 'lon', 'time': 'time'}}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)

# 2. Defining the particle type and initial conditions in a ParticleSet object
pset = ParticleSet(fieldset=fieldset,   # the fields on which the particles are advected
                   pclass=JITParticle,  # the type of particles (JITParticle or ScipyParticle)
                   lon=28,              # release longitudes
                   lat=-33)             # release latitudes

# 3. Executing an advection kernel on the given fieldset
output_file = pset.ParticleFile(name="GCParticles.nc", outputdt=3600)  # the file name and the time step of the outputs
pset.execute(AdvectionRK4,              # the kernel (which defines how particles move)
             runtime=86400*6,           # the total length of the run
             dt=300,                    # the timestep of the kernel
             output_file=output_file)

# 4. Exporting the simulation output to a netcdf file
output_file.export()
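# A note on the time arguments used above: `outputdt=3600`, `runtime=86400*6` and `dt=300` are all given in seconds. Parcels also accepts `datetime.timedelta` objects for these arguments, which can be easier to read. As a minimal sketch, the `execute` call above could equivalently be written as follows (assuming the `fieldset` and `pset` from the cell above):

# In[ ]:

from datetime import timedelta

output_file = pset.ParticleFile(name="GCParticles.nc", outputdt=timedelta(hours=1))  # output every hour
pset.execute(AdvectionRK4,
             runtime=timedelta(days=6),   # the total length of the run
             dt=timedelta(minutes=5),     # the timestep of the kernel
             output_file=output_file)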
# When you start making your Parcels simulations more complex, it is a good idea to keep these different steps separate, to keep a clear overview and to find bugs more easily.

# # 1. FieldSet

# Parcels provides a framework to simulate the movement of particles **within an existing flow field environment**. To start a Parcels simulation we must define this environment with the [**`FieldSet`** class](https://oceanparcels.org/gh-pages/html/#module-parcels.fieldset). The minimal requirement for this `FieldSet` is that it contains the `'U'` and `'V'` fields: the 2D hydrodynamic data that will move the particles.
#
# The general method to use is [**`FieldSet.from_netcdf`**](https://oceanparcels.org/gh-pages/html/#parcels.fieldset.FieldSet.from_netcdf), which requires `filenames`, `variables` and `dimensions`. Each of these is a dictionary, and `variables` requires at least a `U` and `V`, but any other variable can be added too (e.g. vertical velocity, temperature, mixed layer depth, etc.). Note also that `filenames` can contain wildcards. For example, the GlobCurrent data that can be downloaded using the `parcels_get_examples` script (see step 4 of the [installation guide](http://oceanparcels.org/#installing)) can be read with:

# In[2]:

fname = 'GlobCurrent_example_data/*.nc'
filenames = {'U': fname, 'V': fname}
variables = {'U': 'eastward_eulerian_current_velocity',
             'V': 'northward_eulerian_current_velocity'}
dimensions = {'U': {'lat': 'lat', 'lon': 'lon', 'time': 'time'},  # in the GlobCurrent data the dimensions are also called 'lon', 'lat' and 'time'
              'V': {'lat': 'lat', 'lon': 'lon', 'time': 'time'}}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
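# The `variables` and `dimensions` dictionaries above only list the velocities, but as mentioned, other variables can be added to the same `FieldSet` in exactly the same way. Below is a minimal sketch of how a temperature field could be included; the file pattern and the NetCDF variable name for `'T'` are hypothetical placeholders, since the GlobCurrent example data only contains the velocity fields.

# In[ ]:

tname = 'Temperature_example_data/*.nc'   # hypothetical file pattern
filenames = {'U': fname, 'V': fname, 'T': tname}
variables = {'U': 'eastward_eulerian_current_velocity',
             'V': 'northward_eulerian_current_velocity',
             'T': 'sea_surface_temperature'}            # hypothetical NetCDF variable name
dimensions = {'U': {'lat': 'lat', 'lon': 'lon', 'time': 'time'},
              'V': {'lat': 'lat', 'lon': 'lon', 'time': 'time'},
              'T': {'lat': 'lat', 'lon': 'lon', 'time': 'time'}}
# fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)  # same call as before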
# #### For more advanced tutorials on creating `FieldSets`:
#
# * [Implement **periodic boundaries**](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_periodic_boundaries.ipynb)
# * [How to **interpolate** field data for different fields](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_interpolation.ipynb)
# * [**Converting units** in the field data](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_unitconverters.ipynb)
# * [Working around incompatible **time coordinates**](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_timestamps.ipynb)
#
# #### If you are working with field data on different grids:
# * [Grid **indexing** on different grids](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/documentation_indexing.ipynb)
# * [Load field data from **curvilinear grids**](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_nemo_curvilinear.ipynb)
# * [Load field data from **3D C-grids**](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_nemo_3D.ipynb)
#
# #### If you want to combine different velocity fields:
# * [Add different velocity fields in a **SummedField**](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_SummedFields.ipynb)
# * [Nest velocity fields of different regions or resolutions in a **NestedField**](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_NestedFields.ipynb)

# # 2. ParticleSet

# Once you have set up the environment with the `FieldSet` object, you can start defining your particles in a [**`ParticleSet`** object](https://oceanparcels.org/gh-pages/html/#module-parcels.particleset). This object requires:
# 1. The [**`FieldSet`**](https://oceanparcels.org/gh-pages/html/#module-parcels.fieldset) on which the particles live.
# 2. The type of [**`Particle`**](https://oceanparcels.org/gh-pages/html/#module-parcels.particle), which contains the information each particle will store.
# 3. The initial conditions for each [**`Variable`**](https://oceanparcels.org/gh-pages/html/#parcels.particle.Variable) defined in the `Particle`, most notably the release locations in `lon` and `lat`.

# In[3]:

pset = ParticleSet(fieldset=fieldset,   # the fields on which the particles are advected
                   pclass=JITParticle,  # the type of particles (JITParticle or ScipyParticle)
                   lon=28,              # release longitude
                   lat=-33)             # release latitude

# The different [**`Particle`**](https://oceanparcels.org/gh-pages/html/#module-parcels.particle) types available are the [**`JITParticle`**](https://oceanparcels.org/gh-pages/html/#parcels.particle.JITParticle) and the [**`ScipyParticle`**](https://oceanparcels.org/gh-pages/html/#parcels.particle.ScipyParticle), but it is very easy to create your own particle class that includes other [**`Variables`**](https://oceanparcels.org/gh-pages/html/#parcels.particle.Variable):

# In[4]:

from parcels import Variable

class PressureParticle(JITParticle):  # define a new particle class
    p = Variable('p', initial=0)      # Variable 'p' with initial value 0
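# As a small sketch combining the two cells above: the custom `PressureParticle` class can be passed as `pclass`, and `lon` and `lat` can also be lists or arrays to release several particles at once. The coordinates below are just illustrative values near the single release point used earlier.

# In[ ]:

import numpy as np

release_lons = np.array([28.0, 28.2, 28.4])    # one longitude per particle
release_lats = np.array([-33.0, -33.2, -33.4]) # one latitude per particle

pset_pressure = ParticleSet(fieldset=fieldset,
                            pclass=PressureParticle,  # the custom particle class defined above
                            lon=release_lons,
                            lat=release_lats)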
# #### For more advanced tutorials on how to setup your `ParticleSet`:
# * [**Releasing particles** at different times](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_delaystart.ipynb)
# * [The difference between **JITParticles and ScipyParticles**](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_jit_vs_scipy.ipynb)
#
# For more information on how to implement **`Particle` types** with specific behavior, see the [section on writing your own kernels](#For-more-advanced-tutorials-on-writing-custom-kernels-that-work-on-custom-particles:).

# # 3. Kernel execution

# After defining the flow field environment with the `FieldSet` and the particle information with the `ParticleSet`, we can move on to actually running the Parcels simulation with [**`ParticleSet.execute()`**](https://oceanparcels.org/gh-pages/html/#parcels.particleset.ParticleSet.execute). Running a simulation in Parcels means executing kernels: little snippets of code that are run for each particle at each timestep. The most basic kernels are advection kernels, which calculate the movement of each particle based on the `FieldSet` in which the `ParticleSet` lives. A few different advection kernels are [included in Parcels](https://oceanparcels.org/gh-pages/html/#module-parcels.kernels.advection).
#
# If you want to store the particle data generated in the simulation, you usually first want to define the [**`ParticleFile`**](https://oceanparcels.org/gh-pages/html/#parcels.particlefile.ParticleFile) to which the output of the kernel execution will be written. Then, on the [`ParticleSet`](#2.-ParticleSet) you have defined, you can use the method [**`ParticleSet.execute()`**](https://oceanparcels.org/gh-pages/html/#parcels.particleset.ParticleSet.execute), which requires the following arguments:
# 1. The kernels to be executed.
# 2. The `runtime` defining how long the execution loop runs. Alternatively, you may define the `endtime` at which the execution loop stops.
# 3. The timestep `dt` at which to execute the kernels.
# 4. (Optional) The `ParticleFile` object to write the output to.

# In[5]:

output_file = pset.ParticleFile(name="GCParticles.nc", outputdt=3600)  # the file name and the time step of the outputs
pset.execute(AdvectionRK4,              # the kernel (which defines how particles move)
             runtime=86400*6,           # the total length of the run in seconds
             dt=300,                    # the timestep of the kernel
             output_file=output_file)

# One of the most powerful features of Parcels is the ability to write custom kernels (see e.g. [this example](http://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_Argofloats.ipynb), which adds the vertical movement of an Argo float). You probably want to define these kernels here: after defining your [`ParticleSet`](#2.-ParticleSet) and before calling `ParticleSet.execute()`. If your kernels become very large or complex, you might want to store them in another Python file and import them into your simulation script.

# In[6]:

def WestVel(particle, fieldset, time):
    if time > 86400 and time < 2*86400:
        uvel = -0.02
        particle.lon += uvel * particle.dt

WV_kernel = pset.Kernel(WestVel)

output_file = pset.ParticleFile(name="GC_WestVel.nc", outputdt=3600)

pset.execute(AdvectionRK4 + WV_kernel,  # simply add kernels using the + operator
             runtime=86400*6,           # the total length of the run in seconds
             dt=300,                    # the timestep of the kernel
             output_file=output_file)

# However, there are some key limitations to the kernels that everyone who wants to write their own should be aware of:
# * Every kernel must be a function with the following (and only those) arguments: `(particle, fieldset, time)`.
# * In order to run successfully in JIT mode, kernel definitions can only contain the following types of commands:
#     * Basic arithmetical operators (`+`, `-`, `*`, `/`, `**`) and assignments (`=`).
#     * Basic logical operators (`<`, `==`, `!=`, `>`, `&`, `|`). Note that you can use a statement like `particle.lon != particle.lon` to check if `particle.lon` is NaN (since `math.nan != math.nan`).
#     * `if` and `while` loops, as well as `break` statements. Note that `for`-loops are not supported in JIT mode.
#     * Interpolation of a `Field` from the `FieldSet` at a `[time, depth, lat, lon]` point, using square brackets notation. For example, to interpolate the zonal velocity (U) field at the particle location, use the following statement: `value = fieldset.U[time, particle.depth, particle.lat, particle.lon]` (see also the sketch below this list).
#     * Functions from the `math` standard library and from the custom `ParcelsRandom` library at [`parcels.rng`](https://oceanparcels.org/gh-pages/html/#module-parcels.rng).
#     * Simple `print` statements, such as:
#         * `print("Some print")`
#         * `print(particle.lon)`
#         * `print("particle id: %d" % particle.id)`
#         * `print("lon: %f, lat: %f" % (particle.lon, particle.lat))`
#
#       Note that these `print` statements are not shown in Jupyter notebooks in JIT mode, see [this long-standing Issue](https://github.com/OceanParcels/parcels/issues/369).
# * Local variables can be used in kernels, and these variables will be accessible in all concatenated kernels. Note that these local variables are not shared between particles, and also not between time steps.
# * Note that one has to be careful with writing kernels for vector fields on curvilinear grids. While Parcels automatically rotates the U and V fields when necessary, this is not the case for other vector fields, such as wind data. In that case, a custom rotation function will have to be written.
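# As a minimal sketch of the square-bracket interpolation mentioned in the list above, the kernel below samples the zonal velocity at each particle location and stores it in a custom `Variable`. The class and kernel names are just illustrative; the field sampling tutorial linked below covers this pattern in more detail.

# In[ ]:

class SampleParticle(JITParticle):      # particle class with an extra Variable to store the sampled value
    u = Variable('u', initial=0.)

def SampleU(particle, fieldset, time):  # kernel that interpolates fieldset.U at the particle location
    particle.u = fieldset.U[time, particle.depth, particle.lat, particle.lon]

sample_pset = ParticleSet(fieldset=fieldset, pclass=SampleParticle, lon=28, lat=-33)
sample_pset.execute(AdvectionRK4 + sample_pset.Kernel(SampleU),  # advect and sample in one call
                    runtime=86400, dt=300)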
# #### For more advanced tutorials on writing custom kernels that work on custom particles:
# * [Sample other fields like temperature](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_sampling.ipynb).
# * [Mimic the behavior of ARGO floats](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_Argofloats.ipynb).
# * [Adding diffusion to approximate subgrid-scale processes and unresolved physics](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_diffusion.ipynb).
# * [Converting between units in m/s and degree/s](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_unitconverters.ipynb).

# # 4. Output

# While executing the `ParticleSet`, Parcels stores the data in **npy** files in an output folder. To gather all the data and store it in a NetCDF file, you can use [**`ParticleFile.export()`**](https://oceanparcels.org/gh-pages/html/#parcels.particlefile.ParticleFile.export) if you want to keep the folder with npy files, or [**`ParticleFile.close()`**](https://oceanparcels.org/gh-pages/html/#parcels.particlefile.ParticleFile.close) if you only want to keep the NetCDF file:

# In[7]:

output_file.export()
output_file.close()

# After running your simulation, you probably want to analyse the output. Although there is some simple plotting functionality built into Parcels, we **recommend you write your own code** to analyse your specific output; this analysis can usually be kept separate from the simulation itself.

# #### For more tutorials on the parcels output:
# * [How the output is structured and how to start your own analysis](https://nbviewer.jupyter.org/github/OceanParcels/parcels/blob/master/parcels/examples/tutorial_output.ipynb)
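# As a minimal starting point for such an analysis, the exported NetCDF file can be opened with, for example, xarray. This sketch assumes xarray is installed and that the output follows the standard Parcels convention of a `(traj, obs)` layout with `lon`, `lat` and `time` variables; check the printed dataset (and the output tutorial above) for the exact contents of your own file.

# In[ ]:

import xarray as xr

ds = xr.open_dataset("GCParticles.nc")  # the file exported above
print(ds)                               # inspect dimensions and variables of the trajectory data
lons = ds['lon'].values                 # particle longitudes, one row per particle, one column per output time
lats = ds['lat'].values                 # particle latitudes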