The FieldSet.advancetime method

In many real-world applications, particles are run for long times, using many snapshots of the hydrographic data. If these files are large, having to read them all into memory can take a significant amount of resources. The FieldSet.advancetime method allows a simulation where only three snapshots of the hydrodynamic fields are in memory at any time, and these snapshots can be cycled through. This brief tutorial shows how to use the FieldSet.advancetime method to read in only a subset of all the available time slices at once.
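Conceptually, this works like a rolling window over the time dimension: three snapshots are held in memory, and loading a new one evicts the oldest. A minimal sketch of that idea in plain Python (the deque here is only an illustration of the concept, not Parcels' actual internals):

```python
from collections import deque

# A rolling window that holds at most three snapshots at a time;
# appending a fourth automatically evicts the oldest one.
window = deque(maxlen=3)
for snapshot in ["t0", "t1", "t2", "t3", "t4"]:
    window.append(snapshot)

# After the loop, only the three most recent snapshots remain.
print(list(window))  # ['t2', 't3', 't4']
```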
We start by importing the relevant modules:
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4
from datetime import timedelta as delta
import numpy as np
from glob import glob
from os import path
Now define a function that loads the GlobCurrent fields from the GlobCurrent_example_data directory:
def loadglobcurrentfile(filenames):
    filenames = {'U': filenames,
                 'V': filenames}
    variables = {'U': 'eastward_eulerian_current_velocity',
                 'V': 'northward_eulerian_current_velocity'}
    dimensions = {'lat': 'lat',
                  'lon': 'lon',
                  'time': 'time'}
    return FieldSet.from_netcdf(filenames, variables, dimensions)
We can create a list of all the files available in the GlobCurrent_example_data directory using:
files = sorted(glob(str(path.join('GlobCurrent_example_data','20*.nc'))))
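Sorting matters here: the GlobCurrent filenames begin with a timestamp, so lexicographic sorting is also chronological sorting, whereas a bare glob() may return files in arbitrary order. Illustrated with hypothetical filenames:

```python
# Hypothetical GlobCurrent-style filenames, deliberately out of order.
names = ['20020105000000.nc', '20020101000000.nc', '20020103000000.nc']

# Because the date is the leading part of each name, sorted() puts
# the files in chronological order.
print(sorted(names))
# ['20020101000000.nc', '20020103000000.nc', '20020105000000.nc']
```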
Now we read the first three files into the fieldset (using files[0:3]):
fieldset = loadglobcurrentfile(files[0:3])
WARNING: Casting lon data to np.float32 WARNING: Casting lat data to np.float32 WARNING: Casting depth data to np.float32
Now create a ParticleSet object:
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=[20], lat=[-35])
Now we can advect the particles for ten days. Normally, since we only have three days of data in memory, we could not advect that long. But in this case, we can use a custom for-loop to repeatedly update the fieldset with the latest snapshot.
for i in range(10):
    pset.execute(AdvectionRK4,           # First advect the particles
                 runtime=delta(days=1),  # Runtime needs to equal the time between snapshots
                 dt=delta(minutes=5))

    # Then update the fieldset with the next snapshot using advancetime
    fieldset.advancetime(loadglobcurrentfile(files[i+3]))
INFO: Compiled JITParticleAdvectionRK4 ==> /var/folders/r2/8593q8z93kd7t4j9kbb_f7p00000gn/T/parcels-501/27805ff3aa34ba12ddb373f3f2cb1d1b.so
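The indexing in the loop can be checked with a small sketch: at iteration i the fieldset holds snapshots i, i+1 and i+2, and loading files[i+3] advances the window by one. This is plain index arithmetic, assuming 13 daily files as in the example above:

```python
# Track which file indices are in memory during each loop iteration.
window = [0, 1, 2]       # files[0:3] are loaded initially
history = []
for i in range(10):
    history.append(tuple(window))
    # advancetime drops the oldest slice and appends files[i+3]
    window = window[1:] + [i + 3]

print(history[0])   # (0, 1, 2)
print(history[-1])  # (9, 10, 11)
print(window)       # [10, 11, 12] -> the last file loaded is files[12]
```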
With this relatively simple setup, Parcels can be run on hydrodynamic datasets that are potentially hundreds of gigabytes in size, as long as any single snapshot is not too big to hold in memory.