# In this notebook you will learn basic information about redispatching capabilities offered by grid2op.
Objectives
So far, we have presented one type of action available in grid2op: a discrete action space. Redispatching is a kind of continuous action that will be described here.
This notebook will:
import os
import sys
import grid2op
from grid2op.Agent import DoNothingAgent, Agent
from tqdm.notebook import tqdm
import numpy as np
max_iter = 100  # to make computation much faster we will only consider 100 time steps instead of 287
import pdb
res = None
try:
    from jyquickhelper import add_notebook_menu
    res = add_notebook_menu()
except ModuleNotFoundError:
    print("Impossible to automatically add a menu / table of content to this notebook.\nYou can download \"jyquickhelper\" package with: \n\"pip install jyquickhelper\"")
res
TODO
env_wrong = grid2op.make("case5_example")
print("Is this environment suitable for redispatching: {}".format(env_wrong.redispatching_unit_commitment_availble))
Is this environment suitable for redispatching: False
As we can see in the cell above, this simple example environment is not suitable for redispatching. By default, some environments don't specify the cost of generators, their maximum and minimum production values, etc. In that case it is not possible to use this grid2op feature.
To know more about what is needed for using redispatching, it is advised to look at the online help https://grid2op.readthedocs.io/en/latest/space.html#grid2op.Space.GridObjects.redispatching_unit_commitment_availble for the most recent documentation. When this notebook was created, what was needed was:
We made available a dedicated environment, based on the IEEE case14 powergrid, that has all of these features. It is advised to use this small environment for testing and getting familiar with this feature.
This environment has 5 generators, like the original case14 system: one solar and one wind generator (that cannot be dispatched), one nuclear power plant (dispatchable) and 2 thermal generators (also dispatchable). This is thus a continuous control problem with 3 degrees of freedom.
env = grid2op.make("case14_redisp")
print("Is this environment suitable for redispatching: {}".format(env.redispatching_unit_commitment_availble))
/home/donnotben/Documents/Grid2Op_dev/getting_started/grid2op/MakeEnv.py:652: UserWarning: Your are using only 2 chronics for this environment. More can be download by running, from a command line: python -m grid2op.download --name "case14_redisp" --path_save PATH\WHERE\YOU\WANT\TO\DOWNLOAD\DATA
Is this environment suitable for redispatching: True
We can notice 2 things:
In the L2RPN 2019 challenge, we rewarded participants based on the usage of the powerlines. In future challenges, or for other usages of this platform where redispatching plays a role, it is better to consider the economic cost of the system. However, a cost is usually minimized, while a reward is maximized. To take this into account, a simplistic reward named "EconomicReward" has been created. It has the following properties:
Note that this reward doesn't take into account the cost of performing a redispatching action. This reward can be used to build what is called an "economic dispatch", a problem especially interesting for electricity producers but of lower interest for Transmission System Operators (as opposed to the topology).
Compared to standard "economic dispatch" problems, storages are not implemented for now (coming soon) and we don't fully take into account startup cost, shutdown cost, minimum downtime and minimum uptime (even though all of these features are implemented). Also, note that redispatching is implemented as a "delta": you first need an economic dispatch, and then you reason in terms of variations compared to it. This is the usage explained in this notebook. For real unit commitment / economic dispatch problems, the keywords "injections" / "prod_p" in the action would probably be better suited.
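To make the cost-versus-reward idea above concrete, here is a minimal sketch of turning a production cost (to minimize) into a reward (to maximize). This is NOT grid2op's actual "EconomicReward" formula: the function name, the `worst_cost` offset and all numbers are illustrative assumptions.

```python
import numpy as np

def economic_reward(prod_p, cost_per_mw, worst_cost):
    """Toy reward: the cheaper the dispatch, the higher the reward.
    Hypothetical helper, not grid2op's EconomicReward implementation."""
    cost = float(np.sum(prod_p * cost_per_mw))  # total production cost
    return worst_cost - cost                    # higher reward when cost is lower

# two dispatches serving the same 100 MW load, with an illustrative 4000 offset
cheap = economic_reward(np.array([80., 20.]), np.array([10., 40.]), worst_cost=4000.)
costly = economic_reward(np.array([20., 80.]), np.array([10., 40.]), worst_cost=4000.)
print(cheap > costly)  # the cheaper dispatch gets the larger reward
```

Any monotonically decreasing mapping from cost to reward would serve the same purpose; the offset is only there to keep the reward positive.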
Unlike topological actions, which are always feasible (an assumption made in this package), redispatching actions are limited by physical constraints on the generators. For example:
Having said this, a lot of things can happen that make redispatching a bit less trivial than topology:
For the sake of simplicity for the participants, there are some "automatons" that automatically transform a proposed redispatching action into a valid one. These automatons basically ensure that the two above-mentioned conditions hold. This explains the difference between the "target_dispatch", which is the setpoint entered by the agent, and the "actual_dispatch", which is what has been implemented on the powergrid after these automatons run.
env.gen_redispatchable
array([ True, True, False, False, True])
The above vector says which generator is dispatchable and which is not. Any attempt to dispatch a generator that is not dispatchable leads to an ambiguous action.
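The dispatchability check can be sketched as follows. `check_redispatch_target` is a hypothetical helper, not a grid2op function; it simply reproduces the kind of check grid2op performs when it flags such an action as ambiguous.

```python
import numpy as np

# mirrors env.gen_redispatchable for the case14_redisp environment shown above
gen_redispatchable = np.array([True, True, False, False, True])

def check_redispatch_target(gen_id, amount, redispatchable):
    """Hypothetical helper: reject a dispatch on a non-dispatchable generator."""
    if not redispatchable[gen_id]:
        raise ValueError(
            "Trying to apply a redispatching action on a non redispatchable generator"
        )
    return gen_id, amount

try:
    check_redispatch_target(2, +10.0, gen_redispatchable)  # gen 2 is solar/wind
except ValueError as exc:
    print(exc)
```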
act = env.action_space({"redispatch": [(2,+10)]})
act.is_ambiguous()
(True, Grid2OpException AmbiguousAction InvalidRedispatching InvalidRedispatching('Trying to apply a redispatching action on a non redispatchable generator',))
As we can see, this action is ambiguous, because we are "Trying to apply a redispatching action on a non redispatchable generator".
Generators also have physical constraints: you cannot ask them to change their active production value too fast, as this would damage the generator, and breaking a nuclear plant is often a terrible idea. In grid2op, trying to go beyond this limit results in an ambiguous action.
This limit is called the "ramp" and it is available through the "max_ramp_up" attribute. In the next cell, you can see that the ramp is of $5$ for the first generator and of $10$ for the second and last generators. For the other 2 it is irrelevant because they are not dispatchable.
env.gen_max_ramp_up
array([ 5., 10., 0., 0., 10.])
Any attempt to go beyond this value will result in an ambiguous action (remember that indices in Python start at 0).
act = env.action_space({"redispatch": [(0,+10)]})
act.is_ambiguous()
(True, Grid2OpException AmbiguousAction InvalidRedispatching InvalidRedispatching('Some redispatching amount are above the maximum ramp up',))
In the previous action, we asked generator 0 to produce 10MW more than its setpoint. However, its maximum ramp up is only 5MW. This action is therefore ambiguous.
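A minimal sketch of this ramp check (assuming, for illustration only, that up and down ramps are symmetric; `is_within_ramp` is a hypothetical helper, not part of grid2op):

```python
import numpy as np

# mirrors env.gen_max_ramp_up for the case14_redisp environment shown above
gen_max_ramp_up = np.array([5., 10., 0., 0., 10.])

def is_within_ramp(redispatch, max_ramp_up):
    """Hypothetical check: a request above the ramp limit makes the action
    ambiguous (grid2op rejects it rather than clipping it)."""
    return bool(np.all(np.abs(redispatch) <= max_ramp_up + 1e-8))

request = np.array([10., 0., 0., 0., 0.])        # +10 MW on generator 0
print(is_within_ramp(request, gen_max_ramp_up))  # False: the ramp of gen 0 is only 5 MW
```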
And of course, there are some perfectly valid redispatching actions:
act = env.action_space({"redispatch": [(1,+10)]})
act.is_ambiguous()
(False, None)
As said in the preamble of this section, the target dispatch (what we want to achieve) is not always equal to the dispatch actually implemented. To make transparent what is being done, both of these values are present in the observation, as shown in the cell below.
# perform a valid redispatching action
env.set_id(0)  # make sure to use the same environment input data
obs_init = env.reset()  # reset the environment
act = env.action_space({"redispatch": [(0, +5)]})
obs, reward, done, info = env.step(act)
obs.actual_dispatch
array([ 5. , -2.5, 0. , 0. , -2.5])
The target dispatch is exactly what we wanted, i.e. increasing the production of generator 0 by +5MW. To compensate for this increase, generators 1 and 4 have both seen their setpoint decrease by 2.5MW.
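The zero-sum compensation observed above can be sketched as follows. This is a toy version with a hypothetical `compensate` helper: grid2op's real automaton also honours ramps and pmin/pmax limits, which this sketch ignores.

```python
import numpy as np

gen_redispatchable = np.array([True, True, False, False, True])

def compensate(target, redispatchable):
    """Sketch of the zero-sum property: the redispatch asked on one generator
    is spread as an equal, opposite change over the OTHER dispatchable
    generators, so that total production is unchanged."""
    dispatch = target.astype(float).copy()
    others = redispatchable & (target == 0.)         # dispatchable, untouched gens
    dispatch[others] = -target.sum() / others.sum()  # share the compensation
    return dispatch

result = compensate(np.array([5., 0., 0., 0., 0.]), gen_redispatchable)
print(result)  # +5 on gen 0, -2.5 on each of gens 1 and 4, sum is zero
```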
donothing = env.action_space()
obs, reward, done, info = env.step(donothing)
print(obs.actual_dispatch)
obs, reward, done, info = env.step(donothing)
print(obs.actual_dispatch)
[ 5.  -2.5  0.   0.  -2.5]
[ 5.  -2.5  0.   0.  -2.5]
In the above cell, we didn't perform any redispatching action; we just did nothing. This example illustrates that, until the original redispatching action is removed (i.e. until the opposite command is sent), grid2op will continue to apply the previous redispatching configuration over time.
Here, the original redispatching action was to increase generator 0 by +5MW; removing it means decreasing it by 5MW.
act = env.action_space({"redispatch": [(0,-5)]})
# act = env.action_space({"redispatch": [(0,0)]})
obs, reward, done, info = env.step(act)
obs.actual_dispatch
array([ 1.30935415, -0.65710904, 0. , 0. , -0.65224512])
As we can see in the cell above, there is still a residual on the dispatch. This is because of the physical ramp limit of generator 0. It was at +5MW above its normal setpoint (from the first redispatching action). We asked it to return to its setpoint value (the -5MW action above), but at the same time step the environment also lowered the setpoint of this generator by about 1.3MW. The total ramp down for this step would have been $5 + 1.3 = 6.3$ MW, above the maximum ramp down, so grid2op capped the redispatch applied at this time step to the maximum ramp down.
That is why we can see a small part of the dispatch left. If we wait another time step and do nothing, the generator will likely be back in order.
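The residual can be checked with back-of-the-envelope arithmetic. The -1.3MW environment variation is read off the observed output above, and the capping logic is a simplification of what grid2op actually does:

```python
max_ramp_down = 5.0    # ramp limit of generator 0 (assumed equal to max_ramp_up here)
prev_dispatch = 5.0    # the +5 MW redispatch still in effect
cancel_request = -5.0  # we ask to remove it entirely
env_variation = -1.3   # the environment lowers the setpoint at the same step (approx.)

total_change = cancel_request + env_variation  # -6.3 MW asked overall
applied = max(total_change, -max_ramp_down)    # capped at -5 MW by the ramp
redispatch_applied = applied - env_variation   # only -3.7 MW of our cancel got through
remaining = prev_dispatch + redispatch_applied # about 1.3 MW of dispatch left
print(remaining)  # close to the ~1.31 MW residual observed above
```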
obs, reward, done, info = env.step(donothing)
print(obs.actual_dispatch)
obs, reward, done, info = env.step(donothing)
print(obs.actual_dispatch)
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
Now everything is set up as it should be: the system is back to its original state. Let's now see what happens if we ask to increase the value of this generator 0 again.
act = env.action_space({"redispatch": [(0,+5)]})
# act = env.action_space({"redispatch": [(0,0)]})
obs, reward, done, info = env.step(act)
obs.actual_dispatch
array([ 4.69015403, -2.34502005, 0. , 0. , -2.34513398])
This time we directly see that the full (valid) redispatching action is not applied completely. This is due to the same phenomenon as previously: the environment increased the setpoint of this generator, and at the same time we also asked to increase it by its maximum ramp value. So our action was "capped" and only about 4.7MW (out of 5) were actually added. At the next time step, the action would be fully implemented.
To conclude on redispatching, we saw that there is a difference between the value we ask for and the value implemented by the environment. This is mainly due to:
Redispatching actions also last in time: an action must be explicitly canceled to be reset to 0. Because of the above-mentioned limitations, this cancelation can take a few time steps to be fully effective.
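The persistence described above can be modeled with a toy accumulator. This is only an illustration: grid2op tracks the real state in `obs.target_dispatch`, and the `DispatchState` class below is entirely hypothetical (it also ignores ramps, so cancelation is instantaneous here).

```python
import numpy as np

class DispatchState:
    """Toy model of persistence: a redispatch stays in effect until the
    opposite action cancels it. Hypothetical, not a grid2op class."""
    def __init__(self, n_gen):
        self.target = np.zeros(n_gen)

    def step(self, redispatch=None):
        if redispatch is not None:
            gen_id, amount = redispatch
            self.target[gen_id] += amount  # actions accumulate...
        return self.target.copy()          # ...and persist on "do nothing"

state = DispatchState(5)
print(state.step((0, +5.0)))  # the +5 MW redispatch is applied
print(state.step())           # do nothing: the +5 MW is still in effect
print(state.step((0, -5.0)))  # the opposite action cancels it back to 0
```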
agent = DoNothingAgent(env.action_space)
done = False
reward = env.reward_range[0]
env.set_id(0) # make sure to evaluate the models on the same experiments
obs = env.reset()
cum_reward = 0
nrow = env.chronics_handler.max_timestep() if max_iter <= 0 else max_iter
gen_p = np.zeros((nrow, env.n_gen))
gen_p_setpoint = np.zeros((nrow, env.n_gen))
load_p = np.zeros((nrow, env.n_load))
rho = np.zeros((nrow, env.n_line))
i = 0
with tqdm(total=max_iter, desc="step") as pbar:
    while not done:
        act = agent.act(obs, reward, done)
        obs, reward, done, info = env.step(act)
        data_generator = env.chronics_handler.real_data.data
        gen_p_setpoint[i, :] = data_generator.prod_p[data_generator.current_index, :]
        gen_p[i, :] = obs.prod_p
        load_p[i, :] = obs.load_p
        rho[i, :] = obs.rho
        cum_reward += reward
        i += 1
        pbar.update(1)
        if i >= max_iter:
            break
print("The cumulative reward with this agent is {:.0f}".format(cum_reward))
The cumulative reward with this agent is 121369
Let's do the same, but forcing as much redispatching as possible on the cheapest generator.
class GreedyEconomic(Agent):
    def __init__(self, action_space):
        Agent.__init__(self, action_space)
        self.do_nothing = action_space()

    def act(self, obs, reward, done):
        act = self.do_nothing
        if obs.prod_p[0] < obs.gen_pmax[0] - 1 and \
           obs.target_dispatch[0] < (obs.gen_pmax[0] - obs.gen_max_ramp_up[0]) - 1 and \
           obs.prod_p[0] > 0.:
            # the cheapest generator is significantly below its maximum
            if obs.target_dispatch[0] < obs.gen_pmax[0]:
                # in theory we can still ask for more
                act = self.action_space({"redispatch": [(0, obs.gen_max_ramp_up[0])]})
        return act
agent = GreedyEconomic(env.action_space)
done = False
reward = env.reward_range[0]
env.set_id(0) # reset the env to the same id
obs = env.reset()
cum_reward = 0
nrow = env.chronics_handler.max_timestep() if max_iter <= 0 else max_iter
gen_p = np.zeros((nrow, env.n_gen))
gen_p_setpoint = np.zeros((nrow, env.n_gen))
load_p = np.zeros((nrow, env.n_load))
rho = np.zeros((nrow, env.n_line))
i = 0
with tqdm(total=max_iter, desc="step") as pbar:
    while not done:
        act = agent.act(obs, reward, done)
        obs, reward, done, info = env.step(act)
        if np.abs(np.sum(obs.actual_dispatch)) > 1e-2:
            # sanity check: the actual dispatch should always sum to (almost) zero
            pdb.set_trace()
        data_generator = env.chronics_handler.real_data.data
        gen_p_setpoint[i, :] = data_generator.prod_p[data_generator.current_index, :]
        gen_p[i, :] = obs.prod_p
        load_p[i, :] = obs.load_p
        rho[i, :] = obs.rho
        cum_reward += reward
        i += 1
        pbar.update(1)
        if i >= max_iter:
            break
print("The cumulative reward with this agent is {:.0f}".format(cum_reward))
The cumulative reward with this agent is 97832