This notebook shows how to build an Agent and assess its performance.

It is recommended to have a look at the 0_basic_functionalities notebook before getting into this one.

Objective

This notebook covers the basics of how to "code" an Agent that takes actions on the powergrid. Examples of "expert agents" that take actions based on fixed rules will be given. More generic types of Agents, relying for example on machine learning / deep learning, will be covered in the notebook 3_TrainingAnAgent.

This notebook will also describe the Observation class, which is what an Agent uses to decide on its actions.

In [ ]:
import os
import sys
import grid2op
In [ ]:
res = None
try:
    from jyquickhelper import add_notebook_menu
    res = add_notebook_menu()
except ModuleNotFoundError:
    print("Impossible to automatically add a menu / table of content to this notebook.\nYou can download \"jyquickhelper\" package with: \n\"pip install jyquickhelper\"")
res

I) Description of the observations

In this paragraph we will cover the Observation class. Only the basic concepts are detailed in this notebook; for more information, we recommend having a look at the official documentation or at the source code of the Observation module.

I.A) Getting an observation

An observation is returned each time env.step() is called. The next cell is dedicated to creating an environment and getting an observation instance. We use the default rte_case14_realistic environment from the Grid2Op framework.

In [ ]:
env = grid2op.make(test=True)

To perform a step, as stated in the short description above, we need an action. More information about actions is given in the 2_ActionRepresentation notebook. Here we use a "do nothing" action (an empty action). obs is the resulting observation of the environment.

In [ ]:
do_nothing_act = env.action_space({})
obs, reward, done, info = env.step(do_nothing_act)

I.B) Information contained in an Observation

In this notebook we will only detail the CompleteObservation. Grid2Op allows modeling different kinds of observations: for example, some observations could have incomplete or noisy data. CompleteObservation gives the full state of the powergrid, without any noise, and is the default type of observation used.

a) Some of its attributes

An observation has calendar data (e.g. the time stamp of the observation):

In [ ]:
obs.year, obs.month, obs.day, obs.hour_of_day, obs.minute_of_hour, obs.day_of_week

It has some powergrid generic information:

In [ ]:
print("Number of generators of the powergrid: {}".format(obs.n_gen))
print("Number of loads of the powergrid: {}".format(obs.n_load))
print("Number of powerline of the powergrid: {}".format(obs.n_line))
print("Number of elements connected to each substations in the powergrid: {}".format(obs.sub_info))
print("Total number of elements: {}".format(obs.dim_topo))

It has some information about the generators (each generator can be viewed as a point in a 3-dimensional space)

In [ ]:
print("Generators active production: {}".format(obs.prod_p))
print("Generators reactive production: {}".format(obs.prod_q))
print("Generators voltage setpoint : {}".format(obs.prod_v))

It has some information about the loads (each load is a point in a 3-dimensional space, too)

In [ ]:
print("Loads active consumption: {}".format(obs.load_p))
print("Loads reactive consumption: {}".format(obs.prod_q))
print("Loads voltage (voltage magnitude of the bus to which it is connected) : {}".format(obs.load_v))

In this setting, a powerline can be viewed as a point in an 8-dimensional space:

  • active flow
  • reactive flow
  • voltage magnitude
  • current flow

for both its origin and its extremity.

For example, suppose the powerline line1 connects two nodes A and B. There are two separate values for the active flow on line1: the active flow from A to B (origin) and the active flow from B to A (extremity).

These powerline features can be accessed with:

In [ ]:
print("Origin active flow: {}".format(obs.p_or))
print("Origin reactive flow: {}".format(obs.q_or))
print("Origin current flow: {}".format(obs.a_or))
print("Origin voltage (voltage magnitude to the bus to which the origin end is connected): {}".format(obs.v_or))
print("Extremity active flow: {}".format(obs.p_ex))
print("Extremity reactive flow: {}".format(obs.q_ex))
print("Extremity current flow: {}".format(obs.a_ex))
print("Extremity voltage (voltage magnitude to the bus to which the origin end is connected): {}".format(obs.v_ex))

Another powerline feature is the $\rho$ ratio, i.e. for each powerline, the ratio between the current flow in the powerline and its thermal limit. It can be accessed with:

In [ ]:
obs.rho
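
Under the hood, $\rho$ should simply be the current flow at the origin divided by the line's thermal limit, which can be retrieved with env.get_thermal_limit(). The next cell checks this (a minimal sketch, assuming the env and obs objects created above):

In [ ]:
import numpy as np
# the largest deviation between rho and a_or / thermal_limit should be ~0
thermal_limits = env.get_thermal_limit()
print(np.max(np.abs(obs.rho - obs.a_or / thermal_limits)))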

The observation (obs) also stores information about the topology and the state of the powerlines.

In [ ]:
obs.timestep_overflow # the number of time steps each powerline has been in overflow (one component per powerline)
obs.line_status # the status of each powerline: True if connected, False if disconnected
obs.topo_vect  # for each element (generator, load, each end of a powerline), the id of the bus
# to which it is connected: 1 = bus 1, 2 = bus 2

In grid2op, all objects (end of a powerline, load or generator) can be either disconnected, connected to the first bus of its substation, or connected to the second bus of its substation.

topo_vect is the vector containing this connection information; it is part of the observation. If an object is disconnected, its corresponding component in topo_vect will be -1. If it is connected to the first bus of its substation, its component will be 1, and if it is connected to the second bus, its component will be 2.

More information about this topology vector is given in the official documentation.

More information about this topology vector will also be given in the notebook dedicated to visualization.
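
As a quick illustration, the next cell counts how many elements are connected to each bus (a minimal sketch; on a fresh episode of the default environment, every element typically starts out connected to bus 1):

In [ ]:
import numpy as np
# count the number of elements per bus id (-1 would mean "disconnected")
bus_ids, counts = np.unique(obs.topo_vect, return_counts=True)
for bus_id, count in zip(bus_ids, counts):
    print("bus id {}: {} elements".format(bus_id, count))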

b) Some of its methods

The observation can be converted to / from a flat numpy array. This is useful for interacting with machine learning libraries or for storing observations, though it makes them less readable for a human. The conversion proceeds by stacking all the features mentioned above into a single numpy.float64 vector.

In [ ]:
vector_representation_of_observation = obs.to_vect()
vector_representation_of_observation
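
The size of this vector is fixed for a given environment, and it can also be obtained from the observation space. A quick check (assuming the env object from above):

In [ ]:
# the flat vector has a fixed size, also available from the observation space
print(vector_representation_of_observation.shape)
print(env.observation_space.size())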

An observation can be copied, of course:

In [ ]:
obs2 = obs.copy()

Or reset:

In [ ]:
obs2.reset()
print(obs2.prod_p)

Or loaded from a vector:

In [ ]:
obs2.from_vect(vector_representation_of_observation)
obs2.prod_p

It is also possible to assess whether two observations are equal or not:

In [ ]:
obs == obs2

For this type of observation, it is also possible to retrieve the topology as a matrix. The topology matrix can be obtained in two different formats.

Format 1: the connectivity matrix, which has as many rows / columns as the number of elements in the powergrid (remember that an element is either an end of a powerline, a generator or a load) and which tells whether two elements are connected to one another:

$$ \left\{ \begin{aligned} \text{conn mat}[i,j] = 0 & ~\text{if elements i and j are NOT connected to the same bus}\\ \text{conn mat}[i,j] = 1 & ~\text{if elements i and j are connected to the same bus, or i and j are both ends of the same powerline}\\ \end{aligned} \right. $$
In [ ]:
obs.connectivity_matrix()

This representation has the advantage to always have the same dimension, regardless of the topology of the powergrid.
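
We can verify this claim directly: the matrix is always square with obs.dim_topo rows, whatever the current topology (a quick check reusing the objects above):

In [ ]:
# the connectivity matrix is always dim_topo x dim_topo
print(obs.connectivity_matrix().shape)
print(obs.dim_topo)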

Format 2: the bus connectivity matrix has as many rows / columns as the number of active buses of the powergrid. It should be understood as follows:

$$ \left\{ \begin{aligned} \text{bus conn mat}[i,j] = 0 & ~\text{if no powerline connects bus i to bus j}\\ \text{bus conn mat}[i,j] = 1 & ~\text{if at least one powerline connects bus i to bus j (or i == j)}\\ \end{aligned} \right. $$
In [ ]:
obs.bus_connectivity_matrix()

c) Simulate

As opposed to most RL problems, this framework offers the possibility to "simulate" the impact of a possible action on the power grid. This helps compute roll-outs in the RL setting, and is close to "model-based" reinforcement learning approaches (except that nothing more has to be learned).

This "simulate" method uses the available forecast data (forecasts are made available by the same way we loaded the data here, with the class GridStateFromFileWithForecasts. For this class, only forecasts for 1 time step are provided, but this might be adapted in the future).

Note that this simulate function can use a different simulator than the one used by the Environment. For more information, we encourage you to read the official documentation.

This function will:

  1. apply the forecasted injection on the powergrid
  2. run a powerflow with the dedicated simulation powerflow solver
  3. return:
    1. the anticipated observation (after the action has been taken)
    2. the anticipated reward (of this simulated action)
    3. whether or not there has been an error
    4. some additional information

From a user's point of view, this is the main difference with the previous pypownet framework. In pypownet, this "simulation" was performed directly by the environment, giving the agent direct access to the environment's future data. This could break the RL framework, since an agent is only supposed to know the current state of the environment (it was not an issue in the first edition of the Learning to Run a Power Network challenge, as the environment was fully observable). In grid2op, the simulation is performed from the current state of the environment and is imperfect: it does not have access to future information.

Here is an example of some features of the observation, in the current state and in the simulated next state:

In [ ]:
do_nothing_act = env.action_space({})
obs_sim, reward_sim, is_done_sim, info_sim = obs.simulate(do_nothing_act)
In [ ]:
obs.prod_p
In [ ]:
obs_sim.prod_p
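
simulate can also be used to compare several candidate actions before committing to one, which is exactly what the agent built in the next section does. Here is a minimal sketch (the action below, which disconnects the most loaded powerline, is chosen only for illustration):

In [ ]:
import numpy as np
# compare the simulated reward of doing nothing vs disconnecting the most loaded line
disconnect_act = env.action_space({"set_line_status": [(int(np.argmax(obs.rho)), -1)]})
_, reward_dn, *_ = obs.simulate(env.action_space({}))
_, reward_disc, *_ = obs.simulate(disconnect_act)
print("simulated rewards - do nothing: {:.3f}, disconnect: {:.3f}".format(reward_dn, reward_disc))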

II) Taking actions based on the observation

In this section we will make our first Agent that will act based on these observations.

All Agents must derive from the grid2op.Agent.BaseAgent class. The main method to implement for an Agent is act (more information can be found in the official documentation).

Basically, the Agent receives a reward and an observation, and chooses a new action. Several Agents are pre-defined in the grid2op package. We won't talk about them here (for more information, see the documentation or the Agent module); instead, we will build a custom Agent.

This Agent will select among:

  • doing nothing
  • disconnecting the powerline with the highest relative flow
  • reconnecting a disconnected powerline
  • disconnecting the powerline with the lowest relative flow

by using simulate on the corresponding actions, and choosing the one with the highest simulated reward.

Note that this kind of Agent is not particularly smart and is given only as an example.

More information about the creation / manipulation of Actions will be given in the notebook 2_Action_GridManipulation.

In [ ]:
from grid2op.Agent import BaseAgent
import numpy as np


class MyAgent(BaseAgent):
    def __init__(self, action_space):
        # required: initialize the BaseAgent with the environment's action space
        BaseAgent.__init__(self, action_space)
        self.do_nothing = self.action_space({})
        self.print_next = False

    def act(self, observation, reward, done=False):
        # candidate 1: disconnect the powerline with the highest relative flow
        i_max = int(np.argmax(observation.rho))
        new_status_max = np.zeros(observation.rho.shape, dtype=int)
        new_status_max[i_max] = -1
        act_max = self.action_space({"set_line_status": new_status_max})

        # candidate 2: act on the powerline with the lowest relative flow
        i_min = int(np.argmin(observation.rho))
        new_status_min = np.zeros(observation.rho.shape, dtype=int)
        if observation.rho[i_min] > 0:
            # all powerlines are connected: try to disconnect this one
            new_status_min[i_min] = -1
            act_min = self.action_space({"set_line_status": new_status_min})
        else:
            # at least one powerline is disconnected: try to reconnect it,
            # connecting both of its ends to bus 1
            new_status_min[i_min] = 1
            act_min = self.action_space({"set_line_status": new_status_min,
                                         "set_bus": {"lines_or_id": [(i_min, 1)],
                                                     "lines_ex_id": [(i_min, 1)]}})

        # simulate each candidate and keep the one with the highest predicted reward
        _, reward_sim_dn, *_ = observation.simulate(self.do_nothing)
        _, reward_sim_max, *_ = observation.simulate(act_max)
        _, reward_sim_min, *_ = observation.simulate(act_min)

        if reward_sim_dn >= reward_sim_max and reward_sim_dn >= reward_sim_min:
            self.print_next = False
            res = self.do_nothing
        elif reward_sim_max >= reward_sim_min:
            self.print_next = True
            res = act_max
            print(res)
        else:
            self.print_next = True
            res = act_min
            print(res)
        return res
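
Before handing the agent to the Runner, we can sanity-check it on a single observation (a quick, hypothetical usage example):

In [ ]:
# instantiate the agent and ask it for one action on a fresh observation
my_agent = MyAgent(env.action_space)
obs = env.reset()
chosen_action = my_agent.act(obs, reward=0.0, done=False)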

We now compare this Agent with the DoNothingAgent (already coded) on an episode made available with this package. To make this comparison more interesting, it is better to use the L2RPN reward (L2RPNReward).

In [ ]:
from grid2op.Runner import Runner
from grid2op.Agent import DoNothingAgent
from grid2op.Reward import L2RPNReward
from grid2op.Chronics import GridStateFromFileWithForecasts

max_iter = 10  # to make the computation much faster, we will only consider 10 time steps
runner = Runner(**env.get_params_for_runner(),
                agentClass=DoNothingAgent
               )
res = runner.run(nb_episode=1, max_iter=max_iter)

print("The results for DoNothing agent are:")
for _, chron_name, cum_reward, nb_time_step, max_ts in res:
    msg_tmp = "\tFor chronics with id {}\n".format(chron_name)
    msg_tmp += "\t\t - cumulative reward: {:.6f}\n".format(cum_reward)
    msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)
In [ ]:
runner = Runner(**env.get_params_for_runner(),
                agentClass=MyAgent
               )
res = runner.run(nb_episode=1, max_iter=max_iter)
print("The results for the custom agent are:")
for _, chron_name, cum_reward, nb_time_step, max_ts in res:
    msg_tmp = "\tFor chronics with id {}\n".format(chron_name)
    msg_tmp += "\t\t - cumulative reward: {:.6f}\n".format(cum_reward)
    msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)

As we can see, both agents obtain the same score here, but there would be a difference if we didn't limit the episode length to 10 time steps.

NB Disabling the time limit for the episode can be done by setting max_iter=-1 in the previous cells. Here, setting max_iter=10 is only done so that this notebook can run faster, but increasing or disabling the time limit would allow us to spot differences in the agents' performances.

The same can be done for the PowerLineSwitch agent:

In [ ]:
from grid2op.Agent import PowerLineSwitch
runner = Runner(**env.get_params_for_runner(),
                agentClass=PowerLineSwitch
               )
res = runner.run(nb_episode=1, max_iter=max_iter)
print("The results for the PowerLineSwitch agent are:")
for _, chron_name, cum_reward, nb_time_step, max_ts in res:
    msg_tmp = "\tFor chronics with ID {}\n".format(chron_name)
    msg_tmp += "\t\t - cumulative reward: {:.6f}\n".format(cum_reward)
    msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)