This notebook presents the most basic usage of Grid2Op.

Objectives

This notebook will first cover some basic raw functionalities. It will then show how these raw functionalities are encapsulated in easy-to-use functions.

The recommended way to use these is through the Runner, and not by instantiating the classes one by one.

In [1]:
import os
import sys
import grid2op
In [2]:
res = None
try:
    from jyquickhelper import add_notebook_menu
    res = add_notebook_menu()
except ModuleNotFoundError:
    print("Impossible to automatically add a menu / table of content to this notebook.\nYou can download \"jyquickhelper\" package with: \n\"pip install jyquickhelper\"")
res
Impossible to automatically add a menu / table of content to this notebook.
You can download "jyquickhelper" package with: 
"pip install jyquickhelper"

0) Summary of RL method

Though the Grid2Op package can be used to perform many different tasks, this set of notebooks focuses on the machine learning part and its usage in a reinforcement learning framework.

Reinforcement learning is a framework that allows training an "agent" to solve time-dependent problems. We tried to cast the grid operation planning problem into this framework; the Grid2Op package was inspired by it.

In reinforcement learning (RL), there are 2 distinct entities:

  • Environment: a model of the "world" in which the agent takes actions to achieve some pre-defined objectives.
  • Agent: takes actions on the environment; these actions have consequences.

These 2 entities exchange 3 main types of information:

  • Action: information sent by the Agent that modifies the internal state of the environment.
  • State / Observation: the (partial) view the Agent has of the environment. The Agent receives a new state after each action. It can use the observation (state) at time step t to take the action at time t.
  • Reward: the score received by the agent for the previous action.

A schematic representation of this is shown in the figure below (Credit: Sutton & Barto):

(figure: the agent / environment interaction loop)
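To make this loop concrete, here is a minimal generic sketch of the interaction (illustrative pseudocode only; env and agent stand for the objects that will be built later in this notebook):

obs = env.reset()                            # initial observation (state)
reward, done = 0.0, False
while not done:
    act = agent.act(obs, reward, done)       # the agent chooses an action
    obs, reward, done, info = env.step(act)  # the environment applies it and returns the new state and reward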

In this notebook, we will develop a simple Agent that takes some action (powerline disconnection) based on the observation of the environment.

For more information about the problem, please visit the Example_5bus notebook, which dives deeper into the casting of the real-time operation planning problem into a RL framework. Note that this notebook is still under development at the moment.

Good reference material is also provided in the white paper Reinforcement Learning for Electricity Network Operation, presented for the L2RPN 2020 NeurIPS edition.

I) Creating an Environment: Step by step explanation of the basic classes of this package

I.A) Get Data to feed the powergrid

In order to be initialized, an Agent needs to know in which space it operates. For that, we load an Environment based on the IEEE case14.

An example of this powergrid can be found in the package data. We import it here:

In [3]:
powergrid_path = grid2op.CASE_14_FILE
multi_episode_path = grid2op.CHRONICS_MLUTIEPISODE
names_chronics_to_backend = grid2op.NAMES_CHRONICS_TO_BACKEND
max_iter = 10

NB In order to work smoothly, the objects in the backend and in the data files need to have the exact same names. As the process used to generate the data (we suppose that the equilibrium between productions and loads has been enforced beforehand) is often different from, and agnostic of, the powergrid description, it is not surprising to find the same physical object under different names in the temporal series and in the powergrid description file. To simplify the matching, in this example we use the mapper names_chronics_to_backend, which "converts" the names given in the data into the names of the objects in the powergrid description file.

More details about how the matching is performed can be found in the help of the ChronicsHandler.GridValue.initialize method here, or in the file ChronicsHandler.py.
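For illustration only (all the keys and names below are hypothetical, not taken from the actual case14 data), such a mapper is a dictionary of dictionaries, one per type of object, mapping each name used in the chronics data to the corresponding name in the backend:

# purely illustrative sketch of the structure of such a mapper
names_chronics_to_backend_example = {
    "loads": {"2_C-10.61": "load_1_0"},   # name in the chronics -> name in the backend
    "prods": {"1_G-40.43": "gen_0_0"},
    "lines": {"1_2_1": "0_1_0"},
}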

In order to work, an Environment needs to be fed with data. These data can be read from files, for example. Some examples are also provided in this package. We import them:

In [4]:
from grid2op.Chronics import ChronicsHandler, Multifolder, GridStateFromFileWithForecasts
data_feeding = ChronicsHandler(chronicsClass=Multifolder,
                               path=multi_episode_path,
                               gridvalueClass=GridStateFromFileWithForecasts,
                               max_iter=max_iter)

data_feeding is now an instance that will automatically load the data and modify the powergrid accordingly at each time step. The process of reading is handled by this class, but the process of modifying the underlying powergrid is carried out by the Environment and performed by the Backend.

I.B) Get a Backend to carry out the computations

A Backend is a dedicated object that has the responsibility to compute the resulting powerflow from given injections (productions and loads). The possibility to implement your own Backend makes the Grid2Op framework completely agnostic of the modeling of the powergrid you want to use, and of the method used to solve the powerflow.

An implementation of a Backend is provided with Pandapower.

In [5]:
from grid2op.Backend import PandaPowerBackend
backend = PandaPowerBackend()

backend is now a variable that is able to compute powerflows and to emulate cascading failures. To work properly and carry out the right computations, it needs to be aware of some Parameters.

I.C) Getting the parameters of the game

For this example, we will use the default parameters available. More information about the parameters that can be modified can be found in the help here, or in the file Parameters.py.

In [6]:
from grid2op.Parameters import Parameters
param = Parameters()

Note on the parameters: some parameters have a direct influence on the difficulty of the game. For example, the "NO_OVERFLOW_DISCONNECTION" member of this class, if set to True (default is False), will prevent the automatic disconnection of powerlines when they are in overflow. Some other parameters influence the speed at which the timesteps are computed. This is for example the case of the member "ENV_DC": if it is set to True, a faster (but less accurate) computation engine will be used to compute the flows. This might be useful, for example, at the beginning of training.

To set the parameters, you can do, for example:

from grid2op.Parameters import Parameters
param = Parameters()
param.from_dict({"NO_OVERFLOW_DISCONNECTION": True, "ENV_DC": True})

I.D) Building the Environment

In [7]:
from grid2op.Environment import Environment
env = Environment(init_grid_path=powergrid_path,
                 chronics_handler=data_feeding,
                 backend=backend,
                 parameters=param,
                 names_chronics_to_backend=names_chronics_to_backend)

Creating an Environment will load the powergrid (powergrid_path), the data to feed it (data_feeding), the powerflow simulator (backend) and the game settings (param). It uses the first row of the data to initialize the powergrid and performs some internal checks that the data are suited to the powergrid.

It can take a moment (usually a few seconds) to load it.

This environment can be greatly customized. We expose here only basic functionalities. For more information, it is advised to read the documentation here (if it has been built locally), to consult the official documentation online, or to consult the source code of Environment.py.

I.E) A shortcut to perform all that

All of the above can be done by calling one function that will handle the creation of the Environment with default values.

The particular environment described above is named case14_fromfile.

To define/create it, we can call:

In [8]:
env = grid2op.make("case14_fromfile")

II) Creating an Agent

An Agent is the name given to the "operator" / "bot" / "algorithm" that will perform some modifications of the powergrid when it faces an "observation".

Some examples of Agents are provided in the file Agent.py.

A deeper look at the different Agents provided can be found in the 4_StudyYourAgent notebook (in progress). We suppose here that we use the simplest Agent, the one that does nothing.

In [9]:
from grid2op.Agent import DoNothingAgent
my_agent = DoNothingAgent(env.helper_action_player)

III) Assess how the Agent is performing

The performance of each Agent is assessed with the reward. For this example, the reward is a FlatReward that simply counts how many time steps the Agent has successfully managed before breaking any rules. For more control over this reward, it is recommended to look at the documentation of the Environment class.

More examples of rewards are also available in the official documentation or here.

In [10]:
done = False
time_step = int(0)
cum_reward = 0.
obs = env.reset()
reward = env.reward_range[0]
while not done:
    act = my_agent.act(obs, reward, done) # choose an action to do, in this case "do nothing"
    obs, reward, done, info = env.step(act) # implement this action on the powergrid
    cum_reward += reward
    time_step += 1
    if time_step > max_iter:
        break

We can now evaluate how well this agent is performing:

In [ ]:
print("This agent managed to survive {} timesteps".format(time_step))
print("It's final cumulated reward is {}".format(cum_reward))

NB The function "make" is highly customizable. For example, you can change the reward used to train your agent this way:

from grid2op.Reward import L2RPNReward
env = grid2op.make(reward_class=L2RPNReward)

Because we thought that using a single reward might not be sufficient to train an agent in such a complex environment, we also gave the possibility to assess different rewards during training. This can be done with the following code:

from grid2op.Reward import L2RPNReward, FlatReward
env = grid2op.make(reward_class=L2RPNReward,
                   other_rewards={"other_reward": FlatReward})

The results of these rewards can be accessed in the "info" return value of the call to env.step. See the official documentation of rewards here for more information.
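As a short sketch of how to read them (assuming the environment created above, and that these additional rewards are exposed under the "rewards" key of the info dictionary):

obs = env.reset()
reward, done = env.reward_range[0], False
act = my_agent.act(obs, reward, done)     # "do nothing" action, as above
obs, reward, done, info = env.step(act)
print(info["rewards"]["other_reward"])    # value of the FlatReward declared in other_rewards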

IV) More convenient ways to perform all these operations

All the above steps have been detailed as a "quick start", to give an example of the main classes of the Grid2Op package. Having to code all the above is quite tedious, but offers a lot of flexibility.

What we expose here is a much shorter way to perform all of the above. In this section we will present 2 ways:

  • The quickest way, using the grid2op.main API, most suited when basic computations need to be carried out.
  • The recommended way, using a Runner; it gives more flexibility than the grid2op.main API but can be harder to configure.

For this section, we assume the same as before:

  • The Agent is "Do Nothing"
  • The Environment is the default Environment
  • PandaPower is used as the backend
  • The chronics come from the files included in this package
  • etc.

IV.A) Using the grid2op.main API

When only simple assessments need to be performed, the grid2op.main API is perfectly suited. This API can also be accessed from the command line:

python3 -m grid2op.main

We detail here its usage as an API, to assess the performance of a given Agent.

As opposed to building an environment from scratch (see the previous section), this requires much less effort: we don't need to initialize (instantiate) anything. Everything is carried out inside the Runner called by the main function.

We ask here for 1 episode (i.e. we play one scenario until either the agent reaches a game over or the scenario ends). But this method would work just as well if we asked for more.

In [ ]:
from grid2op.main import main
res = main(nb_episode=1,
           agent_class=DoNothingAgent,
           path_casefile=powergrid_path,
           path_chronics=multi_episode_path,
           names_chronics_to_backend=names_chronics_to_backend,
           gridStateclass_kwargs={"gridvalueClass": GridStateFromFileWithForecasts, "max_iter": max_iter}
          )

Calling the 2 lines above will:

  • Create a valid environment
  • Create a valid agent
  • Assess how well an agent performs on one episode.
In [ ]:
print("The results are:")
for chron_name, _, cum_reward, nb_time_step, max_ts in res:
    msg_tmp = "\tFor chronics located at {}\n".format(chron_name)
    msg_tmp += "\t\t - cumulative reward: {:.2f}\n".format(cum_reward)
    msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)

This is particularly suited to evaluating different agents; for example, we can quickly evaluate a second agent. For the example below, we import the agent class PowerLineSwitch, whose job is to connect and disconnect the power lines in the power network. This PowerLineSwitch Agent will simulate the effect of disconnecting each powerline on the powergrid and take the best action found (its execution can take a long time, depending on the scenario and the number of powerlines in the grid). The execution of the code below can take a few moments.

In [ ]:
from grid2op.Agent import PowerLineSwitch
res = main(nb_episode=1,
           agent_class=PowerLineSwitch,
           path_casefile=powergrid_path,
           path_chronics=multi_episode_path,
           names_chronics_to_backend=names_chronics_to_backend,
           gridStateclass_kwargs={"gridvalueClass": GridStateFromFileWithForecasts, "max_iter": max_iter}
          )
print("The results are:")
for chron_name, _, cum_reward, nb_time_step, max_ts in res:
    msg_tmp = "\tFor chronics located at {}\n".format(chron_name)
    msg_tmp += "\t\t - cumulative reward: {:.2f}\n".format(cum_reward)
    msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)

Using this API, it's also possible to store the results for a detailed examination of the actions taken by the Agent. Note that writing to the hard drive adds an overhead to the computation time.

To do this, only a simple argument needs to be added to the main function call. An example can be found below (where the outcome of the experiment will be stored in the saved_experiment_donothing directory):

In [ ]:
res = main(nb_episode=1,
           agent_class=DoNothingAgent,
           path_casefile=powergrid_path,
           path_chronics=multi_episode_path,
           names_chronics_to_backend=names_chronics_to_backend,
           gridStateclass_kwargs={"gridvalueClass": GridStateFromFileWithForecasts, "max_iter": max_iter},
           path_save=os.path.abspath("saved_experiment_donothing")
          )
print("The results are:")
for chron_name, _, cum_reward, nb_time_step, max_ts in res:
    msg_tmp = "\tFor chronics located at {}\n".format(chron_name)
    msg_tmp += "\t\t - cumulative reward: {:.2f}\n".format(cum_reward)
    msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)
In [ ]:
!ls saved_experiment_donothing/1

All the information saved is shown above. For more information about it, please don't hesitate to read the documentation of the Runner (if compiled locally), or to consult the Runner.py file.

NB A lot more information about Actions is provided in the 2_Action_GridManipulation notebook. In the 3_TrainingAnAgent notebook (last section) there is a quick example of how to read / write actions from a saved repository.

IV.B) Using the "make" function

By default, the grid2op framework comes with a few pre-defined environments, each with its own properties. Some are more suitable for discrete control, some for continuous control, etc.

Two environments can be used to get familiar with the platform. The first one is "case5_example", which represents a tiny powergrid (with only 5 substations); the other one is "case14_redisp", which is an adaptation of the IEEE case14 powergrid.

They can both be created easily:

In [ ]:
env_case5 = grid2op.make("case5_example")
In [ ]:
env_case14 = grid2op.make("case14_redisp")
# more data for this environment are available as a github release. You can download them easily
# from a command line with:
# python -m grid2op.download --name "case14_redisp" --path_save PATH\WHERE\YOU\WANT\TO\DOWNLOAD\DATA
In [ ]:
from grid2op.Runner import Runner
from grid2op.Agent import DoNothingAgent
runner = Runner(**env_case14.get_params_for_runner(), agentClass=DoNothingAgent)
res = runner.run(nb_episode=2, nb_process=1, max_iter=10)
for path_chron, episode_id, total_reward, nb_iter, max_iter in res:
    print("Total reward for episode {} is {}".format(episode_id, total_reward))

The use of make + Runner makes it easy to assess the performance of a trained agent. Besides, the Runner is well integrated with other tools, which makes the replay / post-analysis of episodes easy. It is the recommended method to use in grid2op.
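As with the main function above, the detailed results of a Runner can also be stored on disk for later inspection; here is a short sketch, assuming Runner.run accepts a path_save argument similar to the one of the main function:

# illustrative sketch: store the detailed results of the runs on the hard drive
# (assumes Runner.run supports a "path_save" argument, like grid2op.main does)
res = runner.run(nb_episode=2, nb_process=1, max_iter=10,
                 path_save=os.path.abspath("saved_experiment_runner"))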