Be aware that this notebook, as well as the plotting capabilities in grid2op, are a work in progress. This notebook will change, and some functions might change names in future major grid2op releases.
With the module "grid2op.Plot" (more information in the official documentation here) we offer some possibilities to inspect visually (aka plot) some information about the state of the powergrid.
This module currently counts 3 "base" classes:

PlotPyGame
that uses the "pygame" library to plot information about the powergrid. This library is particularly suited for making videos, looking at the temporal dynamics of the grid, etc. It is not recommended for studying a particular step.

PlotMatplotlib
which uses the well-known matplotlib python library to render the plot. Matplotlib being the most used plotting library in python, we decided to add its support in grid2op.

PlotPlotly
that uses the plotly library. As opposed to pygame, plotly is particularly suited for in-depth study of a particular time step.

It is not recommended to use any of these directly. Rather, we developed two higher-level classes:

Plotting
that allows easier manipulation of the "base" classes mentioned above. This should be the main class used for studying the behaviour of your agent as of version 0.7.0.

EpisodeReplay
which uses the "PlotPyGame" class and is used to render a movie, as a gif or an mp4 file, mainly for communicating the results of your agent.

There is also env.render, the familiar method for all people used to the OpenAI gym framework.

We want to emphasize that for now, the results of the plotting are not always aesthetically pleasing. We are working on improving this for the next releases of grid2op.
Last but not least, a package called grid2viz
has been developed to help you diagnose in depth the behaviour of your agent. This package is much more advanced than all the methods presented above and we highly recommend its usage to get the best of your agents!
import matplotlib.pyplot as plt # pip install matplotlib
import seaborn as sns # pip install seaborn
import pygame # pip install pygame
import plotly.graph_objects as go # pip install plotly
import imageio # pip install imageio
import imageio_ffmpeg # pip install imageio-ffmpeg
pygame 1.9.6 Hello from the pygame community. https://www.pygame.org/contribute.html
This notebook will not work if one of the 6 packages above cannot be imported. We highly recommend you install them on your machine (we provide the pip commands above in case you have any trouble).
import grid2op
env = grid2op.make_new(test=True)
/home/benjamin/Documents/grid2op_test/getting_started/grid2op/MakeEnv.py:686: UserWarning: Your are using only 2 chronics for this environment. More can be download by running, from a command line: python -m grid2op.download --name "case14_realistic" --path_save PATH\WHERE\YOU\WANT\TO\DOWNLOAD\DATA
As we already said, the "Plot.Plotting" module can help render a powergrid using 3 different methods: pygame, matplotlib or plotly. The display method is defined when you create a "Plotting" object, as shown below.
All functions exposed here are available for pygame, plotly and matplotlib. Feel free to switch from one to the other to see the differences.
The next cell will plot the names of each object on the powergrid, as well as their id.
from grid2op.Plot import Plotting
plot_helper = Plotting(env.observation_space,
                       display_mod="matplotlib"  # all available values are: "matplotlib", "pygame" or "plotly"
                       )
plot_helper.plot_layout()
/home/benjamin/Documents/grid2op_test/getting_started/grid2op/Plot/Plotting.py:47: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.
(<Figure size 1080x1080 with 1 Axes>, <matplotlib.axes._subplots.AxesSubplot at 0x7f3182b9a5f8>)
It is also possible to display some "external" information on this layout. For example, you can plot the thermal limit of each powerline:
plot_helper.plot_info(line_info=env._thermal_limit_a, colormap="line")
(<Figure size 1080x1080 with 1 Axes>, <matplotlib.axes._subplots.AxesSubplot at 0x7f3183253908>)
The argument "line_info" indicates that you want to plot information about powerlines (here the thermal limit). The argument colormap="line" indicates that you want the different values on the powerlines to be colored differently. As you can notice, the higher thermal limits are displayed in a darker color.
It is also possible to display information about loads, generators or substations in the same manner.
For this part, we highly recommend the "plotly" method, as it allows more user interaction than a bare matplotlib figure; for example, it is possible to zoom in or out.
plot_helper = Plotting(env.observation_space,
                       display_mod="plotly"  # all available values are: "matplotlib", "pygame" or "plotly"
                       )
obs = env.reset()
_ = plot_helper.plot_obs(obs)
Here you can see that the powerlines are colored with respect to their flow (in % of the thermal limit). Load and generator information is given as the injection (+ for generators, - for loads) in MW. All of that can be modified, as shown in the cell below where we plot the active power flow for the powerlines and the voltage magnitude for the loads. Notice that the units are modified accordingly, and so is the colormap.
_ = plot_helper.plot_obs(obs, line_info="p", load_info="v")
Finally, the topology at each substation can also be plotted. For example, let's take a topological action at substation 1.
We will move the load there (the one with a voltage magnitude of 142.1kV), the powerline with 42.1MW and the powerline with 40.4MW to bus number 2. This can be done easily, for example by looking up their ids on the first layout and applying the appropriate action (see notebook 2_Action_GridManipulation for more information).
action = env.action_space({"set_bus": {"loads_id": [(0,2)], "lines_or_id": [(3,2)], "lines_ex_id": [(0,2)]}})
print(action)
This action will:
	- NOT change anything to the injections
	- NOT perform any redispatching action
	- NOT force any line status
	- NOT switch any line status
	- NOT switch anything in the topology
	- Set the bus of the following element:
		- assign bus 2 to line (extremity) 0 [on substation 1]
		- assign bus 2 to line (origin) 3 [on substation 1]
		- assign bus 2 to load 0 [on substation 1]
The print
utility helps us check that the action we implemented is the one we wanted to implement. print
is also a nice way to see what happened at a given step.
new_obs, reward, done, info = env.step(action)
_ = plot_helper.plot_obs(new_obs)
You are more than encouraged to zoom in on what happened at substation 1. Now you see 2 dots, each one representing a different "bus" ("electrical node" if you prefer).
To the orange bus are connected all the objects that have not moved. It is bus 1.
To the blue bus are connected the objects that have been moved to it: the load 0, the "origin" side of powerline 3 and the "extremity" side of powerline 0.
This plotting utility is a pretty useful tool to detect what happened, especially just before a game over.
Another way to inspect what is going on "live" is to use the renderer. The renderer can be used as in any OpenAI gym environment.
In the next cell we will: reset the environment created above, create an agent that takes a random action every 10 timesteps, and see how it goes (we use this agent instead of the bare "RandomAgent" because most of the time a RandomAgent reaches a game over at the first or second time step...)
NB env.render
(and pygame plotting utilities in general) is NOT recommended in a notebook, as weird things can happen internally. For example, the pygame window might stay "blocked" or "frozen" if the notebook cell is interrupted. We only use it here to explain its behaviour.
NB for legibility, env.render
is parametrized to display an observation for at least 1s (this can be changed by calling the env.change_duration_timestep_display
method). In any case, it is not recommended to train your agent with any renderer turned on. The preferred way is to first train your agent without any renderer, and then evaluate it on a fixed set of scenarios, possibly with the renderer on (see the next section about that).
from grid2op.Agent import RandomAgent

class CustomRandom(RandomAgent):
    def __init__(self, action_space):
        RandomAgent.__init__(self, action_space)
        self.i = 1  # call counter

    def my_act(self, transformed_observation, reward, done=False):
        # take the encoded "do nothing" action most of the time,
        # and sample a random action every 10th call
        if (self.i % 10) != 0:
            res = 0  # encoded "do nothing" action
        else:
            res = self.action_space.sample()
        self.i += 1
        return res
myagent = CustomRandom(env.action_space)
obs = env.reset()
reward = env.reward_range[0]
done = False
while not done:
    env.render()
    act = myagent.act(obs, reward, done)
    obs, reward, done, info = env.step(act)
env.close()
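The action-sampling pattern used by CustomRandom can be sketched stand-alone, without grid2op: with the counter starting at 1, a random action is only taken on the 10th, 20th, ... calls, and the (encoded) "do nothing" action 0 is taken otherwise. In this sketch, 1 simply stands in for a sampled random action; the class name EveryTenth is ours, not part of grid2op:

```python
# Stand-alone sketch of CustomRandom's "random action every 10th call"
# pattern (no grid2op needed). 0 stands for the encoded "do nothing"
# action, 1 stands in for a sampled random action.
class EveryTenth:
    def __init__(self):
        self.i = 1  # call counter, starts at 1 like CustomRandom

    def act(self):
        res = 0 if (self.i % 10) != 0 else 1
        self.i += 1
        return res

agent = EveryTenth()
actions = [agent.act() for _ in range(20)]
# a "random" action is taken on the 10th and 20th calls only
print(actions)
```

This makes it clear why this agent survives much longer than a bare RandomAgent: 9 out of every 10 of its actions do nothing at all.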
This tool allows you to save a gif or an mp4 of the "rendering" of your agent in an offline manner. As stated above, we recommend getting rid of any renderer during the training phase, then assessing the performance of your agent with a runner, and finally using the saved results of the runner to start this class, or to study them more in depth with grid2viz (see next section).
But first things first, let's mimic what we think is a good process. Suppose you are happy with the results of your agent (for the sake of simplicity, we will not train any agent here, but rather use the CustomRandom class). Now what do you do?
First, you create an environment on which it will be evaluated, and the associated runner:
from grid2op.Runner import Runner
env = grid2op.make_new(test=True)
my_awesome_agent = CustomRandom(env.action_space)
runner = Runner(**env.get_params_for_runner(), agentClass=None, agentInstance=my_awesome_agent)
/home/benjamin/Documents/grid2op_test/getting_started/grid2op/MakeEnv.py:686: UserWarning: Your are using only 2 chronics for this environment. More can be download by running, from a command line: python -m grid2op.download --name "case14_realistic" --path_save PATH\WHERE\YOU\WANT\TO\DOWNLOAD\DATA
Second, you start the runner and save the results in a given directory (here we limit the runner to 30 iterations to save time):
import os
path_agents = "path_agents"  # grid2viz requires a directory that contains only agents,
                             # that is why we have it here. It is absolutely not mandatory for this simpler class.
max_iter = 30 # to save time we only assess performance on 30 iterations
if not os.path.exists(path_agents):
    os.mkdir(path_agents)
path_awesome_agent_log = os.path.join(path_agents, "awesome_agent_logs")
res = runner.run(nb_episode=2, path_save=path_awesome_agent_log, max_iter=max_iter)
Third, you use the results of the runner to save them as a gif, for example (you can also visualize it on screen if you prefer; for that, simply switch the "display" argument to True):
from grid2op.Plot import EpisodeReplay
gif_name = "episode.gif"
ep_replay = EpisodeReplay(agent_path=path_awesome_agent_log)
for _, chron_name, cum_reward, nb_time_step, max_ts in res:
    ep_replay.replay_episode(chron_name,  # which chronic was started
                             video_name=os.path.join(path_awesome_agent_log, chron_name, gif_name),  # save the video
                             display=False,  # don't wait before rendering each frame
                             max_fps=1.  # limit to 1 frame per second
                             )
And you can even see the gif in the jupyter notebook afterwards (which is much more convenient than starting pygame from a jupyter notebook).
This only works if the gif file has been properly created by the cell above.
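As a sketch, assuming the runner and EpisodeReplay cells above ran successfully (the chronic folder name "000" is an assumption here; adapt it to the chron_name values your runner actually returned), the gif can be embedded in the notebook with IPython's display utilities:

```python
import os

# hypothetical path: the chronic folder "000" is an assumption,
# adapt it to the chron_name values returned by your runner
gif_path = os.path.join("path_agents", "awesome_agent_logs", "000", "episode.gif")
if os.path.exists(gif_path):
    from IPython.display import Image, display  # available inside jupyter
    display(Image(filename=gif_path))
else:
    print("gif not found; run the EpisodeReplay cell above first")
```

The import is done lazily inside the branch so the cell also runs outside jupyter, where IPython may not be installed.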
This tool is a really useful way to dive deep into the analysis of your agent. We highly recommend using it to develop ever stronger agents and score high in the competition.
Grid2viz is a package that has been developed to help you visualize the behaviour of your agent.
It is available for now in a github repository grid2viz. In the following cells we will demonstrate how to use it to inspect in more detail the logs of the agents generated by the runner (second cell of this notebook).
We will first run some other agents to show the full potential of grid2viz (optional). Then we emphasize a constraint on the use of grid2viz: the folder tree must respect a certain layout. Finally, we show how to install it and how to launch it on the data generated by this notebook.
This section is not mandatory, but it helps show the full capabilities of grid2viz. We will first run two other agents: the do-nothing agent and the topology greedy agent.
# make a runner for this agent
from grid2op.Agent import DoNothingAgent, TopologyGreedy
import shutil

for agentClass, agentName in zip([DoNothingAgent],  # , TopologyGreedy
                                 ["DoNothingAgent"]):  # , "TopologyGreedy"
    path_this_agent = os.path.join(path_agents, agentName)
    shutil.rmtree(os.path.abspath(path_this_agent), ignore_errors=True)
    runner = Runner(**env.get_params_for_runner(),
                    agentClass=agentClass
                    )
    res = runner.run(path_save=path_this_agent, nb_episode=2,
                     max_iter=max_iter)
    print("The results for the {} agent are:".format(agentName))
    for _, chron_id, cum_reward, nb_time_step, max_ts in res:
        msg_tmp = "\tFor chronics with id {}\n".format(chron_id)
        msg_tmp += "\t\t - cumulative reward: {:.6f}\n".format(cum_reward)
        msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
        print(msg_tmp)
The results for the DoNothingAgent agent are:
	For chronics with id 000
		 - cumulative reward: 34403.382277
		 - number of time steps completed: 30 / 30
	For chronics with id 001
		 - cumulative reward: 34420.375574
		 - number of time steps completed: 30 / 30
Grid2Viz is not yet on pypi, the python package repository, so you need a specific command to install it. It can be done easily by running the cell below (more information can be found on the grid2viz github).
import sys
print("To install it, either uncomment the cell bellow, or type, in a command prompt:\n{}".format(
("\t{} -m pip install git+https://github.com/mjothy/grid2viz.git --user --extra-index-url https://test.pypi.org/simple/".format(sys.executable))))
To install it, either uncomment the cell bellow, or type, in a command prompt: /usr/bin/python3.6 -m pip install git+https://github.com/mjothy/grid2viz.git --user --extra-index-url https://test.pypi.org/simple/
# !$sys.executable -m pip install git+https://github.com/mjothy/grid2viz --user --extra-index-url https://test.pypi.org/simple/
Once the above package is installed, you can start to study what your agent did (NB the agent must have been run with a runner and the "path_save" argument set in order for grid2viz to work properly).
For performance reasons, grid2viz uses a cache. This notebook being an example, it is recommended to clear the cache before starting the grid2viz app. Of course, if you study different generations of your agent, it is NOT recommended to clear the cache between studies.
shutil.rmtree(os.path.join(os.path.abspath(path_agents), "_cache"), ignore_errors=True)
!$sys.executable -m grid2viz.main --path=$path_agents