Basic Agent Study

It is recommended to have a look at the 0_basic_functionalities, 1_Observation_Agents, 2_Action_GridManipulation and 3_TrainingAnAgent notebooks before getting into this one.

Objectives

In this notebook we will show how to study an Agent. To make things concrete, we first run a dummy agent, and then we look at how to study its behaviour from the files saved by the Runner.

This notebook will also show you how to use "Grid2Viz", the graphical user interface built for analyzing Grid2Op agents.
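
If Grid2Viz is already installed, it can be pointed at the folder in which the Runner saves its results (here "study_agent_getting_started"). The command below is only an assumption about the Grid2Viz command-line interface; the option names may differ between versions, so please check the Grid2Viz documentation before running it.

# assumed Grid2Viz invocation, to be checked against the Grid2Viz documentation
!grid2viz --agents_path study_agent_getting_started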

It is strongly recommended to know how to define an Agent and how to use a Runner before doing this tutorial!

Evaluate the performance of a simple Agent

In [1]:
import os
import sys
import grid2op
import copy
import numpy as np
import shutil
import seaborn as sns
import plotly.graph_objects as go

from tqdm.notebook import tqdm
from grid2op.Agent import PowerLineSwitch
from grid2op.Reward import L2RPNReward
from grid2op.Runner import Runner
from grid2op.Chronics import GridStateFromFileWithForecasts, Multifolder
path_agents = "study_agent_getting_started"
max_iter = 30

In the next cell we evaluate the agent "PowerLineSwitch" and save the results of this evaluation in the folder "study_agent_getting_started".

In [2]:
scoring_function = L2RPNReward
env = grid2op.make_new(reward_class=L2RPNReward, test=True)
# env.chronics_handler.set_max_iter(max_iter)
shutil.rmtree(os.path.abspath(path_agents), ignore_errors=True)
if not os.path.exists(path_agents):
    os.mkdir(path_agents)

# make a runner for this agent
path_agent = os.path.join(path_agents, "PowerLineSwitch")
shutil.rmtree(os.path.abspath(path_agent), ignore_errors=True)

runner = Runner(**env.get_params_for_runner(),
                agentClass=PowerLineSwitch
                )
res = runner.run(path_save=path_agent, nb_episode=2, 
                max_iter=max_iter,
                pbar=tqdm)
print("The results for the evaluated agent are:")
for _, chron_id, cum_reward, nb_time_step, max_ts in res:
    msg_tmp = "\tFor chronics with id {}\n".format(chron_id)
    msg_tmp += "\t\t - cumulative reward: {:.6f}\n".format(cum_reward)
    msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)
/home/donnotben/Documents/Grid2Op_dev/getting_started/grid2op/MakeEnv/MakeNew.py:138: UserWarning:

You are using a development environment. This is really not recommended for training agents.


The results for the evaluated agent are:
	For chronics with id 000
		 - cumulative reward: 496.772954
		 - number of time steps completed: 30 / 30
	For chronics with id 001
		 - cumulative reward: 515.620095
		 - number of time steps completed: 30 / 30
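
Each element of res is a tuple containing, among other things, the chronics id, the cumulative reward, the number of time steps the agent survived and the maximum number of time steps of the scenario. As a quick sanity check, here is a small sketch (reusing the res variable returned above) that aggregates these values:

# average cumulative reward over the evaluated episodes
avg_reward = np.mean([cum_reward for _, _, cum_reward, _, _ in res])
# check whether every episode was played until its last time step
all_completed = all(nb_ts == max_ts for _, _, _, nb_ts, max_ts in res)
print("Average cumulative reward: {:.2f}".format(avg_reward))
print("All episodes completed: {}".format(all_completed))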

Looking at the results to understand the behaviour of the Agent

The content of the folder is the following:

In [3]:
os.listdir(path_agent)
Out[3]:
['dict_observation_space.json',
 '000',
 'dict_action_space.json',
 'dict_env_modification_space.json',
 '001']
In [4]:
!ls $path_agent
000  dict_action_space.json	       dict_observation_space.json
001  dict_env_modification_space.json

Now we can load the data corresponding to one episode, for example episode "001": the actions and the observations are loaded and de-serialized into proper grid2op objects. This is done automatically by the class "EpisodeData", which can be used as follows:

In [5]:
from grid2op.EpisodeData import EpisodeData
episode_studied = "001"
this_episode = EpisodeData.from_disk(path_agent, episode_studied)
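
Before diving into specific elements, it can be useful to get a quick overview of what has been loaded. The snippet below is a small sketch relying only on the actions, observations and env_actions attributes used in the rest of this notebook:

# quick overview of the loaded episode
print("Number of actions taken by the agent:", len(this_episode.actions))
print("Number of observations recorded:", len(this_episode.observations))
print("Number of environment modifications:", len(this_episode.env_actions))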

Inspect the actions

And now we can start to study the given agent. For example, let's inspect its actions and count how many powerlines it has disconnected (which is probably not the best thing to do here...)

In [6]:
line_disc = 0
line_reco = 0
for act in this_episode.actions:
    dict_ = act.as_dict() # representation of an action as a dictionary, see the documentation for more information
    if "change_line_status" in dict_:
        if "set_bus_vect" in dict_:
            line_reco += 1
        else:
            line_disc += 1
line_disc
Out[6]:
3

We can also wonder how many times this Agent acted on the powerline with id $13$, and inspect how many times it has changed its status:

In [7]:
id_line_inspected = 13
line_disconnected = 0
for act in this_episode.actions:
    dict_ = act.effect_on(line_id=id_line_inspected) # the effect this action has on the powerline with the given id
    # other objects are: load_id, gen_id, line_id or substation_id
    if dict_['change_line_status'] or dict_["set_line_status"] != 0:
        line_disconnected += 1
line_disconnected
Out[7]:
1
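
The same idea can be generalized to every powerline, to see which ones the agent acted on the most. Below is a small sketch that counts, for each powerline, the number of actions affecting its status (it assumes the number of powerlines can be deduced from the line_status vector of the first observation):

# count, for each powerline, how many actions changed or set its status
first_obs = next(iter(this_episode.observations))
n_line = len(first_obs.line_status)  # assumption: one entry per powerline
impacted = np.zeros(n_line, dtype=int)
for act in this_episode.actions:
    for l_id in range(n_line):
        dict_ = act.effect_on(line_id=l_id)
        if dict_['change_line_status'] or dict_["set_line_status"] != 0:
            impacted[l_id] += 1
print("Number of status actions per powerline:", impacted)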

Inspect the modification of the environment

For example, we might want to inspect the number of hazards and maintenance operations over the whole scenario, to see how difficult it was.

In [8]:
nb_hazards = 0
nb_maintenance = 0
for act in this_episode.env_actions:
    dict_ = act.as_dict() # representation of an action as a dictionary, see the documentation for more information
    if "nb_hazards" in dict_:
        nb_hazards += 1
    if "nb_maintenance" in dict_:
        nb_maintenance += 1
nb_maintenance
Out[8]:
0

Inspect the observations

For example, let's look at the active power consumed by load 1. This cell requires plotly to display the results.

In [9]:
import plotly.graph_objects as go
load_id = 1
# extract the data
val_load1 = np.zeros(len(this_episode.observations))
for i, obs in enumerate(this_episode.observations):
    dict_ = obs.state_of(load_id=load_id) # the state of the load with the given id
    # other objects are: load_id, gen_id, line_id or substation_id
    # see the documentation for more information.
    val_load1[i] = dict_['p']

# plot it
fig = go.Figure(data=[go.Scatter(x=[i for i in range(len(val_load1))],
                                 y=val_load1)])
fig.update_layout(title="Consumption of load {}".format(load_id),
                 xaxis_title="Time step",
                 yaxis_title="Load (MW)")
fig.show()
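
Other quantities stored in the observations can be plotted in the same way. As an illustration, here is a small sketch assuming the observation exposes its relative line flows through the rho attribute (as presented in the observation notebook), plotting the maximum line loading at each time step:

# maximum relative flow (rho) over all powerlines, for each time step
max_rho = np.zeros(len(this_episode.observations))
for i, obs in enumerate(this_episode.observations):
    max_rho[i] = obs.rho.max()

fig = go.Figure(data=[go.Scatter(x=[i for i in range(len(max_rho))],
                                 y=max_rho)])
fig.update_layout(title="Maximum line loading over time",
                  xaxis_title="Time step",
                  yaxis_title="Max rho (relative flow)")
fig.show()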