#!/usr/bin/env python
# coding: utf-8

# # Training Agent, action converters and l2rpn_baselines

# It is recommended to have a look at the [0_basic_functionalities](0_basic_functionalities.ipynb), [1_Observation_Agents](1_Observation_Agents.ipynb) and [2_Action_GridManipulation](2_Action_GridManipulation.ipynb) notebooks before getting into this one.

# **Objectives**
#
# In this notebook we will expose:
# * how to use the "converters": specific action spaces that let an agent manipulate an alternative representation of the actions
# * how to train a (stupid) Agent using reinforcement learning
# * how to inspect (rapidly) the actions taken by the Agent
#
# **NB** for this tutorial we train an Agent inspired by this blog post: [deep-reinforcement-learning-tutorial-with-open-ai-gym](https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368). Many other reinforcement learning tutorials exist. The code shown in this notebook has no pretension other than demonstrating how to use Grid2Op functionalities to train a deep reinforcement learning Agent and inspect its behaviour. Nothing is implied about the performance, training strategy, type of Agent, meta parameters, etc. All of them were chosen rather arbitrarily.

# In[1]:

import os
import sys
import grid2op

# In[2]:

res = None
try:
    from jyquickhelper import add_notebook_menu
    res = add_notebook_menu()
except ModuleNotFoundError:
    print("Impossible to automatically add a menu / table of content to this notebook.\nYou can download \"jyquickhelper\" package with: \n\"pip install jyquickhelper\"")
res

# ## I) Manipulating the action representation

# The Grid2Op package has been built with an "object oriented" perspective: almost everything is encapsulated in a dedicated `class`. This allows for more customization of the platform.
#
# The downside of this approach is that machine learning methods, and especially deep learning ones, often prefer to deal with vectors rather than with `complex` objects. Indeed, as we covered in the previous tutorials, building our own actions can be tedious and can sometimes require knowledge of the powergrid.
#
# On the contrary, in most standard Reinforcement Learning environments, actions have a higher level representation. For example in pacman there are only 4 different types of actions: turn left, turn right, go up or go down. This allows for easy sampling (if you need a uniform sampling you simply draw an integer between 0 and 3 included) and an easy representation: each action can be a different component of a vector of dimension 4 [because there are 4 actions].
#
# On the other hand, this representation is not "human friendly". It is quite convenient in the case of pacman, because the action space is rather small, making it possible to remember which action corresponds to which component; but in the case of the grid2op package there are hundreds, sometimes thousands, of actions, making it impossible to remember which component corresponds to which action. We suppose we don't really care about this fact here, as tutorials on Reinforcement Learning with a discrete action space often assume that actions are labelled with integers (such as in pacman for example).
#
# However, to make the training of RL agents easier, grid2op provides some "[Converters](https://grid2op.readthedocs.io/en/latest/converters.html)" whose role is to let an agent deal with such a custom representation of the action space. The class [AgentWithConverter](https://grid2op.readthedocs.io/en/latest/agent.html#grid2op.Agent.AgentWithConverter) is designed exactly for this usage.
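# To make this more concrete, here is a minimal sketch of how the [IdToAct](https://grid2op.readthedocs.io/en/latest/converter.html#grid2op.Converter.IdToAct) converter can be used on its own: it enumerates a set of unitary actions and maps each of them to an integer. The exact number of actions built depends on the options passed to `init_converter`, so treat the printed count as indicative only.
#
# ```python
# from grid2op import make
# from grid2op.Converter import IdToAct
#
# env = make("rte_case14_redisp", test=True)
# converter = IdToAct(env.action_space)
# converter.init_converter()  # enumerate the unitary actions (this can take a while on large grids)
#
# print("Number of discrete actions:", len(converter.all_actions))
# # an "encoded" action is just an integer; the converter maps it back to a full grid2op action
# print(converter.convert_act(0))  # by convention the first action is typically "do nothing"
# ```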
# In[3]:

# import the useful classes
import numpy as np
from grid2op import make
from grid2op.Agent import RandomAgent

max_iter = 100  # to make the computation much faster we will only consider 100 time steps instead of 287
train_iter = 1000
env_name = "rte_case14_redisp"
env = make(env_name, test=True)
env.seed(0)  # this is to ensure the same actions are taken by the "RandomAgent"
my_agent = RandomAgent(env.action_space)

# And that's it. This agent is able to perform any action, but instead of going through the description of the actions from a power system point of view (i.e. setting what is connected to what, what is disconnected, etc.) it simply chooses an integer in the method `my_act`. This integer is then converted back into a proper, valid action.
#
# Here is an example of the action representation as seen by the Agent:

# In[4]:

for el in range(3):
    print(my_agent.my_act(None, None))

# And below you can see that the "`act`" function behaves as expected:

# In[5]:

for el in range(3):
    print(my_agent.act(None, None))

# **NB** lots of these actions are equivalent to the "do nothing" action in some situations. For example, trying to reconnect a powerline that is already connected does nothing. The same goes for the topology: if everything is already connected to bus 1, then an action that connects elements of the same substation to bus 1 does not affect the powergrid.

# ## II) Training an Agent

# For this tutorial, we will show how to build a Q-learning Agent. Most of the code originated from this (now deleted) blog post: [https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368](https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368).
#
# The goal of this notebook is to emphasize the possibility to train agents using the grid2op framework. The key message is: since grid2op fully implements the gym API, it is rather easy to do. We will use the [l2rpn baselines](https://github.com/rte-france/l2rpn-baselines) repository and its implementation of a Double Dueling Deep Q-learning algorithm. For more information, you can consult the code in the dedicated repository [here](https://github.com/rte-france/l2rpn-baselines/tree/master/l2rpn_baselines/DoubleDuelingDQN).
#
# **Requirements** This notebook requires `keras` to be installed on your machine, as well as the `l2rpn_baselines` package.
#
# As always in these notebooks, we will use a test environment (here `rte_case14_redisp`). More data are available if you don't pass the `test=True` parameter.

# ### II.A) Defining some "helpers"

# The type of Agent we are using requires a bit of setup, independently of Grid2Op. We will reuse the code shown in
# [https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368](https://towardsdatascience.com/deep-reinforcement-learning-tutorial-with-open-ai-gym-c0de4471f368) and in [Reinforcement-Learning-Tutorial](https://github.com/abhinavsagar/Reinforcement-Learning-Tutorial) by Abhinav Sagar, released under an *MIT license*: [MIT License](https://github.com/abhinavsagar/Reinforcement-Learning-Tutorial/blob/master/LICENSE).
#
# This first section is here to define these classes.
# But first let's import the necessary dependencies.

# In[6]:

# tf2.0 friendly
import numpy as np
import random
import warnings
import l2rpn_baselines

# #### b) Meta parameters of the method

# In[7]:

DECAY_RATE = 0.9
BUFFER_SIZE = 40000
MINIBATCH_SIZE = 64
TOT_FRAME = 3000000
EPSILON_DECAY = 10000
MIN_OBSERVATION = 42  # 5000
FINAL_EPSILON = 1/300  # have on average 1 random action per scenario of approx. 287 time steps
INITIAL_EPSILON = 0.1
TAU = 0.01
ALPHA = 1

# Number of frames to "throw" into the network
NUM_FRAMES = 1  # this has been changed compared to the original implementation

# ### II.B) Adaptation of the inputs
#
# For most Deep Reinforcement Learning models (for example the ones used to play Atari games), the inputs are images and the outputs are integers encoding the different action types (typically "move up" or "move down"). For our system (the powergrid) it is rather different. We did our best to simplify the task of transforming observations and actions to / from these complex structures. Indeed the use of converters such as [IdToAct](https://grid2op.readthedocs.io/en/latest/converter.html#grid2op.Converter.IdToAct) easily allows to:
# - convert the "Observation" class automatically into vectors
# - map an action given as an integer back to the complete action class described in the previous notebook.
#
# In essence, a converter substitutes itself for the "action space" of the Agent in such a way that:
# - the Agent manipulates a simple structure
# - the Converter ensures the mapping from this structure to the complex grid2op Action / Observation classes
# - so that, outside of the Agent, it is "as if" the Agent manipulated the original Action / Observation.
#
#
# #### A note on the converter
# To use such a converter, the Agent must inherit from the class [`grid2op.Agent.AgentWithConverter`](https://grid2op.readthedocs.io/en/latest/agent.html#grid2op.Agent.AgentWithConverter) and implement the following interface (shown here as an example):
#
#
# ```python
# from grid2op.Agent import AgentWithConverter
#
# class MyAgent(AgentWithConverter):
#     def __init__(self, action_space, action_space_converter=None):
#         super(MyAgent, self).__init__(action_space=action_space, action_space_converter=action_space_converter)
#         # for example you can define here all the actions you will consider
#         self.my_actions = [action_space(),
#                            action_space({"redispatching": [0, +1]}),
#                            action_space({"set_line_status": [(0, -1)]}),
#                            action_space({"change_bus": {"lines_or_id": [12]}}),
#                            ...
#                            ]
#         # or load them from a file, for example...
#         # self.my_actions = np.load("my_action_pre_selected.npy")
#
#         # you can also load a neural network in this agent...
#         self.my_nn_model = model.load("my_saved_neural_network_weights.h5")
#
#     def convert_obs(self, observation):
#         """
#         This method is used to convert the observation, given as an instance of the Observation class,
#         into a "transformed_observation" that will be manipulated by the agent.
#         The example here transforms the observation into a numpy array.
#
#         It is recommended to modify it to suit your needs.
#         """
#         return observation.to_vect()
#
#     def convert_act(self, encoded_act):
#         """
#         This method takes an "encoded_act" (for example an integer) and returns a valid grid2op action.
#         """
#         if encoded_act < 0 or encoded_act >= len(self.my_actions):
#             raise RuntimeError("Invalid action with id {}".format(encoded_act))
#         return self.my_actions[encoded_act]
#
#     def my_act(self, transformed_observation, reward, done=False):
#         """
#         This is the main function where you can take your decision.
#
#         Instead of calling "act(observation, reward, done)" you implement
#         "my_act(transformed_observation, reward, done)":
#         - it manipulates only "transformed_observation", which is fully flexible as you defined it in "convert_obs"
#         - and it returns an "encoded_act" that is then automatically digested by
#           "convert_act(encoded_act)" to return a valid action.
#
#         Here we suppose, as in many DQN agents, that `my_nn_model` returns a vector of size
#         nb_actions filled with numbers between 0 and 1, and we take the action with the highest score.
#         """
#         pred_score = self.my_nn_model.predict(transformed_observation, reward, done)
#         res = np.argmax(pred_score)
#         return res
# ```

# And that's it. Nothing else to do: your agent is ready to learn how to control the powergrid using only these 3 functions.
#
#
# **NB** A few things are worth noting:
# - if you use an agent with a converter, do not modify the method **act** but rather the method **my_act**. This is really important!
# - some automatic functions can compute the set of all possible actions, so there is no need to write "self.my_actions = ..." yourself. This was done here as an example.
# - if the converter is properly set up, you don't even need to modify "convert_obs(self, observation)" and "convert_act(self, encoded_act)", as this is already performed by the default implementation.

# ### II.C) Using the code of the Agent and training it

# #### a) Code of the agent
#
# Here we show the parts of the code implemented in the baseline that are the most interesting for this tutorial. For a full description of the code, you can check [here](https://github.com/rte-france/l2rpn-baselines/tree/master/l2rpn_baselines/DoubleDuelingDQN).
#
# This is the `DoubleDuelingDQN_NN.py` file:
#
# ```python
# import tensorflow.keras as tfk
# import tensorflow.keras.layers as tfkl
# import tensorflow.keras.activations as tfka
#
# class DoubleDuelingDQN_NN(object):
#     """Constructs the desired deep q learning network"""
#     def __init__(self,
#                  action_size,
#                  observation_size,
#                  HIDDEN_FOR_SIMPLICITY
#                  ):
#         self.action_size = action_size
#         self.observation_size = observation_size
#         HIDDEN_FOR_SIMPLICITY
#
#     def construct_q_network(self):
#         """
#         we show this here to tell you it is exactly like any keras implementation
#         """
#         input_layer = tfk.Input(shape=(self.observation_size * self.num_frames,), name="input_obs")
#         lay1 = tfkl.Dense(self.observation_size * 2, name="fc_1")(input_layer)
#         lay1 = tfka.relu(lay1, alpha=0.01)  # leaky_relu
#         ...
#         HIDDEN_FOR_SIMPLICITY
#         ...
#         self.model = tfk.Model(...)
#
#     def random_move(self):
#         """
#         Moves are encoded by a random number between 0 and the total number of actions.
#         Easy to do a random move, isn't it? :-)
#         """
#         opt_policy = np.random.randint(0, self.action_size)
#         return opt_policy
#
#     def predict_move(self, data):
#         """
#         predict the best move (and the q-values) from the stacked input frames
#         """
#         model_input = data.reshape(1, self.observation_size * self.num_frames)
#         q_actions = self.model.predict(model_input, batch_size=1)
#         opt_policy = np.argmax(q_actions)
#         return opt_policy, q_actions[0]
# ```
#
# This is the `DoubleDuelingDQN.py` file:
# ```python
# from grid2op.Agent import AgentWithConverter  # all converter agents should inherit from this
# from grid2op.Converter import IdToAct  # this is the automatic converter that turns actions given as IDs (integers)
#                                        # into valid grid2op actions (in particular it is able to compute all actions)
#
# from l2rpn_baselines.DoubleDuelingDQN.DoubleDuelingDQN_NN import DoubleDuelingDQN_NN
#
# class DoubleDuelingDQN(AgentWithConverter):
#     def __init__(self,
#                  observation_space,
#                  action_space,
#                  HIDDEN_FOR_SIMPLICITY
#                  ):
#         ...
#         HIDDEN_FOR_SIMPLICITY
#         ...
#         # Load network graph
#         self.Qmain = DoubleDuelingDQN_NN(self.action_size,
#                                          self.observation_size,
#                                          HIDDEN_FOR_SIMPLICITY)
#
#     ## Agent Interface
#     def convert_obs(self, observation):
#         # Custom version that normalizes each attribute separately
#         # returns an observation.to_vect()-like object, scaled accordingly
#         li_vect = []
#         for el in observation.attr_list_vect:
#             v = observation._get_array_from_attr_name(el).astype(np.float)
#             v_fix = np.nan_to_num(v)
#             v_norm = np.linalg.norm(v_fix)
#             if v_norm > 1e8:
#                 v_res = (v_fix / v_norm) * 10.0
#             else:
#                 v_res = v_fix
#             li_vect.append(v_res)
#         return np.concatenate(li_vect)
#
#     def convert_act(self, action):
#         """
#         Calling the convert_act method of the base class.
#         This is not mandatory, as this is the standard behaviour in OOP (object oriented programming).
#         """
#         return super().convert_act(action)
#
#     def my_act(self, state, reward, done=False):
#         """
#         The complete implementation of the my_act function
#         """
#         # Register current state to stacking buffer
#         self._save_current_frame(state)
#         # We need at least num_frames frames to predict
#         if len(self.frames) < self.num_frames:
#             return 0  # Do nothing
#         # Infer with the last num_frames states
#         a, _ = self.Qmain.predict_move(np.array(self.frames))  # self.Qmain is of type 'DoubleDuelingDQN_NN' defined above
#         return a
# ```
#
# #### b) Training the model
#
# Now we can define the model (agent), and then train it.
#
# For that we will use the "train" function provided in the `l2rpn_baselines` repository.
#
# **NB** The code below can take a few minutes to run. It is training a Deep Reinforcement Learning Agent, after all. If it takes too long on your machine, you can always decrease `train_iter`, the number of training iterations. In that case, the Agent will probably not be very good.
#
# **NB** A real Agent would take much longer to train.

# In[8]:

# create an environment
env = make(env_name, test=True)
# don't forget to set "test=False" (or remove it, as False is the default value) for "real" training

# import the train function and train your agent
from l2rpn_baselines.DoubleDuelingDQN import train
agent_name = "test_agent"
save_path = "saved_agent_DDDQN_{}".format(train_iter)
train(env,
      name=agent_name,
      iterations=train_iter,
      save_path=save_path,
      load_path=None,  # put something else if you want to reload an agent instead of creating a new one
      logs_path="tf_logs_DDDQN")

# Logs are saved in the "tf_logs_DDDQN" directory. To watch the training (you can even do it while the agent is training) you can type the following command (from a bash command line for example):
# ```
# tensorboard --logdir='tf_logs_DDDQN'
# ```

# ## III) Evaluating the Agent

# And now, time to test this trained agent.
#
# To do that, we have multiple choices.
#
# Either we recode the "DeepQAgent" class so that it loads the stored weights (the ones saved during training) when it is initialized (not covered in this notebook), or we directly specify the "instance" of the Agent to use in the Grid2Op Runner.
#
# Doing so is fairly simple. First, you need to specify that you won't use the "*agentClass*" argument, by setting it to ``None``, and second, you simply provide the agent to use through the "*agentInstance*" argument.
#
# **NB** If you don't do that, the Runner will not be created (the constructor will raise an exception). And if you choose to use the "*agentClass*" argument instead, your agent will be reloaded from scratch,
# so **if it doesn't load the weights** it will behave as a non-trained agent, unlikely to perform well on the task.

# ### III.A) Evaluate the Agent

# Now that we have "successfully" trained our Agent, we will evaluate it. As opposed to the training, the evaluation is done classically, using a standard Runner.
#
# Note that the Runner can use a "scoring function" that is different from the "reward function" used during training. In our case it is not: we use the `L2RPNReward` in both cases.
#
# In the code below, we commented on what can differ and what must be identical between the training and the evaluation of a model.

# In[9]:

from grid2op.Runner import Runner

# choose a scoring function (it might be different from the reward you used to train your agent)
from grid2op.Reward import L2RPNReward
scoring_function = L2RPNReward

# load your agent
from l2rpn_baselines.DoubleDuelingDQN import DoubleDuelingDQN
my_agent = DoubleDuelingDQN(env.observation_space, env.action_space)
my_agent.load(os.path.join(save_path, "{}.h5".format(agent_name)))

# here we do that to limit the computation time: we only assess the performance on "max_iter" iterations
dict_params = env.get_params_for_runner()
dict_params["gridStateclass_kwargs"]["max_iter"] = max_iter
# make a runner from an initialized environment
runner = Runner(**dict_params,
                agentClass=None,
                agentInstance=my_agent)

# Now we run the Agent and save the results. Although we have shown the "runner.run" call multiple times already, we never really dove into the "path_save" argument. This path allows you to save a lot of information about your Agent's behaviour. All the information stored there is described in the documentation [here](https://grid2op.readthedocs.io/en/latest/runner.html).

# In[10]:

import shutil

path_save = "trained_agent_log"
# delete the previously stored results
if os.path.exists(path_save):
    shutil.rmtree(path_save)
# run the episodes
res = runner.run(nb_episode=2, path_save=path_save)

print("The results for the trained agent are:")
for _, chron_name, cum_reward, nb_time_step, max_ts in res:
    msg_tmp = "\tFor chronics located at {}\n".format(chron_name)
    msg_tmp += "\t\t - total score: {:.6f}\n".format(cum_reward)
    msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)

# ### III.B) Inspect the Agent

# Please refer to the official documentation for more information about the content of the directory where the data are saved. Note that saving this information is triggered by the "path_save" argument of the "runner.run" function.
#
# If enabled, the Runner saves the information in a structured way. For each episode there is a folder containing:
#
# - "episode_meta.json", which stores some meta information:
#
#     - "backend_type": the name of the `grid2op.Backend` class used
#     - "chronics_max_timestep": the **maximum** number of timesteps for the chronics used
#     - "chronics_path": the path where the temporal data (chronics) are located
#     - "env_type": the name of the `grid2op.Environment` class used
# - "grid_path": the path where the powergrid has been loaded from # # - "episode_times.json": gives some information about the total time spend in multiple part of the runner, mainly the # `grid2op.Agent` (and especially its method `grid2op.Agent.act`) and amount of time spent in the # `grid2op.Environment` # # - "_parameters.json": is a representation as json of a the `grid2op.Parameters` used for this episode # - "rewards.npy" is a numpy 1d array giving the rewards at each time step. We adopted the convention that the stored # reward at index `i` is the one observed by the agent at time `i` and **NOT** the reward sent by the # `grid2op.Environment` after the action has been implemented. # - "exec_times.npy" is a numpy 1d array giving the execution time of each time step of the episode # - "actions.npy" gives the actions that has been taken by the `grid2op.Agent`. At row `i` of "actions.npy" is a # vectorized representation of the action performed by the agent at timestep `i` *ie.* **after** having observed # the observation present at row `i` of "observation.npy" and the reward showed in row `i` of "rewards.npy". # - "disc_lines.npy" gives which lines have been disconnected during the simulation of the cascading failure at each # time step. The same convention as for "rewards.npy" has been adopted. This means that the powerlines are # disconnected when the `grid2op.Agent` takes the `grid2op.Action` at time step `i`. # - "observations.npy" is a numpy 2d array reprensenting the `grid2op.Observation` at the disposal of the # `grid2op.Agent` when he took his action. # # We can first look at the repository were the data are stored: # In[11]: import os os.listdir(path_save) # As we can see, there is only one folder there. It's named "1" because, in the original data, this came from the folder named "1" (the original data are located at "/home/donnotben/.local/lib/python3.6/site-packages/grid2op/data/test_multi_chronics/") # # If there were multiple episode, each episode would have it's own folder, with a name as resemblant as possible to the origin name of the data. This is done to ease the studying of the results. # # Now let's see what is inside this folder: # In[12]: os.listdir(os.path.join(path_save, "0")) # We can for example load the "actions" performed by the Agent, and have a look at them. # # To do that we will load the action array (represented as vector) and use the action_space to convert it back into valid action class. # In[13]: from grid2op.Episode import EpisodeData this_episode = EpisodeData.from_disk(path_save, name="0") all_actions = this_episode.get_actions() li_actions = [] for i in range(all_actions.shape[0]): try: tmp = runner.env.action_space.from_vect(all_actions[i,:]) li_actions.append(tmp) except: break # In[14]: get_ipython().system('ls $path_save') # This allows us to have a deeper look at the action, and their effect. Note that here, we used action that can only **set** the line status, so looking at their effect is pretty straightforward. # # Also, note that as oppose to "change", if a powerline is already connected, trying to **set** it as connected has absolutely no impact. # In[15]: line_disc = 0 line_reco = 0 for act in li_actions: dict_ = act.as_dict() if "set_line_status" in dict_: line_reco += dict_["set_line_status"]["nb_connected"] line_disc += dict_["set_line_status"]["nb_disconnected"] line_reco # As wa can see for our event, the agent always try to reconnect a powerline. As all lines are alway reconnected, this Agent does basically nothing. 
# We can also do the same kind of post-analysis for the observations, even though here, as the observations come from files, it is probably not particularly interesting.

# In[16]:

all_observations = this_episode.get_observations()
li_observations = []
nb_real_disc = 0
for i in range(all_observations.shape[0]):
    try:
        tmp = runner.env.observation_space.from_vect(all_observations[i, :])
        li_observations.append(tmp)
        nb_real_disc += np.sum(tmp.line_status == False)
    except Exception:
        break
nb_real_disc

# We can also look at the types of action the agent took:

# In[17]:

actions_count = {}
for act in li_actions:
    act_as_vect = tuple(act.to_vect())
    if act_as_vect not in actions_count:
        actions_count[act_as_vect] = 0
    actions_count[act_as_vect] += 1
print("The agent took {} different valid actions:".format(len(actions_count)))

all_act = np.array(list(actions_count.keys()))
for act in all_act:
    print(runner.env.action_space.from_vect(act))

# ## IV) Improve your Agent

# As we saw, the agent we developed was not really interesting. To improve it, we could think about:
#
# - a better encoding of the observation. For now everything is fed to the neural network, without any normalization. This is a real problem for a learning algorithm.
# - a better neural network architecture (as said, we didn't pay any attention to it in our model)
# - training it for a longer time
# - adapting the learning rate and all the meta parameters of the learning algorithm
# - etc.
#
# In this notebook, we will focus on changing the observation representation, by feeding the agent only some of the information.
#
# To do so, the only thing we need to change is the way the observation is converted, *i.e.* the "*convert_obs*" method, and that is it. Nothing else needs to be changed. Here, for example, we could think of using only the flow ratio (*i.e.* the current flows divided by the thermal limits, named `rho` in the observation) instead of feeding the whole observation.

# In[18]:

class DoubleDuelingDQN_Improved(DoubleDuelingDQN):
    def convert_obs(self, observation):
        """
        By just changing this method, I change what is fed to the neural network :-)
        NB: I however need to tell the neural network, at initialization, about the change I made...
        """
        return observation.rho

    def __init__(self,
                 observation_space,
                 action_space,
                 name=__name__,
                 num_frames=4,
                 is_training=False,
                 batch_size=32,
                 lr=1e-5):
        """
        We have changed the size of the observation, so we need to create another neural network
        with the proper input size. That is why we need to override the constructor.
        """
        # Call parent constructor
        DoubleDuelingDQN.__init__(self,
                                  observation_space=observation_space,
                                  action_space=action_space,
                                  name=name,
                                  num_frames=num_frames,
                                  is_training=is_training,
                                  batch_size=batch_size,
                                  lr=lr)

        # import some constants and the neural network class of this baseline
        from l2rpn_baselines.DoubleDuelingDQN.DoubleDuelingDQN_NN import DoubleDuelingDQN_NN
        from l2rpn_baselines.DoubleDuelingDQN.DoubleDuelingDQN import LR_DECAY_STEPS, LR_DECAY_RATE

        # Compute the new observation dimension: one entry per powerline (rho)
        self.observation_size = self.obs_space.n_line

        # Load network graph
        self.Qmain = DoubleDuelingDQN_NN(self.action_size,
                                         self.observation_size,
                                         num_frames=self.num_frames,
                                         learning_rate=self.lr,
                                         learning_rate_decay_steps=LR_DECAY_STEPS,
                                         learning_rate_decay_rate=LR_DECAY_RATE)
        # Setup training vars if needed
        if self.is_training:
            self._init_training()

# And we can reuse the generic "train_generic" function provided by l2rpn_baselines to train it.
# In[19]:

from l2rpn_baselines.utils import train_generic

agent_name = "test_agent2"
save_path = "saved_agent_DDDQN2_{}".format(train_iter)
my_new_agent = DoubleDuelingDQN_Improved(env.observation_space,
                                         env.action_space,
                                         is_training=True,
                                         name=agent_name)
my_new_agent_trained = train_generic(agent=my_new_agent,
                                     env=env,
                                     iterations=train_iter,
                                     save_path=save_path)

# And we reuse the code we wrote above to assess its performance.

# In[20]:

runner2 = Runner(**dict_params,
                 agentClass=None,
                 agentInstance=my_new_agent_trained)
# run the episodes
res = runner2.run(nb_episode=2, path_save=path_save)

print("The results for the trained agent are:")
for _, chron_name, cum_reward, nb_time_step, max_ts in res:
    msg_tmp = "\tFor chronics located at {}\n".format(chron_name)
    msg_tmp += "\t\t - total score: {:.6f}\n".format(cum_reward)
    msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
    print(msg_tmp)
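# To summarize a run with a single number, you can aggregate the tuples contained in `res` (the same would work for the first agent if you keep its `res` in a separate variable). A small sketch, reusing the structure printed just above:
#
# ```python
# # each entry of "res" unpacks as (_, chronics name, cumulative reward, completed steps, max steps), as in the loop above
# total_score = sum(cum_reward for _, _, cum_reward, _, _ in res)
# total_steps = sum(nb_time_step for _, _, _, nb_time_step, _ in res)
# print("Improved agent: total score {:.2f} over {} completed time steps".format(total_score, total_steps))
# ```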