Uirá Caiado. Aug 10, 2016
In this project, I will present an adaptive learning model to trade a single stock under the reinforcement learning framework. This area of machine learning consists of training an agent by reward and punishment without needing to specify the expected action. The agent learns from its experience and develops a strategy that maximizes its profits. The simulation results show initial success in applying learning techniques to the construction of algorithmic trading strategies.
In this section, I will provide a high-level overview of the project, define the problem addressed and the metric used to measure the performance of the model created.
Udacity:
In this section, look to provide a high-level overview of the project in layman’s terms. Questions to ask yourself when writing this section:
- Has an overview of the project been provided, such as the problem domain, project origin, and related datasets or input data?
- Has enough background information been given so that an uninformed reader would understand the problem domain and following problem statement?
Nowadays, algorithmic trading represents almost half of all cash equity trading in Western Europe. In advanced markets, it already accounts for 40%-50% of total volume. In Brazil its market share is not as large – currently about 10% – but it is expected to rise in the years ahead as markets and players go digital.
As automated strategies become increasingly popular, building an intelligent system that can trade many times a day, adapt itself to market conditions and still consistently make money is a subject of keen interest to any market participant.
Given that it is hard to produce such a strategy, in this project I will try to build an algorithm that simply does better than a random agent, but learns by itself how to trade. To do so, I will feed my agent with four days of information about every trade and every change at the top of the order book of PETR4 - one of the most liquid assets in the Brazilian stock market - within a reinforcement learning framework. Later on, I will test what it has learned on a newer, unseen dataset.
The dataset used in this project is also known as level I order book data and includes all trades and all changes in the prices and total quantities at the best Bid (those who want to buy the stock) and best Ask (those who intend to sell the stock).
Udacity:
In this section, you will want to clearly define the problem that you are trying to solve, including the strategy (outline of tasks) you will use to achieve the desired solution. You should also thoroughly discuss what the intended solution will be for this problem. Questions to ask yourself when writing this section:
- Is the problem statement clearly defined? Will the reader understand what you are expecting to solve?
- Have you thoroughly discussed how you will attempt to solve the problem?
- Is an anticipated solution clearly defined? Will the reader understand what results you are looking for?
Algo trading strategies are usually programs that follow a predefined set of instructions to place their orders.
The primary challenge of this approach is building these rules in a way that consistently generates profit without being too sensitive to market conditions. Thus, the goal of this project is to develop an adaptive learning model that learns those rules by itself and trades a particular asset, using a reinforcement learning framework in an environment that replays historical high-frequency data.
As \cite{chan2001electronic} described, reinforcement learning can be considered a model-free approximation of dynamic programming. The knowledge of the underlying processes is not assumed but learned from experience. The agent can access some information about the environment state, such as the order flow imbalance and the sizes of the best bid and offer. At each time step t, it should generate some valid action, such as buying stocks or inserting a limit order on the ask side. The agent should also receive a reward or a penalty at each time step if it is already carrying a position from previous rounds or if it has made a trade (the cost of the operations is computed as a penalty). Based on the rewards and penalties it gets, the agent should learn an optimal policy for trading this particular stock, maximizing the profit it receives from its actions and resulting positions.
Udacity Reviewer:
This is really quite close! I'm marking as not meeting specifications because you should fully outline your solution here. You've outlined your strategy regarding reinforcement learning, but you should also address things like data preprocessing, choosing your state space etc. Basically, this section should serve as an outline for your entire solution. Just add a paragraph or two to fully outline your proposed methodology and you're good to go.
This project starts with an overview of the dataset and shows how the environment states will be represented in Section 2. The same section also dives into the reinforcement learning framework and defines the benchmark used at the end of the project. Section 3 discretizes the environment states by transforming its variables and clustering them into six groups; it also describes the implementation of the model and the environment, as well as the process of improvement made upon the algorithm used. Section 4 presents the final model and statistically compares its performance to the chosen benchmark. Section 5 concludes the project with some closing remarks and possible improvements.
Udacity:
In this section, you will need to clearly define the metrics or calculations you will use to measure performance of a model or result in your project. These calculations and metrics should be justified based on the characteristics of the problem and problem domain. Questions to ask yourself when writing this section:
- Are the metrics you’ve chosen to measure the performance of your models clearly discussed and defined?
- Have you provided reasonable justification for the metrics chosen based on the problem and solution?
Udacity Reviewer:
The section on metrics should address any statistics or metrics that you'll be using in your report. What you've written in your benchmark section is roughly what we're looking for for the metrics section and vice versa. I'd recommend changing the subtitles to clarify this. If it's more logical to introduce the benchmark before explaining your metrics, you could combine the 'Benchmark' and 'Metrics' subsections into a single 'Benchmark and Metrics' section.
Different metrics are used to support the decisions made throughout the project. We use the mean Silhouette Coefficient of all samples to justify the clustering method chosen to reduce the state space representation of the environment. As exposed in the scikit-learn documentation, this coefficient is composed of the mean intra-cluster distance ($a$) and the mean nearest-cluster distance ($b$) for each sample. The score for a single sample is given by $s = \frac{b - a}{\max(a, b)}$. This score is then averaged over all samples and varies between $1$ (the best value) and $-1$ (the worst value).
Then, we use the Sharpe ratio to help us understand the performance impact of different values of the model parameters. The Sharpe ratio is measured on the first difference ($\Delta r$) of the accumulated PnL curve of the model, where the first difference is defined as $\Delta r_t = PnL_t - PnL_{t-1}$.
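To illustrate this metric, below is a minimal sketch that computes the Sharpe ratio of a toy cumulative PnL curve (hypothetical numbers, not project results):

import numpy as np

# toy cumulative PnL curve (hypothetical values, in Reais)
na_pnl = np.array([0.0, 2.0, 1.0, 4.0, 6.0, 9.0])

# first difference of the accumulated PnL and its Sharpe ratio
na_delta_r = np.diff(na_pnl)
f_sharpe = na_delta_r.mean() / na_delta_r.std()
print "Sharpe ratio: {:0.2f}".format(f_sharpe)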
Finally, as we shall justify later, the performance of my agent will be compared to the performance of a random agent. These performances will be measured primarily in Reais (the Brazilian currency) made by the agents. To compare the final PnL of both agents in the simulations, we will perform a one-sided Welch's unequal variances t-test for the alternative hypothesis that the learning agent's expected PnL is greater than the random agent's. As the implementation of the t-test in scipy assumes a two-sided test, to perform the one-sided test we will divide the p-value by 2, compare it to a critical value of 0.05 and require that the t-value is greater than zero. In the next section, I will detail the behavior of the learning agent.
In this section, I will explore the data set that will be used in the simulation, define and justify the inputs employed in the state representation of the algorithm, explain the reinforcement learning techniques used and provide a benchmark.
Udacity:
In this section, you will be expected to analyze the data you are using for the problem. This data can either be in the form of a dataset (or datasets), input data (or input files), or even an environment. The type of data should be thoroughly described and, if possible, have basic statistics and information presented (such as discussion of input features or defining characteristics about the input or environment). Any abnormalities or interesting qualities about the data that may need to be addressed have been identified (such as features that need to be transformed or the possibility of outliers). Questions to ask yourself when writing this section:
- If a dataset is present for this problem, have you thoroughly discussed certain features about the dataset? Has a data sample been provided to the reader?
- If a dataset is present for this problem, are statistics about the dataset calculated and reported? Have any relevant results from this calculation been discussed?
- If a dataset is **not** present for this problem, has discussion been made about the input space or input data for your problem?
- Are there any abnormalities or characteristics about the input space or dataset that need to be addressed? (categorical variables, missing values, outliers, etc.)
The dataset used is composed of level I order book data from PETR4, a stock traded on the BMFBovespa Stock Exchange. It includes 45 trading sessions from 07/25/2016 to 09/26/2016. I will use one day to create the scalers of the features used, which I shall explain later. Then, I will use four days to train and test the model and, after each training session, I will validate the policy found on an unseen dataset from the subsequent day. The data was collected from Bloomberg.
The figure below shows how the market behaved on the days on which the out-of-sample tests will be performed. The charts plot the number of cents by which an investment of the same amount of money in PETR4 and in BOVA11 would have varied in these market sessions. BOVA11 is an ETF that can be used as a proxy for the Bovespa Index, the Brazilian stock exchange index. As can be seen, PETR4 was relatively more volatile than the rest of the market.
import zipfile
s_fname = "data/data_0725_0926.zip"
s_fname2 = "data/bova11_2.zip"
archive = zipfile.ZipFile(s_fname, 'r')
archive2 = zipfile.ZipFile(s_fname2, 'r')
l_fnames = archive.infolist()
import qtrader.eda as eda; reload(eda);
df_last_pnl = eda.plot_cents_changed(archive, archive2)
Let's start by looking at the size of the files that can be used in the simulation:
def foo():
    f_total = 0.
    f_tot_rows = 0.
    for i, x in enumerate(archive.infolist()):
        f_total += x.file_size / 1024.**2
        for num_rows, row in enumerate(archive.open(x)):
            f_tot_rows += 1
        print "{}:\t{:,.0f} rows\t{:0.2f} MB".format(x.filename, num_rows + 1, x.file_size / 1024.**2)
    print '=' * 42
    print "TOTAL\t\t{} files\t{:0.2f} MB".format(i + 1, f_total)
    print "\t\t{:0,.0f} rows".format(f_tot_rows)
%time foo()
20160725.csv: 110,756 rows 4.42 MB 20160726.csv: 100,109 rows 3.98 MB 20160727.csv: 123,175 rows 4.93 MB 20160728.csv: 109,655 rows 4.37 MB 20160729.csv: 135,111 rows 5.40 MB 20160801.csv: 109,710 rows 4.37 MB 20160802.csv: 108,053 rows 4.30 MB 20160803.csv: 137,039 rows 5.49 MB 20160804.csv: 139,118 rows 5.56 MB 20160805.csv: 112,852 rows 4.51 MB 20160808.csv: 89,730 rows 3.55 MB 20160809.csv: 83,826 rows 3.33 MB 20160810.csv: 105,758 rows 4.21 MB 20160811.csv: 144,728 rows 5.81 MB 20160812.csv: 147,086 rows 5.90 MB 20160815.csv: 108,633 rows 4.33 MB 20160816.csv: 108,795 rows 4.33 MB 20160817.csv: 118,980 rows 4.75 MB 20160818.csv: 84,489 rows 3.36 MB 20160819.csv: 98,329 rows 4.00 MB 20160822.csv: 98,594 rows 4.02 MB 20160823.csv: 90,752 rows 3.69 MB 20160824.csv: 87,930 rows 3.56 MB 20160825.csv: 95,929 rows 3.89 MB 20160826.csv: 152,547 rows 6.24 MB 20160829.csv: 98,630 rows 4.02 MB 20160830.csv: 122,067 rows 4.98 MB 20160831.csv: 155,391 rows 6.37 MB 20160901.csv: 150,122 rows 6.15 MB 20160902.csv: 147,257 rows 6.04 MB 20160905.csv: 70,243 rows 2.86 MB 20160906.csv: 109,355 rows 4.46 MB 20160908.csv: 140,519 rows 5.77 MB 20160909.csv: 142,940 rows 5.86 MB 20160912.csv: 171,462 rows 7.02 MB 20160913.csv: 224,427 rows 9.25 MB 20160914.csv: 172,215 rows 7.05 MB 20160915.csv: 139,648 rows 5.72 MB 20160916.csv: 119,952 rows 4.90 MB 20160919.csv: 126,815 rows 5.18 MB 20160920.csv: 149,962 rows 6.15 MB 20160921.csv: 163,128 rows 6.70 MB 20160922.csv: 163,957 rows 6.74 MB 20160923.csv: 159,513 rows 6.56 MB 20160926.csv: 101,986 rows 4.15 MB ========================================== TOTAL 45 files 228.26 MB 5,631,273 rows CPU times: user 18.7 s, sys: 161 ms, total: 18.9 s Wall time: 18.9 s
There are 45 files, each with about 110,000 rows on average, resulting in 5,631,273 rows in total and almost 230 MB of information. Now, let's look at the structure of one of them:
import pandas as pd
df = pd.read_csv(archive.open(l_fnames[0]), index_col=0, parse_dates=['Date'])
df.head()
|   | Date | Type | Price | Size |
|---|---|---|---|---|
| 0 | 2016-07-25 10:02:00 | TRADE | 11.98 | 5800 |
| 1 | 2016-07-25 10:02:00 | BID | 11.97 | 6100 |
| 2 | 2016-07-25 10:02:00 | ASK | 11.98 | 51800 |
| 3 | 2016-07-25 10:02:00 | ASK | 11.98 | 56800 |
| 4 | 2016-07-25 10:02:00 | ASK | 11.98 | 56900 |
Each file is composed of four different fields. The column Date is the timestamp of the row and has a precision of seconds. Type is the kind of information that the row encompasses: "TRADE" relates to an actual trade that has happened, "BID" relates to changes at the best bid level and "ASK" to changes at the best offer level. Price is the current best bid or ask and Size is the cumulative quantity at that price and side.
All this data will be used to create the environment where my agent will operate. This environment is an order book, where the agent will be able to insert limit orders and execute trades at the best prices. The order book is represented by two binary trees, one for the Bid side and another for the Ask side. As can be seen in the table below, the nodes of these trees are sorted by price (price level), in descending order on the Bid side and in ascending order on the Ask side. At each price level, there is another binary tree sorted by order of arrival: the first order to arrive is the first order filled when a trade comes in.
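To make this structure concrete, the sketch below shows one way a price-sorted side with FIFO queues per level could be represented. This is an illustration only, with assumed names; the actual qtrader environment may be implemented differently.

from collections import deque

class BookSide(object):
    """One side of a limit order book: price levels holding FIFO order queues."""
    def __init__(self, b_is_bid):
        self.b_is_bid = b_is_bid   # the best price is the highest on the bid side
        self.d_levels = {}         # price -> deque of (order_id, qty), first in, first filled

    def add_order(self, f_price, s_order_id, i_qty):
        self.d_levels.setdefault(f_price, deque()).append((s_order_id, i_qty))

    def best_price(self):
        if not self.d_levels:
            return None
        return max(self.d_levels) if self.b_is_bid else min(self.d_levels)

    def qty_at(self, f_price):
        return sum(i_qty for _, i_qty in self.d_levels.get(f_price, []))

book_bid, book_ask = BookSide(b_is_bid=True), BookSide(b_is_bid=False)
book_bid.add_order(12.02, 'b1', 61400)
book_ask.add_order(12.03, 'a1', 13800)
print book_bid.best_price(), book_ask.best_price()  # 12.02 12.03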
import qtrader.simulator as simulator
import qtrader.environment as environment
e = environment.Environment()
sim = simulator.Simulator(e)
%time sim.run(n_trials=1)
CPU times: user 13.8 s, sys: 45.4 ms, total: 13.9 s Wall time: 13.9 s
sim.env.get_order_book()
|   | qBid | Bid | Ask | qAsk |
|---|---|---|---|---|
| 0 | 61,400 | 12.02 | 12.03 | 13,800 |
| 1 | 47,100 | 12.01 | 12.04 | 78,700 |
| 2 | 51,700 | 12.00 | 12.05 | 20,400 |
| 3 | 37,900 | 11.99 | 12.06 | 23,100 |
| 4 | 97,000 | 11.98 | 12.07 | 27,900 |
The environment will answer with the agent's current position and Profit and Loss (PnL) every time the agent executes a trade or has an order filled. The cost of the trade will be accounted as a penalty.
The agent will also be able to sense the state of the environment and include it in its own state representation. So, this internal state will be represented by a set of variables describing the current situation of the market and the state of the agent itself.
Udacity:
Exploratory Visualization:
In this section, you will need to provide some form of visualization that summarizes or extracts a relevant characteristic or feature about the data. The visualization should adequately support the data being used. Discuss why this visualization was chosen and how it is relevant. Questions to ask yourself when writing this section:
- Have you visualized a relevant characteristic or feature about the dataset or input data?
- Is the visualization thoroughly analyzed and discussed?
- If a plot is provided, are the axes, title, and datum clearly defined?
There are many ways to measure the Order Flow Imbalance (OFI). \cite{cont2014price} argued that the order flow imbalance is a measure of supply/demand imbalance and defined it as the sum of individual event contributions $e_n$ over time intervals $[t_{k-1}, t_k]$, such that:

$$OFI_k = \sum_{n=N(t_{k-1})+1}^{N(t_k)} e_n$$

where $N(t_{k-1})+1$ and $N(t_k)$ are the indices of the first and last events in the interval. The contribution $e_n$ was defined by the authors as a measure of the effect of the $n$-th event on the sizes of the bid and ask queues:

$$e_n = \mathbb{1}_{\{P^B_n \ge P^B_{n-1}\}} q^B_n - \mathbb{1}_{\{P^B_n \le P^B_{n-1}\}} q^B_{n-1} - \mathbb{1}_{\{P^A_n \le P^A_{n-1}\}} q^A_n + \mathbb{1}_{\{P^A_n \ge P^A_{n-1}\}} q^A_{n-1}$$

where $q^B_n$ and $q^A_n$ are the cumulative quantities at the best bid and ask at time $n$, the subscript $n-1$ refers to the previous observation and $\mathbb{1}$ is the indicator function. The figure below plots the 10-second log-return of PETR4 against the contemporaneous OFI. The log-return is defined as $\ln r_t = \ln \frac{P_t}{P_{t-1}}$, where $P_t$ is the current price of PETR4 and $P_{t-1}$ is the previous one.
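A minimal sketch of the contribution $e_n$ computed from two consecutive best bid/ask snapshots is shown below (toy values; the project's own computation lives in qtrader.eda, which may differ in the details):

def ofi_contribution(f_bid, i_qbid, f_ask, i_qask,
                     f_bid_prev, i_qbid_prev, f_ask_prev, i_qask_prev):
    """Contribution e_n of one best bid/ask update to the Order Flow Imbalance."""
    f_en = 0.
    if f_bid >= f_bid_prev:
        f_en += i_qbid         # + q^B_n      when the best bid did not decrease
    if f_bid <= f_bid_prev:
        f_en -= i_qbid_prev    # - q^B_{n-1}  when the best bid did not increase
    if f_ask <= f_ask_prev:
        f_en -= i_qask         # - q^A_n      when the best ask did not increase
    if f_ask >= f_ask_prev:
        f_en += i_qask_prev    # + q^A_{n-1}  when the best ask did not decrease
    return f_en

# the OFI over an interval [t_{k-1}, t_k] is just the sum of the e_n of its events
print ofi_contribution(12.02, 61400, 12.03, 13800, 12.01, 47100, 12.03, 20000)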
import qtrader.eda as eda; reload(eda);
s_fname = "data/petr4_0725_0818_2.zip"
%time eda.test_ofi_indicator(s_fname, f_min_time=20.)
CPU times: user 2.48 s, sys: 16 ms, total: 2.49 s Wall time: 2.59 s
import pandas as pd
df = pd.read_csv('data/ofi_petr.txt', sep='\t')
df.drop('TIME', axis=1, inplace=True)
df.dropna(inplace=True)
ax = sns.lmplot(x="OFI", y="LOG_RET", data=df, markers=["x"], palette="Set2", size=4, aspect=2.)
ax.ax.set_title(u'Relation between the Log-return and the $OFI$\n', fontsize=15);
ax.ax.set_ylim([-0.004, 0.005])
ax.ax.set_xlim([-400000, 400000])
(-400000, 400000)
As described by \cite{cont2014price} in a similar test, the figure suggests that the order flow imbalance is a strong driver of high-frequency price changes, so this variable will be used to describe the current state of the order book.
Udacity:
In this section, you will need to discuss the algorithms and techniques you intend to use for solving the problem. You should justify the use of each one based on the characteristics of the problem and the problem domain. Questions to ask yourself when writing this section:
- Are the algorithms you will use, including any default variables/parameters in the project clearly defined?
- Are the techniques to be used thoroughly discussed and justified?
- Is it made clear how the input data or datasets will be handled by the algorithms and techniques chosen?
Based on \cite{cont2014price}, algorithmic trading can be conveniently modeled in the framework of reinforcement learning. As suggested by \cite{du1algorithm}, this framework adjusts the parameters of an agent to maximize the expected payoff or reward generated by its actions. Therefore, the agent learns a policy that tells it which actions to perform to achieve its best performance. This optimal policy is exactly what we hope to find when building an automated trading strategy.
According to \cite{chan2001electronic}, Markov decision processes (MDPs) are the most common model used when implementing reinforcement learning. The MDP model of the environment consists, among other things, of a discrete set of states S and a discrete set of actions A. In this project, depending on the position of the learner (long or short), at each time step t the agent will be allowed to choose an action $a_t$ from different subsets of the action space A, which consists of six possible actions:
$$a_t \in \{None,\ buy,\ sell,\ best\_bid,\ best\_ask,\ best\_both\}$$

where None indicates that the agent shouldn't have any order in the market. Buy and Sell mean that the agent should execute a market order to buy or sell 100 shares (the size of an order will always be a hundred shares); this kind of action will be allowed based on a trailing stop of 4 cents. best_bid and best_ask indicate that the agent should keep an order at the best price only on the mentioned side, and best_both, that it should keep orders at the best price on both sides.
So, at each discrete time step t, the agent senses the current state $s_t$ and chooses an action $a_t$. The environment responds by providing the agent a reward $r_t = r(s_t, a_t)$ and by producing the succeeding state $s_{t+1} = \delta(s_t, a_t)$. The functions $r$ and $\delta$ depend only on the current state and action (they are memoryless), are part of the environment and are not necessarily known to the agent.
The task of the agent is to learn a policy π that maps each state to an action (π:S→A), selecting its next action at based solely on the current observed state st, that is π(st)=at. The optimal policy, or control strategy, is the one that produces the greatest possible cumulative reward over time. So, stating that:
$$V^\pi(s_t) = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots = \sum_{i=0}^{\infty} \gamma^i r_{t+i}$$

where $V^\pi(s_t)$ is also called the discounted cumulative reward and represents the cumulative value achieved by following a policy $\pi$ from an initial state $s_t$, and $\gamma \in [0, 1]$ is a constant that determines the relative value of delayed versus immediate rewards.
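As a toy numerical illustration of the discounted cumulative reward (hypothetical reward sequence, not project data):

# value of the same reward stream under different discount factors
l_rewards = [1.0, 1.0, 1.0, 1.0]
for f_gamma in [0.0, 0.5, 0.9]:
    f_v = sum((f_gamma ** i) * f_r for i, f_r in enumerate(l_rewards))
    print "gamma = {:0.1f} -> V = {:0.3f}".format(f_gamma, f_v)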
If we set $\gamma = 0$, only immediate rewards are considered. As $\gamma \rightarrow 1$, future rewards are given greater emphasis relative to the immediate reward. The optimal policy $\pi^*$ that maximizes $V^\pi(s_t)$ for all states $s$ can be written as:
$$\pi^* = \arg\max_\pi V^\pi(s), \quad \forall s$$

However, learning $\pi^*: S \rightarrow A$ directly is difficult because the available training data does not provide training examples of the form $(s, a)$. Instead, as \cite{Mitchell} explained, the only available information is the sequence of immediate rewards $r(s_i, a_i)$ for $i = 1, 2, 3, \dots$
So, as we are trying to maximize the cumulative reward $V^*(s_t)$ for all states $s$, the agent should prefer $s_1$ over $s_2$ whenever $V^*(s_1) > V^*(s_2)$. Given that the agent must choose among actions and not states, and that it isn't able to perfectly predict the immediate reward and immediate successor for every possible state-action transition, we must also learn $V^*$ indirectly.
To solve that, we define a function Q(s,a) such that its value is the maximum discounted cumulative reward that can be achieved starting from state s and applying action a as the first action. So, we can write:
$$Q(s, a) = r(s, a) + \gamma V^*(\delta(s, a))$$

As $\delta(s, a)$ is the state resulting from applying action $a$ to state $s$ (the successor), chosen by following the optimal policy, $V^*$ is the cumulative value of the immediate successor state discounted by a factor $\gamma$. Thus, what we are trying to achieve is
$$\pi^*(s) = \arg\max_a Q(s, a)$$

This implies that the optimal policy can be obtained even if the agent just uses the current action $a$ and state $s$ and chooses the action that maximizes $Q(s, a)$. It is also important to notice that the equation above implies that the agent can select optimal actions even when it has no knowledge of the functions $r$ and $\delta$.
Lastly, according to \cite{Mitchell}, there are some conditions to ensure that reinforcement learning converges toward an optimal policy. On a deterministic MDP, the agent must select actions in a way that it visits every possible state-action pair infinitely often. This requirement can be a problem in the environment in which the agent will operate.
As most of the inputs suggested in the last subsection are defined over an infinite space, in Section 3 I will discretize those numbers before using them to train my agent, hopefully keeping the state space representation manageable. We will also see how \cite{Mitchell} defined a reliable way to estimate training values for Q, given only a sequence of immediate rewards r.
Udacity:
In this section, you will need to provide a clearly defined benchmark result or threshold for comparing across performances obtained by your solution. The reasoning behind the benchmark (in the case where it is not an established result) should be discussed. Questions to ask yourself when writing this section:
- Has some result or value been provided that acts as a benchmark for measuring performance?
- Is it clear how this result or value was obtained (whether by data or by hypothesis)?
In 1988, the Wall Street Journal created a Dartboard Contest, in which Journal staffers threw darts at a stock table to select their assets, while investment experts picked their own stocks. After six months, they compared the results of the two methods and, after adjusting the results for risk level, found out that the pros had barely beaten the random pickers.
Given that, the benchmark used to measure the performance of the learner will be the amount of money made, in Reais, by a random agent. So, my goal will be to outperform an agent that just produces random actions from the set of allowed actions taken from A at each time step t.
Just as for my learner, the set of actions can change over time depending on the open position, which is limited to 100 shares at most on either side. When the agent reaches its limit, it will be allowed to perform only actions that decrease its position. So, for instance, if it is already long 100 shares, the possible moves would be $(None, sell, best\_ask)$. If it is short, it can only perform $(None, buy, best\_bid)$.
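A small sketch of how this position-dependent filtering could look (action names follow the text; the actual interface in qtrader may differ):

def allowed_actions(i_position):
    """Return the valid actions given the current open position (at most 100 shares)."""
    if i_position >= 100:     # long at the limit: only position-reducing actions
        return ['None', 'sell', 'best_ask']
    if i_position <= -100:    # short at the limit: only position-reducing actions
        return ['None', 'buy', 'best_bid']
    return ['None', 'buy', 'sell', 'best_bid', 'best_ask', 'best_both']

print allowed_actions(100), allowed_actions(-100), allowed_actions(0)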
The performance will be measured primarily by the money made by the agents (which is what the learner will optimize). First, I will analyze whether the learning agent was able to improve its performance on the same dataset after different trials. Later on, I will use the policy learned to simulate the learning agent's behavior on a different dataset and then compare the final Profit and Loss of both agents. All data analyzed will be obtained by simulation.
As a last reference, in the final section we will compare the total return of the learner to a buy-and-hold strategy in BOVA11 and in the stock traded, to check whether we are consistently beating the market and not just being profitable, as the Udacity reviewer noted.
In this section, I will discretize the input space and implement an agent to learn the Q function.
Udacity:
In this section, all of your preprocessing steps will need to be clearly documented, if any were necessary. From the previous section, any of the abnormalities or characteristics that you identified about the dataset will be addressed and corrected here. Questions to ask yourself when writing this section:
- If the algorithms chosen require preprocessing steps like feature selection or feature transformations, have they been properly documented?
- Based on the **Data Exploration** section, if there were abnormalities or characteristics that needed to be addressed, have they been properly corrected?
- If no preprocessing is needed, has it been made clear why?
As mentioned before, I will implement a Markov decision process (MDP), which requires, among other things, a discrete set of states S. Apart from the input variables position, OrderBid and OrderAsk, the other variables are defined over an infinite domain. I am going to discretize those inputs so my learning agent can use them in the representation of its internal state. In the figure below, we can see the distribution of those variables. The data was produced using the first day of the dataset.
import pandas as pd
df = pd.read_csv('data/ofi_petr.txt', sep='\t')
df.drop(['TIME', 'DELTA_MID'], axis=1, inplace=True)
df.dropna(inplace=True)
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(df.ix[:, ['OFI', 'BOOK_RATIO']],
alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Udacity Reviewer:
Please be sure to specify how you are doing this (I'd recommend giving the formula).
The scales of the variables are very different and, in the case of the BOOK_RATIO, it presents a logarithmic distribution. I will apply a logarithmic transformation to this variable and rescale both to lie between a given minimum and maximum value of each feature using the function MinMaxScaler from scikit-learn. So, both variables will be scaled to lie between 0 and 1 by applying the formula $z_i = \frac{x_i - \min X}{\max X - \min X}$, where $z_i$ is the transformed variable, $x_i$ is the variable to be transformed and $X$ is a vector with all the $x$ that will be transformed. The result of the transformation can be seen in the figure below.
import sklearn.preprocessing as preprocessing
import numpy as np
scaler_ofi = preprocessing.MinMaxScaler().fit(pd.DataFrame(df.OFI))
scaler_bookratio = preprocessing.MinMaxScaler().fit(pd.DataFrame(np.log(df.BOOK_RATIO)))
d_transformed = {}
d_transformed['OFI'] = scaler_ofi.transform(pd.DataFrame(df.OFI)).T[0]
d_transformed['BOOK_RATIO'] = scaler_bookratio.transform(pd.DataFrame(np.log(df.BOOK_RATIO))).T[0]
df_transformed = pd.DataFrame(d_transformed)
pd.scatter_matrix(df_transformed.ix[:, ['OFI', 'BOOK_RATIO']],
alpha = 0.3, figsize = (14,8), diagonal = 'kde');
As mentioned before, in an MDP environment the agent must visit every possible state-action pair infinitely often. If I just bucketize the variables and combine them, I will end up with a huge number of states to explore. So, to reduce the state space, I am going to group those variables using the K-Means and Gaussian Mixture Model (GMM) clustering algorithms. Then I will quantify the "goodness" of the clustering results by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster, from -1 (dissimilar) to 1 (similar). In the figure below, I calculate the mean silhouette coefficient for K-Means and GMM using different numbers of clusters. I will also test different covariance structures for the GMM.
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.mixture import GMM
import time
reduced_data = df_transformed.ix[:, ['OFI', 'BOOK_RATIO']]
reduced_data.columns = ['Dimension 1', 'Dimension 2']
range_n_clusters = [2, 3, 4, 5, 6, 8, 10]
f_st = time.time()
d_score = {}
d_model = {}
s_key = "Kmeans"
d_score[s_key] = {}
d_model[s_key] = {}
for n_clusters in range_n_clusters:
    # fit K-Means and score the clustering with the mean silhouette coefficient
    clusterer = KMeans(n_clusters=n_clusters, random_state=10)
    preds = clusterer.fit_predict(reduced_data)
    d_model[s_key][n_clusters] = clusterer
    d_score[s_key][n_clusters] = metrics.silhouette_score(reduced_data, preds)
print "K-Means took {:0.2f} seconds to run over all complexity space".format(time.time() - f_st)
f_avg = 0
for covar_type in ['spherical', 'diag', 'tied', 'full']:
    f_st = time.time()
    s_key = "GMM_{}".format(covar_type)
    d_score[s_key] = {}
    d_model[s_key] = {}
    for n_clusters in range_n_clusters:
        # fit a GMM with the given covariance structure and score the clustering
        clusterer = GMM(n_components=n_clusters,
                        covariance_type=covar_type,
                        random_state=10)
        clusterer.fit(reduced_data)
        preds = clusterer.predict(reduced_data)
        d_model[s_key][n_clusters] = clusterer
        d_score[s_key][n_clusters] = metrics.silhouette_score(reduced_data, preds)
    f_avg += time.time() - f_st
print "GMM took {:0.2f} seconds on average to run over all complexity space".format(f_avg / 4.)
K-Means took 4.59 seconds to run over all complexity space GMM took 16.85 seconds on average to run over all complexity space
import pandas as pd
ax = pd.DataFrame(d_score).plot()
ax.set_xlabel("Number of Clusters")
ax.set_ylabel("Silhouette Score\n")
ax.set_title("Performance vs Complexity\n", fontsize = 16);
The maximum score was achieved with 2 clusters. However, I believe that the market can't be simplified that much, so I will use K-Means with six centroids to group the variables. In the figure below we can see how the algorithm classified the data. In the following table, the centroids are shown in their original scales.
# get centers
sample_preds = []
centers = d_model["Kmeans"][6].cluster_centers_
preds = d_model["Kmeans"][6].fit_predict(reduced_data)
# Display the results of the clustering from implementation
import qtrader.eda as eda; reload(eda);
eda.cluster_results(reduced_data, preds, centers)
# recovering data
log_centers = centers.copy()
df_aux = pd.DataFrame([np.exp(scaler_bookratio.inverse_transform(log_centers.T[0].reshape(1, -1))[0]),
scaler_ofi.inverse_transform(log_centers.T[1].reshape(1, -1))[0]]).T
df_aux.columns = df_transformed.columns
df_aux.index.name = 'CLUSTER'
df_aux.columns = ['BOOK RATIO', 'OFI']
df_aux.round(2)
| CLUSTER | BOOK RATIO | OFI |
|---|---|---|
| 0 | 0.89 | -173662.94 |
| 1 | 0.91 | 281563.51 |
| 2 | 0.76 | 116727.32 |
| 3 | 0.85 | -16602.29 |
| 4 | 7.91 | 23334.13 |
| 5 | 0.09 | 34240.00 |
Curiously, the algorithm placed more emphasis on the BOOK_RATIO when its value was very large (the bid size almost eight times greater than the ask size) or very small (the bid size about one tenth of the ask size). The other clusters seem mostly dominated by the OFI. In the next subsection, I will discuss how I have implemented Q-learning, how I intend to perform the simulations, and make some tests. Lastly, let's serialize the objects used in the clustering to be used later.
import pickle
pickle.dump(d_model["Kmeans"][6] ,open('data/kmeans_2.dat', 'w'))
pickle.dump(scaler_ofi, open('data/scale_ofi_2.dat', 'w'))
pickle.dump(scaler_bookratio, open('data/scale_bookratio_2.dat', 'w'))
print 'Done !'
Done !
Udacity:
In this section, the process for which metrics, algorithms, and techniques that you implemented for the given data will need to be clearly documented. It should be abundantly clear how the implementation was carried out, and discussion should be made regarding any complications that occurred during this process. Questions to ask yourself when writing this section:
- Is it made clear how the algorithms and techniques were implemented with the given datasets or input data?
- Were there any complications with the original metrics or techniques that required changing prior to acquiring a solution?
- Was there any part of the coding process (e.g., writing complicated functions) that should be documented?
As we have seen, learning the Q function corresponds to learning the optimal policy. According to \cite{Mohri_2012}, the optimal state-action value function Q∗ is defined for all (s,a)∈S×A as the expected return for taking the action a∈A at the state s∈S, following the optimal policy. So, it can be written as \cite{Mitchell} suggested:
$$V^*(s) = \max_{a'} Q(s, a')$$

Using this relationship, we can write a recursive definition of the Q function, such that:
$$Q(s, a) = r(s, a) + \gamma \max_{a'} Q(\delta(s, a), a')$$

The recursive nature of the function above implies that our agent doesn't know the actual Q function; it can only estimate Q, which we will refer to as $\hat{Q}$. It will represent its hypothesis $\hat{Q}$ as a large table that assigns to each pair $(s, a)$ a value for $\hat{Q}(s, a)$ - the current hypothesis about the actual but unknown value $Q(s, a)$. I will initialize this table with zeros, but it could be filled with random numbers, according to \cite{Mitchell}. Still according to him, the agent should repeatedly observe its current state s and do the following (Algorithm 1):

- choose some action $a$ and execute it;
- receive the immediate reward $r = r(s, a)$;
- observe the new state $s' = \delta(s, a)$;
- update the table entry $\hat{Q}(s, a) \leftarrow r + \gamma \max_{a'} \hat{Q}(s', a')$;
- set $s \leftarrow s'$.
The main issue with the strategy presented in Algorithm 1 is that the agent could overcommit to actions that presented positive $\hat{Q}$ values early in the simulation, failing to explore other actions that could have even higher values. \cite{Mitchell} proposed using a probabilistic approach to select actions, assigning higher probabilities to actions with high $\hat{Q}$ values, while giving every action at least a nonzero probability. So, I will implement the following relation:
$$P(a_i \mid s) = \frac{k^{\hat{Q}(s, a_i)}}{\sum_j k^{\hat{Q}(s, a_j)}}$$

where $P(a_i \mid s)$ is the probability of selecting the action $a_i$ given the state $s$. The constant $k$ is positive and determines how strongly the selection favors actions with high $\hat{Q}$ values.
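The sketch below illustrates the deterministic Q update from Algorithm 1 together with this probabilistic selection rule. It is a simplified stand-in for the project's LearningAgent_k, with assumed class and method names.

import random
from collections import defaultdict

class SimpleQLearner(object):
    """Tabular Q-hat with P(a_i|s) proportional to k ** Q-hat(s, a_i)."""
    def __init__(self, l_actions, f_k=0.3, f_gamma=0.7):
        self.d_q = defaultdict(float)   # (state, action) -> Q-hat, initialized to zero
        self.l_actions, self.f_k, self.f_gamma = l_actions, f_k, f_gamma

    def choose_action(self, s_state):
        # draw an action with probability proportional to k ** Q-hat(s, a)
        l_weights = [self.f_k ** self.d_q[(s_state, s_a)] for s_a in self.l_actions]
        f_draw, f_acc = random.uniform(0., sum(l_weights)), 0.
        for s_a, f_w in zip(self.l_actions, l_weights):
            f_acc += f_w
            if f_draw <= f_acc:
                return s_a
        return self.l_actions[-1]

    def update(self, s_state, s_action, f_reward, s_next_state):
        # deterministic update rule: Q-hat(s, a) <- r + gamma * max_a' Q-hat(s', a')
        f_max_next = max(self.d_q[(s_next_state, s_a)] for s_a in self.l_actions)
        self.d_q[(s_state, s_action)] = f_reward + self.f_gamma * f_max_next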
Ideally, to optimize the policy found, the agent should iterate over the same dataset repeatedly until it is no longer able to improve its PnL. Later on, the policy learned will be tested against the same dataset to check its consistency. Lastly, this policy will be tested on the day subsequent to the training session. So, before performing the out-of-sample test, we will use the procedure described below.

Each training session will include data from the largest part of a trading session, starting at 10:30 and closing at 16:30. Also, the agent will be allowed to hold a position of at most 100 shares (long or short). When the training session is over, all positions from the learner will be closed out, so the agent will always start a new session without carrying positions.

The agent will be allowed to take an action every 2 seconds and, due to this delay, every time it decides to insert limit orders, it will place them 1 cent worse than the best price. So, if the best bid is 12.00 and the best ask is 12.02 and the agent chooses the action BEST_BOTH, it should include a buy order at 11.99 and a sell order at 12.03. It will be allowed to cancel these orders after 2 seconds. However, if these orders are filled in the meantime, the environment will inform the agent so it can update its current position. Even so, it will only take new actions after those 2 seconds have passed.
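For instance, the limit prices for the BEST_BOTH action described above could be computed as follows (a small sketch assuming a one-cent tick, not the actual environment code):

def best_both_prices(f_best_bid, f_best_ask, f_tick=0.01):
    """Place limit orders one tick worse than the current best quotes."""
    return round(f_best_bid - f_tick, 2), round(f_best_ask + f_tick, 2)

print best_both_prices(12.00, 12.02)   # -> (11.99, 12.03), as in the example above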
Udacity Reviewer:
Please be sure to note any complications that occurred during the coding process. Otherwise, this section is simply excellent
One of the biggest complications of the approach proposed in this project was finding a reasonable representation of the environment state that wasn't too big to visit each state-action pair sufficiently often, but was still useful in the learning process. In the next subsection, I will try different configurations of k and γ to improve the performance of the learning agent over the same trial.
Udacity:
In this section, you will need to discuss the process of improvement you made upon the algorithms and techniques you used in your implementation. For example, adjusting parameters for certain models to acquire improved solutions would fall under the refinement category. Your initial and final solutions should be reported, as well as any significant intermediate results as necessary. Questions to ask yourself when writing this section:
- Has an initial solution been found and clearly reported?
- Is the process of improvement clearly documented, such as what techniques were used?
- Are intermediate and final solutions clearly reported as the process is improved?
As mentioned before, we should iterate over the same dataset and check the policy learned on the same observations until convergence. Given the time required to perform each train-test iteration, "until convergence" will mean 10 repetitions here. We are going to train the model on the dataset from 08/15/2016. After each iteration, we will check how the agent would perform using the policy it has just learned. In the first training session the agent will use γ=0.7 and k=0.3. The figure below shows the results of the first round of iterations:
# analyze the logs from the in-sample tests
import qtrader.eda as eda;reload(eda);
s_fname = 'log/train_test/sim_Fri_Oct__7_002946_2016.log' # 15 old
# s_fname = 'log/train_test/sim_Wed_Oct__5_110344_2016.log' # 15
# s_fname = 'log/train_test/sim_Thu_Oct__6_165539_2016.log' # 25
# s_fname = 'log/train_test/sim_Thu_Oct__6_175507_2016.log' # 35
# s_fname = 'log/train_test/sim_Thu_Oct__6_183555_2016.log' # 5
%time d_rtn_train_1 = eda.simple_counts(s_fname, 'LearningAgent_k')
CPU times: user 38.7 s, sys: 200 ms, total: 38.9 s Wall time: 39.1 s
import qtrader.eda as eda; reload(eda);
eda.plot_train_test_sim(d_rtn_train_1)
The Train curve in the charts is the PnL obtained during the training session, when the agent was allowed to explore new actions randomly. The Test curve is the PnL obtained using strictly the policy learned.
Although the agent was able to profit at the end of every single round, "convergence" is something that I cannot claim. For instance, the PnL was worse in the last round than in the first one. I believe this kind of stability is difficult to obtain in day trading. For example, even if the agent thinks it should buy before the market goes up, whether its order is filled does not depend on its will.
We will focus on improving the final PnL of the agent. However, less variability in the results is desired, especially at the beginning of the day, when the strategy has not made any money yet. So, we will also look at the Sharpe ratio of the first differences of the cumulative PnL produced by each configuration.
First, we are going to iterate through some values of k and look at the performance in the training phase during the first hours of the training session. We will also use just 5 iterations here to speed up the tests.
# improving K
import qtrader.eda as eda;reload(eda);
s_fname = 'log/train_test/sim_Thu_Oct__6_133518_2016.log'
%time d_rtn_k = eda.count_by_k_gamma(s_fname, 'LearningAgent_k', 'k')
CPU times: user 42 s, sys: 140 ms, total: 42.1 s Wall time: 43.1 s
import pandas as pd
import matplotlib.pyplot as plt
f, na_ax = plt.subplots(1, 4, sharex=True, sharey=True)
for ax1, s_key in zip(na_ax.ravel(), ['0.3', '0.8', '1.3', '2.0']):
    df_aux = pd.Series(d_rtn_k[s_key][5])
    df_filter = pd.Series([x.hour for x in df_aux.index])
    df_aux = df_aux[((df_filter < 15)).values]
    df_aux.reset_index(drop=True, inplace=True)
    df_aux.plot(legend=False, ax=ax1)
    df_first_diff = df_aux - df_aux.shift()
    df_first_diff = df_first_diff[df_first_diff != 0]
    f_sharpe = df_first_diff.mean() / df_first_diff.std()
    ax1.set_title('$k = {}$ | $sharpe = {:0.2f}$'.format(s_key, f_sharpe), fontsize=10)
    ax1.xaxis.set_ticklabels([])
    ax1.set_ylabel('PnL', fontsize=8)
    ax1.set_xlabel('Time', fontsize=8)
f.tight_layout()
s_title = 'Cumulative PnL Changing K\n'
f.suptitle(s_title, fontsize=16, y=1.03);
When the agent was set to use k=0.8 and k=2.0, it achieved very similar results and Sharpe ratios. As the variable k controls the likelihood of the agent trying new actions based on the Q values already observed, I will prefer the smaller value because it improves the agent's chance to explore. Now, let's perform the same analysis varying only the γ:
# improving Gamma
import qtrader.eda as eda;reload(eda);
s_fname = 'log/train_test/sim_Thu_Oct__6_154516_2016.log'
%time d_rtn_gammas = eda.count_by_k_gamma(s_fname, 'LearningAgent_k', 'gamma')
CPU times: user 41.4 s, sys: 140 ms, total: 41.5 s Wall time: 42.8 s
import pandas as pd
import matplotlib.pyplot as plt
f, na_ax = plt.subplots(1, 4, sharex=True, sharey=True)
for ax1, s_key in zip(na_ax.ravel(), ['0.3', '0.5', '0.7', '0.9']):
    df_aux = pd.Series(d_rtn_gammas[s_key][5])
    df_filter = pd.Series([x.hour for x in df_aux.index])
    df_aux = df_aux[((df_filter < 15)).values]
    df_aux.reset_index(drop=True, inplace=True)
    df_aux.plot(legend=False, ax=ax1)
    df_first_diff = df_aux - df_aux.shift()
    f_sharpe = df_first_diff.mean() / df_first_diff.std()
    ax1.set_title('$\gamma = {}$ | $sharpe = {:0.2f}$'.format(s_key, f_sharpe), fontsize=10)
    ax1.xaxis.set_ticklabels([])
    ax1.set_ylabel('PnL', fontsize=8)
    ax1.set_xlabel('Time Step', fontsize=8)
f.tight_layout()
s_title = 'Cumulative PnL Changing Gamma\n'
f.suptitle(s_title, fontsize=16, y=1.03);
As explained before, as γ approaches one, future rewards are given greater emphasis relative to the immediate reward; when it is zero, only immediate rewards are considered. Despite the fact that the best parameter was γ=0.9, I am not comfortable giving so little attention to immediate rewards. It sounds dangerous when we talk about stock markets. So, I will arbitrarily choose γ=0.5 for the next tests. In the figure below, the agent is trained using γ=0.5 and k=0.8. [the next chart is not used in the final version]
# analyze the logs from the in-sample tests
import qtrader.eda as eda;reload(eda);
# s_fname = 'log/train_test/sim_Fri_Oct__7_002946_2016.log' # 15 old
s_fname = 'log/train_test/sim_Wed_Oct__5_110344_2016.log' # 15
# s_fname = 'log/train_test/sim_Thu_Oct__6_165539_2016.log' # 25
# s_fname = 'log/train_test/sim_Thu_Oct__6_175507_2016.log' # 35
# s_fname = 'log/train_test/sim_Thu_Oct__6_183555_2016.log' # 5
%time d_rtn_train_2 = eda.simple_counts(s_fname, 'LearningAgent_k')
CPU times: user 40.8 s, sys: 331 ms, total: 41.1 s Wall time: 41.6 s
import qtrader.eda as eda; reload(eda);
eda.plot_train_test_sim(d_rtn_train_2)
# analyze the logs from the out-of-sample tests
import qtrader.eda as eda;reload(eda);
s_fname = 'log/train_test/sim_Fri_Oct__7_003943_2016.log' # idx = 15 old
%time d_rtn_test_1 = eda.simple_counts(s_fname, 'LearningAgent_k')
CPU times: user 1.91 s, sys: 13.9 ms, total: 1.92 s Wall time: 1.96 s
# analyze the logs from the out-of-sample tests
import qtrader.eda as eda;reload(eda);
s_fname = 'log/train_test/sim_Wed_Oct__5_111812_2016.log' # idx = 15
%time d_rtn_test_2 = eda.simple_counts(s_fname, 'LearningAgent_k')
CPU times: user 1.8 s, sys: 9.08 ms, total: 1.81 s Wall time: 1.82 s
# compare the old with the data using the new configuration
import pandas as pd
df_plot = pd.DataFrame(d_rtn_test_1['pnl']['test']).mean(axis=1).fillna(method='ffill')
ax1 = df_plot.plot(legend=True, label='old')
df_plot = pd.DataFrame(d_rtn_test_2['pnl']['test']).mean(axis=1).fillna(method='ffill')
df_plot.plot(legend=True, label='new', ax=ax1)
ax1.set_title('Cumulative PnL Produced by New\nand Old Configurations')
ax1.set_xlabel('Time')
ax1.set_ylabel('PnL');
In the figure above, an agent was trained using γ=0.5 and k=0.8, and its performance in the out-of-sample test is compared to the previous implementation. In this case, the dataset from 08/16/2016 was used. The current configuration improved the performance of the model. We will discuss the final results in the next section.
In this section, I will evaluate the final model, test its robustness and compare its performance to the benchmark established earlier.
Udacity:
In this section, the final model and any supporting qualities should be evaluated in detail. It should be clear how the final model was derived and why this model was chosen. In addition, some type of analysis should be used to validate the robustness of this model and its solution, such as manipulating the input data or environment to see how the model’s solution is affected (this is called sensitivity analysis). Questions to ask yourself when writing this section:
- Is the final model reasonable and aligning with solution expectations? Are the final parameters of the model appropriate?
- Has the final model been tested with various inputs to evaluate whether the model generalizes well to unseen data?
- Is the model robust enough for the problem? Do small perturbations (changes) in training data or the input space greatly affect the results?
- Can results found from the model be trusted?
One of the last questions that remains is whether the model can make money in different scenarios. To test the robustness of the final model, I am going to use the same framework on widely spaced days.
As each round of training and testing sessions takes 20-30 minutes to complete, I will check its performance on just three different days. I have already used the file of index 15 in the last tests. Now, I am going to use the files with indexes 5, 25 and 35 to train new models, and the files with indexes 6, 26 and 36 to perform out-of-sample tests. In the figure below we can see how the model performed on different unseen datasets.
# analyze the logs from the out-of-sample tests
import qtrader.eda as eda;reload(eda);
l_fname = ['log/train_test/sim_Thu_Oct__6_171842_2016.log', # idx = 25
'log/train_test/sim_Thu_Oct__6_181611_2016.log', # idx = 35
'log/train_test/sim_Thu_Oct__6_184852_2016.log'] # idx = 5
def foo(l_fname):
    d_learning_k = {}
    for idx, s_fname in zip([25, 35, 5], l_fname):
        d_learning_k[idx] = eda.simple_counts(s_fname, 'LearningAgent_k')
    return d_learning_k
%time d_learning_k = foo(l_fname)
CPU times: user 5.83 s, sys: 35 ms, total: 5.86 s Wall time: 5.94 s
import pandas as pd
import matplotlib.pyplot as plt
f, na_ax = plt.subplots(1, 3, sharey=True)
for ax1, idx in zip(na_ax.ravel(), [5, 25, 35]):
    df_plot = pd.DataFrame(d_learning_k[idx]['pnl']['test']).mean(axis=1)
    df_plot.fillna(method='ffill').plot(legend=False, ax=ax1)
    ax1.set_title('idx: {}'.format(idx + 1), fontsize=10)
    ax1.set_ylabel('PnL', fontsize=8)
    ax1.set_xlabel('Time', fontsize=8)
f.tight_layout()
s_title = 'Cumulative PnL in Different Days\n'
f.suptitle(s_title, fontsize=16, y=1.03);
The model was able to make money on two of the days after being trained on the session previous to each day. The performance on the third day was pretty bad. However, even after losing a lot of money at the beginning of the day, the agent was able to recover most of its losses by the end of the session.
Looking just at this data, the performance of the model looks very unstable and a little disappointing. In the next subsection, we will see why it is not that bad.
Udacity:
In this section, your model’s final solution and its results should be compared to the benchmark you established earlier in the project using some type of statistical analysis. You should also justify whether these results and the solution are significant enough to have solved the problem posed in the project. Questions to ask yourself when writing this section:
- Are the final results found stronger than the benchmark result reported earlier?
- Have you thoroughly analyzed and discussed the final solution?
- Is the final solution significant enough to have solved the problem?
Lastly, I am going to compare the final model with the performance of a random agent. We are going to compare the performance of those agents in an out-of-sample test.
As the learning agent strictly follows the policy learned, I will simulate the operations of this agent on the tested datasets just once: even if I ran more trials, the return would be the same. However, I will simulate the operations of the random agent 20 times on each dataset. As this agent can take any action at each run, its performance can be very good or very bad. So, I will compare the performance of the learning agent to the average performance of the random agent.
In the figure below we can see how much money each one has made in the first dataset used in this project, from 08/16/2016. The learning agent was trained using data from 08/15/2016, the previous day.
# analyze the logs from the out-of-sample random agent
import qtrader.eda as eda;reload(eda);
s_fname = 'log/train_test/sim_Wed_Oct__5_111907_2016.log' # idx = 15
%time d_rtn_test_1r = eda.simple_counts(s_fname, 'BasicAgent')
CPU times: user 36 s, sys: 135 ms, total: 36.1 s Wall time: 36.2 s
import pandas as pd
import scipy
ax1 = pd.DataFrame(d_rtn_test_2['pnl']['test']).mean(axis=1).fillna(method='ffill').plot(legend=True, label='LearningAgent_k')
pd.DataFrame(d_rtn_test_1r['pnl']['test']).mean(axis=1).fillna(method='ffill').plot(legend=True, label='RandomAgent', ax=ax1)
ax1.set_title('Cumulative PnL Comparison\n')
ax1.set_xlabel('Time')
ax1.set_ylabel('PnL');
#performs t-test
a = [float(pd.DataFrame(d_rtn_test_2['pnl']['test']).iloc[-1].values)] * 2
b = list(pd.DataFrame(d_rtn_test_1r['pnl']['test']).fillna(method='ffill').iloc[-1].values)
tval, p_value = scipy.stats.ttest_ind(a, b, equal_var=False)
A Welch's unequal variances t-test was conducted to check whether the PnL of the learner was greater than the PnL of the random agent. There was a significant difference between the performances (t-value ≈ 7.93; p-value < 0.001). These results suggest that the learning agent really outperformed the random agent, the chosen benchmark. Finally, I am going to perform the same test using the datasets used in the previous subsection.
print "t-value = {:0.6f}, p-value = {:0.8f}".format(tval, p_value)
t-value = 7.928302, p-value = 0.00000019
# analyze the logs from the out-of-sample tests
import qtrader.eda as eda;reload(eda);
l_fname = ['log/train_test/sim_Thu_Oct__6_172024_2016.log', # idx = 25
'log/train_test/sim_Thu_Oct__6_181735_2016.log', # idx = 35
'log/train_test/sim_Thu_Oct__6_184957_2016.log'] # idx = 5
def foo(l_fname):
    d_basic = {}
    for idx, s_fname in zip([25, 35, 5], l_fname):
        d_basic[idx] = eda.simple_counts(s_fname, 'BasicAgent')
    return d_basic
%time d_basic = foo(l_fname)
CPU times: user 1min 52s, sys: 433 ms, total: 1min 52s Wall time: 1min 52s
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats
f, na_ax = plt.subplots(1, 3, sharey=True)
l_stattest = []
for ax1, idx in zip(na_ax.ravel(), [5, 25, 35]):
    # plot results
    df_learning_agent = pd.DataFrame(d_learning_k[idx]['pnl']['test']).mean(axis=1)
    df_learning_agent.fillna(method='ffill').plot(legend=True, label='LearningAgent_k', ax=ax1)
    df_random_agent = pd.DataFrame(d_basic[idx]['pnl']['test']).mean(axis=1)
    df_random_agent.fillna(method='ffill').plot(legend=True, label='RandomAgent', ax=ax1)
    # performs the one-sided Welch's t-test (two-sided p-value halved)
    a = [float(pd.DataFrame(d_learning_k[idx]['pnl']['test']).iloc[-1].values)] * 2
    b = list(pd.DataFrame(d_basic[idx]['pnl']['test']).iloc[-1].values)
    tval, p_value = scipy.stats.ttest_ind(a, b, equal_var=False)
    l_stattest.append({'key': idx + 1, 'tval': tval, 'p_value': p_value / 2})
    # set axis
    ax1.set_title('idx: ${}$ | p-value : ${:.3f}$'.format(idx + 1, p_value / 2.), fontsize=10)
    ax1.set_ylabel('PnL', fontsize=8)
    ax1.set_xlabel('Time', fontsize=8)
f.tight_layout()
s_title = 'Cumulative PnL Comparison in Different Days\n'
f.suptitle(s_title, fontsize=16, y=1.03);
pd.DataFrame(l_stattest)
|   | key | p_value | tval |
|---|---|---|---|
| 0 | 6 | 0.000005 | 5.972489 |
| 1 | 26 | 0.432472 | 0.172402 |
| 2 | 36 | 0.469878 | 0.076584 |
In the datasets with indexes 26 and 36, the random agent outperformed the learning agent most of the time, but at the end of those days the learning agent was able to catch up with the random agent's performance. On those days, the t-test also failed to support that the PnL of the learner was greater than the PnL of the random agent. In the dataset with index 6, the learning agent outperformed the random agent by a large margin, which was also confirmed by the t-test (t-value ≈ 5.97; p-value < 0.001). Curiously, on the worst day of the test the random agent also performed poorly, suggesting that it wasn't a problem with my agent, but something that happened in the market.
I believe these results are encouraging because they suggest that, using the same learning framework on different days, we can successfully find practical solutions that adapt well to new circumstances.
In this section, I will discuss the final result of the model, summarize the entire problem solution and suggest some improvements that could be made.
Udacity:
Free-Form Visualization:
In this section, you will need to provide some form of visualization that emphasizes an important quality about the project. It is much more free-form, but should reasonably support a significant result or characteristic about the problem that you want to discuss. Questions to ask yourself when writing this section:
- Have you visualized a relevant or important quality about the problem, dataset, input data, or results?
- Is the visualization thoroughly analyzed and discussed?
- If a plot is provided, are the axes, title, and datum clearly defined?
In this project, we have proposed the use of the reinforcement learning framework to build an agent that learns how to trade according to the market states and its own conditions. After the agent's policy was optimized on the sessions preceding the days it would have traded (in the out-of-sample tests), the agent would have been able to generate the results shown in the figure below.
# group all data generated previously
df_aux = pd.concat([pd.DataFrame(d_learning_k[5]['pnl']['test']),
pd.DataFrame(d_rtn_test_2['pnl']['test']),
pd.DataFrame(d_learning_k[25]['pnl']['test']),
pd.DataFrame(d_learning_k[35]['pnl']['test'])])
d_data = df_aux.to_dict()
df_plot = eda.make_df(d_data).reset_index(drop=True)[1]
df_aux = pd.concat([pd.DataFrame(d_basic[5]['pnl']['test']).mean(axis=1),
pd.DataFrame(d_rtn_test_1r['pnl']['test']).mean(axis=1),
pd.DataFrame(d_basic[25]['pnl']['test']).mean(axis=1),
pd.DataFrame(d_basic[35]['pnl']['test']).mean(axis=1)])
d_data = pd.DataFrame(df_aux).to_dict()
df_plot2 = eda.make_df(d_data).reset_index(drop=True)[0]
ax1 = df_plot.plot(legend=True, label='LearningAgent_k')
df_plot2.plot(legend=True, label='RandomAgent')
ax1.set_title('Cumulated PnL from Simulations\n', fontsize=16)
ax1.set_ylabel('PnL')
ax1.set_xlabel('Time Step');
The chart above shows the accumulated return on four different days generated by the learning agent and by the random agent. Although the learning agent did not make money all the time, it still beat the performance of the random agent over the period of the tests. It also would have beaten a buy-and-hold strategy in BOVA11 and in PETR4: both would have lost money in the period, R$ -14.00 and R$ -8.00, respectively.
((df_last_pnl)*100).sum()
BOVA11 -14.0 PETR4 -8.0 dtype: float64
Udacity:
Reflection:
In this section, you will summarize the entire end-to-end problem solution and discuss one or two particular aspects of the project you found interesting or difficult. You are expected to reflect on the project as a whole to show that you have a firm understanding of the entire process employed in your work. Questions to ask yourself when writing this section:
- Have you thoroughly summarized the entire process you used for this project?
- Were there any interesting aspects of the project?
- Were there any difficult aspects of the project?
- Does the final model and solution fit your expectations for the problem, and should it be used in a general setting to solve these types of problems?
To find the optimal policy we have used Q-learning, a model-free approach to reinforcement learning. We trained the agent by simulating several runs over the same dataset, allowing it to explore the results of different actions in the same environment. Then we back-tested the policy learned on the same dataset and found out that the policy learned does not always converge to a better one. We noticed that this non-convergence could be related to the nature of our problem.
So, we refined the model by testing different configurations of its parameters and comparing the PnL of the new policy to the old one, backtesting both against a different dataset. Finally, after selecting the best parameters, we trained the model on different days and tested it against the subsequent sessions.
We compared these results to the returns of a random agent and concluded that our model was significantly better during the period of the tests.
One of the most interesting parts of this project was defining the state representation of the environment. I found out that when we increase the state space too much, it becomes very hard for the agent to learn an acceptable policy within the number of trials we have used. The number of trials was mostly determined by the time each one took to run (several minutes).
It was interesting to see that, even clustering the variables using k-means, the agent was still capable of using the resulting clusters to learn something useful from the environment.
Building the environment was the most difficult and challenging part of the entire project. Not only was finding an adequate structure to build the order book non-trivial, but making the environment operate it correctly was also difficult. It has to manage different orders from various agents and ensure that each agent can place, cancel or fill orders (or have its orders filled) in the right sequence.
Overall, I believe that the simulation results have shown initial success in bringing reinforcement learning techniques to the construction of algorithmic trading strategies. Developing a strategy that doesn't perform any arbitrage and still never loses money is something very unlikely to happen. This agent was able to mimic the performance of an average random agent sometimes and to outperform it at other times. In the long run, it would be good enough.
Udacity:
In this section, you will need to provide discussion as to how one aspect of the implementation you designed could be improved. As an example, consider ways your implementation can be made more general, and what would need to be modified. You do not need to make this improvement, but the potential solutions resulting from these changes are considered and compared/contrasted to your current solution. Questions to ask yourself when writing this section:
- Are there further improvements that could be made on the algorithms or techniques you used in this project?
- Were there algorithms or techniques you researched that you did not know how to implement, but would consider using if you knew how?
- If you used your final solution as the new benchmark, do you think an even better solution exists?
Many areas could be explored to improve the current model and refine the test results. I wasn't able to achieve a stable solution using Q-learning, and I believe that this is mostly due to the non-deterministic nature of the problem. So, we could test Recurrent Reinforcement Learning, for instance, which \cite{du1algorithm} argued could outperform Q-learning in terms of stability and computational convenience.
Also, I believe that different state representations should be tested much more deeply. The state observed by the agent is one of the most relevant aspects of a reinforcement learning problem, and there are probably better representations than the one used in this project for the given task.
Another future extension to this project could include a more realistic environment, where other agents respond to the actions of the learning agent. Lastly, we could test other reward functions for the problem posed. It would be interesting to include some future information in the response of the environment to the actions of the agent, for example, to see how it would affect the policies learned.
Style notebook and change matplotlib defaults
#loading style sheet
from IPython.core.display import HTML
HTML( open('ipython_style.css').read())
#changing matplotlib defaults
%matplotlib inline
import seaborn as sns
sns.set_palette("deep", desat=.6)
sns.set_context(rc={"figure.figsize": (8, 4)})
sns.set_style("whitegrid")
sns.set_palette(sns.color_palette("Set2", 10))