import pandas as pd
import numpy as np
import statsmodels.api as sm
import seaborn as sns
sns.set_style("darkgrid")
import matplotlib.pylab as pylab
pylab.rcParams['figure.figsize'] = 16, 8
%matplotlib inline
base_path = '/run/media/jjd/Win7/Users/jjd/Desktop/GAExperiment_2017-05-28_0211/'
The genetic algorithm ran for 32 generations. The session ended after the fitness stagnated for 9 generations.
The paper uses the Sterling ratio as fitness: $$SterlingRatio = \frac{NetProfit}{MaxDrawdown}$$
Net profit is used instead of the annualized return because the training period was only six months.
To avoid issues with negative fitness values, this study used: $$fitness = 2^{SterlingRatio}$$
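As a quick sanity check, the exponential transform maps any real Sterling ratio (including negative ones from losing strategies) to a strictly positive fitness while preserving the ranking. A minimal sketch with made-up Sterling ratios:

```python
import numpy as np

# Hypothetical Sterling ratios, including a losing strategy (negative ratio).
sterling = np.array([-1.5, 0.0, 1.2, 3.4])

# fitness = 2 ** SterlingRatio: always positive and order-preserving,
# so selection pressure is unaffected but negative-fitness issues disappear.
fitness = 2.0 ** sterling

print(fitness)
print(np.all(fitness > 0), np.all(np.diff(fitness) > 0))
```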
fitness_by_generation_file = base_path + 'FitnessByGeneration.csv'
fitness_by_generation = pd.read_csv(fitness_by_generation_file)
fitness_by_generation.plot(x='Generation', y='Fitness', figsize=(16,6));
The fitness shows a pattern typical of genetic algorithms: many generations of fitness stagnation until a new innovation reaches new highs.
The number of individuals tested in the learning session was 1873.
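One way to quantify the stagnation pattern is the longest streak of generations without a new best fitness. A sketch with hypothetical fitness values (the real series is in `fitness_by_generation`):

```python
import pandas as pd

# Hypothetical best-fitness-per-generation series with stagnation plateaus.
fitness = pd.Series([1.0, 1.1, 1.1, 1.1, 1.6, 1.6, 1.6, 1.6, 1.6, 2.3])
running_best = fitness.cummax()

# Mark generations that did not improve on the running best,
# then take the length of the longest consecutive stalled run.
stalled = (running_best.diff().fillna(1.0) == 0).astype(int)
streak = stalled.groupby((stalled == 0).cumsum()).cumsum().max()
print(int(streak))  # 4
```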
# Read the data of all individuals generated by the genetic algorithm and apply CamelCase format to the column names.
optimization_results = base_path + 'ResultsNoUnits.csv'
ga_results = pd.read_csv(optimization_results)
camel_column_names = [n.replace(" ", "") for n in ga_results.columns]
ga_results.columns = camel_column_names
# The order in which the backtest results were saved is used as the individual's identification for further comparison.
ga_results['ID'] = ga_results.index
print('Total individuals:', ga_results.shape[0])
Total individuals: 1873
ga_results['SterlingRatio'] = ga_results.NetProfit / ga_results.Drawdown
ga_results['Fitness'] = 2 ** ga_results.SterlingRatio
ga_results.Fitness.plot(figsize=(12,6));
print('Cases with zero Total Trades:', ga_results[ga_results.TotalTrades==0].shape[0])
Cases with zero Total Trades: 249
print('Cases with zero Drawdown:', ga_results[ga_results.Drawdown==0].shape[0])
Cases with zero Drawdown: 456
print('Cases with trades made but zero Drawdown:', ga_results[(ga_results.Drawdown==0) & (ga_results.TotalTrades!=0)].shape[0])
Cases with trades made but zero Drawdown: 207
ga_results = ga_results[ga_results.Drawdown!=0]
The blanks in the plot correspond to the 456 cases where the MaxDrawdown is equal to zero. Of those, 249 are cases where the agent was not able to generate signals (TotalTrades==0); the remaining 207 cases are a consequence of flaws in the experiment design and implementation.

First, the algorithm used to run the evaluations had very low exposure. That decision was made to keep the portfolio value positive even when an individual made many bad trades (some individuals made more than 10,000 trades!). The problem with low exposure is that the drawdowns are very small. Second, the MaxDrawdown value is parsed from a string in percentage format with two decimals, so those small drawdowns are rounded to zero.

A side effect of this problem is that, because the genetic algorithm only checks fitness when building the next generation, some of those individuals were completely ignored during recombination, even when they behaved well (the best one has a Sharpe ratio of 1.9).
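The rounding flaw can be reproduced in isolation. Assuming the backtester reports MaxDrawdown as a percentage string with two decimals (the exact format and the `parse_drawdown` helper below are assumptions for illustration), any drawdown below 0.005% comes back as zero:

```python
def parse_drawdown(pct_string: str) -> float:
    """Hypothetical parser: turn a drawdown string like '0.00%' into a fraction."""
    return float(pct_string.rstrip('%')) / 100.0

true_drawdown = 0.00003                              # 0.003%, plausible under very low exposure
reported = '{:.2f}%'.format(true_drawdown * 100)     # backtester rounds to two decimals
print(reported)                                      # '0.00%'
print(parse_drawdown(reported))                      # 0.0 -> SterlingRatio divides by zero
```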
ga_results.TotalTrades.hist(bins=50, figsize=(12,6));
oos_results = ga_results[ga_results.SterlingRatio>1]
cols = ['ID']
cols.extend(ga_results.columns[:28])
# Save file to be used by LeanSTP in the OOS analysis
#oos_results[cols].to_csv(base_path+'SelectedIndividualsForOOS.csv', index=False)
oos_results.shape[0]
163
The training session period was the six months from July 1st, 2016 to December 31st, 2016. The out-of-sample period began in January 2017.
Only the 163 individuals with a Sterling ratio greater than one were considered for the out-of-sample analysis.
statistics_cols = ['TotalTrades', 'AverageWin', 'AverageLoss', 'CompoundingAnnualReturn',
'Drawdown', 'Expectancy', 'NetProfit', 'SharpeRatio', 'LossRate',
'WinRate', 'Profit-LossRatio', 'Alpha', 'Beta', 'AnnualStandardDeviation',
'AnnualVariance', 'InformationRatio', 'TrackingError', 'TreynorRatio',
'TotalFees', 'ID', 'SterlingRatio']
for i in range(1, 5):
    out_of_sample_results = base_path + 'OutOfSample' + str(i) + 'Month/FullResutls.csv'
    oos_results = pd.read_csv(out_of_sample_results)
    oos_results['SterlingRatio'] = oos_results.NetProfit / oos_results.Drawdown
    oos_results['OosPeriod'] = str(i) + '_month'
    # .loc replaces the deprecated .ix indexer
    oos_results = oos_results.join(ga_results.loc[:, statistics_cols], on='ID', how='inner',
                                   lsuffix='_OOS', rsuffix='_IS')
    if i == 1:
        oos_vs_is = oos_results
    else:
        oos_vs_is = pd.concat([oos_vs_is, oos_results], axis=0)
oos_vs_is = oos_vs_is.replace([np.inf, -np.inf], np.nan).fillna(value=0)
oos_vs_is.to_csv(base_path + "FullOosResults.csv", index=False)
oos_vs_is
  | ID | TotalTrades_OOS | AverageWin_OOS | AverageLoss_OOS | CompoundingAnnualReturn_OOS | Drawdown_OOS | Expectancy_OOS | NetProfit_OOS | SharpeRatio_OOS | LossRate_OOS | ... | Alpha_IS | Beta_IS | AnnualStandardDeviation_IS | AnnualVariance_IS | InformationRatio_IS | TrackingError_IS | TreynorRatio_IS | TotalFees_IS | ID_IS | SterlingRatio_IS |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1005 | 4 | 0.0000 | -0.0001 | -0.00284 | 0.002 | -1.000 | -0.00025 | -0.386 | 1.00 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 1005 | 1.35 |
1 | 1014 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 1014 | 3.16 |
2 | 1025 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.07 | 1025 | 3.40 |
3 | 1026 | 6 | 0.0023 | -0.0004 | 0.01705 | 0.003 | 1.271 | 0.00148 | 1.616 | 0.67 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 60.02 | 1026 | 1.24 |
4 | 1031 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.06 | 1031 | 2.79 |
5 | 1032 | 6 | 0.0023 | -0.0004 | 0.01705 | 0.003 | 1.271 | 0.00148 | 1.616 | 0.67 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 72.02 | 1032 | 1.46 |
6 | 1039 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.07 | 1039 | 3.40 |
7 | 1044 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 1044 | 3.16 |
8 | 1059 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.07 | 1059 | 3.40 |
9 | 1063 | 4 | 0.0000 | -0.0001 | -0.00284 | 0.002 | -1.000 | -0.00025 | -0.386 | 1.00 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 1063 | 1.35 |
10 | 1068 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 1068 | 3.16 |
11 | 1077 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 1077 | 3.16 |
12 | 1084 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.07 | 1084 | 3.40 |
13 | 1086 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 1086 | 3.16 |
14 | 1091 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 168.08 | 1091 | 3.47 |
15 | 1093 | 6 | 0.0001 | -0.0002 | -0.00018 | 0.000 | -0.027 | -0.00002 | -0.185 | 0.33 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 172.02 | 1093 | 1.23 |
16 | 1103 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.06 | 1103 | 2.79 |
17 | 1107 | 6 | 0.0003 | -0.0002 | 0.00554 | 0.001 | 1.066 | 0.00048 | 2.694 | 0.33 | ... | 0 | 0 | 0.001 | 0 | 0 | 0 | 0 | 164.02 | 1107 | 1.23 |
18 | 1108 | 6 | 0.0023 | -0.0004 | 0.01705 | 0.003 | 1.271 | 0.00148 | 1.616 | 0.67 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 60.02 | 1108 | 1.39 |
19 | 1111 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.07 | 1111 | 3.40 |
20 | 1122 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 1122 | 3.16 |
21 | 1136 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 1136 | 3.16 |
22 | 1139 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.07 | 1139 | 3.40 |
23 | 1146 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 1146 | 3.16 |
24 | 1152 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.07 | 1152 | 3.40 |
25 | 1156 | 6 | 0.0001 | -0.0002 | -0.00018 | 0.000 | -0.027 | -0.00002 | -0.185 | 0.33 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 168.02 | 1156 | 1.09 |
26 | 1172 | 6 | 0.0023 | -0.0004 | 0.01705 | 0.003 | 1.271 | 0.00148 | 1.616 | 0.67 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 60.02 | 1172 | 1.39 |
27 | 1177 | 6 | 0.0057 | -0.0048 | 0.07651 | 0.006 | 0.454 | 0.00648 | 1.761 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.05 | 1177 | 2.46 |
28 | 1180 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.07 | 1180 | 3.40 |
29 | 1184 | 6 | 0.0054 | -0.0048 | 0.07099 | 0.006 | 0.422 | 0.00603 | 1.701 | 0.33 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 164.06 | 1184 | 3.23 |
... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
133 | 842 | 20 | 0.0005 | -0.0014 | -0.03166 | 0.013 | -0.737 | -0.01061 | -2.750 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 842 | 1.34 |
134 | 845 | 14 | 0.0010 | -0.0003 | 0.00541 | 0.003 | 0.824 | 0.00179 | 0.958 | 0.57 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 72.02 | 845 | 1.46 |
135 | 850 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 850 | 1.35 |
136 | 855 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 855 | 1.35 |
137 | 857 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 857 | 1.35 |
138 | 867 | 20 | 0.0005 | -0.0014 | -0.03166 | 0.013 | -0.737 | -0.01061 | -2.750 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 80.02 | 867 | 1.41 |
139 | 869 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 869 | 1.35 |
140 | 872 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 872 | 1.35 |
141 | 883 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 883 | 1.35 |
142 | 892 | 14 | 0.0010 | -0.0003 | 0.00541 | 0.003 | 0.824 | 0.00179 | 0.958 | 0.57 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 60.02 | 892 | 1.39 |
143 | 901 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 901 | 1.35 |
144 | 904 | 14 | 0.0010 | -0.0003 | 0.00541 | 0.003 | 0.824 | 0.00179 | 0.958 | 0.57 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 60.02 | 904 | 1.39 |
145 | 906 | 14 | 0.0010 | -0.0003 | 0.00541 | 0.003 | 0.824 | 0.00179 | 0.958 | 0.57 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 72.02 | 906 | 1.46 |
146 | 920 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 920 | 1.35 |
147 | 921 | 20 | 0.0005 | -0.0014 | -0.03166 | 0.013 | -0.737 | -0.01061 | -2.750 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 921 | 1.34 |
148 | 928 | 14 | 0.0010 | -0.0003 | 0.00541 | 0.003 | 0.824 | 0.00179 | 0.958 | 0.57 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 60.02 | 928 | 1.39 |
149 | 929 | 42 | 0.0028 | -0.0021 | 0.00158 | 0.014 | 0.015 | 0.00052 | 0.060 | 0.57 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 929 | 3.16 |
150 | 936 | 44 | 0.0027 | -0.0021 | 0.00524 | 0.015 | 0.041 | 0.00173 | 0.165 | 0.55 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 936 | 3.16 |
151 | 938 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 938 | 1.35 |
152 | 944 | 42 | 0.0028 | -0.0021 | 0.00158 | 0.014 | 0.015 | 0.00052 | 0.060 | 0.57 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 152.06 | 944 | 3.16 |
153 | 946 | 14 | 0.0010 | -0.0003 | 0.00541 | 0.003 | 0.824 | 0.00179 | 0.958 | 0.57 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 72.02 | 946 | 1.46 |
154 | 947 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 947 | 1.35 |
155 | 948 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 948 | 1.35 |
156 | 956 | 14 | 0.0010 | -0.0003 | 0.00541 | 0.003 | 0.824 | 0.00179 | 0.958 | 0.57 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 60.02 | 956 | 1.39 |
157 | 960 | 16 | 0.0010 | -0.0004 | 0.00337 | 0.004 | 0.365 | 0.00111 | 0.582 | 0.62 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 960 | 1.35 |
158 | 964 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 68.02 | 964 | 1.35 |
159 | 981 | 42 | 0.0027 | -0.0016 | 0.01366 | 0.012 | 0.133 | 0.00451 | 0.431 | 0.57 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.06 | 981 | 2.79 |
160 | 983 | 42 | 0.0031 | -0.0021 | 0.00920 | 0.014 | 0.073 | 0.00304 | 0.288 | 0.57 | ... | 0 | 0 | 0.003 | 0 | 0 | 0 | 0 | 156.07 | 983 | 3.40 |
161 | 986 | 20 | 0.0005 | -0.0014 | -0.03131 | 0.012 | -0.736 | -0.01049 | -2.615 | 0.80 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 80.02 | 986 | 1.42 |
162 | 994 | 50 | 0.0005 | -0.0009 | -0.03431 | 0.013 | -0.518 | -0.01150 | -3.254 | 0.68 | ... | 0 | 0 | 0.002 | 0 | 0 | 0 | 0 | 172.02 | 994 | 1.26 |
652 rows × 44 columns
x = sm.add_constant(oos_vs_is.SterlingRatio_IS, prepend=False)
y = oos_vs_is.SharpeRatio_OOS
mod = sm.OLS(y, x)
res = mod.fit()
print(res.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:        SharpeRatio_OOS   R-squared:                       0.174
Model:                            OLS   Adj. R-squared:                  0.173
Method:                 Least Squares   F-statistic:                     136.9
Date:                Wed, 07 Jun 2017   Prob (F-statistic):           7.98e-29
Time:                        14:12:21   Log-Likelihood:                -1146.7
No. Observations:                 652   AIC:                             2297.
Df Residuals:                     650   BIC:                             2306.
Df Model:                           1
Covariance Type:            nonrobust
====================================================================================
                       coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------------
SterlingRatio_IS     0.6282      0.054     11.699      0.000       0.523       0.734
const               -1.7429      0.147    -11.875      0.000      -2.031      -1.455
==============================================================================
Omnibus:                       43.457   Durbin-Watson:                   1.362
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               15.818
Skew:                          -0.002   Prob(JB):                     0.000367
Kurtosis:                       2.237   Cond. No.                         8.13
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
for i in range(1, 5):
    sample = str(i) + '_month'
    x = sm.add_constant(oos_vs_is[oos_vs_is.OosPeriod==sample].SterlingRatio_IS, prepend=False)
    y = oos_vs_is[oos_vs_is.OosPeriod==sample].SharpeRatio_OOS
    mod = sm.OLS(y, x)
    res = mod.fit()
    print("\n\nOut of sample length:", sample)
    print(res.summary())
    print('=' * 90)
OLS regression results by out-of-sample length (dependent variable SharpeRatio_OOS; 163 observations, 161 residual df, Df Model 1, and nonrobust covariance in every regression):

  | 1_month | 2_month | 3_month | 4_month |
---|---|---|---|---|
R-squared | 0.356 | 0.108 | 0.376 | 0.264 |
Adj. R-squared | 0.352 | 0.102 | 0.372 | 0.259 |
F-statistic | 88.83 | 19.43 | 96.97 | 57.66 |
Prob (F-statistic) | 4.53e-17 | 1.90e-05 | 3.34e-18 | 2.39e-12 |
Log-Likelihood | -172.60 | -268.28 | -262.02 | -253.78 |
AIC | 349.2 | 540.6 | 528.0 | 511.6 |
BIC | 355.4 | 546.7 | 534.2 | 517.8 |
SterlingRatio_IS coef | 0.5051 | 0.4248 | 0.9133 | 0.6696 |
SterlingRatio_IS std err | 0.054 | 0.096 | 0.093 | 0.088 |
SterlingRatio_IS t | 9.425 | 4.408 | 9.848 | 7.594 |
SterlingRatio_IS P>\|t\| | 0.000 | 0.000 | 0.000 | 0.000 |
SterlingRatio_IS [0.025, 0.975] | [0.399, 0.611] | [0.235, 0.615] | [0.730, 1.097] | [0.495, 0.844] |
const coef | -0.0173 | -2.1024 | -2.7345 | -2.1174 |
const std err | 0.146 | 0.263 | 0.253 | 0.241 |
const t | -0.118 | -7.981 | -10.787 | -8.786 |
const P>\|t\| | 0.906 | 0.000 | 0.000 | 0.000 |
const [0.025, 0.975] | [-0.307, 0.272] | [-2.623, -1.582] | [-3.235, -2.234] | [-2.593, -1.641] |
Omnibus | 8.941 | 6.672 | 5.332 | 6.316 |
Prob(Omnibus) | 0.011 | 0.036 | 0.070 | 0.043 |
Durbin-Watson | 2.035 | 2.219 | 2.127 | 2.160 |
Jarque-Bera (JB) | 9.169 | 6.907 | 5.453 | 6.081 |
Prob(JB) | 0.0102 | 0.0316 | 0.0655 | 0.0478 |
Skew | 0.473 | 0.486 | 0.439 | 0.467 |
Kurtosis | 3.674 | 2.735 | 2.824 | 3.148 |
Cond. No. | 8.13 | 8.13 | 8.13 | 8.13 |

All four regressions were run on Wed, 07 Jun 2017. Warnings: [1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
oos_vs_is.OosPeriod.unique()
df_for_kde = pd.DataFrame({"SharpeRatio_IS":oos_vs_is[oos_vs_is.OosPeriod=='1_month'].SharpeRatio_IS,
"SharpeRatio_OOS_1Month":oos_vs_is[oos_vs_is.OosPeriod=='1_month'].SharpeRatio_OOS,
"SharpeRatio_OOS_2Month":oos_vs_is[oos_vs_is.OosPeriod=='2_month'].SharpeRatio_OOS,
"SharpeRatio_OOS_3Month":oos_vs_is[oos_vs_is.OosPeriod=='3_month'].SharpeRatio_OOS,
"SharpeRatio_OOS_4Month":oos_vs_is[oos_vs_is.OosPeriod=='4_month'].SharpeRatio_OOS,
})
df_for_kde.plot.hist(figsize=(12,6), stacked=True, alpha = 0.3, colormap='gnuplot', title='Sharpe ratio in-sample vs. out of sample');
df_for_kde.plot.kde(figsize=(12,6), title='Sharpe ratio stacked frequencies');
df_for_kde.to_csv(base_path + 'sharpeForKDE.csv')