Smart beta has a broad meaning, but in practice, when we take the universe of stocks from an index and then apply some weighting scheme other than market cap weighting, the result can be considered a type of smart beta fund. A Smart Beta portfolio generally gives investors exposure or "beta" to one or more types of market characteristics (or factors) that are believed to predict prices, while giving investors diversified, broad exposure to a particular market. Smart Beta portfolios generally target momentum, earnings quality, low volatility, dividends, or some combination of these. Smart Beta portfolios are generally rebalanced infrequently and follow relatively simple, passively managed rules or algorithms. Model changes to these types of funds are also rare, requiring prospectus filings with the U.S. Securities and Exchange Commission in the case of US-focused mutual funds or ETFs. Smart Beta portfolios are generally long-only; they do not short stocks.
In contrast, a purely alpha-focused quantitative fund may use multiple models or algorithms to create a portfolio. The portfolio manager retains discretion in upgrading or changing the types of models and how often to rebalance the portfolio, in an attempt to maximize performance relative to a stock benchmark. Managers may also have discretion to short stocks in their portfolios.
Imagine you're a portfolio manager, and wish to try out some different portfolio weighting methods.
One way to design a portfolio is to look at certain accounting measures (fundamentals) that, based on past trends, indicate stocks that tend to produce better results.
For instance, you may start with a hypothesis that dividend-issuing stocks tend to perform better than stocks that do not. This may not always be true of all companies; for instance, Apple famously did not issue dividends for many years, yet had good historical performance. The hypothesis about dividend-paying stocks may go something like this:
Companies that regularly issue dividends may also be more prudent in allocating their available cash, and may indicate that they are more conscious of prioritizing shareholder interests. For example, a CEO may decide to reinvest cash into pet projects that produce low returns. Or, the CEO may do some analysis, identify that reinvesting within the company produces lower returns compared to a diversified portfolio, and so decide that shareholders would be better served if they were given the cash (in the form of dividends). So according to this hypothesis, dividends may be both a proxy for how the company is doing (in terms of earnings and cash flow), but also a signal that the company acts in the best interest of its shareholders. Of course, it's important to test whether this works in practice.
You may also have another hypothesis, from which you wish to design a portfolio that can then be made into an ETF. You may find that investors want to invest in passive beta funds but with less risk exposure (less volatility). The goal of a low-volatility fund that still produces returns similar to an index may be appealing to investors who have a shorter investment time horizon and are therefore more risk averse.
So the objective is to design a portfolio that closely tracks an index while also minimizing the portfolio variance. If this portfolio can match the returns of the index with less volatility, then it has a higher risk-adjusted return (same return, lower volatility).
Smart Beta ETFs can be designed with either of these two general methods (among others): alternative weighting and minimum volatility.
Each problem consists of a function to implement and instructions on how to implement it. The parts of the function that need to be implemented are marked with a # TODO comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our project_tests package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity.
When you implement the functions, you'll only need to use the packages you've used in the classroom, like Pandas and Numpy. These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code.
The other packages that we're importing are helper, project_helper, and project_tests. These are custom packages built to help you solve the problems. The helper and project_helper modules contain utility functions and graph functions. The project_tests module contains the unit tests for all the problems.
import sys
!{sys.executable} -m pip install -r requirements.txt
Requirement already satisfied: colour==0.1.5 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1))
Collecting cvxpy==1.0.3 (from -r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/a1/59/2613468ffbbe3a818934d06b81b9f4877fe054afbf4f99d2f43f398a0b34/cvxpy-1.0.3.tar.gz (880kB)
Requirement already satisfied: cycler==0.10.0 in /opt/conda/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from -r requirements.txt (line 3))
Collecting numpy==1.13.3 (from -r requirements.txt (line 4))
Downloading https://files.pythonhosted.org/packages/57/a7/e3e6bd9d595125e1abbe162e323fd2d06f6f6683185294b79cd2cdb190d5/numpy-1.13.3-cp36-cp36m-manylinux1_x86_64.whl (17.0MB)
Collecting pandas==0.21.1 (from -r requirements.txt (line 5))
Downloading https://files.pythonhosted.org/packages/3a/e1/6c514df670b887c77838ab856f57783c07e8760f2e3d5939203a39735e0e/pandas-0.21.1-cp36-cp36m-manylinux1_x86_64.whl (26.2MB)
Collecting plotly==2.2.3 (from -r requirements.txt (line 6))
Downloading https://files.pythonhosted.org/packages/99/a6/8214b6564bf4ace9bec8a26e7f89832792be582c042c47c912d3201328a0/plotly-2.2.3.tar.gz (1.1MB)
Requirement already satisfied: pyparsing==2.2.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 7))
Requirement already satisfied: python-dateutil==2.6.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 8))
Requirement already satisfied: pytz==2017.3 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 9))
Requirement already satisfied: requests==2.18.4 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 10))
Collecting scipy==1.0.0 (from -r requirements.txt (line 11))
Downloading https://files.pythonhosted.org/packages/d8/5e/caa01ba7be11600b6a9d39265440d7b3be3d69206da887c42bef049521f2/scipy-1.0.0-cp36-cp36m-manylinux1_x86_64.whl (50.0MB)
Requirement already satisfied: scikit-learn==0.19.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 12))
Requirement already satisfied: six==1.11.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 13))
Collecting tqdm==4.19.5 (from -r requirements.txt (line 14))
Downloading https://files.pythonhosted.org/packages/71/3c/341b4fa23cb3abc335207dba057c790f3bb329f6757e1fcd5d347bcf8308/tqdm-4.19.5-py2.py3-none-any.whl (51kB)
Collecting osqp (from cvxpy==1.0.3->-r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/05/42/0ccab82eb6ed0edb83d184928ec864232dc00c3cf968a4b92a02caf0f7ec/osqp-0.4.0-cp36-cp36m-manylinux1_x86_64.whl (146kB)
Collecting ecos>=2 (from cvxpy==1.0.3->-r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/b6/b4/988b15513b13e8ea2eac65e97d84221ac515a735a93f046e2a2a3d7863fc/ecos-2.0.5.tar.gz (114kB)
Collecting scs>=1.1.3 (from cvxpy==1.0.3->-r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/b3/fd/6e01c4f4a69fcc6c3db130ba55572089e78e77ea8c0921a679f9da1ec04c/scs-2.0.2.tar.gz (133kB)
Collecting multiprocess (from cvxpy==1.0.3->-r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/7a/ee/b9bf3e171f936743758ef924622d8dd00516c5532b00a1210a09bce68325/multiprocess-0.70.6.1.tar.gz (1.4MB)
Requirement already satisfied: fastcache in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2))
Requirement already satisfied: toolz in /opt/conda/lib/python3.6/site-packages (from cvxpy==1.0.3->-r requirements.txt (line 2))
Requirement already satisfied: decorator>=4.0.6 in /opt/conda/lib/python3.6/site-packages (from plotly==2.2.3->-r requirements.txt (line 6))
Requirement already satisfied: nbformat>=4.2 in /opt/conda/lib/python3.6/site-packages (from plotly==2.2.3->-r requirements.txt (line 6))
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10))
Requirement already satisfied: idna<2.7,>=2.5 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10))
Requirement already satisfied: urllib3<1.23,>=1.21.1 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10))
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.6/site-packages (from requests==2.18.4->-r requirements.txt (line 10))
Requirement already satisfied: future in /opt/conda/lib/python3.6/site-packages (from osqp->cvxpy==1.0.3->-r requirements.txt (line 2))
Collecting dill>=0.2.8.1 (from multiprocess->cvxpy==1.0.3->-r requirements.txt (line 2))
Downloading https://files.pythonhosted.org/packages/6f/78/8b96476f4ae426db71c6e86a8e6a81407f015b34547e442291cd397b18f3/dill-0.2.8.2.tar.gz (150kB)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6))
Requirement already satisfied: traitlets>=4.1 in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6))
Requirement already satisfied: ipython-genutils in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6))
Requirement already satisfied: jupyter-core in /opt/conda/lib/python3.6/site-packages (from nbformat>=4.2->plotly==2.2.3->-r requirements.txt (line 6))
Building wheels for collected packages: cvxpy, plotly, ecos, scs, multiprocess, dill
Running setup.py bdist_wheel for cvxpy ... done
Stored in directory: /root/.cache/pip/wheels/2b/60/0b/0c2596528665e21d698d6f84a3406c52044c7b4ca6ac737cf3
Running setup.py bdist_wheel for plotly ... done
Stored in directory: /root/.cache/pip/wheels/98/54/81/dd92d5b0858fac680cd7bdb8800eb26c001dd9f5dc8b1bc0ba
Running setup.py bdist_wheel for ecos ... done
Stored in directory: /root/.cache/pip/wheels/50/91/1b/568de3c087b3399b03d130e71b1fd048ec072c45f72b6b6e9a
Running setup.py bdist_wheel for scs ... done
Stored in directory: /root/.cache/pip/wheels/ff/f0/aa/530ccd478d7d9900b4e9ef5bc5a39e895ce110bed3d3ac653e
Running setup.py bdist_wheel for multiprocess ... done
Stored in directory: /root/.cache/pip/wheels/8b/36/e5/96614ab62baf927e9bc06889ea794a8e87552b84bb6bf65e3e
Running setup.py bdist_wheel for dill ... done
Stored in directory: /root/.cache/pip/wheels/e2/5d/17/f87cb7751896ac629b435a8696f83ee75b11029f5d6f6bda72
Successfully built cvxpy plotly ecos scs multiprocess dill
Installing collected packages: numpy, scipy, osqp, ecos, scs, dill, multiprocess, cvxpy, pandas, plotly, tqdm
Found existing installation: numpy 1.12.1
Uninstalling numpy-1.12.1:
Successfully uninstalled numpy-1.12.1
Found existing installation: scipy 0.19.1
Uninstalling scipy-0.19.1:
Successfully uninstalled scipy-0.19.1
Found existing installation: dill 0.2.7.1
Uninstalling dill-0.2.7.1:
Successfully uninstalled dill-0.2.7.1
Found existing installation: pandas 0.20.3
Uninstalling pandas-0.20.3:
Successfully uninstalled pandas-0.20.3
Found existing installation: plotly 2.0.15
Uninstalling plotly-2.0.15:
Successfully uninstalled plotly-2.0.15
Found existing installation: tqdm 4.11.2
Uninstalling tqdm-4.11.2:
Successfully uninstalled tqdm-4.11.2
Successfully installed cvxpy-1.0.3 dill-0.2.8.2 ecos-2.0.5 multiprocess-0.70.6.1 numpy-1.13.3 osqp-0.4.0 pandas-0.21.1 plotly-2.2.3 scipy-1.0.0 scs-2.0.2 tqdm-4.19.5
You are using pip version 9.0.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
!{sys.executable} -m pip install numpy --upgrade --ignore-installed
Collecting numpy
Downloading https://files.pythonhosted.org/packages/fe/94/7049fed8373c52839c8cde619acaf2c9b83082b935e5aa8c0fa27a4a8bcc/numpy-1.15.1-cp36-cp36m-manylinux1_x86_64.whl (13.9MB)
Installing collected packages: numpy
Successfully installed numpy-1.15.1
You are using pip version 9.0.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests
df = pd.read_csv('eod-quotemedia.csv')

# Keep only the stocks with the largest dollar volume (top 20% of the universe)
percent_top_dollar = 0.2
high_volume_symbols = project_helper.large_dollar_volume_stocks(df, 'adj_close', 'adj_volume', percent_top_dollar)
df = df[df['ticker'].isin(high_volume_symbols)]

# Pivot the data into date x ticker matrices of closing prices, volume, and dividends
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
volume = df.reset_index().pivot(index='date', columns='ticker', values='adj_volume')
dividends = df.reset_index().pivot(index='date', columns='ticker', values='dividends')
To see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix.
project_helper.print_dataframe(close)
In Part 1 of this project, you'll build a portfolio using dividend yield to choose the portfolio weights. A portfolio such as this could be incorporated into a smart beta ETF. You'll compare this portfolio to a market cap weighted index to see how well it performs.
Note that in practice, you'll probably get the index weights from a data vendor (such as companies that create indices, like MSCI, FTSE, Standard and Poor's), but for this exercise we will simulate a market cap weighted index.
The index we'll be using is based on large dollar volume stocks. Implement generate_dollar_volume_weights to generate the weights for this index. For each date, generate the weights based on the dollar volume traded for that date. For example, assume the following close prices and volume data:
Prices
A B ...
2013-07-08 2 2 ...
2013-07-09 5 6 ...
2013-07-10 1 2 ...
2013-07-11 6 5 ...
... ... ... ...
Volume
A B ...
2013-07-08 100 340 ...
2013-07-09 240 220 ...
2013-07-10 120 500 ...
2013-07-11 10 100 ...
... ... ... ...
The weights created from the function generate_dollar_volume_weights
should be the following:
A B ...
2013-07-08 0.126.. 0.194.. ...
2013-07-09 0.759.. 0.377.. ...
2013-07-10 0.075.. 0.285.. ...
2013-07-11 0.037.. 0.142.. ...
... ... ... ...
def generate_dollar_volume_weights(close, volume):
    """
    Generate dollar volume weights.
    Parameters
    ----------
    close : DataFrame
        Close price for each ticker and date
    volume : DataFrame
        Volume for each ticker and date
    Returns
    -------
    dollar_volume_weights : DataFrame
        The dollar volume weights for each ticker and date
    """
    assert close.index.equals(volume.index)
    assert close.columns.equals(volume.columns)
    #TODO: Implement function
    # Dollar volume traded per ticker and date, normalized so each date's weights sum to one
    dollar_volume = close * volume
    dollar_volume_weights = dollar_volume.divide(dollar_volume.sum(axis=1), axis=0)
    return dollar_volume_weights
project_tests.test_generate_dollar_volume_weights(generate_dollar_volume_weights)
Tests Passed
Let's generate the index weights using generate_dollar_volume_weights
and view them using a heatmap.
index_weights = generate_dollar_volume_weights(close, volume)
project_helper.plot_weights(index_weights, 'Index Weights')
Now that we have the index weights, let's choose the portfolio weights based on dividends. You would normally calculate the weights based on trailing dividend yield, but we'll simplify this by just calculating the total dividend yield over time.
Implement calculate_dividend_weights
to return the weights for each stock based on its total dividend yield over time. This is similar to generating the weight for the index, but it's using dividend data instead.
For example, assume the following is dividends
data:
Dividends
A B
2013-07-08 0 0
2013-07-09 0 1
2013-07-10 0.5 0
2013-07-11 0 0
2013-07-12 2 0
... ... ...
The weights created from the function calculate_dividend_weights
should be the following:
A B
2013-07-08 NaN NaN
2013-07-09 0 1
2013-07-10 0.333.. 0.666..
2013-07-11 0.333.. 0.666..
2013-07-12 0.714.. 0.285..
... ... ...
def calculate_dividend_weights(dividends):
    """
    Calculate dividend weights.
    Parameters
    ----------
    dividends : DataFrame
        Dividends for each stock and date
    Returns
    -------
    dividend_weights : DataFrame
        Weights for each stock and date
    """
    #TODO: Implement function
    # Cumulative dividends paid to date, normalized across stocks for each date
    cum_dividend_yields = dividends.cumsum()
    dividend_weights = cum_dividend_yields.divide(cum_dividend_yields.sum(axis=1), axis=0)
    return dividend_weights
project_tests.test_calculate_dividend_weights(calculate_dividend_weights)
Tests Passed
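As a quick sanity check (purely illustrative, not part of the graded project), we can feed the small dividends example above into the implemented function and confirm it reproduces the documented weights:
toy_dividends = pd.DataFrame(
    {'A': [0, 0, 0.5, 0, 2], 'B': [0, 1, 0, 0, 0]},
    index=pd.date_range('2013-07-08', periods=5))
# Should print NaNs on the first date (no dividends yet), then 0/1, 0.333/0.666, 0.333/0.666, 0.714/0.285
print(calculate_dividend_weights(toy_dividends))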
Just like the index weights, let's generate the ETF weights and view them using a heatmap.
etf_weights = calculate_dividend_weights(dividends)
project_helper.plot_weights(etf_weights, 'ETF Weights')
Implement generate_returns
to generate returns data for all the stocks and dates from price data. You might notice we're implementing returns and not log returns. Since we're not dealing with volatility, we don't have to use log returns.
def generate_returns(prices):
    """
    Generate returns for ticker and date.
    Parameters
    ----------
    prices : DataFrame
        Price for each ticker and date
    Returns
    -------
    returns : DataFrame
        The returns for each ticker and date
    """
    #TODO: Implement function
    # Simple (arithmetic) returns: percent change from the previous day's price
    return (prices - prices.shift(1)) / prices.shift(1)
project_tests.test_generate_returns(generate_returns)
Tests Passed
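As a side note, this is essentially the same calculation Pandas performs with pct_change (ignoring how missing values are handled). A tiny illustration on made-up prices, not from the project data:
toy_prices = pd.DataFrame({'A': [2.0, 5.0, 1.0, 6.0], 'B': [2.0, 6.0, 2.0, 5.0]})
# First row is NaN because there's no previous price to compare against
print(generate_returns(toy_prices))
# The same values come out of toy_prices.pct_change()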
Let's generate the closing returns using generate_returns
and view them using a heatmap.
returns = generate_returns(close)
project_helper.plot_returns(returns, 'Close Returns')
With the returns of each stock computed, we can use it to compute the returns for an index or ETF. Implement generate_weighted_returns
to create weighted returns using the returns and weights.
def generate_weighted_returns(returns, weights):
    """
    Generate weighted returns.
    Parameters
    ----------
    returns : DataFrame
        Returns for each ticker and date
    weights : DataFrame
        Weights for each ticker and date
    Returns
    -------
    weighted_returns : DataFrame
        Weighted returns for each ticker and date
    """
    assert returns.index.equals(weights.index)
    assert returns.columns.equals(weights.columns)
    #TODO: Implement function
    return returns * weights
project_tests.test_generate_weighted_returns(generate_weighted_returns)
Tests Passed
Let's generate the ETF and index returns using generate_weighted_returns
and view them using a heatmap.
index_weighted_returns = generate_weighted_returns(returns, index_weights)
etf_weighted_returns = generate_weighted_returns(returns, etf_weights)
project_helper.plot_returns(index_weighted_returns, 'Index Returns')
project_helper.plot_returns(etf_weighted_returns, 'ETF Returns')
To compare performance between the ETF and Index, we're going to calculate the tracking error. Before we do that, we first need to calculate the index and ETF cumulative returns. Implement calculate_cumulative_returns to calculate the cumulative returns over time given the returns.
def calculate_cumulative_returns(returns):
    """
    Calculate cumulative returns.
    Parameters
    ----------
    returns : DataFrame
        Returns for each ticker and date
    Returns
    -------
    cumulative_returns : Pandas Series
        Cumulative returns for each date
    """
    #TODO: Implement function
    # Sum across tickers to get the portfolio return per date, then compound (1 + r) over time
    return (returns.sum(axis=1) + 1).cumprod()
project_tests.test_calculate_cumulative_returns(calculate_cumulative_returns)
Tests Passed
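For intuition, here is what the function does on a tiny set of made-up weighted returns: the row sums are the portfolio return for each date, and those are then compounded.
toy_weighted_returns = pd.DataFrame({'A': [0.01, -0.02, 0.03], 'B': [0.00, 0.01, -0.01]})
# Portfolio returns per date are 0.01, -0.01, 0.02, so the cumulative series is
# 1.01, 1.01 * 0.99, 1.01 * 0.99 * 1.02
print(calculate_cumulative_returns(toy_weighted_returns))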
Let's generate the ETF and index cumulative returns using calculate_cumulative_returns
and compare the two.
index_weighted_cumulative_returns = calculate_cumulative_returns(index_weighted_returns)
etf_weighted_cumulative_returns = calculate_cumulative_returns(etf_weighted_returns)
project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, etf_weighted_cumulative_returns, 'Smart Beta ETF vs Index')
In order to check the performance of the smart beta portfolio, we can calculate the annualized tracking error against the index. Implement tracking_error
to return the tracking error between the ETF and benchmark.
For reference, we'll be using the following annualized tracking error function: $$ TE = \sqrt{252} * SampleStdev(r_p - r_b) $$
Where $ r_p $ is the portfolio/ETF returns and $ r_b $ is the benchmark returns.
Note: When calculating the sample standard deviation, the delta degrees of freedom is 1, which is the also the default value.
def tracking_error(benchmark_returns_by_date, etf_returns_by_date):
    """
    Calculate the tracking error.
    Parameters
    ----------
    benchmark_returns_by_date : Pandas Series
        The benchmark returns for each date
    etf_returns_by_date : Pandas Series
        The ETF returns for each date
    Returns
    -------
    tracking_error : float
        The tracking error
    """
    assert benchmark_returns_by_date.index.equals(etf_returns_by_date.index)
    #TODO: Implement function
    return np.sqrt(252) * np.std(etf_returns_by_date - benchmark_returns_by_date, ddof=1)
project_tests.test_tracking_error(tracking_error)
Tests Passed
Let's generate the tracking error using tracking_error.
smart_beta_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(etf_weighted_returns, 1))
print('Smart Beta Tracking Error: {}'.format(smart_beta_tracking_error))
Smart Beta Tracking Error: 0.1020761483200753
Now, let's create a second portfolio. We'll still reuse the market cap weighted index, but this will be independent of the dividend-weighted portfolio that we created in part 1.
We want to both minimize the portfolio variance and also want to closely track a market cap weighted index. In other words, we're trying to minimize the distance between the weights of our portfolio and the weights of the index.
$Minimize \left [ \sigma^2_p + \lambda \sqrt{\sum_{1}^{m}(weight_i - indexWeight_i)^2} \right ]$ where $m$ is the number of stocks in the portfolio, and $\lambda$ is a scaling factor that you can choose.
Why are we doing this? One way that investors evaluate a fund is by how well it tracks its index. The fund is still expected to deviate from the index within a certain range in order to improve fund performance. A way for a fund to track the performance of its benchmark is by keeping its asset weights similar to the weights of the index. We’d expect that if the fund has the same stocks as the benchmark, and also the same weights for each stock as the benchmark, the fund would yield about the same returns as the benchmark. By minimizing a linear combination of both the portfolio risk and distance between portfolio and benchmark weights, we attempt to balance the desire to minimize portfolio variance with the goal of tracking the index.
Implement get_covariance_returns
to calculate the covariance of the returns
. We'll use this to calculate the portfolio variance.
If we have $m$ stock series, the covariance matrix is an $m \times m$ matrix containing the covariance between each pair of stocks. We can use Numpy.cov
to get the covariance. We give it a 2D array in which each row is a stock series, and each column is an observation at the same period of time. For any NaN
values, you can replace them with zeros using the DataFrame.fillna
function.
The covariance matrix $\mathbf{P} = \begin{bmatrix} \sigma^2_{1,1} & ... & \sigma_{1,m} \\ ... & ... & ...\\ \sigma_{m,1} & ... & \sigma^2_{m,m} \\ \end{bmatrix}$, where the diagonal entries are the variances and the off-diagonal entries are the covariances between each pair of stocks.
def get_covariance_returns(returns):
    """
    Calculate covariance matrices.
    Parameters
    ----------
    returns : DataFrame
        Returns for each ticker and date
    Returns
    -------
    returns_covariance : 2 dimensional Ndarray
        The covariance of the returns
    """
    #TODO: Implement function
    # Transpose so each row is one ticker's return series (as np.cov expects) and replace NaNs with zeros
    return np.cov(returns.T.fillna(0))
project_tests.test_get_covariance_returns(get_covariance_returns)
Tests Passed
Let's look at the covariance generated from get_covariance_returns.
covariance_returns = get_covariance_returns(returns)
covariance_returns = pd.DataFrame(covariance_returns, returns.columns, returns.columns)
covariance_returns_correlation = np.linalg.inv(np.diag(np.sqrt(np.diag(covariance_returns))))
covariance_returns_correlation = pd.DataFrame(
    covariance_returns_correlation.dot(covariance_returns).dot(covariance_returns_correlation),
    covariance_returns.index,
    covariance_returns.columns)
project_helper.plot_covariance_returns_correlation(
    covariance_returns_correlation,
    'Covariance Returns Correlation Matrix')
We can write the portfolio variance $\sigma^2_p = \mathbf{x^T} \mathbf{P} \mathbf{x}$
Recall that $\mathbf{x^T} \mathbf{P} \mathbf{x}$ is called the quadratic form.
We can use the cvxpy function quad_form(x,P)
to get the quadratic form.
We want portfolio weights that track the index closely. So we want to minimize the distance between them.
Recall from the Pythagorean theorem that you can get the distance between two points in an x,y plane by adding the square of the x and y distances and taking the square root. Extending this to any number of dimensions is called the L2 norm. So $\sqrt{\sum_{i=1}^{m}(weight_i - indexWeight_i)^2}$ can also be written as $\left \| \mathbf{x} - \mathbf{index} \right \|_2$. There's a cvxpy function called norm(): norm(x, p=2, axis=None). The default is already set to find an L2 norm, so you would pass in one argument, which is the difference between your portfolio weights and the index weights.
We want to minimize both the portfolio variance and the distance of the portfolio weights from the index weights.
We also want to choose a scale
constant, which is $\lambda$ in the expression.
$\mathbf{x^T} \mathbf{P} \mathbf{x} + \lambda \left \| \mathbf{x} - \mathbf{index} \right \|_2$
This lets us choose how much priority we give to minimizing the difference from the index, relative to minimizing the variance of the portfolio. The higher the value of scale ($\lambda$), the more closely the optimized weights will track the index.
We can find the objective function using cvxpy objective = cvx.Minimize()
. Can you guess what to pass into this function?
We can also define our constraints in a list. For example, you'd want the weights to sum to one, so $\sum_{i=1}^{m}x_i = 1$. You may also need to go long only, which means no shorting, so no negative weights: $x_i \geq 0$ for all $i$. You could save these in a variable as [x >= 0, sum(x) == 1], where x was created using cvx.Variable().
So now that we have our objective function and constraints, we can solve for the values of $\mathbf{x}$.
cvxpy has the constructor Problem(objective, constraints), which returns a Problem object. The Problem object has a method solve(), which returns the minimized value of the objective; in this case, that's the combination of the portfolio variance and the distance-to-index penalty. It also updates the vector $\mathbf{x}$. We can check out the values of $\mathbf{x}$ that minimized the objective by using x.value.
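To make these pieces concrete before writing the full function, here is a minimal two-asset sketch with made-up numbers (it is not part of the project data or the graded solution):
import cvxpy as cvx
import numpy as np
# Toy covariance matrix and index weights for two assets (made-up values)
P = np.array([[0.10, 0.02],
              [0.02, 0.05]])
index = np.array([0.6, 0.4])
scale = 2.0
x = cvx.Variable(2)                              # portfolio weights to solve for
portfolio_variance = cvx.quad_form(x, P)         # x^T P x
distance_to_index = cvx.norm(x - index, p=2)     # L2 distance to the index weights
objective = cvx.Minimize(portfolio_variance + scale * distance_to_index)
constraints = [x >= 0, cvx.sum(x) == 1]          # long only, fully invested
cvx.Problem(objective, constraints).solve()
print(x.value)                                   # the optimal two-asset weights
The function below does the same thing for all of the stocks in the index.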
import cvxpy as cvx
def get_optimal_weights(covariance_returns, index_weights, scale=2.0):
    """
    Find the optimal weights.
    Parameters
    ----------
    covariance_returns : 2 dimensional Ndarray
        The covariance of the returns
    index_weights : Pandas Series
        Index weights for all tickers at a period in time
    scale : float
        The penalty factor for weights that deviate from the index
    Returns
    -------
    x : 1 dimensional Ndarray
        The solution for x
    """
    assert len(covariance_returns.shape) == 2
    assert len(index_weights.shape) == 1
    assert covariance_returns.shape[0] == covariance_returns.shape[1] == index_weights.shape[0]
    #TODO: Implement function
    # Number of index weights
    num_of_weights = len(index_weights)
    # x variables (to be found with optimization)
    x = cvx.Variable(num_of_weights)
    # Portfolio variance, in quadratic form
    portfolio_variance = cvx.quad_form(x, covariance_returns)
    # Distance (L2 norm) between portfolio and index weights
    distance_to_index = cvx.norm(x - index_weights, p=2)
    # Objective function
    objective = cvx.Minimize(portfolio_variance + scale * distance_to_index)
    # Constraints
    constraints = [x >= 0, sum(x) == 1]
    # Using cvxpy to solve the objective
    problem = cvx.Problem(objective, constraints)
    problem.solve()
    # Retrieve the weights of the optimized portfolio
    x_values = x.value
    return x_values
project_tests.test_get_optimal_weights(get_optimal_weights)
Tests Passed
Using the get_optimal_weights function, let's generate the optimal ETF weights without rebalancing. We can do this by feeding in the covariance of the entire history of data. We also need to feed in a set of index weights; here we'll use the index weights from the most recent date in the data.
raw_optimal_single_rebalance_etf_weights = get_optimal_weights(covariance_returns.values, index_weights.iloc[-1])
optimal_single_rebalance_etf_weights = pd.DataFrame(
    np.tile(raw_optimal_single_rebalance_etf_weights, (len(returns.index), 1)),
    returns.index,
    returns.columns)
With our ETF weights built, let's compare it to the index. Run the next cell to calculate the ETF returns and compare it to the index returns.
optim_etf_returns = generate_weighted_returns(returns, optimal_single_rebalance_etf_weights)
optim_etf_cumulative_returns = calculate_cumulative_returns(optim_etf_returns)
project_helper.plot_benchmark_returns(index_weighted_cumulative_returns, optim_etf_cumulative_returns, 'Optimized ETF vs Index')
optim_etf_tracking_error = tracking_error(np.sum(index_weighted_returns, 1), np.sum(optim_etf_returns, 1))
print('Optimized ETF Tracking Error: {}'.format(optim_etf_tracking_error))
Optimized ETF Tracking Error: 0.05795012630412267
The single optimized ETF portfolio used the same weights for the entire history. This might not be the optimal set of weights for the entire period. Let's rebalance the portfolio over the same period instead of using the same weights. Implement rebalance_portfolio to rebalance a portfolio.
Rebalance the portfolio every shift_size days. When rebalancing, you should look back a certain number of days of data in the past, denoted as chunk_size. Using this data, compute the optimal weights using get_optimal_weights and get_covariance_returns.
def rebalance_portfolio(returns, index_weights, shift_size, chunk_size):
    """
    Get weights for each rebalancing of the portfolio.
    Parameters
    ----------
    returns : DataFrame
        Returns for each ticker and date
    index_weights : DataFrame
        Index weight for each ticker and date
    shift_size : int
        The number of days between each rebalance
    chunk_size : int
        The number of days to look in the past for rebalancing
    Returns
    -------
    all_rebalance_weights : list of Ndarrays
        The ETF weights for each point they are rebalanced
    """
    assert returns.index.equals(index_weights.index)
    assert returns.columns.equals(index_weights.columns)
    assert shift_size > 0
    assert chunk_size >= 0
    #TODO: Implement function
    all_rebalance_weights = []
    # Step forward through time, rebalancing every shift_size days
    for i in range(chunk_size, len(returns), shift_size):
        # Look back chunk_size days to estimate the covariance of returns
        chunk = returns.iloc[i - chunk_size : i]
        covariance_returns = get_covariance_returns(chunk)
        # Optimize against the index weights at the end of the lookback window
        optimal_weights = get_optimal_weights(covariance_returns, index_weights.iloc[i - 1])
        all_rebalance_weights.append(optimal_weights)
    return all_rebalance_weights
project_tests.test_rebalance_portfolio(rebalance_portfolio)
Tests Passed
Run the following cell to rebalance the portfolio using rebalance_portfolio.
chunk_size = 250
shift_size = 5
all_rebalance_weights = rebalance_portfolio(returns, index_weights, shift_size, chunk_size)
With the portfolio rebalanced, we need to use a metric to measure the cost of rebalancing the portfolio. Implement get_portfolio_turnover
to calculate the annual portfolio turnover. We'll be using the formulas used in the classroom:
$ AnnualizedTurnover =\frac{SumTotalTurnover}{NumberOfRebalanceEvents} * NumberofRebalanceEventsPerYear $
$ SumTotalTurnover =\sum_{t,n}{\left | x_{t,n} - x_{t+1,n} \right |} $ Where $ x_{t,n} $ are the weights at time $ t $ for equity $ n $.
$ SumTotalTurnover $ is just a different way of writing $ \sum \left | x_{t_1,n} - x_{t_2,n} \right | $
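As a tiny worked example with made-up weights (two assets, three rebalance events, so the rebalance count between them is 2):
rebalance_weights = [np.array([0.5, 0.5]), np.array([0.6, 0.4]), np.array([0.4, 0.6])]
# Absolute weight changes: |0.6-0.5| + |0.4-0.5| = 0.2, then |0.4-0.6| + |0.6-0.4| = 0.4
sum_total_turnover = np.abs(np.diff(rebalance_weights, axis=0)).sum()    # 0.6
# With a rebalance every 5 trading days, there are 252 // 5 = 50 rebalance events per year
annualized_turnover = sum_total_turnover / 2 * (252 // 5)                # 15.0
print(annualized_turnover)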
def get_portfolio_turnover(all_rebalance_weights, shift_size, rebalance_count, n_trading_days_in_year=252):
    """
    Calculate portfolio turnover.
    Parameters
    ----------
    all_rebalance_weights : list of Ndarrays
        The ETF weights for each point they are rebalanced
    shift_size : int
        The number of days between each rebalance
    rebalance_count : int
        Number of times the portfolio was rebalanced
    n_trading_days_in_year: int
        Number of trading days in a year
    Returns
    -------
    portfolio_turnover : float
        The portfolio turnover
    """
    assert shift_size > 0
    assert rebalance_count > 0
    #TODO: Implement function
    # Sum of absolute weight changes between consecutive rebalances
    sum_total_turnover = np.abs(np.diff(np.flip(all_rebalance_weights, axis=0), axis=0)).sum()
    # Number of rebalance events in a trading year
    number_rebalance_events_per_year = n_trading_days_in_year // shift_size
    annualized_turnover = (sum_total_turnover / rebalance_count) * number_rebalance_events_per_year
    return annualized_turnover
project_tests.test_get_portfolio_turnover(get_portfolio_turnover)
Tests Passed
Run the following cell to get the portfolio turnover from get_portfolio_turnover.
print(get_portfolio_turnover(all_rebalance_weights, shift_size, len(all_rebalance_weights) - 1))
16.594080020340048
That's it! You've built a smart beta portfolio in part 1 and performed portfolio optimization in part 2. You can now submit your project.
Now that you're done with the project, it's time to submit it. Click the submit button in the bottom right. One of our reviewers will give you feedback on your project with a passed or not-passed grade. You can continue to the next section while you wait for feedback.