#!/usr/bin/env python
# coding: utf-8

# # Lucas Asset Pricing with Advanced Approximation Methods

# ### João Brogueira and Fabian Schütze

# ## Introduction

# This note describes why and how we modified the computer code of the original lucastree.py module. We briefly reformulate Lucas' asset pricing problem as found in the lecture notes. Denote by $y$ the fruit of the tree. Next period's fruit follows the process $G(y,z') = y^\alpha z'$ with $z' \sim \log N(0,\sigma^2)$. The investor has CRRA preferences with curvature parameter $\gamma$ and discount factor $\beta$. Following Lucas (1978), the pricing function, $p(y)$, solves the functional equation:
#
# $$ f(y) = h(y) + \beta \int_Z f(G(y,z')) Q(dz'),$$
# with
# \begin{align*}
# f(y) &= p(y)y^{-\gamma}, \\
# h(y) &= \beta \int_Z \left( G(y,z') \right)^{1-\gamma} Q(dz') = \beta y^{(1-\gamma)\alpha} \exp \left( (1-\gamma)^2 \sigma^2/2 \right).
# \end{align*}
#
# We want the numerical solution $f$ to comply with theoretical predictions about its functional form. In the following, we first document under which circumstances $h$ transmits monotonicity and concavity onto $f$. In particular, we prove that if $G$ is strictly increasing and concave (see footnote 1), $h$ transmits the sign of its first and second derivatives onto $f$. Additionally, we show that if both $G$ and $h$ are strictly decreasing and convex, $f$ is strictly decreasing and convex as well. The solution to the functional equation is obtained numerically by iterating the contraction mapping $Tf(y) = h(y) + \beta \int_Z f(G(y,z')) Q(dz')$ until the distance between two successive iterations is smaller than a tolerance criterion. To compute the integral numerically, $f(G(y,z'))$ needs to be evaluated at arguments $y$ that are not on the grid. This is a chance to impose the properties of $h$ onto $f$ through an appropriate approximation routine. The end of this note discusses how to implement such a routine.
#
# 1. For the sake of brevity, when writing strictly increasing and concave we really mean strictly increasing and strictly concave. Also, strictly decreasing and convex refers to strictly decreasing and strictly convex, etc.

# ## Theoretical Predictions about the Functional Form of the Solution to Lucas' Asset Pricing Equation

# This section documents under which circumstances $f$ inherits the sign of the first and second derivatives of $h$. In the following, suppose all assumptions needed to guarantee a unique solution to Lucas' asset pricing problem are satisfied. One such assumption is that the function $h$ is bounded in the supremum norm. Numerically, the assumption is satisfied because the lower end of the interval $Y$ is strictly positive and because $Y$ is bounded. Theoretically, one can prove that $h$ need only be bounded in a weighted supremum norm when the parameter $\alpha > 0$. Based on exercise 9.7 of the book by Stokey and Lucas with Prescott (1989), we prove the following proposition:
#
# **Proposition**
#
# 1. Suppose $G$ is strictly increasing and concave in $y$. If $h$ is strictly increasing and concave, $f$ is strictly increasing and concave. If $h$ is strictly decreasing and convex, $f$ is strictly decreasing and convex.
# 2. Suppose $G$ is strictly decreasing and convex in $y$. If $h$ is strictly decreasing and convex, $f$ is strictly decreasing and convex.
#
# **Proof**
#
# **1** Following the notation of the lecture notes, denote by $cb\mathbf{R}_+$ the set of continuous and bounded functions $f:\mathbf{R}_{+} \rightarrow \mathbf{R}_{+}$.
# The set $cb'\mathbf{R}_{+} \subset cb\mathbf{R}_{+}$ is the set of continuous, bounded, nondecreasing and concave functions, and $cb''\mathbf{R}_{+} \subset cb'\mathbf{R}_{+}$ additionally imposes strict monotonicity and strict concavity. We want to show that the contraction operator $T$ maps any function $\tilde{f} \in cb'\mathbf{R}_{+}$ into the subset $cb''\mathbf{R}_{+}$. As the solution to the functional equation is characterized by $Tf = f$ and $cb'\mathbf{R}_{+}$ is a closed set, if the operator $T$ transforms any nondecreasing and concave function into a strictly increasing and concave function, then $f$ is strictly increasing and concave (Corollary 1 of the Contraction Mapping Theorem in Stokey and Lucas with Prescott (1989), p. 52).
#
# To show the desired result, suppose first that $h$ is strictly increasing and concave and pick any $\tilde{f} \in cb'\mathbf{R}_{+}$. To begin, study whether $T\tilde{f}$ is strictly increasing. For any pair $\hat{y},y \in Y$ with $\hat{y} > y$, the function $T\tilde{f}$ satisfies:
#
# \begin{align*}
# T\tilde{f}(\hat{y}) &= h(\hat{y}) + \beta \int_Z \tilde{f}( G(\hat{y},z')) Q(dz')\\
# &> h(y) + \beta \int_Z \tilde{f}( G(y,z')) Q(dz')\\
# &= T\tilde{f}(y).
# \end{align*}
#
# The inequality holds because $G$ and $h$ are strictly increasing and $\tilde{f}$ is nondecreasing. Hence, $T\tilde{f}$ is strictly increasing.
#
# To analyze concavity, define $y_{\omega} = \omega y + (1-\omega) y'$, for any $y,y' \in Y$, $y \neq y'$, and $0 < \omega < 1$. The strict concavity of $h$ and $G$, together with $\tilde{f}$ being concave, ensures that:
#
# \begin{align*}
# T\tilde{f}(y_\omega) &= h(y_\omega) + \beta \int_Z \tilde{f}( G(y_\omega,z')) Q(dz') \\
# &> \omega \left[ h(y) + \beta \int_Z \tilde{f}( G(y,z')) Q(dz') \right] + (1 - \omega) \left[ h(y') + \beta \int_Z \tilde{f}( G(y',z')) Q(dz') \right] \\
# &= \omega T\tilde{f}(y) + (1-\omega) T \tilde{f}(y').
# \end{align*}
#
# The function $T\tilde{f}$ is strictly concave. Taken together, we know that for any $\tilde{f} \in cb'\mathbf{R}_{+}$, $T\tilde{f} \in cb''\mathbf{R}_{+}$. Hence, $f$ must be an element of the set $cb''\mathbf{R}_+$, guaranteeing that $f$ has the same functional form as $h$.
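# Before turning to the case of decreasing $h$, the following cell illustrates case (1.) numerically. It is a minimal sketch, separate from the module developed below: the grid, the parameter values $(\gamma,\alpha) = (0.5,0.75)$, the candidate $\tilde{f}(y) = \log(1+y)$ and the quadrature rule are all illustrative assumptions. The cell applies the operator $T$ once to a nondecreasing, concave candidate and checks that the image is strictly increasing and strictly concave on the grid.

# In[ ]:

import numpy as np
from scipy.stats import lognorm
from scipy.integrate import fixed_quad

beta, gamma, alpha, sigma = 0.95, 0.5, 0.75, 0.1  # h is strictly increasing and concave
grid = np.linspace(0.5, 2.0, 25)                  # illustrative grid
phi = lognorm(sigma)

# h(y) = beta * y**((1-gamma)*alpha) * exp((1-gamma)**2 * sigma**2 / 2)
h = beta * grid**((1 - gamma) * alpha) * np.exp((1 - gamma)**2 * sigma**2 / 2)
f_tilde = lambda y: np.log(1 + y)                 # nondecreasing, concave candidate

# One application of T: Tf(y) = h(y) + beta * int f_tilde(y**alpha * z) phi(dz)
Tf = np.array([h_i + beta * fixed_quad(lambda z: f_tilde(y**alpha * z) * phi.pdf(z),
                                       np.exp(-5 * sigma), np.exp(5 * sigma), n=20)[0]
               for h_i, y in zip(h, grid)])

print(np.all(np.ediff1d(Tf) > 0))              # strictly increasing: True
print(np.all(np.ediff1d(np.ediff1d(Tf)) < 0))  # strictly concave: True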

# Now, suppose $h$ is convex and decreasing. We could again define the operator $T$ as $Tf(y) = h(y) + \beta \int_Z f(G(y,z')) Q(dz')$ and study into which subset a candidate solution is mapped. To facilitate the analysis though, take a different route. Look at the modified operator
# $$Tf_{-}(y) = h_{-}(y) + \beta \int_Z f_{-} (G(y,z')) Q(dz'),$$
# with $h_{-} = -h$ and $f_{-} = -f$. Under the same assumptions guaranteeing a unique solution to the original contraction mapping, there exists a unique solution to the modified contraction mapping. As $h_{-}$ is strictly increasing and concave, the proof above applies to the modified contraction mapping. As $f_{-}$ is strictly increasing and concave, $f$ is strictly decreasing and convex and inherits the properties of $h$.
#
# **2** As both $G$ and $h$ are strictly decreasing and convex, one can proceed in a similar fashion as in case (1.) to show that $h$ transmits its functional form to $f$.

# The different cases of the proposition can be rephrased in terms of the values of the parameters $\gamma,\alpha$. The functional form of $h$ is jointly determined by $\gamma,\alpha$ as $h(y) = \beta y^{(1-\gamma)\alpha} \exp \left( (1-\gamma)^2 \sigma^2/2 \right)$. If $0 < \alpha < 1$, $G$ is strictly increasing and concave and case (1.) of the proposition applies. If $0 < \gamma < 1$, $f$ is strictly increasing and concave. If $\gamma > 1$, $f$ is strictly decreasing and convex. In contrast, suppose $-1 < \alpha < 0$. If $0 < \gamma < 1$, case (2.) of the proposition applies and $f$ is strictly decreasing and convex. If $\gamma > 1$, theory does not offer any help in determining the functional form of $f$: in this situation $G$ is decreasing and convex, while $h$ is increasing. Our proposition is deliberately more restrictive than the one in exercise 9.7 of Stokey and Lucas with Prescott (1989). Because we can calculate the function $f$ analytically for the special cases of $\alpha \in \left\lbrace 0,1\right\rbrace$, numerical techniques are not needed there.

# ## Imposing the functional form of $h$ onto $f$ through advanced approximation

# This section describes how we impose the functional form of $h$ onto $f$. The solution to the functional equation is obtained numerically by iterating the contraction mapping $Tf(y) = h(y) + \beta \int_Z f(G(y,z')) Q(dz')$ until the distance between two successive iterations is smaller than a tolerance criterion. To compute the integral numerically, $f(G(y,z'))$ needs to be evaluated by numerical approximation at arguments $x$ that are not on the grid. This approximation is a chance to impose the properties of $h$ onto $f$. The grid points are a set $Y_{\text{Grid}} = \left\lbrace y_1,y_2,\ldots, y_{N-1},y_N \right\rbrace \subset Y$, with $y_l < y_m$ if $l < m$, $l,m \in \mathbf{N}$. Consider a point $x \in Y$ that is not on the grid. If $y_1 < x < y_N$, we interpolate the function $f$ at $x$ by:
#
# \begin{equation}
# f(x) = f(y_L) + \dfrac{f(y_H) - f(y_L)}{h(y_H) - h(y_L)} \left( h(x) - h(y_L) \right),
# \end{equation}
#
# with $y_L = \max \left\lbrace y_i \in Y_{\text{Grid}} : y_i < x \right\rbrace$ and $y_H = \min \left\lbrace y_i \in Y_{\text{Grid}}: y_i > x \right\rbrace$. For any point $x$ lower than $y_1$ or higher than $y_N$, we define the function value as:
#
# \begin{align}
# f(x) =
# \begin{cases}
# f(y_1) + \dfrac{f(y_1) - f(y_2)}{h(y_1) - h(y_2)} \left(h(x) - h(y_1) \right) & \text{if } x < y_1,\\
# f(y_N) + \dfrac{f(y_N) - f(y_{N-1})}{h(y_N) - h(y_{N-1})} \left( h(x) - h(y_N) \right) & \text{if } x > y_N.
# \end{cases}
# \end{align}
#
# The approximation transmits the slope and shape of the function $h$ onto $f$, as $f'(x) \propto h'(x)$ and $f''(x) \propto h''(x)$: the ratio multiplying $\left( h(x) - h(y_L) \right)$ is always positive. The method `Approximation` of the modified `lucastree.py` module converts this idea into computer code. The entire module is contained in the next cell.

# In[1]:

get_ipython().run_cell_magic('writefile', './lucastree.py', r'''r"""
Filename: lucastree.py

Authors: Joao Brogueira and Fabian Schuetze

This file is a slight modification of the lucastree.py file
by Thomas Sargent, John Stachurski, Spencer Lyon under the
quant-econ project. We don't claim authorship of the entire file,
but full responsibility for it and any existing mistakes.

Solves for the price function of the Lucas tree in a continuous state
setting, using piecewise linear approximation for the sequence of
candidate price functions. The consumption endowment follows the log
linear AR(1) process

.. math::

    \log y' = \alpha \log y + \sigma \epsilon

where y' is next period's y and epsilon is an iid standard normal shock.
Hence

.. math::

    y' = y^{\alpha} \xi,

where

.. math::

    \xi = e^{\sigma \epsilon}

The distribution phi of xi is

.. math::

    \phi = LN(0, \sigma^2),

where LN means lognormal.

"""
import numpy as np
from scipy.stats import lognorm
from scipy.integrate import fixed_quad
from quantecon.compute_fp import compute_fixed_point


class LucasTree(object):

    """
    Class to solve for the price of a tree in the Lucas
    asset pricing model

    Parameters
    ----------
    gamma : scalar(float)
        The coefficient of risk aversion in the investor's CRRA utility
        function
    beta : scalar(float)
        The investor's discount factor
    alpha : scalar(float)
        The correlation coefficient in the shock process
    sigma : scalar(float)
        The volatility of the shock process
    grid : array_like(float), optional(default=None)
        The grid points on which to evaluate the asset prices. Grid
        points should be nonnegative. If None is passed, we will create
        a reasonable one for you

    Attributes
    ----------
    gamma, beta, alpha, sigma, grid : see Parameters
    grid_min, grid_max : scalar(float)
        Bounds of the grid upon which prices are evaluated
    grid_size : scalar(int)
        Number of grid points
    init_h : array_like(float)
        The function values h(y) with the grid points as arguments
    phi : scipy.stats.lognorm
        The distribution for the shock process

    Notes
    -----
    This file is a slight modification of the lucastree.py file
    by Thomas Sargent, John Stachurski, Spencer Lyon, [SSL]_ under the
    quant-econ project. We don't claim authorship of the entire file,
    but full responsibility for it and any existing mistakes.

    References
    ----------
    .. [SSL] Thomas Sargent, John Stachurski and Spencer Lyon, lucastree.py,
        GitHub repository,
        https://github.com/QuantEcon/QuantEcon.py/blob/master/quantecon/models/lucastree.py

    Examples
    --------
    >>> tree = LucasTree(gamma=2, beta=0.95, alpha=0.90, sigma=0.1)
    >>> grid, price_vals = tree.grid, tree.compute_lt_price()

    """

    def __init__(self, gamma, beta, alpha, sigma, grid=None):
        self.gamma = gamma
        self.beta = beta
        self.alpha = alpha
        self.sigma = sigma

        # == set up grid == #
        if grid is None:
            (self.grid, self.grid_min,
             self.grid_max, self.grid_size) = self._new_grid()
        else:
            self.grid = np.asarray(grid)
            self.grid_min = min(grid)
            self.grid_max = max(grid)
            self.grid_size = len(grid)

        # == set up distribution for shocks == #
        self.phi = lognorm(sigma)

        # == set up integration bounds: 5 standard deviations. Make them
        # private attributes b/c users don't need to see them, but we
        # only want to compute them once. == #
        self._int_min = np.exp(-5 * sigma)
        self._int_max = np.exp(5 * sigma)

        # == set up h for the Lucas operator == #
        self.init_h = self.h(self.grid)

    def h(self, x):
        r"""
        Compute the function values of h in the Lucas operator.

        Parameters
        ----------
        x : array_like(float)
            The arguments over which to compute the function values

        Returns
        -------
        h : array_like(float)
            The function values

        Notes
        -----
        Recall the functional form of h

        .. math::

            h(x) = \beta \int_Z G(x,z)^{1-\gamma} \phi(dz)
                 = \beta x^{(1-\gamma)\alpha} \exp((1-\gamma)^2 \sigma^2 / 2)

        """
        alpha, gamma, beta, sigma = self.alpha, self.gamma, self.beta, self.sigma
        h = beta * x**((1 - gamma) * alpha) * \
            np.exp((1 - gamma)**2 * sigma**2 / 2) * np.ones(x.size)

        return h

    def _new_grid(self):
        """
        Construct the default grid for the problem.

        This is defined to be np.linspace(0.1, 10, 50) when
        abs(alpha) >= 1, and 50 evenly spaced points covering 4
        standard deviations of the stationary distribution of the
        consumption endowment when abs(alpha) < 1.

        """
        grid_size = 50
        if abs(self.alpha) >= 1.0:
            grid_min, grid_max = 0.1, 10
        else:
            # == Set the grid interval to contain most of the mass of the
            # stationary distribution of the consumption endowment == #
            ssd = self.sigma / np.sqrt(1 - self.alpha**2)
            grid_min, grid_max = np.exp(-4 * ssd), np.exp(4 * ssd)

        grid = np.linspace(grid_min, grid_max, grid_size)

        return grid, grid_min, grid_max, grid_size

    def integrate(self, g, int_min=None, int_max=None):
        """
        Integrate the function g(z) * self.phi(z) from int_min to
        int_max.

        Parameters
        ----------
        g : function
            The function to integrate

        int_min, int_max : scalar(float), optional
            The bounds of integration. If either of these parameters is
            `None` (the default), it is set to 5 standard deviations
            above or below the mean.

        Returns
        -------
        result : scalar(float)
            The result of the integration

        """
        # == simplify notation == #
        phi = self.phi
        if int_min is None:
            int_min = self._int_min
        if int_max is None:
            int_max = self._int_max

        # == set up integrand and integrate == #
        integrand = lambda z: g(z) * phi.pdf(z)
        result, error = fixed_quad(integrand, int_min, int_max, n=20)
        return result, error

    def Approximation(self, x, grid, f):
        r"""
        Approximates the function f at given sample points x.

        Parameters
        ----------
        x : array_like(float)
            Sample points over which the function f is evaluated

        grid : array_like(float)
            The grid values representing the domain of f

        f : array_like(float)
            The function values of f over the grid

        Returns
        -------
        fApprox : array_like(float)
            The approximated function values at x

        Notes
        -----
        Interpolation is done by the following function:

        .. math::

            f(x) = f(y_L) + \dfrac{f(y_H) - f(y_L)}{h(y_H) - h(y_L)} (h(x) - h(y_L)).

        Extrapolation is done as follows:

        .. math::

            f(x) =
            \begin{cases}
            f(y_1) + \dfrac{f(y_1) - f(y_2)}{h(y_1) - h(y_2)} \left(h(x) - h(y_1) \right) & \text{if } x < y_1,\\
            f(y_N) + \dfrac{f(y_N) - f(y_{N-1})}{h(y_N) - h(y_{N-1})} \left( h(x) - h(y_N) \right) & \text{if } x > y_N.
            \end{cases}

        The approximation routine imposes the functional form of the
        function :math:`h` onto the function :math:`f`, as stated in
        chapter 9.2 (in particular theorems 9.6 and 9.7 and exercise 9.7)
        of the book by Stokey and Lucas with Prescott (1989).

        """
        # == Initialize and create empty arrays to be filled in below == #
        hX, hGrid = self.h(x), self.init_h
        fL, fH, fApprox = np.empty_like(x), np.empty_like(x), np.empty_like(x)
        hL, hH = np.empty_like(x), np.empty_like(x)
        idxL = np.empty_like(x, dtype=int)
        idxH = np.empty_like(x, dtype=int)

        # == Boolean arrays determining which sample points are used for
        # interpolation and which for extrapolation. The boundaries are
        # assigned so that every sample point falls into exactly one set == #
        lower = (x < grid[0])
        middle = (x >= grid[0]) & (x < grid[-1])
        upper = (x >= grid[-1])

        # == Calculate the indices idxL, idxH of the grid points y_L, y_H
        # that bracket a sample point, called value. In the notation of the
        # interpolation routine, these indices are used to pick the function
        # values f(y_L), f(y_H), h(y_L) and h(y_H) == #
        for index, value in enumerate(x):
            # Index of y_L, the largest grid point weakly below value
            idxL[index] = (np.append(grid[grid <= value], grid[0])).argmax()
            idxH[index] = min(idxL[index] + 1, len(grid) - 1)
            fL[index] = f[idxL[index]]
            fH[index] = f[idxH[index]]
            hL[index] = hGrid[idxL[index]]
            hH[index] = hGrid[idxH[index]]

        # == Interpolation == #
        if self.alpha != 0:
            ratio = (fH[middle] - fL[middle]) / (hH[middle] - hL[middle])
        else:
            # If self.alpha == 0, h is constant and hH == hL, so set the
            # ratio to zero to avoid division by zero
            ratio = np.zeros_like(hH[middle])
        fApprox[middle] = fL[middle] + ratio * (hX[middle] - hL[middle])

        # == Extrapolation == #
        if self.alpha != 0:
            fApprox[lower] = f[0] + (f[0] - f[1]) / (hGrid[0] - hGrid[1]) * \
                (hX[lower] - hGrid[0])
            fApprox[upper] = f[-1] + (f[-1] - f[-2]) / (hGrid[-1] - hGrid[-2]) * \
                (hX[upper] - hGrid[-1])
        else:
            fApprox[lower] = f[0]
            fApprox[upper] = f[-1]

        return fApprox

    def lucas_operator(self, f, Tf=None):
        """
        The approximate Lucas operator, which computes and returns the
        updated function Tf on the grid points.

        Parameters
        ----------
        f : array_like(float)
            A candidate function on R_+ represented as points on a grid;
            should be a flat NumPy array with len(f) = len(grid)

        Tf : array_like(float)
            Optional storage array for Tf

        Returns
        -------
        Tf : array_like(float)
            The updated function Tf

        Notes
        -----
        The argument `Tf` is optional, but recommended. If it is passed
        into this function, then we do not have to allocate any memory
        for the array here. As this function is often called many times
        in an iterative algorithm, this can save significant computation
        time.

        """
        grid, h = self.grid, self.init_h
        alpha, beta = self.alpha, self.beta

        # == set up storage if needed == #
        if Tf is None:
            Tf = np.empty_like(f)

        # == Apply the T operator to f == #
        Af = lambda x: self.Approximation(x, grid, f)

        for i, y in enumerate(grid):
            Tf[i] = h[i] + beta * self.integrate(lambda z: Af(y**alpha * z))[0]

        return Tf

    def compute_lt_price(self, error_tol=1e-7, max_iter=600, verbose=0):
        """
        Compute the equilibrium price function associated with the
        Lucas tree

        Parameters
        ----------
        error_tol, max_iter, verbose
            Arguments to be passed directly to
            `quantecon.compute_fixed_point`. See that docstring for more
            information

        Returns
        -------
        price : array_like(float)
            The prices at the grid points in the attribute `grid` of the
            object

        """
        # == simplify notation == #
        grid, grid_size = self.grid, self.grid_size
        lucas_operator, gamma = self.lucas_operator, self.gamma

        # == Create storage array for compute_fixed_point. Reduces memory
        # allocation and speeds code up == #
        Tf = np.empty(grid_size)

        # == Initial guess, just a vector of ones == #
        f_init = np.ones(grid_size)
        f = compute_fixed_point(lucas_operator, f_init, error_tol,
                                max_iter, verbose, Tf=Tf)

        price = f * grid**gamma

        return price
''')

# The following two figures plot the functions $h,f$ and their first and second differences for parameters $(\gamma,\alpha) \in \left\lbrace (2,0.75),(0.5,0.75),(0.5,-0.75) \right\rbrace$. Note that the x-axis shows indices instead of grid values, because the grid values change with different parameters. The graphs illustrate that the sign of the slope and shape of $h$ is transmitted to $f$. We used $|\alpha| = 0.75$ because it generates a relatively strong visual slope of $h$. Our unit tests also consider autoregressive parameters $|\alpha| \in \left\lbrace 0.25, 0.5 \right\rbrace$.
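# Before turning to the figures, the next cell runs a quick standalone sanity check of the `Approximation` method (a sketch; the choice of sample points is an illustrative assumption). Because the interpolation rule is linear in $h$, it reproduces any function of the form $a + b\,h$ exactly; feeding in $f = h$ itself, the approximated values at off-grid points should therefore equal $h(x)$ up to floating point error.

# In[ ]:

from lucastree import LucasTree
import numpy as np

tree = LucasTree(gamma=2, beta=0.95, alpha=0.75, sigma=0.1)
grid = tree.grid
f_on_grid = tree.h(grid)  # candidate f equal to h on the grid

# off-grid sample points: interior midpoints plus one point beyond each end
x = np.concatenate(([grid[0] * 0.9],
                    (grid[:-1] + grid[1:]) / 2,
                    [grid[-1] * 1.1]))
fApprox = tree.Approximation(x, grid, f_on_grid)

# With f = h, the ratio (f(y_H)-f(y_L))/(h(y_H)-h(y_L)) equals one, so the
# routine should reproduce h exactly; the printed error is of the order of
# machine precision
print(np.max(np.abs(fApprox - tree.h(x))))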
# In[2]:

get_ipython().run_line_magic('matplotlib', 'inline')
from lucastree import LucasTree
import numpy as np
import matplotlib.pyplot as plt

# first element gamma, second element alpha
vector = np.array([[2, 0.75], [0.5, 0.75], [0.5, -0.75]])
tree = LucasTree(gamma=2, beta=0.95, alpha=0.5, sigma=0.1)
h, hdiff, hdiff2 = np.empty((len(tree.grid), vector.shape[0])), np.empty(
    (len(tree.grid) - 1, vector.shape[0])), np.empty((len(tree.grid) - 2, vector.shape[0]))
for idx, element in enumerate(vector):
    tree = LucasTree(gamma=element[0], beta=0.95, alpha=element[1], sigma=0.1)
    h[:, idx] = tree.h(tree.grid)
    hdiff[:, idx] = np.ediff1d(h[:, idx])
    hdiff2[:, idx] = np.ediff1d(hdiff[:, idx])
fig1, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex='col')
annotation = ['(gamma,alpha): ' + str(i) for i in vector]
ax1.plot(h)
ax2.plot(hdiff)
ax3.plot(hdiff2)
ax1.set_title('Plot of h')
ax2.set_title('Plot of the first difference of h')
ax3.set_title('Plot of the second difference of h')
ax3.legend(annotation, loc='upper center', bbox_to_anchor=(0.5, -0.07),
           ncol=2, fancybox=True, shadow=True, fontsize=10)
fig1.suptitle(
    'Plot of the function h and its first and second difference', fontsize=15)
fig1.set_size_inches(15.5, 10.5)
fig1.show()

# first element gamma, second element alpha
vector = np.array([[2, 0.75], [0.5, 0.75], [0.5, -0.75]])
tree = LucasTree(gamma=2, beta=0.95, alpha=0.5, sigma=0.1)
f, fdiff, fdiff2 = np.empty((len(tree.grid), vector.shape[0])), np.empty(
    (len(tree.grid) - 1, vector.shape[0])), np.empty((len(tree.grid) - 2, vector.shape[0]))
price = np.empty_like(f)
for idx, element in enumerate(vector):
    tree = LucasTree(gamma=element[0], beta=0.95, alpha=element[1], sigma=0.1)
    price[:, idx], grid = tree.compute_lt_price(), tree.grid
    f[:, idx] = price[:, idx] * grid**(-element[0])
    fdiff[:, idx] = np.ediff1d(f[:, idx])
    fdiff2[:, idx] = np.ediff1d(fdiff[:, idx])
fig2, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex='col')
annotation = ['(gamma,alpha): ' + str(i) for i in vector]
ax1.plot(f)
ax2.plot(fdiff)
ax3.plot(fdiff2)
ax1.set_title('Plot of f')
ax2.set_title('Plot of the first difference of f')
ax3.set_title('Plot of the second difference of f')
ax3.legend(annotation, loc='upper center', bbox_to_anchor=(0.5, -0.05),
           ncol=2, fancybox=True, shadow=True, fontsize=11)
fig2.suptitle(
    'Plot of the function f and its first and second difference', fontsize=15)
fig2.set_size_inches(15.5, 10.5)
fig2.show()

# The following figure graphs the solution for $\alpha \in \left\lbrace 0,1 \right\rbrace$. The top panel shows the price dividend ratio when dividend growth follows an i.i.d. process ($\alpha = 1$): as predicted by theory, the price dividend ratio is constant. The lower panel graphs the function $f$ when dividends themselves follow an i.i.d. process ($\alpha = 0$): the function $f$ is constant, again matching the analytical solution.
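# The analytical benchmarks plotted as triangles in the next cell follow from the functional equation. Let $k = \exp\left((1-\gamma)^2\sigma^2/2\right)$, so that $h(y) = \beta k\, y^{(1-\gamma)\alpha}$ and $\int_Z (z')^{1-\gamma} Q(dz') = k$:
#
# \begin{align*}
# \alpha = 1: \quad & \text{guessing } f(y) = c\, y^{1-\gamma} \text{ gives } c\, y^{1-\gamma} = \beta k\, y^{1-\gamma} + \beta c k\, y^{1-\gamma}, \text{ so } c = \dfrac{\beta k}{1 - \beta k}, \\
# & \text{and the price dividend ratio } p(y)/y = y^{\gamma} f(y)/y = c \text{ is constant.} \\
# \alpha = 0: \quad & h \text{ is the constant } \beta k, \text{ and } f = \beta k + \beta f \text{ gives the constant } f = \dfrac{\beta k}{1-\beta}.
# \end{align*}
#
# These are exactly the expressions `theoreticalPDRatio` and `theoreticalF` computed below.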
# In[3]:

get_ipython().run_line_magic('matplotlib', 'inline')
from lucastree import LucasTree
import numpy as np
import matplotlib.pyplot as plt

beta, gamma, sigma = 0.95, 2, 0.1
tree = LucasTree(gamma=gamma, beta=beta, alpha=1, sigma=sigma)
priceLinear, grid = tree.compute_lt_price(), tree.grid
fig1, (ax1, ax2) = plt.subplots(2, 1)
theoreticalPDRatio = np.ones(len(grid)) * beta * \
    np.exp((1 - gamma)**2 * sigma**2 / 2) / \
    (1 - beta * np.exp((1 - gamma)**2 * sigma**2 / 2))
ax1.plot(grid, priceLinear / grid, grid, theoreticalPDRatio, 'g^')
annotation = ['Numerical Solution', 'Analytical Solution']
ax1.legend(annotation)
ax1.set_title('price dividend ratio for alpha = 1')
ax1.set_ylim([min(priceLinear / grid) - 1, max(priceLinear / grid) + 1])
tree = LucasTree(gamma=gamma, beta=beta, alpha=0, sigma=sigma)
priceFalling, grid = tree.compute_lt_price(), tree.grid
theoreticalF = np.ones(len(grid)) * beta * \
    np.exp((1 - gamma)**2 * sigma**2 / 2) / (1 - beta)
f = priceFalling * grid**(-2)
ax2.plot(grid, f, grid, theoreticalF, 'g^')
ax2.set_ylim([min(f) - 1, max(f) + 1])
annotation = ['Numerical Solution', 'Analytical Solution']
ax2.legend(annotation)
ax2.set_title('function f for alpha = 0')
fig1.set_size_inches(15.5, 10.5)
fig1.suptitle(
    'Plot of the price dividend ratio and the function f for alpha=1 and alpha=0 respectively',
    fontsize=15)
fig1.show()

# Finally, we report the unit testing file accompanying our lucastree.py module. It tests whether the functional form of $f$ adheres to the theoretical predictions outlined in the Proposition above. The file can be run from the shell, as shown after the cell.

# In[4]:

get_ipython().run_cell_magic('writefile', './test_lucastree.py', r'''"""
filename: test_lucastree.py

Authors: Joao Brogueira and Fabian Schuetze

This file contains four different tests for the
lucastree.py file.

Functions
---------
    compute_lt_price() [Status: Tested in test_ConstantPDRatio, test_ConstantF,
        test_slope_f, test_shape_f]

"""

import unittest
from lucastree import LucasTree  # absolute import; relative importing doesn't work here
import numpy as np


class Testlucastree(unittest.TestCase):

    """
    Test suite for lucastree.py based on the output of the
    LucasTree.compute_lt_price() function.

    """
    # == Parameter values applicable to all test cases == #
    beta = 0.95
    sigma = 0.1

    # == Parameter values for different tests == #
    ConstantPD = np.array([2, 1])
    ConstantF = np.array([2, 0])
    FunctionalForm = np.array([[2, 0.75], [2, 0.5], [2, 0.25], [0.5, 0.75], [
        0.5, 0.5], [0.5, 0.25], [0.5, -0.75], [0.5, -0.5], [0.5, -0.25]])

    # == Tolerance criterion == #
    Tol = 1e-2

    def setUp(self):
        self.storage = lambda parameter0, parameter1: LucasTree(
            gamma=parameter0, beta=self.beta, alpha=parameter1,
            sigma=self.sigma)

    def test_ConstantPDRatio(self):
        """
        Test whether the numerically computed price dividend ratio is
        identical to its theoretical counterpart when dividend
        growth follows an iid process.

        """
        gamma, alpha = self.ConstantPD
        tree = self.storage(gamma, alpha)
        grid = tree.grid
        theoreticalPDRatio = np.ones(len(grid)) * self.beta * np.exp(
            (1 - gamma)**2 * self.sigma**2 / 2) / (1 - self.beta * np.exp((1 - gamma)**2 * self.sigma**2 / 2))
        self.assertTrue(
            np.allclose(theoreticalPDRatio, tree.compute_lt_price() / grid, atol=self.Tol))

    def test_ConstantF(self):
        r"""
        Tests whether the numerically obtained solution :math:`f`
        to the functional equation
        :math:`f(y) = h(y) + \beta \int_Z f(G(y,z')) Q(dz')`
        is identical to its theoretical counterpart, when dividends
        follow an iid process.

        """
        gamma, alpha = self.ConstantF
        tree = self.storage(gamma, alpha)
        grid = tree.grid
        theoreticalF = np.ones(len(
            grid)) * self.beta * np.exp((1 - gamma)**2 * self.sigma**2 / 2) / (1 - self.beta)
        self.assertTrue(np.allclose(
            theoreticalF, tree.compute_lt_price() * grid**(-gamma), atol=self.Tol))

    def test_slope_f(self):
        """
        Tests whether the first difference of the numerically obtained
        function :math:`f` has the same sign as the first difference of
        the function :math:`h`.

        Notes
        -----
        This test is motivated by Theorem 9.7 and exercise 9.7c) of the
        book by Stokey and Lucas with Prescott (1989).

        """
        for parameters in self.FunctionalForm:
            gamma, alpha = parameters
            tree = self.storage(gamma, alpha)
            f = tree.compute_lt_price() * tree.grid ** (-gamma)
            h = tree.h(tree.grid)
            fdiff, hdiff = np.ediff1d(f), np.ediff1d(h)
            if all(hdiff > 0):
                self.assertTrue(all(fdiff > 0))
            elif all(hdiff < 0):
                self.assertTrue(all(fdiff < 0))

    def test_shape_f(self):
        """
        Tests whether the second difference of the numerically obtained
        function :math:`f` has the same sign as the second difference of
        the function :math:`h`.

        Notes
        -----
        This test is motivated by Theorem 9.8 and exercise 9.7d) of the
        book by Stokey and Lucas with Prescott (1989).

        """
        for parameters in self.FunctionalForm:
            gamma, alpha = parameters
            tree = self.storage(gamma, alpha)
            f = tree.compute_lt_price() * tree.grid ** (-gamma)
            h = tree.h(tree.grid)
            fdiff, hdiff = np.ediff1d(f), np.ediff1d(h)
            fdiff2, hdiff2 = np.ediff1d(fdiff), np.ediff1d(hdiff)
            if all(hdiff2 > 0):
                self.assertTrue(all(fdiff2 > 0))
            elif all(hdiff2 < 0):
                self.assertTrue(all(fdiff2 < 0))

    def tearDown(self):
        pass
''')
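# The test suite runs with Python's built-in test runner. A minimal invocation (assuming `lucastree.py` and `test_lucastree.py` sit in the current working directory) is `python -m unittest -v test_lucastree` from the shell, or equivalently, via the system escape from a notebook cell:

# In[ ]:

get_ipython().system('python -m unittest -v test_lucastree')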