# Livestock Feeding¶

Akira Matsushita

Department of Economics, University of Tokyo

From Miranda and Fackler, Applied Computational Economics and Finance, 2002, Sections 8.4.8 and 9.7.8

## 8.4.8 Model Formulation and Solution by hand¶

• A livestock producer feeds their stock for $T$ periods and sells it at the beginning of period $T+1$
• In each period $t$, the producer determines the amount of feed $x_t \geq 0$. The unit cost of grain is fixed at $\kappa \geq 0$
• $s_t$, the weight of the livestock at time $t$, follows the deterministic process $s_{t+1} = g(s_t, x_t)$ with $s_1 = \bar{s}$. Assume $s_t \geq 0$ for all $t$
• The revenue from selling the livestock at the beginning of period $T+1$ is $p \, s_{T+1}$, where $p$ is the price per unit of weight

The producer's profit maximization problem is then formulated as:

\begin{align*} \underset{ \{x_t \geq 0 \}_{t=1}^T}{\max} \delta^T p s_{T+1} - \sum_{t=1}^{T} \delta^{t-1} \kappa x_t \hspace{1em} \text{s.t.} \hspace{1em} \begin{cases} s_{t+1} = g(s_t, x_t) \\ s_1 = \bar{s} \end{cases} \end{align*}

where $\delta \in (0, 1]$ is the discount factor

The corresponding Bellman equation is:

\begin{align*} \begin{cases} V_t(s_t) = \underset{x_t \geq 0}{\max} \{ -\kappa x_t + \delta V_{t+1}(g(s_t, x_t)) \} \hspace{1em} t=1, \ldots, T \\ V_{T+1}(s_{T+1}) = ps_{T+1} \end{cases} \end{align*}

### Euler Conditions¶

Let's derive the Euler Equilibrium Conditions from the Bellman equation

Assume $g: \mathbb{R}_+ \times \mathbb{R}_+ \rightarrow \mathbb{R}_+$ is differentiable on its entire domain (in particular, $g$ is partially differentiable with respect to both $s$ and $x$)

Define $g_x = \frac{\partial g}{\partial x}$ and $g_s = \frac{\partial g}{\partial s}$

Assume $g_x(s, 0)$ is large enough that the nonnegativity constraint on $x$ is not binding at an optimum (this ensures an interior solution)

Let $X_t^*$ be the optimal solution correspondence, i.e.,

\begin{align*} X_t^*(s) = \left\{ x^* \in [0, \infty) \mid V_t(s) = -\kappa x^* + \delta V_{t+1}(g(s, x^*)) \right\} \end{align*}

and let $x_t^*(s) \in X_t^*(s)$ be a selection from it. Assume $x_t^*(s)$ is differentiable in $s$ for all $t$

Note that, using $x_t^*(s)$, the optimal value function at time 1 can be expanded as

\begin{align*} V_1(\bar{s}) &= -\kappa x_1^*(\bar{s}) + \delta V_2(g(\bar{s}, x_1^*(\bar{s}))) \\ &= -\kappa \left[ x_1^*(\bar{s}) + \delta x_2^*(g(\bar{s}, x_1^*(\bar{s}))) \right] + \delta^2 V_3(g(g(\bar{s}, x_1^*(\bar{s})), x_2^*(g(\bar{s}, x_1^*(\bar{s}))))) \\ &= \ldots \end{align*}

Here we show that $V_t$ is differentiable with respect to $s_t$ for all $t=1, 2, \ldots, T+1$

• At $t = T+1$, $V_{T+1}(s_{T+1}) = ps_{T+1}$, which is differentiable with respect to $s_{T+1}$
• As an induction hypothesis, assume $V_{t+1}$ is differentiable with respect to its argument $s_{t+1}$ ($1 \leq t \leq T$). Since

\begin{align*} V_t(s_t) = -\kappa x_t^*(s_t) + \delta V_{t+1}(g(s_t, x_t^*(s_t))) \end{align*}

and $x_t^*(s_t)$ is differentiable with respect to $s_t$ (by assumption), $V_{t+1}$ is differentiable (by the induction hypothesis), and $g(s_t, x)$ is differentiable with respect to both $s_t$ and $x$, $V_t(s_t)$ is also differentiable. Its derivative is

\begin{align*} \frac{\partial}{\partial s_t} V_t(s_t) = -\kappa \frac{\partial}{\partial s_t} x_t^*(s_t) + \delta V_{t+1}'(g(s_t, x_t^*(s_t))) \left\{ g_s(s_t, x_t^*(s_t)) + g_x(s_t, x_t^*(s_t)) \frac{\partial}{\partial s_t} x_t^*(s_t) \right\} \end{align*}

where $V_{t+1}'$ denotes the derivative of $V_{t+1}$ with respect to its argument

Hence, by induction, $V_t$ is differentiable with respect to $s_t$ for all $t$

Next consider the optimality condition of the maximization problem in the Bellman equation: $\underset{x_t \geq 0}{\max} \{ -\kappa x_t + \delta V_{t+1}(g(s_t, x_t)) \}$

Define $\lambda_t(s_t) \equiv V_{t}'(s_t) = \frac{\partial V_t}{\partial s_t}$

Using the chain rule, the first-order conditions (FOCs) are

\begin{align*} \delta \lambda_{t+1}(g(s_t, x_t^*))g_x(s_t, x_t^*) = \kappa \hspace{1em} t=1, \ldots, T \\ \end{align*}

Then the derivatives of the value functions with respect to $s_t$, $\lambda_t(s_t) = \frac{\partial}{\partial s_t} V_t(s_t)$, become

\begin{align*} \lambda_t(s_t) &= \underbrace{ \bigl\{ -\kappa + \delta \lambda_{t+1}(g(s_t, x_t^*(s_t))) g_x(s_t, x_t^*(s_t)) \bigr\}}_{=0 \text{ by FOC}} \frac{\partial}{\partial s_t} x_t^*(s_t) + \delta \lambda_{t+1}(g(s_t, x_t^*(s_t))) g_s(s_t, x_t^*(s_t)) \\ &= \delta \lambda_{t+1}(g(s_t, x_t^*(s_t))) g_s(s_t, x_t^*(s_t)) \end{align*}

for $t=1, \ldots, T$ and

\begin{align*} \lambda_{T+1}(s_{T+1}) = p \end{align*}

In summary, the optimal path satisfies the following system of equations:

\begin{align*} \begin{cases} \delta \lambda_{t+1}(g(s_t, x_t))g_x(s_t, x_t) = \kappa \hspace{1em} t=1, \ldots, T \\ \lambda_t(s_t) = \delta \lambda_{t+1}(g(s_t, x_t)) g_s(s_t, x_t) \hspace{1em} t=1, \ldots, T \\ \lambda_{T+1}(s_{T+1}) = p \end{cases} \end{align*}

### Solve the equations by hand¶

How do we solve the above equations to obtain the optimal policy $\{x_t\}_{t=1}^{T}$?

• At $t=T$, since $\lambda_{T+1}(s) = p$ regardless of the value of $s$, the 1st equation becomes $$g_x(s_T, x_T) = \frac{\kappa}{\delta p}$$

So we can get the optimal policy at $t=T$ by solving this equation for $x_T$. Then by the 2nd equation, we get $\lambda_{T}(s_T)$

• At $1 \leq t < T$, we obtain $x_t$ and $\lambda_t(s_t)$ in a similar way, working backward from $t = T-1$ to $t = 1$

• Finally we set $s_1 = \bar{s}$ and then we get the concrete values of $\lambda_1(s_1^*), x_1^*, \lambda_2(s_2^*), x_2^*, \ldots$
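For concreteness, the first two steps of this backward recursion, written out from the system above for a general $g$, are

\begin{align*} t = T&: \quad g_x(s_T, x_T) = \frac{\kappa}{\delta p}, \qquad \lambda_T(s_T) = \delta p \, g_s(s_T, x_T^*(s_T)) \\ t = T-1&: \quad \delta \lambda_T(g(s_{T-1}, x_{T-1})) \, g_x(s_{T-1}, x_{T-1}) = \kappa, \qquad \lambda_{T-1}(s_{T-1}) = \delta \lambda_T(g(s_{T-1}, x_{T-1})) \, g_s(s_{T-1}, x_{T-1}) \end{align*}

Each step yields the policy $x_t^*(\cdot)$ and the derivative $\lambda_t(\cdot)$ as functions of the state, which are then used in the preceding step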

### Concrete Example¶

Let

• $T = 6$
• $g(s, x) = \alpha s + \sqrt{x}$
• $p = 1$

Then

• $g_x(s, x) = \cfrac{0.5}{\sqrt{x}}$
• $g_s(s, x) = \alpha$

At $t=6$,

• $g_x(s_6, x_6) = \cfrac{0.5}{\sqrt{x_6}} = \cfrac{\kappa}{\delta p} = \cfrac{\kappa}{\delta} \hspace{2em} \therefore x_6(s_6) = \cfrac{\delta^2}{4 \kappa^2}$
• $\lambda_6(s_6) = \delta \alpha$

At $1 \leq t \leq 5$,

• $g_x(s_t, x_t) = \cfrac{0.5}{\sqrt{x_t}} = \cfrac{\kappa}{\delta^{7-t} \alpha^{6-t}} \hspace{2em} \therefore x_t(s_t) = \cfrac{(\delta^{7-t} \alpha^{6-t})^2}{4 \kappa^2}$
• $\lambda_t(s_t) = \delta^{7-t} \alpha^{7-t}$

So the optimal feeding policy is

\begin{align*} x_t^*(s_t) = \cfrac{(\delta^{7-t} \alpha^{6-t})^2}{4 \kappa^2} \hspace{1em} t=1, \ldots, 6 \end{align*}

Note that in this special case the optimal policy $x_1, \ldots, x_6$ does not depend on the initial weight $\bar{s}$
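As a quick check, the closed-form policy can be evaluated numerically. Below is a minimal sketch in Julia; the function name `closed_form_policy` is just an illustration, and the formula assumes $p = 1$ as in this example:

```julia
# Closed-form optimal feeding policy derived above (with p = 1):
#   x_t = (δ^(7-t) * α^(6-t))^2 / (4 κ^2),  t = 1, ..., 6
# It does not depend on the livestock weight s_t.
closed_form_policy(α, δ, κ; T = 6) = [(δ^(T + 1 - t) * α^(T - t))^2 / (4 * κ^2) for t in 1:T]

closed_form_policy(0.9, 0.9, 0.4)  # parameter values used in Section 9.7.8 below
```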

## 9.7.8 Solution by computation¶

In addition to the settings of the example above, assume

• $\alpha = 0.9$
• $\kappa = 0.4$
• $\delta = 0.9$

In order to compute the value function $V_t(s_t) = \underset{x_t \geq 0}{\max} \{ -\kappa x_t + \delta V_{t+1}(g(s_t, x_t)) \}$ numerically, we approximate $V_t(s_t)$ as

\begin{align*} V_t(s_t) \simeq \tilde{V_t}(s_t) = \sum_{i=1}^n c_{it} \phi_i(s_t) \end{align*}

where $\{ \phi_i \}_{i=1}^n$ are predetermined basis functions and $\{ c_{it} \}_{i=1}^n$ are their coefficients

We also choose $n$ sample points $\xi_1 < \xi_2 < \ldots < \xi_n$ in the state space
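Requiring the approximation to be exact at the sample points pins down the coefficients of each period through one $n \times n$ linear system:

\begin{align*} \Phi c_t = v_t, \qquad \Phi = \begin{pmatrix} \phi_1(\xi_1) & \cdots & \phi_n(\xi_1) \\ \vdots & & \vdots \\ \phi_1(\xi_n) & \cdots & \phi_n(\xi_n) \end{pmatrix}, \quad c_t = \begin{pmatrix} c_{1t} \\ \vdots \\ c_{nt} \end{pmatrix}, \quad v_t = \begin{pmatrix} V_t(\xi_1) \\ \vdots \\ V_t(\xi_n) \end{pmatrix} \end{align*}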

The procedure for calculating the optimal policy is as follows (a Julia sketch of the whole recursion appears after the list):

• At $t=T$

• Using an optimizer, solve $V_T(\xi_j) = \underset{x_T \geq 0}{\max} \{ -\kappa x_T + \delta p g(\xi_j, x_T) \}$ for each $\xi_1, \ldots, \xi_n$ to obtain the optimal $V_T(\xi_j)$ and $x_T(\xi_j)$

• Solve $n$ simultaneous linear equations

\begin{align*} \sum_{i=1}^n c_{iT} \phi_i(\xi_j) = V_T(\xi_j) \hspace{1em} j=1, \ldots, n \end{align*}

to determine the $n$ coefficients $\{ c_{iT} \}_{i=1}^n$ and get $\tilde{V_T}(s_T)$

• At $1 \leq t < T$, given the fitted $\tilde{V_{t+1}}$,

• Using an optimizer, solve $V_t(\xi_j) = \underset{x_t \geq 0}{\max} \{ -\kappa x_t + \delta \tilde{V_{t+1}}(g(\xi_j, x_t)) \}$ for each $\xi_1, \ldots, \xi_n$ to obtain the optimal $V_t(\xi_j)$ and $x_t(\xi_j)$

• Solve $n$ simultaneous linear equations

\begin{align*} \sum_{i=1}^n c_{it} \phi_i(\xi_j) = V_t(\xi_j) \hspace{1em} j=1, \ldots, n \end{align*}

to determine the $n$ coefficients $\{ c_{it} \}_{i=1}^n$ and get $\tilde{V_t}(s_t)$

• Finally, set $s_1 = \bar{s}$ and simulate forward using $s_{t+1} = g(s_t, x_t(s_t))$ to obtain all the optimal policy values
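Below is a compact Julia sketch of this backward recursion. It is only an illustration under assumptions made here: a hand-rolled Chebyshev basis stands in for what BasisMatrices provides, the inner maximization uses Optim's bounded univariate (Brent) method, and the state interval, number of nodes, feed upper bound `xmax`, and names such as `solve_feeding` and `cheb_basis` are all choices for this example.

```julia
using Optim

# Parameters from Sections 8.4.8 and 9.7.8 (p = 1)
const α, κ, δ, p, T = 0.9, 0.4, 0.9, 1.0, 6
g(s, x) = α * s + sqrt(x)

# Chebyshev basis functions φ_1, ..., φ_n on [a, b], via the three-term recurrence
function cheb_basis(s, n, a, b)
    z = 2 * (s - a) / (b - a) - 1                # map [a, b] to [-1, 1]
    φ = ones(typeof(z), n)
    n > 1 && (φ[2] = z)
    for i in 3:n
        φ[i] = 2 * z * φ[i-1] - φ[i-2]
    end
    return φ
end

# Chebyshev nodes on [a, b], used as the sample points ξ_1 < ... < ξ_n
cheb_nodes(n, a, b) = [(a + b) / 2 + (b - a) / 2 * cos((2j - 1) * π / (2n)) for j in n:-1:1]

# Backward recursion: maximize the Bellman objective at each node, then fit Ṽ_t
function solve_feeding(n, a, b; xmax = 2.0)
    ξ    = cheb_nodes(n, a, b)
    Φ    = vcat([cheb_basis(s, n, a, b)' for s in ξ]...)   # Φ[j, i] = φ_i(ξ_j)
    c    = zeros(n, T + 1)                                  # coefficients of each Ṽ_t
    xpol = zeros(n, T)                                      # optimal feed at the sample points
    c[:, T + 1] = Φ \ (p .* ξ)                              # fit V_{T+1}(s) = p s
    for t in T:-1:1
        Vnext = s -> sum(c[:, t + 1] .* cheb_basis(s, n, a, b))   # Ṽ_{t+1}(s)
        v = zeros(n)
        for (j, s) in enumerate(ξ)
            obj = x -> κ * x - δ * Vnext(g(s, x))           # negative of the Bellman objective
            res = optimize(obj, 0.0, xmax)                  # bounded univariate minimization
            xpol[j, t] = Optim.minimizer(res)
            v[j] = -Optim.minimum(res)
        end
        c[:, t] = Φ \ v                                     # fit Ṽ_t at the sample points
    end
    return ξ, c, xpol
end

# Example usage: 10 nodes on an assumed state interval [0.5, 10.0]
ξ, c, xpol = solve_feeding(10, 0.5, 10.0)
xpol[:, 1]   # optimal feed at t = 1, evaluated at each sample point
```

Since $g_s$ is constant in this example, the computed policy `xpol[:, t]` should be roughly the same at every sample point and close to the closed-form $x_t^*$ derived in Section 8.4.8.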

In [1]:

```julia
# Packages for the computation that follows: Optim for the inner maximization,
# BasisMatrices for the basis functions of the value function approximation,
# Plots for figures, and QuantEcon for general numerical utilities
using QuantEcon
using Optim
using BasisMatrices
using Plots
plotlyjs()  # select the PlotlyJS plotting backend
```