“We may regard the present state of the universe as the effect of its past and the cause of its future” – Marquis de Laplace
This lecture introduces the linear state space dynamic system.
This model is a workhorse that carries a powerful theory of prediction.
Its many applications include:

- representing dynamics of higher-order linear systems
- predicting the position of a system $ j $ steps into the future
- predicting geometric sums of future values of a variable
- serving as a key ingredient of useful models
using InstantiateFromURL
github_project("QuantEcon/quantecon-notebooks-julia", version = "0.6.0")
# uncomment to force package installation and precompilation
# github_project("QuantEcon/quantecon-notebooks-julia", version="0.6.0", instantiate=true, precompile = true)
using LinearAlgebra, Statistics
Here is the linear state-space system
$$ \begin{aligned} x_{t+1} & = A x_t + C w_{t+1} \\ y_t & = G x_t \nonumber \\ x_0 & \sim N(\mu_0, \Sigma_0) \nonumber \end{aligned} \tag{1} $$
The primitives of the model are

1. the matrices $ A, C, G $
2. the shock distribution, which we have assumed to be iid with $ w_t \sim N(0, I) $
3. the distribution of the initial condition $ x_0 $, namely $ N(\mu_0, \Sigma_0) $
Given $ A, C, G $ and draws of $ x_0 $ and $ w_1, w_2, \ldots $, the model (1) pins down the values of the sequences $ \{x_t\} $ and $ \{y_t\} $.
Even without these draws, the primitives 1–3 pin down the probability distributions of $ \{x_t\} $ and $ \{y_t\} $.
Later we’ll see how to compute these distributions and their moments.
We’ve made the common assumption that the shocks are independent standardized normal vectors.
But some of what we say will be valid under the assumption that $ \{w_{t+1}\} $ is a martingale difference sequence.
A martingale difference sequence is a sequence that is zero mean when conditioned on past information.
In the present case, since $ \{x_t\} $ is our state sequence, this means that it satisfies
$$ \mathbb{E} [w_{t+1} | x_t, x_{t-1}, \ldots ] = 0 $$

This is a weaker condition than that $ \{w_t\} $ is iid with $ w_{t+1} \sim N(0,I) $.
By appropriate choice of the primitives, a variety of dynamics can be represented in terms of the linear state space model.
The following examples help to highlight this point.
They also illustrate the wise dictum that “finding the state is an art”.
Let $ \{y_t\} $ be a deterministic sequence that satisfies
$$ y_{t+1} = \phi_0 + \phi_1 y_t + \phi_2 y_{t-1} \quad \text{s.t.} \quad y_0, y_{-1} \text{ given} \tag{2} $$
To map (2) into our state space system (1), we set
$$ x_t= \begin{bmatrix} 1 \\ y_t \\ y_{t-1} \end{bmatrix} \qquad A = \begin{bmatrix} 1 & 0 & 0 \\ \phi_0 & \phi_1 & \phi_2 \\ 0 & 1 & 0 \end{bmatrix} \qquad C= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} $$

You can confirm that under these definitions, (1) and (2) agree.
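As a quick numerical check, here is a minimal sketch that iterates both representations side by side, using the parameter values quoted just below, and confirms that they produce the same $ y_t $:

ϕ0, ϕ1, ϕ2 = 1.1, 0.8, -0.8
A = [1.0 0.0 0.0
     ϕ0  ϕ1  ϕ2
     0.0 1.0 0.0]
G = [0.0 1.0 0.0]
let x = [1.0, 1.0, 1.0], y = 1.0, y_lag = 1.0     # y_0 = y_{-1} = 1
    for t in 1:10
        x = A * x                                 # state update from (1), with C = 0
        y, y_lag = ϕ0 + ϕ1 * y + ϕ2 * y_lag, y    # direct recursion (2)
        @assert (G * x)[1] ≈ y                    # the two representations agree
    end
end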
The next figure shows dynamics of this process when $ \phi_0 = 1.1, \phi_1=0.8, \phi_2 = -0.8, y_0 = y_{-1} = 1 $
Later you’ll be asked to recreate this figure.
We can use (1) to represent the model
$$ y_{t+1} = \phi_1 y_{t} + \phi_2 y_{t-1} + \phi_3 y_{t-2} + \phi_4 y_{t-3} + \sigma w_{t+1} \tag{3} $$
where $ \{w_t\} $ is iid and standard normal.
To put this in the linear state space format we take $ x_t = \begin{bmatrix} y_t & y_{t-1} & y_{t-2} & y_{t-3} \end{bmatrix}' $ and
$$ A = \begin{bmatrix} \phi_1 & \phi_2 & \phi_3 & \phi_4 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \qquad C = \begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix} $$

The matrix $ A $ has the form of the companion matrix to the vector $ \begin{bmatrix}\phi_1 & \phi_2 & \phi_3 & \phi_4 \end{bmatrix} $.
The next figure shows dynamics of this process when
$$ \phi_1 = 0.5, \phi_2 = -0.2, \phi_3 = 0, \phi_4 = 0.5, \sigma = 0.2, y_0 = y_{-1} = y_{-2} = y_{-3} = 1 $$

Now suppose that $ y_t $ is a $ k \times 1 $ vector, that the coefficients $ \phi_j $ are $ k \times k $ matrices, and that $ w_t $ is $ k \times 1 $. Then (3) is termed a vector autoregression.

To map this into (1), we set
$$ x_t = \begin{bmatrix} y_t \\ y_{t-1} \\ y_{t-2} \\ y_{t-3} \end{bmatrix} \quad A = \begin{bmatrix} \phi_1 & \phi_2 & \phi_3 & \phi_4 \\ I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \end{bmatrix} \quad C = \begin{bmatrix} \sigma \\ 0 \\ 0 \\ 0 \end{bmatrix} \quad G = \begin{bmatrix} I & 0 & 0 & 0 \end{bmatrix} $$

where $ I $ is the $ k \times k $ identity matrix and $ \sigma $ is a $ k \times k $ matrix.
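As an aside, the block companion matrix above is easy to assemble in code. Here is a minimal sketch for a hypothetical VAR with $ k = 2 $ variables and four lags; the coefficient matrices are illustrative random draws, not taken from any example in this lecture:

using LinearAlgebra

k, p = 2, 4                                  # hypothetical: 2 variables, 4 lags
ϕ = [0.1 * randn(k, k) for _ in 1:p]         # illustrative coefficient matrices
A = [reduce(hcat, ϕ);                        # first block row: ϕ1 ϕ2 ϕ3 ϕ4
     I(k * (p - 1)) zeros(k * (p - 1), k)]   # shifted identity blocks below
@assert size(A) == (k * p, k * p)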
We can use (1) to represent the deterministic seasonal $ y_t = y_{t-4} $ and the indeterministic seasonal $ y_t = \phi_4 y_{t-4} + w_t $.
In fact both are special cases of (3).
With the deterministic seasonal, the transition matrix becomes
$$ A = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} $$

It is easy to check that $ A^4 = I $, which implies that $ x_t $ is strictly periodic with period 4:[1]
$$ x_{t+4} = x_t $$

Such an $ x_t $ process can be used to model deterministic seasonals in quarterly time series.
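A minimal check of the periodicity claim:

using LinearAlgebra

A = [0.0 0.0 0.0 1.0
     1.0 0.0 0.0 0.0
     0.0 1.0 0.0 0.0
     0.0 0.0 1.0 0.0]
@assert A^4 == Matrix(1.0I, 4, 4)   # the state cycles with period 4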
The indeterministic seasonal produces recurrent, but aperiodic, seasonal fluctuations.
The model $ y_t = a t + b $ is known as a linear time trend.
We can represent this model in the linear state space form by taking
$$ A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \qquad G = \begin{bmatrix} a & b \end{bmatrix} \tag{4} $$
and starting at initial condition $ x_0 = \begin{bmatrix} 0 & 1\end{bmatrix}' $.
In fact it’s possible to use the state-space system to represent polynomial trends of any order.
For instance, let
$$ x_0 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \qquad A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} $$

It follows that
$$ A^t = \begin{bmatrix} 1 & t & t(t-1)/2 \\ 0 & 1 & t \\ 0 & 0 & 1 \end{bmatrix} $$

Then $ x_t^\prime = \begin{bmatrix} t(t-1)/2 & t & 1 \end{bmatrix} $, so that $ x_t $ contains linear and quadratic time trends.
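A minimal check of this closed form at the arbitrary value $ t = 5 $:

A = [1.0 1.0 0.0
     0.0 1.0 1.0
     0.0 0.0 1.0]
t = 5
@assert A^t == [1.0 t   t * (t - 1) / 2
                0.0 1.0 t
                0.0 0.0 1.0]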
A nonrecursive expression for $ x_t $ as a function of $ x_0, w_1, w_2, \ldots, w_t $ can be found by using (1) repeatedly to obtain
$$ \begin{aligned} x_t & = Ax_{t-1} + Cw_t \\ & = A^2 x_{t-2} + ACw_{t-1} + Cw_t \nonumber \\ & \qquad \vdots \nonumber \\ & = \sum_{j=0}^{t-1} A^j Cw_{t-j} + A^t x_0 \nonumber \end{aligned} \tag{5} $$
Representation (5) is a moving average representation.
It expresses $ \{x_t\} $ as a linear function of current and past values of the shock process $ \{w_t\} $ and the initial condition $ x_0 $.
As an example of a moving average representation, let the model be
$$ A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} 1 \\ 0 \end{bmatrix} $$

You will be able to show that $ A^t = \begin{bmatrix} 1 & t \cr 0 & 1 \end{bmatrix} $ and $ A^j C = \begin{bmatrix} 1 & 0 \end{bmatrix}' $.
Substituting into the moving average representation (5), we obtain
$$ x_{1t} = \sum_{j=0}^{t-1} w_{t-j} + \begin{bmatrix} 1 & t \end{bmatrix} x_0 $$

where $ x_{1t} $ is the first entry of $ x_t $.
The first term on the right is a cumulated sum of martingale differences, and is therefore a martingale.
The second term is a translated linear function of time.
For this reason, $ x_{1t} $ is called a martingale with drift.
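To visualize this behavior, here is a minimal simulation sketch; the initial condition $ x_0 = \begin{bmatrix} 0 & 1 \end{bmatrix}' $ and the seed are illustrative choices, under which the drift term $ \begin{bmatrix} 1 & t \end{bmatrix} x_0 $ is just $ t $:

using Plots, Random
Random.seed!(1)                                   # arbitrary seed, for reproducibility
T = 100
drift = 1:T                                       # [1 t] x_0 equals t when x_0 = [0, 1]'
paths = [cumsum(randn(T)) .+ drift for _ in 1:5]  # five martingale-with-drift paths
plot(paths, alpha = 0.6, legend = :none, xlabel = "time", ylabel = "x_1t")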
Using (1), it’s easy to obtain expressions for the (unconditional) means of $ x_t $ and $ y_t $.
We’ll explain what unconditional and conditional mean soon.
Letting $ \mu_t := \mathbb{E} [x_t] $ and using linearity of expectations, we find that
$$ \mu_{t+1} = A \mu_t \quad \text{with} \quad \mu_0 \text{ given} \tag{6} $$
Here $ \mu_0 $ is a primitive given in (1).
The variance-covariance matrix of $ x_t $ is $ \Sigma_t := \mathbb{E} [ (x_t - \mu_t) (x_t - \mu_t)'] $.
Using $ x_{t+1} - \mu_{t+1} = A (x_t - \mu_t) + C w_{t+1} $, we can determine this matrix recursively via
$$ \Sigma_{t+1} = A \Sigma_t A' + C C' \quad \text{with} \quad \Sigma_0 \text{ given} \tag{7} $$
As with $ \mu_0 $, the matrix $ \Sigma_0 $ is a primitive given in (1).
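Here is a minimal sketch of the recursions (6) and (7); the $ 2 \times 2 $ primitives are illustrative values, not drawn from any example above:

function moments(A, C, μ0, Σ0, T)
    μ, Σ = μ0, Σ0
    μs, Σs = [μ0], [Σ0]
    for t in 1:T
        μ = A * μ                # equation (6)
        Σ = A * Σ * A' + C * C'  # equation (7)
        push!(μs, μ)
        push!(Σs, Σ)
    end
    return μs, Σs
end

A = [0.8 0.1
     0.0 0.5]
C = [0.5, 0.3]
μs, Σs = moments(A, C, ones(2), zeros(2, 2), 50)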
As a matter of terminology, we will sometimes call $ \mu_t $ the unconditional mean and $ \Sigma_t $ the unconditional variance-covariance matrix of $ x_t $.
This is to distinguish $ \mu_t $ and $ \Sigma_t $ from related objects that use conditioning information, to be defined below.
However, you should be aware that these “unconditional” moments do depend on the initial distribution $ N(\mu_0, \Sigma_0) $.
Using linearity of expectations again we have
$$ \mathbb{E} [y_t] = \mathbb{E} [G x_t] = G \mu_t \tag{8} $$
The variance-covariance matrix of $ y_t $ is easily shown to be
$$ \textrm{Var} [y_t] = \textrm{Var} [G x_t] = G \Sigma_t G' \tag{9} $$
In general, knowing the mean and variance-covariance matrix of a random vector is not quite as good as knowing the full distribution.
However, there are some situations where these moments alone tell us all we need to know.
These are situations in which the mean vector and covariance matrix are sufficient statistics for the population distribution.
(Sufficient statistics form a list of objects that characterize a population distribution)
One such situation is when the vector in question is Gaussian (i.e., normally distributed).
This is the case here, given our Gaussian assumptions on the primitives.
In fact, it’s well-known that
$$ u \sim N(\bar u, S) \quad \text{and} \quad v = a + B u \implies v \sim N(a + B \bar u, B S B') \tag{10} $$
In particular, given our Gaussian assumptions on the primitives and the linearity of (1) we can see immediately that both $ x_t $ and $ y_t $ are Gaussian for all $ t \geq 0 $ [2].
Since $ x_t $ is Gaussian, to find the distribution, all we need to do is find its mean and variance-covariance matrix.
But in fact we’ve already done this, in (6) and (7).
Letting $ \mu_t $ and $ \Sigma_t $ be as defined by these equations, we have
$$ x_t \sim N(\mu_t, \Sigma_t) \tag{11} $$

By (10) and $ y_t = G x_t $, we similarly have

$$ y_t \sim N(G \mu_t, G \Sigma_t G') \tag{12} $$
How should we interpret the distributions defined by (11)–(12)?
Intuitively, the probabilities in a distribution correspond to relative frequencies in a large population drawn from that distribution.
Let’s apply this idea to our setting, focusing on the distribution of $ y_T $ for fixed $ T $.
We can generate independent draws of $ y_T $ by repeatedly simulating the evolution of the system up to time $ T $, using an independent set of shocks each time.
The next figure shows 20 simulations, producing 20 time series for $ \{y_t\} $, and hence 20 draws of $ y_T $.
The system in question is the univariate autoregressive model (3).
The values of $ y_T $ are represented by black dots in the left-hand figure
In the right-hand figure, these values are converted into a rotated histogram that shows relative frequencies from our sample of 20 $ y_T $’s.
(The parameters and source code for the figures can be found in file linear_models/paths_and_hist.jl)
Here is another figure, this time with 100 observations
Let’s now try with 500,000 observations, showing only the histogram (without rotation)
The black line is the population density of $ y_T $ calculated from (12).
The histogram and population distribution are close, as expected.
By looking at the figures and experimenting with parameters, you will gain a feel for how the population distribution depends on the model primitives listed above, as intermediated by the distribution’s sufficient statistics.
In the preceding figure we approximated the population distribution of $ y_T $ by simulating many independent sample paths up to time $ T $ and histogramming the resulting draws of $ y_T $.
Just as the histogram approximates the population distribution, the ensemble or cross-sectional average
$$ \bar y_T := \frac{1}{I} \sum_{i=1}^I y_T^i $$

approximates the expectation $ \mathbb{E} [y_T] = G \mu_T $ (as implied by the law of large numbers).
Here’s a simulation comparing the ensemble averages and population means at time points $ t=0,\ldots,50 $.
The parameters are the same as for the preceding figures, and the sample size is relatively small ($ I=20 $).
The ensemble mean for $ x_t $ is
$$ \bar x_T := \frac{1}{I} \sum_{i=1}^I x_T^i \to \mu_T \qquad (I \to \infty) $$

The limit $ \mu_T $ is a “long-run average”.
(By long-run average we mean the average for an infinite ($ I = \infty $) number of sample $ x_T $’s)
Another application of the law of large numbers assures us that
$$ \frac{1}{I} \sum_{i=1}^I (x_T^i - \bar x_T) (x_T^i - \bar x_T)' \to \Sigma_T \qquad (I \to \infty) $$

In the preceding discussion we looked at the distributions of $ x_t $ and $ y_t $ in isolation.
This gives us useful information, but doesn’t allow us to answer questions about the time path of the process, such as the probability that $ y_t $ stays positive at every date up to $ T $.
Such questions concern the joint distributions of these sequences.
To compute the joint distribution of $ x_0, x_1, \ldots, x_T $, recall that joint and conditional densities are linked by the rule
$$ p(x, y) = p(y \, | \, x) p(x) \qquad \text{(joint }=\text{ conditional }\times\text{ marginal)} $$

From this rule we get $ p(x_0, x_1) = p(x_1 \,|\, x_0) p(x_0) $.
The Markov property $ p(x_t \,|\, x_{t-1}, \ldots, x_0) = p(x_t \,|\, x_{t-1}) $ and repeated applications of the preceding rule lead us to
$$ p(x_0, x_1, \ldots, x_T) = p(x_0) \prod_{t=0}^{T-1} p(x_{t+1} \,|\, x_t) $$

The marginal $ p(x_0) $ is just the primitive $ N(\mu_0, \Sigma_0) $.
In view of (1), the conditional densities are
$$ p(x_{t+1} \,|\, x_t) = N(Ax_t, C C') $$

An important object related to the joint distribution is the autocovariance function
$$ \Sigma_{t+j, t} := \mathbb{E} [ (x_{t+j} - \mu_{t+j})(x_t - \mu_t)' ] \tag{13} $$
Elementary calculations show that
$$ \Sigma_{t+j,t} = A^j \Sigma_t \tag{14} $$
Notice that $ \Sigma_{t+j,t} $ in general depends on both $ j $, the gap between the two dates, and $ t $, the earlier date.
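In code, given the sequence Σs from the moment-recursion sketch above (where Σs[t + 1] stores $ \Sigma_t $ under 1-based indexing), (14) is a one-liner:

autocov(j, t, A, Σs) = A^j * Σs[t + 1]   # Σ_{t+j, t} = A^j Σ_t, equation (14)
autocov(3, 2, A, Σs)                     # e.g. Σ_{5, 2}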
Stationarity and ergodicity are two properties that, when they hold, greatly aid analysis of linear state space models.
Let’s start with the intuition.
Let’s look at some more time series from the same model that we analyzed above.
This picture shows cross-sectional distributions for $ y $ at times $ T, T', T'' $
Note how the time series “settle down” in the sense that the distributions at $ T' $ and $ T'' $ are relatively similar to each other — but unlike the distribution at $ T $.
Apparently, the distributions of $ y_t $ converge to a fixed long-run distribution as $ t \to \infty $.
When such a distribution exists it is called a stationary distribution.
In our setting, a distribution $ \psi_{\infty} $ is said to be stationary for $ x_t $ if
$$ x_t \sim \psi_{\infty} \quad \text{and} \quad x_{t+1} = A x_t + C w_{t+1} \quad \implies \quad x_{t+1} \sim \psi_{\infty} $$

Since all the distributions in question are Gaussian, and a Gaussian distribution is pinned down by its mean and variance-covariance matrix, we can restate the definition as follows: $ \psi_{\infty} $ is stationary for $ x_t $ if
$$ \psi_{\infty} = N(\mu_{\infty}, \Sigma_{\infty}) $$

where $ \mu_{\infty} $ and $ \Sigma_{\infty} $ are fixed points of (6) and (7) respectively.
Let’s see what happens to the preceding figure if we start $ x_0 $ at the stationary distribution.
Now the differences in the observed distributions at $ T, T' $ and $ T'' $ come entirely from random fluctuations due to the finite sample size.
By choosing $ x_0 \sim N(\mu_{\infty}, \Sigma_{\infty}) $, with $ \mu_{\infty} $ and $ \Sigma_{\infty} $ the fixed points of (6) and (7), we’ve ensured that
$$ \mu_t = \mu_{\infty} \quad \text{and} \quad \Sigma_t = \Sigma_{\infty} \quad \text{for all } t $$

Moreover, in view of (14), the autocovariance function takes the form $ \Sigma_{t+j,t} = A^j \Sigma_\infty $, which depends on $ j $ but not on $ t $.
This motivates the following definition.
A process $ \{x_t\} $ is said to be covariance stationary if its mean $ \mu_t $ and variance-covariance matrix $ \Sigma_t $ are constant in $ t $, and the autocovariance $ \Sigma_{t+j,t} $ depends on the gap $ j $ but not on $ t $.
In our setting, $ \{x_t\} $ will be covariance stationary if $ \mu_0, \Sigma_0, A, C $ assume values that imply that none of $ \mu_t, \Sigma_t, \Sigma_{t+j,t} $ depends on $ t $.
The difference equation $ \mu_{t+1} = A \mu_t $ is known to have unique fixed point $ \mu_{\infty} = 0 $ if all eigenvalues of $ A $ have moduli strictly less than unity.

That is, if all(abs.(eigvals(A)) .< 1) == true.
The difference equation (7) also has a unique fixed point in this case, and, moreover
$$ \mu_t \to \mu_{\infty} = 0 \quad \text{and} \quad \Sigma_t \to \Sigma_{\infty} \quad \text{as} \quad t \to \infty $$

regardless of the initial conditions $ \mu_0 $ and $ \Sigma_0 $.
This is the globally stable case; see these notes for a more theoretical treatment.
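In this stable case, $ \Sigma_{\infty} $ solves the discrete Lyapunov equation $ \Sigma_{\infty} = A \Sigma_{\infty} A' + CC' $, and QuantEcon.jl provides solve_discrete_lyapunov for exactly this purpose. A minimal sketch with the illustrative primitives used earlier:

using QuantEcon

A = [0.8 0.1
     0.0 0.5]                                # eigenvalue moduli 0.8 and 0.5, both < 1
C = [0.5, 0.3]
Σ_inf = solve_discrete_lyapunov(A, C * C')   # fixed point of (7)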
However, global stability is more than we need for stationary solutions, and often more than we want.
To illustrate, consider our second order difference equation example.
Here the state is $ x_t = \begin{bmatrix} 1 & y_t & y_{t-1} \end{bmatrix}' $.
Because of the constant first component in the state vector, we will never have $ \mu_t \to 0 $.
How can we find stationary solutions that respect a constant state component?
To investigate such a process, suppose that $ A $ and $ C $ take the form
$$ A = \begin{bmatrix} A_1 & a \\ 0 & 1 \end{bmatrix} \qquad C = \begin{bmatrix} C_1 \\ 0 \end{bmatrix} $$

where $ A_1 $ is an $ (n-1) \times (n-1) $ matrix and $ a $ is an $ (n-1) \times 1 $ column vector.
Let $ x_t = \begin{bmatrix} x_{1t}' & 1 \end{bmatrix}' $ where $ x_{1t} $ is $ (n-1) \times 1 $.
It follows that
$$ x_{1,t+1} = A_1 x_{1t} + a + C_1 w_{t+1} $$

Let $ \mu_{1t} = \mathbb{E} [x_{1t}] $ and take expectations on both sides of this expression to get
$$ \mu_{1,t+1} = A_1 \mu_{1,t} + a \tag{15} $$
Assume now that the moduli of the eigenvalues of $ A_1 $ are all strictly less than one.
Then (15) has a unique stationary solution, namely,
$$ \mu_{1\infty} = (I-A_1)^{-1} a $$

The stationary value of $ \mu_t $ itself is then $ \mu_\infty := \begin{bmatrix} \mu_{1\infty}' & 1 \end{bmatrix}' $.
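For the second-order difference equation example (after reordering the state as $ \begin{bmatrix} y_t & y_{t-1} & 1 \end{bmatrix}' $ so that the constant sits last, matching the block form above), we have $ A_1 = \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} $ and $ a = \begin{bmatrix} \phi_0 \\ 0 \end{bmatrix} $. A minimal sketch of the fixed point calculation:

using LinearAlgebra

ϕ0, ϕ1, ϕ2 = 1.1, 0.8, -0.8
A1 = [ϕ1  ϕ2
      1.0 0.0]              # eigenvalue moduli are √0.8 < 1, so (15) is stable
a = [ϕ0, 0.0]
μ1_inf = (I - A1) \ a       # (I - A_1)^{-1} a; both entries equal 1.1 here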
The stationary values of $ \Sigma_t $ and $ \Sigma_{t+j,t} $ satisfy
$$ \begin{aligned} \Sigma_\infty & = A \Sigma_\infty A' + C C' \\ \Sigma_{t+j,t} & = A^j \Sigma_\infty \nonumber \end{aligned} \tag{16} $$
Notice that here $ \Sigma_{t+j,t} $ depends on the time gap $ j $ but not on calendar time $ t $.
In conclusion, if $ x_0 \sim N(\mu_{\infty}, \Sigma_{\infty}) $ and the moduli of the eigenvalues of $ A_1 $ are all strictly less than unity, then the $ \{x_t\} $ process is covariance stationary, with constant state component.
Note: if the eigenvalues of $ A_1 $ are less than unity in modulus, then (a) starting from any initial value, the mean and variance-covariance matrix both converge to their stationary values; and (b) iterations on (7) converge to the fixed point of the discrete Lyapunov equation in the first line of (16).
Let’s suppose that we’re working with a covariance stationary process.
In this case we know that the ensemble mean will converge to $ \mu_{\infty} $ as the sample size $ I $ approaches infinity.
Ensemble averages across simulations are interesting theoretically, but in real life we usually observe only a single realization $ \{x_t, y_t\}_{t=0}^T $.
So now let’s take a single realization and form the time series averages
$$ \bar x := \frac{1}{T} \sum_{t=1}^T x_t \quad \text{and} \quad \bar y := \frac{1}{T} \sum_{t=1}^T y_t $$

Do these time series averages converge to something interpretable in terms of our basic state-space representation?
The answer depends on something called ergodicity.
Ergodicity is the property that time series and ensemble averages coincide.
More formally, ergodicity implies that time series sample averages converge to their expectation under the stationary distribution.
In particular, the time series averages $ \bar x $ and $ \bar y $ converge to $ \mu_{\infty} $ and $ G \mu_{\infty} $ respectively as $ T \to \infty $.
In our linear Gaussian setting, any covariance stationary process is also ergodic.
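As a sanity check on ergodicity, the following sketch simulates one long realization of the AR(4) model used above and compares the time series average of $ y_t $ with the stationary mean, which is zero here because the model has no constant; the simulation length of 100,000 is arbitrary:

using QuantEcon, Statistics

ϕ1, ϕ2, ϕ3, ϕ4 = 0.5, -0.2, 0, 0.5
σ = 0.1
A = [ϕ1  ϕ2  ϕ3  ϕ4
     1.0 0.0 0.0 0.0
     0.0 1.0 0.0 0.0
     0.0 0.0 1.0 0.0]
C = [σ, 0.0, 0.0, 0.0]
G = [1.0 0.0 0.0 0.0]
ar = LSS(A, C, G)            # default x_0 = 0, the stationary mean
x, y = simulate(ar, 100_000)
mean(y)                      # ≈ 0, the stationary mean of y_t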
In some settings the observation equation $ y_t = Gx_t $ is modified to include an error term.
Often this error term represents the idea that the true state can only be observed imperfectly.
To include an error term in the observation we introduce an iid sequence of $ \ell \times 1 $ random vectors $ v_t \sim N(0, I) $ and a $ k \times \ell $ matrix $ H $,
and extend the linear state-space system to
$$ \begin{aligned} x_{t+1} & = A x_t + C w_{t+1} \\ y_t & = G x_t + H v_t \nonumber \\ x_0 & \sim N(\mu_0, \Sigma_0) \nonumber \end{aligned} \tag{17} $$
The sequence $ \{v_t\} $ is assumed to be independent of $ \{w_t\} $.
The process $ \{x_t\} $ is not modified by noise in the observation equation, and so its moments, distributions and stability properties remain the same.
The unconditional moments of $ y_t $ from (8) and (9) now become
$$ \mathbb{E} [y_t] = \mathbb{E} [G x_t + H v_t] = G \mu_t \tag{18} $$
The variance-covariance matrix of $ y_t $ is easily shown to be
$$ \textrm{Var} [y_t] = \textrm{Var} [G x_t + H v_t] = G \Sigma_t G' + HH' \tag{19} $$
The distribution of $ y_t $ is therefore
$$ y_t \sim N(G \mu_t, G \Sigma_t G' + HH') $$

The natural way to predict variables is to use conditional distributions.
For example, the optimal forecast of $ x_{t+1} $ given information known at time $ t $ is
$$ \mathbb{E}_t [x_{t+1}] := \mathbb{E} [x_{t+1} \mid x_t, x_{t-1}, \ldots, x_0 ] = Ax_t $$

The right-hand side follows from $ x_{t+1} = A x_t + C w_{t+1} $ and the fact that $ w_{t+1} $ is zero mean and independent of $ x_t, x_{t-1}, \ldots, x_0 $.
That $ \mathbb{E}_t [x_{t+1}] = \mathbb{E}[x_{t+1} \mid x_t] $ is an implication of $ \{x_t\} $ having the Markov property.
The one-step-ahead forecast error is
$$ x_{t+1} - \mathbb{E}_t [x_{t+1}] = Cw_{t+1} $$

The covariance matrix of the forecast error is
$$ \mathbb{E} [ (x_{t+1} - \mathbb{E}_t [ x_{t+1}] ) (x_{t+1} - \mathbb{E}_t [ x_{t+1}])'] = CC' $$

More generally, we’d like to compute the $ j $-step ahead forecasts $ \mathbb{E}_t [x_{t+j}] $ and $ \mathbb{E}_t [y_{t+j}] $.
With a bit of algebra we obtain
$$ x_{t+j} = A^j x_t + A^{j-1} C w_{t+1} + A^{j-2} C w_{t+2} + \cdots + A^0 C w_{t+j} $$

In view of the iid property, current and past state values provide no information about future values of the shock.
Hence $ \mathbb{E}_t[w_{t+k}] = \mathbb{E}[w_{t+k}] = 0 $.
It now follows from linearity of expectations that the $ j $-step ahead forecast of $ x $ is
$$ \mathbb{E}_t [x_{t+j}] = A^j x_t $$

The $ j $-step ahead forecast of $ y $ is therefore
$$ \mathbb{E}_t [y_{t+j}] = \mathbb{E}_t [G x_{t+j} + H v_{t+j}] = G A^j x_t $$

It is useful to obtain the covariance matrix of the vector of $ j $-step-ahead prediction errors
$$ x_{t+j} - \mathbb{E}_t [ x_{t+j}] = \sum^{j-1}_{s=0} A^s C w_{t-s+j} \tag{20} $$
Evidently,
$$ V_j := \mathbb{E}_t [ (x_{t+j} - \mathbb{E}_t [x_{t+j}] ) (x_{t+j} - \mathbb{E}_t [x_{t+j}] )^\prime ] = \sum^{j-1}_{k=0} A^k C C^\prime (A^k)^\prime \tag{21} $$
$ V_j $ defined in (21) can be calculated recursively via $ V_1 = CC' $ and
$$ V_j = CC^\prime + A V_{j-1} A^\prime, \quad j \geq 2 \tag{22} $$
$ V_j $ is the conditional covariance matrix of the errors in forecasting $ x_{t+j} $, conditioned on time $ t $ information $ x_t $.
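A minimal sketch of the recursion (22), with the same illustrative primitives as before:

function forecast_error_cov(A, C, j)
    V = C * C'                     # V_1 = CC'
    for _ in 2:j
        V = C * C' + A * V * A'    # equation (22)
    end
    return V
end

A = [0.8 0.1
     0.0 0.5]
C = [0.5, 0.3]
forecast_error_cov(A, C, 10)       # V_10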
Under particular conditions, $ V_j $ converges to
$$ V_\infty = CC' + A V_\infty A' \tag{23} $$
Equation (23) is an example of a discrete Lyapunov equation in the covariance matrix $ V_\infty $.
A sufficient condition for $ V_j $ to converge is that the eigenvalues of $ A $ be strictly less than one in modulus.
Weaker sufficient conditions for convergence associate eigenvalues equaling or exceeding one in modulus with elements of $ C $ that equal $ 0 $.
In several contexts, we want to compute forecasts of geometric sums of future random variables governed by the linear state-space system (1).
We want the following objects: the forecast of a geometric sum of future $ x $’s, $ \mathbb{E}_t \left[ \sum_{j=0}^\infty \beta^j x_{t+j} \right] $, and the forecast of a geometric sum of future $ y $’s, $ \mathbb{E}_t \left[ \sum_{j=0}^\infty \beta^j y_{t+j} \right] $.
These objects are important components of some famous and interesting dynamic models. For example, present-value models price a stock as an expected discounted sum of future dividends.
Fortunately, it is easy to use a little matrix algebra to compute these objects.
Suppose that every eigenvalue of $ A $ has modulus strictly less than $ \frac{1}{\beta} $.
It then follows that $ I + \beta A + \beta^2 A^2 + \cdots = \left[I - \beta A \right]^{-1} $.
This leads to our formulas:

$$ \mathbb{E}_t \left[ \sum_{j=0}^\infty \beta^j x_{t+j} \right] = [I + \beta A + \beta^2 A^2 + \cdots \,] x_t = [I - \beta A]^{-1} x_t $$

and

$$ \mathbb{E}_t \left[ \sum_{j=0}^\infty \beta^j y_{t+j} \right] = G [I - \beta A]^{-1} x_t $$
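A minimal sketch of these formulas with illustrative primitives and an arbitrary current state (the eigenvalue condition on $ A $ holds here, since both moduli are below $ 1/\beta \approx 1.04 $):

using LinearAlgebra

β = 0.96
A = [0.8 0.1
     0.0 0.5]
G = [1.0 0.0]
x_t = [1.0, 2.0]             # an arbitrary current state
Sx = (I - β * A) \ x_t       # E_t[Σ_j β^j x_{t+j}]
Sy = G * Sx                  # E_t[Σ_j β^j y_{t+j}]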
Our preceding simulations and calculations are based on code in the file lss.jl from the QuantEcon.jl package.
The code implements a type for linear state space models, with methods that act on it directly (for simulation, calculating moments, etc.).
Examples of usage are given in the solutions to the exercises.
Replicate this figure modulo randomness using the same type.
The state space model and parameters are the same as for the preceding exercise.
Replicate this figure modulo randomness using the same type.
The state space model and parameters are the same as for the preceding exercise, except that the initial condition is the stationary distribution.
Hint: You can use the stationary_distributions method to get the initial conditions.
The number of sample paths is 80, and the time horizon in the figure is 100.
Producing the vertical bars and dots is optional, but if you wish to try, the bars are at dates 10, 50 and 75.
using QuantEcon, Plots
gr(fmt=:png);
ϕ0, ϕ1, ϕ2 = 1.1, 0.8, -0.8
A = [1.0 0.0 0
ϕ0 ϕ1 ϕ2
0.0 1.0 0.0]
C = zeros(3, 1)
G = [0.0 1.0 0.0]
μ_0 = ones(3)
lss = LSS(A, C, G; mu_0=μ_0)
x, y = simulate(lss, 50)
plot(dropdims(y, dims = 1), color = :blue, linewidth = 2, alpha = 0.7)
plot!(xlabel="time", ylabel = "y_t", legend = :none)
using Random
Random.seed!(42) # For deterministic results.
ϕ1, ϕ2, ϕ3, ϕ4 = 0.5, -0.2, 0, 0.5
σ = 0.2
A = [ϕ1 ϕ2 ϕ3 ϕ4
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0]
C = [σ
     0.0
     0.0
     0.0]
G = [1.0 0.0 0.0 0.0]
ar = LSS(A, C, G; mu_0 = ones(4))
x, y = simulate(ar, 200)
plot(dropdims(y, dims = 1), color = :blue, linewidth = 2, alpha = 0.7)
plot!(xlabel="time", ylabel = "y_t", legend = :none)
ϕ1, ϕ2, ϕ3, ϕ4 = 0.5, -0.2, 0, 0.5
σ = 0.1
A = [ ϕ1 ϕ2 ϕ3 ϕ4
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0]
C = [σ
0.0
0.0
0.0]
G = [1.0 0.0 0.0 0.0]
I = 20
T = 50
ar = LSS(A, C, G; mu_0 = ones(4))
ymin, ymax = -0.5, 1.15
ensemble_mean = zeros(T)
ys = []
for i ∈ 1:I
x, y = simulate(ar, T)
y = dropdims(y, dims = 1)
push!(ys, y)
ensemble_mean .+= y
end
ensemble_mean = ensemble_mean ./ I
plot(ys, color = :blue, alpha = 0.2, linewidth = 0.8, label = "")
plot!(ensemble_mean, color = :blue, linewidth = 2, label = "y_t_bar")
m = moment_sequence(ar)
pop_means = zeros(0)
for (i, t) ∈ enumerate(m)
(μ_x, μ_y, Σ_x, Σ_y) = t
push!(pop_means, μ_y[1])
i == 50 && break
end
plot!(pop_means, color = :green, linewidth = 2, label = "G mu_t")
plot!(ylims=(ymin, ymax), xlabel = "time", ylabel = "y_t", legendfont = font(12))
ϕ1, ϕ2, ϕ3, ϕ4 = 0.5, -0.2, 0, 0.5
σ = 0.1
A = [ϕ1 ϕ2 ϕ3 ϕ4
1.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0]
C = [σ
     0.0
     0.0
     0.0]
G = [1.0 0.0 0.0 0.0]
T0 = 10
T1 = 50
T2 = 75
T4 = 100
ar = LSS(A, C, G; mu_0 = ones(4))
ymin, ymax = -0.6, 0.6
μ_x, μ_y, Σ_x, Σ_y = stationary_distributions(ar)
ar = LSS(A, C, G; mu_0=μ_x, Sigma_0=Σ_x)
ys = []
x_scatter = []
y_scatter = []
for i ∈ 1:80
x, y = simulate(ar, T4)
y = dropdims(y, dims = 1)
push!(ys, y)
x_scatter = [x_scatter; T0; T1; T2]
y_scatter = [y_scatter; y[T0]; y[T1]; y[T2]]
end
plot(ys, linewidth = 0.8, alpha = 0.5)
plot!([T0 T1 T2; T0 T1 T2], [-1 -1 -1; 1 1 1], color = :black, legend = :none)
scatter!(x_scatter, y_scatter, color = :black, alpha = 0.5)
plot!(ylims=(ymin, ymax), ylabel = "y_t", xticks =[], yticks = ymin:0.2:ymax)
plot!(annotations = [(T0+1, -0.55, "T");(T1+1, -0.55, "T'");(T2+1, -0.55, "T''")])
Footnotes
[1] The eigenvalues of $ A $ are $ (1,-1, i,-i) $.
[2] The correct way to argue this is by induction. Suppose that $ x_t $ is Gaussian. Then (1) and (10) imply that $ x_{t+1} $ is Gaussian. Since $ x_0 $ is assumed to be Gaussian, it follows that every $ x_t $ is Gaussian. Evidently this implies that each $ y_t $ is Gaussian.