This notebook is a mental note about issues that come up when trying to estimate a panel with one-way and two-way fixed effects. Any comments, suggestions or ideas are more than welcome.
You can access the notebook in two flavours:
* Online (read-only): link
* Raw file (as a gist): link
Last, the notebook relies on two functions I provide in a separate Python file, in order not to clutter the text and keep it readable. You get the additional file when you pull the gist, but you can also check it online here. All the code examples are written in Python and rely on the following dependencies, which you will need in order to reproduce the code on your machine:
import panel_ols
import pandas as pd
print "Pandas: ", pd.__version__
import pysal as ps
print "PySAL: ", ps.version
import numpy as np
print "Numpy: ", np.__version__
np.random.seed(1234)
Pandas:  0.12.0
PySAL:  1.7.0dev
Numpy:  1.7.1
The idea is that we have a panel with two dimensions (you can think of individuals $i$ over time $t$, but there can be other dimensions too). In formal notation, we begin with the following baseline model:
$$ y_{it} = \alpha + X_{it} \beta + \epsilon_{it} $$Assuming the usual good behaviour of $\epsilon_{it}$, this can be estimated by OLS. If we want to control for time-invariant characteristics of individuals, we can include what is called a fixed effect:
$$ y_{it} = X_{it} \beta + \mu_i + \epsilon_{it} $$where $\mu_i$ absorbs every individual effect that does not vary over time. If we want to estimate this model, we can simply include a dummy for each $i$ and run OLS. The problem is that, if the number of individuals is very large, one quickly runs out of degrees of freedom and increases multicollinearity. An equivalent way to work around this is suggested by Baltagi on p. 34 (of the third edition, at least). We can demean $y_{it}$ and $X_{it}$ as follows:
$$ y^*_{it} = y_{it} - \bar{y}_{i.} $$where $\bar{y}_{i.}$ is the average of the values for each individual. Running a straight OLS (without intercept) on the transformed variables yields the same $\beta$ estimates as we would obtain by using dummy variables.
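To see this equivalence on a toy example, here is a minimal sketch in pure NumPy (independent of the helper file used later; all names are made up for the illustration) comparing the dummy-variable (LSDV) estimates with the within (demeaned) ones:

```python
import numpy as np

rng = np.random.RandomState(0)
n, k, groups = 60, 2, 5
g = rng.randint(0, groups, n)                 # individual index i for each row
X = rng.rand(n, k)
y = np.dot(X, np.ones(k)) + rng.rand(groups)[g] + rng.normal(0, 0.1, n)

# (a) LSDV: OLS with a dummy for each individual (and no common intercept)
D = (g[:, None] == np.arange(groups)).astype(float)
beta_lsdv = np.linalg.lstsq(np.hstack((X, D)), y, rcond=None)[0][:k]

# (b) Within estimator: demean y and X by individual, OLS without intercept
def demean(a, idx):
    # subtract the group average \bar{a}_{i.} from every observation
    means = np.array([a[idx == i].mean(axis=0) for i in range(idx.max() + 1)])
    return a - means[idx]

beta_within = np.linalg.lstsq(demean(X, g), demean(y, g), rcond=None)[0]
```

The two vectors of estimates coincide up to floating point noise, which is exactly what the Frisch-Waugh-Lovell theorem predicts.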
If we also want to account for unobserved heterogeneity that is invariant across individuals, we can include a time effect as well, converting the model into the so-called two-way fixed-effects model:
$$ y_{it} = X_{it} \beta + \mu_i + \nu_t + \epsilon_{it} $$In theory, you could also estimate this using dummy variables, but the problems that arise in the one-way model are even more acute. Fortunately, Baltagi proposes another transformation that gets around this:
$$ y^*_{it} = y_{it} - \bar{y}_{i.} - \bar{y}_{.t} + \bar{y}_{..} $$If you apply this transformation to both $y$ and $X$, you can run a straight OLS (again without intercept) and obtain the $\beta$ estimates of interest.
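The two-way transformation is easy to code directly. One caveat worth flagging: the identity with the dummy-variable estimates is exact only when the panel is balanced (every $i$ observed in every $t$), so the sketch below, again in plain NumPy with made-up names, builds a small balanced panel:

```python
import numpy as np

rng = np.random.RandomState(0)
N, T, k = 8, 6, 2
i = np.repeat(np.arange(N), T)   # individual index (balanced: every i in every t)
t = np.tile(np.arange(T), N)     # time index
X = rng.rand(N * T, k)
y = np.dot(X, np.ones(k)) + rng.rand(N)[i] + rng.rand(T)[t] \
    + rng.normal(0, 0.1, N * T)

def two_way_demean(a, i, t):
    # a* = a - \bar{a}_{i.} - \bar{a}_{.t} + \bar{a}_{..}
    ai = np.array([a[i == g].mean(axis=0) for g in range(i.max() + 1)])
    at = np.array([a[t == s].mean(axis=0) for s in range(t.max() + 1)])
    return a - ai[i] - at[t] + a.mean(axis=0)

# OLS without intercept on the transformed data...
beta = np.linalg.lstsq(two_way_demean(X, i, t),
                       two_way_demean(y, i, t), rcond=None)[0]

# ...matches OLS with a full set of individual and time dummies
Di = (i[:, None] == np.arange(N)).astype(float)
Dt = (t[:, None] == np.arange(T)).astype(float)[:, 1:]  # drop one time dummy
beta_lsdv = np.linalg.lstsq(np.hstack((X, Di, Dt)), y, rcond=None)[0][:k]
```

On a balanced panel like this one, `beta` and `beta_lsdv` agree to machine precision.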
Let's now get our hands dirty and check that these alternative ways of specifying the models are indeed equivalent. First we need some synthetic data to run the experiments:
n = 1000                                    # observations
d1 = 100                                    # upper bound for the first effect
d2 = 10                                     # upper bound for the second effect
np.random.seed(1234)
x = np.random.random((n, 3))
fe1 = np.random.random_integers(0, d1, n)   # first effect (e.g. individuals)
fe1_d = pd.get_dummies(fe1, 'fe1')
fe2 = np.random.random_integers(0, d2, n)   # second effect (e.g. time)
fe2_d = pd.get_dummies(fe2, 'fe2')
e = np.random.normal(0, 1, (n, 1))
y = np.dot(x, np.array([[1., 1., 1.]]).T) \
+ fe1_d.ix[:, :-1].values.sum(axis=1)[:, None] \
+ fe2_d.values.sum(axis=1)[:, None] \
+ e
We can estimate the model including the dummies for the first effect:
xone = np.hstack((x, fe1_d.values))
m1_dummy = ps.spreg.BaseOLS(y, xone)
Or use the transformation, built into the function `one_way_fe`:
m1_demean = panel_ols.one_way_fe(y, x, fe1)
We can now compare the estimates of $\beta$:
np.hstack((m1_dummy.betas[:3], m1_demean.betas))
array([[ 1.05322727,  1.05322727],
       [ 0.94063773,  0.94063773],
       [ 1.08095942,  1.08095942]])
If you check, the difference between the two is negligible:
m1_dummy.betas[:3] - m1_demean.betas
array([[ -4.15223411e-14],
       [  1.57651669e-14],
       [ -1.82076576e-14]])
Also, you can see how the multicollinearity issue starts to creep in, although, in this case, it is not unacceptable:
ci = ps.spreg.diagnostics.condition_index
print "Dummy: %.4f | Demean: %.4f"%(ci(m1_dummy), ci(m1_demean))
Dummy: 7.0236 | Demean: 1.0713
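As an aside, the condition index reported here is the Belsley-style diagnostic: scale each column of $X$ to unit length and take the square root of the ratio between the largest and smallest eigenvalues of $X'X$. The sketch below mirrors what `ps.spreg.diagnostics.condition_index` computes, up to implementation details (the PySAL helper takes a fitted regression object rather than a raw matrix):

```python
import numpy as np

def condition_index(X):
    # scale columns to unit length, then sqrt(lambda_max / lambda_min) of X'X
    Xs = X / np.sqrt((X ** 2).sum(axis=0))
    ev = np.linalg.eigvalsh(np.dot(Xs.T, Xs))
    return np.sqrt(ev.max() / ev.min())

rng = np.random.RandomState(0)
X = rng.rand(50, 3)
Xc = np.hstack((X, X[:, :1] + 1e-4 * rng.rand(50, 1)))  # near-duplicate column
```

An orthogonal design gives a condition index of one, while a near-duplicate column blows it up; that is the effect the large blocks of dummies have on the design matrix.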
For the two-way model there are three alternatives and, in theory, they all give the same estimates, although, as we are going to see, the differences are a bit larger.
We can estimate with all the dummies directly:
xtwo = np.hstack((xone, fe2_d.values[:, 1:]))
m2_dummies = ps.spreg.BaseOLS(y, xtwo)
We can estimate the fully transformed model:
m2_demean = panel_ols.two_way_fe(y, x, fe1, fe2)
Or you can take a middle way, using dummies for one of the effects (`fe1`, already included in `xone`) and the demeaning transformation for the other (`fe2`):
m2_mix = panel_ols.one_way_fe(y, xone[:, :-1], fe2)
We can now check the coefficients of the three methods:
pd.DataFrame({'Dummies': m2_dummies.betas[:3].flatten(), \
'Mix': m2_mix.betas[:3].flatten(), \
'Demean': m2_demean.betas.flatten()})
| | Demean | Dummies | Mix |
|---|---|---|---|
| 0 | 1.051114 | 1.051047 | 1.051047 |
| 1 | 0.963858 | 0.964616 | 0.964616 |
| 2 | 1.072238 | 1.070982 | 1.070982 |
And their multicollinearity:
pd.Series({'Dummies': ci(m2_dummies), \
'Mix': ci(m2_mix), \
'Demean': ci(m2_demean)})
Demean      1.073158
Dummies    11.582533
Mix        11.704122
dtype: float64
As you can see, estimates for `Dummies` and `Mix` are exactly the same (at least to the sixth decimal), while those for `Demean` are slightly different. The exact match between the first two is what the Frisch-Waugh-Lovell theorem predicts, since demeaning by `fe2` is equivalent to partialling out its dummies. As for the `Demean` discrepancy, my best guess is that the simple two-way transformation is only exact for balanced panels, and the random assignment used here produces an unbalanced one; but if you have any other suggestion as to why this happens, please drop me a line, I'll be happy to learn!