Comparing R lmer to Statsmodels MixedLM

The Statsmodels implementation of linear mixed models (MixedLM) closely follows the approach outlined in Lindstrom and Bates (JASA 1988). This is also the approach followed in the R package LME4. Other packages such as Stata and SAS should also be consistent with this approach, since the basic techniques in this area are mostly mature.

Here we show how linear mixed models can be fit using the MixedLM procedure in Statsmodels. Results from R (LME4) are included for comparison.

Here are our import statements:

In [11]:
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

Growth curves of pigs

These are longitudinal data from a factorial experiment. The outcome variable is the weight of each pig, and the only predictor variable we will use here is "time". First we fit a model that expresses the mean weight as a linear function of time, with a random intercept for each pig. The model is specified using formulas. Since the random effects structure is not specified, the default random effects structure (a random intercept for each group) is automatically used.

In [12]:
data = pd.read_csv("dietox.csv")
model = sm.MixedLM.from_formula("Weight ~ Time", data, groups=data["Pig"])
result = model.fit()
print(result.summary())
          Mixed Linear Model Regression Results
========================================================
Model:            MixedLM Dependent Variable: Weight    
No. Observations: 861     Method:             REML      
No. Groups:       72      Scale:              11.3668   
Min. group size:  11      Likelihood:         -2404.7753
Max. group size:  12      Converged:          Yes       
Mean group size:  12.0                                  
--------------------------------------------------------
             Coef.  Std.Err.    z    P>|z| [0.025 0.975]
--------------------------------------------------------
Intercept    15.724    0.788  19.952 0.000 14.180 17.269
Time          6.942    0.033 207.939 0.000  6.877  7.008
Intercept RE 40.399    2.166                            
========================================================

Here is the same model fit in R using LMER:

Linear mixed model fit by REML 
Formula: Weight ~ Time + (1 | Pig) 
   Data: data 
  AIC  BIC logLik deviance REMLdev
 4818 4837  -2405     4806    4810
Random effects:
 Groups   Name        Variance Std.Dev.
 Pig      (Intercept) 40.394   6.3556  
 Residual             11.367   3.3715  
Number of obs: 861, groups: Pig, 72
Fixed effects:
            Estimate Std. Error t value
(Intercept) 15.72352    0.78805   19.95
Time         6.94251    0.03339  207.94
Correlation of Fixed Effects:
     (Intr)
Time -0.275

Note that in the Statsmodels summary of results, the fixed effects and random effects parameter estimates are shown in a single table. The random effects parameters are labeled "RE" in the Statsmodels output. In the LME4 output, this parameter corresponds to the variance of the pig-level intercept shown under the random effects section.
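
The quantities in this table can also be accessed programmatically on the fitted result object, for example through its fe_params, cov_re, and scale attributes:

In [ ]:
print(result.fe_params)  # fixed effects coefficients (Intercept, Time)
print(result.cov_re)     # random effects covariance (here the 1x1 intercept variance)
print(result.scale)      # residual error variance, shown as "Scale" in the summary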

There has been a lot of debate about whether the standard errors for random effect variance and covariance parameters are useful. In LME4, these standard errors are not displayed, because the authors of the package believe they are not very informative. While there is good reason to question their utility, we elected to include the standard errors in the summary table, but do not show the corresponding Wald confidence intervals.

Next we fit a model with two random effects for each animal: a random intercept, and a random slope (with respect to time). This means that each pig may have a different baseline weight, as well as grow at a different rate. The formula specifies that "Time" is a covariate with a random coefficient. By default, formulas always include an intercept (which could be suppressed here by specifying "0 + Time" as the re_formula).

In [13]:
model = sm.MixedLM.from_formula("Weight ~ Time", data, re_formula="Time", groups=data["Pig"])
result = model.fit()
print(result.summary())
              Mixed Linear Model Regression Results
=================================================================
Model:               MixedLM    Dependent Variable:    Weight    
No. Observations:    861        Method:                REML      
No. Groups:          72         Scale:                 6.0374    
Min. group size:     11         Likelihood:            -2217.0475
Max. group size:     12         Converged:             Yes       
Mean group size:     12.0                                        
-----------------------------------------------------------------
                       Coef.  Std.Err.   z    P>|z| [0.025 0.975]
-----------------------------------------------------------------
Intercept              15.739    0.550 28.609 0.000 14.661 16.817
Time                    6.939    0.080 86.927 0.000  6.783  7.095
Intercept RE           19.493    1.572                           
Intercept RE x Time RE  0.294    0.154                           
Time RE                 0.416    0.033                           
=================================================================

Here is the same model fit using LMER in R:

Linear mixed model fit by REML 
Formula: Weight ~ Time + (1 + Time | Pig) 
   Data: data 
  AIC  BIC logLik deviance REMLdev
 4446 4475  -2217     4432    4434
Random effects:
 Groups   Name        Variance Std.Dev. Corr  
 Pig      (Intercept) 19.49346 4.41514        
          Time         0.41606 0.64503  0.103 
 Residual              6.03745 2.45712        
Number of obs: 861, groups: Pig, 72
Fixed effects:
            Estimate Std. Error t value
(Intercept) 15.73865    0.55013   28.61
Time         6.93901    0.07982   86.93
Correlation of Fixed Effects:
     (Intr)
Time 0.005 

The random intercept and random slope are only weakly correlated (0.294 / sqrt(19.493 * 0.416) ~ 0.1). So next we fit a model in which the two random effects are constrained to be uncorrelated:
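
As a quick check, this correlation can be computed directly from the estimated random effects covariance matrix of the previous fit (a short sketch):

In [ ]:
cov = result.cov_re
print(cov.iloc[0, 1] / np.sqrt(cov.iloc[0, 0] * cov.iloc[1, 1]))  # roughly 0.1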

In [14]:
from statsmodels.regression.mixed_linear_model import MixedLMParams

model = sm.MixedLM.from_formula("Weight ~ Time", data, re_formula="Time", groups=data["Pig"])

# "free" is a mask: nonzero entries mark parameters that are estimated freely,
# while the zero off-diagonal entries of cov_re hold the random effects
# covariance fixed (at zero in this fit).
free = MixedLMParams(2, 2)
free.set_fe_params(np.ones(2))
free.set_cov_re(np.eye(2))

result = model.fit(free=free)
print(result.summary())
              Mixed Linear Model Regression Results
=================================================================
Model:               MixedLM    Dependent Variable:    Weight    
No. Observations:    861        Method:                REML      
No. Groups:          72         Scale:                 6.0281    
Min. group size:     11         Likelihood:            -2217.3481
Max. group size:     12         Converged:             Yes       
Mean group size:     12.0                                        
-----------------------------------------------------------------
                       Coef.  Std.Err.   z    P>|z| [0.025 0.975]
-----------------------------------------------------------------
Intercept              15.740    0.554 28.385 0.000 14.653 16.827
Time                    6.939    0.080 86.248 0.000  6.781  7.097
Intercept RE           19.845    1.584                           
Intercept RE x Time RE  0.000    0.000                           
Time RE                 0.423    0.033                           
=================================================================

The log-likelihood drops by about 0.3 when we fix the correlation parameter to 0. Comparing 2 x 0.3 = 0.6 to a chi^2 reference distribution with 1 degree of freedom suggests that the data are very consistent with a model in which this parameter is equal to 0.
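
To make this comparison concrete, the chi^2 tail probability can be computed directly; this sketch assumes scipy is available:

In [ ]:
from scipy.stats import chi2

lr_stat = 2 * (2217.3481 - 2217.0475)  # twice the drop in the REML log-likelihood
print(chi2.sf(lr_stat, df=1))          # p is roughly 0.44, consistent with zero correlation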

Here is the same model fit using LMER in R (note that here R reports the REML criterion instead of the log-likelihood; the REML criterion is minus twice the log-likelihood, as the quick check after the R output confirms):

Linear mixed model fit by REML ['lmerMod']
Formula: Weight ~ Time + (1 | Pig) + (0 + Time | Pig)
   Data: data
REML criterion at convergence: 4434.7
Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-6.4281 -0.5527 -0.0405  0.4840  3.5661 
Random effects:
 Groups   Name        Variance Std.Dev.
 Pig      (Intercept) 19.8409  4.4543  
 Pig.1    Time         0.4234  0.6507  
 Residual              6.0282  2.4552  
Number of obs: 861, groups: Pig, 72
Fixed effects:
            Estimate Std. Error t value
(Intercept) 15.73875    0.55445   28.39
Time         6.93899    0.08045   86.25
Correlation of Fixed Effects:
     (Intr)
Time -0.086
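
As a quick numerical check of this relationship, minus twice the log-likelihood reported by Statsmodels reproduces the REML criterion reported by R (a sketch, using the result object from the constrained fit above):

In [ ]:
print(-2 * result.llf)  # approximately 4434.7, matching R's REML criterion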

Sitka growth data

This is one of the example data sets provided in the LMER R library. The outcome variable is the size of the tree, and the covariate used here is a time value. The data are grouped by tree.

In [15]:
data = pd.read_csv("Sitka.csv")
endog = data["size"]
exog = np.ones((data.shape[0], 2), dtype=np.float64)
exog[:,1] = data["Time"]
exog = pd.DataFrame(data=exog)
exog.columns = ["Intercept", "Time"]

Here is the Statsmodels MixedLM fit for a basic model with a random intercept. We are passing the endog and exog data directly to the MixedLM constructor as arrays. Also note that exog_re is specified explicitly in argument 4 as a random intercept (although this would also be the default if it were not specified).
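
For comparison, the same random intercept model could also be specified through the formula interface used in the pig example; this is only a sketch, and the array-based call below is what the example actually runs:

In [ ]:
model_f = sm.MixedLM.from_formula("size ~ Time", data, groups=data["tree"])
result_f = model_f.fit()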

In [17]:
# arguments: endog, exog, groups, exog_re (a column of ones, i.e. a random intercept)
model = sm.MixedLM(endog, exog, data["tree"], exog.iloc[:,0])
result = model.fit()
print(result.summary())
         Mixed Linear Model Regression Results
======================================================
Model:            MixedLM Dependent Variable: size    
No. Observations: 395     Method:             REML    
No. Groups:       79      Scale:              0.0392  
Min. group size:  5       Likelihood:         -82.3884
Max. group size:  5       Converged:          Yes     
Mean group size:  5.0                                 
------------------------------------------------------
             Coef. Std.Err.   z    P>|z| [0.025 0.975]
------------------------------------------------------
Intercept    2.273    0.088 25.863 0.000  2.101  2.446
Time         0.013    0.000 47.796 0.000  0.012  0.013
Intercept RE 0.375    0.348                           
======================================================

Here is the same model fit in R using LMER:

Linear mixed model fit by REML 
Formula: size ~ Time + (1 | tree) 
   Data: Sitka 
   AIC   BIC logLik deviance REMLdev
 172.8 188.7 -82.39    146.6   164.8
Random effects:
 Groups   Name        Variance Std.Dev.
 tree     (Intercept) 0.374512 0.61197 
 Residual             0.039206 0.19800 
Number of obs: 395, groups: tree, 79
Fixed effects:
             Estimate Std. Error t value
(Intercept) 2.2732443  0.0878948   25.86
Time        0.0126855  0.0002654   47.80
Correlation of Fixed Effects:
     (Intr)
Time -0.611

We can now try to add a random slope. We start with R this time. From the R output below we see that the REML estimate of the variance of the random slope is nearly zero.

Linear mixed model fit by REML 
Formula: size ~ Time + (1 + Time | tree) 
   Data: Sitka 
   AIC   BIC logLik deviance REMLdev
 176.6 200.4 -82.28    146.4   164.6
Random effects:
 Groups   Name        Variance   Std.Dev.   Corr  
 tree     (Intercept) 3.4418e-01 0.58666576       
          Time        1.5640e-08 0.00012506 1.000 
 Residual             3.9179e-02 0.19793693       
Number of obs: 395, groups: tree, 79
Fixed effects:
             Estimate Std. Error t value
(Intercept) 2.2732443  0.0856713   26.53
Time        0.0126855  0.0002657   47.75
Correlation of Fixed Effects:
     (Intr)
Time -0.585

If we fit this model in Statsmodels MixedLM with the default settings, we see that the variance estimate is indeed very small, which leads to a warning that the solution is on the boundary of the parameter space. The regression slopes agree very well with R, but the likelihood value is much higher than that returned by R.

In [18]:
# use both columns of exog (intercept and Time) as the random effects design
exog_re = np.asarray(exog.copy())
model = sm.MixedLM(endog, exog, data["tree"], exog_re)
result = model.fit()
print(result.summary())
          Mixed Linear Model Regression Results
========================================================
Model:              MixedLM Dependent Variable: size    
No. Observations:   395     Method:             REML    
No. Groups:         79      Scale:              0.0264  
Min. group size:    5       Likelihood:         -62.4834
Max. group size:    5       Converged:          Yes     
Mean group size:    5.0                                 
--------------------------------------------------------
              Coef.  Std.Err.   z    P>|z| [0.025 0.975]
--------------------------------------------------------
Intercept      2.273    0.101 22.513 0.000  2.075  2.471
Time           0.013    0.000 33.888 0.000  0.012  0.013
Z1 RE          0.646    0.923                           
Z1 RE x Z2 RE -0.001    0.003                           
Z2 RE          0.000    0.000                           
========================================================

We can further explore the random effects structure by constructing plots of the profile likelihoods. We start with the random intercept, generating a plot of the profile likelihood from 0.1 units below to 0.1 units above the estimated value. Since each optimization inside the profile likelihood generates a warning (due to the random slope variance being close to zero), we turn off the warnings here.

In [ ]:
import warnings
warnings.filterwarnings("ignore")

re = result.cov_re.iloc[0, 0]
likev = result.profile_re(0, dist_low=0.1, dist_high=0.1)

Here is a plot of the profile likelihood function for the variance of the random intercept. We multiply the log-likelihood difference by 2 so that it can be compared to a chi^2 reference distribution with 1 degree of freedom.

In [21]:
plt.figure(figsize=(10,8))
plt.plot(likev[:,0], 2*likev[:,1])
plt.xlabel("Variance of random intercept", size=17)
plt.ylabel("-2 times profile log likelihood", size=17)
Out[21]:
<matplotlib.text.Text at 0x7fcbdb24cc10>

Next we examine the profile likelihood of the random slope variance. The profile likelihood plot shows that the MLE of the random slope variance parameter is a very small positive number, and that there is low uncertainty in this estimate.

In [ ]:
re = result.cov_re.iloc[1, 1]
likev = result.profile_re(1, dist_low=0.8*re, dist_high=0.8*re)

plt.figure(figsize=(10, 8))
plt.plot(likev[:,0], 2*likev[:,1])
plt.xlabel("Variance of random slope", size=17)
plt.ylabel("-2 times profile log likelihood", size=17)
Out[ ]:
<matplotlib.text.Text at 0x7fcbd3036650>