As a huge t-wolves fan, I've been curious all year about what we can infer from Karl-Anthony Towns' great rookie season. To answer this question, I've created a simple linear regression model that uses rookie-year performance to predict career performance.
Many have attempted to predict NBA players' success via regression-style approaches. The most notable model I know of is Layne Vashro's, which uses combine and college performance to predict career performance. Vashro's model is a quasi-poisson GLM. I tried a similar approach, but had the most success using WS/48 and OLS. I will discuss this a little more at the end of the post.
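For reference, here's roughly how a quasi-Poisson GLM like Vashro's can be specified in statsmodels: fit a Poisson family and estimate the dispersion from the Pearson chi-squared statistic. This is just a sketch with made-up data (X_toy and y_toy are placeholders, not Vashro's actual features):

import numpy as np
import statsmodels.api as sm

X_toy = sm.add_constant(np.random.rand(100,3)) #stand-in for combine/college features
y_toy = np.random.poisson(lam=3, size=100) #stand-in for a count-like career outcome

#scale='X2' estimates dispersion from the pearson chi-squared statistic (quasi-poisson)
quasi_poisson = sm.GLM(y_toy, X_toy, family=sm.families.Poisson()).fit(scale='X2')
print(quasi_poisson.summary())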
#import some libraries and tell ipython we want inline figures rather than interactive figures.
from __future__ import print_function #__future__ imports must come first
import matplotlib.pyplot as plt, pandas as pd, numpy as np, matplotlib as mpl
%matplotlib inline
pd.options.display.mpl_style = 'default' #use matplotlib styling for pandas plots
plt.style.use('ggplot') #I'm addicted to ggplot. So pretty.
mpl.rcParams['font.family'] = ['Bitstream Vera Sans']
I collected all the data for this project from basketball-reference.com. I posted the functions for collecting the data on my github, along with the data itself. Beware: the data collection scripts take a while to run.
The data includes per-36-minute stats and advanced statistics such as usage percentage. I simply took all the per-36 and advanced statistics from each player's page on basketball-reference.com.
df = pd.read_pickle('nba_bballref_career_stats_2016_Mar_15.pkl') #here's the career data.
rookie_df = pd.read_pickle('nba_bballref_rookie_stats_2016_Mar_15.pkl') #here's the rookie year data
The variable I am trying to predict is average WS/48 over a player's career. There's no perfect box-score statistic for quantifying a player's performance, but WS/48 seems relatively solid.
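For the unfamiliar: WS/48 is win shares per 48 minutes (roughly, wins credited to a player, normalized by playing time), and league average sits around 0.100. Basketball-reference provides the column directly, but given raw win shares and minutes it's a one-liner:

#ws/48 from raw career totals: win shares per 48 minutes of play
ws_per_48 = df['WS'] / df['MP'] * 48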
Games = df['G']>50 #only using players who played in more than 50 games.
Year = df['Year']>1980 #only using players after 1980 when they started keeping many important records such as games started
Y = df[Games & Year]['WS/48'] #predicted variable
plt.hist(Y);
plt.ylabel('Bin Count')
plt.xlabel('WS/48');
The predicted variable looks pretty Gaussian, so I can use ordinary least squares. This is nice because, while OLS is not flexible, it's highly interpretable. At the end of the post I'll mention some more complex models that I plan to try.
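If you want something a little more formal than eyeballing the histogram, a quick Q-Q plot against a normal distribution does the trick (a sketch using scipy; the conclusions don't hinge on it):

from scipy import stats

#q-q plot of career ws/48 against a normal distribution; points hugging the line = roughly gaussian
stats.probplot(Y, dist='norm', plot=plt)
plt.title('WS/48 Q-Q plot');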
rook_games = rookie_df['Career Games']>50
rook_year = rookie_df['Year']>1980
#remove rookies from before 1980 and those who played fewer than 50 career games. I also remove some features that seem irrelevant or unfair
rookie_df_games = rookie_df[rook_games & rook_year] #only players with more than 50 games.
rookie_df_drop = rookie_df_games.drop(['Year','Career Games','Name'],1)
Above, I remove some predictors from the rookie data. Let's run the regression!
import statsmodels.api as sm
X_rookie = rookie_df_drop.as_matrix() #take data out of dataframe
X_rookie = sm.add_constant(X_rookie) # Adds a constant term to the predictor
estAll = sm.OLS(Y,X_rookie) #create ordinary least squares model
estAll = estAll.fit() #fit the model
print(estAll.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                  WS/48   R-squared:                       0.476
Model:                            OLS   Adj. R-squared:                  0.461
Method:                 Least Squares   F-statistic:                     31.72
Date:                Sun, 20 Mar 2016   Prob (F-statistic):          2.56e-194
Time:                        19:32:47   Log-Likelihood:                 3303.9
No. Observations:                1690   AIC:                            -6512.
Df Residuals:                    1642   BIC:                            -6251.
Df Model:                          47
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [95.0% Conf. Int.]
------------------------------------------------------------------------------
const          0.2509      0.078      3.223      0.001         0.098     0.404
x1            -0.0031      0.001     -6.114      0.000        -0.004    -0.002
x2            -0.0004   9.06e-05     -4.449      0.000        -0.001    -0.000
x3            -0.0003   8.12e-05     -3.525      0.000        -0.000    -0.000
x4          1.522e-05   4.73e-06      3.218      0.001      5.94e-06  2.45e-05
x5             0.0030      0.031      0.096      0.923        -0.057     0.063
x6             0.0109      0.019      0.585      0.559        -0.026     0.047
x7            -0.0312      0.094     -0.331      0.741        -0.216     0.154
x8             0.0161      0.027      0.594      0.553        -0.037     0.069
x9            -0.0054      0.018     -0.292      0.770        -0.041     0.031
x10            0.0012      0.007      0.169      0.866        -0.013     0.015
x11            0.0136      0.023      0.592      0.554        -0.031     0.059
x12           -0.0099      0.018     -0.538      0.591        -0.046     0.026
x13            0.0076      0.054      0.141      0.888        -0.098     0.113
x14            0.0094      0.012      0.783      0.433        -0.014     0.033
x15            0.0029      0.002      1.361      0.174        -0.001     0.007
x16            0.0078      0.009      0.861      0.390        -0.010     0.026
x17           -0.0107      0.019     -0.573      0.567        -0.047     0.026
x18           -0.0062      0.018     -0.342      0.732        -0.042     0.029
x19            0.0095      0.017      0.552      0.581        -0.024     0.043
x20            0.0111      0.004      2.853      0.004         0.003     0.019
x21            0.0109      0.018      0.617      0.537        -0.024     0.046
x22           -0.0139      0.006     -2.165      0.030        -0.026    -0.001
x23            0.0024      0.005      0.475      0.635        -0.008     0.012
x24            0.0022      0.001      1.644      0.100        -0.000     0.005
x25           -0.0125      0.012     -1.027      0.305        -0.036     0.011
x26           -0.0006      0.000     -1.782      0.075        -0.001  5.74e-05
x27           -0.0011      0.001     -1.749      0.080        -0.002     0.000
x28            0.0012      0.003      0.487      0.626        -0.004     0.006
x29            0.1824      0.089      2.059      0.040         0.009     0.356
x30           -0.0288      0.025     -1.153      0.249        -0.078     0.020
x31           -0.0128      0.011     -1.206      0.228        -0.034     0.008
x32           -0.0046      0.008     -0.603      0.547        -0.020     0.010
x33           -0.0071      0.005     -1.460      0.145        -0.017     0.002
x34            0.0131      0.012      1.124      0.261        -0.010     0.036
x35           -0.0023      0.001     -2.580      0.010        -0.004    -0.001
x36           -0.0077      0.013     -0.605      0.545        -0.033     0.017
x37            0.0069      0.004      1.916      0.055        -0.000     0.014
x38           -0.0015      0.001     -2.568      0.010        -0.003    -0.000
x39           -0.0002      0.002     -0.110      0.912        -0.005     0.004
x40           -0.0109      0.017     -0.632      0.528        -0.045     0.023
x41           -0.0142      0.017     -0.821      0.412        -0.048     0.020
x42            0.0217      0.017      1.257      0.209        -0.012     0.056
x43            0.0123      0.102      0.121      0.904        -0.188     0.213
x44            0.0441      0.018      2.503      0.012         0.010     0.079
x45            0.0406      0.018      2.308      0.021         0.006     0.075
x46           -0.0410      0.018     -2.338      0.020        -0.075    -0.007
x47            0.0035      0.003      1.304      0.192        -0.002     0.009
==============================================================================
Omnibus:                       42.820   Durbin-Watson:                   1.966
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               54.973
Skew:                           0.300   Prob(JB):                     1.16e-12
Kurtosis:                       3.649   Cond. No.                     1.88e+05
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.88e+05. This might indicate that there are strong multicollinearity or other numerical problems.
There's a lot to look at in the regression output (especially with this many features). For an explanation of all the different parts of the regression take a look at this post. Below is a quick plot of predicted ws/48 against actual ws/48.
plt.plot(estAll.predict(X_rookie),Y,'o')
plt.plot(np.arange(0,0.25,0.01),np.arange(0,0.25,0.01),'b-')
plt.ylabel('Career WS/48')
plt.xlabel('Predicted WS/48');
The blue line above is NOT the best-fit line. It's the identity line. I plot it to help visualize where the model fails. The model seems to primarily fail in the extremes - it tends to overestimate the worst players.
All in all, this model does a remarkably good job given its simplicity (linear regression), but it also leaves a lot of variance unexplained.
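If you don't want to wade through the full summary table, the headline numbers live right on the fitted results object:

#headline fit statistics, pulled straight off the statsmodels results object
print('R-squared: %.3f' % estAll.rsquared)
print('Adj. R-squared: %.3f' % estAll.rsquared_adj)
print('Predictors with p < 0.05: %d' % np.sum(estAll.pvalues < 0.05))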
One reason this model might miss some variance is that there's more than one way to be a productive basketball player. For instance, Dwight Howard and Steph Curry contribute in very different ways. A single linear regression model is unlikely to successfully predict both players.
In a previous post, I grouped players according to their on-court performance. These player groupings might help predict career performance.
Below, I will use the same player grouping I developed in my previous post, and examine how these groupings impact my ability to predict career performance.
from sklearn.preprocessing import StandardScaler
df = pd.read_pickle('nba_bballref_career_stats_2016_Mar_15.pkl')
df = df[df['G']>50]
df_drop = df.drop(['Year','Name','G','GS','MP','FG','FGA','FG%','3P','2P','FT','TRB','PTS','ORtg','DRtg','PER','TS%','3PAr','FTr','ORB%','DRB%','TRB%','AST%','STL%','BLK%','TOV%','USG%','OWS','DWS','WS','WS/48','OBPM','DBPM','BPM','VORP'],1)
X = df_drop.as_matrix() #take data out of dataframe
ScaleModel = StandardScaler().fit(X)
X = ScaleModel.transform(X)
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
reduced_model = PCA(n_components=5, whiten=True).fit(X)
reduced_data = reduced_model.transform(X) #transform data into the 5 PCA components space
final_fit = KMeans(n_clusters=6).fit(reduced_data) #fit 6 clusters
df['kmeans_label'] = final_fit.labels_ #label each data point with its clusters
See my other post for more details about this clustering procedure.
Let's see how WS/48 varies across the groups.
WS_48 = [df[df['kmeans_label']==x]['WS/48'] for x in np.unique(df['kmeans_label'])] #create a vector of ws/48. One for each cluster
plt.boxplot(WS_48);
Some groups perform better than others, but there's lots of overlap between the groups. Importantly, each group has a fair amount of variability: each spans at least 0.15 WS/48. This gives the regression enough room to predict performance within each group.
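To put numbers on that spread, the within-group range of WS/48 can be pulled straight from the boxplot data; a quick sketch:

#within-group spread of career ws/48
for i, group_ws in enumerate(WS_48):
    print('Group %d spans %.3f WS/48' % (i+1, group_ws.max()-group_ws.min()))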
Now, let's get a quick refresher on what the groups are. Again, my previous post has a good description of them.
TS = [np.mean(df[df['kmeans_label']==x]['TS%'])*100 for x in np.unique(df['kmeans_label'])] #create vectors of each stat for each cluster
ThreeAr = [np.mean(df[df['kmeans_label']==x]['3PAr'])*100 for x in np.unique(df['kmeans_label'])]
FTr = [np.mean(df[df['kmeans_label']==x]['FTr'])*100 for x in np.unique(df['kmeans_label'])]
RBD = [np.mean(df[df['kmeans_label']==x]['TRB%']) for x in np.unique(df['kmeans_label'])]
AST = [np.mean(df[df['kmeans_label']==x]['AST%']) for x in np.unique(df['kmeans_label'])]
STL = [np.mean(df[df['kmeans_label']==x]['STL%']) for x in np.unique(df['kmeans_label'])]
TOV = [np.mean(df[df['kmeans_label']==x]['TOV%']) for x in np.unique(df['kmeans_label'])]
USG = [np.mean(df[df['kmeans_label']==x]['USG%']) for x in np.unique(df['kmeans_label'])]
Data = np.array([TS,ThreeAr,FTr,RBD,AST,STL,TOV,USG])
ind = np.arange(1,9)
plt.figure(figsize=(16,8))
plt.plot(ind,Data,'o-',linewidth=2)
plt.xticks(ind,('True Shooting', '3 point Attempt', 'Free Throw Rate', 'Rebound', 'Assist','Steal','TOV','Usage'),rotation=45)
plt.legend(('Group 1','Group 2','Group 3','Group 4','Group 5','Group 6'))
plt.ylabel('Percent');
I've plotted the groups across a number of useful categories. For information about these categories see basketball reference's glossary.
Here's a quick rehash of the groupings. See my previous post for more detail.
The group descriptions are likely not in the same order as in my previous post, since the group order changes each time this script is run.
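This run-to-run shuffling comes from KMeans' random initialization (the PCA step is deterministic). If I wanted reproducible labels, fixing the seed would do it; a minimal sketch:

#fixing random_state pins the cluster labels across runs
final_fit_reproducible = KMeans(n_clusters=6, random_state=42).fit(reduced_data)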
On to the regression.
rookie_df = pd.read_pickle('nba_bballref_rookie_stats_2016_Mar_15.pkl')
rookie_df = rookie_df.drop(['Year','Career Games','Name'],1)
X = rookie_df.as_matrix() #take data out of dataframe
ScaleRookie = StandardScaler().fit(X) #scale data
X = ScaleRookie.transform(X) #transform data to scale
reduced_model_rookie = PCA(n_components=10).fit(X) #create pca model of first 10 components.
You might have noticed the giant condition number in the regression output above. This indicates significant multicollinearity among the features, which isn't surprising since many of my features reflect the same abilities.
The multicollinearity doesn't prevent the regression from making accurate predictions, but it does make the beta weight estimates erratic. With erratic beta weights, it's hard to tell whether the different clusters call for different models when predicting career WS/48.
In the following regressions, I run the predictors through a PCA and keep only the first 10 components. Using only the first 10 PCA components keeps the condition number below 20, indicating that multicollinearity is not a problem. I then examine whether the different groups exhibit different patterns of beta weights (i.e., whether different models predict success for the different groups).
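A quick way to check that claim is to compute the condition number of the design matrix before and after the PCA; a sketch (np.linalg.cond does the work):

#condition number of the scaled rookie features vs. the first 10 pca components
print('Raw scaled features: %.1f' % np.linalg.cond(sm.add_constant(X)))
print('First 10 PCA components: %.1f' % np.linalg.cond(sm.add_constant(reduced_model_rookie.transform(X))))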
cluster_labels = df[df['Year']>1980]['kmeans_label'] #limit labels to players after 1980
rookie_df_drop['kmeans_label'] = cluster_labels #label each data point with its clusters
estHold = [[],[],[],[],[],[]]
for i,group in enumerate(np.unique(final_fit.labels_)):
    Grouper = df['kmeans_label']==group #do regression one group at a time
    Yearer = df['Year']>1980
    Group1 = df[Grouper & Yearer]
    Y = Group1['WS/48'] #the predicted variable
    Group1_rookie = rookie_df_drop[rookie_df_drop['kmeans_label']==group] #get predictor data of group
    Group1_rookie = Group1_rookie.drop(['kmeans_label'],1)
    X = Group1_rookie.as_matrix() #take data out of dataframe
    X = ScaleRookie.transform(X) #scale data
    X = reduced_model_rookie.transform(X) #transform data into the 10 PCA components space
    X = sm.add_constant(X) # Adds a constant term to the predictor
    est = sm.OLS(Y,X) #create regression model
    est = est.fit()
    #print(est.summary())
    estHold[i] = est
plt.figure(figsize=(12,6)) #plot the beta weights
width=0.12
for i,est in enumerate(estHold):
    plt.bar(np.arange(11)+width*i,est.params,color=plt.rcParams['axes.color_cycle'][i],width=width,yerr=(est.conf_int()[1]-est.conf_int()[0])/2)
plt.xlim(right=11)
plt.xlabel('Principal Components')
plt.legend(('Group 1','Group 2','Group 3','Group 4','Group 5','Group 6'))
plt.ylabel('Beta Weights');
Above, I plot the beta weights for each principal component across the groupings. It's a lot to take in, but I wanted to depict how the beta values change across the groups. They are not drastically different, but they're not identical either. Error bars depict 95% confidence intervals.
Below, I fit a regression to each group again, but this time with all the features. Multicollinearity will again be a problem, but it does not decrease the regressions' predictive accuracy, which is all I really care about here.
X = rookie_df.as_matrix() #take data out of dataframe
cluster_labels = df[df['Year']>1980]['kmeans_label']
rookie_df_drop['kmeans_label'] = cluster_labels #label each data point with its clusters
plt.figure(figsize=(8,6));
estHold = [[],[],[],[],[],[]]
for i,group in enumerate(np.unique(final_fit.labels_)):
    Grouper = df['kmeans_label']==group #do one regression at a time
    Yearer = df['Year']>1980
    Group1 = df[Grouper & Yearer]
    Y = Group1['WS/48'] #the predicted variable
    Group1_rookie = rookie_df_drop[rookie_df_drop['kmeans_label']==group] #the predictor data for this group
    Group1_rookie = Group1_rookie.drop(['kmeans_label'],1)
    X = Group1_rookie.as_matrix() #take data out of dataframe
    X = sm.add_constant(X) # Adds a constant term to the predictor
    est = sm.OLS(Y,X) #fit with linear regression model
    est = est.fit()
    estHold[i] = est
    #print(est.summary())
    plt.subplot(3,2,i+1) #plot each regression's prediction against actual data
    plt.plot(est.predict(X),Y,'o',color=plt.rcParams['axes.color_cycle'][i])
    plt.plot(np.arange(-0.1,0.25,0.01),np.arange(-0.1,0.25,0.01),'-')
    plt.title('Group %d'%(i+1))
    plt.text(0.15,-0.05,'$r^2$=%.2f'%est.rsquared)
    plt.xticks([0.0,0.12,0.25])
    plt.yticks([0.0,0.12,0.25]);
The plots above depict each regression's predictions against actual ws/48. I provide each model's r^2 in the plot too.
Some regressions are better than others. For instance, the regression model does a pretty awesome job predicting the bench warmers...I wonder if this is because they have shorter careers... The regression model does not do a good job predicting the 3-point shooters.
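For a side-by-side comparison, each group's r^2 can be pulled from the stored results:

#r-squared of each group's regression
for i, est in enumerate(estHold):
    print('Group %d r^2: %.2f' % (i+1, est.rsquared))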
Now onto the fun stuff though: predicting the careers of current players. First, let's take a quick look at the residuals of the group regressions, since a pattern there would signal systematic prediction errors.
#plot the residuals. there's obviously a problem with under/over prediction
plt.figure(figsize=(8,6));
for i,group in enumerate(np.unique(final_fit.labels_)):
    Grouper = df['kmeans_label']==group #one group at a time
    Yearer = df['Year']>1980
    Group1 = df[Grouper & Yearer]
    Y = Group1['WS/48'] #actual career ws/48
    resid = estHold[i].resid #residuals from this group's regression
    plt.subplot(3,2,i+1) #plot each group's residuals against actual ws/48
    plt.plot(Y,resid,'o',color=plt.rcParams['axes.color_cycle'][i])
    plt.title('Group %d'%(i+1))
    plt.xticks([0.0,0.12,0.25])
    plt.yticks([-0.1,0.0,0.1]);
In the plots above, I look at the residuals as a function of actual career WS/48. There is an obvious correlation in the residuals, which is not great: the models tend to under-predict the best players and over-predict the worst players within each group.
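To put a number on that pattern, here's a quick sketch computing the Pearson correlation between actual WS/48 and each group's residuals (re-using df, estHold, and final_fit from above):

from scipy.stats import pearsonr

#correlation between actual ws/48 and model residuals for each group
for i, group in enumerate(np.unique(final_fit.labels_)):
    Group1 = df[(df['kmeans_label']==group) & (df['Year']>1980)]
    r, p = pearsonr(Group1['WS/48'], estHold[i].resid)
    print('Group %d: r=%.2f, p=%.3f' % (i+1, r, p))

Now for the prediction machinery. Below, I create a function for predicting a player's career WS/48. It finds which cluster a player belongs to, then applies that group's regression model to his rookie-year stats (with 95% confidence intervals).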
def player_prediction__regressionModel(PlayerName):
    from statsmodels.sandbox.regression.predstd import wls_prediction_std

    #find the player's cluster using his career stats
    clust_df = pd.read_pickle('nba_bballref_career_stats_2016_Mar_05.pkl')
    clust_df = clust_df[clust_df['Name']==PlayerName]
    clust_df = clust_df.drop(['Name','G','GS','MP','FG','FGA','FG%','3P','2P','FT','TRB','PTS','ORtg','DRtg','PER','TS%','3PAr','FTr','ORB%','DRB%','TRB%','AST%','STL%','BLK%','TOV%','USG%','OWS','DWS','WS','WS/48','OBPM','DBPM','BPM','VORP'],1)
    new_vect = ScaleModel.transform(clust_df.as_matrix()[0])
    reduced_data = reduced_model.transform(new_vect)
    Group = final_fit.predict(reduced_data)
    clust_df['kmeans_label'] = Group[0]

    #gather the player's rookie-year stats and predict career ws/48 with his group's model
    Predrookie_df = pd.read_pickle('nba_bballref_rookie_stats_2016_Mar_15.pkl')
    Predrookie_df = Predrookie_df[Predrookie_df['Name']==PlayerName]
    Predrookie_df = Predrookie_df.drop(['Year','Career Games','Name'],1)
    predX = Predrookie_df.as_matrix() #take data out of dataframe
    predX = sm.add_constant(predX,has_constant='add') # Adds a constant term to the predictor
    prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(estHold[Group[0]],predX,alpha=0.05) #95% confidence intervals
    return {'Name':PlayerName,'Group':Group[0]+1,'Prediction':estHold[Group[0]].predict(predX),'Upper_CI':iv_u_ols,'Lower_CI':iv_l_ols}
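Usage looks like this (output omitted; the name has to match basketball-reference's spelling):

#predict karl-anthony towns' career ws/48 from his rookie season
towns = player_prediction__regressionModel('Karl-Anthony Towns')
print(towns['Group'], towns['Prediction'], towns['Lower_CI'], towns['Upper_CI'])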
Here I create a function that gathers all the first-round draft picks from a given year.
def gather_draftData(Year):
    import urllib2
    from bs4 import BeautifulSoup
    import pandas as pd
    import numpy as np

    draft_len = 30 #number of first-round picks

    def convert_float(val):
        try:
            return float(val)
        except ValueError: #some players have no games-played total
            return np.nan

    url = 'http://www.basketball-reference.com/draft/NBA_'+str(Year)+'.html'
    html = urllib2.urlopen(url)
    soup = BeautifulSoup(html,"lxml")

    draft_num = [soup.findAll('tbody')[0].findAll('tr')[i].findAll('td')[0].text for i in range(draft_len)]
    draft_nam = [soup.findAll('tbody')[0].findAll('tr')[i].findAll('td')[3].text for i in range(draft_len)]
    games = [convert_float(soup.findAll('tbody')[0].findAll('tr')[i].findAll('td')[6].text) for i in range(draft_len)]

    draft_df = pd.DataFrame([draft_num,draft_nam,games]).T
    draft_df.columns = ['Number','Name','Games']
    draft_df.index = range(np.size(draft_df,0))
    return draft_df
Below I create predictions for each first-round draft pick from 2015. The Spurs' first-round pick, Nikola Milutinov, has yet to play, so I do not create a prediction for him.
import matplotlib.patches as mpatches
draft_df = gather_draftData(2015)
draft_df['Name'][14] = 'Kelly Oubre Jr.' #annoying name inconsistencies
plt.subplots(figsize=(14,6));
plt.xticks(range(1,31),draft_df['Name'],rotation=90)
draft_df = draft_df.drop(17, 0) #Sam Dekker has received little playing time making his prediction highly erratic
for name in draft_df['Name']:
    draft_num = draft_df[draft_df['Name']==name]['Number']
    try:
        int(draft_df[draft_df['Name']==name]['Games']) #skip players who haven't played a game
    except:
        continue
    predict_dict = player_prediction__regressionModel(name)
    yerr = (predict_dict['Upper_CI']-predict_dict['Lower_CI'])/2
    plt.errorbar(draft_num,predict_dict['Prediction'],fmt='o',label=name,
                 color=plt.rcParams['axes.color_cycle'][predict_dict['Group']-1],yerr=yerr);
plt.xlim(left=0,right=31)
patch = [mpatches.Patch(color=plt.rcParams['axes.color_cycle'][i], label='Group %d'%(i+1)) for i in range(6)]
plt.legend(handles=patch,ncol=3)
plt.ylabel('Predicted WS/48')
plt.xlabel('Draft Position');
The plot above is ordered by draft pick. The error bars depict 95% confidence intervals... which are a little wider than I would like. It's interesting to look at which clusters these players fall into. Lots of 3-point shooters! It could be that rookies play a limited role in the offense - just shooting 3s.
As a t-wolves fan, I am relatively happy about the high prediction for Karl-Anthony Towns. His predicted WS/48 falls between Marc Gasol's and Elton Brand's. Again, the CIs are quite wide, so all the model really says is that there's a 95% chance he lands somewhere between LeBron James and a player who averages less than 0.1 WS/48.
Karl-Anthony Towns would have the highest predicted WS/48 if it were not for Kevon Looney, whom the model loves. Looney has not seen much playing time, though, which likely makes his prediction more erratic. Keep in mind that I did not use draft position as a predictor in the model.
Sam Dekker, whom I dropped from the plot, had a particularly huge error bar, likely because of his limited playing time this year.
While I fed a ton of features into this model, it's still just a linear regression. The simplicity of the model might prevent me from making more accurate predictions.
I've already started playing with some more complex models. If those work out well, I will post them here. I ended up sticking with plain linear regression because my vast number of features is a little unwieldy in more complex models. If you're interested (and the models produce better results), check back in the future.
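As a taste of what "more complex" might look like, here's a sketch of a cross-validated random forest on the same rookie features (sklearn; illustrative only, not a tuned model, and it assumes the same row alignment as the pooled OLS above):

from sklearn.ensemble import RandomForestRegressor
from sklearn.cross_validation import cross_val_score

#a nonlinear alternative to ols that captures feature interactions automatically
Y_all = df[df['Year']>1980]['WS/48'] #career ws/48 (df is already filtered to >50 games)
X_all = rookie_df_drop.drop(['kmeans_label'],1).as_matrix() #rookie-year features
forest = RandomForestRegressor(n_estimators=500, random_state=42)
print(cross_val_score(forest, X_all, Y_all, cv=5, scoring='r2')) #out-of-sample r^2 per fold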
For now, these models explain between 40 and 70% of the variance in career WS/48 using only a player's rookie year. Even predicting 40% of the variance is pretty remarkable, so I don't want to trash the weaker group models, and explaining 65% of the variance is pretty awesome. The model gives us a fairly accurate idea of how these "bench players" will perform. For instance, the future does not look bright for players like Emmanuel Mudiay and Tyus Jones. That's not to say these players are doomed: the model assumes a player retains his grouping for his entire career, and Mudiay and Jones might start performing more like distributors as their careers progress. That could result in better careers than predicted.
One nice part about this model is that it tells us where the predictions are less confident. For instance, it's nice to know that we can be relatively confident when predicting bench players but not when predicting 3-point shooters.
For those curious, I print each group's headline regression statistics below. (The full coefficient tables are a lot to wade through, and their standard errors are suspect anyway given the multicollinearity flagged by the large condition numbers.)
for i, est in enumerate(estHold):
    print('Group %d: N=%d, R^2=%.3f, Adj. R^2=%.3f, F=%.3f, Cond. No.=%.2e' % (i+1, int(est.nobs), est.rsquared, est.rsquared_adj, est.fvalue, est.condition_number))

Group 1: N=292, R^2=0.518, Adj. R^2=0.426, F=5.589, Cond. No.=4.91e+05
Group 2: N=174, R^2=0.607, Adj. R^2=0.460, F=4.136, Cond. No.=8.73e+05
Group 3: N=235, R^2=0.564, Adj. R^2=0.454, F=5.137, Cond. No.=4.14e+05
Group 4: N=212, R^2=0.708, Adj. R^2=0.625, F=8.467, Cond. No.=1.88e+05
Group 5: N=486, R^2=0.415, Adj. R^2=0.352, F=6.615, Cond. No.=4.73e+05
Group 6: N=291, R^2=0.432, Adj. R^2=0.323, F=3.938, Cond. No.=3.86e+05