Supervised Learning: Regression of Housing Data

By the end of this section you will

  • Know how to instantiate a scikit-learn regression model
  • Know how to train a regressor by calling the fit(...) method
  • Know how to predict new labels by calling the predict(...) method

Here we'll do a short example of a regression problem: learning a continuous value from a set of features.

We'll use the simple Boston house prices dataset, available in scikit-learn. It records 13 attributes of housing markets around Boston, along with the median home price in each. The question is: can you predict the price of a new market given its attributes?

First we'll load the dataset:

In [ ]:
from sklearn.datasets import load_boston
# note: load_boston was removed in scikit-learn 1.2, so this cell
# requires an older scikit-learn version
data = load_boston()
print(data.keys())

We can see that there are just over 500 data points:

In [ ]:
print(data.data.shape)
print(data.target.shape)

The DESCR attribute holds a long description of the dataset:

In [ ]:
print(data.DESCR)

It often helps to quickly visualize pieces of the data using histograms, scatter plots, or other plot types. Here we'll enable inline matplotlib plotting and show a histogram of the target values: the median price in each neighborhood.

In [ ]:
%matplotlib inline
import matplotlib.pyplot as plt
In [ ]:
plt.hist(data.target)
plt.xlabel('price ($1000s)')
plt.ylabel('count')

Quick Exercise: Try some scatter plots of the features versus the target.

Are there any features that seem to have a strong correlation with the target value? Any that don't?

Remember, you can get at the data columns using:

column_i = data.data[:, i]
In [ ]:
 

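One possible way to do this (a minimal sketch, assuming the cells above have been run; the 4×4 grid layout is just one choice) is to loop over the 13 feature columns and scatter each against the target:

In [ ]:
# Sketch of one solution: scatter each feature against the median price,
# labeling each panel with the feature's name from the dataset.
fig, axes = plt.subplots(4, 4, figsize=(12, 10))
for i, ax in enumerate(axes.ravel()):
    if i >= data.data.shape[1]:
        ax.axis('off')  # hide the unused panels
        continue
    ax.scatter(data.data[:, i], data.target, s=4)
    ax.set_xlabel(data.feature_names[i])
    ax.set_ylabel('price ($1000s)')
fig.tight_layout()

You may find that features such as RM (average number of rooms) and LSTAT show the clearest trends against price.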
This is a manual version of a technique called feature selection.

In machine learning, it is often useful to apply feature selection to decide which features are most informative for a particular problem. Automated methods exist that quantify this sort of exercise. We won't cover feature selection in this tutorial, but you can read about it elsewhere.
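
As a taste of what such automated methods look like (a sketch only; this uses scikit-learn's SelectKBest with the univariate f_regression score, and the choice of k=5 is arbitrary):

In [ ]:
from sklearn.feature_selection import SelectKBest, f_regression

# Score each feature's univariate relationship with the target
# and report the five highest-scoring features by name.
selector = SelectKBest(f_regression, k=5)
selector.fit(data.data, data.target)
print(data.feature_names[selector.get_support()])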

Predicting Home Prices: a Simple Linear Regression

Now we'll use scikit-learn to perform a simple linear regression on the housing data. There are many regressors to choose from; a particularly simple one is LinearRegression, which is essentially a wrapper around an ordinary least squares calculation.

We'll set it up like this:

In [ ]:
from sklearn.linear_model import LinearRegression

clf = LinearRegression()

clf.fit(data.data, data.target)
In [ ]:
predicted = clf.predict(data.data)
In [ ]:
plt.scatter(data.target, predicted)
plt.plot([0, 50], [0, 50], '--k')
plt.axis('tight')
plt.xlabel('True price ($1000s)')
plt.ylabel('Predicted price ($1000s)')

The prediction at least correlates with the true price, though there are clearly some biases. We could imagine evaluating the performance of the regressor by, say, computing the RMS residuals between the true and predicted price. There are some subtleties in this, however, which we'll cover in a later section.
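
For example, a quick training-set RMS error, subject to the caveat above, can be computed directly with NumPy from the variables defined in the cells above:

In [ ]:
import numpy as np

# RMS of the residuals between true and predicted prices, in $1000s
rms = np.sqrt(np.mean((data.target - predicted) ** 2))
print(rms)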

There are many examples of regression-type problems in machine learning:

  • Sales: given data about a consumer, predict how much they will spend
  • Advertising: given information about a user, predict the click-through rate for a web ad
  • Collaborative Filtering: given a collection of user ratings for movies, predict preferences for other movies & users
  • Astronomy: given observations of galaxies, predict their mass or redshift

And much, much more.

Exercise: Decision Tree Regression

There are many other types of regressors available in scikit-learn: we'll try one more here.

Use the DecisionTreeRegressor class to fit the housing data.

You can copy and paste some of the above code, replacing LinearRegression with DecisionTreeRegressor.

In [ ]:
from sklearn.tree import DecisionTreeRegressor
# Instantiate the model, fit it to the data, and scatter-plot true vs. predicted values
In [ ]:
 

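If you get stuck, here is one way to fill in the cell (a sketch following the same pattern as the linear regression cells above):

In [ ]:
from sklearn.tree import DecisionTreeRegressor

# Same pattern as before: instantiate, fit, predict on the training
# data, and compare true prices against predictions.
tree = DecisionTreeRegressor()
tree.fit(data.data, data.target)
predicted = tree.predict(data.data)

plt.scatter(data.target, predicted)
plt.plot([0, 50], [0, 50], '--k')
plt.xlabel('True price ($1000s)')
plt.ylabel('Predicted price ($1000s)')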
Do you see anything surprising in the results?

The decision tree regressor is an example of an instance-based algorithm. Rather than trying to determine a global model that best fits the data, an instance-based algorithm in some way matches unknown data to the known catalog of training points.

How does this fact explain the results you saw here?
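
One way to see this concretely (a sketch; the 25% holdout and the random_state are arbitrary choices) is to hold out part of the data and compare the error on the training points with the error on unseen points:

In [ ]:
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
import numpy as np

X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0)

tree = DecisionTreeRegressor()
tree.fit(X_train, y_train)

# A fully-grown tree can memorize the training set (near-zero error)
# while doing noticeably worse on points it has never seen.
for name, X, y in [('train', X_train, y_train), ('test', X_test, y_test)]:
    rms = np.sqrt(np.mean((y - tree.predict(X)) ** 2))
    print(name, rms)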

We'll return to the subject of decision trees at a later point in the tutorial.