Model Selection

Grid search with built-in cross-validation

In [ ]:
from sklearn.model_selection import GridSearchCV

Define parameter grid:

In [ ]:
import numpy as np
param_grid = {'C': 10. ** np.arange(-3, 3), 'gamma': 10. ** np.arange(-3, 3)}
print(param_grid)
In [ ]:
from sklearn.svm import SVC
grid_search = GridSearchCV(SVC(), param_grid, verbose=3)

A GridSearchCV object behaves just like a normal classifier.

In [ ]:
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
In [ ]:
grid_search.fit(X_train, y_train)
In [ ]:
# We extract just the mean validation scores
%matplotlib inline
import matplotlib.pyplot as plt

scores = grid_search.cv_results_['mean_test_score'].reshape(6, 6)

plt.matshow(scores)
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
plt.xticks(np.arange(6), param_grid['gamma'])
plt.yticks(np.arange(6), param_grid['C'])
In [ ]:
grid_search.best_params_
In [ ]:
grid_search.predict(X_test)
In [ ]:
grid_search.score(X_test, y_test)

Preprocessing and Pipelines

In [ ]:
from sklearn.preprocessing import StandardScaler

Same interface as always.

In [ ]:
scaler = StandardScaler()
In [ ]:
scaler.fit(X_train)
In [ ]:
scaler.transform(X_train).mean(axis=0)
In [ ]:
scaler.transform(X_train).std(axis=0)

For cross-validation, the mean and standard deviation must be estimated on the training portion of each fold separately; otherwise information from the validation fold leaks into the preprocessing. To do that, we build a pipeline.

In [ ]:
from sklearn.pipeline import Pipeline
In [ ]:
pipeline = Pipeline([("scaler", scaler), ("svm", SVC())])
In [ ]:
pipeline.fit(X_train, y_train)
In [ ]:
pipeline.predict(X_train)

Cross-validation with a pipeline

In [ ]:
from sklearn.model_selection import cross_val_score
cross_val_score(pipeline, X_train, y_train)

So, yeah, don't forget the preprocessing.
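To make the point concrete, here is a minimal comparison (a sketch, using the same digits data as above): cross-validating the bare SVM on raw pixel values versus the scaler-plus-SVM pipeline. Inside the pipeline, the scaler is refit on the training portion of each fold, so the validation fold never leaks into the preprocessing.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Bare SVM on the raw pixel values, no scaling at all
raw_scores = cross_val_score(SVC(), X, y)

# Scaler + SVM: the scaler is refit on each training fold only
pipe = Pipeline([("scaler", StandardScaler()), ("svm", SVC())])
pipe_scores = cross_val_score(pipe, X, y)

print("raw:     ", raw_scores.mean())
print("pipeline:", pipe_scores.mean())
```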

In [ ]:
# The step-name__parameter syntax routes each setting to the named pipeline step
param_grid_pipeline = {'svm__C': 10. ** np.arange(-3, 3), 'svm__gamma': 10. ** np.arange(-3, 3)}

grid_pipeline = GridSearchCV(pipeline, param_grid=param_grid_pipeline, verbose=3)
In [ ]:
grid_pipeline.fit(X_train, y_train)
In [ ]:
# We extract just the mean validation scores
scores = grid_pipeline.cv_results_['mean_test_score'].reshape(6, 6)

plt.matshow(scores)
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
plt.xticks(np.arange(6), param_grid_pipeline['svm__gamma'])
plt.yticks(np.arange(6), param_grid_pipeline['svm__C'])
In [ ]:
grid_pipeline.score(X_test, y_test)

Randomized Searching

In [ ]:
from sklearn.model_selection import RandomizedSearchCV
In [ ]:
from scipy.stats import expon
In [ ]:
plt.hist(expon.rvs(size=1000))
In [ ]:
params = {'C': expon(), 'gamma': expon()}
rs = RandomizedSearchCV(SVC(), param_distributions=params, n_iter=50, verbose=3)
In [ ]:
rs.fit(X_train, y_train)
In [ ]:
rs.best_params_
In [ ]:
rs.best_score_
In [ ]:
scores = rs.cv_results_['mean_test_score']
Cs = rs.cv_results_['param_C'].astype(float)
gammas = rs.cv_results_['param_gamma'].astype(float)
In [ ]:
plt.scatter(Cs, gammas, s=40, c=scores)
plt.xlabel("C")
plt.ylabel("gamma")

Tasks

  1. Do a grid search over a pipeline consisting of SelectKBest feature selection and an RBF SVM on the iris dataset.
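A possible starting point for task 1 (a sketch; the step names and parameter ranges are illustrative choices, not prescribed):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# SelectKBest defaults to the f_classif score function, which suits classification
pipe = Pipeline([("select", SelectKBest()), ("svm", SVC(kernel="rbf"))])

# step-name__parameter syntax, as in the pipeline grid search above;
# iris has 4 features, so k ranges over 1..4
param_grid = {"select__k": [1, 2, 3, 4],
              "svm__C": [0.1, 1, 10, 100],
              "svm__gamma": [0.01, 0.1, 1]}

grid = GridSearchCV(pipe, param_grid=param_grid)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```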