#!/usr/bin/env python
# coding: utf-8

# # Lecture 9: Meta-Learning

# ## Ensembles: $n$ Heads are Better than One
# 
# In a previous assignment, we had you analyze the Titanic dataset on Kaggle. You may have noticed that Kaggle competitions boast thousands of participants, many of whom are professional data scientists. Out of all of these participants - many of them using the very models we learned in class - how does a winner emerge?
# 
# Is it in the way that they clean and process their data? Perhaps, but cleaning data can only go so far toward improving predictive power. Ultimately, success lies in the selection and optimization of models. We've seen some techniques for this in lecture 7 (like PCA), but many Kagglers take things even further. Let's see what the first-place Kaggle user, Gilberto Titericz Jr, used for the Otto Product Classification Challenge [1]:
# 
# ![Kaggle winning image](https://kaggle2.blob.core.windows.net/forum-message-attachments/79598/2514/FINAL_ARCHITECTURE.png)
# 
# Titericz's solution contained several layers, each containing a blend of learning models. We'll now learn why this works.

# ## Review: Bias-Variance Tradeoff
# 
# Why do several models perform better than a single model? Recall our discussion of goodness of fit - a model is often prone to either underfitting or overfitting. Another way to look at this dichotomy is as a tradeoff between __bias__ and __variance__.
# 
# __Bias__ is another word for "systematic error." This is error that occurs because a model is underdeveloped or undertrained - in other words, it is inadequate for the task it is attempting to perform. Imagine, for example, using a linear model to fit a relationship that is clearly quadratic or cubic. Some of the relationship would be captured, and occasionally the model would get things right, but as a whole this linear model would have all-around poor performance. We call this __underfitting__.
# 
# __Variance__ can be thought of as "random error." In contrast with bias, it occurs because a model is overly sophisticated or complex for the task at hand. The model performs very well on the training data, but as soon as the data varies slightly - as soon as we move to a testing set, for example - the model performs poorly. An example of this is fitting a degree-9 polynomial to a relationship that is quadratic or cubic. As we saw last lecture, this is an example of __overfitting__.
# 
# The bias-variance tradeoff states that for a single machine learning model, bias and variance cannot both be reduced at once - decreasing one tends to increase the other [2]:
# 
# ![Bias/variance graph](http://scott.fortmann-roe.com/docs/docs/BiasVariance/biasvariance.png)
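# To make the tradeoff concrete, here is a minimal sketch that fits polynomials of degree 1, 2, and 9 to noisy quadratic data (the synthetic dataset and the degree choices are our own, purely for illustration). The degree-1 fit has high training *and* testing error (bias/underfitting), while the degree-9 fit drives training error down but tends to do worse on the held-out data (variance/overfitting).

# In[ ]:

# Illustration only: underfitting vs. overfitting on synthetic quadratic data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
x = np.sort(rng.uniform(-3, 3, 60)).reshape(-1, 1)
y = x.ravel() ** 2 + rng.normal(scale=1.0, size=60)  # quadratic relationship plus noise

x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.5, random_state=0)

for degree in (1, 2, 9):
    poly_model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    poly_model.fit(x_tr, y_tr)
    print("degree", degree,
          "train MSE:", mean_squared_error(y_tr, poly_model.predict(x_tr)),
          "test MSE:", mean_squared_error(y_te, poly_model.predict(x_te)))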
# We have already seen methods for reducing variance (regularization), but there are other, more recently developed methods that attack both bias and variance. These methods, called __ensemble methods__, sidestep the bias-variance tradeoff by employing a mix of many models: using many models increases predictive power, and blending them together into a single output reduces variance.
# 
# We'll examine a few types of ensembles: boosting, bagging, and stacking.

# ## Boosting
# 
# ### The Idea
# 
# __Boosting__ is a sequential ensemble method: we start from an ineffective machine learning model and gradually "boost" it, increasing its predictive ability at each step. The exact method by which we improve the model is left open, and each boosting implementation performs it differently.
# 
# Generally, the improvement step works as follows: take the parts of the data where the current model does poorly and train a new version of the model on this "failed" data. We then combine the new version with the old one, hoping that this combination performs better than the old version alone [3].
# 
# Below is a diagram of a boosted model. Note how each model is trained on a different set of data based on the strengths and weaknesses of the others [4]:
# 
# ![Boosted model](https://codesachin.files.wordpress.com/2016/03/boosting_new_fit5.png?w=491&h=275)
# 
# When boosting, our starting point (the very first model used) is often called a __weak learner__. The power of this starting point doesn't matter much, since the output of a boosting procedure (if done correctly) should be stronger than any individual model.
# 
# ### Example: AdaBoost
# 
# AdaBoost is a widely used boosting framework which applies the procedure described above to decision trees. Each decision tree created by AdaBoost weights the data points misclassified by the previous trees more heavily. We continue the process for a chosen number of iterations and then return the weighted sum of all of the decision trees [5].
# 
# We'll demonstrate AdaBoost on Kaggle's "Flavours of Physics" competition. In this competition, we are given a list of collision events and their properties, and we must predict whether a τ → 3μ decay happened in each collision. Scientists currently assume this decay does not happen, and the goal of the competition was to detect τ → 3μ occurring more frequently than current theory can explain.
# 
# The challenge here was to design a machine learning problem for something no one has ever observed before. Scientists at CERN developed the following dataset to achieve that goal:
# 
# https://www.kaggle.com/c/flavours-of-physics/data

# In[2]:

# Data cleaning
import pandas as pd

data_test = pd.read_csv("test.csv")
data_train = pd.read_csv("training.csv")
data_train = data_train.drop('min_ANNmuon', axis=1)
data_train = data_train.drop('production', axis=1)
data_train = data_train.drop('mass', axis=1)

# Cleaned data
Y = data_train['signal']
X = data_train.drop('signal', axis=1)

# In[3]:

# AdaBoost
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

seed = 9001  # this one's over 9000!!!
boosted_tree = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), algorithm="SAMME",
                                  n_estimators=50, random_state=seed)
model = boosted_tree.fit(X, Y)

# In[4]:

predictions = model.predict(data_test)
predictions
# Note we can't really validate these predictions, since we don't have an array of "right answers" for the test set.

# ### Example: GradientBoost
# 
# GradientBoost uses another paradigm: the idea of __gradient descent__. We take some cost function that tells us how poorly the ensemble is performing, and we compute the gradient of this cost function. The negative gradient is the direction of steepest decrease - the direction we want to move in, since we want to minimize the cost. We then fit the next tree so that adding it moves the ensemble in that direction [6].

# In[5]:

# Gradient boosting
from sklearn.ensemble import GradientBoostingClassifier

gradient_boosted_tree = GradientBoostingClassifier(n_estimators=50, random_state=seed)
model2 = gradient_boosted_tree.fit(X, Y)

# In[6]:

predictions2 = model2.predict(data_test)
predictions2
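# To see the gradient descent idea concretely, here is a small from-scratch sketch of gradient boosting for regression (the synthetic data, learning rate, and tree depth are arbitrary choices for illustration). With squared-error loss, the negative gradient of the cost with respect to the current predictions is simply the residual, so each round fits a small tree to the residuals and takes a step in that direction.

# In[ ]:

# Hand-rolled gradient boosting for regression with squared-error loss.
# With L(y, F) = (y - F)^2 / 2, the negative gradient with respect to F is the
# residual y - F, so each round fits a shallow tree to the current residuals.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X_toy = rng.uniform(-3, 3, size=(200, 1))
y_toy = np.sin(X_toy).ravel() + rng.normal(scale=0.2, size=200)

learning_rate = 0.1
F = np.full_like(y_toy, y_toy.mean())  # start from a constant prediction
trees = []

for _ in range(100):
    residuals = y_toy - F                       # negative gradient of squared error
    stump = DecisionTreeRegressor(max_depth=2)
    stump.fit(X_toy, residuals)
    F += learning_rate * stump.predict(X_toy)   # step in the descent direction
    trees.append(stump)

print("training MSE:", np.mean((y_toy - F) ** 2))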
# ## Bagging
# 
# ### The Idea
# 
# __Bagging__, also known as **B**ootstrap **agg**regating, is a parallel ensemble method. Before saying more about bootstrap aggregating, we should understand what the bootstrap is: bootstrapping means drawing a random sample from the dataset *with replacement* (in our context, taking a random sample of rows from the original DataFrame, with replacement). Bootstrap aggregating, then, draws multiple bootstrap samples, trains a model on each one separately and independently (hence a parallel method, as mentioned above), and combines (aggregates) the trained models' results into a single output.
# 
# More formally: bagging runs a host of models in parallel on a dataset, averaging or otherwise aggregating their results. Each model runs on a randomly selected subset of the data, and all models are trained independently of the others (contrast this with boosting).
# 
# The primary goal of bagging is to reduce variance. Bagging does not necessarily improve predictive power; instead, it is mainly used to prevent overfitting [7]. Below is a diagram of a typical bagging process [8]:
# 
# ![Bagging](https://www.analyticsvidhya.com/wp-content/uploads/2015/09/bagging.png)
# 
# ### Example: Random Forest
# 
# An example of bagging applied to decision trees is the popular __random forest__ model. A random forest builds many decision trees, each trained on a random sample of the data. The final output for a given test point is the average (or, for classification, the majority vote) of the outputs of all of the trees on that point.

# In[7]:

from sklearn.datasets import load_digits
from sklearn import model_selection

digits = load_digits()
X = digits['data']
y = digits['target']

kfold = model_selection.KFold(n_splits=50, shuffle=True, random_state=7)

# In[8]:

# A single decision tree, for comparison
from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier()
results = model_selection.cross_val_score(model, X, y, cv=kfold)
print(results.mean())
print(results.var())

# In[10]:

# A bagged ensemble of 50 trees
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=50, max_features=5)
results = model_selection.cross_val_score(model, X, y, cv=kfold)
print(results.mean())
print(results.var())
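# To connect the code back to the idea, here is a rough sketch of bagging done by hand on the same digits data (the number of trees and the train/test split are arbitrary choices for illustration): each tree is trained on a bootstrap sample drawn with replacement, and the ensemble predicts by majority vote.

# In[ ]:

# Hand-rolled bagging on the digits data: each tree sees a bootstrap sample
# (drawn with replacement), and predictions are combined by majority vote.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)

rng = np.random.RandomState(7)
bagged_trees = []
for _ in range(50):
    idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)  # bootstrap sample
    t = DecisionTreeClassifier()
    t.fit(X_tr[idx], y_tr[idx])
    bagged_trees.append(t)

# Majority vote across the 50 trees
all_preds = np.stack([t.predict(X_te) for t in bagged_trees])  # shape (50, n_test)
vote = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, all_preds)
print("bagged accuracy:", (vote == y_te).mean())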
# ## Stacking
# 
# ### The Idea
# 
# __Stacking__ is perhaps the epitome of meta-learning: stacked models perform machine learning on other models. We take several models (often with different features or different learning algorithms) and train each on a random subset (fold) of our data. We then train another machine learning model - for instance, linear or logistic regression - on the outputs of these models to obtain a final value [9]. Using a regression model gives us a weighted sum of all of the models - much like what we see in the Kaggle winner's diagram.
# 
# Here is a diagram of stacking [10]:
# 
# ![Stacking](http://www.chioka.in/wp-content/uploads/2013/09/stacking.png)
# 
# It is important to train each model on a random subset of the data. Otherwise, the stacked model will simply lean towards the most accurate model in the set and ignore the others. Random subsetting is important for achieving the right "blend" of model weights.

# ## Stacking Example
# 
# We will use the Titanic dataset from Kaggle and stacked models to predict who survived the sinking of the Titanic. This example stacks four different models (kNN, SVM, decision trees, and logistic regression) and uses logistic regression as the stacked model. Note the use of test folds and how the predictions are averaged in the final stacked model; this ensures all models are taken into account rather than the stacked model simply favoring whichever single model is most accurate.

# In[12]:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder

titanic = pd.read_csv('train.csv')
titanic = titanic.dropna(axis=0, how='any')
Y = titanic['Survived']
titanic = titanic[['Pclass', 'Sex', 'Age']]

# change Sex to numeric
enc = LabelEncoder()
titanic['Sex'] = enc.fit_transform(titanic['Sex'])

# split into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(titanic, Y, test_size=.1)

# split training into 4 folds
kf = KFold(n_splits=4)
split1, split2, split3, split4 = kf.split(X_train, Y_train)

# In[13]:

# kNN - uses the first fold
from sklearn.neighbors import KNeighborsClassifier

X_train_kNN = X_train.iloc[split1[0]]
X_test_kNN = X_train.iloc[split1[1]]
Y_train_kNN = Y_train.iloc[split1[0]]
Y_test_kNN = Y_train.iloc[split1[1]]

kNN = KNeighborsClassifier()
kNN.fit(X_train_kNN, Y_train_kNN)
kNN_pred = kNN.predict(X_test_kNN)
kNN_score = (Y_test_kNN == kNN_pred).sum() / len(kNN_pred)
print("kNN: " + str(kNN_score))

# In[14]:

# SVM - uses the second fold
from sklearn import svm

X_train_SVM = X_train.iloc[split2[0]]
X_test_SVM = X_train.iloc[split2[1]]
Y_train_SVM = Y_train.iloc[split2[0]]
Y_test_SVM = Y_train.iloc[split2[1]]

SVM = svm.SVC()
SVM.fit(X_train_SVM, Y_train_SVM)
SVM_pred = SVM.predict(X_test_SVM)
SVM_score = (Y_test_SVM == SVM_pred).sum() / len(SVM_pred)
print("SVM: " + str(SVM_score))

# In[15]:

# decision tree - uses the third fold
from sklearn.tree import DecisionTreeClassifier

X_train_tree = X_train.iloc[split3[0]]
X_test_tree = X_train.iloc[split3[1]]
Y_train_tree = Y_train.iloc[split3[0]]
Y_test_tree = Y_train.iloc[split3[1]]

tree = DecisionTreeClassifier()
tree.fit(X_train_tree, Y_train_tree)
tree_pred = tree.predict(X_test_tree)
tree_score = (Y_test_tree == tree_pred).sum() / len(tree_pred)
print("Decision Tree: " + str(tree_score))

# In[16]:

# logistic regression - uses the fourth fold
from sklearn.linear_model import LogisticRegression

X_train_logit = X_train.iloc[split4[0]]
X_test_logit = X_train.iloc[split4[1]]
Y_train_logit = Y_train.iloc[split4[0]]
Y_test_logit = Y_train.iloc[split4[1]]

logit = LogisticRegression()
logit.fit(X_train_logit, Y_train_logit)
logit_pred = logit.predict(X_test_logit)
logit_score = (Y_test_logit == logit_pred).sum() / len(logit_pred)
print("Logistic Regression: " + str(logit_score))

# In[17]:

# stacking
import numpy as np
from sklearn.linear_model import LogisticRegression

# combine the out-of-fold predictions from the four folds; KFold without
# shuffling produces contiguous folds in order, so stacking the four prediction
# vectors lines up row-for-row with Y_train
kNN_pred = kNN_pred.reshape(-1, 1)
SVM_pred = SVM_pred.reshape(-1, 1)
tree_pred = tree_pred.reshape(-1, 1)
logit_pred = logit_pred.reshape(-1, 1)
pred_input = np.vstack([kNN_pred, SVM_pred, tree_pred, logit_pred])

stacked = LogisticRegression()
stacked.fit(pred_input, Y_train)

# the first layer predicts on the held-out test set
pred_test = pd.DataFrame({'kNN': kNN.predict(X_test),
                          'SVM': SVM.predict(X_test),
                          'tree': tree.predict(X_test),
                          'logit': logit.predict(X_test)})

# the average of the four predictions is the input to the stacked model
pred_test = pred_test.assign(avg=pred_test.mean(axis=1))
stacked_pred = stacked.predict(pred_test[['avg']])
stacked_score = (Y_test == stacked_pred).sum() / len(stacked_pred)
print("Stacked: " + str(stacked_score))
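# For comparison, newer versions of scikit-learn (0.22 and up) ship a built-in `StackingClassifier` that handles the out-of-fold predictions and the meta-model in one object. Here is a minimal sketch on the same Titanic features; the choice of base models and `cv=4` simply mirror the example above.

# In[ ]:

# Minimal sketch with scikit-learn's StackingClassifier (requires scikit-learn >= 0.22).
# It generates out-of-fold predictions for each base model via cross-validation and
# fits the final logistic regression on those predictions.
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

base_models = [('kNN', KNeighborsClassifier()),
               ('SVM', SVC()),
               ('tree', DecisionTreeClassifier()),
               ('logit', LogisticRegression())]

stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression(),
                           cv=4)
stack.fit(X_train, Y_train)
print("StackingClassifier: " + str(stack.score(X_test, Y_test)))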
# ## Terms to Review
# 
# * Ensemble
# * Bias
# * Variance
# * Boosting
# * Weak learner
# * AdaBoost
# * GradientBoost
# * Gradient descent
# * Bagging (bootstrap aggregating)
# * Random forest
# * Stacking

# ## Sources
# 
# [1] https://kaggle2.blob.core.windows.net/forum-message-attachments/79598/2514/FINAL_ARCHITECTURE.png
# 
# [2] http://scott.fortmann-roe.com/docs/docs/BiasVariance/biasvariance.png
# 
# [3] https://stats.stackexchange.com/questions/256/how-does-boosting-work
# 
# [4] https://codesachin.files.wordpress.com/2016/03/boosting_new_fit5.png?w=491&h=275
# 
# [5] https://www.quora.com/What-is-AdaBoost/answer/Janu-Verma-2
# 
# [6] http://xgboost.readthedocs.io/en/latest/model.html
# 
# [7] https://stats.stackexchange.com/questions/18891/bagging-boosting-and-stacking-in-machine-learning
# 
# [8] https://www.analyticsvidhya.com/wp-content/uploads/2015/09/bagging.png
# 
# [9] https://mlwave.com/kaggle-ensembling-guide/
# 
# [10] http://www.chioka.in/wp-content/uploads/2013/09/stacking.png