Suppose the spam generator becomes more sophisticated and begins producing prose that looks "more legitimate" than before.
There are numerous ways the prose could become more like legitimate text. For the purpose of this notebook we will simply force the spam data to drift by adding the first few lines of Pride and Prejudice to the start of the spam documents in our testing set. We will then see how the trained model responds.
import pandas as pd
import os.path
df = pd.read_parquet(os.path.join("data", "training.parquet"))
We split the data into training and testing sets, as in the modelling notebooks. We use the random_state
parameter to ensure that the data is split in the same way as it was when we fit the model.
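As a quick check of what `random_state` buys us, a minimal self-contained sketch (the toy frame below is made up) shows that repeating the split with the same seed reproduces exactly the same rows:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# a made-up frame, just to illustrate reproducibility
toy = pd.DataFrame({"x": range(10)})

first_train, first_test = train_test_split(toy, random_state=43)
second_train, second_test = train_test_split(toy, random_state=43)

# the same random_state always selects the same rows for each split
assert first_train.index.equals(second_train.index)
assert first_test.index.equals(second_test.index)
```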
from sklearn import model_selection
df_train, df_test = model_selection.train_test_split(df, random_state=43)
df_test_spam = df_test[df_test.label == 'spam'].copy()  # filter the spam documents
def add_text(doc, adds):
    """
    takes in a string _doc_ and
    prepends the text _adds_ to it
    """
    return adds + doc
pride_pred = '''It is a truth universally acknowledged, that a single man in possession of a good fortune, must be in want of a wife. However little known the feelings or views of such a man may be on his first entering a neighbourhood, this truth is so well fixed in the minds of the surrounding families, that he is considered the rightful property of some one or other of their daughters. “My dear Mr. Bennet,” said his lady to him one day, “have you heard that Netherfield Park is let at last?” Mr. Bennet replied that he had not. “But it is,” returned she; “for Mrs. Long has just been here, and she told me all about it.” Mr. Bennet made no answer. “Do you not want to know who has taken it?” cried his wife impatiently.'''
# prepending the Pride and Prejudice text to each spam document
df_test_spam["text"] = df_test_spam.text.apply(add_text, adds=pride_pred)
pd.set_option('display.max_colwidth', None)  # ensures that all the text is visible
df_test_spam.sample(3)
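It is worth confirming that the drift was actually applied, by checking that every spam document now begins with the prefix. A minimal sketch on a made-up frame (the documents and prefix below are illustrative, not the real data):

```python
import pandas as pd

prefix = "It is a truth universally acknowledged, "
toy = pd.DataFrame({"text": ["win a prize", "cheap meds"],
                    "label": ["spam", "spam"]})

# apply the same kind of drift as above
toy["text"] = toy["text"].apply(lambda doc: prefix + doc)

# every document should now begin with the prefix
assert toy["text"].str.startswith(prefix).all()
```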
We now pass this "drifted" data through the pipeline we created: we compute feature vectors, and we make spam/legitimate classifications using the model we trained.
from sklearn.pipeline import Pipeline
import pickle

## loading in the feature vectors pipeline
with open('feature_pipeline.sav', 'rb') as f:
    feat_pipeline = pickle.load(f)

## loading the model
with open('model.sav', 'rb') as f:
    model = pickle.load(f)

pipeline = Pipeline([
    ('features', feat_pipeline),
    ('model', model)
])
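For readers without the pickled artifacts on disk, the same composition can be illustrated end to end with stand-in stages. `CountVectorizer` and `LogisticRegression` here are assumptions for the sketch, not necessarily the stages saved in the earlier notebooks, and the toy documents are made up:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# stand-ins for the pickled feature pipeline and model
toy_pipeline = Pipeline([
    ('features', CountVectorizer()),
    ('model', LogisticRegression()),
])

# made-up training documents and labels
toy_pipeline.fit(
    ["cheap pills now", "win money fast", "meeting at noon", "see you tomorrow"],
    ["spam", "spam", "legitimate", "legitimate"],
)

print(toy_pipeline.predict(["lunch meeting tomorrow"]))
```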
## we need to fit the model, using the un-drifted data, as we did in the previous notebooks.
pipeline.fit(df_train["text"], df_train["label"])
## we can then go on and make predictions for the drifted spam, using the fitted pipeline above.
# predict test instances
y_preds = pipeline.predict(df_test_spam["text"])
print(y_preds)
import numpy as np
# count how many drifted documents were classified as spam vs. legitimate
np.array(np.unique(y_preds, return_counts=True))
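The label counts can be turned into an accuracy figure for the drifted set: every document passed in really is spam, so accuracy is simply the fraction labelled spam. A self-contained sketch with made-up predictions:

```python
import numpy as np

# made-up predictions for four drifted spam documents
y_toy = np.array(["legitimate", "spam", "legitimate", "legitimate"])

labels, counts = np.unique(y_toy, return_counts=True)
per_label = dict(zip(labels, counts))

# all four documents are really spam, so accuracy is the fraction labelled spam
accuracy = per_label.get("spam", 0) / len(y_toy)
print(per_label, accuracy)
```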
The model is worse at classifying the drifted data, since documents of this form were not present in the data it was trained on.
The two models perform very similarly on the "drifted" data in this notebook. Consider alternative types of data drift and see how the models perform:
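For instance, two simple alternatives can be sketched as hypothetical helpers (the names, parameters, and example strings below are our own, not from the earlier notebooks):

```python
# hypothetical drift: append the extra text to the end of each document
def append_text(doc, adds):
    return doc + adds

# hypothetical drift: interleave the extra text between the original words
def interleave_text(doc, adds, every=5):
    words = doc.split()
    out = []
    for i, word in enumerate(words, start=1):
        out.append(word)
        if i % every == 0:
            out.append(adds)
    return " ".join(out)

print(append_text("claim your prize", " PS: regards"))
print(interleave_text("one two three four five six", "<extra>", every=3))
```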