In this notebook, we're going to perform sentiment analysis on a dataset of tweets about US airlines. Sentiment analysis is the task of extracting affective states from text. It's most often used to answer questions like: What do people think of this airline? Is this tweet positive, negative, or neutral?
We're going to treat sentiment analysis as a text classification problem. Text classification is just like other instances of classification in data science. We use the term "text classification" when the features come from natural language data. (You'll also hear it called "document classification" at times.) What makes text classification interestingly different from other instances of classification is the way we extract numerical features from text.
The dataset was collected by CrowdFlower and then made public through Kaggle. I've downloaded it for you and put it in the "data" directory. Note that this is a nice, clean dataset; that's not the norm in real-life data science! I've chosen this dataset so that we can concentrate on understanding what text classification is and how to do it.
Here's what we'll cover in our hour. Like any data science task, we'll first do some EDA to understand what data we've got. Then, like always, we'll have to preprocess our data. Because this is text data, it'll be a little different from preprocessing other types of data. Next we'll perform our sentiment classification. Finally, we'll interpret the results of the classifier in terms of our big question.
Don't worry if you don't follow every single step in our hour. If things are moving too quickly, concentrate on the big picture. Afterwards, you can go through the notebook line by line to understand all the details.
%matplotlib inline
import os
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import RandomForestClassifier
sns.set()
DATA_DIR = 'data'
fname = os.path.join(DATA_DIR, 'tweets.csv')
df = pd.read_csv(fname)
df.head(3)
Which airlines are tweeted about, and how many tweets does each airline have in this dataset?
sns.countplot(data=df, x='airline', order=df['airline'].value_counts().index);
Regular expressions are like advanced find-and-replace. They allow us to specify complicated patterns in text data and find all the matches. They're indispensable in text processing. You can learn more about them here.
We can use regular expressions to find hashtags and user mentions in a tweet. We first write the pattern we're looking for as a (raw) string, using regular expressions' special syntax. The `twitter_handle_pattern` says "find me an @ sign immediately followed by one or more upper or lower case letters, digits, or underscores". The `hashtag_pattern` is a little more complicated; it says "find me exactly one # or ＃ (the fullwidth number sign), immediately followed by one or more upper or lower case letters, digits, or underscores, but only if it's at the beginning of a line or immediately after a whitespace character".
import re
twitter_handle_pattern = r'@(\w+)'
hashtag_pattern = r'(?:^|\s)[#＃]{1}(\w+)'
url_pattern = r'https?://\S+'  # match a URL up to the next whitespace
example_tweet = "lol @justinbeiber and @BillGates are like soo #yesterday #amiright saw it on https://twitter.com #yolo"
re.findall(twitter_handle_pattern, example_tweet)
re.findall(hashtag_pattern, example_tweet)
re.findall(url_pattern, example_tweet)
`pandas` has great built-in support for operating with regular expressions on columns. We can extract user mentions from a column of text like this:
df['text'].str.extract(twitter_handle_pattern).head(10)
And find all the hashtags like this:
df['text'].str.extract(hashtag_pattern).head(20)
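One caveat: `str.extract` returns only the first match in each row. If you want every match per tweet, `str.extractall` does that (a small self-contained illustration on made-up data):

```python
import pandas as pd

s = pd.Series(["hi @alice and @bob", "no mentions here"])
handle = r'@(\w+)'

# extract: first match per row (NaN where there's no match)
first = s.str.extract(handle)

# extractall: every match, indexed by (row, match number)
all_matches = s.str.extractall(handle)
print(first)
print(all_matches)
```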
Often in preprocessing text data, we don't care about the exact hashtag/user/URL that someone used (although sometimes we do!). Your job is to replace all the hashtags with `'HASHTAG'`, the user mentions with `'USER'`, and URLs with `'URL'`. To do this, you'll use the `replace` string method of the `text` column. The result will be a Series, which you should add to `df` as a column called `clean_text`. See the docs here for more information on the method.
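One possible solution sketch. To keep the cell self-contained it works on a one-row stand-in for `df`; the patterns are the ones defined earlier (with the URL pattern simplified to match up to the next whitespace). In the notebook, you'd apply the same chain to the real `df['text']`:

```python
import pandas as pd

# Patterns from earlier in the notebook
twitter_handle_pattern = r'@(\w+)'
hashtag_pattern = r'(?:^|\s)[#＃]{1}(\w+)'
url_pattern = r'https?://\S+'

# Tiny stand-in for df; in the notebook, use the real df loaded above
df = pd.DataFrame({'text': ["lol @justinbeiber is #yesterday see https://twitter.com"]})

# Chain str.replace with regex=True. The hashtag pattern consumes the
# preceding whitespace, so the replacement re-inserts a space.
df['clean_text'] = (df['text']
                    .str.replace(hashtag_pattern, ' HASHTAG', regex=True)
                    .str.replace(twitter_handle_pattern, 'USER', regex=True)
                    .str.replace(url_pattern, 'URL', regex=True))
print(df['clean_text'][0])  # lol USER is HASHTAG see URL
```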
Now that we've cleaned the text, we need to turn it into numbers for our classifier. We're going to use a "bag of words" as our features. A bag of words is just a frequency count of all the words that appear in a tweet. It's called a bag because we ignore the order of the words; we just care about which words are in the tweet. To build it, we can use `scikit-learn`'s `CountVectorizer`. `CountVectorizer` replaces each tweet with a vector (think of a list) of counts. Each position in the vector represents a unique word in the corpus, and the value of an entry is the number of times that word appeared in that tweet. Below, we restrict the length of the vectors to 5,000 and the counts to 0 (the word is not in the tweet) or 1 (it is).
countvectorizer = CountVectorizer(max_features=5000, binary=True)
X = countvectorizer.fit_transform(df['clean_text'])
features = X.toarray()
features
response = df['airline_sentiment'].values
response
We don't want to train our classifier on the same dataset that we test it on, so let's split it into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(features, response, test_size=0.2)
OK, now that we've turned our data into numbers, we're ready to feed it into a classifier. We're not going to concentrate too much on the code below, but here's the big picture. In the `fit_logistic_regression` function defined below, we use logistic regression as a classifier to take in the numerical representation of a tweet and spit out whether it's positive, neutral, or negative. Then we'll use `test_model` to test the model's performance against our test data and print out some results.
def fit_logistic_regression(X_train, y_train):
    model = LogisticRegressionCV(Cs=5, penalty='l1', cv=3, solver='liblinear', refit=True)
    model.fit(X_train, y_train)
    return model
def conmat(model, X_test, y_test):
    """Wrapper for sklearn's confusion matrix."""
    labels = model.classes_
    y_pred = model.predict(X_test)
    c = confusion_matrix(y_test, y_pred)
    sns.heatmap(c, annot=True, fmt='d',
                xticklabels=labels,
                yticklabels=labels,
                cmap="YlGnBu", cbar=False)
    plt.ylabel('Ground truth')
    plt.xlabel('Prediction')
def test_model(model, X_test, y_test):
    conmat(model, X_test, y_test)
    print('Accuracy: ', model.score(X_test, y_test))
lr = fit_logistic_regression(X_train, y_train)
test_model(lr, X_test, y_test)
Use the `fit_random_forest` function below to train a random forest classifier on the training set and test the model on the test set. Which performs better?
def fit_random_forest(X_train, y_train):
    model = RandomForestClassifier()
    model.fit(X_train, y_train)
    return model
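Here's a sketch of how you might run the comparison. To keep this cell self-contained it uses small synthetic binary features; in the notebook, you'd pass the real `X_train`/`X_test` and call `test_model(rf, X_test, y_test)` for the confusion matrix:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def fit_random_forest(X_train, y_train):
    model = RandomForestClassifier()
    model.fit(X_train, y_train)
    return model

# Synthetic stand-in for the binary bag-of-words features above
rng = np.random.RandomState(0)
X = rng.randint(0, 2, size=(200, 50))
y = rng.choice(['negative', 'neutral', 'positive'], size=200)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

rf = fit_random_forest(X_train, y_train)
acc = rf.score(X_test, y_test)
print('Accuracy:', acc)
```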
Use the `test_tweets` function below to test your classifier's performance on a list of tweets. Write your own tweets and see what the model predicts.
def clean_tweets(tweets):
    tweets = [re.sub(hashtag_pattern, 'HASHTAG', t) for t in tweets]
    tweets = [re.sub(twitter_handle_pattern, 'USER', t) for t in tweets]
    return [re.sub(url_pattern, 'URL', t) for t in tweets]
def test_tweets(tweets, model):
    tweets = clean_tweets(tweets)
    features = countvectorizer.transform(tweets)
    predictions = model.predict(features)
    return list(zip(tweets, predictions))
my_tweets = [example_tweet,
             'omg I am never flying on Delta again',
             'I love @VirginAmerica so much #friendlystaff']
test_tweets(my_tweets, lr)
Now we can interpret the classifier by looking at which features it found important.
vocab = [(v,k) for k,v in countvectorizer.vocabulary_.items()]
vocab = sorted(vocab, key=lambda x: x[0])
vocab = [word for num,word in vocab]
coef = list(zip(vocab, lr.coef_[0]))
important = pd.DataFrame(lr.coef_).T
important.columns = lr.classes_
important['word'] = vocab
important.head()
important.sort_values(by='negative', ascending=False).head(10)
important.sort_values(by='positive', ascending=False).head(10)