#!/usr/bin/env python
# coding: utf-8

# This is the fifth in a series of notebooks designed to show you how to analyze social media data. For demonstration purposes we are looking at tweets sent by CSR-related Twitter accounts -- accounts related to ethics, equality, the environment, etc. -- of Fortune 200 firms in 2013. We assume you have already downloaded the data and have completed the steps taken in Chapter 1, Chapter 2, Chapter 3, and Chapter 4. In this fifth notebook I will show you how to generate new variables -- such as dummy variables -- from the variables currently in your dataframe.

# # Chapter 5: Generating New Variables

# First, we will import several necessary Python packages and set some options for viewing the data. As with prior chapters, we will be using the Python Data Analysis Library, or PANDAS, extensively for our data manipulations.

# ### Import packages and set viewing options

# In[1]:

import numpy as np
import pandas as pd
from pandas import DataFrame
from pandas import Series

# In[2]:

#Set PANDAS to show all columns in DataFrame
pd.set_option('display.max_columns', None)

# I'm using version 0.16.2 of PANDAS

# In[3]:

pd.__version__

# ### Read in data

# In Chapter 4 we created a version of the dataframe that omitted all tweets that were retweets, allowing us to focus only on original messages sent by the 41 Twitter accounts. Let's now open this saved file. As we can see in the output below, this dataframe contains 54 variables for 26,257 tweets.

# In[41]:

df = pd.read_pickle('Original 2013 CSR Tweets.pkl')
print "# of variables:", len(df.columns)
print "# of tweets:", len(df)
df.head(2)
# Note that, in earlier notebooks, we have been creating new dataframes -- taking our tweet-level data and converting it into a dataframe organized by account, by company, or by time period. Now we are going to do something different. We are going to keep the organization of the data at the same level (tweets) but merely add columns.

# ### Generating Dummy Variables

# To run statistical procedures we have to convert text-based columns into numerical format. For example, let's say we're interested in exploring whether the language used in a tweet influences how often a message gets favorited or retweeted. To do this we need to transform the `language` variable in our dataset. To see why this is necessary, let's take a closer look at the variable.

# In[89]:

df[['content', 'language']].head()
# What we see is that values of this variable are text-based, with "en" representing English, "es" representing Spanish, etc. Let's see how many languages are used and also inspect the frequencies.

# In[27]:

len(pd.unique(df.language.ravel()))

# In[28]:

df['language'].value_counts()
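# If you prefer to see the language distribution as proportions rather than raw counts, `value_counts()` also accepts a `normalize=True` argument. This is just an optional extra step, not part of the analysis that follows.

# In[ ]:

#Optional: language shares as proportions of all tweets
df['language'].value_counts(normalize=True).head()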
# So there are 23 different languages used in our tweet database, with (not surprisingly given the data source) English being by far the most popular. What if we are interested in analyzing whether the "success" of each tweet is related to whether the language used was English, Spanish, French, Dutch, or Chinese? To do this in a regression analysis, for instance, we need to generate a separate variable for each of these languages. Most statistical programs include a shortcut for creating these dummy variables.
#
# In PANDAS we can generate 23 dummy variables -- one for each language -- with a single line of code. The following command creates the dummies and shows the first five rows of the data (we haven't added them to our dataframe yet).

# In[29]:

pd.get_dummies(df['language'], prefix='lang').head(5)
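# If it helps to see the mechanics of `get_dummies` in isolation, here is a minimal sketch on a small hand-made Series. The values below are made up for illustration and are not part of our dataset.

# In[ ]:

#A toy example: four "tweets" in three languages
toy = pd.Series(['en', 'es', 'en', 'fr'])
pd.get_dummies(toy, prefix='lang')  #yields lang_en, lang_es, lang_fr columns, with a single 1 per row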
# Here's what has happened. Let's take the first two rows as examples. Here is the content of those two tweets to refresh your memory.

# In[35]:

df[['content', 'language']].head(2)
# The first tweet is in English, as indicated by the value "en" in our `language` column. The `get_dummies` command first creates a separate dummy variable for each of our 23 language values; it then assigns each row a value of "1" on the appropriate language dummy and a value of "0" on each of the other 22 dummy variables. So, in our first row, the value of the dummy variable `lang_en` will be "1" and the value will be "0" for all the others. In contrast, the second row is assigned a value of "1" on `lang_es` and a value of "0" on all the others. In effect, each row is assigned a value of "1" on exactly one of the dummy variables.
#
# Note that in the commands above we have not actually added the new columns to our dataframe. To do that we need to concatenate the columns containing the dummy variables to our primary dataset using PANDAS' `concat` command.

# In[42]:

df = pd.concat([df, pd.get_dummies(df['language'], prefix='lang')], axis=1)
print "# of variables:", len(df.columns)
print "# of tweets:", len(df)
df.head(2)
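# As a quick sanity check on the claim that each row gets a "1" on exactly one dummy, we can sum across the new columns; every row should sum to 1. This is a minimal sketch that assumes the `lang_` prefix used above.

# In[ ]:

#Sum across the 23 language dummies for each row; value_counts() should show only 1
lang_cols = [col for col in df.columns if col.startswith('lang_')]
df[lang_cols].sum(axis=1).value_counts()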
# We see there are now 77 columns in the dataframe (23 more than before) but the same number of rows. Exactly as intended. But let's suppose we're not really interested in exploring the differences among all those languages, so let's revert to our original dataframe with only 54 columns.

# In[57]:

df = pd.read_pickle('Original 2013 CSR Tweets.pkl')
print "# of variables:", len(df.columns)
print "# of tweets:", len(df)
df.head(2)

# ### Generating Binary Variables from Categorical Data

# We are not always interested in every single value in a text-based or categorical variable. Instead, we're often interested in creating a single binary (0,1) variable from one of the values in a given categorical variable. For instance, let's assume we're interested in examining whether tweets written in English receive more retweets than those written in other languages. To do this we'll need to create a numerical variable with one value for English tweets and another value for non-English tweets. The convention for these binary variables is to use the values "0" and "1" and to name your variable after the category you're interested in. Accordingly, we'll call our new variable `English` and assign values of "1" to tweets written in English, and "0" to all non-English tweets.
#
# There are two steps to generating our new variable. First we name our new variable and assign it values of "true" if the text in our `language` column matches "en" and "false" otherwise. PANDAS' string methods are powerful. We are using the `match` method here, which requires the pattern to match starting at the beginning of the text in our `language` column. In other cases you might use the `contains` method instead if you are only interested in a match anywhere in the cell.

# In[59]:

df['English'] = df['language'].str.match('en', na=False)
df[['content', 'language', 'English']].head()  #SHOW FIRST FIVE ROWS OF THREE CHOSEN COLUMNS OF DATAFRAME
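# Because `match` only anchors the pattern at the start of the string, a stricter alternative is a plain equality comparison, which also collapses the two steps into one. The sketch below uses a hypothetical standalone Series, `english_alt`, and deliberately does not add a column to the dataframe, so the column counts that follow stay the same.

# In[ ]:

#Alternative: a strict equality test, converted to 0/1 in a single step (not added to the dataframe)
english_alt = (df['language'] == 'en').astype(float)
english_alt.head()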
# In the second step we convert our variable to a numerical format. As you can see, each row now has a value of "0" or "1", with tweets in English being assigned values of "1".

# In[60]:

df['English'] = df['English'].astype(float)
df[['content', 'language', 'English']].head()
# Below we can see that our new column has been added to the end of the dataframe. There are now 55 variables and the same number of tweets.

# In[63]:

print "# of variables:", len(df.columns)
print "# of tweets:", len(df)
df.head(2)

# ### Generating Binary Variables from Numerical Data

# We are also often interested in generating binary versions of numerical data in our dataframe. Most tweets, for instance, are never retweeted even once. We might thus be interested in analyzing what differentiates those tweets that do get retweeted from those that don't. Here we create a new binary variable called `RTs_binary`. We use the `numpy` package's `where` function to look for tweets with a retweet count of 0 (`df['retweet_count']==0`). The final two numbers in the first line of code (`0, 1`) indicate that we assign a value of "0" to tweets that meet this condition and a value of "1" otherwise. We now have a variable that numerically differentiates retweeted tweets from ignored tweets. This comes in handy for logistic regression, which we'll delve into in a future tutorial.

# In[72]:

df['RTs_binary'] = np.where(df['retweet_count']==0, 0, 1)
print "# of variables in dataframe:", len(df.columns)
print "# of tweets in dataframe:", len(df)
df[['content','retweet_count','RTs_binary']].head(5)
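# For what it's worth, the same result can be obtained with a boolean comparison instead of `np.where`. The sketch below (with the hypothetical throwaway name `rts_alt`) just confirms the two approaches agree and does not add anything to the dataframe.

# In[ ]:

#Equivalent one-liner: compare, then convert the booleans to 0/1
rts_alt = (df['retweet_count'] > 0).astype(int)
(rts_alt == df['RTs_binary']).all()  #should display True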
# Let's also do the same for the favorite count.

# In[73]:

df['favorites_binary'] = np.where(df['favorite_count']==0, 0, 1)
print "# of variables in dataframe:", len(df.columns)
print "# of tweets in dataframe:", len(df)
df[['content','favorite_count','favorites_binary']].head(5)
# While we're at it, let's create three final binary variables indicating whether the tweet contains any hashtags, user mentions, or URLs, respectively.

# In[85]:

df['hashtags_binary'] = np.where(df['entities_hashtags_count']==0, 0, 1)
df['mentions_binary'] = np.where(df['entities_mentions_count']==0, 0, 1)
df['URLs_binary'] = np.where(df['entities_urls_count']==0, 0, 1)
print "# of variables in dataframe:", len(df.columns)
print "# of tweets in dataframe:", len(df)
df[['content','entities_hashtags_count','hashtags_binary','entities_mentions_count','mentions_binary','entities_urls_count','URLs_binary']].head(5)
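# If you have many count variables to binarize, a short loop avoids repeating yourself. The sketch below simply recreates the same three columns in one pass; the mapping dictionary is just an illustration of the pattern.

# In[ ]:

#Map each count column to the name of its binary indicator, then loop
count_to_binary = {'entities_hashtags_count': 'hashtags_binary',
                   'entities_mentions_count': 'mentions_binary',
                   'entities_urls_count': 'URLs_binary'}
for count_col, binary_col in count_to_binary.items():
    df[binary_col] = np.where(df[count_col]==0, 0, 1)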
# Below we can see that our six new binary variables have been included as new columns in our dataframe.

# In[86]:

df.head(2)

# ### Verification

# Before we actually start using our new data it is critical we verify that our data transformations worked. So let's take a couple of steps using the example of our new variable `RTs_binary`. First, we're expecting a new variable with values of only 0 or 1; `value_counts()` will let us know whether the new variable has the expected values.

# In[91]:

df['RTs_binary'].value_counts()
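# Another quick check, which assumes the six column names created above, is to confirm that all of the new binary variables are stored as numeric types.

# In[ ]:

#All six new columns should have numeric (float or integer) dtypes
new_cols = ['English', 'RTs_binary', 'favorites_binary', 'hashtags_binary', 'mentions_binary', 'URLs_binary']
df[new_cols].dtypes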
# OK, that's perfect. But we should also double-check that there are no values of our old variable that did not get properly translated. We can run this check visually with PANDAS' `crosstab` command. As expected, we see that there are no instances where a `retweet_count` value of `0` has been assigned anything other than `0` on `RTs_binary`, and likewise, no instances where a `retweet_count` value greater than `0` has been assigned anything other than `1` on `RTs_binary`.

# In[80]:

pd.crosstab(df['retweet_count'], df['RTs_binary'])
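# If you would rather have this check run automatically -- for example, when re-running the notebook on new data -- an `assert` statement is a compact, programmatic alternative to eyeballing the crosstab.

# In[ ]:

#Raises an AssertionError if any row of RTs_binary disagrees with retweet_count
assert ((df['retweet_count'] > 0).astype(int) == df['RTs_binary']).all()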
# Alternatively, we could run a conditional version of the above crosstab to see the same thing in a condensed format. Everything is as expected: our new binary variable `RTs_binary` constitutes a perfect binary representation of our original ratio-level `retweet_count` variable.

# In[83]:

pd.crosstab(df['retweet_count']>0, df['RTs_binary'])

# ### Save new dataframe

# Let's now save a copy of this dataframe for future use.

# In[92]:

df.to_pickle('Original 2013 CSR Tweets with 3 binary variables.pkl')
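# If you want to be sure the file was written correctly, one option is to read it straight back in and confirm the dimensions match what we expect; `df_check` below is just a throwaway name for this sketch.

# In[ ]:

#Re-load the saved file and confirm it has the expected shape
df_check = pd.read_pickle('Original 2013 CSR Tweets with 3 binary variables.pkl')
print "# of variables:", len(df_check.columns)
print "# of tweets:", len(df_check)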
# In this tutorial we have covered how to generate new variables from your existing data. Specifically, we have covered how to create dummy variables from an existing categorical variable. We have also covered how to generate binary variables indicating, respectively, the presence of URLs, user mentions, and hashtags, whether a tweet was written in English, and whether a tweet was favorited or retweeted. Such data transformations are essential for moving beyond mere description to statistical analyses of the data.
#
# This is intended to be merely an introduction. There are lots of other methods for generating new variables. For some additional recipes from my PANDAS cookbook, see here.
#
# For more Notebooks as well as additional Python and Big Data tutorials, please visit http://social-metrics.org or follow me on Twitter @gregorysaxton