This is the fifth in a series of notebooks designed to show you how to analyze social media data. For demonstration purposes we are looking at tweets sent by CSR-related Twitter accounts -- accounts related to ethics, equality, the environment, etc. -- of Fortune 200 firms in 2013. We assume you have already downloaded the data and have completed the steps taken in Chapter 1, Chapter 2, Chapter 3, and Chapter 4. In this fifth notebook I will show you how to generate new variables -- such as dummy variables -- from the variables currently in your dataframe.
First, we will import several necessary Python packages and set some options for viewing the data. As with prior chapters, we will be using the Python Data Analysis Library, or PANDAS, extensively for our data manipulations.
import numpy as np
import pandas as pd
from pandas import DataFrame
from pandas import Series
#Set PANDAS to show all columns in DataFrame
pd.set_option('display.max_columns', None)
I'm using version 0.16.2 of PANDAS
pd.__version__
'0.16.2'
In Chapter 4 we created a version of the dataframe that omitted all tweets that were retweets, allowing us to focus only on original messages sent by the 41 Twitter accounts. Let's now open this saved file. As we can see in the operations below, this dataframe contains 54 variables for 26,257 tweets.
df = pd.read_pickle('Original 2013 CSR Tweets.pkl')
print "# of variables:", len(df.columns)
print "# of tweets:", len(df)
df.head(2)
# of variables: 54
# of tweets: 26257
rowid | query | tweet_id_str | inserted_date | language | coordinates | retweeted_status | created_at | month | year | content | from_user_screen_name | from_user_id | from_user_followers_count | from_user_friends_count | from_user_listed_count | from_user_favourites_count | from_user_statuses_count | from_user_description | from_user_location | from_user_created_at | retweet_count | favorite_count | entities_urls | entities_urls_count | entities_hashtags | entities_hashtags_count | entities_mentions | entities_mentions_count | in_reply_to_screen_name | in_reply_to_status_id | source | entities_expanded_urls | entities_media_count | media_expanded_url | media_url | media_type | video_link | photo_link | twitpic | num_characters | num_words | retweeted_user | retweeted_user_description | retweeted_user_screen_name | retweeted_user_followers_count | retweeted_user_listed_count | retweeted_user_statuses_count | retweeted_user_location | retweeted_tweet_created_at | Fortune_2012_rank | Company | CSR_sustainability | specific_project_initiative_area | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 67340 | humanavitality | 306897327585652736 | 2014-03-09 13:46:50.222857 | en | NaN | NaN | 2013-02-27 22:43:19.000000 | 2 | 2013 | @louloushive (Tweet 2) We encourage other empl... | humanavitality | 274041023 | 2859 | 440 | 38 | 25 | 1766 | This is the official Twitter account for Human... | NaN | Tue Mar 29 16:23:02 +0000 2011 | 0 | 0 | NaN | 0 | NaN | 0 | louloushive | 1 | louloushive | 3.062183e+17 | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 121 | 19 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 79 | Humana | 0 | 1 |
1 | 39454 | FundacionPfizer | 308616393706844160 | 2014-03-09 13:38:20.679967 | es | NaN | NaN | 2013-03-04 16:34:17.000000 | 3 | 2013 | ¿Sabes por qué la #vacuna contra la #neumonía ... | FundacionPfizer | 188384056 | 2464 | 597 | 50 | 11 | 2400 | Noticias sobre Responsabilidad Social y Fundac... | México | Wed Sep 08 16:14:11 +0000 2010 | 1 | 0 | NaN | 0 | vacuna, neumonía | 2 | NaN | 0 | NaN | NaN | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 138 | 20 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 40 | Pfizer | 0 | 1 |
Note that, in earlier notebooks, we have been creating new dataframes -- taking our tweet-level data and converting it into a dataframe organized by account, by company, or by time period. Now we are going to do something different. We are going to keep the organization of the data at the same level (tweets) but merely add columns.
To run statistical procedures we have to convert text-based columns into numerical format. For example, let's say we're interested in exploring whether the language used in a tweet influences how often a message gets favorited or retweeted. To do this we need to transform the language variable in our dataset. To see why this is necessary, let's take a closer look at the variable.
df[['content', 'language']].head()
content | language | |
---|---|---|
0 | @louloushive (Tweet 2) We encourage other empl... | en |
1 | ¿Sabes por qué la #vacuna contra la #neumonía ... | es |
2 | RT @droso: @RodrigoReinaL con un tema muy inte... | es |
3 | "Every child is born a scientist. We’re all bo... | en |
4 | RT @ClintonGlobal Watch Pres. @BillClinton's f... | en |
What we see is that the values of this variable are text-based, with "en" representing English, "es" representing Spanish, etc. Let's see how many languages are used and also inspect the frequencies.
len(pd.unique(df.language.ravel()))
23
df['language'].value_counts()
en     25483
es       503
de        84
fr        38
und       36
tl        15
vi        15
sk        15
pt        12
in        10
ht         8
da         7
it         7
nl         5
id         4
pl         3
et         3
sv         2
sl         2
fi         2
zh         1
lv         1
ar         1
dtype: int64
So there are 23 different languages used in our tweet database, with (not surprisingly given the data source) English being by far the most popular. What if we are interested in analyzing whether the "success" of each tweet is related to whether the language used was English, Spanish, French, Dutch, or Chinese? To do this in a regression analysis, for instance, we need to generate a different variable for each of these languages. Most statistical programs include a shortcut for creating these dummy variables.
In PANDAS we can generate 23 dummy variables -- one for each language -- with a single line of code. The following command creates the dummies and shows the first five rows of the data (we haven't added them to our dataframe yet).
pd.get_dummies(df['language'], prefix='lang').head(5)
lang_ar | lang_da | lang_de | lang_en | lang_es | lang_et | lang_fi | lang_fr | lang_ht | lang_id | lang_in | lang_it | lang_lv | lang_nl | lang_pl | lang_pt | lang_sk | lang_sl | lang_sv | lang_tl | lang_und | lang_vi | lang_zh | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
2 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
3 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
4 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Here's what has happened. Let's take the first two rows as examples. Here is the content of those two tweets to refresh your memory.
df[['content', 'language']].head(2)
content | language | |
---|---|---|
0 | @louloushive (Tweet 2) We encourage other empl... | en |
1 | ¿Sabes por qué la #vacuna contra la #neumonía ... | es |
The first tweet is in English, as indicated by the value "en" in our language column. The get_dummies command first creates a separate dummy variable for each of our 23 language values; it then assigns each row a value of "1" on the appropriate language dummy and a value of "0" on each of the other 22 dummy variables. So, in our first row, the value for the dummy variable lang_en is "1" and the value is "0" for all the others. In contrast, the second row is assigned a value of "1" on lang_es and a value of "0" on all the others. In effect, each row receives a value of "1" on exactly one of the dummy variables.
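The one-"1"-per-row pattern can be seen on a toy Series (the values below are hypothetical stand-ins for the language column):

```python
import pandas as pd

# A small stand-in for the language column (hypothetical values)
langs = pd.Series(['en', 'es', 'en', 'de'])
dummies = pd.get_dummies(langs, prefix='lang')

# Columns are lang_de, lang_en, lang_es; each row sums to exactly 1
row_sums = dummies.sum(axis=1)
```

Summing across the dummy columns is a quick sanity check: every row should total 1, since each tweet has exactly one language.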
Note that in our commands above we have not actually added the new columns to our dataframe. To do that we need to concatenate the columns containing the dummy variables to our primary dataset using PANDAS' concat command.
df = pd.concat([df, pd.get_dummies(df['language'], prefix='lang')], axis=1)
print "# of variables:", len(df.columns)
print "# of tweets:", len(df)
df.head(2)
# of variables: 77
# of tweets: 26257
rowid | query | tweet_id_str | inserted_date | language | coordinates | retweeted_status | created_at | month | year | content | from_user_screen_name | from_user_id | from_user_followers_count | from_user_friends_count | from_user_listed_count | from_user_favourites_count | from_user_statuses_count | from_user_description | from_user_location | from_user_created_at | retweet_count | favorite_count | entities_urls | entities_urls_count | entities_hashtags | entities_hashtags_count | entities_mentions | entities_mentions_count | in_reply_to_screen_name | in_reply_to_status_id | source | entities_expanded_urls | entities_media_count | media_expanded_url | media_url | media_type | video_link | photo_link | twitpic | num_characters | num_words | retweeted_user | retweeted_user_description | retweeted_user_screen_name | retweeted_user_followers_count | retweeted_user_listed_count | retweeted_user_statuses_count | retweeted_user_location | retweeted_tweet_created_at | Fortune_2012_rank | Company | CSR_sustainability | specific_project_initiative_area | lang_ar | lang_da | lang_de | lang_en | lang_es | lang_et | lang_fi | lang_fr | lang_ht | lang_id | lang_in | lang_it | lang_lv | lang_nl | lang_pl | lang_pt | lang_sk | lang_sl | lang_sv | lang_tl | lang_und | lang_vi | lang_zh | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 67340 | humanavitality | 306897327585652736 | 2014-03-09 13:46:50.222857 | en | NaN | NaN | 2013-02-27 22:43:19.000000 | 2 | 2013 | @louloushive (Tweet 2) We encourage other empl... | humanavitality | 274041023 | 2859 | 440 | 38 | 25 | 1766 | This is the official Twitter account for Human... | NaN | Tue Mar 29 16:23:02 +0000 2011 | 0 | 0 | NaN | 0 | NaN | 0 | louloushive | 1 | louloushive | 3.062183e+17 | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 121 | 19 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 79 | Humana | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
1 | 39454 | FundacionPfizer | 308616393706844160 | 2014-03-09 13:38:20.679967 | es | NaN | NaN | 2013-03-04 16:34:17.000000 | 3 | 2013 | ¿Sabes por qué la #vacuna contra la #neumonía ... | FundacionPfizer | 188384056 | 2464 | 597 | 50 | 11 | 2400 | Noticias sobre Responsabilidad Social y Fundac... | México | Wed Sep 08 16:14:11 +0000 2010 | 1 | 0 | NaN | 0 | vacuna, neumonía | 2 | NaN | 0 | NaN | NaN | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 138 | 20 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 40 | Pfizer | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
We see there are now 77 columns in the dataframe (23 more than before) but the same number of rows. Exactly as intended. But let's suppose we're not really interested in exploring the differences among all those languages, so let's revert to our original dataframe with only 54 columns.
df = pd.read_pickle('Original 2013 CSR Tweets.pkl')
print "# of variables:", len(df.columns)
print "# of tweets:", len(df)
df.head(2)
# of variables: 54
# of tweets: 26257
rowid | query | tweet_id_str | inserted_date | language | coordinates | retweeted_status | created_at | month | year | content | from_user_screen_name | from_user_id | from_user_followers_count | from_user_friends_count | from_user_listed_count | from_user_favourites_count | from_user_statuses_count | from_user_description | from_user_location | from_user_created_at | retweet_count | favorite_count | entities_urls | entities_urls_count | entities_hashtags | entities_hashtags_count | entities_mentions | entities_mentions_count | in_reply_to_screen_name | in_reply_to_status_id | source | entities_expanded_urls | entities_media_count | media_expanded_url | media_url | media_type | video_link | photo_link | twitpic | num_characters | num_words | retweeted_user | retweeted_user_description | retweeted_user_screen_name | retweeted_user_followers_count | retweeted_user_listed_count | retweeted_user_statuses_count | retweeted_user_location | retweeted_tweet_created_at | Fortune_2012_rank | Company | CSR_sustainability | specific_project_initiative_area | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 67340 | humanavitality | 306897327585652736 | 2014-03-09 13:46:50.222857 | en | NaN | NaN | 2013-02-27 22:43:19.000000 | 2 | 2013 | @louloushive (Tweet 2) We encourage other empl... | humanavitality | 274041023 | 2859 | 440 | 38 | 25 | 1766 | This is the official Twitter account for Human... | NaN | Tue Mar 29 16:23:02 +0000 2011 | 0 | 0 | NaN | 0 | NaN | 0 | louloushive | 1 | louloushive | 3.062183e+17 | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 121 | 19 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 79 | Humana | 0 | 1 |
1 | 39454 | FundacionPfizer | 308616393706844160 | 2014-03-09 13:38:20.679967 | es | NaN | NaN | 2013-03-04 16:34:17.000000 | 3 | 2013 | ¿Sabes por qué la #vacuna contra la #neumonía ... | FundacionPfizer | 188384056 | 2464 | 597 | 50 | 11 | 2400 | Noticias sobre Responsabilidad Social y Fundac... | México | Wed Sep 08 16:14:11 +0000 2010 | 1 | 0 | NaN | 0 | vacuna, neumonía | 2 | NaN | 0 | NaN | NaN | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 138 | 20 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 40 | Pfizer | 0 | 1 |
We are not always interested in every single value of a text-based or categorical variable. Instead, we're often interested in creating a single binary (0,1) variable from one of the values in a given categorical variable. For instance, let's assume we're interested in examining whether tweets written in English receive more retweets than those written in other languages. To do this we'll need to create a numerical variable with one value for English tweets and another value for non-English tweets. The convention for these binary variables is to use the values "0" and "1" and to name the variable after the category you're interested in. Accordingly, we'll call our new variable English and assign values of "1" to tweets written in English and "0" to all non-English tweets.
There are two steps to generating our new variable. First we name our new variable and assign it values of "True" if the text in our language column matches "en" and "False" otherwise. PANDAS' string methods are powerful. We are using the match function here; note that match anchors the pattern at the beginning of the string rather than requiring the entire text to match, but since no other language code in our data begins with "en" it works as an exact match here. In other cases you might use the contains method instead if you are interested in a match anywhere in the cell.
df['English'] = df['language'].str.match('en', na=False)
df[['content', 'language', 'English']].head() #SHOW FIRST FIVE ROWS OF THREE CHOSEN COLUMNS OF DATAFRAME
content | language | English | |
---|---|---|---|
0 | @louloushive (Tweet 2) We encourage other empl... | en | True |
1 | ¿Sabes por qué la #vacuna contra la #neumonía ... | es | False |
2 | RT @droso: @RodrigoReinaL con un tema muy inte... | es | False |
3 | "Every child is born a scientist. We’re all bo... | en | True |
4 | RT @ClintonGlobal Watch Pres. @BillClinton's f... | en | True |
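The difference between match and contains is easiest to see on a toy Series (the values below are hypothetical, chosen to show the contrast):

```python
import pandas as pd

# Hypothetical strings illustrating match vs. contains
s = pd.Series(['en', 'enviro', 'spoken'])

starts = s.str.match('en')       # anchored at the start of the string
anywhere = s.str.contains('en')  # pattern may appear anywhere
# starts:   [True, True, False]
# anywhere: [True, True, True]
```

Note that 'enviro' passes the match test too, since match only anchors the pattern at the start; in our data this doesn't matter because no other language code begins with "en".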
In the second step we convert our variable to a numerical format. As you can see, each row now has a value of "0" or "1", with tweets in English being assigned values of "1".
df['English'] = df['English'].astype(float)
df[['content', 'language', 'English']].head()
content | language | English | |
---|---|---|---|
0 | @louloushive (Tweet 2) We encourage other empl... | en | 1 |
1 | ¿Sabes por qué la #vacuna contra la #neumonía ... | es | 0 |
2 | RT @droso: @RodrigoReinaL con un tema muy inte... | es | 0 |
3 | "Every child is born a scientist. We’re all bo... | en | 1 |
4 | RT @ClintonGlobal Watch Pres. @BillClinton's f... | en | 1 |
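As an aside, the two steps can also be collapsed into a single vectorized comparison; a minimal sketch, using a hypothetical stand-in for the language column:

```python
import pandas as pd

langs = pd.Series(['en', 'es', 'es', 'en'])  # stand-in for df['language']

# Equality test returns booleans; astype(float) converts them to 1.0/0.0
english = (langs == 'en').astype(float)
```

This avoids the intermediate boolean column, though the two-step version above makes each transformation easier to inspect.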
Below we can see that our new column has been added to the end of the dataframe. There are now 55 variables and the same number of tweets.
print "# of variables:", len(df.columns)
print "# of tweets:", len(df)
df.head(2)
# of variables: 55
# of tweets: 26257
rowid | query | tweet_id_str | inserted_date | language | coordinates | retweeted_status | created_at | month | year | content | from_user_screen_name | from_user_id | from_user_followers_count | from_user_friends_count | from_user_listed_count | from_user_favourites_count | from_user_statuses_count | from_user_description | from_user_location | from_user_created_at | retweet_count | favorite_count | entities_urls | entities_urls_count | entities_hashtags | entities_hashtags_count | entities_mentions | entities_mentions_count | in_reply_to_screen_name | in_reply_to_status_id | source | entities_expanded_urls | entities_media_count | media_expanded_url | media_url | media_type | video_link | photo_link | twitpic | num_characters | num_words | retweeted_user | retweeted_user_description | retweeted_user_screen_name | retweeted_user_followers_count | retweeted_user_listed_count | retweeted_user_statuses_count | retweeted_user_location | retweeted_tweet_created_at | Fortune_2012_rank | Company | CSR_sustainability | specific_project_initiative_area | English | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 67340 | humanavitality | 306897327585652736 | 2014-03-09 13:46:50.222857 | en | NaN | NaN | 2013-02-27 22:43:19.000000 | 2 | 2013 | @louloushive (Tweet 2) We encourage other empl... | humanavitality | 274041023 | 2859 | 440 | 38 | 25 | 1766 | This is the official Twitter account for Human... | NaN | Tue Mar 29 16:23:02 +0000 2011 | 0 | 0 | NaN | 0 | NaN | 0 | louloushive | 1 | louloushive | 3.062183e+17 | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 121 | 19 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 79 | Humana | 0 | 1 | 1 |
1 | 39454 | FundacionPfizer | 308616393706844160 | 2014-03-09 13:38:20.679967 | es | NaN | NaN | 2013-03-04 16:34:17.000000 | 3 | 2013 | ¿Sabes por qué la #vacuna contra la #neumonía ... | FundacionPfizer | 188384056 | 2464 | 597 | 50 | 11 | 2400 | Noticias sobre Responsabilidad Social y Fundac... | México | Wed Sep 08 16:14:11 +0000 2010 | 1 | 0 | NaN | 0 | vacuna, neumonía | 2 | NaN | 0 | NaN | NaN | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 138 | 20 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 40 | Pfizer | 0 | 1 | 0 |
We are also often interested in generating binary versions of numerical data in our dataframe. Most tweets, for instance, are never retweeted even once. We might thus be interested in analyzing what differentiates those tweets that do get retweeted from those that don't. Here we create a new binary variable called RTs_binary. We use the numpy package's where function to look for tweets with a retweet count of 0 (df['retweet_count']==0). The final two numbers in the first line of code (0, 1) indicate that we assign a value of "0" to tweets that meet this condition and a value of "1" otherwise. We now have a variable that numerically differentiates retweeted tweets from ignored tweets. This comes in handy for logistic regression, which we'll delve into in a future tutorial.
df['RTs_binary'] = np.where(df['retweet_count']==0, 0, 1)
print "# of variables in dataframe:", len(df.columns)
print "# of tweets in dataframe:", len(df)
df[['content','retweet_count','RTs_binary']].head(5)
# of variables in dataframe: 56
# of tweets in dataframe: 26257
content | retweet_count | RTs_binary | |
---|---|---|---|
0 | @louloushive (Tweet 2) We encourage other empl... | 0 | 0 |
1 | ¿Sabes por qué la #vacuna contra la #neumonía ... | 1 | 1 |
2 | RT @droso: @RodrigoReinaL con un tema muy inte... | 3 | 1 |
3 | "Every child is born a scientist. We’re all bo... | 198 | 1 |
4 | RT @ClintonGlobal Watch Pres. @BillClinton's f... | 0 | 0 |
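The where logic can be sketched on a toy Series (the counts below are hypothetical stand-ins for retweet_count):

```python
import numpy as np
import pandas as pd

counts = pd.Series([0, 1, 3, 198, 0])  # stand-in for df['retweet_count']

# Where the condition is True assign 0, otherwise assign 1
binary = np.where(counts == 0, 0, 1)
# array([0, 1, 1, 1, 0])
```

Note the order of the arguments: the first value (0) is used where the condition holds, the second (1) everywhere else.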
Let's also do the same for the favorite count.
df['favorites_binary'] = np.where(df['favorite_count']==0, 0, 1)
print "# of variables in dataframe:", len(df.columns)
print "# of tweets in dataframe:", len(df)
df[['content','favorite_count','favorites_binary']].head(5)
# of variables in dataframe: 57
# of tweets in dataframe: 26257
content | favorite_count | favorites_binary | |
---|---|---|---|
0 | @louloushive (Tweet 2) We encourage other empl... | 0 | 0 |
1 | ¿Sabes por qué la #vacuna contra la #neumonía ... | 0 | 0 |
2 | RT @droso: @RodrigoReinaL con un tema muy inte... | 0 | 0 |
3 | "Every child is born a scientist. We’re all bo... | 99 | 1 |
4 | RT @ClintonGlobal Watch Pres. @BillClinton's f... | 1 | 1 |
While we're at it, let's create three final binary variables indicating whether the tweet contains any hashtags, user mentions, or URLs, respectively.
df['hashtags_binary'] = np.where(df['entities_hashtags_count']==0, 0, 1)
df['mentions_binary'] = np.where(df['entities_mentions_count']==0, 0, 1)
df['URLs_binary'] = np.where(df['entities_urls_count']==0, 0, 1)
print "# of variables in dataframe:", len(df.columns)
print "# of tweets in dataframe:", len(df)
df[['content','entities_hashtags_count','hashtags_binary','entities_mentions_count','mentions_binary','entities_urls_count','URLs_binary']].head(5)
# of variables in dataframe: 60
# of tweets in dataframe: 26257
content | entities_hashtags_count | hashtags_binary | entities_mentions_count | mentions_binary | entities_urls_count | URLs_binary | |
---|---|---|---|---|---|---|---|
0 | @louloushive (Tweet 2) We encourage other empl... | 0 | 0 | 1 | 1 | 0 | 0 |
1 | ¿Sabes por qué la #vacuna contra la #neumonía ... | 2 | 1 | 0 | 0 | 0 | 0 |
2 | RT @droso: @RodrigoReinaL con un tema muy inte... | 1 | 1 | 2 | 1 | 1 | 1 |
3 | "Every child is born a scientist. We’re all bo... | 2 | 1 | 0 | 0 | 1 | 1 |
4 | RT @ClintonGlobal Watch Pres. @BillClinton's f... | 1 | 1 | 3 | 1 | 1 | 1 |
Below we can see that our six new binary variables have been included as new columns in our dataframe.
df.head(2)
rowid | query | tweet_id_str | inserted_date | language | coordinates | retweeted_status | created_at | month | year | content | from_user_screen_name | from_user_id | from_user_followers_count | from_user_friends_count | from_user_listed_count | from_user_favourites_count | from_user_statuses_count | from_user_description | from_user_location | from_user_created_at | retweet_count | favorite_count | entities_urls | entities_urls_count | entities_hashtags | entities_hashtags_count | entities_mentions | entities_mentions_count | in_reply_to_screen_name | in_reply_to_status_id | source | entities_expanded_urls | entities_media_count | media_expanded_url | media_url | media_type | video_link | photo_link | twitpic | num_characters | num_words | retweeted_user | retweeted_user_description | retweeted_user_screen_name | retweeted_user_followers_count | retweeted_user_listed_count | retweeted_user_statuses_count | retweeted_user_location | retweeted_tweet_created_at | Fortune_2012_rank | Company | CSR_sustainability | specific_project_initiative_area | English | RTs_binary | favorites_binary | hashtags_binary | mentions_binary | URLs_binary | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 67340 | humanavitality | 306897327585652736 | 2014-03-09 13:46:50.222857 | en | NaN | NaN | 2013-02-27 22:43:19.000000 | 2 | 2013 | @louloushive (Tweet 2) We encourage other empl... | humanavitality | 274041023 | 2859 | 440 | 38 | 25 | 1766 | This is the official Twitter account for Human... | NaN | Tue Mar 29 16:23:02 +0000 2011 | 0 | 0 | NaN | 0 | NaN | 0 | louloushive | 1 | louloushive | 3.062183e+17 | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 121 | 19 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 79 | Humana | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
1 | 39454 | FundacionPfizer | 308616393706844160 | 2014-03-09 13:38:20.679967 | es | NaN | NaN | 2013-03-04 16:34:17.000000 | 3 | 2013 | ¿Sabes por qué la #vacuna contra la #neumonía ... | FundacionPfizer | 188384056 | 2464 | 597 | 50 | 11 | 2400 | Noticias sobre Responsabilidad Social y Fundac... | México | Wed Sep 08 16:14:11 +0000 2010 | 1 | 0 | NaN | 0 | vacuna, neumonía | 2 | NaN | 0 | NaN | NaN | web | NaN | NaN | NaN | NaN | NaN | 0 | 0 | 0 | 138 | 20 | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | 40 | Pfizer | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
Before we actually start using our new data it is critical that we verify our data transformations worked. So let's take a couple of steps using the example of our new variable RTs_binary. First, we're expecting a new variable with values of only 0 or 1; value_counts() will let us know whether the new variable has the expected values.
df['RTs_binary'].value_counts()
1    14334
0    11923
dtype: int64
OK, that's perfect. But we should also double-check that no values of our old variable failed to get properly translated. We can run this check visually with PANDAS' crosstab command. As expected, we see that there are no instances where a retweet_count value of 0 has been assigned anything other than 0 on RTs_binary, and likewise no instances where a retweet_count value greater than 0 is assigned anything other than 1 on RTs_binary.
pd.crosstab(df['retweet_count'], df['RTs_binary'])
RTs_binary | 0 | 1 |
---|---|---|
retweet_count | ||
0 | 11923 | 0 |
1 | 0 | 5583 |
2 | 0 | 2908 |
3 | 0 | 1744 |
4 | 0 | 1067 |
5 | 0 | 710 |
6 | 0 | 470 |
7 | 0 | 339 |
8 | 0 | 236 |
9 | 0 | 174 |
10 | 0 | 122 |
11 | 0 | 120 |
12 | 0 | 103 |
13 | 0 | 68 |
14 | 0 | 62 |
15 | 0 | 42 |
16 | 0 | 42 |
17 | 0 | 38 |
18 | 0 | 38 |
19 | 0 | 27 |
20 | 0 | 26 |
21 | 0 | 15 |
22 | 0 | 16 |
23 | 0 | 18 |
24 | 0 | 12 |
25 | 0 | 17 |
26 | 0 | 13 |
27 | 0 | 13 |
28 | 0 | 8 |
29 | 0 | 6 |
... | ... | ... |
446 | 0 | 1 |
448 | 0 | 1 |
468 | 0 | 2 |
471 | 0 | 1 |
473 | 0 | 2 |
476 | 0 | 1 |
491 | 0 | 1 |
505 | 0 | 1 |
517 | 0 | 1 |
528 | 0 | 1 |
557 | 0 | 1 |
559 | 0 | 1 |
578 | 0 | 1 |
581 | 0 | 1 |
585 | 0 | 1 |
596 | 0 | 1 |
601 | 0 | 2 |
648 | 0 | 1 |
655 | 0 | 1 |
656 | 0 | 1 |
750 | 0 | 1 |
850 | 0 | 1 |
910 | 0 | 1 |
1113 | 0 | 1 |
1423 | 0 | 1 |
1549 | 0 | 1 |
1756 | 0 | 1 |
1899 | 0 | 1 |
2979 | 0 | 1 |
3719 | 0 | 1 |
190 rows × 2 columns
Alternatively, we could run a conditional version of the above crosstab to see the same thing in a condensed format. Everything is as expected: our new binary variable RTs_binary is a perfect binary representation of our original ratio-level retweet_count variable.
pd.crosstab(df['retweet_count']>0, df['RTs_binary'])
RTs_binary | 0 | 1 |
---|---|---|
retweet_count | ||
False | 11923 | 0 |
True | 0 | 14334 |
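The same verification can also be done programmatically with an assertion rather than by eye; a minimal sketch with hypothetical stand-in data:

```python
import numpy as np
import pandas as pd

retweet_count = pd.Series([0, 1, 3, 0, 198])       # stand-in for df['retweet_count']
rts_binary = np.where(retweet_count == 0, 0, 1)    # same transformation as above

# The binary variable should equal 1 exactly when the count is positive
check = ((retweet_count > 0).astype(int) == rts_binary).all()
```

If check is True, the transformation is consistent; an assertion like this is handy in longer pipelines where visually scanning a 190-row crosstab isn't practical.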
Let's now save a copy of this dataframe for future use.
df.to_pickle('Original 2013 CSR Tweets with 3 binary variables.pkl')
In this tutorial we have covered how to generate new variables from your existing data. Specifically, we have covered how to create dummy variables from an existing categorical variable. We have also covered how to generate binary variables indicating, respectively, the presence of URLs, user mentions, and hashtags, whether a tweet was written in English, and whether a tweet gets favorited or retweeted. Such data transformations are essential for moving beyond mere description to statistical analyses of the data.
This is intended to be merely an introduction. There are many other methods for generating new variables. For some additional recipes, see my PANDAS cookbook here.
For more Notebooks as well as additional Python and Big Data tutorials, please visit http://social-metrics.org or follow me on Twitter @gregorysaxton