by Leon Yin
2017-11-22
updated 2018-03-06
In this Jupyter Notebook we will
View this on Github.
View this on NBViewer.
Visit my lab's website
You can find the dataset hosted here:
clean_os_url = 'https://raw.githubusercontent.com/yinleon/fake_news/master/data/sources_clean.tsv'
It can be read into a Pandas Dataframe directly from Github.
import pandas as pd
df_os = pd.read_csv(clean_os_url, sep='\t')
df_os.head(3)
domain | bias | clickbait | conspiracy | fake | hate | junksci | political | reliable | rumor | satire | state | unreliable | notes | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 100percentfedup.com | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NaN |
1 | 16wmpo.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | http://www.politifact.com/punditfact/article/2... |
2 | 21stcenturywire.com | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NaN |
OpenSources is a "Professionally curated list of online sources, available free for public use" by Melissa Zimdars and colleagues. It contains websites labeled with categories spanning state-sponsored media outlets to conspiracy-theory rumor mills. It is a comprehensive resource for researchers and technologists interested in propaganda and mis/disinformation.
The OpenSources project is, in fact, open source, and available in JSON and CSV formats.
One issue, however, is that the data is entered by hand and not readily machine-readable.
Let's fix that with some good ol'-fashioned data wrangling.
import numpy as np
import pandas as pd
os_csv_url = 'https://raw.githubusercontent.com/BigMcLargeHuge/opensources/master/sources/sources.csv'
df = pd.read_csv(os_csv_url)
df.head(10)
Unnamed: 0 | type | 2nd type | 3rd type | Source Notes (things to know?) | Unnamed: 5 | |
---|---|---|---|---|---|---|
0 | 100percentfedup.com | bias | NaN | NaN | NaN | NaN |
1 | 365usanews.com | bias | conspiracy | NaN | NaN | NaN |
2 | 4threvolutionarywar.wordpress.com | bias | conspiracy | NaN | NaN | NaN |
3 | aheadoftheherd.com | bias | NaN | NaN | false quotes regarding banking, heavily promot... | NaN |
4 | americablog.com | bias | clickbait | NaN | domain for sale | NaN |
5 | americanlookout.com | bias | clickbait | NaN | NaN | NaN |
6 | americanpatriotdaily.com | bias | clickbait | bias | NaN | NaN |
7 | americanthinker.com | bias | NaN | NaN | sites both reliable/not reliable sources, mix ... | NaN |
8 | americasfreedomfighters.com | bias | clickbait | NaN | NaN | NaN |
9 | AmmoLand.com | bias | NaN | NaN | NaN | NaN |
There's no column name, and it looks like a lot of NaN (not a number) values!
We can see all distinct values in that column by indexing the dataframe (df) with the column name and calling the built-in unique function.
df['Unnamed: 5'].unique()
array([nan, 'I would classify this as "religious clickbait," with all of its reporting coming through a fundamentalist Christian filter-- thus the "extreme bias."', ' '], dtype=object)
There is only one unique sentence, plus a stray space (' ')!
We can find the rows containing that stray-space value by filtering the dataframe.
df[df['Unnamed: 5'] == df['Unnamed: 5'].unique()[-1]]
Unnamed: 0 | type | 2nd type | 3rd type | Source Notes (things to know?) | Unnamed: 5 | |
---|---|---|---|---|---|---|
683 | christwire.org | satire | NaN | NaN | NaN | |
831 | wikileaks.org | unreliable | NaN | NaN | Increasingly wikileaks is being accused of spr... | |
This works by filtering the DataFrame wherever the condition is true.
In the above case, we're selecting the rows where the unnamed column holds that stray space.
Based on these outliers, my best guess is a parser error: the Source Notes (things to know?) field contained a comma, which was parsed as a field delimiter!
df[df['Unnamed: 5'] == df['Unnamed: 5'].unique()[-1]]['Source Notes (things to know?)'].tolist()
[nan, 'Increasingly wikileaks is being accused of spreading misinformation']
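To see how that could happen, here's a minimal sketch of how an unquoted comma inside the notes field spills text into an extra column. The row below is made up for illustration, not taken from the actual source file:

```python
import csv
import io

# A hypothetical line where the 5-field notes column contains an unquoted comma.
raw = 'wikileaks.org,unreliable,,,Increasingly accused of misinformation, extreme bias\n'

row = next(csv.reader(io.StringIO(raw)))
print(len(row))   # 6 fields instead of 5: the note was split at the comma
print(row[-1])    # the orphaned tail of the note
```

Pandas then has no header for that orphaned sixth field and labels it "Unnamed: 5".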
I'm sure this zany issue won't be in the JSON file they've provided!
os_json_raw_url = 'https://raw.githubusercontent.com/BigMcLargeHuge/opensources/master/sources/sources.json'
df = pd.read_json(os_json_raw_url, orient='index')
df.index.name = 'domain'
Let's simplify these long column names into something short and sweet.
df.columns
Index(['2nd type', '3rd type', 'Source Notes (things to know?)', 'type'], dtype='object')
We can use a list comprehension to loop through each column name:
[c for c in df.columns]
['2nd type', '3rd type', 'Source Notes (things to know?)', 'type']
... and use the dictionary to replace keys on the left (Source Notes (things to know?)) with values on the right (notes).
replace_col = {'Source Notes (things to know?)' : 'notes'}
[replace_col.get(c, c) for c in df.columns]
['2nd type', '3rd type', 'notes', 'type']
When you use a dictionary's built-in get method, it returns the value for the given key, or the default argument (here, c itself) if the key isn't in the dictionary.
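A quick illustration of that fallback behavior:

```python
replace_col = {'Source Notes (things to know?)': 'notes'}

# Key present: the mapped value comes back.
print(replace_col.get('Source Notes (things to know?)', 'fallback'))  # notes
# Key absent: the default (the original column name) comes back unchanged.
print(replace_col.get('type', 'type'))  # type
```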
df.columns = [replace_col.get(c, c) for c in df.columns]
Let's also reorder the columns for readability.
df = df[['type', '2nd type', '3rd type', 'notes']]
df.head(10)
type | 2nd type | 3rd type | notes | |
---|---|---|---|---|
domain | ||||
100percentfedup.com | bias | |||
16wmpo.com | fake | http://www.politifact.com/punditfact/article/2... | ||
21stcenturywire.com | conspiracy | |||
24newsflash.com | fake | |||
24wpn.com | fake | http://www.politifact.com/punditfact/article/2... | ||
365usanews.com | bias | conspiracy | ||
4threvolutionarywar.wordpress.com | bias | conspiracy | ||
70news.wordpress.com | fake | |||
82.221.129.208 | conspiracy | fake | ||
Acting-Man.com | unreliable | conspiracy | publishes articles denying climate change |
If we look at all the available categories, we'll see some inconsistencies:
df['type'].unique()
array(['bias', 'fake', 'conspiracy', 'unreliable', 'junksci', 'political', 'hate', 'fake news', 'clickbait', 'satire', 'rumor', 'reliable', 'Conspiracy', 'rumor ', 'fake ', 'state'], dtype=object)
df['2nd type'].unique()
array(['', 'conspiracy', 'fake', 'bias', 'fake news', 'clickbait', 'hate', 'unrealiable', 'unreliable', 'rumor', 'reliable', 'satire', 'junksci', 'political', 'Fake', 'blog', 'satirical', 'state'], dtype=object)
df['3rd type'].unique()
array(['', 'fake', 'unreliable', 'clickbait', 'satire', 'bias', 'rumor', 'hate', 'Political', 'conspiracy', 'political', 'junksci', ' unreliable'], dtype=object)
Some categories here are redundant or misspelled:
see "fake" and "fake news", "unrealiable" and "unreliable."
We can use a dictionary again to replace the keys on the left ('fake news') with the values on the right ('fake').
replace_vals = {
'fake news' : 'fake',
'satirical' : 'satire',
'unrealiable': 'unreliable',
'blog' : np.nan
}
We can group all our data preprocessing in one function.
def clean_type(value):
    '''
    Cleans a single type value (str).
    If the value is truthy and not null,
    the value is cast to a string,
    leading and trailing whitespace is stripped,
    it is cast to lower case,
    and redundant values are replaced.
    Returns either None or a cleaned string.
    '''
    # Note: `value != np.nan` is always True (NaN never equals anything),
    # so we use pd.notnull for the null check.
    if value and pd.notnull(value):
        value = str(value)
        value = value.strip().lower()
        value = replace_vals.get(value, value)
        return value
    else:
        return None
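A quick sanity check of clean_type on a few sample values. This is a self-contained repeat of the definitions above; the null test here uses pd.notnull, since comparing with != np.nan is always True:

```python
import numpy as np
import pandas as pd

replace_vals = {
    'fake news': 'fake',
    'satirical': 'satire',
    'unrealiable': 'unreliable',
    'blog': np.nan,
}

def clean_type(value):
    '''Strip whitespace, lowercase, and collapse redundant labels.'''
    if value and pd.notnull(value):
        value = str(value).strip().lower()
        return replace_vals.get(value, value)
    return None

print(clean_type('Fake News '))  # 'fake'
print(clean_type('Conspiracy'))  # 'conspiracy'
print(clean_type(0))             # None (filled-in NaNs become 0)
```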
df.fillna(value=0, inplace=True)
We'll now loop through each of the columns,
and run the clean_type
function on all the values in each column.
for col in ['type', '2nd type', '3rd type']:
    df[col] = df[col].apply(clean_type)
df['type'].unique()
array(['bias', 'fake', 'conspiracy', 'unreliable', 'junksci', 'political', 'hate', 'clickbait', 'satire', 'rumor', 'reliable', 'state'], dtype=object)
One-hot encoding is used to make a sparse matrix from a single categorical column.
Let's use this toy example to understand:
df_example = pd.DataFrame([
{'color' : 'blue'},
{'color' : 'black'},
{'color' : 'red'},
])
df_example
color | |
---|---|
0 | blue |
1 | black |
2 | red |
pd.get_dummies(df_example)
color_black | color_blue | color_red | |
---|---|---|---|
0 | 0 | 1 | 0 |
1 | 1 | 0 | 0 |
2 | 0 | 0 | 1 |
We just made the data machine-readable by transforming a categorical column into three numerical columns.
One-hot encoding converts one-column to many,
but we have 3 columns we need to encode!
One possibility would be to one-hot encode each column into three sparse matrices and then add them up.
However, not all columns share the same categories, so we'd get three differently shaped one-hot encoded matrices.
We can fix that by collecting all possible categories and appending them to each column before it gets one-hot encoded.
We can collect all the categories across the three columns using pd.unique.
all_hot_encodings = pd.Series(pd.unique(df[['type', '2nd type', '3rd type']].values.ravel('K')))
all_hot_encodings
0 bias 1 fake 2 conspiracy 3 unreliable 4 junksci 5 political 6 hate 7 clickbait 8 satire 9 rumor 10 reliable 11 state 12 None 13 NaN dtype: object
Flatten all the categories across the three columns using ravel, which transforms this:
df[['type', '2nd type', '3rd type']].values
array([['bias', None, None], ['fake', None, None], ['conspiracy', None, None], ..., ['clickbait', 'junksci', None], ['conspiracy', None, None], ['conspiracy', None, None]], dtype=object)
into this:
df[['type', '2nd type', '3rd type']].values.ravel('K')
array(['bias', 'fake', 'conspiracy', ..., None, None, None], dtype=object)
And then we use pd.unique to reduce the flattened array to its unique values, which we wrap in a Series.
Now let's append the Series of unique categories to each column, and one-hot encode them using get_dummies.
dum1 = pd.get_dummies(df['type'].append(all_hot_encodings))
dum2 = pd.get_dummies(df['2nd type'].append(all_hot_encodings))
dum3 = pd.get_dummies(df['3rd type'].append(all_hot_encodings))
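As an aside, newer versions of pandas offer another route to the same alignment (a sketch with toy data, not what this notebook does): encode each column against a fixed category list via pd.Categorical, so every frame gets identical columns without appending rows. Note that Series.append itself was removed in pandas 2.0 in favor of pd.concat.

```python
import pandas as pd

categories = ['bias', 'conspiracy', 'fake']  # toy category list

s1 = pd.Series(['bias', 'fake'])
s2 = pd.Series(['conspiracy', None])

# Fixing the categories up front guarantees identical one-hot columns,
# even for values (or NaNs) missing from a given column.
d1 = pd.get_dummies(pd.Categorical(s1, categories=categories))
d2 = pd.get_dummies(pd.Categorical(s2, categories=categories))

print(list(d1.columns) == list(d2.columns))  # True
```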
Let's take the elementwise maximum across the three one-hot encoded frames.
By doing so we combine the three columns' information into one dataframe.
__d = dum1.where(dum1 > dum2, dum2)
__d = __d.where(__d > dum3, dum3)
Taking a sum is also an option, but in some rows I noticed duplicate categories between columns.
Summing those would produce "one-hot" values of 2 or 3!
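Here's why the elementwise maximum is the safer combination, using toy frames for illustration: where the same category appears in two type columns, max keeps the flag at 1, while a sum yields 2.

```python
import pandas as pd

a = pd.DataFrame({'fake': [1, 0], 'bias': [0, 1]})
b = pd.DataFrame({'fake': [1, 0], 'bias': [0, 0]})  # 'fake' duplicated in row 0

summed = a + b              # duplicate flags become 2
maxed = a.where(a > b, b)   # keep a where it is larger, else take b

print(summed['fake'].tolist())  # [2, 0]
print(maxed['fake'].tolist())   # [1, 0]
```

An equivalent one-liner would be summing the frames and then clipping with .clip(upper=1).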
Lastly, let's remove the rows created by the unique categorical values we appended. Here's what those appended rows look like:
__d.tail(len(all_hot_encodings) - 1)
bias | clickbait | conspiracy | fake | hate | junksci | political | reliable | rumor | satire | state | unreliable | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
2 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
4 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
5 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
6 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
7 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
13 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
dummies = __d.iloc[:-len(all_hot_encodings)]
Now we have a wonderful new dataset!
dummies.head(10)
bias | clickbait | conspiracy | fake | hate | junksci | political | reliable | rumor | satire | state | unreliable | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
100percentfedup.com | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
16wmpo.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
21stcenturywire.com | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
24newsflash.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
24wpn.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
365usanews.com | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
4threvolutionarywar.wordpress.com | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
70news.wordpress.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
82.221.129.208 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Acting-Man.com | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
Let's add the notes to this new dataset by concatenating dummies with df['notes'] along the columns (axis=1).
df_os = pd.concat([dummies, df['notes']], axis=1)
df_os.head(10)
bias | clickbait | conspiracy | fake | hate | junksci | political | reliable | rumor | satire | state | unreliable | notes | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
domain | |||||||||||||
100percentfedup.com | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
16wmpo.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | http://www.politifact.com/punditfact/article/2... |
21stcenturywire.com | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
24newsflash.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
24wpn.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | http://www.politifact.com/punditfact/article/2... |
365usanews.com | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
4threvolutionarywar.wordpress.com | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
70news.wordpress.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
82.221.129.208 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
Acting-Man.com | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | publishes articles denying climate change |
With one-hot encoding, the OpenSources dataset is fast and easy to filter for domains that are considered fake news.
df_os[df_os['fake'] == 1].index
Index(['16wmpo.com', '24newsflash.com', '24wpn.com', '70news.wordpress.com', '82.221.129.208', 'Amposts.com', 'BB4SP.com', 'DIYhours.net', 'DeadlyClear.wordpress.com', 'DonaldTrumpPotus45.com', ... 'washingtonpost.com.co', 'webdaily.com', 'weeklyworldnews.com', 'worldpoliticsnow.com', 'worldpoliticsus.com', 'worldrumor.com', 'worldstoriestoday.com', 'wtoe5news.com', 'yesimright.com', 'yourfunpage.com'], dtype='object', name='domain', length=271)
We can see how many domains fall under each category:
df_os.sum(axis=0).sort_values(ascending=False)
fake 271 bias 228 conspiracy 201 satire 126 unreliable 114 clickbait 97 political 73 junksci 63 hate 39 rumor 25 reliable 9 state 2 dtype: int64
We can also look at all sites with .org in their domain.
df_os[df_os.index.str.contains('.org')].sample(10, random_state=42)
bias | clickbait | conspiracy | fake | hate | junksci | political | reliable | rumor | satire | state | unreliable | notes | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
domain | |||||||||||||
heartland.org | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | http://www.sourcewatch.org/index.php/Heartland... |
ExperimentalVaccines.org | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | |
heritage.org | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | |
breakpoint.org | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | |
bigbluevision.org | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
witscience.org | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | |
freedomworks.org | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | |
adflegal.org/media | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://www.splcenter.org/fighting-hate/extrem... |
moonofalabama.org | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
thefreepatriot.org | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
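One caveat, shown with hypothetical domains: str.contains interprets its pattern as a regular expression, so the unescaped dot in '.org' matches any character followed by "org", which can catch non-.org sites. Escaping the dot and anchoring makes the filter strict:

```python
import pandas as pd

idx = pd.Index(['heartland.org', 'reorganized.com', 'heritage.org'])

loose = idx[idx.str.contains('.org')]      # '.' matches any character
strict = idx[idx.str.contains(r'\.org$')]  # a literal '.org' suffix only

print(list(loose))   # 'reorganized.com' sneaks in via 'eorg'
print(list(strict))  # only the true .org domains
```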
I see a "/media", is the rest of the site ok?
Let's clean up the domain names a bit...
def preprocess_domains(value):
    '''
    Removes paths from domains by splitting on forward slashes,
    removes www. from domains,
    and returns a lowercased, cleaned-up domain.
    '''
    value = value.split('/')[0]
    # note: this removes 'www.' wherever it appears, not just as a prefix
    value = value.replace('www.', '')
    return value.lower()
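A quick check of the function against the two domain forms we saw above:

```python
def preprocess_domains(value):
    '''Drop any path, strip "www.", and lowercase the domain.'''
    value = value.split('/')[0]
    value = value.replace('www.', '')
    return value.lower()

print(preprocess_domains('adflegal.org/media'))  # 'adflegal.org'
print(preprocess_domains('Acting-Man.com'))      # 'acting-man.com'
```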
Because the index is an Index rather than a Series or DataFrame (the only things apply works on), we can use map, or a generator expression, to apply the preprocess_domains function to each element in the index.
df_os.index = df_os.index.map(preprocess_domains)
rename_col = {'index' : 'domain'}
df_os.reset_index(inplace=True)
df_os.columns = [rename_col.get(c, c) for c in df_os.columns]
Here is the finished product:
df_os.head(3)
domain | bias | clickbait | conspiracy | fake | hate | junksci | political | reliable | rumor | satire | state | unreliable | notes | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 100percentfedup.com | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | |
1 | 16wmpo.com | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | http://www.politifact.com/punditfact/article/2... |
2 | 21stcenturywire.com | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Let's use pandas' to_csv to write this cleaned-up file as a tab-separated values (TSV) file.
df_os.to_csv('data/sources_clean.tsv', sep='\t')
OpenSources is a great resource for researchers and technologists.
If you are aware of other projects that have categorized the online news ecosystem, I'd love to hear about them.
Let's recap what we've covered:
In the next notebook, we'll use this new dataset to analyze links shared on Twitter. We can begin to build a profile of how sites categorized from open sources are used during viral campaigns.
Rishab Nithyanand and Robyn Caplan from D&S.
Also my colleague Andrew Guess, who introduced me to links as data.
Leon Yin is an engineer and scientist at NYU's Social Media and Political Participation Lab and the Center for Data Science.
He is interested in using images and links as data, and finding odd applications for cutting-edge machine learning techniques.