Parsing OpenSources

by Leon Yin
2017-11-22
updated 2018-03-06

What is this?

In this Jupyter Notebook we will

  1. download a real-world dataset,
  2. clean up human-entered text (twice),
  3. one-hot encode categories of misleading websites,
  4. use Pandas to analyze these sites, and
  5. make a machine-readable file.

View this on Github.
View this on NBViewer.
Visit my lab's website.

You can find the dataset hosted here:

In [1]:
clean_os_url = 'https://raw.githubusercontent.com/yinleon/fake_news/master/data/sources_clean.tsv'

It can be read into a Pandas Dataframe directly from Github.

In [2]:
import pandas as pd
df_os = pd.read_csv(clean_os_url, sep='\t')
df_os.head(3)
Out[2]:
domain bias clickbait conspiracy fake hate junksci political reliable rumor satire state unreliable notes
0 100percentfedup.com 1 0 0 0 0 0 0 0 0 0 0 0 NaN
1 16wmpo.com 0 0 0 1 0 0 0 0 0 0 0 0 http://www.politifact.com/punditfact/article/2...
2 21stcenturywire.com 0 0 1 0 0 0 0 0 0 0 0 0 NaN

Intro

OpenSources describes itself as "professionally curated lists of online sources, available free for public use," and is maintained by Melissa Zimdars and colleagues. It contains websites labeled with categories spanning state-sponsored media outlets to conspiracy-theory rumor mills. It is a comprehensive resource for researchers and technologists interested in propaganda and mis/disinformation.

The OpenSources project is, in fact, open source, and available in JSON and CSV format.
One issue, however, is that the data is entered by hand and not readily machine-readable.

Let's fix that with some good ol'-fashioned data wrangling.

Let's Code!

In [3]:
import numpy as np
import pandas as pd
In [4]:
os_csv_url = 'https://raw.githubusercontent.com/BigMcLargeHuge/opensources/master/sources/sources.csv'
df = pd.read_csv(os_csv_url)
df.head(10)
Out[4]:
Unnamed: 0 type 2nd type 3rd type Source Notes (things to know?) Unnamed: 5
0 100percentfedup.com bias NaN NaN NaN NaN
1 365usanews.com bias conspiracy NaN NaN NaN
2 4threvolutionarywar.wordpress.com bias conspiracy NaN NaN NaN
3 aheadoftheherd.com bias NaN NaN false quotes regarding banking, heavily promot... NaN
4 americablog.com bias clickbait NaN domain for sale NaN
5 americanlookout.com bias clickbait NaN NaN NaN
6 americanpatriotdaily.com bias clickbait bias NaN NaN
7 americanthinker.com bias NaN NaN sites both reliable/not reliable sources, mix ... NaN
8 americasfreedomfighters.com bias clickbait NaN NaN NaN
9 AmmoLand.com bias NaN NaN NaN NaN

What's going on in Unnamed: 5?

There's no column name, and it looks like a lot of NaN (not a number) values!
We can see all the distinct values in that column by indexing the dataframe (df)
with the column name and calling the built-in unique function.

In [5]:
df['Unnamed: 5'].unique()
Out[5]:
array([nan,
       'I would classify this as "religious clickbait," with all of its reporting coming through a fundamentalist Christian filter-- thus the "extreme bias."',
       ' '], dtype=object)

There is only one unique sentence!
We can find the rows holding these stray values by filtering the dataframe.

In [6]:
df[df['Unnamed: 5'] == df['Unnamed: 5'].unique()[-1]]
Out[6]:
Unnamed: 0 type 2nd type 3rd type Source Notes (things to know?) Unnamed: 5
683 christwire.org satire NaN NaN NaN
831 wikileaks.org unreliable NaN NaN Increasingly wikileaks is being accused of spr...

This works by filtering the Dataframe, keeping only the rows wherever the condition is true.
In the above case, we're looking for rows where the unnamed column contains this justification for an "extreme bias" website.

Based on this outlier, my best guess is a parsing error: the Source Notes (things to know?) field contains a comma, which the CSV parser treated as a delimiter!
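Here's a minimal sketch of that failure mode with toy data (not the real file): a trailing comma in the header row produces an empty name that pandas reports as an "Unnamed" column, and an unquoted comma in a notes field spills into it.

import io

toy = ('domain,type,notes,\n'                   # trailing comma -> empty header -> 'Unnamed: 3'
       'a.com,fake,all good here,\n'
       'b.com,bias,first half, second half\n')  # unquoted comma splits the notes field
pd.read_csv(io.StringIO(toy))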

In [7]:
df[df['Unnamed: 5'] == df['Unnamed: 5'].unique()[-1]]['Source Notes (things to know?)'].tolist()
Out[7]:
[nan, 'Increasingly wikileaks is being accused of spreading misinformation']

I'm sure this zany issue won't be in the json file they've provided!

In [8]:
os_json_raw_url = 'https://raw.githubusercontent.com/BigMcLargeHuge/opensources/master/sources/sources.json'
df = pd.read_json(os_json_raw_url, orient='index')
In [9]:
df.index.name = 'domain'

Let's simplify these long column names into something short and sweet.

In [10]:
df.columns
Out[10]:
Index(['2nd type', '3rd type', 'Source Notes (things to know?)', 'type'], dtype='object')

We can use a list comprehension to loop through each column name

In [11]:
[c for c in df.columns]
Out[11]:
['2nd type', '3rd type', 'Source Notes (things to know?)', 'type']

... and use a dictionary to replace keys on the left (Source Notes (things to know?)) with values on the right (notes).

In [12]:
replace_col = {'Source Notes (things to know?)' : 'notes'}
In [13]:
[replace_col.get(c, c) for c in df.columns]
Out[13]:
['2nd type', '3rd type', 'notes', 'type']

When you use a dictionary's built-in get function, it either returns the value stored under the given key (c), or the fallback argument (here also c) if the key isn't in the dictionary.
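For instance (toy lookups, just to show the fallback behavior):

replace_col.get('Source Notes (things to know?)', 'fallback')  # 'notes' -- key found, fallback ignored
replace_col.get('type', 'type')                                # 'type' -- not a key, fallback returned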

In [14]:
df.columns = [replace_col.get(c, c) for c in df.columns]
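(As an aside, pandas' built-in rename method does the same thing in one step; unmapped names pass through untouched, so this is a drop-in alternative to the comprehension above:)

df = df.rename(columns=replace_col)  # only 'Source Notes (things to know?)' is renamed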

Let's also reorder the columns for readability.

In [15]:
df = df[['type', '2nd type', '3rd type', 'notes']]
In [16]:
df.head(10)
Out[16]:
type 2nd type 3rd type notes
domain
100percentfedup.com bias
16wmpo.com fake http://www.politifact.com/punditfact/article/2...
21stcenturywire.com conspiracy
24newsflash.com fake
24wpn.com fake http://www.politifact.com/punditfact/article/2...
365usanews.com bias conspiracy
4threvolutionarywar.wordpress.com bias conspiracy
70news.wordpress.com fake
82.221.129.208 conspiracy fake
Acting-Man.com unreliable conspiracy publishes articles denying climate change

Data Processing - Making categories standard

If we look at all the available categories, we'll see some inconsistencies:

In [17]:
df['type'].unique()
Out[17]:
array(['bias', 'fake', 'conspiracy', 'unreliable', 'junksci', 'political',
       'hate', 'fake news', 'clickbait', 'satire', 'rumor', 'reliable',
       'Conspiracy', 'rumor ', 'fake ', 'state'], dtype=object)
In [18]:
df['2nd type'].unique()
Out[18]:
array(['', 'conspiracy', 'fake', 'bias', 'fake news', 'clickbait', 'hate',
       'unrealiable', 'unreliable', 'rumor', 'reliable', 'satire',
       'junksci', 'political', 'Fake', 'blog', 'satirical', 'state'],
      dtype=object)
In [19]:
df['3rd type'].unique()
Out[19]:
array(['', 'fake', 'unreliable', 'clickbait', 'satire', 'bias', 'rumor',
       'hate', 'Political', 'conspiracy', 'political', 'junksci',
       ' unreliable'], dtype=object)

Some categories here are redundant, or misspelt.

See "fake" and "fake news", or "unrealiable" and "unreliable."

We can use a dictionary again to replace keys on the left (fake news) with values on the right (fake).

In [20]:
replace_vals = {
    'fake news' : 'fake',
    'satirical' : 'satire',
    'unrealiable': 'unreliable',
    'blog' : np.nan
}

We can group all our data preprocessing in one function.

In [21]:
def clean_type(value):
    '''
    This function cleans various type values (str).
    
    If the value is not null,
    it is cast to a string,
    leading and trailing whitespace is stripped,
    it is lowercased,
    and redundant values are replaced.
    
    Returns either None, or a cleaned string.
    '''
    if value and not pd.isnull(value):  # NaN is truthy, so check for it explicitly
        value = str(value)
        value = value.strip().lower()
        value = replace_vals.get(value, value)
        return value
    else:
        return None
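A few quick sanity checks on hypothetical inputs, mirroring the messy values we saw above:

clean_type('fake news')   # 'fake' -- redundant label collapsed
clean_type('Conspiracy')  # 'conspiracy' -- lowercased
clean_type('rumor ')      # 'rumor' -- trailing whitespace stripped
clean_type(0)             # None -- the falsy placeholder we use for missing entries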
In [22]:
df.fillna(value=0, inplace=True)  # fill NaNs with a falsy placeholder so clean_type treats them as missing

We'll now loop through each of the columns,
and run the clean_type function on all the values in each column.

In [23]:
for col in ['type', '2nd type', '3rd type']:
    df[col] = df[col].apply(clean_type)
In [24]:
df['type'].unique()
Out[24]:
array(['bias', 'fake', 'conspiracy', 'unreliable', 'junksci', 'political',
       'hate', 'clickbait', 'satire', 'rumor', 'reliable', 'state'],
      dtype=object)

One-Hot Encoding

One-hot encoding is used to make a sparse matrix from a single categorical column.
Let's use this toy example to understand:

In [25]:
df_example = pd.DataFrame([
    {'color' : 'blue'},
    {'color' : 'black'},
    {'color' : 'red'},
    
])

df_example
Out[25]:
color
0 blue
1 black
2 red
In [26]:
pd.get_dummies(df_example)
Out[26]:
color_black color_blue color_red
0 0 1 0
1 1 0 0
2 0 0 1

We just made the data machine-readable by transforming a categorical column into three numerical columns.

We're going to do the same for each website category in OpenSources!

Problem 1:

One-hot encoding converts one column into many,
but we have 3 columns we need to encode!
One possibility would be to one-hot encode each column into three sparse matrices, and then add them up.

Problem 2:

However, not all columns share the same categories, so we'd get three different one-hot encoded sparse matrices; the toy example below shows the mismatch.
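A minimal illustration with two hypothetical columns:

a = pd.Series(['fake', 'bias'])
b = pd.Series(['satire', 'bias'])
pd.get_dummies(a).columns  # Index(['bias', 'fake'], dtype='object')
pd.get_dummies(b).columns  # Index(['bias', 'satire'], dtype='object')

The two encodings have different columns, so they can't simply be added together.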

Answer?

We can fix that by collecting all possible categories and appending them to each column before it gets one-hot encoded.

We can collect all the categories across the three columns using pd.unique.

In [27]:
all_hot_encodings = pd.Series(pd.unique(df[['type', '2nd type', '3rd type']].values.ravel('K')))
In [28]:
all_hot_encodings
Out[28]:
0           bias
1           fake
2     conspiracy
3     unreliable
4        junksci
5      political
6           hate
7      clickbait
8         satire
9          rumor
10      reliable
11         state
12          None
13           NaN
dtype: object

What did we just do?

We flattened all the categories across the three columns using ravel ('K' tells numpy to read the values in memory order), which transforms this:

In [29]:
df[['type', '2nd type', '3rd type']].values
Out[29]:
array([['bias', None, None],
       ['fake', None, None],
       ['conspiracy', None, None],
       ...,
       ['clickbait', 'junksci', None],
       ['conspiracy', None, None],
       ['conspiracy', None, None]], dtype=object)

into this:

In [30]:
df[['type', '2nd type', '3rd type']].values.ravel('K')
Out[30]:
array(['bias', 'fake', 'conspiracy', ..., None, None, None], dtype=object)

And then we use pd.unique to reduce the flattened array to its unique values.
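Note that pd.unique does not drop missing values, which is why None and NaN show up at the end of the Series above. A tiny example:

pd.unique(np.array(['fake', None, 'fake']))  # array(['fake', None], dtype=object)

Those missing-value entries are harmless here (get_dummies ignores them by default), and we'll trim the appended rows off after encoding anyway.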

Time to encode

Now let's append the Series of unique categories to each column, and one-hot encode them using get_dummies.

In [31]:
dum1 = pd.get_dummies(df['type'].append(all_hot_encodings))
dum2 = pd.get_dummies(df['2nd type'].append(all_hot_encodings))
dum3 = pd.get_dummies(df['3rd type'].append(all_hot_encodings))
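(One hedge for readers on newer pandas: Series.append was removed in pandas 2.0, and pd.concat is the drop-in equivalent:)

dum1 = pd.get_dummies(pd.concat([df['type'], all_hot_encodings]))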

Let's take the element-wise maximum across the three one-hot encoded DataFrames.
By doing so we combine the three columns' information into one dataframe.

In [32]:
__d = dum1.where(dum1 > dum2, dum2)
__d = __d.where(__d > dum3, dum3)

Why not take the sum?

Taking a sum is also an option, but some rows list the same category in more than one column.
Summing would leave those one-hot encoded cells at 2 or 3 instead of 1!
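(If you'd rather sum, clipping the result handles those duplicates; a minimal alternative sketch:)

__d = (dum1 + dum2 + dum3).clip(upper=1)  # any 2s or 3s are capped back to 1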

Lastly, let's remove the leftover rows from the unique category values we appended. First, here's what those appended rows look like:

In [33]:
__d.tail(len(all_hot_encodings) - 1)
Out[33]:
bias clickbait conspiracy fake hate junksci political reliable rumor satire state unreliable
1 0 0 0 1 0 0 0 0 0 0 0 0
2 0 0 1 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 0 0 1
4 0 0 0 0 0 1 0 0 0 0 0 0
5 0 0 0 0 0 0 1 0 0 0 0 0
6 0 0 0 0 1 0 0 0 0 0 0 0
7 0 1 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 1 0 0
9 0 0 0 0 0 0 0 0 1 0 0 0
10 0 0 0 0 0 0 0 1 0 0 0 0
11 0 0 0 0 0 0 0 0 0 0 1 0
12 0 0 0 0 0 0 0 0 0 0 0 0
13 0 0 0 0 0 0 0 0 0 0 0 0
In [34]:
dummies = __d.iloc[:-len(all_hot_encodings)]

Now we have a wonderful new dataset!

In [35]:
dummies.head(10)
Out[35]:
bias clickbait conspiracy fake hate junksci political reliable rumor satire state unreliable
100percentfedup.com 1 0 0 0 0 0 0 0 0 0 0 0
16wmpo.com 0 0 0 1 0 0 0 0 0 0 0 0
21stcenturywire.com 0 0 1 0 0 0 0 0 0 0 0 0
24newsflash.com 0 0 0 1 0 0 0 0 0 0 0 0
24wpn.com 0 0 0 1 0 0 0 0 0 0 0 0
365usanews.com 1 0 1 0 0 0 0 0 0 0 0 0
4threvolutionarywar.wordpress.com 1 0 1 0 0 0 0 0 0 0 0 0
70news.wordpress.com 0 0 0 1 0 0 0 0 0 0 0 0
82.221.129.208 0 0 1 1 0 0 0 0 0 0 0 0
Acting-Man.com 0 0 1 0 0 0 0 0 0 0 0 1

Let's add the notes to this new dataset by concatenating dummies with df['notes'] column-wise (axis=1).

In [36]:
df_os = pd.concat([dummies, df['notes']], axis=1)
In [37]:
df_os.head(10)
Out[37]:
bias clickbait conspiracy fake hate junksci political reliable rumor satire state unreliable notes
domain
100percentfedup.com 1 0 0 0 0 0 0 0 0 0 0 0
16wmpo.com 0 0 0 1 0 0 0 0 0 0 0 0 http://www.politifact.com/punditfact/article/2...
21stcenturywire.com 0 0 1 0 0 0 0 0 0 0 0 0
24newsflash.com 0 0 0 1 0 0 0 0 0 0 0 0
24wpn.com 0 0 0 1 0 0 0 0 0 0 0 0 http://www.politifact.com/punditfact/article/2...
365usanews.com 1 0 1 0 0 0 0 0 0 0 0 0
4threvolutionarywar.wordpress.com 1 0 1 0 0 0 0 0 0 0 0 0
70news.wordpress.com 0 0 0 1 0 0 0 0 0 0 0 0
82.221.129.208 0 0 1 1 0 0 0 0 0 0 0 0
Acting-Man.com 0 0 1 0 0 0 0 0 0 0 0 1 publishes articles denying climate change

Analysis

With one-hot encoding, the OpenSources dataset is fast and easy to filter for domains that are considered fake news.

In [38]:
df_os[df_os['fake'] == 1].index
Out[38]:
Index(['16wmpo.com', '24newsflash.com', '24wpn.com', '70news.wordpress.com',
       '82.221.129.208', 'Amposts.com', 'BB4SP.com', 'DIYhours.net',
       'DeadlyClear.wordpress.com', 'DonaldTrumpPotus45.com',
       ...
       'washingtonpost.com.co', 'webdaily.com', 'weeklyworldnews.com',
       'worldpoliticsnow.com', 'worldpoliticsus.com', 'worldrumor.com',
       'worldstoriestoday.com', 'wtoe5news.com', 'yesimright.com',
       'yourfunpage.com'],
      dtype='object', name='domain', length=271)
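(Equivalently, pandas' query method reads nicely for this kind of filter:)

df_os.query('fake == 1').index  # same set of domains, different spelling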

We can count how many domains fall into each category:

In [39]:
df_os.sum(axis=0).sort_values(ascending=False)
Out[39]:
fake          271
bias          228
conspiracy    201
satire        126
unreliable    114
clickbait      97
political      73
junksci        63
hate           39
rumor          25
reliable        9
state           2
dtype: int64

We can see all the sites on .org domains.

In [40]:
df_os[df_os.index.str.contains('.org')].sample(10, random_state=42)
Out[40]:
bias clickbait conspiracy fake hate junksci political reliable rumor satire state unreliable notes
domain
heartland.org 1 0 0 0 0 0 0 0 0 0 0 0 http://www.sourcewatch.org/index.php/Heartland...
ExperimentalVaccines.org 0 0 1 0 0 1 0 0 0 0 0 0
heritage.org 0 0 0 0 0 0 1 0 0 0 0 0
breakpoint.org 0 0 0 0 0 0 0 0 0 0 0 1
bigbluevision.org 1 1 0 0 0 0 0 0 0 0 0 0
witscience.org 0 0 0 0 0 0 0 0 0 1 0 0
freedomworks.org 0 0 0 0 0 0 1 0 0 0 0 0
adflegal.org/media 0 0 0 0 1 0 0 0 0 0 0 0 https://www.splcenter.org/fighting-hate/extrem...
moonofalabama.org 1 0 0 0 0 0 0 0 0 0 0 0
thefreepatriot.org 1 1 0 1 0 0 0 0 0 0 0 0
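One caveat worth knowing: str.contains treats its pattern as a regular expression, so the dot in '.org' matches any character (it would also match, say, 'xorg.com'). To match a literal dot, escape it or turn the regex off:

df_os[df_os.index.str.contains(r'\.org')]              # escaped dot
df_os[df_os.index.str.contains('.org', regex=False)]   # plain substring match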

Some Last Clean-ups

I see a "/media" path; is the rest of that site OK?

Let's clean up the domain names a bit...

  1. remove "www."
  2. remove subsites like "/media"
  3. cast to lower case
In [41]:
def preprocess_domains(value):
    '''
    Removes subsites from domains by splitting on forward slashes,
    removes www. from domains,
    and returns a lowercase, cleaned-up domain.
    '''
    value = value.split('/')[0]       # drop anything after the first slash, e.g. '/media'
    value = value.replace('www.', '')
    return value.lower()

Because this is an Index rather than a Series or DataFrame, apply isn't available here. Instead we can use map (or a list comprehension) to run the preprocess_domains function on each element of the index.

In [42]:
df_os.index = df_os.index.map(preprocess_domains)
In [52]:
rename_col = {'index' : 'domain'}
df_os.reset_index(inplace=True)  # the mapped index lost its name, so the new column comes out as 'index'
df_os.columns = [rename_col.get(c, c) for c in df_os.columns]

Here is the finished product:

In [53]:
df_os.head(3)
Out[53]:
domain bias clickbait conspiracy fake hate junksci political reliable rumor satire state unreliable notes
0 100percentfedup.com 1 0 0 0 0 0 0 0 0 0 0 0
1 16wmpo.com 0 0 0 1 0 0 0 0 0 0 0 0 http://www.politifact.com/punditfact/article/2...
2 21stcenturywire.com 0 0 1 0 0 0 0 0 0 0 0 0

Let's use Pandas' to_csv to write this cleaned-up data as a tab-separated values (tsv) file.

In [54]:
df_os.to_csv('data/sources_clean.tsv', sep='\t')
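(Since domain is now an ordinary column, you may prefer not to write the RangeIndex; a minimal tweak:)

df_os.to_csv('data/sources_clean.tsv', sep='\t', index=False)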

Conclusion

OpenSources is a great resource for research and technology.
If you are aware of other projects that have categorized the online news ecosystem, I'd love to hear about them.

Let's recap what we've covered:

  1. How to read data from the web into a Pandas Dataframe,
  2. how to search and explore Pandas Dataframes,
  3. how to preprocess messy real-world data (twice!), and
  4. how to one-hot encode a categorical dataset.

In the next notebook, we'll use this new dataset to analyze links shared on Twitter. We can begin to build a profile of how the sites categorized by OpenSources are used during viral campaigns.

Thank yous:

Rishab Nithyanand and Robyn Caplan from D&S.
Also my colleague Andrew Guess, who introduced me to links as data.

About the Author:

Leon Yin is an engineer and scientist at NYU's Social Media and Political Participation Lab and the Center for Data Science.
He is interested in using images and links as data, and finding odd applications for cutting-edge machine learning techniques.