Parsing OpenSources

by leon yin
2017-11-22

What is this?

In this Jupyter Notebook we will

  1. download a real-world dataset,
  2. clean up human-entered text (twice),
  3. one-hot encode categories of misleading websites,
  4. use Pandas to analyze these sites, and
  5. make a machine-readable file.

Please view the detailed version if you want to know how everything works, or if you're unfamiliar with Jupyter Notebooks and Python.

View this on Github. View this on NBViewer. Visit my Lab's website

Intro

OpenSources is a "Professionally curated lists of online sources, available free for public use" by Melissa Zimdars and colleagues. It contains websites labeled with categories ranging from state-sponsored media outlets to conspiracy theory rumor mills. It is a comprehensive resource for researchers and technologists interested in propaganda and mis/disinformation.

The OpenSources project is, in fact, open sourced in JSON and CSV formats.
One issue, however, is that the data is entered by people and is not readily machine-readable.

Let's take a moment to appreciate the work of people,

and optimize this information for machines,

using some good ol' fashioned data wrangling.

Let's Code Yo!

In [1]:
import numpy as np
import pandas as pd
In [2]:
filename = "data/sources.json"
In [3]:
%%sh -s $filename
mkdir -p data
curl https://raw.githubusercontent.com/BigMcLargeHuge/opensources/master/sources/sources.json --output $1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  136k  100  136k    0     0   136k      0  0:00:01 --:--:--  0:00:01  510k
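
If you'd rather stay in Python, the standard library can fetch the same file. A minimal sketch of the equivalent download (the notebook itself uses curl above):

import os
import urllib.request

url = ("https://raw.githubusercontent.com/BigMcLargeHuge/"
       "opensources/master/sources/sources.json")
os.makedirs("data", exist_ok=True)           # same as mkdir -p data
urllib.request.urlretrieve(url, filename)    # saves to data/sources.json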
In [4]:
df = pd.read_json(filename, orient='index')  # each top-level JSON key (a domain) becomes a row
In [5]:
df.index.name = 'domain'

Let's simplify this long column name into something that's short and sweet.

In [6]:
replace_col = {'Source Notes (things to know?)' : 'notes'}
In [7]:
df.columns = [replace_col.get(c, c) for c in df.columns]  # rename matching columns, keep the rest as-is

Let's also reorder the columns for readability.

In [8]:
df = df[['type', '2nd type', '3rd type', 'notes']]
In [9]:
df.head(10)
Out[9]:
type 2nd type 3rd type notes
domain
100percentfedup.com bias
16wmpo.com fake http://www.politifact.com/punditfact/article/2...
21stcenturywire.com conspiracy
24newsflash.com fake
24wpn.com fake http://www.politifact.com/punditfact/article/2...
365usanews.com bias conspiracy
4threvolutionarywar.wordpress.com bias conspiracy
70news.wordpress.com fake
82.221.129.208 conspiracy fake
Acting-Man.com unreliable conspiracy publishes articles denying climate change

Data Processing - Making categories standard

If we look at all the available categories, we'll see some inconsistencies:
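
A quick way to eyeball them is to list the distinct values across the three type columns (output omitted here; the detailed notebook walks through it). You'll spot variants like 'fake news' alongside 'fake' and 'satirical' alongside 'satire':

pd.unique(df[['type', '2nd type', '3rd type']].values.ravel('K'))

The mapping below collapses those variants into a single canonical label, and sets 'blog' to missing.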

In [10]:
replace_vals = {
    'fake news' : 'fake',
    'satirical' : 'satire',
    'unrealiable': 'unreliable',
    'blog' : np.nan
}

We can group all our data preprocessing in one function.

In [11]:
def clean_type(value):
    '''
    Cleans a single type value (str).
    
    If the value is not null,
    it is cast to a string,
    leading and trailing whitespace is stripped,
    it is cast to lower case,
    and redundant values are replaced using replace_vals.
    
    Returns either None, or a cleaned string.
    '''
    if value and pd.notnull(value):
        value = str(value)
        value = value.strip().lower()
        value = replace_vals.get(value, value)
        return value
    else:
        return None
In [12]:
df.fillna(value=0, inplace=True)  # fill NaNs with 0 so clean_type can treat them as falsy

We'll now loop through each of the columns,
and run the clean_type function on all the values in each column.

In [13]:
for col in ['type', '2nd type', '3rd type']:
    df[col] = df[col].apply(clean_type)
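
As a quick sanity check (not a cell from the original notebook), you can re-count the values afterwards to confirm the variants have collapsed:

df['type'].value_counts(dropna=False)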

One-Hot Encoding

One-hot encoding is used to make a sparse matrix from a single categorical column.
Let's use this toy example to understand:
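
(The labels below are made up purely for illustration.)

toy = pd.Series(['fake', 'bias', 'fake', 'satire'])
pd.get_dummies(toy)
#    bias  fake  satire
# 0     0     1       0
# 1     1     0       0
# 2     0     1       0
# 3     0     0       1

Each row gets a 1 in exactly one column, which is what makes counting and filtering by category so convenient later on.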

In [14]:
# collect every distinct category appearing across the three type columns
all_hot_encodings = pd.Series(pd.unique(df[['type', '2nd type', '3rd type']].values.ravel('K')))
In [15]:
all_hot_encodings
Out[15]:
0           bias
1           fake
2     conspiracy
3     unreliable
4        junksci
5      political
6           hate
7      clickbait
8         satire
9          rumor
10      reliable
11         state
12          None
13           NaN
dtype: object
In [16]:
# appending all_hot_encodings first guarantees every category appears
# as a column in each dummy frame, so the three frames line up
# (Series.append was removed in pandas 2.0; pd.concat is the modern equivalent)
dum1 = pd.get_dummies(df['type'].append(all_hot_encodings))
dum2 = pd.get_dummies(df['2nd type'].append(all_hot_encodings))
dum3 = pd.get_dummies(df['3rd type'].append(all_hot_encodings))

Let's take the element-wise maximum across the one-hot encoded columns.
By doing so we can combine the three columns' information into one dataframe.

In [17]:
# .where(cond, other) keeps a value where cond is True and takes `other` otherwise,
# so chaining two calls gives the element-wise maximum of the three dummy frames
__d = dum1.where(dum1 > dum2, dum2)
__d = __d.where(__d > dum3, dum3)

Why not take the sum?

Taking the sum is also an option, but some rows list the same category in more than one of the type columns.
Summing those would produce one-hot encoded values of 2 or 3 instead of 1!
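
An equivalent way to express the same element-wise maximum, assuming (as here) that the three dummy frames share identical indexes and columns because we appended the same category list to each:

__d = np.maximum(np.maximum(dum1, dum2), dum3)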

Lastly, let's remove the rows corresponding to the unique category values we appended.

In [18]:
dummies = __d.iloc[:-len(all_hot_encodings)]

Now we have a wonderful new dataset!

In [19]:
dummies.head(10)
Out[19]:
bias clickbait conspiracy fake hate junksci political reliable rumor satire state unreliable
100percentfedup.com 1 0 0 0 0 0 0 0 0 0 0 0
16wmpo.com 0 0 0 1 0 0 0 0 0 0 0 0
21stcenturywire.com 0 0 1 0 0 0 0 0 0 0 0 0
24newsflash.com 0 0 0 1 0 0 0 0 0 0 0 0
24wpn.com 0 0 0 1 0 0 0 0 0 0 0 0
365usanews.com 1 0 1 0 0 0 0 0 0 0 0 0
4threvolutionarywar.wordpress.com 1 0 1 0 0 0 0 0 0 0 0 0
70news.wordpress.com 0 0 0 1 0 0 0 0 0 0 0 0
82.221.129.208 0 0 1 1 0 0 0 0 0 0 0 0
Acting-Man.com 0 0 1 0 0 0 0 0 0 0 0 1

Let's add the notes back to this new dataset by concatenating dummies and df's notes column side by side (axis=1), aligned on the domain index.

In [20]:
df_news = pd.concat([dummies, df['notes']], axis=1)
In [21]:
df_news.head(10)
Out[21]:
bias clickbait conspiracy fake hate junksci political reliable rumor satire state unreliable notes
domain
100percentfedup.com 1 0 0 0 0 0 0 0 0 0 0 0
16wmpo.com 0 0 0 1 0 0 0 0 0 0 0 0 http://www.politifact.com/punditfact/article/2...
21stcenturywire.com 0 0 1 0 0 0 0 0 0 0 0 0
24newsflash.com 0 0 0 1 0 0 0 0 0 0 0 0
24wpn.com 0 0 0 1 0 0 0 0 0 0 0 0 http://www.politifact.com/punditfact/article/2...
365usanews.com 1 0 1 0 0 0 0 0 0 0 0 0
4threvolutionarywar.wordpress.com 1 0 1 0 0 0 0 0 0 0 0 0
70news.wordpress.com 0 0 0 1 0 0 0 0 0 0 0 0
82.221.129.208 0 0 1 1 0 0 0 0 0 0 0 0
Acting-Man.com 0 0 1 0 0 0 0 0 0 0 0 1 publishes articles denying climate change

With one-hot encoding, the OpenSources dataset is fast and easy to filter for domains that are labeled as fake news.

In [22]:
df_news[df_news['fake'] == 1].index
Out[22]:
Index(['16wmpo.com', '24newsflash.com', '24wpn.com', '70news.wordpress.com',
       '82.221.129.208', 'Amposts.com', 'BB4SP.com', 'DIYhours.net',
       'DeadlyClear.wordpress.com', 'DonaldTrumpPotus45.com',
       ...
       'washingtonpost.com.co', 'webdaily.com', 'weeklyworldnews.com',
       'worldpoliticsnow.com', 'worldpoliticsus.com', 'worldrumor.com',
       'worldstoriestoday.com', 'wtoe5news.com', 'yesimright.com',
       'yourfunpage.com'],
      dtype='object', name='domain', length=271)

We can also count how many domains were categorized as conspiracy sites.

In [23]:
df_news['conspiracy'].sum()
Out[23]:
201
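
Because each category is its own 0/1 column, combining criteria is just boolean arithmetic. For example, a hypothetical query for domains labeled both fake and conspiracy:

df_news[(df_news['fake'] == 1) & (df_news['conspiracy'] == 1)].index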

We can see all the sites whose domains contain ".org".

In [24]:
# note: '.' is a regex wildcard here; pass regex=False to match a literal ".org"
df_news[df_news.index.str.contains('.org')].sample(10, random_state=42)
Out[24]:
bias clickbait conspiracy fake hate junksci political reliable rumor satire state unreliable notes
domain
heartland.org 1 0 0 0 0 0 0 0 0 0 0 0 http://www.sourcewatch.org/index.php/Heartland...
ExperimentalVaccines.org 0 0 1 0 0 1 0 0 0 0 0 0
heritage.org 0 0 0 0 0 0 1 0 0 0 0 0
breakpoint.org 0 0 0 0 0 0 0 0 0 0 0 1
bigbluevision.org 1 1 0 0 0 0 0 0 0 0 0 0
witscience.org 0 0 0 0 0 0 0 0 0 1 0 0
freedomworks.org 0 0 0 0 0 0 1 0 0 0 0 0
adflegal.org/media 0 0 0 0 1 0 0 0 0 0 0 0 https://www.splcenter.org/fighting-hate/extrem...
moonofalabama.org 1 0 0 0 0 0 0 0 0 0 0 0
thefreepatriot.org 1 1 0 1 0 0 0 0 0 0 0 0

Some Last Clean-ups

I see a "/media". Is the rest of that site OK?

Let's clean up the domain names a bit...

  1. remove "www."
  2. remove subsites like "/media"
  3. cast to lower case
In [25]:
def preprocess_domains(value):
    '''
    Removes subsites (like "/media") by splitting on forward slashes,
    strips a leading "www.",
    and returns the lowercase cleaned-up domain.
    '''
    value = value.split('/')[0]           # drop anything after the first slash
    if value.startswith('www.'):          # only strip "www." when it's a prefix
        value = value[len('www.'):]
    return value.lower()

Because the index is an Index object rather than a Series, we can't use apply (which works on Series and DataFrames). Instead, we can use map, or a list comprehension, to run the preprocess_domains function on each element of the index.

In [26]:
df_news.index = df_news.index.map(preprocess_domains)
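
Equivalently, a plain list comprehension does the same job (a sketch, not a cell from the original notebook):

df_news.index = [preprocess_domains(domain) for domain in df_news.index]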

Let's use pandas' to_csv to write this cleaned-up DataFrame as a tab-separated values file (TSV).

In [27]:
df_news.to_csv('data/sources_clean.tsv', sep='\t')
In [28]:
!ls data
sources.csv       sources.json      sources_clean.tsv
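
To load the cleaned file back later, something like this should work (a quick check, not part of the original notebook):

df_check = pd.read_csv('data/sources_clean.tsv', sep='\t', index_col='domain')
df_check.head()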

Conclusion

OpenSources is a great resource for research and technology.
If you are aware of other projects that have categorized the online news ecosystem, I'd love to hear about them.

Let's recap what we've covered:

  1. How to download data from the web using bash commands
  2. How to search and explore Pandas DataFrames
  3. How to preprocess messy real-world data, twice!
  4. How to one-hot encode a categorical dataset.

In the next notebook, we'll use this new dataset to analyze links shared on Twitter. We can begin to build a profile of how sites categorized by OpenSources are used during viral campaigns.

Thank yous:

Rishab and Robyn from D&S.
Also my friend and colleague Andrew Guess, who introduced me to links as data.

About the Author:

Leon Yin is an engineer and scientist at NYU's Social Media and Political Participation Lab and the Center for Data Science.
He is interested in using images and links as data, and finding odd applications for cutting-edge machine learning techniques.