#!/usr/bin/env python
# coding: utf-8

# # Parsing OpenSources
# by [Leon Yin](twitter.com/leonyin)
# 2017-11-22
# updated 2018-03-06

# ## What is this?
# In this Jupyter Notebook we will
# 1. download a real-world dataset,
# 2. clean up human-entered text (twice),
# 3. one-hot encode categories of misleading websites,
# 4. use Pandas to analyze these sites, and
# 5. make a [machine-readable file](https://github.com/yinleon/fake_news/blob/master/data/sources_clean.tsv).
#
# View this on [Github](https://github.com/yinleon/fake_news/blob/master/opensources.ipynb).
# View this on [NBViewer](https://nbviewer.jupyter.org/github/yinleon/fake_news/blob/master/opensources.ipynb).
# Visit my lab's [website](https://wp.nyu.edu/smapp/)
#
# You can find the dataset hosted here:

# In[1]:

clean_os_url = 'https://raw.githubusercontent.com/yinleon/fake_news/master/data/sources_clean.tsv'

# It can be read into a Pandas DataFrame directly from GitHub.

# In[2]:

import pandas as pd

df_os = pd.read_csv(clean_os_url, sep='\t')
df_os.head(3)

# ## Intro
# [OpenSources](http://www.opensources.co/) is a "professionally curated list of online sources, available free for public use" by Melissa Zimdars and colleagues.
# It contains websites labeled with categories spanning from state-sponsored media outlets to conspiracy-theory rumor mills.
# It is a comprehensive resource for researchers and technologists interested in propaganda and mis/disinformation.
#
# The OpenSources project is in fact open source, and available in JSON and CSV formats.
# One issue, however, is that the data is entered by hand and is not readily machine-readable.
#
# Let's fix that with some good ole-fashioned data wrangling.
#
# ## Let's Code!

# In[3]:

import numpy as np
import pandas as pd

# In[4]:

os_csv_url = 'https://raw.githubusercontent.com/BigMcLargeHuge/opensources/master/sources/sources.csv'
df = pd.read_csv(os_csv_url)
df.head(10)

# #### What's going on in Unnamed: 5?
# There's no column name and it looks like a lot of NaN (not a number) values!
# We can see all the distinct values in that column by indexing the DataFrame (`df`)
# with the column name and calling the built-in `unique` method.

# In[5]:

df['Unnamed: 5'].unique()

# There is only one unique sentence!
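# (Aside, not part of the original notebook: `unique` keeps NaN/None as one of the
# distinct values, which is why the next cell grabs the sentence with `.unique()[-1]`.
# A minimal sketch on toy data:)

pd.Series(['a', None, None, 'a', 'b']).unique()
# -> array(['a', None, 'b'], dtype=object)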
# We can find the row with the only non-NaN value by filtering the DataFrame.

# In[6]:

df[df['Unnamed: 5'] == df['Unnamed: 5'].unique()[-1]]

# This works by filtering the DataFrame wherever the condition is True.
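# (Aside, not from the original notebook: a minimal sketch of boolean masking on toy
# data. The comparison produces a boolean Series, and indexing with it keeps only the
# rows where it is True.)

toy = pd.DataFrame({'a': [1, 2, 3]})
toy['a'] > 1        # -> a boolean Series: [False, True, True]
toy[toy['a'] > 1]   # -> keeps only the last two rows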
# In our case, we're looking for rows where the unnamed column contains this justification for an "extreme bias" website.
#
# Based on this outlier, my best guess is a parsing error: the `Source Notes (things to know?)` field contains a comma, which was interpreted as a column delimiter!

# In[7]:

df[df['Unnamed: 5'] == df['Unnamed: 5'].unique()[-1]]['Source Notes (things to know?)'].tolist()

# I'm sure this _zany_ issue won't be in the JSON file they provide!

# In[8]:

os_json_raw_url = 'https://raw.githubusercontent.com/BigMcLargeHuge/opensources/master/sources/sources.json'
df = pd.read_json(os_json_raw_url, orient='index')

# In[9]:

df.index.name = 'domain'

# Let's simplify these long column names into something short and sweet.

# In[10]:

df.columns

# We can use a list comprehension to loop through each column name...

# In[11]:

[c for c in df.columns]

# ... and use a dictionary to replace **keys** on the left (`Source Notes (things to know?)`) with **values** on the right (`notes`).

# In[12]:

replace_col = {'Source Notes (things to know?)' : 'notes'}

# In[13]:

[replace_col.get(c, c) for c in df.columns]

# A dictionary's built-in `get` method returns the value stored under the given key (`c`),
# or the default we pass in (here `c` itself) if the key is not in the dictionary.
# So a column name that isn't a key, like `'type'`, is handed back unchanged,
# while `'Source Notes (things to know?)'` comes back as `'notes'`.

# In[14]:

df.columns = [replace_col.get(c, c) for c in df.columns]

# Let's also reorder the columns for readability.

# In[15]:

df = df[['type', '2nd type', '3rd type', 'notes']]

# In[16]:

df.head(10)

# ### Data Processing - Making categories standard
# If we look at all the available categories, we'll see some inconsistencies:

# In[17]:

df['type'].unique()

# In[18]:

df['2nd type'].unique()

# In[19]:

df['3rd type'].unique()

# Some categories here are redundant, or misspelt.
# > see "fake" and "fake news", "unrealiable" and "unreliable."
#
# We can use a dictionary again, this time to replace **keys** on the left (`fake news`) with **values** on the right (`fake`).

# In[20]:

replace_vals = {
    'fake news'  : 'fake',
    'satirical'  : 'satire',
    'unrealiable': 'unreliable',
    'blog'       : np.nan
}

# We can group all our data preprocessing in one function.

# In[21]:

def clean_type(value):
    '''
    This function cleans the various type values (str).
    If the value is not null, the value is cast to a string,
    leading and trailing whitespace is removed, it is cast to
    lower case, and redundant values are replaced.

    Returns either None, or a cleaned string.
    '''
    # note: comparing against np.nan with `!=` is always True,
    # so we check for missing values with pd.isnull instead.
    if value and not pd.isnull(value):
        value = str(value)
        value = value.strip().lower()
        value = replace_vals.get(value, value)
        return value
    else:
        return None

# In[22]:

df.fillna(value=0, inplace=True)

# We'll now loop through each of the columns,
# and run the `clean_type` function on all the values in each column.

# In[23]:

for col in ['type', '2nd type', '3rd type']:
    df[col] = df[col].apply(clean_type)

# In[24]:

df['type'].unique()

# ### One-Hot Encoding
# One-hot encoding turns a single categorical column into a sparse matrix of indicator (0/1) columns, one per category.
# Let's use this toy example to understand:

# In[25]:

df_example = pd.DataFrame([
    {'color' : 'blue'},
    {'color' : 'black'},
    {'color' : 'red'},
])
df_example

# In[26]:

pd.get_dummies(df_example)

# We just made the data machine-readable by transforming one categorical column into three numerical columns.
#
# ### We're going to do the same for each website category in OpenSources!
# #### Problem 1:
# One-hot encoding converts one column into many,
# but we have 3 columns we need to encode!
# One possibility would be to one-hot encode each column into its own sparse matrix, and then add them up.
#
# #### Problem 2:
# However, [not all columns share the same categories](#whoops), so we'd get three one-hot encoded sparse matrices with different columns.
# #### Answer?
# We can fix that by collecting all possible categories, and appending them to each column before it gets one-hot encoded.
#
# We can collect all the categories across the three columns using `pd.unique`.

# In[27]:

all_hot_encodings = pd.Series(pd.unique(df[['type', '2nd type', '3rd type']].values.ravel('K')))

# In[28]:

all_hot_encodings

# ### What did we just do?
# We flattened all the categories across the three columns using `ravel`, which transforms this:

# In[29]:

df[['type', '2nd type', '3rd type']].values

# into this:

# In[30]:

df[['type', '2nd type', '3rd type']].values.ravel('K')

# And then we used `pd.unique` to reduce the flattened array to its unique values (wrapped in a Series).
#
# ### Time to encode
# Now let's append the Series of unique categories to each column, and one-hot encode them using `get_dummies`.

# In[31]:

dum1 = pd.get_dummies(df['type'].append(all_hot_encodings))
dum2 = pd.get_dummies(df['2nd type'].append(all_hot_encodings))
dum3 = pd.get_dummies(df['3rd type'].append(all_hot_encodings))

# Let's take the element-wise maximum across the three one-hot encoded dataframes.
# By doing so we can combine the three columns' information into one dataframe.
# In[32]:

# `where(cond, other)` keeps the left-hand value where the condition is True and
# takes `other` elsewhere, which here amounts to an element-wise maximum.
__d = dum1.where(dum1 > dum2, dum2)
__d = __d.where(__d > dum3, dum3)

# #### Why not take the sum?
# Taking a sum is also an option, but I noticed some rows with duplicate categories between columns,
# which would produce one-hot encoded values of 2 or 3 instead of just 0 and 1!
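# (Aside, not from the original notebook: a minimal toy sketch of the difference.
# The first row below is tagged 'fake' in both columns; summing double-counts it,
# while the element-wise maximum keeps a clean 0/1 indicator. `np.maximum(t1, t2)`
# would be an equivalent way to take that maximum.)

t1 = pd.DataFrame({'bias': [0, 1], 'fake': [1, 0], 'satire': [0, 0]})
t2 = pd.DataFrame({'bias': [0, 0], 'fake': [1, 0], 'satire': [0, 1]})

t1 + t2                # the duplicated 'fake' tag becomes a 2 in the first row
t1.where(t1 > t2, t2)  # the element-wise maximum stays a 0/1 indicator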
#
# Lastly, let's remove the rows that came from the unique category values we appended.

# In[33]:

__d.tail(len(all_hot_encodings) - 1)

# In[34]:

dummies = __d.iloc[:-len(all_hot_encodings)]

# Now we have a wonderful new dataset!

# In[35]:

dummies.head(10)

# Let's add the notes to this new dataset by concatenating `dummies` with the `notes` column of `df` column-wise (`axis=1`).

# In[36]:

df_os = pd.concat([dummies, df['notes']], axis=1)

# In[37]:

df_os.head(10)

# ## Analysis
# With one-hot encoding, the OpenSources dataset is fast and easy to filter for domains that are considered fake news.

# In[38]:

df_os[df_os['fake'] == 1].index

# We can see how many domains fall into each category:

# In[39]:

df_os.sum(axis=0).sort_values(ascending=False)

# We can see all sites on `.org` domains.

# In[40]:

# note: `str.contains` treats '.org' as a regular expression, so the '.' matches any character.
df_os[df_os.index.str.contains('.org')].sample(10, random_state=42)

# ### Some Last Clean-ups
# I see a "/media"; is the rest of the site OK?
#
# Let's clean up the domain names a bit...
# 1. remove "www."
# 2. remove subsites like "/media"
# 3. cast to lower case

# In[41]:

def preprocess_domains(value):
    '''
    Removes subsites from domains by splitting on forward slashes,
    and removes "www." from domains.

    Returns a lowercase, cleaned-up domain.
    '''
    value = value.split('/')[0]
    value = value.replace('www.', '')
    return value.lower()

# Because the index is an Index object rather than a Series or DataFrame, we can't use `apply`;
# instead we can use `map` (or a list comprehension) to run `preprocess_domains` on each element of the index.

# In[42]:

df_os.index = df_os.index.map(preprocess_domains)

# In[52]:

rename_col = {'index' : 'domain'}
df_os.reset_index(inplace=True)
df_os.columns = [rename_col.get(c, c) for c in df_os.columns]

# Here is the finished product:

# In[53]:

df_os.head(3)

# Let's use pandas' `to_csv` to write this cleaned-up file as a tab-separated values file (tsv).

# In[54]:

df_os.to_csv('data/sources_clean.tsv', sep='\t')

# ## Conclusion
# OpenSources is a great resource for research and technology.
# If you are aware of other projects that have categorized the online news ecosystem, I'd love to hear about them.
#
# Let's recap what we've covered:
# 1. How to read data from the web into a Pandas DataFrame
# 2. How to search and explore Pandas DataFrames
# 3. How to preprocess messy real-world data, twice!
# 4. How to one-hot encode a categorical dataset.
#
# In the next notebook, we'll use this new dataset to analyze links shared on Twitter.
# We can begin to build a profile of how sites categorized by OpenSources are used during viral campaigns.
#
# ### Thank yous:
# Rishab Nithyanand and Robyn Caplan from D&S.
# Also my colleague Andrew Guess, who introduced me to links as data.
#
# ### About the Author:
# Leon Yin is an engineer and scientist at NYU's Social Media and Political Participation Lab and the Center for Data Science.
# He is interested in using images and links as data, and finding odd applications for cutting-edge machine learning techniques.