#!/usr/bin/env python
# coding: utf-8

# # Parsing OpenSources
# by [leon yin](twitter.com/leonyin)
# 2017-11-22

# ## What is this?
# In this Jupyter Notebook we will
# 1. download a real-world dataset,
# 2. clean up human-entered text (twice),
# 3. one-hot encode categories of misleading websites,
# 4. use Pandas to analyze these sites, and
# 5. make a [machine-readable file](https://github.com/yinleon/fake_news/blob/master/data/sources_clean.tsv).
#
# Please view the [detailed version](https://nbviewer.jupyter.org/github/yinleon/fake_news/blob/master/opensources.ipynb) if you want to know how everything works, or if you're unfamiliar with Jupyter Notebooks and Python.
#
# View this on [Github](https://github.com/yinleon/fake_news/blob/master/opensources-lite.ipynb).
# View this on [NBViewer](https://nbviewer.jupyter.org/github/yinleon/fake_news/blob/master/opensources-lite.ipynb).
# Visit my lab's [website](https://wp.nyu.edu/smapp/).
#
# ## Intro
# [OpenSources](http://www.opensources.co/) is a "Professionally curated lists of online sources, available free for public use" by Melissa Zimdars and colleagues. It contains websites labeled with categories spanning state-sponsored media outlets to conspiracy-theory rumor mills. It is a comprehensive resource for researchers and technologists interested in propaganda and mis/disinformation.
#
# The OpenSources project is in fact open-sourced in JSON and CSV format.
# One issue, however, is that the data is entered by people and is not readily machine-readable.
#
# Let's take a moment to appreciate the work of _people_
#
# And optimize this information for machines,
#
# Using some good ol' fashioned data wrangling.
#
# ## Let's Code Yo!

# In[1]:

import numpy as np
import pandas as pd


# In[2]:

filename = "data/sources.json"


# In[3]:

get_ipython().run_cell_magic('sh', '-s $filename', 'mkdir -p data\ncurl https://raw.githubusercontent.com/BigMcLargeHuge/opensources/master/sources/sources.json --output $1\n')


# In[4]:

df = pd.read_json(filename, orient='index')


# In[5]:

df.index.name = 'domain'


# Let's simplify this long column name into something that's short and sweet.

# In[6]:

replace_col = {'Source Notes (things to know?)': 'notes'}


# In[7]:

df.columns = [replace_col.get(c, c) for c in df.columns]


# Let's also reorder the columns for readability.

# In[8]:

df = df[['type', '2nd type', '3rd type', 'notes']]


# In[9]:

df.head(10)


# ### Data Processing - Making categories standard
# If we look at all the available categories, you'll see some inconsistencies:

# In[10]:

replace_vals = {
    'fake news'  : 'fake',
    'satirical'  : 'satire',
    'unrealiable': 'unreliable',
    'blog'       : np.nan
}


# We can group all our data preprocessing in one function.

# In[11]:

def clean_type(value):
    '''
    This function cleans the various type values (str).
    If the value is not null, it is cast to a string,
    leading and trailing whitespace is removed, it is
    lowercased, and redundant values are replaced.

    Returns either None or a cleaned string.
    '''
    if value and not pd.isnull(value):
        value = str(value)
        value = value.strip().lower()
        value = replace_vals.get(value, value)
        return value
    else:
        return None


# In[12]:

df.fillna(value=0, inplace=True)
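# As a quick sanity check, here is what `clean_type` does to a few made-up values
# (these inputs are illustrative and not taken from the dataset):

# In[ ]:

print(clean_type(' Fake News '))   # -> fake   (stripped, lowercased, and remapped)
print(clean_type('satirical'))     # -> satire (remapped via replace_vals)
print(clean_type(0))               # -> None   (0 is the fill value we just used for missing entries)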
# We'll now loop through each of the columns
# and run the `clean_type` function on all the values in each column.

# In[13]:

for col in ['type', '2nd type', '3rd type']:
    df[col] = df[col].apply(clean_type)


# ### One-Hot Encoding
# One-hot encoding turns a single categorical column into a set of binary 0/1 indicator columns, one per category.
# Let's use a toy example to understand:
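# The values below are made up for illustration (they're not from the dataset);
# `pd.get_dummies` turns one column of labels into one 0/1 indicator column per label.

# In[ ]:

toy = pd.Series(['fake', 'satire', 'fake', 'conspiracy'])
pd.get_dummies(toy)   # three indicator columns: 'conspiracy', 'fake', 'satire'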
# Now let's apply this to the real data.
# First we collect every unique category that appears across the three type columns.
# Appending this full set of categories to each column before calling `get_dummies`
# ensures that all three dummy DataFrames end up with the same columns.

# In[14]:

all_hot_encodings = pd.Series(pd.unique(df[['type', '2nd type', '3rd type']].values.ravel('K')))


# In[15]:

all_hot_encodings


# In[16]:

dum1 = pd.get_dummies(df['type'].append(all_hot_encodings))
dum2 = pd.get_dummies(df['2nd type'].append(all_hot_encodings))
dum3 = pd.get_dummies(df['3rd type'].append(all_hot_encodings))


# Let's get the max value for each one-hot encoded column.
# By doing so, we can combine the three columns' information into one DataFrame.
# In[17]:

__d = dum1.where(dum1 > dum2, dum2)
__d = __d.where(__d > dum3, dum3)


# #### Why not take the sum?
# Taking a sum is also an option, but I noticed that some rows repeat the same category across columns.
# Summing those would produce one-hot encoded columns with values of 2 or 3 instead of just 0 and 1!
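# Here's a minimal sketch of that problem with two made-up dummy frames,
# where the first (pretend) domain is labeled 'fake' in both of its type columns:

# In[ ]:

t1 = pd.DataFrame({'fake': [1, 0], 'conspiracy': [0, 1]})  # pretend dummies from 'type'
t2 = pd.DataFrame({'fake': [1, 0], 'conspiracy': [0, 0]})  # pretend dummies from '2nd type'

print(t1 + t2)                 # 'fake' sums to 2 in the first row; no longer a 0/1 indicator
print(t1.where(t1 > t2, t2))   # 'fake' stays at 1: the element-wise max keeps the matrix binary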
# Lastly, let's remove the rows that came from the unique category values we appended.

# In[18]:

dummies = __d.iloc[:-len(all_hot_encodings)]


# Now we have a wonderful new dataset!

# In[19]:

dummies.head(10)


# Let's add the notes to this new dataset by concatenating `dummies` with the `notes` column of `df` column-wise (axis=1).

# In[20]:

df_news = pd.concat([dummies, df['notes']], axis=1)


# In[21]:

df_news.head(10)


# With one-hot encoding, the OpenSources dataset is fast and easy to filter for domains that are considered fake news.

# In[22]:

df_news[df_news['fake'] == 1].index


# We can see how many domains were categorized as conspiracy theory sites.

# In[23]:

df_news['conspiracy'].sum()


# We can also look at a sample of sites under the `.org` top-level domain.

# In[24]:

df_news[df_news.index.str.contains(r'\.org')].sample(10, random_state=42)  # escape the dot so it isn't a regex wildcard


# ### Some Last Clean-ups
# I see a "/media"; is the rest of the site ok?
#
# Let's clean up the domain names a bit...
# 1. remove "www."
# 2. remove subsites like "/media"
# 3. cast to lower case

# In[25]:

def preprocess_domains(value):
    '''
    Removes subsites from domains by splitting on forward slashes,
    removes "www." from domains, and
    returns a lowercased, cleaned-up domain.
    '''
    value = value.split('/')[0]
    value = value.replace('www.', '')
    return value.lower()


# Because the index is an `Index` rather than a Series or DataFrame, it doesn't have an `apply` method.
# Instead we can use `map` (or a list comprehension) to run `preprocess_domains` on each element of the index.

# In[26]:

df_news.index = df_news.index.map(preprocess_domains)


# Let's use pandas' `to_csv` to write this cleaned-up dataset as a tab-separated values (TSV) file.

# In[27]:

df_news.to_csv('data/sources_clean.tsv', sep='\t')


# In[28]:

get_ipython().system('ls data')
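# If you want to pick the cleaned file up in another notebook, a minimal read-back
# might look like this (the variable name `df_check` is just for illustration;
# `index_col='domain'` restores the domain index we set earlier):

# In[ ]:

df_check = pd.read_csv('data/sources_clean.tsv', sep='\t', index_col='domain')
df_check.head()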
# ## Conclusion
# OpenSources is a great resource for research and technology.
# If you are aware of other projects that have categorized the online news ecosystem, I'd love to hear about them.
#
# Let's recap what we've covered:
# 1. How to download data from the web using bash commands
# 2. How to search and explore Pandas DataFrames
# 3. How to preprocess messy real-world data, twice!
# 4. How to one-hot encode a categorical dataset
#
# In the next notebook, we'll use this new dataset to analyze links shared on Twitter.
# We can begin to build a profile of how sites categorized by OpenSources are used during viral campaigns.
#
# ### Thank yous:
# Rishab and Robyn from D&S.
# Also my friend and colleague Andrew Guess, who introduced me to links as data.
#
# ### About the Author:
# Leon Yin is an engineer and scientist at NYU's Social Media and Political Participation Lab and the Center for Data Science.
# He is interested in using images and links as data, and finding odd applications for cutting-edge machine learning techniques.