The Local News Dataset

View this document on GitHub | NbViewer by Leon Yin
Data scientist at the SMaPP Lab, NYU, and affiliate at Data & Society.

1. Introduction

Though not a particularly noticeable part of the current discussion, 2018 has shown us that understanding "fake news", and the media (manipulation) ecosystem at large, has as much to do with local broadcast stations as it does with Alex Jones and CNN.

We saw local news outlets used as a sounding board to decry mainstream media outlets as "fake news." We also saw Russian trolls masquerade as local news outlets to build trust as sleeper accounts on Twitter.

To help put the pieces of this disinformation ecosystem into context, we can refer to a 2016 Pew Study on Trust and Accuracy of the Modern Media Consumer, which showed that 86% of survey respondents had "a lot" or "some" confidence in local news. This was more than their confidence in national media outlets, social media, and family and friends.

Few have a lot of confidence in information from professional news outlets or friends and family, though majorities show at least some trust in both, but social media garners less trust than either

Social media is the least trustworthy news source according to the 4.6K respondents of the Pew study. It's important to note that this study was published before the 2016 US Presidential Election, when social media platforms were not under the same scrutiny as they are today.

Perhaps the most significant finding in this study is that very few have "a lot" of trust in information from professional news outlets. Is this because so-called "fake news" blurs the line between reputable and false pieces of information? Political scientist Andy Guess has shown that older (60+ years old) citizens are more susceptible to spreading links containing junk news on Facebook. Yet the mistrust economy is more than the junk news sites Craig Silverman analyzed when he first coined "fake news" in late 2016.


In 2017, media historian Caroline Jack released a lexicon in an effort to define, with more nuance, what was formerly referred to as "fake news." Jack calls this umbrella of deceptive content problematic information.

The social media scholar Alice Marwick -- who made some of the first breakthroughs in this field with Becca Lewis -- recently reminded us that problematic information spreads not only through junk news headlines, but also through memes, videos, and podcasts. What other mediums are we overlooking? As a hint, we can listen to Marwick and other researchers such as ethnographer Francesca Tripodi, who observe that problematic information is deeply connected to one's self-presentation and the reinforcement of group identity. So where does local news fit into this equation?

Though local news is widely viewed as a relatively trustworthy news source, its role in the current media and information landscape is not well studied. To better understand that role, I put together the Local News Dataset in the hopes that it will accelerate research of local news across the web.

About the Dataset

This dataset is a machine-readable directory of state-level newspapers, TV stations and magazines. In addition to basic information such as the name of the outlet and the state it is located in, all available information about each outlet's web presence, social media accounts (Twitter, YouTube, Facebook) and corporate owner is scraped as well.

The sources of this dataset are usnpl.com (newspapers and magazines by state), stationindex.com (TV stations by state and by owner), and the homepages of the media corporations Meredith, Sinclair, Nexstar, Tribune and Hearst.

This dataset was inspired by ProPublica's Congress API. I hope that this dataset will serve a similar purpose as a starting point for research and applications, as well as a bridge between datasets from social media, news articles and online communities.

If you see irregularities, questionable entries, or missing outlets while using this dataset, please submit an issue on GitHub or contact me on Twitter. I'd love to hear how this dataset is put to work!

You can browse the dataset on Google Sheets
Or look at the raw dataset on Github
Or just scroll down to the tech specs!

Happy hunting!

Acknowledgements

I'd like to acknowledge the work of the people behind usnpl.com and stationindex.com for compiling lists of local media outlets. Andreu Casas and Gregory Eady provided invaluable comments to improve this dataset for public release. Kinjal Dave provided much needed proofreading. The dataset was created by Leon Yin at the SMaPP Lab at NYU. Thank you Josh Tucker, Jonathan Nagler, Richard Bonneau, and my colleague Nicole Baram.

Citation

If this dataset is helpful to you please cite it as:

@misc{leon_yin_2018_1345145,
  author       = {Leon Yin},
  title        = {Local News Dataset},
  month        = aug,
  year         = 2018,
  doi          = {10.5281/zenodo.1345145},
  url          = {https://doi.org/10.5281/zenodo.1345145}
}

License

This data is free to use, but please follow the ProPublica Terms.


2. Tech Specs

This section is an in-depth look at what is scraped from the web and how these disparate pieces of the internet come together to form the Local News Dataset.

For those who tinker...
The intermediates can be generated and updated:
>>> python download_data.py
The output file is created from merging and pre-processing the intermediates:
>>> python merge.py
These two scripts -- and this notebook -- are written in Python 3.6.5 using the open source packages listed in requirements.txt.

Top of Notebook

In [1]:
from runtimestamp.runtimestamp import runtimestamp # for reproducibility
from docs.build_docs import *                      # auto-generates docs
runtimestamp('Leon')
generate_docs()
Updated 2018-11-12 16:19:34.240807
By Leon
Using Python 3.6.5
On Linux-3.10.0-514.10.2.el7.x86_64-x86_64-with-centos-7.3.1611-Core

Inventory


An intermediate file of news outlets owned by Sinclair scraped from their website

Read the raw file from this URL:
https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/sinclair.tsv

See the code used to make this dataset:
https://github.com/yinleon/LocalNewsDataset/blob/master/py/download_data.py
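If you want to poke at this intermediate directly, it can be loaded straight from GitHub. This is a minimal sketch, assuming the file is tab-separated (as the .tsv extension suggests) and that Pandas is installed:

import pandas as pd

# Load the Sinclair intermediate file straight from GitHub.
# sep='\t' is assumed here because of the .tsv extension.
url = 'https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/sinclair.tsv'
df_sinclair = pd.read_csv(url, sep='\t')

# Quick sanity checks: row count and the columns documented below.
print(len(df_sinclair))
print(df_sinclair.columns.tolist())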

What Does the Data Look Like?

Sample of ../data/sinclair.tsv (N = 1321)

city geo network state station website broadcaster source collection_date
0 Chico Chico-Redding cozitv CA KRVU-LD-2 NaN Sinclair sbgi.net 2018-08-02 20:31:06.425892
1 West Palm West Palm BeachFort Pierce, FL tbd FL WTCN-3 NaN Sinclair sbgi.net 2018-08-02 20:31:06.425892
2 Abilene Abilene-Sweetwater cw TX KTXS-2 NaN Sinclair sbgi.net 2018-08-02 20:31:06.425892

What do the columns mean?

Column Name Description N Unique Values
city The name of the city that the TV station is located in. 85
geo The raw geolocation field from the website. We parse this field to get city and state 89
network The franchise or brand name that the station belongs to IE "Fox" 28
state The two letter state abbreviation the media outlet is located in. 35
station The name of the TV station IE "WGBH". 611
website The website of the media outlet exactly as we found it online. 152
broadcaster The corporate owner of the station. 1
source Where was this record scraped from? 1
collection_date When was this record collected? 3

An intermediate file of news outlets owned by Meredith scraped from their website

Read the raw file from this URL:
https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/meredith.tsv

See the code used to make this dataset:
https://github.com/yinleon/LocalNewsDataset/blob/master/py/download_data.py

What Does the Data Look Like?

Sample of ../data/meredith.tsv (N = 16)

city facebook google network state station twitter website broadcaster source collection_date
0 Phoenix https://www.facebook.com/CBS5AZ https://plus.google.com/+cbs5az/posts NaN AZ KPHO https://twitter.com/CBS5AZ http://www.kpho.com/ Meredith meridith.com 2018-08-02 14:55:24.612585
1 Nashville https://www.facebook.com/WSMVTV https://plus.google.com/117143042785436999262/... NaN TN WSMV https://twitter.com/WSMV http://www.wsmv.com Meredith meridith.com 2018-08-02 14:55:24.612585
2 Springfield https://www.facebook.com/westernmassnews NaN NaN MA Western Mass News https://twitter.com/WMASSNEWS http://www.westernmassnews.com Meredith meridith.com 2018-08-02 14:55:24.612585

What do the columns mean?

Column Name Description N Unique Values
city The name of the city that the TV station is located in. 12
facebook The URL to the media outlet's Facebook presence. 14
google The URL to the media outlet's Google Plus presence. 13
network The franchise or brand name that the station belongs to IE "Fox" 1
state The two letter state abbreviation the media outlet is located in. 11
station The name of the TV station IE "WGBH". 16
twitter The URL to the Twitter screen name of the news outlet. 14
website The website of the media outlet exactly as we found it online. 16
broadcaster The corporate owner of the station. 1
source Where was this record scraped from? 1
collection_date When was this record collected? 1

An intermediate file of news outlets owned by Nexstar scraped from their website

Read the raw file from this URL:
https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/nexstar.tsv

See the code used to make this dataset:
https://github.com/yinleon/LocalNewsDataset/blob/master/py/download_data.py

What Does the Data Look Like?

Sample of ../data/nexstar.tsv (N = 180)

station website city state broadcaster source collection_date
0 KWBQ kwbq.com Albuquerque NM Nexstar nexstar.tv 2018-08-02 20:31:06.425892
1 KBVO kbvotv.com Austin TX Nexstar nexstar.tv 2018-08-02 20:31:06.425892
2 KOIN koin.com Portland OR Nexstar nexstar.tv 2018-08-02 20:31:06.425892

What do the columns mean?

Column Name Description N Unique Values
station The name of the TV station IE "WGBH". 177
website The website of the media outlet exactly as we found it online. 113
city The name of the city that the TV station is located in. 97
state The two letter state abbreviation the media outlet is located in. 39
broadcaster The corporate owner of the station. 1
source Where was this record scraped from? 1
collection_date When was this record collected? 1

An intermediate file of news outlets owned by Hearst scraped from their website

Read the raw file from this URL:
https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/hearst.tsv

See the code used to make this dataset:
https://github.com/yinleon/LocalNewsDataset/blob/master/py/download_data.py

What Does the Data Look Like?

Sample of ../data/hearst.tsv (N = 33)

city facebook network state station twitter website broadcaster source collection_date
0 Monterey-Salinas https://www.facebook.com/ksbw8?fref=ts NaN CA KSBW-TV https://twitter.com/ksbw http://www.ksbw.com/ Hearst hearst.com 2018-08-02 14:55:24.612585
1 Portland-Auburn https://www.facebook.com/wmtwtv NaN ME WMTW-TV https://twitter.com/WMTWTV http://www.wmtw.com/ Hearst hearst.com 2018-08-02 14:55:24.612585
2 Burlington VT/Plattsburgh https://www.facebook.com/5WPTZ NaN NY WPTZ-TV/WNNE-TV https://twitter.com/mynbc5 http://www.wptz.com/ Hearst hearst.com 2018-08-02 14:55:24.612585

What do the columns mean?

Column Name Description N Unique Values
city The name of the city that the TV station is located in. 27
facebook The URL to the media outlet's Facebook presence. 33
network The franchise or brand name that the station belongs to IE "Fox" 1
state The two letter state abbreviation the media outlet is located in. 23
station The name of the TV station IE "WGBH". 33
twitter The URL to the Twitter screen name of the news outlet. 30
website The website of the media outlet exactly as we found it online. 33
broadcaster The corporate owner of the station. 1
source Where was this record scraped from? 1
collection_date When was this record collected? 1

An intermediate file of news outlets owned by Tribune scraped from their website.

Read the raw file from this URL:
https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/tribune.tsv

See the code used to make this dataset:
https://github.com/yinleon/LocalNewsDataset/blob/master/py/download_data.py#L21-L86

What Does the Data Look Like?

Sample of ../data/tribune.tsv (N = 47)

city facebook network station twitter website youtube broadcaster source state collection_date
0 South Florida http://www.facebook.com/SFLCW NaN WSFL https://twitter.com/SFLCW http://sfltv.net/ NaN Tribune tribunemedia.com FL 2018-08-04 01:06:48.283394
1 Indianapolis https://www.facebook.com/CBS4Indy NaN WTTV https://twitter.com/cbs4indy http://cbs4indy.com/ NaN Tribune tribunemedia.com IN 2018-08-04 01:06:48.283394
2 Dallas https://www.facebook.com/NightcapNews NaN KDAF https://twitter.com/NewsFixDFW http://cw33.com/ http://www.youtube.com/user/kdaf Tribune tribunemedia.com TX 2018-08-04 01:06:48.283394

What do the columns mean?

Column Name Description N Unique Values
city The name of the city that the TV station is located in. 36
facebook The URL to the media outlet's Facebook presence. 46
network The franchise or brand name that the station belongs to IE "Fox" 1
station The name of the TV station IE "WGBH". 46
twitter The URL to the Twitter screen name of the news outlet. 43
website The website of the media outlet exactly as we found it online. 47
youtube The URL to the media outlet's YouTube presence. 30
broadcaster The corporate owner of the station. 1
source Where was this record scraped from? 1
state The two letter state abbreviation the media outlet is located in. 26
collection_date When was this record collected? 1

An intermediate file of TV stations compiled on stationindex.com. The website is scraped according to the market (region), and again according to the owner. The two scraped datasets are merged and duplicates are dropped. When dropping duplicates, precedence is given to the entries scraped by owner.
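That precedence rule can be sketched roughly as follows. The snippet below is illustrative only -- the toy dataframes stand in for the real scraped tables -- and is not the actual logic in download_data.py:

import pandas as pd

# Illustrative sketch of the precedence rule: put the owner-scraped rows first,
# then the market-scraped rows, so that drop_duplicates(keep='first') keeps the
# owner-scraped entry whenever the same station appears in both.
df_by_owner = pd.DataFrame({'station': ['KRIV', 'KKJB'],
                            'owner': ['Fox Television Stations', 'Boise Telecasters']})
df_by_market = pd.DataFrame({'station': ['KRIV', 'KDPH-LP'],
                             'owner': [None, 'Daystar']})

combined = pd.concat([df_by_owner, df_by_market], ignore_index=True)
deduped = combined.drop_duplicates(subset=['station'], keep='first')
print(deduped)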

Read the raw file from this URL:
https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/station_index.tsv

See the code used to make this dataset:
https://github.com/yinleon/LocalNewsDataset/blob/master/py/download_data.py

What Does the Data Look Like?

Sample of ../data/station_index.tsv (N = 1867)

city collection_date id owner source state station station_info subchannels website
0 Houston 2018-08-02 14:55:24.612585 "FOX 26" Fox Television Stations stationindex TX KRIV Digital Full-Power - 1000 kW NaN http://www.fox26houston.com/
1 Boise 2018-08-02 14:55:24.612585 "Telemundo Boise" Boise Telecasters stationindex ID KKJB Digital Full-Power - 35 kW 39.1 Telemundo, 39.2 Cozi TV, 39.3 Antenna TV... NaN
2 Phoenix 2018-08-02 14:55:24.612585 NaN Daystar stationindex AZ KDPH-LP Low-Power - 150 kW NaN NaN

What do the columns mean?

Column Name Description N Unique Values
city The name of the city that the TV station is located in. 676
id The human-recognizable name for the TV station. 699
owner The corporate owner of the station. 641
state The two letter state abbreviation the media outlet is located in. 56
station_info Related to the frequency of the transmission and technical specs 765
station The name of the TV station IE "WGBH". 1866
subchannels Alternative names for the TV station 626
website The website of the media outlet exactly as we found it online. 1172
source Where was this record scraped from? 1
collection_date When was this record collected? 1

An intermediate file of newspapers, magazines and college papers compiled by usnpl.com. The website is scraped by visiting state-specific pages using requests and BeautifulSoup; websites and social media accounts are collected wherever possible.
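At a high level the scrape looks something like the sketch below. The URL and the link filtering are placeholders; the real parsing in download_data.py is tailored to usnpl.com's state-specific page layout and is not reproduced here:

import requests
from bs4 import BeautifulSoup

# Illustrative only: fetch a page and pull out its links.
# A state-specific page would be visited in practice.
resp = requests.get('https://www.usnpl.com/', timeout=30)
soup = BeautifulSoup(resp.text, 'html.parser')

hrefs = [a.get('href') for a in soup.find_all('a') if a.get('href')]
twitter_links = [h for h in hrefs if 'twitter.com' in h]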

Read the raw file from this URL:
https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/usnpl.tsv

See the code used to make this dataset:
https://github.com/yinleon/LocalNewsDataset/blob/master/py/download_data.py

What Does the Data Look Like?

Sample of ../data/usnpl.tsv (N = 6221)

Facebook Geography Medium Name Twitter_Name Website Youtube source collection_date
0 https://www.facebook.com/MissionTimesCourier CA Newspapers Mission Times Courier NaN http://www.missiontimescourier.com NaN usnpl.com 2018-08-02 14:55:24.612585
1 https://www.facebook.com/adelantevalle CA Newspapers Adelante Valle IVPNews http://www.ivpressonline.com/adelantevalle http://www.youtube.com/user/ivpressonline usnpl.com 2018-08-02 14:55:24.612585
2 https://www.facebook.com/calmarcourier IA Newspapers Calmar Courier calmarcourier http://calmarcourier.com https://www.youtube.com/channel/UCVTvRL0P_eaIU... usnpl.com 2018-08-02 14:55:24.612585

What do the columns mean?

Column Name Description N Unique Values
Facebook The URL to the media outlet's Facebook presence. 5100
Geography The two letter state abbreviation the media outlet is located in. 51
Medium Whether the news outlet is a newspaper, magazine or college newspaper. 3
Name The name of the news outlet, e.g. "Calmar Courier". 5765
Twitter_Name The Twitter screen name of the news outlet. 3643
Website The website of the media outlet exactly as we found it online. 6080
Youtube The URL to the media outlet's YouTube presence. 2226
source Where was this record scraped from? 1
collection_date When was this record collected? 1

The intermediate files above are preprocessed (renaming columns, removing duplicates) and merged, resulting in the Local News Dataset! This is it! We made it!

Read the raw file from this URL:
https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/local_news_dataset_2018.csv

See the code used to make this dataset:
https://github.com/yinleon/LocalNewsDataset/blob/master/py/merge.py
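Conceptually, the merge step looks something like the sketch below. The column renames and the subset of shared columns are illustrative, not the exact mapping used in merge.py:

import pandas as pd

# Illustrative sketch of the merge: normalize two intermediates to a shared
# schema, concatenate, and drop duplicates. merge.py handles all intermediates
# and more columns than shown here.
usnpl = pd.read_csv('https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/usnpl.tsv', sep='\t')
sinclair = pd.read_csv('https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/sinclair.tsv', sep='\t')

usnpl = usnpl.rename(columns={'Name': 'name', 'Geography': 'state', 'Website': 'website'})
sinclair = sinclair.rename(columns={'station': 'name', 'broadcaster': 'owner'})

shared_cols = ['name', 'state', 'website']
merged = pd.concat([usnpl[shared_cols], sinclair[shared_cols]], ignore_index=True)
merged = merged.drop_duplicates(subset=['name', 'state'])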

What Does the Data Look Like?

Sample of ../data/local_news_dataset_2018.csv (N = 8720)

name state website domain twitter youtube facebook owner medium source collection_date
0 WTVC TN http://www.newschannel9.com/ newschannel9.com NaN NaN NaN Freedom Communications TV station stationindex 2018-08-02 14:55:24.612585
1 Fountain Hills Times AZ http://www.fhtimes.com fhtimes.com NaN NaN https://www.facebook.com/fountainhillstimes NaN Newspapers usnpl.com 2018-08-02 14:55:24.612585
2 Beauregard Daily News LA http://www.beauregarddailynews.net beauregarddailynews.net beauregardnews NaN https://www.facebook.com/beauregardnews NaN Newspapers usnpl.com 2018-08-02 14:55:24.612585

What do the columns mean?

Column Name Description N Unique Values
name The name of the media outlet, e.g. "WGBH" or "Fountain Hills Times". 8095
state The two letter state abbreviation the media outlet is located in. 55
website The website of the media outlet exactly as we found it online. 7345
domain The domain that houses the media outlet. It is standardized (no "www" or "http://"). Sometimes multiple media outlets direct to the same domain (but separate sub-domains). 6296
twitter The Twitter screen name of the news outlet. 3632
youtube The URL to the media outlet's YouTube presence. 2220
facebook The URL to the media outlet's Facebook presence. 5093
owner The corporate owner of the station. 634
medium Whether the news outlet is a newspaper, magazine, college newspaper, or TV station. 4
source Where was this record scraped from? 8
collection_date When was this record collected? 4

Breakdown of mediums in the Local News Dataset

medium
Newspapers 5336
TV station 2593
College Newspapers 480
Magazines 311

Breakdown of data sources in the Local News Dataset

source
usnpl.com 6119
stationindex 1708
sbgi.net 611
nexstar.tv 177
tribunemedia.com 47
hearst.com 33
meridith.com 16
User Input 9

The User Input entries are custom additions taken from the contents of this JSON file and added to the dataset in merge.py.

Below is an interactive Plot.ly choropleth map of state-level representation in this dataset. Hover over each state to get counts (number of stations) of the top mediums and owners.
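A map along those lines can be rebuilt from the final CSV with a few lines of Plotly. This is a minimal sketch using the plotly.graph_objects API, not the exact figure code from this notebook:

import pandas as pd
import plotly.graph_objects as go

url = 'https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/local_news_dataset_2018.csv'
df_local = pd.read_csv(url)

# Count outlets per state and draw a US choropleth keyed on the
# two-letter state abbreviations in the dataset.
counts = df_local.groupby('state').size().reset_index(name='n_outlets')
fig = go.Figure(go.Choropleth(
    locations=counts['state'],
    z=counts['n_outlets'],
    locationmode='USA-states',
    colorscale='Blues',
))
fig.update_layout(geo_scope='usa', title_text='Local news outlets per state')
fig.show()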

3. Using the Dataset

Below is some starter code in Python to read the Local News Dataset from the web into a Pandas Dataframe.

Top of Notebook

In [2]:
import pandas as pd

url = 'https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/local_news_dataset_2018.csv'
df_local = pd.read_csv(url)

If you want to use this dataset for a list of web domains, there are a few steps you'll need to take:

In [3]:
df_local_website = df_local[(~df_local.domain.isnull()) &
                             (df_local.domain != 'facebook.com') &
                             (df_local.domain != 'google.com') &
                             (df_local.domain != 'tumblr.com') &
                             (df_local.domain != 'wordpress.com') &
                             (df_local.domain != 'comettv.com')].drop_duplicates(subset=['domain'])

We do these steps because some entries don't have websites, some listed websites are actually Facebook pages, Comet TV is a nationwide franchise, and some stations share the same website.

In [9]:
df_local_website.to_csv('../data/local_news_dataset_2018_for_domain_analysis.csv', index=False)

In [4]:
df_local_website.sample(3, random_state=303)
Out[4]:
name state website domain twitter youtube facebook owner medium source collection_date
5537 Mount Desert Islander ME http://www.mdislander.com mdislander.com TheMDIslander NaN https://www.facebook.com/mdislander NaN Newspapers usnpl.com 2018-08-02 14:55:24.612585
5992 Lake County News-Chronicle MN http://www.lcnewschronicle.com lcnewschronicle.com NaN NaN https://www.facebook.com/pages/Lake-County-New... NaN Newspapers usnpl.com 2018-08-02 14:55:24.612585
633 WATC GA http://www.watc.tv/ watc.tv NaN NaN NaN Carolina Christian Broadcasting TV station stationindex 2018-08-02 14:55:24.612585

For convenience this filtered dataset is available here: https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/local_news_dataset_2018_for_domain_analysis.csv and also here:
http://bit.ly/local_news_dataset_domains

In [12]:
df_local_news_domain = pd.read_csv('http://bit.ly/local_news_dataset_domains')
df_local_news_domain.head(2)
Out[12]:
name state website domain twitter youtube facebook owner medium source collection_date
0 KWHE HI http://www.kwhe.com/ kwhe.com NaN NaN NaN LeSea TV station stationindex 2018-08-02 14:55:24.612585
1 WGVK MI http://www.wgvu.org/ wgvu.org NaN NaN NaN Grand Valley State University TV station stationindex 2018-08-02 14:55:24.612585

If you want to get Twitter accounts for all local news stations in Kansas you can filter the dataset as follows:

In [7]:
twitter_ks = df_local[(~df_local.twitter.isnull()) & 
                      (df_local.state == 'KS')]
twitter_ks.twitter.unique()
Out[7]:
array(['atchisonglobe', 'BCtimesgazette', 'baldwincity', 'CCNewsAdvocate',
       'BTelescope', 'ChanuteTribune', 'TimesSentinel1', 'Star_Argosy',
       'DerbyInformer', 'dcglobe', 'HighPlainsJrnl', 'emporiagazette',
       'FSTribune', 'GSentineltimes', 'gctelegram', 'GB_Tribune',
       'HaysDaily', 'RecordTime', 'HoltonRecorder', 'HutchNews',
       'iolaregister', 'thedailyunion', 'kckansan', 'KCStar', 'ljworld',
       'fortleavenworth', 'LVTimesNews', 'louisburgherald',
       'MERCnewsroom ', 'marionrecord', 'MarysvilleTweet', 'macsentinel',
       'ClarionPaper', 'ChadFrey', 'OsageCounty', 'osawatomienews',
       'oheraldnews', 'micorepublic', 'ParsonsSun', 'The_Morning_Sun',
       'pratttribune', 'sabethaherald', 'salinajournal',
       'shawneedispatch', 'tonganoxie', 'CJ_news', 'arkvalleynews',
       'wgtndailynews', 'voiceitwichita', 'kansasdotcom',
       'winfieldcourier', 'esubulletin', 'TigerMediaNet',
       'kstatecollegian', 'PSU_Collegio', 'sunflowernews'], dtype=object)

We can also get an array of all domains affiliated with Sinclair:

In [8]:
sinclair_stations = df_local[df_local.owner == 'Sinclair'].domain.unique()
sinclair_stations
Out[8]:
array(['wabm68.com', 'abc3340.com', 'wtto21.com', 'comettv.com',
       'utv44.com', 'wfgxtv.com', 'weartv.com', 'local15tv.com', nan,
       'katv.com', 'bakersfieldnow.com', 'kmph.com', 'kmph-kfre.com',
       'wjla.com', 'mycbs4.com', 'sbgi.net', 'myfoxtallahassee.com',
       'wtwc40.com', 'my15wtcn.com', 'azteca48.com', 'cw34.com',
       'cbs12.com', 'wfxl.com', 'wgxa.tv', 'foxsavannah.com', 'kgan.com',
       'cbs2iowa.com', 'fox28iowa.com', 'kdsm.com', 'ktvo.com',
       'siouxlandnews.com', 'cwtreasurevalley.com', 'kboi2.com',
       'wics.com', 'wyzz43.com', 'wicd15.com', 'cw23tv.com',
       'foxillinois.com', 'khqa.com', 'wsbt.com', 'foxkansas.com',
       'mytvwichita.com', 'wdky56.com', 'foxlexington.com', 'mywdka.com',
       'kbsi23.com', 'foxbaltimore.com', 'cwbaltimore.com',
       'mytvbaltimore.com', 'myfoxmaine.com', 'wgme.com', 'nbc25news.com',
       'wsmh.com', 'thecw46.com', 'cw7michigan.com', 'wwmt.com',
       'upnorthlive.com', 'thecw23.com', 'krcgtv.com', 'abcstlouis.com',
       'wlos.com', 'my48.tv', 'abc45.com', 'myrdctv.com', 'raleighcw.com',
       'nebraska.tv', 'foxnebraska.com', 'cw15kxvo.com', 'kptm.com',
       'mynews3.com', 'thecwlasvegas.tv', 'mylvtv.com', 'my21reno.com',
       'mynews4.com', 'foxreno.com', 'wutv.com', 'wsyt68.com',
       'cbs6albany.com', 'cwalbany.com', 'mytvbuffalo.com', 'wutv29.com',
       'foxrochester.com', 'cwrochester.com', '13wham.com',
       'cnycentral.com', 'wsyx6.com', 'my64.tv', 'wkef22.com',
       'star64.tv', 'local12.com', 'cwcincinnati.com',
       'abc6onyourside.com', 'cwcolumbus.com', 'myfox28columbus.com',
       'mytvdayton.com', 'abc22now.com', 'fox45now.com', 'nbc24.com',
       'okcfox.com', 'cwokc.com', 'ktul.com', 'kcby.com', 'kval.com',
       'kmtr.com', 'kpic.com', 'ktvl.com', 'southernoregoncw.com',
       'kunptv.com', 'katu.com', 'cwcentralpa.com', 'local21news.com',
       'wjactv.com', '22thepoint.com', 'wpgh53.com', 'fox56.com',
       'turnto10.com', 'abcnews4.com', 'wach.com', 'wpde.com', 'my40.tv',
       'foxchattanooga.com', 'chattanoogacw.com', 'mytv30web.com',
       'cw58.tv', 'fox17.com', 'abc7amarillo.com', 'telemundoaustin.com',
       'cbsaustin.com', 'kfdm.com', 'fox4beaumont.com',
       'fox38corpuschristi.com', 'kfoxtv.com', 'cbs4local.com',
       'valleycentral.com', 'foxsanantonio.com', 'news4sanantonio.com',
       'kmys.tv', 'kenvtv.com', 'kutv.com', 'kmyu.tv', 'fox35.com',
       'mytvz.com', 'mytvrichmond.com', 'foxrichmond.com', 'wset.com',
       'komonews.com', 'univisionseattle.com', 'klewtv.com', 'kunwtv.com',
       'keprtv.com', 'kimatv.com', 'cq9tv.com', 'thattvwebsite.com',
       'fox11online.com', 'cw14online.com', 'fox47.com', 'super18tv.com',
       'wvah.com', 'wchstv.com', 'wtov9.com'], dtype=object)
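One way to put the domain column to work for link analysis is to normalize a URL down to its host and check it against the dataset's domains. This is a rough sketch; the normalization below is illustrative and may differ from merge.py's own standardization:

import pandas as pd
from urllib.parse import urlparse

# Illustrative normalization: lowercase the host and strip a leading 'www.'.
def get_domain(url):
    netloc = urlparse(url).netloc.lower()
    return netloc[4:] if netloc.startswith('www.') else netloc

# The filtered convenience file introduced above.
df_domains = pd.read_csv('http://bit.ly/local_news_dataset_domains')
local_domains = set(df_domains.domain)

# Does a shared link point at a local news outlet?
get_domain('http://www.newschannel9.com/') in local_domains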

Stay tuned for more in-depth tutorials about how this dataset can be used!

4. Data Sheet

In the spirit of transparency and good documentation, I am going to answer some of the questions for datasets proposed in the recent paper Datasheets for Datasets by Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford.

Top of Notebook

Motivation for Dataset Creation

Why was the dataset created? (e.g., were there specific tasks in mind, or a specific gap that needed to be filled?)
This dataset was created to study the role of state-level local news on Twitter.
We wanted to find users who follow both local news outlets and members of Congress.

What (other) tasks could the dataset be used for? Are there obvious tasks for which it should not be used?
The dataset can be used to query other social media platforms for local news outlets' social feeds.
It can also serve as a list of state-level domains for link analysis. This is one use of this dataset in an upcoming report on the Internet Research Agency's use of links on Twitter.

I hope that this dataset might be of interest for researchers applying to the Social Science One and Facebook RFP.

Has the dataset been used for any tasks already? If so, where are the results so others can compare (e.g., links to published papers)?
A study of IRA Twitter accounts sharing national, local, and junk news articles.

Who funded the creation of the dataset? If there is an associated grant, provide the grant number.
The dataset was created by Leon Yin at the SMaPP Lab at NYU. For more information, please visit our website.

Dataset Composition

What are the instances? (that is, examples; e.g., documents, images, people, countries) Are there multiple types of instances? (e.g., movies, users, ratings; people, interactions between them; nodes, edges)
Each instance is a local news outlet.

Are relationships between instances made explicit in the data (e.g., social network links, user/movie ratings, etc.)? How many instances of each type are there?
There are relational links in this data, but it is up to you to make those connections. For counts, please refer to the tech specs above.

What data does each instance consist of? “Raw” data (e.g., unprocessed text or images)? Features/attributes?
Each instance is a scraped entity from a website. There are no images involved. The metadata fields regarding state, website, and social accounts are scraped from raw HTML.

Is there a label/target associated with instances? If the instances are related to people, are subpopulations identified (e.g., by age, gender, etc.) and what is their distribution?
This is not a traditional supervised machine learning dataset.

Is everything included or does the data rely on external resources? (e.g., websites, tweets, datasets) If external resources, a) are there guarantees that they will exist, and remain constant, over time; b) is there an official archival version.
The data relies on external sources! There are absolutely no guarantees that links to Twitter, YouTube, Facebook, the source websites (where data is scraped), or the destination websites (homepages for news outlets) will remain live or unchanged.

Currently there are open source libraries, like Tweepy, to query Twitter, and my colleague Megan Brown and I are about to release a Python wrapper for the YouTube Data API.
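As a sketch of how the twitter column plugs into such libraries, the screen names can be hydrated with Tweepy. This assumes Tweepy 3.x and valid Twitter API credentials; the key strings below are placeholders:

import pandas as pd
import tweepy

url = 'https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/local_news_dataset_2018.csv'
df_local = pd.read_csv(url)

# Placeholder credentials -- substitute your own Twitter API keys.
auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET')
api = tweepy.API(auth, wait_on_rate_limit=True)

# Hydrate up to 100 screen names per call (a Twitter API limit).
screen_names = df_local.twitter.dropna().unique()[:100].tolist()
users = api.lookup_users(screen_names=screen_names)
follower_counts = {u.screen_name: u.followers_count for u in users}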

Are there licenses, fees or rights associated with any of the data?
This dataset is free to use. We're copying terms of use from ProPublica:

In general, you may use this dataset under the following terms. However, there may be different terms included for some data sets. It is your responsibility to read carefully the specific terms included with the data you download or purchase from our website.

You can’t republish the raw data in its entirety, or otherwise distribute the data (in whole or in part) on a stand-alone basis.
You can’t change the data except to update or correct it.
You can’t charge people money to look at the data, or sell advertising specifically against it.
You can’t sub-license or resell the data to others.
If you use the data for publication, you must cite Leon Yin and the SMaPP Lab. 
We do not guarantee the accuracy or completeness of the data. You acknowledge that the data may contain errors and omissions. 
We are not obligated to update the data, but in the event we do, you are solely responsible for checking our site for any updates.
You will indemnify, hold harmless, and defend Leon Yin and the SMaPP Lab from and against any claims arising out of your use of the data.

Data Collection Process

How was the data collected? (e.g., hardware apparatus/sensor, manual human curation, software program, software interface/API; how were these constructs/measures/methods validated?)
The data was collected using 4 CPUs on the NYU HPC Prince cluster, with custom code that utilizes the requests, BeautifulSoup, and Pandas Python libraries. No APIs are used to collect this data. Data was quality checked by exploring it in Jupyter Notebooks and comparing it to lists curated by AbilityPR of the top 10 newspapers by state.

Who was involved in the data collection process?
This dataset was collected by Leon Yin.

Over what time-frame was the data collected?
The collection_date columns capture when each dataset was collected. Initial development for this project began in April 2018.
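For example, the collection window can be checked directly from the final CSV; a minimal sketch assuming Pandas:

import pandas as pd

url = 'https://raw.githubusercontent.com/yinleon/LocalNewsDataset/master/data/local_news_dataset_2018.csv'
df_local = pd.read_csv(url)

# Parse the collection timestamps to see the window over which records were gathered.
collected = pd.to_datetime(df_local.collection_date)
print(collected.min(), collected.max())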

How was the data associated with each instance acquired?
Data is directly scraped from HTML; there is no inferred data. There is no information on how the sources curate their websites, especially stationindex.com and usnpl.com.

Does the dataset contain all possible instances?
This is not a sample, but a best attempt at creating a comprehensive list.

Is there information missing from the dataset and why?
News outlets not listed on the websites we scrape, or in the custom additions JSON, are not included. We'll attempt to take requests for additions and amendments on GitHub, with the intention of creating a website with a submission form.

Are there any known errors, sources of noise, or redundancies in the data?
There are possible redundancies of news outlets occurring across the websites scraped. We have measures to drop duplicates, but if we missed any, please submit an issue on GitHub.

Data Preprocessing

What preprocessing/cleaning was done?
Twitter screen names are extracted from URLs, states are parsed from raw HTML that usually contains a city name, and there are no aggregated or engineered features.
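As an illustration of that first step, a screen name can be pulled out of a Twitter profile URL like so; this is a sketch, not the exact parsing code in the repo:

from urllib.parse import urlparse

def screen_name_from_url(url):
    # Sketch: treat the first path segment of a twitter.com URL as the screen name.
    path = urlparse(url).path.strip('/')
    return path.split('/')[0] if path else None

screen_name_from_url('https://twitter.com/CBS5AZ')  # -> 'CBS5AZ'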

Was the “raw” data saved in addition to the preprocessed/cleaned data?
The raw HTML for each site is not provided, so changes in website UIs will break future collection. There are no warranties for this. However, the intermediate files are saved and thoroughly documented in the tech specs above.

Is the preprocessing software available?
The dataset is a standard CSV, so any relevant open source software can be used.

Does this dataset collection/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet?
The addition of Twitter screen names makes it possible to use this data for Twitter research. The inclusion of additional fields like website and other social media platforms (Facebook, YouTube) allows for additional applications.

Dataset Distribution

How is the dataset distributed? (e.g., website, API, etc.; does the data have a DOI; is it archived redundantly?)
The dataset is being hosted on GitHub at the moment. It does not have a DOI (if you have suggestions on how to get one please reach out!). There are plans to migrate the dataset to its own website.

When will the dataset be released/first distributed?
August 2018.

What license (if any) is it distributed under?
MIT

Are there any fees or access/export restrictions?
Not while it is on GitHub, but if it's migrated elsewhere that's possible.

Dataset Maintenance

Who is supporting/hosting/maintaining the dataset?
The dataset is currently solely maintained by Leon Yin. This seems unsustainable, so if this project sparks your interest, please reach out to me here: data-smapp_lab at nyu dot edu

Will the dataset be updated? How often and by whom? How will updates/revisions be documented and communicated (e.g., mailing list, GitHub)? Is there an erratum?
The dataset can be updated locally by running the scripts in this repo. Amendments to the hosted dataset will have a separate filepath and URL, and will be documented in the README.

If the dataset becomes obsolete how will this be communicated?
If the dataset becomes obsolete, we'll make this clear in the README in the GitHub repository (or wherever it is being hosted).

Is there a repository to link to any/all papers/systems that use this dataset?
There aren't any published papers that use this dataset yet. We'll keep a list in the README or on the website.

If others want to extend/augment/build on this dataset, is there a mechanism for them to do so?
Modifications can be made by adding records to the amendments JSON.

If the dataset relates to people (e.g., their attributes) or was generated by people, were they informed about the data collection?
This dataset has no people-level information. However we don't know anything about the people who generated the webpages that this dataset is built on.

Does the dataset contain information that might be considered sensitive or confidential?
To my knowledge there is no personally identifiable information in this dataset.

Does the dataset contain information that might be considered inappropriate or offensive?
I hope not!