This notebook lets you harvest large amounts of data from Papers Past (via DigitalNZ) for further analysis. It saves the results as a CSV file that you can open in any spreadsheet program. It currently includes the OCRd text of all the newspaper articles, but I might make this optional in the future (thoughts?).
You can edit this notebook to harvest other collections in DigitalNZ (see the notes below for pointers). However, it currently saves only a small subset of the available metadata, so you'd probably want to adjust the fields as well. Add an issue on GitHub if you need help creating a custom harvester.
There are only two things you have to change: your API key and your search query. There are also additional options for limiting your search results.
If you haven't used one of these notebooks before, they're basically web pages in which you can write, edit, and run live code. They're meant to encourage experimentation, so don't feel nervous. Just try running a few cells and see what happens!
Some tips:

- Code cells have boxes around them.
- To run a code cell, click on it and hit Shift+Enter. The Shift+Enter combo will also move you to the next cell, so it's a quick way to work through the notebook.
- While a cell is running, a * appears in the square brackets next to it. Once the cell has finished, the asterisk is replaced by a number.
- In most cases you'll want to start from the top of the notebook and work your way down, running each cell in turn. Later cells might depend on the results of earlier ones.
- To edit a code cell, just click on it and type. Remember to run the cell once you've finished editing.
Go get yourself a DigitalNZ API key, then paste it between the quotes below. You need a key to make API requests, but they're free and quick to obtain.
# Paste your API key between the quotes
# You might need to trim off any spaces at the beginning and end
api_key = 'YOUR API KEY'
print('Your API key is: {}'.format(api_key))
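If you'd like to check that your key is working before going any further, you can run the cell below. It's an optional extra (not part of the original setup) that sends a minimal request to the DigitalNZ API and prints the HTTP status code — a 200 means your key was accepted.
# Optional: a quick sketch to check your API key works (not required for the harvest)
import requests
test_response = requests.get(
    'https://api.digitalnz.org/v3/records.json',
    params={'api_key': api_key, 'text': 'test', 'per_page': 1}
)
# A status code of 200 means the key was accepted
print('Status code: {}'.format(test_response.status_code))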
Just run these cells to set up some things that we'll need later on.
# This cell just sets up some stuff that we'll need later
import logging
import requests
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
import pandas as pd
from tqdm.auto import tqdm
import time
import re
from slugify import slugify
from time import strftime
from IPython.display import display, FileLink
logging.basicConfig(level=logging.ERROR)
s = requests.Session()
retries = Retry(total=5, backoff_factor=1, status_forcelist=[ 502, 503, 504 ])
s.mount('https://', HTTPAdapter(max_retries=retries))
# This cell sets the basic parameters that we'll send to the API
# You'll add your search query to this below
# You could change the 'display_collection' value to something other than
# Papers Past to harvest other parts of DigitalNZ
params = {
    'and[display_collection][]': 'Papers Past',
    'per_page': '100',
    'api_key': api_key
}
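If you're planning to harvest something other than Papers Past, you'll need to know what values 'display_collection' can take. The cell below is a rough sketch of one way to find out, using the DigitalNZ 'facets' parameter to list collection names and record counts — check the DigitalNZ API documentation if the response doesn't match this shape.
# A sketch of how you might list the available 'display_collection' values
# using the DigitalNZ 'facets' parameter -- adjust if the API response differs
facet_params = {
    'api_key': api_key,
    'facets': 'display_collection',
    'facets_per_page': 50
}
facet_data = s.get('https://api.digitalnz.org/v3/records.json', params=facet_params).json()
for collection, count in facet_data['search']['facets']['display_collection'].items():
    print('{}: {}'.format(collection, count))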
This is where you specify your search. Just put in anything you might enter in the DigitalNZ search box.
params['text'] = 'possum'
#params['text'] = 'possum AND opossum'
#params['text'] = '"possum skins"'
You can also limit your results to a particular newspaper. Just remove the '#' from the start of the line to add this parameter to your query.
#params['and[collection][]'] = 'Evening Post'
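Not sure of the exact newspaper titles? The cell below uses the same facets approach sketched above to list the 'collection' values within Papers Past. Again, treat this as a sketch — the facet name and response shape are assumptions based on the DigitalNZ v3 API, so check the documentation if it doesn't behave as expected.
# A sketch that lists newspaper titles (the 'collection' facet) within Papers Past
newspaper_params = {
    'api_key': api_key,
    'and[display_collection][]': 'Papers Past',
    'facets': 'collection',
    'facets_per_page': 100
}
newspaper_data = s.get('https://api.digitalnz.org/v3/records.json', params=newspaper_params).json()
for newspaper, count in newspaper_data['search']['facets']['collection'].items():
    print('{}: {}'.format(newspaper, count))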
You can also limit your query by date, but it's a bit fiddly.
Filtering by a single century, decade or year is simple. Just add the appropriate parameter as in the examples below. Remove the '#', edit the value, and run the cell.
#params['and[century][]'] = '1800'
#params['and[decade][]'] = '1850'
#params['and[year][]'] = '1853'
There's no direct way (I think) to search a range of years, but we can get around this by issuing a request for each year separately and then combining the results. If you want to do this, change the values below.
# This sets the default values
# Change None to a year, e.g. 1854, to set a specific range.
# You need both a start and an end year
start_year = None
end_year = None
This is where all the serious harvesting work gets done. You shouldn't need to change anything unless you want to harvest something other than Papers Past. Just run the cell.
class Harvester():

    def __init__(self, params, start_year=None, end_year=None):
        self.params = params
        self.start_year = start_year
        self.end_year = end_year
        self.current_year = None
        self.total = 0
        self.articles = []
    def process_results(self, data):
        results = data['search']['results']
        if results:
            self.articles += self.process_articles(results)
        return len(results)

    def process_articles(self, results):
        articles = []
        for result in results:
            # If you're harvesting something other than Papers Past, you'd probably
            # want to change the way results are processed.
            # Remove the parenthesised publication details from the end of the title
            title = re.sub(r'(\([^)]*\))[^(]*$', '', result['title']).strip()
            articles.append({
                'id': result['id'],
                'title': title,
                'newspaper': result['publisher'][0],
                'date': result['date'][0],
                'text': result['fulltext'],
                'paperspast_url': result['landing_url'],
                'source_url': result['source_url']
            })
        return articles
    def get_data(self):
        # Use https so the retry-enabled adapter mounted on the session above is applied
        response = s.get('https://api.digitalnz.org/v3/records.json', params=self.params)
        return response.json()
    def harvest(self):
        '''
        Do the harvesting!
        '''
        data = self.get_data()
        total = data['search']['result_count']
        result_count = self.process_results(data)
        with tqdm(total=total, desc=str(self.current_year)) as pbar:
            pbar.update(result_count)
            # A full page contains 100 results, so keep requesting new pages
            # until we get back fewer than 100
            while result_count == 100:
                self.params['page'] += 1
                data = self.get_data()
                result_count = self.process_results(data)
                pbar.update(result_count)
                time.sleep(0.2)
    def start_harvest(self):
        '''
        Initiates a harvest.
        If you've specified start and end years it'll loop over them getting results for each.
        '''
        if self.start_year and self.end_year:
            for year in tqdm(range(self.start_year, self.end_year+1), desc='Years'):
                self.params['page'] = 1
                self.current_year = year
                self.params['and[year][]'] = year
                self.harvest()
        else:
            self.params['page'] = 1
            self.harvest()
    def save_as_csv(self, filename=None):
        '''
        Save the results as a CSV file.
        You can supply a filename, but if you don't it'll construct one from the query and current date.
        Displays a download link when finished.
        '''
        if not filename:
            if self.start_year and self.end_year:
                year_range = '{}-{}-'.format(self.start_year, self.end_year)
            else:
                year_range = ''
            filename = '{}-{}{}.csv'.format(slugify(self.params['text']), year_range, strftime("%Y%m%d"))
        df = pd.DataFrame(self.articles)
        df.to_csv(filename, index=False)
        display(FileLink(filename))
harvester = Harvester(params, start_year=start_year, end_year=end_year)
harvester.start_harvest()
This cell generates a CSV file and creates a link that you can use to download it.
harvester.save_as_csv()
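Once the harvest has been saved, you can load the CSV back into pandas for further analysis. The cell below is just an example: the filename is hypothetical, so replace it with the name of the file created above.
# Load a saved harvest back into a dataframe for further analysis
# 'possum-20210101.csv' is a placeholder -- use the filename generated above
df = pd.read_csv('possum-20210101.csv')
# For example, count the number of articles from each newspaper
print(df['newspaper'].value_counts())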