Corrections of OCRd text in Trove's newspapers

The full text of newspaper articles in Trove is extracted from page images using Optical Character Recognition (OCR). The accuracy of the OCR process is influenced by a range of factors including the font and the quality of the images. Many errors slip through. Volunteers have done a remarkable job in correcting these errors, but it's a huge task. This notebook explores the scale of OCR correction in Trove.

There are two ways of getting data about OCR corrections using the Trove API. To get aggregate data you can include has:corrections in your query to limit the results to articles that have at least one OCR correction.
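For example, the has:corrections index can be combined with ordinary search terms. Here's a minimal sketch of the request parameters (the keyword 'drought' and the API key are placeholders):

# Parameters for a search that only matches corrected articles about drought.
# The keyword 'drought' and the API key are placeholders.
params = {
    'q': 'drought has:corrections',
    'zone': 'newspaper',
    'encoding': 'json',
    'n': 0, # We only want the total, not the records
    'key': 'YOUR API KEY'
}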

To get information about the number of corrections made to individual articles in your results, you can add the reclevel=full parameter; each article record will then include a correction count and details of the most recent correction. For example, note the correctionCount and lastCorrection values in the record below:

{
    "article": {
        "id": "41697877",
        "url": "/newspaper/41697877",
        "heading": "WRAGGE AND WEATHER CYCLES.",
        "category": "Article",
        "title": {
            "id": "101",
            "value": "Western Mail (Perth, WA : 1885 - 1954)"
        },
        "date": "1922-11-23",
        "page": 4,
        "pageSequence": 4,
        "troveUrl": "https://trove.nla.gov.au/ndp/del/article/41697877",
        "illustrated": "N",
        "wordCount": 1054,
        "correctionCount": 1,
        "listCount": 0,
        "tagCount": 0,
        "commentCount": 0,
        "lastCorrection": {
            "by": "*anon*",
            "lastupdated": "2016-09-12T07:08:57Z"
        },
        "identifier": "https://nla.gov.au/nla.news-article41697877",
        "trovePageUrl": "https://trove.nla.gov.au/ndp/del/page/3522839",
        "pdf": "https://trove.nla.gov.au/ndp/imageservice/nla.news-page3522839/print"
    }
}
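If you want to check these values for a particular article, you can request it directly from the newspaper endpoint with reclevel=full. Here's a minimal sketch using the article above (replace 'YOUR API KEY' with your own key):

import requests

# A minimal sketch: fetch a single article record with reclevel=full.
# 'YOUR API KEY' is a placeholder.
response = requests.get(
    'https://api.trove.nla.gov.au/v2/newspaper/41697877',
    params={'encoding': 'json', 'reclevel': 'full', 'key': 'YOUR API KEY'}
)
article = response.json()['article']
print(article['correctionCount'], article['lastCorrection'])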

Setting things up

In [2]:
import requests
import os
import ipywidgets as widgets
from operator import itemgetter # used for sorting
import pandas as pd # makes manipulating the data easier
import altair as alt
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry
from tqdm.auto import tqdm
from IPython.display import display, HTML, FileLink, clear_output
import math
from collections import OrderedDict
import time

# Make sure data directory exists
os.makedirs('data', exist_ok=True)

# Create a session that will automatically retry on server errors
s = requests.Session()
retries = Retry(total=5, backoff_factor=1, status_forcelist=[ 502, 503, 504 ])
s.mount('http://', HTTPAdapter(max_retries=retries))
s.mount('https://', HTTPAdapter(max_retries=retries))
In [ ]:
api_key = 'YOUR API KEY'
print('Your API key is: {}'.format(api_key))
In [5]:
# Basic parameters for Trove API
params = {
    'facet': 'year', # Get the data aggregated by year.
    'zone': 'newspaper',
    'key': api_key,
    'encoding': 'json',
    'n': 0 # We don't need any records, just the facets!
}
In [6]:
def get_results(params):
    '''
    Get JSON response data from the Trove API.
    Parameters:
        params
    Returns:
        JSON formatted response data from Trove API 
    '''
    response = s.get('https://api.trove.nla.gov.au/v2/result', params=params, timeout=30)
    response.raise_for_status()
    # print(response.url) # This shows us the url that's sent to the API
    data = response.json()
    return data

How many newspaper articles have corrections?

Let's find out what proportion of newspaper articles have at least one OCR correction.

First we'll get the total number of newspaper articles in Trove.

In [7]:
# Set the q parameter to a single space to get everything
params['q'] = ' '

# Get the data from the API
data = get_results(params)

# Extract the total number of results
total = int(data['response']['zone'][0]['records']['total'])
print('{:,}'.format(total))
232,202,146

Now we'll set the q parameter to has:corrections to limit the results to newspaper articles that have at least one correction.

In [8]:
# Set the q parameter to 'has:corrections' to limit results to articles with corrections
params['q'] = 'has:corrections'

# Get the data from the API
data = get_results(params)

# Extract the total number of results
corrected = int(data['response']['zone'][0]['records']['total'])
print('{:,}'.format(corrected))
12,743,782

Calculate the proportion of articles with corrections.

In [9]:
print('{:.2%} of articles have at least one correction'.format(corrected/total))
5.49% of articles have at least one correction

You might be thinking that these figures don't seem to match the number of corrections by individuals displayed on the digitised newspapers home page. Remember that these figures show the number of articles that include corrections, while the individual scores show the number of lines corrected by each volunteer.

Number of corrections by year

In [10]:
def get_facets(data):
    '''
    Loop through facets in Trove API response, saving terms and counts.
    Parameters:
        data  - JSON formatted response data from Trove API  
    Returns:
        A list of dictionaries containing: 'term', 'total_results'
    '''
    facets = []
    try:
        # The facets are buried a fair way down in the results
        # Note that if you ask for more than one facet, you'll have to use the facet['name'] value to find the one you want
        # In this case there's only one facet, so we can just grab the list of terms (which are in fact the results by year)
        for term in data['response']['zone'][0]['facets']['facet']['term']:
            
            # Get the year and the number of results, and convert them to integers, before adding to our results
            facets.append({'term': term['search'], 'total_results': int(term['count'])})
            
        # Sort facets by year
        facets.sort(key=itemgetter('term'))
    except TypeError:
        pass
    return facets

def get_facet_data(params, start_decade=180, end_decade=201):
    '''
    Loop through the decades from 'start_decade' to 'end_decade',
    getting the number of search results for each year from the year facet.
    Combine all the results into a single list.
    Parameters:
        params - parameters to send to the API
        start_decade - first decade to harvest, in the format used by Trove's decade facet (year / 10, so 180 is the 1800s)
        end_decade - last decade to harvest, in the same format
    Returns:
        A list of dictionaries containing 'term' (the year), 'total_results' for the complete 
        period between the start and end decades.
    '''
    # Create a list to hold the facets data
    facet_data = []
    
    # Loop through the decades
    for decade in tqdm(range(start_decade, end_decade + 1)):
        
        #print(params)
        # Avoid confusion by copying the params before we change anything.
        search_params = params.copy()
        
        # Add decade value to params
        search_params['l-decade'] = decade
        
        # Get the data from the API
        data = get_results(search_params)
        
        # Get the facets from the data and add to facets_data
        facet_data += get_facets(data)
        
    # Remove the progress bar (you can also set leave=False in tqdm, but that still leaves white space in Jupyter Lab)
    clear_output()
    return facet_data
In [11]:
facet_data = get_facet_data(params)
In [12]:
# Convert our data to a dataframe called df
df = pd.DataFrame(facet_data)
In [13]:
df.head()
Out[13]:
term total_results
0 1803 526
1 1804 619
2 1805 430
3 1806 367
4 1807 134

So which year has the most corrections?

In [14]:
df.loc[df['total_results'].idxmax()]
Out[14]:
term               1915
total_results    256092
Name: 112, dtype: int64

The fact that there are more corrections in newspaper articles from 1915 might make you think that people have been more motivated to correct articles relating to WWI. But if you look at the total number of articles per year, you'll see that 1915 also has the most digitised articles! The raw number of corrections is probably not very useful, so let's look instead at the proportion of articles from each year that have at least one correction.

To do that we'll re-harvest the facet data, but this time with a blank, or empty search, to get the total number of articles available from each year.

In [15]:
# Reset the 'q' parameter
# Use an empty search (a single space) to get ALL THE ARTICLES
params['q'] = ' '

# Get facet data for all articles
all_facet_data = get_facet_data(params)
In [16]:
# Convert the results to a dataframe
df_total = pd.DataFrame(all_facet_data)

Now we'll merge the number of corrected articles per year with the total number of articles per year. Then we'll calculate the proportion with corrections.

In [17]:
def merge_df_with_total(df, df_total, how='left'):
    '''
    Merge dataframes containing search results with the total number of articles by year.
    This is a left join on the year column. The total number of articles will be added as a column to 
    the existing results.
    Once merged, do some reorganisation and calculate the proportion of search results.
    Parameters:
        df - the search results in a dataframe
        df_total - total number of articles per year in a dataframe
    Returns:
        A dataframe with the following columns - 'term' (year), 'total_results', 'total_articles', 'proportion' 
        (plus any other columns that are in the search results dataframe).
    '''
    # Merge the two dataframes, joining them on the 'term' (year) column
    df_merged = pd.merge(df, df_total, how=how, on='term')

    # Rename the columns for convenience
    df_merged.rename({'total_results_y': 'total_articles'}, inplace=True, axis='columns')
    df_merged.rename({'total_results_x': 'total_results'}, inplace=True, axis='columns')

    # Set blank values to zero to avoid problems
    df_merged['total_results'] = df_merged['total_results'].fillna(0).astype(int)

    # Calculate proportion by dividing the search results by the total articles
    df_merged['proportion'] = df_merged['total_results'] / df_merged['total_articles']
    return df_merged
In [18]:
# Merge the search results with the total articles
df_merged = merge_df_with_total(df, df_total)
df_merged.head()
Out[18]:
term total_results total_articles proportion
0 1803 526 526 1.0
1 1804 619 619 1.0
2 1805 430 430 1.0
3 1806 367 367 1.0
4 1807 134 134 1.0

Let's visualise the results, showing both the number of articles with corrections each year, and the proportion of articles each year with corrections.

In [19]:
# Number of articles with corrections
chart1 = alt.Chart(df_merged).mark_line(point=True).encode(
        x=alt.X('term:Q', axis=alt.Axis(format='c', title='Year')),
        y=alt.Y('total_results:Q', axis=alt.Axis(format=',d', title='Number of articles with corrections')),
        tooltip=[alt.Tooltip('term:Q', title='Year'), alt.Tooltip('total_results:Q', title='Articles', format=',')]
    ).properties(width=700, height=250)

# Proportion of articles with corrections
chart2 = alt.Chart(df_merged).mark_line(point=True).encode(
        x=alt.X('term:Q', axis=alt.Axis(format='c', title='Year')),
    
        # This time we're showing the proportion (formatted as a percentage) on the Y axis
        y=alt.Y('proportion:Q', axis=alt.Axis(format='%', title='Proportion of articles with corrections')),
        tooltip=[alt.Tooltip('term:Q', title='Year'), alt.Tooltip('proportion:Q', title='Proportion', format='%')],
        
        # Make the charts different colors
        color=alt.value('orange')
    ).properties(width=700, height=250)

# This is a shorthand way of stacking the charts on top of each other
chart1 & chart2
Out[19]:

This is really interesting – it seems there's been a deliberate effort to get the earliest newspapers corrected.
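We can check this hunch against the data we've already harvested. Here's a quick sketch that lists the years in which every available article has at least one correction:

# Years where the proportion of corrected articles is 1.0
fully_corrected_years = df_merged.loc[df_merged['proportion'] == 1]
print(fully_corrected_years['term'].tolist())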

Number of corrections by category

Let's see how the number of corrections varies across categories. This time we'll use the category facet instead of year.

In [20]:
params['q'] = 'has:corrections'
params['facet'] = 'category'
In [21]:
data = get_results(params)
facets = []
for term in data['response']['zone'][0]['facets']['facet']['term']:
    # Get the category and the number of results, and convert the count to an integer, before adding to our results
    facets.append({'term': term['search'], 'total_results': int(term['count'])})
df_categories = pd.DataFrame(facets)
In [22]:
df_categories.head()
Out[22]:
term total_results
0 Article 9707996
1 Family Notices 1324644
2 Advertising 1231444
3 Detailed Lists, Results, Guides 485544
4 Literature 9371

Once again, the raw numbers are probably not all that useful, so let's get the total number of articles in each category and calculate the proportion that have at least one correction.

In [23]:
# Blank query
params['q'] = ' '
data = get_results(params)
facets = []
for term in data['response']['zone'][0]['facets']['facet']['term']:
    # Get the category and the number of results, and convert the count to an integer, before adding to our results
    facets.append({'term': term['search'], 'total_results': int(term['count'])})
df_total_categories = pd.DataFrame(facets)

We'll merge the two corrections by category data with the total articles per category and calculate the proportion.

In [24]:
df_categories_merged = merge_df_with_total(df_categories, df_total_categories)
df_categories_merged
Out[24]:
term total_results total_articles proportion
0 Article 9707996 161358361 0.060164
1 Family Notices 1324644 1913143 0.692392
2 Advertising 1231444 42882886 0.028716
3 Detailed Lists, Results, Guides 485544 26049761 0.018639
4 Literature 9371 32539 0.287993
5 Obituaries 6626 7004 0.946031
6 Humour 6313 22693 0.278192
7 News 6035 7439 0.811265
8 Law, Courts, And Crime 5244 6445 0.813654
9 Sport And Games 4501 8982 0.501113
10 Letters 2511 9137 0.274817
11 Arts And Culture 1579 2241 0.704596
12 Editorial 1480 9274 0.159586
13 Puzzles 1403 29650 0.047319
14 Classified Advertisements And Notices 1129 1291 0.874516
15 Shipping Notices 1056 1164 0.907216
16 Official Appointments And Notices 815 838 0.972554
17 Weather 743 5223 0.142255
18 Commerce And Business 666 1038 0.641618
19 Reviews 594 898 0.661470
20 Display Advertisement 247 282 0.875887

A lot of the categories have been added recently and don't contain a lot of articles. Some of these have a very high proportion of articles with corrections – 'Obituaries' for example. This suggests users are systematically categorising and correcting certain types of article.
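To see this more clearly, we can sort the merged results by proportion. A quick sketch:

# Rank categories by the proportion of articles with at least one correction
df_categories_merged.sort_values(by='proportion', ascending=False).head(10)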

Let's focus on the main categories by filtering out those with fewer than 30,000 articles.

In [25]:
df_categories_filtered = df_categories_merged.loc[df_categories_merged['total_articles'] > 30000]
df_categories_filtered
Out[25]:
term total_results total_articles proportion
0 Article 9707996 161358361 0.060164
1 Family Notices 1324644 1913143 0.692392
2 Advertising 1231444 42882886 0.028716
3 Detailed Lists, Results, Guides 485544 26049761 0.018639
4 Literature 9371 32539 0.287993

And now we can visualise the results.

In [27]:
cat_chart1 = alt.Chart(df_categories_filtered).mark_bar().encode(
    x=alt.X('term:N', title='Category'),
    y=alt.Y('total_results:Q', title='Articles with corrections')
)

cat_chart2 = alt.Chart(df_categories_filtered).mark_bar().encode(
    x=alt.X('term:N', title='Category'),
    y=alt.Y('proportion:Q', axis=alt.Axis(format='%', title='Proportion of articles with corrections')),
    color=alt.value('orange')
)

cat_chart1 | cat_chart2
Out[27]:

As we can see, the rate of corrections is much higher in the 'Family Notices' category than any other. This probably reflects the work of family historians and others searching for, and correcting, articles containing particular names.

Number of corrections by newspaper

How do rates of correction vary across newspapers? We can use the title facet to find out.

In [28]:
params['q'] = 'has:corrections'
params['facet'] = 'title'
In [29]:
data = get_results(params)
facets = []
for term in data['response']['zone'][0]['facets']['facet']['term']:
    # Get the newspaper id and the number of results, and convert the count to an integer, before adding to our results
    facets.append({'term': term['search'], 'total_results': int(term['count'])})
df_newspapers = pd.DataFrame(facets)
In [30]:
df_newspapers.head()
Out[30]:
term total_results
0 35 801023
1 13 757930
2 11 347365
3 16 335237
4 30 304692

Once again we'll calculate the proportion of articles corrected for each newspaper by getting the total number of articles for each newspaper on Trove.

In [31]:
params['q'] = ' '
In [32]:
data = get_results(params)
facets = []
for term in data['response']['zone'][0]['facets']['facet']['term']:
    # Get the newspaper id and the number of results, and convert the count to an integer, before adding to our results
    facets.append({'term': term['search'], 'total_results': int(term['count'])})
df_newspapers_total = pd.DataFrame(facets)
In [33]:
df_newspapers_merged = merge_df_with_total(df_newspapers, df_newspapers_total, how='right')
In [34]:
df_newspapers_merged.sort_values(by='proportion', ascending=False, inplace=True)
df_newspapers_merged.rename(columns={'term': 'id'}, inplace=True)
In [35]:
df_newspapers_merged.head()
Out[35]:
id total_results total_articles proportion
1628 729 3 3 1.0
1614 154 21 21 1.0
1338 5 1556 1556 1.0
1522 1028 286 286 1.0
1540 273 193 193 1.0

The title facet only gives us the id number for each newspaper, not its title. Let's get all the titles and then merge them with the facet data.

In [36]:
# Get all the newspaper titles
title_params = {
    'key': api_key,
    'encoding': 'json',
}

title_data = s.get('https://api.trove.nla.gov.au/v2/newspaper/titles', params=title_params).json()
In [37]:
titles = []
for newspaper in title_data['response']['records']['newspaper']:
    titles.append({'title': newspaper['title'], 'id': int(newspaper['id'])})
df_titles = pd.DataFrame(titles)
In [38]:
df_titles.head()
Out[38]:
title id
0 Canberra Community News (ACT : 1925 - 1927) 166
1 Canberra Illustrated: A Quarterly Magazine (AC... 165
2 Federal Capital Pioneer (Canberra, ACT : 1924 ... 69
3 Good Neighbour (ACT : 1950 - 1969) 871
4 Student Notes/Canberra University College Stud... 665
In [39]:
df_titles.shape
Out[39]:
(1666, 2)

One problem with this list is that it also includes the titles of the Government Gazettes (this seems to be a bug in the API). Let's get the gazette titles and then subtract them from the complete list.

In [40]:
# Get gazette titles
gazette_data = s.get('https://api.trove.nla.gov.au/v2/gazette/titles', params=title_params).json()
gazettes = []
for gaz in gazette_data['response']['records']['newspaper']:
    gazettes.append({'title': gaz['title'], 'id': int(gaz['id'])})
df_gazettes = pd.DataFrame(gazettes)
In [41]:
df_gazettes.shape
Out[41]:
(38, 2)

Subtract the gazettes from the list of titles.

In [42]:
df_titles_not_gazettes = df_titles[~df_titles['id'].isin(df_gazettes['id'])]

Now we can merge the newspaper titles with the facet data using the id to link the two datasets.

In [43]:
df_newspapers_with_titles = pd.merge(df_titles_not_gazettes, df_newspapers_merged, how='left', on='id').fillna(0).sort_values(by='proportion', ascending=False)
In [44]:
# Convert the totals back to integers
df_newspapers_with_titles[['total_results', 'total_articles']] = df_newspapers_with_titles[['total_results', 'total_articles']].astype(int)

Now we can display the newspapers with the highest rates of correction. Remember that a proportion of 1.00 means that every available article has at least one correction.

In [45]:
df_newspapers_with_titles[:25]
Out[45]:
title id total_results total_articles proportion
191 Party (Sydney, NSW : 1942) 1000 6 6 1.000000
20 The Australian Abo Call (National : 1938) 51 78 78 1.000000
416 The Satirist and Sporting Chronicle (Sydney, N... 1028 286 286 1.000000
146 Justice (Narrabri, NSW : 1891) 885 45 45 1.000000
463 The Temora Telegraph and Mining Advocate (NSW ... 729 3 3 1.000000
467 The True Sun and New South Wales Independent P... 1038 20 20 1.000000
533 Moonta Herald and Northern Territory Gazette (... 118 56 56 1.000000
735 Suedaustralische Zeitung (Adelaide, SA : 1850 ... 314 47 47 1.000000
816 Hobart Town Gazette and Van Diemen's Land Adve... 5 1556 1556 1.000000
829 Tasmanian and Port Dalrymple Advertiser (Launc... 273 193 193 1.000000
851 The Derwent Star and Van Diemen's Land Intelli... 1046 12 12 1.000000
903 Alexandra and Yea Standard, Thornton, Gobur an... 154 21 21 1.000000
961 Elsternwick Leader and East Brighton, ... (Vic... 201 17 17 1.000000
280 The Branxton Advocate: Greta and Rothbury Reco... 686 53 53 1.000000
212 Society (Sydney, NSW : 1887) 1042 21 21 1.000000
1442 Swan River Guardian (WA : 1836 - 1838) 1142 437 437 1.000000
887 The Van Diemen's Land Gazette and General Adve... 1047 38 38 1.000000
857 The Hobart Town Gazette and Southern Reporter ... 4 1922 1923 0.999480
2 Federal Capital Pioneer (Canberra, ACT : 1924 ... 69 542 545 0.994495
1211 The Melbourne Advertiser (Vic. : 1838) 935 120 121 0.991736
721 South Australian Gazette and Colonial Register... 40 1051 1065 0.986854
140 Intelligence (Bowral, NSW : 1884) 624 117 119 0.983193
1625 York Advocate (WA : 1915) 1131 236 241 0.979253
558 Logan and Albert Advocate (Qld. : 1893 - 1900) 842 82 84 0.976190
383 The Newcastle Argus and District Advertiser (N... 513 29 30 0.966667
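Quite a few titles have every available article corrected. A quick sketch to count them:

# How many newspapers have at least one correction in every available article?
print((df_newspapers_with_titles['proportion'] == 1).sum())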

At the other end, we can see the newspapers with the lowest rates of correction. Note that some newspapers have no corrections at all.

In [46]:
df_newspapers_with_titles.sort_values(by='proportion')[:25]
Out[46]:
title id total_results total_articles proportion
1125 Seamen's Strike Bulletin (Melbourne, Vic. : 1919) 1043 0 14 0.000000
1521 The Miner's Right (Perth, WA : 1894) 1729 0 426 0.000000
59 Campbelltown Ingleburn News (NSW : 1953 - 1954) 1699 0 6248 0.000000
1104 Progress (North Fitzroy, Vic. : 1889 - 1890) 1574 0 254 0.000000
1439 Sunday Figaro (Kalgoorlie, WA : 1904) 1664 0 362 0.000000
1476 The Derby News (WA : 1887) 1617 0 9 0.000000
1527 The Mount Margaret Mercury (WA : 1897) 1641 0 24 0.000000
495 To Ethnico Vema = Greek National Tribune (Arnc... 1592 7 62861 0.000111
249 The Australian Jewish Times (Sydney, NSW : 195... 1694 43 268379 0.000160
509 Vil'na Dumka = Free Thought (Sydney, NSW : 194... 1593 2 11607 0.000172
743 The Coromandel Times (Blackwood, SA : 1970 - 1... 1681 2 9900 0.000202
787 West Coast Recorder (Port Lincoln, SA : 1909 -... 1702 23 104481 0.000220
1588 The W.A. Sportsman (Kalgoorlie, WA : 1901 - 1902) 1666 1 4129 0.000242
447 The Sydney Jewish News (Sydney, N.S.W : 1939 -... 1693 18 71686 0.000251
1201 The Jewish Weekly News (Melbourne, Vic. : 1933... 1707 3 11865 0.000253
742 The Coromandel (Blackwood, SA : 1945 - 1970) 1680 16 55691 0.000287
175 Mu̇sų Pastogė = Our Haven (Sydney, NSW : 195... 1594 3 9060 0.000331
1418 Northam Advertiser and Toodyay Times (WA : 1954) 1652 1 2619 0.000382
1483 The Evening News (Boulder, WA : 1921 - 1922) 1621 4 8310 0.000481
1436 Sporting Life : Dryblower's Journal (Kalgoorli... 1663 4 8242 0.000485
152 L'Italo-Australiano = The Italo-Australian (Sy... 1597 3 6106 0.000491
84 Cowra Guardian and Lachlan Agricultural Record... 1697 20 36453 0.000549
1377 Kulin Advocate and Dudinin-Jitarning Harrismit... 1632 6 10856 0.000553
143 Italian Bulletin of Commerce (Sydney, NSW : 19... 1603 1 1775 0.000563
705 Port Lincoln, Tumby and West Coast Recorder (S... 1700 7 11597 0.000604
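A quick sketch to count the titles with no corrections at all:

# How many newspapers have no corrected articles?
print((df_newspapers_with_titles['total_results'] == 0).sum())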

We'll save the full list of newspapers as a CSV file.

In [47]:
df_newspapers_with_titles_csv = df_newspapers_with_titles.copy()
df_newspapers_with_titles_csv.rename({'total_results': 'articles_with_corrections'}, axis=1, inplace=True)
df_newspapers_with_titles_csv['percentage_with_corrections'] = df_newspapers_with_titles_csv['proportion'] * 100
df_newspapers_with_titles_csv.sort_values(by=['percentage_with_corrections'], inplace=True)
# Add a persistent url for each title
df_newspapers_with_titles_csv['title_url'] = df_newspapers_with_titles_csv['id'].apply(lambda x: f'http://nla.gov.au/nla.news-title{x}')
# Save the results to a CSV file
df_newspapers_with_titles_csv[['id', 'title', 'title_url', 'articles_with_corrections', 'total_articles', 'percentage_with_corrections']].to_csv('titles_corrected.csv', index=False)
In [48]:
display(FileLink('titles_corrected.csv'))

Neediest newspapers

Let's see if we can combine some guesses about OCR error rates with the correction data to find the newspapers most in need of help.

To make a guesstimate of error rates, we'll use the occurrence of 'tbe' – i.e. a common OCR error for 'the'. I don't know how valid this is, but it's a place to start!

In [49]:
# Search for 'tbe' to get an indication of errors by newspaper
params['q'] = 'text:"tbe"~0'
params['facet'] = 'title'
In [50]:
data = get_results(params)
facets = []
for term in data['response']['zone'][0]['facets']['facet']['term']:
    # Get the newspaper id and the number of results, and convert the count to an integer, before adding to our results
    facets.append({'term': term['search'], 'total_results': int(term['count'])})
df_errors = pd.DataFrame(facets)

Merge the error data with the total articles per newspaper to calculate the proportion.

In [52]:
df_errors_merged = merge_df_with_total(df_errors, df_newspapers_total, how='right')
df_errors_merged.sort_values(by='proportion', ascending=False, inplace=True)
df_errors_merged.rename(columns={'term': 'id'}, inplace=True)
In [53]:
df_errors_merged.head()
Out[53]:
id total_results total_articles proportion
1235 1316 2005 2954 0.678741
1034 758 5250 8078 0.649913
812 927 9450 17227 0.548557
902 382 6966 12744 0.546610
927 262 6279 11527 0.544721

Add the title names.

In [54]:
df_errors_with_titles = pd.merge(df_titles_not_gazettes, df_errors_merged, how='left', on='id').fillna(0).sort_values(by='proportion', ascending=False)

So this is a list of the newspapers with the highest rate of OCR error (by our rather dodgy measure).

In [55]:
df_errors_with_titles[:25]
Out[55]:
title id total_results total_articles proportion
482 The Weekly Advance (Granville, NSW : 1892 - 1893) 1316 2005 2954 0.678741
959 Dunolly and Betbetshire Express and County of ... 758 5250 8078 0.649913
1001 Hamilton Spectator and Grange District Adverti... 927 9450 17227 0.548557
514 Wagga Wagga Express and Murrumbidgee District ... 382 6966 12744 0.546610
615 The North Australian, Ipswich and General Adve... 262 6279 11527 0.544721
614 The North Australian (Brisbane, Qld. : 1863 - ... 264 2875 5314 0.541024
338 The Hay Standard and Advertiser for Balranald,... 725 21698 42068 0.515784
206 Robertson Advocate (NSW : 1894 - 1923) 530 37007 72376 0.511316
232 Temora Herald and Mining Journal (NSW : 1882 -... 728 640 1253 0.510774
831 Tasmanian Morning Herald (Hobart, Tas. : 1865 ... 865 4857 9559 0.508108
226 Sydney Mail (NSW : 1860 - 1871) 697 24593 48535 0.506707
1096 Port Phillip Gazette and Settler's Journal (Vi... 1138 6116 12127 0.504329
827 Morning Star and Commercial Advertiser (Hobart... 1242 855 1703 0.502055
166 Molong Argus (NSW : 1896 - 1921) 424 52111 104984 0.496371
1095 Port Phillip Gazette (Vic. : 1851) 1139 243 491 0.494908
837 Telegraph (Hobart Town, Tas. : 1867) 1250 68 140 0.485714
890 Trumpeter General (Hobart, Tas. : 1833 - 1834) 869 701 1482 0.473009
306 The Cumberland Free Press (Parramatta, NSW : 1... 724 6238 13247 0.470899
560 Logan Witness (Beenleigh, Qld. : 1878 - 1893) 850 6845 14654 0.467108
645 Adelaide Chronicle and South Australian Litera... 986 901 1937 0.465152
607 The Darling Downs Gazette and General Advertis... 257 29514 65268 0.452197
388 The News, Shoalhaven and Southern Coast Distri... 1588 2473 5495 0.450045
848 The Cornwall Chronicle (Launceston, Tas. : 183... 170 72730 163791 0.444041
940 Chronicle, South Yarra Gazette, Toorak Times a... 847 1639 3720 0.440591
865 The Mount Lyell Standard and Strahan Gazette (... 1251 36450 83363 0.437244

And those with the lowest rate of errors. Note the number of non-English newspapers in this list – of course our measure of accuracy fails completely in newspapers that don't use the word 'the'!

In [56]:
df_errors_with_titles[-25:]
Out[56]:
title id total_results total_articles proportion
1175 The Chinese Advertiser (Ballarat, Vic. : 1856) 706 0 15 0.0
1437 Stampa Italiana = The Italian Press (Perth, WA... 1380 0 2493 0.0
224 Sydney General Trade List, Mercantile Chronicl... 696 0 22 0.0
1476 The Derby News (WA : 1887) 1617 0 9 0.0
1 Canberra Illustrated: A Quarterly Magazine (AC... 165 0 57 0.0
533 Moonta Herald and Northern Territory Gazette (... 118 0 56 0.0
31 Auburn and District News (NSW : 1929) 1320 0 25 0.0
762 The Port Adelaide Post Shipping Gazette, Farme... 719 0 18 0.0
212 Society (Sydney, NSW : 1887) 1042 0 21 0.0
151 L'Italo-Australiano = The Italo-Australian (Su... 1596 0 197 0.0
978 Frankston Standard (Frankston, Vic. : 1949) 233 0 1997 0.0
1305 Chung Wah News (Perth, WA : 1981 - 1987) 1383 0 860 0.0
46 Blayney West Macquarie (NSW : 1949) 802 0 110 0.0
1125 Seamen's Strike Bulletin (Melbourne, Vic. : 1919) 1043 0 14 0.0
1294 Bullfinch Miner and Yilgarn Advocate (WA : 1910) 1460 0 27 0.0
741 The Citizen (Port Adelaide, SA : 1938-1940) 1305 0 1284 0.0
1388 Mediterranean Voice (Perth, WA : 1971 - 1972) 1390 0 431 0.0
1565 The Southern Cross (Perth, WA : 1893) 1660 0 59 0.0
1188 The Elsternwick Leader and Caulfield and Balac... 200 0 47 0.0
66 Citizen Soldier (Sydney, NSW : 1942) 996 0 60 0.0
343 The Hospital Saturday News (Katoomba, NSW : 1930) 915 0 54 0.0
1557 The Possum (Fremantle, WA : 1890) 1201 0 105 0.0
67 Clarence and Richmond Examiner (Grafton, NSW :... 104 0 111 0.0
961 Elsternwick Leader and East Brighton, ... (Vic... 201 0 17 0.0
1378 La Rondine (Perth, WA : 1969 - 1994) 1388 0 1383 0.0

Now let's merge the error data with the correction data.

In [57]:
corrections_errors_merged_df = pd.merge(df_newspapers_with_titles, df_errors_with_titles, how='left', on='id')
In [58]:
corrections_errors_merged_df.head()
Out[58]:
title_x id total_results_x total_articles_x proportion_x title_y total_results_y total_articles_y proportion_y
0 Party (Sydney, NSW : 1942) 1000 6 6 1.0 Party (Sydney, NSW : 1942) 0 6 0.000000
1 The Australian Abo Call (National : 1938) 51 78 78 1.0 The Australian Abo Call (National : 1938) 0 78 0.000000
2 The Satirist and Sporting Chronicle (Sydney, N... 1028 286 286 1.0 The Satirist and Sporting Chronicle (Sydney, N... 0 286 0.000000
3 Justice (Narrabri, NSW : 1891) 885 45 45 1.0 Justice (Narrabri, NSW : 1891) 1 45 0.022222
4 The Temora Telegraph and Mining Advocate (NSW ... 729 3 3 1.0 The Temora Telegraph and Mining Advocate (NSW ... 0 3 0.000000
In [59]:
corrections_errors_merged_df['proportion_uncorrected'] = corrections_errors_merged_df['proportion_x'].apply(lambda x: 1 - x)
corrections_errors_merged_df.rename(columns={'title_x': 'title', 'proportion_x': 'proportion_corrected', 'proportion_y': 'proportion_with_errors'}, inplace=True)
corrections_errors_merged_df.sort_values(by=['proportion_with_errors', 'proportion_uncorrected'], ascending=False, inplace=True)

So, for what it's worth, here's a list of the neediest newspapers – those with high error rates and low correction rates! As I've said, this is a pretty dodgy method, but interesting nonetheless.

In [60]:
corrections_errors_merged_df[['title', 'proportion_with_errors', 'proportion_uncorrected']][:25]
Out[60]:
title proportion_with_errors proportion_uncorrected
1185 The Weekly Advance (Granville, NSW : 1892 - 1893) 0.678741 0.974272
599 Dunolly and Betbetshire Express and County of ... 0.649913 0.933028
381 Hamilton Spectator and Grange District Adverti... 0.548557 0.893655
440 Wagga Wagga Express and Murrumbidgee District ... 0.546610 0.907250
179 The North Australian, Ipswich and General Adve... 0.544721 0.767936
255 The North Australian (Brisbane, Qld. : 1863 - ... 0.541024 0.835341
999 The Hay Standard and Advertiser for Balranald,... 0.515784 0.962941
820 Robertson Advocate (NSW : 1894 - 1923) 0.511316 0.952553
588 Temora Herald and Mining Journal (NSW : 1882 -... 0.510774 0.931365
474 Tasmanian Morning Herald (Hobart, Tas. : 1865 ... 0.508108 0.911915
336 Sydney Mail (NSW : 1860 - 1871) 0.506707 0.879922
226 Port Phillip Gazette and Settler's Journal (Vi... 0.504329 0.821225
146 Morning Star and Commercial Advertiser (Hobart... 0.502055 0.724016
649 Molong Argus (NSW : 1896 - 1921) 0.496371 0.937495
247 Port Phillip Gazette (Vic. : 1851) 0.494908 0.830957
180 Telegraph (Hobart Town, Tas. : 1867) 0.485714 0.771429
134 Trumpeter General (Hobart, Tas. : 1833 - 1834) 0.473009 0.695007
335 The Cumberland Free Press (Parramatta, NSW : 1... 0.470899 0.879293
269 Logan Witness (Beenleigh, Qld. : 1878 - 1893) 0.467108 0.849188
125 Adelaide Chronicle and South Australian Litera... 0.465152 0.681982
266 The Darling Downs Gazette and General Advertis... 0.452197 0.846740
1372 The News, Shoalhaven and Southern Coast Distri... 0.450045 0.985623
229 The Cornwall Chronicle (Launceston, Tas. : 183... 0.444041 0.823372
922 Chronicle, South Yarra Gazette, Toorak Times a... 0.440591 0.958602
1362 The Mount Lyell Standard and Strahan Gazette (... 0.437244 0.985113
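If you'd like to explore this list further, you could save it to a CSV file in the same way as before. A minimal sketch (the filename is an arbitrary choice):

# Save the 'neediest' newspapers to a CSV file (the filename is arbitrary)
corrections_errors_merged_df[['id', 'title', 'proportion_with_errors', 'proportion_uncorrected']].to_csv('neediest_newspapers.csv', index=False)
display(FileLink('neediest_newspapers.csv'))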

Created by Tim Sherratt for the GLAM Workbench.
Support this project by becoming a GitHub sponsor.