Created by Nathan Kelber and Ted Lawless for JSTOR Labs under Creative Commons CC BY License
For questions/comments/improvements, email [email protected]


Exploring Word Frequencies

Description: This notebook shows how to find the most common words in a dataset. The following processes are described:

  • Using the tdm_client to create a Pandas DataFrame
  • Filtering based on a pre-processed ID list
  • Filtering based on a stop words list
  • Using a Counter() object to get the most common words

Use Case: For Learners (Detailed explanation, not ideal for researchers)

Take me to the Research Version of this notebook ->

Difficulty: Intermediate

Completion time: 60 minutes

Knowledge Required:

Knowledge Recommended:

Data Format: JSON Lines (.jsonl)

Libraries Used:

  • tdm_client to collect, unzip, and read our dataset
  • NLTK to help clean up our dataset
  • Counter from the collections module to help sum up our word frequencies

Research Pipeline:

  1. Build a dataset
  2. Create a "Pre-Processing CSV" with Exploring Metadata (Optional)
  3. Create a "Custom Stopwords List" with Creating a Stopwords List (Optional)
  4. Complete the word frequencies analysis with this notebook

Import your dataset

We'll use the tdm_client library to automatically retrieve the dataset, which is delivered as a gzipped JSON Lines (.jsonl.gz) file.

Enter a dataset ID in the next code cell.

If you don't have a dataset ID, you can use the sample dataset ID already supplied in the next code cell.

In [ ]:
# Creating a variable `dataset_id` to hold our dataset ID
# The default dataset is Shakespeare Quarterly, 1950-present
dataset_id = "7e41317e-740f-e86a-4729-20dab492e925"

Next, import tdm_client and pass dataset_id to its get_dataset method to download the dataset.

In [ ]:
# Importing your dataset with a dataset ID
import tdm_client
# Pull in the dataset that matches `dataset_id`
# in the form of a gzipped JSON lines file.
# The .get_dataset() method downloads the gzipped JSONL file
# to the /data folder and returns a string for the file name and location
# dataset_file will be a string containing that file name and location
dataset_file = tdm_client.get_dataset(dataset_id)
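
If you'd like to confirm the download worked, you can print the returned path. (The exact folder depends on your environment.)

In [ ]:
# Show the path to the downloaded dataset file
print(dataset_file)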

Apply Pre-Processing Filters (if available)

If you completed pre-processing with the "Exploring Metadata and Pre-processing" notebook, you can use your CSV file of dataset IDs to automatically filter the dataset. Your pre-processed CSV file should be in the /data directory.

In [ ]:
# Import a pre-processed CSV file of filtered dataset IDs.
# If you do not have a pre-processed CSV file, the analysis
# will run on the full dataset and may take longer to complete.
import pandas as pd
import os

# Define a string that describes the path to the CSV
pre_processed_file_name = f'data/pre-processed_{dataset_id}.csv'

# Test if the path to the CSV exists
# If true, then read the IDs into filtered_id_list
if os.path.exists(pre_processed_file_name):
    df = pd.read_csv(pre_processed_file_name)
    filtered_id_list = df["id"].tolist()
    use_filtered_list = True
    print(f'Pre-Processed CSV found. Filtered dataset is {len(df)} documents.')
else: 
    use_filtered_list = False
    print('No pre-processed CSV file found. Full dataset will be used.')

Extract Unigram Counts from the JSON file (No cleaning)

We pulled in our dataset using a dataset_id. The downloaded file is a compressed JSON Lines file (.jsonl.gz) that contains all the metadata found in the metadata CSV plus the textual data necessary for analysis (a sketch of one record follows the list below), including:

  • Unigram Counts
  • Bigram Counts
  • Trigram Counts
  • Full-text (if available)
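
To make that structure concrete, a single record in the JSONL file looks roughly like the sketch below. Only the id and unigramCount keys are used by the code in this notebook; the other keys and all of the sample values are illustrative assumptions, not real data, and the exact fields vary by dataset.

In [ ]:
# An illustrative, made-up document record (not real data)
example_document = {
    'id': 'http://www.jstor.org/stable/example',      # hypothetical identifier
    'title': 'An Example Article',                    # metadata also found in the CSV
    'unigramCount': {'the': 120, 'Shakespeare': 8},   # word -> count
    'bigramCount': {'the play': 5},
    'trigramCount': {'the play is': 2},
    'fullText': None                                  # full text only where available
}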

To complete our analysis, we are going to pull the unigram counts out of each document and store them in a Counter() object. First, we import Counter from the collections module, which lets us create Counter() objects for counting unigrams. Then we initialize an empty Counter() object, word_frequency, to hold all of our unigram counts.

In [ ]:
# Import Counter()
from collections import Counter

# Create an empty Counter object called `word_frequency`
word_frequency = Counter()
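
If Counter() objects are new to you, the short cell below shows, on made-up numbers, how one accumulates totals: adding to a key that does not exist yet starts it at zero, which is exactly what the next loop relies on.

In [ ]:
# Small, standalone demonstration of how a Counter accumulates counts
from collections import Counter  # already imported above; repeated so this cell stands alone

demo = Counter()
demo['love'] += 3      # a new key starts at 0, so this sets it to 3
demo['love'] += 2      # adding again brings the total to 5
demo['death'] += 1
print(demo)            # Counter({'love': 5, 'death': 1})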

We can read in each document using the tdm_client.dataset_reader.

In [ ]:
# Gather unigramCounts from documents in `filtered_id_list` if it is available

for document in tdm_client.dataset_reader(dataset_file):
    if use_filtered_list is True:
        document_id = document['id']
        # Skip documents not in our filtered_id_list
        if document_id not in filtered_id_list:
            continue
    unigrams = document.get("unigramCount", {})
    for gram, count in unigrams.items():
        word_frequency[gram] += count

# Print success message
if use_filtered_list is True:
    print(f'Unigrams have been collected only for the {len(df)} documents listed in your CSV file.')
else:
    print('Unigrams have been collected for all documents without filtering from a CSV file.')

Find Most Common Unigrams

Now that we have the frequencies of all the unigrams in our corpus, we can sort them with the .most_common() method to see which ones occur most often.

In [ ]:
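# Print the 25 most common unigrams and their counts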
for gram, count in word_frequency.most_common(25):
    print(gram.ljust(20), count)

Some issues to consider

We have successfully created a word frequency list. There are a couple small issues, however, that we still need to address:

  1. There are many function words, words like "the", "in", and "of", that are grammatically important but do not carry as much semantic meaning as content words such as nouns and verbs.
  2. The words represented here are actually case-sensitive strings. That means the string "the" is different from the string "The". You may notice this in your results above.
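
A quick check makes the second issue concrete. The cell below looks up both casings of one word in the word_frequency Counter we just built; "the" is only an example, so substitute any word you noticed in your own results. (A Counter simply returns 0 for a key it has not seen.)

In [ ]:
# Compare the counts for two casings of the same word
# The two strings are tallied separately because they are different keys
print('the:', word_frequency['the'])
print('The:', word_frequency['The'])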

Extract Unigram Counts from the JSON File (with cleaning)

To address these issues, we need to remove common function words and merge strings that differ only in capitalization; a short toy example follows the list below. We can do this by:

  1. Using a stopwords list to remove common function words
  2. Lowercasing all the characters in each string to combine our counts
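
Before applying these steps to the full dataset, here is a minimal, self-contained sketch of both steps on a handful of made-up tokens. (The tokens and the tiny stopwords list are invented purely for illustration.)

In [ ]:
# Toy demonstration of stopword removal and lowercasing
from collections import Counter

toy_counts = {'The': 4, 'the': 10, 'Play': 3, 'play': 2, 'of': 7}
toy_stop_words = ['the', 'of']    # stand-in for a real stopwords list

toy_frequency = Counter()
for gram, count in toy_counts.items():
    clean_gram = gram.lower()     # merge 'The' and 'the'
    if clean_gram in toy_stop_words:
        continue                  # drop function words
    toy_frequency[clean_gram] += count

print(toy_frequency)              # Counter({'play': 5})

The loop later in this notebook follows the same pattern, using the stopwords list loaded in the next section and the real unigram counts.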

Load Stopwords List

If you created a stopwords list with the Creating a Stopwords List notebook, we will import it here. (You can always modify the CSV file to add or subtract words, then reload the list.) Otherwise, we'll load the NLTK stopwords list automatically.

In [ ]:
# Load a custom data/stop_words.csv if available
# Otherwise, load the nltk stopwords list in English

# Create an empty Python list to hold the stopwords
stop_words = []

# The filename of the custom data/stop_words.csv file
stopwords_list_filename = 'data/stop_words.csv'

if os.path.exists(stopwords_list_filename):
    import csv
    with open(stopwords_list_filename, 'r') as f:
        stop_words = list(csv.reader(f))[0]
    print('Custom stopwords list loaded from CSV')
else:
    # Load the NLTK stopwords list
    from nltk.corpus import stopwords
    stop_words = stopwords.words('english')
    print('NLTK stopwords list loaded')
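
Whether the custom list or the NLTK list was loaded, stop_words is now an ordinary Python list, so you can also adjust it directly in the notebook. The words added and removed below are placeholders; substitute whatever suits your corpus.

In [ ]:
# Optionally adjust the stopwords list in place
# (example words only; edit these to fit your own analysis)
stop_words.extend(['pp', 'vol'])                              # add extra words to ignore
stop_words = [word for word in stop_words if word != 'not']   # keep a word you want counted
print(f'{len(stop_words)} stopwords will be used')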

Gather unigrams again with extra cleaning steps

In addition to using a stopwords list, we will clean up the tokens by lowercasing them, which combines tokens that differ only in capitalization, such as "quarterly" and "Quarterly." We will also remove any tokens that are not purely alphabetic.
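
The cleaning relies on two string methods, .lower() and .isalpha(). Assuming the stopwords cell above has been run, the short cell below previews, on a few invented tokens, which ones would survive: anything containing digits, punctuation, or apostrophes is discarded, along with stopwords.

In [ ]:
# Preview which sample tokens the cleaning steps would keep
# (the tokens are invented for illustration)
sample_tokens = ['Quarterly', 'quarterly', "don't", '1951', 'stage-craft']
for token in sample_tokens:
    clean = token.lower()
    kept = clean.isalpha() and clean not in stop_words
    print(token.ljust(12), 'kept' if kept else 'dropped')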

In [ ]:
# Gather unigramCounts from documents in `filtered_id_list` if available
# and apply the processing.

word_frequency = Counter()

for document in tdm_client.dataset_reader(dataset_file):
    if use_filtered_list is True:
        document_id = document['id']
        # Skip documents not in our filtered_id_list
        if document_id not in filtered_id_list:
            continue
    unigrams = document.get("unigramCount", {})
    for gram, count in unigrams.items():
        clean_gram = gram.lower()
        if clean_gram in stop_words:
            continue
        if not clean_gram.isalpha():
            continue
        word_frequency[clean_gram] += count

Display Results

Finally, we will display the 25 most common words by using the .most_common() method on the Counter() object.

In [ ]:
# Print the most common processed unigrams and their counts
for gram, count in word_frequency.most_common(25):
    print(gram.ljust(20), count)