In this code-along session, you will use some basic Natural Language Processing to plot the most frequently occurring words in the novel Moby Dick. In doing so, you'll also see the efficacy of thinking in terms of the following Data Science pipeline with a constant regard for process:
For example, what would the following word frequency distribution be from?
Follow the instructions in the README.md to get your system set up and ready to go.
What are the most frequent words in the novel Moby Dick and how often do they occur?
Your raw data is the text of Melville's novel Moby Dick. We can find it at Project Gutenberg.
TO DO: Head there, find Moby Dick and then store the relevant url in your Python namespace:
# Store url
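A possible first step might look like the following. The exact file path is an assumption based on Moby Dick's Project Gutenberg page (ebook #2701); check the site for the current link:

```python
# Store url of the HTML version of Moby Dick on Project Gutenberg
# (ebook #2701; the exact path is an assumption -- check the book's page)
url = 'https://www.gutenberg.org/files/2701/2701-h/2701-h.htm'
```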
You're going to use `requests` to get the web data.
You can find out more in DataCamp's Importing Data in Python (Part 2) course.
According to the `requests` package website:
Requests is one of the most downloaded Python packages of all time, pulling in over 13,000,000 downloads every month. All the cool kids are doing it!
You'll be making a GET request to the website, which means you're getting data from it. `requests` makes this easy with its `get` function.
TO DO: Make the request here and check the object type returned.
# Import `requests`
# Make the request and check object type
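The request might look like this (the URL is the assumed Project Gutenberg link from the previous step, and this needs network access to run):

```python
# Import `requests`
import requests

# Make the request and check object type
url = 'https://www.gutenberg.org/files/2701/2701-h/2701-h.htm'
r = requests.get(url)
print(type(r))  # <class 'requests.models.Response'>
```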
This is a `Response` object. You can see in the `requests` Quickstart guide that a `Response` object has an attribute `text` that allows you to get the HTML from it!
TO DO: Get the HTML and print the HTML to check it out:
# Extract HTML from Response object and print
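A minimal sketch of this step (the URL is the assumed link from earlier, and printing only the first 500 characters keeps the output manageable):

```python
import requests

# Extract HTML from Response object and print the start of it
r = requests.get('https://www.gutenberg.org/files/2701/2701-h/2701-h.htm')  # assumed URL
html = r.text
print(html[:500])
```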
OK! This HTML is not quite what you want. However, it does contain what you want: the text of Moby Dick. What you need to do now is wrangle this HTML to extract the novel.
Recap:
Up next: it's time for you to parse the HTML and extract the text of the novel.
Here you'll use the package `BeautifulSoup`. The package website says:
TO DO: Create a `BeautifulSoup` object from the HTML.
# Import BeautifulSoup from bs4
# Create a BeautifulSoup object from the HTML
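A sketch of this step, using a tiny HTML snippet as a stand-in for the fetched page so it runs on its own (in the code-along, `html` is the string you extracted from the `Response` object):

```python
# Import BeautifulSoup from bs4
from bs4 import BeautifulSoup

# Stand-in for the HTML fetched from Project Gutenberg
html = '<html><head><title>Moby Dick</title></head><body><p>Call me Ishmael.</p></body></html>'

# Create a BeautifulSoup object from the HTML
soup = BeautifulSoup(html, 'html.parser')
print(type(soup))  # <class 'bs4.BeautifulSoup'>
```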
From these soup objects, you can extract all types of interesting information about the website you're scraping, such as its title:
# Get soup title
Or the title as a string:
# Get soup title as string
Or all URLs found within a page's `<a>` tags (hyperlinks):
# Get hyperlinks from soup and check out first 10
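These three lookups might look like the following, again with a tiny stand-in page (the real `soup` comes from the novel's HTML):

```python
from bs4 import BeautifulSoup

# Tiny stand-in page; the real soup comes from the novel's HTML
html = ('<html><head><title>Moby Dick</title></head>'
        '<body><a href="#ch1">Chapter 1</a><a href="#ch2">Chapter 2</a></body></html>')
soup = BeautifulSoup(html, 'html.parser')

# Get soup title
print(soup.title)          # <title>Moby Dick</title>
# Get soup title as string
print(soup.title.string)   # Moby Dick
# Get hyperlinks from soup and check out first 10
print(soup.find_all('a')[:10])
```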
What you want to do is extract the text from the `soup`, and there's a super helpful `.get_text()` method precisely for this.
TO DO: Get the text, print it out and have a look at it. Is it what you want?
# Get the text out of the soup and print it
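A sketch, once more with the small stand-in snippet (on the real page, `text` would be the whole novel plus some surrounding boilerplate):

```python
from bs4 import BeautifulSoup

html = '<html><head><title>Moby Dick</title></head><body><p>Call me Ishmael.</p></body></html>'
soup = BeautifulSoup(html, 'html.parser')

# Get the text out of the soup and print it
text = soup.get_text()
print(text)
```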
Notice that this is now nearly what you want. You'll need to do a bit more work.
Recap:
Up next: you'll use Natural Language Processing, tokenization and regular expressions to extract the list of words in Moby Dick.
You'll now use `nltk`, the Natural Language Toolkit, to tokenize the text. You want to tokenize your text, that is, split it into a list of words.
To do this, you're going to use a powerful tool called regular expressions, or regex.
The regular expression that matches all words beginning with 'p' is 'p\w+'. Let's unpack this: the 'p' matches the character 'p' literally, '\w' matches any word character (a letter, digit, or underscore), and '+' means 'one or more times', so '\w+' matches a run of word characters.
You'll now use the built-in Python package `re` to extract all words beginning with 'p' from the sentence 'peter piper picked a peck of pickled peppers' as a warm-up.
# Import regex package
# Define sentence
# Define regex
# Find all words in sentence that match the regex and print them
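The warm-up in full:

```python
# Import regex package
import re

# Define sentence
sentence = 'peter piper picked a peck of pickled peppers'

# Define regex
ps = r'p\w+'

# Find all words in sentence that match the regex and print them
print(re.findall(ps, sentence))
# ['peter', 'piper', 'picked', 'peck', 'pickled', 'peppers']
```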
This looks pretty good. Now, if 'p\w+' is the regex that matches words beginning with 'p', what's the regex that matches all words?
It's now your job to do this for our toy Peter Piper sentence above.
# Find all words and print them
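Dropping the literal 'p' leaves '\w+', which matches any run of word characters, so:

```python
import re

sentence = 'peter piper picked a peck of pickled peppers'

# Find all words and print them
print(re.findall(r'\w+', sentence))
# ['peter', 'piper', 'picked', 'a', 'peck', 'of', 'pickled', 'peppers']
```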
TO DO: use regex to get all the words in Moby Dick:
# Find all words in Moby Dick and print several
Recap:
Up next: extract the list of words in Moby Dick using `nltk`, the Natural Language Toolkit.
Go get it!
# Import RegexpTokenizer from nltk.tokenize
# Create tokenizer
# Create tokens
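A sketch of these three steps, with a short stand-in string in place of the novel's text:

```python
# Import RegexpTokenizer from nltk.tokenize
from nltk.tokenize import RegexpTokenizer

# Create tokenizer that keeps runs of word characters
tokenizer = RegexpTokenizer(r'\w+')

# Create tokens (using a short stand-in for the novel's text)
text = 'Call me Ishmael. Some years ago...'
tokens = tokenizer.tokenize(text)
print(tokens)  # ['Call', 'me', 'Ishmael', 'Some', 'years', 'ago']
```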
TO DO: Create a list containing all the words in Moby Dick such that all words contain only lower case letters. You'll find the string method `.lower()` handy:
# Initialize new list
# Loop through list tokens and make lower case
# Print several items from list as sanity check
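The loop might look like this (with a tiny stand-in token list in place of the novel's):

```python
tokens = ['Call', 'me', 'Ishmael']   # stand-in for the novel's tokens

# Initialize new list
words = []

# Loop through list tokens and make lower case
for token in tokens:
    words.append(token.lower())

# Print several items from list as sanity check
print(words[:8])  # ['call', 'me', 'ishmael']
```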
Recap:
Up next: remove common words such as 'a' and 'the' from the list of words.
It is common practice to remove words that appear a lot in the English language, such as 'the', 'of' and 'a' (known as stopwords), because they're not so interesting. For more on all of these techniques, check out our Natural Language Processing Fundamentals in Python course.
The package `nltk` has a list of stopwords in English, which you'll now store as `sw` and print the first several elements of:
# Import nltk
# Get English stopwords and print some of them
You want the list of all words in `words` that are not in `sw`. One way to get this list is to loop over all elements of `words` and add them to a new list if they are not in `sw`:
# Initialize new list
# Add to words_ns all words that are in words but not in sw
# Print several list items as sanity check
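The filtering loop, sketched with small stand-in lists in place of the novel's words and the full stopword list:

```python
words = ['call', 'me', 'ishmael']            # stand-in for the lower-cased tokens
sw = ['i', 'me', 'my', 'the', 'of', 'a']     # stand-in for the nltk stopword list

# Initialize new list
words_ns = []

# Add to words_ns all words that are in words but not in sw
for word in words:
    if word not in sw:
        words_ns.append(word)

# Print several list items as sanity check
print(words_ns[:8])  # ['call', 'ishmael']
```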
Recap:
Up next: plot the word frequency distribution of words in Moby Dick.
Our question was 'What are the most frequent words in the novel Moby Dick and how often do they occur?'
You can now plot a frequency distribution of the words in Moby Dick in two lines of code using `nltk`. To do this, you'll use `nltk.FreqDist()`:
# Import datavis libraries
# Figures inline and set visualization style
# Create freq dist and plot
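A sketch of the plotting step, with a tiny stand-in word list (in a notebook you'd also run `%matplotlib inline` so figures render in place):

```python
import nltk
import matplotlib.pyplot as plt

# Stand-in for the novel's stopword-filtered word list
words_ns = ['whale', 'whale', 'whale', 'sea', 'sea', 'ishmael']

# Create freq dist and plot
freqdist = nltk.FreqDist(words_ns)
freqdist.plot(25)
```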
Recap:
Up next: adding more stopwords.
# Import stopwords from sklearn
# Add sklearn stopwords to sw
# Initialize new list
# Add to words_ns all words that are in words but not in sw
# Print several list items as sanity check
# Create freq dist and plot
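The extra-stopwords step might look like this; `ENGLISH_STOP_WORDS` is scikit-learn's built-in English stopword set, and the two short lists are stand-ins for the nltk stopwords and the novel's words:

```python
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

# Stand-ins for the nltk stopword list and the novel's words
sw = ['i', 'me', 'my', 'the', 'of', 'a']
words = ['whale', 'me', 'the', 'sea']

# Add sklearn stopwords to sw
sw = set(sw) | set(ENGLISH_STOP_WORDS)

# Initialize new list and add all words that are in words but not in sw
words_ns = [word for word in words if word not in sw]
print(words_ns)
```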
The cool thing is that, in using `nltk` to answer our question, we actually already presented our solution in a manner that can be communicated to others: a frequency distribution plot! You can read off the most common words, along with their frequency. For example, 'whale' is the most common word in the novel (go figure), excepting stopwords, and it occurs a whopping 1,200+ times!
As you have seen, there are lots of novels on Project Gutenberg that we can make these word frequency distributions of, so it makes sense to write your own function that does all of this:
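Such a function is a sketch of the whole pipeline; the function name is ours, not from the course, and it assumes the nltk stopword corpus has been downloaded:

```python
import requests
from bs4 import BeautifulSoup
import nltk

def plot_word_freq(url, num_words=25):
    """Plot the word frequency distribution of a Project Gutenberg novel."""
    # Fetch the HTML and extract the text of the novel
    html = requests.get(url).text
    text = BeautifulSoup(html, 'html.parser').get_text()

    # Tokenize into lower-case words
    tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
    words = [token.lower() for token in tokenizer.tokenize(text)]

    # Remove English stopwords
    sw = set(nltk.corpus.stopwords.words('english'))
    words_ns = [word for word in words if word not in sw]

    # Create freq dist and plot
    nltk.FreqDist(words_ns).plot(num_words)
```

Calling `plot_word_freq` with the Moby Dick URL would then reproduce the plot above, and any other Project Gutenberg URL gives you that novel's distribution.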