When you receive a CSV file from an external source, you'll want to get a feel for the data.
Let's import some data and then learn how to explore it.
# Setup
import os
import pandas as pd
# We use os.path.join because Windows uses a backslash (\) to separate directories
# while others use a forward slash (/)
users_file_name = os.path.join('data', 'users.csv')
users_file_name
If you want to take a peek at your CSV file, you could open it in an editor.
Let's just use some standard Python to see the first couple lines of the file.
# Open the file and print out the first 5 lines
with open(users_file_name) as lines:
    for _ in range(5):
        # The `file` object is an iterator, so just get the next line
        print(next(lines))
Notice how the first line is a header row; it contains the column names. By default, `pd.read_csv` assumes the first row is the header row.
Also note how the first column of that header row is empty. The values below it in that first column appear to be usernames, and they are what we want for the index.
We can use the `index_col` parameter of the `pandas.read_csv` function.
# Create a new `DataFrame` and set the index to the first column
users = pd.read_csv(users_file_name, index_col=0)
A quick way to check that your CSV file was read correctly is to use the `DataFrame.head` method, which returns the first rows of the frame. By default, `head` returns 5 rows; you can pass the number you want as the first argument, for instance `users.head(10)` returns the first 10 rows.
users.head()
Nice! We got it. So let's see how many rows we have. There are a couple of ways.
# Pythonic approach still works
len(users)
Side note: this length call is quick. Under the covers, `DataFrame.__len__` actually performs `len(df.index)`, counting the rows by using the index. You might see older code that uses `len(df.index)` to get a row count; as of pandas version 0.11, `len(df)` is the same as `len(df.index)`.
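To see that the two spellings agree, here is a minimal sketch using a tiny made-up `DataFrame` (not the users data from this lesson):

```python
import pandas as pd

# A tiny illustrative DataFrame with a username-style index
df = pd.DataFrame({'balance': [10.0, 20.0, 30.0]},
                  index=['alice', 'bob', 'carol'])

# Both expressions count the rows
print(len(df))        # 3
print(len(df.index))  # 3
```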
The `DataFrame.shape` property works just like `np.array.shape` does. It gives the length of each axis of your data frame: rows and columns.
users.shape
The `DataFrame.count` method counts how many non-empty values each column has.
users.count()
Most of our columns include values for every row, but it looks like `last_name` has some missing ones. Missing data shows up as `np.nan` -- NumPy's "not a number" -- in those records. The `count` method is missing-data aware.
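The difference between the total row count and `count` is exactly the missing values. A quick sketch with an illustrative frame (standing in for the real `users.csv`):

```python
import numpy as np
import pandas as pd

# Illustrative frame with one missing last name
df = pd.DataFrame({
    'first_name': ['Ada', 'Grace', 'Alan'],
    'last_name': ['Lovelace', np.nan, 'Turing'],
})

print(len(df))                  # 3 rows in total
print(df['last_name'].count())  # 2 -- the np.nan is not counted
```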
Remember that a `DataFrame` can contain multiple data types, or `dtypes`. You can use the `DataFrame.dtypes` property to see the `dtype` of each column.
users.dtypes
As you can see, most of the data types of these columns were inferred, or assumed, correctly. See how `email_verified` is automatically a `bool`, `referral_count` an integer, and `balance` a float. This happened when we used `pd.read_csv`.
One thing to note, though, is that the `signup_date` field is an `object` and not a `datetime`. You can convert such columns during or after import if you need to, and we'll do some of that later in this course.
The `DataFrame.describe` method is a great way to get a feel for all the numeric data in your `DataFrame`. You'll notice that only columns with numeric data are returned; ones with booleans or text, like `email_verified` and `first_name`, are left out.
You'll see lots of different aggregations.
users.describe()
Most of these aggregations are also available on their own as methods.
# The mean, or average, of each numeric column
users.mean(numeric_only=True)
# Standard deviation of each numeric column
users.std(numeric_only=True)
# The minimum of each column
users.min()
# The maximum of each column
users.max()
Since columns are really a `Series`, you can quickly access the counts of their different values using the `value_counts` method.
users.email_verified.value_counts()
By default the value counts are sorted in descending order, so the most frequent values are at the top.
# Most common first name
users.first_name.value_counts().head()
You can create a new, sorted `DataFrame` by using the `sort_values` method.
Let's sort the `DataFrame` so that the user with the highest `balance` is at the top. By default, ascending order is assumed; you can change that by setting the `ascending` keyword argument to `False`.
users.sort_values(by='balance', ascending=False).head()
You'll notice that the `sort_values` call actually created a new `DataFrame`. If you want to permanently change the sort from the default (sorted by index), you can pass `True` as the `inplace` keyword argument.
# Sort first by last_name and then first_name. By default, np.nan values show up at the end
users.sort_values(by=['last_name', 'first_name'], inplace=True)
# Sort order is now changed
users.head()
And if you want to sort by the index, like it was originally, you can use the `sort_index` method.
users.sort_index(inplace=True)
users.head()