This notebook is our output for the fourth guided project for the 'Data Scientist in Python' track. The personal goal for this project is to familiarize ourselves with some of the plotting functionality available through the `pandas` and `matplotlib` libraries.
We'll be using data found in FiveThirtyEight's GitHub repo on the earnings and employment statistics of different college majors between 2010 and 2012. The raw data was originally published by the American Community Survey.
Here are the header descriptions for the `recent_grads.csv` table:
Variable | Description |
---|---|
Rank | Rank by median earnings |
Major_code | Major code |
Major | Major description |
Major_category | Category of major from Carnevale et al. |
Total | Total number of people with major |
Sample_size | Sample size (unweighted) of full-time, year-round workers ONLY (used for earnings) |
Men | Male graduates |
Women | Female graduates |
ShareWomen | Women as share of total |
Employed | Number employed |
Full_time | Employed 35 hours or more |
Part_time | Employed less than 35 hours |
Full_time_year_round | Employed at least 50 weeks (WKW == 1) and at least 35 hours (WKHP >= 35) |
Unemployed | Number unemployed |
Unemployment_rate | Unemployed / (Unemployed + Employed) |
Median | Median earnings of full-time, year-round workers |
P25th | 25th percentile of earnings |
P75th | 75th percentile of earnings |
College_jobs | Number with job requiring a college degree |
Non_college_jobs | Number with job not requiring a college degree |
Low_wage_jobs | Number in low-wage service jobs |
We will try to gain some insight into the relationships among earning prospects, gender patterns, and degree programs, among others. Some of the questions we'll try to answer are:
*Which categories of majors have the most students?*
*Do students in more popular majors make more money?*
*Are male-dominated majors associated with higher earnings?*
*How many majors are predominantly male/female?*
We begin by loading in the data set.
# Loading the libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Setting plots to be displayed inline
%matplotlib inline
# Loading the data set
recent_grads = pd.read_csv('recent-grads.csv')
print(recent_grads.columns)
Index(['Rank', 'Major_code', 'Major', 'Total', 'Men', 'Women', 'Major_category', 'ShareWomen', 'Sample_size', 'Employed', 'Full_time', 'Part_time', 'Full_time_year_round', 'Unemployed', 'Unemployment_rate', 'Median', 'P25th', 'P75th', 'College_jobs', 'Non_college_jobs', 'Low_wage_jobs'], dtype='object')
Checking Basic Information and Preliminary Cleaning
As you may have noticed in the earlier table, the variable names do not follow a consistent snake_case convention. Let's fix that manually.
new_col_names = ['rank', 'major_code', 'major', 'total', 'men',
'women', 'major_category', 'share_women', 'sample_size',
'employed', 'full_time', 'part_time', 'full_time_year_round',
'unemployed', 'unemployment_rate', 'median', 'p25th',
'p75th', 'college_jobs', 'non_college_jobs', 'low_wage_jobs'
]
recent_grads.columns = new_col_names
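Rather than typing the new names out by hand, the renaming could also be scripted. Below is a minimal sketch; `to_snake` is a hypothetical helper (not part of the original notebook) that inserts underscores before interior capitals and lowercases the result.

```python
import re

def to_snake(name):
    # Insert '_' before every capital letter that isn't at the start,
    # then lowercase the whole string: 'ShareWomen' -> 'share_women'.
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

# In the notebook this would be:
# recent_grads.columns = [to_snake(c) for c in recent_grads.columns]
print(to_snake('ShareWomen'), to_snake('Major_code'), to_snake('P25th'))
```

Either approach produces the same headers; the manual list just makes the final names explicit.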
We've finished loading the data set and renaming the column headers. Let's have a first look at our data.
# Checking the first few rows of our data
print('HEAD')
recent_grads.head(5)
HEAD
rank | major_code | major | total | men | women | major_category | share_women | sample_size | employed | ... | part_time | full_time_year_round | unemployed | unemployment_rate | median | p25th | p75th | college_jobs | non_college_jobs | low_wage_jobs | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 2419 | PETROLEUM ENGINEERING | 2339.0 | 2057.0 | 282.0 | Engineering | 0.120564 | 36 | 1976 | ... | 270 | 1207 | 37 | 0.018381 | 110000 | 95000 | 125000 | 1534 | 364 | 193 |
1 | 2 | 2416 | MINING AND MINERAL ENGINEERING | 756.0 | 679.0 | 77.0 | Engineering | 0.101852 | 7 | 640 | ... | 170 | 388 | 85 | 0.117241 | 75000 | 55000 | 90000 | 350 | 257 | 50 |
2 | 3 | 2415 | METALLURGICAL ENGINEERING | 856.0 | 725.0 | 131.0 | Engineering | 0.153037 | 3 | 648 | ... | 133 | 340 | 16 | 0.024096 | 73000 | 50000 | 105000 | 456 | 176 | 0 |
3 | 4 | 2417 | NAVAL ARCHITECTURE AND MARINE ENGINEERING | 1258.0 | 1123.0 | 135.0 | Engineering | 0.107313 | 16 | 758 | ... | 150 | 692 | 40 | 0.050125 | 70000 | 43000 | 80000 | 529 | 102 | 0 |
4 | 5 | 2405 | CHEMICAL ENGINEERING | 32260.0 | 21239.0 | 11021.0 | Engineering | 0.341631 | 289 | 25694 | ... | 5180 | 16697 | 1672 | 0.061098 | 65000 | 50000 | 75000 | 18314 | 4440 | 972 |
5 rows × 21 columns
# Checking the last few rows of our data
print('TAIL')
recent_grads.tail(5)
TAIL
rank | major_code | major | total | men | women | major_category | share_women | sample_size | employed | ... | part_time | full_time_year_round | unemployed | unemployment_rate | median | p25th | p75th | college_jobs | non_college_jobs | low_wage_jobs | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
168 | 169 | 3609 | ZOOLOGY | 8409.0 | 3050.0 | 5359.0 | Biology & Life Science | 0.637293 | 47 | 6259 | ... | 2190 | 3602 | 304 | 0.046320 | 26000 | 20000 | 39000 | 2771 | 2947 | 743 |
169 | 170 | 5201 | EDUCATIONAL PSYCHOLOGY | 2854.0 | 522.0 | 2332.0 | Psychology & Social Work | 0.817099 | 7 | 2125 | ... | 572 | 1211 | 148 | 0.065112 | 25000 | 24000 | 34000 | 1488 | 615 | 82 |
170 | 171 | 5202 | CLINICAL PSYCHOLOGY | 2838.0 | 568.0 | 2270.0 | Psychology & Social Work | 0.799859 | 13 | 2101 | ... | 648 | 1293 | 368 | 0.149048 | 25000 | 25000 | 40000 | 986 | 870 | 622 |
171 | 172 | 5203 | COUNSELING PSYCHOLOGY | 4626.0 | 931.0 | 3695.0 | Psychology & Social Work | 0.798746 | 21 | 3777 | ... | 965 | 2738 | 214 | 0.053621 | 23400 | 19200 | 26000 | 2403 | 1245 | 308 |
172 | 173 | 3501 | LIBRARY SCIENCE | 1098.0 | 134.0 | 964.0 | Education | 0.877960 | 2 | 742 | ... | 237 | 410 | 87 | 0.104946 | 22000 | 20000 | 22000 | 288 | 338 | 192 |
5 rows × 21 columns
We will also check how many observations we have and what data types we are dealing with.
recent_grads.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 173 entries, 0 to 172
Data columns (total 21 columns):
rank                    173 non-null int64
major_code              173 non-null int64
major                   173 non-null object
total                   172 non-null float64
men                     172 non-null float64
women                   172 non-null float64
major_category          173 non-null object
share_women             172 non-null float64
sample_size             173 non-null int64
employed                173 non-null int64
full_time               173 non-null int64
part_time               173 non-null int64
full_time_year_round    173 non-null int64
unemployed              173 non-null int64
unemployment_rate       173 non-null float64
median                  173 non-null int64
p25th                   173 non-null int64
p75th                   173 non-null int64
college_jobs            173 non-null int64
non_college_jobs        173 non-null int64
low_wage_jobs           173 non-null int64
dtypes: float64(5), int64(14), object(2)
memory usage: 28.5+ KB
Removing Rows with Missing Information
We notice that most columns have 173 observations, but a few columns have missing data. Since only a handful of observations are missing, we can drop these rows without losing much information.
# Checking number of rows
raw_data_count = recent_grads.shape[0]
print('raw data count: ', raw_data_count)
# Dropping rows with missing values
recent_grads = recent_grads.dropna()
# Rechecking number of rows after dropping some observations
cleaned_data_count = recent_grads.shape[0]
print('cleaned data count: ', cleaned_data_count)
raw data count:  173
cleaned data count:  172
# Checking the characteristics of our numeric columns
recent_grads.describe()
rank | major_code | total | men | women | share_women | sample_size | employed | full_time | part_time | full_time_year_round | unemployed | unemployment_rate | median | p25th | p75th | college_jobs | non_college_jobs | low_wage_jobs | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.00000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 |
mean | 87.377907 | 3895.953488 | 39370.081395 | 16723.406977 | 22646.674419 | 0.522223 | 357.941860 | 31355.80814 | 26165.767442 | 8877.232558 | 19798.843023 | 2428.412791 | 0.068024 | 40076.744186 | 29486.918605 | 51386.627907 | 12387.401163 | 13354.325581 | 3878.633721 |
std | 49.983181 | 1679.240095 | 63483.491009 | 28122.433474 | 41057.330740 | 0.231205 | 619.680419 | 50777.42865 | 42957.122320 | 14679.038729 | 33229.227514 | 4121.730452 | 0.030340 | 11461.388773 | 9190.769927 | 14882.278650 | 21344.967522 | 23841.326605 | 6960.467621 |
min | 1.000000 | 1100.000000 | 124.000000 | 119.000000 | 0.000000 | 0.000000 | 2.000000 | 0.00000 | 111.000000 | 0.000000 | 111.000000 | 0.000000 | 0.000000 | 22000.000000 | 18500.000000 | 22000.000000 | 0.000000 | 0.000000 | 0.000000 |
25% | 44.750000 | 2403.750000 | 4549.750000 | 2177.500000 | 1778.250000 | 0.336026 | 42.000000 | 3734.75000 | 3181.000000 | 1013.750000 | 2474.750000 | 299.500000 | 0.050261 | 33000.000000 | 24000.000000 | 41750.000000 | 1744.750000 | 1594.000000 | 336.750000 |
50% | 87.500000 | 3608.500000 | 15104.000000 | 5434.000000 | 8386.500000 | 0.534024 | 131.000000 | 12031.50000 | 10073.500000 | 3332.500000 | 7436.500000 | 905.000000 | 0.067544 | 36000.000000 | 27000.000000 | 47000.000000 | 4467.500000 | 4603.500000 | 1238.500000 |
75% | 130.250000 | 5503.250000 | 38909.750000 | 14631.000000 | 22553.750000 | 0.703299 | 339.000000 | 31701.25000 | 25447.250000 | 9981.000000 | 17674.750000 | 2397.000000 | 0.087247 | 45000.000000 | 33250.000000 | 58500.000000 | 14595.750000 | 11791.750000 | 3496.000000 |
max | 173.000000 | 6403.000000 | 393735.000000 | 173809.000000 | 307087.000000 | 0.968954 | 4212.000000 | 307933.00000 | 251540.000000 | 115172.000000 | 199897.000000 | 28169.000000 | 0.177226 | 110000.000000 | 95000.000000 | 125000.000000 | 151643.000000 | 148395.000000 | 48207.000000 |
Most popular major categories
Let's check the total number of graduates for 2010-2012 for each major category.
major_cat_unique = recent_grads['major_category'].unique()
major_cat_grads = {}
for cat in major_cat_unique:
category = recent_grads['major_category'] == cat
total = np.sum(recent_grads.loc[category, 'total'])
major_cat_grads[cat] = total
total_grads_major_cats = pd.Series(major_cat_grads)
total_grads_major_cats = pd.DataFrame(total_grads_major_cats,
columns=['total_grads']
)
total_grads_major_cats.sort_values('total_grads')
total_grads | |
---|---|
Interdisciplinary | 12296.0 |
Agriculture & Natural Resources | 75620.0 |
Law & Public Policy | 179107.0 |
Physical Sciences | 185479.0 |
Industrial Arts & Consumer Services | 229792.0 |
Computers & Mathematics | 299008.0 |
Arts | 357130.0 |
Communications & Journalism | 392601.0 |
Biology & Life Science | 453862.0 |
Health | 463230.0 |
Psychology & Social Work | 481007.0 |
Social Science | 529966.0 |
Engineering | 537583.0 |
Education | 559129.0 |
Humanities & Liberal Arts | 713468.0 |
Business | 1302376.0 |
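The same aggregation can be written more compactly with `groupby`; here is a sketch on a toy stand-in frame (in the notebook, the identical chain would run on `recent_grads` itself).

```python
import pandas as pd

# Toy stand-in for recent_grads
demo = pd.DataFrame({'major_category': ['Arts', 'Business', 'Arts'],
                     'total': [100.0, 500.0, 250.0]})

total_grads = (demo.groupby('major_category')['total']
                   .sum()            # total graduates per category
                   .sort_values()    # ascending, like the table above
                   .to_frame('total_grads'))
print(total_grads)
```

This avoids building the intermediate dictionary and Series by hand.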
Checking for majors with graduates predominantly male
Let's check which majors, and how many of them, have predominantly male graduates.
recent_grads['more_men'] = recent_grads['men'] > recent_grads['women']
print(recent_grads['more_men'].value_counts())
print('\n')
print(np.mean(recent_grads['more_men']))
False    96
True     76
Name: more_men, dtype: int64

0.4418604651162791
44% of the 172 majors are predominantly male, which means 56% are predominantly female.
Scatter plots allow us to visually inspect, to a certain extent, whether there are any relationships between two variables. We'll generate a few scatter plots for some of the columns.
*1.1 Are more popular majors associated with higher median earnings?* (`total` and `median`)
*1.2 Are more popular majors associated with lower or higher unemployment rates?* (`total` and `unemployment_rate`)
*1.3 Are graduates of popular majors more likely to hold full-time jobs?* (`total` and `share_full_time`, a column we need to generate)
*1.4 Are majors with more women as a proportion of total graduates ranked higher in median earnings?* (`share_women` and `rank`)
*1.5 Are majors with more men graduating associated with lower or higher unemployment rates?* (`men` and `unemployment_rate`)
*1.6 Are majors with more women as a proportion of total graduates associated with lower or higher unemployment rates?* (`share_women` and `unemployment_rate`)
recent_grads.plot(x='total', y='median', kind='scatter', alpha=0.7)
plt.title('Figure 1.1')
recent_grads.plot(x='total', y='unemployment_rate', kind='scatter', alpha=0.7)
plt.title('Figure 1.2')
# Creating column for share_full_time
recent_grads['share_full_time'] = recent_grads['full_time'] / recent_grads['employed']
# dividing by zero (majors with zero employed) produces infinite shares,
# which show up as values above 1; let's set the share for these rows
# equal to the mean of the valid shares
over_one = recent_grads['share_full_time'] > 1
recent_grads.loc[over_one, 'share_full_time'] = np.mean(recent_grads.loc[~over_one, 'share_full_time'])
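An equivalent way to neutralize the impossible ratios is to convert the infinities to `NaN` and impute. A sketch on two toy rows (the same steps would apply to the real columns of `recent_grads`):

```python
import numpy as np
import pandas as pd

# Toy rows: the second major has zero employed, so the ratio comes out inf
demo = pd.DataFrame({'full_time': [80, 10], 'employed': [100, 0]})

share = demo['full_time'] / demo['employed']   # [0.8, inf]
share = share.replace(np.inf, np.nan)          # flag the impossible ratios
share = share.fillna(share.mean())             # impute with the mean of the rest
print(share.tolist())
```

`fillna` with the column mean reproduces the effect of the `.loc` assignment above.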
recent_grads.plot(x='total', y='share_full_time', kind='scatter', alpha=0.7)
plt.title('Figure 1.3')
recent_grads.plot(x='share_women', y='rank', kind='scatter', alpha=0.7)
plt.title('Figure 1.4')
recent_grads.plot(x='men', y='unemployment_rate', kind='scatter', alpha=0.7)
plt.title('Figure 1.5')
recent_grads.plot(x='share_women', y='unemployment_rate', kind='scatter', alpha=0.7)
plt.title('Figure 1.6')
After generating the scatter plots, there don't appear to be any strong relationships between the columns that can be easily seen through visual inspection, EXCEPT for `share_women` and `rank` as shown in Figure 1.4. Apparently, majors that have more women graduates as a proportion of the total number of graduates are associated with lower rankings in terms of median earnings (recall that rank 1 has the highest median earnings, so a larger rank number means lower pay). Other than that, we can't really say much just yet and would need to dig deeper if we want to know more about this observed pattern.
Furthermore, Figure 1.6 appears to depict a weak positive relationship between `share_women` and `unemployment_rate`, although this is not very clear, so we can't say anything definitive for now.
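A quick numeric complement to eyeballing the scatter plots is a correlation matrix. The sketch below uses toy data; in the notebook the call would simply be `recent_grads[['share_women', 'rank', 'unemployment_rate']].corr()`.

```python
import pandas as pd

# Toy data with a roughly linear positive relationship,
# mimicking the share_women vs. rank pattern in Figure 1.4
demo = pd.DataFrame({'share_women': [0.1, 0.5, 0.9],
                     'rank': [1, 90, 170]})
corr = demo.corr()
print(corr.loc['share_women', 'rank'])
```

Correlation only captures linear association, but it gives a single number to back up (or temper) what the plots suggest.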
Histograms give us visual representations of the distribution of values in a column. We will generate histograms for a few selected columns.

We will put all the histograms into one figure. We will also let `pandas` figure out the ranges for itself, but let's try to calculate the approximate number of bins to use for each histogram that we generate.
# For determining the approximate number of bins using the Freedman-Diaconis
# rule (we'll round the results to the nearest multiple of 5 when plotting)
def num_bins(series):
iqr = series.quantile(0.75) - series.quantile(0.25)
h = 2 * iqr * (len(series) ** (-1/3))
num_bins = (series.max() - series.min()) / h
return np.round(num_bins, 0)
cols_interest = ['sample_size', 'median', 'employed', 'share_full_time', 'share_women', 'unemployment_rate', 'men', 'women']
for col in cols_interest:
bins = num_bins(recent_grads[col])
print('Number of bins for {a} : {b}'.format(a=col, b=bins))
Number of bins for sample_size : 39.0
Number of bins for median : 20.0
Number of bins for employed : 31.0
Number of bins for share_full_time : 10.0
Number of bins for share_women : 7.0
Number of bins for unemployment_rate : 13.0
Number of bins for men : 39.0
Number of bins for women : 41.0
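Incidentally, the `h` inside `num_bins` is the Freedman-Diaconis bin width, and `numpy` ships the same rule built in: `np.histogram_bin_edges` accepts `bins='fd'`. A small sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=10, size=172)

# 'fd' = Freedman-Diaconis: bin width h = 2 * IQR * n^(-1/3)
edges = np.histogram_bin_edges(data, bins='fd')
print(len(edges) - 1)  # number of bins numpy would use
```

Letting `numpy` pick the edges avoids hand-rolling the formula when we don't need control over the exact count.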
# Generating the figure and the axes objects
fig, ((ax1, ax2), (ax3, ax4), (ax5, ax6), (ax7, ax8)) = plt.subplots(4,2, figsize=(10, 10))
# Creating a dictionary we can loop over
dictionary = {ax1: ['sample_size', 40, 'Sample Size Histogram'],
ax2: ['median', 20, 'Median Salary Histogram'],
ax3: ['employed', 30, 'Number Employed Histogram'],
ax4: ['share_full_time', 10, 'Proportion Full Time Histogram'],
ax5: ['share_women', 10, 'Proportion Women Histogram'],
ax6: ['unemployment_rate', 15, 'Unemployment Rate Histogram'],
ax7: ['men', 40, 'Male Graduates Histogram'],
ax8: ['women', 40, 'Female Graduates Histogram'],
}
# Plotting the histograms by looping over the dictionary,
# passing each axes object directly to pandas via ax=
for ax, value in dictionary.items():
    recent_grads[value[0]].plot(kind='hist', bins=value[1], ax=ax)
    ax.set_title(value[2])
# Fixing the layout
fig.tight_layout()
Here are some quick observations we can glean from the histograms we plotted: most majors' `sample_size` is below 500, with a large number having sample sizes below 100.

A scatter matrix plot combines both histograms and scatter plots. The diagonals contain the histograms of the variables, while the intersections between the rows and columns are scatter plots showing the relationship between the row and column variables.
We will generate a couple of scatter matrix plots.
# Importing scatter_matrix
from pandas.plotting import scatter_matrix
# Scatter matrix for Sample_size and Median
cols2 = ['sample_size', 'median']
scatter_matrix(recent_grads[cols2], figsize=(8, 8))
# Scatter matrix for Sample_size, Median, and Unemployment_rate
cols3 = ['sample_size', 'median', 'unemployment_rate']
scatter_matrix(recent_grads[cols3], figsize=(8, 8))
Let's try generating a scatter matrix plot containing more columns of interest.
cols_scatter = ['total', 'share_women', 'median', 'unemployment_rate', 'share_full_time']
scatter_matrix(recent_grads[cols_scatter], figsize=(12,12))
plt.show()
What pops up the most in the scatter matrix above is the apparent strong, steep negative relationship between `share_women` and `median`. There could be many explanations for this which we can't really untangle with just the data we have right now. A related observation is that majors with more women accounting for their total number of graduates also tend to have a lower share of employed people working full time. The unemployment rate does not show any strong relationship with the proportion of women graduates, which suggests that the lower median salary may be driven more by the quality or regularity of employment rather than by employment itself. Other confounding factors could contribute to these observed patterns, but we have shown here some motivation for further investigation.
As for the total number of graduates, we can't see any particularly strong relationships with any of the other columns. It seems the popularity of a degree is not related to salary and employment outcomes.
For this data set, bar plots can give us a visual representation of the values of the columns across different rows (majors). Before we generate the bar plots, we will first sort the majors by popularity (the `total` column). We will also be using horizontal bar plots for improved readability.
We will check what the *share of women in total number of graduates* and the *unemployment rate* look like for the ten most popular and the ten least popular degrees.
# Sorting by popularity (in terms of total number of graduates)
grads_sorted = recent_grads.sort_values('total', ascending=False)
# Top 10 Share Women
grads_sorted[:10].plot.barh(x='major', y='share_women', legend=False)
plt.title('Share of Women in Total Graduates (Top 10 Majors)')
# Bottom 10 Share Women
grads_sorted[-10:].plot.barh(x='major', y='share_women', legend=False)
plt.title('Share of Women in Total Graduates (Bottom 10 Majors)')
# Top 10 Unemployment
grads_sorted[:10].plot.barh(x='major', y='unemployment_rate', legend=False)
plt.title('Unemployment Rate (Top 10 Majors)')
# Bottom 10 Unemployment
grads_sorted[-10:].plot.barh(x='major', y='unemployment_rate', legend=False)
plt.title('Unemployment Rate (Bottom 10 Majors)')
It appears that women account for a significant portion of total graduates for the Top 10 most popular majors, with their share ranging from 40% to over 80%. There are plenty of women relative to men taking up Nursing.
The share of women in total number of graduates varies more for the least popular majors, going as low as 0% for Military Technologies and as high as over 80% for Library Science and School Student Counseling.
Surprisingly, the unemployment rate is apparently zero for several of the least popular majors (Military Technologies, Mathematics and Computer Science, Soil Science, and Educational Supervision). On the other hand, the unemployment rate for one of the most popular majors, Political Science and Government, is relatively high at around 10%, while Nursing has the lowest unemployment rate among the ten most popular majors at just over 4%.
Let's examine how many men and women graduate in each major category. A grouped bar plot will help us see this easily.
# Creating a dataframe where we can graph our grouped bar plots
major_cat_unique = recent_grads['major_category'].unique()
major_categories_total = {}
major_categories_men = {}
major_categories_women = {}
for cat in major_cat_unique:
category = recent_grads['major_category'] == cat
total = int(np.sum(recent_grads.loc[category, 'total']))
men = int(np.sum(recent_grads.loc[category, 'men']))
women = int(np.sum(recent_grads.loc[category, 'women']))
major_categories_total[cat] = total
major_categories_men[cat] = men
major_categories_women[cat] = women
major_categories_total = pd.Series(major_categories_total)
major_categories_men = pd.Series(major_categories_men)
major_categories_women = pd.Series(major_categories_women)
major_categories_gender = pd.concat([major_categories_total, major_categories_men, major_categories_women], axis=1)
major_categories_gender = pd.DataFrame(major_categories_gender)
major_categories_gender.columns = ['total', 'men', 'women']
categories_sorted = major_categories_gender.sort_values('total', ascending=True)
categories_sorted[['men','women']].plot.barh(figsize=(10,10))
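As with the category totals earlier, the three dictionaries above could be collapsed into a single `groupby` chain. A sketch on a toy frame (the real call would group `recent_grads`):

```python
import pandas as pd

# Toy stand-in for recent_grads
demo = pd.DataFrame({'major_category': ['Arts', 'Arts', 'Health'],
                     'men': [100, 50, 30],
                     'women': [200, 150, 300]})

gender = demo.groupby('major_category')[['men', 'women']].sum()
gender['total'] = gender['men'] + gender['women']
gender = gender.sort_values('total')
# gender[['men', 'women']].plot.barh(figsize=(10, 10))  # reproduces the figure
print(gender)
```

One chain replaces the three hand-built dictionaries and the `pd.concat` step.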
We find that when we group majors into their categories, ten out of the sixteen categories are dominated by women. We do note that men vastly outnumber women in "Computers & Mathematics" and "Engineering", while it's the other way around for "Education", "Psychology & Social Work", and "Health".
Boxplots show us the distribution of numeric columns. They convey more or less the same information as the `describe()` method, but in graphical form. Let's produce boxplots for the columns of interest we identified earlier.
fig, ((ax_1, ax_2, ax_3, ax_4), (ax_5, ax_6, ax_7, ax_8)) = plt.subplots(2,4, figsize=(10, 10))
# Creating a dictionary we can loop over
dictionary = {ax_1: ['sample_size', 'Sample Size'],
ax_2: ['median', 'Median Salary'],
ax_3: ['employed', 'Number Employed'],
ax_4: ['share_full_time', 'Proportion Full Time'],
ax_5: ['share_women', 'Proportion Women'],
ax_6: ['unemployment_rate', 'Unemployment Rate'],
ax_7: ['men', 'Male Graduates'],
ax_8: ['women', 'Female Graduates'],
}
# Plotting the boxplots by looping over the dictionary,
# passing each axes object directly to pandas via ax=
for ax, value in dictionary.items():
    recent_grads[value[0]].plot(kind='box', ax=ax)
    ax.set_title(value[1])
# Fixing the layout
fig.tight_layout()
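Pandas can also draw the whole grid in one call with `subplots=True`, which sidesteps the per-axes bookkeeping entirely. A sketch on synthetic data; the notebook version would pass `recent_grads[cols_interest]` with `layout=(2, 4)`.

```python
import matplotlib
matplotlib.use('Agg')  # headless-safe; unnecessary under %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
demo = pd.DataFrame(rng.normal(size=(172, 4)), columns=['a', 'b', 'c', 'd'])

# One boxplot per column, each on its own axis
demo.plot(kind='box', subplots=True, layout=(2, 2), figsize=(8, 8))
plt.tight_layout()
```

The trade-off is less control over individual titles, which the dictionary loop handles more flexibly.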
For comparison, let's also see the numbers returned by the `describe()` method:
recent_grads[cols_interest].describe()
sample_size | median | employed | share_full_time | share_women | unemployment_rate | men | women | |
---|---|---|---|---|---|---|---|---|
count | 172.000000 | 172.000000 | 172.00000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 | 172.000000 |
mean | 357.941860 | 40076.744186 | 31355.80814 | 0.826769 | 0.522223 | 0.068024 | 16723.406977 | 22646.674419 |
std | 619.680419 | 11461.388773 | 50777.42865 | 0.082081 | 0.231205 | 0.030340 | 28122.433474 | 41057.330740 |
min | 2.000000 | 22000.000000 | 0.00000 | 0.574301 | 0.000000 | 0.000000 | 119.000000 | 0.000000 |
25% | 42.000000 | 33000.000000 | 3734.75000 | 0.772969 | 0.336026 | 0.050261 | 2177.500000 | 1778.250000 |
50% | 131.000000 | 36000.000000 | 12031.50000 | 0.826769 | 0.534024 | 0.067544 | 5434.000000 | 8386.500000 |
75% | 339.000000 | 45000.000000 | 31701.25000 | 0.888830 | 0.703299 | 0.087247 | 14631.000000 | 22553.750000 |
max | 4212.000000 | 110000.000000 | 307933.00000 | 0.997595 | 0.968954 | 0.177226 | 173809.000000 | 307087.000000 |
To conclude, let's return to the questions we asked earlier and try to answer them:
*Which categories of majors have the most students?*
The three most popular categories are "Business", "Humanities & Liberal Arts", and "Education". We were able to answer this both through the grouped bar plots and by aggregating totals per category and printing the results.
*Do students in more popular majors make more money?*
The scatter plots show that there is no strong evidence that more popular majors make more money (i.e. higher median salary).
*Are male-dominated majors associated with higher earnings?*
The scatter plots provide some evidence that female-dominated majors are associated with lower median salary. Hence, we have reason to believe that male-dominated majors are associated with higher earnings but we don't have enough information to explain why we are seeing this pattern.
*How many majors are predominantly male/female?*
We answered this question early in this notebook. We learned that 96 out of the 172 majors (56%) in our data set have predominantly female graduates.
What We Learned
In this project, we were able to practice creating graphs to help answer questions. We learned that some questions can better be answered by looking at graphs while some can be answered better by looking at tables or summary statistics.