In this project, you will apply unsupervised learning techniques to identify segments of the population that form the core customer base for a mail-order sales company in Germany. These segments can then be used to direct marketing campaigns towards audiences that will have the highest expected rate of returns. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.
This notebook will help you complete this task by providing a framework within which you will perform your analysis steps. In each step of the project, you will see some text describing the subtask that you will perform, followed by one or more code cells for you to complete your work. Feel free to add additional code and markdown cells as you go along so that you can explore everything in precise chunks. The code cells provided in the base template will outline only the major tasks, and will usually not be enough to cover all of the minor tasks that comprise it.
It should be noted that while there will be precise guidelines on how you should handle certain tasks in the project, there will also be places where an exact specification is not provided. There will be times in the project where you will need to make and justify your own decisions on how to treat the data. These are places where there may not be only one way to handle the data. In real-life tasks, there may be many valid ways to approach an analysis task. One of the most important things you can do is clearly document your approach so that other scientists can understand the decisions you've made.
At the end of most sections, there will be a Markdown cell labeled Discussion. In these cells, you will report your findings for the completed section, as well as document the decisions that you made in your approach to each subtask. Your project will be evaluated not just on the code used to complete the tasks outlined, but also your communication about your observations and conclusions at each stage.
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from ast import literal_eval
from sklearn.impute import SimpleImputer  # Imputer was removed from sklearn.preprocessing; SimpleImputer is its replacement
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
# magic word for producing visualizations in notebook
%matplotlib inline
There are four files associated with this project (not including this one):

- Udacity_AZDIAS_Subset.csv: Demographics data for the general population of Germany; 891211 persons (rows) x 85 features (columns).
- Udacity_CUSTOMERS_Subset.csv: Demographics data for customers of a mail-order company; 191652 persons (rows) x 85 features (columns).
- Data_Dictionary.md: Detailed information file about the features in the provided datasets.
- AZDIAS_Feature_Summary.csv: Summary of feature attributes for demographics data; 85 features (rows) x 4 columns.

Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. You will use this information to cluster the general population into groups with similar demographic properties. Then, you will see how the people in the customers dataset fit into those created clusters. The hope here is that certain clusters are over-represented in the customers data, as compared to the general population; those over-represented clusters will be assumed to be part of the core userbase. This information can then be used for further applications, such as targeting for a marketing campaign.
To start off with, load in the demographics data for the general population into a pandas DataFrame, and do the same for the feature attributes summary. Note for all of the .csv data files in this project: they're semicolon (;) delimited, so you'll need an additional argument in your read_csv() call to read in the data properly. Also, considering the size of the main dataset, it may take some time for it to load completely.
Once the dataset is loaded, it's recommended that you take a little bit of time just browsing the general structure of the dataset and feature summary file. You'll be getting deep into the innards of the cleaning in the first major step of the project, so gaining some general familiarity can help you get your bearings.
# Load in the general demographics data.
azdias = pd.read_csv('Udacity_AZDIAS_Subset.csv', delimiter=';')
# Load in the feature summary file.
feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv', delimiter=';')
# Check the structure of the data after it's loaded (e.g. print the number of
# rows and columns, print the first few rows).
azdias.head()
| | AGER_TYP | ALTERSKATEGORIE_GROB | ANREDE_KZ | CJT_GESAMTTYP | FINANZ_MINIMALIST | FINANZ_SPARER | FINANZ_VORSORGER | FINANZ_ANLEGER | FINANZ_UNAUFFAELLIGER | FINANZ_HAUSBAUER | ... | PLZ8_ANTG1 | PLZ8_ANTG2 | PLZ8_ANTG3 | PLZ8_ANTG4 | PLZ8_BAUMAX | PLZ8_HHZ | PLZ8_GBZ | ARBEIT | ORTSGR_KLS9 | RELAT_AB |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | -1 | 2 | 1 | 2.0 | 3 | 4 | 3 | 5 | 5 | 3 | ... | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| 1 | -1 | 1 | 2 | 5.0 | 1 | 5 | 2 | 5 | 4 | 5 | ... | 2.0 | 3.0 | 2.0 | 1.0 | 1.0 | 5.0 | 4.0 | 3.0 | 5.0 | 4.0 |
| 2 | -1 | 3 | 2 | 3.0 | 1 | 4 | 1 | 2 | 3 | 5 | ... | 3.0 | 3.0 | 1.0 | 0.0 | 1.0 | 4.0 | 4.0 | 3.0 | 5.0 | 2.0 |
| 3 | 2 | 4 | 2 | 2.0 | 4 | 2 | 5 | 2 | 1 | 2 | ... | 2.0 | 2.0 | 2.0 | 0.0 | 1.0 | 3.0 | 4.0 | 2.0 | 3.0 | 3.0 |
| 4 | -1 | 3 | 1 | 5.0 | 4 | 3 | 4 | 1 | 3 | 2 | ... | 2.0 | 4.0 | 2.0 | 1.0 | 2.0 | 3.0 | 3.0 | 4.0 | 6.0 | 5.0 |

5 rows × 85 columns
print('The dataset has',len(azdias) ,'rows')
print('The dataset has',len(azdias.columns) ,'columns')
The dataset has 891221 rows
The dataset has 85 columns
(feat_info.groupby('type').count()['attribute']
.sort_values(ascending = False).plot(kind = 'bar', figsize = (10, 7)))
plt.title('Feature Counts by Variable Type')
plt.show()
(feat_info.groupby('information_level').count()['type']
.sort_values(ascending = False).plot(kind = 'bar', figsize = (10, 7)))
plt.title('Feature Counts by Information Level')
plt.show()
Tip: Add additional cells to keep everything in reasonably-sized chunks! Keyboard shortcut esc --> a (press escape to enter command mode, then press the 'A' key) adds a new cell before the active cell, and esc --> b adds a new cell after the active cell. If you need to convert an active cell to a markdown cell, use esc --> m; to convert to a code cell, use esc --> y.
The feature summary file contains a summary of properties for each demographics data column. You will use this file to help you make cleaning decisions during this stage of the project. First of all, you should assess the demographics data in terms of missing data. Pay attention to the following points as you perform your analysis, and take notes on what you observe. Make sure that you fill in the Discussion cell with your findings and decisions at the end of each step that has one!
The fourth column of the feature attributes summary (loaded in above as feat_info) documents the codes from the data dictionary that indicate missing or unknown data. While the file encodes this as a list (e.g. [-1,0]), this will get read in as a string object. You'll need to do a little bit of parsing to make use of it to identify and clean the data. Convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value. You might want to see how much data takes on a 'missing' or 'unknown' code, and how much data is naturally missing, as a point of interest.
As one more reminder, you are encouraged to add additional cells to break up your analysis into manageable chunks.
There are 5 different types of missing-data encodings. These types will be handled below.
# Quote the non-numeric codes ('X', 'XX') so each missing_or_unknown string parses as a Python list.
feat_info['missing_or_unknown'].replace({'[-1,X]': '[-1,"X"]',
                                         '[-1,XX]': '[-1,"XX"]',
                                         '[XX]': '["XX"]'}, inplace = True)
# Build a per-column mapping from each missing/unknown code to NaN.
encodings = feat_info.set_index('attribute')['missing_or_unknown'].to_dict()
encodings = {key: {k: np.nan for k in literal_eval(value)} for key, value in encodings.items()}
for column, mapping in encodings.items():
    azdias[column] = azdias[column].replace(mapping)  # assignment is more robust than inplace replace on a column view
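As a point of interest, you can compare how much data was naturally missing against how much carried a 'missing'/'unknown' code. A minimal sketch (it re-reads the raw file, which takes a while):

# Compare naturally-missing NaNs with NaNs introduced by decoding the codes.
azdias_raw = pd.read_csv('Udacity_AZDIAS_Subset.csv', delimiter=';')
natural_missing = azdias_raw.isnull().sum().sum()
total_missing = azdias.isnull().sum().sum()
print('naturally missing:', natural_missing)
print('coded as missing/unknown:', total_missing - natural_missing)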
How much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. You will want to use matplotlib's hist() function to visualize the distribution of missing value counts to find these columns. Identify and document these columns. While some of these columns might have justifications for keeping or re-encoding the data, for this project you should just remove them from the dataframe. (Feel free to make remarks about these outlier columns in the discussion, however!)
For the remaining features, are there any patterns in which columns have, or share, missing data?
# Perform an assessment of how much missing data there is in each column of the dataset.
azdias_nullcnt_col = pd.DataFrame(azdias.isnull().sum())
azdias_nullcnt_col.columns =['num_nulls']
(azdias_nullcnt_col/len(azdias) * 100).plot(kind = 'hist', figsize = (10, 7), bins = 60,
                                            title = 'Distribution of Missing Data per Column')
plt.xlabel('Missing Data %')
Using heatmap to investigate the pattern of missing data
plt.figure(figsize = (20,15))
sns.heatmap(azdias.isnull(), cmap='Blues')
Investigate missing data categories
feat_info_joined = feat_info.set_index('attribute').join(azdias_nullcnt_col)
# Average % of missing values per feature within each information level.
missing_std = (feat_info_joined.groupby('information_level').sum()/(feat_info_joined.groupby('information_level').count()
               * len(azdias)) * 100)
missing_std['num_nulls'].plot(kind = 'bar', figsize = (10, 7), title = 'Missing Data % by Information Level')
plt.ylabel('Missing Data %')
Outlier columns
outlier_columns = list(azdias_nullcnt_col[azdias_nullcnt_col['num_nulls'] > 200000].index)
feat_info[feat_info['attribute'].isin(outlier_columns)]
| | attribute | information_level | type | missing_or_unknown |
|---|---|---|---|---|
| 0 | AGER_TYP | person | categorical | [-1,0] |
| 11 | GEBURTSJAHR | person | numeric | [0] |
| 40 | TITEL_KZ | person | categorical | [-1,0] |
| 43 | ALTER_HH | household | interval | [0] |
| 47 | KK_KUNDENTYP | household | categorical | [-1] |
| 64 | KBA05_BAUMAX | microcell_rr3 | mixed | [-1,0] |
The outlier columns of categorical type will be treated as categorical features and transformed using one-hot encoding (with an explicit missing-value level) rather than dropped outright.
print('The columns that are outliers:', outlier_columns)
The columns that are outliers: ['AGER_TYP', 'GEBURTSJAHR', 'TITEL_KZ', 'ALTER_HH', 'KK_KUNDENTYP', 'KBA05_BAUMAX']
The data set has 6 columns that each have over 200,000 missing values. These columns (AGER_TYP, GEBURTSJAHR, TITEL_KZ, ALTER_HH, KK_KUNDENTYP, KBA05_BAUMAX) are flagged as outliers; as noted above, the categorical ones among them are retained via one-hot encoding with an explicit missing level, while the rest are kept and later imputed. There are another 55 columns in the dataset that are missing around 150,000 values. From the heatmap visualization above, the missing values have strong correlations with each other: most missing values in different columns occur in the same rows. From the bar plot "Missing Data % by Information Level", most of the missing values belong to the feature groups household and microcell_rr3 (more than 20%).
Now, you'll perform a similar assessment for the rows of the dataset. How much data is missing in each row? As with the columns, you should see some groups of points that have very different numbers of missing values. Divide the data into two subsets: one for data points that are above some threshold for missing values, and a second subset for points below that threshold.
In order to know what to do with the outlier rows, we should see if the distribution of data values on columns that are not missing data (or are missing very little data) is similar or different between the two groups. Select at least five of these columns and compare the distribution of values. You can use seaborn's countplot() function to create a bar chart of code frequencies and matplotlib's subplot() function to put bar charts for the two subsets side by side.

Depending on what you observe in your comparison, this will have implications on how you approach your conclusions later in the analysis. If the distributions of non-missing features look similar between the data with many missing values and the data with few or no missing values, then we could argue that simply dropping those points from the analysis won't present a major issue. On the other hand, if the data with many missing values looks very different from the data with few or no missing values, then we should make a note on those data as special. We'll revisit these data later on. Either way, you should continue your analysis for now using just the subset of the data with few or no missing values.
azdias_nullcnt_row = pd.DataFrame(azdias.isnull().sum(axis = 1))
azdias_nullcnt_row.columns = ['num_nulls']
azdias_nullcnt_row.plot(kind = 'hist', bins = 100, figsize = (10, 7), title ='Missing Data by Rows')
plt.xlabel('Number of Missing Values')
There are several rows that have more than 30 missing values across the columns. Now build a function that separates the data into two parts and visualizes the distributions separately.
def visualizeDistAzdias(azdias_nullcnt_row, column, threshold = 30):
    # separate dataset based on number of null values in rows
    lessNull_rows = list(azdias_nullcnt_row[azdias_nullcnt_row['num_nulls'] < threshold].index)
    moreNull_rows = list(azdias_nullcnt_row[azdias_nullcnt_row['num_nulls'] >= threshold].index)  # >= so no row falls between the two subsets
    # split into the low- and high-missing subsets
    azdias_moreNull = azdias.loc[moreNull_rows, :]
    azdias_lessNull = azdias.loc[lessNull_rows, :]
    # create side-by-side subplots; bar heights are the % of rows taking each code
    fig, (ax1, ax2) = plt.subplots(1, 2, sharey = True, figsize = (20, 7))
    ax1.set_title('Less Missing Values')
    ax2.set_title('More Missing Values')
    sns.barplot(x = column, y = column, ax = ax1,
                data = azdias_lessNull, estimator = lambda x: len(x)/len(azdias_lessNull) * 100)
    sns.barplot(x = column, y = column, ax = ax2,
                data = azdias_moreNull, estimator = lambda x: len(x)/len(azdias_moreNull) * 100)
    ax1.set_ylabel('% values among total')
    plt.suptitle(column)
Now visualize 5 different columns
for i in ['GFK_URLAUBERTYP', 'SEMIO_REL','SEMIO_VERT', 'HH_EINKOMMEN_SCORE', 'CJT_GESAMTTYP']:
visualizeDistAzdias(azdias_nullcnt_row, i, 10)
From the visualizations above, there are clear differences between the rows that have a large number of missing values and those with a small number. These rows need to be treated as a special group rather than simply dropped together with the rest. Based on the histogram of the missing-value distribution, the rows with more than 30 missing values will be dropped, while the remaining missing values will be imputed.
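The drop itself is not shown in a dedicated cell; here is a minimal sketch implementing the decision above, assuming the threshold of 30 from the discussion:

# Set aside rows with 30 or more missing values and continue with the rest.
row_null_counts = azdias.isnull().sum(axis=1)
azdias_highnull = azdias[row_null_counts >= 30]  # kept aside as the 'special' group
azdias = azdias[row_null_counts < 30]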
Checking for missing data isn't the only way in which you can prepare a dataset for analysis. Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, you need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. Check the third column of the feature summary (feat_info) for a summary of types of measurement.
In the first two parts of this sub-step, you will perform an investigation of the categorical and mixed-type features and make a decision on each of them, whether you will keep, drop, or re-encode each. Then, in the last part, you will create a new data frame with only the selected and engineered columns.
Data wrangling is often the trickiest part of the data analysis process, and there's a lot of it to be done here. But stick with it: once you're done with this step, you'll be ready to get to the machine learning parts of the project!
# How many features are there of each data type?
(feat_info.groupby('type').count()['attribute']
.sort_values(ascending = False).plot(kind = 'bar', figsize = (10, 7)))
plt.title('# Features by Variable Types')
plt.show()
For categorical data, you would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following:

- For binary (two-level) categorical features that take numeric values, you can keep them as they are.
- For binary features that take non-numeric values, re-encode the values as numbers or create a dummy variable (see the sketch after this list).
- For multi-level categorical features (three or more values), you can choose to encode the values using multiple dummy variables, or drop them from the analysis.
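As an illustration of the binary non-numeric case, a direct 0/1 re-encoding would look like the sketch below (this notebook instead handles OST_WEST_KZ via one-hot encoding later, so this is illustration only and does not modify azdias):

# Sketch: map the one non-numeric binary feature to 0/1.
ost_west_binary = azdias['OST_WEST_KZ'].map({'W': 1, 'O': 0})
print(ost_west_binary.value_counts(dropna=False))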
Assess categorical variables: which are binary, which are multi-level, and which ones need to be re-encoded.
categoricals = list(feat_info[feat_info['type'] == 'categorical']['attribute'].values)
print('Categorical features unique values:')
_ = azdias[categoricals].apply(lambda x: print(x.name, ':', x.unique()))
Categorical features unique values:
AGER_TYP : [nan 2.0 3.0 1.0]
ANREDE_KZ : [1 2]
CJT_GESAMTTYP : [2.0 5.0 3.0 4.0 1.0 6.0 nan]
FINANZTYP : [4 1 6 5 2 3]
GFK_URLAUBERTYP : [10.0 1.0 5.0 12.0 9.0 3.0 8.0 11.0 4.0 2.0 7.0 6.0 nan]
GREEN_AVANTGARDE : [0 1]
LP_FAMILIE_FEIN : [2.0 5.0 1.0 nan 10.0 7.0 11.0 3.0 8.0 4.0 6.0 9.0]
LP_FAMILIE_GROB : [2.0 3.0 1.0 nan 5.0 4.0]
LP_STATUS_FEIN : [1.0 2.0 3.0 9.0 4.0 10.0 5.0 8.0 6.0 7.0 nan]
LP_STATUS_GROB : [1.0 2.0 4.0 5.0 3.0 nan]
NATIONALITAET_KZ : [nan 1.0 3.0 2.0]
SHOPPER_TYP : [nan 3.0 2.0 1.0 0.0]
SOHO_KZ : [nan 1.0 0.0]
TITEL_KZ : [nan 4.0 1.0 3.0 5.0 2.0]
VERS_TYP : [nan 2.0 1.0]
ZABEOTYP : [3 5 4 1 6 2]
KK_KUNDENTYP : [nan 1.0 3.0 6.0 4.0 2.0 5.0]
GEBAEUDETYP : [nan 8.0 1.0 3.0 2.0 6.0 4.0 5.0]
OST_WEST_KZ : [nan 'W' 'O']
CAMEO_DEUG_2015 : [nan '8' '4' '2' '6' '1' '9' '5' '7' '3']
CAMEO_DEU_2015 : [nan '8A' '4C' '2A' '6B' '8C' '4A' '2D' '1A' '1E' '9D' '5C' '8B' '7A' '5D' '9E' '9B' '1B' '3D' '4E' '4B' '3C' '5A' '7B' '9A' '6D' '6E' '2C' '7C' '9C' '7D' '5E' '1D' '8D' '6C' '6A' '5B' '4D' '3A' '2B' '7E' '3B' '6F' '5F' '1C']
The only binary variable encoded with non-numeric values is OST_WEST_KZ; CAMEO_DEUG_2015 and CAMEO_DEU_2015 are multi-level variables with non-numeric codes.

All of the categorical variables (except for GREEN_AVANTGARDE, which is already encoded as a [0,1] binary) are re-encoded via one-hot encoding. Since categorical features do not have a specific ordering, their relative numerical values carry no meaning; the spurious ordering effect is removed by encoding these features as one-hot columns.
There are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis. There are two in particular that deserve attention; the handling of the rest is up to your own choices. Be sure to check Data_Dictionary.md for the details needed to finish these tasks.
mixed = list(feat_info[feat_info['type'] == 'mixed']['attribute'].values)
_ = azdias[mixed].apply(lambda x: print(x.name, ':', x.unique()))
LP_LEBENSPHASE_FEIN : [15.0 21.0 3.0 nan 32.0 8.0 2.0 5.0 10.0 4.0 6.0 23.0 12.0 20.0 1.0 11.0 25.0 13.0 7.0 18.0 31.0 19.0 38.0 35.0 30.0 22.0 14.0 33.0 29.0 24.0 28.0 37.0 26.0 39.0 27.0 36.0 9.0 34.0 40.0 16.0 17.0]
LP_LEBENSPHASE_GROB : [4.0 6.0 1.0 nan 10.0 2.0 3.0 5.0 7.0 12.0 11.0 9.0 8.0]
PRAEGENDE_JUGENDJAHRE : [nan 14.0 15.0 8.0 3.0 10.0 11.0 5.0 9.0 6.0 4.0 2.0 1.0 12.0 13.0 7.0]
WOHNLAGE : [nan 4.0 2.0 7.0 3.0 5.0 1.0 8.0 0.0]
CAMEO_INTL_2015 : [nan '51' '24' '12' '43' '54' '22' '14' '13' '15' '33' '41' '34' '55' '25' '23' '31' '52' '35' '45' '44' '32']
KBA05_BAUMAX : [nan 5.0 1.0 2.0 3.0 4.0]
PLZ8_BAUMAX : [nan 1.0 2.0 4.0 5.0 3.0]
# Re-encode categorical variable(s) to be kept in the analysis.
# Note: the triple intersection below is empty (no column is both categorical
# and mixed), so none of the flagged outlier columns are dropped here; the
# categorical/mixed ones are instead one-hot encoded with an explicit NaN level.
dropped = set.intersection(set(outlier_columns), set(categoricals), set(mixed))
one_hot = set(outlier_columns) - dropped
cate_encode = list(set.union(set(categoricals), set(mixed))
                   - dropped - set(['GREEN_AVANTGARDE', 'PRAEGENDE_JUGENDJAHRE', 'CAMEO_INTL_2015']))
azdias = pd.get_dummies(azdias, columns=cate_encode, dummy_na = True)
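For reference, dummy_na = True gives missing values their own indicator column; a small standalone illustration:

# dummy_na=True adds an explicit indicator column for NaN.
demo = pd.Series([1.0, 2.0, np.nan], name='demo')
print(pd.get_dummies(demo, dummy_na = True))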
PRAEGENDE_JUGENDJAHRE
Dominating movement of person's youth (avantgarde vs. mainstream; east vs. west)
# Map each PRAEGENDE_JUGENDJAHRE code to [decade, movement]
# (movement: 1 = mainstream, 0 = avantgarde, per the data dictionary).
full_map = pd.DataFrame({-1: [np.nan, np.nan],
0: [np.nan, np.nan],
1: [40, 1],
2: [40, 0],
3: [50, 1],
4: [50, 0],
5: [60, 1],
6: [60, 0],
7: [60, 0],
8: [70, 1],
9: [70, 0],
10: [80, 1],
11: [80, 0],
12: [80, 1],
13: [80, 0],
14: [90, 1],
15: [90, 0],
np.nan: [np.nan, np.nan]}).T
# Create column that encodes decades
decades_map = full_map[0].to_dict()
azdias['DECADES'] = azdias['PRAEGENDE_JUGENDJAHRE'].map(decades_map)
# Create column that encodes movements
movements_map = full_map[1].to_dict()
azdias['MOVEMENTS'] = azdias['PRAEGENDE_JUGENDJAHRE'].map(movements_map)
CAMEO_INTL_2015
German CAMEO: Wealth / Life Stage Typology, mapped to international code
# Tens digit encodes wealth; units digit encodes life stage.
azdias['WEALTH'] = pd.to_numeric(azdias['CAMEO_INTL_2015'])//10
azdias['LIFE_STAGE'] = pd.to_numeric(azdias['CAMEO_INTL_2015'])%10
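A quick spot-check of the split (international code '51' should map to WEALTH 5 and LIFE_STAGE 1):

# Spot-check: tens digit -> wealth, units digit -> life stage.
assert pd.to_numeric('51') // 10 == 5
assert pd.to_numeric('51') % 10 == 1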
The two columns that contain more than one dimension of information, PRAEGENDE_JUGENDJAHRE and CAMEO_INTL_2015, are engineered so that each dimension becomes its own feature. The remaining mixed-type columns contain only one dimension and are therefore left in their original state.
In order to finish this step up, you need to make sure that your data frame now only has the columns that you want to keep. To summarize, the dataframe should consist of the following:
Make sure that for any new columns that you have engineered, you've excluded the original columns from the final dataset. Otherwise, their values will interfere with the analysis later on in the project. For example, you should not keep "PRAEGENDE_JUGENDJAHRE", since its values won't be useful for the algorithm: only the values derived from it in the engineered features you created should be retained. As a reminder, your data should only be from the subset with few or no missing values.
# Keep the one-hot-encoded outlier columns; drop whatever outlier columns remain
# (none here) along with the two engineered source columns.
outlier_columns = list(set(outlier_columns) - one_hot)
azdias.drop((['CAMEO_INTL_2015', 'PRAEGENDE_JUGENDJAHRE'] + outlier_columns), axis = 1, inplace = True)
Even though you've finished cleaning up the general population demographics data, it's important to look ahead to the future and realize that you'll need to perform the same cleaning steps on the customer demographics data. In this substep, complete the function below to execute the main feature selection, encoding, and re-engineering steps you performed above. Then, when it comes to looking at the customer data in Step 3, you can just run this function on that DataFrame to get the trimmed dataset in a single step.
def clean_data(df, outlier_columns):
    """
    Perform feature trimming, re-encoding, and engineering for demographics data.

    INPUT: df - demographics DataFrame; outlier_columns - columns flagged as
           missing-data outliers (feat_info is used as a global lookup table)
    OUTPUT: trimmed and cleaned demographics DataFrame
    """
    # convert missing value codes into NaNs
    feat_info['missing_or_unknown'].replace({'[-1,X]': '[-1,"X"]',
                                             '[-1,XX]': '[-1,"XX"]',
                                             '[XX]': '["XX"]'}, inplace = True)
    encodings = feat_info.set_index('attribute')['missing_or_unknown'].to_dict()
    encodings = {key: {k: np.nan for k in literal_eval(value)} for key, value in encodings.items()}
    for column, mapping in encodings.items():
        df[column] = df[column].replace(mapping)

    # categorical variables (mixed-type features other than the two engineered
    # below are kept in their original form, per the discussion above)
    categoricals = list(feat_info[feat_info['type'] == 'categorical']['attribute'].values)
    dropped = set(outlier_columns) & set(categoricals)
    one_hot = set(outlier_columns) - dropped
    cate_encode = list(set(categoricals) - dropped
                       - set(['GREEN_AVANTGARDE', 'PRAEGENDE_JUGENDJAHRE', 'CAMEO_INTL_2015']))
    df = pd.get_dummies(df, columns=cate_encode, dummy_na = True)

    # mixed variables - PRAEGENDE_JUGENDJAHRE -> [decade, movement]
    # (movement: 1 = mainstream, 0 = avantgarde)
    full_map = pd.DataFrame({-1: [np.nan, np.nan],
                             0: [np.nan, np.nan],
                             1: [40, 1],
                             2: [40, 0],
                             3: [50, 1],
                             4: [50, 0],
                             5: [60, 1],
                             6: [60, 0],
                             7: [60, 0],
                             8: [70, 1],
                             9: [70, 0],
                             10: [80, 1],
                             11: [80, 0],
                             12: [80, 1],
                             13: [80, 0],
                             14: [90, 1],
                             15: [90, 0],
                             np.nan: [np.nan, np.nan]}).T
    # Create column that encodes decades
    decades_map = full_map[0].to_dict()
    df['DECADES'] = df['PRAEGENDE_JUGENDJAHRE'].map(decades_map)
    # Create column that encodes movements
    movements_map = full_map[1].to_dict()
    df['MOVEMENTS'] = df['PRAEGENDE_JUGENDJAHRE'].map(movements_map)

    # mixed variables - CAMEO_INTL_2015: tens digit = wealth, units digit = life stage
    df['WEALTH'] = pd.to_numeric(df['CAMEO_INTL_2015']) // 10
    df['LIFE_STAGE'] = pd.to_numeric(df['CAMEO_INTL_2015']) % 10

    # drop the engineered source columns plus any outlier columns not kept
    outlier_columns = list(set(outlier_columns) - one_hot)
    df.drop(['CAMEO_INTL_2015', 'PRAEGENDE_JUGENDJAHRE'] + outlier_columns, axis = 1, inplace = True)

    # Return the cleaned dataframe.
    return df
# Load in the general demographics data.
azdias = pd.read_csv('Udacity_AZDIAS_Subset.csv', delimiter=';')
Run azdias through clean_data() and make sure the columns are the same
azdias = clean_data(azdias, outlier_columns)
feature_names = list(azdias.columns)
Before we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. Starting from this part of the project, you'll want to keep an eye on the API reference page for sklearn to help you navigate to all of the classes and functions that you'll need. In this substep, you'll need to check the following: sklearn requires that data not have missing values in order for its estimators to work properly, so impute (or remove) the remaining NaNs before scaling; for the scaling itself, a StandardScaler instance is recommended. You can use the .fit_transform() method to both fit a procedure to the data as well as apply the transformation to the data at the same time. Don't forget to keep the fit sklearn objects handy, since you'll be applying them to the customer demographics data towards the end of the project.

Missing data impute
# SimpleImputer replaces the removed sklearn Imputer; the default strategy is mean imputation.
imputer = SimpleImputer()
imputer_model = imputer.fit(azdias)
azdias_impute = pd.DataFrame(imputer_model.transform(azdias), columns = feature_names)
Apply a standard scaler
scalar = StandardScaler()
scalar_model = scalar.fit(azdias_impute)
azdias_scaled = pd.DataFrame(scalar_model.transform(azdias_impute), columns = feature_names)
The missing data are imputed first, and the data is scaled afterwards. Since the rows with a large number of missing values were dropped earlier, imputing the remainder will not have a large impact on the overall distribution.
On your scaled data, you are now ready to apply dimensionality reduction techniques. Check the ratio of variance explained by each principal component as well as the cumulative variance explained; try plotting these values using matplotlib's plot() function. Based on what you find, select a value for the number of transformed features you'll retain for the clustering part of the project.

Apply PCA to the data.
pca = PCA()
pca_model= pca.fit(azdias_scaled)
azdias_pca = pca_model.transform(azdias_scaled)
Plot % variance explained
def variance_plot(pca):
    '''
    Creates a variance plot associated with the principal components
    INPUT: pca - a fitted scikit-learn PCA object
    OUTPUT: number of components suggested
    '''
    num_components = len(pca.explained_variance_ratio_)
    ind = np.arange(num_components)
    vals = pca.explained_variance_ratio_
    plt.figure(figsize=(15, 8))
    ax = plt.subplot(111)
    cumvals = np.cumsum(vals)
    ax.bar(ind[:150], vals[:150])
    ax.plot(ind[:150], cumvals[:150])
    try:
        for i in np.arange(0, 150, 10):
            ax.annotate(r"%s%%" % ((str(cumvals[i]*100)[:4])), (ind[i], cumvals[i] + 0.02),
                        va="bottom", ha="center", fontsize=12)
        ax.xaxis.set_tick_params(width=0)
        ax.yaxis.set_tick_params(width=2, length=12)
        ax.set_xlim([-1, 150])
        ax.set_xlabel("Principal Component")
        ax.set_ylabel("Variance Explained (%)")
        plt.title('Explained Variance Per Principal Component')
    except IndexError:
        pass  # fewer than 150 components: skip the remaining annotations
    # argmax returns the 0-based index of the first component at which the
    # cumulative variance exceeds 85%, so add 1 to get a component count.
    fit_components = int(np.argmax(cumvals > 0.85)) + 1
    return fit_components
fit_components = variance_plot(pca)
Find the number of components that will explain at least 85% of the total variance, then reapply PCA
pca = PCA(fit_components)
pca_model= pca.fit(azdias_scaled)
azdias_pca = pca_model.transform(azdias_scaled)
According to the scree plot above, I selected the number of components that represents at least 85% of the total variance (the threshold used in variance_plot), which gives roughly 120 components.
Now that we have our transformed principal components, it's a nice idea to check out the weight of each variable on the first few components to see if they can be interpreted in some fashion.
As a reminder, each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one can be expected to be associated with increases in the other. In contrast, features with different signs can be expected to show a negative correlation: increases in one variable should be associated with a decrease in the other.
def getComponentLinkage(pca, component_number, feature_names):
    """Returns the sorted relevancy of each feature given the component
    INPUT: pca object, the component number (1-based), feature names
    """
    component_linkage = pd.DataFrame(pca.components_, columns = feature_names)
    # component_number is 1-based, while .iloc is 0-based
    relevancy = pd.DataFrame(component_linkage.iloc[component_number - 1, :]
                             .transpose().sort_values(ascending = False))
    relevancy.columns = ['weights']
    return relevancy.head(5), relevancy.tail(5).sort_values('weights')
def visualizeRelevancy(head_df, tail_df, PC_number):
"""Plot the top and bottom weights of PCs
INPUT: head, tail dfs
"""
# create subplots
ind = np.arange(len(head_df))
fig, (ax1, ax2) = plt.subplots(1, 2, sharey = True, figsize = (16, 5))
plt.setp(ax1.xaxis.get_majorticklabels(), rotation=45)
plt.setp(ax2.xaxis.get_majorticklabels(), rotation=45)
ax1.set_title('Highest Positive Relevancy')
ax2.set_title('Highest Negative Relevancy')
sns.barplot(x = head_df.index, y = head_df['weights'], ax = ax1)
sns.barplot(x = tail_df.index, y = tail_df['weights'], ax = ax2, data = tail_df)
    for i in range(len(head_df)):
        ax1.annotate(r"%s%%" % (str(head_df['weights'].iloc[i]*100)[:4]), (ind[i], -0.06),
                     va="bottom", ha="center", fontsize=12)
        ax2.annotate(r"%s%%" % (str(tail_df['weights'].iloc[i]*100)[:5]), (ind[i], 0.02),
                     va="bottom", ha="center", fontsize=12)
plt.ylabel('Weights')
plt.suptitle('PC' + str(PC_number))
First component relevancy
c1_head, c1_tail = getComponentLinkage(pca_model, 1, feature_names)
visualizeRelevancy(c1_head, c1_tail, 1)
Second component relevancy
c2_head, c2_tail = getComponentLinkage(pca, 2, feature_names)
visualizeRelevancy(c2_head, c2_tail, 2)
Third component relevancy
c3_head, c3_tail = getComponentLinkage(pca, 3, feature_names)
visualizeRelevancy(c3_head, c3_tail, 3)
First Component
The first component is a mix of a person's personality typology, financial typology, and age factors. Most of the weights are relatively uniform. Estimated age has weights opposite to a person's birth decade, which aligns with intuition.

Top positively correlated features with PC1:

Top negatively correlated features with PC1:
Second Component
The second component is mostly related to gender and personality traits; the male and female codes carry the largest positive and negative weights.

Top positively correlated features with PC2:

Top negatively correlated features with PC2:

Opposite personality traits are captured in PC2: more peaceful personalities have large positive weights and more aggressive personalities have large negative weights. Personality has a significant effect on behavior, and the weights in the second component validate this point.
Third Component
The third component describes a person's life stage, family traits, and income level.

Top positively correlated features with PC3:

Top negatively correlated features with PC3:

Different family types, life stages, and migration backgrounds also affect people's behaviors.
You've assessed and cleaned the demographics data, then scaled and transformed them. Now, it's time to see how the data clusters in the principal components space. In this substep, you will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.
The KMeans object's .score() method might be useful here, but note that in sklearn, scores tend to be defined so that larger is better. Try applying it to a small, toy dataset (see the sketch below), or use an internet search to help your understanding.
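A small toy check of what .score() returns (an illustrative sketch: the score is the negative sum of squared distances from each point to its assigned centroid):

# Toy illustration: .score() returns the negative within-cluster sum of squares.
toy = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
toy_km = KMeans(n_clusters=2, random_state=0).fit(toy)
print(toy_km.score(toy))  # -1.0 == -(4 points * 0.5**2 each)

cluster_nums = [1, 4, 8] + list(np.arange(10, 36, 5))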
avg_d = []
for i in cluster_nums:
# run k-means clustering on the data
kmeans = KMeans(i)
kmeans.fit(azdias_pca)
    # .score() returns the negative sum of squared distances to the assigned
    # centroids, so negate it to get a 'smaller is better' quantity
    avg_d.append(-kmeans.score(azdias_pca))
Plot the scores and find the right number of clusters using the elbow method
plt.figure(figsize=(10, 7))
plt.plot(cluster_nums, avg_d, linestyle='--',
marker='o')
plt.xlabel('Number of Clusters')
plt.ylabel('Average Distance')
plt.title('Average Distance Between Points and Cluster Center VS. Number of Clusters')
plt.show()
From the plot above, though there is no obvious elbow, the number of clusters could be chosen as 25. The selection below also shows that 25 is the number of clusters that should be used.
# pick the cluster count just before the flattest segment of the score curve
d_change = np.diff(avg_d)
n_clusters = cluster_nums[np.argmin(abs(d_change)) - 1]
Refit k-means with the chosen number of clusters
kmeans = KMeans(n_clusters)
kmeans_best = kmeans.fit(azdias_pca)
azdias_impute['clusters'] = kmeans_best.predict(azdias_pca)
Using the elbow method, the average distance does not decrease much after the number of clusters reaches 25. The appropriate number of clusters is therefore chosen to be 25.
Now that you have clusters and cluster centers for the general population, it's time to see how the customer data maps on to those clusters. Take care to not confuse this for re-fitting all of the models to the customer data. Instead, you're going to use the fits from the general population to clean, transform, and cluster the customer data. In the last step of the project, you will interpret how the general population fits apply to the customer data.
- Don't forget that the customer data file is semicolon (;) delimited.
- Clean the data with the clean_data() function you created earlier. (You can assume that the customer demographics data has similar meaning behind missing data patterns as the general demographics data.)
- Use the fitted sklearn objects from the general demographics data; you should not be using a .fit() or .fit_transform() method to re-fit the old objects, nor should you be creating new sklearn objects! Carry the data through the feature scaling, PCA, and clustering steps, obtaining cluster assignments for all of the data in the customer demographics data.

# Load in the customer demographics data.
customers = pd.read_csv('Udacity_CUSTOMERS_Subset.csv', delimiter=';')
Apply preprocessing, feature transformation, and clustering from the general demographics onto the customer data, obtaining cluster predictions for the customer demographics data.
customers = clean_data(customers, outlier_columns)
Transform the customer data using the same fitted sklearn objects that were fit on the general demographics data
def transform_data(df, feature_names, imputer, scalar, pca):
    """Impute, scale, and PCA-transform the input data using already-fit objects"""
    if set(feature_names) != set(df.columns):
        # add any dummy columns present in the general data but absent here,
        # fill them with 0, and put all columns in the training order
        # (reindex also drops any extra columns not seen in the general data)
        missing_cols = list(set(feature_names) - set(df.columns))
        df = pd.concat([df, pd.DataFrame(columns = missing_cols)])
        df = df.reindex(columns = feature_names)
        df[missing_cols] = 0
    df_imputed = imputer.transform(df)
    df_scaled = scalar.transform(df_imputed)
    df_pca = pca.transform(df_scaled)
    return df_pca, pd.DataFrame(df_imputed, columns=feature_names)
customers_pca, customers_impute = transform_data(customers, feature_names, imputer_model, scalar_model, pca_model)
Predict cluster assignments for the customer data with the fitted k-means object
customers_impute['clusters'] = kmeans_best.predict(customers_pca)
At this point, you have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, you will compare the two cluster distributions to see where the strongest customer base for the company is.
Consider the proportion of persons in each cluster for the general population, and the proportions for the customers. If we think the company's customer base to be universal, then the cluster assignment proportions should be fairly similar between the two. If there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid) then that suggests the people in that cluster to be a target audience for the company. On the other hand, the proportion of the data in a cluster being larger in the general population than the customer data (e.g. only 2% of customers closest to a population centroid that captures 6% of the data) suggests that group of persons to be outside of the target demographics.
Take a look at the following points in this step:
- Compute the proportion of data points in each cluster for both datasets; seaborn's countplot() or barplot() function could be handy.
- To interpret a cluster, you can use the .inverse_transform() method of the PCA and StandardScaler objects to transform centroids back to the original data space and interpret the retrieved values directly (see the sketch after this list).
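A minimal sketch of the centroid inverse-transform using the fitted objects from above (cluster 9 is chosen only because it is discussed below):

# Map a k-means centroid from PCA space back to the original feature units.
centroid_pca = kmeans_best.cluster_centers_[9].reshape(1, -1)
centroid_scaled = pca_model.inverse_transform(centroid_pca)
centroid_original = scalar_model.inverse_transform(centroid_scaled)
print(pd.Series(centroid_original[0], index=feature_names).sort_values(ascending=False).head())

Group count on all the data in different clusters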
# general demographics
azdias_clusters_cnt = pd.DataFrame(azdias_impute.groupby(['clusters'], as_index= False).count().iloc[:, :2])
azdias_clusters_cnt.iloc[:, 1] = azdias_clusters_cnt.iloc[:, 1]/sum(azdias_clusters_cnt.iloc[:, 1]) *100
azdias_clusters_cnt['dataset'] = 'General'
# customers demographics
customers_clusters_cnt = pd.DataFrame(customers_impute.groupby(['clusters'], as_index= False).count().iloc[:, :2])
customers_clusters_cnt.iloc[:, 1] = customers_clusters_cnt.iloc[:, 1]/sum(customers_clusters_cnt.iloc[:, 1]) *100
customers_clusters_cnt['dataset'] = 'Customers'
Concatenate clusters together
clusters_cnt = pd.concat([azdias_clusters_cnt, customers_clusters_cnt])
clusters_cnt.columns = ['clusters','percentage %','dataset']
Visualize the % of datapoints in each cluster
plt.figure(figsize=(10, 7))
sns.barplot(data=clusters_cnt, x = 'clusters', y ='percentage %', hue = 'dataset')
plt.title('Clusters Visualizations for General and Customer Demographics')
plt.show()
Select over- and under-represented clusters
# pivot cluster tables
clusters_pivot = clusters_cnt.pivot(index = 'clusters', columns= 'dataset', values= 'percentage %')
clusters_pivot['difference %'] = clusters_pivot['Customers'] - clusters_pivot['General']
def getCentroids(centroids_nums, feature_names, customers):
    """Return the per-feature means (in original units) of the given clusters"""
    centroids = customers[customers['clusters'].isin(centroids_nums)]
    return pd.DataFrame(centroids.iloc[:, :-1].mean()).T  # exclude the 'clusters' column

def getMeanRest(centroids_nums, customers):
    """Return the per-feature means of all the remaining clusters"""
    rest_customers = customers[~customers['clusters'].isin(centroids_nums)]
    return pd.DataFrame(rest_customers.iloc[:, :-1].mean()).T
Overrepresented population
Select the top 2 overrepresented clusters, 9 and 24
overrepresented = list(clusters_pivot['difference %'].sort_values(ascending = False).head(2).index)
clusters_diff = pd.concat([getCentroids([overrepresented[0]], feature_names,customers_impute ),
getCentroids([overrepresented[1]], feature_names, customers_impute),
getMeanRest(overrepresented, customers_impute)])
clusters_diff.set_index(pd.Series(overrepresented + ['rest']), inplace = True)
Find representative features
average_diff = (abs(clusters_diff.loc[overrepresented[0]] - clusters_diff.loc['rest'])
+ abs(clusters_diff.loc[overrepresented[1]] - clusters_diff.loc['rest']))/2
top5_diff = list(average_diff.sort_values(ascending = False).head(5).index)
Visualize the top 5 most different features
clusters_diff[top5_diff]
| | KBA13_ANZAHL_PKW | DECADES | GEBURTSJAHR | LP_LEBENSPHASE_FEIN | ANZ_HAUSHALTE_AKTIV |
|---|---|---|---|---|---|
| 9 | 619.698207 | 73.328046 | 1967.102689 | 16.332161 | 8.354924 |
| 24 | 694.675669 | 55.834402 | 1953.619264 | 28.864037 | 1.553170 |
| rest | 656.748831 | 60.466167 | 1956.264147 | 22.644504 | 6.186470 |
clusters_diff[top5_diff].apply(lambda x: x/ max(x)).T.plot(kind = 'bar',
figsize = (10, 6),
title = 'Top 5 different features (Scaled), Overrepresented vs. Rest')
plt.xticks(rotation = 30)
plt.show()
Cluster 9, the most overrepresented cluster, shows significant differences from the rest of the population in these 5 features. In some feature dimensions, the second most overrepresented cluster, 24, also differs significantly from the rest of the population, in a different way.

In particular, cluster 9 has the following characteristics compared with the rest:

Cluster 24 has the following characteristics compared with the rest:

Arvato's primary customers appear to be individuals who are middle-aged to elderly, have a family, and are average to top earners.
Underrepresented Clusters
Select the top 2 underrepresented clusters, 7 and 2
underrepresented = list(clusters_pivot['difference %'].sort_values(ascending = False).dropna().tail(2).index)
clusters_diff = pd.concat([getCentroids([underrepresented[0]], feature_names ,customers_impute),
getCentroids([underrepresented[1]], feature_names,customers_impute),
getMeanRest(underrepresented, customers_impute)])
clusters_diff.set_index(pd.Series(underrepresented + ['rest']), inplace = True)
average_diff = (abs(clusters_diff.loc[underrepresented[0]] - clusters_diff.loc['rest'])
+ abs(clusters_diff.loc[underrepresented[1]] - clusters_diff.loc['rest']))/2
top5_diff = list(average_diff.sort_values(ascending=False).head(5).index)
clusters_diff[top5_diff]
| | KBA13_ANZAHL_PKW | DECADES | GEBURTSJAHR | LP_LEBENSPHASE_FEIN | ANZ_HAUSHALTE_AKTIV |
|---|---|---|---|---|---|
| 7 | 721.796890 | 83.814499 | 1974.850683 | 13.001715 | 4.692734 |
| 2 | 487.111996 | 84.919094 | 1975.162247 | 12.537518 | 13.771688 |
| rest | 655.004609 | 62.308560 | 1958.020757 | 22.319477 | 5.923502 |
clusters_diff[top5_diff].apply(lambda x: x/ max(x)).T.plot(kind = 'bar',
figsize = (10, 6),
title = 'Top 5 different features (Scaled), Underrepresented vs. Rest')
plt.xticks(rotation = 30)
plt.show()
Clusters 7 and 2 are significantly underrepresented. They also differ significantly from the rest of the population.

In particular, cluster 7 has the following characteristics:

Cluster 2 has the following characteristics:

Two types of individuals are not significantly represented in Arvato's customer base. Those who do not have a family and sit at the more extreme ends of earning power (either very rich or fairly poor) are not very interested in Arvato's products. These people are on the younger end of the middle-aged group.
Congratulations on making it this far in the project! Before you finish, make sure to check through the entire notebook from top to bottom to make sure that your analysis follows a logical flow and all of your findings are documented in Discussion cells. Once you've checked over all of your work, you should export the notebook as an HTML document to submit for evaluation. You can do this from the menu, navigating to File -> Download as -> HTML (.html). You will submit both that document and this notebook for your project submission.