Congratulations! You've just landed some contract work with an e-commerce company based in New York City. They sell clothing online, but they also offer in-store style and clothing advice sessions: customers come into the store, meet with a personal stylist, and then go home and order the clothes they want through either the mobile app or the website.
The company is trying to decide whether to focus its efforts on the mobile app experience or the website, and they've hired you on contract to help them figure it out. Let's get started!
Just follow the steps below to analyze the customer data (it's fake; don't worry, these aren't real credit card numbers or emails).
** Import pandas, numpy, matplotlib, and seaborn. Then set %matplotlib inline. (You'll import sklearn as you need it.)**
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
We'll work with the Ecommerce Customers csv file from the company. It has customer info such as Email, Address, and their color Avatar. It also has these numerical columns: Avg. Session Length, Time on App, Time on Website, Length of Membership, and Yearly Amount Spent.
** Read in the Ecommerce Customers csv file as a DataFrame called customers.**
cust = pd.read_csv('Ecommerce Customers')
Check the head of customers, and check out its info() and describe() methods.
cust.head()
| | Email | Address | Avatar | Avg. Session Length | Time on App | Time on Website | Length of Membership | Yearly Amount Spent |
|---|---|---|---|---|---|---|---|---|
| 0 | mstephenson@fernandez.com | 835 Frank Tunnel\nWrightmouth, MI 82180-9605 | Violet | 34.497268 | 12.655651 | 39.577668 | 4.082621 | 587.951054 |
| 1 | hduke@hotmail.com | 4547 Archer Common\nDiazchester, CA 06566-8576 | DarkGreen | 31.926272 | 11.109461 | 37.268959 | 2.664034 | 392.204933 |
| 2 | pallen@yahoo.com | 24645 Valerie Unions Suite 582\nCobbborough, D... | Bisque | 33.000915 | 11.330278 | 37.110597 | 4.104543 | 487.547505 |
| 3 | riverarebecca@gmail.com | 1414 David Throughway\nPort Jason, OH 22070-1220 | SaddleBrown | 34.305557 | 13.717514 | 36.721283 | 3.120179 | 581.852344 |
| 4 | mstephens@davidson-herman.com | 14023 Rodriguez Passage\nPort Jacobville, PR 3... | MediumAquaMarine | 33.330673 | 12.795189 | 37.536653 | 4.446308 | 599.406092 |
cust.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 8 columns):
Email                   500 non-null object
Address                 500 non-null object
Avatar                  500 non-null object
Avg. Session Length     500 non-null float64
Time on App             500 non-null float64
Time on Website         500 non-null float64
Length of Membership    500 non-null float64
Yearly Amount Spent     500 non-null float64
dtypes: float64(5), object(3)
memory usage: 31.3+ KB
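The prompt also asks to check describe(), which the notebook skips. A minimal sketch of what describe() reports, using a small hypothetical frame with the same numeric columns (the row values here are made up):

```python
import pandas as pd

# Hypothetical stand-in for the customers DataFrame (same numeric columns, made-up rows).
demo = pd.DataFrame({
    'Avg. Session Length': [34.5, 31.9, 33.0],
    'Time on App': [12.7, 11.1, 11.3],
    'Time on Website': [39.6, 37.3, 37.1],
    'Length of Membership': [4.1, 2.7, 4.1],
    'Yearly Amount Spent': [588.0, 392.2, 487.5],
})

# describe() summarizes each numeric column: count, mean, std, min, quartiles, max.
summary = demo.describe()
print(summary.loc['mean', 'Time on App'])
```

On the real data, `cust.describe()` gives the same count/mean/std/quartile summary for the five float columns.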
Let's explore the data!
For the rest of the exercise we'll only be using the numerical data from the csv file.
** Use seaborn to create a jointplot to compare the Time on Website and Yearly Amount Spent columns. Does the correlation make sense?**
sns.jointplot(x='Time on Website', y='Yearly Amount Spent', data=cust)
** Do the same but with the Time on App column instead. **
sns.jointplot(x='Time on App', y='Yearly Amount Spent', data=cust)
** Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.**
sns.jointplot(x='Time on App', y='Length of Membership', data=cust, kind='hex')
Let's explore these types of relationships across the entire data set. Use pairplot to recreate the plot below. (Don't worry about the colors.)
sns.pairplot(cust)
Based off this plot what looks to be the most correlated feature with Yearly Amount Spent?
sns.heatmap(cust.corr(numeric_only=True))
# Time on App and Length of Membership look the most correlated with our target variable, Yearly Amount Spent
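Rather than eyeballing the heatmap, you can rank the features by their correlation with the target directly. A minimal sketch on synthetic data with the same column names (the generating coefficients below are made up, chosen so that membership length dominates, then app time, with the website contributing almost nothing):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in data: spending driven mostly by membership length,
# then app time, with the website contributing almost nothing (toy assumption).
rng = np.random.default_rng(0)
n = 200
membership = rng.normal(3.5, 1.0, n)
app = rng.normal(12.0, 1.0, n)
web = rng.normal(37.0, 1.0, n)
spent = 60 * membership + 35 * app + 0.5 * web + rng.normal(0, 10, n)

demo = pd.DataFrame({
    'Time on App': app,
    'Time on Website': web,
    'Length of Membership': membership,
    'Yearly Amount Spent': spent,
})

# Rank features by absolute correlation with the target.
corr = demo.corr()['Yearly Amount Spent'].drop('Yearly Amount Spent')
print(corr.abs().sort_values(ascending=False))
```

On the real frame the same one-liner would be `cust.corr(numeric_only=True)['Yearly Amount Spent'].abs().sort_values(ascending=False)`.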
Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership.
sns.lmplot(x='Length of Membership', y='Yearly Amount Spent', data=cust)
Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets. ** Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column. **
# Let's get to the more fun part :)
cust.columns
X = cust[['Avg. Session Length', 'Time on App',
          'Time on Website', 'Length of Membership']]
y = cust[['Yearly Amount Spent']]
** Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.4 and random_state=101**
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)
Now it's time to train our model on our training data!
** Import LinearRegression from sklearn.linear_model **
from sklearn.linear_model import LinearRegression
Create an instance of a LinearRegression() model named lm.
lm=LinearRegression()
** Train/fit lm on the training data.**
lm.fit(X_train,y_train)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
Print out the coefficients of the model
lm.intercept_
array([-1045.11521682])
lm.coef_
array([[ 25.69154034, 37.89259966, 0.56058149, 61.64859402]])
Now that we have fit our model, let's evaluate its performance by predicting off the test values!
** Use lm.predict() to predict off the X_test set of the data.**
y_pred = lm.predict(X_test)
print(len(y_pred))
200
** Create a scatterplot of the real test values versus the predicted values. **
plt.scatter(y_test, y_pred)
Let's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2).
** Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas**
from sklearn import metrics
mae=metrics.mean_absolute_error(y_test,y_pred)
mse=metrics.mean_squared_error(y_test,y_pred)
rmse=np.sqrt(mse)
print("Mean absolute error:"+str(mae))
print("Mean squared error:"+str(mse))
print("Root mean squared error:"+str(rmse))
Mean absolute error:7.74267128584
Mean squared error:93.8329780082
Root mean squared error:9.6867423837
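The text above also mentions the explained variance score (R^2), which the notebook never computes. A minimal sketch with sklearn.metrics on toy values (the numbers here are made up, standing in for y_test / y_pred):

```python
import numpy as np
from sklearn import metrics

# Toy true/predicted values standing in for y_test / y_pred.
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_hat = np.array([2.5, 0.0, 2.0, 8.0])

r2 = metrics.r2_score(y_true, y_hat)                   # 1 - SS_res / SS_tot
evs = metrics.explained_variance_score(y_true, y_hat)  # like R^2 but ignores a constant bias in the errors
print(round(r2, 3), round(evs, 3))
```

On the real split, `metrics.r2_score(y_test, y_pred)` gives the test-set R^2; when the residuals have (near-)zero mean the two scores coincide.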
You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data.
Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn's histplot (distplot is deprecated in newer seaborn), or just plt.hist().
sns.histplot((y_test - y_pred), kde=True)
We still want to answer the original question: do we focus our efforts on mobile app or website development? Or maybe that doesn't even really matter, and Length of Membership is what is really important. Let's see if we can interpret the coefficients to get an idea.
lm.coef_
array([[ 25.69154034, 37.89259966, 0.56058149, 61.64859402]])
X.columns
Index(['Avg. Session Length', 'Time on App', 'Time on Website', 'Length of Membership'], dtype='object')
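The two outputs above are easier to read paired up. A small sketch that builds a coefficient table (the values are copied from the fitted model's output above):

```python
import numpy as np
import pandas as pd

# Coefficients and feature names copied from the fitted model output above.
coef = np.array([25.69154034, 37.89259966, 0.56058149, 61.64859402])
cols = ['Avg. Session Length', 'Time on App',
        'Time on Website', 'Length of Membership']

# Pair each coefficient with its feature so the table reads at a glance.
coef_df = pd.DataFrame({'Coefficient': coef}, index=cols)
print(coef_df.sort_values('Coefficient', ascending=False))
```

Each coefficient is the change in Yearly Amount Spent for a one-unit increase in that feature, holding the others fixed; note how small the Time on Website coefficient is relative to the rest.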
Looks like Length of Membership, Time on App, and Avg. Session Length should be carefully investigated for growth opportunities, while Time on Website contributes almost nothing per unit.