import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
from statsmodels.stats.outliers_influence import variance_inflation_factor
from patsy import dmatrices
%matplotlib inline
# We are setting the seed to ensure you get the same answers on quizzes as we set up
random.seed(42)
1.
Now, read in the ab_data.csv data. Store it in df.
a. Read in the dataset and take a look at the top few rows
df = pd.read_csv('ab_data.csv')
df.head()
|   | user_id | timestamp | group | landing_page | converted |
---|---|---|---|---|---|
0 | 851104 | 2017-01-21 22:11:48.556739 | control | old_page | 0 |
1 | 804228 | 2017-01-12 08:01:45.159739 | control | old_page | 0 |
2 | 661590 | 2017-01-11 16:55:06.154213 | treatment | new_page | 0 |
3 | 853541 | 2017-01-08 18:28:03.143765 | treatment | new_page | 0 |
4 | 864975 | 2017-01-21 01:52:26.210827 | control | old_page | 1 |
b. Use the cell below to find the number of rows in the dataset.
df.shape[0]
294478
c. The number of unique users in the dataset.
df.user_id.nunique()
290584
d. The proportion of users converted.
df.converted.mean()
0.11965919355605512
e. The number of times the new_page and treatment don't match.
df_treatment_old = df.query('group == "treatment" and landing_page == "old_page"')
df_control_new = df.query('group == "control" and landing_page == "new_page"')
pd.concat([df_treatment_old, df_control_new]).user_id.count()
3893
f. Do any of the rows have missing values?
df.isna().sum()
user_id         0
timestamp       0
group           0
landing_page    0
converted       0
dtype: int64
2.
For the rows where treatment does not match with new_page or control does not match with old_page, we cannot be sure whether these rows truly received the new or old page, so these rows should be removed.
a. Store your new dataframe in df2.
df2 = df.drop(df_treatment_old.index)
df2 = df2.drop(df_control_new.index)
# Double Check all of the correct rows were removed - this should be 0
df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0]
0
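As an aside, the same cleaned dataframe can be built with a single boolean filter; a minimal sketch (df2_alt is an illustrative name and should match df2 row for row):
# Keep only the rows where the group label agrees with the landing page
df2_alt = df[(df['group'] == 'treatment') == (df['landing_page'] == 'new_page')].copy()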
3.
Use df2 and the cells below to answer the following questions.
a. How many unique user_ids are in df2?
df2.user_id.nunique()
290584
b. There is one user_id repeated in df2. What is it?
df2[df2.user_id.duplicated() == True]
|   | user_id | timestamp | group | landing_page | converted |
---|---|---|---|---|---|
2893 | 773192 | 2017-01-14 02:55:59.590927 | treatment | new_page | 0 |
c. What is the row information for the repeat user_id?
df2[df2.user_id.duplicated() == True]
|   | user_id | timestamp | group | landing_page | converted |
---|---|---|---|---|---|
2893 | 773192 | 2017-01-14 02:55:59.590927 | treatment | new_page | 0 |
d. Remove one of the rows with a duplicate user_id, but keep your dataframe as df2.
df2.drop(df2[df2.user_id.duplicated() == True].index, inplace=True)
4.
Use df2 in the cells below to answer the quiz questions
a. What is the probability of an individual converting regardless of the page they receive?
df2.converted.mean()
0.11959708724499628
b. Given that an individual was in the control
group, what is the probability they converted?
df2.query('group == "control"').converted.mean()
0.1203863045004612
c. Given that an individual was in the treatment
group, what is the probability they converted?
df2.query('group == "treatment"').converted.mean()
0.11880806551510564
d. What is the probability that an individual received the new page?
df2.query('landing_page == "new_page"').user_id.count() / df2.shape[0]
0.50006194422266881
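As a cross-check on parts (b) through (d), the same probabilities can be read off with a groupby and value_counts; a minimal sketch using the df2 built above:
# Conversion rate within each experimental group
df2.groupby('group')['converted'].mean()

# Share of users who received each landing page
df2['landing_page'].value_counts(normalize=True)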
e. Consider your results from parts (a) through (d) above, and explain below whether you think there is sufficient evidence to conclude that the new treatment page leads to more conversions.
No. The treatment group's conversion probability (about 0.1188) is lower than the control group's (about 0.1204), so there is not sufficient evidence to conclude that the new treatment page leads to more conversions.
Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed.
However, then the hard question is do you stop as soon as one page is considered significantly better than another or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another?
These questions are the difficult parts associated with A/B tests in general.
1.
For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypotheses in terms of words or in terms of p_old and p_new, which are the converted rates for the old and new pages.
H0: p_new - p_old ≤ 0
H1: p_new - p_old > 0
2.
Assume under the null hypothesis, p_new and p_old both have "true" success rates equal to the converted success rate regardless of page - that is, p_new and p_old are equal. Furthermore, assume they are equal to the converted rate in ab_data.csv regardless of the page.
Use a sample size for each page equal to the ones in ab_data.csv.
Build the sampling distribution for the difference in converted rates between the two pages over 10,000 iterations, calculating an estimate from the null in each iteration.
Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, that's okay; we will work through the problems below to complete this part.
a. What is the conversion rate for p_new under the null?
df2.converted.mean()
0.11959708724499628
b. What is the conversion rate for p_old under the null?
df2.converted.mean()
0.11959708724499628
c. What is n_new, the number of individuals in the treatment group?
df2.query('group == "treatment"').user_id.count()
145310
d. What is n_old, the number of individuals in the control group?
df2.query('group == "control"').user_id.count()
145274
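For the simulation sketches below, it helps to bind these quantities to names; a minimal sketch (p_null is an illustrative name, while n_new and n_old mirror the names used later in part l.):
# Pooled conversion rate assumed for both pages under the null, plus the two sample sizes
p_null = df2['converted'].mean()
n_new = df2.query('group == "treatment"').shape[0]
n_old = df2.query('group == "control"').shape[0]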
e. Simulate n_new transactions with a conversion rate of p_new under the null. Store these n_new 1's and 0's in new_page_converted.
# Note: this reuses the observed conversions for the new-page rows rather than drawing
# fresh simulated values; see the random-draw sketch after part (g) below.
df2['new_page_converted'] = df2.query('landing_page == "new_page"').converted
df2.head()
|   | user_id | timestamp | group | landing_page | converted | new_page_converted |
---|---|---|---|---|---|---|
0 | 851104 | 2017-01-21 22:11:48.556739 | control | old_page | 0 | NaN |
1 | 804228 | 2017-01-12 08:01:45.159739 | control | old_page | 0 | NaN |
2 | 661590 | 2017-01-11 16:55:06.154213 | treatment | new_page | 0 | 0.0 |
3 | 853541 | 2017-01-08 18:28:03.143765 | treatment | new_page | 0 | 0.0 |
4 | 864975 | 2017-01-21 01:52:26.210827 | control | old_page | 1 | NaN |
f. Simulate n_old transactions with a conversion rate of p_old under the null. Store these n_old 1's and 0's in old_page_converted.
# Note: as in part (e), this reuses the observed conversions for the old-page rows.
df2['old_page_converted'] = df2.query('landing_page == "old_page"').converted
g. Find p_new - p_old for your simulated values from part (e) and (f).
# Because parts (e) and (f) reused the observed outcomes, this equals the observed
# difference in conversion rates between the new and old pages.
observed_diff = df2.new_page_converted.mean() - df2.old_page_converted.mean()
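The prompt can also be read as asking for a genuinely random draw under the null; a minimal sketch of one such draw, assuming the p_null, n_new, and n_old names sketched after part (d) above (new_page_sim and old_page_sim are illustrative names):
# One simulated experiment under the null: both pages convert at the pooled rate p_null
new_page_sim = np.random.binomial(1, p_null, n_new)
old_page_sim = np.random.binomial(1, p_null, n_old)
new_page_sim.mean() - old_page_sim.mean()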
h. Create 10,000 p_new - p_old values using the same simulation process you used in parts (a) through (g) above. Store all 10,000 values in a NumPy array called p_diffs.
# Bootstrap the cleaned data 10,000 times and record the difference in conversion
# rates between the new and old pages for each resample.
diffs_list = []
sample_size = df2.shape[0]
for _ in range(10000):
    sample_df = df2.sample(sample_size, replace=True)
    pro_new = sample_df.query('landing_page == "new_page"').converted.mean()
    pro_old = sample_df.query('landing_page == "old_page"').converted.mean()
    diff = pro_new - pro_old
    diffs_list.append(diff)
p_diffs = np.array(diffs_list)
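An alternative to the bootstrap above is to simulate the 10,000 differences directly under the null; a vectorized sketch using the same assumed names (sim_new_rates, sim_old_rates, and p_diffs_null are illustrative):
# 10,000 simulated differences in conversion rate with both pages at the pooled rate
sim_new_rates = np.random.binomial(n_new, p_null, 10000) / n_new
sim_old_rates = np.random.binomial(n_old, p_null, 10000) / n_old
p_diffs_null = sim_new_rates - sim_old_rates
# A p-value can then be read off directly as (p_diffs_null > observed_diff).mean()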
i. Plot a histogram of the p_diffs. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.
plt.hist(p_diffs, facecolor='g', alpha=0.75);
The distribution of p_diffs looks approximately normal, which is what the central limit theorem leads us to expect.
j. What proportion of the p_diffs are greater than the actual difference observed in ab_data.csv?
(p_diffs > observed_diff).mean()
0.50519999999999998
# Under the null, the sampling distribution of the difference is centered at 0;
# give it the same spread as the bootstrapped differences and compute the p-value
# as the share of null values greater than the observed difference.
null_mean = 0
null_vals = np.random.normal(null_mean, p_diffs.std(), sample_size)
plt.hist(null_vals);
plt.axvline(x=observed_diff, color='red');
p_val = (null_vals > observed_diff).mean()
p_val
0.90307449825179642
k. Please explain using the vocabulary you've learned in this course what you just computed in part j. What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages?
This value is the p-value: the probability of observing a difference at least as large as the observed one, assuming the null hypothesis is true. Because it is greater than the Type I error rate of 0.05, we fail to reject the null hypothesis; there is not enough evidence that the new page converts better than the old page.
l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let n_old and n_new refer to the number of rows associated with the old and new pages, respectively.
import statsmodels.api as sm
convert_old = df2.query('landing_page == "old_page"').converted.sum()
convert_new = df2.query('landing_page == "new_page"').converted.sum()
n_old = df2.query('landing_page == "old_page"').user_id.count()
n_new = df2.query('landing_page == "new_page"').user_id.count()
# Sanity check: total conversions and total users for the old page
df2.query('landing_page == "old_page"').converted.sum(), df2.query('landing_page == "old_page"').user_id.count()
(17489, 145274)
m. Now use stats.proportions_ztest to compute your test statistic and p-value. Here is a helpful link on using the built-in.
count = np.array([convert_new, convert_old])
nobs = np.array([n_new, n_old])
z_score, p_val_z = sm.stats.proportions_ztest(count, nobs, alternative='larger')
z_score, p_val_z
(-1.3109241984234394, 0.90505831275902449)
n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts j. and k.?
The z-score is -1.310924 and the p-value is 0.905058, which essentially matches the p-value computed in parts j. and k. The negative z-score indicates the new page's conversion rate is below the old page's, and because the p-value is greater than 0.05 we again fail to reject the null hypothesis: there is no evidence that the new page converts better than the old page.
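As a sanity check, with alternative='larger' the reported p-value should equal the upper-tail normal probability of the z-score; a minimal sketch, assuming scipy is available in this environment:
from scipy.stats import norm

# Upper-tail probability of the observed z-score; should be close to p_val_z above
1 - norm.cdf(z_score)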
1.
In this final part, you will see that the result you achieved in the A/B test in Part II above can also be achieved by performing regression.
a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case?
Logistic regression, because the response variable (converted) is binary: each row is either a conversion or not.
b. The goal is to use statsmodels to fit the regression model you specified in part a. to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create in df2 a column for the intercept, and create a dummy variable column for which page each user received. Add an intercept column, as well as an ab_page column, which is 1 when an individual receives the treatment and 0 if control.
df2['intercept'] = 1
# get_dummies orders columns alphabetically (new_page, old_page), so the first
# column is the new-page indicator, which in df2 equals the treatment indicator
df2[['ab_page', 'ab_page_temp']] = pd.get_dummies(df2.landing_page)
df2.head()
|   | user_id | timestamp | group | landing_page | converted | new_page_converted | old_page_converted | intercept | ab_page | ab_page_temp |
---|---|---|---|---|---|---|---|---|---|---|
0 | 851104 | 2017-01-21 22:11:48.556739 | control | old_page | 0 | NaN | 0.0 | 1 | 0 | 1 |
1 | 804228 | 2017-01-12 08:01:45.159739 | control | old_page | 0 | NaN | 0.0 | 1 | 0 | 1 |
2 | 661590 | 2017-01-11 16:55:06.154213 | treatment | new_page | 0 | 0.0 | NaN | 1 | 1 | 0 |
3 | 853541 | 2017-01-08 18:28:03.143765 | treatment | new_page | 0 | 0.0 | NaN | 1 | 1 | 0 |
4 | 864975 | 2017-01-21 01:52:26.210827 | control | old_page | 1 | NaN | 1.0 | 1 | 0 | 1 |
df2.drop('ab_page_temp', axis=1, inplace=True)
df2.head()
|   | user_id | timestamp | group | landing_page | converted | new_page_converted | old_page_converted | intercept | ab_page |
---|---|---|---|---|---|---|---|---|---|
0 | 851104 | 2017-01-21 22:11:48.556739 | control | old_page | 0 | NaN | 0.0 | 1 | 0 |
1 | 804228 | 2017-01-12 08:01:45.159739 | control | old_page | 0 | NaN | 0.0 | 1 | 0 |
2 | 661590 | 2017-01-11 16:55:06.154213 | treatment | new_page | 0 | 0.0 | NaN | 1 | 1 |
3 | 853541 | 2017-01-08 18:28:03.143765 | treatment | new_page | 0 | 0.0 | NaN | 1 | 1 |
4 | 864975 | 2017-01-21 01:52:26.210827 | control | old_page | 1 | NaN | 1.0 | 1 | 0 |
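An equivalent, simpler way to derive the ab_page values without a temporary dummy column; a minimal sketch (ab_page_check is an illustrative name):
# 1 if the user was in the treatment group (new page), 0 if control (old page)
ab_page_check = (df2['group'] == 'treatment').astype(int)

# Confirm it matches the ab_page column created via get_dummies
(ab_page_check == df2['ab_page']).all()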
c. Use statsmodels to instantiate your regression model on the two columns you created in part b., then fit the model using the two columns you created in part b. to predict whether or not an individual converts.
logistic_model = sm.Logit(df2['converted'], df2[['intercept', 'ab_page']])
result = logistic_model.fit()
result.summary()
Optimization terminated successfully.
         Current function value: 0.366118
         Iterations 6
Dep. Variable: | converted | No. Observations: | 290584 |
---|---|---|---|
Model: | Logit | Df Residuals: | 290582 |
Method: | MLE | Df Model: | 1 |
Date: | Thu, 09 May 2019 | Pseudo R-squ.: | 8.077e-06 |
Time: | 05:29:24 | Log-Likelihood: | -1.0639e+05 |
converged: | True | LL-Null: | -1.0639e+05 |
LLR p-value: | 0.1899 |
|   | coef | std err | z | P>\|z\| | [0.025 | 0.975] |
---|---|---|---|---|---|---|
intercept | -1.9888 | 0.008 | -246.669 | 0.000 | -2.005 | -1.973 |
ab_page | -0.0150 | 0.011 | -1.311 | 0.190 | -0.037 | 0.007 |
d. Provide the summary of your model below, and use it as necessary to answer the following questions.
np.exp(result.params)
intercept    0.136863
ab_page      0.985123
dtype: float64
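Because the ab_page coefficient is negative, the reciprocal of its exponentiated value can be easier to interpret; a minimal sketch that reads the coefficient as an odds ratio:
# Odds of conversion on the old page relative to the new page
1 / np.exp(result.params['ab_page'])
Given the 0.985123 shown above, this is roughly 1.015: holding everything else constant, the odds of converting are about 1.5% higher on the old page than on the new page.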
e. What is the p-value associated with ab_page? Why does it differ from the value you found in Part II?
Hint: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in Part II?
The p-value associated with ab_page is 0.190. It differs from Part II because the regression tests a two-sided alternative: the null hypothesis is that the page has no effect on conversion (the ab_page coefficient is zero) and the alternative is that it has some effect in either direction, whereas Part II tested the one-sided alternative that the new page converts better than the old page.
f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model?
Factors such as change aversion and the novelty effect might influence individual conversion rates. It is a good idea to consider other factors because things besides the page can affect the response variable, and leaving them out can bias the estimated page effect. A disadvantage is that adding terms makes the model harder to interpret, and correlated predictors (multicollinearity) can make the coefficient estimates unstable.
g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives in. You will need to read in the countries.csv dataset and merge together your datasets on the appropriate rows. Here are the docs for joining tables.
Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - Hint: You will need two columns for the three dummy variables. Provide the statistical output as well as a written response to answer this question.
df_additional = pd.read_csv('countries.csv')
df_additional.head()
|   | user_id | country |
---|---|---|
0 | 834778 | UK |
1 | 928468 | US |
2 | 822059 | UK |
3 | 711597 | UK |
4 | 710616 | UK |
df2 = df2.join(df_additional.set_index('user_id'), on='user_id')
df2.head()
|   | user_id | timestamp | group | landing_page | converted | new_page_converted | old_page_converted | intercept | ab_page | country |
---|---|---|---|---|---|---|---|---|---|---|
0 | 851104 | 2017-01-21 22:11:48.556739 | control | old_page | 0 | NaN | 0.0 | 1 | 0 | US |
1 | 804228 | 2017-01-12 08:01:45.159739 | control | old_page | 0 | NaN | 0.0 | 1 | 0 | US |
2 | 661590 | 2017-01-11 16:55:06.154213 | treatment | new_page | 0 | 0.0 | NaN | 1 | 1 | US |
3 | 853541 | 2017-01-08 18:28:03.143765 | treatment | new_page | 0 | 0.0 | NaN | 1 | 1 | US |
4 | 864975 | 2017-01-21 01:52:26.210827 | control | old_page | 1 | NaN | 1.0 | 1 | 0 | US |
# get_dummies orders the country columns alphabetically: CA, UK, US
df2[['CA', 'UK', 'US']] = pd.get_dummies(df2.country)
df2.head()
|   | user_id | timestamp | group | landing_page | converted | new_page_converted | old_page_converted | intercept | ab_page | country | CA | UK | US |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 851104 | 2017-01-21 22:11:48.556739 | control | old_page | 0 | NaN | 0.0 | 1 | 0 | US | 0 | 0 | 1 |
1 | 804228 | 2017-01-12 08:01:45.159739 | control | old_page | 0 | NaN | 0.0 | 1 | 0 | US | 0 | 0 | 1 |
2 | 661590 | 2017-01-11 16:55:06.154213 | treatment | new_page | 0 | 0.0 | NaN | 1 | 1 | US | 0 | 0 | 1 |
3 | 853541 | 2017-01-08 18:28:03.143765 | treatment | new_page | 0 | 0.0 | NaN | 1 | 1 | US | 0 | 0 | 1 |
4 | 864975 | 2017-01-21 01:52:26.210827 | control | old_page | 1 | NaN | 1.0 | 1 | 0 | US | 0 | 0 | 1 |
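The hint that only two dummy columns are needed for the three countries can also be satisfied directly by get_dummies; a minimal sketch (drop_first drops the alphabetically first level, CA, which then acts as the baseline):
# UK and US dummies with CA as the implicit baseline
pd.get_dummies(df2['country'], drop_first=True).head()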
# CA is the baseline country; ab_page, US, and UK enter as explanatory variables
logistic_model_add_country = sm.Logit(df2['converted'], df2[['intercept', 'ab_page', 'US', 'UK']])
result2 = logistic_model_add_country.fit()
result2.summary()
Optimization terminated successfully.
         Current function value: 0.366113
         Iterations 6
Dep. Variable: | converted | No. Observations: | 290584 |
---|---|---|---|
Model: | Logit | Df Residuals: | 290580 |
Method: | MLE | Df Model: | 3 |
Date: | Thu, 09 May 2019 | Pseudo R-squ.: | 2.323e-05 |
Time: | 05:29:25 | Log-Likelihood: | -1.0639e+05 |
converged: | True | LL-Null: | -1.0639e+05 |
LLR p-value: | 0.1760 |
|   | coef | std err | z | P>\|z\| | [0.025 | 0.975] |
---|---|---|---|---|---|---|
intercept | -2.0300 | 0.027 | -76.249 | 0.000 | -2.082 | -1.978 |
ab_page | -0.0149 | 0.011 | -1.307 | 0.191 | -0.037 | 0.007 |
US | 0.0408 | 0.027 | 1.516 | 0.130 | -0.012 | 0.093 |
UK | 0.0506 | 0.028 | 1.784 | 0.074 | -0.005 | 0.106 |
np.exp(result2.params)
intercept    0.131332
ab_page      0.985168
US           1.041599
UK           1.051944
dtype: float64
# Check for multicollinearity among the predictors using variance inflation factors
y, X = dmatrices('converted ~ ab_page + US + UK', df2, return_type='dataframe')
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
vif["features"] = X.columns
vif
|   | VIF Factor | features |
---|---|---|
0 | 21.055923 | Intercept |
1 | 1.000006 | ab_page |
2 | 4.502223 | US |
3 | 4.502236 | UK |
Based on the p-values for US (0.130) and UK (0.074), neither country effect is significant at the 0.05 level, so country does not appear to have had an impact on conversion.
h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model.
Provide the summary results, and your conclusions based on the results.
Based on all of the results above, we cannot say that the new page leads to more conversions than the old page. A sketch of how the interaction model could be fit is shown below.
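A minimal sketch of how the interaction model for part h. could be set up (ab_US and ab_UK are illustrative column names; no output is claimed here):
# Interaction terms: effect of the new page within each country (CA is the baseline)
df2['ab_US'] = df2['ab_page'] * df2['US']
df2['ab_UK'] = df2['ab_page'] * df2['UK']

interaction_model = sm.Logit(df2['converted'],
                             df2[['intercept', 'ab_page', 'US', 'UK', 'ab_US', 'ab_UK']])
result3 = interaction_model.fit()
result3.summary()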
from subprocess import call
call(['python', '-m', 'nbconvert', 'Analyze_ab_test_results_notebook.ipynb'])
0