#!/usr/bin/env python
# coding: utf-8

# # Project 1: Digital Divide
# ### Data Analysis
#
# #### Based on PPIC's Just the Facts report ["California's Digital Divide"](https://www.ppic.org/publication/californias-digital-divide/)

# ## Research Question(s):
# 1. What share of households with school-age children in X state have access to high-speed internet?
# 2. Does this number vary across demographic groups (in this case, race/ethnicity)?

# ## Goal:
# * Use our `working_data` dataset (created in the [Data_Prep notebook](00_DigitalDivide_Data_Prep.ipynb)) to answer our research questions.

# ## Context:
# * Write yourself a description of the context: include a description of the data (_the data set contains X state's data for year YYYY_).

# ***

# #### Step 1: Set up your working environment.
#
# Import all necessary libraries and create `Path`s to your data directories. This ensures reproducibility across file systems (Windows uses `\` instead of `/`).

# We need:
# 1. `pandas` to work with the data.
# 2. `pathlib`, and more specifically its `Path` object, to work with paths. This ensures our code works on both Windows (which uses `\` in its file paths) and macOS/Linux (which use `/`).
# 3. `datetime` - tip: there are version control systems for data, but tagging your data files with the date is not a bad first step if you're getting started.
# 4. `tree` - to display a directory's tree.

# In[1]:

# setting up working environment
import _____ as pd
from _____ import Path
from tools import _________
from ______ import _______ as dt

today = dt.______()._______("%_-%_-%_")
print(today)

# In[2]:

# data folder and paths
RAW_DATA_PATH = ____("../data/raw/")
XXXX_XXXXX_XXXX = ____("../data/interim/")
YYYY_YYYYY_YYYY = ____("../data/processed/")
ZZZZ_ZZZZZ_ZZZZ = ____("../data/final/")

# In[ ]:

tree(INTERIM_DATA_PATH)

# In[4]:

data = pd.read_stata(INTERIM_DATA_PATH / f'working_data-{today}.dta')

# In[5]:

data._______

# In[6]:

data._______()

# In[7]:

data._____()
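# _One possible way to fill in the setup cells above (a sketch, not the only answer; the date format and the blanked path names such as_ `PROCESSED_DATA_PATH` _and_ `FINAL_DATA_PATH` _are assumptions based on how the paths are used later):_
#
# ```python
# import pandas as pd
# from pathlib import Path
# from tools import tree                    # course helper used below to print a directory tree
# from datetime import datetime as dt
#
# today = dt.today().strftime("%Y-%m-%d")   # assumed format; match whatever Data_Prep used
#
# RAW_DATA_PATH = Path("../data/raw/")
# INTERIM_DATA_PATH = Path("../data/interim/")
# PROCESSED_DATA_PATH = Path("../data/processed/")
# FINAL_DATA_PATH = Path("../data/final/")
# ```
#
# _For the inspection cells, common choices are_ `data.shape`_,_ `data.head()`_, and_ `data.info()`_._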

# Our **unit of observation** is still a (weighted) person, but we're interested in **household-level** data.
#
# From IPUMS docs:
# >HHWT indicates how many households in the U.S. population are represented by a given household in an IPUMS sample.
#
# >It is generally a good idea to use HHWT when conducting a household-level analysis of any IPUMS sample. The use of HHWT is optional when analyzing one of the "flat" or unweighted IPUMS samples. Flat IPUMS samples include the 1% samples from 1850-1930, all samples from 1960, 1970, and 1980, the 1% unweighted samples from 1990 and 2000, the 10% 2010 sample, and any of the full count 100% census datasets. HHWT must be used to obtain nationally representative statistics for household-level analyses of any sample other than those.
#
# >**Users should also be sure to select one person (e.g., PERNUM = 1) to represent the entire household.**
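# _A toy illustration of both points, with made-up numbers (not ACS data): keep one row per household, then sum weights instead of counting rows:_
#
# ```python
# import pandas as pd
#
# toy = pd.DataFrame({
#     "pernum":    [1, 2, 1, 1],               # person number within each household
#     "cihispeed": ["yes", "yes", "no", "yes"],
#     "hhwt":      [120, 120, 300, 80],        # households each row represents
# })
#
# toy_households = toy[toy["pernum"] == 1]     # one row per household
# print(toy_households["cihispeed"].value_counts())         # counts rows: yes 2, no 1
# print(toy_households.groupby("cihispeed")["hhwt"].sum())  # counts households: no 300, yes 200
# ```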

# ***

# #### Step 2: Drop all observations where `pernum` doesn't equal 1

# In[ ]:

mask_pernum = (________ == 1)

# In[ ]:

data[mask_pernum].shape

# Save your data to an appropriately named variable.

# In[ ]:

state_households = ____[_________]

# ***

# #### Step 3: Familiarize yourself with your variables of interest

# From IPUMS [docs](https://usa.ipums.org/usa-action/variables/CINETHH#description_section):
#
# >CINETHH reports whether any member of the household accesses the Internet. Here, "access" refers to whether or not someone in the household uses or connects to the Internet, regardless of whether or not they pay for the service.

# In[ ]:

# find the value_counts for your cinethh series

# From IPUMS [docs](https://usa.ipums.org/usa-action/variables/CIHISPEED#description_section):
# >CIHISPEED reports whether the respondent or any member of their household subscribed to the Internet using broadband (high speed) Internet service such as cable, fiber optic, or DSL service.
#
# >User Note: The ACS 2016 introduced changes to the questions regarding computer use and Internet access. See the comparability section and questionnaire text for more information. Additional information provided by the Census Bureau regarding these question alterations is available in the report: ACS Content Test Shows Need to Update Terminology
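# _A minimal_ `.value_counts()` _illustration on toy data (not the actual ACS codes), including the_ `normalize` _parameter you'll meet below:_
#
# ```python
# import pandas as pd
#
# s = pd.Series(["yes", "no", "yes", "yes"])
# print(s.value_counts())                 # yes: 3, no: 1
# print(s.value_counts(normalize=True))   # yes: 0.75, no: 0.25
# ```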

# In[ ]:

# find the value_counts for your cihispeed series

# _Quick tip:_ `.value_counts()` _has a_ `normalize` _parameter:_

# In[13]:

get_ipython().run_line_magic('pinfo', 'pd.Series.value_counts')

# In[ ]:

# try it on your cinethh series

# In[ ]:

# on cihispeed

# ***

# This would be the end of our analysis if we weren't working with **weighted** data. **Weighted** data means each of our observations represents more than one person or household.
#
# `perwt` = "person weight"
#
# `hhwt` = "household weight"

# `.value_counts(normalize=True)` counts the number of **observations** for each of a series' values and then divides it by the total count. If each of our observations were one person/household, we would already have the answer.

# What we need to do is **aggregate**.

# ***

# #### Step 4: Grouping and aggregating data
#
# The mechanics are much the same:
# 1. Count the number of observations that match each of the values in a series.
# 2. Add up **not the number of observations** but the weight of each observation.
# 3. Divide by the total.

# #### Step 4.1: Group your data by their corresponding values

# In[17]:

state_households.groupby("_________")

# From the [docs](http://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html):
#
# >A groupby operation involves some combination of splitting the object, __applying a function__, and combining the results. This can be used to group large amounts of data and compute operations on these groups.

# We're missing the **applying a function** part.
#
# Try the following:
# ```python
# state_households.groupby("countyfip").sum()
# ```
#
# You can pass _almost_ any function to this.
#
# Try `.mean()`, `.max()`, `.min()`, `.std()`.

# In[ ]:

# You can select columns just like you would in any other regular dataframe.

# In[ ]:

state_households.groupby("________")['hhwt']._____()

# ***

# In[ ]:

n_households = state_households.groupby("cihispeed")['hhwt'].sum()[2]
_state = state_households['statefip'].unique()[0]

print(f"""
We can see now that {n_households:,} households in {_state} have access to high-speed internet.

But out of how many?

To make this easier to follow, let's save our results to a variable:
""")

# In[ ]:

households_with_highspeed_access = ____________._____("_____")["____"].___()
households_with_highspeed_access

# This looks like any regular `pandas.Series`. How do we find the total `.sum()` of a series' elements?

# ![math](../../static/math.png)

# In[ ]:

# That's our denominator!
#
# ![nice](../../static/nooice.gif)

# ***

# When you _apply_ an operation to a `pandas.Series`, it _maps_ over each of its elements.
#
# Try the following:
# ```python
# households_with_highspeed_access * 1_000_000
# ```
#
# ```python
# households_with_highspeed_access + 1_000_000
# ```
#
# ```python
# households_with_highspeed_access / 1_000_000
# ```

# In[ ]:

# In[ ]:

# In[ ]:

# Now that you know the denominator of our equation (the total number of households in X state), how would you find each of the 3 values in `households_with_highspeed_access` as a share of the total?
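# _One possible approach if you get stuck (a sketch; it assumes_ `households_with_highspeed_access` _was built with the_ `groupby("cihispeed")['hhwt'].sum()` _pattern shown above):_
#
# ```python
# households_with_highspeed_access = state_households.groupby("cihispeed")["hhwt"].sum()
# total_households = households_with_highspeed_access.sum()     # the denominator
# shares = households_with_highspeed_access / total_households  # maps over each element
# print(shares)
# ```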
# In[ ]:

# ***
# ***

# ### Part 2 of analysis: Creating derived variables

# Now that you have answered **Research Question 1**, we can move on to Q2:
# >_Does this number vary across demographic groups? (in this case race/ethnicity)._

# pandas' `.groupby()` can take a list of columns to group by.
#
# Try the following:
# ```python
# state_households.groupby(['race', 'cihispeed'])[['hhwt']].sum()
# ```
#
# _Notice that I'm passing_ `[['hhwt']]` _(a 1-element list) and not just_ `['hhwt']`_. Try both yourself and let's discuss the difference._

# In[ ]:

# In[ ]:

# ***

# #### Step 1: Define your groups
#
# Pandas' `.loc` indexer serves not only to slice dataframes but also to assign new values to certain slices of a dataframe.
#
# For example,
# ```python
# mask_madeup_data = (data['column_1'] == 'no answer')
# data.loc[mask_madeup_data, 'new_column'] = 'this row did not answer'
# ```

# The code above grabs all the rows that satisfy the condition and then looks at `'new_column'`; if the column doesn't exist, pandas creates it for you and assigns the value `'this row did not answer'` to all the rows that match the condition. The rest are filled with null values (NaNs).

# ###### Let's create our masks

# In[ ]:

mask_latino = 

# In[ ]:

mask_white = 

# In[ ]:

mask_black = 

# In[ ]:

mask_______ = 

# In[ ]:

mask_______ = 

# In[ ]:

mask_______ = 

# Assign the values to a new column `'racen'` for race/ethnicity.

# In[ ]:

state_households.loc[mask_latino, 'racen'] = 'Latino'
state_households.loc[mask_white, 'racen'] = 'White'
state_households.loc[mask_black, 'racen'] = 'Black/African-American'
state_households.loc[mask_______, 'racen'] = '_______'
state_households.loc[mask_______, 'racen'] = '_______'
state_households.loc[mask_______, 'racen'] = '_______'

# Check your results.
#
# Under your new logic, all `race` values should fit into `racen` values, so there should not be any null values, right?

# Pandas' `.isna()` returns a series of True/False for each value of a series, depending on whether or not it is null.
#
# AND
#
# in Python, `True == 1` and `False == 0`.
#
# What do you think would happen if you ask for the `.sum()` of a `pandas.Series` of booleans?

# In[ ]:

# ***

# ##### Multiple ways of grouping data

# Now that you have derived a working variable for race/ethnicity, you can aggregate your data to answer **RQ2**. In pandas, there are many ways to do this; some of them are:
# 1. `.groupby()`, as we've done so far.
# 2. `.pivot_table()`
# 3. `pd.crosstab()` <- this one is a `pandas` function, not a DataFrame method. More on it later.

# ##### GroupBy

# In[ ]:

state_households.groupby(['racen', '______'])[['______']]._____()

# Let's save that to an appropriately named variable, since we'll be using it later.

# In[ ]:

cihispeed_by_racen = state_households.groupby(['racen', '______'])[['______']]._____()

# Now, this grouped dataframe has the total number of households in each of these racen-cihispeed groups.
#
# We need the share of cihispeed values by racen group.
#
# In our equation,
#
# $$ \frac{households\ with\ high\ speed\ internet}{total\ households\ in\ racen\ group}$$
#
# we need to find the denominator.

# In[ ]:

# find the denominator

# In[ ]:

# divide your racen-cihispeed totals by the denominator

# In[ ]:

# save to an appropriately named variable
shares_cihispeed_by_racen = 
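# _One possible way through the three cells above (a sketch; it uses the Series form with single brackets rather than_ `[['hhwt']]` _so that_ `.div(..., level=...)` _can align on the_ `racen` _level of the MultiIndex):_
#
# ```python
# cihispeed_by_racen = state_households.groupby(["racen", "cihispeed"])["hhwt"].sum()
# households_by_racen = state_households.groupby("racen")["hhwt"].sum()  # the denominator
#
# # broadcast the division across the 'racen' level of the MultiIndex
# shares_cihispeed_by_racen = cihispeed_by_racen.div(households_by_racen, level="racen")
# ```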
# This is a multi-level index dataframe, and there are a few ways to slice it. Let's try 3:
# 1. a classic `.loc` slice
# 2. a cross-section (`.xs()`)
# 3. the `.reset_index()` method

# **Classic `.loc`**

# In[ ]:

shares_cihispeed_by_racen.loc[INDEX_SLICER, COLUMNS]

# **Cross-section**

# In[59]:

get_ipython().run_line_magic('pinfo', 'shares_cihispeed_by_racen.xs')

# In[ ]:

shares_cihispeed_by_racen.xs(key = '________', level = _)

# **`.reset_index()`**

# Another way to slice a multi-level index dataframe is to turn it into a regular, single-level index dataframe. To do that, you need to _reset_ its index. After that, we can slice it the way we've been slicing our dataframes all along.

# In[ ]:

__________ = ____________._________()

# In[ ]:

__________

# In[ ]:

mask_yes_cihispeed = (_____________ == '___________')
_______[mask_yes_cihispeed]

# ***

# ##### Pivot Tables

# The second method of aggregating our data is `.pivot_table()`.
#
# If you've worked with Excel, you might already be familiar with what a pivot table is.
#
# From [Wikipedia](https://en.wikipedia.org/wiki/Pivot_table):
# >A pivot table is a table of statistics that summarizes the data of a more extensive table (such as from a database, spreadsheet, or business intelligence program). This summary might include sums, averages, or other statistics, which the pivot table groups together in a meaningful way.

# In[66]:

get_ipython().run_line_magic('pinfo', 'state_households.pivot_table')

# We need to decide four things:
# 1. What variable will become our `index`?
# 2. What variable will become our `columns`?
# 3. What variable will become our `values`?
# 4. How will we aggregate our values?
#
# Pandas will take each unique value of the variables you choose and use them as rows in your `.index` or as separate columns in your `.columns`. The `values` variable should be _quantitative_ in this case (though it doesn't necessarily have to be). By default, `.pivot_table` finds the `mean` of your `values` variable for each cell in the new table; in this case we don't care about the `mean`, we want to `sum` up the total number of households.

# Try the following:
#
# ```python
# state_households.pivot_table(
#     index = '______',
#     columns = '______',
#     values = 'hhwt',
#     aggfunc = '___',
#     margins = True,
# )
# ```

# In[ ]:

# Save it to an appropriately named variable.

# In[ ]:

households_pivot_table = state_households.pivot_table(
    index = '_____',
    columns = '______',
    ______ = '______',
    ______ = '____',
    _______ = True,
)

# What do you think the next step should be?

# In[ ]:
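# _A possible completion of the pivot table and the likely next step (a sketch; using_ `racen` _as the index and_ `cihispeed` _as the columns is an assumption based on RQ2):_
#
# ```python
# households_pivot_table = state_households.pivot_table(
#     index="racen",
#     columns="cihispeed",
#     values="hhwt",
#     aggfunc="sum",
#     margins=True,   # adds an "All" row and column holding the totals
# )
#
# # next step: divide each row by its "All" total to turn counts into shares
# shares_pivot_table = households_pivot_table.div(households_pivot_table["All"], axis=0)
# ```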