#!/usr/bin/env python
# coding: utf-8

# # Example Tool Usage - Regression Problems
# ----
#
# # About
# This notebook contains simple, toy examples to help you get started with FairMLHealth tool usage. This same content is mirrored in the repository's main [README](../../../README.md).

# # Example Setup

# In[1]:

from fairmlhealth import report, measure, stat_utils

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, TweedieRegressor


# In[2]:

# First, we'll create a semi-randomized dataframe with specific columns for our attributes of interest
rng = np.random.RandomState(506)

N = 240
X = pd.DataFrame({'col1': rng.randint(1, 4, N),
                  'col2': rng.randint(1, 75, N),
                  'col3': rng.randint(0, 2, N),
                  'gender': [0, 1]*int(N/2),
                  'ethnicity': [1, 1, 0, 0]*int(N/4),
                  'other': [1, 0, 0, 0, 1, 0, 0, 1]*int(N/8)
                  })

# Second, we'll create a randomized target variable
y = pd.Series((X['col3'] + X['gender']).values + rng.uniform(0, 6, N),
              name='Example_Target')

# Third, we'll split the data and use it to train two generic models
splits = train_test_split(X, y, test_size=0.5, random_state=42)
X_train, X_test, y_train, y_test = splits

model_1 = LinearRegression().fit(X_train, y_train)
model_2 = TweedieRegressor().fit(X_train, y_train)


# In[3]:

display(X.head(), y.head())


# # Generalized Reports
# FairMLHealth has tools to create generalized reports of model bias and performance.
#
# The primary reporting tool is now the **compare** function, which can be used to generate side-by-side comparisons for any number of models, for either binary classification or regression problems. Model performance metrics such as accuracy and precision (or MAE and RSquared for regression problems) are also provided to facilitate comparison.
#
# A flagging protocol is applied by default to highlight any cells with values that are out of range. This can be turned off by passing ***flag_oor = False*** to report.compare().
#
# Below is an example applying the function to a regression model. Note that the "fair" range used to evaluate regression metrics requires judgment on the part of the user. Default ranges have been set to [0.8, 1.2] for ratios, 10% of the available target range for *Mean Prediction Difference*, and 10% of the available MAE range for *MAE Difference*. If the default flags do not meet your needs, they can be turned off by passing ***flag_oor = False*** to report.compare(). More information is available in our [Evaluating Fairness Documentation](./docs/resources/Evaluating_Fairness.md#regression_ranges).

# In[4]:

# Generate a measure report
report.compare(X_test, y_test, X_test['gender'], model_1, pred_type="regression")


# In[5]:

# Display the same report without performance measures
bias_report = report.compare(test_data=X_test,
                             targets=y_test,
                             protected_attr=X_test['gender'],
                             models=model_1,
                             pred_type="regression",
                             skip_performance=True)
print("Returned type:", type(bias_report))
display(bias_report)


# ### Alternative Return Types
#
# By default the **compare** function returns a flagged comparison of type pandas Styler (pandas.io.formats.style.Styler). When flags are disabled, the default return type is a pandas DataFrame. Outputs can also be returned as embedded HTML -- with or without flags -- by specifying *output_type="html"*.
# In[6]:

# With flags disabled, the report is returned as a pandas DataFrame
df = report.compare(test_data=X_test,
                    targets=y_test,
                    protected_attr=X_test['gender'],
                    models=model_1,
                    pred_type="regression",
                    flag_oor=False)
print("Returned type:", type(df))
display(df.head(2))


# In[7]:

# Comparisons can also be returned as embedded HTML
from IPython.core.display import HTML

html_output = report.compare(test_data=X_test,
                             targets=y_test,
                             protected_attr=X_test['gender'],
                             models=model_1,
                             pred_type="regression",
                             output_type="html")
print("Returned type:", type(html_output))
HTML(html_output)


# ## Comparing Results for Multiple Models
#
# The **compare** tool can also be used to measure two different models or two different protected attributes. Protected attributes are measured separately and cannot yet be combined with the **compare** tool, although they can be grouped as cohorts in the stratified tables [as shown below](#cohort).
#
# Here is an example output comparing the two test models defined above. Missing values are shown for metrics requiring prediction probabilities, which the second model does not provide (note the warning below).

# In[8]:

# Generate a pandas dataframe of measures
report.compare(X_test, y_test, X_test['gender'],
               {'model 1': model_1, 'model 2': model_2},
               pred_type="regression")


# # Detailed Analyses
#
# ## Significance Testing
#
# It is generally recommended to test whether any differences in model outcomes for protected attributes are simply the effect of sampling error. FairMLHealth comes with a bootstrapping utility and supporting functions that can be used in statistical testing. The bootstrapping utility accepts any function that returns a p-value and will return True or False based on whether the p-value is greater than some alpha for a threshold number of randomly sampled trials. While the selection of proper statistical tests is beyond the scope of this notebook, two examples using the bootstrap_significance tool with the built-in Kruskal-Wallis test function are shown below, followed by a single (non-bootstrapped) Kruskal-Wallis test.

# In[9]:

# Example 1: Bootstrap Test Results Applying Kruskal-Wallis Relative to Gender
isMale = X['gender'].eq(1)
reject_h0 = stat_utils.bootstrap_significance(func=stat_utils.kruskal_pval,
                                              a=y.loc[isMale],
                                              b=y.loc[~isMale])
print("Is the y value different for male vs female?\n", reject_h0)


# In[10]:

# Example 2: Bootstrap Test Results Applying Kruskal-Wallis Relative to Ethnicity
isCaucasian = X['ethnicity'].eq(1)
reject_h0 = stat_utils.bootstrap_significance(func=stat_utils.kruskal_pval,
                                              a=y.loc[isCaucasian],
                                              b=y.loc[~isCaucasian])
print("Is the y value different for caucasian vs not-caucasian?\n", reject_h0)


# In[11]:

# Example of a Single Kruskal-Wallis Test
pval = stat_utils.kruskal_pval(a=y.loc[X['col3'].eq(1)],
                               b=y.loc[X['col3'].eq(0)],
                               # If n_sample is set to None, tests on the full dataset rather than a sample
                               n_sample=None
                               )
print("P-Value of single K-W test:", pval)
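# The bootstrapping utility is not limited to the built-in Kruskal-Wallis
# function. Below is a minimal sketch of passing a custom p-value function to
# bootstrap_significance. The helper name (mannwhitney_pval) and its optional
# subsampling behavior are our own illustration rather than part of the
# FairMLHealth API; we assume only that bootstrap_significance forwards the
# keyword arguments a and b to the supplied function, as in the examples above.

from scipy import stats


def mannwhitney_pval(a, b, n_sample=100, **kwargs):
    """Hypothetical helper returning the Mann-Whitney U p-value for two samples.

    If n_sample is set, each group is randomly subsampled first so that
    repeated bootstrap trials can see different draws.
    """
    if n_sample is not None:
        a = a.sample(min(n_sample, len(a)))
        b = b.sample(min(n_sample, len(b)))
    return stats.mannwhitneyu(a, b).pvalue


reject_h0 = stat_utils.bootstrap_significance(func=mannwhitney_pval,
                                              a=y.loc[isMale],
                                              b=y.loc[~isMale])
print("Custom Mann-Whitney bootstrap result (gender):", reject_h0)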
# ## Stratified Tables
# FairMLHealth also provides tools for detailed analysis of model variance by way of stratified data, performance, and bias tables. Beyond evaluating fairness, these tools are intended for flexible use in any generic assessment of model bias. Tables can evaluate multiple features at once. *An important update starting in Version 1.0.0 is that all of these features are now contained in the **measure.py** module (previously named reports.py).*
#
# All tables display a summary row for "All Features, All Values". This summary can be turned off by passing ***add_overview=False*** to measure.data().

# ### Data Tables
#
# The stratified data table can be used to evaluate data against one or multiple targets. Two methods are available for identifying which features to assess, as shown in the examples below.

# In[12]:

# Arguments Option 1: pass the full set of data, subsetting with the *features* argument
measure.data(X_test, y_test, features=['gender'])


# In[13]:

# Arguments Option 2: pass the data subset of interest without using the *features* argument
measure.data(X_test['gender'], y_test)


# In[14]:

# Display a similar report for multiple targets, dropping the summary row
measure.data(X=X_test,                      # used to define rows
             Y=X_test,                      # used to define columns
             features=['gender', 'col1'],   # optional subset of X
             targets=['col2', 'col3'],      # optional subset of Y
             add_overview=False             # turns off "All Features, All Values" row
             )


# In[15]:

# Analytical tables are output as pandas DataFrames
test_table = measure.data(X=X_test[['gender', 'col1']],   # used to define rows
                          Y=X_test[['col2', 'col3']],     # used to define columns
                          )
test_table.loc[test_table['Feature Value'].eq("1"),
               ['Feature Name', 'Feature Value', 'Mean col2', 'Mean col3']]


# ### Stratified Performance Tables
#
# The stratified performance table evaluates model performance specific to each feature-value subset. These tables are compatible with both classification and regression models.

# In[16]:

# Performance table example
measure.performance(X_test[['gender']], y_test, model_1.predict(X_test), pred_type="regression")


# ### Stratified Bias Tables
#
# The stratified bias analysis feature applies fairness-related metrics for each feature-value pair. It treats a given feature-value as the "privileged" group relative to all other possible values for the feature. For example, in the table output shown in the cell below, row **2** displays measures for **"col1"** with a value of **"2"**. For this row, "2" is considered to be the privileged group, while all other non-null values (namely "1" and "3") are considered unprivileged.
#
# Note that the *flag* function is compatible with both **measure.bias()** and **measure.summary()** (which is demonstrated below). However, to enable colored cells the tool returns a pandas Styler rather than a DataFrame. For this reason, *flag_oor* is False by default for these features. Flagging can be turned on by passing *flag_oor=True* to either function. As an added feature, optional custom ranges can be passed to either **measure.bias()** or **measure.summary()** to facilitate regression evaluation, as shown below.

# In[17]:

# Custom "fair" ranges may be passed as dictionaries of tuples whose keys
# are case-insensitive measure names
my_ranges = {'mean prediction difference': (-2, 2)}

# Note that flag_oor is set to False by default for this feature
measure.bias(X_test[['gender', 'col1']], y_test, model_1.predict(X_test),
             pred_type="regression",
             flag_oor=True,
             custom_ranges=my_ranges)
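# If the underlying numbers are needed for further filtering or export, the
# flagged (Styler) result still carries its DataFrame. This is a minimal
# sketch relying on the standard pandas Styler.data attribute; it assumes
# only that measure.bias returns a pandas Styler whenever flag_oor=True, as
# described above, and is not a FairMLHealth-specific feature.

flagged = measure.bias(X_test[['gender', 'col1']], y_test, model_1.predict(X_test),
                       pred_type="regression",
                       flag_oor=True,
                       custom_ranges=my_ranges)
flagged_df = flagged.data  # the unstyled pandas DataFrame behind the Styler
print("Returned type:", type(flagged_df))
display(flagged_df.head(2))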
# The **measure** module also contains a summary function that works similarly to report.compare(). While it can only be applied to one model at a time, it can accept custom "fair" ranges and cohort groups as [shown in the next section](#cohort).

# In[18]:

# Example summary output for the regression model with custom ranges
measure.summary(X_test[['gender', 'col1']], y_test, model_1.predict(X_test),
                prtc_attr=X_test['gender'],
                pred_type="regression",
                flag_oor=True,
                custom_ranges={'mean prediction difference': (-0.5, 2)})


# ## Analysis by Cohort
#
# Table-generating functions in the **measure** module can additionally be grouped using the *cohort_labels* argument, which specifies extra labels for each observation. Cohorts may consist of either a single label or a set of labels, and may be either separate from or attached to the existing data.

# In[19]:

# Define cohort labels relative to the true values of the target
cohort_labels = pd.qcut(y_test, 3, labels=False).rename('True Value Group')

# Separate, Single-Level Cohorts
measure.bias(X_test['col3'], y_test, model_1.predict(X_test),
             pred_type="regression",
             flag_oor=True,
             cohort_labels=cohort_labels)


# In[20]:

# Multi-Level Cohorts for the Data Table
measure.data(X=X_test[['col3']],
             Y=y_test,
             cohort_labels=X_test[['gender', 'ethnicity']])
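# A final sketch: stratifying the performance table by cohort. This assumes
# that measure.performance accepts the same cohort_labels argument as the
# other table-generating functions in the measure module (as suggested above);
# the 'Gender Group' label name is our own illustration.

measure.performance(X_test[['col3']], y_test, model_1.predict(X_test),
                    pred_type="regression",
                    cohort_labels=X_test['gender'].rename('Gender Group'))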