"Identifying Significant Predictive Bias in Classifiers" https://arxiv.org/abs/1611.08292
The goal of bias scan is to identify the subgroup (or subgroups) with significantly more predictive bias than would be expected from an unbiased classifier. A dataset with $M$ features, where feature $m$ takes $|X_{m}|$ discretized values, contains $\prod_{m=1}^{M}\left(2^{|X_{m}|}-1\right)$ unique subgroups: a subgroup is any $M$-dimensional Cartesian set product of non-empty subsets of feature-values, one subset per feature. Bias scan sidesteps this combinatorial explosion by approximately identifying the most statistically biased subgroup in linear (rather than exponential) time.
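To make the size of this search space concrete, here is a small sketch that evaluates the subgroup-count formula; the feature arities in the example are those of the five discretized COMPAS features used later in this notebook (this helper is our own illustration, not part of aif360):

```python
from math import prod

def n_subgroups(arities):
    """Number of distinct subgroups, prod(2^|X_m| - 1): each feature
    contributes any non-empty subset of its discretized values."""
    return prod(2**a - 1 for a in arities)

# sex(2), race(2), age_cat(3), priors_count(3), c_charge_degree(2)
print(n_subgroups([2, 2, 3, 3, 2]))  # -> 1323
```

Even five low-arity features yield over a thousand candidate subgroups, which is why exhaustive scoring does not scale.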
We define the statistical measure of predictive bias, $score_{bias}(S)$, as a likelihood-ratio score over a given subgroup $S$. The null hypothesis is that the predicted odds are correct for all subgroups in
$\mathcal{D}$: $H_{0}:odds(y_{i})=\frac{\hat{p}_{i}}{1-\hat{p}_{i}}\ \forall i\in\mathcal{D}$.
The alternative hypothesis assumes some constant multiplicative bias in the odds for some given subgroup $S$:
$H_{1}:\ odds(y_{i})=q\frac{\hat{p}_{i}}{1-\hat{p}_{i}},\ \text{where}\ q>1\ \forall i\in S\ \text{and}\ q=1\ \forall i\notin S.$
In the classification setting, each observation's likelihood is Bernoulli distributed and assumed independent. This yields the following scoring function for a subgroup $S$:
\begin{align*} score_{bias}(S)= & \max_{q}\log\prod_{i\in S}\frac{Bernoulli\left(y_{i};\frac{q\hat{p}_{i}}{1-\hat{p}_{i}+q\hat{p}_{i}}\right)}{Bernoulli\left(y_{i};\hat{p}_{i}\right)}\\ = & \max_{q}\ \log(q)\sum_{i\in S}y_{i}-\sum_{i\in S}\log\left(1-\hat{p}_{i}+q\hat{p}_{i}\right). \end{align*}Our bias scan is thus represented as: $S^{*}=FSS(\mathcal{D},\mathcal{E},F_{score})=MDSS(\mathcal{D},\hat{p},score_{bias})$,
where $S^{*}$ is the detected most anomalous subgroup, $FSS$ is one of several subset scan algorithms for different problem settings, $\mathcal{D}$ is a dataset with outcomes $Y$ and discretized features $\mathcal{X}$, $\mathcal{E}$ are a set of expectations or 'normal' values for $Y$, and $F_{score}$ is an expectation-based scoring statistic that measures the amount of anomalousness between subgroup observations and their expectations.
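To make the scoring function concrete, here is a minimal sketch that evaluates $score_{bias}(S)$ for one subgroup by maximizing over $q$ on a grid. This is our own illustration only; the aif360 implementation handles the maximization over $q$ internally and scans over subgroups, not just one:

```python
import numpy as np

def bernoulli_bias_score(y, p_hat, q_grid=None):
    """max over q of  log(q) * sum(y_i) - sum(log(1 - p_i + q * p_i)),
    for the outcomes y and predicted probabilities p_hat in one subgroup."""
    y = np.asarray(y, dtype=float)
    p = np.asarray(p_hat, dtype=float)
    if q_grid is None:
        q_grid = np.linspace(1.0, 10.0, 1001)  # H1 assumes q > 1; q = 1 scores 0
    scores = np.log(q_grid) * y.sum() - np.array(
        [np.log(1 - p + q * p).sum() for q in q_grid])
    return float(scores.max())

# A subgroup whose outcomes exceed its predictions gets a positive score:
print(bernoulli_bias_score([1, 1, 1, 1], [0.5, 0.5, 0.5, 0.5]))
```

A subgroup whose outcomes match its predictions scores near zero, since the maximum is then attained at $q=1$.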
Predictive bias concerns whether a classifier's predictions match the observed outcomes within a subgroup. Bias scan provides a general method that can detect and characterize such bias, or poor classifier fit, in the much larger space of all possible subgroups, without a priori specification of which subgroup to check.
MDSS currently supports four scoring functions: Bernoulli, BerkJones, Gaussian, and Poisson. The usage of each is demonstrated below.
Note that non-parametric scoring functions (such as BerkJones) can only be used for datasets where the expectations are constant or omitted.
The type of outcome must be provided using the mode keyword argument. The four supported outcome types are binary (the default), continuous, nominal, and ordinal; each is demonstrated below.
from aif360.detectors.mdss_detector import bias_scan
from aif360.algorithms.preprocessing.optim_preproc_helpers.data_preproc_functions import load_preproc_data_compas
import numpy as np
import pandas as pd
We'll demonstrate finding the most anomalous subset with bias scan using the COMPAS dataset. We can either specify subgroups to be scored or scan for the most anomalous subgroup. Bias scan also lets us choose whether to look for bias as higher-than-expected probabilities (overprediction) or lower-than-expected probabilities (underprediction).
This is a binary classification use case where the favorable label is 0 and the scoring function is the default, Bernoulli.
np.random.seed(0)
dataset_orig = load_preproc_data_compas()
The dataset has its categorical features one-hot encoded, so we'll convert them back to single categorical features: scanning one-hot encoded features may find subgroups that are not meaningful, e.g. a subgroup with two race values.
dataset_orig_df = pd.DataFrame(dataset_orig.features, columns=dataset_orig.feature_names)
age_cat = np.argmax(dataset_orig_df[['age_cat=Less than 25', 'age_cat=25 to 45',
'age_cat=Greater than 45']].values, axis=1).reshape(-1, 1)
priors_count = np.argmax(dataset_orig_df[['priors_count=0', 'priors_count=1 to 3',
'priors_count=More than 3']].values, axis=1).reshape(-1, 1)
c_charge_degree = np.argmax(dataset_orig_df[['c_charge_degree=F', 'c_charge_degree=M']].values, axis=1).reshape(-1, 1)
features = np.concatenate((dataset_orig_df[['sex', 'race']].values, age_cat, priors_count, \
c_charge_degree, dataset_orig.labels), axis=1)
feature_names = ['sex', 'race', 'age_cat', 'priors_count', 'c_charge_degree']
df = pd.DataFrame(features, columns=feature_names + ['two_year_recid'])
df.head()
| | sex | race | age_cat | priors_count | c_charge_degree | two_year_recid |
|---|---|---|---|---|---|---|
| 0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 | 1.0 |
| 1 | 0.0 | 0.0 | 0.0 | 2.0 | 0.0 | 1.0 |
| 2 | 0.0 | 1.0 | 1.0 | 2.0 | 0.0 | 1.0 |
| 3 | 1.0 | 1.0 | 1.0 | 0.0 | 1.0 | 0.0 |
| 4 | 0.0 | 1.0 | 1.0 | 0.0 | 0.0 | 0.0 |
We'll train a simple classifier to predict the probability of the outcome.
from sklearn.linear_model import LogisticRegression
X = df.drop('two_year_recid', axis = 1)
y = df['two_year_recid']
clf = LogisticRegression(solver='lbfgs', C=1.0, penalty='l2')
clf.fit(X, y)
LogisticRegression()
Note that the probability scores we use are the probabilities of the favorable label, which is 0 in this case.
probs = pd.Series(clf.predict_proba(X)[:,0])
We can scan for a privileged and an unprivileged subset using bias scan.
privileged_subset = bias_scan(data=X, observations=y, expectations=probs, favorable_value=0, overpredicted=True)
unprivileged_subset = bias_scan(data=X, observations=y, expectations=probs, favorable_value=0, overpredicted=False)
print(privileged_subset)
print(unprivileged_subset)
({'age_cat': [1.0], 'priors_count': [0.0, 1.0, 2.0], 'sex': [1.0], 'race': [1.0], 'c_charge_degree': [0.0]}, 7.9086) ({'race': [0.0], 'age_cat': [1.0, 2.0], 'priors_count': [1.0], 'c_charge_degree': [0.0, 1.0]}, 7.0227)
dff = X.copy()
dff['observed'] = y
dff['probabilities'] = 1 - probs
to_choose = dff[privileged_subset[0].keys()].isin(privileged_subset[0]).all(axis=1)
temp_df = dff.loc[to_choose]
"Our detected privileged group has a size of {}, we observe {} as the average risk of recidivism, but our model predicts {}"\
.format(len(temp_df), temp_df['observed'].mean(), temp_df['probabilities'].mean())
'Our detected privileged group has a size of 147, we observe 0.5374149659863946 as the average risk of recidivism, but our model predicts 0.3827815971689547'
to_choose = dff[unprivileged_subset[0].keys()].isin(unprivileged_subset[0]).all(axis=1)
temp_df = dff.loc[to_choose]
"Our detected unprivileged group has a size of {}, we observe {} as the average risk of recidivism, but our model predicts {}"\
.format(len(temp_df), temp_df['observed'].mean(), temp_df['probabilities'].mean())
'Our detected unprivileged group has a size of 732, we observe 0.3770491803278688 as the average risk of recidivism, but our model predicts 0.44470388217799317'
This is a binary classification use case where the favorable label is 1 and the scoring function is BerkJones.
data = pd.read_csv('https://gist.githubusercontent.com/Viktour19/b690679802c431646d36f7e2dd117b9e/raw/d8f17bf25664bd2d9fa010750b9e451c4155dd61/adult_autostrat.csv')
data.head()
| | workclass | education | marital_status | occupation | relationship | race | sex | native_country | age_bin | education_num_bin | hours_per_week_bin | capital_gain_bin | capital_loss_bin | observed | expectation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | Private | 11th | Never-married | Machine-op-inspct | Own-child | Black | Male | United-States | 17-27 | 1-8 | 40-44 | 0 | 0 | 0 | 0.236226 |
| 1 | Private | HS-grad | Married-civ-spouse | Farming-fishing | Husband | White | Male | United-States | 37-47 | 9 | 45-99 | 0 | 0 | 0 | 0.236226 |
| 2 | Local-gov | Assoc-acdm | Married-civ-spouse | Protective-serv | Husband | White | Male | United-States | 28-36 | 12-16 | 40-44 | 0 | 0 | 1 | 0.236226 |
| 3 | Private | Some-college | Married-civ-spouse | Machine-op-inspct | Husband | Black | Male | United-States | 37-47 | 10-11 | 40-44 | 7298-7978 | 0 | 1 | 0.236226 |
| 4 | ? | Some-college | Never-married | ? | Own-child | White | Female | United-States | 17-27 | 10-11 | 1-39 | 0 | 0 | 0 | 0.236226 |
Note that for the adult dataset the positive label is 1, so the expectations provided are the probabilities of earning >50k (i.e. label 1), and the favorable label is 1, which is the default for binary classification tasks. Since we are using the BerkJones scoring function, we also need to pass in an alpha value. Alpha can be interpreted as the proportion of the data you expect to have the favorable value.
X = data.drop(['observed','expectation'], axis = 1)
probs = data['expectation']
y = data['observed']
privileged_subset = bias_scan(data=X, observations=y, scoring='BerkJones', expectations=probs, overpredicted=True, penalty=50, alpha=.24)
unprivileged_subset = bias_scan(data=X, observations=y, scoring='BerkJones', expectations=probs, overpredicted=False, penalty=50, alpha=.24)
print(privileged_subset)
print(unprivileged_subset)
({'relationship': [' Not-in-family', ' Other-relative', ' Own-child', ' Unmarried'], 'capital_gain_bin': ['0']}, 932.4812) ({'education_num_bin': ['12-16'], 'marital_status': [' Married-civ-spouse']}, 1041.1901)
dff = X.copy()
dff['observed'] = y
dff['probabilities'] = probs
to_choose = dff[privileged_subset[0].keys()].isin(privileged_subset[0]).all(axis=1)
temp_df = dff.loc[to_choose]
"Our detected privileged group has a size of {}, we observe {} as the average probability of earning >50k, but our model predicts {}"\
.format(len(temp_df), np.round(temp_df['observed'].mean(),4), np.round(temp_df['probabilities'].mean(),4))
'Our detected privileged group has a size of 8532, we observe 0.0472 as the average probability of earning >50k, but our model predicts 0.2362'
to_choose = dff[unprivileged_subset[0].keys()].isin(unprivileged_subset[0]).all(axis=1)
temp_df = dff.loc[to_choose]
"Our detected unprivileged group has a size of {}, we observe {} as the average probability of earning >50k, but our model predicts {}"\
.format(len(temp_df), np.round(temp_df['observed'].mean(),4), np.round(temp_df['probabilities'].mean(),4))
'Our detected unprivileged group has a size of 2430, we observe 0.6996 as the average probability of earning >50k, but our model predicts 0.2362'
This is a regression use case where the favorable value is 'low' (lower insurance charges) and the scoring function is Gaussian.
data = pd.read_csv('https://raw.githubusercontent.com/Adebayo-Oshingbesan/data/main/insurance.csv')
data.shape
(1338, 7)
for col in ['bmi', 'age']:
    data[col] = pd.qcut(data[col], 10, duplicates='drop')
    data[col] = data[col].apply(lambda x: str(round(x.left, 2)) + ' - ' + str(round(x.right, 2)))
features = data.drop('charges', axis = 1)
X = features.copy()
for feature in X.columns:
    X[feature] = X[feature].astype('category').cat.codes
y = data['charges']
from sklearn.linear_model import LinearRegression
reg = LinearRegression()
reg.fit(X, y)
y_pred = pd.Series(reg.predict(X))
privileged_subset = bias_scan(data=features, observations=y, expectations=y_pred, scoring = 'Gaussian',
overpredicted=True, penalty=1e10, mode ='continuous', favorable_value='low')
unprivileged_subset = bias_scan(data=features, observations=y, expectations=y_pred, scoring = 'Gaussian',
overpredicted=False, penalty=1e10, mode ='continuous', favorable_value='low')
print(privileged_subset)
print(unprivileged_subset)
({'bmi': ['15.96 - 22.99', '22.99 - 25.33', '25.33 - 27.36'], 'smoker': ['no']}, 2384.5786) ({'bmi': ['15.96 - 22.99', '22.99 - 25.33', '25.33 - 27.36', '27.36 - 28.8'], 'smoker': ['yes']}, 3927.8765)
to_choose = data[privileged_subset[0].keys()].isin(privileged_subset[0]).all(axis=1)
temp_df = data.loc[to_choose].copy()
temp_y = y_pred.loc[to_choose].copy()
"Our detected privileged group has a size of {}, we observe {} as the mean insurance costs, but our model predicts {}"\
.format(len(temp_df), temp_df['charges'].mean(), temp_y.mean())
'Our detected privileged group has a size of 321, we observe 7844.840295856697 as the mean insurance costs, but our model predicts 5420.49326277455'
to_choose = data[unprivileged_subset[0].keys()].isin(unprivileged_subset[0]).all(axis=1)
temp_df = data.loc[to_choose].copy()
temp_y = y_pred.loc[to_choose].copy()
"Our detected unprivileged group has a size of {}, we observe {} as the mean insurance costs, but our model predicts {}"\
.format(len(temp_df), temp_df['charges'].mean(), temp_y.mean())
'Our detected unprivileged group has a size of 115, we observe 21148.37389617392 as the mean insurance costs, but our model predicts 29694.035319112852'
This is an ordinal outcome use case where the favorable value is 'low' (a shorter stay) and the scoring function is Poisson.
data = pd.read_csv('https://raw.githubusercontent.com/Adebayo-Oshingbesan/data/main/hospital.csv')
data = data[data['Length of Stay'] != '120 +'].fillna('Unknown')
data.shape
(29980, 22)
X = data.drop(['Length of Stay'], axis = 1)
y = pd.to_numeric(data['Length of Stay'])
Note that no expectations are passed here; in that case, bias scan uses the overall mean of the observations as the expectation for every record.
privileged_subset = bias_scan(data=X, observations=y, scoring='Poisson', favorable_value='low', overpredicted=True, penalty=50, mode='ordinal')
unprivileged_subset = bias_scan(data=X, observations=y, scoring='Poisson', favorable_value='low', overpredicted=False, penalty=50, mode='ordinal')
print(privileged_subset)
print(unprivileged_subset)
({'APR Severity of Illness Description': ['Extreme']}, 11180.5386) ({'Patient Disposition': ['Home or Self Care', 'Left Against Medical Advice', 'Short-term Hospital'], 'APR Severity of Illness Description': ['Minor', 'Moderate'], 'APR MDC Code': [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21]}, 9950.881)
dff = X.copy()
dff['observed'] = y
dff['predicted'] = y.mean()
to_choose = dff[privileged_subset[0].keys()].isin(privileged_subset[0]).all(axis=1)
temp_df = dff.loc[to_choose]
"Our detected privileged group has a size of {}, we observe {} as the average number of days spent in the hospital, but our model predicts {}"\
.format(len(temp_df), np.round(temp_df['observed'].mean(),4), np.round(temp_df['predicted'].mean(),4))
'Our detected privileged group has a size of 1900, we observe 15.2216 as the average number of days spent in the hospital, but our model predicts 5.4231'
to_choose = dff[unprivileged_subset[0].keys()].isin(unprivileged_subset[0]).all(axis=1)
temp_df = dff.loc[to_choose]
"Our detected unprivileged group has a size of {}, we observe {} as the average number of days spent in the hospital, but our model predicts {}"\
.format(len(temp_df), np.round(temp_df['observed'].mean(),4), np.round(temp_df['predicted'].mean(),4))
'Our detected unprivileged group has a size of 14620, we observe 2.8301 as the average number of days spent in the hospital, but our model predicts 5.4231'
This is a regression use case where the favorable value is 'high' (higher temperatures) and the scoring function is BerkJones.
data = pd.read_csv('https://raw.githubusercontent.com/Adebayo-Oshingbesan/data/main/weatherHistory.csv')
data.head()
| | Summary | PrecipType | Humidity | WindSpeed | Visibility | Pressure | DailySummary | Temperature |
|---|---|---|---|---|---|---|---|---|
| 0 | Partly Cloudy | rain | 0.89 | 14.1197 | 15.8263 | 1015.13 | Partly cloudy throughout the day. | 9.472222 |
| 1 | Partly Cloudy | rain | 0.86 | 14.2646 | 15.8263 | 1015.63 | Partly cloudy throughout the day. | 9.355556 |
| 2 | Mostly Cloudy | rain | 0.89 | 3.9284 | 14.9569 | 1015.94 | Partly cloudy throughout the day. | 9.377778 |
| 3 | Partly Cloudy | rain | 0.83 | 14.1036 | 15.8263 | 1016.41 | Partly cloudy throughout the day. | 8.288889 |
| 4 | Mostly Cloudy | rain | 0.83 | 11.0446 | 15.8263 | 1016.51 | Partly cloudy throughout the day. | 8.755556 |
Binning the continuous features, since bias scan supports only categorical features.
for col in ['Humidity', 'WindSpeed', 'Visibility', 'Pressure']:
    data[col] = pd.qcut(data[col], 10, duplicates='drop')
    data[col] = data[col].apply(lambda x: str(round(x.left, 2)) + ' - ' + str(round(x.right, 2)))
features = data.drop('Temperature', axis = 1)
y = data['Temperature']
privileged_subset = bias_scan(data=features, observations=y, favorable_value = 'high',
scoring = 'BerkJones', overpredicted=True, penalty=50, mode ='continuous', alpha = .4)
unprivileged_subset = bias_scan(data=features, observations=y, favorable_value = 'high',
scoring = 'BerkJones', overpredicted=False, penalty=50, mode ='continuous', alpha = .4)
print(privileged_subset)
print(unprivileged_subset)
({'Pressure': ['-0.0 - 1007.07', '1018.17 - 1020.0', '1020.0 - 1022.42', '1022.42 - 1026.61', '1026.61 - 1046.38'], 'Humidity': ['0.72 - 0.78', '0.78 - 0.83', '0.83 - 0.87', '0.87 - 0.92', '0.92 - 0.95', '0.95 - 1.0']}, 6907.8227) ({'Visibility': ['9.9 - 9.98', '9.98 - 10.05', '10.05 - 11.04', '11.04 - 11.45', '11.45 - 15.15', '15.15 - 15.83', '15.83 - 16.1'], 'PrecipType': ['rain'], 'Pressure': ['-0.0 - 1007.07', '1007.07 - 1010.68', '1010.68 - 1012.95', '1012.95 - 1014.8', '1014.8 - 1016.45', '1016.45 - 1018.17', '1018.17 - 1020.0', '1020.0 - 1022.42']}, 19962.4291)
to_choose = data[privileged_subset[0].keys()].isin(privileged_subset[0]).all(axis=1)
temp_df = data.loc[to_choose].copy()
"Our detected privileged group has a size of {}, we observe {} as the mean temperature, but our model predicts {}"\
.format(len(temp_df), temp_df['Temperature'].mean(), y.mean())
'Our detected privileged group has a size of 31607, we observe 5.155584909121915 as the mean temperature, but our model predicts 11.932678437519867'
to_choose = data[unprivileged_subset[0].keys()].isin(unprivileged_subset[0]).all(axis=1)
temp_df = data.loc[to_choose].copy()
"Our detected unprivileged group has a size of {}, we observe {} as the mean temperature, but our model predicts {}"\
.format(len(temp_df), temp_df['Temperature'].mean(), y.mean())
'Our detected unprivileged group has a size of 55642, we observe 16.773802762911167 as the mean temperature, but our model predicts 11.932678437519867'
This is a nominal, multiclass classification use case where the favorable value is a flower species and the scoring function is Bernoulli.
iris_data = pd.read_csv('https://raw.githubusercontent.com/Adebayo-Oshingbesan/data/main/Iris.csv').drop('Id', axis = 1)
iris_data.head()
| | SepalLengthCm | SepalWidthCm | PetalLengthCm | PetalWidthCm | Species |
|---|---|---|---|---|---|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 | Iris-setosa |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 | Iris-setosa |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 | Iris-setosa |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 | Iris-setosa |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 | Iris-setosa |
for col in iris_data.columns:
    if col != 'Species':
        iris_data[col] = pd.qcut(iris_data[col], 10, duplicates='drop')
        iris_data[col] = iris_data[col].apply(lambda x: str(round(x.left, 2)) + ' - ' + str(round(x.right, 2)))
Training a simple model on the data.
X = iris_data.drop('Species', axis = 1)
for col in X.columns:
    X[col] = X[col].astype('category').cat.codes
y = iris_data['Species']
from sklearn.linear_model import LogisticRegression
clf_2 = LogisticRegression(C=1e-3)
clf_2.fit(X, y)
iris_data['Prediction'] = clf_2.predict(X)
features = iris_data.drop(['Species','Prediction'], axis = 1)
expectations = pd.DataFrame(clf_2.predict_proba(X), columns=clf_2.classes_)
Now we run bias scan. In nominal mode, the expectations are a DataFrame with one probability column per class, as produced by predict_proba above.
privileged_subset = bias_scan(data=features, observations=y, expectations=expectations, scoring = 'Bernoulli',
favorable_value = 'Iris-virginica', overpredicted=True, penalty=.05, mode ='nominal')
unprivileged_subset = bias_scan(data=features, observations=y, expectations=expectations, scoring = 'Bernoulli',
favorable_value = 'Iris-virginica', overpredicted=False, penalty=.005, mode ='nominal')
print(privileged_subset)
print(unprivileged_subset)
({'PetalLengthCm': ['1.0 - 1.4', '1.4 - 1.5', '1.5 - 1.7', '1.7 - 3.9', '3.9 - 4.35', '4.35 - 4.64'], 'PetalWidthCm': ['0.1 - 0.2', '0.2 - 0.4', '0.4 - 1.16', '1.16 - 1.3', '1.3 - 1.5']}, 20.0508) ({'SepalLengthCm': ['4.8 - 5.0', '5.6 - 5.8', '6.1 - 6.3', '6.3 - 6.52', '6.52 - 6.9', '6.9 - 7.9'], 'PetalWidthCm': ['1.5 - 1.8', '1.8 - 1.9', '1.9 - 2.2', '2.2 - 2.5'], 'PetalLengthCm': ['4.35 - 4.64', '5.0 - 5.32', '5.32 - 5.8', '5.8 - 6.9']}, 22.101)
to_choose = iris_data[privileged_subset[0].keys()].isin(privileged_subset[0]).all(axis=1)
temp_df = iris_data.loc[to_choose].copy()
"Our detected privileged group has a size of {}, we observe {} as the count of Iris-virginica, but our model predicts {}"\
.format(len(temp_df), (temp_df['Species'] == 'Iris-virginica').sum(), (temp_df['Prediction'] == 'Iris-setosa').sum())
'Our detected privileged group has a size of 88, we observe 0 as the count of Iris-virginica, but our model predicts 50'
to_choose = iris_data[unprivileged_subset[0].keys()].isin(unprivileged_subset[0]).all(axis=1)
temp_df = iris_data.loc[to_choose].copy()
"Our detected unprivileged group has a size of {}, we observe {} as the count of Iris-virginica, but our model predicts {}"\
.format(len(temp_df), (temp_df['Species'] == 'Iris-virginica').sum(), (temp_df['Prediction'] == 'Iris-virginica').sum())
'Our detected unprivileged group has a size of 39, we observe 39 as the count of Iris-virginica, but our model predicts 38'
If we want to scan for additional subgroups beyond the most anomalous one, we can remove the records that belong to a detected subset and then rescan.
to_choose = iris_data[unprivileged_subset[0].keys()].isin(unprivileged_subset[0]).all(axis=1)
X_filtered = iris_data[~to_choose]
y_filtered = y[~to_choose]
privileged_subset = bias_scan(data=X_filtered.drop(['Species','Prediction'], axis = 1), observations=y_filtered,
favorable_value = 'Iris-virginica', scoring = 'Bernoulli', overpredicted=True, penalty=1e-6, mode = 'nominal')
print(privileged_subset)
({'PetalLengthCm': ['1.0 - 1.4', '1.4 - 1.5', '1.5 - 1.7', '1.7 - 3.9', '3.9 - 4.35', '4.35 - 4.64']}, 36.0207)
to_choose = X_filtered[privileged_subset[0].keys()].isin(privileged_subset[0]).all(axis=1)
temp_df = X_filtered.loc[to_choose]
"Our detected privileged group has a size of {}, we observe {} as the count of Iris-virginica, but our model predicts {}"\
.format(len(temp_df), (temp_df['Species'] == 'Iris-virginica').sum(), (temp_df['Prediction'] == 'Iris-virginica').sum())
'Our detected privileged group has a size of 89, we observe 0 as the count of Iris-virginica, but our model predicts 4'
In summary, this notebook showed how to use the new MDSS bias scan interface in aif360.detectors to scan for bias, even in tasks beyond binary classification, using the concepts of overprediction and underprediction.