This notebook contains example usage of the ContrastiveExplanation Python package, so that you can familiarize yourself with its usage flow and functionality. It contains three examples: two explaining classification problems and one explaining a regression problem.
Before we proceed, let us set a seed for reproducibility and import the required packages.
import numpy as np
import pandas as pd
import os.path
import urllib.request
import pprint
from sklearn import datasets, model_selection, ensemble, metrics, pipeline, preprocessing
SEED = np.random.RandomState(1994)
To print out the features of an instance and their corresponding values, we define a function print_sample():
def print_sample(feature_names, sample):
print('\n'.join(f'{name}: {value}' for name, value in zip(feature_names, sample)))
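For example, called on two made-up feature/value pairs it prints one line per feature (the definition is repeated here so the snippet runs on its own):

```python
def print_sample(feature_names, sample):
    print('\n'.join(f'{name}: {value}' for name, value in zip(feature_names, sample)))

# Made-up feature names and values for illustration
print_sample(['sepal length (cm)', 'sepal width (cm)'], [5.1, 3.8])
# sepal length (cm): 5.1
# sepal width (cm): 3.8
```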
First, for classification we use the Iris data set (as also used in the example of README.md). We first load the data set (and print its characteristics), then train an ML model on the data, and finally explain an instance predicted using the model.
data = datasets.load_iris()
print(data['DESCR'])
.. _iris_dataset:

Iris plants dataset
--------------------

**Data Set Characteristics:**

    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
            - Iris-Setosa
            - Iris-Versicolour
            - Iris-Virginica

    :Summary Statistics:

    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)
    ============== ==== ==== ======= ===== ====================

    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

The famous Iris database, first used by Sir R.A. Fisher. The dataset is taken from Fisher's paper. Note that it's the same as in R, but not as in the UCI Machine Learning Repository, which has two wrong data points.

This is perhaps the best known database to be found in the pattern recognition literature. Fisher's paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for example.) The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.

.. topic:: References

   - Fisher, R.A. "The use of multiple measurements in taxonomic problems" Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis. (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System Structure and Classification Rule for Recognition in Partially Exposed Environments". IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al's AUTOCLASS II conceptual clustering system finds 3 classes in the data.
   - Many, many more ...
# Split data in a train/test set and in predictor (x) and target (y) variables
x_train, x_test, y_train, y_test = model_selection.train_test_split(
    data.data, data.target, train_size=0.80, random_state=SEED)
# Train a RandomForestClassifier
model = ensemble.RandomForestClassifier(random_state=SEED, n_estimators=100)
model.fit(x_train, y_train)
# Print out the classifier performance (F1-score)
print('Classifier performance (F1):', metrics.f1_score(y_test, model.predict(x_test), average='weighted'))
Classifier performance (F1): 0.9333333333333333
# Import
import contrastive_explanation as ce
# Select a sample to explain ('questioned data point'): why did the model predict the fact instead of the foil?
sample = x_test[1]
print_sample(data.feature_names, sample)
# Create a domain mapper (map the explanation to meaningful labels for explanation)
dm = ce.domain_mappers.DomainMapperTabular(
    x_train, feature_names=data.feature_names, contrast_names=data.target_names)
# Create the contrastive explanation object (default is a Foil Tree explanator)
exp = ce.ContrastiveExplanation(dm)
# Explain the instance (sample) for the given model
exp.explain_instance_domain(model.predict_proba, sample)
sepal length (cm): 5.1
sepal width (cm): 3.8
petal length (cm): 1.5
petal width (cm): 0.3
"The model predicted 'setosa' instead of 'versicolor' because 'petal length (cm) <= 2.528 and petal width (cm) <= 1.704 and sepal length (cm) <= 5.159'"
We can also manually provide a foil to explain against (e.g. class 'virginica'):
exp.explain_instance_domain(model.predict_proba, sample, foil='virginica')
"The model predicted 'setosa' instead of 'virginica' because 'petal length (cm) <= 5.133 and sepal length (cm) <= 6.059'"
As an additional classification example, we use the Adult Census Income data set from the UCI Machine Learning Repository, showcasing the use of multi-valued categorical features and Pandas DataFrames as input.
# Import
import contrastive_explanation as ce
# Read the adult data set (https://archive.ics.uci.edu/ml/datasets/Adult)
c_file = ce.utils.download_data('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data')
c_df = pd.read_csv(c_file, header=None, skipinitialspace=True)
c_df = c_df.drop([2, 4], axis=1)  # drop the 'fnlwgt' and 'education-num' columns
# Give descriptive names to features
c_features = ['age', 'workclass', 'education', 'marital-status',
'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week',
'native-country']
c_categorical = ['workclass', 'education', 'marital-status', 'occupation',
'relationship', 'race', 'sex', 'native-country']
c_df.columns = c_features + ['class']
c_contrasts = c_df['class'].unique()
# Split into x and y (class feature is last feature)
cx, cy = c_df.iloc[:, :-1], c_df.iloc[:, -1]
c_df.head()
| | age | workclass | education | marital-status | occupation | relationship | race | sex | capital-gain | capital-loss | hours-per-week | native-country | class |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 39 | State-gov | Bachelors | Never-married | Adm-clerical | Not-in-family | White | Male | 2174 | 0 | 40 | United-States | <=50K |
| 1 | 50 | Self-emp-not-inc | Bachelors | Married-civ-spouse | Exec-managerial | Husband | White | Male | 0 | 0 | 13 | United-States | <=50K |
| 2 | 38 | Private | HS-grad | Divorced | Handlers-cleaners | Not-in-family | White | Male | 0 | 0 | 40 | United-States | <=50K |
| 3 | 53 | Private | 11th | Married-civ-spouse | Handlers-cleaners | Husband | Black | Male | 0 | 0 | 40 | United-States | <=50K |
| 4 | 28 | Private | Bachelors | Married-civ-spouse | Prof-specialty | Wife | Black | Female | 0 | 0 | 40 | Cuba | <=50K |
# Split data in a train/test set and in predictor (x) and target (y) variables
cx_train, cx_test, cy_train, cy_test = model_selection.train_test_split(
    cx, cy, train_size=0.80, random_state=SEED)
# Train an AdaBoostClassifier
c_model = pipeline.Pipeline([
    ('label_encoder', ce.CustomLabelEncoder(c_categorical).fit(cx)),
    ('classifier', ensemble.AdaBoostClassifier(random_state=SEED, n_estimators=100))])
c_model.fit(cx_train, cy_train)
# Print out the classifier performance (F1-score)
print('Classifier performance (F1):', metrics.f1_score(cy_test, c_model.predict(cx_test), average='weighted'))
Classifier performance (F1): 0.8653015631373624
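The `ce.CustomLabelEncoder` step is needed because AdaBoostClassifier cannot consume string-valued features directly; it presumably maps each categorical column to integer codes. A rough pandas equivalent of that idea (column names and values here are illustrative, not the package's actual implementation):

```python
import pandas as pd

# Illustrative DataFrame with one numeric and one categorical column
df = pd.DataFrame({'age': [39, 50, 38],
                   'workclass': ['State-gov', 'Self-emp-not-inc', 'Private']})

# Encode the categorical column to integer codes, one code per distinct value
encoded = df.copy()
encoded['workclass'] = encoded['workclass'].astype('category').cat.codes
print(encoded['workclass'].tolist())  # [2, 1, 0] (codes follow alphabetical category order)
```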
# Select a sample to explain ('questioned data point'): why did the model predict the fact instead of the foil?
sample = cx_test.iloc[1]
print(sample)
# Create a domain mapper for the Pandas DataFrame (it will automatically infer feature names)
c_dm = ce.domain_mappers.DomainMapperPandas(cx_train, contrast_names=c_contrasts)
# Create the contrastive explanation object (default is a Foil Tree explanator)
c_exp = ce.ContrastiveExplanation(c_dm)
# Explain the instance (sample) for the given model
c_exp.explain_instance_domain(c_model.predict_proba, sample)
age                            22
workclass                 Private
education            Some-college
marital-status      Never-married
occupation                  Sales
relationship       Other-relative
race                        White
sex                          Male
capital-gain                    0
capital-loss                    0
hours-per-week                 38
native-country      United-States
Name: 18253, dtype: object
"The model predicted '<=50K' instead of '>50K' because 'age <= 42.179 and education /= Bachelors'"
Finally, we explain an instance of the Diabetes regression data set using the same steps as in the classification examples. In addition to the counterfactual explanation (difference between fact and foil), this example also includes the factual explanation (difference between the fact and all foils).
r_data = datasets.load_diabetes()
print(r_data['DESCR'])
.. _diabetes_dataset:

Diabetes dataset
----------------

Ten baseline variables, age, sex, body mass index, average blood pressure, and six blood serum measurements were obtained for each of n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline.

**Data Set Characteristics:**

  :Number of Instances: 442
  :Number of Attributes: First 10 columns are numeric predictive values
  :Target: Column 11 is a quantitative measure of disease progression one year after baseline
  :Attribute Information:
      - Age
      - Sex
      - Body mass index
      - Average blood pressure
      - S1
      - S2
      - S3
      - S4
      - S5
      - S6

Note: Each of these 10 feature variables have been mean centered and scaled by the standard deviation times `n_samples` (i.e. the sum of squares of each column totals 1).

Source URL: https://www4.stat.ncsu.edu/~boos/var.select/diabetes.html

For more information see: Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani (2004) "Least Angle Regression," Annals of Statistics (with discussion), 407-499. (https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf)
# Split data in a train/test set and in predictor (x) and target (y) variables
rx_train, rx_test, ry_train, ry_test = model_selection.train_test_split(
    r_data.data, r_data.target, train_size=0.80, random_state=SEED)
# Train a RandomForestRegressor with hyperparameter tuning (selecting the best n_estimators)
m_cv = ensemble.RandomForestRegressor(random_state=SEED)
r_model = model_selection.GridSearchCV(m_cv, cv=5, param_grid={'n_estimators': [50, 100, 500]})
r_model.fit(rx_train, ry_train)
# Print out the regressor performance
print('Regressor performance (R-squared):', metrics.r2_score(ry_test, r_model.predict(rx_test)))
Regressor performance (R-squared): 0.40216144211319016
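For reference, the R-squared score reported above compares the residual sum of squares of the predictions against the total variance of the targets. A small numeric check with hypothetical values:

```python
import numpy as np

# Hypothetical true targets and predictions
y_true = np.array([100.0, 150.0, 200.0, 250.0])
y_pred = np.array([110.0, 140.0, 210.0, 230.0])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))  # 0.944
```

This matches what `metrics.r2_score(y_true, y_pred)` computes for a single target.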
# Import
import contrastive_explanation as ce
# Select a sample to explain
r_sample = rx_test[1]
print_sample(r_data.feature_names, r_sample)
print('\n')
# Create a domain mapper (still tabular data, but for regression we do not have named labels for the outcome)
r_dm = ce.domain_mappers.DomainMapperTabular(rx_train, feature_names=r_data.feature_names)
# Create the CE object; ensure that 'regression' is set to True.
# Again we use the Foil Tree explanator, but now print out intermediate outcomes and steps (verbose)
r_exp = ce.ContrastiveExplanation(
    r_dm,
    regression=True,
    explanator=ce.explanators.TreeExplanator(verbose=True),
    verbose=False)
# Explain using the model, also include a 'factual' (non-contrastive 'why fact?') explanation
r_exp.explain_instance_domain(r_model.predict, r_sample, include_factual=True)
age: -0.0309423241359475
sex: -0.044641636506989
bmi: 0.00564997867688165
bp: -0.00911348124867051
s1: 0.0190703330528056
s2: 0.00682798258030921
s3: 0.0744115640787594
s4: -0.0394933828740919
s5: -0.0411803851880079
s6: -0.0424987666488135

[E] Explaining with a decision tree...
[E] Fidelity of tree on neighborhood data = 1.0
[E] Found 10 contrastive decision regions, starting from node 2
[E] Found shortest path [25, 23, 24] using strategy "informativeness"
("The model predicted '113.68' instead of 'more than 113.68' because 's2 > -0.0'", "The model predicted '113.68' because 's5 <= -0.012 and bmi <= 0.007 and s5 <= -0.045 and age > 0.024 and sex <= -0.044 and age <= -0.029 and s4 <= 0.003'")