Credit Approval Tutorial

This tutorial illustrates the use of several methods in the AI Explainability 360 Toolkit to provide different kinds of explanations suited to different users in the context of a credit approval process enabled by machine learning. We use data from the FICO Explainable Machine Learning Challenge as described below. The three types of users (a.k.a. consumers) that we consider are a data scientist, who evaluates the machine learning model before deployment, a loan officer, who makes the final decision based on the model's output, and a bank customer, who wants to understand the reasons for their application result.

For the data scientist, we present two directly interpretable rule-based models that provide global understanding of their behavior. These models are produced by the Boolean Rule Column Generation (BRCG, class BooleanRuleCG) and Logistic Rule Regression (LogRR, class LogisticRuleRegression) algorithms in AIX360. The former yields very simple OR-of-ANDs classification rules while the latter gives weighted combinations of rules that are more accurate and still interpretable.

For the loan officer, we demonstrate a different way of explaining machine learning predictions by showing examples, specifically prototypes or representatives in the training data that are similar to a given loan applicant and receive the same class label. We use the ProtoDash method (class ProtodashExplainer) to find these prototypes.

For the bank customer, we consider the Contrastive Explanations Method (CEM, class CEMExplainer) for explaining the predictions of black box models to end users. CEM builds upon the popular approach of highlighting features present in the input instance that are responsible for the model's classification. In addition to these, CEM also identifies features that are (minimally) absent in the input instance, but whose presence would have altered the classification.

The tutorial is organized around these three types of consumers, following an introduction to the dataset.

  1. Introduction to FICO HELOC Dataset
  2. Data Scientist: Boolean Rules and Logistic Rule Regression models
  3. Loan Officer: Similar samples as explanations for predictions based on HELOC Dataset
  4. Customer: Contrastive Explanations for predictions based on HELOC Dataset

1. Introduction to FICO HELOC Dataset

The FICO HELOC dataset contains anonymized information about home equity line of credit (HELOC) applications made by real homeowners. A HELOC is a line of credit typically offered by a US bank as a percentage of home equity (the difference between the current market value of a home and the outstanding balance of all liens, e.g. mortgages). The customers in this dataset have requested a credit line in the range of USD 5,000 - 150,000. The machine learning task we are considering is to use the information about the applicant in their credit report to predict whether they will make timely payments over a two year period. The machine learning prediction can then be used to decide whether the homeowner qualifies for a line of credit and, if so, how much credit should be extended.

The HELOC dataset and more information about it, including instructions to download, can be found here.

The table below reproduces part of the data dictionary that comes with the HELOC dataset, explaining the predictor variables and target variable. For example, NumSatisfactoryTrades is a predictor variable that counts the number of past credit agreements with the applicant, which resulted in on-time payments. The target variable to predict is a binary variable called RiskPerformance. The value “Bad” indicates that an applicant was 90 days past due or worse at least once over a period of 24 months from when the credit account was opened. The value “Good” indicates that they have made their payments without ever being more than 90 days overdue. The relationship between a predictor variable and the target is indicated in the last column of the table. If a predictor variable is monotonically decreasing with respect to probability of bad = 1, it means that as the value of the variable increases, the probability of the loan application being "Bad" decreases, i.e. it becomes more "good". For example, ExternalRiskEstimate and NumSatisfactoryTrades are shown as monotonically decreasing. Monotonically increasing has the opposite meaning.

Field Meaning Monotonicity Constraint (with respect to probability of bad = 1)
ExternalRiskEstimate Consolidated version of risk markers Monotonically Decreasing
MSinceOldestTradeOpen Months Since Oldest Trade Open Monotonically Decreasing
MSinceMostRecentTradeOpen Months Since Most Recent Trade Open Monotonically Decreasing
AverageMInFile Average Months in File Monotonically Decreasing
NumSatisfactoryTrades Number Satisfactory Trades Monotonically Decreasing
NumTrades60Ever2DerogPubRec Number Trades 60+ Ever Monotonically Decreasing
NumTrades90Ever2DerogPubRec Number Trades 90+ Ever Monotonically Decreasing
PercentTradesNeverDelq Percent Trades Never Delinquent Monotonically Decreasing
MSinceMostRecentDelq Months Since Most Recent Delinquency Monotonically Decreasing
MaxDelq2PublicRecLast12M Max Delq/Public Records Last 12 Months. See tab "MaxDelq" for each category. Values 0-7 are monotonically decreasing
MaxDelqEver Max Delinquency Ever. See tab "MaxDelq" for each category. Values 2-8 are monotonically decreasing
NumTotalTrades Number of Total Trades (total number of credit accounts) No constraint
NumTradesOpeninLast12M Number of Trades Open in Last 12 Months Monotonically Increasing
PercentInstallTrades Percent Installment Trades No constraint
MSinceMostRecentInqexcl7days Months Since Most Recent Inq excl 7days Monotonically Decreasing
NumInqLast6M Number of Inq Last 6 Months Monotonically Increasing
NumInqLast6Mexcl7days Number of Inq Last 6 Months excl 7days. Excluding the last 7 days removes inquiries that are likely due to price comparison shopping. Monotonically Increasing
NetFractionRevolvingBurden Net Fraction Revolving Burden. This is revolving balance divided by credit limit Monotonically Increasing
NetFractionInstallBurden Net Fraction Installment Burden. This is installment balance divided by original loan amount Monotonically Increasing
NumRevolvingTradesWBalance Number Revolving Trades with Balance No constraint
NumInstallTradesWBalance Number Installment Trades with Balance No constraint
NumBank2NatlTradesWHighUtilization Number Bank/Natl Trades w high utilization ratio Monotonically Increasing
PercentTradesWBalance Percent Trades with Balance No constraint
RiskPerformance Paid as negotiated flag (12-36 Months). String of Good and Bad Target
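
To make these monotonicity directions concrete, the short sketch below (an addition for illustration, not part of the original notebook) checks the sign of the rank correlation between a few predictors and the "Good" label: a feature listed as monotonically decreasing with respect to probability of bad = 1 should correlate positively with "Good". It assumes the HELOC CSV is already available to aix360 (see the storage notes below).

# Sketch: sanity-check the data dictionary's monotonicity directions.
# Assumes the HELOC CSV has been placed where aix360 can find it.
from scipy.stats import spearmanr
from aix360.datasets.heloc_dataset import HELOCDataset

df_check = HELOCDataset().dataframe()
y_good = (df_check['RiskPerformance'] == 'Good').astype(int)
for col in ['ExternalRiskEstimate', 'NumSatisfactoryTrades', 'NetFractionRevolvingBurden']:
    rho, _ = spearmanr(df_check[col], y_good, nan_policy='omit')
    # Monotonically decreasing w.r.t. P(bad=1) => expect rho > 0;
    # monotonically increasing => expect rho < 0.
    print('%-30s Spearman rho vs. Good: %+.2f' % (col, rho))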

Storing HELOC dataset to run this notebook

  • In this notebook, we assume that the HELOC dataset is saved as ./aix360/data/heloc_data/heloc_dataset.csv, where "." is the root directory of the Git repository, before running a pip install of the aix360 library.
  • If the data is downloaded after installation, place the file in the corresponding folder under site-packages of your virtual environment: path-to-your-virtual-env/lib/python3.6/site-packages/aix360/data/heloc_data/heloc_dataset.csv. (A quick path check is sketched below.)
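
A quick way to confirm where the installed package will look for the file is the minimal sketch below (an addition for illustration; it assumes aix360 has already been pip-installed).

# Sketch: check whether the HELOC CSV is present in the installed aix360 package
# (assumes the aix360 package is importable, i.e. already pip-installed).
import os
import aix360

csv_path = os.path.join(os.path.dirname(aix360.__file__),
                        'data', 'heloc_data', 'heloc_dataset.csv')
print(csv_path, '->', 'found' if os.path.isfile(csv_path) else 'missing')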

2. Data scientist: Boolean Rule and Logistic Rule Regression models

In evaluating a machine learning model for deployment, a data scientist would ideally like to understand the behavior of the model as a whole, not just in specific instances (e.g. specific loan applicants). This is especially true in regulated industries such as banking where higher standards of explainability may be required. For example, the data scientist may have to present the model to: 1) technical and business managers for review before deployment, 2) a lending expert to compare the model to the expert's knowledge, or 3) a regulator to check for compliance. Furthermore, it is common for a model to be deployed in a different geography than the one it was trained on. A global view of the model may uncover problems with overfitting and poor generalization to other geographies before deployment.

Directly interpretable models can provide such global understanding because they have a sufficiently simple form for their workings to be transparent. Below we present two directly interpretable models in the form of a Boolean rule (BR) and a logistic rule regression (LogRR) model. The former is produced by the Boolean Rule Column Generation (BRCG) algorithm while the latter is a generalized linear rule model (GLRM), both implemented in AIX360. While both models are interpretable, they provide different trade-offs between model simplicity and accuracy in predicting loan repayment. BRCG yields a very simple set of rules that has reasonable accuracy. LogRR achieves higher accuracy, higher even than some uninterpretable models, while retaining the form of a linear model. Its interpretation is enhanced by plots as demonstrated below.

2.1. Load and process data for BRCG and LogRR

We use the HELOCDataset class in AIX360 to load the FICO HELOC data as a DataFrame. The setting custom_preprocessing=nan_preprocessing converts special values in the data (coded as negative integers) to np.nan, which can be handled properly by BRCG and LogRR, as opposed to replacing them with zeros or mean values. The data is then split into training and test sets using a fixed random seed.

In [20]:
# Load FICO HELOC data with special values converted to np.nan
from aix360.datasets.heloc_dataset import HELOCDataset, nan_preprocessing
data = HELOCDataset(custom_preprocessing=nan_preprocessing).data()
# Separate target variable
y = data.pop('RiskPerformance')

# Split data into training and test sets using fixed random seed
from sklearn.model_selection import train_test_split
dfTrain, dfTest, yTrain, yTest = train_test_split(data, y, random_state=0, stratify=y)
dfTrain.head().transpose()
Out[20]:
8960 8403 1949 4886 4998
ExternalRiskEstimate 64.0 57.0 59.0 65.0 65.0
MSinceOldestTradeOpen 175.0 47.0 168.0 228.0 117.0
MSinceMostRecentTradeOpen 6.0 9.0 3.0 5.0 7.0
AverageMInFile 97.0 35.0 38.0 69.0 48.0
NumSatisfactoryTrades 29.0 5.0 21.0 24.0 7.0
NumTrades60Ever2DerogPubRec 9.0 1.0 0.0 3.0 1.0
NumTrades90Ever2DerogPubRec 9.0 0.0 0.0 2.0 1.0
PercentTradesNeverDelq 63.0 50.0 100.0 85.0 78.0
MSinceMostRecentDelq 2.0 16.0 NaN 3.0 36.0
MaxDelq2PublicRecLast12M 4.0 6.0 7.0 0.0 6.0
MaxDelqEver 4.0 5.0 8.0 2.0 4.0
NumTotalTrades 41.0 10.0 21.0 27.0 9.0
NumTradesOpeninLast12M 1.0 1.0 12.0 1.0 2.0
PercentInstallTrades 63.0 30.0 38.0 31.0 56.0
MSinceMostRecentInqexcl7days 0.0 0.0 0.0 7.0 7.0
NumInqLast6M 1.0 2.0 1.0 0.0 0.0
NumInqLast6Mexcl7days 1.0 2.0 1.0 0.0 0.0
NetFractionRevolvingBurden 16.0 66.0 85.0 13.0 54.0
NetFractionInstallBurden 94.0 70.0 90.0 66.0 69.0
NumRevolvingTradesWBalance 1.0 2.0 10.0 3.0 2.0
NumInstallTradesWBalance 1.0 2.0 5.0 2.0 3.0
NumBank2NatlTradesWHighUtilization NaN 0.0 4.0 0.0 1.0
PercentTradesWBalance 50.0 57.0 94.0 46.0 83.0

BRCG and LogRR require non-binary features to be binarized using the provided FeatureBinarizer class. We use the default of nine quantile thresholds (i.e. 10 bins) to binarize ordinal (including continuous-valued) features, include all negations (e.g. '>' comparisons as well as '<='), and also return standardized versions of the original unbinarized ordinal features, which are used by LogRR but not BRCG. Below is the result of binarizing the first 'ExternalRiskEstimate' feature.

In [21]:
# Binarize data and also return standardized ordinal features
from aix360.algorithms.rbm import FeatureBinarizer
fb = FeatureBinarizer(negations=True, returnOrd=True)
dfTrain, dfTrainStd = fb.fit_transform(dfTrain)
dfTest, dfTestStd = fb.transform(dfTest)
dfTrain['ExternalRiskEstimate'].head()
/Applications/anaconda3/lib/python3.7/site-packages/aix360/algorithms/rbm/features.py:154: RuntimeWarning: invalid value encountered in less_equal
  Anew = (data[c].values[:, np.newaxis] <= thresh[c]).astype(int)
/Applications/anaconda3/lib/python3.7/site-packages/aix360/algorithms/rbm/features.py:154: RuntimeWarning: invalid value encountered in less_equal
  Anew = (data[c].values[:, np.newaxis] <= thresh[c]).astype(int)
Out[21]:
operation <= > == !=
value 59.0 63.0 66.0 69.0 72.0 75.0 78.0 82.0 86.0 59.0 63.0 66.0 69.0 72.0 75.0 78.0 82.0 86.0 NaN NaN
8960 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1
8403 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1
1949 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1
4886 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1
4998 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 1

2.2. Run Boolean Rule Column Generation (BRCG)

First we consider BRCG, which is designed to produce a very simple OR-of-ANDs rule (known more formally as disjunctive normal form, DNF) or alternatively an AND-of-ORs rule (conjunctive normal form, CNF) to predict whether an applicant will repay the loan on time (Y = 1). For a binary classification problem such as we have here, a DNF rule is equivalent to a rule set, where AND clauses in the DNF correspond to individual rules in the rule set. Furthermore, it can be shown that a CNF rule for Y = 1 is equivalent to a DNF rule for Y = 0 [1]. BRCG is distinguished by its use of the optimization technique of column generation to search the space of possible clauses, which is exponential in size. To learn more about column generation, please see our NeurIPS paper [2].

For this dataset, we find that a CNF rule for Y = 1 (i.e. a DNF for Y = 0, enabled by setting CNF=True) is slightly better than a DNF rule for Y = 1. The model complexity parameters lambda0 and lambda1 penalize the number of clauses in the rule and the number of conditions in each clause. We use the default values of 1e-3 for lambda0 and lambda1 (decreasing them did not increase accuracy here) and leave other parameters at their defaults as well. The model is then trained, evaluated, and printed.

In [22]:
# Instantiate BRCG with small complexity penalties; CNF=True learns a DNF rule for Y=0
from aix360.algorithms.rbm import BooleanRuleCG
br = BooleanRuleCG(lambda0=1e-3, lambda1=1e-3, CNF=True)

# Train, print, and evaluate model
br.fit(dfTrain, yTrain)
from sklearn.metrics import accuracy_score
print('Training accuracy:', accuracy_score(yTrain, br.predict(dfTrain)))
print('Test accuracy:', accuracy_score(yTest, br.predict(dfTest)))
print('Predict Y=0 if ANY of the following rules are satisfied, otherwise Y=1:')
print(br.explain()['rules'])
Learning CNF rule with complexity parameters lambda0=0.001, lambda1=0.001
Initial LP solved
Iteration: 1, Objective: 0.2895
Iteration: 2, Objective: 0.2895
Iteration: 3, Objective: 0.2895
Iteration: 4, Objective: 0.2895
Iteration: 5, Objective: 0.2864
Iteration: 6, Objective: 0.2864
Iteration: 7, Objective: 0.2864
Training accuracy: 0.719573146021883
Test accuracy: 0.696515397082658
Predict Y=0 if ANY of the following rules are satisfied, otherwise Y=1:
['ExternalRiskEstimate <= 75.00 AND NumSatisfactoryTrades <= 17.00', 'ExternalRiskEstimate <= 72.00 AND NumSatisfactoryTrades > 17.00']

The returned DNF rule for Y = 0 is indeed very simple with only two clauses, each involving the same two features. It is interesting to see that such a rule can already achieve 69.7% accuracy. 'ExternalRiskEstimate' is a consolidated version of some risk markers (higher is better), while 'NumSatisfactoryTrades' is the number of satisfactory credit accounts. It makes sense therefore that for applicants with more than 17 satisfactory accounts, the ExternalRiskEstimate threshold dividing good (Y = 1) and bad (Y = 0) credit risk is slightly lower (more lenient) than for applicants with fewer satisfactory accounts.
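
As a sanity check, the printed rule can also be applied by hand to the unbinarized data. The sketch below is an addition for illustration; it assumes the data and y variables from Section 2.1, with y encoded (or encodable) as 1 = "Good", and its NaN handling may differ slightly from the binarized model's.

# Sketch: apply the two learned clauses directly to the unbinarized data.
# Assumes `data` and `y` from Section 2.1, with y encoded as 1 = "Good";
# rows with NaN in these features simply fail both clauses here.
import numpy as np

clause1 = (data['ExternalRiskEstimate'] <= 75) & (data['NumSatisfactoryTrades'] <= 17)
clause2 = (data['ExternalRiskEstimate'] <= 72) & (data['NumSatisfactoryTrades'] > 17)
y_hand = np.where(clause1 | clause2, 0, 1)  # Y=0 if ANY clause fires, otherwise Y=1
y_good = (y == 'Good').astype(int) if y.dtype == object else y
print('Agreement with labels:', (y_hand == np.asarray(y_good)).mean())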

We note that AIX360 includes only a heuristic beam search version of BRCG. The published version of BRCG [2] (not implemented in AIX360) uses integer programming to yield slightly more complex rules that are also more accurate (close to 72% test accuracy).

2.3. Run Logistic Rule Regression (LogRR)

Next we consider a LogRR model, which can improve accuracy at the cost of a more complex but still interpretable model. Specifically, LogRR fits a logistic regression model using rule-based features, where column generation is again used to generate promising candidates from the space of all possible rules. Here we are also including unbinarized ordinal features (useOrd=True) in addition to rules. Similar to BRCG, the complexity parameters lambda0, lambda1 penalize the number of rules included in the model and the number of conditions in each rule. The values for lambda0, lambda1 below strike a good balance between accuracy and model complexity, based on our published experience with the FICO HELOC dataset [3].

In [36]:
# Instantiate LRR with good complexity penalties and numerical features
from aix360.algorithms.rbm import LogisticRuleRegression
lrr = LogisticRuleRegression(lambda0=0.005, lambda1=0.001, useOrd=True)

# Train, print, and evaluate model
lrr.fit(dfTrain, yTrain, dfTrainStd)
print('Training accuracy:', accuracy_score(yTrain, lrr.predict(dfTrain, dfTrainStd)))
print('Test accuracy:', accuracy_score(yTest, lrr.predict(dfTest, dfTestStd)))
print('Probability of Y=1 is predicted as logistic(z) = 1 / (1 + exp(-z))')
print('where z is a linear combination of the following rules/numerical features:')
lrr.explain()
Training accuracy: 0.742536809401594
Test accuracy: 0.7260940032414911
Probability of Y=1 is predicted as logistic(z) = 1 / (1 + exp(-z))
where z is a linear combination of the following rules/numerical features:
Out[36]:
rule/numerical feature coefficient
0 (intercept) -0.0684696
1 MSinceMostRecentInqexcl7days > 0.00 0.680258
2 ExternalRiskEstimate 0.654171
3 NetFractionRevolvingBurden -0.554063
4 NumSatisfactoryTrades 0.551644
5 NumInqLast6M -0.463222
6 NumBank2NatlTradesWHighUtilization -0.448346
7 AverageMInFile <= 52.00 -0.434366
8 NumRevolvingTradesWBalance <= 5.00 0.421533
9 MaxDelq2PublicRecLast12M <= 5.00 -0.418156
10 PercentInstallTrades > 50.00 -0.317581
11 NumSatisfactoryTrades <= 12.00 -0.31248
12 MSinceMostRecentDelq <= 21.00 -0.301572
13 PercentTradesNeverDelq <= 95.00 -0.273936
14 ExternalRiskEstimate > 75.00 0.263452
15 AverageMInFile <= 84.00 -0.182134
16 PercentTradesNeverDelq 0.166524
17 AverageMInFile 0.150683
18 PercentInstallTrades > 42.00 -0.148731
19 NumBank2NatlTradesWHighUtilization <= 0.00 0.135388
20 MSinceOldestTradeOpen <= 122.00 -0.132505
21 PercentTradesNeverDelq <= 91.00 -0.117713
22 NumSatisfactoryTrades <= 17.00 -0.110228
23 ExternalRiskEstimate > 72.00 0.107617
24 NumInqLast6M > 0.00 -0.0993614
25 MSinceOldestTradeOpen <= 146.00 -0.0966503
26 PercentInstallTrades <= 42.00 0.0916733
27 MSinceMostRecentInqexcl7days <= 0.00 -0.0900543
28 AverageMInFile <= 61.00 -0.0794703
29 AverageMInFile <= 76.00 -0.072278
30 NetFractionRevolvingBurden <= 39.00 0.0627657
31 MSinceOldestTradeOpen > 122.00 0.060358
32 NetFractionRevolvingBurden <= 50.00 0.0455664
33 MSinceOldestTradeOpen 0.0421272
34 ExternalRiskEstimate > 69.00 0.0354293
35 PercentTradesWBalance <= 73.00 -0.0345454
36 MSinceOldestTradeOpen > 146.00 0.024503

The test accuracy of LogRR is significantly better than that of BRCG and even better than the neural network in the Loan Officer and Customer sections. The LogRR model remains directly interpretable as it is a logistic regression model that uses the 36 rule-based and ordinal features shown above (in addition to an intercept term). Rules are distinguished by having one or more conditions on feature values (e.g. AverageMInFile <= 52.0) while ordinal features are marked by just the feature name without conditions (e.g. ExternalRiskEstimate). Being a linear model, feature importance is naturally given by the model coefficients and thus the list is sorted in order of decreasing coefficient magnitude. The list can be truncated if the user wishes to display fewer features.
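
As an example of such truncation, here is a minimal sketch (an addition; it assumes the 'coefficient' column shown above, coercing it to numeric in case it is stored as strings):

# Sketch: show only the 10 features with the largest coefficient magnitudes.
import pandas as pd

dfx = lrr.explain()
coef = pd.to_numeric(dfx['coefficient'], errors='coerce')
print(dfx.loc[coef.abs().sort_values(ascending=False).index].head(10))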

Since the rules in this LogRR model happen to all be single conditions on individual features, the model contains no interactions between features. It is therefore a kind of generalized additive model (GAM), i.e. a sum of functions of individual features, where these functions are themselves sums of step function components from rules and linear components from unbinarized ordinal features. Thus a better way to visualize the model is by plotting the univariate functions that make up the GAM, as we do next.

2.4. Visualize LogRR model as a Generalized Additive Model (GAM)

We use the visualize() method of LogisticRuleRegression to plot the functions in the GAM that corresponds to the LogRR model (more generally, visualize() plots the GAM part of a LogRR model, excluding higher-degree rules). The plots show the sizes and shapes of the model's dependences on individual features. These can then be compared to a lending expert's knowledge. In the present case, all plots indicate that the model behaves as we would expect with some interesting nuances.

The 36 features shown above involve only 14 of the original features in the data (not including the intercept), as verified below. For example, ExternalRiskEstimate appears in its unbinarized form in row 2 above and also in 3 rules (rows 14, 23, 34).

In [37]:
dfx = lrr.explain()
# Separate 1st-degree rules into (feature, operation, value) to count unique features
dfx2 = dfx['rule/numerical feature'].str.split(' ', expand=True)
dfx2.columns = ['feature','operation','value']
dfx2['feature'].nunique() # includes intercept
Out[37]:
15

It follows that there are 14 functions to plot, which we organize into semantic groups below to ease interpretation.

ExternalRiskEstimate

As expected from the BRCG Boolean rule above, 'ExternalRiskEstimate' is an important feature positively correlated with good credit risk. The jumps in the plot indicate that applicants with above average 'ExternalRiskEstimate' (the mean is 72) get an additional boost.

In [38]:
lrr.visualize(data, fb, ['ExternalRiskEstimate']);
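
A quick check of the stated mean (sketch; uses the unbinarized data from Section 2.1):

# Sketch: verify the average ExternalRiskEstimate quoted above (NaNs are skipped).
print('Mean ExternalRiskEstimate: %.1f' % data['ExternalRiskEstimate'].mean())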

Credit inquiries

The next two plots illustrate the dependence on the applicant's credit inquiries. The first plot shows a significant penalty for having less than one month since the most recent inquiry ('MSinceMostRecentInqexcl7days' = 0).

In [39]:
lrr.visualize(data, fb, ['MSinceMostRecentInqexcl7days']);

The second shows that predicted risk increases with the number of inquiries in the last six months ('NumInqLast6M').

In [40]:
lrr.visualize(data, fb, ['NumInqLast6M']);

Debt level

The following four plots relate to the applicant's debt level. 'NetFractionRevolvingBurden' is the ratio of revolving debt (e.g. credit card) balance to credit limit, expressed as a percentage, and has a large negative impact on the probability of good credit. A small fraction of applicants (less than 1%) actually have NetFractionRevolvingBurden greater than 100%, i.e. more revolving debt than their credit limit. This might be investigated further by the data scientist.

In [41]:
lrr.visualize(data, fb, ['NetFractionRevolvingBurden']);
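
A quick check of this claim (sketch; again uses the unbinarized data from Section 2.1):

# Sketch: fraction of applicants whose revolving balance exceeds their credit limit.
frac = (data['NetFractionRevolvingBurden'] > 100).mean()
print('Applicants with NetFractionRevolvingBurden > 100%%: %.2f%%' % (100 * frac))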

The second 'NumBank2NatlTradesWHighUtilization' plot shows that the number of accounts ("trades") with high utilization (high balance relative to credit limit for each account) also has a large impact, with a drop as soon as one account has high utilization.

In [42]:
lrr.visualize(data, fb, ['NumBank2NatlTradesWHighUtilization']);

The third plot shows that the model gives a bonus to applicants who carry balances on no more than five revolving debt accounts.

In [43]:
lrr.visualize(data, fb, ['NumRevolvingTradesWBalance']);

The fourth shows an effect from the percentage of accounts with a balance that is much smaller than those from other features.

In [44]:
lrr.visualize(data, fb, ['PercentTradesWBalance']);

Number and type of accounts

The number of "satisfactory" accounts ("trades") has a significant positive effect on the predicted probability of good credit, with jumps at 12 and 17 accounts.

In [45]:
lrr.visualize(data, fb, ['NumSatisfactoryTrades']);

However, having more than 40% as installment debt accounts (e.g. car loans) is seen as a negative.

In [46]:
lrr.visualize(data, fb, ['PercentInstallTrades']);

Length of credit history

The 'AverageMInFile' plot shows that most of the benefit of having a longer average credit history accrues between average ages of 52 and 84 months (four to seven years).

In [47]:
lrr.visualize(data, fb, ['AverageMInFile']);

Similar but smaller gains come when the age of the oldest account ('MSinceOldestTradeOpen') exceeds 122 and 146 months (10-12 years).

In [48]:
lrr.visualize(data, fb, ['MSinceOldestTradeOpen']);

Delinquencies

The last set of plots looks at the effect of delinquencies. The first plot shows that much of the change due to the percentage of accounts that were never delinquent ('PercentTradesNeverDelq') occurs between 90% and 100%.

In [49]:
lrr.visualize(data, fb, ['PercentTradesNeverDelq']);

'MaxDelq2PublicRecLast12M' measures the severity of the applicant's worst delinquency from the last 12 months of the public record. A value of 5 or below indicates that some delinquency has occurred, whether of unknown duration, 30/60/90/120 days delinquent, or a derogatory comment.

In [50]:
lrr.visualize(data, fb, ['MaxDelq2PublicRecLast12M']);

According to the last 'MSinceMostRecentDelq' plot, the effect of the most recent delinquency wears off after 21 months.

In [51]:
lrr.visualize(data, fb, ['MSinceMostRecentDelq']);

3. Loan Officer: Prototypical explanations for HELOC use case

We now show how to generate explanations in the form of prototypical or similar user profiles for an applicant in question, which a bank employee such as a loan officer may be interested in. These may help the employee understand the decision to accept or reject an applicant's HELOC application in the context of other, similar applications. Note that the selected prototypes are profiles from the training set used to train an AI model that predicts good or bad (i.e. approved or rejected) for these applications. In fact, the method can work even when we are given not just one but a set of user profiles for which we want to find similar profiles in the training data. Additionally, the method computes a weight for each prototype indicating how similar it is to the user(s) in question.

The prototypical explanations in AIX360 are obtained using the Protodash algorithm developed in the following work: ProtoDash: Fast Interpretable Prototype Selection

We now provide a brief overview of the method. It takes as input a datapoint (or group of datapoints) that we want to explain with respect to instances in a training set belonging to the same feature space. It then selects a prespecified number of instances from the training set so as to minimize the maximum mean discrepancy (MMD) between the selected instances and the datapoints we want to explain. In other words, it tries to select training instances whose distribution matches that of the datapoints we want to explain. The selection is greedy with quality guarantees, and the method also returns importance weights for the chosen prototypical training instances, indicating how similar/representative they are.
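
To make the objective concrete, here is a minimal sketch of an empirical MMD^2 estimate under an RBF kernel. It is an illustration only, not the AIX360 implementation: the kernel choice and bandwidth are assumptions, and ProtoDash additionally learns nonnegative importance weights for the selected prototypes.

# Sketch of the (squared) maximum mean discrepancy between two samples.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def mmd_squared(X, Y, gamma=0.1):
    # Unweighted empirical MMD^2 between samples X and Y under an RBF kernel
    return (rbf_kernel(X, X, gamma=gamma).mean()
            + rbf_kernel(Y, Y, gamma=gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma=gamma).mean())

rng = np.random.RandomState(0)
applicant = rng.randn(1, 23)    # one applicant with 23 features (toy data)
candidates = rng.randn(5, 23)   # five candidate prototypes (toy data)
print('MMD^2:', mmd_squared(applicant, candidates))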

In this tutorial, we will see two examples of obtaining prototypes, one for a user whose HELOC application was approved and another for a user whose HELOC application was rejected. In each case, we showcase the top five prototypes from the training data along with how similar the feature values were for these prototypes.

Example 1. Obtaining similar samples as explanations for a HELOC applicant predicted as "Good"
Example 2. Obtaining similar samples as explanations for a HELOC applicant predicted as "Bad"

Why Protodash?

Before we showcase the two examples, we provide some motivation for using this method. The method selects applications from the training set that are similar in different ways to the user application we want to explain. For example, a user's loan may be justifiably rejected because their number of satisfactory trades is low, similar to one rejected user, or because their debts are too high, similar to a different rejected user. Either of these reasons in isolation may be sufficient for rejection, and the method is able to surface a variety of such reasons through the selected prototypes. This is not the case with standard nearest-neighbor techniques based on metrics such as Euclidean distance or cosine similarity, where one might repeatedly get the same type of explanation (e.g. only applications with a low number of satisfactory trades). Protodash is thus able to provide a much more well-rounded and comprehensive view of why the decision for the applicant may be justifiable.

Another benefit of the method is that, since it performs distribution matching between the user(s) in question and those available in the training set, it could in principle also be applied in non-i.i.d. settings such as time series data. Other approaches that find similar profiles using standard distance measures (e.g. Euclidean, cosine) do not have this property. Additionally, we can highlight the important features of each prototype that make it similar to the user(s) in question.

Import statements

Import necessary libraries, frameworks and algorithms.

In [52]:
import pandas as pd
import numpy as np
import tensorflow as tf
from keras.models import Sequential, Model, load_model, model_from_json
from keras.layers import Dense
import matplotlib.pyplot as plt
from IPython.core.display import display, HTML

from aix360.algorithms.contrastive import CEMExplainer, KerasClassifier
from aix360.algorithms.protodash import ProtodashExplainer
from aix360.datasets.heloc_dataset import HELOCDataset

Load HELOC dataset and show sample applicants

In [53]:
heloc = HELOCDataset()
df = heloc.dataframe()
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 24)
pd.set_option('display.width', 1000)
print("Size of HELOC dataset:", df.shape)
print("Number of \"Good\" applicants:", np.sum(df['RiskPerformance']=='Good'))
print("Number of \"Bad\" applicants:", np.sum(df['RiskPerformance']=='Bad'))
print("Sample Applicants:")
df.head(10).transpose()
/anaconda3/envs/aix360/lib/python3.6/site-packages/aix360/datasets/heloc_dataset.py:31: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  df[col][df[col].isin([-7, -8, -9])] = 0
Size of HELOC dataset: (10459, 24)
Number of "Good" applicants: 5000
Number of "Bad" applicants: 5459
Sample Applicants:
Out[53]:
0 1 2 3 4 5 6 7 8 9
ExternalRiskEstimate 55 61 67 66 81 59 54 68 59 61
MSinceOldestTradeOpen 144 58 66 169 333 137 88 148 324 79
MSinceMostRecentTradeOpen 4 15 5 1 27 11 7 7 2 4
AverageMInFile 84 41 24 73 132 78 37 65 138 36
NumSatisfactoryTrades 20 2 9 28 12 31 25 17 24 19
NumTrades60Ever2DerogPubRec 3 4 0 1 0 0 0 0 0 0
NumTrades90Ever2DerogPubRec 0 4 0 1 0 0 0 0 0 0
PercentTradesNeverDelq 83 100 100 93 100 91 92 83 85 95
MSinceMostRecentDelq 2 -7 -7 76 -7 1 9 31 5 5
MaxDelq2PublicRecLast12M 3 0 7 6 7 4 4 6 4 4
MaxDelqEver 5 8 8 6 8 6 6 6 6 6
NumTotalTrades 23 7 9 30 12 32 26 18 27 19
NumTradesOpeninLast12M 1 0 4 3 0 1 3 1 1 3
PercentInstallTrades 43 67 44 57 25 47 58 44 26 26
MSinceMostRecentInqexcl7days 0 0 0 0 0 0 0 0 0 0
NumInqLast6M 0 0 4 5 1 0 4 0 1 6
NumInqLast6Mexcl7days 0 0 4 4 1 0 4 0 1 6
NetFractionRevolvingBurden 33 0 53 72 51 62 89 28 68 31
NetFractionInstallBurden -8 -8 66 83 89 93 76 48 -8 86
NumRevolvingTradesWBalance 8 0 4 6 3 12 7 2 7 5
NumInstallTradesWBalance 1 -8 2 4 1 4 7 2 1 3
NumBank2NatlTradesWHighUtilization 1 -8 1 3 0 3 2 2 3 1
PercentTradesWBalance 69 0 86 91 80 94 100 40 90 62
RiskPerformance Bad Bad Bad Bad Bad Bad Good Good Bad Bad
In [54]:
# Plot (example) distributions for two features
print("Distribution of ExternalRiskEstimate and NumSatisfactoryTrades columns:")
hist = df.hist(column=['ExternalRiskEstimate', 'NumSatisfactoryTrades'], bins=10)
Distribution of ExternalRiskEstimate and NumSatisfactoryTrades columns:

Step 1: Process and Normalize HELOC dataset for training

We will first process the HELOC dataset before using it to train an NN model that can predict the target variable RiskPerformance. The HELOC dataset is a tabular dataset with numerical values. However, some of the values are negative and need to be filtered. The processed data is stored in the file heloc.npz for easy access. The dataset is also normalized for training.

The data processing and the type of model built here differ from those used for the Data Scientist persona above, where rule-based methods are showcased. This is why we go through these steps again for the Loan Officer persona.

a. Process the dataset

In [55]:
# Clean data and split dataset into train/test
(Data, x_train, x_test, y_train_b, y_test_b) = heloc.split()

b. Normalize the dataset

In [56]:
Z = np.vstack((x_train, x_test))
Zmax = np.max(Z, axis=0)
Zmin = np.min(Z, axis=0)

#normalize an array of samples to range [-0.5, 0.5]
def normalize(V):
    VN = (V - Zmin)/(Zmax - Zmin)
    VN = VN - 0.5
    return(VN)
    
# rescale a sample to recover original values for normalized values. 
def rescale(X):
    return(np.multiply ( X + 0.5, (Zmax - Zmin) ) + Zmin)

N = normalize(Z)
xn_train = N[0:x_train.shape[0], :]
xn_test  = N[x_train.shape[0]:, :]
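
A quick sanity check (sketch): rescale should invert normalize up to floating-point error, assuming no feature column of Z is constant (otherwise Zmax - Zmin is zero).

# Sketch: rescale(normalize(.)) should recover the original values.
print(np.allclose(rescale(normalize(x_train[:5])), x_train[:5]))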

Step 2. Define and train a NN classifier

Let us now build a loan approval model based on the HELOC dataset.

a. Define NN architecture

We now define the architecture of a 2-layer neural network classifier whose predictions we will try to interpret.

In [57]:
# nn with no softmax
def nn_small():
    model = Sequential()
    model.add(Dense(10, input_dim=23, kernel_initializer='normal', activation='relu'))
    model.add(Dense(2, kernel_initializer='normal'))    
    return model    

b. Train the NN

In [58]:
# Set random seeds for repeatability
np.random.seed(1) 
tf.set_random_seed(2) 

class_names = ['Bad', 'Good']

# loss function
def fn(correct, predicted):
    return tf.nn.softmax_cross_entropy_with_logits(labels=correct, logits=predicted)

# compile and print model summary
nn = nn_small()
nn.compile(loss=fn, optimizer='adam', metrics=['accuracy'])
nn.summary()


# train model or load a trained model
TRAIN_MODEL = False

if (TRAIN_MODEL): 
    nn.fit(xn_train, y_train_b, batch_size=128, epochs=500, verbose=1, shuffle=False)
    nn.save_weights("heloc_nnsmall.h5")     
else:    
    nn.load_weights("heloc_nnsmall.h5")
        

# evaluate model accuracy        
score = nn.evaluate(xn_train, y_train_b, verbose=0) #Compute training set accuracy
#print('Train loss:', score[0])
print('Train accuracy:', score[1])

score = nn.evaluate(xn_test, y_test_b, verbose=0) #Compute test set accuracy
#print('Test loss:', score[0])
print('Test accuracy:', score[1])
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_7 (Dense)              (None, 10)                240       
_________________________________________________________________
dense_8 (Dense)              (None, 2)                 22        
=================================================================
Total params: 262
Trainable params: 262
Non-trainable params: 0
_________________________________________________________________
Train accuracy: 0.7387545589625827
Test accuracy: 0.7224473257698542

Step 3: Obtain similar samples as explanations for a HELOC applicant predicted as "Good" (Example 1)

a. Normalize the data and choose a particular applicant, whose profile is displayed below.

In [59]:
p_train = nn.predict_classes(xn_train) # Use trained neural network to predict train points
p_train = p_train.reshape((p_train.shape[0],1))

z_train = np.hstack((xn_train, p_train)) # Store (normalized) instances that were predicted as Good
z_train_good = z_train[z_train[:,-1]==1, :]

zun_train = np.hstack((x_train, p_train)) # Store (unnormalized) instances that were predicted as Good 
zun_train_good = zun_train[zun_train[:,-1]==1, :]

Let us now consider applicant 8, whose loan was approved. Note that this applicant was also considered for the contrastive explainer; however, we now justify the approved status in a different manner using prototypical examples, which is arguably a better explanation for a bank employee.

In [60]:
idx = 8

X = xn_test[idx].reshape((1,) + xn_test[idx].shape)
print("Chosen Sample:", idx)
print("Prediction made by the model:", class_names[np.argmax(nn.predict_proba(X))])
print("Prediction probabilities:", nn.predict_proba(X))
print("")

# attach the prediction made by the model to X
X = np.hstack((X, nn.predict_classes(X).reshape((1,1))))

Xun = x_test[idx].reshape((1,) + x_test[idx].shape) 
dfx = pd.DataFrame.from_records(Xun.astype('double')) # Create dataframe with original feature values
dfx[23] = class_names[X[0, -1]]
dfx.columns = df.columns
dfx.transpose()
Chosen Sample: 8
Prediction made by the model: Good
Prediction probabilities: [[-0.1889221   0.29527372]]

/anaconda3/envs/aix360/lib/python3.6/site-packages/keras/engine/sequential.py:247: UserWarning: Network returning invalid probability values. The last layer might not normalize predictions into probabilities (like softmax or sigmoid would).
  warnings.warn('Network returning invalid probability values. '
Out[60]:
0
ExternalRiskEstimate 82
MSinceOldestTradeOpen 280
MSinceMostRecentTradeOpen 13
AverageMInFile 102
NumSatisfactoryTrades 22
NumTrades60Ever2DerogPubRec 0
NumTrades90Ever2DerogPubRec 0
PercentTradesNeverDelq 91
MSinceMostRecentDelq 26
MaxDelq2PublicRecLast12M 6
MaxDelqEver 6
NumTotalTrades 23
NumTradesOpeninLast12M 0
PercentInstallTrades 9
MSinceMostRecentInqexcl7days 0
NumInqLast6M 0
NumInqLast6Mexcl7days 0
NetFractionRevolvingBurden 3
NetFractionInstallBurden 0
NumRevolvingTradesWBalance 4
NumInstallTradesWBalance 1
NumBank2NatlTradesWHighUtilization 1
PercentTradesWBalance 42
RiskPerformance Good
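
Note that the final Dense layer of the network has no softmax, which is why predict_proba returns raw logits here (hence the warning and the negative value above). A minimal sketch (an addition, not part of the original notebook) to turn the logits into proper class probabilities:

# Sketch: convert the raw logits of the softmax-free network into class
# probabilities for the chosen applicant (class order follows class_names = ['Bad', 'Good']).
logits = nn.predict(xn_test[idx].reshape(1, -1))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print('P(Bad), P(Good):', probs[0])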

b. Find similar applicants predicted as "good" using the protodash explainer.

In [61]:
explainer = ProtodashExplainer()
(W, S, setValues) = explainer.explain(X, z_train_good, m=5) # Return weights W, Prototypes S and objective function values
     pcost       dcost       gap    pres   dres
 0:  0.0000e+00 -2.0000e+04  4e+00  1e+00  1e+00
 1:  1.8207e+01 -2.2985e+05  5e+01  1e+00  1e+00
 2: -1.6771e+00 -1.4132e+06  3e+02  1e+00  1e+00
 3:  6.4653e-01 -7.7669e+06  2e+03  1e+00  1e+00
 4:  9.0963e-01 -1.6930e+08  3e+04  1e+00  1e+00
 5:  6.8400e-01 -8.7461e+10  2e+07  1e+00  1e+00
 6:  2.1065e+08 -1.7700e+18  2e+18  6e-13  9e-03
 7:  2.1065e+08 -1.7700e+16  2e+16  6e-15  1e-03
 8:  2.1065e+08 -1.7700e+14  2e+14  4e-16  3e-05
 9:  2.1065e+08 -1.7706e+12  2e+12  2e-16  5e-07
10:  2.1059e+08 -1.8270e+10  2e+10  2e-16  6e-09
11:  2.0548e+08 -7.3263e+08  9e+08  2e-16  6e-10
12:  5.4547e+06 -5.0769e+08  5e+08  2e-16  2e-11
13:  2.4579e+06 -1.0151e+07  1e+07  3e-16  8e-13
14:  3.9731e+05 -4.8259e+05  9e+05  2e-16  2e-13
15:  5.6807e+04 -6.2926e+04  1e+05  2e-16  4e-14
16:  8.0641e+03 -9.1700e+03  2e+04  1e-16  1e-14
17:  1.1237e+03 -1.3430e+03  2e+03  8e-17  3e-14
18:  1.4817e+02 -2.0491e+02  4e+02  9e-17  2e-15
19:  1.5650e+01 -3.4597e+01  5e+01  2e-16  8e-16
20: -6.5180e-01 -7.5158e+00  7e+00  3e-16  7e-16
21: -2.1215e+00 -2.8262e+00  7e-01  1e-16  6e-17
22: -2.2224e+00 -2.3257e+00  1e-01  5e-17  2e-17
23: -2.2551e+00 -2.2713e+00  2e-02  8e-17  8e-17
24: -2.2583e+00 -2.2599e+00  2e-03  3e-16  7e-17
25: -2.2584e+00 -2.2585e+00  5e-05  9e-17  2e-16
26: -2.2584e+00 -2.2584e+00  5e-07  8e-17  7e-17
Optimal solution found.
     pcost       dcost       gap    pres   dres
 0:  0.0000e+00 -3.0000e+04  6e+00  1e+00  1e+00
 1:  3.0722e+01 -4.4267e+05  9e+01  1e+00  1e+00
 2: -1.6074e+00 -1.6114e+06  3e+02  1e+00  1e+00
 3:  1.4698e+00 -5.8978e+06  1e+03  1e+00  1e+00
 4:  5.1359e+00 -5.6757e+07  1e+04  1e+00  1e+00
 5:  8.8032e+00 -6.9908e+09  1e+06  1e+00  1e+00
 6:  1.8944e+08 -1.6526e+17  2e+17  6e-13  3e-04
 7:  1.8944e+08 -1.6526e+15  2e+15  6e-15  2e-04
 8:  1.8944e+08 -1.6526e+13  2e+13  2e-16  1e-06
 9:  1.8943e+08 -1.6612e+11  2e+11  2e-16  2e-08
10:  1.8825e+08 -2.5115e+09  3e+09  2e-16  5e-10
11:  1.2280e+08 -5.4548e+08  7e+08  7e-17  1e-07
12:  1.8756e+07 -9.4847e+07  1e+08  4e-16  2e-12
13:  3.6747e+06 -5.2922e+06  9e+06  5e-17  5e-13
14:  5.3007e+05 -5.8496e+05  1e+06  2e-16  2e-13
15:  7.5911e+04 -8.5173e+04  2e+05  2e-16  2e-13
16:  1.0792e+04 -1.2212e+04  2e+04  1e-16  4e-14
17:  1.5104e+03 -1.7838e+03  3e+03  3e-16  7e-15
18:  2.0196e+02 -2.6962e+02  5e+02  2e-16  6e-15
19:  2.2751e+01 -4.4473e+01  7e+01  2e-16  2e-15
20:  1.6485e-01 -9.1332e+00  9e+00  2e-16  4e-16
21: -2.0349e+00 -3.0759e+00  1e+00  3e-16  4e-16
22: -2.1758e+00 -2.4046e+00  2e-01  5e-17  2e-16
23: -2.2521e+00 -2.3210e+00  7e-02  2e-16  7e-17
24: -2.2594e+00 -2.2652e+00  6e-03  2e-16  5e-17
25: -2.2601e+00 -2.2604e+00  3e-04  3e-16  7e-17
26: -2.2601e+00 -2.2601e+00  3e-06  2e-16  9e-17
27: -2.2601e+00 -2.2601e+00  3e-08  1e-16  3e-17
Optimal solution found.
     pcost       dcost       gap    pres   dres
 0:  0.0000e+00 -4.0000e+04  8e+00  1e+00  1e+00
 1:  4.4367e+01 -7.1824e+05  1e+02  1e+00  1e+00
 2: -2.0468e+00 -3.1903e+06  7e+02  1e+00  1e+00
 3:  1.2538e+01 -1.4991e+07  3e+03  1e+00  1e+00
 4:  1.8503e+01 -3.6431e+08  7e+04  1e+00  1e+00
 5:  1.4872e+01 -4.6590e+11  1e+08  1e+00  1e+00
 6:  1.8484e+08 -7.2574e+18  7e+18  5e-13  9e-03
 7:  1.8484e+08 -7.2574e+16  7e+16  5e-15  5e-03
 8:  1.8484e+08 -7.2574e+14  7e+14  1e-16  8e-05
 9:  1.8484e+08 -7.2586e+12  7e+12  3e-17  7e-07
10:  1.8482e+08 -7.3749e+10  7e+10  2e-16  8e-09
11:  1.8327e+08 -1.8914e+09  2e+09  2e-16  3e-10
12:  6.6101e+07 -3.5884e+08  4e+08  3e-16  3e-08
13:  1.4131e+07 -2.4415e+07  4e+07  2e-16  1e-09
14:  2.0607e+06 -2.3295e+06  4e+06  2e-16  6e-13
15:  2.9628e+05 -3.3354e+05  6e+05  2e-16  3e-13
16:  4.2328e+04 -4.7437e+04  9e+04  2e-16  8e-14
17:  5.9948e+03 -6.8520e+03  1e+04  1e-16  2e-14
18:  8.3044e+02 -1.0092e+03  2e+03  4e-16  1e-14
19:  1.0739e+02 -1.5582e+02  3e+02  3e-16  3e-15
20:  1.0233e+01 -2.7124e+01  4e+01  2e-16  1e-15
21: -1.3211e+00 -6.3304e+00  5e+00  1e-16  3e-16
22: -2.2395e+00 -2.6971e+00  5e-01  2e-16  1e-16
23: -2.2596e+00 -2.2756e+00  2e-02  3e-16  1e-16
24: -2.2616e+00 -2.2630e+00  1e-03  1e-16  1e-16
25: -2.2617e+00 -2.2617e+00  2e-05  9e-17  1e-16
26: -2.2617e+00 -2.2617e+00  2e-07  2e-16  6e-17
Optimal solution found.
/anaconda3/envs/aix360/lib/python3.6/site-packages/cvxopt/coneprog.py:2111: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  if 'x' in initvals:
/anaconda3/envs/aix360/lib/python3.6/site-packages/cvxopt/coneprog.py:2116: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  if 's' in initvals:
/anaconda3/envs/aix360/lib/python3.6/site-packages/cvxopt/coneprog.py:2131: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  if 'y' in initvals:
/anaconda3/envs/aix360/lib/python3.6/site-packages/cvxopt/coneprog.py:2136: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  if 'z' in initvals:
     pcost       dcost       gap    pres   dres
 0:  0.0000e+00 -5.0000e+04  1e+01  1e+00  1e+00
 1:  6.4530e+01 -1.0740e+06  2e+02  1e+00  1e+00
 2: -1.0387e+00 -3.8672e+06  8e+02  1e+00  1e+00
 3:  1.7593e+01 -1.4129e+07  3e+03  1e+00  1e+00
 4:  2.4754e+01 -1.7515e+08  4e+04  1e+00  1e+00
 5:  2.8355e+01 -4.2312e+10  9e+06  1e+00  1e+00
 6:  2.6599e+08 -9.5334e+17  1e+18  4e-13  1e-03
 7:  2.6599e+08 -9.5334e+15  1e+16  4e-15  9e-04
 8:  2.6599e+08 -9.5336e+13  1e+14  1e-16  6e-06
 9:  2.6599e+08 -9.5545e+11  1e+12  1e-16  9e-08
10:  2.6558e+08 -1.1640e+10  1e+10  1e-16  2e-09
11:  2.4039e+08 -2.0164e+09  2e+09  2e-16  2e-08
12:  2.9390e+07 -1.5952e+09  2e+09  2e-16  7e-09
13:  1.0754e+07 -3.7180e+07  5e+07  2e-16  2e-10
14:  1.6461e+06 -1.9697e+06  4e+06  2e-16  2e-12
15:  2.3560e+05 -2.6053e+05  5e+05  1e-16  2e-13
16:  3.3615e+04 -3.7633e+04  7e+04  2e-16  5e-14
17:  4.7521e+03 -5.4485e+03  1e+04  2e-16  2e-14
18:  6.5532e+02 -8.0583e+02  1e+03  1e-16  9e-15
19:  8.3449e+01 -1.2556e+02  2e+02  9e-17  4e-15
20:  7.2389e+00 -2.2354e+01  3e+01  2e-16  7e-16
21: -1.5947e+00 -5.4973e+00  4e+00  2e-16  6e-16
22: -2.2383e+00 -2.5578e+00  3e-01  2e-16  1e-16
23: -2.2526e+00 -2.2903e+00  4e-02  2e-16  7e-17
24: -2.2616e+00 -2.2685e+00  7e-03  3e-16  8e-17
25: -2.2622e+00 -2.2630e+00  8e-04  2e-16  1e-16
26: -2.2622e+00 -2.2622e+00  2e-05  2e-16  2e-16
27: -2.2622e+00 -2.2622e+00  2e-07  2e-16  2e-16
Optimal solution found.

c. Display similar applicant profiles and the extent to which they are similar to the chosen applicant, as indicated by the last row of the table below, labelled "Weight".

In [62]:
dfs = pd.DataFrame.from_records(zun_train_good[S, 0:-1].astype('double'))
RP=[]
for i in range(S.shape[0]):
    RP.append(class_names[z_train_good[S[i], -1]]) # Append class names
dfs[23] = RP
dfs.columns = df.columns  
dfs["Weight"] = np.around(W, 5)/np.sum(np.around(W, 5)) # Calculate normalized importance weights
dfs.transpose()
Out[62]:
0 1 2 3 4
ExternalRiskEstimate 85 89 77 83 73
MSinceOldestTradeOpen 223 379 338 789 230
MSinceMostRecentTradeOpen 13 156 2 6 5
AverageMInFile 87 257 109 102 89
NumSatisfactoryTrades 23 3 16 41 61
NumTrades60Ever2DerogPubRec 0 0 2 0 0
NumTrades90Ever2DerogPubRec 0 0 2 0 0
PercentTradesNeverDelq 91 100 90 100 100
MSinceMostRecentDelq 26 0 65 0 0
MaxDelq2PublicRecLast12M 6 7 6 7 6
MaxDelqEver 6 8 2 8 7
NumTotalTrades 26 3 21 41 37
NumTradesOpeninLast12M 0 0 1 1 3
PercentInstallTrades 9 33 14 17 18
MSinceMostRecentInqexcl7days 1 0 0 0 0
NumInqLast6M 1 0 1 1 2
NumInqLast6Mexcl7days 1 0 1 0 2
NetFractionRevolvingBurden 4 0 2 1 59
NetFractionInstallBurden 0 0 0 0 72
NumRevolvingTradesWBalance 4 0 1 3 9
NumInstallTradesWBalance 1 0 1 0 1
NumBank2NatlTradesWHighUtilization 0 0 0 1 7
PercentTradesWBalance 50 0 22 23 53
RiskPerformance Good Good Good Good Good
Weight 0.730222 0.0690562 0.0978593 0.0498047 0.0530578

d. Compute how similar a feature of a prototypical user is to the chosen applicant.

The more similar a feature of a prototypical user is to the corresponding feature of the applicant, the closer its weight is to 1. We can see below that several features of the prototypes are quite similar to those of the chosen applicant. A human-friendly explanation is provided thereafter.

In [63]:
z = z_train_good[S, 0:-1] # Store chosen prototypes
eps = 1e-10 # Small constant defined to eliminate divide-by-zero errors
fwt = np.zeros(z.shape)
for i in range (z.shape[0]):
    for j in range(z.shape[1]):
        fwt[i, j] = np.exp(-1 * abs(X[0, j] - z[i,j])/(np.std(z[:, j])+eps)) # Compute feature similarity in [0,1]
                
# move wts to a dataframe to display
dfw = pd.DataFrame.from_records(np.around(fwt.astype('double'), 2))
dfw.columns = df.columns[:-1]
dfw.transpose()        
Out[63]:
0 1 2 3 4
ExternalRiskEstimate 0.59 0.29 0.42 0.84 0.21
MSinceOldestTradeOpen 0.76 0.62 0.76 0.09 0.79
MSinceMostRecentTradeOpen 1.00 0.09 0.83 0.89 0.87
AverageMInFile 0.79 0.09 0.90 1.00 0.82
NumSatisfactoryTrades 0.95 0.39 0.74 0.39 0.15
NumTrades60Ever2DerogPubRec 1.00 1.00 0.08 1.00 1.00
NumTrades90Ever2DerogPubRec 1.00 1.00 0.08 1.00 1.00
PercentTradesNeverDelq 1.00 0.15 0.81 0.15 0.15
MSinceMostRecentDelq 1.00 0.36 0.22 0.36 0.36
MaxDelq2PublicRecLast12M 1.00 0.13 1.00 0.13 1.00
MaxDelqEver 1.00 0.41 0.17 0.41 0.64
NumTotalTrades 0.80 0.23 0.86 0.26 0.35
NumTradesOpeninLast12M 1.00 1.00 0.40 0.40 0.06
PercentInstallTrades 1.00 0.05 0.54 0.37 0.33
MSinceMostRecentInqexcl7days 0.08 1.00 1.00 1.00 1.00
NumInqLast6M 0.21 1.00 0.21 0.21 0.04
NumInqLast6Mexcl7days 0.26 1.00 0.26 1.00 0.07
NetFractionRevolvingBurden 0.96 0.88 0.96 0.92 0.09
NetFractionInstallBurden 1.00 1.00 1.00 1.00 0.08
NumRevolvingTradesWBalance 1.00 0.28 0.38 0.73 0.20
NumInstallTradesWBalance 1.00 0.13 1.00 0.13 1.00
NumBank2NatlTradesWHighUtilization 0.69 0.69 0.69 1.00 0.11
PercentTradesWBalance 0.67 0.12 0.36 0.38 0.57

Explanation:

The above table depicts the five user profiles closest to the chosen applicant. Based on the importance weights output by the method, we see that the prototype in column 0 is by far the most representative user profile. This is (intuitively) confirmed by the feature-similarity table above, where more than 50% of the features (12 out of 23) of this prototype are identical to those of the chosen user whose prediction we want to explain. Also, a bank employee looking at the prototypical users and their features may surmise that the approved applicant belongs to a group of approved users who carry practically no installment debt (NetFractionInstallBurden). This justification gives the employee more confidence in approving the user's application.

Example 2. Obtaining similar samples as explanations for a HELOC applicant predicted as "Bad".

We now consider user 1272, whose loan was denied. We obtained a contrastive explanation for this user before. Similar to user 8, we now obtain exemplar-based explanations for this user to help the bank employee understand the reasons for the rejection. The steps are similar to Example 1: we first process the data, then obtain prototypes and their importance weights, and finally show how similar the features of these prototypes are to those of the user we want to explain.

a. Normalize the data and choose a particular applicant, whose profile is displayed below.

In [64]:
z_train_bad = z_train[z_train[:,-1]==0, :]
zun_train_bad = zun_train[zun_train[:,-1]==0, :]
In [65]:
idx = 1272 #another user to try 2385

X = xn_test[idx].reshape((1,) + xn_test[idx].shape)
print("Chosen Sample:", idx)
print("Prediction made by the model:", class_names[np.argmax(nn.predict_proba(X))])
print("Prediction probabilities:", nn.predict_proba(X))
print("")

X = np.hstack((X, nn.predict_classes(X).reshape((1,1))))

# move samples to a dataframe to display
Xun = x_test[idx].reshape((1,) + x_test[idx].shape)
dfx = pd.DataFrame.from_records(Xun.astype('double'))
dfx[23] = class_names[X[0, -1]]
dfx.columns = df.columns
dfx.transpose()
Chosen Sample: 1272
Prediction made by the model: Bad
Prediction probabilities: [[ 0.40682057 -0.391679  ]]

Out[65]:
0
ExternalRiskEstimate 65
MSinceOldestTradeOpen 256
MSinceMostRecentTradeOpen 15
AverageMInFile 52
NumSatisfactoryTrades 17
NumTrades60Ever2DerogPubRec 0
NumTrades90Ever2DerogPubRec 0
PercentTradesNeverDelq 100
MSinceMostRecentDelq 0
MaxDelq2PublicRecLast12M 7
MaxDelqEver 8
NumTotalTrades 19
NumTradesOpeninLast12M 0
PercentInstallTrades 29
MSinceMostRecentInqexcl7days 2
NumInqLast6M 5
NumInqLast6Mexcl7days 5
NetFractionRevolvingBurden 57
NetFractionInstallBurden 79
NumRevolvingTradesWBalance 2
NumInstallTradesWBalance 4
NumBank2NatlTradesWHighUtilization 2
PercentTradesWBalance 60
RiskPerformance Bad

b. Find similar applicants predicted as "bad" using the protodash explainer.

In [66]:
(W, S, setValues) = explainer.explain(X, z_train_bad, m=5) # Return weights W, Prototypes S and objective function values
     pcost       dcost       gap    pres   dres
 0:  0.0000e+00 -2.0000e+04  4e+00  1e+00  1e+00
 1:  1.3951e+01 -1.8757e+05  4e+01  1e+00  1e+00
 2: -1.0452e+00 -2.4808e+06  5e+02  1e+00  1e+00
 3:  9.9094e-01 -3.7820e+07  8e+03  1e+00  1e+00
 4:  1.2044e+00 -5.7710e+09  1e+06  1e+00  1e+00
 5:  1.6105e+08 -1.3427e+17  1e+17  7e-13  7e-04
 6:  1.6105e+08 -1.3427e+15  1e+15  7e-15  2e-04
 7:  1.6105e+08 -1.3427e+13  1e+13  2e-16  4e-06
 8:  1.6104e+08 -1.3473e+11  1e+11  3e-17  2e-08
 9:  1.6063e+08 -1.8053e+09  2e+09  3e-16  6e-10
10:  1.2950e+08 -3.8084e+08  5e+08  2e-16  1e-09
11:  6.9589e+06 -2.2750e+08  2e+08  5e-16  8e-12
12:  2.4924e+06 -4.9489e+06  7e+06  1e-16  5e-13
13:  3.7960e+05 -4.1688e+05  8e+05  3e-17  4e-13
14:  5.4362e+04 -6.0989e+04  1e+05  6e-17  2e-13
15:  7.7281e+03 -8.7442e+03  2e+04  3e-16  3e-14
16:  1.0814e+03 -1.2777e+03  2e+03  3e-17  4e-15
17:  1.4452e+02 -1.9320e+02  3e+02  3e-16  9e-15
18:  1.6236e+01 -3.1901e+01  5e+01  2e-16  2e-15
19:  7.9417e-02 -6.5709e+00  7e+00  4e-16  4e-16
20: -1.5034e+00 -2.2383e+00  7e-01  3e-16  2e-16
21: -1.6411e+00 -1.7669e+00  1e-01  9e-17  7e-17
22: -1.6773e+00 -1.6963e+00  2e-02  1e-16  1e-16
23: -1.6801e+00 -1.6816e+00  1e-03  5e-17  4e-17
24: -1.6802e+00 -1.6802e+00  2e-05  2e-16  6e-17
25: -1.6802e+00 -1.6802e+00  2e-07  2e-16  2e-16
Optimal solution found.
     pcost       dcost       gap    pres   dres
 0:  0.0000e+00 -3.0000e+04  6e+00  1e+00  1e+00
 1:  2.2652e+01 -3.4494e+05  7e+01  1e+00  1e+00
 2: -9.7166e-01 -1.5165e+06  3e+02  1e+00  1e+00
 3:  8.8732e-01 -6.4545e+06  1e+03  1e+00  1e+00
 4:  4.0619e+00 -9.5492e+07  2e+04  1e+00  1e+00
 5:  6.8342e+00 -3.0797e+10  7e+06  1e+00  1e+00
 6:  1.4054e+08 -6.7268e+17  7e+17  4e-13  2e-03
 7:  1.4054e+08 -6.7268e+15  7e+15  3e-15  7e-04
 8:  1.4054e+08 -6.7269e+13  7e+13  3e-16  2e-05
 9:  1.4054e+08 -6.7332e+11  7e+11  2e-16  2e-07
   [... solver iteration log truncated; each of the three optimization subproblems solved here terminated with "Optimal solution found." ...]
Optimal solution found.

c. Display similar applicant profiles and the extent to which they are similar to the chosen applicant, as indicated by the last row of the table below, labelled "Weight".

In [67]:
# move samples to a dataframe to display
dfs = pd.DataFrame.from_records(zun_train_bad[S, 0:-1].astype('double'))
RP=[]
for i in range(S.shape[0]):
    RP.append(class_names[z_train_bad[S[i], -1]]) # Append class names
dfs[23] = RP # Add the class labels as the last column
dfs.columns = df.columns # Reuse the original HELOC column names
dfs["Weight"] = np.around(W, 5)/np.sum(np.around(W, 5)) # Compute normalized importance weights for prototypes
dfs.transpose()
Out[67]:
0 1 2 3 4
ExternalRiskEstimate 73 61 64 55 0
MSinceOldestTradeOpen 191 125 85 194 383
MSinceMostRecentTradeOpen 17 7 0 26 383
AverageMInFile 53 32 13 100 383
NumSatisfactoryTrades 19 5 2 18 1
NumTrades60Ever2DerogPubRec 0 1 0 0 1
NumTrades90Ever2DerogPubRec 0 1 0 0 1
PercentTradesNeverDelq 100 100 100 84 100
MSinceMostRecentDelq 0 0 0 1 0
MaxDelq2PublicRecLast12M 7 7 7 4 6
MaxDelqEver 8 8 8 6 8
NumTotalTrades 20 6 9 11 1
NumTradesOpeninLast12M 0 3 8 0 0
PercentInstallTrades 25 60 33 42 100
MSinceMostRecentInqexcl7days 0 0 0 23 0
NumInqLast6M 0 1 66 0 1
NumInqLast6Mexcl7days 0 1 66 0 1
NetFractionRevolvingBurden 31 232 65 84 0
NetFractionInstallBurden 78 83 0 48 0
NumRevolvingTradesWBalance 4 1 2 5 0
NumInstallTradesWBalance 3 3 3 3 0
NumBank2NatlTradesWHighUtilization 1 1 1 3 0
PercentTradesWBalance 54 100 71 100 0
RiskPerformance Bad Bad Bad Bad Bad
Weight 0.781763 0.0822525 0.0573946 0.0642844 0.0143057

d. Compute how similar each feature of the prototypical users is to the corresponding feature of the chosen applicant.

The more similar a feature of a prototypical user is to the corresponding feature of the applicant, the closer its weight is to 1. We can see below that several features of the prototypes are quite similar to those of the chosen applicant. A human-friendly explanation based on this table is provided after it.
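
Concretely, the similarity score computed in the cell below for feature j of prototype i is

    w_{ij} = \exp\left( -\frac{|x_j - z_{ij}|}{\sigma_j + \epsilon} \right)

where x is the chosen applicant, z_i is the i-th prototype, \sigma_j is the standard deviation of feature j across the selected prototypes, and \epsilon is a small constant that guards against division by zero.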

In [68]:
z = z_train_bad[S, 0:-1] # Store the prototypes
eps = 1e-10 # Small constant to guard against divide by zero errors
fwt = np.zeros(z.shape)
for i in range (z.shape[0]): # Compute feature similarity for each prototype
    for j in range(z.shape[1]):
        fwt[i, j] = np.exp(-1 * abs(X[0, j] - z[i,j])/(np.std(z[:, j])+eps))
                
# move wts to a dataframe to display
dfw = pd.DataFrame.from_records(np.around(fwt.astype('double'), 2))
dfw.columns = df.columns[:-1]
dfw.transpose()        
Out[68]:
0 1 2 3 4
ExternalRiskEstimate 0.73 0.86 0.96 0.68 0.08
MSinceOldestTradeOpen 0.53 0.28 0.19 0.55 0.29
MSinceMostRecentTradeOpen 0.99 0.95 0.90 0.93 0.08
AverageMInFile 0.99 0.86 0.75 0.70 0.09
NumSatisfactoryTrades 0.78 0.22 0.15 0.88 0.13
NumTrades60Ever2DerogPubRec 1.00 0.13 1.00 1.00 0.13
NumTrades90Ever2DerogPubRec 1.00 0.13 1.00 1.00 0.13
PercentTradesNeverDelq 1.00 1.00 1.00 0.08 1.00
MSinceMostRecentDelq 1.00 1.00 1.00 0.08 1.00
MaxDelq2PublicRecLast12M 1.00 1.00 1.00 0.08 0.42
MaxDelqEver 1.00 1.00 1.00 0.08 1.00
NumTotalTrades 0.85 0.13 0.20 0.28 0.06
NumTradesOpeninLast12M 1.00 0.38 0.08 1.00 1.00
PercentInstallTrades 0.86 0.31 0.86 0.61 0.07
MSinceMostRecentInqexcl7days 0.80 0.80 0.80 0.10 0.80
NumInqLast6M 0.83 0.86 0.10 0.83 0.86
NumInqLast6Mexcl7days 0.83 0.86 0.10 0.83 0.86
NetFractionRevolvingBurden 0.72 0.11 0.91 0.71 0.49
NetFractionInstallBurden 0.97 0.90 0.11 0.42 0.11
NumRevolvingTradesWBalance 0.34 0.58 1.00 0.20 0.34
NumInstallTradesWBalance 0.43 0.43 0.43 0.43 0.04
NumBank2NatlTradesWHighUtilization 0.36 0.36 0.36 0.36 0.13
PercentTradesWBalance 0.85 0.34 0.74 0.34 0.20

Explanation:

Here again, the table above pertains to the five user profiles closest to the chosen applicant, this time in terms of per-feature similarity. Based on the importance weights output by the method, we see that the prototype in column zero is by far the most representative user profile. This is (intuitively) confirmed by the feature similarity table above, where 10 out of 23 features of this prototype are highly similar (> 0.9) to those of the applicant we want to explain. The loan officer can also see that the applicant belongs to a group of rejected applicants with similar delinquency behavior. Realizing that the applicant poses a similar risk to these other applicants whose loans were rejected, the loan officer takes the more conservative decision of rejecting this application as well.

4. Customer: Contrastive explanations for HELOC Use Case

We now demonstrate how to compute contrastive explanations using AIX360 and how such explanations can help homeowners understand the decisions made by AI models that approve or reject their HELOC applications.

Typically, homeowners who do not qualify for a line of credit would like to understand why, and what changes to their application would qualify them. On the other hand, those who do qualify might want to know what factors led to the approval of their application.

In this context, contrastive explanations tell applicants what minimal changes to their profile would have changed the decision of the AI model from reject to accept, or vice versa (pertinent negatives). For example, increasing the number of satisfactory trades to a certain value may have led to the acceptance of the application, everything else being the same.

The method presented here also highlights a minimal set of features and their values that would still maintain the original decision (pertinent positives). For example, for an applicant whose HELOC application was approved, the explanation may say that even if the number of satisfactory trades were reduced to a lower number, the loan would still have been approved.

Additionally, organizations (banks, financial institutions, etc.) would like to understand trends in the behavior of their AI models when approving loan applications, which could be done by studying contrastive explanations for individuals whose loans were either accepted or rejected. By looking at the aggregate statistics of pertinent positives for approved applicants, an organization can gain insight into which minimal sets of features, and which values of those features, play an important role in acceptances. By studying the aggregate statistics of pertinent negatives, it can identify the features that could change the status of rejected applicants and potentially uncover ways an applicant might game the system by altering unimportant features to change the model's outcome.
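
As a rough illustration of such aggregate analysis (not part of the original notebook), the sketch below averages the absolute pertinent-negative deviation per feature over a batch of applicants. It reuses the CEMExplainer call, the rescale helper, and the CEM arguments that are set up in Step 3 below; the helper name aggregate_pn_deltas is hypothetical.

import numpy as np
import pandas as pd

def aggregate_pn_deltas(explainer, X_batch, cem_args, feature_names):
    deltas = []
    for x in X_batch:
        x = x.reshape((1, -1))  # CEMExplainer expects a 2-D input
        adv_pn, _, _ = explainer.explain_instance(x, 'PN', *cem_args)
        deltas.append(np.abs(rescale(adv_pn) - rescale(x)).ravel())  # deviation in original units
    # Mean absolute change per feature, largest first
    return pd.Series(np.mean(deltas, axis=0), index=feature_names).sort_values(ascending=False)

# Example usage once Step 3 has defined explainer, rescale, df and the arg_* values:
# cem_args = (my_AE_model, arg_kappa, arg_b, arg_max_iter, arg_init_const, arg_beta, arg_gamma)
# aggregate_pn_deltas(explainer, xn_test[:20], cem_args, df.columns[:-1])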

The contrastive explanations in AIX360 are implemented using the algorithm developed in the following work:

Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives

We now provide a brief overview of the method. As mentioned above, the algorithm outputs a contrastive explanation consisting of two parts: a) pertinent negatives (PNs) and b) pertinent positives (PPs). PNs identify a minimal set of features which, if altered, would change the classification of the original input. For example, in the loan case, if a person's credit score were increased, their loan application status might change from reject to accept. The method accomplishes this by optimizing a loss on the change in prediction probability while enforcing an elastic net penalty that keeps the change in features and their values minimal. Optionally, an auto-encoder may also be used to force these minimal changes to produce realistic PNs. PPs, on the other hand, identify a minimal set of features and their values that are sufficient to yield the original input's classification. For example, an individual's loan may still be accepted if their salary were 50K as opposed to 100K. Here again an elastic net term keeps the amount of information needed minimal; however, the first loss term in this case tries to make the original input's class the winning class. For a more in-depth discussion, please refer to the above work.
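
As a sketch of the pertinent-negative objective (notation ours, closely following the paper above), the method searches for a perturbation \delta of the input x_0, whose original class is t_0, that approximately solves

    \min_{\delta} \; c \cdot \max\Big\{ [\mathrm{Pred}(x_0+\delta)]_{t_0} - \max_{i \neq t_0} [\mathrm{Pred}(x_0+\delta)]_i,\; -\kappa \Big\} \;+\; \beta \lVert \delta \rVert_1 \;+\; \lVert \delta \rVert_2^2 \;+\; \gamma \lVert x_0 + \delta - \mathrm{AE}(x_0+\delta) \rVert_2^2

Here \kappa, \beta and \gamma correspond to the arg_kappa, arg_beta and arg_gamma parameters used later, and c is the coefficient of the main loss term, initialized by arg_init_const and updated arg_b times. The pertinent-positive objective has the same elastic net and auto-encoder terms but applies them to \delta alone and flips the first term so that the original class remains the winning class.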

The three main steps to obtain a contrastive explanation are shown below. The first two steps are more about processing the data and building an AI model while the third step computes the actual explanation.

Step 1. Process and Normalize HELOC dataset for training
Step 2. Define and train a NN classifier
Step 3. Compute contrastive explanations for a few applicants

Load HELOC dataset and show sample applicants

In [69]:
heloc = HELOCDataset()
df = heloc.dataframe()
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 24)
pd.set_option('display.width', 1000)
print("Size of HELOC dataset:", df.shape)
print("Number of \"Good\" applicants:", np.sum(df['RiskPerformance']=='Good'))
print("Number of \"Bad\" applicants:", np.sum(df['RiskPerformance']=='Bad'))
print("Sample Applicants:")
df.head(10).transpose()
/anaconda3/envs/aix360/lib/python3.6/site-packages/aix360/datasets/heloc_dataset.py:31: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame

See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  df[col][df[col].isin([-7, -8, -9])] = 0
Size of HELOC dataset: (10459, 24)
Number of "Good" applicants: 5000
Number of "Bad" applicants: 5459
Sample Applicants:
Out[69]:
0 1 2 3 4 5 6 7 8 9
ExternalRiskEstimate 55 61 67 66 81 59 54 68 59 61
MSinceOldestTradeOpen 144 58 66 169 333 137 88 148 324 79
MSinceMostRecentTradeOpen 4 15 5 1 27 11 7 7 2 4
AverageMInFile 84 41 24 73 132 78 37 65 138 36
NumSatisfactoryTrades 20 2 9 28 12 31 25 17 24 19
NumTrades60Ever2DerogPubRec 3 4 0 1 0 0 0 0 0 0
NumTrades90Ever2DerogPubRec 0 4 0 1 0 0 0 0 0 0
PercentTradesNeverDelq 83 100 100 93 100 91 92 83 85 95
MSinceMostRecentDelq 2 -7 -7 76 -7 1 9 31 5 5
MaxDelq2PublicRecLast12M 3 0 7 6 7 4 4 6 4 4
MaxDelqEver 5 8 8 6 8 6 6 6 6 6
NumTotalTrades 23 7 9 30 12 32 26 18 27 19
NumTradesOpeninLast12M 1 0 4 3 0 1 3 1 1 3
PercentInstallTrades 43 67 44 57 25 47 58 44 26 26
MSinceMostRecentInqexcl7days 0 0 0 0 0 0 0 0 0 0
NumInqLast6M 0 0 4 5 1 0 4 0 1 6
NumInqLast6Mexcl7days 0 0 4 4 1 0 4 0 1 6
NetFractionRevolvingBurden 33 0 53 72 51 62 89 28 68 31
NetFractionInstallBurden -8 -8 66 83 89 93 76 48 -8 86
NumRevolvingTradesWBalance 8 0 4 6 3 12 7 2 7 5
NumInstallTradesWBalance 1 -8 2 4 1 4 7 2 1 3
NumBank2NatlTradesWHighUtilization 1 -8 1 3 0 3 2 2 3 1
PercentTradesWBalance 69 0 86 91 80 94 100 40 90 62
RiskPerformance Bad Bad Bad Bad Bad Bad Good Good Bad Bad
In [70]:
# Plot (example) distributions for two features
print("Distribution of ExternalRiskEstimate and NumSatisfactoryTrades columns:")
hist = df.hist(column=['ExternalRiskEstimate', 'NumSatisfactoryTrades'], bins=10)
Distribution of ExternalRiskEstimate and NumSatisfactoryTrades columns:

Step 1. Process and Normalize HELOC dataset for training

We will first process the HELOC dataset before using it to train an NN model that can predict the target variable RiskPerformance. The HELOC dataset is a tabular dataset with numerical values. However, some entries are special negative codes (such as -7, -8 and -9) and need to be filtered out before training. The processed data is stored in the file heloc.npz for easy access, and the dataset is also normalized for training.

The data processing and model building are very similar to those for the Loan Officer persona above, where ProtoDash was the method of choice. We repeat these steps here so that the two use cases can be run independently.

a. Process the dataset

In [71]:
# Clean data and split dataset into train/test
PROCESS_DATA = False

if (PROCESS_DATA): 
    (Data, x_train, x_test, y_train_b, y_test_b) = heloc.split()
    np.savez('heloc.npz', Data=Data, x_train=x_train, x_test=x_test, y_train_b=y_train_b, y_test_b=y_test_b)
else:
    heloc = np.load('heloc.npz', allow_pickle = True)
    Data = heloc['Data']
    x_train = heloc['x_train']
    x_test  = heloc['x_test']
    y_train_b = heloc['y_train_b']
    y_test_b  = heloc['y_test_b']

b. Normalize the dataset

In [72]:
Z = np.vstack((x_train, x_test))
Zmax = np.max(Z, axis=0)
Zmin = np.min(Z, axis=0)

#normalize an array of samples to range [-0.5, 0.5]
def normalize(V):
    VN = (V - Zmin)/(Zmax - Zmin)
    VN = VN - 0.5
    return(VN)
    
# rescale a normalized sample to recover the original feature values
def rescale(X):
    return(np.multiply ( X + 0.5, (Zmax - Zmin) ) + Zmin)

N = normalize(Z)
xn_train = N[0:x_train.shape[0], :]
xn_test  = N[x_train.shape[0]:, :]
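
As a quick sanity check (not in the original notebook), rescale should invert normalize up to floating-point error, assuming no feature is constant across the dataset:

assert np.allclose(rescale(normalize(x_train[0])), x_train[0])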

Step 2. Define and train a NN classifier

Let us now build a loan approval model based on the HELOC dataset.

a. Define NN architecture

We now define the architecture of a 2-layer neural network classifier whose predictions we will try to interpret.

In [73]:
# Two-layer NN that outputs logits (no softmax layer; the softmax is applied inside the loss function during training)
def nn_small():
    model = Sequential()
    model.add(Dense(10, input_dim=23, kernel_initializer='normal', activation='relu'))
    model.add(Dense(2, kernel_initializer='normal'))    
    return model    

b. Train the NN

In [74]:
# Set random seeds for repeatability
np.random.seed(1) 
tf.set_random_seed(2) 

class_names = ['Bad', 'Good']

# loss function
def fn(correct, predicted):
    return tf.nn.softmax_cross_entropy_with_logits(labels=correct, logits=predicted)

# compile and print model summary
nn = nn_small()
nn.compile(loss=fn, optimizer='adam', metrics=['accuracy'])
nn.summary()


# train model or load a trained model
TRAIN_MODEL = False

if (TRAIN_MODEL):             
    nn.fit(xn_train, y_train_b, batch_size=128, epochs=500, verbose=1, shuffle=False)
    nn.save_weights("heloc_nnsmall.h5")     
else:    
    nn.load_weights("heloc_nnsmall.h5")
        

# evaluate model accuracy        
score = nn.evaluate(xn_train, y_train_b, verbose=0) #Compute training set accuracy
#print('Train loss:', score[0])
print('Train accuracy:', score[1])

score = nn.evaluate(xn_test, y_test_b, verbose=0) #Compute test set accuracy
#print('Test loss:', score[0])
print('Test accuracy:', score[1])
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_9 (Dense)              (None, 10)                240       
_________________________________________________________________
dense_10 (Dense)             (None, 2)                 22        
=================================================================
Total params: 262
Trainable params: 262
Non-trainable params: 0
_________________________________________________________________
Train accuracy: 0.7387545589625827
Test accuracy: 0.7224473257698542
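
Note that because the network outputs raw logits rather than probabilities, Keras later emits a warning about "invalid probability values" when predict_proba is called. If calibrated probabilities are needed, a softmax can be applied to the logits after the fact, for example (a small sketch, assuming a recent scipy is available):

from scipy.special import softmax
probs = softmax(nn.predict(xn_test[:5]), axis=1)  # each row now sums to 1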

Step 3. Compute contrastive explanations for a few applicants

Given the trained NN model for deciding on loan approvals, let us first examine an applicant whose application was denied and see what (minimal) changes to their application would lead to approval (i.e. find pertinent negatives). We will then look at another applicant whose loan was approved and ascertain which features, at which minimal values, would suffice for them to still receive a positive outcome (i.e. find pertinent positives).

a. Compute Pertinent Negatives (PN):

In order to compute pertinent negatives, the CEM explainer finds a user profile that is close to the original applicant but for which the HELOC decision is different. The explainer alters a minimal set of features by a minimal (positive) amount. This helps a user whose loan application was initially rejected to ascertain what changes would get it accepted.

In [75]:
# Some interesting user samples to try: 2344 449 1168 1272
idx = 1272

X = xn_test[idx].reshape((1,) + xn_test[idx].shape)
print("Computing PN for Sample:", idx)
print("Prediction made by the model:", nn.predict_proba(X))
print("Prediction probabilities:", class_names[np.argmax(nn.predict_proba(X))])
print("")

mymodel = KerasClassifier(nn)
explainer = CEMExplainer(mymodel)

arg_mode = 'PN' # Find pertinent negatives
arg_max_iter = 1000 # Maximum number of iterations to search for the optimal PN for given parameter settings
arg_init_const = 10.0 # Initial coefficient value for main loss term that encourages class change
arg_b = 9 # No. of updates to the coefficient of the main loss term
arg_kappa = 0.1 # Minimum confidence gap between the PNs (changed) class probability and original class' probability
arg_beta = 1e-1 # Controls sparsity of the solution (L1 loss)
arg_gamma = 100 # Controls how much to adhere to a (optionally trained) auto-encoder
my_AE_model = None # Pointer to an auto-encoder

# Find PN for applicant 1272
(adv_pn, delta_pn, info_pn) = explainer.explain_instance(X, arg_mode, my_AE_model, arg_kappa, arg_b,
                                                         arg_max_iter, arg_init_const, arg_beta, arg_gamma)
Computing PN for Sample: 1272
Prediction made by the model: Bad
Prediction probabilities: [[ 0.40682057 -0.391679  ]]

iter:0 const:[10.]
Loss_Overall:0.2935, Loss_Attack:0.0000
Loss_L2Dist:0.2065, Loss_L1Dist:0.8703, AE_loss:0.0
target_lab_score:-1.1559, max_nontarget_lab_score:1.3184

iter:500 const:[10.]
Loss_Overall:5.9870, Loss_Attack:5.9782
Loss_L2Dist:0.0032, Loss_L1Dist:0.0563, AE_loss:0.0
target_lab_score:0.2639, max_nontarget_lab_score:-0.2339

iter:0 const:[5.]
Loss_Overall:0.0668, Loss_Attack:0.0000
Loss_L2Dist:0.0368, Loss_L1Dist:0.3000, AE_loss:0.0
target_lab_score:-0.2295, max_nontarget_lab_score:0.3076

iter:500 const:[5.]
Loss_Overall:1.5487, Loss_Attack:1.5277
Loss_L2Dist:0.0085, Loss_L1Dist:0.1243, AE_loss:0.0
target_lab_score:0.1245, max_nontarget_lab_score:-0.0810

iter:0 const:[2.5]
Loss_Overall:1.8033, Loss_Attack:1.7989
Loss_L2Dist:0.0011, Loss_L1Dist:0.0335, AE_loss:0.0
target_lab_score:0.3218, max_nontarget_lab_score:-0.2978

iter:500 const:[2.5]
Loss_Overall:2.2462, Loss_Attack:2.2462
Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0
target_lab_score:0.4068, max_nontarget_lab_score:-0.3917

iter:0 const:[1.25]
Loss_Overall:1.1231, Loss_Attack:1.1231
Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0
target_lab_score:0.4068, max_nontarget_lab_score:-0.3917

iter:500 const:[1.25]
Loss_Overall:1.1231, Loss_Attack:1.1231
Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0
target_lab_score:0.4068, max_nontarget_lab_score:-0.3917

iter:0 const:[1.875]
Loss_Overall:1.6834, Loss_Attack:1.6834
Loss_L2Dist:0.0000, Loss_L1Dist:0.0001, AE_loss:0.0
target_lab_score:0.4065, max_nontarget_lab_score:-0.3913

iter:500 const:[1.875]
Loss_Overall:1.6847, Loss_Attack:1.6847
Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0
target_lab_score:0.4068, max_nontarget_lab_score:-0.3917

iter:0 const:[2.1875]
Loss_Overall:1.7709, Loss_Attack:1.7690
Loss_L2Dist:0.0003, Loss_L1Dist:0.0168, AE_loss:0.0
target_lab_score:0.3641, max_nontarget_lab_score:-0.3445

iter:500 const:[2.1875]
Loss_Overall:1.9655, Loss_Attack:1.9655
Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0
target_lab_score:0.4068, max_nontarget_lab_score:-0.3917

iter:0 const:[2.03125]
Loss_Overall:1.7340, Loss_Attack:1.7331
Loss_L2Dist:0.0001, Loss_L1Dist:0.0085, AE_loss:0.0
target_lab_score:0.3853, max_nontarget_lab_score:-0.3679

iter:500 const:[2.03125]
Loss_Overall:1.8251, Loss_Attack:1.8251
Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0
target_lab_score:0.4068, max_nontarget_lab_score:-0.3917

iter:0 const:[1.953125]
Loss_Overall:1.7104, Loss_Attack:1.7100
Loss_L2Dist:0.0000, Loss_L1Dist:0.0043, AE_loss:0.0
target_lab_score:0.3959, max_nontarget_lab_score:-0.3796

iter:500 const:[1.953125]
Loss_Overall:1.7549, Loss_Attack:1.7549
Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0
target_lab_score:0.4068, max_nontarget_lab_score:-0.3917

iter:0 const:[1.9921875]
Loss_Overall:1.7227, Loss_Attack:1.7220
Loss_L2Dist:0.0000, Loss_L1Dist:0.0064, AE_loss:0.0
target_lab_score:0.3906, max_nontarget_lab_score:-0.3738

iter:500 const:[1.9921875]
Loss_Overall:1.7900, Loss_Attack:1.7900
Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0
target_lab_score:0.4068, max_nontarget_lab_score:-0.3917

Let us start by examining the loan application of applicant 1272, which was denied. We showcase below how the decision could have been different through the minimal changes to the profile conveyed by the pertinent negative. We also indicate the importance of the different features in producing the change in application status. The column (X_PN - X) in the table below shows the deviation needed in each feature to produce this change. A human-friendly explanation based on these deviations is then provided following the feature importance plot.

In [76]:
Xpn = adv_pn
classes = [ class_names[np.argmax(nn.predict_proba(X))], class_names[np.argmax(nn.predict_proba(Xpn))], 'NIL' ]

print("Sample:", idx)
print("prediction(X)", nn.predict_proba(X), class_names[np.argmax(nn.predict_proba(X))])
print("prediction(Xpn)", nn.predict_proba(Xpn), class_names[np.argmax(nn.predict_proba(Xpn))] )


X_re = rescale(X) # Convert values back to original scale from normalized
Xpn_re = rescale(Xpn)
Xpn_re = np.around(Xpn_re.astype(np.double), 2)

delta_re = Xpn_re - X_re
delta_re = np.around(delta_re.astype(np.double), 2)
delta_re[np.absolute(delta_re) < 1e-4] = 0

X3 = np.vstack((X_re, Xpn_re, delta_re))

dfre = pd.DataFrame.from_records(X3) # Create dataframe to display original point, PN and difference (delta)
dfre[23] = classes

dfre.columns = df.columns
dfre.rename(index={0:'X',1:'X_PN', 2:'(X_PN - X)'}, inplace=True)
dfret = dfre.transpose()


def highlight_ce(s, col, ncols):
    if (type(s[col]) != str):
        if (s[col] > 0):
            return(['background-color: yellow']*ncols)    
    return(['background-color: white']*ncols)

dfret.style.apply(highlight_ce, col='(X_PN - X)', ncols=3, axis=1) 
Sample: 1272
prediction(X) [[ 0.40682057 -0.391679  ]] Bad
prediction(Xpn) [[-0.02118406  0.07892033]] Good
Out[76]:
X X_PN (X_PN - X)
ExternalRiskEstimate 65 76.37 11.37
MSinceOldestTradeOpen 256 256 0
MSinceMostRecentTradeOpen 15 15 0
AverageMInFile 52 64.81 12.81
NumSatisfactoryTrades 17 20.3 3.3
NumTrades60Ever2DerogPubRec 0 0 0
NumTrades90Ever2DerogPubRec 0 0 0
PercentTradesNeverDelq 100 100 0
MSinceMostRecentDelq 0 0 0
MaxDelq2PublicRecLast12M 7 7 0
MaxDelqEver 8 8 0
NumTotalTrades 19 19 0
NumTradesOpeninLast12M 0 0 0
PercentInstallTrades 29 29 0
MSinceMostRecentInqexcl7days 2 2 0
NumInqLast6M 5 5 0
NumInqLast6Mexcl7days 5 5 0
NetFractionRevolvingBurden 57 57 0
NetFractionInstallBurden 79 79 0
NumRevolvingTradesWBalance 2 2 0
NumInstallTradesWBalance 4 4 0
NumBank2NatlTradesWHighUtilization 2 2 0
PercentTradesWBalance 60 60 0
RiskPerformance Bad Good NIL

Now let us compute and display the importance of the different PN features that would be instrumental in applicant 1272 receiving a favorable outcome.

In [77]:
plt.rcdefaults()
fi = abs((X-Xpn).astype('double'))/np.std(xn_train.astype('double'), axis=0) # Compute PN feature importance
objects = df.columns[-2::-1]
y_pos = np.arange(len(objects))
performance = fi[0, -1::-1]

plt.barh(y_pos, performance, align='center', alpha=0.5) # bar chart
plt.yticks(y_pos, objects) # Display features on y-axis
plt.xlabel('weight') # x-label
plt.title('PN (feature importance)') # Heading

plt.show() # Display PN feature importance

Explanation:

We observe that applicant 1272's loan application would have been accepted if the consolidated risk marker score (i.e. ExternalRiskEstimate) had increased from 65 to about 76, the average months in file (i.e. AverageMInFile) had increased to about 65, and the number of satisfactory trades (i.e. NumSatisfactoryTrades) had increased to a little over 20.

The suggested changes to these three factors are also intuitively consistent with improving the chances of acceptance, since all three are monotonically related to the probability of acceptance (refer to the HELOC data dictionary table in the introduction). However, one must remember that the above explanation describes what the model would do for this particular applicant and does not necessarily have to agree with intuition. In fact, if the explanation is deemed unacceptable, that is an indication that the model should perhaps be debugged or updated.

b. Compute Pertinent Positives (PP):

In order to compute pertinent positives, the CEM explainer identifies a minimal set of features, along with their values, that on their own (with all other features set as close to 0 as possible) would still maintain the predicted loan application status of the applicant.

In [78]:
# Some interesting user samples to try: 8 9 11
idx = 8

X = xn_test[idx].reshape((1,) + xn_test[idx].shape)
print("Computing PP for Sample:", idx)
print("Prediction made by the model:", class_names[np.argmax(nn.predict_proba(X))])
print("Prediction probabilities:", nn.predict_proba(X))
print("")


mymodel = KerasClassifier(nn)
explainer = CEMExplainer(mymodel)

arg_mode = 'PP' # Find pertinent positives
arg_max_iter = 1000 # Maximum number of iterations to search for the optimal PP for given parameter settings
arg_init_const = 10.0 # Initial coefficient value for the main loss term that encourages keeping the original class as the winning class
arg_b = 9 # No. of updates to the coefficient of the main loss term
arg_kappa = 0.1 # Minimum confidence gap between the PP's (maintained) class probability and other classes' probabilities
arg_beta = 1e-1 # Controls sparsity of the solution (L1 loss)
arg_gamma = 100 # Controls how much to adhere to a (optionally trained) auto-encoder
my_AE_model = None # Pointer to an auto-encoder

(adv_pp, delta_pp, info_pp) = explainer.explain_instance(X, arg_mode, my_AE_model, arg_kappa, arg_b,
                                                         arg_max_iter, arg_init_const, arg_beta, arg_gamma)
Computing PP for Sample: 8
Prediction made by the model: Good
Prediction probabilities: [[-0.1889221   0.29527372]]

/anaconda3/envs/aix360/lib/python3.6/site-packages/keras/engine/sequential.py:247: UserWarning: Network returning invalid probability values. The last layer might not normalize predictions into probabilities (like softmax or sigmoid would).
  warnings.warn('Network returning invalid probability values. '
iter:0 const:[10.]
Loss_Overall:8.1419, Loss_Attack:7.9649
Loss_L2Dist:0.1243, Loss_L1Dist:0.5266, AE_loss:0.0
target_lab_score:-0.3578, max_nontarget_lab_score:0.3387

iter:500 const:[10.]
Loss_Overall:0.3945, Loss_Attack:0.0000
Loss_L2Dist:0.3318, Loss_L1Dist:0.6264, AE_loss:0.0
target_lab_score:0.0991, max_nontarget_lab_score:-0.0737

iter:0 const:[5.]
Loss_Overall:8.4407, Loss_Attack:8.3992
Loss_L2Dist:0.0216, Loss_L1Dist:0.1987, AE_loss:0.0
target_lab_score:-0.8223, max_nontarget_lab_score:0.7575

iter:500 const:[5.]
Loss_Overall:4.1897, Loss_Attack:3.9453
Loss_L2Dist:0.1997, Loss_L1Dist:0.4469, AE_loss:0.0
target_lab_score:-0.3484, max_nontarget_lab_score:0.3406

iter:0 const:[2.5]
Loss_Overall:6.1030, Loss_Attack:6.1013
Loss_L2Dist:0.0002, Loss_L1Dist:0.0149, AE_loss:0.0
target_lab_score:-1.2149, max_nontarget_lab_score:1.1256

iter:500 const:[2.5]
Loss_Overall:6.2723, Loss_Attack:6.2723
Loss_L2Dist:0.0000, Loss_L1Dist:0.0000, AE_loss:0.0
target_lab_score:-1.2499, max_nontarget_lab_score:1.1590

iter:0 const:[3.75]
Loss_Overall:7.8400, Loss_Attack:7.8242
Loss_L2Dist:0.0059, Loss_L1Dist:0.0990, AE_loss:0.0
target_lab_score:-1.0324, max_nontarget_lab_score:0.9541

iter:500 const:[3.75]
Loss_Overall:5.8230, Loss_Attack:5.7570
Loss_L2Dist:0.0449, Loss_L1Dist:0.2119, AE_loss:0.0
target_lab_score:-0.7511, max_nontarget_lab_score:0.6841

iter:0 const:[4.375]
Loss_Overall:8.2661, Loss_Attack:8.2388
Loss_L2Dist:0.0125, Loss_L1Dist:0.1488, AE_loss:0.0
target_lab_score:-0.9274, max_nontarget_lab_score:0.8558

iter:500 const:[4.375]
Loss_Overall:6.5857, Loss_Attack:6.5105
Loss_L2Dist:0.0523, Loss_L1Dist:0.2288, AE_loss:0.0
target_lab_score:-0.7263, max_nontarget_lab_score:0.6618

iter:0 const:[4.0625]
Loss_Overall:8.0845, Loss_Attack:8.0632
Loss_L2Dist:0.0089, Loss_L1Dist:0.1239, AE_loss:0.0
target_lab_score:-0.9799, max_nontarget_lab_score:0.9049

iter:500 const:[4.0625]
Loss_Overall:6.6793, Loss_Attack:6.6236
Loss_L2Dist:0.0365, Loss_L1Dist:0.1912, AE_loss:0.0
target_lab_score:-0.7999, max_nontarget_lab_score:0.7306

iter:0 const:[3.90625]
Loss_Overall:7.9701, Loss_Attack:7.9516
Loss_L2Dist:0.0073, Loss_L1Dist:0.1115, AE_loss:0.0
target_lab_score:-1.0061, max_nontarget_lab_score:0.9295

iter:500 const:[3.90625]
Loss_Overall:6.2569, Loss_Attack:6.1965
Loss_L2Dist:0.0403, Loss_L1Dist:0.2008, AE_loss:0.0
target_lab_score:-0.7772, max_nontarget_lab_score:0.7091

iter:0 const:[3.828125]
Loss_Overall:7.9070, Loss_Attack:7.8899
Loss_L2Dist:0.0066, Loss_L1Dist:0.1052, AE_loss:0.0
target_lab_score:-1.0193, max_nontarget_lab_score:0.9418

iter:500 const:[3.828125]
Loss_Overall:6.3022, Loss_Attack:6.2467
Loss_L2Dist:0.0364, Loss_L1Dist:0.1909, AE_loss:0.0
target_lab_score:-0.8005, max_nontarget_lab_score:0.7312

iter:0 const:[3.8671875]
Loss_Overall:7.9391, Loss_Attack:7.9213
Loss_L2Dist:0.0070, Loss_L1Dist:0.1083, AE_loss:0.0
target_lab_score:-1.0127, max_nontarget_lab_score:0.9356

iter:500 const:[3.8671875]
Loss_Overall:6.0266, Loss_Attack:5.9612
Loss_L2Dist:0.0443, Loss_L1Dist:0.2105, AE_loss:0.0
target_lab_score:-0.7543, max_nontarget_lab_score:0.6872

For the pertinent positives, we look at a different applicant, 8, whose loan application was approved. We want to ascertain what minimal values for this profile would still have led to acceptance. Below, we showcase the pertinent positive as well as the features that are important in maintaining the approved status. The 0s in the X_PP column indicate that those features were not important. Here too, we provide a human-friendly explanation following the feature importance plot.

In [79]:
Xpp = delta_pp
classes = [ class_names[np.argmax(nn.predict_proba(X))], class_names[np.argmax(nn.predict_proba(Xpp))]]

print("PP for Sample:", idx)
print("Prediction(Xpp) :", class_names[np.argmax(nn.predict_proba(Xpp))])
print("Prediction probabilities for Xpp:", nn.predict_proba(Xpp))
print("")

X_re = rescale(X) # Convert values back to original scale from normalized
adv_pp_re = rescale(adv_pp)
Xpp_re = X_re - adv_pp_re
Xpp_re = np.around(Xpp_re.astype(np.double), 2)
Xpp_re[Xpp_re < 1e-4] = 0

X2 = np.vstack((X_re, Xpp_re))

dfpp = pd.DataFrame.from_records(X2.astype('double')) # Showcase a dataframe for the original point and PP
dfpp[23] = classes
dfpp.columns = df.columns
dfpp.rename(index={0:'X',1:'X_PP'}, inplace=True)
dfppt = dfpp.transpose()

dfppt.style.apply(highlight_ce, col='X_PP', ncols=2, axis=1) 
PP for Sample: 8
Prediction(Xpp) : Good
Prediction probabilities for Xpp: [[-0.09004497  0.11049862]]

Out[79]:
X X_PP
ExternalRiskEstimate 82 37.65
MSinceOldestTradeOpen 280 0
MSinceMostRecentTradeOpen 13 0
AverageMInFile 102 73.67
NumSatisfactoryTrades 22 11.49
NumTrades60Ever2DerogPubRec 0 0
NumTrades90Ever2DerogPubRec 0 0
PercentTradesNeverDelq 91 0
MSinceMostRecentDelq 26 0
MaxDelq2PublicRecLast12M 6 0
MaxDelqEver 6 0
NumTotalTrades 23 0
NumTradesOpeninLast12M 0 0
PercentInstallTrades 9 0
MSinceMostRecentInqexcl7days 0 0
NumInqLast6M 0 0
NumInqLast6Mexcl7days 0 0
NetFractionRevolvingBurden 3 0
NetFractionInstallBurden 0 0
NumRevolvingTradesWBalance 4 0
NumInstallTradesWBalance 1 0
NumBank2NatlTradesWHighUtilization 1 0
PercentTradesWBalance 42 0
RiskPerformance Good Good
In [80]:
plt.rcdefaults()
fi = abs(Xpp_re.astype('double'))/np.std(x_train.astype('double'), axis=0) # Compute PP feature importance
    
objects = df.columns[-2::-1]
y_pos = np.arange(len(objects)) # Get input feature names
performance = fi[0, -1::-1]

plt.barh(y_pos, performance, align='center', alpha=0.5) # Bar chart
plt.yticks(y_pos, objects) # Plot feature names on y-axis
plt.xlabel('weight') #x-label
plt.title('PP (feature importance)') # Figure heading

plt.show()    # Display the feature importance

Explanation:

We observe that applicant 8's loan application would still have been accepted even if the consolidated risk marker score (i.e. ExternalRiskEstimate) were reduced from 82 to around 38, the average months in file (i.e. AverageMInFile) were reduced to about 74, and the number of satisfactory trades (i.e. NumSatisfactoryTrades) were reduced from 22 to about 11.

Note that the explanations may vary slightly from run to run, since the optimization may converge to different, nearly equivalent local minima.