# Proactive Retention¶

This notebook illustrates how to use the TED_CartesianExplainer class. The TED_CartesianExplainer is an implementation of the algorithm in the AIES'19 paper by Hind et al. It is most suited for use cases where matching explanations to the mental model of the explanation consumer is the highest priority; i.e., where the explanations are similar to what would be produced by a domain expert.

To achieve this goal, the TED (Teaching Explanations for Decisions) framework requires that the training data is augmented so that each instance contains an explanation (E). The goal is to teach the framework what are appropriate explanations in the same manner the training dataset teaches what are appropriate labels (Y). Thus, the training dataset contains the usual features (X) and labels (Y), augmented with an explanation (E) for each instance. For example, consider a loan application use case, where the features are the loan application answers, and the label is the decision to approve or reject the loan. The explanation would be the reason for the approve/reject decision.

The format of the explanation is flexible and determined by the use case. It can be a number, text, an image, audio, video, etc. The TED framework simply requires that each explanation can be mapped to a unique integer in [0, N] and that any two explanations that are semantically the same map to the same integer. In many domains there is a list of reasons for making a decision, such as denying a loan, and these reasons form the finite explanation space.
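As a concrete illustration of this mapping, the sketch below assigns integer IDs to a small list of hypothetical explanation texts (the reason strings and helper names here are invented for illustration, not part of the AIX360 API):

```python
# Hypothetical explanation space; semantically identical explanations
# must share one ID. The reason texts below are invented examples.
REASONS = [
    "Salary below market",          # -> 0
    "No recent promotion",          # -> 1
    "Disappointing evaluation",     # -> 2
]

# Build the text -> ID mapping once.
explanation_id = {text: i for i, text in enumerate(REASONS)}

def encode_explanation(text):
    """Map an explanation to its integer ID in [0, N-1]."""
    return explanation_id[text]

def decode_explanation(eid):
    """Recover the human-readable explanation from its ID."""
    return REASONS[eid]

print(encode_explanation("No recent promotion"))   # 1
print(decode_explanation(2))                       # Disappointing evaluation
```

Because only the integer IDs ever reach the classifier, the underlying content (text, a video file name, etc.) can be swapped without retraining concerns on the ID side.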

Given this setup, the TED framework will train a classifier on this training set of instances of (X, Y, E); i.e., features, labels, and explanations. When the classifier is given a new feature vector, it will produce both a label (Y) and an explanation (E).

There are many approaches to implementing this functionality. In this notebook we illustrate the simplest, TED_CartesianExplainer, which takes the Cartesian product of the label and explanation to create a new label (YE) and uses this to train a (multiclass) classifier. (See the TED_CartesianExplainer documentation for more details.) There are other possibilities, such as Codella et al.'s paper at the HILL 2019 workshop. However, we expect the interface to these implementations to be the same, so the user of the TED framework, as illustrated by this notebook, would not have to change their code.

This simple Cartesian-product approach is quite general in that it can use any classifier (passed as a parameter), as long as it complies with the fit/predict paradigm.
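The Cartesian-product idea can be sketched in a few lines. This is an illustrative toy, not the AIX360 source: each (Y, E) pair is encoded as a single class YE, any fit/predict classifier is trained on it, and predictions are decoded back into a label and an explanation. It assumes Y in {0, 1} and E in [0, num_explanations − 1]:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class CartesianSketch:
    """Toy version of the Cartesian-product wrapper (illustration only)."""

    def __init__(self, estimator, num_explanations):
        self.estimator = estimator
        self.num_explanations = num_explanations

    def fit(self, X, Y, E):
        # Encode each (label, explanation) pair as one multiclass label YE.
        YE = np.asarray(Y) * self.num_explanations + np.asarray(E)
        self.estimator.fit(X, YE)
        return self

    def predict_explain(self, X):
        # Decode the combined prediction back into (Y, E).
        YE = self.estimator.predict(X)
        return YE // self.num_explanations, YE % self.num_explanations

# Tiny smoke test on synthetic data
X = [[0], [1], [2], [3]]
Y = [0, 0, 1, 1]
E = [0, 1, 2, 3]
model = CartesianSketch(DecisionTreeClassifier(random_state=0), 4).fit(X, Y, E)
y_hat, e_hat = model.predict_explain([[2]])
print(y_hat[0], e_hat[0])  # 1 2
```

Note that any estimator with fit/predict could replace the decision tree here, which is exactly the generality claimed above.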

This implementation assumes the initial problem is a binary classification problem with labels 0 and 1, and that the explanations form a dense integer space [0, NumExplanations − 1]. The mapping of explanations to integers is performed by the user of the explainer, as we will illustrate below. This gives the user flexibility if, for example, they want to change explanations from text to video.

Before we show how to use TED_CartesianExplainer, we will describe our use case and associated dataset. Then we will walk through the code, following these steps.

# The use case¶

The use case we will consider in this notebook is predicting which employees should be targeted for retention actions at a fictitious company, based on various features of the employee. The features we will consider are:

• Position, [1, 2, 3, 4]; higher is better
• Organization, [1, 2, 3]; organization 1 has more retention challenges
• Potential, an integer mapped to Yes (-10), No (-11)
• Rating, an integer mapped to High (-3), Med (-2), and Low (-1)
• Rating slope (average rating over last 2 years), an integer mapped to High (-3), Med (-2), and Low (-1)
• Salary competitiveness, an integer mapped to High (-3), Med (-2), and Low (-1)
• Tenure, # of months at company, an integer in [0..360]
• Position tenure, # of months at current position, an integer in [0..360]

These features generate a feature space of over 80,000,000 possibilities.
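The feature-space size quoted above can be checked by multiplying the number of distinct values per feature (the two tenure features each take 361 values, 0..360):

```python
# Product of the number of distinct values for each of the 8 features.
space = (4      # Position
         * 3    # Organization
         * 2    # Potential
         * 3    # Rating
         * 3    # Rating slope
         * 3    # Salary competitiveness
         * 361  # Tenure
         * 361) # Position tenure
print(f"{space:,}")  # 84,448,008
```

This comes to roughly 84 million combinations, consistent with the "over 80,000,000" figure.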

# The dataset¶

Given these features, we synthetically generate a dataset using the following distribution functions:

• Position: 1 (45%), 2 (30%), 3 (20%), 4 (5%)
• Organization: 1 (40%), 2 (30%), 3 (30%)
• Potential: Yes (50%), No (50%)
• Rating: High (15%), Med (80%), and Low (5%)
• Rating slope: High (15%), Med (80%), and Low (5%)
• Salary competitiveness: High (10%), Med (70%), and Low (20%)
• Tenure: [0..24] (30%), [25..60] (30%), [61..360] (40%); values are evenly distributed within each range
• Position tenure: [0..12] (70%), [13..24] (20%), [25..360] (10%); values are evenly distributed within each range

These are the target distributions. The actual distributions in the dataset vary slightly because they are selected randomly from these distributions.

The values for each feature are generated independently; i.e., it is equally likely that a person in position 1 and a person in position 4 will be in the same organization. The only constraint among features is that the Position tenure cannot be greater than the Tenure (with the company); i.e., one cannot be in a position for longer than they have been with the company.
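A minimal sketch of this generation process (not the actual AIX360 generator) draws each feature independently from the target distributions; the clamp on position tenure is one simple way to enforce the single constraint, and slightly distorts its distribution:

```python
import random

random.seed(0)

def draw(choices, weights):
    # Pick one value according to the stated percentage weights.
    return random.choices(choices, weights=weights, k=1)[0]

def draw_in_range(ranges, weights):
    # Pick a range by weight, then a value uniformly within it.
    lo, hi = draw(ranges, weights)
    return random.randint(lo, hi)

def draw_employee():
    tenure = draw_in_range([(0, 24), (25, 60), (61, 360)], [30, 30, 40])
    pos_tenure = draw_in_range([(0, 12), (13, 24), (25, 360)], [70, 20, 10])
    pos_tenure = min(pos_tenure, tenure)  # cannot exceed company tenure
    return [
        draw([1, 2, 3, 4], [45, 30, 20, 5]),   # Position
        draw([1, 2, 3], [40, 30, 30]),         # Organization
        draw([-10, -11], [50, 50]),            # Potential: Yes / No
        draw([-3, -2, -1], [15, 80, 5]),       # Rating: High / Med / Low
        draw([-3, -2, -1], [15, 80, 5]),       # Rating slope
        draw([-3, -2, -1], [10, 70, 20]),      # Salary competitiveness
        tenure,                                 # Tenure (months)
        pos_tenure,                             # Position tenure (months)
    ]

employees = [draw_employee() for _ in range(1000)]
```

Sampling many employees and tabulating each column would show the small random deviations from the target distributions mentioned above.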

The dataset, and the code to generate it, are available as part of AI Explainability 360.

### Assigning labels¶

To determine if a given employee, as represented by these features, should be targeted with a retention action, we would ideally ask a human resources specialist with deep knowledge of the circumstances of employee retention. Under the TED framework, we would ask this expert both for a prediction of whether the employee is at risk of leaving AND for the reason the expert feels that way.

We simulate this process by creating 25 rules, based on the above features, for why a retention action is needed to reduce the chances of an employee choosing to leave our fictitious company. These rules are motivated by common scenarios, such as not getting a promotion in a while, not being paid competitively, receiving a disappointing evaluation, being a new employee in certain organizations with inherently high attrition, not having a salary that is consistent with positive evaluations, mid-career crisis, etc. We vary the application of these rules depending on various positions and organizations. For example, in our fictitious company organization #1 has much higher attrition because their skills are more transferable outside the company.

Each of these 25 rules would result in the label "Yes"; i.e., the employee is a risk to leave the company. Because the rules capture the reason for the "Yes", we use the rule number as the explanation (E), which is required by the TED framework.

If none of the rules are satisfied, it means the employee is not a candidate for a retention action; i.e., a "No" label is assigned. Although we could also construct explanations for these cases (see the AIES'19 paper for such examples), we choose not to in this use case because in many use cases explanations are only required for the unfavorable ("Bad") outcome. For example, if a person is denied credit, rejected for a job, or diagnosed with a disease, they will want to know why. However, when they are approved for credit, get the job, or are told they do not have a disease, they are usually not interested in, or told, the reasons for the decision.

We make no claim that all use cases fall into this category; other personas (the data scientist, regulator, or loan agent) might well want to know the reason for both kinds of decisions. In fact, the TED framework can provide an explanation for each decision outcome; we are simply not addressing these more general situations in this notebook.

### Dataset characteristics¶

With the above distributions, we generated 10,000 fictitious employees (X) and applied the 26 (25 Yes + 1 No) rules to produce Yes/No labels (Y), using the rule numbers as explanations (E). After applying these rules, the resulting dataset has the following characteristics:

• Yes (33.8%)
• No (66.2%)

Of the 33.8% of "Yes" labels, each of the 25 explanations (rules) was used, with frequencies ranging from 20 (rules 16 and 18, counting from 0) to 410 (rule 13). (When multiple rules applied to a feature vector (3.5% of the dataset, or 10.24% of the Yes instances), we chose the more specific rule, i.e., the one that matched only specified values for a feature, as opposed to matching all values for that feature.)
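The specificity-based tie-breaking just described can be sketched as follows. The rule representation here (a dict of feature names to allowed values) is invented for illustration; the actual rules in the dataset generator may be encoded differently:

```python
def matches(rule, employee):
    # A rule constrains a subset of features; unconstrained features
    # match any value.
    return all(employee[f] in allowed for f, allowed in rule.items())

def pick_rule(rules, employee):
    hits = [i for i, r in enumerate(rules) if matches(r, employee)]
    if not hits:
        return None  # no retention-risk rule fired: "No" label
    # Specificity = number of features the rule constrains; prefer the
    # rule that constrains the most features.
    return max(hits, key=lambda i: len(rules[i]))

rules = [
    {"organization": {1}},                    # broad rule
    {"organization": {1}, "position": {2}},   # more specific rule
]
employee = {"organization": 1, "position": 2}
print(pick_rule(rules, employee))  # 1 (the more specific rule wins)
```

The returned rule index then doubles as the explanation ID (E), exactly as the dataset uses rule numbers as explanations.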

We now are ready to discuss the code that uses the TED_CartesianExplainer class to produce explanations.

# Step 1: Import relevant packages¶

The code below sets up the imports. We will use the SVM classifier, the train_test_split routine to partition our dataset, the TED_CartesianExplainer for explanations, and the TEDDataset class for the training and test data.

In [2]:
from sklearn import svm         # this can be any classifier that follows the fit/predict paradigm
from sklearn.model_selection import train_test_split

from aix360.algorithms.ted.TED_Cartesian import TED_CartesianExplainer
from aix360.datasets.ted_dataset import TEDDataset


# Step 2: Open datafile and create train/test splits¶

Below we create a new TEDDataset object based on the "Retention.csv" file. The load_file method decomposes the dataset into its X, Y, and E components. (See TEDDataset class for the expected format.) We then partition these instances into train and test sets, using the sklearn routine train_test_split, with 80% going to train and 20% going to test.

In [3]:
# Decompose the dataset into X, Y, E
X, Y, E = TEDDataset().load_file('Retention.csv')
print("X's shape:", X.shape)
print("Y's shape:", Y.shape)
print("E's shape:", E.shape)
print()

# set up train/test split
X_train, X_test, Y_train, Y_test, E_train, E_test = train_test_split(X, Y, E, test_size=0.20, random_state=0)
print("X_train shape:", X_train.shape, ", X_test shape:", X_test.shape)
print("Y_train shape:", Y_train.shape, ", Y_test shape:", Y_test.shape)
print("E_train shape:", E_train.shape, ", E_test shape:", E_test.shape)

X's shape: (10000, 8)
Y's shape: (10000,)
E's shape: (10000,)

X_train shape: (8000, 8) , X_test shape: (2000, 8)
Y_train shape: (8000,) , Y_test shape: (2000,)
E_train shape: (8000,) , E_test shape: (2000,)


# Step 3: Create a fit/predict classifier and TED classifier¶

We can now create a fit/predict classifier and the TED_CartesianExplainer instance, passing in the classifier. The commented-out code shows some other example classifiers that can be used (you will need to add the appropriate import statements); many more classifiers would also work.

In [4]:
# Create classifier and pass to TED_CartesianExplainer
estimator = svm.SVC(kernel='linear')
# estimator = DecisionTreeClassifier()
# estimator = RandomForestClassifier()

ted = TED_CartesianExplainer(estimator)


# Step 4: Train the TED classifier¶

Next, we fit the TED-enhanced classifier, passing in the 3 training components: features (X), labels (Y), and explanations (E).

In [5]:
print("Training the classifier")

ted.fit(X_train, Y_train, E_train)   # train classifier

Training the classifier


# Step 5: Ask the classifier for a few predictions and explanations¶

The trained TED classifier is now ready for predictions with explanations. We construct some raw feature vectors, taken from the original dataset, and ask for a label (Y) prediction and its explanation (E).

In [6]:
import numpy as np

# Create an instance level example
X1 = [[1, 2, -11, -3, -2, -2,  22, 22]]

Y1, E1 = ted.predict_explain(X1)
print("Predicting for feature vector:")
print(" ", X1[0])
print("\t\t      Predicted \tCorrect")
print("Label(Y)\t\t " + np.array2string(Y1[0]) + "\t\t   -10")
print("Explanation (E) \t " + np.array2string(E1[0]) + "\t\t   13")
print()

X2 = [[3, 1, -11, -2, -2, -2, 296, 0]]

Y2, E2 = ted.predict_explain(X2)
print("Predicting for feature vector:")
print(" ", X2[0])

print("\t\t      Predicted \tCorrect")
print("Label(Y)\t\t " + np.array2string(Y2[0]) + "\t\t   -11")
print("Explanation (E) \t " + np.array2string(E2[0]) + "\t\t   25")

Predicting for feature vector:
[1, 2, -11, -3, -2, -2, 22, 22]
Predicted 	Correct
Label(Y)		 -10		   -10
Explanation (E) 	 13		   13

Predicting for feature vector:
[3, 1, -11, -2, -2, -2, 296, 0]
Predicted 	Correct
Label(Y)		 -11		   -11
Explanation (E) 	 25		   25


# Step 6: Create a more relevant human interface¶

Although we just showed how TED_CartesianExplainer can produce the correct explanation for a feature vector, simply producing a number such as "13" as an explanation is not sufficient in most uses. This section shows one way to implement the mapping of real explanations to the explanation IDs that TED requires. It is inspired by the FICO reason codes, which are explanations for a FICO credit score.

In this case the explanations are text, but the same idea can be used to map explanation IDs to other formats, such as a file name containing an audio or video explanation.
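For instance, a hypothetical table could map explanation IDs to media files instead of strings (the file names below are invented for illustration):

```python
# Hypothetical mapping from explanation IDs to media files; the paths
# here are invented examples, not files shipped with AIX360.
explanation_media = {
    0: "explanations/seeking_higher_salary.mp4",
    1: "explanations/promotion_lag_org1_pos1.mp3",
}

def explanation_for(eid):
    # Fall back to a generic explanation for unmapped IDs.
    return explanation_media.get(eid, "explanations/default.txt")

print(explanation_for(0))   # explanations/seeking_higher_salary.mp4
print(explanation_for(99))  # explanations/default.txt
```

The classifier itself is untouched by such a change; only this presentation layer differs.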

In [7]:
def labelToString(label):
    if label == -10:
        return "IS"
    else:
        return "IS NOT"

Explanation_Strings = [
    "Seeking Higher Salary in Org 1",
    "Promotion Lag, Org 1, Position 1",
    "Promotion Lag, Org 1, Position 2",
    "Promotion Lag, Org 1, Position 3",
    "Promotion Lag, Org 2, Position 1",
    "Promotion Lag, Org 2, Position 2",
    "Promotion Lag, Org 2, Position 3",
    "Promotion Lag, Org 3, Position 1",
    "Promotion Lag, Org 3, Position 2",
    "Promotion Lag, Org 3, Position 3",
    "New employee, Org 1, Position 1",
    "New employee, Org 1, Position 2",
    "New employee, Org 1, Position 3",
    "New employee, Org 2, Position 1",
    "New employee, Org 2, Position 2",
    "Disappointing evaluation, Org 1",
    "Disappointing evaluation, Org 2",
    "Compensation does not match evaluations, Med rating",
    "Compensation does not match evaluations, High rating",
    "Compensation does not match evaluations, Org 1, Med rating",
    "Compensation does not match evaluations, Org 2, Med rating",
    "Compensation does not match evaluations, Org 1, High rating",
    "Compensation does not match evaluations, Org 2, High rating",
    "Mid-career crisis, Org 1",
    "Mid-career crisis, Org 2",
    "Did not match any retention risk rules"]

print("Employee #1 " + labelToString(Y1[0]) + " a retention risk with explanation: " + Explanation_Strings[E1[0]])
print()
print("Employee #2 " + labelToString(Y2[0]) + " a retention risk with explanation: " + Explanation_Strings[E2[0]])

Employee #1 IS a retention risk with explanation: New employee, Org 2, Position 1

Employee #2 IS NOT a retention risk with explanation: Did not match any retention risk rules


# Step 7: Compute overall accuracy metrics using the test dataset¶

Since we held out a test portion of the dataset, we can use it to see how well TED_Cartesian does at predicting all test labels (Y) and explanations (E). We use the handy score method of TED_Cartesian for this computation. We also report the accuracy of predicting the combined YE labels, which may be of interest to researchers who want to better understand the inner workings of TED_Cartesian.

In [8]:

YE_accuracy, Y_accuracy, E_accuracy = ted.score(X_test, Y_test, E_test)    # evaluate the classifier
print("Evaluating accuracy of TED-enhanced classifier on test data")
print(' Accuracy of predicting Y labels: %.2f%%' % (100*Y_accuracy))
print(' Accuracy of predicting explanations: %.2f%%' % (100*E_accuracy))
print(' Accuracy of predicting Y + explanations: %.2f%%' % (100*YE_accuracy))

Evaluating accuracy of TED-enhanced classifier on test data
Accuracy of predicting Y labels: 86.15%
Accuracy of predicting explanations: 85.10%
Accuracy of predicting Y + explanations: 85.10%
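The relationship among these three numbers can be checked by hand. The sketch below (an assumption about the internals, not the AIX360 score implementation) computes them from parallel arrays of predicted and true labels and explanations:

```python
import numpy as np

def manual_score(Y_pred, E_pred, Y_test, E_test):
    Y_pred, E_pred = np.asarray(Y_pred), np.asarray(E_pred)
    Y_test, E_test = np.asarray(Y_test), np.asarray(E_test)
    y_acc = np.mean(Y_pred == Y_test)                          # label accuracy
    e_acc = np.mean(E_pred == E_test)                          # explanation accuracy
    ye_acc = np.mean((Y_pred == Y_test) & (E_pred == E_test))  # both correct
    return ye_acc, y_acc, e_acc

# Two toy instances: first fully correct, second with a wrong explanation.
ye, y, e = manual_score([-10, -11], [13, 25], [-10, -11], [13, 24])
print(ye, y, e)  # 0.5 1.0 0.5
```

Note that YE accuracy can never exceed the smaller of the Y and E accuracies, which is consistent with the results above, where YE accuracy equals explanation accuracy.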


# Conclusions¶

This notebook has illustrated how easy it is to use the TED_CartesianExplainer if you have a training dataset that contains explanations. The framework is general in that it can use any classification technique that follows the fit/predict paradigm, so if you already have a favorite algorithm, you can use it with the TED framework.

The main advantage of this algorithm is that the explanations it produces are of exactly the same quality as those it is trained on. Thus, if you teach (train) the system well, with good training data and good explanations, you will get good explanations back, in a language you understand.

The downside of this approach is that someone needs to create explanations. This should be straightforward when a domain expert is creating the initial training data: if they decide a loan should be rejected, they should know why, and if they do not, it may not be a good decision.

However, this may be more of a challenge when a training dataset already exists without explanations and someone now needs to create them. The original person who labeled the decisions may no longer be available, so the reasons for those decisions may not be known. In this case, we argue, the system is in a dangerous state: training data exists, but no one understands why it is labeled the way it is. Asking a model to explain its predictions when no person can explain the instances in its training data does not seem consistent.

Dealing with this situation is one of the open research problems that comes from the TED approach.