SHAP is a method developed by Scott Lundberg et al. that also estimates Shapley values.
The Shapley value estimates from the two methods agree on linear models.
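For a linear model the Shapley value of each feature has a closed form, which is why the two methods can be expected to agree there. A minimal sketch (the weights, input, and baseline below are hypothetical):

```python
import numpy as np

# For a linear model f(x) = w . x + b, the exact Shapley value of
# feature i relative to a baseline x' is w_i * (x_i - x'_i).
w = np.array([0.5, -1.0, 2.0])   # hypothetical weights
b = 0.1
x = np.array([1.0, 2.0, 3.0])    # input to explain
baseline = np.zeros(3)

phi = w * (x - baseline)         # exact Shapley values

# Completeness: attributions sum to f(x) - f(baseline).
assert np.isclose(phi.sum(), (w @ x + b) - (w @ baseline + b))
print(phi)  # [ 0.5 -2.   6. ]
```

Any correct estimator must recover these values on such a model, which makes linear models a convenient sanity check.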
%matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout
from keras.layers.core import Activation
import sys
sys.path.append("../")
from IntegratedGradients import *
from shap import KernelExplainer, DenseData, visualize, initjs
Using TensorFlow backend.
Loading data
# Load the four Iris features; drop the label column and the trailing empty line.
X = np.array([[float(j) for j in i.rstrip().split(",")[:-1]] for i in open("iris.data").readlines()][:-1])
# Binary labels: the first two classes (100 samples) vs. the third (50 samples).
Y = np.array([0 for i in range(100)] + [1 for i in range(50)])
Training models
model = Sequential([
Dense(1, input_dim=4),
#Activation('sigmoid'),
])
model.compile(optimizer='sgd', loss='mean_squared_error')
history = model.fit(X, Y,
epochs=300, batch_size=10,
validation_split=0.1, verbose=0)
Predict
predictions = model.predict(X)
plt.figure(figsize=(10,0.25))
ax = sns.heatmap(np.transpose(predictions), cbar=False)
plt.xticks([],[])
plt.yticks([],[])
plt.xlabel("samples (150)")
plt.title("Predictions")
Explaining with integrated gradients
ig = integrated_gradients(model)
Evaluated output channel (0-based index): All Building gradient functions Progress: 100.0% Done.
exp1 = ig.explain(X[0], num_steps=10000)
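Integrated gradients approximates the path integral of the model's gradients along the straight line from a baseline to the input. For a linear model the gradient is constant, so even a coarse Riemann sum is exact. A standalone numerical sketch (weights and input are hypothetical, not taken from the model above):

```python
import numpy as np

# For f(x) = w . x the gradient is w everywhere, so averaging gradients
# along the baseline-to-input path and scaling by (x - baseline)
# recovers w_i * x_i exactly.
w = np.array([0.5, -1.0, 2.0])   # hypothetical weights
x = np.array([1.0, 2.0, 3.0])    # input to explain
baseline = np.zeros(3)
num_steps = 100

alphas = (np.arange(num_steps) + 0.5) / num_steps   # midpoint rule
grads = np.tile(w, (num_steps, 1))                  # grad f is constant
ig_attr = (x - baseline) * grads.mean(axis=0)
print(ig_attr)  # [ 0.5 -2.   6. ]
```

With a nonlinear model the gradients along the path differ, and `num_steps` controls the accuracy of the approximation, which is why the call above uses a large value.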
f = lambda x: model.predict(x)[:,0]
# use Shap to explain a single prediction
x = X[0:1,:]
background = DenseData(np.zeros((1,4)), range(4))
explainer = KernelExplainer(f, background, nsamples=10000)
exp2=explainer.explain(x).effects
print("Integrated Gradients:", exp1)
print("Shap:", exp2)
Integrated Gradients: [-0.57897955 0.56409896 0.06661686 0.09188381]
Shap: [-0.57898097 0.56416369 0.06662203 0.09187283]
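As a sanity check, the two attribution vectors printed above can be compared numerically; the values here are copied from that output:

```python
import numpy as np

# Attributions copied from the printed output above.
exp1 = np.array([-0.57897955, 0.56409896, 0.06661686, 0.09188381])  # Integrated Gradients
exp2 = np.array([-0.57898097, 0.56416369, 0.06662203, 0.09187283])  # Shap

print(np.max(np.abs(exp1 - exp2)))  # largest disagreement, below 1e-4
assert np.allclose(exp1, exp2, atol=1e-3)
```

The residual difference comes from the sampling in both estimators (the integration steps in integrated gradients and the coalition samples in the kernel explainer), not from a disagreement in what they estimate.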