By convention, the first lines of code import the packages we'll use.
import pandas as pd
import numpy as np
from scipy.spatial.distance import pdist, squareform
from matplotlib import pyplot as plt
import networkx as nx
Tutorials on pandas, numpy, and networkx can be found on the websites of their respective projects.
The following line is a magic command. It enables plotting inside the notebook.
%matplotlib inline
# %matplotlib notebook
We will play with the famous Iris dataset. This dataset can be found in many places on the net and was first released at https://archive.ics.uci.edu/ml/index.php. For example it is stored on Kaggle, with many demos and Jupyter notebooks you can test (have a look at the "kernels" tab).
iris = pd.read_csv('data/iris.csv')
iris.head()
| | Id | SepalLengthCm | SepalWidthCm | PetalLengthCm | PetalWidthCm | Species |
|---|---|---|---|---|---|---|
| 0 | 1 | 5.1 | 3.5 | 1.4 | 0.2 | Iris-setosa |
| 1 | 2 | 4.9 | 3.0 | 1.4 | 0.2 | Iris-setosa |
| 2 | 3 | 4.7 | 3.2 | 1.3 | 0.2 | Iris-setosa |
| 3 | 4 | 4.6 | 3.1 | 1.5 | 0.2 | Iris-setosa |
| 4 | 5 | 5.0 | 3.6 | 1.4 | 0.2 | Iris-setosa |
The description of the entries is given here: https://www.kaggle.com/uciml/iris/home
iris['Species'].unique()
array(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'], dtype=object)
iris.describe()
| | Id | SepalLengthCm | SepalWidthCm | PetalLengthCm | PetalWidthCm |
|---|---|---|---|---|---|
| count | 150.000000 | 150.000000 | 150.000000 | 150.000000 | 150.000000 |
| mean | 75.500000 | 5.843333 | 3.054000 | 3.758667 | 1.198667 |
| std | 43.445368 | 0.828066 | 0.433594 | 1.764420 | 0.763161 |
| min | 1.000000 | 4.300000 | 2.000000 | 1.000000 | 0.100000 |
| 25% | 38.250000 | 5.100000 | 2.800000 | 1.600000 | 0.300000 |
| 50% | 75.500000 | 5.800000 | 3.000000 | 4.350000 | 1.300000 |
| 75% | 112.750000 | 6.400000 | 3.300000 | 5.100000 | 1.800000 |
| max | 150.000000 | 7.900000 | 4.400000 | 6.900000 | 2.500000 |
We are going to build a graph from these data. The idea is to represent iris samples (rows of the table) as nodes, with connections depending on their physical similarity.
The main question is how to define the notion of similarity between the flowers. For that, we need to introduce a measure of similarity. It should use the properties of the flowers and provide a positive real value for each pair of samples.
Remark: The value should increase with the similarity.
Let us separate the data into two parts: physical properties and labels.
features = iris.loc[:, ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']]
species = iris.loc[:, 'Species']
features.head()
| | SepalLengthCm | SepalWidthCm | PetalLengthCm | PetalWidthCm |
|---|---|---|---|---|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 |
species.head()
0    Iris-setosa
1    Iris-setosa
2    Iris-setosa
3    Iris-setosa
4    Iris-setosa
Name: Species, dtype: object
You can define many similarity measures. One of the most intuitive and perhaps the easiest to program relies on the notion of distance. If a distance between samples is defined, we can compute the weight accordingly: if the distance is short, which means the nodes are similar, we want a strong connection between them (large weight).
The cosine distance is a good candidate for high-dimensional data. It is defined as follows: $$d(u,v) = 1 - \frac{u \cdot v} {\|u\|_2 \|v\|_2},$$ where $u$ and $v$ are two feature vectors.
The distance increases with the angle formed by the two vectors (0 if collinear, 1 if orthogonal, 2 if opposite in direction).
Alternatives are the $p$-norms (or $\ell_p$-norms), defined as $$d(u,v) = \|u - v\|_p,$$ of which the Euclidean distance is a special case with $p=2$.
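As a minimal sketch, both definitions can be checked by hand on two toy feature vectors (the values below are hypothetical, not rows of the iris table) and compared against scipy's built-in functions:

```python
import numpy as np
from scipy.spatial.distance import cosine, euclidean

# Two toy feature vectors (hypothetical values).
u = np.array([5.1, 3.5, 1.4, 0.2])
v = np.array([4.9, 3.0, 1.4, 0.2])

# Cosine distance: 1 - (u . v) / (||u||_2 ||v||_2).
d_cos = 1 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Euclidean distance: the p-norm of u - v with p = 2.
d_euc = np.linalg.norm(u - v, ord=2)

# Both should match the scipy implementations.
print(d_cos, cosine(u, v))
print(d_euc, euclidean(u, v))
```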
The pdist function from scipy computes the pairwise distances. By default it uses the Euclidean distance. features.values is a numpy array extracted from the Pandas dataframe. Very handy.
#from scipy.spatial.distance import pdist, squareform
pdist?
distances = pdist(features.values, metric='euclidean')
# other metrics: 'cosine', 'cityblock', 'minkowski'
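Note that pdist returns a condensed vector of the $n(n-1)/2$ pairwise distances rather than a full matrix; squareform (imported above) expands it. A quick sketch on stand-in data (random values in place of features.values, only the shapes matter):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Small random matrix standing in for features.values: 10 samples, 4 features.
rng = np.random.default_rng(0)
X = rng.random((10, 4))

d = pdist(X, metric='euclidean')
print(d.shape)   # (45,): condensed vector of n * (n - 1) / 2 pairwise distances

D = squareform(d)
print(D.shape)   # (10, 10): the full symmetric distance matrix
```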
Now that we have a distance, we can compute the weights.
A common function used to turn distances into edge weights is the Gaussian function: $$\mathbf{W}(u,v) = \exp \left( \frac{-d^2(u, v)}{\sigma^2} \right),$$ where $\sigma$ is the parameter which controls the width of the Gaussian.
The function producing the weights should be positive and monotonically decreasing with respect to the distance. It should take its maximum value when the distance is zero, and tend to zero as the distance increases. Note that distances are non-negative by definition, so any strictly decreasing function $f : \mathbb{R}^+ \rightarrow [0,C]$ that satisfies $f(0)=C$ and $\lim_{x \rightarrow +\infty}f(x)=0$ is suitable. The choice of the function depends on the data.
Some examples:
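As a sketch, here are two functions satisfying these conditions: the Gaussian kernel used below, and an exponential kernel as a heavier-tailed alternative (the name `laplacian` is just illustrative). Both equal 1 at zero distance, decrease strictly, and vanish at infinity.

```python
import numpy as np

def gaussian(d, sigma):
    # The kernel used in the rest of this notebook.
    return np.exp(-d**2 / sigma**2)

def laplacian(d, sigma):
    # A heavier-tailed alternative: decays more slowly for large distances.
    return np.exp(-d / sigma)

d = np.linspace(0, 5, 6)
print(gaussian(d, 1.0))
print(laplacian(d, 1.0))
```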
# Let us use the Gaussian function
kernel_width = distances.mean()
weights_list = np.exp(-distances**2 / kernel_width**2)
# Turn the list of weights into a matrix.
weight_matrix = squareform(weights_list)
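As a sanity check, the weight matrix obtained this way should be symmetric, with weights in $[0, 1]$ and a zero diagonal (squareform fills the diagonal with zeros, i.e., no self-loops). A sketch of the same pipeline on hypothetical stand-in data:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Rebuild the distance -> Gaussian weight -> matrix pipeline on random data.
rng = np.random.default_rng(42)
X = rng.random((20, 4))
distances = pdist(X, metric='euclidean')
kernel_width = distances.mean()
W = squareform(np.exp(-distances**2 / kernel_width**2))

print(np.allclose(W, W.T))                 # the weights are symmetric
print(np.allclose(np.diag(W), 0))          # squareform zeroes the diagonal: no self-loops
print((W >= 0).all() and (W <= 1).all())   # Gaussian weights lie in [0, 1]
```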
Exercise: Find the nodes with the highest degree and display their respective entries in the iris dataframe. Do they belong to the same iris species?
# Your code here.
Sometimes, you may need to compute additional features before processing them with some machine learning or some other data processing step. With Pandas, it is as simple as that:
# Compute a new column using the existing ones.
features['SepalLengthSquared'] = features['SepalLengthCm']**2
features.head()
| | SepalLengthCm | SepalWidthCm | PetalLengthCm | PetalWidthCm | SepalLengthSquared |
|---|---|---|---|---|---|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 | 26.01 |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 | 24.01 |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 | 22.09 |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 | 21.16 |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 | 25.00 |
Coming back to the weight matrix, we have obtained a full matrix but we may not need all the connections (reducing the number of connections saves some space and computations!). We can sparsify the graph by removing the values (edges) below some fixed threshold. Let us see what kind of threshold we could use:
plt.hist(weights_list)
plt.title('Distribution of weights')
plt.show()
[Output: histogram of the weights between 0 and 1. The distribution is bimodal, with most weights concentrated near 0 and near 1.]
# Let us choose a threshold of 0.6.
# Too high a threshold leaves disconnected components;
# too low a threshold keeps too many connections.
weight_matrix[weight_matrix < 0.6] = 0
Exercise: Plot the number of edges with respect to the threshold, for threshold values between 0 and 1.
# Your code here.
Remark: The distances presented here do not work well for categorical data.
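For categorical attributes, a matching-based distance such as the Hamming distance (the fraction of attributes on which two samples differ) is a better fit. A minimal sketch on hypothetical data (not taken from the iris table):

```python
import pandas as pd
from scipy.spatial.distance import pdist

# Hypothetical categorical samples.
df = pd.DataFrame({'color': ['red', 'red', 'blue'],
                   'shape': ['round', 'square', 'round']})

# Encode each category as an integer code, then compare samples attribute by
# attribute: the Hamming distance counts the fraction of differing attributes.
codes = df.apply(lambda col: col.astype('category').cat.codes)
print(pdist(codes.values, metric='hamming'))
```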
To conclude, let us visualize the graph. We will use the python module networkx.
# A simple command to create the graph from the adjacency matrix.
graph = nx.from_numpy_array(weight_matrix)
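To see what from_numpy_array does, here is a sketch on a tiny hypothetical adjacency matrix: nonzero entries become weighted edges, zeros become non-edges.

```python
import numpy as np
import networkx as nx

# A tiny weighted adjacency matrix (hypothetical values).
W = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.3],
              [0.0, 0.3, 0.0]])
g = nx.from_numpy_array(W)

print(g.number_of_nodes())   # 3
print(g.number_of_edges())   # 2: only nonzero entries become edges
print(g[0][1]['weight'])     # 0.8: matrix entries are stored as edge weights
```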
Let us try some direct visualizations using networkx.
# Let us add some colors, one per species.
# Copy the values, otherwise the assignments below would modify the original series.
colors = species.values.copy()
colors[colors == 'Iris-setosa'] = 0
colors[colors == 'Iris-versicolor'] = 1
colors[colors == 'Iris-virginica'] = 2
nx.draw_spectral(graph, node_color=colors)
Oh! It seems to be separated in 3 parts! Are they related to the 3 different species of iris?
Let us try another layout algorithm, where the edges are modeled as springs.
nx.draw_spring(graph, node_color=colors)
Save the graph to disk in the gexf format, readable by Gephi and other tools that manipulate graphs. You may now explore the graph using Gephi and compare the visualizations.
nx.write_gexf(graph, 'iris.gexf')
Exercise 1: Modify the experiment so that the distance is computed on normalized features, i.e., all features (columns of features) having the same mean and variance. This prevents some features from dominating the distance computation.
Exercise 2 (advanced): Construct the graph of k-nearest neighbors (choose $k=4$). You may read the kNN section of the scikit-learn documentation and use that Python module.
# Your code here.