#!/usr/bin/env python
# coding: utf-8

# # [NTDS'19] tutorial 3: build a graph from features
# [ntds'19]: https://github.com/mdeff/ntds_2019
#
# [Benjamin Ricaud](https://people.epfl.ch/benjamin.ricaud), [EPFL LTS2](https://lts2.epfl.ch).
#
# * Dataset: [Iris](https://archive.ics.uci.edu/ml/datasets/Iris)
# * Tools: [pandas](https://pandas.pydata.org), [numpy](http://www.numpy.org), [scipy](https://www.scipy.org), [matplotlib](https://matplotlib.org), [networkx](https://networkx.github.io), [gephi](https://gephi.org/)

# ## Tools

# By convention, the first lines of code are always about importing the packages we'll use.

# In[1]:

import pandas as pd
import numpy as np
from scipy.spatial.distance import pdist, squareform
from matplotlib import pyplot as plt
import networkx as nx

# Tutorials on pandas can be found at:
# *
# *
#
# Tutorials on numpy can be found at:
# *
# *
# *
#
# A tutorial on networkx can be found at:
# *

# The following line is a [magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html). It enables plotting inside the notebook.

# In[2]:

get_ipython().run_line_magic('matplotlib', 'inline')
# %matplotlib notebook

# ## Import and explore the data
#
# We will play with the famous Iris dataset. This dataset can be found in many places on the web and was first released at the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Iris). For example, it is hosted on [Kaggle](https://www.kaggle.com/uciml/iris/), with many demos and Jupyter notebooks you can try out (have a look at the "kernels" tab).
#
# ![Iris, by Za (own work), CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=144395](https://upload.wikimedia.org/wikipedia/commons/thumb/2/27/Iris_germanica_002.jpg/251px-Iris_germanica_002.jpg)

# In[3]:

iris = pd.read_csv('data/iris.csv')
iris.head()

# The description of the entries is given here:
# https://www.kaggle.com/uciml/iris/home

# In[4]:

iris['Species'].unique()

# In[5]:

iris.describe()

# ## Build a graph from the features
#
# We are going to build a graph from these data. The idea is to represent the iris samples (rows of the table) as nodes, with connections that depend on their physical similarity.
#
# The main question is how to define the notion of similarity between the flowers. For that, we need to introduce a similarity measure. It should use the physical properties of the flowers and return a positive real value for each pair of samples.
#
# *Remark:* The value should increase with the similarity.
#
# Let us separate the data into two parts: the physical properties and the labels.

# In[6]:

features = iris.loc[:, ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']]
species = iris.loc[:, 'Species']

# In[7]:

features.head()

# In[8]:

species.head()

# ### Similarity, distance and edge weight

# You can define many similarity measures. One of the most intuitive, and perhaps the easiest to program, relies on the notion of distance. If a distance between samples is defined, we can compute the weight accordingly: if the distance is short, meaning the samples are similar, we want a strong connection between the corresponding nodes (large weight).

# #### Different distances

# The cosine distance is a good candidate for high-dimensional data. It is defined as follows:
# $$d(u,v) = 1 - \frac{u \cdot v} {\|u\|_2 \|v\|_2},$$
# where $u$ and $v$ are two feature vectors.
#
# The distance grows with the angle formed by the two vectors: it is 0 if they are collinear (same direction), 1 if they are orthogonal, and 2 if they point in opposite directions.
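# For instance, here is a quick sketch of this formula applied to the first two iris samples, using the `features` dataframe defined above and plain numpy (`pdist` below computes the same quantity for all pairs when `metric='cosine'`):

# In[ ]:

u = features.values[0]
v = features.values[1]
# Cosine distance between the two feature vectors, following the formula above.
cosine_distance = 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
cosine_distance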
# Alternatives are the [$p$-norms](https://en.wikipedia.org/wiki/Norm_%28mathematics%29#p-norm) (or $\ell_p$-norms), defined as
# $$d(u,v) = \|u - v\|_p,$$
# of which the Euclidean distance is a special case with $p=2$.

# The `pdist` function from `scipy` computes the pairwise distances between the samples. By default it uses the Euclidean distance. `features.values` is the numpy array extracted from the pandas dataframe. Very handy.

# In[9]:

#from scipy.spatial.distance import pdist, squareform
get_ipython().run_line_magic('pinfo', 'pdist')

# In[10]:

distances = pdist(features.values, metric='euclidean')
# other metrics: 'cosine', 'cityblock', 'minkowski'

# Now that we have a distance, we can compute the weights.

# #### Distance to weights

# A common function used to turn distances into edge weights is the Gaussian function:
# $$\mathbf{W}(u,v) = \exp \left( \frac{-d^2(u, v)}{\sigma^2} \right),$$
# where $\sigma$ is a parameter that controls the width of the Gaussian.
#
# The function giving the weights should be positive and monotonically decreasing with respect to the distance. It should take its maximum value when the distance is zero, and tend to zero when the distance increases. Note that distances are non-negative by definition, so any *strictly* decreasing function $f : \mathbb{R}^+ \rightarrow [0,C]$ that verifies $f(0)=C$ and $\lim_{x \rightarrow +\infty}f(x)=0$ is suitable. The choice of the function depends on the data.
#
# Some examples:
# * A simple linear function $\mathbf{W}(u,v) = \frac{d_{max} - d(u, v)}{d_{max} - d_{min}}$. As the cosine distance takes values in $[0,2]$, a suitable linear function for it would be $\mathbf{W}(u,v) = 1 - d(u,v)/2$.
# * A triangular kernel: a straight line between the points $(0,1)$ and $(t_0,0)$, equal to 0 beyond that point.
# * The logistic kernel $\left(e^{d(u,v)} + 2 + e^{-d(u,v)} \right)^{-1}$.
# * An inverse function $(\epsilon+d(u,v))^{-n}$, with $n \in \mathbb{N}^{+*}$ and $\epsilon \in \mathbb{R}^+$.
# * You can find some more [here](https://en.wikipedia.org/wiki/Kernel_%28statistics%29).

# In[11]:

# Let us use the Gaussian function.
kernel_width = distances.mean()
weights_list = np.exp(-distances**2 / kernel_width**2)

# In[12]:

# Turn the list of weights into a matrix.
weight_matrix = squareform(weights_list)

# **Exercise:** Find the nodes with the highest degree and display their respective entries in the `iris` dataframe. Do they belong to the same iris species?

# In[13]:

# Your code here.

# Sometimes, you may need to compute additional features before processing them with some machine learning or some other data processing step. With pandas, it is as simple as that:

# In[14]:

# Compute a new column using the existing ones.
features['SepalLengthSquared'] = features['SepalLengthCm']**2
features.head()

# Coming back to the weight matrix, we have obtained a full matrix, but we may not need all the connections (reducing the number of connections saves some space and computation!). We can sparsify the graph by removing the values (edges) below some fixed threshold. Let us see what kind of threshold we could use:

# In[15]:

plt.hist(weights_list)
plt.title('Distribution of weights')
plt.show()

# In[16]:

# Let us choose a threshold of 0.6.
# If the threshold is too high, the graph will split into disconnected components;
# if it is too low, the graph will have too many connections.
weight_matrix[weight_matrix < 0.6] = 0

# **Exercise:** Plot the number of edges with respect to the threshold, for threshold values between 0 and 1.

# In[17]:

# Your code here.
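# As a hint for this exercise, here is a small sketch that counts the edges kept by a single threshold value. It uses the condensed weight vector `weights_list`, where each pair of nodes appears exactly once, so there is no double counting:

# In[ ]:

threshold = 0.6
# Each entry of weights_list is one potential edge; keep those above the threshold.
n_edges = np.count_nonzero(weights_list >= threshold)
print(f'{n_edges} edges are kept with a threshold of {threshold}')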
# *Remark:* The distances presented here do not work well for categorical data.

# ## Graph visualization
#
# To conclude, let us visualize the graph. We will use the python module networkx.

# In[18]:

# A simple command to create the graph from the adjacency matrix.
graph = nx.from_numpy_array(weight_matrix)

# Let us try some direct visualizations using networkx.

# In[19]:

# Let us add some colors, one per species.
colors = species.values.copy()  # copy so that the original series is not modified
colors[colors == 'Iris-setosa'] = 0
colors[colors == 'Iris-versicolor'] = 1
colors[colors == 'Iris-virginica'] = 2

# In[20]:

nx.draw_spectral(graph, node_color=colors)

# Oh! It seems to be separated into 3 parts! Are they related to the 3 different species of iris?
#
# Let us try another [layout algorithm](https://en.wikipedia.org/wiki/Graph_drawing#Layout_methods), where the edges are modeled as springs.

# In[21]:

nx.draw_spring(graph, node_color=colors)

# Save the graph to disk in the `gexf` format, readable by gephi and other tools that manipulate graphs. You may now explore the graph using gephi and compare the visualizations.

# In[22]:

nx.write_gexf(graph, 'iris.gexf')

# **Exercise 1:**
# Modify the experiment so that the distance is computed on normalized features, i.e., all features (columns of `features`) having the same mean and variance.
# This avoids giving too much importance to some features in the computation of the distance.
#
# **Exercise 2 (advanced):**
# Construct the graph of k-nearest neighbors (choose $k=4$).
# You may read the [kNN section of scikit-learn](https://scikit-learn.org/stable/modules/neighbors.html) and use this python module.

# In[23]:

# Your code here.
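# A possible starting point for Exercise 1 (only a sketch, and one normalization choice among others): standardize each measurement column to zero mean and unit variance, then rebuild the distances and weights exactly as above.

# In[ ]:

# Use the four original measurement columns of the iris dataframe.
measurements = iris[['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']]
# Standardize: zero mean and unit variance per column.
measurements_normalized = (measurements - measurements.mean()) / measurements.std()
distances_normalized = pdist(measurements_normalized.values, metric='euclidean')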