Team 2
Michael Allemann, Roman Bachmann, Liamarcia Bifano, Virginia Bordignon
US Senators
The goal of this milestone is to do some Graph Signal Processing (GSP) on the data of your project.
There are several questions in this milestone that ask you to plot a signal on your network. There are several ways you could approach this. In all cases, compute the position of the nodes a single time at the beginning, as this is likely to be a costly operation. Using a single layout for all the graph plots will also make it easier to compare the plots: the only thing changing between plots is the signal displayed. You can represent the features/labels lying on the graph via node colors. To do so, make sure to use a consistent color map throughout, and remember to display a colorbar and scale in all plots, so that we can tell what numbers the colors represent.
from matplotlib import pyplot as plt
plt.scatter(eigenvectors[:, 1], eigenvectors[:, 2], c=signal, alpha=0.5)
plt.colorbar()
import networkx as nx
graph = nx.from_scipy_sparse_matrix(adjacency)
coords = nx.spring_layout(graph) # Force-directed layout.
coords = eigenvectors[:, 1:3] # Laplacian eigenmaps.
nx.draw_networkx_nodes(graph, coords, node_size=60, node_color=signal)
nx.draw_networkx_edges(graph, coords, alpha=0.3)
import pygsp as pg
graph = pg.graphs.Graph(adjacency)
graph.set_coordinates('spring') # Force-directed layout.
graph.set_coordinates(eigenvectors[:, 1:3]) # Laplacian eigenmaps.
graph.plot_signal(signal)
We encourage you to try all the above methods before making your choice. Then be consistent and use only one throughout the milestone.
NetworkX and PyGSP should already be installed in your environment. If that's not the case, install them with conda install networkx pygsp (after activating the ntds_2018 environment).
%matplotlib inline
If you get a No module named 'pyunlocbox' error when running the cell below, install the pyunlocbox with conda install pyunlocbox (after activating the ntds_2018 environment).
import numpy as np
import pandas as pd
from scipy import sparse
import scipy.sparse.linalg
from matplotlib import pyplot as plt
from pyunlocbox import functions, solvers
import networkx as nx
For this milestone, all we will need is a set of features/labels for each of the nodes on the network, as well as the Laplacian, $L,$ and Gradient, $\nabla_G,$ matrices that you have computed for your network while working on milestone 3.
Import those objects in the cell below (or recompute the Laplacian and Gradient from your stored adjacency matrix, if you wish).
Note: If your features/labels are not floating-point numbers, please convert them. For example, if your data has labels "cat" and "dog" for nodes that represent cats or dogs, respectively, you may assign the number 1.0 for the label "cat" and the number -1.0 for the label "dog".
adjacency = np.loadtxt("../data/adjacency.csv", delimiter=",")
n_nodes = len(adjacency)
n_edges = int(adjacency.sum() // 2)
# Recomputing the Laplacian
D = np.diag(np.sum(adjacency, 1)) # Degree matrix
D_norm = np.diag(np.sum(adjacency, 1)**(-1/2))
laplacian_combinatorial = D - adjacency
laplacian_normalized = D_norm @ laplacian_combinatorial @ D_norm
laplacian = laplacian_normalized
# Computing the gradient (incidence matrix S)
S = np.zeros((n_nodes, n_edges))
edge_idx = 0
for i in range(n_nodes):
    for k in range(i):
        if adjacency[i, k] == 1.0:
            S[i, edge_idx] = 1
            S[k, edge_idx] = -1
            edge_idx += 1
assert np.allclose(S @ S.T, laplacian_combinatorial)
Here we map the three parties to unique integers to represent our label signal. We chose to give the Democrats a lower number and the Republicans a higher one, such that when plotting using the Matplotlib color map "bwr", the Democrats are represented in blue, while the Republicans are red.
Furthermore, we chose to label the independent party members between the two main parties.
string_labels = pd.read_csv('../data/all_votes.csv').sort_values(by=['party']).party.values
labels = np.zeros(string_labels.shape)
labels[np.where(string_labels == 'D')] = -1 # Democrats
labels[np.where(string_labels == 'I')] = 0 # Independent
labels[np.where(string_labels == 'R')] = 1 # Republicans
labels
In this section we will observe what your feature/label vector looks like in the "Graph Fourier" domain.
Compute the Fourier basis vectors and the Laplacian eigenvalues. Make sure to order those from smaller to larger, $\lambda_0 \leq \lambda_1 \leq \dots \leq \lambda_{N-1},$ and use the same ordering for the Fourier basis vectors.
# e: Ordered Laplacian eigenvalues
# U: Ordered graph Fourier basis
e, U = np.linalg.eigh(laplacian)
The eigenvalues are already sorted in ascending order:
plt.figure(figsize=(20,5))
(markerline, stemlines, baseline)=plt.stem(np.arange(len(e)),e);
plt.setp(baseline, visible=False)
plt.ylim((0,1.2))
plt.xlim((-1,98))
plt.ylabel('$\lambda$');
plt.xlabel('Eigenvalue index');
Plot the first 3 and the last Fourier basis vectors as signals on your graph. Clearly indicate which plot belongs to which basis vector.
# Using the generalized second and fourth eigenvectors as a two-dimensional embedding
D_norm = np.diag(np.sum(adjacency, 1)**(-1/2))
network_emb = D_norm @ U[:,[1,3]]
emb_x = network_emb[:,0]
emb_y = network_emb[:,1]
We plot the first four and the last two Fourier basis vectors instead, to see some more variation:
fig, ax = plt.subplots(3, 2, figsize=(15,17))
fourier_bases = [0,1,2,3,96,97]
vmax = max(-U[:,fourier_bases].min(), U[:,fourier_bases].max())
vmin = -vmax
for ax_idx, fourier_basis in enumerate(fourier_bases):
    ax_x, ax_y = ax_idx//2, ax_idx%2
    im = ax[ax_x, ax_y].scatter(emb_x, emb_y, c=U[:,fourier_basis], cmap='bwr', s=70,
                                edgecolors='black', vmin=vmin, vmax=vmax)
    ax[ax_x, ax_y].set_title('Signal = Fourier basis {}'.format(fourier_basis))
    ax[ax_x, ax_y].set_xlabel('Generalized eigenvector embedding $U_1$')
    ax[ax_x, ax_y].set_ylabel('Generalized eigenvector embedding $U_3$')
fig.subplots_adjust(right=0.9)
cbar_ax = fig.add_axes([0.925, 0.15, 0.025, 0.7])
fig.colorbar(im, cax=cbar_ax)
In addition to our own embedding using two chosen eigenvectors, we would also like to display all graphs using NetworkX's force-directed layout. This layout recovers the separation between the two main party clusters and arranges them nicely (with one exception).
Moreover, in this representation we can plot all the edges without distracting too much from the nodes.
graph = nx.from_numpy_matrix(adjacency)
coords = nx.spring_layout(graph) # Force-directed layout.
fig, ax = plt.subplots(3, 2, figsize=(15,15))
fourier_bases = [0,1,2,3,96,97]
vmax = max(-U[:,fourier_bases].min(), U[:,fourier_bases].max())
vmin = -vmax
for ax_idx, fourier_basis in enumerate(fourier_bases):
    ax_x, ax_y = ax_idx//2, ax_idx%2
    im = nx.draw_networkx_nodes(graph, coords, node_size=60, node_color=U[:,fourier_basis],
                                cmap='bwr', edgecolors='black', ax=ax[ax_x, ax_y], vmin=vmin, vmax=vmax)
    nx.draw_networkx_edges(graph, coords, alpha=0.2, ax=ax[ax_x, ax_y])
    ax[ax_x, ax_y].set_title('Signal = Fourier basis {}'.format(fourier_basis))
fig.subplots_adjust(right=0.9)
cbar_ax = fig.add_axes([0.925, 0.15, 0.025, 0.7])
fig.colorbar(im, cax=cbar_ax)
What can you observe in terms of local variations when comparing the basis vectors corresponding to the smallest eigenvalues to those corresponding to the largest eigenvalue? How would this justify the interpretation of the eigenvalues as "graph frequencies"?
The basis vectors corresponding to the smallest eigenvalues vary very slowly over the network while carrying very large amplitudes. The basis vectors of larger eigenvalues mostly have smaller amplitudes and somewhat higher local variation between neighbouring nodes. The one outlier node has a large influence on some of the Fourier bases, such as basis 2. This supports interpreting the eigenvalues as "graph frequencies": the smaller the eigenvalue, the smoother the corresponding basis vector.
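This smoothness interpretation can be checked numerically. Below is a small self-contained sketch on a toy path graph (independent of the senator network): the Laplacian quadratic form $u_k^{\top} L u_k$ of an eigenvector equals its eigenvalue $\lambda_k,$ which measures the aggregated local variation of $u_k$ over the edges.

```python
import numpy as np

# Toy path graph on 6 nodes (a stand-in for any network).
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A       # combinatorial Laplacian
lam, U = np.linalg.eigh(L)      # sorted eigenvalues, orthonormal eigenvectors

# The quadratic form u_k^T L u_k equals lambda_k: low-eigenvalue basis
# vectors vary little across edges, high-eigenvalue ones oscillate.
for k in range(6):
    assert np.isclose(U[:, k] @ L @ U[:, k], lam[k])
```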
Implement a function that returns the Graph Fourier Transform (GFT) of a given vector $x \in \mathbb{R}^{N},$ with respect to your graph, and a function that computes the corresponding inverse GFT (iGFT).
def GFT(x):
    return U.T @ x

def iGFT(x):
    return U @ x
Plot your feature/label vector as a signal on your graph.
def coplot_network_signal(signal, title='Signal = ...'):
    '''
    Plots a signal on a graph using both a Laplacian embedding and the NetworkX force-directed layout.

    Args:
        signal: The signal of each node to plot on the graph
        title: Plot title
    '''
    fig, ax = plt.subplots(1, 2, figsize=(16,7))
    vmax = max(-np.nanmin(signal), np.nanmax(signal))
    vmin = -vmax
    im = ax[0].scatter(emb_x, emb_y, c=signal, cmap='bwr', s=70, edgecolors='black',
                       vmin=vmin, vmax=vmax)
    ax[0].set_title('Laplacian Embedding')
    ax[0].set_xlabel('Generalized eigenvector embedding $U_1$')
    ax[0].set_ylabel('Generalized eigenvector embedding $U_3$')
    nx.draw_networkx_nodes(graph, coords, node_size=60, node_color=signal, cmap='bwr',
                           edgecolors='black', ax=ax[1], vmin=vmin, vmax=vmax)
    nx.draw_networkx_edges(graph, coords, alpha=0.2, ax=ax[1])
    ax[1].set_title('NetworkX Force-directed layout')
    fig.suptitle(title, fontsize=16)
    fig.subplots_adjust(right=0.9)
    cbar_ax = fig.add_axes([0.925, 0.15, 0.025, 0.7])
    fig.colorbar(im, cax=cbar_ax)
coplot_network_signal(labels, title='Signal = Ground truth labels')
Plot the absolute values of the GFT of your feature/label signal as a function of the graph eigenvalues. Make sure to add a marker indicating the position of each graph eigenvalue, and remember to properly name the axes.
We plot the spectrum both against the eigenvalue index and against the eigenvalue (frequency) itself.
plt.figure(figsize=(20,8))
(markerline, stemlines, baseline)=plt.stem(np.arange(len(e)),abs(GFT(labels)));
plt.setp(baseline, visible=False)
plt.ylim((0,10))
plt.xlim((-1,98))
plt.title('Graph Fourier Transform of the real label signal')
plt.ylabel('$|U^T x|$');
plt.xlabel('Eigenvalue index');
plt.figure(figsize=(20,8))
(markerline, stemlines, baseline)=plt.stem(e,abs(GFT(labels)));
plt.setp(baseline, visible=False)
plt.ylim((0,10))
plt.title('Graph Fourier Transform of the real label signal')
plt.ylabel('$|U^T x|$');
plt.xlabel('Eigenvalue frequency');
Discuss the behavior of the GFT that you plotted in the last question via comparing the plot of your label signal and those of the Fourier basis of Question 1. Would you consider your labels a "low-pass" or "high-pass" signal, or yet something else entirely?
We observe one very strong component at eigenvalue index 1, with several smaller peaks located among the lowest and highest 20 eigenvalues. The main peak represents a single, very smooth transition over the whole network between our labels of -1 and 1. This is consistent with the underlying assumption that Democrats and Republicans form two internally similar but mutually opposed voting blocks. Indeed, comparing the above plot of our ground truth labels with the plot of Fourier basis 1 shows that they are very similar.
We would consider our labels a "low-pass" signal, as most of the mass is concentrated in the very low frequencies, with some noise present in the upper end of the spectrum.
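The "low-pass" claim can be illustrated on a synthetic two-cluster graph (a sketch with made-up data, not the senator network): a ±1 cluster-label signal concentrates almost all of its spectral energy on the lowest eigenvalues.

```python
import numpy as np

# Two complete clusters of 5 nodes each, joined by a single edge.
n = 10
A = np.zeros((n, n))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1.0                    # the only inter-cluster edge
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)

labels = np.array([-1.0] * 5 + [1.0] * 5)  # cluster membership signal
x_hat = U.T @ labels                       # GFT of the labels
low_energy = np.sum(x_hat[:2] ** 2) / np.sum(x_hat ** 2)
assert low_energy > 0.9                    # energy sits in the lowest frequencies
```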
In this section we will check how filtered Dirac impulses diffuse on your graph.
Implement the following three filter kernels and the graph filtering operation.

The heat kernel: takes a vector of eigenvalues e and a parameter t, and outputs a vector of evaluations of the heat kernel at those eigenvalues (see the course slides for help).

The inverse filter kernel: takes a vector of eigenvalues e and a parameter t, and implements spectrally the filter defined in the node domain by $f_{out} = (I + t L)^{-1} f_{in},$ where $f_{in}, f_{out} \in \mathbb{R}^{N}$ are, respectively, the input and output signals of the filter.

The rectangle kernel: takes a vector of eigenvalues e and parameters l_min and l_max, and returns 1.0 at coordinates satisfying $(e[l] \geq l_{min}) \wedge (e[l] \leq l_{max}),$ and 0.0 otherwise.

The graph filtering operation: takes a signal x, a kernel, and a set of keyworded variables, and returns the corresponding filtered signal. Make use of the GFT and iGFT operations from Question 3.

Hint: **kwargs is a placeholder that collects supplementary pairs of keywords and values that are not known to the implementation before execution time. The kwargs variable is a dictionary whose keys and values are the parameter names and values. This is useful to allow both graph_filter(x, heat_kernel, tau=1.0) and graph_filter(x, rectangle_kernel, lambda_min=0.0, lambda_max=1.0) to be valid calls to the same implementation. One can then defer the keyword-value assignment to the kernel call: foo = kernel(bar, **kwargs).

def heat_kernel(e, t):
    return np.exp(-t * e)

def inverse_kernel(e, t):
    return 1 / (1 + t * e)

def rectangle_kernel(e, l_min, l_max):
    return ((e >= l_min) & (e <= l_max)).astype(float)

def graph_filter(x, kernel, **kwargs):
    return iGFT(kernel(e, **kwargs) * GFT(x))
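The **kwargs forwarding pattern used by graph_filter can be sketched in isolation (toy kernels and values, independent of the notebook's globals):

```python
import numpy as np

def toy_heat(e, t):
    return np.exp(-t * e)

def toy_rect(e, l_min, l_max):
    return ((e >= l_min) & (e <= l_max)).astype(float)

def apply_kernel(e, kernel, **kwargs):
    # kwargs collects keyword arguments unknown to apply_kernel itself
    # and forwards them verbatim to whichever kernel was passed in.
    return kernel(e, **kwargs)

e = np.array([0.0, 0.5, 1.0])
heat_vals = apply_kernel(e, toy_heat, t=1.0)                 # exp(-e)
rect_vals = apply_kernel(e, toy_rect, l_min=0.4, l_max=0.6)  # [0., 1., 0.]
```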
Plot all three filter kernels in the spectral domain. Remember to properly name the axes and title the plots. Choose filter parameters that best approximate the behavior of the GFT of your feature/label signal (as seen in Question 4).
We plot the kernels both against the eigenvalue index and against the eigenvalue (frequency) itself.
fig, ax = plt.subplots(3, 1, figsize=(20,16))
kernels = {
'Heat kernel': heat_kernel(e,5),
'Inverse kernel': inverse_kernel(e,10),
'Rectangular kernel': rectangle_kernel(e,0.1,0.8)
}
for idx, (kernel_name, kernel) in enumerate(kernels.items()):
    (markerline, stemlines, baseline) = ax[idx].stem(np.arange(len(e)), kernel)
    plt.setp(baseline, visible=False)
    ax[idx].set_ylim((0, 1.1))
    ax[idx].set_xlim((-1, 98))
    ax[idx].set_title(kernel_name)
    ax[idx].set_ylabel('Kernel strength')
    ax[idx].set_xlabel('Eigenvalue index')
fig, ax = plt.subplots(3, 1, figsize=(20,16))
kernels = {
'Heat kernel': heat_kernel(e,5),
'Inverse kernel': inverse_kernel(e,10),
'Rectangular kernel': rectangle_kernel(e,0.1,0.8)
}
for idx, (kernel_name, kernel) in enumerate(kernels.items()):
    (markerline, stemlines, baseline) = ax[idx].stem(e, kernel)
    plt.setp(baseline, visible=False)
    ax[idx].set_ylim((0, 1.1))
    ax[idx].set_title(kernel_name)
    ax[idx].set_ylabel('Kernel strength')
    ax[idx].set_xlabel('Eigenvalue frequency')
We chose the filter parameters such that the second peak is preserved well, while larger frequencies are attenuated. With the rectangle kernel, we can very easily select the desired frequencies, while with the other two we have to find an equilibrium between the strength of the desired signal and the rest. In the case of the heat kernel, we could nicely isolate the second peak without having to worry about the first one, as that one models a constant signal over the network. The inverse kernel still leaves a tail with a lot of weight.
When filtering, we multiply this filter kernel by the Fourier spectrum of a signal, whereby the low frequencies are kept and the larger frequencies are discarded.
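As a sanity check on this spectral-multiplication view (a self-contained toy, not the senator graph): filtering with the heat kernel in the spectral domain is the same operation as applying the matrix exponential $e^{-tL}$ in the node domain, since $e^{-tL} = U \, \mathrm{diag}(e^{-t\lambda_k}) \, U^{\top}$ for a symmetric Laplacian.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = (rng.random((8, 8)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # symmetric adjacency, no self-loops
L = np.diag(A.sum(1)) - A                  # combinatorial Laplacian
lam, U = np.linalg.eigh(L)

x = rng.standard_normal(8)
t = 2.0
filtered_spectral = U @ (np.exp(-t * lam) * (U.T @ x))  # iGFT(kernel * GFT(x))
filtered_node = expm(-t * L) @ x                        # node-domain equivalent
assert np.allclose(filtered_spectral, filtered_node)
```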
Consider two Dirac impulses arbitrarily placed on your graph. Plot their filtered versions by the three filter kernels implemented in Question 6.
def plot_filtered_diracs(dirac_dict, kernel_name, kernel, **kwargs):
    '''
    Plots the filtered signal of two given Dirac spikes on a graph.
    Plots using both the Laplacian embedding and the NetworkX force-directed layout.

    Args:
        dirac_dict: Dictionary specifying index and value of diracs. E.g.: {30: 1, 60: -1}
        kernel_name: Used in the title to indicate the used kernel type
        kernel: Kernel function to apply to the diracs
        kwargs: Additional arguments for the specific kernel
    '''
    diracs = np.zeros(n_nodes)
    dirac_idxs = list(dirac_dict.keys())
    for idx, dirac_value in dirac_dict.items():
        diracs[idx] = dirac_value
    filtered = graph_filter(diracs, kernel, **kwargs)
    vmax = max(-filtered.min(), filtered.max())
    vmin = -vmax

    # Plot using Laplacian embedding
    fig, ax = plt.subplots(1, 2, figsize=(16,7))
    # Plotting all nodes
    ax[0].scatter(emb_x, emb_y, c=filtered, cmap='PiYG', s=70,
                  edgecolors='black', vmin=vmin, vmax=vmax)
    # Plotting the dirac locations
    im = ax[0].scatter(emb_x[dirac_idxs], emb_y[dirac_idxs], c=diracs[dirac_idxs],
                       cmap='PiYG', s=300, edgecolors='black', vmin=vmin, vmax=vmax, marker='x')
    ax[0].set_title('Laplacian Embedding')
    ax[0].set_xlabel('Generalized eigenvector embedding $U_1$')
    ax[0].set_ylabel('Generalized eigenvector embedding $U_3$')

    # Plot using NetworkX
    nx.draw_networkx_nodes(graph, coords, node_size=60, node_color=filtered,
                           cmap='PiYG', edgecolors='black', ax=ax[1], vmin=vmin, vmax=vmax)
    nx.draw_networkx_edges(graph, coords, alpha=0.2, ax=ax[1])
    # Plotting the dirac locations
    d1_coords = coords[dirac_idxs[0]]
    d2_coords = coords[dirac_idxs[1]]
    im = ax[1].scatter(d1_coords[0], d1_coords[1], c=diracs[dirac_idxs[0]],
                       cmap='PiYG', s=300, edgecolors='black', vmin=vmin, vmax=vmax, marker='x')
    im = ax[1].scatter(d2_coords[0], d2_coords[1], c=diracs[dirac_idxs[1]],
                       cmap='PiYG', s=300, edgecolors='black', vmin=vmin, vmax=vmax, marker='x')
    ax[1].set_title('NetworkX Force-directed layout')
    fig.suptitle('Signal = Two diracs filtered using {}'.format(kernel_name), fontsize=16)
    fig.subplots_adjust(right=0.9)
    cbar_ax = fig.add_axes([0.925, 0.15, 0.025, 0.7])
    fig.colorbar(im, cax=cbar_ax)
In the following plots, as our graph consists of two main clusters, we will consider the possible combinations of placing two Dirac spikes: opposite signs in opposite clusters, opposite signs in the same cluster, equal signs in opposite clusters, and equal signs in the same cluster.
For reproducibility we do not place the spikes randomly, but choose the nodes ourselves. We chose nodes 15, 30 and 60 for that purpose.
We plot the graphs below using a different color palette, so as not to confuse the colours with any party associations in this exercise.
plot_filtered_diracs({30: -1, 60: 1}, 'Heat Kernel', heat_kernel, t=5)
plot_filtered_diracs({30: -1, 60: 1}, 'Inverse Kernel', inverse_kernel, t=10)
plot_filtered_diracs({30: -1, 60: 1}, 'Rectangle Kernel', rectangle_kernel, l_min=0.1, l_max=0.8)
plot_filtered_diracs({30: -1, 15: 1}, 'Heat Kernel', heat_kernel, t=5)
plot_filtered_diracs({30: -1, 15: 1}, 'Inverse Kernel', inverse_kernel, t=10)
plot_filtered_diracs({30: -1, 15: 1}, 'Rectangle Kernel', rectangle_kernel, l_min=0.1, l_max=0.8)
plot_filtered_diracs({30: 1, 60: 1}, 'Heat Kernel', heat_kernel, t=5)
plot_filtered_diracs({30: 1, 60: 1}, 'Inverse Kernel', inverse_kernel, t=10)
plot_filtered_diracs({30: 1, 60: 1}, 'Rectangle Kernel', rectangle_kernel, l_min=0.1, l_max=0.8)
plot_filtered_diracs({30: 1, 15: 1}, 'Heat Kernel', heat_kernel, t=5)
plot_filtered_diracs({30: 1, 15: 1}, 'Inverse Kernel', inverse_kernel, t=10)
plot_filtered_diracs({30: 1, 15: 1}, 'Rectangle Kernel', rectangle_kernel, l_min=0.1, l_max=0.8)
Comment on the "diffusion" of the Diracs induced by the filters. What does it say about the "communication" of information across your network? Relate that to the network connectivity measures that you analyzed during the previous milestones.
8.1: Diracs of opposite signs in opposite clusters
Both communities get nicely coloured in their respective sign. The inverse kernel diffusion results in a very weak signal across the network, but the separation is still faintly visible.
8.2: Diracs of opposite signs in same cluster
The opposite spikes diffused equally over the clusters, cancelling each other out in the process. The rectangular kernel, on the other hand, exhibits a particularly strong separation. This is because we only preserve the component corresponding to the lowest-frequency divide over the cluster.
8.3: Diracs of equal signs in opposite clusters
In this case we see an equal diffusion over the whole network. An exception is again the rectangular kernel, as seen in 8.2.
8.4: Diracs of equal signs in same cluster
The spikes mainly diffused into the cluster where they originated. The resulting signal is stronger in that cluster and weaker in the other.
The above results are in line with our findings that communication within clusters is very high (i.e. distances are small), while distances between the two clusters are larger and thus communication is lower.
In this section we will add some centered Gaussian noise to your feature/label signal and attempt to recover it.
In the cell below, set the noise variance $\sigma^2$ by making sure that the signal-to-noise ratio $SNR = \frac{\operatorname{Var}(\text{labels})}{\sigma^2}$ is about $1.5$.
Note: Actually, you might want to play with the noise variance here and set it to different values and see how the denoising filters behave.
noise_variance = labels.var() / 1.5  # SNR = Var(labels) / sigma^2 = 1.5
noisy_measurements = labels + np.sqrt(noise_variance) * np.random.randn(n_nodes)  # scale by the std, not the variance
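A quick sketch of the SNR bookkeeping with a synthetic ±1 label vector (made-up data, just to check the scaling): with $\sigma^2 = \operatorname{Var}(\text{labels})/1.5$ and noise drawn as $\sigma \cdot \mathcal{N}(0, 1),$ the empirical SNR comes out near 1.5.

```python
import numpy as np

rng = np.random.default_rng(2)
labels = rng.choice([-1.0, 1.0], size=100_000)  # synthetic binary labels
noise_variance = labels.var() / 1.5             # target SNR of 1.5
noise = np.sqrt(noise_variance) * rng.standard_normal(labels.size)
snr = labels.var() / noise.var()                # empirical SNR
assert abs(snr - 1.5) < 0.05
```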
In the denoising setting, a common graph signal processing assumption is that the signal $z$ that we want to recover is "smooth", in the sense that $\|\nabla_G z\|_2 = \sqrt{z^{\top} L z}$ is small, while remaining "close" to the measurements that we start with. This leads to denoising by solving the following optimization problem:
$$ z^\star = \text{arg} \, \underset{z \in \mathbb{R}^{N}}{\min} \, \|z - y\|_2^2 + \gamma z^{\top} L z, $$where $y \in \mathbb{R}^{N}$ is the vector of noisy measurements.
Derive the closed-form solution to this problem giving $z^\star$ as a function of $y$, $\gamma$ and $L$. Does this solution correspond to any graph filtering operation that you know?
To find the optimum, we differentiate the objective (let's denote it as $g$) with respect to the vector $z$:
$$ \nabla_z g(z,y,\gamma) = \nabla_z \left( \|z - y\|_2^2 + \gamma z^{\top} L z \right) = 2(z-y) + 2 \gamma L z $$Setting $ \nabla_z g(z,y,\gamma) = 0 $ to find our minimum:
\begin{align} 0 &= 2(z-y) + 2 \gamma L z \\ &= 2 \left( (I + \gamma L) z - y \right) \end{align}Solving for $z$ we get:
$$ z^\star = (I + \gamma L)^{-1} y $$We have to check that $z^\star$ is indeed the global minimum. The function is convex, since it is a sum of convex functions. Furthermore, $z^\star$ is a minimum, as the Hessian is positive semi-definite:
$$ \nabla_z^2 g(z,y,\gamma) = \nabla_z^2 \left( \|z - y\|_2^2 + \gamma z^{\top} L z \right) = 2I + 2 \gamma L $$The solution $z^\star$ corresponds to the inverse filter kernel from above, which models $f_{out} = (I + tL)^{-1} f_{in}$ in the node domain, with $t = \gamma$.
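The closed form can be verified numerically (a toy sketch on a small complete graph, independent of the data): $z^\star = (I + \gamma L)^{-1} y$ should achieve an objective value no larger than that of nearby perturbed points.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.ones((5, 5)) - np.eye(5)           # complete graph K5
L = np.diag(A.sum(1)) - A
gamma = 0.7
y = rng.standard_normal(5)

# Closed-form minimizer of ||z - y||^2 + gamma * z^T L z
z_star = np.linalg.solve(np.eye(5) + gamma * L, y)
obj = lambda z: np.sum((z - y) ** 2) + gamma * z @ L @ z

# The objective is strictly convex, so z_star beats random perturbations.
for _ in range(100):
    z = z_star + 0.1 * rng.standard_normal(5)
    assert obj(z_star) <= obj(z)
```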
Now, denoise the noisy measurements by passing them through the filters that you implemented in Question 6. Choose the filter parameters based on the behavior of the GFT of your original label signal (this is the prior knowledge that you input to the problem).
Plot, on your graph, the original label signal, the noisy measurements, and the three denoised version obtained above. Report on each plot the value of the corresponding relative error $$ \text{rel-err} = \frac{\|\text{labels} - z \|_2}{\|\text{labels}\|_2}, $$ where $z$ is the plotted signal.
def rel_err(labels, z):
    '''
    Calculates the relative error between the true labels and an estimate z.

    Args:
        labels: Ground truth signal
        z: Estimated signal
    '''
    return np.linalg.norm(labels - z, 2) / np.linalg.norm(labels, 2)
coplot_network_signal(labels, title='Signal = Ground truth labels')
print('Relative Error: {:.2f}'.format(rel_err(labels, labels)))
coplot_network_signal(noisy_measurements, title='Signal = Noisy measurements')
print('Relative Error: {:.2f}'.format(rel_err(labels, noisy_measurements)))
z_heat_denoised = graph_filter(noisy_measurements, heat_kernel, t=5)
coplot_network_signal(z_heat_denoised, title='Signal = Heat denoised measurements')
print('Relative Error: {:.2f}'.format(rel_err(labels, z_heat_denoised)))
z_inv_denoised = graph_filter(noisy_measurements, inverse_kernel, t=5)
coplot_network_signal(z_inv_denoised, title='Signal = Inverse denoised measurements')
print('Relative Error: {:.2f}'.format(rel_err(labels, z_inv_denoised)))
z_rect_denoised = graph_filter(noisy_measurements, rectangle_kernel, l_min=0.1, l_max=0.8)
coplot_network_signal(z_rect_denoised, title='Signal = Rectangle denoised measurements')
print('Relative Error: {:.2f}'.format(rel_err(labels, z_rect_denoised)))
Finally, overlay on the same plot the GFT of all five signals above.
signals = {
'Ground truth': labels,
'Noisy measurements': noisy_measurements,
'Heat filtered': z_heat_denoised,
'Inverse filtered': z_inv_denoised,
'Rectangle filtered': z_rect_denoised
}
plt.figure(figsize=(15,10))
for signal_name, signal in signals.items():
    plt.plot(e, abs(GFT(signal)), label=signal_name)
plt.xlabel('Eigenvalue frequency')
plt.ylabel('$|U^T x|$')
plt.title('GFT of all five signals')
plt.legend()
Comment on which denoised version seems to best match the original label signal. What is the underlying assumption behind the three filtering approaches? Do you think it holds for your label signal? Why?
We plot the spectrum both against the eigenvalue index and against the eigenvalue (frequency) itself.
plt.figure(figsize=(20,8))
(markerline, stemlines, baseline)=plt.stem(np.arange(len(e)),abs(GFT(noisy_measurements)));
plt.setp(baseline, visible=False)
plt.ylim((0,11))
plt.xlim((-1,98))
plt.title('Graph Fourier Transform of the noisy measurements')
plt.ylabel('$|U^T x|$');
plt.xlabel('Eigenvalue index');
plt.figure(figsize=(20,8))
(markerline, stemlines, baseline)=plt.stem(e,abs(GFT(noisy_measurements)));
plt.setp(baseline, visible=False)
plt.ylim((0,11))
plt.title('Graph Fourier Transform of the noisy measurements')
plt.ylabel('$|U^T x|$');
plt.xlabel('Eigenvalue frequency');
From the graph plots and the overlay plot of all five signals, the rectangle filter seems to recover the underlying labels best, without losing much signal strength. It also achieves the best relative error between the ground truth and the denoised signal.
Since our graph has only two big clusters, the other filters did not have much of a problem denoising the noisy measurements either. We could threshold by taking the sign of each denoised signal and retrieve a very accurate reconstruction of our labels.
We tried to match the strength of our filters in the spectral domain to the spectrum of our label signal, the aim being to preserve as much as possible of the very strong component at eigenvalue 1 and to attenuate all others. As seen in our plots from Question 1, the eigenvector associated with eigenvalue 1 presents a smooth transition between the two main parties, accurately modeling their respective clusters.
The underlying assumption is that if we design a filter that only keeps the preselected frequencies (and weakens or completely removes all others), chosen based on the ground truth, then the noise is filtered out. This in turn assumes that the introduced noise has frequencies that can be well separated from the signal in the frequency domain.
For our signal, these assumptions hold, as we can see in the above plot of the GFT of our noisy signal. The main peak at eigenvalue 1 is still clearly visible and easy to isolate.
It is often the case in large networks that we can only afford to query properties/labels on a small subset of nodes. Nonetheless, if the underlying labels signal is "regular" enough, we might still be able to recover a good approximation of it by solving an offline variational problem, with constraints on the values of the measured nodes.
In this section, we will be interested in solving such transductive learning problems by minimizing a (semi-) p-norm of the graph gradient applied to the signal of interest:
$$ \text{arg} \, \underset{z|_S = y}{\min} \|\nabla_G z\|_p^p, $$where $S$ is the set of measured nodes.
In English, we can say that we are looking for solutions with small "aggregated local variations", as measured by $\|\nabla_G z\|_p^p = \sum_{i=1}^{n} \sum_{j=1}^{n} \left( \sqrt{W_{ij}} |z[i] - z[j]| \right)^p,$ while satisfying the measurement constraints $z[i] = y[i]$ for $i \in S.$
We will work with two cases, according to the choices $p=1$ or $p=2.$ For $p=1,$ the problem is known as "interpolation by graph total-variation minimization," whereas for $p=2$ it is sometimes called "interpolation by Tikhonov regularization".
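For $p = 2$ the aggregated local variation reduces to the Laplacian quadratic form, since $\sum_{ij} W_{ij} (z[i] - z[j])^2 = 2\, z^{\top} L z.$ A toy numerical check of this identity (random graph and made-up signal, independent of the data):

```python
import numpy as np

rng = np.random.default_rng(3)
A = (rng.random((7, 7)) < 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                                # symmetric unweighted adjacency
L = np.diag(A.sum(1)) - A
z = rng.standard_normal(7)

# Double-sum definition of the aggregated local variation for p = 2.
double_sum = sum(A[i, j] * (z[i] - z[j]) ** 2
                 for i in range(7) for j in range(7))
assert np.isclose(double_sum, 2 * z @ L @ z)
```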
In order to solve these variational problems with the black-box solver provided to you, you will use the pyunlocbox. This toolbox implements iterative solvers based on so-called "proximal-splitting" methods.
Throughout this section, we will consider only a binarized version of your label signal. If your variable labels currently has values other than $\{-1, 1\},$ threshold them so that those are the only values taken in this vector. This can be done for example by choosing a number $t \in \mathbb{R}$ and then setting $\text{labels_bin}[i] = 1$ if $\text{labels}[i] \geq t$ and $\text{labels_bin}[i] = 0$ otherwise.
# Setting the independent party members to be Democrats
labels_bin = labels.copy()
labels_bin[labels_bin==0] = -1
Now, subsample this binarized label signal by $70\%$ by choosing, uniformly at random, $30\%$ of the nodes whose labels we will keep. You will do this by computing a "measurement mask" vector w with $1.0$'s at the measured coordinates, and $0.0$'s otherwise.
mn_ratio = 0.3
w = np.random.binomial(n=1, p=mn_ratio, size=n_nodes)  # each node measured independently with probability 0.3
m = w.sum()  # Number of measurements
print('Sampled {} out of {} nodes'.format(m, n_nodes))
Plot the subsampled signal on the graph. Hint: you might want to set the values of the un-measured nodes to numpy.nan for a cleaner plot.
subsampled = labels_bin*w
subsampled_nan = subsampled.copy()
subsampled_nan[subsampled_nan==0] = np.nan
subsampled_nan
coplot_network_signal(subsampled_nan, title='Signal = Subsampled labels')
For the solution of the variational problems you can use the following function as a "black-box". You will just need to provide a gradient matrix (which you should already have from Section 0), and an orthogonal projection operator P onto the span of the measured coordinates (made precise in the next question).
def graph_pnorm_interpolation(gradient, P, w, x0=None, p=1., **kwargs):
    r"""
    Solve an interpolation problem via gradient p-norm minimization.

    A signal :math:`x` is estimated from its measurements :math:`y = A(x)` by solving
    :math:`\text{arg}\underset{z \in \mathbb{R}^n}{\min}
    \| \nabla_G z \|_p^p \text{ subject to } Az = y`
    via a primal-dual, forward-backward-forward algorithm.

    Parameters
    ----------
    gradient : array_like
        A matrix representing the graph gradient operator
    P : callable
        Orthogonal projection operator mapping points in :math:`z \in \mathbb{R}^n`
        onto the set satisfying :math:`A P(z) = A z`.
    x0 : array_like, optional
        Initial point of the iteration. Must be of dimension n.
        (Default is `numpy.random.randn(n)`)
    p : {1., 2.}
    kwargs :
        Additional solver parameters, such as maximum number of iterations
        (maxit), relative tolerance on the objective (rtol), and verbosity
        level (verbosity). See :func:`pyunlocbox.solvers.solve` for the full
        list of options.

    Returns
    -------
    x : array_like
        The solution to the optimization problem.
    """
    grad = lambda z: gradient.dot(z)
    div = lambda z: gradient.transpose().dot(z)

    # Indicator function of the set satisfying :math:`y = A(z)`
    f = functions.func()
    f._eval = lambda z: 0
    f._prox = lambda z, gamma: P(z, w)

    # :math:`\ell_1` norm of the dual variable :math:`d = \nabla_G z`
    g = functions.func()
    g._eval = lambda z: np.sum(np.abs(grad(z)))
    g._prox = lambda d, gamma: functions._soft_threshold(d, gamma)

    # :math:`\ell_2` norm of the gradient (for the smooth case)
    h = functions.norm_l2(A=grad, At=div)

    stepsize = (0.9 / (1. + scipy.sparse.linalg.norm(gradient, ord='fro'))) ** p
    solver = solvers.mlfbf(L=grad, Lt=div, step=stepsize)

    if p == 1.:
        problem = solvers.solve([f, g, functions.dummy()], x0=x0, solver=solver, **kwargs)
        return problem['sol']
    elif p == 2.:
        problem = solvers.solve([f, functions.dummy(), h], x0=x0, solver=solver, **kwargs)
        return problem['sol']
    else:
        return x0
During the iterations of the algorithm used for solving the variational problem, we have to make sure that the labels at the measured nodes stay the same. We will do this by means of an operator P which, given a vector $a \in \mathbb{R}^{N},$ returns another vector $b \in \mathbb{R}^{N}$ satisfying $b[i] = \text{labels_bin}[i]$ for every node $i$ in the set $S$ of known labels, and $b[i] = a[i]$ otherwise. Write in the cell below the function for this orthogonal projection operator P. Hint: remember you have already computed the mask w.
def P(a, w):
    # Orthogonal projection: reset the measured entries (w == 1)
    # to their known labels, leave the rest untouched.
    mask_pos = np.where(w == 1)
    b = a.copy()
    b[mask_pos] = labels_bin[mask_pos]
    return b
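A quick sanity check of the projection logic on made-up data. Here `project` is a hypothetical stand-alone variant of `P` that takes the labels explicitly, so running it does not clobber the notebook's globals:

```python
import numpy as np

def project(a, w, labels):
    # Same logic as P above, with the labels passed explicitly.
    b = a.copy()
    b[w == 1] = labels[w == 1]
    return b

toy_labels = np.array([1., 0., 0., -1., 0.])  # made-up ground truth
toy_w = np.array([1, 0, 0, 1, 0])             # nodes 0 and 3 observed
a = np.full(5, 0.5)                           # arbitrary iterate
b = project(a, toy_w, toy_labels)
print(b)  # measured entries restored to their labels, others kept
```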
Solve the variational problems for $p = 1$ and $p = 2$. Record the solution for the $1$-norm minimization under `sol_1norm_min` and the one for the $2$-norm minimization under `sol_2norm_min`.

Compute also binarized versions of these solutions by thresholding the values with respect to $0$: non-negative values become $1.0$, while negative values become $-1.0$. Store those binarized versions under `sol_1norm_bin` and `sol_2norm_bin`, respectively.
sol_1norm_min = graph_pnorm_interpolation(sparse.csr_matrix(S).T, P, w, x0=subsampled.copy(), p=1.)
sol_2norm_min = graph_pnorm_interpolation(sparse.csr_matrix(S).T, P, w, x0=subsampled.copy(), p=2.)

# Binarize: non-negative values map to 1.0, negative values to -1.0.
threshold = 0
sol_1norm_bin = np.where(sol_1norm_min >= threshold, 1.0, -1.0)
sol_2norm_bin = np.where(sol_2norm_min >= threshold, 1.0, -1.0)
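On a toy vector (made up for illustration, independent of the project data), the thresholding rule behaves as follows:

```python
import numpy as np

toy_sol = np.array([-0.7, -0.01, 0.0, 0.3, 1.2])  # hypothetical solver output
toy_bin = np.where(toy_sol >= 0, 1.0, -1.0)
print(toy_bin)  # [-1. -1.  1.  1.  1.]  (0.0 is non-negative, so it maps to 1.0)
```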
Plot, on your graph, the original `labels_bin` signal, as well as the solutions to the variational problems (both binarized and otherwise). Indicate on each plot the value of the relative error $\text{rel-err} = \frac{\|\text{labels\_bin} - z\|_2}{\|\text{labels\_bin}\|_2}$, where $z$ is the signal in the corresponding plot.
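`rel_err` is assumed to have been defined earlier in the notebook; if not, a minimal implementation matching the formula above is:

```python
import numpy as np

def rel_err(x, z):
    # Relative error ||x - z||_2 / ||x||_2.
    return np.linalg.norm(x - z) / np.linalg.norm(x)

# Example: flipping one of four +/-1 labels.
x = np.array([1., -1., 1., -1.])
z = np.array([1., -1., 1., 1.])
print(rel_err(x, x))  # 0.0
print(rel_err(x, z))  # 1.0, since ||x - z|| = 2 and ||x|| = 2
```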
coplot_network_signal(labels_bin, title='Signal = Ground truth labels')
print('Relative Error: {:.2f}'.format(rel_err(labels_bin, labels_bin)))
coplot_network_signal(sol_1norm_min, title='Signal = Variational problem solution for p=1')
print('Relative Error: {:.2f}'.format(rel_err(labels_bin, sol_1norm_min)))
coplot_network_signal(sol_1norm_bin, title='Signal = Binarized variational problem solution for p=1')
print('Relative Error: {:.2f}'.format(rel_err(labels_bin, sol_1norm_bin)))
coplot_network_signal(sol_2norm_min, title='Signal = Variational problem solution for p=2')
print('Relative Error: {:.2f}'.format(rel_err(labels_bin, sol_2norm_min)))
coplot_network_signal(sol_2norm_bin, title='Signal = Binarized variational problem solution for p=2')
print('Relative Error: {:.2f}'.format(rel_err(labels_bin, sol_2norm_bin)))
Now that you have a feeling for the sort of solutions these transductive learning problems can give, we will examine the effect of the number of measurements on the accuracy of both $p$-norm minimization problems.

Towards this goal, you will write a `phase_transition()` function. This function repeats all the procedures that you have implemented in this section, but for varying numbers of measurements and thresholding values. It also computes the relative error, $\text{rel-err}$, of the solutions and averages it over a number of trials.

The output of the `phase_transition()` function has to be a matrix with `len(mn_ratios)` columns and `len(thresholds)` rows. Each pixel $(i, j)$ in the output matrix has to contain the average, over `n_trials` trials, of the relative error $\text{rel-err}$ in the binarized (with threshold `thresholds[i]`) solution given by `graph_pnorm_interpolation()` from observing an `mn_ratios[j]` fraction of nodes. The randomness comes from a different choice of mask `w` at each trial, hence the averaging.

This phase transition matrix lets us assess what level of recovery error one could expect for a certain fraction of measurements and a certain threshold level.
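The random masks can be drawn as i.i.d. Bernoulli variables, so that on average an `mn_ratios[j]` fraction of the nodes is observed. A minimal sketch (the node count and ratio below are assumed values for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility
n_nodes = 10000                 # assumed network size, for illustration
ratio = 0.3                     # target m/n measurement ratio

# Each node is observed independently with probability `ratio`.
w = rng.binomial(n=1, p=ratio, size=n_nodes)
observed_fraction = w.mean()
print(observed_fraction)  # close to 0.3
```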
def phase_transition(mn_ratios, thresholds, n_trials, labels_bin, p, **kwargs):
    pt_matrix = np.zeros((len(thresholds), len(mn_ratios)))
    for i, threshold in enumerate(thresholds):
        for j, ratio in enumerate(mn_ratios):
            # Average over n_trials random masks
            for trial in range(n_trials):
                # Subsample randomly: each node observed with probability `ratio`
                w = np.random.binomial(n=1, p=ratio, size=n_nodes)
                subsampled = labels_bin * w
                # Solve the p-norm interpolation problem
                sol = graph_pnorm_interpolation(sparse.csr_matrix(S).T, P, w,
                                                x0=subsampled.copy(), p=p, **kwargs)
                # Binarize: values at or above the threshold become 1, the rest -1
                sol_bin = np.where(sol >= threshold, 1.0, -1.0)
                # Accumulate the error
                pt_matrix[i, j] += rel_err(labels_bin, sol_bin)
    # Average over all trials
    pt_matrix /= n_trials
    return pt_matrix
Pick 5 "m/n" ratios in $(0, 1)$ and 5 threshold levels in $(-1, 1)$, and run the `phase_transition()` function with `n_trials = 20`, for both $p = 1$ and $p = 2$.
N = 11 # For more detail in the plots below, but longer computation time!
n_trials = 20
#mn_ratios = np.linspace(0.05, 0.9, N) # To space linearly
mn_ratios = np.geomspace(0.01, 0.9, N) # Small ratios are more interesting
thresholds = np.linspace(-0.9, 0.9, N)
pt_matrix_1norm = phase_transition(mn_ratios, thresholds, n_trials, labels_bin, 1, verbosity='NONE')
pt_matrix_2norm = phase_transition(mn_ratios, thresholds, n_trials, labels_bin, 2, verbosity='NONE')
Plot both phase transition matrices as images with a colorbar. Make sure to properly name the axes and title the images.
ratio_strings = ['{:.2f}'.format(ratio) for ratio in mn_ratios]
thresh_strings = ['{:.2f}'.format(thresh) for thresh in thresholds]
vmax = max(pt_matrix_1norm.max(), pt_matrix_2norm.max())
vmin = min(pt_matrix_1norm.min(), pt_matrix_2norm.min())
fig, ax = plt.subplots(1,2,figsize=(16,8))
im = ax[0].imshow(pt_matrix_1norm, cmap='inferno', vmin=vmin, vmax=vmax)
ax[0].set_title('1-Norm')
ax[0].set_xlabel('m/n ratio')
ax[0].set_ylabel('Threshold')
ax[0].set_xticks(np.arange(N))
ax[0].set_yticks(np.arange(N))
ax[0].set_xticklabels(ratio_strings)
ax[0].set_yticklabels(thresh_strings)
im = ax[1].imshow(pt_matrix_2norm, cmap='inferno', vmin=vmin, vmax=vmax)
ax[1].set_title('2-Norm')
ax[1].set_xlabel('m/n ratio')
ax[1].set_ylabel('Threshold')
ax[1].set_xticks(np.arange(N))
ax[1].set_yticks(np.arange(N))
ax[1].set_xticklabels(ratio_strings)
ax[1].set_yticklabels(thresh_strings)
fig.suptitle('Phase transition matrices (Darker is better)', fontsize=16)
fig.subplots_adjust(right=0.9)
cbar_ax = fig.add_axes([0.925, 0.15, 0.025, 0.7])
fig.colorbar(im, cax=cbar_ax)
plt.show()
Do the phase transition plots above provide any justification for choosing one $p-$norm interpolation over the other? Why?
When picking a threshold of 0, the 2-Norm yields very good results down to very small ratios. If we choose different thresholds, the results quickly degrade. This is because the 2-Norm tends to produce estimates strictly between the smallest and largest inputs, -1 and 1, which makes choosing a good threshold important. As we have two balanced clusters, choosing the threshold to be zero is the most reasonable choice.

For the 1-Norm, results are almost entirely independent of the threshold we choose. Indeed, the estimated values tend to lie below or above the smallest and largest inputs, -1 and 1, making the choice of a threshold between -1 and 1 inconsequential. Performance is only slightly worse than using the 2-Norm with a carefully selected threshold.

If there is an easy and justified way to pick a good threshold, taking the 2-Norm might produce better results. If this is not the case, one can get good results across the board with the 1-Norm using an almost arbitrary threshold.