# Certifiable Robustness of Graph Convolutional Networks under Structure Perturbations

## Contribution

A framework for certifying the robustness of GCN node classification against perturbations of the graph structure (edge deletions).

## Preliminaries and Approaches

### Their Structural Robustness Definition

For a single node $t$, the perturbation acts on the transformation of the adjacency matrix $\mathbf{A}$; over all admissible perturbations, the predicted label should not change. This is captured by the worst-case margin between the true class $y^*$ and any other class $y$:

$$
\begin{aligned}
m^{t}(y^{*}, y) := \ & \operatorname*{minimize}_{\tilde{A}} \; f_{\theta}^{t}(X, \mathcal{T}(\tilde{A}))_{y^{*}} - f_{\theta}^{t}(X, \mathcal{T}(\tilde{A}))_{y} \\
& \text{subject to } \tilde{A} \in \mathcal{A}(A)
\end{aligned}
$$

The admissible perturbation set is defined as

$$
\begin{aligned}
\mathcal{A}(A) = \ & \left\{\tilde{A} \in \{0,1\}^{N \times N} \mid \tilde{A}_{ij} \leq A_{ij} \wedge \tilde{A} = \tilde{A}^{T} \right. \\
& \wedge \|\tilde{A} - A\|_{0} \leq 2Q \\
& \left. \wedge \|\tilde{A}_{i} - A_{i}\|_{0} \leq q_{i} \ \forall\, 1 \leq i \leq N \right\}
\end{aligned}
$$

The formalization is unclear to me: the paper says it allows *inserting* at most $q_i$ edges, but the constraint $\tilde{A}_{ij} \leq A_{ij}$ above only permits *deleting* edges. Judging from the rest of the paper, "insert" appears to be a typo.
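To make the set definition concrete, here is a minimal pure-Python sketch (the function name and brute-force style are my own, not from the paper) that checks whether a candidate $\tilde{A}$ lies in $\mathcal{A}(A)$:

```python
def is_admissible(A_tilde, A, Q, q):
    """Check membership in A(A): binary symmetric matrix, edge
    deletions only (A_tilde_ij <= A_ij), a global budget of Q edges
    (2Q changed entries), and a per-node budget q_i on each row."""
    n = len(A)
    for i in range(n):
        for j in range(n):
            if A_tilde[i][j] not in (0, 1):
                return False
            if A_tilde[i][j] != A_tilde[j][i]:   # symmetry
                return False
            if A_tilde[i][j] > A[i][j]:          # deletions only
                return False
    changed = [[int(A_tilde[i][j] != A[i][j]) for j in range(n)]
               for i in range(n)]
    if sum(map(sum, changed)) > 2 * Q:           # ||A_tilde - A||_0 <= 2Q
        return False
    return all(sum(changed[i]) <= q[i] for i in range(n))


# Triangle graph; deleting one edge fits a budget of Q = 1 but not Q = 0.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
A_del = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]
print(is_admissible(A_del, A, Q=1, q=[1, 1, 1]))  # True
print(is_admissible(A_del, A, Q=0, q=[1, 1, 1]))  # False
```

Note how a single deleted edge changes two entries of the matrix, which is why the global budget reads $2Q$.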

### Induced Constraints

It is critical to ensure that perturbations in the transformed space still satisfy the constraints on the real adjacency matrix. The authors propose a sequence of constraints for this. I did not read this part carefully.

### Relaxation of the Neural Network

The network is then abstracted via a relaxation built on this transformed-space formulation, so that the minimization becomes tractable.
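I did not note the details, but a common ingredient in such abstractions is the triangle relaxation of ReLU over pre-activation bounds $[l, u]$; whether this exact relaxation is the one used in the paper is my assumption, not something I verified:

```python
def relu_upper_envelope(l, u):
    """Upper envelope of the convex (triangle) relaxation of ReLU
    on the interval [l, u], returned as (slope, intercept)."""
    if u <= 0:
        return 0.0, 0.0          # ReLU is identically zero here
    if l >= 0:
        return 1.0, 0.0          # ReLU is the identity here
    slope = u / (u - l)          # chord through (l, 0) and (u, u)
    return slope, -slope * l


# For pre-activations in [-1, 1] the upper bound is 0.5 * x + 0.5.
print(relu_upper_envelope(-1.0, 1.0))  # (0.5, 0.5)
```

The lower envelope in the "unstable" case ($l < 0 < u$) is any line between $0$ and $x$; together they give a convex region the optimizer can search over instead of the exact ReLU.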

### Bilinear Program

My guess is that the program is bilinear because $\hat{\mathbf{A}}$ ranges over a set, so products of optimization variables appear. I did not read the other parts carefully. There seems to be an iterative algorithm with a convergence guarantee. As long as they can show the lower bound of the margin $y^* - y$ is positive, the network is certifiably robust.
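To sanity-check what the certificate means, here is a brute-force version of $m^t(y^*, y)$ for tiny graphs. This is my own sketch: a one-step linear propagation $(A + I)XW$ stands in for $f_\theta$, and the per-node budgets $q_i$ are ignored; the paper instead lower-bounds this minimum via the relaxation and certifies robustness when the bound stays positive for every $y \neq y^*$.

```python
from itertools import combinations

def forward(A, X, W, t):
    """Toy stand-in for f_theta^t: one propagation step (A + I) X W,
    read off at row t."""
    n, d, c = len(X), len(X[0]), len(W[0])
    h = [X[t][f] + sum(A[t][j] * X[j][f] for j in range(n))
         for f in range(d)]
    return [sum(h[f] * W[f][k] for f in range(d)) for k in range(c)]

def worst_case_margin(A, X, W, t, y_star, y, Q):
    """Exact minimum of logits[y_star] - logits[y] over deletions of
    up to Q edges (per-node budgets q_i omitted for brevity)."""
    n = len(A)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i][j]]
    best = float('inf')
    for k in range(Q + 1):
        for removed in combinations(edges, k):
            At = [row[:] for row in A]
            for i, j in removed:
                At[i][j] = At[j][i] = 0
            logits = forward(At, X, W, t)
            best = min(best, logits[y_star] - logits[y])
    return best


# Path graph 0-1-2 with one-hot features; deleting edge (0, 1) hurts
# node 0's margin the most, but it stays positive under budget Q = 1.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
X = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
W = [[2, 0], [1, 0], [0, 3]]
print(worst_case_margin(A, X, W, t=0, y_star=0, y=1, Q=1))  # 2 > 0: robust
```

Enumeration is exponential in $Q$, which is exactly why the paper needs the relaxation-based lower bound instead.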

## Follow-up

It would be worthwhile to read their formalization of the induced constraints if I want to write this kind of paper.