This notebook is the computational appendix of arXiv:1609.06139. We demonstrate how to use some convenience functions to decide whether a qubit or qutrit POVM is simulable and how much noise is needed to make it simulable. We also give the details of how to reproduce the numerical results shown in the paper. Furthermore, we show how to decompose a simulable POVM into a convex combination of projective measurements.
To improve the readability of this notebook, we placed the supporting functions in a separate file; please download it to the same folder as the notebook if you would like to evaluate it. The following dependencies must also be available: the Python Interface for Conic Optimization Software Picos and its dependency cvxopt, at least one SDP solver (SDPA as an executable in the path, or Mosek with its Python interface installed; cvxopt as a solver is not recommended), and a vertex enumerator (cdd with its Python interface, or lrs/plrs as an executable in the path).
First, we import everything we will need:
from __future__ import print_function, division
from fractions import Fraction
import numpy as np
import numpy.linalg
import random
import time
from povm_tools import basis, check_ranks, complex_cross_product, dag, decomposePovmToProjective, \
enumerate_vertices, find_best_shrinking_factor, get_random_qutrit, \
get_visibility, Pauli, truncatedicosahedron
The question of whether a qubit or qutrit POVM is simulable by projective measurements boils down to an SDP feasibility problem. Few SDP solvers can handle feasibility problems, so from a computational point of view it is easier to phrase the question as an SDP optimization that returns the critical visibility, i.e., the largest visibility $t$ at which the noisy POVM is still simulable. Recall Eq. (8) from the paper, which defines the noisy POVM obtained by subjecting a POVM $\mathbf{M}$ to a depolarizing channel $\Phi_t$:
$\left[\Phi_t\left(\mathbf{M}\right)\right]_i := t M_i + (1-t)\frac{\mathrm{tr}(M_i)}{d} \mathbb{1}$.
If this visibility $t\in[0,1]$ is one, the POVM $\mathbf{M}$ is simulable.
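To make Eq. (8) concrete, here is a minimal sketch (our own illustration, not part of povm_tools) of the depolarizing map acting on a POVM; the noisy effects still sum to the identity, and $t=1$ recovers the original POVM:

```python
import numpy as np

def depolarize(M, t):
    # [Phi_t(M)]_i = t*M_i + (1-t)*tr(M_i)*1/d, following Eq. (8)
    d = M[0].shape[0]
    return [t*Mi + (1 - t)*np.trace(Mi).real*np.eye(d)/d for Mi in M]

# a generic two-outcome qubit POVM
M = [np.array([[0.8, 0.1], [0.1, 0.3]]), np.array([[0.2, -0.1], [-0.1, 0.7]])]
noisy = depolarize(M, 0.5)
print(np.allclose(sum(noisy), np.eye(2)))        # still a resolution of the identity
print(np.allclose(depolarize(M, 1.0)[0], M[0]))  # t = 1 leaves the POVM unchanged
```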
As an example, we study the tetrahedron measurement (see Appendix B in arXiv:quant-ph/0702021):
def dp(v):
    result = np.eye(2, dtype=np.complex128)
    for i in range(3):
        result += v[i]*Pauli[i]
    return result
b = [np.array([ 1, 1, 1])/np.sqrt(3),
np.array([-1, -1, 1])/np.sqrt(3),
np.array([-1, 1, -1])/np.sqrt(3),
np.array([ 1, -1, -1])/np.sqrt(3)]
M = [dp(bj)/4 for bj in b]
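Before running the SDP, we can sanity-check that the four effects defined above form a valid POVM. The snippet below restates the cell in a self-contained way, with the Pauli matrices written out explicitly as a stand-in for povm_tools.Pauli:

```python
import numpy as np

# explicit Pauli matrices (stand-in for povm_tools.Pauli)
Pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def dp(v):
    result = np.eye(2, dtype=np.complex128)
    for i in range(3):
        result += v[i]*Pauli[i]
    return result

b = [np.array([1, 1, 1])/np.sqrt(3),
     np.array([-1, -1, 1])/np.sqrt(3),
     np.array([-1, 1, -1])/np.sqrt(3),
     np.array([1, -1, -1])/np.sqrt(3)]
M = [dp(bj)/4 for bj in b]

print(np.allclose(sum(M), np.eye(2)))                          # effects sum to identity
print(all(np.linalg.eigvalsh(Mi).min() > -1e-12 for Mi in M))  # each effect is PSD
```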
Next we check with an SDP whether it is simulable by projective measurements. A four outcome qubit POVM $\mathbf{M} \in\mathcal{P}(2,4)$ is simulable if and only if
$M_{1}=N_{12}^{+}+N_{13}^{+}+N_{14}^{+},$
$M_{2}=N_{12}^{-}+N_{23}^{+}+N_{24}^{+},$
$M_{3}=N_{13}^{-}+N_{23}^{-}+N_{34}^{+},$
$M_{4}=N_{14}^{-}+N_{24}^{-}+N_{34}^{-},$
where Hermitian operators $N_{ij}^{\pm}$ satisfy $N_{ij}^{\pm}\geq0$ and $N_{ij}^{+}+N_{ij}^{-}=p_{ij}\mathbb{1}$, where $i<j$ , $i,j=1,2,3,4$ and $p_{ij}\geq0$ as well as $\sum_{i<j}p_{ij}=1$, that is, the $p_{ij}$ values form a probability vector. This forms an SDP feasibility problem, which we can rephrase as an optimization problem by adding depolarizing noise to the left-hand side of the above equations and maximizing the visibility $t$:
$\max_{t\in[0,1]} t$
such that
$t\,M_{1}+(1-t)\,\mathrm{tr}(M_{1})\frac{\mathbb{1}}{2}=N_{12}^{+}+N_{13}^{+}+N_{14}^{+},$
$t\,M_{2}+(1-t)\,\mathrm{tr}(M_{2})\frac{\mathbb{1}}{2}=N_{12}^{-}+N_{23}^{+}+N_{24}^{+},$
$t\,M_{3}+(1-t)\,\mathrm{tr}(M_{3})\frac{\mathbb{1}}{2}=N_{13}^{-}+N_{23}^{-}+N_{34}^{+},$
$t\,M_{4}+(1-t)\,\mathrm{tr}(M_{4})\frac{\mathbb{1}}{2}=N_{14}^{-}+N_{24}^{-}+N_{34}^{-}$.
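To see what a feasible point of this SDP looks like, consider a toy example (our own, not from the paper): the POVM obtained by flipping a fair coin between the projective measurements $\{P,\mathbb{1}-P\}$ and $\{Q,\mathbb{1}-Q\}$. Choosing $N_{12}^{\pm}$ and $N_{34}^{\pm}$ proportional to the projectors and setting all other operators and weights to zero satisfies every constraint with $t=1$:

```python
import numpy as np

I = np.eye(2)
P = np.array([[1, 0], [0, 0]], dtype=complex)    # projector onto |0>
Q = np.array([[1, 1], [1, 1]], dtype=complex)/2  # projector onto |+>

# coin-flip POVM: measure {P, 1-P} or {Q, 1-Q} with probability 1/2 each
M = [P/2, (I - P)/2, Q/2, (I - Q)/2]

# explicit feasible point with t = 1: only the pairs (1,2) and (3,4) are used;
# all other N operators vanish and p13 = p14 = p23 = p24 = 0
N12p, N12m, p12 = P/2, (I - P)/2, 0.5
N34p, N34m, p34 = Q/2, (I - Q)/2, 0.5

ok = (np.allclose(M[0], N12p) and np.allclose(M[1], N12m) and
      np.allclose(M[2], N34p) and np.allclose(M[3], N34m) and
      np.allclose(N12p + N12m, p12*I) and np.allclose(N34p + N34m, p34*I) and
      p12 + p34 == 1)
print(ok)  # the coin-flip POVM satisfies all constraints with t = 1
```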
If the critical visibility is one, we have a simulable measurement. We solve this SDP with the function get_visibility for the tetrahedron measurement; the result indicates that it is not simulable:
get_visibility(M)
0.8164966401734447
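This numerical value agrees, up to solver precision, with the analytical critical visibility $\sqrt{2/3}\approx 0.8165$ of the tetrahedron measurement:

```python
import numpy as np
print(np.sqrt(2/3))  # approximately 0.8164966, matching the SDP output above
```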
We take the qutrit POVM from Section 4.1 in arXiv:quant-ph/0310013:
psi0 = np.array([[1/np.sqrt(2)], [1/np.sqrt(2)], [0]])
omega = np.exp(2*np.pi*1j/3)
D = [[omega**(j*k/2) * sum(np.power(omega, j*m) * np.kron(basis((k+m) % 3), basis(m).T)
for m in range(3)) for k in range(1, 4)] for j in range(1, 4)]
psi = [[D[j][k].dot(psi0) for k in range(3)] for j in range(3)]
M = [np.kron(psi[k][j], psi[k][j].conj().T)/3 for k in range(3) for j in range(3)]
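As a quick check (our own addition), the nine effects of this covariant POVM resolve the identity. The snippet below is self-contained, assuming povm_tools.basis(i) returns the $i$-th standard basis column vector, for which we supply a stand-in:

```python
import numpy as np

def basis(i, d=3):
    # assumed stand-in for povm_tools.basis: i-th standard basis column vector
    e = np.zeros((d, 1))
    e[i] = 1
    return e

psi0 = np.array([[1/np.sqrt(2)], [1/np.sqrt(2)], [0]])
omega = np.exp(2*np.pi*1j/3)
D = [[omega**(j*k/2) * sum(np.power(omega, j*m) * np.kron(basis((k+m) % 3), basis(m).T)
      for m in range(3)) for k in range(1, 4)] for j in range(1, 4)]
psi = [[D[j][k].dot(psi0) for k in range(3)] for j in range(3)]
M = [np.kron(psi[k][j], psi[k][j].conj().T)/3 for k in range(3) for j in range(3)]

print(np.allclose(sum(M), np.eye(3)))  # the nine effects sum to the identity
```

This holds because conjugating by all nine Heisenberg displacement operators and summing yields $d\,\mathrm{tr}(\rho)\mathbb{1}$.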
The SDP to be solved is more intricate than in the qubit case. Using the notation of Lemma 14, let $\mathbf{M}\in\mathcal{P}(d,n)$ be an $n$-outcome measurement on a $d$-dimensional Hilbert space, and let $m\leq n$. The critical visibility $t_{m}(\mathbf{M})$ can be computed via the following SDP programme:
$\max_{t\in[0,1]} t$
such that
$t\,\mathbf{M}+(1-t)\left(\mathrm{tr}(M_1)\frac{\mathbb{1}}{d},\ldots,\mathrm{tr}(M_n)\frac{\mathbb{1}}{d}\right)=\sum_{X\in[n]_2} \mathbf{N}_X + \sum_{Y\in[n]_3} \mathbf{N}_Y$,
where the optimization runs over the operators and weights $\left\{\mathbf{N}_X\right\}_{X\in[n]_2}$, $\left\{p_X\right\}_{X\in[n]_2}$, $\left\{\mathbf{N}_Y\right\}_{Y\in[n]_3}$, $\left\{p_Y\right\}_{Y\in[n]_3}$ subject to
$[\mathbf{N}_X]_i \geq 0\ ,\ [\mathbf{N}_Y]_i \geq 0\ ,\ i=1,\ldots,n$,
$[\mathbf{N}_X]_i = 0$ for $i\notin{X}, [\mathbf{N}_Y]_i = 0$ for $i\notin{Y}$,
$\mathrm{tr}\left([\mathbf{N}_Y]_i\right) = p_Y$ for $i\in{Y}$,
$\sum_{i=1}^n [\mathbf{N}_X]_i= p_X \mathbb{1}\ , \sum_{i=1}^n[\mathbf{N}_Y]_i= p_Y \mathbb{1}$
$p_X \geq 0\ ,\ p_Y\geq 0\ ,\ \sum_{X\in[n]_2} p_X+\sum_{Y\in[n]_3} p_Y=1$.
Solving this SDP for the qutrit POVM above, we see that the visibility is far from one:
get_visibility(M, solver=None, proj=True)
0.8057639895134708
We can also compute only the visibility needed to decompose the POVM into general 3-outcome POVMs:
get_visibility(M, solver=None, proj=False)
0.805763970628155
Next we look at a projective-simulable POVM:
psi = [get_random_qutrit()]
psi.append(complex_cross_product(psi[0], np.array([[0], [0], [1]])))
psi.append(complex_cross_product(psi[0], psi[1]))
phi = [get_random_qutrit()]
phi.append(complex_cross_product(phi[0], np.array([[0], [0], [1]])))
phi.append(complex_cross_product(phi[0], phi[1]))
M = [0.5*np.kron(psi[0], psi[0].conj().T),
0.5*np.kron(psi[1], psi[1].conj().T),
0.5*np.kron(psi[2], psi[2].conj().T) + 0.5*np.kron(phi[0], phi[0].conj().T),
0.5*np.kron(phi[1], phi[1].conj().T),
0.5*np.kron(phi[2], phi[2].conj().T),
np.zeros((3, 3), dtype=np.float64),
np.zeros((3, 3), dtype=np.float64),
np.zeros((3, 3), dtype=np.float64),
np.zeros((3, 3), dtype=np.float64)]
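Before computing its visibility, we can check that this construction rests on valid orthonormal bases. The snippet below is self-contained, with numpy-based stand-ins for get_random_qutrit and complex_cross_product (assuming the latter returns a normalized vector Hermitian-orthogonal to its two arguments):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_qutrit():
    # normalized random complex 3-vector (stand-in for get_random_qutrit)
    v = rng.normal(size=(3, 1)) + 1j*rng.normal(size=(3, 1))
    return v / np.linalg.norm(v)

def complex_cross(a, b):
    # normalized vector Hermitian-orthogonal to a and b
    # (assumed stand-in for complex_cross_product)
    v = np.cross(a.ravel(), b.ravel()).conj().reshape(3, 1)
    return v / np.linalg.norm(v)

psi = [random_qutrit()]
psi.append(complex_cross(psi[0], np.array([[0], [0], [1]])))
psi.append(complex_cross(psi[0], psi[1]))
P = [np.kron(p, p.conj().T) for p in psi]
print(np.allclose(sum(P), np.eye(3)))  # the three projectors resolve the identity
```

Since each half of M above is half of such a resolution of the identity, the five nonzero effects sum to $\frac{1}{2}\mathbb{1}+\frac{1}{2}\mathbb{1}=\mathbb{1}$.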
The result is very close to one:
get_visibility(M)
0.9999999990236691
Here we repeat some of the theory presented in Appendix D. We would like to find an external polytope that tightly approximates $\mathcal{P}(2,4)$ and then check how much we have to "shrink" it until it fits inside the set of simulable POVMs. Since the operators $\{\mathbb{1}, \sigma_x, \sigma_y, \sigma_z\}$ form a basis for the real space of Hermitian matrices $\mathrm{Herm}(\mathbb{C}^2)$, we can write any matrix in this set as $M = \alpha \mathbb{1} + x \sigma_x + y \sigma_y + z \sigma_z$ for some real numbers $\alpha, x, y, z$. We relax the positivity condition of the measurement effects by requiring $\mathrm{tr}(M|\psi_j\rangle\langle\psi_j|)\geq0$ for some collection of pure states $\{|\psi_j\rangle\langle\psi_j|\}_{j=1}^N$, which in turn can always be expressed as $|\psi_j\rangle\langle\psi_j|=(1/2)(\mathbb{1}-\vec{v}_j \cdot \vec{\sigma})$ for a vector $\vec{v}_j$ on the unit sphere in $\mathbb{R}^3$. Thus, with the measurement effects also expressed in this basis, we can write the relaxed positivity conditions as
$(x,y,z)\cdot \vec{v}_j \leq \alpha,\ j=1,\ldots,N$,
where "$\cdot$" denotes the standard inner product in $\mathbb{R}^3$. Together with the normalization conditions below, these inequalities describe the approximating polytope:
$\begin{eqnarray} &\alpha_i \geq 0, \ i=1,...,4, \sum_i{\alpha_i} = 1\\ &\sum_i{x_i} = \sum_i{y_i} = \sum_i{z_i} = 0. \end{eqnarray}$
This yields sixteen real parameters, which would be expensive to treat computationally. We can, however, exploit certain properties that reduce the number of parameters. First of all, since the effects add up to the identity, we can drop the last four parameters. Then, due to invariance properties and the characteristics of extremal POVMs, one parameter is sufficient for the first effect, and three are enough for the second. In total, we are left with eight parameters.
We would like to define the inequalities describing the facets of the polytope, run a vertex enumeration algorithm, and refine the polytope further. From a computational perspective, the critical step is enumerating the vertices given the inequalities. Two major implementations are available that use fundamentally different algorithms: cdd, which implements the double description method, and lrs (together with its parallel variant plrs), which implements reverse search. Using cdd results in fewer vertices, but lrs and plrs run at least an order of magnitude faster. The function enumerate_vertices abstracts away the implementation, and the user can choose between cdd, lrs, and plrs. Note that the format of the inequalities is $b+Ax\geq 0$, where $b$ is the constant vector and $A$ is the coefficient matrix. Thus a row in our parametrization is of the form $[b, \alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6, \alpha_7, \alpha_8]$, corresponding to an inequality $b +\alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 + \alpha_4 x_4 + \alpha_5 x_5 + \alpha_6 x_6 + \alpha_7 x_7 + \alpha_8 x_8 \geq 0$.
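As a toy illustration of this $b+Ax\geq 0$ format (unrelated to the POVM polytope), the unit square $[-1,1]^2$ can be written as four such rows, and a point of the polytope satisfies all of them:

```python
import numpy as np

# The unit square [-1, 1]^2 in H-representation, one row per facet:
# each row [b, a1, a2] encodes b + a1*x + a2*y >= 0
square = np.array([[1, -1,  0],   # 1 - x >= 0
                   [1,  1,  0],   # 1 + x >= 0
                   [1,  0, -1],   # 1 - y >= 0
                   [1,  0,  1]])  # 1 + y >= 0
vertex = np.array([1, -1])        # a corner of the square
print(np.all(square[:, 0] + square[:, 1:] @ vertex >= 0))
```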
Since $M_2$ lies in a plane, we consider an approximation of the circle rather than the sphere. Furthermore, we can always assume that this vector lies in the $y$-positive half-plane, and take a polygon approximating the semicircle from the outside, defined by the points of tangency to the semicircle. In order to obtain a reliable polytope given only by rational coordinates, we set these points to be the image of a net of 100 points of the interval $[-1,1]$ under the stereographic projection. By dealing only with rational points, we ensure that we can go back and forth between inequalities and vertices, recovering the same initial set.
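The rational stereographic parametrization can be illustrated in isolation (our own sketch, with a hypothetical helper name): projecting a rational parameter $t$ from the pole $(0,-1)$ yields a point with exactly rational coordinates lying exactly on the unit circle.

```python
from fractions import Fraction

def stereo(t):
    # stereographic projection of parameter t onto the unit circle, pole (0, -1)
    return (Fraction(2*t, t**2 + 1), Fraction(2, t**2 + 1) - 1)

x, y = stereo(Fraction(1, 3))
print(x, y)              # x = 3/5, y = 4/5
print(x**2 + y**2 == 1)  # exact, no floating-point rounding
```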
n = 25
# crit is an approximation of 1/(1+sqrt2), the point whose 2D
# stereographic projection is (1/sqrt2, 1/sqrt2)
crit = Fraction(4142, 10000)
# for the interval [crit, 1] the projection from the pole P = (-1, 0)
# approximates "well" the circle
nn = Fraction(1 - crit, n)
# u discretizes the quarter of circle where x, y \geq 0
u = []
for r in range(1, n + 1):
    # P = (0, -1), x \in [crit, 1]
    u.append([Fraction(2*(crit + r*nn), (crit + r*nn)**2 + 1),
              Fraction(2, (crit + r*nn)**2 + 1) - 1])
    # P = (-1, 0), y \in [crit, 1]
    u.append([Fraction(2, (crit + r*nn)**2 + 1) - 1,
              Fraction(2*(crit + r*nn), (crit + r*nn)**2 + 1)])
u = np.array(u)
# u1 discretizes the quarter of circle where x \leq 0, y \geq 0
u1 = np.column_stack((-u[:, 0], u[:, 1]))
u = np.row_stack((u, u1))
# W1 encodes the polyhedron given by the tangency points in u
W1 = np.zeros((u.shape[0] + 1, 9), dtype=Fraction)
for i in range(u.shape[0]):
    W1[i, 2:5] = np.array([1, -u[i, 0], -u[i, 1]])
# This constraint is to get only the half polygon with positive y2
W1[u.shape[0], 4] = 1
Next, we would like to constrain the third effect $M_3$, and we start this approximation by defining a polytope approximating the unit sphere in $\mathbb{R}^3$. Similarly to the previous case, the approximation is defined by the points of tangency to the sphere, provided by the stereographic projection of a set of rational points contained in $[-1, 1]\times[-1, 1]$.
m1 = 2
m2 = 1
# crit is the same as above
mm1 = Fraction(1, m1)
mm2 = Fraction(crit, m2)
# v1 discretizes the positive octant of the sphere
v1 = []
# P = (0, 0, -1), x, y \in [0, 1]
for rx in range(1, m1 + 1):
    for ry in range(1, m1 + 1):
        v1.append([Fraction(2*(rx*mm1), (rx*mm1)**2 + (ry*mm1)**2 + 1),
                   Fraction(2*(ry*mm1), (rx*mm1)**2 + (ry*mm1)**2 + 1),
                   1 - Fraction(2, (rx*mm1)**2 + (ry*mm1)**2 + 1)])
# a second round to improve the approximation around the pole
# P = (0, 0, -1), x, y \in [0, crit]
for rx in range(1, m2 + 1):
    for ry in range(1, m2 + 1):
        v1.append([Fraction(2*(rx*mm2), (rx*mm2)**2 + (ry*mm2)**2 + 1),
                   Fraction(2*(ry*mm2), (rx*mm2)**2 + (ry*mm2)**2 + 1),
                   1 - Fraction(2, (rx*mm2)**2 + (ry*mm2)**2 + 1)])
v1 = np.array(v1)
# we now reflect the positive octant to construct the whole sphere
v1a = np.column_stack((-v1[:, 0], v1[:, 1], v1[:, 2]))
v1 = np.row_stack((v1, v1a))
v1b = np.column_stack((v1[:, 0], -v1[:, 1], v1[:, 2]))
v1 = np.row_stack((v1, v1b))
v1c = np.column_stack((v1[:, 0], v1[:, 1], -v1[:, 2]))
v1 = np.row_stack((v1, v1c))
# the following discretizes the quarters of equators where x, y, z > 0,
# corresponding to the case where rx, ry = 0 above, around the origin
yz = []
xz = []
xy = []
for r in range(1, m1+1):
    # P = [0, 0, -1], x = 0, y \in [0, 1]
    yz.append([0,
               Fraction(2*(r*m1), (r*m1)**2 + 1),
               1 - Fraction(2, (r*m1)**2 + 1)])
    # P = [0, 0, -1], y = 0, x \in [0, 1]
    xz.append([Fraction(2*(r*m1), (r*m1)**2 + 1),
               0,
               1 - Fraction(2, (r*m1)**2 + 1)])
    # P = [0, -1, 0], z = 0, x \in [0, 1]
    xy.append([Fraction(2*(r*m1), (r*m1)**2 + 1),
               1 - Fraction(2, (r*m1)**2 + 1),
               0])
yz = np.array(yz)
xz = np.array(xz)
xy = np.array(xy)
yz1 = np.column_stack((yz[:, 0], -yz[:, 1], yz[:, 2]))
yz2 = np.column_stack((yz[:, 0], yz[:, 1], -yz[:, 2]))
yz3 = np.column_stack((yz[:, 0], -yz[:, 1], -yz[:, 2]))
yz = np.row_stack((yz, yz1, yz2, yz3))
xz1 = np.column_stack((-xz[:, 0], xz[:, 1], xz[:, 2]))
xz2 = np.column_stack((xz[:, 0], xz[:, 1], -xz[:, 2]))
xz3 = np.column_stack((-xz[:, 0], xz[:, 1], -xz[:, 2]))
xz = np.row_stack((xz, xz1, xz2, xz3))
xy1 = np.column_stack((-xy[:, 0], xy[:, 1], xy[:, 2]))
xy2 = np.column_stack((xy[:, 0], -xy[:, 1], xy[:, 2]))
xy3 = np.column_stack((-xy[:, 0], -xy[:, 1], xy[:, 2]))
xy = np.row_stack((xy, xy1, xy2, xy3))
v2 = np.row_stack((yz, xz, xy))
v = np.row_stack((v1, v2))
The following constraints ensure that the third operator $M_3$ of the measurement is a quasi-effect, with the approximation given by $v$:
W2 = np.zeros((v.shape[0], 9), dtype=Fraction)
for i in range(v.shape[0]):
    W2[i, 5:] = np.array([1, -v[i, 0], -v[i, 1], -v[i, 2]])
The next set of constraints ensures the same condition for the last effect, which we express by $\mathbb{1} - M_1 - M_2 - M_3$:
W3 = np.zeros((v.shape[0], 9))
for i in range(v.shape[0]):
    W3[i] = [1, -1+v[i, 0], -1, v[i, 0], v[i, 1], -1,
             v[i, 0], v[i, 1], v[i, 2]]
We need that $\alpha_0, \alpha_1, \alpha_2 \geq 0$:
W4 = np.array([[0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0]])
We also require that $\alpha_0+\alpha_1+\alpha_2 \leq 1$, which corresponds to expressing the previous constraint for the last effect:
W5 = np.array([[1, -1, -1, 0, 0, -1, 0, 0, 0]])
Finally, we need $\alpha_0 \geq \alpha_1 \geq \alpha_2 \geq 1-\alpha_0-\alpha_1-\alpha_2$, a condition that we can impose without loss of generality by relabeling the outcomes. Once we have the last constraints, we stack all the vectors into a single array.
W6 = np.array([[0, 1, -1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, -1, 0, 0, 0],
[-1, 1, 1, 0, 0, 2, 0, 0, 0]])
hull = np.row_stack((W1, W2, W3, W4, W5, W6))
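As a quick sanity check on the ordering constraints (our own addition, assuming that $\alpha_0$ and $\alpha_1$ occupy the first two parameter slots and $\alpha_2$ the fifth, as the structure of W4 and W5 suggests), a correctly ordered point satisfies all rows of W6 in the $b+Ax\geq 0$ format, while swapping $\alpha_0$ and $\alpha_1$ violates the first row:

```python
import numpy as np

W6 = np.array([[0, 1, -1, 0, 0, 0, 0, 0, 0],
               [0, 0, 1, 0, 0, -1, 0, 0, 0],
               [-1, 1, 1, 0, 0, 2, 0, 0, 0]])
# a point with alpha0 = 0.4 >= alpha1 = 0.3 >= alpha2 = 0.2 >= 1 - 0.9 = 0.1
x = np.array([0.4, 0.3, 0, 0, 0.2, 0, 0, 0])
print(np.all(W6[:, 0] + W6[:, 1:] @ x >= 0))

# violating the ordering breaks the first inequality
x_bad = np.array([0.3, 0.4, 0, 0, 0.2, 0, 0, 0])
print(np.all(W6[:, 0] + W6[:, 1:] @ x_bad >= 0))
```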
We enumerate the vertices, which is a time-consuming operation:
time0 = time.time()
ext = enumerate_vertices(hull, method="plrs", verbose=1)
print("Vertex enumeration in %d seconds" % (time.time()-time0))
Vertex enumeration in 1219 seconds
As the last step, we iterate the SDP optimization described in the section "Checking critical visibility" over all vertices to get the best shrinking factor. This takes several hours to complete. Parallel computation does not work from a notebook, but it does when the script is executed in a console; for this reason, we disable it here.
time0 = time.time()
alphas = find_best_shrinking_factor(ext, 2, solver="mosek", parallel=False)
print("\n Found in %d seconds" % (time.time()-time0))
Current alpha: 0.815743 (done: 7.24%)
Following the same reasoning we applied to approximate $\mathcal{P}(2,4)$, we now approximate the set of qutrit POVMs $\mathcal{P}_{\mathrm{cov}}(3,9)$ covariant with respect to the discrete Heisenberg group. This task is relatively simple, since we only need to consider a quasi-positive seed $M$ and derive the effects from it by conjugation with the unitaries $D_{ij}$, which rotate the space along symmetric directions.
w = np.cos(2*np.pi/3) + 1j*np.sin(2*np.pi/3)
x = np.matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
D = [[], [], []]
for j in range(3):
    for k in range(3):
        D[j].append(np.matrix((w**(j*k/2))*(
            sum(w**(j*m)*x[:, np.mod(k + m, 3)]*dag(x[:, m]) for m in
                range(3)))))
We use the approximation to the set of positive semidefinite operators given by $\mathrm{tr}(M\Psi_i)\geq0$ for some finite set of rank-one projectors $\{\Psi_i\}$. We generate random projectors and rotate them by the unitaries $D_{ij}$ above in order to obtain a more regular distribution in the space of projectors.
# Discretization of the set of PSD matrices with 9*N elements
import qutip

N = 2
disc = []
for _ in range(N):
    psi = np.matrix(qutip.Qobj.full(qutip.rand_ket(3)))
    for j in range(3):
        for k in range(3):
            disc.append(D[j][k]*(psi*dag(psi))*dag(D[j][k]))
We now translate each of the trace constraints above by writing each $M = (m_{ab})$ as the 8-dimensional real vector $(m_{00}, \mathrm{re}(m_{01}), \mathrm{im}(m_{01}), \mathrm{re}(m_{02}), \mathrm{im}(m_{02}), m_{11}, \mathrm{re}(m_{12}), \mathrm{im}(m_{12}))$, where $m_{22} = 1/3 - m_{00} - m_{11}$ is determined because the trace is fixed.
hull = []
for i in range(9*N):
    # each row of hull ensures tr(M*disc[i]) >= 0
    hull.append([np.real(disc[i][2, 2])/3,
                 np.real(disc[i][0, 0]) - np.real(disc[i][2, 2]),
                 2*np.real(disc[i][1, 0]), -2*np.imag(disc[i][1, 0]),
                 2*np.real(disc[i][2, 0]), -2*np.imag(disc[i][2, 0]),
                 np.real(disc[i][1, 1]) - np.real(disc[i][2, 2]),
                 2*np.real(disc[i][2, 1]), -2*np.imag(disc[i][2, 1])])
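As a sanity check on this parametrization (our own addition, not part of the notebook), we can verify numerically that a hull row contracted with the 8-dimensional vector of an effect reproduces $\mathrm{tr}(M\Psi)$ exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

# random rank-one projector Psi
v = rng.normal(size=(3, 1)) + 1j*rng.normal(size=(3, 1))
v /= np.linalg.norm(v)
P = v @ v.conj().T

# random Hermitian effect with its trace fixed to 1/3
A = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
M = (A + A.conj().T)/2
M = M - (np.trace(M).real - 1/3)*np.eye(3)/3

# the 8-dimensional real vector of M, in the order used by hull
vec = np.array([M[0, 0].real, M[0, 1].real, M[0, 1].imag,
                M[0, 2].real, M[0, 2].imag, M[1, 1].real,
                M[1, 2].real, M[1, 2].imag])
# the corresponding hull row for the projector P
row = np.array([P[2, 2].real/3,
                P[0, 0].real - P[2, 2].real,
                2*P[1, 0].real, -2*P[1, 0].imag,
                2*P[2, 0].real, -2*P[2, 0].imag,
                P[1, 1].real - P[2, 2].real,
                2*P[2, 1].real, -2*P[2, 1].imag])
print(np.allclose(row[0] + row[1:] @ vec, np.trace(M @ P).real))
```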
We then construct the polytope generated by these inequalities.
cov_ext = enumerate_vertices(np.array(hull), method="plrs")
To get an idea of how good the approximation is, we can translate each vertex obtained into a quasi-POVM and check how negative its eigenvalues are.
# Converting vectors into covariant POVMs
povms = []
for i in range(cov_ext.shape[0]):
    eff = np.matrix([[cov_ext[i, 1],
                      cov_ext[i, 2] + cov_ext[i, 3]*1j,
                      cov_ext[i, 4] + cov_ext[i, 5]*1j],
                     [cov_ext[i, 2] - cov_ext[i, 3]*1j,
                      cov_ext[i, 6],
                      cov_ext[i, 7] + cov_ext[i, 8]*1j],
                     [cov_ext[i, 4] - cov_ext[i, 5]*1j,
                      cov_ext[i, 7] - cov_ext[i, 8]*1j,
                      1/3 - cov_ext[i, 1] - cov_ext[i, 6]]])
    M = []
    for j in range(3):
        for k in range(3):
            M.append(D[j][k]*eff*dag(D[j][k]))
    povms.append(M)
# Finding the least eigenvalues
A = np.zeros((cov_ext.shape[0]))
for i in range(cov_ext.shape[0]):
    A[i] = min(numpy.linalg.eigvalsh(povms[i][0]))
a = min(A)
We then optimise over the extremal points of the polytope.
alphas = find_best_shrinking_factor(cov_ext, 3, parallel=True)
We implemented the constructive strategy described in Appendix D to find a projective decomposition of trace-one qutrit measurements $\mathbf{M}\in\mathcal{P}_1(3,3)$.
First we define a function to generate a random trace-one POVM. This is the only step that requires an additional dependency beyond the ones we loaded at the beginning of the notebook: QuTiP.
from qutip import rand_unitary
def get_random_trace_one_povm(dim=3):
    U = rand_unitary(dim)
    M = [U[:, i]*dag(U[:, i]) for i in range(dim)]
    for _ in range(dim-1):
        U = rand_unitary(dim)
        r = random.random()
        for i in range(dim):
            M[i] = r*M[i] + (1-r)*U[:, i]*dag(U[:, i])
    return M
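The construction can also be checked without QuTiP; below is a numpy-only sketch of the same procedure, with a QR-based stand-in for qutip.rand_unitary. The generated effects are convex combinations of rank-one projectors from random orthonormal bases, so each has trace one and they sum to the identity:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(d=3):
    # random unitary via QR with phase fix (stand-in for qutip.rand_unitary)
    A = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
    Q, R = np.linalg.qr(A)
    return Q * (np.diag(R)/abs(np.diag(R)))

def random_trace_one_povm(d=3):
    U = random_unitary(d)
    M = [np.outer(U[:, i], U[:, i].conj()) for i in range(d)]
    for _ in range(d - 1):
        U = random_unitary(d)
        r = rng.random()
        M = [r*Mi + (1 - r)*np.outer(U[:, i], U[:, i].conj())
             for i, Mi in enumerate(M)]
    return M

M = random_trace_one_povm()
print(np.allclose(sum(M), np.eye(3)))                       # resolution of identity
print(np.allclose([np.trace(Mi).real for Mi in M], 1))      # every effect has trace 1
```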
Then we decompose a random POVM following the cascade of "rank reductions" described in Appendix D, and check the ranks:
M = get_random_trace_one_povm()
print("Rank of POVM: ", check_ranks(M))
coefficients, projective_measurements = decomposePovmToProjective(M)
Rank of POVM: [3, 2, 2]
As a sanity check, we look at the ranks of the effects of the individual projective measurements. We must point out that the numerical calculations occasionally fail, so we set the tolerance in the rank calculations high.
print("Ranks of projective measurements: ")
for measurement in projective_measurements:
    print(check_ranks(measurement, tolerance=0.01))
Ranks of projective measurements:
[0, 0, 0]
[0, 0, 0]
[1, 1, 1]
[1, 1, 1]
[1, 1, 1]
[1, 1, 1]
We show that the projective measurements indeed return the POVM:
N = coefficients[0]*projective_measurements[0] + \
coefficients[1]*(coefficients[2]*projective_measurements[1] +
coefficients[3]*(coefficients[4]*(coefficients[6]*projective_measurements[2] +
coefficients[7]*projective_measurements[3]) +
coefficients[5]*(coefficients[8]*projective_measurements[4] +
coefficients[9]*projective_measurements[5])))
not np.any(M - N > 10e-10)
True