We rely on Voronoi's first reduction of quadratic forms to decompose symmetric positive definite matrices in dimension $\leq 6$, in a form which resembles the eigenvalue-eigenvector decomposition but with integer offsets: $$ D = \sum_{1 \leq i \leq d'} \rho_i e_i e_i^T $$ where $d'=d(d+1)/2$ (except $d'=12$ in dimension $4$ in our implementation), $\rho_i\geq 0$, and $e_i \in Z^d$.
The six-dimensional case is especially interesting for its applications to the Hooke elasticity tensor of three-dimensional elasticity. The approach is (very) unlikely to extend, in a computationally efficient manner, to higher dimensions (including dimension $7$), due to a combinatorial explosion in Voronoi's theory.
Note on executing this notebook.
The implementation of Voronoi's decomposition involves a rather large data file, which is not included in the agd library, but provided in the AdaptiveGridDiscretizations repository (file Miscellaneous/Geometry6_data2.h). Executing this notebook thus requires a bit more effort than the others.
Summary of volume Algorithmic tools, this series of notebooks.
Main summary of the Adaptive Grid Discretizations book of notebooks, including the other volumes.
Acknowledgement. Some of the experiments presented in these notebooks are part of ongoing research with Ludovic Métivier and Da Chen.
Copyright Jean-Marie Mirebeau, Centre Borelli, ENS Paris-Saclay, CNRS, University Paris-Saclay
import sys; sys.path.insert(0,"..") # Allow import of agd from parent directory (useless if conda package installed)
#from Miscellaneous import TocTools; print(TocTools.displayTOC('TensorVoronoi6','Algo'))
from agd import LinearParallel as lp
from agd.Selling import GatherByOffset
from agd import AutomaticDifferentiation as ad
from agd.Metrics import Seismic
from agd.Plotting import savefig; #savefig.dirName = 'Figures/TensorVoronoi'
from agd.Metrics.misc import flatten_symmetric_matrix, expand_symmetric_matrix
The routines for tensor decomposition are, for efficiency, provided in a small c++ library named FileVDQ, where VDQ stands for "Voronoi Decomposition of Quadratic forms". This is in contrast with the two- and three-dimensional cases, where the decomposition algorithm is coded in Python (the c++ library can also be used in these smaller dimensions). A function named VoronoiDecomposition provides the interface.
from agd.Eikonal import VoronoiDecomposition
import numpy as np; allclose = np.allclose
import matplotlib.pyplot as plt
%matplotlib inline
import time
def ReloadPackages():
    from Miscellaneous.rreload import rreload
    global expand_symmetric_matrix,VoronoiDecomposition
    expand_symmetric_matrix,VoronoiDecomposition = rreload([expand_symmetric_matrix,VoronoiDecomposition],rootdir='../..')
Uncomment the following line to use the GPU implementation of Voronoi's decomposition.
#VoronoiDecomposition.default_mode = 'gpu_transfer'; allclose = ad.cupy_friendly(allclose)
Choose to use, or not, large instances by uncommenting the following line (computation time may become longer).
large_instances = (VoronoiDecomposition.default_mode == 'gpu_transfer') # Large instances on GPU only by default
We illustrate our tensor decomposition method on random positive definite matrices, of the form $$ D = A^T A, $$ where $A$ is a square matrix with independent standard Gaussian coefficients.
def MakeRandomTensor(dim, shape=tuple(), relax=0.05):
    A = np.random.standard_normal( (dim,dim) + shape )
    D = lp.dot_AA(lp.transpose(A),A)
    identity = np.eye(dim).reshape((dim,dim)+(1,)*len(shape))
    return D+lp.trace(D)*relax*identity
The inverse operation to tensor decomposition is, of course, reconstruction, defined by $$ (\lambda_i, e_i)_{i=1}^I \mapsto D = \sum_{1 \leq i \leq I} \lambda_i e_i e_i^T. $$
def Reconstruct(coefs,offsets):
    return lp.mult(coefs,lp.outer_self(offsets)).sum(2)
def LInfNorm(a):
    return np.max(np.abs(a))
np.random.seed(42) # Reproducibility
D = MakeRandomTensor(6)
D
array([[ 5.86365855, 0.78781668, -1.53319092, 2.11419436, -1.32247999, 1.41133885], [ 0.78781668, 11.28811803, 0.62779508, -0.31702439, 2.97614926, -1.20780797], [-1.53319092, 0.62779508, 8.66990649, 0.95257212, 2.59215657, -2.21101525], [ 2.11419436, -0.31702439, 0.95257212, 5.82259858, -1.14947197, 0.71740399], [-1.32247999, 2.97614926, 2.59215657, -1.14947197, 3.91888129, -0.97283498], [ 1.41133885, -1.20780797, -2.21101525, 0.71740399, -0.97283498, 5.55749118]])
The compilation time can be quite long on the GPU, and the execution time may not be instantaneous either.
coefs,offsets = VoronoiDecomposition(D,retry64_tol=0.)
Our decomposition of a $6\times 6$ symmetric positive definite matrix involves $21$ (non-negative) weights and (integral) offsets.
print("Coefficients : ", coefs)
print("Offsets : \n", offsets.astype(int))
Coefficients : [0.06272357 0.08321161 0.09108027 0.17300802 0.25543099 0.33765603 0.37974797 0.43206798 0.57262901 0.60285459 0.90199881 1.01424559 1.02162947 1.14710001 1.20004529 1.32672471 1.71589609 1.77768024 2.17099232 2.50835894 6.57177511] Offsets : [[ 1 1 1 1 0 1 1 1 0 1 1 0 1 0 0 0 0 1 0 0 0] [ 1 1 0 0 1 0 -1 0 1 0 0 1 1 0 0 1 0 0 0 0 1] [ 0 0 -1 -1 1 -1 -1 -1 0 0 1 1 -1 1 1 0 1 0 0 0 0] [ 1 0 0 0 0 1 1 1 0 0 1 0 0 0 1 0 0 0 0 1 0] [ 0 0 0 -1 1 -1 -1 -1 0 0 0 1 0 0 0 1 0 0 0 0 0] [ 0 0 1 0 -1 1 1 0 -1 1 0 0 0 -1 0 0 0 0 1 0 0]]
By design, the coefficients are non-negative, and the reconstruction is exact up to numerical precision.
print("Minimal coefficient : ", np.min(coefs))
print("Reconstruction error : ", LInfNorm(D-Reconstruct(coefs,offsets)))
assert np.allclose(D,Reconstruct(coefs,offsets))
Minimal coefficient : 0.06272357357338984 Reconstruction error : 6.545874953189923e-13
As we interpolate between two tensors, the coefficients and the offsets vary. Voronoi's decomposition is not unique, hence we select a decomposition among the optimal ones by minimizing a linear form. This linear form has fixed coefficients, which were initially randomly generated; with probability one, it selects a decomposition in a locally Lipschitz manner w.r.t. the metric. (An alternative selection principle, involving no randomness, would be to select the decomposition minimizing some lexicographic ordering, but this is numerically more complex and costly.)
def Interpolate(a,b,T=np.linspace(0,1,100)):
    return T, np.moveaxis(np.array([(1-t)*a + t*b for t in T]),0,-1)
T_interp, D_interp = Interpolate(MakeRandomTensor(6),MakeRandomTensor(6))
np.random.seed(43)
a,b = MakeRandomTensor(6),MakeRandomTensor(6)
t0,t1 = 0,1 #t1,t2=0.8,0.85
T_interp, D_interp = Interpolate(a,b,np.linspace(t0,t1,1000))
%%time
coefs,offsets = VoronoiDecomposition(D_interp)
CPU times: user 1.36 ms, sys: 3.41 ms, total: 4.77 ms Wall time: 555 ms
offsets[:,:,0].T
array([[ 0, 0, 1, -1, 1, 0], [ 1, 0, 1, -1, 0, -1], [ 1, 0, 0, -1, 0, -1], [ 0, 1, 0, 0, -1, 0], [ 0, 0, 1, -1, 0, 0], [ 0, 1, 1, 0, 0, 1], [ 1, 0, -1, 0, -1, 0], [ 0, 1, 1, -1, 0, 0], [ 1, 0, 0, 0, 0, -1], [ 0, 1, 0, 1, -1, 1], [ 0, 0, 1, 0, 0, 0], [ 0, 1, 0, 0, 0, 0], [ 0, 0, 0, 0, 0, 1], [ 0, 0, 0, 1, 0, 0], [ 0, 1, 0, 0, 0, 1], [ 0, 1, 1, 0, 0, 0], [ 1, 1, 0, 0, -1, 0], [ 1, 0, 0, 0, 0, 0], [ 0, 1, 0, 0, -1, 1], [ 1, 0, 0, 0, -1, 0], [ 0, 0, 0, 0, 1, 0]])
print("Reconstruction error : ", LInfNorm(D_interp - Reconstruct(coefs,offsets)))
assert np.allclose(D_interp, Reconstruct(coefs,offsets),atol=1e-4)
Reconstruction error : 1.5241141682054149e-12
decomp = GatherByOffset(T_interp,coefs,offsets)
fig = plt.figure(figsize=(20,10))
for offset,(t,coef) in decomp.items():
    plt.plot(t,coef)
plt.legend(decomp.keys(),ncol=3)
savefig(fig,"Coefs_Vor6.pdf")
On the CPU, the decomposition time increases linearly with the number of tensors. On the GPU, each decomposition runs on a single thread, and there are $2560$ threads on an nvidia $1080$ class card. The computation time is therefore rather insensitive to the number of tensors until that number is reached.
Notes on the GPU implementation. The GPU based Voronoi decomposition is approximately $100$ times faster than the CPU based one (except on the first call on a given machine, where JIT compilation takes a little while). However, it is also less stable, since it uses single precision scalars. In order to address this issue, the GPU implementation is first run using single precision, and then using double precision for the matrices whose decomposition proved to be excessively inaccurate.
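This retry pattern (compute everything in single precision, then redo only the inaccurate instances in double precision) can be sketched in pure numpy. The example below applies it to batched linear solves rather than to the actual tensor decomposition; `solve_with_retry` and its tolerance are illustrative, not part of agd.

```python
import numpy as np

def solve_with_retry(A, b, tol=1e-5):
    # Solve A[i] x[i] = b[i] for a batch of systems: first in float32,
    # then recompute in float64 those whose residual exceeds tol.
    x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)
    residual = np.max(np.abs(np.einsum('nij,nj->ni', A, x) - b), axis=1)
    bad = residual > tol
    if np.any(bad):  # Retry the excessively inaccurate instances in double precision
        x[bad] = np.linalg.solve(A[bad], b[bad])
    return x

np.random.seed(0)
A = np.random.standard_normal((50, 6, 6)); b = np.random.standard_normal((50, 6))
x = solve_with_retry(A, b)
print(np.max(np.abs(np.einsum('nij,nj->ni', A, x) - b)))  # Small residual everywhere
```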
def decomp_time(n):
    np.random.seed(42)
    D = MakeRandomTensor(6,(n,))
    start = time.time()
    coefs,offsets = VoronoiDecomposition(D)
    print(f"Decomposition of {n} matrices completed in {time.time()-start} seconds")
    print("Tensor shape: ",D.shape,", max reconstruction error : ",np.max(np.abs(D-Reconstruct(coefs,offsets))))
    # assert allclose(D,Reconstruct(coefs,offsets))
    return D,coefs,offsets
decomp_time(10);
Decomposition of 10 matrices completed in 0.012002944946289062 seconds Tensor shape: (6, 6, 10) , max reconstruction error : 1.7532642004880472e-12
decomp_time(100);
Decomposition of 100 matrices completed in 0.06799983978271484 seconds Tensor shape: (6, 6, 100) , max reconstruction error : 3.227640377190255e-12
decomp_time(1000);
Decomposition of 1000 matrices completed in 0.6193840503692627 seconds Tensor shape: (6, 6, 1000) , max reconstruction error : 4.46931380793103e-12
On the GPU, decomposition time should be mostly insensitive to $n\leq 2500$, and scale linearly with $n$ beyond that limit.
if large_instances: decomp_time(10000);
if large_instances: decomp_time(50000)
Voronoi's first reduction. Voronoi associates to each positive quadratic form, defined by a positive definite matrix $D$, the following optimization problem $$ V(D) = \min_M \mathrm{Tr}(D M) $$ where $M$ is a symmetric matrix, subject to the following linear constraints: for all $e \in Z^d \setminus \{0\}$ $$ <e,M e> \geq 2. $$ The set $\mathcal M$ of matrices $M$ obeying these constraints is known as Ryskov's polyhedron. (The constant $2$ in the above equation is customary, but may of course be replaced with an arbitrary positive constant, such as $1$.)
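Membership in Ryskov's polyhedron can be probed numerically by truncating the infinite family of constraints to offsets in a small box; the helper `min_norm` below is an illustrative sketch, not part of agd. For the two-dimensional perfect form, the minimum is exactly $2$: the constraints are saturated but not violated.

```python
import itertools
import numpy as np

def min_norm(M, k=3):
    # Minimum of <e, M e> over nonzero integer offsets e in the box [-k,k]^d.
    # For M in Ryskov's polyhedron this minimum is >= 2 (box = truncated constraint set).
    d = len(M)
    offsets = (np.array(e) for e in itertools.product(range(-k, k+1), repeat=d) if any(e))
    return min(np.dot(e, M @ e) for e in offsets)

M2 = np.array([[2., 1.], [1., 2.]])  # The (unique) two-dimensional perfect form
print(min_norm(M2))  # -> 2.0
```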
Dual linear program. Voronoi's first reduction is a well posed linear program, hence it admits a dual linear program, namely $$ \max_\lambda \sum_{e \in Z^d\setminus\{0\}} \lambda(e) $$ where $\lambda : Z^d\setminus\{0\} \to R$ is constrained to be non-negative, and to obey $$ \sum_{e \in Z^d\setminus\{0\}} \lambda(e) e e^T = D. $$
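The duality between the two programs can be checked by hand on a small two-dimensional example; the matrix $D$, the vertex $M$, and the weights below are chosen for illustration.

```python
import numpy as np

D = np.array([[2., -1.], [-1., 2.]])           # Matrix to decompose
M = np.array([[2., 1.], [1., 2.]])             # Vertex of Ryskov's polyhedron, optimal for this D
offsets = np.array([[1, 0], [0, 1], [1, -1]])  # Offsets saturating <e, M e> = 2
weights = np.array([1., 1., 1.])               # Dual variables lambda(e), non-negative

# Dual feasibility: the weighted offsets reconstruct D exactly
assert np.allclose(D, sum(w * np.outer(e, e) for w, e in zip(weights, offsets)))
# Equal primal and dual objective values, as dictated by linear programming duality
print(np.trace(D @ M), 2 * weights.sum())  # -> 6.0 6.0
```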
Computation of the decomposition. We numerically solve Voronoi's first reduction using a simplex-like method relying on extensive precomputation: all the vertices of Ryskov's polyhedron are known in advance, as well as their connectivity. Indeed, a famous result of Voronoi states that Ryskov's polyhedron admits only finitely many classes of vertices, up to arithmetic equivalence (linear changes of variables with integer coordinates).
From the solution to the primal linear program, Voronoi's first reduction, we can then deduce a solution to the dual linear program, namely the matrix decomposition that is of interest to us.
The primal linear program admits a unique solution, for generic $D$. However, the dual linear program usually has several solutions, because some of the vertices of Ryskov's polyhedron are degenerate.
Ryskov's polyhedron in dimension $d=6$ admits $7$ classes of vertices, up to arithmetic equivalence, see above. They are described in detail in the following reference:
perfect_forms6 = expand_symmetric_matrix(np.array([
[2. ,1. ,2. ,1. ,1. ,2. ,1. ,1. ,1. ,2. ,1. ,1. ,1. ,1. ,2. ,1. ,1. ,1. ,1. ,1. ,2. ],
[2. ,0. ,2. ,1. ,1. ,2. ,1. ,1. ,1. ,2. ,1. ,1. ,1. ,1. ,2. ,1. ,1. ,1. ,1. ,1. ,2. ],
[2. ,0. ,2. ,0. ,1. ,2. ,1. ,1. ,1. ,2. ,1. ,1. ,1. ,1. ,2. ,1. ,1. ,1. ,1. ,1. ,2. ],
[2. ,0.5,2. ,1. ,1. ,2. ,1. ,1. ,0.5,2. ,1. ,1. ,1. ,1. ,2. ,1. ,1. ,1. ,1. ,0.5,2. ],
[2. ,0.5,2. ,1. ,1. ,2. ,1. ,1. ,0.5,2. ,1. ,1. ,0.5,0.5,2. ,1. ,1. ,0.5,0.5,0.5,2. ],
[2. ,0.5,2. ,1. ,1. ,2. ,1. ,1. ,0.5,2. ,1. ,1. ,0.5,0.5,2. ,1. ,1. ,1. ,1. ,1. ,2. ],
[2. ,0. ,2. ,0.5,1. ,2. ,1. ,1. ,1. ,2. ,1. ,0.5,1. ,1. ,2. ,0.5,1. ,1. ,0.5,0. ,2. ]]).T
)
Perfect forms are beautiful mathematical objects, and each of these matrices has a particular story to tell. In particular, the form of index two:
np.linalg.det(np.moveaxis(perfect_forms6,-1,0))
array([7. , 4. , 3. , 5.484375, 3.796875, 5.0625 , 5.359375])
perfect_forms6[:,:,2]
array([[2., 0., 0., 1., 1., 1.], [0., 2., 1., 1., 1., 1.], [0., 1., 2., 1., 1., 1.], [1., 1., 1., 2., 1., 1.], [1., 1., 1., 1., 2., 1.], [1., 1., 1., 1., 1., 2.]])
On the other hand, the perfect form of index $0$ admits counterparts in all dimensions. It is optimal (up to arithmetic equivalence) for a matrix $D$ if, and only if, $D$ admits an obtuse superbase. See the notebook on Selling's reduction.
It is a non-degenerate vertex of Ryskov's polyhedron, and thus has $21 = \mathrm{dim}(S_6)$ vector pairs $\pm e\in Z^d$ saturating the constraint $<e,Me>\geq 2$, and $21$ neighbor vertices on the skeleton of Ryskov's polyhedron.
perfect_forms6[:,:,0]
array([[2., 1., 1., 1., 1., 1.], [1., 2., 1., 1., 1., 1.], [1., 1., 2., 1., 1., 1.], [1., 1., 1., 2., 1., 1.], [1., 1., 1., 1., 2., 1.], [1., 1., 1., 1., 1., 2.]])
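The $d$-dimensional counterpart of this form has $2$ on the diagonal and $1$ elsewhere, i.e. $M = \mathrm{Id} + \mathbf{1}\mathbf{1}^T$, whose determinant is $d+1$ (consistent with the value $7$ computed above for $d=6$). Since $<e,Me> = \|e\|^2 + (\sum_i e_i)^2$ for integer offsets $e$, the saturating pairs can be enumerated directly, as sketched below.

```python
import itertools
import numpy as np

d = 6
M = np.eye(d) + np.ones((d, d))  # 2 on the diagonal, 1 elsewhere
print(round(np.linalg.det(M)))   # -> 7, i.e. d+1

# Saturating offsets obey |e|^2 + (sum e)^2 = 2, hence have entries in {-1,0,1}
sat = [e for e in itertools.product((-1, 0, 1), repeat=d)
       if any(e) and np.dot(e, M @ np.array(e)) == 2]
print(len(sat) // 2)  # -> 21 pairs, matching dim(S_6) = 21
```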
In order to obtain the full details of Voronoi's decomposition, we use the steps='Split' argument.
np.random.seed(42)
D = MakeRandomTensor(6)
a,vertex,objective,weights,offsets = VoronoiDecomposition(D,steps='Split')
In this example, the second perfect form is optimal, up to arithmetic equivalence.
vertex # Index of the optimal perfect form
array(2)
The linear change of variables, defining the arithmetic equivalence, is returned as well.
a
array([[ 0., 1., 0., -1., -1., 0.], [ 1., 0., 0., -1., 0., 0.], [ 0., 0., 0., -1., 0., 0.], [ 0., -1., -1., 1., 1., -1.], [ 0., 0., 0., 1., 1., 0.], [-1., 0., 0., 1., 0., 1.]])
And the optimal matrix for Voronoi's decomposition is thus obtained as follows.
def dot_tADA(A,D):
return lp.dot_AA(lp.dot_AA(lp.transpose(A),D),A)
M = dot_tADA(a,perfect_forms6[:,:,vertex])
M
array([[ 2., -1., 0., -1., 1., -1.], [-1., 2., 1., 0., -2., 1.], [ 0., 1., 2., -1., -2., 1.], [-1., 0., -1., 2., 1., 0.], [ 1., -2., -2., 1., 4., -1.], [-1., 1., 1., 0., -1., 2.]])
The value of Voronoi's first reduction is also, by Kantorovitch duality, (twice) the sum of the coefficients of the decomposition of $D$.
lp.trace(lp.dot_AA(D,M)) # Value of primal linear program
48.6937132003662
objective # Common value of the primal and dual linear programs
array(48.6937132)
2*np.sum(weights) # Value of dual linear program (matrix decomposition)
48.69371320036792
The offsets involved in the decomposition of $D$ saturate the constraint $<e, M e> \geq 2$.
lp.dot_VAV(offsets,np.expand_dims(M,axis=-1),offsets)
array([2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2.])
Not all perfect forms are equal. The form of index two is optimal most of the time (about $90\%$ of cases), up to arithmetic equivalence, whereas the form of index zero is rarely optimal. These two forms are discussed above.
np.random.seed(42)
D = MakeRandomTensor(6,(20000 if large_instances else 1000,))
a,vertex,objective,weights,offsets = VoronoiDecomposition(D,steps='Split')
index,counts = np.unique(vertex,return_counts=True)
print(f"Found optimal forms {index} with counts {counts},\n hence frequency {counts/vertex.size}")
Found optimal forms [1 2 3 4 5 6] with counts [ 83 890 10 5 11 1], hence frequency [0.083 0.89 0.01 0.005 0.011 0.001]
The theory described in the previous section for six dimensions applies as well in all other dimensions. We briefly reproduce similar results in dimensions $2$, $3$, $4$ and $5$. There is a single perfect form in dimensions $2$ and $3$. In other words, all vertices of Ryskov's polyhedron are arithmetically equivalent in these dimensions.
perfect_forms2 = expand_symmetric_matrix(np.array([
[2.,1.,2.]
]).T)
perfect_forms3 = expand_symmetric_matrix(np.array([
[2.,1.,2.,1.,1.,2.]
]).T)
perfect_forms4 = expand_symmetric_matrix(np.array([
[2, 1, 2, 1, 1, 2, 0, 1, 1, 2],
[2, 1, 2, 1, 1, 2, 1, 1, 1, 2.]
]).T)
perfect_forms5 = expand_symmetric_matrix(np.array([
[2,1,2,1,1,2,1,1,1,2,0,1,1,1,2],
[2,1,2,1,1,2,1,1,1,2,1,1,1,1,2],
[2,0.5,2,0.5,0.5,2,-1,-1,-1,2,-1,-1,-1,0.5,2]
]).T)
perfect_forms = (None,None,perfect_forms2,perfect_forms3,perfect_forms4,perfect_forms5,perfect_forms6)
np.random.seed(42)
for d in range(2,7):
    if d<=3 and VoronoiDecomposition.default_mode == 'gpu_transfer': continue # low dim not implemented on GPU
    print(f'\n---- Dimension {d} ----')
    D = MakeRandomTensor(d)
    a,vertex,objective,weights,offsets = VoronoiDecomposition(D,steps='Split')
    print(f"Decomposed matrix : \n {D}")
    M = dot_tADA(a,perfect_forms[d][:,:,vertex])
    print(f"Optimal vertex of Ryskov's polyhedron has class {vertex}, and value : \n {M} ")
    print(f"Common value of the primal and dual linear programs : {objective}")
    print(objective,2*weights.sum(),lp.trace(lp.dot_AA(D,M)))
    assert allclose(objective,2*weights.sum())
    assert allclose(objective,lp.trace(lp.dot_AA(D,M)))
    print(f"Minimal weight {np.min(weights)} (must be non-negative)")
    assert np.allclose(D,Reconstruct(weights,offsets))
    assert np.all(weights>=0)
    offset_norms = lp.dot_VAV(offsets,np.expand_dims(M,axis=-1),offsets)
    print(f"Offset norms w.r.t optimal vertex {offset_norms} (must be two)")
    if d<=3: continue
    nTest=1000
    D = MakeRandomTensor(d, (nTest,))
    a,vertex,objective,weights,offsets = VoronoiDecomposition(D,steps='Split')
    index,counts = np.unique(vertex,return_counts=True)
    print(f"\nReduced {nTest} random positive definite matrices.")
    print(f"Found optimal forms {index} with counts {counts},\nhence frequency {counts/vertex.size}")
---- Dimension 2 ---- Decomposed matrix : [[0.81647351 0.91777115] [0.91777115 2.48898508]] Optimal vertex of Ryskov's polyhedron has class 0, and value : [[ 6. -3.] [-3. 2.]] Common value of the primal and dual linear programs : 4.370184339584078 4.370184339584078 4.370184339584078 4.370184339584078 Minimal weight 0.10129763611014386 (must be non-negative) Offset norms w.r.t optimal vertex [2. 2. 2.] (must be two) ---- Dimension 3 ---- Decomposed matrix : [[ 1.06841485 -0.08963958 -0.06552819] [-0.08963958 0.70200555 -0.73715916] [-0.06552819 -0.73715916 3.05670529]] Optimal vertex of Ryskov's polyhedron has class 0, and value : [[2. 2. 1.] [2. 6. 3.] [1. 3. 2.]] Common value of the primal and dual linear programs : 7.549703933010018 7.549703933010018 7.549703933010018 7.549703933010018 Minimal weight 0.035153608391365165 (must be non-negative) Offset norms w.r.t optimal vertex [2. 2. 2. 2. 2. 2.] (must be two) ---- Dimension 4 ---- Decomposed matrix : [[ 4.68590969 2.87198961 0.99534872 2.45469031] [ 2.87198961 5.99243377 1.72366886 1.07077618] [ 0.99534872 1.72366886 5.34506365 -0.95048934] [ 2.45469031 1.07077618 -0.95048934 4.69431083]] Optimal vertex of Ryskov's polyhedron has class 0, and value : [[ 2. -1. 0. -1.] [-1. 2. -1. 0.] [ 0. -1. 2. 1.] [-1. 0. 1. 2.]] Common value of the primal and dual linear programs : 25.433759643747482 25.433759643747482 25.433759643747486 25.43375964374748 Minimal weight 0.2427733777756641 (must be non-negative) Offset norms w.r.t optimal vertex [2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] (must be two) Reduced 1000 random positive definite matrices. 
Found optimal forms [0 1] with counts [764 236], hence frequency [0.764 0.236] ---- Dimension 5 ---- Decomposed matrix : [[ 5.79675256 -1.64227078 -1.8973522 1.35873272 3.02118772] [-1.64227078 4.80083533 -0.8203728 -3.39651079 -0.21430139] [-1.8973522 -0.8203728 4.54561602 1.33988971 -2.62273808] [ 1.35873272 -3.39651079 1.33988971 6.38186716 -0.79150986] [ 3.02118772 -0.21430139 -2.62273808 -0.79150986 3.8393835 ]] Optimal vertex of Ryskov's polyhedron has class 0, and value : [[ 2. 0. 0. -1. -1.] [ 0. 2. 1. 1. 1.] [ 0. 1. 2. 0. 1.] [-1. 1. 0. 2. 1.] [-1. 1. 1. 1. 2.]] Common value of the primal and dual linear programs : 26.278202412012355 26.278202412012355 26.278202412012362 26.27820241201236 Minimal weight 0.026685915733476933 (must be non-negative) Offset norms w.r.t optimal vertex [2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] (must be two) Reduced 1000 random positive definite matrices. Found optimal forms [0 1 2] with counts [975 10 15], hence frequency [0.975 0.01 0.015] ---- Dimension 6 ---- Decomposed matrix : [[ 7.28380813 -1.91552559 0.77040122 -0.76997375 -5.78315428 3.2924392 ] [-1.91552559 8.12584371 3.09389792 1.60303287 3.3532303 -4.82701984] [ 0.77040122 3.09389792 15.11899155 6.46441512 -2.98807881 2.78370843] [-0.76997375 1.60303287 6.46441512 9.74790599 0.78084802 1.32851281] [-5.78315428 3.3532303 -2.98807881 0.78084802 11.25845622 -5.0365393 ] [ 3.2924392 -4.82701984 2.78370843 1.32851281 -5.0365393 13.97420385]] Optimal vertex of Ryskov's polyhedron has class 2, and value : [[ 2. -1. 1. 0. 1. -1.] [-1. 2. -1. 0. -1. 1.] [ 1. -1. 2. -1. 1. -1.] [ 0. 0. -1. 2. -1. 0.] [ 1. -1. 1. -1. 2. 0.] [-1. 1. -1. 0. 0. 2.]] Common value of the primal and dual linear programs : 69.65668867903864 69.65668867903864 69.65668867904145 69.65668867903881 Minimal weight 0.010874275890620537 (must be non-negative) Offset norms w.r.t optimal vertex [2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.] 
(must be two) Reduced 1000 random positive definite matrices. Found optimal forms [1 2 3 4 5 6] with counts [ 81 894 3 10 11 1], hence frequency [0.081 0.894 0.003 0.01 0.011 0.001]