# Grounding in LTN (cont.)

This tutorial explains how to ground connectives and quantifiers. It expects some familiarity with the first tutorial on grounding non-logical symbols (constants, variables, functions and predicates).

In :
import logictensornetworks as ltn
import numpy as np
import tensorflow as tf


## Connectives

LTN supports various logical connectives, grounded using fuzzy semantics. We have implemented the most common fuzzy logic operators using TensorFlow primitives in ltn.fuzzy_ops. We recommend the following configuration:

• not: the standard negation $\lnot u = 1-u$,
• and: the product t-norm $u \land v = uv$,
• or: the product t-conorm (probabilistic sum) $u \lor v = u+v-uv$,
• implication: the Reichenbach implication $u \rightarrow v = 1 - u + uv$,

where $u$ and $v$ denote two truth values in $[0,1]$. For more details on choosing the right operators for your task, read the complementary notebook.
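As a quick sanity check, the four operators above can be evaluated directly on scalar truth values. This is a minimal plain-Python sketch, independent of the LTN API; the function names are ours:

```python
# Plain-Python sketch of the recommended fuzzy operators (names are illustrative).
def fuzzy_not(u):        return 1.0 - u          # standard negation
def fuzzy_and(u, v):     return u * v            # product t-norm
def fuzzy_or(u, v):      return u + v - u * v    # probabilistic sum
def fuzzy_implies(u, v): return 1.0 - u + u * v  # Reichenbach implication

u, v = 0.8, 0.6
print(fuzzy_not(u), fuzzy_and(u, v), fuzzy_or(u, v), fuzzy_implies(u, v))
```

Note that all four operators map $[0,1]$ inputs back into $[0,1]$, which is what lets them compose into larger formulas.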

In :
Not = ltn.Wrapper_Connective(ltn.fuzzy_ops.Not_Std())
And = ltn.Wrapper_Connective(ltn.fuzzy_ops.And_Prod())
Or = ltn.Wrapper_Connective(ltn.fuzzy_ops.Or_ProbSum())
Implies = ltn.Wrapper_Connective(ltn.fuzzy_ops.Implies_Reichenbach())


The wrapper ltn.Wrapper_Connective makes it possible to use the operators with LTN formulas. It takes care of combining sub-formulas in which different variables appear (the sub-formulas may have different dimensions that need to be "broadcast").
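To build intuition for what this broadcasting does, here is a numpy sketch; we assume the wrapper aligns variable dimensions much like numpy broadcasting (the shapes mirror the x/y example below):

```python
# Numpy sketch of "broadcasting" two sub-formulas over different variables.
import numpy as np

a = np.random.rand(10)     # truth values of a sub-formula mentioning x only
b = np.random.rand(10, 5)  # truth values of a sub-formula mentioning x and y

a2 = a[:, None]            # shape (10, 1): align 'a' on the y axis
or_ab = a2 + b - a2 * b    # probabilistic sum -> shape (10, 5), one value per (x_i, y_j)
print(or_ab.shape)         # (10, 5)
```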

In :
x = ltn.variable('x',np.random.normal(0.,1.,(10,2))) # 10 values in R²
y = ltn.variable('y',np.random.normal(0.,2.,(5,2))) # 5 values in R²

c1 = ltn.constant([0.5,0.0])
c2 = ltn.constant([4.0,2.0])

Eq = ltn.Predicate.Lambda(lambda args: tf.exp(-tf.norm(args[0]-args[1],axis=1))) # predicate measuring similarity

Eq([c1,c2])

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.017754275>
In :
Not(Eq([c1,c2]))

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.98224574>
In :
Implies(Eq([c1,c2]), Eq([c2,c1]))

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.98256093>
In :
# Notice the dimension of the outcome: the result is evaluated for every x.
And(Eq([x,c1]), Eq([x,c2])).shape

Out:
TensorShape([10])
In :
# Notice the dimensions of the outcome: the result is evaluated for every x and y.
# Notice also that y did not appear in the 1st argument of Or;
# the connective broadcasts the results of its two arguments to match.
Or(Eq([x,c1]), Eq([x,y])).shape

Out:
TensorShape([10, 5])

## Quantifiers

LTN supports universal and existential quantification. They are grounded using aggregation operators. We recommend using the following two operators:

• existential quantification ("exists"):
the generalized mean (pMean) $\mathrm{pM}(u_1,\dots,u_n) = \biggl( \frac{1}{n} \sum\limits_{i=1}^n u_i^p \biggr)^{\frac{1}{p}} \qquad p \geq 1$,
• universal quantification ("for all"):
the generalized mean of "the deviations w.r.t. the truth" (pMeanError) $\mathrm{pME}(u_1,\dots,u_n) = 1 - \biggl( \frac{1}{n} \sum\limits_{i=1}^n (1-u_i)^p \biggr)^{\frac{1}{p}} \qquad p \geq 1$,

where $u_1,\dots,u_n$ is a list of truth values in $[0,1]$.
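The two aggregators can be sketched directly in numpy (standalone implementations of the formulas above, not the LTN API):

```python
# Numpy sketch of pMean and pMeanError as defined above.
import numpy as np

def pmean(u, p=2):
    u = np.asarray(u, dtype=float)
    return np.mean(u ** p) ** (1.0 / p)

def pmean_error(u, p=2):
    u = np.asarray(u, dtype=float)
    return 1.0 - np.mean((1.0 - u) ** p) ** (1.0 / p)

truths = [0.9, 0.8, 0.1]
print(pmean(truths, p=1))        # with p=1, both reduce to the arithmetic mean
print(pmean_error(truths, p=1))
```

With large $p$, pmean is pulled toward the largest truth value and pmean_error toward the smallest, which is exactly what makes them suitable groundings for "exists" and "for all".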

In :
Forall = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMeanError(p=2),semantics="forall")
Exists = ltn.Wrapper_Quantifier(ltn.fuzzy_ops.Aggreg_pMean(p=5),semantics="exists")


The wrapper ltn.Wrapper_Quantifier makes it possible to use the aggregators with LTN formulas. It takes care of selecting the tensor dimensions to aggregate, given the variables in its arguments.

In :
x = ltn.variable('x',np.random.normal(0.,1.,(10,2))) # 10 values in R²
y = ltn.variable('y',np.random.normal(0.,2.,(5,2))) # 5 values in R²

Eq = ltn.Predicate.Lambda(lambda args: tf.exp(-tf.norm(args[0]-args[1],axis=1))) # predicate measuring similarity

Eq([x,y]).shape

Out:
TensorShape([10, 5])
In :
Forall(x,Eq([x,y])).shape

Out:
TensorShape([5])
In :
Forall((x,y),Eq([x,y]))

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.08151126>
In :
Exists((x,y),Eq([x,y]))

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.32602137>
In :
Forall(x, Exists(y, Eq([x,y])))

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.20610821>

pMean can be understood as a soft maximum whose behavior depends on the hyperparameter $p$:

• $p \to 1$: the operator tends to the mean,
• $p \to +\infty$: the operator tends to the max.

Similarly, pMeanError can be understood as a soft minimum:

• $p \to 1$: the operator tends to the mean,
• $p \to +\infty$: the operator tends to the min.

Therefore, $p$ offers flexibility in writing formulas that are more or less strict, to account for outliers in the data depending on the application. Note that this can have strong implications for training (see the complementary notebook). One can set a default value for $p$ when initializing the operator, or use a different value at each call of the operator.
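To make the effect of $p$ concrete, here is a small numpy sweep of pMeanError over truth values containing one outlier (a standalone sketch, not the LTN API; the values are illustrative):

```python
import numpy as np

def pmean_error(u, p):
    u = np.asarray(u, dtype=float)
    return 1.0 - np.mean((1.0 - u) ** p) ** (1.0 / p)

truths = [0.95, 0.9, 0.9, 0.2]  # mostly true, with one outlier
for p in [1, 2, 10, 100]:
    print(p, round(pmean_error(truths, p), 3))
# larger p -> stricter "forall": the aggregate approaches min(truths) = 0.2
```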

In :
Forall(x,Eq([x,c1]),p=2)

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.2339012>
In :
Forall(x,Eq([x,c1]),p=10)

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.18922693>
In :
Exists(x,Eq([x,c1]),p=2)

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.26211008>
In :
Exists(x,Eq([x,c1]),p=10)

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.34966397>

## Diagonal Quantification

Given two (or more) variables, there are scenarios where one wants to express statements about specific pairs (or tuples) only, such that the $i$-th tuple contains the $i$-th instances of the variables. We support this using ltn.diag. Note: diagonal quantification assumes that the variables have the same number of individuals.

In simplified pseudo-code, the usual quantification would compute:

for x_i in x:
    for y_j in y:
        results.append(P(x_i,y_j))
aggregate(results)


In contrast, diagonal quantification would compute:

for x_i, y_i in zip(x,y):
    results.append(P(x_i,y_i))
aggregate(results)
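The two pseudo-code snippets above can be run directly with numpy and a toy predicate (the predicate and sizes here are illustrative):

```python
import numpy as np

x = np.random.rand(4, 2)  # 4 individuals in R^2
y = np.random.rand(4, 2)
P = lambda a, b: np.exp(-np.linalg.norm(a - b))  # toy similarity predicate

# usual quantification: every combination of individuals
full = [P(x_i, y_j) for x_i in x for y_j in y]
# diagonal quantification: only the zipped pairs
diag = [P(x_i, y_i) for x_i, y_i in zip(x, y)]
print(len(full), len(diag))  # 16 4
```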


We illustrate ltn.diag in the following setting:

• the variable $x$ denotes 100 individuals in $\mathbb{R}^{2\times2}$,
• the variable $l$ denotes 100 one-hot labels in $\mathbb{N}^3$ (3 possible classes),
• $l$ is grounded according to $x$ such that each pair $(x_i,l_i)$, for $i=1,\dots,100$, denotes one correct example from the dataset,
• the classifier $C(x,l)$ gives a confidence value in $[0,1]$ of the sample $x$ corresponding to the label $l$.
In :
# The values are generated at random, for the sake of illustration.
# In a real scenario, they would come from a dataset.
samples = np.random.rand(100,2,2) # 100 R^{2x2} values
labels = np.random.randint(3, size=100) # 100 labels (class 0/1/2) that correspond to each sample
onehot_labels = tf.one_hot(labels,depth=3)

x = ltn.variable("x",samples)
l = ltn.variable("l",onehot_labels)

class ModelC(tf.keras.Model):
    def __init__(self):
        super(ModelC, self).__init__()
        self.flatten = tf.keras.layers.Flatten()
        self.dense1 = tf.keras.layers.Dense(5, activation=tf.nn.elu)
        self.dense2 = tf.keras.layers.Dense(3, activation=tf.nn.softmax)
    def call(self, inputs):
        x, l = inputs[0], inputs[1]
        x = self.flatten(x)
        x = self.dense1(x)
        x = self.dense2(x)
        return tf.math.reduce_sum(x*l, axis=1)

C = ltn.Predicate(ModelC())


If some variables are marked using ltn.diag, LTN will only compute their "zipped" results (instead of the usual "broadcasting").

In :
print(C([x,l]).shape) # Computes the 100x100 combinations
ltn.diag(x,l) # sets the diag behavior for x and l
print(C([x,l]).shape) # Computes the 100 zipped combinations
ltn.undiag(x,l) # resets the normal behavior
print(C([x,l]).shape) # Computes the 100x100 combinations

(100, 100)
(100,)
(100, 100)


In practice, ltn.diag is designed to be used with quantifiers. Every quantifier automatically calls ltn.undiag after the aggregation is performed, so that the variables keep their normal behavior outside of the formula. Therefore, it is recommended to use ltn.diag only in quantified formulas as follows.

In :
Forall(ltn.diag(x,l), C([x,l])) # Aggregates only on the 100 "zipped" pairs.
# Automatically calls ltn.undiag so the behavior of x/l is unchanged outside of this formula.

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.33488452>

## Guarded Quantifiers

One may wish to quantify over the set of elements whose groundings satisfy some boolean condition. Let us assume $x$ is a variable from some domain and $m$ is a mask function that returns boolean values (that is, $0$ or $1$, not continuous truth degrees in $[0,1]$) for each element of the domain.
In guarded quantification, one has quantifications of the form $$(\forall x: m(x)) \phi(x)$$ which means "every x satisfying $m(x)$ also satisfies $\phi(x)$", or $$(\exists x: m(x)) \phi(x)$$ which means "some x satisfying $m(x)$ also satisfies $\phi(x)$".

The mask $m$ can also depend on other variables in the formula. For instance, the quantification $\exists y (\forall x:m(x,y)) \phi(x,y)$ is also a valid sentence.
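Computationally, a guarded quantifier simply filters the truth values with the boolean mask before aggregating. A minimal numpy sketch, using pMeanError with $p=2$ as the "for all" aggregator (the truth values and mask are illustrative):

```python
import numpy as np

phi = np.array([0.9, 0.7, 0.2, 0.8])       # truth values of phi(x_i)
m   = np.array([True, True, False, True])  # boolean mask m(x_i)

p = 2
masked = phi[m]  # keep only the elements that satisfy the guard
guarded_forall = 1.0 - np.mean((1.0 - masked) ** p) ** (1.0 / p)
print(round(guarded_forall, 3))
```

Here the element with truth value 0.2 is excluded by the guard, so it does not drag down the universal aggregate.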

Let us consider the following example, which states that there exists a Euclidean distance $d$ below which all pairs of points $x$, $y$ should be considered similar: $\exists d \ \forall x,y : \mathrm{dist}(x,y) < d \ \big( \mathrm{Eq}(x,y) \big)$

In :
Eq = ltn.Predicate.Lambda(lambda args: tf.exp(-tf.norm(args[0]-args[1],axis=1))) # predicate measuring similarity

points = np.random.rand(50,2) # 50 values in [0,1]^2
x = ltn.variable("x",points)
y = ltn.variable("y",points)
d = ltn.variable("d",[.1,.2,.3,.4,.5,.6,.7,.8,.9])

In :
eucl_dist = lambda x,y: tf.expand_dims(tf.norm(x-y,axis=1),axis=1) # function measuring Euclidean distance
Exists(d,
      Forall([x,y],
            Eq([x,y]),
            mask_vars=[d,x,y],
            mask_fn=lambda args: eucl_dist(args[1],args[2]) < args[0]
      ))

Out:
<tf.Tensor: shape=(), dtype=float32, numpy=0.782431>
The guarded option is particularly useful for propagating gradients (see the notebook on learning) over only the subset of the domain that satisfies the condition $m$.