This notebook discusses the basic ideas and use cases for Poincaré embeddings and demonstrates the kinds of operations that can be performed with them. For more comprehensive technical details and results, this blog post may be a more appropriate resource.
Poincaré embeddings are a method to learn vector representations of nodes in a graph. The input data is of the form of a list of relations (edges) between nodes, and the model tries to learn representations such that the vectors for the nodes accurately represent the distances between them.
The learnt embeddings capture notions of both hierarchy and similarity - similarity by placing connected nodes close to each other and unconnected nodes far from each other; hierarchy by placing nodes lower in the hierarchy farther from the origin, i.e. with higher norms.
The paper uses this model to learn embeddings of nodes in the WordNet noun hierarchy, and evaluates these on 3 tasks - reconstruction, link prediction and lexical entailment, which are described in the section on evaluation. We have compared the results of our Poincaré model implementation on these tasks to other open-source implementations and the results mentioned in the paper.
The paper also describes a variant of the Poincaré model to learn embeddings of nodes in a symmetric graph, unlike the WordNet noun hierarchy, which is directed and asymmetric. The datasets used in the paper for this model are scientific collaboration networks, in which the nodes are researchers and an edge represents that the two researchers have co-authored a paper.
This variant has not been implemented yet, and is therefore not a part of our tutorial and experiments.
The main innovation here is that these embeddings are learnt in hyperbolic space, as opposed to the commonly used Euclidean space. The reason behind this is that hyperbolic space is more suitable for capturing any hierarchical information inherently present in the graph. Embedding nodes into a Euclidean space while preserving the distance between the nodes usually requires a very high number of dimensions. A simple illustration of this can be seen below -
Here, the positions of nodes represent the positions of their vectors in 2-D Euclidean space. Ideally, the distance between the vectors for nodes (A, D) should be the same as that between (D, H), and as that between H and its child nodes. Similarly, all the child nodes of H must be equally far away from node A. It becomes progressively harder to preserve these distances accurately in Euclidean space as the degree and depth of the tree grow larger. Hierarchical structures may also have cross-connections (effectively a directed graph), making this harder still.
There is no representation of this simple tree in 2-dimensional Euclidean space which can reflect these distances correctly. This can be solved by adding more dimensions, but that quickly becomes computationally infeasible as the number of required dimensions grows. Hyperbolic space is a metric space in which the shortest paths between points are not straight lines but curves, and this allows tree-like hierarchical structures to have representations that capture the distances accurately even in low dimensions.
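To make the geometry concrete, the Poincaré distance used in the paper between two points u and v inside the unit ball is d(u, v) = arccosh(1 + 2‖u − v‖² / ((1 − ‖u‖²)(1 − ‖v‖²))). A minimal NumPy sketch (the function name is ours) shows how the same Euclidean separation corresponds to a much larger hyperbolic distance near the boundary of the ball - this is what gives the space "room" for exponentially growing trees:

```python
import numpy as np

def poincare_distance(u, v):
    """Poincare distance between two points inside the open unit ball."""
    diff_sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * diff_sq / denom)

# Two pairs with identical Euclidean separation (0.1):
# one near the origin, one near the boundary of the ball
near_origin = poincare_distance(np.array([0.0, 0.0]), np.array([0.1, 0.0]))
near_boundary = poincare_distance(np.array([0.85, 0.0]), np.array([0.95, 0.0]))

print(near_origin, near_boundary)  # the pair near the boundary is much farther apart
```

Intuitively, a root placed near the origin stays close to everything, while leaves pushed towards the boundary become far apart from each other, matching the tree metric.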
% cd ../..
%load_ext autoreload
%autoreload 2

import os
import logging

import numpy as np

from gensim.models.poincare import PoincareModel, PoincareKeyedVectors, PoincareRelations

logging.basicConfig(level=logging.INFO)

poincare_directory = os.path.join(os.getcwd(), 'docs', 'notebooks', 'poincare')
data_directory = os.path.join(poincare_directory, 'data')
wordnet_mammal_file = os.path.join(data_directory, 'wordnet_mammal_hypernyms.tsv')
The model can be initialized using an iterable of relations, where a relation is simply a pair of nodes -
model = PoincareModel(train_data=[('node.1', 'node.2'), ('node.2', 'node.3')])
INFO:gensim.models.poincare:Loading relations from train data..
INFO:gensim.models.poincare:Loaded 2 relations from train data, 3 nodes
The model can also be initialized from a csv-like file containing one relation per line. The module provides the convenience class PoincareRelations to do so.
relations = PoincareRelations(file_path=wordnet_mammal_file, delimiter='\t')
model = PoincareModel(train_data=relations)
INFO:gensim.models.poincare:Loading relations from train data..
INFO:gensim.models.poincare:Loaded 7724 relations from train data, 1182 unique terms
Note that the above only initializes the model and does not begin training. To train the model -
model = PoincareModel(train_data=relations, size=2, burn_in=0)
model.train(epochs=1, print_every=500)
INFO:gensim.models.poincare:Loading relations from train data..
INFO:gensim.models.poincare:Loaded 7724 relations from train data, 1182 unique terms
INFO:gensim.models.poincare:training model of size 2 with 1 workers on 7724 relations for 1 epochs and 0 burn-in epochs, using lr=0.10000 burn-in lr=0.01000 negative=10
INFO:gensim.models.poincare:Starting training (1 epochs)----------------------------------------
INFO:gensim.models.poincare:Training on epoch 1, examples #4990-#5000, loss: 23.57
INFO:gensim.models.poincare:Time taken for 5000 examples: 0.47 s, 10562.18 examples / s
INFO:gensim.models.poincare:Training finished
The same model can be trained further on more epochs in case the user decides that the model hasn't converged yet.
INFO:gensim.models.poincare:training model of size 2 with 1 workers on 7724 relations for 1 epochs and 0 burn-in epochs, using lr=0.10000 burn-in lr=0.01000 negative=10
INFO:gensim.models.poincare:Starting training (1 epochs)----------------------------------------
INFO:gensim.models.poincare:Training on epoch 1, examples #4990-#5000, loss: 21.98
INFO:gensim.models.poincare:Time taken for 5000 examples: 0.48 s, 10442.40 examples / s
INFO:gensim.models.poincare:Training finished
The model can be saved and loaded using two different methods -
# Saves the entire PoincareModel instance; the loaded model can be trained further
model.save('/tmp/test_model')
PoincareModel.load('/tmp/test_model')
INFO:gensim.utils:saving PoincareModel object under /tmp/test_model, separately None
INFO:gensim.utils:saved /tmp/test_model
INFO:gensim.utils:loading PoincareModel object from /tmp/test_model
INFO:gensim.utils:loading kv recursively from /tmp/test_model.kv.* with mmap=None
INFO:gensim.utils:loaded /tmp/test_model
<gensim.models.poincare.PoincareModel at 0x7f82ec108668>
# Saves only the vectors from the PoincareModel instance, in the commonly used word2vec format
model.kv.save_word2vec_format('/tmp/test_vectors')
PoincareKeyedVectors.load_word2vec_format('/tmp/test_vectors')
INFO:gensim.models.keyedvectors:storing 3x50 projection weights into /tmp/test_vectors
INFO:gensim.models.keyedvectors:loading projection weights from /tmp/test_vectors
INFO:gensim.models.keyedvectors:loaded (3, 50) matrix from /tmp/test_vectors
<gensim.models.poincare.PoincareKeyedVectors at 0x7f82d03c6588>
# Load an example model
models_directory = os.path.join(poincare_directory, 'models')
test_model_path = os.path.join(models_directory, 'gensim_model_batch_size_10_burn_in_0_epochs_50_neg_20_dim_50')
model = PoincareModel.load(test_model_path)
INFO:gensim.utils:loading PoincareModel object from /home/jayant/projects/gensim/docs/notebooks/poincare/models/gensim_model_batch_size_10_burn_in_0_epochs_50_neg_20_dim_50
INFO:gensim.utils:loading kv recursively from /home/jayant/projects/gensim/docs/notebooks/poincare/models/gensim_model_batch_size_10_burn_in_0_epochs_50_neg_20_dim_50.kv.* with mmap=None
INFO:gensim.utils:loaded /home/jayant/projects/gensim/docs/notebooks/poincare/models/gensim_model_batch_size_10_burn_in_0_epochs_50_neg_20_dim_50
The learnt representations can be used to perform various kinds of useful operations. This section is split into two parts: simple operations that are directly described in the paper, and experimental operations that are only hinted at and might require more work to refine.
The models used in this section have been trained on the transitive closure of the WordNet hypernym graph. The transitive closure is the list of all direct and indirect hypernym pairs in the WordNet graph. An example of a direct hypernym pair is (seat.n.03, furniture.n.01); an indirect hypernym pair links a node to an ancestor more than one edge away in the graph.
All the following operations are based simply on the notion of distance between two nodes in hyperbolic space.
# Distance between any two nodes
model.kv.distance('plant.n.02', 'tree.n.01')
# Nodes most similar to a given input node
model.kv.most_similar('electricity.n.01')
[('phenomenon.n.01', 2.0296901412261614), ('natural_phenomenon.n.01', 2.1052921648852934), ('physical_phenomenon.n.01', 2.1084626073820045), ('photoelectricity.n.01', 2.4527217652991005), ('piezoelectricity.n.01', 2.4687111939575397), ('galvanism.n.01', 2.9496409087300357), ('cloud.n.02', 3.164090455102602), ('electrical_phenomenon.n.01', 3.2563741920630225), ('pressure.n.01', 3.3063009504377368), ('atmospheric_phenomenon.n.01', 3.313970950348909)]
[('male.n.02', 1.725430794111438), ('physical_entity.n.01', 3.5532684790327624), ('whole.n.02', 3.5663516391532815), ('object.n.01', 3.5885342299888077), ('adult.n.01', 3.6422291495399124), ('organism.n.01', 4.096498630105297), ('causal_agent.n.01', 4.127447093914292), ('living_thing.n.01', 4.198756842588067), ('person.n.01', 4.371831459784078), ('lawyer.n.01', 4.581830548066727)]
# Nodes closer to node 1 than node 2 is from node 1
model.kv.nodes_closer_than('dog.n.01', 'carnivore.n.01')
['domestic_animal.n.01', 'canine.n.02', 'terrier.n.01', 'hunting_dog.n.01', 'hound.n.01']
# Rank of distance of node 2 from node 1, in relation to distances of all nodes from node 1
model.kv.rank('dog.n.01', 'carnivore.n.01')
# Finding Poincare distance between input vectors
vector_1 = np.random.uniform(size=(100,))
vector_2 = np.random.uniform(size=(100,))
vectors_multiple = np.random.uniform(size=(5, 100))

# Distance between vector_1 and vector_2
print(PoincareKeyedVectors.vector_distance(vector_1, vector_2))

# Distance between vector_1 and each vector in vectors_multiple
print(PoincareKeyedVectors.vector_distance_batch(vector_1, vectors_multiple))
0.24618276804 [ 0.20492232 0.21622492 0.22568267 0.20813361 0.26086168]
These operations are based on the notion that the norm of a vector represents its hierarchical position. Leaf nodes typically tend to have the highest norms, and as we move up the hierarchy, the norm decreases, with the root node being close to the center (or origin).
# Closest child node
model.kv.closest_child('person.n.01')
# Closest parent node
model.kv.closest_parent('person.n.01')
# Position in hierarchy - lower values indicate the node is higher in the hierarchy
print(model.kv.norm('person.n.01'))
print(model.kv.norm('teacher.n.01'))
# Difference in hierarchy between the first node and the second node
# Positive values indicate the first node is higher in the hierarchy
print(model.kv.difference_in_hierarchy('person.n.01', 'teacher.n.01'))
# One possible descendant chain
model.kv.descendants('mammal.n.01')
['carnivore.n.01', 'dog.n.01', 'hunting_dog.n.01', 'terrier.n.01', 'sporting_dog.n.01']
# One possible ancestor chain
model.kv.ancestors('dog.n.01')
['canine.n.02', 'domestic_animal.n.01', 'placental.n.01', 'ungulate.n.01', 'chordate.n.01', 'animal.n.01', 'physical_entity.n.01']
Note that the chains are not symmetric - while descending recursively to the closest child starting from mammal.n.01, the chain passes through carnivore.n.01 to dog.n.01; however, while ascending from dog.n.01 to the closest parent, the closest parent is canine.n.02, not carnivore.n.01. This is despite the fact that the Poincaré distance is symmetric (like any distance in a metric space). The asymmetry stems from the fact that even if node Y is the closest node to node X amongst all nodes with a higher norm (lower in the hierarchy) than X, node X may not be the closest node to node Y amongst all the nodes with a lower norm (higher in the hierarchy) than Y.
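This asymmetry can be reproduced with plain NumPy on toy 2-D vectors (the positions below are our own, chosen only for illustration): restricting the candidate set by norm in each direction yields different nearest neighbours, even though the distance function itself is symmetric.

```python
import numpy as np

def poincare_distance(u, v):
    """Poincare distance between two points inside the open unit ball."""
    diff_sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * diff_sq / denom)

x = np.array([0.1, 0.0])   # low norm: high in the hierarchy
z = np.array([0.3, 0.35])  # another low-norm node
y = np.array([0.5, 0.1])   # high norm: low in the hierarchy

# y is the closest higher-norm node to x (here, the only one) ...
print(poincare_distance(x, y))

# ... but among the lower-norm nodes {x, z}, z is closer to y than x is
print(poincare_distance(y, x), poincare_distance(y, z))
```

So descending from x reaches y, while ascending from y reaches z rather than x - the same effect seen with the descendants and ancestors chains above.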