This notebook evaluates how well Poincaré embeddings perform on the tasks detailed in the original paper, "Poincaré Embeddings for Learning Hierarchical Representations" (Nickel & Kiela, 2017).

The following two external, open-source implementations are used -

- the C++ implementation from https://github.com/TatsuyaShirakawa/poincare-embedding
- the numpy implementation from https://github.com/nishnik/poincare_embeddings

This is the list of tasks -

- WordNet reconstruction
- WordNet link prediction
- Lexical entailment on HyperLex

A more detailed explanation of the tasks and the evaluation methodology is present in the individual evaluation subsections.

The setup section below performs the following -

- clones and patches the repositories for the two external implementations
- compiles the C++ implementation into a binary
- downloads and prepares the WordNet and HyperLex datasets
% cd ../..
/home/jayant/Projects/gensim/gensim
# Some libraries that are not part of Gensim need to be installed
# (the quotes prevent the shell from treating '>' as output redirection)
! pip install 'click>=6.7' 'nltk>=3.2.5' 'prettytable>=0.7.2' 'pygtrie>=2.2'
import csv
from collections import OrderedDict
from IPython.display import display, HTML
import logging
import os
import pickle
import random
import re
import click
from gensim.models.poincare import PoincareModel, PoincareRelations, \
ReconstructionEvaluation, LinkPredictionEvaluation, \
LexicalEntailmentEvaluation, PoincareKeyedVectors
from gensim.utils import check_output
import nltk
from prettytable import PrettyTable
from smart_open import smart_open
logging.basicConfig(level=logging.INFO)
nltk.download('wordnet')
[nltk_data] Downloading package wordnet to /home/jayant/nltk_data... [nltk_data] Package wordnet is already up-to-date!
True
Please set the variable `parent_directory` below to change the directory to which the repositories are cloned.
% cd docs/notebooks/
/home/jayant/Projects/gensim/gensim/docs/notebooks
current_directory = os.getcwd()
# Change this variable to `False` to not remove and re-download repos for external implementations
force_setup = False
# The poincare datasets, models and source code for external models are downloaded to this directory
parent_directory = os.path.join(current_directory, 'poincare')
! mkdir -p {parent_directory}
% cd {parent_directory}
# Clone repos
np_repo_name = 'poincare-np-embedding'
if force_setup and os.path.exists(np_repo_name):
    ! rm -rf {np_repo_name}
clone_np_repo = not os.path.exists(np_repo_name)
if clone_np_repo:
    ! git clone https://github.com/nishnik/poincare_embeddings.git {np_repo_name}

cpp_repo_name = 'poincare-cpp-embedding'
if force_setup and os.path.exists(cpp_repo_name):
    ! rm -rf {cpp_repo_name}
clone_cpp_repo = not os.path.exists(cpp_repo_name)
if clone_cpp_repo:
    ! git clone https://github.com/TatsuyaShirakawa/poincare-embedding.git {cpp_repo_name}
patches_applied = False
/home/jayant/Projects/gensim/gensim/docs/notebooks/poincare
# Apply patches
if clone_cpp_repo and not patches_applied:
    % cd {cpp_repo_name}
    ! git apply ../poincare_burn_in_eps.patch
if clone_np_repo and not patches_applied:
    % cd ../{np_repo_name}
    ! git apply ../poincare_numpy.patch
patches_applied = True
# Compile the code for the external c++ implementation into a binary
% cd {parent_directory}/{cpp_repo_name}
! mkdir -p work
% cd work
! cmake ..
! make
% cd {current_directory}
/home/jayant/projects/gensim/docs/notebooks/poincare/poincare-cpp-embedding
/home/jayant/projects/gensim/docs/notebooks/poincare/poincare-cpp-embedding/work
-- Configuring done
-- Generating done
-- Build files have been written to: /home/jayant/projects/gensim/docs/notebooks/poincare/poincare-cpp-embedding/work
[100%] Built target poincare_embedding
/home/jayant/projects/gensim/docs/notebooks
You might need to install an updated version of cmake to be able to compile the source code. Before proceeding, please verify that the above cell did not raise an error and that the binary poincare_embedding has been created.
cpp_binary_path = os.path.join(parent_directory, cpp_repo_name, 'work', 'poincare_embedding')
assert os.path.exists(cpp_binary_path), 'Binary file does not exist at %s' % cpp_binary_path
# These directories are auto created in the current directory for storing poincare datasets and models
data_directory = os.path.join(parent_directory, 'data')
models_directory = os.path.join(parent_directory, 'models')
# Create directories
! mkdir -p {data_directory}
! mkdir -p {models_directory}
# Prepare the WordNet data
# Can also be downloaded directly from -
# https://github.com/jayantj/gensim/raw/wordnet_data/docs/notebooks/poincare/data/wordnet_noun_hypernyms.tsv
wordnet_file = os.path.join(data_directory, 'wordnet_noun_hypernyms.tsv')
if not os.path.exists(wordnet_file):
    ! python {parent_directory}/{cpp_repo_name}/scripts/create_wordnet_noun_hierarchy.py {wordnet_file}
# Prepare the HyperLex data
hyperlex_url = "http://people.ds.cam.ac.uk/iv250/paper/hyperlex/hyperlex-data.zip"
! wget {hyperlex_url} -O {data_directory}/hyperlex-data.zip
if os.path.exists(os.path.join(data_directory, 'hyperlex')):
    ! rm -r {data_directory}/hyperlex
! unzip {data_directory}/hyperlex-data.zip -d {data_directory}/hyperlex/
hyperlex_file = os.path.join(data_directory, 'hyperlex', 'nouns-verbs', 'hyperlex-nouns.txt')
--2017-11-14 11:15:54--  http://people.ds.cam.ac.uk/iv250/paper/hyperlex/hyperlex-data.zip
Resolving people.ds.cam.ac.uk (people.ds.cam.ac.uk)... 131.111.3.47
Connecting to people.ds.cam.ac.uk (people.ds.cam.ac.uk)|131.111.3.47|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 183900 (180K) [application/zip]
Saving to: ‘/home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex-data.zip’

/home/jayant/projec 100%[===================>] 179.59K  --.-KB/s    in 0.06s

2017-11-14 11:15:54 (2.94 MB/s) - ‘/home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex-data.zip’ saved [183900/183900]

Archive:  /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex-data.zip
   creating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/nouns-verbs/
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/nouns-verbs/hyperlex-verbs.txt
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/nouns-verbs/hyperlex-nouns.txt
   creating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/splits/
   creating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/splits/random/
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/splits/random/hyperlex_training_all_random.txt
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/splits/random/hyperlex_test_all_random.txt
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/splits/random/hyperlex_dev_all_random.txt
   creating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/splits/lexical/
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/splits/lexical/hyperlex_dev_all_lexical.txt
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/splits/lexical/hyperlex_test_all_lexical.txt
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/splits/lexical/hyperlex_training_all_lexical.txt
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/hyperlex-all.txt
  inflating: /home/jayant/projects/gensim/docs/notebooks/poincare/data/hyperlex/README.txt
def train_cpp_model(
        binary_path, data_file, output_file, dim, epochs, neg,
        num_threads, epsilon, burn_in, seed=0):
    """Train a Poincare embedding using the C++ implementation

    Args:
        binary_path (str): Path to the compiled C++ implementation binary
        data_file (str): Path to tsv file containing relation pairs
        output_file (str): Path to output file containing model
        dim (int): Number of dimensions of the trained model
        epochs (int): Number of epochs to use
        neg (int): Number of negative samples to use
        num_threads (int): Number of threads to use for training the model
        epsilon (float): Constant used for clipping below a norm of one
        burn_in (int): Number of epochs to use for burn-in init (0 means no burn-in)

    Notes:
        If `output_file` already exists, skips training
    """
    if os.path.exists(output_file):
        print('File %s exists, skipping' % output_file)
        return
    args = {
        'dim': dim,
        'max_epoch': epochs,
        'neg_size': neg,
        'num_thread': num_threads,
        'epsilon': epsilon,
        'burn_in': burn_in,
        'learning_rate_init': 0.1,
        'learning_rate_final': 0.0001,
    }
    cmd = [binary_path, data_file, output_file]
    for option, value in args.items():
        cmd.append("--%s" % option)
        cmd.append(str(value))
    return check_output(args=cmd)
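The option-dict-to-argv conversion at the end of `train_cpp_model` can be illustrated in isolation; the sketch below uses hypothetical placeholder paths and only the two options (`dim`, `max_epoch`) already shown above -

```python
# Minimal sketch of building a CLI command from an options dict,
# mirroring the pattern in `train_cpp_model`. Paths are placeholders.
def build_cmd(binary_path, data_file, output_file, options):
    """Turn an options dict into an argv-style list."""
    cmd = [binary_path, data_file, output_file]
    for option, value in options.items():
        cmd.append('--%s' % option)
        cmd.append(str(value))
    return cmd

cmd = build_cmd(
    './poincare_embedding', 'wordnet.tsv', 'model.tsv',
    {'dim': 50, 'max_epoch': 200},
)
print(' '.join(cmd))
# ./poincare_embedding wordnet.tsv model.tsv --dim 50 --max_epoch 200
```

Passing a list (rather than a single shell string) to `check_output` avoids any shell quoting issues with the option values.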
model_sizes = [5, 10, 20, 50, 100, 200]
default_params = {
    'neg': 20,
    'epochs': 50,
    'threads': 8,
    'eps': 1e-6,
    'burn_in': 0,
    'batch_size': 10,
    'reg': 0.0,
}
non_default_params = {
    'neg': [10],
    'epochs': [200],
    'burn_in': [10],
}
def cpp_model_name_from_params(params, prefix):
    param_keys = ['burn_in', 'epochs', 'neg', 'eps', 'threads']
    name = ['%s_%s' % (key, params[key]) for key in sorted(param_keys)]
    return '%s_%s' % (prefix, '_'.join(name))
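To make the naming scheme concrete, here is a self-contained copy of the same logic applied to the default C++ parameters (the `_dim_<size>` suffix is appended separately when each model size is trained) -

```python
# Self-contained copy of the model-naming logic above, to show what a
# saved model filename stem looks like for the default C++ parameters.
def model_name_from_params(params, prefix, param_keys):
    name = ['%s_%s' % (key, params[key]) for key in sorted(param_keys)]
    return '%s_%s' % (prefix, '_'.join(name))

name = model_name_from_params(
    {'neg': 20, 'epochs': 50, 'threads': 8, 'eps': 1e-6, 'burn_in': 0},
    'cpp_model', ['burn_in', 'epochs', 'neg', 'eps', 'threads'])
print(name)
# cpp_model_burn_in_0_epochs_50_eps_1e-06_neg_20_threads_8
```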
def train_model_with_params(params, train_file, model_sizes, prefix, implementation):
    """Trains models with given params for multiple model sizes using the given implementation

    Args:
        params (dict): parameters to train the model with
        train_file (str): Path to tsv file containing relation pairs
        model_sizes (list): list of dimension sizes (integer) to train the model with
        prefix (str): prefix to use for the saved model filenames
        implementation (str): which implementation to use,
            allowed values: 'numpy', 'c++', 'gensim'

    Returns:
        tuple (model_name, model_files)
        model_files is a dict of (size, filename) pairs
        Example: ('cpp_model_epochs_50', {5: 'models/cpp_model_epochs_50_dim_5'})
    """
    files = {}
    if implementation == 'c++':
        model_name = cpp_model_name_from_params(params, prefix)
    elif implementation == 'numpy':
        model_name = np_model_name_from_params(params, prefix)
    elif implementation == 'gensim':
        model_name = gensim_model_name_from_params(params, prefix)
    else:
        raise ValueError('Given implementation %s not found' % implementation)
    for model_size in model_sizes:
        output_file_name = '%s_dim_%d' % (model_name, model_size)
        output_file = os.path.join(models_directory, output_file_name)
        print('Training model %s of size %d' % (model_name, model_size))
        if implementation == 'c++':
            out = train_cpp_model(
                cpp_binary_path, train_file, output_file, model_size,
                params['epochs'], params['neg'], params['threads'],
                params['eps'], params['burn_in'], seed=0)
        elif implementation == 'numpy':
            train_external_numpy_model(
                python_script_path, train_file, output_file, model_size,
                params['epochs'], params['neg'], seed=0)
        elif implementation == 'gensim':
            train_gensim_model(
                train_file, output_file, model_size, params['epochs'],
                params['neg'], params['burn_in'], params['batch_size'], params['reg'], seed=0)
        else:
            raise ValueError('Given implementation %s not found' % implementation)
        files[model_size] = output_file
    return (model_name, files)
model_files = {}
model_files['c++'] = {}
# Train c++ models with default params
model_name, files = train_model_with_params(default_params, wordnet_file, model_sizes, 'cpp_model', 'c++')
model_files['c++'][model_name] = {}
for dim, filepath in files.items():
    model_files['c++'][model_name][dim] = filepath
# Train c++ models with non-default params
for param, values in non_default_params.items():
    params = default_params.copy()
    for value in values:
        params[param] = value
        model_name, files = train_model_with_params(params, wordnet_file, model_sizes, 'cpp_model', 'c++')
        model_files['c++'][model_name] = {}
        for dim, filepath in files.items():
            model_files['c++'][model_name][dim] = filepath
python_script_path = os.path.join(parent_directory, np_repo_name, 'poincare.py')

def np_model_name_from_params(params, prefix):
    param_keys = ['neg', 'epochs']
    name = ['%s_%s' % (key, params[key]) for key in sorted(param_keys)]
    return '%s_%s' % (prefix, '_'.join(name))
def train_external_numpy_model(
        script_path, data_file, output_file, dim, epochs, neg, seed=0):
    """Train a poincare embedding using an external numpy implementation

    Args:
        script_path (str): Path to the Python training script
        data_file (str): Path to tsv file containing relation pairs
        output_file (str): Path to output file containing model
        dim (int): Number of dimensions of the trained model
        epochs (int): Number of epochs to use
        neg (int): Number of negative samples to use

    Notes:
        If `output_file` already exists, skips training
    """
    if os.path.exists(output_file):
        print('File %s exists, skipping' % output_file)
        return
    args = {
        'input-file': data_file,
        'output-file': output_file,
        'dimensions': dim,
        'epochs': epochs,
        'learning-rate': 0.01,
        'num-negative': neg,
    }
    cmd = ['python', script_path]
    for option, value in args.items():
        cmd.append("--%s" % option)
        cmd.append(str(value))
    return check_output(args=cmd)
model_files['numpy'] = {}
# Train models with default params
model_name, files = train_model_with_params(default_params, wordnet_file, model_sizes, 'np_model', 'numpy')
model_files['numpy'][model_name] = {}
for dim, filepath in files.items():
    model_files['numpy'][model_name][dim] = filepath
def gensim_model_name_from_params(params, prefix):
    param_keys = ['neg', 'epochs', 'burn_in', 'batch_size', 'reg']
    name = ['%s_%s' % (key, params[key]) for key in sorted(param_keys)]
    return '%s_%s' % (prefix, '_'.join(name))
def train_gensim_model(
        data_file, output_file, dim, epochs, neg, burn_in, batch_size, reg, seed=0):
    """Train a poincare embedding using the gensim implementation

    Args:
        data_file (str): Path to tsv file containing relation pairs
        output_file (str): Path to output file containing model
        dim (int): Number of dimensions of the trained model
        epochs (int): Number of epochs to use
        neg (int): Number of negative samples to use
        burn_in (int): Number of epochs to use for burn-in initialization
        batch_size (int): Size of batch to use for training
        reg (float): Coefficient used for l2-regularization while training

    Notes:
        If `output_file` already exists, skips training
    """
    if os.path.exists(output_file):
        print('File %s exists, skipping' % output_file)
        return
    train_data = PoincareRelations(data_file)
    model = PoincareModel(train_data, size=dim, negative=neg, burn_in=burn_in, regularization_coeff=reg)
    model.train(epochs=epochs, batch_size=batch_size)
    model.save(output_file)
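The tsv format that `PoincareRelations` reads is simply one tab-separated pair of related nodes per line (written here child-then-parent; the synset names are illustrative). A stdlib-only sketch of writing and re-reading such a file -

```python
import csv
import os
import tempfile

# Toy (child, parent) relation pairs, written in the tsv layout that the
# relation files used throughout this notebook follow.
relations = [
    ('kangaroo.n.01', 'marsupial.n.01'),
    ('marsupial.n.01', 'mammal.n.01'),
]
path = os.path.join(tempfile.mkdtemp(), 'toy_relations.tsv')
with open(path, 'w', newline='') as f:
    writer = csv.writer(f, delimiter='\t')
    writer.writerows(relations)

# Read the file back to confirm the round-trip.
with open(path) as f:
    parsed = [tuple(row) for row in csv.reader(f, delimiter='\t')]
print(parsed)
```

A file like this could be passed as `data_file` to `train_gensim_model` above.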
non_default_params_gensim = [
    {'neg': 10},
    {'burn_in': 10},
    {'batch_size': 50},
    {'neg': 10, 'reg': 1, 'burn_in': 10, 'epochs': 200},
]

model_files['gensim'] = {}
# Train models with default params
model_name, files = train_model_with_params(default_params, wordnet_file, model_sizes, 'gensim_model', 'gensim')
model_files['gensim'][model_name] = {}
for dim, filepath in files.items():
    model_files['gensim'][model_name][dim] = filepath
# Train models with non-default params
for new_params in non_default_params_gensim:
    params = default_params.copy()
    params.update(new_params)
    model_name, files = train_model_with_params(params, wordnet_file, model_sizes, 'gensim_model', 'gensim')
    model_files['gensim'][model_name] = {}
    for dim, filepath in files.items():
        model_files['gensim'][model_name][dim] = filepath
def transform_cpp_embedding_to_kv(input_file, output_file, encoding='utf8'):
    """Given a C++ embedding tsv filepath, converts it to a KeyedVectors-supported file"""
    with smart_open(input_file, 'rb') as f:
        lines = [line.decode(encoding) for line in f]
    if not len(lines):
        raise ValueError("file is empty")
    first_line = lines[0]
    parts = first_line.rstrip().split("\t")
    model_size = len(parts) - 1
    vocab_size = len(lines)
    with smart_open(output_file, 'w') as f:
        f.write('%d %d\n' % (vocab_size, model_size))
        for line in lines:
            f.write(line.replace('\t', ' '))
def transform_numpy_embedding_to_kv(input_file, output_file, encoding='utf8'):
    """Given a numpy poincare embedding pkl filepath, converts it to a KeyedVectors-supported file"""
    with open(input_file, 'rb') as f:
        np_embeddings = pickle.load(f)
    random_embedding = np_embeddings[list(np_embeddings.keys())[0]]
    model_size = random_embedding.shape[0]
    vocab_size = len(np_embeddings)
    with smart_open(output_file, 'w') as f:
        f.write('%d %d\n' % (vocab_size, model_size))
        for key, vector in np_embeddings.items():
            vector_string = ' '.join('%.6f' % value for value in vector)
            f.write('%s %s\n' % (key, vector_string))
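Both conversion helpers above target the same word2vec text format: a `vocab_size model_size` header line, followed by one `key value value ...` line per vector. A toy sketch of that layout with made-up 2-d vectors -

```python
# Build a word2vec-text-format string for toy 2-d embeddings.
# The keys and vector values are made up for illustration.
embeddings = {
    'mammal.n.01': [0.01, -0.02],
    'dog.n.01': [0.52, 0.31],
}
dim = 2
lines = ['%d %d' % (len(embeddings), dim)]  # header: vocab size, dimensions
for key, vector in embeddings.items():
    lines.append('%s %s' % (key, ' '.join('%.6f' % v for v in vector)))
text = '\n'.join(lines) + '\n'
print(text)
```

A string like this, written to disk, is what `PoincareKeyedVectors.load_word2vec_format` consumes in the loading functions below.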
def load_poincare_cpp(input_filename):
    """Load embedding trained via C++ Poincare model.

    Parameters
    ----------
    input_filename : str
        Path to tsv file containing embedding.

    Returns
    -------
    PoincareKeyedVectors instance.
    """
    keyed_vectors_filename = input_filename + '.kv'
    transform_cpp_embedding_to_kv(input_filename, keyed_vectors_filename)
    embedding = PoincareKeyedVectors.load_word2vec_format(keyed_vectors_filename)
    os.unlink(keyed_vectors_filename)
    return embedding

def load_poincare_numpy(input_filename):
    """Load embedding trained via Python numpy Poincare model.

    Parameters
    ----------
    input_filename : str
        Path to pkl file containing embedding.

    Returns
    -------
    PoincareKeyedVectors instance.
    """
    keyed_vectors_filename = input_filename + '.kv'
    transform_numpy_embedding_to_kv(input_filename, keyed_vectors_filename)
    embedding = PoincareKeyedVectors.load_word2vec_format(keyed_vectors_filename)
    os.unlink(keyed_vectors_filename)
    return embedding

def load_poincare_gensim(input_filename):
    """Load embedding trained via Gensim PoincareModel.

    Parameters
    ----------
    input_filename : str
        Path to model file.

    Returns
    -------
    PoincareKeyedVectors instance.
    """
    model = PoincareModel.load(input_filename)
    return model.kv

def load_model(implementation, model_file):
    """Convenience function over functions to load models from different implementations.

    Parameters
    ----------
    implementation : str
        Implementation used to create model file ('c++'/'numpy'/'gensim').
    model_file : str
        Path to model file.

    Returns
    -------
    PoincareKeyedVectors instance

    Notes
    -----
    Raises ValueError in case of invalid value for `implementation`
    """
    if implementation == 'c++':
        return load_poincare_cpp(model_file)
    elif implementation == 'numpy':
        return load_poincare_numpy(model_file)
    elif implementation == 'gensim':
        return load_poincare_gensim(model_file)
    else:
        raise ValueError('Invalid implementation %s' % implementation)
def display_results(task_name, results):
    """Display evaluation results of multiple embeddings on a single task in a tabular format

    Args:
        task_name (str): name of the task being evaluated
        results (dict): mapping between embeddings and corresponding results
    """
    result_table = PrettyTable()
    result_table.field_names = ["Model Description", "Metric"] + [str(dim) for dim in sorted(model_sizes)]
    for model_name, model_results in results.items():
        metrics = [metric for metric in model_results.keys()]
        dims = sorted([dim for dim in model_results[metrics[0]].keys()])
        description = model_description_from_name(model_name)
        row = [description, '\n'.join(metrics) + '\n']
        for dim in dims:
            scores = ['%.2f' % model_results[metric][dim] for metric in metrics]
            row.append('\n'.join(scores))
        result_table.add_row(row)
    result_table.align = 'r'
    result_html = result_table.get_html_string()
    search = "<table>"
    insert_at = result_html.index(search) + len(search)
    new_row = """
    <tr>
        <th colspan="1" style="text-align:left">%s</th>
        <th colspan="1"></th>
        <th colspan="%d" style="text-align:center">Dimensions</th>
    </tr>""" % (task_name, len(model_sizes))
    result_html = result_html[:insert_at] + new_row + result_html[insert_at:]
    display(HTML(result_html))
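The header-row injection in `display_results` relies on plain string slicing around the first `<table>` tag; a minimal toy version of that manipulation -

```python
# Insert an extra header row right after the opening <table> tag,
# using the same index-and-slice approach as `display_results`.
html = '<table><tr><td>1</td></tr></table>'
search = '<table>'
insert_at = html.index(search) + len(search)
new_row = '<tr><th>Header</th></tr>'
patched = html[:insert_at] + new_row + html[insert_at:]
print(patched)
# <table><tr><th>Header</th></tr><tr><td>1</td></tr></table>
```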
def model_description_from_name(model_name):
    if model_name.startswith('gensim'):
        implementation = 'Gensim'
    elif model_name.startswith('cpp'):
        implementation = 'C++'
    elif model_name.startswith('np'):
        implementation = 'Numpy'
    else:
        raise ValueError('Unsupported implementation for model: %s' % model_name)
    description = []
    for param_key in sorted(default_params.keys()):
        pattern = '%s_([^_]*)_?' % param_key
        match = re.search(pattern, model_name)
        if match:
            description.append("%s=%s" % (param_key, match.groups()[0]))
    return "%s: %s" % (implementation, ", ".join(description))
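The regex in `model_description_from_name` captures everything between a parameter key and the next underscore-delimited token; for instance -

```python
import re

# Extract the 'epochs' value from a model name built by the naming
# functions above (the model name here is an illustrative example).
model_name = 'gensim_model_batch_size_10_burn_in_0_epochs_50_neg_20_reg_0.0'
match = re.search('epochs_([^_]*)_?', model_name)
print(match.groups()[0])
# 50
```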
For this task, embeddings are learnt using the entire transitive closure of the WordNet noun hypernym hierarchy. Subsequently, for every hypernym pair (u, v), the rank of v amongst all nodes that do not have a positive edge with u is computed. The final metric, mean_rank, is the average of all these ranks. The MAP metric is the mean of the Average Precision of the rankings of all positive nodes for a given node u.

Note that this task tests the representation capacity of the learnt embeddings, not their generalization ability.
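As a toy illustration of the two metrics (with made-up ranks, not real evaluation output): if the positive nodes of some node u came out at ranks 1, 3 and 6 among its non-positives, mean_rank and Average Precision would be computed as -

```python
# Made-up ranks of node u's positive nodes among its candidate set.
ranks = [1, 3, 6]

# mean_rank: the average of the ranks.
mean_rank = sum(ranks) / len(ranks)

# Average Precision: precision at each positive's rank position,
# averaged over the positives.
avg_precision = sum(
    (i + 1) / rank for i, rank in enumerate(sorted(ranks))
) / len(ranks)
print(mean_rank, avg_precision)
```

Lower mean_rank is better (1.0 would mean every positive is ranked first); higher MAP is better (1.0 means all positives outrank all negatives).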
reconstruction_results = OrderedDict()
metrics = ['mean_rank', 'MAP']
for implementation, models in sorted(model_files.items()):
    for model_name, files in models.items():
        if model_name in reconstruction_results:
            continue
        reconstruction_results[model_name] = OrderedDict()
        for metric in metrics:
            reconstruction_results[model_name][metric] = {}
        for model_size, model_file in files.items():
            print('Evaluating model %s of size %d' % (model_name, model_size))
            embedding = load_model(implementation, model_file)
            eval_instance = ReconstructionEvaluation(wordnet_file, embedding)
            eval_result = eval_instance.evaluate(max_n=1000)
            for metric in metrics:
                reconstruction_results[model_name][metric][model_size] = eval_result[metric]
display_results('WordNet Reconstruction', reconstruction_results)
WordNet Reconstruction (numeric column headers are embedding dimensions)

| Model Description | Metric | 5 | 10 | 20 | 50 | 100 | 200 |
|---|---|---|---|---|---|---|---|
| C++: burn_in=0, epochs=200, eps=1e-06, neg=20, threads=8 | mean_rank | 191.69 | 97.65 | 72.07 | 55.48 | 46.76 | 49.62 |
| | MAP | 0.34 | 0.43 | 0.51 | 0.57 | 0.59 | 0.59 |
| C++: burn_in=0, epochs=50, eps=1e-06, neg=10, threads=8 | mean_rank | 280.17 | 129.46 | 92.06 | 80.41 | 71.42 | 69.30 |
| | MAP | 0.27 | 0.40 | 0.49 | 0.53 | 0.56 | 0.56 |
| C++: burn_in=0, epochs=50, eps=1e-06, neg=20, threads=8 | mean_rank | 265.72 | 116.94 | 90.81 | 59.47 | 55.14 | 54.31 |
| | MAP | 0.28 | 0.41 | 0.49 | 0.56 | 0.58 | 0.59 |
| C++: burn_in=10, epochs=50, eps=1e-06, neg=20, threads=8 | mean_rank | 252.86 | 195.73 | 182.57 | 165.33 | 157.37 | 155.78 |
| | MAP | 0.26 | 0.32 | 0.34 | 0.36 | 0.36 | 0.36 |
| Gensim: batch_size=10, burn_in=10, epochs=50, neg=20, reg=0.0 | mean_rank | 108.01 | 100.73 | 97.38 | 94.49 | 94.68 | 89.66 |
| | MAP | 0.37 | 0.47 | 0.48 | 0.49 | 0.48 | 0.49 |
| Gensim: batch_size=10, burn_in=0, epochs=50, neg=20, reg=0.0 | mean_rank | 154.41 | 62.77 | 27.32 | 20.22 | 16.15 | 13.20 |
| | MAP | 0.40 | 0.63 | 0.72 | 0.77 | 0.78 | 0.79 |
| Gensim: batch_size=10, burn_in=0, epochs=50, neg=10, reg=0.0 | mean_rank | 211.71 | 54.42 | 24.90 | 21.42 | 15.80 | 15.13 |
| | MAP | 0.33 | 0.60 | 0.72 | 0.76 | 0.78 | 0.79 |
| Gensim: batch_size=50, burn_in=0, epochs=50, neg=20, reg=0.0 | mean_rank | 148.51 | 63.67 | 28.36 | 20.23 | 15.75 | 13.59 |
| | MAP | 0.38 | 0.62 | 0.72 | 0.76 | 0.78 | 0.79 |
| Gensim: batch_size=10, burn_in=10, epochs=200, neg=10, reg=1 | mean_rank | 61.48 | 54.70 | 53.02 | 50.80 | 49.58 | 48.56 |
| | MAP | 0.38 | 0.41 | 0.41 | 0.42 | 0.42 | 0.43 |
| Numpy: epochs=50, neg=20 | mean_rank | 9617.57 | 5902.65 | 3868.78 | 1117.77 | 529.92 | 377.45 |
| | MAP | 0.14 | 0.16 | 0.19 | 0.25 | 0.30 | 0.35 |
Results from the paper -
The figures above illustrate a few things -
This task is similar to the reconstruction task described above, except that the list of relations is split into a training and a test set, and the mean rank reported is for the edges in the test set.

Therefore, this tests the ability of the model to predict unseen edges between nodes, i.e. its generalization ability, as opposed to the representation capacity tested in the reconstruction task.
def train_test_split(data_file, test_ratio=0.1):
    """Creates train and test files from given data file, returns train/test file names

    Args:
        data_file (str): path to data file for which train/test split is to be created
        test_ratio (float): fraction of lines to be used for test data

    Returns:
        (train_file, test_file): tuple of strings with train file and test file paths
    """
    train_filename = data_file + '.train'
    test_filename = data_file + '.test'
    if os.path.exists(train_filename) and os.path.exists(test_filename):
        print('Train and test files already exist, skipping')
        return (train_filename, test_filename)
    root_nodes, leaf_nodes = get_root_and_leaf_nodes(data_file)
    test_line_candidates = []
    line_count = 0
    all_nodes = set()
    with smart_open(data_file, 'rb') as f:
        for i, line in enumerate(f):
            node_1, node_2 = line.split()
            all_nodes.update([node_1, node_2])
            if (
                node_1 not in leaf_nodes
                and node_2 not in leaf_nodes
                and node_1 not in root_nodes
                and node_2 not in root_nodes
                and node_1 != node_2
            ):
                test_line_candidates.append(i)
            line_count += 1

    num_test_lines = int(test_ratio * line_count)
    if num_test_lines > len(test_line_candidates):
        raise ValueError('Not enough candidate relations for test set')
    print('Choosing %d test lines from %d candidates' % (num_test_lines, len(test_line_candidates)))
    test_line_indices = set(random.sample(test_line_candidates, num_test_lines))
    train_line_indices = set(l for l in range(line_count) if l not in test_line_indices)

    train_set_nodes = set()
    with smart_open(data_file, 'rb') as f:
        train_file = smart_open(train_filename, 'wb')
        test_file = smart_open(test_filename, 'wb')
        for i, line in enumerate(f):
            if i in train_line_indices:
                train_set_nodes.update(line.split())
                train_file.write(line)
            elif i in test_line_indices:
                test_file.write(line)
            else:
                raise AssertionError('Line %d not present in either train or test line indices' % i)
        train_file.close()
        test_file.close()
    assert len(train_set_nodes) == len(all_nodes), 'Not all nodes from dataset present in train set relations'
    return (train_filename, test_filename)
def get_root_and_leaf_nodes(data_file):
    """Return keys of root and leaf nodes from a file with transitive closure relations

    Args:
        data_file (str): file path containing transitive closure relations

    Returns:
        (root_nodes, leaf_nodes) - tuple containing keys of root and leaf nodes
    """
    root_candidates = set()
    leaf_candidates = set()
    with smart_open(data_file, 'rb') as f:
        for line in f:
            nodes = line.split()
            root_candidates.update(nodes)
            leaf_candidates.update(nodes)

    with smart_open(data_file, 'rb') as f:
        for line in f:
            node_1, node_2 = line.split()
            if node_1 == node_2:
                continue
            leaf_candidates.discard(node_1)
            root_candidates.discard(node_2)

    return (leaf_candidates, root_candidates)
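The root/leaf detection above boils down to set membership: a node that never appears on one side of any edge has no neighbor in that direction. A toy version (which side counts as "root" and which as "leaf" depends on the direction convention of the relation file) -

```python
# Toy edge list in a (node, related-node) layout; names are illustrative.
edges = [('dog', 'mammal'), ('cat', 'mammal'), ('mammal', 'animal')]

firsts = {u for u, v in edges}   # nodes appearing in the first position
seconds = {v for u, v in edges}  # nodes appearing in the second position
all_nodes = firsts | seconds

never_first = all_nodes - firsts    # only ever a target of edges
never_second = all_nodes - seconds  # only ever a source of edges
print(never_first, never_second)
```

Excluding both extremal sets from the test-set candidates (as `train_test_split` does) guarantees every node still occurs in at least one training relation.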
wordnet_train_file, wordnet_test_file = train_test_split(wordnet_file)
Train and test files already exist, skipping
# Training models for link prediction
lp_model_files = {}
lp_model_files['c++'] = {}
# Train c++ models with default params
model_name, files = train_model_with_params(default_params, wordnet_train_file, model_sizes, 'cpp_lp_model', 'c++')
lp_model_files['c++'][model_name] = {}
for dim, filepath in files.items():
    lp_model_files['c++'][model_name][dim] = filepath
# Train c++ models with non-default params
for param, values in non_default_params.items():
    params = default_params.copy()
    for value in values:
        params[param] = value
        model_name, files = train_model_with_params(params, wordnet_train_file, model_sizes, 'cpp_lp_model', 'c++')
        lp_model_files['c++'][model_name] = {}
        for dim, filepath in files.items():
            lp_model_files['c++'][model_name][dim] = filepath

lp_model_files['numpy'] = {}
# Train numpy models with default params
model_name, files = train_model_with_params(default_params, wordnet_train_file, model_sizes, 'np_lp_model', 'numpy')
lp_model_files['numpy'][model_name] = {}
for dim, filepath in files.items():
    lp_model_files['numpy'][model_name][dim] = filepath
lp_model_files['gensim'] = {}
# Train models with default params
model_name, files = train_model_with_params(default_params, wordnet_train_file, model_sizes, 'gensim_lp_model', 'gensim')
lp_model_files['gensim'][model_name] = {}
for dim, filepath in files.items():
    lp_model_files['gensim'][model_name][dim] = filepath
# Train models with non-default params
for new_params in non_default_params_gensim:
    params = default_params.copy()
    params.update(new_params)
    # Train on the train split only, since link prediction is evaluated on held-out relations
    model_name, files = train_model_with_params(params, wordnet_train_file, model_sizes, 'gensim_lp_model', 'gensim')
    lp_model_files['gensim'][model_name] = {}
    for dim, filepath in files.items():
        lp_model_files['gensim'][model_name][dim] = filepath
lp_results = OrderedDict()
metrics = ['mean_rank', 'MAP']
for implementation, models in sorted(lp_model_files.items()):
    for model_name, files in models.items():
        lp_results[model_name] = OrderedDict()
        for metric in metrics:
            lp_results[model_name][metric] = {}
        for model_size, model_file in files.items():
            print('Evaluating model %s of size %d' % (model_name, model_size))
            embedding = load_model(implementation, model_file)
            eval_instance = LinkPredictionEvaluation(wordnet_train_file, wordnet_test_file, embedding)
            eval_result = eval_instance.evaluate(max_n=1000)
            for metric in metrics:
                lp_results[model_name][metric][model_size] = eval_result[metric]
display_results('WordNet Link Prediction', lp_results)
WordNet Link Prediction (numeric column headers are embedding dimensions)

| Model Description | Metric | 5 | 10 | 20 | 50 | 100 | 200 |
|---|---|---|---|---|---|---|---|
| C++: burn_in=0, epochs=200, eps=1e-06, neg=20, threads=8 | mean_rank | 218.26 | 99.09 | 60.50 | 52.24 | 60.81 | 69.13 |
| | MAP | 0.15 | 0.24 | 0.31 | 0.35 | 0.36 | 0.36 |
| C++: burn_in=0, epochs=50, eps=1e-06, neg=20, threads=8 | mean_rank | 687.48 | 281.88 | 72.95 | 57.37 | 52.56 | 61.42 |
| | MAP | 0.12 | 0.15 | 0.31 | 0.35 | 0.36 | 0.36 |
| C++: burn_in=0, epochs=50, eps=1e-06, neg=10, threads=8 | mean_rank | 230.34 | 123.24 | 75.62 | 65.97 | 55.33 | 56.89 |
| | MAP | 0.14 | 0.22 | 0.28 | 0.31 | 0.33 | 0.34 |
| C++: burn_in=10, epochs=50, eps=1e-06, neg=20, threads=8 | mean_rank | 236.31 | 214.85 | 193.30 | 180.27 | 169.00 | 163.22 |
| | MAP | 0.10 | 0.13 | 0.14 | 0.15 | 0.16 | 0.16 |
| Gensim: batch_size=10, burn_in=0, epochs=50, neg=10, reg=0.0 | mean_rank | 141.52 | 58.89 | 31.66 | 22.13 | 21.29 | 19.38 |
| | MAP | 0.18 | 0.34 | 0.46 | 0.51 | 0.52 | 0.53 |
| Gensim: batch_size=10, burn_in=0, epochs=50, neg=20, reg=0.0 | mean_rank | 121.42 | 52.51 | 24.61 | 19.96 | 20.44 | 19.55 |
| | MAP | 0.19 | 0.37 | 0.46 | 0.52 | 0.50 | 0.54 |
| Gensim: batch_size=50, burn_in=0, epochs=50, neg=20, reg=0.0 | mean_rank | 144.19 | 53.65 | 25.21 | 20.68 | 21.32 | 18.97 |
| | MAP | 0.19 | 0.35 | 0.47 | 0.52 | 0.51 | 0.53 |
| Gensim: batch_size=10, burn_in=10, epochs=50, neg=20, reg=0.0 | mean_rank | 154.95 | 138.12 | 122.06 | 117.96 | 112.99 | 110.84 |
| | MAP | 0.16 | 0.21 | 0.24 | 0.26 | 0.25 | 0.26 |
| Gensim: batch_size=10, burn_in=10, epochs=200, neg=10, reg=1 | mean_rank | 51.72 | 39.85 | 38.60 | 36.55 | 35.32 | 34.66 |
| | MAP | 0.22 | 0.28 | 0.29 | 0.30 | 0.31 | 0.31 |
| Numpy: epochs=50, neg=20 | mean_rank | 14526.67 | 8411.10 | 5749.57 | 1873.12 | 1639.50 | 1350.13 |
| | MAP | 0.01 | 0.02 | 0.04 | 0.07 | 0.10 | 0.13 |
Results from the paper -
These results follow trends similar to the reconstruction results. The main difference is that the mean ranks for link prediction are slightly worse than the corresponding reconstruction results most of the time. This is to be expected, as link prediction is performed on a held-out test set.
The Lexical Entailment task is performed using the HyperLex dataset, a collection of 2163 noun pairs with scores that denote "To what degree is noun X a type of noun Y?". For example -

girl person 9.85

The scores are out of 10.

Spearman's rank correlation is computed between the predicted and the gold-standard entailment scores, with the models trained on the entire WordNet noun hierarchy.
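For tie-free scores, Spearman's correlation is just the Pearson correlation of the two rank orderings; a stdlib-only sketch (the scores below are made up, and `LexicalEntailmentEvaluation` presumably relies on a proper library implementation that also handles ties) -

```python
# Minimal Spearman rank correlation for tie-free score lists.
def ranks(values):
    """Return the rank (0-based) of each value in its list."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    """Pearson correlation of the two rank vectors (no tie handling)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n - 1) / 2  # ranks are a permutation of 0..n-1
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

predicted = [0.1, 0.4, 0.2, 0.9]  # made-up model scores
gold = [1.0, 5.0, 2.0, 9.9]       # made-up gold-standard scores
print(spearman(predicted, gold))
# 1.0 -- the two lists order the pairs identically
```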
entailment_results = OrderedDict()
eval_instance = LexicalEntailmentEvaluation(hyperlex_file)
for implementation, models in sorted(model_files.items()):
    for model_name, files in models.items():
        if model_name in entailment_results:
            continue
        entailment_results[model_name] = OrderedDict()
        entailment_results[model_name]['spearman'] = {}
        for model_size, model_file in files.items():
            print('Evaluating model %s of size %d' % (model_name, model_size))
            embedding = load_model(implementation, model_file)
            entailment_results[model_name]['spearman'][model_size] = eval_instance.evaluate_spearman(embedding)
display_results('Lexical Entailment (HyperLex)', entailment_results)
Lexical Entailment (HyperLex) (numeric column headers are embedding dimensions)

| Model Description | Metric | 5 | 10 | 20 | 50 | 100 | 200 |
|---|---|---|---|---|---|---|---|
| C++: burn_in=0, epochs=200, eps=1e-06, neg=20, threads=8 | spearman | 0.45 | 0.46 | 0.45 | 0.45 | 0.45 | 0.46 |
| C++: burn_in=0, epochs=50, eps=1e-06, neg=10, threads=8 | spearman | 0.42 | 0.41 | 0.43 | 0.42 | 0.43 | 0.43 |
| C++: burn_in=0, epochs=50, eps=1e-06, neg=20, threads=8 | spearman | 0.44 | 0.43 | 0.47 | 0.44 | 0.45 | 0.44 |
| C++: burn_in=10, epochs=50, eps=1e-06, neg=20, threads=8 | spearman | 0.43 | 0.42 | 0.44 | 0.44 | 0.44 | 0.45 |
| Gensim: batch_size=10, burn_in=10, epochs=50, neg=20, reg=0.0 | spearman | 0.45 | 0.46 | 0.45 | 0.46 | 0.45 | 0.46 |
| Gensim: batch_size=10, burn_in=0, epochs=50, neg=20, reg=0.0 | spearman | 0.47 | 0.45 | 0.47 | 0.47 | 0.48 | 0.47 |
| Gensim: batch_size=10, burn_in=0, epochs=50, neg=10, reg=0.0 | spearman | 0.46 | 0.46 | 0.45 | 0.47 | 0.47 | 0.48 |
| Gensim: batch_size=50, burn_in=0, epochs=50, neg=20, reg=0.0 | spearman | 0.46 | 0.46 | 0.47 | 0.47 | 0.48 | 0.47 |
| Gensim: batch_size=10, burn_in=10, epochs=200, neg=10, reg=1 | spearman | 0.52 | 0.51 | 0.51 | 0.51 | 0.52 | 0.51 |
| Numpy: epochs=50, neg=20 | spearman | 0.15 | 0.19 | 0.20 | 0.20 | 0.24 | 0.26 |
Results from the paper (for Poincaré embeddings, as well as other embeddings from previous papers) -
Some observations -
However, there are a few ambiguities and caveats -
The paper also describes a variant of the Poincaré model to learn embeddings of nodes in a symmetric graph, unlike the WordNet noun hierarchy, which is directed and asymmetric. The datasets used in the paper for this model are scientific collaboration networks, in which the nodes are researchers and an edge represents that the two researchers have co-authored a paper.
This variant has not been implemented yet, and is therefore not a part of our experiments.