Relation extraction using distant supervision

In [1]:
__author__ = "Bill MacCartney ([email protected])"
__version__ = "CS224U, Stanford, Spring 2018"

Overview

This codebook illustrates an approach to relation extraction using distant supervision. It uses a simplified version of the approach taken by Mintz et al. in their 2009 paper, Distant supervision for relation extraction without labeled data. If you haven't yet read that paper, read it now! The rest of the codebook will make a lot more sense after you're familiar with it.

The task of relation extraction

Relation extraction is the task of extracting from natural language text relational triples such as:

(founders, SpaceX, Elon_Musk)
(has_spouse, Elon_Musk, Talulah_Riley)
(worked_at, Elon_Musk, Tesla_Motors)

If we can accumulate a large knowledge base (KB) of relational triples, we can use it to power question answering and other applications. Building a KB manually is slow and expensive, but much of the knowledge we'd like to capture is already expressed in abundant text on the web. The aim of relation extraction, therefore, is to accelerate the construction of new KBs — and facilitate the ongoing curation of existing KBs — by extracting relational triples from natural language text.

Hand-built patterns

An obvious way to start is to write down a few patterns which express each relation. For example, we can use the pattern "X is the founder of Y" to find new instances of the founders relation. If we search a large corpus, we may find the phrase "Elon Musk is the founder of SpaceX", which we can use as evidence for the relational triple (founders, SpaceX, Elon_Musk).

Unfortunately, this approach doesn't get us very far. The central challenge of relation extraction is the fantastic diversity of language, the multitude of possible ways to express a given relation. For example, each of the following sentences expressed the relational triple (founders, SpaceX, Elon_Musk):

  • "You may also be thinking of Elon Musk (founder of SpaceX), who started PayPal."
  • "Interesting Fact: Elon Musk, co-founder of PayPal, went on to establish SpaceX, one of the most promising space travel startups in the world."
  • "If Space Exploration (SpaceX), founded by Paypal pioneer Elon Musk succeeds, commercial advocates will gain credibility and more support in Congress."

The patterns which connect "Elon Musk" with "SpaceX" in these examples are not ones we could have easily anticipated. To do relation extraction effectively, we need to go beyond hand-built patterns.

Supervised learning

Effective relation extraction will require applying machine learning methods. The natural place to start is with supervised learning. This means training an extraction model from a dataset of examples which have been labeled with the target output. Sentences like the three examples above would be annotated with the founders relation, but we'd also have sentences which include "Elon Musk" and "SpaceX" but do not express the founders relation, such as:

  • "Billionaire entrepreneur Elon Musk announced the latest addition to the SpaceX arsenal: the 'Big F---ing Rocket' (BFR)".

Such "negative examples" would be labeled as such, and the fully-supervised model would then be able to learn from both positive and negative examples the linguistic patterns that indicate each relation.

The difficulty with the fully-supervised approach is the cost of generating training data. Because of the great diversity of linguistic expression, our model will need lots and lots of training data: at least tens of thousands of examples, although hundreds of thousands or millions would be much better. But labeling the examples is just a slow and expensive as building the KB by hand would be.

Distant supervision

The goal of distant supervision is to capture the benefits of supervised learning without paying the cost of labeling training data. Instead of labeling extraction examples by hand, we use existing relational triples to automatically identify extraction examples in a large corpus. For example, if we already have in our KB the relational triple (founders, SpaceX, Elon_Musk), we can search a large corpus for sentences in which "SpaceX" and "Elon Musk" co-occur, make the (unreliable!) assumption that all the sentences express the founder relation, and then use them as training data for a learned model to identify new instances of the founder relation — all without doing any manual labeling.

This is a powerful idea, but it has two limitations. The first is that, inevitably, some of the sentences in which "SpaceX" and "Elon Musk" co-occur will not express the founder relation — like the BFR example above. By making the blind assumption that all such sentences do express the founder relation, we are essentially injecting noise into our training data, and making it harder for our learning algorithms to learn good models. Distant supervision is effective in spite of this problem because it makes it possible to leverage vastly greater quantities of training data, and the benefit of more data outweighs the harm of noisier data.

The second limitation is that we need an existing KB to start from. We can only train a model to extract new instances of the founders relation if we already have many instances of the founders relation. Thus, while distant supervision is a great way to extend an existing KB, it's not useful for creating a KB containing new relations from scratch.

[ top ]

Set-up

  • Make sure your environment includes all the requirements for the cs224u repository.
  • Download the data distribution for this unit, unpack it, and place it in the directory containing the course repository. (If you want to put it somewhere else, change rel_ext_data_home below.)
In [2]:
rel_ext_data_home = 'rel_ext_data'
In [3]:
import gzip
import numpy as np
import random
import os

from collections import Counter, defaultdict, namedtuple
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

[ top ]

The corpus

As usual when we're doing NLP, we need to start with a corpus — a large sample of natural language text. And because our goal is to do relation extraction with distant supervision, we need to be able to identify entities in the text and connect them to a knowledge base of relations between entities. So, we need a corpus in which entity mentions are annotated with entity resolutions which map them to a unique, unambiguous identifiers. Entity resolution serves two purposes:

  1. It ensures that if an entity mention could refer to two different entities, it is properly disambiguated. For example, "New York" could refer to the city or the state.
  2. It ensures that if two different entity mentions refer to the same entity, they are properly identified. For example, both "New York City" and "The Big Apple" refer to New York City.

The corpus we'll use for this project is derived from the Wikilinks dataset announced by Google in 2013. This dataset contains over 40M mentions of 3M distinct entities spanning 10M webpages. It provides entity resolutions by mapping each entity mention to a Wikipedia URL.

Now, in order to do relation extraction, we actually need pairs of entity mentions, and it's important to have the context around and between the two mentions. Fortunately, UMass has provided an expanded version of Wikilinks which includes the context around each entity mention. We've written code to stitch together pairs of entity mentions along with their contexts, and we've filtered the examples extensively. The result is a compact corpus suitable for our purposes. Let's take a closer look.

In [4]:
Example = namedtuple('Example',
    'entity_1, entity_2, left, mention_1, middle, mention_2, right, '
    'left_POS, mention_1_POS, middle_POS, mention_2_POS, right_POS')

def read_examples():
    examples = []
    path = os.path.join(rel_ext_data_home, 'corpus.tsv.gz')
    print('Reading examples from {}'.format(path))
    with gzip.open(path) as f:
        for line in f:
            fields = line[:-1].decode('utf-8').split('\t')
            examples.append(Example(*fields))
    print('Read {} examples'.format(len(examples)))            
    return examples

examples = read_examples()
Reading examples from rel_ext_data/corpus.tsv.gz
Read 414123 examples

Great, that's a lot of examples! Let's take a closer look at one.

In [5]:
print(examples[1])
Example(entity_1='New_Mexico', entity_2='Arizona', left='to all Spanish-occupied lands . The horno has a beehive shape and uses wood as the only heat source . The procedure still used in parts of', mention_1='New Mexico', middle='and', mention_2='Arizona', right='is to build a fire inside the Horno and , when the proper amount of time has passed , remove the embers and ashes and insert the', left_POS='to/TO all/DT Spanish-occupied/JJ lands/NNS ./. The/DT horno/NN has/VBZ a/DT beehive/NN shape/NN and/CC uses/VBZ wood/NN as/IN the/DT only/JJ heat/NN source/NN ./. The/DT procedure/NN still/RB used/VBN in/IN parts/NNS of/IN', mention_1_POS='New/NNP Mexico/NNP', middle_POS='and/CC', mention_2_POS='Arizona/NNP', right_POS='is/VBZ to/TO build/VB a/DT fire/NN inside/IN the/DT Horno/NNP and/CC ,/, when/WRB the/DT proper/JJ amount/NN of/IN time/NN has/VBZ passed/VBN ,/, remove/VB the/DT embers/NNS and/CC ashes/NNS and/CC insert/VB the/DT')

Every example represents a fragment of webpage text containing two entity mentions. The first two fields, entity_1 and entity_2, contain unique identifiers for the two entities mentioned. We name entities using Wiki IDs, which you can think of as the last portion of a Wikipedia URL. Thus the Wiki ID Barack_Obama designates the entity described by https://en.wikipedia.org/wiki/Barack_Obama.

The next five fields represent the text surrounding the two mentions, divided into five chunks: left contains the text before the first mention, mention_1 is the first mention itself, middle contains the text between the two mentions, mention_2 is the second mention, and the field right contains the text after the second mention. Thus, we can reconstruct the context as a single string like this:

In [6]:
ex = examples[1]
' '.join((ex.left, ex.mention_1, ex.middle, ex.mention_2, ex.right))
Out[6]:
'to all Spanish-occupied lands . The horno has a beehive shape and uses wood as the only heat source . The procedure still used in parts of New Mexico and Arizona is to build a fire inside the Horno and , when the proper amount of time has passed , remove the embers and ashes and insert the'

The last five fields contain the same five chunks of text, but this time annotated with part-of-speech (POS) tags, which may turn out to be useful when we start building models for relation extraction.

Let's look at the distribution of entities over the corpus. How many entities are there, and what are the most common ones?

In [7]:
counter = Counter()
for example in examples:
    counter[example.entity_1] += 1
    counter[example.entity_2] += 1
print('The corpus contains {} entities'.format(len(counter)))
counts = sorted([(count, key) for key, count in counter.items()], reverse=True)
print('The most common entities are:')
for count, key in counts[:20]:
    print('{:10d} {}'.format(count, key))
The corpus contains 107820 entities
The most common entities are:
      9399 India
      6214 England
      4585 Germany
      4486 France
      4128 Australia
      3939 China
      3930 Canada
      3897 Italy
      3368 California
      3125 Pakistan
      3103 Europe
      3097 New_York_City
      3025 London
      2470 Japan
      2468 United_Kingdom
      2279 New_Zealand
      2275 New_York
      2259 Spain
      2132 Philippines
      2120 Asia

Because we're frequently going to want to retrieve corpus examples containing specific entities, it will be convenient to create a Corpus class which holds not only the examples themselves, but also a precomputed index.

In [8]:
class Corpus():

    def __init__(self, examples):
        self._examples = examples
        self._examples_by_entities = {}
        self._index_examples_by_entities()

    def _index_examples_by_entities(self):
        for ex in self._examples:
            if ex.entity_1 not in self._examples_by_entities:
                self._examples_by_entities[ex.entity_1] = {}
            if ex.entity_2 not in self._examples_by_entities[ex.entity_1]:
                self._examples_by_entities[ex.entity_1][ex.entity_2] = []
            self._examples_by_entities[ex.entity_1][ex.entity_2].append(ex)
    
    def get_examples(self):
        return iter(self._examples)
        
    def get_examples_for_entities(self, e1, e2):
        try:
            return self._examples_by_entities[e1][e2]
        except KeyError:
            return []
        
    def __repr__(self):
        return 'Corpus with {} examples'.format(len(self._examples))
In [9]:
corpus = Corpus(examples)
corpus
Out[9]:
Corpus with 414123 examples

The main benefit we gain from the Corpus class is the ability to retrieve examples containing specific entities. Let's find examples containing Steve_Jobs and Pixar.

In [10]:
def show_examples_for_pair(e1, e2, corpus):
    exs = corpus.get_examples_for_entities(e1, e2)
    if exs:
        print('The first of {} examples for {} and {} is:'.format(len(exs), e1, e2))
        print(exs[0])
    else:
        print('No examples for {} and {} is:'.format(e1, e2))

show_examples_for_pair('Steve_Jobs', 'Pixar', corpus)
The first of 9 examples for Steve_Jobs and Pixar is:
Example(entity_1='Steve_Jobs', entity_2='Pixar', left='of visual effects on films like The Abyss ( 1989 ) , Terminator 2 ( 1991 ) and Jurassic Park ( 1993 ) The computer graphics division of ILM was bought by', mention_1='Steve Jobs', middle='and became', mention_2='Pixar', right=', who would go on to make several groundbreaking animated films starting with Toy Story ( 1995 ) – more information on the history of that here', left_POS='of/IN visual/JJ effects/NNS on/IN films/NNS like/IN The/DT Abyss/NN -LRB-/-LRB- 1989/CD -RRB-/-RRB- ,/, Terminator/NNP 2/CD -LRB-/-LRB- 1991/CD -RRB-/-RRB- and/CC Jurassic/JJ Park/NN -LRB-/-LRB- 1993/CD -RRB-/-RRB- The/DT computer/NN graphics/NNS division/NN of/IN ILM/NNP was/VBD bought/VBN by/IN', mention_1_POS='Steve/NNP Jobs/NNP', middle_POS='and/CC became/VBD', mention_2_POS='Pixar/NNP', right_POS=',/, who/WP would/MD go/VB on/IN to/TO make/VB several/JJ groundbreaking/VBG animated/JJ films/NNS starting/VBG with/IN Toy/NNP Story/NNP -LRB-/-LRB- 1995/CD -RRB-/-RRB- --/: more/JJR information/NN on/IN the/DT history/NN of/IN that/DT here/RB')

Actually, this might not be all of the examples containing Steve_Jobs and Pixar. It's only the examples where Steve_Jobs was mentioned first and Pixar second. There may be additional examples that have them in the reverse order. Let's check.

In [11]:
show_examples_for_pair('Pixar', 'Steve_Jobs', corpus)
The first of 2 examples for Pixar and Steve_Jobs is:
Example(entity_1='Pixar', entity_2='Steve_Jobs', left='in the visual accompaniment to his recordings of Bach ’ s Six Suites for Unaccompanied Cello . Ma has also been seen with Apple Inc. and former', mention_1='Pixar', middle='CEO', mention_2='Steve Jobs', right='. Ma is often invited to press events for Jobs ’ s companies , and has performed on stage during event keynote presentations , as well as appearing in', left_POS="in/IN the/DT visual/JJ accompaniment/NN to/TO his/PRP$ recordings/NNS of/IN Bach/NNP '/POS s/NNS Six/CD Suites/NNP for/IN Unaccompanied/NNP Cello/NNP ./. Ma/NNP has/VBZ also/RB been/VBN seen/VBN with/IN Apple/NNP Inc./NNP and/CC former/JJ", mention_1_POS='Pixar/NNP', middle_POS='CEO/NNP', mention_2_POS='Steve/NNP Jobs/NNP', right_POS="./. Ma/NNP is/VBZ often/RB invited/VBN to/TO press/VB events/NNS for/IN Jobs/NNP '/POS s/NNS companies/NNS ,/, and/CC has/VBZ performed/VBN on/IN stage/NN during/IN event/NN keynote/NN presentations/NNS ,/, as/RB well/RB as/IN appearing/VBG in/IN")

Sure enough. Going forward, we'll have to remember to check both "directions" when we're looking for examples contains a specific pair of entities.

This corpus is not without flaws. As you get more familiar with it, you will likely discover that it contains many examples that are nearly — but not exactly — duplicates. This seems to be a consequence of the web document sampling methodology that was used in the construction of the Wikilinks dataset. However, despite a few warts, it will serve our purposes.

One thing this corpus does not include is any annotation about relations. Thus, it could not be used for the fully-supervised approach to relation extraction, because the fully-supervised approach requires that each pair of entity mentions be annotated with the relation (if any) that holds between the two entities. In order to make any headway, we'll need to connect the corpus with an external source of knowledge about relations. We need a knowledge base.

[ top ]

The knowledge base

The data distribution for this unit includes a knowledge base (KB) ultimately derived from Freebase. Unfortunately, Freebase was shut down in 2016, but the Freebase data is still available from various sources and in various forms. The KB included here was extracted from the Freebase Easy data dump.

The KB is a collection of relational triples, each consisting of a relation, a subject, and an object. For example, here are three triples from the KB:

(place_of_birth, Barack_Obama, Honolulu)
(has_spouse, Barack_Obama, Michelle_Obama)
(author, The_Audacity_of_Hope, Barack_Obama)

As you might guess:

  • The relation is one of a handful of predefined constants, such as place_of_birth or has_spouse.
  • The subject and object are entities represented by Wiki IDs (that is, suffixes of Wikipedia URLs).

Let's write some code to read the KB so that we can take a closer look.

In [12]:
KBTriple = namedtuple('KBTriple', 'rel, sbj, obj')

def read_kb_triples():
    kb_triples = []
    path = os.path.join(rel_ext_data_home, 'kb.tsv.gz')
    print('Reading KB triples from {} ...'.format(path))
    with gzip.open(path) as f:
        for line in f:
            rel, sbj, obj = line[:-1].decode('utf-8').split('\t')
            kb_triples.append(KBTriple(rel, sbj, obj))
    print('Read {} KB triples'.format(len(kb_triples)))
    return kb_triples

kb_triples = read_kb_triples()
Reading KB triples from rel_ext_data/kb.tsv.gz ...
Read 56575 KB triples

OK great, we have a KB!

Now, just as we did for the corpus, we'll create a KB class to store the KB triples and some associated indexes. We'll want to be able to look up KB triples both by relation and by entities, so we'll create indexes for both of those access patterns.

In [13]:
class KB():

    def __init__(self, kb_triples):
        self._kb_triples = kb_triples
        self._all_relations = []
        self._all_entity_pairs = []
        self._kb_triples_by_relation = {}
        self._kb_triples_by_entities = {}
        self._collect_all_entity_pairs()
        self._index_kb_triples_by_relation()
        self._index_kb_triples_by_entities()

    def _collect_all_entity_pairs(self):
        pairs = set()
        for kbt in self._kb_triples:
            pairs.add((kbt.sbj, kbt.obj))
        self._all_entity_pairs = sorted(list(pairs))
        
    def _index_kb_triples_by_relation(self):
        for kbt in self._kb_triples:
            if kbt.rel not in self._kb_triples_by_relation:
                self._kb_triples_by_relation[kbt.rel] = []
            self._kb_triples_by_relation[kbt.rel].append(kbt)
        self._all_relations = sorted(list(self._kb_triples_by_relation))
    
    def _index_kb_triples_by_entities(self):
        for kbt in self._kb_triples:
            if kbt.sbj not in self._kb_triples_by_entities:
                self._kb_triples_by_entities[kbt.sbj] = {}
            if kbt.obj not in self._kb_triples_by_entities[kbt.sbj]:
                self._kb_triples_by_entities[kbt.sbj][kbt.obj] = []
            self._kb_triples_by_entities[kbt.sbj][kbt.obj].append(kbt)

    def get_triples(self):
        return iter(self._kb_triples)
        
    def get_all_relations(self):
        return self._all_relations
            
    def get_all_entity_pairs(self):
        return self._all_entity_pairs
            
    def get_triples_for_relation(self, rel):
        try:
            return self._kb_triples_by_relation[rel]
        except KeyError:
            return []

    def get_triples_for_entities(self, e1, e2):
        try:
            return self._kb_triples_by_entities[e1][e2]
        except KeyError:
            return []

    def __repr__(self):
        return 'KB with {} triples'.format(len(self._kb_triples))
In [14]:
kb = KB(kb_triples)
kb
Out[14]:
KB with 56575 triples

Let's get a sense of the high-level characteristics of this KB. Some questions we'd like to answer:

  • How many relations are there?
  • How big is each relation?
  • Examples of each relation.
  • How many unique entities does the KB include?
In [15]:
all_relations = kb.get_all_relations()
print(len(all_relations))
16

How big is each relation? That is, how many triples does each relation contain?

In [16]:
for rel in all_relations:
    print('{:12d} {}'.format(len(kb.get_triples_for_relation(rel)), rel))
        2140 adjoins
        3316 author
         637 capital
       22489 contains
        4958 film_performance
        2404 founders
        1012 genre
        3280 has_sibling
        3774 has_spouse
        3153 is_a
        1981 nationality
        2013 parents
        1388 place_of_birth
        1031 place_of_death
        1526 profession
        1473 worked_at

Let's look at one example from each relation, so that we can get a sense of what they mean.

In [17]:
for rel in all_relations:
    print(tuple(kb.get_triples_for_relation(rel)[0]))
('adjoins', 'Siegburg', 'Bonn')
('author', 'Uncle_Silas', 'Sheridan_Le_Fanu')
('capital', 'Tunisia', 'Tunis')
('contains', 'Brickfields', 'Kuala_Lumpur_Sentral_railway_station')
('film_performance', 'Colin_Hanks', 'The_Great_Buck_Howard')
('founders', 'Bomis', 'Jimmy_Wales')
('genre', 'SPARQL', 'Semantic_Web')
('has_sibling', 'Ari_Emanuel', 'Rahm_Emanuel')
('has_spouse', 'Percy_Bysshe_Shelley', 'Mary_Shelley')
('is_a', 'Bhanu_Athaiya', 'Costume_designer')
('nationality', 'Ruben_Rausing', 'Sweden')
('parents', 'Prince_Arthur_of_Connaught', 'Prince_Arthur,_Duke_of_Connaught_and_Strathearn')
('place_of_birth', 'William_Penny_Brookes', 'Much_Wenlock')
('place_of_death', 'Jean_Drapeau', 'Montreal')
('profession', 'Rufus_Wainwright', 'Actor')
('worked_at', 'Ray_Jackendoff', 'Tufts_University')

The get_triples_for_entities() method allows us to look up triples by the entities they contain. Let's use it to see what relation(s) hold between France and Germany.

In [18]:
kb.get_triples_for_entities('France', 'Germany')
Out[18]:
[KBTriple(rel='adjoins', sbj='France', obj='Germany')]

Relations like adjoins and has_sibling are intuitively symmetric — if the relation holds between X and Y, then we expect it to hold between Y and X as well.

In [19]:
kb.get_triples_for_entities('Germany', 'France')
Out[19]:
[KBTriple(rel='adjoins', sbj='Germany', obj='France')]

However, there's no guarantee that all such inverse triples actually appear in the KB. (You could write some code to check.)

Most relations, however, are intuitively asymmetric. Let's see what relation holds between Pixar and Steve_Jobs.

In [20]:
kb.get_triples_for_entities('Pixar', 'Steve_Jobs')
Out[20]:
[KBTriple(rel='founders', sbj='Pixar', obj='Steve_Jobs')]

It's a bit arbitrary that the KB includes a given asymmetric relation rather than its inverse. For example, instead of the founders relation with triple (founders, Pixar, Steve_Jobs), we might have had a founder_of relation with triple (founder_of, Steve_Jobs, Pixar). It doesn't really matter.

Although we don't have a founder_of relation, there might still be a relation between Steve_Jobs and Pixar. Let's check.

In [21]:
kb.get_triples_for_entities('Steve_Jobs', 'Pixar')
Out[21]:
[KBTriple(rel='worked_at', sbj='Steve_Jobs', obj='Pixar')]

Aha, yes, that makes sense. So it can be the case that one relation holds between X and Y, and a different relation holds between Y and X.

One more observation: there may be more than one relation that holds between a given pair of entities, even in one direction.

In [22]:
kb.get_triples_for_entities('Cleopatra', 'Ptolemy_XIII_Theos_Philopator')
Out[22]:
[KBTriple(rel='has_sibling', sbj='Cleopatra', obj='Ptolemy_XIII_Theos_Philopator'),
 KBTriple(rel='has_spouse', sbj='Cleopatra', obj='Ptolemy_XIII_Theos_Philopator')]

No! What? Yup, it's true — Cleopatra married her younger brother, Ptolemy XIII. Wait, it gets worse — she also married her even younger brother, Ptolemy XIV. Apparently this was normal behavior in ancient Egypt.

Moving on ...

Let's look at the distribution of entities in the KB. How many entities are there, and what are the most common ones?

In [23]:
counter = Counter()
for kbt in kb.get_triples():
    counter[kbt.sbj] += 1
    counter[kbt.obj] += 1
print('The KB contains {} entities'.format(len(counter)))
counts = sorted([(count, key) for key, count in counter.items()], reverse=True)
print('The most common entities are:')
for count, key in counts[:20]:
    print('{:10d} {}'.format(count, key))
The KB contains 46275 entities
The most common entities are:
       962 England
       815 India
       465 London
       456 Italy
       437 France
       420 Germany
       412 California
       396 United_Kingdom
       378 Canada
       324 New_York_City
       262 Actor
       248 New_York
       244 Australia
       235 China
       226 Philippines
       224 Japan
       223 Russia
       214 Scotland
       204 Europe
       177 Pakistan

The number of entities in the KB is less than half the number of entities in the corpus! Evidently the corpus has much broader coverage than the KB.

Note that there is no promise or expectation that this KB is complete. Not only does the KB contain no mention of many entities from the corpus — even for the entities it does include, there may be possible triples which are true in the world but are missing from the KB. As an example, these triples are in the KB:

(founders, SpaceX, Elon_Musk)
(founders, Tesla_Motors, Elon_Musk)
(worked_at, Elon_Musk, Tesla_Motors)

but this one is not:

(worked_at, Elon_Musk, SpaceX)

In fact, the whole point of developing methods for automatic relation extraction is to extend existing KBs (and build new ones) by identifying new relational triples from natural language text. If our KBs were complete, we wouldn't have anything to do.

[ top ]

Problem formulation

With our data assets in hand, it's time to provide a precise formulation of the prediction problem we aim to solve. We need to specify:

  • What is the input to the prediction?
    • Is it a specific pair of entity mentions in a specific context?
    • Or is it a pair of entities, apart from any specific mentions?
  • What is the output of the prediction?

Joining the corpus and the KB

In order to leverage the distant supervision paradigm, we'll need to connect information in the corpus with information in the KB. There are two possibilities, depending on how we formulate our prediction problem:

  • Use the KB to generate labels for the corpus. If our problem is to classify a pair of entity mentions in a specific example in the corpus, then we can use the KB to provide labels for training examples. Labeling specific examples is how the fully supervised paradigm works, so it's the obvious way to think about leveraging distant supervision as well. Although it can be made to work, it's not actually the preferred approach.
  • Use the corpus to generate features for entity pairs. If instead our problem is to classify a pair of entities, then we can use all the examples from the corpus where those two entities co-occur to generate a feature representation describing the entity pair. This is the approach taken by Mintz et al. 2009, and it's the approach we'll pursue here.

So we'll formulate our prediction problem such that the input is a pair of entities, and the goal is to predict what relation(s) the pair belongs to. The KB will provide the labels, and the corpus will provide the features.

Let's determine how many examples we have for each triple in the KB. We'll compute averages per relation.

In [24]:
def count_examples(corpus, kb):
    counter = Counter()
    for rel in all_relations:
        for kbt in kb.get_triples_for_relation(rel):
            # count examples in both forward and reverse directions
            counter[rel] += len(corpus.get_examples_for_entities(kbt.sbj, kbt.obj))
            counter[rel] += len(corpus.get_examples_for_entities(kbt.obj, kbt.sbj))
    # report results
    print('{:20s} {:>10s} {:>10s} {:>10s}'.format('', '', '', 'examples'))
    print('{:20s} {:>10s} {:>10s} {:>10s}'.format('relation', 'examples', 'triples', '/triple'))
    print('{:20s} {:>10s} {:>10s} {:>10s}'.format('--------', '--------', '-------', '-------'))
    for rel in all_relations:
        nx = counter[rel]
        nt = len(kb.get_triples_for_relation(rel))
        print('{:20s} {:10d} {:10d} {:10.2f}'.format(rel, nx, nt, 1.0 * nx / nt))
        
count_examples(corpus, kb)
                                             examples
relation               examples    triples    /triple
--------               --------    -------    -------
adjoins                   85660       2140      40.03
author                    15822       3316       4.77
capital                   12520        637      19.65
contains                  99572      22489       4.43
film_performance          11195       4958       2.26
founders                   8061       2404       3.35
genre                      1941       1012       1.92
has_sibling               12332       3280       3.76
has_spouse                16188       3774       4.29
is_a                       6955       3153       2.21
nationality                4649       1981       2.35
parents                    5387       2013       2.68
place_of_birth             2214       1388       1.60
place_of_death             2047       1031       1.99
profession                 2876       1526       1.88
worked_at                  4494       1473       3.05

For most relations, the total number of examples is fairly large, so we can be optimistic about learning what linguistic patterns express a given relation. However, for individual entity pairs, the number of examples is often quite low. Of course, more data would be better — much better! But more data could quickly become unwieldy to work with in a notebook like this.

Negative instances

By joining the corpus to the KB, we can obtain abundant positive instances for each relation. But a classifier cannot be trained on positive instances alone. In order to apply the distant supervision paradigm, we will also need some negative instances — that is, entity pairs which do not belong to any known relation. If you like, you can think of these entity pairs as being assigned to a special relation called NO_RELATION. We can find plenty of such pairs by searching for examples in the corpus which contain two entities which do not belong to any relation in the KB.

In [25]:
def find_unrelated_pairs(corpus, kb):
    unrelated_pairs = set()
    for ex in corpus.get_examples():
        if kb.get_triples_for_entities(ex.entity_1, ex.entity_2):
            continue
        if kb.get_triples_for_entities(ex.entity_2, ex.entity_1):
            continue
        unrelated_pairs.add((ex.entity_1, ex.entity_2))
        unrelated_pairs.add((ex.entity_2, ex.entity_1))
    return unrelated_pairs
In [26]:
unrelated_pairs = find_unrelated_pairs(corpus, kb)
print('Found {} unrelated pairs, including:'.format(len(unrelated_pairs)))
for pair in list(unrelated_pairs)[:10]:
    print('   ', pair)
Found 301073 unrelated pairs, including:
    ('Lainie_Kazan', 'Tab_Hunter')
    ('City_of_Brussels', 'Belgian_Comic_Strip_Center')
    ('John_Francis_Daley', 'Hart_Hanson')
    ('Andrew_W._Mellon_Foundation', 'American_Council_on_Education')
    ('Hollywood_Walk_of_Fame', 'Laura_Ingalls_Wilder_Medal')
    ('Keeping_the_Faith', 'Great_Expectations')
    ('American_Revolutionary_War', 'British_Empire')
    ('Sino-Indian_War', 'Bangladesh_Liberation_War')
    ('Greg_Howe', 'Richie_Kotzen')
    ('A41_road', 'Marble_Arch')

That's a lot of negative instances! In fact, because these negative instances far outnumber our positive instances (that is, the triples in our KB), when we train models we'll wind up downsampling the negative instances substantially.

Remember, though, that some of these supposedly negative instances may be false negatives. Our KB is not complete. A pair of entities might be related in real life even if they don't appear together in the KB.

Multi-label classification

A given pair of entities can belong to more than one relation. In fact, this is quite common in our KB.

In [27]:
def count_relation_combinations(kb):
    counter = Counter()
    for sbj, obj in kb.get_all_entity_pairs():
        rels = tuple(sorted(set([kbt.rel for kbt in kb.get_triples_for_entities(sbj, obj)])))
        if len(rels) > 1:
            counter[rels] += 1
    counts = sorted([(count, key) for key, count in counter.items()], reverse=True)
    print('The most common relation combinations are:')
    for count, key in counts:
        print('{:10d} {}'.format(count, key))
In [28]:
count_relation_combinations(kb)
The most common relation combinations are:
      1526 ('is_a', 'profession')
       495 ('capital', 'contains')
       183 ('place_of_birth', 'place_of_death')
        76 ('nationality', 'place_of_birth')
        11 ('nationality', 'place_of_death')
        11 ('adjoins', 'contains')
         8 ('has_sibling', 'has_spouse')
         3 ('nationality', 'place_of_birth', 'place_of_death')
         2 ('parents', 'worked_at')
         1 ('nationality', 'worked_at')
         1 ('has_spouse', 'parents')
         1 ('author', 'founders')

While a few of those combinations look like data errors, most look natural and intuitive. Multiple relations per entity pair is a commonplace phenomenon.

This observation strongly suggests formulating our prediction problem as multi-label classification. We could instead treat it as multi-class classification — and indeed, Mintz et al. 2009 did so — but if we do, we'll be faced with the problem of assigning a single relation label to entity pairs which actually belong to multiple relations. It's not obvious how best to do this (and Mintz et al. 2009 did not make their method clear).

There are a number of ways to approach multi-label classification, but the most obvious is the binary relevance method, which just factors multi-label classification over n labels into n independent binary classification problems, one for each label. A disadvantage of this approach is that, by treating the binary classification problems as independent, it fails to exploit correlations between labels. But it has the great virtue of simplicity, and it will suffice for our purposes.

So our problem will be to take as input an entity pair and a candidate relation (label), and to return a binary prediction as to whether the entity pair belongs to the relation. Since a KB triple is precisely a relation and a pair of entities, we could say equivalently that our prediction problem amounts to binary classification of KB triples. Given a candidate KB triple, do we predict that it is valid?

Building datasets

We're now in a position to write a function to build datasets suitable for training and evaluating predictive models. It will have the following characteristics:

  • Because we've formulated our problem as multi-label classification, and we'll be training separate models for each relation, we won't build a single dataset. Instead, we'll build a dataset for each relation, and our return value will be a map from relation names to datasets.
  • The dataset for each relation will consist of two parallel lists:
    • A list of candidate KBTriples which combine the given relation with a pair of entities.
    • A corresponding list of boolean labels indicating whether the given KBTriple belongs to the KB.
  • The dataset for each relation will include KBTriples derived from two sources:
    • Positive instances will be drawn from the KB.
    • Negative instances will be sampled from unrelated entity pairs, as described above.
In [29]:
def build_datasets(corpus, kb, include_positive=True, sampling_rate=0.1, seed=1):
    unrelated_pairs = find_unrelated_pairs(corpus, kb)
    random.seed(seed)
    unrelated_pairs = random.sample(unrelated_pairs, int(sampling_rate * len(unrelated_pairs)))
    kbts_by_rel = defaultdict(list)
    labels_by_rel = defaultdict(list)
    for index, rel in enumerate(all_relations):
        if include_positive:
            for kbt in kb.get_triples_for_relation(rel):
                kbts_by_rel[rel].append(kbt)
                labels_by_rel[rel].append(True)
        for sbj, obj in unrelated_pairs:
            kbts_by_rel[rel].append(KBTriple(rel, sbj, obj))
            labels_by_rel[rel].append(False)        
    return kbts_by_rel, labels_by_rel

[ top ]

Evaluation

Before we start building models, let's set up a test harness that allows us to measure a model's performance. This may seem backwards, but it's analogous to the software engineering paradigm of test-driven development: first, define success; then, pursue it.

Splitting the data

Whenever building a model from data, it's good practice to partition the data into a multiple splits — minimally, a training split on which to train the model, and a test split on which to evaluate it. In fact, we'll go a bit further, and define four splits:

  • The tiny split (1%). It's often useful to carve out a tiny chunk of data to use in place of training or test data during development. Of course, any quantitative results obtained by evaluating on the tiny split are nearly meaningless, but because evaluations run extremely fast, using this split is a good way to flush out bugs during iterative cycles of code development.
  • The train split (69%). We'll use the majority of our data for training models, both during development and at final evaluation. Experiments with the train split may take longer to run, but they'll have much greater statistical power.
  • The dev split (15%). We'll use the dev split as test data for intermediate (formative) evaluations during development. During routine experiments, all evaluations should use the dev split.
  • The test split (15%). We'll reserve the test split for our final (summative) evaluation at the conclusion of our work. Running evaluations on the test split before you are ready to conclude your work is methodologically unsound and intellectually dishonest!

Splitting our data assets is somewhat more complicated than in many other NLP problems, because we have both a corpus and KB. In order to minimize leakage of information from training data into test data, we'd like to split both the corpus and the KB. And in order to maximize the value of a finite quantity of data, we'd like to align the corpus splits and KB splits as closely as possible. In an ideal world, each split would have its own hermetically-sealed universe of entities, the corpus for that split would contain only examples mentioning those entities, and the KB for that split would contain only triples involving those entities. However, that ideal is not quite achievable in practice. In order to get as close as possible, we'll follow this plan:

  • First, we'll split the set of entities which appear as the subject in some KB triple.
  • Then, we'll split the set of KB triples based on their subject entity.
  • Finally, we'll split the set of corpus examples.
    • If the first entity in the example has already been assigned to a split, we'll assign the example to the same split.
    • Alternatively, if the second entity has already been assigned to a split, we'll assign the example to the same split.
    • Otherwise, we'll assign the example to a split randomly.

Here's code to implement the splits. It's OK to skip past the details.

In [30]:
def split_corpus_and_kb(
        split_names=['tiny', 'train', 'dev', 'test'],
        split_fracs=[0.01, 0.69, 0.15, 0.15],   
        seed=1):
    if len(split_names) != len(split_fracs):
        raise ValueError('split_names and split_fracs must be of equal length')
    if sum(split_fracs) != 1.0:
        raise ValueError('split_fracs must sum to 1')
    n = len(split_fracs) # for convenience only
    
    def split_list(xs):
        xs = sorted(xs) # sorted for reproducibility
        if seed:
            random.seed(seed)
        random.shuffle(xs)
        split_points = [0] + [int(round(frac * len(xs))) for frac in np.cumsum(split_fracs)]
        return [xs[split_points[i]:split_points[i + 1]] for i in range(n)]
    
    # first, split the entities that appear as subjects in the KB
    sbjs = list(set([kbt.sbj for kbt in kb.get_triples()]))
    sbj_splits = split_list(sbjs)
    sbj_split_dict = dict([(sbj, i) for i, split in enumerate(sbj_splits) for sbj in split])
    
    # next, split the KB triples based on their subjects
    kbt_splits = [[kbt for kbt in kb.get_triples() if sbj_split_dict[kbt.sbj] == i] for i in range(n)]
    
    # now split examples based on the entities they contain
    ex_splits = [[] for i in range(n + 1)] # include an extra split
    for ex in corpus.get_examples():
        if ex.entity_1 in sbj_split_dict:
            # if entity_1 is a sbj in the KB, assign example to split of that sbj
            ex_splits[sbj_split_dict[ex.entity_1]].append(ex)
        elif ex.entity_2 in sbj_split_dict:
            # if entity_2 is a sbj in the KB, assign example to split of that sbj
            ex_splits[sbj_split_dict[ex.entity_2]].append(ex)
        else:
            # otherwise, put in extra split to be redistributed
            ex_splits[-1].append(ex)
    # reallocate the examples that weren't assigned to a split on first pass
    extra_ex_splits = split_list(ex_splits[-1])
    ex_splits = [ex_splits[i] + extra_ex_splits[i] for i in range(n)]
    
    # create a Corpus and a KB for each split
    data = {}
    for i in range(n):
        data[split_names[i]] = {'corpus': Corpus(ex_splits[i]), 'kb': KB(kbt_splits[i])}
    data['all'] = {'corpus': corpus, 'kb': kb}
    return data

Great. Let's use it to create the splits.

In [31]:
data = split_corpus_and_kb(seed=1)
data
Out[31]:
{'all': {'corpus': Corpus with 414123 examples, 'kb': KB with 56575 triples},
 'dev': {'corpus': Corpus with 57241 examples, 'kb': KB with 7939 triples},
 'test': {'corpus': Corpus with 63382 examples, 'kb': KB with 8053 triples},
 'tiny': {'corpus': Corpus with 3458 examples, 'kb': KB with 425 triples},
 'train': {'corpus': Corpus with 290042 examples, 'kb': KB with 40158 triples}}

So now we can use data['train']['corpus'] to refer to the training corpus, or data['dev']['kb'] to refer to the dev KB.

As a convenience, let's add a function for creating datasets for a specific split:

In [32]:
def build_datasets_for_split(split, include_positive=True, sampling_rate=0.1, seed=1):
    return build_datasets(data[split]['corpus'], data[split]['kb'], include_positive, sampling_rate, seed)

Choosing evaluation metrics

Because we've formulated our prediction problem as a family of binary classification problems, one for each relation (label), choosing evaluation metrics is pretty straightforward. The standard metrics for evaluating binary classification are precision and recall, which are more meaningful than simple accuracy, particularly in problems with a highly biased label distribution (like ours). We'll compute and report precision and recall separately for each relation (label). There are only two wrinkles:

  1. How best to combine precision and recall into a single metric. Having two evaluation metrics is often inconvenient. If we're considering a change to our model which improves precision but degrades recall, should we take it? To drive an iterative development process, it's useful to have a single metric on which to hill-climb. For binary classification, the standard answer is the F1-score, which is the harmonic mean of precision and recall. However, the F1-score gives equal weight to precision and recall. For our purposes, precision is probably more important than recall. If we're extracting new relation triples from (massively abundant) text on the web in order to augment a knowledge base, it's probably more important that the triples we extract are correct (precision) than that we extract all the triples we could (recall). Accordingly, instead of the F1-score, we'll use the F0.5-score, which gives precision twice as much weight as recall.

  2. How to aggregate metrics across relations (labels). Reporting metrics separately for each relation is great, but in order to drive iterative development, we'd also like to have summary metrics which aggregate across all relations. There are two possible ways to do it: micro-averaging will give equal weight to all problem instances, and thus give greater weight to relations with more instances, while macro-averaging will give equal weight to all relations, and thus give lesser weight to problem instances in relations with more instances. Because the number of problem instances per relation is, to some degree, an accident of our data collection methodology, we'll choose macro-averaging.

Thus, while every evaluation will report lots of metrics, when we need a single metric on which to hill-climb, it will be the macro-averaged F0.5-score.

Running evaluations

It's time to write some code to run evaluations and report results. This is now straightforward. The evaluate() function takes as inputs:

  • classifier, which is just a function that takes a list of KBTriples and returns a list of boolean predictions;
  • test_split, the split on which to evaluate the classifier, dev by default;
  • verbose, a boolean indicating whether to print output.

The other functions below are just helper functions to evaluate().

In [33]:
def print_statistics_header():
    print('{:20s} {:>10s} {:>10s} {:>10s} {:>10s} {:>10s}'.format(
        'relation', 'precision', 'recall', 'f-score', 'support', 'size'))
    print('{:20s} {:>10s} {:>10s} {:>10s} {:>10s} {:>10s}'.format(
        '-' * 18, '-' * 9, '-' * 9, '-' * 9, '-' * 9, '-' * 9))

def print_statistics_row(rel, result):
    print('{:20s} {:10.3f} {:10.3f} {:10.3f} {:10d} {:10d}'.format(rel, *result))

def print_statistics_footer(avg_result):
    print('{:20s} {:>10s} {:>10s} {:>10s} {:>10s} {:>10s}'.format(
        '-' * 18, '-' * 9, '-' * 9, '-' * 9, '-' * 9, '-' * 9))
    print('{:20s} {:10.3f} {:10.3f} {:10.3f} {:10d} {:10d}'.format('macro-average', *avg_result))

def macro_average_results(results):
    avg_result = [np.average([r[i] for r in results.values()]) for i in range(3)]
    avg_result.append(np.sum([r[3] for r in results.values()]))
    avg_result.append(np.sum([r[4] for r in results.values()]))
    return avg_result
    
def evaluate(classifier, test_split='dev', verbose=True):
    test_kbts_by_rel, true_labels_by_rel = build_datasets_for_split(test_split)
    results = {}
    if verbose:
        print_statistics_header()
    for rel in all_relations:
        pred_labels = classifier(test_kbts_by_rel[rel])
        stats = precision_recall_fscore_support(true_labels_by_rel[rel], pred_labels, beta=0.5)
        stats = [stat[1] for stat in stats]  # stats[1] is the stat for label True
        stats.append(len(pred_labels)) # number of examples
        results[rel] = stats
        if verbose:
            print_statistics_row(rel, results[rel])
    avg_result = macro_average_results(results)
    if verbose:
        print_statistics_footer(avg_result)
    return avg_result[2]  # return f_0.5 score as summary statistic

Evaluating a random-guessing strategy

In order to validate our evaluation framework, and to set a floor under expected results for future evaluations, let's implement and evaluate a random-guessing strategy. The random guesser is a classifier which completely ignores its input, and simply flips a coin.

In [34]:
def lift(f):
    return lambda xs: [f(x) for x in xs]

def make_random_classifier(p=0.50):
    def random_classify(kb_triple):
        return random.random() < p
    return lift(random_classify)
In [35]:
evaluate(make_random_classifier())
relation              precision     recall    f-score    support       size
------------------    ---------  ---------  ---------  ---------  ---------
adjoins                   0.058      0.515      0.070        303       5319
author                    0.088      0.508      0.106        480       5496
capital                   0.019      0.539      0.024         89       5105
contains                  0.349      0.502      0.371       2667       7683
film_performance          0.138      0.491      0.162        822       5838
founders                  0.064      0.482      0.078        359       5375
genre                     0.039      0.608      0.048        166       5182
has_sibling               0.092      0.493      0.109        513       5529
has_spouse                0.110      0.530      0.130        575       5591
is_a                      0.099      0.547      0.119        494       5510
nationality               0.054      0.463      0.066        311       5327
parents                   0.062      0.502      0.075        325       5341
place_of_birth            0.040      0.488      0.049        217       5233
place_of_death            0.028      0.490      0.034        145       5161
profession                0.052      0.563      0.063        245       5261
worked_at                 0.047      0.544      0.058        228       5244
------------------    ---------  ---------  ---------  ---------  ---------
macro-average             0.084      0.517      0.098       7939      88195
Out[35]:
0.09757501010273492

The results are not too surprising. Recall is generally around 0.50, which makes sense: on any given example with label True, we are 50% likely to guess the right label. But precision is very poor, because most labels are not True, and because our classifier is completely ignorant of the features of specific problem instances. Accordingly, the F0.5-score is also very poor — first because even the equally-weighted F1-score is always closer to the lesser of precision and recall, and second because the F0.5-score weights precision twice as much as recall.

Actually, the most remarkable result in this table is the comparatively good performance for the contains relation! What does this result tell us about the data?

[ top ]

A simple baseline model

It shouldn't be too hard to do better than random guessing. But for now, let's aim low — let's use the data we have in the easiest and most obvious way, and see how far that gets us.

We start from the intuition that the words between two entity mentions frequently tell us how they're related. For example, in the phrase "SpaceX was founded by Elon Musk", the words "was founded by" indicate that the founders relation holds between the first entity mentioned and the second. Likewise, in the phrase "Elon Musk established SpaceX", the word "established" indicates the founders relation holds between the second entity mentioned and the first.

So let's write some code to find the most common phrases that appear between the two entity mentions for each relation. As the examples illustrate, we need to make sure to consider both directions: that is, where the subject of the relation appears as the first mention, and where it appears as the second.

In [36]:
def find_common_middles(split='train', top_k=3, show_output=False):
    corpus = data[split]['corpus']
    kb = data[split]['kb']
    mids_by_rel = {
        'fwd': defaultdict(lambda: defaultdict(int)),
        'rev': defaultdict(lambda: defaultdict(int)),
    }
    for rel in all_relations:
        for kbt in kb.get_triples_for_relation(rel):
            for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
                mids_by_rel['fwd'][rel][ex.middle] += 1
            for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
                mids_by_rel['rev'][rel][ex.middle] += 1
    def most_frequent(mid_counter):
        return sorted([(cnt, mid) for mid, cnt in mid_counter.items()], reverse=True)[:top_k]
    for rel in all_relations:
        for dir in ['fwd', 'rev']:
            top = most_frequent(mids_by_rel[dir][rel])
            if show_output:
                for cnt, mid in top:
                    print('{:20s} {:5s} {:10d} {:s}'.format(rel, 'fwd', cnt, mid))
            mids_by_rel[dir][rel] = set([mid for cnt, mid in top])
    return mids_by_rel

_ = find_common_middles(show_output=True)
adjoins              fwd         8461 ,
adjoins              fwd         5633 and
adjoins              fwd          993 , and
adjoins              fwd         5599 ,
adjoins              fwd         3780 and
adjoins              fwd          680 , and
author               fwd         1214 by
author               fwd          155 ,
author               fwd          130 , by
author               fwd         1106 's
author               fwd          294 ‘ s
author               fwd          175 ’ s
capital              fwd           37 ,
capital              fwd           19 in
capital              fwd           18 (
capital              fwd         3711 ,
capital              fwd          178 in
capital              fwd           87 , the capital of
contains             fwd          460 's
contains             fwd          355 ,
contains             fwd          250 (
contains             fwd        25095 ,
contains             fwd         5603 in
contains             fwd          668 in the
film_performance     fwd          286 in
film_performance     fwd          200 's
film_performance     fwd          115 film
film_performance     fwd          213 with
film_performance     fwd          152 , starring
film_performance     fwd          115 opposite
founders             fwd           98 founder
founders             fwd           59 co-founder
founders             fwd           57 ,
founders             fwd          180 's
founders             fwd          104 of
founders             fwd           77 ‘ s
genre                fwd           28 , a
genre                fwd           13 in 1994 , he became a central figure in the
genre                fwd           11 is a
genre                fwd          122 ,
genre                fwd           62 series
genre                fwd           23 
has_sibling          fwd         1369 and
has_sibling          fwd          614 ,
has_sibling          fwd          139 , and
has_sibling          fwd          930 and
has_sibling          fwd          460 ,
has_sibling          fwd           94 , and
has_spouse           fwd         2029 and
has_spouse           fwd          375 ,
has_spouse           fwd          112 and his wife
has_spouse           fwd         1382 and
has_spouse           fwd          271 ,
has_spouse           fwd           78 and his wife
is_a                 fwd          120 ,
is_a                 fwd           77 and
is_a                 fwd           34 , a
is_a                 fwd          252 ,
is_a                 fwd          175 and
is_a                 fwd           81 
nationality          fwd          331 of
nationality          fwd           79 in
nationality          fwd           34 of the
nationality          fwd           57 ,
nationality          fwd           27 by
nationality          fwd           25 under
parents              fwd           77 , son of
parents              fwd           50 and
parents              fwd           44 ,
parents              fwd          177 and
parents              fwd          167 ,
parents              fwd           47 and his son
place_of_birth       fwd           90 of
place_of_birth       fwd           64 was born in
place_of_birth       fwd           37 in
place_of_birth       fwd           17 ,
place_of_birth       fwd           16 by
place_of_birth       fwd           11 under
place_of_death       fwd           73 in
place_of_death       fwd           63 of
place_of_death       fwd           17 at
place_of_death       fwd           12 ,
place_of_death       fwd           10 under
place_of_death       fwd            9 mayor
profession           fwd           65 ,
profession           fwd           27 , a
profession           fwd           20 and
profession           fwd          114 ,
profession           fwd           74 
profession           fwd           24 and
worked_at            fwd          103 of
worked_at            fwd           82 at
worked_at            fwd           66 's
worked_at            fwd           37 ,
worked_at            fwd           30 founder
worked_at            fwd           25 co-founder

A few observations here:

  • Some of the most frequent middles are natural and intuitive. For example, ", son of" indicates a forward parents relation, while "and his son" indicates a reverse parents relation.
  • Punctuation and stop words such as "and" and "of" are extremely common. Unlike some other NLP applications, it's probably a bad idea to throw these away — they carry lots of useful information.
  • However, punctuation and stop words tend to be highly ambiguous. For example, a bare comma is a likely middle for almost every relation in at least one direction.
  • A few of the results reflect quirks of the dataset. For example, the appearance of the phrase "in 1994 , he became a central figure in the" as a common middle for the genre relation reflects both the relative scarcity of examples for that relation, and an unfortunate tendency of the Wikilinks dataset to include duplicate or near-duplicate source documents. (That middle connects the entities Ready to Die — the first studio album by the Notorious B.I.G. — and East Coast hip hop.)
In [37]:
def train_top_k_middles_classifier(train_split='train', top_k=3):
    corpus = data[train_split]['corpus']
    top_k_mids_by_rel = find_common_middles(split=train_split, top_k=top_k)
    def classify(kb_triple):
        fwd_mids = top_k_mids_by_rel['fwd'][kb_triple.rel]
        rev_mids = top_k_mids_by_rel['rev'][kb_triple.rel]
        for ex in corpus.get_examples_for_entities(kb_triple.sbj, kb_triple.obj):
            if ex.middle in fwd_mids:
                return True
        for ex in corpus.get_examples_for_entities(kb_triple.obj, kb_triple.sbj):
            if ex.middle in rev_mids:
                return True
        return False
    return lift(classify)
In [38]:
evaluate(train_top_k_middles_classifier())
relation              precision     recall    f-score    support       size
------------------    ---------  ---------  ---------  ---------  ---------
adjoins                   0.311      0.406      0.327        303       5319
author                    0.212      0.058      0.139        480       5496
capital                   0.087      0.191      0.098         89       5105
contains                  0.494      0.066      0.214       2667       7683
film_performance          0.250      0.002      0.012        822       5838
founders                  0.177      0.061      0.129        359       5375
genre                     0.000      0.000      0.000        166       5182
has_sibling               0.295      0.222      0.277        513       5529
has_spouse                0.359      0.249      0.330        575       5591
is_a                      0.019      0.010      0.016        494       5510
nationality               0.115      0.039      0.083        311       5327
parents                   0.079      0.068      0.077        325       5341
place_of_birth            0.052      0.023      0.042        217       5233
place_of_death            0.011      0.007      0.010        145       5161
profession                0.008      0.008      0.008        245       5261
worked_at                 0.074      0.031      0.058        228       5244
------------------    ---------  ---------  ---------  ---------  ---------
macro-average             0.159      0.090      0.114       7939      88195
Out[38]:
0.11360108865631796

Not surprisingly, the performance of even this extremely simplistic model is noticeably better than random guessing. Of course, recall is much worse across the board, but precision and F0.5-score are sometimes much better. We observe big gains especially on adjoins, author, contains, founders, has_sibling, and has spouse. Then again, at least one relation actually got worse. (Can you offer any explanation for that?)

Admittedly, performance is still not great in absolute terms. However, we should have modest expectations for performance on this task — we are unlikely ever to get anywhere near perfect precision with perfect recall. Why?

  • High precision will be hard to achieve because the KB is incomplete: some entity pairs that are related in the world — and in the corpus — may simply be missing from the KB.
  • High recall will be hard to achieve because the corpus is finite: some entity pairs that are related in the KB may not have any examples in the corpus.

Because of these unavoidable obstacles, what matters is not so much absolute performance, but relative performance of different approaches.

Exercise: What's the optimal value for top_k, the number of most frequent middles to consider? What choice maximizes our chosen figure of merit, the macro-averaged F0.5-score?

[ top ]

Building a classifier

OK, it's time to get (halfway) serious. Let's apply real machine learning to train a classifier on the training data, and see how it performs on the test data. We'll begin with one of the simplest machine learning setups: a bag-of-words feature representation, and a linear model trained using logistic regression.

Just like we did in the unit on supervised sentiment analysis, we'll leverage the sklearn library, and we'll introduce functions for featurizing instances, training models, making predictions, and evaluating results.

Featurizers

Featurizers are functions which define the feature representation for our model. The primary input to a featurizer will be the KBTriple for which we are generating features. But since our features will be derived from corpus examples containing the entities of the KBTriple, we must also pass in a reference to a Corpus. And in order to make it easy to combine different featurizers, we'll also pass in a feature counter to hold the results.

Here's an implementation for a very simple bag-of-words featurizer. It finds all the corpus examples containing the two entities in the KBTriple, breaks the phrase appearing between the two entity mentions into words, and counts the words. Note that it makes no distinction between "forward" and "reverse" examples.

In [39]:
def simple_bag_of_words_featurizer(kbt, corpus, feature_counter):
    for ex in corpus.get_examples_for_entities(kbt.sbj, kbt.obj):
        for word in ex.middle.split(' '):
            feature_counter[word] += 1
    for ex in corpus.get_examples_for_entities(kbt.obj, kbt.sbj):
        for word in ex.middle.split(' '):
            feature_counter[word] += 1

You can experiment with adding new kinds of features just by implementing additional featurizers, following simple_bag_of_words_featurizer as an example.

Now, in order to apply machine learning algorithms such as those provided by sklearn, we need a way to convert datasets of KBTriples into feature matrices. The function featurize_datasets() achieves that. It takes in a collection of KBTriples grouped by relation, and returns a corresponding collection of feature matrices grouped by relation. It also needs a Corpus from which to extract features, and a list of featurizers to generate the features. Finally, it accepts a vectorizer as an optional argument. At training time, we won't supply a vectorizer, so this code will create a new one; at test time, we'll supply the vectorizer we created at training time.

In [40]:
def featurize_datasets(
        kbts_by_rel,
        corpus,
        featurizers=[simple_bag_of_words_featurizer],
        vectorizer=None):
    # Create feature counters for all instances (kbts).
    feat_counters_by_rel = defaultdict(list)
    for rel, kbts in kbts_by_rel.items():
        for kbt in kbts:
            feature_counter = Counter()
            for featurizer in featurizers:
                featurizer(kbt, corpus, feature_counter)
            feat_counters_by_rel[rel].append(feature_counter)
    feat_matrices_by_rel = defaultdict(list)
    # If we haven't been given a Vectorizer, create one and fit it to all the feature counters.
    if vectorizer == None:
        vectorizer = DictVectorizer(sparse=True)
        def traverse_dicts():
            for dict_list in feat_counters_by_rel.values():
                for d in dict_list:
                    yield d
        vectorizer.fit(traverse_dicts())
    # Now use the Vectorizer to transform feature dictionaries into feature matrices.
    for rel, feat_counters in feat_counters_by_rel.items():
        feat_matrices_by_rel[rel] = vectorizer.transform(feat_counters)
    return feat_matrices_by_rel, vectorizer

Experiments

Now we need some functions to train models, make predictions, and evaluate the results. We'll start with train_models(). This function takes as arguments a data split on which to train, a list of featurizers, and model factory, which is a function which initializes an sklearn classifier. It returns a dictionary holding the featurizers, the vectorizer that was used to generate the training matrix, and a dictionary holding the trained models, one per relation.

In [41]:
def train_models(
        split='train',
        featurizers=[simple_bag_of_words_featurizer],
        model_factory=lambda: LogisticRegression(fit_intercept=True),
        verbose=True):
    if verbose: print('Building datasets')
    train_o, train_y = build_datasets_for_split(split=split)
    if verbose: print('Featurizing')
    train_X, vectorizer = featurize_datasets(train_o, data[split]['corpus'], featurizers)
    models = {}
    if verbose: print('Training models')
    for rel in all_relations:
        models[rel] = model_factory()
        models[rel].fit(train_X[rel], train_y[rel])
    if verbose: print('Training complete\n')
    return {
        'featurizers': featurizers,
        'vectorizer': vectorizer,
        'models': models,
    }        

Next comes predict(). This function takes as arguments a test split, a list of featurizers, the vectorizer that was used during training, and a dictionary holding the models, one per relation. It returns two parallel dictionaries: one holding the predictions (grouped by relation), the other holding the true labels (again, grouped by prediction).

In [42]:
def predict(split, featurizers, vectorizer, models):
    test_o, test_y = build_datasets_for_split(split=split)
    test_X, _ = featurize_datasets(test_o, data[split]['corpus'], featurizers, vectorizer=vectorizer)
    predictions = {}
    for rel in all_relations:
        predictions[rel] = models[rel].predict(test_X[rel])
    return predictions, test_y

Now evaluate_predictions(). This function takes as arguments the parallel dictionaries of predictions and true labels produced by predict(). It prints summary statistics for each relation, including precision, recall, and F0.5-score, and it returns the macro-averaged F0.5-score.

In [43]:
def evaluate_predictions(predictions, test_y, verbose=True):
    results = {}  # one result row for each relation
    if verbose:
        print_statistics_header()
    for rel in all_relations:
        stats = precision_recall_fscore_support(test_y[rel], predictions[rel], beta=0.5)
        stats = [stat[1] for stat in stats]  # stats[1] is the stat for label True
        stats.append(len(test_y[rel]))
        results[rel] = stats
        if verbose:
            print_statistics_row(rel, results[rel])
    avg_result = macro_average_results(results)
    if verbose:
        print_statistics_footer(avg_result)
    return avg_result[2]  # return f_0.5 score as summary statistic

Finally, we introduce experiment(), which simply chains together train_models(), predict(), and evaluate_predictions(). For convenience, this function returns the output of train_models() as its result.

In [44]:
def experiment(
        train_split='train',
        test_split='dev',
        featurizers=[simple_bag_of_words_featurizer],
        model_factory=lambda: LogisticRegression(fit_intercept=True),
        verbose=True):
    train_result = train_models(train_split, featurizers, model_factory, verbose)
    predictions, test_y = predict(test_split,
                                  featurizers,
                                  train_result['vectorizer'],
                                  train_result['models'])
    evaluate_predictions(predictions, test_y, verbose)
    return train_result

Running experiment() in its default configuration will give us a baseline result for machine-learned models.

In [45]:
_ = experiment()
# _ = experiment(train_split='tiny', test_split='tiny') # better for rapid development
Building datasets
Featurizing
Training models
Training complete

relation              precision     recall    f-score    support       size
------------------    ---------  ---------  ---------  ---------  ---------
adjoins                   0.874      0.459      0.740        303       5319
author                    0.830      0.558      0.756        480       5496
capital                   0.677      0.236      0.493         89       5105
contains                  0.752      0.615      0.720       2667       7683
film_performance          0.820      0.580      0.757        822       5838
founders                  0.837      0.387      0.679        359       5375
genre                     0.619      0.235      0.467        166       5182
has_sibling               0.870      0.234      0.563        513       5529
has_spouse                0.900      0.362      0.694        575       5591
is_a                      0.710      0.223      0.494        494       5510
nationality               0.584      0.145      0.363        311       5327
parents                   0.877      0.569      0.791        325       5341
place_of_birth            0.759      0.203      0.490        217       5233
place_of_death            0.621      0.124      0.345        145       5161
profession                0.639      0.159      0.399        245       5261
worked_at                 0.740      0.250      0.532        228       5244
------------------    ---------  ---------  ---------  ---------  ---------
macro-average             0.757      0.334      0.580       7939      88195

Considering how vanilla our model is, these results are quite surprisingly good! We see huge gains for every relation over our top_k_middles_classifier. This strong performance is a powerful testament to the effectiveness of even the simplest forms of machine learning.

But there is still much more we can do. To make further gains, we must not treat the model as a black box. We must open it up and get visibility into what it has learned, and more importantly, where it still falls down.

Examining the trained models

One important way to gain understanding of our trained model is to inspect the model weights. What features are strong positive indicators for each relation, and what features are strong negative indicators?

In [46]:
def examine_model_weights(
        train_split='train',
        featurizers=[simple_bag_of_words_featurizer],
        model_factory=lambda: LogisticRegression(fit_intercept=True),
        k=3,
        verbose=True):
    train_result = train_models(train_split, featurizers, model_factory, verbose)
    feature_names = train_result['vectorizer'].get_feature_names()
    for rel, model in train_result['models'].items():
        print('Highest and lowest feature weights for relation {}:\n'.format(rel))
        sorted_weights = sorted([(wgt, idx) for idx, wgt in enumerate(model.coef_[0])], reverse=True)
        for wgt, idx in sorted_weights[:k]:
            print('{:10.3f} {}'.format(wgt, feature_names[idx]))
        print('{:>10s} {}'.format('.....', '.....'))
        for wgt, idx in sorted_weights[-k:]:
            print('{:10.3f} {}'.format(wgt, feature_names[idx]))
        print()
In [47]:
examine_model_weights()
# examine_model_weights(train_split='tiny') # better for rapid development
Building datasets
Featurizing
Training models
Training complete

Highest and lowest feature weights for relation adjoins:

     2.523 Córdoba
     2.255 Taluks
     2.001 nearby
     ..... .....
    -1.193 an
    -1.338 Egypt
    -1.484 Caribbean

Highest and lowest feature weights for relation author:

     3.329 author
     2.548 poem
     2.463 wrote
     ..... .....
    -2.312 directed
    -2.563 controversial
    -4.219 1945

Highest and lowest feature weights for relation capital:

     3.833 capital
     1.749 headquarters
     1.735 towns
     ..... .....
    -1.635 during
    -1.751 includes
    -1.963 Westminster

Highest and lowest feature weights for relation contains:

     2.517 third-largest
     2.434 Channel
     2.308 districts
     ..... .....
    -2.316 rise
    -3.798 Ceylon
    -4.005 occupation

Highest and lowest feature weights for relation film_performance:

     4.141 starring
     3.631 alongside
     3.419 opposite
     ..... .....
    -1.874 Khakee
    -1.994 Westminster
    -3.850 Mohabbatein

Highest and lowest feature weights for relation founders:

     3.969 founder
     3.888 founded
     2.649 company
     ..... .....
    -1.682 William
    -1.893 writing
    -1.998 Griffith

Highest and lowest feature weights for relation genre:

     3.504 
     3.184 series
     2.783 album
     ..... .....
    -1.387 and
    -1.465 Playhouse
    -1.801 at

Highest and lowest feature weights for relation has_sibling:

     5.262 brother
     4.058 sister
     3.076 Marlon
     ..... .....
    -1.463 starring
    -1.959 formed
    -2.062 Her

Highest and lowest feature weights for relation has_spouse:

     5.185 wife
     4.286 husband
     4.216 married
     ..... .....
    -1.634 engineer
    -2.341 Straus
    -2.341 Isidor

Highest and lowest feature weights for relation is_a:

     4.043 stage
     3.838 
     2.800 theatre
     ..... .....
    -1.485 Texas
    -1.557 at
    -5.974 characin

Highest and lowest feature weights for relation nationality:

     2.586 born
     2.035 -born
     1.933 President
     ..... .....
    -1.584 or
    -1.790 foreign
    -1.900 state

Highest and lowest feature weights for relation parents:

     5.083 daughter
     4.963 son
     4.377 father
     ..... .....
    -1.374 need
    -1.477 no
    -2.788 Indian

Highest and lowest feature weights for relation place_of_birth:

     3.774 born
     2.971 mayor
     2.432 -born
     ..... .....
    -1.371 and
    -1.447 or
    -1.762 Indian

Highest and lowest feature weights for relation place_of_death:

     2.745 died
     2.228 assassinated
     1.930 Germany
     ..... .....
    -1.180 Belgium
    -1.433 state
    -1.953 Westminster

Highest and lowest feature weights for relation profession:

     4.132 
     2.762 American
     2.272 philosopher
     ..... .....
    -1.423 Texas
    -1.425 on
    -1.439 from

Highest and lowest feature weights for relation worked_at:

     3.425 professor
     2.792 president
     2.773 CEO
     ..... .....
    -1.507 confluence
    -1.613 state
    -1.718 or

By and large, the high-weight features for each relation are pretty intuitive — they are words that are used to express the relation in question. (The counter-intuitive results merit a bit of investigation!)

The low-weight features (that is, features with large negative weights) may be a bit harder to understand. In some cases, however, they can be interpreted as features which indicate some other relation which is anti-correlated with the target relation. (As an example, "directed" is a negative indicator for the author relation.)

Exercise: Investigate one of the counter-intuitive high-weight features. Find the training examples which caused the feature to be included. Given the training data, does it make sense that this feature is a good predictor for the target relation?

Discovering new relation instances

Another way to gain insight into our trained models is to use them to discover new relation instances that don't currently appear in the KB. In fact, this is the whole point of building a relation extraction system: to extend an existing KB (or build a new one) using knowledge extracted from natural language text at scale. Can the models we've trained do this effectively?

Because the goal is to discover new relation instances which are true but absent from the KB, we can't evalute this capability automatically. But we can generate candidate KB triples and manually evaluate them for correctness.

To do this, we'll start from corpus examples containing pairs of entities which do not belong to any relation in the KB (earlier, we described these as "negative examples"). We'll then apply our trained models to each pair of entities, and sort the results by probability assigned by the model, in order to find the most likely new instances for each relation.

In [48]:
def find_new_relation_instances(
        train_split='train',
        test_split='dev',
        featurizers=[simple_bag_of_words_featurizer],
        model_factory=lambda: LogisticRegression(fit_intercept=True),
        k=10,
        verbose=True):

    # train models
    train_result = train_models(train_split, featurizers, model_factory, verbose)

    # build datasets for negative instances only
    neg_o, neg_y = build_datasets_for_split(test_split, include_positive=False, sampling_rate=1.0)
    neg_X, _ = featurize_datasets(neg_o,
                                  data[test_split]['corpus'],
                                  featurizers,
                                  train_result['vectorizer'])

    # report highest confidence predictions
    for rel, model in train_result['models'].items():
        print('Highest probability examples for relation {}:\n'.format(rel))
        probs = model.predict_proba(neg_X[rel])
        probs = [prob[1] for prob in probs] # probability for class True
        sorted_probs = sorted([(p, idx) for idx, p in enumerate(probs)], reverse=True)
        for p, idx in sorted_probs[:k]:
            print('{:10.3f} {}'.format(p, neg_o[rel][idx]))
        print()
In [49]:
find_new_relation_instances()
# find_new_relation_instances(train_split='tiny', test_split='tiny') # for rapid development
Building datasets
Featurizing
Training models
Training complete

Highest probability examples for relation adjoins:

     1.000 KBTriple(rel='adjoins', sbj='Sun', obj='Moon')
     1.000 KBTriple(rel='adjoins', sbj='Moon', obj='Sun')
     1.000 KBTriple(rel='adjoins', sbj='India', obj='Maharashtra')
     1.000 KBTriple(rel='adjoins', sbj='Maharashtra', obj='India')
     1.000 KBTriple(rel='adjoins', sbj='Europe', obj='Great_Britain')
     1.000 KBTriple(rel='adjoins', sbj='Great_Britain', obj='Europe')
     1.000 KBTriple(rel='adjoins', sbj='Isle_of_Wight', obj='Ryde')
     1.000 KBTriple(rel='adjoins', sbj='Ryde', obj='Isle_of_Wight')
     1.000 KBTriple(rel='adjoins', sbj='Uttar_Pradesh', obj='India')
     1.000 KBTriple(rel='adjoins', sbj='India', obj='Uttar_Pradesh')

Highest probability examples for relation author:

     1.000 KBTriple(rel='author', sbj='Systema_Naturae', obj='Carl_Linnaeus')
     1.000 KBTriple(rel='author', sbj='The_Doors_of_Perception', obj='Aldous_Huxley')
     1.000 KBTriple(rel='author', sbj='Aldous_Huxley', obj='The_Doors_of_Perception')
     1.000 KBTriple(rel='author', sbj='Carl_Linnaeus', obj='Systema_Naturae')
     1.000 KBTriple(rel='author', sbj='Charlie_and_the_Chocolate_Factory', obj='Roald_Dahl')
     1.000 KBTriple(rel='author', sbj='Roald_Dahl', obj='Charlie_and_the_Chocolate_Factory')
     1.000 KBTriple(rel='author', sbj='Stephen_Hawking', obj='A_Brief_History_of_Time')
     1.000 KBTriple(rel='author', sbj='A_Brief_History_of_Time', obj='Stephen_Hawking')
     1.000 KBTriple(rel='author', sbj='Neil_Gaiman', obj='American_Gods')
     1.000 KBTriple(rel='author', sbj='American_Gods', obj='Neil_Gaiman')

Highest probability examples for relation capital:

     1.000 KBTriple(rel='capital', sbj='Italy', obj='Rome')
     1.000 KBTriple(rel='capital', sbj='Rome', obj='Italy')
     1.000 KBTriple(rel='capital', sbj='Isle_of_Wight', obj='Ryde')
     1.000 KBTriple(rel='capital', sbj='Ryde', obj='Isle_of_Wight')
     1.000 KBTriple(rel='capital', sbj='India', obj='Maharashtra')
     1.000 KBTriple(rel='capital', sbj='Maharashtra', obj='India')
     1.000 KBTriple(rel='capital', sbj='Chernobyl_Nuclear_Power_Plant', obj='Ukraine')
     1.000 KBTriple(rel='capital', sbj='Ukraine', obj='Chernobyl_Nuclear_Power_Plant')
     1.000 KBTriple(rel='capital', sbj='Blarney', obj='Republic_of_Ireland')
     1.000 KBTriple(rel='capital', sbj='Republic_of_Ireland', obj='Blarney')

Highest probability examples for relation contains:

     1.000 KBTriple(rel='contains', sbj='Italy', obj='Rome')
     1.000 KBTriple(rel='contains', sbj='Uttar_Pradesh', obj='India')
     1.000 KBTriple(rel='contains', sbj='Isle_of_Wight', obj='Ryde')
     1.000 KBTriple(rel='contains', sbj='India', obj='Maharashtra')
     1.000 KBTriple(rel='contains', sbj='Roman_Empire', obj='Rome')
     1.000 KBTriple(rel='contains', sbj='Rome', obj='Italy')
     1.000 KBTriple(rel='contains', sbj='India', obj='Uttar_Pradesh')
     1.000 KBTriple(rel='contains', sbj='Rome', obj='Roman_Empire')
     1.000 KBTriple(rel='contains', sbj='Ryde', obj='Isle_of_Wight')
     1.000 KBTriple(rel='contains', sbj='Maharashtra', obj='India')

Highest probability examples for relation film_performance:

     1.000 KBTriple(rel='film_performance', sbj='Hong_Kong', obj='Shanghai_Noon')
     1.000 KBTriple(rel='film_performance', sbj='Shanghai_Noon', obj='Hong_Kong')
     1.000 KBTriple(rel='film_performance', sbj='Francis_Ford_Coppola', obj='Robin_Williams')
     1.000 KBTriple(rel='film_performance', sbj='Robin_Williams', obj='Francis_Ford_Coppola')
     1.000 KBTriple(rel='film_performance', sbj='The_Pink_Panther_2', obj='Harald_Zwart')
     1.000 KBTriple(rel='film_performance', sbj='Harald_Zwart', obj='The_Pink_Panther_2')
     1.000 KBTriple(rel='film_performance', sbj='Salman_Khan', obj='Tere_Naam')
     1.000 KBTriple(rel='film_performance', sbj='Tere_Naam', obj='Salman_Khan')
     1.000 KBTriple(rel='film_performance', sbj='Gia', obj='Angelina_Jolie')
     1.000 KBTriple(rel='film_performance', sbj='Angelina_Jolie', obj='Gia')

Highest probability examples for relation founders:

     1.000 KBTriple(rel='founders', sbj='L._Ron_Hubbard', obj='Church_of_Scientology')
     1.000 KBTriple(rel='founders', sbj='Church_of_Scientology', obj='L._Ron_Hubbard')
     1.000 KBTriple(rel='founders', sbj='Insect', obj='Lepidoptera')
     1.000 KBTriple(rel='founders', sbj='Lepidoptera', obj='Insect')
     1.000 KBTriple(rel='founders', sbj='Illuminati', obj='Adam_Weishaupt')
     1.000 KBTriple(rel='founders', sbj='Adam_Weishaupt', obj='Illuminati')
     1.000 KBTriple(rel='founders', sbj='Austria', obj='Gaston_Glock')
     1.000 KBTriple(rel='founders', sbj='Gaston_Glock', obj='Austria')
     1.000 KBTriple(rel='founders', sbj='Sri_Lanka', obj='Matale_District')
     1.000 KBTriple(rel='founders', sbj='Matale_District', obj='Sri_Lanka')

Highest probability examples for relation genre:

     1.000 KBTriple(rel='genre', sbj='Cartoon_Cartoons', obj="Dexter's_Laboratory")
     1.000 KBTriple(rel='genre', sbj="Dexter's_Laboratory", obj='Cartoon_Cartoons')
     1.000 KBTriple(rel='genre', sbj='All_We_Know_Is_Falling', obj='Taylor_York')
     1.000 KBTriple(rel='genre', sbj='Taylor_York', obj='All_We_Know_Is_Falling')
     1.000 KBTriple(rel='genre', sbj='Lanner_falcon', obj='Falcon')
     1.000 KBTriple(rel='genre', sbj='Falcon', obj='Lanner_falcon')
     0.998 KBTriple(rel='genre', sbj='Meg_Griffin', obj='Family_Guy')
     0.998 KBTriple(rel='genre', sbj='Family_Guy', obj='Meg_Griffin')
     0.982 KBTriple(rel='genre', sbj='Tattoo_artist', obj='Tattoo_artist')
     0.977 KBTriple(rel='genre', sbj='Tunch_Ilkin', obj='Tunch_Ilkin')

Highest probability examples for relation has_sibling:

     1.000 KBTriple(rel='has_sibling', sbj='Ishmael', obj='Abraham')
     1.000 KBTriple(rel='has_sibling', sbj='Abraham', obj='Ishmael')
     1.000 KBTriple(rel='has_sibling', sbj='Sergey_Brin', obj='Larry_Page')
     1.000 KBTriple(rel='has_sibling', sbj='Larry_Page', obj='Sergey_Brin')
     1.000 KBTriple(rel='has_sibling', sbj='Isaac', obj='Abraham')
     1.000 KBTriple(rel='has_sibling', sbj='Abraham', obj='Isaac')
     1.000 KBTriple(rel='has_sibling', sbj='Jamie_Lee_Curtis', obj='Janet_Leigh')
     1.000 KBTriple(rel='has_sibling', sbj='Janet_Leigh', obj='Jamie_Lee_Curtis')
     1.000 KBTriple(rel='has_sibling', sbj='Karl_Marx', obj='Friedrich_Engels')
     1.000 KBTriple(rel='has_sibling', sbj='Friedrich_Engels', obj='Karl_Marx')

Highest probability examples for relation has_spouse:

     1.000 KBTriple(rel='has_spouse', sbj='Sergey_Brin', obj='Larry_Page')
     1.000 KBTriple(rel='has_spouse', sbj='Larry_Page', obj='Sergey_Brin')
     1.000 KBTriple(rel='has_spouse', sbj='Isidor_Straus', obj='Denver')
     1.000 KBTriple(rel='has_spouse', sbj='Denver', obj='Isidor_Straus')
     1.000 KBTriple(rel='has_spouse', sbj='Karl_Marx', obj='Friedrich_Engels')
     1.000 KBTriple(rel='has_spouse', sbj='Friedrich_Engels', obj='Karl_Marx')
     1.000 KBTriple(rel='has_spouse', sbj='Rajiv_Gandhi', obj='Indira_Gandhi')
     1.000 KBTriple(rel='has_spouse', sbj='Indira_Gandhi', obj='Rajiv_Gandhi')
     0.999 KBTriple(rel='has_spouse', sbj='Anne_Boleyn', obj='Elizabeth_I_of_England')
     0.999 KBTriple(rel='has_spouse', sbj='Elizabeth_I_of_England', obj='Anne_Boleyn')

Highest probability examples for relation is_a:

     1.000 KBTriple(rel='is_a', sbj='Insect', obj='Lepidoptera')
     1.000 KBTriple(rel='is_a', sbj='Lepidoptera', obj='Insect')
     1.000 KBTriple(rel='is_a', sbj='Apidae', obj='Bee')
     1.000 KBTriple(rel='is_a', sbj='Bee', obj='Apidae')
     1.000 KBTriple(rel='is_a', sbj='Odonata', obj='Insect')
     1.000 KBTriple(rel='is_a', sbj='Insect', obj='Odonata')
     1.000 KBTriple(rel='is_a', sbj='Malvaceae', obj='Hibiscus')
     1.000 KBTriple(rel='is_a', sbj='Hibiscus', obj='Malvaceae')
     1.000 KBTriple(rel='is_a', sbj='Okra', obj='Malvaceae')
     1.000 KBTriple(rel='is_a', sbj='Malvaceae', obj='Okra')

Highest probability examples for relation nationality:

     1.000 KBTriple(rel='nationality', sbj='Sri_Lanka', obj='Matale_District')
     1.000 KBTriple(rel='nationality', sbj='Matale_District', obj='Sri_Lanka')
     1.000 KBTriple(rel='nationality', sbj='North_Island', obj='New_Zealand')
     1.000 KBTriple(rel='nationality', sbj='New_Zealand', obj='North_Island')
     1.000 KBTriple(rel='nationality', sbj='Systema_Naturae', obj='Carl_Linnaeus')
     1.000 KBTriple(rel='nationality', sbj='Carl_Linnaeus', obj='Systema_Naturae')
     1.000 KBTriple(rel='nationality', sbj='Insect', obj='Lepidoptera')
     1.000 KBTriple(rel='nationality', sbj='Lepidoptera', obj='Insect')
     1.000 KBTriple(rel='nationality', sbj='California', obj='San_Francisco_Bay_Area')
     1.000 KBTriple(rel='nationality', sbj='San_Francisco_Bay_Area', obj='California')

Highest probability examples for relation parents:

     1.000 KBTriple(rel='parents', sbj='Ishmael', obj='Abraham')
     1.000 KBTriple(rel='parents', sbj='Isaac', obj='Abraham')
     1.000 KBTriple(rel='parents', sbj='Kim_Jong-il', obj='Kim_Jong-un')
     1.000 KBTriple(rel='parents', sbj='Kim_Jong-un', obj='Kim_Jong-il')
     1.000 KBTriple(rel='parents', sbj='Abraham', obj='Isaac')
     1.000 KBTriple(rel='parents', sbj='Abraham', obj='Ishmael')
     1.000 KBTriple(rel='parents', sbj='Anne_Boleyn', obj='Elizabeth_I_of_England')
     1.000 KBTriple(rel='parents', sbj='Elizabeth_I_of_England', obj='Anne_Boleyn')
     1.000 KBTriple(rel='parents', sbj='Louis_the_Pious', obj='Charlemagne')
     1.000 KBTriple(rel='parents', sbj='Charlemagne', obj='Louis_the_Pious')

Highest probability examples for relation place_of_birth:

     1.000 KBTriple(rel='place_of_birth', sbj='Sri_Lanka', obj='Matale_District')
     1.000 KBTriple(rel='place_of_birth', sbj='Matale_District', obj='Sri_Lanka')
     1.000 KBTriple(rel='place_of_birth', sbj='North_Island', obj='New_Zealand')
     1.000 KBTriple(rel='place_of_birth', sbj='New_Zealand', obj='North_Island')
     1.000 KBTriple(rel='place_of_birth', sbj='Illinois', obj='United_States_Senate')
     1.000 KBTriple(rel='place_of_birth', sbj='United_States_Senate', obj='Illinois')
     0.998 KBTriple(rel='place_of_birth', sbj='California', obj='San_Francisco_Bay_Area')
     0.998 KBTriple(rel='place_of_birth', sbj='San_Francisco_Bay_Area', obj='California')
     0.988 KBTriple(rel='place_of_birth', sbj='Pangasinan', obj='Philippines')
     0.988 KBTriple(rel='place_of_birth', sbj='Philippines', obj='Pangasinan')

Highest probability examples for relation place_of_death:

     1.000 KBTriple(rel='place_of_death', sbj='Systema_Naturae', obj='Carl_Linnaeus')
     1.000 KBTriple(rel='place_of_death', sbj='Carl_Linnaeus', obj='Systema_Naturae')
     1.000 KBTriple(rel='place_of_death', sbj='Ishmael', obj='Abraham')
     1.000 KBTriple(rel='place_of_death', sbj='Abraham', obj='Ishmael')
     1.000 KBTriple(rel='place_of_death', sbj='Sri_Lanka', obj='Matale_District')
     1.000 KBTriple(rel='place_of_death', sbj='Matale_District', obj='Sri_Lanka')
     1.000 KBTriple(rel='place_of_death', sbj='North_Island', obj='New_Zealand')
     1.000 KBTriple(rel='place_of_death', sbj='New_Zealand', obj='North_Island')
     0.999 KBTriple(rel='place_of_death', sbj='Chernobyl_Nuclear_Power_Plant', obj='Ukraine')
     0.999 KBTriple(rel='place_of_death', sbj='Ukraine', obj='Chernobyl_Nuclear_Power_Plant')

Highest probability examples for relation profession:

     1.000 KBTriple(rel='profession', sbj='Eyeless_in_Gaza', obj='Aldous_Huxley')
     1.000 KBTriple(rel='profession', sbj='Aldous_Huxley', obj='Eyeless_in_Gaza')
     0.999 KBTriple(rel='profession', sbj='Hispania', obj='Spain')
     0.999 KBTriple(rel='profession', sbj='Spain', obj='Hispania')
     0.996 KBTriple(rel='profession', sbj='Screenwriter', obj='Actor')
     0.996 KBTriple(rel='profession', sbj='Actor', obj='Screenwriter')
     0.995 KBTriple(rel='profession', sbj='Tunch_Ilkin', obj='Tunch_Ilkin')
     0.995 KBTriple(rel='profession', sbj='Blog_award', obj='Blog_award')
     0.995 KBTriple(rel='profession', sbj='Physicist', obj='Nikola_Tesla')
     0.995 KBTriple(rel='profession', sbj='Guitarist', obj='Robby_Krieger')

Highest probability examples for relation worked_at:

     1.000 KBTriple(rel='worked_at', sbj='Sri_Lanka', obj='Matale_District')
     1.000 KBTriple(rel='worked_at', sbj='Matale_District', obj='Sri_Lanka')
     1.000 KBTriple(rel='worked_at', sbj='North_Island', obj='New_Zealand')
     1.000 KBTriple(rel='worked_at', sbj='New_Zealand', obj='North_Island')
     1.000 KBTriple(rel='worked_at', sbj='Insect', obj='Lepidoptera')
     1.000 KBTriple(rel='worked_at', sbj='Lepidoptera', obj='Insect')
     1.000 KBTriple(rel='worked_at', sbj='Austria', obj='Gaston_Glock')
     1.000 KBTriple(rel='worked_at', sbj='Gaston_Glock', obj='Austria')
     1.000 KBTriple(rel='worked_at', sbj='L._Ron_Hubbard', obj='Church_of_Scientology')
     1.000 KBTriple(rel='worked_at', sbj='Church_of_Scientology', obj='L._Ron_Hubbard')

There are actually some good discoveries here! The predictions for the author relation seem especially good. Of course, there are also plenty of bad results, and a few that are downright comical. We may hope that as we improve our models and optimize performance in our automatic evaluations, the results we observe in this manual evaluation improve as well.

Exercise: Note that every time we predict that a given relation holds between entities X and Y, we also predict, with equal confidence, that it holds between Y and X. Why? How could we fix this?

[ top ]

Next steps

Our current model is quite rudimentary — it's merely a starting point for further exploration. This section lists a number of suggestions for next steps. Pursue whatever ideas look most promising! Your immediate goal is to optimize macro-averaged F0.5-score — but don't blinker yourself. Consider other ways of evaluating performance, and remember that the ultimate goal is to extract new relational triples.

Experimental methodology

  • Write code that facilitates error analysis — that is, code that enables you to inspect, analyze, and categorize specific classification errors. Error analysis is the best avenue to understanding how to improve your model.
  • Our code for building datasets provides the ability to downsample negative examples, and we've used a default sampling rate of 10%. Is that the best choice? What are the consequences of sampling at a higher or lower rate?

Feature representation

  • Add a feature that indicates the length of the middle.
  • Augment the bag-of-words representation to include bigrams or trigrams (not just unigrams).
  • Introduce features based on the entity mentions themselves.
  • Experiment with features based on the context outside (rather than between) the two entity mentions — that is, the words before the first mention, or after the second.
  • Try adding features which capture syntactic information, such as the dependency-path features used by Mintz et al. 2009. The NLTK toolkit contains a variety of parsing algorithms that may help.
  • The bag-of-words representation does not permit generalization across word categories such as names of people, places, or companies. Can we do better using word embeddings such as GloVe?

Model selection

  • The LogisticRegression model in sklearn does L2 regularization by default. Try using L1 regularization instead.
  • Whether you're using L1 or L2 regularization, you may get better results by tuning the regularization parameter.
  • Experiment with different model types. sklearn makes this very easy: it provides implementations for everything from elastic nets to SVMs to gradient boosting.
  • Explore ways to predict the relations that hold between a given pair of entities jointly, instead of independently, in order to exploit the correlations between relations. This could be done using an ensemble or hierarchial model, or a neural architecture.
  • Over the last few years, neural sequence models such as LSTMs have yielded dramatic gains on many NLP tasks. Investigate whether they help here.

[ top ]

Homework 3

The purpose of this homework is to begin exploring larger, more diverse feature space, to figure out which ones lead to improved classifiers.

As a reminder, our baseline classifier (using simple_bag_of_words_featurizer looks like this) when run on the dev set with the standard model_factory.

In [50]:
baseline = experiment(featurizers=[simple_bag_of_words_featurizer])
Building datasets
Featurizing
Training models
Training complete

relation              precision     recall    f-score    support       size
------------------    ---------  ---------  ---------  ---------  ---------
adjoins                   0.874      0.459      0.740        303       5319
author                    0.830      0.558      0.756        480       5496
capital                   0.677      0.236      0.493         89       5105
contains                  0.752      0.615      0.720       2667       7683
film_performance          0.820      0.580      0.757        822       5838
founders                  0.837      0.387      0.679        359       5375
genre                     0.619      0.235      0.467        166       5182
has_sibling               0.870      0.234      0.563        513       5529
has_spouse                0.900      0.362      0.694        575       5591
is_a                      0.710      0.223      0.494        494       5510
nationality               0.584      0.145      0.363        311       5327
parents                   0.877      0.569      0.791        325       5341
place_of_birth            0.759      0.203      0.490        217       5233
place_of_death            0.621      0.124      0.345        145       5161
profession                0.639      0.159      0.399        245       5261
worked_at                 0.740      0.250      0.532        228       5244
------------------    ---------  ---------  ---------  ---------  ---------
macro-average             0.757      0.334      0.580       7939      88195

Questions 1–3: Directional unigram features [3 points]

The current bag-of-words representation makes no distinction between "forward" and "reverse" examples. But, intuitively, there is big difference between X and his son Y and Y and his son X. This question asks you to modify simple_bag_of_words_featurizer to capture these differences.

To submit:

  1. A feature function directional_bag_of_words_featurizer that is just like simple_bag_of_words_featurizer except that it distinguishes "forward" and "reverse". To do this, you just need to mark each word feature for whether it is derived from a subject–object example or from an object–subject example. The precise nature of the mark you add for the two cases doesn't make a difference to the model.

  2. The macro-average F-score on the dev set that you obtain from running experiment with directional_bag_of_words_featurizer as the only featurizer. (Aside from this, use all the default values for experiment.)

  3. experiment returns some of the core objects used in the experiment. How many feature names does the vectorizer have for the experiment run in the previous step? (Note: we're partly asking you to figure out how to get this value by using the sklearn documentation, so please don't ask how to do it on Piazza!)

Questions 4–5: The part-of-speech tags of the "middle" words [3 points]

Our corpus distribution contains part-of-speech (POS) tagged versions of the core text spans. Let's begin to explore whether there is information in these sequences, focusing on middle_POS.

To submit:

  1. A feature function middle_bigram_pos_tag_featurizer that is just like simple_bag_of_words_featurizer except that it creates a feature for bigram POS sequences. For example, given

    The/DT dog/N napped/V

    we obtain the list of bigram POS sequences

    ['<s> DT', 'DT N', 'N V', 'V </s>'].

    Don't forget the start and end tags, to model those environments properly!

  2. The macro-average F-score on the dev set that you obtain from running experiment with middle_bigram_pos_tag_featurizer as the only featurizer. (Aside from this, use all the default values for experiment.)

Note: To parse middle_POS, one splits on whitespace to get the word/TAG pairs. Each of these pairs s can be parsed with s.rsplit('/', 1).

Questions 6–7: Bag of Synsets [3 points]

The following allows you to use NLTK's WordNet API to get the synsets compatible with dog as used as a noun:

from nltk.corpus import wordnet as wn
dog = wn.synsets('dog', pos='n')

This question asks you to create synset-based features from the word/tag pairs in middle_POS.

To convert the tags in the corpus to WordNet tags:

Tag begins with WordNet pos value
N 'n'
V 'v'
J 'a'
R 'r'
Otherwise None

To submit:

  1. A feature function synset_featurizer that is just like simple_bag_of_words_featurizer except that it creates a count dictionary where the keys are synsets, as derived from the unigrams in the middle_POS field using wn.synsets. Stringify the synsets with str so that they can be dict keys. Use the table above to convert tags to pos arguments usable by wn.synsets.

  2. The macro-average F-score on the dev set that you obtain from running experiment with synset_featurizer as the only featurizer. (Aside from this, use all the default values for experiment.)

Questions 8–9: Bringing them all together [2 points]

Run experiment with all directional_bag_of_words_featurizer, middle_bigram_pos_tag_featurizer, and synset_featurizer together as the featurizers. Let's see if all this work paid off in terms of raw peformance!

To submit:

  1. The macro-average F-score on the dev set that you obtain from running this experiment. (Aside from featurizers, use all the default values for experiment.)

  2. The number of feature names contained in the vectorizer for this experiment.

Note: You'll see that this is a very large model. We want you to submit with the default value for model_factory, but it is worth trying variants that are more suitable for these inputs – penalty="l1" for LogisticRegression is a good starting point, as it will give 0 weight to many uninformative features.

[ top ]

Bake-off

The goal of the bake-off for this unit is very simple: to achieve the best macro-averaged F0.5-score on the test split. The Next steps section suggested a number of possible strategies for improving the baseline model, and Homework 3 explores several more. But of course these are by no means exhaustive. You can surely come up with lots of additional ideas. The sky is the limit!

There's only one strict rule here: you must not evaluate on the test split until you ready to report your bake-off results. All evaluations during development must be on the dev split. To do otherwise would be methodologically unsound and intellectually dishonest!

Your bake-off submission should include:

  • Your macro-averaged F0.5-score on the test split.
  • A brief description of the strategies you employed to achieve this result.

Submission URL: https://goo.gl/forms/ohqzpnHMwHIT7f642

[ top ]