# DeepDive Tutorial: Extracting mentions of spouses from the news

In this tutorial, we show an example of a prototypical task that DeepDive is often applied to: extraction of structured information from unstructured or 'dark' data such as web pages, text documents, images, etc. While DeepDive can be used as a more general platform for statistical learning and data processing, most of the tooling described herein has been built for this type of use case, based on our experience of successfully applying DeepDive to a variety of real-world problems of this type.

In this setting, our goal is to take in a set of unstructured (and/or structured) inputs, and populate a relational database table with extracted outputs, along with marginal probabilities for each extraction representing DeepDive's confidence in the extraction. More formally, we write a DeepDive application to extract mentions of relations and their constituent entities or attributes, according to a specified schema; this task is often referred to as relation extraction.* Accordingly, we'll walk through an example scenario where we wish to extract mentions of two people being spouses from news articles.

The high-level steps we'll follow are:

1. Data processing. First, we'll load the raw corpus, add NLP markups, extract a set of candidate relation mentions, and a sparse feature representation of each.

2. Distant supervision with data and rules. Next, we'll use various strategies to provide supervision for our dataset, so that we can use machine learning to learn the weights of a model.

3. Learning and inference: model specification. Then, we'll specify the high-level configuration of our model.

4. Error analysis and debugging. Finally, we'll show how to use DeepDive's labeling, error analysis and debugging tools.

*Note the distinction between extraction of true, i.e., factual, relations and extraction of mentions of relations. In this tutorial, we do the latter; however, DeepDive supports further downstream methods for tackling the former task in a principled manner.

Whenever something isn't clear, you can always refer to the complete example code at examples/spouse/ that contains everything shown in this document.

## 0. Preparation

### 0.0. Installing DeepDive and tweaking the notebook

First of all, let's make sure DeepDive is installed and can be used from this notebook. See the DeepDive installation guide for more details.

In [1]:
# PATH needs correct setup to use DeepDive
import os; PWD=os.getcwd(); HOME=os.environ["HOME"]; PATH=os.environ["PATH"]
# home directory installation
%env PATH=$HOME/local/bin:$PATH
# notebook-local installation
%env PATH=$PWD/deepdive/bin:$PATH

!type deepdive
no_deepdive_found = !type deepdive >/dev/null
if no_deepdive_found: # install it next to this notebook
    !bash -c 'PREFIX="$PWD"/deepdive bash <(curl -fsSL git.io/getdeepdive) deepdive_from_release'

env: PATH=/home/jovyan/local/bin:/opt/conda/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
env: PATH=/ConfinedWater/deepdive-examples/spouse/deepdive/bin:/opt/conda/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
deepdive is /usr/local/bin/deepdive


We need to make sure this IPython/Jupyter notebook will work correctly with DeepDive:

In [2]:
# check if notebook kernel was launched in a Unicode locale
import locale; LC_CTYPE = locale.getpreferredencoding()
if LC_CTYPE != "UTF-8":
    raise EnvironmentError("Notebook is running in '%s' encoding not compatible with DeepDive's Unicode output.\n\nPlease restart notebook in a UTF-8 locale with a command like the following:\n\n    LC_ALL=en_US.UTF-8 jupyter notebook" % (LC_CTYPE))


### 0.1. Declaring what to predict

Before anything else, we tell DeepDive what we want to predict as a random variable, in a language called DDlog, stored in a file app.ddlog:

In [3]:
%%file app.ddlog
## Random variable to predict #################################################

# This application's goal is to predict whether a given pair of person mentions
# indicates a spouse relationship or not.
has_spouse?(
    p1_id text,
    p2_id text
).

Overwriting app.ddlog


In this notebook, we are going to write our application in this app.ddlog one part at a time. We can check whether the code makes sense by asking DeepDive to compile it. DeepDive automatically compiles our application whenever we execute things after making changes, but we can also do this manually by running:

In [4]:
!deepdive compile

‘run/compiled’ -> ‘20161105/171411.425517242’


### 0.2. Setting up a database

Next, DeepDive will store all data (input, intermediate, output, etc.) in a relational database. Currently, Postgres and Greenplum are supported. For operating at a larger scale, Greenplum is strongly recommended.
To set the location of this database, we need to configure a URL in the db.url file, e.g.:

In [5]:
!echo 'postgresql://'"${PGHOST:-localhost}"'/deepdive_spouse_$USER' >db.url


If you have no running database yet, the following commands can quickly bring up a new PostgreSQL server to be used with DeepDive, storing all data at run/database/postgresql next to this notebook.

In [6]:
no_database_running = !deepdive db is_ready || echo $?
if no_database_running:
    PGDATA = "run/database/postgresql"
    !mkdir -p $PGDATA; test -s $PGDATA/PG_VERSION || pg_ctl init -D $PGDATA >/dev/null
    !nohup pg_ctl -D $PGDATA -l $PGDATA/logfile start >/dev/null


Note: DeepDive will drop and then create this database if run from scratch, so beware of pointing it at an existing, populated one!

In [7]:
!deepdive redo init/app

‘run/RUNNING’ -> ‘20161105/171412.988492903’
2016-11-05 17:14:13.099080 process/init/app/run.sh
‘run/FINISHED’ -> ‘20161105/171412.988492903’


## 1. Data processing

In this section, we'll generate the traditional inputs of a statistical learning-type problem: candidate spouse relations, represented by a set of features, which we will aim to classify as actual relation mentions or not.

We'll do this in four basic steps:

1. Loading raw input data
2. Adding NLP markups
3. Extracting candidate relation mentions
4. Extracting features for each candidate

### 1.1. Loading raw input data

Our first task is to download and load the raw text of a corpus of news articles provided by Signal Media into an articles table in our database. Keeping the identifier of each article and its content in the table is good enough. We tell DeepDive to do this by declaring the schema of this articles table in our app.ddlog file; we add the following lines:

In [8]:
%%file -a app.ddlog

## Input Data #################################################################
articles(
    id      text,
    content text
).

Appending to app.ddlog


DeepDive can use a script's output as a data source for loading data into the table if we follow a simple naming convention. We create a simple shell script at input/articles.tsj.sh that outputs the news articles in TSJ format (tab-separated JSONs) from the downloaded corpus.

In [9]:
!mkdir -p input

In [10]:
%%file input/articles.tsj.sh
#!/usr/bin/env bash
set -euo pipefail
cd "$(dirname "$0")"

corpus=signalmedia/signalmedia-1m.jsonl
[[ -e "$corpus" ]] || {
    echo "ERROR: Missing $PWD/$corpus"
    echo "# Please Download it from http://research.signalmedia.co/newsir16/signal-dataset.html"
    echo
    echo "# Alternatively, use our sampled data by running:"
    echo "deepdive load articles input/articles-100.tsv.bz2"
    echo
    echo "# Or, skipping all NLP markup processes by running:"
    echo "deepdive create table sentences"
    echo "deepdive load sentences"
    echo "deepdive mark done sentences"
    false
} >&2

cat "$corpus" |
#grep -E 'wife|husband|married' |
#head -100 |
jq -r '[.id, .content] | map(@json) | join("\t")'

Overwriting input/articles.tsj.sh


We need to mark the script as executable so DeepDive can actually execute it:

In [11]:
!chmod +x input/articles.tsj.sh


The script reads the corpus (provided as lines of JSON objects), then uses the jq language to extract the fields id (the article identifier) and content from each entry and format them into TSJ. We can uncomment the grep or head lines in between to apply a naive filter and subsample the articles.

Now, we tell DeepDive to execute the steps to load the articles table using the input/articles.tsj.sh script. You must have the full corpus downloaded at input/signalmedia/signalmedia-1m.jsonl for the following to finish correctly.

In [12]:
!deepdive redo articles

app.ddlog: updated since last deepdive compile
‘run/compiled’ -> ‘20161105/171415.746380177’
‘run/RUNNING’ -> ‘20161105/171416.578993662’
2016-11-05 17:14:16.708589 process/init/relation/articles/run.sh
2016-11-05 17:14:16.708348 ################################################################################
2016-11-05 17:14:16.708439 # Host: b7ea137f8e52
2016-11-05 17:14:16.708456 # DeepDive: deepdive v0.8.0-742-g4b812a1 (Linux x86_64)
2016-11-05 17:14:16.708467 export PATH='/usr/local/bin':"$PATH"
2016-11-05 17:14:16.708477 export DEEPDIVE_PWD='/ConfinedWater/deepdive-examples/spouse'
2016-11-05 17:14:16.708486 export DEEPDIVE_APP='/ConfinedWater/deepdive-examples/spouse'
2016-11-05 17:14:16.708495 cd "$DEEPDIVE_APP"/run
2016-11-05 17:14:16.708504 export DEEPDIVE_RUN_ID='20161105/171416.578993662'
2016-11-05 17:14:16.708524 # Plan: 20161105/171416.578993662/plan.sh
2016-11-05 17:14:16.708535 # Targets: articles
2016-11-05 17:14:16.708543 ################################################################################
2016-11-05 17:14:16.708551
2016-11-05 17:14:16.708570 # process/init/app/run.sh ####################################### last done: 2016-11-05T17:14:14+0000 (2s ago)
2016-11-05 17:14:16.708589 process/init/relation/articles/run.sh ############################### last done: N/A
2016-11-05 17:14:16.708599 ++ dirname process/init/relation/articles/run.sh
2016-11-05 17:14:16.708615 + cd process/init/relation/articles
2016-11-05 17:14:16.708624 + export DEEPDIVE_CURRENT_PROCESS_NAME=process/init/relation/articles
2016-11-05 17:14:16.708633 + DEEPDIVE_CURRENT_PROCESS_NAME=process/init/relation/articles
2016-11-05 17:14:16.708651 + deepdive create table articles
2016-11-05 17:14:17.031058 CREATE TABLE
2016-11-05 17:14:17.032028 + deepdive load articles
2016-11-05 17:14:17.320186 Loading articles from input/articles.tsj.sh (tsj format)
2016-11-05 17:14:17.323291 ERROR: Missing /ConfinedWater/deepdive-examples/spouse/input/signalmedia/signalmedia-1m.jsonl
2016-11-05 17:14:17.323435 # Please Download it from http://research.signalmedia.co/newsir16/signal-dataset.html
2016-11-05 17:14:17.323478
2016-11-05 17:14:17.323507 # Alternatively, use our sampled data by running:
2016-11-05 17:14:17.323537 deepdive load articles input/articles-100.tsv.bz2
2016-11-05 17:14:17.323568
2016-11-05 17:14:17.323596 # Or, skipping all NLP markup processes by running:
2016-11-05 17:14:17.323624 deepdive create table sentences
2016-11-05 17:14:17.323658 deepdive load sentences
2016-11-05 17:14:17.323680 deepdive mark done sentences
2016-11-05 17:14:17.459342 COPY 0
‘run/ABORTED’ -> ‘20161105/171416.578993662’


Alternatively, a sample of 100 or 1000
articles can be downloaded from GitHub and loaded into DeepDive with the following commands:

In [13]:
NUM_ARTICLES = 100
ARTICLES_FILE = "articles-%d.tsj.bz2" % NUM_ARTICLES
articles_not_done = !deepdive done articles || date
if articles_not_done:
    !cd input && curl -RLO "https://github.com/HazyResearch/deepdive/raw/master/examples/spouse/input/"$ARTICLES_FILE
    !deepdive reload articles input/$ARTICLES_FILE

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   171  100   171    0     0    352      0 --:--:-- --:--:-- --:--:--   352
100  135k  100  135k    0     0   128k      0  0:00:01  0:00:01 --:--:--  574k
CREATE TABLE
Loading articles from input/articles-100.tsj.bz2 (tsj format)
COPY 100
ANALYZE


After DeepDive finishes creating the table and then fetching and loading the data, we can take a look at the loaded data using the following deepdive query command, which enumerates the values of the id column of the articles table:

In [14]:
!deepdive query '|10 ?- articles(id, _).'

                  id
--------------------------------------
 ba44d0cd-bff2-4875-8036-86f37419b5e7
 c5f8a528-cc0f-4f3e-aaef-b9e3b6b00325
 0d07e617-00d4-4866-aee2-0ae197ae366f
 ebcd41ea-e5b4-43a4-9e16-4406d81cfcda
 7516303b-0db5-477d-9e5d-243a73865e39
 f6e047d0-e409-42a6-ab0e-13ab926719a6
 15d53efb-2151-4164-aee0-cae51faedeeb
 fe6e8fcc-1128-4410-923d-f05c42174336
 8b31ede3-0f3b-431a-86a3-342ee18cfd83
 4336860e-fa87-4f54-b3ce-b4afb72c4acd
(10 rows)


### 1.2. Adding NLP markups

Next, we'll use Stanford's CoreNLP natural language processing (NLP) system to add useful markups and structure to our input data. This step will split our articles into sentences and their component tokens (roughly, the words). Additionally, we'll get lemmas (normalized word forms), part-of-speech (POS) tags, named-entity recognition (NER) tags, and a dependency parse of each sentence. Let's first declare the output schema of this step in app.ddlog:

In [15]:
%%file -a app.ddlog

## NLP markup #################################################################
sentences(
    doc_id         text,
    sentence_index int,
    tokens         json,
    lemmas         json,
    pos_tags       json,
    ner_tags       json,
    doc_offsets    json,
    dep_types      json,
    dep_tokens     json
).
Appending to app.ddlog


Next, we declare a DDlog function which takes in the doc_id and content for an article and returns rows conforming to the sentences schema we just declared, using the user-defined function (UDF) in udf/nlp_markup.sh. We specify that this nlp_markup function should be run over each row from articles, and the output appended to sentences:

In [16]:
%%file -a app.ddlog

function nlp_markup over (
        doc_id  text,
        content text
    ) returns rows like sentences
    implementation "udf/nlp_markup.sh" handles tsj lines.

sentences += nlp_markup(doc_id, content) :-
    articles(doc_id, content).

Appending to app.ddlog


This UDF udf/nlp_markup.sh is a Bash script which uses our own wrapper around CoreNLP.

In [17]:
!mkdir -p udf

In [18]:
%%file udf/nlp_markup.sh
#!/usr/bin/env bash
# Parse documents in tab-separated JSONs input stream with CoreNLP
#
# $ deepdive corenlp install
# $ deepdive corenlp start
# $ deepdive env udf/nlp_markup.sh
# $ deepdive corenlp stop
##
set -euo pipefail
cd "$(dirname "$0")"

# some configuration knobs for CoreNLP
: ${CORENLP_PORT:=$(deepdive corenlp unique-port)}
# a CoreNLP server started ahead of time is shared across parallel UDF processes
# See: http://stanfordnlp.github.io/CoreNLP/annotators.html
: ${CORENLP_ANNOTATORS:="
tokenize
ssplit
pos
ner
lemma
depparse
"}
export CORENLP_PORT
export CORENLP_ANNOTATORS

# make sure CoreNLP server is available
deepdive corenlp is-running || {
    echo >&2 "PLEASE MAKE SURE YOU HAVE RUN: deepdive corenlp start"
    false
}

# parse input with CoreNLP and output a row for every sentence
deepdive corenlp parse-tsj docid+ content=nlp -- docid nlp |
deepdive corenlp sentences-tsj docid content:nlp \
    -- docid nlp.{index,tokens.{word,lemma,pos,ner,characterOffsetBegin}} \
       nlp.collapsed-dependencies.{dep_type,dep_token}

Overwriting udf/nlp_markup.sh


Again, we mark it as executable for DeepDive to run it:

In [19]:
!chmod +x udf/nlp_markup.sh
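As an aside, the TSJ format that these commands exchange is simple enough to sketch in a few lines of Python: each line is a sequence of JSON-encoded values joined by tabs. This is only an illustration of the format under that definition; it is not DeepDive's own codec.

```python
import json

def to_tsj(*values):
    # encode each column as JSON, then join the columns with a literal tab
    return "\t".join(json.dumps(v) for v in values)

def from_tsj(line):
    # tabs inside strings are escaped as \t by JSON, so splitting is safe
    return [json.loads(column) for column in line.split("\t")]

line = to_tsj("doc-42", "Content with a\ttab and \"quotes\".")
assert from_tsj(line) == ["doc-42", "Content with a\ttab and \"quotes\"."]
```

Because every embedded tab and newline is escaped by the JSON encoding, one line always corresponds to exactly one row.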


Before executing this NLP markup step, we need to launch the CoreNLP server, which may take a while to install and load everything. Note that the CoreNLP library requires Java 8 to run.

In [20]:
!deepdive corenlp install
# If CoreNLP seems to take forever to start, give it more memory with the following line, then retry:
%env CORENLP_JAVAOPTS=-Xmx4g
!deepdive corenlp start

CoreNLP already installed at /deepdive/lib/stanford-corenlp/corenlp
env: CORENLP_JAVAOPTS=-Xmx4g
CoreNLP server at CORENLP_PORT=24393 starting...
To stop it after final use, run: deepdive corenlp stop
To watch its log, run: deepdive corenlp watch-log

In [21]:
!deepdive redo sentences

app.ddlog: updated since last deepdive compile
‘run/compiled’ -> ‘20161105/171516.511996312’
‘run/RUNNING’ -> ‘20161105/171518.022314534’
2016-11-05 17:15:18.174210 process/ext_sentences_by_nlp_markup/run.sh
2016-11-05 17:15:51.256532 deepdive mark 'done' data/sentences
‘run/FINISHED’ -> ‘20161105/171518.022314534’


Now, if we take a look at a sample of the NLP markups, they will have tokens and NER tags that look like the following:

In [22]:
%%bash
deepdive query '
doc_id, index, tokens, ner_tags | 5
?- sentences(doc_id, index, tokens, lemmas, pos_tags, ner_tags, _, _, _).
'

                doc_id                | index |                                                                                                     tokens                                                                                                     |                                                                ner_tags
--------------------------------------+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------
8b31ede3-0f3b-431a-86a3-342ee18cfd83 |     0 | ["Just","what","Sherlock","needed","after","his","relapse",":","to","come","face-to-face","with","his","daddy","issues","."]                                                                                   | ["O","O","PERSON","O","O","O","O","O","O","O","O","O","O","O","O","O"]
8b31ede3-0f3b-431a-86a3-342ee18cfd83 |     1 | ["When","Elementary","returns","this","fall",",","Sherlock","-LRB-","Jonny","Lee","Miller","-RRB-","will","be","dealing","with","the","aftermath","of","his","relapse","in","last","season","'s","finale","."] | ["O","O","O","DATE","DATE","O","PERSON","O","PERSON","PERSON","PERSON","O","O","O","O","O","O","O","O","O","O","O","O","O","O","O","O"]
8b31ede3-0f3b-431a-86a3-342ee18cfd83 |     2 | ["One","of","the","consequences","?"]                                                                                                                                                                          | ["NUMBER","O","O","O","O"]
8b31ede3-0f3b-431a-86a3-342ee18cfd83 |     3 | ["His","father","Morland","Holmes",",","played","by","John","Noble",",","is","coming","to","New","York","to","check","up","on","his","son","."]                                                                | ["O","O","PERSON","PERSON","O","O","O","PERSON","PERSON","O","O","O","O","LOCATION","LOCATION","O","O","O","O","O","O","O"]
8b31ede3-0f3b-431a-86a3-342ee18cfd83 |     4 | ["Morland","is","an","international","consultant","who","has","a","lot","of","power","and","has","amassed","a","considerable","fortune","."]                                                                   | ["PERSON","O","O","O","O","O","O","O","O","O","O","O","O","O","O","O","O","O"]
(5 rows)



### 1.3. Extracting candidate relation mentions

#### Mentions of people

Once again we first declare the schema:

In [23]:
%%file -a app.ddlog

## Candidate mapping ##########################################################
person_mention(
    mention_id     text,
    mention_text   text,
    doc_id         text,
    sentence_index int,
    begin_index    int,
    end_index      int
).

Appending to app.ddlog


We will store each person mention as a row referencing a sentence, with beginning and ending token indexes. Next, we declare a function that references a UDF and takes the sentence's tokens and NER tags as input:

In [24]:
%%file -a app.ddlog

function map_person_mention over (
        doc_id         text,
        sentence_index int,
        tokens         text[],
        ner_tags       text[]
    ) returns rows like person_mention
    implementation "udf/map_person_mention.py" handles tsj lines.

Appending to app.ddlog


We'll write a simple UDF in Python that tags spans of contiguous tokens with the NER tag PERSON as person mentions (i.e., we'll essentially rely on CoreNLP's NER module). Note that we've already used a Bash script as a UDF, and indeed any programming language can be used. (DeepDive will just execute the script via the interpreter specified on its first line, e.g., #!/usr/bin/env python.) However, DeepDive provides some convenient utilities for Python UDFs which handle all IO encoding/decoding. To write our UDF udf/map_person_mention.py, we'll start by specifying that our UDF handles TSJ lines (as specified in the DDlog above). Additionally, we'll specify the exact type schema of both input and output, which DeepDive will check for us:

In [25]:
%%file udf/map_person_mention.py
#!/usr/bin/env python
from deepdive import *

@tsj_extractor
@returns(lambda
        mention_id     = "text",
        mention_text   = "text",
        doc_id         = "text",
        sentence_index = "int",
        begin_index    = "int",
        end_index      = "int",
    :[])
def extract(
        doc_id         = "text",
        sentence_index = "int",
        tokens         = "text[]",
        ner_tags       = "text[]",
    ):
    """
    Finds phrases that are continuous words tagged with PERSON.
    """
    num_tokens = len(ner_tags)
    # find all first indexes of series of tokens tagged as PERSON
    first_indexes = (i for i in xrange(num_tokens)
                     if ner_tags[i] == "PERSON" and (i == 0 or ner_tags[i-1] != "PERSON"))
    for begin_index in first_indexes:
        # find the end of the PERSON phrase (consecutive tokens tagged as PERSON)
        end_index = begin_index + 1
        while end_index < num_tokens and ner_tags[end_index] == "PERSON":
            end_index += 1
        end_index -= 1
        # generate a mention identifier
        mention_id = "%s_%d_%d_%d" % (doc_id, sentence_index, begin_index, end_index)
        mention_text = " ".join(map(lambda i: tokens[i], xrange(begin_index, end_index + 1)))
        # Output a tuple for each PERSON phrase
        yield [
            mention_id,
            mention_text,
            doc_id,
            sentence_index,
            begin_index,
            end_index,
        ]

Overwriting udf/map_person_mention.py

In [26]:
!chmod +x udf/map_person_mention.py


Above, we write a simple function which extracts and tags all subsequences of tokens having the NER tag "PERSON". Note that the extract function must be a generator (i.e., use a yield statement to return output rows).
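To see the span-finding logic in isolation, here is a self-contained restatement in plain Python 3 (range in place of Python 2's xrange; the function name person_spans is ours, not part of the UDF):

```python
def person_spans(ner_tags):
    """Return (begin_index, end_index) pairs for maximal runs of PERSON tags."""
    spans = []
    num_tokens = len(ner_tags)
    # first indexes of each maximal run of consecutive PERSON tags
    first_indexes = (i for i in range(num_tokens)
                     if ner_tags[i] == "PERSON" and (i == 0 or ner_tags[i - 1] != "PERSON"))
    for begin_index in first_indexes:
        end_index = begin_index + 1
        while end_index < num_tokens and ner_tags[end_index] == "PERSON":
            end_index += 1
        spans.append((begin_index, end_index - 1))  # inclusive end, as in the UDF
    return spans

print(person_spans(["O", "PERSON", "PERSON", "O", "PERSON"]))  # [(1, 2), (4, 4)]
```

Adjacent PERSON tokens collapse into one span, which is why "Jonny Lee Miller" above becomes a single three-token mention.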

Finally, we specify that the function will be applied to rows from the sentences table and append to the person_mention table:

In [27]:
%%file -a app.ddlog

person_mention += map_person_mention(
    doc_id, sentence_index, tokens, ner_tags
) :-
    sentences(doc_id, sentence_index, tokens, _, _, ner_tags, _, _, _).

Appending to app.ddlog


Again, to run, just compile and execute as in previous steps:

In [28]:
!deepdive redo person_mention

app.ddlog: updated since last deepdive compile
‘run/compiled’ -> ‘20161105/171552.714502406’
‘run/RUNNING’ -> ‘20161105/171553.779649233’
2016-11-05 17:15:53.959601 process/ext_person_mention_by_map_person_mention/run.sh
2016-11-05 17:15:55.686093 deepdive mark 'done' data/person_mention
‘run/FINISHED’ -> ‘20161105/171553.779649233’

In [29]:
%%bash
deepdive query '
name, doc, sentence, begin, end | 20
?- person_mention(p_id, name, doc, sentence, begin, end).
'

       name       |                 doc                  | sentence | begin | end
------------------+--------------------------------------+----------+-------+-----
Sherlock         | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        0 |     2 |   2
Sherlock         | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        1 |     6 |   6
Jonny Lee Miller | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        1 |     8 |  10
Morland Holmes   | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        3 |     2 |   3
John Noble       | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        3 |     7 |   8
Morland          | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        4 |     0 |   0
Rob Doherty      | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        5 |    27 |  28
Sherlock         | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        8 |     1 |   1
Mega Buzz        | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        9 |     6 |   7
Holmes           | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |       10 |     5 |   5
Morland          | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |       10 |    21 |  21
Sherlock         | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |       10 |    27 |  27
Tony             | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        0 |    17 |  17
Jessie Mueller   | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        0 |    19 |  20
Mueller          | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        1 |     0 |   0
Abby             | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        1 |     4 |   4
Carole King      | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        1 |    14 |  15
Abby Mueller     | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        2 |     0 |   1
Abby Mueller     | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        4 |    13 |  14
Jessie           | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        5 |    11 |  11
(20 rows)



#### Mentions of spouses (pairs of people)

Next, we'll take all pairs of non-overlapping person mentions that co-occur in a sentence with fewer than 5 people total, and consider these as the set of potential ('candidate') spouse mentions. We thus filter out sentences with large numbers of people for the purposes of this tutorial; these could be included if desired. Again, to start, we declare the schema for our spouse_candidate table, which holds just the two names and the two person_mention IDs referred to:

In [30]:
%%file -a app.ddlog

spouse_candidate(
    p1_id   text,
    p1_name text,
    p2_id   text,
    p2_name text
).

Appending to app.ddlog


Next, for this operation we don't use any UDF script; instead, we rely entirely on DDlog operations. We simply construct a table of per-sentence person counts, and then do a join with our filtering conditions. In DDlog this looks like:

In [31]:
%%file -a app.ddlog

num_people(doc_id, sentence_index, COUNT(p)) :-
    person_mention(p, _, doc_id, sentence_index, _, _).

spouse_candidate(p1, p1_name, p2, p2_name) :-
    num_people(same_doc, same_sentence, num_p),
    person_mention(p1, p1_name, same_doc, same_sentence, p1_begin, _),
    person_mention(p2, p2_name, same_doc, same_sentence, p2_begin, _),
    num_p < 5,
    p1 < p2,
    p1_name != p2_name,
    p1_begin != p2_begin.

Appending to app.ddlog
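To make the semantics of these rules concrete, the same join and filters can be sketched in plain Python over a few hypothetical in-memory mention tuples (DeepDive actually runs this as SQL inside the database; the tuples below are made up for illustration):

```python
from collections import Counter

# (mention_id, name, doc_id, sentence_index, begin_index)
mentions = [
    ("d1_0_2_2", "Sherlock",   "d1", 0, 2),
    ("d1_0_5_6", "John Noble", "d1", 0, 5),
    ("d1_0_9_9", "Sherlock",   "d1", 0, 9),
]

# num_people(doc_id, sentence_index, COUNT(p))
num_people = Counter((doc, sent) for _, _, doc, sent, _ in mentions)

candidates = sorted(
    (p1, n1, p2, n2)
    for (p1, n1, d1, s1, b1) in mentions
    for (p2, n2, d2, s2, b2) in mentions
    if (d1, s1) == (d2, s2)       # mentions from the same sentence
    and num_people[(d1, s1)] < 5  # skip sentences crowded with people
    and p1 < p2                   # avoid symmetric duplicate pairs
    and n1 != n2                  # distinct names
    and b1 != b2                  # distinct positions
)
print(candidates)
```

Note how the repeated "Sherlock" mention never pairs with itself: the p1_name != p2_name condition drops that pair, exactly as in the DDlog rule.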


Now, let's tell DeepDive to run what we have so far:

In [32]:
!deepdive redo spouse_candidate

app.ddlog: updated since last deepdive compile
‘run/compiled’ -> ‘20161105/171556.664236290’
‘run/RUNNING’ -> ‘20161105/171557.725553271’
2016-11-05 17:15:57.944202 process/ext_num_people/run.sh
2016-11-05 17:15:58.155046 deepdive mark 'done' data/num_people
2016-11-05 17:15:58.185821 process/ext_spouse_candidate/run.sh
2016-11-05 17:15:58.377589 deepdive mark 'done' data/spouse_candidate
‘run/FINISHED’ -> ‘20161105/171557.725553271’

In [33]:
%%bash
deepdive query '
name1, name2, doc, sentence | 20
?- spouse_candidate(p1, name1, p2, name2),
person_mention(p1, _, doc, sentence, _, _).
'

       name1       |       name2       |                 doc                  | sentence
-------------------+-------------------+--------------------------------------+----------
Sherlock          | Jonny Lee Miller  | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        1
Morland Holmes    | John Noble        | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |        3
Sherlock          | Holmes            | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |       10
Morland           | Holmes            | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |       10
Morland           | Sherlock          | 8b31ede3-0f3b-431a-86a3-342ee18cfd83 |       10
Tony              | Jessie Mueller    | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        0
Carole King       | Abby              | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        1
Mueller           | Abby              | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        1
Mueller           | Carole King       | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        1
Mueller           | Abby Mueller      | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        7
Jessie            | Abby Mueller      | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        7
Jessie            | Mueller           | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        7
Jill Shellabarger | Matt              | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        8
Roger Mueller     | Matt              | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        8
Jill Shellabarger | Andrew            | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        8
Roger Mueller     | Andrew            | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        8
Matt              | Andrew            | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        8
Roger Mueller     | Jill Shellabarger | 9b28e780-ba48-4a53-8682-7c58c141a1b6 |        8
Khoury            | Greg Medcraft     | ebcd41ea-e5b4-43a4-9e16-4406d81cfcda |       34
Dame Joan Collins | Jackie            | df13cc43-53fd-4f09-9a7e-d69b12a4adc0 |        0
(20 rows)



### 1.4. Extracting features for each candidate

Finally, we will extract a set of features for each candidate:

In [34]:
%%file -a app.ddlog

## Feature Extraction #########################################################

# Feature extraction (using DDLIB via a UDF) at the relation level
spouse_feature(
    p1_id   text,
    p2_id   text,
    feature text
).

Appending to app.ddlog


The goal here is to represent each spouse candidate mention by a set of attributes, or features, which capture at least the key aspects of the mention, and then let a machine learning model learn how much each feature correlates with our decision variable ('is this a spouse mention?'). For those who have worked with machine learning systems before, note that we are using a sparse storage representation: you could think of a spouse candidate (p1_id, p2_id) as being represented by a vector of length L = COUNT(DISTINCT feature), consisting of all zeros except at the indexes specified by the rows with key (p1_id, p2_id).
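This sparse representation can be made concrete with a toy sketch; the feature rows below are hypothetical, not real DDlib output:

```python
# hypothetical spouse_feature rows: (p1_id, p2_id, feature)
rows = [
    ("m1", "m2", "W_LEMMA_L_1_R_1_[elder]_['s]"),
    ("m1", "m2", "W_NER_L_1_R_1_[O]_[O]"),
    ("m3", "m4", "W_LEMMA_L_2_R_1_[the elder]_['s]"),
    ("m3", "m4", "W_NER_L_1_R_1_[O]_[O]"),
]

# global feature index of length L = COUNT(DISTINCT feature)
features = sorted({f for _, _, f in rows})
index = {f: i for i, f in enumerate(features)}

def dense_vector(p1_id, p2_id):
    # all zeros except at the indexes of this candidate's features
    v = [0] * len(features)
    for a, b, f in rows:
        if (a, b) == (p1_id, p2_id):
            v[index[f]] = 1
    return v

print(dense_vector("m1", "m2"))  # [1, 0, 1]
```

With millions of candidates and features, storing only the nonzero (key, feature) rows is far cheaper than materializing these dense vectors.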

DeepDive includes an automatic feature generation library, DDlib, which we will use here. Although many state-of-the-art applications have been built using purely DDlib-generated features, others can be used and/or added as well. To use DDlib, we create a list of ddlib.Word objects, two ddlib.Span objects, and then use the function get_generic_features_relation, as shown in the following Python code for udf/extract_spouse_features.py:

In [35]:
%%file udf/extract_spouse_features.py
#!/usr/bin/env python
from deepdive import *
import ddlib

@tsj_extractor
@returns(lambda
        p1_id   = "text",
        p2_id   = "text",
        feature = "text",
    :[])
def extract(
        p1_id          = "text",
        p2_id          = "text",
        p1_begin_index = "int",
        p1_end_index   = "int",
        p2_begin_index = "int",
        p2_end_index   = "int",
        doc_id         = "text",
        sent_index     = "int",
        tokens         = "text[]",
        lemmas         = "text[]",
        pos_tags       = "text[]",
        ner_tags       = "text[]",
        dep_types      = "text[]",
        dep_parents    = "int[]",
    ):
    """
    Uses DDLIB to generate features for the spouse relation.
    """
    # Create a DDLIB sentence object, which is just a list of DDLIB Word objects
    sent = []
    for i,t in enumerate(tokens):
        sent.append(ddlib.Word(
            begin_char_offset=None,
            end_char_offset=None,
            word=t,
            lemma=lemmas[i],
            pos=pos_tags[i],
            ner=ner_tags[i],
            dep_par=dep_parents[i] - 1,  # Note that as stored from CoreNLP 0 is ROOT, but for DDLIB -1 is ROOT
            dep_label=dep_types[i]))

    # Create DDLIB Spans for the two person mentions
    p1_span = ddlib.Span(begin_word_id=p1_begin_index, length=(p1_end_index-p1_begin_index+1))
    p2_span = ddlib.Span(begin_word_id=p2_begin_index, length=(p2_end_index-p2_begin_index+1))

    # Generate the generic features using DDLIB
    for feature in ddlib.get_generic_features_relation(sent, p1_span, p2_span):
        yield [p1_id, p2_id, feature]

Overwriting udf/extract_spouse_features.py

In [36]:
!chmod +x udf/extract_spouse_features.py
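One detail worth isolating from the UDF above is the dep_par index shift: as stored from CoreNLP, dependency parents are 1-indexed with 0 meaning ROOT, while DDlib expects 0-indexed parents with -1 meaning ROOT, which is what dep_parents[i] - 1 achieves. A toy, hand-made parse (not real CoreNLP output) shows the conversion:

```python
tokens = ["Morland", "is", "consultant"]
corenlp_parents = [2, 0, 2]  # 1-indexed; 0 means ROOT (as stored from CoreNLP)

# the UDF's dep_parents[i] - 1 shift: 0-indexed, -1 means ROOT (DDlib convention)
ddlib_parents = [p - 1 for p in corenlp_parents]

edges = [(tokens[i], tokens[p] if p >= 0 else "ROOT")
         for i, p in enumerate(ddlib_parents)]
print(edges)  # [('Morland', 'is'), ('is', 'ROOT'), ('consultant', 'is')]
```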


Note that getting the input for this UDF requires joining the person_mention and sentences tables:

In [37]:
%%file -a app.ddlog

function extract_spouse_features over (
        p1_id          text,
        p2_id          text,
        p1_begin_index int,
        p1_end_index   int,
        p2_begin_index int,
        p2_end_index   int,
        doc_id         text,
        sent_index     int,
        tokens         text[],
        lemmas         text[],
        pos_tags       text[],
        ner_tags       text[],
        dep_types      text[],
        dep_tokens     int[]
    ) returns rows like spouse_feature
    implementation "udf/extract_spouse_features.py" handles tsj lines.

spouse_feature += extract_spouse_features(
    p1_id, p2_id, p1_begin_index, p1_end_index, p2_begin_index, p2_end_index,
    doc_id, sent_index, tokens, lemmas, pos_tags, ner_tags, dep_types, dep_tokens
) :-
    person_mention(p1_id, _, doc_id, sent_index, p1_begin_index, p1_end_index),
    person_mention(p2_id, _, doc_id, sent_index, p2_begin_index, p2_end_index),
    sentences(doc_id, sent_index, tokens, lemmas, pos_tags, ner_tags, _, dep_types, dep_tokens).

Appending to app.ddlog


Now, let's execute this UDF to get our features:

In [38]:
!deepdive redo spouse_feature

app.ddlog: updated since last deepdive compile
‘run/compiled’ -> ‘20161105/171559.768170292’
‘run/RUNNING’ -> ‘20161105/171600.894283335’
2016-11-05 17:16:01.100184 process/ext_spouse_feature_by_extract_spouse_features/run.sh
2016-11-05 17:16:11.115510 deepdive mark 'done' data/spouse_feature
‘run/FINISHED’ -> ‘20161105/171600.894283335’


If we take a look at a sample of the extracted features, they will look roughly like the following:

In [39]:
!deepdive query '| 20 ?- spouse_feature(_, _, feature).'

                                                    feature
----------------------------------------------------------------------------------------------------------------
WORD_SEQ_[will try to apply those skills to his son remains to be seen , but Morland will stick his nose into]
LEMMA_SEQ_[will try to apply those skill to he son remain to be see , but Morland will stick he nose into]
NER_SEQ_[O O O O O O O O O O O O O O O PERSON O O O O O]
POS_SEQ_[MD VB TO VB DT NNS TO PRP$ NN VBZ TO VB VBN , CC NNP MD VB PRP$ NN IN]
W_LEMMA_L_1_R_1_[elder]_['s]
W_NER_L_1_R_1_[O]_[O]
W_LEMMA_L_1_R_2_[elder]_['s first]
W_NER_L_1_R_2_[O]_[O ORDINAL]
W_LEMMA_L_1_R_3_[elder]_['s first case]
W_NER_L_1_R_3_[O]_[O ORDINAL O]
W_LEMMA_L_2_R_1_[the elder]_['s]
W_NER_L_2_R_1_[O O]_[O]
W_LEMMA_L_2_R_2_[the elder]_['s first]
W_NER_L_2_R_2_[O O]_[O ORDINAL]
W_LEMMA_L_2_R_3_[the elder]_['s first case]
W_NER_L_2_R_3_[O O]_[O ORDINAL O]
W_LEMMA_L_3_R_1_[not the elder]_['s]
W_NER_L_3_R_1_[O O O]_[O]
W_LEMMA_L_3_R_2_[not the elder]_['s first]
W_NER_L_3_R_2_[O O O]_[O ORDINAL]
(20 rows)



Now we have generated what looks more like the standard input to a machine learning problem—a set of objects, represented by sets of features, which we want to classify (here, as true or false mentions of a spousal relation). However, we don't have any supervised labels (i.e., a set of correct answers) for a machine learning algorithm to learn from! In most real world applications, a sufficiently large set of supervised labels is not available. With DeepDive, we take the approach sometimes referred to as distant supervision or data programming, where we instead generate a noisy set of labels using a mix of mappings from secondary datasets and other heuristic rules.
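
As a side illustration (not part of the DeepDive pipeline), the (p1_id, p2_id, feature) triples in spouse_feature can be viewed as a sparse binary feature matrix, with one row per candidate pair and one column per distinct feature. A minimal sketch of that grouping:

```python
def to_feature_index(rows):
    """Group (p1_id, p2_id, feature) triples into per-candidate sparse feature sets.

    `rows` is a hypothetical in-memory stand-in for the spouse_feature table.
    Returns a dict mapping each candidate pair to its set of feature indices,
    plus the feature-name-to-index mapping.
    """
    feature_ids = {}  # feature string -> column index
    candidates = {}   # (p1_id, p2_id) -> set of column indices
    for p1, p2, feat in rows:
        fid = feature_ids.setdefault(feat, len(feature_ids))
        candidates.setdefault((p1, p2), set()).add(fid)
    return candidates, feature_ids
```

Each candidate's feature set is exactly the sparse binary vector that a standard classifier would consume.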

## 2. Distant supervision with data and rules¶

In this section, we'll use distant supervision (or 'data programming') to provide a noisy set of labels for candidate relation mentions, with which we will train a machine learning model.

We'll describe two basic categories of approaches:

1. Mapping from secondary data for distant supervision
2. Using heuristic rules for distant supervision

Then, we'll describe a simple majority-vote approach to resolving multiple labels per example, which can be implemented within DDlog.

Let's declare a new table where we'll store the labels (referring to the spouse candidate mentions), with an integer value (True=1, False=-1) and a description (rule_id):

In [40]:
%%file -a app.ddlog

## Distant Supervision ########################################################
spouse_label(
    p1_id   text,
    p2_id   text,
    label   int,
    rule_id text
).

Appending to app.ddlog


Let's first populate spouse_label with every spouse candidate mention, using a neutral label (0) and a NULL rule_id; this simplifies some steps later:

In [41]:
%%file -a app.ddlog

# make sure all pairs in spouse_candidate are considered as unsupervised examples
spouse_label(p1,p2, 0, NULL) :-
    spouse_candidate(p1, _, p2, _).

Appending to app.ddlog


### 2.1. Mapping from secondary data for distant supervision¶

First, we'll try using an external structured dataset of known married couples, from DBpedia, to distantly supervise our dataset. We'll download the relevant data, and then map it to our candidate spouse relations.

Our goal is to first extract a collection of known married couples from DBpedia and then load this into the spouses_dbpedia table in our database. To extract known married couples, we use the DBpedia dump present in Google's BigQuery platform. First we extract the URI, name and spouse information from the DBpedia person table records in BigQuery for which the field name is not NULL. We use the following query:

SELECT URI,name, spouse
FROM [fh-bigquery:dbpedia.person]
where name <> "NULL"


We store the result of the above query in a local project table dbpedia.validnames and perform a self-join to obtain the pairs of married couples.

SELECT t1.name, t2.name
FROM [dbpedia.validnames] AS t1
JOIN EACH [dbpedia.validnames] AS t2
ON t1.spouse = t2.URI


The output of the above query is stored in a new table named dbpedia.spouseraw. Finally, we use the following query to remove symmetric duplicates.

SELECT p1, p2
FROM (SELECT t1_name as p1, t2_name as p2 FROM [dbpedia.spouseraw]),
(SELECT t2_name as p1, t1_name as p2 FROM [dbpedia.spouseraw])
WHERE p1 < p2


The output of this query is stored in a local file. The file contains duplicate rows (BigQuery does not support distinct). It also contains noisy rows in which the given name, family name, and multiple aliases were concatenated into a single string containing the characters { and }. Using the Unix commands sed, sort, and uniq, we first remove the lines containing { or } and then remove duplicate entries. This results in an input file spouses_dbpedia.csv containing 6,126 entries of married couples.
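
For reference, this cleanup step can also be sketched in Python; the following is an illustrative in-memory stand-in for the sed/sort/uniq pipeline described above, not the command actually used:

```python
def clean_spouse_pairs(lines):
    """Drop rows containing '{' or '}' and remove duplicate entries,
    mirroring the sed/sort/uniq cleanup described above."""
    kept = [ln for ln in lines if "{" not in ln and "}" not in ln]
    return sorted(set(kept))  # sort | uniq
```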

Note that we made this spouses_dbpedia.csv available for download from GitHub, so you don't have to repeat the above process.

To load the known married couples data into DeepDive, we first declare the schema in DDlog:

In [42]:
%%file -a app.ddlog

# distant supervision using data from DBpedia

spouses_dbpedia(
    person1_name text,
    person2_name text
).

Appending to app.ddlog


Notice that we can easily load the data in spouses_dbpedia.csv into the table we just declared if we follow DeepDive's convention of organizing input data under the input/ directory. The input file name simply needs to start with the target database table name. Let's download the file from GitHub to input/spouses_dbpedia.csv.bz2 under our application:

In [43]:
!cd input && curl -RLO "https://github.com/HazyResearch/deepdive/raw/master/examples/spouse/input/spouses_dbpedia.csv.bz2"

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
100   174  100   174    0     0    360      0 --:--:-- --:--:-- --:--:--   360
100 77463  100 77463    0     0  82313      0 --:--:-- --:--:-- --:--:-- 82313


Then execute this command to load it into the database:

In [44]:
!deepdive redo spouses_dbpedia

app.ddlog: updated since last deepdive compile
‘run/compiled’ -> ‘20161105/171613.590881475’
‘run/RUNNING’ -> ‘20161105/171614.696781763’
2016-11-05 17:16:14.842279 process/init/relation/spouses_dbpedia/run.sh
2016-11-05 17:16:15.643122 deepdive mark 'done' data/spouses_dbpedia
‘run/FINISHED’ -> ‘20161105/171614.696781763’


Now the database should include tuples that look like the following:

In [45]:
!deepdive query '| 20 ?- spouses_dbpedia(name1, name2).'

        name1         |                  name2
----------------------+-----------------------------------------
 20th Earl of Arundel | Anne Howard Countess of Arundel
Aafia Siddiqui       | Amjad Mohammed Khan
A. A. Gill           | Amber Rudd
Aamir Ali Malik      | Sanjeeda Shaikh
Aamir Khan           | Kiran Rao
Aarón Díaz           | Kate del Castillo
Aaron Hotchner       | Beth Clemmons
Aaron Spelling       | Carolyn Jones
Aaron Staton         | Connie Fletcher
Aarti Bajaj          | Anurag Kashyap
Abbas                | Erum Ali
Abbas Tyrewala       | Pakhi Tyrewala
Abbe Lane            | Xavier Cugat
Abbie G. Rogers      | Henry Huttleston Rogers
Abby Jimenez         | Ramon Jimenez Jr.
Abby Lockhart        | Luka Kovač
Abby McDeere         | Mitch McDeere
Abdel Hakim Amer     | Berlenti Abdul Hamid  برلنتي عبد الحميد
(20 rows)



#### Supervising spouse candidates with DBpedia data¶

Next we'll implement a simple distant supervision rule which labels any spouse mention candidate with a pair of names appearing in DBpedia as true:

In [46]:
%%file -a app.ddlog

spouse_label(p1,p2, 1, "from_dbpedia") :-
    spouse_candidate(p1, p1_name, p2, p2_name),
    spouses_dbpedia(n1, n2),
    [ lower(n1) = lower(p1_name), lower(n2) = lower(p2_name) ;
      lower(n2) = lower(p1_name), lower(n1) = lower(p2_name) ].

Appending to app.ddlog


There are many obvious ways in which this rule could be improved (fuzzy matching, more restrictive conditions, etc.), but it serves as an example of one major type of distant supervision rule.
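
For intuition, the disjunction in the rule above amounts to a symmetric, case-insensitive string comparison. A rough Python equivalent, for illustration only:

```python
def dbpedia_match(p1_name, p2_name, n1, n2):
    """True if the candidate pair (p1_name, p2_name) matches the known couple
    (n1, n2) in either order, ignoring case (mirrors the DDlog disjunction)."""
    a, b = p1_name.lower(), p2_name.lower()
    x, y = n1.lower(), n2.lower()
    return (a == x and b == y) or (a == y and b == x)
```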

### 2.2. Using heuristic rules for distant supervision¶

We can also create a supervision rule which does not rely on any secondary structured dataset like DBpedia, but instead just uses some heuristic. We set up a DDlog function, supervise, which uses a UDF containing several heuristic rules over the mention and sentence attributes:

In [47]:
%%file -a app.ddlog

# supervision by heuristic rules in a UDF
function supervise over (
        p1_id text, p1_begin int, p1_end int,
        p2_id text, p2_begin int, p2_end int,
        doc_id         text,
        sentence_index int,
        tokens         text[],
        lemmas         text[],
        pos_tags       text[],
        ner_tags       text[],
        dep_types      text[],
        dep_tokens     int[]
    ) returns (
        p1_id text, p2_id text, label int, rule_id text
    )
    implementation "udf/supervise_spouse.py" handles tsj lines.

spouse_label += supervise(
    p1_id, p1_begin, p1_end,
    p2_id, p2_begin, p2_end,
    doc_id, sentence_index,
    tokens, lemmas, pos_tags, ner_tags, dep_types, dep_token_indexes
) :-
    spouse_candidate(p1_id, _, p2_id, _),
    person_mention(p1_id, p1_text, doc_id, sentence_index, p1_begin, p1_end),
    person_mention(p2_id, p2_text,      _,              _, p2_begin, p2_end),
    sentences(
        doc_id, sentence_index,
        tokens, lemmas, pos_tags, ner_tags, _, dep_types, dep_token_indexes
    ).

Appending to app.ddlog


The Python UDF named udf/supervise_spouse.py contains several heuristic rules:

• Candidates with person mentions that are too far apart in the sentence are marked as false.
• Candidates with person mentions that have another person in between are marked as false.
• Candidates with person mentions that have words like "wife" or "husband" in between are marked as true.
• Candidates with person mentions that have "and" in between and "married" after are marked as true.
• Candidates with person mentions that have familial relation words in between are marked as false.

In [48]:
%%file udf/supervise_spouse.py
#!/usr/bin/env python
from deepdive import *
import random
from collections import namedtuple

SpouseLabel = namedtuple('SpouseLabel', 'p1_id, p2_id, label, type')

@tsj_extractor
@returns(lambda
        p1_id   = "text",
        p2_id   = "text",
        label   = "int",
        rule_id = "text",
    :[])
# heuristic rules for finding positive/negative examples of spouse relationship mentions
def supervise(
        p1_id="text", p1_begin="int", p1_end="int",
        p2_id="text", p2_begin="int", p2_end="int",
        doc_id="text", sentence_index="int",
        tokens="text[]", lemmas="text[]", pos_tags="text[]", ner_tags="text[]",
        dep_types="text[]", dep_token_indexes="int[]",
    ):

    # Constants
    MARRIED = frozenset(["wife", "husband"])
    FAMILY = frozenset(["mother", "father", "sister", "brother", "brother-in-law"])
    MAX_DIST = 10

    # Common data objects
    p1_end_idx = min(p1_end, p2_end)
    p2_start_idx = max(p1_begin, p2_begin)
    p2_end_idx = max(p1_end, p2_end)
    intermediate_lemmas = lemmas[p1_end_idx+1:p2_start_idx]
    intermediate_ner_tags = ner_tags[p1_end_idx+1:p2_start_idx]
    tail_lemmas = lemmas[p2_end_idx+1:]
    spouse = SpouseLabel(p1_id=p1_id, p2_id=p2_id, label=None, type=None)

    # Rule: Candidates that are too far apart
    if len(intermediate_lemmas) > MAX_DIST:
        yield spouse._replace(label=-1, type='neg:far_apart')

    # Rule: Candidates that have a third person in between
    if 'PERSON' in intermediate_ner_tags:
        yield spouse._replace(label=-1, type='neg:third_person_between')

    # Rule: Sentences that contain wife/husband in between
    #         (<P1>)([ A-Za-z]+)(wife|husband)([ A-Za-z]+)(<P2>)
    if len(MARRIED.intersection(intermediate_lemmas)) > 0:
        yield spouse._replace(label=1, type='pos:wife_husband_between')

    # Rule: Sentences that contain and ... married
    #         (<P1>)(and)?(<P2>)([ A-Za-z]+)(married)
    if ("and" in intermediate_lemmas) and ("married" in tail_lemmas):
        yield spouse._replace(label=1, type='pos:married_after')

    # Rule: Sentences that contain familial relations:
    #         (<P1>)([ A-Za-z]+)(brother|sister|father|mother)([ A-Za-z]+)(<P2>)
    if len(FAMILY.intersection(intermediate_lemmas)) > 0:
        yield spouse._replace(label=-1, type='neg:familial_between')

Overwriting udf/supervise_spouse.py

In [49]:
!chmod +x udf/supervise_spouse.py


Note that the rough theory behind this approach is that we don't need high-quality (e.g., hand-labeled) supervision to learn a high-quality model. Instead, using statistical learning, we can in fact recover high-quality models from a large set of low-quality or noisy labels.

### 2.3. Resolving multiple labels per example with majority vote¶

Finally, we implement a very simple majority vote procedure, all in DDlog, for resolving scenarios where a single spouse candidate mention has multiple conflicting labels. First, we sum the labels (which are all -1, 0, or 1):

In [50]:
%%file -a app.ddlog

# resolve multiple labels by majority vote (summing the labels in {-1,0,1})
spouse_label_resolved(p1_id, p2_id, SUM(vote)) :-
    spouse_label(p1_id, p2_id, vote, rule_id).

Appending to app.ddlog


Then, we simply threshold and add these labels to our decision variable table has_spouse (see next section for details here):

In [51]:
%%file -a app.ddlog

# assign the resolved labels for the spouse relation
has_spouse(p1_id, p2_id) = if l > 0 then TRUE
                      else if l < 0 then FALSE
                      else NULL end :- spouse_label_resolved(p1_id, p2_id, l).

Appending to app.ddlog
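
The two DDlog rules above amount to the following resolution logic, sketched here in plain Python for illustration (this is not part of the application itself):

```python
def resolve_label(votes):
    """Majority vote over labels in {-1, 0, 1}: sum the votes, then threshold.

    Positive sum -> True, negative sum -> False, tie (or no votes) -> None,
    i.e., the candidate remains unsupervised.
    """
    total = sum(votes)
    if total > 0:
        return True
    if total < 0:
        return False
    return None
```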


Once again, to execute all of the above, just run the following command:

In [52]:
!deepdive redo has_spouse

app.ddlog: updated since last deepdive compile
‘run/compiled’ -> ‘20161105/171617.316682458’
‘run/RUNNING’ -> ‘20161105/171618.630553368’
2016-11-05 17:16:18.966464 process/ext_spouse_label__0_by_supervise/run.sh
2016-11-05 17:16:22.551696 deepdive mark 'done' data/spouse_label__0
2016-11-05 17:16:22.588306 process/ext_spouse_label/run.sh
2016-11-05 17:16:22.773465 deepdive mark 'done' data/spouse_label
2016-11-05 17:16:22.802270 process/ext_spouse_label_resolved/run.sh
2016-11-05 17:16:22.988895 deepdive mark 'done' data/spouse_label_resolved
2016-11-05 17:16:23.017430 process/ext_has_spouse/run.sh
2016-11-05 17:16:23.203629 deepdive mark 'done' data/has_spouse
‘run/FINISHED’ -> ‘20161105/171618.630553368’


Recall that deepdive do will execute all upstream tasks as well, so this will execute all of the previous steps!

Now, we can take a brief look at how many candidates are supervised by different rules, which will look something like the table below. Obviously, the counts will vary depending on your input corpus.

In [53]:
!deepdive query 'rule, @order_by COUNT(1) ?- spouse_label(p1,p2, label, rule).'

           rule           | COUNT(1)
--------------------------+----------
 neg:familial_between     |       26
 pos:wife_husband_between |       49
 neg:third_person_between |      174
 neg:far_apart            |      239
                          |      636
(5 rows)



## 3. Learning and inference: model specification¶

Now, we need to specify the actual model that DeepDive will perform learning and inference over. At a high level, this boils down to specifying three things:

1. What are the variables of interest that we want DeepDive to predict for us?

2. What are the features for each of these variables?

3. What are the connections between the variables?

Once we have specified the model in this way, DeepDive will learn the parameters of the model (the weights of the features and potentially of the connections between variables), and then perform statistical inference over the learned model to determine the probability that each variable of interest is true.

For more advanced users: we are specifying a factor graph where the features are unary factors, and then using SGD and Gibbs sampling for learning and inference. Further technical detail is available here.
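
For intuition only: if a has_spouse variable had no factors connecting it to other variables, its marginal probability under the unary feature factors would reduce to a logistic function of the summed weights of its active features. A minimal sketch (this is not DeepDive's inference code, which runs Gibbs sampling over the full factor graph):

```python
import math

def unary_marginal(weights):
    """P(variable = true) for an isolated boolean variable whose active unary
    factors have the given weights; Gibbs sampling recovers this in the limit."""
    z = sum(weights)
    return 1.0 / (1.0 + math.exp(-z))  # logistic function of the weighted sum
```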

### 3.1. Specifying prediction variables¶

In our case, we have one variable to predict per spouse candidate mention, namely, is this mention actually indicating a spousal relation or not? In other words, we want DeepDive to predict the value of a Boolean variable for each spouse candidate mention, indicating whether it is true or not. Recall that we started this tutorial with specifying this at the beginning of app.ddlog as follows:

ddlog
has_spouse?(
    p1_id text,
    p2_id text
).

DeepDive will predict not only the value of these variables, but also the marginal probabilities, i.e., the confidence level that DeepDive has for each individual prediction.

### 3.2. Specifying features¶

Next, we indicate (i) that each has_spouse variable will be connected to the features of the corresponding spouse_candidate row, (ii) that we wish DeepDive to learn the weights of these features from our distantly supervised data, and (iii) that the weight of a specific feature across all instances should be the same, as follows:

In [54]:
%%file -a app.ddlog

## Inference Rules ############################################################

# Features
@weight(f)
has_spouse(p1_id, p2_id) :-
    spouse_feature(p1_id, p2_id, f).

Appending to app.ddlog


### 3.3. Specifying connections between variables¶

Finally, we can specify dependencies between the prediction variables, with either learned or given weights. Here, we'll specify two such rules, with fixed (given) weights that we specify. First, we define a symmetry connection, namely specifying that if the model thinks a person mention p1 and a person mention p2 indicate a spousal relationship in a sentence, then it should also think that the reverse is true, i.e., that p2 and p1 indicate one too:

In [55]:
%%file -a app.ddlog

# Inference rule: Symmetry
@weight(3.0)
has_spouse(p1_id, p2_id) => has_spouse(p2_id, p1_id) :-
    TRUE.

Appending to app.ddlog


Next, we specify a rule that the model should be strongly biased towards finding one marriage indication per person mention. We do this inversely, using a negative weight, as follows:

In [56]:
%%file -a app.ddlog

# Inference rule: Only one marriage
@weight(-1.0)
has_spouse(p1_id, p2_id) => has_spouse(p1_id, p3_id) :-
    TRUE.

Appending to app.ddlog


### 3.4. Performing learning and inference¶

Finally, to perform learning and inference using the specified model, we need to run the following command:

In [57]:
!deepdive redo probabilities

app.ddlog: updated since last deepdive compile
‘run/compiled’ -> ‘20161105/171629.260084208’
‘run/RUNNING’ -> ‘20161105/171630.704470831’
2016-11-05 17:16:31.297709 process/grounding/from_grounding/run.sh
2016-11-05 17:16:31.312432 process/grounding/variable/has_spouse/materialize/run.sh
2016-11-05 17:16:36.389875 process/grounding/variable_assign_id/run.sh
2016-11-05 17:16:36.673670 process/grounding/factor/inf_imply_has_spouse_has_spouse_0/materialize/run.sh
2016-11-05 17:16:42.015477 process/grounding/factor/inf_imply_has_spouse_has_spouse_1/materialize/run.sh
2016-11-05 17:16:52.138722 process/grounding/factor/inf_istrue_has_spouse/materialize/run.sh
2016-11-05 17:16:59.239468 process/grounding/assign_weight_id/run.sh
2016-11-05 17:16:59.887608 process/grounding/factor/inf_imply_has_spouse_has_spouse_0/0/dump/run.sh
2016-11-05 17:17:01.095601 process/grounding/factor/inf_imply_has_spouse_has_spouse_0/dump_weights/run.sh
2016-11-05 17:17:02.291527 process/grounding/factor/inf_imply_has_spouse_has_spouse_1/0/dump/run.sh
2016-11-05 17:17:03.516167 process/grounding/factor/inf_imply_has_spouse_has_spouse_1/dump_weights/run.sh
2016-11-05 17:17:04.702877 process/grounding/factor/inf_istrue_has_spouse/0/dump/run.sh
2016-11-05 17:17:05.028324 process/grounding/factor/inf_istrue_has_spouse/dump_weights/run.sh
2016-11-05 17:17:05.287317 process/grounding/global_weight_table/run.sh
2016-11-05 17:17:05.470889 process/grounding/variable_holdout/run.sh
2016-11-05 17:17:06.148270 process/grounding/variable/has_spouse/0/dump/run.sh
2016-11-05 17:17:07.342957 process/grounding/combine_factorgraph/run.sh
2016-11-05 17:17:07.424862 process/model/learning/run.sh
2016-11-05 17:17:09.555646 process/model/inference/run.sh
2016-11-05 17:17:10.366123 deepdive mark 'done' data/model/probabilities
‘run/FINISHED’ -> ‘20161105/171630.704470831’


This will ground the model based on the data in the database, learn the weights, infer the expectations or marginal probabilities of the variables in the model, and then load them back to the database.

Let's take a look at the probabilities inferred by DeepDive for the has_spouse variables.

In [58]:
!deepdive sql 'SELECT p1_id, p2_id, expectation FROM has_spouse_inference ORDER BY random() LIMIT 20'

                     p1_id                     |                     p2_id                     | expectation
-----------------------------------------------+-----------------------------------------------+-------------
8b31ede3-0f3b-431a-86a3-342ee18cfd83_10_27_27 | 8b31ede3-0f3b-431a-86a3-342ee18cfd83_10_5_5   |           0
acedaa54-9820-4b71-aa7b-38dc7ed1d2a6_0_35_35  | acedaa54-9820-4b71-aa7b-38dc7ed1d2a6_0_37_38  |       0.032
328623e0-52f3-44a6-b66b-496cd9d93762_3_1_1    | 328623e0-52f3-44a6-b66b-496cd9d93762_3_23_24  |       0.008
c27a162d-f2d1-4bdb-84ba-0915a082775b_32_21_21 | c27a162d-f2d1-4bdb-84ba-0915a082775b_32_31_31 |       0.019
f6e047d0-e409-42a6-ab0e-13ab926719a6_19_24_25 | f6e047d0-e409-42a6-ab0e-13ab926719a6_19_31_32 |       0.015
172960c6-cb26-4cd1-99a8-d7cb92f8dec8_29_15_15 | 172960c6-cb26-4cd1-99a8-d7cb92f8dec8_29_7_8   |       0.034
9662058b-fca5-4771-8058-c7fd7bd548a3_3_0_1    | 9662058b-fca5-4771-8058-c7fd7bd548a3_3_17_18  |        0.03
693ae030-4239-4291-b248-dbf7c1696ff2_4_15_15  | 693ae030-4239-4291-b248-dbf7c1696ff2_4_2_2    |           0
eacc9625-b22d-4a44-a62e-7d53c132af1a_14_0_0   | eacc9625-b22d-4a44-a62e-7d53c132af1a_14_14_15 |        0.01
dbc798be-9a6e-48b7-8721-31f84e89c10b_27_15_15 | dbc798be-9a6e-48b7-8721-31f84e89c10b_27_2_2   |       0.007
18658e4a-a94e-478f-ab2e-2ee709bd47e5_8_11_11  | 18658e4a-a94e-478f-ab2e-2ee709bd47e5_8_21_22  |       0.027
b4968e78-ec5a-466e-863f-fef18e8ae99d_34_33_33 | b4968e78-ec5a-466e-863f-fef18e8ae99d_34_39_39 |        0.01
7e5f4072-b69f-4819-8ed6-62bdd0100621_13_14_15 | 7e5f4072-b69f-4819-8ed6-62bdd0100621_13_21_22 |       0.007
acedaa54-9820-4b71-aa7b-38dc7ed1d2a6_1_12_12  | acedaa54-9820-4b71-aa7b-38dc7ed1d2a6_1_48_48  |       0.008
9662058b-fca5-4771-8058-c7fd7bd548a3_34_0_0   | 9662058b-fca5-4771-8058-c7fd7bd548a3_34_6_7   |       0.023
23490793-bb60-44c0-bbec-9c3be871d762_15_17_18 | 23490793-bb60-44c0-bbec-9c3be871d762_15_21_22 |       0.036
d6880afb-7fcb-4576-9d17-cedd343677f9_29_0_0   | d6880afb-7fcb-4576-9d17-cedd343677f9_29_20_20 |       0.008
c27a162d-f2d1-4bdb-84ba-0915a082775b_19_26_26 | c27a162d-f2d1-4bdb-84ba-0915a082775b_19_5_5   |           0
0a74a914-54fb-47bc-acae-5dcd10ed5c3d_5_25_25  | 0a74a914-54fb-47bc-acae-5dcd10ed5c3d_5_3_3    |           0
(20 rows)



## 4. Error analysis and debugging¶

After finishing a pass of writing and running the DeepDive application, the first thing we want to see is how good the results are. In this section, we describe how DeepDive's interactive tools can be used for viewing the results as well as error analysis and debugging.

### 4.1. Calibration Plots¶

DeepDive provides calibration plots to see how well the expectations computed by the system are calibrated. The following command generates a plot for each variable under run/model/calibration-plots/.

In [ ]:
!deepdive do calibration-plots


It will produce a file run/model/calibration-plots/has_spouse.png that holds three plots as shown below:

Refer to the full documentation on calibration data for more detail on how to interpret the plots and take actions.
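
As a rough illustration of what the first calibration plot computes, the inferred expectations can be bucketed into ten probability bins and counted. The following sketch is an illustrative stand-in, not DeepDive's own calibration code:

```python
def probability_histogram(expectations, num_bins=10):
    """Count predictions per probability bucket [0.0-0.1), ..., [0.9-1.0]."""
    counts = [0] * num_bins
    for p in expectations:
        # Clamp p == 1.0 into the last bucket
        counts[min(int(p * num_bins), num_bins - 1)] += 1
    return counts
```

A well-behaved application typically shows a bimodal histogram, with most mass near 0 and 1 and little in the middle.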

### 4.2. Browsing data with Mindbender¶

Mindbender is the tool that provides an interactive user interface to DeepDive. It can be used for browsing any data that has been loaded into DeepDive or produced by it.

#### Browsing input corpus¶

We need to give DeepDive hints about which parts of the data we want to browse, using DDlog annotations. For example, on the articles relation we declared earlier in app.ddlog, we can sprinkle some annotations such as @source, @key, and @searchable, as follows.

ddlog
@source
articles(
    @key
    id text,
    @searchable
    content text
).

The fully annotated DDlog code is available at GitHub and can be downloaded to replace your app.ddlog by running the following command:

In [ ]:
!curl -RLO "https://github.com/HazyResearch/deepdive/raw/master/examples/spouse/app.ddlog"


Next, if we run the following command, DeepDive will create and populate a search index according to these hints.

In [ ]:
!mindbender search drop; mindbender search update


To access the populated search index through a web browser, run:

In [ ]:
!mindbender search gui


Then, point your browser to the URL that appears after the command (typically http://localhost:8000) to see a view that looks like the following:

#### Browsing result data¶

To browse the results, we can add annotations describing the inferred relations and how they relate to their source relations. For example, the @extraction and @references annotations in the following DDlog declaration tell DeepDive that the variable relation has_spouse is inferred from pairs of person_mention.

ddlog
@extraction
has_spouse?(
    @key
    @references(relation="person_mention", column="mention_id", alias="p1")
    p1_id text,
    @key
    @references(relation="person_mention", column="mention_id", alias="p2")
    p2_id text
).

The relation person_mention as well as the relations it references should have similar annotations (see the complete app.ddlog code for full detail).

Then, repeating the commands to update the search index and load the user interface will allow us to browse the expected marginal probabilities of has_spouse as well.

#### Customizing how data is presented¶

In fact, the screenshots above are showing the data presented using a carefully prepared set of templates under mindbender/search-templates/. In these AngularJS templates, virtually anything you can program in HTML/CSS/JavaScript/CoffeeScript can be added to present the data that is ideal for human consumption (e.g., highlighted text spans rather than token indexes). Please see the documentation about customizing the presentation for further detail.

### 4.3. Estimating precision with Mindtagger¶

Mindtagger, which is part of the Mindbender tool suite, assists data labeling tasks to quickly assess the precision and/or recall of the extraction. We show how Mindtagger helps us perform a labeling task to estimate the precision of the extraction. The necessary set of files shown below already exist in the example under labeling/has_spouse-precision/.

#### Preparing a data labeling task¶

First, we take a random sample of 100 examples from the has_spouse relation whose expectation is at least 0.9, as shown in the following SQL query, and store them in a file called has_spouse.csv.

In [ ]:
!mkdir -p labeling/has_spouse-precision/

In [ ]:
%%bash
deepdive sql eval "

SELECT hsi.p1_id
     , hsi.p2_id
     , s.doc_id
     , s.sentence_index
     , hsi.dd_label
     , hsi.expectation
     , s.tokens
     , pm1.mention_text AS p1_text
     , pm1.begin_index  AS p1_start
     , pm1.end_index    AS p1_end
     , pm2.mention_text AS p2_text
     , pm2.begin_index  AS p2_start
     , pm2.end_index    AS p2_end

FROM has_spouse_inference hsi
   , person_mention       pm1
   , person_mention       pm2
   , sentences            s

WHERE hsi.p1_id          = pm1.mention_id
  AND pm1.doc_id         = s.doc_id
  AND pm1.sentence_index = s.sentence_index
  AND hsi.p2_id          = pm2.mention_id
  AND pm2.doc_id         = s.doc_id
  AND pm2.sentence_index = s.sentence_index
  AND expectation       >= 0.9

ORDER BY random()
LIMIT 100

" format=csv header=1 > labeling/has_spouse-precision/has_spouse.csv



We also prepare the mindtagger.conf and template.html files under labeling/has_spouse-precision/ that look like the following:

In [ ]:
%%file labeling/has_spouse-precision/mindtagger.conf
title: Labeling task for estimating has_spouse precision
items: {
    file: has_spouse.csv
    key_columns: [p1_id, p2_id]
}
template: template.html

In [ ]:
%%file labeling/has_spouse-precision/template.html
<mindtagger mode="precision">

  <template for="each-item">
    <strong title="item_id: {{item.id}}">{{item.p1_text}} -- {{item.p2_text}}</strong>
    with expectation <strong>{{item.expectation | number:3}}</strong> appeared in:
    <blockquote>
      <big mindtagger-word-array="item.tokens" array-format="json">
        <mindtagger-highlight-words from="item.p1_start" to="item.p1_end" with-style="background-color: yellow;"/>
        <mindtagger-highlight-words from="item.p2_start" to="item.p2_end" with-style="background-color: cyan;"/>
      </big>
    </blockquote>

    <div>
      <div mindtagger-item-details></div>
    </div>
  </template>

  <template for="tags">
    <span mindtagger-note-tags></span>
  </template>

</mindtagger>


#### Labeling data with Mindtagger¶

Mindtagger can then be started for the task using the following command:

In [ ]:
!mindbender tagger labeling/has_spouse-precision/mindtagger.conf


Then, point your browser to the URL that appears after the command (typically http://localhost:8000) to see a dedicated user interface for labeling data that looks like the following:

We can quickly label the sampled 100 examples using the intuitive user interface with buttons for correct/incorrect tags. It also supports keyboard shortcuts for entering labels and moving between items (press the ? key to view all supported keys). The number of examples labeled correct, along with other tags, is shown in the "Tags" dropdown at the top right corner.

The collected tags can also be exported in various formats for post-processing.

For further detail, see the documentation about labeling data.
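
Once the tags are exported, turning them into a precision estimate is straightforward. The following sketch is illustrative (not part of Mindtagger) and assumes we simply count how many of the labeled examples were tagged correct, adding a normal-approximation 95% confidence interval:

```python
import math

def precision_estimate(num_correct, num_labeled):
    """Point estimate and ~95% normal-approximation interval for precision."""
    p = num_correct / num_labeled
    half = 1.96 * math.sqrt(p * (1 - p) / num_labeled)  # normal-approx half-width
    return p, max(0.0, p - half), min(1.0, p + half)
```

With 100 labeled samples, the interval is roughly ±0.06 around a precision of 0.9, which is why a sample of about that size is a reasonable starting point.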

### 4.4. Monitoring statistics with Dashboard¶

Dashboard provides a way to monitor various descriptive statistics of the data products after each pass of DeepDive improvements. We can use a combination of SQL, any Bash script, and Markdown in each report template that produces a report, and we can produce a collection of them as a snapshot against the data extracted by DeepDive. Dashboard provides a structure to manage those templates and instantiate them in a sophisticated way using parameters. It provides a graphical interface for visualizing the collected statistics and trends as shown below. Refer to the full documentation on Dashboard to set up your own set of reports.