#!/usr/bin/env python
# coding: utf-8

# # DeepDive Tutorial: Extracting mentions of spouses from the news
#
# In this tutorial, we show an example of a prototypical task that DeepDive is often applied to:
# extraction of _structured information_ from _unstructured or 'dark' data_ such as web pages, text documents, images, etc.
# While DeepDive can be used as a more general platform for statistical learning and data processing, most of the tooling described herein has been built for this type of use case, based on our experience of successfully applying DeepDive to [a variety of real-world problems of this type](http://deepdive.stanford.edu/showcase/apps).
#
# In this setting, our goal is to take in a set of unstructured (and/or structured) inputs and populate a relational database table with extracted outputs, along with marginal probabilities for each extraction representing DeepDive's confidence in the extraction.
# More formally, we write a DeepDive application to extract mentions of _relations_ and their constituent _entities_ or _attributes_, according to a specified schema; this task is often referred to as **_relation extraction_**.*
# Accordingly, we'll walk through an example scenario where we wish to extract mentions of two people being spouses from news articles.
#
# The high-level steps we'll follow are:
#
# 1. **Data processing.** First, we'll load the raw corpus, add NLP markups, extract a set of _candidate_ relation mentions, and a sparse _feature_ representation of each.
#
# 2. **Distant supervision with data and rules.** Next, we'll use various strategies to provide _supervision_ for our dataset, so that we can use machine learning to learn the weights of a model.
#
# 3. **Learning and inference: model specification.** Then, we'll specify the high-level configuration of our _model_.
#
# 4. **Error analysis and debugging.** Finally, we'll show how to use DeepDive's labeling, error analysis, and debugging tools.
#
# *_Note the distinction between extraction of true, i.e., factual, relations and extraction of mentions of relations.
# In this tutorial, we do the latter; however, DeepDive supports further downstream methods for tackling the former task in a principled manner._
#
# Whenever something isn't clear, you can always refer to [the complete example code at `examples/spouse/`](https://github.com/HazyResearch/deepdive/tree/master/examples/spouse/) that contains everything shown in this document.

# ## 0. Preparation
#
# ### 0.0. Installing DeepDive and tweaking the notebook
#
# First of all, let's make sure DeepDive is installed and can be used from this notebook.
# See the [DeepDive installation guide](http://deepdive.stanford.edu/installation) for more details.
# In[1]:

# PATH needs correct setup to use DeepDive
import os; PWD=os.getcwd(); HOME=os.environ["HOME"]; PATH=os.environ["PATH"]
# home directory installation
get_ipython().run_line_magic('env', 'PATH=$HOME/local/bin:$PATH')
# notebook-local installation
get_ipython().run_line_magic('env', 'PATH=$PWD/deepdive/bin:$PATH')
get_ipython().system('type deepdive')
no_deepdive_found = get_ipython().getoutput('type deepdive >/dev/null')
if no_deepdive_found:
    # install it next to this notebook
    get_ipython().system('bash -c \'PREFIX="$PWD"/deepdive bash <(curl -fsSL git.io/getdeepdive) deepdive_from_release\'')

# We need to make sure this IPython/Jupyter notebook will work correctly with DeepDive:

# In[2]:

# check if notebook kernel was launched in a Unicode locale
import locale; LC_CTYPE = locale.getpreferredencoding()
if LC_CTYPE != "UTF-8":
    raise EnvironmentError("Notebook is running in '%s' encoding not compatible with DeepDive's Unicode output.\n\nPlease restart notebook in a UTF-8 locale with a command like the following:\n\n LC_ALL=en_US.UTF-8 jupyter notebook" % (LC_CTYPE))

# ### 0.1. Declaring what to predict
#
# To begin, we tell DeepDive what we want to predict as a *random variable*, in a language called *DDlog*, stored in a file `app.ddlog`:

# In[3]:

get_ipython().run_cell_magic('file', 'app.ddlog', "## Random variable to predict #################################################\n\n# This application's goal is to predict whether a given pair of person mentions\n# indicates a spouse relationship or not.\nhas_spouse?(\n    p1_id text,\n    p2_id text\n).\n")

# In this notebook, we are going to write our application in this `app.ddlog` one part at a time.
# We can check whether the code makes sense by asking DeepDive to compile it.
# DeepDive automatically compiles our application whenever we execute things after making changes, but we can also do this manually by running:

# In[4]:

get_ipython().system('deepdive compile')

# ### 0.2. Setting up a database
#
# Next, DeepDive will store all data—input, intermediate, output, etc.—in a relational database.
# Currently, PostgreSQL and Greenplum are supported.
# For operating at a larger scale, Greenplum is strongly recommended.
# To set the location of this database, we need to configure a URL in the [`db.url` file](../examples/spouse/db.url), e.g.:

# In[5]:

get_ipython().system('echo \'postgresql://\'"${PGHOST:-localhost}"\'/deepdive_spouse_$USER\' >db.url')

# If you have no running database yet, the following commands can quickly bring up a new PostgreSQL server to be used with DeepDive, storing all data at `run/database/postgresql` next to this notebook.

# In[6]:

no_database_running = get_ipython().getoutput('deepdive db is_ready || echo $?')
if no_database_running:
    PGDATA = "run/database/postgresql"
    get_ipython().system('mkdir -p $PGDATA; test -s $PGDATA/PG_VERSION || pg_ctl init -D $PGDATA >/dev/null')
    get_ipython().system('nohup pg_ctl -D $PGDATA -l $PGDATA/logfile start >/dev/null')

# _Note: DeepDive will drop and then create this database if run from scratch—beware of pointing to an existing populated one!_

# In[7]:

get_ipython().system('deepdive redo init/app')

# ## 1. Data processing
#
# In this section, we'll generate the traditional inputs of a statistical learning-type problem: candidate spouse relations, represented by a set of features, which we will aim to classify as _actual_ relation mentions or not.
#
# We'll do this in four basic steps:
#
# 1. Loading raw input data
# 2. Adding NLP markups
# 3. Extracting candidate relation mentions
# 4. Extracting features for each candidate

# ### 1.1. Loading raw input data
#
# Our first task is to download and load the raw text of [a corpus of news articles provided by Signal Media](http://research.signalmedia.co/newsir16/signal-dataset.html) into an `articles` table in our database.
#
# Keeping the identifier of each article and its content in the table is good enough.
# We can tell DeepDive to do this by declaring the schema of this `articles` table in our `app.ddlog` file; we add the following lines:

# In[8]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n## Input Data #################################################################\narticles(\n    id      text,\n    content text\n).\n')

# DeepDive can use a script's output as a data source for loading data into the table if we follow a simple naming convention.
# We create a simple shell script at `input/articles.tsj.sh` that outputs the news articles in TSJ format (tab-separated JSONs) from the downloaded corpus.

# In[9]:

get_ipython().system('mkdir -p input')

# In[10]:

get_ipython().run_cell_magic('file', 'input/articles.tsj.sh', '#!/usr/bin/env bash\nset -euo pipefail\ncd "$(dirname "$0")"\n\ncorpus=signalmedia/signalmedia-1m.jsonl\n[[ -e "$corpus" ]] || {\n    echo "ERROR: Missing $PWD/$corpus"\n    echo "# Please download it from http://research.signalmedia.co/newsir16/signal-dataset.html"\n    echo\n    echo "# Alternatively, use our sampled data by running:"\n    echo "deepdive load articles input/articles-100.tsv.bz2"\n    echo\n    echo "# Or, skip all NLP markup processes by running:"\n    echo "deepdive create table sentences"\n    echo "deepdive load sentences"\n    echo "deepdive mark done sentences"\n    false\n} >&2\n\ncat "$corpus" |\n#grep -E \'wife|husband|married\' |\n#head -100 |\njq -r \'[.id, .content] | map(@json) | join("\\t")\'\n')

# We need to mark the script as executable so DeepDive can actually execute it:

# In[11]:

get_ipython().system('chmod +x input/articles.tsj.sh')

# This script reads the corpus (provided as lines of JSON objects), then uses the [jq](https://stedolan.github.io/jq/) language to extract the fields `id` (the article identifier) and `content` from each entry and format them into TSJ.
# We can uncomment the `grep` or `head` lines to apply a naive filter and subsample the articles.
# Now, we tell DeepDive to execute the steps to load the `articles` table using the `input/articles.tsj.sh` script.
# You must have the [full corpus](http://research.signalmedia.co/newsir16/signal-dataset.html) downloaded at `input/signalmedia/signalmedia-1m.jsonl` for the `deepdive redo articles` step below to finish correctly.
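# As an aside, the following is a minimal Python sketch of the same TSJ conversion that the jq pipeline above performs (illustration only; DeepDive actually runs `input/articles.tsj.sh`):
#
# ```python
# import json, sys
#
# for line in sys.stdin:                # one JSON object per input line
#     doc = json.loads(line)
#     # TSJ: JSON values joined by tabs, here id <TAB> content
#     print("\t".join([json.dumps(doc["id"]), json.dumps(doc["content"])]))
# ```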
# In[12]:

get_ipython().system('deepdive redo articles')

# Alternatively, a sample of 100 or 1,000 articles can be downloaded from GitHub and loaded into DeepDive with the following commands:

# In[13]:

NUM_ARTICLES = 100
ARTICLES_FILE = "articles-%d.tsj.bz2" % NUM_ARTICLES
articles_not_done = get_ipython().getoutput('deepdive done articles || date')
if articles_not_done:
    get_ipython().system('cd input && curl -RLO "https://github.com/HazyResearch/deepdive/raw/master/examples/spouse/input/"$ARTICLES_FILE')
    get_ipython().system('deepdive reload articles input/$ARTICLES_FILE')

# After DeepDive finishes creating the table and then fetching and loading the data, we can take a look at the loaded data using the following `deepdive query` command, which enumerates ten values of the `id` column of the `articles` table:

# In[14]:

get_ipython().system("deepdive query '|10 ?- articles(id, _).'")

# ### 1.2. Adding NLP markups
#
# Next, we'll use Stanford's [CoreNLP](http://stanfordnlp.github.io/CoreNLP/) natural language processing (NLP) system to add useful markups and structure to our input data.
# This step will split up our articles into sentences and their component _tokens_ (roughly, the words).
# Additionally, we'll get _lemmas_ (normalized word forms), _part-of-speech (POS) tags_, _named entity recognition (NER) tags_, and a dependency parse of each sentence.
#
# Let's first declare the output schema of this step in `app.ddlog`:

# In[15]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n## NLP markup #################################################################\nsentences(\n    doc_id         text,\n    sentence_index int,\n    tokens         json,\n    lemmas         json,\n    pos_tags       json,\n    ner_tags       json,\n    doc_offsets    json,\n    dep_types      json,\n    dep_tokens     json\n).\n')

# Next, we declare a DDlog function which takes in the `doc_id` and `content` for an article and returns rows conforming to the `sentences` schema we just declared, using the **user-defined function (UDF)** in `udf/nlp_markup.sh`.
# We specify that this `nlp_markup` function should be run over each row from `articles`, and the output appended to `sentences`:

# In[16]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\nfunction nlp_markup over (\n        doc_id text,\n        content text\n    ) returns rows like sentences\n    implementation "udf/nlp_markup.sh" handles tsj lines.\n\nsentences += nlp_markup(doc_id, content) :-\n    articles(doc_id, content).\n')

# This UDF `udf/nlp_markup.sh` is a Bash script which uses [our own wrapper around CoreNLP](https://github.com/HazyResearch/deepdive/tree/deepdive-corenlp/util/nlp).
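# Before looking at the script itself, here is a purely hypothetical illustration of the kind of row this step appends to `sentences` (toy sentence; all values invented, and the `doc_offsets`, `dep_types`, and `dep_tokens` columns omitted for brevity):
#
# ```python
# row = {
#     "doc_id": "doc-42",
#     "sentence_index": 0,
#     "tokens":   ["Barack", "Obama", "married", "Michelle", "Robinson", "."],
#     "lemmas":   ["Barack", "Obama", "marry", "Michelle", "Robinson", "."],
#     "pos_tags": ["NNP", "NNP", "VBD", "NNP", "NNP", "."],
#     "ner_tags": ["PERSON", "PERSON", "O", "PERSON", "PERSON", "O"],
# }
# ```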
# In[17]:

get_ipython().system('mkdir -p udf')

# In[18]:

get_ipython().run_cell_magic('file', 'udf/nlp_markup.sh', '#!/usr/bin/env bash\n# Parse documents in tab-separated JSONs input stream with CoreNLP\n#\n# $ deepdive corenlp install\n# $ deepdive corenlp start\n# $ deepdive env udf/nlp_markup.sh\n# $ deepdive corenlp stop\n##\nset -euo pipefail\ncd "$(dirname "$0")"\n\n# some configuration knobs for CoreNLP\n: ${CORENLP_PORT:=$(deepdive corenlp unique-port)}  # a CoreNLP server started ahead of time is shared across parallel UDF processes\n# See: http://stanfordnlp.github.io/CoreNLP/annotators.html\n: ${CORENLP_ANNOTATORS:="\n    tokenize\n    ssplit\n    pos\n    ner\n    lemma\n    depparse\n    "}\nexport CORENLP_PORT\nexport CORENLP_ANNOTATORS\n\n# make sure CoreNLP server is available\ndeepdive corenlp is-running || {\n    echo >&2 "PLEASE MAKE SURE YOU HAVE RUN: deepdive corenlp start"\n    false\n}\n\n# parse input with CoreNLP and output a row for every sentence\ndeepdive corenlp parse-tsj docid+ content=nlp -- docid nlp |\ndeepdive corenlp sentences-tsj docid content:nlp \\\n    -- docid nlp.{index,tokens.{word,lemma,pos,ner,characterOffsetBegin}} \\\n       nlp.collapsed-dependencies.{dep_type,dep_token}\n')

# Again, we mark it as executable for DeepDive to run it:

# In[19]:

get_ipython().system('chmod +x udf/nlp_markup.sh')

# Before executing this NLP markup step, we need to launch the CoreNLP server in advance, which may take a while to install and load everything.
# Note that the CoreNLP library requires Java 8 to run.

# In[20]:

get_ipython().system('deepdive corenlp install')
# If CoreNLP seems to take forever to start, retry after increasing its memory with the following line:
get_ipython().run_line_magic('env', 'CORENLP_JAVAOPTS=-Xmx4g')
get_ipython().system('deepdive corenlp start')

# In[21]:

get_ipython().system('deepdive redo sentences')

# Now, if we take a look at a sample of the NLP markups, they will have tokens and NER tags that look like the following:

# In[22]:

get_ipython().run_cell_magic('bash', '', "deepdive query '\n    doc_id, index, tokens, ner_tags | 5\n    ?- sentences(doc_id, index, tokens, lemmas, pos_tags, ner_tags, _, _, _).\n'\n")

# ### 1.3. Extracting candidate relation mentions
#
# #### Mentions of people
#
# Once again, we first declare the schema:

# In[23]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n## Candidate mapping ##########################################################\nperson_mention(\n    mention_id     text,\n    mention_text   text,\n    doc_id         text,\n    sentence_index int,\n    begin_index    int,\n    end_index      int\n).\n')

# We will store each person mention as a row referencing a sentence, along with its begin and end token indexes.
# Again, we next declare a function that references a UDF and takes as input the sentence tokens and NER tags:

# In[24]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\nfunction map_person_mention over (\n        doc_id text,\n        sentence_index int,\n        tokens text[],\n        ner_tags text[]\n    ) returns rows like person_mention\n    implementation "udf/map_person_mention.py" handles tsj lines.\n')

# We'll write a simple UDF in Python that will tag spans of contiguous tokens with the NER tag `PERSON` as person mentions (i.e., we'll essentially rely on CoreNLP's NER module).
# Note that we've already used a Bash script as a UDF, and indeed any programming language can be used.
# (DeepDive simply runs the executable, so the interpreter specified in its top shebang line, e.g., `#!/usr/bin/env python`, is respected.)
# However, DeepDive provides some convenient utilities for Python UDFs which handle all IO encoding/decoding.
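# As a minimal sketch of that pattern (a toy pass-through extractor, not part of this application), a TSJ-handling Python UDF has the following shape:
#
# ```python
# #!/usr/bin/env python
# from deepdive import *
#
# @tsj_extractor                 # decodes/encodes the TSJ lines for us
# @returns(lambda
#         value = "text",
#     :[])
# def extract(
#         value = "text",
#     ):
#     yield [value.lower()]      # one output row per yield
# ```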
# To write our UDF `udf/map_person_mention.py`, we'll start by specifying that our UDF will handle TSJ lines (as specified in the DDlog above).
# Additionally, we'll specify the exact type schema of both input and output, which DeepDive will check for us:

# In[25]:

get_ipython().run_cell_magic('file', 'udf/map_person_mention.py', '#!/usr/bin/env python\nfrom deepdive import *\n\n@tsj_extractor\n@returns(lambda\n        mention_id = "text",\n        mention_text = "text",\n        doc_id = "text",\n        sentence_index = "int",\n        begin_index = "int",\n        end_index = "int",\n    :[])\ndef extract(\n        doc_id = "text",\n        sentence_index = "int",\n        tokens = "text[]",\n        ner_tags = "text[]",\n    ):\n    """\n    Finds phrases that are continuous words tagged with PERSON.\n    """\n    num_tokens = len(ner_tags)\n    # find all first indexes of series of tokens tagged as PERSON\n    first_indexes = (i for i in xrange(num_tokens) if ner_tags[i] == "PERSON" and (i == 0 or ner_tags[i-1] != "PERSON"))\n    for begin_index in first_indexes:\n        # find the end of the PERSON phrase (consecutive tokens tagged as PERSON)\n        end_index = begin_index + 1\n        while end_index < num_tokens and ner_tags[end_index] == "PERSON":\n            end_index += 1\n        end_index -= 1\n        # generate a mention identifier\n        mention_id = "%s_%d_%d_%d" % (doc_id, sentence_index, begin_index, end_index)\n        mention_text = " ".join(map(lambda i: tokens[i], xrange(begin_index, end_index + 1)))\n        # Output a tuple for each PERSON phrase\n        yield [\n            mention_id,\n            mention_text,\n            doc_id,\n            sentence_index,\n            begin_index,\n            end_index,\n        ]\n')

# In[26]:

get_ipython().system('chmod +x udf/map_person_mention.py')

# Above, we write a simple function which extracts and tags all subsequences of tokens having the NER tag "PERSON".
# Note that the `extract` function must be a generator (i.e., use a `yield` statement to return output rows).
#
# Finally, we specify that the function will be applied to rows from the `sentences` table, with its output appended to the `person_mention` table:

# In[27]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\nperson_mention += map_person_mention(\n    doc_id, sentence_index, tokens, ner_tags\n) :-\n    sentences(doc_id, sentence_index, tokens, _, _, ner_tags, _, _, _).\n')

# Again, to run, just compile and execute as in the previous steps:

# In[28]:

get_ipython().system('deepdive redo person_mention')

# In[29]:

get_ipython().run_cell_magic('bash', '', "deepdive query '\n    name, doc, sentence, begin, end | 20\n    ?- person_mention(p_id, name, doc, sentence, begin, end).\n'\n")

# #### Mentions of spouses (pairs of people)
#
# Next, we'll take all pairs of **non-overlapping person mentions that co-occur in a sentence with fewer than 5 people total**, and consider these as the set of potential ('candidate') spouse mentions.
# We thus filter out sentences with large numbers of people for the purposes of this tutorial; however, these could be included if desired.
# Again, to start, we declare the schema for our `spouse_candidate` table—here just the two names, and the two `person_mention` IDs referred to:

# In[30]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\nspouse_candidate(\n    p1_id   text,\n    p1_name text,\n    p2_id   text,\n    p2_name text\n).\n')

# Next, for this operation we don't use any UDF script, relying instead entirely on DDlog operations.
# We simply construct a table of person counts per sentence, and then do a join with our filtering conditions.
# In DDlog this looks like:

# In[31]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\nnum_people(doc_id, sentence_index, COUNT(p)) :-\n    person_mention(p, _, doc_id, sentence_index, _, _).\n\nspouse_candidate(p1, p1_name, p2, p2_name) :-\n    num_people(same_doc, same_sentence, num_p),\n    person_mention(p1, p1_name, same_doc, same_sentence, p1_begin, _),\n    person_mention(p2, p2_name, same_doc, same_sentence, p2_begin, _),\n    num_p < 5,\n    p1 < p2,\n    p1_name != p2_name,\n    p1_begin != p2_begin.\n')

# Now, let's tell DeepDive to run what we have so far:

# In[32]:

get_ipython().system('deepdive redo spouse_candidate')

# In[33]:

get_ipython().run_cell_magic('bash', '', "deepdive query '\n    name1, name2, doc, sentence | 20\n    ?- spouse_candidate(p1, name1, p2, name2),\n       person_mention(p1, _, doc, sentence, _, _).\n'\n")

# ### 1.4. Extracting features for each candidate
#
# Finally, we will extract a set of **features** for each candidate:

# In[34]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n## Feature Extraction #########################################################\n\n# Feature extraction (using DDLIB via a UDF) at the relation level\nspouse_feature(\n    p1_id   text,\n    p2_id   text,\n    feature text\n).\n')

# The goal here is to represent each spouse candidate mention by a set of attributes or **_features_** which capture at least the key aspects of the mention, and then let a machine learning model learn how much each feature is correlated with our decision variable ('is this a spouse mention?').
# For those who have worked with machine learning systems before, note that we are using a sparse storage representation:
# you could think of a spouse candidate `(p1_id, p2_id)` as being represented by a vector of length `L = COUNT(DISTINCT feature)`, consisting of all zeros except at the indexes specified by the rows with key `(p1_id, p2_id)`.
#
# DeepDive includes an [automatic feature generation library, DDlib](http://deepdive.stanford.edu/gen_feats), which we will use here.
# Although many state-of-the-art [applications](http://deepdive.stanford.edu/showcase/apps) have been built using purely DDlib-generated features, other feature libraries can be used and/or added as well.
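# To make the sparse representation described above concrete, here is a toy sketch (the feature strings are hypothetical; real DDlib features look different):
#
# ```python
# # rows of spouse_feature: (p1_id, p2_id, feature)
# rows = [
#     ("m1", "m2", "WORD_BETWEEN_[wife]"),
#     ("m1", "m2", "NGRAM_LEFT_[married]"),
#     ("m3", "m4", "WORD_BETWEEN_[brother]"),
# ]
# features = sorted({f for _, _, f in rows})   # L = COUNT(DISTINCT feature)
# index = {f: i for i, f in enumerate(features)}
#
# # dense view of candidate ("m1", "m2"): zeros except at its features
# dense = [0] * len(features)
# for p1, p2, f in rows:
#     if (p1, p2) == ("m1", "m2"):
#         dense[index[f]] = 1
# print(dense)                                 # [1, 0, 1] for this toy data
# ```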
# To use DDlib, we create a list of `ddlib.Word` objects, two `ddlib.Span` objects, and then use the function `get_generic_features_relation`, as shown in the following Python code for `udf/extract_spouse_features.py`:

# In[35]:

get_ipython().run_cell_magic('file', 'udf/extract_spouse_features.py', '#!/usr/bin/env python\nfrom deepdive import *\nimport ddlib\n\n@tsj_extractor\n@returns(lambda\n        p1_id = "text",\n        p2_id = "text",\n        feature = "text",\n    :[])\ndef extract(\n        p1_id = "text",\n        p2_id = "text",\n        p1_begin_index = "int",\n        p1_end_index = "int",\n        p2_begin_index = "int",\n        p2_end_index = "int",\n        doc_id = "text",\n        sent_index = "int",\n        tokens = "text[]",\n        lemmas = "text[]",\n        pos_tags = "text[]",\n        ner_tags = "text[]",\n        dep_types = "text[]",\n        dep_parents = "int[]",\n    ):\n    """\n    Uses DDLIB to generate features for the spouse relation.\n    """\n    # Create a DDLIB sentence object, which is just a list of DDLIB Word objects\n    sent = []\n    for i,t in enumerate(tokens):\n        sent.append(ddlib.Word(\n            begin_char_offset=None,\n            end_char_offset=None,\n            word=t,\n            lemma=lemmas[i],\n            pos=pos_tags[i],\n            ner=ner_tags[i],\n            dep_par=dep_parents[i] - 1,  # Note that as stored from CoreNLP 0 is ROOT, but for DDLIB -1 is ROOT\n            dep_label=dep_types[i]))\n\n    # Create DDLIB Spans for the two person mentions\n    p1_span = ddlib.Span(begin_word_id=p1_begin_index, length=(p1_end_index-p1_begin_index+1))\n    p2_span = ddlib.Span(begin_word_id=p2_begin_index, length=(p2_end_index-p2_begin_index+1))\n\n    # Generate the generic features using DDLIB\n    for feature in ddlib.get_generic_features_relation(sent, p1_span, p2_span):\n        yield [p1_id, p2_id, feature]\n')

# In[36]:

get_ipython().system('chmod +x udf/extract_spouse_features.py')

# Note that getting the input for this UDF requires joining the `sentences` table with two instances of the `person_mention` table:

# In[37]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\nfunction extract_spouse_features over (\n        p1_id text,\n        p2_id text,\n        p1_begin_index int,\n        p1_end_index int,\n        p2_begin_index int,\n        p2_end_index int,\n        doc_id text,\n        sent_index int,\n        tokens text[],\n        lemmas text[],\n        pos_tags text[],\n        ner_tags text[],\n        dep_types text[],\n        dep_tokens int[]\n    ) returns rows like spouse_feature\n    implementation "udf/extract_spouse_features.py" handles tsj lines.\n\nspouse_feature += extract_spouse_features(\n    p1_id, p2_id, p1_begin_index, p1_end_index, p2_begin_index, p2_end_index,\n    doc_id, sent_index, tokens, lemmas, pos_tags, ner_tags, dep_types, dep_tokens\n) :-\n    person_mention(p1_id, _, doc_id, sent_index, p1_begin_index, p1_end_index),\n    person_mention(p2_id, _, doc_id, sent_index, p2_begin_index, p2_end_index),\n    sentences(doc_id, sent_index, tokens, lemmas, pos_tags, ner_tags, _, dep_types, dep_tokens).\n')

# Now, let's execute this UDF to get our features:

# In[38]:

get_ipython().system('deepdive redo spouse_feature')

# If we take a look at a sample of the extracted features, they will look roughly like the following:

# In[39]:

get_ipython().system("deepdive query '| 20 ?- spouse_feature(_, _, feature).'")

# Now we have generated what looks more like the standard input to a machine learning problem—a set of objects, represented by sets of features, which we want to classify (here, as true or false mentions of a spousal relation).
# However, we **don't have any supervised labels** (i.e., a set of correct answers) for a machine learning algorithm to learn from!
# In most real-world applications, a sufficiently large set of supervised labels is _not_ available.
# With DeepDive, we take the approach sometimes referred to as _distant supervision_ or _data programming_, where we instead generate a **noisy set of labels, using a mix of mappings from secondary datasets and other heuristic rules**.

# ## 2. Distant supervision with data and rules
#
# In this section, we'll use _distant supervision_ (or '_data programming_') to provide a noisy set of labels for candidate relation mentions, with which we will train a machine learning model.
#
# We'll describe two basic categories of approaches:
#
# 1. Mapping from secondary data for distant supervision
# 2. Using heuristic rules for distant supervision
#
# Then, we'll describe a simple majority-vote approach to resolving multiple labels per example, which can be implemented entirely within DDlog.
#
# Let's declare a new table where we'll store the labels (referring to the spouse candidate mentions), with an integer value (`True=1, False=-1`) and a description (`rule_id`):

# In[40]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n## Distant Supervision ########################################################\nspouse_label(\n    p1_id   text,\n    p2_id   text,\n    label   int,\n    rule_id text\n).\n')

# Let's first cover all the spouse candidate mentions with a neutral label of 0 and a `NULL` rule; this just simplifies some later steps:

# In[41]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n# make sure all pairs in spouse_candidate are considered as unsupervised examples\nspouse_label(p1,p2, 0, NULL) :-\n    spouse_candidate(p1, _, p2, _).\n')

# ### 2.1. Mapping from secondary data for distant supervision
#
# First, we'll try using an external structured dataset of known married couples, from [DBpedia](http://wiki.dbpedia.org/), to distantly supervise our dataset.
# We'll download the relevant data, and then map it to our candidate spouse relations.
#
# #### Extracting and downloading the DBpedia data
#
# Our goal is to first extract a collection of known married couples from DBpedia and then load this into the `spouses_dbpedia` table in our database.
# To extract known married couples, we use the DBpedia dump present in [Google's BigQuery platform](https://bigquery.cloud.google.com).
# First, we extract the URI, name, and spouse information from the DBpedia `person` table records in BigQuery for which the field `name` is not NULL.
# We use the following query:
#
# ```sql
# SELECT URI, name, spouse
# FROM [fh-bigquery:dbpedia.person]
# WHERE name <> "NULL"
# ```
#
# We store the result of the above query in a local project table `dbpedia.validnames` and perform a self-join to obtain the pairs of married couples:
#
# ```sql
# SELECT t1.name, t2.name
# FROM [dbpedia.validnames] AS t1
# JOIN EACH [dbpedia.validnames] AS t2
# ON t1.spouse = t2.URI
# ```
#
# The output of the above query is stored in a new table named `dbpedia.spouseraw`.
# Finally, we use the following query to remove symmetric duplicates:
#
# ```sql
# SELECT p1, p2
# FROM (SELECT t1_name AS p1, t2_name AS p2 FROM [dbpedia.spouseraw]),
#      (SELECT t2_name AS p1, t1_name AS p2 FROM [dbpedia.spouseraw])
# WHERE p1 < p2
# ```
#
# The output of this query is stored in a local file.
# The file contains duplicate rows (BigQuery's legacy SQL does not support `DISTINCT`).
# It also contains noisy rows in which the name field concatenates the given name, the family name, and multiple aliases into a single string containing the characters `{` and `}`.
# Using the Unix commands `sed`, `sort`, and `uniq`, we first remove the lines containing the characters `{` and `}`, and then remove the duplicate entries.
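# A minimal Python sketch of that cleanup (assuming the raw BigQuery export is in a hypothetical file `spouseraw.csv`; this is illustrative only, as the cleaned file is provided for download below):
#
# ```python
# seen = set()
# with open("spouseraw.csv") as raw, open("spouses_dbpedia.csv", "w") as out:
#     for line in raw:
#         if "{" in line or "}" in line:   # drop rows with concatenated aliases
#             continue
#         if line not in seen:             # drop duplicate rows
#             seen.add(line)
#             out.write(line)
# ```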
# This results in an input file `spouses_dbpedia.csv` containing 6,126 entries of married couples.
#
# *Note that we made this [`spouses_dbpedia.csv` available for download from GitHub](https://github.com/HazyResearch/deepdive/blob/master/examples/spouse/input/spouses_dbpedia.csv.bz2), so you don't have to repeat the above process.*

# #### Loading DBpedia data to the database
#
# To load the known married couples data into DeepDive, we first declare the schema in DDlog:

# In[42]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n# distant supervision using data from DBpedia\nspouses_dbpedia(\n    person1_name text,\n    person2_name text\n).\n')

# Notice that we can easily load the data in `spouses_dbpedia.csv` into the table we just declared if we follow DeepDive's convention of organizing input data under the `input/` directory.
# The input file name simply needs to start with the target database table name.
# Let's download the file from GitHub to `input/spouses_dbpedia.csv.bz2` under our application:

# In[43]:

get_ipython().system('cd input && curl -RLO "https://github.com/HazyResearch/deepdive/raw/master/examples/spouse/input/spouses_dbpedia.csv.bz2"')

# Then execute this command to load it into the database:

# In[44]:

get_ipython().system('deepdive redo spouses_dbpedia')

# Now the database should include tuples that look like the following:

# In[45]:

get_ipython().system("deepdive query '| 20 ?- spouses_dbpedia(name1, name2).'")

# #### Supervising spouse candidates with DBpedia data
#
# Next we'll implement a simple distant supervision rule which labels as true any spouse candidate whose pair of names appears in DBpedia:

# In[46]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\nspouse_label(p1,p2, 1, "from_dbpedia") :-\n    spouse_candidate(p1, p1_name, p2, p2_name),\n    spouses_dbpedia(n1, n2),\n    [ lower(n1) = lower(p1_name), lower(n2) = lower(p2_name) ;\n      lower(n2) = lower(p1_name), lower(n1) = lower(p2_name) ].\n')

# It should be noted that there are many obvious ways in which this rule could be improved (fuzzy matching, more restrictive conditions, etc.), but it serves as an example of one major type of distant supervision rule.

# ### 2.2. Using heuristic rules for distant supervision
#
# We can also create a supervision rule which does not rely on any secondary structured dataset like DBpedia, but instead just uses heuristics.
# We set up a DDlog function, `supervise`, which uses a UDF containing several heuristic rules over the mention and sentence attributes:

# In[47]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n# supervision by heuristic rules in a UDF\nfunction supervise over (\n        p1_id text, p1_begin int, p1_end int,\n        p2_id text, p2_begin int, p2_end int,\n        doc_id         text,\n        sentence_index int,\n        tokens         text[],\n        lemmas         text[],\n        pos_tags       text[],\n        ner_tags       text[],\n        dep_types      text[],\n        dep_tokens     int[]\n    ) returns (\n        p1_id text, p2_id text, label int, rule_id text\n    )\n    implementation "udf/supervise_spouse.py" handles tsj lines.\n\nspouse_label += supervise(\n    p1_id, p1_begin, p1_end,\n    p2_id, p2_begin, p2_end,\n    doc_id, sentence_index,\n    tokens, lemmas, pos_tags, ner_tags, dep_types, dep_token_indexes\n) :-\n    spouse_candidate(p1_id, _, p2_id, _),\n    person_mention(p1_id, p1_text, doc_id, sentence_index, p1_begin, p1_end),\n    person_mention(p2_id, p2_text, _, _, p2_begin, p2_end),\n    sentences(\n        doc_id, sentence_index,\n        tokens, lemmas, pos_tags, ner_tags, _, dep_types, dep_token_indexes\n    ).\n')

# The Python UDF named [`udf/supervise_spouse.py`](https://github.com/HazyResearch/deepdive/blob/master/examples/spouse/udf/supervise_spouse.py) contains several heuristic rules:
#
# * Candidates with person mentions that are too far apart in the sentence are marked as false.
# * Candidates with person mentions that have another person in between are marked as false.
# * Candidates with person mentions that have words like "wife" or "husband" in between are marked as true.
# * Candidates with person mentions that have "and" in between and "married" after are marked as true.
# * Candidates with person mentions that have familial relation words in between are marked as false.
# In[48]:

get_ipython().run_cell_magic('file', 'udf/supervise_spouse.py', '#!/usr/bin/env python\nfrom deepdive import *\nfrom collections import namedtuple\n\nSpouseLabel = namedtuple(\'SpouseLabel\', \'p1_id, p2_id, label, type\')\n\n@tsj_extractor\n@returns(lambda\n        p1_id = "text",\n        p2_id = "text",\n        label = "int",\n        rule_id = "text",\n    :[])\n# heuristic rules for finding positive/negative examples of spouse relationship mentions\ndef supervise(\n        p1_id="text", p1_begin="int", p1_end="int",\n        p2_id="text", p2_begin="int", p2_end="int",\n        doc_id="text", sentence_index="int",\n        tokens="text[]", lemmas="text[]", pos_tags="text[]", ner_tags="text[]",\n        dep_types="text[]", dep_token_indexes="int[]",\n    ):\n\n    # Constants\n    MARRIED = frozenset(["wife", "husband"])\n    FAMILY = frozenset(["mother", "father", "sister", "brother", "brother-in-law"])\n    MAX_DIST = 10\n\n    # Common data objects\n    p1_end_idx = min(p1_end, p2_end)\n    p2_start_idx = max(p1_begin, p2_begin)\n    p2_end_idx = max(p1_end,p2_end)\n    intermediate_lemmas = lemmas[p1_end_idx+1:p2_start_idx]\n    intermediate_ner_tags = ner_tags[p1_end_idx+1:p2_start_idx]\n    tail_lemmas = lemmas[p2_end_idx+1:]\n    spouse = SpouseLabel(p1_id=p1_id, p2_id=p2_id, label=None, type=None)\n\n    # Rule: Candidates that are too far apart\n    if len(intermediate_lemmas) > MAX_DIST:\n        yield spouse._replace(label=-1, type=\'neg:far_apart\')\n\n    # Rule: Candidates that have a third person in between\n    if \'PERSON\' in intermediate_ner_tags:\n        yield spouse._replace(label=-1, type=\'neg:third_person_between\')\n\n    # Rule: Sentences that contain wife/husband in between\n    #         (<p1>)([ A-Za-z]+)(wife|husband)([ A-Za-z]+)(<p2>)\n    if len(MARRIED.intersection(intermediate_lemmas)) > 0:\n        yield spouse._replace(label=1, type=\'pos:wife_husband_between\')\n\n    # Rule: Sentences that contain "and ... married"\n    #         (<p1>)(and)?(<p2>)([ A-Za-z]+)(married)\n    if ("and" in intermediate_lemmas) and ("married" in tail_lemmas):\n        yield spouse._replace(label=1, type=\'pos:married_after\')\n\n    # Rule: Sentences that contain familial relation words in between\n    #         (<p1>)([ A-Za-z]+)(brother|sister|father|mother)([ A-Za-z]+)(<p2>)\n    if len(FAMILY.intersection(intermediate_lemmas)) > 0:\n        yield spouse._replace(label=-1, type=\'neg:familial_between\')\n')

# In[49]:

get_ipython().system('chmod +x udf/supervise_spouse.py')

# Note that the rough theory behind this approach is that we don't need high-quality (e.g., hand-labeled) supervision to learn a high-quality model.
# Instead, using statistical learning, we can in fact recover high-quality models from a large set of low-quality or **_noisy_** labels.

# ### 2.3. Resolving multiple labels per example with majority vote
#
# Finally, we implement a very simple majority vote procedure, all in DDlog, for resolving scenarios where a single spouse candidate mention has multiple conflicting labels.
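# In plain Python terms, the resolution we are about to write in DDlog amounts to the following (toy labels for a single candidate):
#
# ```python
# votes = [1, 1, -1, 0]   # labels from different rules, plus the 0 default
# total = sum(votes)      # "majority vote" by summing
# label = True if total > 0 else False if total < 0 else None
# print(label)            # True: the two positive rules outvote the negative one
# ```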
# First, we sum the labels (which are all -1, 0, or 1):

# In[50]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n# resolve multiple labels by majority vote (summing the labels in {-1,0,1})\nspouse_label_resolved(p1_id, p2_id, SUM(vote)) :-\n    spouse_label(p1_id, p2_id, vote, rule_id).\n')

# Then, we simply threshold and add these labels to our decision variable table `has_spouse` (see the next section for details):

# In[51]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n# assign the resolved labels for the spouse relation\nhas_spouse(p1_id, p2_id) = if l > 0 then TRUE\n                      else if l < 0 then FALSE\n                      else NULL end :- spouse_label_resolved(p1_id, p2_id, l).\n')

# Once again, to execute all of the above, just run the following command:

# In[52]:

get_ipython().system('deepdive redo has_spouse')

# Recall that `deepdive redo` will execute all upstream tasks as well, so this will execute all of the previous steps!
#
# Now, we can take a brief look at how many candidates are supervised by each rule, which will look something like the table below.
# Obviously, the counts will vary depending on your input corpus.

# In[53]:

get_ipython().system("deepdive query 'rule, @order_by COUNT(1) ?- spouse_label(p1,p2, label, rule).'")

# ## 3. Learning and inference: model specification
#
# Now, we need to specify the actual model that DeepDive will perform learning and inference over.
# At a high level, this boils down to specifying three things:
#
# 1. What are the _variables_ of interest that we want DeepDive to predict for us?
#
# 2. What are the _features_ for each of these variables?
#
# 3. What are the _connections_ between the variables?
#
# Once we have specified the model in this way, DeepDive will _learn_ the parameters of the model (the weights of the features and potentially of the connections between variables), and then perform _statistical inference_ over the learned model to determine the probability that each variable of interest is true.
#
# For more advanced users: we are specifying a _factor graph_ where the features are unary factors, and then using SGD and Gibbs sampling for learning and inference.
# Further technical detail is available in the [DeepDive documentation](http://deepdive.stanford.edu/).

# ### 3.1. Specifying prediction variables
#
# In our case, we have one variable to predict per spouse candidate mention, namely: **is this mention actually indicating a spousal relation or not?**
# In other words, we want DeepDive to predict the value of a Boolean variable for each spouse candidate mention, indicating whether it is true or not.
# Recall that we started this tutorial by specifying this at the beginning of [`app.ddlog`](app.ddlog) as follows:
#
# ```ddlog
# has_spouse?(
#     p1_id text,
#     p2_id text
# ).
# ```
#
# DeepDive will predict not only the value of these variables, but also the marginal probabilities, i.e., the confidence level that DeepDive has in each individual prediction.

# ### 3.2. Specifying features
#
# Next, we indicate (i) that each `has_spouse` variable will be connected to the features of the corresponding `spouse_candidate` row, (ii) that we wish DeepDive to learn the weights of these features from our distantly supervised data, and (iii) that the weight of a specific feature across all instances should be the same, as follows:

# In[54]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n## Inference Rules ############################################################\n\n# Features\n@weight(f)\nhas_spouse(p1_id, p2_id) :-\n    spouse_feature(p1_id, p2_id, f).\n')
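# Considered in isolation (i.e., ignoring the connections added in the next step), a Boolean variable with such per-feature unary factors behaves like logistic regression: its marginal probability is the sigmoid of the sum of the weights of its active features. A toy sketch with hypothetical learned weights:
#
# ```python
# import math
#
# weights = {                    # hypothetical learned feature weights
#     "WORD_BETWEEN_[wife]": 1.8,
#     "NGRAM_LEFT_[married]": 0.6,
# }
# score = sum(weights.values())                # sum of unary factor weights
# marginal = 1.0 / (1.0 + math.exp(-score))    # ~0.92 for this toy candidate
# ```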
# ### 3.3. Specifying connections between variables
#
# Finally, we can specify dependencies between the prediction variables, with either learned or given weights.
# Here, we'll specify two such rules, with fixed (given) weights that we specify.
# First, we define a _symmetry_ connection, namely specifying that if the model thinks a person mention `p1` and a person mention `p2` indicate a spousal relationship in a sentence, then it should also think that the reverse is true, i.e., that `p2` and `p1` indicate one too:

# In[55]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n# Inference rule: Symmetry\n@weight(3.0)\nhas_spouse(p1_id, p2_id) => has_spouse(p2_id, p1_id) :-\n    TRUE.\n')

# Next, we specify a rule that the model should be strongly biased towards finding at most one marriage indication per person mention.
# We do this inversely, using a negative weight, as follows:

# In[56]:

get_ipython().run_cell_magic('file', '-a app.ddlog', '\n# Inference rule: Only one marriage\n@weight(-1.0)\nhas_spouse(p1_id, p2_id) => has_spouse(p1_id, p3_id) :-\n    TRUE.\n')

# ### 3.4. Performing learning and inference
#
# Finally, to perform learning and inference using the specified model, we run the following command:

# In[57]:

get_ipython().system('deepdive redo probabilities')

# This will ground the model based on the data in the database, learn the weights, infer the expectations or marginal probabilities of the variables in the model, and then load them back into the database.
#
# Let's take a look at the probabilities inferred by DeepDive for the `has_spouse` variables:

# In[58]:

get_ipython().system("deepdive sql 'SELECT p1_id, p2_id, expectation FROM has_spouse_inference ORDER BY random() LIMIT 20'")

# ## 4. Error analysis and debugging
#
# After finishing a pass of writing and running the DeepDive application, the first thing we want to see is how good the results are.
# In this section, we describe how DeepDive's interactive tools can be used for viewing the results as well as for error analysis and debugging.

# ### 4.1. Calibration Plots
#
# DeepDive provides *calibration plots* to see how well the expectations computed by the system are calibrated.
# The following command generates a plot for each variable under `run/model/calibration-plots/`.

# In[ ]:

get_ipython().system('deepdive do calibration-plots')

# It will produce a file `run/model/calibration-plots/has_spouse.png` that holds three plots, as shown below:
#
# ![Calibration plot for spouse example](run/model/calibration-plots/has_spouse.png)
#
# Refer to the [full documentation on calibration data](http://deepdive.stanford.edu/calibration) for more detail on how to interpret the plots and take actions.

# ### 4.2. Browsing data with Mindbender
#
# *Mindbender* is the name of the tool that provides an interactive user interface to DeepDive.
# It can be used for browsing any data that has been loaded into DeepDive or produced by it.
#
# #### Browsing input corpus
#
# We need to give hints to DeepDive about which part of the data we want to browse [using DDlog's annotation](http://deepdive.stanford.edu/browsing#ddlog-annotations-for-browsing).
# For example, on the `articles` relation we declared earlier in `app.ddlog`, we can sprinkle some annotations such as `@source`, `@key`, and `@searchable`, as follows:
#
# ```ddlog
# @source
# articles(
#     @key
#     id text,
#     @searchable
#     content text
# ).
# ```
#
# The fully annotated DDlog code is available at GitHub and can be downloaded to replace your `app.ddlog` by running the following command:

# In[ ]:

get_ipython().system('curl -RLO "https://github.com/HazyResearch/deepdive/raw/master/examples/spouse/app.ddlog"')

# Next, if we run the following command, DeepDive will create and populate a search index according to these hints.

# In[ ]:

get_ipython().system('mindbender search drop; mindbender search update')

# To access the populated search index through a web browser, run:

# In[ ]:

get_ipython().system('mindbender search gui')

# Then, point your browser to the URL that appears after the command to see a view that looks like the following:
#
# ![Screenshot of the search interface showing input corpus](https://github.com/HazyResearch/deepdive/raw/master/doc/images/browsing_corpus.png)

# #### Browsing result data
#
# To browse the results, we can add annotations describing the inferred relations and how they relate to their source relations.
# For example, the `@extraction` and `@references` annotations in the following DDlog declaration tell DeepDive that the variable relation `has_spouse` is inferred from pairs of `person_mention`:
#
# ```ddlog
# @extraction
# has_spouse?(
#     @key
#     @references(relation="person_mention", column="mention_id", alias="p1")
#     p1_id text,
#     @key
#     @references(relation="person_mention", column="mention_id", alias="p2")
#     p2_id text
# ).
# ```
#
# The relation `person_mention` as well as the relations it references should have similar annotations (see the [complete `app.ddlog` code](../examples/spouse/app.ddlog) for full detail).
#
# Then, repeating the commands to update the search index and load the user interface will allow us to browse the expected marginal probabilities of `has_spouse` as well.
#
# ![Screenshot of the search interface showing results](https://github.com/HazyResearch/deepdive/raw/master/doc/images/browsing_results.png)

# #### Customizing how data is presented
#
# In fact, the screenshots above show the data presented using a [carefully prepared set of templates under `mindbender/search-template/`](https://github.com/HazyResearch/deepdive/tree/master/examples/spouse/mindbender/search-template/).
# In these AngularJS templates, virtually anything you can program in HTML/CSS/JavaScript/CoffeeScript can be added to present the data in a way that is ideal for human consumption (e.g., highlighted text spans rather than token indexes).
# Please see the [documentation about customizing the presentation](http://deepdive.stanford.edu/browsing#customizing-presentation) for further detail.

# ### 4.3. Estimating precision with Mindtagger
#
# *Mindtagger*, which is part of the Mindbender tool suite, assists data labeling tasks to quickly assess the precision and/or recall of the extraction.
# Here we show how Mindtagger helps us perform a labeling task to estimate the precision of the extraction.
# The necessary set of files shown below already exists [in the example under `labeling/has_spouse-precision/`](https://github.com/HazyResearch/deepdive/tree/master/examples/spouse/labeling/has_spouse-precision/).
# #### Preparing a data labeling task
#
# First, we take a random sample of 100 examples from the `has_spouse` relation whose expectation is at least 0.9, as shown in [the following SQL query](../examples/spouse/labeling/has_spouse-precision/sample-has_spouse.sql), and store them in [a file called `has_spouse.csv`](../examples/spouse/labeling/has_spouse-precision/has_spouse.csv):

# In[ ]:

get_ipython().system('mkdir -p labeling/has_spouse-precision/')

# In[ ]:

get_ipython().run_cell_magic('bash', '', 'deepdive sql eval "\n\nSELECT hsi.p1_id\n     , hsi.p2_id\n     , s.doc_id\n     , s.sentence_index\n     , hsi.dd_label\n     , hsi.expectation\n     , s.tokens\n     , pm1.mention_text AS p1_text\n     , pm1.begin_index  AS p1_start\n     , pm1.end_index    AS p1_end\n     , pm2.mention_text AS p2_text\n     , pm2.begin_index  AS p2_start\n     , pm2.end_index    AS p2_end\n\n  FROM has_spouse_inference hsi\n     , person_mention pm1\n     , person_mention pm2\n     , sentences s\n\n WHERE hsi.p1_id          = pm1.mention_id\n   AND pm1.doc_id         = s.doc_id\n   AND pm1.sentence_index = s.sentence_index\n   AND hsi.p2_id          = pm2.mention_id\n   AND pm2.doc_id         = s.doc_id\n   AND pm2.sentence_index = s.sentence_index\n   AND expectation       >= 0.9\n\n ORDER BY random()\n LIMIT 100\n\n" format=csv header=1 >labeling/has_spouse-precision/has_spouse.csv\n')

# We also prepare the [`mindtagger.conf`](https://github.com/HazyResearch/deepdive/blob/master/examples/spouse/labeling/has_spouse-precision/mindtagger.conf) and [`template.html`](https://github.com/HazyResearch/deepdive/blob/master/examples/spouse/labeling/has_spouse-precision/template.html) files under [`labeling/has_spouse-precision/`](https://github.com/HazyResearch/deepdive/blob/master/examples/spouse/labeling/has_spouse-precision/) that look like the following:

# In[ ]:

get_ipython().run_cell_magic('file', 'labeling/has_spouse-precision/mindtagger.conf', 'title: Labeling task for estimating has_spouse precision\nitems: {\n    file: has_spouse.csv\n    key_columns: [p1_id, p2_id]\n}\ntemplate: template.html\n')

# In[ ]:

get_ipython().run_cell_magic('file', 'labeling/has_spouse-precision/template.html', '\n\n \n\n \n\n\n')

# #### Labeling data with Mindtagger
#
# Mindtagger can then be started for the task using the following command:

# In[ ]:

get_ipython().system('mindbender tagger labeling/has_spouse-precision/mindtagger.conf')

# Then, point your browser to the URL that appears after the command to see a dedicated user interface for labeling data that looks like the following:
#
# ![Screenshot of the labeling interface showing the sampled data](https://github.com/HazyResearch/deepdive/raw/master/doc/images/mindtagger_screenshot.png)
#
# We can quickly label the sampled 100 examples using the intuitive user interface with buttons for correct/incorrect tags.
# It also supports keyboard shortcuts for entering labels and moving between items.
# (Press the ? key to view all supported keys.)
# The counts of examples labeled correct, as well as other tags, are shown in the "Tags" dropdown at the top right corner, as shown below:
#
# ![Screenshot of the labeling interface showing tag statistics](https://github.com/HazyResearch/deepdive/raw/master/doc/images/mindtagger_screenshot_tags.png)
#
# The collected tags can also be exported in various formats for post-processing.
# ![Screenshot of the labeling interface for exporting tags](https://github.com/HazyResearch/deepdive/raw/master/doc/images/mindtagger_screenshot_export.png)
#
# For further detail, see the [documentation about labeling data](http://deepdive.stanford.edu/labeling).

# ### 4.4. Monitoring statistics with Dashboard
#
# *Dashboard* provides a way to monitor various descriptive statistics of the data products after each pass of improvements to the DeepDive application.
# Each *report template* can use a combination of SQL, Bash scripts, and Markdown to produce a *report*, and a collection of reports can be produced as a *snapshot* against the data extracted by DeepDive.
# Dashboard provides a structure to manage those templates and to instantiate them in a sophisticated way using parameters.
# It also provides a graphical interface for visualizing the collected statistics and trends, as shown below.
# Refer to the [full documentation on Dashboard](http://deepdive.stanford.edu/dashboard) to set up your own set of reports.
#
# ![Screenshot of Dashboard Reports](https://github.com/HazyResearch/deepdive/raw/master/doc/images/dashboard/supervision_report.png)
# ![Screenshot of Dashboard Trends](https://github.com/HazyResearch/deepdive/raw/master/doc/images/dashboard/homepage.png)