Homework 4: Word-level entailment with neural networks

In [1]:
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2019"

Overview

The general problem is word-level natural language inference.

Training examples are pairs of words $(w_{L}, w_{R}), y$ with $y = 1$ if $w_{L}$ entails $w_{R}$, otherwise $0$.

The homework questions below ask you to define baseline models for this and develop your own system for entry in the bake-off, which will take place on a held-out test set distributed at the start of the bake-off. (Thus, all the data you have available for development is available for training your final system before the bake-off begins.)

[Figure: wordentail-diagram.png]

Set-up

See the first notebook in this unit for set-up instructions.

In [2]:
from collections import defaultdict
import json
import numpy as np
import os
import pandas as pd
from torch_shallow_neural_classifier import TorchShallowNeuralClassifier
import nli
import utils
In [3]:
DATA_HOME = 'data'

NLIDATA_HOME = os.path.join(DATA_HOME, 'nlidata')

wordentail_filename = os.path.join(
    NLIDATA_HOME, 'nli_wordentail_bakeoff_data.json')

GLOVE_HOME = os.path.join(DATA_HOME, 'glove.6B')

Data

I've processed the data into two different train/dev splits, in an effort to put some pressure on our models to actually learn these semantic relations, as opposed to exploiting regularities in the sample.

  • edge_disjoint: The train and dev edge sets are disjoint, but many words appear in both train and dev.
  • word_disjoint: The train and dev vocabularies are disjoint, and thus the edges are disjoint as well.

These are very different problems. For word_disjoint, there is real pressure on the model to learn abstract relationships, as opposed to memorizing properties of individual words.

In [4]:
with open(wordentail_filename, encoding='utf8') as f:
    wordentail_data = json.load(f)

The outer keys are the splits plus a list giving the vocabulary for the entire dataset:

In [5]:
wordentail_data.keys()
Out[5]:
dict_keys(['edge_disjoint', 'vocab', 'word_disjoint'])

Edge disjoint

In [6]:
wordentail_data['edge_disjoint'].keys()
Out[6]:
dict_keys(['dev', 'train'])

This is what the data look like; the train and dev portions of both splits have this same format:

In [7]:
wordentail_data['edge_disjoint']['dev'][: 5]
Out[7]:
[[['sweater', 'stroke'], 0],
 [['constipation', 'hypovolemia'], 0],
 [['disease', 'inflammation'], 0],
 [['herring', 'animal'], 1],
 [['cauliflower', 'outlook'], 0]]
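
Each example is a word pair plus a label, matching the $(w_{L}, w_{R}), y$ notation above. Here's a quick sketch of unpacking one:

In [ ]:
# Unpack the first dev example into the ((w_L, w_R), y) form:
(w_L, w_R), y = wordentail_data['edge_disjoint']['dev'][0]
w_L, w_R, y  # ('sweater', 'stroke', 0)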

Let's test to make sure no edges are shared between train and dev:

In [8]:
nli.get_edge_overlap_size(wordentail_data, 'edge_disjoint')
Out[8]:
0

As we expect, a lot of vocabulary items are shared between train and dev:

In [9]:
nli.get_vocab_overlap_size(wordentail_data, 'edge_disjoint')
Out[9]:
2916

This is a large fraction of the entire vocab (about a third):

In [10]:
len(wordentail_data['vocab'])
Out[10]:
8470

Here's the distribution of labels in the train set. It's highly imbalanced, which will pose a challenge for learning. (I'll go ahead and reveal that the dev set is similarly distributed.)

In [11]:
def label_distribution(split):
    return pd.DataFrame(wordentail_data[split]['train'])[1].value_counts()
In [12]:
label_distribution('edge_disjoint')
Out[12]:
0    14650
1     2745
Name: 1, dtype: int64
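
Expressed as proportions, the skew is roughly 84/16. A quick sketch:

In [ ]:
# Normalize the counts to proportions to make the imbalance explicit
# (roughly 0.84 vs. 0.16 for edge_disjoint):
dist = label_distribution('edge_disjoint')
dist / dist.sum()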

Word disjoint

In [13]:
wordentail_data['word_disjoint'].keys()
Out[13]:
dict_keys(['dev', 'train'])

In the word_disjoint split, no words are shared between train and dev:

In [14]:
nli.get_vocab_overlap_size(wordentail_data, 'word_disjoint')
Out[14]:
0

Because no words are shared between train and dev, no edges are either:

In [15]:
nli.get_edge_overlap_size(wordentail_data, 'word_disjoint')
Out[15]:
0

The label distribution is similar to that of edge_disjoint, though the overall number of examples is a bit smaller:

In [16]:
label_distribution('word_disjoint')
Out[16]:
0    7199
1    1349
Name: 1, dtype: int64

Baseline

Even in deep learning, feature representation is vital and requires care! For our task, feature representation has two parts: representing the individual words and combining those representations into a single network input.

Representing words: vector_func

Let's consider two baseline word-representation methods:

  1. Random vectors (as returned by utils.randvec).
  2. 50-dimensional GloVe representations.
In [17]:
def randvec(w, n=50, lower=-1.0, upper=1.0):
    """Returns a random vector of length `n`. `w` is ignored."""
    return utils.randvec(n=n, lower=lower, upper=upper)
In [18]:
# Any of the files in glove.6B will work here:

glove_dim = 50

glove_src = os.path.join(GLOVE_HOME, 'glove.6B.{}d.txt'.format(glove_dim))

# Creates a dict mapping strings (words) to GloVe vectors:
GLOVE = utils.glove2dict(glove_src)

def glove_vec(w):    
    """Return `w`'s GloVe representation if available, else return 
    a random vector."""
    return GLOVE.get(w, randvec(w, n=glove_dim))
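
As a quick sanity check, in-vocabulary and out-of-vocabulary words should both come back as vectors of dimension glove_dim. A sketch (assuming 'puppy' is in the GloVe vocabulary and the nonsense token is not):

In [ ]:
# Known words get their GloVe vector; unknown words get the random
# fallback. Either way, the dimensionality is the same:
assert glove_vec('puppy').shape == (glove_dim,)
assert glove_vec('zqxjk-nonsense-token').shape == (glove_dim,)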

Combining words into inputs: vector_combo_func

Here we decide how to combine the two word vectors into a single representation. In more detail, where u is a vector representation of the left word and v is a vector representation of the right word, we need a function vector_combo_func such that vector_combo_func(u, v) returns a new input vector z of dimension m. A simple example is concatenation:

In [19]:
def vec_concatenate(u, v):
    """Concatenate np.array instances `u` and `v` into a new np.array"""
    return np.concatenate((u, v))

vector_combo_func could instead be vector average, vector difference, etc. (even combinations of those) – there's lots of space for experimentation here; homework question 2 below pushes you to do some exploration.
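
For instance, here are sketches of two such alternatives (the names are mine; any function mapping two vectors to one will do):

In [ ]:
def vec_average(u, v):
    """Elementwise average; unlike concatenation, this requires
    `u` and `v` to have the same dimensionality."""
    return (u + v) / 2.0

def vec_diff(u, v):
    """Elementwise difference, which can encode the asymmetry
    of the entailment relation."""
    return u - v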

Classifier model

For a baseline model, I chose TorchShallowNeuralClassifier:

In [20]:
net = TorchShallowNeuralClassifier(hidden_dim=50, max_iter=100)

Baseline results

The following puts the above pieces together, using vector_func=glove_vec, since vector_func=randvec seems so hopelessly misguided for word_disjoint!

In [21]:
word_disjoint_experiment = nli.wordentail_experiment(
    train_data=wordentail_data['word_disjoint']['train'],
    assess_data=wordentail_data['word_disjoint']['dev'], 
    model=net, 
    vector_func=glove_vec,
    vector_combo_func=vec_concatenate)
Finished epoch 100 of 100; error is 0.026732386788353324
              precision    recall  f1-score   support

           0       0.92      0.94      0.93      1910
           1       0.42      0.36      0.39       239

   micro avg       0.87      0.87      0.87      2149
   macro avg       0.67      0.65      0.66      2149
weighted avg       0.87      0.87      0.87      2149

Homework questions

Please embed your homework responses in this notebook, and do not delete any cells from the notebook. (You are free to add as many cells as you like as part of your responses.)

Hypothesis-only baseline [2 points]

During our discussion of SNLI and MultiNLI, we noted that a number of research teams have shown that hypothesis-only baselines for NLI tasks can be remarkably robust. This question asks you to explore briefly how this baseline performs on the 'edge_disjoint' and 'word_disjoint' versions of our task.

For this problem, submit code for the following:

  1. A vector_combo_func function called hypothesis_only that simply throws away the premise, using the unmodified hypothesis (second) vector as its representation of the example.

  2. Code for looping over the two conditions 'word_disjoint' and 'edge_disjoint' and the two vector_combo_func values vec_concatenate and hypothesis_only, calling nli.wordentail_experiment to train on each condition's 'train' portion and assess on its 'dev' portion, with glove_vec as the vector_func. So that the results are consistent, use an sklearn.linear_model.LogisticRegression with default parameters as the model.

  3. Print out the percentage-wise increase in macro-F1 over the hypothesis_only baseline that vec_concatenate delivers for each of the two conditions. For example, if hypothesis_only returns 0.52 for condition C and vec_concatenate delivers 0.75 for C, then you'd report a ((0.75 / 0.52) - 1) * 100 = 44.23 percent increase for C. The values you need are stored in the dictionary returned by nli.wordentail_experiment, with key 'macro-F1'. Please round the percentages to two digits.
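
Here's a hedged sketch of the percentage computation in step 3 (the helper name is mine, not part of the course code):

In [ ]:
def percentage_increase(baseline_f1, system_f1):
    """Percentage-wise increase of `system_f1` over `baseline_f1`,
    rounded to two digits, per the worked example above."""
    return round(((system_f1 / baseline_f1) - 1) * 100, 2)

percentage_increase(0.52, 0.75)  # 44.23, as in the example above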

In [ ]:
 

Alternatives to concatenation [1 point]

We've so far just used vector concatenation to represent the premise and hypothesis words. This question asks you to explore a simple alternative.

For this problem, submit code for the following:

  1. A new potential value for vector_combo_func that does something different from concatenation. Options include, but are not limited to, element-wise addition, difference, and multiplication. These can be combined with concatenation if you like.
  2. Include a use of nli.wordentail_experiment in the same configuration as the one in Baseline results above, but with your new value of vector_combo_func.
In [ ]:
 

A deeper network [2 points]

It is very easy to subclass TorchShallowNeuralClassifier if all you want to do is change the network graph: all you have to do is write a new define_graph. If your graph has new arguments that the user might want to set, then you should also redefine __init__ so that these values are accepted and set as attributes.

For this question, please subclass TorchShallowNeuralClassifier so that it defines the following graph:

$$\begin{align} h_{1} &= xW_{1} + b_{1} \\ r_{1} &= \textbf{Bernoulli}(1 - \textbf{dropout\_prob}, n) \\ d_{1} &= r_{1} \odot h_{1} \\ h_{2} &= f(d_{1}) \\ h_{3} &= h_{2}W_{2} + b_{2} \end{align}$$

Here, $r_{1}$ and $d_{1}$ define a dropout layer: $r_{1}$ is a random binary vector of dimension $n$, where the probability of a value being $1$ is given by $1 - \textbf{dropout\_prob}$. $r_{1}$ is multiplied element-wise by our first hidden representation, thereby zeroing out some of the values. The result is fed to the user's activation function $f$, and the result of that is fed through another linear layer to produce $h_{3}$. (Inside TorchShallowNeuralClassifier, $h_{3}$ is the basis for a softmax classifier, so no activation function is applied to it.)
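
To make the dropout step concrete, here is a tiny numerical sketch of $r_{1}$ and $d_{1}$ (illustration only; PyTorch also has built-in support for dropout):

In [ ]:
import torch

n = 5
dropout_prob = 0.7

h1 = torch.randn(n)  # stand-in for the first hidden representation
# Each mask entry is 1 with probability 1 - dropout_prob, else 0:
r1 = torch.bernoulli(torch.full((n,), 1 - dropout_prob))
d1 = r1 * h1         # elementwise product zeroes out the masked units
r1, d1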

For comparison, using this notation, TorchShallowNeuralClassifier defines the following graph:

$$\begin{align} h_{1} &= xW_{1} + b_{1} \\ h_{2} &= f(h_{1}) \\ h_{3} &= h_{2}W_{2} + b_{2} \end{align}$$
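
In code, that shallow graph corresponds to a define_graph along these lines (a sketch of what torch_shallow_neural_classifier does; check that module for the exact attribute names):

In [ ]:
import torch.nn as nn

# Sketch of the shallow graph; attribute names are assumed to match
# torch_shallow_neural_classifier and should be verified there:
def define_graph(self):
    return nn.Sequential(
        nn.Linear(self.input_dim, self.hidden_dim),
        self.hidden_activation,
        nn.Linear(self.hidden_dim, self.n_classes_))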

The following code starts this subclass for you, so that you can concentrate on define_graph. Be sure to make use of self.dropout_prob.

For this problem, submit just your completed TorchDeepNeuralClassifier. You needn't evaluate it, though we assume you will be keen to do that!

In [22]:
import torch.nn as nn

class TorchDeepNeuralClassifier(TorchShallowNeuralClassifier):
    def __init__(self, dropout_prob=0.7, **kwargs):
        self.dropout_prob = dropout_prob
        super().__init__(**kwargs)
    
    def define_graph(self):
        """Complete this method!
        
        Returns
        -------
        an `nn.Module` instance, which can be a free-standing class you 
        write yourself, as in `torch_rnn_classifier`, or the output of 
        `nn.Sequential`, as in `torch_shallow_neural_classifier`.
        
        """
    

Your original system [4 points]

This is a simple dataset, but our focus on the 'word_disjoint' condition ensures that it's a challenging one, and there are lots of modeling strategies one might adopt.

You are free to do whatever you like. We require only that your system differ in some way from those defined in the preceding questions. They don't have to be completely different, though. For example, you might want to stick with the model but represent examples differently, or the reverse.

Keep in mind that, for the bake-off evaluation, the 'edge_disjoint' portions of the data are off limits. You can, though, train on the combination of the 'word_disjoint' 'train' and 'dev' portions. You are free to use different pretrained word vectors and the like. Please do not introduce additional entailment datasets into your training data, though.
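
For example, since the portions are plain lists of examples, the permitted combined training set is just their concatenation (a sketch):

In [ ]:
combined_train = (wordentail_data['word_disjoint']['train'] +
                  wordentail_data['word_disjoint']['dev'])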

Please embed your code in this notebook so that we can rerun it.

Bake-off [1 point]

The goal of the bake-off is to achieve the highest macro-average F1 score on word_disjoint, on a test set that we will make available at the start of the bake-off on May 6. The announcement will go out on Piazza. To enter, you'll be asked to run nli.bake_off_evaluation on the output of your chosen nli.wordentail_experiment run.

To enter the bake-off, upload this notebook on Canvas:

https://canvas.stanford.edu/courses/99711/assignments/187250

The cells below this one constitute your bake-off entry.

The rules described in the Your original system homework question are also in effect for the bake-off.

Systems that enter will receive the additional homework point, and systems that achieve the top score will receive an additional 0.5 points. We will test the top-performing systems ourselves, and only systems for which we can reproduce the reported results will win the extra 0.5 points.

The bake-off will close at 4:30 pm on May 8. Late entries will be accepted, but they cannot earn the extra 0.5 points. Similarly, you cannot win the bake-off unless your homework is submitted on time.

In [23]:
# Enter your bake-off assessment code into this cell. 
# Please do not remove this comment.
In [24]:
# On an otherwise blank line in this cell, please enter
# your macro-avg f1 value as reported by the code above. 
# Please enter only a number between 0 and 1 inclusive.
# Please do not remove this comment.