Homework 4

In [1]:
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2018 term"

The goal of this homework is to begin to assess the extent to which RNNs can learn to simulate compositional semantics: the way the meanings of words and phrases combine to form more complex meanings. We're going to do this with simulated data so that we have clear learning targets and so we can track the extent to which the models are truly generalizing in the desired ways.

In [2]:
import json
import nli
import os
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
import tensorflow as tf
from tf_rnn_classifier import TfRNNClassifier

Data and background

The base dataset is nli_simulated_data.json in nlidata. (You'll see below why it's the "base" dataset.)

In [3]:
data_home = "nlidata"

base_data_filename = os.path.join(data_home, 'nli_simulated_data.json')
In [4]:
def read_base_dataset(base_data_filename):
    """Read in the dataset and return it in a format that lets us
    define it as a set.
    """
    with open(base_data_filename, 'rt') as f:
        base = {((tuple(x), tuple(y)), z) for (x, y), z in json.load(f)}
    return base
In [5]:
base = read_base_dataset(base_data_filename)

This is a set of triples, where the first two members are tuples (premise and hypothesis) and the third member is a label:

In [6]:
list(base)[: 5]
Out[6]:
[((('f',), ('n',)), 'superset'),
 ((('f',), ('l',)), 'neutral'),
 ((('g',), ('b',)), 'subset'),
 ((('i',), ('d',)), 'neutral'),
 ((('g',), ('c',)), 'subset')]

The letters are arbitrary names, but the dataset was generated in a way that ensures logical consistency. For instance, since

In [7]:
((('a',), ('c',)), 'superset') in base
Out[7]:
True

and

In [8]:
((('c',), ('k',)), 'superset') in base
Out[8]:
True

we have

In [9]:
((('a',), ('k',)), 'superset') in base
Out[9]:
True

by the transitivity of the superset relation.

Here's the full label set:

In [10]:
simulated_labels = ['disjoint', 'equal', 'neutral', 'subset', 'superset']

These labels are interpreted as mutually exclusive. In particular, subset means proper subset and superset means proper superset; both exclude the case where the two arguments are equal.

Here is the full vocabulary, which you'll need in order to create embedding spaces:

In [11]:
sim_vocab = ["not", "$UNK"] + sorted(set([p[0] for x,y in base for p in x]))

sim_vocab
Out[11]:
['not',
 '$UNK',
 'a',
 'b',
 'c',
 'd',
 'e',
 'f',
 'g',
 'h',
 'i',
 'j',
 'k',
 'l',
 'm',
 'n']
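
Here, purely for illustration, is one way to build a random embedding matrix over this vocabulary. It is a sketch: the dimensionality and initialization scheme are arbitrary choices, and depending on how you configure the classifier you may not need to supply an embedding at all.

import numpy as np

# Illustrative random embedding: one row per vocabulary item.
# `embed_dim` here is an arbitrary choice.
embed_dim = 50
embedding = np.random.uniform(
    low=-0.5, high=0.5, size=(len(sim_vocab), embed_dim))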

Question 1: Experiment function [4 points]

Complete the function sim_experiment so that it trains a TfRNNClassifier on a dataset in the format of base, prints out a classification_report and returns the trained model. Make sure all of the keyword arguments to sim_experiment are respected!

To submit:

  • Your completed version of sim_experiment and any supporting functions it uses.
In [12]:
def sim_experiment(
        train_dataset, 
        test_dataset, 
        embed_dim=50, 
        hidden_dim=50, 
        eta=0.01, 
        max_iter=10, 
        cell_class=tf.nn.rnn_cell.LSTMCell, 
        hidden_activation=tf.nn.tanh):    
    # To be completed: 
    
    # Process `train_dataset` into an (X, y) pair
    # that is suitable for the `fit` method of 
    # `TfRNNClassifier`.        
    
    # Train a `TfRNNClassifier` on `train_dataset`,
    # using all the keyword arguments given above.
    
    # Test the trained model on `test_dataset`;
    # assumes `test_dataset` is processed for use
    # with `predict` and the `classification_report`
    # below.
    
    # The printed report and return value below are required;
    # feel free to change the variable names if you wish:
    print(classification_report(y_test, predictions))
    return model
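
For reference, here is one possible completion of the stub above, offered as a sketch rather than a definitive solution. It assumes (a) that premise and hypothesis are simply concatenated into a single token sequence, and (b) that TfRNNClassifier takes the vocabulary as its first argument plus a max_length keyword, with fit and predict operating on lists of token sequences; check both assumptions against your copy of tf_rnn_classifier.

def sim_experiment(
        train_dataset,
        test_dataset,
        embed_dim=50,
        hidden_dim=50,
        eta=0.01,
        max_iter=10,
        cell_class=tf.nn.rnn_cell.LSTMCell,
        hidden_activation=tf.nn.tanh):
    # Encode each example by concatenating premise and hypothesis
    # into one token sequence (an assumption, not the only option).
    def prepare(dataset):
        X, y = [], []
        for (p, h), label in dataset:
            X.append(list(p) + list(h))
            y.append(label)
        return X, y

    X_train, y_train = prepare(train_dataset)
    X_test, y_test = prepare(test_dataset)

    # `max_length` must cover the longest concatenated example.
    max_length = max(len(ex) for ex in X_train + X_test)

    # Train, passing through all the keyword arguments given above.
    model = TfRNNClassifier(
        sim_vocab,
        max_length=max_length,
        embed_dim=embed_dim,
        hidden_dim=hidden_dim,
        eta=eta,
        max_iter=max_iter,
        cell_class=cell_class,
        hidden_activation=hidden_activation)
    model.fit(X_train, y_train)

    # Evaluate on the test set.
    predictions = model.predict(X_test)
    print(classification_report(y_test, predictions))
    return model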

Question 2: Memorize the training data [2 points]

Experiment with sim_experiment until you've found a setting where sim_experiment(base, base) yields perfect performance on all classes. (If it's a little off, that's okay.)

To submit:

  • Your function call to sim_experiment showing the values of all the parameters.

Tips: Definitely explore different values of cell_class and hidden_activation. You might also pick high embed_dim and hidden_dim to ensure that you have sufficient representational power. These settings in turn demand a large number of iterations.

Note: There is value in finding the smallest, or most conservative, models that will achieve this memorization, but you needn't engage in such search. Go big if you want to get this done fast!
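
For concreteness, a call of the required shape might look as follows. The particular values are only illustrative, following the tips above, and are not guaranteed to reach perfect performance on your run:

# Illustrative settings only: generous capacity and many iterations.
model = sim_experiment(
    base,
    base,
    embed_dim=100,
    hidden_dim=100,
    eta=0.01,
    max_iter=100,
    cell_class=tf.nn.rnn_cell.LSTMCell,
    hidden_activation=tf.nn.tanh)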

Question 3: Negation [2 points]

Now that we have some indication that the model works, we want to start making the data more complex. To do this, we'll simply negate one or both arguments and assign them the relation determined by their original label and the logic of negation. For instance, the training instance

((('p',), ('q',)), 'subset')

will become five distinct ones:

((('not', 'p'), ('not', 'p')), 'equal')
((('not', 'p'), ('not', 'q')), 'superset')
((('not', 'p'), ('q',)), 'neutral')
((('not', 'q'), ('not', 'q')), 'equal')
((('p',), ('not', 'q')), 'disjoint')

The full logic of this is a somewhat liberal interpretation of the theory of negation developed by MacCartney and Manning (2007):

$$\begin{array}{l c c c} \hline & \text{not-}p, \text{not-}q & p, \text{not-}q & \text{not-}p, q \\ \hline p \text{ disjoint } q & \text{neutral} & \text{subset} & \text{superset} \\ p \text{ equal } q & \text{equal} & \text{disjoint} & \text{disjoint} \\ p \text{ neutral } q & \text{neutral} & \text{neutral} & \text{neutral} \\ p \text{ subset } q & \text{superset} & \text{disjoint} & \text{neutral} \\ p \text{ superset } q & \text{subset} & \text{neutral} & \text{disjoint} \\ \hline \end{array}$$

where we also add all instances of $\text{not-}p \text{ equal } \text{not-}p$ and $\text{not-}q \text{ equal } \text{not-}q$.

If you don't want to worry about the details, that's okay – you can treat negate_dataset as a black box. Just think of it as implementing the theory of negation.

In [15]:
def negate_dataset(dataset):
    """Map `dataset` to a new dataset that has been thoroughly negated.
    
    Parameters
    ----------
    dataset : set of pairs ((p, h), label)
        Where `p` and `h` are tuples of str.
    
    Returns
    -------
    set
        Same format as `dataset`, and disjoint from it.
        
    """
    new_dataset = set()
    for (p, q), rel in dataset:        
        neg_p = tuple(["not"] + list(p))
        neg_q = tuple(["not"] + list(q))
        new_dataset.add(((neg_p, neg_p), 'equal'))
        new_dataset.add(((neg_q, neg_q), 'equal'))
        combos = [(neg_p, neg_q), (p, neg_q), (neg_p, q)]
        if rel == "disjoint":
            new_rels = ("neutral", "subset", "superset")
        elif rel == "equal":
            new_rels = ("equal", "disjoint", "disjoint") 
        elif rel == "neutral":
            new_rels = ("neutral", "neutral", "neutral")
        elif rel == "subset":
            new_rels = ("superset", "disjoint", "neutral")
        elif rel == "superset":
            new_rels = ("subset", "neutral", "disjoint") 
        new_dataset |= set(zip(combos, new_rels))
    return new_dataset

Using negate_dataset, we can map the base dataset to a singly negated one:

In [16]:
neg1 = negate_dataset(base)
In [17]:
list(neg1)[: 5]
Out[17]:
[((('n',), ('not', 'n')), 'disjoint'),
 ((('e',), ('not', 'l')), 'neutral'),
 ((('not', 'n'), ('n',)), 'disjoint'),
 ((('not', 'i'), ('g',)), 'superset'),
 ((('not', 'd'), ('d',)), 'disjoint')]

Your tasks:

  1. Create a dataset that is the union of base, neg1, and a doubly negated version of base, where doubly negating x is achieved by negate_dataset(negate_dataset(x)).

  2. Use sklearn.model_selection.train_test_split to create a random split of this new dataset, with 0.70 of the data used for training and the rest used for testing. (A sketch of steps 1 and 2 follows this list.)

  3. Use sim_experiment to evaluate your network on this split, and play around with the keyword arguments until you have an average F1-score at or above 0.55.
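
Here is a sketch of steps 1 and 2, assuming negate_dataset and neg1 as defined above; the union is converted to a list so that train_test_split can index it:

# Step 1: doubly negated version of `base`, then the three-way union.
neg2 = negate_dataset(negate_dataset(base))
combined = list(base | neg1 | neg2)

# Step 2: 70/30 random split.
train_dataset, test_dataset = train_test_split(
    combined, train_size=0.70, test_size=0.30)

Step 3 is then a call like sim_experiment(train_dataset, test_dataset, ...) with the keyword arguments tuned until the average F1-score reaches the target.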

To submit:

  • Your function call to sim_experiment showing the values of all the parameters.

Question 4: Negation and generalization [2 points]

So you got reasonably good results in the previous question. Has your model truly learned negation? To really address this question, we should see how it does on sequences of a length it hasn't seen before.

Your task:

Use your sim_experiment to train a network on the union of base and neg1, and evaluate it on the doubly negated dataset. By design, this means that your model will be evaluated on examples that are longer than those it was trained on. Use all the same keyword arguments to sim_experiment that you used for the previous question.
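
As a sketch, assuming the completed sim_experiment, negate_dataset, and neg1 from above (and substituting whatever keyword arguments you settled on for Question 3):

# Train on base plus singly negated data; test on doubly negated data,
# whose sequences are longer than anything seen during training.
neg2 = negate_dataset(negate_dataset(base))
model = sim_experiment(base | neg1, neg2)  # add your Question 3 keyword arguments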

To submit:

  • The printed classification report from your run (you can just paste it in).

A note on performance: our mean F1 dropped a lot, and we expect it to drop for you too. You will not be evaluated based on the numbers you achieve, but rather only on whether you successfully run the required experiment.

(If you did really well, go a step further, by testing on the triply negated version!)