spaCy Tutorial

(C) 2018 by Damir Cavar

Version: 1.1, February 2018

This is a tutorial related to the L665 course on Machine Learning for NLP focusing on Deep Learning, Spring 2018 at Indiana University.

Introduction to spaCy

Follow the instructions on the spaCy homepage for installing the module and the language models.
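
In a typical setup the installation boils down to two shell commands; treat these as a sketch, since the exact package and model names depend on your spaCy version:

pip install -U spacy
python -m spacy download en

Your local spaCy module is correctly installed if the following import succeeds: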

In [1]:
import spacy

We can load the English NLP pipeline in the following way:

In [2]:
nlp = spacy.load('en')

Tokenization

In [3]:
doc = nlp(u'John was wondering, if Peter knew that Dr. Smith bought a new car for her older son.')
for token in doc:
    print(token.text)
John
was
wondering
,
if
Peter
knew
that
Dr.
Smith
bought
a
new
car
for
her
older
son
.

Part-of-Speech Tagging

We can tokenize the input and part-of-speech tag the individual tokens using the following code:

In [4]:
doc = nlp(u'John said yesterday that Mary bought a new car for her older son.')

for token in doc:
    print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_,
          token.shape_, token.is_alpha, token.is_stop)
John john PROPN NNP nsubj Xxxx True False
said say VERB VBD ROOT xxxx True False
yesterday yesterday NOUN NN npadvmod xxxx True False
that that ADP IN mark xxxx True True
Mary mary PROPN NNP nsubj Xxxx True False
bought buy VERB VBD ccomp xxxx True False
a a DET DT det x True True
new new ADJ JJ amod xxx True False
car car NOUN NN dobj xxx True False
for for ADP IN prep xxx True True
her -PRON- ADJ PRP$ poss xxx True True
older old ADJ JJR amod xxxx True False
son son NOUN NN pobj xxx True False
. . PUNCT . punct . False False

For every token, the above output contains in one line the token text, its lemma, the coarse part-of-speech tag, the fine-grained tag, the dependency label, the orthographic shape (upper- and lower-case characters rendered as X and x respectively), a boolean indicating whether the token consists of alphabetic characters, and a boolean indicating whether it is a stopword.
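
Many of these tag abbreviations are not self-explanatory. spaCy provides spacy.explain(), which returns a short human-readable description for most tag and label names; a small example:

In [ ]:
# Look up human-readable descriptions for tag and label abbreviations.
print(spacy.explain('NNP'))        # fine-grained tag, e.g. 'noun, proper singular'
print(spacy.explain('npadvmod'))   # dependency label, e.g. 'noun phrase as adverbial modifier'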

Dependency Parse

Using the same approach as above for PoS-tags, we can print the Dependency Parse relations:

In [5]:
for token in doc:
    print(token.text, token.dep_, token.head.text, token.head.pos_,
          [child for child in token.children])
John nsubj said VERB []
said ROOT said VERB [John, yesterday, bought, .]
yesterday npadvmod said VERB []
that mark bought VERB []
Mary nsubj bought VERB []
bought ccomp said VERB [that, Mary, car, for]
a det car NOUN []
new amod car NOUN []
car dobj bought VERB [a, new]
for prep bought VERB [son]
her poss son NOUN []
older amod son NOUN []
son pobj for ADP [her, older]
. punct said VERB []

As specified in the code, each line represents one token: the token itself, its dependency relation to its head, the head token, the head's part-of-speech tag, and a list of the token's immediate children.
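
The parse can also be navigated programmatically. As a small sketch, we can print the subtree of the embedded verb bought and the noun chunks that spaCy derives from the parse:

In [ ]:
# Navigate the dependency tree: subtrees of clausal complements and noun chunks.
for token in doc:
    if token.dep_ == 'ccomp':
        print('embedded clause:', ' '.join(t.text for t in token.subtree))

for chunk in doc.noun_chunks:
    print('noun chunk:', chunk.text, '->', chunk.root.dep_)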

Named Entity Recognition

Similarly to PoS-tags and Dependency Parse Relations, we can print out Named Entity labels:

In [6]:
for ent in doc.ents:
    print(ent.text, ent.start_char, ent.end_char, ent.label_)
John 0 4 PERSON
yesterday 10 19 DATE
Mary 25 29 PERSON

We can extend the input with some more entities:

In [7]:
doc = nlp(u'John Smith said that Apple Inc. will buy Google in May 2018.')

The corresponding NE-labels are:

In [8]:
for ent in doc.ents:
    print(ent.text, ent.start_char, ent.end_char, ent.label_)
John Smith 0 10 PERSON
Apple Inc. 21 31 ORG
Google 41 47 ORG
May 2018 51 59 DATE

Pattern Matching in spaCy

In [9]:
from spacy.matcher import Matcher

matcher = Matcher(nlp.vocab)
pattern = [{'LOWER': 'hello'}, {'IS_PUNCT': True}, {'LOWER': 'world'}]
matcher.add('HelloWorld', None, pattern)

doc = nlp(u'Hello, world! Hello world!')
matches = matcher(doc)
print(matches)
[(15578876784678163569, 0, 3)]
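
Each match is a triple consisting of a match ID (the hash of the rule name), a start index, and an end index into the Doc. The following cell resolves the triples back into the rule name and the matched span:

In [ ]:
# Resolve each match triple into the rule name and the matched text span.
for match_id, start, end in matches:
    rule_name = nlp.vocab.strings[match_id]   # look up the rule name from its hash
    span = doc[start:end]
    print(rule_name, start, end, span.text)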

What spaCy is Missing

From the linguistic standpoint, when looking at the analytical output of the NLP pipeline in spaCy, there are some important components missing:

  • Clause boundary detection
  • Constituent structure trees (scope relations over constituents and phrases)
  • Anaphora resolution
  • Coreference analysis
  • Temporal reference resolution
  • ...

Clause Boundary Detection

Complex sentences consist of clauses. For precise processing of semantic properties of natural language utterances we need to segment the sentences into clauses. The following sentence:

The man said that the woman claimed that the child broke the toy.

can be broken into the following clauses:

  • Matrix clause: [ the man said ]
  • Embedded clause: [ that the woman claimed ]
  • Embedded clause: [ that the child broke the toy ]

These clauses do not form an ordered list or flat sequence; they are in fact hierarchically organized. The matrix clause verb selects as its complement an embedded finite clause with the complementizer that. The embedded predicate claimed selects the same kind of clausal complement. We can express this hierarchical relation in the form of embedded brackets or tree representations:

[ the man said [ that the woman claimed [ that the child broke the toy ] ] ]

Or using a graphical representation in form of a tree:

<img src="Embedded_Clauses_1.png" width="70%" height="70%">

The hierarchical relation of sub-clauses is relevant when it comes to semantics. The clause John sold his car can be interpreted as an assertion describing an event with John as the agent and the car as the object of a selling event in the past. If the clause is embedded under a matrix clause that contains a sentential negation, the embedded proposition is no longer asserted to be true: [ Mary did not say that [ John sold his car ] ]

It is possible, with additional effort, to translate the Dependency Trees into clauses and reconstruct the clause hierarchy into a suitable data structure, as sketched below. SpaCy itself does not offer a direct data output of such relations.
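
A minimal sketch of such a reconstruction, assuming the parser marks embedded clauses with labels such as ccomp, advcl, relcl, or xcomp (as in the examples above) and treating each such token plus the sentence ROOT as a clause head; this is an approximation, not a spaCy feature:

In [ ]:
# Sketch: approximate clause segmentation built on top of the dependency parse.
CLAUSAL_DEPS = {'ccomp', 'advcl', 'relcl', 'xcomp', 'csubj'}

doc = nlp(u'The man said that the woman claimed that the child broke the toy.')
clause_head_ids = {t.i for t in doc if t.dep_ == 'ROOT' or t.dep_ in CLAUSAL_DEPS}

def clause_head(token):
    # Walk up the dependency tree to the nearest clause head.
    while token.i not in clause_head_ids and token.head is not token:
        token = token.head
    return token

clauses = {}
for token in doc:
    clauses.setdefault(clause_head(token).i, []).append(token.text)

for head_i in sorted(clauses):
    print(doc[head_i].text, '->', ' '.join(clauses[head_i]))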

One problem still remains, and this is clausal discontinuity. None of the common NLP pipelines, and spaCy in particular, can deal with such discontinuities in any reasonable way. Discontinuities can be observed when syntactic structures are split across the clause or sentence, or when elements occur in a position different from their canonical one, as in the following example:

Which car did John claim that Mary took?

The embedded clause consists of the sequence [ Mary took which car ]. One part of the sequence appears dislocated and precedes the matrix clause in the above example. Simple Dependency Parsers cannot generate any reasonable output that makes it easy to identify and reconstruct the relations of clausal elements in these structures.

Constituent Structure Trees

Dependency Parse trees are a simplification of the relations between elements in a clause. They ignore structural and hierarchical relations in a sentence or clause, as shown in the examples above. Instead, Dependency Parse trees show simple functional relations in the sense of sentential functions like the subject or object of a verb.

SpaCy does not output any kind of constituent structure, nor the more detailed relational properties of phrases and more complex structural units in a sentence or clause.

Since many semantic properties are defined or determined in terms of structural relations and hierarchies, that is, scope relations, they are complicated to reconstruct or map from the Dependency Parse trees alone.

Anaphora Resolution

SpaCy does not offer any anaphora resolution annotation. That is, the referent of a pronoun, as in the following examples, is not annotated in the resulting linguistic data structure:

  • John saw him.
  • John said that he saw the house.
  • Tim sold his house. He moved to Paris.
  • John saw himself in the mirror.

Knowing the restrictions on pronominal binding (in English, for example), we can partially generate the potential or most likely anaphora-antecedent relations. This, however, is not part of the spaCy output.
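
As a rough illustration of what such a post-processing step could look like (purely a sketch of candidate generation, not a binding-theoretic analysis), one can collect the proper nouns preceding each pronoun as antecedent candidates:

In [ ]:
# Sketch: naive antecedent candidates for personal and possessive pronouns.
doc = nlp(u'Tim sold his house. He moved to Paris.')

for token in doc:
    if token.tag_ in ('PRP', 'PRP$'):
        candidates = [t.text for t in doc[:token.i] if t.pos_ == 'PROPN']
        print(token.text, '->', candidates)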

One problem, however, is that spaCy does not provide parse trees of the constituent structure and clausal hierarchies, which are crucial for the correct analysis of pronominal anaphoric relations.

Coreference Analysis

Some NLP pipelines are capable of providing coreference analyses for constituents in clauses. For example, the following three sentences should be analyzed as talking about the same subject:

The CEO of Apple, Tim Cook, decided to apply for a job at Google. Cook said that he is not satisfied with the quality of the iPhones anymore. He prefers the Pixel 2.

The constituents [ the CEO of Apple, Tim Cook ] in the first sentence, [ Cook ] in the second sentence, and [ he ] in the third, should all be tagged as referencing the same entity, that is the one mentioned in the first sentence. SpaCy does not provide such a level of analysis or annotation.

Temporal Reference

For various analysis levels it is essential to identify the time references in a sentence or utterance, for example the time the utterance is made or the time the described event happened.

Certain tenses are expressed by periphrastic constructions consisting of auxiliaries and a main verb. SpaCy does not directly provide the information needed to identify these constructions and the tenses they express.
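
A sketch of the kind of post-processing one would have to add on top of the dependency output, collecting the auxiliaries attached to each main verb (the tense analysis itself would still have to be written by hand):

In [ ]:
# Sketch: collect periphrastic verb groups (auxiliaries + main verb) from the parse.
doc = nlp(u'John has been waiting, and he will have finished by tomorrow.')

for token in doc:
    auxiliaries = [child.text for child in token.children
                   if child.dep_ in ('aux', 'auxpass')]
    if auxiliaries:
        print(auxiliaries, '+', token.text, token.tag_)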

Using the Dependency Parse Visualizer

More on Dependency Parse trees

In [10]:
import spacy

We can load the visualizer:

In [11]:
from spacy import displacy

Loading the English NLP pipeline:

In [12]:
nlp = spacy.load('en')

Process an input sentence:

In [13]:
doc = nlp(u'John said yesterday that Mary bought a new car for her older son.')

Visualizing the Dependency Parse tree can be achieved by running the following server code and opening up a new tab on the URL http://localhost:5000/. You can shut down the server by clicking on the stop button at the top in the notebook toolbar.

In [ ]:
displacy.serve(doc, style='dep')

Instead of serving the graph, one can render it directly into a Jupyter Notebook:

In [14]:
displacy.render(doc, style='dep', jupyter=True, options={"distance": 140})
[displaCy rendering of the dependency parse: the tokens with their part-of-speech tags, connected by labeled dependency arcs]

In addition to the visualization of the Dependency Trees, we can visualize named entity annotations:

In [15]:
text = """Apple decided to fire Tim Cook and hire somebody called John Doe as the new CEO.
They also discussed a merger with Google. On the long run it seems more likely that Apple
will merge with Amazon and Microsoft with Google. The companies will all relocate to
Austin in Texas before the end of the century."""

doc = nlp(text)
displacy.render(doc, style='ent', jupyter=True)
[displaCy rendering of the text with named entity spans highlighted and labeled as PERSON, ORG, GPE, and DATE]

Vectors

To use vectors in spaCy, you might consider installing the larger models for the particular language. The common module and language packages only come with the small models. The larger models can be installed as described on the spaCy vectors page:

python -m spacy download en_core_web_lg

The large model en_core_web_lg contains more than 1 million unique vectors.

Let us import all necessary modules again, in particular spaCy:

In [16]:
import spacy

We can now use the English NLP pipeline to process a word list. Since the small models in spaCy only include context-sensitive tensors, we should use the downloaded large model for better word vectors. In the following cell the large-model load is commented out and the small model is used instead; uncomment the first line if en_core_web_lg is installed:

In [17]:
# nlp = spacy.load('en_core_web_lg')
nlp = spacy.load('en')

We can process a list of words by the pipeline using the nlp object:

In [18]:
tokens = nlp(u'dog cat banana')

As described in the spaCy chapter Word Vectors and Semantic Similarity, the resulting elements of Doc, Span, and Token provide a method similarity(), which returns the similarities between words:

In [19]:
for token1 in tokens:
    for token2 in tokens:
        print(token1, token2, token1.similarity(token2))
dog dog 1.0
dog cat 0.5390696
dog banana 0.28760988
cat dog 0.5390696
cat cat 1.0
cat banana 0.48752153
banana dog 0.28760988
banana cat 0.48752153
banana banana 1.0

We can access the vectors of these objects using the vector attribute:

In [20]:
tokens = nlp(u'dog cat banana sasquatch')

for token in tokens:
    print(token.text, token.has_vector, token.vector_norm, token.is_oov)
dog True 23.92024 True
cat True 24.228516 True
banana True 25.35453 True
sasquatch True 26.209084 True

The attribute has_vector returns a boolean indicating whether the token has a vector in the model. With the large model (en_core_web_lg) loaded, the token sasquatch would have no word vector: it would be reported as out-of-vocabulary (OOV) in the fourth column and would have a vector norm of $0$, that is, a length of $0$. With the small model loaded here, all tokens fall back to the context-sensitive tensors, which is why has_vector and is_oov are True for every token in the output above.
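
To check which kind of model is actually loaded, one can inspect the vector table directly; a small sketch, assuming spaCy 2.x:

In [ ]:
# Inspect the vector table of the loaded pipeline.
print(len(nlp.vocab.vectors), 'word vectors of dimensionality', nlp.vocab.vectors_length)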

With the large model, a word vector has $300$ dimensions; the tensors of the small model used here have $384$ dimensions, as the following output shows. We can print out the vector for a token:

In [22]:
n = 0
print(tokens[n].text, len(tokens[n].vector), tokens[n].vector)
dog 384 [ 8.27200770e-01  2.36963582e+00 -6.35798633e-01  4.51421201e-01
  2.03428909e-01  1.73726356e+00 -3.18652272e+00  8.14928174e-01
  1.90902579e+00  2.81861591e+00  2.24422216e+00 -1.73021841e+00
  1.79004085e+00  3.29744518e-02 -1.84130037e+00  8.92891705e-01
 -2.34007502e+00 -6.58327699e-01 -2.56982803e+00  1.81837606e+00
 -2.24640161e-01  1.19199407e+00 -1.03678751e+00  1.85581863e+00
  9.48346257e-02 -1.62571692e+00 -5.23630440e-01  1.61878800e+00
 -2.62793928e-01 -2.29376721e+00 -6.65396869e-01 -7.22711563e-01
 -3.73787642e-01  1.11173570e-01 -8.51480961e-02 -1.27650201e+00
  1.60682821e+00 -5.60200214e-01  2.31330538e+00 -1.79506028e+00
 -1.91947556e+00 -2.31478238e+00  1.07934499e+00 -2.57284474e+00
 -2.47225070e+00 -6.94101095e-01 -1.99404633e+00 -5.84194660e-01
 -1.05473995e-01 -1.13228750e+00  3.32133532e+00 -1.98626065e+00
 -2.27126360e+00  3.23185134e+00  3.57697129e-01 -2.88535762e+00
  3.46697450e+00  3.08543921e+00  1.69311810e+00  6.86959505e-01
 -8.70782137e-03  7.88555026e-01  4.69463825e-01  3.27023649e+00
 -3.19191742e+00 -1.22353923e+00 -3.13184476e+00 -1.44323611e+00
  5.12833214e+00  2.09720802e+00 -1.15142405e+00 -2.01891994e+00
 -2.02433491e+00  1.37387061e+00 -2.25904417e+00  6.98948383e-01
 -3.45357203e+00  8.38878632e-01 -9.06848311e-01  5.01224136e+00
 -2.46539593e+00 -4.75015116e+00 -2.55216300e-01 -1.16558373e+00
  1.21537983e+00  6.96649194e-01 -2.64912218e-01 -1.57365394e+00
 -7.75560617e-01  1.59184903e-01 -1.97478056e+00  5.72311020e+00
 -1.04499507e+00  2.78367281e+00 -2.77576303e+00  5.90612650e-01
 -2.53874826e+00 -1.00345612e+00 -4.75460351e-01  2.93002069e-01
 -1.78783464e+00  8.40276659e-01 -2.64874160e-01 -2.22559881e+00
  7.17729807e-01  2.50333309e+00  5.79268813e-01 -2.08806300e+00
 -8.20727587e-01  4.24402654e-02 -1.32487774e-01 -4.07865286e+00
 -1.05328310e+00  2.31404638e+00  1.23619747e+00  4.23198128e+00
  9.68020082e-01  5.01313543e+00  2.75291491e+00  4.56159532e-01
 -4.08713043e-01  1.43276000e+00  3.23144019e-01  1.52091861e+00
  2.90289223e-01 -7.89957464e-01  1.66499197e+00 -2.12638402e+00
 -7.98128247e-02 -3.89738739e-01 -2.35880613e-02 -8.74554887e-02
 -2.46094987e-01  5.38658381e-01  3.29445362e-01 -3.26883793e-02
  6.67730451e-01  1.20416296e+00  6.86277628e-01  1.46121562e-01
  1.93115473e-01 -4.06555533e-01 -5.42419374e-01  2.56459832e-01
 -3.82755846e-01  1.09645474e+00  1.37204313e+00 -9.80867594e-02
  9.19185281e-02 -2.11603552e-01  4.40820903e-01  5.58294833e-01
 -1.97448909e-01 -5.96628249e-01  2.62966901e-01 -5.32624245e-01
  3.47955227e-01  7.34314546e-02 -2.36523330e-01  1.25075683e-01
 -3.01510602e-01  9.68412161e-02  2.22957149e-01 -5.92421293e-01
  3.39704216e-01 -1.80009753e-01  2.12132156e-01 -1.49858803e-01
 -3.76682818e-01  1.26572382e+00 -3.39105964e-01 -7.12203145e-01
  1.98278084e-01  3.81588757e-01 -1.92670852e-01  2.16052324e-01
  2.07061306e-01 -5.17681360e-01 -8.34657371e-01 -4.73373979e-01
 -5.41145980e-01 -7.99374729e-02 -1.94415748e-01 -4.37415063e-01
  1.08554578e+00 -4.17823732e-01  1.04889834e+00  4.20893312e-01
  1.11030042e-03 -2.97780633e-02 -6.54897153e-01 -8.36886838e-02
  3.27429950e-01 -2.84426063e-02 -6.00608960e-02 -3.35153490e-02
  2.89587498e-01  5.37356734e-01 -4.13916409e-02 -4.12048362e-02
 -7.10642576e-01 -1.98923230e-01 -3.98404375e-02  3.55616391e-01
  3.81583542e-01  1.06807493e-01  2.93916345e-01  3.71418297e-01
  1.32994205e-02  1.82372063e-01 -2.52966046e-01  5.71190000e-01
 -5.09459153e-02  3.00350755e-01 -2.52297699e-01  2.57598221e-01
 -9.37784463e-02 -3.28560054e-01  4.81325567e-01 -5.76627135e-01
 -3.89352441e-01 -1.50123060e-01 -2.67110527e-01 -5.85993946e-01
  1.28378779e-01 -1.81259692e-01 -9.87434983e-02 -6.41854227e-01
 -3.48477781e-01 -4.56766784e-01  2.44291127e-02  7.53845990e-01
  2.89254367e-01 -5.47284961e-01 -6.63643360e-01  1.03627034e-01
 -3.02436620e-01  1.83912486e-01 -2.33762667e-01 -7.05996692e-01
 -2.28715092e-02 -5.82007408e-01  8.50298226e-01 -1.08366501e+00
  1.46153510e-01 -6.83578253e-02 -3.63923788e-01  4.13221925e-01
  3.02115619e-01 -6.86111391e-01  2.07210332e-01 -2.24037230e-01
  2.28939816e-01  4.85156536e-01  1.39915168e-01  1.09533980e-01
 -4.99969572e-01 -3.90477479e-02  1.20926738e-01 -4.35628146e-01
 -2.46861517e-01 -4.92394716e-01  1.57996669e-01 -2.61013567e-01
 -4.11366224e-01  4.92623188e-02  5.19491434e-01 -2.57162377e-02
 -2.32993830e-02 -2.84265041e-01 -3.96629095e-01  8.89276862e-01
 -4.16647792e-01 -6.95781708e-01 -1.44267231e-01 -4.50714618e-01
 -3.31009626e-02 -1.62504449e-01  7.33935237e-01 -7.89064348e-01
 -9.05397475e-01 -1.63613930e-02 -3.23807955e-01  7.22003222e-01
  5.05698085e-01  4.35230672e-01 -1.62770301e-01 -4.11142468e-01
 -1.04695559e-01  1.94984049e-01 -1.38163015e-01 -1.24378584e-01
 -3.60506475e-02  1.24236047e+00 -6.44289032e-02 -6.25225782e-01
  3.48200321e-01 -3.14976394e-01 -1.42186344e-01 -7.13658690e-01
  1.19100243e-01 -4.79408562e-01 -3.11907917e-01 -7.30596960e-01
  6.31558537e-01 -1.90374047e-01  6.12288713e-04 -4.54252213e-01
  5.78279495e-01  7.82649040e-01 -6.94978893e-01  5.27717531e-01
 -7.52819419e-01  2.11533800e-01  1.21591091e+00  1.54507950e-01
  2.51218945e-01  1.09018588e+00 -5.27395368e-01  4.28521097e-01
  6.58494711e-01  9.36106801e-01 -2.10810065e-01  3.56885761e-01
 -1.25558943e-01  7.94972107e-03 -6.62281394e-01 -1.10129826e-03
  8.11133087e-02 -2.51154840e-01  9.73601639e-01  1.59540921e-01
  6.89092278e-03  6.55956864e-01 -1.98863089e-01  5.42057157e-01
 -3.21460724e-01  7.43597895e-02 -1.73077226e-01 -1.23403013e-01
 -2.86472976e-01  3.84209380e-02  6.84875399e-02  5.78825921e-03
 -1.38569742e-01  2.25923032e-01 -1.95183381e-01  5.73984683e-01
  5.97869992e-01 -3.88437361e-01 -2.17346624e-01  2.20151603e-01
 -3.53521854e-01 -3.70415449e-02 -7.69326091e-02 -8.02676558e-01
 -4.01631653e-01  2.75890291e-01 -8.62186491e-01 -7.68790960e-01
  6.61593974e-02  4.12950784e-01  9.71895307e-02  1.62017494e-01
 -8.88495326e-01  9.61809158e-01 -3.20118994e-01  7.44656026e-01
  2.94522941e-01  7.65041113e-02 -7.67438352e-01  2.87442714e-01
  6.36602566e-02 -7.06121206e-01  2.70684063e-03  1.16398585e+00
 -5.01691282e-01  9.78847966e-02  7.38977849e-01 -1.37028068e-01
 -4.56177801e-01 -3.05478781e-01  2.47685671e-01 -2.43861869e-01]

Here is another example of similarities for some common words:

In [23]:
tokens = nlp(u'queen king chef')

for token1 in tokens:
    for token2 in tokens:
        print(token1, token2, token1.similarity(token2))
queen queen 1.0
queen king 0.34783703
queen chef 0.2586036
king queen 0.34783703
king king 1.0
king chef 0.47207302
chef queen 0.2586036
chef king 0.47207302
chef chef 1.0

Similarities in Context

In spaCy, the parsing, tagging, and NER models make use of vector representations of contexts that represent the meaning of words. Such a meaning representation is an array of floats, i.e. a tensor, computed during the NLP pipeline processing. With this approach, words that have not been seen before can still be typed or classified. SpaCy uses a 4-layer convolutional network for the computation of these tensors; with this architecture, a tensor models a context of four words to the left and right of any given word.
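
The context tensors themselves are stored on the processed Doc object; a minimal sketch, assuming a spaCy 2.x pipeline that populates Doc.tensor during tagging and parsing:

In [ ]:
# Inspect the context-sensitive tensor computed by the pipeline.
doc = nlp(u'The labrador barked.')
print(doc.tensor.shape)   # one row per token; the width depends on the model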

Let us use the example from the spaCy documentation and check the word labrador:

In [24]:
import spacy
nlp = spacy.load('en')

tokens = nlp(u'labrador')

for token in tokens:
    print(token.text, token.has_vector, token.vector_norm, token.is_oov)
labrador True 23.063505 True

We can now test for the context:

In [25]:
doc1 = nlp(u"The labrador barked.")
doc2 = nlp(u"The labrador swam.")
doc3 = nlp(u"the labrador people live in canada.")

count = 0
for doc in [doc1, doc2, doc3]:
    lab = doc[1]
    dog = nlp(u"dog")
    count += 1
    print(str(count) + ":", lab.similarity(dog))
1: 0.3551335059008647
2: 0.21606158966020875
3: 0.2074718583991242

Using this strategy we can compute document or text similarities as well:

In [26]:
docs = ( nlp(u"Paris is the largest city in France."),
        nlp(u"Vilnius is the capital of Lithuania."),
        nlp(u"An emu is a large bird.") )

for x in range(len(docs)):
    for y in range(len(docs)):
        print(x, y, docs[x].similarity(docs[y]))
0 0 1.0
0 1 0.8139621420526477
0 2 0.6578787369563981
1 0 0.8139621420526477
1 1 1.0
1 2 0.6000087099931554
2 0 0.6578787369563981
2 1 0.6000087099931554
2 2 1.0

We can vary the word order in sentences and compare them:

In [27]:
docs = [nlp(u"dog bites man"), nlp(u"man bites dog"),
        nlp(u"man dog bites"), nlp(u"dog man bites")]

for doc in docs:
    for other_doc in docs:
        print('"' + doc.text + '"', '"' + other_doc.text + '"', doc.similarity(other_doc))
"dog bites man" "dog bites man" 1.0
"dog bites man" "man bites dog" 0.941871368221926
"dog bites man" "man dog bites" 0.9062079104027668
"dog bites man" "dog man bites" 0.9328819114282291
"man bites dog" "dog bites man" 0.941871368221926
"man bites dog" "man bites dog" 1.0
"man bites dog" "man dog bites" 0.91031258826218
"man bites dog" "dog man bites" 0.9005242840640686
"man dog bites" "dog bites man" 0.9062079104027668
"man dog bites" "man bites dog" 0.91031258826218
"man dog bites" "man dog bites" 1.0
"man dog bites" "dog man bites" 0.9483532486752623
"dog man bites" "dog bites man" 0.9328819114282291
"dog man bites" "man bites dog" 0.9005242840640686
"dog man bites" "man dog bites" 0.9483532486752623
"dog man bites" "dog man bites" 1.0

Custom Models

Optimization

In [ ]:
nlp = spacy.load('en_core_web_lg')