Lambda notebook demo v. 1.0.1

Author: Kyle Rawlins

This notebook provides a demo of the core capabilities of the lambda notebook, aimed at linguists who already have training in semantics (but not necessarily implemented semantics).

Last updated Dec 2018. Version history:

  • 0.5: first version
  • 0.6: updated to work with refactored class hierarchy (Apr 2013)
  • 0.6.1: small fixes to adapt to changes in various places (Sep 2013)
  • 0.7: various fixes to work with alpha release (Jan 2014)
  • 0.9: substantial updates, merge content from LSA poster (Apr 2014)
  • 0.95: substantial updates for a series of demos in Apr-May 2014
  • 1.0: various changes / fixes, more stand-alone text (2017)
  • 1.0.1: small tweaks (Nov/Dec 2018)

To run through this demo incrementally, use shift-enter (runs and moves to next cell). If you run things out of order, you may encounter problems (missing variables etc.)

In [1]:
reload_lamb()
from lamb.types import TypeMismatch, type_e, type_t, type_property
from lamb.meta import TypedTerm, TypedExpr, LFun, CustomTerm
In [2]:
# Just some basic configuration
meta.constants_use_custom(False)
lang.bracket_setting = lang.BRACKET_FANCY
lamb.display.default(style=lamb.display.DerivStyle.BOXES) # you can also try lamb.display.DerivStyle.PROOF

First pitch

Have you ever wanted to type something like this in, and have it actually do something?

In [3]:
%%lamb
||every|| = λ f_<e,t> : λ g_<e,t> : Forall x_e : f(x) >> g(x)
||student|| = L x_e : Student(x)
||danced|| = L x_e : Danced(x)
INFO (meta): Coerced guessed type for 'Student_t' into <e,t>, to match argument 'x_e'
INFO (meta): Coerced guessed type for 'Danced_t' into <e,t>, to match argument 'x_e'
Out[3]:
$[\![\mathbf{\text{every}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}\left\langle{}e,t\right\rangle{},t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} f_{\left\langle{}e,t\right\rangle{}} \: . \: \lambda{} g_{\left\langle{}e,t\right\rangle{}} \: . \: \forall{} x_{e} \: . \: ({f}({x}) \rightarrow{} {g}({x}))$
$[\![\mathbf{\text{student}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {Student}({x})$
$[\![\mathbf{\text{danced}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {Danced}({x})$
In [4]:
r = ((every * student) * danced)
r
Out[4]:
1 composition path. Result:
    [0]: $[\![\mathbf{\text{[[every student] danced]}}]\!]^{}_{t} \:=\: $$\forall{} x_{e} \: . \: ({Student}({x}) \rightarrow{} {Danced}({x}))$
In [5]:
r.tree()
Out[5]:
1 composition path:
$[\![\mathbf{\text{every}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}\left\langle{}e,t\right\rangle{},t\right\rangle{}\right\rangle{}}$
$\lambda{} f_{\left\langle{}e,t\right\rangle{}} \: . \: \lambda{} g_{\left\langle{}e,t\right\rangle{}} \: . \: \forall{} x_{e} \: . \: ({f}({x}) \rightarrow{} {g}({x}))$
*
$[\![\mathbf{\text{student}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {Student}({x})$
[
FA
]
$[\![\mathbf{\text{[every student]}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},t\right\rangle{}}$
$\lambda{} g_{\left\langle{}e,t\right\rangle{}} \: . \: \forall{} x_{e} \: . \: ({Student}({x}) \rightarrow{} {g}({x}))$
*
$[\![\mathbf{\text{danced}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {Danced}({x})$
[
FA
]
$[\![\mathbf{\text{[[every student] danced]}}]\!]^{}_{t}$
$\forall{} x_{e} \: . \: ({Student}({x}) \rightarrow{} {Danced}({x}))$


Two problems in formal semantics

  1. Type-driven computation could be a lot easier to visualize and check. (Q: could it be made too easy?)

  2. Grammar fragments as in Montague Grammar: good idea in principle, hard to use in practice.

    • A fragment is a complete formalization of a sublanguage consisting of the key relevant phenomena for the problem at hand. (Potential problem points italicized.)

Solution: a system for developing interactive fragments: "IPython Lambda Notebook"

  • Creator can work interactively with analysis -- accelerate development, limit time spent on tedious details.
  • Reader can explore derivations in ways that are not possible in a traditional paper format.
  • Creator and reader can be certain that derivations work, verified by the system.
  • Bring closer together formal semantics and computational modeling.

Inspired by:

  • Von Eijck and Unger (2010): implementation of compositional semantics in Haskell. No interface (beyond standard Haskell terminal); great if you like Haskell. Introduced the idea of a fragment in digital form.
  • UPenn Lambda calculator (Champollion, Tauberer, and Romero 2007): teaching oriented. (Now under development again.)
  • nltk.sem: implementation of the lambda calculus with a typed metalanguage, interface with theorem provers. No interactive interface.
  • Jealousy of R studio, Matlab, Mathematica, etc.

The role of formalism & fragments

What does formal mean in semantics? What properties should a theory have?

  1. Mathematically precise (lambda calculus, type theory, logic, model theory(?), ...)
  2. Complete (covers "all" the relevant data).
  3. Predictive (like any scientific theory).
  4. Consistent, or at least compatible (with itself, analyses of other phenomena, some unifying conception of the grammar).

The method of fragments (Partee 1979, Partee and Hendriks 1997) provides a structure for meeting these criteria.

  • Paper with a fragment provides a working system. (Probably.)
  • Explicit outer bound for empirical coverage.
  • Integration with a particular theory of grammar. (To some extent.)
  • Explicit answer to many detailed questions not necessarily dealt with in the text.

Claim: fragments are a method of replicability, similar to a computational modeller providing their model.

  • To be clear, a fragment is neither necessary nor sufficient for having a good theory / analysis / paper...

Additional benefit: useful internal check for researcher.

"...But I feel strongly that it is important to try to [work with fully explicit fragments] periodically, because otherwise it is extremely easy to think that you have a solution to a problem when in fact you don't." (Partee 1979, p. 41)

The challenges of fragments

Part 1 of the above quote:

"It can be very frustrating to try to specify frameworks and fragments explicitly; this project has not been entirely rewarding. I would not recommend that one always work with the constraint of full explicitness." (Ibid.)

  • Fragments can be tedious and time-consuming to write (not to mention hard).
  • Fragments as traditionally written are in practice not easy for a reader to use.

    • Dense/unapproachable. With exactness can come a huge chunk of hard-to-digest formalism. E.g. Partee (1979), about 10% of the paper.
    • Monolithic/non-modular. Within the specified sublanguage, everything is specified; outside its bounds, nothing is. How does the theory fit in with others?
    • Exact opposite of the modern method -- researchers typically hold most aspects of the grammar constant (implicitly) while changing a few key points. (Portner and Partee intro)

Summary: In practice, for neither the reader nor the writer of a fragment did the typical payoff exceed the effort.

A solution: digital fragments

Von Eijck and Unger 2010: specify a fragment in digital form.

  • They use Haskell. Type system of Haskell extremely well-suited to natural language semantics.
  • (Provocative statement) Interface, learning curve of Haskell not well suited to semanticists (or most people)?

Benefits of digital fragments (in principle)

  • Interactive.
  • Easy to distribute, adapt, modify.
  • Possibility of modularity. (E.g. abstract a 'library' for compositional systems away from the analysis of a particular phenomenon.)
  • Bring the CogSci idea of a 'computational model' closer to the project of natural language semantics.
  • Connections to computational semantics. (weak..)

What sorts of things might we want in a fragment / system for fragments?

  • Typed lambda calculus.
  • Logic / logical metalanguage.
  • Framework for semantic composition. (Broad...)
  • Model theory? (x)
  • Interface with theorem provers? (x)

IPython Lambda Notebook aims to provide these tools in a usable, interactive, format.

  • Choose Python, rather than Haskell/Java. Easy learning curve, rapid prototyping, existence of IPython.

Layer 1: interface using IPython Notebook.

Layer 2: flexible typed metalanguage.

Layer 3: composition system for object language, building on layer 2.

Layer 1: an interface using IPython/Jupyter Notebook (Perez and Granger 2007)

  • Client-server system where a specialized IPython "kernel" is running in the background. This kernel implements various tools for formal semantics (see parts 2-3).
  • Pages are broken down into cells, which can contain python code, markdown, raw text, or other formats.
  • Jupyter: supports display of graphical representations of python objects.
  • The notebook format uses the MathJax framework to render most math-mode latex. Python objects can automatically generate decent-looking formulas, and latex math mode can be used in documentation as well (e.g. $\lambda x \in D_e : \mathit{CAT}(x)$).

This all basically worked off-the-shelf.

  • Bulk of interface work so far: rendering code for logical and compositional representations.
  • Future: interactive widgets, etc.
In [6]:
meta.pmw_test1
Out[6]:
$\lambda{} p_{t} \: . \: \lambda{} x_{e} \: . \: ({P}({x}) \wedge{} {p})$
In [7]:
meta.pmw_test1._repr_latex_()
Out[7]:
'$\\lambda{} p_{t} \\: . \\: \\lambda{} x_{e} \\: . \\: ({P}({x}) \\wedge{} {p})$'
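The mechanism in the two cells above is IPython's standard rich-display protocol: any object with a `_repr_latex_` method gets rendered via MathJax when displayed. A minimal self-contained example (`LatexFormula` is a hypothetical toy class, not part of lamb):

```python
class LatexFormula:
    """Toy object that Jupyter will render as math-mode latex."""
    def __init__(self, body):
        self.body = body

    def _repr_latex_(self):
        # Jupyter calls this automatically when the object is displayed,
        # and hands the returned string to MathJax.
        return "$%s$" % self.body

f = LatexFormula(r"\lambda x \in D_e : \mathit{CAT}(x)")
f._repr_latex_()  # the string handed to MathJax
```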

 

Part 2: a typed metalanguage

The metalanguage infrastructure is a set of classes that implement the building blocks of logical expressions, lambda terms, and various combinations thereof. This rests on an implementation of a type system that matches what semanticists tend to assume.

Starting point (2012): a few implementations of things like predicate logic did exist (it is sometimes an intro AI exercise). I started with the AIMA python Expr class, based on the standard Russell and Norvig AI text, but had to scrap most of it. Another starting point would have been nltk.sem (I was unaware of its existence at the time).

Prefacing a cell with %%lamb lets you enter metalanguage formulas directly. The following cell defines a variable x that has type e, and exports it to the notebook's environment.

In [8]:
%%lamb reset
x = x_e # define x to have this type
Out[8]:
${x}_{e}\:=\:{x}_{e}$
In [9]:
x.type
Out[9]:
$e$

This next cell defines some variables whose values are more complex objects -- in fact, functions in the typed lambda calculus.

In [10]:
%%lamb
test1 = L p_t : L x_e : P(x) & p # based on a Partee et al example
test1b = L x_e : P(x) & Q(x)
t2 = Q(x_e)
INFO (meta): Coerced guessed type for 'P_t' into <e,t>, to match argument 'x_e'
INFO (meta): Coerced guessed type for 'P_t' into <e,t>, to match argument 'x_e'
INFO (meta): Coerced guessed type for 'Q_t' into <e,t>, to match argument 'x_e'
INFO (meta): Coerced guessed type for 'Q_t' into <e,t>, to match argument 'x_e'
Out[10]:
${test1}_{\left\langle{}t,\left\langle{}e,t\right\rangle{}\right\rangle{}}\:=\:\lambda{} p_{t} \: . \: \lambda{} x_{e} \: . \: ({P}({x}) \wedge{} {p})$
${test1b}_{\left\langle{}e,t\right\rangle{}}\:=\:\lambda{} x_{e} \: . \: ({P}({x}) \wedge{} {Q}({x}))$
${t2}_{t}\:=\:{Q}({x}_{e})$

These are now registered as variables in the python namespace and can be manipulated directly. A typed lambda calculus is fully implemented with all that that entails -- e.g. the value of test1 includes the whole syntactic structure of the formula, its type, etc., and can be used in constructing new formulas. The following cells build a complex function-argument formula and then perform the reduction.

(Notice that beta reduction works properly, i.e. bound $x$ in the function is renamed in order to avoid collision with the free x in the argument.)

In [11]:
test1(t2)
Out[11]:
${[\lambda{} p_{t} \: . \: \lambda{} x_{e} \: . \: ({P}({x}) \wedge{} {p})]}({Q}({x}_{e}))$
In [12]:
test1(t2).reduce()
Out[12]:
$\lambda{} x1_{e} \: . \: ({P}({x1}) \wedge{} {Q}({x}_{e}))$
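The renaming step can be illustrated independently of the lamb library. Here is a minimal sketch of capture-avoiding substitution over tuple-encoded lambda terms (toy code with a naive fresh-name scheme, not the lamb implementation):

```python
# Toy tuple-encoded lambda terms:
#   ('var', name) | ('lam', bound_name, body) | ('app', fun, arg)

def free_vars(t):
    """Set of free variable names in term t."""
    kind = t[0]
    if kind == 'var':
        return {t[1]}
    if kind == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])  # 'app'

def subst(t, name, val):
    """Substitute val for free occurrences of name in t, renaming bound
    variables that would capture a free variable of val."""
    kind = t[0]
    if kind == 'var':
        return val if t[1] == name else t
    if kind == 'app':
        return ('app', subst(t[1], name, val), subst(t[2], name, val))
    bound, body = t[1], t[2]
    if bound == name:            # name is shadowed below this binder
        return t
    if bound in free_vars(val):  # would capture: alpha-rename first
        fresh = bound + "1"      # naive fresh-name scheme, for illustration
        body = subst(body, bound, ('var', fresh))
        bound = fresh
    return ('lam', bound, subst(body, name, val))

def beta(t):
    """One top-level beta step: (lam x . body)(arg) -> body[x := arg]."""
    if t[0] == 'app' and t[1][0] == 'lam':
        return subst(t[1][2], t[1][1], t[2])
    return t

# Mirrors the cell above: [L p : L x : p(x)](Q(x)) -- the argument's free x
# forces the bound x to be renamed to x1.
fun = ('lam', 'p', ('lam', 'x', ('app', ('var', 'p'), ('var', 'x'))))
arg = ('app', ('var', 'Q'), ('var', 'x'))
beta(('app', fun, arg))
# → ('lam', 'x1', ('app', ('app', ('var', 'Q'), ('var', 'x')), ('var', 'x1')))
```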
In [13]:
%%lamb
catf = L x_e: Cat(x)
dogf = λx: Dog(x_e)
INFO (meta): Coerced guessed type for 'Cat_t' into <e,t>, to match argument 'x_e'
INFO (meta): Coerced guessed type for 'Dog_t' into <e,t>, to match argument 'x_e'
Out[13]:
${catf}_{\left\langle{}e,t\right\rangle{}}\:=\:\lambda{} x_{e} \: . \: {Cat}({x})$
${dogf}_{\left\langle{}e,t\right\rangle{}}\:=\:\lambda{} x_{e} \: . \: {Dog}({x})$
In [14]:
(catf(x)).type
Out[14]:
$t$
In [15]:
catf.type
Out[15]:
$\left\langle{}e,t\right\rangle{}$

Type checking is, of course, part of all this. If the types don't match, the computation will throw a TypeMismatch exception. The following cell uses python syntax to catch and display such errors.

In [16]:
result = None
try:
    result = test1(x) # function is type <t,<e,t>>, so applying it to an e-type argument triggers a type mismatch
except TypeMismatch as e:
    result = e
result
Out[16]:
Type mismatch: '$\lambda{} p_{t} \: . \: \lambda{} x_{e} \: . \: ({P}({x}) \wedge{} {p})$'/$\left\langle{}t,\left\langle{}e,t\right\rangle{}\right\rangle{}$ and '${x}_{e}$'/$e$ conflict (mode: Function argument combination (unification failed))
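The check that failed here can be sketched in a few lines of plain Python, treating function types as (argument, result) pairs (an illustration only, not the lamb type system):

```python
# Types are either atomic strings ('e', 't') or (argument, result) pairs.
class ToyTypeMismatch(Exception):
    pass

def apply_type(fun_type, arg_type):
    """Result type of applying a function type to an argument type."""
    if not isinstance(fun_type, tuple):
        raise ToyTypeMismatch("%r is not a function type" % (fun_type,))
    arg, result = fun_type
    if arg != arg_type:
        raise ToyTypeMismatch("%r expected, got %r" % (arg, arg_type))
    return result

et = ('e', 't')                    # <e,t>
apply_type((et, (et, 't')), et)    # every's type applied to <e,t> → <<e,t>,t>
# applying test1's type <t,<e,t>> to an e-type argument fails, as above:
# apply_type(('t', et), 'e')       # raises ToyTypeMismatch
```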

A more complex expression:

In [17]:
%%lamb
p2 = (Cat_<e,t>(x_e) & p_t) >> (Exists y: Dog_<e,t>(y_e))
Out[17]:
${p2}_{t}\:=\:(({Cat}({x}_{e}) \wedge{} {p}_{t}) \rightarrow{} \exists{} y_{e} \: . \: {Dog}({y}))$

What is going on behind the scenes? The objects manipulated are recursively structured python objects of class TypedExpr.

Class TypedExpr: parent class for typed expressions. Key subclasses:

  • BinaryOpExpr: parent class for things like conjunction.
  • TypedTerm: variables, constants of arbitrary type
  • BindingOp: operators that bind a single variable
    • LFun: lambda expression

Many straightforward expressions can be parsed. Most expressions are created via a call to TypedExpr.factory, abbreviated as "te" in the following examples; the %%lamb magic calls this behind the scenes.

Three ways of instantiating a variable x of type e:

In [18]:
%%lamb 
x = x_e # use cell magic
Out[18]:
${x}_{e}\:=\:{x}_{e}$
In [19]:
x = te("x_e") # use factory function to parse string
x
Out[19]:
${x}_{e}$
In [20]:
x = meta.TypedTerm("x", types.type_e) # use object constructor
x
Out[20]:
${x}_{e}$
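The `name_type` notation that te parses can be sketched with a toy splitter (`parse_term` is illustrative only; the real parser handles full type expressions and type guessing):

```python
def parse_term(s):
    """Split 'x_e' into (name, type-string); type is None if omitted."""
    name, sep, ty = s.partition("_")
    return (name, ty if sep else None)

parse_term("x_e")      # → ('x', 'e')
parse_term("f_<e,t>")  # → ('f', '<e,t>')
```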

Various convenience python operators are overloaded, including function calls. Here is an example repeated from earlier in two forms:

In [21]:
%%lamb
p2 = (Cat_<e,t>(x_e) & p_t) >> (Exists y: Dog_<e,t>(y_e))
Out[21]:
${p2}_{t}\:=\:(({Cat}({x}_{e}) \wedge{} {p}_{t}) \rightarrow{} \exists{} y_{e} \: . \: {Dog}({y}))$
In [22]:
p2 = (te("Cat_<e,t>(x)") & te("p_t")) >> te("(Exists y: Dog_<e,t>(y_e))")
p2
Out[22]:
$(({Cat}({x}_{e}) \wedge{} {p}_{t}) \rightarrow{} \exists{} y_{e} \: . \: {Dog}({y}))$

Let's examine in detail what happens when a function and argument combine.

In [23]:
catf = meta.LFun(types.type_e, te("Cat(x_e)"), "x")
catf
INFO (meta): Coerced guessed type for 'Cat_t' into <e,t>, to match argument 'x_e'
Out[23]:
$\lambda{} x_{e} \: . \: {Cat}({x})$
In [24]:
catf(te("y_e"))
Out[24]:
${[\lambda{} x_{e} \: . \: {Cat}({x})]}({y}_{e})$

Combining a function with an argument builds a complex, unreduced expression. This can be explicitly reduced (note that the reduce_all() function would be used to apply reduction recursively):

In [25]:
catf(te("y_e")).reduce()
Out[25]:
${Cat}({y}_{e})$
In [26]:
(catf(te("y_e")).reduce()).derivation
Out[26]:
1.
${[\lambda{} x_{e} \: . \: {Cat}({x})]}({y}_{e})$
2.
${Cat}({y}_{e})$
Reduction

The metalanguage supports some basic type inference. Inference happens already when a function and argument are combined into an unreduced expression, not at beta-reduction.

In [27]:
%lamb ttest = L x_X : P_<?,t>(x) # type <?,t>
%lamb tvar = y_t
ttest(tvar)
Out[27]:
${[\lambda{} x_{t} \: . \: {P}_{\left\langle{}t,t\right\rangle{}}({x})]}({y}_{t})$
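The '?' placeholder behaves like a type variable that gets resolved by unification against the argument's type. A minimal sketch of that idea (toy code; the actual lamb inference handles full type variables):

```python
def unify(t1, t2):
    """Unified type, or None on failure; '?' matches anything."""
    if t1 == '?':
        return t2
    if t2 == '?':
        return t1
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        arg, result = unify(t1[0], t2[0]), unify(t1[1], t2[1])
        return None if arg is None or result is None else (arg, result)
    return t1 if t1 == t2 else None

# P's guessed type <?,t> resolves against the inferred <t,t>:
unify(('?', 't'), ('t', 't'))  # → ('t', 't')
unify('e', 't')                # → None: no unifier, a type mismatch
```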

 

Part 3: composition systems for an object language

On top of the metalanguage are 'composition systems' for modeling (step-by-step) semantic composition in an object language such as English. This is the part of the lambda notebook that tracks and manipulates mappings between object language elements (words, trees, etc.) and denotations in the metalanguage.

A composition system at its core consists of a set of composition rules; the following cell defines a simple composition system that will be familiar to anyone who has taken a basic course in compositional semantics. (This example is just a redefinition of the default composition system.)

In [28]:
# none of this is strictly necessary, the built-in library already provides effectively this system.
fa = lang.BinaryCompositionOp("FA", lang.fa_fun, reduce=True)
pm = lang.BinaryCompositionOp("PM", lang.pm_fun, commutative=False, reduce=True)
pa = lang.BinaryCompositionOp("PA", lang.pa_fun, allow_none=True)
demo_hk_system = lang.CompositionSystem(name="demo system", rules=[fa, pm, pa])
lang.set_system(demo_hk_system)
demo_hk_system
Out[28]:
Composition system 'demo system'
Operations: {
    Binary composition rule FA, built on python function 'lamb.lang.fa_fun'
    Binary composition rule PM, built on python function 'lamb.lang.pm_fun'
    Binary composition rule PA, built on python function 'lamb.lang.pa_fun'
}
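Abstractly, the '*' operator used below tries each rule on the pair, in both argument orders, and collects the successes. A toy sketch of that search loop over (denotation, type) pairs (assuming nothing about the real lang internals):

```python
def try_fa(f, a):
    """Function Application over (denotation-string, type) pairs, or None."""
    den_f, ty_f = f
    den_a, ty_a = a
    if isinstance(ty_f, tuple) and ty_f[0] == ty_a:
        return ("%s(%s)" % (den_f, den_a), ty_f[1])
    return None

def compose(x, y, rules=(try_fa,)):
    """All results of composing x and y, trying each rule in both orders."""
    results = []
    for rule in rules:
        for left, right in ((x, y), (y, x)):
            r = rule(left, right)
            if r is not None:
                results.append(r)
    return results

inP_texas = ("In(_, Texas)", ('e', 't'))  # stand-in for [inP texas]
julius = ("Julius", 'e')
compose(inP_texas, julius)  # → [('In(_, Texas)(Julius)', 't')]: one FA path
```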

Denotations are expressed in %%lamb cells, and almost always begin with lexical items. The following cell defines several lexical items that will be familiar from introductory exercises in the Heim & Kratzer 1998 textbook "Semantics in Generative Grammar". These definitions produce items that are subclasses of the class Composable.

In [29]:
%%lamb
||cat|| = L x_e: Cat(x)
||gray|| = L x_e: Gray(x)
||john|| = John_e
||julius|| = Julius_e
||inP|| = L x_e : L y_e : In(y, x) # `in` is a reserved word in python
||texas|| = Texas_e
||isV|| = L p_<e,t> : p # `is` is a reserved word in python
INFO (meta): Coerced guessed type for 'Cat_t' into <e,t>, to match argument 'x_e'
INFO (meta): Coerced guessed type for 'Gray_t' into <e,t>, to match argument 'x_e'
INFO (meta): Coerced guessed type for 'In_t' into <(e,e),t>, to match argument '(y_e, x_e)'
Out[29]:
$[\![\mathbf{\text{cat}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {Cat}({x})$
$[\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {Gray}({x})$
$[\![\mathbf{\text{john}}]\!]^{}_{e} \:=\: $${John}_{e}$
$[\![\mathbf{\text{julius}}]\!]^{}_{e} \:=\: $${Julius}_{e}$
$[\![\mathbf{\text{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}({y}, {x})$
$[\![\mathbf{\text{texas}}]\!]^{}_{e} \:=\: $${Texas}_{e}$
$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$

In the purely type-driven mode, composition is triggered by using the '*' operator on a Composable. This searches over the available composition operations in the system to see if any results can be had. inP and texas above should be able to compose using the FA rule:

In [30]:
inP * texas
Out[30]:
1 composition path. Result:
    [0]: $[\![\mathbf{\text{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$

On the other hand, isV is looking for a property, so we shouldn't expect successful composition. Below this I have given a complete sentence, with some introspection on the composition result.

In [31]:
julius * isV # will fail due to type mismatches
Out[31]:
Composition failed. Attempts:
    Type mismatch: '$[\![\mathbf{\text{julius}}]\!]^{}_{e} \:=\: $${Julius}_{e}$'/$e$ and '$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$'/$\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}$ conflict (mode: Function Application)
    Type mismatch: '$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$'/$\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}$ and '$[\![\mathbf{\text{julius}}]\!]^{}_{e} \:=\: $${Julius}_{e}$'/$e$ conflict (mode: Function Application)
    Type mismatch: '$[\![\mathbf{\text{julius}}]\!]^{}_{e} \:=\: $${Julius}_{e}$'/$e$ and '$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$'/$\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}$ conflict (mode: Predicate Modification)
    Type mismatch: '$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$'/$\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}$ and '$[\![\mathbf{\text{julius}}]\!]^{}_{e} \:=\: $${Julius}_{e}$'/$e$ conflict (mode: Predicate Modification)
    Type mismatch: '$[\![\mathbf{\text{julius}}]\!]^{}_{e} \:=\: $${Julius}_{e}$'/$e$ and '$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$'/$\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}$ conflict (mode: Predicate Abstraction)
    Type mismatch: '$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$'/$\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}$ and '$[\![\mathbf{\text{julius}}]\!]^{}_{e} \:=\: $${Julius}_{e}$'/$e$ conflict (mode: Predicate Abstraction)
In [32]:
sentence1 = julius * (isV * (inP * texas))
sentence1
Out[32]:
1 composition path. Result:
    [0]: $[\![\mathbf{\text{[[isV [inP texas]] julius]}}]\!]^{}_{t} \:=\: $${In}({Julius}_{e}, {Texas}_{e})$
In [33]:
sentence1.trace()
Out[33]:
Full composition trace. 1 path:
    Step 1: $[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$
    Step 2: $[\![\mathbf{\text{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}({y}, {x})$
    Step 3: $[\![\mathbf{\text{texas}}]\!]^{}_{e} \:=\: $${Texas}_{e}$
    Step 4: $[\![\mathbf{\text{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$ * $[\![\mathbf{\text{texas}}]\!]^{}_{e}$ leads to: $[\![\mathbf{\text{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$ [by FA]
    Step 5: $[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}}$ * $[\![\mathbf{\text{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$ leads to: $[\![\mathbf{\text{[isV [inP texas]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$ [by FA]
    Step 6: $[\![\mathbf{\text{julius}}]\!]^{}_{e} \:=\: $${Julius}_{e}$
    Step 7: $[\![\mathbf{\text{[isV [inP texas]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$ * $[\![\mathbf{\text{julius}}]\!]^{}_{e}$ leads to: $[\![\mathbf{\text{[[isV [inP texas]] julius]}}]\!]^{}_{t} \:=\: $${In}({Julius}_{e}, {Texas}_{e})$ [by FA]

Composition will find all possible paths (beware of combinatorial explosion). To illustrate a case with multiple composition paths, I have temporarily disabled the commutativity of standard PM (which it ordinarily has, because conjunction is commutative):

In [34]:
gray * cat
Out[34]:
2 composition paths. Results:
    [0]: $[\![\mathbf{\text{[gray cat]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ({Gray}({x}) \wedge{} {Cat}({x}))$
    [1]: $[\![\mathbf{\text{[cat gray]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ({Cat}({x}) \wedge{} {Gray}({x}))$
In [35]:
gray * (cat * (inP * texas))
Out[35]:
4 composition paths. Results:
    [0]: $[\![\mathbf{\text{[gray [cat [inP texas]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ({Gray}({x}) \wedge{} ({Cat}({x}) \wedge{} {In}({x}, {Texas}_{e})))$
    [1]: $[\![\mathbf{\text{[[cat [inP texas]] gray]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: (({Cat}({x}) \wedge{} {In}({x}, {Texas}_{e})) \wedge{} {Gray}({x}))$
    [2]: $[\![\mathbf{\text{[gray [[inP texas] cat]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ({Gray}({x}) \wedge{} ({In}({x}, {Texas}_{e}) \wedge{} {Cat}({x})))$
    [3]: $[\![\mathbf{\text{[[[inP texas] cat] gray]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: (({In}({x}, {Texas}_{e}) \wedge{} {Cat}({x})) \wedge{} {Gray}({x}))$
In [36]:
a = lang.Item("a", lang.isV.content) # identity function for copula as well
isV * (a * (gray * cat * (inP * texas)))
Out[36]:
4 composition paths. Results:
    [0]: $[\![\mathbf{\text{[isV [a [[gray cat] [inP texas]]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: (({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e}))$
    [1]: $[\![\mathbf{\text{[isV [a [[inP texas] [gray cat]]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ({In}({x}, {Texas}_{e}) \wedge{} ({Gray}({x}) \wedge{} {Cat}({x})))$
    [2]: $[\![\mathbf{\text{[isV [a [[cat gray] [inP texas]]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: (({Cat}({x}) \wedge{} {Gray}({x})) \wedge{} {In}({x}, {Texas}_{e}))$
    [3]: $[\![\mathbf{\text{[isV [a [[inP texas] [cat gray]]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ({In}({x}, {Texas}_{e}) \wedge{} ({Cat}({x}) \wedge{} {Gray}({x})))$
In [37]:
np = ((gray * cat) * (inP * texas))
vp = (isV * (a * np))
sentence2 = julius * vp
sentence2
Out[37]:
4 composition paths. Results:
    [0]: $[\![\mathbf{\text{[[isV [a [[gray cat] [inP texas]]]] julius]}}]\!]^{}_{t} \:=\: $$(({Gray}({Julius}_{e}) \wedge{} {Cat}({Julius}_{e})) \wedge{} {In}({Julius}_{e}, {Texas}_{e}))$
    [1]: $[\![\mathbf{\text{[[isV [a [[inP texas] [gray cat]]]] julius]}}]\!]^{}_{t} \:=\: $$({In}({Julius}_{e}, {Texas}_{e}) \wedge{} ({Gray}({Julius}_{e}) \wedge{} {Cat}({Julius}_{e})))$
    [2]: $[\![\mathbf{\text{[[isV [a [[cat gray] [inP texas]]]] julius]}}]\!]^{}_{t} \:=\: $$(({Cat}({Julius}_{e}) \wedge{} {Gray}({Julius}_{e})) \wedge{} {In}({Julius}_{e}, {Texas}_{e}))$
    [3]: $[\![\mathbf{\text{[[isV [a [[inP texas] [cat gray]]]] julius]}}]\!]^{}_{t} \:=\: $$({In}({Julius}_{e}, {Texas}_{e}) \wedge{} ({Cat}({Julius}_{e}) \wedge{} {Gray}({Julius}_{e})))$
In [38]:
sentence1.results[0]
Out[38]:
$[\![\mathbf{\text{[[isV [inP texas]] julius]}}]\!]^{}_{t} \:=\: $${In}({Julius}_{e}, {Texas}_{e})$
In [39]:
sentence1.results[0].tree()
Out[39]:
$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$
*
$[\![\mathbf{\text{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}({y}, {x})$
*
$[\![\mathbf{\text{texas}}]\!]^{}_{e}$
${Texas}_{e}$
[
FA
]
$[\![\mathbf{\text{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$
[
FA
]
$[\![\mathbf{\text{[isV [inP texas]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$
*
$[\![\mathbf{\text{julius}}]\!]^{}_{e}$
${Julius}_{e}$
[
FA
]
$[\![\mathbf{\text{[[isV [inP texas]] julius]}}]\!]^{}_{t}$
${In}({Julius}_{e}, {Texas}_{e})$
In [40]:
sentence2.results[0].tree()
Out[40]:
$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$
*
$[\![\mathbf{\text{a}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$
*
$[\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {Gray}({x})$
*
$[\![\mathbf{\text{cat}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {Cat}({x})$
[
PM
]
$[\![\mathbf{\text{[gray cat]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: ({Gray}({x}) \wedge{} {Cat}({x}))$
*
$[\![\mathbf{\text{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}({y}, {x})$
*
$[\![\mathbf{\text{texas}}]\!]^{}_{e}$
${Texas}_{e}$
[
FA
]
$[\![\mathbf{\text{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$
[
PM
]
$[\![\mathbf{\text{[[gray cat] [inP texas]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: (({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e}))$
[
FA
]
$[\![\mathbf{\text{[a [[gray cat] [inP texas]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: (({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e}))$
[
FA
]
$[\![\mathbf{\text{[isV [a [[gray cat] [inP texas]]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: (({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e}))$
*
$[\![\mathbf{\text{julius}}]\!]^{}_{e}$
${Julius}_{e}$
[
FA
]
$[\![\mathbf{\text{[[isV [a [[gray cat] [inP texas]]]] julius]}}]\!]^{}_{t}$
$(({Gray}({Julius}_{e}) \wedge{} {Cat}({Julius}_{e})) \wedge{} {In}({Julius}_{e}, {Texas}_{e}))$

One of the infamous exercise examples from Heim and Kratzer (names different):

(1) Julius is a gray cat in Texas fond of John.

First let's get rid of all the extra readings, to keep this simple.

In [41]:
demo_hk_system.get_rule("PM").commutative = True
In [42]:
fond = lang.Item("fond", "L x_e : L y_e : Fond(y)(x)")
ofP = lang.Item("of", "L x_e : x")
sentence3 = julius * (isV * (a * (((gray * cat) * (inP * texas)) * (fond * (ofP * john)))))
sentence3
INFO (meta): Coerced guessed type for 'Fond_t' into <e,t>, to match argument 'y_e'
INFO (meta): Coerced guessed type for 'Fond_<e,t>(y_e)' into <e,t>, to match argument 'x_e'
Out[42]:
1 composition path. Result:
    [0]: $[\![\mathbf{\text{[[isV [a [[[gray cat] [inP texas]] [fond [of john]]]]] julius]}}]\!]^{}_{t} \:=\: $$((({Gray}({Julius}_{e}) \wedge{} {Cat}({Julius}_{e})) \wedge{} {In}({Julius}_{e}, {Texas}_{e})) \wedge{} {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({Julius}_{e})({John}_{e}))$
In [43]:
sentence3.tree()
Out[43]:
1 composition path:
$[\![\mathbf{\text{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$
*
$[\![\mathbf{\text{a}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$
*
$[\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {Gray}({x})$
*
$[\![\mathbf{\text{cat}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {Cat}({x})$
[
PM
]
$[\![\mathbf{\text{[gray cat]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: ({Gray}({x}) \wedge{} {Cat}({x}))$
*
$[\![\mathbf{\text{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}({y}, {x})$
*
$[\![\mathbf{\text{texas}}]\!]^{}_{e}$
${Texas}_{e}$
[
FA
]
$[\![\mathbf{\text{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$
[
PM
]
$[\![\mathbf{\text{[[gray cat] [inP texas]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: (({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e}))$
*
$[\![\mathbf{\text{fond}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({y})({x})$
*
$[\![\mathbf{\text{of}}]\!]^{}_{\left\langle{}e,e\right\rangle{}}$
$\lambda{} x_{e} \: . \: {x}$
*
$[\![\mathbf{\text{john}}]\!]^{}_{e}$
${John}_{e}$
[
FA
]
$[\![\mathbf{\text{[of john]}}]\!]^{}_{e}$
${John}_{e}$
[
FA
]
$[\![\mathbf{\text{[fond [of john]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} y_{e} \: . \: {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({y})({John}_{e})$
[
PM
]
$[\![\mathbf{\text{[[[gray cat] [inP texas]] [fond [of john]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: ((({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e})) \wedge{} {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({x})({John}_{e}))$
[
FA
]
$[\![\mathbf{\text{[a [[[gray cat] [inP texas]] [fond [of john]]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: ((({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e})) \wedge{} {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({x})({John}_{e}))$
[
FA
]
$[\![\mathbf{\text{[isV [a [[[gray cat] [inP texas]] [fond [of john]]]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: ((({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e})) \wedge{} {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({x})({John}_{e}))$
*
$[\![\mathbf{\text{julius}}]\!]^{}_{e}$
${Julius}_{e}$
[
FA
]
$[\![\mathbf{\text{[[isV [a [[[gray cat] [inP texas]] [fond [of john]]]]] julius]}}]\!]^{}_{t}$
$((({Gray}({Julius}_{e}) \wedge{} {Cat}({Julius}_{e})) \wedge{} {In}({Julius}_{e}, {Texas}_{e})) \wedge{} {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({Julius}_{e})({John}_{e}))$


The Composite class subclasses nltk.Tree, and so supports the operations that class provides, e.g. []-based paths:
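To illustrate what []-based paths look like, here is a minimal toy stand-in (ordinary Python, not the real nltk.Tree or lamb Composite classes):

```python
# A minimal stand-in illustrating nltk.Tree-style []-based paths
# (toy class, not the real nltk.Tree or lamb Composite).
class Tree(list):
    def __init__(self, label, children):
        super().__init__(children)
        self.label = label

s = Tree("S", [Tree("NP", ["julius"]),
               Tree("VP", [Tree("V", ["isV"]),
                           Tree("NP", ["a", "cat"])])])

# Successive []s walk a path down the tree, just as with nltk.Tree:
assert s[0].label == "NP"
assert s[1][0].label == "V"
assert s[1][1][1] == "cat"
```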

In [44]:
parse_tree3 = sentence3.results[0]
parse_tree3[0][1][1].tree()
Out[44]:
$[\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {Gray}({x})$
*
$[\![\mathbf{\text{cat}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {Cat}({x})$
[
PM
]
$[\![\mathbf{\text{[gray cat]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: ({Gray}({x}) \wedge{} {Cat}({x}))$
*
$[\![\mathbf{\text{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}({y}, {x})$
*
$[\![\mathbf{\text{texas}}]\!]^{}_{e}$
${Texas}_{e}$
[
FA
]
$[\![\mathbf{\text{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$
[
PM
]
$[\![\mathbf{\text{[[gray cat] [inP texas]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: (({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e}))$
*
$[\![\mathbf{\text{fond}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({y})({x})$
*
$[\![\mathbf{\text{of}}]\!]^{}_{\left\langle{}e,e\right\rangle{}}$
$\lambda{} x_{e} \: . \: {x}$
*
$[\![\mathbf{\text{john}}]\!]^{}_{e}$
${John}_{e}$
[
FA
]
$[\![\mathbf{\text{[of john]}}]\!]^{}_{e}$
${John}_{e}$
[
FA
]
$[\![\mathbf{\text{[fond [of john]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} y_{e} \: . \: {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({y})({John}_{e})$
[
PM
]
$[\![\mathbf{\text{[[[gray cat] [inP texas]] [fond [of john]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: ((({Gray}({x}) \wedge{} {Cat}({x})) \wedge{} {In}({x}, {Texas}_{e})) \wedge{} {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({x})({John}_{e}))$

There is support for traces and indexed pronouns, using the PA rule. (The implementation may not be what you expect.)
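For comparison, here is a rough plain-Python sketch of the textbook (Heim and Kratzer-style) Predicate Abstraction rule, where traces are assignment-dependent and PA abstracts over an assignment slot; the lamb implementation differs in its details, as the parenthetical above warns. All names below are toy stand-ins:

```python
# Sketch of textbook-style PA (toy code, not the lamb internals).

def trace(n):
    """A trace denotes whatever the assignment g maps its index to."""
    return lambda g: g[n]

def pa(n, body):
    """PA: [[ n body ]]^g = lambda x . [[ body ]]^{g[n -> x]}"""
    return lambda g: (lambda x: body({**g, n: x}))

def in_rel(y, x):
    return (y, x)  # stand-in for In(y, x)

# [[ [[in t5] t23] ]]^g = In(g(23), g(5)):
clause = lambda g: in_rel(trace(23)(g), trace(5)(g))

# [[ [23 [5 [[in t5] t23]]] ]] = lambda x1 . lambda x . In(x1, x):
b1_sketch = pa(23, pa(5, clause))
assert b1_sketch({})("A")("B") == ("A", "B")
```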

In [45]:
binder = lang.Binder(23)
binder2 = lang.Binder(5)
t = lang.Trace(23, types.type_e)
t2 = lang.Trace(5)
ltx_print(t, t2, binder)
Out[45]:
$[\![\mathbf{\text{t}}_{23}]\!]^{}_{e} \:=\: $${var23}_{e}$
$[\![\mathbf{\text{t}}_{5}]\!]^{}_{e} \:=\: $${var5}_{e}$
$[\![\mathbf{\text{23}}]\!]^{}$
In [46]:
((t * gray))
Out[46]:
1 composition path. Result:
    [0]: $[\![\mathbf{\text{[gray t23]}}]\!]^{}_{t} \:=\: $${Gray}({var23}_{e})$
In [47]:
b1 = (binder * (binder2 * (t * (lang.inP * t2))))
b2 = (binder2 * (binder * (t * (lang.inP * t2))))
ltx_print(b1, b2)
Out[47]:
1 composition path. Result:
    [0]: $[\![\mathbf{\text{[23 [5 [[in t5] t23]]]}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} x1_{e} \: . \: \lambda{} x_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({x1})({x})$
1 composition path. Result:
    [0]: $[\![\mathbf{\text{[5 [23 [[in t5] t23]]]}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} x1_{e} \: . \: \lambda{} x_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({x})({x1})$
In [48]:
b1.trace()
Out[48]:
Full composition trace. 1 path:
    Step 1: $[\![\mathbf{\text{23}}]\!]^{}$
    Step 2: $[\![\mathbf{\text{5}}]\!]^{}$
    Step 3: $[\![\mathbf{\text{in}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({y})({x})$
    Step 4: $[\![\mathbf{\text{t}}_{5}]\!]^{}_{e} \:=\: $${var5}_{e}$
    Step 5: $[\![\mathbf{\text{in}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$ * $[\![\mathbf{\text{t}}_{5}]\!]^{}_{e}$ leads to: $[\![\mathbf{\text{[in t5]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} y_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({y})({var5}_{e})$ [by FA]
    Step 6: $[\![\mathbf{\text{t}}_{23}]\!]^{}_{e} \:=\: $${var23}_{e}$
    Step 7: $[\![\mathbf{\text{[in t5]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$ * $[\![\mathbf{\text{t}}_{23}]\!]^{}_{e}$ leads to: $[\![\mathbf{\text{[[in t5] t23]}}]\!]^{}_{t} \:=\: $${In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({var23}_{e})({var5}_{e})$ [by FA]
    Step 8: $[\![\mathbf{\text{5}}]\!]^{}$ * $[\![\mathbf{\text{[[in t5] t23]}}]\!]^{}_{t}$ leads to: $[\![\mathbf{\text{[5 [[in t5] t23]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({var23}_{e})({x})$ [by PA]
    Step 9: $[\![\mathbf{\text{23}}]\!]^{}$ * $[\![\mathbf{\text{[5 [[in t5] t23]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$ leads to: $[\![\mathbf{\text{[23 [5 [[in t5] t23]]]}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: $$\lambda{} x1_{e} \: . \: \lambda{} x_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({x1})({x})$ [by PA]
In [49]:
b1.results[0].tree()
Out[49]:
$[\![\mathbf{\text{23}}]\!]^{}$
N/A
*
$[\![\mathbf{\text{5}}]\!]^{}$
N/A
*
$[\![\mathbf{\text{in}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({y})({x})$
*
$[\![\mathbf{\text{t}}_{5}]\!]^{}_{e}$
${var5}_{e}$
[
FA
]
$[\![\mathbf{\text{[in t5]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} y_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({y})({var5}_{e})$
*
$[\![\mathbf{\text{t}}_{23}]\!]^{}_{e}$
${var23}_{e}$
[
FA
]
$[\![\mathbf{\text{[[in t5] t23]}}]\!]^{}_{t}$
${In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({var23}_{e})({var5}_{e})$
[
PA
]
$[\![\mathbf{\text{[5 [[in t5] t23]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({var23}_{e})({x})$
[
PA
]
$[\![\mathbf{\text{[23 [5 [[in t5] t23]]]}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$
$\lambda{} x1_{e} \: . \: \lambda{} x_{e} \: . \: {In}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({x1})({x})$

Composition in tree structures

Some in-progress work: implementing tree-based computation and top-down/deferred computation:

  • composition directly over nltk Tree objects.
  • a system for deferred / uncertain types, with basic inference over unknown types.
  • arbitrary order of composition expansion. (Of course, some orders will be far less efficient!)
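The "basic inference over unknown types" can be pictured as a small unification step: type variables such as X and X' (visible in the outputs below) get resolved against concrete types as composition is expanded. A toy sketch of the idea, not the lamb type system:

```python
# Toy type unification sketch (not the lamb implementation). Functional
# types <a,b> are modeled as pairs (a, b); strings starting with "X" are
# type variables.

def unify(t1, t2, env):
    """Unify two types, extending the substitution `env`."""
    t1, t2 = env.get(t1, t1), env.get(t2, t2)
    if t1 == t2:
        return env
    if isinstance(t1, str) and t1.startswith("X"):
        return {**env, t1: t2}
    if isinstance(t2, str) and t2.startswith("X"):
        return {**env, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        env = unify(t1[0], t2[0], env)
        return unify(t1[1], t2[1], env)
    raise TypeError(f"type mismatch: {t1} vs {t2}")

# FA with unknown types: if [[NP]] : <X,X'> applies to [[VP]] : X, and we
# later learn the function must be <<e,t>,t>, then X and X' resolve:
env = unify(("X", "X'"), (("e", "t"), "t"), {})
assert env["X"] == ("e", "t")
assert env["X'"] == "t"
```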
In [50]:
reload_lamb()
lang.set_system(lang.hk3_system)
In [51]:
%%lamb
||gray|| = L x_e : Gray_<e,t>(x)
||cat|| = L x_e : Cat_<e,t>(x)
Out[51]:
$[\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {Gray}({x})$
$[\![\mathbf{\text{cat}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {Cat}({x})$
In [52]:
t2 = Tree("S", ["NP", "VP"])
t2
Out[52]:
NP
VP
S
In [53]:
t2 = Tree("S", ["NP", "VP"])
r2 = lang.hk3_system.compose(t2)
r2.tree()
r2.paths()
Out[53]:
3 composition paths:
Path [0]:
$[\![\mathbf{\text{NP}}]\!]^{}_{?}$
*
$[\![\mathbf{\text{VP}}]\!]^{}_{?}$
[
FA/left
]
$[\![\mathbf{\text{S}}]\!]^{}_{X'}$
$[\![\mathbf{\text{NP}}]\!]^{}_{\left\langle{}X,X'\right\rangle{}}([\![\mathbf{\text{VP}}]\!]^{}_{X})$


Path [1]:
$[\![\mathbf{\text{NP}}]\!]^{}_{?}$
*
$[\![\mathbf{\text{VP}}]\!]^{}_{?}$
[
FA/right
]
$[\![\mathbf{\text{S}}]\!]^{}_{X'}$
$[\![\mathbf{\text{VP}}]\!]^{}_{\left\langle{}X,X'\right\rangle{}}([\![\mathbf{\text{NP}}]\!]^{}_{X})$


Path [2]:
$[\![\mathbf{\text{NP}}]\!]^{}_{?}$
*
$[\![\mathbf{\text{VP}}]\!]^{}_{?}$
[
PM
]
$[\![\mathbf{\text{S}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: ([\![\mathbf{\text{NP}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}({x}) \wedge{} [\![\mathbf{\text{VP}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}({x}))$


In [54]:
Tree = lamb.utils.get_tree_class()
t = Tree("NP", ["gray", Tree("N", ["cat"])])
t
Out[54]:
gray
cat
N
NP
In [55]:
t2 = lang.CompositionTree.tree_factory(t)
r = lang.hk3_system.compose(t2)
r
Out[55]:
3 composition paths. Results:
    [0]: $[\![\mathbf{\text{NP}}]\!]^{}_{X'} \:=\: $$[\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}X,X'\right\rangle{}}([\![\mathbf{\text{N}}]\!]^{}_{X})$
    [1]: $[\![\mathbf{\text{NP}}]\!]^{}_{X'} \:=\: $$[\![\mathbf{\text{N}}]\!]^{}_{\left\langle{}X,X'\right\rangle{}}([\![\mathbf{\text{gray}}]\!]^{}_{X})$
    [2]: $[\![\mathbf{\text{NP}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ([\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}({x}) \wedge{} [\![\mathbf{\text{N}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}({x}))$
In [56]:
r.tree()
Out[56]:
$[\![\mathbf{\text{gray}}]\!]^{}_{?}$
*
$[\![\mathbf{\text{N}}]\!]^{}_{?}$
$[\![\mathbf{\text{[NP]}}]\!]$
[path 0]:
$[\![\mathbf{\text{NP}}]\!]^{}_{X'} \:=\: $$[\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}X,X'\right\rangle{}}([\![\mathbf{\text{N}}]\!]^{}_{X})$
[path 1]:
$[\![\mathbf{\text{NP}}]\!]^{}_{X'} \:=\: $$[\![\mathbf{\text{N}}]\!]^{}_{\left\langle{}X,X'\right\rangle{}}([\![\mathbf{\text{gray}}]\!]^{}_{X})$
[path 2]:
$[\![\mathbf{\text{NP}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ([\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}({x}) \wedge{} [\![\mathbf{\text{N}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}({x}))$
In [57]:
r = lang.hk3_system.expand_all(t2)
r
Out[57]:
1 composition path. Result:
    [0]: $[\![\mathbf{\text{NP}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ({Gray}({x}) \wedge{} {Cat}({x}))$
In [58]:
r.tree()
Out[58]:
$[\![\mathbf{\text{[gray]}}]\!]$
[path 0]:
$[\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {Gray}({x})$
*
$[\![\mathbf{\text{[cat]}}]\!]$
[path 0]:
$[\![\mathbf{\text{cat}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {Cat}({x})$
$[\![\mathbf{\text{[N]}}]\!]$
[path 0]:
$[\![\mathbf{\text{N}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: {Cat}({x})$
$[\![\mathbf{\text{[NP]}}]\!]$
[path 0]:
$[\![\mathbf{\text{NP}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: $$\lambda{} x_{e} \: . \: ({Gray}({x}) \wedge{} {Cat}({x}))$
In [59]:
r.paths()
Out[59]:
1 composition path:
$[\![\mathbf{\text{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
[
Lexicon
]
$\lambda{} x_{e} \: . \: {Gray}({x})$
*
$[\![\mathbf{\text{cat}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
[
Lexicon
]
$\lambda{} x_{e} \: . \: {Cat}({x})$
[
NN
]
$[\![\mathbf{\text{N}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: {Cat}({x})$
[
PM
]
$[\![\mathbf{\text{NP}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$
$\lambda{} x_{e} \: . \: ({Gray}({x}) \wedge{} {Cat}({x}))$


 

Some future projects (non-exhaustive)

  • a complete fragment of Heim and Kratzer
  • In general: more fragments!
  • extend fragment coverage. Some interesting targets where interactivity would be useful to understanding:
    • compositional Hamblin semantics (partial)
    • compositional DRT (partial)
    • QR
  • an underlying model theory.
  • various improvements to the graphics: trees (d3? graphviz?), interactive widgets, ...
  • full LaTeX output (trees in tikz-qtree and so on).

Longer term:

  • integration with SymPy (?)
  • deeper integration with nltk.
  • parsing that makes less use of Python's eval, and is generally less ad hoc.
    • this is an area where, in principle, a language like Haskell would be a better choice than Python; but I think the usability and robustness of Python and its libraries have the edge here overall, not to mention the IPython notebook...
  • toy spatial language system
  • side-by-side comparison of e.g. multiple analyses of presupposition projection