This notebook provides a demo of the core capabilities of the lambda notebook, aimed at linguists who already have training in semantics (but not necessarily implemented semantics).
Last updated Aug 2023. Version history:
To run through this demo incrementally, use shift-enter (runs and moves to next cell). If you run things out of order, you may encounter problems (missing variables etc.)
import lamb  # assumed setup; the display call below requires the lamb package itself
from lamb.types import TypeMismatch, type_e, type_t, type_property
from lamb.meta import TypedTerm, TypedExpr, LFun, CustomTerm
# There are several ways of displaying outputs; this one is the default:
lamb.display.default(style=lamb.display.DerivStyle.BOXES)
# you can also try:
# lamb.display.default(style=lamb.display.DerivStyle.PROOF)
Have you ever wanted to type something like this in, and have it actually do something?
%%lamb
||every|| = λ f_<e,t> : λ g_<e,t> : Forall x_e : f(x) >> g(x)
||student|| = L x_e : Student(x)
||danced|| = L x_e : Danced(x)
INFO (core): Coerced guessed type for 'Student_t' into <e,t>, to match argument 'x_e'
INFO (core): Coerced guessed type for 'Danced_t' into <e,t>, to match argument 'x_e'
$[\![\text{\textbf{every}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}\left\langle{}e,t\right\rangle{},t\right\rangle{}\right\rangle{}} \:=\: \lambda{} f_{\left\langle{}e,t\right\rangle{}} \: . \: \lambda{} g_{\left\langle{}e,t\right\rangle{}} \: . \: \forall{} x_{e} \: . \: {f}({x}) \rightarrow{} {g}({x})$
$[\![\text{\textbf{student}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Student}({x})$
$[\![\text{\textbf{danced}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Danced}({x})$
r = ((every * student) * danced)
r
1 composition path. Result:
[0]: $[\![\text{\textbf{[[every student] danced]}}]\!]^{}_{t} \:=\: \forall{} x_{e} \: . \: {Student}({x}) \rightarrow{} {Danced}({x})$
r.tree()
Type-driven computation could be a lot easier to visualize and check. (Q: could it be made too easy?)
Grammar fragments as in Montague Grammar: good idea in principle, hard to use in practice.
Solution: a system for developing interactive fragments: "IPython Lambda Notebook"
Inspired by:
nltk.sem: an implementation of the lambda calculus with a typed metalanguage, with an interface to theorem provers. No interactive interface.

What does "formal" mean in semantics? What properties should a theory have?
The method of fragments (Partee 1979, Partee and Hendriks 1997) provides a structure for meeting these criteria.
Claim: fragments are a method of replicability, similar to a computational modeller providing their model.
Additional benefit: useful internal check for researcher.
"...But I feel strongly that it is important to try to [work with fully explicit fragments] periodically, because otherwise it is extremely easy to think that you have a solution to a problem when in fact you don't." (Partee 1979, p. 41)
Part 1 of the above quote:
"It can be very frustrating to try to specify frameworks and fragments explicitly; this project has not been entirely rewarding. I would not recommend that one always work with the constraint of full explicitness." (Ibid.)
Fragments can be tedious and time-consuming to write (not to mention hard).
Fragments as traditionally written are in practice not easy for a reader to use.
Summary: In practice, the payoff typically did not exceed the effort, for either the writer or the reader of a fragment.
Van Eijck and Unger (2010): specify a fragment in digital form.
IPython Lambda Notebook aims to provide these tools in a usable, interactive, format.
Layer 1: interface using IPython Notebook.
Layer 2: flexible typed metalanguage.
Layer 3: composition system for object language, building on layer 2.
This all basically worked off-the-shelf.
pmw_test1 = %te L p_t : L x_e : P_<e,t>(x) & p
pmw_test1
pmw_test1._repr_latex_()
'$\\lambda{} p_{t} \\: . \\: \\lambda{} x_{e} \\: . \\: {P}({x}) \\wedge{} {p}$'
The metalanguage infrastructure is a set of classes that implement the building blocks of logical expressions, lambda terms, and various combinations of these. This rests on an implementation of a type system that matches what semanticists tend to assume.
Starting point (2012): a few implementations of things like predicate logic already existed; this is sometimes an intro AI exercise. I started with the AIMA python Expr class, based on the standard Russell and Norvig AI text, but had to scrap most of it. Another starting point would have been nltk.sem (I was unaware of its existence at the time).
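The core ideas behind such an implementation can be sketched in plain Python. The following is a toy illustration only (not lamb's actual code): semantic types are recursive objects, with basic types e and t and functional types built from pairs, and type checking of function application falls out directly.

```python
# Toy sketch of a semantic type system (illustrative only; not lamb's implementation).
class Basic:
    """A basic type such as e or t."""
    def __init__(self, name):
        self.name = name
    def __eq__(self, other):
        return isinstance(other, Basic) and self.name == other.name
    def __repr__(self):
        return self.name

class Fun:
    """A functional type <left,right>."""
    def __init__(self, left, right):
        self.left, self.right = left, right
    def __eq__(self, other):
        return (isinstance(other, Fun)
                and self.left == other.left and self.right == other.right)
    def __repr__(self):
        return f"<{self.left},{self.right}>"

E, T = Basic("e"), Basic("t")

def apply_type(fun_type, arg_type):
    """Type of applying a function to an argument; None signals a type mismatch."""
    if isinstance(fun_type, Fun) and fun_type.left == arg_type:
        return fun_type.right
    return None

# <e,t> applied to e yields t; applied to t it fails.
assert apply_type(Fun(E, T), E) == T
assert apply_type(Fun(E, T), T) is None
```

In the real library, a mismatch raises a TypeMismatch exception rather than returning None, and types carry much more structure.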
Preface a cell with %%lamb to enter metalanguage formulas directly. The following cell defines a variable x that has type e, and exports it to the notebook's environment.
%%lamb reset
x = x_e # define x to have this type
${x}_{e}\:=\:{x}_{e}$
x.type
This next cell defines some variables whose values are more complex objects -- in fact, functions in the typed lambda calculus.
%%lamb
test1 = L p_t : L x_e : P(x) & p # based on a Partee et al example
test1b = L x_e : P(x) & Q(x)
t2 = Q(x_e)
INFO (core): Coerced guessed type for 'P_t' into <e,t>, to match argument 'x_e'
INFO (core): Coerced guessed type for 'P_t' into <e,t>, to match argument 'x_e'
INFO (core): Coerced guessed type for 'Q_t' into <e,t>, to match argument 'x_e'
INFO (core): Coerced guessed type for 'Q_t' into <e,t>, to match argument 'x_e'
${test1}_{\left\langle{}t,\left\langle{}e,t\right\rangle{}\right\rangle{}}\:=\:\lambda{} p_{t} \: . \: \lambda{} x_{e} \: . \: {p} \wedge{} {P}({x})$
${test1b}_{\left\langle{}e,t\right\rangle{}}\:=\:\lambda{} x_{e} \: . \: {P}({x}) \wedge{} {Q}({x})$
${t2}_{t}\:=\:{Q}({x}_{e})$
These are now registered as variables in the python namespace and can be manipulated directly. A typed lambda calculus is fully implemented with all that that entails -- e.g. the value of test1 includes the whole syntactic structure of the formula, its type, etc., and can be used in constructing new formulas. The following cells build a complex function-argument formula, and then perform the reduction. (Notice that beta reduction works properly, i.e. the bound x in the function is renamed in order to avoid collision with the free x in the argument.)
test1(t2)
test1(t2).reduce()
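The capture-avoiding renaming at work here can be sketched in a few lines of plain Python (a toy model, not lamb's internals): terms are nested tuples, and substitution renames a bound variable whenever it would capture a free variable of the value being substituted in.

```python
# Toy capture-avoiding beta reduction (illustrative only; lamb's internals differ).
# Terms: ('var', name), ('lam', name, body), ('app', fun, arg).
def free_vars(t):
    kind = t[0]
    if kind == 'var':
        return {t[1]}
    if kind == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])  # 'app'

def fresh(name, avoid):
    """Prime the name until it no longer clashes."""
    while name in avoid:
        name += "'"
    return name

def subst(t, x, val):
    """t[x := val], renaming bound variables to avoid capturing free vars of val."""
    kind = t[0]
    if kind == 'var':
        return val if t[1] == x else t
    if kind == 'app':
        return ('app', subst(t[1], x, val), subst(t[2], x, val))
    v, body = t[1], t[2]
    if v == x:
        return t  # x is shadowed under this binder; nothing to substitute
    if v in free_vars(val):  # the binder would capture val's free variable: rename
        v2 = fresh(v, free_vars(val) | free_vars(body))
        body = subst(body, v, ('var', v2))
        v = v2
    return ('lam', v, subst(body, x, val))

def reduce_once(t):
    """One beta step: (lambda v . body)(arg) -> body[v := arg]."""
    if t[0] == 'app' and t[1][0] == 'lam':
        return subst(t[1][2], t[1][1], t[2])
    return t

# (lambda p . lambda x . p(x)) applied to the free variable x:
# the bound x is renamed to x' so the free x is not captured.
term = ('app', ('lam', 'p', ('lam', 'x', ('app', ('var', 'p'), ('var', 'x')))),
        ('var', 'x'))
assert reduce_once(term) == ('lam', "x'", ('app', ('var', 'x'), ('var', "x'")))
```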
%%lamb
catf = L x_e: Cat(x)
dogf = λx: Dog(x_e)
INFO (core): Coerced guessed type for 'Cat_t' into <e,t>, to match argument 'x_e'
INFO (core): Coerced guessed type for 'Dog_t' into <e,t>, to match argument 'x_e'
${catf}_{\left\langle{}e,t\right\rangle{}}\:=\:\lambda{} x_{e} \: . \: {Cat}({x})$
${dogf}_{\left\langle{}e,t\right\rangle{}}\:=\:\lambda{} x_{e} \: . \: {Dog}({x})$
(catf(x)).type
catf.type
Type checking is of course a part of all this. If the types don't match, the computation will throw a TypeMismatch exception. The following cell uses python syntax to catch and print such errors.
with lamb.errors():
display(test1(x)) # function is type <t,<e,t>> so will trigger a type mismatch.
TypeMismatch: (λ p_t: (λ x_e: (p_t & P_<e,t>(x_e))))/<t,<e,t>> and x_e conflict (Function-argument expression: mismatched argument types t and e)
A more complex expression:
%%lamb
p2 = (Cat_<e,t>(x_e) & p_t) >> (Exists y: Dog_<e,t>(y_e))
${p2}_{t}\:=\:({p}_{t} \wedge{} {Cat}({x}_{e})) \rightarrow{} (\exists{} y_{e} \: . \: {Dog}({y}))$
What is going on behind the scenes? The objects manipulated are recursively structured python objects of class TypedExpr.
Class TypedExpr: parent class for typed expressions. Key subclasses:
Many straightforward expressions can be parsed. Most expressions are created using a call to TypedExpr.factory, which is abbreviated as "te" in the following examples. The %%lamb magic calls this behind the scenes.
Three ways of instantiating a variable x of type e:
%%lamb
x = x_e # use cell magic
${x}_{e}\:=\:{x}_{e}$
x = te("x_e") # use factory function to parse string
x
x = meta.TypedTerm("x", types.type_e) # use object constructor
x
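The string notation that the factory accepts, e.g. "x_e" or "P_<e,t>", can be illustrated with a toy parser (purely a sketch; lamb's real parser handles far more, including operators and binders). A variable is a name, an underscore, and a type string, where type strings are either a basic type or a <left,right> pair.

```python
# Toy parser for type strings like "e", "t", "<e,t>", "<e,<e,t>>"
# (illustrative only; not lamb's parser). Basic types are returned as
# single-character strings, functional types as (left, right) pairs.
def parse_type(s):
    t, rest = _parse(s)
    assert rest == "", f"trailing input: {rest!r}"
    return t

def _parse(s):
    if s.startswith("<"):
        left, rest = _parse(s[1:])
        assert rest.startswith(","), "expected ','"
        right, rest = _parse(rest[1:])
        assert rest.startswith(">"), "expected '>'"
        return (left, right), rest[1:]
    return s[0], s[1:]  # a basic type: a single letter

def parse_term(s):
    """Split a typed-term string like 'x_e' into a (name, type) pair."""
    name, _, typ = s.partition("_")
    return name, parse_type(typ)

assert parse_type("<e,<e,t>>") == ("e", ("e", "t"))
assert parse_term("x_e") == ("x", "e")
assert parse_term("P_<e,t>") == ("P", ("e", "t"))
```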
Various convenience python operators are overloaded, including functional calls. Here is an example repeated from earlier in two forms:
%%lamb
p2 = (Cat_<e,t>(x_e) & p_t) >> (Exists y: Dog_<e,t>(y_e))
${p2}_{t}\:=\:({p}_{t} \wedge{} {Cat}({x}_{e})) \rightarrow{} (\exists{} y_{e} \: . \: {Dog}({y}))$
p2 = (te("Cat_<e,t>(x)") & te("p_t")) >> te("(Exists y: Dog_<e,t>(y_e))")
p2
Let's examine in detail what happens when a function and argument combine.
catf = meta.LFun(te("x_e"), te("Cat(x_e)"))
catf
INFO (core): Coerced guessed type for 'Cat_t' into <e,t>, to match argument 'x_e'
catf(te("y_e"))
Combining a function and argument builds a complex, unreduced expression. This can be explicitly reduced (note that the reduce_all() function would be used to apply reduction recursively):
catf(te("y_e")).reduce()
(catf(te("y_e")).reduce()).derivation
The metalanguage supports some basic type inference. Type inference already happens when a function and argument are combined into an unreduced expression, not at beta reduction.
%lamb ttest = L x_X : P_<?,t>(x) # type <?,t>
%lamb tvar = y_t
ttest(tvar)
${ttest}_{\left\langle{}X,t\right\rangle{}}\:=\:\lambda{} x_{X} \: . \: {P}_{\left\langle{}X,t\right\rangle{}}({x})$
${tvar}_{t}\:=\:{y}_{t}$
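The mechanism behind resolving a type variable like X here is unification. A toy sketch (not lamb's implementation, whose type system is richer): basic types are lowercase strings, type variables are uppercase strings, functional types are pairs, and unifying two types either produces a substitution for the variables or fails.

```python
# Toy unification of types containing type variables (illustrative only).
# Basic types: lowercase strings. Type variables: uppercase strings.
# Functional types: (left, right) pairs.
def unify(a, b, env=None):
    """Return a dict mapping variables to types, or None on failure."""
    env = dict(env or {})
    a, b = env.get(a, a), env.get(b, b)  # chase existing bindings (one level)
    if a == b:
        return env
    if isinstance(a, str) and a.isupper():  # a is a type variable: bind it
        env[a] = b
        return env
    if isinstance(b, str) and b.isupper():
        env[b] = a
        return env
    if isinstance(a, tuple) and isinstance(b, tuple):  # unify componentwise
        env = unify(a[0], b[0], env)
        return None if env is None else unify(a[1], b[1], env)
    return None  # two distinct basic types, or basic vs. functional

# Applying a function of type <X,t> to an argument of type t forces X = t:
assert unify(("X", "t"), ("t", "t")) == {"X": "t"}
# e and t cannot unify:
assert unify("e", "t") is None
```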
On top of the metalanguage are 'composition systems' for modeling (step-by-step) semantic composition in an object language such as English. This is the part of the lambda notebook that tracks and manipulates mappings between object language elements (words, trees, etc) and denotations in the metalanguage.
A composition system at its core consists of a set of composition rules; the following cell defines a simple composition system that will be familiar to anyone who has taken a basic course in compositional semantics. (This example is just a redefinition of the default composition system.)
# none of this is strictly necessary, the built-in library already provides effectively this system.
fa = lang.BinaryCompositionOp("FA", lang.fa_fun, reduce=True)
pm = lang.BinaryCompositionOp("PM", lang.pm_fun, commutative=False, reduce=True)
pa = lang.BinaryCompositionOp("PA", lang.pa_fun, allow_none=True)
demo_hk_system = lang.CompositionSystem(name="demo system", rules=[fa, pm, pa])
lang.set_system(demo_hk_system)
demo_hk_system
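The search a composition system performs can be sketched in miniature (a toy model, not lamb's code): given two denotations, try every rule in both orders and collect whatever succeeds. Here denotations are just (label, type) pairs, with the toy type notation from above.

```python
# Toy type-driven composition search (illustrative only; lamb's system is richer).
# A denotation is a (label, type) pair; types are "e", "t", or (left, right) pairs.
def fa(f, a):
    """Function Application: f of type <sigma,tau> applied to a of type sigma."""
    if isinstance(f[1], tuple) and f[1][0] == a[1]:
        return (f"{f[0]}({a[0]})", f[1][1])
    return None

def pm(f, g):
    """Predicate Modification: two <e,t> denotations conjoin."""
    if f[1] == ("e", "t") and g[1] == ("e", "t"):
        return (f"lambda x . {f[0]}(x) & {g[0]}(x)", ("e", "t"))
    return None

def compose(left, right, rules=(fa, pm)):
    """Try every rule in both orders; collect all successful results."""
    results = []
    for rule in rules:
        for a, b in ((left, right), (right, left)):
            r = rule(a, b)
            if r is not None:
                results.append(r)
    return results

inP_texas = ("In(Texas)", ("e", "t"))  # cf. [[inP texas]] below
julius = ("Julius", "e")
# Only FA (function first) succeeds; the other three rule/order combinations fail.
assert compose(inP_texas, julius) == [("In(Texas)(Julius)", "t")]
```

Note that with PM stated this way, two <e,t> inputs succeed in both orders, yielding two composition paths; this mirrors the non-commutative PM defined above.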
Denotations are expressed in a %%lamb cell, and almost always begin with lexical items. The following cell defines several lexical items that will be familiar from introductory exercises in the Heim & Kratzer 1998 textbook "Semantics in Generative Grammar". These definitions produce items that are subclasses of the class Composable.
%%lamb
||cat|| = L x_e: Cat(x)
||gray|| = L x_e: Gray(x)
||john|| = John_e
||julius|| = Julius_e
||inP|| = L x_e : L y_e : In(y, x) # `in` is a reserved word in python
||texas|| = Texas_e
||isV|| = L p_<e,t> : p # `is` is a reserved word in python
INFO (core): Coerced guessed type for 'Cat_t' into <e,t>, to match argument 'x_e'
INFO (core): Coerced guessed type for 'Gray_t' into <e,t>, to match argument 'x_e'
INFO (core): Coerced guessed type for 'In_t' into <(e,e),t>, to match argument '(y_e, x_e)'
$[\![\text{\textbf{cat}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Cat}({x})$
$[\![\text{\textbf{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Gray}({x})$
$[\![\text{\textbf{john}}]\!]^{}_{e} \:=\: {John}_{e}$
$[\![\text{\textbf{julius}}]\!]^{}_{e} \:=\: {Julius}_{e}$
$[\![\text{\textbf{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}({y}, {x})$
$[\![\text{\textbf{texas}}]\!]^{}_{e} \:=\: {Texas}_{e}$
$[\![\text{\textbf{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$
In the purely type-driven mode, composition is triggered by using the '*' operator on a Composable. This searches over the available composition operations in the system to see if any results can be had. inP and texas above should be able to compose using the FA rule:
inP * texas
1 composition path. Result:
[0]: $[\![\text{\textbf{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$
On the other hand, isV is looking for a property, so we shouldn't expect successful composition. Below this I have given a complete sentence and shown some introspection on that composition result.
julius * isV # will fail due to type mismatches
Composition of "[julius isV]" failed:
TypeMismatch: $[\![\text{\textbf{julius}}]\!]^{}_{e} \:=\: {Julius}_{e}$ and $[\![\text{\textbf{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$ conflict (Function Application)
TypeMismatch: $[\![\text{\textbf{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$ and $[\![\text{\textbf{julius}}]\!]^{}_{e} \:=\: {Julius}_{e}$ conflict (Function Application)
TypeMismatch: $[\![\text{\textbf{julius}}]\!]^{}_{e} \:=\: {Julius}_{e}$ and $[\![\text{\textbf{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$ conflict (Predicate Modification)
TypeMismatch: $[\![\text{\textbf{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$ and $[\![\text{\textbf{julius}}]\!]^{}_{e} \:=\: {Julius}_{e}$ conflict (Predicate Modification)
Composition failure (PA requires a valid binder) on: $[\![\text{\textbf{julius}}]\!]^{}_{e} \:=\: {Julius}_{e}$ * $[\![\text{\textbf{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$
Composition failure (PA requires a valid binder) on: $[\![\text{\textbf{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$ * $[\![\text{\textbf{julius}}]\!]^{}_{e} \:=\: {Julius}_{e}$
sentence1 = julius * (isV * (inP * texas))
display(sentence1[0].source_tree())
sentence1
1 composition path. Result:
[0]: $[\![\text{\textbf{[julius [isV [inP texas]]]}}]\!]^{}_{t} \:=\: {In}({Julius}_{e}, {Texas}_{e})$
sentence1.trace()
Full composition trace. 1 path:
Step 1: $[\![\text{\textbf{julius}}]\!]^{}_{e} \:=\: {Julius}_{e}$
Step 2: $[\![\text{\textbf{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} p_{\left\langle{}e,t\right\rangle{}} \: . \: {p}$
Step 3: $[\![\text{\textbf{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}({y}, {x})$
Step 4: $[\![\text{\textbf{texas}}]\!]^{}_{e} \:=\: {Texas}_{e}$
Step 5: $[\![\text{\textbf{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$ * $[\![\text{\textbf{texas}}]\!]^{}_{e}$ leads to: $[\![\text{\textbf{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$ [by FA]
Step 6: $[\![\text{\textbf{isV}}]\!]^{}_{\left\langle{}\left\langle{}e,t\right\rangle{},\left\langle{}e,t\right\rangle{}\right\rangle{}}$ * $[\![\text{\textbf{[inP texas]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$ leads to: $[\![\text{\textbf{[isV [inP texas]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} y_{e} \: . \: {In}({y}, {Texas}_{e})$ [by FA]
Step 7: $[\![\text{\textbf{julius}}]\!]^{}_{e}$ * $[\![\text{\textbf{[isV [inP texas]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$ leads to: $[\![\text{\textbf{[julius [isV [inP texas]]]}}]\!]^{}_{t} \:=\: {In}({Julius}_{e}, {Texas}_{e})$ [by FA]
Composition will find all possible paths (beware of combinatorial explosion). Standard PM is symmetric/commutative (because conjunction is); I have temporarily disabled this above, to illustrate a case with multiple composition paths:
gray * cat
1 composition path. Result:
[0]: $[\![\text{\textbf{[gray cat]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Cat}({x}) \wedge{} {Gray}({x})$ (2 equivalent paths lead here)
gray * (cat * (inP * texas))
1 composition path. Result:
[0]: $[\![\text{\textbf{[gray [cat [inP texas]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Cat}({x}) \wedge{} {Gray}({x}) \wedge{} {In}({x}, {Texas}_{e})$ (4 equivalent paths lead here)
a = lang.Item("a", isV.content) # identity function for copula as well
isV * (a * (gray * cat * (inP * texas)))
1 composition path. Result:
[0]: $[\![\text{\textbf{[isV [a [[gray cat] [inP texas]]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Cat}({x}) \wedge{} {Gray}({x}) \wedge{} {In}({x}, {Texas}_{e})$ (4 equivalent paths lead here)
np = ((gray * cat) * (inP * texas))
vp = (isV * (a * np))
sentence2 = julius * vp
sentence2
1 composition path. Result:
[0]: $[\![\text{\textbf{[julius [isV [a [[gray cat] [inP texas]]]]]}}]\!]^{}_{t} \:=\: {Cat}({Julius}_{e}) \wedge{} {Gray}({Julius}_{e}) \wedge{} {In}({Julius}_{e}, {Texas}_{e})$ (4 equivalent paths lead here)
sentence1.results[0]
sentence1.results[0].tree()
sentence2.results[0].tree()
One of the infamous exercise examples from Heim and Kratzer (with different names):
(1) Julius is a gray cat in Texas fond of John.
First let's get rid of all the extra readings, to keep this simple.
demo_hk_system.get_rule("PM").commutative = True
fond = lang.Item("fond", "L x_e : L y_e : Fond(y)(x)")
ofP = lang.Item("of", "L x_e : x")
sentence3 = julius * (isV * (a * (((gray * cat) * (inP * texas)) * (fond * (ofP * john)))))
display(sentence3[0].source_tree())
sentence3
INFO (core): Coerced guessed type for 'Fond_t' into <e,t>, to match argument 'y_e'
INFO (core): Coerced guessed type for 'Fond_<e,t>(y_e)' into <e,t>, to match argument 'x_e'
1 composition path. Result:
[0]: $[\![\text{\textbf{[julius [isV [a [[[gray cat] [inP texas]] [fond [of john]]]]]]}}]\!]^{}_{t} \:=\: {Cat}({Julius}_{e}) \wedge{} {Fond}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}({Julius}_{e})({John}_{e}) \wedge{} {Gray}({Julius}_{e}) \wedge{} {In}({Julius}_{e}, {Texas}_{e})$
sentence3.tree()
The Composite class subclasses nltk.Tree, and so supports the things that class does. E.g. []-based paths:
parse_tree3 = sentence3.results[0]
parse_tree3[1][1][1].tree()
There is support for traces and indexed pronouns, using the PA rule. (The implementation may not be what you expect.)
binder = lang.Binder(23)
binder2 = lang.Binder(5)
t = lang.Trace(23, types.type_e)
t2 = lang.Trace(5)
display(t, t2, binder)
((t * gray))
1 composition path. Result:
[0]: $[\![\text{\textbf{[t23 gray]}}]\!]^{}_{t} \:=\: {Gray}({var23}_{e})$
b1 = (binder * (binder2 * (t * (inP * t2))))
b2 = (binder2 * (binder * (t * (inP * t2))))
display(b1, b2)
1 composition path. Result:
[0]: $[\![\text{\textbf{[23 [5 [t23 [inP t5]]]]}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} x1_{e} \: . \: \lambda{} x_{e} \: . \: {In}({x1}, {x})$
1 composition path. Result:
[0]: $[\![\text{\textbf{[5 [23 [t23 [inP t5]]]]}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} x1_{e} \: . \: \lambda{} x_{e} \: . \: {In}({x}, {x1})$
b1.trace()
Full composition trace. 1 path:
Step 1: $[\![\text{\textbf{23}}]\!]^{}\text{ [vacuous]}$
Step 2: $[\![\text{\textbf{5}}]\!]^{}\text{ [vacuous]}$
Step 3: $[\![\text{\textbf{t}}_{23}]\!]^{}_{e} \:=\: {var23}_{e}$
Step 4: $[\![\text{\textbf{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: \lambda{} y_{e} \: . \: {In}({y}, {x})$
Step 5: $[\![\text{\textbf{t}}_{5}]\!]^{}_{e} \:=\: {var5}_{e}$
Step 6: $[\![\text{\textbf{inP}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}}$ * $[\![\text{\textbf{t}}_{5}]\!]^{}_{e}$ leads to: $[\![\text{\textbf{[inP t5]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} y_{e} \: . \: {In}({y}, {var5}_{e})$ [by FA]
Step 7: $[\![\text{\textbf{t}}_{23}]\!]^{}_{e}$ * $[\![\text{\textbf{[inP t5]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$ leads to: $[\![\text{\textbf{[t23 [inP t5]]}}]\!]^{}_{t} \:=\: {In}({var23}_{e}, {var5}_{e})$ [by FA]
Step 8: $[\![\text{\textbf{5}}]\!]^{}$ * $[\![\text{\textbf{[t23 [inP t5]]}}]\!]^{}_{t}$ leads to: $[\![\text{\textbf{[5 [t23 [inP t5]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {In}({var23}_{e}, {x})$ [by PA]
Step 9: $[\![\text{\textbf{23}}]\!]^{}$ * $[\![\text{\textbf{[5 [t23 [inP t5]]]}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}$ leads to: $[\![\text{\textbf{[23 [5 [t23 [inP t5]]]]}}]\!]^{}_{\left\langle{}e,\left\langle{}e,t\right\rangle{}\right\rangle{}} \:=\: \lambda{} x1_{e} \: . \: \lambda{} x_{e} \: . \: {In}({x1}, {x})$ [by PA]
b1.results[0].tree()
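The effect of the PA rule in steps 8-9 can be sketched in miniature (a toy model only; lamb's PA rule differs in detail, and this string-based version cannot handle the renaming needed when binders nest). A trace t_n denotes a free variable var_n; a binder index n sister to a type-t meaning abstracts over that variable.

```python
# Toy Predicate Abstraction (illustrative only; not lamb's implementation).
# Meanings are plain strings here; a trace with index n appears as "var{n}".
def pa(index, body):
    """Wrap a type-t meaning as a lambda, binding the index-n trace variable."""
    return "lambda x . " + body.replace(f"var{index}", "x")

body = "In(var23, var5)"   # cf. [[ [t23 [inP t5]] ]] in step 7 above
step8 = pa(5, body)        # the index-5 binder abstracts over var5
assert step8 == "lambda x . In(var23, x)"
```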
Some in-progress work: implementing tree-based computation, and top-down/deferred computation
lang.set_system(lang.hk3_system)
%%lamb
||gray|| = L x_e : Gray_<e,t>(x)
||cat|| = L x_e : Cat_<e,t>(x)
$[\![\text{\textbf{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Gray}({x})$
$[\![\text{\textbf{cat}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Cat}({x})$
t2 = Tree("S", ["NP", "VP"])
t2
t2 = Tree("S", ["NP", "VP"])
r2 = lang.hk3_system.compose(t2)
r2.tree()
r2.paths()
Tree = lamb.utils.get_tree_class()
t = Tree("NP", ["gray", Tree("N", ["cat"])])
t
t2 = lang.CompositionTree.tree_factory(t)
r = lang.hk3_system.compose(t2)
r
3 composition paths. Results:
[0]: $[\![\text{\textbf{NP}}]\!]^{}_{X'} \:=\: [\![\text{\textbf{gray}}]\!]^{}_{\left\langle{}X,X'\right\rangle{}}([\![\text{\textbf{N}}]\!]^{}_{X})$
[1]: $[\![\text{\textbf{NP}}]\!]^{}_{X'} \:=\: [\![\text{\textbf{N}}]\!]^{}_{\left\langle{}X,X'\right\rangle{}}([\![\text{\textbf{gray}}]\!]^{}_{X})$
[2]: $[\![\text{\textbf{NP}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: [\![\text{\textbf{N}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}({x}) \wedge{} [\![\text{\textbf{gray}}]\!]^{}_{\left\langle{}e,t\right\rangle{}}({x})$
r.tree()
r = lang.hk3_system.expand_all(t2)
r
1 composition path. Result:
[0]: $[\![\text{\textbf{NP}}]\!]^{}_{\left\langle{}e,t\right\rangle{}} \:=\: \lambda{} x_{e} \: . \: {Cat}({x}) \wedge{} {Gray}({x})$
r.tree()
r.paths()
Longer term:
eval
, and is generally less ad-hoc.