Verbal valence is a kind of signature of a verb, not unlike overloading in programming languages. The meaning of a verb depends on the number and kind of its complements, i.e. the linguistic entities that act as arguments for the semantic function of the verb.
We will use a set of flowcharts to specify and compute the sense of a verb in specific contexts depending on the verbal valence. The flowcharts have been composed by Janet Dyk. Although they are not difficult to understand, it takes a good deal of ingenuity to apply them in all the real world situations that we encounter in our corpus.
Read more in the wiki.
import sys
import os
import collections
import yaml
from copy import deepcopy
import utils
from tf.fabric import Fabric
from tf.core.helpers import formatMeta
if "SCRIPT" not in locals():
SCRIPT = False
FORCE = True
CORE_NAME = "bhsa"
NAME = "valence"
VERSION = "c"
CORE_MODULE = "core"
def stop(good=False):
if SCRIPT:
sys.exit(0 if good else 1)
We have carried out the valence project against the Hebrew Text Database of the BHSA, version 4b.
See the description of the sources.
However, we can also run the workflow against the newer versions.
We also make use of corrected and enriched data delivered by the enrich notebook. The features of that data module are specified here.
We produce a Text-Fabric feature sense with the sense labels per verb occurrence, and add this to the valence data module created in the enrich notebook.
We also show the results in SHEBANQ, the website of the ETCBC that exposes its Hebrew Text Database in such a way that users can query it, save their queries, add manual annotations, and even upload bulk sets of generated annotations. That is exactly what we do: the valence results are visible in SHEBANQ in notes view, so that every outcome can be viewed in context.
The valence flowchart logic translates the context of a verb into a label that is characteristic for that context: a fingerprint of the context. Verb meanings are complex and context dependent, and it turns out that we can organize the meaning selection of verbs around these fingerprints.
For each verb, we can specify a flowchart as a mapping of fingerprints to concrete meanings. We have flowcharts for a limited but open set of verbs. They are listed in the wiki, and will be referred to from the resulting valence annotations in SHEBANQ.
For each verb, the flowchart is represented as a mapping of sense labels to meaning templates. A sense label is a code for the presence and nature of the direct objects and complements in the context. See the legend of sense labels.
The interesting part is the sense template, which consists of a translation text augmented with placeholders for the direct objects and complements.
See for example the flowchart of NTN.
{verb}: the verb occurrence in question
{pdos}: principal direct objects (phrase)
{kdos}: K-objects (phrase)
{ldos}: L-objects (phrase)
{ndos}: direct objects (phrase) (none of the above)
{idos}: infinitive construct (clause) objects
{cdos}: direct objects (clause) (none of the above)
{inds}: indirect objects
{bens}: benefactive adjuncts
{locs}: locatives
{cpls}: complements, not marked as either indirect object or locative

In case there are multiple entities, the algorithm returns them chunked as phrases/clauses.
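To make the template mechanism concrete, here is a purely hypothetical sense template in the spirit of the NTN flowchart, filled in with Python's str.format(); the translation text and fillers are made up for illustration, and the real templates live in the flowchart wiki pages.

```python
# Hypothetical sense template: the keys correspond to the placeholders
# listed above; values are invented, not taken from the actual NTN chart.
template = "{verb}: give {dos} to {inds}"

filled = template.format(
    verb="NTN",        # the verb occurrence in question
    dos="[the land]",  # a direct object chunk
    inds="[Abraham]",  # an indirect object chunk
)
print(filled)  # NTN: give [the land] to [Abraham]
```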
Apart from the template, there is also a status and an optional account.
The status is ! in normal cases, ? in dubious cases, and - in erroneous cases. In SHEBANQ these statuses are translated into colors of the notes (blue/orange/red).
The account contains information about the grounds on which the algorithm has arrived at its conclusions.
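The status-to-color translation can be sketched as a simple mapping, following the blue/orange/red convention mentioned above (SHEBANQ's own styling is authoritative):

```python
# Map a note status to the color SHEBANQ uses when displaying it.
statusColor = {
    "!": "blue",    # normal case
    "?": "orange",  # dubious case
    "-": "red",     # erroneous case
}

print(statusColor["?"])  # orange
```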
senses = set(
"""
<FH
BR>
CJT
DBQ
FJM
NTN
QR>
ZQN
""".strip().split()
)
senseLabels = """
--
-i
-b
-p
-c
d-
di
db
dp
dc
n.
l.
k.
i.
c.
""".strip().split()
constKindSpecs = """
verb:verb
dos:direct object
pdos:principal direct object
kdos:K-object
ldos:L-object
ndos:NP-object
idos:infinitive object clause
cdos:direct object clause
inds:indirect object
bens:benefactive
locs:locative
cpls:complement
""".strip().split(
"\n"
)
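The two characters of a sense label can be read off with a small helper; the wording below is my paraphrase of the legend (the wiki legend is authoritative), based on the constituent kinds listed above:

```python
# Decode a two-character sense label into a readable description.
CHAR1 = {
    "-": "no object", "d": "direct object", "n": "NP object",
    "l": "L-object", "k": "K-object", "i": "infinitive object",
    "c": "object clause",
}
CHAR2 = {
    "-": "no complement", "i": "indirect object", "b": "benefactive",
    "p": "locative", "c": "complement", ".": "(not applicable)",
}

def describe(label):
    # label is a two-character sense label such as "di" or "n."
    return "{} + {}".format(CHAR1[label[0]], CHAR2[label[1]])

print(describe("di"))  # direct object + indirect object
print(describe("n."))  # NP object + (not applicable)
```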
The conversion is executed in an environment of directories, so that sources, temp files and results are in convenient places and do not have to be shifted around.
In[4]:
repoBase = os.path.expanduser("~/github/etcbc")
coreRepo = "{}/{}".format(repoBase, CORE_NAME)
thisRepo = "{}/{}".format(repoBase, NAME)
coreTf = "{}/tf/{}".format(coreRepo, VERSION)
thisSource = "{}/source/{}".format(thisRepo, VERSION)
thisTemp = "{}/_temp/{}".format(thisRepo, VERSION)
thisTempTf = "{}/tf".format(thisTemp)
thisTf = "{}/tf/{}".format(thisRepo, VERSION)
thisNotes = "{}/shebanq/{}".format(thisRepo, VERSION)
In[5]:
notesFile = "valenceNotes.csv"
flowchartBase = "https://github.com/ETCBC/valence/wiki"
if not os.path.exists(thisNotes):
os.makedirs(thisNotes)
Check whether this conversion is needed in the first place. Only when run as a script.
In[6]:
if SCRIPT:
(good, work) = utils.mustRun(
None, "{}/.tf/{}.tfx".format(thisTf, "sense"), force=FORCE
)
if not good:
stop(good=False)
if not work:
stop(good=True)
In[7]:
utils.caption(4, "Load the existing TF dataset")
TF = Fabric(locations=[coreTf, thisTf], modules=[""])
.............................................................................................. . 0.00s Load the existing TF dataset . .............................................................................................. This is Text-Fabric 9.2.0 Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html 124 features found and 0 ignored
We instruct the API to load data.
In[8]:
api = TF.load(
"""
function rela typ
g_word_utf8 trailer_utf8
lex prs uvf sp pdp ls vs vt nametype gloss
book chapter verse label number
s_manual f_correction
valence predication grammatical original lexical semantic
mother
"""
)
api.makeAvailableIn(globals())
1.44s Dataset without structure sections in otext:no structure functions in the T-API | | 1.17s C __characters__ from otext | | 1.04s T f_correction from ~/github/etcbc/valence/tf/c | | 1.16s T grammatical from ~/github/etcbc/valence/tf/c | | 1.06s T lexical from ~/github/etcbc/valence/tf/c | | 1.01s T original from ~/github/etcbc/valence/tf/c | | 1.19s T predication from ~/github/etcbc/valence/tf/c | | 1.03s T s_manual from ~/github/etcbc/valence/tf/c | | 1.08s T semantic from ~/github/etcbc/valence/tf/c | | 1.17s T valence from ~/github/etcbc/valence/tf/c 21s All features loaded/computed - for details use TF.isLoaded()
[('Computed', 'computed-data', ('C Computed', 'Call AllComputeds', 'Cs ComputedString')), ('Features', 'edge-features', ('E Edge', 'Eall AllEdges', 'Es EdgeString')), ('Fabric', 'loading', ('TF',)), ('Locality', 'locality', ('L Locality',)), ('Nodes', 'navigating-nodes', ('N Nodes',)), ('Features', 'node-features', ('F Feature', 'Fall AllFeatures', 'Fs FeatureString')), ('Search', 'search', ('S Search',)), ('Text', 'text', ('T Text',))]
Here we specify by what features we recognize key constituents. We use predominantly features that come from the correction/enrichment workflow.
In[9]:
pf_... : predication feature
gf_... : grammatical feature
vf_... : valence feature
sf_... : semantic feature
of_... : original feature
pf_predicate = {
"regular",
}
gf_direct_object = {
"principal_direct_object",
"NP_direct_object",
"direct_object",
"L_object",
"K_object",
"infinitive_object",
}
gf_indirect_object = {
"indirect_object",
}
gf_complement = {
"*",
}
sf_locative = {
"location",
}
sf_benefactive = {
"benefactive",
}
vf_locative = {
"complement",
"adjunct",
}
verbal_stems = set(
"""
qal
""".strip().split()
)
We collect the information needed to determine how to render pronominal suffixes on words. On verbs they must be rendered accusatively, as in see him; on nouns they must be rendered genitively, as in hand my. So we make an inventory of the parts of speech and the pronominal suffixes that occur on them. On that basis we build the translation dictionaries pronominal_suffix and switch_prs. Finally, we define a function get_prs_info that delivers, for each word, the pronominal suffix info and gloss if there is any, and (None, None) otherwise.
In[10]:
prss = collections.defaultdict(lambda: collections.defaultdict(lambda: 0))
for w in F.otype.s("word"):
prss[F.sp.v(w)][F.prs.v(w)] += 1
if not SCRIPT:
for sp in sorted(prss):
for prs in sorted(prss[sp]):
print("{:<5} {:<3} : {:>5}".format(sp, prs, prss[sp][prs]))
adjv H : 16 adjv HM : 10 adjv J : 25 adjv K : 35 adjv K= : 3 adjv KM : 7 adjv M : 8 adjv MW : 1 adjv NW : 5 adjv W : 59 adjv absent : 9273 advb n/a : 4550 art n/a : 30386 conj n/a : 62722 inrg K : 1 inrg M : 2 inrg W : 5 inrg absent : 1277 intj K : 13 intj K= : 7 intj KM : 2 intj M : 37 intj NJ : 181 intj NW : 8 intj W : 3 intj absent : 1634 nega n/a : 6053 nmpr n/a : 33081 prde n/a : 2660 prep H : 1019 prep H= : 36 prep HJ : 13 prep HM : 1499 prep HN : 74 prep HW : 174 prep HWN : 19 prep J : 1853 prep K : 1634 prep K= : 353 prep KM : 1181 prep KN : 2 prep KWN : 1 prep M : 684 prep MW : 68 prep N : 3 prep N> : 4 prep NJ : 105 prep NW : 539 prep W : 3247 prep absent : 60765 prin n/a : 1021 prps n/a : 5011 subs H : 1635 subs H= : 108 subs HJ : 58 subs HM : 1417 subs HN : 114 subs HW : 340 subs HWN : 32 subs J : 4332 subs K : 4362 subs K= : 744 subs KM : 1335 subs KN : 16 subs KWN : 7 subs M : 1919 subs MW : 25 subs N : 29 subs N> : 3 subs NJ : 19 subs NW : 809 subs W : 7653 subs absent : 96548 verb H : 682 verb H= : 17 verb HJ : 6 verb HM : 121 verb HN : 4 verb HW : 1097 verb J : 356 verb K : 1089 verb K= : 201 verb KM : 132 verb KN : 1 verb KWN : 2 verb M : 1288 verb MW : 23 verb N : 15 verb N> : 3 verb NJ : 1016 verb NW : 274 verb W : 938 verb absent : 66445
In[11]:
pronominal_suffix = {
"accusative": {
"W": ("p3-sg-m", "him"),
"K": ("p2-sg-m", "you:m"),
"J": ("p1-sg-", "me"),
"M": ("p3-pl-m", "them:mm"),
"H": ("p3-sg-f", "her"),
"HM": ("p3-pl-m", "them:mm"),
"KM": ("p2-pl-m", "you:mm"),
"NW": ("p1-pl-", "us"),
"HW": ("p3-sg-m", "him"),
"NJ": ("p1-sg-", "me"),
"K=": ("p2-sg-f", "you:f"),
"HN": ("p3-pl-f", "them:ff"),
"MW": ("p3-pl-m", "them:mm"),
"N": ("p3-pl-f", "them:ff"),
"KN": ("p2-pl-f", "you:ff"),
},
"genitive": {
"W": ("p3-sg-m", "his"),
"K": ("p2-sg-m", "your:m"),
"J": ("p1-sg-", "my"),
"M": ("p3-pl-m", "their:mm"),
"H": ("p3-sg-f", "her"),
"HM": ("p3-pl-m", "their:mm"),
"KM": ("p2-pl-m", "your:mm"),
"NW": ("p1-pl-", "our"),
"HW": ("p3-sg-m", "his"),
"NJ": ("p1-sg-", "my"),
"K=": ("p2-sg-f", "your:f"),
"HN": ("p3-pl-f", "their:ff"),
"MW": ("p3-pl-m", "their:mm"),
"N": ("p3-pl-f", "their:ff"),
"KN": ("p2-pl-f", "your:ff"),
},
}
switch_prs = dict(
subs="genitive",
verb="accusative",
prep="accusative",
conj=None,
nmpr=None,
art=None,
adjv="genitive",
nega=None,
prps=None,
advb=None,
prde=None,
intj="accusative",
inrg="genitive",
prin=None,
)
def get_prs_info(w):
sp = F.sp.v(w)
prs = F.prs.v(w)
switch = switch_prs[sp]
return pronominal_suffix.get(switch, {}).get(prs, (None, None))
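As a standalone illustration of the switching logic, independent of Text-Fabric, here is a trimmed-down copy of the two tables with a hypothetical helper that takes the part of speech and suffix directly instead of looking them up on a word node:

```python
# Trimmed-down copies of the tables above (only two suffixes shown).
pron_suffix = {
    "accusative": {"W": ("p3-sg-m", "him"), "J": ("p1-sg-", "me")},
    "genitive": {"W": ("p3-sg-m", "his"), "J": ("p1-sg-", "my")},
}
switch = {"verb": "accusative", "subs": "genitive", "conj": None}

def prs_info(sp, prs):
    # hypothetical stand-in for get_prs_info: same lookup logic,
    # but with part of speech and suffix passed in directly
    return pron_suffix.get(switch.get(sp), {}).get(prs, (None, None))

print(prs_info("verb", "W"))  # ('p3-sg-m', 'him')  -> "see him"
print(prs_info("subs", "W"))  # ('p3-sg-m', 'his')  -> "hand his"
print(prs_info("conj", "W"))  # (None, None): no rendering on conjunctions
```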
We generate an index that gives, for each verb lexeme, a list of clauses that have that lexeme as the main verb. In the index we store the clause node together with the word node(s) carrying the main verb(s).
Clauses may have multiple verbs. In many cases it is a copula plus another verb; in those cases we are interested in the other verb, so we exclude copulas.
Yet there are also sentences with more than one main verb. In those cases we treat both verbs separately as main verbs of one and the same clause.
In[12]:
utils.caption(4, "Making the verb-clause index")
occs = collections.defaultdict(
list
)  # all verb occurrence nodes per verb lexeme
verb_clause = collections.defaultdict(
list
)  # per verb lexeme: (clause node, verb occurrence node) pairs
clause_verb = (
collections.OrderedDict()
)  # per clause node: its main verb occurrence node(s)
.............................................................................................. . 1m 01s Making the verb-clause index . ..............................................................................................
for w in F.otype.s("word"):
if F.sp.v(w) != "verb":
continue
lex = F.lex.v(w).rstrip("[")
pf = F.predication.v(L.u(w, "phrase")[0])
if pf in pf_predicate:
cn = L.u(w, "clause")[0]
clause_verb.setdefault(cn, []).append(w)
verb_clause[lex].append((cn, w))
utils.caption(0, "\tDone ({} clauses)".format(len(clause_verb)))
| 1m 03s Done (69439 clauses)
In[13]:
utils.caption(4, "Finding key constituents")
constituents = collections.defaultdict(lambda: collections.defaultdict(set))
ckinds = """
dos pdos ndos kdos ldos idos cdos inds locs cpls bens
""".strip().split()
.............................................................................................. . 1m 03s Finding key constituents . ..............................................................................................
# go through all relevant clauses and collect all types of direct objects
for c in clause_verb:
these_constituents = collections.defaultdict(set)
# phrase like constituents
for p in L.d(c, "phrase"):
gf = F.grammatical.v(p)
of = F.original.v(p)
sf = F.semantic.v(p)
vf = F.valence.v(p)
ckind = None
if gf in gf_direct_object:
if gf == "principal_direct_object":
ckind = "pdos"
elif gf == "NP_direct_object":
ckind = "ndos"
elif gf == "L_object":
ckind = "ldos"
elif gf == "K_object":
ckind = "kdos"
else:
ckind = "dos"
elif gf in gf_indirect_object:
ckind = "inds"
elif sf and sf in sf_benefactive:
ckind = "bens"
elif sf in sf_locative and vf in vf_locative:
ckind = "locs"
elif gf in gf_complement:
ckind = "cpls"
if ckind:
these_constituents[ckind].add(p)
# clause like constituents: only look for object clauses dependent on this clause
for ac in L.d(L.u(c, "sentence")[0], "clause"):
dep = list(E.mother.f(ac))
if len(dep) and dep[0] == c:
gf = F.grammatical.v(ac)
ckind = None
if gf in gf_direct_object:
if gf == "direct_object":
ckind = "cdos"
elif gf == "infinitive_object":
ckind = "idos"
if ckind:
these_constituents[ckind].add(ac)
for ckind in these_constituents:
constituents[c][ckind] |= these_constituents[ckind]
utils.caption(
0, "\tDone, {} clauses with relevant constituents".format(len(constituents))
)
| 1m 05s Done, 47571 clauses with relevant constituents
In[14]:
def makegetGloss():
if "lex" in F.otype.all:
def _getGloss(w):
gloss = F.gloss.v(L.u(w, "lex")[0])
return "?" if gloss is None else gloss
else:
def _getGloss(w):
gloss = F.gloss.v(w)
return "?" if gloss is None else gloss
return _getGloss
getGloss = makegetGloss()
In[15]:
testcases = (
# 426955,
# 427654,
# 428420,
# 429412,
# 429501,
# 429862,
# 431695,
# 431893,
# 430372,
)
def showcase(n):
otype = F.otype.v(n)
verseNode = L.u(n, "verse")[0]
place = T.sectionFromNode(verseNode)
print(
"""CASE {}={} ({}-{})\nCLAUSE: {}\nVERSE\n{} {}\nGLOSS {}\n""".format(
n,
otype,
F.rela.v(n),
F.typ.v(n),
T.text(L.d(n, "word"), fmt="text-trans-plain"),
"{} {}:{}".format(*place),
T.text(L.d(verseNode, "word"), fmt="text-trans-plain"),
" ".join(getGloss(w) for w in L.d(verseNode, "word")),
)
)
print("PHRASES\n")
for p in L.d(n, "phrase"):
print(
'''{} ({}-{}) {} "{}"'''.format(
p,
F.function.v(p),
F.typ.v(p),
T.text(L.d(p, "word"), fmt="text-trans-plain"),
" ".join(getGloss(w) for w in L.d(p, "word")),
)
)
print(
"valence = {}; grammatical = {}; lexical = {}; semantic = {}\n".format(
F.valence.v(p),
F.grammatical.v(p),
F.lexical.v(p),
F.semantic.v(p),
)
)
print("SUBCLAUSES\n")
for ac in L.d(L.u(n, "sentence")[0], "clause"):
dep = list(E.mother.f(ac))
if not (len(dep) and dep[0] == n):
continue
print(
'''{} ({}-{}) {} "{}"'''.format(
ac,
F.rela.v(ac),
F.typ.v(ac),
T.text(L.d(ac, "word"), fmt="text-trans-plain"),
" ".join(getGloss(w) for w in L.d(ac, "word")),
)
)
print(
"valence = {}; grammatical = {}; lexical = {}; semantic = {}\n".format(
F.valence.v(ac),
F.grammatical.v(ac),
F.lexical.v(ac),
F.semantic.v(ac),
)
)
print("CONSTITUENTS")
for ckind in ckinds:
print(
"{:<4}: {}".format(
ckind, ",".join(str(x) for x in sorted(constituents[n][ckind]))
)
)
print("================\n")
if not SCRIPT:
for n in testcases:
showcase(n)
In[16]:
utils.caption(4, "Counting constituents")
.............................................................................................. . 1m 08s Counting constituents . ..............................................................................................
constituents_count = collections.defaultdict(collections.Counter)
for c in constituents:
for ckind in ckinds:
n = len(constituents[c][ckind])
constituents_count[ckind][n] += 1
for ckind in ckinds:
total = 0
for (count, n) in sorted(constituents_count[ckind].items(), key=lambda y: -y[0]):
if count:
total += n
utils.caption(
0, "\t{:>5} clauses with {:>2} {:<10} constituents".format(n, count, ckind)
)
utils.caption(
0, "\t{:>5} clauses with {:>2} {:<10} constituent".format(total, "a", ckind)
)
utils.caption(0, "\t{:>5} clauses".format(len(clause_verb)))
| 1m 10s 22375 clauses with 1 dos constituents | 1m 10s 25196 clauses with 0 dos constituents | 1m 10s 22375 clauses with a dos constituent | 1m 10s 3557 clauses with 1 pdos constituents | 1m 10s 44014 clauses with 0 pdos constituents | 1m 10s 3557 clauses with a pdos constituent | 1m 10s 991 clauses with 1 ndos constituents | 1m 10s 46580 clauses with 0 ndos constituents | 1m 10s 991 clauses with a ndos constituent | 1m 10s 111 clauses with 1 kdos constituents | 1m 10s 47460 clauses with 0 kdos constituents | 1m 10s 111 clauses with a kdos constituent | 1m 10s 33 clauses with 2 ldos constituents | 1m 10s 3788 clauses with 1 ldos constituents | 1m 10s 43750 clauses with 0 ldos constituents | 1m 10s 3821 clauses with a ldos constituent | 1m 10s 1 clauses with 3 idos constituents | 1m 10s 18 clauses with 2 idos constituents | 1m 10s 1193 clauses with 1 idos constituents | 1m 10s 46359 clauses with 0 idos constituents | 1m 10s 1212 clauses with a idos constituent | 1m 10s 1305 clauses with 1 cdos constituents | 1m 10s 46266 clauses with 0 cdos constituents | 1m 10s 1305 clauses with a cdos constituent | 1m 10s 56 clauses with 2 inds constituents | 1m 10s 5223 clauses with 1 inds constituents | 1m 10s 42292 clauses with 0 inds constituents | 1m 10s 5279 clauses with a inds constituent | 1m 10s 1 clauses with 6 locs constituents | 1m 10s 1 clauses with 4 locs constituents | 1m 10s 16 clauses with 3 locs constituents | 1m 10s 330 clauses with 2 locs constituents | 1m 10s 12164 clauses with 1 locs constituents | 1m 10s 35059 clauses with 0 locs constituents | 1m 10s 12512 clauses with a locs constituent | 1m 10s 3 clauses with 3 cpls constituents | 1m 10s 87 clauses with 2 cpls constituents | 1m 10s 8704 clauses with 1 cpls constituents | 1m 10s 38777 clauses with 0 cpls constituents | 1m 10s 8794 clauses with a cpls constituent | 1m 10s 2 clauses with 2 bens constituents | 1m 10s 171 clauses with 1 bens constituents | 1m 10s 47398 clauses with 0 bens constituents | 1m 
10s 173 clauses with a bens constituent | 1m 10s 69439 clauses
We can now apply the flowchart in a straightforward manner.
We output the results as a delimited text file (tab-separated, despite the .csv extension) that can be imported directly into SHEBANQ as a set of notes, so that the reader can check the results within SHEBANQ. This has the benefit that the full context is available, and the data view can be called up easily to inspect the coding situation for each particular instance.
In[17]:
glossHacks = {
"XQ/": "law/precept",
}
In[23]:
def reptext(
label,
ckind,
v,
phrases,
num=False,
txt=False,
gloss=False,
textformat="text-trans-plain",
):
if phrases is None:
return ""
phrases_rep = []
for p in sorted(phrases, key=N.sortKey):
ptext = "[{}|".format(F.number.v(p) if num else "[")
if txt:
ptext += T.text(L.d(p, "word"), fmt=textformat)
if gloss:
words = L.d(p, "word")
if ckind == "ldos" and F.lex.v(words[0]) == "L":
words = words[1:]
wtexts = []
for w in words:
g = glossHacks.get(F.lex.v(w), getGloss(w)).replace(
"<object marker>", "&"
)
if F.lex.v(w) == "BJN/" and F.pdp.v(w) == "prep":
g = "between"
prs_g = get_prs_info(w)[1]
uvf = F.uvf.v(w)
wtext = ""
if uvf == "H":
ptext += "toward "
wtext += (
g if w != v else ""
) # we do not have to put in the gloss of the verb in question
wtext += ("~" + prs_g) if prs_g is not None else ""
wtexts.append(wtext)
ptext += " ".join(wtexts)
ptext += "]"
phrases_rep.append(ptext)
return " ".join(phrases_rep)
In[24]:
debug_messages = collections.defaultdict(lambda: collections.defaultdict(list))
constKinds = collections.OrderedDict()
for constKindSpec in constKindSpecs:
(constKind, constKindName) = constKindSpec.strip().split(":", 1)
constKinds[constKind] = constKindName
def flowchart(v, lex, verb, consts):
consts = deepcopy(consts)
n_ = collections.defaultdict(lambda: 0)
for ckind in ckinds:
n_[ckind] = len(consts[ckind])
char1 = None
char2 = None
# determine char 1 of the sense label
if n_["pdos"] > 0:
if n_["ndos"] > 0:
char1 = "n"
elif n_["cdos"] > 0:
char1 = "c"
elif n_["ldos"] > 0:
char1 = "l"
elif n_["kdos"] > 0:
char1 = "k"
elif n_["idos"] > 0:
char1 = "i"
else:
# in trouble: if there is a principal direct object, there should be another object as well
# and the other one should be an NP, object clause, L_object, K_object, or I_object
# If this happens, it is probably the result of manual correction
# We warn, and remedy
msg_rep = "; ".join("{} {}".format(n_[ckind], ckind) for ckind in ckinds)
if n_["dos"] > 0:
# there is another object (dos should only be used if there is a single object)
# we'll put the dos in the ndos (which was empty)
# This could be caused by a manual enrichment sheet that has been generated
# before the concept of NP_direct_object had been introduced
char1 = "n"
consts["ndos"] = consts["dos"]
del consts["dos"]
debug_messages[lex]["pdos with dos"].append(
"{}: {}".format(T.sectionFromNode(v), msg_rep)
)
else:
# there is not another object, we treat this as a single object, so as a dos
char1 = "d"
consts["dos"] = consts["pdos"]
del consts["pdos"]
debug_messages[lex]["lonely pdos"].append(
"{}: {}".format(T.sectionFromNode(v), msg_rep)
)
else:
if n_["cdos"] > 0:
# in the case of a single object, the clause objects act as ordinary objects
char1 = "d"
consts["dos"] |= consts["cdos"]
del consts["cdos"]
if n_["ndos"] > 0:
# in the case of a single object, the np_objects act as ordinary objects
char1 = "d"
consts["dos"] |= consts["ndos"]
del consts["ndos"]
n_ = collections.defaultdict(lambda: 0)
for ckind in ckinds:
n_[ckind] = len(consts[ckind])
if n_["pdos"] == 0 and n_["dos"] > 0:
char1 = "d"
if n_["pdos"] == 0 and n_["dos"] == 0:
char1 = "-"
# determine char 2 of the sense label
if char1 in "nclki":
char2 = "."
else:
if n_["inds"] > 0:
char2 = "i"
elif n_["bens"] > 0:
char2 = "b"
elif n_["locs"] > 0:
char2 = "p"
elif n_["cpls"] > 0:
char2 = "c"
else:
char2 = "-"
sense_label = char1 + char2
sense = lex if lex in senses else None
status = "*" if lex in senses else "?"
consts_rep = dict(
(ckind, reptext("", ckind, v, consts[ckind], num=True, gloss=True))
for ckind in consts
)
return (sense_label, sense, status, consts_rep)
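The two-character logic above can be summarized, for the common case without a principal direct object, by a small corpus-independent sketch (the principal-direct-object repair logic and the n. / l. / k. / i. / c. labels are deliberately omitted):

```python
def simple_sense_label(counts):
    # counts: constituent kind -> how many were found in the clause.
    # Simplified: any ordinary direct object yields char1 "d".
    has_object = any(counts.get(k, 0) for k in ("dos", "ndos", "cdos"))
    char1 = "d" if has_object else "-"
    if counts.get("inds", 0):
        char2 = "i"
    elif counts.get("bens", 0):
        char2 = "b"
    elif counts.get("locs", 0):
        char2 = "p"
    elif counts.get("cpls", 0):
        char2 = "c"
    else:
        char2 = "-"
    return char1 + char2

print(simple_sense_label({"dos": 1, "inds": 1}))  # di
print(simple_sense_label({"locs": 2}))            # -p
print(simple_sense_label({}))                     # --
```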
sfields = """
version
book
chapter
verse
clause_atom
is_shared
is_published
status
keywords
ntext
""".strip().split()
sfields_fmt = ("{}\t" * (len(sfields) - 1)) + "{}\n"
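Each note becomes one tab-separated row in this layout; a self-contained sketch with hypothetical field values:

```python
# Same field list and row format as above; the row values are made up
# purely for illustration.
sfields = ("version book chapter verse clause_atom "
           "is_shared is_published status keywords ntext").split()
sfields_fmt = ("{}\t" * (len(sfields) - 1)) + "{}\n"

row = sfields_fmt.format(
    "c", "Genesis", 1, 1, 2, "T", "", "!", "valence",
    "verb [2|NTN] has sense `d-` ...",
)
print(row.count("\t"))  # 9 tabs separate the ten fields
```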
The next cell finally performs all the flowchart computations for all verbs in all contexts.
utils.caption(4, "Checking the flowcharts")
missingFlowcharts = set()
.............................................................................................. . 1m 15s Checking the flowcharts . ..............................................................................................
for lex in verb_clause:
if lex not in senses:
missingFlowcharts.add(lex)
utils.caption(
0,
"\tNo flowchart for {} verbs, e.g. {}".format(
len(missingFlowcharts), ", ".join(sorted(missingFlowcharts)[0:10])
),
)
| 1m 16s No flowchart for 1543 verbs, e.g. <BC, <BD, <BH, <BR, <BR=, <BT, <BV, <BV=, <CC, <CN
good = True
for lex in senses:
if lex not in verb_clause:
TF.error("No verb {} in enriched corpus".format(lex))
good = False
if good:
utils.caption(0, "\tAll flowcharts belong to a verb in the corpus")
| 1m 18s All flowcharts belong to a verb in the corpus
utils.caption(4, "Applying the flowcharts")
.............................................................................................. . 1m 19s Applying the flowcharts . ..............................................................................................
outcome_lab = collections.Counter()
outcome_lab_l = collections.defaultdict(lambda: collections.Counter())
We want an overview of the flowchart decisions per lexeme. Per lexeme, per sense label, we store the clauses.
decisions = collections.defaultdict(lambda: collections.defaultdict(dict))
note_keyword_base = "valence"
nnotes = collections.Counter()
senseFeature = dict()
ofs = open("{}/{}".format(thisNotes, notesFile), "w")
ofs.write("{}\n".format("\t".join(sfields)))
i = 0
j = 0
chunkSize = 10000
for lex in verb_clause:
hasFlowchart = lex in senses
for (c, v) in verb_clause[lex]:
if F.vs.v(v) not in verbal_stems:
continue
i += 1
j += 1
if j == chunkSize:
j = 0
utils.caption(0, "\t{:>5} clauses".format(i))
book = F.book.v(L.u(v, "book")[0])
chapter = F.chapter.v(L.u(v, "chapter")[0])
verse = F.verse.v(L.u(v, "verse")[0])
sentence_n = F.number.v(L.u(v, "sentence")[0])
clause_n = F.number.v(c)
clause_atom_n = F.number.v(L.u(v, "clause_atom")[0])
verb = [L.u(v, "phrase")[0]]
consts = constituents[c]
n_ = collections.defaultdict(lambda: 0)
for ckind in ckinds:
n_[ckind] = len(consts[ckind])
(sense_label, sense, status, constsRep) = flowchart(v, lex, verb, consts)
senseRep = "legend" if sense is None else sense
senseDoc = (
"Legend"
if sense is None
else "FC_{}".format(sense.replace(">", "A").replace("<", "O"))
)
senseLink = "{}/{}".format(flowchartBase, senseDoc)
senseFeature[v] = sense_label
constElems = []
for (constKind, constKindName) in constKinds.items():
if constKind not in constsRep:
continue
material = constsRep[constKind]
if not material:
continue
constElems.append("*{}*={}".format(constKindName, material))
outcome_lab[sense_label] += 1
outcome_lab_l[lex][sense_label] += 1
decisions[lex][sense_label][c] = sense_label
ofs.write(
sfields_fmt.format(
VERSION,
book,
chapter,
verse,
clause_atom_n,
"T",
"",
status,
note_keyword_base,
"verb [{nm}|{vb}] has sense `{sl}` [{sn}]({slink}) {cs}".format(
nm=F.number.v(L.u(v, "phrase")[0]),
vb=F.g_word_utf8.v(v),
sn=senseRep,
slink=senseLink,
sl=sense_label,
cs="; ".join(constElems),
),
)
)
nnotes[note_keyword_base] += 1
utils.caption(0, "\t{:>5} clauses".format(i))
ofs.close()
| 1m 28s 10000 clauses | 1m 30s 20000 clauses | 1m 31s 30000 clauses | 1m 33s 40000 clauses | 1m 34s 47381 clauses
show_limit = 20
for lex in debug_messages:
TF.error(lex, continuation=True)
for kind in debug_messages[lex]:
utils.caption(0, "\tERROR: {}".format(kind), continuation=True)
messages = debug_messages[lex][kind]
lm = len(messages)
utils.caption(
0,
"\tERROR: \t{}{}".format(
"\n\t\t".join(messages[0:show_limit]),
"" if lm <= show_limit else "\n\t\tAND {} more".format(lm - show_limit),
),
continuation=True,
)
genericMetaPath = f"{thisRepo}/yaml/generic.yaml"
flowchartMetaPath = f"{thisRepo}/yaml/flowchart.yaml"
with open(genericMetaPath) as fh:
genericMeta = yaml.load(fh, Loader=yaml.FullLoader)
genericMeta["version"] = VERSION
with open(flowchartMetaPath) as fh:
flowchartMeta = formatMeta(yaml.load(fh, Loader=yaml.FullLoader))
metaData = {"": genericMeta, **flowchartMeta}
nodeFeatures = dict(sense=senseFeature)
for f in nodeFeatures:
metaData[f]["valueType"] = "str"
utils.caption(4, "Writing sense feature to TF")
TF = Fabric(locations=thisTempTf, silent=True)
TF.save(nodeFeatures=nodeFeatures, edgeFeatures={}, metaData=metaData)
.............................................................................................. . 1m 40s Writing sense feature to TF . ..............................................................................................
True
Check differences with previous versions.
In[30]:
utils.checkDiffs(thisTempTf, thisTf, only=set(nodeFeatures))
.............................................................................................. . 1m 41s Check differences with previous version . .............................................................................................. | 1m 41s no features to add | 1m 41s no features to delete | 1m 41s 1 features in common | 1m 41s sense ... no changes | 1m 41s Done
Copy the new TF feature from the temporary location where it has been created to its final destination.
utils.deliverFeatures(thisTempTf, thisTf, nodeFeatures)
.............................................................................................. . 1m 44s Deliver features to /Users/werk/github/etcbc/valence/tf/c . .............................................................................................. | 1m 44s sense
utils.caption(4, "Load and compile the new TF features")
.............................................................................................. . 1m 47s Load and compile the new TF features . ..............................................................................................
TF = Fabric(locations=[coreTf, thisTf], modules=[""])
api = TF.load(
"""
lex sp vs
predication gloss
"""
+ " ".join(nodeFeatures)
)
api.makeAvailableIn(globals())
This is Text-Fabric 9.2.0 Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html 124 features found and 0 ignored 4.08s Dataset without structure sections in otext:no structure functions in the T-API | 0.31s T sense from ~/github/etcbc/valence/tf/c 15s All features loaded/computed - for details use TF.isLoaded()
[('Computed', 'computed-data', ('C Computed', 'Call AllComputeds', 'Cs ComputedString')), ('Features', 'edge-features', ('E Edge', 'Eall AllEdges', 'Es EdgeString')), ('Fabric', 'loading', ('TF',)), ('Locality', 'locality', ('L Locality',)), ('Nodes', 'navigating-nodes', ('N Nodes',)), ('Features', 'node-features', ('F Feature', 'Fall AllFeatures', 'Fs FeatureString')), ('Search', 'search', ('S Search',)), ('Text', 'text', ('T Text',))]
utils.caption(4, "Show sense counts")
senseLabels = sorted({F.sense.v(v) for v in F.otype.s("word")} - {None})
utils.caption(0, "\tSense labels = {}".format(" ".join(senseLabels)))
.............................................................................................. . 2m 09s Show sense counts . .............................................................................................. | 2m 09s Sense labels = -- -b -c -i -p c. d- db dc di dp i. k. l. n.
senseCount = collections.Counter()
noSense = []
isPredicate = {"regular", "copula"}
for v in F.sp.s("verb"):
sense = F.sense.v(v)
if sense is None:
# skip words that are not verbs in the qal
if F.vs.v(v) != "qal":
continue
# skip verbs in a phrase that is not a verb phrase, e.g. some participles
# the criterion here is whether the value of feature `predication` is non trivial
p = L.u(v, "phrase")[0]
if F.predication.v(p) not in isPredicate:
continue
noSense.append(v)
continue
senseCount[sense] += 1
utils.caption(0, "\tCounted {} senses".format(sum(senseCount.values())))
if noSense:
utils.caption(
0, "\tWARNING: {} verb occurrences do not have a sense".format(len(noSense))
)
for v in noSense[0:10]:
utils.caption(
0,
"\t\t{:<20} word {:>6} phrase {:>6} = {:<5}".format(
"{} {}:{}".format(*T.sectionFromNode(v)),
v,
L.u(v, "phrase")[0],
F.lex.v(v),
),
)
else:
utils.caption(0, "\tAll relevant verbs have been assigned a sense")
| 2m 09s Counted 47381 senses | 2m 09s All relevant verbs have been assigned a sense
for x in sorted(senseCount.items(), key=lambda x: (-x[1], x[0])):
utils.caption(0, "\t\t{:<2} occurs {:>6}x".format(*x))
| 2m 10s -- occurs 17999x | 2m 10s d- occurs 9979x | 2m 10s -p occurs 6193x | 2m 10s -c occurs 4250x | 2m 10s -i occurs 2869x | 2m 10s dp occurs 1853x | 2m 10s dc occurs 1073x | 2m 10s di occurs 889x | 2m 10s l. occurs 876x | 2m 10s i. occurs 629x | 2m 10s n. occurs 533x | 2m 10s -b occurs 66x | 2m 10s db occurs 61x | 2m 10s c. occurs 57x | 2m 10s k. occurs 54x
For a more fine-grained overview with graphics, see the senses notebook.
if SCRIPT:
stop(good=True)
if not SCRIPT:
utils.caption(0, "\tReporting flowchart application")
ntot = 0
for (lab, n) in sorted(nnotes.items(), key=lambda x: x[0]):
ntot += n
print("{:<10} notes: {}".format(lab, n))
print("{:<10} notes: {}".format("Total", ntot))
for lex in [""] + sorted(senses):
print("All lexemes" if lex == "" else lex)
src_lab = (
outcome_lab
if lex == ""
else outcome_lab_l.get(lex, collections.defaultdict(lambda: 0))
)
tot = 0
for x in senseLabels:
n = src_lab[x]
tot += n
print(" Sense {:<7}: {:>5} clauses".format(x, n))
print(" All senses : {:>5} clauses".format(tot))
print(" ")
|  2m 11s 	Reporting flowchart application
valence    notes: 47381
Total      notes: 47381

Clause counts per sense label, for all lexemes together and per selected lexeme:

Sense  All lexemes   <FH  BR>  CJT  DBQ  FJM   NTN  QR>  ZQN
--           17999   749    4    3    6   14   133  149   22
-b              66     1    0    0    0    2     0    1    0
-c            4250    35    0    7    5   31    51   56    0
-i            2869   103    0    1    0    2   156   69    0
-p            6193    79    0    2   26   29    57   69    0
c.              57     2    0    0    0    2     6    1    0
d-            9979   913   25    7    1   47   188  102    0
db              61     9    0    1    0    4     2    0    0
dc            1073    58    0   10    0   85   132    8    0
di             889   140    0    3    0   24   305   37    0
dp            1853    88    2   18    1  156   357   23    0
i.             629    45    1    3    0   23    89    8    0
k.              54     6    0    5    0   19    18    1    0
l.             876   111    0   10    0   78   326   30    0
n.             533   129    4   12    0   61    92   98    0
All          47381  2468   36   82   39  577  1912  652   22
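As a sanity check, the per-sense totals for all lexemes can be verified to add up to the reported grand total; the numbers below are copied from the output above:

```python
# per-sense clause counts over all lexemes, as reported in the output above
senseCount = {
    "--": 17999, "-b": 66, "-c": 4250, "-i": 2869, "-p": 6193,
    "c.": 57, "d-": 9979, "db": 61, "dc": 1073, "di": 889,
    "dp": 1853, "i.": 629, "k.": 54, "l.": 876, "n.": 533,
}
total = sum(senseCount.values())
print(total)  # → 47381
```

This matches both the "All senses" line and the total note count, confirming that every generated note corresponds to exactly one sense assignment.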
def show_decision(
verbs=None, labels=None, books=None
): # show all clauses that have a verb in verbs and a sense label in labels
results = []
for verb in decisions:
if verbs is not None and verb not in verbs:
continue
for label in decisions[verb]:
if labels is not None and label not in labels:
continue
for (c, stxt) in sorted(decisions[verb][label].items()):
book = T.sectionFromNode(L.u(c, "book")[0])[0]
if books is not None and book not in books:
continue
sentence_words = L.d(L.u(c, "sentence")[0], "word")
results.append(
"{:<7} {:<12} {:<5} {:<2} {}\n\t{}\n\t{}\n".format(
c,
"{} {}: {}".format(*T.sectionFromNode(c)),
verb,
label,
stxt,
T.text(sentence_words, fmt="text-trans-plain"),
" ".join(getGloss(w) for w in sentence_words),
                ).replace("<", "&lt;")
)
print("\n".join(sorted(results)))
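For readers without the corpus loaded, the filtering logic of `show_decision` can be illustrated on a hypothetical mock of the nested `decisions` mapping, which has the shape `{verb: {label: {clause node: sense text}}}`; the node numbers and entries below are invented for the sketch:

```python
# hypothetical mock of the decisions structure: verb -> label -> clause node -> sense text
decisions = {
    "FJM": {"k.": {469219: "k."}, "l.": {468512: "l."}},
    "NTN": {"dp": {500000: "dp"}},
}

def select(decisions, verbs=None, labels=None):
    # replicate the verb/label filtering of show_decision, without corpus lookups
    hits = []
    for verb, byLabel in decisions.items():
        if verbs is not None and verb not in verbs:
            continue
        for label, byClause in byLabel.items():
            if labels is not None and label not in labels:
                continue
            hits.extend((c, verb, label) for c in sorted(byClause))
    return sorted(hits)

print(select(decisions, verbs={"FJM"}))  # → [(468512, 'FJM', 'l.'), (469219, 'FJM', 'k.')]
```

A `None` filter means "no restriction", so `select(decisions)` would return all three entries; the real function adds a third filter on the book of each clause via `T.sectionFromNode`.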
show_decision(verbs={"FJM"}, books={"Isaiah"})
468348 Isaiah 3: 7 FJM n. n. L> TFJMNJ QYJN <M00 not put chief people
468512 Isaiah 5: 20 FJM l. l. HWJ H>MRJM LR< WLVWB FMJM XCK L>WR W>WR LXCK FMJM MR LMTWQ WMTWQ LMR00_S alas the say to the evil and to the good put darkness to light and light to darkness put bitter to sweet and sweet to bitter
468514 Isaiah 5: 20 FJM l. l. HWJ H>MRJM LR< WLVWB FMJM XCK L>WR W>WR LXCK FMJM MR LMTWQ WMTWQ LMR00_S alas the say to the evil and to the good put darkness to light and light to darkness put bitter to sweet and sweet to bitter
468912 Isaiah 10: 6 FJM n. n. W<L&<M <BRTJ >YWNW LCLL CLL WLBZ BZ WLFJMW MRMS KXMR XWYWT00 and upon people anger command to plunder plunder and to spoil spoiling and to put trampled land as clay outside
469117 Isaiah 13: 9 FJM l. l. HNH JWM&JHWH B> >KZRJ W<BRH WXRWN >P LFWM H>RY LCMH behold day YHWH come cruel and anger and anger nose to put the earth to destruction
469219 Isaiah 14: 17 FJM k. k. FM TBL KMDBR put world as the desert
469240 Isaiah 14: 23 FJM l. l. WFMTJH LMWRC QPD W>GMJ&MJM and put to possession hedgehog and reedy pool water
469603 Isaiah 21: 4 FJM l. l. >T NCP XCQJ FM LJ LXRDH00 <object marker> breeze desire put to to trembling
469804 Isaiah 23: 13 FJM l. l. FMH LMPLH00 put to decay
469924 Isaiah 25: 2 FJM l. l. KJ FMT M<JR LGL QRJH BYWRH LMPLH >RMWN ZRJM M<JR that put from town to the wave, heap town fortified to decay dwelling tower strange from town
470077 Isaiah 27: 9 FJM k. k. BFWMW05 KL&>BNJ MZBX K>BNJ&GR MNPYWT L>&JQMW >CRJM WXMNJM00 in put whole stone altar as stone chalk shatter not arise asherah and incense-stand
470164 Isaiah 28: 15 FJM n. n. KJ FMNW KZB MXSNW that put lie refuge
470171 Isaiah 28: 17 FJM l. l. WFMTJ MCPV LQW WYDQH LMCQLT and put justice to line and justice to leveller
470211 Isaiah 28: 25 FJM n. n. WFM XVH FWRH WF<RH NSMN WKSMT GBLTW00 and put wheat <animal> and barley <uncertain> and spelt boundary
471035 Isaiah 37: 29 FJM dp dp WFMTJ XXJ B>PK WMTGJ BFPTJK and put thorn in nose and bridle in lip
471406 Isaiah 41: 15 FJM l. l. HNH FMTJK LMWRG XRWY XDC B<L PJPJWT behold put to threshing-sledge threshing instrument new lord, baal double-edged
471409 Isaiah 41: 15 FJM k. k. WGB<WT KMY TFJM00 and hill as the chaff put
471424 Isaiah 41: 18 FJM l. l. >FJM MDBR L>GM&MJM W>RY YJH LMWY>J MJM00 put desert to reedy pool water and earth dry country to issue water
471427 Isaiah 41: 19 FJM dp dp >FJM B<RBH BRWC TDHR WT>CWR JXDW00 put in the desert juniper box tree and cypress together
471430 Isaiah 41: 20 FJM -- -- WJFJMW and put
471444 Isaiah 41: 22 FJM d- d- WNFJMH LBNW and put heart
471501 Isaiah 42: 4 FJM dp dp <D&JFJM B>RY MCPV unto put in the earth justice
471534 Isaiah 42: 12 FJM l. l. JFJMW LJHWH KBWD put to YHWH weight
471549 Isaiah 42: 15 FJM l. l. WFMTJ NHRWT L>JJM and put stream to the coast, island
471555 Isaiah 42: 16 FJM l. l. >FJM MXCK LPNJHM L>WR WM<QCJM LMJCWR put dark place to face to the light and rugged country to fairness
471605 Isaiah 42: 25 FJM -c -c WL>&JFJM <L&LB00_P and not put upon heart
471699 Isaiah 43: 19 FJM dp dp >P >FJM BMDBR DRK BJCMWN NHRWT00 even put in the desert way in wilderness stream
471761 Isaiah 44: 7 FJM d- d- WJ<RKH LJ MFWMJ <M&<WLM and arrange to from put people eternity
472133 Isaiah 47: 6 FJM di di L>&FMT LHM RXMJM not put to compassion
472137 Isaiah 47: 7 FJM dc dc <D L>&FMT >LH <L&LBK unto not put these upon heart
472307 Isaiah 49: 2 FJM k. k. WJFM PJ KXRB XDH and put mouth as dagger sharp
472309 Isaiah 49: 2 FJM l. l. WJFJMNJ LXY BRWR and put to arrow purge
472361 Isaiah 49: 11 FJM l. l. WFMTJ KL&HRJ LDRK and put whole mountain to the way
472449 Isaiah 50: 2 FJM n. n. >FJM NHRWT MDBR put stream desert
472453 Isaiah 50: 3 FJM n. n. WFQ >FJM KSWTM00_S and sack put covering
472468 Isaiah 50: 7 FJM k. k. <L&KN FMTJ PNJ KXLMJC upon thus put face as the flint
472510 Isaiah 51: 3 FJM k. k. WJFM MDBRH K<DN W<RBTH KGN&JHWH and put desert as Eden and desert as garden YHWH
472551 Isaiah 51: 10 FJM n. n. HLW> >T&HJ> HMXRBT JM MJ THWM RBH HFMH M<MQJ&JM DRK L<BR G>WLJM00 <interrogative> not you she the be dry sea water primeval ocean much the put depths sea way to pass redeem
472581 Isaiah 51: 16 FJM dp dp W>FJM DBRJ BPJK and put word in mouth
472617 Isaiah 51: 23 FJM dp dp WFMTJH BJD&MWGJK >CR&>MRW LNPCK and put in hand grieve <relative> say to soul
472621 Isaiah 51: 23 FJM k. k. WTFJMJ K>RY GWK WKXWY L<BRJM00_S and put as the earth back and as the outside to the pass
472743 Isaiah 53: 10 FJM d- d- >M&TFJM >CM NPCW if put guilt soul
472808 Isaiah 54: 12 FJM n. n. WFMTJ KDKD CMCTJK WC<RJK L>BNJ >QDX WKL&GBWLK L>BNJ&XPY00 and put ruby sun and gate to stone beryl and whole boundary to stone pleasure
472970 Isaiah 57: 1 FJM -c -c W>JN >JC FM <L&LB and <NEG> man put upon heart
472992 Isaiah 57: 7 FJM dp dp <L HR&GBH WNF> FMT MCKBK upon mountain high and lift put couch
472995 Isaiah 57: 8 FJM dp dp W>XR HDLT WHMZWZH FMT ZKRWNK and after the door and the door-post put remembrance
473015 Isaiah 57: 11 FJM -c -c L>&FMT <L&LBK not put upon heart
473247 Isaiah 59: 21 FJM -p -p RWXJ >CR <LJK WDBRJ >CR&FMTJ BPJK L>&JMWCW MPJK WMPJ ZR<K WMPJ ZR< ZR<K M<TH W<D&<WLM00_S wind <relative> upon and word <relative> put in mouth not depart from mouth and from mouth seed and from mouth seed seed from now and unto eternity
473307 Isaiah 60: 15 FJM l. l. TXT HJWTK <ZWBH WFNW>H W>JN <WBR WFMTJK LG>WN <WLM MFWF DWR WDWR00 under part be leave and hate and <NEG> pass and put to height eternity joy generation and generation
473317 Isaiah 60: 17 FJM n. n. WFMTJ PQDTK CLWM WNGFJK YDQH00 and put commission peace and drive justice
473349 Isaiah 61: 3 FJM -- -- CLXNJ LXBC LNCBRJ&LB LQR> LCBWJM DRWR WL>SWRJM PQX_QWX00 LQR> CNT&RYWN LJHWH WJWM NQM L>LHJNW LNXM KL&>BLJM00 LFWM05 L>BLJ YJWN LTT LHM P>R TXT >PR CMN FFWN TXT >BL M<VH THLH TXT RWX KHH send to saddle to break heart to call to take captive release and to bind opening to call year pleasure to YHWH and day vengeance to god(s) to repent, console whole mourning to put to mourning Zion to give to headdress under part dust oil rejoicing under part mourning rites wrap praise under part wind dim
473421 Isaiah 62: 7 FJM n. n. HMZKRJM >T&JHWH >L&DMJ LKM00 W>L&TTNW DMJ LW <D&JKWNN W<D&JFJM >T&JRWCLM THLH B>RY00 the remember <object marker> YHWH not rest to and not give rest to unto be firm and unto put <object marker> Jerusalem praise in the earth
473500 Isaiah 63: 11 FJM dp dp >JH HFM BQRBW >T&RWX QDCW00 MWLJK LJMJN MCH ZRW< TP>RTW BWQ< MJM MPNJHM L<FWT LW CM <WLM00 MWLJKM BTHMWT where the put in interior <object marker> wind holiness walk to right-hand side Moses arm splendour split water from face to make to name eternity walk in the primeval ocean
473804 Isaiah 66: 19 FJM dp dp WFMTJ BHM >WT and put in sign