You might want to consult the start of this tutorial first. There are also short introductions to other TF datasets.
%load_ext autoreload
%autoreload 2
The ins and outs of installing Text-Fabric, getting the corpus, and initializing a notebook are explained in the start tutorial.
from tf.app import use
from tf.convert.recorder import Recorder
A = use("ETCBC/bhsa", hoist=globals())
Locating corpus resources ...
Name | # of nodes | # slots/node | % coverage |
---|---|---|---|
book | 39 | 10938.21 | 100 |
chapter | 929 | 459.19 | 100 |
lex | 9230 | 46.22 | 100 |
verse | 23213 | 18.38 | 100 |
half_verse | 45179 | 9.44 | 100 |
sentence | 63717 | 6.70 | 100 |
sentence_atom | 64514 | 6.61 | 100 |
clause | 88131 | 4.84 | 100 |
clause_atom | 90704 | 4.70 | 100 |
phrase | 253203 | 1.68 | 100 |
phrase_atom | 267532 | 1.59 | 100 |
subphrase | 113850 | 1.42 | 38 |
word | 426590 | 1.00 | 100 |
We work with Genesis 1 (in fact, only the first 10 clause atoms).
gen1 = T.nodeFromSection(("Genesis", 1))
We prepare our portion of text for annotation outside TF.
What needs to happen is that we produce a plain-text file while remembering the positions of the relevant nodes in that file.
The Recorder is a recent addition to TF (still in development) that lets you build a string from nodes while remembering the position of each node in that string. You may add all kinds of material in between the texts of the nodes, and it is up to you how you represent the nodes.
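The mechanism can be sketched in plain Python. This is a toy stand-in for illustration only, assuming nothing about the real `tf.convert.recorder.Recorder` beyond the behaviour just described:

```python
class MiniRecorder:
    """Toy illustration of the Recorder idea (not the real TF class)."""

    def __init__(self):
        self.material = []   # chunks of text, in order of addition
        self.active = set()  # nodes that are currently "open"
        self.nodesByPos = [] # for each character: the set of open nodes

    def start(self, node):
        self.active.add(node)

    def end(self, node):
        self.active.discard(node)

    def add(self, text):
        self.material.append(text)
        # every character we add belongs to all currently open nodes
        self.nodesByPos.extend([frozenset(self.active)] * len(text))

    def text(self):
        return "".join(self.material)

    def positions(self):
        return self.nodesByPos


rec = MiniRecorder()
rec.add("label ")  # material outside any node
rec.start(101)     # 101 is a made-up node number
rec.add("abc")     # these three characters belong to node 101
rec.end(101)
print(rec.positions()[6])  # frozenset({101})
```

The essential point is the `positions()` list: character positions that fall between `start(n)` and `end(n)` are tagged with node `n`, everything else is tagged with the empty set.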
We start a recorder.
rec = Recorder()
We can add strings to the recorder, and we can tell nodes to start and to stop.
We add clause atoms and phrase atoms to the recorder.
LIMIT = 10

for (i, cla) in enumerate(L.d(gen1, otype="clause_atom")):
    if i >= LIMIT:  # only first ten clause atoms
        break

    # we want a label in front of each clause atom
    label = "{} {}:{}".format(*T.sectionFromNode(cla))
    rec.add(f"{label}@{i} ")

    # we start a clause atom node:
    # until we end this node, all text that we add counts as material for this clause atom
    rec.start(cla)

    for pa in L.d(cla, otype="phrase_atom"):
        # we start a phrase atom node:
        # until we end this node, all text that we add also counts as material for this phrase atom
        rec.start(pa)

        # we add text; it belongs to the current clause atom and to the current phrase atom
        rec.add(T.text(pa, fmt="text-trans-plain"))

        # we end the phrase atom
        rec.end(pa)

    # we end the clause atom
    rec.end(cla)

    # every clause atom on its own line;
    # this newline character does not belong to any node
    rec.add("\n")
We can print the recorded text.
print(rec.text())
Genesis 1:1@0 BR>CJT BR> >LHJM >T HCMJM W>T H>RY00
Genesis 1:2@1 WH>RY HJTH THW WBHW
Genesis 1:2@2 WXCK <L&PNJ THWM
Genesis 1:2@3 WRWX >LHJM MRXPT <L&PNJ HMJM00
Genesis 1:3@4 WJ>MR >LHJM
Genesis 1:3@5 JHJ >WR
Genesis 1:3@6 WJHJ&>WR00
Genesis 1:4@7 WJR> >LHJM >T&H>WR
Genesis 1:4@8 KJ&VWB
Genesis 1:4@9 WJBDL >LHJM BJN H>WR WBJN HXCK00
We can print the recorded node positions.
print("\n".join(f"pos {i}: {p}" for (i, p) in enumerate(rec.positions()) if p))
pos 14: frozenset({904776, 515690}) pos 15: frozenset({904776, 515690}) pos 16: frozenset({904776, 515690}) pos 17: frozenset({904776, 515690}) pos 18: frozenset({904776, 515690}) pos 19: frozenset({904776, 515690}) pos 20: frozenset({904776, 515690}) pos 21: frozenset({904777, 515690}) pos 22: frozenset({904777, 515690}) pos 23: frozenset({904777, 515690}) pos 24: frozenset({904777, 515690}) pos 25: frozenset({515690, 904778}) pos 26: frozenset({515690, 904778}) pos 27: frozenset({515690, 904778}) pos 28: frozenset({515690, 904778}) pos 29: frozenset({515690, 904778}) pos 30: frozenset({515690, 904778}) pos 31: frozenset({515690, 904779}) pos 32: frozenset({515690, 904779}) pos 33: frozenset({515690, 904779}) pos 34: frozenset({515690, 904779}) pos 35: frozenset({515690, 904779}) pos 36: frozenset({515690, 904779}) pos 37: frozenset({515690, 904779}) pos 38: frozenset({515690, 904779}) pos 39: frozenset({515690, 904779}) pos 40: frozenset({515690, 904779}) pos 41: frozenset({515690, 904779}) pos 42: frozenset({515690, 904779}) pos 43: frozenset({515690, 904779}) pos 44: frozenset({515690, 904779}) pos 45: frozenset({515690, 904779}) pos 46: frozenset({515690, 904779}) pos 47: frozenset({515690, 904779}) pos 48: frozenset({515690, 904779}) pos 49: frozenset({515690, 904779}) pos 50: frozenset({515690, 904779}) pos 66: frozenset({515691, 904780}) pos 67: frozenset({515691, 904781}) pos 68: frozenset({515691, 904781}) pos 69: frozenset({515691, 904781}) pos 70: frozenset({515691, 904781}) pos 71: frozenset({515691, 904781}) pos 72: frozenset({515691, 904782}) pos 73: frozenset({515691, 904782}) pos 74: frozenset({515691, 904782}) pos 75: frozenset({515691, 904782}) pos 76: frozenset({515691, 904782}) pos 77: frozenset({515691, 904783}) pos 78: frozenset({515691, 904783}) pos 79: frozenset({515691, 904783}) pos 80: frozenset({515691, 904783}) pos 81: frozenset({515691, 904783}) pos 82: frozenset({515691, 904783}) pos 83: frozenset({515691, 904783}) pos 84: 
frozenset({515691, 904783}) pos 85: frozenset({515691, 904783}) pos 101: frozenset({904784, 515692}) pos 102: frozenset({904785, 515692}) pos 103: frozenset({904785, 515692}) pos 104: frozenset({904785, 515692}) pos 105: frozenset({904785, 515692}) pos 106: frozenset({904786, 515692}) pos 107: frozenset({904786, 515692}) pos 108: frozenset({904786, 515692}) pos 109: frozenset({904786, 515692}) pos 110: frozenset({904786, 515692}) pos 111: frozenset({904786, 515692}) pos 112: frozenset({904786, 515692}) pos 113: frozenset({904786, 515692}) pos 114: frozenset({904786, 515692}) pos 115: frozenset({904786, 515692}) pos 116: frozenset({904786, 515692}) pos 117: frozenset({904786, 515692}) pos 133: frozenset({904787, 515693}) pos 134: frozenset({904788, 515693}) pos 135: frozenset({904788, 515693}) pos 136: frozenset({904788, 515693}) pos 137: frozenset({904788, 515693}) pos 138: frozenset({904788, 515693}) pos 139: frozenset({904788, 515693}) pos 140: frozenset({904788, 515693}) pos 141: frozenset({904788, 515693}) pos 142: frozenset({904788, 515693}) pos 143: frozenset({904788, 515693}) pos 144: frozenset({515693, 904789}) pos 145: frozenset({515693, 904789}) pos 146: frozenset({515693, 904789}) pos 147: frozenset({515693, 904789}) pos 148: frozenset({515693, 904789}) pos 149: frozenset({515693, 904789}) pos 150: frozenset({515693, 904790}) pos 151: frozenset({515693, 904790}) pos 152: frozenset({515693, 904790}) pos 153: frozenset({515693, 904790}) pos 154: frozenset({515693, 904790}) pos 155: frozenset({515693, 904790}) pos 156: frozenset({515693, 904790}) pos 157: frozenset({515693, 904790}) pos 158: frozenset({515693, 904790}) pos 159: frozenset({515693, 904790}) pos 160: frozenset({515693, 904790}) pos 161: frozenset({515693, 904790}) pos 162: frozenset({515693, 904790}) pos 163: frozenset({515693, 904790}) pos 179: frozenset({515694, 904791}) pos 180: frozenset({904792, 515694}) pos 181: frozenset({904792, 515694}) pos 182: frozenset({904792, 515694}) pos 183: 
frozenset({904792, 515694}) pos 184: frozenset({904792, 515694}) pos 185: frozenset({904793, 515694}) pos 186: frozenset({904793, 515694}) pos 187: frozenset({904793, 515694}) pos 188: frozenset({904793, 515694}) pos 189: frozenset({904793, 515694}) pos 190: frozenset({904793, 515694}) pos 206: frozenset({904794, 515695}) pos 207: frozenset({904794, 515695}) pos 208: frozenset({904794, 515695}) pos 209: frozenset({904794, 515695}) pos 210: frozenset({904795, 515695}) pos 211: frozenset({904795, 515695}) pos 212: frozenset({904795, 515695}) pos 213: frozenset({904795, 515695}) pos 229: frozenset({515696, 904796}) pos 230: frozenset({515696, 904797}) pos 231: frozenset({515696, 904797}) pos 232: frozenset({515696, 904797}) pos 233: frozenset({515696, 904797}) pos 234: frozenset({515696, 904798}) pos 235: frozenset({515696, 904798}) pos 236: frozenset({515696, 904798}) pos 237: frozenset({515696, 904798}) pos 238: frozenset({515696, 904798}) pos 239: frozenset({515696, 904798}) pos 255: frozenset({515697, 904799}) pos 256: frozenset({904800, 515697}) pos 257: frozenset({904800, 515697}) pos 258: frozenset({904800, 515697}) pos 259: frozenset({904800, 515697}) pos 260: frozenset({515697, 904801}) pos 261: frozenset({515697, 904801}) pos 262: frozenset({515697, 904801}) pos 263: frozenset({515697, 904801}) pos 264: frozenset({515697, 904801}) pos 265: frozenset({515697, 904801}) pos 266: frozenset({515697, 904802}) pos 267: frozenset({515697, 904802}) pos 268: frozenset({515697, 904802}) pos 269: frozenset({515697, 904802}) pos 270: frozenset({515697, 904802}) pos 271: frozenset({515697, 904802}) pos 272: frozenset({515697, 904802}) pos 273: frozenset({515697, 904802}) pos 289: frozenset({515698, 904803}) pos 290: frozenset({515698, 904803}) pos 291: frozenset({515698, 904803}) pos 292: frozenset({515698, 904804}) pos 293: frozenset({515698, 904804}) pos 294: frozenset({515698, 904804}) pos 295: frozenset({515698, 904804}) pos 311: frozenset({515699, 904805}) pos 312: 
frozenset({515699, 904806}) pos 313: frozenset({515699, 904806}) pos 314: frozenset({515699, 904806}) pos 315: frozenset({515699, 904806}) pos 316: frozenset({515699, 904806}) pos 317: frozenset({515699, 904807}) pos 318: frozenset({515699, 904807}) pos 319: frozenset({515699, 904807}) pos 320: frozenset({515699, 904807}) pos 321: frozenset({515699, 904807}) pos 322: frozenset({515699, 904807}) pos 323: frozenset({904808, 515699}) pos 324: frozenset({904808, 515699}) pos 325: frozenset({904808, 515699}) pos 326: frozenset({904808, 515699}) pos 327: frozenset({904808, 515699}) pos 328: frozenset({904808, 515699}) pos 329: frozenset({904808, 515699}) pos 330: frozenset({904808, 515699}) pos 331: frozenset({904808, 515699}) pos 332: frozenset({904808, 515699}) pos 333: frozenset({904808, 515699}) pos 334: frozenset({904808, 515699}) pos 335: frozenset({904808, 515699}) pos 336: frozenset({904808, 515699}) pos 337: frozenset({904808, 515699}) pos 338: frozenset({904808, 515699}) pos 339: frozenset({904808, 515699}) pos 340: frozenset({904808, 515699}) pos 341: frozenset({904808, 515699}) pos 342: frozenset({904808, 515699}) pos 343: frozenset({904808, 515699})
We can write the recorded text and the positions to two files:
rec.write("data/gen1.txt")
!head -n 10 data/gen1.txt
Genesis 1:1@0 BR>CJT BR> >LHJM >T HCMJM W>T H>RY00
Genesis 1:2@1 WH>RY HJTH THW WBHW
Genesis 1:2@2 WXCK <L&PNJ THWM
Genesis 1:2@3 WRWX >LHJM MRXPT <L&PNJ HMJM00
Genesis 1:3@4 WJ>MR >LHJM
Genesis 1:3@5 JHJ >WR
Genesis 1:3@6 WJHJ&>WR00
Genesis 1:4@7 WJR> >LHJM >T&H>WR
Genesis 1:4@8 KJ&VWB
Genesis 1:4@9 WJBDL >LHJM BJN H>WR WBJN HXCK00
!head -n 30 data/gen1.txt.pos














904776 515690
904776 515690
904776 515690
904776 515690
904776 515690
904776 515690
904776 515690
904777 515690
904777 515690
904777 515690
904777 515690
515690 904778
515690 904778
515690 904778
515690 904778
515690 904778
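The `.pos` file has one line per character position of the text file: the node numbers (if any) to which that character belongs, separated by spaces; positions outside any node (such as the label in front of each clause atom) yield empty lines. Reading it back can be sketched as follows (`readPositions` is a hypothetical helper written for this tutorial, not part of TF):

```python
import os
import tempfile


def readPositions(path):
    """Read a .pos file: line i lists the node numbers covering character i."""
    positions = []
    with open(path) as fh:
        for line in fh:
            # an empty line means: this character belongs to no node
            positions.append(frozenset(int(n) for n in line.split()))
    return positions


# a tiny self-contained demo; the real call would be
# readPositions("data/gen1.txt.pos")
with tempfile.NamedTemporaryFile("w", suffix=".pos", delete=False) as fh:
    fh.write("\n\n904776 515690\n")
    path = fh.name

positions = readPositions(path)
print(positions[2])  # frozenset({904776, 515690})
os.unlink(path)
```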
Now we produce a (fake) annotation file, based on the text.

The file is tab-delimited; the columns are `start`, `end`, `bword`, `tword`.

We annotate as follows:

* every word that starts with a `B` gets `bword=1`;
* every word that ends in a `T` gets `tword=1`.

Then we want every phrase with a b-word to get `bword=1`, and likewise every clause with a b-word to get `bword=1`, and the same for `tword`.
def annotate(fileName):
    annotations = {}
    with open(fileName) as fh:
        pos = 0
        for line in fh:
            words = line.split(" ")
            for word in words[0:2]:
                lWord = len(word)
                pos += lWord + 1
            for word in words[2:]:
                word = word.rstrip("\n")
                lWord = len(word)
                start = pos
                end = pos + lWord - 1
                pos += lWord + 1
                if lWord:
                    if word[0] == "B":
                        annotations.setdefault((start, end), {})["bword"] = 1
                    if word[-1] == "T":
                        annotations.setdefault((start, end), {})["tword"] = 1
    with open(f"{fileName}.ann", "w") as fh:
        fh.write("start\tend\tbword\ttword\n")
        for ((start, end), features) in annotations.items():
            row = "\t".join(
                str(a)
                for a in (
                    start,
                    end,
                    features.get("bword", ""),
                    features.get("tword", ""),
                )
            )
            fh.write(f"{row}\n")
annotate("data/gen1.txt")
Here is the annotation file.
!cat data/gen1.txt.ann
start  end  bword  tword
14     19   1      1
21     23   1
31     32          1
40     42          1
144    148         1
323    325  1
Now we want to feed back these annotations as TF features on `phrase_atom` and `clause_atom` nodes.
Our recorder knows how to do that.
features = rec.makeFeatures("data/gen1.txt.ann")
Let's see.
features["bword"]
{904776: '1', 515690: '1', 904777: '1', 904808: '1', 515699: '1'}
features["tword"]
{904776: '1', 515690: '1', 904779: '1', 904789: '1', 515693: '1'}
Let's check:
for feat in ("bword", "tword"):
for n in features[feat]:
print(f'{feat} {F.otype.v(n)} {n}: {T.text(n, fmt="text-trans-plain")}')
bword phrase_atom 904776: BR>CJT
bword clause_atom 515690: BR>CJT BR> >LHJM >T HCMJM W>T H>RY00
bword phrase_atom 904777: BR>
bword phrase_atom 904808: BJN H>WR WBJN HXCK00
bword clause_atom 515699: WJBDL >LHJM BJN H>WR WBJN HXCK00
tword phrase_atom 904776: BR>CJT
tword clause_atom 515690: BR>CJT BR> >LHJM >T HCMJM W>T H>RY00
tword phrase_atom 904779: >T HCMJM W>T H>RY00
tword phrase_atom 904789: MRXPT
tword clause_atom 515693: WRWX >LHJM MRXPT <L&PNJ HMJM00
What if we want to transform the annotations into word features, instead of features on phrase and clause atoms?
Then we should record the text differently.
We only add slots to the mix.
rec = Recorder()

LIMIT = 10

for (i, cla) in enumerate(L.d(gen1, otype="clause_atom")):
    if i >= LIMIT:
        break

    label = "{} {}:{}".format(*T.sectionFromNode(cla))
    rec.add(f"{label}@{i} ")

    for w in L.d(cla, otype="word"):
        rec.start(w)
        rec.add(T.text(w, fmt="text-trans-plain"))
        rec.end(w)

    rec.add("\n")
It gives the same text:
print(rec.text())
Genesis 1:1@0 BR>CJT BR> >LHJM >T HCMJM W>T H>RY00
Genesis 1:2@1 WH>RY HJTH THW WBHW
Genesis 1:2@2 WXCK <L&PNJ THWM
Genesis 1:2@3 WRWX >LHJM MRXPT <L&PNJ HMJM00
Genesis 1:3@4 WJ>MR >LHJM
Genesis 1:3@5 JHJ >WR
Genesis 1:3@6 WJHJ&>WR00
Genesis 1:4@7 WJR> >LHJM >T&H>WR
Genesis 1:4@8 KJ&VWB
Genesis 1:4@9 WJBDL >LHJM BJN H>WR WBJN HXCK00
but the node positions are different:
print("\n".join(f"pos {i}: {p}" for (i, p) in enumerate(rec.positions()) if p))
pos 14: frozenset({1}) pos 15: frozenset({2}) pos 16: frozenset({2}) pos 17: frozenset({2}) pos 18: frozenset({2}) pos 19: frozenset({2}) pos 20: frozenset({2}) pos 21: frozenset({3}) pos 22: frozenset({3}) pos 23: frozenset({3}) pos 24: frozenset({3}) pos 25: frozenset({4}) pos 26: frozenset({4}) pos 27: frozenset({4}) pos 28: frozenset({4}) pos 29: frozenset({4}) pos 30: frozenset({4}) pos 31: frozenset({5}) pos 32: frozenset({5}) pos 33: frozenset({5}) pos 34: frozenset({6}) pos 35: frozenset({7}) pos 36: frozenset({7}) pos 37: frozenset({7}) pos 38: frozenset({7}) pos 39: frozenset({7}) pos 40: frozenset({8}) pos 41: frozenset({9}) pos 42: frozenset({9}) pos 43: frozenset({9}) pos 44: frozenset({10}) pos 45: frozenset({11}) pos 46: frozenset({11}) pos 47: frozenset({11}) pos 48: frozenset({11}) pos 49: frozenset({11}) pos 50: frozenset({11}) pos 66: frozenset({12}) pos 67: frozenset({13}) pos 68: frozenset({14}) pos 69: frozenset({14}) pos 70: frozenset({14}) pos 71: frozenset({14}) pos 72: frozenset({15}) pos 73: frozenset({15}) pos 74: frozenset({15}) pos 75: frozenset({15}) pos 76: frozenset({15}) pos 77: frozenset({16}) pos 78: frozenset({16}) pos 79: frozenset({16}) pos 80: frozenset({16}) pos 81: frozenset({17}) pos 82: frozenset({18}) pos 83: frozenset({18}) pos 84: frozenset({18}) pos 85: frozenset({18}) pos 101: frozenset({19}) pos 102: frozenset({20}) pos 103: frozenset({20}) pos 104: frozenset({20}) pos 105: frozenset({20}) pos 106: frozenset({21}) pos 107: frozenset({21}) pos 108: frozenset({21}) pos 109: frozenset({22}) pos 110: frozenset({22}) pos 111: frozenset({22}) pos 112: frozenset({22}) pos 113: frozenset({23}) pos 114: frozenset({23}) pos 115: frozenset({23}) pos 116: frozenset({23}) pos 117: frozenset({23}) pos 133: frozenset({24}) pos 134: frozenset({25}) pos 135: frozenset({25}) pos 136: frozenset({25}) pos 137: frozenset({25}) pos 138: frozenset({26}) pos 139: frozenset({26}) pos 140: frozenset({26}) pos 141: frozenset({26}) pos 142: 
frozenset({26}) pos 143: frozenset({26}) pos 144: frozenset({27}) pos 145: frozenset({27}) pos 146: frozenset({27}) pos 147: frozenset({27}) pos 148: frozenset({27}) pos 149: frozenset({27}) pos 150: frozenset({28}) pos 151: frozenset({28}) pos 152: frozenset({28}) pos 153: frozenset({29}) pos 154: frozenset({29}) pos 155: frozenset({29}) pos 156: frozenset({29}) pos 157: frozenset({30}) pos 158: frozenset({31}) pos 159: frozenset({31}) pos 160: frozenset({31}) pos 161: frozenset({31}) pos 162: frozenset({31}) pos 163: frozenset({31}) pos 179: frozenset({32}) pos 180: frozenset({33}) pos 181: frozenset({33}) pos 182: frozenset({33}) pos 183: frozenset({33}) pos 184: frozenset({33}) pos 185: frozenset({34}) pos 186: frozenset({34}) pos 187: frozenset({34}) pos 188: frozenset({34}) pos 189: frozenset({34}) pos 190: frozenset({34}) pos 206: frozenset({35}) pos 207: frozenset({35}) pos 208: frozenset({35}) pos 209: frozenset({35}) pos 210: frozenset({36}) pos 211: frozenset({36}) pos 212: frozenset({36}) pos 213: frozenset({36}) pos 229: frozenset({37}) pos 230: frozenset({38}) pos 231: frozenset({38}) pos 232: frozenset({38}) pos 233: frozenset({38}) pos 234: frozenset({39}) pos 235: frozenset({39}) pos 236: frozenset({39}) pos 237: frozenset({39}) pos 238: frozenset({39}) pos 239: frozenset({39}) pos 255: frozenset({40}) pos 256: frozenset({41}) pos 257: frozenset({41}) pos 258: frozenset({41}) pos 259: frozenset({41}) pos 260: frozenset({42}) pos 261: frozenset({42}) pos 262: frozenset({42}) pos 263: frozenset({42}) pos 264: frozenset({42}) pos 265: frozenset({42}) pos 266: frozenset({43}) pos 267: frozenset({43}) pos 268: frozenset({43}) pos 269: frozenset({44}) pos 270: frozenset({45}) pos 271: frozenset({45}) pos 272: frozenset({45}) pos 273: frozenset({45}) pos 289: frozenset({46}) pos 290: frozenset({46}) pos 291: frozenset({46}) pos 292: frozenset({47}) pos 293: frozenset({47}) pos 294: frozenset({47}) pos 295: frozenset({47}) pos 311: frozenset({48}) pos 312: 
frozenset({49}) pos 313: frozenset({49}) pos 314: frozenset({49}) pos 315: frozenset({49}) pos 316: frozenset({49}) pos 317: frozenset({50}) pos 318: frozenset({50}) pos 319: frozenset({50}) pos 320: frozenset({50}) pos 321: frozenset({50}) pos 322: frozenset({50}) pos 323: frozenset({51}) pos 324: frozenset({51}) pos 325: frozenset({51}) pos 326: frozenset({51}) pos 327: frozenset({52}) pos 328: frozenset({53}) pos 329: frozenset({53}) pos 330: frozenset({53}) pos 331: frozenset({53}) pos 332: frozenset({54}) pos 333: frozenset({55}) pos 334: frozenset({55}) pos 335: frozenset({55}) pos 336: frozenset({55}) pos 337: frozenset({56}) pos 338: frozenset({57}) pos 339: frozenset({57}) pos 340: frozenset({57}) pos 341: frozenset({57}) pos 342: frozenset({57}) pos 343: frozenset({57})
We have produced the same text, so we can use the earlier annotation file to create word features.
features = rec.makeFeatures("data/gen1.txt.ann")
features["bword"]
{1: '1', 2: '1', 3: '1', 51: '1'}
features["tword"]
{1: '1', 2: '1', 5: '1', 8: '1', 9: '1', 27: '1'}
Let's check:
for feat in ("bword", "tword"):
for n in features[feat]:
print(f'{feat} {F.otype.v(n)} {n}: {T.text(n, fmt="text-trans-plain")}')
bword word 1: B
bword word 2: R>CJT
bword word 3: BR>
bword word 51: BJN
tword word 1: B
tword word 2: R>CJT
tword word 5: >T
tword word 8: W
tword word 9: >T
tword word 27: MRXPT
The annotator just looked at the string `BR>CJT`, without knowing that it is two words.
!cat data/gen1.txt.ann
start  end  bword  tword
14     19   1      1
21     23   1
31     32          1
40     42          1
144    148         1
323    325  1
So it has annotated positions 14-19 as a `bword` and as a `tword`.
But TF knows that positions 14-19 correspond to slots 1 and 2, so when the annotations are applied, slots 1 and 2 both get `bword` and `tword`.
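Conceptually, applying a character-range annotation to nodes amounts to taking the union of the node sets at every position in the range. The sketch below illustrates that idea under this assumption; `nodesFromRange` is a hypothetical helper, not the actual code of `makeFeatures`:

```python
def nodesFromRange(positions, start, end):
    """Collect all nodes covering any character in start..end (inclusive)."""
    nodes = set()
    for pos in range(start, end + 1):
        if pos < len(positions):
            nodes |= positions[pos]
    return nodes


# With word-level recording, positions 14-19 cover slot 1 ("B") at position 14
# and slot 2 ("R>CJT") at positions 15-19, so an annotation of that range
# lands on both words:
positions = [frozenset()] * 14 + [frozenset({1})] + [frozenset({2})] * 5
print(nodesFromRange(positions, 14, 19))  # {1, 2}
```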
We can remedy the situation by offering the annotator another text, one in which slots are always separated by a space.
Let's do that by always adding a space after each word, so that real words end up separated by two spaces.
rec = Recorder()

LIMIT = 10

for (i, cla) in enumerate(L.d(gen1, otype="clause_atom")):
    if i >= LIMIT:
        break

    label = "{} {}:{}".format(*T.sectionFromNode(cla))
    rec.add(f"{label}@{i} ")

    for w in L.d(cla, otype="word"):
        rec.start(w)
        rec.add(T.text(w, fmt="text-trans-plain") + " ")
        rec.end(w)

    rec.add("\n")
Here is the text:
print(rec.text())
Genesis 1:1@0 B R>CJT BR> >LHJM >T H CMJM W >T H >RY00
Genesis 1:2@1 W H >RY HJTH THW W BHW
Genesis 1:2@2 W XCK <L& PNJ THWM
Genesis 1:2@3 W RWX >LHJM MRXPT <L& PNJ H MJM00
Genesis 1:3@4 W J>MR >LHJM
Genesis 1:3@5 JHJ >WR
Genesis 1:3@6 W JHJ& >WR00
Genesis 1:4@7 W JR> >LHJM >T& H >WR
Genesis 1:4@8 KJ& VWB
Genesis 1:4@9 W JBDL >LHJM BJN H >WR W BJN H XCK00
We write the text to file.
rec.write("data/gen1wx.txt")
We run our annotator again, because we have a different text:
annotate("data/gen1wx.txt")
Here is the new annotation file.
!cat data/gen1wx.txt.ann
start  end  bword  tword
14     14   1
16     20          1
23     25   1
35     36          1
49     50          1
99     101  1
170    174         1
373    375  1
387    389  1
The features are no surprise:
features = rec.makeFeatures("data/gen1wx.txt.ann")
features["bword"]
{1: '1', 3: '1', 18: '1', 51: '1', 55: '1'}
features["tword"]
{2: '1', 5: '1', 9: '1', 27: '1'}
Let's check:
for feat in ("bword", "tword"):
for n in features[feat]:
print(f'{feat} {F.otype.v(n)} {n}: {T.text(n, fmt="text-trans-plain")}')
bword word 1: B
bword word 3: BR>
bword word 18: BHW
bword word 51: BJN
bword word 55: BJN
tword word 2: R>CJT
tword word 5: >T
tword word 9: >T
tword word 27: MRXPT
CC-BY Dirk Roorda