Annotation outside TF

Task:

  • prepare a text file based on TF data;
  • annotate the text file by assigning values to pieces of text;
  • generate TF features based on these annotations.
In [1]:
%load_ext autoreload
%autoreload 2
In [2]:
from tf.app import use
from tf.convert.recorder import Recorder
In [3]:
A = use('bhsa', hoist=globals(), silent='deep')
silentOff()

We work with Genesis 1 (in fact, only the first 10 clauses).

In [4]:
gen1 = T.nodeFromSection(('Genesis', 1))

We prepare our portion of text for annotation outside TF.

We need to produce a text file and remember the positions of the relevant nodes in that file.

The Recorder is a new facility in TF (still in development) that lets you build a string from nodes while remembering the positions of those nodes in the string. You may add all kinds of material in between the texts of the nodes, and it is up to you how you represent the nodes.
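To illustrate the idea, here is a toy recorder in plain Python (a sketch of the principle only, not TF's implementation): it accumulates characters and, for every character, records the set of nodes that were active at the moment the character was added.

```python
class ToyRecorder:
    def __init__(self):
        self.material = []        # characters recorded so far
        self.nodesByPos = []      # per character: the nodes active at that position
        self.activeNodes = set()  # nodes started but not yet ended

    def start(self, node):
        self.activeNodes.add(node)

    def end(self, node):
        self.activeNodes.discard(node)

    def add(self, string):
        for char in string:
            self.material.append(char)
            self.nodesByPos.append(frozenset(self.activeNodes))

    def text(self):
        return ''.join(self.material)


toyRec = ToyRecorder()
toyRec.add('label ')   # material that belongs to no node
toyRec.start(42)       # 42 is a hypothetical node number
toyRec.add('inside')   # these six characters belong to node 42
toyRec.end(42)
toyRec.add('\n')       # the newline belongs to no node

print(toyRec.text())         # label inside
print(toyRec.nodesByPos[6])  # frozenset({42})
print(toyRec.nodesByPos[0])  # frozenset()
```

The real Recorder works with TF nodes and offers more (such as writing the positions to file), but the bookkeeping is essentially this: one set of nodes per character position.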

We start a recorder.

In [5]:
rec = Recorder()

We can add strings to the recorder, and we can tell nodes to start and to stop.

We add clause atoms and phrase atoms to the recorder.

In [6]:
LIMIT = 10

for (i, cla) in enumerate(L.d(gen1, otype='clause_atom')):
  if i >= LIMIT: # only first ten clause atoms
    break
    
  # we want a label in front of each clause atom
  label = '{} {}:{}'.format(*T.sectionFromNode(cla))
  rec.add(f'{label}@{i} ')

  # we start a clause atom node:
  #   until we end this node, all text that we add counts as material for this clause atom
  rec.start(cla)
  
  for pa in L.d(cla, otype='phrase_atom'):
    # we start a phrase node
    #   until we end this node, all text that we add also counts as material for this phrase atom
    rec.start(pa)
    
    # we add text, it belongs to the current clause atom and to the current phrase atom
    rec.add(T.text(pa, fmt='text-trans-plain'))
    
    # we end the phrase atom
    rec.end(pa)
    
  # we end the clause atom
  rec.end(cla)
  
  # every clause atom on its own line
  #   this newline character does not belong to any node
  rec.add('\n')

We can print the recorded text.

In [7]:
print(rec.text())
Genesis 1:1@0 BR>CJT BR> >LHJM >T HCMJM W>T H>RY00 
Genesis 1:2@1 WH>RY HJTH THW WBHW 
Genesis 1:2@2 WXCK <L&PNJ THWM 
Genesis 1:2@3 WRWX >LHJM MRXPT <L&PNJ HMJM00 
Genesis 1:3@4 WJ>MR >LHJM 
Genesis 1:3@5 JHJ >WR 
Genesis 1:3@6 WJHJ&>WR00 
Genesis 1:4@7 WJR> >LHJM >T&H>WR 
Genesis 1:4@8 KJ&VWB 
Genesis 1:4@9 WJBDL >LHJM BJN H>WR WBJN HXCK00 

We can print the recorded node positions.

In [8]:
print('\n'.join(f'pos {i}: {p}' for (i, p) in enumerate(rec.positions()) if p))
pos 14: frozenset({515674, 904749})
pos 15: frozenset({515674, 904749})
pos 16: frozenset({515674, 904749})
pos 17: frozenset({515674, 904749})
pos 18: frozenset({515674, 904749})
pos 19: frozenset({515674, 904749})
pos 20: frozenset({515674, 904749})
pos 21: frozenset({515674, 904750})
pos 22: frozenset({515674, 904750})
pos 23: frozenset({515674, 904750})
pos 24: frozenset({515674, 904750})
pos 25: frozenset({515674, 904751})
pos 26: frozenset({515674, 904751})
pos 27: frozenset({515674, 904751})
pos 28: frozenset({515674, 904751})
pos 29: frozenset({515674, 904751})
pos 30: frozenset({515674, 904751})
pos 31: frozenset({904752, 515674})
pos 32: frozenset({904752, 515674})
pos 33: frozenset({904752, 515674})
pos 34: frozenset({904752, 515674})
pos 35: frozenset({904752, 515674})
pos 36: frozenset({904752, 515674})
pos 37: frozenset({904752, 515674})
pos 38: frozenset({904752, 515674})
pos 39: frozenset({904752, 515674})
pos 40: frozenset({904752, 515674})
pos 41: frozenset({904752, 515674})
pos 42: frozenset({904752, 515674})
pos 43: frozenset({904752, 515674})
pos 44: frozenset({904752, 515674})
pos 45: frozenset({904752, 515674})
pos 46: frozenset({904752, 515674})
pos 47: frozenset({904752, 515674})
pos 48: frozenset({904752, 515674})
pos 49: frozenset({904752, 515674})
pos 50: frozenset({904752, 515674})
pos 66: frozenset({904753, 515675})
pos 67: frozenset({904754, 515675})
pos 68: frozenset({904754, 515675})
pos 69: frozenset({904754, 515675})
pos 70: frozenset({904754, 515675})
pos 71: frozenset({904754, 515675})
pos 72: frozenset({515675, 904755})
pos 73: frozenset({515675, 904755})
pos 74: frozenset({515675, 904755})
pos 75: frozenset({515675, 904755})
pos 76: frozenset({515675, 904755})
pos 77: frozenset({515675, 904756})
pos 78: frozenset({515675, 904756})
pos 79: frozenset({515675, 904756})
pos 80: frozenset({515675, 904756})
pos 81: frozenset({515675, 904756})
pos 82: frozenset({515675, 904756})
pos 83: frozenset({515675, 904756})
pos 84: frozenset({515675, 904756})
pos 85: frozenset({515675, 904756})
pos 101: frozenset({515676, 904757})
pos 102: frozenset({515676, 904758})
pos 103: frozenset({515676, 904758})
pos 104: frozenset({515676, 904758})
pos 105: frozenset({515676, 904758})
pos 106: frozenset({515676, 904759})
pos 107: frozenset({515676, 904759})
pos 108: frozenset({515676, 904759})
pos 109: frozenset({515676, 904759})
pos 110: frozenset({515676, 904759})
pos 111: frozenset({515676, 904759})
pos 112: frozenset({515676, 904759})
pos 113: frozenset({515676, 904759})
pos 114: frozenset({515676, 904759})
pos 115: frozenset({515676, 904759})
pos 116: frozenset({515676, 904759})
pos 117: frozenset({515676, 904759})
pos 133: frozenset({904760, 515677})
pos 134: frozenset({904761, 515677})
pos 135: frozenset({904761, 515677})
pos 136: frozenset({904761, 515677})
pos 137: frozenset({904761, 515677})
pos 138: frozenset({904761, 515677})
pos 139: frozenset({904761, 515677})
pos 140: frozenset({904761, 515677})
pos 141: frozenset({904761, 515677})
pos 142: frozenset({904761, 515677})
pos 143: frozenset({904761, 515677})
pos 144: frozenset({904762, 515677})
pos 145: frozenset({904762, 515677})
pos 146: frozenset({904762, 515677})
pos 147: frozenset({904762, 515677})
pos 148: frozenset({904762, 515677})
pos 149: frozenset({904762, 515677})
pos 150: frozenset({904763, 515677})
pos 151: frozenset({904763, 515677})
pos 152: frozenset({904763, 515677})
pos 153: frozenset({904763, 515677})
pos 154: frozenset({904763, 515677})
pos 155: frozenset({904763, 515677})
pos 156: frozenset({904763, 515677})
pos 157: frozenset({904763, 515677})
pos 158: frozenset({904763, 515677})
pos 159: frozenset({904763, 515677})
pos 160: frozenset({904763, 515677})
pos 161: frozenset({904763, 515677})
pos 162: frozenset({904763, 515677})
pos 163: frozenset({904763, 515677})
pos 179: frozenset({904764, 515678})
pos 180: frozenset({904765, 515678})
pos 181: frozenset({904765, 515678})
pos 182: frozenset({904765, 515678})
pos 183: frozenset({904765, 515678})
pos 184: frozenset({904765, 515678})
pos 185: frozenset({515678, 904766})
pos 186: frozenset({515678, 904766})
pos 187: frozenset({515678, 904766})
pos 188: frozenset({515678, 904766})
pos 189: frozenset({515678, 904766})
pos 190: frozenset({515678, 904766})
pos 206: frozenset({515679, 904767})
pos 207: frozenset({515679, 904767})
pos 208: frozenset({515679, 904767})
pos 209: frozenset({515679, 904767})
pos 210: frozenset({904768, 515679})
pos 211: frozenset({904768, 515679})
pos 212: frozenset({904768, 515679})
pos 213: frozenset({904768, 515679})
pos 229: frozenset({515680, 904769})
pos 230: frozenset({515680, 904770})
pos 231: frozenset({515680, 904770})
pos 232: frozenset({515680, 904770})
pos 233: frozenset({515680, 904770})
pos 234: frozenset({515680, 904771})
pos 235: frozenset({515680, 904771})
pos 236: frozenset({515680, 904771})
pos 237: frozenset({515680, 904771})
pos 238: frozenset({515680, 904771})
pos 239: frozenset({515680, 904771})
pos 255: frozenset({515681, 904772})
pos 256: frozenset({515681, 904773})
pos 257: frozenset({515681, 904773})
pos 258: frozenset({515681, 904773})
pos 259: frozenset({515681, 904773})
pos 260: frozenset({515681, 904774})
pos 261: frozenset({515681, 904774})
pos 262: frozenset({515681, 904774})
pos 263: frozenset({515681, 904774})
pos 264: frozenset({515681, 904774})
pos 265: frozenset({515681, 904774})
pos 266: frozenset({515681, 904775})
pos 267: frozenset({515681, 904775})
pos 268: frozenset({515681, 904775})
pos 269: frozenset({515681, 904775})
pos 270: frozenset({515681, 904775})
pos 271: frozenset({515681, 904775})
pos 272: frozenset({515681, 904775})
pos 273: frozenset({515681, 904775})
pos 289: frozenset({904776, 515682})
pos 290: frozenset({904776, 515682})
pos 291: frozenset({904776, 515682})
pos 292: frozenset({904777, 515682})
pos 293: frozenset({904777, 515682})
pos 294: frozenset({904777, 515682})
pos 295: frozenset({904777, 515682})
pos 311: frozenset({904778, 515683})
pos 312: frozenset({904779, 515683})
pos 313: frozenset({904779, 515683})
pos 314: frozenset({904779, 515683})
pos 315: frozenset({904779, 515683})
pos 316: frozenset({904779, 515683})
pos 317: frozenset({515683, 904780})
pos 318: frozenset({515683, 904780})
pos 319: frozenset({515683, 904780})
pos 320: frozenset({515683, 904780})
pos 321: frozenset({515683, 904780})
pos 322: frozenset({515683, 904780})
pos 323: frozenset({515683, 904781})
pos 324: frozenset({515683, 904781})
pos 325: frozenset({515683, 904781})
pos 326: frozenset({515683, 904781})
pos 327: frozenset({515683, 904781})
pos 328: frozenset({515683, 904781})
pos 329: frozenset({515683, 904781})
pos 330: frozenset({515683, 904781})
pos 331: frozenset({515683, 904781})
pos 332: frozenset({515683, 904781})
pos 333: frozenset({515683, 904781})
pos 334: frozenset({515683, 904781})
pos 335: frozenset({515683, 904781})
pos 336: frozenset({515683, 904781})
pos 337: frozenset({515683, 904781})
pos 338: frozenset({515683, 904781})
pos 339: frozenset({515683, 904781})
pos 340: frozenset({515683, 904781})
pos 341: frozenset({515683, 904781})
pos 342: frozenset({515683, 904781})
pos 343: frozenset({515683, 904781})

We can write the recorded text and the positions to two files: gen1.txt with the text, and gen1.txt.pos with, for every character position, the tab-separated numbers of the nodes at that position (a blank line for positions that belong to no node).

In [9]:
rec.write('gen1.txt')
In [10]:
!head -n 10 gen1.txt
Genesis 1:1@0 BR>CJT BR> >LHJM >T HCMJM W>T H>RY00 
Genesis 1:2@1 WH>RY HJTH THW WBHW 
Genesis 1:2@2 WXCK <L&PNJ THWM 
Genesis 1:2@3 WRWX >LHJM MRXPT <L&PNJ HMJM00 
Genesis 1:3@4 WJ>MR >LHJM 
Genesis 1:3@5 JHJ >WR 
Genesis 1:3@6 WJHJ&>WR00 
Genesis 1:4@7 WJR> >LHJM >T&H>WR 
Genesis 1:4@8 KJ&VWB 
Genesis 1:4@9 WJBDL >LHJM BJN H>WR WBJN HXCK00 
In [11]:
!head -n 30 gen1.txt.pos












515674	904749
515674	904749
515674	904749
515674	904749
515674	904749
515674	904749
515674	904749
515674	904750
515674	904750
515674	904750
515674	904750
515674	904751
515674	904751
515674	904751
515674	904751
515674	904751

Now we produce a (fake) annotation file based on the text.

The file is tab delimited, the columns are:

  • start character position
  • end character position
  • feature1 value
  • feature2 value
  • etc
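A tab-delimited file of this shape can be read back with Python's standard csv module, for instance (a sketch; the sample rows are taken from the annotation file we generate below):

```python
import csv
import io

# Two sample rows in the format described above:
# start, end, bword, tword (empty string = feature absent)
sample = 'start\tend\tbword\ttword\n14\t19\t1\t1\n21\t23\t1\t\n'

parsed = {}
for row in csv.DictReader(io.StringIO(sample), delimiter='\t'):
    key = (int(row['start']), int(row['end']))
    # keep only the feature columns that have a value
    parsed[key] = {feat: row[feat] for feat in ('bword', 'tword') if row[feat]}

print(parsed)
# {(14, 19): {'bword': '1', 'tword': '1'}, (21, 23): {'bword': '1'}}
```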

We annotate as follows:

  • every word that starts with a B gets bword=1
  • every word that ends with a T gets tword=1

Then we want every phrase atom that contains a b-word to get bword=1, likewise every clause atom, and the same for tword.

In [12]:
def annotate(fileName):
  annotations = {}

  with open(fileName) as fh:
    pos = 0
    for line in fh:
      words = line.split(' ')

      # skip the two label words at the start of each line,
      # but do advance the character position past them
      for word in words[0:2]:
        lWord = len(word)
        pos += lWord + 1
      for word in words[2:]:
        word = word.rstrip('\n')
        lWord = len(word)
        start = pos
        end = pos + lWord - 1
        pos += lWord + 1  # the +1 accounts for the separating space (or newline)
        if lWord:
          if word[0] == 'B':
            annotations.setdefault((start, end), {})['bword'] = 1
          if word[-1] == 'T':
            annotations.setdefault((start, end), {})['tword'] = 1

  # write the annotations as a tab-delimited file with a header row
  with open(f'{fileName}.ann', 'w') as fh:
    fh.write('start\tend\tbword\ttword\n')
    for ((start, end), features) in annotations.items():
      row = '\t'.join(
        str(a) for a in (start, end, features.get('bword', ''), features.get('tword', ''))
      )
      fh.write(f'{row}\n')
In [13]:
annotate('gen1.txt')

Here is the annotation file.

In [14]:
!cat gen1.txt.ann
start	end	bword	tword
14	19	1	1
21	23	1	
31	32		1
40	42		1
144	148		1
323	325	1	

Now we want to feed back these annotations as TF features on phrase_atom and clause_atom nodes.

Our recorder knows how to do that.

In [15]:
features = rec.makeFeatures('gen1.txt.ann')

Let's see.

In [16]:
features['bword']
Out[16]:
{515674: '1', 904749: '1', 904750: '1', 515683: '1', 904781: '1'}
In [17]:
features['tword']
Out[17]:
{515674: '1', 904749: '1', 904752: '1', 904762: '1', 515677: '1'}

Let's check:

In [18]:
for feat in ('bword', 'tword'):
  for n in features[feat]:
    print(f'{feat} {F.otype.v(n)} {n}: {T.text(n, fmt="text-trans-plain")}')
bword clause_atom 515674: BR>CJT BR> >LHJM >T HCMJM W>T H>RY00 
bword phrase_atom 904749: BR>CJT 
bword phrase_atom 904750: BR> 
bword clause_atom 515683: WJBDL >LHJM BJN H>WR WBJN HXCK00 
bword phrase_atom 904781: BJN H>WR WBJN HXCK00 
tword clause_atom 515674: BR>CJT BR> >LHJM >T HCMJM W>T H>RY00 
tword phrase_atom 904749: BR>CJT 
tword phrase_atom 904752: >T HCMJM W>T H>RY00 
tword phrase_atom 904762: MRXPT 
tword clause_atom 515677: WRWX >LHJM MRXPT <L&PNJ HMJM00 

What if we want to transform the annotations into word features instead of features on phrase and clause atoms?

Then we should record the text differently.

This time we record only the slots (the word nodes).

In [19]:
rec = Recorder()
LIMIT = 10

for (i, cla) in enumerate(L.d(gen1, otype='clause_atom')):
  if i >= LIMIT:
    break
  label = '{} {}:{}'.format(*T.sectionFromNode(cla))
  rec.add(f'{label}@{i} ')

  for w in L.d(cla, otype='word'):
    rec.start(w)
    rec.add(T.text(w, fmt='text-trans-plain'))
    rec.end(w)
  
  rec.add('\n')

It gives the same text:

In [20]:
print(rec.text())
Genesis 1:1@0 BR>CJT BR> >LHJM >T HCMJM W>T H>RY00 
Genesis 1:2@1 WH>RY HJTH THW WBHW 
Genesis 1:2@2 WXCK <L&PNJ THWM 
Genesis 1:2@3 WRWX >LHJM MRXPT <L&PNJ HMJM00 
Genesis 1:3@4 WJ>MR >LHJM 
Genesis 1:3@5 JHJ >WR 
Genesis 1:3@6 WJHJ&>WR00 
Genesis 1:4@7 WJR> >LHJM >T&H>WR 
Genesis 1:4@8 KJ&VWB 
Genesis 1:4@9 WJBDL >LHJM BJN H>WR WBJN HXCK00 

but the node positions are different:

In [21]:
print('\n'.join(f'pos {i}: {p}' for (i, p) in enumerate(rec.positions()) if p))
pos 14: frozenset({1})
pos 15: frozenset({2})
pos 16: frozenset({2})
pos 17: frozenset({2})
pos 18: frozenset({2})
pos 19: frozenset({2})
pos 20: frozenset({2})
pos 21: frozenset({3})
pos 22: frozenset({3})
pos 23: frozenset({3})
pos 24: frozenset({3})
pos 25: frozenset({4})
pos 26: frozenset({4})
pos 27: frozenset({4})
pos 28: frozenset({4})
pos 29: frozenset({4})
pos 30: frozenset({4})
pos 31: frozenset({5})
pos 32: frozenset({5})
pos 33: frozenset({5})
pos 34: frozenset({6})
pos 35: frozenset({7})
pos 36: frozenset({7})
pos 37: frozenset({7})
pos 38: frozenset({7})
pos 39: frozenset({7})
pos 40: frozenset({8})
pos 41: frozenset({9})
pos 42: frozenset({9})
pos 43: frozenset({9})
pos 44: frozenset({10})
pos 45: frozenset({11})
pos 46: frozenset({11})
pos 47: frozenset({11})
pos 48: frozenset({11})
pos 49: frozenset({11})
pos 50: frozenset({11})
pos 66: frozenset({12})
pos 67: frozenset({13})
pos 68: frozenset({14})
pos 69: frozenset({14})
pos 70: frozenset({14})
pos 71: frozenset({14})
pos 72: frozenset({15})
pos 73: frozenset({15})
pos 74: frozenset({15})
pos 75: frozenset({15})
pos 76: frozenset({15})
pos 77: frozenset({16})
pos 78: frozenset({16})
pos 79: frozenset({16})
pos 80: frozenset({16})
pos 81: frozenset({17})
pos 82: frozenset({18})
pos 83: frozenset({18})
pos 84: frozenset({18})
pos 85: frozenset({18})
pos 101: frozenset({19})
pos 102: frozenset({20})
pos 103: frozenset({20})
pos 104: frozenset({20})
pos 105: frozenset({20})
pos 106: frozenset({21})
pos 107: frozenset({21})
pos 108: frozenset({21})
pos 109: frozenset({22})
pos 110: frozenset({22})
pos 111: frozenset({22})
pos 112: frozenset({22})
pos 113: frozenset({23})
pos 114: frozenset({23})
pos 115: frozenset({23})
pos 116: frozenset({23})
pos 117: frozenset({23})
pos 133: frozenset({24})
pos 134: frozenset({25})
pos 135: frozenset({25})
pos 136: frozenset({25})
pos 137: frozenset({25})
pos 138: frozenset({26})
pos 139: frozenset({26})
pos 140: frozenset({26})
pos 141: frozenset({26})
pos 142: frozenset({26})
pos 143: frozenset({26})
pos 144: frozenset({27})
pos 145: frozenset({27})
pos 146: frozenset({27})
pos 147: frozenset({27})
pos 148: frozenset({27})
pos 149: frozenset({27})
pos 150: frozenset({28})
pos 151: frozenset({28})
pos 152: frozenset({28})
pos 153: frozenset({29})
pos 154: frozenset({29})
pos 155: frozenset({29})
pos 156: frozenset({29})
pos 157: frozenset({30})
pos 158: frozenset({31})
pos 159: frozenset({31})
pos 160: frozenset({31})
pos 161: frozenset({31})
pos 162: frozenset({31})
pos 163: frozenset({31})
pos 179: frozenset({32})
pos 180: frozenset({33})
pos 181: frozenset({33})
pos 182: frozenset({33})
pos 183: frozenset({33})
pos 184: frozenset({33})
pos 185: frozenset({34})
pos 186: frozenset({34})
pos 187: frozenset({34})
pos 188: frozenset({34})
pos 189: frozenset({34})
pos 190: frozenset({34})
pos 206: frozenset({35})
pos 207: frozenset({35})
pos 208: frozenset({35})
pos 209: frozenset({35})
pos 210: frozenset({36})
pos 211: frozenset({36})
pos 212: frozenset({36})
pos 213: frozenset({36})
pos 229: frozenset({37})
pos 230: frozenset({38})
pos 231: frozenset({38})
pos 232: frozenset({38})
pos 233: frozenset({38})
pos 234: frozenset({39})
pos 235: frozenset({39})
pos 236: frozenset({39})
pos 237: frozenset({39})
pos 238: frozenset({39})
pos 239: frozenset({39})
pos 255: frozenset({40})
pos 256: frozenset({41})
pos 257: frozenset({41})
pos 258: frozenset({41})
pos 259: frozenset({41})
pos 260: frozenset({42})
pos 261: frozenset({42})
pos 262: frozenset({42})
pos 263: frozenset({42})
pos 264: frozenset({42})
pos 265: frozenset({42})
pos 266: frozenset({43})
pos 267: frozenset({43})
pos 268: frozenset({43})
pos 269: frozenset({44})
pos 270: frozenset({45})
pos 271: frozenset({45})
pos 272: frozenset({45})
pos 273: frozenset({45})
pos 289: frozenset({46})
pos 290: frozenset({46})
pos 291: frozenset({46})
pos 292: frozenset({47})
pos 293: frozenset({47})
pos 294: frozenset({47})
pos 295: frozenset({47})
pos 311: frozenset({48})
pos 312: frozenset({49})
pos 313: frozenset({49})
pos 314: frozenset({49})
pos 315: frozenset({49})
pos 316: frozenset({49})
pos 317: frozenset({50})
pos 318: frozenset({50})
pos 319: frozenset({50})
pos 320: frozenset({50})
pos 321: frozenset({50})
pos 322: frozenset({50})
pos 323: frozenset({51})
pos 324: frozenset({51})
pos 325: frozenset({51})
pos 326: frozenset({51})
pos 327: frozenset({52})
pos 328: frozenset({53})
pos 329: frozenset({53})
pos 330: frozenset({53})
pos 331: frozenset({53})
pos 332: frozenset({54})
pos 333: frozenset({55})
pos 334: frozenset({55})
pos 335: frozenset({55})
pos 336: frozenset({55})
pos 337: frozenset({56})
pos 338: frozenset({57})
pos 339: frozenset({57})
pos 340: frozenset({57})
pos 341: frozenset({57})
pos 342: frozenset({57})
pos 343: frozenset({57})

We have produced the same text, so we can use the earlier annotation file to create word features.

In [22]:
features = rec.makeFeatures('gen1.txt.ann')
In [23]:
features['bword']
Out[23]:
{1: '1', 2: '1', 3: '1', 51: '1'}
In [24]:
features['tword']
Out[24]:
{1: '1', 2: '1', 5: '1', 8: '1', 9: '1', 27: '1'}

Let's check:

In [25]:
for feat in ('bword', 'tword'):
  for n in features[feat]:
    print(f'{feat} {F.otype.v(n)} {n}: {T.text(n, fmt="text-trans-plain")}')
bword word 1: B
bword word 2: R>CJT 
bword word 3: BR> 
bword word 51: BJN 
tword word 1: B
tword word 2: R>CJT 
tword word 5: >T 
tword word 8: W
tword word 9: >T 
tword word 27: MRXPT 

Explanation:

The annotator just looked at the string BR>CJT without knowing that it is two words.

In [26]:
!cat gen1.txt.ann
start	end	bword	tword
14	19	1	1
21	23	1	
31	32		1
40	42		1
144	148		1
323	325	1	

So it has annotated pos 14-19 as a bword and as a tword.

But TF knows that 14-19 are slots 1 and 2, so when the annotations are applied, slots 1 and 2 are both set to b-words and t-words.
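The principle can be sketched in plain Python (an illustration only, not TF's actual makeFeatures code): every node that occupies any position in an annotated range receives the feature value.

```python
def featuresFromRanges(positions, ranges):
    # positions: per character position, a frozenset of nodes
    # ranges: (start, end, {featureName: value}) triples
    features = {}
    for (start, end, values) in ranges:
        # collect all nodes that occur anywhere in the range
        nodes = set()
        for pos in range(start, end + 1):
            if pos < len(positions):
                nodes |= positions[pos]
        # assign every feature value to every collected node
        for (feat, value) in values.items():
            for node in nodes:
                features.setdefault(feat, {})[node] = value
    return features


# positions 14 and 15 belong to (hypothetical) slots 1 and 2 respectively
positions = [frozenset()] * 14 + [frozenset({1}), frozenset({2})]
print(featuresFromRanges(positions, [(14, 15, {'bword': '1'})]))
# {'bword': {1: '1', 2: '1'}}
```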

We can remedy this by presenting the annotator with a different text, one where slots are always separated by a space.

Let's do that by always adding a space after each slot, so real words end up separated by two spaces.

In [27]:
rec = Recorder()
LIMIT = 10

for (i, cla) in enumerate(L.d(gen1, otype='clause_atom')):
  if i >= LIMIT:
    break
  label = '{} {}:{}'.format(*T.sectionFromNode(cla))
  rec.add(f'{label}@{i} ')

  for w in L.d(cla, otype='word'):
    rec.start(w)
    rec.add(T.text(w, fmt='text-trans-plain')+' ')
    rec.end(w)
  
  rec.add('\n')

Here is the text:

In [28]:
print(rec.text())
Genesis 1:1@0 B R>CJT  BR>  >LHJM  >T  H CMJM  W >T  H >RY00  
Genesis 1:2@1 W H >RY  HJTH  THW  W BHW  
Genesis 1:2@2 W XCK  <L& PNJ  THWM  
Genesis 1:2@3 W RWX  >LHJM  MRXPT  <L& PNJ  H MJM00  
Genesis 1:3@4 W J>MR  >LHJM  
Genesis 1:3@5 JHJ  >WR  
Genesis 1:3@6 W JHJ& >WR00  
Genesis 1:4@7 W JR>  >LHJM  >T& H >WR  
Genesis 1:4@8 KJ& VWB  
Genesis 1:4@9 W JBDL  >LHJM  BJN  H >WR  W BJN  H XCK00  

We write the text to a file.

In [29]:
rec.write('gen1wx.txt')

We run our annotator again, because we have a different text:

In [30]:
annotate('gen1wx.txt')

Here is the new annotation file.

In [31]:
!cat gen1wx.txt.ann
start	end	bword	tword
14	14	1	
16	20		1
23	25	1	
35	36		1
49	50		1
99	101	1	
170	174		1
373	375	1	
387	389	1	

The features are no surprise:

In [32]:
features = rec.makeFeatures('gen1wx.txt.ann')
In [33]:
features['bword']
Out[33]:
{1: '1', 3: '1', 18: '1', 51: '1', 55: '1'}
In [34]:
features['tword']
Out[34]:
{2: '1', 5: '1', 9: '1', 27: '1'}

Let's check:

In [35]:
for feat in ('bword', 'tword'):
  for n in features[feat]:
    print(f'{feat} {F.otype.v(n)} {n}: {T.text(n, fmt="text-trans-plain")}')
bword word 1: B
bword word 3: BR> 
bword word 18: BHW 
bword word 51: BJN 
bword word 55: BJN 
tword word 2: R>CJT 
tword word 5: >T 
tword word 9: >T 
tword word 27: MRXPT 