You might want to start at the beginning of this tutorial.

Short introductions to other TF datasets:

In [1]:
%load_ext autoreload
%autoreload 2
In [2]:
from tf.app import use
In [3]:
VERSION = '2017'
In [4]:
A = use('bhsa', hoist=globals(), version=VERSION)
# A = use('bhsa:clone', checkout="clone", hoist=globals(), version=VERSION)
rate limit is 5000 requests per hour, with 4880 left for this hour
	connecting to online GitHub repo annotation/app-bhsa ... connected
Using TF-app in /Users/dirk/text-fabric-data/annotation/app-bhsa/code:
	rv2.0.0=#7b3b9ffba7ee6dbc76a52b8d76475d17babf0daf (latest release)
rate limit is 5000 requests per hour, with 4875 left for this hour
	connecting to online GitHub repo etcbc/bhsa ... connected
Using data in /Users/dirk/text-fabric-data/etcbc/bhsa/tf/2017:
	rv1.6 (latest release)
rate limit is 5000 requests per hour, with 4870 left for this hour
	connecting to online GitHub repo etcbc/phono ... connected
Using data in /Users/dirk/text-fabric-data/etcbc/phono/tf/2017:
	r1.2=#1ac68e976ee4a7f23eb6bb4c6f401a033d0ec169 (latest release)
rate limit is 5000 requests per hour, with 4865 left for this hour
	connecting to online GitHub repo etcbc/parallels ... connected
Using data in /Users/dirk/text-fabric-data/etcbc/parallels/tf/2017:
	r1.2=#395dfe2cb69c261862fab9f0289e594a52121d5c (latest release)
   |     0.00s Dataset without structure sections in otext:no structure functions in the T-API

Rough edges

It might be helpful to peek under the hood, especially when exploring searches that run slowly.

If you went through the previous parts of the tutorial, you have encountered cases where things came to a grinding halt.

Yet we can get a sense of what is going on, even in those cases. For that, we use the lower-level search API S of Text-Fabric, instead of the wrappers that the high-level A API provides.

The main difference is that S.search() returns a generator of the results, whereas A.search() returns a list of the results. In fact, A.search() calls the generator delivered by S.search() as often as needed.

For some queries, fetching results is so costly that we do not want to fetch all of them up-front. Rather, we want to fetch a few, to see how it goes. In those cases, using S.search() directly is preferable to A.search().
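The contrast can be illustrated with plain Python generators. The functions below are toy stand-ins, not the real Text-Fabric implementations; they only mimic the lazy-versus-eager behaviour:

```python
from itertools import islice

def s_search(query):
    # toy stand-in for S.search(): a generator that yields results lazily
    for n in range(100_000):
        yield (n,)

def a_search(query):
    # toy stand-in for A.search(): exhausts the generator into a list up-front
    return list(s_search(query))

# with the generator we can cheaply inspect just the first few results,
# without computing the remaining ones
first_three = list(islice(s_search('...'), 3))
print(first_three)  # [(0,), (1,), (2,)]
```

This is why S.count() and S.fetch() below can report partial progress: they pull results one at a time from the generator.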

In [5]:
query = '''
book
  chapter
    verse
      phrase det=und
        word lex=>LHJM/
'''

Study

First we call S.study(query).

The syntax will be checked, features loaded, the search space will be set up, narrowed down, and the fetching of results will be prepared, but not yet executed.

In order to make the query a bit more interesting, we lift the constraint that the results must be in Genesis 1-2.

In [6]:
S.study(query)
  0.00s Checking search template ...
  0.00s Setting up search space for 5 objects ...
  0.67s Constraining search space with 4 relations ...
  0.75s 	2 edges thinned
  0.75s Setting up retrieval plan with strategy small_choice_multi ...
  0.81s Ready to deliver results from 3345 nodes
Iterate over S.fetch() to get the results
See S.showPlan() to interpret the results

Before we rush to the results, let's have a look at the plan.

In [7]:
S.showPlan()
  0.83s The results are connected to the original search template as follows:
 0     
 1 R0  book
 2 R1    chapter
 3 R2      verse
 4 R3        phrase det=und
 5 R4          word lex=>LHJM/
 6     

Here you see already what your results will look like. Each result r is a tuple of nodes:

(R0, R1, R2, R3, R4)

that instantiate the objects in your template.
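Because a result is a plain tuple, it can be unpacked directly into named variables; the node numbers below are taken from an actual result of this query:

```python
# a result tuple instantiates the template objects top-down
r = (426585, 426624, 1414190, 651505, 4)
book, chapter, verse, phrase, word = r
print(word)  # 4
```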

In case you are curious, you can get details about the search space as well:

In [8]:
S.showPlan(details=True)
Search with 5 objects and 4 relations
Results are instantiations of the following objects:
node  0-book                                              39   choices
node  1-chapter                                          929   choices
node  2-verse                                            754   choices
node  3-phrase                                           805   choices
node  4-word                                             818   choices
Performance parameters:
	yarnRatio            =    1.25
	tryLimitFrom         =      40
	tryLimitTo           =      40
Instantiations are computed along the following relations:
node                                  0-book              39   choices
edge        0-book             [[     1-chapter           23.8 choices
edge        1-chapter          [[     2-verse              0.7 choices
edge        2-verse            [[     3-phrase             1.0 choices (thinned)
edge        3-phrase           [[     4-word               1.0 choices (thinned)
  0.86s The results are connected to the original search template as follows:
 0     
 1 R0  book
 2 R1    chapter
 3 R2      verse
 4 R3        phrase det=und
 5 R4          word lex=>LHJM/
 6     

The part about the nodes shows you how many possible instantiations have been found for each object in your template. These are not results yet, because only combinations of instantiations that satisfy all constraints are results.

The constraints come from the relations between the objects that you specified. In this case, there is only an implicit relation: embedding [[. Later on we'll examine all spatial relations.

The part about the edges shows you the constraints, and in what order they will be computed when stitching results together. In this case the order is exactly the order by which the relations appear in the template, but that will not always be the case. Text-Fabric spends some time and ingenuity to find out an optimal stitch plan. Fetching results is like selecting a node, stitching it to another node with an edge, and so on, until a full stitch of nodes intersects with all the node sets from which they must be chosen (the yarns).
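A toy sketch of stitching (emphatically not Text-Fabric's actual code): pick a node from the first yarn, try to extend it along an edge, and keep only the stitches that satisfy the edge constraint:

```python
# two yarns: candidate nodes for object 0 and object 1
yarns = [
    [1, 2],        # candidates for object 0
    [10, 11, 20],  # candidates for object 1
]

def embeds(a, b):
    # hypothetical edge constraint between object 0 and object 1
    return b // 10 == a

def stitches():
    # try every extension; stitches that fail the constraint are abandoned
    for n0 in yarns[0]:
        for n1 in yarns[1]:
            if embeds(n0, n1):
                yield (n0, n1)

print(list(stitches()))  # [(1, 10), (1, 11), (2, 20)]
```

The real planner additionally chooses the order in which yarns are visited, which is what the "retrieval plan" above is about.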

Fetching results may take time.

For some queries, it can take a large amount of time to walk through all results. Even worse, it may happen that it takes a large amount of time before getting the first result. During stitching, many stitchings will be tried and fail before they can be completed.

This has to do with search strategies on the one hand, and on the other with the very real possibility of encountering pathological search patterns, which have billions of results, mostly unintended. For example, a simple query that asks for 5 words in the Hebrew Bible without further constraints has 425,000 to the power of 5 results. That is roughly 1.4 × 10^28 (a one with 28 zeros), roughly the number of molecules in a few hundred liters of air. That may not sound like much, but it is 10,000 times the number of bytes that can currently be stored on the whole Internet.
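The arithmetic is easy to verify:

```python
n_words = 425_000          # approximate number of words in the Hebrew Bible
n_results = n_words ** 5   # unconstrained 5-word query
print(f'{n_results:.1e}')  # 1.4e+28
```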

Text-Fabric search is not yet done with finding optimal search strategies, and I hope to refine its arsenal of methods in the future, depending on what you report.

Counting results

It is always a good idea to get a feel for the amount of results, before you dive into them head-on.

In [9]:
S.count(progress=1, limit=5)
  0.00s Counting results per 1 up to 5 ...
   |     0.00s 1
   |     0.00s 2
   |     0.00s 3
   |     0.00s 4
   |     0.00s 5
  0.00s Done: 5 results

We asked for 5 results in total, with a progress message for every one. That was a bit conservative.

In [10]:
S.count(progress=100, limit=500)
  0.00s Counting results per 100 up to 500 ...
   |     0.01s 100
   |     0.03s 200
   |     0.05s 300
   |     0.07s 400
   |     0.10s 500
  0.10s Done: 500 results

Still pretty quick. Now we want to count all results.

In [11]:
S.count(progress=200, limit=-1)
  0.00s Counting results per 200 up to  the end of the results ...
   |     0.02s 200
   |     0.06s 400
   |     0.09s 600
   |     0.11s 800
  0.11s Done: 818 results

Fetching results

It is time to see something of those results.

In [12]:
S.fetch(limit=10)
Out[12]:
((426585, 426624, 1414190, 651505, 4),
 (426585, 426624, 1414191, 651515, 26),
 (426585, 426624, 1414192, 651520, 34),
 (426585, 426624, 1414193, 651528, 42),
 (426585, 426624, 1414193, 651534, 50),
 (426585, 426624, 1414194, 651538, 60),
 (426585, 426624, 1414195, 651554, 81),
 (426585, 426624, 1414196, 651564, 97),
 (426585, 426624, 1414197, 651578, 127),
 (426585, 426624, 1414198, 651590, 142))

Not very informative. Just a quick observation: look at the last column. These are the result nodes for the word part in the query, indicated as R4 by showPlan() before. And indeed, they are all below 425,000, the number of words in the Hebrew Bible.

Nevertheless, we want to glean a bit more information from them.

In [13]:
for r in S.fetch(limit=10):
    print(S.glean(r))
  Genesis 1:1 phrase[אֱלֹהִ֑ים ] אֱלֹהִ֑ים 
  Genesis 1:2 phrase[ר֣וּחַ אֱלֹהִ֔ים ] אֱלֹהִ֔ים 
  Genesis 1:3 phrase[אֱלֹהִ֖ים ] אֱלֹהִ֖ים 
  Genesis 1:4 phrase[אֱלֹהִ֛ים ] אֱלֹהִ֛ים 
  Genesis 1:4 phrase[אֱלֹהִ֔ים ] אֱלֹהִ֔ים 
  Genesis 1:5 phrase[אֱלֹהִ֤ים׀ ] אֱלֹהִ֤ים׀ 
  Genesis 1:6 phrase[אֱלֹהִ֔ים ] אֱלֹהִ֔ים 
  Genesis 1:7 phrase[אֱלֹהִים֮ ] אֱלֹהִים֮ 
  Genesis 1:8 phrase[אֱלֹהִ֛ים ] אֱלֹהִ֛ים 
  Genesis 1:9 phrase[אֱלֹהִ֗ים ] אֱלֹהִ֗ים 
Caution

It is not possible to do len(S.fetch()), because fetch() returns a generator, not a list. It will deliver a result every time it is asked, for as long as there are results, but it does not know in advance how many there will be.
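A quick demonstration with an ordinary generator (a stand-in for the real result generator):

```python
def results():
    # stand-in generator delivering 818 dummy results
    yield from ((i,) for i in range(818))

gen = results()
try:
    len(gen)               # generators have no length
except TypeError as e:
    print('no len():', e)

# counting means exhausting the generator, which is essentially
# what S.count() does, reporting progress along the way
print(sum(1 for _ in results()))  # 818
```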

Fetching a result can be costly, because, due to the constraints, many possibilities may have to be tried and rejected before the next result is found.

That is why you often see results coming in at varying speeds when counting them.

We can also use A.table() to make a list of results. This function is part of the BHSA app API, not of the generic Text-Fabric machinery, as opposed to S.glean().

So, you can use S.glean() for every Text-Fabric corpus, but the output is still not very nice. A.table() gives much nicer output.

In [14]:
A.table(S.fetch(limit=5))
n  p  book  chapter  verse  phrase  word
1Genesis 1:1GenesisGenesis 1אֱלֹהִ֑ים אֱלֹהִ֑ים
2Genesis 1:2GenesisGenesis 1ר֣וּחַ אֱלֹהִ֔ים אֱלֹהִ֔ים
3Genesis 1:3GenesisGenesis 1אֱלֹהִ֖ים אֱלֹהִ֖ים
4Genesis 1:4GenesisGenesis 1אֱלֹהִ֛ים אֱלֹהִ֛ים
5Genesis 1:4GenesisGenesis 1אֱלֹהִ֔ים אֱלֹהִ֔ים

Slow queries

The search template above has some pretty tight constraints on one of its objects, so the amount of data to deal with is pretty limited.

If the constraints are weak, search may become slow.

For example, here is a query that looks for pairs of phrases in the same clause in such a way that one is engulfed by the other.

In [15]:
query = '''
% test
% verse book=Genesis chapter=2 verse=25
verse
  clause
                                 
    p1:phrase
      w1:word
      w3:word
      w1 < w3

    p2:phrase
      w2:word
      w1 < w2 
      w3 > w2
    
    p1 < p2   
'''

A couple of remarks; you may have encountered some of these before.

  • some objects have been given a name
  • there are additional relations specified between named objects
  • < means: comes before, and > means: comes after, in the canonical order for nodes; for words this means: comes textually before/after, but for other nodes the meaning is explained here
  • later on we describe those relations in more detail

Note on order Look at the words w1 and w3 below phrase p1. Although w1 comes before w3 in the template, this is not translated into a search constraint of the same nature.

Order between objects in a template is never significant, only embedding is.

Because order is not significant, you have to specify order yourself, using relations.

It turns out that this is better than the other way around. In MQL order is significant, and it is very difficult to search for w1 and w2 in any order. Especially if you are looking for more than 2 complex objects with lots of feature conditions, your search template would explode if you had to spell out all possible permutations. See the example of Reinoud Oosting below.

Note on gaps Look at the phrases p1 and p2. We do not specify a textual order here, only that they are different; to prevent duplicate results with p1 and p2 interchanged, we even stipulate that p1 < p2. There are many spatial relationships possible between different objects. In many cases, neither the one comes before the other, nor vice versa. They can overlap, one can occur in a gap of the other, they can be completely disjoint and interleaved, etc.
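Under the assumption that each object is linked to a set of word slots, these relationships can be pictured as operations on slot sets; the slot numbers below are invented for illustration, and the tests are a sketch, not Text-Fabric's actual definitions:

```python
# hypothetical slot sets of two objects
p1 = {3, 4, 5, 9, 10}   # a phrase with a gap (slots 6-8 are missing)
p2 = {6, 7}             # a phrase occupying part of that gap

disjoint = not (p1 & p2)                                  # no shared slots
p2_in_gap_of_p1 = disjoint and min(p1) < min(p2) and max(p2) < max(p1)
p1_wholly_before_p2 = max(p1) < min(p2)                   # one possible order test

print(disjoint, p2_in_gap_of_p1, p1_wholly_before_p2)  # True True False
```

Note how p2 sits inside p1's gap: the objects are disjoint, yet neither comes wholly before the other.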

In [16]:
# ignore this
# S.tweakPerformance(yarnRatio=2)
In [17]:
S.study(query)
  0.00s Checking search template ...
  0.00s Setting up search space for 7 objects ...
  0.32s Constraining search space with 10 relations ...
  2.36s 	6 edges thinned
  2.36s Setting up retrieval plan with strategy small_choice_multi ...
  2.41s Ready to deliver results from 1894418 nodes
Iterate over S.fetch() to get the results
See S.showPlan() to interpret the results

Text-Fabric knows that narrowing down the search space in this case would take ages, without shrinking the space significantly. So it skips doing that for most constraints.

Let us see the plan, with details.

In [18]:
S.showPlan(details=True)
Search with 7 objects and 9 relations
Results are instantiations of the following objects:
node  0-verse                                          23207   choices
node  1-clause                                         88051   choices
node  2-phrase                                        252982   choices
node  3-word                                          425732   choices
node  4-word                                          425732   choices
node  5-phrase                                        252982   choices
node  6-word                                          425732   choices
Performance parameters:
	yarnRatio            =    1.25
	tryLimitFrom         =      40
	tryLimitTo           =      40
Instantiations are computed along the following relations:
node                                  0-verse          23207   choices
edge        0-verse            [[     1-clause             4.4 choices (thinned)
edge        1-clause           [[     2-phrase             2.7 choices (thinned)
edge        2-phrase           [[     4-word               1.7 choices (thinned)
edge        2-phrase           [[     3-word               2.2 choices (thinned)
edge        3-word             <      4-word               0   choices
edge        1-clause           [[     5-phrase             2.9 choices (thinned)
edge        2-phrase           <      5-phrase             0   choices
edge        5-phrase           [[     6-word               2.3 choices (thinned)
edge      4,3-word            >,<     6-word               0   choices
  2.48s The results are connected to the original search template as follows:
 0     
 1     % test
 2     % verse book=Genesis chapter=2 verse=25
 3 R0  verse
 4 R1    clause
 5                                      
 6 R2      p1:phrase
 7 R3        w1:word
 8 R4        w3:word
 9           w1 < w3
10     
11 R5      p2:phrase
12 R6        w2:word
13           w1 < w2 
14           w3 > w2
15         
16         p1 < p2   
17     

As you see, we have a hefty search space here. Let us play with the count() function.

In [19]:
S.count(progress=10, limit=100)
  0.00s Counting results per 10 up to 100 ...
   |     0.08s 10
   |     0.08s 20
   |     0.08s 30
   |     0.09s 40
   |     0.10s 50
   |     0.10s 60
   |     0.11s 70
   |     0.11s 80
   |     0.11s 90
   |     0.11s 100
  0.11s Done: 100 results

We can be bolder than this!

In [20]:
S.count(progress=100, limit=1000)
  0.00s Counting results per 100 up to 1000 ...
   |     0.10s 100
   |     0.12s 200
   |     0.12s 300
   |     0.22s 400
   |     0.25s 500
   |     0.26s 600
   |     0.30s 700
   |     0.38s 800
   |     0.40s 900
   |     0.52s 1000
  0.52s Done: 1000 results

OK, not too bad, but note that it takes a big fraction of a second to get just 100 results.

Now let us go for all of them by the thousand.

In [21]:
S.count(progress=1000, limit=-1)
  0.00s Counting results per 1000 up to  the end of the results ...
   |     0.48s 1000
   |     0.84s 2000
   |     1.20s 3000
   |     1.57s 4000
   |     1.89s 5000
   |     2.51s 6000
   |     3.76s 7000
  4.66s Done: 7618 results

See? This is substantial work.

In [22]:
A.table(S.fetch(limit=5))
n  p  verse  clause  phrase  word  word  phrase  word
1Genesis 2:25וַיִּֽהְי֤וּ שְׁנֵיהֶם֙ עֲרוּמִּ֔ים הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו שְׁנֵיהֶם֙ הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו שְׁנֵיהֶם֙ הָֽעֲרוּמִּ֔ים עֲרוּמִּ֔ים
2Genesis 2:25וַיִּֽהְי֤וּ שְׁנֵיהֶם֙ עֲרוּמִּ֔ים הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו שְׁנֵיהֶם֙ הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו שְׁנֵיהֶם֙ אָדָ֖ם עֲרוּמִּ֔ים עֲרוּמִּ֔ים
3Genesis 2:25וַיִּֽהְי֤וּ שְׁנֵיהֶם֙ עֲרוּמִּ֔ים הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו שְׁנֵיהֶם֙ הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו שְׁנֵיהֶם֙ וְעֲרוּמִּ֔ים עֲרוּמִּ֔ים
4Genesis 2:25וַיִּֽהְי֤וּ שְׁנֵיהֶם֙ עֲרוּמִּ֔ים הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו שְׁנֵיהֶם֙ הָֽאָדָ֖ם וְאִשְׁתֹּ֑ו שְׁנֵיהֶם֙ אִשְׁתֹּ֑ו עֲרוּמִּ֔ים עֲרוּמִּ֔ים
5Genesis 4:4וְהֶ֨בֶל הֵבִ֥יא גַם־ה֛וּא מִבְּכֹרֹ֥ות צֹאנֹ֖ו וּמֵֽחֶלְבֵהֶ֑ן הֶ֨בֶל גַם־ה֛וּא הֶ֨בֶל גַם־הֵבִ֥יא הֵבִ֥יא

Hand-coding

As a check, here is some code that looks for basically the same phenomenon: a phrase within the gap of another phrase. It does not use search, it gets somewhat more focused results, and it runs in half the time compared to the search with the template.

Hint If you are comfortable with programming, and what you look for is fairly generic, you may be better off without search, provided you can translate your understanding of the data into an effective procedure within Text-Fabric. But wait till we are completely done with this example!

In [23]:
indent(reset=True)
info('Getting gapped phrases')
results = []
for v in F.otype.s('verse'):
    for c in L.d(v, otype='clause'):
        ps = L.d(c, otype='phrase')
        first = {}
        last = {}
        slots = {}
        # make an index of phrase boundaries and slot sets
        for p in ps:
            words = L.d(p, otype='word')
            first[p] = words[0]
            last[p] = words[-1]
            slots[p] = set(words)
        # find pairs (p1, p2) where p2 lies wholly inside a gap of p1
        for p1 in ps:
            for p2 in ps:
                if p2 < p1:
                    continue
                if len(slots[p1] & slots[p2]) != 0:
                    continue
                if first[p1] < first[p2] and last[p2] < last[p1]:
                    results.append((v, c, p1, p2, first[p1], first[p2], last[p2], last[p1]))
info('{} results'.format(len(results)))
  0.00s Getting gapped phrases
  2.37s 368 results

Pretty printing

We can use the pretty printing of A.table() and A.show() here as well, even though we have not used search!

Note that you can show the node numbers. In this case it helps to see where the gaps are.

In [24]:
A.table(results, withNodes=True, end=5)
A.show(results, start=1, end=1)
n  p  verse  clause  phrase  phrase  word  word  word  word
1Genesis 2:251414245427767וַיִּֽהְי֤וּ 6521471159שְׁנֵיהֶם֙ 6521481160עֲרוּמִּ֔ים 652147הָֽאָדָ֖ם וְ1164אִשְׁתֹּ֑ו 6521471159שְׁנֵיהֶם֙ 652147הָֽאָדָ֖ם וְ1164אִשְׁתֹּ֑ו 6521481160עֲרוּמִּ֔ים 1159שְׁנֵיהֶם֙ 1160עֲרוּמִּ֔ים 1160עֲרוּמִּ֔ים 1164אִשְׁתֹּ֑ו
2Genesis 4:41414273427889וְ6525041720הֶ֨בֶל 6525051721הֵבִ֥יא 652504גַם־1723ה֛וּא מִבְּכֹרֹ֥ות צֹאנֹ֖ו וּמֵֽחֶלְבֵהֶ֑ן 6525041720הֶ֨בֶל 652504גַם־1723ה֛וּא 6525051721הֵבִ֥יא 1720הֶ֨בֶל 1721הֵבִ֥יא 1721הֵבִ֥יא 1723ה֛וּא
3Genesis 10:2114144454283866541024819גַּם־ה֑וּא 6541034821אֲבִי֙ כָּל־בְּנֵי־4824עֵ֔בֶר 654102אֲחִ֖י יֶ֥פֶת הַ4828גָּדֹֽול׃ 6541024819גַּם־ה֑וּא 654102אֲחִ֖י יֶ֥פֶת הַ4828גָּדֹֽול׃ 6541034821אֲבִי֙ כָּל־בְּנֵי־4824עֵ֔בֶר 4819גַּם־4821אֲבִי֙ 4824עֵ֔בֶר 4828גָּדֹֽול׃
4Genesis 12:171414505428569וַיְנַגַּ֨ע יְהוָ֧ה׀ 6546785803אֶת־פַּרְעֹ֛ה 6546795805נְגָעִ֥ים 5806גְּדֹלִ֖ים 654678וְ654678אֶת־5809בֵּיתֹ֑ו עַל־דְּבַ֥ר שָׂרַ֖י אֵ֥שֶׁת אַבְרָֽם׃ 6546785803אֶת־פַּרְעֹ֛ה 654678וְ654678אֶת־5809בֵּיתֹ֑ו 6546795805נְגָעִ֥ים 5806גְּדֹלִ֖ים 5803אֶת־5805נְגָעִ֥ים 5806גְּדֹלִ֖ים 5809בֵּיתֹ֑ו
5Genesis 13:11414509428585וַיַּעַל֩ 6547255868אַבְרָ֨ם 6547265869מִ5870מִּצְרַ֜יִם 654725ה֠וּא וְאִשְׁתֹּ֧ו וְ5875כָל־428585הַנֶּֽגְבָּה׃ 6547255868אַבְרָ֨ם 654725ה֠וּא וְאִשְׁתֹּ֧ו וְ5875כָל־6547265869מִ5870מִּצְרַ֜יִם 5868אַבְרָ֨ם 5869מִ5870מִּצְרַ֜יִם 5875כָל־

result 1

NB Gaps are a tricky phenomenon. In gaps we deal with them thoroughly.

Performance tuning

Here is an example by Yanniek van der Schans (2018-09-21).

In [25]:
query = '''
c:clause
  PreGap:phrase_atom
  LastPhrase:phrase_atom
  :=

Gap:clause_atom
  :: word

PreGap < Gap
Gap < LastPhrase
c || Gap
'''
In [26]:
S.study(query)
S.showPlan(details=True)
  0.00s Checking search template ...
  0.00s Setting up search space for 5 objects ...
  0.16s Constraining search space with 8 relations ...
  0.69s 	2 edges thinned
  0.69s Setting up retrieval plan with strategy small_choice_multi ...
  0.71s Ready to deliver results from 454123 nodes
Iterate over S.fetch() to get the results
See S.showPlan() to interpret the results
Search with 5 objects and 8 relations
Results are instantiations of the following objects:
node  0-clause                                         88101   choices
node  1-phrase_atom                                   267519   choices
node  2-phrase_atom                                    88101   choices
node  3-clause_atom                                     5201   choices
node  4-word                                            5201   choices
Performance parameters:
	yarnRatio            =    1.25
	tryLimitFrom         =      40
	tryLimitTo           =      40
Instantiations are computed along the following relations:
node                                  3-clause_atom     5201   choices
edge        3-clause_atom      ::     4-word               1.0 choices (thinned)
edge        4-word             ]]     3-clause_atom        0   choices
edge        3-clause_atom      <      2-phrase_atom    44050.5 choices
edge        2-phrase_atom      :=     0-clause             1.0 choices (thinned)
edge        0-clause           [[     2-phrase_atom        0   choices
edge        0-clause           ||     3-clause_atom        0   choices
edge        0-clause           [[     1-phrase_atom        2.7 choices
edge        1-phrase_atom      <      3-clause_atom        0   choices
  0.72s The results are connected to the original search template as follows:
 0     
 1 R0  c:clause
 2 R1    PreGap:phrase_atom
 3 R2    LastPhrase:phrase_atom
 4       :=
 5     
 6 R3  Gap:clause_atom
 7 R4    :: word
 8     
 9     PreGap < Gap
10     Gap < LastPhrase
11     c || Gap
12     
In [27]:
S.count(progress=1, limit=2)
  0.00s Counting results per 1 up to 2 ...
   |     1.65s 1
   |     8.25s 2
  8.25s Done: 2 results

Can we do better?

The performance parameter yarnRatio can be used to increase the amount of preprocessing, and we can increase the number of random samples that we take by means of tryLimitFrom and tryLimitTo.

We start with increasing the amount of up-front edge-spinning.

In [28]:
S.tweakPerformance(yarnRatio=0.2, tryLimitFrom=10000, tryLimitTo=10000)
Performance parameters, current values:
	tryLimitFrom         =   10000
	tryLimitTo           =   10000
	yarnRatio            =     0.2
In [29]:
S.study(query)
S.showPlan(details=True)
  0.00s Checking search template ...
  0.00s Setting up search space for 5 objects ...
  0.20s Constraining search space with 8 relations ...
  1.10s 	2 edges thinned
  1.10s Setting up retrieval plan with strategy small_choice_multi ...
  1.35s Ready to deliver results from 454123 nodes
Iterate over S.fetch() to get the results
See S.showPlan() to interpret the results
Search with 5 objects and 8 relations
Results are instantiations of the following objects:
node  0-clause                                         88101   choices
node  1-phrase_atom                                   267519   choices
node  2-phrase_atom                                    88101   choices
node  3-clause_atom                                     5201   choices
node  4-word                                            5201   choices
Performance parameters:
	yarnRatio            =     0.2
	tryLimitFrom         =   10000
	tryLimitTo           =   10000
Instantiations are computed along the following relations:
node                                  3-clause_atom     5201   choices
edge        3-clause_atom      ::     4-word               1.0 choices (thinned)
edge        4-word             ]]     3-clause_atom        0   choices
edge        3-clause_atom      <      2-phrase_atom    44050.5 choices
edge        2-phrase_atom      :=     0-clause             1.0 choices (thinned)
edge        0-clause           [[     2-phrase_atom        0   choices
edge        0-clause           ||     3-clause_atom        0   choices
edge        0-clause           [[     1-phrase_atom        3.0 choices
edge        1-phrase_atom      <      3-clause_atom        0   choices
  1.36s The results are connected to the original search template as follows:
 0     
 1 R0  c:clause
 2 R1    PreGap:phrase_atom
 3 R2    LastPhrase:phrase_atom
 4       :=
 5     
 6 R3  Gap:clause_atom
 7 R4    :: word
 8     
 9     PreGap < Gap
10     Gap < LastPhrase
11     c || Gap
12     

It seems to be the same plan. No improvement.

What if we decrease the amount of edge spinning?

In [30]:
S.tweakPerformance(yarnRatio=5, tryLimitFrom=10000, tryLimitTo=10000)
Performance parameters, current values:
	tryLimitFrom         =   10000
	tryLimitTo           =   10000
	yarnRatio            =       5
In [31]:
S.study(query)
S.showPlan(details=True)
  0.00s Checking search template ...
  0.00s Setting up search space for 5 objects ...
  0.20s Constraining search space with 8 relations ...
  0.79s 	2 edges thinned
  0.79s Setting up retrieval plan with strategy small_choice_multi ...
  1.03s Ready to deliver results from 454123 nodes
Iterate over S.fetch() to get the results
See S.showPlan() to interpret the results
Search with 5 objects and 8 relations
Results are instantiations of the following objects:
node  0-clause                                         88101   choices
node  1-phrase_atom                                   267519   choices
node  2-phrase_atom                                    88101   choices
node  3-clause_atom                                     5201   choices
node  4-word                                            5201   choices
Performance parameters:
	yarnRatio            =       5
	tryLimitFrom         =   10000
	tryLimitTo           =   10000
Instantiations are computed along the following relations:
node                                  3-clause_atom     5201   choices
edge        3-clause_atom      ::     4-word               1.0 choices (thinned)
edge        4-word             ]]     3-clause_atom        0   choices
edge        3-clause_atom      <      2-phrase_atom    44050.5 choices
edge        2-phrase_atom      :=     0-clause             1.0 choices (thinned)
edge        0-clause           [[     2-phrase_atom        0   choices
edge        0-clause           ||     3-clause_atom        0   choices
edge        0-clause           [[     1-phrase_atom        3.0 choices
edge        1-phrase_atom      <      3-clause_atom        0   choices
  1.04s The results are connected to the original search template as follows:
 0     
 1 R0  c:clause
 2 R1    PreGap:phrase_atom
 3 R2    LastPhrase:phrase_atom
 4       :=
 5     
 6 R3  Gap:clause_atom
 7 R4    :: word
 8     
 9     PreGap < Gap
10     Gap < LastPhrase
11     c || Gap
12     

No change either.

We'll look for queries where the parameters matter more in the future.

Next

You have seen cases where the implementation is to blame.

Now I want to point to gaps in your understanding: gaps



All steps

  • start your first step in mastering the bible computationally
  • display become an expert in creating pretty displays of your text structures
  • search turbo charge your hand-coding with search templates



  • exportExcel make tailor-made spreadsheets out of your results
  • share draw in other people's data and let them use yours
  • export export your dataset as an Emdros database

CC-BY Dirk Roorda