Week 6: 2016/02/22-26¶

In [1]:
from tock import *
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as patches, matplotlib.lines as lines


Tuesday class¶

This week we'll show that the two models we learned last week, context-free grammars and pushdown automata, are equivalent. Today we will show how to convert a context-free grammar to a pushdown automaton, which is important because it is the basis for a lot of parsing algorithms (algorithms that take a string as input, decide whether it belongs to the language, and if so, generate a tree as output).

The top-down construction¶

The construction used in the proof of Lemma 2.21 is known as top-down or sometimes "nondeterministic LL" parsing.

The basic idea is pretty simple, and probably easier to describe first without getting into the details of the PDA. The stack is initialized to $S\mathtt{\$}$ (remember that the top of the stack is on the left). Whenever the top stack symbol is a terminal symbol and it matches the next input symbol, we pop it and read in the input symbol. If it doesn't match, then this path of the derivation rejects. Whenever the top stack symbol is a nonterminal symbol, we pop it and nondeterministically push all possible replacements for the nonterminal. Each replacement is pushed in reverse order, so that the leftmost symbol is on the top. If we reach the end of the input string and the stack is just $\mathtt{\$}$, then we accept.

Here's an example grammar:

\begin{align*} S &\rightarrow \mathtt{a} T \mathtt{b} \\ S &\rightarrow \mathtt{b} \\ T &\rightarrow T \mathtt{a} \\ T &\rightarrow \varepsilon \end{align*}
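The expand/match loop can be simulated directly with a breadth-first search over (remaining input, stack) configurations. Here's a minimal sketch of that idea (a standalone helper, not part of Tock); symbols are single characters, the $\mathtt{\$}$ end-of-stack marker is left implicit, and a simple pruning rule keeps the search finite for this grammar:

```python
from collections import deque

def topdown_accepts(rules, start, w):
    # rules maps each nonterminal to a list of right-hand sides;
    # the stack top is the left end of the string.
    agenda = deque([(w, start)])
    seen = set()
    while agenda:
        rest, stack = agenda.popleft()
        if (rest, stack) in seen:
            continue
        seen.add((rest, stack))
        # prune paths whose stack already holds more terminals than
        # there is input left to match
        if sum(1 for x in stack if x not in rules) > len(rest):
            continue
        if not stack:
            if not rest:
                return True          # input consumed, stack empty: accept
            continue
        top, below = stack[0], stack[1:]
        if top in rules:             # expand: nondeterministically replace
            for rhs in rules[top]:
                agenda.append((rest, rhs + below))
        elif rest and rest[0] == top:
            agenda.append((rest[1:], below))   # match: pop and read
        # otherwise this path of the derivation rejects
    return False

# the example grammar above
G = {"S": ["aTb", "b"], "T": ["Ta", ""]}
```

With this grammar, `topdown_accepts(G, "S", "aaab")` succeeds while strings outside the language fail on every path.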

Here's what the successful parse looks like for string aaab:

Input  Stack
aaab   S$
aaab   aTb$
aab    Tb$
aab    Tab$
aab    Taab$
aab    aab$
ab     ab$
b      b$
$\varepsilon$      $

There are also many unsuccessful parses, but as long as one of them succeeds, we accept the string.

The conversion from a CFG to a PDA basically implements the above algorithm in the PDA. This construction is implemented in Tock:

In [2]:
g = Grammar.from_lines(["S -> a T b", "S -> b", "T -> T a", "T -> &"])
p1 = from_grammar(g)
to_graph(p1)

In [3]:
run(p1, "a a a b")

Question. Convert the following CFG to a PDA:

\begin{align*} S &\rightarrow \mathtt{0} S \mathtt{0} \\ S &\rightarrow \mathtt{1} S \mathtt{1} \\ S &\rightarrow \varepsilon \end{align*}

Question. If you actually had to implement a parser this way, how would you do it? What would its time complexity be?

The bottom-up construction¶

There's another parsing strategy that the book doesn't mention at this point. It's called bottom-up, shift-reduce, or sometimes "nondeterministic LR" parsing. It's the basis for most parsing algorithms used in compilers.

The idea is again pretty simple -- it's like top-down parsing in reverse. The stack is initialized to $\mathtt{\$}$. At any point in time, we can do two operations.

In a shift, we read in one input symbol and push it onto the stack.

In a reduce, we check to see if the prefix (top symbols) of the stack match a right-hand-side of a rule (in reverse order), and if so, we can pop those symbols and replace them with the left-hand-side of the rule.

This algorithm is again nondeterministic: it's always possible to do a shift unless we're at the end of the string, and it may be possible to do several different reduces.

If we reach the end of the input and the stack has just $S\mathtt{\$}$, then we accept.

Here's what the successful parse looks like for string aaab:

Input  Stack
aaab   $
aab    a$
aab    Ta$
ab     aTa$
ab     Ta$
b      aTa$
b      Ta$
$\varepsilon$      bTa$
$\varepsilon$      S$
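As with the top-down strategy, the nondeterministic shift-reduce loop can be simulated by a breadth-first search over configurations. Below is a minimal sketch (a standalone helper, not part of Tock); the stack top is again the left end, so a reduce looks for a reversed right-hand side at the front of the stack, and the $\mathtt{\$}$ marker is left implicit:

```python
from collections import deque

def shiftreduce_accepts(rules, start, w):
    # rules is a list of (lhs, rhs) pairs of one-character symbols
    agenda = deque([(w, "")])
    seen = set()
    while agenda:
        rest, stack = agenda.popleft()
        if (rest, stack) in seen:
            continue
        seen.add((rest, stack))
        if not rest and stack == start:
            return True                  # all input read, stack is just S
        if rest:                         # shift: push the next input symbol
            agenda.append((rest[1:], rest[0] + stack))
        for lhs, rhs in rules:           # reduce: match a reversed RHS
            if stack.startswith(rhs[::-1]):
                new = lhs + stack[len(rhs):]
                if len(new) <= len(w) + 1:   # prune runaway epsilon-reductions
                    agenda.append((rest, new))
    return False

# the example grammar above
G = [("S", "aTb"), ("S", "b"), ("T", "Ta"), ("T", "")]
```

Note that the reduce for $T \rightarrow \varepsilon$ matches everywhere, which is why the sketch caps the stack length to keep the search finite.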
In [15]:
p2 = from_grammar(g, mode="bottomup")
to_graph(p2)


Thursday class¶

The conversion from a PDA to a CFG is probably the trickiest to understand of all the constructions we do in this class. Fortunately, though, it's not very difficult to perform.

Let's start with an example PDA, one that recognizes the language of balanced parentheses but using a and b for left and right parentheses:

In [5]:
mc = read_csv("pda-parens.csv")
to_graph(mc)

In [6]:
w = list("aaaabbbaabbb")
r = run(mc, w, show_stack=100)
r


The book uses plots of stack height over time (Figures 2.28 and 2.29) to help illustrate the construction. Here's a function that produces similar plots (you don't need to understand this):

In [7]:
def plot_height(ax, path):
    n = len(path)
    heights = [len(c[2]) for c in path]
    bars = ax.bar(np.arange(n), heights)
    ax.set_xticks(np.arange(n-1)+0.5)
    labels = []
    for i in range(n-1):
        if len(path[i][1]) > len(path[i+1][1]):
            labels.append(path[i][1][0])
        else:
            labels.append("")
    ax.set_xticklabels(labels)
    ax.set_xlabel("input string")
    ax.set_yticks(np.arange(max(heights)+1))
    ax.set_ylim(ymax=max(heights)+0.5)
    ax.set_ylabel("stack height")

    for c, b in zip(path, bars):
        h = b.get_height()
        ax.text(b.get_x() + b.get_width()/2., h, c[0], ha="center", va="bottom")

In [8]:
fig, ax = plt.subplots()
plot_height(ax, r.shortest_path())
plt.show()


Each bar (including bars with zero height) represents a configuration of the machine as it reads the string. The machine's state is written above the bar. The horizontal axis shows the input symbols that are read in (if any) and the vertical axis is the height of the stack. In this case, notice how the stack grows whenever an a is read and shrinks whenever a b is read.

You can try changing the input string w or even the PDA to see how the above graph changes. (However, the figures below will get messed up.)

Let's think about how a CFG would generate this string, working from the top down. The key idea is that every nonterminal symbol covers a substring that is read by a sub-run of the PDA that begins and ends with the same stack. The start symbol covers the whole string and corresponds to the whole run. Let's call the start symbol $A_{q_1q_3}$ because the PDA begins in state $q_1$ and ends in state $q_3$. We can picture it as below:

In [9]:
fig, ax = plt.subplots()
plot_height(ax, r.shortest_path())
plt.show()


Because the run starts and ends with epsilon transitions, there's also a sub-run that covers the whole string but begins in state $q_2$ and ends in state $q_2$:

In [10]:
fig, ax = plt.subplots()
plot_height(ax, r.shortest_path())
plt.show()


So we should add a rule $A_{q_1q_3} \rightarrow A_{q_2q_2}$. The next smallest sub-run that begins and ends with the same stack is the one that covers the whole string except the first and last symbols. It, too, begins in state $q_2$ and ends in state $q_2$:

In [11]:
fig, ax = plt.subplots()
plot_height(ax, r.shortest_path())
plt.show()


So we should add a rule $A_{q_2q_2} \rightarrow \texttt{a} A_{q_2q_2} \texttt{b}$. Now what is the next smallest sub-run? Notice that if we lop off the first and last symbols again, the resulting sub-run wouldn't begin and end with the same stack:

In [12]:
fig, ax = plt.subplots()
plot_height(ax, r.shortest_path())
plt.show()


Instead, there are two next-smallest sub-runs:

In [13]:
fig, ax = plt.subplots()
plot_height(ax, r.shortest_path())
plt.show()


So we should add a rule $A_{q_2q_2} \rightarrow A_{q_2q_2} A_{q_2q_2}$.
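Abbreviating $A_{q_2q_2}$ as $A$, the three rules collected so far are already enough to derive the example string; one possible derivation (chosen to mirror the sub-runs pictured above) is:

\begin{align*} A_{q_1q_3} \Rightarrow A \Rightarrow \mathtt{a}\,A\,\mathtt{b} \Rightarrow \mathtt{a}\,A\,A\,\mathtt{b} \Rightarrow \mathtt{a}\,(\mathtt{a}A\mathtt{b})\,(\mathtt{a}A\mathtt{b})\,\mathtt{b} \Rightarrow^* \mathtt{aaaabbbaabbb} \end{align*}

where the two inner $A$'s go on to derive $\mathtt{aabb}$ and $\mathtt{ab}$ respectively, using the rules $A \rightarrow \mathtt{a}A\mathtt{b}$ and $A \rightarrow \varepsilon$.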

The PDA to CFG construction does this, but not for a particular string; it does this for all possible strings. The book's version generates a lot more rules than are usually necessary. Let's run the construction but remove the useless rules:

In [14]:
to_grammar(mc).remove_useless()

Out[14]:
start: (start,accept)
(start,accept) → (q1,q3)
(q2,q2) → a (q2,q2) b
(q1,q3) → (q2,q2)
(q1,q1) → (q1,q1) (q1,q1)
(q1,q3) → (q1,q1) (q1,q3)
(q1,q3) → (q1,q3) (q3,q3)
(q3,q3) → (q3,q3) (q3,q3)
(q2,q2) → (q2,q2) (q2,q2)
(q1,q1) → ε
(q3,q3) → ε
(q2,q2) → ε

Even after removing useless rules, there are still some that don't really need to be there; but hopefully it's clear that the grammar generates the right language. "start" and "accept" are the names of states that were added to the PDA during preprocessing.
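The heart of the construction is easy to sketch directly: pair every pushing move with every popping move of the same stack symbol to make rules of the form $A_{pq} \rightarrow a A_{rs} b$, then add the "chaining" rules $A_{pq} \rightarrow A_{pr} A_{rq}$ and the base cases $A_{pp} \rightarrow \varepsilon$. Here's a minimal sketch (a hypothetical helper, not Tock's actual to_grammar), applied to a three-state balanced-parentheses PDA like the one above; the state and stack-symbol names are made up for the example:

```python
def pda_to_cfg(states, push_moves, pop_moves):
    # push_moves: (p, a, r, u) = from state p, reading a ('' for epsilon),
    #             go to state r and push stack symbol u
    # pop_moves:  (s, b, u, q) = from state s, reading b, pop u, go to q
    # Variables are triples ('A', p, q); rules are (lhs, rhs-tuple) pairs.
    rules = set()
    # A_pq -> a A_rs b  whenever a push of u can be matched by a pop of u
    for p, a, r, u in push_moves:
        for s, b, v, q in pop_moves:
            if u == v:
                rhs = tuple(x for x in (a, ("A", r, s), b) if x != "")
                rules.add((("A", p, q), rhs))
    # A_pq -> A_pr A_rq  (split a run at any intermediate state r)
    for p in states:
        for q in states:
            for r in states:
                rules.add((("A", p, q), (("A", p, r), ("A", r, q))))
    # A_pp -> epsilon  (an empty run starts and ends in the same state)
    for p in states:
        rules.add((("A", p, p), ()))
    return rules

# a PDA for balanced a/b strings: push $ on entry and x per a;
# pop x per b and $ on exit
rules = pda_to_cfg(
    states={"s", "q", "f"},
    push_moves=[("s", "", "q", "$"), ("q", "a", "q", "x")],
    pop_moves=[("q", "b", "x", "q"), ("q", "", "$", "f")],
)
```

The output contains the analogues of the useful rules above, such as $A_{sf} \rightarrow A_{qq}$ (from pairing the $\mathtt{\$}$ push with the $\mathtt{\$}$ pop) and $A_{qq} \rightarrow \mathtt{a} A_{qq} \mathtt{b}$ (from pairing the x push with the x pop), alongside the chaining and $\varepsilon$ rules.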