from tock import *
from IPython.display import IFrame, HTML
We have been climbing up the Chomsky hierarchy of language classes (cf. Figure 4.10):
Unit | Language class | Automaton | Grammar
---|---|---|---
III | Turing-recognizable languages | Turing machines | Unrestricted grammars
 | *Context-sensitive languages* | *Linear bounded automata* | *Context-sensitive grammars*
II | Context-free languages | Pushdown automata | Context-free grammars
I | Regular languages | Finite automata | Regular expressions
(Cells in italics are not core topics in this course, though we will mention them at some point.)
Now we begin Unit III of the course and finally introduce Turing machines (TMs), which sit at the top of the hierarchy, because, as we will discuss at the end of today's lecture, we think Turing machines define what it is possible to compute.
Read both the informal and formal definitions of TMs (165–170). For now, you can skim or skip over the definition of configuration (page 168 at "As a Turing machine computes" through the end of page 169), though we will certainly need it later.
Watch W8E1: Turing Machines.
There are many variations out there in the definition of TMs. Be careful if you refer to other sources.
The parts of a TM:
The start configuration:
A transition:
The formal definition of TM implies that it is deterministic: For any state $q$ and symbol $a$, there must be exactly one transition from $q$ on symbol $a$.
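To make the determinism requirement concrete, here is a small plain-Python sketch (not tock's notation; the names `delta`, `Q`, `Gamma`, and `is_deterministic` are ours). The transition function is a dict on state/symbol pairs, and determinism means it is a total function: every non-halting state has exactly one transition on every tape symbol.

```python
# Sketch: a deterministic TM's transition function as a Python dict.
# Each (state, symbol) key maps to exactly one (state, write, move) value.
Q = {"q1", "qacc", "qrej"}
Gamma = {"0", "x", "_"}  # "_" stands for the blank symbol

delta = {
    ("q1", "0"): ("q1", "x", "R"),    # overwrite 0 with x, move right
    ("q1", "x"): ("q1", "x", "R"),    # skip over x, move right
    ("q1", "_"): ("qacc", "_", "R"),  # hit a blank: accept
}

def is_deterministic(delta, states, alphabet, halting):
    """True iff every non-halting state has exactly one transition
    on every tape symbol (i.e., delta is total on Q x Gamma)."""
    return all((q, a) in delta
               for q in states - halting
               for a in alphabet)

print(is_deterministic(delta, Q, Gamma, {"qacc", "qrej"}))  # True
```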
If the machine:
then the machine will:
More about moves:
There are three possible outcomes of a TM run:
Looping is something we've never had to worry about before. Deterministic FAs and PDAs can't loop; nondeterministic FAs and PDAs can loop (because of cycles of $\varepsilon$-transitions), but it's always possible to change such machines into equivalent ones that do not loop. But with Turing machines, the possibility of looping is real and very important.
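The three outcomes can be seen in a minimal simulator sketch (plain Python, not tock; `simulate` and the machine below are ours). Since looping cannot be detected in general, the best a simulator can do is impose a step bound and report that it doesn't have an answer yet.

```python
# A minimal deterministic-TM simulator. Outcomes: "accept", "reject",
# or no verdict after max_steps (the machine may be looping).
def simulate(delta, start, accept, reject, input_str, max_steps=1000):
    tape = dict(enumerate(input_str))  # sparse tape; unwritten cells are blank
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return "accept"
        if state == reject:
            return "reject"
        symbol = tape.get(head, "_")
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        head = max(head, 0)  # moving left at the left end stays put
    return "still running (may be looping)"

# A machine that loops on every input: q0 just keeps moving right.
loop_delta = {("q0", s): ("q0", s, "R") for s in "01_"}
print(simulate(loop_delta, "q0", "qacc", "qrej", "0"))
```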
Consequently, there is an important distinction between:
People have built various physical Turing machines, including TMs made out of Lego Mindstorms, Lego and air tubes, wood, and Redstone.
IFrame(width=560, height=315, src="https://www.youtube.com/embed/E3keLeMwfHY")
IFrame(width=560, height=315, src="https://www.youtube.com/embed/vo8izCKHiF0")
IFrame(width=560, height=315, src="https://www.youtube.com/embed/KrNTmOSVW-U")
Read the book's examples of TMs (170–175).
Here are the book's two examples of formal descriptions. (Note that if you are using Tock to write Turing machines, Tock's notation differs from Sipser's; see the Tock documentation.)
Machine $M_2$ recognizes the language $\{\texttt{0}^{2^n} \mid n \geq 0\}$.
m2 = read_csv("tm-m2.csv")
to_graph(m2)
Here's an example run; try a few different input strings to see how the run changes.
run(m2, "0 0").only_path()
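For comparison, here is a plain-Python sketch (not a TM, and not tock) of the algorithm that $M_2$ carries out: sweep across the tape crossing off every other `0`. The function name is ours. The count of `0`s is a power of 2 exactly when it can be repeatedly halved down to 1 without ever becoming odd along the way.

```python
# Mimic M2's strategy arithmetically: crossing off every other 0
# halves the count of remaining 0s on each sweep.
def is_power_of_two_zeros(w):
    if any(c != "0" for c in w):
        return False        # input must consist only of 0s
    n = w.count("0")
    if n == 0:
        return False        # empty input: no 0s at all
    while n > 1:
        if n % 2 == 1:
            return False    # an odd number (> 1) of 0s remains: reject
        n //= 2             # one sweep crosses off half the 0s
    return True             # exactly one 0 left: accept

for w in ["0", "00", "000", "0000"]:
    print(w.ljust(4), is_power_of_two_zeros(w))
```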
Machine $M_1$ recognizes the language $\{w\#w \mid w \in \{\texttt{0}, \texttt{1}\}^\ast\}$.
m1 = read_csv("tm-m1.csv")
m1
As explained at the bottom of page 173, the reject state is implicit in this example. You may do the same when you write formal descriptions of TMs.
run(m1, "0 1 1 0 0 0 # 0 1 1 0 0 0").only_path()
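$M_1$'s zig-zag strategy can also be sketched in plain Python (not tock; symbols are single characters here rather than tock's space-separated tokens, and `zigzag_match` is our name). A direct string comparison would of course be simpler in Python; the point of the sketch is to mirror what the TM actually does: match the leftmost unmarked symbol before the `#` against the leftmost unmarked symbol after it, crossing both off.

```python
# Mimic M1: repeatedly match and cross off corresponding symbols
# on either side of the single # separator.
def zigzag_match(s):
    tape = list(s)
    if tape.count("#") != 1:
        return False             # reject unless exactly one separator
    sep = tape.index("#")
    i, j = 0, sep + 1            # leftmost unmarked positions, each side
    while i < sep and j < len(tape):
        if tape[i] != tape[j]:
            return False         # mismatch: reject
        tape[i] = tape[j] = "x"  # cross off the matched pair
        i, j = i + 1, j + 1
    # accept iff both sides ran out at the same time
    return i == sep and j == len(tape)

print(zigzag_match("011000#011000"))  # True
print(zigzag_match("01#10"))          # False
```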
Read the subsection "Terminology for Describing Turing Machines" (184–185).
Writing Turing machines is extremely tedious -- even more so than writing assembly language. Although some authors introduce a mechanism for defining "macros", Sipser takes a totally different approach, proposing three ways of describing Turing machines.
You've seen formal descriptions. When you are asked for a formal description, either a table or a state diagram is fine.
You've also seen pseudocode-like descriptions of Turing machines; Sipser calls these implementation-level descriptions. These abstract away from states and transitions, but they still speak in terms of the head moving on the tape and reading and writing symbols on the tape. They shouldn't, for example, make use of variables or arithmetic. See the examples in the book to get an idea of what is allowed and what isn't.
High-level descriptions will come later; they abstract away even from the tape.
When we ask you to write a TM, we'll always specify what kind of description we're looking for.
Read the rest of Section 3.3 (182–184).
The Church-Turing thesis (CTT) is also often simply called "Church's thesis," but the name "Church-Turing thesis" acknowledges Turing's key role in developing the thesis.
Sipser's concise statement of the thesis is "Intuitive notion of algorithms equals Turing machine algorithms." It is a thesis (or possibly, a definition), not a theorem. Rather than prove it, we can only give arguments in favor of it.
The first justification comes from Turing's original paper (1936), where he proposed Turing machines as a model of what humans do when they compute. He imagined a computing person in an idealized scenario:
Basically, this is an appeal to intuition: when people compute, this is what they do. Furthermore, he proved (quite briefly) that Turing machines can perform any such computation.
In addition to Turing's original justification, another justification for the CTT, the one more commonly cited today, is that many other proposals have been made for models of computability, and nearly all of them have turned out to be equivalent to Turing machines. The end of Section 3.2 alludes to some of these. The most important ones are probably:
Accidentally Turing Complete is a collection of other weird things that are equivalent to TMs.
An important special case of this argument is that no matter how we try to augment Turing machines to make them more powerful, the augmented machines always seem to turn out to be equivalent to standard Turing machines. This is the topic of the next class.