Lecture: Machines Simulating Machines, Some Decidability

Last time we casually mentioned the idea that Turing machines can simulate other Turing machines. This isn’t covered much in Sipser, at least not in a way I liked, so let’s talk a bit informally about how such a thing makes sense. First off, let’s note that a Turing machine itself can be given some textual, finite description as a string. Though it might seem silly, remember this machine:

  digraph M {
    rankdir=LR;
    size="8,5";
    node [shape = circle];
    // M0 is the start state, Ma accepts, Mr rejects; x marks a crossed-off 0
    M0 -> Mr [label = "_ -> R"];
    M0 -> Mr [label = "x -> R"];
    M0 -> M1 [label = "0 -> _,R"];
    M1 -> M1 [label = "x -> R"];
    M1 -> Ma [label = "_ -> R"];
    M1 -> M2 [label = "0 -> x,R"];
    M2 -> M2 [label = "x -> R"];
    M2 -> M4 [label = "_ -> L"];
    M2 -> M3 [label = "0 -> R"];
    M3 -> Mr [label = "_ -> R"];
    M3 -> M3 [label = "x -> R"];
    M3 -> M2 [label = "0 -> x,R"];
    M4 -> M4 [label = "0 -> L; x -> L"];
    M4 -> M1 [label = "_ -> R"];
  }

[Figure: zeropowersagain.png, the state diagram rendered from the graphviz source above]

That’s the same machine twice: once as graphviz source and once as the rendered diagram. The source includes some data about how things should look, but other than that it’s a finite text description of the Turing machine.

Now, since we know that a Turing machine can have multiple tapes, let’s imagine a machine with two tapes. The first tape will contain the text description of the machine we’re simulating, and the second will contain the input to the simulated machine. The basic approach is to step through the simulation: we read the input tape the way the simulated machine would, and we use the tape holding the machine description both to keep track of which state the simulated machine is in and to figure out what to do at each step. Now, you might object that it’s not obvious we can do the appropriate lookups and moving around for an arbitrary number of states in a finitary way. I think it is possible, with a clever encoding, to lay out the description of the Turing machine so that we move either left or right into the appropriate state on the tape; this eliminates the need for a lookup, but it means we need a number of repeated copies of the states on the tape. On the other hand, we could use a naive encoding of the machine description and then just build the machine so that it does a lookup, but only for machines with at most some fixed number of states, and then build different versions of the simulator for different “sizes” of Turing machines. In either case, everything stays nice and finite.
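
Since we have text descriptions in hand anyway, here is the flavor of that simulation in ordinary code instead of on a tape: the machine being simulated is nothing but data, a finite transition table, and one fixed interpreter loop steps through it. This is just a sketch; the dict encoding, the default-to-reject convention for missing transitions, and the step bound are all my own choices, and the table below transcribes the diagram above.

  def simulate_tm(machine, input_string, max_steps=10_000):
      """Run a single-tape TM, given as a dict with "start", "accept",
      "reject", and a "delta" mapping (state, symbol) -> (state, write, move),
      on input_string. Returns "accept", "reject", or "timeout"."""
      tape = dict(enumerate(input_string))   # sparse tape; blanks are "_"
      state, head = machine["start"], 0
      for _ in range(max_steps):
          if state == machine["accept"]:
              return "accept"
          if state == machine["reject"]:
              return "reject"
          symbol = tape.get(head, "_")
          # Missing transitions fall through to the reject state.
          state, write, move = machine["delta"].get(
              (state, symbol), (machine["reject"], symbol, "R"))
          tape[head] = write
          head += 1 if move == "R" else -1
      return "timeout"

  # The machine from the diagram above, transcribed as data.
  zero_powers = {
      "start": "M0", "accept": "Ma", "reject": "Mr",
      "delta": {
          ("M0", "0"): ("M1", "_", "R"),
          ("M1", "x"): ("M1", "x", "R"),
          ("M1", "_"): ("Ma", "_", "R"),
          ("M1", "0"): ("M2", "x", "R"),
          ("M2", "x"): ("M2", "x", "R"),
          ("M2", "_"): ("M4", "_", "L"),
          ("M2", "0"): ("M3", "0", "R"),
          ("M3", "x"): ("M3", "x", "R"),
          ("M3", "0"): ("M2", "x", "R"),
          ("M4", "0"): ("M4", "0", "L"),
          ("M4", "x"): ("M4", "x", "L"),
          ("M4", "_"): ("M1", "_", "R"),
      },
  }
  assert simulate_tm(zero_powers, "0000") == "accept"
  assert simulate_tm(zero_powers, "000") == "reject"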

There’s a strange and important lesson here that I’d like to expound upon for a bit. When we write programs, we’re writing finite text descriptions of algorithms. Of course, I know you all know that you’ve been writing text, but let’s let that sink in for a moment. All our programs are of finite length and the alphabets we use to write them are finite, so how many programs are there? Countably many, which in the grand scheme of mathematics is a pretty tiny number. On the other hand, there are uncountably many real numbers. That alone pretty much guarantees that we can’t have exact arithmetic on arbitrary real numbers. The fact that there are uncountably many functions \mathbb{N} \to \mathbb{N} also tells us that we’re giving up a lot of possible functions by requiring that we can write finite descriptions of them. On the other hand, the really amazing part is just how much we can do with finite descriptions of functions. This idea that we can write a finite description of every Turing machine as a string is going to be integral to the rest of this topic, where we explore the limits of what is computable.
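
Half of that counting argument is worth seeing concretely: the finite strings over a finite alphabet can be listed in one sequence, shortest first and alphabetically within each length, which is exactly what it means for there to be countably many of them, and programs are such strings. A small sketch of that enumeration (the function name is hypothetical, just for illustration):

  from itertools import count, islice, product

  def all_strings(alphabet):
      """Yield every finite string over alphabet: shortest first,
      alphabetically within each length. Each string gets paired with a
      natural number (its position in the sequence), a bijection with N."""
      for length in count(0):
          for letters in product(sorted(alphabet), repeat=length):
              yield "".join(letters)

  # The first few strings over {0, 1}:
  print(list(islice(all_strings({"0", "1"}), 7)))
  # ['', '0', '1', '00', '01', '10', '11']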

Before we get to the limits of computability, that is, the limits of what is Turing-recognizable, let’s first explore the limits of the more restricted notion of decidability.

Our first language is going to be A_{DFA} = \{(B,w) | B \text{ is a DFA that accepts input string } w\}. We want to prove that this language is decidable. How do you prove something is decidable? You define a Turing machine for it that always halts, never looping. Just as we described how to make a TM that simulates other TMs, we can make a TM that simulates a DFA on an input. Since the DFA always accepts or rejects in finite time, the simulation will also always accept or reject in finite time. Thus, our decider simply runs the simulation on the text description of B and the input w.
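
To see how short that decider’s algorithm really is, here’s a sketch in Python; the dict encoding of B stands in for whatever string encoding of (B,w) we’d actually put on the tape, and all the names are mine:

  def accepts_dfa(dfa, w):
      """Decide whether the DFA accepts w. This always halts: each symbol
      of the finite string w triggers exactly one transition."""
      state = dfa["start"]
      for symbol in w:
          state = dfa["delta"][(state, symbol)]
      return state in dfa["accept"]

  # Example: a DFA over {0, 1} accepting strings with an even number of 1s.
  even_ones = {
      "start": "even", "accept": {"even"},
      "delta": {
          ("even", "0"): "even", ("even", "1"): "odd",
          ("odd", "0"): "odd", ("odd", "1"): "even",
      },
  }
  assert accepts_dfa(even_ones, "1010")
  assert not accepts_dfa(even_ones, "111")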

Now, for NFAs we can do a similar thing with A_{NFA} = \{(B,w) | B \text{ is an NFA that accepts input string } w\}, and we can reuse our machine for A_{DFA} by making a new machine that first converts the input NFA to a DFA and then runs the decider for A_{DFA} on the resulting DFA and the input string. Since converting an NFA to a DFA is a straightforward algorithm, the subset construction, we can encode the conversion in a Turing machine, and thus we have our decider.
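
Here’s a sketch of that conversion, the subset construction, in the same toy encoding; epsilon moves are written as the empty string "" and each DFA state is a (frozen) set of NFA states. Again, the encoding is my own, and the last function shows the reuse: decide A_{NFA} by converting and then calling the accepts_dfa sketch from above.

  def epsilon_closure(nfa, states):
      """All states reachable from `states` by zero or more epsilon moves."""
      stack, closure = list(states), set(states)
      while stack:
          q = stack.pop()
          for r in nfa["delta"].get((q, ""), set()):
              if r not in closure:
                  closure.add(r)
                  stack.append(r)
      return frozenset(closure)

  def nfa_to_dfa(nfa):
      """Subset construction: each DFA state is a set of NFA states."""
      start = epsilon_closure(nfa, {nfa["start"]})
      delta, seen, todo = {}, {start}, [start]
      while todo:
          S = todo.pop()
          for a in nfa["alphabet"]:
              moved = set()
              for q in S:
                  moved |= nfa["delta"].get((q, a), set())
              T = epsilon_closure(nfa, moved)
              delta[(S, a)] = T
              if T not in seen:
                  seen.add(T)
                  todo.append(T)
      accept = {S for S in seen if S & nfa["accept"]}
      return {"start": start, "accept": accept, "delta": delta}

  def accepts_nfa(nfa, w):
      """Decide A_NFA by reduction: convert, then run the DFA decider."""
      return accepts_dfa(nfa_to_dfa(nfa), w)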

Similarly, we can do the same thing for regular expressions, reusing our machine that decides A_{NFA} by first converting the regular expression into an NFA. This way, A_{REX} = \{(R,w) | R \text{ is a regular expression that generates } w\} is also decidable.
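
And here’s a sketch of the regular-expression-to-NFA step, Thompson’s construction. To keep it short I skip parsing and assume the expression arrives already parsed as a nested tuple; that representation, and everything named here, is my own scaffolding rather than anything from Sipser. The result plugs straight into accepts_nfa above.

  import itertools

  _fresh = itertools.count()

  def _new_state():
      return f"q{next(_fresh)}"

  def regex_to_nfa(ast, alphabet):
      """Build an NFA (same dict encoding as above) by structural
      recursion on the AST; each piece has one start and one end state."""
      delta = {}
      def add(q, a, r):
          delta.setdefault((q, a), set()).add(r)
      def build(node):
          kind = node[0]
          if kind == "char":                      # a single symbol
              s, t = _new_state(), _new_state()
              add(s, node[1], t)
              return s, t
          if kind == "concat":                    # r1 followed by r2
              s1, t1 = build(node[1]); s2, t2 = build(node[2])
              add(t1, "", s2)
              return s1, t2
          if kind == "union":                     # r1 | r2
              s, t = _new_state(), _new_state()
              s1, t1 = build(node[1]); s2, t2 = build(node[2])
              add(s, "", s1); add(s, "", s2)
              add(t1, "", t); add(t2, "", t)
              return s, t
          if kind == "star":                      # r1*
              s, t = _new_state(), _new_state()
              s1, t1 = build(node[1])
              add(s, "", s1); add(t1, "", t)
              add(s, "", t); add(t1, "", s1)      # skip it, or loop back
              return s, t
          raise ValueError(f"unknown node: {kind!r}")
      start, end = build(ast)
      return {"start": start, "accept": {end},
              "alphabet": alphabet, "delta": delta}

  # (0|1)*0 as an AST; the language is "binary strings ending in 0".
  r = ("concat",
       ("star", ("union", ("char", "0"), ("char", "1"))),
       ("char", "0"))
  assert accepts_nfa(regex_to_nfa(r, {"0", "1"}), "1100")
  assert not accepts_nfa(regex_to_nfa(r, {"0", "1"}), "01")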

We could keep following Sipser pretty closely and talk about other languages related to regular languages that are decidable, but for the moment let’s stop here, and we’ll do more on recognizable languages next time.
