Lecture: More Examples of Turing Machines and Turing Machine Variants

We’ve talked enough now about Turing machines in the abstract; now let’s talk about how we’re going to specify them in this class. To be completely formal, one should always define the full state machine, but that’s not how we’ll actually do things for the most part. In general, we’re going to give an informal description of a Turing machine by writing out in words what the algorithm does. First, though, let’s take a couple of examples straight out of Sipser as state machines, and then we can discuss what kinds of informal descriptions actually make sense for Turing machines.

First, there’s the language \{0^{2^n} | n \ge 0\}. The idea of the algorithm is that we’ll sweep across the tape, crossing off half of the remaining 0s on each pass. If the number of 0s is even on every pass until only a single 0 remains, we accept; if we ever hit an odd number of 0s greater than one, we reject.

The informal description from Sipser is

“On input string w:

  1. Sweep left to right across the tape, crossing off every other 0.
  2. If in stage 1 the tape contained only one 0, accept
  3. If in stage 1 the tape contained more than a single 0 and the number of 0s was odd, reject
  4. Return the head to the left-hand end of the tape.
  5. Go to stage 1.”
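The crossing-off procedure translates almost directly into ordinary code. Here’s a sketch in Python (a high-level simulation of the stages, not a state machine; the function name is just for illustration):

```python
def accepts_power_of_two(w):
    """Decide {0^(2^n) | n >= 0} by repeatedly crossing off every other 0,
    mirroring the stages of Sipser's informal description."""
    tape = list(w)
    # reject anything that isn't a nonempty string of 0s
    if not tape or any(c != "0" for c in tape):
        return False
    while True:
        zeros = [i for i, c in enumerate(tape) if c == "0"]
        if len(zeros) == 1:
            return True           # stage 2: exactly one 0 remains
        if len(zeros) % 2 == 1:
            return False          # stage 3: odd number of 0s, more than one
        for i in zeros[1::2]:     # stage 1: cross off every other 0
            tape[i] = "x"
```

Running this on 0, 0000, or 00000000 accepts, while 000 or 000000 rejects as soon as a pass finds an odd count.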

and the state diagram is the one given in Sipser (the image isn’t reproduced here).
Now, we can see that the state diagram implements the spec of the informal description and our conceptual outline. State M3 is the state where we’ve seen one 0: if we see another one, we cross it off and go back into state M2; otherwise, if we reach the other end of the tape, we go to the reject state. If we keep seeing even numbers of 0s until we hit the edge of the tape, then we go back to state M1: that control flow is what M4 handles.

So, the level of description we’re going for here is something like the one from Sipser for this problem: a list of stages that are allowed to refer to each other, where you can do things like

  • sweep across the tape
  • read symbols and change them
  • jump to other stages depending on what you read on the tape

If you can describe your algorithm informally without using more complicated concepts than these, then you’re sticking to things that can obviously be implemented on a Turing machine. We’ll expand these limits a little as we establish a set of things that we know Turing machines can do, which we can then reference like predefined functions in a programming language. For example, simulating another machine, given its description as a string and a proposed input, is a computable process. Once we’ve shown that, we’ll have Turing machines whose informal descriptions say “Simulate the machine blah on the input”.

Let’s talk about a few other Turing machines that decide languages we’ve seen before that were neither regular nor context free. First, we have the arithmetic language \{m+n=p | m,n,p \text{ are binary numbers and } m+n=p \text{ as numbers}\}. This language is neither regular nor context free, by the respective pumping lemmas for those classes. As an informal description of a Turing machine, we have something like

On input w,

  1. Scan to the end of the input and place a # symbol, then scan all the way back to the left.
  2. Scan repeatedly to ensure that there is a + before an = and that the string before the + and the string between the + and the = are the same length. (This can be done by marking symbols specially to ensure that we’ve scanned all the appropriate ones, then replacing the marked versions of the 0s and 1s with the normal versions as a cleanup phase.)
  3. Find the non-x character closest to the + but to its left and x it out; then scan right to the non-x character closest to the = but to its left and x it out; then move to the first blank space to the right of the #. If both symbols were 1s, write a 0 and go to stage 4; if exactly one was a 1, write a 1 and stay in stage 3; if both were 0s, write a 0 and stay in stage 3. If all characters (other than the +) to the left of the = are x’ed out, go to stage 5.
  4. Find the non-x character closest to the + but to its left and x it out; then scan right to the non-x character closest to the = but to its left and x it out; then move to the first blank space to the right of the #. If both symbols were 1s, write a 1 and stay in stage 4; if exactly one was a 1, write a 0 and stay in stage 4; if both were 0s, write a 1 and go to stage 3. If all characters (other than the +) to the left of the = are x’ed out, write a 1 in the first blank after the # (the leftover carry) and go to stage 5.
  5. Scan back and forth across the # to ensure that the string between the = and the # is the mirror image of the string to the right of the #. This is just an iterated scan in which you x out matching pairs of characters until everything is x’ed out, then accept; if at any point the characters don’t match, reject.
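Following the stages above, here’s a rough Python rendering of the addition check. The carry that the machine tracks by being in stage 3 versus stage 4 becomes an ordinary variable; the function name and the simplified format checks (exactly one + and one =, as stage 2 requires) are our own illustrative choices:

```python
def accepts_sum(w):
    """Check strings of the form m+n=p (m, n same-length binary strings)
    by mimicking the staged description: add bit by bit from the right,
    tracking the carry, then compare the computed sum against p."""
    # stage 2: basic format check
    if w.count("+") != 1 or w.count("=") != 1:
        return False
    head, p = w.split("=")
    m, _, n = head.partition("+")
    if not m or not n or len(m) != len(n):
        return False
    if any(c not in "01" for c in m + n + p):
        return False
    # stages 3-4: the carry is the one bit of information held in the state
    result, carry = [], 0
    for a, b in zip(reversed(m), reversed(n)):
        total = int(a) + int(b) + carry
        result.append(str(total % 2))
        carry = total // 2
    if carry:
        result.append("1")       # leftover carry, the stage-4 exit case
    # stage 5: the bits were produced LSB-first, like the bits after the #
    return "".join(reversed(result)) == p
```

Note that, like the machine, this compares strings exactly, so it inherits stage 2’s requirement that m and n have the same length.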

So that’s an informal description of how a Turing machine can handle a very basic arithmetic problem. We can play similar, but messier, games to describe other operations such as multiplication, division, etc. Note that the key was having two stages corresponding to whether or not we’re holding a carry bit in the current step of the addition. Hopefully the way this worked made it clear that, in essence, we answered the question “does m+n=p?” by actually computing m+n and then checking it against p.

Now, another language that wasn’t a CFL but that we can decide with a Turing machine is L = \{ w | w \text{ is a palindrome and the numbers of 0s and 1s are equal}\}. Basically, we first scan to ensure that the input is a palindrome, but rather than x’ing out the symbols we read, we replace them with “marked” versions of 0 and 1. If the input is a palindrome, we then scan across it to make sure that there are equal numbers of 0s and 1s, by x’ing out one of each on every pass across the string. It’s not the simplest thing in the world, but it works!
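A quick sketch of that two-phase check in Python (a hypothetical helper; the TM’s marked symbols collapse into ordinary indexing in the palindrome phase, and the crossing-off survives in the second phase):

```python
def accepts(w):
    """Two phases, as in the informal description: first check that w is a
    palindrome by matching symbols from both ends (the TM marks rather than
    erases them), then check the 0s and 1s balance by crossing off one of
    each per pass."""
    if any(c not in "01" for c in w):
        return False
    # phase 1: palindrome check, matching outermost symbols inward
    i, j = 0, len(w) - 1
    while i < j:
        if w[i] != w[j]:
            return False
        i, j = i + 1, j - 1
    # phase 2: repeated sweeps crossing off one 0 and one 1 per pass
    tape = list(w)
    while "0" in tape and "1" in tape:
        tape[tape.index("0")] = "x"
        tape[tape.index("1")] = "x"
    return "0" not in tape and "1" not in tape
```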

Now there are a couple of variants of Turing machines that we should discuss before we move on. First, what if we allowed ourselves multiple tapes to work with? Is that more or less powerful than a single-tape Turing machine? By power, I mean whether it can decide and/or recognize the same set of languages; I don’t mean efficiency, which is a separate concern. Well, it turns out that multi-tape machines are exactly as powerful as single-tape Turing machines. Obviously, a single-tape Turing machine is a special case of a machine with a fixed number of tapes, so we know that the languages described by single-tape machines are a subset of the languages described by multi-tape machines. For the other direction, we can simulate a machine with k tapes on a single tape by splitting that tape into k regions, one per simulated tape, and moving back and forth between them, remembering in the states the last character we saw as we move to the next tape segment. If you’re familiar with the concept, this is much like “currying” for functions: a function of two arguments f : A \times B \to C can also be thought of as a function f' : A \to B \to C. Similarly, we change the decision process so that the states of our original TM, rather than taking in all k tape symbols at once, take them one at a time, leading to new states each step. This rather dramatically increases the number of states used in the simulation, and increases the number of steps as well, but that’s okay since we only need to know that the simulation is possible.
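To make the k-region layout concrete, here’s one possible encoding in Python: each tape becomes a region between # separators, with a * marking the cell each head currently sits on. (Sipser’s construction marks the head cells with dotted symbols; the * and the # separators here are just our stand-ins.)

```python
def encode_tapes(tapes, heads):
    """Pack k tapes onto one tape: regions separated by '#', with the
    symbol under each head marked by a trailing '*'."""
    regions = []
    for tape, h in zip(tapes, heads):
        cells = [c + ("*" if i == h else "") for i, c in enumerate(tape)]
        regions.append("".join(cells))
    return "#" + "#".join(regions) + "#"
```

For example, two tapes holding 0101 and ab with heads at positions 1 and 0 pack into #01*01#a*b#; the single-tape simulator sweeps this string, collecting the starred symbols, to learn what all k heads are reading.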

The other, similar, variant of the Turing machine is the non-deterministic Turing machine. Non-deterministic Turing machines do exactly what they sound like: they allow multiple possible transitions for each combination of state and tape symbol. We can simulate a non-deterministic Turing machine readily enough using a 3-tape Turing machine, which we now know is equivalent to some single-tape Turing machine. The basic idea is that one tape holds the original input, one acts as the working tape for simulating a single path through the non-deterministic machine, and one keeps track of where we are in a breadth-first search through the paths of the computation tree. Now, why breadth-first? Because we want to be certain that if there is a path to an accept state, we find it. Depth-first search runs the risk of diving down an infinite branch when there was a perfectly good accepting path.
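Here’s a sketch of that search in Python. Rather than the 3-tape construction, it runs the breadth-first search directly over (state, head, tape) configurations; the delta format, the accept-state argument, and the limit cutoff (so the sketch always terminates) are our own illustrative choices, not part of the standard construction:

```python
from collections import deque

def nd_accepts(delta, start, accept, w, limit=10_000):
    """Breadth-first simulation of a non-deterministic TM.
    delta maps (state, symbol) to a list of (new_state, write, move)
    choices, with move in {-1, +1}; '_' is the blank symbol."""
    start_conf = (start, 0, tuple(w) + ("_",))
    queue = deque([start_conf])
    seen = {start_conf}
    while queue and limit > 0:
        limit -= 1
        state, head, tape = queue.popleft()
        if state == accept:
            return True
        # enqueue every configuration reachable in one step (BFS, so an
        # accepting path is found even if some branch runs forever)
        for new_state, write, move in delta.get((state, tape[head]), []):
            new_tape = list(tape)
            new_tape[head] = write
            new_head = max(0, head + move)   # head stays on the tape
            if new_head >= len(new_tape):
                new_tape.append("_")         # extend tape with blanks
            conf = (new_state, new_head, tuple(new_tape))
            if conf not in seen:
                seen.add(conf)
                queue.append(conf)
    return False
```

For instance, a machine that non-deterministically guesses it has reached the last symbol can have two choices on reading a 0, and the breadth-first search explores both branches side by side.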

I think that’ll be all for this post; next time we’ll actually get to talking about which languages are decidable, which are recognizable, and so on.

