# Lecture: More Decidability and Recognizability

Continuing from last time, we want to talk more about what kinds of things are decidable and recognizable. We argued semi-formally last time that

• $A_{DFA} = \{(B,w)| B \text{ is a DFA that accepts input string } w\}$
• $A_{REX} = \{(R,w) | R \text{ is a regular expression that generates } w\}$
• $A_{NFA} = \{(B,w) | B \text{ is an NFA that accepts input string } w\}$

are all decidable. These tell us whether a given DFA/NFA/RegExp accepts a given string, which is basically answering the question “is this string in this language?”, so it’s pretty easy to see that TMs encompass everything that regular languages can do. Indeed, it’s even stronger than that: not only is there a Turing machine that simulates each individual DFA, but there is a single Turing machine capable of answering that question for all regular languages. That’s kinda interesting to me, really, that we are now seeing the advantage of describing computation by languages: we can have languages whose elements are descriptions of other computational systems. Now, what about more “meta” properties of regular languages? For example, what if we want to look at a DFA and decide whether or not its language is empty? It turns out we can do that with Turing machines. We can’t just naively test all strings and see if none of them are accepted by the DFA: that machine would loop forever on exactly the DFAs it should accept, so it wouldn’t even be a Turing recognizer for the language we want. Instead, we’ll be a bit more clever (and almost quote Sipser) by having a machine with the following description:

On input $A$ where $A$ is a DFA

1. Mark the start state of A
2. Mark any state that has a transition coming into it from a state that is marked
3. If any new state was marked in stage 2, go back to stage 2, otherwise go to stage 4
4. If no accept state has been marked, accept, otherwise reject
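The four stages above can be sketched in code. The dict-based DFA encoding here is just an illustrative choice, standing in for a “description of a DFA”; the procedure computes the reachable states and accepts exactly when no accept state is among them.

```python
# Sketch of the marking procedure: a decision procedure for DFA emptiness.
# The encoding of a DFA as a plain dict is illustrative, not canonical.

def dfa_language_is_empty(dfa):
    """Return True iff no accept state is reachable from the start state."""
    marked = {dfa["start"]}                      # stage 1: mark the start state
    changed = True
    while changed:                               # stage 3: repeat while stage 2 marks something new
        changed = False
        for state, moves in dfa["delta"].items():
            if state in marked:
                for target in moves.values():    # stage 2: mark successors of marked states
                    if target not in marked:
                        marked.add(target)
                        changed = True
    return not (marked & dfa["accept"])          # stage 4: accept iff no accept state was marked

# A DFA whose only accept state is unreachable, so its language is empty.
dead_accept = {
    "start": "a",
    "accept": {"c"},
    "delta": {
        "a": {"0": "a", "1": "b"},
        "b": {"0": "b", "1": "b"},
        "c": {"0": "c", "1": "c"},
    },
}

# A DFA that can actually reach its accept state.
live = {
    "start": "a",
    "accept": {"b"},
    "delta": {
        "a": {"0": "b", "1": "a"},
        "b": {"0": "b", "1": "b"},
    },
}
```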

So in words, what we’re doing here is checking whether there is some path from the start state to an accept state — the DFA’s language is empty exactly when no such path exists. We don’t care what that path is, so we’re not considering the labels at all but rather just traversing the graph. We’ll call the language decided by this machine $E_{DFA}$: the language of descriptions of DFAs whose corresponding language is empty.
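For comparison, the decider for $A_{DFA}$ from the start of the lecture is even simpler: just simulate the DFA on the string. Here’s a sketch under the same illustrative dict encoding:

```python
# A minimal sketch of the decider for A_DFA: simulate the DFA on w.
# One transition per input symbol, so this always halts.

def dfa_accepts(dfa, w):
    """True iff the DFA accepts the string w."""
    state = dfa["start"]
    for symbol in w:
        state = dfa["delta"][state][symbol]   # follow one transition per symbol
    return state in dfa["accept"]

# Example: a DFA over {0,1} accepting strings with an even number of 1s.
even_ones = {
    "start": "e",
    "accept": {"e"},
    "delta": {
        "e": {"0": "e", "1": "o"},
        "o": {"0": "o", "1": "e"},
    },
}
```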

Using this machine that decides $E_{DFA}$ we can make a machine that decides the language $EQ_{DFA} = \{(A,B) | A,B \text{ are DFAs and } L(A)=L(B)\}$, i.e. a machine that, when given descriptions of two DFAs $A$ and $B$, will tell us whether or not they recognize the same language. The way we do this is with two facts:

• Regular languages are closed under complement and intersection
• The symmetric difference of two sets, $(M \cap \overline{N}) \cup (\overline{M} \cap N)$, is empty iff the sets are equal. Now why is that? Well, the first clause $(M \cap \overline{N})$ is empty iff every string in $M$ is also in $N$, i.e. iff $M \subseteq N$; similarly, $(\overline{M} \cap N)$ is empty iff $N \subseteq M$. Thus the union is empty iff each set is a subset of the other, which holds exactly when they are equal.
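As a quick sanity check of that set fact on finite sets (writing $M \cap \overline{N}$ as the set difference $M - N$):

```python
# Symmetric difference is empty exactly when the two sets are equal.

def symmetric_difference(M, N):
    # (M ∩ complement of N) ∪ (complement of M ∩ N), via set difference
    return (M - N) | (N - M)

M = {"a", "ab", "abb"}
N = {"a", "ab", "abb"}
P = {"a", "b"}
```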

So now what we can do is make a Turing machine that will take two descriptions of DFAs, construct a DFA corresponding to the symmetric difference of the two DFAs (using the closure constructions), then test to see if its language is empty. There’s no possibility of looping anywhere in the process, so this machine will be a decider!
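The whole $EQ_{DFA}$ decider can be sketched as follows, again under the illustrative dict encoding: build the product automaton whose states are pairs, make a pair accepting when exactly one component accepts (that product recognizes the symmetric difference), then run the same reachability test for emptiness.

```python
# Sketch of the EQ_DFA decider: symmetric-difference product + emptiness test.

def product_sym_diff(A, B, alphabet):
    """Product DFA recognizing the symmetric difference of L(A) and L(B)."""
    prod = {"start": (A["start"], B["start"]), "accept": set(), "delta": {}}
    frontier = [prod["start"]]
    while frontier:
        p, q = frontier.pop()
        if (p, q) in prod["delta"]:
            continue
        if (p in A["accept"]) != (q in B["accept"]):  # exactly one accepts
            prod["accept"].add((p, q))
        prod["delta"][(p, q)] = {}
        for s in alphabet:
            nxt = (A["delta"][p][s], B["delta"][q][s])
            prod["delta"][(p, q)][s] = nxt
            frontier.append(nxt)
    return prod

def dfas_equivalent(A, B, alphabet):
    """True iff L(A) == L(B): the symmetric difference is empty."""
    prod = product_sym_diff(A, B, alphabet)
    marked, frontier = set(), [prod["start"]]
    while frontier:                 # same marking idea as the emptiness test
        st = frontier.pop()
        if st in marked:
            continue
        marked.add(st)
        frontier.extend(prod["delta"][st].values())
    return not (marked & prod["accept"])

# Two DFAs for "even number of 1s" (different state names) and one for "odd".
even1 = {"start": "e", "accept": {"e"},
         "delta": {"e": {"0": "e", "1": "o"}, "o": {"0": "o", "1": "e"}}}
even2 = {"start": "x", "accept": {"x"},
         "delta": {"x": {"0": "x", "1": "y"}, "y": {"0": "y", "1": "x"}}}
odd1 = {"start": "e", "accept": {"o"},
        "delta": {"e": {"0": "e", "1": "o"}, "o": {"0": "o", "1": "e"}}}
```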

What about the context-free languages? Well, the equivalent of $A_{DFA}$ above, which is $A_{CFG} = \{(G,w) | G \text{ is a CFG that generates string } w\}$, is decidable, as is $E_{CFG}$. Sadly, as has been discussed before, context-free languages aren’t closed under intersection and complement, so we can’t play the same trick as above to show that $EQ_{CFG}$ is decidable. Indeed, it turns out that it isn’t decidable at all, but we’ll have to delay that proof.

Alright, now we’re pretty much done following Sipser so directly. Let’s go back to talking about numbers for a second. I quoted, without proving it, that the size of the real numbers is much larger than the size of the natural numbers. Let’s prove that explicitly in a proof by contradiction, using what’s called a diagonalization argument.

If the size, or cardinality, of the real numbers is the same as that of the natural numbers, then we should be able to make a 1-1 correspondence between them, thus labeling the real numbers as the 0th real number, 1st real number, 2nd real number, etc. (For simplicity, think of the real numbers in $[0,1)$ written as infinite decimal expansions; showing even this subset is too big is enough.) We can also label each digit of a real number, starting with the most significant digit as the 0th digit and increasing as we move to the right. We can take these two sets of labels to make a “table”, with the rows being all the real numbers “in order” of their labels and the columns being the digits. So, for example, at place (3,5) in the table we will find the 6th digit of the 4th real number (we’re starting at 0). Now, if we have this table, let’s make a new real number that can’t possibly be in the table: its 0th digit is something other than the 0th digit of the 0th number, its 1st digit is something other than the 1st digit of the 1st number, etc. We just arbitrarily pick a digit at each position that meets the criterion (avoiding 0 and 9 sidesteps the annoyance that some reals have two expansions, like $0.1999\ldots = 0.2$).

Now we ask the question: is this number in the table? If it’s not in the table, then it contradicts the assumption that the table contains all real numbers; but if it is in the table, then it must be at some row $i$, and by the way we constructed this number it disagrees with the $i$th real number at the $i$th digit, which means it would not be equal to itself. Hence, there can be no such table.
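The diagonal trick is easy to see on a finite table of digits. This is only a finite caricature, of course — the real argument uses infinite decimal expansions — but the construction is literally the same: build a row that disagrees with row $i$ at position $i$.

```python
# Cantor's diagonal construction on a finite "table" of digit rows.

def diagonal_escape(table):
    """A digit list differing from row i at position i, avoiding 0 and 9."""
    # Staying inside 1..8 sidesteps double expansions like 0.1999... = 0.2.
    return [1 if row[i] != 1 else 2 for i, row in enumerate(table)]

table = [
    [3, 1, 4, 1],
    [2, 7, 1, 8],
    [1, 6, 1, 8],
    [5, 0, 0, 0],
]
new_row = diagonal_escape(table)
```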

If this argument makes you slightly uncomfortable, it’s okay, don’t worry. This argument made Georg Cantor a heretic and an outcast, only for the rest of mathematics to eventually realize he was right in a collective “oops, my bad” after he’d already left the field. It’s counterintuitive because we’re quantifying over the set of all real numbers when trying to define a real number. If y’all will indulge me briefly, this notion of reckless quantification in classical set theory is exactly what led to the development of mathematical logic to prevent such bizarre paradoxes. For example, let’s say we have a set of all sets. This is completely allowed under naive set theory, as strange as it may seem. Does this set contain itself? Well, yes, it would because it contains all sets. Now, however, let’s consider the set of all sets that do not contain themselves. This is also a perfectly reasonable definition in naive set theory. Does it contain itself? If it does contain itself, then it can’t because it only contains sets that don’t contain themselves. If it doesn’t contain itself, then it does because it contains all sets that do not contain themselves. Either way you try to argue, you get bitten by a nasty contradiction. This is the paradox found by Bertrand Russell that helped motivate the development of ramified type theory, which is itself in a sense the ancestor of types in both modern programming languages and logics.

So why are we even talking about this, other than the fact that it gave me an opportunity to ramble about set theory and the need for mathematical logic? Well, we’re going to play a very similar trick to this table in order to show that the language $A_{TM} = \{(M,w) | M \text{ accepts the string } w\}$ isn’t decidable. It’s still recognizable, though, which we can see if we consider our Turing machine simulator from last time: that machine would accept if $M$ accepted $w$, reject if $M$ rejected $w$, and loop if $M$ loops on $w$.
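A minimal single-tape simulator makes the recognizability claim concrete: run $M$ on $w$ step by step, accept if $M$ accepts, reject if $M$ rejects, and simply loop forever if $M$ loops — which is exactly why this gives a recognizer and not a decider. The dict encoding of a TM below is an illustrative choice, not a standard one.

```python
# Sketch of the A_TM recognizer: direct simulation of a Turing machine.

def simulate(tm, w):
    """Run tm on input w. May not terminate if tm loops on w."""
    tape = dict(enumerate(w))          # sparse tape; missing cells are blank "_"
    state, head = tm["start"], 0
    while state not in (tm["accept"], tm["reject"]):
        symbol = tape.get(head, "_")
        state, write, move = tm["delta"][(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state == tm["accept"]       # True = accept, False = reject

# A (halting) TM accepting strings of a's of even length.
even_a = {
    "start": "q0", "accept": "qA", "reject": "qR",
    "delta": {
        ("q0", "a"): ("q1", "a", "R"),
        ("q0", "_"): ("qA", "_", "R"),
        ("q1", "a"): ("q0", "a", "R"),
        ("q1", "_"): ("qR", "_", "R"),
    },
}
```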

Let’s start with the proof by contradiction: assume there is a Turing machine $A$ that decides $A_{TM}$, then we can make a Turing machine $D$ out of $A$ as follows

On input $M$, where $M$ is a description of a Turing machine

1. Run $A$ on the input $(M,M)$, i.e. ask whether $M$ accepts its own description
2. If $A$ accepts, reject, and if $A$ rejects, then accept

Okay, neato, right? Well, we know that the set of all Turing machines is countable, so let’s make a table labeled by all the Turing machines along both axes. Each entry $(i,j)$ in the table records whether the $i$th Turing machine accepts the description of the $j$th Turing machine — which is exactly what $A$ computes on the pair of $M_i$ and $M_j$. Now, $D$ must appear somewhere in the table with some index, let’s call it $n$, so what do we find at entry $(n,n)$? If it’s an accept, that means $D$ accepts when passed its own description — but $D$ was built to reject whenever the machine it’s given accepts its own description, so $D$ would have to reject $(D)$. Similarly, if entry $(n,n)$ is a reject, then by construction $D$ must accept $(D)$. Either way we get a contradiction, which ultimately means the table can’t exist, which means that $A$ doesn’t exist and there is no decider for $A_{TM}$.
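This is the same diagonal construction as before, now over a table of accept/reject booleans. A finite caricature in code: the row built by flipping the diagonal disagrees with every row of the table, which is exactly why $D$ can’t have a row of its own.

```python
# The diagonal trick over a boolean "acceptance table": entry (i, j) says
# whether machine i accepts the description of machine j.

def flipped_diagonal(table):
    """The row D would compute: disagree with row i at column i."""
    return [not table[i][i] for i in range(len(table))]

accept_table = [
    [True,  False, True ],
    [False, False, True ],
    [True,  True,  False],
]
d_row = flipped_diagonal(accept_table)
```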

Next time, we’ll cover computable reductions and do a whole lot more with proving languages decidable, undecidable, recognizable, or unrecognizable. It’ll be fun!