Continuing from last time, we’ve shown that NFAs and DFAs are equivalent. We’re now well prepared to discuss which operations, other than union and intersection, the regular languages are closed under. Having two different ways of representing the regular languages means that we can choose to present our constructions in terms of DFAs or NFAs, depending on which is easier.
Beyond union and intersection, the regular languages are also closed under concatenation, Kleene star, and complement. We’ll define each of these in turn and prove, by construction, that the regular languages are closed under each of them.
First, concatenation: for languages $L_1$ and $L_2$ we define the concatenation $L_1 \circ L_2 = \{ xy \mid x \in L_1, y \in L_2 \}$. In words, the concatenation of two languages is the language consisting of a string from the first language followed by a string from the second. To get some intuition, let’s talk about what some simple concatenations are:
- $\emptyset \circ L = L \circ \emptyset = \emptyset$, why? Because there are no strings in $\emptyset$ and thus there is no string that can be first or second (respectively) in the concatenation.
- $\{\epsilon\} \circ L = L \circ \{\epsilon\} = L$, because there is only the empty string in $\{\epsilon\}$ and we know that $\epsilon w = w \epsilon = w$ for any string $w$.

Now, to prove that the regular languages are closed under concatenation we will assume that we have two regular languages $L_1$ and $L_2$ and that we have NFAs $N_1$, which decides $L_1$, and $N_2$, which decides $L_2$. We’ll describe in words how we take these two NFAs and make a new NFA that decides the concatenation. First, we take every accept state of $N_1$ and draw an $\epsilon$-transition from it to the start state of $N_2$. Then we take the old accept states of $N_1$ and demote them to regular states. That’s it!

[Figures: $N_1$ and $N_2$ on their own, then the combined machine after step one and after step two.]
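To make the two-step construction concrete, here’s a sketch in Python. The encoding is a deliberately minimal one of our own invention, not a standard library: an NFA is a triple of start state, accept-state set, and a transition dict mapping `(state, symbol)` to a set of states, with the empty string `""` standing in for $\epsilon$.

```python
def eps_closure(delta, states):
    """All states reachable from `states` by epsilon ("") transitions."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in delta.get((q, ""), set()):
            if r not in seen:
                seen.add(r)
                stack.append(r)
    return seen

def accepts(nfa, s):
    """Simulate an NFA (start, accept_states, delta) on the string s."""
    start, accept, delta = nfa
    current = eps_closure(delta, {start})
    for ch in s:
        stepped = set()
        for q in current:
            stepped |= delta.get((q, ch), set())
        current = eps_closure(delta, stepped)
    return bool(current & accept)

def concat(n1, n2):
    """The closure construction; assumes the two NFAs have disjoint states."""
    start1, accept1, delta1 = n1
    start2, accept2, delta2 = n2
    delta = {**delta1, **delta2}
    # Step one: epsilon transitions from N1's accept states to N2's start.
    for q in accept1:
        delta[(q, "")] = delta.get((q, ""), set()) | {start2}
    # Step two: demote N1's accept states; only N2's remain accepting.
    return (start1, accept2, delta)

# N1 accepts exactly "a", N2 accepts exactly "b".
n1 = ("p0", {"p1"}, {("p0", "a"): {"p1"}})
n2 = ("q0", {"q1"}, {("q0", "b"): {"q1"}})
n12 = concat(n1, n2)
```

Note that `concat` really is just the two steps from the prose: the only states that accept in the result are the old accept states of the second machine.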
The next construction we’ll look at is complement. Complement is probably what it sounds like: if you have a language $L$ over the alphabet $\Sigma$, then the complement is $\bar{L} = \{ w \in \Sigma^* \mid w \notin L \}$. Now, this might make you a touch uncomfortable. After all, I don’t find it inherently obvious that just because you can computably tell if a string is in a language, you can computably tell if it’s not in the language. In the case of regular languages, it’s actually pretty easy, as we can show using DFAs. If we have a regular language $L$ and a DFA $D$ that decides it, then we can construct a new DFA that decides the complement just by taking the complement of the set of accept states and leaving everything else the same. In other words, if a state was an accept state in $D$ then it’s not an accept state in the new machine, and vice versa. We can see somewhat intuitively that this decides the complement: if a string would end in an accept state of $D$, then it won’t end in an accept state of the new machine, and if it would not end in an accept state of $D$ then it will end in an accept state of the new machine. The NFA case is less simple, but the nice thing about knowing that NFAs and DFAs describe the exact same languages is that we can use whichever representation is the simplest for our purposes.
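Here’s a sketch of the flip-the-accept-states construction. The encoding is again our own: a DFA is a tuple of state set, start state, accept set, and a *total* transition function. Totality is what makes this correct: a DFA has exactly one run on each string, so flipping acceptance of the final state flips acceptance of the string.

```python
def dfa_accepts(dfa, s):
    """Run the (deterministic, total) transition function on s."""
    states, start, accept, delta = dfa
    q = start
    for ch in s:
        q = delta[(q, ch)]
    return q in accept

def complement(dfa):
    """Flip accept and non-accept states; leave everything else the same."""
    states, start, accept, delta = dfa
    return (states, start, states - accept, delta)

# A DFA over {a, b} that accepts strings with an even number of a's.
even_as = (
    {"even", "odd"},
    "even",
    {"even"},
    {("even", "a"): "odd", ("odd", "a"): "even",
     ("even", "b"): "even", ("odd", "b"): "odd"},
)
odd_as = complement(even_as)
```

If we applied the same trick naively to an NFA, a string could have both an accepting and a rejecting run, and flipping the accept set would not give the complement; that’s exactly why the prose warns that the NFA case is less simple.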
Finally, we need to describe the Kleene star. This one is slightly more complicated to describe but very simple to construct. For a language $L$, the Kleene star is $L^* = \{ w_1 w_2 \cdots w_k \mid k \geq 0 \text{ and each } w_i \in L \}$. In words, the Kleene star operation takes a language and returns a new language that’s the concatenation of 0 or more strings in the language. Since “0” is an option, this means that whether or not the language contains the empty string $\epsilon$, the Kleene star of the language does contain $\epsilon$.
We’ll show the regular languages are closed under this operation using NFAs. In words, what we do for our NFA $N$ is attach a new state and make it the start state and also an accept state, make an $\epsilon$-transition from the new start state to the old start state, and then make $\epsilon$-transitions from each of the other accept states to the new start state. Essentially, we are making a loop out of our NFA that can be executed an arbitrary number of times. Why do we make the new start state also an accept state, though? Well, it’s because we’ve insisted that the Kleene star always include the empty string, and this is an easy way to guarantee that our new NFA will accept the empty string.
Now given that we have all of these operations, maybe there’s another way we can encode the regular languages in a way that is a bit more familiar: the regular expressions. Essentially, the idea of regular expressions is that we describe the entirety of the regular languages with an inductive type that includes only things that are obviously regular. So we’ll define the regular expressions as being made out of
- $a$, where $a$ is a character in the alphabet $\Sigma$
- $R_1 \circ R_2$, where $R_1$ and $R_2$ are regular expressions
- $R_1 \cup R_2$, where $R_1$ and $R_2$ are regular expressions
- $R^*$, where $R$ is a regular expression
- $\emptyset$, the empty language
- $\epsilon$, the language containing only the empty string
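This inductive definition translates directly into a datatype with one constructor per case. Here’s a sketch in Python (the class names are our own choices, not standard):

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Char:            # a single character from the alphabet
    c: str

@dataclass(frozen=True)
class Cat:             # concatenation of two regular expressions
    left: "Regex"
    right: "Regex"

@dataclass(frozen=True)
class Alt:             # union of two regular expressions
    left: "Regex"
    right: "Regex"

@dataclass(frozen=True)
class Star:            # Kleene star of a regular expression
    inner: "Regex"

@dataclass(frozen=True)
class Empty:           # the empty language
    pass

@dataclass(frozen=True)
class Eps:             # the language containing only the empty string
    pass

Regex = Union[Char, Cat, Alt, Star, Empty, Eps]

# (a ∪ b)* ∘ c as a value of this type:
example = Cat(Star(Alt(Char("a"), Char("b"))), Char("c"))
```

Frozen dataclasses give us structural equality for free, which is the natural notion of equality for an inductive type like this.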
Now, intuitively we want the regular expressions to be exactly the regular languages. First, though, we should have a way to describe what it means for a regular expression to match a string. We can describe it in terms of expansions and we’ll do so inductively:
- $a$ expands into the literal character $a$
- $R_1 \circ R_2$ expands into $s_1 s_2$, where $s_1$ is an expansion of $R_1$ and $s_2$ is an expansion of $R_2$
- $R_1 \cup R_2$ expands into either an expansion of $R_1$ or an expansion of $R_2$
- $R^*$ expands into $\epsilon$, or it expands into $s_1 s_2$, where $s_1$ is an expansion of $R$ and $s_2$ is an expansion of $R^*$
- $\emptyset$ expands into nothing
- $\epsilon$ expands into the empty string
Now, we say that a string is accepted by a regular expression $R$ when there exists some expansion of $R$ that is equal to the string. For example, if we have the regular expression $(a \cup b)^* \circ c$ and we want to match it against the string $abc$, we can expand it as follows: the starred expression expands into an expansion of $a \cup b$ (namely $a$) followed by another expansion of $(a \cup b)^*$ (namely $b$ followed by $\epsilon$), and the concatenation then appends $c$, giving $abc$. Let’s make sure that this notion of “expansion” is computable according to the informal criterion we’ve been using so far. As we can see, expansion only has a finite set of rules, so we’re good on that front, and since we can terminate our expansion whenever we’re out of options or we’ve exceeded the length of the target string, we only need finite data and finite time. This means that testing whether a string is generated by a regular expression is computable.
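The expansion rules can be turned into a terminating matcher more or less verbatim. In this sketch regular expressions are encoded as plain tuples (a hypothetical encoding of our own), and the helper computes which prefixes of the input a regular expression can expand into:

```python
def prefixes(r, s):
    """The set of lengths i such that r has an expansion equal to s[:i]."""
    kind = r[0]
    if kind == "char":
        return {1} if s[:1] == r[1] else set()
    if kind == "eps":
        return {0}
    if kind == "empty":
        return set()  # the empty set expands into nothing
    if kind == "alt":
        # either an expansion of the left side or of the right side
        return prefixes(r[1], s) | prefixes(r[2], s)
    if kind == "cat":
        # an expansion of the left followed by an expansion of the right
        return {i + j
                for i in prefixes(r[1], s)
                for j in prefixes(r[2], s[i:])}
    if kind == "star":
        out = {0}  # the empty expansion is always allowed
        for i in prefixes(r[1], s):
            if i > 0:  # each round must consume input, so we terminate
                out |= {i + j for j in prefixes(r, s[i:])}
        return out

def matches(r, s):
    """r accepts s when some expansion of r equals the whole string."""
    return len(s) in prefixes(r, s)

# (a ∪ b)* ∘ c
r = ("cat", ("star", ("alt", ("char", "a"), ("char", "b"))), ("char", "c"))
```

The `i > 0` guard in the star case is the code-level version of the termination argument above: every pass around the loop consumes at least one character, so the recursion bottoms out once the string runs out.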
So while we can intuitively believe that our definition of regular expressions does, in fact, describe regular languages, we want to actually prove it. In order to prove it, we need to do what we did for the DFA/NFA correspondence: we first show that we can take any regular expression and turn it into an NFA, then go back the other direction and show that we can convert any NFA into a regular expression that decides the same language.
We’ll start, again, with the easy direction: converting a regular expression to an NFA. We’ll define this inductively, that is case by case, over the structure of regular expressions.
- $a$ becomes the NFA that accepts the single character $a$
- $R_1 \circ R_2$ becomes the concatenation of the NFAs for $R_1$ and $R_2$
- $R_1 \cup R_2$ becomes the union of the NFAs for $R_1$ and $R_2$
- $R^*$ becomes the Kleene star of the NFA for $R$
- $\emptyset$ becomes the NFA for the empty set
- $\epsilon$ becomes the NFA for the language that only has the empty string
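The whole case-by-case translation fits in one recursive function. The sketch below uses a tuple encoding for regular expressions and a minimal NFA encoding (start state, accept set, transition dict with `""` for $\epsilon$); both encodings and all names are our own assumptions, and the `cat`, `alt`, and `star` cases are the closure constructions from earlier in the notes.

```python
from itertools import count

_fresh = count()  # generator of fresh, never-clashing state names

def build(r):
    """Recursively turn a tuple-encoded regex into an NFA triple."""
    kind = r[0]
    if kind == "char":
        s, t = next(_fresh), next(_fresh)
        return (s, {t}, {(s, r[1]): {t}})
    if kind == "eps":
        s = next(_fresh)       # a single accepting state, no transitions
        return (s, {s}, {})
    if kind == "empty":
        return (next(_fresh), set(), {})  # no accept states at all
    if kind == "cat":
        s1, a1, d1 = build(r[1])
        s2, a2, d2 = build(r[2])
        d = {**d1, **d2}
        for q in a1:           # wire first machine's accepts into second
            d[(q, "")] = d.get((q, ""), set()) | {s2}
        return (s1, a2, d)
    if kind == "alt":
        s1, a1, d1 = build(r[1])
        s2, a2, d2 = build(r[2])
        s = next(_fresh)       # fresh start with epsilon edges into both
        return (s, a1 | a2, {**d1, **d2, (s, ""): {s1, s2}})
    if kind == "star":
        s1, a1, d1 = build(r[1])
        s = next(_fresh)       # fresh accepting start, looped back to
        d = dict(d1)
        d[(s, "")] = {s1}
        for q in a1:
            d[(q, "")] = d.get((q, ""), set()) | {s}
        return (s, a1 | {s}, d)

def eps_closure(delta, states):
    """All states reachable from `states` by epsilon ("") transitions."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for r2 in delta.get((q, ""), set()):
            if r2 not in seen:
                seen.add(r2)
                stack.append(r2)
    return seen

def accepts(nfa, s):
    """Simulate an NFA (start, accept_states, delta) on the string s."""
    start, accept, delta = nfa
    current = eps_closure(delta, {start})
    for ch in s:
        stepped = set()
        for q in current:
            stepped |= delta.get((q, ch), set())
        current = eps_closure(delta, stepped)
    return bool(current & accept)

# (a ∪ b)* ∘ c, translated to an NFA:
nfa = build(("cat", ("star", ("alt", ("char", "a"), ("char", "b"))), ("char", "c")))
```

Because every case calls `next(_fresh)` for its new states, the sub-machines are automatically disjoint, which is what the concatenation and union constructions quietly rely on.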
So, now for the hard direction, which is converting NFAs to RegExps. The way we’ll do this is with the path NFA $\to$ DFA $\to$ GNFA $\to$ RegExp. Gosh, GNFAs aren’t something we’ve seen yet, are they? Let’s define them. Informally, they are NFAs where we are allowed to have regular expressions as labels rather than simply characters, the idea being that a transition occurs when some prefix of the input string can be “consumed” as an expansion of the regular expression that labels the transition. We follow Sipser in our insistence that all our GNFAs meet the following conditions:
- The start state has transition arrows going to every other state but no incoming arrows
- There is only a single accept state, distinct from the start state, and there is a transition from every other state to it
- Every other state has one transition to every other non-start/non-accept state including itself
Wow, those conditions might feel kinda weird, but they’re meant to make the construction as easy as possible. So the way our construction works is that we can take NFAs to DFAs with the powerset construction we’ve seen earlier, then turn DFAs into GNFAs, and ultimately turn GNFAs into RegExps in a principled way. We’ll cover that and the pumping lemma next time.