(there are some pictures that I’ll eventually insert into these notes once I decide on what tool I want to stick with for drawing out my automata)

Today we begin the class in earnest and come back to our initial class of problems: “does the string $w$ belong to the language $L$?”

We start with a very *simple* class of languages, defined by a very *simple* class of machines called deterministic finite automata (DFA). Pictorially, a DFA is very simple: it’s a graph where one node is designated as the *start state*, zero or more nodes are designated as the *accept states*, and there is exactly one edge leaving each node for each letter of the alphabet.

As an example, consider the following DFA, which accepts $(00)^*$, the strings of 0s of even length: (insert DFA for (00)*)

How do we *execute* a DFA, though? Being very informal, we say that a string $w$ is accepted by a DFA when there is a path from the start state to an accept state whose labeled transitions “spell out” $w$.

As a useful example, trace out how the DFA above computes on the strings “000000” and “000”. You should find that you end in an accept state for “000000” but not “000”.
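As a sketch of that trace in code (the state names “even” and “odd” are my own invention, not from the notes), here is a two-state DFA for $(00)^*$ that records the sequence of states it visits:

```python
# A DFA for (00)*: strings of 0s of even length.
# States: "even" (start, accepting) and "odd".
transitions = {
    ("even", "0"): "odd",
    ("odd", "0"): "even",
}

def trace(string):
    """Return the sequence of states the DFA visits on the input."""
    state = "even"
    states = [state]
    for ch in string:
        state = transitions[(state, ch)]
        states.append(state)
    return states

def accepts(string):
    """Accept iff the trace ends in an accepting state."""
    return trace(string)[-1] == "even"

print(trace("000000"))  # ends in "even": accepted
print(trace("000"))     # ends in "odd": rejected
```

Running `trace` on both inputs makes the difference visible: the six-0 string bounces back to “even” while the three-0 string is stranded in “odd”.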

Now, in a more formal sense a DFA is a tuple $(Q, \Sigma, \delta, q_0, F)$ where

- $Q$ is the finite set of states.
- $\Sigma$ is the alphabet, which you might recall from last time means that it must be finite.
- $\delta : Q \times \Sigma \to Q$ is the transition function that defines what the machine does when it receives an input character.
- $q_0 \in Q$ is the start state of the automaton.
- $F \subseteq Q$ is the set of accepting states.

In this more formal description, what does it mean for a string to be accepted by a DFA? A string $w = w_1 w_2 \cdots w_n$ of length $n$ is accepted by a DFA when there is a sequence of states $r_0, r_1, \ldots, r_n$ such that

$$r_0 = q_0, \qquad r_{i+1} = \delta(r_i, w_{i+1}) \text{ for } 0 \le i < n, \qquad r_n \in F$$

which, in words, says that there’s a sequence of states the DFA follows when processing the string and that it ends in an accepting state. Now we can look at this description of deciding whether or not to accept a string and see that it is ultimately a computable process in the sense of the last lecture: there is finite data in the form of the finite states of the DFA, there are finite rules in the form of the transition function $\delta$, and finding the sequence of states that $\delta$ generates on the input takes a finite number of steps when the input is finite. Thus, we can say that a DFA *decides* the problem “does the string $w$ belong to the language $L$?” for some language $L$, where by “decides” I mean that it always finishes in finite time and gives a “yes” or a “no” answer: a string is accepted or rejected in finite time.
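The formal definition can be read directly as a checker: given a candidate state sequence $r_0, \ldots, r_n$, verify the three conditions. A sketch, using a two-state DFA for $(00)^*$ with state names of my own choosing:

```python
def is_accepting_run(states, w, delta, q0, F):
    """Check that `states` = r_0, ..., r_n witnesses acceptance of w."""
    n = len(w)
    if len(states) != n + 1:
        return False
    if states[0] != q0:                          # r_0 is the start state
        return False
    for i in range(n):                           # r_{i+1} = delta(r_i, w_{i+1})
        if states[i + 1] != delta[(states[i], w[i])]:
            return False
    return states[-1] in F                       # r_n is accepting

# A DFA for (00)*: "even" is the start and only accepting state.
delta = {("even", "0"): "odd", ("odd", "0"): "even"}
print(is_accepting_run(["even", "odd", "even"], "00", delta, "even", {"even"}))  # True
```

Because $\delta$ is a function, the witnessing sequence is uniquely determined by the input, which is exactly why the decision procedure is so mechanical.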

Now, what kinds of languages can be defined with such simple machines? Clearly, any *finite* language can, since we can simply create a unique path through the DFA for each string in the language; because the language contains only finitely many strings, each of finite length, only a finite number of states are needed to construct this automaton. However, a notion of computation that can *only* handle finite languages isn’t particularly interesting. After all, we know those are computable by lookup table! We’ll prove, most likely in the next lecture, that DFAs describe the “regular languages” which, as you might guess, are the languages that regular expressions define.
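The lookup-table observation is worth making concrete: deciding a finite language is literally one set-membership test. A sketch, with a hypothetical two-string language of my own invention:

```python
# Any finite language is decidable by a lookup table.
# A made-up language containing exactly two strings:
LANGUAGE = {"ab", "ba"}

def decide(w):
    # Finite data, finite rules, finite time: one membership check.
    return w in LANGUAGE

print(decide("ab"))   # True
print(decide("aa"))   # False
```

The corresponding DFA is just a tree of states spelling out each string in the table, plus a dead state for everything else.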

Let’s consider, instead, what the DFAs for a few simple languages look like.

(insert images later)

Building DFAs for a language is mostly a matter of patience and experience. You learn the patterns for how to do them and get better at seeing whether a DFA correctly accepts the right language. The *act* of building DFAs isn’t particularly interesting, so we won’t spend that much time on it per se.

As an interesting exercise, though, let’s try building a DFA for the language $\{0^n 1^n \mid n \ge 0\}$. Can we do it? Does anything seem strange about it? There’s no obvious way to construct a DFA for this language, but does that tell us that there is *no* way to construct such a DFA? No, it doesn’t. Instead, in a couple of lectures we’ll come to the issue of how one proves a language is *not* regular.

Another thing that I think is interesting to note is that for each regular language, there isn’t necessarily only one DFA that accepts it. For example, there are an *infinite* number of DFAs that describe the empty language, and likewise for each of the example languages we gave above. For the more mathematically inclined, the relationship between “regular languages” and “DFAs” isn’t so much an isomorphism as it is an example of an “adjoint equivalence”. This is the start of a pattern we’ll see for the rest of the course: there isn’t a one-to-one relationship between the machines that answer the question “does the string $w$ belong to the language $L$?” and the class of languages they define.

Now I want to talk about the idea of closure of languages under operations. First we should define what “closure” means. For example, you can add any two integers and get another integer: the integers are closed under addition. On the other hand, if you divide, say, $1$ by $2$, you do not get an integer: the integers aren’t closed under division. A set is closed under an operation when you cannot “escape” the set using the operation. So, we assert that the regular languages are closed under union and intersection. Let us define what these operations are, first:

$$A \cup B = \{w \mid w \in A \text{ or } w \in B\} \qquad A \cap B = \{w \mid w \in A \text{ and } w \in B\}$$

In words, $A \cup B$ is the language made up of strings in $A$ *or* in $B$, and $A \cap B$ is the language made up of strings in both $A$ *and* $B$. I’ve claimed that the regular languages are closed under these operations. How would we show this? Well, we’ve defined the regular languages as those decided by a DFA. This means that if we want to show that the regular languages are closed under these operations, then we can do so by taking two DFAs $M_1$ and $M_2$ that decide $A$ and $B$ and then constructing new DFAs $M_\cup$ and $M_\cap$ that decide the union and intersection respectively.

Let’s go through somewhat systematically how this construction will work, though we’ll elide a proper proof that these constructions are *correct* and instead point you to the book.

Let $M_1 = (Q_1, \Sigma, \delta_1, q_1, F_1)$ and $M_2 = (Q_2, \Sigma, \delta_2, q_2, F_2)$, and our goal is to construct $M_\cup$ and $M_\cap$. We’ll construct $M_\cup$ first and then describe how to change it into the $M_\cap$ version.

The basic idea is that we want to simulate running *both* $M_1$ and $M_2$ at once on the input, using our states to keep track of where we are in both DFAs. Then our transition function will operate by stepping us forward in our pairs of states. We can accept whenever *either* $M_1$ or $M_2$ is in an accepting state. This gives us enough pieces to write out the DFA as a formal tuple. We note, first, that our alphabet is the same this entire time through, so we do not repeat it:

$$M_\cup = (Q_1 \times Q_2, \Sigma, \delta, (q_1, q_2), F)$$

where $\delta((r_1, r_2), a) = (\delta_1(r_1, a), \delta_2(r_2, a))$ and $F = \{(r_1, r_2) \mid r_1 \in F_1 \text{ or } r_2 \in F_2\}$.

Alright, hopefully it’s clear that this really follows through on the “simulation” plan we explained above. What’s nice is that the intersection comes from just changing the “or” in the definition of the accepting states to an “and”. Again, we skip over the details of showing that a string is in the union of $A$ and $B$ iff it is accepted by $M_\cup$. The basic idea, though, is that if a string is in the union then it must be in at least one of the languages, so the simulation will end in an accepting state, and vice versa.
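The product construction can be sketched directly in code; the helper names and the two toy input DFAs are my own, chosen only to illustrate the pair-of-states idea:

```python
from itertools import product

def product_dfa(dfa1, dfa2, combine):
    """Build the product DFA; `combine` is "or" for union, "and" for intersection."""
    Q1, Sigma, d1, q1, F1 = dfa1
    Q2, _,     d2, q2, F2 = dfa2
    Q = set(product(Q1, Q2))
    # Step both simulations forward at once on each letter.
    delta = {((r1, r2), a): (d1[(r1, a)], d2[(r2, a)])
             for (r1, r2) in Q for a in Sigma}
    F = {(r1, r2) for (r1, r2) in Q if combine(r1 in F1, r2 in F2)}
    return (Q, Sigma, delta, (q1, q2), F)

def accepts(dfa, w):
    Q, Sigma, delta, q0, F = dfa
    r = q0
    for a in w:
        r = delta[(r, a)]
    return r in F

# Toy DFAs over {0}: length even, and length divisible by 3.
even = ({"e", "o"}, {"0"}, {("e", "0"): "o", ("o", "0"): "e"}, "e", {"e"})
div3 = ({0, 1, 2}, {"0"}, {(0, "0"): 1, (1, "0"): 2, (2, "0"): 0}, 0, {0})

union = product_dfa(even, div3, lambda a, b: a or b)
inter = product_dfa(even, div3, lambda a, b: a and b)
print(accepts(union, "0000"))  # length 4 is even, so in the union
print(accepts(inter, "0000"))  # 4 is not divisible by 3, so not in the intersection
```

Note that the intersection really is a one-line change: swapping the `or` lambda for the `and` lambda is exactly the swap in the definition of the accepting states.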

Of course, this wasn’t the cleanest construction. Ideally for the union, we’d like to be able to say something like “try or and if one of them works, accept”. We can’t do that with DFAs as we’ve defined them, but next time we’ll tinker with our definition of a DFA to get a definition of non-deterministic finite automata (NFA) that still decides the regular languages. We’ll do some more closure properties, prove that NFAs and DFAs decide the same set of languages, and perhaps work with regular expressions.
