Continuing from last time, we have that the “machines” that correspond to CFGs are PDAs, or pushdown automata. Informally, our machines will be finite automata with a limited notion of memory: a stack. In our transitions we’ll be allowed to look not only at the input character when making our decision but also at the top of the stack. When we make a transition, we pop a symbol from the stack and then look at both the next character of the input stream and the character we just popped. Note that there’s no reason why the input stream and the stack have to share an alphabet, so in our definition of push down automata we’ll allow them to be different. After we make our transition, we will optionally push another character onto the top of the stack. A PDA accepts a string when we reach an accept state at the end of processing the string. This is still computable by the informal definition we’ve been using: since you can only look at the top character of the stack, the number of rules is going to be, roughly, the product of the number of states, the size of the input alphabet, and the size of the stack alphabet. We can clearly do this in finite time for the same reasons that our NFAs and DFAs were finite, and we only need a finite amount of data for storing the stack and the state machine. So this is also a nice computable definition.

One thing we should address: should our PDA be deterministic or non-deterministic? If we think about our goal, which is to have a kind of machine that represents context free languages and has the same power as context free grammars, are context free grammars *inherently* deterministic or non-deterministic? Let’s consider a grammar such as

and let’s consider the string . How many ways are there to expand the start variable, , into this string? Just at first blush, I believe there are four different ways. If there’s ambiguity in how we generate strings, *how* do we pick? Non-deterministically! Context free grammars are naturally non-deterministic. Now, you might wonder if for every CFG there exists a *deterministic* CFG that also describes the same language, and thus whether the non-determinism is even necessary. It turns out that, indeed, CFGs and deterministic CFGs are *not* equivalent. I don’t actually know a cute way to demonstrate this, but if I end up finding one I’ll share it with the class. (Also, that’s a hint to anyone reading this that if they know a cute demonstration that I’m overlooking then please share!)

We’ll include the formal definition as a tuple just like we did with NFAs/DFAs. It consists of

- $Q$, a finite set of states
- $\Sigma$, the input alphabet
- $\Gamma$, the *stack* alphabet
- $\delta : Q \times \Sigma_\varepsilon \times \Gamma_\varepsilon \to \mathcal{P}(Q \times \Gamma_\varepsilon)$, the transition function
- $q_0 \in Q$, which is the start state
- $F \subseteq Q$, which is the set of accept states
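To make the tuple concrete, here’s a minimal sketch of how the components might be written down in Python. The particular states, alphabets, transitions, and the choice to encode $\varepsilon$ as the empty string are all my own, just for illustration:

```python
# A sketch of a PDA as the 6-tuple (Q, Sigma, Gamma, delta, q0, F).
# Epsilon is encoded as the empty string for both alphabets.
EPS = ""

Q = {"q0", "q1"}          # a finite set of states
Sigma = {"0", "1"}        # the input alphabet
Gamma = {"0", "$"}        # the stack alphabet

# delta: (state, read char or EPS, popped char or EPS) -> set of
# (next state, pushed char or EPS).  The *set* on the right-hand
# side is where the non-determinism lives.
delta = {
    ("q0", "0", EPS): {("q0", "0")},  # read a 0, push a 0
    ("q0", EPS, EPS): {("q1", EPS)},  # silently change state
    ("q1", "1", "0"): {("q1", EPS)},  # read a 1, pop a 0
}

q0 = "q0"                 # the start state
F = {"q1"}                # the set of accept states
```

Any key missing from the dictionary just means the transition function returns the empty set there.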

Now, let’s talk about what all of this actually means. We have a state machine much like what we had with NFAs. It’s non-deterministic, as we can see from the presence of the power set in the type of $\delta$, and we have two different alphabets now, just as we discussed above. Note, though, that the powerset isn’t just over the set of states this time but over the product $Q \times \Gamma_\varepsilon$. That’s because the choices we have aren’t just in terms of which state to go to next, but also in terms of what to do with the stack. Continuing, we interpret $\Gamma_\varepsilon$ on the left hand side of the arrow in the type of $\delta$ to mean that we’re popping a character from the stack, if the stack is non-empty, and looking at it in order to make our decision. If the stack is empty, then we get an $\varepsilon$ instead of an element of $\Gamma$. $\Gamma_\varepsilon$ means something slightly different on the right-hand side of the arrow, because that’s what we’re going to be *pushing onto* the stack. In this case, we’re either pushing a character from $\Gamma$ onto the stack or we’re optionally pushing *nothing* onto the stack, in which case we’re pushing $\varepsilon$.

You might wonder, since we’ve been trying to keep our informal notion of computation intact so far, whether there are any limits to the size of the stack. The answer will be “no”: since we either accept or reject a string after a finite number of steps, we know we’ll only ever use a finite amount of stack. We could, in a sense, just assume that there’s some size limit to the stack that’s hidden from us, and that behind the scenes for every input the PDA gets configured with a stack larger than we could possibly need for an input of that length. That’s a completely valid interpretation of things, mechanically, but mathematically let’s just assume that there are no hard limits on the size of the stack and get comfortable with the fact that we only use a finite amount of it if we take a finite number of steps.

We still need to define, formally, what it means for a string to be accepted by a PDA though. First, we define what the state of the stack is at all times by defining what it is after a step of computation.

- If our stack is $a s'$, where $a \in \Gamma$, and we take a transition that pops $a$ and pushes $b$, where $b \in \Gamma_\varepsilon$, then our new stack is $b s'$. Note that we’re representing the stack as, essentially, a string here and reusing the machinery of string concatenation to describe this. We could also introduce a list data structure, but Sipser just uses strings to represent stacks, where the leftmost character of the string is the top of the stack, and represents the empty stack as $\varepsilon$. I don’t entirely agree with reusing strings for this, but can appreciate the economy of introducing as little machinery as possible.
- If our stack is $\varepsilon$, i.e. empty, and we take a transition that pops nothing and pushes $b \in \Gamma_\varepsilon$, then our new stack is $b$.

Thus, as long as we have an initial definition of the state of the stack, we can understand what the sequence of stack states is as the computation progresses.
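The two cases above can be captured in a few lines of Python, reusing Sipser’s strings-as-stacks convention (leftmost character on top, empty string for the empty stack); the function name is my own:

```python
def step_stack(stack: str, popped: str, pushed: str) -> str:
    """Apply one transition's stack effect.

    `stack` is a string whose leftmost character is the top.
    `popped` is the character the transition pops ("" for epsilon),
    `pushed` is the character it pushes ("" for epsilon).
    """
    assert stack.startswith(popped)   # we can only pop what's on top
    return pushed + stack[len(popped):]
```

So, for instance, popping a `"0"` off the stack `"0$"` while pushing nothing leaves `"$"`, and pushing `"$"` onto the empty stack gives `"$"`.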

We say that a PDA accepts a string $w$ when $w = w_1 w_2 \cdots w_n$, where each $w_i \in \Sigma_\varepsilon$, and there exists a sequence of states $r_0, r_1, \ldots, r_n$ and stack states $s_0, s_1, \ldots, s_n$ such that

- $r_0 = q_0$ and $s_0 = \varepsilon$, i.e. we start in the start state and the stack is initially empty
- $(r_{i+1}, b) \in \delta(r_i, w_{i+1}, a)$ where $s_i = a t$ and $s_{i+1} = b t$ for some $a, b \in \Gamma_\varepsilon$ and $t \in \Gamma^*$, i.e. each step follows the transition function and updates the stack accordingly
- $r_n \in F$, i.e. we end in an accept state.
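This acceptance condition translates pretty directly into a search over configurations $(r_i, s_i, i)$. Here’s a hedged sketch of a non-deterministic PDA simulator in Python; the epsilon-as-empty-string encoding, the step budget guarding against epsilon loops, and the example machine’s state names are all my own choices:

```python
from collections import deque

EPS = ""  # epsilon, encoded as the empty string, for both alphabets

def accepts(delta, start, final, w, max_steps=100_000):
    """Breadth-first search over PDA configurations (state, stack, position).

    `delta` maps (state, read char or EPS, popped char or EPS) to a set
    of (next state, pushed char or EPS) pairs.  Stacks are strings with
    the leftmost character on top, as in Sipser.  The step budget is a
    crude guard against epsilon loops; fine for a sketch.
    """
    frontier = deque([(start, "", 0)])
    seen = set()
    for _ in range(max_steps):
        if not frontier:
            return False
        state, stack, i = frontier.popleft()
        if (state, stack, i) in seen:
            continue
        seen.add((state, stack, i))
        if i == len(w) and state in final:
            return True
        for read in [EPS] + ([w[i]] if i < len(w) else []):
            for pop in [EPS] + ([stack[0]] if stack else []):
                for nxt, push in delta.get((state, read, pop), ()):
                    frontier.append((nxt, push + stack[len(pop):], i + len(read)))
    return False

# An example machine for { 0^n 1^n : n >= 0 }, with made-up state
# names; "$" marks the bottom of the stack so we can tell it's empty.
delta_0n1n = {
    ("start", EPS, EPS): {("push", "$")},
    ("push", "0", EPS): {("push", "0")},
    ("push", "1", "0"): {("pop", EPS)},
    ("pop", "1", "0"): {("pop", EPS)},
    ("pop", EPS, "$"): {("done", EPS)},
}
```

With the start state also made accepting (so the empty string gets in), `accepts(delta_0n1n, "start", {"start", "done"}, "0011")` comes back `True` while `"001"` does not.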

Now that we’ve done all of that we can go ahead and start working out examples of PDAs and show that, indeed, they can handle the kinds of CFLs we want them to. We’ll label the transitions with something slightly more complicated than before, and all our labels will be of the form “(a, b) -> c” where “a” is the character we read from input, “b” is the character we pop off the stack, and “c” is the character we push onto the stack. (The reason those are formatted in ugly ascii rather than latex code is that I’m still not sure how to get latex accepted by the tool I’m using to make the inline graphs.)

In any case, let’s consider what the PDA looks like for the language $\{0^n 1^n \mid n \geq 0\}$, our old friend. The basic idea is that we’re going to use the stack to track how many 0s we see before we start accepting 1s, pushing a 0 onto the stack per 0 we see in the input stream. We then pop off a 0 for each 1 we see, and finally we make sure that the whole stack is empty before accepting at the end of input. Wait, shoot, how do we check that the stack is empty? We do that by pushing a special “start symbol” onto the stack during our first transition, and then by having the transition to the accept state happen only by popping the start symbol back off the stack. One last notational thing: we’ll use e for $\varepsilon$. Without further ado,
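To connect the “(a, b) -> c” labels to something manipulable, here’s the edge list I’d write down for this machine in Python; the state names are made up, “e” stands for $\varepsilon$, and “$” is the special start symbol:

```python
# Edges of the {0^n 1^n} PDA in the "(a, b) -> c" label style:
# read a from input, pop b off the stack, push c onto the stack.
transitions = [
    ("q_start", "(e, e) -> $", "q_push"),    # push the start symbol
    ("q_push",  "(0, e) -> 0", "q_push"),    # push a 0 per 0 read
    ("q_push",  "(1, 0) -> e", "q_pop"),     # first 1: start popping
    ("q_pop",   "(1, 0) -> e", "q_pop"),     # pop a 0 per 1 read
    ("q_pop",   "(e, $) -> e", "q_accept"),  # stack empty: accept
]
```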

We can see how this graph implements the algorithm we just saw. Now, what about the palindromes? Let’s remember that our CFG for the palindromes was

$$S \to 0S0 \mid 1S1 \mid 0 \mid 1 \mid \varepsilon$$

Well, what we want here is to use the memory of the PDA to keep track of all the characters we saw up until we start accepting the “other half” of the string. Of course, how can you tell *when* you’ve “seen half” of the string? That’s where non-determinism comes in incredibly handy, because we can just make that switch whenever we want. Now, keep in mind, though, that there are those two productions that we need to get the odd palindromes as well,

$$S \to 0 \qquad S \to 1$$

because they’ll mean that when we make the switch from “first half” to “second half” we’ll need to consume an $\varepsilon$ *or* a 1 *or* a 0. Let’s just see what this looks like

We can see that this follows a very similar structure to the language of matched 0s and 1s, and if we trace out the execution on an example string it should look something like the following, where we represent the computation as a triple $(w, s, q)$: $w$ will be what’s left of the string to process, $s$ will be the state of the stack, and $q$ is the state we’re in. So we start out in $(w, \varepsilon, q_0)$ and the correctly terminating trace of the execution becomes

- accept

Neat, huh?
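If you want to check traces like that mechanically, the guess-the-middle trick can be sketched as a little recursive search in Python. This is my own simplification of the machine, not a transcription of the graph: the “push” mode is the first-half state, the “match” mode is the second-half state, and the three non-deterministic ways of switching correspond to the $\varepsilon$, 0, and 1 productions:

```python
def pda_accepts_palindrome(w):
    """Explore every non-deterministic run of the guess-the-middle
    palindrome PDA.  `stack` is a string with the top on the left;
    `mode` is "push" before the guessed middle, "match" after it."""
    def run(i, stack, mode):
        if mode == "push":
            # Choice 1: keep pushing input characters onto the stack.
            if i < len(w) and run(i + 1, w[i] + stack, "push"):
                return True
            # Choice 2: guess the middle on epsilon (even palindrome).
            if run(i, stack, "match"):
                return True
            # Choice 3: consume one middle character (odd palindrome).
            if i < len(w) and run(i + 1, stack, "match"):
                return True
            return False
        else:  # "match": pop and compare until input and stack run out
            if i == len(w):
                return stack == ""
            if stack and w[i] == stack[0]:
                return run(i + 1, stack[1:], "match")
            return False
    return run(0, "", "push")
```

The recursion depth grows with the input, so this is only for tracing small examples, but it accepts exactly the strings of the form $x \cdot x^R$ or $x \cdot c \cdot x^R$.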

We’ll leave this lecture here and pick up next post with a sketch of the equivalence of PDAs and CFGs and a bit on the context free pumping lemma.