# Genetic Programming and Derivatives

So this being a Friday, I figured it’d be a decent day to share some of the weird ideas that occur to me. Today, I wanted to talk about genetic programming, derivatives of datatypes, and how these two ideas might make for a really cute combination.

We’ll start by talking about what genetic programming is. Genetic algorithms in general are essentially a kind of search algorithm: you start with an initial, often randomized, population of possible solutions, test them according to a fitness measure, take a subset of the solutions that do relatively well by that measure, and then perform “recombination” to generate new candidate solutions from the old, continuing until you’ve found a solution that is sufficiently close to the ideal. It sounds a lot like a really simplified notion of natural selection, right? A lot of the time the solutions are, essentially, bitstrings, and recombination, often called crossover, is just swapping pieces between two bitstrings. I’ll be honest, I know very little about the general usage domain of genetic algorithms because it’s not my field, but it’s an idea I first came across when I was a frustrated physicist reading books on math and CS, and I thought it rather fascinating. I then read Koza’s book on genetic programming, which seemed like a really neat extension of the earlier genetic algorithm techniques.
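To make the bitstring version concrete, here’s a minimal sketch of single-point crossover in Python (the function name and the 0/1-list representation are my own, not taken from any particular GA library):

```python
import random

def crossover(a, b, point=None):
    """Single-point crossover: cut two equal-length bitstrings at
    the same position and swap their tails."""
    if point is None:
        # In an actual GA run the cut point is drawn at random.
        point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

# Cutting after position 2 swaps the back halves:
print(crossover([1, 1, 1, 1], [0, 0, 0, 0], point=2))
# ([1, 1, 0, 0], [0, 0, 1, 1])
```

A full GA would wrap this in a loop of fitness evaluation, selection, crossover, and mutation, but the recombination step itself really is this small.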

Genetic programming is where one evolves not just a simple data structure but the syntax tree of the solution algorithm itself. In general you’re not using the entirety of your programming language for this, but a subset of functions and constants that you’ve decided are relevant to your solution, and you evolve the syntax trees, with the fitness function executing each candidate program and comparing its behavior against how you want it to behave. Koza’s book was written using Common Lisp, and since I was first learning CL at the time I devoured this book, thought it was a fantastic example of code-as-data, and loved it. I’ve read criticisms of his work arguing that it isn’t really that different from other genetic algorithm techniques, and that its usefulness comes not from evolving programs per se but from the syntax trees themselves: the richer shape allows for more interesting possibilities in recombination, and that is what makes the difference over the linear data structures of classic genetic algorithms. Again, this isn’t my field, so I honestly don’t know what the current state of the art in GA/GP is, how to evaluate claims of effectiveness, or what problem domains these algorithms are good in.
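To illustrate, here’s a hypothetical primitive set for a symbolic-regression-style GP run, sketched in Python with nested tuples standing in for Lisp’s s-expressions (the names and representation are mine, not Koza’s):

```python
import operator

# A small, hand-picked primitive set: each evolved individual is a
# nested tuple acting as an s-expression over these operators.
PRIMITIVES = {"+": operator.add, "*": operator.mul}

def evaluate(expr, x):
    """Run a candidate program at input x."""
    if expr == "x":
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    return PRIMITIVES[op](evaluate(left, x), evaluate(right, x))

def fitness(expr, samples):
    """Sum of squared error against (input, target) pairs."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in samples)

# The candidate x*x + 1, scored on samples drawn from that same function:
candidate = ("+", ("*", "x", "x"), 1)
print(fitness(candidate, [(0, 1), (1, 2), (2, 5)]))  # 0
```

Because each individual is just data, scoring it means running it: fitness literally is “execute the program and compare against the desired behavior.”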

On the other hand, a few years ago I started reading about zippers and derivatives of data types. A zipper for a data type is a data structure that allows you to traverse and act on the data in a principled way. Essentially, a zipper is a list whose elements tell you the “path” you’ve taken through the data structure and the “context” at each point that’s needed to reconstruct the original structure, paired with the substructure you’re currently examining. So, for example, if we have a binary tree (ignoring values in nodes for now), then our zipper would be a list of pairs telling us whether we went left or right along with the subtree for the direction we didn’t take, paired with the subtree we’re currently examining. A cute explanation of zippers in Haskell can be found here. Now, where do “derivatives” come in? Well, there’s a notion of taking derivatives of data types that I believe comes from Conor McBride. If we convert our data type into “polynomial” form, this derivative follows all the normal rules we’d expect. The point, though, is that the derivative of the data type is exactly the data structure we need in the zipper. For example, our binary tree is the fixed point of the functor $B(X) = 1 + X^2$, so the derivative is $2 \cdot X$, which we can interpret as the product of a two-point type and an $X$: each element of the zipper is a pair of a “direction” telling us whether we went left or right and the subtree we didn’t take. This is exactly the result we want!
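Here’s a small Python sketch of that zipper, with trees written as nested tuples (the representation and names are my own): a context step is exactly the derivative $2 \cdot X$, a direction tag (the two-point type) paired with the subtree we didn’t take (the $X$).

```python
# Trees are "leaf" or ("node", left, right): the fixed point of
# B(X) = 1 + X^2. A zipper is a focused subtree plus a context list.

def go_left(focus, context):
    """Descend left, recording the right subtree we didn't take."""
    _, left, right = focus
    return left, [("went_left", right)] + context

def go_right(focus, context):
    """Descend right, recording the left subtree we didn't take."""
    _, left, right = focus
    return right, [("went_right", left)] + context

def rezip(focus, context):
    """Walk back up the path, rebuilding the original tree."""
    for direction, other in context:
        if direction == "went_left":
            focus = ("node", focus, other)
        else:
            focus = ("node", other, focus)
    return focus

t = ("node", ("node", "leaf", "leaf"), "leaf")
f, ctx = go_left(t, [])
print(rezip(f, ctx) == t)  # True
```

The two context constructors `went_left` and `went_right` are the “2” in $2 \cdot X$; the carried subtree is the “$X$”.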

The idea I had a few years ago but never actually developed was that we could combine these ideas into a very generic system for “genetic programming” over a variety of data structures, using zippers to handle recombination in a completely generic way: to do crossover, just perform a random traversal through each of the two structures you’re recombining, so that for each you have a context list paired with an in-focus substructure, then swap the in-focus substructures and “rezip”. It’s honestly pretty simple, but the last time I checked no one had actually written a paper on how to do it. If this is something that anyone would have an interest in seeing, I’d probably finally have the motivation to write the whole thing up and perhaps do some analysis of the differences based on different tree structures.
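A minimal sketch of that recombination scheme in Python, with leaf-labelled trees as nested tuples so the swap is visible, and with explicit direction lists standing in for the random traversals (plumbing in an actual RNG is omitted; all names are hypothetical):

```python
def walk(tree, path):
    """Descend along a list of 'l'/'r' directions, recording the
    derivative-shaped context at each step."""
    focus, context = tree, []
    for d in path:
        if focus[0] != "node":
            break  # ran out of tree; stop at the leaf
        _, left, right = focus
        if d == "l":
            context.insert(0, ("went_left", right))
            focus = left
        else:
            context.insert(0, ("went_right", left))
            focus = right
    return focus, context

def rezip(focus, context):
    """Walk back up the context, rebuilding a complete tree."""
    for direction, other in context:
        focus = ("node", focus, other) if direction == "went_left" \
            else ("node", other, focus)
    return focus

def tree_crossover(t, path_t, u, path_u):
    """Unzip both trees, swap the in-focus subtrees, rezip both."""
    ft, ct = walk(t, path_t)
    fu, cu = walk(u, path_u)
    return rezip(fu, ct), rezip(ft, cu)

t = ("node", ("leaf", 1), ("leaf", 2))
u = ("node", ("leaf", 3), ("leaf", 4))
print(tree_crossover(t, ["l"], u, ["r"]))
# (('node', ('leaf', 4), ('leaf', 2)), ('node', ('leaf', 3), ('leaf', 1)))
```

Nothing here depends on binary trees specifically: any polynomial data type gives you the same walk/rezip pair via its derivative, which is what would make the scheme generic.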

I haven’t yet, because I really have no idea how interesting anyone would find such a thing, and I’ve never had the nerve to annoy someone who works in evolutionary algorithms to find out if it’d be a useful contribution.