On Free Will and AI: A Rant and/or Rave

Free will. I know that its existence and nature have been debated to death, but given the prevalence of pop-science articles about AI and whether evil super-intelligences will spell the doom of humanity, it’s a topic I want to ramble about for a bit.

First off, we need to define what we mean by “free will”. I think the easiest definition is that to have free will means to have the ability to make a deliberate choice between alternatives. If you haven’t spent many insomnia-laden nights worrying about this topic, you might be wondering, “okay, that sounds good, why is that controversial?”. Fundamentally, the issue is that physics as we understand it doesn’t leave a lot of room for “choice”. In the classical, Newtonian view of things, the motion of every particle is deterministic. Once you have the initial conditions, you can turn the crank on the equations of motion and get the position and momentum of every particle in the universe at any point in the future. Thus in classical mechanics there can’t be any ability to make a choice, since everything that happens follows exactly from everything that happened before it.
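If it helps to see that “turn the crank” claim concretely, here is a minimal sketch (my own toy example, not anything from a physics package) of integrating a one-dimensional spring: run it twice from the same initial conditions and you get exactly the same trajectory.

    # Toy illustration of classical determinism: integrate x'' = -x
    # (a unit-mass spring) with simple Euler steps. Identical initial
    # conditions always produce the identical trajectory.
    def trajectory(x0, v0, dt=0.01, steps=1000):
        x, v = x0, v0
        history = []
        for _ in range(steps):
            a = -x          # equation of motion
            x += v * dt     # turn the crank, one step at a time
            v += a * dt
            history.append(x)
        return history

    # Two runs from the same starting point agree exactly:
    assert trajectory(1.0, 0.0) == trajectory(1.0, 0.0)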

Now, folks sometimes try to dig into quantum mechanics, hoping that it will save us and provide a clue as to how consciousness and choice work. The problem there is that probabilistic physics doesn’t allow choice any more than deterministic physics does: swapping determinism for randomness still provides no reflective mechanism by which a conscious entity can deliberate and choose.

Do we then conclude that free will is an illusion? I don’t think so. I believe we have free will; I think we have the ability to make choices, in part because a world where we

  • seem to make choices
  • can debate our ability to make choices
  • are able to think we make choices
  • but don’t actually make choices

seems very, very unlikely. Of course, if I’m wrong then I don’t suppose I have any choice in my beliefs anyway (1). I’m not even necessarily appealing to a supernatural cause for free will. My best guess is that there’s something deeply strange about the universe that we have yet to discover. But who really knows? I certainly don’t. For the rest of this piece, though, I will be assuming that free will somehow exists.

Where does this fit in with AI, though? It’s a long-standing anxiety in science fiction that machines will become sapient just like us and rise up to destroy us, and it’s an anxiety that’s finding its way into tech journalism as AI becomes more and more prevalent and powerful. But is it even possible? I’d argue that we don’t know whether a machine can ever be conscious. If my assumption is correct and we do have free will, then I imagine we’ll need to understand how this choice works physically before we can make a machine that uses it. There are a few potential showstoppers here. We might never find a physical mechanism for free will. Even if we find one, it’s not obvious to me that it could easily be harnessed in a computer-as-we-know-it. As much as I’d love for them to exist, I don’t see sentient machines happening in the near future.

We can’t talk about thinking machines without discussing the Turing test. The Turing test doesn’t make any reference to free will or choice as a requirement for being a thinking machine, just the ability to fool a person, after a five-minute conversation, into thinking they’re speaking with a human 70% of the time. I think Turing’s test isn’t necessarily helpful given our current computational power. What I mean is that I agree with the extensional point of view: if a machine isn’t distinguishable in behavior from a person, then there’s no useful distinction between the machine and us, and it’s completely reasonable to call the machine “thinking”. But I also believe that Turing didn’t realize how quickly our computational power and storage sizes would grow. Chatbots have become fairly commonplace at this point, but the facade is rather thin. Even if a program is capable—with enough AIML rules, processing power, and clever heuristics—of convincing a person in the short term that they are speaking with another human being, that doesn’t tell us anything about its ability to make choices.
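To make “the facade is rather thin” concrete, here is a minimal sketch of the kind of surface-pattern matching an AIML-style chatbot does (a hypothetical toy, not any particular chatbot): patterns in, canned replies out, with no deliberation anywhere in the loop.

    import random
    import re

    # A tiny ELIZA-style sketch (hypothetical): surface patterns mapped to
    # canned responses. Nothing here models meaning, let alone makes a choice.
    RULES = [
        (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"\bmy (\w+)\b", ["Tell me more about your {0}."]),
        (r".*", ["I see.", "Please go on.", "Interesting. Continue."]),
    ]

    def reply(message):
        text = message.lower()
        for pattern, responses in RULES:
            match = re.search(pattern, text)
            if match:
                return random.choice(responses).format(*match.groups())

    print(reply("I feel uneasy about free will"))
    # e.g. "Why do you feel uneasy about free will?"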

If our AI cannot make deliberate decisions, should we then be concerned with AI-as-we-know-it running out of control? I don’t think so, at least not any more concerned than we should be about any large and important software system. This may seem like a facile point, but without choice there’s no room for an AI to surprise us if we’ve properly proved properties of its behavior. If anything, I think this is a strong argument that we need to invest further in careful program construction and in better theorem proving and formal methods technologies, so that we no longer have to pay an overhead of many person-years to analyze a large system.
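As a toy illustration of what “properly proved properties of its behavior” could look like (a hypothetical sketch; real formal methods use model checkers and theorem provers rather than brute force), consider a tiny controller whose safety property we can establish by simply enumerating every possible input:

    from itertools import product

    ACTIONS = {"idle", "run", "halt"}
    FORBIDDEN = {"harm_humans"}  # the conclusion we must never reach

    # A toy factory controller: its action is a pure function of two sensor bits.
    def controller(temperature_high, door_open):
        if door_open:
            return "halt"
        if temperature_high:
            return "idle"
        return "run"

    # "Verification" by exhaustive enumeration: because the input space is
    # finite and tiny, checking every case is a proof, not merely a test.
    def verify():
        for temperature_high, door_open in product([False, True], repeat=2):
            action = controller(temperature_high, door_open)
            assert action in ACTIONS and action not in FORBIDDEN
        return True

    print(verify())  # True: the property holds for every possible input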

(1) I’m being facetious, but I thought it would be too much of a digression to discuss compatibilist views, which reconcile free will with determinism. It’s probably obvious that I disagree with them, but I’m not dismissive of them.


2 thoughts on “On Free Will and AI: A Rant and/or Rave”

  1. I’m not sure that “properly proving properties of their behavior” necessarily frees us from being surprised by their behavior. Naturally, this all comes down to what ‘properly’ is supposed to mean, but that’s evading the point 🙂

    The problem is that the sorts of things we are capable of proving are often local properties, but local properties of behavior only have so much influence on the global properties of that behavior. This is where so-called “emergent behavior” typically comes from: while all the little parts of a behavior may be well understood, what happens when all those parts are put together is often unclear.

    This isn’t simply an issue of not proving the right theorems. Often the global behavior is far too complex to admit any proofs (because the point of the system is to do complex things when we humans only understand how to do the simple parts, and want to apply our small parts automatically to do bigger things). Or, if it does admit proofs, it only admits proofs of weak semantic theorems like “it always picks the best choice” (without the proof giving any indication of what choice actually happens to be the best in some arbitrary setting). Still further, even if we can somehow get the proofs we want, the more complex the overall behavior is the more complex the theorem must be; and just because we can prove it doesn’t mean we *understand* what it is we’ve proved or the consequences thereof. There are plenty of examples all throughout game theory and probability theory that demonstrate these problems, so it’s not just a philosophical conundrum…
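    One small sketch of what I mean (Rule 30 is just a convenient, hypothetical stand-in here): the local update rule of an elementary cellular automaton is a three-bit lookup table you can verify at a glance, yet the global pattern it generates is famously resistant to prediction and to simple proofs.

        # Rule 30: the local rule is a trivial 3-bit lookup table, but the
        # global pattern it produces is notoriously hard to characterize.
        RULE30 = {(1,1,1): 0, (1,1,0): 0, (1,0,1): 0, (1,0,0): 1,
                  (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

        def step(row):
            padded = [0] + row + [0]
            return [RULE30[(padded[i-1], padded[i], padded[i+1])]
                    for i in range(1, len(padded) - 1)]

        row = [0] * 30 + [1] + [0] * 30   # a single live cell in the middle
        for _ in range(20):
            print("".join("#" if cell else "." for cell in row))
            row = step(row)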

    • I don’t actually disagree with what you’re saying and, indeed, I think my statement came across more strongly than I intended.

      The scope of what I meant by “surprise” was the apocalyptic scenarios I’ve seen about AI, such as the “a super intelligent factory decides optimal efficiency comes from wiping out humanity” style of anxiety.

      My point is that if there is true choice, it will probably be nearly impossible to rule these anxious scenarios out, but without choice, surely we can and should analyze the systems we deploy well enough to ensure that “kill all humans” is never a conclusion.

      I’m not saying that it’s trivial to do that, though.
