Free will. I know that its existence and nature have been debated to death, but given the prevalence of pop-science articles about AI and whether evil super-intelligences will spell the doom of humanity, it’s a topic I want to ramble about for a bit.
First off, we need to define what we mean by “free will”. I think the easiest definition is that to have free will means to have the ability to make a deliberate choice between alternatives. If you haven’t spent many insomnia-laden nights worrying about this topic, you might be wondering “okay, that sounds good, why is that controversial?”. Fundamentally, the issue is that physics as we understand it doesn’t leave a lot of room for “choice”. In the classical, Newtonian view of things, the motion of every particle is deterministic. Once you have the initial conditions, you can turn the crank on the equations of motion and get the position and momentum of every particle in the universe at any point in the future. Thus, in classical mechanics, there can’t be any ability to make a choice, since everything that happens follows exactly from everything that happened before it.
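To make the “turn the crank” point concrete, here’s a toy sketch (a hypothetical example, not from any physics library): Euler-integrating a projectile’s equations of motion. Run it twice from the same initial conditions and you get bit-for-bit identical futures.

```python
def simulate(x0, v0, g=-9.81, dt=0.01, steps=100):
    """Euler-integrate position and velocity under constant gravity;
    returns the final (position, velocity) pair."""
    x, v = x0, v0
    for _ in range(steps):
        x += v * dt  # advance position by current velocity
        v += g * dt  # advance velocity by constant acceleration
    return x, v

# Identical initial conditions yield identical trajectories:
run1 = simulate(0.0, 20.0)
run2 = simulate(0.0, 20.0)
assert run1 == run2  # determinism: no room for "choice"
```

The same holds no matter how elaborate the dynamics get; determinism is a property of the equations, not of their complexity.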
Now, folks sometimes try to dig into quantum mechanics hoping that it will save us and provide a clue as to how consciousness and choice work. The problem there is that probabilistic physics doesn’t allow choice any more than deterministic physics. There is still no reflective mechanism that allows for a conscious entity to deliberate and choose.
Do we then conclude that free will is an illusion? I don’t think so. I believe we have free will. I think we have the ability to make choices. In part, because a world where we
- seem to make choices
- can debate on our ability to make choices
- are able to think we make choices
- but don’t actually make choices
seems very, very unlikely. Of course, if I’m wrong then I don’t suppose I have any choice in my beliefs anyway (1). I’m not even necessarily appealing to a supernatural cause for free will. My best guess is that there’s something deeply strange about the universe that we have yet to discover. But who really knows? I certainly don’t. For the rest of this piece, though, I will be assuming that free will somehow exists.
Where does this fit in with AI, though? It’s a long-standing anxiety in science fiction that machines will be sapient just like us and rise up to destroy us, and it’s an anxiety that’s finding its way into tech journalism as AI becomes more and more prevalent and powerful, but is it even possible? I’d argue that we don’t know whether a machine can ever be conscious. If my assumption is correct and we do have free will, then I imagine we’ll need to understand how this choice works physically before we can make a machine that uses it. There are a few potential showstoppers here. We might not find a physical mechanism for free will. Even if we find one, it’s not obvious to me that it would be easily harnessed in a computer-as-we-know-it. As much as I’d love for them to exist, I don’t particularly see sentient machines happening in the near future.
We can’t talk about thinking machines without discussing the Turing test. The Turing test doesn’t make any reference to free will or choice as a requirement for being a thinking machine, just the ability to fool a person, after a five-minute conversation, into thinking they’re speaking with a human (Turing predicted machines would manage this at least 30% of the time against an average interrogator). I think that Turing’s test isn’t necessarily helpful given our current computational power. What I mean is that I agree with the extensional point of view: if a machine isn’t distinguishable in behavior from a person, then there’s no useful distinction between the machine and us, and thus it’s completely reasonable to call the machine “thinking”. I also believe that Turing didn’t realize how quickly our computational power and storage size would grow. Chatbots have become fairly commonplace at this point, but the facade is rather thin. Even if the program is capable—with enough AIML rules, processing power, and clever heuristics—of convincing a person in the short term that they are speaking with another human being, it doesn’t tell us anything about the ability to make choices.
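How thin can the facade be? Here’s a minimal rule-based chatbot in the spirit of ELIZA-style pattern rules (a hypothetical sketch, not a real AIML engine): it matches surface patterns and echoes fragments back, with no model of meaning, let alone choice.

```python
import re

# Each rule pairs a surface pattern with a canned reply template.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\?$"), "What do you think?"),
]

def reply(utterance: str) -> str:
    """Return the first matching rule's reply, echoing captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when nothing matches

print(reply("I am worried about AI"))
# → "Why do you say you are worried about AI?"
```

Pile on enough rules and the illusion can hold for a few minutes of conversation, which is precisely why passing a short imitation game says so little about deliberation.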
If our AI cannot make deliberate decisions, should we then be concerned with AI-as-we-know-it running out of control? I don’t think so, at least not any more concerned than we should be about any large and important software system. This may seem like a facile point, but without choice there’s no room for AI to surprise us if we’ve properly proved properties of their behavior. If anything, I think this is just a strong argument that we need to invest further in careful program construction and better theorem-proving and formal-methods technologies, so that we no longer have to pay an overhead of many person-years to analyze a large system.
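As a toy stand-in for what “proving properties of behavior” means (a hypothetical example; real formal methods use proofs rather than enumeration), here’s a safety property checked exhaustively over a small input domain:

```python
def saturating_add(a: int, b: int, limit: int = 255) -> int:
    """Add two byte values, clamping at `limit` instead of overflowing."""
    return min(a + b, limit)

# Property: the result never exceeds the limit, and when no clamping
# occurs it equals the exact sum. Checked over the whole byte domain.
for a in range(256):
    for b in range(256):
        result = saturating_add(a, b)
        assert result <= 255
        assert result == a + b or result == 255

print("property holds for all 65,536 input pairs")
```

For byte-sized domains brute force suffices; theorem provers and model checkers exist precisely because real systems have input spaces far too large to enumerate, which is where the person-years currently go.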
(1) I’m being facetious, but I thought it would be too much of a digression to discuss compatibilist views of free will with determinism. It’s probably obvious that I disagree with them, but I’m not dismissive of them.