

AI and the Free Will Debate

Thomas Larsen

February 13th, 2023

[DALL·E image: postmodern illustration of the concept of artificial intelligence]

For at least the last two and a half millennia, mankind has been discussing the nature of human will. Philosophers from Aristotle to Sam Harris (and nearly every philosopher in between) have debated whether we are truly free or whether all of our thoughts and actions are determined by preexisting causes. The debate takes many forms, but it often revolves around the nature of the mind, how it dictates our actions, and whether (and to what extent) we are in control of it, or whether we are mere observers as our bodies and minds simply follow the deterministic flow of causal events through time.


Before studying Artificial Intelligence (AI), both contemporary, functioning AI and hypothetical future iterations (including the holy grail of Artificial General Intelligence, or AGI), I would have considered myself firmly on the Free Will side of the debate. It goes without saying that our intuition tells us we are free to make choices. We deliberate over the smallest decisions; we regret making bad decisions and claim credit for the good ones. Our justice system relies on the existence of free will, allowing us to pass judgment upon others for their harmful deeds and deliver seemingly just punishments. Every intuition we have points to the existence of Free Will. But is intuition always right?


It’s easy to think of examples of our intuition coming up short in our judgment of reality. In fact, optical illusions, one of my favorite pastimes as a child, rely on our brain’s inability to properly intuit things. We willfully and regularly trick our brains into seeing objects that are not there, or completely miss the most obvious objects directly in front of us. We see faces in rocks and trees, and see animals and pirate ships in the clouds. Artists like M.C. Escher are famous for their mind-tricking illusions, inspiring and expanding the minds of countless individuals over the years. 


While intuition is a powerful tool in our day-to-day lives, early scientists quickly realized that intuition and first-hand observation are unreliable, leading to inventions that take human intuition and observation out of many scientific processes. Should we then also reconsider our intuitions and personal observations of free will? There are countless arguments for doing so, but the argument I would like to make in this article relies on a comparison between the human mind and existing AI systems and their future iterations. While I don't think this argument will once and for all settle the multi-millennia debate of Free Will vs. Determinism, I think it does give us some food for thought.


To get started, what exactly is the nature of today's AI? In the traditional sense, AI is a machine with the ability to gain and use information and reason on its own. Arguably, there are no traditional AI systems yet, but the term AI has broadened to encompass many of the developing technologies that may one day lead to a machine with these powers of reasoning. The AI systems I want to focus on fall under the category of Machine Learning (ML). There are two main categories of ML, and both have some interesting parallels to our own brains and how we learn.


The first type of ML we should consider is the human-aided kind, known in the field as supervised learning. These systems can take many forms, but the basic idea is that the system is trained by interacting with human-labeled data: either it is pre-trained on a labeled dataset, or human corrections to its output are fed back into the system as a corrective signal. Our brains also function this way in most traditional learning situations. We read books, take tests, and are corrected by those who raise us or teach us. ML systems are interesting because, up to a point, they can learn this way far more rapidly than you or I. There are ways in which our brains create connections between objects and ideas far more rapidly than our machine counterparts, but when repetition is key to understanding, ML systems can parse through terabytes of examples in minutes. The main point to understand about these systems is that they reflect one of our primary modes of learning.
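To make the corrective-learning loop concrete, here is a minimal sketch of supervised learning: a single perceptron adjusts its weights whenever its prediction disagrees with a human-provided label. The task (the logical AND function), the learning rate, and all names are invented for illustration.

```python
def predict(weights, bias, x):
    """Output 1 if the weighted sum of inputs crosses zero, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(examples, epochs=20, lr=0.1):
    """examples: list of (inputs, human_label) pairs."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:
            # The "human correction": the gap between label and prediction
            error = label - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Labeled data for logical AND: the "teacher" supplies the right answers.
labeled = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(labeled)
print([predict(w, b, x) for x, _ in labeled])  # → [0, 0, 0, 1]
```

After a handful of passes over the labeled examples, the corrective signal has pushed the weights to a point where every prediction matches its human label.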


The second type of ML we need to look at is the unaided type. It can be broken down into many different forms, but the main idea is that the system is given a task and certain abilities, and uses a trial-and-error approach (known in the field as reinforcement learning) to accomplish the task by experimenting within the limits of those abilities. Many ML-driven robotics teams use this approach to train new robots, especially when experimenting with locomotion and dexterity. Many scientists also use unaided methods to find new ways to think about vast amounts of data, where unique approaches to data processing are required for new discoveries. Again, though, our brains learn using this method as well. Much of what we learn before we gain the ability to read is obtained through trial and error: we learn to walk, to speak, to jump and play, and so much more in this manner. Of course, much of that is a hybrid between the aided and unaided approaches to learning, but both are very active tools in our learning repertoire.
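The trial-and-error idea can also be sketched in a few lines. Here a toy agent is given a task (maximize reward) and a set of actions, but no labels at all; it learns which action is best purely by experimenting. The action names, reward values, and exploration rate are all invented for illustration.

```python
import random

random.seed(0)  # fixed seed for a reproducible run

# The environment's reward for each action; hidden from the agent's logic.
REWARDS = {"crawl": 0.2, "stumble": 0.5, "walk": 1.0}
actions = list(REWARDS)

estimates = {a: 0.0 for a in actions}  # the agent's learned value of each action
counts = {a: 0 for a in actions}

for trial in range(200):
    if trial < len(actions):
        a = actions[trial]                    # try every action at least once
    elif random.random() < 0.1:
        a = random.choice(actions)            # keep exploring occasionally
    else:
        a = max(actions, key=estimates.get)   # exploit the best found so far
    reward = REWARDS[a]                       # outcome of the experiment
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]  # running average

print(max(actions, key=estimates.get))  # → walk
```

No teacher ever tells the agent the right answer; the environment's feedback alone steers it toward "walk", much as a toddler's falls steer them toward balance.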


Now that we understand that ML systems are trained in much the same way as we learn, it is also important to understand how well we understand what these systems do with their training. Most ML takes place in what are known as black box systems. In a black box system, the input is well understood and the algorithms used to process that input are well understood, but because ML relies on seemingly arbitrary connections among potentially billions of bits of information found in its training data, forming an incomprehensibly complicated web, the output is unpredictable. We do know how to steer outputs by manipulating the inputs and adjusting the algorithms to meet our needs, but for all practical purposes the output is indeterminable in advance. Our brains are also essentially black boxes, except that our brains differ from ML systems in complexity on an immense scale. Not only do we not fully understand every aspect of human learning (the input), we also don't understand the brain's nuanced functions (the algorithms), and we of course cannot predict people's thoughts and actions. We have gained insight into the brain on many levels, but on the whole it is still quite the enigma.
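The black-box point can be felt directly with a toy network. Everything below is fully inspectable: the seed, every one of the 32 weights, and the simple, completely known algorithm (weighted sums passed through tanh). Yet a human cannot look at those numbers and predict what output a given input will produce. The network's size, seed, and inputs are arbitrary choices for illustration.

```python
import math
import random

random.seed(42)
# Every parameter is right here in plain sight: a 3-to-8 layer and an
# 8-to-1 layer of random weights.
w_hidden = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(3)]
w_out = [random.uniform(-1, 1) for _ in range(8)]

def forward(x):
    """Fully transparent arithmetic, practically opaque behavior."""
    hidden = [math.tanh(sum(xi * w_hidden[i][j] for i, xi in enumerate(x)))
              for j in range(8)]
    return sum(h * w for h, w in zip(hidden, w_out))

# Nothing is hidden, yet try predicting this value by eye before running it.
print(forward((1.0, 0.5, -0.5)))
```

This is the essay's claim in miniature: unpredictability here comes from complexity, not from anything hidden, and nobody would call this little tangle of arithmetic free.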


There do exist ML systems that function using glass box principles, where every function is traceable, every variable taken into account in any of its processes is known, and its output is predictable. These sorts of ML systems are useful anywhere that transparency is essential, for example in applications involving human rights and the justice system. Algorithms exist that attempt to determine the potential recidivism of an individual when they come up for parole. To reduce the tendency toward biased judgment so often seen in the justice system, these algorithms ought to be completely transparent (though some in use today are unfortunately not). The drawback with these sorts of systems is that the level of complexity needed to produce human-level processing does not seem to be possible while maintaining true transparency. This is, however, an ongoing debate, and much work is being done to explore how transparent systems can be while maintaining usable function.
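What "glass box" means in practice can be shown with a linear, point-based score in which every factor's contribution to the decision is explicit and auditable. The factors and weights below are invented purely for illustration; they are not drawn from any real risk-assessment instrument.

```python
# Hypothetical scoring factors and their point weights, all in plain view.
WEIGHTS = {"prior_offenses": 2, "age_under_25": 1, "completed_program": -3}

def score(record):
    """Return the total score plus a trace of every contribution behind it."""
    trace = [(factor, record.get(factor, 0) * weight)
             for factor, weight in WEIGHTS.items()]
    return sum(points for _, points in trace), trace

total, trace = score({"prior_offenses": 2, "age_under_25": 1,
                      "completed_program": 1})
print(total)  # 2*2 + 1*1 + 1*(-3) = 2
print(trace)  # every step of the decision is visible and challengeable
```

A decision like this can be explained, audited, and contested line by line, which is exactly what a black box cannot offer; the trade-off is that such simple rules cannot capture the complexity a neural network can.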


Glass box systems aside, the parallels between the human mind and ML systems are close enough that each should affect how we view the other, especially when it comes to the existence of free will. It is safe to say that no existing ML system has free will. There may be fringe arguments against that statement, but I don't know of anyone taking them seriously, nor should they. Because we understand all that goes into a system and how it works, our inability to predict its output due to complexity is not an argument for free will in any existing ML system. Likewise, I would argue that just because we don't understand in every nuance how our brains produce our thoughts and actions doesn't mean we shouldn't also view our inability to predict others' actions and thoughts as a function of complexity rather than a function of free will.


It is conceivable that a future ML system, whether through scaling of existing technologies or through currently unforeseen ones, will so fully mimic our abilities that we cannot tell the machine's processing of information from our own. Even then, we would have as much insight into its programming and development as we have into existing technologies, and we would know that it has no free will, just as we know current ML systems do not. While free will and consciousness are not equivalent, and I don't claim to know whether machines will ever be conscious, I think we can fairly confidently assert that machines will never have true free will and will always be deterministic systems bound to their training data and programming. And because added complexity does not change the root function of a system, even though we are far more complex than current ML systems, I would argue that we too do not have free will, but are just as deterministic as any system we can build ourselves.


Free will is key to much of what we tell ourselves about daily life. We justify punishments and sentences based on an individual's free will to choose their own actions; we truly believe we chose the salad over the lasagna because we reined in our will. But what would change if we viewed the world as completely deterministic? What sorts of reforms would we need to make in politics, governance, and justice? If such a dramatic shift in perspective were to occur, whether or not we could map the entirety of the deterministic causal forces behind any action, we would be forced to morally reconsider how we treat ourselves and others. If we didn't view people as guilty, but viewed them as subjects of their circumstances, what would we do to change the circumstances that lead to dangerous behavior? I cannot predict what would happen, but I can imagine a world where people get help before punishment, aid before incarceration, or a true chance at life before being sentenced to death. Hopefully our continued venture into ever more advanced AI systems will eventually lead our species to this realization, pushing us toward more ethical reforms across all of society.
