Monday, December 24, 2012

How To Build A Free-Will Machine

One of the hottest topics in physics and philosophy these days is the question of free will. Do we humans make truly free choices in the world, or is this impression merely an illusion? It certainly seems as if we have free will, but if science teaches us anything, it’s that we can’t always trust our intuition: Feathers fall just as fast as bowling balls (in a vacuum), light does not travel infinitely fast, and the surface of a pond is not flat but curves slightly with the shape of the Earth. Is our intuition about free will wrong, too?

Those who subscribe to the reductionist school of thought would say that we do not have free will. According to this argument, a complete understanding of the world can be reduced to particles strictly obeying the laws of physics; therefore, whatever physical state you (i.e., the particles of your brain/body) and your environment were in prior to making a choice, there’s one and only one corresponding state afterward, and that means only one possible choice, predetermined by the earlier state of the atoms and molecules in your body. Some take this to the extreme, arguing that the future of everything in the universe is already decided, and that this future could be predicted with perfect accuracy in principle (if not in practice), given complete knowledge* of the universe’s present state. A different line of argument points to cognition experiments: It is now well known, from Benjamin Libet’s experiments and their successors, that our brain has already settled on a decision about a half-second before we consciously sense that we are deciding; therefore, it is argued, our conscious sensation of free will must be an illusion.

The term I use for this kind of thinking is “retarded.” I mean no offense; I use the term literally, meaning that these arguments are regressive and backward, unimaginative, crippled by the Einstellung effect: they are based on old paradigms and leave no room for new, “outside the box” ways of thinking. (We used to think patterns on the surface of Mars were canals built by Martians; after all, we humans build canals on Earth, right? And the Martians have human faces. That’s retarded thinking.) The experimental argument against free will is retarded because it assumes that only our conscious self is capable of making free decisions. What if the actual free decision happens a half-second earlier in the subconscious, and only the conscious aspect of free will (“I think I’ll make a left turn here”) is the illusory part? The brain’s high-level executive functions, which include conscious thoughts and sensations, are only a small part of what the brain does, like the images displayed on a computer screen. There’s a lot more going on at deeper levels than it seems, and this is where free will may reside.

That leaves the physical, reductionist argument against free will. Prominent scientists including physicist Paul Davies and mathematician George Ellis reject this as well, on the grounds that strict reductionism does not apply to living systems. In the science literature there has been an explosion of research and theory on the role of information in biological systems, and we are seeing a groundswell of acknowledgment that in living organisms, information acts causally on the atoms and molecules of life. This recognition of “top-down” effects is changing our view of the bottom-up mechanisms that, according to traditional reductionist thinking, drive everything in the universe.

If information is fundamental to the way organisms (such as humans) operate, can we demonstrate that free will really exists by describing it in terms of information, rather than atoms and molecules? How would that work?

Let’s consider one of the most human-brain-like machines in the world, the Jeopardy-playing IBM computer Watson. Watson uses a sophisticated statistical approach: Given a Jeopardy clue, Watson compares keywords and strings of words with a vast database of information, runs a slew of algorithms simultaneously, and then comes up with a list of possible responses, assigning each a confidence level. If the confidence level of one response is sufficiently high, Watson rings in and gives a response. Google Translate and Apple’s Siri use similar statistical approaches. But no one in their right mind would say that Watson or Siri has free will; given exactly the same prompt and the same database — what scientists call initial conditions — the result will be entirely predictable. Despite being an incredibly complex computer, Watson is still not as complex as the simplest one-celled animal, let alone a human brain. So, how could we modify Watson so that it would start to exhibit qualities of free will?
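To make that determinism concrete, here is a toy sketch of a Watson-style decide-or-pass rule. The function name, threshold value, and sample confidences are my inventions for illustration, not IBM’s actual pipeline: the machine rings in only if its best candidate clears a fixed confidence threshold, so identical inputs always yield identical outputs.

```python
# Toy model of a Watson-style ring-in decision (illustrative only).
# Given candidate responses scored by confidence, answer only if the
# best candidate clears a fixed threshold. Same inputs -> same output.

THRESHOLD = 0.5  # fixed risk tolerance (invented value)

def decide(candidates):
    """candidates: dict mapping response -> confidence in [0, 1]."""
    best = max(candidates, key=candidates.get)
    if candidates[best] >= THRESHOLD:
        return best   # ring in with this response
    return None       # confidence too low: stay silent

clue = {"What is Toronto?": 0.14, "What is Chicago?": 0.62}
print(decide(clue))  # deterministic: prints "What is Chicago?" every time
```

With a fixed database and fixed rules, this little function, like the real Watson, can never surprise you: rerun it a million times and nothing changes.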

Dynamics. That’s the key difference between Watson and living cells. Watson uses a more or less fixed database and fixed operating rules, which is why, given the same clue, Watson would respond the same way. But dynamics, in the form of highly complex, interacting internal changes, are one of the most obvious hallmarks of living organisms. If Watson were built with interacting dynamics, the results would be chaotic enough that Watson would begin to exhibit free-will-like qualities. For example:

- A random number generator could alter all statistical calculations slightly over time. That alone would make its responses more unpredictable.
- Another number generator could randomly remove access to sectors of the database, mimicking the imperfection of biological memory and recall.
- Watson’s thresholds (the risks it is willing to take) could go up and down slightly with time, as well as in response to external conditions: how far into the game it is, Watson’s score against those of its competitors, even the instantaneous temperature and air pressure.
- Watson could be given “moods”: If it missed a couple of clues in a row, it might get “bummed out” and avoid risks for a while.
- Positive and negative feedback mechanisms could either exaggerate or reduce risk-taking over time, based on several of the other factors.
- Changes in light levels and noises (such as a burst of laughter or applause) could “distract” Watson, causing its confidence levels to dip or fluctuate uncontrollably, with some distractions lasting longer than others depending on factors such as recent performance and the scores.
- “Fatigue” could set in, with the threshold for distraction dropping not only steadily with time but also as a function of Watson’s performance and even the time of day.
- Watson could be given a mechanical “body” that must cooperate in order to play the game, with the interplay shaped by the body’s own complex dynamics, feedback mechanisms, and distractions. (Too much ringing in? “Hand” cramps up.)

And so on.
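As a hedged sketch of how a few of these dynamics might interact (every class name, parameter, and number here is invented for illustration, not a description of the real Watson), the toy below perturbs each confidence with noise, drifts the decision threshold with a “mood” that sours after misses, and occasionally blanks out part of its “memory.” Run it on the same clue twice and it can answer differently.

```python
import random

# Toy "dynamic Watson": a threshold decision rule with interacting
# internal dynamics layered on. All names and numbers are invented.

class DynamicWatson:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.threshold = 0.5   # baseline risk tolerance
        self.mood = 0.0        # positive after misses -> more cautious

    def decide(self, candidates):
        scored = {}
        for response, confidence in candidates.items():
            if self.rng.random() < 0.05:     # "memory" sector unavailable
                continue
            noise = self.rng.gauss(0, 0.05)  # jitter the statistics
            scored[response] = confidence + noise
        if not scored:
            return None
        best = max(scored, key=scored.get)
        if scored[best] >= self.threshold + self.mood:
            return best
        return None  # effective threshold not met: stay silent

    def feedback(self, missed):
        # Feedback loop: a miss "bums Watson out" and raises caution;
        # a hit slowly restores confidence. Mood is clamped to [-0.2, 0.2].
        delta = 0.05 if missed else -0.02
        self.mood = min(0.2, max(-0.2, self.mood + delta))

clue = {"What is Toronto?": 0.48, "What is Chicago?": 0.52}
w = DynamicWatson()
print([w.decide(clue) for _ in range(5)])  # answers may vary run to run
```

Even this crude version breaks the one-input-one-output mapping: the interacting noise, mood, and memory terms mean the machine’s history and internal state now help determine each response.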

Given all of these extra dynamics, would Watson have human-type free will? Not quite. That would require piling on astronomical layers of complexity. Phrases in clues might conjure specific “memories” from its “life” that could either help or hurt performance; it could have multiple competing internal influences or “dialogues,” akin to Freud’s id and superego (or like Gollum/Sméagol from The Lord of the Rings); and it could have advanced “emotions” such as jealousy or contempt, those emotions modulated by its “memories” as well as every other factor I’ve mentioned. That doesn’t even touch on the decidedly human skill of analytically understanding (and misunderstanding!) the true meanings of the clues in the first place.

Regardless, adding only five or ten interacting dynamic parameters to the existing Watson would create a system sufficiently chaotic that its behavior might exhibit the free will of, say, a flatworm. Simple living creatures have enough dynamic complexity going on that it’s effectively impossible for us to recreate the same initial conditions, both internal and external, surrounding any given choice such a creature has to make. So, even though a flatworm or a modified Watson will usually respond to a certain stimulus in a certain way, you can never know enough about the system to say for sure. Throw in the indeterminate/random nature of quantum-mechanical influences at the sub-cellular level (analogous to the number generators in the modified Watson), and it becomes impossible even in principle to predict how choices will be made.
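The impossibility of recreating initial conditions is the signature of chaos, and it is easy to see in the textbook chaotic system, the logistic map (used here purely as an illustration of sensitive dependence, not as a model of any organism): two starting values that agree to nine decimal places end up completely uncorrelated after a few dozen iterations.

```python
# Sensitive dependence on initial conditions, via the logistic map
# x -> r*x*(1-x) with r = 4 (the classic chaotic regime).

def logistic_orbit(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_orbit(0.300000000, 50)
b = logistic_orbit(0.300000001, 50)  # differs by one part in a billion
print(abs(a - b))  # the tiny initial difference is amplified enormously
```

If a three-parameter map already defies prediction at this level, a system with thousands of interacting, fluctuating parameters (a cell, a brain, a suitably modified Watson) is beyond any practical hope of replay.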

As far as I’m concerned, that means free will, even for a flatworm, or for a Watson. For a human being, with all of its complexities and frailties, it isn’t even a matter of debate.

* The “knower,” being a part of the universe, would need complete instantaneous knowledge of itself, including knowledge of the state of having learned the last fact about itself. Or, it would need to be external to the universe, which is defined as all that exists. Both options are logically impossible.