Saturday, January 31, 2015

Cherry Pickers Deny Global Warming

Global warming deniers are like a broken record, repeating the same old climate change myths.  It is always possible to find supposed counterexamples to any well-established and impeccably tested scientific theory -- evolution, the big bang, the germ theory of disease -- by cherry picking the data.  This despicable practice is condemned by the scientific community, and those who engage in it are censured and shunned.  Unfortunately, cherry picking is the favorite tool of politicians, lobbyists, and purveyors of ideology, who use it unashamedly and with impunity.

Take the claim that global warming has stalled.  A plot of the mean global land-ocean temperature shows that the 2013 temperature is indeed below the 1998 value.  However, such variations are well within the expected range of statistical fluctuations.  Taking these two points out of context and ignoring the rest of the dataset leads to the erroneous conclusion that warming has stopped.  The long-term trend is clear, and agreement between the data and computer models (within computational and measurement uncertainties, of course) increases our confidence in the results.

What about the claim that scientists are changing terminology in an effort to hide the perception that global warming has stopped?  To be precise, the earth is experiencing global heating.  It is a fact that CO2 is a greenhouse gas that traps heat, which warms the atmosphere, land, and oceans.  Manifestations of heating include rising temperatures, evaporation of water, melting of ice, more volatile weather, and changes in climate patterns.  Deniers see a conspiracy in what they believe is an intentional obfuscation on the part of scientists.  The fact that the media uses different words or has changed its terminology is irrelevant to the underlying facts.  The terms global warming and climate change have been used by scientists for decades, and it has always been understood that these are consequences of heating.

Deniers like to downplay the scientific consensus.  According to several studies that analyze the vast literature on climate change, more than 97% of active climate scientists conclude that the data support anthropogenic climate change.  The most meticulous study of this sort assembled a list of researchers who are both active climate scientists (judged by the number of publications in refereed journals) and signatories of public statements that either support or denounce anthropogenic climate change.  This study confirms the 97% figure.  Global warming deniers point to one specific report that suffers from a documented flaw in its methodology as evidence that the 97% number is incorrect.  They ignore all the other studies confirming that 97% of climate researchers concur that global warming is real.

Deniers call for productive discussions on climate change.  These are already taking place.  Vigorous debate is common at scientific conferences, new and updated studies are inundating the journals with data, and interpretations of the data are continually being fine-tuned.  Scientists are not zombies who toe the party line.  Only after years of debate is a scientific consensus reached; and even then, scientists doggedly try to find holes in the evidence.  In contrast, the climate change denial machine cherry picks the data in a deliberate effort to cast doubt and plant misinformation in the popular press.

Demanding 100% certainty before taking action on global warming is a stalling tactic.  Prudence demands that steps be taken in proportion to the degree of confidence in the information.  Public debate should focus on finding the most economical ways to achieve the greatest reductions in greenhouse gas emissions.  The scientists who have the deepest and broadest understanding of the literature have reached a consensus based on the preponderance of evidence.  We are fools for not heeding their warnings.

Saturday, January 24, 2015

The value of ‘curiosity-driven’ research


Darwin did not study the natural world to understand the genome with the purpose of developing the field of genetic engineering.  Einstein did not study the curvature of space-time and the eerie condensation of particles called bosons to develop the GPS system or to invent the laser. Instead, these scientists were driven by curiosity and an intense thirst to unlock the mysteries of life, matter, and space-time. The practical applications of their research, which have fundamentally transformed the fabric of our lives, came about fortuitously as a byproduct of their relentless pursuit of understanding.

Universities must maintain a public commitment to, and financial investment in, fundamental research -- not only to expand human knowledge as an end in itself, but also because basic research can lead to breakthroughs that shift scientific paradigms and enable novel applications that leapfrog existing technologies.  Universities should continue to hire bright and energetic basic researchers, as they reinvigorate existing research areas, attract the best and brightest students, and increase the visibility of our research and the reputation of our institution.  Basic research is the foundation on which new innovation and societal solutions are built.

Sunday, January 4, 2015

Why Free Will is an Illusion

Free will is always a topic that generates spirited debate.  Several years ago, one of Pat's junior colleagues in the Economics Department started a book club, which we recently inherited.  Our conversations tend to meander all over the place and often return to the topic of free will.

That free will is an illusion is a view that seems to be generally accepted by philosophers.  Debate on the topic seems to focus on semantics and details.  According to my limited understanding of the debate, even those camps that reject determinism are not really rejecting the notion that free will is an illusion.

The last meeting of our book club in 2014 once again landed on the topic of free will.  Using quantum principles to add a little uncertainty to the mix, we can show that while a choice made by an individual is not deterministic, one can still determine the probabilities associated with the action he or she will take.  These probabilities do not reside in a consciousness that is associated with a person, but rather are a property of all physical systems.  The probabilities are just built in, and there cannot be any volition under the hood.

What follows is a response to our discussions from the last book club meeting, laying out the reasoning that leads me to conclude that there is no such thing as free will, even though our actions may fool us into believing otherwise.

Since this is my first post of the new year, may you all have a happy and prosperous 2015.  (I was destined to say that!)

INTRODUCTION

"Why aren't you more excited?  We're witnessing quantization firsthand!" The young balding scientist stared at me through his thick glasses, waiting for my response.

I watched the tiny pen run across the waxy paper like an old-fashioned EKG.  In its first pass, it had drawn a bell curve.  The scientist replaced the black pen with a red one, turned the knob that controlled the magnetic field, and restarted the experiment.  The new red curve traced out two peaks that straddled the black one like a pair of centurions escorting a shackled prisoner.

"This is the foundation of quantum physics, and you don't seem to care."  He scribbled something on the strip of paper, placed his pen in the pocket protector of his white dress shirt, and left the lab with his hands inserted awkwardly in his jeans, shaking his head.  In my defense, I was a graduate student and this lab course was required.  At the time, it was a chore.

What bearing does this have on free will?  Free will involves making a decision and a decision is an event that is amenable to measurement.  It resides in electrical impulses that flicker around the brain.  Even if one denies the material origin of thought, a decision has an observable outcome such as an action that can be perceived or recorded.  Detecting the decision directly from states of the brain using electrodes or noting the ripples made by the action on the physical world are measurements.  As such, an analysis of free will requires a rudimentary understanding of how mother nature's magnificent clockwork is deduced from strictly controlled experimentation (i.e. measurements).  Since the concept of a measurement is central to the discussion, it requires a careful analysis from the most fundamental perspective to avoid fallacies introduced unknowingly by our biases.

The experiments performed by Stern and Gerlach in 1922 provide insights into the meaning of a measurement and how it is limited not by our technological shortcomings, but by the laws of nature.  Stern and Gerlach used the observed trajectories of silver atoms that fly out of a hot oven to establish a fundamental principle of quantum mechanics -- spin quantization.  This simple property can be measured in a way that removes all spurious effects, leaving bare the measurement process itself.  Our approach will be to start with this simplest of systems to gain insights into the meaning of a measurement, then use it as a scaffolding to build ever more complex systems.  Each step is guided by measurements in a chain of evidence that connects the simple to the complex.

An appreciation of the importance of the Stern-Gerlach experiments and their bearing on measurement requires a review of an electron's rap sheet.  The electron is characterized by three quantities: mass, electric charge, and magnetic dipole moment.  As we will later see, the magnetic dipole moment and spin are inseparable, so they can be discussed interchangeably.  In the Nobel-Prize-winning research, a magnetic field tugs at the dipole moment of an electron to deflect its trajectory.  As usual, things are not quite so simple.  First, it was difficult at the time to make a beam of electrons and measure their presence.  More damning is their electric charge, which would make the beam fly apart like the pieces of an exploding firework.  In perhaps a grand stroke of luck, Stern and Gerlach used silver atoms.  Each is electrically neutral and, unbeknownst to them at the time, has a single unpaired electron.  The remainder of the electrons are locked together in pairs in what is called a closed shell, so their dipole moments cancel.  In short, we can ignore the closed-shell electrons.  The result is that the trajectory of a silver atom is deflected by a magnetic field through the tug it exerts on that one lonely electron.

The force on a magnetic dipole by a magnetic field had been well characterized by 1920.  It was established that the force, therefore the degree of deflection, depends on the angle between the magnetic field (more precisely, the magnetic field gradient) and the unpaired electron's magnetic dipole moment, which can be visualized as a tiny bar magnet and often represented by an arrow that points from its south to north pole.  Since the atoms flying out of the oven are randomly oriented, a magnetic field will make some atoms deflect up, some down and some not at all.  As such, the narrow beam shown in the upper diagram was expected to broaden into a fuzzy smear in the direction along the magnetic field as shown in the middle diagram (the blue arrows show the orientation of the dipole moment that is deflected to the corresponding part of the red smear to its left).  Instead, the beam was observed to split into two sharp beams as shown in the bottom diagram, implying that the electrons are restricted to only two distinct orientations -- along or opposite to the magnetic field, which for convenience we call spin-up and spin-down.
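For the record, here is the relevant formula in modern notation (a minimal sketch, not a quote from the original papers).  The deflecting force along the field-gradient direction is

    F_z = \mu_z \frac{\partial B_z}{\partial z} = \mu \cos\theta \, \frac{\partial B_z}{\partial z},

where \theta is the angle between the dipole moment and the field gradient.  Classically, \cos\theta can take any value between -1 and 1 for randomly oriented atoms, which is exactly why a continuous smear was expected rather than two sharp beams.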

Just think about this for a moment; the implications are profound, a fact that escaped me as a fatigued graduate student.  The electrons are randomly oriented when leaving the oven, but a measurement sees only spin-up or spin-down electrons.  What happened to the rest of them?  We will discuss this and many other questions more rigorously later.

The Stern-Gerlach experiment became an indispensable tool for studying quantum effects.  To produce a beam with spin-up electrons, the experimenter simply blocks the spin-down beam with a metal plate.  Alternatively, the experiment can be used to count the number of spin-up and spin-down electrons by measuring the brightness of the two spots made by the two beams on a screen (or on the detector of a digital camera in the modern version of the experiment).  If one atom is passed through the apparatus, the electron's spin is determined from the location of the spot on the screen.  The Stern-Gerlach apparatus thus has two separate functions -- it can be used as a detector that measures an electron's spin, or it can be used to prepare a beam of electrons of identical spin.

As we will later see, the perplexing observation of only two distinct spins from a source implies that there is an intimate connection between the electrons and the measurement device.  Before getting ahead of ourselves, we need to get back to defining what we mean by free will.  Then we will be able to apply the concept of spin and measurement to choice and free will.

The act of making a choice is a step in the process of exercising free will.  A choice is the selection of one option out of the many available, and one hypothesis for its origin is free will.  However, there can be free choice in the absence of free will: an agent makes a free choice whenever no outside coercion is applied.  For example, does the dieter eat the cupcake or resist the temptation?  To make the choice freely, all external influences -- such as the watchful eyes of the concerned spouse -- must be eliminated.

Before establishing the fundamentals, we need to take a brief detour to define terminology by extrapolating common usage to the quantum realm.  Studies of free choice require controlled conditions: (1) an isolated environment, such as a dieter alone in a room; (2) an enumeration of all possible choices, such as eating the cupcake, not eating it, stepping on it, licking the icing, etc.; (3) the means for assessing the outcome without interfering with the subject's actions; and so on.  Each of these criteria is impractical even for the simplest human trials, but in principle such experiments are possible.

Let's consider a few scenarios as illustrations.  First, imagine the same individual placed in exactly the same environment on two consecutive days, eating the cupcake on one day but resisting it on the other.  A common-sense hypothesis that explains the two opposite outcomes is the individual's different state of mind on each day.  Repeating the same experiment many times would reveal a distribution of outcomes, allowing the assignment of a probability to each.  The classical view posits that the observed distribution reflects the different states of mind of the subject.  As such, the probabilities would be deemed causative, i.e. different states of mind cause different outcomes.

As we will see later, natural processes -- such as those governing a spin measurement -- lead to probabilistic behavior even for truly identical systems, and there are no underlying differences that explain the variation.  So, different outcomes do not imply differing causes or variations in the environment.  Rather, the well-defined probability distribution is built into the system.  The outcomes are uncertain, but the probability distribution is known with certainty.  We call such a system definitive: it has a definitive probability distribution.

It may seem paradoxical from our biased perspective that variations need not be attributable to an underlying cause, but that is what the evidence supports.  The "common sense" conclusion that all probabilistic behavior is causative is wrong.  As we later describe, electrons simply don't behave that way, and larger systems built of electrons inherit the same trait.  Clearly, the fact that definitive processes exist does not rule out the existence of causative phenomena, but definitive behavior cannot be summarily dismissed.

It comes as no surprise that the outcome of an experiment can depend on the environment.  For example, slight changes in the hue of the walls might change the probability distribution of the cupcake temptation experiment.  As we will see shortly, the consequences of measurements are mindbogglingly bizarre.  In many natural phenomena, the observed probability distribution can also depend on how the observer records the choice made, even when there is no discernible connection with the subject.  Imagine that the dieter eats the cupcake more frequently when the camera angle of the hidden video recorder is changed.  The average person may dismiss this observation as absurd, an artifact, or a flaw in the experiment; but this effect is found to be the dominant one in the Stern-Gerlach experiment and illustrates the intimate connection between observer and subject.  The observer and the observed cannot be separated.

"Temptation," another commonly used term, implies opposing forces that lobby for competing choices from which the individual selects an action.  Such multiple cognitive processes can live together in one brain -- the hunger center or its addictive kin apply an influence to eat the cupcake while knowledge of the consequences are weighed by the logical machinery.  If the brain is viewed as hardware that is configured on the basis of genetics, with software that is partially pre-wired and later shaped by the environment -- such as upbringing and education, the frequency of choices made can be predictable and therefore definitive.  Presented with the same options under the same environmental conditions, the brain will make a range of choices at frequencies determined by the probability distribution -- i.e. the system is definitive.  This type of response is called  deterministic.  Note that determinism does not imply that the same choice will be made every time, but that variations in the choice made are predictable and not due to a capricious agent.  This is perhaps the most subtle point in the argument, which will be more rigourously established in the following section.

The reality of a physical thinking brain eliminates the need for a tiny devil that sits on the left shoulder providing temptation and an angel on the right who provides encouragement to an agent free from the physical laws of the universe to make a choice.  What appear to be unpredictable and willful actions arise from the natural variability inherent in a measurement; and the consequences are deterministic in the sense that there are no underlying causes.  In contrast, free will implies an agent that is unfettered by the laws that govern the physical world.  A free choice, however, can be made by an agent that lives in the material world, but the choice is constrained by nature's laws.  The agent, then, can only make choices that are available to it from the hardware and software from which it is formed.  It can no more readily make a decision outside of the sensory input provided to it within the constraints of its programming than a human can will himself to transport to the other side of our galaxy.  Free choice is characterized by the absence of external coercion on the agent making the choice; but the agent is still bound by the laws of nature.  Free choice experiments can then be used to determine whether free will is required to make sense of the observations.

A prerequisite for fruitful discussions of free will is the identification of the forces at work in making a choice.  The first round of arguments is inevitably muddied by the lack of precise definitions, such as the meaning of "material" and "spiritual" -- the realms where this causative agent must reside.  A mixture of sloppy thinking and imprecision gives debaters the latitude to skirt issues that are often critical to meaningful discussions.

In the arguments presented here, we use the broadest definition of "material," which includes anything that can be observed and measured with our senses or through proxies such as telescopes equipped with cameras, Geiger counters, and x-ray machines.  This includes common materials made from atoms and molecules, light beams, dark energy, and sound waves.  Though the world may seem chaotic and random, the underlying phenomena are known to be governed by well-defined, reproducible laws.  Even the observed mayhem of nature can be predicted using simple classical laws, without invoking quantum mechanics.  As we will later show, the behavior of small particles, such as electrons and atoms, necessarily requires the use of quantum mechanics.  When the atoms and molecules act together in large associations, the paradoxical behavior cancels, leading to the kinds of phenomena that behave according to what we call common sense.

The brain, which is responsible for sentience, is so large that quantum mechanical effects cancel.  As such, quantum effects cannot be invoked as the agent of free will.  We later show that even residual quantum effects cannot be a refuge for such an agent, and provide indisputable proof that free will cannot be a consequence of quantum mechanics.  Free will can only be salvaged by proposing the existence of an extra-material agent that is the source of the observed randomness.  But since much of the randomness that is observed can be accounted for by the laws of physics, the postulated agent is unnecessary.

In the section that follows, a hierarchical approach is taken to study simple systems that are combined to make more complex ones.  Interactions between elementary particles and an observer are used as a paradigm of measurement and a prototype for analyzing free will.  We will see that even in the simplest cases, where only binary choices are available, behavior reminiscent of free will is observed.  In these cases, free will is ruled out based on the definitive nature of the probabilities associated with the observations.  Sources of the randomness will be shown to be an inherent property of the particles, not a consequence of the observer's ignorance or the action of an external but hidden entity.  The logical development will be built on meticulous observations and rigorous arguments, which conclude that there can be no hidden agent at any level of the hierarchy that is shielded from view.

A mischievous philosopher may argue that the scientific method is based on an inductive process, so a finite number of measurements cannot predict with certainty every eventuality, leaving the door open for a world of spirits and physics-breaking agents.  To the contrary, the laws of physics are being put to the test trillions of times per second, leaving the crack in the door too small for the kinds of appreciable violations that would be needed if free will were being exercised by the masses as part of their daily routine.  As new technologies build on older ones, the rate at which the laws of physics are being tested is growing exponentially.  More on this later.

In concert with tests of nature's laws from the fundamental to the complex, our general understanding of nature is expanding, encompassing more phenomena in grander sweeps.  Gaps in knowledge, where free will could take refuge, are closing from above and below, putting the squeeze on the existence of a capricious agent.

One might argue that only a great leap of faith connects the behavior of microscopic particles to the likes of humans or to galactic scales.  The reductionist view is validated in many cases where the actions of many particles, each obeying simple laws of physics, lead to predictable phenomena that are observed on much larger scales.  In other cases, such connections are more tenuous, making it difficult or impossible to predict a system's behavior on larger scales on the basis of the simple laws that govern its parts.  However, this fact alone is not a compelling reason for invoking a spirit to fill these gaps.  The need to confirm a philosophical disposition that is fed by subjective experiences and reinforced by religiosity is not reason enough.

As we will later show, even when there is no clear path from a microscopic system to larger scales, well established laws are found to hold at each level that predict definitive behavior, which is observed over and over again.  Furthermore, there are hints of connections between the hierarchies that are becoming clearer as science marches forward.  In the next section, we start with the simplest systems and show how the behavior of composites can be deduced from smaller parts, leading to the kinds of behavior that might appear willful, but requires no otherworldly agent.

Specifically, we will start with an analysis of the Stern-Gerlach experiment and show how measurements force us to accept the notion that the outcome of an experiment can behave probabilistically without an underlying cause, and that the observer is inseparably connected to the subject.  If one clings to the notion that all variations have a cause, then one must attribute an underlying agency to the electron's behavior.  We will show that a rigorous analysis of the electron's behavior is incompatible with an invisible agent.  More precisely, the existence of what are called hidden variables is found to be incompatible with observations, sealing the case against free will in an electron.  With this proof in hand, we then consider more complex materials made of multiple electrons.  The richness of the observed phenomena grows as expected, yet hidden variables continue to be forbidden because their existence contradicts observations.  When a system becomes too large to be modeled from first principles, we introduce the concept of hierarchies, where new laws appear to emerge on larger size scales but in fact are derivable from the simpler cases.  Finally, an estimate is made of the reliability of the data and its interpretation.

ELECTRONS ACTING ALONE HAVE NO FREE WILL

When a beam of spin-up electrons is prepared by blocking the spin-down ones, subsequent measurements of the spin using a vertical magnetic field find only spin-up electrons.  A compact way of stating this fact is that spin-up electrons remain spin-up when the measurement is along the vertical axis.  Similarly, spin-down electrons are always measured to be spin-down.  The implication is that the spin remains unchanged as an electron travels through empty space.

Next, start with spin-up electrons and do the measurement along the horizontal axis.  Since the electron's prepared spin is perpendicular to the now-horizontal magnetic field, classical physics predicts no horizontal deflection, so the measurement should yield neither spin-left nor spin-right electrons.  Instead, an equal number of spin-left and spin-right electrons are observed.

This behavior persists even for a single electron; if the electron is prepared with spin-up, and the experiment is repeated many times, half the time spin-left is observed and the other half, spin-right.  To summarize, a spin-up electron always remains spin-up and a spin-down electron always remains spin-down when its vertical spin is measured.  Similarly, a spin-left electron always remains spin-left and spin-right remains spin-right with a horizontal measurement.  However, when the electron is prepared to have a definite spin along a particular axis, a measurement perpendicular to that axis finds opposite spins with equal probability.  This puzzling behavior runs counter to common experience, making one imagine an experimentalist banging on the apparatus to make sure it is working properly.
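These rules are easy to mimic numerically.  Below is a minimal sketch in Python -- not a physics engine -- that simply hard-codes the observed statistics: a measurement along the preparation axis always reproduces the prepared spin, while a perpendicular measurement yields either outcome with probability 1/2.

    import random

    def measure(prepared_axis, prepared_value, measured_axis):
        """Outcome of a spin measurement on a freshly prepared electron."""
        if measured_axis == prepared_axis:
            return prepared_value        # same axis: prepared spin always recovered
        return random.choice([+1, -1])   # perpendicular axis: 50/50, per observation

    # Prepare spin-up along the vertical axis, then measure horizontally many times.
    outcomes = [measure("vertical", +1, "horizontal") for _ in range(10000)]
    print(outcomes.count(+1) / len(outcomes))  # close to 0.5

Running the sketch gives a number near 0.5, mirroring the equal split of spin-left and spin-right counts seen on the screen.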

These same results have been found by many researchers, including physics students like me 35 years ago.  Quantum mechanics, like all science, is based on controlled experiments.  Students are expected to question everything and to do the experiments to "see" for themselves.  This instills a healthy skepticism and builds an appreciation of the indispensability of measurement.  Most importantly, the scientific method fights our natural tendency to draw conclusions from subjective evidence.

Getting back to Stern and Gerlach, their results left physicists with a bizarre puzzle.  When there are many electrons, one can argue that when preparing them to have spin-up, we are inadvertently making half of them spin-left and half of them spin-right.  But when we prepare a single spin-up electron we know through experimentation that it always remains spin-up.  So, how can it be measured to be spin-left or spin-right?  The only resolution is that a spin-up electron, though it is a single particle, is composed of an equal proportion of spin-left-ness and spin-right-ness; and, this mixture of properties is hidden until a measurement is performed.  This explanation was proposed by Niels Bohr and is the most popular paradigm in use by contemporary physicists. The Copenhagen interpretation, as it is called, leaves physicists uncomfortable, but it is used because it makes accurate predictions of the way nature behaves.

The notion of a definitive electron has to be abandoned in favor of what is called the state vector -- a probabilistic object from which any material parameter can be determined using simple rules.  The twist is that the state vector is represented in terms of the amount of this-ness or that-ness with respect to a measurement axis.  For example, if the spin is measured along the vertical axis to have spin-up, it will remain so provided that subsequent measurements are performed along the vertical axis.  In this case, the state vector is made of two parts: a spin-up part, which is assigned a probability of 1, and a spin-down part, assigned a probability of zero.

To measure the spin in the horizontal direction, the coordinate axes must be rotated to represent the state vector in terms of left- and right-spin probabilities, which yields a state vector with a spin-left part of 1/2 and a spin-right part of 1/2.  (More precisely, the squares of the state vector's components give the probabilities, but we will gloss over this fact to keep our arguments simple.)  This is no stranger than width becoming height and height becoming width when we rotate an object by 90 degrees.  A measurement of the horizontal electron spin, if repeated over and over again for an electron prepared in the spin-up state, will yield an equal number of left- and right-spins.  Thus, on average, the spin is zero -- as predicted by classical physics; but the probabilistic picture must be used when predicting the observation of a single electron.
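In standard notation, the detail glossed over above reads as follows: the spin-up state is an equal mixture of the two horizontal states with amplitudes 1/\sqrt{2}, and the squares of the amplitudes give the stated probabilities,

    |\mathrm{up}\rangle = \tfrac{1}{\sqrt{2}}\,|\mathrm{left}\rangle + \tfrac{1}{\sqrt{2}}\,|\mathrm{right}\rangle,
    \qquad
    P(\mathrm{left}) = P(\mathrm{right}) = \left(\tfrac{1}{\sqrt{2}}\right)^{2} = \tfrac{1}{2}.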

To review, an electron (like all particles) cannot generally be viewed as being in just one state; it is simultaneously in a mixture of states, and the components of the mixture depend on the coordinate axes.  The state vector lists the probabilities of finding the electron in each state, and averaging a particular property over many observations leads to the classical prediction.  The state vector is a definitive object that describes every possible result of a measurement, to which it assigns probabilities.  However, the seeming paradox of a particle being simultaneously in multiple states deserves further attention.

Before the horizontal measurement is performed on a spin-up electron, it is simultaneously in two opposite horizontal spin states.  Intuition based on everyday experience may lead one to conclude that the electron is "really" in one state or the other, and the probabilities merely quantify our ignorance, as in a coin toss before the quarter is uncovered.  But this is not the case.  The spin is "truly" indeterminate, a state that has no classical analogue.  Once we make a measurement, the system jumps into one of those states (called the collapse of the state vector).  The notion that the system is in a fuzzy state until a measurement is performed is an affront to classical physics, contradicting the cherished belief that nature is a machine that operates independently of the observer.  The fact that a measurement forces the system into one of its possible states both makes the observer an integral part of the universe and gives her the privileged position of deciding when to collapse the state vector.

This interpretation of a measurement is understandably confusing and counterintuitive, but this is the way that nature behaves.  In our daily lives, our senses sum over billions and billions of particles, so we see only the averages, which behave in the way that we would expect from experience.  For example, light is made of photons, and a forty-watt light bulb emits ten billion billion photons every second.  Similarly, a dime has about a million billion billion electrons.  It is no wonder that we see none of the fuzziness associated with individual photons and electrons.  To the human eye, light and matter are continuous and smooth.  Thus, our intuition is biased by observations in which the quantum effects are averaged away, leaving behind only classical behavior.
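The photon number is a rough order-of-magnitude estimate.  Assuming a few watts of the bulb's power emerges as visible light, at roughly 2.5 eV (about 4 x 10^-19 joules) per photon,

    N \approx \frac{2\ \mathrm{W}}{4 \times 10^{-19}\ \mathrm{J}} \approx 5 \times 10^{18}\ \text{photons per second},

which is indeed on the order of ten billion billion.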

Einstein despised this probabilistic character of quantum theory, stating, "God does not play dice."  He had difficulty arguing against the theory because it predicted every observation with impeccable accuracy.  To his irritation, each thought experiment that he introduced as an example of the absurdity of the theory ended up reinforcing it by introducing what seemed to be nonsensical predictions that were later experimentally verified.  Out of frustration, he concluded that there must be a deeper yet-to-be-discovered theory with hidden variables that make the universe appear less orderly than it "truly" is at the most basic level.

Not long after Einstein's death, John Bell introduced his famous inequality.  Using relatively simple arguments (but too complex for the present discussion), he showed that hidden variables would restrict what an experiment could measure to a range given by an inequality.  Quantum mechanics predicts violations of Bell's inequality, and experiments confirm those violations, so there can be no local hidden variables -- sealing the coffin on Einstein's antiquated belief in a deeper theory.  There can be no hidden variable theories, so there can be no hidden causes.  The probabilistic interpretation of a definitive state vector is firmly established as fundamental.  It's the way nature behaves, so there can be no hidden agent whose actions cause the fluctuations that are observed.  This type of blanket statement may appear incredibly arrogant, but as we will show later, the evidence is far more convincing and rigorous than that for a crime caught live by every television camera on the planet.
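To make Bell's result concrete, here is a minimal numerical sketch -- a standard textbook illustration, not Bell's original derivation.  It assumes the quantum prediction for the spin correlation of an entangled pair measured at angles a and b, E(a, b) = -cos(a - b), and evaluates the CHSH combination, which any local hidden-variable theory must keep between -2 and 2.

    import math

    def E(a, b):
        """Quantum-predicted correlation for an entangled pair measured at angles a, b."""
        return -math.cos(a - b)

    a, a2 = 0.0, math.pi / 2               # first observer's two measurement angles
    b, b2 = math.pi / 4, 3 * math.pi / 4   # second observer's two measurement angles

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))  # about 2.83 = 2*sqrt(2), outside the classical bound of 2

The printed value, 2.83, exceeds the bound of 2 that hidden variables would impose -- and laboratory experiments side with the quantum prediction.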

The measurement of the spin of the electron serves as the simplest example of the fundamental nature of a free binary choice.  Imagine that a particular spin state is formed using a Stern-Gerlach apparatus.  Subsequently, the electron spends its metaphorical life responding to the stimuli it encounters, somewhat like a human being making choices.  The elevator arrives and the operator asks, "Are you interested in going up or down today?"  Being in the pure spin-up state, it responds "up."  When the door opens, it is presented with a left-right choice and turns right, at which point it sees a ladder and makes the choice to climb up or down at a 45-degree angle.  When another electron is prepared in the identical initial state as the first one and is provided with the same alternatives, it is observed to make different choices.  Two individuals with free will could be imagined behaving differently, as do the electrons, responding in what appears to be a nondeterministic way when presented with the same choices.  Does the appearance of nondeterministic behavior imply that free will is at work?

We have seen that the probabilities of outcomes that are coded into the state vector are not the result of a cause in the classical sense.  The probabilities are built into the state vector and are determined from how the system was prepared and from what perspective the measurement is taken.  In the case of a coin toss, we don't know the outcome until it hits the floor.  However, this is willful ignorance in the sense that a high-speed video camera could record a movie of the spinning quarter, which could be analyzed to predict the outcome before the coin lands.  In the quantum case, the outcome is absolutely unknown to any observer, because the system is in what's called a superposition of states, where the outcomes are mixed together.  The act of the measurement then forces the state vector to collapse into just one of the possible states based on the probabilities built into the state vector.

The conclusion from this analysis is that variability in the actions of two individual electrons need not imply causative factors.  Quantum effects lead naturally to such variability in the case when two electrons are prepared in the same way and placed in the same environment.

Particles in the quantum world behave in many ways like humans.  For the electron, future choices are made based on probabilities that are determined from the initial conditions, not unlike the cards we are dealt at birth.  It is possible to impose evolutionary forces on an electron to favor a spin-up state.  For example, if the electron is initially in a spin-down state, first do a horizontal measurement, which will result in a spin-left or spin-right state, each with equal probability.  Once in the spin-left or spin-right state, a vertical measurement will collapse the state vector into a spin-up state half the time.  If the electron is found to be in the spin-up state, the goal is achieved.  If the spin-down state results, restart the process by measuring the horizontal then the vertical spin.  Each pair of horizontal/vertical measurements has a 1/2 probability of flipping the spin to the desired spin-up state, so the desired state can be reached with high probability after only a few iterations of the selection process.  This behavior can act as a metaphor for evolutionary forces that favor a particular spin orientation, or for an upbringing that defines the character of the individual.
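The selection process just described is easy to simulate.  A minimal sketch, using the same 50/50 collapse rule as before:

    import random

    def drive_to_spin_up():
        """Alternate horizontal and vertical measurements until spin-up appears."""
        rounds = 0
        state = "down"
        while state != "up":
            random.choice(["left", "right"])       # horizontal measurement scrambles the vertical spin
            state = random.choice(["up", "down"])  # vertical measurement: 50/50 after a horizontal one
            rounds += 1
        return rounds

    trials = [drive_to_spin_up() for _ in range(10000)]
    print(sum(trials) / len(trials))  # about 2 rounds on average (geometric, p = 1/2)

The average of about two rounds reflects the geometric distribution: each round succeeds with probability 1/2, so the chance of still being spin-down after n rounds shrinks as (1/2)^n.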

The spin of one electron, then, can display the behavior of intention and conscious choice; but the mathematical structure of the state vector and the constraints imposed by Bell's theorem remove the possibility of free will in action and demand that the state vector be definitive.  As such, the probabilities are determined definitively from the initial conditions and the measurement axis.  Since the world is not made of single electrons, we next consider two electrons that act together as the precursor to an inductive argument that can be applied to more complex systems, including conscious beings.

IT TAKES TWO TO TANGO

Electrons may seem to have little to do with our everyday lives, but they are essential in defining the character of atoms, are central to the chemical bonds that make the molecules of life, and are fundamental to our senses.  The sensation of touch comes from repulsive forces between electrons, and we see things because light scatters from, or is absorbed by, electrons in atoms and molecules.

The electron has a truly out-of-this-world property that we have alluded to, and it stems from the relationship between the spin and the magnetic dipole moment, which we have used interchangeably but now need to define more precisely.

Let's begin, macroscopic biases and all, by picturing the electron as a sphere whose volume is filled with a smooth, smeared-out electric charge.  A spinning sphere has angular momentum -- a property of mass in circular motion.  For convenience, let's focus on the equator, which is furthest from the spin axis and travels at the highest speed -- thus contributing most to the angular momentum.  Since moving electric charge is by definition a current, the equator forms a circular loop of electrical current.

The magnetic dipole moment is defined as the product of the current and the area of the loop that carries the current.  It points perpendicular to the loop, so for the earth, the dipole moment from an electric current traveling around the equator would point north.  To get the total dipole moment, one must sum over all the loops on the surface and in the interior.  Since the electron is charged and is spinning, it is not surprising that it has a magnetic dipole moment.

Quantum mechanics has a way of muddying the simplest ideas.  The descriptions up to this point have avoided the use of mathematics, but without mathematics, we will miss a very important point.  To appreciate it, we must first return to a deeper discussion of the state vector.  The state vector for electron spin can be constructed directly from the observed probabilities of finding the electron's spin along the two mutually perpendicular axes that are perpendicular to its originally-prepared spin, and it is representable as a pair of complex numbers -- called a spinor.

Every possible spin can be fully described by a unique pair of complex numbers.  When the rules of rotation are applied to a spinor, a very strange thing happens.  To appreciate its strangeness, first consider what happens to a chair that is rotated by 360 degrees about any axis.  If you do this in your parents' house when they are gone, they'll never notice that anything has changed when they return.  For the electron, the state vector changes by a minus sign, which can be detected by measurements.  If you were to rotate an electron by 360 degrees, your parents -- if physicists -- would notice.  The electron must be rotated by 720 degrees for the rotation to go unnoticed.
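One standard way to write this down (a sketch for the mathematically curious): a rotation by angle \theta about the spin axis acts on a spinor as

    R(\theta) = e^{-i\theta\sigma_z/2}
              = \begin{pmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{pmatrix},
    \qquad
    R(2\pi) = -\mathbb{1}, \qquad R(4\pi) = +\mathbb{1},

so a full 360-degree turn flips the sign of the state vector, and only a 720-degree turn restores it exactly.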

This behavior is weird even for the quantum world.  Nothing in our space behaves in this way.  The electron gets away with it because the spin angular momentum does not live in regular space where airplanes fly and ballroom dancers dip.  The electron does not have an equator, but oddly, it still has spin angular momentum and a magnetic dipole moment.  Though counterintuitive and confusing, this is the way nature behaves, and we have to accept it without asserting our macro-world biases.  To gain experience with a spinor in the real world, place a cup of coffee on the palm of your hand and make the appropriate contortions so that the cup rotates smoothly in only one direction.  You'll discover that it takes a 720-degree turn of the cup to bring it and your hand back to their original positions without spilling the coffee.

The magnetic dipole moment associated with the spin is what deflects the beam of atoms and enables the spin to be measured; the spin is found to be exactly half of the fundamental quantum of angular momentum.  This half-integer spin makes the electron what's called a fermion.  Two fermions cannot occupy the same quantum state, so two electrons can share a spatial state only if their spins are opposite.  This property is of extreme importance in the formation of atoms and in making our world the way it is.  If this strange type of spin did not exist, all matter would collapse into an unrecognizable form, making our existence impossible.

The electron lives in space, so its position, too, can be measured.  In contrast with the spin, which can have only two outcomes, the position can be any of the infinite number of points within space.  But a measurement of the position exhibits probabilistic behavior that cannot be due to hidden variables, which excludes the possibility of an underlying cause.  The probabilities associated with the infinite number of possible positions of the electron are also part of the state vector.  Unlike the classical view of an electron as occupying a particular position, the state vector spans all space.  Recall that the spin was characterized by two complex numbers.  The state vector for position associates a complex number with every point in space.  The spatial and spin state vectors together fully describe the electron.

The Schrodinger equation is used to determine the state vector of any particle under the action of an arbitrary force.  Give the Schrodinger equation a mathematical expression for a particular type of force, and it will spit out a function that assigns a complex number to every point in space.  This function is called a wave function because it travels through space like a wave that can constructively and destructively interfere with itself.  One example of a source of force is a hydrogen nucleus -- a single proton, with electric charge equal in magnitude and opposite in sign to the electron charge, so it attracts the electron.  Thus, the electron is a particle that moves around regular space like the rest of us, described by the wave function; but it has a secret pocket that lives in another dimension in which it keeps its spin hidden from view, represented by a spinor.
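For completeness, the equation itself, in its standard time-dependent form, reads

    i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
    = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})\right]\psi(\mathbf{r},t),

where V(\mathbf{r}) encodes the force (for the hydrogen atom, the Coulomb attraction of the nucleus) and \psi is the wave function whose squared magnitude gives the probability density of finding the electron at each point.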

For the case of a nucleus, the Schrodinger equation does not spit out the classical elliptical orbits found using Newtonian mechanics.  Instead, the wave function gives the probability density of finding the electron at each point in space and looks like a fuzzy blob centered on the nucleus.  These blobs come in all sorts of shapes and sizes that can be described by three integers, called quantum numbers (a fourth quantum number specifies the spin).  The upshot of the electron being a fermion is that each electron must have a unique set of quantum numbers.  One of these quantum numbers is related to the energy, so adding electrons to a nucleus is like adding balls to a bucket.  First, the bottom gets filled, then the next layer -- which has more energy than the first -- and so on.  All the chemical properties of an atom, and the mechanism by which atoms combine to form molecules, originate in how the electrons stack.  Chemical bonds are then the spilling over of the top layer of the little balls from one bucket to another.
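The bucket-filling bookkeeping can be made concrete in a few lines of code.  A small sketch: for each shell n, the integer l runs from 0 to n-1, m runs from -l to l, and each (n, l, m) slot holds two electrons of opposite spin.

    # Count the electron capacity of each shell n: sum over l of 2*(2l+1) states.
    for n in range(1, 5):
        capacity = sum(2 * (2 * l + 1) for l in range(n))
        print(n, capacity)  # prints 2, 8, 18, 32 -- the familiar 2*n**2 pattern

These capacities, 2, 8, 18, 32, are exactly the shell sizes that organize the periodic table.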

The Schrodinger equation can be used to predict the spectrum of the light emitted from atoms and molecules that are excited with electric currents or light, and the predicted and measured spectra match beautifully for those systems for which the Schrodinger equation can be solved exactly.  Nothing unexpected is observed, and everything that is observed is predicted from the wave equation.  Thus, simple atoms and molecules, which combine electrons and nuclei, don't show any behavior that would hint at the nondeterminism expected of free will.  The state vector of such complex systems remains definitive.

Large molecules that were beyond the capabilities of computers in the past are a piece of cake today.  As the horizon of computer capabilities expands, the level of complexity that can be numerically computed increases.  All predictions continue to hold, and no new unpredicted properties are observed.  For molecules that are out of range of exact computations, approximation techniques yield definitive state vectors, albeit with predicted energies that deviate slightly from measured ones.  These deviations, however, show no hints of randomness beyond the probabilistic nature of quantum mechanics.

This ground-up development of the quantum theory of atoms and molecules, from which we are all made, shows only deterministic behavior.  Since we are made of atoms and molecules in great numbers and of many types, direct calculation of our behavior from first principles is not practical.  Given that the degree of complexity of a system grows super-exponentially with the number of interacting particles it contains, the richness of human behaviors that are observed can be explained without the need to call upon the supernatural.  Indeed, all sorts of emergent behavior can be found in nature, including hurricanes, sand dunes, flocks of birds, and termite mounds.  In each case, very simple rules that govern interactions between the parts lead to complex global patterns.

Though the complex patterns of swirls, foam, and spray from a waterfall cannot be predicted from a quantum mechanical calculation that accounts for every single electron and nucleus because of limitations of computer power, there are no inconsistencies between the observed behavior and the laws of physics.  We do not need to invoke a spirit of swirls and a specter of sprays to impose a nondeterministic cause on their perceived randomness.  We don't need an overlord to tell each grain of sand where to settle to make the amazingly intricate global patterns of dunes.  Is there a rational basis for invoking a non-material entity that is intertwined with each human to explain our complexity and to remove determinism?  Or are we imagining spirits in a grand conspiracy of self-deception that is sustained by ritual and ruled by dogma, which is so intense that it compels believers to fight tooth and nail the notion of a material world that is ruled by simple laws?

Defenders of free will are backed into an intellectual corner.  Free will can be eliminated from all the known places where the laws of physics are always found to be obeyed.  The unknown places that have yet to be studied are their last resort.  These dimly-lit alleyways include the full characterization of complex systems such as human brains, which are beyond today's computational reach.  This tactic is called arguing from ignorance -- just stuff free will into those unknown places that are beyond our present technological means of verification.  Once those places are uncovered, find another place to hide.  There is lots of room just beyond our ever-growing sphere of knowledge where unjustifiable beliefs can remain conveniently out of sight.

Heuristic methods can shed light into the cracks of our understanding.  Large collections of atoms can form solids, liquids, gases, and plasmas.  When acting together, as water in a puddle on the street or helium in a balloon, the detailed nature of the constituent molecules no longer matters in defining how water swirls and splashes, gases move a piston in your engine, and solids expand when heated.  These kinds of behavior are treated with macroscopic theories such as fluid dynamics, thermodynamics, and statistical mechanics.  The approach is to calculate exactly the behavior of a small number of particles, use these results to determine how the particles behave on average, make the system larger to check whether it behaves in the predicted way, and stitch the theory together into hierarchies.  At the atomic level, electrons are characterized by wave functions that surround nuclei; atoms in large molecules behave as tiny balls connected by invisible springs; large molecules stick together like wriggling worms; the wriggling worms form membranes that form cells; and the cells pass electrical currents between them to yield thought and consciousness.  There is no reason to dismiss the material origin of thought, though one could always argue from ignorance.

Many of these macroscopic theories were developed before quantum mechanics came on the scene, so material-dependent properties such as specific heat, elasticity, and viscosity were defined and tabulated for many materials -- as if they were fundamental properties -- giving engineers the information needed to make things.  Consider, for example, specific heat.  It is a constant for a given material that relates the heat added per unit mass to the resulting temperature increase.  For a gas in which the molecules are treated as idealized point particles, the relationships between temperature, volume, pressure, entropy, etc. can be calculated, from which the specific heat can be determined.  Experiments that independently vary each parameter in turn have shown the theory of ideal gases to work well.  One of the significant accomplishments of the early twentieth century was the development of a formal connection between the quantum description and the macroscopic theories, a field now referred to as statistical mechanics.
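To make that connection concrete, here is the textbook result for the simplest case -- an ideal monatomic gas -- where statistical mechanics yields the internal energy directly, and the specific heat at constant volume follows by differentiation:

    U = \tfrac{3}{2} N k_B T
    \qquad\Rightarrow\qquad
    C_V = \frac{\partial U}{\partial T} = \tfrac{3}{2} N k_B,

a constant, exactly as the engineers' tables assume.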

The theory of gases serves as an example of a test that spans the microscopic realm of quantum mechanics to the macroscopic world, where an observer can watch the transition between the two unfold before their very eyes under the microscope.  In such experiments, the classical and the quantum theories are found to agree with each other; but more importantly, the observations give insights into why the macroscopic theories work and how they transition into the quantum world.  This show unfolds with the simple turn of a knob that controls the temperature.  To understand such experiments, we first need to take a diversion to discuss fermions and bosons.

The half integer spin of the electron, which makes it a fermion, endows it with its asocial behavior.  Bosons, which have integer spin, are a gregarious sort who like to occupy the same state.  Helium-3 is an example of a fermion that behaves antisocially even when it's cooled, staying distant from its compatriots.  Helium-4, on the other hand, is a boson, so all the atoms collapse into a tight grouping at low temperature, called a Bose-Einstein condensate.  The super-cooled liquid shows peculiar behavior such as superfluidity.  If your cup of coffee were made of a Bose-Einstein condensate, one stir in the morning, and the little vortex would continue to swirl hours later.

In the early days of superfluidity research, a story surfaced of a physicist who discovered that his dewar (like a thermos bottle made of silvered glass and a vacuum layer for insulation) had sprung a leak.  He watched helplessly as his precious superfluid leaked out onto the floor and boiled away.  He replaced the dewar, and again, it leaked.  Upon closer examination, he found that the helium was climbing up the inner wall of the vessel, spilling over the edge, and running down the outer surface.  Since bosons like to congregate, the ones behind it followed the stream up and over.

The purpose of this diversion is to show that a macroscopic object (a clump of helium-4) forms, when cooled, a purely quantum object whose properties can be calculated with quantum theory.  At higher temperatures, the quantum theory and the classical theory agree.  This is an example of hierarchies.  At lower temperatures, we must use quantum theories; but at higher temperatures -- where quantum theories are too complex to apply effortlessly -- classical theories are adequate.  This is made possible by the fact that quantum effects become irrelevant at higher temperatures.

This is a simple illustration of how there is a direct and smooth connection between the quantum world and the macroscopic world that can be controlled with a turn of the thermostat.  We can apply the classical theories with confidence that they give accurate results.  Most importantly for our argument, the smooth transition from the well-defined Bose-Einstein condensate to the gaseous classical system behaves as we expect from the theory.  The probabilistic behavior turns into the reliable deterministic clockwork that Einstein cherished, absent any unexplainable behavior that would suggest free will.

Through a careful sequence of experiments in overlapping domains of the hierarchies, science can establish connections between them and determine which parameters are relevant and which can be ignored.  For example, a quantitative description of perspiration does not require a quantum calculation of the electron spins in all the molecules.  Though electron spin is essential in determining the structure of a water molecule, and the flow of water depends on the forces between hydrogen and oxygen atoms, the behavior of billions and billions of molecules can be described by a few macroscopic parameters such as viscosity, specific heat, latent heat of vaporization, and surface tension.

The methodical study of overlapping domains to establish connections between them, and the investigation of the laws within a domain stitches together the fabric of our knowledge.  Molecules are connected to make proteins, DNA and RNA -- which drive the formation of cells that aggregate to make organs, which form you and me.  At each step of the process, the larger objects are built from smaller ones that follow the simple rules within that hierarchy, but in large numbers, new laws seem to come into play.  In reality, they are just manifestations of the more fundamental laws.  The behavior of this larger system can appear highly complex because of the diversity of parts that are acting together; but, no additional factors, such as the fickle action of free will, need to be introduced to describe what is observed.

At some point, things get a bit more complicated and a dichotomy of sorts arises.  First, a system can have a complex underlying structure that leads to a countless number of choices.  Each process contributing to these choices from the level of hierarchy below is definitive; but the large number of choices gives the appearance of a randomness that might be attributed to free will.  Second, there are some processes, even simple ones, whose governing equations have the property that microscopic changes in the initial conditions cause divergent behavior, as in the proverbial butterfly in China that causes the hurricane in the Caribbean.  In both cases, an ad hoc claim in support of free will might be put forward.

In this meticulous ground-up analysis, there is little room for free will to hide except in the most complex systems, such as sentient beings.  Gaps in the science make the case against free will less than ironclad.  However, a gap in knowledge is not in itself evidence for free will.  Why should free will be invoked when there is no evidence in support of it at any level?  In fact, free will and morality can be shown to be evolved traits, nullifying the hypothesis of free will at the highest level of the hierarchy, i.e., at the level of social groups.  So why is the concept so strongly defended?

It comes down to a feeling.  Those who vehemently support free will do so not on a shred of evidence, but on its perceived "necessity" or on gut instinct.  When asked for proof, a common answer is that not all things that are true are provable.  Others point to religious texts, where morality is handed down from God, who provides each being with a moral compass that somehow guides a non-corporeal part of us into doing what is commanded.

The most succinct answer to such arguments was articulated by Christopher Hitchens: "What can be asserted without evidence can be dismissed without evidence."  This is a variation on Bertrand Russell's famous orbiting teapot.  The quote speaks for itself.

Some argue that personal experiences and subjective proof are sufficient for belief in the metaphysical, and articles of faith lead one to accept free will as a truth. Scott Bidstrup argues that, "If no metaphysical explanation is required to explain your experience, your 'evidence' is no longer evidence of anything metaphysical."

The line of reasoning presented here starts from microscopic processes and shows that the evidence supports a fully specified world in which hidden variables are eliminated, leaving no room for free will.  Systems of ever greater complexity are built using only the underlying fundamental principles, and the observed behavior agrees with predictions, providing strong evidence that larger systems do not inherit new properties unexplained by the fundamental principles alone.  Those who insist that free will exists have the intellectual obligation to offer supporting evidence.  In its absence, free will is merely an unproven hypothesis.

HOW CAN WE BE SO SURE?

Complaints are often voiced against scientists for being arrogant and closed-minded.  I can't imagine how individuals who accept bizarre notions such as the curvature of space-time, Hawking radiation from black holes, the big bang, and evolution could be considered closed-minded.  These ideas are accepted because of the evidence.  When propositions are put forward without evidence, they are not accepted until enough convincing evidence comes to light to form a consensus.  Truth is a precious commodity that is the reward for painstaking investigation.  The relentless championing of ideas for which there is no proof is the pinnacle of arrogance.

A line of evidence was presented above that leaves little wiggle room for free will in the fundamental processes of the material world.  As we transition from the simple to the complex, we admit that bigger holes permeate our knowledge, allowing cracks through which free will might ooze.  However, the existence of gaps is not an invitation to fill them with the newest unproven fad.  A fair question remains about the soundness of the fundamental scientific theories themselves.  If they are not adequately tested, our knowledge rests on shaky ground, as do our conclusions.

The front-line evidence lies in well-controlled laboratory experiments, which are repeated over and over by many researchers using various experimental designs.  Even thousands of tests have a nonzero, though negligible, chance of all being spurious.  A more subtle confirmation comes from consistency with related theories; agreement in all areas of overlap between a new theory and all other established ones increases its sphere of validity.  There are countless interconnections between various experiments, which form the tapestry of science; even loose or torn threads don't matter much because of redundancies.  This interconnectedness, through its diversity and breadth of applications, provides stronger evidence than any single line of inquiry.  Finally, technological applications based on science require every bit of the science to work flawlessly; otherwise, the device fails.  There are also huge numbers of technologies that use multiple overlapping scientific principles, requiring an even higher degree of adherence to the laws of physics if they are to work reliably.  The fact that a large variety of applications run smoothly in our society provides the strongest evidence for the laws of physics.
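To put a number on the "nonzero, though negligible" chance mentioned above that every replication of an experiment is spurious, suppose (purely for illustration; the 5% figure is my own assumption) that each independent replication has a one-in-twenty chance of being a false positive.  The probability that all N replications are spurious is 0.05^N, which underflows ordinary floating point long before N reaches one thousand, so the sketch below works with logarithms.

```python
import math

p_spurious = 0.05   # assumed false-positive chance for one experiment
for n in [10, 100, 1000]:
    log10_prob = n * math.log10(p_spurious)
    print(f"{n:5d} independent tests all spurious: about 10^{log10_prob:.0f}")
```

A thousand independent replications all being flukes comes out at roughly one chance in 10^1301, a number with no physical counterpart anywhere in the observable universe.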

The simplest estimate of our certainty in the validity of science is a simple count of failures per unit of sample size.  To illustrate, we return to the electron.  What would be the consequence of an electron losing its property of being a fermion?  If this were so, nothing would prevent the lowest energy configuration of an atom or molecule from being the one where all electrons are in the lowest energy state, i.e., all the balls in the bottom layer of the bucket.  Then the nature of interactions between atoms would change so that a semiconductor would no longer behave in the usual way, making computers useless.  More damning would be chemical bonds that fail to behave in the expected ways; even if only a small percentage of bonds failed, our bodies would deteriorate.  The signature of each such failure would be a burst of light.  We would literally be glowing in the dark.
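A toy version of the balls-in-a-bucket picture makes the point.  In the sketch below (my own construction, with equally spaced illustrative energy levels), electrons obeying the exclusion principle stack two per level -- spin up and spin down -- while hypothetical non-fermionic electrons would all pile into the ground state, erasing the energy structure that chemistry depends on.

```python
def total_energy(n_electrons, fermions=True):
    """Ground-state energy for electrons filling levels E = 0, 1, 2, ...
    (arbitrary units).  Fermions admit at most two electrons per level
    (spin up and spin down); hypothetical non-fermions all fall to E = 0."""
    if not fermions:
        return 0  # every electron sits in the lowest level
    # Electron k occupies level k // 2 (two electrons per level).
    return sum(k // 2 for k in range(n_electrons))

for n in (4, 10, 20):
    print(f"{n:2d} electrons:  fermionic energy = {total_energy(n):3d}"
          f"   non-fermionic energy = {total_energy(n, fermions=False)}")
```

The growing gap between the two columns is, loosely speaking, the energy that shapes atomic shells and chemical bonds; remove the fermionic character and it vanishes.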

To give a concrete example, let's consider computers.  High-tech consulting groups estimate that chip makers sell a million microprocessors per day.  Many microprocessors have several cores that act together to accelerate computations, but for the sake of argument, we assume here that each CPU has just one core.

At this sales rate, chip makers supply CPUs for over a billion computers every three years.  Furthermore, let's underestimate the CPU speed to be 1 GHz.  Under these assumptions, one computer that runs for three years at 1 GHz executes roughly 10^17 clock cycles, and with hundreds of thousands of transistors switching on each cycle, the world's billion computers collectively exercise their transistors some 30 quadrillion quadrillion (3x10^31) times.  If every computer on earth failed once within three years of manufacture because the laws of physics broke down in a transistor, the theory of electron spin would still be confirmed to about one part in 3x10^22; if only a single such failure occurred worldwide, the figure would be one part in 30,000,000,000,000,000,000,000,000,000,000 (3x10^31).  Since observed failure rates are much lower, and the failures that do occur can usually be traced to manufacturing variability and human error -- not to a breakdown in the laws of physics -- our confidence in the theory of electron spin is much higher than estimated above.  Add to this the many more chips in use outside of computers, and the confidence grows higher still.  Extend the argument to glowing life forms, and the probability that the theory of electron spin is wrong and our observations spurious becomes astronomically small.
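The arithmetic behind these figures is easy to check.  The sketch below is a back-of-envelope reproduction under stated assumptions; the number of transistors toggled per clock cycle is my own illustrative guess, chosen conservatively for a 1 GHz era chip.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

clock_hz = 1e9             # assumed 1 GHz clock
years = 3
computers = 1e9            # roughly a billion machines built over three years
switches_per_cycle = 3e5   # illustrative guess: transistors toggled per cycle

cycles_per_computer = clock_hz * years * SECONDS_PER_YEAR
total_ops = cycles_per_computer * computers * switches_per_cycle

print(f"clock cycles per computer over {years} years: {cycles_per_computer:.1e}")
print(f"transistor operations across all computers:   {total_ops:.1e}")
print(f"bound if every computer failed once: one part in {total_ops / computers:.1e}")
print(f"bound if a single computer failed:   one part in {total_ops:.1e}")
```

With these inputs the totals land at about 9.5x10^16 cycles per computer and 3x10^31 transistor operations overall, matching the order of magnitude quoted above.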

SUMMARY

Meticulous detective work led physicists to a description of nature that goes against common intuition.  Concepts such as position and speed, which we consider obvious based on experience, become unfamiliar and paradoxical when extended to the microscopic world.  The Newtonian view of a projectile's trajectory as a calculable certainty must be discarded.  Instead, the state vector contains all knowable information about the system, including the probabilities of outcomes.

Classically, a trajectory is defined as the position of an object as a function of time.  In the quantum realm, the position of an object is not a well-defined property.  Instead, the equations of quantum mechanics describe how the state vector evolves as a function of time, from which the probability of finding an object at a given position and time can be determined through simple mathematical operations.  The deterministic path is replaced with a fuzzy probabilistic path.  For large objects, the fuzziness of the quantum path is too fine to measure even with state-of-the-art microscopes, but in the world of the tiny, occupied by electrons, fuzziness rules.
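To show how little machinery those "simple mathematical operations" involve, here is a minimal sketch (my own example; the Gaussian wave function and the Born rule are textbook material) that takes a normalized wave function for a particle and extracts the probability of finding it in a small interval by integrating |psi(x)|^2.

```python
import numpy as np

# A normalized Gaussian wave function psi(x) centered at x0 with width sigma.
x0, sigma = 0.0, 1.0
x = np.linspace(-10, 10, 20001)
psi = (1.0 / (np.pi * sigma**2) ** 0.25) * np.exp(-((x - x0) ** 2) / (2 * sigma**2))

prob_density = np.abs(psi) ** 2      # Born rule: probability density is |psi|^2
total = np.trapz(prob_density, x)    # should integrate to 1 over all space

window = (x > -1) & (x < 1)          # probability of finding it in -1 < x < 1
prob_near_zero = np.trapz(prob_density[window], x[window])

print(f"total probability:         {total:.4f}")
print(f"probability of -1 < x < 1: {prob_near_zero:.4f}")
```

The state vector (here, the wave function) is exact and evolves deterministically; only the location of any single detection is probabilistic, which is the fuzziness described above.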

The illusion of free will lives in both worlds.  In the microscopic realm, probabilistic behavior hints at free agency, but closer study of simple experiments eliminates that possibility.  On larger scales, the world behaves classically, and the divergent behavior of two individuals originates in the complexity of their brains.  With about 100 billion neurons, each connected to tens of thousands of others, the degree of complexity is astounding.  It would have been surprising if consciousness had not emerged.
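The scale of that complexity is easy to tally.  The sketch below uses the round numbers quoted above; they are order-of-magnitude estimates, not precise anatomy.

```python
neurons = 1e11                # ~100 billion neurons
connections_per_neuron = 1e4  # tens of thousands of connections each

# Each connection joins two neurons, so halve the product to avoid
# counting every connection twice.
connections = neurons * connections_per_neuron / 2
print(f"rough number of connections in one brain: {connections:.0e}")
```

Half a quadrillion connections in a single brain dwarfs the transistor count of any machine ever built.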

The complexity of the brain makes it difficult, perhaps eternally impractical, to associate a physical process with every possible state of mind, thought, or emotion.  However, functional magnetic resonance imaging (fMRI) identifies the neurons that are active during various activities, making a strong case that the circuitry of the brain is what defines the mind, and giving no reason to dismiss the hypothesis that states of the mind are physical states of the brain.

One might argue that nonphysical processes exist, such as a nonmaterial agent -- a soul or spirit -- that is connected with each human but transcends the physical world and acts as a moral compass for the brain.  Because it lives outside our realm, it is not amenable to being put to the test as a scientific hypothesis.  Can't such truths exist?  After all, there are gaps in neuroscience that provide some wiggle room.

Such so-called arguments from ignorance fare poorly under retrospective analysis.  For example, not so long ago, certain types of mental illness were commonly attributed to the work of an evil spirit that needed to be exorcized.  The discovery of drugs that change the chemical balance of the brain and bring many forms of mental illness under control attests to the material nature of the mind.  The demons of yesteryear are merely pathological states of the physical brain.

As science digs deeper, the gaps in our understanding continue to shrink, leaving less room for nonphysical explanations.  Yesterday's gaps where the spiritual purportedly resided have been filled with reasonable explanations based firmly in the material world.  An understanding of how the brain functions no longer requires nonphysical explanations.  If a nonmaterial hypothesis is proposed, the burden of proof is on the proposer.

There are an infinite number of falsities but only one truth.  Planets are held in elliptical orbits around the sun by gravitational forces, and the theory of celestial mechanics is so well developed that its highly accurate predictions guide spacecraft over billions of miles to rendezvous with a comet.  Fairies, unicorns, angels, alien astronauts, invisible entities, etc., as agents of gravity are not supported by the evidence, nor do such entities make any predictions that can be tested.  It is a waste of resources to belabor alternative explanations that add nothing to our understanding beyond confirming one's own personal beliefs.

Freeing ourselves from the burden of dead ends allows us to apply a methodical approach for investigating how physical processes are involved in consciousness and the perception of free will.

Earlier, we saw how a simple quantum process can give the appearance of free will in the choices made by a particle.  The behavior is based on probabilities that are an inherent property of the system, without a direct cause.  The quantum state of the system is certain and evolves in a predictable way.  It is the measurement of the system that forces it into one of the possible states according to those probabilities, and this is a prototype for the concept of a choice.  The probabilistic nature of quantum mechanics is sometimes argued to be the engine of free will because it adds uncertainty to the choice being made.  But since the results are determined by a roll of the dice, most people would not agree that this meets their expectation of free will as a thoughtful process guided, for example, by morality.
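This prototype of a choice can be simulated in a few lines.  The sketch below is my own illustration: a two-state system with a definite, exactly known state vector whose measurement outcomes are drawn at random according to the Born-rule probabilities.  The individual outcomes are unpredictable; the probabilities themselves are exact.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# A definite two-component state vector: amplitudes for outcomes "A" and "B".
state = np.array([0.6, 0.8])        # normalized: 0.36 + 0.64 = 1
probabilities = np.abs(state) ** 2  # Born rule

# Each measurement "chooses" one outcome at random with those probabilities.
outcomes = rng.choice(["A", "B"], size=10000, p=probabilities)
print("exact probabilities:", dict(zip("AB", probabilities)))
print("measured fractions: ", {o: (outcomes == o).mean() for o in "AB"})
```

Run it and the measured fractions converge on 0.36 and 0.64: nothing resembling deliberation enters anywhere, yet each individual "choice" is uncaused in exactly the sense described above.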

The basic argument for a deterministic outcome can originate in both classical and quantum mechanical processes.  The classical perspective views the brain as a machine, albeit a very complex one that therefore responds in complex ways.  Being a machine, it has programming that decides how to react to a given set of choices.  The machine is pre-wired by genetics and reprogrammed in response to environmental stimuli.  Given the environmental conditions, past history, and hardware, the behavior of the brain is fully deterministic.  This is not to say that the brain is forced to make a particular choice.  It is unencumbered in making a choice, but the choice it makes can be predicted, or its probability determined, from the hardware, software, and state of mind at that instant.

The quantum description of free will parallels the classical view except that the state vector, with its probabilistic interpretation, replaces the classical deterministic machine.  While the outcomes vary randomly, the probabilities built into the state vector are precisely specified, and they evolve in time in a well-defined way.

Part of the confusion about free will lies in its definition.  Free will is pictured as a separate decision-making entity that guides the process of making a choice.  The physical brain is complex enough to allow different parts to communicate with each other: the logical part can be tempered by the emotional part, and an individual can argue with himself or herself.  But because all of these processes reside in the brain, the choice that emerges is in principle predictable, so the process is deterministic.

CONCLUSION

Human perceptions lead to classical concepts such as well-defined paths describing the trajectories of cannonballs, high jumpers, and satellites.  The quantum description of matter tears down such preconceived notions and replaces them with a probabilistic description defined by state vectors.  A ballistic missile does not have a well-defined trajectory; rather, the classical trajectory corresponds to the average path taken when the experiment is repeated many times.

The concept of free will is most likely a human construct that has the appearance of plausibility but is, in reality, an illusion arising from the complexity of the human brain.  The same can be said for the related concept of morality, which is a deep-rooted emotion in the human psyche and therefore "feels" quite real.  Its independence, too, can be an illusion.  Rather than being an immutable set of rules handed down by a deity, our moral senses are most likely evolved traits that gave highly social humans a survival advantage over much more powerful beasts and the random calamities of nature.  To be effective, morality needed to be deeply ingrained in our hardware and perceived as a set of absolute and noble truths living independently of the material world.  Otherwise, it would be difficult for a human to make sacrifices on behalf of others.

While the focus of this essay is on free will, it touches on a deeper question.  The fact that free will and morality feel real to us and yet are most likely illusions calls into question the wisdom of arguing over vague concepts that we accept without scrutiny.  For example, a common defense of religion is that it provides the "oughts" while science provides the "hows."  Such statements carry no weight unless one can prove the existence of an immutable "ought."  The fact that science provides a mechanism for "oughts" to evolve in a social group casts doubt on alternative explanations that rely on the uncritical acceptance of a deity who passes his wisdom to his followers.

There are undoubtedly many questions, posed since antiquity and debated vehemently, that are unresolvable unless the underlying concepts are put to the test.  Free will is one such topic.  Since all of the processes that take place in the brain are described by physical laws, and since systems made of many parts that obey those laws show aggregate behavior that is derivable from the properties of their parts, there is no reason to believe that anything outside the material realm is required to describe the processes within the brain, where free will purportedly resides.  And since any proposition to the contrary that is asserted without evidence can be dismissed without evidence, the concept of free will can be rejected unless compelling evidence is found in its support.