Thursday, July 10, 2014

Chalmers Vs Churchland or The Hard Problem of Consciousness vs The Hornswoggle Problem

http://existentialcomics.com/comic/67
Context and Introduction
In this article Patricia Churchland argues against David Chalmers' assertion that the physical sciences, including contemporary disciplines like neuroscience, will never be able to explain consciousness.  More specifically, they will never be able to explain the hard problem of consciousness: how and why physical processes give rise to subjective experience.

We can contrast the hard problem of consciousness with the easy problems of consciousness.  The easy problems involve explaining how cognitive or behavioral functions are performed.  Examples include how our brains merge perceptions of different attributes (e.g., color and shape), which are processed in different parts of the brain, into a unified perception, and how independent processes in the brain combine to produce coherent (behavioral) responses to perceived events, such as verbal reports.

The easy problems are all questions about how our brains discriminate stimuli, integrate information, produce reports, etc.  These problems are all about identifying and explaining physical processes.  Since they are all questions about how physical systems work, neuroscience should, in principle, eventually advance enough to explain them.

However, none of these explanations gives us any answer to how or why conscious experience arises from these processes.  If everything performed by the brain (from inputs to outputs) is a closed physical system, why have conscious experience of anything?  To further bring out this problem, let's look at the zombie thought experiment...

Zombie!!!
To illustrate that consciousness might not be necessary for successfully navigating the world, consider the famous zombie thought experiment (pre-dating Chalmers, but famously advocated by him).  Chalmers says that we can conceive of beings just like us in every way except that they lack conscious experience.  They live lives just like ours: walking, talking, driving, eating (mostly brains), discussing philosophy of mind, etc., except they don't experience these things consciously.  There is no phenomenal aspect to their lives.  They have no qualia.  (In the literature, they are often referred to as p-zombies--i.e., philosophical zombies.)

Consider motion detection and vision.  It seems loco to think we could decouple motion detection from phenomenal awareness of what we are seeing.  However, there is considerable evidence that we can detect motion without having conscious experience of it!  Crazy, I know, but this might be the case.  The condition is called "blindsight."  There are people with damage to the part of the brain that produces conscious awareness of what they are seeing who nonetheless behave as though they can see!  This seems to show that we could navigate our world without walking into things, all without the phenomenal experience of seeing.  If this sounds too crazy to be true, check out the videos:

Start at 6:50 in the first video, then watch the second from the beginning.

The main point is this:  suppose some of the processes/functions we perform are accompanied by conscious phenomenal experience, but we could perform these same functions just as well without the consciousness (like someone with blindsight).  That would mean consciousness is just "tacked on" to the process.  It doesn't help us perform the task any better (maybe it even makes things worse, because there's one more thing to go wrong).

If this is the case, then even if we explained the process in physical terms, we'd be left with a "hard problem":  why the crap is consciousness added on to this process?  Also, since we'd know everything there is to know about the process from a physical point of view, there'd be information about the conscious experience of performing that process left unexplained by our physical account.  Therefore, a purely physical account of the mind is incomplete.

Conclusion
Anyhow, the upshot of Chalmers' position is that since the physical sciences will never be able to answer the Hard Problem of consciousness, we need to postulate consciousness as a separate fundamental feature of our world, not reducible to physical laws on any level.  Just as electromagnetic forces can't be explained in terms of other physical laws and principles and are thus considered fundamental forces of the world, so too should we consider the psychophysical laws and principles of consciousness fundamental and irreducible.

The Hornswoggle Problem:  Churchland's Response to the Hard Problem
I think Churchland's article is as close to a rant as you can get in published academic writing.  She essentially takes a machine-gun approach to criticizing Chalmers' argument for a distinction between the easy problems and the Hard Problem.  Hold on to your hats, we're going on a critical thinking rampage!

Conceptualization
Churchland's general approach to opposing the assertion that there is a Hard Problem is to cast doubt on whether this is the right way to conceptualize consciousness.  The idea here is that if you misconceptualize a problem, you will make it seem intractable while, in fact, it may not be.

Consider an analogy.  Medieval doctors, to explain animation, would ask of the heart, "in which part are the animal spirits concocted?"  Of course, if you conceptualize animal spirits as what causes animated life, asking this question about the heart will seem like an intractable problem.  If, however, you ask how much blood the heart pumps per hour, the problem no longer seems intractable.  The point here is that how we conceptualize a problem has everything to do with how solvable it will appear.

Arbitrary Line-Drawing 
Another attack on the Hard Problem is to ask why it is that the line of what can and cannot be solved through the physical sciences is drawn at the particular place Chalmers suggests--rather than somewhere else.  Consciousness is a difficult problem, but difficult problems are nothing new to the scientific enterprise.  

Why not also include in the Hard Problem the neurological basis for autism and schizophrenia?  Why we dream when we sleep?  How short-term and long-term memory work?  How we acquire skills, plan, and make decisions?  Why is the dividing line between the Hard and easy problems drawn exactly where it is?  There doesn't seem to be any good reason for drawing the line in the particular place it is.

The Left-Out Hypothesis
Chalmers argues that even if we were to solve all the easy problems, their answers wouldn't inform the Hard Problem.  But where is the evidence that, if we eventually understood all the easy problems, we still wouldn't understand the Hard Problem (why and how consciousness emerges from those processes)?  The left-out hypothesis is the claim that the solution to the Hard Problem would be "left out" of our cumulative understanding of all the easy problems.  Where's the evidence for this hypothesis?  And how many sentences can I end with question marks?  3?  4?  5?  More?  Does anyone know?

Another implication of the left-out hypothesis is that the Hard Problem frames the issue of consciousness such that current research on consciousness is presumed to fail even before the results are in.  That is, if we accept the demarcation line suggested by the Hard Problem, we must say, before the research runs its course, that this line of research will not contribute anything to the problem of consciousness.  This conclusion runs counter to what the empirical results suggest.  We should be guided by empirical results, not a priori conclusions about what is or is not solvable.  Let the science decide!!!

Furthermore, there is strong evidence in current research suggesting that attention, awareness, and short-term memory are very closely connected to consciousness.  Why should we think that advances in these areas won't contribute valuable understanding to the problem of consciousness?

Vs The Zombie Thought-Experiment
Recall that the zombie thought experiment is supposed to show that we can't study consciousness by studying physical stuff:  all (or at least some) of our mental/brain functions could possibly work without conscious experience; therefore, conscious experience and qualia are simply "tacked on" and are fundamentally different "stuff" from physical "stuff."  (See the zombie discussion in the intro of this post.)

Anyhow, assuming that we've rejected the left-out hypothesis, the only thing still supporting the Hard/easy distinction is the zombie thought experiment.  Since the zombie shares all of our behaviors and capacities (minus consciousness), using the physical sciences to explain how all these capacities work would not tell us anything about how and why we have conscious experience of these processes (since consciousness adds nothing to our ability to perform them and the contents of subjective consciousness can't be accessed by the physical sciences).

But accepting the conclusion of the zombie thought experiment relies on the possibility of zombies.  Zombies are merely a thought experiment!  It seems redonk to draw conclusions one way or another about the limits of science based on a thought experiment about zombies!

For example, imagine a possible world in which gasses do not get hot even though their constituent molecules are moving very quickly.  Does your ability to imagine this possibility function as an argument against the empirically verified relationship between temperature and mean molecular kinetic energy?  That's just silliness!

Just because we can imagine non-conscious zombies is no argument for the limits of brain science (mmm...brain science!).

Vs Scope of Qualia/Spectrum Argument
The Hard Problem seems to be directed at brain events that are accompanied by qualia.  But there doesn't seem to be any consensus as to which types of capacities or functions are accompanied by qualia and which aren't.  There are of course obvious cases like the pain you experience when you stub your toe or the blue you experience when you look up at the sky...or maybe even the overwhelming pleasure you feel when you know it's time for philosophy 101.

But there are also areas of dispute.  Some people say they have "limb-position" qualia; that is, they have a phenomenal experience of where their limbs are.  Others disagree.  Do we have qualia associated with "what it's like" to move our head?  To know which way is up?  Do eye movements have qualia?  Maybe some movements do and others don't?  What about introspective qualia?  Or thoughts?  Some seem auditory, others visual; others, like when I do logic problems, don't seem to have any qualia at all.

When it comes to capacities and functions, is there a continuum for the vividness of the qualia associated with them?  Does it vary from individual to individual?  Do some people have qualia for a capacity while others don't?  All of these issues cast doubt on a clear demarcation line for the Hard Problem--if there is indeed such a problem.  The answer to where to draw the line between the processes that have qualia and those that don't might seem clear when we consider only the prototypical cases of qualia, but those cases represent only a small subset of the whole.

The class of processes accompanied by conscious experience is not as well-defined as we might initially suppose.  To further confuse matters, there are fuzzy boundaries between attention, short-term memory, and awareness.

Are the Easy Problems Really Easy?
The easy problems are yet to be solved, so why should we suppose their solutions will be easy?  This is pure conjecture.  For example, the nature of motor representation is a mystery:  a signature is recognizable whether it is written with the dominant or non-dominant hand, the foot, or with a pencil strapped to the shoulder.  How can completely different sets of muscles produce it when they weren't the muscle groups used to learn the task?

We are missing important--not just minor--details about the concepts of motor control, learning, and information retrieval needed to solve this problem.  On what grounds do we call it an Easy Problem?

The Danger of Drawing a Line
There is a danger in drawing a line at consciousness based on current ignorance.  If we rope an area off from certain methods of research before really giving them a good try, then we are writing a self-fulfilling prophecy and blocking off what might have been fruitful research.

Argument from Ignorance
The argument for the Hard Problem is an argument from ignorance.  That is, it moves from the claim that we are currently ignorant about/lack understanding of a phenomenon (consciousness) to the conclusion that the phenomenon will never be understood/explained (using current methods).  Specifically, in the context of the problem of consciousness, Chalmers' argument goes like this:

(P1)  We do not understand much about consciousness;
Therefore:
(C1)  Consciousness can never be explained;
(C2)  Nothing science could ever discover would deepen our understanding of consciousness;
(C3)  Consciousness can never be explained in terms of physical properties.

But the fact that we know little about a particular phenomenon only tells us that we know little about it!  Consider an analogy.  Just because I don't know what a flying object is, it doesn't follow that it's an alien spacecraft.  I can only conclude that...I don't know what it is!  Not knowing isn't positive evidence for some positive conclusion.  We cannot draw substantive conclusions from our lack of knowledge...especially given that modern brain science is still in its infancy.

If brain science had progressed as far as molecular biology has on the transmission of evolutionary traits, we could draw a substantive conclusion, but, again, given the pre-pubescent state of neuroscience, all we can reasonably conclude is, "we don't know".

Metaphysical vs Epistemological "Mysteriousness"
The fact that a problem appears mysterious is not a fact about the problem or a fact about the metaphysical nature of the universe.  It is an epistemological and psychological fact about us!  The problem is mysterious to us given the current state of our science.  Perhaps, if the state of our scientific understanding of the brain were different, the problem wouldn't be so mysterious.

The history of science is littered with previously "mysterious" problems.  Consider the problem of life previously known as "the mysterious problem of life".   For millennia the best minds could not grasp how life could emerge from the inanimate matter of cells.  "Surely, the physical sciences can't solve this problem!" they said!  "There must be magical animal spirits...or something."

The mystery of how life emerges from proteins and sugars was a mystery to be sure, but the mystery was not a property of the problem, but a consequence of the epistemological state of the pre-cellular biology world.

The Argument from Personal Incredulity
The other informal fallacy Churchland accuses Chalmers of is the argument from personal incredulity.  It goes like this: "Well, I simply cannot imagine how x will be able to explain y."  We've been hearing this argument for centuries with regard to everything from thunder to computers that can learn.  As far as I know, this type of argument has by and large been on the losing end.  I simply can't imagine it being valid! ;)

Anyhow, Chalmers' argument seems to--in part--rely on an argument from personal incredulity.  He just cannot fathom how the physical sciences could explain consciousness.  But that's more a reflection of his epistemological state than it is an argument against the possibility of a scientific solution.  

Why should we care two hoots about what someone can or cannot imagine when we consider what science may or may not be able to explain?

Again, the history of science gives us plenty of examples of problems that were imagined to be too difficult to solve but ended up having fairly simple solutions, and also examples of problems that were thought easy to solve but turned out to be very difficult.

Summary
In short, Churchland argues that when we're in a position of ignorance concerning scientific matters, and the science is still young, we need to do the science and see how it plays out, not make pronouncements about what can and cannot be solved.

Thursday, July 3, 2014

Neuroscience and Freewill: Libet, Mele, Wegner

Introduction and Context:
How do you think actions come about?  The common-sense explanation (and our experience) is that we (1) make a conscious decision to do something, (2) our brain activates whatever neuro-pathways are required for the action, then (3) we perform the action.  Libet's famous experiments give strong evidence that this is NOT the order in which our actions come about.

For many people, the famous Libet experiments show that we don't have free will.  Free will is only an illusion.  Our brains have already decided what we're going to do, then, after the fact, we only have the experience of deciding what we'll do.  Watch the video yourself and think about what the experiment shows.




In case it wasn't clear from the video, here's how the experiment goes:  The subject observes a timer thingy (the type of timer varies from experiment to experiment).  The subject is asked to raise their finger whenever they want.  By looking at the clock, they are also supposed to note the time at which they first become aware of their (conscious) desire to move their finger.  The subject is also wired up to an EEG (electroencephalography) sensor, which measures the electrical potentials around the scalp coming from the part of the brain responsible for motor activity.  There's also an EMG (electromyography) sensor that measures the exact time the finger moves.

The results of the experiment show that there is a ramping up of brain activity .550 seconds before the subject is consciously aware of their decision or desire to move their finger.  This "ramping up" activity is called the readiness potential (RP).  So, the order of events is (1) RP, (2) conscious willing to move the finger, (3) finger movement.  In theory, because the readiness potential happens before conscious awareness of a decision, Libet can tell us we are going to move our finger before we are even aware of our decision to do so!  Mind=blown.
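To keep the ordering straight, here's a minimal Python sketch of the timeline, using the 0.550-second RP figure quoted above; the offset for the finger movement itself is just an illustrative placeholder, not a number reported in this post.

```python
# Minimal sketch of the ordering Libet reports (times relative to the reported
# conscious urge at t = 0).  The RP onset uses the 0.550 s figure quoted above;
# the finger-movement offset is an illustrative placeholder, not a reported value.
events = [
    ("readiness potential (RP) onset", -0.550),
    ("conscious urge/decision reported", 0.000),
    ("finger movement detected by EMG", 0.150),  # placeholder offset
]

# Print the events in temporal order: RP first, then the conscious urge, then the movement.
for label, t in sorted(events, key=lambda e: e[1]):
    print(f"{t:+.3f} s  {label}")
```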

Wegner's Interpretation
Libet's experiments seem to give compelling evidence in favour of determinism.  Our conscious experience of choice is an illusion.  Our body's physical systems have already "decided" what to do, and our consciousness of what we will do occurs only after this happens.  Our conscious selves are merely along for the ride.  "Voluntary" actions don't go: (t1) "hmmm...I'm going to move my finger now," (t2) *finger moves*.  They go like this: (t1) the brain initiates preparations for moving the finger, (t2) the meta-brain says "I decide to move my finger," (t3) *finger moves*.

From these experiments it doesn't seem like we consciously will our actions.  Our dictator brain has already begun preparations for what we will do before we are even conscious of it.  Our conscious selves just think they're making a decision.  Curse you, evil brain!  I want to be free!

Libet's Interpretation
Libet's own interpretation was different.  He thought that rather than free will, we have "free won't."  Yes, the brain initiates urges and intentions, but we have a window (about .1-.2 seconds) in which to consciously override the brain's urge.

To test this hypothesis, he set things up as in the original experiment, but this time he told the subjects to plan to move their finger at a set time on the timer and then to "veto" the intention to move it.

Results:  RP started about 1 second (vs .550 sec. in the original version) before the set time.  Then at about .1-.2 seconds before the subject was to move their finger RP flattened out.

Interpretation:  The brain generated the unconscious desire to move the finger but when this desire entered into consciousness "free won't" was able to veto the urge.   In other words, our desires and intentions are generated unconsciously but when they enter consciousness we have the ability to over-ride them.

Problem: What if the process that generated the 'free won't' is also unconscious?  Doh!

Mele's Interpretation
Alfred Mele be like, "whatchu talkin' 'bout, Willis?  That ain't no proof of determinism!"  Poor Libet.  He doesn't have a philosopher's training and therefore blurs some important distinctions.  In his interpretation of the results, he uses the words "intention," "decision," "wish," and "urge" interchangeably.  Unfortunately for Libet, he never had the good fortune of taking philosophy 101 at UNLV, where he would have learned that you can't just go around willy-nilly using words without first specifying what they mean.  Let's look at some of the important distinctions and see how they apply to interpreting Libet's experiments.

Wanting/Urges to vs Intending/Deciding to
You can want to do A without having settled that you are actually going to do A.  I want to live on a ranch with a herd of wiener dogs but I don't intend to do it (right now, anyway).  I can want to eat all the donuts in the bakery but still not form the intention to do so...

We can further see the distinction when we have competing wants.  I want to finish my grading by 9pm but I also want to finish writing my lecture by 9 pm.  I can't do both.  The one that I end up doing is the one for which I formed an intention.  When you make up your mind about a course of action between competing wants then you can say you intend to do it.  In short, wanting to A is simply having the desire to A.   Intending to A requires making a decision to A.

Distal vs Proximal Intentions
We can also distinguish between distal and proximal intentions.  A proximal intention is when I intend to do something that is temporally close.  A distal intention is when I intend to do something in the more distant future.  For example, on Saturday I intend to take my dogs for a hike.

Ok, back to Libet.  Libet says that the process that produces the urge to move the finger (the 'act now' process) occurs before conscious awareness of the decision to move the finger.  This process begins around 550 msec before the finger moves.  Also, the urge that initiates the 'act now' process creates a proximal intention to flex the finger.  So far, we can agree with Libet that the 'act now' process is initiated unconsciously and that "[...] conscious free will is not doing it"; i.e., conscious free will is not initiating the 'act now' process.

However, why should we suppose that the role of conscious free will is to produce urges or to causally contribute to urges?  Typically, free will is thought to apply to situations where the agent is deliberating between possible courses of action or over whether they should or should not act.  Free will is not thought to have the role of producing urges; rather, it is about choosing.

Free will does this:




Processes Have Parts
Free will doesn't create the urge.  The origins of the urge are unconscious.  However, the process that begins with an unconscious urge can give rise to a conscious intention to act or not act in accordance with the urge.  The conscious intention is temporally closer to the final act (move finger) and so it seems as though it is the conscious intention rather than the unconscious urge that is causally responsible for the act.

Issue:  What is the relationship between temporal distance and causal power?

Other Objections/Issues to Deterministic Interpretation of Libet Experiments

Issue:  Do these results generalize?  Lab conditions vs Real life.  Do the results generalize to all types of decisions/intentions?

Objection: Of Course There's Prior Brain Activity!
If brain events underlie mental events, then we shouldn't be surprised that there is brain activity prior to a conscious decision.  Why should we suppose that the production of a conscious decision doesn't involve prior brain activity leading up to the brain state that is the conscious mental state?  Having no brain activity prior to a conscious decision would be the surprising finding, not that there is prior brain activity.

Objection:  The Meta-State
Consider that you've been reading this post for the last minute or so.  The entire time that you were reading or watching the video, were you actively conscious of the fact that you were reading or watching?  Or were you reading and watching without the awareness "I'm reading/watching"?

The argument here is that the Libet experiment measures an awareness of a conscious state; i.e., a meta-consciousness.  Most activities that we do, we aren't actively aware of doing.  When we drive, read, walk, etc., doing so often isn't part of our immediate conscious awareness, yet no one could seriously say that we aren't conscious when we do these things.  Only when something draws our attention to our activity do we become aware of what we are doing; this is the meta-conscious state.

The Libet experiments measure the meta-conscious state about a prior awareness of our decision to move our finger, not the immediate state of awareness that we want to move our finger.  The time delay between the primary state and the meta-state is what accounts for the effect.

Further Studies that Might Prove Determinism True:



In further studies using an fMRI machine, scientists in lab coats have been able to predict a subject's decision up to 7 SECONDS before the subject's own awareness of what she will choose.  Ho.  Lee.  Crap.  If this doesn't sound like evidence against free will, I don't know what does!

Objection 1:  The Media Isn't Reporting the Whole Story and Is Sensationalizing (Surprise!)
What the data actually show is that the scientists can predict your decision to move your finger at a rate about 7% better than chance.  If you were to make predictions blind, over the long run you'd be right about 50% of the time.  The fMRI data lets you get it right around 57-58% of the time.

Reply:  Yes, but 7-8% above chance is still a significant result (see the quick sketch just after this exchange).  If you were given these odds in a casino, you'd be a fool not to take them.

Counter Reply:  True dat.  However, this result may be a consequence of how the experiment was set up.  If subjects were incentivized to try to fool the experimenters, this predictive power might disappear. (If you have an fMRI machine, please do this study!)

Further Study:  There may be newer studies that use better equipment and more sophisticated models that have better predictive power.  There doesn't seem to be any a priori reason to suppose in the future 100% predictive power couldn't be achieved.
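Back to the 7-8% figure: to see why a bump that small over chance still counts as real predictive power, here's a minimal Python simulation.  The trial count is hypothetical (the post doesn't report sample sizes); it just illustrates how rarely a coin-flipping "predictor" would hit 57% by luck.

```python
import random

random.seed(0)

N_TRIALS = 200            # hypothetical number of decisions scored; not from the studies
OBSERVED_ACCURACY = 0.57  # roughly the accuracy quoted above

def chance_accuracy(n_trials):
    """Accuracy a predictor guessing at random (50/50) would get over n_trials."""
    hits = sum(random.random() < 0.5 for _ in range(n_trials))
    return hits / n_trials

# Simulate many chance-level experiments and count how often luck alone
# reaches or beats the observed accuracy.
sims = [chance_accuracy(N_TRIALS) for _ in range(10_000)]
p_luck = sum(acc >= OBSERVED_ACCURACY for acc in sims) / len(sims)

print(f"A chance-level predictor matches {OBSERVED_ACCURACY:.0%} or better in "
      f"~{p_luck:.1%} of simulated experiments")
```

With a couple hundred trials, luck alone reaches 57% only a few percent of the time, so the effect is probably real, even though it's a long way from mind-reading.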

Preferences Vs Free Will
Suppose you frequently go out to dinner with someone you know very well.  You've eaten with them many times.  You know what their preferences are.  You go to a new restaurant and based on what you know about them, you successfully predict they will order the T-bone steak.  Does the fact that we can accurately predict someone's actions tell us anything about free will?

The Mechanistic Brain Vs Free Will (Adina Roskies)
So maybe being able to predict someone's behaviour--be it from fMRI scans or from preferences--isn't sufficient to imply determinism is true.  Do these studies provide evidence for any other challenges to free will?

One thing these (and subsequent) studies make clear is that the brain is mechanistic.  We can identify which parts of the brain and which neurons are responsible for certain actions and behaviours.  In short, the brain behaves in mechanistic law-like ways.  So, the difficulty is to explain how we get free will out of a mechanistic law-like system.  Consider a computer.  It performs its functions in mechanistic law-like ways, yet we don't attribute to it free will.  How are we different?  Are we really all like 2Chainz showin' up to the scene with our top down?  Is it cuz we're made of meat rather than metal and silicon?  What's so special about meat?

The underlying worry is that because we are meat-based mechanisms we don't have free will.  But Adina Roskies suggests maybe this conclusion needn't follow.  If we suppose that having a mind is necessary for free will then maybe having a better understanding of the brain's mechanism gives us a better understanding of mind.

For example, most theories of free will tell us that certain mental capacities are required for free will: the capacity for rational deliberation, the capacity to assign moral value to certain outcomes, and the capacity to put judgments into action.  So, while at first blush it may seem that neuroscience undermines free will, in fact it doesn't: it gives us a better understanding of the brain mechanisms, functions, and states that underlie the mental capacities that are integral to free agency.  This type of study can also inform us of things that can happen to the brain that impede the capacities for free agency.

An often-cited example in the literature is a patient who led a perfectly normal life up to the point when he started to have pedophilic urges and eventually couldn't control himself.  When he was sentenced to jail, he complained of a headache.  When they scanned his brain, they found a large tumor.  When they removed the tumor, his urges completely went away.

Later he started to feel the urges again and when they scanned his brain, they found a tumor in the same place.  Once the tumor was removed, the urges disappeared again.  This is fairly strong evidence for a causal relationship between the tumor interfering with normal brain activity and the ability to exercise one's will.


Epiphenomenalism: The Role of Consciousness in Decision-Making
Epiphenomenalism is the idea that our conscious experiences don't play any causal role. They just "ride on top" of whatever our brains are doing. They're superfluous. We encountered this idea earlier with Chalmers' zombies and blindsight.

So, if our brains are causing us to act before we have any conscious awareness of what we're going to do, then why should we think that consciousness plays a role in decision-making? First of all, it looks like there are at least some areas where consciousness does play a causal role. Conscious experiences (memories) can inform decisions even if those decisions proceed unconsciously.

For example, my memory of the line for the salad bar being really slow causes me to choose something else, like Subway, for lunch. The decision might be unconscious, but a conscious state plays a causal role in it.

Maybe Libet's results support a limited epiphenomenalism about decision-making.  Consciousness doesn't cause the decision, but that doesn't make consciousness irrelevant: my conscious experiences figure into my unconscious decisions, even though the decision itself isn't caused by consciousness.

Should non-determinists be worried?
From a neuroscience point of view, Libet's findings make sense. The decision-making system does its job first, then the conscious monitoring system does its job. Of course the decision has to come first; otherwise there'd be nothing for the conscious monitoring system to monitor!

Possible Compatibilist interpretation: The urges/wants I have are going to be a consequence of my values and preferences. In that sense, they represent what "I" want rather than being totally random.