Tuesday, July 26, 2011

Consciousness Explained Anyway



Today, we take a slight diversion from the course of this blog so far, so that I can write down some thoughts on the nature of human consciousness that have been rattling around in my head.
In a way, this is very apropos of the overarching theme of computationalism (which I personally take to be the stance that all of reality can be explained in computable terms, a subset of physicalism) that has pervaded the posts so far (and will continue to do so), because the idea that consciousness can't be reduced to 'mere computation' is often central to supposed rebuttals of that stance.
In another way, though, consciousness is far too high-level a property to properly concern ourselves with right now; nevertheless, I wanted to write these things down, in part just to clear my head.
My thoughts on consciousness basically echo those of the American philosopher Daniel Dennett, as laid out in his seminal work Consciousness Explained. However, while what Dennett laid out should perhaps most appropriately be called a theory of mental content (called the Multiple Drafts Model), I will in this (comparatively...) short posting merely attempt to answer one question, which, however, seems to me the defining one: How does subjective experience arise from non-subjective fundamental processes (neuron firings, etc.)? How can the impression of having a point of view -- of being something, someone with a point of view -- come about?

On this question, there seem to be two intuitions, both very reasonably motivated on their own terms: 1) it's simple, and 2) it's impossible. I'll discuss one representative thought experiment for each of the two points of view.
The first is what I call the 'brain of Theseus'-experiment (after the well-known 'ship of Theseus' paradox): Imagine one single neuron. Its functional characteristics consist essentially of a list of conditions under which it fires. This can be easily modelled artificially, with a little electronic circuit, or a computer program. Suppose one can match the characteristics of the neuron exactly. Now, in such a way as to make the transition appear seamless, the 'fake' neuron is substituted for a 'real' neuron in a living brain. None of the surrounding neurons notices anything: they receive the same firings when the same conditions are met as before. Thus, there is no difference to the brain as a whole. So, let's go on: replace a second neuron. And a third. And so on. If the previous considerations were correct, at no point should the brain notice any change -- everything continues to work the way it always did.
Now, imagine at some point, one hemisphere of the brain has been entirely replaced. The other, still, won't notice anything off -- any signals it sends across the corpus callosum elicit the same responses they would if there were still a 'real' hemisphere there, instead of a 'fake' one.
Then imagine completing the transition from 'real' to 'fake' brain. If the first exchanged neuron did not make a difference, and neither did the second, and so on, then it's hard to avoid the conclusion that the new, 'fake' brain will still work the exact same way the original one did -- if the brain with one neuron replaced still thought, experienced, felt and behaved like before, and so did the brain with two neurons replaced, etc., then the new, fake machine-brain will do, as well.
Since the new machine-brain is essentially nothing but a complicated computer, it follows that consciousness and sentience can be generated by a computational structure, and could, for instance, be simulated on a computer.
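To make the 'list of conditions under which it fires' a bit more concrete, here is a minimal sketch in Python -- purely illustrative, using a simple weighted-threshold rule and made-up numbers standing in for whatever the real firing conditions are:

```python
# Minimal, purely illustrative stand-in for a single neuron: it 'fires'
# whenever the weighted sum of its inputs crosses a threshold. A real neuron
# is of course far richer; the point is only that its input/output behaviour
# can, in principle, be captured by some such rule.

class FakeNeuron:
    def __init__(self, weights, threshold):
        self.weights = weights        # how strongly each input synapse counts
        self.threshold = threshold    # the firing condition

    def step(self, inputs):
        """Return True ('fire') iff the firing condition is met."""
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        return activation >= self.threshold

# Replacing a 'real' neuron with a FakeNeuron tuned to the same weights and
# threshold leaves every downstream neuron's inputs unchanged -- which is all
# the thought experiment needs.
neuron = FakeNeuron(weights=[0.5, -0.2, 0.8], threshold=0.6)
print(neuron.step([1, 0, 1]))   # True: 0.5 + 0.8 = 1.3 >= 0.6
```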
The other thought experiment, which is in some sense the exact opposite to the preceding one, is known as the 'zombie argument'. I'll give a slightly modified form to better fit the context of computation. A philosophical zombie, somewhat removed from the brain-gluttons of horror movie lore, is a being that is, in its actions and behaviour, indistinguishable from any ordinary human, but lacks any sort of consciousness, or subjective experience. That is, if the zombie is subject to certain stimuli -- such as, for instance, being poked with a sharp stick --, he will react in exactly the same way as a human would -- recoiling from the offending stimulus, uttering a cry of pain, and possibly a choice selection of profanities directed at whoever is at the other end of the stick. However, he would not feel the pain; he would not mean the profanities, at least not in the same way we do. His reaction will be entirely an 'automatic' response, triggered by the presence of a certain stimulus.
Ultimately, all human behaviours can be brought into this paradigm: under certain conditions, certain behaviours are produced. From this, one could abstract a computational model, which would react just like a real person, while lacking all the rich inner life that ultimately makes us human. Moreover, one could continue refining this model: add some more fine-grained descriptions of the person's biology, chemistry etc. -- all of which are equally well just simple chains of certain conditions evoking certain responses, and are thus not going to trigger some sort of spontaneous 'awakening' to consciousness.
Indeed, the same continues to hold at ever more fine-grained levels of simulation, down to the cellular level, or even beyond. The zombie's neurons, after all, are just this: outputs, firings, evoked by certain conditions. This is essentially the same model we described in the previous thought experiment -- however, arrived at in this 'top-down' manner, the conclusion appears reversed! It seems that there is no way that this simple collection of rules for generating certain responses could have anything at all like what we call 'consciousness'.
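In its crudest form, the zombie of this argument is nothing but a lookup table from stimuli to behaviours. A toy sketch, with the stimuli and reactions obviously invented for illustration:

```python
# Toy picture of the philosophical zombie: a bare mapping from stimulus to
# behaviour, with nothing 'inner' going on at all.
REACTIONS = {
    "poked with a sharp stick": ["recoil", "cry out", "swear at the stick-wielder"],
    "offered chocolate": ["smile", "say thank you"],
}

def zombie_react(stimulus):
    # Nothing is felt, nothing is meant -- the behaviour is simply looked up.
    return REACTIONS.get(stimulus, ["stare blankly"])

print(zombie_react("poked with a sharp stick"))
```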
We're faced with quite the dilemma: with seemingly equally good reasons, we have arrived at contradictory viewpoints. How can this be reconciled?

Zombots
First, we'll backtrack to the point where both thought experiments still agree -- which is, that it is possible to create a being, indistinguishable from a human in action and reaction, using computational means. Using this agreed-upon starting point, we will show that this is actually all that we need, thereby eliminating the apparent paradox.
For definiteness, let's imagine a machine running a 'human-simulation', accessed through a chat window one can type into; call such a device a zombot, for whimsy's sake. The possibility of zombots will be taken as a given from now on.
As stipulated, such a zombot would pass the Turing test: i.e. to any human it converses with, it would seem indistinguishable from another human; likewise, it would pass any other test for 'consciousness' that can be administered in this way. But it would, of course, not be actually conscious.
More than that, though, it could also administer Turing tests, and do so as well as any human can -- i.e. it would be 'convinced' of its testee's consciousness whenever a human would be, too (though of course, not being conscious, it would only be convinced in the sense that it might print out the words 'Well, I'm convinced', or something similar -- it would not actually feel convinced, or be in some mental state of convincedness, having neither feelings nor mental states).
So, when a zombot is presented with another zombot to test, it would be just as 'convinced' of the zombot's consciousness as a human would be. We need not limit the zombots to text-based interaction, for this experiment; they may exchange data in whatever form suits a zombot best -- though of course, all the information they might exchange in other ways can, as we have already learned, be recast in the form of a question-answering process.
But now, what happens if we pull a trick on the zombot -- if we just wire its output to its input, directing its interrogation 'inwards', onto itself?
Well, by the reasoning above -- it would pronounce itself conscious!
Of course, that doesn't mean much -- actually, it doesn't mean anything, at least not to the zombot. All that's going to happen is that maybe a little lamp lights up somewhere to indicate 'test subject is conscious', or the zombot prints out words to that effect.
Nevertheless, in any interaction with the zombot, it would claim itself conscious -- and to the best of its ability to tell, that's nothing but the truth. Moreover, in any interactions the zombot might have with itself, it will insist on being conscious.
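As a sketch of this wiring trick, imagine the whole zombot collapsed into a single answer function (a stand-in, of course, for the vastly more complex human-simulation) and an interrogation routine that can be pointed at any answer function -- including the zombot's own:

```python
def zombot_answer(question):
    # Stand-in for the full human-simulation: it answers as a human would.
    canned = {
        "Are you conscious?": "Yes, of course I am.",
        "Do you have experiences?": "Certainly -- I'm experiencing this conversation right now.",
    }
    return canned.get(question, "Let me think about that.")

def administer_turing_test(answer_fn):
    """The zombot as interrogator: ask questions, judge the answers."""
    questions = ["Are you conscious?", "Do you have experiences?"]
    transcript = [(q, answer_fn(q)) for q in questions]
    convinced = all(("yes" in a.lower()) or ("certainly" in a.lower())
                    for _, a in transcript)
    return "Well, I'm convinced." if convinced else "Not convinced."

# The trick: wire the zombot's output to its own input.
print(administer_turing_test(zombot_answer))   # -> "Well, I'm convinced."
```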

Evening Matinee in the Cartesian Theater
In order to properly gauge the significance (or lack thereof) of the conclusion we just arrived at, I'd like to step back for a moment to consider a different issue, which is how perception works -- or more precisely, how it doesn't work.
The intuitive picture most people have of their own perceptual process is that of somehow being presented with percepts, i.e. that objects of perception, or perceptual attention, are represented in the mind, for the self to behold. It's as if there were an inner stage (which Dennett calls the Cartesian theater), on which a play, 'inspired by actual events' -- those in the real world that are being perceived -- takes place.
This idea has a certain obviousness to it: the senses yield data, which is in some way prepared by the computational apparatus of our brain -- there's always some editing, details judged unimportant left on the cutting-room floor, some embellishment of salient scenes, various other steps of post-production (my brain, for instance, seems to like adding a soundtrack), and often quite a shocking amount of artistic license --, to then be perceived by the self.
But wait a minute -- this picture was supposed to explain perception (of the outside world by us), but now it turns out to crucially depend on perception (of the representation of the outside world by our selves). This is rather blatantly circular -- how is this next step of perception to work? Again by invoking some process of representation and perception, on a yet higher level? Or, if this second-level perception does not depend on such a scheme, why was it necessary in the first place -- if our selves can perceive the representation brought before them without recourse to another level of representation/perception, then why can't we, using the same process, perceive the outside world without creating a representation for our selves' benefit?
And indeed, the regress is vicious: as the completion of our perceptual act depends on the completion of our selves' perceptual act, so does the completion of the selves' perceptual act depend on the completion of the analogous perception on the level above them, and so on. We have run headlong into the homunculus fallacy (where 'homunculus', i.e. little man, denotes the entity perceiving the representations in the mind's eye).
This fallacy, obvious though it is once pointed out, can actually be very hard to spot, and even harder to get rid of. Most theories in which mental content is generated as a representation of the outside world, or a representation of some mental state, suffer from it in some form.
In order to rid ourselves of it, we will first consider the related, but simpler question of how vision works -- I distinguish vision here from perception, in the sense that the former entails no conscious awareness: vision is perception in the sense a TV camera, or a zombot, might possess it, capable of producing mechanical reactions to the object of perception, but not capable of giving rise to mental states having as their content said object.
The simplest idea of how vision might work is a sort of bottom-up process: out of the data provided by the retina, an image is built up, and each object in it is identified through some sort of pattern-matching (identification, again here of a totally non-conscious sort, being necessary to produce the proper reaction to the image's contents) -- perhaps a list of its visual properties is generated, and anything that fits this list is retrieved from some sort of memory.
This is a possible, but rather cumbersome process. 'Building up' an image in this way is going to be very demanding, computationally, and reaction to visual stimuli will correspondingly be rather slow. So, unless resources are of no importance -- and they pretty much always matter, in nature as much as in science --, it seems unlikely that this process is what underlies vision.
In fact, nature chose a rather more elegant -- and surprisingly scientific -- scheme in order to endow her creatures with vision.
How does science come to know the world? Through formulating hypotheses, and subjecting them to empirical testing. Those found wanting are discarded; those continually in accord with the data are kept, on a provisional basis. This process ensures that knowledge can only ever grow, and ultimately converges onto a faithful picture of reality.
Vision works similarly, at least to a first approximation. An agent enters a scene with certain expectations of what he will see; these expectations are met with actual visual data, and either fulfilled or overturned. Concretely, one may imagine a question-answering process, where the questions are formulated in such a way as to lead to quick dismissal of expectations. So, rather than going from the general to the specific, as one would normally do in a game of '20 questions', for a certain limited set of critical hypotheses, the specific will be preferred.
This has the advantage that one needs less information to decide a limited number of important hypotheses than one would ordinarily need if one followed the 'bottom-up' strategy. Going from the general to the specific, encircling the object being viewed, leads to results in a uniform way -- for all possible objects that can be identified by asking a roughly equal number of questions, i.e. that can be described with a similar amount of information, it takes roughly the same amount of time to identify them.
However, in reality, certain objects that might be in the field of view are far more important than others, and hence, their presence (or absence) needs to be recognized quickly -- one would want to know whether there's a tiger lurking somewhere in the scene as quickly as possible.
So, the strategy is to gear the questions asked of the visual data towards falsifying the hypothesis that there is a tiger somewhere in the field of view -- perhaps by looking for a characteristic orange-black-orange pattern, which can be done relatively easily -- and if that hypothesis can't be dismissed easily, there's at least a chance of a tiger, so it is probably a good strategy to flee.
This leads to false positives -- sometimes, we see things that aren't really there. But, evolutionarily speaking, that's not a bad thing -- better to flee from a non-existing tiger, than to miss one that actually is there!
Nowadays, however, this tendency towards false positives can be quite distracting -- it causes us to see faces nearly everywhere, faces being something you'd want to recognize exceptionally quickly (chances are, if you can see a face, the face's owner can see you -- always a potentially dangerous situation). Just take this little guy: :-). Objectively, that does not look very much like a face at all -- yet we have no trouble parsing it as such. The name for this effect is pareidolia, Greek for something like 'wrong image'.
We should take away from this excursion that seeing can be described as a question-answering process, in which the questions that are asked are in part determined by the expected answers.
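A minimal sketch of this prioritized question-asking, with all the predicates being made-up placeholders rather than anything resembling real visual processing:

```python
# Vision as prioritized hypothesis testing: cheap, critical questions
# ('is there a tiger?') are asked first; only if they come back negative
# does the slower, general identification proceed.

def contains_tiger_stripes(image):
    # Hypothetical cheap check for an orange-and-black pattern.
    return image.get("orange_black_pattern", False)

def identify_everything(image):
    # Stand-in for the slow, general 'bottom-up' identification.
    return image.get("objects", [])

def see(image):
    # Critical hypotheses first: if 'tiger' can't be ruled out cheaply, act now.
    if contains_tiger_stripes(image):
        return "flee"   # false positives are cheap; missed tigers are not
    return identify_everything(image)

print(see({"orange_black_pattern": True}))                                # -> flee
print(see({"orange_black_pattern": False, "objects": ["tree", "rock"]}))  # -> ['tree', 'rock']
```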

A Blind Spot to Help Us See
Now take the case in which your vision is occluded in part of the visual field, or all of it, perhaps by a cataract, or some external obstruction. What you'll see is that you don't see something: there's a part of the visual field that's noticeably obscured, that doesn't produce data even though it should. There's something very noticeably missing. The reason for this is, essentially, that the questions asked of this part of our visual field go unanswered, or are answered uniformly with 'darkness'.
But, in everybody's eye, there exists a spot -- the aptly named blind spot -- in which there are no light-sensitive cells, because at that point, the optic nerve punches through the retina, which is necessary because the human eye, unlike, say, the cephalopod version, is wired backasswards.
Typically, we don't notice this defect in our visual field, unless we go to some lengths, as in the test provided in the wikipedia link above (if you've never done it, go ahead! It's quite striking.).
The question is, why don't we notice this blind spot? Why is there no sense of something noticeably missing at that point?
The usual answer, and perhaps the most intuitive one, is that the brain somehow 'fills in' the missing information, 'papers over' the hole in the visual field, so as not to disrupt the enjoyment of the audience in the Cartesian theater with any ugly blemish. In our picture, this would mean that the questions asked of that particular area will receive special made-up answers. But this is actually completely unnecessary.
Moreover, it can't be the whole story: in the test above, why does the brain, doing the filling in, forget about the O and supply just random background -- yet, confronted with a highly complex picture, such as a fractal, leave no part seeming 'blank' or different from the rest in any way? Why does the brain fail at the seemingly much easier task?
Well, the reason is simple -- it's not that the questions asked of the blind spot are met with made-up answers; it's simply that there are no questions asked, at all. There is nothing noticeably missing because there's nobody looking for anything there. That's why our field of vision seems perfectly continuous -- no alarm gets raised by the absence of data from a certain area, because there are just no detectors that could raise this alarm. The blind spot constitutes an absence of representation, not a representation of absence, as in the case of a physical obscuration of part of the visual field -- and those are two very different things.
This effect is not limited to the blind spot -- there is a more general phenomenon known as a scotoma, which can be caused by various forms of damage to the retina or optic nerve, that exhibits similar phenomenology. Going out on a limb, one might even speculate that the condition known as blindness denial, or more prosaically as Anton-Babinski syndrome, in which a person may be blind while being unaware of the fact, has a similar cause: the neurological damage incurred may inhibit the asking of questions; since thus no signals of missing answers arrive, the patient wrongly judges himself sighted.
For a different metaphor, consider the often crazy and jumbled logic of dreams, with changing plotlines, locations, actors and circumstances, which nevertheless often seems perfectly sound to the dreamer: it's not that there are elaborate measures in place to hide the logical gaps, rather, it may just be that those parts of the brain that would ordinarily expose the flaws and point to something being amiss are asleep, or otherwise not acting according to their normal function.
In any case, it evidently may happen that something seems to us to be a certain way because there is nothing there to expose it as being 'truly' different -- it seems to us as if our field of vision were continuous, because no mechanisms exist to tell us otherwise. We are blind to our blind spots.
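The distinction between absence of representation and representation of absence can be made vivid in a few lines: questions are only asked where a detector exists, so a missing detector (the blind spot) can never raise the 'something is missing' alarm that a mere occlusion does. A toy sketch, with invented locations:

```python
detectors = set(range(10))      # visual-field locations that have receptors...
detectors.remove(4)             # ...except the blind spot: no detector there at all

retina_data = {loc: "light" for loc in range(10)}
del retina_data[7]              # an occlusion: detector present, but no data arrives

# Only existing detectors get asked questions; only they can raise alarms.
alarms = [f"something missing at {loc}!" for loc in detectors if loc not in retina_data]
print(alarms)   # ['something missing at 7!'] -- the blind spot (4) raises nothing
```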

Vorstellung
Let's now turn our gaze inward, to where we imagine the things that we imagine are. The German word 'vorstellen' literally means 'putting before'; this captures the intuitive idea we have of how our imagination works: when we imagine something, or visualize it, what we imagine we imagine is the picture of this something, drawn in the mind, for 'us' to look at. But of course, this is just the Cartesian theater again, and with it, the threat of vicious regress rears its ugly head.
In fact, it is easy to see that this idea of creating an actual visualization 'in the mind's eye' does not hold water: whatever we could learn from the visualization, we already must know in order to create it. Think of a computer producing a drawing on its screen. It does so for the benefit of the user. But in the case of the mind, our selves, and the mind's eye, the computer is the user. It would make no sense to equip it with a camera and have it behold the picture it itself drew -- all the data that can be gained from the picture is already present, stored in the computer's memory. It must be -- else, it could not have drawn the object!
So, why would the mind go through all this trouble to tell itself things it already knows? Why create any visualization at all?
Well, in a manner of speaking, it doesn't -- it just makes it seem as if it did. Recall how vision can be viewed as a question-answering process. So, too, can inner vision: when you visualize something, you ask yourself questions about that something's appearance, which are met with the appropriate answers -- as a result, something seems to be visualized. The apparent visualization is merely the actualization of latent visual knowledge, prompted by question asking -- which in these circumstances perhaps should be called introspection. The object seems visualized in the same way the blind spot seems filled in or dream logic seems consistent -- because there is nobody there to ask questions -- to actually go inside your mind and look -- and say otherwise. Knowing how it would seem if you actually visualized something is no different from actually visualizing something, at least as far as you are concerned.
Moreover, this inner viewing comes implicitly with an inner viewpoint -- there is no observation without an observer. View and viewpoint, observation and observer, imply each other; thus, by making it seem as if there were an object visualized inside the mind, the process of actualizing knowledge via introspection, via question-answering, has the side effect of making it seem as if there were something or someone beholding the visualization, or perhaps just holding it in the mind's eye.
The remarks made here about vision and visualization can be extended to other modes of perception and representation; in general, mental content is generated by answering questions about mental content, and is itself represented in the answers to these questions -- this reciprocity produces both the appearance of observing this mental content, and the observer doing this observing.
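As a sketch of this 'actualization of latent knowledge', imagine the visual knowledge about some object stored as a handful of facts; 'visualizing' the object is then just answering questions against those facts, with nothing ever being drawn (the facts here are, of course, invented):

```python
# 'Visualization' as question-answering: nothing is ever pictured, yet every
# question about the imagined object gets an answer -- so it seems as if
# something were pictured.

APPLE_KNOWLEDGE = {
    "colour": "red",
    "shape": "roundish",
    "has a stem": True,
}

def visualize(knowledge, question):
    # 'Introspection': answer questions about the object's appearance
    # straight from stored knowledge -- no inner picture is ever drawn.
    return knowledge.get(question, "can't tell")

for q in ["colour", "shape", "has a stem", "number of seeds"]:
    print(q, "->", visualize(APPLE_KNOWLEDGE, q))
```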

I, Zombot
We have come quite a way towards answering the question: how can subjective, experiential states emerge from non-subjective processes?
Let us review where we started: a zombot, a machine capable of emulating the behaviour of a conscious human being perfectly, was given the task of determining its own consciousness. How did it do so?
Well, whatever the process may be in detail, ultimately, if it is limited to the exchange of information, it can be modelled as a question-answering process. The zombot, thus introspecting, asking questions of itself, proclaimed itself conscious, as it had to. This, as we surmised, did not provide sufficient grounds for believing such a bold assertion.
But in the end, when you determine your own consciousness, what do you do? You introspect -- you ask questions of yourself -- and those are answered as if you were conscious. It seems to you that you are conscious, and so you claim -- and believe -- yourself to be. And similarly, it seems to be thus to the zombot -- who, on the same grounds as you, then believes himself conscious.
But wait, have I not just tried to sneak one past you? Certainly, before something can seem a certain way to the zombot, before he can believe anything, he must be conscious -- thus, it seems he must be conscious in order to be conscious, and it seems we haven't made it past the regress after all!
And it is true that there is an element of self-referentiality here, but the circularity is not vicious. Remember how the zombot can come to know ('knowing' used here in the sense of 'having data stored in his memory') how it would be to visualize some object, through a process of introspection; this, itself, isn't consciousness. It's just an ability to answer certain questions.
But, the zombot can then repeat his introspection, with the object to be 'visualized' this time being his knowledge of how it would be to visualize a certain object -- somewhat more abstract, certainly, but it can be just as well represented in the form of answers to certain questions. Thanks to this iterated introspection, the zombot knows not only how an object looks (his 'visualization'), but he knows that he knows it (which is equivalent to actually visualizing an object, or at least, believing to do so), and possibly even knows that he knows that he knows, and so on, up to a certain point; this iterative process can be ended whenever we wish -- we can climb the ladder of self-reference upwards, rather than having to descend downwards, from infinity, as in the case of the homunculus watching the play in the Cartesian theater. This certainly mirrors our own experience -- ordinarily, we may think about something, then may have cause to think about our thoughts, and occasionally perhaps even think about our thinking about our thoughts; but rarely do things go further. The explanation here is that there simply are no more levels, rather than that they somehow must get lost in the mist as the tower of regress climbs up to infinity.
The zombot thus can gain subjective states through gaining knowledge about how it would be to have subjective states -- which he then can again gain knowledge about.
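A sketch of this upward-terminating ladder: each level of introspection just wraps the level below as a new piece of data, and the process stops as soon as no further question is asked -- no descent from infinity required (the representation as nested dictionaries is, of course, just an illustration):

```python
def introspect(content):
    # One step up the ladder: the content below becomes a new piece of data.
    return {"I know that": content}

state = "how a red apple would look"   # level 0: plain latent knowledge
for _ in range(3):                     # climb a few levels, then simply stop
    state = introspect(state)

print(state)
# {'I know that': {'I know that': {'I know that': 'how a red apple would look'}}}
```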
Yet still, one might think that there is some difference between the real consciousness of a human being, and the fake consciousness the zombot fake-experiences. That, while the zombot can't tell himself from an actually conscious entity, an actually conscious entity can nevertheless point to some subtle difference in mental content that differentiates both cases.
And it is entirely possible that such a fundamentally subjective, irreducible quality to consciousness exists. But, even if that is the case, how could you ever tell whether or not you possess it? Anything you could point to, a zombot would equally well point to -- 'deceived' into believing he possessed it. But if the zombot can be thus deceived, how do you know you aren't, as well? Any attempt to find out loops back onto itself; at every point, you might as well be the zombot. Your knowledge of your own consciousness is just data you have access to; but the zombot has access to equivalent data, generated through his introspection.

Being Real and Seeming Real
This may seem to be a deeply unsatisfactory explanation to some; indeed, it seems as if there is no 'real' consciousness left anymore, that everything is just a clever trick with mirrors.
And in a way, that's true. Whenever one sets out to explain magic, the explanation can contain no magic in the end, or it is not an explanation at all -- but that does lead to the paradoxical consequence that the phenomenon that one has set out to explain now seems to have vanished altogether, that it has been explained away rather than explained.
But even if some phenomenon is reduced to its fundamental constituents, this does not make the phenomenon any less real. On the level of cells, it makes no sense to talk about my arm -- yet clearly, this does not mean that my arm 'doesn't exist'. Emergent properties are no less real than 'fundamental' ones, they are just answers to a different set of questions, that it would make no sense to ask on the lower level.
So, too, is it the case with consciousness -- but in the mind, things get an extra twist. The reason for this is that there is no objective fact as to what's real about subjective experiences -- so whatever seems real about them, is, or at least, can be taken to be. Consider a migraine: is there a difference between the case where you have a migraine, and the case where it merely seems to you as if you had a migraine? The two are identical: in both cases, you have a splitting headache, no less real in the second than in the first. Thus, there is no real meaning to 'merely seeming conscious': if, to you, you seem conscious, then you are conscious -- both yield an identical phenomenology. Thus, by making it seem as if there were consciousness, consciousness can be created.
In the end, we're all zombots -- just suffering from unconsciousness denial. However, unlike the case of seeing, where the appearance or the belief of possessing sight still leaves you blind, the appearance or the belief of possessing consciousness suffices to establish consciousness.
