Journal of Behavioral and Brain Science, 2011, 1, 242-261. doi:10.4236/jbbs.2011.14031. Published Online November 2011 (http://www.SciRP.org/journal/jbbs)


Advances in Interdisciplinary Researches to Construct a Theory of Consciousness

Pierre R. Blanquet
Club d'Histoire des Neurosciences, University P. & M. Curie, Paris, France

E-mail: [email protected]
Received August 22, 2011; revised October 13, 2011; accepted October 28, 2011

Abstract

Interdisciplinary research toward a scientific explanation of consciousness constitutes one of the most exciting challenges of contemporary science. However, although considerable progress has been made in the neurophysiology of states of consciousness such as sleep/waking cycles, investigation of the subjective and objective nature of the contents of consciousness still raises serious difficulties. Based on a wide range of analytical and experimental studies, current approaches to modeling consciousness fall into philosophical and non-neural approaches on the one hand and neural approaches on the other. Philosophical and non-neural approaches include the naturalistic dualism model of Chalmers, the multiple drafts cognitive model of Dennett, the phenomenological theory of Varela and Maturana, and the physics-based hypothesis of Hameroff and Penrose. The neurobiological approaches include the neurodynamical model of Freeman, the visual system-based theories of Lamme, Zeki, and Milner and Goodale, the phenomenal/access hypothesis of Block, the emotive somatosensory theory of Damasio, the synchronized cortical models of Llinas and of Crick and Koch, and the global neurophysiological brain models of Changeux and Edelman. There have also been many efforts in recent years to study artificial intelligence systems such as neurorobots and some supercomputer programs, based on the laws of computational machines and on the processing capabilities of biological systems. This approach has proven to be a fertile physical enterprise for testing hypotheses about the functioning of brain architecture. Until now, however, no machine has been capable of reproducing an artificial consciousness.

Keywords: Models of Consciousness, Third- and First-Person Data/Qualia, Neuronal Workspace, Thalamo-Cortical Neurons, Artificial Intelligence

1. Introduction

By separating the immaterial mind from the material body, Descartes (1596-1650) traced a dividing line between two incommensurable worlds: the spiritual and the material [1]. From this ontological dualism was born an epistemological dualism according to which the material must be known by science and the mind by introspection. During the 18th century, the definition of the mind began to merge with that of consciousness, recognized as the instrument of knowledge of the world that surrounds us as well as of our own interiority. Hume (1711-1776), for example, defined the mind as an interior theatre on which everything mental unravels in a chaotic way in front of an interior eye whose eyelids never blink [2]. The materialist philosophers of the 18th century, on the contrary, gave up the Cartesian immaterial substance, but they kept the metaphor of the body-machine and extended this metaphor to all human functions, including thought. Far from demonstrating the immateriality of the heart, indeed, the Cartesian cogito would rather prove, for La Mettrie (1709-1751), Diderot (1713-1784) and Holbach (1723-1789), that matter can think. In this line of thought, the transformist/proto-evolutionist and Darwinian theories of the 18th and 19th centuries, by introducing the idea of a biological origin of man, opened the vast field of investigation of living matter and the problem of the when and how of the mind [3].

However, it was necessary to await the rise of psychology in the mid-nineteenth century for consciousness to become the central object of a new discipline claiming scientific status and asserting its independence from philosophy. This introspective approach to the mind dominated the investigations of researchers such as Hermann von Helmholtz (1821-1894), Wilhelm Wundt (1832-1920) and William James (1842-1910) [4-6]. At the end of the 19th century, in his founding text for a phenomenology, Husserl (1859-1938) set out his thesis on the nature of consciousness [7]. Following Brentano's work, he adopted the concept of intentionality and gave it a central role with respect to the transcendental ego. He taught that consciousness is never empty: it always aims at an object and, more generally, at the world. The ego always carries within it the relationship with the world as intentional aiming. The early 20th century saw the eclipse of consciousness from scientific psychology [8,9]. Without denying the reality of subjective experiences, the strategy of the behaviorists was to set consciousness aside from the direct field of investigation and to regard the brain as a puzzle-box. They thus bracketed the unresolved problems that had encumbered psychology in order to study stimuli and responses, i.e. behaviors, reducing the analysis to rigid relations between inputs (stimuli) and outputs (movements).

In the 1960s, the grip of behaviorism weakened with the rise of cognitive psychology [10,11]. However, despite this renewed emphasis on explaining cognitive capacities such as memory, perception and language comprehension, consciousness remained a largely neglected topic until the major scientific and philosophical research of the 1980s. Research on the correlations between identifiable mental activities and objectifiable cerebral activities then succeeded the traditional questioning on the relation between the mind and the brain. It was taken up by neurophilosophers, neurobiologists and researchers in artificial intelligence, as familiar with the epistemological and ontological questions as with the methodological and empirical ones. My purpose here is to review the main attempts to provide an adequate explanatory basis for consciousness.

2. Consciousness, a Challenge for Philosophy and Science

2.1. Difficulties in Philosophically Tackling the Problem of Consciousness

Traditionally, philosophy of mind assumed a fundamental distinction between dualist philosophers, for whom there are two kinds of phenomena in the world, the mind and the body, and monist philosophers, for whom the world is made of only one substance. Although dualism can be a dualism of substance, as Descartes conceived it, the majority of dualist philosophers currently adopt a dualism of property, which admits that matter can have both mental and physical properties. Likewise, although monism can be an idealism (in the sense of the philosopher Berkeley), virtually all monist philosophers at the present time are materialists. Property dualism asserts the existence of conscious properties that are neither identical with nor reducible to physical properties, but which may nevertheless be instantiated by the very same things that instantiate physical properties. The most radical dualism is psychological parallelism, which seeks to account for psychophysiological correlations by postulating that mental states and cerebral states correspond to one another without acting on each other [12]. A less radical version, epiphenomenalism, recognizes the existence of causal influences of the brain on mental states, but not the reverse [13].

In contrast to dualism, monism claims that everything that exists must be made of matter. A modern version of materialism is physicalism. One type of physicalist materialism is the radical current of thought called eliminativism, which rejects the very notion of consciousness as muddled or wrong-headed. It affirms that mental states are only temporary beliefs destined to be replaced by neurobiological models of research [14]. Another form of physicalist materialism is the strong identity thesis. It affirms the existence of an internal principle of control (the mental principle) that is nothing other than the brain: the mental is reducible to biological properties, which are themselves reducible to physics. However, the thesis most commonly adopted is that of functionalism. Functionalism was proposed to answer an intriguing question: how to explain, if the identity theory is admitted, that two individuals can have different cerebral states and nevertheless have, at one precise time, exactly the same mental state? The functionalists' response is as follows: what is identical in the two different cerebral occurrences of the same mental state is the function. Whatever the physical constitution of the cerebral state, several mental states are identical if their causal relations are the same. For the functionalists, mental states are thus defined by their functional role within the mental economy of the subject [15].

To accept traditional dualism is to agree to a strict distinction between mental and physical properties. In other words, it is to give up the unified neuroscientific theory that one may hope to obtain one day. But to accept the solutions recommended by physicalist materialism is worse still, because these end up denying the obvious fact that consciousness has internal subjective and qualitative states. For this reason, certain philosophers have tried to solve the problem either by adopting a mixed theory, both materialistic and dualistic, or by denying the existence of subjective states of consciousness. David Chalmers, for example, adopted the first solution. Chalmers is known for his formulation of the distinction between the easy problems of consciousness and the single hard problem [16,17]. The essential difference between the easy problems and the hard (phenomenal) problem is that the former are at least theoretically answerable via the standard strategy of functionalism. In support of this, in a thought experiment, Chalmers proposed that if zombies, complete physical duplicates of human beings lacking only qualitative experience, are conceivable, then they must be logically possible, and subjective personal experiences are not fully explained by physical properties alone. Instead, he argued that consciousness is a fundamental property ontologically autonomous of any known physical properties. Chalmers therefore described the mind as having phenomenal and psychological aspects, and proposed that the mind can be explained by a form of naturalistic dualism. That is, Chalmers accepted the functionalists' analysis while introducing the concept of an irreducible consciousness into his system. According to him, since the functional organization provides the elements of mental states in their nonconscious forms, consciousness must then be added to this organization.

Other philosophers, on the contrary, have claimed that closing the explanatory gap and fully accounting for subjective personal experiences is not merely hard but impossible. This position is most closely associated, for example, with Colin McGinn, Thomas Nagel [18,19] and Daniel Dennett [20]. In particular, Dennett argued that the concept of qualia is so confused that it cannot be put to any use or understood in any non-contradictory way. Having related consciousness to properties, he then declared that these properties are actually judgements of properties. That is, he considered judgements of properties of consciousness to be identical to the properties themselves. Having identified properties with judgements of properties, he could then show that since the judgements are insubstantial, the properties are insubstantial, and hence the qualia are insubstantial. Dennett therefore concluded that qualia can be rejected as non-existent [20]. For Dennett, consciousness is a mystifying word because it supposes the existence of a unified center piloting thoughts and behaviors. For him, the psyche is a heterogeneous unit which combines a series of mental processes that we know little about, such as perception, language production, learning, etc.: we attribute, to others and to ourselves, intentions and a consciousness, because our behaviors are goal-directed [21,22].

In other writings, Dennett defended an original thesis on free will [23]. For this he drew on evolutionary biology, cognitive science, the theory of cellular automata and memetics [24,25]. Against the argument of those who say that evolutionary psychology, together with memetics, necessarily implies a world deprived of any possibility of choice, Dennett held on the contrary that there is something special about the human subject. According to him, the theory of evolution supports the selective advent of avoidance agents. These agents have the capacity to extract information from the environment in order to work out strategies to avoid risks and to choose judicious behaviors. Now, this capacity matters much more for humans than for animals, because each human individual memorizes a large quantity of social and cultural information. It is therefore absurd to conceive of a linear determinism similar to that of Laplace's demon. For Dennett, free will must rather be conceived as the chaotic determinism of a vast neuromimetic network, in which the information received at its inputs is combined according to its respective weights to give completely unforeseeable and non-reproducible, but non-random, outputs.

2.2. Difficulties in Scientifically Tackling the Problem of Consciousness

One of the difficulties in scientifically tackling the problem raised by consciousness comes from the obvious fact that it is the originating principle from which the categories of the interpersonal world and the subjectivity of each personal world are generated. Consciousness is not a substance but an act that is in itself non-objectifiable; it escapes any representation. What is disconcerting about consciousness, notes Edelman, is that it does not seem to be a matter of behavior. It is, quite simply, always there [26]. Another difficulty comes from the fact that consciousness is a Janus Bifrons, in the sense that it has both an ontology from the first-person perspective and an ontology from the third-person perspective, each irreducible to the other [27]. It has a first-person perspective because mental states are purely subjective interior experiences of each moment of the life of humans and animals. Principal sorts of first-person data include visual, perceptual, bodily and emotional experiences, mental imagery and occurrent thoughts. But consciousness also has an ontology from the third-person perspective, which concerns the behavior and the brain processes of conscious systems [22]. These behavioral and neurophysiological data relevant to the third-person perspective provide the traditional material of interest for cognitive psychology. Principal sorts of third-person data include perceptual discrimination of external stimuli, integration of information across sensory modalities, automatic and voluntary actions, levels of access to internally represented information, verbal reportability of internal states, and differences between sleep and wakefulness.

The problem of scientifically approaching the phenomenon of consciousness is more complex still because first-person experience is common to qualia (singular quale), intentionality and self-consciousness. Introduced by Lewis [28], the term qualia is in contemporary usage either used to refer to the introspectively accessible phenomenal aspects of our mental lives [29], or used in a more restricted way so that qualia are intrinsic properties of experiences that are ineffable, nonphysical, and given incorrigibly [30], or used by some philosophers such as Whitehead, who admitted that qualia are fundamental components of physical reality and described the ultimate concrete entities in the cosmos as being actual occasions of experience [31]. Experiences with qualia are involved, for example, in seeing green, hearing loud trumpets or tasting liquorice, and in bodily sensations such as feeling a twinge of pain, feeling an itch, feeling hungry, etc. They are also involved in felt reactions, passions or emotions such as feeling delight, lust, fear or love, and in felt moods such as feeling elated, depressed, calm, etc. [32,33]. Intentionality and qualia necessarily coexist in the generation of conscious states, but the qualia aspect may be distinguished from the intentionality aspect insofar as the perception of an object, the evocation of a memory or an abstract thought can be accompanied by different affective experiences (joy or annoyance, well-being or malaise, etc.) [34].

3. Experimental Approaches to a Science of Consciousness

3.1. Neurophysiological Studies of Neural Networks and Neuro-Mental Correlations

There are two common, but quite distinct, usages of the term consciousness, one revolving around arousal or states of consciousness and another around the contents of consciousness or conscious states. States of consciousness are states of vigilance (i.e., the continuum of states encompassing wakefulness, sleep, coma, anesthesia, etc.) with which attention is associated. They are cyclic and can last several hours. In contrast, conscious states (percepts, thoughts, memories or subjective experiences such as qualia) are neither cyclic nor reproducible and can last only a few minutes, seconds or sometimes milliseconds. Conscious states are situated in a spatiotemporal context. They may refer to the past or the future but are always experienced in the present. They introduce conscious representations and, more or less explicitly, global models of the self, the alter ego and the world. States of consciousness and contents of consciousness therefore present very unequal difficulties for neuroscientific investigation [35-37].

3.1.1. Neurophysiological Studies of States of Consciousness

Considerable progress has been made during the last decade in the neurophysiology of states of consciousness. In particular, impressive progress has been made in the neurophysiological investigation of states of vigilance. Notably, the molecular mechanisms of distinct sleep/wake cycles have been thoroughly studied [38]. The neuroanatomical systems, the cellular and molecular mechanisms, and the principal types of neurotransmitters involved in these mechanisms have for the most part been identified. In particular, an important line of research has investigated arousal in altered states of consciousness, for instance in and after epileptic seizures, after taking psychedelic drugs, during general anesthesia or after severe traumatic brain injury. These studies demonstrate that a plethora of nuclei with distinct chemical signatures in the thalamus, midbrain and pons must function for a subject to be in a sufficient state of brain arousal to experience anything at all. In particular, sleep/wake cycles essentially depend on an anatomical system comprising brainstem structures, thalamic and hypothalamic nuclei, and the nucleus of Meynert. Awakening into the vigilant state correlates with a progressive increase in regional cerebral blood flow, first in the brainstem and thalamus, then in the cortex, with a particularly marked increase in prefrontal-cingulate activation and functional connectivity [38,39]. Anesthesia, sleep, the vegetative state and coma are all associated with modulations of the activity of this thalamocortical network. Although most dreams occur in paradoxical (REM) sleep, the neurobiological mechanisms of wakefulness and paradoxical sleep are identical at the thalamocortical level; the only difference between the two states is at the brainstem level. What differentiates these states is the relationship with the environment: wakefulness brings into play motor and sensory interactions with the external world, while paradoxical sleep is relatively independent of the external world. Thus, dreaming may be considered an incomplete form of consciousness, uncontrolled by the environment and mainly reflecting internal factors [38].

3.1.2. Neurophysiological Studies of Contents of Consciousness

In contrast to the study of states of consciousness, the study of the contents of consciousness raises a great many problems. Many difficulties are due to the brevity and poorly reproducible nature of subjective experiences. In addition, the mechanisms of conscious thoughts often result from processes that are at once conscious and unconscious, which coexist and even interact (language, for example, brings into play the unconscious use of linguistic routines). To explain the contents of consciousness from the third-person perspective, we actually need to specify the neural and/or behavioral mechanisms that perform the functions [27]. The availability of behavioral data is reasonably straightforward, because researchers have accumulated a rich body of behavioral data relevant to consciousness. But the body of neural data obtained to date is correspondingly more limited because of technological limitations [33,40]. To study neural mechanisms, researchers currently use a variety of neuroscientific measurement techniques, including brain imaging via functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), single-cell recording through inserted electrodes, and surface recording through electroencephalography (EEG) and magnetoencephalography (MEG) [27,41,42]. However, though these approaches seem quite promising, many experimental findings have proved not to be univocal and must be compared and integrated with findings obtained from other approaches. For example, when one sees a face there is much activity (for example, on the retina and in the early visual cortex) that seems explanatorily redundant for the formation of the conscious percept of the face [37].

To specify precisely the neuronal basis of such a conscious perception, psychologists have perfected a number of techniques (masking, binocular rivalry, continuous flash suppression, motion-induced blindness, change blindness, inattentional blindness) [43]. In this design, one keeps as many things as possible constant, including the stimuli, while varying the conscious percept, so that changes in neural activation reflect changes in the conscious percept rather than changes in the stimuli. For instance, a stimulus can be perceptually suppressed for seconds at a time: the image is projected into one of the observer's eyes but is invisible, not seen. In this manner, the neural mechanisms that respond to the subjective percept rather than to the physical stimulus can be isolated, permitting the footprints of visual consciousness to be tracked in the brain. In some perceptual-illusion experiments, on the contrary, the physical stimulus remains fixed while the percept fluctuates. A good example is the Necker cube, whose 12 lines can be perceived in one of two different ways in depth.
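The contrastive logic of these paradigms can be made concrete with a toy analysis: hold the stimulus constant, record the subject's report on each trial, and compare the neural measure grouped by reported percept. The sketch below is only an illustration with simulated numbers; the labels, trial counts and "response" values are hypothetical and are not taken from any of the studies cited here.

```python
# Toy illustration of the contrastive method: the stimulus is identical on
# every trial, only the reported percept varies, so a systematic difference
# in the measured signal can be attributed to the percept, not the stimulus.
import random
import statistics

random.seed(0)

trials = []
for _ in range(200):
    percept = random.choice(["seen", "suppressed"])          # subject's report
    # Hypothetical "neural response": the stimulus never changes, but the
    # simulated activity tracks whether the stimulus reached awareness.
    response = random.gauss(1.0 if percept == "seen" else 0.3, 0.2)
    trials.append((percept, response))

by_percept = {}
for percept, response in trials:
    by_percept.setdefault(percept, []).append(response)

for percept, responses in sorted(by_percept.items()):
    print(f"{percept:>10}: mean response = {statistics.mean(responses):.2f} "
          f"(n = {len(responses)})")
```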

In contrast to the techniques used for studying third-person data, methodologies for investigating first-person experiences (in particular qualia) are relatively thin on the ground, and formalisms for expressing them are even thinner. The most obvious obstacle to the gathering of first-person data concerns the privacy of such data. Indeed, first-person data are directly available only to the subject having the experiences. To others, these first-person data are only indirectly available, mediated by observation of the subject's behavior or brain processes. In practice, the most common way of gathering data about the conscious experiences of other subjects is to rely on their verbal reports. However, verbal reports have limits. Some aspects of conscious experience (e.g. the experience of emotion) are very difficult to describe. Moreover, verbal reports cannot be used at all in subjects without language, such as infants and animals. In these cases, one needs to rely on other behavioral indicators. For example, if an individual presses one of two buttons depending on which of two ways it perceives a Necker cube at a given time, this button pressing is a source of first-person data. A second obstacle is posed by the absence of general formalisms in which first-person data can be expressed. Researchers typically rely either on simple qualitative characterizations of data or on simple parameterizations of them. These formalisms suffice for some purposes, but they are unlikely to suffice for the formulation of systematic theories [27,33].

3.1.3. Approaches to Research on Animal Consciousness

There is now abundant and increasing behavioral and neurophysiological evidence consistent with, and even suggestive of, conscious states in some animals. This is because human studies correlating accurate reports with neural correlates can provide a valuable benchmark for assessing evidence from studies of the behavior and neurophysiology of some animals [44]. Relevant forms of report include analysis of responses to binocular rivalry in the case of primates, vocalization in the case of birds such as African grey parrots, and coloration and body patterning in the case of cephalopods such as Octopus vulgaris. Rhesus macaque monkeys, for example, were trained to press a lever to report perceived stimuli in a binocular rivalry paradigm. The results of these studies were consistent with evidence from humans subjected to binocular rivalry and magnetoencephalography. They suggested that consciousness of an object in monkeys involves widespread coherent synchronous cortical activity. Likewise, similarities have been found in the functional circuitry underlying the organization and sequencing of motor behaviors related to vocalization in birds and mammals capable of vocal learning. Much of the neural basis for song learning in some birds was found to reside in an anterior forebrain pathway involving the basal ganglia, in particular a striatal neuronal area resembling that present in the mammalian striatum. These homologies are strongly suggestive of neural dynamics that support consciousness in birds. The case of cephalopod mollusks is much less clear. A cephalopod such as Octopus possesses a large population of sensory receptors (communicating with a nervous system containing between 170 and 500 million cells) and numerous nucleus-like lobes in its brain. Its optic lobe, containing as many as 65 million neurons, plays a critical role in higher motor control and the establishment of memory. Moreover, a number of other lobes appear to be functionally equivalent to vertebrate forebrain structures, though their organization bears little resemblance to the laminar sheets of mammalian cortex. Recently, laboratory tests and observations in a natural environment showed that Octopus is heavily dependent on learning and might even form simple concepts. This strongly suggests that cephalopod mollusks have a certain form of primary consciousness [45].


3.2. Approaches to Building Artificial Intelligence Systems

Artificial intelligence has not been sparing with metaphors concerning the functioning of the human mind [46]. In 1950, Alan Turing tackled the problem of computationalism by proposing his famous test to establish whether one can consider a machine as intelligent as a human [47]. So far, however, no computer has given responses totally indistinguishable from human responses. It appears that this computational cognitivism is limited insofar as it is founded on the formal character of calculation (on this view, to think is to process data, i.e. to calculate, to manipulate symbols). This approach is not very different from one in which computer simulations and mathematical models are used to study systems such as stomachs, planetary movements, tornadoes, and so on. In contrast, connectionism equates thinking with the operation of a network of neurons and argues that every cognitive operation is the result of countless interconnected units interacting among themselves, with no central control. Connectionist networks are in general adaptive and allow the study of learning mechanisms [48,49]. Such an approach is based on the view that the nervous system itself computes [50]. In this area, a line of research has developed that consists in putting equations into Darwinian competition by means of evolutionary algorithms inspired by the modeling of certain natural systems (for example, the competition between social insects in the construction of the nest) [51-53]. These systems, which are generic population-based metaheuristic optimization algorithms, are able to solve problems using mechanisms inspired by biological evolution, such as reproduction, mutation, recombination and selection. The principle on which they are founded consists in initializing a population of individuals in a way that depends on the problem to be solved (the environment), and then evolving this population from generation to generation using operators of selection, recombination and variation. Currently, a number of cyberneticians try to approach this difficult problem with multi-agent systems (SMA), or massively multi-agent systems [54-60]. An agent-based model is a class of computational models for simulating the actions and interactions of autonomous agents (either individuals or collective entities such as organizations or groups) with a view to assessing their effects on the system as a whole. Such a model simulates the simultaneous operations and interactions of multiple agents located in an environment made up of objects which are not agents and are not capable of evolving, in an attempt to re-create and predict the appearance of complex phenomena. The agents can thus substitute for the programmer and even produce unexpected results.
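As an illustration of the population-based loop just described, the following minimal sketch initializes a random population and then repeatedly applies selection, recombination and mutation. It is a generic sketch, not any of the systems cited in [51-60]; the bit-string "environment", fitness function and parameters are all hypothetical.

```python
# Minimal, generic evolutionary-algorithm loop: initialize a population,
# then repeat selection -> recombination -> mutation generation after
# generation. All parameters and the toy fitness function are hypothetical.
import random

random.seed(1)
TARGET = [1] * 20                                    # toy "environment"

def fitness(individual):
    """Number of positions matching the target."""
    return sum(1 for g, t in zip(individual, TARGET) if g == t)

def crossover(a, b):
    """Single-point recombination of two parents."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(individual, rate=0.05):
    """Flip each gene with a small probability."""
    return [1 - g if random.random() < rate else g for g in individual]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                        # selection
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(50)]                # recombination + mutation
    best = max(population, key=fitness)
    if fitness(best) == len(TARGET):
        print(f"optimum reached at generation {generation}")
        break
```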

In this vein, a novel area of great interest is the construction of robotic organisms. One essential property of neurorobots is that, like living organisms, they must organize the unlabeled signals they receive from the environment into categories. This organization of signals, which in general depends on a combination of sensory modalities (e.g. vision, sound, taste, or touch), is a perceptual categorization that, as in living organisms, makes object recognition possible on the basis of experience but without a priori knowledge or instruction. Like the brain, neurorobots operate according to selectional principles: they form categorical memory, associate categories with innate values, and adapt to the environment [61].
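One simple way to picture this kind of experience-driven categorization (purely illustrative, and not the actual architecture of the neurorobots described in [61]) is online competitive learning: each unlabeled "sensory" vector is assigned to whichever prototype is closest, and that prototype is nudged toward the input, so categories form without any labels or prior instruction.

```python
# Illustrative only (not the neurorobot architecture of [61]): an online
# competitive-learning categorizer that groups unlabeled sensory vectors
# into categories purely from experience, with no labels or instruction.
import math
import random

random.seed(2)

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Four initially random category prototypes over a 3-feature "sensorium"
# (the features could stand for hypothetical vision/sound/touch channels).
prototypes = [[random.random() for _ in range(3)] for _ in range(4)]

def categorize(signal, learning_rate=0.1):
    """Return the winning category and move its prototype toward the signal."""
    winner = min(range(len(prototypes)),
                 key=lambda k: distance(prototypes[k], signal))
    prototypes[winner] = [p + learning_rate * (s - p)
                          for p, s in zip(prototypes[winner], signal)]
    return winner

# Stream of unlabeled inputs drawn from three hidden "object" types.
for _ in range(500):
    centre = random.choice([(0.1, 0.1, 0.9), (0.9, 0.2, 0.1), (0.5, 0.9, 0.5)])
    categorize([c + random.gauss(0, 0.05) for c in centre])

print("learned prototypes:",
      [[round(x, 2) for x in p] for p in prototypes])
```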

An important problem arises, however, with these new research perspectives: any constructive and exhaustive approach to artificial consciousness must define a system which has access, like human primary consciousness, to the meaning of its own knowledge. To investigate this question, as Owen Holland has put it, the system must be able simultaneously to produce an intentional representation and to perceive, in its own organization, this intentionally generated form; it must be self-aware [62]. A question then arises: when will a machine become self-aware? Although an answer to this question is hazardous, one can determine at least a plausible necessary precondition without which a machine could not develop self-awareness. This precondition derives from the assertion that, to develop self-awareness, a neural network must be at least as complex as the human brain. Recently, an enormous project (the Human Brain Project) set itself the objective, believed to be achievable in as little as 10 years' time, of simulating the functioning of the mammalian neocortex by means of the fastest supercomputer architecture in the world, IBM's Blue Gene platform [63]. For the moment, a single cortical column has been modelled, consisting of approximately 60,000 neurons and 5 km of interconnecting synapses, built from about 15,000 experiments carried out in the laboratory. This Blue Brain project will in the future have to reproduce the equivalent of the million functional units that the neocortex holds. It should however be noted that this project addresses only a necessary, not a sufficient, condition for the development of an artificial consciousness. Even if the machine becomes as skilled as humans in many disciplines, one cannot assume that it has become self-aware. The existence of a powerful computer equipped with millions of gigabytes is not in itself sufficient to guarantee that the machine will be a self-aware intelligence. Moreover, it remains to define the criteria that will make it possible to recognize that an artificial entity has a conscious state. Indeed, the problem is not to know whether a machine suffers, but whether it behaves as if it suffered [64,65].


4. Explanatory Theories of Consciousness

Explanatory theories of consciousness should be distinguished from experimental approaches to the phenomena of consciousness. While the identification of correlations between aspects of brain activity and aspects of consciousness may constrain the specification of neurobiologically plausible models, such correlations do not by themselves provide explanatory links between neural activity and consciousness. Models of consciousness are valuable precisely to the extent that they propose such explanatory links. Today, one can somewhat arbitrarily classify the various approaches to modeling consciousness into two categories: theories that link certain functional modes of the brain to conscious activity, and theories that link the structure of neural networks to conscious activity.

4.1. Theories Linking Certain Functional Modes of the Brain to Conscious Activity

In this category, one finds mainly the phenomenological approaches of Francisco Varela and Humberto Maturana, the physics-based hypothesis of Stuart Hameroff and Roger Penrose, and the neurodynamical model of Walter J. Freeman.

Model of Varela and Maturana. Varela, and his mentor Humberto Maturana, developed a model based on the notion of autopoiesis [66-70]. Based on cellular life, autopoiesis attempts to define, beyond the diversity of all living organisms, a common denominator that allows the living to be discriminated from the non-living. Inside the boundary of a cell, many reactions and many chemical transformations occur, but despite all these processes the cell always remains itself and maintains its own identity. The consequence is that the interaction between a living autopoietic unit and a component of its environment is dictated solely by the way in which this component is seen by the living unit. To characterize this very particular kind of interaction, Maturana and Varela used the term cognition. In their theory, the environment has its own structural dynamics but does not determine the changes in the organism. Although it can induce a reaction in the organism, the accepted changes are determined by the internal structure of the organism itself. The consequence is that the environment brings the organism to life and the organism creates the environment with its own perceptual sensorium. It should be emphasized that this thinking is close to certain European philosophies, in particular that of Merleau-Ponty [71]. To express this process of mutual calling into existence, this co-emergence, this equivalence between life and cognition, Varela and Maturana used the word enaction. For Varela, the mind as a phenomenology in action, viewed from either the first- or the third-person perspective, can only be described as a behavior literally situated in a specific cycle of operation [72]. For him, the mind is not in the head; it is in a non-place of the co-determination of inner and outer [73]. There is no separation between the cognitive act and the organic structure of life; they are one; the traditional Cartesian division between matter and mind disappears at the level of human cognition, at which the notion of consciousness appears. To signify that human consciousness has its counterpart in the organic structure, that there is no consciousness outside the reality of bodily experience, Varela used the term embodiment [74]. We are therefore global dynamic processes, in dynamical equilibrium, emerging and acting from interactions of constituents and interactions of interactions. In this thesis, the brain level is considered only to contribute to the properties of conscious experiences. At the phenomenal level, the constitution of conscious moments implies a high temporal integration of multiple contents emerging in a transitory way. According to Varela, because of its biophysical organization (its organizational closure), the brain belongs to the class of multistable dynamical systems, in which eigenbehaviors are constrained by a landscape of multiple non-stable attractors.

There are, however, some methodological problems with this theory. The first is the old problem that the mere act of attending to one's experience transforms that experience. This is not too much of a problem at the start of the investigation, because one has a long way to go before this degree of subtlety even comes into play, but it may eventually lead to deep paradoxes of observership. The second problem is that of developing a language or formalism in which phenomenological data can be expressed. The notorious ineffability of conscious experience plays a role here, because the language one has for describing experiences is largely derivative of the language one has for describing the external world. The third difficulty lies in the failure, or at least the limitations, of incorrigibility: judgments about one's own experience can be wrong.

Model of Hameroff and Penrose. For Stuart Hameroff and Roger Penrose, neurons belong to the world of classical physics, are computable, and cannot as such explain consciousness. They therefore proposed a new physics of objective reduction, which appeals to a form of quantum gravity to provide a useful description of fundamental processes at the quantum/classical borderline [75-78]. Within this scheme, consciousness occurs if an appropriately organized system is able to develop and maintain quantum coherent superposition until a specific objective criterion (a threshold related to quantum gravity) is reached; the coherent system then self-reduces (objective reduction). This type of objective self-collapse introduces non-computability, an essential feature of consciousness which distinguishes our minds from classical computers.


Objective reduction is taken to be an instantaneous event (the climax of a self-organizing process in fundamental space-time) [31]. In this model, quantum-superposed states develop in microtubule subunit proteins (tubulins) within brain neurons. They recruit more superposed tubulins until a mass-time-energy threshold (related to quantum gravity) is reached. At this point, self-collapse, or objective reduction, abruptly occurs. The pre-reduction coherent superposition phase is equated with pre-conscious processes, and each instantaneous (and non-computable) objective reduction, or self-collapse, with a discrete conscious event. Sequences of objective reductions give rise to a stream of consciousness. Microtubule-associated proteins can tune the quantum oscillations of the coherent superposed states; the objective reduction is thus self-organized, or orchestrated. Each orchestrated objective reduction event selects (non-computably) microtubule subunit states which regulate synaptic/neural functions using classical signaling. The quantum gravity threshold for self-collapse is relevant to consciousness because macroscopic superposed quantum states each have their own superposed space-time geometries. When these geometries become sufficiently separated, their superposition becomes significantly unstable and reduces to a single universe state. Quantum gravity thus determines, non-computably, the limits of the instability. In sum, each orchestrated objective reduction event is a self-selection of space-time geometry, coupled to the brain through microtubules and other molecules. For its authors, this orchestrated objective reduction provides a completely new and uniquely promising perspective on the hard problem of consciousness [77,78].

The model of Hameroff and Penrose has received serious criticism, notably from philosophers such as Rick Grush and Patricia Churchland [79]. These authors pointed out that microtubules are found in all plant and animal cells, not only in brain neurons. They also noted that some chemicals known to destroy microtubules do not seem to have any effect on consciousness and that, conversely, anaesthetics act without affecting the microtubules. Another objection addresses one of the claimed strengths of Penrose and Hameroff's model, namely, according to its authors, that it can account for the unity of consciousness. Indeed, if this impression of the unity of human consciousness should prove to be an illusion, then explanations based on non-locality and quantum coherence would become irrelevant.
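For orientation only (the criterion is not spelled out in the text above): in Penrose's own formulation, the lifetime of a superposition before objective reduction is set, to order of magnitude, by the gravitational self-energy E_G of the difference between the superposed mass distributions:

```latex
% Penrose's order-of-magnitude criterion for objective reduction (OR):
% a superposition whose branches differ by gravitational self-energy E_G
% is expected to self-reduce after a time of roughly
\tau \;\approx\; \frac{\hbar}{E_G}
```

The larger the superposed mass displacement, the larger E_G and the shorter the lifetime; this is why Hameroff and Penrose look to mesoscopic structures such as microtubules, where the threshold might plausibly be reached on neurally relevant timescales.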

Model of Freeman. The work of Walter J. Freeman was based mainly on electrophysiological recordings of the olfactory system of awake, behaving rabbits [80-84]. Freeman found that the central code for olfaction is spatial. Although this had been predicted by others on the basis of studies in the hedgehog, certain aspects of his results were surprising. For example, Freeman showed that the information is uniformly distributed over the entire olfactory bulb for every odorant. By inference, every neuron participates in every discrimination, even if, and perhaps especially if, it does not fire, because a spatial pattern requires both black and white. He discovered that the bulbar information does not relate to the stimulus directly but instead to the meaning of the stimulus. This means that the brain does not process information in the commonly used sense of the word. When one scans a photograph or an abstract, one takes in its import, not its number of pixels or bits; the brain processes meaning as in this example. He also found that the carrier wave is aperiodic: it does not show oscillations at single frequencies, but instead has waveforms that are erratic and unpredictable, irrespective of odorant condition. In Freeman's theory, the chaotic activity distributed among the neuronal populations is the carrier of a spatial pattern of amplitude modulation that can be described by the local heights of the recorded waveform. When an input reaches the mixed population, an increase in the nonlinear feedback gain produces a given amplitude-modulation pattern. The emergence of this pattern is considered to be the first step in perception: meaning is embodied in these amplitude-modulation patterns of neural activity, whose structure depends on synaptic changes due to previous experience. Through the divergence and convergence of neural activity onto the entorhinal cortex, the pulse patterns coming from the bulb are smoothed, thereby enhancing the macroscopic amplitude-modulation pattern while attenuating the sensory-driven microscopic activity. Thus, what the cortex sees is a construction made by the bulb, not a mapping of the stimulus. Hence, perception is an essentially active process, closer to hypothesis testing than to the passive recovery of incoming information. The stimulus then confirms or disconfirms the hypothesis, through state transitions that generate the amplitude-modulation patterns. Finally, through multiple feedback loops, global amplitude-modulation patterns of chaotic activity emerge throughout the entire hemisphere, directing its subsequent activity. These loops comprise feedforward flow from the sensory systems to the entorhinal cortex and the motor systems, and feedback flow from the motor systems to the entorhinal cortex and from the entorhinal cortex to the sensory systems. Such global brain states emerge, persist for a small fraction of a second, then disappear and are replaced by other states. It is this level of emergent and global cooperative activity that is crucial for consciousness.

Freeman also tackled the enigmatic problem of the nature of free will. He proposed that man, entirely engaged in a project and in permanent relation with other people and the world, takes his decisions in real time by involving his whole body. The will that is perceived as conscious is informed only with a slight delay. It does not decide a behavior already in progress; it only acts to modulate the various aspects of the voluntary decision and to legitimate it, taking into consideration the whole set of meanings that constitute the subject.

There are, however, some problems with this interesting account of Freeman's [85]. The mechanisms of origin and control of gamma oscillations are not yet entirely clear. As predicted by Freeman, during gamma oscillations an average lead/lag relation exists between local excitatory and inhibitory cells. However, recent analyses of the cellular dynamics have concluded that recurrent inhibition from fast-spiking inhibitory cells is largely responsible for maintaining the rhythmic drive, while the role played by excitatory processes in modulating or driving the oscillations remains undetermined. Since cortical networks form a dilute and massively interconnected network, a satisfactory explanation of gamma activity should account for the onset and offset of gamma activity in relation to events at more distant sites and at larger scales in the brain. Without clarification of these mechanisms, it remains difficult to define the link of gamma activity to the storage, retrieval and transmission of information, and between thermodynamic and cognitive or informational perspectives.
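The core idea of this subsection, that meaning rides on the spatial amplitude-modulation pattern of a shared aperiodic carrier rather than on the carrier waveform itself, can be pictured with a toy simulation. The sketch below is purely illustrative and is not Freeman's model: the logistic-map carrier, the eight channels and the two "odorant" patterns are hypothetical stand-ins.

```python
# Toy illustration only (not Freeman's model): every channel carries the same
# aperiodic waveform, scaled by a channel-specific local amplitude. The
# "meaning" is recoverable from the amplitude-modulation pattern (the local
# heights), not from the erratic carrier itself.

def chaotic_carrier(n, x=0.4, r=3.99):
    """Aperiodic carrier generated by the logistic map (erratic, non-periodic)."""
    series = []
    for _ in range(n):
        x = r * x * (1 - x)
        series.append(x - 0.5)
    return series

carrier = chaotic_carrier(1000)

# Two hypothetical spatial AM patterns over 8 recording sites.
patterns = {"odorant_A": [0.2, 0.9, 0.4, 0.8, 0.1, 0.7, 0.3, 0.6],
            "odorant_B": [0.8, 0.2, 0.7, 0.1, 0.9, 0.3, 0.6, 0.4]}

def record(amplitudes):
    """Each channel = the shared carrier scaled by its local amplitude."""
    return [[a * c for c in carrier] for a in amplitudes]

def local_heights(channels):
    """Recover the AM pattern as the root-mean-square height per channel."""
    return [(sum(v * v for v in ch) / len(ch)) ** 0.5 for ch in channels]

for name, amplitudes in patterns.items():
    print(name, [round(h, 2) for h in local_heights(record(amplitudes))])
```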

4.2. Theories Linking the Structure of Neural Pathways to Conscious Activity

In this second category of models, a first strategy consists in focusing on the visual system. This approach has been pursued mainly by Viktor Lamme, Semir Zeki, and David Milner and Melvyn Goodale. A second strategy, based on a more global neurophysiological approach, has been developed mainly by Ned Block, by Francis Crick and Christof Koch, and by Rodolfo Llinas, Antonio Damasio, Jean-Pierre Changeux and Gerald Edelman. As Chalmers has noted [17], in spite of their differences these theories have overwhelming commonalities: nearly all of them give a major role to interactions between the thalamus and the cortex.

4.2.1. First Strategy Focusing on the Visual System

Model of Lamme. The local recurrence theory of Viktor Lamme distinguishes three hierarchical types of neural processes related to consciousness [86-88]. The first stage involves a feedforward sweep during which information is fed forward from striate visual regions (i.e., V1) toward extrastriate areas as well as parietal and temporal cortices, without being accompanied by any conscious experience of the visual input. The second stage involves localized recurrent processing, during which information is fed back to the early visual cortex. It is this recurrent interaction between early and higher visual areas which leads to visual experience. The third and final stage consists of widespread recurrent processing, which involves global interactions (as in the global workspace of Changeux) and extends toward the executive (i.e., the frontoparietal network) and language areas. This final step also involves the attentional, executive, and linguistic processes that are necessary for conscious access to and reportability of the stimulus. An interesting aspect of this theory is that it offers an explanation of the difference between conscious and unconscious perception in mechanistic rather than architectural terms. Another interesting, although provocative, aspect of this theory is that consciousness should not be defined by behavioral indexes such as the introspective reports of the subject. Instead, one should rely on neural indexes of consciousness, one of which is neural recurrence. Indeed, Lamme is concerned with the question of defining phenomenological consciousness when a report is impossible. However, one main difficulty with this theory is that it fails to take into account the recurrent connections that exist between regions that are not associated with consciousness (for instance between area V1 and the thalamus). Although it remains possible that consciousness involves local recurrence between some cerebral components, this process cannot then be considered a sufficient condition for consciousness, since additional mechanisms are required to explain why it applies only to a restricted set of brain regions.

Model of Zeki. The microconsciousness theory of Semir Zeki assumes that instead of a single consciousness there are multiple consciousnesses, distributed in time and space [89,90]. This theory reflects the extensive functional specialization of the visual cortex. For example, while the perception of colors is associated with neural activity in area V4, motion perception reflects neural activity in MT/V5. Zeki took these findings as evidence that consciousness is not a unitary and singular phenomenon, but rather that it involves multiple consciousnesses distributed across processing sites that are independent of each other. Another piece of evidence in favor of this hypothesis is that the conscious perception of different attributes is not synchronous and can follow a temporal hierarchy. For example, psychophysical measures have shown that color is perceived a few tens of milliseconds before motion, reflecting the fact that neural activity during perception reaches V4 before reaching MT/V5. One main difficulty with this theory is that, at first glance, any processing region in the brain should constitute a neural correlate of consciousness in the multiple-consciousness theory. As such, it remains unclear why conscious perception is not associated with activity in most brain regions, including the cerebellum and subcortical regions, especially those conveying visual signals (e.g., the lateral geniculate nucleus). Another difficulty for this hypothesis is that visual regions can support the binding of several attributes in the absence of consciousness. This has been shown, for instance, in a patient with bilateral parietal damage, suggesting that the binding mechanisms that are supposed to lead to microconsciousness can in fact operate in the absence of consciousness.

Model of Milner and Goodale. The duplex vision theory proposed by David Milner and Melvyn Goodale postulates that visual perception involves two interconnected but distinct pathways in the visual cortex, namely the dorsal and the ventral stream [91,92]. After being conveyed along retinal and subcortical (i.e., geniculate) structures, visual information reaches V1 and then feeds the two streams. The ventral stream projects toward the inferior part of the temporal cortex and serves to construct a conscious perceptual representation of objects, whereas the dorsal stream projects toward the posterior parietal cortex and mediates the control of actions directed at those objects. The two streams also differ at the computational and functional levels. On one side, the ventral stream conveys information about the enduring (i.e., long-lasting) characteristics that will be used to identify objects correctly, and subsequently to link them to a meaning and classify them in relation to other elements of the visual scene. On the other side, the dorsal stream can be regarded as a fast, online visuomotor system dealing with the moment-to-moment information available to the system, which will be used to perform actions in real time. Recent evidence from visual masking has revealed unconscious neural activity in ventral regions, including the fusiform face area. This type of evidence is problematic for the duplex vision hypothesis, since it predicts that conscious perception should be proportional to neural activity in the ventral system. Such a possibility of unconscious ventral processing can nevertheless be accommodated by assuming a threshold mechanism, as proposed in the model of Zeki [90]. However, including this threshold leads the theory to lose its former appeal, since consciousness is only partially correlated with activity in the ventral system.

4.2.2. Second Strategy Based on a Global Neurophysiological Approach

Model of Block. One of the most influential issues in recent years has been the distinction between phenomenal and access consciousness proposed by Ned Block [93-95]. According to Block, phenomenal consciousness is experience; the phenomenally conscious aspect of a state is what it is like to be in that state. The mark of access consciousness, by contrast, is availability for use in reasoning and in rationally guiding speech and action. In short, phenomenal consciousness results from sensory experiences such as hearing, smelling, tasting, and having pains. Block grouped together as phenomenal consciousness experiences such as sensations, feelings, perceptions, thoughts, wants and emotions, whereas he excluded anything having to do with cognition, intentionality, or properties definable in a computer program. In contrast, access consciousness is available for use in reasoning and for the direct conscious control of action and speech. Access consciousness must be representational, because only representational content can figure in reasoning. Examples of access consciousness are thoughts, beliefs, and desires. A point of controversy for this attempt to divide consciousness into phenomenal and access consciousness is that some people view the mind as resulting from fundamentally computational processes, a view which implies that all of consciousness is definable in a computer program. In fact, Block held that phenomenal consciousness and access consciousness normally interact, but that it is possible to have access consciousness without phenomenal consciousness. In particular, Block believed that zombies are possible and that a robot could exist that is computationally identical to a person while having no phenomenal consciousness. Similarly, Block held that an animal can have phenomenal consciousness but no access consciousness.

If Block's distinction between phenomenal and access consciousness is correct, it has important implications for attempts by neuroscientists to identify the neural correlates of consciousness and for attempts by computer scientists to produce artificial consciousness in man-made devices such as robots. In particular, Block suggested that non-computational mechanisms for producing the subjective experiences of phenomenal consciousness must be found in order to account for the richness of human consciousness, or for there to be a way to rationally endow man-made machines with a similarly rich scope of personal experience of what it is like to be in conscious states. However, many advocates of the idea that there is a fundamentally computational basis of mind have felt that the phenomenal aspects of consciousness do not lie outside the bounds of what can be accomplished by computation. Indeed, some of the conflict over the importance of the distinction between phenomenal and access consciousness centers on just what is meant by terms such as computation, program and algorithm, because one cannot know whether it is within the power of computation, programs or algorithms to produce human-like consciousness.

Model of Llinas. Rodolfo Llinas proposed a model in which the notion of emergent collective activity plays a central role [96-98]. He suggested that the brain is essentially a closed system capable of self-generated activity
based on the intrinsic electrical properties of its component neurons and their connectivity. For Llinas, consciousness arises from the ongoing dialogue between the cortex and the thalamus. The hypothesis that the brain is a closed system followed from the observation that the thalamic input from the cortex is larger than that from the peripheral sensory system, suggesting that thalamo-cortical iterative recurrent activity is the basis for consciousness. A crucial feature of this proposal was the precise temporal relations established by neurons in the cortico-thalamic loop. This temporal mapping was viewed as a functional geometry, and involved oscillatory activity at different scales, ranging from individual neurons to the cortical mantle. Oscillations that traverse the cortex in a highly spatially structured manner were therefore considered as candidates for the production of a temporal conjunction of rhythmic activity over large ensembles of neurons. Such gamma oscillations were believed to be sustained by a thalamo-cortical resonant circuit involving pyramidal neurons in layer IV of the neocortex, relay-thalamic neurons, and reticular-nucleus neurons. In this context, functional states such as wakefulness and the various sleep stages are prominent examples of the breadth of variation that self-generated brain activity will yield. Since no gross morphological changes occur in the brain between wakefulness and dreamless sleep, the difference must be functional. Llinas therefore proposed that the conscious changes observed across the sleep/dream/waking cycle rest on pairs of coupled oscillators, each pair connecting the thalamus and a cortical region. He also suggested that temporal binding is generated by the conjunction of a specific circuit, involving specific sensory and motor nuclei projecting to layer IV and the feedback via the reticular nucleus, and a non-specific circuit, involving non-specific intralaminar nuclei projecting to the most superficial layer of the cortex and collaterals to the reticular and non-specific thalamic nuclei. Thus, the specific system was supposed to supply the content that relates to the external world, and the non-specific system was supposed to give rise to the temporal conjunction, or the context. Together, they were considered to generate a single cognitive experience.
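
The temporal conjunction between the specific and non-specific loops can be pictured with a pair of coupled oscillators. The sketch below is only a toy illustration of that idea, not Llinas's formal model; the frequencies, coupling constant and variable names are my own assumptions. With sufficient coupling, the two roughly 40 Hz oscillators phase-lock, which is the kind of temporal conjunction the model invokes.

```python
# Toy sketch: two ~40 Hz oscillators standing in for the "specific" (content)
# and "non-specific" (context) thalamo-cortical loops. With enough mutual
# coupling K their phase difference settles to a constant value (phase-locking).
import numpy as np

f_specific, f_nonspecific = 40.0, 41.0   # natural frequencies in Hz (illustrative)
K = 8.0                                   # coupling strength (illustrative)
dt, steps = 1e-4, 20000                   # 2 s of simulated time

phi1, phi2 = 0.0, np.pi / 2               # initial phases of the two loops
for _ in range(steps):
    # Kuramoto-style mutual coupling between the two loops
    dphi1 = 2 * np.pi * f_specific    + K * np.sin(phi2 - phi1)
    dphi2 = 2 * np.pi * f_nonspecific + K * np.sin(phi1 - phi2)
    phi1 += dphi1 * dt
    phi2 += dphi2 * dt

lag = np.angle(np.exp(1j * (phi1 - phi2)))  # wrapped phase difference
print(f"steady-state phase lag: {lag:.3f} rad (near-constant => phase-locked)")
```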

Model of Crick and Koch. In their framework, Francis Crick and Christof Koch tried to find the neural correlates of consciousness and suggested the existence of dynamic coalitions of neurons, in the form of neural assemblies whose sustained activity embodies the contents of consciousness [99-101]. Under the term cortex they also included regions closely associated with it, such as the thalamus and the claustrum. Crick and Koch began their theory with the notion of an unconscious homunculus, which is a system consisting of frontal regions of the brain looking at the back, mostly sensory, regions. They proposed that we are not conscious of our thoughts, but only of sensory representations of them in imagination. The brain presents multiple rapid, transient, stereotyped and unconscious processing modules that act as zombie modes. This is in contrast to the conscious mode, which deals more slowly with broader, less stereotyped thoughts and responses. In this model, the cortex acts by forming temporary coalitions of neurons which support each other's activity. The coalitions compete with one another, and the winning coalition represents what we are conscious of. These coalitions can vary in how widespread they are over the brain, and in how vivid and sustained they are. Moreover, more than one coalition can win at a given time. In particular, there might be different coalitions in the back and in the front of the cortex, where the coalitions in the front represent feelings such as happiness.

An important point in this model is that consciousness may arise from certain oscillations in the cerebral cortex, which become synchronized as neurons fire 40 times per second. Crick and Koch believed this phenomenon might explain how different attributes of a single perceived object (its color and shape, for example), which are processed in different parts of the brain, are merged into a coherent whole. In this hypothesis, two pieces of information become bound together precisely when they are represented by synchronized neural firing. However, the extent and importance of this synchronized firing in the cortex is controversial. In particular, it remains a mystery why synchronized oscillations should give rise to a visual experience, no matter how much integration is taking place. It should be added that, in this model, the neurons that are part of the neural correlate of consciousness will influence many other neurons outside this correlate. These are called the penumbra, which consists of synaptic changes and firing rates. It also includes past associations, expected consequences of movements, and so on. It is by definition not conscious, but it might become so. Also, the penumbra might be the site of unconscious priming.
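
The binding-by-synchrony idea described above can be illustrated numerically. The sketch below is my own toy example, not Crick and Koch's actual model; the signal parameters are invented, and the phase-locking value is a standard signal-processing measure used here only as an illustration. Two 40 Hz population signals that oscillate in phase score high, while an unrelated noisy signal does not.

```python
# Toy sketch: "binding by synchrony" scored as the phase-locking value (PLV)
# between two ~40 Hz population signals coding different attributes of the
# same object (say, colour and shape).
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1e-3)                              # 1 s sampled at 1 kHz

def gamma_signal(phase, noise):
    return np.sin(2 * np.pi * 40.0 * t + phase) + noise * rng.standard_normal(t.size)

colour        = gamma_signal(phase=0.0, noise=0.2)
shape_bound   = gamma_signal(phase=0.1, noise=0.2)          # nearly in phase
shape_unbound = 0.5 * rng.standard_normal(t.size)           # no 40 Hz locking

def plv(x, y):
    """Phase-locking value: 1 = perfectly synchronized, ~0 = unrelated."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

print("bound pair PLV:  ", round(plv(colour, shape_bound), 2))    # close to 1
print("unbound pair PLV:", round(plv(colour, shape_unbound), 2))  # much lower
```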

Model of Damasio. Antonio Damasio explored mainly the complexity of the human brain, taking into consideration emotion, feeling, language and memory. According to him, the most important concepts for core consciousness are emotion, feeling, and "feeling a feeling" [102,103]. The substrate for the representation of an emotional state is a collection of neural dispositions in the brain which are activated as a reaction to a certain stimulus. Once this occurs, it entails modification of both the body state and the brain state. This process starts as soon as the organism perceives, in the form of simple or complex sensory messages, proprioceptive or behavioral indicators signalling danger or, on the contrary, well-being. For Damasio, the emergence of feeling is based on the central role played by the proto-self, which provides a map of
the state of the body [102]. The neuronal patterns which constitute the substrate of feeling arise from two classes of changes: changes related to the body state, and changes related to the cognitive state. Thus, a feeling emerges when the collection of neural patterns contributing to the emotion leads to mental images. Feelings correspond to the perception of a certain state of the body to which the perception of the corresponding state of mind is added, i.e. the thoughts that the brain generates taking into account what it perceives of the state of the body. The notion of feeling is based on the organism detecting that the representation of its own body state has been changed by the occurrence of a certain object [102]. Consciousness is thus built on the basis of emotions transformed into feelings and "feeling a feeling"; it constantly mobilizes and gathers, in a workspace, the information necessary for survival strategies and decision making.

In this theory, consciousness is defined explicitly as a state of mind in which there is knowledge of one's own existence and of the existence of surroundings. According to Damasio, the emotions generate three levels of self during darwinian evolution: the protoself, the core self, and the autobiographical self [102,104]. The protoself is the most primitive form of self and the most widespread among living species. It is constituted by special kinds of mental images of the body produced in body-mapping structures, below the level of the cerebral cortex. It results from the coherent but temporary interconnection of various cerebral charts, through reentries of signals, that represent the state of the organism at a given time. The protoself is an integrated collection of separate neural patterns that map, moment by moment, the most stable aspects of the organism's physical structure. The first product of the protoself is primordial feelings: whenever one is awake, there has to be some form of feeling. On a higher level, the second form of the self, the core self, is about action. Damasio said that the core self unfolds in a sequence of images that describe an object engaging the protoself and modifying that protoself, including its primordial feelings. These images are now conscious because they have encountered the self. On a still higher level there is the autobiographical self, constituted in large part by memories of facts and events about the self and about its social setting. It develops during social interactions to form what Damasio called wide consciousness. That is, the protoself and the core self constitute a "material me", whereas the autobiographical self constitutes a "social me". Our sense of person and identity therefore resides in the autobiographical self.

All emotions engage structures related to the representation and/or regulation of the organism's state such as, for example, the insular cortex, the secondary somatosensory cortex, the cingulate cortex, and nuclei in the brainstem tegmentum and hypothalamus [105]. These regions share a major feature in that they are all direct and indirect recipients of signals from the internal milieu, the viscera and the musculoskeletal frame. Damasio considered that no one can make decisions that are independent of the state of his body and his emotions. The influences an individual undergoes are so numerous that the assumption of a linear determinism governing him is not defensible. Until then, most theories had addressed emotion as a consequence of a decision rather than as a reaction arising directly from the decision itself. On the contrary, Damasio proposed that individuals make judgements not only by assessing the severity of outcomes and their probability of occurrence, but also, and primarily, in terms of their emotional quality [106]. The key idea of Damasio was that decision making is a process influenced by signals that arise in bioregulatory processes, including those that express themselves in emotions and feelings. Decision making is not mediated by the orbito-frontal cortex alone, but arises from large-scale systems that include cortical and subcortical components. Like Dennett and Freeman, therefore, Damasio asserted that individuals are under the influence of a chaotic causality of processes that are unpredictable, indescribable and non-reproducible, but not random.

Two main criticisms have been made about this theory [107]. First, Damasio tried to give an account of the mind as a set of unconscious mapping activities of the brain, and this did not presuppose, or at least did not obviously presuppose, that these activities are conscious. But it is hard to understand any of these divisions of the self (protoself, core self and autobiographical self) without supposing that they are already conscious. Second, Damasio stumbled over dreaming. Although phenomenal consciousness can be very vivid in dreams even when the rational processes of self-consciousness are much diminished, Damasio described dreams as mind processes unassisted by consciousness. He claimed that wakefulness is a necessary condition for consciousness. He described dreaming as paradoxical since, according to him, the mental processes in dreaming are not guided by a regular, properly functioning self of the kind we deploy when we reflect and deliberate. Contrary to this point of view, however, it should be noted that dreaming is paradoxical only if one adopts a model of phenomenal consciousness based on self-consciousness (on knowledge, rationality, reflection and wakefulness).

Model of Changeux. The model of Changeux, developed with Stanislas Dehaene and collaborators [108,109], is founded on the idea that cerebral plasticity is mainly assured by a vast ensemble of interconnected neurons organized on the model of a global workspace, historically related to Baars' cognitive model proposed in the 1980s [110]. Recall that the global workspace model of Baars is a process
which implies a pooling of information in a complex system of neural circuits, in order to solve problems that none of the circuits could solve alone. This theory makes it possible to explain how consciousness is able to mobilize many unconscious processes, autonomous and localised in various cerebral regions (sensory, perceptual, or involved in attention, evaluation, etc), in order to treat them in a flexible and adjusted way [111]. Based on this workspace theory, Changeux proposed that neurons distributed with long-distance connections are capable of interconnecting multiple specialized processors. He also proposed that, when presented with a stimulus at threshold, workspace neurons can broadcast signals at the brain scale in an all-or-none manner, being either highly activated or totally inactivated [108,112-114]. Workspace neurons thus allow many different processors to exchange information in a global and flexible manner. These neurons are assumed to be the targets of two different types of neuromodulatory inputs. First, they display a constantly fluctuating spontaneous activity, whose intensity is modulated by activating systems from cholinergic, noradrenergic and serotoninergic nuclei in the brain stem, basal forebrain and hypothalamus. These systems modify the state of consciousness through different levels of arousal. Second, their activity is modulated by inputs arising from the limbic system, via connections to the anterior cingulate and orbitofrontal cortex, and by the direct influence of dopaminergic inputs [115].

In this global workspace model, highly variable networks of neural cells are selected and their activity is reinforced by environmental stimuli. These reinforced networks can be said to represent the stimuli, though no particular network of neural cells exclusively represents any one set of stimuli. The environment does not imprint precise images in memory. Rather, working through the senses, the environment selects certain networks and reinforces the connections between them. These processes connect memory to the acquisition of knowledge and the testing of its validity. Every evocation of memory is a reconstruction on the basis of relatively stable physical traces stored in the brain in latent form [116]. Changeux and Dehaene distinguished three kinds of neuronal representations: percepts; images, concepts, and intentions; and prerepresentations. Percepts consist of correlated activity of neurons that is determined by the outer world and disintegrates as soon as external stimulation is terminated. Images, concepts, and intentions are activated objects of memory, which result from activating a stable memory trace (similar to the remembered present of Edelman). Prerepresentations are multiple, spontaneously arising and unstable; they are transient activity patterns that can be selected or eliminated [117]. In this line of thought, thoughts can be defined in terms of calculations on mental objects [118]. To be capable of being mobilized in the conscious workspace, a mental object must meet three criteria: 1) the object must be represented as a firing pattern of neurons; 2) the active workspace neurons must possess a sufficient number of reciprocal anatomical connections, in particular in prefrontal, parietal and cingulate cortices; 3) at any moment, workspace neurons can only sustain a single global representation, the rest of the workspace neurons being inhibited. This implies that only one active cortical representation will receive the appropriate top-down amplification and be mobilized into consciousness, the other representations being temporarily unconscious [119].
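
These three criteria and the all-or-none "ignition" can be made concrete with a toy sketch. The code below is my own simplification under stated assumptions: the threshold value, the connectivity weighting and the class names are illustrative and are not part of the Dehaene-Changeux simulations. Several candidate representations compete, and at most one is broadcast to the workspace at a time.

```python
# Toy sketch of a global-workspace-style competition: only representations with
# enough bottom-up activation and enough long-range connectivity can "ignite",
# and a winner-take-all rule guarantees a single broadcast representation.
from dataclasses import dataclass

IGNITION_THRESHOLD = 1.0  # illustrative all-or-none threshold

@dataclass
class Representation:
    name: str
    activation: float          # criterion 1: strength of the firing pattern
    long_range_links: int      # criterion 2: reciprocal connections to workspace

    def drive(self) -> float:
        # only strongly connected representations can recruit workspace neurons
        return self.activation * min(self.long_range_links, 10) / 10.0

def workspace_broadcast(candidates):
    """Criterion 3: winner-take-all; at most one representation is conscious."""
    winner = max(candidates, key=lambda r: r.drive())
    if winner.drive() < IGNITION_THRESHOLD:
        return None                      # no ignition: everything stays unconscious
    return winner.name                   # broadcast to all connected processors

candidates = [
    Representation("masked word", activation=0.9, long_range_links=2),
    Representation("face",        activation=1.4, long_range_links=9),
    Representation("tone",        activation=1.1, long_range_links=5),
]
print(workspace_broadcast(candidates))   # -> 'face'; the others remain preconscious
```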

When a new situation arises, selection would occur from an abundance of spontaneously occurring prerepresentations, namely those that are adequate to the new circumstances and fit existing percepts and concepts. Free will is based on this selective adaptation process, called resonance, because planning and decision making result from it [117]. As in evolution, this could be a random recombination of representations. The meaning of the representations involved would be given by their proper functions. Plans generated in this way would be intelligible because they could be appropriate for the situation. Naturally, linguistically coded representations could also be of central importance here. It follows from this global neurophysiological model that, as Changeux put it, the identity between mental states and physiological or physicochemical states of the brain can be asserted in all legitimacy, and that it is enough for man to be a "neuronal man" [118,120]. One important aspect that follows from this theory is that once a set of processors has started to communicate through workspace connections, this multidomain processing stream becomes more and more automatized through practice, resulting in direct interconnections that no longer require the use of workspace neurons and no longer involve consciousness. Another important aspect of this theory is that information computed in neural processors that do not possess direct long-range connections with workspace neurons will always remain unconscious. This global workspace model has, however, been considered by some authors as a restrictive theory of access consciousness that sacrifices some phenomenological aspects of consciousness. In other terms, it has been criticized for confounding the subjective experience of consciousness with the subsequent executive processes that are used to access its content.

Model of Edelman. To explain primary consciousness, Edelman argued that living organisms must organize into perceptual categories the unlabeled signals they receive from the environment. He therefore started his model by proposing a hypothesis which describes the fundamental process of categorization. This highly articulated hypothesis is mainly based on the theory of
neuronal group selection, implying a basic process termed reentry, a massively recursive activity between neuronal groups. It is also based on the fundamental concept called degeneracy, whereby elements that are structurally different can perform the same function. The first idea of Edelman was to use the binding assumption, imagined by Francis Crick for visual perception [22], to explain how the categories go from the form, color, texture and movement of external stimulations to the constitution of objectivity. His theory is based on the concept of the neuronal chart. A chart is a sheet of neurons whose points are systematically connected, on the one hand, to points on sheets of receiving cells (skin, retina of the eye, etc) to receive entries of signals and, on the other hand, to points located on other charts to support reentries of signals functioning in both directions. His second idea was to admit that the brain, genetically equipped at birth with a superabundance of neurons, develops thanks to the death of a number of these neurons through a process of darwinian selection. Lastly, his third idea was that parallel reentries of signals occur between the charts to ensure an adaptive answer. It is the ongoing recursive dynamic interchange of signals, occurring in parallel between maps, that continually interrelates these maps to each other in space and time. Categorization, therefore, is ensured by the dynamic network of a considerable number of entries and reentries selected by a multitude of charts [26,121,122]. Although distributed over many surfaces of the brain, the inputs, at the same time connected among themselves and connected to charts by reentries, can thus react on one another in a mechanism of functional segregation, finally giving a unified representation of the object. In other words, categorization is a global mapping which requires neither an a priori program nor a homunculus to produce it. For Edelman, this mechanism is a plausible working hypothesis because his research group has shown that brain-based devices are capable of perceptual categorization and conditioned behavior [123]. Over the last 14 years, a series of robotic organisms (called the Darwin series) have indeed been used to study perception, operant conditioning, episodic and spatial memory, and motor control through the simulation of brain regions such as the visual cortex, the dopaminergic reward system, the hippocampus, and the cerebellum [124].
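
The reentry mechanism between charts can be pictured with a small numerical sketch. The following is my own toy illustration; the features, coupling matrix and update rule are invented for the example and are not Edelman's equations or the actual architecture of the Darwin devices. Two charts driven by different attributes of the same scene exchange reentrant signals until their correlated activity yields a unified categorization, without any central program supervising the process.

```python
# Toy sketch of reentry: two neuronal "charts", one driven by shape and one by
# colour, repeatedly exchange signals in both directions. The activity that is
# jointly supported by both charts wins out and is read as the unified category.
import numpy as np

rng = np.random.default_rng(1)
shapes  = ["cup", "table", "chair"]
colours = ["red", "green", "blue"]

# Noisy bottom-up input: the scene actually contains a green table.
shape_map  = np.array([0.2, 1.0, 0.3]) + 0.1 * rng.standard_normal(3)
colour_map = np.array([0.3, 1.0, 0.2]) + 0.1 * rng.standard_normal(3)

# Illustrative reciprocal connections between the two charts (here, entry i of
# one chart is coupled to entry i of the other, so co-active entries reinforce
# each other).
W = np.eye(3)

for _ in range(10):                       # parallel, recursive exchange of signals
    shape_map  += 0.2 * W @ colour_map    # colour chart re-enters the shape chart
    colour_map += 0.2 * W.T @ shape_map   # and vice versa, in the other direction
    shape_map  /= shape_map.sum()         # keep activity bounded (normalization)
    colour_map /= colour_map.sum()

category = (shapes[int(shape_map.argmax())], colours[int(colour_map.argmax())])
print("unified categorization:", category)   # -> ('table', 'green')
```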

However, to reach the major stage of conscious perceptual experience, Edelman thought that at least six further conditions should be met [26,122,125,126]. Firstly, the brain must have memory. Memory, according to Edelman, is an active process of recategorization on the basis of a former categorization. The perceptual category, for example, is acquired through experience by means of charts of reentries. But if there is a perceptive entry, by seeing a table again for example, one recategorizes this entry by improving the preceding categorization, and so on; one continuously reinvents the category. In other terms, memory generates information by construction: it is creative but non-replicative. Secondly, the brain must have a system of learning. Learning implies changes of behavior which rest on categorizations controlled by positive or negative values; an animal can choose, for example, what is light or dark, hot or cold, etc, to satisfy its needs for survival. Thirdly, although self-awareness appears only with higher consciousness, the brain must have a system of diffuse feelings which enables it to discriminate between self and non-self. Fourthly, the brain must be able to categorize the succession of events in time and to form prelinguistic concepts. Edelman thought that the categorization of successive events and the formation of these concepts have a common neurobiological substrate. Fifthly, there must be continuous interactions between conditions 1, 2, 3 and 4 so that a particular memory system for values is formed. Sixthly, the brain also needs reentrant connections between this particular memory system and the anatomical systems. That is, according to Edelman, to formulate a model of primary consciousness it is necessary to have in hand the notions of perceptual categorization, concept formation, and value-category memory.

Like Changeux, Edelman considered that it is the thalamo-cortical system, called the dynamic core, which contributes directly to conscious experience [125,126]. It is a dense fibre network interconnected in both directions, in which connections occur unceasingly and come into competition so that the most effective fibres win out (according to a process of darwinian competition). This reentrant dynamic core is thus a process, not a substance. It is in perpetual rebuilding (over periods of a few hundred milliseconds), extended in space but nevertheless formed of unified and highly integrated elements. As a spatial process, it ensures the coherence of conscious states; it is the principal dynamic driving core of the constitution of the self. The key structures for the emergence of consciousness are the specific thalamic nuclei, modulated by the reticular nucleus in their reentrant connectivity to the cortex, the intralaminar nuclei of the thalamus, and the grand system of corticocortical fibers. What emerges from these interactions is the ability to construct a scene. This activity is influenced by the animal's history of rewards and punishments accumulated during its past. It is essential for constructing, within a time window of fractions of a second up to several seconds, the particular memory of the animal, that is, the remembered present [126]. Higher-order consciousness is consciousness of being conscious, self-awareness. It is accompanied by a sense of self and the ability to construct past and future scenes in the waking state. It is the activity of the dynamic core which converts the signals received from the outside and
inside of the body into a self, what Edelman called the phenomenal transformation [125]. This higher consciousness requires, at the minimum, a semantic capability and, in its most developed form, a linguistic capability which makes it possible to symbolize the relations of past, present and future and to form representations. On the other hand, Edelman asserted that the linkage proceeds from the underlying neural activity to the conscious process and not the reverse, but that consciousness is not directly caused by the neural processes; it accompanies them. Rather, it is the neural processes and the entire body which are causal. Consciousness is reincorporated in the body, insofar as it is one of the modulations by which the body expresses its decisions; it amplifies the effects of the body to produce free will [125]. Since the body is governed by many nonlinear and chaotic determinisms (described by Alain Berthoz [127]), consciousness is thus also governed by these complex bodily determinations. However, consciousness adds to the body an additional level of complexity, because it is able to make differences within a vast repertory of information. What indeed makes a state of human consciousness informative is the fact that a human being is able to make a difference between billions of different states of things. The intentional grasp by which consciousness reduces uncertainty is always singular, and gives the lived mental experience of each individual, at every moment, its subjective and private character. Like Dennett, Freeman and Damasio, Edelman thus admitted that free will is a nonlinear chaotic causality, generally circular, which results from the complex interactions between the body and the brain in local environmental contexts.

The model of Edelman offers an interesting attempt at unifying the hard and easy problems. Indeed, since the reentrant dynamic core theory considers consciousness as an emergent property of any system sharing specific core mechanisms of differentiated integration, the most reliable index of consciousness should reflect the quantification of this core mechanism. However, a conceptual difficulty of this hypothesis lies in the paradox of trying to prove that neural indexes are more respectable because they supposedly probe phenomenal and not access consciousness. Indeed, neural indexes of any sort have to be validated by confronting them at some point with some kind of report, hence with access and not phenomenal consciousness. Even if one of these neural indexes turned out to be perfectly correlated with consciousness, and thus became a perfectly reliable measure of consciousness, one might still ask whether we have made any progress.

    5. Conclusions

A great variety of specific theories have been proposed, whether cognitive, phenomenal, physics-based, neural or based on Artificial Intelligence, to explain consciousness as a natural feature of the physical world. The most prominent examples of philosophical and non-neural approaches included the naturalistic dualism model of David Chalmers, the multiple draft cognitive model of Daniel Dennett, the phenomenological theory of Francisco Varela and Humberto Maturana, and the physics-based approaches of Stuart Hameroff and Roger Penrose. The major neurophysiological theories included the emerging spatiotemporal patterns of Walter Freeman, the visual system-based approaches of Viktor Lamme, Semir Zeki, David Milner and Melvyn Goodale, the phenomenal/access consciousness model of Ned Block, the synchronous cortico-thalamic rhythms of Rodolfo Llinas, the synchronized firing cortical oscillations of Francis Crick and Christof Koch, the emotive somatosensory processes of Antonio Damasio, the neuronal global workspace of Jean-Pierre Changeux and Stanislas Dehaene, and the reentrant cortical loops of Gerald Edelman and Giulio Tononi. It is foreseeable that the range of these models will extend in the future. For example, it is now supposed that the consciousness associated with dreaming would be a rebuilding at the time of waking, related to the release of specific neuromodulators over very short intervals of time [128]. Such a hypothesis might generate new models. On the other hand, new perspectives in Artificial Intelligence have attempted to test hypotheses about the functioning of brain architecture and to simulate consciousness using both the methods and laws of informational machines and the processing capabilities of biological systems. For example, series of brain-based robots have been tested over the last decade to study perception, operant conditioning, episodic and spatial memory, and motor control. Moreover, a supercomputer program, called the Human Brain Project, has been given the objective of attempting to reproduce an artificial consciousness in the future.

One can be sceptical, like Paul Ricoeur, as to the possibility of obtaining a true theory which makes a complete synthesis between a neurological discourse and a psychological discourse. The body is this object which is both "me" and "mine", but there is no passage from one discourse to the other [120]. Any attempt to conflate the figures of the body-as-object and the body-as-lived of consciousness, even if it were founded on the principles and methods of the most innovative neuroscientific approaches, is obviously impossible for ontological reasons.

However, as Edelman says, to deplore that one cannot build a scientific model of consciousness without being able to theoretically explain the quality of a quale (what one feels, for example, on seeing the red of a flower) is a non-problem; it is a problem of the same order as wanting to answer the famous philosophical question: why is there something rather than nothing? [46].

The subjective aspect of consciousness is a pure obviousness of the real world. It is a pure sensible given, arising all at once from within itself [129], which it is impossible to analyze without immediately causing its disappearance. It remains nonetheless that the neurobiological understanding of the relationships between the brain and the objective aspects of consciousness is the great challenge which will animate the research of future decades. Certainly, the phenomenological model of Varela is compatible with some experimental neurobiological approaches. Likewise, the physics-based model of Hameroff and Penrose could be open to some research on mind-brain correlations via the study of the signaling mechanisms of synaptic functions. Moreover, Artificial Intelligence has proven to be a fertile physical enterprise for checking some hypotheses about the functioning of brain architecture, although no machine has been capable of reproducing an artificial consciousness. However, only the advances in neurosciences seem actually capable of taking into account the constraints of a wide range of complex properties to provide an adequate explanatory basis for consciousness. Of these properties, several stand out as particular challenges to theoretical effort, in particular the property of intentionality, the problem of explaining both the first-person and third-person aspects of consciousness, and the question of the plausible necessary preconditions for developing self-awareness. As the foregoing analysis and the myriad of neurophysiological data already available attest, the states (arousal) of consciousness, the behavior and brain processes of conscious systems, and a number of subjective contents of consciousness (some qualia) are objectifiable and can be experimentally studied. In spite of the innumerable difficulties that will still be met in studying and understanding a phenomenon which is the originating base of the categorical representations of the intersubjective and subjective world, it is therefore possible that one will one day manage to build a naturalistic and satisfying explanation of consciousness that links together molecular, neuronal, behavioral, psychological and subjective data in a unified, coherent framework.