Journal of Physiology - Paris 98 (2004) 171–189
www.elsevier.com/locate/jphysparis
Multisensory contributions to the 3-D representation of visuotactile peripersonal space in humans: evidence from the crossmodal congruency task
Charles Spence a,*, Francesco Pavani b, Angelo Maravita b, Nicholas Holmes a
a Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
b Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AR, UK
Abstract
In order to determine precisely the location of a tactile stimulus presented to the hand it is necessary to know not only which part
of the body has been stimulated, but also where that part of the body lies in space. This involves the multisensory integration of
visual, tactile, proprioceptive, and even auditory cues regarding limb position. In recent years, researchers have become increasingly
interested in the question of how these various sensory cues are weighted and integrated in order to enable people to localize tactile
stimuli, as well as to give rise to the ‘felt’ position of our limbs, and ultimately the multisensory representation of 3-D peripersonal
space. We highlight recent research on this topic using the crossmodal congruency task, in which participants make speeded ele-
vation discrimination responses to vibrotactile targets presented to the thumb or index finger, while simultaneously trying to ignore
irrelevant visual distractors presented from either the same (i.e., congruent) or a different (i.e., incongruent) elevation. Crossmodal
congruency effects (calculated as the difference in performance between incongruent and congruent trials) are greatest when visual and vibrotactile stimuli are
presented from the same azimuthal location, thus providing an index of common position across different sensory modalities. The
crossmodal congruency task has been used to investigate a number of questions related to the representation of space in both normal
participants and brain-damaged patients. In this review, we detail the major findings from this research, and highlight areas of
convergence with other cognitive neuroscience disciplines.
© 2004 Elsevier Ltd. All rights reserved.
Keywords: Multisensory integration; Peripersonal space; Body image; Attention
1. Introduction
For many years, both scientists and philosophers
have been interested in the question of how the brain
derives common representations of external space across
different sensory modalities (such as vision, touch, and
audition), given that sensory information is coded at the
peripheral receptor level in a variety of different frames
of reference [45,93,121,130].

*Corresponding author. Tel.: +44-1865-271364; fax: +44-1865-310447.
E-mail address: [email protected] (C. Spence).
0928-4257/$ - see front matter © 2004 Elsevier Ltd. All rights reserved.
doi:10.1016/j.jphysparis.2004.03.008

Some of the most impressive advances toward a resolution of this question have emerged from contemporary neuroscience, particularly single-cell neurophysiology [139]. For example,
researchers have demonstrated the existence of multi-
sensory neurons in several areas of the cat and monkey
brain, including the putamen, superior colliculus, ven-
tral premotor cortex, and parietal area 7b, that represent
visual and tactile stimuli in approximate spatial register
[35,36,41,42,94,114,140]. Many of the cells in these areas
that are responsive to tactile stimuli on the hand also have visual receptive fields (RFs) in the region around
the hand. More importantly, the visual RFs of these
neurons appear to follow the hand around as the arm is
placed in different postures (see [34] for a review). A
growing body of research in both normal human par-
ticipants [68,133], and in patients with selective brain-
damage [22,73], now suggests the existence of similar
Fig. 1. Schematic view of a participant adopting both the uncrossed (a) and crossed hands (b) postures. Participants held a foam cube in each of their left and right hands. Two vibrotactile stimulators (shaded rectangles) and two visual distractor lights (filled circles) were embedded in each cube, by the thumb and index finger. Participants made speeded elevation discrimination responses (by raising the toe or heel of their right foot) in response to vibrotactile targets presented either from the ‘top’ by the index finger of either hand, or from the ‘bottom’ by either thumb, respectively. Maximal crossmodal congruency effects were always reported for visual distractors placed closest to the location of the vibrotactile target (i.e., on the same foam cube), no matter whether the hands were held in an uncrossed or crossed posture.
multisensory representations of 3-D peripersonal space in humans as well. 1
One paradigm that we have used extensively in recent
years to investigate the representation of visuotactile
space in humans is the crossmodal congruency task
[132,138]. Participants are typically required to make
speeded elevation discrimination responses to a series of
targets in one sensory modality (most frequently touch),
while simultaneously trying to ignore irrelevant distractors presented in another sensory modality (typi-
cally vision). The crossmodal congruency task has been
shown in a number of studies to provide a robust
experimental index of common spatial location across
different sensory modalities. We have used the task in a
wide variety of experimental situations to investigate the
multisensory representation of visuotactile space in both
normal participants [132,138], and in brain-damaged patients [135], and also to investigate the consequences
of prolonged tool-use on the boundaries of peripersonal
space and the body schema [52,81].
In this review, we first describe the basic crossmodal
congruency effect, before going on to highlight the re-
sults of a number of experiments that have used the
crossmodal congruency task to investigate the conse-
quences of posture change on the representation of visuotactile space. In subsequent sections, we illustrate
how the crossmodal congruency task is currently being
used to address increasingly sophisticated questions
regarding the representation of 3-D peripersonal space.
Although this review is primarily designed to highlight
the major findings to have emerged from research on the
crossmodal congruency task itself, we also draw paral-
lels to the findings made by researchers in other disciplines, such as primate neurophysiology and cognitive
neuropsychology.
2. The crossmodal congruency task
In a typical crossmodal congruency study, partici-
pants are required to hold two foam cubes, one in either hand (see Fig. 1a for a schematic illustration). A vi-
brotactile target stimulus and a visual distractor are
presented randomly and independently from one of the
1 Interestingly, while the involvement of bimodal visuotactile
neurons in brain areas such as those reported by Graziano, Gross,
and colleagues has been suggested to explain human behaviour in a
variety of normal and patient studies [22,68,135,136], the involvement
of such areas has not, to our knowledge, been demonstrated directly in
humans previously. Recent neuroimaging data from this laboratory
[77] have provided some of the first empirical evidence that the same
network of neural structures appears to be involved in the multisensory
representation of limb position in humans as reported previously in
primates, namely the IPS and inferior frontal gyrus (corresponding to
the VIP-F4 circuit in monkeys) [116].
four possible stimulus locations on each trial. We typi-
cally use vibrotactile target stimuli consisting of pulsed
white noise (i.e., three 30 ms bursts of white noise each
separated by 20 ms silent intervals) presented to one of
the vibrators, while visual distractors consist of the
pulsed illumination of one of the four LEDs. Participants are required to make a series of speeded
elevation discrimination responses, deciding whether
vibrotactile target stimuli are presented from the index
finger or thumb of either hand (i.e., ‘‘above’’, at the
index finger; or ‘‘below’’, at the thumb), while simulta-
neously trying to ignore the visual distractors presented
at approximately the same time. Although these di-
stractors are just as likely to be presented from the same
elevation as the target, as from a different elevation, participants are normally much worse (i.e., both slower
and less accurate, see Fig. 2a) at discriminating the
elevation of the vibrotactile targets when the visual di-
stractors are presented from an incongruent elevation
(i.e., when the vibrotactile target is presented from the
top and the visual distractor from the bottom, or vice
versa) than when they are presented from the same
(congruent) elevation (i.e., either both up or both down).
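As a concrete aside, the vibrotactile target described above (three 30 ms bursts of white noise, each separated by a 20 ms silent interval) implies some simple timing arithmetic, which can be sketched as follows (illustrative code only, not the original stimulus-control software):

```python
# Illustrative sketch: timing of the vibrotactile pulse train described
# in the text -- three 30 ms bursts separated by 20 ms silent intervals.

BURSTS = 3
BURST_MS = 30
GAP_MS = 20

def pulse_onsets(n_bursts=BURSTS, burst_ms=BURST_MS, gap_ms=GAP_MS):
    """Onset time (ms) of each burst, relative to stimulus onset."""
    return [i * (burst_ms + gap_ms) for i in range(n_bursts)]

def total_duration(n_bursts=BURSTS, burst_ms=BURST_MS, gap_ms=GAP_MS):
    """Total stimulus duration: n bursts plus the (n - 1) silent gaps."""
    return n_bursts * burst_ms + (n_bursts - 1) * gap_ms

print(pulse_onsets())    # [0, 50, 100]
print(total_duration())  # 130
```

So each vibrotactile target lasts 130 ms in total, with burst onsets at 0, 50, and 100 ms.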
Crossmodal congruency effects are calculated as the
difference in performance between incongruent and
congruent distractor trials for a particular pair of dis-
tractor LEDs. 2 Although we typically focus on the
crossmodal congruency effects present in the reaction
time (RT) data, similar effects are normally reported in
the error data as well. Often, therefore, we combine these two measures of performance into a single measure
known as inverse efficiency (IE)––where the inverse
efficiency score equals the mean RT for a particular
condition divided by the proportion of correct responses
for that condition [135,144].
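The two dependent measures just described can be made concrete with a short sketch (illustrative code only, not from the original study; the condition means below are hypothetical values):

```python
# Illustrative sketch: computing the crossmodal congruency effect (CCE)
# and inverse efficiency (IE) from hypothetical per-condition summaries.

def inverse_efficiency(mean_rt_ms, prop_correct):
    """IE = mean RT for a condition / proportion of correct responses."""
    return mean_rt_ms / prop_correct

def cce(incongruent, congruent):
    """Crossmodal congruency effect = incongruent minus congruent performance."""
    return incongruent - congruent

# Hypothetical data for one pair of distractor LEDs.
congruent_rt, congruent_acc = 520.0, 0.96
incongruent_rt, incongruent_acc = 585.0, 0.88

cce_rt = cce(incongruent_rt, congruent_rt)  # CCE in the RT data
cce_ie = cce(inverse_efficiency(incongruent_rt, incongruent_acc),
             inverse_efficiency(congruent_rt, congruent_acc))

print(round(cce_rt, 1))  # 65.0
print(round(cce_ie, 1))  # 123.1
```

Note that because incongruent trials are both slower and less accurate, the IE-based effect is larger than the raw RT effect, which is why IE usefully combines the two measures.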
While the magnitude of any effects reported in a particular experimental situation declines with practice, significant crossmodal congruency effects still occur even after participants have performed many hundreds of
trials [81,138]. The very existence of crossmodal con-
gruency effects highlights a fundamental problem of
crossmodal selective attention in humans: Specifically,
people cannot ignore what they see, even if they are
instructed to respond only to what they feel (see [134] for
a similar failure of audiovisual selective attention).
Fig. 2. Modulation of the crossmodal congruency effect as a function of spatial, temporal, and attentional manipulations [138]. (a) The crossmodal congruency effect is greatest when the target and distractor stimuli are presented from approximately the same spatial location, and declines as the distance between the target and distractor stimuli increases (here the azimuthal distance between the target and distractor was 63° in the Far condition). (b) Crossmodal congruency effects are large when the onset of the visual distractors precedes the onset of the vibrotactile targets by 30 ms, with the effect declining in magnitude if the onsets of the target and distractor occur simultaneously, or if the onset of the vibrotactile targets occurs before that of the visual distractors. (c) The magnitude of the crossmodal congruency effect was, however, unaffected by whether the participant knew in advance to which hand the vibrotactile target would be presented (focused attention), versus when the target was presented unpredictably to either hand on each trial (divided attention). The bars show the crossmodal congruency effects in the RT data, while the line plots show the crossmodal congruency effects present in the error data. Error bars indicate the standard error of the mean.

2 One interesting question that has yet to receive an empirical answer concerns the nature of the effect of the visual distractor stimuli on vibrotactile elevation discrimination responses. That is, we do not know whether the presentation of the visual distractors improves (in terms of speed and accuracy) vibrotactile elevation discrimination responses on congruent trials, impairs performance on incongruent distractor trials, or whether both effects may co-occur when performance is compared to that in a no-distractor baseline condition. We are currently initiating a series of experiments to address this issue. However, it seems probable that visual distractors may influence vibrotactile discrimination responses not just in terms of congruency effects, but also in terms of crossmodal attentional cuing and/or crossmodal alerting effects [68,136]. It is worth noting, though, that our use of the crossmodal congruency paradigm does not depend upon a precise understanding of the causes of the effect, since we use it simply as an index of common location across different sensory modalities.

Crossmodal congruency effects have also been observed when the role of the two stimulus modalities is reversed; that is, when participants are instructed to respond to the elevation of the visual stimuli, while attempting to ignore the elevation of the vibrotactile stimuli [147,148]. However, the crossmodal congruency effects elicited by vibrotactile distractors on visual elevation discrimination responses tend to be somewhat smaller in magnitude than for the prototypical case of
vibrotactile targets and visual distractors (see [70,86] for
similar results from somewhat different experimental
paradigms). This asymmetrical pattern of crossmodal
congruency effects may reflect an underlying difference
in the relative salience of the vibrotactile and visual
stimuli used in our previous studies, and/or some sort of inherent bias of our participants’ attentional resources toward the visual modality [106,137]. In another series
of experiments, Merat and colleagues have shown that
vibrotactile distractors elicit a small, but reliable,
crossmodal congruency effect when participants are re-
quired to discriminate the elevation of auditory targets
(up to approximately 80 ms when the two stimuli are
presented at the same azimuth; [91]).
There are at least two possible explanations for the crossmodal congruency effect, one in terms of a multi-
sensory ‘perceptual’ integration effect (i.e., spatial ven-
triloquism), and the other in terms of competition at the
level of response selection between the target and dis-
tractor. According to the spatial ventriloquism account,
the perceived location of the vibrotactile target in our
prototypical study might be ventriloquized toward that
of the incongruent visual distractor (see [8,14]). When the visual distractor is placed at a different elevation
from the vibrotactile target, but still close to it (i.e., on
the same hand), the latter may be mislocalized toward
the former. Such a spatial ventriloquism effect, should it
occur, might lead to errors in participants’ responses, or
simply to their finding it harder (and therefore taking
more time) to discriminate the correct elevation of the
target on incongruent trials (ventriloquism would presumably, if anything, facilitate the perception of the
‘correct’ location of the target on congruent trials).
Alternatively, according to the response competition
account, the crossmodal congruency effect may reflect
the consequences of competition between the response
tendencies elicited by the target and distractor on
incongruent trials. Presumably the presentation of both
stimuli will prime the response(s) associated with the elevation at which they are presented. Given that the
distractor will prime the incorrect response on incon-
gruent trials, this might lead to a slowing of responses,
attributable to the time taken to overcome the incon-
gruent (i.e., ‘inappropriate’) response tendency. In fact,
the slowest responses in crossmodal congruency exper-
iments are usually reported on trials where the visual
distractor is presented from the same side as the vibrotactile target, but at an incongruent elevation [138]. By
contrast, performance on congruent trials might be ex-
pected to show some degree of response facilitation,
since the target and distractor stimuli would both prime
the same, ‘correct’, response [85].
We have attempted to demonstrate the contribution
of spatial ventriloquism to the crossmodal congruency
effect by conducting an unspeeded version of the experiment, in which participants were not permitted to
respond until at least 750 ms after the onset of the target
and distractor. The importance of response accuracy
over response speed was also repeatedly stressed to the
participants. If response compatibility is responsible for
the crossmodal congruency effect then one might expect
that there should be virtually no residual crossmodal
congruency effect, given that participants in this unspeeded version of the task presumably had sufficient
time to resolve any potential response conflict. The re-
sults demonstrated a small but significant increase in
errors specifically when the visual distractor was pre-
sented from an incongruent elevation on the same side
(and not when the distractor was presented from either
position on the opposite side), suggesting some role for
spatial ventriloquism in the crossmodal congruency effect. We do, however, believe that the majority of the
crossmodal congruency effect probably reflects response
conflict effects instead. Although the very existence of
the crossmodal congruency effect is by itself of theoret-
ical interest, our continued use of this paradigm has
been motivated by the fact that crossmodal congruency
effects are typically both large in magnitude (in com-
parison to many other behavioural effects), and modulated by the spatial separation between the target and
distractor stimuli.
2.1. Spatial modulation of the crossmodal congruency
effect
Crossmodal congruency effects are normally largest
when the target and distractor stimuli are presented from the same azimuthal location (i.e., when the dis-
tracting lights are situated by the hand receiving the
vibrotactile target), and decline as the visual distractor
and vibrotactile target hand are moved further and
further away from each other [132,138] (see Fig. 2a).
Over the last few years, we have conducted many
studies investigating the consequences of a number of
basic postural manipulations using the crossmodal congruency task [132,138]. These studies have shown
that the crossmodal congruency effect elicited by a
particular pair of visual stimuli falls off as the hand
receiving the vibrotactile target is moved further away
from them. Even when the hands are crossed over the
midline (see Fig. 1b), visual distractors by the current
target hand position still elicit the largest crossmodal
congruency effects; this, despite the fact that the afferent signals from the vibrotactile targets presented to the
crossed hand project, at least initially, to the opposite
cerebral hemisphere with respect to the visual distrac-
tors. Such results support claims that placing a hand by
a light may ensure that the relevant location within a
bimodal visuotactile topographic representation is
stimulated by the light [22,73].
The findings reported so far are consistent with the hand-position dependent modulation of the visual RF
of bimodal visuotactile neurons reported previously in
primates [35,36]. Interestingly, however, the spatial
modulation of the crossmodal congruency effect does
not appear to depend on participants actually being able
to see their hands, as it has even been shown to occur
when the hands remain out of view, as when participants
perform the crossmodal congruency task in complete darkness [138]. This result demonstrates that proprio-
ceptive and tactile cues regarding limb position can, by
themselves, provide sufficient information to code a
particular light source as being either close to, or far
from, an unseen hand (see [68] for similar results). Once
again, these behavioural findings are consistent with the
known primate neurophysiology [37,96], though see
[21,74] for neuropsychological evidence that the limb-position dependent modulation of crossmodal visuo-
tactile extinction, shown in certain parietal patients,
appears to depend upon their being able to see their own
hand and arm.
2.2. Does spatial attention modulate the crossmodal
congruency effect?
Spence et al. [138, Experiment 1] have shown that the
magnitude of crossmodal congruency effects is greater
when visual distractors lead vibrotactile targets by 30
ms, than when the two stimuli are presented simulta-
neously, or when vibrotactile targets are presented
shortly before (30 ms) visual distractors (mean cross-
modal congruency effects of 72, 59 and 46 ms respec-
tively; see Fig. 2b). It is for this reason (i.e., to maximize the size of the crossmodal congruency effect) that we
normally present visual distractors shortly (i.e., 30 ms)
before vibrotactile targets in our crossmodal congruency
experiments.
This experimental design is in some sense similar to
that of crossmodal studies of exogenous spatial atten-
tion (see [71,129] for detailed discussion of the difference
between exogenous and endogenous spatial attention). For example, in a typical crossmodal cuing study, a
spatially non-predictive cue in one modality is presented
to the left or right of fixation, and followed shortly
afterward by a target stimulus in a different modality
(see [127] for a review). Several studies have shown that
the presentation of a spatially non-predictive visual cue
facilitates elevation discrimination responses for vibro-
tactile targets presented from the same (as opposed to the opposite) side of fixation for several hundred milli-
seconds after cue onset [16,67,68]. Typically, crossmodal
cuing effects evidence themselves as a facilitation of
target discrimination responses of around 20–30 ms
when the target is presented from the cued, as opposed
to the uncued, side.
Given such findings, it would seem likely that the onset of the visual distractor shortly before the vibro-
tactile target in our crossmodal congruency studies
would also have led to a shift of ‘tactile’ attention to the
side of the visual distractor [127,128]. However, while
maximal facilitation would be expected to accrue at the
particular location of the visual stimulus (i.e., distrac-
tor), it is likely that the other location (i.e., elevation) at
the same azimuthal position would also be facilitated to around the same extent, given the relatively spatially-
unfocused nature of any crossmodal shifts of attention
(see [127] on this point). Consequently, any exogenous
shift of spatial attention elicited by the visual distractor
would not be expected to have much of an effect on the
magnitude of crossmodal congruency effects, since it
should primarily result in a reduction of the overall RT
when the target and distractor are presented from the same side, without differentially affecting performance
on congruent as opposed to incongruent trials.
One might also wonder whether there is any role for
endogenous spatial attention in modulating crossmodal
congruency effects. However, somewhat surprisingly,
Spence et al. [138, Experiment 1] reported that the
magnitude of crossmodal congruency effects is relatively
unaffected by the direction of endogenous spatial attention to one hand or the other (see Fig. 2c). That is,
crossmodal congruency effects were just as large when
participants knew in advance which hand the vibrotac-
tile target would be presented to (and so could pre-
sumably direct their spatial attention in advance to just
the target hand), as when the target was presented
unpredictably to either hand on each trial (and where
spatial attention presumably had to be divided equally between the two hands).
Spence et al.’s [138, Experiment 1] results stand in
marked contrast to the results of a number of other
studies of endogenous spatial attention, where elevation
discrimination responses for both vibrotactile and visual
targets (presented individually, and in the absence of
any distractors) have been shown to be facilitated by the
direction of endogenous spatial attention to a particularexpected target side/hand [13,133]. Presumably, while
directing attention to one hand or the other can speed-
up overall response latencies to stimuli presented by (or
to) that hand, it has little effect on the pattern of
crossmodal congruency effects, since performance on
both congruent and incongruent distractor trials will be
facilitated to the same extent (just as for the exogenous
case described above).
3. Using the crossmodal congruency task to investigate the
representation of 3-D peripersonal space
One of the key findings to emerge from the re-
search discussed up to this point has been that
larger crossmodal congruency effects are found when vibrotactile targets and visual distractors are pre-
sented from the same azimuthal position, and that
the magnitude of these effects falls off as the sepa-
ration between the target and distractor increases
[132,138]. As such, the spatial modulation of the
crossmodal congruency effect provides a reliable
index of whether vibrotactile and visual stimuli are
perceived (functionally) to occupy the same spatial location or not. The results reported so far highlight
important parallels between human performance on
the crossmodal congruency task, and the findings of
a number of primate neurophysiological studies [35–
37,96], a link that is also seen in the studies reported
below.
Although the crossmodal congruency task is only
one of a range of experimental paradigms currently available to researchers to investigate the spatial co-
registration of stimuli across different sensory modali-
ties, it has the benefit over many of these other
tasks (such as the crossmodal exogenous spatial ori-
enting paradigm; [127]) of eliciting large and robust
behavioural effects. Moreover, the crossmodal con-
gruency task also has the advantage of being seem-
ingly robust to the specific distribution of spatial attention (both exogenous and endogenous) to one
hand or the other. The crossmodal congruency effect
is, therefore, ideal for use when investigating more
subtle questions related to multisensory spatial repre-
sentation, and when testing patient populations, as we
shall see.
3 Many researchers have made a conceptual distinction between the
terms ‘body image’ and ‘body schema’. Body image is typically used
when referring to a conscious representation of the body, or of bodies
in general, and may depend primarily on semantic information. By
contrast, body schema is typically meant to refer to an unconscious
representation of the body and its position and movement in space.
The body schema is thought to be derived primarily from proprio-
ceptive, kinaesthetic, and possibly visual information arising from the
body itself (for further discussion and reviews, see [7,17,28,29,48,55]).
4 Note that other researchers have elicited visual capture effects
using either mirrors (with phantom limb patients) [109,110,122] or
prisms [44,46,152] as well (see also [12,55]).
3.1. Visual and proprioceptive contributions to tactile
localization
Having characterized the crossmodal congruency ef-
fect and, more specifically, having demonstrated the
reliability and robustness of the crossmodal congruency
task as an indicator of common location across vision
and touch, we next went on to investigate the relative
contributions of visual and proprioceptive cues to the localization of tactile stimuli in 3-D peripersonal space.
Our research was stimulated by a fascinating report
published in Nature by Botvinick and Cohen [11] that
highlighted the potential importance of visual cues in
determining perceived limb position. Participants in
their study sat at a table, with their left hand placed on
the table occluded from view behind a screen. A life-
sized rubber left hand and arm (an ‘alien’ limb) was placed in a plausible posture in front of the participant.
The experimenter stroked the rubber arm in full view of
the participant, while the participant’s own left hand
was stroked synchronously, but out of sight, behind the
occluding screen. Botvinick and Cohen reported that
participants rapidly started to ‘incorporate’ the left
rubber arm into their own body image. 3 In fact, after 10 min of stroking, participants would agree strongly with
questionnaire statements such as ‘I felt as if the rubber
hand were my hand’, and, more importantly for present
purposes, ‘It seemed as if I were feeling the touch of the
paintbrush in the location where I saw the rubber hand
touched.’
Botvinick and Cohen [11] also reported in a second
experiment that after stroking, participants would systematically misreach when trying to point with their
right hand under the table to the position of the unseen
left hand resting on the table-top. Critically, the mag-
nitude of this intermanual pointing error was correlated
with the perceived duration of the rubber hand illusion
during stroking. These results suggest that the partici-
pants in Botvinick and Cohen’s study really perceived
their left hand as if it was displaced toward the location of the rubber arm (i.e., they experienced a ‘visual cap-
ture’ of felt limb position). By contrast, the illusion did
not occur (and pointing remained veridical) under con-
ditions where the real and fake left arms were stroked
asynchronously. These results highlight the prominence
of visual over proprioceptive cues in determining per-
ceived limb position (and possibly also tactile localiza-
tion) when the senses are artificially placed in conflict. Similar reports of the visual capture of proprioception
by fake or alien limbs/digits have also been reported in
several other studies over the years [95,141,151]. 4
Importantly, however, the interpretation of many
of these studies is limited by the possibility that experi-
menter expectancy effects, task demands, and/or
response biases may have contributed to (or even
determined) the participants’ performance [15,58,97,120]. That is, participants in these studies may simply
have responded in the manner in which they thought the
experimenter wanted them to respond, rather than be-
cause they actually experienced ‘visual capture’ per se.
Furthermore, given that participants in many of these
studies were not made aware that the body part they saw
might not be a part of their own body, they may simply
have responded on the basis of what they saw, because they had no reason to doubt the veridicality of their
(normally accurate) visual perception (cf. [154]). A final
limitation to the interpretation of all of these previous studies is that while they may provide evidence (the
above caveats notwithstanding) for the visual capture of
proprioceptive sensation, this does not necessarily imply
any consequences for the localization of tactile sensation
per se (as implied by a literal interpretation of the
everyday expression––an ‘out-of-body’ experience).
Pavani et al. [104] attempted to overcome these lim-
itations by examining the consequences of the rubber hand illusion for specifically tactile perception using the
crossmodal congruency task. Participants were required
to wear a pair of rubber washing-up gloves, and to hold
two foam cubes on each of which were mounted two
vibrators. Participants could not see their own hands as
they were hidden below an opaque screen (see Fig. 3).
Pavani et al. reported that the magnitude of the cross-
modal congruency effect elicited by the visual distractors increased when a pair of rubber arms (actually stuffed
rubber washing-up gloves) were placed in a plausible
posture in front of their participants (and on top of the
occluding screen), apparently ‘holding’ the visual di-
stractors. Using a questionnaire similar to that devised
by Botvinick and Cohen [11], Pavani et al. also found
that the magnitude of the increase in the crossmodal
congruency effect was correlated with subjective reports
of the vividness of the rubber hand illusion, as indexed
by participants' agreement with the statements: 'I felt as
if the rubber hands were my hands’, and ‘It seemed as if I
were feeling the vibration in the location where I saw the
rubber hands’.
Fig. 3. Schematic view of the experimental set-up in Pavani et al.’s
[104] ‘rubber hand’ experiments, highlighting the location of the vi-
brotactile stimulators (indicated by the arrows) on the foam cubes held
by the participant below an occluding screen, and the visual distractor
lights (open circles on the upper cubes) held by the rubber hands, when
present and aligned with the participant's own hands. Note that in
some conditions (not shown), the rubber arms were placed at 90° to the
participant's own arms (i.e., in a posture that the participant could not
possibly adopt).
In contrast to Botvinick and Cohen's [11] study, Pavani et al.'s [104] results were obtained in the absence of
any stroking of either the real or rubber hands, thus
showing that synchronized multisensory stimulation is
not a prerequisite for the visual capture of propriocep-
tive and tactile sensation, at least under conditions
where proprioceptive acuity is relatively low (i.e., in the
elevation dimension; see [150]). Interestingly, Pavani et
al. [104, Experiment 2] also reported that crossmodal congruency effects were unaffected by the presence of the
rubber arms if they were placed in an implausible pos-
ture for the participants (i.e., by placing them at 90° with respect to the participant). The latter result enabled
Pavani et al. to rule out any interpretation of their
apparent visual capture results in terms of either in-
creased saliency of the lights when placed in close
proximity to what may have been a visually intriguing
pair of rubber hands (i.e., ruling out the possibility that
participants may simply have been directing more of
their attention to the modality of the distractors when
the rubber hands were present), or the possibility that placing
fingers and thumbs by the lights might result in the lights
eliciting a greater propensity to initiate a finger (up) or
thumb (down) response (i.e., an enhanced semantic
response activation account).

It is important to note that the visual capture effect in
Pavani et al.’s [104] study occurred despite the fact that
participants were made aware of the sensory ‘deception’
taking place. That is, the experimenter pointed out to
participants both the existence and construction of the
rubber arms, and participants could also see the rubber
hands being positioned in front of them at the start of
each block of trials when they were present. The participants were therefore clearly aware that the limbs they
saw were not part of their own body, hence making
Pavani et al.’s visual capture results even more dramatic.
Finally, it is worth noting that the fact that the illusion
still occurred under these transparent conditions sup-
ports the cognitive impenetrability of the visual capture
phenomenon [8]. It appears, therefore, that people do
not have direct access to the absolute localization of tactile stimuli in space, but instead their perception re-
flects only the consequences of the multisensory inte-
gration of the available sensory cues (be they visual,
proprioceptive, tactile, or even perhaps, auditory 5
[23,98,99,119,153]).
Remarkably, Erin Austen and colleagues at the
University of British Columbia [4] have shown that
limbs bearing even less resemblance to the human form (a pair of 'Frankenstein-like' green hairy arms) can also
elicit visual capture, resulting in the mislocalization of
vibrotactile stimuli (though see [3,38,108]). It is, of

5 As when we hear sounds indicating that our limbs have made
contact with a particular surface.

course, possible that more pronounced visual capture
effects might have been demonstrated in Austen et al.'s
study had participants not been made aware of the
deception (as happened in previous studies where
appendages with a far greater likeness to the human
form were used; though see [154] for evidence that
people do not find it as easy to recognize their own limbs
as the phrase: ‘I know it like the back of my own hand’
would suggest; see also [151]). It is an interesting question for future research to try to identify those factors
that are critical for eliciting this form of ‘identification’
with a visually-presented limb, as this may help to
provide important constraints for the design of virtual
haptic and remote teleoperation systems in the years to
come [51,84].
Pavani et al. [104] argued that the increased magni-
tude of crossmodal congruency effects reported in the aligned rubber hand condition of their experiment could
be attributed to the ‘apparent’ perception of vibrotactile
targets as being close to the distractor lights. In other
words, tactile (and not just proprioceptive) stimuli
were mislocalized toward the apparent visual location of
a limb (really a stuffed rubber washing-up glove). In a
more recent study, Walton and Spence [148] repeated
Pavani et al.'s study, but simply reversed the role of the visual and vibrotactile stimuli. That is, participants now
had to respond to the elevation of the visual ‘target’
stimuli, while attempting to ignore the elevation of the
vibrotactile ‘distractor’ stimuli. Once again, the majority
of the participants in this study experienced the rubber
hand illusion when the rubber hands were placed on top
of the occluding surface in alignment with their own
hands. However, the occurrence of the rubber arm illusion now led to a reduction in the magnitude of the
crossmodal congruency effect that was correlated with
responses on the post-experiment questionnaire (i.e.,
those participants who experienced the illusion most
strongly showed the greatest reduction of the magnitude
of the crossmodal congruency effect; the opposite pat-
tern of results to those reported by [104]).
Walton and Spence's [148] results would appear to support the view that placing the hands near the visual
stimuli may make them more task-relevant, perhaps
because this leads to the lights producing more activa-
tion in a topographic representation of visuotactile
space [22,73]. Given that the magnitude of the cross-
modal congruency effect has been shown to depend on
the relative intensity and discriminability of the target
and distractor stimuli [85], this would have the effect of
increasing the crossmodal congruency effect in Pavani et
al.'s [104] study, where the relative saliency of the di-
stractors would have been increased, but of reducing it
in Walton and Spence's study, where the relative
saliency of the targets was now increased. This modu-
latory effect of rubber hands on the crossmodal con-
gruency effect does appear to depend, though, on the
rubber arms being in a posture that is compatible (i.e., plausible) with the participant's own body.
Neurophysiological data on the phenomenon of vi-
sual capture of perceived limb position were reported in a
study by Michael Graziano [33,38] in which a monkey’s
own right arm was hidden below an occluding screen
while a taxidermied monkey’s right arm was placed
above the occluding screen in front of the monkey. The
visual responses of bimodal premotor neurons with
visual RFs centred on the observed fake arm were shown
to be reduced when the position of the fake arm was
changed from an uncrossed to a crossed posture while
the monkey’s own arm remained in an uncrossed pos-
ture below the occluding screen [33]. Moreover, the
postural response of parietal neurons was enhanced by
the vision of a stuffed rubber arm with a posture com-
patible with that of the hidden arm [38]. Similarly, Farnè et al. [26] have also demonstrated, in a group of uni-
lateral spatial neglect patients, that the presentation of a
rubber limb by a visual stimulus can modify the extent
to which the visual stimulus extinguishes the patient's
perception of an ipsilesionally-presented tactile stimulus.
These visual capture effects only occurred when the
rubber hands were placed in alignment with the patient’s
own body, and not when positioned at 90° to them (just
as in Pavani et al.'s [104] study, in normal participants).
3.2. Indirect perception of the limbs
In a further series of experiments, we have explored
whether crossmodal congruency effects can be modu-
lated through a more abstract understanding of the
source of visual stimuli in a scene [82]. In particular, whether a spatial re-coding of distant visual stimuli
would make them equivalent to near visual stimuli in
terms of the crossmodal congruency effects that they
elicit. To this end, participants were presented with a
mirror reflection of their own limbs (while any direct
view of their hands was occluded by means of an opaque
screen). Such indirect reflected visual information con-
veys bivalent information about the true spatial position of lights placed by the hands. Visual stimuli originating
close to one’s body (e.g., a comb stroking one’s hair),
but only visible via a mirror reflection, give a visual
impression suggesting an object situated behind (or
through) the mirror, at least in terms of initial retinal
projections (see [39] for an interesting account of mirror
reflections from a psychological perspective). However,
our familiarity with reflecting surfaces (in addition to the synchronous proprioceptive and somatosensory
information, in the example given above), provides us
with an immediate and exact notion of the true spatial
location of the comb. The experimental question ad-
dressed in Maravita et al.’s study was whether visual
stimuli that appear in the mirror to occupy a position in
far space (i.e., outside peripersonal space) would be
[Fig. 4: panels (a) and (b); y-axis of panel (b): Congruency Effect (ms).]
Fig. 4. (a) Schematic view of the experimental set-up in Maravita
et al.’s [82] experiments on the effects of viewing visual stimuli indi-
rectly via a mirror reflection on crossmodal congruency effects. Par-
ticipants held one foam cube in either hand below an opaque screen (i).
A semi-reflecting mirror (ii) was placed on one side of an open opaque
box (iii) facing the participant. Depending on the ambient illumina-
tion, participants could either see their own hands reflected in the
mirror, or else see through the mirror to reveal the contents of the box.
Lines drawn from the distractor lights on each foam cube held by the
observer and crossing the mirror suggest the position of the virtual
image produced in the mirror by such objects, as observed by the
participant. This horizontal distance is equivalent to double the dis-
tance of the foam cubes from the mirror (60 cm) plus the distance
between the participant and the foam cubes themselves (30 cm). The
foam cubes and distractor lights in the Far condition were placed at
the exact same position as the position of these virtual images in the
Mirror condition. (b) The crossmodal congruency effect in the three
experiments (E1, E2, and E3) conducted by Maravita et al. [82] was
significantly larger for the Mirror condition (white bars) than for the
Far condition (black bars). The bars show the results from the analysis
of the reaction time data (the error bars represent the standard error of
the mean), while the numbers within each column highlight the error
rates for that condition for each of the three experiments. Crossmodal
congruency effects reflect the difference between incongruent and
congruent trials (see text for details).
treated as originating in near or peripersonal space if the participants were made aware that what they were look-
ing at was a mirror reflection of near peripersonal space
(albeit seen indirectly). We assessed this by asking
whether the apparent visual conflict generated by mirror
reflections could be modulated, in a similarly automatic
fashion, by the obligatory spatial effect of visual di-
stractors assessed by the crossmodal congruency task.
In a series of experiments, Maravita et al. [82] systematically varied the position of the visual distractors,
so as to obtain two stimulation conditions, while par-
ticipants made speeded vibrotactile elevation discrimi-
nation responses. In one condition, the visual distractors
(LEDs) were placed near the hands (Mirror condition),
occluded from direct view by an opaque screen, and
were observed as their reflection in a mirror placed
90 cm from the participant. In this situation, the retinal
projection produced by the visual distractors (virtual
mirror image) was equivalent to that of stimuli placed at
a distance double that between the real stimulus and the
mirror (plus the distance between the observer’s eyes
and the stimulus itself; see Fig. 4a). In a second condi-
tion, visual distractors were located far away from the
hands, inside a box (Far condition). Participants now
observed these stimuli by looking through the mirror (actually a semi-reflecting mirror). Visual distractors in
this condition were specifically positioned so as to pro-
duce a retinal projection that was exactly equivalent to
the virtual mirror image produced by the distractors in
the previous Mirror condition. Now, the only clue to
indicate any difference between the actual positions of
the distractors in the two conditions was the partici-
pant’s knowledge about the experimental setting.We made sure that participants always knew when a
mirror reflection of a stimulus near the hand was ob-
served or when the stimulus was actually located far
from the hand, inside the box. A preliminary training
session was included prior to each block of experimental
trials in which participants were repeatedly required to
move the finger that was stimulated tactually to rein-
force this. This corresponded to a visual input of the moving finger in the Mirror condition, but the partici-
pants received no such visual feedback when they were
looking at the pair of rubber hands placed inside the
box. Given our finding that the largest crossmodal
congruency effects are reported when visual distractors
are situated close to the vibrotactile target stimuli
[104,132,138], we expected a larger crossmodal congru-
ency effect in the Mirror condition, where visual distractors were physically located close to the partici-
pant’s hands that received the vibrotactile targets, than
in the Far condition, where visual objects were genuinely
placed outside peripersonal space.
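The equivalence of the two retinal projections follows from plane-mirror geometry: the virtual image lies as far behind the mirror as the object lies in front of it. A minimal worked sketch (function and variable names are ours), using the distances described above:

```python
def apparent_distance_cm(observer_to_object, object_to_mirror):
    # The virtual image sits object_to_mirror cm behind the mirror, so its
    # apparent distance is the observer-to-mirror distance plus the
    # object-to-mirror distance again.
    observer_to_mirror = observer_to_object + object_to_mirror
    return observer_to_mirror + object_to_mirror

# The set-up described above: foam cubes 30 cm from the observer,
# mirror 90 cm from the observer (i.e., 60 cm beyond the cubes).
virtual_image = apparent_distance_cm(30, 60)  # 150 cm
```

The Far-condition LEDs were therefore physically placed at this 150 cm distance, matching the retinal projection of the Mirror-condition virtual image.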
Consistent with our expectations, the participants in
our study showed larger crossmodal congruency effects
from the visual distractors in the Mirror condition than
from the visual distractors in the box in the Far condi-
tion (see Fig. 4b). This result was replicated in a second experiment in which participants observed rubber hands
grasping the sponges inside the box in the Far condition
(actually washing gloves stuffed with a metallic frame
and padded with cotton in order to give them the realistic look of a hand inside the glove; similar to those
used in Pavani et al.’s [104] study), as opposed to their
own hands in the mirror. In a final experiment, the
participants responded by releasing spatially-compatible
response buttons that were embedded in the foam cubes
they were holding, instead of by releasing foot-pedals as
in the majority of our previous studies. This meant that
participants were given some visual feedback regarding their own hand movements in the Mirror condition
while they were actually performing the crossmodal
congruency task (and not just in the period preceding
each test block). Critically, in the Far condition of this
final experiment, participants saw the hands of a sex-
matched experimenter holding the foam cubes inside the
box. The experimenter also made responses on each
trial, consistent with those of the participant (though slightly delayed). Nevertheless, even though participants
could see a pair of real hands placed by the foam cubes
in the Far condition performing the same task, the vi-
sual distractors in the Mirror condition still resulted in a
larger crossmodal congruency effect than those pre-
sented in the Far condition inside the box. These results
demonstrate that visual stimuli seen distally via a mirror
reflection are correctly coded as originating in near-space when they are presented from cubes held by the
participant (but out of direct view; see also [63,64,
90,117]).
In a similar vein, Maravita et al. [79] have also shown
that a visual stimulus seen indirectly in a mirror reflec-
tion placed close to one of the hands of a right-hemi-
sphere brain-damaged patient elicits greater crossmodal
extinction for tactile stimuli presented simultaneously to the other hand than when the 'same' visual stimulus is
positioned and seen in far space. Once again, vision of
the hand seen in the mirror must have activated a rep-
resentation of peripersonal space, and not the extrapersonal
space suggested by the retinal image provided by
the stimulus in the mirror. Maravita et al.’s [79,82] re-
sults appear to show that knowledge of the experimental
situation (Mirror or Far condition) is sufficient for people to 're-map' the position of the visual distractors
to their actual location (within near, peripersonal space
in the Mirror condition, and in far space in the Far
condition). So it would appear that while crossmodal
congruency effects may be 'cognitively impenetrable',
they can still be modulated by abstract knowledge
concerning the real location of visual distractors, and
not just by physical changes of the positions of the hands relative to visual distractors.
We are currently attempting to extend this line of
research to look at people’s ability to adapt to other
kinds of indirect information about limb position, as, for
example, when the hand is seen on a video monitor (cf.
[60], for neurophysiological data on this; and
[19,20,27,145], for human data), and the consequences
and determinants of this for the sense of ownership or identification with a limb as one's own [5,6,30,31,
75,100,118,123,142,143,146].
3.3. Tool-use and the body schema: consequences for
peripersonal space
Thanks to the evolutionary liberation of the hands
from locomotion, humans can efficiently use tools in order to extend the range of actions that they can per-
form [102]. Think, for example, of the croupier’s rake,
the blacksmith’s hammer or the surgeon’s knife. In fact,
humans frequently use tools to perform a variety of
different tasks in their everyday lives: everything from
eating using cutlery to writing with pens and other
implements, from gardening with rakes, shovels, and
hoes, to playing sport with a variety of racquets, sticks, poles, and other equipment. Tool-use has become such
an integral part of modern life that there are actually few
activities left that we perform without them, and this
raises fascinating and important questions about how
sensory information arriving at the somatosensory epi-
thelia can be modulated and spatially re-coded by tool-
use. In particular, how are visual and somatosensory
information integrated in tool-use, and can functional peripersonal space (that region of visible space around
the body that is reachable with our hands [9,107,
113,115]) be extended dynamically by active tool-use?
The scientific study of tool-use is one area that is cur-
rently becoming increasingly popular with cognitive
neuroscience researchers ([57,59–62], see [53] for a re-
view).
In the classic neurology literature, the so-called body schema is constructed by continuous input from
somatosensory and proprioceptive afference [29,48].
This ‘‘schema’’ is an on-going and constantly updated
internal representation of the shape of the body, and of
the position of the body in space both in respect to the
external world, and in relation to its own parts [7,34].
Somatosensory signals are both perceived and localized
with reference to this representation, and these functions can be impaired as a consequence of sensory disruption
following selective brain damage [7,17,28], or peripheral
deafferentation (e.g., phantom limbs; [89]). In many
cases, it has been argued that tools are actually assimi-
lated into the ‘body schema’ or ‘body image’
[7,62,101,102,153,156] (see also Footnote 3 on this
distinction).
Phantom phenomena, in particular, provide remarkable evidence in support of the plasticity of the body
image, and its extension by inanimate objects or ‘tools’
[12,65,88,101,103,110,123]. Many amputees feel pain in
their missing limb [89] and, over time, their phantom
and its associated pain retract, 'telescoping' toward the
stump. The wearing of a prosthetic limb, however, can
suddenly relieve pain and restore the phantom to its
[Fig. 5: panels (a) and (b); tool length 75 cm.]
Fig. 5. Schematic view of the experimental set-up used by Maravita et
al. [81] to investigate the possible modification of the body schema
elicited by extended tool use. The position of the vibrotactile stimu-
lators is indicated by the grey triangles by the participant's hands, while
the circles at the distal tip of the tools represent visual distractors
(LEDs). (a) shows the Uncrossed-tools condition, while (b) shows the
Crossed-tools condition. In Maravita et al.’s Experiment 1, the tools
consisted of toy-golf clubs, while plastic sticks were used in their
Experiment 2. The alignment of the tools was changed actively by the
participant after every four trials in Experiment 1, and changed by the
experimenter after every 48 trials in Experiment 2. (Redrawn from
[81].)
C. Spence et al. / Journal of Physiology - Paris 98 (2004) 171–189 181
previous length, ‘fleshing out’ the artificial limb ([89],[111, p. 221]). 6 Several accounts from primate studies,
as well as from normal participants and brain-damaged
human patient populations suggests that the manipula-
tion of tools, and other external objects that frequently
come into contact with our bodies (such as rings worn
on the hand) can also become incorporated into the
body schema [2,18,48,59]. Certain patterns of brain
damage can also lead to specific deficits in the ability to use or manipulate tools. For example, Paillard [102]
reports that while some patients with left parietal brain
damage can describe a tool and correctly define its
function, they may also, at the same time, be unable to
manipulate it correctly.
Pioneering primate neurophysiological data in this
area have demonstrated that the multisensory integration
of visual and somatosensory inputs can be critically
affected by the use of tools. In monkeys trained to use
tools, the emergence of bimodal visuotactile cells was
reported during recording from the medial bank of the
intraparietal sulcus [59]. Many of the cells in this area
had overlapping RFs, and would respond both to tactile
or proprioceptive stimulation of, for example, the hand,
and to visual stimuli (especially the sight of a food re-
ward) approaching the hand. Immediately after
tool-use, the visual RFs of these bimodal cells appeared to be
elongated along the length of the tool, such that visual
stimuli approaching the tip of the tool were as effective
at driving the neurons as stimuli approaching the hand
[59]. Iriki et al. speculated that the use of a tool could
plastically extend the representation of the hand in the
body schema, so that now even distant stimuli activate
those multisensory neurons coding for stimuli presented near the body (i.e., an extension of peripersonal space to
incorporate all stimuli that are now accessible by the
manipulation of the tool, and not just by the hand).
More recently, Maravita and colleagues [81] have
demonstrated behaviourally that the body image can be
modified by extended tool-use: The prolonged wielding
of golf-club-like sticks resulted in changes in the pattern
of crossmodal congruency effects elicited by visual distractors placed at the end of tools (see also [52]). Par-
ticipants in their study were required to make speeded
elevation discrimination responses with their right foot
to vibrotactile target stimuli presented from vibrators
attached to the proximal ends of two tools, one held in
either hand. As in our previous studies, participants
rested their index fingers and thumbs on these vibrators
in an upper/lower arrangement (see Fig. 5). However, in contrast to our previous studies, the upper and lower
visual distractors were now placed at the far end of each

6 The crossmodal congruency task may provide an ideal tool to
examine empirically the perceptual reality of this retraction and
extension of (physically non-existent) phantom limbs.

tool. On some trials, participants were required to keep
the tools in an uncrossed posture (Uncrossed- or Straight-tools condition, Fig. 5a), while on other trials
they were required to cross the tools over the midline
(Crossed-tools condition, Fig. 5b). Although there were
visual distractors and vibrotactile stimulators on each
side of space in both conditions, the relative spatial
relationship between the pairs of visual distractors and
the vibrotactile targets connected by each tool changed
when the tools were crossed over. While each hand was 'connected' by the tool with distractors on the same side
of space in the Uncrossed-, or Straight-tools condition,
each hand was ‘connected’ with distractors on the
opposite side of space in the Crossed-tools condition.
The question Maravita et al. addressed was whether
reaching with the tool to distractors on the opposite side
of space could reduce, or even invert, the usual pattern
of crossmodal congruency effects (whereby visual distractors on the same side as vibrotactile targets usually
produce larger crossmodal congruency effects than those
on the opposite side), such that we would find larger
crossmodal congruency effects for opposite side than for
same side distractors. A reversal of this kind would be
predicted if one believed that by extending the hand’s
action space via the tool, vibrotactile stimuli at the hand
and visual distractors on the opposite end of the tool would now share a common multisensory representation
and possibly show larger crossmodal congruency effects
(see [138], discussed earlier, for related results with
crossed hands).
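The spatial re-mapping at stake can be made concrete with a toy sketch (function and variable names are ours, not from the original study): each tool 'connects' the hand at its proximal end to the distractor at its distal tip, and crossing the tools swaps which side of space each hand is connected to.

```python
def connected_distractor_side(hand_side, tools_crossed):
    # With straight tools the distal tip (and its distractor LED) lies on
    # the same side of space as the hand; crossing the tools over the
    # midline connects each hand to the opposite side instead.
    if not tools_crossed:
        return hand_side
    return 'left' if hand_side == 'right' else 'right'

# A vibrotactile target at the right hand is 'connected' to the
# right-side distractors with straight tools...
assert connected_distractor_side('right', tools_crossed=False) == 'right'
# ...but to the left-side distractors once the tools are crossed.
assert connected_distractor_side('right', tools_crossed=True) == 'left'
```

On the re-mapping account, the larger congruency effect should follow the 'connected' side, which is exactly the reversal described below.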
The results confirmed our prediction by showing that
the typical pattern of larger crossmodal congruency ef-
fects for same side distractors demonstrated in the
Straight-tools posture was reversed by crossing the tools (Fig. 6a). Interestingly, however, while this pattern
of results was found in the first experiment, where par-
ticipants actively switched between the two postures
after every 4 trials, no such reversal of crossmodal
congruency effects was reported in a second experiment
(see Fig. 6b) when the participant’s posture was swit-
ched passively by the experimenter after only every 48
trials instead. Under such conditions, the pattern of crossmodal congruency effects remained very similar for
the two postures. These results suggest that the tool-
based spatial re-mapping of the crossmodal congruency
effect requires both frequent and active use of the tools
(cf. [59,102]).
In order to clarify the precise effects of practice fur-
ther, Maravita et al. [81] also compared the results from
the earlier and later parts of the experimental session. Interestingly, the critical spatial reversal of the cross-
modal congruency effect with crossed tools was only
present in the second part of the experiment (i.e., blocks
3–5; see Fig. 6d), and not in the first part (i.e., in the first
two experimental blocks; see Fig. 6c), presumably due to
the prolonged practice with the tools participants had
had by this stage. This result could be compatible with a
modification of the representation of the hand in the
[Fig. 6: panels (a)–(d); y-axis: Crossmodal congruency (ms).]
Fig. 6. The pattern of crossmodal congruency effects demonstrated in
Maravita et al.'s [81] tool-use study. The data represent the mean
crossmodal congruency effects (the error bars represent the standard
error of the mean) in terms of the RT data as a function of the
alignment of the tools (uncrossed versus crossed). For all panels, the
white bars represent performance when the visual distractors were
presented on the same side as the vibrotactile targets, while the black
bars represent performance when the target and distractor were
presented from different sides. The numerical values shown within each
column highlight the mean crossmodal congruency effect in terms of
the error data. (a) The results for Experiment 1 (all blocks) in which
the alignment of the tools was changed actively by the participant
after every four trials. (b) The results for Experiment 2 (all blocks)
in which the alignment of the tools was changed by the experimenter
only after every 48 trials. (c) and (d) show the results for the first
and last four blocks of trials (respectively) in Experiment 1.
body schema, which followed the prolonged use of the tool. Note here that the majority of previous studies
investigating the use of tools in patients incorporated an
adaptation phase at the start of their experiments in
which participants were required to use the tool for a
while––for example, to retrieve distal objects
with a rake [25,83].
These results suggest that the crossmodal congruency
effect is prone to modulation by changes of spatial coding following tool-based modification of crossmodal
integration. The crossmodal congruency effect elicited
by the visual distractors in this task depended not only
upon the spatial proximity between the target stimuli
and the distractors, but also upon a 'functional'
proximity in terms of action space (for logically related
reports in brain-damaged patients, see [1,9,25,80,
83,105]). Once a region of space, distant from the hand, is reached by a tool, it might become equivalent to a
near, peripersonal source of stimulation for visuotactile
congruency effects. Our results with the crossmodal
congruency task therefore converge nicely with the re-
sults from single-cell studies in monkeys who have been
taught to use tools [59], and the results of human studies
using a tactile temporal order judgment task ([156], see
also [112]).

Our most recent research into the tool-use dependent
modulation of visuotactile peripersonal space has ad-
dressed the question of whether peripersonal space is
literally ‘extended’ along the shaft of actively-wielded
tools, or whether instead just the space around the tip of
the tool is modified [52]. To do this, we presented visual
distractors not only at the tips of two actively-wielded
hand-held tools, but also halfway along the shaft of the
tool, and next to the hands. We found that visual
distractors presented at the tip of the tool held in the right
hand produced much larger crossmodal congruency ef-
fects for simultaneous right hand vibrotactile targets,
than for identical targets presented to the left hand. This
same-tool effect also occurred for visual distractors
presented on the tool held in the left hand. The effect of
visual distractors in the middle of the tool shafts, how-
ever, was not selective for the hand holding the tool.
Two further experiments supported this finding by
demonstrating that target-directed motor tasks per-
formed in both peripersonal space (pushing a button
with the detached handle of the tool) and extrapersonal
space (pointing to a distant target with a laser-pointer)
also did not result in significant differences between di-
stractors on the same-side versus on opposite-sides to
the vibrotactile targets. This result seems to suggest that visuotactile peripersonal space is not simply ‘extended’
by active tool-use, but rather that only the functional
part of a hand-held tool is incorporated into periper-
sonal space. We are currently extending this line of re-
search to investigate whether the active wielding of a
long tool is sufficient for this effect to occur, or whether
instead the part of the tool used to perform a task is
critical for the observed modulations of peripersonal space. We also hope to look at the link between the
wielding of real physical tools that are directly manip-
ulated by a person, and the remote control of tools, such
as when we use a mouse to move a cursor across a
computer monitor [56].
4. Using the crossmodal congruency task to investigate the neural underpinnings of 3-D peripersonal space
The behavioural results from the crossmodal con-
gruency studies reviewed so far are consistent with the
existence in humans of visuotactile representations of 3-
D peripersonal space that are updated as posture
changes, and that can adapt to incorporate into peri-
personal space visual stimuli that would normally be considered to be in far space instead [43]. However, it is
not clear whether the maintenance of an accurate rep-
resentation of visuotactile space as posture changes re-
lies on cortical structures (such as ventral premotor
cortex and parietal area 7b), sub-cortical structures
(such as the putamen), or both, since bimodal visuo-
tactile neurons with tactile RFs on the hand and visual
RFs that follow the hands as they move have been reported in all of these structures [35,36].
4.1. The representation of visuotactile space in the split-
brain
Spence et al. [135] attempted to address this question
by testing a split-brain patient on the crossmodal con-
gruency task. For split-brain patients (as for normal participants), the left hemisphere controls the right hand
and receives direct visual projections from the right vi-
sual field. Similarly, the right hemisphere controls the
left hand and receives direct visual projections from the
left visual field. In most situations, neural signals
resulting from visual and tactile stimuli in the same
spatial location will project, at least initially, to the same
hemisphere (i.e., the right hand and the right visual field project to the left hemisphere; and the left hand and left
visual field project to the right hemisphere). It is unclear
though what happens when a hand is crossed over into
the opposite hemifield. For instance, if the right hand of
a split-brain patient is placed in the left field, will visual
events in the left visual field map onto the tactile RFs of
the right hand, as they apparently do in the intact hu-
man brain? If this normal remapping does not occur, then bimodal cells in cortical structures in one hemi-
sphere (such as the ventral premotor cortex, parietal
area 7b, or both)––which are disconnected from similar
structures in the opposite hemisphere of split brain pa-
tients––would appear to be crucial for remapping the
visual RF onto the tactile RF when the hand crosses the
midline. Conversely, if this normal remapping does oc-
cur in the split brain, then bimodal cells in sub-cortical structures (e.g., the putamen or superior colliculus)––which
are shared between the disconnected hemispheres––
would appear to be implicated.
Spence et al. [135] therefore compared the perfor-
mance of a split-brain patient (J.W.) with that of two
healthy age-matched neurologically normal control
participants on the crossmodal congruency task. At the
time of testing, J.W. was a 45-year-old patient whose corpus callosum had been completely sectioned in 1979
(with the anterior commissure left intact) in order to try
to cure his intractable epilepsy (see [125], for a more
detailed description of J.W.’s neurological status). All
three participants made elevation discrimination re-
sponses with their right foot to vibrotactile targets pre-
sented to the thumb or index finger of the right hand,
thus ensuring that both their perception of the vibrotactile target and the initiation of their elevation dis-
crimination response were controlled by the same left
cerebral hemisphere. The participants held a foam cube
in their right hand in one of three different postures,
while their left arm rested passively in their laps. The
visual distractor stimuli were presented from two foam
cubes, one presented to either side of fixation (see Fig. 7
for a schematic illustration of the postures adopted, and the crossmodal congruency results obtained).
Visual inspection of Fig. 7a and b shows that the
magnitude of the crossmodal congruency effects elicited
by the visual distractors on the right cube was modu-
lated by the relative position of the right hand: More
specifically, crossmodal congruency effects from the
right distractor lights were more pronounced when the
Fig. 7. Schematic view of the foam cubes (represented by open rect-
angles) and postures adopted by the normal control participants and
by the split-brain patient J.W. in Spence et al.’s [135] study, (a–c)
showing the direction of fixation (dotted line) in the different posture
conditions. The locations of the vibrotactile targets, which were always
presented to the right hand, are indicated by the letter ‘T’. Mean
crossmodal congruency effects elicited by visual distractors are shown
numerically next to the cube on which they were situated. (The absence
of any values next to certain cubes shows that no distractor lights were
attached to that particular cube.) Crossmodal congruency effects rep-
resent a difference score: Performance on incongruent–distractor trials
(i.e., trials on which the vibrotactile target and visual distractor ap-
peared at different elevations) minus performance on congruent–dis-
tractor trials (i.e., trials on which the target and distractor were
presented from the same elevation), measured in terms of inverse
efficiency (average response time divided by the proportion correct for
each condition [144], which combines speed and accuracy into a single
measure to allow comparisons among conditions uncontaminated by
possible speed-accuracy trade-offs; see text for details).
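The difference-score logic described in this caption is simple enough to sketch in a few lines of code. The following Python snippet is not from the paper, and all numerical values in it are hypothetical; it merely illustrates how inverse efficiency and a crossmodal congruency effect would be computed from per-condition mean RTs and accuracies:

```python
# Illustrative sketch (all values hypothetical, not taken from the study).

def inverse_efficiency(mean_rt_ms, proportion_correct):
    """Inverse efficiency [144]: mean response time divided by the
    proportion of correct responses for a condition. Combining speed
    and accuracy this way guards against speed-accuracy trade-offs;
    higher values indicate worse performance."""
    return mean_rt_ms / proportion_correct

# Hypothetical condition means: RT in ms, accuracy as proportion correct.
ie_incongruent = inverse_efficiency(mean_rt_ms=620.0, proportion_correct=0.88)
ie_congruent = inverse_efficiency(mean_rt_ms=540.0, proportion_correct=0.96)

# Crossmodal congruency effect: incongruent minus congruent inverse
# efficiency. Positive values index interference from incongruent
# visual distractors.
cce = ie_incongruent - ie_congruent
print(round(cce, 1))
```

With these made-up numbers the effect is positive, i.e. incongruent distractors cost the participant both time and accuracy relative to congruent ones.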
right hand held the cube on which they were mounted, and decreased when the participants grasped a more
eccentrically positioned cube with their right hand in-
stead. However, the most interesting result for present
purposes occurred when participants moved their right
hand across the midline into the left hemifield (see Fig.
7c): For the two control participants, crossmodal con-
gruency effects were now greater for distractor lights on
the left cube (now held by the crossed right hand) than for lights on the right cube, again replicating our pre-
vious findings. By contrast, the right distractor lights
always interfered more than those on the left for the
split-brain patient J.W., no matter whether his right
hand was placed in an uncrossed or a crossed posture.
This result suggests a failure to remap visuotactile space
appropriately when his right hand crosses into the left
hemispace.
Before accepting this conclusion, however, an alter-
native interpretation for J.W.’s performance needs to be
ruled out: Namely, that there may actually have been
nothing wrong with his ability to remap visuotactile
space across the midline per se; instead, his problems
might be explained simply in terms of his responding left
hemisphere not ‘seeing’ the left visual distractors, which
were initially projected to his ‘non-responding’ right hemisphere. (Note that the left visual distractors never
actually elicited any noticeable crossmodal congruency
effects for vibrotactile targets presented to J.W.’s right hand, no matter what posture was adopted.) Although a
range of previously-published behavioural data made
such an explanation seem unlikely [32,54,76], we
thought it preferable to examine this issue more directly
in the context of the crossmodal congruency task.
To this end, we conducted a number of further pos-
tural manipulation experiments on J.W. in the years
following our original study. In one such study, we found that left visual distractors can elicit significant
crossmodal congruency effects for vibrotactile targets
presented to the right hand, no matter whether the right
hand was placed in the right or left hemispace (see [136],
Fig. 2, for details). These results presumably reflect the
behavioural consequences of the residual connections
between cortical and sub-cortical visual areas that re-
main intact in J.W., rather than cortico-cortical connections [10,47,76]. However, the crucial point is that
even though the distractor lights on the left could
interfere with right hand elevation discrimination per-
formance, there was still no evidence of a remapping of
visuotactile space when J.W.’s right hand crossed into
left hemispace.
These results, therefore, demonstrate that J.W.’s
problem is not simply with seeing lights presented ipsilateral to the responding hemisphere (i.e., on the left),
but more specifically with a failure to maintain an
accurate representation of visuotactile peripersonal
space across the two hemifields. Spence et al.’s [135,136]
results indicate that cross-cortical (or more strictly
speaking, trans-commissural) connections are crucial for
the maintenance of an up-to-date representation of
peripersonal visuotactile space, at least when the right hand crosses the midline. It would be interesting in fu-
ture studies to investigate whether a similar pattern of
results would be obtained if J.W. were required to make
elevation discrimination responses with his left foot for
vibrotactile targets presented to the left hand (cf.
[49,50]).
4.2. Disrupting the representation of visuotactile space
with repetitive transcranial magnetic stimulation
The pattern of results obtained with the split-brain
patient J.W. supports the view that performance on the
crossmodal congruency task may index a relatively high-
level (i.e., cortical) representation of visuotactile space.
However, given that J.W. has by now been tested on a
near-daily basis for much of the last 20 years, it is important that converging findings are sought from
other cognitive neuroscience methodologies to back up
the claims made on the basis of this rather unique pa-
tient. To this end, we are currently investigating whether
we can elicit the abnormal pattern of crossmodal con-
gruency effects demonstrated by J.W. in a relatively
normal population of Oxford undergraduates: We are
using repetitive transcranial magnetic stimulation (rTMS) to disrupt activity in a region corresponding
approximately to the angular gyrus and the posterior
parts of the intraparietal sulcus. Although this research
is still at an early stage, our preliminary results suggest
that performance on the crossmodal congruency task in
our normal participants can also be selectively impaired
when they adopt a crossed hands posture (rather than
an uncrossed posture) and rTMS is applied over the region of the angular gyrus and posterior parts of the
intraparietal sulcus (rather than over primary visual or
somatosensory areas, or when sham rTMS is applied to
the back of the neck [126,149]). The pattern of cross-
modal congruency effects observed while applying rTMS
therefore provides converging evidence to support the
critical importance of cortical structures (and presum-
ably also cross-cortical or cross-commissural connections) in maintaining an up-to-date representation of
visuotactile space.
Some indirect evidence in favour of a cortical
involvement in the representation of visuotactile peri-
personal space also comes from an extensive body of
research on neurological patients suffering the clinical
phenomenon of crossmodal extinction (see [73] for a
recent review). In these patients, a tactile stimulus to the contralesional side of the body (e.g., the hand or neck) is
often ‘extinguished’ by a concurrent stimulus presented
in a different sensory modality (e.g., a light or a sound),
particularly when it is presented near the corresponding
body part on the ipsilesional side (e.g., the other hand,
or the other side of the neck). However, such a difference
in the magnitude of visuotactile (or audiotactile)
extinction as a function of the vicinity of the ipsilesional
stimulus to the body part (the ‘near’ vs. ‘far’ compari-
son) has not been consistently observed in all patients
[72,87]. This suggests that under some circumstances,
the cortical lesion of these patients (typically involving
frontal, temporal and/or parietal areas of the right
hemisphere; e.g., [22,73,83]) can disrupt the normal
multisensory representation of peripersonal space,
resulting in a similar degree of multisensory extinction regardless of the proximity of the ipsilesional stimuli.
Although a clear anatomical definition of brain regions
responsible for crossmodal extinction, and for the ex-
trapersonal/peripersonal phenomena has so far been
complicated by the extensive nature of the brain-damage
often presented by these patients, high-resolution struc-
tural and functional neuroimaging could now help to
shed light on the neural basis of the multisensory representation of peripersonal space (e.g., [77]).
4.3. Future studies using the crossmodal congruency task
It will be particularly interesting in future research to
compare the pattern of performance on the crossmodal
congruency task highlighted in the split-brain patient
J.W. with that of other patients, such as those with parietal brain damage exhibiting the phenomenon of
crossmodal extinction [22,73]. We also hope to extend
our research to investigate the neural underpinnings of
the crossmodal congruency effect and the multisensory
perception of limb position more directly, using func-
tional Magnetic Resonance Imaging (fMRI; cf.
[77,78,92]).
In future studies, it may also prove particularly informative to investigate the extent to which other
tasks, such as making saccadic responses to sudden
onset visual and tactile stimuli, may rely more on sub-
cortical, rather than cortical, representations of visuo-
tactile space [24,40–42]. A growing body of behavioural
evidence is now leading many researchers to suggest that
performance on certain behavioural tasks may reflect
the competing influences of multiple representations of visuotactile space (or of the body image/schema
[34,56,66,69,131,153]), hence possibly resulting in a dif-
ferent pattern of behavioural results, even when normal
participants place their limbs in unusual postures
[124,155].
5. Conclusions
Hopefully, it should be clear by now that variations
in the magnitude of the crossmodal congruency effect
provide both a reliable and a robust index of common
spatial position across different sensory modalities, in
particular, vision and touch. Over a number of studies,
we have shown that visual distractors interfere with
speeded elevation discrimination responses to vibrotactile target stimuli presented to the thumb or index finger
of either hand, even when participants are repeatedly
instructed to ignore what they see. Crossmodal con-
gruency effects are maximal when vision and touch are
presented from the same spatial location at approxi-
mately the same time, and fall off as the relative spa-
tiotemporal separation between target and distractor
stimuli increases [132,138]. The maximal crossmodal congruency effects elicited by visual distractors follow
the hands when they move through space, even when
they cross the midline in normal participants [132,
135,138].
In the last few years, we have used the crossmodal
congruency task to investigate the flexibility of the body
representation (body schema), as highlighted by the
apparent displacement of the limbs seen in the ‘rubber hand’ illusion [104,148], and the changes in peripersonal
space that can occur following extended tool-use [52,81].
These results are consistent with the extant neurophys-
iology of the visuotactile representation of 3-D peri-
personal space seen in primates [33,34,59,96]. The
crossmodal congruency task can also be used to eluci-
date disturbances to the visuotactile representation of
space seen following specific brain damage, such as the sectioning of the corpus callosum in split-brain patients
[135,136]. The growing understanding of the factors that
govern whether or not particular distal events will be
functionally incorporated into the body schema to ex-
tend the boundary of what constitutes peripersonal
space may also have a number of important applications
for the future design and implementation of teleopera-
tion and virtual haptic reality systems [51,84].
Taken together, we believe that the results of the
crossmodal congruency studies conducted to date
highlight the utility of the paradigm itself for investi-
gating the relative contributions of visual, tactile, and
proprioceptive inputs to the multisensory representation
of 3-D peripersonal space in both normal and patient
populations. In the years to come, we hope to be able to
combine neurophysiological, electrophysiological, neuropsychological, and neuroimaging data with behavio-
ural data from normal participants on this task in order
to try and bridge the gap between the rich body of
published single-cell neurophysiological data, and the
human perceptual experiences with which we are all
familiar (cf. [34]). For it is only by adopting this con-
verging methodologies approach that we will make sig-
nificant inroads toward resolving the questions regarding the multisensory representation of space that
have long vexed both scientists and philosophers
[121,130].
References
[1] K. Ackroyd, M.J. Riddoch, G.W. Humphreys, S. Nightingale, S.
Townsend, Widening the sphere of influence: using a tool to
extend extrapersonal visual space in a patient with severe neglect,
Neurocase 8 (2002) 1–12.
[2] S. Aglioti, N. Smania, M. Manfredi, G. Berlucchi, Disownership
of left hand and objects related to it in a patient with right brain
damage, NeuroReport 8 (1996) 293–296.
[3] K.C. Armel, V.S. Ramachandran, Projecting sensations to
external objects: evidence from skin conductance response, Proc.
Royal Soc. London B 270 (2003) 1499–1506.
[4] E.L. Austin, S. Soto-Faraco, J.P.J. Pinel, A.F. Kingstone, Virtual
body effect: factors influencing visual-tactile integration, Abstr.
Psychon. Soc. 6 (2001) 2.
[5] L.E. Bahrick, Intermodal origins of self-perception, in: P. Rochat
(Ed.), The Self in Infancy. Theory and Research, Elsevier
Science, Amsterdam, 1995, pp. 349–373.
[6] L. Bahrick, J.S. Watson, Detection of intermodal proprioceptive-
visual contingency as a potential basis of self-perception in
infancy, Dev. Psychol. 21 (1985) 963–973.
[7] G. Berlucchi, S. Aglioti, The body in the brain: neural bases of
corporeal awareness, Trends Neurosci. 20 (1997) 560–564.
[8] P. Bertelson, B. de Gelder, The psychology of multimodal
perception, in: C. Spence, J. Driver (Eds.), Crossmodal Space
and Crossmodal Attention, Oxford University Press, New York,
2004.
[9] A. Berti, F. Frassinetti, When far becomes near: remapping of
space by tool use, J. Cogn. Neurosci. 12 (2000) 415–420.
[10] R.G. Bittar, M. Ptito, J. Faubert, S.O. Dumoulin, A. Ptito,
Activation of the remaining hemisphere following stimulation of
the blind hemifield in hemispherectomized subjects, Neuroimage
10 (1999) 339–346.
[11] M. Botvinick, J. Cohen, Rubber hands ‘feel’ touch that eyes see,
Nature 391 (1998) 756.
[12] P.R. Bromage, R. Melzack, Phantom limbs and the body
schema, Can. Anaesth. Soc. 21 (1974) 267–274.
[13] C.M. Butter, H.A. Buchtel, R. Santucci, Spatial attentional
shifts: further evidence for the role of polysensory mechanisms
using visual and tactile stimuli, Neuropsychologia 27 (1989)
1231–1240.
[14] A. Caclin, S. Soto-Faraco, A. Kingstone, C. Spence, Tactile
‘capture’ of audition, Percept. Psychophys. 64 (2002) 616–630.
[15] C.S. Choe, R.B. Welch, R.M. Gilford, J.F. Juola, The ‘ventril-
oquist effect’: visual dominance or response bias?, Percept.
Psychophys. 18 (1975) 55–60.
[16] T. Chong, J.B. Mattingley, Preserved cross-modal attentional
links in the absence of conscious vision: evidence from patients
with primary visual cortex lesions, J. Cogn. Neurosci. 12 (Supp.)
(2000) 38.
[17] M. Critchley, Disorders of the body-image, in: M. Critchley
(Ed.), The Parietal Lobe, Hafner Press, New York, 1953, pp.
225–255.
[18] M. Critchley, The Divine Banquet of the Brain and Other Essays,
Raven Press, New York, 1979.
[19] E. Daprati, N. Franck, N. Georgieff, J. Proust, E. Pacherie, J.
Dalery, M. Jeannerod, Looking for the agent: an investigation
into consciousness of action and self-consciousness in schizo-
phrenic patients, Cognition 65 (1997) 71–86.
[20] E. Daprati, A. Sirigu, P. Pradat-Diehl, N. Franck, M. Jeannerod,
Recognition of self-produced movement in a case of severe
neglect, Neurocase 6 (2000) 477–486.
[21] G. di Pellegrino, F. Frassinetti, Direct evidence from parietal
extinction of enhancement of visual attention near a visible hand,
Curr. Biol. 10 (2000) 1475–1477.
[22] G. di Pellegrino, E. Làdavas, A. Farnè, Seeing where your hands
are, Nature 388 (1997) 730.
[23] J. Driver, P.G. Grossenbacher, Multimodal spatial constraints
on tactile selective attention, in: T. Inui, J.L. McClelland (Eds.),
Attention and Performance XVI: Information Integration in
Perception and Communication, MIT Press, Cambridge, MA,
1996, pp. 209–235.
[24] J. Driver, C. Spence, Crossmodal links in spatial attention,
Philos. Trans. R. Soc. B. Biol. Sci. 353 (1998) 1319–1331.
[25] A. Farnè, E. Làdavas, Dynamic size-change of hand peripersonal
space following tool use, NeuroReport 11 (2000) 1645–1649.
[26] A. Farnè, F. Pavani, F. Meneghello, E. Làdavas, Left tactile
extinction following visual stimulation of a rubber hand, Brain
123 (2000) 2350–2360.
[27] N. Franck, C. Farrer, N. Georgieff, M. Marie-Cardine, J. Dalery,
T. d’Amato, M. Jeannerod, Defective recognition of one’s own
actions in patients with schizophrenia, Am. J. Psychiatry 158
(2001) 454–459.
[28] J.A.M. Frederiks, Disorders of the body schema, in: J.A.M.
Frederiks (Ed.), Clinical Neuropsychology (Revised series), first
ed., Elsevier Science, Amsterdam, 1985, pp. 207–240.
[29] S. Gallagher, Body image and body schema: a conceptual
clarification, J. Mind Behav. 7 (1986) 541–554.
[30] S. Gallagher, Body schema and intentionality, in: J.L. Bermudez,
A. Marcel, N. Eilan (Eds.), The Body and the Self, MIT Press,
Cambridge, MA, 1995, pp. 225–244.
[31] S. Gallagher, Philosophical conceptions of the self: implications
for cognitive science, Trends Cogn. Sci. 4 (2000) 14–21.
[32] M.S. Gazzaniga, Perceptual and attentional processes following
callosal section in humans, Neuropsychologia 25 (1987) 119–133.
[33] M.S.A. Graziano, Where is my arm? The relative role of vision
and proprioception in the neuronal representation of limb
position, Proc. Natl. Acad. Sci. USA 96 (1999) 10418–10421.
[34] M.S.A. Graziano, M.M. Botvinick, How the brain represents the
body: insights from neurophysiology and psychology, in: W.
Prinz, B. Hommel (Eds.), Common Mechanisms in Perception
and Action: Attention and Performance, vol. XIX, Oxford
University Press, New York, 2002, pp. 136–157.
[35] M.S.A. Graziano, C.G. Gross, A bimodal map of space:
somatosensory receptive fields in the macaque putamen with
corresponding visual receptive fields, Exp. Brain Res. 97 (1993)
96–109.
[36] M.S.A. Graziano, G.S. Yap, C.G. Gross, Coding of visual space
by premotor neurons, Science 266 (1994) 1054–1057.
[37] M.S.A. Graziano, X.T. Hu, C.G. Gross, Coding the locations of
objects in the dark, Science 277 (1997) 239–241.
[38] M.S.A. Graziano, D.F. Cooke, C.S.R. Taylor, Coding the
location of the arm by sight, Science 290 (2000) 1782–1786.
[39] R. Gregory, Mirrors in Mind, Penguin Books, UK, 1997.
[40] J.M. Groh, D.L. Sparks, Saccades to somatosensory targets. 1.
Behavioral characteristics, J. Neurophysiol. 75 (1996) 412–427.
[41] J.M. Groh, D.L. Sparks, Saccades to somatosensory targets. 2.
Motor convergence in primate superior colliculus, J. Neurophys-
iol. 75 (1996) 428–438.
[42] J.M. Groh, D.L. Sparks, Saccades to somatosensory targets. 3.
Eye-position dependent somatosensory activity in primate supe-
rior colliculus, J. Neurophysiol. 75 (1996) 439–453.
[43] O.-J. Grüsser, Multimodal structure of the extrapersonal space,
in: A. Hein, M. Jeannerod (Eds.), Spatially Oriented Behavior,
Springer-Verlag, New York, 1983, pp. 327–352.
[44] C.S. Harris, Adaptation to displaced vision: visual, motor, or
proprioceptive change?, Science 140 (1963) 812–813.
[45] L.R. Harris, C. Blakemore, M. Donaghy, Integration of visual
and auditory space in the mammalian superior colliculus, Nature
288 (1980) 56–59.
[46] J.C. Hay, H.L. Pick Jr., K. Ikeda, Visual capture produced by
prism spectacles, Psychon. Sci. 2 (1965) 215–216.
[47] L.N. Hazrati, A. Parent, Contralateral pallidothalamic and
pallidotegmental projections in primates: an anterograde and
retrograde labelling study, Brain Res. 567 (1991) 212–223.
[48] H. Head, G. Holmes, Sensory disturbances from cerebral lesions,
Brain 34 (1911) 102–254.
[49] K.M. Heilman, T. Van Den Abell, Right hemisphere dominance
for attention: The mechanisms underlying hemispheric asymme-
tries of inattention (neglect), Neurology 30 (1980) 327–330.
[50] K.M. Heilman, R.T. Watson, E. Valenstein, Neglect and related
disorders, in: K.M. Heilman, E. Valenstein (Eds.), Clinical
Neuropsychology, second ed., Oxford University Press, New
York, 1985, pp. 243–293.
[51] R. Held, N. Durlach, Telepresence, time delay and adaptation,
in: S.R. Ellis, M.K. Kaiser, A.C. Grunwald (Eds.), Pictorial
Communication in Virtual and Real Environments, Taylor &
Francis, London, 1993, pp. 232–246.
[52] N.P. Holmes, G.A. Calvert, C. Spence, Does tool-use extend
peripersonal space? Evidence from the crossmodal congruency
task, Percept. Psychophys., submitted for publication.
[53] N.P. Holmes, C. Spence, The body schema and the multisensory
representation(s) of peripersonal space, Cog. Process., in press.
[54] J.D. Holtzman, Interactions between cortical and subcortical
visual areas: evidence from human commissurotomy patients,
Vision Res. 24 (1984) 801–813.
[55] J.P. Hunter, J. Katz, K.D. Davis, The effect of tactile and visual
sensory inputs on phantom limb awareness, Brain 126 (2003)
579–589.
[56] H. Imamizu, S. Miyauchi, T. Tamada, Y. Sasaki, R. Takino, B.
Pütz, T. Yoshioka, M. Kawato, Human cerebellar activity
reflecting an acquired internal model of a new tool, Nature 403
(2000) 192–195.
[57] K. Inoue, R. Kawashima, M. Sugiura, A. Ogawa, T. Schormann,
K. Zilles, H. Fukada, Activation in the ipsilateral posterior
parietal cortex during tool use: a PET study, Neuroimage 14
(2001) 1469–1475.
[58] M.J. Intons-Peterson, Imagery paradigms: how vulnerable are
they to experimenters’ expectations?, J. Exp. Psychol. Hum.
Percept. Perform. 9 (1983) 394–412.
[59] A. Iriki, M. Tanaka, Y. Iwamura, Coding of modified body
schema during tool use by macaque postcentral neurones,
NeuroReport 7 (1996) 2325–2330.
[60] A. Iriki, M. Tanaka, S. Obayashi, Y. Iwamura, Self-images in the
video monitor coded by monkey intraparietal neurons, Neurosci.
Res. 40 (2001) 163–173.
[61] H. Ishibashi, S. Hihara, A. Iriki, Acquisition and development of
monkey tool-use: behavioural and kinematic analyses, Can. J.
Physiol. Pharmacol. 78 (2000) 1–9.
[62] H. Ishibashi, S. Hihara, M. Takahashi, T. Heike, T. Yokota, A.
Iriki, Tool-use learning selectively induces expression of brain-
derived neurotrophic factor, trkB, and neurotrophin 3 in the
intraparietal multisensory cortex of monkeys, Cogn. Brain Res.
14 (2002) 3–9.
[63] S. Itakura, Mirror guided behavior in Japanese monkeys
(Macaca fuscata fuscata), Primates 28 (1987) 149–161.
[64] S. Itakura, Use of mirror to direct their responses in Japanese
monkeys (Macaca fuscata fuscata), Primates 28 (1987) 343–
352.
[65] J.H. Kaas, Phantoms of the brain, Nature 391 (1998) 331–333.
[66] J.H. Kaas, R.J. Nelson, M. Sur, C.-S. Lin, M.M. Merzenich,
Multiple representations of the body within the primary
somatosensory cortex of primates, Science 204 (1979) 521–
523.
[67] S. Kennett, M. Eimer, C. Spence, J. Driver, Tactile-visual links in
exogenous spatial attention under different postures: convergent
evidence from psychophysics and ERPs, J. Cogn. Neurosci. 13
(2001) 462–478.
[68] S. Kennett, C. Spence, J. Driver, Visuo-tactile links in covert
exogenous spatial attention remap across changes in unseen hand
posture, Percept. Psychophys. 64 (2002) 1083–1094.
[69] D.-H. Kim, H. Cruse, Two kinds of body representation are used
to control hand movements following tactile stimulation, Exp.
Brain Res. 139 (2001) 76–91.
[70] R.M. Klein, Attention and visual dominance: a chronometric
analysis, J. Exp. Psychol. Hum. Percept. Perform. 3 (1977) 365–
378.
[71] R.M. Klein, D.I. Shore, Relationships among modes of visual
orienting, in: S. Monsell, J. Driver (Eds.), Control of Cognitive
Processes: Attention and Performance, vol. XVIII, MIT Press,
Cambridge, MA, 2000, pp. 195–208.
[72] E. Làdavas, Functional and dynamic properties of visual
peripersonal space, Trends Cogn. Sci. 6 (2002) 17–22.
[73] E. Làdavas, A. Farnè, Neuropsychological evidence for multi-
modal representations of space near specific body parts, in: C.
Spence, J. Driver (Eds.), Crossmodal Space and Crossmodal
Attention, Oxford University Press, 2004.
[74] E. Làdavas, A. Farnè, G. Zeloni, G. di Pellegrino, Seeing or not
seeing where your hands are, Exp. Brain Res. 131 (2000) 458–
467.
[75] M. Lewis, J. Brooks-Gunn, Social Cognition and the Acquisition
of the Self, Plenum, New York, 1979.
[76] J. Liederman, A reinterpretation of the split-brain syndrome:
implications for the function of corticocortical fibers, in: R.J.
Davidson, K. Hugdahl (Eds.), Brain Asymmetry, MIT Press,
Cambridge, MA, 1995, pp. 451–490.
[77] D.M. Lloyd, D.I. Shore, C. Spence, G.A. Calvert, Multisensory
representation of limb position in human premotor cortex, Nat.
Neurosci. 6 (2003) 17–18.
[78] E. Macaluso, C.D. Frith, J. Driver, Supramodal effects of covert
spatial orienting triggered by visual or tactile events, J. Cogn.
Neurosci. 3 (2002) 215–229.
[79] A. Maravita, C. Spence, K. Clarke, M. Husain, J. Driver, Vision
and touch through the looking glass in a case of crossmodal
extinction, NeuroReport 11 (2000) 3521–3526.
[80] A. Maravita, M. Husain, K. Clarke, J. Driver, Reaching with a
tool extends visual-tactile interactions into far space: evidence
from cross-modal extinction, Neuropsychologia 39 (2001) 580–
585.
[81] A. Maravita, C. Spence, S. Kennett, J. Driver, Tool-use changes
multimodal spatial interactions between vision and touch in
normal humans, Cognition 83 (2002) B25–B34.
[82] A. Maravita, C. Spence, C. Sergent, J. Driver, Seeing your own
touched hands in a mirror modulates cross-modal interactions,
Psychol. Sci. 13 (2002) 350–356.
[83] A. Maravita, K. Clarke, M. Husain, J. Driver, Active tool-use
with contralesional hand can reduce crossmodal extinction of
touch on that hand, Neurocase 8 (2002) 411–416.
[84] J. Marescaux, J. Leroy, M. Gagner, F. Rubino, D. Mutter, M.
Vix, S.E. Butner, M.K. Smith, Transatlantic robot-assisted
telesurgery, Nature 413 (2001) 379–380.
[85] L.E. Marks, Cross-modal interactions in speeded classification,
in: G. Calvert, C. Spence, B. Stein (Eds.), Handbook of
Multisensory Processes, MIT Press, in press.
[86] G. Martino, L.E. Marks, Cross-modal interaction between vision
and touch: the role of synesthetic correspondence, Perception 29
(2000) 745–754.
[87] J.B. Mattingley, J. Driver, N. Beschin, I.H. Robertson, Atten-
tional competition between modalities: extinction between touch
and vision after right hemisphere damage, Neuropsychologia 35
(1997) 867–880.
[88] H. Melville, Moby Dick, Penguin, Harmondsworth, UK, 1851/
1985.
[89] R. Melzack, Phantom limbs and the concept of the neuromatrix,
Trends Neurosci. 13 (1990) 88–92.
[90] E.W. Menzel Jr., E.S. Savage-Rumbaugh, J. Lawson, Chimpan-
zee (Pan troglodytes) spatial problem solving with the use of
mirrors and televised equivalents of mirrors, J. Comp. Psychol.
99 (1985) 211–217.
[91] N. Merat, C. Spence, D.M. Lloyd, D.J. Withington, F. McGlone,
Audiotactile links in focused and divided spatial attention,
Soc. Neurosci. Abstr. 25 (1999) 1417.
[92] M. Misaki, E. Matsumoto, S. Miyauchi, Dorsal visual cortex
activity elicited by posture change in a visuo-tactile matching
task, Neuroreport 13 (2002) 1797–1800.
[93] F. Morrell, Visual system’s view of acoustic space, Nature 238
(1972) 44–46.
[94] V.B. Mountcastle, J.C. Lynch, P.A. Georgopoulos, H. Sakata, C.
Acuna, Posterior parietal association cortex of the monkey:
command functions for operations within extrapersonal space, J.
Neurophysiol. 38 (1975) 871–908.
[95] T.I. Nielsen, Volition: a new experimental approach, Scand. J.
Psychol. 4 (1963) 225–230.
[96] S. Obayashi, M. Tanaka, A. Iriki, Subjective image of invisible
hand coded by monkey intraparietal neurons, NeuroReport 11
(2000) 3499–3505.
[97] M.T. Orne, On the social psychology of the psychological
experiment: with particular reference to demand characteris-
tics and their implications, Am. Psychol. 17 (1962) 776–
783.
[98] B. O’Shaughnessy, The Will: A Dual Aspect Theory, vol. 1,
Cambridge University Press, Cambridge, 1980, Chapter 7 (cited
in O’Shaughnessy, 1995).
[99] B. O’Shaughnessy, The sense of touch, Aust. J. Philos. 67 (1989)
37–58.
[100] B. O’Shaughnessy, Proprioception and the body image, in: J.L.
Bermudez, A. Marcel, N. Eilan (Eds.), The Body and the Self,
MIT Press, Cambridge, MA, 1995, pp. 175–203.
[101] J. Paillard, Les déterminants moteurs de l’organisation de
l’espace [The motor determinants of the organization of space],
Cah. Psychol. 14 (1971) 261–316.
[102] J. Paillard, The hand and the tool: the functional architec-
ture of human technical skills, in: A. Berthelet, J. Chavaillon
(Eds.), The Use of Tools by Human and Non-human
Primates, Oxford University Press, New York, 1993, pp. 34–
46.
[103] X. Paqueron, M. Leguen, D. Rosenthal, P. Coriat, J.C. Willer,
N. Danziger, The phenomenology of body image distortions
induced by regional anaesthesia, Brain 126 (2003) 702–712.
[104] F. Pavani, C. Spence, J. Driver, Visual capture of touch: out-of-
the-body experiences with rubber gloves, Psychol. Sci. 11 (2000)
353–359.
[105] A.J. Pegna, L. Petit, A.S. Caldara-Schnetzer, A. Khateb, J.M.
Annoni, R. Sztajzel, T. Landis, So near yet so far: neglect in far
or near space depends on tool use, Ann. Neurol. 50 (2001) 820–
822.
[106] M.I. Posner, M.J. Nissen, R.M. Klein, Visual dominance: an
information-processing account of its origins and significance,
Psychol. Rev. 83 (1976) 157–171.
[107] F.H. Previc, The neuropsychology of 3-D space, Psychol. Bull.
124 (1998) 123–164.
[108] V.S. Ramachandran, S. Blakeslee, Phantoms in the Brain,
Fourth Estate, London, 1998.
[109] V.S. Ramachandran, D. Rogers-Ramachandran, Synaesthesia in
phantom limbs induced with mirrors, Proc. R. Soc. Lond. B Biol.
Sci. 263 (1996) 377–386.
[110] V.S. Ramachandran, D. Rogers-Ramachandran, S. Cobb,
Touching the phantom limb, Nature 377 (1995) 489–490.
[111] G. Riddoch, Phantom limbs and body shape, Brain 64 (1941)
197–222.
[112] L. Riggio, L. de G. Gawryszewski, C. Umiltà, What is crossed in
crossed-hand effect?, Acta Psychol. 62 (1986) 89–100.
[113] G. Rizzolatti, V. Gallese, Mechanisms and theories of spatial
neglect, in: F. Boller, J. Grafman (Eds.), Handbook of Neuro-
psychology, Elsevier Science, Amsterdam, 1988, pp. 223–246.
[114] G. Rizzolatti, C. Scandolara, M. Matelli, M. Gentilucci,
Afferent properties of periarcuate neurons in macaque monkeys:
II. Visual responses, Behav. Brain Res. 2 (1981) 147–163.
[115] G. Rizzolatti, L. Fadiga, L. Fogassi, V. Gallese, The space
around us, Science 277 (1997) 190–191.
[116] G. Rizzolatti, L. Fogassi, V. Gallese, Motor and cognitive
functions of the ventral premotor cortex, Curr. Opin. Neurobiol.
12 (2002) 149–154.
[117] J.A. Robinson, S. Connell, B.E. McKenzie, R.H. Day, Do
infants use their own images to locate objects reflected in a
mirror?, Child Dev. 61 (1990) 1558–1568.
[118] P. Rochat, Self-perception and action in infancy, Exp. Brain Res.
123 (1998) 102–109.
[119] J.P. Roll, R. Roll, J.-L. Velay, Proprioception as a link between
body space and extra-personal space, in: J. Paillard (Ed.), Brain
and Space, Oxford University Press, New York, 1991, pp. 112–
132.
[120] R. Rosenthal, Covert communication in the psychological
experiment, Psychol. Bull. 67 (1967) 356–367.
[121] B. Russell, Human Knowledge: Its Scope and Limits, Routledge,
London, 1948/1992.
[122] K. Sathian, A.I. Greenspan, S.L. Wolf, Doing it with mirrors: a
case study of a novel approach to neurorehabilitation, Neurorehabil.
Neural Repair 14 (2000) 73–76.
[123] C. Semenza, Disorders of body representation, in: R.S. Berndt
(Ed.), Handbook of Neuropsychology, second ed., vol. 3,
Elsevier Science, Amsterdam, 2001, pp. 285–303.
[124] D.I. Shore, E. Spry, C. Spence, Confusing the mind by crossing
the hands, Cogn. Brain Res. 14 (2002) 153–163.
[125] J.J. Sidtis, B.T. Volpe, D.H. Wilson, M. Rayport, M.S.
Gazzaniga, Variability in right hemisphere language function after
callosal section: evidence for a continuum of generative capacity,
J. Neurosci. 1 (1981) 323–331.
[126] C. Spence, Crossmodal links in spatial attention, in: Multisensory
Spatial Representation Symposium, 7th Annual Meeting of the
Cognitive Neuroscience Society, San Francisco, 9–11 April 2000.
[127] C. Spence, Crossmodal attentional capture: a controversy
resolved? in: C. Folk, B. Gibson (Eds.), Attention, Distraction
and Action: Multiple Perspectives on Attentional Capture,
Elsevier Science, Amsterdam, 2001, pp. 231–262.
[128] C. Spence, Multimodal attention and tactile information-pro-
cessing, Behav. Brain Res. 135 (2002) 57–64.
[129] C.J. Spence, J. Driver, Covert spatial orienting in audition:
exogenous and endogenous mechanisms facilitate sound locali-
zation, J. Exp. Psychol. Hum. Percept. Perform. 20 (1994) 555–
574.
[130] C. Spence, J. Driver (Eds.), Crossmodal Space and Crossmodal
Attention, Oxford University Press, New York, 2004.
[131] C. Spence, C. Harris, M. Zampini, The ‘Japanese illusion’
revisited? Impaired vibrotactile movement discrimination with
interleaved fingers, in: Experimental Psychology Society Meeting
held in Reading, UK, July 9–11, 2003.
[132] C. Spence, F. Pavani, J. Driver, What crossing the hands can
reveal about crossmodal links in spatial attention, Abstr.
Psychon. Soc. 3 (1998) 13.
[133] C. Spence, F. Pavani, J. Driver, Crossmodal links between vision
and touch in covert endogenous spatial attention, J. Exp.
Psychol. Hum. Percept. Perform. 26 (2000) 1298–1319.
[134] C. Spence, J. Ranson, J. Driver, Crossmodal selective attention:
on the difficulty of ignoring sounds at the locus of visual
attention, Percept. Psychophys. 62 (2000) 410–424.
[135] C. Spence, A. Kingstone, D.I. Shore, M.S. Gazzaniga, Repre-
sentation of visuotactile space in the split brain, Psychol. Sci. 12
(2001) 90–93.
[136] C. Spence, D.I. Shore, M.S. Gazzaniga, S. Soto-Faraco, A.
Kingstone, Failure to remap visuotactile space across the midline
in the split-brain, Can. J. Exp. Psychol. 55 (2001) 135–142.
[137] C. Spence, D.I. Shore, R.M. Klein, Multisensory prior entry, J.
Exp. Psychol. Gen. 130 (2001) 799–832.
[138] C. Spence, F. Pavani, J. Driver, The spatial modulation of the
crossmodal congruency task, Cogn. Affect. Behav. Neurosci.,
submitted for publication.
[139] B.E. Stein, M.A. Meredith, The Merging of the Senses, MIT
Press, Cambridge, MA, 1993.
[140] B.E. Stein, B. Magalhães-Castro, L. Kruger, Superior colliculus:
visuotopic–somatotopic overlap, Science 189 (1975) 224–226.
[141] J. Tastevin, En partant de l’expérience d’Aristote: les déplacements
artificiels des parties du corps ne sont pas suivis par le
sentiment de ces parties ni par les sensations qu’on peut y
produire [Starting from Aristotle’s illusion: the artificial displace-
ments of parts of the body are not followed by feeling in these
parts or by the sensations which can be produced there],
L’Encéphale 1 (1937) 57–84, 140–158.
[142] S.P. Tipper, D. Lloyd, B. Shorland, C. Dancer, L.A. Howard, F.
McGlone, Vision influences tactile perception without proprio-
ceptive orienting, Neuroreport 9 (1998) 1741–1744.
[143] S.P. Tipper, N. Phillips, C. Dancer, D. Lloyd, L.A. Howard, M.
Dvorrak, F. McGlone, Vision influences tactile perception at
body sites that cannot be directly viewed, Exp. Brain Res. 139
(2001) 160–167.
[144] J.T. Townsend, F.G. Ashby, Stochastic Modelling of Elementary
Psychological Processes, Cambridge University Press, New York,
1983.
[145] E. Van den Bos, M. Jeannerod, Sense of body and sense of action
both contribute to self-recognition, Cognition 85 (2002) 177–187.
[146] K. Vogeley, G.R. Fink, Neural correlates of the first-person-
perspective, Trends Cogn. Sci. 7 (2003) 38–42.
[147] M. Walton, C. Spence, Crossmodal interference of touch on
vision: spatial coordinates and visual capture. Poster presented at
the 1st International Multisensory Research Conference: Cross-
modal Attention and Multisensory Integration, Oxford, 1–2
October 1999. Available from <http://www.wfubmc.edu/nba/
IMRF/99abstracts.html>.
[148] M. Walton, C. Spence, Crossmodal congruency and visual
capture in a visual elevation discrimination task, Exp. Brain Res.
154 (2004) 113–120.
[149] M. Walton, V. Walsh, M. Rushworth, C. Spence, Disrupting the
representation of visuotactile space with repetitive transcranial
magnetic stimulation, manuscript in preparation.
[150] J.P. Wann, S.F. Ibrahim, Does limb proprioception drift? Exp.
Brain Res. 91 (1992) 162–166.
[151] R.B. Welch, The effect of experienced limb identity upon
adaptation to simulated displacement of the visual field, Percept.
Psychophys. 12 (1972) 453–456.
[152] R.B. Welch, M.H. Widawski, J. Harrington, D.H. Warren, An
examination of the relationship between visual capture and prism
adaptation, Percept. Psychophys. 25 (1979) 126–132.
[153] D.M. Wolpert, S.J. Goodbody, M. Husain, Maintaining internal
representations: the role of the human superior parietal lobe,
Nat. Neurosci. 1 (1998) 529–533.
[154] D. Wuillemin, B. Richardson, On the failure to recognize the
back of one’s own hand, Perception 11 (1982) 53–55.
[155] S. Yamamoto, S. Kitazawa, Reversal of subjective temporal
order due to arm crossing, Nat. Neurosci. 4 (2001) 759–765.
[156] S. Yamamoto, S. Kitazawa, Sensation at the tips of invisible
tools, Nat. Neurosci. 4 (2001) 979–980.