INFANTS, INQUIRIES, AND PERCEPTUAL INCONSISTENCIES 1
Infants, Inquiries, and Perceptual Inconsistencies: An Emotional Stroop Task Analyzing
Stimuli Pairings of Congruent and Incongruent Visuals and Audio
Alex Dorman, Kelsey Schulman, Alex Stopka, Brittany Wilder
The College of Wooster
Abstract
A modified emotional Stroop test was presented to participants in the form of
incongruent and congruent auditory and visual stimuli. Participants viewed happy babies paired
with crying audio, sad babies paired with laughing audio, happy babies paired with laughing
audio, and sad babies paired with crying audio. They were asked to report the emotion shown in
the photo and to ignore the audio. We hypothesized that the congruent image/audio pairings
would elicit faster response times and lower error rates. Results indicated no significant
difference in response times between congruent and incongruent pairings in either experiment.
Infants, Inquiries, and Perceptual Inconsistencies: An Emotional Stroop Task Analyzing
Stimuli Pairings of Congruent and Incongruent Visuals and Audio
Attention is the mechanism by which we navigate our surroundings. Humans depend
upon their ability to attend to important stimuli in order to interpret information that will
correctly influence their actions (Fenske & Eastwood, 2003). Attention moderates many aspects
of functioning, such as skill development, perception of the world, immediate responses, and
memory recall (Parasuraman, 1998). But our ability to attend to the world around us is limited by
our cognitive capabilities (Parasuraman, 1998; Fenske & Eastwood, 2003; Buschman & Miller,
2010). Because of these limits on perceiving our world, research in this field is crucial and has
been of interest dating back to William James's Principles of Psychology, first published in
1890 (MacLeod & Dunbar, 1988).
One common theory holds that humans allocate attention via one of two processes:
automatic processing of information or controlled processing of information (Beall & Herbert,
2008; MacLeod & Dunbar, 1988; Shiffrin & Schneider, 1977; Shiffrin & Schneider, 1984; Cohen,
Dunbar, & McClelland, 1990). Automatic processing of information is often characterized as rapid
and unintentional, and uses little relative cognitive function. An example of automatic processing
would be reading a single presented word. This is automatic processing (for someone who is
literate) because the word is processed before it can be ignored (Beall & Herbert, 2008).
Controlled processing of information is somewhat the opposite: it depends on
a “cognitive strategy, and requires cognitive resources” (MacLeod & Dunbar, 1988). While these
two forms of processing seem to be inherently in opposition, a combination of the two processes
is used in all information processing (Shiffrin & Schneider, 1984) and can be manipulated, to an
extent, by training people to automatically process (or ignore) stimuli not normally
automatically processed or ignored (MacLeod & Dunbar, 1988; Beall & Herbert, 2008; Cohen et
al., 1990). This would explain why reading is an automatic process for literate adults; they have
extensive practice processing words.
In an attempt to understand how humans allocate their finite attentional capacity,
psychologists often turn to the Stroop Task. The Stroop Task,
created by John Ridley Stroop in 1935, has been described as the “gold standard” of attentional
measures (MacLeod, 1992). It allows researchers to test interference between two stimuli, which
in turn reveals which process is more automatic between the two. In other words, if one
stimulus interferes with the processing of the other, it is because the interfering stimulus
is being processed more rapidly. In its original form, Stroop called it the “Naming Color Test”.
The focus was to identify the color of the word while ignoring what the word said. For example, the
word “red” could be printed in blue, and it was the participant’s job to identify the color blue
(Stroop, 1935). What Stroop discovered was an apparent asymmetry between the
processing of these paired stimuli. The participants were only marginally hindered in identifying
the words themselves when they were printed in different colored ink, but response times in
identifying the color of the ink the words were printed in were significantly affected. Stroop
concluded that this meant that associations formed between words and their representations were
processed more automatically than color stimuli and their respective names (Stroop, 1935).
Given the Stroop Task’s effectiveness in identifying interference effects, it has been used
hundreds of times in the testing of human attention (MacLeod, 1992). It can be utilized across
many different fields using slight variations of the test design. In the field of clinical psychology,
one study changed the words being reported so that they had significant relevance to different
psychopathologies; clients reporting on the colors of words had a harder time doing so when
the words were representative of an aspect of a psychopathology similar to their own
clinical condition, opening the door to research focusing on attentional bias in clients with
psychopathologies (Williams, Mathews, & MacLeod, 1996). The Stroop Task has also been
reviewed as a valuable tool for identifying implicit attitudes in sexual offenders (Price, Beech,
Mitchell, & Humphreys, 2012). Among other things, the Stroop Task has also been used as a way of
identifying aspects of embodied cognition (Paelecke, Paelecke-Habermann, & Borkenau, 2012).
This is just a small sample of the diverse applications of the Stroop Task.
One common variation of the Stroop task utilizes faces as one of the two stimuli
presented (Beall & Herbert, 2008; Ovaysikia, Tahir, Chan, & DeSouza, 2011; Avram, Baltes,
Miclea, & Miu, 2010). Faces are of particular interest because extensive research has shown that
humans process faces differently than non-facial objects (Rakover, 2013; Reed, Stone, Bozova,
& Tanaka, 2003; Murray, Yong, & Rhodes, 2000). There is much interest in where our attention is
allocated when processing faces paired with other stimuli, and why our attention
allocation behaves the way it does. In the current study, researchers developed a variation of
the Emotional Stroop Task, utilizing pairings of faces and vocalizations.
Research in the area of multisensory perception when it comes to recognizing the
expression of emotion is relatively scarce (Collignon et al., 2008). Previous research suggests
that congruent affect of visual and audio stimuli leads to faster processing of information, while
incongruent affect of visual and audio stimuli leads to slower processing of information
(Collignon et al., 2008; Dolan, Morris, & Gelder, 2000; Massaro & Egan, 1996). Collignon et al.
(2008) reported that between audio and visual stimuli, the visual stimulus (the face) was
processed more automatically than audio stimulus, resulting in quicker identification of facial
expression even when the paired audio stimulus was incongruent in affect. However, Collignon et
al. (2008) stressed that visual dominance in affect perception was not rigid, but instead very
flexible and situation-dependent. This is in partial agreement with Massaro and Egan (1996), who
stressed that visual and audio stimulus cues were equally utilized in identifying the affect of
stimuli.
Yet what is it that dictates our attention allocation to certain stimuli? There is current
research to support the theory that while both valence of the stimuli and arousal of the observer
upon observing said stimuli manipulate processing, it is the strength of arousal that is more
influential. In other words, it is the intensity of emotional response to certain stimuli, as
opposed to the stimuli’s inherent positive or negative affect, that shapes our processing (Dresler,
Meriau, Heekeren, & Meer, 2008).
The current study aims to expand on previous research by making use of stimuli that
are very potent in eliciting emotional response. We will be utilizing black and white images of
thirty different babies, paired with ten different four-second audio samples of babies. Half of the
thirty images of babies will convey clear positive emotions, while the other half will convey
clear negative emotions. Five of the audio samples will contain babies conveying positive
emotions (laughing), while the other five will convey negative emotions (crying). Faces are
established as being processed automatically (Rakover, 2013; Reed et al., 2003; Murray et al.,
2000), and are commonly used in the Emotional Stroop Task (Beall & Herbert, 2008; Ovaysikia
et al., 2011; Avram et al., 2010). However, as far as the researchers are aware, no researcher has
utilized infant vocalizations. Reasons for the decision to use infant vocalizations stem from the
importance of infant vocalizations on human attention, in particular the effect of infantile crying
on human arousal responses (Parsons, Young, Parsons, Stein, & Kringelbach, 2011; Boukydis &
Burgess, 1982; Zeskind & Collins, 1987). Such effects include the activation of the autonomic
nervous system (Parsons et al., 2011) and the urgency to soothe the infant (Boukydis & Burgess,
1982). While there is little to no research on the effect of infantile laughter on human arousal,
research on laughing itself shows physiological responses such as changes in heart rate, skin
temperature, and brain activity, which may be linked with overall improved well-being (Bennet,
Zeller, Rosenberg, & McCann, 2003). So it seems feasible that observers would exhibit as strong
an arousal response to infantile laughter as they would to infantile crying. Because these audio
stimuli (infantile laughing and crying) will inherently be emotionally arousing and of clear
positive or negative valence, the question of what specifically moderates the Stroop effect,
arousal or valence, will not be at issue in our research.
The current study will expose participants to one of four situations in rapid succession: a
visual of a baby with positive valence (smiling) and audio of a baby with positive valence
(laughing); a visual of a baby with positive valence and audio of a baby with negative valence
(crying); a visual of a baby with negative valence (frowning) and audio of a baby with positive
valence; and a visual of a baby with negative valence and audio of a baby with negative valence.
Two of these scenarios are congruent pairings, and the other two are incongruent pairings. The
participant will be asked to identify the valence of the images in our first experiment and the
valence of the audio in the second experiment; their response times in doing so will be further
analyzed.
The current hypothesis is that a Stroop effect will occur, meaning stimuli that are
incongruent with one another when paired together will result in extended latency periods. The
second hypothesis is that there will be a Stroop asymmetry effect between visual and audio
stimuli, which will expose an attentional bias towards processing infant vocalizations more
rapidly. This is hypothesized in light of Massaro and Egan’s (1996) conclusion that humans
utilize auditory and visual information in an optimal manner for perception, in accordance with
the idea presented by Collignon et al. (2008) of perception being flexible and situation
dependent, and keeping in mind the research stating that infant vocalizations can be of intrinsic
importance to humans (Parsons et al., 2011; Boukydis & Burgess, 1982; Zeskind & Collins,
1987). Finally, the hypothesis falls in line with Beall and Herbert’s (2008) findings that “while
all information can be processed automatically, not all information is of equal importance”.
Method
Participants
The participants consisted of 30 College of Wooster undergraduate students, ages 18-22.
There were 15 students (6 female, 9 male) who completed experiment one and 15 students
(5 female, 10 male) who completed experiment two. Students were recruited through an
online subject pool for another study and volunteered to additionally participate in the current
study. Participants had normal hearing.
Materials
Baby Photos and Audio. To measure the emotional Stroop effect, baby photos and baby audio
clips were used. To test the congruency of the emotional content of audio and visual stimuli, ten
happy baby photos, ten sad baby photos, ten happy baby audio clips and ten sad baby audio clips
were presented to participants. The baby photos were randomly paired with the audio clips and
presented to participants in two experiments. The baby photos were in color and 500 x 500
pixels. The audio clips were four seconds long (see Appendix A).
Direct RT. The participants were presented with the stimuli using the Direct RT program. There
were 40 possible combinations of baby images and audio. Each participant randomly viewed 20
pairs of baby images and audio per experiment. There were ten happy photos randomly paired
with sad audio clips and ten sad photos paired with happy audio clips that made up the
incongruent stimuli; ten happy photos paired with happy audio and ten sad photos paired with
sad audio made up the congruent stimuli. The audio played for two seconds before the image
appeared on the screen, and there were two seconds where the participant simultaneously viewed
the image and heard the audio. Direct RT recorded the reaction time of each participant.
Procedure
In the first experiment participants who volunteered for the experiment met in the
designated room and signed the consent form. After signing the form, the participant was
instructed to follow the experimenter into the audio room and asked to take a seat at the chair in
front of the computer. The experimenter explained that the participant would place the
headphones on and view a series of photos while hearing audio clips. Each participant was asked to
ignore the voice, respond to the image as quickly and accurately as possible, and indicate if the
photo was happy or sad; using their dominant index finger, participants pressed the left arrow key
for happy or the right arrow key for sad. Once the participant completed the experiment, a black
screen appeared indicating the experiment was over. In the second experiment the procedure was
the same except the participant was now told to ignore the face and indicate if the voice was
happy or sad using the same arrow keys. Once the participant completed the experiment, they
received a debriefing form explaining the experiment and thanking the participant for
volunteering to do the experiment.
Appendix A: Example Baby Photos
Audio clips available upon request
Results
The present study expanded on previous research by making use of stimuli that are
very potent in eliciting emotional response. The researchers hypothesized that a Stroop effect
would occur, meaning that stimuli incongruent with one another would, when paired together,
result in extended latency periods. The researchers also hypothesized that there would be a
Stroop asymmetry effect between visual and audio stimuli, which would expose an attentional bias
towards processing infant vocalizations more rapidly. The first experiment had the participants
ignore the happy or sad voice and determine if the baby face was happy or sad. A paired samples
t-test was conducted to test for differences between congruent and incongruent conditions in
Experiment 1. The congruent condition had a mean of 1021.19 ms (SD = 510.27) and the incongruent
condition had a mean of 1005.14 ms (SD = 457.69). There was no significant difference between the
congruent and incongruent conditions, t(12) = .61, p > .05. The second experiment had the
participants ignore the baby face and determine if the baby voice was happy or sad. A paired
samples t-test was conducted. The congruent condition had a mean of 1523.53 ms (SD = 501.12) and
the incongruent condition had a mean of 1589.22 ms (SD = 573.71). There was also no significant
difference between the congruent and incongruent conditions in this experiment, t(12) = .13, p > .05.
Discussion
The overall findings of the current study suggest that there is no difference in processing
information when stimuli are congruent or incongruent, specifically for infant faces and laughing
and crying audio. While previous research has been effective in using the Stroop Task in various
studies, the Stroop Task in this study proved to be ineffective. There was no Stroop effect found
for either experiment 1 or experiment 2.
In experiment one, there was no difference between the processing of incongruent stimuli
and congruent stimuli, where the audio preceded the photo. This suggests that there was no
difference in difficulty in identifying the emotion in, for example, a happy photo when there is a
baby cry simultaneously playing and when there is a laugh simultaneously playing. Previous
research demonstrated that there are discrepancies in how two stimuli are processed, where one
may be processed faster (more automatically) than the other, and the Stroop test examines
interference between two stimuli, which in turn reveals which process is more automatic
(MacLeod, 1992; Stroop, 1935). The audio appearing before the photo did not change the
processing of the photo, which is supported by the finding that visual dominance in affect
perception was not rigid, but instead very flexible and situation-dependent (Collignon et al.,
2008).
The emotional Stroop test we implemented showed that neither the infant vocalizations
nor the baby photos were processed more automatically than the other. Parasuraman (1998) states the
importance of attention in navigating one’s surroundings and in mediating immediate responses.
There is also evidence suggesting that humans process faces differently than non-facial
objects (Rakover, 2013; Reed et al., 2003; Murray et al., 2000).
Considering our findings, there may be something different about infant photos and vocalizations
that makes them easier to identify even when audio and visual information are incongruent. The
emotional valence and intensity of both laughter and crying seemed to create no differences in
how participants responded to the photo. Overall, participants’ attention was not thwarted
by the incongruent audio presented with the photo, which is an interesting finding considering
past successes of the Stroop test. This also highlights the way our experience with human faces
and emotions differs from everyday object perception.
The second experiment tested the Stroop effect of audio recordings while having
participants ignore a picture. Our hypothesis was not supported and there was no significant
difference found between the congruent and incongruent groups. These results are intriguing
because literature states that humans allocate attention via automatic processing, or controlled
processing (Beall & Herbert, 2008). If information had been processed automatically, then the
expected result would be a difference between incongruent and congruent conditions. Instead, we
found consistent performance across both congruent and incongruent stimuli.
Literature also discusses how information processing can be manipulated with training on
how to automatically process or ignore stimuli that were not normally processed or ignored
(MacLeod & Dunbar, 1988). Humans are naturally inclined to devote major attention to human
faces as well as voices. Results may have differed had the participants been trained to ignore the
faces. Evolutionarily, humans are attuned to baby faces and vocalizations, because infants are
defenseless and these emotional cues are pertinent in identifying if a baby is in need. Previous
research has also suggested that congruent affect of visual and audio stimuli leads to faster
processing of information, while incongruent affect of visual and audio stimuli leads to slower
processing of information (Collignon et al., 2008; Dolan, Morris, & Gelder, 2000; Massaro &
Egan, 1996). However, neither of our experiments supported this previous finding and further
research would be beneficial for investigating why no effect was found.
There are various factors that may have influenced the results. The limited sample of 30
participants, most of whom were psychology students at the College of Wooster, may have
contributed to the nonsignificant findings. Testing a broader, larger sample may have increased the
statistical power of the results. It may also have been worthwhile to analyze participants’ exposure
to infants, as this may influence the results. Future research should take into account these
variables. This study is beneficial to research in this field because this is one of the few studies
conducted on incongruent and congruent emotional valence. This is a good base for examining
the way we interact with our environment and how we process dual modalities and multisensory
perception.
Some key strategies to move this literature forward include (a) larger sample sizes, (b) a
more controlled environment with more detailed instructions, and (c) more elaborate background
information on the participants. While we used a sample of 30 participants, testing a larger
sample may have allowed our results to support our hypotheses. Prior to the study, we instructed
participants on how to complete the experiment and provided them with instructions on the
computer screen that they used. However, we left out the detail of making sure they pressed the
arrow keys as fast as they could. We also could have been more specific, informing participants
to keep their eyes open while ignoring the visual stimuli and to keep the headphones on while
ignoring the audio stimuli. While these details most likely were not the cause of our
nonsignificant results, they may have influenced the outcome. One last key strategy that could
help to move this
literature forward is that of having more information on our participants. Participants with
previous exposure to infants may have had an advantage when completing our study, while
participants with less exposure to infants may have been at a disadvantage. Avenues this study
could take include examining sex differences in congruent and incongruent identification and
further research dealing with infant vocalizations. Psychological fields that could benefit
greatly from research in this area include developmental, social, behavioral, and
neuropsychology.
References
Avram, J., Baltes, F. R., Miclea, M., & Miu, A. (2010). Frontal EEG activation asymmetry
reflects cognitive biases in anxiety: Evidence from an emotional face Stroop task.
Applied Psychophysiology and Biofeedback, 35, 285-292.
Beall, P., & Herbert, A. (2008). The face wins: Stronger automatic processing of affect in
facial expressions than words in a modified Stroop task. Cognition and Emotion, 22,
1613-1642.
Bennet, M., Zeller, J., Rosenberg, L., & McCann, J. (2003). The effect of mirthful laughter on
stress and natural killer cell activity. Alternative Therapies in Health and Medicine, 9,
38-45.
Boukydis, C. F., & Burgess, R. (1982). Adult physiological response to infant cries: Effects of
temperament of infant, parental status, and gender. Child Development, 53, 1291-1298.
Buschman, T. J., & Miller, E. K. (2010). Shifting the spotlight of attention: Evidence for
discrete computations in cognition. Frontiers in Human Neuroscience, 4, 1-7.
Cohen, J. D., Dunbar, K., & McClelland, J. L. (1990). On the control of automatic processes: A
parallel distributed processing account of the Stroop effect. Psychological Review, 97,
332-361.
Collignon, O., Girard, S., Gosselin, F., Roy, S., Saint-Amour, D., Lassonde, M., & Lepore, F.
(2008). Audio-visual integration of emotion expression. Brain Research, 126-135.
Dolan, R.J., Morris, J.S., & Gelder, B. (2000). Crossmodal binding of fear in voice and face.
PNAS, 98, 10006-10010.
Dresler, T., Meriau, K., Heekeren, H. R., & Meer, E. V. D. (2008). Emotional Stroop task:
Effect of word arousal and subject anxiety on emotional interference. Psychological
Research, 73, 364-371.
Fenske, M.J., & Eastwood, J.D. (2003). Modulation of focused attention by faces expressing
emotion: Evidence from flanker tasks. Emotion, 3, 327-343.
MacLeod, C. M., & Dunbar, K. (1988). Training and Stroop-like interference: Evidence for a
continuum of automaticity. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 14, 126-135.
MacLeod, C. M. (1992). The Stroop task: The “gold standard” of attentional measures.
Journal of Experimental Psychology: General, 121, 12-14.
Massaro, D.W., & Egan, P.B. (1996). Perceiving affect from the voice and the face.
Psychonomic Bulletin and Review, 3, 215-221.
Murray, J. E., Yong, E., & Rhodes, G. (2000). Revisiting the perception of upside-down faces.
Psychological Science, 11, 492-496.
Ovaysikia, S., Tahir, K. A., Chan, J. L., & DeSouza, J. F. X. (2011). Word wins over face:
Emotional Stroop effect activates the frontal cortical network. Frontiers in Human
Neuroscience, 4, 1-8.
Paelecke, M., Paelecke-Habermann, Y., & Borkenau, P. (2012). Temperament and
attentional bias in vocal emotional Stroop tasks. European Journal of Personality, 26,
111-122.
Parasuraman, R. (1998). The attentive brain. Cambridge, MA: MIT Press.
Parsons, C.E., Young, K.S., Parsons, E., Stein, A.,& Kringelbach, M.L. (2011). Listening to
infant distress vocalizations enhances effortful motor performance. Acta Paediatrica,
189-191.
Price, S. A., Beech, A. R., Mitchell, I. J., & Humphreys, G. W. (2012). The promises and perils
of the emotional Stroop task: A general review and considerations for use with
forensic samples. Journal of Sexual Aggression, 18, 253-268.
Rakover, S. S. (2013). Explaining the face-inversion effect: The face-scheme incompatibility
(FSI) model. Psychonomic Bulletin & Review.
Reed, C. L., Stone, V. E., Bozova, S., & Tanaka, J. (2003). The body-inversion effect.
Psychological Science, 14, 302-307.
Shiffrin, R.M., & Schneider, W. (1984). Automatic and controlled processing revisited.
Psychological Review, 91, 269-276.
Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing:
II. Perceptual learning, automatic attending, and a general theory. Psychological Review,
84, 127-190.
Stroop, J. R. (1992). Studies of interference in serial verbal reactions. Journal of
Experimental Psychology: General, 121, 15-23. (Original work published 1935)
Williams, J. M. G., Mathews, A., & MacLeod, C. (1996). The emotional Stroop task and
psychopathology. Psychological Bulletin, 120, 3-24.
Zeskind, P. S., & Collins, V. (1987). Pitch of infant crying and caregiver responses in a
natural setting. Infant Behavior and Development, 10, 501-504.