
ABSTRACT

BAKER, JAMES DANIEL. A Study of the Influences of Musical Chords on Responses to a

Non-Affective Visual Feature of Emotional Faces. (Under the direction of Dr. Douglas J.

Gillan.)

Under a variety of everyday circumstances, a musical stimulus may accompany a task

that requires visual information processing. There are many psychological mechanisms by

which the musical stimulus may actually influence the required visual information

processing. One such mechanism may be affective priming. The results of several studies

suggest that automatic processing of the affective connotations of musical stimuli can

interfere with the controlled processing of the affective connotations of visually displayed

words and human faces. The present experiment measured the extent to which consonant

and dissonant musical chords can prime responses to a non-affective feature of schematic

faces depicting happiness or anger. In each of the key conditions, the chord and the face

were either affect-congruent (e.g., consonant chord and happy face) or affect-incongruent

(e.g., consonant chord and angry face). Rather than responding to the affective valence of

each face, the undergraduate participants (N = 28) affirmed or denied the presence of a mole

on each face. Due to a hypothetical response conflict following a spontaneous comparison of

the affective congruence between chords and faces, task performance should have been

context-dependent. Across mole-present trials, the average speed of correct affirmations

should have been higher when the chord and face were affect-congruent compared to when

they were affect-incongruent. Conversely, across mole-absent trials, the speed of correct

denials should have been higher when the chord and face were affect-incongruent compared

to when they were affect-congruent. Statistical analyses revealed variation in the degree to

which participants exhibited the expected effects. Differences in automatic and/or strategic

deployment of attentional resources may have factored into the results.

© Copyright 2014 by James Daniel Baker

All Rights Reserved

A Study of the Influences of Musical Chords on Responses to a Non-Affective Visual

Feature of Emotional Faces

by

James Daniel Baker

A thesis submitted to the Graduate Faculty of

North Carolina State University

in partial fulfillment of the

requirements for the degree of

Master of Science

Psychology

Raleigh, North Carolina

2014

APPROVED BY:

Douglas J. Gillan

Committee Chair

James W. Kalat

Donald H. Mershon


Dedication

For Mom

I did not quit when the night was darkest.


Biography

James Daniel Baker grew up in Weirton, West Virginia. After graduating from Brooke High

School in 2002, he attended West Virginia University. He graduated cum laude in 2007,

earning Bachelor of Science degrees in Computer Science and Psychology. His interest in

those fields led him to pursue a graduate degree in Human Factors, a Psychology program at

North Carolina State University. He began studying music perception and cognition upon

hearing his friend play a flatted fifth on the guitar one fateful day. Throughout his life, James

has often enjoyed adding humor to mundane activities, such as walking, chewing,

autobiography-ing, and punctuating


Acknowledgments

My thanks go to the professors who helped me envision and complete this project.

Dr. Douglas J. Gillan

Dr. Thomas M. Hess

Dr. James W. Kalat

Dr. Shari A. Lane

Dr. Donald H. Mershon

I am particularly grateful to Shari for her patience and sympathy.


Table of Contents

List of Tables
List of Figures
Introduction
    Experiment Overview
Method
    Participants
    Design
    Materials
    Apparatus
    Procedure
Results
    Examinations of Stimulus Ratings and Musical Sophistication
    Preliminary Test for a Response Mapping Issue
    Exploratory Test with Neutral Stimuli Included
    Affective Matching Hypothesis Test with Neutral Stimuli Excluded
    Additional Tests for Group and Individual Differences
Discussion
Applications
References
Appendices
    Appendix A: Angry Faces
    Appendix B: Neutral Faces
    Appendix C: Happy Faces


List of Tables

Table 1. Mean Accuracies and Speeds within Sound and Face Contexts
Table 2. Hierarchical Regression Results Predicting Affective Matching Effect


List of Figures

Figure 1. Affective Priming: Evidence from an Evaluative Decision Task
Figure 2. Affective Priming: Evidence from a Non-Evaluative Affirmation/Denial Task
Figure 3. Mean Pleasantness Ratings for Sounds and Faces
Figure 4. Evidence of an Extraneous Response Conflict
Figure 5. Mean Speeds within Eighteen Exploratory Conditions
Figure 6. Mean Speeds within Four Experimental Conditions
Figure 7. Affective Priming Effects for Neg-AME Group Members
Figure 8. Affective Priming Effects for Pos-AME Group Members
Figure 9. Affective Priming Effects for Nil-AME Group Members


Introduction

In numerous situations, a person may process a musical stimulus in parallel with a

visual stimulus. The musical stimulus can trigger affective (e.g., emotional) processes in a

listener (Hunter & Schellenberg, 2010; Juslin & Västfjäll, 2008; Scherer, 2004; Zentner,

Grandjean, & Scherer, 2008). Consequently, the musical stimulus may momentarily

influence the attentive processing of a visual object via the phenomenon known as affective

priming.

Researchers have typically studied affective priming via experiments employing an

evaluative decision task (see Fazio, 2001). In each trial, the participant must quickly respond

as to whether a target stimulus has a positive or negative affective valence. The onset of a

task-irrelevant valenced prime stimulus precedes that of the target stimulus by a very short

interval (e.g., 200 ms). A two-way (prime valence × target valence) interaction indicates

affective priming. Performance (speed and/or accuracy) is superior under conditions in

which the prime and target express the same valence than when they express opposite

valences (see Figure 1). Fazio, Sanbonmatsu, Powell, and Kardes (1986) initially reported

affective priming effects. Their prime and target stimuli were printed words that expressed

polarized affective valence (i.e., they were deemed to be either very positive or very

negative). Since then, many other researchers have found affective priming effects via

various renditions of the same basic paradigm (e.g., Bargh, Chaiken, Govender, & Pratto,

1992; De Houwer, Hermans, & Eelen, 1998; Greenwald, Klinger, & Liu, 1989).


Figure 1. Affective Priming: Evidence from an Evaluative Decision Task.

Because the evaluative decision task and the classical Stroop task (Stroop, 1935) are

paradigmatically similar, a popular explanation for Stroop effects (e.g., Logan & Zbrodoff,

1979; MacLeod, 1991) has come to serve as a leading explanation for affective priming

effects (Klauer, Roßnagel, & Musch, 1997; Musch & Klauer, 2001; Wentura, 1999). In the

evaluative decision task, the participant need only attend to the affective valence of the target

stimulus. However, an automatic evaluation of the prime stimulus can momentarily bias the

participant’s response to the target. When the separate valences of the prime and target are

congruent, the superfluous evaluation of the prime can bias the participant toward providing

the correct response. Conversely, when the separate valences are incongruent, the prime can

bias the participant toward providing an incorrect response, in which case he or she will

either respond incorrectly or spend additional time overcoming the bias. Thus, the valence of


the target in the evaluative decision task is analogous to the ink color of the typed color-word

in the Stroop task, while the valence of the prime is analogous to the verbal color word.

The Stroop mechanism hypothesis (Klauer & Musch, 2002, 2003) has gained

additional support from experiments demonstrating a lack of affective priming effects within

tasks requiring participants to categorize targets according to some non-affective trait (De

Houwer, Hermans, Rothermund, & Wentura, 2002; Klauer & Musch, 2002; Klinger, Burton,

& Pitts, 2000). However, within non-evaluative decision tasks explicitly requiring the

participants to provide an affirmation (e.g., “yes”) or a denial (e.g., “no”) response, typical

affective priming effects have emerged when the target demands an affirmation, while

reversed affective priming effects have emerged when the target demands a denial (Klauer &

Musch, 2002; Klauer & Stern, 1992; Wentura, 2000). An affective matching mechanism

hypothesis (Klauer & Musch, 2002, 2003) can account for those peculiar results. Though a

task may require a participant to attend only to a non-affective trait of the target, the

participant may automatically compare the affective valences of the prime and target. When

the prime and target are valence-congruent, a spontaneous feeling of plausibility biases the

participant toward providing an affirmation response. Conversely, when the prime and target

are valence-incongruent, a spontaneous feeling of implausibility biases the participant toward

providing a denial response. In either case, the response bias may conflict with the correct

response regarding the non-affective trait of the target. As a result, a normal affective

priming effect can occur when the necessary response is an affirmation, while a reverse

affective priming effect can occur when the necessary response is a denial (see Figure 2).


Figure 2. Affective Priming: Evidence from a Non-Evaluative Affirmation/Denial Task.

Whether by the Stroop mechanism or by the affective matching mechanism, the

affective content of a task-irrelevant prime stimulus can momentarily influence selective

attention to a target stimulus. Though the evidence has come primarily from experiments


employing only visual stimuli, several studies have shown that the prime need not be of the

same sensory modality as the target. Using printed words as the visual targets, researchers

have demonstrated cross-modal affective priming effects by using odorants as the primes

(Hermans, Baeyens, & Eelen, 1998), by using tastants as the primes (Veldhuizen, Oosterhoff,

& Kroeze, 2010), and by using various environmental sounds as the primes (Scherer &

Larsen, 2011). Numerous other examples of auditory-visual affective priming come from

studies in which musical stimuli served as primes. However, these priming effects have been

limited to pairings of particular types of musical and visual stimuli.

Across several studies, researchers have found typical affective priming effects by

using single chords as auditory primes and single words as visual targets. The researchers

polarized the affective valences of their chords via multiple techniques that are important in

music. For example, their experiments have produced affective priming effects not only

when the primes were either consonant or dissonant chords (Sollberger, Reber, & Eckstein,

2003; Steinbeis & Koelsch, 2011), but also when the primes were either major mode or

minor mode chords (Costa, 2013; Ragozzine, 2011; Steinbeis & Koelsch, 2011).

Specifically, participants’ responses to positive target words were superior when the prime

chords were consonant or major mode, and responses to negative target words were superior

when the prime chords were dissonant or minor mode.

In other studies, researchers have examined the relationships between emotional

facial expressions and emotional speech (de Gelder & Vroomen, 2000; Horstmann, 2010;

Horstmann & Ansorge, 2011; Pell, 2005). Though these researchers were not explicitly


examining cross-modal affective priming by musical stimuli, their paradigms and findings

are relevant to the present study. For example, Horstmann (2010) showed that the pitch of a

single task-irrelevant contextual tone can influence not only the evaluations of facial

expressions, but also the imitations of facial expressions. Participants evaluated and imitated

happy faces more quickly when high-pitch tones accompanied the faces compared to when

low-pitch tones accompanied them. Conversely, participants responded to angry faces more

quickly when low-pitch tones accompanied the faces compared to when high-pitch tones

accompanied them.

The results of Horstmann’s (2010) experiments correspond to the results of Costa’s

(2013) experiments, in which the pitch range (octave) of single chords influenced responses

both to typed words and to pictures. Participants evaluated positive target words and

pictures more quickly when high-octave chords preceded the targets compared to when low-

octave chords preceded them. Conversely, participants evaluated negative target words and

pictures more quickly when low-register chords preceded the targets compared to when high-

register chords preceded them.

In review, a number of studies have provided evidence that, via an underlying

affective priming mechanism, a musical stimulus can exert a temporary influence on the

attentional processing of a visual stimulus. There is consistent evidence that single musical

chords can facilitate or impede responses to typed words (Costa, 2013; Ragozzine, 2011;

Sollberger et al., 2003; Steinbeis & Koelsch, 2011). There is additional evidence that tonal

properties shared by music and emotional speech can facilitate or impede responses to


emotionally expressive faces (de Gelder & Vroomen, 2000; Horstmann, 2010; Horstmann &

Ansorge, 2011; Pell, 2005). Taken together, these two lines of evidence imply that single

musical chords may facilitate or impede responses to emotionally expressive faces.

Experiment Overview

For the present affective priming experiment, the primes were consonant and

dissonant chords similar to those that have demonstrated affective priming capacity in past

studies (Sollberger et al., 2003; Steinbeis & Koelsch, 2011). The targets were schematic

faces expressing happiness and anger. Schematic faces have demonstrated intra-modal

affective priming capacity (Lipp, Price, & Tellegen, 2009), and compared to photographed

faces, they more readily met the demands of the participants’ task.

Rather than evaluating the affective valence of a target face, the participants quickly

decided whether or not each schematic face contained a mole (a small black spot). Because

each decision took the form of an affirmation or a denial (“yes” or “no”), I hypothesized that

the affective matching mechanism (Klauer & Musch, 2002, 2003; Klauer & Stern, 1992;

Wentura, 2000) would bias participants’ responses. I predicted that overall performance

across mole-present trials would be better (i.e., lower mean response latency and higher mean

response accuracy) when the chord and face were affect-congruent compared to when they

were affect-incongruent. Conversely, I predicted that overall performance across mole-

absent trials would be better when the chord and face were affect-incongruent compared to

when they were affect-congruent.


Method

Participants

Twenty-eight undergraduates (14 women and 14 men) participated, partially

satisfying a requirement within their introductory psychology course at North Carolina State

University. Their ages ranged from 16 to 34 years, and the sample’s median age was 19.

All participants demonstrated a visual acuity of 20/30 or better via the University at Buffalo

Interactive Visual Acuity Chart (IVAC), an online Snellen-type test. Via the Hearing

Screening Inventory (Coren & Hakstian, 1992), all but three participants reported a hearing

ability that was “average” or better in both ears. No participants reported a hearing ability

that was “poor” or worse in either ear.

Design

An affective matching paradigm requires a 2 × 2 × 2 (prime valence × target valence

× response type) within-subjects design. This experiment consisted of an affective matching

paradigm embedded within a 3 × 3 × 2 design. Specifically, it included three auditory prime

conditions (dissonant chord, consonant chord, and pure tone), three facial expression

conditions (angry, happy, and neutral), and two mole presence conditions (mole-absent and

mole-present). It also included a six-level manipulation of the mole location (upper-right,

middle-right, lower-right, upper-left, middle-left, and lower-left). Participants completed six

blocks of the above design. The dependent variables were response latency and response

accuracy.


Materials

Primes. I used MuseScore 1.3 software (Schweer et al., 2013) to synthesize the two

chords. Each chord consisted of four superpositioned grand piano notes with fundamental

frequencies between 261.63 Hz (C4) and 523.25 Hz (C5). The consonant chord consisted of

the notes C4–E4–G4–C5, and the dissonant chord consisted of the notes C4–F♯4–B4–C5.

These chords previously demonstrated the capacity for affective priming in a study by

Steinbeis and Koelsch (2011). I used Audacity 2.0 software (The Audacity Team, 2000) to

synthesize the pure tone, which was a 261.63 Hz (C4) sine wave. All three primes were

1,400 ms in duration, with S-curve attacks of approximately 120 ms and S-curve decays of

approximately 1,000 ms. I used Audacity to adjust their amplitudes, such that when I played

them at a particular output setting, they seemed comfortable and equally loud through over-

the-ear stereo headphones. By holding a sound level meter inside the ear padding ring of

each headphone, I ensured that each sound had a peak level within a safe range of 65 to 70

dB SPL.
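For illustration only, the following Python sketch (NumPy and SciPy assumed) approximates these stimuli. MuseScore and Audacity performed the actual synthesis; the 44.1 kHz sample rate and the raised-cosine reading of the “S-curve” ramps are assumptions, and summed sine waves only approximate the pitch content of the piano chords.

    import numpy as np
    from scipy.io import wavfile

    SR = 44100                                 # sample rate (Hz); assumed
    DUR, ATTACK, DECAY = 1.400, 0.120, 1.000   # durations (s) from the text above

    def s_ramp(n, rising=True):
        # Raised-cosine ramp: one plausible reading of an "S-curve" attack/decay.
        ramp = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n)))
        return ramp if rising else ramp[::-1]

    def envelope(sr=SR):
        env = np.ones(int(DUR * sr))
        n_att, n_dec = int(ATTACK * sr), int(DECAY * sr)
        env[:n_att] *= s_ramp(n_att, rising=True)
        env[-n_dec:] *= s_ramp(n_dec, rising=False)
        return env

    def tone(freq, sr=SR):
        t = np.arange(int(DUR * sr)) / sr
        return np.sin(2 * np.pi * freq * t) * envelope(sr)

    CONSONANT = [261.63, 329.63, 392.00, 523.25]   # C4-E4-G4-C5
    DISSONANT = [261.63, 369.99, 493.88, 523.25]   # C4-F#4-B4-C5

    def chord(freqs, sr=SR):
        mix = sum(tone(f, sr) for f in freqs)
        return mix / np.max(np.abs(mix))           # normalize to avoid clipping

    wavfile.write("pure_tone_C4.wav", SR, (tone(261.63) * 32767).astype(np.int16))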

Targets. The visual targets were a set of three schematic faces that Öhman,

Lundqvist, and Esteves (2001) used as part of a study on threat-detection. I modified the set

to incorporate both the two-level mole presence factor and the six-level mole location factor.

Appendix A, Appendix B, and Appendix C show the angry, neutral, and happy faces,

respectively. When viewed from a distance of 24 inches, each face subtended a vertical

visual angle of approximately 7°, and each mole subtended a visual angle of approximately

0.3°.
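The implied physical sizes follow from the visual-angle formula, size = 2d·tan(θ/2); a quick check in Python:

    import math

    def subtended_size(angle_deg, distance_in=24.0):
        # Physical extent that subtends the given visual angle at 24 inches.
        return 2 * distance_in * math.tan(math.radians(angle_deg / 2))

    print(subtended_size(7.0))   # face height: ~2.9 inches
    print(subtended_size(0.3))   # mole diameter: ~0.13 inches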


Questionnaire. The questionnaire consisted of four sections. The first questionnaire

section requested a rating for each prime and target stimulus. The format was a seven-point

pleasantness scale ranging from extremely unpleasant to extremely pleasant. The second

questionnaire section was the self-report component of the Goldsmiths Musical

Sophistication Index, or Gold-MSI (Müllensiefen, Gingras, Musil, & Stewart, 2014;

Müllensiefen, Gingras, Stewart, & Musil, 2014). This 38-item inventory allows for the

calculation of a General Musical Sophistication score based upon five factors: Active

Musical Engagement, Perceptual Abilities, Musical Training, Singing Abilities, and

Sophisticated Emotional Engagement. The third questionnaire section was the Hearing

Screening Inventory, or HSI (Coren & Hakstian, 1992), which allows for estimation of

hearing loss via 12 self-report items. Upon development, the HSI demonstrated high internal

consistency (Cronbach’s α = .89) and high test-retest reliability (Cronbach’s α = .88), and it

showed a high correlation (r = .81) with conventional pure-tone audiometric testing. In the

present study, it served as a cost- and time-efficient alternative. The fourth and final

component of the questionnaire requested typical demographics data, including age, gender,

ethnicity, and linguistic background.

Apparatus

Participants completed the tasks via two modern Dell PCs running the Microsoft

Windows 7 operating system. The PCs were connected to 24-inch LCD monitors, which

displayed images at a resolution of 1920 × 1080 pixels, and which produced no visually detectable

glare. One PC administered the choice reaction time (CRT) test via Affect 4.0 software


(Hermans, Clarysse, Baeyens, & Spruyt, 2005; Spruyt et al., 2010). The other PC

administered the questionnaire via the Qualtrics online browser application. The CRT PC also

presented sounds via over-the-ear headphones, and the questionnaire PC presented sounds

via speakers located on either side of the monitor. Participants completed the CRT test via

standard keyboard, and they completed the questionnaire via both standard keyboard and

mouse. On the CRT PC’s keyboard, YES, OK, and NO labels covered the left arrow, down

arrow, and right arrow keys, respectively. The Affect software disabled all other keys on that

keyboard, with the exception of the escape key. I used IBM SPSS 22.0 software to conduct

all statistical analyses.

Procedure

Under the guidance of one researcher, each participant completed a solo session

within a quiet laboratory setting that had typical office-style fluorescent lighting. Each

session lasted approximately 40 minutes. Upon arrival, the participant read a consent form,

which described the focus of the experiment as “the human ability to make quick decisions

about different types of visual stimuli, specifically, depictions of faces.” Upon providing

consent, the participant then passed the screening for visual acuity. The participant sat in a

straight-back chair and correctly read aloud a row of letters that appeared on a computer

monitor, situated approximately 2.4 meters from the participant’s face. Given that particular

distance, the IVAC site automatically sized and spaced the letters to test for 20/30 acuity.

After passing the visual acuity screening, the participant moved to the CRT workstation. The

researcher instructed the participant to carefully read and follow all on-screen instructions.


The participant donned the headphones, the researcher moved to a nearby, out-of-view

location, and the Affect 4.0 software then guided the participant through the CRT test.

The participant completed one set of 18 practice trials, across which each possible

combination of the auditory prime, facial expression, and mole presence conditions appeared

exactly once and in a random order. Across the nine mole-present practice trials, the mole

randomly appeared in at least three of the six possible locations. The participant then

completed six blocks of 108 trials. Each trial block included 54 mole-present trials, across

which each possible combination of the auditory prime, facial expression, and mole location

conditions appeared exactly once. Across the 54 mole-absent trials, each possible

combination of the auditory prime and facial expression conditions appeared exactly six

times. The trial order was random within each block.
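As a check on the block arithmetic (54 + 54 = 108 trials), the sketch below enumerates one block under the stated constraints. The labels are hypothetical; the experiment itself ran in Affect 4.0.

    import itertools
    import random

    PRIMES = ["dissonant chord", "pure tone", "consonant chord"]
    FACES = ["angry", "neutral", "happy"]
    LOCATIONS = ["upper-right", "middle-right", "lower-right",
                 "upper-left", "middle-left", "lower-left"]

    def build_block():
        # 54 mole-present trials: each prime x face x location combination once.
        present = [(p, f, True, loc)
                   for p, f, loc in itertools.product(PRIMES, FACES, LOCATIONS)]
        # 54 mole-absent trials: each prime x face combination exactly six times.
        absent = [(p, f, False, None)
                  for p, f in itertools.product(PRIMES, FACES)] * 6
        block = present + absent
        random.shuffle(block)        # trial order was random within each block
        return block

    session = [build_block() for _ in range(6)]    # six blocks per participant
    assert all(len(block) == 108 for block in session)
    assert sum(len(block) for block in session) == 648   # 648 x 28 = 18,144 trials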

Each trial began with the display of a black fixation cross, which remained visible at

the center of the monitor for 600 ms. Immediately after the cross disappeared, the prime

sounded via the headphones for 1,400 ms. After 150 ms from the onset of the prime, the face

appeared at the center of the screen, and it remained visible for 1,250 ms or until the

participant provided a response. The software recorded the participant’s first response only.

Regardless of the participant’s response, the prime completed its full presentation, at which

point the trial ended. Thus, within each block, the trial inter-onset interval was always 2,000

ms.
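The constants below simply restate that schedule and verify the fixed interval; the actual scheduling calls of Affect 4.0 are not reproduced here.

    # Event offsets in ms from trial onset, per the Procedure.
    FIXATION_ON = 0       # black fixation cross appears (600 ms)
    PRIME_ON = 600        # cross disappears; prime sounds for 1,400 ms
    FACE_ON = 750         # face appears 150 ms after prime onset
    FACE_OFF = 2000       # face removed after 1,250 ms (or upon response)
    TRIAL_END = 2000      # prime completes; next trial begins

    assert PRIME_ON + 1400 == TRIAL_END   # fixed 2,000 ms inter-onset interval
    assert FACE_ON + 1250 == FACE_OFF     # response window ends with the prime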

Prior to initiating the set of practice trials, the software instructed the participant to

quickly enter each response “within the 1.25 seconds during which the face is displayed.”


Prior to initiating each trial block, the software reminded the participant to: “Look directly at

the black cross while it is present. When the cross disappears, immediately attend to the face

that appears. Press the YES key only if the face has a mole. Otherwise, press the NO key.”

The participant initiated each trial block by pressing the OK key. Thus, the procedure

allowed multiple rest periods.

After completing the CRT test, the participant moved to the questionnaire workstation

to complete the four sections of the questionnaire. The participant first rated the three

auditory stimuli in one of six possible orders. A counter-balancing and random assignment

procedure predetermined the participant’s stimulus order. Before rating each auditory

stimulus, the participant had to press a button to listen to the sound at least once (i.e., he or

she could listen to the sound multiple times, if necessary). After rating each sound, the

participant rated all 21 faces in a random order. The participant then completed the MSI,

HSI, and general demographics sections of the questionnaire, in that order. The final screen

of the questionnaire thanked the participant, and it included a debriefing that revealed the

purpose of the task-irrelevant sounds and facial expressions. The researcher addressed the

participant’s remaining questions and concerns.

Results

Prior to conducting any hypothesis tests, I filtered out all data from any trial in which

the response input was likely anticipatory (less than 200 ms latency) and from any trial in

which the time limit had expired (greater than 1,250 ms latency). This resulted in a loss of

55 (0.3%) of the 18,144 observations. Across the remaining trials, the grand mean of


participants’ mean accuracy scores (proportions correct) was .97 (SD = .03). In light of such

high accuracy scores, I decided to limit reports of significant accuracy statistics to tables and

figures. Thus, I focused on response latency data across only the 17,582 trials in which

responses were correct. Across those trials, the grand mean of participants’ mean latency

scores was 519 ms (SD = 66). After transforming each trial’s latency value to its log, square-

root, and inverse (speed), I selected the speed data for further inferential analyses, because

that distribution exhibited both the smallest skewness (S = 0.21, SE = 0.02) and the smallest

kurtosis (K = 0.77, SE = 0.04). Across the participants’ mean speeds, the grand mean was

2.03 responses per second (SD = 0.26).
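A minimal sketch of this screening-and-transformation step, assuming a long-format trial table with hypothetical columns latency_ms and correct (pandas and SciPy stand in for the SPSS procedures actually used):

    import numpy as np
    import pandas as pd
    from scipy.stats import kurtosis, skew

    def preprocess(df: pd.DataFrame) -> pd.Series:
        # Drop anticipatory (< 200 ms) and timed-out (> 1,250 ms) trials,
        # then keep only trials with correct responses.
        kept = df[(df.latency_ms >= 200) & (df.latency_ms <= 1250) & df.correct]
        lat = kept.latency_ms / 1000.0          # latency in seconds
        candidates = {"log": np.log(lat),
                      "sqrt": np.sqrt(lat),
                      "speed": 1.0 / lat}       # inverse latency (responses/sec)
        for name, x in candidates.items():      # compare distribution shapes
            print(name, round(float(skew(x)), 2), round(float(kurtosis(x)), 2))
        return candidates["speed"]              # speed had the smallest skew/kurtosis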

For each analysis of variance (ANOVA), I first referred to Mauchly’s test of

sphericity. If the test revealed a sphericity violation for a particular main effect or interaction

term, I referred to the conservative Greenhouse-Geisser-corrected univariate F-test, in order

to determine significance. When conducting family-wise comparisons of means, I used

SPSS’s Bonferroni-corrected criterion to determine the significance of each mean difference.
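The same decision rule can be expressed with the pingouin package, shown here as a stand-in for SPSS on a reduced one-factor version of the design (column names are hypothetical):

    import pingouin as pg

    # df: participant-level cell means with hypothetical columns
    # 'pid', 'prime', and 'speed'.
    aov = pg.rm_anova(data=df, dv="speed", within="prime",
                      subject="pid", correction=True)
    print(aov)   # reports a Greenhouse-Geisser-corrected p-value when applicable

    # Bonferroni-adjusted pairwise comparisons, analogous to SPSS's
    # family-wise corrected comparisons of means.
    post = pg.pairwise_tests(data=df, dv="speed", within="prime",
                             subject="pid", padjust="bonf")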

Examinations of Stimulus Ratings and Musical Sophistication

Figure 3 shows the mean pleasantness ratings of the sounds and faces (averaged

across mole conditions). Among ratings of the sounds, pairwise comparisons showed that

the consonant chord was significantly more pleasant than the pure tone and the dissonant

chord. However, the dissonant chord was only marginally less pleasant than the pure tone.

Among ratings of the faces, pairwise comparisons showed that the happy face was

significantly more pleasant than the neutral face and the angry face, and the angry face was


significantly less pleasant than the neutral face.

Figure 3. Mean Pleasantness Ratings for Sounds and Faces (N = 28). A rating of 7 indicates

an “extremely pleasant” stimulus, and a rating of 1 indicates an “extremely unpleasant”

stimulus. Error bars represent standard errors. Each bold horizontal line connecting two bars

represents a significant difference between those means.

Within the General Musical Sophistication Index (Müllensiefen, Gingras, Stewart, &

Musil, 2014), the minimum achievable score is 18 (low sophistication) and the maximum is

126 (high sophistication). Müllensiefen et al. reported a mean score of 82 (SD = 21) within a

sample of over 140,000 participants. In the present study, the mean score was 74 (SD = 17), as


was the median. A MANOVA revealed that the upper 50% of participants (n = 14) did not

differ from the lower 50% with respect to their mean pleasantness ratings of the sounds and

faces (Wilks’s Λ = .809, F(6, 21) = 0.83, p = .562, ηp² = .191).

Preliminary Test for a Response Mapping Issue

During the data-collection phase, a few participants admitted that they sometimes

accidentally responded to the left-vs-right direction of a target mole rather than responding to

its mere presence. Indeed, pairwise comparisons revealed that the mean accuracies and

speeds within mole-left trials were superior to those of their mole-right counterparts (see

Figure 4). The problem was likely due to the fact that the keyboard’s left arrow key served

as the YES response, while its right arrow key served as the NO response. When a mole

appeared to the left, a direction-based response was consistent with the correct presence-

based response, but when a mole appeared to the right, a direction-based response conflicted

with the correct presence-based response. In light of this extraneous response mapping issue,

I include a dichotomized version of the mole location in later analyses.


Figure 4. Evidence of an Extraneous Response Conflict (N = 28). Performance scores are

means from within the mole-present condition only. Error bars represent standard errors. In

the Difference Diagrams, each line connecting two moles represents a significant difference

between the means of those trials.

Exploratory Test with Neutral Stimuli Included

The affective matching hypothesis includes no predictions regarding effects of

time/practice, valence-neutral stimuli, or the task-irrelevant locations of moles. However,

prior to testing the hypothesis, I conducted an exploratory five-way ANOVA that included

the extraneous elements of the experiment design. The within-subjects factors were trial

block (1 – 6), prime sound (dissonant chord, pure tone, or consonant chord), facial expression

(angry, neutral, or happy), mole presence (absent or present), and mole direction (right or


left). The main effect of prime sound was significant (F(2, 54) = 4.87, p = .011, ηp² = .153).

Pairwise comparisons showed that, for reasons unknown, participants responded with higher

speed and with lower accuracy when the prime sound was a pure tone compared to when it

was a consonant chord (see Table 1). Due to the response mapping issue, the extraneous

two-way interaction between mole presence and mole direction was significant (F(1, 27) = 142.72, p < .001, ηp² = .841), qualifying the significant main effect of mole direction (F(1, 27) = 111.62, p < .001, ηp² = .805) and that of mole presence (F(1, 27) = 8.31, p = .008, ηp² = .235). Along with all other main effects and interactions, the critical affective matching interaction (prime sound × facial expression × mole presence) was not significant (F(4, 108) = 0.54, p = .705, ηp² = .020). Thus, for the sample as a whole, neither the ability to affirm nor

the ability to deny mole presence depended upon the task-irrelevant sound and facial

expression combination (see Figure 5).

Table 1. Mean Accuracies and Speeds within Sound and Face Contexts

                 Accuracy (proportion correct)    Speed (1 / latency in sec.)
Stimulus         M     SE    95% CI               M      SE    95% CI
Dissonant chord  .973  .006  [.961, .985]         2.016  .047  [1.919, 2.113]
Pure tone        .968  .007  [.955, .981]         2.032  .049  [1.931, 2.132]
Consonant chord  .975  .005  [.964, .986]         2.012  .048  [1.914, 2.111]
Angry face       .973  .006  [.961, .985]         2.021  .048  [1.922, 2.120]
Neutral face     .971  .006  [.959, .983]         2.026  .048  [1.927, 2.124]
Happy face       .972  .006  [.959, .985]         2.013  .048  [1.915, 2.111]

Note. N = 28. Compared to consonant chords, pure tones led to responses of significantly higher speed (p = .028) and significantly lower accuracy (p = .040).


Figure 5. Mean Speeds within Eighteen Exploratory Conditions (N = 28). Error bars

represent standard errors. These results did not support the affective matching hypothesis.

Affective Matching Hypothesis Test with Neutral Stimuli Excluded

The affective matching hypothesis predicts a two-way interaction between a two-level

affective congruence factor (incongruent or congruent) and a two-level response type factor

(denial or affirmation). I tested the hypothesis on a reduced dataset, which excluded any trial

in which the prime was a pure tone or the target was a neutral face. I categorized each trial

as either affect-incongruent (e.g., a dissonant chord paired with a happy face) or affect-

congruent (e.g., a dissonant chord paired with an angry face), and I then conducted a four-

way ANOVA. The within-subjects factors were trial block (1 – 6), affective congruence

(incongruent or congruent), mole presence (absent or present), and mole direction (right or


left). Due again to the response mapping issue, the extraneous two-way interaction between

mole presence and mole direction was significant (F(1, 27) = 88.16, p < .001, ηp² = .766), qualifying the significant main effect of mole direction (F(1, 27) = 79.52, p < .001, ηp² = .747) and that of mole presence (F(1, 27) = 13.26, p = .001, ηp² = .329). Along with all other main effects and interactions, the critical interaction between affective congruence and mole presence was not significant (F(1, 27) = 0.10, p = .921, ηp² < .001). Observed power was

.051. Thus, for the sample as a whole, neither the ability to affirm nor the ability to deny

depended upon the affective congruence of the task-irrelevant sound and facial expression

(see Figure 6).

Figure 6. Mean Speeds within Four Experimental Conditions (N = 28). Error bars represent

standard errors. Each bold horizontal line connecting two bars represents a significant

difference between the means of those trials. These results did not support the affective

matching hypothesis.

Additional Tests for Group and Individual Differences

Having found no two-way interaction between affective congruence and mole

presence, I examined the priming effects of individual participants. For both the mole-


present trials and the mole-absent trials, I computed the magnitude of the simple affective

priming effect as the mean speed across affect-congruent trials minus the mean across affect-

incongruent trials. I then determined the affective matching effect (AME) by subtracting the

mole-absent affective priming effect (APEA) from the mole-present affective priming effect

(APEP). The affective matching hypothesis predicts positive AME scores. Most of the

participants (n = 18) exhibited mathematically positive AME scores, but some (n = 10)

exhibited mathematically negative scores.
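In code, the per-participant scores might be computed as follows (pandas assumed; the column names are hypothetical):

    import pandas as pd

    # trials: long-format frame with hypothetical columns 'pid',
    # 'congruent' (bool), 'mole_present' (bool), and 'speed'.
    def priming_effect(g: pd.DataFrame) -> float:
        # Mean speed on affect-congruent minus affect-incongruent trials.
        return (g.loc[g.congruent, "speed"].mean()
                - g.loc[~g.congruent, "speed"].mean())

    def ame_scores(trials: pd.DataFrame) -> pd.Series:
        ape_p = trials[trials.mole_present].groupby("pid").apply(priming_effect)
        ape_a = trials[~trials.mole_present].groupby("pid").apply(priming_effect)
        return ape_p - ape_a   # AME = APEP - APEA; predicted to be positive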

I then used a k-means cluster analysis to divide the entire sample into three

subsamples according to AME scores. The clustering algorithm split the sample such that

the AME scores within each cluster were maximally different from those within the other

two clusters. One cluster (see Figure 7) consisted of the six participants with the highest

magnitude negative AME scores (M = -0.15, SD = 0.05). Another cluster (see Figure 8)

consisted of the eleven participants with the highest magnitude positive AME scores (M =

0.08, SD = 0.03). The third cluster (see Figure 9) consisted of the eleven participants with

AME scores closer to zero (M = 0.00, SD = 0.03). I referred to these three groups as the

Neg-AME, Pos-AME, and Nil-AME groups, respectively.
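The grouping step amounts to a one-dimensional k-means; a scikit-learn sketch (a stand-in for the procedure actually used), building on the ame_scores sketch above:

    import numpy as np
    from sklearn.cluster import KMeans

    ame = np.asarray(ame_scores(trials)).reshape(-1, 1)   # KMeans expects 2-D input
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(ame)
    labels = km.labels_                      # cluster membership per participant
    centers = km.cluster_centers_.ravel()    # compare with the reported group means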


Figure 7. Affective Priming Effects for Neg-AME Group Members (n = 6). Each priming

effect is the mean speed across affectively congruent trials minus the mean speed across

affectively incongruent trials. The group exhibited the reverse-of-expected pattern of results.

Figure 8. Affective Priming Effects for Pos-AME Group Members (n = 11). Each priming

effect is the mean speed across affectively congruent trials minus the mean speed across

affectively incongruent trials. The group exhibited the expected pattern of results.


Figure 9. Affective Priming Effects for Nil-AME Group Members (n = 11). Each priming

effect is the mean speed across affectively congruent trials minus the mean speed across

affectively incongruent trials. The group exhibited neither the expected nor the reverse-of-

expected pattern of results.

I initially considered the possibility that differences in musical sophistication allowed

for differences in attentional and/or affective processing of auditory stimuli, leading to the

different patterns of affective priming among the three groups. I tested this via MANOVA,

using all five factors of the Gold-MSI as dependent variables. The AME groups did not

differ within any of the five factors (Wilks’s Λ = .797, F(10, 42) = 0.51, p = .877, ηp² = .107).

I used a separate MANOVA to test whether the groups differed in their pleasantness ratings

of the three prime sounds and the three target faces (averaged across mole conditions). The

groups did not differ with respect to their ratings of the six stimuli (Wilks’s Λ = .449, F(12,

40) = 1.64, p = .119, ηp² = .330).


As a final test, I used a hierarchical regression to examine the extent to which

additional extraneous variables accounted for variation in the AME scores. The first block

consisted of only the session day, coded as the day of the month of January during which

individuals participated. The second block consisted of a few participant variables: General

Musical Sophistication Index (MSI-G), the Hearing Screening Inventory (HSI) score, age,

and gender. Table 2 shows the results of the regression. By itself, session day was a

significant predictor of AME scores, due to negative scores becoming more common as the

session day increased. However, both the significance of session day and the significance of

the model as a whole were lost upon inclusion of the participant variables, none of which

were significant predictors. Nevertheless, session day remained the strongest predictor,

possibly indicating a cohort effect.
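A sketch of the two-block regression with statsmodels, standing in for SPSS; the predictor names are hypothetical, and y holds the 28 AME scores:

    import statsmodels.api as sm

    # participants: one row per participant with hypothetical columns
    # 'day', 'msi_g', 'hsi', 'age', and 'gender' (0 = women, 1 = men).
    m1 = sm.OLS(y, sm.add_constant(participants[["day"]])).fit()
    m2 = sm.OLS(y, sm.add_constant(
        participants[["day", "msi_g", "hsi", "age", "gender"]])).fit()
    print(m1.summary())            # Model 1: R-squared, F, and per-predictor t and p
    print(m2.summary())            # Model 2
    print(m2.compare_f_test(m1))   # F-test of the R-squared change for block 2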

Table 2. Hierarchical Regression Results Predicting Affective Matching Effect

          Model 1                           Model 2
Variable  B      SE    β      t      p     B      SE    β      t      p
Day       -.007  .003  -.427  -2.41  .024  -.006  .003  -.349  -1.83  .082
MSI-G                                      -.001  .001  -.206  -1.06  .301
HSI                                         .005  .005   .212   1.03  .315
Age                                         .005  .005   .195   1.00  .327
Gender                                      .068  .045   .365   1.52  .144
R²        .18                               .29
F         5.79 (p = .024)                   1.76 (p = .162)

Note. N = 28. A higher MSI-G score indicates greater musical sophistication. A higher HSI score indicates poorer hearing. Gender was coded as 0 for women and 1 for men.


Discussion

The affective matching paradigm differs from the typical affective priming paradigm,

in that the latter requires participants to attend to the affective dimension of the target

stimulus while the former does not. According to the affective matching hypothesis (Klauer

& Musch, 2002, 2003), participants within the present study should have spontaneously

measured the task-irrelevant affective congruence between valenced sounds and valenced

facial expressions. Affect-congruent pairs should have biased participants toward providing

affirmation responses, while affect-incongruent pairs should have biased them toward

providing denial responses. Consequently, across mole-present trials, participants should

have correctly responded “YES” more quickly within affect-congruent contexts than within

affect-incongruent contexts, and across mole-absent trials, they should have correctly

responded “NO” more quickly within affect-incongruent contexts than within affect-

congruent contexts. The results of critical statistical analyses did not support these

predictions. Furthermore, variation in the potentially relevant measures of musical

sophistication and perceived stimulus pleasantness did not account for the pattern of variation

in the affective matching effect.

In any given trial of an affective priming paradigm, the magnitude of a response

conflict may be related to the degree to which the participant automatically and/or selectively

attends to the task-irrelevant affective features of the stimuli (Spruyt, De Houwer, Everaert,

& Hermans, 2012; Spruyt, De Houwer, & Hermans, 2009; Spruyt, De Houwer, Hermans, &

Eelen, 2007). In a recent study, Gast, Werner, Heitmann, Spruyt, and Rothermund (2013)


examined the relationship between selective attention and affective priming effects within a

response priming paradigm that resembled the affective matching paradigm of the present

study. In two experiments, the primes and targets were affectively valenced pictures. Either

the letter “X” or the letter “Y” was present on each target picture at one of four locations.

For all participants, the primary task was letter-discrimination. In their first experiment, the

researchers instructed the experimental group of participants to attend to the valences of the

prime pictures. For those participants, an evaluative decision task followed the letter

discrimination task in some of the trials (about 17% of the total). The researchers gave no

such instructions to a control group of participants, who never encountered the evaluative

decision task. Analyses of letter discrimination latencies revealed a significant affective

priming effect within the experimental group but not within the control group. In the second

experiment, the researchers instructed the experimental group of participants to attend to the

valences of both the primes and the targets, and they occasionally tasked those participants

with categorizing the pictures as having either the same or opposite valences. Other aspects

of the design, such as the primary letter-discrimination task, were the same as in the first

experiment. Again, analyses of letter discrimination latencies revealed a significant affective

priming effect within the experimental group, but not within the control group. These results

were in line with those of other studies (e.g., Spruyt, De Houwer, Hermans, & Eelen, 2007)

that have provided evidence that affective priming effects can be greater when participants

devote greater amounts of attention to the affective properties of primes and targets, whether

or not those affective properties are relevant to the task used to measure the priming effects.


In the present study, it is possible that participants varied in the degree to which they

automatically and/or selectively attended to the task-irrelevant affective congruence between

prime sounds and target facial expressions. The variation in attentional deployment could

account for the peculiar patterns of affective priming effects.

In conclusion, the present study did not support the general notion that musical

stimuli can influence the attentive processing of visual stimuli via the specific mechanism of

affective priming. Any follow-up to the present study should address at least three of its

limitations. Firstly, it will be important to use a task that does not lead to extraneous

response conflicts, such as that which caused participants to erroneously respond to the left-

vs-right direction of the target mole. A possible solution would be to use special software to

recognize and record voiced “YES” and “NO” responses. That method could in fact be

optimal for inducing affective priming via the affective matching mechanism (Klauer &

Musch, 2002, 2003), given the mechanism’s dependence upon responses that are either

affirmations or denials. Secondly, in light of the results of the study by Gast, Werner,

Heitmann, Spruyt, and Rothermund (2013), it will be important to have greater control over

participants’ attentional deployment strategies. This may be achieved by simply

incorporating a version of Gast et al.’s dual task paradigm into the affective matching

paradigm. Thirdly, the design should include various types of pictorial targets (e.g.,

weapons, food, spiders, etc.) in addition to schematic faces. Such a manipulation could

reveal whether musical stimuli form better affective matches with some types of stimuli (e.g.,

social) than with other types (e.g., edible).


Applications

The present study served to provide a better understanding of one mechanism

(affective priming) through which a rudimentary musical stimulus can momentarily help or

hinder the concurrent processing of a visual stimulus. Evidence of cross-modal affective

priming can have implications for the design of multi-sensory displays. For example, many

scientist-practitioners are interested in improving the design of musical stimuli known as

earcons (McGookin & Brewster, 2011), which can guide users’ interactions with various

facets of a device’s visual interface. If particular musical features (e.g., consonance and

dissonance) can express/induce affect, then affective relationships may exist between

particular earcons and their referents. Some of these affective relationships might be useful,

while others may be unintentional and undesirable.


References

Bargh, J. A., Chaiken, S., Govender, R., & Pratto, F. (1992). The generality of the automatic

attitude activation effect. Journal of Personality and Social Psychology, 62, 893-912.

doi:10.1037/0022-3514.62.6.893

Coren, S., & Hakstian, A. (1992). The development and cross-validation of a self-report

inventory to assess pure-tone threshold hearing sensitivity. Journal of Speech &

Hearing Research, 35, 921-929.

Costa, M. (2013). Effects of mode, consonance, and register in visual and word-evaluation

affective priming experiments. Psychology of Music, 41, 713-728.

doi:10.1177/0305735612446536

de Gelder, B., & Vroomen, J. (2000). The perception of emotions by ear and by eye.

Cognition & Emotion, 14, 289-311. doi:10.1080/026999300378824

De Houwer, J., Hermans, D., & Eelen, P. (1998). Affective and identity priming with

episodically associated stimuli. Cognition & Emotion, 12, 145-169.

doi:10.1080/026999398379691

De Houwer, J., Hermans, D., Rothermund, K., & Wentura, D. (2002). Affective priming of

semantic categorisation responses. Cognition & Emotion, 16, 643-666.

doi:10.1080/02699930143000419

Fazio, R. H. (2001). On the automatic activation of associated evaluations: An overview.

Cognition & Emotion, 15, 115-141. doi:10.1080/02699930125908


Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, F. R. (1986). On the automatic

activation of attitudes. Journal of Personality and Social Psychology, 50, 229-238.

doi:10.1037/0022-3514.50.2.229

Gast, A., Werner, B., Heitmann, C., Spruyt, A., & Rothermund, K. (2013). Evaluative

stimulus (in)congruency impacts performance in an unrelated task: Evidence for a

resource-based account of evaluative priming. Experimental Psychology. Advance

online publication. doi:10.1027/1618-3169/a000238

Greenwald, A. G., Klinger, M. R., & Liu, T. J. (1989). Unconscious processing of

dichoptically masked words. Memory & Cognition, 17, 35-47.

doi:10.3758/BF03199555

Hermans, D., Baeyens, F., & Eelen, P. (1998). Odours as affective-processing context for

word evaluation: A case of cross-modal affective priming. Cognition & Emotion, 12,

601-613. doi:10.1080/026999398379583

Hermans, D., Clarysse, J., Baeyens, F., & Spruyt, A. (2005). Affect (Version 4.0) [Computer

software]. University of Leuven, Belgium. Available from

http://fac.ppw.kuleuven.be/clep/affect4/

Horstmann, G. (2010). Tone-affect compatibility with affective stimuli and affective

responses. Quarterly Journal of Experimental Psychology, 63, 2239-2250.

doi:10.1080/17470211003687538

Horstmann, G., & Ansorge, U. (2011). Compatibility between tones, head movements, and

facial expressions. Emotion, 11, 975-980. doi:10.1037/a0023468


Hunter, P. G., & Schellenberg, E. G. (2010). Music and emotion. In M. R. Jones, R. R. Fay,

& A. N. Popper (Eds.), Music Perception (pp. 129-164). New York: Springer.

IVAC [Computer software]. University at Buffalo, The State University of New York.

Available from http://www.smbs.buffalo.edu/oph/ped/IVAC/IVAC.html

Juslin, P. N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider

underlying mechanisms. Behavioral and Brain Sciences, 31, 559-575.

doi:10.1017/S0140525X08005293

Klauer, K. C., & Musch, J. (2002). Goal-dependent and goal-independent effects of

irrelevant evaluations. Personality and Social Psychology Bulletin, 28, 802-814.

doi:10.1177/0146167202289009

Klauer, K. C., & Musch, J. (2003). Affective priming: Findings and theories. In J. Musch &

K. C. Klauer (Eds.), The Psychology of Evaluation: Affective Processes in Cognition

and Emotion (pp. 7-49). Mahwah, NJ, US: Lawrence Erlbaum Associates Publishers.

Klauer, K. C., Roßnagel, C., & Musch, J. (1997). List-context effects in evaluative priming.

Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 246-

255. doi:10.1037/0278-7393.23.1.246

Klauer, K. C., & Stern, E. (1992). How attitudes guide memory-based judgments: A two-

process model. Journal of Experimental Social Psychology, 28, 186-206.

doi:10.1016/0022-1031(92)90038-L


Klinger, M. R., Burton, P. C., & Pitts, G. S. (2000). Mechanisms of unconscious priming: I.

Response competition, not spreading activation. Journal of Experimental Psychology:

Learning, Memory, and Cognition, 26, 441-455. doi:10.1037/0278-7393.26.2.441

Lipp, O. V., Price, S. M., & Tellegen, C. L. (2009). No effect of inversion on attentional and

affective processing of facial expressions. Emotion, 9, 248-259.

doi:10.1037/a0014715

Logan, G. D., & Zbrodoff, N. J. (1979). When it helps to be misled: Facilitative effects of

increasing the frequency of conflicting stimuli in a Stroop-like task. Memory &

Cognition, 7, 166-174. doi:10.3758/BF03197535

MacLeod, C. M. (1991). Half a century of research on the Stroop effect: An integrative

review. Psychological Bulletin, 109, 163-203. doi:10.1037/0033-2909.109.2.163

McGookin, D., & Brewster, S. (2011). Earcons. In Hermann, T., Hunt, A., & Neuhoff, J. G.

(Eds.), The Sonification Handbook (pp. 339-361). Berlin, Germany: Logos.

Müllensiefen, D., Gingras, B., Musil, J., & Stewart L. (2014). The musicality of non-

musicians: An index for assessing musical sophistication in the general population.

PLoS ONE, 9, e89642. doi:10.1371/journal.pone.0089642

Müllensiefen, D., Gingras, B., Stewart, L. & Musil, J. (2014). The Goldsmiths Musical

Sophistication Index (Gold-MSI): Technical Report and Documentation v1.0.

London: Goldsmiths, University of London.


Musch, J., & Klauer, K. C. (2001). Locational uncertainty moderates affective congruency

effects in the evaluative decision task. Cognition & Emotion, 15, 167-188.

doi:10.1080/02699930126132

Öhman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat

advantage with schematic stimuli. Journal of Personality and Social Psychology, 80,

381-396. doi:10.1037/0022-3514.80.3.381

Pell, M. D. (2005). Nonverbal emotion priming: Evidence from the “Facial Affect Decision

Task.” Journal of Nonverbal Behavior, 29, 45-73. doi:10.1007/s10919-004-0889-8

Ragozzine, F. (2011). Cross-modal affective priming with musical stimuli: Effect of major

and minor triads on word-valence categorization. Journal of ITC Sangeet Research

Academy, 25, 8-24.

Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying

mechanisms? And how can we measure them? Journal of New Music Research, 33,

239-251. doi:10.1080/0929821042000317822

Scherer, L. D., & Larsen, R. J. (2011). Cross-modal evaluative priming: Emotional sounds

influence the processing of emotion words. Emotion, 11, 203-208.

doi:10.1037/a0022588

Schweer, W., et al. (2013). MuseScore (Version 1.3) [Computer software]. Available from

http://musescore.org


Sollberger, B., Reber, R., & Eckstein, D. (2003). Musical chords as affective priming context

in a word-evaluation task. Music Perception, 20, 263-282.

doi:10.1525/mp.2003.20.3.263

Spruyt, A., Clarysse, J., Vansteenwegen, D., Baeyens, F., & Hermans, D. (2010). Affect 4.0:

A free software package for implementing psychological and psychophysiological

experiments. Experimental Psychology, 57, 36-45. doi:10.1027/1618-3169/a000005

Spruyt, A., De Houwer, J., Everaert, T., & Hermans, D. (2012). Unconscious semantic

activation depends on feature-specific attention allocation. Cognition, 122, 91-95.

doi:10.1016/j.cognition.2011.08.017

Spruyt, A., De Houwer, J., & Hermans, D. (2009). Modulation of automatic semantic

priming by feature-specific attention allocation. Journal of Memory and Language,

61, 37-54. doi:10.1016/j.jml.2009.03.004

Spruyt, A., De Houwer, J., Hermans, D., & Eelen, P. (2007). Affective priming of

nonaffective semantic categorization responses. Experimental Psychology, 54, 44-53.

doi:10.1027/1618-3169.54.1.44

Steinbeis, N., & Koelsch, S. (2011). Affective priming effects of musical sounds on the

processing of word meaning. Journal of Cognitive Neuroscience, 23, 604-621.

doi:10.1162/jocn.2009.21383

Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of

Experimental Psychology, 18, 643-662. doi:10.1037/h0054651


The Audacity Team (2000). Audacity (Version 2.0) [Computer software]. Available from

http://audacity.sourceforge.net

Veldhuizen, M. G., Oosterhoff, A. F., & Kroeze, J. H. A. (2010). Flavors prime processing of

affectively congruent food words and non-food words. Appetite, 54, 71-76.

doi:10.1016/j.appet.2009.09.008

Wentura, D. (1999). Activation and inhibition of affective information: Evidence for negative

priming in the evaluation task. Cognition & Emotion, 13, 65-91.

doi:10.1080/026999399379375

Wentura, D. (2000). Dissociative affective and associative priming effects in the lexical

decision task: Yes versus no responses to word targets reveal evaluative judgment

tendencies. Journal of Experimental Psychology: Learning, Memory, and Cognition,

26, 456-469. doi:10.1037/0278-7393.26.2.456

Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions evoked by the sound of

music: Characterization, classification, and measurement. Emotion, 8, 494-521.

doi:10.1037/1528-3542.8.4.494


Appendices


Appendix A: Angry Faces


Appendix B: Neutral Faces


Appendix C: Happy Faces