PREFACE

A piece of music, no matter what style or from what era or culture, will consist of many different

musical elements. These elements work together and interact in various ways to create the

composition as we know it. Among the most important elements that nearly all works share are

melody, rhythm, harmony, sonority, texture and form. Different musical compositions stress

different elements: dance music, for example, focuses primarily on the element of rhythm, while

countless modern classical concert pieces and especially electronic works are mostly “about”

sound quality and tone color. A large number of compositions, however, strive for a balance of all

the elements.

This text will describe the major elements of music and define a number of concepts that relate to

each of them. It will explore many different techniques and methods used by composers from all

eras and styles to create meaningful and unified musical compositions. By reading the text while

listening to the musical examples, then discussing the concepts while analyzing additional

examples in class, the student will gain a greater understanding of the roles each musical element

can serve in a work. This will also allow students to express their ideas and observations about a

piece of music using terminology that is universally accepted. This effort will not be successful,

however, without careful attention and repeated listening to the musical examples that accompany

each chapter. All examples in the book can be heard via the embedded links.

Above all, this text intends to expose students to the various elements of music with the goal of

understanding how they interact in a composition. By engaging these elements in an active

listening experience, it is hoped that a greater appreciation of the composition as a whole will be

gained.

How to use this text:

Highlighted text implies a link to a music example. Students should become familiar with these

examples in order to best integrate and interpret the written material and also for possible

identification on a quiz. Most of the musical examples played in class will also be found on

Blackboard. The text will only be effective when the student listens to the examples online while

reading. The importance of listening to the examples online cannot be overstated. Note that the

majority of the musical examples are of electronic music, but because a lot of electronic music is

not notated, some of the examples use traditional acoustic music, as they best illustrate the

concept being covered.

The symbol § indicates that an assignment is required for the section of text just completed. Short

answer and listening assignments are on Blackboard in a folder called Assignments.

Finally, be aware that the text will only be a framework for the course, and topics of interest to

the class may be covered spontaneously. Most importantly, the book is intended to supplement in-

class discussions and will be most useful as a means to reinforce concepts covered during the

class sessions. It is not intended to stand alone as a self-directed reader.

(Thanks to Paul Beaudoin for his contributions to Unit VI and to Brian Robison for his critiques and

advice.) -DHM


MELODY

In order to provide a definition broad enough to cover the many different ways melodies

can exist in a musical work, we will define melody simply as a succession of musical

tones that can be perceived as a whole. In some 20th- and 21st-century concert music, the

melody might, on first hearing, sound like no more than a random series of notes.

Moreover, in many electronic works, it may be hard to detect any recognizable melody

due to the emphasis on other elements, such as sonority or rhythm. After repeated

hearings, however, the various aspects of nearly any melody can be identified and

analyzed.

There are a number of characteristics of melody that are important to consider when

evaluating the melodic element of a composition. These include the melody’s physical

characteristics, meaning its shape or contour; structure, which identifies how the notes

are strung together into smaller and larger groupings; and tonal makeup, meaning how

the notes are organized into a recognizable collection of notes. These topics, along with

other important issues related to melody, will be covered below, but first, a distinction

must be made between the terms frequency and pitch.

The frequency of a sound is an objective, scientifically measurable characteristic of a

sonic event that refers to the number of times per second a sound wave vibrates in the air

(this topic will be covered further in the section on sonority). The unit of measurement

for frequency is cycles per second (cps), or Hertz (abbreviated Hz), a cycle being one

complete back and forth vibration of a waveform. A sound’s frequency accounts for our

perception of its pitch. The term “pitch” has two different but related meanings. In a

general sense, pitch is the phenomenon of “high and low” that we experience when we

hear a sound. People will differ in their perception of pitch; to one person, a sound might

seem to have a very high pitch, while to another, it might be only moderately so.

Pitch can also be used as a synonym for “note”: the note “A4” could also be called the

pitch “A4.” Musicians often refer to the entire set of As or Cs or Bs as pitch-classes. In

other words, to refer to all the As found on the piano, you could say “there are

8 instances of the pitch-class A found on the piano.” Many musical events have a clearly

defined pitch, while others tend more towards noise or some other broad-spectrum sound.
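To make the pitch-class idea concrete, here is a minimal Python sketch. It assumes the standard MIDI numbering for the 88 piano keys (A0 = 21 through C8 = 108), a convention used here for illustration and not part of the text itself:

```python
# Count the instances of each pitch class on a standard 88-key piano.
# MIDI note numbers are assumed: A0 = 21 up through C8 = 108.
PITCH_CLASS_NAMES = ["C", "C#", "D", "D#", "E", "F",
                     "F#", "G", "G#", "A", "A#", "B"]

counts = {name: 0 for name in PITCH_CLASS_NAMES}
for midi_note in range(21, 109):            # the 88 keys of the piano
    counts[PITCH_CLASS_NAMES[midi_note % 12]] += 1

print(counts["A"])   # 8 -- "there are 8 instances of the pitch-class A"
```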


Listen to Example 1, an excerpt from "MAA" by Pan Sonic and Example 2, "Midnight

Trail" by Tangerine Dream, and consider the relative importance of melody in each

composition. Example 1 has no recognizable succession of notes that could be considered

a traditional melody, while Example 2 uses a repeating four-note melody that begins on a

new starting note with each repetition.

TONAL CONTENT

The notes used in a melody are typically drawn from various types of pitch collections.

Among the most common collections are major and minor scales, though other types,

for example, modes and tone rows, are often found in Western music, and additional

types of collections also appear in many non-Western musical traditions. Electronic music

composers also have the ability to use microtonal scales, which consist of distances

between notes that are even smaller than those found on traditional acoustic instruments

(see below). By listening carefully to the music and examining the notes of a melody

from a printed score (if available), the listener can determine what type of collection the

notes are drawn from and thereby characterize the tonal content of the music.

Like the other collections, major and minor scales are standardized arrangements of notes

that form the basis for the melodies found in a composition. Each of these scales is

organized in a different way, but they are all similar in that they contain only seven of the

twelve total notes that are available in the Western music system. The name for this

larger set of 12 is the chromatic scale, but unlike major and minor scales, the chromatic

scale is not typically used directly as the source for melodies. Rather, it represents the

“superset” of all possible pitch classes, in effect, the “theoretical universe” of all the notes

available to a composer or songwriter. In the example below, the chromatic scale is

written out starting on the note C, but in fact, it could be written starting on any note in

any octave, as any written version would contain the same pitches. Listen to Example 3,

a chromatic scale and note the identical distance between each pair of notes.

Ex. 3 The chromatic scale contains all pitch classes available in Western music

Notice that there are two versions of all of the notes except E and B: the chromatic scale

contains C and C# (called C sharp), G and G#, etc. A major or minor scale will only use

one each of every pitch class – there will never be a repeated note name in any traditional

scale.

The seven individual notes of a major or minor scale are identified by their scale

degree, which is a number that identifies their position within the scale. Each note of the

scale also has a corresponding name that identifies the role or function it serves. The first

note of any scale, for example, is the tonic, which serves as the “home base” or resting

point, and the fifth, which acts as a guiding force back towards the tonic home base, is


always called the dominant, regardless of what the actual note name might be or what

specific scale it appears in. Melodies that employ the concept of home base – of having a

central focus on a specific note – are called tonal melodies, and with few exceptions,

using a major or minor scale will produce this result.

The scale degrees for a C major scale along with their names are shown below notated on

a traditional five-line musical staff. Listen to Example 4 and note that unlike the

chromatic scale, the distances between notes are not all identical.

Ex. 4 A C major scale
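To make the relationship between the chromatic scale and a major scale concrete, here is a minimal Python sketch that selects seven of the twelve chromatic notes. The whole-step/half-step pattern used below (W W H W W W H) is standard music-theory knowledge assumed for this illustration rather than something spelled out in the text:

```python
# Build a major scale by selecting notes from the chromatic scale.
# The interval pattern in semitones (2-2-1-2-2-2-1) is assumed here.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

def major_scale(tonic: str) -> list[str]:
    index = CHROMATIC.index(tonic)
    scale = [tonic]
    for step in MAJOR_STEPS[:-1]:          # the final step returns to the tonic
        index = (index + step) % 12
        scale.append(CHROMATIC[index])
    return scale

print(major_scale("C"))   # ['C', 'D', 'E', 'F', 'G', 'A', 'B'] -- seven of the twelve notes
```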

Scales do not exist in isolation; rather, they are part of a larger hierarchy of musical

materials called a key. Choosing a key is like setting a context for a musical work; it

determines the scale, hence the primary notes that will be used and the relationship of the

notes to one another and to the tonic. The key also determines what key signature will

be placed at the beginning of the notated music as well as how different combinations of

notes from the scale should be combined into chords (chords will be discussed further in

the unit on harmony). The key signature is a notational shorthand used to indicate which

version of any given note is to be played throughout the composition. This saves the

composer from having to use a special mark called an accidental on every instance of

that note each time it occurs. For example, in certain keys, the note F natural is used,

while in others, the note F# (F sharp) is required. Both F and F-sharp are written on the

same line or space on the staff, even though F-sharp sounds a little higher than the

normal F. By putting a sharp sign (#) in the key signature at the beginning of each stave

of the music, the performer understands that every time an F occurs, it is to be interpreted

as F-sharp.

Keys are often chosen because the notes they contain can be easily performed by certain

instruments or singers. The keys of C, A, and G, for example, are good keys for guitarists

because the chords they provide are particularly easy to play. They might also be chosen

because of a certain affect or mood they are perceived to have: the key of D minor is


often considered to be a “sad” key, for example, while D and A major are considered to

be “bright” or “happy” keys. (These characterizations are mostly subjective.) Any of the

12 notes of the chromatic scale can function as the starting point for a key, though not all

have traditional moods associated with them.

Many of the notes in a composition will come from the scale determined by the key,

which helps provide a sense of unity to a piece. It also helps to create a sense of stability

on and around the tonic note. However, nearly all melodies employ chromatic notes

(from the Greek word “chroma,” which means color), which are notes that fall outside

the scale determined by the key being used. Chromatic notes add variety to a melody and

can create momentary points of tension and instability. Because they are not indicated by

the key signature, the music would need to contain accidentals before any chromatic note,

as shown in the Beethoven example below.


Ex. 5 Accidentals are used to indicate chromatic notes (Beethoven: Für Elise)

(QQQ get chromatic electronic melody)

Being able to judge if a melody’s tonal content is drawn from a major or minor scale

takes practice; many listeners can’t easily distinguish between these two, especially on

first hearing. Melodies based on both these types of scales often have a feeling of moving

in a clear direction, of heading to a goal. They typically exhibit a quality of resolving or

having reached a conclusion when they end. Major and minor scales account for the vast

majority of the music that modern listeners are exposed to, though other options will be

considered below.

Atonal Melodies

In much classical concert music of the past century, both acoustic and electronic, and in

various forms of modern jazz, composers often wrote melodies that had no tonic note

or home base and that were not based on traditional pitch collections. The pitch

collections used were often unique and distinct for every new composition. Moreover,

rather than using only seven of the notes of the chromatic scale, composers used all

twelve notes in individualized arrangements that fit their expressive needs. The melodies

thereby created moved freely among the notes of the chromatic scale and avoided the

familiar landmarks and references found in tonal music, i.e., music using traditional

scales. Melodies of this type are called atonal and have no clear key or home base,

though they might focus on a single note or groups of notes at different points in the

composition.

Because they tend to use all twelve notes throughout a composition, atonal compositions

do not use a key signature. Rather, every note that requires an accidental will be written

with one preceding it, as shown below.


Example 6 Stefan Wolpe: Piece in Two Parts for Solo Violin, an atonal melody

Listen to Example 6 and follow the score. You may find it to be unfamiliar and perhaps

even unpleasant. It will, perhaps, sound aimless or lacking direction, and you may be

completely unable to predict when or where it will end. Atonal music demands new

listening strategies from the listener and is often less accessible, especially on first

hearing. Yet with repeated exposure and some idea of the composer’s intentions and

methods, it can be as aesthetically pleasing as any other style.

A more systematic approach to atonal melody uses a system developed by Viennese

composer Arnold Schoenberg. Schoenberg’s system, which is called serial atonality (or

serialism), will be discussed in detail in the unit on harmony. In brief, composers using

this system employ predetermined pitch collections called pitch-class sets or tone rows.

Tone rows typically use all twelve tones of the chromatic scale, arranged in a set

order prior to beginning the composition of the piece. However, rows containing fewer

than 12 notes are also common. As with scales, tone rows are used to create both

melodies and chords. Because many of the early electronic music composers came out of

the 20th-century classical tradition, a large number of electronic works intended for concert-hall

performance use atonal melodies.

PHYSICAL CHARACTERISTICS

The physical characteristics of a melody involve the way in which it moves through its

“surroundings,” that is, how the notes travel through the "musical space" that has been

defined for a particular piece of music. Perhaps it moves ever higher, ascending towards

some climactic point, or maybe it descends rapidly to the lowest note of the instrument,

then sweeps slowly upward until it hits a peak. A melody might also simply remain in

one place, repeating the same note multiple times, or jump randomly from high to low

and back.

Imagine a melody that would sound like the images below. The first melody would be

smooth and move gently between its successive notes while the second is more sharp-


edged or “angular,” reaching its highest point then dropping to its lowest notes fairly

quickly:

Now look at the contours in the figure below and try to imagine what a melody using

each would sound like. These graphs were created by a scholar named Inge Skog in his

effort to standardize the way melodies from all over the world are classified:

Figure 7 Skog classification of melodic pitch contour

In general, good melodies have a clear sense of direction and lead the listener to a climax

point or goal—they seem to be “heading someplace.” Ascending melodies in particular

can create a sense of moving forward, of building momentum towards a goal, and

descending melodies can imply a sense of release or resolution. However, a good melody

will typically have variety and balance in the way it moves. Too much activity in the

same direction, for example a melody that moves consistently upward, or only

downward, or that focuses primarily on just a single, repeated note, could lead to

monotony and predictability and might cause the listener to lose interest in the

composition. Some alternation of ascending and descending motion, with a climactic

point perhaps midway through, on the other hand, could make the melodic line more

interesting.

Look at the lines drawn over the notes in the Mozart example below and listen to

Example 7. These “contour lines” illustrate the movement of the melody and show how it

moves downward at times, occasionally upward, and also sometimes stays in place with a

repeated note. Trace the movement of the notes across the page by drawing over the

contour lines with a pencil while listening to the music. How often does the melody

change direction?


Example 7 Mozart: Symphony in G minor, opening theme, mvt. I ♫

The terms contour and range are used to describe this movement. Contour, or shape,

refers to the way in which a melody moves up and down. The descriptions above provide useful

terms for characterizing the shape of a melody, but you might also find other words to be

useful. The "curve," "angle" or “profile” of a melody could be depicted by terms such as

"ascending" or "descending," "flat" or "static," "angular" or "sharp-edged" and "wave-

like" or "disjunct." (Imagine a melody with a contour like a mountain range...)

Listen to Example 7 again - it is well-balanced in the type of motion it employs, and the

upward skip in measure 3 contrasts nicely with the descending scale-wise motion that

follows it, ending with the focus on the repeated note C5 in measure 5. Now listen to Example

8, “Prep Gwarlek 3b” by Aphex Twin. Notice how the melody consists of a sequence of

only a few notes, uses entirely low notes, and doesn't span a very great distance overall.

With the exception of a few unexpected events that recur starting around :50, the melody

is fairly static in contour. Clearly, traditional melody is less of a priority in this piece than

in the Mozart.

Range refers to the overall distance between the highest and lowest notes of a melody

and like contour, is a characteristic of a melody’s basic design. Different instruments

have wider or narrower potential ranges than others. The piano, for example, has eighty-

eight notes that span just over seven octaves, which allows a composer to write melodies

with a much greater range than the human voice, which has a range of just about two

octaves. A typical electronic keyboard has a 61-note range, though larger professional

digital pianos often employ the entire range of 88 notes used by an acoustic piano.

Therefore, a melody played on one instrument might cover a larger part of that

instrument’s available range than the same melody played on another instrument, and

when assessing or describing the range of a melody, it is important to keep in mind the

specific instrument that is playing.


From www.cartage.org

Melodies that use a large portion of an instrument’s possible notes are called “wide,” and

those that have only a few notes separating their highest from their lowest pitches would

be called “narrow.” (There are many additional possibilities in between those extremes.)

Range can be more accurately characterized by counting the exact number of half steps

between the lowest and highest notes and assigning the proper interval, though this

information is generally only useful to someone studying a melody for analysis purposes.
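For readers who want to quantify range as just described, here is a minimal Python sketch. The melody is a made-up example encoded as MIDI note numbers (an assumption for illustration; the text does not prescribe any encoding):

```python
# Measure the range of a melody as the number of half steps (semitones)
# between its lowest and highest notes. The notes below are hypothetical,
# given as MIDI note numbers (C4 = 60), not taken from any example in the text.
melody = [60, 62, 64, 65, 67, 72, 65, 64, 62, 60]

def melodic_range_in_semitones(notes: list[int]) -> int:
    return max(notes) - min(notes)

print(melodic_range_in_semitones(melody))   # 12 -- exactly one octave, a fairly narrow range
```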

Listen again to Prep Gwarlek 3b and evaluate the overall range covered by the melody.

Now listen to Example 9, David Jaffe’s Silicon Valley Breakdown, and evaluate the space

from high to low that the melody, generated by a program that models acoustic

string instruments on a computer, uses in the first minute or so. Notice how the range

extends even further when the second, “counter” melody comes in halfway through.

Use of Range

Rather than use all of the available “musical space” all of the time, melodies tend to

emphasize and exist in different segments of the total range that is available. These

segments are called registers, and an instrument’s overall range can typically be broken

down into several distinct registers. For example, it is common to split the available range

into at least three registers, described simply as “upper,” “middle” and “lower.” (The

term “register” will be discussed again when the element of Sonority is covered.) In

practice, it’s possible that a melody in one part of a song or composition uses only the

highest notes of the instrument and never descends into the lower register. Other

melodies in the same composition might move continuously throughout the entire

musical canvas, dipping into the extreme lower register before ascending to a climax that

employs the highest notes of the instrument. Obviously, the possibilities are endless. The

Italian term tessitura refers to the predominant register that is used by an individual

instrumental or vocal part for some major portion of a composition.

Listen to Example 10, a popular song from the 1960s, originally sung by a male tenor

voice. The highest note is F4 and the lowest is C4, so the entire range is just a fourth, or

5 semitones, but the tenor can cover a range of 20 or more semitones. How would you

describe the range of this tune? Listen to the example to confirm your assessment.


Example 10 The Beatles: “Come Together” main melody.

Like most musical elements, the characteristics of a melody can change dramatically

from one part of a composition to another, so it’s best not to try and characterize too

much of a composition’s melodic material at once. (It is, of course, possible that no

change will occur over a long span of time). Moreover, no analysis can take into account

every subtle change in the music, so descriptions of melodies tend to be general in nature

(“primarily ascending” or “mostly wavelike,” etc.). Most importantly, be sure that any

analysis you make of a melody is based on the characteristics and capabilities of the

specific performing instruments and that you make your judgments only after listening to

the melody repeatedly.

STRUCTURE

Like sentences in prose, melodies are constructed from smaller segments called phrases.

Phrases act like clauses in English grammar and when combined, form larger units of

structure. If the music is pitched and/or key-based, phrases will often be delineated by the

harmonic underpinnings of the music, and phrase endings often coincide with resting

points or harmonic goals called cadences. (Harmony will be discussed in detail in the

next unit.) Cadences are typically brought about through clearly directed harmonic

motion that can be either temporary, like a comma in prose, or terminal (final), like a

period. They can also be created through directed melodic motion, for example, a melody

that ends on some note that the composer has established as a point of arrival or goal, or

through a rhythmic device such as a slowing down of the tempo (called a ritardando). In

many cases, all of these methods work together to form the sense of repose or ending that

a cadence implies.

Another common technique in some styles of electronic music is for the entire

composition to be made up of phrases of equal length that simply recur throughout.

Example 11, "4001" by Squarepusher, uses this approach, repeating an 8-beat (two

measures of four beats each) phrase throughout, both during the opening

rhythmic section and when the background melody enters.

Occasionally, phrases are made up of smaller building blocks called motives, or motifs.

Motives are the smallest recognizable elements in a melody and might be no more than

three or four notes. One of the most famous of all motives is the opening of Beethoven’s


Fifth Symphony, familiar both from many thousands of concert performances and, more

recently, from aspirin commercials. Listen to Example 12 now.

Example 12 Opening motive from Beethoven’s Fifth Symphony

Motives can become very important elements in a work if the composer repeats them or

uses them as the basis for variation and development. In some cases, a motive will sound

almost insignificant on first hearing but will take on enhanced meaning as it recurs.

Motives will typically have a distinctive melodic and rhythmic character, and in some

cases, composers will isolate one or more of those characteristics for manipulation over

the course of an entire section or movement of a piece. Beethoven, for example, uses only

the figure of three repeated notes (both with and without the ending note) at many points in

his symphony.

In the next example, the melody is made up of two short phrases that can’t stand alone –

the first phrase seems to ask a question while the second “answers” it. This type of

phrasing is called antecedent-consequent phrasing, which gets its name from a similar

technique of English grammar. Listen to Example 12a and note the slight pause between

the two phrases.

By listening to and examining a melody's component parts and noting the way it is put

together, the melody’s phrase structure can be characterized. For example, a melody

that is made up of two or three phrases of fairly equal length, such as Example 12a, is

said to have a symmetric (meaning, “even” or “balanced”) phrase structure. Melodies

made from phrases of considerably different lengths are said to be asymmetric (or

“uneven”). Classically derived styles of electronic music would favor the latter approach,

with the possibility that phrases cannot even be distinguished one from the next, while

dance or other pop-oriented styles would tend to be far more symmetric in their structure.

Typically, a change in some characteristic of the melody will help the listener identify a

change in the phrasing. A change in the melody’s direction, such as a large leap followed

by a sequence of steps, could signal a new phrase. Phrases might be delineated when a

melody restarts after reaching a goal or coming to a temporary pause.

A change in the dynamics (loudness level) or articulation marking (playing style) is

another way phrases could be differentiated. For example, one phrase might be soft and

legato (smoothly connected), then the next phrase might be loud and staccato (short,

detached). A change in instrumentation, meaning the instrument (or instruments) that is

playing the melody, would also be a fairly clear indication that a new phrase has begun,

as would a change in register. Like other elements in music, phrase structure is best

looked at for only short, distinct portions of a composition, rather than for the work as a

whole.


Listen to and characterize the phrasing in Example 13, the song “Twin Soul Tribe” by

Tangerine Dream. Is it mostly symmetric or asymmetric? Can you tell where one phrase

ends and the next begins? What qualities allow you to do so? Now compare that with the

phrasing in Example 14, Stockhausen's Study II. How clear is the phrasing in this

example? Are the phrases of equal length? Can you predict when one phrase ends and the

next begins? How is the music put together?

Two or more phrases, such as those in an antecedent-consequent relationship, often

combine to form a larger structural unit called a period. There are many types of musical

periods, some that stand alone and some that imply further movement or continuation,

but a complete discussion is beyond the scope of this book. A later unit on form will

explore musical structure in more detail.

Tuning Systems

Most Western music incorporates a system of tuning in which there are 12 equal

divisions of the octave, i.e., each of the 12 notes is an equal distance from the next. This

system of tuning is called 12-division equal temperament. Other types of equal-

tempered tunings divide the octave into different numbers of notes, for example, 6 or

even 24 equal divisions. In the Stockhausen Study II example above, the composer

creates an unorthodox scale that results in a distance roughly 10% larger than the

traditional semitone between each scale step. In another of his works, Gesang der

Jünglinge (Song of the Youths; 1955–56), the scales ranged from as many as 60

divisions of the octave down to as few as seven per octave (as opposed to the 12

divisions found in traditional Western music). The electronic devices Stockhausen used

to generate sound in these and his other works gave him tremendous control over the

tuning characteristics of his music.

Equal temperament is a “man-made” construct that was created, in part, so that

instruments could play in more than one key. It is not based strictly on the laws of

physics, which calculate musical intervals according to strict mathematical ratios between

the fundamental frequencies of notes. (Recall that frequency refers to the number of

times per second that a sound wave repeats its pattern of motion as it travels through the

air and is the scientific basis behind our perception of pitch. The distance of an octave in

music is equivalent to a frequency ratio of 2:1, for example, A4 is 440 Hz and A5 is 880 Hz, and

other common intervals are also simple whole-number ratios. This topic will be covered

in greater detail in the section on Sonority). Equal temperament adjusts the frequency of

various notes so that they will be in tune, regardless of what note a melody starts on or

what key the music is in.
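The arithmetic behind equal temperament can be sketched in a few lines of Python. The 2:1 octave ratio and A4 = 440 Hz come from the text; dividing each octave into twelve equal frequency ratios of the twelfth root of 2 is the standard equal-temperament calculation and is assumed here:

```python
# Equal temperament: every semitone has the same frequency ratio, the
# twelfth root of 2, so twelve semitones multiply out to a 2:1 octave.
A4_FREQ = 440.0   # A4 = 440 Hz, as given in the text

def equal_tempered_freq(semitones_from_a4: int) -> float:
    """Frequency of the note that lies the given number of semitones above
    (positive) or below (negative) A4, in 12-division equal temperament."""
    return A4_FREQ * 2 ** (semitones_from_a4 / 12)

print(round(equal_tempered_freq(12), 1))   # 880.0 -> A5, one octave (2:1) above A4
print(round(equal_tempered_freq(-12), 1))  # 220.0 -> A3, one octave below A4
print(round(equal_tempered_freq(3), 1))    # 523.3 -> C5 in equal temperament
```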

There are dozens of other tuning systems besides equal temperament and each uses

different ratios between notes. And as noted above, many tuning systems, including those

that use equal division, split the octave into more or fewer than 12 parts. One system,

called Just Intonation, uses the strict ratios that occur naturally between intervals with

no adjustments or modifications. (Pythagoras supposedly uncovered these relationships

through experiments with vibrating strings.) Some people claim that just intonation is

more pleasing to the ear, but one problem it presents is that an instrument tuned using just

intonation can only play in one key; it must be retuned to play in another. Listen to


Example 15 to hear a guitar retuned to conform to a just intonation tuning. It may sound

out of tune until your ears adjust to the sound.

Like Stockhausen, other modern composers, such as American composer Harry Partch

(1901 – 1974), created their own unique tuning systems and even built custom

instruments to use them. Partch constructed an instrument that divides the octave into 43

unequal parts, which you can hear in Example 16. Here again, the music may sound out

of tune, as the tuning systems are not what we are accustomed to hearing.

The term microtone is used to describe any musical distance that is smaller than a

semitone, for example, a quartertone, which is one half of a semitone. Microtonal music is

a general term that applies to music that uses such intervals. The term “microtonal” can

be used to characterize music that incorporates alternative tuning systems or it can simply

refer to a melody where the octave is split into more than 12 parts with no systematic

organization. Microtonal music is especially easy to produce on a computer, where

composers can create sounds that are extremely small distances apart. To calculate such

small distances, composers use a unit of measurement called a cent. A cent is the name

for an interval that is 1/100th of a semitone, and most modern synthesizers and synthesis

software allow sounds to be generated using such increments.
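Because a cent is 1/100 of a semitone, and because composers such as Stockhausen and Blackwood divide the octave into other numbers of equal parts, both ideas reduce to the same calculation. The sketch below is a generic illustration, not taken from any particular piece; it builds an equal division of the octave with any number of steps and reports interval sizes in cents:

```python
import math

# One octave = 1200 cents (12 semitones x 100 cents). An equal division of
# the octave into n parts makes each step 1200 / n cents wide.
def equal_division_step_cents(divisions: int) -> float:
    return 1200 / divisions

def scale_frequencies(start_hz: float, divisions: int) -> list[float]:
    """One octave of an equal-tempered scale with the given number of
    divisions, starting from start_hz (values here are illustrative only)."""
    return [start_hz * 2 ** (step / divisions) for step in range(divisions + 1)]

def cents_between(f1: float, f2: float) -> float:
    """Interval between two frequencies, measured in cents."""
    return 1200 * math.log2(f2 / f1)

print(equal_division_step_cents(12))   # 100.0 cents -- the familiar semitone
print(equal_division_step_cents(24))   # 50.0 cents  -- quarter tones
print(equal_division_step_cents(16))   # 75.0 cents  -- e.g. a 16-note equal division
print([round(f, 1) for f in scale_frequencies(440.0, 7)])  # a 7-division octave from A4
print(round(cents_between(440.0, 880.0)))  # 1200 -- an octave is 1200 cents
```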

Microtones are common features of many non-Western music systems, and such systems

have served an influential role on various styles of electronic music. The Middle Eastern

maqam, for example, is a system that governs how melodies are constructed and used

(raga is the name for a similar system that governs melody in Indian music). Example

16a uses a maqam called bayati, which incorporates whole tones, semitones and three-

quarter tones. The electronic instrument performing the music is a Moog “Little Phatty,”

a keyboard synthesizer that can be easily retuned to produce a wide range of tuning

approaches. Such devices are highly programmable, which means that any

frequency desired could result when a key is pressed. This topic will be discussed further

when electronic hardware is covered later in the course.

The example below shows one means of notating microtonal intervals in an acoustic

orchestral work, for which standard notation has no common symbols. The symbols

shown here were used by Polish composer Krzysztof Penderecki (1933 - ) in his piece,

Anaklasis (1959/60). From left, they indicate 1/3-tone sharp, 2/3-tone sharp, etc. In

addition to requiring the performers to play unusual tunings, the piece also requires them

to drop pencils on the strings of the piano and sweep the strings with jazz-

drumming brushes. Listen to Example 16b and notice the unusual sounds made by the

orchestra. Though all of the sounds you hear are produced by acoustic instruments, it’s

easy to imagine that a composer interested in this type of sound quality would be

attracted to electronic music, where similar sounds can be easily created.

Symbols denoting microtones used by Penderecki.


Easley Blackwood is another composer who has worked extensively with microtones in

an electronic context. In Example 17 you can hear his piece Twelve Microtonal Etudes.

Each of the 12 short etudes (“studies”) uses a different equal division of the octave. The

movements are called simply “16 Notes,” “23 Notes,” “20 Notes,” etc., and the work is

performed using a keyboard synthesizer that was retuned to produce microtones specified

by the composer.

Compositional Techniques

Composing a work for full symphonic orchestra or purely electronic playback may seem

like magic to some listeners, but in fact, composers for centuries have relied on a number

of common and familiar techniques to assist them in generating the musical material

needed to suit their expressive purposes. Developing their craft through many years of

study and careful examination of works by influential composers and songwriters,

musicians acquire a vast repertory of techniques and processes that they use in their

music.

In much recent electronic music, modern composers often spend considerable amounts of

time planning a composition by developing custom processes or concepts that will

determine the melody or other elements of the work. Oftentimes, implementing these

processes is more important than the actual notes they produce.

Composer Iannis Xenakis, for example, used mathematical models to determine the

distribution of notes in many of his works and allowed the models to determine nearly

every aspect of the piece; the composer had no regard for the actual notes that were heard

at any given moment as long as they fit the model he chose.

More traditionally, melody has played a central role in most styles of music and many

techniques have evolved that are intended to help the composer come up with the

extensive amount of melodic material that is often needed for a lengthy composition or

extended improvisation. These techniques involve reusing the same melody or, perhaps,

only a portion of it, in different shapes and guises throughout a composition. The term

melodic development is used to describe a large number of different techniques for

developing or transforming a melody through varied reuse or alteration. These techniques

were developed by classical composers over the years but have now found their way into

jazz, electronic music and other styles. Some of those techniques are discussed below.

Sequencing is a process where a melody is repeated several times in succession with

each repetition beginning on a different note. Each successive starting note of the

repeating pattern might be a step above or below the previous one, or each new starting

note could be a large interval away from the original. Regardless of which approach is

used, the same distance is usually used for each successive repeat, so if the first repetition

is two semitones below the original, the next will be the same distance from that, and so

on. Sequencing is a very effective technique over a short period of time but can become

monotonous if carried on for too long. Normally after only three or four repetitions, the

technique becomes obvious and predictable to the listener. Listen to the melody below

and notice the fourfold repetition of the sequence, each starting a step below the previous.

(QQQ do new example)


Example 18 A melodic sequence

(QQQ replace this) The song “Autumn Leaves” below is built on a two-measure

sequence, initially starting on the note G4, that is repeated four times. The indications

“1.” and “2.” mean that the melody is to be played through once, then repeated using the

section marked “2.,” which means second ending, on the second occurrence.

Example 19 “Autumn Leaves” A melodic sequence.

(QQQchange this) As you can see in both examples, successive sequences tend to move

the same intervallic distance from one another. In both of the examples above, each

repetition is one scale step below the previous one. Moving down a step is by no means

the only option; each sequence could leap upward or downward by some amount, but as

noted, it is common for the pattern of intervallic distance to be repeated with each

successive occurrence.
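A melodic sequence can be sketched as a simple transposition loop. The code below is a generic illustration using made-up MIDI note numbers (not one of the examples above); it repeats a short motive several times, starting each repetition two semitones lower than the previous one:

```python
# Sequencing: repeat a motive, shifting each repetition by the same interval.
motive = [67, 69, 71, 72]        # a hypothetical four-note motive (G4 A4 B4 C5)
STEP_DOWN = -2                   # each repetition starts two semitones lower
REPEATS = 4

def build_sequence(notes: list[int], shift: int, repeats: int) -> list[int]:
    result = []
    for i in range(repeats):
        result.extend(note + shift * i for note in notes)
    return result

print(build_sequence(motive, STEP_DOWN, REPEATS))
# [67, 69, 71, 72, 65, 67, 69, 70, 63, 65, 67, 68, 61, 63, 65, 66]
```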

Note that the term “sequencing” has another meaning when used to describe a type of

music software that places melodic (or rhythmic) patterns in sequence, one after the

other, using a timeline. This type of software will be discussed later in the reading.

Motivic development is another rather broad term that refers to the use of a small

motive, often simply 2 or 3 notes, as the "cell" or "germinal idea" of a larger section of

music. A motive that is well suited to development should have a clear and distinct

character, including identifiable melodic and rhythmic traits, which provides the

composer with opportunities for variation and transformation while still allowing the

motive to remain recognizable.

As mentioned above, one of the most famous examples of motivic development is in the

first movement of Beethoven's Fifth Symphony, shown above. This short, 4-note motive

is transformed into a large portion of the movement’s melodic material. By repeating,


shortening, lengthening, inverting (playing “upside down”), sequencing and otherwise

developing this motive, much of the music for the first movement is created.

A number of the most common techniques of motivic development have specific names.

For example, to take a melody and shorten all the note durations is called diminution

(compare Example 20a with Example 20). To lengthen all the durations is called

augmentation (20b).

Example 20 Original form of melody

Example 20a Diminution

Example 20b Augmentation (first two measures only)

To play a melody backwards is called retrograde, and playing each interval in the opposite

direction is called inversion. To isolate just a part of a melody—perhaps the first three

notes of a longer melody—is called fragmentation. Listen to the different processes used

in Example 21, which will first present a four-measure melody, then use retrograde and


two types of inversion.

Example 21 Some techniques for developing a melody. Note the difference between

Tonal inversion and Real inversion.
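These transformations are mechanical enough to express directly in code. The sketch below is a rough illustration on a motive loosely modeled on the Beethoven figure discussed above (pitches and durations are approximate); it applies retrograde, real (chromatic) inversion, augmentation, diminution and fragmentation to a list of (pitch, duration) pairs. Tonal inversion, which mirrors scale degrees within a key rather than exact semitones, is not shown:

```python
# Each note is a (midi_pitch, duration_in_beats) pair; values are illustrative.
Motive = list[tuple[int, float]]
motive: Motive = [(67, 0.5), (67, 0.5), (67, 0.5), (63, 2.0)]

def retrograde(m: Motive) -> Motive:
    return list(reversed(m))                      # play the melody backwards

def real_inversion(m: Motive) -> Motive:
    first = m[0][0]                               # mirror every interval around the first note
    return [(first - (pitch - first), dur) for pitch, dur in m]

def augmentation(m: Motive, factor: float = 2.0) -> Motive:
    return [(pitch, dur * factor) for pitch, dur in m]   # lengthen all durations

def diminution(m: Motive, factor: float = 2.0) -> Motive:
    return [(pitch, dur / factor) for pitch, dur in m]   # shorten all durations

def fragmentation(m: Motive, length: int = 3) -> Motive:
    return m[:length]                             # isolate just part of the motive

print(retrograde(motive))      # [(63, 2.0), (67, 0.5), (67, 0.5), (67, 0.5)]
print(real_inversion(motive))  # [(67, 0.5), (67, 0.5), (67, 0.5), (71, 2.0)]
```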

Of course, any number of melodic techniques can be combined and used at the same

time. Inventive composers will find endless ways to create new material from a limited

number of basic elements. This helps give a piece of music an "organic" or integrated

quality and helps the listener better comprehend the logic of the music. The reuse of

melodic material is so significant for modern composers that they often use

music software to generate a large number of variations and permutations on a melody

that they give to the computer as input. This type of software is called algorithmic

composition software and it could be used, for example, to quickly create a dozen

rhythmic variations on a melody or produce endless variants of an original microtonal

scale created by the composer.

Bear in mind that when melodies receive the type of treatments described above, they can

become thematic, that is, they take on special meaning and significance within a piece.

Like the theme or subject of a novel or play, musical themes are often melodies that

reappear throughout a work at key moments and that are heard by the listener as the

major focus of the composition. This happens when a composer gives emphasis to a

melody by reusing it in whole or part over long sections of a composition.

In many electronic compositions, however, especially those that focus on sound quality

and color above other elements, the music may be “non-pitched,” meaning it does not

contain any specific pitches at all. This approach to composition will be discussed in the

unit on Sonority.


The Maschine Mikro MK2 Groove Production Studio from Native Instruments is a

hardware device that can produce a wide range of automated variations on a melodic or

rhythmic pattern.

Complete Melody Assignment and Listening Assignment 1 now

RHYTHM

Rhythm refers to the flow or movement of music through time and the way in which that

movement is organized. It is one of the most significant and distinct elements of music

and has served as the basis for much experimentation in the music of this and the past

century. Unlike melody, rhythm can stand alone: a single drummer playing a solo and an

Indian percussionist performing alone on the tabla both represent rhythm

existing independently of melody.

Rhythm occurs on many different levels, from surface activity to a more fundamental,

organizing element deep within the background of a composition. The most basic unit of

rhythm is the beat or pulse. In most styles of music, the beat serves as the underlying,

driving force, and almost all other rhythmic activity in a composition occurs in some

multiple or fractional relationship to this basic beat (i.e., twice as fast or half as fast). The

speed at which the beat moves is called the tempo, which might remain constant for an

entire piece, but might speed up (accelerando) or slow down (ritardando) at important

points. Listen to Example 22, Autechre’s “Fold4, Wrap5,” from the album LP5 and

notice that at the end of every phrase, the music gradually slows down. Though this type

of recurring ritardando is very unusual, the technique helps establish clear breaks

between one phrase and the next.

A solo performer or conductor might wish to interpret a passage of music by playing

rubato, meaning “to rob” the tempo. When playing a passage rubato, the tempo will

freely slow down or speed up based on the performer’s desire to emphasize one phrase or

another. The composer might indicate this instruction directly in the score, or a performer

might simply use her/his intuition to play the music at appropriate points in this manner.

In an electronic piece, the composer can use a variety of means to achieve the same effect

for the music, for example, by altering the speed of a recording, either faster or slower, as it

plays back. Dedicated software called time-stretching software also gives the electronic

composer vast resources for altering the speed of the music, from changing the speed of a

single repeating loop to expanding the duration of a short sample to many times its

original length.

Beginning around the first decades of the nineteenth century, composers used mechanical

timing devices called metronomes to specify the tempo of a piece. A metronome

marking (abbreviated on the written musical score as “MM”) tells the performer

precisely how fast a quarter note should last by indicating the number of quarter notes per

minute, as in , which means “120 quarter notes per minute.” If there are 120

quarter notes per minute, then each quarter will last one-half second. Because the system

of note values is relative, the performer can then determine the duration of all the other

note values relative to the duration of the quarter note. Musicians are not expected to

perform using a stopwatch or clock, however, as a conductor will typically set the tempo

before and while the music is playing. In a performance of music that is not notated, one

member of the ensemble (the drummer or bandleader, for example), will simply give a

count-off or lead-in to establish the proper speed.


In music that employs a computer, tempo can be indicated with extreme precision, often

down to a tenth or smaller fraction of a specific metronome value. For example, using a

program called a MIDI sequencer, a songwriter can specify that a musical event such as

a note occur 1/1000th of a second before or after another event. Such programs provide

extremely high timing accuracy and are especially useful for synchronizing musical

events with visuals, for example in a film, animation or even a video game.
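The timing arithmetic a sequencer performs is straightforward. The sketch below converts a metronome marking into note durations in seconds; the 120-per-minute example and the half-second quarter note come from the text, while the function names are simply illustrative:

```python
# Convert a metronome marking (quarter notes per minute) into durations.
def quarter_note_seconds(bpm: float) -> float:
    return 60.0 / bpm                      # 120 per minute -> 0.5 s per quarter note

def note_seconds(bpm: float, fraction_of_quarter: float) -> float:
    """Duration of a note value expressed relative to the quarter note,
    e.g. 0.5 for an eighth note, 2.0 for a half note."""
    return quarter_note_seconds(bpm) * fraction_of_quarter

print(quarter_note_seconds(120))          # 0.5  -- each quarter lasts half a second
print(note_seconds(120, 0.5))             # 0.25 -- an eighth note at the same tempo
print(note_seconds(120, 4.0))             # 2.0  -- a whole note at the same tempo
```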

Before the metronome was invented, less specific terms, such as adagio (slow) or allegro

(fast), were employed to designate tempo. Tempo indications such as these do not

correlate with any one specific metronome marking but simply provide a general sense of

speed. As a result, a performer will use his or her musical judgment and familiarity with

the style of the music being performed to determine the exact tempo to be used when one

of these terms appears on the musical score. By studying the performance practices of

various historical periods and musical traditions, performers can get a general idea of

how the music might have been performed at the time it was written.

Shown below are a number of common tempo markings. Assuming Allegro is about

quarter = 120, what would be the approximate MM marking for each of these terms?

(Hint: markings below the word allegro would be faster and those above allegro

would be slower.)


Tempo markings are often elaborated by another set of qualifying terms such as molto,

meaning "very" (molto allegro, very fast); piu, meaning “more” (piu mosso, or literally,

more motion); and poco, meaning "a little" (poco vivace, a little faster). This provides

the performer with even more specific information about the speed of a composition or

musical passage.

Explicit vs. Implicit

When listening to a piece of music, the listener might find that the basic pulse is very

prominent and clear, perhaps because one or more instruments is playing with an accent

on every beat or because some sound is marking the beat clearly. This type of pulse is

said to be explicit, that is, it is clearly felt at the forefront of the music. In other cases, it

might be harder to identify the main pulse because no instrument is emphasizing every

pulse; the pulse is “implied” but not entirely clear. In this approach, the pulse is said to be

implicit, meaning somewhere "just beneath the surface." Dance music is expected to

have a clear, explicit pulse, which might be represented by alternate attacks on the bass

and snare drums, or by a "walking bass" pattern played by a bass player, where there's

one note in the bass part on every beat.

On the other hand, some types of music often have a beat that is present but more

difficult to find. This is common especially in styles that are not intended for dancing.

Music without an explicit pulse can present a challenge to musicians performing in an

ensemble that doesn’t have a conductor. In acoustic compositions, the performers rely on

their ears and musical acuity to follow each other and stay in synch. (Familiarity with

each other’s playing style is also helpful.) Whether explicit or implicit, it's important to

attempt to locate the beat (if there is one) before discussing other aspects of the rhythm.

Techno, like some other styles of electronic music, is characterized by a very explicit

pulse usually created by a bass drum accenting every beat. (A number of classic analog

electronic drum machines, such as the Roland TR-808 Rhythm Composer, were

developed and used for this purpose.) Listen to this effect in Example 23, the song

"Homeless" by the band Fatali. Now compare that with Example 24, "Wood End


BriteLite" by the band Sol Tek. Can you find a pulse in this song? If so, how clear is it? If

not, how would you describe the overall pace of the music?

The instruments in an acoustic piece, or the layers in an electronic composition, rarely

perform in rhythmic unison, that is, they don't use the same rhythmic pattern

simultaneously. As a result, an extremely dense rhythmic texture could be created if

each instrument or layer had its own unique rhythmic pattern. The number of possible

combinations is obviously infinite and clearly indicates how much more there is to

consider when examining rhythm than just locating the basic pulse.

Special mention should be made of the role that repetition plays in the rhythmic

practices of many musical styles. Rhythmic patterns are often repeated either intact or

with slight variations, which helps bring structure and order to a composition. For

example, in many styles of popular electronic music, a process called looping serves as

the basic ordering force in the work. Looping takes its name from the tape loop, a short

piece of tape that has its beginning and end spliced (connected) together so that it repeats

indefinitely when played back on a tape recorder. Today, computer software such as

Fruity Loops (currently called “FL Studio”) is used for looping, and some modern

musicians even have dedicated hardware devices for this purpose.

Looping can refer simply to a repeating rhythmic part (or “groove”) or to a

rhythmic/melodic combination. Listen to Example 25, which is based on several

overlapping looping patterns. Note especially where new patterns begin.
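In code, looping is just repetition of a fixed-length pattern, and overlapping loops of different lengths produce a texture that keeps shifting even though each layer repeats exactly. A generic sketch (the patterns below are invented, not transcribed from Example 25):

```python
# Build a simple texture from overlapping loops of different lengths.
# Each pattern is a list of 1s (hits) and 0s (rests), one entry per beat.
kick  = [1, 0, 0, 0]            # a 4-beat loop
hat   = [1, 1]                  # a 2-beat loop
synth = [1, 0, 0]               # a 3-beat loop -- realigns with the others every 12 beats

def loop_to_length(pattern: list[int], total_beats: int) -> list[int]:
    """Repeat the pattern until it fills the requested number of beats."""
    return [pattern[i % len(pattern)] for i in range(total_beats)]

for name, pattern in [("kick", kick), ("hat", hat), ("synth", synth)]:
    print(f"{name:5s}", loop_to_length(pattern, 12))
```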

Literal repetition of a rhythmic pattern also serves as the unifying force in the music of

some recent Minimalist composers (Steve Reich, Terry Riley, and Philip Glass, among

others). In one approach, different musicians start out playing the same pattern but

gradually move out of phase with one another by playing their parts at a slightly

different speed or just ahead of or behind other performers. Listen to Example 26, the

piece Electric Counterpoint, composed by Steve Reich for guitarist Pat Metheny, for an

example of this technique. In this recording, the guitarist plays live accompanied by an

ever-changing recorded version of himself.
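The phasing idea can be modeled numerically: two identical loops played at slightly different tempos drift apart and eventually realign. A rough sketch, with tempos and loop length invented purely to show the drift (not measured from the Reich recording):

```python
# Two performers play the same 4-beat loop, one slightly faster than the other.
# Track how far apart they are (in beats) as time passes.
LOOP_BEATS = 4
TEMPO_A = 120.0                   # beats per minute
TEMPO_B = 121.0                   # the second performer plays slightly faster

def phase_offset_after(seconds: float) -> float:
    """Offset between the two parts, in beats, within one loop."""
    beats_a = TEMPO_A / 60.0 * seconds
    beats_b = TEMPO_B / 60.0 * seconds
    return (beats_b - beats_a) % LOOP_BEATS

for t in (0, 60, 120, 180, 240):
    print(f"after {t:3d} s the parts are {phase_offset_after(t):.2f} beats apart")
# After 240 s the faster part has gained a full loop and the two momentarily realign.
```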

METER

Rhythm in music does not usually occur as simply an endless string of isolated beats or

pulses. Rather, the beats in a piece of music are typically grouped into small units called

measures or bars. (In music notation, a barline is used to separate successive

measures.) Music that has this characteristic is said to have meter or to be metered. A

composer will indicate a fixed value or length for each measure by using a time

signature, such as 4/4 (four quarter notes per measure) or 5/8 (five eighth notes per

measure). The actual musical activity, however it may occur, will have to cover the span

of four quarter notes (or five eighths) in every measure of the piece or until some new

time signature is assigned. Any combination of sounds or silences (which are designated


by rests in the music) may appear within the measure as long as the total elapsed time is

exactly equal to the total duration given by the time signature. (Meter is also found in

language, especially in poetry, where the "basic unit" would be the syllable, grouped into

words and sentences. Meters are also given names in poetry, for example, iambic

pentameter.)
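Since a time signature fixes the total duration that every measure must contain, the length of a measure in time follows directly from the tempo. A minimal sketch, assuming (as is conventional, though not stated here) that the metronome marking counts quarter notes, using the 4/4 and 5/8 signatures mentioned above:

```python
# Duration of one measure, given a time signature and a quarter-note tempo.
def measure_seconds(beats_per_measure: int, beat_unit: int, quarter_bpm: float) -> float:
    quarter_seconds = 60.0 / quarter_bpm
    # A beat unit of 4 is a quarter note, 8 is an eighth note (half a quarter), etc.
    beat_seconds = quarter_seconds * (4 / beat_unit)
    return beats_per_measure * beat_seconds

print(measure_seconds(4, 4, 120))   # 2.0  -- a 4/4 measure at quarter = 120
print(measure_seconds(5, 8, 120))   # 1.25 -- a 5/8 measure at the same tempo
```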

Normally, the use of a time signature implies that the first beat (downbeat) of a measure

will be emphasized, or accented. Musical styles such as techno and most forms of

electronic dance music often emphasize the first beat of a measure, which helps provide a

strong aural cue to the listener/dancer. Though we may not be able to distinguish exactly

what note value is being used to represent the beat (it could be a quarter, half or eighth

note, for example), the grouping of the music into collections of four (or three or six or

any recurring number) beats allows us to recognize the presence of meter in the music.

Analyzing Metered Music

By listening carefully to a piece of music, the various layers of rhythmic activity in a

composition, i.e., the rhythmic texture, can usually be identified. The first step is to

identify the rate at which the various recurring pulses in the music are appearing, then

choose which of the pulses seems most likely to be the beat. Typically, the beat will be a

recurring pulse that is neither the fastest nor the slowest of the pulses you can detect.

The next step is to identify the relationship between the beat and the other active layers of

the texture. Often there will be consistent subdivisions that create rhythmic activity twice

or even four times as fast as the basic pulse. It’s also likely that there will be multiples of

the beat that produce a layer of rhythmic activity at one-half (or less) the speed of the

beat. Typically, the slower moving layers are in the lower registers of the music, while

the more active ones are in the upper registers. Listen to Example 26a and notice the

high-pitched drum that is moving at a rate twice as fast as the lower-sounding drum.

Once these layers are identified, the final step is to show which instruments or tracks are

most closely tied to which layer. You might find that the bass track is playing one note

every two beats or that a percussion part is performing a rhythm of steady eighth notes,

which is double the speed of the basic pulse. Perhaps another part is using a division of

four notes per beat during a section, or, rather, playing long note values that change only

every two or four beats. Though every part may not fit perfectly into one of the layers,

and it is nearly certain that the rhythmic texture will change frequently throughout a

composition, it can be useful to classify how each part's rhythmic pattern fits into the

overall fabric of the music.

Mixed Meter

A composer might choose to switch meter in a composition, for example starting with a

pattern of four beats (quadruple meter) and switching to a pattern of three (triple meter).

This is called mixed meter. Changes in meter might appear repeatedly throughout a

piece or could occur only once at some point where the music moves into an entirely new

section. Example 27, the Beatles song “Good Morning Good Morning” (Sgt. Pepper, 1967), opens with

four measures of quadruple meter (4/4), switches to quintuple (5/4) for three measures,

triple (3/4) for one measure, quadruple (4/4) for one measure, then back to quintuple (5/4)


for one measure, and so on. A more regular alternation between duple meter, as

represented by the time signature 2/4, and triple (3/8) appears in Example 28a. Try to

follow the beat pattern in each meter: 1 2, 1 2, followed by a quicker 1 2 3.

A more complex example is found in Example 28b, a techno composition by German

artist Gadzatronic. The meter in this song is fairly constant in the opening 30 seconds,

then begins to change throughout the rest of the piece.

Finally, one of the most famous of all mixed meter examples appears in Igor Stravinsky's

ballet score to The Rite of Spring, Example 29. Here, the pounding, driving music and

ever-changing placement of the accents keep the meter in constant flux and must have

been a real challenge for the dancers when the work first appeared in 1913. See if you can

detect the pattern of accents while listening to this example.

Polymeter

Another technique, called polymeter, involves the use of two different meters

simultaneously. In music that is not notated, for example much popular music, it’s quite

easy for several musicians to perform rhythmic patterns that accent different beats,

thereby creating the effect of two simultaneous meters. Listen carefully to Example 30,

the Phish song “First Tube” and focus on the meter established by the bass and drum in

the introduction, then the guitar as it enters. Tap each beat of the guitar part as it is

playing to determine the grouping of accents. Where do the two parts line up? Can you

tell how many beats are implied by the two parts? Does the grouping of beats of the

guitar part change at any point or does it stay constant?

A rather extreme example can be found in the middle of Example 31, the Frank Zappa

song “Toads of the Short Forest” (Weasels Ripped My Flesh, 1970). According to

Zappa, “(at this moment) we have drummer A playing in 7/8, drummer B playing in 3/4,

the bass playing in 3/4, the organ playing in 5/8, the tambourine playing in 3/4, and the

alto sax blowing his nose."

In notated music, a composer will typically assign one time signature to the parts of all

the performers, then create a feeling of polymeter by instructing one or more performers

to use a different pattern of accents than is implied by the time signature. In some cases,

composers will even use two different time signatures to impose different patterns of

accents on different parts. Some modern piano music, for example, uses different time

signatures for each hand. Though this may create problems of notation, it can often be the

best way for the composer to impart his or her intentions to the performer.


Example 32 Gyorgy Ligeti: Piano Etudes, #5. An example of polymeter

Notice in the example above that the composer uses both 3 over 4 (¾) and 2 over a dotted

quarter to indicate what pattern of accents the performer should play. The notation clearly

supports this goal: the beaming in the right hand has three groups of four 16th

notes to

indicate a simple triple meter (3/4) and the beaming in the left hand uses two groups of

six 16th

notes each to indicate the compound duple meter. The accents in the left hand on

the beginning note of each group of six further clarify the composer’s intentions. Similar

combinations of different metric patterns can be found in much electronic music, where

no notation is needed.
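
The alignment of two simultaneous accent patterns like the ones in the Ligeti etude can be worked out by counting in the smallest shared note value. A minimal Python sketch, assuming accents every four sixteenth notes in one hand against every six in the other (as described above):

from math import lcm

right_hand = 4   # sixteenths between accents (three groups of four per measure)
left_hand = 6    # sixteenths between accents (two groups of six per measure)
print("accents realign every", lcm(right_hand, left_hand), "sixteenth notes")  # 12 = one measure

# Print the two accent patterns over one full cycle
for tick in range(lcm(right_hand, left_hand)):
    rh = "X" if tick % right_hand == 0 else "."
    lh = "X" if tick % left_hand == 0 else "."
    print(f"sixteenth {tick + 1:2d}: {rh} {lh}")

The two accent patterns coincide only on the downbeat of each measure, which is exactly what makes the passage sound like two meters unfolding at once.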

Ametric

Though meter plays an important role in most popular music styles of this century, it has

been abandoned in much classical music from the 20th

century onward and in some styles

of jazz, especially since the late 1950s. Moreover, the vast majority of "classic"

(meaning those that stem from a modernist 20th century classical music tradition)

electronic works have no meter at all. Works without meter are called ametric, meaning

that the music has no regular pattern of accents or has no detectable pulses whatsoever.

This characteristic is also found in the music of “progressive” rock groups such as Yes

and King Crimson. When music appears to have no pulse of any type and cannot be tied

to a beat, it is most likely ametric.

Listen to this excerpt from the piece Sonal Atoms, (Example 33) by composer and author

Curtis Roads. There is no recurring pulse in the music and, hence, no meter. The

movement of the music through time is based on the composer's intuitive sense of how

long events should last and at what rate those events should move. Would you

characterize the music as having a fast or a slow pace? Do new events occur quickly or

relatively slowly? Do these qualities change during the excerpt?

Because ametric music does not rely on a recurring pulse to organize the temporal aspects

of a composition, the composer will often use other methods to organize the element of

time. These methods might not be immediately obvious to the listener, however. A

composer might choose to use some pre-determined collection of rhythmic values

repeatedly throughout all or part of a piece, or he or she might pick note values according

to a scheme based on actual time durations.

For example, a composer could organize a series of timings based on some numeric

pattern – a credit card number or a list of birth dates or even phone numbers – and

translate that sequence into the length of notes in seconds. Or a composer might ascribe

some number of eighth notes to every pitch in the chromatic scale (C = 1 eighth, C# = 2

eighths (quarter), D = 3 (dotted quarter), etc.) and assign each note actually used in the

piece to that duration (so every time a C was heard, it would last one eighth note, etc.). In

other cases, ametric music will rely solely on the composer’s judgment regarding how

long each musical event should last, with no special pre-arranged pattern. There are a

limitless number of possibilities once music moves outside the realm of meter, and in

most cases, it takes repeated listening and careful analysis to determine what approach to

organizing time a composition uses.
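
As an illustration of the pitch-to-duration scheme just described (C = 1 eighth note, C# = 2, and so on up the chromatic scale), here is a minimal Python sketch; the short melody at the end is hypothetical.

CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
duration_in_eighths = {pitch: i + 1 for i, pitch in enumerate(CHROMATIC)}   # C=1, C#=2, ...

melody = ["E", "C", "G", "C", "B"]          # hypothetical sequence of pitches
for pitch in melody:
    print(pitch, "lasts", duration_in_eighths[pitch], "eighth note(s)")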


Here are some additional guidelines for dealing with ametric music: Try and determine

the composer’s intention with respect to time and characterize this aspect of the music in

any way you can. Does it feel “rushed?” Is the overall pace fast or slow? Is there a

“dreamy,” “timeless,” effect? Does the music feel “static,” with no clear sense of forward

motion? Perhaps there are occasional “bursts of energy” that move the music along, while

at other moments, the momentum seems to stall. Finding descriptive terms to characterize

the music will require some imagination on the part of the listener, but it is a good way to

begin to get a grasp on the rhythmic elements in use.

Listen to Example 33b and note the slow pace of the music. The sounds in the lower

register sustain like a drone and the vocal-like sound that appears above it enters and

exits very gradually. Now evaluate Example 34, the opening of the composition Scambi

by the Belgian composer Henri Pousseur, using the same criteria. Note how the music

reaches moments of intensity then settles into more relaxed passages. Can you predict

when the explosive outbursts are going to occur?

Complete Assignment ES_A2_Rhythm now

HARMONY

Harmony refers to the relationships among the notes employed in a composition and

can be thought of as the “environment” in which the notes of a work interact. In many

styles of music, harmony creates a backdrop or “tonal landscape” that helps establish

momentum and direction for a piece. Harmonic elements can produce a sense of

movement towards a goal, or, if desired, they can create a sense of wandering and

aimlessness, or even complete stasis and lack of motion. The means by which harmony

can achieve these goals is the subject of this chapter.

In most styles of popular music and in classical music before the 20th century, harmony

typically operates as a series of chords that support the melodic layer of a composition.

However, several strands of melody occurring simultaneously or even a single melodic

line performed on a solo instrument could also create a harmonic framework for a

composition by implying specific harmonic activity. In other words, the single instrument

could be playing the same notes that might be used by an accompanying harmonic

instrument, if there were one. Rather than playing multiple notes simultaneously as in a

chord, however, the instrument would play the notes individually in succession.

The relationship between harmony and melody in such music is usually very clear: both

are derived primarily from the same source, most often the scale defined by the key of the

piece. In some cases, the chords are performed by a keyboard part, though in other cases,

the notes of the chords are spread out or distributed among several instruments. (The term

chord voicing is used to describe the way the notes of a chord are arranged throughout

the musical texture, especially which register is chosen for each note.)

Even without a chordal accompaniment, a melody might have clear harmonic

implications. Such music will often adhere to certain harmonic guidelines and will

demonstrate harmonic tendencies through the choice of notes being played. The listener

must use his or her ear to distinguish what harmonic information is coming from the

music and which chords and tonal functions, i.e., the role these chords play in

establishing a sense of key, are being implied by the various notes of the melody.

CHORD TYPES

One useful way to begin the study of harmony is to summarize the structure of chords.

Chords are typically formed by combining three or more notes from a scale, and any of

the seven scale steps has the ability to serve as the root or starting point for a chord.

Western music most often uses a system called tertian harmony to govern the building

of its chords. In tertian harmony, each successive note in a chord is a distance (or

interval) of one third away from the previous one. The most frequently used chord type

in music is called a triad, which is a three-note structure built by stacking thirds above

the root note. In C major, C is the first note of the scale. C to E and E to G are thirds, and

the chord built on C would, therefore, consist of C, E and G. The starting note, C, is

called the root of the chord, the middle note E is the third of the chord (not to be

confused with the fact that it is also the distance of a third from the root) and the top


note G is the fifth of the chord, because it is the musical interval known as a "fifth" away

from the root note C. These designations hold true no matter how the notes of the chord

are distributed among the performers in a piece or which particular note an

instrumentalist might happen to be using as the bottom note he or she is playing at any

given moment.

To provide a composer with the "raw materials" of harmony in tonal music, a triad is

built on each step of the major or minor scale that is being used for the composition.

(Certain alterations to the notes provided by the scale are used when needed, for example

raising the seventh step of a minor scale to make the chord built on the fifth step a major

chord.) The resulting seven chords are arranged by the composer into various sequences

to provide the music with the type of motion and direction he or she desires. The seven

chords built from a major scale are shown here, with the name of the chord quality

(major, minor, etc.) shown below. The Roman numerals are a form of musical

“shorthand” that is used to indicate the quality of each chord and its position within the

scale.

Example 35 Chords derived from a major scale

Example 36 Chords derived from a minor scale with seventh scale step (B) raised in

the G and B chords

Other types of tertian chords, such as seventh and ninth chords, are also derived from

major and minor scales by adding additional notes above the top note at the required

interval. A seventh chord adds another note a third above the fifth of the chord, which

forms the distance of a seventh from the root, hence its name, while a ninth chord adds

another note a third above the seventh. The process can be thought of as simply stacking

alternate notes from a scale, starting on any note:


Chord type: triad (Ex. 37a) seventh (Ex. 37b) ninth (Ex. 37c)

Seventh and ninth chords are called “extended chords” and serve the same role or

function as the triads they are built on. (Functions will be defined below.) Because they

too are built entirely from thirds, they are also considered to be tertian chords.
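
The stacking process described above can be sketched in a few lines of Python: take every other note of a scale, starting on any degree; three notes make a triad, four a seventh chord, five a ninth chord. The function name and the choice of C major are just for illustration.

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def stack_thirds(scale, degree, size):
    """Return `size` chord tones built in thirds above the given scale degree (0 = tonic)."""
    return [scale[(degree + 2 * i) % len(scale)] for i in range(size)]

print("triad on C:   ", stack_thirds(C_MAJOR, 0, 3))   # ['C', 'E', 'G']
print("seventh on G: ", stack_thirds(C_MAJOR, 4, 4))   # ['G', 'B', 'D', 'F']
print("ninth on D:   ", stack_thirds(C_MAJOR, 1, 5))   # ['D', 'F', 'A', 'C', 'E']

# The seven triads of C major -- the "raw materials" of its harmony
for degree in range(7):
    print(stack_thirds(C_MAJOR, degree, 3))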

CHORD FUNCTIONS

The use of chords in many types of music is governed by a system called functional

harmony, which involves assigning a set role or function to each chord in the key. In

this system, chords typically fall into one of three categories (a short sketch after the list summarizes them):

1) Tonic-functioning chords: chords that help establish the tonic, which is the

"home base" or harmonic reference point in the music; typically the I (or i in

minor) and at times, the vi and iii (III in minor);

2) Subdominant-functioning chords: chords whose role is to move the music

away from the tonic; typically the IV (or iv) and at times, the ii and vi (VI in

minor);

3) Dominant-functioning chords: chords that prepare the return of or point

towards the home base; typically the V (in both major and minor) and at

times, the vii and iii. The V must be a major chord to act as dominant, so in a

minor key, the seventh scale step, which is the third of the V chord, is raised.

(Recall that the raised seventh also changes the chord built on the seventh step,

turning the major VII into the diminished viio.) Raising the naturally occurring seventh scale step allows that note to

serve as a leading tone, which is what gives dominant-functioning chords

their tendency to resolve to the tonic.
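
The three categories just listed can be condensed into a small lookup table. A minimal Python sketch for a major key follows; the progression at the end is hypothetical, and the vi chord is filed under its most typical role.

FUNCTION = {
    "I": "tonic", "iii": "tonic", "vi": "tonic (at times subdominant)",
    "IV": "subdominant", "ii": "subdominant",
    "V": "dominant", "viio": "dominant",
}

progression = ["I", "vi", "IV", "V", "I"]   # a hypothetical chord progression
print(" -> ".join(f"{chord}: {FUNCTION[chord]}" for chord in progression))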

Chords acquire their roles by the notes they contain: certain scale steps and intervals have

tendencies to resolve in certain ways, and these tendencies are carried over into the

chords that use these steps and intervals. Such tendencies help define the chord’s

function. For example, the seventh step of the scale, the leading tone, has a very strong

tendency to move upward to the tonic. When it is used in the V (as the third of the chord)

and viio (as the chord’s root), it supplies both these chords with the tendency to move to

the tonic. Hence they can both serve the dominant function.

Some chords contain dissonant intervals (intervals that have a need to resolve), which

account for their function. For example, when the third and the seventh of a seventh

chord built on the V of a major or minor key are played together, the sound is very

unstable and tense. This quality is passed along to the chord that contains these notes,


thereby producing a tendency to resolve to a more stable interval, such as the major third

(C and E) in a major triad:

Example 38 A tritone in a V7 chord resolving to a I (tonic) chord resolution

Stable intervals, such as the octave (C to C), perfect fifth (C to G), and major third, are

typically considered consonant and do not have the quality of needing to resolve. All

intervals can be classified as consonant or dissonant, but these classifications are not

absolute, that is, they will depend on context. An interval that sounds extremely dissonant

in one context might sound fairly consonant in another.

The rate at which the chords of a piece change is called the harmonic rhythm. Many

popular songs use a harmonic rhythm of one chord per four-beat measure, others change

chords every two beats, but no single approach applies across every type or genre

of music.

USE OF HARMONY

Through the correct use of the seven triads in a key, a composer creates a sense of focus

on and around one principal chord or note, which is the tonic mentioned above. If the

rules of functional harmony are followed so that the tonic is made clear throughout the

work, then the music is said to be tonal and the effect created is called tonality. In other

words, by properly applying functional harmony, we create tonality in music. This gives

the listener a sense of knowing where home base is and where the music is at any point

(whether “close to home” or far away) relative to that home base. Tonality is extremely

common in nearly all styles of popular music, yet working within it a composer can also

create a sense of instability and wandering. For example, the tonic could be avoided or

undermined by the use of unusual chords or sequences of chords. This can add tension

and excitement to music because it forces the listener to delay the gratification he/she

would get upon returning to the expected goal. Such an effect could be suitable, for

example, when the lyrics of a song reflect a similar uncertainty or tension or where this

quality is desired at some point in an instrumental piece.

Note how a clear tonic base is established in Example 39, the song "Ghazal (Love Song)"

by Tangerine Dream, right from the opening. The short chord progression includes

chords only from the main key and starts and ends on the tonic. That same progression

repeats a second time, reinforcing the feeling of home base. After the second repetition,

the key changes, a technique called modulation. This effect works best because the tonic

was established so clearly at the opening of the song. In Example 39a by Radiohead, the

three- (and later, four-) chord progression begins on the tonic chord, but the progression

itself, though it repeats numerous times, doesn’t have a strong sense of direction; it


simply ends, then starts again from the beginning, creating a near drone-like effect. Count

the number of repetitions of the basic progression that you hear in this example.

In large-scale compositions, composers often emphasize one tonal area or tonal region

for an extended period of time. It is also common for compositions to modulate to keys

that are not closely related to the tonic, again for long portions of the music. These and

other harmonic techniques can make it difficult to get one's bearings in the music, making

it hard to know what relationship the current harmonic material has to the tonic at any

given point. This is especially likely when many chords containing notes outside of the

key (called chromatic chords) are used. Therefore, it is often necessary to listen to a

composition multiple times before gaining a clear understanding of its harmonic

roadmap.

Regardless of the musical style, tonal harmony can establish a feeling of continuity and

cohesion in a musical composition. Using functional harmony, the composer or

songwriter can create consistency and continuity that give the listener familiar landmarks

to follow as they move through the music. A good understanding of how harmony

operates can help a composer or songwriter establish goals and reach them at the

moments he or she chooses. It can also aid the listener in following the “logic” or

momentum of a piece.

Atonality

One of the hallmarks of 19th century Romantic music was the demand for heightened

musical expressivity. To meet their artistic demands, composers began to use an

increased number of notes and chords outside of the chosen key. Chord sequences

became ever more complex as did the construction of the chords themselves. As more

and more chromatic notes entered the tonal landscape, the music sounded more intensely

emotional – an attribute favored by late 19th

-century audiences. The drawback to this,

however, was that chromatic saturation weakened (and eventually eliminated) the

harmonic functionality of diatonic scales, keys and tonality in general. As a result, near

the end of the 19th century, the guiding force that tonality provided for music began to

disappear.

One landmark composition in the demise of tonality is the opera Tristan and Isolde by

Richard Wagner, completed in 1859. This work contained many chromatic notes and it

was difficult to determine what the tonic was at many points in the piece. This unsettling

quality perfectly suited the theme of the opera, which was, like Romeo and Juliet,

forbidden, ultimately doomed love. The opening measures of Tristan (Ex. 40) are among the most famous in

music history:


Example 40 Opening measures of Wagner’s Tristan and Isolde

The very first chord (measure 2) is ambiguous harmonically and functionally and no

simple harmonic analysis can neatly describe it. The chord produces a high degree of

tension when it is heard, and the tension is only somewhat resolved in the third measure,

which, though less tense, also implies a need for resolution. The vagueness of the chord’s

function reflects the overall harmonic instability that permeates the entire work. Its

uniqueness in the history of harmony has earned this chord a special name: the “Tristan

chord.”

By the beginning of the 20th

century, a number of composers began to find tonal music

ever more incapable of expressing the world they experienced around them, and a search

for a replacement was undertaken. For a period lasting several decades, and which

includes the era during which music created electronically first became established,

individual composers developed highly personal approaches to organizing the pitched

(and increasingly non-pitched) materials of their work. Much of this music falls into the

very general heading of free atonality.

Free Atonality

The chromatically saturated music of the late 19th

century came about as composers

relied less and less on the relationships among the notes in the major and minor systems.

A systematic substitute for these scales and the functional harmony that governed their

use was proposed in 1921 by the Viennese composer Arnold Schoenberg. The first two

decades of the 20th

century however, reflect the use of free atonality, a loosely structured

approach to organizing the pitched elements in music. (Atonality means, literally,

without tonality). Free Atonality was (and is) used in classical works of various types,

including solo, chamber and orchestral compositions and is also regularly used today by

jazz musicians, especially those who fall under the “free jazz" heading. It can also be

found in the music of some progressive rock artists and is among the most common

approaches used in the electronic masterworks of the past century.

Free Atonality stems from the “free” use of chromatic materials such that the listener

hears no strong tonal center in the music. It occurs when a composer uses common

chords without concern for their traditional function, or, more likely, when he or she

creates chromatic chordal structures that do not fit into any one key. Free atonality can

also occur in music that is not chordal at all - a melody that uses a large number of notes

that are not confined to a single key could also be considered freely atonal, as would

music that is based mostly on noise or on manipulated prerecorded sounds. The constant

use of chromatic materials can completely cloud any sense of orientation around a tonal


center, even though there may be nothing “systematic” or organized about the way in

which the notes and chords are being used (hence the idea of “free”). Listen to the

improvised passage for electric piano and drums from the King Crimson song

“Moonchild” (Ex. 41) and note the lack of any tonal center or focus.

In other freely atonal works, a composer might choose some unique or unusual chord

formation to unify the piece and serve as its central focus. This is the case with the

“mystic color chord” that Alexander Scriabin used in his orchestral composition

Prometheus, The Poem of Fire (1910). (This work also included a part for the live

projection of colored light, reflecting Scriabin’s interest in combining image and sound.)

Example 42 Scriabin: Prometheus chord (mystic color chord)

Scriabin moves among various versions of this chord while subtly changing the

instruments that play each note, and also uses the intervals that make up the chord in his

melodies. This six-note chromatic chord is built using a non-tertian structure and includes

notes that would not appear in any single key. Though the composition is 100 years old,

the sound would be perfectly acceptable in many forms of modern jazz.

Example 42a, entitled electronic soundscape 72113a (composer unknown), is a mostly atonal

composition that opens with a focus on a single, sustained note, creating a drone effect.

Drones are common in much “ambient” (slow moving music usually emphasizing long,

sustained notes with little rhythmic activity) electronic music and are also often found in

non-western musical traditions. The drone in Example 42a is accompanied by a variety of

chromatic notes that enter and exit over the first 55 seconds or so, then the harmony

begins to move freely to other overlapping notes and soon loses any sense of tonality,

becoming atonal. Listen to Example 42a and see if you can detect when the focus on the

opening note begins to weaken, then gradually disappears entirely.

Serial Atonality

For Schoenberg, Scriabin, and other composers of this era, reliance on their own

artistic instinct proved a difficult course to maintain as compositions grew longer and

longer. Free atonality offered little in the way of organizational standards, forcing each

work to be a unique world unto itself. With no pre-conceived framework to rely on,

freely atonal music became a challenge for composers and listeners alike. Each new

work had to be approached afresh, and repeated listenings were needed before a piece

revealed itself fully. One solution to this dilemma, called serial atonality, was proposed

by Schoenberg himself.


In 1921, Schoenberg revealed his “method of composing with 12 notes which are related

only to one another.” That discovery paved a new path for musical organization, one that

has had a profound impact on music of the last 100 years. Schoenberg's system consists of

several key elements. First, the twelve notes of the chromatic scale are ordered into a set

arrangement, different for every piece, called a tone row, which serves as the basis for the work.

Next, because Schoenberg’s goal is to avoid giving any one pitch more importance than

any other, all of the notes of the row are sounded before any one note is repeated. Once

all twelve pitches have been played, whether as part of a melody or in chords, the process

begins again. (Other composers have used Schoenberg’s basic method since its inception,

and there are many variations on the specific way it is interpreted for a piece. Schoenberg

himself, for example, did not adhere strictly to the practice of sounding all notes before

any was repeated.)
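
A tone row is easy to represent as data: number the twelve pitch classes 0 through 11 (0 = C) and write the row as an ordering of those numbers. The short Python sketch below uses a hypothetical row, not one drawn from any actual piece, and simply verifies that every pitch class appears exactly once.

row = [5, 4, 0, 9, 7, 2, 8, 1, 3, 6, 10, 11]    # hypothetical tone row (0 = C ... 11 = B)
assert sorted(row) == list(range(12)), "a row must contain each of the 12 pitch classes once"

# The succession of intervals (in semitones) is what gives the row its identity
print([(b - a) % 12 for a, b in zip(row, row[1:])])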

The music that results when Schoenberg’s 12-tone system is employed is called serial

atonal or 12-tone music. Because the row and its variations (described below) are reused

throughout the work, the recurring pattern of intervals found between notes of the row

provides a strong sense of cohesion for the pitched materials of the piece and, with time,

can become recognizable to a listener. If the music adheres to Schoenberg’s suggested

guidelines, serial atonality can be as powerful as functional harmony (though not as

obvious on first hearing) in creating a sense of unity and integration for a musical work.

This is one of its biggest appeals to composers who employ it. The careful listener will

get oriented to the reuse of the specific intervals in the row as they appear in different

musical motives and gestures, though this recognition may not occur until the

composition has been heard multiple times, if at all.

For example, listen to Example 42b, Total Serial Composition by Biggie Phanrath, and

note the recurring 12-tone melody that appears in both the flute and the electronic

accompaniment. Because the same row is used repeatedly, you can easily hear the serial

quality of the music. Now listen again to Example 14, Stockhausen's Study #2. This work

is extremely tightly controlled by serial principles that determine the notes, rhythms and

other parameters. Yet the listener cannot recognize this in the music on first hearing, nor

would the composer wish them to.

Interestingly, visual artists at about this same time were also looking for ways to move

beyond realism and the representation of actual physical objects in their work. Artists

such as Wassily Kandinsky (1866 - 1944), considered by many to be the “father” of

abstract modern art, were directly influenced by Schoenberg in the composer’s attempt to

move beyond tonality. Kandinsky heard a concert of Schoenberg’s music in 1911, and

the two immediately began a long correspondence about their mutual goals.

Tone Rows

At an early stage of the compositional process, a composer will construct a unique tone

row that best meets his or her musical and expressive needs. An enormous number of

tone rows can be created using different arrangements of the twelve notes; the number of

possible orderings (12 factorial) is roughly 479 million. Any combination of notes (except, perhaps, a literal ascending or

descending chromatic scale) is possible: The composer might choose to embed the

interval of a perfect fifth at several points in the row so that interval becomes


characteristic of the music or even use the notes of a major triad somewhere among the

succession of 12 pitches. For example, the tone row used by composer Alban Berg in his

famous Violin Concerto (1935) has multiple triads embedded among the successive

notes:

Tone row used in Alban Berg’s Violin Concerto showing embedded triads

Listen to Example 43 to hear the full orchestra play the embedded triads one by one

followed by the solo violin playing the notes of the row in turn. The composer subtitles

the piece “To the Memory of an Angel” as a memorial to the daughter of a close friend.

A composer might use a secondary layer of organization within the larger 12-note

framework, for example, arranging the twelve notes into four groups of three, each with a

similar set of intervals among the notes in the group. This could give the music a highly

motivic quality, as the same small number of intervals will repeat often, helping to unify

the piece:

Tone row from Anton Webern’s Concerto using four groups of three notes each

Note in the example above that every group of three notes includes a minor second (for

example, B to Bb and G to F#) and a major third (Bb to D and Eb to G).

In practice, the composer does not simply lay out the notes of the 12-tone row from the

first to the last then start over at the beginning using the same notes. That would lead to a

very repetitive and boring composition. Rather, the original row is used to generate four

additional row forms, each of which is then used on any of 12 different starting notes.

Combined, this larger collection of related row forms provides unity and cohesion to the

pitched materials.

This large collection of notes would be very difficult to manage if there were not some

way for the composer to visualize the entire universe of available pitches. In order to do

so, a grid called a matrix, shown below, is created for each piece. A matrix shows all

four row forms in each of its twelve possible transpositions and is an essential tool for use

by the composer. The original prime row form (P0) starts at the top left of the matrix on

the note F and moves from left to right along the top line, ending on F#. The


untransposed inversion (I0), where each successive note is as far below as its counterpart

was above in the original, also starts on the note F but moves downward along the y axis

ending on E. (Note that if you draw a diagonal from the upper left to the bottom right, the

note F will appear at every point.) Because the untransposed prime form goes from F

down to E (a half step down) between the first two notes, the inversion goes from F up to

F# (a half step up). After filling in the untransposed prime and inversion forms, the

composer completes the matrix by filling out each of the subsequent prime forms,

transposed as needed. For example, the second row contains the prime form transposed to

start on F#, because that is the second note in the inverted version.

A matrix showing all 48 forms of a 12-note tone row (from Webern, op. 25)
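
The same construction procedure can be expressed as a short Python sketch. It works in pitch-class numbers (0 = C ... 11 = B) and uses a hypothetical row rather than Webern's: the inversion is computed first, and each of its notes then determines the starting pitch of one transposed prime form, exactly as described above.

prime = [5, 4, 0, 9, 7, 2, 8, 1, 3, 6, 10, 11]                  # hypothetical prime row (P0)

# Inversion: every interval above the first note becomes the same interval below it
inversion = [(2 * prime[0] - note) % 12 for note in prime]

# Each matrix row is the prime form transposed to begin on the next note of the inversion
matrix = [[(note + start - prime[0]) % 12 for note in prime] for start in inversion]

for matrix_row in matrix:
    print(" ".join(f"{n:2d}" for n in matrix_row))
# Reading the rows right-to-left gives the retrogrades; reading the columns bottom-to-top
# gives the retrograde inversions, so all 48 forms are contained in this one grid.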

The matrix helps the composer organize the pitch material of the piece and provides an

overview of all the different arrangements of notes that can be derived from the basic

tone row. The composer selects various row forms to create the melodies and harmonies

of the piece using his or her musical sensibility as a guide. Composing serial music in

this fashion is not simply a mathematical exercise in cycling through the various row

forms in a random or in some predetermined order, however. Every compositional

decision must be made by the composer, including what the original series of notes will

be, how the various forms of the row will appear in sequence in the work, whether the

melodies will be supported by chords from the same or from a different row, etc. In

addition, all matters of timing, pacing, rhythm, articulation, instrumentation, and form

must be determined by the composer. The vast amount and range of music that has been

written using this method over the past 90+ years is a testament to the great flexibility

Schoenberg’s technique provides.


Integral Serialism

A variation on Schoenberg’s basic approach to organizing the pitched material of a

composition soon developed that was known as total (or integral) serialism. This

concept, which was employed by a number of composers of electronic music, most

notably, Karlheinz Stockhausen (mentioned previously), involves applying the principle

of serial organization to other elements of the music besides pitch, such as note duration

and rhythm, dynamics, articulation, register and perhaps even instrumentation. For

example, a composer might assign a different numeric value to each of 12 different note

durations, then use a specific note duration in conjunction with the corresponding note

number in the row.
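
A hedged sketch of the idea, in Python: pair each note of a (hypothetical) row with a duration and a dynamic drawn from two other ordered series. The specific values are invented for illustration and do not come from any actual serial work.

row = [5, 4, 0, 9, 7, 2, 8, 1, 3, 6, 10, 11]                  # hypothetical tone row
duration_series = [1, 3, 2, 6, 4, 12, 8, 5, 10, 7, 9, 11]     # in sixteenth notes, hypothetical
dynamic_series = ["pp", "p", "mp", "mf", "f", "ff"] * 2       # six levels cycled twice

for pitch, duration, dynamic in zip(row, duration_series, dynamic_series):
    print(f"pitch class {pitch:2d}: {duration:2d} sixteenth(s), {dynamic}")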

A composer could also create a set of twelve different note articulations and associate

each with a different pitch as well, or perhaps a different version of the row could be used

to organize the dynamics (inversion), registers (retrograde), or instrumentation

(retrograde of the inversion, for example). Among the composers interested in this

approach were Stockhausen, who employed it in many of his electronic works, and the

American Milton Babbitt. Schoenberg’s student Anton Webern, French composer Olivier

Messiaen, and innumerable other composers of the 20th

and 21st century have also

explored integral serialism in both acoustic and electronic compositions.

Stockhausen used serial principles in several ways. One was to set up a set or series of

proportions: 2:1, 3:1, 4:1, 5:1, for example, and apply those proportions to different

musical characteristics. For example, he might use note durations that incorporate those

proportions (in whatever order he chooses) and use all of them before any one of the

ratios is repeated. The dynamics (loudness levels) of notes might be controlled by the

same set of proportions as could the pitches or even the amount of time that larger

sections of the piece take to occur. Though it is very unlikely that the listener would

detect the use of such a set of ratios, it could become apparent over time. It is also possible

that such unifying elements as the recurring use of those numbers might add to the

cohesiveness of the music, if only subliminally.

Milton Babbitt was another composer whose music was organized along similar lines.

His Composition for Synthesizer (1961) uses integral serialism techniques to govern

every musical parameter, including the tone quality of the sounds themselves. Listen to

Example 43a to hear an example of this approach to composition; it is not likely that you

will be able to perceive any of the specific methods that are used by the composer.

Summary

Any discussion of harmony in music must include three fundamental concerns. First, a

thorough examination of the source of the harmonic materials in the work should be

undertaken, whether it is a scale, tone row or other controlling factor. Next, the system

that governs the use of these materials, most often functional harmony but perhaps

serialism, must be examined and understood. (And of course in some cases, there is no

system at all.) Finally, the impact or effect that the harmonic activity has on the listener,

whether it gives him or her a clear sense of direction and inevitability or a feeling of

uncertainty and confusion, whether the harmonic techniques are clearly perceivable or

difficult to identify, should be determined.


Complete Listening Assignment - Harmony now

TEXTURE

Texture refers to the interweaving of the various layers of a musical composition and the

ways in which these layers relate to one another. In characterizing the texture of a work,

the listener must identify the number of layers that exist, determine the degree of

independence that these layers have from one another, and interpret the role and relative

importance of each. Texture has become an increasingly important element in music

since the beginning of the twentieth century, as more traditional elements such as melody

and harmony have often been de-emphasized.

The term voice or part is most often used to refer to an individual strand or line of music

that can be identified or isolated within the fabric of a piece. In recent electronic music,

an equally common term is track, as in the "bass" or "percussion track." The number of

voices or parts does not refer simply to the number of performers in a work; the fact that a

piece of music is performed by a group of eight or nine singers or instrumentalists does

not mean that there are eight or nine distinct voices or parts in the texture. They might,

for example, all be singing the exact same music. Typically, it is the number of distinct

and unique musical ideas or lines created by the performers and the way these interact

that is significant in helping to characterize texture. The term “track,” however, will

often refer to a single instrument or distinct layer in the music.

A number of different terms, both musical and general, can be used to describe the

texture of a work. Music that contains many different strands of melody or rhythm

sounding at the same time might be described as "thick" or "dense," while a work with

only a single instrument performing a single line of melody could be characterized as

"thin," "sparse" or "transparent." These terms are obviously not specific but simply give

a general description of the fabric of the music. More specific musical terms, to be

discussed below, have been adopted to describe the qualities of texture in a composition.

Texture in electronic music cannot always be broken down into clearly distinguishable

layers - it is often difficult to isolate the various elements in a piece - so broader terms

such as "foreground" and "background" are sometimes used to describe how the larger

parts of the music interact simultaneously. In the foreground, there might be a prominent

sound, which might be louder and more emphasized than other, indistinct sounds in the

background, which could, perhaps, be shrouded in dense reverb.

As with all musical elements, the texture in a work can and typically will change as the

music progresses. The music may sound thin and transparent at the outset, then become

dense or opaque thereafter. Changes of texture of this type often signify important formal

landmarks or divisions in the music. Listen to Example 44, the short song, "Sarmays," by

the band Pan Sonic. How would you describe the texture? How many events are

occurring at one time at the beginning? Does that change? Are some sounds clearly in the

foreground and others in the background? Make a clear distinction between the number

of events that occur at the same time and the nature of those events themselves.


Now listen to Example 45, an excerpt from the song "Turning of the Wheel" by

Tangerine Dream. How many layers are there at the opening? When and how does this

change? How many layers are there when the excerpt ends? Is it easy to distinguish one

part from the next or not? Why?

Monophony

A texture consisting of only a single line, for example, a solo melody without any

accompaniment is called monophonic (meaning "one sound"). This texture is common

in many non-Western traditions and in some Western classical music, where many great

works have been written for single instruments, but is not particularly common in popular

music except, perhaps, during an instrumental solo or drum break. It’s also not

particularly common in most electronic music, with the exception of music that was

written for and played on early electronic synthesizers, which were monophonic

instruments. A monophonic instrument is one that is capable of playing only one note at

a time. Though often performed without accompaniment, the piano, harp and guitar are

not monophonic because they can play more than one note at once and have the ability to

create the effect of multiple independent voices sounding simultaneously. Listen to

Example 45a, a demo performance on a Korg monophonic synth from 1978. Even though

the performer can change a variety of settings, such as the brightness or “bassiness” of

the sound, the synth can play only one note at a time. Now listen to a very different

monophonic example in Example 46a. Monophonic electronic instruments were often

used in live performance, for example, with the keyboard player using an entire rack of

multiple instruments, or in the studio during a production that might include the sound of

other instruments added into the mix one at a time. Example 46a illustrates a technique in

which the notes were not played by a live performer but were generated purely

electronically by a device called a sequencer.

A melody that is doubled (duplicated or repeated) one or more octaves above or below

the original, whether played by two separate instruments or only a single instrument, is

also considered to be monophonic. For example, all members of a chorus singing the

national anthem in melodic unison would be considered monophonic, even if the

different parts started on the same note in different octaves. In any style of music,

monophony can appear at a point where the composer or performer wishes to introduce

variety in the texture. As an example, a saxophone or trumpet player might take an

extended solo in the middle of a jazz performance while the other members of the

ensemble remain silent.

Listen to the solo section of the Lynyrd Skynyrd song “Free Bird” (Example 46) and

note the shifting texture of the three guitars that are soloing. At what points do the guitars

play monophonically and where and how does the texture they create change (focus only

on the lead guitars)? Can you tell there are three guitars playing; if not, what does it

sound like? Though there are actually multiple guitars playing in this example, a single

guitarist can create the effect of unison playing by overdubbing a second duplicate part

in the recording studio. The performer will listen to (or monitor) the first part while

recording one or more additional tracks “over” the original.


Polyphony

In musical terms, a texture that consists of numerous musical lines or voices, each of

which is equal in importance to the other (i.e., one part is not merely a support or

background for another) is called polyphonic, meaning "many sounds." Each layer in this

type of texture is independent and has a clear identity of its own. Polyphony is common

in both Western and non-Western cultures and is found, for example, in New Orleans-

style (“Dixieland”) jazz, which features a group of melodic instruments (typically

trumpet, clarinet and trombone) improvising simultaneously while being supported by a

rhythm section. The specific approach used in Dixieland is called collective

improvisation, as each instrument is free to create its own melody, with or without

reference to the original melody of the song. The resultant dense and highly active texture

is clearly polyphonic in nature, which you can hear in Example 47. You can also hear a

polyphonic texture in Example 47a played on the Minimoog, a monophonic keyboard

synthesizer developed by electronic music pioneer Robert Moog (a polyphonic texture on such an instrument must be built up by layering multiple recorded lines). How many individual

layers of melody can you detect in this example? This music, used in a popular video

game (Tetris), was actually composed by classical composer J. S. Bach (1685-1750), the

most renowned practitioner of polyphonic music in the classical Western music tradition.

A very different example of polyphony is in the SquarePusher song "Scopem Hard" (Ex.

48 after about 54 seconds). How many layers of sound are there at the opening of this

example? How many when it ends? What are the roles of the different layers - are they

equal in importance or does one or more sound like it is secondary or a background for a

more important principal layer?

Polyphony is not common in many types of popular music, where a single melody is

usually performed by only one singer or where one instrument is typically the main focus

of the music. However, a polyphonic texture would be created, for example, when two

guitars solo simultaneously. Listen to the opening of the song “On Reflection” (Example

49) by Gentle Giant. What is the texture in this excerpt and how many individual parts do

you hear? How long does the first texture last and what texture enters roughly half-way

through the excerpt?

Homophony

The final texture, homophony, is the most common texture in Western popular music

and has been used in nearly every musical style since the Middle Ages. This texture has

two main variants: the first is called melody and accompaniment and the second is

known as chordal or block chord texture. Melody and accompaniment involves the

presence of one main melody and a clearly subordinate, usually chordal accompaniment.

A solo guitarist improvising against the background of a rhythm section would be a

simple example of this approach, as would the highly standard arrangement of a lead

singer performing with a backup band. Though this texture involves two layers, the

accompaniment in both cases is completely subordinate to the principal melody and does

not typically have sufficient musical interest to stand alone (although the backup band

might disagree!).

Listen to Example 50, the song "Hy A Scullyas Lyf A Dhagrow" by Aphex Twin, and


notice that there is a melody in the upper portion of the piano's notes and an active

accompaniment (chords being played one note at a time) in the lower portion. This

simple example of melody and accompaniment homophony demonstrates that the number

of instruments being used is not always a factor in determining the texture.

Now listen to Example 51, an excerpt from the song "Atlas Eyes" by Tangerine Dream.

The melody played by the oboe is rather slow moving but stands out clearly because the

oboe tone is so different from the slow sustained chords that accompany it.

Spatialization

The spatial distribution of sound is often the basis today for live presentations of

multichannel electronic music, where composers disseminate or diffuse their music

throughout an auditorium in which multiple speakers have been arranged during a live

performance. Sitting at a mixing console with his or her hands on the volume and

positional (pan) controls, the composer will “perform” the playback of a prerecorded

composition by controlling how much sound appears at each speaker. In some cases,

composers diffuse their work into as many as four, eight or even more loudspeakers.

Edgard Varèse's Poème Électronique (1958), presented at the Philips Pavilion at the

Brussels World's Fair, incorporated 20 discrete streams (called channels) of music sent to

over 400 loudspeakers. Surround sound, which uses 6 or more discrete channels of

sound, gives composers new options for diffusing their work spatially and creating a

variety of antiphonal effects.

Because most home audio systems and digital music players have only two-channel

stereo playback, multichannel works are not usually found on audio CDs or in

downloadable audio files (nor can they be demonstrated via the Internet). But a

presentation of a live diffused multichannel piece in a concert or theatrical setting, with

sound whipping around in front, back and to the sides of the listener, can be a very

exciting experience. Numerous multichannel audio programs are available to the modern

studio composer for the creation of such works.

Clusters

Modern composers use numerous different approaches to the distribution of notes within

their music. Rather than arrange the musical parts into clear and distinct layers,

composers might use clusters to best express their artistic intentions. Clusters are tightly

spaced groups of two, three or more notes, typically no more than a few half-steps apart.

Clusters can add an evocative color to a composition and are used by both classical

composers and by jazz arrangers.
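
Because a cluster is simply a group of chromatically adjacent notes, it can be generated mechanically. A minimal Python sketch follows; the starting note and cluster size are arbitrary choices for illustration.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def cluster(start_pitch_class, size):
    """Return `size` chromatically adjacent note names, each a half step apart, starting on the given pitch class."""
    return [NOTE_NAMES[(start_pitch_class + i) % 12] for i in range(size)]

print(cluster(4, 4))    # ['E', 'F', 'F#', 'G'] -- a four-note cluster built on E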


Example 52 Tone Clusters

Composers such as the Hungarian Gyorgy Ligeti (1923 – 2006) have written music in

which every instrument (or instrumental section) of the orchestra is given its own unique

note, with each note being only one-half step away from the next. In Ligeti’s

Atmospheres (1961; Ex. 53), for example, fifty-six musicians play different notes,

creating a sound mass that is, perhaps, unique in music. The resulting texture consists of a

massive cluster spanning a five-octave range in which every note of the chromatic scale

is played simultaneously. Ligeti likened the composition to a “far-away mass for the

dead” and called its texture “micropolyphony” due to the shifting entrance and exit of

players and the extremely subtle changes in the instruments’ loudness levels.

Example 53a illustrates the use of microtonal clusters created through digital processing

of a clarinet and a cymbal. The piece, entitled The Hand of Gravity, is by Michel Plourde

and incorporates sounds with no distinct pitch.

Another approach to texture is found in Example 53b, Sonal Atoms. Listen to this

example and note that there is no distinct melodic layer – the music is also entirely non-

pitched – so the traditional terms used to describe texture do not really fit this music.

A better approach would be to describe the overall density of the music, that is, whether

the texture is “thick” or “thin,” or perhaps label it “rough” or “grainy,” and as elsewhere,

note especially how and when changes in texture occur.

Complete Listening Assignment 4 - Texture now


SONORITY

Sonority, also known as timbre or tone color, refers to the unique qualities of sound

produced by any instrument that distinguishes it from all others. These qualities range

from the "pure, hollow" sound of the flute, to the “honking screech” of a saxophone in its

highest register, to the "deep, mellow" tone of the cello or bass, to the “squeal” of a

distorted electric guitar, with unlimited gradations in between. Although each of these

instruments can play the exact same pitches, the different properties of each are instantly

recognizable by any listener, assuming they have had some prior exposure to that

instrument.

One way to describe timbre is using the language and measurements of science, and in

fact, the differences in the tone color of different instruments can be explained rather

simply from a scientific basis. Any musical instrument when performed generates a series

of multiple simultaneous vibrations that combine to form a unique pattern of wave

motion in the air. This motion is called a sound wave or sound-pressure wave. Each of

the individual vibrations is a sine wave, which is the basic back and forth wave pattern

that all other sounds are made from. The specific combination of sine waves that make up

any given sound is called its spectrum, and it is the spectrum of a sound that accounts for

the tone color we hear.
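
The statement that a sound is the sum of simpler sine-wave vibrations can be demonstrated directly in a few lines of Python. The frequencies and relative strengths below describe a hypothetical spectrum, not the measurement of any real instrument.

import math

sample_rate = 44100                                           # samples per second
partials = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.25)]        # (frequency in Hz, relative amplitude)

def pressure(t):
    """Composite air-pressure value at time t (in seconds): the sum of all sine-wave partials."""
    return sum(amp * math.sin(2 * math.pi * freq * t) for freq, amp in partials)

waveform = [pressure(n / sample_rate) for n in range(sample_rate // 100)]   # one hundredth of a second
print(len(waveform), "samples computed; peak value =", round(max(waveform), 3))

Changing the list of partials changes the spectrum, and therefore the tone color, while the perceived pitch stays with the 220 Hz fundamental.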

It’s possible to examine and analyze the spectrum of any sound using tools of science to

identify and measure the sound’s specific components. These observations might be

interesting and even helpful to a musician working with modern electronic instruments,

as such instruments can either generate any type of sound by adding together individual

sine waves or manipulate the spectrum of an existing sound. But there are other

approaches and terms that are more generally used to describe timbre in the world of

music. A brief explanation of the physics of sound, a field known as acoustics, will

greatly aid in the understanding of the musical aspects of sonority.

Acoustics: The Physics of Sound

Sound begins when molecules in the air are disturbed by some type of motion produced

by a vibrating object. The object, which might be a guitar string, human vocal cord or

rolling garbage can, is set into motion because energy is applied to it. The guitar string is

struck by a finger or pick, while the garbage can is hit perhaps by a hammer or shoe. In

both cases, the result is the same: each object begins to vibrate. In fact, they begin to

vibrate at multiple rates simultaneously, though what we actually hear is a combination or

composite of all these vibrations.

Both the rate (speed) of the vibrations and their amount (or strength) is critical to our

perception of the sound. If they are not fast enough, we won’t hear the sound, and if they

are fast enough but not strong enough, we won’t hear it either. If, however, the

composite vibration repeatedly occurs at least twenty times a second, the minimum for

human perception, and the molecules in the air are moved far enough (a more difficult

phenomenon to measure), then we will detect a sound. To understand the process better,

the behavior of a guitar string will be used as an example of a vibrating object producing


sound. Note that many of the basic characteristics of the string also apply to other objects,

such as brass and wind instruments.

Frequency. When a pick plucks a string, the entire string vibrates back and forth at a

certain rate of speed (see Figure 12 below). This speed is called the frequency of the

vibration, and the term “frequency” is used to describe the rate of vibration of any object

set into motion. One single back and forth motion of a vibrating object is called a cycle,

and the number of cycles per second, or cps, is the increment of measurement used for

frequency. (Cps is also referred to as Hertz, abbreviated Hz, named for the 19th-century

German physicist Heinrich Hertz.)

The phrase “A-440,” which is a frequency we associate with the note A above middle C

(A4), refers to a vibrating frequency that recurs 440 times per second. This is written as

“440 Hz,” or “440 cps” (see the table below showing the correspondence of pitch to

frequency). Like other vibrating objects we might wish to measure, for example the

vibration of air inside a clarinet or trumpet, or the vibration of the head of a drum, the

frequency of the string can be very high, so we use the abbreviation kHz (kilohertz) to measure

frequency in thousands of vibrations per second. A frequency of 2 kHz (read as “2

kilohertz”) signifies a vibrating frequency of 2,000 cycles per second. This means the

string or other object goes through its back and forth vibrating motion 2,000 times per

second, a frequency well within the range of human hearing. Two thousand cps is not a

frequency that coincides with a specific musical pitch – our system of notation isolates

only a few dozen of the nearly infinite possible frequencies our ears can detect and

assigns them to pitches. Several of these are shown in the table below.

Correspondence between musical pitch and frequency of vibration
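
The correspondence can also be computed. In standard equal-tempered tuning with A4 fixed at 440 Hz, each semitone multiplies the frequency by the twelfth root of two; the following minimal Python sketch prints a few reference pitches on that assumption.

A4 = 440.0                                  # Hz, the reference pitch "A-440"

def frequency(semitones_from_a4):
    """Frequency in Hz of the note the given number of semitones above (negative = below) A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

for name, offset in [("C4 (middle C)", -9), ("A4", 0), ("A5", 12), ("C6", 15)]:
    print(f"{name}: {frequency(offset):.1f} Hz")   # approx. 261.6, 440.0, 880.0, 1046.5 Hz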

Displacement. The actual distance a string or other object moves from its point of rest is

called its displacement, and displacement is the main factor in determining the loudness

of the sound we hear. The distance the string moves is a function of the strength of the

energy applied to it and for that reason, the term amplitude, meaning strength, is

commonly used as a substitute for displacement. The material the string is made of and

its thickness and length also greatly influence the distance it will move when struck.

The scientific measurement used for displacement is not particularly important for this

discussion. Rather, it is important to know that the different vibrations that occur when an

object is struck are not equal in strength; some are stronger than others by a significant

amount. As a result, displacement is usually measured on a relative scale, where the

specific amplitudes of the different simultaneous vibrations are compared to one another.

However, if there is not adequate displacement to move the air molecules surrounding the


vibrating object, the waveform cannot travel through the air and we will not hear the

sound.

FIGURE 12 A plucked string in motion. This figure shows one complete cycle.

As the string in the example above moves, it displaces the molecules around it in a

recurring, wavelike pattern; as the string moves back and forth, the molecules also move

back and forth. The movement of these molecules is propagated in the air, meaning that

individual molecules bump against molecules next to them, which in turn bump their

neighbors (much like a chain reaction). If this movement of molecules is strong enough

and we are close enough to the source (the guitar string, in this case), the molecules next

to our ears will very soon be set in motion (sound travels at over 1130 feet per second in

the air and even faster in water), and they, in turn, will move our eardrum in a pattern

similar or analogous to the original string movement. Next, our brain will detect this

movement, look up the specific pattern in its “data bank” to see if there is a match, then

identify the pattern as the sound of a guitar (assuming it has been exposed to the sound of

a guitar before). We will then “hear” and recognize the sound. Note that sound cannot

travel without a medium of transmission; there must be air (or water) molecules for a

sound to exist. For this reason, sound cannot occur in a vacuum, for example on the

surface of the moon, where there is no medium to sustain the sound.

The air- or sound-pressure wave created by the pattern of moving air molecules can be

depicted in several ways. One way to represent the wave is to use a mathematical

formula. Musicians, however, typically prefer to use software to view an actual image of

the wave on a computer screen. The graphic representation shown below is called a

waveform plot, and it shows how much air is being moved at any point in time. The

amplitude or amount of air pressure is represented on the y (vertical) axis and time is

shown on the horizontal x axis. There are two similar graphs because this sound was

recorded in stereo, meaning there is separate information for the left loudspeaker and the

right:

A graphical waveform representation of a single piano note of 1” duration


Note in the figure above that there is a very brief point of silence at the beginning of the

diagram, then the sound starts at a very high amplitude. Gradually, the sound fades out

until it dies at the end of this graphic. The movement of air molecules depicted here lasts

less than one second (the indication 00:00:00.500 indicates the half-second point).

Figure 13 above illustrates the movement of air molecules that have been set in motion

by a vibrating string. It is an oversimplified plot as it only represents a portion of the

actual vibrating pattern, but nonetheless shows clearly how the molecules move during

the first portion of the sound’s life. The dashed line represents the string at rest before

any motion occurs. The segment marked "A" represents the impact of the string vibrating

after it is first struck by the pick; "B" shows the air molecule movement as the string

moves back towards its resting point; "C" represents the string moving through the

resting point and onward to its outer limit; then "D" shows it moving back towards the

point of rest. The segment from the start of the graph on the left through to the point

marked D represents a single cycle of the waveform. How many cycles are shown in

total?

This cyclic pattern of vibration repeats continuously until the friction of the molecules in

the air (or water) gradually slows the string down to a stop—you can see above that the

amount of movement away from the initial point of rest (i.e., the displacement) decreases

over time. In order for us to hear the string sound, this back and forth pattern must repeat

at least twenty times per second. This frequency threshold, 20 cps, is the lower limit of

human hearing perception. The fastest sound we can hear is theoretically 20,000 cps (20

kHz), but in reality, it's probably closer to a frequency of 15 kHz or 17 kHz. Moreover,

many playback systems (inexpensive headphones, for example) cannot reproduce

frequencies anywhere near 20 kHz.

The rate at which the entire string vibrates is called the fundamental frequency, and this

frequency is the frequency that gives a sound its strongest sense of pitch. The lowest

string of a guitar vibrates at a rate of about 82 Hz, which produces the pitch E2, and the

highest string has a frequency of about 329 Hz, which is E4. But if this one, simple back

and forth motion were the only phenomenon involved in creating a sound, then all


stringed instruments would probably sound much the same. We know this is not true and

alas, the laws of physics are not quite so simple. In fact, the string vibrates not only

across its entire length, producing the pattern of fundamental movement shown above,

but also at one-half its length, one-third, one-fourth, one-fifth, etc., simultaneously.

These additional vibrations are called overtones because they occur at rates faster than (“over”) the rate of the fundamental vibration. In addition, the faster an overtone vibrates, the weaker its amplitude.

For example, the first overtone, which is a vibration of the string over one-half its length,

occurs at twice the frequency of the fundamental and has an amplitude one-half as strong; the second overtone, a vibration of one-third the entire string, occurs at three times the

rate of the fundamental and one-third its strength, etc. Our ear doesn't hear each overtone

as a discrete pitched event, however. If it did, we would hear a multi-note chord every

time a single note on a string was played. Rather, all these vibrations are added together

or “fused” to form a complex or composite waveform pattern that our ear perceives as a

single tone (see Figure 14 below).
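As a rough illustration of this “fusing,” the Python sketch below adds a fundamental to its first several overtones, each weighted by the simplified 1/n amplitude rule described above (real instruments have far less regular overtone strengths). The resulting array is the kind of composite waveform the ear hears as one tone:

    import numpy as np

    sr = 44100                        # samples per second
    t = np.arange(sr) / sr            # one second of time points
    fundamental = 110.0               # fundamental frequency in Hz (illustrative)

    # Sum the fundamental and its first seven overtones; each overtone's amplitude
    # shrinks in proportion to how fast it vibrates, per the simplified rule above.
    wave = np.zeros_like(t)
    for n in range(1, 9):
        wave += (1.0 / n) * np.sin(2 * np.pi * n * fundamental * t)

    wave /= np.abs(wave).max()        # normalize; the ear hears this as a single 110 Hz tone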

Simply measuring the combination of simultaneous vibrations that occur when any

instrument is played still does not account completely for the uniqueness of the timbre of

different instruments, as there is another major factor that comes into play before the

sound propagates through the air to reach our ears. This factor is the resonator, and it

has a significant role in determining the sound quality of the tone we ultimately hear.

The resonator in the case of the guitar is the large block of hollow wood that the strings

are attached to, that is, the guitar body. It is also the body of the violin or harp, the large

sounding board and case of a piano, and the body of a clarinet or trumpet. The resonator


strengthens (or amplifies) some of the vibrations produced by the instrument and

weakens (or attenuates) others.

Different types of guitar bodies and even different types of wood will impact the sound

differently, though perhaps only in very subtle ways. Similarly, a clarinet will sound

different if it is made of wood, metal or plastic, even though the basic physics of sound

production on any clarinet are much the same. Ultimately, it is the combined effect of all

the simultaneously occurring vibrations produced by an instrument being altered by the

resonator, then “fusing” into a complex wave pattern as they travel together through the

air that accounts for the phenomenon we identify as a musical sound.

Stringed instruments provide only one model of the acoustic properties of instruments.

Wind and percussion instruments as well as the human voice share certain acoustic

properties with strings, but each instrument has its own unique physical attributes that

demand different types of representation. A thorough discussion of the properties of

instruments is beyond the scope of this text.

In summary, sound consists of three primary stages, each with multiple components:


INSTRUMENTATION AND ORCHESTRATION

The number of different timbres that exist among the entire realm of acoustic instruments

and human voices is obviously infinite. In addition, new musical resources made

available through the use of electronic instrumentation and the computer have vastly

enhanced the composer's palette and choice of timbral combinations (more on electric

sound below). As a result, sonority in and of itself has become a fundamental organizing

force in music, and no composer can completely separate this element from his or her use

of elements such as pitch, rhythm or texture.

Sound quality is often among the most important traits in distinguishing one style of

music from another. It can be characterized in many different ways and involves not only

the type of electronic sounds or acoustic instruments that are used and the groupings they

are put into, but the ways in which they are produced or performed. Two issues that must

be addressed when discussing sonority are instrumentation, which is the term used to

describe which instruments are found in any ensemble, and orchestration, which

describes how the instruments are used in combinations within that group. Choosing the

instruments, or in the case of electronic music, “designing” or creating them, and

orchestrating them are two of the most important tasks any composer will undertake.

In electronic music, the term "instrument" is often used to describe a sound created

electronically by the composer using one of the synthesis methods described elsewhere

in this text. This instrument will likely be an original "design" of the composer or perhaps

just a modification of some pre-existing or “preset” sound supplied by the manufacturer

of a device and will incorporate whatever sound-generating or sound-manipulating

techniques the composer desires. In the world of MIDI (Musical Instrument Digital

Interface), a protocol or “language” for communication between musical instruments and

digital devices, such unique sounds are called "patches," which is a term that harkens

back to the days of analog synthesis when composers created their sounds by linking

together actual wires called patch cords (see Figure 15 below). In the MIDI universe,

patches are also known variously as "programs," "tones," or even "sounds" or

"instruments" depending on the terminology used by the device's manufacturer. A

commercial synthesizer may contain as many as 1,000 or more preset patches (known

simply as "presets") supplied by the manufacturer and will also allow the user to modify

and save those presets or create his or her own from scratch.

Fig. 15 An analog synthesizer that used patch cords to interconnect synth modules


The focus on sound quality or timbre in modern electronic music has created a new

approach to experiencing music called "timbral listening" or "timbral music." In this

approach, the sonority of the sounds is far more important than the pitched or even the

rhythmic elements. Along the same lines, a new form of composition called "spectral

composition" emphasizes relationships among the components that make up the music's

spectrum, for example, working with and developing procedures, some of which involve

complex mathematical formulae, for manipulating the ratios between the various partials

that make up a sound, or attempting to serialize the various frequencies of the sounds that

are used.

A corollary of this approach works in reverse, where composers try to mimic as closely

as possible the spectra of electronically generated sound using combinations of traditional

acoustic orchestral instruments; this reverse approach, too, falls under the heading of "spectral composition."

French composers Tristan Murail and Gérard Grisey, both of whom conducted research

and composed at the French institute IRCAM, are interested in using a traditional

orchestra to create sounds that resemble those more closely associated with electronic

music. Listen to Example 53b, Murail’s Gondwana, and note the similarity to electronic

music heard elsewhere in these readings.

One common combination of sound resources found in electronic music today is called

electroacoustic. Electroacoustic refers to the combination of a live acoustic instrument

with a purely electronic part, which might be prerecorded on a laptop or also performed

live by a second musician. An important consideration when analyzing the sonority of an

electroacoustic work is to determine the relationship between the two parts. The listener

must consider whether the prerecorded part is intended to enhance and extend the timbre

of the live instrument, perhaps resembling it in a variety of ways but performing music

faster or maybe higher or louder than the live instrument could perform. In this approach,

the prerecorded electronic part becomes a “super flute” or “super piano” and might use

electronically manipulated flute or piano sounds as all or part of its source material. A

different type of relationship between the two would be where the prerecorded part is

meant to serve as a contrast to the live one, offering a completely different sonic universe

than that of the acoustic instrument.

Listen to Example 54, the opening minutes of Mario Davidovsky’s Synchronism #6,

originally written for piano and prerecorded magnetic tape, and note the relationship of

the tape and piano parts. When and in what ways are they similar and where do they

appear to be clearly contrasted? In a performance of this piece, an engineer sits on or near

the stage and starts and stops the tape part with some flexibility to choose when that

occurs, thus becoming an integral "performer" in the music. Modern performances of

electroacoustic works might consist of the tape part being played back from a computer

that the performer is to start and stop using an electronic foot pedal or some other device,

though this would not be very practical for a pianist, as his or her feet are engaged with

the piano’s own pedals.

Now listen to Example 55, the opening of Milton Babbitt's Philomel for soprano and tape

and make similar observations about the relationship of the two main parts. What role

does the prerecorded electronic part serve? Is it an equal partner to the vocalist? Do they


share actual melodies or rhythms? Note in particular any changes in this relationship as

the music evolves.

ARTICULATION

Articulation refers to the actual manner or method by which a sound is produced on an

instrument and can be examined and discussed equally with sounds produced

electronically. Articulation is a key aspect of sonority and composers of all styles of

music use a vast number of approaches to achieve the quality of sound they desire. For

example, a melody might use notes or sounds that are smoothly connected with no

apparent space between them, a technique called legato, or the notes might be short and

detached, an articulation called staccato. Listen to Example 56, the song "Cliffs" by

Aphex Twin, and note the music's quality. What term best describes the articulation in

this example? Does it change at any point?

Now consider the next example, "4001" (Ex. 57) by Squarepusher, and notice the space

between successive notes. What happens at around :25 seconds when the second layer

comes in? These types of articulations, along with dozens of others, are found in both

notated and improvised forms of music and help give a piece variety and color. Keep in

mind that different tracks or layers of a composition might employ different types of

articulations simultaneously. In the Tangerine Dream song, "Turning of the Wheel," (Ex.

58) you should be able to identify at least three different types of articulation within the

first minute or so. How would you describe each?

In the example below, the composer has added detailed articulation markings for every

note. In addition to the symbols used to inform the performer about how to play the note,

the passage also contains very exacting dynamic markings that are used to create

changes in loudness. Dynamics (discussed below) play a large role in compositions of all

styles and are among the most powerful types of expression markings composers use to

clarify their intentions.

Articulation and dynamic markings.

Sometimes an entire passage of music contains one predominant type of articulation.

When the music switches to another form of articulation, the listener is given a clear cue

that a new passage or section of the work has begun. This is one means by which

composers use articulation to help delineate the overall structure or form of a

composition. In other instances, articulation is used simply to add an expressive quality to

a composition and does not have any particular implications for the design of the work.


The example below shows a number of symbols that are used to indicate various types of

expression marks to a performer. When an electronic work is notated, similar markings

are used to signify the composer’s intentions. Tenuto means to hold the note out to its

full value, accent and marcato are two different ways to imply a heavy emphasis on the

note under the symbol, a breath mark tells the performer where a break is allowed to

occur, tremolo means to move rapidly between the two indicated notes and repeated

note simply means to play the note repeatedly as fast as possible for the duration of the

note’s value. The tremolo example uses two different notes, which is the way non-

stringed instruments would interpret the marking. A stringed instrument can also perform

a tremolo on a single note and all of these markings can be realized by electronic

instruments just as easily:

Some common articulation markings

Dynamics

Controlling the loudness levels of entire musical passages or even single notes is an

important way for a composer to ensure that the music is performed as he or she intended.

Below are some common dynamic markings used in notated music, which are typically

written in Italian. Dynamic levels are relative – a piccolo’s ff is much louder than one

played on an English horn, and a single loud note on an electric guitar could easily

overpower a quartet of clarinets playing mf.


Dynamics are often used to shape or clarify a phrase. A long crescendo can help add

tension to a passage of music and bring it to a point of climax. Repeated, rapid changes

from soft to loud can also help build excitement or momentum. Composers often use

extreme dynamics in their work: Tchaikovsky, for example, specified markings between

pppppp and ffff in one of his symphonies, while modern composers have used more

poetic indications such as al niente (meaning “fade to nothing”) to instruct the performer

in the nuance they desire.

In electronic music, dynamics can be extremely precise. The MIDI language, for

example, provides 128 (2^7) dynamic levels (numbered 0 - 127) to indicate the value of every discrete note, as well as 16,384 (2^14) values that can be used for a gradual, continuous change in volume. (Dynamics in MIDI are called Velocity because they are a function of how fast or slow a key on a keyboard moves when pressed.) Using specialized software to create sounds synthetically, composers have access to 65,536 (2^16) or more distinct dynamic levels, which makes long fade ins and outs extremely

smooth.
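There is no standard way to translate written dynamic markings into these numbers, but a hypothetical mapping, along with a long crescendo rendered as a 14-bit controller curve, might look like this in Python:

    # Illustrative (not standardized) mapping of dynamic markings to 7-bit velocities (0-127).
    DYNAMICS = {"pp": 33, "p": 49, "mp": 64, "mf": 80, "f": 96, "ff": 112}

    # A gradual crescendo expressed as a 14-bit controller curve (0-16383),
    # split into the coarse/fine 7-bit byte pairs a MIDI device expects.
    steps = 200
    crescendo = [round(i * 16383 / (steps - 1)) for i in range(steps)]
    coarse_fine = [(v >> 7, v & 0x7F) for v in crescendo]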

Listen to the first few minutes of the Merzbow composition "September" (Ex. 59) and

note how many changes in dynamics occur. Are the changes gradual or instantaneous?

Changes in dynamics can occur either by raising the actual loudness level of a single

event or by adding or decreasing the total number of events sounding at once. How do the

changes occur here? Now listen to Example 59a, a work entitled Volumina by György

Ligeti. This composition, for electronic organ, opens with the performer pressing every

key on the organ using the maximum possible volume, then gradually over an extended

period, reducing the volume to almost nothing (you can see the effect in a waveform view


of the recording below). Clearly, dynamics play a significant role in the composer’s

concept underlying the work.

This image shows the waveform of Ligeti’s Volumina for electric organ

REGISTER

Another element of sonority that helps account for the sound quality of music is register.

As mentioned earlier, register refers to the specific segments of the overall range that is

used in a composition. By emphasizing a single register for some section of a piece, a

composer can give the music a dark and solemn quality, or, if preferred, a bright and

glittering sound. The latter occurs, for example, in the opening of Maurice Ravel’s piano

piece, Gaspard de la Nuit (Example 60), in which a bright, shimmering sonority results

through the exclusive use of the piano’s uppermost register. The overall range of most

musical instruments can be divided into different registers, and, in some cases, the sound

of these regions are so distinct that they have names. The lower register of the clarinet,

for example, which includes the notes from E3 to Bb4 (written), is called the chalumeau

register and is characterized by a dark, woody sound.

Chalumeau register of a clarinet

Listen to Example 61, Spasm by Michael Lowenstein and identify the various registers

used by the different elements in the music. Can you detect where changes in register

occur? Does the music reach the uppermost registers of the instrument or remain in a

fairly low register? Now listen to Example 61a, Proximity, an electronic work by Tokyo

Dawn Labs & Vladg Sound, and note the use of different registers simultaneously. How

many registers are there and in which registers do the main musical events occur?

Using electronic instruments, the available range of sound extends even farther than

when acoustic instruments are used, and composers have often employed tones that reach

the extremes of human perception. (The range of tones a human can hear, assuming

normal hearing, extends above and below the extreme notes playable by an orchestra.)

Moreover, sounds generated electronically can be so close in frequency or duration that

they can be beyond the limits of human perception. Though a traditional keyboard

synthesizer normally is programmed to play notes that fall roughly within the range of an


orchestra from highest to lowest, sound synthesis software allows composers to generate

tones of nearly any frequency. Descriptions such as "high," "middle," and "low" still

apply, however, when characterizing the register in which electronic sounds occur, but

far more accurate gradations are often useful.

OTHER SOUND CHARACTERISTICS

Pitch-Noise Continuum

From a scientific standpoint, a "pitched" sound is one whose waveform repeats in a

regular, periodic manner. The graph below shows the first few cycles of a sine tone

generated electronically. Note the "purity" of the wave shape and the absolute regularity

of the up and down wave motion, which results in a very pure sound with a distinct pitch,

in this case, the note A4 (A440). Listen to Example 62a and become familiar with this

tone. Because it is generated electronically, it will never change quality, unlike an actual

acoustic instrument.

Graph of a few cycles of the note A-440

Compare that with the graph and sound of a flute playing a low D-flat (Ex. 62). The flute

produces the tone closest to a pure sine wave of any acoustic instrument, but it

clearly has more components in its spectrum than just the single sine tone.

Graph of a flute playing a D-flat 4

In the next example, Example 63, a noise-based sound is shown as a complex, irregular

and non-recurring pattern of motion. This example was created using a noise generator

and will sound like static to most people; in fact, noises such as these are the starting

points for several types of sound synthesis and are used as the basis for more familiar

tones including brasslike and other sounds.


Graph of a noise waveform

Pitch and noise are polar opposites and there is a vast range of sound between them.

Identifying where on this continuum a composition's sounds fall and when and how it

might change will be an important part of assessing the sonic characteristics of a piece of

music.
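For a crude feel for the two poles, the Python sketch below builds one second of a pure sine tone, one second of white noise, and a simple mix that places a sound anywhere between them (actual music occupies this continuum in far richer ways than a plain crossfade):

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr

    sine = np.sin(2 * np.pi * 440.0 * t)                       # perfectly periodic: a clear A4 pitch
    noise = np.random.default_rng(0).uniform(-1.0, 1.0, sr)    # non-repeating: no pitch at all

    def blend(amount):
        # amount = 0.0 gives pure pitch, 1.0 gives pure noise; values between fall on the continuum
        return (1.0 - amount) * sine + amount * noise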

Envelope

Some sounds begin instantaneously or nearly so following an initial action by a

performer. An electronic organ for example, begins to sound the moment a key is

depressed. A piano also starts almost immediately after a key is struck, while a stringed

instrument, along with an oboe and some other woodwinds, will take a longer time (measured in milliseconds) to "speak." The figure below shows the first 2/100ths of a second of a piano

note.

Unlike a piano, which begins to die away after the note is hit, an electronic organ will

continue to sound as long as a key is held down, and a woodwind or stringed instrument

will make sound as long as the player blows into it or rubs the bow across the string.

These properties, which describe the "evolution" of a sound from its beginning to end and

refer equally to acoustic and electronic sounds, are a result of the instrument's amplitude

envelope.

An amplitude envelope (or simply "envelope") describes how the loudness of a sound

varies over time and can be graphed visually. In the example below, the changes in the

sound’s loudness (marked as v for volume) level are plotted on the y axis and the time for

each to occur is shown on the x axis. Note that there is a slight amount of time (no

specific time reference is given) for the sound to reach its maximum loudness level. This

segment of the envelope is marked “A” for Attack. After the sound reaches its peak,

there is a slight dip in the level that occurs a little faster than the time it took for the

Attack segment. This portion, labeled “D” for Decay, brings the sound to its “S” or

Sustain level. If this graph were showing the amplitude envelope of an electronic sound

played on a synthesizer, for example, the Sustain portion would normally last until the

performer took his or her finger off the key, at which point the final segment, labeled “R”

for Release, would occur.


The envelope above, known as an ADSR envelope, is a model for many musical

instruments. A clarinet, for example, takes a short amount of time to make a sound,

during which the player actually blows too much air into the instrument (a natural

occurrence), then finds the proper pressure level and maintains it (the Sustain) for as long

as needed. The Sustain portion on a clarinet is not nearly as steady as the example shown

above, as small fluctuations in air pressure would appear more as a wavy line for this

segment than shown here. After the player stops blowing, the reed takes just a few

milliseconds to stop vibrating, which represents the Release segment.
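A minimal Python sketch of an ADSR envelope, with made-up segment lengths rather than measurements of any real instrument, might look like the following; multiplying a tone by the envelope shapes its loudness over time:

    import numpy as np

    sr = 44100

    def adsr(attack, decay, sustain_level, sustain_time, release):
        # Build the four segments; times are in seconds, levels range from 0 to 1.
        a = np.linspace(0.0, 1.0, int(sr * attack), endpoint=False)           # Attack: rise to the peak
        d = np.linspace(1.0, sustain_level, int(sr * decay), endpoint=False)  # Decay: dip to the sustain level
        s = np.full(int(sr * sustain_time), sustain_level)                    # Sustain: hold while the key is down
        r = np.linspace(sustain_level, 0.0, int(sr * release))                # Release: fade to silence
        return np.concatenate([a, d, s, r])

    env = adsr(attack=0.02, decay=0.05, sustain_level=0.7, sustain_time=0.5, release=0.3)
    t = np.arange(len(env)) / sr
    tone = env * np.sin(2 * np.pi * 440.0 * t)    # a 440 Hz sine tone shaped by the envelope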

The term "envelope" can be used for other characteristics of a sound; in fact, it can refer

to any aspect of a sound that changes over time. This might include its pitch or its

position in a stereo field (moving between the left and right speakers), or any other

characteristic of its timbre. To be clear, when used alone, the term will be considered in

reference to a sound's changing dynamic level, and when applied to other concepts, the

term will be used in conjunction with that specific characteristic, for example, pitch

envelope or stereo-position envelope, etc.

USE OF SONORITY

Many listeners will respond immediately and intuitively to the sonority of a work - its

sound quality - almost without thinking. On first impression, a piece might sound loud

and aggressive, or maybe it’s heard as soft and soothing. A listener might notice a

“shimmering” quality upon first hearing a new composition or detect a dark, "moody"

tone. These and many other attributes can be created by the composer through the careful

interaction of all the elements of a composition, but especially by close attention to its

instruments’ capabilities and characteristics, whether real or virtual.

In classical music, sonority serves numerous functions. Before the twentieth century,

sonority was used most often to help define and distinguish the main melodic elements of

a composition. For example, it was not unusual for a composer to employ different

instruments to perform the various phrases of a melody. A flute might be used to play the

antecedent part of a phrase, while the strings might answer with the consequent. In this

way, sonority helped guide the listener to an understanding of what the main melodic

themes consisted of while clarifying their structure.


In the twentieth century and beyond, changes in instrumentation or timbre can still help

distinguish divisions of a piece, but sonority has become even more significant as an

independent musical element. For that matter, like certain styles of modern art, some

compositions, especially electronic ones, are simply "about color"- their focus is on

exploring and developing sonority and timbre above and beyond the other elements of the

piece. Beginning with Claude Debussy and other Impressionist composers (in particular

in France at the beginning of the 20th century), color for its own sake became a working

premise. Choosing from the unlimited palette of colors that the orchestra provided,

composers often attempted to create new and highly original sound combinations. Some

pieces consisted of nothing more than gradual movements from one tone color to another.

Schoenberg’s orchestral work Five Pieces for Orchestra, Opus 16, for example, contains

a movement called “Farben” (Color; Example 64) that reflects the subtle hues of a

shimmering scene by the lake, with occasional flickers of activity. The sound is a

continuously evolving melting and blending of colors, or in Schoenberg’s words, a

“melody of color” (in German, Klangfarbenmelodie). Ravel’s orchestral work Bolero

(Example 65) on the other hand, evolves by repeating a small number of melodies

layered over a recurring rhythmic pattern that a snare drum repeats a vast number of times.

The shifting and ever-changing instruments used to play the theme give the piece a

kaleidoscopic quality.

A focus on sonority as an element equal to (if not surpassing) others has become

commonplace in classical music since 1950 and in electronic and computer music in

particular. Electronic music composers have a vast range of tools available to them to

employ in their search for unique and personalized sounds, some of which will be

discussed below. Indeed, the process of designing new sounds for each new work is often

equal in importance to the actual composition and sequencing of those sonic events.

Though at first glance this may seem an incomplete or inappropriate working premise for

creating a composition, the main intention in such works is often to sensitize the listener

to the incredibly rich and varied sound quality music can provide. Moreover, popular

music of many styles now uses more advanced elements of sonority, including electronic

instruments and digital effects processing, than at any time in the past.

ELECTRIC SOUND

Since the early decades of the 20th century, composers have employed electrical and,

later, electronic devices of various kinds in their search for an expanded sonic palette. In

some cases, phonographs and tape recorders were used to manipulate prerecorded sound;

slowing down or speeding up a phonograph, for example, or cutting magnetic tape into

small pieces and recombining them into a new arbitrary arrangement. (These and similar

techniques were the precursors of modern sampling.) By the mid-1950s, composers were

able to generate new sounds by entirely electrical means, often using equipment such as

soundwave generators that were initially designed for other uses.


In the next section, several recent techniques of electronic sound production will be

covered. As before, these tools are used to give the composer an ever-greater world of

sound to manipulate. What follows is simply an introduction to each topic; additional

resources for each can be found online.

Sampling

Sound created on or by a computer is found everywhere in today’s music. In some cases,

a computer is used to manipulate or process acoustic (natural) sounds that have been

previously recorded. This technique, called sampling, is common in both popular music

and in electronic (“computer”) music that is intended for the concert hall, and because it

uses natural sounds, it gives a composer the potential to use any sound imaginable in his

or her work. Sampling is also commonly found in music that accompanies visual media

such as film or games, for which a sound designer creates the effects requested by a

director or producer. (Creating sounds for use with other media is a process known as

sound design.)

A sample is typically a short sound, either electronic or, more likely, acoustic, that has

been recorded onto a computer or onto a standalone electronic hardware device called a

sampler. Once it has been recorded, it can be manipulated in a vast number of ways. For

example, a sample can be pitch-shifted (transposed) under the control of a MIDI

keyboard. Example 66 illustrates this effect, playing first a sample of a soprano singing a

short phrase, then a version of the phrase shifted one octave down that retains the original

duration. If the sample happened to be the sound of a trumpet playing a middle C, then

when the performer played the note middle C on the keyboard, the sound at its original

pitch level would be heard. But if a D, E or other note were played on the keyboard, then

the trumpet sample would automatically be pitch-shifted by the sampler and would play

back at that new pitch.
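The crudest way to pitch-shift a sample is simply to read it back at a different speed, which changes the duration along with the pitch (exactly like speeding up or slowing down a tape). Samplers that keep the duration constant, as in the soprano example above, use more elaborate methods, but the speed-change version can be sketched in a few lines of Python (the trumpet variable names are hypothetical):

    import numpy as np

    def resample_shift(sample, semitones):
        # Naive pitch shift: read the original sample back faster (higher and shorter)
        # or slower (lower and longer), like changing the speed of a tape machine.
        ratio = 2 ** (semitones / 12)                      # +12 semitones doubles the read speed
        positions = np.arange(0, len(sample) - 1, ratio)   # fractional read positions
        return np.interp(positions, np.arange(len(sample)), sample)

    # e.g. shift a recorded middle-C trumpet sample up two semitones to sound a D:
    # d_version = resample_shift(trumpet_c4, 2)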

Pitch-shifting works well if the original sound is pitch-shifted no more than a fourth or

fifth up or down, but is not effective when the original sound is transposed more than that

amount. As a result, musicians often use a technique called multisampling, whereby

multiple notes of the instrument are sampled – perhaps every three half-steps – and no

sample would need to be shifted more than just a few steps. (For a variety of reasons –

memory limitations, most notably – it is not feasible to sample every note of an

instrument. New approaches to sampling are now bypassing such limits, however.)

A sample could also be time-stretched, whereby it would be lengthened or shortened

without having its pitch changed. Unlike simply changing the speed of a tape recorder or

record player, where the sound would slow down or speed up and the pitch would be

altered, time-stretching allows the composer to alter the length of a sample yet keep it at

its original pitch. This has both corrective uses, for example, changing the length of a

music cue that is intended to accompany a specific scene in a video, or artistic ones.

Example 67 illustrates time-stretching, playing first the same soprano sample used in

Example 66 stretched 5 times its length, then stretched 15 times its length with the pitch

staying the same in both cases. Filtering is a very common process in which a sound’s

spectral makeup is altered. A filtered sample could be made to sound “brighter” or

“duller” than the original, or it could be made to sound as if it were emanating from a tin


can, a large gymnasium, an old AM radio or even underwater. It could also be altered

into an unrecognizable state. Example 68 is a drum loop played in its original version,

then played using two different filtering effects, one after the other.

Another type of filtering, called convolution, can be thought of as “spectral cloning,”

where the spectral characteristics of one sound are applied to another. This can produce

effects like a cat singing or a baby meowing. Example 69 begins with two cat meows,

which are followed by a single note on a Jew’s harp, then a convolution of the cat with

the Jew’s harp. It appears that the cat is actually inside the Jew’s harp, creating the sound.
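Behind the scenes, this kind of "spectral cloning" rests on the fact that multiplying two sounds' spectra is equivalent to convolving them in time. A bare-bones Python sketch of the idea follows; real convolution tools usually work on short, overlapping slices of the sounds rather than the whole files at once, and the sound variables here are hypothetical:

    import numpy as np

    def convolve_sounds(a, b):
        # Multiply the spectra of the two recordings (equivalent to convolving them in time),
        # so the result carries the spectral fingerprint of both sources.
        n = len(a) + len(b) - 1
        spectrum = np.fft.rfft(a, n) * np.fft.rfft(b, n)
        return np.fft.irfft(spectrum, n)

    # e.g. blended = convolve_sounds(cat_meow, jews_harp_note)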

In the modern studio, a composer is likely to use sampling techniques one of three ways.

First, a traditional hardware sampler, one that can record sounds and play them back via

MIDI control, is a viable option. Companies such as E-MU and Kurzweil have made very

high quality hardware samplers over the years, many of which are still in active use.

Next, composers might employ software samplers such as those from Native

Instruments and Steinberg, which are computer programs that offer features very similar

to their hardware counterparts but exist entirely in the "virtual world" on the computer.

Ironically, most software samplers do not actually sample - they have no sound recording

capabilities. Rather, it is assumed that composers using such software would have other

means to get their audio onto the computer. Yet the ability to integrate a software sampler

into a complex audio production environment, where sound from one program can easily

be sent directly to another with no wires to attach or cords to plug in, or where the limits

of hardware samplers in the amount of data they can store are entirely mitigated, is a

huge advantage over using the hardware option.

Finally, many of the same techniques that would be available from any type of sampler,

hardware or software, are now being performed in multitrack digital audio editors,

(often referred to as DAWs, for Digital Audio Workstations). These programs, of which

Digidesign’s Pro Tools is the best known, let the composer organize a vast number of

overlapping sounds of any length or complexity along a timeline with very exacting

control as to the start time, duration, loudness level, stereo position and other aspects of

their playback. Of course this method of arranging sonic events on a timeline cannot be

done in real time the way the sounds in a hardware or software sampler can be performed

live from a keyboard. This is not a limitation for most studio musicians, though, and it is

not unusual for composers using such tools to work on a single piece for many months.

Note below the appearance of a MIDI software program in which instructions,

represented by dashed lines, are being sent to a sampler informing it when a note is to be

played and for how long. Below that is the interface from a digital audio editor in which

the actual sounds, not simply instructions to another device to begin playing, are shown.

Composers will use one or perhaps both of these approaches depending upon their own

personal preference.


A MIDI sequencer sending instructions to a synthesizer

A digital audio editor sending actual sound files to loudspeakers

Now listen to excerpts from two works, Bill Brunson’s Inside Pandora’s Box (Ex. 70)

and Christopher Calon’s Les Corps Ebouis (Ex. 71), which use sampling extensively. The

exact methods used to process and transform each individual sound cannot usually be

determined simply by listening (though it is assumed none were simply played live from

a keyboard and recorded), but it is certain that the composers took great care in ordering

and manipulating each of the huge number of individual events that make up the

composition.

Sound Synthesis

A second approach to creating sound on the computer is called synthesis, and with this

approach, a sound is generated by the computer or a hardware synthesizer (which

contains a microprocessor) literally “from scratch.” In sound synthesis, the computer is

given an algorithm, which is a set of instructions (or “recipe”) that describes what


processes or operations it should perform to create the desired sound. Unlike sampling,

which manipulates natural sounds, sound synthesis can be used to create new and unique

sounds that could never occur in the natural world.

Sound synthesis is used in a number of commercial software programs, some of which

provide the user with colorful, graphical interfaces for designing sounds of any

complexity. Reaktor, by Native Instruments, is an example of this type of visual

interface. Each of the small boxes in the example below represents some type of sound-

generating or sound-manipulating process, and the result of all these processes interacting

is the sound a user hears when a trigger note is sent from some MIDI device.

Main interface for Native Instruments Reaktor

There are also several sound programming languages dedicated to synthesizing sound

entirely in software. Csound, developed by Professor Barry Vercoe at MIT, is the most

popular of this type of application. The text below shows the kind of programming code that

would be used by the Csound language to create a simple sine tone, the most basic sound

in nature. This tone will have a frequency of 440 (A above middle C) and a relative

loudness of 5000 (out of a possible 32,000). The instructions regarding how long the tone

should last would appear in a separate file:
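(The original listing is not preserved here; the following is a representative Csound instrument along the lines the text describes, assuming the classic oscil opcode and a sine-wave function table defined in the score file. The header values are illustrative and may differ from the author's original.)

    sr     = 44100          ; audio sample rate
    kr     = 4410           ; control rate
    ksmps  = 10             ; samples per control period
    nchnls = 1              ; one output channel (mono)

            instr 1
    a1      oscil  5000, 440, 1   ; amplitude 5000, frequency 440 Hz, table 1 (a sine wave)
            out    a1             ; send the signal to the output
            endin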

You can hear the sound in Example 72.


There are many different synthesis methods available to the computer musician, each of

which has its advantages and disadvantages and each of which will generate its own class

and category of timbres. Frequency Modulation (FM), for example, is particularly

useful for synthesizing metallic, brass and percussive sounds. Listen to Example 73,

which starts with a harsh sounding FM phrase followed by a more-musical bell sound

also created with FM. FM works by using one sound wave to change (or modulate)

another sound wave. Additive Synthesis is a process in which many dozens of individual sine tones, all with different frequencies and in varying amounts, are added together. It is

effective for producing vocal timbres as well as flute and other woodwind simulations,

organs, and more.

Listen to Example 73a and you may notice that this additive synthesis excerpt, created on

a synthesizer offering hundreds of sine waves for manipulation, resembles the sound of

an organ. In fact, organs from the earliest times used primitive forms of additive synthesis

to generate sound. Because it operates with the most basic components of all sounds,

additive synthesis has the potential to recreate synthetically nearly any sound imaginable.

But the effort and computations required to synthesize highly complex sounds such as a

grand piano are rarely worthwhile, especially because other, newer techniques are far

more effective for that task.
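Simple two-oscillator FM of the kind described above can be sketched very compactly: one sine wave (the modulator) continuously bends the phase of another (the carrier), and the modulation index controls how harsh or bell-like the result sounds. The Python lines below use illustrative frequencies and are not drawn from any particular synthesizer:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr

    carrier = 220.0       # the frequency we perceive as the "note" (Hz)
    modulator = 660.0     # the wave that changes (modulates) the carrier (Hz)
    index = 4.0           # modulation index: larger values give brighter, more metallic spectra

    # The modulator continuously shifts the carrier's phase, creating sidebands
    # around the carrier - the basis of FM's metallic and bell-like timbres.
    fm_tone = np.sin(2 * np.pi * carrier * t + index * np.sin(2 * np.pi * modulator * t))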

Subtractive synthesis is perhaps the most common of all synthesis methods and has

been used in nearly every genre of electronic music. It employs a sound source such as an

oscillator (a digital function used to generate a basic waveform) or noise generator, a

filter to shape the sound, and an amplifier to control the final output level. All of these

processes are modeled in software; like the others, there is no hardware required by this

method. Listen to Example 74, which begins with a fairly complex waveform called a

sawtooth wave, then ends with an elaborate, animated sound created by modulating the

waveform using filters whose own characteristics change over time.
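The source-filter-amplifier chain can be imitated with a few lines of Python: a noisy source, a very simple one-pole low-pass filter whose cutoff sweeps upward over time, and a final gain stage. Commercial synthesizers use far more sophisticated filters; this is only a sketch of the signal flow:

    import numpy as np

    sr = 44100
    n = sr * 2                                                  # two seconds of sound
    source = np.random.default_rng(1).uniform(-1.0, 1.0, n)    # bright noise source

    # Filter: a one-pole low-pass whose cutoff rises over time, "subtracting"
    # high-frequency energy from the source and gradually opening up.
    cutoff = np.linspace(200.0, 4000.0, n)                     # cutoff frequency in Hz
    coef = 1.0 - np.exp(-2 * np.pi * cutoff / sr)
    filtered = np.zeros(n)
    y = 0.0
    for i in range(n):
        y += coef[i] * (source[i] - y)
        filtered[i] = y

    output = 0.5 * filtered                                    # amplifier: set the final output level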

One of the newest synthesis techniques, Physical Modeling (PM), is a novel approach

for creating the sound of both highly realistic acoustic instruments and completely other-

worldly virtual instruments, such as the sound of a 10-foot long glass flute or string or the

sound of an instrument that gets larger while it is playing. Physical Modeling works by

analyzing the most significant physical properties of an instrument that determine its

sound, such as its length, width and the material it is made of, as well as how air travels

through or across the instrument, then generating a mathematical formula that models all

those properties. All of the sounds heard in Example 75, an excerpt of a composition

entitled Monostique by Philippe Dérogis, were created using this method. What types of

natural materials do you hear represented?
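Full physical models are well beyond a few lines of code, but the classic Karplus-Strong plucked-string algorithm gives a taste of the idea in Python: the length of a short delay line stands in for the length of the string (and so sets the pitch), and a gentle averaging stands in for the string losing energy. This is offered only as a simplified illustration, not as the method used in Monostique:

    import numpy as np

    def pluck(frequency=110.0, duration=2.0, sr=44100):
        # A burst of noise circulates in a short delay line; on each pass it is averaged
        # with its neighbor, imitating a plucked string that gradually loses energy.
        delay = int(sr / frequency)                  # delay-line length models string length / pitch
        buf = np.random.default_rng(2).uniform(-1.0, 1.0, delay)   # the initial "pluck"
        out = np.zeros(int(sr * duration))
        for i in range(len(out)):
            out[i] = buf[i % delay]
            buf[i % delay] = 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
        return out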

A modern electronic music studio (often called a “home” or “project” studio) will no

doubt be based largely around a personal computer on which many different types of

software for creating, editing and notating music have been installed. A studio might also

contain a number of actual hardware devices, for example a sampler or sample player

(which could play back actual sound files but not record them), one or more synthesizers

(either with or without a keyboard attached), and some type of keyboard or other MIDI


controller. A controller is a device used to transmit performance instructions to a sound-

generating module such as a synthesizer or sampler for real-time performance or to a

computer to be recorded. Controllers can take the shape not only of keyboards, but also

guitars, wind instruments, electronic drums, etc. More recent controllers can even convert

and transmit brain waves to an electronic instrument.

An electronic-wind instrument (EWI), a form of MIDI controller

A well-equipped studio would also contain audio hardware such as an amp, a mixer,

headphones and speakers (called “studio monitors”), as well as processing gear to

manipulate sound, for example reverb and delay units, compressors, EQs, and more. All

of these hardware devices have been modeled (simulated) by software applications,

however, and today’s studios, even those of many professionals, have become almost

entirely computer-based.

Complete Listening Assignment - Sonority now

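FORM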

Form refers to the overall outline or design of a musical composition. It is the structural

element of a piece of music that helps account for that composition's sense of long-range

coherence and continuity. Because it can involve materials that extend over long spans

of time, the form of a work is often difficult to perceive on the first hearing. However,

composers use a number of means to give their music coherent shapes that can be

recognized and understood by the listener, though perhaps only after repeated listenings.

Musical compositions are organized in many ways and on many different levels. Certain

connections among the elements of a piece are obvious or at the surface. For example,

two sections of a piece might be in the same key, use the same instrumentation or focus

on the same electronic sound. Or an entire composition might consist of a series of

variations on a simple rhythmic idea stated at the outset. Other connections between and

among the elements in a work are more subtle, often hidden beneath the surface. For

example, the beginning and end of a large-scale composition might use elements that are

similar to one another but varied just enough so that no connection between them is

immediately apparent.

The way in which the musical elements of a work are interrelated—the way in which

they are shaped and held together—determines the overall form and design of a

composition. Using and reusing elements, introducing them at important moments along

the roadmap of a piece, allows the composer to create large-scale connections among the

sections of a musical piece and helps ensure that the work as a whole is unified and

coherent. The same idea is relevant for improvised music – a performer might build an

improvised musical performance around a central theme or idea that recurs in different

versions throughout the piece.

Repetition and Contrast

Regardless of the musical style or era, several general factors are important in the

creation of a cohesive musical composition. Among the most important of these are the

principles of repetition and contrast. Repetition, the reuse of the key elements in a work

either immediately after they first occur or at later moments, helps the listener become

familiar with the principal melodies, themes, sonorities, rhythms, and other materials that

make up the piece. It creates a sense of unity and continuity for a work, whether a

popular song, electronic piece or a symphony, and helps establish reference points and

associations that will assist the listener in following the logic, direction and goals of the

music.

Contrast, on the other hand, is the use of materials that are totally new or radically

different from others used elsewhere. It is essential for giving a work variety, for keeping

it from becoming monotonous or predictable, and for creating a sense of surprise, tension

and anticipation. The bridge section of a popular song, often in a different key and using

a new chord progression, is a simple example of contrast and is the section where a song

moves away from the musical elements it had been using up to that point. Just like a

contrasting new section in a classical piece, the bridge adds variety to the song, and when


it ends, there is most often a return to the familiar music that preceded it. Similarly, a

composer might use any of the musical elements above to add contrast to a composition.

For example, a piece may begin with a very soft, quiet section that contains only a few

isolated, short events, then gradually (or perhaps abruptly) shifts to a section with

numerous long, loud sounds – the possibilities for incorporating contrasting materials of

any type are obviously infinite, and it is important to recognize exactly which elements are

being used to create the contrasting music and which (if any) have remained the same.

A successful balance of repetition and contrast, between the reuse of existing materials

and the introduction of new materials, must be maintained by a composer if his or her

music is to present itself as a coherent, continuous entity. Very few musical forms are

based entirely on exactly repeating material or the presentation of entirely new music in

every section.

Tension and Repose

Tension and repose offer another dialectic for organizing a composition. Especially with

music that is not based around traditional melodic or harmonic landmarks, the contrast of

moments of heightened activity and sections of little or minimal action in the music can

be a vital force in propelling a composition forward.

Tension is created in a variety of ways. A composer might build up the listener’s

expectations by continuously repeating a rhythmic or melodic idea. This might go on for

an extended portion of time, then stop abruptly. The listener is then unsure whether the

theme will return or if something new will happen – the moment is tense because the

listener does not know what to expect at this point. The more a composer affirms the

expectations of a listener, the higher that listener's "comfort level" will be. On the other

hand, constant surprises and denials of expectations are likely to produce a lower comfort

level and result in increased tension.

Contrasting fast rhythms, dense textures and/or loud passages with slow rhythms, thin

textures and soft passages over either a short time or long time frame can also set up a

conflict between tension and repose. This conflict is often very important as an

organizing principle in music and should be a part of any examination of the element of

form.

ANALYZING FORM

One key challenge in analyzing the form of a work is determining where "to draw the

line," that is, where to make a distinction or division between one part of a work and the

next. Equally important is knowing whether changes that are detected represent short-

term divisions or are major formal landmarks in the music. Another goal is to identify

representative elements within the music, whether melodic, rhythmic, harmonic or

timbral, and determine whether they represent fundamental building blocks that recur and

are reused in various transformations or if they are simply secondary or transitional

elements that appear as contrast to elements more central to the music. There is no single

or simple way to accomplish these tasks, but understanding form will always involve


repeated listenings: the more familiar you become with the music, the easier the task will

be. Here are some key steps.

First do a little research on the composer or performers. From what musical era does the

work come? Does it fit easily within some recognizable musical style or genre or is it

more of a hybrid of several styles? You can gain some expectations about form if you

know the music was written during the 1950s and not the 1990s or that the composer is

known as a minimalist or a serialist or what have you. Does the composer have a

particular technique that he or she is known for? For example, is he or she associated

with a synthesis technique such as FM or granular synthesis? Who are the composer's

influences? Where did he or she study and is that institution known for perpetuating

certain compositional approaches? How has the composer’s style changed over time?

Also, find out if the composer has written something about the work itself.

Next, listen to the piece repeatedly until you become very familiar with its intricacies.

Listening to the work multiple times will make you familiar with the overall flow of

musical events and help you better understand both the large-scale structure of the work

and the subtle details.

If at all possible, acquire a score of the piece you are going to analyze or, failing that,

make your own using software such as Variations Audio Timeliner

(http://variations.sourceforge.net/vat/). Most purely electronic works do not have a

printed score, but works that combine live performers and a prerecorded part (i.e.,

electroacoustic music) are usually represented in some visual form. Composers work

very hard at including important structural information in their scores. After all, it will be

easy to detect sections that repeat or have texture or sonority changes just by looking at

the music itself. Look for and highlight repeat signs, double bar lines, second endings,

Da Capo markings and any sectional information that you see on the printed score.

Next, detect and mark areas where there are significant changes in timbre, dynamics or

meter. Large-scale repetition, motivic connections, moments of tension and release and

structural breaks will all be clearly seen (and probably heard) in the musical score. Listen

again and verify that you have marked the changes that sound most important. Note these

same elements if you have made your own timeline.

You might find it helpful to write down your responses to the piece you are listening to.

For example, noting that a high sustained brass-like tone entering after a quiet pad-like

section was “shocking,” or that the loud percussive gesture just before the final dense

chordal ending seemed “really dramatic.” This kind of observation can guide you

through a piece of music in a more informed way. Don’t be surprised that you will hear

new things each time you listen.

Next, determine whether the work is sectional or continuous. A sectional work will

most likely have clear, distinct areas that should be easy to identify and isolate. You

might hear strong cadences at the end of each section, significant changes in the tempo of

the music, the introduction of entirely new timbres, changes in register or texture, or other

cues. Sectional works tend to be balanced in length, meaning it’s likely that the sections


will be relatively the same in length, but there is, of course, no guarantee that that will be

the case. It’s also possible that a sectional work will have a single section of music that

simply repeats, perhaps with minor changes or the addition of a few new elements each

time. For example, note in Example 76 that a single section of music repeats multiple

times. Within the larger section “A” are four smaller sections, which would be labeled

using small letters “a” and “b”, or if the listener thinks the second melody is different

enough from the first, then the “b” phrase would instead be labeled “a1” (“a prime”). A

graph of the form would look like this (the numbers represent the number of measures of

4 beats each per phrase):

A transition A etc.

a a b b a a b b

4 4 4 4 4 2 4 4 4 4

(inst) (vocals)

Follow the graph above as you listen to the music and note the difference in the 4-

measure melody between its first two repetitions and the second two. Decide if you think

it is different enough to be called a “b” phrase or if “a1” is better in your opinion.

Continuous forms do not have clear sections. Works of this type, which are often called

through-composed, simply spin out their ideas along lines of continuity and

cohesiveness determined by the composer. As in other, more structured forms, one

would still expect to find a balance between similarity and contrast—unity and variety—

that would guide the composer in his or her choice of material. Moving from loud,

aggressive music after a long, extended soft and slow section, for example, or shifting to

music mainly in the low register after a section mostly in the upper register are both

possible ways of explaining how a composer organizes the materials in a piece. Note

any recurrence of melodic, rhythmic or timbral elements and determine if their

reappearance signifies a meaningful division in the work.

Listen to Example 77 and notice that there are no recurring elements that define or

distinguish one section of the music from the next. New elements appear and disappear

essentially at will over the recurring drum part. The listener can’t really predict how long

any one melodic idea will last or if it will be repeated or when a new idea might appear.

This form is continuous and through-composed (it is also mostly improvised in live

performance). Continuous works can often be unpredictable, which can produce a feeling

of excitement and anticipation in a listener or, on the other hand, may make the listener feel

uncomfortable. These qualities are mostly under the composer’s control and are

often used to shape the response he or she wants from the audience.

Some through-composed works are episodic, meaning they contain numerous short

“episodes” consisting of musical ideas that are developed over a short span of time and

then moved away from. Other pieces might use dynamics as a structural element - the

music simply gets louder from the beginning to the midpoint, then gets softer again from

the middle to the end. This is one of many ways a composer might create a giant arch

form, which is a formal shape used in various ways throughout music history.
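To see how dynamics alone can carry a formal shape, here is a very small sketch (again in Python, illustrative only; the numbers are invented and not taken from any piece discussed here) of an arch-shaped loudness plan that rises from near-silence to a peak at the midpoint of a work and falls back again, the shape described in the paragraph above.

    # An arch-shaped dynamic plan: loudest at the midpoint of the piece.
    def arch_loudness(position, peak_db=0.0, floor_db=-60.0):
        """position runs from 0.0 (start of the piece) to 1.0 (end)."""
        distance_from_midpoint = abs(position - 0.5) * 2  # 0 at the middle, 1 at the edges
        return peak_db + (floor_db - peak_db) * distance_from_midpoint

    # Print the plan at evenly spaced points through the piece.
    for i in range(11):
        pos = i / 10
        print(f"{int(pos * 100):3d}% through the piece: {arch_loudness(pos):6.1f} dB")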


Because there is no preset arrangement of the materials in a through-composed work and

there is typically no repetition of distinct sections, through-composed music is more

challenging for the listener to follow (and for the analyst to analyze). Yet every new piece

will expose its “logic” and order over time to the patient listener.

Indeterminacy

Another approach to creating a musical form involves the use of indeterminacy, which is

a technique of composing in which the composer leaves many aspects of a composition’s

elements to chance or to decisions made on the spot by performers. American composer

John Cage was a major proponent of this approach and used techniques such as rolling

dice or simply putting numbers on paper, then randomly reshuffling them to determine

how long each section of a piece might be. He and others also used graphical notation,

which involves creating elaborate pictures with a variety of shapes and forms whose

“meaning” performers are supposed to interpret as musical instructions. Clearly,

indeterminacy will produce music that never flows or evolves the same way twice –

each “realization” of a musical performance will be different in some way, which is what

the composer intends.

For example, in his piece, Fontana Mix (1958), a work scored for “any number of tracks

of magnetic tape, any number of performers and any number of instruments,” the

performers use the image below to guide them in their decisions about the musical

material of the work.

Score for John Cage’s Fontana Mix (1958)
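As a thought experiment (and not a reconstruction of Cage’s actual working method), the chance procedure described above - writing down a set of values and then reshuffling them - can be imitated in a few lines of code. In the sketch below the candidate section lengths are invented for illustration; the point is only that every “realization” produces a different ordering.

    import random

    # Candidate section lengths in seconds: the "numbers on paper."
    # These particular values are invented for illustration only.
    section_lengths = [15, 30, 45, 60, 75, 90]

    def realize(lengths):
        # Shuffle a copy so each realization reorders the sections by chance.
        plan = lengths[:]
        random.shuffle(plan)
        return plan

    for performance in range(3):
        print(f"Realization {performance + 1}: {realize(section_lengths)} seconds per section")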


To begin the analysis of a musical work, ask some questions about the structure of the

music. The questions below are a guide to help you refine your analysis. Not all the

questions will apply to all music. Remember that the purpose of formal analysis is to

deepen your understanding of the organizational principles of the music you are

experiencing.

1) How unified is the musical material? Are there obvious contrasts of thematic and

non-thematic elements? Are the melodic themes strongly contrasted with each

other? Are the themes clearly stated? (Use your understanding of melody to

determine your answers here). If melodic themes are not found, what other

elements seem to be the central focus?

2) Is there a transition between the thematic elements of the music, whether those be

melodic, rhythmic, harmonic, etc.? If so, what does the transition bring about: A

change of key? A change of texture? Sonority? Do the transitions serve more to

link or separate the materials? Does a new theme arrive during the transition or

after some section of the piece has been reached?

3) Is the piece sectional? If it is, what is the overall tonal plan of the music? At

what points is the tonic clearly established? Do these coincide with the statement

of familiar material or do they bring new material? If the work is not tonal, then

how are the sections distinguished from one another?

4) Does the music come to a stop at some point before the piece ends? When does

the stop occur and what does it signal? Is the stop a moment of tension or of

release? Does the music repeat after this stop or does it continue to new material?

5) When, where and what is the moment of highest tension? What are some of the

characteristics that give this moment its identity?

6) Does the moment of most significant repose occur soon after the moment of

highest intensity?

7) Does any of the opening material reappear near the end of the work? Is this a

literal repetition? In what way has the opening material been altered (change of

key, change of instrumentation, change of register, etc.)?

8) Is there any musical material that appears only once in the piece? Can you

explain why that material is not repeated? Is it just filling space, killing time, or

transitioning between more important sections?

9) Does the piece have a “follow-up” section after it seems to have concluded? If so,

why? What does this add to the composition? Does it impact the balance of

tension and repose in the piece?

10) Identify the phrase lengths – are they symmetric or asymmetric? Do they

fluctuate between the two? How do the phrase lengths affect your experience of

the work? Do these elements become predictable (do you “tune out”)?

11) Count the number of measures for each section you detect. What are the

proportions of each section? Are they equal in length? Unequal?

12) Does your understanding of the form of the piece accurately reflect your

experience of hearing it? Can you follow the roadmap set out by the composer?

13) Does your written analysis reflect your understanding of the piece? Have you

clearly explained your observations?


Your written analysis of a composition should help a reader understand the way you

experienced the music. Try to be convincing in your writing and be specific when giving

representative examples: “Event X occurs in measure Y and from that point forward, the

music does such and such …” This helps your reader understand what you believe the

important landmarks in the work are.

Though there can be correct and incorrect parts of the same analysis (a phrase may or may

not be symmetric, for example, or a modulation may not have occurred where you

thought it did), each analysis is a reflection of one listener’s experience of the music.

Everyone’s experience will be slightly different, but the more compelling, convincing

and accurate your powers of observation are, the more likely you will persuade others to

experience the piece in the way you have.

SUMMARY

The theorist Wallace Berry in his book Form in Music (1986, Prentice-Hall) speaks of

five overriding elements that govern a vast number of musical compositions. These are:

“the process of introduction,” in which the major elements that will be used in a work are

first prepared and in which “expectation” of what is to come is created; “the expository

process,” in which a statement of the principal thematic materials occurs; “the process of

transition,” where the music moves from the expository section to what is to follow; “the

developmental process,” where musical activity is “intensified” and where elements from

the exposition are “reviewed” and “explored;” and “the process of resolution,” in which

“closure and conclusion” occur. Though not every work will use each of these processes

in a clearly defined way, they are the underlying principles that operate in a great number

of musical compositions from all eras.

Not coincidentally, the same processes could be identified in other time-based art works,

such as film, theatre, certain forms of dance (particularly classical ballet) and fine-art

animation. They might equally apply to a novel or epic poem.

Being familiar with representative forms from different musical eras and cultures is a

significant part of understanding and appreciating music and will help the student expand

his or her understanding of how musical processes work. Form may not be the most

obvious musical element to detect but, ultimately, it is what makes a composition most

successful and satisfying to the listener. Listen carefully to each new piece and try to

detect what the composer is “saying,” and try to determine how they are saying it. Then

listen again and see what new information you can acquire. Gradually and over time,

every new work will reveal itself to you.

Complete Listening Assignment - Form now


Approaches to Music Analysis

Though many electronic compositions will reveal characteristics that conform neatly to the musical elements discussed in the above units, other compositions, especially those intended for playback in the concert hall, consist of materials that cannot easily be identified or defined by the listener. When analyzing music of this type and writing about it in a listening report, other approaches are needed to best describe the composition, and new guidelines for listening and discussion must be used. For example, when a musical composition does not illustrate clear melodic or harmonic characteristics, or when its rhythm and texture can’t easily be described using terms such as “meter” or “homophony,” the listener might find it helpful simply to describe the events in the music as they occur from beginning to end, outlining the music’s main activity as if it were a running commentary or “play-by-play” narrative. With this approach, recognizable musical elements can be alluded to if and when they occur, but the listener has more flexibility in his or her use of terms to describe the music being heard.

Every piece of time-based art will have different priorities and will express its meaning in a distinct and perhaps unique way, so it’s important to identify which aspects of the composition are primary - which recur and become significant motivating elements in the music - and which are only secondary and do not seem to play a major role in the unfolding of the music. These and other topics should be considered when evaluating a new work for the first time.

Read the list of topics below, which should be addressed in your own listening reports, then listen several times to Example 78, the opening minutes of “The Wild Bull” by Morton Subotnick. When you are familiar with the music, read the listening report that was written by a student for this music, then complete the online listening assignment (Listening Assignment - Analysis) on Blackboard and answer the questions using terms and concepts that you feel best fit the music. Note that when you write your required report, you do not need to write it as a long series of answers to these specific questions. Instead, write it as a continuous narration that covers some or all of these points and anything else you feel is important for the music. Be sure to use a music player that displays elapsed time so you can identify exactly where important events occur. Use the time format you see below (01:15, for example, for one minute and 15 seconds of elapsed time into the music).

Describe the sonic elements at the opening of the piece. Are the events that appear there sustained throughout the work? If so, how and for how long? Do they recur later?

Explain how new elements appear in the work. Are they introduced by foreshadowing or do they enter abruptly? Is there a smooth fade-in or crossfade between one element and another? Give some examples using specific timings as a reference.

At what rate or pace do new elements appear? Is the work static for long sections? Are there sections where new events appear at a rapid pace? Are there any sections of the work that occur totally "without warning," i.e., that were not in any way anticipated? Were you surprised at their appearance? If so, why?


In the music, do the timbral elements bear any resemblance to the timbres of acoustic instruments? Are any of the sounds vocal or percussive in quality, for example, or do they appear to be modeled after any other family of acoustic instruments? Are there one or more predominant synthesis or signal-processing methods, and if so, which one(s)?

Is the spatial dimension of elements in the work clearly defined? Does the sound seem up close and “in your face,” is it well back in a huge cavernous environment, or is it somewhere in between? Does this aspect of the music change?

Does the music have clearly defined pitch elements? Can you detect what type of pitch system is in place?

How important is spatial/stereo movement in the music? Do sonic elements move between the two stereo channels? Do elements move from the front of the soundstage to the back or vice versa?

What registers does the music use? How important is the use of register in your opinion?

Does the work have clear-cut sections? If so, how many are there? How are they distinguished from one another? If not, what keeps it moving?

Describe the ending of the piece. Was it anticipated or was it a surprise? Do you hear a strong cadence, a sense of closure, or does it seem as if it might have continued beyond the end point? Explain your reasoning.

What other elements of the music seem especially significant for this piece?

Listening Report: The Wild Bull

This report will discuss the piece The Wild Bull by Morton Subotnick. The Wild Bull was written in 1968 and was intended for playback on prerecorded tape. The composition uses a Buchla synthesizer as its only sound source.

The Wild Bull opens with a descending tone sweeping from the middle register to the bottom. The tone has no clear pitch and is moderately loud. This is followed by a lengthy silence, then a second similar tone that descends farther and over a longer amount of time. Another silence follows, then the original tone is heard accompanied by a short second sound that enters at the same time. This new sound is almost like a bull roaring.

A series of non-pitched bass-drum-type sounds enters at around 1:10 and lasts for a few seconds; then, after the bass-drum sounds start to accelerate, a loud clanging metallic timbre enters at 1:20 and continues for the next 10 seconds, with more and more layers of very short, quick sounds adding to it. Some of the new layers sound as if they are underwater. The opening section seems to end at around 1:30, where a new sustained tone in the mid to upper register enters.

The music beginning at 1:30 alternates between a fairly high tone that is repeated several times and other pitches, some of which sustain, in different registers. The high tone often sweeps upward like a siren. Even though the high pitch often repeats, it does not sound like a tonic, as there is no clear connection between the high repeating tone and the other notes that are sounding at the same time. The high tone has a timbre somewhat similar to a trumpet or some other brass instrument. At around 2:08 the rhythm of the high note becomes more active and the tone is heard many more times than it was originally. The other pitches continue to sound through this section but still sound as if they are background to the high repeating tone. At 2:28, the high note is no longer heard but the background sounds remain. The pace of this section is fairly slow and there is not a lot of activity. Occasionally, a new sound will appear in the upper


register, but nothing major happens until around 3:15, when some of the drum-style sounds heard earlier appear again in conjunction with a lot of other sounds that come in in different registers. All of these sounds have a clear electronic quality and last for only a few seconds each. There is no feeling of any tonal home base. Instead the notes seem to be randomly placed up and down the various registers. There is also nothing that could be called a unified melody at this point, as the notes do not feel connected to one another. This type of music continues for several minutes, but around 4:15 the music begins to get very gradually louder.

At 4:50, the music suddenly changes and seems to be getting faster as more and more sounds come in almost on top of one another. The texture here seems to be polyphonic, as it’s easy to hear several distinct layers in the music. It is also still atonal and many of the sounds do not have a clear pitched quality. Starting around 5:40 a long series of low notes is heard in succession, but here again none of the notes sound as if they are any type of tonal center. While the low notes are playing, other notes appear at random points in different registers. Some of these are sustained tones but most are short notes, and some are louder than others. The long sequence of short, quick notes continues for several minutes, but during this time the music moves higher and higher until it reaches the upper register. At around 8:00 it becomes even more chaotic, with notes sounding in nearly all registers. Also at this point, the notes get longer and more sustained and begin to overlap. The individual pitches are also less clear, as many of the notes seem to sweep upward or downward very quickly.

The entire pattern changes at 8:21 when non-pitched percussive notes enter. These tones sound like wood or breaking glass. At 8:27 a sharp bell-like sound enters and serves as a signal that something is happening. At this moment, the music becomes very sparse like the music of the very beginning, and bell-like sounds occur at random moments, with long silences between them. The tones are moderately loud and sound like someone is hitting a bell with a hammer. They do not have a clear pitch, so the music remains atonal. The entire pace of the music slows a good amount in this section. At times, it almost sounds like the piece has ended.

At around 9:00, low sustained electronic sounds enter, accompanied by some higher sounds that have no clear pitch. Other tones, both electronic and some that sound like bells, also appear at random points. The music is mostly soft at this point and, as before, has no clear meter. It is hard to tell when one sound will end and the next begin. At 9:30, a struck bell sound enters and repeats multiple times. After the fifth or sixth repetition, this sound really becomes prominent and seems to be signaling some new section. The bell sound occasionally alternates with some other low electronic sounds, but none of those seems to be very important, as they are much softer and are definitely in the background.

At 10:07 the struck bell sound disappears without warning and the music consists of several layers of the same type of material heard elsewhere. Low electronic sounds alternate with sounds in other registers with no clear focus or emphasis. Occasionally there’s the sound of a dying trumpet or some other type of strange brass instrument, but there is no meter or any type of focus on a central note. A repeated siren-like sound comes in at around 10:30 and takes over some of the focus of attention.

Overall, The Wild Bull consists of a large number of electronic sounds that appear in different combinations throughout. At several points in the piece, one sound seems to be more important than the others, but for most of the composition there is no clear timbre or pitch that seems more important than any other. The pace of the music is mostly slow, though there are times where it seems to speed up. In the end, it seems that the composer’s intuition is mostly responsible for the choice of sounds.