

Music and Language
music.arts.uci.edu/dobrian/CD.music.lang.htm

(1992)

by

Chris Dobrian

Table of Contents

The Subjectivity of Experience

Every input to our senses is a stimulus, available for us to interpret as information[1], and from which we can derive further information. Our physical sensory receptors--our ears, eyes, etc.--can well be thought of as information "transducers" which convert external stimuli--changes in air pressure, light, etc.--into nerve impulses recognized by the brain. Scientists and philosophers have advanced many conceptual models of what the brain does with these nerve impulses to derive knowledge and meaning.[2] (Cf. Dobrian, Chris, "Music and Artificial Intelligence", regarding models of music cognition.) Regardless of the mechanism by which our brain accomplishes it, it is clear that we generate (interpret, deduce, recall, or create) information ourselves, stimulated by external information.

For example, when we hear a lion's roar, our ear drum simply receives continuous changes in air pressure. The cochlea, so we are taught, responds to the frequencies and amplitudes of those changes and conveys those responses to the brain. Our brain, by means largely unknown to us (past experience, instinct, deduction, instruction in roar analysis?) evaluates those time-varying frequencies and amplitudes as a lion's roar. Our brain then derives further information about the actual source of the sound and its meaning. A person in one time or place might interpret the sound to mean "My life is in danger. I must run away from the sound source immediately as fast and as far as I can." A person in another time or place might look around calmly for the electronic recording device that produced the simulation of a lion's roar. A person who had never learned to associate that sound with any particular source--e.g., a person who had never heard a similar sound before--might attempt to compare it with other known sounds, or might even remain unconcerned as to what produced the sound.

When we hear a strange sound--thinking "What was that?"...we try to identify it....Occasionally we pay attention to the sound itself. Then it is more than a cue, and we are listening in another mode, music mode, regardless of the source of the sound.[3]

The point of the above example is that the sound phenomenon which is external to our body--the fluctuation of air pressure--is considered an objective informational message, and everything that happens once it is converted by our "transducer" is subjective, based on our brain's understanding of the transducer's output, our own life experience, and our own favored ways of deriving knowledge. We may quite easily say, "That sound symbolizes a lion," but would we so easily say, "That sound symbolizes a tape recorder"? Are we talking about the sound or about our own personal referents derived from the sound?

The subjective nature of perception was considered a truism by scientist/philosopher Gregory Bateson:

There is no objective experience. All experience is subjective....Our brains make the images that we think we "perceive"....When somebody steps on my toe, what I experience is not his stepping on my toe, but my image of his stepping on my toe reconstructed from neural reports reaching my brain somewhat after his foot has landed on mine. Experience of the exterior is always mediated by particular sense organs and neural pathways. To that extent, objects are my creation, and my experience of them is subjective, not objective.

It is, however, not a trivial assertion to note that very few persons, at least in occidental culture, doubt the objectivity of such sense data as pain or their visual images of the external world. Our civilization is deeply based on this illusion.[4]

This raises some fundamental questions regarding perception and knowledge, or more precisely, regarding information and meaning. If information has a certain significance to me, how do I determine whether that significance is personal to me or whether it is actually "contained" in the external information (and thus available to others who receive the same information)? After all, people all look different, are shaped differently, act differently, talk differently; is there any reason to believe that we all hear the same? How do we communicate those aspects of our knowledge which are personal?

The way that we probe for an answer to these questions is to rely upon a system of symbols and meanings that we feel relatively confident are shared by others. Our most developed and established system of communicative symbols is, of course, language.

Understanding language is...a matter of habits acquired in oneself and rightly assumed in others.[5]

Because we use language so much, and have done so for so much of our lives, and have done so as a species for so long, we often take words for granted as having objective, agreed-upon meanings. Of course, this trust is belied by everyday misunderstandings, and is actually as much of an illusion as the illusion of objective experience commented upon above (see also my summary of Benjamin Hrushovski's analysis of "The Structure of Semiotic Objects"), but it is true that our spoken language is our most fully shared basis for communication.

Aspects of the Music-Language Relationship

Music and language are related in so many ways that it is necessary to categorize some of those relationships. I will then address each category in turn.

First, there is the seemingly never-ending debate of whether music is itself a language. The belief that music possesses, in some measure, characteristics of language leads people to attempt to apply linguistic theories to the understanding of music. These include semiotic analyses, information theory, theories of generative grammar, and other diverse beliefs or specially invented theories of what is being expressed and how. This category could thus be called "music as language".

A second category is "talking about music". Regardless of whether music actually is a language, our experience of music is evidently so subjective as to cause people not to be satisfied that their perception of it is shared by others. This has led to the practice of attempting to "translate" music into words or to "describe" musical phenomena in words, or to "explain" the causes of musical phenomena. The sheer quantity of language expended about music is enormous, and includes writings and lectures on music history, music "appreciation", music "theory", music criticism, description of musical phenomena (from both scientific and experiential points of view), and systems and methods for creating music. These approaches may include the linguistic theories of the first category, as well as virtually any other aspect of the culture in which the music occurs: literary references; anecdotes about the lives and thoughts of composers, performers, and performances; analogies with science and mathematics; scientific explanations of perception based on psychology and acoustics; poetry or prose "inspired" by hearing music; even ideas of computer programs for simulations or models of music perception and generation.

A third category is composed of a large number of "specialized music languages". These are invented descriptive or explanatory (mostly written) languages, specially designed for the discussion of music, as distinguished from everyday spoken language. The best known and probably most widely acknowledged specialized music language is Western music (five-line staff) notation. Myriad others can be found in the U.S. alone, ranging from guitar tablature to computer-readable protocols (e.g., MIDI file format).
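As a concrete illustration of such a computer-readable protocol (this sketch is my addition, not part of Dobrian's text), a MIDI "note-on" channel message encodes a musical event as just three bytes: a status byte combining the message type with a channel number, then a note number and a velocity.

```python
def note_on(channel, note, velocity):
    """Build the three bytes of a MIDI note-on channel message.

    Status byte: 0x90 ORed with the channel (0-15); then the note
    number (0-127, middle C = 60) and the velocity (0-127).
    """
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("value out of MIDI range")
    return bytes([0x90 | channel, note, velocity])

# Middle C on channel 0, at moderate velocity:
msg = note_on(0, 60, 64)
```

The point of the example is how narrow such a specialized language is: it symbolizes pitch, loudness, and timing with complete precision, and nothing else.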

Not only is the role of language in the learning and teaching of music important, but the study of the role of language is important, as well. "What we talk about when we talk about music" is a matter that is too often taken for granted and too little investigated. I will delve into each of these three categories in an effort to define better the music-language relationship, and I will give attention to the importance of that relationship to music teaching and learning.

Music As Language

The oft-quoted poetical statement that "Music is the universal language of mankind"[6] is indicative of the communicative quality of music, and at the same time is indicative of the elusive and ambiguous nature of whatever it is that music communicates. Is music a universal language? Is it a language at all?

Because music is a stimulus to our sense of hearing, it is clear that music can, and inevitably does, convey information. What is the nature of that information? What does it express? These questions have long been--and continue to be--the source of considerable debate. The following quotes encapsulate a few views.

I consider that music is, by its very nature, powerless to express anything at all, whether a feeling, an attitude of mind, a psychological mood, a phenomenon of nature, etc....If, as is nearly always the case, music appears to express something, this is only an illusion, and not a reality....

The phenomenon of music is given to us with the sole purpose of establishing an order of things, including particularly the coordination between man and time. To be put into practice, its indispensable and single requirement is construction....It is precisely this construction, this achieved order, which produces in us a unique emotion having nothing in common with our ordinary sensations and our responses to the impressions of daily life.[7]

Do we not, in truth, ask the impossible of music when we expect it to express feelings, to translate dramatic situations, even to imitate nature?[8]

* * *

Our audiences have come to identify nineteenth-century musical romanticism as analogous to the art of music itself. Because romanticism was, and still remains, so powerful an expression, they tend to forget that great music was written for hundreds of years before the romantics flourished.[9]

Music expresses, at different moments, serenity or exuberance, regret or triumph, fury or delight. It expresses each of these moods, and many others, in a numberless variety of subtle shadings and differences. It may even express a state of meaning for which there exists no adequate word in any language. In that case, musicians often like to say that it has only a purely musical meaning. They sometimes go farther and say that all music has only a purely musical meaning. What they really mean is that no appropriate word can be found to express the music's meaning and that, even if it could, they do not feel the need of finding it.[10]

My own belief is that all music has an expressive power, some more and some less, but that all music has a certain meaning behind the notes and that that meaning behind the notes constitutes, after all, what the piece is saying, what the piece is about. This whole problem can be stated quite simply by asking, "Is there a meaning to music?" My answer to that would be, "Yes." And "Can you state in so many words what the meaning is?" My answer to that would be, "No." Therein lies the difficulty.[11]

* * *

Within the orbit of tonality, composers have always been bound by certain expressive laws of the medium, laws which are analogous to those of language....Music is, in fact, 'extra-musical' in the sense that poetry is 'extra-verbal', since notes, like words, have emotional connotations....Music functions as a language of the emotions.[12]

* * *

One given piece of music may cause remarkably different reactions in different listeners. As an illustration of this statement, I like to mention the second movement of Beethoven's Seventh Symphony, which I have found leads some people into a pseudo feeling of profound melancholy, while another group takes it for a kind of scurrilous scherzo, and a third for a subdued kind of pastorale. Each group is justified in judging as it does.[13]

* * *

Hindemith is undoubtedly right in his observation that people react in different emotional ways to a given piece of music, but his statement that each reaction is equally justifiable fails to take a simple psychological point into account. Could it not be that some listeners are incapable of understanding the feeling of the music properly? The answer, of course, is yes....Such people, whom one knows to exist, are just plainly unmusical.[14]

Stravinsky's resolute assertion that music is "powerless to express anything at all" is almost certainly symptomatic of an overreaction against the poetic excesses of romantically inclined music commentators, or of an effort to distinguish himself from the Viennese "expressionists". Nevertheless, his professed belief was that, if music is "about" anything, it is about music.

Just as cubism is a poetic statement about objects and forms, and about the nature of vision and the way we perceive and know forms, and about the nature of art and the artistic transformation of objects and forms, so Stravinsky's music is a poetic statement about musical objects and aural forms.[15]

At the opposite extreme from Stravinsky is the British musicologist Deryck Cooke, who maintains that music is a language for expressing emotional states, and that furthermore it is (at least in the case of tonal music) a strictly codified language in which each scale degree signifies a certain emotion and permits only a single specific reading. Cooke's argument on behalf of his theory in The Language of Music cites a great many supporting examples, but suffers from a) a tendency to extrapolate inherent emotional content of scale degrees based on obviously style-based and culture-based clichés, and b) attribution of emotional qualities which are so ambiguous as to be indisputable but meaningless because they make opposite assertions simultaneously. For example, he says that "To rise in pitch in minor...may be an excited aggressive affirmation of, and/or protest against, a painful feeling"[16]. Well, if it can be an "affirmation of" and/or a "protest against", then he has covered all the possibilities and thus has told us nothing.

Aaron Copland's intermediary view that music may express both musical and extra-musical meaning strongly suggests a communicative (informative) power in music, but his belief that music "may even express a state of meaning for which there exists no adequate word in any language" indicates that he feels music does not possess anything like the explicit significative nature of language. In fact, his reference to the existence of "purely musical meaning" allies him much more closely with Stravinsky than with Cooke.

Implications for Music Education

The American professor Bennett Reimer, in his book A Philosophy of Music Education, labels this type of Stravinsky-Cooke polarity as Absolute Formalism versus Referentialism.

The experience of art, for the Formalist, is primarily an intellectual one; it is the recognition and appreciation of form for its own sake. This recognition and appreciation, while intellectual in character, is called by Formalists an "emotion"--usually the "aesthetic emotion". But this so-called "emotion" is a unique one--it has no counterpart in other emotional experiences.

The Referentialist disagrees. The function of the art work is to remind you of, or tell you about, or help you understand, or make you experience, something which is extra-artistic, that is, something which is outside the created thing and the artistic qualities which make it a created thing. In music, the sounds should serve as a reminder of, or a clue to, or a sign of something extramusical; something separate from the sounds and what the sounds are doing.[17]

Reimer points out that the viewpoint adopted by the educator has important socio-political implications, with potential for both benefit and abuse.

Referentialist assumptions are in operation in much that is done in the teaching of art....the attempt to add a story or picture to music, either verbally or visually; the search for the right emotion-words with which to characterize music...

What are the values of art according to the Referentialist point of view?...Art works serve many purposes, all of them extra-artistic. If one can share these values one becomes a better citizen, a better worker, a better human being...Of course, there is the danger that harmful works of art will have harmful effects...That is why societies which operate under a Referentialist aesthetic must exercise a high degree of control over the artistic diet of their citizens....

Music educators will have no difficulty recognizing the Referentialist basis for many of the value claims made for music education....It imparts moral uplift,...it provides a healthy outlet for repressed emotions...it is assumed to be, in short, a most effective way to make people better--nonmusically.

The practice of isolating the formal elements of art works and studying them for their own sake is the [Formalist] counterpart of separating out the referential elements....That the major value of music education is intellectual; that the study of "the fundamentals" is, in and of itself, a beneficial thing;...that music or art in general transports one from the real world into the ethereal world of the aesthetic; all these are assumptions compatible with Formalism....

When considering each of these two viewpoints separately it is difficult...to give full assent to either....To translate the experience of art into nonartistic terms, whether conceptual or emotional, is to violate the meaningfulness of artistic experience....At the same time it is not possible to regard art, as with the Formalist, as an intellectual exercise. Surely art is intimately connected to life rather than totally distinct from it....So while each view contains some truth, each also contains major falsehoods which prevent their use as a basis for a philosophy.[18]

Like Copland, Reimer leans in the direction of Formalism, but does not ignore the role of extra-musical considerations in music education. He calls this viewpoint Absolute Expressionism.

Absolute Expressionism [like Formalism] insists that meaning and value are internal; they are functions of the artistic qualities themselves and how they are organized. But the artistic/cultural influences surrounding a work of art may indeed be strongly involved in the experience the work gives...the story in program music, the crucifixion scene in a painting, the political conflicts in a play, and so on, are indeed powerfully influential in what the internal artistic experience can be. However, references are always transformed and transcended by the internal artistic form....That is why it is possible and quite common for works with trivial referents to be profound...Cezanne's paintings of fruit on rumpled tablecloths, Beethoven's Pastorale Symphony, and so on. That is why it is also possible and quite common for works with important referents to be trivial or even demeaning as art--dime-store pictures of Jesus painted in day-glo colors on black velvet, love as depicted in "popular novels", and so on.[19]

Reimer points out that both the Referentialist and Formalist views make music education difficult to justify administratively because the first view holds that music does not deal with unique issues and the second view disconnects music from the rest of life. He seeks to combine the best aspects of both, thus mitigating their narrowness as individual viewpoints.

Absolute Expressionists [and] Absolute Formalists...both insist you must go inside to the created qualities that make the work an art work. That is the "Absolute" part of both their names. But the Expressionists include nonartistic influences and references as one part of the interior...The Absolute Expressionist view [is] that the arts offer meaningful, cognitive experiences unavailable in any other way, and that such experiences are necessary for all people if their essential humanness is to be realized.[20]

Music as Language (cont.)

If there is disagreement as to what music expresses, there is at least general agreement that music is intended to and does--through its form, its content, or both--produce in us emotions, be they strictly musical or extra-musical. So, clearly it gives us stimulus and information, but that is hardly evidence of its being a language. Before proceeding further, let us establish a working definition of language.

Language is a set (vocabulary) of symbols (signifiers, to use the terminology of semiotics), each of which refers to (indicates, signifies) one or more concrete things or abstract concepts. These symbols are combined according to a more or less strict grammar of rules. The combination of the symbolic units in a specific grammatical structure produces new, further significance.[21] This is the way in which verbal languages work, as well as such specialized written languages as those of mathematics and computer programming.
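This definition--a vocabulary of symbols plus a grammar whose combinations produce further significance--can be made concrete with a toy generative grammar. The sketch below is my own illustration (the vocabulary and rules are invented, not drawn from the essay): a few terminal symbols and rewrite rules suffice to generate grammatical combinations that mean more than any symbol alone.

```python
import random

# A toy language: nonterminal symbols map to lists of possible
# expansions; anything not in the table is a terminal (a vocabulary word).
GRAMMAR = {
    "SENTENCE": [["SUBJECT", "VERB"]],
    "SUBJECT":  [["the lion"], ["the listener"]],
    "VERB":     [["roars"], ["listens"]],
}

def generate(symbol, rng=random):
    """Recursively expand a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:
        return symbol  # terminal: a word in the vocabulary
    expansion = rng.choice(GRAMMAR[symbol])
    return " ".join(generate(s, rng) for s in expansion)
```

Each call to `generate("SENTENCE")` yields one grammatical combination, e.g. "the lion roars"; the grammar licenses only certain combinations, which is exactly the property the essay goes on to test music against.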

Does music conform to this definition of language? First, let's narrow the question considerably. Does European-American classical music conform to this definition? Despite attempts throughout history to answer in the affirmative--from Plato's Republic to the musica reservata of the sixteenth century to the doctrine of affections of the eighteenth century to Cooke's Language of Music--all theoretical formulations of a "language of music" either have proved applicable only to a particular period and style or have not been at all widely accepted as a significant system.

The fact that a language system can potentially be embodied in any circumscribed, discernible set of sounds (or objects of any kind for that matter) is not trivial, however. There exists purely functional communicative music, and there exist in the music of a great many cultures and periods certain widely accepted sonic symbols. Such symbols are a subset of musical sounds or phrases that are recognized as known musical objects and which we usually term clichés. A knowledge of those symbols, and indeed of musical clichés in general, is essential to musical understanding because they have significance, either musical or extra-musical.

In fact, though, a type of music made up entirely of sonic symbols is extremely rare. Symbols and other clichés are almost always merely a subset of the acceptable sounds of a musical culture or style, and that culture or style is in turn merely a subset of music. So, while music may contain discernible symbols, and usually does employ some type of grammar, the two are rarely related in any way, and symbols are almost invariably only a small subset of any piece of music.

The conclusion we reach, then, is that a given style of music often includes linguistic elements of symbols and grammar, but is not itself a language. It is even more untenable to say that music (independent of style) is a language, much less a "universal" language of agreed-upon symbols, grammar, and meaning. Music is not a "universal language" any more than the sum total of all vocal sounds can be said to be a universal spoken language. Whatever linguistic elements a music may possess are extremely dependent on explicit and implicit cultural associations, all of which are in turn dependent on society and the individual. Even though media and tele-communications are increasing the awareness of music of other cultures, most individuals are still no closer to knowing all music than they are to knowing all languages.

We must also bear in mind that symbolic representation is not the only means of expression. Music can, by its very form (that is, the abstractions we derive from its form), express abstract or visual concepts, or it may present a visceral, immediate appeal to our senses (our unconscious response). These are not modes of expression that depend upon language, yet few would deny their existence in music.

Implications for Music Education

Music is not itself a language and therefore is not susceptible to precisely the same methods of analysis and teaching as verbal language; so it is almost certainly futile to attempt to model music entirely as a language. Nevertheless, music does in most cases include the linguistic elements of symbols and grammar, and it is therefore very likely that linguistic methods of analysis and education can at least provide some insights into analogous processes of musical understanding.

In the area of music analysis (a term which includes analysis lite, "music appreciation") the linguistic theories of semiotics, generative grammar, and information are particularly appropriate in this regard. In the area of musical skill development, theories of language learning and acquisition present parallels in music education. The actual application of these theories will be discussed in more detail later in this chapter, in the category of "talking about music" (p. 16 ff.).
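Of the three theories named above, information theory is the easiest to make concrete. A minimal sketch (my addition, with an invented pitch sequence): Shannon entropy measures, in bits per symbol, how unpredictable a stream of discrete symbols is--and a sequence of pitches is such a stream.

```python
from collections import Counter
from math import log2

def entropy_bits(sequence):
    """Shannon entropy of a symbol sequence, in bits per symbol.

    Higher entropy means the sequence is less predictable; the measure
    applies to any stream of discrete symbols, pitches included.
    """
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# An invented pitch sequence, purely for illustration:
melody = ["C", "D", "E", "C", "C", "G"]
h = entropy_bits(melody)
```

A melody consisting of one repeated note has entropy zero; two notes in equal proportion give exactly one bit per note. Applied to music analysis, such measures quantify redundancy and surprise in a style, though--as the chapter argues--they capture only the symbol-stream aspect of music, not its meaning.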

The Vocal Origins of Music

It is most widely held that music originated as a vocal practice: either as an extension of non-linguistic vocal utterances (expressions of joy, pain, etc.), as extensions or intensifications of the intonations of spoken language, or as the production of vocal sounds purely for their sonic quality.

[An anonymous Sanskrit manuscript] explains the formation of musical sounds on the basis of the Maheshvara Sutra-s, an esoteric arrangement of musical sounds, which Nanadikeshvara also accepts as the philosophical basis of the Sanskrit language and, in fact, all language.[22]

* * *

Various hypotheses about the origins of music have connected it with, or derived it from, the intonations of excited speech, nonverbal voice signals, or significant inflections of the voice in tonal languages. A widely accepted notion maintains that both natural language and music, alone of all the arts, involve sound unfolding in time, and that both have the human voice as their common source. Finally, music, like natural language, developed a system of writing--musical notation.[23]

* * *

Certain forms of expression intrinsically link these two phenomena: sound and word. Is this to go so far as to say that the evolution of language corresponds to a similar evolution of music? To me it does not seem possible to assert that this problem is posed in terms of a simple parallelism....Certainly onomatopoeias and pulverized words can express what constructed language cannot propose to touch upon....For example, ritual chants in a large number of liturgies use a dead language that keeps the chanted text remote....The Greek theater and the Japanese noh also furnish us with examples of "sacred" language in which archaicism singularly or utterly limits comprehensibility. In popular songs, at the other extreme, who has not been surprised to hear successions of onomatopoeias and ordinary words deprived of their purpose? All that is admitted is the necessity and pleasure of rhythm.[24]

Of course we can know of the origins of music only through descriptive accounts and theoretical treatises. We do not have examples of ancient music available for audition. We have no real way of tracing the music of today back to its primal origins, and therefore, as Boulez rightly points out, no basis for believing in a parallel development between music and language. As he goes on to show, however, examples of music in which the vocal sounds have to some extent become dissociated from their symbolic meaning give us a new perspective on the musical organization of phonemes. We can study not only traditional text "settings" in vocal music, but also any inherent logic which may exist in the relations between vocal sounds, pitch, timbre, and rhythm.

Professor Yuasa, in lectures and private conversations, has repeatedly emphasized the value he sees in a composer taking an "archeological approach" to music--to seeking knowledge of the "genesis" of music as a way to deeper understanding of our own music today. However, Stravinsky, in a stern, almost curmudgeonly tone, remarks on what he personally has found to be the benefits and limitations of the "archeological approach".

Archeology...does not supply us with certitudes, but rather vague hypotheses. And in the shade of these hypotheses some artists are content to dream, considering them less as scientific facts than as sources of inspiration....Such a tendency in itself calls for neither praise nor censure. Let us merely note that these imaginary voyages supply us with nothing exact and do not make us better acquainted with music....My own experience has long convinced me that any historical fact, recent or distant, may well be used as a stimulus to set the creative faculty in motion, but never as an aid for clearing up difficulties.[25]

Perhaps the two composers do not hold such different views after all. It is doubtful that Yuasa expects to establish any "scientific facts" regarding the genesis of music. It is precisely as a "source of inspiration"--a "stimulus to set the creative faculty in motion"--that he seems to view this (spiritual rather than musicological) archeological search. His belief is that the important origins of music do not require a distant "voyage" for discovery, but rather that they are personally available to us, as part of our collective unconsciousness, and that we have only to cast our eyes and ears in the right way to realize them. His orchestral work Eye on Genesis II is his most eloquent musical essay on this topic.

Implications for Music Education

If, as is generally believed, instrumental melodies developed as accompaniment to and extension of vocal melodies, vocalization is a vital tool for achieving a better understanding of instrumental music. This is the basis of the teaching method of Zoltán Kodály, which in turn forms the basis of virtually all music education in Hungary. Kodály himself makes a very strong case for the predominance of vocal training in music education.

Musical illiteracy impedes musical culture, and is the cause of sparse attendance at serious concerts and opera productions....The way from musical illiteracy to musical culture leads through reading and writing music....Active participation in music is by far the best way to get to know music; the gramophone and radio are no more than accessories....The best approach to musical genius is through the instrument most accessible to everyone: the human voice. This way is open not only to the privileged but to the great masses....Music belongs to everybody....With a good musician the ear should always lead the way for those mobile fingers![26]

One of my most vivid memories from my own music education is of a remark made by my first college professor of musicianship, Gil Miranda, himself a believer in many of Kodály's precepts. He said simply, "To find the correct interpretation of a melodic phrase, you only have to sing it. The sung melody is the basis for all phrasing. To make the audience hear something in the music you perform, you only need to ensure that you can truly hear it yourself." I have found this statement to be true in virtually all cases where I have applied it--a rare thing for any statement made about music. Even for music which was not conceived by the composer as having a cantabile character, the simple facts of breath control, changes in physical exertion, and habits of vocal phrasing lead to a natural interpretation of melody which results in, if not automatically the best interpretation, at least a sizable step in the right direction. This indicates the very tight relationship that still exists between instrumental music and the influence of vocal behavior on musical expression.

Kodály's insistence on early, intensive, and careful vocal training as a requisite for all musical understanding is virtually indisputable, and his observation of its egalitarian availability (and essential indispensability) to all people flies in the face of elitist beliefs that some people are musically gifted while others simply are not.

What is maddening in America is most people have been separated from their culture. They have been told there's a special privileged class of artists who have a special insight. A normal person doesn't have this insight. That is a monstrous lie, and it is hideous because it is taught to us early on. We are taught we're not artists. Every single day we're reminded. The special students are isolated in a class and told, "You're special, you go on. The rest of you, please become middle-class and boring."[27]

Another music educator who has had great success and who downplays the role of "talent" and "gift" in musical development is Shinichi Suzuki. Like Kodály, he stresses development of the ear before instrumental practice, he stresses musical training beginning at an early age, and he champions musical training for all people. Furthermore, Suzuki emphasizes language acquisition as a model for acquisition of musical skill. He points out that we all speak our native language competently--some better than others, to be sure, but all with competency--because we are exposed to it, repeat it, and practice it constantly, beginning at a very early age. His premise is that other abilities, notably musical skill, can be similarly acquired.

Education begins from the day of birth. We must recognize the amazing power of the infant who absorbs everything in his surroundings and adds to his knowledge. If attention is not given to early infancy, how can the child's original power be developed? We learn from Nature that a plant which is damaged or stunted during the sapling stage does not have a promising future. Yet at present, we know very little about proper training for the early infancy of human beings. Therefore, we must learn more about the conditions in which early human growth takes place....

All children in the world show their splendid capacities by speaking and understanding their mother language, thus displaying the original power of the human mind. Is it not probable that this mother language method holds the key to human development?...

Cultural sensitivity is not inherited, but is developed after birth....It is wrong to assume that special talent for learning music, literature, or any other field, is primarily inherited.

This is not to say that everyone can reach the same level of achievement. However, each individual can certainly achieve the equivalent of his language proficiency in other fields. We must investigate methods through which all children can develop their various talents. In a way this may be more important than the investigation of atomic power.[28]

Suzuki and Kodály differ on the order and importance of vocal training and notation reading in education, but it is clear that both are guided by a recognition of the important correlations between speech skills and musical skills.

Talking About Music

If all meanings could be adequately expressed by words, the arts of painting and music would not exist. There are values and meanings that can be expressed only by immediately visible and audible qualities, and to ask what they mean in the sense of something that can be put into words is to deny their distinctive existence.[29]

Talking about music is like dancing about architecture.[30]

People talk about music in an effort to discover to what extent their experience of it and the significance they attribute to it are personal, and to what extent its significance is actually "contained" in the sonic information (and thus available to others who receive the same information). Interest in this question comes from a desire to communicate knowledge, to share knowledge (possess common knowledge), and perhaps even from a desire to impose knowledge on others. We choose the medium of verbal language because a) we are uncertain of the extent to which music is a shared system of communication, and b) what we actually wish to communicate, share, or impose is not music but ideas about music--ideas evoked by music.

As the quotes by Dewey and Anderson affirm, we will never successfully communicate music in any medium other than music. However, not all ideas expressed by music are solely musical ideas. Musical symbols or clichés may refer to extra-musical ideas, or music may, by its very formal construction, express abstract or visual concepts. Such ideas can be and are much discussed. Music does not exist in a vacuum; it exists in and reflects a society and a culture, and thus can refer to a whole world of ideas. Indeed there is hardly any idea or concept which has not at some time been, or could not potentially be, related to music.

How then to get at this matter of "what we talk about when we talk about music"? I will discuss and criticize some existing ways of analyzing and teaching music, and will propose some new directions. I will start by treating methods of analysis which are particularly related to music as language: in a sense, talking about talking about music.

Music and Semiotics

Semiology is the study of symbols. As we have noted, musical discourse makes use of symbols but is by no means restricted to their use. It would therefore appear that semiology is of limited applicability to music. If, however, we employ ideas of semiology but shift our focus from the symbol (the signifier) to that which is signified (for the study of significance is a natural part of semiology) we may find useful ideas for the analysis of music. Still, we must always be aware of the limitations of its usefulness, as we will see.

Music lies within [the reach of semiotics] because it is a kind of communication which has both organization and significance. Moreover, music may seem the most appropriate and gratifying object for these new approaches, because it is the purest system of abstract relationships presented in concrete form, and the most immediate expression of meaning.

Yet at this point one has to be cautious...That music may be described in semiotic terms does not necessarily mean that the terminology and theory of semiotics will help us to understand music better.[31]

Can music be considered a "semiotic object"? According to Benjamin Hrushovski:

Semiotic objects may be intended for sign-functions or not. In the second case, they become "semiotic" if they are interpreted as such by "understanders". For example, while walking in a city we read the forms, sizes, and density of buildings to signify "office buildings", "middle-class homes", "slums", etc., even if such messages were never intended by the producers of those objects.[32]

Hrushovski describes a semiotic object as having three dimensions: Speech and Position, Meaning and Reference, and Organized Text. The dimension of Speech and Position points to the importance of the source of any expression, and the relative positions of all those involved in its propagation or perception. Who is the author, speaker, character, addressee, reader, etc.? What is the position (level) of each? Often speakers are nested in a hierarchy. For example "an author presents a narrator who quotes a character (who may quote another character in turn), and a distortion may occur at any stage."[33] In musical terms this might be analogous to a score, authored by a composer, directed by a conductor who cues the performance of instrumentalists. Where is the music in this chain? At one point, at all points, or somewhere external to all of it (e.g., in the listener's perception and parsing of the resultant sound)? What is our opinion or expectation of composer, conductor, and performers and how does that influence our conviction in the meaning of the text? "Regulating principles, such as point of view, irony, and generic mode, derive from the speaker or the maker of the text. They explain in what sense to take the sense of the words."[34]

Speech and Position is an aspect of music analysis which is often ignored or grossly oversimplified in academic discussions of music. Those who would ignore the question entirely claim that a piece of music is an object which can be considered "on its own merits", regardless of who composed it or performed it. Of course this is just another example of the effects of the illusion of objectivity. The source of a piece of music determines its social context and its artistic history, determines whether we hear that piece of music at all. Consider, for example, the vast socio-economic institution that is the cult of Beethoven. Is Beethoven a great composer? How do you know? How many pieces did he write and how many of them do you know? What does it mean to "know" a piece by Beethoven? How can someone who has been dead for 150 years speak to you? Is Beethoven's Fifth Symphony a greater piece of music than Happy Birthday? How do you know? Who wrote Happy Birthday? Why are there no "great" woman composers? Why are there no black American presidents? How many of these questions have you ever heard asked in a music class?

When questions of Speech and Position are treated at all, they are usually boiled down to standardized platitudes regarding the composer as creator possessed of greater or lesser inspiration, performer as interpreter of greater or lesser technical ability, etc. A more penetrating search would address a comparison of text and subtext, the role of a perception of spontaneity in performance, and a whole array of performance conventions.

A good performance is a fortunate combination and fusion of the composer's ideas and the player's. Guitarist Pepe Romero discussed the importance of this fusion with me in a personal conversation.

As a player, when you take a piece of music you have to feel and become in tune with that composer, with his mind and with his soul, and unite it to your own mind, to your own soul, to your own heart. Then you can recreate the music so it has a freshness, and it sounds when the player plays it like he is composing it also....Together [the composer and the player] make one and they merge together; you cannot tell where one begins and the other ends. I know that when I play, and the music is really flowing, I cannot tell the difference between the composer and myself.

Of course any music performance (like any presentation of ideas, such as a speech) depends largely on the personal persuasiveness and the charisma of the performer. The performer must convince the audience that she/he believes in that music, has made it personal. In the case of a musician who is presenting someone else's ideas (a player of composed music, as opposed to an improviser), the player's task takes on similarities to that of an actor.

It is one thing to use your own words and thoughts, and quite another to adopt those of someone else, which are permanently fixed, cast as it were in bronze, in strong clear shapes. They are unalterable. At first they have to be reborn, made into something vitally necessary, your own, easy, desired--words you would not change, drawn from your own self.[35]

Stanislavski outlines the process a performer might use to achieve this goal.

The right, you might say classic, course of creativeness operates from the text to the mind; from the mind to the proposed circumstances: from the proposed circumstances to the subtext; from the subtext to the feeling (emotions); from emotions to the objective, desire (will) and from the desire to action....[36]

From the point of view of the performer as interpreter of a text, it becomes clear how the dynamic between the composer and the player affects the success with which the music is performed. Composed music which the performer understands and finds sympathetic is more easily adopted as one's own; music that is abstruse or alien requires greater "acting" skill of the performer.

The practice of performing music that was composed by someone else, or acting with text written by someone else, seems very different from the practice of improvising with one's own musical ideas (or even playing one's own composed or memorized music). There are musicians who play only other people's music, and there are actors who have no desire to write or to direct others. They feel, due to a lack of self-confidence, a conviction that they are not themselves "creative", or whatever reason, that they require a given text in order to perform. The distinction is often made between creative artists and interpretive artists, as though these represented completely different personality types. I believe, however, that this distinction is more one of degree and focus than of exclusive categories. (For more on the creative role of the performer of notated music, see the discussion of notation, p. 41 ff.)

And what changes occur in our view of Speech and Position, and the relationship between composer and performer, when one or more of those positions is occupied by a computer? How do our aesthetic perceptions and expectations change with the knowledge (or suspicion) that music has been composed or performed by a machine? Doesn't this have a profound effect on the question of "expression" in music?

The dimension of Meaning and Reference encompasses not only the "sense" of the text, but also how its true significance is affected by specific "frames of reference". "Observation of the referent (within a specific frame of reference) independently of the words and their senses, influences the decision on the meaning to be assigned to this sign."[37] The meaning of a word, or connections between sentences, can change entirely based on what we know about their implicit frame(s) of reference. Indeed, intentional confusion of frames of reference is a staple of comic theatrical dialogue (Shakespeare, Wilde, the Marx Brothers, etc.). Similarly, the meaning of individual musical events, or of connections between events, may be lost upon us if we are unaware (perhaps through lack of musical, philosophical, or general cultural erudition) of an implicit frame of reference.

Meaning and Reference can be treated superficially by searching a piece of music for symbols which draw on an extra-musical frame of reference and for clichés which refer to specific musical styles or moods. This is frequently a pursuit of music appreciation courses, music criticism, program notes, and the liner notes of recordings. More contemporary questions, however, are addressed by direct confrontation of how meaning is affected by frames of reference. How does our musical perception depend upon performance context, musical context, quotation (use of other music as a frame of reference), the definition and role of noise, etc.?

Composer/philosopher John Cage brought these matters into clear focus in the latter half of this century, seemingly almost single-handedly. His use of indeterminacy in notation and as a compositional method brought the importance of the will of the composer into question. His piece 4'33" questions the concepts of noise (unwanted sound) and silence, as well as the "framing" of a musical performance in time and space. Because Cage's music so openly challenges basic assumptions, a great deal of it gives rise to confusion or outrage among listeners who are unfamiliar with its philosophical bases.

The dialectic of Meaning and Reference is also closely related to Roger Schank's and Robert Abelson's ideas of "scripts"--bodies of assumed, shared knowledge based on past experience.[38] Music sounds continuous or discontinuous, dissonant or consonant, stylistically "correct" or not, out of place or not, because we have built up elaborate scripts of "appropriate" musical behavior, based on prior experience. To what extent can we reasonably expect to share or agree upon perceptions of music, given that each of us has a different "cosmology" of past experience?

Semiology makes a distinction between symbols which are iconic and those which are not. A symbol which is iconic resembles, exemplifies, or shares some property with the thing that it represents. An example of an iconic symbol in instrumental music (well known to composers of scores for war movies) is a snare drum playing a steady march rhythm. Not only does the snare rhythm symbolize military activity, it actually is (or once was) one of the sounds of military activity. For the most part, however, instrumental musical symbols, like linguistic symbols, are more commonly not iconic. The sounds in music, like words...

...are abstract conventional signs. They have nothing in common with what they denote, and this gives natural language [and music] the freedom of reflecting the world without being tied to it. In this detachment, language gains an enormous discoursive power but loses whatever presentational capacity it might originally have had. Words fail to present the difference between blue and green to the blind or to the daltonic, and, as everyone knows, all the attempts to "translate" music into words invariably appear awkward, crude, and inadequate. For there is neither transition nor similarity between the two modeling systems...[39]

The lack of iconicity as a primary feature of both music and verbal language, as well as the obvious dissimilarity of their grammatical systems, helps us to understand the difficulties of talking about music. Of course, the use of concrete sounds in music, which rose to popularity in the 1940's and 50's and which is seeing a new rise due to the availability of the digital sampling synthesizer, creates an entirely new musical language, in which iconic symbols coexist with traditional musical sounds. Music theorists have mostly avoided the analysis of such music, but perhaps semiotics will facilitate that analysis if and when it is undertaken.

The dimension of Organized Text encompasses the significative power of the formal structure of the text. In both literature and music the structure is mostly one of segmentation: volumes, chapters, paragraphs, etc. in literature--movements, sections, periods, etc. in music. This segmentation is usually dependent on the development of thematic material. Aspects of the content--e.g., events in a protagonist's life in a novel, introduction and variation of motifs in music--drive the sense of formal divisions. Variety of materials accounts to some degree for the varieties of formal structure, and the "patterns and dimensions of the Organized Text...participate in the meaning."[40]

Because of its abstract nature (and perhaps because of an unwillingness to grapple with the first two dimensions), the dimension of Organized Text (i.e., of form) has received considerable attention from music theorists. Writings on musical form, however, are unfortunately mostly of the "tour guide" variety, pointing to landmarks and formal curiosities in a piece of music. "Here's the B theme, and over there you'll note the recapitulation on the horizon." Rarely is an effort made to explain the significance of formal structure.

Certain composers are generally acknowledged to be builders of bold and impressive "architectural" formal structures. Edgard Varèse and Iannis Xenakis come immediately to mind. Indeed, Xenakis is famous as an architect as well as a composer and Varèse attested to a deep interest in the structures of nature.

I was not influenced by composers so much as by natural objects and physical phenomena. As a child, I was tremendously impressed by the qualities and character of the granite I found in Burgundy...and I used to watch the old stone cutters, marveling at the precision with which they worked. They didn't use cement, and every stone had to fit and balance with every other. So I was always in touch with things of stone and with this kind of pure structural architecture...All of this became an integral part of my thinking at a very early stage.[41]

The "mobile" (or "open") forms popular in the 1950's and 60's allowed the performer to find his or her own significance in the dimension of Organized Text. The semiotician Umberto Eco saw this trend as an important extension of the established concept of formal "openness".

The definition of the "open work," despite its relevance in formulating a fresh dialectics between the work of art and its performer, still requires to be separated from other conventional applications of this term. Aesthetic theorists, for example, often have recourse to the notions of "completeness" and "openness" in connection with a given work of art. These two expressions refer to a standard situation of which we are all aware in our reception of a work of art: we see it as the end product of an author's effort to arrange a sequence of communicative effects in such a way that each individual addressee can refashion the original composition devised by the author.[42]

The technology of computer-controlled random access memory devices for music recording provides the potential for still another type of open form in which the listener can truly restructure the form of a piece of music.

I suggest that while a strict application of the tenets of linguistic semiotics to music would probably be futile, the semiotic viewpoint proposed by Hrushovski, of seeking significance in different levels of source, reference, and structure provides us with many new angles for approaching important and largely under-emphasized aspects of music perception. In applying semiotic theory to music, it is important to benefit from the methods of semiology without expecting that music will submit to formulation as a system of signs.

Semiotics as a descriptive analytical method must be further refined and adjusted if it is to become a useful and productive approach to the peculiarly complex system of music...for it seems somewhat improbable that a concept formed on the basis of linguistics should have an immediate explanatory power outside of its original boundaries....

If music is to be considered a sign system, then it is a very strange one: an icon which has nothing in common with the object it presents; an abstract language which does not allow for a prior definition of its alphabet and vocabulary, and operates with an indefinite, virtually infinite number of unique elements...These discrepancies can be reconciled if music is approached in terms of semiotics, but without its preconceptions.[43]

Information Theory

The application of information theory to music is treated at length in the response to the question by Professor Lewis (p. 92 ff.), so I will restrict myself here to its indirect effects upon music theory and analysis.

It is well known that our expectations play a vital role in our reaction to musical events. Our expectations are based on knowledge of one or more prevalent styles of music, as well as on more local considerations of context based on the immediate past. Information theory explores systems of accumulating information as a sort of evidence, usually to attribute likelihoods to different interpretations of that information and/or to make probabilistic predictions or decisions about the future. In music, theorists often attempt to correlate ideas of probability (either intuitive or mathematical) with the fulfillment or disappointment of expectations.
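The information-theoretic link between probability and expectation can be sketched in a few lines of code. The standard measure is surprisal (information content in bits): the less probable a continuation, the more "informative"--and the more surprising--it is when it occurs. The chord probabilities below are invented for illustration; a real model would estimate them from a corpus in a particular style.

```python
import math

# Hypothetical probabilities for the chord that follows a dominant (V) chord.
# These numbers are made up for illustration, not drawn from any real corpus.
next_chord_probs = {"I": 0.70, "vi": 0.20, "IV": 0.07, "bII": 0.03}

def surprisal(event, probs):
    """Information content in bits: rarer continuations carry more 'surprise'."""
    return -math.log2(probs[event])

for chord, p in next_chord_probs.items():
    print(f"V -> {chord}: p = {p:.2f}, surprisal = {surprisal(chord, next_chord_probs):.2f} bits")
```

Under this toy model the expected resolution V-I carries about half a bit of information, while the deceptive V-vi carries over two bits--a quantitative echo of the intuition that a deceptive cadence is "more surprising" than an authentic one.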

Theorist Leonard Meyer suggests that expectations based on probabilistic evaluations of the local past, as well as on Gestalt principles of perception, are "the nature of human mental processes", but that they will generally be superseded by expectations based on learned musical style.

Paradoxical though it may seem, the expectations based upon learning are, in a sense, prior to the natural modes of thought. For we perceive and think in terms of a specific musical language just as we think in terms of a specific vocabulary and grammar; and the possibilities presented to us by a particular musical vocabulary and grammar condition the operation of our mental processes and hence of the expectations which are entertained on the basis of those processes.[44]

Still, ideas of probability appear to play a role in our very concept of musical style. As a teenager, I taught myself the rules of eighteenth-century harmonic style using a book borrowed from the local public library. The book, The Contrapuntal Harmonic Technique of the Eighteenth Century by Allen Irvine McHose, supports its every assertion about harmony and voice leading with statistics of probability compiled by painstaking analysis of all of the chorales of J.S. Bach. Whereas most textbooks might say, "The root of a triad should usually be doubled," this book says,

A frequency study of Bach's doubling, in major and minor triads which are in root position, is as follows:

            MAJOR TRIAD                            MINOR TRIAD
Root   Third   Fifth   Exceptional      Root   Third   Fifth   Exceptional
88%    8%      3%      1%               84%    13%     2%      1%

The above study reveals that there is very little difference in Bach's method of doubling. When major or minor triads are in root position, the root is the best one to double, the third next, and the fifth last....[45]

Every single rule of chord usage and voice leading is statistically supported in a similar fashion. While one may certainly question whether such a statistical analysis can usefully be extrapolated into a generative rule set (see also the discussion of Markov processes in Dobrian, Chris, "Music and Artificial Intelligence"), such methodological rigor is certainly unusual in music analysis.
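The questionable extrapolation mentioned above--turning descriptive statistics into a generative rule set--is exactly what a first-order Markov chain does. As a minimal sketch (the transition probabilities here are invented for illustration, not McHose's actual figures), one can tabulate how often each chord follows each other chord and then sample from that table to generate new progressions:

```python
import random

# A first-order Markov chain over root progressions. The transition
# probabilities are invented for illustration; a McHose-style study would
# compile them from corpus counts.
transitions = {
    "I":   [("IV", 0.3), ("V", 0.4), ("vi", 0.2), ("ii", 0.1)],
    "ii":  [("V", 0.8), ("vii", 0.2)],
    "IV":  [("V", 0.5), ("I", 0.3), ("ii", 0.2)],
    "V":   [("I", 0.7), ("vi", 0.3)],
    "vi":  [("ii", 0.5), ("IV", 0.5)],
    "vii": [("I", 1.0)],
}

def generate(start="I", length=8, seed=None):
    """Sample a chord progression by repeatedly drawing the next chord
    from the distribution conditioned on the current one."""
    rng = random.Random(seed)
    chord, progression = start, [start]
    for _ in range(length - 1):
        choices, weights = zip(*transitions[chord])
        chord = rng.choices(choices, weights=weights)[0]
        progression.append(chord)
    return progression

print(" ".join(generate(seed=1)))
```

The sketch makes the limitation concrete: a first-order chain knows only the immediately preceding chord, so it reproduces local tendencies faithfully while remaining oblivious to phrase structure, cadence placement, and everything else that gives a chorale its larger shape.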

Walter Piston's textbook, Harmony, has a similar (though not statistically supported) table of usual root progressions. Meyer attests to the importance of probability in style when he says that such tables are "actually nothing more than a statement of the system of probability which we know as tonal harmony".[46]

I disagree that tonal harmony is merely a system of probability. As Meyer points out himself, considerations of musical style play a much greater role in our expectations. It is also a leap to say that recognition of probability leads directly to expectation. Furthermore, expectations are not the only basis of our reaction to music. In a general sense, though, we do evaluate everything we perceive based on our past experience, and information theory suggests a methodology for the study of the role of expectation in our response to music.

Generative Grammar

Grammar is the set of rules which describe how words of a language may be combined. The rules of grammar, once accepted (consciously or unconsciously), play an important role in the comprehensibility and meaning of words. One pursuit of linguists is to analyze grammar, formulate its rules, and try to discover the rules which govern grammars. Once the rules of a grammar have been formulated, they can be used to generate phrases in the language.

A full description of a grammar should define rules which permit the generation of all sentences in a language, without permitting the generation of sentences which are not characteristic of the language. All but the simplest grammars will permit an infinite number of combinations of basic elements (words) into meaningful groups (sentences).
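The generative use of a grammar can be illustrated with a toy context-free grammar: each nonterminal symbol rewrites to one of several alternatives until only terminal words remain. The grammar below is a made-up fragment of English, assumed purely for illustration; real grammars (of language or of music) are vastly larger, but the rewriting mechanism is the same.

```python
import random

# A toy context-free grammar: nonterminals map to lists of alternative
# expansions; symbols absent from the table are terminal words.
grammar = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "N":   [["melody"], ["listener"], ["phrase"]],
    "V":   [["hears"], ["repeats"]],
}

def generate(symbol="S", rng=random):
    """Recursively rewrite a symbol until only terminal words remain."""
    if symbol not in grammar:  # terminal word
        return [symbol]
    expansion = rng.choice(grammar[symbol])
    words = []
    for sym in expansion:
        words.extend(generate(sym, rng))
    return words

print(" ".join(generate()))  # e.g. "the melody repeats a phrase"
```

Even this six-rule grammar shows the asymmetry the paragraph describes: it generates only sentences of its language, but characterizing all and only the sentences of a natural language--or of a musical style--requires a rule set of an entirely different order.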

Some attempts have been made to apply linguistic grammar theory directly to music analysis. For example, two theorists have tried to demonstrate a parallel between the deep, shallow, and surface grammatical structures of spoken language, and the harmonic, modal, and melodic domains in jazz improvisation.[47] Although this succeeds as an interesting (though incomplete) observation, it leaves several questions unanswered and would have to be applied to considerably more examples to test its validity as any sort of general structural principle of jazz improvisation.

Most music theory does not attempt to draw parallels between the behavior of language and that of music, but rather tries to formulate the strictly musical rules by which basic elements are combined into meaningful larger units. Observations of common traits of a style or common tendencies of sound progressions do not, however, automatically constitute a grammar. And even once one feels confident in having established a working set of rules with which to describe a style of music, it is always questionable whether grammars from the domains of musical analysis and composition are interchangeable.

Music Theory

Although the term music theory is commonly used, it is actually extremely rare that anyone attempts to discover anything like a theory of music. Analyses of musical structure are almost invariably confined to music of a single style, culture, or sub-culture. Often the claim of universality is implied (more out of a sense of superiority, ignorance, or apathy toward those styles not considered than out of any real belief in the universality of one's findings), but in fact theories of musical grammar (like any theory that views music as a language) can only apply to some music, almost never to all music.

Music theory can be roughly divided into two categories: theories of how music is perceived and theories of how music is composed. Although assumption of a connection between the two is entirely reasonable, there is little inherent reason to believe that direct correlation always exists. Often theorists, teachers, and students confuse these two categories, with distressing results. Watching the students in a music theory class, as well as remembering my own days in such a class, I observe that most students incline toward one of the two categories. Some, listening to the professor lecture on music theory (diatonic tetrachords, harmonic minor scales, etc.), seem to be thinking, "Yeah! This is the kind of stuff I'm interested in learning: how music really works, the rules that govern its construction." Others, I imagine, are thinking "What in the world do I need to know all this for? What can this possibly have to do with my appreciation of music on an emotional level?"

They are both right and they are both wrong. Aaron Copland hypothesizes that we listen on "three separate planes", which he terms the "sensuous", the "expressive", and the "sheerly musical".[48] Of course, he acknowledges that we don't really break up our listening so mechanically, but we actually listen on all three planes simultaneously and "correlate them...instinctively".[49]

Without disagreeing with his three planes, I would present my own companion view. I believe we listen and react to music somewhere along a continuum between the purely "visceral" and the purely "intellectual". Sometimes we react to music viscerally (on a "sensuous" or "expressive" plane) as, for example, with a particularly loud and violent passage or a particularly driving rhythm, and sometimes we receive a more intellectual ("sheerly musical") pleasure, such as when we perceive well-composed counterpoint or interaction between abstract ideas. Too much time spent on either end of the continuum often leads to boredom, because we seem to desire attention to both types of awareness of the music.

Copland makes some reference to this when he observes that "many people who consider themselves qualified music lovers abuse [the sensuous] plane in listening" and that "It is very important for all of us to become more alive to music on its sheerly musical plane."[50] Indeed, many people think of music as something of a warm bath which is supposed to soothe them and not provoke them in any way, either viscerally or intellectually. Listeners who take this attitude, or who overuse the "sensuous plane", are in effect cutting themselves off from at least half of the music's appeal; they're receiving only half the experience that they could receive. A knowledge of music's construction can help one to address the intellectual, "sheerly musical" aspect of the music, and thus appreciate it more fully, in its totality.

Furthermore, the ability to deal with the intellectual or "technical" aspects of a piece of music is absolutely essential for composers and performers. If a maker of music doesn't understand the music intellectually, she/he is incapable of conveying that portion of the music to listeners, and so is only presenting half an experience to begin with.

On the other hand, attempts to understand music in a purely intellectual or factual way are equally futile. As Mephistopheles said to the Student in Goethe's Faust: "All theory...is gray. The golden tree of life is green." Anyone who thinks that music theory explains how music works is overly optimistic. It can really only give us ways of thinking about music and terms for discussing it. Music theory tries to make observations about music and formulate generalizations, which are then often presented to students as rules. We must always remember that any theoretical idea about music is only a tiny part of the story--a fragment of an illusion of understanding which, one hopes, helps us to appreciate music and work with it.

What do we mean, in general, when we talk of a theory? We usually associate theories with scientific pursuits or methods. In science one makes observations of natural phenomena and tries to formulate a rational statement that is true and encompassing for all related phenomena. Scientific theorists dislike having things be inexplicable or irrational; they try to conceive a rational explanation which holds true for all the phenomena in question. This rational explanation can then be used as a tool for understanding other examples of similar phenomena. Eventually, phenomena are observed which belie the theory or demand a revision of the theory, and a new theory is devised, even though each new theory can only be a provisional explanation of that which we do not fully understand. Nature doesn't act according to laws; "laws" of nature are just our way of trying to get a theoretical grip on what nature does. In other words, nature doesn't follow laws; the laws are derived from nature.

And so it is with music. Music is not some kind of blind servant of the "laws" made up by theoreticians. Theoreticians observe music and try to formulate explanations and point out consistencies. In almost every case one can find exceptions to the rule or theory in question. Often one will find evidence in actual music that directly contradicts the traditional "wisdom" of a music theory book.

The main problem with music theory is that it isn't a theory at all. Theories about musical phenomena are different from scientific theories in two very important ways. The first is that music theory is much more tolerant of exceptions. When an exception to a scientific theory is discovered, the theory must be revised to include or otherwise account for the exception. Music theorists much more readily say, "Well, okay, there are exceptions to my theory, but in general..." This should underscore for us the fact that theories of music are guidelines for understanding, not rules which explain (much less govern) all behavior. This points out the second important difference. Scientists theorize about natural phenomena, while music theorists consider musical phenomena which are the result of human decision-making. This means, first of all, that humans can just as easily make the willful decision to defy or ignore the theories. Secondly, this leads to a very common misconception about which came first, the music or the theory. Some theorists even imply that music is composed in accordance with these theoretical ideas. They treat their descriptive "rules" as if they were obligatory generative grammars. Good composers have never made their decisions based on what a theorist told them was correct. Nor would we ever presume that nature behaves the way it does because Isaac Newton or anyone else said it should.

Composers do have a knowledge of what theorists have surmised about the structure of music. That's part of the education and skill of any composer, and a composer does in some sense use all accumulated knowledge about theory and musical style to guide compositional decisions. Without such knowledge, the composer would be swimming aimlessly in the great sea of "anything is possible." Composers impose restrictions on themselves in order to give coherence to their ideas. But it is the set of restrictions that composers choose which are later formulated by theorists, not vice versa. Schubert never worried about whether he was using melodic minor or harmonic minor (although he was certainly aware that he was raising the seventh degree of the A-minor scale when he wrote a G#). He did what he did because of traditions in his musical culture and because the sound of it worked for him; only later did a theorist call the collection of notes he used "melodic minor" or "harmonic minor". Beethoven never said to himself, "I've heard tell of this thing called the 'harmonic minor scale'. Think I'll try to include a couple of 'em in my next piece." For us, the term "harmonic minor scale" is just a handy way of verbally conveying to each other that we're talking about a set of notes with a certain configuration--a minor scale with the seventh degree raised.

Many fallacies based on the potential chicken-egg confusion between theory and practice--such as this idea that Beethoven "used" the harmonic minor scale--are standard fare in the teaching of music theory. These fallacies result from a progression of a) reductions of information, b) non-theories (i.e., theories which generously permit exceptions), and c) leaps from one closed system to another. We can trace the growth of such a fallacy as a four-step process. 1) A theorist tries to encapsulate and formulate observations about musical phenomena or, much more frequently, observations about notation of musical phenomena. As shown earlier (and described in the section on notation, p. 40 ff.) this results in a translation, and almost invariably a reduction, of information. 2) This descriptive formula is then symbolized by a single term (such as "harmonic minor"), which points to the more complex description (which in turn describes the notation, which in turn describes the musical phenomenon). Clearly this is another reduction of information: we now have a symbol which is open to confusion because of many possible frames of reference (just as a pointer to the computer address of a data structure contains none of that structure's data, and is useless without access to the structure itself). 3) This term is then codified as a theory or rule. However, unless the original description was absolutely complete, the rule will have exceptions, and is therefore really a guideline rather than a rule. 4) Finally, a leap is made by transporting this "rule" from the analytical domain in which it was developed into the compositional domain, which may or may not (in most cases clearly does not) operate in obedience to the concerns of the analyst. This progression of confusion--descriptive formulation, to descriptive term, to descriptive "rule", to generative "rule"--results in textbook terminology which is of dubious utility to a listener or performer, and of even more dubious utility to a composer.
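The pointer analogy can be sketched in a few lines of code. This is purely illustrative--the dictionary, its key, and its contents are invented for the example, not drawn from any theory text--but it shows how a term carries none of the data it stands for.

```python
# A term such as "harmonic minor" acts like a key (a pointer): it contains
# none of the description it names, and is useless without the table itself.
scale_descriptions = {
    "harmonic minor": {
        "degrees": [0, 2, 3, 5, 7, 8, 11],          # semitones above the tonic
        "notes_on_A": ["A", "B", "C", "D", "E", "F", "G#"],
    },
}

term = "harmonic minor"                  # the symbol alone: just a label
description = scale_descriptions[term]   # dereferencing it recovers the data

print(term)
print(description["notes_on_A"])
```

The reduction Dobrian describes is visible here: anyone holding only the string "harmonic minor", without access to the table (the fuller description), has lost the information the term was coined to summarize.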

This example shows the pitfalls of transferring too blithely from the descriptive to the generative. Another example will show the danger of transferring in the other direction.

Pierre Boulez is a very interesting thinker, especially when dealing with general ideas on music and culture such as form, criticism, taste, etc. However, when he discusses technical aspects of music (especially his own), he is so overly concerned with the concept of the musical "language", of which pitch is the fundamental structurable element (ah, perfect-pitchers, what are ya gonna do with 'em?), that he is often guilty of the same sort of note-counting that he rails against in his nontechnical essays. (In fact, the main body of his book Penser la musique aujourd'hui is a perfect example of this fascination taken to its rather absurd extreme.)

What follows is an example from a talk of his I attended at the Centre Georges Pompidou in Paris. This example was presented during a technical-but-for-the-general-public seminar on Language and Perception.

On page 2 of Boulez's composition Éclat, the following gesture occurs. [musical example not reproduced]

Boulez points out the obvious, that this is basically a chromatic descent of three tritones, followed by a B-flat. To "explain" the presence of the B-flat he points out that, with the proper octave transpositions and reordering of the first six notes, the B-flat can be shown to be the axis of symmetry upon which the first six notes converge.

If all this is not terribly interesting, it is at least indisputable. He then makes a rather dubious claim: that this organization is perceived when one hears the gesture. To me this implies that upon hearing the B-flat an (extremely) astute listener retrospectively mentally performs the proper octave transpositions and reordering of the first six notes to give logical meaning to the seventh. Perhaps Pierre Boulez and Milton Babbitt and three other people in the world would perform this sort of mental acrobatics, but even that is a little hard to believe. But let's assume for a moment that one does, either consciously or unconsciously, recognize the "logic" of this organization of pitches. And let's even grant (since Boulez is, after all, an acknowledged descendant of the Second Viennese School) that this type of symmetrical structure is in some way more rewarding, when we arrive on that seventh note, than any other, asymmetrical one. (If the last note had been E-natural, would I have mentally performed the orderings and transpositions necessary to make E-natural a satisfying point of symmetry? If the last note had been B-natural, would I have heard it as a "wrong note", and thought it was Prokofiev?) One has to choose pitches, after all, and the ones he chose here are as good as any and better than most. So what's the problem?
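The symmetry claim itself is easy to state computationally: pitch class a is an axis of symmetry for a set when every member x is mirrored by (2a - x) mod 12 within the set. The sketch below checks this condition; the example set is hypothetical, chosen only because it happens to be symmetric about B-flat--it is not the actual pitch content of the Éclat gesture.

```python
# Pitch classes are integers 0-11 (C = 0, ..., B-flat = 10, B = 11).
# `axis` is an axis of symmetry if every pitch x in the set has its mirror
# (2*axis - x) mod 12 also in the set.
def is_axis_of_symmetry(pitch_classes, axis):
    return all((2 * axis - x) % 12 in pitch_classes for x in pitch_classes)

# Hypothetical six-note set: the pairs (A, B), (A-flat, C), (G, D-flat)
# each sum to 2 * 10 (mod 12), so they converge symmetrically on B-flat.
example = {9, 11, 8, 0, 7, 1}
print(is_axis_of_symmetry(example, 10))  # True: B-flat is an axis
print(is_axis_of_symmetry(example, 9))   # False: A is not
```

Note that the check is trivial for a machine and, as argued above, anything but trivial for a listener in real time.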

Firstly, Boulez and composers who share his views have generated, and continue to generate, the study of musical language almost exclusively from a technical point of view (materials) rather than a syntactical one (choices). Of course it's much easier to say what notes you used than it is to say how (not even to mention why) you used them. To say, "I did this for these artistic reasons" requires stating aesthetic values and putting them up to scrutiny. It's much safer just to stick to demonstrable facts. As a result the general public and music students everywhere continue to hear composers and theorists give talks exclusively on materials, without choices being discussed. Young composers develop the idea (although one can hardly blame Boulez for the faults of lesser composers who attempt to imitate him) that if they use symmetrical pitch structures (or whatever) they will write music. After all, they have only been instructed on the materials, not the use of them. Yet, the example Boulez used here has the potential for demonstrating some interesting and clear aspects of his (marvelous) use of the language and its perception: the way that this selection and ordering of pitches carefully and elegantly avoids any traditional tonal connotation; the way that the disjunct, nondirectional contour, along with the speed and the "l.v." marking, brings about the perception of arpeggio rather than melody; the way that the octave transpositions add vital ambiguity to an otherwise simple chromatic descent of tritones; even, if one really insists, the way that the accelerando and the dynamic crescendo heighten the sense of arrival on the point of symmetry. One might even suggest discussing the sound of the example, given that we are talking about language and perception. Maybe it simply sounds good. During this lecture, delivered in one of the world's foremost centers of music and technology, Boulez never played a recording of the example, nor even played it on a piano (the original instrument in the score).

Secondly, if the best-known and most respected composer in France gives this sort of lecture to the general public (especially without playing what's being discussed), tells them that they're perceiving symmetrical pitch aggregates and retrospectively reordering and transposing them in their heads (they know they're not doing that, and can never hope to), and that this is how modern music is written and is to be perceived, then not only is he indubitably giving them a monstrous inferiority complex and alienating them from the very music upon which he proposes to enlighten them, but he is providing information which is essentially useless as anything other than a curiosity.

A postscript to this example: As I was interested to see the context in which the passage in question occurs, I proceeded to the library of the Pompidou Center to peruse the score.[51] (Example 1.) The piano is doubled simultaneously at the superior perfect fifth by the harp (with a few octave transpositions upward, presumably to facilitate the fingering or to create a more balanced over-all sonority), at the superior major sixth by the vibraphone, and at the superior minor sixteenth by the celesta. The actual sound of the score is thus [musical example not reproduced]. Given that the gesture is played quickly by four different players, each with a slightly different concept of accelerando, all acting on a short down-up cue from the conductor, I'm sure that even Boulez would not hesitate to admit that what is perceived is a rapidly arpeggiated version of [musical example not reproduced]. So much for convergence on a point of symmetry.

Implications for Music Education

The preceding examples suffer from two problems. They both are incomplete formulations within the domain they treat, and the formulations are inappropriately transferred to another domain. "Harmonic minor" is an inadequate description of the pitch set found in almost any piece of music, so it cannot possibly work as a generative theory. Boulez's demonstration of a pitch set's "convergence on a point of symmetry" is an insufficient explanation of his complex compositional technique, and so can never be an accurate explanation of our perceptions.

In discussing music we must be content with partial explanations, and not attempt to draw overly broad--much less universal--conclusions from them. We must be content to recognize and learn from the limitations of our theories. After all, if we ever did succeed at adequately explaining the composition and perception of music, it would become a static body of knowledge and would cease to be of interest. In order to experience or discover anything new, we would be forced to violate the theories we had established. As Professor Harkins wittily remarked in a personal conversation, "One gets the impression that for some people the goal of thinking is to think so much that they eventually reach a point where they no longer have to think."

Lest I give the impression of anti-intellectualism, I should point out that I am simply advocating an intellectualism that permits and admits to uncertainty (which will always be present whether we choose to permit it or not), enjoys and uses ambiguity for greater understanding (even if that understanding takes a still more ambiguous form), and recognizes the dangers (or at least the limitations) of absolute definitions.

Although it is more work for the teacher, theoretical analysis of music must steer sharply away from textbooks--which are too far removed from the musical phenomena--and must derive real experiential knowledge from music itself (aided by, but not replaced by, notation and other representation when appropriate). This does not require that teachers and students re-invent the wheel. The teacher can help the student avoid redundant information, help guide the student toward enlightening listening, and point the student to established theoretical ideas once the student has sufficient experience with the music itself to be an adequately critical reader of the theories.

Specialized Music Languages

So-called natural language is by no means the only language used for the description of music. Many cultures use symbolic written notation to describe either the sound or its means of production (tablature). Recently various special systems have also been developed to describe music to a computer. I will consider only two specialized music languages here--Western classical notation and the Musical Instrument Digital Interface (MIDI) protocol--with the aim of evaluating their effectiveness as descriptors of musical sound. I will begin with a few general remarks about the utility of notation.

There are many reasons why one writes down a musical idea. Probably the earliest and most important reason for notating music is to transmit it elsewhere in time and space. Before the invention of sound recording, the only way to recreate a musical experience in another time and place was to memorize it or to notate it, then to play it again, imitating as nearly as possible the original experience. Music was repeated by memory well before notation was used, and human memory is probably a more thorough (though more volatile) "storage medium" for saving and recreating musical experience. When music is of a certain complexity or length, written notation shows some advantages over memorization, because paper is a more stable storage medium than the brain. In the short term, paper does not suffer from memory loss, and serves as an aid in the predominantly oral transmission of musical culture. In the long term, people and even whole societies may die, and notation serves as a partial safeguard against that cultural loss. Notation is by no means free from a potential loss of information, however, since sonic memory must be translated onto paper and back into sound; notation is invariably an incomplete representation of sonic information.

Thus, notation of music can be viewed in some cases as simply an aid to memory or a preferred storage medium for recreating music in another time and place. This raises the question, "Why recreate a musical experience?" What does it even mean to "recreate a musical experience", given that any experience (especially musical experience) is dependent upon its context in time? Perhaps the transplantation of a musical experience in time is a way of achieving continuity in time.

One reason why humans seek to recreate musical experience is because a culture's music is central to its image of itself. The improvised music of Gambian griots is a bearer of their nation's 500-year history. Human memory is thereby harnessed in the service of cultural memory.[52]

Western Music Notation

In considering a specific system of notation, such as our standard five-line staff notation, it is important to consider its biases and to question, "What aspects of sonic information are being adequately described by this notation, and what information is being lost?" When the notation is retranslated into sound by a performer, we may then ask, "What is actually being recreated--which aspects of the original experience are retained and which are lost--when one plays from notation?"

The simplest approach to these questions is first to discuss what information is contained in most notated Western music. Pitches and rhythms are notated in the greatest detail, and are generally considered to be the primary bearers of musical information. Standard notation, however, generally ignores pitch inflections such as portamento, and indicates modifications of tempo only in the most general of terms (rallentando, etc.). Musical parameters such as dynamics and timbre are notated even less specifically, or not at all. So all subtleties of portamento, rubato, dynamics, and timbre--the elements considered most important to the expressive performance of music--are left unnotated, assumed known by any performer of the score through an acquaintance with the conventions of the musical society.

From this fact comes the idea that a score must be interpreted by the reader; the non-notated elements are provided according to societal conventions and the taste of the reader. It should be noted, though, that all aspects of music are provided in this way--according to societal conventions and the taste of the musician(s) involved--it's just that some aspects are notated in advance and others are not (they are either memorized or improvised in real time). So even in the performance of a composed, notated piece, many aspects of the musical performance are not notated and may not have been composed before the performance. The point is that "interpretation" is really improvisation or memorized composition or both. Insofar as a performer relies on memorized composition (i.e., prepared decisions, made during rehearsal) to supply non-notated aspects of the music, the act is the same as that of a notator: musical ideas are fixed in some storage medium for repetition at another time.

The notation in many of the works of composer Brian Ferneyhough, such as Cassandra's Dream Song for solo flute, stems from his desire to notate himself those aspects which would otherwise be consciously or unconsciously decided upon and memorized--or improvised--by the performer. It is debatable whether this increase in notation frees the performer by reducing the number of decisions to be made, or whether it overburdens the performer with responsibilities for conscious actions, some of which would otherwise be made unconsciously based on physical technique and societal conventions. Since the player of notated music composes or improvises all aspects of the music not explicated by the notation, the extent of the player's contribution is determined only by the completeness of the notation and the player's own conscientious attention to unwritten detail.

Ferneyhough's notation is representative of one trend, in the last fifty years of Western composed music, to notate more and more aspects of the music in greater and greater detail. An opposing trend has been to notate in less detail, writing out structures for guided improvisation, mobile forms, suggestive graphisms, or some combination of these. In both trends, the composer is considered free to specify as much or as little as desired, in effect determining how little or how much will be left up to the performer. In most cases, however, the player is not considered to have the same freedom of determination, especially in the traditionally strictly notated areas of pitch and rhythm. The reason for this difference is simply the societal view that the composer is the one with something to say--the creator--and the player is merely the vessel--the interpreter. This view has not always existed in classical music, however. It is well documented that performers of concertos during the classical and romantic periods improvised or composed their own cadenzas, and that composers such as Bach, Mozart, and Beethoven were also formidable improvisers.

It has long been acknowledged that many "great" players play a lot of "wrong" notes. Clearly pitch and rhythm are not the only bearers of musical expression or ideas. A player often may give an admirable performance while disregarding much of the notation--still conveying some if not all of the essential musical ideas of the composer, or conveying a full musical experience.

MIDI Data Protocol

The MIDI software specification is a protocol for transmitting performance instructions to computer-controlled musical instruments. In most cases the instrument is a synthesizer, but there also exist MIDI-controllable acoustic instruments such as pianos, as well as musical accessories--equalizers, reverberators, etc.--and even non-musical devices such as stage lights. Theoretically, any device that can be controlled by computer could implement the MIDI protocol.

The MIDI specification was devised in the early 1980s by music instrument manufacturers, primarily as a way of achieving cheap and ready standardization of data transmission between equipment made by different companies. Its evolution took place over a matter of a few years, as compared to the evolution of standard notation, which took place over several centuries. It does make considerable use of the model of standard notation, however, and includes such basic conventional notions as notes, dynamics, etc., thus inheriting both the benefits and the limitations of the standard notation system.
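The note-based model the specification inherits is visible in the byte format itself. The following sketch builds a few channel voice messages as defined in the MIDI 1.0 specification--a status byte carrying the message type and channel, followed by data bytes confined to 7 bits (0-127); the helper function names are my own.

```python
# Channel voice messages per the MIDI 1.0 specification.
# Status byte = message type (high nibble) | channel (low nibble, 0-15);
# each data byte is masked to 7 bits, i.e. 128 possible values.

def note_on(channel, pitch, velocity):
    return bytes([0x90 | channel, pitch & 0x7F, velocity & 0x7F])

def note_off(channel, pitch, velocity=0):
    return bytes([0x80 | channel, pitch & 0x7F, velocity & 0x7F])

def control_change(channel, controller, value):
    return bytes([0xB0 | channel, controller & 0x7F, value & 0x7F])

# Middle C (note number 60) at moderate loudness on channel 0:
msg = note_on(0, 60, 64)
print(msg.hex())  # '903c40'
```

Notice how directly this mirrors staff notation's vocabulary: a "note" begins, a "note" ends, and loudness--like every data byte--is quantized to 128 discrete steps.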

F. Richard Moore, in his paper entitled The Dysfunctions of MIDI,[53] outlines what he sees as the shortcomings of MIDI from a technical (computer science) standpoint. He contends that the extremely wide adoption of MIDI as a standard runs the danger of producing contentment with a limited system and causing stagnation in the progress of computer music development. It is true that the manufacturers probably did not foresee the speed with which the computer music community would push the use of MIDI to its technical limits. However, it is also clear, as evidenced by the enormous popularity of MIDI, that this language was sufficient to fulfill many musicians' need for a standardized and easily manageable way of implementing computer control of instruments. While the danger of stagnation due to use of a limited language may be real, it is a basic problem with music notation--including the potentially limiting nature of standard music notation--and is not unique to MIDI. When a system of notation (or any language system) proves inadequate for new usages that arise, it is extended (either by decree from a governing body or by developments in its vernacular usage) or abandoned.

Perhaps of greater concern to musicians are the musical assumptions and biases implied by the protocol.

When Western notation is used for post-audition (transcriptive) purposes, the lack of fidelity to the original (mostly due to the rigidity of its quantization of pitch and duration) becomes obvious. Musical data protocols based upon the Western notation system (MIDI) expose rather too brutally the poverty of its transcriptive possibilities. Indeed, the area of pitch inflection itself ("basic" pitch plus/minus offset) only becomes meaningful when viewed from the (disad)vantage point of this quantization.[54]

As discussed above, the limited descriptive powers of standard notation are substantially supplemented by the vast knowledge base and skills of the interpreter. This body of knowledge and skills includes primarily stylistic conventions, but also includes the ability of the interpreter to interact with the sound and the environment. When the "interpreter" of this standard notation (or another notation based upon it, as in the case of MIDI) is a machine, such vitally necessary knowledge and responsiveness are suddenly made conspicuous by their absence. That whole range of information must be supplied by the software programmed into a mass-produced commercial synthesizer.

Designers of electronic instruments, sound synthesists, and music programmers attempt to fill this gap in various ways. They may add complexity to music-producing algorithms in an effort to make the effect more interesting. Unfortunately, even complexity can be simplistic, by being either too predictable, so unpredictable as to be unengaging, or simply by being, for whatever reason, uninteresting as musical discourse. It is known that one of the main reasons that the sound of an acoustic instrument like the flute is so attractive is because the sound contains a vast and complex variety of subtle noises and variations which go virtually (but not totally!) undetected by our ears, in addition to the more obvious noises and variations that we do detect. Synthesists have tried to add noise, jitter, vibrato, pitch inflection, etc. to synthesized sounds in an effort to simulate the complexity of acoustic sounds. Unfortunately, for the most part, the precise nature of the variations which make acoustic sounds rich and attractive remains undiscovered. These "microscopic" but all-important aspects of acoustic sound are so complex, or complex in such an unusual way, as to be virtually undefinable in precise terms.

Similarly, the relationship between a virtuoso musician and his or her instrument is replete with an overwhelming amount and degree of nuance, built up over years of intense listening and practicing. So much of the nuance of the instrumentalist-instrument relationship is developed without being linguistically defined, by listening, imitating, feeling. Furthermore, it seems to be largely stored not in some intellectually explicable way, but in laboriously developed muscular reflexes, almost as if the brain is entirely bypassed. In playing a single brief note, a violinist combines bow angle, bow speed, bow pressure, bow placement, bow attack, finger placement, finger movement, and finger pressure, in addition to whatever totally involuntary muscular movements may be caused by nervousness, coffee consumption, humidity, unknown electrical discharges in the brain, etc.--and all of these factors are changing from millisecond to millisecond (more correctly, probably much, much faster than that), modified by the brain in interactive response to the sound being produced, the sound others are producing, the acoustics of the room, etc. It is no wonder, then, that a virtuoso performer of acoustic instruments is dismayed by the lack of response of a synthesizer--the sound of which is already vastly inferior to his or her ear--when she/he must manipulate with the foot--one of the less sensitive bodily extensions--a pedal which has only 128 possible gradations, and the resulting effect is (for example) a simplistic, strictly regular vibrato of a low-pass filter.
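The coarseness of those 128 gradations is easy to illustrate. The sketch below assumes a simple linear mapping from a continuous pedal position to a 7-bit MIDI controller value (the mapping itself is an illustrative assumption, not a claim about any particular instrument's firmware).

```python
# Illustrative only: map a continuous pedal position in [0.0, 1.0]
# to one of the 128 values (0-127) a 7-bit MIDI controller can carry.
def pedal_to_midi(position):
    return min(127, int(position * 128))

# Two physically distinct pedal positions collapse to the same value:
print(pedal_to_midi(0.500))  # 64
print(pedal_to_midi(0.507))  # 64 -- the difference between them is lost
print(pedal_to_midi(0.510))  # 65
```

Everything the foot does between two adjacent steps is simply discarded, which is one concrete form of the quantization lamented in the quotation above.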

Efforts by programmers to add complexity to the instrumentalist-instrument interaction in the case of synthesizers have suffered a fate similar to that of synthesists' efforts to add complexity to sound materials. The complexities are so numerous, undefined, and interconnected, and depend on so many variable, constantly changing factors, that an attempt to reproduce them is invariably simplistic. When a large amount of complexity is introduced, it may exceed the ability of the instrumentalist to control it (since such control was previously largely subconscious), it may be complexity which does not actually add to the performer's expressive or musical control, and it may simply be complexity which is not sonically or musically engaging because of aesthetic taste.

Given all of these daunting problems, one must at least credit the originators of the MIDI specification with providing the capability to express both discrete and continuous musical events, and for providing the special system exclusive message, an escape hatch by which the specification can be extended by individual instrument manufacturers. Although the language was designed with very specific musical meanings in mind--symbols for triggering the beginnings and endings of notes of specific pitch and loudness, inflecting the basic pitch by "bending" it, controlling over-all instrument volume continuously, etc.--the actual interpretation of the information is up to the receiving instrument. MIDI is frequently blamed for its inability to convey vital musical information, but the fault really lies more with the assumptions of MIDI--how the information is to be interpreted or processed--than with the information format itself (and how it could potentially be processed). It is important to remember that the symbols of any language or notation are not the meaning; they merely point to meaning.
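The message types mentioned here can be sketched as raw byte sequences. This is an illustrative reading of the MIDI 1.0 format (the helper functions are invented for the example); it shows that the symbols themselves fix nothing about how a receiving instrument will interpret them:

```python
def note_on(channel, pitch, velocity):
    """Trigger the beginning of a note: status byte 0x90 plus the
    channel number, followed by two 7-bit data bytes."""
    return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F])

def pitch_bend(channel, value):
    """Inflect the basic pitch by "bending" it: a 14-bit value
    (0-16383, center 8192) split into two 7-bit data bytes."""
    return bytes([0xE0 | (channel & 0x0F), value & 0x7F, (value >> 7) & 0x7F])

def system_exclusive(manufacturer_id, data):
    """The escape hatch: an open-ended message framed by 0xF0 and 0xF7,
    whose interior meaning is defined entirely by the manufacturer."""
    return bytes([0xF0, manufacturer_id & 0x7F]) + bytes(d & 0x7F for d in data) + bytes([0xF7])

middle_c = note_on(0, 60, 100)  # three bytes: 0x90, 60, 100
```

Whether those three bytes produce a piano tone, a drum hit, or silence is entirely up to the receiver--the symbols point to meaning; they are not the meaning.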

Implications for Music Education

Western music notation, for all its imperfections, does have the advantage of being a bona fide language. The meanings of the symbols are quite well established and agreed upon; although meaning is often ambiguous, it is no more so than the meanings of words. The rules which govern the arrangement of the symbols are likewise standard and thorough. It seems reasonable, therefore, to assume that they can be successfully discussed and that notation can be a valuable tool for expressing ideas. Of course, it is important when discussing music notation, and the structures found therein, to remember that "the map is not the territory, and the name is not the thing named".[55] The ideas expressed in notation may or may not be perceptible in their sonic manifestation.

The view of music notation as a language suggests that it might be taught similarly to a language. It is no one's native language, to be sure, but ideas of second language acquisition might be applied to its teaching. The following guidelines, designed for language teaching, are equally applicable to the teaching of notation and notation-related skills (dictation, sightreading, etc.).

Some experts in second language teaching draw an important distinction between acquisition and learning. Acquisition is essentially a subconscious process: one is not aware that one is acquiring a language, only that one is using it effectively (i.e. obtaining the desired reactions); one is not aware of using particular rules (or any rules at all, perhaps), but one has a feel for correctness and recognizes when an error--a violation of the unstated rules--has been committed. Learning is knowing about language--having explicit formal knowledge of the language and its rules. Grammar-based teaching--teaching of rules through error correction and explanation--appears not to facilitate acquisition, especially in children.

We acquire (not learn) language by understanding input that is a little beyond our current level of (acquired) competence....Speaking fluency is thus not "taught" directly; rather, speaking ability "emerges" after the acquirer has built up competence through comprehending input....In order for acquirers to progress to the next stages in the acquisition of the target language, they need to understand input language that includes a structure that is part of the next stage....

How can we understand language that contains structures that we have not yet acquired? The answer is through context and extra-linguistic information....by adding visual aids, by using extra-linguistic context....we use meaning to help us acquire language.[56]

There is considerable evidence of a "natural order"[57] of acquisition of grammatical morphemes: some grammatical structures and usages tend to be acquired relatively early, others tend to be acquired relatively late. These tendencies are (for the most part) common to both first and second language acquisition, to both children and adults, and to people with different first language influences. It would be instructive to consider the possibility of a "natural order" of acquisition of musical skills. Which skills and concepts in music reading and performance are immanently easier to acquire?

If we have some idea of the "natural order" of acquisition of structures, we can state the acquirer's stage of development as point i in the natural order, and the next stage in the order as i+1. The introduction of a new structure (i+1) is best achieved when the speaker "casts a net" of new structures surrounding point i on the continuum. Provided that the speaker is understood, the speech will include structure i+1, be comprehensible, and be more interesting than speech which tries to pinpoint i+1. Furthermore, in a classroom situation each student will have a slightly different point i and point i+1. Providing a "net" of input assures that everyone's individual i+1 will be introduced. In other words, the progression introduced by the speaker should be "roughly tuned" to the desired order. This has advantages over attempts to "fine tune" the acquirer's progression: since we are always taking a guess as to where the student's current level is, we may miscalculate i+1 if we tune the input too finely; roughly tuned input recycles and reviews, whereas finely tuned input progresses linearly without review; roughly tuned input will be good for more than one acquirer at a time and "will nearly always be more interesting than an exercise that focuses just on one grammatical point."[59] A too rigidly "programmed" method of progressive steps risks being guilty of this excessively fine tuning, either boring or losing a large number of students.

For acquisition to take place, the acquirer must have a low "affective filter"[60], i.e., must be open to input. Factors that contribute to this include a positive attitude toward the "speaker", a low-anxiety environment, and some degree of self-confidence. Thus, "the students are not forced to [perform] before they are ready [and] errors which do not interfere with communication are not corrected".[58] An active interest in the subject matter is also essential to a low affective filter; the material being used must be of relevance and interest to the students.

Conscious learning does seem to be useful when used as a monitor (editor): utterances are initiated by the acquired system and may then be adjusted or modified by rule-based learning (either before or after actually speaking or writing the idea). Monitoring is used in situations where a rule is learned but not yet acquired and is easy enough to implement quickly, or in situations where the rule has been acquired.

We see the natural order for grammatical morphemes when we test students in situations that appear to be relatively "Monitor-free", where they are focused on communication and not form. When we give adult students pencil and paper grammar tests, we see "unnatural orders", a difficulty order that is unlike the child second language acquisition order....

A very important point about the Monitor hypothesis is that it does not say that acquisition is unavailable for self-correction. We often self-correct, or edit, using acquisition, in both first and second languages. What the Monitor hypothesis claims is that conscious learning has only this function, that it is not used to initiate production in a second language.[61]

This indicates that the utility of conscious learning is largely restricted to the monitor function.

If an effort is to be made to teach music skills such as singing, sightreading, and dictation by acquisition, the teacher must provide constant examples, presented in an appropriate "natural" order of difficulty, and create an environment conducive to learning by example. Rules of music theory or conscious strategies, as applied to music performance, should be taught after acquisition has gotten underway, and should be employed only for monitoring purposes. Writing music is highly recommended as a way of applying both acquired and learned structures. Even if good writing skills are not the objective, this can be a good way of developing monitoring and applications of rules.

Teaching by Example

After all this discussion of music and language, and the problems immanent in attempting to correlate the two, one is tempted to conclude that the only way to improve one's knowledge of music is through music. Education in all the surrounding trappings of musical culture--literature, history, etc.--and all the ideas related to or derived from music--abstract form, psychoacoustics, etc.--is certainly valuable. But for purely musical insight, nothing succeeds like music.

I therefore posit a teaching method, applicable both to musical skills and music appreciation, in which music is taught by example. This is hardly a new idea. In many cultures music is very successfully learned entirely through experience and imitation. Those cultures are not in any way stagnant as a result of a paucity of discussion. Original thinkers do their original work without being told how to do it. (If they were told how to do it, it wouldn't be original, nor would they.) Yet this obviously successful means of music education is eschewed in the university, presumably on the grounds that it is anti-intellectual or insufficiently liberal. Such fallacies are rooted in the beliefs that only skills--not ideas--can be taught by example and that ideas can only be taught by verbal instruction.

The goal of the method is for the student to acquire personally meaningful knowledge about music using music itself. Spoken/written language is to be used as little as possible (if not less). The most fundamental and most common activity is making comparisons between sounds, musical excerpts, performances of music, and occasionally visual images (without linguistic comment). Sounds may include instrumental sounds and combinations thereof, concrete sounds, i.e., anything. Musical excerpts may include sections, phrases, individual moments, individual parts (a single line of a polyphonic texture, a single instrument in a group, etc.) of "existing" music or newly composed/played music. Different performances or renditions of "the same" music may be compared. Visual images may be used to reinforce sonic perception (e.g., computer display of played music, intellectually and emotionally evocative representational and non-representational images, etc.). The idea is to isolate musical ideas, rather than ideas developed in other domains.

When the topic is musical "materials" (the sound itself), visual images may be related abstract forms, or direct representational depictions of some aspect of the sound or some theory about the sound (e.g., related arts, graphical scores, etc.). When the topic is the cultural environment of music (for, after all, music does not exist in a vacuum), representational images depicting performance setting, surrounding social conditions, etc. may be more appropriate (e.g., videos, iconography, etc.).

In short, the intention of the method is to teach only by providing opportunities for acquisition. Conscious learning is certainly not discouraged, but it takes place outside of the classroom.

Teaching entirely by example is certainly not without pitfalls. Without any verbal explanation, how does the student know what is being exemplified? If that's not sufficiently clear, the student may a) have no idea what she/he is supposed to derive from the example, b) draw the wrong conclusion from the example, or c) draw an unintended but valuable conclusion. Obviously, a and b are probably not so desirable, whereas c could be desirable, especially if the aim is for the student to create her/his own path to her/his own essential information. How does one insure that c is achieved, though, and not a or b?

Multiple examples can be useful for exemplifying without explicating. Care must be exercised that the common traits of the examples are only (or most obviously) those that are at issue (otherwise, see a, b, and c, above). Rigorous comparison of multiple examples can help eliminate b (or insure that a b is really a c).

Counter examples are another tool for exemplifying without explicating. The fact that an example is intended as a counter example should be indicated to avoid unnecessary confusion. (If it's not obvious, though, it may not be a good enough counter example.)

The question seems to come down to: Is the intent to guide the student to a specific conclusion, or is it to present evidence and allow the student to reach an independent conclusion? If the former, then explication is more direct and (by definition) less ambiguous than example. If the latter, then explication is virtually impossible (for it is insufficiently ambiguous), and example is (by definition) the "evidence".

In fact, the real goal is to teach ways of drawing conclusions independently, from the evidence, rather than presenting foregone conclusions. So one must first teach bases of decision-making and evaluation. Can these, too, be taught by example? One of the founders of UCSD, Roger Revelle, recounted:

[It was the conclusion of the founders of UCSD that] the object of a college education was not to acquire a body of knowledge, but to learn how to learn. We felt that in the rapidly changing world of our times, few of the so-called facts one learns in college were likely to be useful, or even true, ten years later. But what would be permanently useful were the language and the logic of a science or a humanistic field, the demonstration that it is possible to learn something new that is not already known--in other words, to make discoveries--and that it is an exciting and satisfying thing to do so.[62]

Notes

1. information. See Dobrian, Chris. "Music and Artificial Intelligence", 1992.

2. meaning. Knowledge that is considered the end goal, the ultimate significance, of a perceptive and evaluative process. (c.f. significance.)

3. Erickson, Robert. Sound Structure in Music. University of California Press: Berkeley, 1975. p. 1.

4. Bateson, Gregory. Mind and Nature: A Necessary Unity. New York: E.P. Dutton, 1979. (New York: Bantam Trade Edition, Bantam Doubleday Dell Publishing Group, 1988.) p. 31.

5. Russell, Bertrand. Selected Papers. New York: Random House, 1955. p. 358.

6. Longfellow, Henry Wadsworth. Outre-Mer: A Pilgrimage Beyond the Sea, 1835.

7. Stravinsky, Igor. Chronicle of My Life. London: Victor Gollancz, 1936. pp. 91-92.

8. Stravinsky, Igor. Poetics of Music. New York: Alfred A. Knopf, 1947. p. 79. (Lectures delivered in 1939-1940.)

9. Copland, Aaron. What to Listen for in Music. New York: McGraw-Hill, 1957. pp. 73-74.

10. Ibid. pp. 13-14.

11. Ibid. p. 12.

12. Cooke, Deryck. The Language of Music. New York: Oxford University Press, 1959. pp. 15, 33 & 32.

13. Hindemith, Paul. A Composer's World, Horizons and Limitations. Gloucester, MA: Peter Smith, 1969. p. 40. (Charles Eliot Norton lectures, 1949-1950, copyright 1952.)

14. Cooke. op. cit. pp. 21-22.

15. Salzman, Eric. Twentieth Century Music: An Introduction. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1974. p. 48.

16. Cooke. op. cit. p. 106.

17. Reimer, Bennett. A Philosophy of Music Education. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., Second Edition, 1989. pp. 23 & 17.

18. Ibid. pp. 23-26.

19. Ibid. pp. 27-28.

20. Ibid. pp. 28-29.

21. significance. The concrete object or abstract concept which is signified (referred to) by a symbol or grammatical construction of symbols. (c.f. meaning.)

22. Daniélou, Alain. The Raga-s of Northern Indian Music. London: Barrie & Rockliff the Cresset, 1968. p. 8.

23. Orlov, Henry. "Toward a Semiotics of Music". The Sign in Music and Literature. Wendy Steiner, ed. Austin: University of Texas Press, 1981. p. 131.

24. Boulez, Pierre. "Sound and Word". Notes of an Apprenticeship. Herbert Weinstock, trans. New York: Alfred A. Knopf, 1968. pp. 52-56. (Relevés d'apprenti. Paris: Editions du Seuil, 1966.)

25. Stravinsky, Igor. Poetics of Music. pp. 26-27.

26. Kodály, Zoltán. Visszatekintés. [In Retrospect.] Ferenc Bónis, ed. Budapest: Zenemükiadó, 1964. Vol. I.

27. Sellars, Peter in Moyers, Bill. A World of Ideas II; Public Opinions from Private Citizens. New York: Doubleday, 1990. p. 24.

28. Suzuki, Shinichi. Remarks at the "National Festival", Tokyo, Japan, 1958.

29. Dewey, John. Art as Experience. New York: Capricorn Books, 1958. p. 74.

30. Anderson, Laurie. Aphorism displayed on a wall in the film Home of the Brave, 1985.

31. Orlov. op. cit. p. 131.

32. Hrushovski, Benjamin. "The Structure of Semiotic Objects: A Three-Dimensional Model". The Sign in Music and Literature. Wendy Steiner, ed. Austin: University of Texas Press, 1981. p. 12.

33. Ibid. p. 18.

34. Ibid. p. 19.

35. Stanislavski, Constantin. Creating a Role. Elizabeth Reynolds Hapgood, translator. New York:Routledge, 1961. p. 262.

36. Ibid. p. 270.

37. Hrushovski. op. cit. p. 19.

38. Schank, Roger and Abelson, Robert. Scripts, Plans, Goals, and Understanding. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1977.

39. Orlov. op. cit. p. 133.

40. Hrushovski. op. cit. p. 23.

41. Varèse, Edgard in Schuller, Gunther. "Conversation with Varèse". Perspectives of New Music, 3:2, 1965. pp. 32-37.

42. Eco, Umberto. The Open Work. Translated by Anna Cancogni. Cambridge, Massachusetts: Harvard University Press, 1989. p. 3.

43. Orlov. op. cit. pp. 132 & 136.

44. Meyer, Leonard B. Emotion and Meaning in Music. Chicago: The University of Chicago Press, 1956. pp. 43-44.

45. McHose, Allen Irvine. The Contrapuntal Harmonic Technique of the Eighteenth Century. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., 1947. p. 19.

46. Meyer. op. cit. p. 54.

47. Perlman, Alan M. and Greenblatt, Daniel. "Miles Davis Meets Noam Chomsky: Some Observations on Jazz Improvisation and Language Structure". The Sign in Music and Literature. Wendy Steiner, ed. Austin: University of Texas Press, 1981. pp. 169-183.

48. Copland. op. cit. p. 9.

49. Ibid. p. 18.

50. Ibid. pp. 10 & 17.

51. Boulez, Pierre. Éclat. London: Universal Edition, 1965. p. 3.

52. Lewis, George. Personal correspondence, 1991.

53. Moore, F. Richard. "The Dysfunctions of MIDI". Proceedings of the International Computer Music Conference. San Francisco: Computer Music Association, 1987. pp. 256-263.

54. Lewis. op. cit.

55. Bateson. op. cit. p. 30.

56. Krashen, Stephen D. and Terrell, Tracy D. The Natural Approach. Hayward, CA: The Alemany Press, 1983. p. 32.

57. Ibid. p. 28.

58. Ibid. p. 35.

59. Ibid. p. 38.

60. Ibid. p. 20.

61. Ibid. p. 31.

62. Revelle, Roger. Paper to the Revelle College Renaissance Revisited Convocation. La Jolla, California: date unknown. (As printed in the Los Angeles Times, July 21, 1991.)