
Andrea Baroni

A BRIEF HISTORY OF SYNTHESIZERS


INDEX

1. In the beginning… a brief history of modern synths' ancestors
2. Understanding synthesizers
   2.1 What is a Synthesizer?
   2.2 The characteristics of sound and its perception
      2.2.1 Pitch
      2.2.2 Loudness
      2.2.3 Timbre
   2.3 Different types of synthesis
      2.3.1 Additive synthesis
      2.3.2 Subtractive synthesis
      2.3.3 Frequency Modulation
      2.3.4 Wavetable synthesis
      2.3.5 Linear arithmetic synthesis
      2.3.6 Physical modelling
      2.3.7 Other types of synthesis
3. From first analog synths to modern software synths
   3.1 First experiments
   3.2 Moog and the first affordable analog synthesizers
   3.3 Microprocessor controlled and polyphonic analog synthesizers
   3.4 The digital revolution
   3.5 The virtual revolution


CHAPTER I

In the beginning… a brief history of modern synths’ ancestors

Synthesis is a process through which compounds are obtained, possibly artificially, from simpler elements.

If we consider the matter carefully, the words we use to describe the world around us carry much more information than we realise. In one of its meanings, the word "synthetic" has today become a synonym for something produced artificially, that is, by processes which imitate natural ones.

Why, then, dwell upon a word such as "synthesis"? What is the aim of this introduction to a subject about electronic music and the modern musical instruments commonly known as synthesizers?

Because, as will become clear later, before speaking about synthesizers it is necessary to understand what the reproduction and synthesis of sound is based on, and which processes allow us to replicate the particular acoustic sensations linked to physical phenomena (the plucking of a string, the impact of two objects, air passing through a tube).

This topic has been a source of curiosity and study since ancient times. It should therefore not seem strange to begin a discussion of synthesizers in an age when the concept of electricity could not even be imagined. Much of what was done from the beginning of the 20th century onwards shaped the evolution of modern keyboards, synths and, generally speaking, of electronic music.

Though electricity is a child of the modern age, mechanics is an ancient science, and the automation and reproduction of sounds and music by means of machines is a subject dating back to the ancient Greeks.

The Greek engineer Ktesibios, who lived in the 3rd century B.C., answered the question "How can you play more than one instrument at a time?" by designing the Hydraulos (see figure 1.1). It was a machine essentially made up of a tube placed inside a tub filled with water by a pump. The pressure was set by the weight of the water, and a series of mechanical levers and devices pushed the air through separate pipes, according to the same principle that governs modern pipe organs. Organs can in fact be considered the first rudimentary example of additive synthesizers, since they build their sound by combining a series of harmonics that vary over time. All this allowed a single person to play an instrument of a harmonic complexity that until then had been impossible to reach with any single instrument.

Figure 1.1 – the Hydraulos

Remaining in ancient Greece, the Aeolian harp is, by contrast, the first example of a completely automated instrument. It consisted of two bridges spanned by strings; it was the wind itself that generated music by passing over them.

After a considerable jump in time, we arrive in the 15th century, when the hurdy-gurdy, the ancestor of the barrel organ, was invented. To a certain extent this was the forerunner of what can be described as an instrument with a sequencer, that is, a means of automatically reproducing a melody or a pattern of notes.

If we diverge from musical instruments for a moment, we can consider a device which had a huge impact on the modern age: in 1641 the young Pascal invented the first adding machine, the Pascaline, which can be regarded as the forerunner of modern computers and therefore of modern digital synths.

We then arrive in the 18th century, when experiments on electricity attracted greater and greater interest, as did the machines that used it. The first true electrical instruments in history, the Denis d'or (the "Golden Denis") and the Clavecin Électrique (electric harpsichord) by Jean-Baptiste de La Borde, date back to the middle of that century. In the latter, a small keyboard, just like that of a harpsichord, controlled a series of hammers which, charged with static electricity, rang small bells.

A few years later the Panharmonicon appeared, a mechanical instrument with a keyboard that played automated flutes, clarinets, trumpets, violins, drums and other instruments. It was created by Johann Maelzel, who even persuaded Beethoven to compose for his invention. Even though the Panharmonicon was a mechanical rather than an electric instrument, the idea at its heart is the same one found inside modern sampling instruments.

Going back to the first electrical instruments, the conception of the electromechanical piano is due to Hipps (whose first name is unknown). This instrument was essentially composed of a keyboard which activated electromagnets; these in turn drove dynamos (small electrical current generators), the devices actually responsible for producing the sound. They were the same dynamos which, almost a century later, would be used in Cahill's Teleharmonium.

Elisha Gray, an inventor of the telephone (like Bell, who however obtained the patent first), created the electroharmonic telegraph. Here, for the first time, oscillators appeared: Gray had discovered how to build a self-vibrating circuit, essentially a frequency oscillator. This system, originally conceived to transmit music over telephone wires, was adapted by Gray to be used independently of a telephone, with a speaker to output the sounds. For the record, Alexander Bell himself produced a similar instrument which he called the electric harp.

It was at the end of the 19th century that music was first thought of not only as a performance but also as the replication of a previous performance. The phonograph, invented by Thomas Edison in 1877, is the first device that let you record any kind of sound onto a physical medium, in this case a wax cylinder inscribed by means of a diaphragm fitted with a needle. These cylinders did not last long, and Edison had first intended the invention for business and office use. The same idea would later be refined with various kinds of cylinders and records, leading to the modern record player, for a long time the most widespread system of sound reproduction.

Figure 1.2 – Edison's Phonograph

The Player Piano was based on the same idea of reproduction. Invented in the U.S. in the late 19th century, it allowed a performance to be recorded on a paper roll which could be copied and distributed to anyone who owned the same instrument, so that an identical performance could be reproduced, in a way very similar to what is done with MIDI files today.

In 1898 Thaddeus Cahill was granted a patent entitled "Art of and Apparatus for Generating and Distributing Music Electrically". His idea was to create an electric device by means of which music could be played and distributed to offices, hotels and houses over telephone wires. Thus the Telharmonium, or Dynamophone, came to light. Like the electromechanical piano mentioned above, it generated sound by means of dynamos; these produced alternating current giving rise to a sinusoidal wave (a dynamo used in this way is called an alternator). The sound was produced using electromagnets and very large tone wheels. This instrument is considered by many to be the first additive synthesis device.

We will close this brief survey of the ancestors of modern synths with something that perhaps comes closest to them: the Singing Arc, the first fully electronic instrument ever made. It was conceived by William Duddell, who started from the technology used in the carbon arc lamp, a precursor of the light bulb. The problem with this light-emitting device was that it was also a source of a great deal of noise, from a low hum to an annoying high-frequency whistle. Duddell, a physicist who had been asked to investigate the origin of the noise these lamps produced, discovered that the more current was applied to the lamp, the higher the frequency of the sound it produced. To demonstrate the phenomenon he connected a keyboard to the lamp, which was thus dubbed the Singing Arc. During a convention of electrical engineers in London this keyboard was connected to every lamp in the building, and it was discovered that not only did they all play together, but all the lamps connected to the same electric circuit in other buildings played along as well. A way of transmitting music at a distance had thus been found.

No further development followed, nor did Duddell apply for a patent for his device. He started touring the country to show his Singing Arc, which soon became a downright novelty.

And so we close this short historical journey through the evolution of the mechanical and electrical instruments that most deeply influenced the development of modern synths and, more generally, of every analogue and digital instrument used in electronic music. Before resuming the journey from the 20th century onwards, in the next chapter we will look at the exact structure of a synth and at the algorithms and techniques used in sound production.


CHAPTER II

Understanding Synthesizers

As I stated at the beginning of the first chapter, to understand the history and evolution of this peculiar "family" of instruments it is necessary to know what sound generation is based on and to become clearly aware of the techniques and algorithms that have been chosen in every type of synth.

2.1 What is a Synthesizer?

It may sound reductive, but a synthesizer can best be defined as a device that constructs a sound by uniquely determining the fundamental elements of pitch, timbre and loudness, which are the main characteristics of sound, as we will discuss later on. There are many different types of this instrument, but it is important to state that it is not a product like a motorcar, where the various models all perform more or less the same function; it is not a machine, but a tool to create music with.

2.2 The characteristics of sound and its perception

As a physicist might say, a sound is the result of a stimulus, caused by moving objects, induced in the ear by the vibration of air particles. If the movement of the source object consists specifically of a periodic oscillation around a central point, and if the solid body struck by the continual changes of pressure is the tympanic membrane of the ear, this phenomenon is perceived precisely as a sound.

From a strictly physical point of view, what characterises a sound are its frequency and amplitude. The former, measured in Hz (cycles per second), is tied to the vibration speed of the source object; the latter, measured in decibels, is tied to the width, or more properly the energy, of the oscillation itself.

It is now necessary to point out the link between sound as a physical phenomenon and the corresponding auditory sensation. We will therefore consider pitch, tied to frequency; loudness, tied to amplitude; and timbre, which indicates the colour, or quality, of a sound.

2.2.1 Pitch

Pitch is the quality of a sound which makes some sounds seem "higher" or "lower" than others. It is determined by the number of vibrations produced in a given period of time, corresponding to the frequency of the sound signal.

An average person can hear sounds from about 20 Hz to about 20,000 Hz. Above and below this range lie ultrasound and infrasound, respectively. The upper frequency limit drops with age.
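To make the relationship between pitch and frequency concrete, the short Python sketch below converts an equal-tempered note number into its frequency, taking 440 Hz as the reference for the A above middle C. This is only an illustrative aside, not part of the original text; the note numbering follows the MIDI convention introduced in chapter III.

import math

def note_to_frequency(note_number: int, a4_frequency: float = 440.0) -> float:
    """Convert an equal-tempered MIDI note number to a frequency in Hz.

    Each semitone multiplies the frequency by 2**(1/12); MIDI note 69 is A4.
    """
    return a4_frequency * 2.0 ** ((note_number - 69) / 12.0)

if __name__ == "__main__":
    for note in (57, 60, 69, 81):          # A3, middle C, A4, A5
        print(note, round(note_to_frequency(note), 2), "Hz")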

2.2.2 Loudness

Loudness is the amount or level of sound, corresponding to the amplitude of the sound wave. A change of loudness in music is called dynamics, and loudness is often measured in decibels (dB). Sound pressure level (SPL) is a decibel scale which uses the threshold of hearing as its zero reference point. While there is technically no upper limit to amplitude, sounds begin to damage the ears at about 85 dB, and sounds above approximately 130 dB (the so-called threshold of pain) cause pain. Here too the exact range depends on the individual and on their age.
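As a small illustrative aside (not drawn from the text), the SPL value in decibels can be computed from a pressure amplitude using the conventional 20 micropascal reference for the threshold of hearing:

import math

REFERENCE_PRESSURE_PA = 20e-6   # 20 micropascals, the nominal threshold of hearing

def sound_pressure_level(pressure_pa: float) -> float:
    """Return the sound pressure level in dB SPL for a given RMS pressure in pascals."""
    return 20.0 * math.log10(pressure_pa / REFERENCE_PRESSURE_PA)

print(round(sound_pressure_level(20e-6), 1))   # 0.0 dB, the threshold of hearing
print(round(sound_pressure_level(1.0), 1))     # about 94 dB, a common calibration level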

2.2.3 Timbre

In music, timbre is the quality of a musical note which allows you to distinguish between different sound sources producing sound at the same pitch and loudness.

The vibration of sound waves is quite complex: most sounds vibrate at several frequencies simultaneously, and the additional frequencies are called overtones or harmonics. The relative strength of these overtones helps determine a sound's timbre.

Though the phrase "tone colour" is often used as a synonym for timbre, the colours of the optical spectrum are not generally associated with particular sounds; the sound of an instrument is more likely to be described with words like warm, harsh, dull, brilliant, pure or rich.


2.3 Different types of synthesis

Just as analysis makes it possible to obtain the main parameters characterising the spectrum of a sound signal, the aim of synthesis is to artificially produce a sound with certain required timbral features by generating a variation of electrical voltage using either analogue or digital techniques.

In recent decades a series of synthesis techniques have been devised and formalised. They differ from one another in how they are used and in the range of timbres that can be obtained.

The synthesis techniques examined here are the most significant from a historical and practical point of view. It is important to underline that some of these techniques are not mutually exclusive: they can be combined to obtain new and more interesting sounds, as happens today in the software and hardware of modern synthesizers.

2.3.1 Additive synthesis

The sound is constructed from the ground up by mixing (summing) together one or more basic, simple waveforms (such as sine waves) and their harmonics. Figure 2.x shows how two sine waves can be summed to form a more complex waveform. This method of synthesis is theoretically capable of reproducing any sound, because every sound can be broken down into a collection of sine waves. However, an enormous number of harmonics is needed to create a complex sound.

From a digital point of view, additive synthesis is the sum of the samples of every sine wave, measured at instants determined by the sampling frequency. One of the most interesting features of this kind of synthesis is the possibility of defining the amplitude contour, that is the envelope, of every single component. This makes it possible to produce sounds with a dynamic spectrum.

One of the drawbacks of this technique is that it is very expensive in terms of computing time, making it difficult to use for real-time sound generation.
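To make the idea of summing sine waves concrete, here is a minimal Python sketch of additive synthesis. The choice of partials and the simple exponential envelopes are invented for illustration only; they are not taken from the text.

import numpy as np

SAMPLE_RATE = 44100

def additive_tone(fundamental_hz, partials, duration_s=1.0):
    """Sum sine-wave partials; `partials` maps harmonic number -> relative amplitude."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    tone = np.zeros_like(t)
    for harmonic, amplitude in partials.items():
        # Simple per-partial decay so higher harmonics fade faster (a crude envelope).
        envelope = np.exp(-3.0 * harmonic * t / duration_s)
        tone += amplitude * envelope * np.sin(2 * np.pi * fundamental_hz * harmonic * t)
    return tone / max(1e-9, np.max(np.abs(tone)))   # normalise to avoid clipping

# Example: a 220 Hz tone built from the first four odd harmonics.
signal = additive_tone(220.0, {1: 1.0, 3: 0.5, 5: 0.25, 7: 0.125})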

2.3.2 Subtractive synthesis

This process takes the reverse approach to generating sound. It is based on the complete or partial subtraction of some frequency components from an initial sound. This sound is usually a waveform rich in harmonics (such as a square wave or, even better, a white noise generator), and the frequencies are removed by applying specific devices called filters. Figuratively speaking, this technique resembles the work of a sculptor who, starting from a shapeless block of marble, removes material to reveal the required shape.

This method can be used to effectively recreate natural instrument sounds as well as textured, surreal sounds. Obviously the filters are crucial: the better the filters, and the wider the choice of available filters, the better the end result will be.
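A minimal sketch of the subtractive idea follows: a harmonically rich source (white noise) is "sculpted" with a very simple one-pole low-pass filter. The filter design and cutoff value are illustrative choices, far simpler than the filters of a real synthesizer.

import numpy as np

SAMPLE_RATE = 44100

def one_pole_lowpass(signal, cutoff_hz):
    """Very simple one-pole low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / SAMPLE_RATE)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += a * (x - y)
        out[i] = y
    return out

# Start from a harmonically rich source (white noise) and remove the high frequencies.
noise = np.random.uniform(-1.0, 1.0, SAMPLE_RATE)   # one second of white noise
dark_noise = one_pole_lowpass(noise, cutoff_hz=800.0)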

2.3.3 Frequency Modulation (FM)

This synthesis technique, still one of the most used today, is based on the well-known model of frequency modulation, widespread in broadcasting, which was extended to sound synthesis by Chowning in 1972.

The output of one oscillator (the modulator) is used to modulate the frequency of another oscillator (the carrier). These oscillators are called operators. While in broadcasting the frequency of the carrier is very high (measured in MHz) compared with the frequencies of the modulating signal (typically in the audio range), in sound synthesis the frequencies of the two signals lie within the same range.

FM synthesizers usually have 4 or 6 operators. Algorithms are predetermined combinations of routings between modulators and carriers. The envelope of the carrier shapes the envelope of the sound itself, while the envelope of the modulator shapes the evolution of the spectral content. FM synthesis is therefore a very advantageous technique, because it makes it possible to obtain complex signals starting from only two oscillators, even if it does not offer all the freedom permitted by additive synthesis, since the spectral components are bound by specific relations.

This computational efficiency explains the technique's great popularity in early digital synthesizers and sound cards.
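The two-operator arrangement described above can be sketched in a few lines of Python. The carrier and modulator frequencies and the modulation index below are arbitrary illustrative values, not figures from the text.

import numpy as np

SAMPLE_RATE = 44100

def fm_tone(carrier_hz, modulator_hz, modulation_index, duration_s=1.0):
    """Two-operator FM: the modulator's output drives the carrier's instantaneous phase."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    modulator = np.sin(2 * np.pi * modulator_hz * t)
    return np.sin(2 * np.pi * carrier_hz * t + modulation_index * modulator)

# A 2:1 carrier/modulator ratio gives a spectrum with harmonically related sidebands.
signal = fm_tone(carrier_hz=440.0, modulator_hz=220.0, modulation_index=3.0)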

2.3.4 Wavetable synthesis

This form of synthesis makes use of pre-recorded, digitized audio waveforms of real or synthetic instruments. The digital audio segments are stored as a table of waveforms in memory and played back at different speeds to produce output at a different pitch for each musical note.

A common addition to wavetable synthesis is for each instrument waveform to contain a loop region. This region starts after the attack segment of the digital audio and repeats while the instrument's note is sustained; the release segment of the digital audio then finishes off the note.

Using envelopes and modulators, these waveforms can be processed and layered to form complex and interesting sounds. For this type of synth, the physical memory needed to hold the waveforms is clearly crucial, which is why the technique has come into massive use only in relatively recent times.
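A minimal sketch of wavetable playback follows: a single stored cycle is read at a speed proportional to the desired pitch. The table contents and size are invented for illustration; a real instrument would store recorded waveforms and interpolate between table entries.

import numpy as np

SAMPLE_RATE = 44100
TABLE_SIZE = 2048

# A single-cycle "wavetable": here just a stored sawtooth, standing in for a recorded cycle.
wavetable = 2.0 * (np.arange(TABLE_SIZE) / TABLE_SIZE) - 1.0

def play_wavetable(frequency_hz, duration_s=1.0):
    """Read the stored cycle at a speed proportional to the desired pitch."""
    n_samples = int(SAMPLE_RATE * duration_s)
    increment = frequency_hz * TABLE_SIZE / SAMPLE_RATE   # table positions per sample
    phase = (np.arange(n_samples) * increment) % TABLE_SIZE
    return wavetable[phase.astype(int)]                   # nearest-neighbour lookup

low_note = play_wavetable(110.0)    # the same table, read slowly
high_note = play_wavetable(440.0)   # ...and four times faster for two octaves up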

2.3.5 Linear arithmetic synthesis (LA)

First introduced by Roland in 1987 (as we will see in the next chapter), this type of synthesis is based on the observation that much of the information used to identify and categorise a sound is contained in its first few hundred milliseconds.

It therefore takes short attack waveforms, sampled and digitized with the PCM (Pulse Code Modulation) technique, and combines them with synthesized sounds that form the body and tail of the new sound (an approach also known as Sample & Synthesis). By layering these elements and combining them with the synthesized portion of the sound you arrive at the new sound, which is then further processed by filters, envelope generators and so on. This was one of the most common forms of synthesis in the 90s and is still used today.
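The Sample & Synthesis idea can be sketched roughly as follows. This is not Roland's actual implementation: the "sampled" attack here is just a generated noise burst standing in for a PCM recording, layered onto a synthesized body.

import numpy as np

SAMPLE_RATE = 44100

def sample_and_synthesis_note(frequency_hz, duration_s=1.0, attack_s=0.05):
    """Layer a short 'PCM' attack fragment onto a synthesized sustained tone."""
    n_total = int(SAMPLE_RATE * duration_s)
    n_attack = int(SAMPLE_RATE * attack_s)
    t = np.arange(n_total) / SAMPLE_RATE

    # Stand-in for a sampled attack transient (a real LA synth would use a PCM recording).
    attack = np.random.uniform(-1.0, 1.0, n_attack) * np.linspace(1.0, 0.0, n_attack)

    # Synthesized body and tail: a plain sine wave with a decaying envelope.
    body = np.sin(2 * np.pi * frequency_hz * t) * np.exp(-2.0 * t)

    note = body.copy()
    note[:n_attack] += attack          # layer the attack fragment over the body
    return note / np.max(np.abs(note))

signal = sample_and_synthesis_note(330.0)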

2.3.6 Physical modelling (PM)

The synthesis techniques analysed so far are based on mathematical algorithms. When imitating the sounds of traditional musical instruments, these techniques are applied to signal generation while leaving aside the physical mechanism by which the sounds are produced, considering only the analysis and synthesis of certain temporal functions. One of the main problems with these techniques is the absence of a well-defined correspondence between a change in a parameter and the sound produced, which makes the result less predictable.

Physical modelling instead tries to simulate mathematically the physical properties of a real or fictitious musical instrument by defining exciters and resonators. Exciters are what trigger the physical model to start generating sound; real-world examples include the hit of a drum, the stroke of a bow or the pluck of a string. An input (such as a key press and its velocity) is translated into the appropriate parameters for simulating the physical input to the instrument (such as blowing, or the amount of air). Resonators simulate the instrument's response to the exciter, which usually defines how the instrument's physical elements vibrate.

This type of sound generation can be extremely complex and requires heavy computation, so many currently available physical modelling synths use short-cuts or watered-down methods which enable them to respond in real time.

One curious fact is that the Nord Lead (a well-known line of synthesizers by Clavia) uses physical-modelling-style synthesis to emulate an analogue synthesizer.
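As an illustration of the exciter/resonator idea, here is the classic Karplus-Strong plucked-string algorithm, one of the simplest physical-modelling-style methods: a noise burst acts as the exciter and a damped, averaging delay line as the resonator. It is offered only as a sketch of the principle, not as the method used by the instruments mentioned above.

import numpy as np

SAMPLE_RATE = 44100

def karplus_strong_pluck(frequency_hz, duration_s=1.0, damping=0.996):
    """Plucked string: a noise burst (exciter) fed into an averaging delay line (resonator)."""
    delay_length = int(SAMPLE_RATE / frequency_hz)
    buffer = np.random.uniform(-1.0, 1.0, delay_length)   # the "pluck"
    n_samples = int(SAMPLE_RATE * duration_s)
    out = np.zeros(n_samples)
    for i in range(n_samples):
        out[i] = buffer[i % delay_length]
        # Average the current and next delayed sample, then damp: a crude low-pass "string".
        nxt = buffer[(i + 1) % delay_length]
        buffer[i % delay_length] = damping * 0.5 * (out[i] + nxt)
    return out

string_note = karplus_strong_pluck(220.0)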

2.3.7 Other types of synthesis

There are other synthesis techniques which, for brevity, will not be analysed in detail. One of the most important is Granular Synthesis, in which tiny events of sound (grains, or clouds of grains) are manipulated to form new complex sounds: by varying the frequencies and amplitudes of the sonic components, and by processing varying sequences and durations of these grains, a new complex sound is formed.

Another is Amplitude Modulation synthesis (AM), which, like FM synthesis, combines two signals: the carrier is multiplied by a unipolar modulation signal that determines the amplitude of the audio signal over time.

Ring Modulation synthesis (RM) is almost identical to AM synthesis except that it uses a bipolar modulation signal. A bipolar signal is simply a signal that takes both positive and negative values, which causes an interesting difference in the output compared with AM synthesis. This type of modulation is used by vocoders, which are often used to process a human voice into a robotic-sounding variation.
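The difference between AM and RM comes down to whether the modulation signal is unipolar or bipolar, which a short sketch makes clear (the frequencies are arbitrary illustrative values):

import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE      # one second of time values

carrier = np.sin(2 * np.pi * 440.0 * t)       # audible carrier
modulator = np.sin(2 * np.pi * 30.0 * t)      # slow modulating sine, range -1..+1

# AM: the modulator is made unipolar (0..1) before multiplying the carrier.
am_signal = carrier * (0.5 * (1.0 + modulator))

# RM: the carrier is multiplied directly by the bipolar modulator.
rm_signal = carrier * modulator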


CHAPTER III

From first analog synths to modern software instruments

Having dealt with what had been done by the beginning of the 20th century, and with the concepts underlying modern synthesizers and their sound generation, I am now going to discuss their origin and evolution, from the very first experiments until today.

3.1 First experiments

The experiments on sound generation and reproduction discussed in the first chapter naturally continued, and many of the synthesis algorithms described in the previous chapter were gradually defined.

Additive synthesis, already present in all kinds of organs (including the acoustic pipe organ), became very popular with the most synth-like of the organs, the Hammond, whose invention dates back to the mid-1930s. It would become a true legend, both for its unique sound and for the innovation it brought to modern music. The Hammond uses sine waves generated by revolving tone wheels (a relatively primitive apparatus for generating electronic musical notes), which induce a current in an electromagnetic pick-up. A separate tonewheel is needed for every harmonic.

But we have to wait until the middle of the century to see the first model of synthesizer similar to the ones we know today.

In the 1950s RCA produced experimental devices to synthesise both voice and music. One of the most important of these was created in 1955 by Harry Olson and Herbert Belar, both working for RCA: the Electronic Music Synthesizer, also known as the Olson-Belar Sound Synthesizer. This huge and unwieldy system (see figure 3.1) was controlled by a punched paper roll, similar to a player piano roll. A keyboard was used to punch the roll, and each note had to be individually described by a number of parameters (frequency, volume, envelope, etc.). The output was fed to disk recording machines, which stored the results on lacquer-coated disks. Programming this machine must have been a laborious and time-consuming process, but it caught the attention of electronic music pioneers such as Milton Babbitt.

Figure 3.1 – Olson and Belar with their Electronic Music Synthesizer

The passage from organs to synths was however gradual, and produced many very interesting hybrids. Although even today organs and synths are classified as distinct musical instruments, the sound generation taking place inside them is based on common principles.

In 1967, for example, Tsutomu Katoh, the founder of Keio (which would later be called Korg), asked Fumio Mieda, an engineer who wanted to develop musical keyboards, to design and build a keyboard that could be manufactured and sold. Thus the Korg Organ was born (see figure 3.2), which, unlike other organs on the market, allowed its voices to be programmed.

Figure 3.2 – the Korg Organ

3.2 Moog and the first affordable analog synthesizers

The synthesizers made in this early period were, however, very expensive and also very hard to handle. By 1960 synthesizers could be played in real time, but they were confined to studios because of their size. Modularity was the usual design, with standalone signal sources and processors connected by patch cords or other means, and everything governed by a common controlling device.

Donald Buchla, Hugh Le Caine, Raymond Scott and Paul Ketoff were among the first to build such instruments, in the late 1950s and early 1960s. Only Buchla later produced a commercial version.

It was in 1963 that Robert Moog, an engineer and physicist of Dutch origins, founded the R. A. Moog Co. in Trumansburg, near Ithaca, NY.


He would become one of the most legendary synthesizer makers: he created the first playable, configurable modern music synth and displayed it at the Audio Engineering Society convention in 1964. What started out as a curiosity had caused a real sensation by 1968.

The Moog Modular synthesizers (see figure 3.3) became the first embodiment of the modern analogue synth. To get a sound you had to patch each module into another. You could not store sounds (you had to draw the connections on a sheet of paper) and the keyboards were monophonic (only one note at a time), but you could create an infinite number of sounds.

Though Donald Buchla got started a few months ahead of Moog, the first Buchla Box appeared later. The two are very similar: both are modular and use voltage control for the oscillators and amplifiers, but the Buchla Box deliberately avoids keyboard controllers.

Figure 3.3 – Moog Modular model 3C

Other early commercial synthesizer manufacturers included ARP, founded by Alan Robert Pearlman, which also started with modular synthesizers before producing all-in-one instruments, and the British firm EMS (Electronic Music Studios).

One major innovation came from Moog in 1971, with a synthesizer that had a built-in keyboard and no modular design. The analogue circuits were retained, but made interconnectable with switches in a simplified arrangement called "normalization". Though this design was less flexible than a modular one, it made the instrument more portable and much easier to use. This first prepatched synthesizer, the Minimoog (see figure 3.4), became very popular, with over 12,000 units sold, and deeply influenced the design of nearly all subsequent synthesizers.

Figure 3.4 – the MiniMoog

Korg, too, created a similar instrument in 1973, the Mini Korg, a very small monophonic instrument whose success convinced the company to invest substantial resources in developing further synthesizers.

So, by the early 1970s, even though market demand for organs such as Baldwin, Hammond and Lowrey was clearly higher, the market for synth keyboards was beginning its own modest expansion, thanks above all to miniaturised solid-state components that allowed synthesizers to become self-contained and portable. They began to be used in live performances and soon became a standard part of the popular-music repertoire (Giorgio Moroder's "Son of My Father", 1971, became the first #1 hit to feature a synthesizer).

3.3 Microprocessor controlled and polyphonic analog synthesizers

Early analog synthesizers were almost always monophonic, producing only one tone at a time. A few, such as the Moog Sonic Six, ARP Odyssey and EML 101, could produce two different pitches at a time when two keys were pressed. Polyphony, the ability to play multiple "voices" simultaneously, each with its own pitch (and thus to play chords), was at first only obtainable with electronic organ designs. Popular electronic keyboards combining organ circuits with synthesizer processing included the ARP Omni and Moog's Polymoog.

Around the mid-70s the first polyphonic synths started to appear: for example the Oberheim SEM-based systems, which, depending on the model, could play 2, 4, 6 or 8 notes at the same time and provided 16 memory slots, a kind of experimental memory which could store a couple of parameters for each module.

A classic, huge (it weighs over 200 lbs) polyphonic synthesizer is the Yamaha CS-80 (see figure 3.5), considered Japan's first great synthesizer. Born in 1976 as a development of the GX1 model at a more affordable price, it was a complex polyphonic synth with some amazing features for its time: 16 oscillators, 32 filters and 32 envelopes, allowing voices to be split and layered and stored in a six-part memory allocation. The keyboard was velocity sensitive with polyphonic aftertouch, meaning it used a sensor for each key. It also offered a primitive sound-settings memory based on a bank of micro-potentiometers.

Figure 3.5 – the Yamaha CS-80

When microprocessors first appeared on the scene in the early 1970s, they were costly and difficult to apply. The first practical polyphonic synth, which was also the first to fully employ a microprocessor as a controller, was the Sequential Circuits Prophet-5 (see figure 3.6) of 1977. For the first time, musicians had a practical, programmable, polyphonic (5-note) synthesizer that allowed all knob settings to be saved in computer memory (32 digital memory slots recording all the synth's parameters).

The Prophet-5 was also physically compact and lightweight, unlike its predecessors. This basic design paradigm became a standard among synthesizer manufacturers, slowly pushing out the more complex (and more difficult to use) modular design.

Figure 3.6 – Sequential Circuits Prophet-5

3.4 The digital revolution

The so-called digital revolution transformed technology that had previously been analogue into a binary representation of ones and zeros. By doing so, it became possible to make multiple-generation copies that were as faithful as the original. In digital communications, for example, repeating hardware could amplify the digital signal and pass it on with no loss of information.

We have now arrived in the early 80s, when this revolution swept through the market for electronic musical instruments as well.

The new digital synthesizers use digital signal processing (DSP) techniques to create musical sounds, based on the discrete representation of a continuous analogue signal. One of these techniques is Pulse Code Modulation (PCM), in which the magnitude of the signal is sampled regularly at uniform intervals and every sample is quantized to a series of symbols in a digital binary code.

Each generated digital sample thus corresponds to a sound pressure value at a given sampling frequency (typically 44,100 samples per second). In the most basic case, each digital oscillator is modelled by a counter: for every sample the counter is advanced by an amount that depends on the frequency of the oscillator (see section 2.3, Different types of synthesis, for further details).
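A minimal sketch of the counter (phase accumulator) just described, followed by quantization of the result to 16-bit PCM, might look like this. The bit depth and the use of a sine function are ordinary illustrative choices rather than details given in the text.

import math

SAMPLE_RATE = 44100

def digital_oscillator(frequency_hz, n_samples):
    """Phase-accumulator oscillator: a counter advanced by frequency/sample_rate per sample."""
    phase = 0.0
    increment = frequency_hz / SAMPLE_RATE        # fraction of a cycle per sample
    samples = []
    for _ in range(n_samples):
        samples.append(math.sin(2 * math.pi * phase))
        phase = (phase + increment) % 1.0         # wrap the counter once per cycle
    return samples

def quantize_to_16bit_pcm(samples):
    """Quantize -1..+1 floats to 16-bit signed integers, as in PCM encoding."""
    return [max(-32768, min(32767, int(round(s * 32767)))) for s in samples]

pcm_data = quantize_to_16bit_pcm(digital_oscillator(440.0, SAMPLE_RATE))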

The world of synthesizers came to be dominated by the FM synthesis model, whose sine-wave oscillators need to be generated digitally in order to be sufficiently stable.

The first patent covering FM sound synthesis was licensed in 1980 to the Japanese manufacturer Yamaha, which in 1983 produced the first FM digital keyboard, the DX-7 (see figure 3.7). It was about the same size and weight as the Prophet-5 and was reasonably priced. Gone were the front-panel controls of earlier instruments, replaced by a large set of factory-programmed preset banks. The quality and precision of the sounds, as well as the tuning stability, improved markedly compared with the previous generation of synthesizers. The DX-7 was a smash hit and can be heard on thousands of pop recordings from the 1980s onwards.

When Yamaha later licensed its FM technology to other manufacturers, almost every personal computer in the world ended up containing an audio input-output system with a built-in four-operator FM digital synthesizer.

Figure 3.7 – Yamaha DX-7 digital synth

Another very important invention dates back to 1983: MIDI, a digital control interface which made synthesizers more usable and versatile. The so-called General MIDI (GM) standard was devised in the late 1980s to serve as a consistent way of describing the set of synthesized sounds available to an electronic digital instrument (or a personal computer) for the playback of a musical score. This interface and communication protocol (together with the .mid file format) remain important standards in use today.
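As a small illustration of the kind of data such a control protocol carries, a MIDI note-on message is just three bytes: a status byte combining the message type and channel, a note number and a velocity. The helper below is a sketch, not part of any particular library.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI note-on message: status (0x90 + channel), note number, velocity."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Note-on for middle C (note 60) on channel 0 at a moderate velocity.
message = note_on(channel=0, note=60, velocity=100)
print(message.hex())   # '903c64'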

Roland, another great Japanese manufacturer of electronic instruments, truly entered its golden era with the advent of digital synthesis. In 1987 Roland released the D50 (see figure 3.8), a synth destined to become Roland's standard keyboard: excellent velocity and aftertouch sensitivity, with splits and layers. It was also expandable, with options for ROM and RAM cards, a good MIDI implementation, a programmer and eventually third-party expansion boards. But the most important reason for its success, and for its ability to topple the DX-7 from the throne it had occupied for four years, was that it sounded amazing, thanks to the new LA synthesis developed by Roland itself in those years.

Figure 3.8 – Roland D50

The enormous popularity of the D50 caused a whole support industry to spring up. For many users it was unnecessary to learn how to program the instrument: they simply plugged in their favourite sounds and played.

The ability to digitize a sound source created a new breed of electronic instruments: samplers. A sampler is a device that can record and store audio samples, generally recordings of existing sounds, and play them back over a range of pitches. An early form of sampler was the Mellotron, which used individual pre-recorded tapes, one under each key of the keyboard.

Sampling can also be combined with other synthesizer effects, processing the sampled material with filters, reverbs, ring modulators and the like.

By the end of 1985 sampling had been around for six years. The affordable end of the market was dominated by the Ensoniq Mirage, while Kurzweil and E-mu dominated the high end. Roland tried to fill the gap between them with the S-50 sampler, a very good and innovative instrument offering 16-voice polyphony, multitimbrality, velocity and pressure sensitivity, splits, layers and velocity crossfading.

The S-50 could have been a winner, but Roland had come to the market a little too late. Akai launched the S900 the same year (1986) and established it as the de facto standard (see figure 3.9).

Figure 3.9 – Akai S900 sampler


The history of synthesizers has continued in recent years with old and new leading manufacturers such as Yamaha, Roland, Korg, Kurzweil and Alesis. Most modern synthesizers are now completely digital, including those which model analogue synthesis using digital techniques. In many cases these keyboards are now real workstations, integrating ever larger sound banks, advanced sequencing functions, MIDI recording, sampling and even mastering and video-output capabilities.

Figure 3.10 – Korg Triton Studio

3.5 The virtual revolution

With the increase in computing power of modern CPUs, and thanks to the spread of professional audio boards at affordable prices, personal computers have recently risen to prominence in the world of recording and sound reproduction, particularly in the field known as home recording. This lets people achieve results that, only a few years ago, only advanced recording studios could afford.

While this revolution initially involved audio recording programmes (MIDI and audio sequencers, audio editors and complete hard-disk recording systems), nowadays even synths are going through an interesting period of computerisation. What was once computed by dedicated hardware (the hardware actually inside a synth) can today be handled entirely (or almost entirely) by a single PC processor. A variety of software is available for modern high-speed personal computers, and DSP algorithms are commonplace, permitting fairly accurate simulations of physical acoustic sources or electronic sound generators (oscillators, filters, VCAs, etc.).

Thanks to this new generation of so-called "soft-synths", many of the old synthesizers are enjoying a kind of second youth. Renowned models such as the Moog Modular, ARP 2600, Yamaha CS-80 and Prophet-5 have returned to the front of the stage. They are back not only as digital hardware, where the sound is recreated by means of simulation algorithms, but also as software, that is, in the form of stand-alone programmes or plug-ins that can be run from modern sequencers such as Cubase SX, Pro Tools, Sonar and Logic. Figure 3.11 shows the new "reincarnation" of the Moog Modular by the software house Arturia. It is interesting to notice that, in spite of the modern plug-in approach, these products try to retain the look and feel of the old analogue hardware, including, for example, the old patch cables (yes, even these have been emulated!). Such features avoid bewildering musicians familiar with particular working procedures, and let them keep working with the more versatile and comfortable virtual version of their favourite synthesizers.

But soft-synths are not merely emulations of old analogue synths. Today you can have at your disposal impressive banks of every kind of sound, which renowned software samplers such as Native Instruments Kontakt and Steinberg HALion can manage in real time with an ease and efficiency that was never possible with hardware samplers.

The virtual revolution has not only recreated and improved some of the synths that made music history; it has also made them affordable to an ever greater number of musicians, since they cost noticeably less and, thanks to their remarkable ease of use, demand less specialised handling. Moreover, with further increases in computing power many synthesis algorithms can be refined further, making sound generation ever more powerful and versatile.

Figure 3.11 – Arturia Moog Modular V soft-synth