module 01 - analog and digital audio


Audio Masterclass Music Production and Sound Engineering Course

    Module 01

    Analog and Digital Audio

    In this module you will learn about the nature of sound, soundproofing, acoustics and acoustic treatment, analog audio electronics and digital audio.

    Learning outcomes

    To understand the way in which sound behaves in air, and how sound interacts with hard and soft materials, flat and irregular surfaces.

    To possess the basic background knowledge of recording studio acoustic design.

    To understand how sound is handled and transmitted as an electronic signal.

    To understand how an analog electronic signal is converted to, handled and stored as a digital signal, and how it is converted back to analog.

    Assessment

    Formative assessment is achieved through the short-answer check questions at the end of this module.


    Module Contents

    Learning outcomes
    Assessment
    The nature of sound
    Frequency
    The decibel
    The inverse square law
    Acoustics
    Standing waves
    Acoustic treatment
    Soundproofing
    Materials
    The three requirements for good soundproofing
    Concrete
    Bricks
    Plasterboard (drywall)
    Glass
    Metal
    Proprietary flexible soundproofing materials
    Construction techniques
    Walls
    Ceiling
    Floor
    Windows
    Doors
    Box within a box
    Ventilation
    The function of absorption in soundproofing
    Flanking transmission
    Cable ducts
    Audio electronics
    Passive components
    Real world components
    Resistors in series and parallel
    Digital audio
    Digital versus analog
    Analog to digital conversion
    Problems in digital systems
    Latency
    Clocking
    Check questions


    The Nature of Sound

    We all know the experience of sound, and we all learned in school that it is a vibration of air molecules that stimulates our eardrums. People who work with sound every day tend not to think about the science of sound and take it for granted. But unless you have assimilated a good understanding of the nature of the medium in which you work, how are you ever going to make it really work for you?

    Sound starts with a vibrating source: as far as we are concerned, most commonly the vocal folds (formerly known as the vocal cords, or sometimes "vocal chords"), musical instruments and loudspeaker diaphragms.

    Let us think of a loudspeaker diaphragm. It vibrates forwards and backwards and pushes against air molecules. On a forward push, it squeezes air molecules together causing a compression, or region of high pressure. On pulling back it separates air molecules causing a rarefaction, or region of low pressure. The compressions and rarefactions travel away from the diaphragm in the form of a wave motion.

    Wave motions are all around us, from the water waves we see in the sea (best viewed from a ship - the breaking effect near the shore disguises their true nature), to all forms of electromagnetic radiation such as x-rays, light, microwaves and radio waves.

    The child's toy commonly known as the slinky spring can display a wave very much like a sound wave. The slinky is a spring (the metal versions work best) of around 15 cm in diameter and perhaps 4 m long when lightly stretched. If two people pull it out and one gives a sharp forward-and-backward impulse, the compression produced will travel to the end of the spring and, if the other person holds his or her end firmly, reflect back.

    This demonstrates a longitudinal wave, where the direction of wave motion is the same as the direction of motion of the actual material (we can call the motion of the material the particle motion). A sound wave is a longitudinal wave.

    [Illustration: vocal folds, courtesy University of California, Berkeley]

    [Photo: slinky spring, by Roger McLassus (GFDL)]


    Contrast this with a water wave where the wave moves parallel to the surface of the sea, but water molecules move up and down. This is a transverse wave. Electromagnetic waves are transverse waves too.

    One feature that the water wave demonstrates perfectly is that if you look out from the side of a ship at a piece of flotsam riding the wave, the wave appears to travel from place to place, carrying energy as it does so, but the flotsam simply bobs up and down. Other than wind or tide acting directly on the flotsam itself, it will bob up and down all day without going anywhere.

    This is true of sound too. A sound wave leaves a loudspeaker cabinet, but this doesn't mean that air travels away from the cabinet. The air molecules simply vibrate forwards and backwards, never going anywhere. (When air molecules travel from one place to another, that is called, in purely technical terms, a wind!)

    If this were not so, then either a vacuum would develop inside or around the cabinet and there would be a danger of asphyxiation. Obviously this doesn't happen. Oddly enough, if you put your hand in front of a bass loudspeaker you will feel a breeze, if not a full-on wind, on your hand. This is an illusion, since you feel the air molecules when they press on your hand, but not when they pull back.

    In a transverse wave, such as a water wave, the direction of particle motion is at right angles (perpendicular) to the direction of wave motion. In a longitudinal wave, such as sound, the direction of particle motion is parallel to the direction of wave motion.

    Although the longitudinal wave in the slinky spring is similar to a sound wave, it doesn't quite tell the whole story. The slinky wave is confined within the spring, whereas a sound wave spreads out readily. It is possible to think of each air molecule (actually oxygen, nitrogen and an increasing amount of carbon dioxide) that vibrates under the influence of a sound wave as a sound source in its own right.

    [Photo: transverse wave created on a string, courtesy Union College]


    Molecules are of course very small, and it is a feature of small sound sources, or point sources, that they emit sound equally in all directions, or omnidirectionally. So whereas light travels over great distances in straight lines, sound merely has a tendency to follow a straight-line path, and readily spreads out from that path in an ever-widening arc, particularly at low frequencies.

    [Regarding point sources - it is also worth considering the example of a small loudspeaker emitting a low frequency tone. If the speaker is small in comparison with the wavelength being emitted, then it will have the characteristics of a point source and will obey the inverse square law - sound pressure halves for every doubling of distance from the source.]

    Frequency

    To compare the range of frequencies in human experience: a satellite TV signal, for example, has a frequency of around 10 to 14 GHz. The Olympic Games have a frequency of 8 nanohertz (they happen once every four years!).

    1 hertz (Hz) means one cycle of vibration per second

    1000 Hz = 1 kHz

    1,000,000 Hz = 1 megahertz (MHz)

    1,000,000,000 Hz = 1 gigahertz (GHz)
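    Since frequency is simply the reciprocal of the period, figures like the one for the Olympic Games are easy to check. A quick sketch in Python (not part of the original course materials; the names are ours):

    ```python
    # Frequency is the reciprocal of the period: f = 1 / T, with T in seconds.
    def frequency_hz(period_seconds: float) -> float:
        return 1.0 / period_seconds

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    # One Olympic Games every four years works out to roughly 8 nanohertz.
    olympics_hz = frequency_hz(4 * SECONDS_PER_YEAR)
    print(f"{olympics_hz * 1e9:.1f} nHz")  # 7.9 nHz
    ```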

    Sound comes in virtually all frequencies but our hearing system only responds to a narrow range. The upper limit of young human ears is usually taken to be 20 kilohertz (kHz) (twenty thousand vibrations per second). This varies from person to person, and decreases with age, but as a guideline it's a good compromise. If a sound system can handle frequencies up to 20 kHz then few people will miss anything significant.

    At the lower end of the range it is difficult to know where the ear stops working and you start to feel vibration in your body. In sound engineering, however, we put a figure of 20 Hz on the lower end. We can hear, or feel, frequencies lower than this but they are generally taken to be unimportant.


    Frequency is related to wavelength by the formula:

    velocity = frequency x wavelength

    This applies to any wave motion, not just sound. The velocity, or speed, of sound in air is a little under 340 meters per second (m/s). This varies with temperature, humidity and altitude, but 340 m/s is a nice round number and we'll stick with it. If you work out the math, this means that a 20 Hz sound wave travelling in air has a wavelength of 17 meters!

    The extreme physical size of low frequency sound waves leads to tremendous problems in soundproofing and acoustic treatment. At the other end of the scale, a 20 kHz sound wave travelling in air has a wavelength of a mere 17 mm. Curiously, the higher the frequency the more difficult it is to handle as an electronic, magnetic or other form of signal, but it is really easy to control as a real-life sound wave travelling in air. Low frequencies are easily dealt with electronically, but are very hard to control acoustically.
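    The two wavelength figures above follow directly from rearranging the formula. A minimal sketch, assuming the module's round figure of 340 m/s (the function name is ours):

    ```python
    SPEED_OF_SOUND = 340.0  # m/s, the round figure used in this module

    def wavelength_m(frequency_hz: float) -> float:
        # velocity = frequency x wavelength, rearranged for wavelength
        return SPEED_OF_SOUND / frequency_hz

    print(wavelength_m(20))      # 17.0 m at the bottom of the audible range
    print(wavelength_m(20_000))  # 0.017 m, i.e. 17 mm, at the top
    ```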


    The decibel

    The concept of the decibel is a convenience that allows us to compare and quantify levels in the same manner through different media. In sound terms, decibels can be used for every medium that can store or transport sound or a sound signal, for instance...

    real sound travelling in air

    electric signal

    magnetic signal

    digital signal

    optical signal on a film sound track

    mechanical signal on a vinyl record

    A change in level of 3 dB means exactly the same thing in any of these media. Without decibels we would have to convert from newtons per square metre (sound pressure), volts, nanowebers per square metre, etc.

    Decibels have another advantage for sound. The ear assesses sound levels logarithmically rather than linearly. So a change in sound pressure of 100 µN/m2 (micro-newtons per square meter) would be audibly different if the starting point were quiet (where it would be a significant change in level) than if it were loud (where it would be hardly any change at all). A change of 3 dB is subjectively the same degree of change at any level within the ear's range.

    [Sound pressure is measured in newtons per square meter. You may think of the newton as a measure of weight. One newton is about the weight of a small apple.]

    An important point to bear in mind is that the decibel is a ratio, not a unit. It is always used to compare two sound levels.

    To convert to decibels, apply the following formula on your scientific calculator:

    20 x log10(P1/P2)

    where P1 and P2 are the two sound pressures you want to compare. So if one sound is twice the pressure of another then P1/P2 = 2. The logarithm of 2 (base 10) is 0.3, and multiplying this by 20 gives 6 dB.

    [Illustration: optical film soundtracks, variable density and variable area, by Iain F.]

    Actually it's 6.02 dB, but we don't worry about the odd 0.02.

    This is useful because we commonly need to, say, increase a level by 6 dB, but it doesn't actually tell us how loud any particular sound is, because the decibel is not a unit. The answer to this is to use a reference level as a zero point. The level chosen is 20 µN/m2 (twenty micro-newtons per square meter), which is, according to experimental data, the quietest sound the average person can hear.

    We call this level the threshold of hearing and it can be compared to the rustle of a falling autumn leaf at ten paces. We quantify this as 0 dB SPL (sound pressure level) and now any sound can be compared with this zero level. Loud music comes in at around 100 dB SPL; the ear starts to feel a tickling sensation at around 120 dB SPL, and hurts when levels approach 130 dB SPL.

    If you are not comfortable with math, it is useful to remember the following, which apply to both sound pressure and voltage (but decibels work differently when referring to power):

    -80 dB = one ten thousandth

    -60 dB = one thousandth

    -40 dB = one hundredth

    -20 dB = one tenth

    -12 dB = one quarter

    -6 dB = one half

    0 dB = no change

    6 dB = twice

    12 dB = four times

    20 dB = ten times

    40 dB = one hundred times

    60 dB = one thousand times

    80 dB = ten thousand times

    Threshold of hearing = 0 dB SPL

    Threshold of feeling = 120 dB SPL

    Threshold of pain = 130 dB SPL

    [Photo caption: this fruit weighs approximately 1 newton]
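    All of the figures in the table above come out of the same 20 x log10 formula, and dB SPL is simply the same formula with the threshold of hearing as the reference. A short sketch (the function names are ours):

    ```python
    import math

    def ratio_to_db(p1: float, p2: float) -> float:
        """20 x log10(P1/P2), for sound pressure or voltage."""
        return 20 * math.log10(p1 / p2)

    def db_to_ratio(db: float) -> float:
        # The inverse: how many times louder (in pressure) is this many dB?
        return 10 ** (db / 20)

    REF = 20e-6  # 20 micro-newtons per square meter, the threshold of hearing

    def spl_db(pressure: float) -> float:
        # Sound pressure level relative to the threshold of hearing
        return ratio_to_db(pressure, REF)

    print(round(ratio_to_db(2, 1), 2))  # 6.02 dB for a doubling of pressure
    print(round(db_to_ratio(-6), 3))    # 0.501, i.e. one half (near enough)
    print(round(spl_db(REF)))           # 0 dB SPL at the threshold itself
    ```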


    Do you need to understand decibels to be a sound engineer?

    The answer is: yes, to a point. You need to be able to relate a change in decibels to a fader movement, and from there to an image in your aural imagination of what that change should sound like. In addition to that, you'll get producers telling you to raise the level of the vocal "a bit". How many decibels equal a bit? Only the experience you will gain in the early years of your career will tell you.


    The inverse square law

    There is more to find out about the inverse square law. Here is an interesting point...

    The maximum rate of decay of a sound as you move away from it is 6 decibels per doubling of distance (the sound pressure halves). This is simply due to the spreading-out of sound - the same energy has to cover an ever greater area.

    If the sound is focused in any way, by a large source or by reflection, then it will fade away at a rate less than 6 dB per doubling of distance. This fact is of great importance to PA system designers.

    The ultimate focused sound source is the old-fashioned ship's speaking tube. Sound is confined within the tube and can travel over 100 meters and hardly fade away at all.
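    The 6 dB per doubling of distance for a free, unfocused point source can be sketched with the same decibel formula (a sketch under the module's free-field assumption; the function name is ours):

    ```python
    import math

    def level_drop_db(distance_ratio: float) -> float:
        # Free-field spreading from a point source: sound pressure is
        # inversely proportional to distance, so each doubling of
        # distance loses about 6 dB.
        return -20 * math.log10(distance_ratio)

    print(round(level_drop_db(2), 1))   # -6.0 dB at twice the distance
    print(round(level_drop_db(4), 1))   # -12.0 dB at four times the distance
    print(round(level_drop_db(10), 1))  # -20.0 dB at ten times the distance
    ```

    A focused source, as the text notes, decays more slowly than this; the figures here are the maximum rate of decay.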


    Acoustics

    It's going to be a long time before anyone invents a way to transfer an electronic or digital signal straight into the brain, bypassing the ears. Until then, at some stage sound must always pass through the air, and this is the most difficult and least understood part of its journey.

    When sound is created, whether it is the human voice, speaking or singing, a musical instrument or plain old-fashioned noise, it travels through the air, bounces from reflecting surfaces, bounces again and mingles with its own reflection, then enters the microphone.

    The same happens at the other end of the chain. Sound leaves the speakers, and although part of the energy will be transmitted directly to the listener, much of it will bounce around the room over a period of anything from half a second or less in a domestic environment up to several seconds in a large auditorium.

    Compare this with an electrical signal.

    Once created, the signal travels in a one-dimensional medium: a cable or circuit track. The signal can't escape until it reaches its intended destination, there is nothing that it can bounce off (unless the cable is several kilometers long, in which case it will reflect from the ends unless measures are taken), and the worst that can happen is that electrical resistance will lower the level slightly.

    This is a little bit of a simplification, but it's fair to say that everything about the behavior of electrical signals can be calculated easily.

    This is not the case with acoustics. Sound travels in three dimensions, not one, and will readily reflect from almost any surface. When the reflections mingle, constructive and destructive interference effects occur which differ at every point in the room or auditorium. The number of reflections is, for all practical purposes, infinite.

    Even with today's sophisticated science and computer technology, it is not possible to analyze the acoustics of a room with complete precision, accounting for every reflection. It would rarely happen that the electrical components of a sound system of any kind would be installed (professionally of course) and then be found not to work as expected.

    [Photo: an audio cable is a one-dimensional medium]

    It is normal however to complete the acoustic design of a room or auditorium, and then expect to have to make adjustments when the building work is complete. Hopefully these adjustments will not cost more than the margin of error allowed for in the budget.

    Acoustics is a complex science in practice, but in theory it's all very simple. The acoustics of a room (acousticians use the term "room" to mean an enclosed space of any size) are determined by just three factors: the timing of reflections, the relative strengths of reflections, and the frequency balance of reflections.

    Look around you at the various surfaces in the room. If you speak to a colleague, the sound of your voice will travel directly to his or her ears. It will also bounce off the nearest surface, producing a reflection that arrives at the ear after a certain number of milliseconds (sound travels just under 34 cm in a millisecond; one foot per millisecond is often used as a handy rule of thumb in non-metric countries, even though it is a little bit on the low side). It will bounce off the next nearest surface with a slightly longer delay, then the next. Then reflections of reflections will start to arrive. At first they will be spaced apart in time, but soon there will be so many reflections that they turn into a general mush of reverberation.
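    The arrival time of any reflection follows directly from its extra path length and the speed of sound. A sketch using the module's 340 m/s figure (the function name is ours):

    ```python
    SPEED_OF_SOUND = 340.0  # m/s, so sound travels 0.34 m per millisecond

    def delay_ms(path_metres: float) -> float:
        # Time for sound to cover a given path, in milliseconds
        return path_metres / SPEED_OF_SOUND * 1000

    # A reflection whose path is 3.4 m longer than the direct path
    # arrives about 10 ms after the direct sound.
    print(round(delay_ms(3.4), 1))  # 10.0
    ```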

    Some surfaces will be more absorbent, so reflections are lower in level. Some surfaces will favour certain ranges of frequencies. These three factors almost completely determine the acoustics of a room.

    There is a fourth factor that is worth mentioning: movement. If anything moves in the room (source, listener or any reflecting surface) then the Doppler effect comes into play.

    The Doppler effect is best demonstrated by the siren of a passing police car, which appears to drop in pitch as it goes past. Sound can't travel faster than its natural velocity in any given medium, so if the sound source moves, the velocity of the source converts to a rise in pitch for an approaching source, and a lowering of pitch (acoustic red shift, if you like) for a source that is moving away.
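    The size of the pitch shift can be sketched with the standard Doppler formula for a moving source and a stationary listener (an illustration, not from the course text; the figures assume the module's 340 m/s):

    ```python
    SPEED_OF_SOUND = 340.0  # m/s

    def doppler_hz(source_hz: float, source_velocity: float) -> float:
        """Observed frequency for a source moving at source_velocity m/s
        towards a stationary listener (negative velocity = moving away)."""
        return source_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - source_velocity)

    siren = 1000.0  # Hz
    print(round(doppler_hz(siren, 30)))   # about 1097 Hz approaching at 30 m/s
    print(round(doppler_hz(siren, -30)))  # about 919 Hz receding at 30 m/s
    ```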

    In most contexts where acoustics are important, neither the source nor listener will be moving significantly, nor will the reflecting surfaces.

    What will be moving, however, is the air in the room, due to convection effects and ventilation. You can see this quite clearly if you anchor a helium balloon so that it can float midway between floor and ceiling. Even in a living room it will move around more than you would expect.

    This effect is often modelled in digital reverberation units where it adds useful thickening to the sound, or chorusing as some sound engineers might say.


    Standing waves

    Although acoustics is a science, the ultimate arbiter of good acoustics is human judgment. There are certain basics that must be adhered to, derived from common knowledge and experience, and also statistical tests using human subjects.

    Firstly, a room that is designed for speech must maintain good intelligibility. Too much reverberation obscures the words, as do reflections that are heard by the listener more than 40 milliseconds or so after the direct sound.

    Late reflections cause phonemes (the sounds that comprise speech) to overlap. Short reflections actually aid intelligibility by making unamplified speech louder.

    For both speech and music there is the requirement that the reverberation time (normally defined as the time it takes for the reverberation to decrease in level by 60 dB, the RT60) is in accordance with that commonly found in rooms of a similar size.

    A small room with a long reverberation time sounds odd, as does a big room with a short reverberation time. We can thank the British Broadcasting Corporation (BBC), which probably owns and operates more purpose-designed acoustic spaces than any other organization in the world, for codifying this knowledge.

    One of the most common problems in acoustics, one that particularly affects room-sized rooms rather than concert halls and auditoria, is standing waves. The wavelength of audible sound ranges from around 17 mm to 17 m. Suppose that the distance between two parallel reflecting surfaces is 4 m. Half a wavelength of a note of 42.5 Hz (coincidentally around the pitch of the lowest note of a standard bass guitar) will fit exactly between these surfaces. As it reflects back and forth, the pattern of high and low pressure between the surfaces will stay static: high pressure near the surfaces, low pressure halfway between. The room will therefore resonate at this frequency and any note of this frequency will be emphasized. The reverberation time at this frequency will also be extended.

    [Illustration: standing wave demonstration using a string]
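    The 42.5 Hz figure, and the integral multiples mentioned next, come from fitting whole numbers of half-wavelengths between the surfaces. A sketch using the module's 340 m/s (the function name is ours):

    ```python
    SPEED_OF_SOUND = 340.0  # m/s

    def axial_modes_hz(spacing_m: float, count: int = 3) -> list:
        # A standing wave forms when a whole number of half-wavelengths
        # fits between two parallel surfaces: f = n * c / (2 * L)
        return [n * SPEED_OF_SOUND / (2 * spacing_m) for n in range(1, count + 1)]

    print(axial_modes_hz(4.0))  # [42.5, 85.0, 127.5]
    ```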


    This will also happen at integral multiples of the standing wave frequency. Smaller rooms sound worse because the frequencies where standing waves are strong are well into the sensitive range of our hearing.
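    A rough sketch makes the room-proportion point concrete (axial modes only, assuming 340 m/s; the function name and the example dimensions are ours): a room with equal dimensions piles its standing-wave frequencies onto the same few values, while unequal dimensions spread them out.

    ```python
    SPEED_OF_SOUND = 340.0  # m/s

    def axial_mode_set(dimensions_m, max_hz=200):
        """Distinct axial standing-wave frequencies (f = n * c / 2L)
        for each room dimension, up to a frequency limit."""
        modes = set()
        for length in dimensions_m:
            fundamental = SPEED_OF_SOUND / (2 * length)
            n = 1
            while n * fundamental <= max_hz:
                modes.add(round(n * fundamental, 1))
                n += 1
        return sorted(modes)

    # A 4 m cube concentrates all its axial modes onto the same frequencies...
    print(axial_mode_set((4.0, 4.0, 4.0)))  # [42.5, 85.0, 127.5, 170.0]
    # ...while unequal dimensions spread them more evenly across the band.
    print(axial_mode_set((4.0, 3.4, 2.7)))
    ```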

    Standing waves don't just happen between pairs of parallel surfaces. Imagine a ball bouncing off all four sides of a pool table and coming back to where it started; a standing wave can easily follow this pattern in a room, or even bounce off all four walls, ceiling and floor too.

    Wherever there is a standing wave, there might also be a flutter echo. Next time you find yourself standing between two hard parallel surfaces, clap your hands and listen to the amazing flutter echo as all frequencies bounce repeatedly back and forth. It's not helpful either for speech or music.

    [At higher harmonics than the fundamental frequency, the pattern of high and low pressure can be such that there is high pressure in the centre between the boundaries and low pressure elsewhere. The pressure is always high at the boundaries.]

    The solution to standing waves is firstly to choose the proportions of the room so that the standing wave frequencies are spread out as much as possible. Square rooms concentrate standing waves into a smaller number of frequencies; a cube-shaped room would be the worst. Non-parallel walls are good, but these damned clever standing waves will still find a way. We need...


    Acoustic Treatment

    The function of acoustic treatment is to control reverberation time and to reduce the levels of standing waves. We'll come back to standing waves in a moment.

    If surfaces can be made more absorbent then obviously reflections will be reduced in strength, hence reverberation time will be less.

    Soft materials such as carpet, drapes and especially mineral wool all find applications as porous absorbers. Porous absorbers, however, only work well when they are at least a quarter of a wavelength thick.

    This means that they are only really practical for high and high mid frequencies. If the only acoustic treatment used in a room is porous absorption, then the room will sound incredibly dull and lifeless.
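    The quarter-wavelength rule shows exactly why porous absorption runs out of steam at low frequencies (a sketch assuming 340 m/s; the function name is ours):

    ```python
    SPEED_OF_SOUND = 340.0  # m/s

    def quarter_wave_thickness_m(frequency_hz: float) -> float:
        # A porous absorber needs to be at least a quarter of a
        # wavelength thick to work well at a given frequency.
        return SPEED_OF_SOUND / frequency_hz / 4

    print(round(quarter_wave_thickness_m(4000) * 1000))  # roughly 21 mm at 4 kHz
    print(quarter_wave_thickness_m(100))                 # 0.85 m at 100 Hz!
    ```

    A couple of centimetres of mineral wool is easy; nearly a metre of it, for bass frequencies, is not.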

    Another type of absorber is the panel or membrane absorber. A flexible wood panel (around 4 mm to 18 mm thick) mounted over a sealed air space (around 100 mm to 300 mm in depth) will resonate at low frequencies, and as it flexes will absorb energy.

    If damping material (typically mineral wool) is added inside, or a flexible membrane is used, then this type of absorber can be effective over a range of low frequencies. Drill some holes in the panel and the absorption becomes wide band.

    Ideal! Panel absorbers with little damping can be tuned to the frequencies of standing waves and control them very effectively. The other way of dealing with standing waves, and at the same time waving a magic wand and making the room sound really great, is to use diffusion. Irregular surfaces break up reflections, creating a denser pattern of low-level reflections than would occur with mirror-like flat surfaces. The irregularities, however, have to be comparable in size to the wavelengths you want to diffuse. Sound is always difficult to control.

    [Illustration: panel absorber]


    Soundproofing

    There has always been a lot of confusion between the role of materials that reflect sound, and materials that absorb sound. Sound-absorbing materials are NOT good at blocking sound transmission.

    This is not to say that they have no function in soundproofing, just that the general public consensus is that to provide soundproofing, all you need is lots of absorbent material. This is 100% absolutely not so. Here's an example...

    Suppose a partition is created from a very thick layer of mineral wool (the most cost-effective sound absorber there is). Suppose it is so thick that it absorbs 75% of the sound pressure that falls upon it, leaving only 25% to transmit through to the other side. This seems good, since the sound pressure has dropped to a quarter.

    However, when you consider this in decibel terms, reducing sound pressure to a quarter is a change of minus 12 dB. So if the sound pressure on the side where the sound originates is 100 dB SPL, the sound pressure on the other side of the partition is still a very significant 88 dB SPL.

    This is a noticeable difference, but it's hardly soundproofing. For really effective soundproofing we need a drop of at least 45 dB, and preferably more. Even then, the sound will very likely be audible on the other side of the partition.
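    The arithmetic in the mineral wool example is worth checking for yourself (a sketch; the function name is ours):

    ```python
    import math

    def pressure_drop_db(fraction_transmitted: float) -> float:
        # Express a fraction of transmitted sound pressure in decibels
        return 20 * math.log10(fraction_transmitted)

    # 25% of the sound pressure transmitted looks impressive, but...
    drop = pressure_drop_db(0.25)
    print(round(drop, 1))        # -12.0 dB
    print(100 + round(drop, 1))  # 88.0 dB SPL remains on the far side
    ```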

    Materials

    Effective soundproofing can only be provided by materials which reflect sound energy. Such materials are massive and non-porous: concrete, for example, or a well-made brick or blockwork wall. Here is a list of suitable materials:

    Concrete

    Bricks or non-porous blocks

    Plasterboard (also known as drywall, sheetrock, wallboard, & gypsum board)


    Plywood and dense particle board

    Glass

    Metal

    Proprietary flexible soundproofing materials

    The two characteristics that all of these have in common are mass and non-porosity. The last item, proprietary flexible soundproofing materials, covers an immense range of potential solutions, some of which, when you look at their advertising material, seem to work by magic rather than physics. They will only work if they are massive and non-porous - simple as that.

    The three requirements for good soundproofing

    Having looked briefly at the materials, we can now consider the three requirements for good soundproofing:

    Mass

    Continuity of structure

    No defects

    Mass means what it says. Double the mass of a partition and you get 6 dB more insulation.
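    The 6 dB per doubling follows the simplified mass law. As a sketch (real partitions deviate from this, for example around resonance and the coincidence frequency; the function name is ours):

    ```python
    import math

    def mass_law_gain_db(mass_ratio: float) -> float:
        # Simplified mass law: doubling the surface mass of a partition
        # buys roughly 6 dB of extra sound insulation.
        return 20 * math.log10(mass_ratio)

    print(round(mass_law_gain_db(2), 1))  # about 6.0 dB for double the mass
    print(round(mass_law_gain_db(4), 1))  # about 12.0 dB for quadruple
    ```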

    Continuity of structure not only means non-porosity, it means that the soundproofing should enclose the room in question absolutely 100%. If there is any place where sound can get through, it will. You could spend a lot of money and see it wasted because of acoustic holes in the structure.

    No defects really means the same as continuity of structure, except that it is one thing to design a room with no acoustic holes, and quite another to see it through to completion successfully. Builders do not always comply with architects' plans 100%, and the short-cuts they take could well compromise soundproofing considerably.

    Let's go back and look at the materials once again...

    Concrete

    Concrete is a wonderful building material. The only consideration is that it should be vibrated effectively to make sure there are no air pockets.

    Bricks

    A house brick often has a hollow in one surface, known as the frog. BBC practice is to lay bricks with their frogs uppermost (which is not always the case in conventional building practice) because then they have to be filled completely with cement. This makes the wall heavier than it may otherwise have been, and therefore a little bit better at soundproofing.

    Plasterboard (drywall)

    Plasterboard consists of a layer of gypsum plaster around 12 mm thick sandwiched between two sheets of thick paper. With it you can make a dry partition. A wooden framework is constructed and layers of plasterboard nailed on.

    The BBC's double "Camden" partition consists of two such frameworks, onto which are nailed a total of eight layers of plasterboard.

    The advantage of dry partitions is that they can be constructed while the rest of the studio complex is still operational. Concrete and bricks are much messier, making it more likely that operation will have to be closed down totally. Dry partitions are sometimes called lightweight partitions. This is because you can divide a room into two using just two sheets of plasterboard on the wooden framework. But by the time you have added enough extra layers for good soundproofing, it is no lighter than a brick wall providing the same degree of insulation.

    Plywood and particle board such as chipboard and MDF

    These are all good materials - obviously the denser, and therefore heavier, the better. They are more expensive than plasterboard however, so they are only used where they are needed.

    Glass

    Glass is a very good material for soundproofing, but it is expensive. Therefore it is only used when you need to see through the soundproofing.

    Metal

    Once again, this is a very good material for soundproofing, but it is expensive in comparison to the alternatives. It is only used where its high density is important in achieving a relatively thin soundproofed partition. It is most commonly found in soundproofed doors, which may have a lead lining.

    Proprietary flexible soundproofing materials

In light of the comments made above, these are generally expensive in comparison with their acoustic worth. They should only be used where flexibility or ease of installation is important. They may also be used as damping material.

For instance, a metal panel in a car may vibrate and transmit energy to the passenger compartment. If it were damped, then not only would the vibration be reduced, but significant energy would be taken out of the sound wave.


Construction techniques

A room is made up from a variety of surfaces and components, all requiring their own construction techniques:

    Walls

    Ceiling

    Floor

    Windows

    Doors

    Cable ducts

    Ventilation ducts

    Walls

    Whatever material the wall is made out of, it is better to use two thin walls spaced apart rather than one thick one of equivalent mass.

    Remembering that soundproofing is best achieved by reflection, and that reflection occurs at the boundary between one material and another, it makes sense to provide four boundaries rather than two.

At first thought, it may seem that if a partition has a sound transmission class (STC) of, say, 35 dB, and two such partitions are provided, then the overall STC will be 70 dB. This is not the case.

    By doubling the mass you get an extra 6 dB, and by spacing apart the two leaves of the partition you might gain another 3 dB.

    This may not sound like much, but it costs hardly anything so it is worth having.

The reason why you don't get twice as many decibels of sound reduction is that the two leaves remain coupled together. In fact, the more closely they are coupled, the more the object of the exercise is defeated.

Double-leaf brick walls (cavity walls) are often constructed using wire or plastic ties which couple the leaves together for mechanical strength. For a wall that is designed for good soundproofing, the use of such ties should be minimized. Care should be taken not to allow cement to fall onto the ties. In normal building, this would not matter.

    Also, builders are known to have a habit of depositing rubbish between the leaves of a cavity wall. This of course must not be allowed to happen as it couples the leaves of the partition.

    The space between the partitions should be filled with absorbent material such as mineral wool. This is where absorbent material does have a place in soundproofing. If the cavity is left empty, sound will bounce back and forth between the leaves and some of the reflected energy will end up being transmitted.

    If this can be absorbed significantly, then the insulation will be better.

    Ceiling

The difficulty in building a soundproofed ceiling is mounting sufficient mass horizontally. The brute force solution is to lay concrete on top of metal shuttering, preferably as a double-leaf construction. The concrete could be up to 175 mm thick in total. As always, mass wins.

    For a less heavily engineered solution, the BBC recommend woodwool slabs. Woodwool is a sheet or board made from a mixture of thin strips of wood and cement, which are bound together through compression within a mould. Layers of plasterboard can be used too, providing they are adequately supported.

    Acoustic tiles are virtually useless as sound insulation, although they do find application in acoustic treatment.

    Floor

    Once again, mass rules. But also there is a technique known as the floating floor, which is widely used in studio construction.

The fully engineered floating floor would consist of a concrete slab formed on metal shuttering, supported on rubber pads or even heavy-duty springs. The mass of the slab is important, as the mass-spring system will have a resonant frequency at which the sound insulation properties will be worse than if the floor were not floating! A massive slab can push this resonant frequency below the audio band.
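This mass-spring behaviour can be sketched numerically. Here is a minimal illustration, assuming the standard resonance formula f = (1/2π)√(k/m), with k the stiffness of the resilient layer and m the slab mass; the function name is ours, not from the text:

```python
import math

def resonant_frequency_hz(stiffness_n_per_m, mass_kg):
    # f = (1 / 2*pi) * sqrt(k / m): a heavier slab pushes the resonance lower
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# doubling the slab mass lowers the resonant frequency by a factor of sqrt(2)
f_light = resonant_frequency_hz(1e7, 500.0)   # lighter slab
f_heavy = resonant_frequency_hz(1e7, 1000.0)  # same springs, twice the mass
```

The stiffness and mass figures are arbitrary; the point is the trend, not the absolute values.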

    Where restriction on cost or loading prevents a heavy floating floor, a lighter-weight version can be constructed to BBC recommendations by putting down a resilient layer of high-density mineral wool (approximately 30 mm) covered with a plastic sheet, and then laying a 70 mm reinforced concrete slab on top of that, with a further 40 mm concrete screed on top.

    A wooden domestic floor may be improved by adding two layers of 18 mm particle board on top, with the joints staggered to avoid gaps. The gaps around the edges can be filled with a mastic material, or by compressing mineral wool tightly into the gaps. There would be no harm in floating this on top of an old carpet, but the additional benefit, other than for impact noise, would be slight.

    Windows

    Glass is a good soundproofing material if it is thick enough.

    There is one type of window that is absolutely useless for sound insulation however, and that is an open window!

    If a window has to be opened to provide ventilation then all of the rest of the soundproofing in the studio is rendered worthless. Some windows have to be opened only for cleaning, in which case the opening section should be surrounded by a compression seal, not a brush seal (or no seal at all).

    A properly constructed window would have two panes of differing thicknesses (to avoid resonance effects allowing the same band of frequencies through each pane), set in mastic in a well constructed frame.

The flexible mastic decouples the panes from the frame and from each other, reducing the opportunities for sound transmission. Since this is in effect a double-leaf partition, it would be sonically advantageous to fill the gap between the panes with mineral wool. Unfortunately, the window would then no longer function as intended.

    The best compromise is to line the reveals (the edges between the panes) of the window with absorbent material to soak up the energy that would otherwise bounce around inside until it found an outlet.

Often, windows are constructed so that the panes are angled to each other. This has some value in preventing standing waves being set up between the otherwise parallel panes, but its greater value is in cutting down on the visual reflections that would otherwise occur.

    Doors

The best solution for access is simply to buy a soundproofed door. This will be expensive, but it will be worth it. It will probably have magnetic seals around the top and sides, and a compression seal at the bottom. It will also be very heavy, meaning that the wall it fits into will have to be strong enough to support it.

    A reasonable alternative is a heavy fire door, with the jamb fitted with rubber compression seals and extended all the way around the door, including the bottom.

    To gain better insulation than a single door can achieve, a sound lobby is sometimes constructed so there are two doors between one side of the wall and the other.

    Box within a box

    The ultimate in studio building is the so-called box within a box structure. Here, the external building provides shelter from the elements, and office facilities, but within it is a completely enclosed and self-supporting structure standing on rubber pads or springs. Naturally, this is expensive.

    Ventilation

Ventilation and air conditioning, sometimes known as HVAC (the H stands for heating), is a vitally important topic to study in conjunction with soundproofing. When a studio is soundproof, it is also airtight, unless steps are taken.

Ventilation and air conditioning are not synonymous. Ventilation means access to fresh air from outside the building; air conditioning means cooling and maintaining the humidity of the air that is already inside. An air conditioning system may provide ventilation, but many do not.

    There are a number of problems caused by such systems:

    Noise caused by air turbulence within the ducts

Fan noise transmitted through the air within the ducts

    Noise in the structure of the building transferring to the ducts and being transmitted through them

    Fan noise transmitted through the metal of the ducts

    The ducts create transmission paths through the building

    These are the solutions...

    Turbulence is reduced by having ducts with a large cross-sectional area. This allows the air velocity to be lower and any remaining turbulence will be lower in frequency.

    Any airborne noise can be reduced by the incorporation of plenum chambers. A plenum is a large space through which the air must travel, lined with absorbent material. The air temporarily slows down and allows time for any sound it carries to be absorbed.

    The ducts are also lined, bearing in mind that the absorbent material must not give off particles (like mineral wool does), unless the air is being extracted. Baffles are generally not used as they increase turbulence.

    Noise that would otherwise travel through the metal of the duct is reduced by suspending the ducts flexibly, and by having flexible connector sections every so often to absorb vibration.


    Noise from the fan that would otherwise enter the structure of the building can be reduced by mounting the fan on a heavy plinth, itself resting on resilient pads. Obviously, a fan that is intrinsically quiet should be used.

    Studio ventilation and air conditioning systems should be installed by contractors who have experience in doing this in a studio environment. Otherwise it is likely that the result will not be satisfactory.

    The function of absorption in soundproofing

    If soundproofing were carried out using only reflective materials, then the sound energy would continue to reflect back and forth, each time offering another opportunity for some of the energy to be transmitted.

    If there is sound absorbing material within the room or cavity, this energy will sooner or later be absorbed and converted to heat.

    Using absorption in this way is useful when the level of the sound source is fixed - a noisy fan could be enclosed and the enclosure filled with absorbent material, for example. It is also useful in cavities.

    In a recording studio control room however, adding more absorbent material for this purpose is not useful since it will lower the sound level in the room and the engineer will simply turn up the level to compensate.

    Flanking transmission

Flanking transmission occurs where a partition is built up to the height of a suspended ceiling, or down to the level of a raised floor, but not all the way to the solid structure of the building.

    No matter how good the soundproofing qualities of the partition, sound will take the flanking path over or under the obstacle.

    Cable ducts

Where a cable duct passes through a partition, there will be the opportunity for sound to leak through the duct.

    To prevent this, the space not occupied by cables must be filled with pugging. This can take the form of sand in bags, or tightly compressed mineral wool.

Note that there is no 'l' in 'pugging'.


    Audio Electronics

Sound engineers commonly feel quite close to the electricity and circuitry that form and guide their signals. What goes on inside the equipment is, to many, as interesting as the sound that comes out of it, and is often the subject of heated debate. Next time you bump into a sound engineer, whisper, 'Tubes (valves) or transistors?' in his or her ear.

An understanding of the nature of electricity is important for a sound engineer to be able to do their work properly. As a starting point, let's take the concept of voltage, or electrical pressure.

It's useful to think of voltage, measured in volts (V), as the motive force behind all electrical interactions. In fact, we sometimes refer to voltage as EMF, meaning electromotive force, although the full term isn't in common use.

An audio signal is always an alternating voltage, where the voltage swings continually between positive and negative; if there is a DC component, it shouldn't be there. The frequency of the voltage is exactly the same as the frequency of the original sound.

    Since the frequency range of human hearing is taken to be 20 Hz to 20 kHz, then these are the frequency bounds of an audio signal. The level of the signal may vary from microvolts (millionths of a volt), produced by a microphone in quiet conditions, through a volt or so in a mixing console, up to 100 V or more at the outputs of a power amplifier (keep your fingers well away!).

    Electricity flows from positive to negative voltage. At least that is what the early pioneers of electricity thought. It actually flows from negative to positive, transported by electrons, but we still generally think in terms of conventional current. Yes it is confusing.

    Nearly always we use a zero voltage reference, which in mains powered equipment is connected to earth, ultimately through a copper spike, or equivalent, sunk deep into the ground.


When a voltage is applied to a circuit component, or to anything for that matter, a certain current will flow, measured in amperes (A). It's good to think of current as the rate of flow of electrons: the more electrons that are moving, the greater the current. The magnitude of the current fairly obviously depends on the magnitude of the voltage. It also depends on the resistance of the object or component that is subjected to the voltage.

Some materials have a high electrical resistance, measured in ohms (Ω), and are therefore good insulators. The lower the resistance, the greater the current for a given voltage:

    I = V/R, where I is current, V is voltage and R is resistance

    When the voltage is alternating we also commonly think about impedance. Impedance (Z) is analogous to resistance but accounts for situations where the current and voltage are not alternating in step, or are not in phase, as we would put it. Impedance is also measured in ohms.

There is also reactance (X), as found in a capacitor or inductor, which restricts the flow of current even though there is little or no resistive component. Impedance is the combination of resistance and reactance.

    Electrical power is another important concept. Power in general terms is the rate of flow or conversion of energy. Energy, as contained in a battery for example, is measured in joules (J).

    A battery contains a certain quantity of energy and when that is used up the battery is dead. You can use it at a faster or slower rate. Connecting the battery to a low resistance circuit will result in a high rate of energy release, and a high power will be developed. Power (P), measured in watts (W), can be calculated in two ways:

P = V²/R

P = V × I
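For a simple resistive load, both power formulas give the same answer, as a quick sketch shows (the component values are illustrative only):

```python
def power_from_voltage(v_volts, r_ohms):
    # P = V^2 / R
    return v_volts ** 2 / r_ohms

def power_from_current(v_volts, i_amps):
    # P = V * I
    return v_volts * i_amps

# 9 V across a 3-ohm load: I = V/R = 3 A, so both routes give 27 W
p1 = power_from_voltage(9.0, 3.0)
p2 = power_from_current(9.0, 9.0 / 3.0)
```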


Passive components

The three principal passive components are the resistor, the capacitor and the inductor. They are passive in the sense that they do not add to the signal in any way. They can reduce it in various ways, but they cannot increase the power of the signal, and generally they do not change the shape of the waveform (which would be distortion).

    The resistor is commonly used to reduce the level of a signal, or to develop a voltage from a current flowing in a conductor. It can also be used to reduce current flow. Fig. 1 shows a potential divider. The voltage applied to the input creates a current...

I = Vin / (R1 + R2)

    The voltage at the output is therefore...

Vout = I × R2

    If the resistors are of equal value then this arrangement halves the voltage.
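The two divider equations can be combined into one small sketch (the function name is ours, not from the text):

```python
def divider_vout(vin, r1, r2):
    # current through the series chain, then the voltage developed across R2
    i = vin / (r1 + r2)
    return i * r2

# equal resistors halve the voltage, as stated above
half = divider_vout(10.0, 1000.0, 1000.0)  # 5.0 V
```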

    Fig. 2a shows a wire of low resistance. Apply a voltage and current will flow so easily that the difference in voltage at the two ends will become instantly zero, or at least very close to zero. To take advantage of this current to create a voltage, insert a resistor as in Fig. 2b. There will now be a voltage between the two terminals of the resistor...

V = I × R

    Of course the current will now be less, simply because there is a resistance present in the circuit.

    The capacitor, formerly known as the condenser, is used to control the high frequency content of a signal.

Capacitors allow high frequencies to flow readily but have a higher reactance to low frequencies. (Note the use of the word reactance, since voltage and current are not in phase and there is no resistive component.) The relevant equation is: capacitive reactance X = 1/(2πfC), where f is frequency and C is capacitance, measured in farads (F).
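Plugging numbers into the reactance equation shows the frequency dependence directly. A minimal sketch (the helper name and component value are our own choices):

```python
import math

def capacitive_reactance_ohms(f_hz, c_farads):
    # X = 1 / (2 * pi * f * C): reactance falls as frequency rises
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# a 1 uF capacitor: high reactance at low frequency, low at high frequency
x_100hz = capacitive_reactance_ohms(100.0, 1e-6)     # about 1592 ohms
x_10khz = capacitive_reactance_ohms(10_000.0, 1e-6)  # about 16 ohms
```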

Fig. 1: Potential divider

Fig. 2: Measuring voltage


Fig. 3 shows a circuit in which a capacitor is used to control the level of high-frequency signal components. You will notice that it is similar to the resistive potential divider. If the frequency is very low then the capacitor will appear as a high impedance. If you plug an arbitrarily high number of ohms into the potential divider equations, then you will find that Vout is almost equal to Vin. (It may seem odd that Vin is not reduced by passing through a resistor. This is in fact the case if no current is drawn by the measuring instrument. You can imagine the measuring instrument to be another R2 in another potential divider. If its input impedance, as we would call it, is high, then the voltage will not be reduced.)

    However if the frequency is high, then the impedance of the capacitor will be low and it will allow current to flow readily to earth where it is lost. Vout will therefore be low.

    An inductor (formerly known as a choke) is very much like a capacitor, except that it will pass low frequencies and restrict high frequencies.

Where two or more coils of wire are wound around a metal former, the result is a transformer. A transformer can act on an electrical signal to exchange more voltage for less current, or more current for less voltage. The transformer, as a passive component, cannot increase the power of a signal.


Real world components

Resistors, capacitors and inductors are idealized concepts that in the real world have to be manufactured using available materials and technologies.

Resistors are easily manufactured in any value from a fraction of an ohm up to 1 MΩ (1 million ohms) or more, from a variety of resistive materials. Small resistors are used in low power circuits. Enormous wire-wound resistors are available for high power use. The desired accuracy and stability of the value can be specified, commonly to 1%, but better is available.

    Capacitors are more of a problem. Capacitors consist of two metal plates (sometimes coiled) separated by a dielectric. Small value capacitors (1 pF would be about the smallest useful value) can readily be made from ceramic materials, mica etc. Large value capacitors are more difficult.

So-called electrolytic capacitors may have high values, up to 100,000 µF, but they have to be polarized so that one terminal experiences a positive DC voltage with respect to the other. Electrolytic capacitors are neither accurate nor stable in their values. They are also bulky. Tantalum electrolytic capacitors are smaller, but cannot withstand high voltages.

An inductor is simply a coil of wire, sometimes wrapped round a former of soft (i.e. low coercivity) magnetic material. Inductors are more costly to manufacture than resistors and capacitors, and their use is avoided whenever possible. A capacitor, for instance, can control high frequencies when placed in parallel with the signal (Fig. 4a), and low frequencies when placed in series (Fig. 4b). At high power levels, such as in loudspeaker crossovers, inductors have to be used. One problem is that the metal core of an inductor can become magnetically saturated, i.e. it cannot be magnetized to any greater degree. This causes clipping and therefore distortion of the waveform.

    Passive components are useful in a variety of ways, but their limitation of not being able to increase the power of a signal means they can only do so much.


Resistors in series and parallel

Fig. 5a shows three resistors in series. Each resistor gets a chance to block the current, therefore their values add to give the total resistance: Rtotal = R1 + R2 + R3. When resistors are placed in parallel, as in Fig. 5b, each resistor presents another opportunity for current to flow, therefore the total resistance is lower than the value of any of the individual components: 1/Rtotal = 1/R1 + 1/R2 + 1/R3. This is important in sound engineering when connecting a number of loudspeakers to the same power amplifier.

    For instance, four identical loudspeakers can be connected as two pairs in series, and the pairs connected in parallel. The resulting impedance will be exactly the same as one loudspeaker by itself.
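The series and parallel formulas, and the four-loudspeaker example, can be checked in a few lines (assuming 8-ohm loudspeakers, a common nominal impedance; the text does not specify a value):

```python
def series(*resistances):
    # resistances in series simply add
    return sum(resistances)

def parallel(*resistances):
    # 1/Rtotal = the sum of 1/R for each branch
    return 1.0 / sum(1.0 / r for r in resistances)

# four 8-ohm loudspeakers: two series pairs (16 ohms each) wired in parallel
total = parallel(series(8.0, 8.0), series(8.0, 8.0))  # same as one speaker alone
```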

Fig. 5: Resistors in series and parallel


    Digital Audio

    Digital versus analogIn analog audio, the varying pressure of a sound wave travelling in air is represented by a varying electrical voltage. When the sound pressure goes higher, the voltage goes higher. When the sound pressure goes lower, the voltage goes lower. Sound can also be represented by other kinds of analog signals, such as a magnetic signal, mechanical signal on a vinyl record, optical signal on a film sound track. In all cases, some property of the medium varies in correspondence with the original sound pressure.

    The problem with analog audio is in the detail. Where there are very fine variations in air pressure, then there must be very fine variations in the storage or transmission medium. Although electrical signals can represent sound very accurately, there is a problem with magnetic storage, as in the old analog tape recorders. That problem is noise.

    The medium of magnetic tape is inherently grainy, for want of a better word. This granularity causes variations in the magnetic signal that are confused with the actual signal being recorded and stored. On playback, the granularity of the medium manifests itself as noise. And quite a lot of noise too.

    So to combat noise on a magnetic tape, the signal being recorded is made higher in level, causing stronger magnetism on the tape. This is so that the variations in magnetism caused by the signal are very much greater than those caused by the noise. The problem with this is that we run into magnetic saturation. There comes a point where the tape is reluctant to become any more strongly magnetized. This causes distortion. So analog tape recording is always a compromise between noise and distortion.

    In audio, this was the primary reason why we moved from analog to digital. It is possible to make a digital recording system with very low noise; it is impractical to do that with analog recording technology.

    Studer A80 analog tape recorder


Digital audio has another advantage: it can be copied without loss of quality. When making an analog recording, there is always a certain loss of quality, even in the original recording. When that recording is copied there is a further loss, which is significant and audible. In the normal course of production, a recording may be copied several times before it reaches the listener, each generation of copying increasing the noise and distortion.

    However, when a digital recording is copied, it is a simple matter of reading the numbers, which are just ones and zeros, from one piece of media and writing them onto another. As long as the numbers are copied correctly, the copy will be perfect, and is often referred to as a clone.

To put this another way, if some defect in the analog copying process causes the voltage to wiggle, then this wiggle will cause an audible distortion in the copy. If some defect in the digital copying process causes a zero to be a little bit squashed in the middle, well, it is still a zero and has exactly the same meaning as it did before.

So there are our two primary reasons why we transferred from analog audio to digital: noise, and the ability to make perfect copies.

    Another very good reason developed over a period of time. Digital processing is very much cheaper than analog processing. So a digital mixing console that costs a couple of thousand dollars has the same mixing and processing capability as an analog console that costs tens of thousands of dollars, and maybe more. Although it is not specifically an advantage of digital audio itself, it is also an operational convenience that the entire settings of a digital mixing console can be stored at the push of a button. There were some analog consoles that could do this, but they were fiendishly expensive, and resetting the controls was a lengthy manual process.


Analog to digital conversion

An analog signal, represented as a varying voltage, must be converted to numbers. This is done by sampling the signal many thousands of times per second.

There is a simple theory, Nyquist's theorem, which states that to successfully digitize a certain frequency, you have to sample at at least twice that frequency. So, since the conventionally accepted limit of human hearing is 20,000 Hz, or 20 kHz, an analog-to-digital convertor must sample at at least 40 kHz. In practice there needs to be a safety margin above this, so we sample at 44.1 kHz or 48 kHz.
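The Nyquist relationship, and what happens to a frequency sampled too slowly (it folds back, or aliases), can be sketched as follows. The helper names are ours, and the fold-back calculation is the standard textbook one, not something stated in this course:

```python
def min_sample_rate_hz(f_max_hz):
    # Nyquist: sample at at least twice the highest frequency present
    return 2 * f_max_hz

def alias_frequency_hz(f_hz, fs_hz):
    # a tone above fs/2 folds back towards the nearest multiple of fs
    return abs(f_hz - round(f_hz / fs_hz) * fs_hz)

rate = min_sample_rate_hz(20_000)           # 40,000 Hz for the limit of hearing
alias = alias_frequency_hz(30_000, 44_100)  # a 30 kHz tone appears at 14,100 Hz
```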

Why 44.1 kHz? It's for historical reasons. Digital audio was originally recorded onto modified video recorders, and 44.1 kHz fits conveniently into the line structure of those recorders.

Why 48 kHz? This is preferred in video and broadcasting, and also stems from historical reasons. When audio signals were first transmitted by satellite, the sampling rate was 32 kHz. This is too low for really good quality audio, but 48 kHz has a simple mathematical relationship with 32 kHz (3:2), so it was easy to convert between the two. It's easy to convert from anything to anything these days, but 48 kHz remains the alternative standard.

It is also common to sample at 96 kHz, and feasible to sample at 192 kHz. These higher rates allow a better high frequency response. This may not be audible, but the argument goes that driving at 130 kilometers per hour in a car capable of 200 km/h is a smoother experience than in a car that can only do 131.

    So to summarize, the voltage of the analog signal is sampled forty-odd thousand times per second.

What we mean by sampling is that at each sampling period, the voltage is grabbed and stored, just for a tiny fraction of a second. It is stored to allow time to measure it; measurement is not instantaneous in practical convertors. We talk of the 'sample and hold' circuit doing this job while the voltage is measured.

    Analog to digital convertor


    At this point the signal is still an analog voltage. The next step is to convert it into a number.

This is straightforward: the voltage range is divided into 65,536 different allowable values. At any instant, the allowable value that is closest to the actual value is selected. We now have a number. This number is expressed in binary form as a sequence of zeros and ones, all the way from 0000000000000000 to 1111111111111111.

    Two questions...

Firstly, where does this magic number of 65,536 come from? The answer is that if we use a sequence of sixteen binary digits to encode a 16-bit number, then there are 2^16 = 65,536 different possible combinations. Clearly, if we encode to 20-bit or 24-bit resolution then there will be more.

Secondly, what happens when the actual signal falls between two allowable levels? Won't there be an error here? The answer is yes. We call the process of selecting the nearest allowable level quantization. The difference between the actual signal level and the selected level, which exists most of the time, is the quantization error.

Quantization error leads to distortion and noise. But the more bits you have in the system, the smaller this error is. As a rough guide, each bit is worth around 6 decibels of signal-to-noise ratio. So in a 16-bit system, the theoretical signal-to-noise ratio is 96 dB. In practice this will never be completely attained. In a 24-bit system the theoretical signal-to-noise ratio is 144 dB, but achieving better than 115 dB or so is still proving difficult in practice.
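Quantization and the bits-to-decibels rule of thumb can be sketched in a few lines. The 6.02N + 1.76 dB formula is the standard theoretical figure for a full-scale sine wave, which rounds to the 'about 6 dB per bit' guide used in the text; the function names are ours:

```python
def quantize(sample, bits=16):
    # snap a normalized sample (-1.0 .. +1.0) to the nearest of 2**bits levels
    step = 2.0 / (2 ** bits)
    return round(sample / step) * step

def theoretical_snr_db(bits):
    # full-scale sine wave: SNR = 6.02 * N + 1.76 dB
    return 6.02 * bits + 1.76

err = abs(quantize(0.300001) - 0.300001)  # never more than half a step
snr16 = theoretical_snr_db(16)            # about 98 dB (roughly the 96 dB quoted)
```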

To round off this section, a little bit of common parlance. We would describe the compact disc system as '16-bit/44.1 kHz' because it has sixteen bits of resolution and is sampled 44,100 times per second. Modern recording systems can be '24/96' (an even shorter but common way of saying it), which means of course twenty-four bits, sampled 96,000 times per second.


Problems in digital systems

We have already discussed frequency response and signal-to-noise ratio. Distortion in digital systems is very closely linked to noise. Although the distortion is at a very low level in practical systems, digital distortion is highly offensive to the ear.

The cure for digital distortion is dither. Dither is a very low level noise signal that is added to the analog signal being digitized. This might seem like a crazy idea, to add noise to the signal. But the problem is that digital distortion is correlated with the signal, and the ear easily picks up on that. Adding noise randomizes the distortion so that the noise is smooth and continuous; the ear is happy to ignore that. And in a 16-bit system the noise is at a very low level anyway.
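Dither can be illustrated by adding a tiny random signal just before the rounding step. A minimal sketch, assuming triangular (TPDF) dither of about one quantization step peak, which is a common choice; this is not the course's specific implementation:

```python
import random

def quantize_with_dither(sample, bits=16):
    step = 2.0 / (2 ** bits)
    # TPDF dither: the difference of two uniform randoms spans -step .. +step
    dither = (random.random() - random.random()) * step
    return round((sample + dither) / step) * step

# repeated conversions of the same value now vary randomly around it,
# decorrelating the quantization error from the signal
out = quantize_with_dither(0.25)
```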

Another problem in digital systems is jitter. Jitter is caused when a digital signal is transmitted at an uneven rate. This can occur because of faulty, poorly-designed or cheap components, or because of long cable paths and physical problems in the system. Jitter translates into noise in the output, sometimes a lot of noise. Fortunately, jitter can be completely cured by re-clocking the signal so that it is even and regular. For practical purposes, it should be said that you will rarely come across significant problems caused by jitter using properly designed equipment. It is something that is worth knowing about, however.

    One of the features of digital audio is that it uses zeros and ones to transmit and store the signal. So, for instance, a zero could be represented by a low voltage and a one by a high voltage. It would be very easy to distinguish between a zero and a one. But sometimes there can be excessive noise or interference in the transmission or storage system, and zeros and ones can be confused, causing errors.

    Digital systems therefore are able to detect errors and compensate for them.

    The first thing to do is to detect whether an error has occurred. The digital code used to store a signal has special data added to make this possible. For example if we devised a numerical code to represent letters of

    A relatively jitter-free digital signal shows this distinctive eye pattern. When there is jitter, the eye closes.

    the alphabet, and we specified that only even numbers would be used, if you received a message from us and some of the numbers were odd, you would know that there had been an error.
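    The even-numbers analogy can be written out directly. This is only a toy illustration of error detection, not a real error-detecting code of the kind used on disc:

```python
def encode(value):
    # Double each value so that every legitimately sent number is even.
    return value * 2

def is_valid(received):
    # An odd number can only be the result of a transmission error.
    return received % 2 == 0

message = [encode(v) for v in [3, 7, 1]]    # [6, 14, 2] -- all even
corrupted = [6, 15, 2]                      # middle number damaged in transit
errors = [not is_valid(n) for n in corrupted]
print(errors)                               # [False, True, False]
```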

    When a digital system detects an error, it will attempt to correct it. Error correction means that the data is as good as it was before and the defect is completely inaudible. This is done by storing extra redundant data that can be used if some of the signal data is damaged. So if there is a scratch on a compact disc, the error handling system will recognize that the data is damaged and will look for the redundant data to replace it.

    Sometimes however the extent of the damage is too great to do this, or the redundant data is damaged as well. In this case error concealment will take place. The system will make an intelligent guess as to what the data would have been. Audio signals are somewhat predictable, so this is easily possible. The result however will be a slight degradation that might be audible.
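    A minimal sketch of concealment by interpolation, assuming the damaged sample is simply replaced by the average of its neighbours (real systems make a more sophisticated guess, but the principle is the same):

```python
def conceal(samples, bad_index):
    # Replace a damaged sample with the average of its two neighbours --
    # a very simple version of the "intelligent guess" described above.
    samples = list(samples)
    samples[bad_index] = (samples[bad_index - 1] + samples[bad_index + 1]) / 2
    return samples

print(conceal([0.1, 0.2, 9.9, 0.4], 2))   # damaged value becomes roughly 0.3
```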

    If the worst comes to the worst and the data can be neither corrected nor concealed, the system will mute. At least, this is what is supposed to happen. Digital glitches can be very high in level, unpleasant to the ear, and can even damage loudspeakers. Clearly, though, not all equipment that is meant to mute on encountering seriously damaged data actually does so.

    Error correction and concealment take place in compact disc, DVD, digital radio and television. They are also used on digital tape, but that isn't found so much these days.

    In a hard disk, CD-ROM or DVD-ROM, a more powerful error correction system is used so that in normal operation the data that comes off the disk is perfect. Banks and financial institutions, which use exactly the same storage systems, would be rather less than happy if this were not so. However, where there is a problem that is beyond the error correction system's ability to cope, the data is often irretrievably damaged.

    Latency

    All digital systems exhibit the phenomenon of latency, where there is a delay between input and output on even the shortest signal paths. In a large analog mixing console there may be literally a kilometer of cable inside, but a signal takes for all practical purposes no time at all to go all the way through the console from input to output. This is so even in a large analog studio complex. Analog audio has no latency.

    However in a digital system it takes time to convert from analog to digital, and time to convert back again. So even the shortest signal path has a latency of at least a couple of milliseconds. This is generally not audible, although care has to be taken not to mix a signal with a delayed version of itself, or phase cancellation will take place.

    Where more processing is involved, the latency will be longer. The worst example would be a computer-based recording system where the signal is processed by the computer's main processor (high-end systems use specialized digital signal processing cards). Processors such as this are designed for general data and are not optimized for signals. Also, the computer has to devote its attention to monitoring the keyboard and mouse for input, driving the display and other tasks. In this case the latency could be well into the tens of milliseconds, which is noticeable and sometimes off-putting.
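    As a rough illustration, here is a sketch of buffer latency, assuming the common scheme of one buffer filled on input and one on output; real systems add converter and driver delays on top of this:

```python
def round_trip_latency_ms(buffer_samples, sample_rate, buffers=2):
    # One buffer must fill on input and one drain on output, so the
    # minimum software round trip is roughly two buffer lengths.
    return 1000.0 * buffers * buffer_samples / sample_rate

# A typical computer audio buffer of 512 samples at 44.1 kHz:
print(round(round_trip_latency_ms(512, 44100), 1))   # 23.2 (milliseconds)
```

    This shows why shrinking the buffer is the usual remedy for noticeable latency, at the cost of working the processor harder.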

    Clocking

    An analog signal starts, ends, and goes its own sweet way in between. It has no external reference to time. An analog recording depends absolutely on the mechanism of the recorder running at a precisely controlled speed during recording and playback. There is no information in the recording to monitor or correct the speed.

    Digital systems on the other hand are sampled typically 44,100 times per second. This therefore binds digital signals to a time reference. A single signal on its own can happily work to its own time reference, which will simply be the period between samples.

    However, when two digital signals are mixed, there is a problem.

    It is impossible to have two digital clocks that run at exactly the same speed. So one signal might be running a tiny bit faster than 44.1 kHz, the other a tiny bit slower. If the two signals were mixed by simply adding up the numbers they contain, all might start off well, but sooner or later one of the signals would have to skip a sample to keep pace. This would cause a click, so clearly it is undesirable.
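    The scale of the problem is easy to work out. A hypothetical sketch, assuming two free-running clocks that differ by just 1 Hz:

```python
def samples_of_drift(rate_a_hz, rate_b_hz, seconds):
    # How many samples apart two free-running clocks are after a given time.
    return abs(rate_a_hz - rate_b_hz) * seconds

# Two machines both nominally at 44,100 samples per second,
# one actually running 1 Hz fast:
print(samples_of_drift(44100, 44101, 60))   # 60 samples adrift after one minute
```

    Even this tiny speed difference forces a skipped or repeated sample every second, which is why a shared clock is essential.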

    A single signal contains its own clock. So if you want to copy a signal from one machine to another, the machine you are copying onto can be set to external clock, and it will derive its clock reference from the incoming signal and run at the same speed. Often this happens automatically, so the operator is unaware of it.

    A two-machine system can run perfectly well in this way. But as soon as you add a third digital device, which might be a mixer or processor rather than a recorder, it becomes difficult to decide which device should be the clock master and which should be the slaves that synchronize to it.

    So in larger digital systems it is common to provide a master clock, which is an independent unit. Everything in the entire system will use this clock source, so everything is sure to run at the same rate.

    Check Questions

    Describe compression and rarefaction.

    Describe the direction of particle motion in a transverse wave.

    Describe the direction of particle motion in a longitudinal wave.

    Do air molecules move from place to place under the influence of a sound wave?

    Comment on point source.

    What is the accepted range of frequency of human hearing?

    What is the mathematical relationship between sound velocity, frequency and wavelength?

    If a sound wave travelling in air has frequency 20 kHz, what is its wavelength? (Take the velocity of sound in air to be 340 m/s).

    If a sound wave travelling in air has frequency 20 Hz, what is its wavelength? (Take the velocity of sound in air to be 340 m/s).

    Describe the convenience aspect of decibels.

    Describe the logarithmic nature of the ear's perception of sound levels.

    If sound pressure is doubled, by how many decibels does it increase?

    If sound pressure is quadrupled, by how many decibels does it increase?

    If a sound has a level of 100 dB SPL, what does this mean?

    Why are acoustic sounds more complex than electrical signals?

    Is acoustics a completely understood science?

    What are the three main factors that determine the acoustics of a room?

    What is the effect of air motion on acoustics?

    Why do reflections arriving later than 40 ms after the direct sound reduce the intelligibility of speech?

    What is meant by RT 60?

    What is a standing wave?

    Comparing wavelength and room dimensions, what is the requirement for a standing wave to be created?

    Is the pressure of a standing wave high or low close to a boundary?

    Can standing waves occur other than between parallel surfaces?

    What is a flutter echo?

    What is the worst shape for a room, acoustically?

    What is the function of acoustic treatment?

    What is a porous absorber?

    What is a panel absorber?

    Are materials that are good reflectors of sound also good sound insulators?

    Are materials that are good absorbers of sound also good sound insulators?

    How is sound absorbing material used in conjunction with sound reflecting material to improve insulation?

    With reference to the question above, when is this not effective?

    What property of a material controls its sound insulating ability?

    What are the three requirements for good soundproofing?

    What is flanking transmission?

    What is pugging?

    What is a dry partition?

    What is the disadvantage of proprietary flexible soundproofing materials?

    What should be avoided when building a double-leaf partition?

    What is a floating floor?

    Describe the construction of a window between a studio and control room.

    What types of seals would a soundproof door have?

    What is a box within a box?

    What problem does soundproofing cause with regard to ventilation?

    What problems do ventilation and air conditioning cause for soundproofing?

    With regard to the above question, describe some of the solutions.

    What is voltage?

    How many microvolts (µV) are there in one volt (1 V)?

    Describe the difference between conventional current and real current.

    What is the voltage of electrical earth?

    Describe electrical current.

    Describe resistance.

    If voltage is divided by resistance, what is the result?

    Compare resistance with reactance.

    Compare resistance with impedance.

    In what units do we measure impedance, resistance and reactance?

    Describe electrical power.

    What is a potential divider?

    How may a capacitor be used to reduce the level of high frequencies?

    What is the function of a transformer?

    What causes noise in an analog tape recording?

    If the level of an analog recording is raised to combat noise, what is the undesirable consequence?

    Describe the problems that occur in copying analog recordings.

    Explain why a digital copy can be an exact clone of the master.

    Briefly describe the advantages of a digital mixing console over an analog mixing console.

    Briefly explain Nyquist's theorem.

    State the two common sampling frequencies that lie between 40 kHz and 50 kHz.

    State two other sampling frequencies that are in common professional use.

    Briefly explain why it is better to sample at a higher frequency.

    Describe the task of the sample and hold circuit.

    Describe quantization.

    What is the relevance of the number 65,536?

    What is quantization error?

    Roughly, what is the signal-to-noise ratio of a digital system that has 20-bit resolution?

    What is the purpose of dither?

    Describe jitter and the problem it can cause.

    How can jitter be cured?

    Describe error detection.

    Describe error correction.

    Describe error concealment.

    If an error is detected but can neither be corrected nor concealed, what should be done, and why?

    What is latency?

    Why is the term latency not relevant to analog systems?

    What causes latency in digital systems?

    Describe the flow of the clock signal when a digital recording is copied from one machine to another.

    What would happen if two unsynchronized digital signals were mixed together?

    What is a master clock?

    Assessment
    Learning Outcomes

    The Nature of Sound
      Frequency
      The Decibel
      The Inverse Square Law

    Acoustics
      Standing Waves
      Acoustic Treatment

    Soundproofing
      Materials
      The Three Requirements for Good Soundproofing
      Concrete
      Bricks
      Plasterboard (Drywall)
      Glass
      Metal
      Proprietary flexible soundproofing materials

    Construction Techniques
      Walls
      Ceiling
      Floor
      Windows
      Doors
      Box Within a Box
      Ventilation
      The Function of Absorption in Sound Proofing
      Flanking Transmission
      Cable Ducts

    Audio Electronics
      Passive Components
      Real World Components
      Resistors in Series and Parallel

    Digital Audio
      Digital versus analog
      Analog to digital conversion
      Problems in digital systems
      Latency
      Clocking

    Check Questions