IT 318 – SUPPLEMENTARY MATERIAL CHAPTER 6
Digital Communications
V. 2016
BARRY M. LUNT
Brigham Young University

Table of Contents

Chapter 6: Digital Communications
6.1 Overview: A Modern Miracle
6.2 EM Spectrum and Bandwidth: Where and How Much?
6.3 Modulation: Putting Information on a Carrier
6.4 Shannon’s Law: The Ultimate Speed Limit
6.5 Media: Our Three Options
6.6 Analog to Digital Conversion: Entering a New Domain
6.7 Digital Tricks: The Wonders Digital Can Do for Us
6.8 Networking: The Backbone of Computer Communication
6.9 The Internet: History of a Modern Marvel
6.10 Wireless: The Ultimate in Connectivity Convenience
6.11 Automation Standards: Performance and Convenience
Summary


Chapter 6: Digital Communications

6.1 Overview: A Modern Miracle

Modern digital communications are truly ubiquitous; there is hardly any segment of our lives that is untouched by them, in major ways. We have grown accustomed to instantly knowing what is taking place in any corner of the world; to having cell-phone and internet service nearly everywhere we are; to being able to connect with nearly anyone, any time. But prior to 1830, none of these technologies even existed; for thousands of years, the fastest means of communication was a messenger on a horse. To say that has changed dramatically is the understatement of the century.

This chapter is a very high-level overview of some of the fundamental technologies that underlie all of this modern miracle. As desirable as it would be to dig into the many topics of this chapter in greater depth, we will have to content ourselves with a basic overview, and acknowledge the presence of a great deal of further information, readily available to the interested reader.

This author’s favorite example of our ability to communicate digitally is epitomized by the Voyager 1 and 2 deep space probes, launched by the USA in 1977. These probes successfully flew by Jupiter and Saturn, sending back incredible pictures and discovering many new moons. Then the mission of Voyager 2 was extended, and it visited the outer planets Uranus and Neptune, over 2 billion miles distant from Earth. That was back in about 1990. Today both probes are still transmitting and receiving data, though this data must travel more than 30 hours at the speed of light to reach Earth. The received signal strength is a vanishingly small 0.1 aW, or 1.0 x 10^-19 W. Every trick in our digital communications book has been used to make this possible, and many of these tricks will be overviewed in this chapter.

6.2 EM Spectrum and Bandwidth: Where and How Much?

All electronic communication takes place by sending a changing voltage over a carrier – the changing voltage is the information, while the carrier is the signal which, as implied by the term, carries the information from A (the source) to B (the destination). What is the nature of the carrier? It is an electromagnetic (EM) wave, of which light is the most familiar example. Figure 6.1 is a reminder of the many known components of the EM spectrum; it represents all there is available to us to send EM signals.

Figure 6.1: The electromagnetic (EM) spectrum represents the “space” available to us to send EM signals.

Note that the frequency of an EM signal is the single identifying characteristic which places it on this spectrum. It is also important to remember that the energy of a photon (a carrier of EM energy) is directly proportional to its frequency: a photon at 1 THz possesses 1,000 times more energy than a photon at 1 GHz. Because the EM spectrum covers more than 24 orders of magnitude, that also means that gamma-ray photons can carry 10^24 times more energy than a photon down in the VLF (Very Low Frequency) range – a phenomenal amount!

Figure 6.1 can give us the impression that there is more than enough spectrum to send all our EM signals. This would be true, if we had the ability to use all of it. To transmit data on EM waves (or are they particles?) requires that we have some device which can produce these waves, another device which can put the data on these waves (these are part of the transmitter, or A), and another device (part of B) which can detect these waves and the data they carry. Today this is done with electronic transistors and diodes, along with other associated electronic parts such as resistors, capacitors, and inductors, in what designers refer to as electronic circuits. So, we presently are limited to those portions of the EM spectrum where we have been able to produce such circuits. This includes the frequencies up to about 100 GHz, plus those whose wavelength is from about 400-2000 nm, which includes infrared and visible light. Much work is being done today on creating devices capable of working in the “TeraHertz Gap”, generally defined as those frequencies between 200 GHz and 100 THz (about 2000 nm), but this work is progressing only slowly. No significant work is being done on producing devices which can operate in the ultraviolet, x-ray, or gamma-ray regions, for reasons that will not be discussed in this chapter.

The take-away of the preceding paragraphs is that despite the availability of a virtually unlimited amount of spectrum, that portion of the EM spectrum which can be practically put to use today is somewhat limited, and spectrum crowding is a serious issue today, one which is receiving much attention – and great progress is being made.

Bandwidth is a term heavily used today, and one which has a very specific meaning in digital communications. Basically, it means the amount of EM spectrum which a given signal occupies. Using a traffic analogy, let’s imagine a highway without marked lanes, in which each item traveling on the highway uses only the width of the highway necessary. Pedestrians and bicyclists would occupy much less width than cars, and trucks and wide loads would occupy much more. In this traffic analogy, the width of the road occupied by the bicycle, car, truck, or wide load would be its bandwidth, and the highway would be the usable part of the EM spectrum. Table 6.1 gives the required native bandwidth of several common communication signals today.

6.3 Modulation: Putting Information on a Carrier

Let’s start this section with an analogy to sending information over a rope. A and B wish to communicate with each other, using nothing but a rope between them. First, they must establish a connection – both A and B must be holding the rope, or at least looking at it, and the rope must be free to move. The next thing A and B would have to do is to establish some kind of agreement as to the meaning of the movements of the rope – what we know as a protocol. The point is this: if A and B are both holding the rope, but the rope is not moving, the only thing B is getting from A is the carrier – the simple information that A is ready to send something to B. The only way A can send information is if A moves the rope in some fashion, and B knows what those movements mean.

Table 6.1: Some common EM signals and their required native bandwidth.


In electronic communication, the carrier is an EM sine wave, of a specific nominal frequency. In order for this carrier to send information, it must be modulated in some fashion, just as the rope in the previous paragraph must be moved. Modulation is the act of changing some feature of the EM sine wave. A sine wave is characterized in only 3 ways: its amplitude, its frequency, and its phase. These are shown in Figure 6.2. Using the black sine wave as our reference, we can see that the red sine wave differs only in its amplitude; the light blue sine wave differs only in its frequency, and the green sine wave differs only in its phase.

Thus, for EM signals, the only types of modulation available to us are amplitude modulation (AM), frequency modulation (FM), and phase modulation (PM, or sometimes ϕM). Each of these three types of modulation has its respective characteristics, some of them advantages and others disadvantages. These characteristics are summarized in Table 6.2. All 3 types of modulation are widely used in modern digital communication.

There is another entry in Table 6.2 that requires additional explanation: Digital M(odulation). In the digital domain, we are still restricted to the 3 basic types of modulation, but because the carrier is modulated to discrete values (this is what digital means, when it comes to modulation) of amplitude, frequency, or phase, we gain many advantages not available with only analog modulation; more on this later.
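To make “discrete values” concrete, here is a small sketch in Python of binary phase modulation (BPSK); the function name and parameter values are illustrative, not from the chapter. Each bit selects one of two discrete carrier phases.

```python
import math

# A sketch of digital phase modulation (BPSK). Each bit selects one of two
# DISCRETE carrier phases: 0 radians for a 1 bit, pi radians for a 0 bit.
def bpsk_samples(bits, carrier_hz=1_000.0, sample_rate=8_000.0, samples_per_bit=8):
    out = []
    for i, bit in enumerate(bits):
        phase = 0.0 if bit == 1 else math.pi   # the only modulated quantity
        for k in range(samples_per_bit):
            t = (i * samples_per_bit + k) / sample_rate
            out.append(math.sin(2 * math.pi * carrier_hz * t + phase))
    return out

print(len(bpsk_samples([1, 0, 1])))   # 24 samples: 3 bits * 8 samples/bit
```

Because the receiver only has to decide between two known phases, a noisy sample can still be assigned to the correct one – the root of the advantages discussed later.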

6.4 Shannon’s Law: The Ultimate Speed Limit

Shannon’s Law (named after Claude Shannon, who first wrote about it in his paper, “Communication in the presence of noise”, published in 1949) is referred to here as an ultimate speed limit. This is because it defines the absolute maximum error-free data rate that can be achieved in a given channel, with a given signal-to-noise ratio. Unlike a traffic speed limit, in which we simply risk being fined if we exceed that limit, the limit expressed by Shannon’s Law is a physical limit – we are never able to exceed it. And in fact, we are never actually able to reach it, though recent advances have come incredibly close to that limit.

There are three variables in Shannon’s Law: bandwidth, signal amplitude, and noise amplitude. The equation that relates these variables to the maximum error-free capacity of the channel is given in Equation 6.1, Shannon’s Law:

Capacity = BW * log2(1 + S/N)   (Eqn 6.1)

In Equation 6.1, the capacity is given in bits per second (also known as bps, or b/s); the BW is given in Hertz, and the Signal and Noise are always in the same units as each other: either Volts or Watts. As for finding the log2, since most calculators don’t have that function built in, recall that:

Figure 6.2: The three properties of a sine wave.

Table 6.2: The types of modulation and their respective characteristics.


log_n(x) = log10(x) / log10(n)   (Eqn 6.2)

So, for log2, this means the log2 of 64 would be:

log2(64) = log10(64) / log10(2) = 1.8062 / 0.3010 = 6.0   (Eqn 6.3)

Log2(64) is just a mathematical way of saying, “What power of 2 equals 64?”, so the answer of 6 seems rather obvious, since we know that 2^6 = 64. Extending our understanding of Equation 6.1, let’s find the capacity of a typical phone line, where the BW = 3 kHz, the signal = 10 W, and the noise = 5 mW:

Capacity = 3 kHz * log2(1 + 10 W / 5 mW) = 3 kHz * 10.9665 = 32.899 kb/s   (Eqn 6.4)

Increasing the bandwidth increases the capacity by the exact same proportion. Increasing the signal or decreasing the noise also increases our capacity, but not at the same rate as increasing our bandwidth, since they are within the log2 part of the equation.

Another very important takeaway from Shannon’s Law is that if we want to increase the capacity of a channel, we have only three options: increase our bandwidth, increase our signal, or decrease the noise in the channel.
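The phone-line arithmetic of Eqn 6.4, and the relative weight of the three options, can be checked with a few lines of Python (a sketch; the function name is mine):

```python
import math

# Shannon's Law (Eqn 6.1): Capacity = BW * log2(1 + S/N),
# with signal and noise in the same units as each other.
def shannon_capacity(bw_hz, signal, noise):
    return bw_hz * math.log2(1 + signal / noise)

# The phone-line example of Eqn 6.4: BW = 3 kHz, S = 10 W, N = 5 mW.
print(shannon_capacity(3_000, 10, 0.005))   # about 32,899 b/s

# Doubling the bandwidth exactly doubles the capacity...
print(shannon_capacity(6_000, 10, 0.005))
# ...while doubling the signal helps far less, since S/N sits inside the log2.
print(shannon_capacity(3_000, 20, 0.005))
```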

A final look at Shannon’s Law is in Table 6.3. In this table, the first row gives a capacity of 485.1 kb/s for the given bandwidth, signal power, and noise power. The second through fourth rows show the (log2) effect of an increase in the noise power; each row has a 10x increase in noise power, compared to the previous row. This results in a 13.7%, 15.8%, and 18.7% decrease in capacity, compared to each previous row. Increasing bandwidth, however, has a direct effect on the capacity, as shown in the last three rows. Each of these rows has a 10x increase in the bandwidth, first compared to the top row, then to the row immediately above it. The capacity increases by 10x each time, showing this direct relationship.

6.5 Media: Our Three Options

In sending data from A to B, there are only three options for media: wired, wireless, or optical fiber. Each medium has its own characteristics, including advantages and disadvantages. Additionally, there are two main types of wired media, and three main ranges of wireless frequencies. All of these media are summarized in Table 6.4.

Table 6.3: Several solutions of Shannon’s Law equation, showing the effects of varying signal strength and bandwidth.

Table 6.4: Media Characteristics


As is nearly always the case, there is no best solution, and the media choice for a given situation will depend on many factors, involving tradeoffs in most categories. This is why we have communication systems using all of these media.

A few terms in Table 6.4 deserve clarification. RF stands for Radio Frequencies, a term coined many decades ago to apply to the frequencies from about 500 kHz to about 1 GHz. Reach means the distance between A and B, with no repeaters or amplifiers in between. EMI stands for ElectroMagnetic Interference, and refers to the fact that electromagnetic (EM) waves are all around us today; the presence of these EM waves poses an interference problem to media which are susceptible to it.

6.6 Analog to Digital Conversion: Entering a New Domain

The real world we live in is dominated by analog things. Analog, in this case, simply means that it varies continuously. Some examples include the height and weight of humans; the size of a tree or a leaf on a tree; the temperature; the amount of sunlight; the velocity of the wind; music; vision.

Computers live in a digital world, dominated by 1s and 0s. Digital, in this case, simply means that it varies in discrete increments. Some examples include the number of coins a person has; the amount of money in your account; the number of people in a room; the number of cars on the highway; the number of paper clips in a box. These things cannot vary continuously.

Any time we want to take something inherently analog and represent it or store it in a digital world, it must first be converted from analog to digital. An example of an analog wave (in red), along with a crude digital version of it (in blue), is shown in Figure 6.3. There are two factors that determine how faithfully the digital waveform represents the original analog waveform: 1) how frequently the waveform is sampled, and 2) how small the step size is. In the analog world, the signal is continuous – not sampled. Also, it can change continuously – not in discrete increments (steps). For the digital world to fully represent the analog world, we would have to sample at nearly an infinite rate, and the step size would have to be as small as a quantum energy packet, on the order of Planck’s constant (6.626 x 10^-34 J∙s), which is impossibly small. And by the way, step size is also referred to as quantization error.

As in most engineering and technology decisions, the question we must answer is not what does it take to do it perfectly, but what does it take to do it well enough? As for the first factor in analog to digital conversion (the sampling rate), Harry Nyquist (1889-1976) gave us the answer that has since been known as the Nyquist criterion: the signal must be sampled at ≥2x the bandwidth of the signal, or as sometimes stated, at ≥2x the highest frequency. So if we consider music (see Table 6.1), in which the bandwidth is 20 kHz, the Nyquist criterion states that we must sample music ≥40,000 times/sec in order to adequately reproduce it. And by the way, in converting music to digital, the decision was made to sample it at a rate of 44.1 ks/s (samples/second), which does meet this criterion.

Figure 6.3: An example of an analog waveform, along with a rather crude digital version of the same waveform.

The other factor (how small the step size needs to be) has been answered experimentally in most applications. This means that a digital signal with a given step size is tested and compared to the original analog signal; when the difference between the two signals becomes insignificant, the step size has been determined. The equation for step size is given by Equation 6.5:

Step Size = (Vmax – Vmin) / 2^n   (Eqn 6.5)

where n = the number of bits, and (Vmax – Vmin) = the range of the signal being converted. Again using the example of music, the number of bits adequate for high-quality storage and reproduction has been determined to be 16 bits. Thus, the step size for a 20 Volt range is:

Step Size = 20 V / 2^16 = 20 V / 65,536 = 305.18 µV   (Eqn 6.6)
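Equations 6.5 and 6.6 are easy to verify in code (a quick sketch; the function name is mine):

```python
# Step size of an n-bit analog-to-digital converter (Eqn 6.5):
# the full voltage range divided into 2^n discrete levels.
def step_size(v_max, v_min, n_bits):
    return (v_max - v_min) / (2 ** n_bits)

# 16-bit audio over a 20 V range, as in Eqn 6.6:
print(step_size(10, -10, 16) * 1e6)   # about 305.18 microvolts
```

Each extra bit halves the step size, so resolution improves exponentially with n.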

Table 6.5 gives the sampling rates and resolution (number of bits, and percentage or ppm) that have been deemed adequate for converting several common analog signals into digital.

To transmit a signal which has been converted from analog to digital, we must send each sample (each sample being of the designated number of bits), and send them as fast as they are being generated. For voice, this means our data stream is 8 ksamples/s * 8 bits/sample; when multiplied, the samples cancel out and we are left with 64 kbits/s – the standard data rate for transmitting voice. Note that a much higher data rate is necessary for video: NTSC requires 5.5 Msamples/s * 24 bits/sample (since there are 3 colors), giving 132 Mbits/s; HDTV requires even more, at 24 Msamples/s * 24 bits/sample, giving 576 Mbits/s. These are the native bandwidth requirements of these signals; compression has worked wonders on these very high data rate requirements, but more on that in the next section.
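The samples-times-bits arithmetic above can be sketched in a couple of lines (the function name is mine):

```python
# Native (uncompressed) data rate: samples/second * bits/sample = bits/second.
def native_rate_bps(samples_per_sec, bits_per_sample):
    return samples_per_sec * bits_per_sample

print(native_rate_bps(8_000, 8))        # voice: 64,000 b/s
print(native_rate_bps(5_500_000, 24))   # NTSC: 132,000,000 b/s
print(native_rate_bps(24_000_000, 24))  # HDTV: 576,000,000 b/s
```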

6.7 Digital Tricks: The Wonders Digital Can Do for Us

There are several amazing things that we can do with digital signals which we cannot do with analog signals. These include compression, error detection and correction, and encryption, all of which are used extensively in digital communication today.

6.7.1: Compression

There are two main categories of compression: lossless and lossy. Each has its respective advantages and disadvantages.

Lossless compression is accomplished by finding repeating patterns in the data and replacing these patterns with a simple code for each. On the other end of the transmission (the receiver end), the data can then be restored to its original bits, using the table of special codes for patterns. Thus, no data is lost. This is essential for transmitting financial data, spreadsheets, text files, and any data where lost bits can be problematic. The disadvantage of lossless compression is that the maximum compression is only about 2:1, which helps, but not enough in most cases. Examples of file extensions that have been compressed using lossless compression include .zip, .png, .gif, .wav, and JPEG 2000. Another fact regarding lossless compression is that if one compresses a file a second time, no additional file size reduction is accomplished.
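Python’s standard zlib module (the DEFLATE algorithm, a lossless pattern-based scheme) illustrates both points; note that this deliberately repetitive toy string compresses far better than the roughly 2:1 typical of real-world files:

```python
import zlib

data = b"the quick brown fox " * 200     # highly repetitive test data
packed = zlib.compress(data)

print(len(data), len(packed))            # repeated patterns compress very well
print(zlib.decompress(packed) == data)   # True: every original bit restored

# Compressing a second time gains nothing further, as noted above.
print(len(zlib.compress(packed)) >= len(packed))   # True: no extra reduction
```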

Table 6.5: Sampling rates and resolution for converting some common analog signals to digital.


Lossy compression is accomplished in three main ways: spatial, temporal, and mathematical. Additionally, there are lossy compression efforts that combine two or all three of these ways. The big advantage of lossy compression is that huge compression ratios are possible: up to and even greater than 1000:1. The big disadvantage is that at the receiver end of the transmission, the file cannot be restored to its original bits – some quality is lost. While this doesn’t sound useful, it has turned out to be highly useful for signals such as video and music, which have very high native data rates. We have learned to tolerate some reduction in the quality of these signals, in return for the greatly reduced data rates made possible by lossy compression. It should also be mentioned that compression reduces the amount of storage needed to store the files.

Spatial lossy compression takes advantage of the fact that video signals are made up of scan lines on the screen, and that some portions of the image are very similar. In particular, the second scan line of an image is not a great deal different from the first scan line; by taking advantage of this fact, we can reduce the second scan line to a repeat of the first, with all necessary differences. The same is true for subsequent scan lines – sending only the differences from the previous scan line saves a lot of bits.

Temporal lossy compression takes advantage of the fact that each frame of a video signal is not a great deal different from the previous frame, received 1/30th or 1/60th of a second earlier. By taking advantage of this fact, we can send the differences between the second frame and the first, again reducing the number of bits required to represent a video signal. Recognizing how this works can enable one to notice this type of compression when a video signal is barely getting through; when the scene changes, there is a noticeable lag. This is because the new scene is often very different from the previous scene, which means the differences between the new frame and the previous frame are many, requiring more bits, and if those bits are having a hard time getting through, the image will be delayed.

The last type of lossy compression is mathematical, and many, many mathematical algorithms have been developed for music and video. Many of these take advantage of our understanding of human perception. For example, we have learned that there are some details in music that, if missing, are not usually noticeable; removing these details saves bits. The same is true for video – some degradation in quality is not very noticeable.

Examples of files that have undergone lossy compression include .jpg, .mp3, .mpeg, and basically all video streamed over the Internet.

6.7.2: Error Detection and Correction (ED&C)

The value of ED&C can be readily perceived. Wouldn’t it be great if the receiver of a signal could know right away if it had misperceived part of the message? The receiver could then tell the sender there was a problem, and something could be done to fix the problem. But how can this be done? First, by taking advantage of the fact that we’re dealing with digital modulation, so we know there are only specific values that could be sent. And second, by sending a little extra (redundant) information about the message. To clarify, consider the example given in Figure 6.4. In this example, we are restricted to sending single digits, with values from 0 to 9. Every time we send 4 of these digits, we follow that 4th digit with a pair of digits representing the sum of the previous 4 digits. The transmitter is easily able to calculate this extra information about the message, and the receiver can readily check the digits as they come across. If one of the sums doesn’t agree with the received sum of the previous 4 digits, the receiver knows an error has occurred, and corrective action can be taken.

Figure 6.4: An example of redundant information, applied to integer numbers.
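The Figure 6.4 scheme can be sketched directly (a toy model; the function names and example digits are mine):

```python
# After every 4 single digits, send a two-digit sum of those 4 digits
# as redundant information the receiver can check.
def add_check_digits(digits):
    out = []
    for i in range(0, len(digits), 4):
        group = digits[i:i + 4]
        out.extend(group)
        s = sum(group)              # at most 36, so two digits always suffice
        out.extend([s // 10, s % 10])
    return out

def verify(stream):
    # Receiver side: re-add each group of 4 and compare to the sent sum.
    for i in range(0, len(stream), 6):
        group, tens, ones = stream[i:i + 4], stream[i + 4], stream[i + 5]
        if sum(group) != tens * 10 + ones:
            return False
    return True

sent = add_check_digits([3, 1, 4, 1, 5, 9, 2, 6])
print(verify(sent))        # True

corrupted = sent.copy()
corrupted[0] = 7           # one digit arrives wrong
print(verify(corrupted))   # False: the mismatched sum exposes the error
```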

Parity

The preceding form of error detection is done digitally using a concept known as parity, which is actually just a binary form of adding. This is shown in Figure 6.5. In these rows, the two far-right columns represent the two types of parity for the previous 8 bits (a bit is either a 0 or a 1). For even parity, the total number of 1s in each row, including the parity bit, must be even. For odd parity, the total number of 1s in each row, including the parity bit, must be odd. So, for the first row, which contains five 1s, the parity bit must be 1 for even parity, giving the total row an even number of 1s (six). The second row has three 1s, so the parity bit must again be 1, giving the total row an even number of 1s (four). And in row 3, which has six 1s, the even parity bit must be 0, giving the total row six 1s, again an even number.

Odd parity can also be used, and this is also shown in Figure 6.5. There is no inherent advantage to either even or odd parity, but once the decision is made which to use, it is no longer arbitrary for the receiver to choose even or odd parity – the receiver must use the same parity as the transmitter.
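In code, the parity rule is little more than counting 1s (a sketch; the function name is mine):

```python
# Parity bit for a row of bits: make the total count of 1s even (or odd).
def parity_bit(bits, even=True):
    ones = sum(bits)
    return ones % 2 if even else 1 - ones % 2

row = [1, 0, 1, 1, 0, 1, 0, 1]        # five 1s, like the first row described
print(parity_bit(row))                # 1: six 1s total, an even count
print(parity_bit(row, even=False))    # 0: five 1s total stays odd

# The "hole" described below: reversing any TWO bits leaves parity unchanged.
flipped = [0, 1] + row[2:]            # first two bits reversed
print(parity_bit(flipped))            # still 1, so the double error goes unseen
```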

Looking at Figure 6.5, it is easy to see what would happen if any of the bits were to be mistakenly reversed by the receiver. The parity for that row would be incorrect, and the receiver would know an error had occurred. This is also true if one of the parity bits is mistakenly reversed at the receiver. However, there is a hole in this approach, and it is a rather LARGE hole. If two bits in any given row were to be mistakenly reversed, the parity would not indicate the presence of an error! As shown in Figure 6.6, this is also true for any even number of bit errors – they cannot be detected! So, while parity is easy to implement and detect, it is not as powerful as we would like it to be – it misses too many of the cases where multiple bits have been mistakenly reversed.

Another disadvantage of parity is the additional (redundant) data that must be transmitted – the parity bits themselves. For the example in Figure 6.5, there is one parity bit for every 8 bits, which means 1/9 of the data is redundant – 11.1%. This 11.1% is called overhead. All forms of ED&C require some overhead.

CRC

A second approach to parity, with significantly more detection power but also more complexity, is known as CRC, or Cyclic Redundancy Check. As with the parity error detection described in the previous paragraphs, it also uses binary addition to generate information about the data. But for CRC, one of the big advantages is that it is able to detect errors involving multiple bits – in fact, any number of bits in error – within a certain probability, and the probability that an error would be missed is quite small for a 16-bit CRC (about 15 parts per million, or ppm). And the overhead is also quite small – only 16 extra bits need be added, almost irrespective of the size of the data file. CRC creates what is known as a checksum, which is computed using iterative feedback – each new bit causes a different checksum to be generated. CRC is presently very widely used for error detection in digital communication. An example of a 16-bit CRC generator is given in Figure 6.7.

Figure 6.5: Rows of bits and their associated parity.

Figure 6.6: Bits in error.

For 16-bit CRC, the number of redundant bits is always 16, but CRC can be computed over short blocks of data or large blocks. If it is calculated over 100 bits, there would be 16 redundant bits in 116, or 13.8% overhead, which is on the high side. But if a 16-bit CRC checksum is calculated over 65,536 bits (8 kbytes), then the overhead drops to only 0.0244%, and that much lower overhead is great!

But in the end, as good as CRC checksums are, they can only tell that an error has occurred – they cannot fix the problem. That capability is reserved for the last type of error detection.
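Python’s standard binascii.crc_hqx computes a 16-bit CRC (the CRC-CCITT polynomial; the generator of Figure 6.7 may use a different polynomial, so treat this as a stand-in rather than that exact circuit):

```python
import binascii

message = b"IT 318 digital communications"
checksum = binascii.crc_hqx(message, 0)   # 16 redundant bits for the whole block
print(hex(checksum))

# The receiver recomputes the CRC; any mismatch means an error occurred
# (though the CRC cannot say WHERE, and cannot fix it).
garbled = bytearray(message)
garbled[3] ^= 0x10                        # one flipped bit
print(binascii.crc_hqx(bytes(garbled), 0) == checksum)   # False: detected

# Overhead for a 16-bit CRC over a 65,536-bit block:
print(16 / (65_536 + 16) * 100)           # about 0.0244 percent
```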

FEC

The final type of error detection, known both as Forward Error Correction (FEC) and as Error Correction Coding (ECC), is much more complicated than the previous two types, and it involves more overhead and much more computation. But its power is not to be underestimated – it is amazingly powerful, and has been adapted for use in nearly all forms of digital communication and data storage. Its power comes from the fact that it goes one major step beyond the previous two types of error detection – it actually can tell WHICH bit (or bits) were in error. And since we’re dealing with a binary system, as soon as we know WHICH bit was in error, we also know how to fix it – just change it to the opposite value!

The math behind practical forms of FEC in use today is fairly complex, but the concept is readily grasped with an example, as shown in Figures 6.8a & 6.8b. Each row has simple parity added (as described in the Parity section previously). Likewise, each column also has parity added. While this results in more overhead, the advantage it gives us is that we can identify WHICH bit was in error in the block of data. Knowing this allows us to fix the offending bit.

But, as

before, having

multiple bits in

error in the same

row (or column)

causes rather severe

problems for FEC –

Figure 6.7: An example of a 16-bit CRC generator.

Figures 6-8a and 6-8b: A block of data without errors (a), and with errors (b).

Page 12: IT 318 – Supplementary Material Chapter 6it318.groups.et.byu.net/files/Supplemental...Pedestrians and bicyclists would occupy much less width than cars, and trucks and wide loads

IT 318 – Supplementary Material Chapter 6

12

it prevents us from being able to determine which of several bits were actually in error. This could be

fixed with additional parity bits, and this is one of the solutions that has been used. However, even

multiple additional parity bits cannot help us identify all the bad bits if the bad bits are too close

together. And the nature of errors in digital data is that they are almost ALWAYS close together – in

bursts. Whatever the event was that caused one bit to be wrong is also very likely to cause several

adjacent bits (in the same burst) to be wrong. This bursty nature of digital errors (they tend to occur in

bursts) is readily solved by another brilliant yet simple solution: interleaving.

An example of interleaving is given in Figure 6.9. We simply change the order of the bits before we send them out, so that bits that were originally adjacent are not sent adjacent to each other. Thus, when a burst error occurs, it does NOT affect originally adjacent bits. In the example of Figure 6.9, a burst error 8 bits in length would wipe out bits in 8 different bytes of the 64-bit stream, but none of these bytes would have more than 1 bit in error, which is easy to

detect and correct. Fortunately, the hardware required to implement interleaving is very simple and

efficient, and this innovation allows us to use less complicated FEC codes, and reduces the necessary

overhead.
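A sketch of an 8-byte interleaver in the spirit of Figure 6.9 (the figure's exact read-out order may differ; here the bits are read out column-by-column): an 8-bit burst in the interleaved stream lands as a single bad bit in each of the 8 bytes.

```python
# 8 bytes expanded into an 8x8 matrix of bits (MSB first)
block = [[(b >> (7 - i)) & 1 for i in range(8)] for b in b"ABCDEFGH"]

# Interleave: transmit column-by-column instead of byte-by-byte
stream = [bit for col in zip(*block) for bit in col]

# A burst error wipes out 8 consecutive bits of the transmitted stream
corrupted = [bit ^ 1 if 16 <= i < 24 else bit for i, bit in enumerate(stream)]

# De-interleave: reassemble the columns back into bytes
cols = [corrupted[i * 8:(i + 1) * 8] for i in range(8)]
received = [list(row) for row in zip(*cols)]

# Each byte differs from the original in at most one bit -- easy to correct
errors = [sum(a != b for a, b in zip(r, o)) for r, o in zip(received, block)]
print(errors)   # [1, 1, 1, 1, 1, 1, 1, 1]
```

Note that the interleaver adds no redundant bits at all; it only reorders them, which is why the hardware can be so simple.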

The power of FEC is best grasped with an example. FEC of a type known as Reed-Solomon is

widely used in optical disc storage (and MANY other applications). In the case of DVDs, it requires

approximately 25% overhead (meaning that of every 100 bits, approximately 25 of them are FEC bits).

In reading back the data from a DVD, an error typically occurs once in every 200 bits. This may not

sound bad, but keeping in mind that the data rate we’re dealing with here is 11 Mbps (11 million bits per

second), this means we would typically experience 55,000 errors every second! Clearly, this is an

unacceptable error rate. But with the 25% FEC bits added in, the playback circuits can actually detect

and correct these errors, improving the actual error rate by roughly 17 orders of magnitude! Thus,

instead of experiencing an average of 55,000 errors each second, we are able to reduce this to only

experiencing an average of one error every 50,000 years!
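The arithmetic behind these figures can be checked directly; an improvement factor of roughly 10^17 is what turns 55,000 errors per second into about one error every 50,000 years.

```python
bit_rate = 11e6                       # DVD read-back rate, bits per second
raw_errors_per_sec = bit_rate / 200   # about one raw error per 200 bits
print(raw_errors_per_sec)             # 55000.0

corrected_per_sec = raw_errors_per_sec / 1e17   # ~17 orders of magnitude better
years_between_errors = 1 / corrected_per_sec / (365 * 24 * 3600)
print(round(years_between_errors))    # 57654 -- on the order of 50,000 years
```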

6.8 Networking: The Backbone of Computer Communication

As the title of this section states, networking is the backbone of communication between

computers today, and for the foreseeable future. In its simplest sense, a network is three or more

computers which communicate with each other; in a more complex sense, it is multiple clusters of

computers which communicate within each cluster and with other clusters of computers. This more

complex sense of networking is essentially what the Internet is; more on that in the next section.

There are several terms which are unique in networking, so we will need to understand them

first. One of these terms is packet. A packet is a collection of bits that belong together and that are transmitted in a specific order. Figure 6.10 gives an example of the typical

Figure 6.9: An example of interleaving 8 bytes.


anatomy of a packet. The payload can be fixed or variable, but generally a message to be sent from A to B will involve multiple packets. Each packet is independent of all others, and all packets can take different routes to get from A to B.

The Internet is a network of networks, allowing computers anywhere in the world to connect to

any other computer in the world. There are many standards in place that make this possible; this section

will discuss some of these standards.

A protocol is a formal agreement (can also be a standard) by which communication takes place.

It covers all the necessary topics that need to be addressed, and is often known by its acronym. An

example is TCP/IP, which stands for Transmission Control Protocol/Internet Protocol, two of the main

protocols governing Internet communication.

A frame is very much like a packet, but at a lower layer of encapsulation in the OSI model of computer networking. This encapsulation is exemplified in Figure 6.11. At the lowest layer, we have data which is sent over some physical medium (wire, wireless, or optical fiber) in the form of a voltage variation with respect to time; also included in the Physical layer are coding and framing, which are not covered in this chapter, but which group these voltage variations into groups of bits. Working down the stack at the sender, at the Transport layer the application data becomes the TCP payload, which gets the appropriate TCP header added. At the Network layer, the TCP segment becomes the IP payload, which then gets an IP header added. And at the Data Link layer, the IP packet becomes the payload of an Ethernet frame, which gets an appropriate header and checksum added, much as shown in Figure 6.10. The frame is now ready for transmission over the Internet.
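The nesting can be sketched with placeholder headers. The byte strings below are purely hypothetical markers; real TCP, IP, and Ethernet headers carry many fields, and Ethernet's trailer is a 32-bit CRC rather than a length.

```python
def encapsulate(app_data: bytes) -> bytes:
    """Wrap application data in TCP, then IP, then an Ethernet frame."""
    tcp_segment = b"[TCP]" + app_data         # Transport layer
    ip_packet = b"[IP]" + tcp_segment         # Network layer
    fcs = len(ip_packet).to_bytes(2, "big")   # stand-in for a real frame checksum
    return b"[ETH]" + ip_packet + fcs         # Data Link layer frame

frame = encapsulate(b"hello")
print(frame)   # b'[ETH][IP][TCP]hello\x00\x0e'
```

The receiver simply peels the layers off in the opposite order.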

A router is basically a computer which is dedicated to receiving packets, deciding the best way

to get each packet one step closer to its final destination, then sending each packet to that next router.

With enough routers (and there are millions), each packet eventually makes its way to its final

destination.

Latency is a measure of the time it takes a packet to transport from A to B. Latency is composed

of three parameters: transmission time (the physical time it takes an EM signal to traverse the distance

from A to B), the number of hops (routers) it encounters along the way, and the time it takes each router

to forward the packet to the next destination. For packets remaining on this planet, latency is dominated

by the latter two parameters, since distance is small compared to the speed of EM waves.
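A back-of-the-envelope model of these three parameters (the distance, hop count, and per-hop delay below are assumed numbers for illustration, not measurements):

```python
def latency_s(distance_m, hops, per_hop_s, signal_speed=2e8):
    """Latency = propagation time over the distance plus the forwarding
    time at each router; 2e8 m/s is a typical speed in copper or fiber."""
    return distance_m / signal_speed + hops * per_hop_s

# e.g. a 4,000 km path, 15 hops, 1 ms forwarding per hop:
total = latency_s(4_000_000, 15, 1e-3)
print(f"{total * 1000:.0f} ms")   # 35 ms: 20 ms propagation + 15 ms forwarding
```

Even over a continent-scale path, the per-hop forwarding time is comparable to the propagation time, which is why hop count matters so much for terrestrial latency.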

Figure 6.10: The anatomy of a typical packet.

Figure 6.11: Encapsulation in the 7-layer OSI model of the Internet.


Jitter is the variation in latency, and is very important for time-sensitive signals such as

streaming music or video. Because each packet is sent independently, they can all take different routes

getting from A to B. Since latency is dominated by the number of hops and the time each hop takes, it is

common for packets to arrive out of order. This requires that B buffer the packets until the packets can

be put back in the right order. Thus, the larger the jitter, the larger the buffers at B must be, and the

longer the delay from when A starts sending to when B can start presenting what is being sent.
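The buffering at B can be sketched with a small reorder buffer keyed on each packet's sequence number. This is a simplified model; real transport protocols must also handle lost packets and retransmission.

```python
import heapq

def reorder(arrivals):
    """Buffer out-of-order packets in a min-heap and release each one
    only when it is the next expected sequence number."""
    buffer, released, expected = [], [], 0
    for seq, payload in arrivals:
        heapq.heappush(buffer, (seq, payload))
        while buffer and buffer[0][0] == expected:
            released.append(heapq.heappop(buffer)[1])
            expected += 1
    return released

# Packets took different routes and arrived out of order:
print(reorder([(0, "a"), (2, "c"), (1, "b"), (4, "e"), (3, "d")]))
# ['a', 'b', 'c', 'd', 'e']
```

The worst-case buffer depth here is the jitter in disguise: the further a packet arrives ahead of its predecessors, the more packets B must hold before it can present anything.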

Packet loss ratio (PLR) is a measure of the percentage of packets that are lost along the way

from A to B. Because routers engage in what is called a best-effort protocol (delivery is not guaranteed,

but generally works), some packets don’t make it all the way from A to B. This can be due to congestion

in the Internet, to noisy links, or to the presence of higher-priority traffic, but it is always undesirable.

PLRs below 2% are generally tolerable for most Internet traffic.

Networking has, historically, been designed around this probabilistic delivery model – we can estimate, with high confidence, when a packet will make it from A to B, and how likely it is to make it at all. There are other protocols which guarantee delivery (PLR = 0.0%) and have a fixed latency (jitter = 0),

but they have not come to dominate due to cost reasons, and due to the remarkable flexibility of

networking equipment built to tolerate this probabilistic packet delivery system.

6.9 The Internet: History of a Modern Marvel

The Internet has truly become as integral a part of our society as automobiles, highways,

electricity, and telephones. It would set our modern civilization back quickly and severely if the Internet

were to suddenly disappear.

Perhaps one of the first things we should do is clarify what is meant by “internet” and “the

Internet”, for these terms have grown to mean two different things. An “internet” is simply a network of

computers which are connected to each other so that they can share data. As useful as this is, it pales in

comparison to the usefulness of “the Internet”, which means the world-wide interconnection of

computers. The Internet interconnects everyone with access to it – users can send and receive

information from any other Internet user, or from any of the hundreds of millions of websites that are

out there.

A saying about the Internet is attributed to Robert Metcalfe: “The value of a telecommunications

network is proportional to the square of the number of connected users of the system.” While the exact

quantification of the value of a telecommunications network is certainly subject to debate, no one can

argue against the basic premise: the value of the Internet has increased as a function of the number of

people connected to it. First begun in the 1960s as a way to interconnect computers working on

ARPANET, the number of computers connected to the Internet grew only slowly, and relatively few

people in the world knew or cared about it. But with the introduction in the early 1990s of a point-and-click graphical interface to the Internet – the web browser – coupled with the increase in home computing, its usefulness began to grow rapidly. And as it grew, its value

increased greatly, so that in only a few years it became almost essential. Today, a computer isn’t even

considered fully a computer unless it has access to the Internet and the vast amount of information

available through it.

At its core, the Internet consists of a network interface circuit built into each computer, along

with routers and data centers distributed throughout the world. Computers today interface to the Internet


through two main standards: WiFi (also known as IEEE 802.11) and Ethernet. WiFi is the wireless

access, highly convenient for mobile computing. Ethernet is the wired standard, giving high

performance (high data rates) and somewhat more security than wireless access.

Routers were discussed in the previous section, and they are an integral part of the Internet.

Between routers are various connection standards, some for low cost (and usually low data rates and/or short distances), and others for very high data rates and/or long distances (and usually high cost). Some of these connection standards are shown in Table 6.6. An example of a router is shown in Figure 6.12.

Data centers are truly one of the marvels of the modern Internet. They come in many sizes, from very small to huge. Google is one of the few companies which has released public information on their data centers. At last count, they had 15 data centers: 8 in the USA, 4 in Europe, 2 in Asia, and 1 in Chile. They have over 2,000,000 servers (computers dedicated to providing files and other data to querying customers). Figure 6.13 shows some pictures of their data centers. Data centers

consist of rows and rows of racks of servers and HDDs (hard-disk drives), along with associated

plumbing to remove the heat produced by all the hardware, and cables to connect everything to the

Internet. They are very expensive to build (large ones like these can be upwards of $700 M) and very

expensive to operate (they require about 30 MW of power, continuously, which works out to about

$100,000/month), in addition to requiring continual maintenance and backups for everything.

Table 6.6: Some of the standards that connect computers together over the Internet.

Figure 6.12: An example of a router.


6.10 Wireless: The Ultimate in Connectivity Convenience

When it comes to mobility, nothing beats the convenience of wireless access. However, of the

three media for transmitting data, wireless tends to be the most problematic, which is why it has taken

decades to develop protocols and standards capable of delivering the performance we tend to expect.

Table 6.7 summarizes some of the most common wireless standards in use today. It is expected that more standards will be developed, as a great deal of work is going on in the field of wireless technologies today. Significant improvements are regularly being made in hardware and software, and the public demand for, and appreciation of, these technologies continues to grow.

6.11 Automation Standards: Performance and Convenience

Over the past several decades, many automation standards have been proposed, implemented, and have since disappeared. Since none is ever a perfect solution, efforts have continued at improving these standards,

and this is sure to continue. Table 6.8 summarizes some of the more popular industrial automation

communication standards in use today.

Table 6.7: Examples of some common wireless standards in use today, and their respective characteristics.


Summary

Modern electronic communication, over any of the common media, has grown in popularity and performance for several decades. There are many options available today, with continual improvements in nearly all areas. The future is bound to provide us even more impressive solutions as the progress continues.

Figure 6.13: Pictures from Google of the insides of some of their data centers.

Table 6.8: Industrial automation standards in use today and their respective characteristics.


Problems

1. Choose one thing about the Voyager 1 and 2 deep space probes that you find the most interesting, and explain why you chose it. (5 points)

2. Figure 6.1 shows the EM spectrum. It seems to convey that there is an unlimited amount of EM spectrum available to us for data communication, but this is not the case. What is the main reason this is not the case? (5 points)

3. What are the three basic types of modulation available to us? (5 points)

4. Explain why Shannon’s Law is one of the most important and most fundamental laws governing digital

communication. (5 points)

5. Wire media has been used for digital communication since the early days of Morse code, clear back in the

1860s. Explain why such an incredibly old medium has not been replaced by wireless or optical fiber. (5 points)

6. What are the two factors that must be considered when converting analog signals to digital signals? (5 points)

7. According to the Nyquist criterion, what is the highest frequency signal that can be adequately converted to

digital if sampling at a rate of 150 ksamples/sec? (5 points)

8. What is the step size (quantization error) of a signal of 85 kHz, a magnitude of 15 Vp-p, a phase of 0°, and

which is being converted to a digital signal using samples of 12 bits? (5 points)

9. Assume you want to send a signal from A to B, and must compress it before sending it. How would you choose

the type of compression you would use, whether lossy or lossless? (5 points)

10. Give one advantage and one disadvantage of CRC (cyclic redundancy checking). (5 points)

11. Give one advantage and one disadvantage of FEC (forward error correction). (5 points)

12. Explain why interleaving is always done when doing FEC. (5 points)

13. Explain why a packet sequence number (see Figure 6.10) is necessary. (5 points)

14. Many historians claim that the invention of the Internet, together with the World-Wide Web, constitute one of

the major wonders of the modern technical world. Explain why such a claim is quite reasonable. (5 points)

15. Explain why there are so many different wireless and industrial automation standards. (5 points)