BT0046 – Communication Technology (BSc IT)



August 2009, Semester 4

BT0046 – Communication Technology – 2 Credits (Book ID: B0025 & B0026)

Assignment Set – 1 (30 Marks)

Answer all questions 5 x 6 = 30

Book ID: B0025

1. What is bandwidth? What is the bandwidth of
a. Telephone signal
b. Commercial radio broadcasting
c. TV signal

2. Define and prove sampling theorem using frequency spectrum.

Book ID: B0026
3. Explain the concept of Path Clearance.
4. Explain Tropospheric Forward Scatter Systems.
5. Explain various light sources for Optical Fiber Communication.

1. What is bandwidth? What is the bandwidth of
a) Telephone signal


b) Commercial radio broadcasting
c) TV signal

Bandwidth is the span of frequencies within the spectrum occupied by a signal and used by the signal for conveying information. Bandwidth is an important parameter in communication; it depends on the type of signal or application, the amount of information to be communicated, and the time in which that information is to be communicated. To convey more information in a short time we need more bandwidth, while the same quantity of information can be sent over a longer period using less bandwidth. Likewise, a voice signal needs a certain bandwidth, while video requires considerably more, and so on.

Information transfer rate: the bit is the basic unit of information in a digital binary system. The speed at which information is transferred from one computer or terminal to another in a digital system is called the information transfer rate, or bit rate, and is measured in bits/sec. E.g., if 10 bits are transferred in 10 ms, the information transfer rate is 10 bits / 10 ms = 1000 bits/s, which equals 1 kbps.
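This arithmetic is easy to check; a minimal Python sketch, using the figures from the example above:

# Information transfer rate = bits transferred / time taken.
bits = 10        # bits transferred
seconds = 10e-3  # 10 ms

rate_bps = bits / seconds
print(f"{rate_bps:.0f} bits/s = {rate_bps / 1000:g} kbps")  # 1000 bits/s = 1 kbps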

a) Telephone signal

A POTS line (in the US and Europe) has a bandwidth of about 3 kHz. A normal POTS line passes frequencies between roughly 300 Hz and 3.4 kHz. The frequency response is limited by the telephone transmission system (the actual wire from the central office to your wall can usually carry much more). The signal starts to attenuate below 300 Hz because of the AC coupling of audio signals (the audio passes through capacitors and transformers). The high-frequency response is limited by the transformer and the available bandwidth in the telephone transmission system (in the digital telephone network the telephone audio is sampled at an 8 kHz sample rate).

Nowadays POTS is sharply band-limited because the line is almost always digitally sampled at 8 kHz at some point in the circuit. The absolute theoretical limit (with perfect filters) is therefore 4 kHz, but in practice the maximum frequency is about 3.4 kHz. The bass response is limited by the telephone system components: transformers and capacitors can be smaller if they don't have to handle the lowest frequencies. Another reason to drop the lowest frequencies is to keep possibly strong mains hum (50 or 60 Hz and its harmonics) out of the audio signal you hear.

Most current telephone systems are still restricted to the historically motivated bandwidth limitation of 0.3 to 3.4 kHz. This bandwidth ensures sufficient intelligibility of speech; the speech quality, however, suffers from the reduced bandwidth. One possible approach to reduce the quality loss is the application of an artificial bandwidth extension system at the receiver, which is proposed in this contribution. Based on a priori knowledge concerning the characteristics of speech signals, and without transmitting any extra information, the bandwidth is extended to the range from the lower hearing limit to 8 kHz. Because of the uniqueness of speech, an exact reconstruction of the spectrally unrestricted original signal is not possible. In fact, only the subjective impression of an enlarged bandwidth is possible and intended. In this contribution a new method for artificial bandwidth extension of telephone speech towards frequencies below and above the telephone band is proposed. It is based on a simple codebook approach which, on its own, does not ensure a satisfying quality of the processed signal. The proposed new method extends this approach with several optional components to reduce artifacts: time-smoothing, group-averaging, a more sophisticated classification approach including memory, and a separate technique for low-pass extension.

b) Commercial radio broadcasting

The commercial radio industry was very keen to make DAB radio happen, but it seemed that L-band spectrum would not provide a viable solution. The whole commercial radio industry was starting to realize that a digital radio platform was an essential part of its future, so Commercial Radio Australia, the peak industry body for commercial radio, took up the task of driving the project forward. It was understood that Band III is more suitable for DAB broadcasts and would be significantly cheaper to roll out. Attention turned to TV Channel 9A, a portion of Band III spectrum unused in the metro cities in Australia. TV Channel 9A is only 6 MHz wide, 1 MHz less than the other Band III TV channels. This made it unattractive to TV broadcasters, but it did provide enough space to broadcast three DAB ensembles. Setting up a radio trial broadcast in TV spectrum was no minor task. It was necessary to convince the regulator that such a trial would be in the interests of broadcasting and, more importantly, to convince TV broadcasters that such a broadcast would not interfere with their transmissions. These were difficult goals to achieve, as this was the first time a DAB broadcast had been trialed in TV spectrum anywhere in the world; but in 2002 a trial license was issued to enable test transmissions to start. The license was awarded to Digital Radio Broadcasting Australia Pty Ltd, a consortium of Sydney broadcasters comprising ten commercial radio stations together with the ABC and SBS. 5 kW ERP test transmissions were started in Sydney using the TXA tower at Willoughby in Band III in late 2003, and power was increased to 12.5 kW ERP the following year, with no reported interference to TV reception.

As well as trying to find a viable technical solution to DAB broadcasting, the commercial radio industry was also keen to lobby for an acceptable legislative framework. One of the key characteristics of DAB digital radio is that it combines the services of several broadcasters into a single ensemble, or multiplex, for transmission. The industry was very aware of developments in digital radio in Europe, where spectrum licenses had been granted to multiplex license holders, who


were then able to sell bandwidth to broadcasters. The Australian broadcasting industry was deeply concerned about this, and so pushed for a solution where the spectrum license was jointly owned by the broadcasters involved. The Sydney trial was the model for such a system: the license was applied for by Digital Radio Broadcasting Australia, which was jointly owned by all the broadcasters involved in the trial. This principle of broadcasters being entitled to a share in the ownership of the multiplex license was incorporated into the digital radio legislation.

One of the advantages of AM is that its unsophisticated signal can be detected (turned into sound) with simple equipment.

If a signal is strong enough, not even a power source is needed; building an unpowered crystal radio receiver was a common childhood project in the early years of radio. Another advantage of AM is that it uses a narrower bandwidth than FM. The limitation on AM fidelity comes from current receiver design. Moreover, to fit more transmitters on the AM broadcast band, in the United States the maximum transmitted audio bandwidth is limited to 10.2 kHz by an NRSC standard adopted by the FCC in June 1989, resulting in a channel occupied bandwidth of 20.4 kHz. The former audio limitation was 15 kHz, resulting in a channel occupied bandwidth of 30 kHz. Both of these standards are capable of broadcasting audio of significantly greater fidelity than standard AM delivers under current bandwidth limitations, with a theoretical frequency response of 0-16 kHz, in addition to stereo sound and text data.

c) TV signal

The bandwidth of a TV signal is determined by the number of picture elements (pixels) that must be sent per unit time. We start by assuming that the horizontal and vertical resolutions of the picture should be identical. The number of active lines per picture is 625 minus 2 x 25 (25 lines are lost per field due to field blanking; this period allows insertion of the field synchronization pulses and gives the receiver's vertical time base time to reset the scan to the top of the screen). So there are 575 active lines per picture. The maximum spatial frequency in the vertical direction corresponds to lines being alternately black and white: 575/2 = 287.5 cycles per picture height. The aspect ratio of conventional analogue TV is 4:3, so the maximum horizontal number of cycles is 287.5 x 4/3 = 383.33. The active line duration is 52 µs (see diagram), so this period has to accommodate 383.33 cycles. In one second there are therefore 383.33 / (52 x 10^-6) ≈ 7,371,795 cycles; in other words, the maximum possible temporal frequency is about 7.37 MHz.
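The 625-line arithmetic above is easy to verify numerically; a minimal Python sketch repeating the same steps:

# Bandwidth estimate for a 625-line analogue TV signal (as derived above).
total_lines = 625
blanking_lines = 2 * 25                              # 25 lines lost per field, 2 fields
active_lines = total_lines - blanking_lines          # 575

cycles_per_height = active_lines / 2                 # 287.5 black/white line pairs
aspect_ratio = 4 / 3
cycles_per_line = cycles_per_height * aspect_ratio   # ~383.33

active_line_duration = 52e-6                         # seconds
max_frequency = cycles_per_line / active_line_duration
print(f"Maximum video frequency ~ {max_frequency / 1e6:.2f} MHz")  # ~7.37 MHz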

A typical TV signal as described above requires 4 MHz of bandwidth. By the time you add in sound, something called a vestigial sideband and a little buffer space, a TV signal requires 6 MHz of bandwidth. Therefore, the FCC allocated three


bands of frequencies in the radio spectrum, chopped into 6-MHz slices, to accommodate TV channels:

54 to 88 MHz for channels 2 to 6
174 to 216 MHz for channels 7 through 13
470 to 890 MHz for UHF channels 14 through 83

The ratio of actual to theoretical horizontal resolution is called the Kell factor, after the engineer who defined it; for a range of different line standards it takes values around 0.75, and the figures for the 625-line system calculated above correspond to a Kell factor of 0.746. The reason the Kell factor is less than unity arises from the effective sampling of the picture in the vertical direction and the continuous nature of the process horizontally. The maximum vertical spatial frequency is limited because one spatial cycle requires two picture lines (corresponding to the Nyquist cut-off), whereas in the horizontal direction the system can transmit frequencies above the nominal cut-off (5.5 MHz), albeit with amplitude that falls off with increasing frequency.

The required bandwidth in television and other image scanning systems depends upon the rate of change of signal intensity along a line of the scanned image. The scanning rate in conventional systems is uniform, and the bandwidth then depends upon the maximum rate of change needed to achieve acceptable picture quality. In broadcast television, there is a high degree of correlation of the luminance signal from frame to frame. Nevertheless, camera movement and rapid changes of scene can reduce the inter-frame correlation appreciably. For teleconferencing and video telephone type scenes, where the camera is stationary and the movement of subjects rather limited, only a small fraction of the video samples change appreciably from frame to frame. Consequently, there is less frame-to-frame correlation in average scenes transmitted in broadcast TV than in video telephone or videoconference scenes. Measurements have also indicated that typical variations of signal intensity along a single scan tend to occur in bunches, with little variation over one interval followed by a jump in level to the next interval; during a typical interval, which usually exceeds 2% of the line duration, the signal intensity remains substantially unchanged.

Television transmission of a single, fixed scene may be achieved using a slow scan rate. In this case, the transmission bandwidth requirement would be small. However, a slow-scan, narrow-bandwidth system would be incapable of transmitting a changing scene without serious degradation of picture quality. The time required to transmit video signal information is inversely proportional to the rate of change of the signal intensity. Thus, various inventors have proposed transmission schemes in which slowly varying information would be transmitted at a rapid scanning rate while rapidly varying information would be transmitted at a slow rate. Several early attempts to implement TV systems incorporating variable-velocity scanning (VVS) produced disappointing results. These schemes were designed on the premise that the rate of change of the signal from a TV camera could be used to control its scanning velocity, thereby reducing the total bandwidth requirements. The bandwidth required to transmit the rate-of-change information, however, was greater than that of the TV camera output signal. Consequently, a greater bandwidth was actually required than would have been needed to transmit the TV signal itself using a uniform scanning velocity.

2. Define and prove sampling theorem using frequency spectrum.

The sampling theorem states that:

“Any signal which is continuous in time (analog) can be completely represented by its samples and can be recovered if the sampling frequency fs > 2fm, where fs is the sampling frequency and fm is the maximum frequency component of the signal.”

The definition of proper sampling is quite simple. Suppose you sample a continuous signal in some manner. If you can exactly reconstruct the analog signal from the samples, you must have done the sampling properly. Even if the sampled data appears confusing or incomplete, the key information has been captured if you can reverse the process. This result is frequently called the Shannon sampling theorem, or the Nyquist sampling theorem, after the authors of the classic papers on the topic. The sampling theorem indicates that a continuous signal can be properly sampled only if it does not contain frequency components above one-half of the sampling rate. For instance, a sampling rate of 2,000 samples/second requires the analog signal to be composed of frequencies below 1000 cycles/second. If frequencies above this limit are present in the signal, they will be aliased to frequencies between 0 and 1000 cycles/second, combining with whatever information was legitimately there.
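A short numerical illustration of aliasing; this is a minimal sketch, and the sampling rate and tone frequencies are chosen only for the example:

import numpy as np

# Demonstration of the sampling theorem: a tone above fs/2 aliases to a
# lower frequency, becoming indistinguishable from a legitimate component.
fs = 2000                      # sampling rate, samples/second
n = np.arange(32)              # sample indices
t = n / fs

f_ok = 400                     # below fs/2 = 1000 Hz: sampled properly
f_bad = 1600                   # above fs/2: aliases to |1600 - 2000| = 400 Hz

x_ok = np.cos(2 * np.pi * f_ok * t)
x_bad = np.cos(2 * np.pi * f_bad * t)

# The two sample sequences are identical: the 1600 Hz tone has aliased to 400 Hz.
print(np.allclose(x_ok, x_bad))    # True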

[Figure 1: frequency spectrum of the signal m(t); amplitude plotted against frequency, with content confined below the maximum frequency fm]


The figure shows the frequency spectrum of the signal m(t). A frequency spectrum is a plot of signal amplitude versus frequency. For sinusoidal signals the spectrum is a line spectrum, since each sinusoid is represented as a line at the corresponding frequency, the height of the line representing the maximum amplitude of the signal. For a band-limited signal, however, the spectrum is continuous and confined to frequencies below the maximum frequency fm.

A continuous-time signal x(t) whose spectral content is limited to frequencies smaller than Fb (i.e., it is band-limited to Fb) can be recovered from its sampled version x(n) = x(nT) if the sampling rate Fs = 1/T is such that Fs > 2Fb.

It is also clear how such recovery might be obtained: namely, by a linear reconstruction filter capable of eliminating the periodic images of the baseband introduced by the sampling operation. Ideally, such a filter applies no modification to frequency components below the Nyquist frequency, defined as FN = Fs/2, and eliminates the remaining frequency components completely. The reconstruction filter can be defined in the continuous-time domain by its impulse response, which is given by the function

h(t) = sinc(t/T) = sin(πt/T) / (πt/T)


Figure 2: sinc function, impulse response of the ideal reconstruction filter

Ideally, the reconstruction of the continuous-time signal from the sampled signal is performed in two steps. First, conversion from discrete to continuous time, by holding the signal constant in the interval between two adjacent sampling instants; this is achieved by a device called a holder, and the cascade of a sampler and a holder constitutes a sample-and-hold device. Second, the held signal is smoothed by the reconstruction (low-pass) filter described above, which removes the spectral images introduced by sampling.
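The ideal sinc reconstruction described above can also be sketched numerically. This is a minimal illustration that truncates the (in principle infinite) sum to a finite window of samples:

import numpy as np

# Ideal reconstruction: x(t) = sum_n x(nT) * sinc((t - nT)/T).
fs = 8000.0                      # sampling rate (Hz)
T = 1 / fs
f0 = 1000.0                      # test tone, well below fs/2

n = np.arange(-200, 200)         # finite window of samples (truncation error only)
samples = np.sin(2 * np.pi * f0 * n * T)

def reconstruct(t):
    """Sum of sinc-weighted samples; np.sinc(x) = sin(pi*x)/(pi*x)."""
    return np.sum(samples * np.sinc((t - n * T) / T))

# Evaluate between sampling instants and compare with the true signal.
t = 3.14e-4
print(reconstruct(t), np.sin(2 * np.pi * f0 * t))   # the values agree closely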

3. Explain the concept of Path Clearance?

Path clearance, an essential requirement for point-to-point communication systems, involves ensuring that there are no obstructions between the transmitting and receiving antennas or within the first Fresnel zone. In practice the direct ray path from the transmitting antenna to the receiving antenna has to pass above, below, or beside elevated structures such as buildings, trees and hills, which are capable of reflecting microwaves and thus producing a reflected wave. If these reflecting objects (including the ground) are not sufficiently removed from the direct ray path, the reflected wave tends to cancel the direct wave at the receiver. Therefore it is necessary to ensure that adequate path clearance exists.

The practice of microwave communication path design prior to installation has in the past been based on empirical clearance criteria over surveyed elevation profiles between tower sites, or on actual path testing employing temporary towers with variable antenna heights. An unpublished Bell System practice entitled "Microwave Path-Testing" describes these techniques in detail. Path clearance surveys have employed topographic maps, altimetry, theodolites, optical flashing, low-altitude radar profiling, or high-altitude photogrammetry to determine path elevations for calculation of static clearance criteria over obstructions. These survey methods each contain inherent limitations and potential hazards in portraying the actual path strip profile, and they all ignore the equally significant performance parameters of terrain reflectivity and atmospheric refractivity variations. These sporadic elevation surveys and static clearance designs have been generally adequate for non-reflective paths or microwave routes carrying traffic that tolerates moderate fading. Propagation reliability has been improved, when necessary, by frequency-diversity transmission or by space-diversity reception with empirically separated antennas. Arbitrary diversity design affords limited protection against structured fades produced by multi-surface reflections which vary with atmospheric refraction, but it is not the complete answer.

Path clearance is calculated as demonstrated below
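A minimal sketch of the first-Fresnel-zone clearance calculation conventionally used for this purpose; the hop length, frequency, and the 0.6 x F1 rule-of-thumb threshold are illustrative values, not from the text:

import math

def first_fresnel_radius(d1_km, d2_km, f_ghz):
    """Radius (m) of the first Fresnel zone at a point d1/d2 km from each end.

    F1 = sqrt(lambda * d1 * d2 / (d1 + d2)); with distances in km and the
    frequency in GHz this reduces to 17.32 * sqrt(d1*d2 / (f*(d1+d2))).
    """
    return 17.32 * math.sqrt(d1_km * d2_km / (f_ghz * (d1_km + d2_km)))

# Illustrative 40 km hop at 6 GHz with an obstacle at mid-path:
r = first_fresnel_radius(20, 20, 6.0)
print(f"First Fresnel zone radius ~ {r:.1f} m")   # ~22.4 m

# A common rule of thumb requires the direct ray to clear obstacles by
# at least 0.6 * F1 for near-free-space propagation.
print(f"Required clearance ~ {0.6 * r:.1f} m")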


4. Explain Tropospheric Forward Scatter Systems?

The troposphere is the lowest layer of the atmosphere; it refracts radio waves slightly, providing communication at distances somewhat beyond the visual line of sight. It also absorbs radiation at some frequencies. This type of propagation is known as tropospheric scatter propagation, also called "troposcatter" or "forward scatter propagation". It is responsible for the propagation of UHF signals beyond the horizon.

The prime advantage of tropospheric forward scatter systems, compared with line-of-sight microwave systems, is that they provide reliable communication over distances up to 1000 km or more without repeater stations. On the other hand, long-range tropospheric scatter systems require very large antennas and very high power transmitters.

Characteristics of tropospheric forward scatter systems


Here two directional antennas are pointed so that their beams intersect midway between them, above the horizon. Sufficient radio energy must be directed from the transmitting antenna (Tx) to the receiving antenna (Rx) to achieve usable communication, because only a small portion of the transmitted energy is scattered and only a small fraction of the scattered energy reaches the receiver.

There are two theories explaining troposcatter:

One suggests reflections from "blobs" in the atmosphere, similar to the scattering of a searchlight beam by dust particles.

The other assumes reflections from atmospheric layers as the cause of the troposcatter.

Troposcatter can give reliable communication over distances of about 80 km to 800 km at frequencies from 250 MHz to 5 GHz. However, the best frequencies for this type of propagation are centered on 0.9 GHz, 2 GHz and 5 GHz. Even though troposcatter propagation is subject to fading, it forms a very reliable method of over-the-horizon communication. It is not affected by the abnormal phenomena that affect HF sky-wave propagation to a great extent. Troposcatter propagation is most commonly used to provide long distance telephone and other communication links. It is especially used as an alternative to microwave links or coaxial cables over rough or inaccessible terrain. The best results are obtained if the antennas are elevated and then directed down towards the horizon. Space-diversity reception is commonly employed to minimize the effect of fading. Because the transmission loss is large, high-gain, narrow-beam antennas are needed for both transmitting and receiving, and the scattering angle must be kept as small as possible. The troposphere extends up to about 10 km, giving a maximum range of about 650 km; scatter from the stratosphere (the region between the troposphere and the ionosphere) extends the maximum range to about 1000 km.

Scatter loss

This is the loss in addition to the free-space loss of LOS transmission. It is statistical in nature and subject to two types of time variation, or fading.

Fast fading: Fast fading occurs if the channel impulse response changes rapidly within the symbol duration; in other words, when the coherence time of the channel TD is smaller than the symbol period of the transmitted signal T, i.e. TD < T. This causes frequency dispersion, or time-selective fading, due to Doppler spreading. Fast fading is caused by reflections from local objects and the motion of the receiver relative to those objects.


The received signal is the sum of a number of signals reflected from local surfaces, and these signals sum in a constructive or destructive manner depending on their relative phase shifts. The phase relationships depend on the speed of motion, the frequency of transmission, and the relative path lengths. To separate fast fading from slow fading, the envelope or magnitude of the received signal is averaged over a distance (e.g. 10 m); alternatively, a sliding window can be used.

Slow fading: Slow fading is the result of shadowing by buildings, mountains, hills, and other objects. The average within an individual small area also varies from one small area to the next in an apparently random manner. The variation of the average is frequently described in terms of average power in decibels (dB): Ui = 10 log(V²(xi)), where V is the voltage amplitude and the subscript i denotes different small areas. For small areas at approximately the same distance from the base station (BS), the distribution observed for Ui about its mean value E{U} is found to be close to the Gaussian distribution

p(Ui − E{U}) = (1 / (√(2π) σSF)) · exp(−(Ui − E{U})² / (2σSF²))


where σSF is the standard deviation, or local variability, of the shadow fading.
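A minimal simulation of this log-normal (Gaussian-in-dB) shadow fading model; the mean power and the 8 dB standard deviation are illustrative values, not from the text:

import numpy as np

# Log-normal shadow fading: the dB-valued local-mean power Ui is modelled
# as Gaussian about its mean E{U} with standard deviation sigma_SF.
rng = np.random.default_rng(0)

mean_db = -70.0      # illustrative mean received power, dBm
sigma_sf = 8.0       # illustrative shadow-fading standard deviation, dB

u = rng.normal(mean_db, sigma_sf, size=100_000)

# Fraction of small areas faded more than 10 dB below the mean:
print(f"P(fade > 10 dB) ~ {np.mean(u < mean_db - 10):.3f}")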


5. Explain various light sources for Optical Fiber Communication?

There are two different kinds of fiber transmission described here: singleplex and multiplex. Singleplex, slower and more dangerous, uses a highly concentrated laser that shoots straight through the glass. This is a lot more dangerous because the glass has a magnifying effect, and if you look at the laser you will permanently scar your retina (be careful: it cannot be seen while on). Multiplex is a little different: it bounces the light off the glass instead of shooting straight through, so it works faster (because you can send multiple pulses at a time); the light is not as strong, so looking at it is harmless and it appears as a harmless red light.

An optical fiber bundle comprises: a plurality of optical fiber light sources; a plurality of optical fibers bundled on both an input terminal side thereof and an output terminal side thereof, said optical fibers receiving light from said input terminal side and outputting said light to said output terminal side; and a connecting member provided for said optical fiber bundle; wherein said optical fibers are divided on said input terminal side, individually or into a plurality of groups in accordance with output terminal side positions of said optical fibers, said optical fiber bundle is arranged to adjust said light received from said input terminal side for each of said optical fibers or for each of said groups, and said optical fiber light sources are provided for each of said optical fibers or each of said groups. The optical fiber bundle according to claim 1, wherein each of said optical fibers includes a connecting member. The optical fiber bundle according to claim 1, wherein said optical fibers are divided into a plurality of groups in accordance with output terminal side positions thereof, and each of said groups includes a connecting member.

The optical fiber bundle according to claim 1, wherein each of said optical fibers or each of said groups includes a light intensity adjusting member. The optical fiber bundle according to claim 1, wherein said optical fibers include an optical fiber for detecting said light on said output terminal side of said optical fiber bundle. A light source device comprising: an optical fiber light source; and an optical fiber bundle for receiving light from said optical fiber light source on an input terminal side thereof and outputting said light to an output terminal side thereof, said optical fiber bundle including a plurality of optical fibers bundled in a desired shape on both said input terminal side thereof and said output terminal side thereof, wherein said optical fibers are divided on said input terminal side thereof, individually or into a plurality of groups in accordance with output terminal side positions of said optical


fibers, and said optical fiber bundle is arranged to adjust a light intensity of said light received from said optical fiber light source on said input terminal side for each of said optical fibers or for each of said groups. The light source device according to claim 6, wherein each of said optical fibers includes a connecting member, and is connected to said optical fiber light source through said connecting member. The light source device according to claim 6, wherein said optical fibers are divided into a plurality of groups in accordance with output terminal side positions thereof, each of said groups includes a connecting member, and is connected to said optical fiber light source through said connecting member. The light source device according to claim 6, wherein said optical fibers are connected to said optical fiber light source through a light intensity adjusting member provided for each of said optical fibers or each of said groups. A method of manufacturing a light source device, comprising the steps of: bundling a plurality of optical fibers to form an optical fiber bundle; irradiating light from an input terminal side of said optical fiber bundle; detecting a light intensity and light distribution pattern on an output terminal side of said optical fiber bundle; calculating a light intensity of an optical fiber light source for each of said optical fibers on the basis of a detection result in order to obtain a desired output on said output terminal side; and connecting said optical fiber light source to said optical fiber bundle on the basis of a calculation result. A method of manufacturing a light source device according to claim 10, further comprising the step of adjusting said optical fiber light source to make said light intensity distribution uniform.


August 2009, Semester 4

BT0046 – Communication Technology – 2 Credits (Book ID: B0025 & B0026)

Assignment Set – 2 (30 Marks)

Answer all questions 5 x 6 = 30

Book ID: B0025

1. Briefly explain different layers of digital communication.
2. Explain PCM with a suitable block diagram.
3. What are the different signaling formats? Explain with waveforms.

Book ID: B0026

4. Explain LOS Propagation on Flat Earth.
5. Write notes on Satellite Links.

1. Briefly explain different layers of digital communication?


Communication takes place at multiple levels or layers. In communication between people, there are rules to follow to ensure that each person has a chance to speak, to interrupt, and to finish; this is called the protocol of communication. Communications systems also have protocols that specifically define how the communication is to start, finish, and recover from problems (such as those due to noise or equipment failure), how the receiver is to indicate whether a message was received properly and without error, and what to do if an error is detected. The next-to-highest level is the coding, which defines how the data will be represented and transmitted with specific signal values. One level below is the format, which is responsible for adding additional information about the message, such as who it is for, how long it is, and where it ends. Format also provides framing and additional information that helps the receiver determine whether the message, as received, contains any errors. At the lowest level are the specific voltages (or currents or frequencies) used in modulation to represent the digital information. There are four layers of digital communication:

Protocol - In the field of telecommunications, a communications protocol is the set of standard rules for data representation, signaling, authentication and error detection required to send information over a communications channel. An example of a simple communications protocol adapted to voice communication is the case of a radio dispatcher talking to mobile stations. Communication protocols for digital computer network communication have features intended to ensure reliable interchange of data over an imperfect communication channel. Communication protocol is basically following certain rules so that the system works properly.

Coding - In digital communications, a channel code is a broadly used term mostly referring to the forward error correction code and bit interleaving in communication and storage where the communication media or storage media is viewed as a channel. The channel code is used to protect data sent over it for storage or retrieval even in the presence of noise (errors).

Sometimes channel coding also refers to other physical layer issues such as digital modulation, line coding, clock recovery, pulse shaping, channel equalization, bit synchronization, training sequences, etc.

Channel coding is distinguished from source coding, i.e., digitalization of analog message signals and data compression.

Format – is responsible for adding additional information about the message, such as who it is for, how long it is, and where it ends. Format also provides framing and additional information that helps the receiver determine whether the message, as received, contains any errors.


Modulation - Modulation is the process of transforming a message signal to make it easier to work with. It usually involves varying one waveform in relation to another waveform. In telecommunications, modulation is used to convey a message, or a musician may modulate the tone from a musical instrument by varying its volume, timing and pitch. In radio communications for instance, electrical signals are best received when the transmitter and receiver are tuned to resonance. Therefore keeping the frequency content of the message signal as close as possible to the resonant frequency of the two is ideal. Often a high-frequency sinusoid waveform is used as carrier signal to convey a lower frequency signal. The three key parameters of a sine wave are its amplitude ("volume"), its phase ("timing") and its frequency ("pitch"), all of which can be modified in accordance with a low frequency information signal to obtain the modulated signal.

A device that performs modulation is known as a modulator and a device that performs the inverse operation of modulation is known as a demodulator (sometimes detector or demod). A device that can do both operations is a modem (short for "Modulator-Demodulator").

Protocol
   ↓
Coding
   ↓
Format
   ↓
Modulation
   ↓
To physical interface
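As a toy illustration of what the format layer adds, the sketch below wraps a payload with a length field for framing and a checksum for error detection. The frame layout is invented for illustration only and does not correspond to any particular standard:

# Toy "format" layer: a length field delimits the message, a checksum
# lets the receiver detect corruption.
def frame(payload: bytes) -> bytes:
    checksum = sum(payload) % 256
    return bytes([len(payload)]) + payload + bytes([checksum])

def deframe(frame_bytes: bytes) -> bytes:
    length = frame_bytes[0]
    payload, checksum = frame_bytes[1:1 + length], frame_bytes[1 + length]
    if sum(payload) % 256 != checksum:
        raise ValueError("checksum mismatch: frame corrupted")
    return payload

msg = b"HELLO"
print(deframe(frame(msg)) == msg)   # True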

2. Explain PCM with a suitable block diagram?


Pulse code modulation (PCM) is a digital scheme for transmitting analog data. The signals in PCM are binary; that is, there are only two possible states, represented by logic 1 (high) and logic 0 (low). This is true no matter how complex the analog waveform happens to be. Using PCM, it is possible to digitize all forms of analog data, including full-motion video, voice, music, telemetry, and virtual reality (VR). Pulse-code modulation is a digital representation of an analog signal in which the magnitude of the signal is sampled regularly at uniform intervals, then quantized to a series of symbols in a numeric (usually binary) code. PCM has been widely used in digital telephone systems.

Consider a sine wave sampled and quantized for PCM. The sine wave is sampled at regular intervals. For each sample, one of the available quantization values is chosen by some algorithm; usually the floor function is used. This produces a fully discrete representation of the input signal that can be easily encoded as digital data for storage or manipulation. To produce output from the sampled data, the procedure is applied in reverse: after each sampling period has passed, the next value is read and the output signal is shifted to the new value. As a result of these transitions, the signal will have a significant amount of high-frequency energy; to smooth the output and remove these undesirable aliasing frequencies, the signal is passed through analog filters that suppress energy outside the expected frequency range.

To obtain PCM from an analog waveform at the source (transmitter end) of a communications circuit, the analog signal amplitude is sampled (measured) at regular time intervals. The sampling rate, or number of samples per second, is several times the maximum frequency of the analog waveform in cycles per second or hertz. The instantaneous amplitude of the analog signal at each sampling instant is rounded off to the nearest of several specific, predetermined levels; this process is called quantization. The number of levels is always a power of 2, for example 8, 16, 32, or 64. These numbers can be represented by three, four, five, or six binary digits (bits) respectively. The output of a pulse code modulator is thus a series of binary numbers, each represented by a fixed number of bits. At the destination (receiver end) of the communications circuit, a pulse code demodulator converts the binary numbers back into pulses having the same quantum levels as those in the modulator. These pulses are further processed to restore the original analog waveform. A basic voice capacity figure can be derived by determining the maximum theoretical number of codec packets that can be transmitted per second: if a data frame can be transmitted in 439 microseconds, then 2,277 packets can be transmitted per second; if the network uses 50 frames per second for each voice stream and a standard (two-way) call requires 100 frames per second, the maximum theoretical capacity is about 22 simultaneous calls.

Pulse code modulation is essentially analog-to-digital conversion of a special type, where the information contained in the instantaneous samples of an analog signal is represented by digital words or codes in a serial bit stream. A simple PCM system is as shown in the figure. The analog signal m(t) is sampled using a sample-and-hold circuit. The samples are held for the quantizer until the next sample so as to avoid the aliasing effect. A quantizer then quantizes the samples; the output of the quantizer will be one of the allowed levels, unlike the sampled output, which may take any voltage value within the upper and lower limits. The quantized voltage is converted by the encoder into a uniquely identifiable binary code representing the quantized value.
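A minimal sketch of this sample, quantize, and encode chain, using 4-bit quantization of a 1 kHz tone; all parameters are illustrative:

import numpy as np

# Minimal PCM sketch: sample a sine wave, quantize to 2^n levels,
# and encode each sample as an n-bit binary code word.
fs = 8000            # sampling rate, Hz
n_bits = 4           # 16 quantization levels
levels = 2 ** n_bits

t = np.arange(0, 1e-3, 1 / fs)            # 1 ms of signal (8 samples)
x = np.sin(2 * np.pi * 1000 * t)          # 1 kHz tone, amplitude 1

# Map [-1, 1] onto integer codes 0..15 using the floor function.
codes = np.clip(np.floor((x + 1) / 2 * levels), 0, levels - 1).astype(int)
bitstream = "".join(f"{c:0{n_bits}b}" for c in codes)

# Decoder: map each code back to the centre of its quantization interval.
decoded = (codes + 0.5) / levels * 2 - 1

print(bitstream)
print("max quantization error:", np.max(np.abs(x - decoded)))  # half a step, 0.0625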

The PCM demodulator will reproduce the correct standard amplitude represented by the pulse-code group, provided it is able to recognize correctly the presence or absence of pulses in each position. For this reason, noise introduces no error at all if the signal-to-noise ratio is such that the largest peaks of noise are not mistaken for pulses. When the noise is random (circuit and tube noise), the probability of the appearance of a noise peak comparable in amplitude to the pulses can be determined mathematically for any ratio of signal power to average noise power. When this is done for 10^5 pulses per second, the approximate error rates for three values of signal power to average noise power are:

17 dB - 10 errors per second
20 dB - 1 error every 20 minutes
22 dB - 1 error every 2,000 hours


Above a threshold signal-to-noise ratio of approximately 20 dB, virtually no errors occur. In all other systems of modulation, even with signal-to-noise ratios as high as 60 dB, the noise will have some effect. Moreover, the PCM signal can be retransmitted, as in a multiple relay link system, as many times as desired without the introduction of additional noise effects; that is, noise is not cumulative at relay stations as it is with other modulation systems. The system does, of course, have some distortion introduced by quantizing the signal: both the standard values selected and the sampling intervals tend to make the reconstructed wave depart from the original. This distortion, called quantizing noise, is introduced at the quantizing and coding modulator and remains fixed throughout the transmission and retransmission processes. Its magnitude can be reduced by making the standard quantizing levels closer together. The relationship of the quantizing noise to the number of digits in the binary code is given by the standard relationship

S/N (dB) ≈ 10.8 + 20 log10(2^n)

where:

n is the number of digits in the binary code

Thus, with the 4-digit code of figures 2-50 and 2-51, the quantizing noise will be about 35 dB weaker than the peak signal which the channel will accommodate.

The advantages of PCM are two-fold. First, noise interference is almost completely eliminated when the pulse signals exceed noise levels by a value of 20 dB or more. Second, the signal may be received and retransmitted as many times as desired without introducing distortion into the signal.

3. What are the different signaling formats? Explain with waveforms?

In commercial telephony, along with the speech information, some additional information regarding the initiation and termination of the call, the address of the calling party, etc. also has to be transmitted. This is called signaling. When analog transmission is employed, the signaling information is communicated over a separate channel. In the T1 digital system a process of bit-slot sharing is used to convey the signaling information. In this method the first five samples are encoded as eight-bit codes while the sixth is encoded as a seven-bit code, and its eighth bit (the least significant bit) is used for sending the signaling information. This pattern is repeated every six frames. Thus in six frames the number of bits used for encoding the samples is 5 x 8 + 1 x 7 = 47, so the samples are encoded on average into 47/6 = 7 5/6 bits. The frequency of the bits used for signaling is 1/6th of the frame rate.

That is, fb(T1) signaling = (1/6) x 8000 ≈ 1333 Hz. This type of signaling is called channel-associated signaling.
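The bit-slot-sharing arithmetic above, restated as a sketch:

# Robbed-bit signaling arithmetic for the T1 scheme described above.
frames_per_group = 6
data_bits = 5 * 8 + 1 * 7                       # five 8-bit samples + one 7-bit = 47

avg_bits = data_bits / frames_per_group         # 47/6 ~ 7.833 bits per sample
frame_rate = 8000                               # frames per second
signaling_rate = frame_rate / frames_per_group  # one robbed bit every 6 frames

print(avg_bits, signaling_rate)                 # 7.83... bits, ~1333 bits/s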


Vocoder

A vocoder (a combination of the words voice and encoder) is an analysis/synthesis system, mostly used for speech. In the encoder, the input is passed through a multiband filter, each band is passed through an envelope follower, and the control signals from the envelope followers are communicated to the decoder. The decoder applies these (amplitude) control signals to corresponding filters in the (re)synthesizer. It was originally developed as a speech coder for telecommunications applications in the 1930s, the idea being to code speech for transmission. Its primary use in this fashion is for secure radio communication, where the voice has to be encrypted and then transmitted. The advantage of this method of "encryption" is that no 'signal' is sent, but rather envelopes of the band-pass filters. The receiving unit needs to be set up in the same channel configuration to re-synthesize a version of the original signal spectrum. The vocoder, as both hardware and software, has also been used extensively as an electronic musical instrument.

Digital speech coders can be classified into two categories: waveform coders and vocoders (voice coders). Waveform coders use algorithms to encode and decode so that the system output is an approximation of the input waveform. Systems like PCM and DPCM are examples of waveform coders. The main advantage of waveform coders is the high quality of the reproduced signal, but they require relatively high bit rates. An alternative encoding scheme, which operates at significantly lower bit rates, is the vocoder. Typical vocoder bit rates are in the range of 1.2 to 2.4 kb/s (in contrast with the bit rate of 64 kb/s for voice signals encoded as 8-bit PCM). Vocoders encode speech signals by extracting a set of parameters. These parameters are digitized and transmitted to the receiver, where they are used to set values for the parameters in function generators and filters which, in turn, synthesize the output speech sound. The people who developed vocoders studied the physiology of the vocal cords, the larynx, the throat, the mouth and the nasal passages, all of which have a bearing on speech generation. They also studied the physiology of the ear and the manner in which the brain interprets the sounds heard.

Voice model

Speech can be very well approximated as a sequence of voiced and unvoiced sounds passed through a filter. The voiced sounds are those generated by the vibrations of the vocal cords.


[Figure: generalized vocoder model. An impulse generator (representing voiced sounds) and a noise source (representing unvoiced sounds) feed, via a switch whose position is determined by whether the sound is voiced or unvoiced, a filter representing the effect of the mouth, throat and nasal passages; the output is a synthesized approximation to the speech waveform.]

The unvoiced sounds are those generated when a speaker pronounces such letters as "s", "f", "p", etc.; they are formed by expelling air through the lips and teeth. A generalized representation of a vocoder is shown above. The filter represents the effect of the mouth, throat and nasal passages on the generated sounds. In the vocoder, an impulse generator simulates the voiced sounds, whose frequency is the fundamental frequency of vibration of the vocal cords. The unvoiced sounds are simulated by a noise source. All vocoders employ the scheme shown above to generate a synthesized approximation to speech waveforms; they differ only in the techniques employed to generate the voiced and unvoiced sounds and in the characteristics and design of the filter.
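A minimal numerical sketch of this voice model: a voiced (impulse-train) or unvoiced (noise) excitation passed through a filter standing in for the vocal tract. The pitch, resonance and filter coefficients are illustrative, not derived from real speech:

import numpy as np
from scipy.signal import lfilter

fs = 8000
n = 800                                   # 100 ms of samples

# Voiced excitation: impulse train at a 100 Hz pitch.
voiced = np.zeros(n)
voiced[:: fs // 100] = 1.0

# Unvoiced excitation: white noise.
unvoiced = np.random.default_rng(0).standard_normal(n)

# Crude single-resonance "vocal tract": a two-pole resonator near 500 Hz.
r, f0 = 0.95, 500.0
a = [1, -2 * r * np.cos(2 * np.pi * f0 / fs), r * r]

voiced_speech = lfilter([1.0], a, voiced)
unvoiced_speech = lfilter([1.0], a, unvoiced)
print(voiced_speech[:5], unvoiced_speech[:5])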

4. Explain LOS Propagation on Flat Earth?

Line-of-sight propagation refers to electromagnetic radiation or acoustic wave propagation that travels in a straight line. The rays or waves may be diffracted, refracted, reflected, or absorbed by the atmosphere and by material obstructions, and generally cannot travel over the horizon or behind obstacles.

Radio signals, like all electromagnetic radiation including light, travel in straight lines. At low frequencies (below approximately 2 MHz or so) these signals travel as ground waves, which follow the Earth's curvature due to diffraction in the layers of the atmosphere. This enables AM radio signals in low-noise environments to be received well after the transmitting antenna has dropped below the horizon. Additionally, frequencies between approximately 1 and 30 MHz can be reflected by the ionosphere's F1/F2 layers, giving radio transmissions in this range a potentially global reach (see shortwave radio), again along multiply deflected straight lines. The effects of multiple diffraction or reflection lead to macroscopically "quasi-curved" paths.

However, at higher frequencies and in the lower levels of the atmosphere, neither of these effects applies. Thus any obstruction between the transmitting antenna and the receiving antenna will block the signal, just as it would block light. Therefore, since the ability to visually sight a transmitting antenna (within the limitations of the eye's resolution) roughly corresponds to the ability to receive a signal from it, the propagation characteristic of high-frequency radio is called "line-of-sight". The farthest possible point of propagation is referred to as the "radio horizon". In practice, the propagation characteristics of these radio waves vary substantially depending on the exact frequency and the strength of the transmitted signal (a function of both the transmitter and the antenna characteristics). Broadcast FM radio, at comparatively low frequencies of around 100 MHz, easily propagates through buildings and forests.
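The radio horizon itself can be estimated from antenna height; a minimal sketch using the standard 4/3-earth refraction factor, with illustrative antenna heights:

import math

def radio_horizon_km(antenna_height_m, k=4/3):
    """Distance to the radio horizon for an antenna of given height.

    d = sqrt(2 * k * R * h); with Earth radius R ~ 6371 km and the
    standard k = 4/3 refraction factor this is ~ 4.12 * sqrt(h) km.
    """
    earth_radius_km = 6371.0
    return math.sqrt(2 * k * earth_radius_km * antenna_height_m / 1000.0)

# Illustrative: a 30 m mast sees a radio horizon of roughly 22.6 km.
print(f"{radio_horizon_km(30):.1f} km")

# The LOS range between two antennas is the sum of their horizons.
print(f"{radio_horizon_km(30) + radio_horizon_km(10):.1f} km")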

5. Write notes on Satellite Links?

A satellite link comprises two parts: the uplink and the downlink.

First, consider the uplink. The earth station transmits a signal. This signal comes from the transmitter, which may be a solid-state power amplifier (SSPA) or a travelling wave tube amplifier (TWTA). Most commonly, VSAT terminals have solid-state power amplifiers mounted at the dish, as close to the feed as possible, to minimise waveguide attenuation losses. These dish-mounted units are often block up-converters (BUCs) or Transmit Receive Integrated Assemblies (TRIAs), which change the frequency of the signals from L band (in the cross-site inter-facility link (IFL) cable) to the microwave frequency used for transmission (C band, Ku band or Ka band). BUCs have a rated output power, such as 2 watts for single-carrier operation or 0.5 watts for multi-carrier operation. For ease of calculation, power in watts is converted to dBW as 10 x log10(power in watts), so a 2 watt BUC has a single-carrier output power capability of +3 dBW (2 watts) or, for multi-carrier operation, -6 dBW (0.25 watts) per carrier for each of two equal-power carriers.

The output power of the BUC is fed to the dish, which concentrates the power in the direction of the satellite rather than allowing it to be radiated evenly in all directions. This characteristic of the antenna is called gain, measured in dBi, meaning gain relative to an isotropic, omni-directional antenna. The combination of BUC power and satellite dish gain produces the equivalent isotropic radiated power (EIRP); for example, a 2 watt (+3 dBW) BUC plus 40 dBi of antenna gain produces 43 dBW EIRP. The transmit EIRP of the earth station may be achieved with a variety of BUC powers and dish sizes: a large dish with a low-power BUC can produce the same EIRP as a small dish with a high-power BUC. There are limiting considerations: small dishes may cause unacceptable interference to adjacent satellites. To minimize cost, choose a larger dish plus a lower-power BUC, taking account of the cost of the electricity used. Find the distance to the satellite, as this gives the spreading loss in the uplink; distances between approximately 35,860 km (sub-satellite point) and 41,756 km (edge of visibility) apply for geostationary satellites.
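The watts-to-dBW conversion and the EIRP addition described above, as a sketch; the 40 dBi gain is the example figure from the text:

import math

def watts_to_dbw(p_watts):
    # 10 * log10(power in watts)
    return 10 * math.log10(p_watts)

buc_power_dbw = watts_to_dbw(2.0)     # +3.0 dBW, as in the text
antenna_gain_dbi = 40.0               # illustrative dish gain

eirp_dbw = buc_power_dbw + antenna_gain_dbi
print(f"EIRP = {eirp_dbw:.1f} dBW")   # 43.0 dBW, matching the example above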

The satellite receive beam will have a G/T value for the direction of your earth station. Review the uplink beam coverage map and determine the satellite receive G/T in the direction of your site; values like -8 to +10 dBK are typical. Broad, earth-coverage global beams have the lowest G/T; their beamwidth is approximately 17.5 degrees, which is what the earth looks like from a geostationary orbit position. Spot beams (say 1 degree in diameter) have the highest uplink G/T.

C/Nup = earth station EIRP - path loss + satellite G/T - 10 log(bandwidth in Hz) + 228.6 dB

Go to the link budget calculator and play with some numbers. The EIRP you can transmit can be varied by changing the BUC power and dish size, and as a result the uplink C/N will vary. You obviously need a decent uplink C/N (say more than 10 or 20 dB), but once it is adequate, how do you decide what EIRP is correct? Note how the link budget calculator tells you the uplink power flux density you are producing at the satellite; write this figure down. You then need to consider the required power flux density into the satellite.

If you were transmitting a single large 36 MHz satellite TV carrier and aiming to saturate the transponder, you would need to produce the PFDsat for the transponder. The satellite uplink beam pattern will have contours specifying both G/T and PFDsat. Read off the PFDsat for your site; this tells you the PFD you need to produce for single-carrier, full-transponder operation. You can ask the satellite operator to adjust the satellite transponder gain, and thus PFDsat, by setting attenuator switches on the satellite. This allows you to trade off earth station costs, convenience and quality: higher gain might be attractive if your uplink were a mobile TV uplink truck or if you were having problems producing enough uplink power, but the penalty is lower uplink C/N and greater susceptibility to uplink interference.

For single-carrier, whole-transponder operation: PFD required = PFDsat.

The satellite operator will normally have several nominal transponder gain settings, e.g. low gain for multi-carrier operation amongst large dishes, medium gain for single-carrier operation, and high gain for multi-carrier VSAT return links.

If you were transmitting a small carrier into a multi-carrier transponder, you need to do the following calculation as a starting point. Note that the satellite will have a PFDsat and an input back-off specified (e.g. 6 dB input back-off for multi-carrier operation); note also your carrier bandwidth and the transponder bandwidth. It is assumed here that you want your fair share of the satellite power, proportional to your bandwidth. This is a good starting point, but you may prefer to have exactly your fair share of the power (and pay the normal amount) or more power (and pay more), depending on your dish sizes. As a rule it is better to spend more on larger dishes and reduce your space segment costs.

For multi-carrier operation: PFD required = PFDsat - transponder input back-off - 10 x log(your carrier bandwidth / total transponder bandwidth).

Satellite links: the downlink

The downlink EIRP from the satellite is either: for single-carrier, whole-transponder operation, satellite downlink carrier EIRP = the EIRP shown on the downlink beam contour; or, for multi-carrier operation, satellite downlink carrier EIRP = EIRP (as per beam contour) - transponder output back-off - 10 x log(your carrier bandwidth / transponder bandwidth).

Consider the downlink receive earth station. It has a dish diameter, a receive frequency and a system noise temperature; put these together and you get the receive earth station G/T. The equation is: earth station G/T = antenna gain - 10 log(system noise temperature).

Now use the link budget equation for satellite links:

C/Ndown = satellite downlink EIRP - path loss + earth station G/T - 10 log(bandwidth in Hz) + 228.6 dB
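The text above refers to a link budget calculator; a minimal sketch of such a calculation using the C/N equation quoted above. All input numbers are illustrative, and the helper names are mine:

import math

def path_loss_db(distance_km, freq_ghz):
    """Free-space spreading loss: 92.45 + 20 log10(d_km) + 20 log10(f_GHz)."""
    return 92.45 + 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz)

def carrier_to_noise_db(eirp_dbw, loss_db, g_over_t_dbk, bandwidth_hz):
    """C/N = EIRP - path loss + G/T - 10 log10(B) + 228.6 (Boltzmann's constant in dB)."""
    return eirp_dbw - loss_db + g_over_t_dbk - 10 * math.log10(bandwidth_hz) + 228.6

# Illustrative uplink: 50 dBW EIRP, 14 GHz, sub-satellite distance, +2 dBK G/T, 2 MHz carrier.
loss = path_loss_db(35_860, 14.0)
cn_up = carrier_to_noise_db(50.0, loss, 2.0, 2e6)
print(f"Path loss = {loss:.1f} dB, uplink C/N = {cn_up:.1f} dB")

def combine_cn_db(cn1, cn2):
    """Combine uplink and downlink C/N (power addition of the noise contributions)."""
    return -10 * math.log10(10 ** (-cn1 / 10) + 10 ** (-cn2 / 10))

print(f"Total C/N = {combine_cn_db(cn_up, 14.0):.1f} dB")   # with an assumed 14 dB downlink C/N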

Satellite links: miscellaneous noise entry factors

Earth station intermodulation noise: if you are operating a multi-carrier BUC, put in say 30 dB.

Uplink interference from other earth stations pointed at nearby satellites: if yours is a low power spectral density uplink, put 25 dB, otherwise 30 dB.

Uplink interference from multiple beams on the same satellite: if any, put 30 dB.

Uplink cross-polar interference: put in 30 dB; if you can't trust the installers and NOC staff, put in 25 dB.

Transponder intermodulation: if multi-carrier, put in 21 dB.

Downlink interference from other nearby satellites: if yours is a low power spectral density carrier, put 25 dB, otherwise 30 dB.

Downlink interference from multiple beams on the same satellite: if any, put 30 dB.

Downlink cross-polar interference: put in 30 dB; if you can't trust the installers, put in 25 dB.