
Upload: binod-kumar-mahto

Post on 04-Apr-2015


DD-1 Training Report

INDUSTRIAL TRAINING REPORT

INTRODUCTION TO DOORDARSHAN

Doordarshan
Type: Broadcast television network
Country: India
Availability: National
Owner: Prasar Bharati
Key people: …………….. (CEO)
Launch date: 1959
Past names: All India Radio
Website: http://www.ddindia.gov.in/

Doordarshan is a public terrestrial broadcast television channel run by Prasar Bharati, a board nominated by the Government of India. It is one of the largest broadcasting organisations in the world in terms of its infrastructure of studios and transmitters. Recently it has also started digital terrestrial transmitters.

Beginning

Doordarshan had a modest beginning, with experimental telecasts starting in Delhi in September 1959 using a small transmitter and a makeshift studio. Regular daily transmission started in 1965 as a part of All India Radio. The television service was extended to Mumbai (then Bombay) and Amritsar in 1972. Until 1975, only seven Indian cities had television service, and Doordarshan remained the only television channel in India. Television services were separated from radio in 1976. Each office of All India Radio and


Doordarshan was placed under the management of two separate Directors General in New Delhi. Doordarshan thus came into existence as a national broadcaster.

The Historical Development of Television

The history of television technology can be divided along two lines: those developments that depended upon both mechanical and electronic principles, and those which are purely electronic. From the latter descended all modern televisions, but these would not have been possible without discoveries and insights from the mechanical systems.

The word television is a hybrid word, created from both Greek and Latin. Tele- is Greek for "far", while -vision is from the Latin visio, meaning "vision" or "sight". It is often abbreviated as TV or the telly.

Electromechanical television

The origins of what would become today's television system can be traced back to the discovery of the photoconductivity of the element selenium by Willoughby Smith in 1873, and the invention of a scanning disk by Paul Gottlieb Nipkow in 1884.

The German student Paul Nipkow proposed and patented the first electromechanical television system in 1884. Nipkow's spinning-disk design is credited with being the first television image rasterizer. Constantin Perskyi coined the word television in a paper read to the International Electricity Congress at the International World Fair in Paris on August 25, 1900.


However, it wasn't until 1907 that developments in amplification tube technology made the design practical. The first demonstration of the instantaneous transmission of still images was by Georges Rignoux and A. Fournier in Paris in 1909, using a rotating mirror-drum as the scanner, and a matrix of 64 selenium cells as the receiver.

In 1911, Boris Rosing and his student Vladimir Kosma Zworykin created a television system that used a mechanical mirror-drum scanner to transmit, in Zworykin's words, "very crude images" over wires to the electronic Braun tube (cathode ray tube) in the receiver. Moving images were not possible because, in the scanner, "the sensitivity was not enough and the selenium cell was very laggy."

In 1927 Baird transmitted a signal over 438 miles of telephone line between London and Glasgow. In 1928 Baird's company (Baird Television Development Company / Cinema Television) broadcast the first transatlantic television signal, between London and New York, and made the first shore-to-ship transmission. He also demonstrated electromechanical color, infrared (dubbed "Noctovision"), and stereoscopic television, using additional lenses, disks and filters. In parallel he developed a video disk recording system dubbed "Phonovision"; a number of the Phonovision recordings, dating back to 1927, still exist. In 1929 he became involved in the first experimental electromechanical television service in Germany. In 1931 he made the first live transmission, of the Epsom Derby. In 1932 he demonstrated ultra-short-wave television. Baird's electromechanical system reached a peak of 240 lines of resolution on BBC television broadcasts in 1936, before being discontinued in favor of a 405-line all-electronic system.

Electronic television

In 1911, engineer Alan Archibald Campbell-Swinton gave a speech in London, reported in The Times, describing in great detail how distant electric vision could be achieved by using cathode ray tubes at both the transmitting and receiving ends. The speech, which expanded on a letter he wrote to the journal Nature in 1908, was the first iteration of the electronic television method that is still used today. Others had already experimented with using a cathode ray tube as a receiver, but the concept of using one as a transmitter was novel. By the late 1920s, when electromechanical television was still being introduced, inventors Philo Farnsworth and Vladimir Zworykin were already working separately on versions of all-electronic transmitting tubes.

The decisive solution — television operating on the basis of continuous electron emission with accumulation and storage of released secondary electrons during the entire scansion cycle — was first described by the Hungarian inventor Kálmán Tihanyi in 1926, with further refined versions in 1928.

On September 7, 1927, Philo Farnsworth's Image Dissector camera tube transmitted its first image, a simple straight line, at his laboratory at 202 Green Street in San Francisco.


By 1928, Farnsworth had developed the system sufficiently to hold a demonstration for the press, televising a motion picture film. In 1929 the system was further improved by elimination of a motor-generator, so that the television system now had no mechanical moving parts. That year, Farnsworth transmitted the first live human images with his system, including a three-and-a-half-inch image of his wife Pem with her eyes closed (possibly due to the bright lighting required).

In Britain, Isaac Shoenberg used Zworykin's idea to develop Marconi-EMI's own Emitron tube, which formed the heart of the cameras they designed for the BBC. Using this tube, a 405-line service was started on November 2, 1936 from studios at Alexandra Palace and transmitted from a specially built mast atop one of the Victorian building's towers. It alternated for a short time with Baird's mechanical system in adjoining studios, but was more reliable and visibly superior. So began the world's first regular high-definition service. The mast is still in use today.

Color television

Most television researchers appreciated the value of color image transmission; an early patent application in Russia in 1889 for a mechanically scanned color system shows how early the importance of color was realized. John Logie Baird demonstrated the world's first color transmission on July 3, 1928, using scanning discs at the transmitting and receiving ends with three spirals of apertures, each spiral with filters of a different primary color, and three light sources at the receiving end with a commutator to alternate their illumination. In 1938 shadow-mask technology for color television was patented by Werner Flechsig in Germany, and color television was demonstrated at the International Radio Exhibition in Berlin in 1939. On August 16, 1944, Baird gave a demonstration of a fully electronic color television display. His 600-line color system used triple interlacing, using six scans to build each picture.


The Basic Television System and Scanning Principles

2.1 Image Continuity

While the picture is broken into the elements of the frame by the scanning process, it is necessary to present the picture to the eye in such a way that an illusion of continuity is created, and any motion in the scene appears on the picture-tube screen as a smooth and continuous change. To achieve this, advantage is taken of the 'persistence of vision' (about 1/16 second), the storage characteristic of the human eye. Thus, if the scanning rate is made greater than sixteen per second, i.e. more than sixteen pictures are shown per second, the eye is able to integrate the changing levels of brightness in the scene. So, when the picture elements are scanned rapidly enough, they appear to the eye as a complete picture unit, with none of the individual elements visible separately.

2.2 Scanning

A similar process is carried out in the television system. The scene is scanned rapidly in both the horizontal and vertical directions simultaneously to provide a sufficient number of complete pictures, or frames, per second to give the illusion of continuous motion. Instead of 24 frames per second as in commercial motion-picture practice, the frame repetition rate is 25 per second in most television systems.

2.2.1 Interlaced scanning

From considerations of flicker, it has been found that 50 picture frames per second is the minimum requirement in television scanning. For a 625-line system this would mean a horizontal line-scanning frequency of 31,250 Hz, with a line period of 32 µs. For a desired resolution of 546/2 alternations along the horizontal line, this leads to a very high bandwidth requirement, viz. (546/2) × 1/((32 − 6) × 10⁻⁶) ≈ 10 MHz (taking 6 µs of the 32 µs line period for blanking), if the line scanning is done in the simple sequential way.

An ingenious method of reducing the bandwidth requirement, while still maintaining an effective vertical picture scan rate of 50 Hz, is to employ 'interlaced scanning' rather than a simple sequential raster. In interlaced scanning, the picture is divided into two or more sets of fields, each containing half or some other fraction of the interlaced lines, and the fields are scanned sequentially. In 2:1 interlace, the 625 lines are divided into two sets of 312.5 lines each.

The first set of 312.5 lines, the odd-numbered lines of the 625, called the first field or the odd field, is scanned sequentially first. Halfway through the 313th line, the spot is returned to the top of the scene, and the remaining 312.5 even-numbered lines, called the second field or the even field, are then traced, interleaved between the lines of the first set as shown in figure 2.0.

This is done by operating the vertical field scan at 50 Hz, so that the two successive interlaced scans, each at a 25 Hz rate, make up the complete picture frame. This keeps the line-scanning speed down, as only 312.5 lines are scanned in 1/50 second. The 625 lines of the full picture are scanned in 1/25 second, thus keeping down the bandwidth requirement.

Here, though the picture is scanned 25 times per second, the area of the screen is covered in an interlaced fashion at twice that rate, viz. 50 times per second. A close examination may reveal small-area 'interlace flicker', since each individual line actually repeats only 25 times per second, but this is tolerable, and the overall effect is closer to that of 50 Hz scanning; the flicker becomes noticeable only at high brightness. In practice, the flyback from the bottom to the top is not instantaneous and takes a finite time equal to several line periods. Up to 20 lines are allowed for vertical flyback after each of the two fields that make up a complete picture. This means that out of 625 lines, only (625 − 40 =) 585 lines actually bear picture information. These are called the active lines.
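The interlace arithmetic above can be checked with a short sketch (Python is used here purely for illustration; all numbers come from the text):

```python
# Interlaced-scanning arithmetic for the 625-line, 50 Hz system,
# using the figures stated in the text above.

TOTAL_LINES = 625              # lines per complete picture (frame)
FIELD_RATE = 50                # fields per second
INTERLACE = 2                  # 2:1 interlace: two fields per frame
FLYBACK_LINES_PER_FIELD = 20   # lines lost to vertical flyback per field

frame_rate = FIELD_RATE / INTERLACE         # complete pictures per second
lines_per_field = TOTAL_LINES / INTERLACE   # lines scanned in each field
active_lines = TOTAL_LINES - INTERLACE * FLYBACK_LINES_PER_FIELD

print(frame_rate)       # 25.0 frames per second
print(lines_per_field)  # 312.5 lines per field
print(active_lines)     # 585 active (picture-bearing) lines
```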


2.3 Horizontal Resolution and Video Bandwidth

In a 625-line system there are effectively about 410 lines of vertical resolution, and the horizontal resolution should be of the same order. Because of the aspect ratio of 4:3, the number of vertical lines for equivalent resolution will be (410 × 4/3 ≈) 546 alternate black and white lines, which means (546 × 1/2 ≈) 273 cycles of black-and-white alternations of elementary areas. For the 625-line system, the horizontal scan or line frequency fH is given by:

fH = (number of lines per picture) × (picture scan rate)

= 625 × 25 = 15,625 Hz

since each picture line is scanned 25 times in one second. The total line period is thus

TH = 1/fH = 1/15,625 s = 64 µs.

Of this period, 12 µs are used for blanking during the flyback retrace. Thus the 546 black-and-white alternations, i.e. 273 cycles of a complete square wave, are scanned along a horizontal raster line during the forward scan time of (64 − 12 =) 52 µs. The period of this square wave is 52/273 ≈ 0.2 µs, giving a highest fundamental frequency of about 5 MHz, which is adequate as the highest video frequency in the signal.

The highest fundamental video frequency in a scanning system is thus given by

fmax = (active lines × Kell factor × aspect ratio) / (2 × line forward scan period)

= (Na × K × a) / (2 × tfH)

where tfH is the horizontal-line forward scan period. This gives

fmax ≈ 5 MHz
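Evaluating the formula with the numbers used above (585 active lines, a Kell factor of 0.7, 4:3 aspect ratio, 52 µs forward scan) reproduces the 5 MHz figure; a minimal sketch:

```python
# Highest fundamental video frequency, f_max = (Na * K * a) / (2 * t_fH),
# with the 625-line system values derived in the text.

f_H = 625 * 25        # line frequency: 15,625 Hz
T_H = 1 / f_H         # total line period: 64 us
t_fH = T_H - 12e-6    # forward scan time after 12 us of blanking

Na = 585              # active lines
K = 0.7               # Kell factor (fraction of resolution actually realised)
a = 4 / 3             # aspect ratio

f_max = (Na * K * a) / (2 * t_fH)
print(f_H)                     # 15625
print(round(T_H * 1e6))        # 64 (microseconds)
print(round(f_max / 1e6, 2))   # 5.25 (MHz), i.e. approximately 5 MHz
```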

2.4 Principles of Television Colour 


The electromagnetic spectrum is a spectrum of energy that starts with low-frequency radio waves, moves through VHF TV, FM radio and UHF TV (which now includes the new digital TV band of frequencies), and continues all the way up to X-rays.

Additive Colour

When colored lights are mixed (added) together, the result is additive rather than subtractive. Thus, when the additive primaries (red, green and blue light) are mixed together, the result is white.

This can easily be demonstrated with three slide projectors.

When all three primary colors overlap (are added together) on a white screen, the result is white light. Note in this illustration that the overlap of two primary colors (for example, red and green) creates a secondary color (in this case, yellow).

Y = 0.3R + 0.59G + 0.11B ……………(1)

R − Y = R − 0.3R − 0.59G − 0.11B = 0.7R − 0.59G − 0.11B ……………(2)

B − Y = B − 0.3R − 0.59G − 0.11B = 0.89B − 0.3R − 0.59G ……………(3)

Chrominance signal amplitude = √[(R − Y)² + (B − Y)²] ……………(4)

tan θ = (R − Y)/(B − Y) ……………(5)
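Equations (1) to (5) can be evaluated directly. The sketch below checks them for 100% saturated yellow (R = G = 1, B = 0); the results agree with the yellow column of Table 1 further down (note the phase angle tabulated there is computed from the weighted U and V signals, not from equation (5) directly):

```python
import math

def luma_chroma(R, G, B):
    """Evaluate equations (1)-(5) for gamma-corrected R, G, B in [0, 1]."""
    Y = 0.3 * R + 0.59 * G + 0.11 * B    # eq. (1): luminance
    r_y = R - Y                          # eq. (2): R - Y colour difference
    b_y = B - Y                          # eq. (3): B - Y colour difference
    amplitude = math.hypot(r_y, b_y)     # eq. (4): chrominance amplitude
    theta = math.degrees(math.atan2(r_y, b_y))  # eq. (5): tan(theta) = (R-Y)/(B-Y)
    return Y, r_y, b_y, amplitude, theta

# 100% saturated yellow: R = G = 1, B = 0
Y, r_y, b_y, amp, theta = luma_chroma(1, 1, 0)
print(round(Y, 2))    # 0.89
print(round(r_y, 2))  # 0.11
print(round(b_y, 2))  # -0.89
print(round(amp, 2))  # 0.9
```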

In analog television, chrominance is encoded into a video signal using a special "Subcarrier" frequency, which, depending on the standard, can be either quadrature-amplitude (NTSC and PAL) or frequency (SECAM) modulated. In the PAL system, the color subcarrier is 4.43 MHz above the video carrier, while in the NTSC system it is 3.58


MHz above the video carrier. SECAM uses two different frequencies, 4.250 MHz and 4.40625 MHz above the video carrier.

Colour          White   Yellow   Cyan     Green    Magenta  Red      Blue     Black
Y               1.0     0.89     0.70     0.59     0.41     0.30     0.11     0
B − Y           0       −0.89    +0.30    −0.59    +0.59    −0.30    +0.89    0
R − Y           0       +0.11    −0.70    −0.59    +0.59    +0.70    −0.11    0
G − Y           0       +0.11    +0.30    +0.41    −0.41    −0.30    −0.11    0
C (unweighted)  0       0.90     0.76     0.83     0.83     0.76     0.90     0
Y + C           1       1.79     1.46     1.42     1.24     1.06     1.01     0
U               0       −0.439   +0.148   −0.291   +0.291   −0.148   +0.439   0
V               0       +0.097   −0.614   −0.517   +0.517   +0.614   −0.097   0
C (weighted)    0       0.44     0.63     0.59     0.59     0.63     0.44     0
Φ (degrees)     –       167      283      241      61       103      347      –

Table 1: Relative values of luminance and chrominance signals for 100% saturated colours

The presence of chrominance in a video signal is signalled by a "color burst" signal transmitted on the "back porch," just after horizontal synchronization and before each line of video starts. If the color burst signal were to be made visible on a television screen, it would look like a vertical strip of a very dark olive color. In NTSC and PAL hue is represented by a phase shift in the chrominance signal within each video line relative to the color burst, while saturation is determined by the amplitude of the subcarrier. In SECAM (R'-Y') and (B'-Y') signals are transmitted alternately and phase does not matter.

Chrominance is represented by the U-V color plane in PAL and SECAM video signals, and by the I-Q color plane in NTSC.


Composite Video Signal and Television Standards

3.1 Composite video signal

The composite video signal is formed by the electrical signal corresponding to the picture information in the lines scanned in the TV camera pick-up tube, together with the synchronizing signals introduced into it. It is important to preserve its waveform: any distortion of the video signal will affect the reproduced picture, while distortion of the sync pulses will affect synchronization, resulting in an unstable picture. The signal is therefore monitored with an oscilloscope at various stages in the transmission path to confirm that it conforms to the standards. In receivers, observation of the video signal waveform can provide valuable clues to circuit faults and malfunctions.

Composite video is the format of an analog television (picture only) signal before it is combined with a sound signal and modulated onto an RF carrier.

It is usually in a standard format such as NTSC, PAL, or SECAM. It is a composite of three source signals called Y, U and V (together referred to as YUV) with sync pulses. Y represents the brightness or luminance of the picture and includes synchronizing pulses, so that by itself it could be displayed as a monochrome picture. U and V between them carry the colour information. They are first mixed with two orthogonal phases of a colour carrier signal to form a signal called the chrominance. Y and UV are then added together. Since


Y is a baseband signal and UV has been mixed with a carrier, this addition is equivalent to frequency-division multiplexing.

3.2 Colorburst

In composite video, colorburst is a signal used to keep the chrominance subcarrier synchronized in a color television signal. By synchronizing an oscillator with the colorburst at the beginning of each scan line, a television receiver is able to restore the suppressed carrier of the chrominance signals, and in turn decode the color information.

3.3 Television Broadcast Channels

For television broadcasting, channels have been assigned in the VHF and UHF ranges. The allocated frequencies are:

Range            Band      Frequency
Lower VHF range  Band I    41–68 MHz
Upper VHF range  Band III  174–230 MHz
UHF range        Band IV   470–582 MHz
UHF range        Band V    606–790 MHz

(Band II 88-108 MHz is allotted for FM broadcasting.)

TELEVISION CHANNEL ALLOCATIONS

Channel   Frequency range, MHz   Picture carrier, MHz   Sound carrier, MHz

1 41- 47 Not used for TV

2 47- 54 48.25 53.75

3 54- 61 55.25 60.75

4 61- 68 62.25 67.75

5 174-181 175.25 180.75

6 181-188 182.25 187.75

7 188-195 189.25 194.75

8 195-202 196.25 201.75

9 202-209 203.25 208.75

10 209-216 210.25 215.75


11 216-223 217.25 222.75

12 223-230 224.25 229.75

The channel allocations in bands I and III are given in the table above. There are four channels in band I, of which channel 1 (6 MHz wide) is no longer used for TV broadcasting, being assigned to other services.
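The carrier positions in the table follow a fixed rule for these 7 MHz channels: the picture carrier lies 1.25 MHz above the lower channel edge, and the sound carrier 5.5 MHz above the picture carrier. A small sketch reconstructing the table rows from the channel edges alone:

```python
# Picture and sound carriers for the 7 MHz VHF channels in the table above:
# picture carrier = lower channel edge + 1.25 MHz,
# sound carrier   = picture carrier + 5.5 MHz.

def carriers(lower_edge_mhz):
    picture = lower_edge_mhz + 1.25
    sound = picture + 5.5
    return picture, sound

# Band I channels 2-4 and band III channels 5-12, keyed by lower edge in MHz.
lower_edges = {2: 47, 3: 54, 4: 61}
lower_edges.update({ch: 174 + 7 * (ch - 5) for ch in range(5, 13)})

for ch, lower in sorted(lower_edges.items()):
    picture, sound = carriers(lower)
    print(ch, f"{lower}-{lower + 7}", picture, sound)
# e.g. channel 5 -> "5 174-181 175.25 180.75", matching its table row
```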

3.4 Broadcasting of TV Programs

The public television service is operated by broadcasting picture and sound from picture transmitters and associated sound transmitters in three main frequency ranges in the VHF and UHF bands. By international ruling of the ITU, these ranges are exclusively allocated to television broadcasting. Subdivision into operating channels and their assignment by location are also ruled by international regional agreement; the continental standards are valid as per the CCIR 1961 Stockholm plan. The details of the various system parameters are as follows:

Band   Frequency               Channels                  Channel bandwidth
I      (41) 47 to 68 MHz       2 to 4                    7 MHz
II     87.5 (88) to 108 MHz    VHF FM sound              –
III    174 to 223 (230) MHz    5 to 11 (12)              7 MHz
IV     470 to 582 MHz          21 to 27                  8 MHz
V      582 to 790 (860) MHz    28 to 60 (69)             8 MHz
VI     11.7 to 12.5 GHz        superseded by satellite   –

Special channels (cable TV): 68 to 82 (89) MHz, 104 to 174 MHz and 230 to 300 MHz; channels S2 (S3) to S20; 7 MHz.

3.5 Types of Modulation

Vision: C3F (vestigial sideband AM)

Vestigial sideband ratios by system:
0.75/4.2 MHz = 1:5.6 (system M, 525/60, 6 MHz channel)
0.75/5.0 MHz = 1:6.7 (system B, 625/50, 7 MHz channel)
1.25/5.5 MHz = 1:4.4 (system I, 625/50, 8 MHz channel)

The saving of frequency band is about 40%. The polarity of modulation is negative because of the susceptibility to interference of the synchronizing circuits of early TV receivers (exception: positive modulation); the residual carrier with negative modulation is 10% (exception: 20%).

Sound: F3E; FM, for better separation from the vision signal in the receiver (exception: AM). The sound carrier lies above the vision carrier within the RF channel, with inversion at IF (exception: standards A, E and, in part, L).

3.6 Vestigial Sideband Transmission


In the video signal, very low frequency modulating components exist along with the rest of the signal. These components give rise to sidebands very close to the carrier frequency which are difficult to remove with physically realizable filters. Thus it is not possible to go to the extreme and fully suppress one complete sideband in the case of television signals. The low video frequencies contain the most important information of the picture, and any effort to completely suppress the lower sideband would result in objectionable phase distortion at these frequencies. This distortion is seen by the eye as 'smear' in the reproduced picture. Therefore, as a compromise, only a part of the lower sideband is suppressed, and the radiated signal then consists of the full upper sideband together with the carrier, and the vestige (remaining part) of the partially suppressed lower sideband. This pattern of transmission of the modulated signal is known as vestigial sideband or A5C transmission. In the 625-line system, frequencies up to 0.75 MHz in the lower sideband are fully radiated. The net result is a normal double-sideband transmission for the lower video frequencies corresponding to the main body of picture information.

As stated earlier, because of filter-design difficulties it is not possible to terminate the bandwidth of a signal abruptly at the edges of the sidebands. Therefore an attenuation slope covering approximately 0.5 MHz is allowed at either end. Any distortion at the higher-frequency end, if attenuation slopes were not allowed, would mean a serious loss in horizontal detail, since the high-frequency components of the video modulation determine the amount of horizontal detail in the picture. The figure (showing the positions of the picture and sound carriers) illustrates the saving of band space which results from vestigial sideband transmission: the picture signal is seen to occupy a bandwidth of 6.75 MHz instead of 11 MHz.
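The 6.75 MHz figure can be reconstructed from the numbers in the text (5 MHz of video, a 0.75 MHz vestige, and a 0.5 MHz attenuation slope at each end); a minimal sketch of the band-space arithmetic:

```python
# Band space occupied by the 625-line picture signal with vestigial
# sideband (VSB) transmission versus full double-sideband (DSB).

VIDEO_BW = 5.0   # MHz, highest video frequency (full upper sideband)
VESTIGE = 0.75   # MHz, radiated part of the lower sideband
SLOPE = 0.5      # MHz, attenuation slope allowed at either end

vsb_width = SLOPE + VESTIGE + VIDEO_BW + SLOPE  # vestige side + video side
dsb_width = 2 * (VIDEO_BW + SLOPE)              # both sidebands kept

print(vsb_width)  # 6.75 MHz, as stated in the text
print(dsb_width)  # 11.0 MHz
print(round(100 * (1 - vsb_width / dsb_width)))  # 39 (% saved, "about 40%")
```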

3.7 Digital Coding of Colour TV Video Signals and Sound Signals


National and international organizations are attempting at present to establish a uniform digital coding standard or at least an optimal compromise for the TV studio and transmission on the basis of CCIR 601 recommendation for digital interfaces.

3.8 TV Studio

The (West European) EBU has prepared the following digital coding standard for video signals:

Component coding (Y signal and two colour-difference signals);

Sampling frequencies f_sample in the ratio 4:2:2, i.e. 13.5 MHz (4 × 3.375 MHz) for the luminance component and 6.75 MHz (2 × 3.375 MHz) for each chrominance component;

Quantization q of 8 bits/amplitude value.

The data flow per channel is then:

luminance: 13.5 × 10⁶ values/s × 8 bits/value = 108 Mbit/s;
chrominance: 2 × (6.75 × 10⁶ values/s × 8 bits/value) = 2 × 54 Mbit/s;
total: 108 Mbit/s + 2 × 54 Mbit/s = 216 Mbit/s.

I.e. the required bandwidth is approximately 100 MHz.
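The 216 Mbit/s figure follows directly from the sampling frequencies and quantization above; a short sketch:

```python
# Data rate of a 4:2:2 component-coded studio signal (CCIR 601 style),
# as derived in the text above.

BITS = 8            # bits per amplitude value
F_LUMA = 13.5e6     # luminance sampling frequency, values/s
F_CHROMA = 6.75e6   # sampling frequency per chrominance component, values/s

luma_rate = F_LUMA * BITS          # 108 Mbit/s
chroma_rate = 2 * F_CHROMA * BITS  # two components of 54 Mbit/s each
total = luma_rate + chroma_rate

print(luma_rate / 1e6)    # 108.0
print(chroma_rate / 1e6)  # 108.0 (2 x 54)
print(total / 1e6)        # 216.0 Mbit/s
```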

3.9 Transmission

This high channel capacity can only be achieved with internal studio links via coaxial cables or fiber optics. In the public communications networks of present-day technology, the limits per channel lie at the hierarchical step of 34 Mbit/s for microwave links, later 140 Mbit/s. Therefore great efforts are being made at reducing the bit rate, with the aim of achieving satisfactory picture quality at 34 Mbit/s per channel. Terrestrial TV transmitters and coaxial copper cable networks are unsuitable for digital transmissions; satellites with carrier frequencies of about 20 GHz and above may be used. The digital coding of sound signals for satellite sound broadcasting and for the digital sound studio is more elaborate with respect to quantizing than for video signals. A quantization q of 16 bits/amplitude value is required to obtain a quantizing signal-to-noise ratio S/Nq of 98 dB [S/Nq = (6 × 16 + 2) dB = 98 dB].

The sampling frequency must obey the sampling theorem, f_sample ≥ 2 × f_max, where f_max is the maximum frequency of the baseband.

                                        f_sample       Quantization q   Data flow/channel
Direct satellite sound broadcasting
(16 stereo channels)                    32 kHz         16 bits          512 kbit/s
Digital sound studio                    up to 48 kHz   16 bits          768 kbit/s
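The data flows in the table are simply f_sample × q per channel. The sketch below checks them, together with a sampling-theorem check (the 15 kHz audio baseband used there is an assumption, not stated in the text):

```python
# PCM sound-channel data rates from the table above: data flow = f_sample * q.

def channel_rate(f_sample_hz, q_bits):
    return f_sample_hz * q_bits

satellite = channel_rate(32_000, 16)  # direct satellite sound broadcasting
studio = channel_rate(48_000, 16)     # digital sound studio (upper limit)

# Sampling theorem: f_sample >= 2 * f_max. Assuming a 15 kHz audio
# baseband (an assumption, not from the text), 32 kHz sampling suffices.
assert 32_000 >= 2 * 15_000

print(satellite)  # 512000 bit/s = 512 kbit/s
print(studio)     # 768000 bit/s = 768 kbit/s
```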


Video Camera Tube

In older video cameras, before the 1990s, a video camera tube or pickup tube was used instead of a charge-coupled device (CCD). Several types were in use from the 1930s to the 1980s. These tubes are a type of cathode ray tube.

4.1 Types of Camera Tube

1 Image dissector
2 Iconoscope
3 Image orthicon
4 Vidicon
5 Plumbicon
6 Saticon
7 Pasecon
8 Newvicon
9 Trinicon

4.1.1 Vidicon

A vidicon tube (sometimes called a hivicon tube) is a video camera tube in which the target material is made of antimony trisulfide (Sb2S3).

The terms vidicon tube and vidicon camera are often used indiscriminately to refer to video cameras of any type. The principle of operation of the vidicon camera is typical of other types of video camera tubes.


The vidicon is a storage-type camera tube in which a charge-density pattern is formed by the imaged scene radiation on a photoconductive surface which is then scanned by a beam of low-velocity electrons. The fluctuating voltage coupled out to a video amplifier can be used to reproduce the scene being imaged. The electrical charge produced by an image will remain in the face plate until it is scanned or until the charge dissipates.

Pyroelectric photocathodes can be used to produce a vidicon sensitive over a broad portion of the infrared spectrum.

Vidicon tubes are notable for a particular type of interference they suffered from, known as vidicon microphony. Since the sensing surface is quite thin, it is possible to bend it with loud noises. The artifact is characterized by a series of many horizontal bars evident in any footage (mostly pre 1990) in an environment where loud noise was present at the time of recording or broadcast. A studio where a loud rock band was performing or even gunshots or explosions would create this artifact.

4.1.2 Plumbicon

Plumbicon is a registered trademark of Philips. It was mostly used in broadcast camera applications. These tubes have low output but a high signal-to-noise ratio. They had excellent resolution compared to image orthicons, but lacked the artificially sharp edges of IO tubes, which caused some of the viewing audience to perceive them as softer. CBS Labs invented the first outboard edge-enhancement circuits to sharpen the edges of Plumbicon-generated images.

Compared to Saticons, Plumbicons had much higher resistance to burn in, and coma and trailing artifacts from bright lights in the shot. Saticons though, usually had slightly higher resolution. After 1980, and the introduction of the diode gun plumbicon tube, the


resolution of both types was so high, compared to the maximum limits of the broadcasting standard, that the Saticon's resolution advantage became moot.

Target surface: PbO (lead oxide).


Television Studio Equipment, Organization and Control

A television studio is an installation in which television or video productions take place, either for live television, for recording live to tape, or for the acquisition of raw footage for postproduction. The design of a studio is similar to, and derived from, movie studios, with a few amendments for the special requirements of television production. A professional television studio generally has several rooms, which are kept separate for noise and practicality reasons. These rooms are connected via intercom, and personnel will be divided among these workplaces. Generally, a television studio consists of the following rooms:

1 Studio floor
2 Production control room
3 Master control room

5.1 Studio floor

The studio floor is the actual stage on which the actions that will be recorded take place. A studio floor has the following characteristics and installations:

decoration and/or sets
cameras on pedestals
microphones
lighting rigs and the associated controlling equipment
several video monitors for visual feedback from the production control room


a small public address system for communication
a glass window between the PCR and the studio floor for direct visual contact (usually desired, but not always possible)

While a production is in progress, the following people work on the studio floor:

The on-screen "talent" themselves, and any guests: the subjects of the show.
A floor director, who has overall charge of the studio area and relays timing and other information from the director.
One or more camera operators who operate the television cameras.

5.2 Production Control Room

The production control room (also known as the 'gallery') is the place in a television studio in which the composition of the outgoing program takes place. Facilities in a PCR include:

a video monitor wall, with monitors for program, preview, videotape machines, cameras, graphics and other video sources
a vision switcher, a device where all video sources are controlled and taken to air; also known as a special effects generator
an audio mixing console and other audio equipment such as effects devices
a character generator, which creates the majority of the names and full-screen graphics that are inserted into the program
digital video effects and/or still-frame devices (if not integrated in the vision mixer)
the technical director's station, with waveform monitors, vectorscopes and the camera control units or remote control panels for the camera control units (CCUs)
VTRs, which may also be located in the PCR but are often found in the central machine room


5.3 Master Control Room

The master control room houses equipment that is too noisy or runs too hot for the production control room. It also makes sure that wire lengths and installation requirements keep within manageable lengths, since most high-quality wiring runs only between devices in this room. This can include:

the actual circuitry and connection boxes of the vision mixer, DVE and character generator devices
camera control units
VTRs
patch panels for reconfiguration of the wiring between the various pieces of equipment

5.4 Other facilities

A television studio usually has other rooms with no technical requirements beyond program and audio monitors. Among them are:

one or more make-up and changing rooms
a reception area for crew, talent, and visitors, commonly called the green room

5.5 Vision mixer

A vision mixer (also called video switcher, video mixer or production switcher) is a device used to select between several different video sources and in some cases composite (mix) video sources together and add special effects. This is similar to what a mixing console does for audio.

Explanation

Typically a vision mixer would be found in a professional television production environment such as a television studio, cable broadcast facility, commercial production facility or linear video editing bay. The term can also refer to the person operating the device.


Vision mixer and video mixer are almost exclusively European terms for both the equipment and its operators. Software vision mixers are also available.

Capabilities and usage in TV Productions

Besides hard cuts (switching directly between two input signals), mixers can also generate a variety of transitions, from simple dissolves to pattern wipes. Additionally, most vision mixers can perform keying operations and generate color signals (called mattes in this context). Most vision mixers are targeted at the professional market, with newer analog models having component video connections and digital ones using SDI. They are used in live and videotaped television productions and for linear video editing, even though the use of vision mixers in video editing has been largely supplanted by computer-based non-linear editing.
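As an illustration of the transitions described above, a dissolve is just a weighted sum of two synchronous sources, while a hard cut selects one of them. The sketch below is a minimal, hypothetical model (frames are represented as flat lists of luma samples), not the behaviour of any specific mixer:

```python
def dissolve(frame_a, frame_b, t):
    """Cross-fade between two frames: t=0.0 gives frame_a, t=1.0 gives frame_b."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("transition position t must be in [0, 1]")
    return [round((1.0 - t) * a + t * b) for a, b in zip(frame_a, frame_b)]

def hard_cut(frame_a, frame_b, switched):
    """A hard cut simply selects one source or the other."""
    return frame_b if switched else frame_a
```

Moving t from 0 to 1 over, say, 25 consecutive frames would produce a one-second dissolve at PAL frame rates; a pattern wipe differs only in that the mixing weight varies across the picture instead of being uniform.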

5.6 Character generator

A character generator (CG for short) is a device or software that produces static or animated text (such as crawls and rolls) for keying into a video stream. Modern character generators are actually computer-based, and can generate graphics as well as text.

Character generators are primarily used in the broadcast areas of live sports or news presentations, since the modern character generator can rapidly (i.e., "on the fly") generate high-resolution, animated graphics when an unforeseen situation in the game or newscast presents an opportunity for coverage. For example, when a previously unknown player in a football game begins to have what looks to become an outstanding day, the character generator operator can rapidly build a new graphic for that player, using the "shell" of a similarly designed graphic composed for another player. The character generator, then, is but one of many technologies used in the remarkably diverse and challenging work of live television, where events on the field or in the newsroom dictate the direction of the coverage. In such an environment, the quality of the broadcast is only as good as its weakest link, both in terms of personnel and technology. Hence, character generator development never ends, and the distinction between hardware and software CGs continues to blur as new platforms and operating systems evolve to meet the demands of live television.


Hardware CGs

Hardware CGs are used in television studios and video editing suites. A DTP-like interface can be used to generate static and moving text or graphics, which the device then encodes into a high-quality video signal, such as digital SDI or analog component video, high definition or even RGB video. In addition, they also provide a key signal, which the compositing vision mixer uses as an alpha channel to determine which areas of the CG video are translucent.

Software CGs

Software CGs run on standard off-the-shelf hardware and are often integrated into video editing software such as nonlinear video editing applications. Some stand-alone products are available, however, for applications that do not even attempt to offer text generation on their own, as high-end video editing software often does, or whose internal CG effects are not flexible and powerful enough. Some software CGs can be used in live production with special software and computer video interface cards. In that case, they are equivalent to hardware CGs.

5.7 Camera control unit

The camera control unit (CCU) is installed in the production control room (PCR), and allows various aspects of the video camera on the studio floor to be controlled remotely. The most commonly made adjustments are for white balance and aperture, although almost all technical adjustments are made from controls on the CCU rather than on the camera. This frees the camera operator to concentrate on composition and focus, and also allows the technical director of the studio to ensure uniformity between all the cameras.

As well as acting as a remote control, the CCU usually provides the external interfaces for the camera to other studio equipment, such as the vision mixer and intercom system, and contains the camera's power supply.


5.8 Video Tape Recorder

A video tape recorder (VTR), is a tape recorder that can record video material. The video cassette recorder (VCR), where the videotape is enclosed in a user-friendly videocassette shell, is the most familiar type of VTR known to consumers. Professionals may use other types of video tapes and recorders.

Professional cassette / cartridge based systems

- U-matic (3/4")
- Betacam (Sony)
- M-II (Panasonic)
- Betacam SP (Sony)

Standard definition digital video tape formats

- D1 (Sony and Broadcast Television Systems Inc.)
- D2 (Sony and Ampex)
- Digital Betacam (Sony)
- Betacam IMX (Sony)
- DVCAM (Sony)
- DVCPRO (Panasonic)

5.9 Video cassette recorder


The videocassette recorder (or VCR, more commonly known in the British Isles as the video recorder), is a type of video tape recorder that uses removable videotape cassettes containing magnetic tape to record audio and video from a television broadcast so it can be played back later. Many VCRs have their own tuner (for direct TV reception) and a programmable timer (for unattended recording of a certain channel at a particular time).


5.10 Patch panel

A patch panel or patch bay is a panel, typically rackmounted, that houses cable connections. One typically shorter patch cable will plug into the front side, while the back will hold the connection of a much longer and more permanent cable. The assembly of hardware is arranged so that a number of circuits, usually of the same or similar type, appear on jacks for monitoring, interconnecting, and testing circuits in a convenient, flexible manner.

Patch panels offer the convenience of allowing technicians to quickly change the path of select signals, without the expense of dedicated switching equipment.

5.11 Video monitor


A video monitor is a device similar to a television, used to monitor the output of a video generating device, such as a video camera, VCR, or DVD player. It may or may not have audio monitoring capability.

Unlike a television, a video monitor has no tuner and, as such, is unable to independently tune into an over-the-air broadcast.


One common use of video monitors is in television stations and outside broadcast vehicles, where broadcast engineers use them for confidence checking of signals throughout the system.

Video monitors are also used extensively in the security industry with Closed-circuit television cameras and recording devices.

Common display types for video monitors

- Cathode ray tube
- Liquid crystal display
- Plasma display

Common monitoring formats for broadcasters

- Serial Digital Interface (SDI, as SD-SDI or HD-SDI)
- Composite video
- Component video

5.12 Mixing Console

In professional audio, a mixing console, digital mixing console, mixing desk (Brit.), or audio mixer, also called a sound board or soundboard, is an electronic device for combining (also called "mixing"), routing, and changing the level, tone, and/or dynamics of audio signals. A mixer can mix analog or digital signals, depending on the type of mixer. The modified signals (voltages or digital samples) are summed to produce the combined output signals.


Mixing consoles are used in many applications, including recording studios, public address systems, sound reinforcement systems, broadcasting, television, and film post-production. An example of a simple application would be to enable the signals that originated from two separate microphones (each being used by vocalists singing a duet, perhaps) to be heard through one set of speakers simultaneously. When used for live performances, the signal produced by the mixer will usually be sent directly to an amplifier, unless that particular mixer is “powered” or it is being connected to powered speakers.

Further channel controls affect the equalization of the signal by separately attenuating or boosting a range of frequencies (e.g., bass, midrange, and treble). Most large mixing consoles (24 channels and larger) have sweep equalization in one or more bands of the parametric equalizer on each channel, where the frequency and affected bandwidth of equalization can be selected. Smaller mixing consoles have few or no equalization controls. Some mixers have a general equalization control (either graphic or parametric).

Each channel on a mixer has an audio taper pot, or potentiometer, controlled by a sliding volume control (fader), that allows adjustment of the level, or amplitude, of that channel in the final mix. A typical mixing console has many rows of these sliding volume controls. Each control adjusts only its respective channel (or one half of a stereo channel); therefore, it only affects the level of the signal from one microphone or other audio device. The signals are summed to create the main mix, or combined on a bus as a submix, a group of channels that are then added to get the final mix (for instance, many drum mics could be grouped into a bus, and then the proportion of drums in the final mix can be controlled with one bus fader).
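The fader and bus summing just described can be sketched numerically: each fader setting in decibels becomes a linear gain, and the main mix is the gain-weighted sum of the channels. This is a simplified illustration with invented function names, not a model of any particular console:

```python
def db_to_gain(db):
    """Convert a fader setting in decibels to a linear amplitude factor."""
    return 10 ** (db / 20.0)

def mix(channels, fader_db):
    """Sum per-channel sample lists after applying each channel's fader gain."""
    gains = [db_to_gain(db) for db in fader_db]
    length = len(channels[0])
    return [sum(g * ch[i] for g, ch in zip(gains, channels)) for i in range(length)]

# Two channels at 0 dB (unity gain) simply add sample by sample.
main_mix = mix([[1.0, 2.0], [1.0, 0.0]], [0.0, 0.0])
```

Pulling a fader down by 20 dB scales that channel's contribution to one tenth, which is why console scales are marked in dB rather than in linear gain.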

There may also be insert points for a certain bus, or even the entire mix.

On the right hand of the console, there are typically one or two master controls that enable adjustment of the console's main mix output level.

Finally, there are usually one or more VU or peak meters to indicate the levels for each channel, or for the master outputs, and to indicate whether the console levels are overmodulating or clipping the signal. Most mixers have at least one additional output, besides the main mix. These are either individual bus outputs, or auxiliary outputs, used, for instance, to output a different mix to on-stage monitors. The operator can vary the mix (or levels of each channel) for each output.

As audio is heard in a logarithmic fashion (both amplitude and frequency), mixing console controls and displays are almost always in decibels, a logarithmic measurement system. This is also why special audio taper pots or circuits are needed. Since it is a relative measurement, and not a unit itself (like a percentage), the meters must be referenced to a


nominal level. The "professional" nominal level is considered to be +4 dBu. The "consumer grade" level is −10 dBV.
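These two nominal levels can be converted to actual voltages from their reference points: dBu is referenced to 0.7746 V RMS (1 mW into 600 ohms) and dBV to 1.0 V RMS. A quick numerical check, as a sketch:

```python
DBU_REF_V = 0.7746  # volts RMS, 1 mW into 600 ohms
DBV_REF_V = 1.0     # volts RMS

def dbu_to_volts(dbu):
    """RMS voltage corresponding to a level in dBu."""
    return DBU_REF_V * 10 ** (dbu / 20.0)

def dbv_to_volts(dbv):
    """RMS voltage corresponding to a level in dBV."""
    return DBV_REF_V * 10 ** (dbv / 20.0)

professional = dbu_to_volts(4)    # about 1.23 V RMS
consumer = dbv_to_volts(-10)      # about 0.316 V RMS
```

So the "professional" +4 dBu level is roughly four times the voltage of the "consumer" -10 dBV level, which is why interfacing the two kinds of equipment needs level matching.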

5.13 Sync Pulse Generator

A sync pulse generator, or sync signal generator as it is often called, comprises: (i) a crystal-controlled or mains-locked timing system, (ii) pulse shapers that generate the required pulse trains for blanking, synchronisation and deflection drives, and (iii) amplifier-distributors that supply these pulses to the various studio sources in a studio complex. The timing unit in the sync pulse generator has a master oscillator at a frequency of about 2H, which can be synchronised by: (1) a crystal oscillator, at 2H (31,250 Hz) exactly, (2) an external 2H frequency source, or (3) the AC mains frequency, with the help of a phase detector and an AFC circuit that compares the 50 Hz vertical frequency rate with the mains frequency. The required pulse timings at the H and V rates are derived from the 2H master oscillator through frequency dividers as shown in the figure. The blanking and sync pulses are derived from the 2H, H and V pulses employing suitable pulse shapers and pulse adders or logic gates.
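The divider chain can be checked with a little arithmetic for the 625-line, 50-field system described above (the variable names are illustrative only):

```python
MASTER_2H_HZ = 31_250             # 2H master oscillator frequency

line_rate = MASTER_2H_HZ // 2     # H: divide by 2 -> 15,625 Hz
field_rate = MASTER_2H_HZ // 625  # V: divide by 625 -> 50 Hz

# 25 frames per second of 625 lines each reproduces the line rate.
lines_per_frame = line_rate // (field_rate // 2)
```

The same chain, with different divider ratios, yields any standard's line and field rates from a single master oscillator.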

5.14 LIGHTING SYSTEMS OF STUDIO

Basically the fittings employ incandescent lamps and quartz iodine lamps at appropriate colour temperatures. Quartz iodine lamps are also incandescent lamps, with a quartz glass envelope and an iodine vapour atmosphere inside. These lamps are more stable in operation and colour temperature with respect to aging. The lamp fittings generally comprise spot lights of 0.5 kW and 1 kW and broads of 1 kW, 2 kW and 5 kW. Quartz iodine lamps of 1 kW provide flood lights. A number of these fittings are suspended from the top so that they can be adjusted unseen. The adjustments for raising and lowering can be done by (i)


hand operation for smaller suspensions, (ii) winch motor operated controls for greater mechanical loads of batten suspensions carrying a number of light fittings, (iii) unipole suspensions carrying wells of light fittings manipulated from a catwalk of steel structure at the top ceiling where the poles carrying these are clamped.

The lighting is controlled by varying the effective current flow through the lamps by means of silicon controlled rectifier (SCR) dimmers. These enable the angle of current flow to be continuously varied by suitable gate-triggering signals. The lighting patch panels and SCR dimmer controls for the lights are provided in a separate room. The lighting is energized and controlled by switches and faders on the dimmer console in the PCR, from the technical presentation panel. The lighting has to prevent shadows and produce desired contrast effects. Following are some of the terms used in lighting.

High key lighting gives a picture with gradations that fall between gray shades and white, confining dark gray and black to few areas, as in news reading, panel discussions, etc.

Low key lighting gives a picture with gradations falling from gray to black, with few areas of light gray or white.

Key light is the principal source of direct illumination, often with hinged panels or shutters to control the spread of the light beam.

Fill light is the supplementary soft light that fills in detail to reduce the shadow contrast range.

Back light is the illumination from behind the subject in the plane of the camera optical axis, to provide 'accent lighting' that brings out the subject against the background or the scene.

5.15 Audio Pick-up

For sound arrangement, the microphone placement technique depends upon the type of program. In some cases, e.g. discussions, news and musical programs, the mikes may be visible to the viewers, and these can be put on a desk or mounted on floor stands. In other programs, for instance dramas, the mikes must be out of view. Such programs require hidden microphones or a boom-mounted mike with a boom operator. A unidirectional microphone mounted on the boom arm, high enough to be out of sight, is desirable here. The boom operator must manipulate the boom properly. Lavalier microphones and hidden microphones are also useful in such programs. In a television studio there is considerable ambient noise resulting from off-camera activity, hence directional mikes are frequently used. The studio walls and ceilings are treated with sound-absorbing material to make them as dead as possible. Artificial reverberation is then required to achieve proper audio quality.

5.16 System Blanking

When cameras are placed at different locations, they may require different cable lengths, and hence the line drive pulses applied to the cameras may be unequally delayed by the propagation delay of the cable, which is around 0.15 µs per 100 ft. This can cause a time difference between the cameras proportional to the cable length differences, and the raster in the picture monitor will shift slightly as the cameras are switched over. System blanking is useful in overcoming this time difference between the two camera signals arriving at the vision mixer unit. The system blanking is much longer in duration and encompasses both the camera blanking periods. The system line blanking is 12 µs whereas the camera line blanking is only 7 µs. This avoids the shift in the raster from being


observed. In recent cameras, the time difference due to the differences in camera cable lengths is

offset by autophasing circuits which ensure that the video signals arriving from all cameras are all time-coincident irrespective of their cable lengths. Once the circuit is adjusted, the cable length has no effect on the timings. Even in such cases, the system blanking is necessary to mask off the unwanted oscillations or distortions at the end or start of the scanning line.
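The timing skew that system blanking must mask is easy to estimate from the figure quoted above (about 0.15 µs per 100 ft of cable). A sketch, with hypothetical cable runs:

```python
DELAY_US_PER_100FT = 0.15  # propagation delay quoted for camera cable

def cable_delay_us(length_ft):
    """Propagation delay in microseconds for a given cable length in feet."""
    return DELAY_US_PER_100FT * length_ft / 100.0

# Two cameras on 100 ft and 600 ft runs are skewed by 0.75 us, well
# inside the 5 us margin between the 12 us system line blanking and
# the 7 us camera line blanking.
skew_us = cable_delay_us(600) - cable_delay_us(100)
```

The margin shrinks as cable-length differences grow, which is one reason modern autophasing circuits are preferred for very long runs.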

5.17 Principle of Genlock and Slavelock Techniques

In a studio complex, the various program sources can be mixed, superposed or inlaid if they are synchronous, with their line and field sync in phase at the mixing point. If the sources are driven from the same SPG, it is relatively easy to make them synchronous. This can be done by applying either the 'genlock' or the 'slavelock' technique, or by building out the necessary delay from the SPG.

Non-synchronous sources driven from separate SPG's cannot be mixed or superposed. They can only be switched by cutting the picture and sync signals together. Switching between two non-synchronous sources leads to a sudden change in the transmission sync pulses causing a temporary loss of sync in the driven monitor time bases. The line time base may recover fairly quickly, but the field time base may take several seconds to recover if the phase change is large resulting in picture roll. This can be avoided if the field sync components of the two sources are brought approximately in phase by suitable phase shifting networks at the moment of switching.

In the genlock process, the line and field components of the local SPG are locked in frequency and phase to the line and field components of a remote incoming video signal without producing any visible disturbance in the monitor. The line and field sync components of the incoming composite video signal are separated and are used to lock the local SPG master oscillator through a timing phase comparator.

In order to bring the local line and field sync components in phase with the remote ones, the local line or field frequency is changed for some time. Field phasing is achieved automatically by deviating the field frequency from its normal value by altering the number of lines per field. The field frequency divider count is changed so that the system runs at 623 or 627 lines until field coincidence is achieved. When the field phasing is correct, the normal number of 625 lines is restored. This method can give a fairly rapid lock, in a matter of a few seconds, and hence is called 'quick genlock'.
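The quick-genlock pull-in time can be modelled with a toy calculation: running at 623 or 627 lines per frame moves the local field phase by 2 lines every frame, so the lock time is proportional to the initial offset. This is only an illustrative sketch, not an SPG algorithm from the text:

```python
import math

def quick_genlock_time(offset_lines, frame_rate=25.0):
    """Frames and seconds needed to pull in a field-phase offset (in lines)
    when each 623- or 627-line frame corrects 2 lines of offset."""
    frames = math.ceil(abs(offset_lines) / 2)
    return frames, frames / frame_rate

# A worst-case offset of about half a frame (300 lines) locks in 6 seconds,
# consistent with a lock time of a few seconds.
frames, seconds = quick_genlock_time(300)
```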

Another method of genlock is to change the line time until coincidence is obtained and then reset it to the usual 64 µs period. This allows a much slower phasing process and can, therefore, be carried out without disturbing other sources tied to the generator being phased in. The number of lines in each field is constant.

In both modes, the timing phase comparator is situated at the mixing point. In the genlock mode, the error control signal is used locally to adjust the timing of the appropriate sync component of (the SPG of) the video signal at the mixing point. In the slavelock mode, the error control signal is fed back in an inverted sense to correct the timing of the appropriate synchronising component of (the SPG of) the contribution, from the incoming video signal.


Any system capable of slavelock operation can also be used for genlock operation. But a genlock system cannot operate for slavelock unless the error signal is suitable for other generators and the feedback delay is tolerated by the system for maintaining stability.

5.18 Colour Sync Pulse Generators

Older monochrome video source equipment used four standard pulses distributed to the equipment, viz. the MS, LD, FD and MB pulses. A limited number of colour TV equipment used these four sets of pulses plus the colour subcarrier CSC, the PAL ident flag and the colour burst gate. The next generation of colour equipment, of solid-state design, used a three-line distribution, viz. MS, MB and the CSC. Modern equipment employing LSI circuits uses self-contained sync generators that require only a single reference pulse for operation. The colour-black signal with the black burst is taken as the de facto standard for single-line distribution. The sync and the subcarrier must be carefully separated from video in order to maintain the exact timing.

5.19 Professional video camera

A professional video camera (often called a television camera even though its use has spread beyond television) is a high-end device for recording electronic moving images (as opposed to a movie camera, which records the images on film). Originally developed for use in television studios, they are now commonly used for corporate and educational videos, music videos, direct-to-video movies, etc.


5.19.1 Studio Cameras

It is common for professional cameras to split the incoming light into the three primary colors that humans are able to see, feeding each color into a separate pickup tube (in older cameras) or charge-coupled device (CCD). Some high-end consumer cameras also do this, producing a higher-quality image than is normally possible with just a single video pickup.

5.19.2 ENG Cameras

Often used in independent films, ENG video cameras are similar to consumer camcorders, and indeed the dividing line between them is somewhat blurry, but a few differences are generally notable:

- They are bigger, and usually have a shoulder stock for stabilizing on the camera operator's shoulder.
- They use 3 CCDs instead of one (as is common in digital still cameras and consumer equipment), one for each primary color.
- They have removable/swappable lenses.
- All settings like white balance, focus, and iris can be manually adjusted, and automatics can be completely disabled.
- Where possible, these functions are adjustable mechanically (especially focus and iris), not by passing signals to an actuator or digitally dampening the video signal.
- They have professional connectors - BNC for video and XLR for audio.
- A complete timecode section is available, and multiple cameras can be timecode-synchronized with a cable.
- "Bars and tone" are available in-camera (the bars are SMPTE (Society of Motion Picture and Television Engineers) bars similar to those seen on television when a station goes off the air; the tone is a test audio tone).

5.19.3 Parts of a Camera

Lens Turret - a judicious choice of lens can considerably improve the quality of the image, the depth of field and the impact intended to be created on the viewer. Accordingly, a number of different viewing angles are provided. Their focal lengths are slightly adjusted by movement of the front element of the lens located on the lens assembly.

Zoom Lens - a zoom lens has a variable focal length, with a zoom range of 10:1 or more. With this lens the viewing angle and field of view can be varied without loss of focus. A smooth and gradual change of focal length by the camera operator while televising a scene appears to viewers as if the camera is approaching or receding from the scene, enabling dramatic close-up control.


Camera Mounting - a studio camera must be able to move up and down and around its centre axis to pick up different sections of the scene.

View Finder - to permit the camera operator to frame the scene and maintain proper focus, an electronic viewfinder is provided with most TV cameras. It receives video signals from the control room stabilizing amplifier. The viewfinder has its own deflection circuitry, as in any other monitor, to produce the raster. It also has a built-in DC restorer for maintaining the average brightness of the scene being televised.

5.20 PAL Encoder

The gamma-corrected RGB signals are combined in the Y matrix to form the Y signal. The U-V matrix combines the R, B and -Y signals to obtain R-Y and B-Y, which are weighted to obtain the V and U signals. Weighting by the factor 0.877 for R-Y and 0.493 for B-Y prevents overmodulation on saturated colours. This gives:

Y = 0.30R + 0.59G + 0.11B

U = 0.493(B - Y)

V = 0.877(R - Y)
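As a numerical check of the matrixing, the sketch below applies the standard PAL weighting factors (0.493 for B-Y, 0.877 for R-Y, assumed here) to gamma-corrected R, G, B values in the range 0 to 1:

```python
def pal_matrix(r, g, b):
    """Form Y and the weighted colour-difference signals U and V
    from gamma-corrected R, G, B values in [0, 1]."""
    y = 0.30 * r + 0.59 * g + 0.11 * b
    u = 0.493 * (b - y)   # weighted B - Y
    v = 0.877 * (r - y)   # weighted R - Y
    return y, u, v

# White (1, 1, 1) carries no chrominance: U and V both come out zero.
```

For saturated red (1, 0, 0), Y is 0.30 and V is 0.877 x 0.70, about 0.61 rather than 0.70, which is how the weighting keeps saturated colours within the modulation limits.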


5.21 Outside Broadcasting

Outside Broadcasting is the production of television programmes (typically to cover news and sports events) from a mobile television studio. This mobile control room is known as an "Outside Broadcasting Van", "OB Van" or "Scanner". Signals from cameras and microphones come into the OB Van for processing and transmission. The term "OB" is almost unheard of in the United States, where "mobile unit" and "production truck" are more often used.

A typical OB Van is usually divided into 5 parts.

The first and largest part is the production area, where the director, technical director, assistant director, character generator operator and producers usually sit in front of a wall of monitors. This area is very similar to a production control room. The technical director sits in front of the video switcher. The monitors show all the video feeds from various sources, including computer graphics, cameras, video tapes, and slow motion replay machines. The wall of monitors also contains a preview monitor showing what could be the next source on air (though not necessarily, depending on how the video switcher is set up) and a program monitor that shows the feed currently going to air or being recorded.

The second part of the van is for the audio engineer; it has a sound mixer fed with all the various audio feeds (reporters' commentary, on-pitch microphones, etc.). The audio engineer controls which channels are added to the output and follows instructions from the director. The audio engineer normally also has a dirty feed monitor to help with the synchronization of sound and video.

The third part of the van is the videotape area. It has a collection of video tape machines (VTRs) and may also house additional power supplies or computer equipment.

The fourth part is the video control area, where the cameras are controlled by one or two people to make sure that the iris is at the correct exposure and that all the cameras look the same.

The fifth part is transmission, where the signal is monitored and engineered for quality control purposes and is transmitted or sent to other trucks.


5.22 Video Switcher

A video switcher is a multi-contact crossbar switch matrix with provision for selecting any one or more of a large number of inputs and switching them onto outgoing circuits. The input sources include camera, VTR and telecine machine outputs, besides test signal and special effects generators.

5.23 Video Editing

The term video editing can refer to:

- non-linear editing, using computers with video editing software
- linear video editing, using videotape

Video editing is the process of re-arranging or modifying segments of video to form another piece of video. The goals of video editing are the same as in film editing: the removal of unwanted footage, the isolation of desired footage, and the arrangement of footage in time to synthesize a new piece of footage.

Clips are arranged on a timeline, music tracks and titles are added, effects can be created, and the finished program is "rendered" into a finished video.


Non Linear Editing

The term "nonlinear editing" is also called "real time" editing, "random-access" or "RA" editing, "virtual" editing, "electronic film" editing, and so on.

Non-linear editing for film and television postproduction is a modern editing method which involves being able to access any frame in a video clip with the same ease as any other. This method is similar in concept to the "cut and glue" technique used in film editing from the beginning. However, when working with film it is a destructive process, as the actual film negative must be cut. Non-linear, non-destructive methods began to appear with the introduction of digital video technology.

Video and audio data are first digitized to hard disks or other digital storage devices. The data is either recorded directly to the storage device or imported from another source. Once imported, the material can be edited on a computer using any of a wide range of software.

With the availability of commodity video processing hardware, specialist video editing cards, and computers designed specifically for non-linear video editing, many software packages are now available to work with them.

Some popular software packages used for NLE are:

1. Adobe Premiere Elements (Microsoft Windows)
2. Final Cut Express
3. Leitch Velocity
4. Media 100
5. Nero 7 Premium
6. Windows Movie Maker

(DPS Velocity is now being used at DDK Patna for NLE purposes.)

Linear Editing

Linear editing is done using VCRs, with a monitor to see the output of the editing.

5.24 Graphics


The paint-box is a professional tool for the graphics designer. Using an electronic cursor or pen and an electronic tablet, any type of design can be created with the paint box. An artist can capture any live video frame, retouch it, and subsequently process, cut or paste it onto another picture, or prepare a stencil out of the grabbed picture. The system consists of: mainframe electronics, a graphics tablet, a keyboard, a floppy disk drive, and a 385 MB Winchester disk drive.

5.25 Electronics News Gathering

It basically comes under the outside broadcasting. ENG may be live or recorded type.

There are two types of professional video cameras: high-end portable recording cameras (essentially, high-end camcorders) used for ENG, and studio cameras, which lack the recording capability of a camcorder and are often fixed on studio pedestals. Portable professional cameras are generally much larger than consumer cameras and are designed to be carried on the shoulder.


Color Temperature and Color Correction in Photography

Color temperature is the main way in which we measure the different colors and color correction is about filtration and other techniques that we as photographers use to achieve a desired color effect. (The desired effect may be a neutral "daylight" color, or any other effect, e.g. a slight warm-up effect for portraits.)

6.1 Color Balance

Each point in the image can be described with three values. These could be chosen to be the percentage intensity of the colors red, green, and blue, relative to their maximum values for the particular film. This is completely analogous to the RGB color space in computer graphics.

Three values describe the image at any given point, but only two values are required to describe the color balance. Think of it this way: the overall intensity doesn't matter; if it is dark blue or light blue it is still blue. If you mix 25% of each of red, green, and blue you get a neutral gray color. If you mix 50% intensity you still get neutral gray, albeit a slightly lighter gray.

In the table below, cells in the same row have the same color balance; only the intensity changes. All the colors in the first row are red and red only, with no trace of blue or green.

[Table omitted in this copy: rows of single colors shown at decreasing intensities.]

We are of course free to choose any two (different) values to measure the color balance.


In photography it is traditional to choose as the two variables the ratio of blue to red and the ratio of green to the overall intensity. These correspond to the traditional light-balancing filters (the 80, 81, 82, and 85 series) and the green and magenta color-compensating filters (CC-G and CC-M).

Color Balance Variables:

Light Balance (LB), or Color Temperature: the ratio of the intensities of blue to red
Green-Magenta Balance: the relative amount of green
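As a rough numeric sketch of these two variables (the function and its exact normalization are my own illustration, not from the report):

```python
def color_balance(r, g, b):
    """Return the two color-balance variables for an RGB triple.

    Illustrative only: the report defines the variables conceptually;
    the normalization chosen here is an assumption.
    """
    lb = b / r                      # light balance: blue-to-red ratio
    overall = (r + g + b) / 3.0     # overall intensity
    gm = g / overall                # green relative to overall intensity
    return lb, gm

# Neutral gray at any intensity gives the same balance values,
# matching the observation that intensity does not affect balance.
assert color_balance(25, 25, 25) == color_balance(50, 50, 50) == (1.0, 1.0)
```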

6.2 Color Temperature

Color temperature is a term borrowed from physics. In physics we learn that a so-called "black body" will radiate light when it is heated. The spectrum of this light, and therefore its color, depends on the temperature of the body. You probably know this effect from everyday life: if you heat an iron bar, say, it will eventually start to glow dark red ("red hot"). Continue to heat it and it turns yellow (like the filament in a light-bulb) and eventually blue-white. The color moves from red towards blue. But we say that red is a "warmer" color than blue! So a warm body radiates a cold color and a (comparatively) cold body radiates warm colors.

The photographic color temperature is not the same as the color temperature defined in physics or colorimetry. As mentioned above, the photographic color temperature is based only on the relative intensity of blue to red. However, we borrow the basic measurement scale from physics and measure the photographic color temperature in kelvins (K).


Temperature   Typical Sources
1000 K        Candles; oil lamps
2000 K        Very early sunrise; low-effect tungsten lamps
2500 K        Household light bulbs
3000 K        Studio lights, photofloods
4000 K        Clear flashbulbs
5000 K        Typical daylight; electronic flash
5500 K        The sun at noon near Kodak's offices :-)
6000 K        Bright sunshine with clear sky
7000 K        Slightly overcast sky
8000 K        Hazy sky
9000 K        Open shade on a clear day
10,000 K      Heavily overcast sky
11,000 K      Sunless blue skies
20,000+ K     Open shade in mountains on a really clear day

This means that you will find photographers talking about "daylight-balanced" film (nominally 5500 K) and type A and type B tungsten-balanced films (3400 K and 3200 K, respectively). These ratings give the color of the light; below we define a measure of how much a filter shifts the color temperature (the mired shift).

6.3 Light Balancing Filters

Light balancing filters are used to change the color temperature of light. If you place a light balancing filter in front of your lens, the overall color temperature of the scene will be changed. These filters are sometimes called conversion filters because they may be used to "convert" daylight-balanced film for use in tungsten light, or tungsten films for use in daylight.

A very useful concept is the mired shift. Mathematically, this is defined as

mired shift = 10^6/T2 - 10^6/T1 = 1000 * (1000/T2 - 1000/T1)

where T1 is the color temperature you have and T2 is the color temperature you desire (for example, the color temperature your film is balanced for). The mired shift is sometimes called the light balance (LB) index of the filter.
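The formula is easy to evaluate; a minimal sketch (the function name is illustrative):

```python
def mired_shift(t_have, t_want):
    """Mired shift needed to convert light at t_have (K) to t_want (K).

    mired shift = 10^6/T2 - 10^6/T1. Positive values call for a warming
    (amber) filter, negative values for a cooling (blue) one.
    """
    return 1e6 / t_want - 1e6 / t_have

# Shooting daylight (5500 K) on type B tungsten film (3200 K) needs a
# warming shift of about +131 mired, the classic 85B conversion filter.
shift = mired_shift(5500, 3200)
```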


Satellite Communications

Television could not exist in its contemporary form without satellites. Since 10 July 1962, when NASA technicians in Maine transmitted fuzzy images of themselves to engineers at a receiving station in England using the Telstar satellite, orbiting communications satellites have been routinely used to deliver television news and programming between companies and to broadcasters and cable operators. And since the mid-1980s they have been increasingly used to broadcast programming directly to viewers, to distribute advertising, and to provide live news coverage.

7.1 DIRECT BROADCASTING SATELLITES

7.1.1 Geostationary Orbit

As indicated in Section 7.12, satellites orbiting at a height of about 36,000 km from the earth, at an orbital speed of about 3 km/s (11,000 km/h), act as geostationary satellites: the centrifugal force acting on the satellite just balances the gravitational pull of the earth. If M is the mass of the earth, m the mass of the satellite, r the radius of the orbit, and G the gravitational constant, equating the centrifugal and gravitational forces gives

m·v²/r = G·M·m/r²

v = √(GM/r)

T = orbital period of the satellite = 2πr/v = 24 hrs = 24 × 3600 seconds

Putting M = 5.974 × 10^24 kg and G = 6.6672 × 10^-11 N·m²/kg²


gives the orbital radius of a geosynchronous satellite as 42,164 km. Deducting the earth's radius of 6378 km, the height above the earth's surface comes to 35,786 km.
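The derivation can be checked numerically. A small sketch, using the report's values for M and G but the sidereal day of 86,164 s rather than the rounded 24 h, since that is what actually yields 42,164 km:

```python
import math

M = 5.974e24       # mass of the earth, kg (report's value)
G = 6.6672e-11     # gravitational constant, N*m^2/kg^2 (report's value)
T = 86164.0        # orbital period, s (sidereal day)

# From v = sqrt(G*M/r) and T = 2*pi*r/v:  r = (G*M*T^2 / (4*pi^2))^(1/3)
r = (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

r_km = r / 1000.0            # about 42,164 km orbital radius
height_km = r_km - 6378.0    # about 35,786 km above the surface
```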

7.1.2 Footprints

As the satellite radio beam is aimed towards the earth, it illuminates an oval service area on the earth called the 'footprint'. Because the equatorial satellite illuminates the earth at a slant, this is actually an egg-shaped area with the sharper end pointing towards the pole. The size of the footprint depends on how much the beam spreads over the surface of the earth it intercepts. The footprints for contours of 3 dB, or half-power beamwidth, are usually considered. Beamwidth planning depends on the angle of incidence of the beam on the earth, i.e. the angle of elevation of the satellite, and can be directly controlled by the size of the on-board parabolic antenna. Present-day launchers can carry antennas of around 3 m, giving a minimum beamwidth of about 0.6°. Given the difficulties of accurate station-keeping, it is prudent to allow a margin of around 0.1° when planning the footprint to cover a country. Some satellites employ additional antennas to emit spot beams that cover regions beyond the normal oval shape. Finding the slant range of a satellite involves calculating the distances from the boresight point of the beam covered by the semi-beamwidth angle, considering the geometry of the footprint.

7.1.3 Satellite-Earth Link Budget

The free space loss depends on the path length d, which is related to the angle of elevation. The radio waves also undergo attenuation due to scattering and absorption in the lower layers of the atmosphere and by rain, clouds, etc. The atmospheric loss depends on the length of the path through the atmosphere, and naturally increases at lower angles of elevation. The loss also fluctuates with time. Hence the maximum attenuation values encountered for 99% or 99.9% of the time during which satellite broadcasts are received are considered, depending on the degree of reliability sought. For 99% reliability, the attenuation in the 12 GHz band increases from, e.g., 1.5 dB at a 45° angle of elevation to 6.8 dB at 5°; for 99.9% reliability it increases to 4.8 dB and 14 dB respectively.

[Figure: satellite-earth link over path length d, showing transmit power Pt and gain Gt, free-space loss Lfs, atmospheric loss, receive gain Gr, and a down-converter at the receiving end.]


At the satellite transponder, a power amplifier feeds power Pt to the transmitting antenna of maximum directive gain Gt. The maximum radiated power (EIRP) from the antenna is Pt·Gt.

In decibels, EIRP = Pt + Gt. As this power propagates towards the earth it spreads into space and suffers the so-called free space loss. The spreading factor is 4πd². The power flux density along the direction of maximum radiation is PFD = Pt·Gt / (4πd²)

When a parabolic dish receiving antenna is positioned to collect maximum power from the radiated power, the total power intercepted and received is given by

Pr = PFD·Aeff

where Aeff is the effective dish area or aperture (= η·A, the efficiency coefficient η accounting for the dish coupling loss).

The power gain of an antenna (in the direction of maximum directivity) with respect to an isotropic antenna is given by the basic relation:

G = (4Π· Aeff) / λ2

where λ is the received wavelength, or Aeff = G·λ²/(4π). If Gr is the maximum directivity gain of the receiving antenna,

Pr/Pt = Gt·Gr·(λ/(4πd))²

This can be expressed conveniently in dB to give

(Pr/Pt) dB = (Gt) dB + (Gr) dB + 10 log (λ/(4πd))²

The last term represents the spreading loss, known as the free space propagation loss. Thus the free space propagation loss in dB between isotropic antennas is given by

Lfs = 10 log (Pt/Pr) = 10 log (4πd/λ)²
    = 20 log (4πd/λ)
    = 20 log (4000π·f·d/0.3), where d is in km and f is in GHz
    = 32.45 + 60 + 20 (log f + log d) dB
    = 92.45 + 20 log f + 20 log d
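The final expression is straightforward to evaluate; a small sketch:

```python
import math

def free_space_loss_db(f_ghz, d_km):
    """Free-space path loss between isotropic antennas, in dB.

    Implements Lfs = 92.45 + 20*log10(f) + 20*log10(d),
    with f in GHz and d in km.
    """
    return 92.45 + 20 * math.log10(f_ghz) + 20 * math.log10(d_km)

# A 12 GHz downlink from a geostationary satellite at the sub-satellite
# point (d = 35,786 km) suffers roughly 205 dB of path loss.
loss = free_space_loss_db(12.0, 35786.0)
```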

For a geostationary satellite with a footprint away from the equator, the path length d is related to its height h (= 35,786 km) above the equator, the earth's radius R (6378 km) and the angle of elevation β by the simple geometric relation

(R + h)² = R² + d² − 2Rd cos (90° + β) = R² + d² + 2Rd sin β


i.e. (42164)² = 6378² + d² + 12756·d·sin β

where β is the angle of elevation of the satellite above the horizon.
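Solving this quadratic in d gives the slant range at any elevation angle; a sketch taking the positive root:

```python
import math

R = 6378.0      # earth radius, km
H = 35786.0     # geostationary satellite height above the equator, km

def slant_range_km(elevation_deg):
    """Slant range d (km) to a geostationary satellite.

    Solves d^2 + 2*R*sin(beta)*d + R^2 - (R+H)^2 = 0, the quadratic form
    of (R+h)^2 = R^2 + d^2 + 2*R*d*sin(beta), for the positive root.
    """
    s = R * math.sin(math.radians(elevation_deg))
    return -s + math.sqrt(s * s + (R + H) ** 2 - R * R)

# Looking straight up from the sub-satellite point (beta = 90 deg) the
# slant range equals the satellite height; at the horizon it is longest.
d_zenith = slant_range_km(90.0)    # 35,786 km
d_horizon = slant_range_km(0.0)    # about 41,679 km
```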

The received power in dBW is the algebraic sum of the following factors expressed in dB: EIRP, free space propagation loss, feed-line loss, receiving antenna gain, and any other losses such as polarization mismatch, antenna pointing error, atmospheric loss, etc., as applicable.
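A sketch of such a budget; all the numbers below are purely illustrative, none of them come from the report:

```python
import math

def received_power_dbw(eirp_dbw, f_ghz, d_km, rx_gain_db, other_losses_db=0.0):
    """Received power in dBW: EIRP - free-space loss + Rx gain - other losses."""
    lfs = 92.45 + 20 * math.log10(f_ghz) + 20 * math.log10(d_km)
    return eirp_dbw - lfs + rx_gain_db - other_losses_db

# Illustrative Ku-band downlink: 52 dBW EIRP, 12 GHz, 38,000 km slant
# range, 40 dB receive dish gain, 3 dB atmospheric and pointing losses.
pr = received_power_dbw(52.0, 12.0, 38000.0, 40.0, 3.0)
```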

7.1.4 Beam Width

The radiation pattern from a parabolic dish can be calculated from the equation:

E(Φ) = 2·J1(u)/u, where u = (π·D/λ)·sin Φ

where J1 = Bessel function of the first kind of order one, D = diameter of the parabolic dish, and

Φ = angle of direction with respect to the principal axis of the antenna aperture.

The expression (argument) within the brackets is evaluated and the Bessel function obtained for that argument from Bessel function tables or graphs. The values of the argument for which the Bessel function becomes zero are 3.83, 7.02, 10.17, 13.32, and so on. The angle of the radiation pattern where the first null occurs is given by

(π·D/λ)·sin Φ = 3.83

which gives:

sin Φ = 3.83·λ/(π·D) = 1.22 λ/D, and hence

Φ = 1.22 (λ/D) radians, or Φ = 70 (λ/D) degrees
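The first null can be verified numerically by computing J1 from its power series, avoiding any special-function library; a self-contained sketch:

```python
import math

def j1(x, terms=40):
    """Bessel function of the first kind, order one, by its power series:
    J1(x) = sum over k of (-1)^k / (k! (k+1)!) * (x/2)^(2k+1)."""
    total = 0.0
    for k in range(terms):
        total += ((-1) ** k / (math.factorial(k) * math.factorial(k + 1))
                  * (x / 2.0) ** (2 * k + 1))
    return total

def first_null():
    """Locate the first positive zero of J1 by bisection on [3, 4.5]."""
    lo, hi = 3.0, 4.5
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if j1(lo) * j1(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

u0 = first_null()      # about 3.83, matching the tabulated value
# First-null angle of a 3 m dish at 4 GHz (wavelength 7.5 cm):
phi = math.degrees(math.asin(1.22 * 0.075 / 3.0))   # about 1.75 degrees
```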

The main lobe of a circular dish lies between the first nulls on either side, i.e. within twice this angle. The 3 dB beamwidth of the main lobe is given by the half-power lobe angle

Φ(3 dB) = 58 (λ/D) degrees

It may be observed that the antenna gain is inversely proportional to the square of the beamwidth. That is, a decrease of the beamwidth by a factor of 2, obtained by doubling the diameter of the dish, increases the antenna gain by a factor of 4 (6 dB).
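A small sketch of the beamwidth and gain relations, assuming a typical 55% aperture efficiency (my assumption, not a figure from the report):

```python
import math

def beamwidth_3db_deg(d_m, f_ghz):
    """3 dB beamwidth of a circular dish: 58 * (lambda / D) degrees."""
    lam = 0.3 / f_ghz      # wavelength in metres
    return 58.0 * lam / d_m

def dish_gain(d_m, f_ghz, efficiency=0.55):
    """Linear gain G = eta * (pi * D / lambda)^2."""
    lam = 0.3 / f_ghz
    return efficiency * (math.pi * d_m / lam) ** 2

# Doubling the dish diameter halves the beamwidth and quadruples the gain.
g1, g2 = dish_gain(1.5, 12.0), dish_gain(3.0, 12.0)
bw = beamwidth_3db_deg(3.0, 12.0)    # about 0.48 degrees for a 3 m dish
```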

7.2 Earth Station


An earth station or ground station is the surface-based (terrestrial) end of a communications link to an object in outer space. The space end of the link is occasionally referred to as a space station (though that term often implies a human-inhabited complex).

The majority of earth stations are used to communicate with communications satellites, and are called satellite earth stations or teleports, but others are used to communicate with space probes, and manned spacecraft. Where the communications link is used mainly to carry telemetry or must follow a satellite not in geostationary orbit, the earth station is often referred to as a tracking station.

A satellite earth station is a communications facility with a microwave radio transmitting and receiving antenna and the receiving and transmitting equipment required for communicating with satellites (also known as space stations).

Many earth station receivers use the double superhet configuration, which has two stages of frequency conversion. The front end of the receiver is mounted behind the antenna feed and converts the incoming RF signals to a first IF in the range 900 to 1400 MHz. This allows the receiver to accept all the signals transmitted from a satellite in a 500 MHz bandwidth at C band or Ku band, for example. The RF amplifier has a high gain and the mixer is followed by a stage of IF amplification. This section of the receiver is called a low noise block converter (LNB). The 900-1400 MHz signal is sent over a coaxial cable to a set-top receiver that contains another down-converter and a tunable local oscillator. The local oscillator is tuned to convert the incoming signal from a selected transponder to a second IF frequency. The second IF amplifier has a bandwidth matched to the spectrum of the transponder signal. Direct broadcast satellite TV receivers at Ku band use this approach, with a second IF filter bandwidth of 20 MHz.
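As an illustration of the first conversion stage, assuming the conventional 5150 MHz C-band LNB local oscillator (an assumption on my part, not a figure from the report):

```python
def lnb_first_if_mhz(rf_mhz, lo_mhz=5150.0):
    """First IF out of a C-band LNB with high-side injection: IF = LO - RF.

    The 5150 MHz local oscillator is the common C-band convention,
    assumed here for illustration.
    """
    return lo_mhz - rf_mhz

# The 3700-4200 MHz C-band downlink maps into the 950-1450 MHz first IF
# carried over the coaxial cable to the set-top receiver.
if_low = lnb_first_if_mhz(4200.0)    # 950 MHz
if_high = lnb_first_if_mhz(3700.0)   # 1450 MHz
```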


[Figure: block diagram of an earth station]

7.3 UPLINKING AND DOWNLINKING

A satellite receives television broadcasts from a ground station. This is termed "uplinking" because the signals are sent up from the ground to the satellite. These signals are then broadcast down over the footprint area in a process called "downlinking".


To ensure that the uplink and downlink signals do not interfere with each other, separate frequencies are used for uplinking and downlinking.

Band                      Downlink Freq (GHz)    Uplink Freq (GHz)
S Band                    2.555 to 2.635         5.855 to 5.935
Extended C Band (Lower)   3.4 to 3.7             5.725 to 5.925
C Band                    3.7 to 4.2             5.925 to 6.425
Extended C Band (Upper)   4.5 to 4.8             6.425 to 7.075
Ku Band                   10.7 to 13.25          12.75 to 14.25
Ka Band                   18.3 to 22.20          27.0 to 31.00
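The downlink column of this table can be captured as a small lookup (band names and ranges transcribed from the table; the helper function itself is mine):

```python
# Downlink frequency ranges in GHz, as listed in the table above.
DOWNLINK_BANDS = {
    "S Band": (2.555, 2.635),
    "Extended C Band (Lower)": (3.4, 3.7),
    "C Band": (3.7, 4.2),
    "Extended C Band (Upper)": (4.5, 4.8),
    "Ku Band": (10.7, 13.25),
    "Ka Band": (18.3, 22.20),
}

def downlink_band(f_ghz):
    """Name of the first band whose downlink range contains f_ghz, else None."""
    for name, (lo, hi) in DOWNLINK_BANDS.items():
        if lo <= f_ghz <= hi:
            return name
    return None
```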

7.4 Transponders

The word transponder is coined from transmitter-responder and it refers to the equipment channel through the satellite that connects the receive antenna with the transmit antenna. The transponder itself is not a single unit of equipment, but consists of some units that are common to all transponder channels and others that can be identified with a particular channel.
