
SMPTE Motion Imaging Journal, April 2006 • www.smpte.org

Digital Cinema Image Representation Signal Flow

By John Silva

The purpose of this paper is to initiate the reader to digital cinema as a technology; to discuss image signal flow down the system pipeline in feature post-production, as well as through the distribution network and associated theater system; to provide short tutorials on related technologies; and to enumerate important technical considerations and related aspects that need to be understood in making the system perform as intended.

Approximately 200 professional volunteers have laboriously worked over the years, each in various ways, to study related technologies and engineering practices needed to form this new and unique digital cinema moving image technology. Along the way, special technical ad-hoc committees were established to develop a practical digital cinema system having appropriate technical standards, recommended practices, and engineering guidelines. The latter will serve to guide and support a successful business model that will benefit and include major film studios, feature post-production facilities, satellite and high-speed network carriers, digital cinema theater owners and crews, and, last but not least, satisfied theater audiences.

It should be noted that even though the digital cinema system must be well defined, there is an extremely fine line between related standards (requirements) and the many implementations involved to make the system capable of delivering overall cinema picture quality that will exceed that of 35mm film answer prints as viewed by audiences in film-based theaters. In particular, this includes existing, and yet to be developed, theater digital projection systems at prices affordable to theater owners.

Therefore, the standards written must define system requirements, but must not exclude manufacturers from developing improved and competitively priced supporting equipment, including digital projection systems. This was, and is being, made possible through the formulation of related Recommended Practices and Engineering Guidelines as supporting documents to the standards. These implementations will be discussed in this paper and will serve as examples to allow a full description of successful and proven methods that can be used within the system.

Beyond this, it will be up to manufacturers to develop digital projectors for digital cinema theaters and other related equipment that will deliver improved picture and sound quality with reasonable and affordable pricing.

History

Until a few years ago, the two entities of TV broadcast and motion picture film production chose to ignore each other, each acknowledging that the other was to be tolerated, but not respected. Television-oriented people felt comfortable with their agenda of breaking news, sporting events, talk shows, and syndicated shows recorded on videotape. Film-oriented people, on the other hand, concluded that the mediocre picture quality produced by television, compared with that of film, would never serve the needs of higher quality theater markets all over the world. Then digital image technology was born. This was followed in later years by the introduction of high-definition television (HDTV), which ultimately provided TV-generated picture quality far superior to that produced by NTSC, the first, and still existing, monochrome/color-compatible television system in the U.S., named in honor of the National Television System Committee that developed it in 1941.

In the years following, digital image technology became the backbone of post-production workflow, first for HDTV production, and then for the creation of special effects used within feature films. Later, its usage was extended to signal correction and processing, as provided in feature film post-production, with eventual transformation back to film for theater exhibition. This was termed the digital intermediate (DI) process.

Time proved that digitally performed special effects and signal processing for film were not only considerably less expensive, but could be accomplished in far less time and with noticeably improved picture quality.

As the years progressed, digital cinema, the technology behind electronic movies, became a viable concept, offering a promising future business model for major film studios and producers. Now digital projectors are beginning to replace existing film projectors in movie theaters all over the world.

Feature program content is originated from both film and digital cameras. Also, digitally captured image data signals for both types of origination sources are being processed through digital cinema post-production pipelines before reaching intended theaters via high-speed networks, including satellite communications. Standards for this new technology that will support the new digital cinema business model are now being written.

In the meantime, the digital post-production workflow, through significant technology advancements, has made giant steps forward in providing improvements in digital data signal processing, color correction/enhancement, and working data storage network devices. Together, this has resulted in the elimination of multitudes of non-realtime signal-processing bottlenecks, providing vast improvements in resultant perceived picture quality as viewed by audiences in theaters. As a result, motion picture feature film producers have required that their directors and cinematographers follow the new digital workflow all the way through from beginning to end. This will ensure that the storytelling can be extended and/or enhanced by the use of digital color processing to produce image enhancements in colors, shades of gray, and textures, which will serve to produce the desired emotional response in audiences. Now, meaningful dialogs between the film and television camps are taking place on a daily basis.

Digital Cinema Moving Image Technology

By definition, digital cinema is a modern, electronic moving image technology that was conceived and designed to provide a completely new business model for producing digitized feature movies to be shown on screens in digital cinema theaters throughout the world, without the necessity of film prints and film-based projectors.

The digital cinema signal flow process throughout the system is divided into three phases: mastering, distribution, and exhibition.

• Mastering includes post-production development of the Digital Cinema Distribution Master (DCDM) from Digital Source Master (DSM) playout.

• Distribution includes the transmission of the completed and uncompressed DCDM down the digital cinema network, followed by essence signal compression, encryption, packaging, and transport via high-speed networks to intended theaters. Distribution concludes with all layers of the DCDM being stored on disk memory.

• Exhibition includes playout from disk memory, DCDM layer separation, implementation of intense security measures, use of auxiliary data to control lights, curtains, etc., at the theater, essence signal decryption and decompression where required, signal transformation, and final projection for audience viewing.

The digital cinema system flow path, for description purposes in this paper, will be extended to the system front end to include both film and digital camera scene acquisition, production, and feature post-production.

Digital cinema is datacentric in form, meaning selected origination signals from film or digital cameras are processed, edited, distributed, and projected in digital form. It has two basic acquisition modes, each relating to scene image capture (origination): motion picture film acquisition or digital camera acquisition.

(1) Motion picture film acquisition

Scene images are captured with motion picture film cameras. This is followed by chemically developing the exposed film into what is termed the original negative. In most cases an interpositive (a second-generation copy) is then made from the original negative for film-scanning use.

This content will consist of film clips of feature segments and camera shots in bits and pieces, which will then be assigned to designated working film reels in accordance with the feature script.

Using this origination content, an edit decision list (EDL) will be developed in an offline session to determine the actual frame sequences that will be film-scanned in feature post-production. Once film-scanned, the resultant digitized image representation signals will be transformed into image representation coded data files and then transferred to a working digital archive in feature post-production, to become working digital negatives called "digital reels." In this state, the selected raw program content is immediately ready for post-production feature assembly. Stored data in the digital archive can be acquired almost instantly, to be processed and edited as desired at specifically designated workstations, without the necessity of first making copies for usage. Once processed by a workstation, the digitized content is returned to the working digital archive to reside as the "feature-in-process" workmaster, and is then available to be subsequently processed or modified by other workstations along the pipeline.

Within the combined array of workstation operations in feature post-production, the following types of signal correction and processing can be made to the delivered digital negative after film origination:

(1) Color correction/grading

(2) Gamma

(3) Cropping

(4) Lift

(5) Painting/special effects

(6) Dust busting

(7) Grain matching

(8) Noise reduction

(9) Compositing

(10) Final editing

(2) Digital camera origination

In this mode, scene images will be captured with digital cameras that will produce digitized image representation signals in coded-data form on playout. As in the case of film acquisition, the digitized playout signals will be transferred to the working digital archive in feature post-production. From there they will become working digital negatives, immediately available for image processing and editing, with the exception that dust-busting and grain-matching will not be needed.

Due to recent significant advancements in digital technology, such as high-speed, high-bandwidth, uncompressed digital data flow, as well as the evolution of the storage area network (SAN) with a common file system, equipment supporting the Gigabyte System Network (GSN), High-Speed Data Link (HSDL), and other related technologies is now becoming available. Therefore, when implemented in feature post-production, this equipment will not only allow immediate acquisition by workstations of working digital archival data content, but will further provide realtime and semi-realtime processing, which has not been available until now.

In his paper, “A Datacentric Approach to Cinema Mastering,”1 Thomas True clearly explains what has happened and is happening in mastering methodology, which is currently available to digital cinema and represents good news for its implementation, for the present and the future.

Back to system flow realities: the colorless butterfly is the object in the scene that the film camera lens is focused on, as shown in Fig. 1. It is represented here in achromatic (monochromatic) form because it has no outward physical properties of color. The butterfly object instead reflects specific visible wavelengths of illuminating visual stimuli (electromagnetic energy), which the camera lens then focuses onto successive sprocket-driven frames of raw motion picture film stock. In the diagram, the observer is shown viewing the butterfly in the studio set.

The viewer will perceive its various colors by virtue of the visual stimuli being reflected off the illuminated insect, entering his eyes, and passing through his internal visual system. Therefore, the observer's perceived butterfly-object appearance is represented as a colored object.

The above applies as well to all illuminated scene objects, cameras, and scene observers in the digital camera acquisition mode.

Regarding Film Scanners

Telecine film chains or film scanners are used to capture and digitize feature film footage. For SDTV and 1920 x 1080 HDTV content, telecine film chains will work. For 4096 horizontal pixel counts, such as will be employed for digital cinema, the upgrade to high-quality motion picture film scanners will be required.

Briefly stated, motion picture film scanners are somewhat similar to telecine film chains but, like digital cameras, do not define a direct color space and associated primary set. This will be defined as the image representation signals progress along the system to the point where they will be used to feed a digital cinema reference display device, such as a feature post-production workstation control or screening room projector or monitor.

How Light Translates to Dye Densities on Negative Film

Figure 2 represents a cross-section of 35mm motion picture raw film stock before exposure. It consists of four separate layers, three of which are individual coatings of silver-halide crystal grains, which provide superimposed mosaics of blue-, green-, and red-light-sensitive surfaces, all sequentially coated onto the top surface of a transparent support structure underneath.

The top layer is blue-light-sensitive, the third layer is green-light-sensitive, and the fourth layer is red-light-sensitive. Each light-sensitive layer is chemically treated during the manufacturing process to provide its desired individual spectral sensitivity. The second (yellow-colored) layer, sequentially coated between the blue- and green-light-sensitive layers, acts as a blue filter protecting the green- and red-light-sensitive layers, which have a discernible sensitivity to blue light. This is due to certain wavelengths of blue spectral sensitivity overlapping with wavelengths of those for green and red. This yellow filter layer will become colorless once the film is chemically processed (developed).

When film stock is exposed to scene visual stimuli via a film camera and lens, each layer of silver-halide crystals changes in chemical character. This occurs in accordance with scene light exposures incrementally reaching each of the overlaid light-sensitive surfaces of each film frame, in the form of latent negative images.

When the film is chemically processed:

• Blue layer mosaics of exposures are replaced with equivalent respective mosaics of proportional densities of complementary-colored yellow dye.

• Green layer exposures are replaced with equivalent respective mosaics of proportional densities of complementary-colored magenta dye.

• Red layer exposures are replaced with equivalent respective mosaics of proportional densities of complementary-colored cyan dye.

The collective summation of these three layers of complementary-colored dye densities, frame by frame, makes up the original negative film, which in this negative form contains valid image representations of visual stimuli from the original scene.

Figure 1. Scene object as perceived from reflected visual stimuli.

Figure 2. Motion picture film raw stock light-sensitive layers.

Basic Film-Scanner Action

Most transmission scanners are essentially tricolor densitometers. The process begins by transmitting a white-light source through sequential sprocket-driven frames of chemically processed motion picture film consisting of overlaid mosaics of yellow, magenta, and cyan dye densities that serve as light-modulating filters. This results in three combined sources of yellow, magenta, and cyan visual stimuli, which represent individual residual amounts of the respective dye-filtered white source light.

These combined sources, which have retained their individual identities to respective sources of visual stimuli from the original scene, are next sorted by color-separation optics and then directed over separate paths into individual associated photodetector sensors. In combination, these produce triplets of complementary color-formed, negative analog image representation signals. From here, the triplet signals are each quantized and coded in digital form for subsequent processing down the pipeline, but have yet to define an associated color space and primary set. This will not be done until the image representation signals reach a point in system flow where they will feed a feature post-production color-control or screening projector or monitor. To make this happen, a matrix will be applied that translates from film dye code values (valid image representation signals) to the primaries of the display device.

As a top-of-the-line film scanner will be designed to distinguish between the finest color differences of negative (and positive) film material, the digital data output will be related to the full colorimetric content of the film.

Also, the film scanner will not change the color representation of the film material. This means that if the color space and primary set of the visual display includes all possible film colors, the above top-of-the-line film scanner will be compatible with this color space as well.

The Generation and Progression of Image Representation Signals in Digital Cameras

Digital cameras do not have direct primaries. Instead, they have "taking characteristics," which, for practical reasons in manufacturing, are altered versions of the calculated ideal color-matching function curves for digital cameras.

The plot points for these curves are calculated starting with the primaries of the control monitor or projector used to adjust the digital camera controls to ensure or produce acceptable pictures. These ideal color-matching functions are spectral responsivity curves related to the perceptual color vision of the average, color-normal human observer (i.e., the CIE 1931 2° standard observer2).

Originally, in 1931,2 the plot points for these ideal curves were determined by the use of a colorimeter, which allowed a qualified observer to make perceptual color matches between two adjacent semicircular areas, called fields. The first field was formed by a projection of successive predetermined single, monochromatic wavelengths of visual stimuli (the reference field).

The second, matching field was formed by the resultant visual stimuli produced by superimposed projections of individual and adjustable intensities of a particular set of three independent red, green, and blue primary light sources. The process in 1931 was done by changing the reference stimuli in incremental steps, wavelength by wavelength, throughout the visible spectrum, with the observer providing color matches by adjusting the individual intensities of the three RGB tristimuli.

As can be seen, this was a somewhat tedious process. However, in practice, it is not necessary to repeat the experiment for different sets of primaries. Instead, the color-matching functions corresponding to any given set of primaries, such as those of a particular image display device (e.g., a CRT monitor or digital projector), can be readily computed.
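As a rough illustration of that computation, the sketch below (Python with numpy) builds the 3 x 3 matrix that maps a display's linear RGB to CIE XYZ from its primary and white-point chromaticities, and then derives that display's color-matching functions as a linear transform of the CIE 1931 curves. The function names and the idea of loading the CIE curves from tabulated data are assumptions made for illustration, not anything prescribed by this paper.

```python
import numpy as np

def npm_from_chromaticities(rx, ry, gx, gy, bx, by, wx, wy):
    """Normalized primary matrix (linear RGB -> XYZ) from primary and white xy chromaticities."""
    def xyz(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([xyz(rx, ry), xyz(gx, gy), xyz(bx, by)])
    S = np.linalg.solve(P, xyz(wx, wy))   # scale each primary so equal RGB reproduces the white point
    return P * S

# xyz_cmfs: a 3 x N array holding the CIE 1931 2-degree color-matching functions
# (x-bar, y-bar, z-bar sampled across the visible spectrum), loaded from tabulated data.
def rgb_cmfs_for_display(xyz_cmfs, npm):
    """Color-matching functions for the display's own primaries: a linear transform of the CIE CMFs."""
    return np.linalg.inv(npm) @ xyz_cmfs  # the resulting curves contain negative lobes for any real primary set
```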


Digital Camera Image Representation Signal Creation and Processing

In the image acquisition process for digital cameras, preliminary image representation signals are formed from scene visual stimuli through the sequential action of a taking lens, separation optics, and RGB optical trim filters, followed by individual R, G, and B pickup devices. The combined optical action of these elements provides a similar but altered filtering of the primary sources of visual stimuli, in a manner somewhat, but not directly, related to the calculated ideal color-matching function curves mentioned above. However, in actual practice, using a direct relationship with visual display color-matching functions will not produce the correct or desired results.

Calculated color-matching functions corresponding to any set of physically realizable visual display primaries will have some negative lobes, such as is shown in Fig. 3. To compound the issue, in actual practice, the color-matching function portions of negativity are even greater than shown in Fig. 3.

The original CIE experiments were done with monochromatic matching primaries (each having a 1 nm bandwidth), which produced curves with less negativity than those in real-world situations, where primary stimuli having much greater bandwidths exist. Since the calculated color-matching functions that define the theoretically desired spectral sensitivities of the digital camera have significant portions that represent negative values, those respective sensitivities cannot be physically realized as such. Therefore, real cameras are built with optical components and sensors that produce all-positive spectral sensitivities that will be somewhat similar, but not identical, to the CIE XYZ curves, as shown in Fig. 4.

As a result, signal values produced by a sensor having such spectral sensitivities are always positive. However, interpolating forward, those sensitivities, and all other sets of all-positive sensitivities, will correspond to display primaries that are not physically realizable. Therefore, real cameras designed under these criteria would correspond to imaginary displays, and real displays would correspond to imaginary cameras.

To resolve this dilemma, a matrix is applied to these image signals to transform the signal values to those that would have been formed if the camera had been able to implement the theoretical sensitivities corresponding to the color-matching functions of the actual display primaries. It is at this point in the signal path that some negative signal values are created in the process.
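The short sketch below illustrates the effect with a purely hypothetical 3 x 3 camera matrix and sensor triplet (the numbers are invented for illustration and do not describe any real camera): the all-positive sensor values map to display-referenced values in which some components go negative.

```python
import numpy as np

# Hypothetical camera matrix: maps all-positive sensor RGB to signals referenced
# to the display primaries (illustrative values only).
CAMERA_MATRIX = np.array([
    [ 1.70, -0.50, -0.20],
    [-0.20,  1.40, -0.20],
    [ 0.00, -0.30,  1.30],
])

sensor_rgb = np.array([0.05, 0.90, 0.10])   # a saturated green patch; sensor output is all positive
display_rgb = CAMERA_MATRIX @ sensor_rgb
print(display_rgb)   # some components are negative: the color lies outside the display's gamut
```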

How and when these signals are processed as they proceed along the digital cinema pipeline must be determined by systems and equipment designed to meet post-production Reference Projector and theater projector viewing requirements. For example, they might simply be clipped, or remapped (gamma mapped) to produce a more pleasing, or intended, appearance. Nothing can be done to increase the color gamut defined by the chromaticity boundaries of the actual display devices used.


Figure 3. Ideal RGB color-matching function curves (1931).

Figure 4. Ideal RGB color-matching function curves (1931), modified in camera design to produce all-positive spectral sensitivities.


For this reason, the display matrix should be applied as late as possible in the pipeline, or some means will need to be employed to retain the negative values for use with other types of displays having larger color gamuts. In addition, the actual spectral sensitivities of a film scanner or digital camera will not correspond directly to any set of color-matching functions. For many practical reasons, including minimizing noise buildup, deliberate departures from color-matching functions are made.

Despite the fact that the resultant pictures are obtained using manufacture-skewed taking characteristics, the image representation signals serve to be satisfactorily representative of the color-producing performance of the feature post-production workstation image display devices, as well as of the mastering Reference Projector and the digital cinema theater digital projectors that follow. This does not mean that control room monitor or projector development in the future will not produce image control display devices having primaries and displayable picture quality equal or comparable to those required of the Reference Projector and digital cinema theater projection systems of that time.

When this does occur, the new representative display primaries will be used to determine associated and calculated ideal color-matching function curves as a starting point in the process, as was mentioned above.3

Digital Source Master

The Digital Source Master (DSM) is a master recording that is developed from digital camera or film-acquired origination content. It is subsequently color corrected, color processed, and edited in feature post-production. All necessary distribution formats, such as NTSC, PAL, DTV, DVD, HDTV, and DCDM, are derived totally, or in part, from DSM program playout content, as well as from archival storage.

The DSM, when originated from digital cameras, will be processed by individual workstations in feature post-production for working data file archiving, signal correction and processing, timing and color correction, editing and conforming, and final color grading. For film acquisition, the operations are the same as above, but the dust-busting and grain-matching processes are added.

Content providers have been given the flexibility of producing DSM-coded image data files having color spaces, color primary sets, and white points of their choice. There are also no restrictions regarding pixel matrix resolutions, frame rates, bit depths, or other related metrics. However, they are expected to be given yet-to-be-defined requirements for processing the DSM image data files to produce the individual master distribution formats and for Image DCDM file development.

Digital Cinema Image Color Data Flow

Figure 5 is the digital cinema image flow diagram. It starts with assembling the digital source master in feature post-production, shown in the large blue box in the upper left corner.

All signal processing action in feature post-production will be recorded in memory and distributed to theater projectors via metadata track files. These will synchronously pass down the digital cinema network with, and via, the associated Image DCDM layered files. Also, note the two smaller boxes within the Feature Post-Production box.

The blue box labeled DSM represents the finished master designated for DCDM development. The yellow box labeled DSM1 represents a copy of the DSM that provides the playout that serves as a feed for development of the DCDM.

The purpose of the DSM copy is to isolate the DCDM development signal media from those intended for additional format sources, such as DVD, HDTV, and so on. As shown in the flow diagram, the next step in the process is to transform the DSM1 output image reference signals, having a particular primary set, color space, white point, and quantization bit depth, to linearized XYZ primary signals with the CIE linearized, Image DCDM-specified primary set, color space, and white point. This operation is shown in the yellow box labeled "DSM to XYZ Transform" (Fig. 5).

Again, implementation of these actions will be left to the manufacturers of equipment containing this circuitry. At this system point, the image signals are ready to be encoded into finalized uncompressed Image DCDM signals. This is shown next to the yellow box to the right in the flow diagram.

The operation involves applying an inverse 2.6 transfer characteristic to the trio of image representation signals and transcoding them from the DSM output signal bit depth to a 12-bit quantization. The reason for processing the data signals to form this combination is to prevent contouring artifacts on scene objects that will be viewed by audiences.

Until recently, layered DPX frame files had been chosen to serve as carriers of DCDM image representation data to feed the Reference Projector in post-production and the compression engine down the network. However, DPX files were found to support 10-bit data, but could not work with the 12-bit data that is needed. As a result, the use of DPX files as DCDM carriers has been abandoned, and constrained TIFF frame files, which are designed to work with 16-bit data, have been considered instead. It has been proposed that the 12-bit quantized Image DCDM signals be transposed into the 12 most significant bit components of the constrained TIFF frame files, with "zeros" placed in each of the 4 unused least significant bit components, to accommodate the full 16-bit TIFF file requirement. In this form, the constrained TIFF frame files will then be fed to the Reference Projector in post-production and subsequently into the compression engine, which performs the first major step in distribution to associated digital cinema theaters.

On arriving at these first two destinations, the 12 most significant bits of the image data will first be selected to reestablish the original 12-bit DCDM encoded form (minus the "zeros"), and then will be processed through the Reference Projector and compression engine as was intended.
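A minimal sketch of that packing and recovery step is shown below, assuming the straightforward interpretation that MSB alignment of a 12-bit value in a 16-bit word is a left shift by four bits; the function names are illustrative only.

```python
import numpy as np

def pack_12bit_into_16bit(codes12):
    """Place 12-bit DCDM code values in the 12 MSBs of 16-bit TIFF samples, zeroing the 4 LSBs."""
    return np.asarray(codes12, dtype=np.uint16) << 4

def unpack_16bit_to_12bit(samples16):
    """Recover the original 12-bit code values by keeping only the 12 MSBs."""
    return np.asarray(samples16, dtype=np.uint16) >> 4

codes = np.array([0, 2048, 4095])            # example 12-bit X'Y'Z' code values
tiff_samples = pack_12bit_into_16bit(codes)  # -> [0, 32768, 65520]
assert np.array_equal(unpack_16bit_to_12bit(tiff_samples), codes)
```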

Presently Recommended Image DCDM Specifications

The CIE XYZ primary set, and its respective inverse-matrixed RGB primary set, both encompass the complete spectrum locus. In fact, both have very similar, but not identical, respective spectrum-locus chromaticity coordinates (Figs. 6 and 7).


Figure 5. Digital cinema image data flow diagram.

Figure 6. CIE XYZ primary set.

Figure 7. CIE RGB primary set.


The major specifications are:

(1) Full-bandwidth XYZ (not RGB) image representation signals will be coded for compression, without color subsampling.

(2) Digital cinema frame rates will be 24 and 48 frames/sec.

(3) Pixel formats supported are those that can achieve horizontal and vertical image resolutions producing resultant picture detail better than that realized in 35mm film production.

(4) The Image DCDM (image distribution master) serves as an image representation signal container for all elements that make up pictorial content. Among these, pixels are the smallest visible picture elements for all displayed images on the screen. The maximum numbers of active horizontal and vertical pixels that make up projected image content in a screen raster are designated the horizontal and vertical resolutions, respectively.

(5) Presently, digital cinema has two classes of active pixel resolutions, related to the number of active pixels that make up image content across a digital display raster horizontally and the number of active pixels that make up image content vertically down the display raster. The first class is 2048 x 1080; the second (producing higher resolution) is 4096 x 2160. The DCDM, as a carrier, delivers the image signals that represent streams of pixel content, as related to associated visual stimuli, to the Reference Projector screen in mastering and to intended digital cinema theater projector screens.

As a point of clarification, both classes of pixel resolutions represent the maximum possible number of horizontal and vertical pixels in each case. For visual displays, there are several aspect ratios used for features, as shown on visual display screens. To accommodate this, active pixels are reduced horizontally or vertically, accordingly, by introducing black pixels on the respective picture raster edges (Fig. 8). Note that on a given screen raster, the maximum possible number of pixels equals the number of black pixels plus the number of active pixels, both horizontally and vertically; the sketch following this paragraph illustrates the arithmetic. However, to keep the bit rate as low as possible, only data representing active pixels will be sent down the network via the Image DCDM container.

Figure 8. Active and black pixel arrangement to accommodate particular projected aspect ratios.
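The small sketch below illustrates that active/black pixel arithmetic for a 4096 x 2160 container; the two aspect ratios shown are only examples, not values taken from this paper.

```python
def active_pixels(container_w=4096, container_h=2160, aspect=2.39):
    """Active image size inside a DCDM pixel container for a given projected aspect ratio;
    the remaining rows or columns are filled with black pixels (letterbox or pillarbox)."""
    if aspect >= container_w / container_h:           # wider than the container: black rows
        w, h = container_w, round(container_w / aspect)
    else:                                             # narrower than the container: black columns
        w, h = round(container_h * aspect), container_h
    return w, h, container_w - w, container_h - h     # active w, active h, black columns, black rows

print(active_pixels(aspect=2.39))   # (4096, 1714, 0, 446): letterboxed wide-screen frame
print(active_pixels(aspect=1.85))   # (3996, 2160, 100, 0): pillarboxed frame
```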

(6) Encoding primary chromaticities presently recommended are shown in Table 1.

(7) The recommended white point will be EE (the equal-energy white point), for which the chromaticities are shown in Table 2.

Table 1
Primaries    x        y        u'       v'
Red          1.0000   0.0000   4.0000   0.0000
Green        0.0000   1.0000   0.0000   0.6000
Blue         0.0000   0.0000   0.0000   0.0000

Table 2
White point (Illuminant)    x        y        u'       v'
EE                          0.3333   0.3333   0.2105   0.4737
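For reference, the u'v' columns in Tables 1 and 2 follow from the xy columns by the standard CIE 1931-to-1976 chromaticity conversion, as the small check below shows; the printed values match the tables above to rounding.

```python
def xy_to_uv_prime(x, y):
    """CIE 1976 u'v' chromaticity from CIE 1931 xy (standard conversion)."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

print(xy_to_uv_prime(1.0, 0.0))            # X primary: (4.0, 0.0), the first row of Table 1
print(xy_to_uv_prime(1.0 / 3, 1.0 / 3))    # EE white:  (0.2105..., 0.4736...), Table 2
```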

The basic encoding formulas that are applied to each respective X, Y, and Z tristimulus component in the triplet, to accomplish the desired nonlinearity as represented within the dotted enclosure of the flow diagram mentioned above, are:

X' = INT[ CVmax × (X / K)^(1/2.6) ]    1(a)
Y' = INT[ CVmax × (Y / K)^(1/2.6) ]    1(b)
Z' = INT[ CVmax × (Z / K)^(1/2.6) ]    1(c)


Where CV is the calculated code value for the specified encoded tristimulus component.

CVmax = 2^b − 1, where b is the signal quantization bit depth of 12 bits.

X, Y, and Z are the linear image representation triplet components, prior to being processed for the 1/2.6 nonlinearity in the encoding process. These will ultimately produce reflected triplets of respective R, G, and B linear visual stimuli, as measured off the intended projector screen. When the X', Y', and Z' encoded triplet components are all equal to CVmax, the resultant pixel triplet of visual stimuli, as measured off the front of the projector screen, will produce a luminance value equal to the presently recommended 14.0 fL (48 cd/m2).

Presently, K is the normalizing (scaling) constant, recommended to be 52.37, as compared to the previously recommended 48.0 value. Both of these values will produce a maximum luminance value of 48 cd/m2. The rationale used in choosing the 52.37 value is related to the present selection of the EE chromaticity for the DCDM white point. This value will expand the encoded color gamut to include D65 as a potential alternate for the DCDM white point, if desired in the future.

γ (gamma) is the power coefficient, recommended to be 2.6 (γ is the desired nonlinearity determinant).
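A minimal sketch of Equations 1(a) through 1(c) follows, assuming INT denotes rounding to the nearest integer; the sample luminance values are illustrative.

```python
CV_MAX = 2 ** 12 - 1      # 4095 for 12-bit quantization
K = 52.37                 # normalizing (scaling) constant recommended above
GAMMA = 2.6

def encode(tristimulus):
    """Equations 1(a)-1(c): a linear X, Y, or Z value to a 12-bit nonlinear code value."""
    return int(round(CV_MAX * (tristimulus / K) ** (1.0 / GAMMA)))

print(encode(48.0))       # peak luminance of 48 cd/m2 encodes to about code 3960,
                          # leaving headroom below CV_MAX because K = 52.37 rather than 48
print(encode(0.024))      # a theater-black level lands near the bottom of the code range (~213)
```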

It has been calculated that by conforming to all of the above, a contrast ratio of approximately 10,000:1 can be accommodated, even though a universal use of a 2000:1 Reference Projector and exhibition projector contrast ratio is desired for digital cinema at the present time.

An Important Advantage in Using the CIE Color Space and Primary Set for the Image DCDM

All luminance information is carried by the green primary signal (the Y component), which also carries its own color components. The red and blue primary signals carry only their respective color component information. This allows separate-luminance encoding to be implemented, using full-bandwidth RGB component image signals. It also prevents degradation in subsequently perceived picture quality, and additionally avoids a constant-luminance encoding problem in which the R and B primary signals form separate color-difference signals, each of which is subtracted from the luminance signal (e.g., Y-R and Y-B).

In constant-luminance encoding, the color-difference signals carry luminance content, making the eventual decoded R, G, and B signals subject to inherited luminance noise. When the above noise effect does occur, the ultimate picture quality, as viewed on a theater screen, will be reduced accordingly.
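The toy sketch below illustrates the point numerically, under the simplifying assumption that luminance is carried entirely by the Y component of the triplet; all values are invented for illustration. Perturbing X and Z leaves the luminance untouched, whereas the same perturbation on color-difference signals leaks back into the recovered R and B.

```python
import numpy as np

# Invented linear values for one pixel; in XYZ coding the Y component is the luminance.
X, Y, Z = 20.0, 24.0, 26.0
R, B = 22.0, 27.0                      # hypothetical red/blue signals for the same pixel
noise = np.array([0.4, -0.3])          # some channel noise picked up along the way

# Separate-luminance (XYZ-style) coding: noise on X and Z never reaches the luminance channel.
X_noisy, Z_noisy = X + noise[0], Z + noise[1]
print("luminance after noise on X and Z:", Y)                          # still exactly 24.0

# Color-difference coding (Y, Y-R, Y-B): the same noise on the difference signals
# reappears in the recovered R and B after decoding.
d_r, d_b = (Y - R) + noise[0], (Y - B) + noise[1]
print("errors in recovered R and B:", (Y - d_r) - R, (Y - d_b) - B)    # nonzero
```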

The Reference Projector

The Reference Projector, located in the mastering post-production screening room, is the most important working visual display in the entire digital cinema system, from scene to screen. It is used by cinematographers, directors, and other key post-production personnel to make creative and critical color judgments and decisions on all feature program content.

It is highly recommended that the Reference Projector:

• Be a real, color-calibrated, working projector in post-production;

• Have performance specifications that meet the requirements defined in the SMPTE Recommended Practice for Reference Projector and Environment;

• Operate within a controlled viewing environment as specified in the Reference Projector Recommended Practice; and

• Serve as the visual display reference for all digital cinema theater projectors having the same target performance specifications, so that consistent and repeatable color quality for both mastering and exhibition can be achieved.

Ideally, all post-production workstation displays, and the environments where creative decisions are made, should also match the Reference Projector in regard to image and color appearance and performance parameters, within the specified tolerances. This can be better accomplished if the workstation visual displays are digital projectors. This will allow meaningful creative decisions to be made at workstations before and during the final color grading step on the Reference Projector. This is particularly important for occasions where cinematographers and other key feature production personnel sit beside the colorist at a workstation to mutually make creative in-process decisions for features. This will increase the likelihood that such content will be accepted by cinematographers and directors when it is combined to make up the finished feature and viewed on the Reference Projector screen, without incurring unnecessary post-production costs.

DCDM Image Signal Flow to the Reference Projector

As mentioned above, the uncompressed Image DCDM X'Y'Z' image data, which is output-referenced, starts its distribution journey on one or both of two distinct paths. The first leads to the Reference Projector in feature post-production for director and cinematographer viewing. Along this path, the now uncompressed and completed DCDM X'Y'Z' data must first be transformed to output-referenced linear RGB input data in the projector's native RGB color space, primary set, and designated white point. Output-referenced means referenced to projected visual stimuli representing program content, as reflected and viewed, or measured, off the front of the projector screen.

Note in the flow diagram (Fig. 5) that this operation is indicated within the dotted rectangle enclosing the "X'Y'Z' to XYZ Transform" box followed by the "XYZ to Projector RGB Transform" box. This shows an implementation consisting of two separate operations that together perform the X'Y'Z' to projector RGB transformation. This is an implementation method that has been tested and proven to work. However, if manufacturers can devise a more efficient method, they are encouraged to incorporate it into their equipment.

In this implementation, nonlinear X'Y'Z' image data is first transposed to linear XYZ data, which can be done by applying a 2.6 gamma transfer characteristic to the input data via a lookup table.

This is followed by transforming the required linear XYZ data through a 3 x 3 matrix to the Reference Projector light modulator as linear RGB image representation input signals. The reason for this needed linearity is explained in the following paragraph.

It is very important to understand that transform matrices, as used in digital cinema, are linear signal-processing entities. As such, they require linear signal inputs and, in turn, produce linear output signals. As linear entities, the associated arithmetic is reasonable in complexity and in cost, as opposed to the alternative of transforming nonlinear signals by other matrices.

The matrix is shown below in Equation 2.

The Reference Projector's light modulator then processes these image representation data triplets to ultimately produce, project, and focus respective R, G, and B linear pixels of light (visual stimuli), derived from the projector's Xenon light source, onto the screen for producer, director, and cinematographer viewing.

At this present state of the art, the Reference Projector color space, primary set, and white point are defined by commercially available projectors having Xenon light sources. These are shown in Fig. 9 and Tables 3 and 4.

Table 3
White point (Illuminant)    x        y        u'       v'
Xenon                       0.3140   0.3510   0.1908   0.4798

Table 4
Xenon Primaries    x        y        u'       v'
Red                0.6800   0.3200   0.4964   0.5255
Green              0.2650   0.6900   0.0986   0.5777
Blue               0.1400   0.0600   0.1628   0.1570


Figure 9. Xenon primary-set compared to that for Rec. 709.

[R G B]^T = M^-1 × [X Y Z]^T    (2)

where M is the 3 x 3 matrix formed from the Reference Projector's primary and white-point chromaticities (Tables 3 and 4), and the R, G, B and X, Y, Z values are linear.
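A sketch of how such a matrix can be derived from the chromaticities in Tables 3 and 4 is given below; this is one conventional construction (the normalized primary matrix and its inverse), offered as an illustration rather than as the exact coefficients printed in the original equation.

```python
import numpy as np

def npm(rx, ry, gx, gy, bx, by, wx, wy):
    """Normalized primary matrix (linear RGB -> XYZ) from xy chromaticities of the primaries and white."""
    def xyz(x, y):
        return np.array([x / y, 1.0, (1.0 - x - y) / y])
    P = np.column_stack([xyz(rx, ry), xyz(gx, gy), xyz(bx, by)])
    S = np.linalg.solve(P, xyz(wx, wy))   # scale each primary so equal RGB reproduces the white point
    return P * S

# Xenon Reference Projector chromaticities from Tables 3 and 4.
rgb_to_xyz = npm(0.6800, 0.3200, 0.2650, 0.6900, 0.1400, 0.0600, 0.3140, 0.3510)
xyz_to_rgb = np.linalg.inv(rgb_to_xyz)      # the 3 x 3 transform of Eq. 2, applied to linear XYZ triplets

linear_xyz = np.array([20.0, 24.0, 26.0])   # an arbitrary decoded linear triplet
print(xyz_to_rgb @ linear_xyz)              # linear RGB drive values for the projector's light modulator
```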


Exhibition Projector Input Color Data Flow

The second path for the uncompressed DCDM data leads to the distribution network, where, in sequential order, the following operations take place: compression, encryption, packaging, transport to the intended theater, and storage of all related program content on disk drives at the digital cinema theater. These will be discussed below.

DCDM Data File Layers

The DCDM was originally conceived to be a layered set of DPX-formatted files, separately containing Image DCDM binary-coded data files, subtitle DCDM binary-coded data files, captions DCDM multiple tracks, audio DCDM multiple tracks, and presentation-supporting auxiliary multiple tracks. As discussed above, the DPX-formatted files are now being replaced with 16-bit constrained TIFF files for the reasons mentioned.

Image DCDM files must be prepared for compression, which follows, and then be encrypted for security purposes.

Subtitle DCDM files must also be prepared for compression and then optionally be encrypted, as decided by the content provider.

Audio DCDM, which includes multiple tracks, may be optionally compressed and may be optionally encrypted.

Captions DCDM, which includes multiple tracks, need not be compressed, but may be optionally encrypted.

Auxiliary DCDM, which includes multiple tracks, need not be compressed or encrypted.

Open/close curtains, lights up/lights down, etc., are some of the functions in theaters that are controlled by the auxiliary data.

JPEG2000 Compression

The compression technology chosen for digital cinema is JPEG2000. This was selected by Digital Cinema Initiatives, referred to as "DCI." DCI is a limited liability company formed by the seven major film studios (Disney, Fox, Metro Goldwyn Mayer, Paramount Pictures, Sony Pictures Entertainment, Universal Studios, and Warner Brothers) to ensure that digital cinema standards were written to adequately protect and enhance their combined business model for this technology. Their working members, as a team, have become an integral part of the digital cinema standard-forming process, as well as of the Recommended Practices and Engineering Guideline supporting documents.

The technical description of this technology and its adaptation to digital cinema has been adequately and deeply covered in a book and white paper written by David S. Taubman and Michael W. Marcellin.4

It should be noted that JPEG2000 was selected because of its tremendous flexibility, as well as its ability to deliver excellent picture quality. One important feature is its ability to compress both 2k (2048 x 1080) and 4k (4096 x 2160) pixel resolutions with one pass of 4096 x 2160 down the network. Either 2k and/or 4k resolutions can be programmed to be sent to selected digital cinema theaters. Theaters where only 4k is sent will have the choice of using it as such or extracting the 2k from the 4k, depending on the projector capability, without any loss of picture quality.

A second feature is that the JPEG2000 compression engine is primary-set independent.

Another advantage is that separate related signals simultaneously passing through can be selectively compressed or ignored and seamlessly passed along together. For example, accompanying metadata in MXF track files are not compressed, but are sent along in the output codestreams, in sync with the compressed image representations. Among other features, JPEG2000 also has the ability to extract sub-frame objects within full frames without any loss of quality.




Final Steps in the Delivery of Features to Intended Theaters

Once the applicable DCDM files have met their individual compression and/or encryption requirements, all of the above layers of files will be combined together into a media package for transport to intended theaters by (1) high-speed terrestrial network, (2) low-speed data service or satellite, or (3) courier.

An important advantage of digital cinema is that content providers are able to simultaneously transport digital features to intended theaters worldwide. It also provides a much shorter distribution time compared to that required for film, and millions of dollars are saved by not having to generate and physically deliver multigeneration release prints to theaters.

In addition, if for any reason multigeneration copies are required during the distribution phase, digital features do not lose picture quality. This is not the case for film release prints. Once the packaged DCDM content is received at a theater facility and recorded on storage disks, the digital cinema distribution phase is complete.

The Exhibition Phase

The exhibition phase begins when the stored DCDM combined content is played out for audience viewing, by either an associated data-push server or a data-pull playout device, depending on the theater equipment installed.

On playout from disk memory, the individual layered content, of which only some layers are compressed and encrypted, enters what has been named the media block. There, the combined layers of production content are first individually extracted from the group and then separated into individual data codestreams or data components. From here, for example, the Image DCDM codestream is first decrypted and then decompressed. Then, the image representation signals once again become authentic DCDM image representation codestreams in 12-bit, binary-coded, nonlinear X'Y'Z' triplet form.

From this point on, as shown in the flow diagram (Fig. 5), the recovered DCDM image representation signals are processed in the same manner as the original uncompressed DCDM signals were when directed to the Reference Projector in mastering. To enumerate, again as an example implementation, the nonlinear DCDM X'Y'Z' triplets are first transposed to linear XYZ data. This is done by applying a 2.6 gamma transfer characteristic to the input data via a lookup table. This also applies to the signal flow to the Reference Projector as described above, as do the decoding formulas that follow. This provides a resultant transfer characteristic of 1.0 (1/2.6 x 2.6 = 1) between this system point at the exhibition projector input and the first instance where linear XYZ image signals were created by transformation from a copy of the DSM (the DSM1) in feature post-production.

The decoding formulas that accomplish this are:

X = K × (X' / CVmax)^2.6    3(a)
Y = K × (Y' / CVmax)^2.6    3(b)
Z = K × (Z' / CVmax)^2.6    3(c)

Where:

• X, Y, and Z are the linear triplet components whose code values, when all equal to CVmax, will ultimately produce the recommended 14.0 fL (48 cd/m2) of luminance, as measured off the front of the projector screen.

• K is a normalizing (scaling) constant, recommended at this time to be 52.37.

• CV is the code value for the specified tristimulus component.

• CVmax = 2^b − 1, where b is the bit depth (12 bits).

• γ (gamma) is the power coefficient, recommended to be 2.6 (γ is the desired nonlinearity determinant).

Figure 10. Screen object as perceived from reflected visual stimuli off the screen.
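A short round-trip check of the encoding and decoding formulas is sketched below; the sample luminance values are arbitrary, and the check simply reflects the overall 1/2.6 x 2.6 = 1 transfer characteristic noted above.

```python
CV_MAX = 2 ** 12 - 1
K = 52.37

def encode(t):              # Equations 1(a)-1(c): exponent 1/2.6
    return round(CV_MAX * (t / K) ** (1 / 2.6))

def decode(cv):             # Equations 3(a)-3(c): exponent 2.6
    return K * (cv / CV_MAX) ** 2.6

for t in (0.024, 1.0, 14.0, 48.0):
    print(t, decode(encode(t)))   # round trip is transparent to well under one code value
```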

This is followed by transforming the required linear XYZ triplets through a final 3 x 3 matrix, producing linear triplets of R, G, and B image representation signals having the target projector's primary set, color space, and designated white point.

For the system end point in the theater, Fig. 10 illustrates the final image of the original camera-captured butterfly that was signal-corrected and modified as desired in feature post-production, and finally projected on the digital cinema theater screen. Notice that the butterfly, as in Fig. 1, appears graphically as a colorless object. This again is done to illustrate that projected images are not physically colored objects. Instead, they are arrays of pixelated visual stimuli, produced by the projector in accordance with the respective RGB image representation input signal triplets. These were modified versions of the image representations of visual stimuli originally reflected from the butterfly at the scene.

An observer viewing the theater screen is also shown. His view of the screen is made up of electromagnetic energy in the visible spectrum. However, as Fig. 10 shows, this array of pixelated electromagnetic energy allows the observer in the theater to perceive (in color) an acceptable, cinematographer-desired replica of the image of the original butterfly captured at the scene.

Theater Black

Theater black is the term used for the darkest projector screen luminance level of reflected visual stimuli (measured in candelas per square meter, cd/m2) that can be revealed to an audience in a given digital cinema theater. The measurement of screen luminance level to determine theater black will be made with the projector input data triplet code values set to 0, 0, and 0.

The maximum theater projector-generated, screen-reflected luminance is presently recommended to be 14 fL, or 48 cd/m2. The ratio of maximum screen-reflected luminance to theater black, without light spill on the screen, is the projector contrast ratio.

After subjective testing, it has been decided that the Reference Projector in post-production must be able to deliver a contrast ratio of 2000:1, as measured and calculated from reflected luminance off the viewing screen and with no spill-light contamination. In this case, the darkest luminance revealed off the screen (theater black) will be measured at 0.024 cd/m2. This luminance level is subjectively considered equivalent to pure black, because the average observer cannot recognize luminance differences below that level (Fig. 11).

Figure 11. Effect of differing projector contrast ratios on displayable low-luminance picture content.

Observe the luminance (black straight-line) curve for projectors having a contrast ratio of 2000:1. As illustrated, screen-reflected luminance levels, without light spill, will change continuously (no horizontal curve displacement) throughout the projector's lower luminance range of projected visual stimuli. Thus, dark objects occurring in darkened regions of the picture content will be viewed on the screen by the audience, as was intended by the cinematographer, director, and producer. This curve verifies that the lowest luminance that can be produced is 0.024 cd/m2. This is calculated by dividing the maximum luminance value (48 cd/m2) by the projector contrast ratio (48 / 2000 = 0.024 cd/m2).
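That division is the whole calculation; the short sketch below makes the comparison with a lower-contrast projector (such as the 1000:1 case discussed below) explicit.

```python
PEAK_LUMINANCE = 48.0      # cd/m2, the recommended maximum screen-reflected luminance

def theater_black(contrast_ratio):
    """Darkest screen luminance a projector can produce, ignoring any spill light."""
    return PEAK_LUMINANCE / contrast_ratio

print(theater_black(2000))   # 0.024 cd/m2: the Reference Projector target
print(theater_black(1000))   # 0.048 cd/m2: blacks render twice as bright as intended
```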

Unfortunately, two sets of detrimental conditions that cannot be ignored exist in digital cinema theaters, causing low-luminance picture objects not to be viewed by audiences as was intended by the content providers. The first detrimental condition involves the projector contrast ratio in a given theater. If it is 2000:1, as mentioned, dark objects in dark regions of the picture, down to 0.024 cd/m2 luminance levels, will be observed on the theater screen as was intended by the content provider, barring any light spill on the screen.

At the present time, content providers are satisfied

that a projector with a 2000:1 maximum contrast ratio

will allow them to view as many low luminance objects

as needed in post-production, to analyze and make

creative decisions regarding image appearance of this

program content. The problem, however, is that most

digital cinema theater projectors have contrast ratios

less than 2000:1, because of the cost. For example, if

a theater projector has a contrast ratio of 1000:1, as shown by the red curve in Fig. 11, the minimum luminance level it can project on the screen will be 48/1000 = 0.048 cd/m2. As a result, black objects with screen luminance values below 0.048 cd/m2 will not have the same appearance as was desired by the content provider.
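The arithmetic can be summarized in a brief sketch (an illustrative example only; the function and constant names are mine, and the only assumptions are the 48 cd/m2 peak white and the two contrast ratios discussed above):

```python
# Illustrative sketch of the theater-black arithmetic described in the text.
# Assumes the recommended peak white of 48 cd/m^2 (14 fL); names are hypothetical.

PEAK_WHITE_CD_M2 = 48.0  # maximum screen-reflected luminance

def theater_black(contrast_ratio: float, peak_white: float = PEAK_WHITE_CD_M2) -> float:
    """Darkest screen luminance a projector can produce, absent spill light."""
    return peak_white / contrast_ratio

if __name__ == "__main__":
    for cr in (2000, 1000):
        print(f"{cr}:1 projector -> theater black = {theater_black(cr):.3f} cd/m^2")
    # 2000:1 projector -> theater black = 0.024 cd/m^2
    # 1000:1 projector -> theater black = 0.048 cd/m^2
```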

The curves show that intended black picture con-

tent, as input to projectors having contrast ratios lower

than 2000:1, will be seen on the screen as being offset

upward into the gray luminance regions of the picture.

This means that intended black objects in the lower

luminance regions will be viewed on the screen as

gray objects within gray surrounds. This will most like-

ly not be considered acceptable to the content

provider. The solution, of course, is to improve the

maximum contrast ratio capability of all digital cinema

theater projectors so that they will have at least 2000:1

contrast ratios—at affordable cost to theater owners.

The second detrimental condition that occurs in

many digital cinema theaters is spill-light on the pro-

jector screen, caused by aisle safety lighting and the-

ater exit lights required by building and safety codes.

Unfortunately, this light contamination eliminates any

chance of displaying dark objects in darkened regions

of the picture as desired.
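One way to quantify why spill light is so damaging (an illustration of the underlying physics, not a formula taken from the standards): reflected spill light adds to every luminance the projector puts on the screen, so the effective black floor and contrast ratio become

\[
L_{\mathrm{black,eff}} = L_{\mathrm{black}} + L_{\mathrm{spill}},
\qquad
CR_{\mathrm{eff}} = \frac{L_{\max} + L_{\mathrm{spill}}}{L_{\mathrm{black}} + L_{\mathrm{spill}}}.
\]

As a hypothetical example, 0.1 cd/m2 of spill on a 2000:1 projector would raise the black floor from 0.024 cd/m2 to roughly 0.124 cd/m2 and collapse the effective contrast ratio to well under 400:1.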

It is extremely important that research be done to

find ways to eliminate light-spill on theater screens.

Whether this means deflecting this extraneous light away from the screen or employing other measures, it needs to be done to maximize the potential image

quality of digital cinema features shown in theaters

around the world.

As a comparative note, the Reference Projector

screen in post-production viewing rooms, for all sub-

jective purposes, will adequately reveal desired black

program content when projector image representation

triplet codes of 0, 0, and 0 are input. The reason is that the Reference Projector used in each such room will have a contrast ratio of 2000:1. Further, the post-production viewing room environment will be effectively devoid of extraneous spill-light on the screen, as the facility

will not be encumbered with theater safety lighting

requirements.

In consideration of all of the above, when all digital

cinema mastering and theater projectors universally

have 2000:1 contrast ratios, and when their individual

environments are devoid of screen spill light, the darkest

screen-reflected black level, as projected and mea-

sured on the front of the screen—consistent with pro-

jector input triplet code values of 0, 0, and 0—will be:

48/2000 = 0.024 cd/m2.

When this condition universally exists, from feature

post-production to associated theaters, cinematogra-

phers, directors, and producers will be confident that

all their picture enhancements and overall image

appearance subjectively made and approved in post-

production will be viewed as intended, with expected

appreciation by audiences at all associated digital cin-

ema theaters.

Assuming that the complete digital cinema system

works as specified, both observers at the scene and in

the theater, as shown in Figs. 1 and 10, will have

acceptably equivalent color perceptions of the butter-


fly, as they will for all other picture-objects passing

through the system. The observer at the theater, of

course, will have the added advantage of viewing

enhanced versions of original program content, as

was desired by the producer, director, and cinematog-

rapher in feature post-production.

Conclusion

A great deal of information has been presented and discussed in this paper. The goal was to initiate readers to digital cinema as a technology, or to serve as a solid review for professionals directly involved with its

standards-making process. The discussion started

with an overview of the digital cinema standards

process, followed by a brief history of the film/televi-

sion relationship over the years, in which most parties

are now participating together.

This was followed with (1) an elemental discussion

of digital cinema as a moving image technology; (2)

film and film scanners and digital camera design con-

siderations and scene content acquisition; (3) a mini-

tutorial on how light translates to dye densities on

negative film; (4) basic film scanner action; (5) gener-

ation and progression of image representation signals

in digital cameras; (6) digital cameras and their rela-

tionship to visual displays; (7) the digital source mas-

ter and its role in DCDM development; (8) the digital

cinema image color data flow diagram, followed by (9)

signal processing in feature post-production in creat-

ing the DSM; (10) constrained TIFF frame files as pre-

liminary DCDM carriers; (11) presently recommended

Image DCDM specifications; (12) digital cinema pictures as screen rasters of active horizontal and vertical pixels; (13) the DCDM encoding equations

with their formula element descriptions; (14) the

Reference Projector and its special role in digital cine-

ma; (15) DCDM signal flow to the Reference Projector; (16) exhibition projector input color data flow; (17) DCDM data file layers; (18) JPEG2000 compression; (19) the final steps in the delivery of features to digital cinema theaters; (20) the exhibition phase in digital cinema theaters; and (21) the understanding of the-

ater black and the two detrimental theater conditions

that must be eliminated before digital cinema can be

declared supremely successful.

References

1. Thomas J. True, “A Datacentric Approach to Cinema Mastering,” SMPTE Mot. Imag. J., 112:347, Oct./Nov. 2003.

2. CIE, 1931, www.cie.org.

3. Edward J. Giorgianni and Thomas E. Madden, Digital Color Management Encoding Solutions, Addison-Wesley: Reading, MA, 1997.

4. David S. Taubman and Michael W. Marcellin, JPEG2000 Image Compression Fundamentals, Standards and Practice, Kluwer Academic Publishers: Boston, Dordrecht, London, 2002.

Bibliography

Berns, Roy S., Principles of Color Technology, Third Edition, Wiley-Interscience: New York, 2000.

Poynton, Charles, A Technical Introduction to Digital Video, John Wiley & Sons, Inc.: New York, 1996.

Poynton, Charles, Digital Video and HDTV Algorithms and Interfaces, Morgan Kaufmann Publishers: New York, 2002.

Rast, R. M., “SMPTE Technology Committee on Digital Cinema—DC28: A Status Report,” SMPTE J., 110:78, Feb. 2001.

Wyszecki, G., and Stiles, W. S., Color Science, Second Edition, Wiley-Interscience: New York, 2000.


THE AUTHOR

John Silva is known as the “father of airborne news

gathering.” In 1958, as chief engineer of television sta-

tion KTLA in Los Angeles, he conceived, designed,

and developed the world’s first airborne news heli-

copter, which was named the “KTLA Telecopter.” In

1970 he received an Emmy for “outstanding achieve-

ment in newsgathering.” In 1974 he received a second

Emmy for “concept, design, and expertise of the KTLA Telecopter.” Silva also designed and developed the

world’s first frame-by-frame videotape editor called the

TVola in 1961. In 1977 he received the “NAB Engineer

of the Year” award.

Silva is an active participant and contributor on all

four SMPTE Digital Cinema Standards Committees.

A contribution received December 2005. Copyright © 2006 by SMPTE.
