MAD Unit 1 Final

Upload: ujwala-ganga

Post on 03-Apr-2018



  • 7/28/2019 MAD Unit 1 Final


    Unit 1

    Fundamental concepts in Text and Image

    Multimedia and hypermedia

    World wide web

    Overview of multimedia software tools

    Graphics and image data representation

a. Graphics/image data types

    b. File formats

    Color in image and video:

    a. Color science

b. Color models in images

    c. Color models in video

    Multimedia and Hypermedia:

Multimedia: It is the integration of multiple forms of media, including text, graphics, audio, video, animation, etc.

    Ex: i) A presentation involving audio and video clips would be considered a multimedia presentation.

    ii) Educational software that involves animations, sound, and text is called multimedia software.

    iii) CDs and DVDs are often considered to be multimedia formats.

    Classification of Multimedia:

Multimedia is classified into two types: i) Linear Multimedia and ii) Non-Linear Multimedia (Hypermedia).

    i) Linear Multimedia content progresses without any navigational control for the viewer. Ex: movies.

    ii) Non-Linear Multimedia content offers users interactivity to control progress. Ex: games.

Hypermedia: In hypermedia the media is linked or arranged non-sequentially. It is considered to be text-based, but it can include other media such as graphics, images, and especially the continuous media audio and

    video. Hypermedia can be considered one particular multimedia application.

    World Wide Web(WWW):

WWW is the largest and most commonly used hypermedia application. Its popularity is due to the amount of

    information available from web servers, the capacity to post such information, and the ease of navigating such information. WWW technology is maintained and developed by the World Wide Web Consortium (W3C),

    although the IETF (Internet Engineering Task Force) standardizes the technologies. The W3C listed the

    following three goals for WWW:

a) Universal access of web resources.

    b) Effectiveness of navigating available information.

    c) Responsible use of posted information.

i) Hypertext Transfer Protocol (HTTP):

    * HTTP was originally designed for transmitting hypermedia, but it also supports transmission of any file type.


* HTTP is a stateless request/response protocol, in the sense that a client typically opens a connection to

    the HTTP server, requests information, the server responds, and the connection is terminated.

    * The basic request format is:

    method URI version

    additional headers

    message-body

    * The URI (Uniform Resource Identifier) identifies the resource accessed, such as the hostname, always

    preceded by the token http://.

* Method is a way of exchanging information (or) performing tasks on the URI. Two popular methods are

    GET and POST. GET specifies that the information requested is in the request string itself, while POST specifies that the resource pointed to in the URI should also consider the message body.

* The basic response format is:

    Version Status_code Status_phrase

    Additional headers

    ...

    Message body

* Status_code is a number that identifies the response type and the Status_phrase is a text description of it.

    Two commonly seen status codes and phrases are 200 OK when the request was processed successfully

    and 404 Not Found when the URI does not exist.
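The request and response formats above can be exercised with a small sketch (the host name and paths here are illustrative, not from the original notes):

```python
# Build a minimal HTTP GET request following "method URI version".
request = (
    "GET /index.html HTTP/1.1\r\n"   # method URI version
    "Host: example.com\r\n"          # additional header
    "\r\n"                           # blank line ends the headers
)

def parse_status_line(response):
    """Split 'Version Status_code Status_phrase' from a raw response."""
    status_line = response.split("\r\n", 1)[0]
    version, code, phrase = status_line.split(" ", 2)
    return version, int(code), phrase

response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
print(parse_status_line(response))   # ('HTTP/1.1', 200, 'OK')
```

Both of the commonly seen status lines mentioned above parse the same way, e.g. "HTTP/1.1 404 Not Found" yields code 404 with phrase "Not Found".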

ii) Hyper Text Markup Language (HTML):

    HTML is a language for publishing hypermedia on the world wide web.

Since it uses ASCII format, it is portable to all different computer hardware, which allows for global

    exchange of information. The current version of HTML is XHTML, a reformulation of HTML in XML.

HTML uses tags to describe document elements. Tags in the format <token params> define

    the start of a document element, and </token> defines the end of the element.

HTML divides the document into a HEAD and a BODY part as follows:

o The HEAD describes document definitions. It includes the page title, resource links, and meta

    information.

o The BODY part describes the document's structure and content. Common structure elements are

    paragraph, table, forms, links and buttons.

    o Example:


<html>
    <head>
    <title>A sample web page</title>
    </head>
    <body>
    Type text here
    </body>
    </html>

o Drawback: HTML has rigid, non-descriptive structure elements, and modularity is difficult to achieve.

iii) Extensible Markup Language (XML):

o When there is a need for a markup language for the WWW that supports modularity of data, structure, and

    view, we can use XML.

o That is, we would like a user application to be able to define the tags (structure) allowed in a

    document and their relationship to each other in one place, then data using these tags in another

    place.

o XML contains DTDs (Document Type Definitions), which define the document structure with a list of legal elements and attributes.

o In addition to the XML specification, the following XML-related technologies are standardized:

    i) XML Protocol: used to exchange XML information between processes.

    ii) XML Schema: a more structured and powerful language for defining XML data types.

iv) SMIL (Synchronized Multimedia Integration Language):

Purpose of SMIL: it is desirable to be able to publish multimedia presentations using a markup

    language.

A multimedia markup language needs to enable scheduling and synchronization of different

    multimedia elements, and define their interactivity with the user.

The W3C established a Working Group in 1997 to come up with specifications for a multimedia

    synchronization language; SMIL 2.0 was accepted in August 2001.

    The SMIL language structure is similar to HTML. The root element is smil, which contains two

    elements HEAD and BODY.

HEAD describes document definitions. It includes the page title, resource links, and meta

    information.

The BODY part describes the document's structure and content.

Three types of resource synchronization (grouping) are available: SEQ, PAR, and EXCL.

    SEQ specifies that the grouped elements are to be presented in sequential order.

    PAR specifies that the grouped elements are to be presented in parallel.

    EXCL specifies that only one element can be presented at a time.

    Example:
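The example itself did not survive in this copy of the notes. A minimal SMIL sketch of the SEQ and PAR groupings described above (file names are illustrative, not from the original) might look like:

```xml
<smil>
  <head>
    <meta name="title" content="Sample presentation"/>
  </head>
  <body>
    <seq>                            <!-- play children one after another -->
      <par>                          <!-- play children at the same time -->
        <img src="slide1.jpg"/>
        <audio src="narration1.wav"/>
      </par>
      <video src="clip.mpg"/>
    </seq>
  </body>
</smil>
```

Here the image and its narration play together (PAR), then the video follows (SEQ).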


    Overview of Multimedia Software Tools:

    The categories of software tools briefly examined here are:

    1. Music Sequencing and Notation

2. Digital Audio

    3. Graphics and Image Editing

    4. Video Editing

    5. Animation

    6. Multimedia Authoring

    Music Sequencing and Notation:

    CAKEWALK: now called Pro Audio.

    The term sequencer comes from older devices that stored sequences of notes ("events", in MIDI).

It is also possible to insert WAV files and Windows MCI commands (for animation and video) into music tracks (MCI is a ubiquitous component of the Windows API).

    CUBASE: Another sequencing/editing program, with capabilities similar to those of Cakewalk. It includes

    some digital audio editing tools.

    MACROMEDIA SOUNDEDIT: mature program for creating audio for multimedia projects and the web

    that integrates well with other Macromedia products such as Flash and Director.

Digital Audio: Digital audio tools deal with accessing and editing the actual sampled sounds that make

    up audio:

COOL EDIT: a very powerful and popular digital audio toolkit; emulates a professional audio studio, with

    multitrack productions and sound file editing, including digital signal processing effects.

    SOUND FORGE: a sophisticated PC based program for editing audio WAV files.

PRO TOOLS: a high-end integrated audio production and editing environment; offers MIDI creation and

    manipulation, and powerful audio mixing, recording, and editing software.

    Graphics and Image Editing:

ADOBE ILLUSTRATOR: a powerful publishing tool from Adobe. Uses vector graphics; graphics can be exported to the web.

ADOBE PHOTOSHOP: the standard graphics, image processing, and manipulation tool.

Allows layers of images, graphics, and text that can be separately manipulated for maximum flexibility.

    Its Filter factory permits creation of sophisticated lighting-effects filters.

    MACROMEDIA FIREWORKS: software for making graphics specifically for the web.

    MACROMEDIA FREEHAND: a text and web graphics editing tool that supports many bitmap formats

    such as GIF, PNG, and JPEG.

    Video Editing:

ADOBE PREMIERE: an intuitive, simple video editing tool for nonlinear editing, i.e., putting video clips into any order:

    Video and audio are arranged in "tracks".

Provides a large number of video and audio tracks, superimpositions, and virtual clips.

    ADOBE AFTER EFFECTS: a powerful video editing tool that enables users to add to and change existing movies. Can add many effects: lighting, shadows, motion blurring; layers.

    FINAL CUT PRO: a video editing tool by Apple; Macintosh only.

    Animation:


    Multimedia APIs:

    JAVA3D: API used by Java to construct and render 3D graphics, similar to the way in which the Java

Media Framework is used for handling media files.

    1. Provides a basic set of object primitives (cube, splines, etc.) for building scenes.

    2. It is an abstraction layer built on top of OpenGL or DirectX

DIRECTX: Windows API that supports video, images, audio, and 3D animation.

    OPENGL: the highly portable, most popular 3D API.

    Rendering Tools:

3D STUDIO MAX: rendering tool that includes a number of very high-end professional tools for character animation, game development, and visual effects production.

    SOFTIMAGE XSI: a powerful modeling, animation, and rendering package used for animation and special

    effects in films and games.

MAYA: a competing product to Softimage; it is also a complete modeling package.

RENDERMAN: rendering package created by Pixar.

    GIF ANIMATION PACKAGES: a simpler approach to animation, allows very quick development of

    effective small animations for the web.

    Multimedia Authoring:

MACROMEDIA FLASH: allows users to create interactive movies by using the score metaphor, i.e., a timeline arranged in parallel event sequences.

MACROMEDIA DIRECTOR: uses a movie metaphor to create interactive presentations; very powerful,

    and includes a built-in scripting language, Lingo, that allows creation of complex interactive movies.

    AUTHORWARE: a mature, well supported authoring product based on the Iconic/Flow control metaphor.

    QUEST: similar to Authorware in many ways, uses a type of flowcharting metaphor. However, the

    flowchart nodes can encapsulate information in a more abstract way (called frames) than simply subroutine

    levels.

    Graphics/Image Data Types:

1-bit Images: An image consists of pixels, or pels (picture elements in a digital image). A 1-bit image consists of on and off bits only and thus is the simplest type of image.

    Each pixel is stored as a single bit (0 or 1). Hence it is also called a binary image.

    It is also called a 1-bit monochrome image since it contains no color.

A 640 × 480 monochrome image requires 38.4 kilobytes of storage.

    Monochrome 1-bit images can be satisfactory for images only containing simple graphics & text.

    8-bit Gray-level Images:

Each pixel has a gray value between 0 and 255. Each pixel is represented by a single byte; e.g., a dark pixel might have a value of 10, and a bright one might be 230.

The entire image can be thought of as a two-dimensional array of pixel values that represents the graphics/image data. We refer to such an image as a bitmap.

    Such an array must be stored in hardware; this hardware is called a frame buffer.

Image resolution refers to the number of pixels in a digital image (higher resolution generally yields better

    quality).

    8-bit image can be thought of as a set of 1-bit bitplanes, where each plane consists of a 1-bit representation

    of the image at higher and higher levels of "elevation": a bit is turned on if the image pixel has a nonzero

    value that is at or above that bit level.
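As a small sketch of the bit-plane idea (the helper name is illustrative), plane k of a pixel is simply its k-th binary digit:

```python
def bitplane(value, k):
    """Return the k-th bit (plane k) of an 8-bit pixel value."""
    return (value >> k) & 1

# Pixel value 130 = 10000010 in binary: bit 1 and bit 7 are set.
planes = [bitplane(130, k) for k in range(8)]
print(planes)  # [0, 1, 0, 0, 0, 0, 0, 1]  (k = 0 .. 7)
```

Collecting plane k over every pixel yields the k-th 1-bit image; the high-order planes carry most of the visible structure.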


Each pixel is usually stored as a byte (a value between 0 and 255), so a 640 × 480 grayscale image requires

    about 300 kB of storage (640 × 480 = 307,200 bytes).
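These storage figures, and the earlier 38.4 kB figure for a 1-bit image, follow directly from the pixel count:

```python
width, height = 640, 480
pixels = width * height          # 307,200 pixels

one_bit_bytes = pixels // 8      # 1-bit image: 8 pixels per byte
gray_bytes = pixels              # 8-bit grayscale: 1 byte per pixel

print(one_bit_bytes)             # 38400 bytes = 38.4 kB
print(gray_bytes)                # 307200 bytes (307200 / 1024 = 300 KiB)
```

The "300 kB" in the notes is the binary kilobyte figure (307,200 / 1024).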

    Dithering:

When an image is printed, the basic strategy of dithering is used, which trades intensity resolution for

    spatial resolution to provide the ability to print multi-level images on 2-level (1-bit) printers.

    Dithering is used to calculate patterns of dots such that values from 0 to 255 correspond to patterns that are more and more filled at darker pixel values, for printing on a 1-bit printer.

The main strategy is to replace a pixel value by a larger pattern, say 2 × 2 or 4 × 4, such that the number of

    printed dots approximates the varying-sized disks of ink used in analog halftone printing (e.g., for

    newspaper photos).
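A minimal sketch of ordered dithering with a 2 × 2 dither matrix (the matrix values and thresholds here are the classic Bayer choice, assumed rather than taken from the notes):

```python
# Classic 2x2 Bayer dither matrix; entry d means "print an ink dot here
# only if the pixel is darker than threshold (d + 0.5) * 256 / 4".
DITHER = [[0, 2],
          [3, 1]]

def dither_pixel(value, i, j):
    """1 = print an ink dot, 0 = leave blank, at pattern position (i, j)."""
    threshold = (DITHER[i % 2][j % 2] + 0.5) * 256 / 4
    return 1 if value < threshold else 0

# A black pixel (0) becomes a fully inked 2x2 pattern ...
print([[dither_pixel(0, i, j) for j in range(2)] for i in range(2)])
# ... a white pixel (255) stays blank, and mid-gray inks half the dots.
print([[dither_pixel(255, i, j) for j in range(2)] for i in range(2)])
```

Each input pixel expands to a 2 × 2 dot pattern, so intensity resolution is traded for spatial resolution exactly as described above.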

    24-bit Color Images:

    In a color 24-bit image, each pixel is represented by three bytes, usually representing RGB.

This format supports 256 × 256 × 256 possible combined colors, or a total of 16,777,216 possible colors.

However, such flexibility does result in a storage penalty: a 640 × 480 24-bit color image would require

    921.6 kB of storage without any compression.

An important point: many 24-bit color images are actually stored as 32-bit images, with the extra byte of data for each pixel used to store an alpha value representing special-effect information (e.g., transparency).

    8-bit Color Images:

Many systems can make use of 8 bits of color information (the so-called "256 colors") in producing a screen image.

    Such image files use the concept of a lookup table to store color information.

    Basically, the image stores not color, but instead just a set of bytes, each of which is actually an index into a

    table with 3-byte values that specify the color for a pixel with that lookup table index.

    Color Look-up Tables (LUTs):

    The idea used in 8-bit color images is to store only the index, or code value, for each pixel. Then, e.g., if a

    pixel stores the value 25, the meaning is to go to row 25 in a color look-up table (LUT).

A color-picker consists of an array of fairly large blocks of color (or a semi-continuous range of colors) such that a mouse click will select the color indicated.

    In reality, a color-picker displays the palette colors associated with index values from 0 to 255.

    How to Devise a Color Lookup Table:

The most straightforward way to make 8-bit look-up color out of 24-bit color would be to divide the RGB

    cube into equal slices in each dimension.

    Since humans are more sensitive to R and G than to B, we could shrink the R range and G range 0..255 into

    the 3-bit range 0..7 and shrink the B range down to the 2-bit range 0..3, thus making up a total of 8 bits.

To shrink R and G, we could simply divide the R or G byte value by (256/8) = 32 and then truncate. Then

    each pixel in the image gets replaced by its 8-bit index, and the color LUT serves to generate 24-bit color.
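The 3 + 3 + 2 bit packing described above can be sketched as follows (the function name is illustrative):

```python
def rgb_to_332_index(r, g, b):
    """Quantize 24-bit RGB to an 8-bit index: 3 bits R, 3 bits G, 2 bits B."""
    return (r // 32) << 5 | (g // 32) << 2 | (b // 64)

print(rgb_to_332_index(255, 255, 255))  # 255 (white maps to the top index)
print(rgb_to_332_index(0, 0, 0))        # 0   (black maps to index 0)
```

The LUT then stores, for each of the 256 indices, a representative 24-bit color (e.g., the center of the corresponding RGB slice).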


The drawback of the above technique is that a slight change in RGB may shift a color to a new code.

    A simple alternative solution is the median-cut algorithm.

Median Cut Algorithm:

    a) The idea is to sort the R (red) byte values and find their median; then values smaller than the median are labeled with a "0" bit and values larger than the median are labeled with a "1" bit.

    b) Next we consider only pixels with a 0 label from the first step and sort their G (green) values.

    c) Again we label image pixels with another bit: 0 for those less than the median in the greens and 1 for the greater.

    d) Now we arrive at a 2-bit scheme.

    e) Carrying on to the blue channel, we have a 3-bit scheme. Repeating all steps for the R, G, B channels, we get a 6-bit scheme.

    f) Cycling through R and G once more results in an 8-bit scheme.
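The steps above can be sketched as a simplified recursive split (this splits every group at its channel median, cycling R, G, B; a real quantizer would then average each final group into one palette color):

```python
def median_cut(pixels, depth=3, channel=0):
    """Recursively split a pixel list at the median of the current
    channel; each split contributes one bit to the final code."""
    if depth == 0 or len(pixels) < 2:
        return [pixels]
    s = sorted(pixels, key=lambda p: p[channel])
    mid = len(s) // 2
    nxt = (channel + 1) % 3          # cycle R -> G -> B -> R ...
    return (median_cut(s[:mid], depth - 1, nxt) +
            median_cut(s[mid:], depth - 1, nxt))

pixels = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]
groups = median_cut(pixels, depth=2)
print(len(groups))  # 4 groups, i.e. a 2-bit code per pixel
```

With depth 8 this yields up to 256 groups, matching the 8-bit scheme of step f).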

    Popular File Formats:

1. GIF (Graphics Interchange Format)

    2. JPEG (Joint Photographic Experts Group)

    3. PNG (Portable Network Graphics)

    4. TIFF (Tagged Image File Format)

    5. EXIF (Exchangeable Image File)

    6. PS (PostScript)

    7. PDF (Portable Document Format)

    8. WMF (Windows Meta File)

    9. BMP (Windows BitMap)

    10. Macintosh PAINT and PICT

    11. X Windows PPM (Portable PixMap)

    12. Graphics/Animation Files

1. GIF (Graphics Interchange Format):

8-bit GIF is one of the most important formats because of its historical connection to the WWW and the HTML markup language, as the first image type recognized by net browsers.

GIF is limited to 8-bit (256) color images only, which, while producing acceptable color images, is best suited for images with few distinctive colors (e.g., graphics or drawing).

The GIF standard supports interlacing: successive display of pixels in widely spaced rows by a 4-pass display

    process.

    GIF actually comes in two flavors:

-- GIF87a: The original specification.

    -- GIF89a: The later version. Supports simple animation via a Graphics Control Extension block.

    For the standard specification, the general file format of a GIF87 file is as in Fig.

    GIF Signature

    Screen Descriptor

    Global Color Map

    Image Descriptor


    Local Color Map

    Raster Area

    GIF Terminator

    The Signature is 6 bytes: GIF87a

A GIF file can contain more than one image, usually to fit on several different parts of the screen. Therefore

    each image can contain its own color lookup table, a local color map, for mapping 8 bits into 24-bit RGB

    values.

A global color map can instead be defined to take the place of a local table if the latter is not needed.

The Screen Descriptor is a 7-byte set of flags. It comprises a set of attributes that belong to every image in the file.

    Format for Screen Descriptor is:

    Screen Width

    Screen Height

    M CR 0 Pixel

    Background

    0 0 0 0 0 0 0 0

The Screen Width and the Screen Height specify the width and height of the screen.

    M bit is 0 if no global color map is given.

Color Resolution (CR) is a 3-bit field that indicates the number of bits of color accuracy.

The next bit is set to 0; it is extra and is not used in this standard.

Pixel is another 3-bit field, indicating the number of bits per pixel in the image.

    Background gives the color table index byte for the background color.

All 0s in the last byte are reserved for future use.
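The signature and screen-descriptor layout above can be sketched as a parser (a hand-built header is used here in place of a real file; the field decoding follows the M / CR / 0 / Pixel packing just described):

```python
import struct

def parse_gif_header(data):
    """Parse the 6-byte signature and the 7-byte screen descriptor."""
    signature = data[:6].decode("ascii")
    width, height = struct.unpack("<HH", data[6:10])   # little-endian
    packed = data[10]
    m = packed >> 7                   # 1 if a global color map follows
    cr = ((packed >> 4) & 0b111) + 1  # color resolution, in bits
    pixel = (packed & 0b111) + 1      # bits per pixel in the image
    background = data[11]             # background color index
    return signature, width, height, m, cr, pixel, background

# 640x480 screen, global map present, 8 bits per pixel, background index 0.
header = b"GIF87a" + struct.pack("<HH", 640, 480) + bytes([0b10010111, 0, 0])
print(parse_gif_header(header))  # ('GIF87a', 640, 480, 1, 2, 8, 0)
```

Note the stored 3-bit fields hold the value minus one, which is why the decoder adds 1 to CR and Pixel.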

    Format for Image Descriptor is:

    7 6 5 4 3 2 1 0

    0 0 1 0 1 1 0 0

Image Left

    Image Top

    Image Width

    Image Height

    m i 0 0 0 Pixel


The end of one image and the beginning of the next is identified by an image separator character (the comma character), which is indicated by the code 00101100.

If the m bit is set to 1, a local color map follows and is used for this image; if it is set to 0, the global color map is used.

    If the i bit is set to 1, it indicates interlaced scanning.

Pixel indicates the number of bits per pixel.

The 8-bit color map format is a repeating sequence of 3-byte entries:

    Red Intensity

    Green Intensity

    Blue Intensity

    (one such triple for each of the 256 table entries).

JPEG (Joint Photographic Experts Group):

    The most important current standard for image compression.

This standard was created by a working group of the International Organization for Standardization (ISO)

    that was informally called JPEG. The human vision system has some specific limitations, and JPEG takes advantage of these to achieve high rates of compression.

    JPEG allows the user to set a desired level of quality, or compression ratio (input divided by output).

The eye-brain system cannot see extremely fine detail. If many changes occur within a few pixels, we refer to that image segment as having high spatial frequency, i.e., a great deal of change in (x, y) space.

Therefore, color information in JPEG is decimated (partially dropped or averaged), and then small blocks of

    an image are represented in the spatial frequency domain (u, v) rather than in (x, y).

That is, the speed of changes in x and y is evaluated, from low to high, and new values are formed by

    grouping the coefficients, or weights, of these speeds.

These values are divided by some integer and truncated; values below a minimum threshold are discarded.

Since we effectively throw away a lot of information in the division and truncation steps, this compression

    scheme is lossy.

Since it is straightforward to allow the user to choose how large a denominator to use, and hence how much information to discard, JPEG allows the user to set a desired level of quality.
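The divide-and-truncate step can be sketched in isolation (the step size 16 is an arbitrary illustrative choice, not a value from the standard):

```python
def quantize(coeff, q):
    """Divide a frequency coefficient by step q and round: the lossy step."""
    return round(coeff / q)

def dequantize(level, q):
    """Reconstruct an approximation of the original coefficient."""
    return level * q

q = 16
original = 57
restored = dequantize(quantize(original, q), q)
print(restored)   # 64: close to 57, but the difference is lost for good
```

A larger q discards more information (small coefficients quantize to 0), giving higher compression at lower quality; that is exactly the user-settable trade-off described above.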

    PNG (Portable Network Graphics):

PNG was developed to support system-independent image formats. It is meant to supersede the GIF standard, and extends it in important ways.

    It supports up to 48 bits of color information, a large increase.


PDF (Portable Document Format): another text-plus-figures language.

PDF files have a built-in compression algorithm comparable to UNIX gzip and Windows WinZip.

    For files containing images, PDF may achieve higher compression ratios by using separate JPEG

    compression for the image content.

    The Adobe Acrobat reader can also support hyperlinks.

    WMF (Windows Meta File):

    It is the native vector file format for the Microsoft Windows operating environment.

It consists of a collection of GDI (Graphics Device Interface) function calls, also native to the Windows

    environment.

When a WMF file is "played" (typically using the Windows PlayMetaFile() function), the described

    graphics are rendered.

    WMF files are device independent and are unlimited in size.

    BMP (Windows BitMaP):

    The major system standard graphics file format for Microsoft Windows, used in Microsoft Paint and other

    programs.

    Macintosh PAINT and PICT:

PAINT was originally used in the MacPaint program, initially only for 1-bit monochrome images.

    PICT format is used in MacDraw (a vector-based drawing program) for storing structured graphics.

X Windows PPM (Portable PixMap):

    The graphics format for the X Window system.

    PPM supports 24-bit color bitmaps, and can be manipulated using many public domain graphic editors.

    Color in Image and Video:

    i) Color Science.

    ii) Color in Image.

    iii) Color in Video.

    Color Science:

    Light and Spectra:

    Light is an electromagnetic wave. Its color is characterized by the wavelength content of the light.

    Visible light is the portion of the electromagnetic radiation that is visible to the human eye.

Visible light has a wavelength of about 400 to 700 nanometers.

    Short wavelengths produce a blue sensation; long wavelengths produce a red one.

    Spectrophotometer: device used to measure visible light.


    Human Vision:

    The eye works like a camera, with the lens focusing an image onto the retina (upside-down and left-right

    reversed). The retina consists of an array of rods and three kinds of cones.

The rods come into play when light levels are low and produce an image in shades of gray ("all cats are gray at night!").

For higher light levels, the cones each produce a signal. Because of their differing pigments, the three kinds of cones are most sensitive to red (R), green (G), and blue (B) light.

    It seems likely that the brain makes use of differences R-G, G-B, and B-R, as well as combining all of R, G,

    and B into a high-light-level achromatic channel.

    Spectral Sensitivity of the Eye:

The eye is most sensitive to light in the middle of the visible spectrum. The eye has about 6 million cones, but the proportions of R, G, and B cones are different. They are likely

    present in the ratios 40:20:1. So the achromatic channel produced by the cones is approximately proportional to 2R + G + B/20.

These spectral sensitivity functions are usually denoted by letters other than "R, G, B"; here let's use a vector

    function q(λ), with components q(λ) = (qR(λ), qG(λ), qB(λ))^T. The response in each color channel in the eye is proportional to the number of neurons firing.


    Image Formation:

    Surfaces reflect different amounts of light at different wavelengths, and dark surfaces reflect less

    energy than light surfaces.

Light rays reflect off the object and into the eye, where they are refracted by the cornea and focused

    by the lens onto the retina; the optic nerve then carries the messages to the brain, and an image is

    formed.

    Camera Systems:

    Camera systems are made in a similar fashion; a studio-quality camera has three signals produced

    at each pixel location (corresponding to a retinal position).

    Analog signals are converted to digital, truncated to integers, and stored. If the precision used is 8-bit, then the maximum value for any of R, G, B is 255, and the minimum is 0.

However, the light entering the eye of the computer user is that which is emitted by the screen; the screen

    is essentially a self-luminous source. Therefore we need to know the light E(λ) entering the eye.

Gamma Correction: The light emitted is in fact roughly proportional to the voltage raised to a power; this power is called

    gamma, with symbol γ.

    Now if you take the image file and turn each pixel value into a voltage and feed it into a CRT, you

    find that the CRT doesn't give you an amount of light proportional to the voltage.

The amount of light coming from the phosphor in the screen depends on the voltage something

    like this: Light_out = voltage ^ crt_gamma

    So if you just dump your nice linear image out to a CRT, the image will look much too dark.

    To fix this up you have to "gamma correct" the image first.

You need to do the opposite of what the CRT will do to the image, so that things cancel out and you get what you want. So you have to do this to your image:

    gamma_corrected_image = image ^ (1/crt_gamma)

    For most CRTs, the crt_gamma is somewhere between 1.0 and 3.0.
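The cancellation can be checked numerically (crt_gamma = 2.2 is an assumed typical value, and the helper names are illustrative):

```python
crt_gamma = 2.2

def gamma_correct(value):
    """Pre-distort an 8-bit linear intensity so the CRT response cancels."""
    return round(255 * (value / 255) ** (1 / crt_gamma))

def crt_output(value):
    """What the CRT actually emits for an 8-bit input value."""
    return round(255 * (value / 255) ** crt_gamma)

# Mid-gray (128) would be displayed far too dark without correction ...
print(crt_output(128))                  # about 56
# ... but gamma-correcting first roughly restores the intended intensity.
print(crt_output(gamma_correct(128)))   # about 127
```

Feeding the corrected value through the CRT's power law recovers nearly the original intensity, which is exactly the "things cancel out" argument above.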

CIE Chromaticity Diagram:

    The CIE system characterizes colors by a luminance parameter Y and two color coordinates x and y, which specify the point on the chromaticity diagram.


    This system offers more precision in color measurement because the parameters are based on the

    spectral power distribution (SPD) of the light emitted from a colored object and are factored by

    sensitivity curves which have been measured for the human eye.

Based on the fact that the human eye has three different types of color-sensitive cones, the response of

    the eye is best described in terms of three "tristimulus values". However, once this is accomplished, it

    is found that any color can be expressed in terms of the two color coordinates x and y.

    The colors which can be matched by combining a given set of three primary colors (such as the blue,

    green, and red of a color television screen) are represented on the chromaticity diagram by a triangle

    joining the coordinates for the three colors.
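As a standard fact not spelled out above, the chromaticity coordinates are obtained from the tristimulus values X, Y, Z by normalization:

```python
def chromaticity(X, Y, Z):
    """Project tristimulus values onto the (x, y) chromaticity plane."""
    total = X + Y + Z
    return X / total, Y / total

# Equal-energy white has x = y = 1/3.
x, y = chromaticity(1.0, 1.0, 1.0)
print(round(x, 4), round(y, 4))  # 0.3333 0.3333
```

Because x + y + z = 1, the third coordinate z is redundant, which is why two coordinates plus the luminance Y suffice.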

Color Monitor Specifications: Several color monitor specifications are currently in use. Monitor specifications consist of the fixed, manufacturer-specified colors for the monitor

    phosphors.

The organizations that specify color monitor standards are:

i) NTSC is the video system or standard used in North America, most of South America, and

    Japan.

    ii) The Society of Motion Picture and Television Engineers, founded in 1916 as the Society of Motion Picture Engineers, or SMPE. Its standard is used in China.

    iii) The European Broadcasting Union (EBU) is a confederation of 85 broadcasting organizations

    from 56 countries including India.

    L*a*b* (CIELAB) Color Model:

L*a*b* is the abbreviation for a so-called color-opponent three-dimensional color space in which the

    coordinates that describe a color are L (color lightness), a (position on the green-red axis), and b

    (position on the blue-yellow axis). This system was proposed by Hunter in 1948 and is yet another transformation of the CIE XYZ colorimetric system.

Frequently "Lab" is also used to describe the L*a*b* color space (1976 CIE L*a*b*, or CIELAB),

    which is similar to the Hunter Lab. L* represents lightness, and a* and b* are similar to a and b, the

    main difference being the way the coefficients are computed.


    Out-of-Gamut Colors: The phrase "out of gamut" refers to a range of colors that cannot be reproduced within the CMYK

    color space used for commercial printing.

Graphics software is designed to work with images in the RGB color space throughout the editing process. The RGB color space has a much wider range of discernible colors than CMYK.

When you print an image, it must be reproduced with inks, and these inks cannot reproduce the

    same range of colors that we can see with our eyes.

    Because the gamut of color that can be reproduced with ink is much smaller than what we can see,

    any color that cannot be reproduced with ink is referred to as "out of gamut."

More Color Coordinate Schemes:

    a) CMY: Cyan (C), Magenta (M), and Yellow (Y) color model;

    b) HSL: Hue, Saturation, and Lightness;

    c) HSV: Hue, Saturation, and Value;

    d) HSI: Hue, Saturation, and Intensity;

    e) HCI: Hue, Chroma, and Intensity (C = Chroma);

    f) HVC: Hue, Value, and Chroma (V = Value);

    http://graphicssoft.about.com/library/glossary/bldefcmyk.htmhttp://graphicssoft.about.com/library/glossary/bldefrgb.htmhttp://graphicssoft.about.com/library/glossary/bldefcmyk.htmhttp://graphicssoft.about.com/library/glossary/bldefrgb.htm
  • 7/28/2019 MAD Unit 1 Final

    16/19

g) HSD: Hue, Saturation, and Darkness (D = Darkness).

Hue: the property of light by which the color of an object is classified as red, blue, green, or yellow in

    reference to the spectrum.

    Saturation: The saturation of a color is determined by a combination of light intensity and how much it is

    distributed across the spectrum of different wavelengths.

    Color Models of Images:

    i)RGB Color model for CRT Displays:

In the RGB color model, we use red, green, and blue as the three primary colors. The RGB color model is additive in the sense that the three light beams are added together to make the final color's spectrum. Zero intensity for each component gives the darkest color (no light, considered black), and full intensity of each gives white. The quality of this white depends on the nature of the primary light sources.

This is an additive model, since the phosphors are emitting light. We can represent the RGB model by a unit cube: each point in the cube (or the vector from the origin to that point) represents a specific color. This model is the best suited to setting the electron guns of a CRT.

    ii)Subtractive Color: CMY Color Model

The CMY model is the opposite of RGB; printing inks are based on this model. With the full presence of cyan, magenta and yellow we get black. The outcome of this process is the CMYK model, where K stands for the black color, also recognized as the 'key' color. Since black is the full presence of color, the levels of cyan, magenta and yellow must be reduced to produce the lighter colors. This can be explained in a different way:


When light falls on a green surface or green ink, the ink absorbs (subtracts) all the colors from the light except green. Hence the model is called a subtractive model. Print production is based on this model.

iii) Transform from RGB to CMY:

The transform from RGB to CMY (with each component normalized to the range [0, 1]) is:

    C = 1 - R
    M = 1 - G
    Y = 1 - B

The inverse transform, from CMY back to RGB, is:

    R = 1 - C
    G = 1 - M
    B = 1 - Y
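The two transforms are simple enough to sketch directly (a minimal illustration, assuming components normalized to [0, 1]; Python is used only for demonstration):

```python
def rgb_to_cmy(r, g, b):
    """Convert normalized RGB (each in [0, 1]) to CMY: C = 1 - R, etc."""
    return 1 - r, 1 - g, 1 - b

def cmy_to_rgb(c, m, y):
    """Inverse transform: R = 1 - C, etc."""
    return 1 - c, 1 - m, 1 - y

# Pure red contains no cyan but full magenta and yellow:
print(rgb_to_cmy(1, 0, 0))
```

Note how each transform is its own inverse: applying the same subtraction twice returns the original values.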

iv) Under Color Removal: CMYK System:

Because inks are impure, producing a true black by mixing just cyan, magenta, and yellow is difficult: the result of the mixture is a muddy brown. Hence black ink is added to obtain a solid black. The outcome of this process is the CMYK model, where K stands for the black color, also recognized as the 'key' color. Since black is the full presence of color, the levels of cyan, magenta and yellow must be reduced to produce the lighter colors.

CMYK has a smaller gamut than RGB, which means you can't reproduce with inks all the colors you can create with RGB on your computer screen.
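One common formulation of under color removal (a sketch of one variant; printers use several refinements in practice) replaces the gray component shared by C, M and Y with black ink:

```python
def cmy_to_cmyk(c, m, y):
    """Replace the common gray component of C, M, Y with black ink (K).

    A minimal under-color-removal sketch, assuming components in [0, 1]:
    K is the smallest of the three ink levels, and the remaining inks
    are rescaled relative to the part not covered by K.
    """
    k = min(c, m, y)
    if k == 1.0:  # pure black: no colored ink needed at all
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k
```

For example, the muddy mixture C = M = Y = 1 becomes pure black ink, and any neutral gray is printed with K alone.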

v) Printer Gamuts: In a common model of the printing process, printers lay down transparent layers of ink onto a (generally white) substrate. If we wish to have a cyan ink that is truly equal to minus-red, our objective is to produce a cyan ink that completely blocks all red light and passes only green and blue light. Unfortunately, such ideal 'block dyes' can only be approximated by real inks.

CMY inks are not pure; if they were, the K ink would not be needed. Mixing 100% C, M and Y gives a dirty brown, not black. The greater the overlap between the gamuts of the monitor and the printer, the less the colors will change from monitor to print.

Color Models in Video:

i) Video Color Transforms:

Methods of dealing with color in digital video derive largely from older analog methods of coding color for TV. The main models are:

YUV: the color space used by the PAL and SECAM analog TV systems; some forms of NTSC also use YUV.

YIQ: the color space used by the NTSC color TV system, employed mainly in North and Central America and Japan.

YCbCr: YCbCr, sometimes written YCBCR, is a family of color spaces used as part of the color image pipeline in video and digital photography systems.

    ii)YIQ Color Model:

YIQ is the color space used by the NTSC color TV system, employed mainly in North and Central America and Japan.

The Y component represents the luma (brightness) information and is the only component used by black-and-white television receivers. I and Q represent the chrominance information. The YIQ system is intended to take advantage of human color-response characteristics: the eye is more sensitive to changes in the orange-blue (I) range than in the purple-green (Q) range, therefore less bandwidth is required for Q than for I.
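The RGB-to-YIQ transform can be sketched with the commonly quoted NTSC coefficients (small variations appear between references, so treat the exact values as approximate):

```python
def rgb_to_yiq(r, g, b):
    """Approximate NTSC RGB-to-YIQ transform, inputs in [0, 1].

    Coefficients are the widely quoted NTSC values; different sources
    round them slightly differently.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    i = 0.596 * r - 0.274 * g - 0.322 * b   # orange-blue chrominance
    q = 0.211 * r - 0.523 * g + 0.312 * b   # purple-green chrominance
    return y, i, q
```

As a sanity check, any neutral color (R = G = B) has zero chrominance: both I and Q vanish, and only Y carries information, which is exactly what a black-and-white receiver uses.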


    iii)YUV Color Model:

The Y component represents the luma information and is the only component used by black-and-white television receivers. In YUV, the U and V components can be thought of as X and Y coordinates within the color space.

Historically, the terms YUV and Y'UV were used for a specific analog encoding of color information in television. Y stands for the luma component (the brightness), and U and V are the chrominance (color) components. Luminance is denoted by Y and luma by Y'; the prime symbol (') denotes gamma compression, with "luminance" meaning perceptual (color-science) brightness, while "luma" is electronic (display-voltage) brightness.
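U and V are scaled color differences from luma; a sketch using the commonly quoted analog scale factors (U = 0.492 (B - Y), V = 0.877 (R - Y) — treat the exact factors as approximate, since references vary):

```python
def rgb_to_yuv(r, g, b):
    """Approximate analog Y'UV from gamma-corrected R'G'B' in [0, 1].

    U and V are scaled blue- and red-difference signals; the scale
    factors below are the commonly quoted analog values.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    u = 0.492 * (b - y)                     # blue-difference chrominance
    v = 0.877 * (r - y)                     # red-difference chrominance
    return y, u, v
```

As with YIQ, neutral grays give U = V = 0, so the chrominance channels carry only the color deviation from gray.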

    iv)YCbCr Color Model:

YCbCr or Y'CbCr, sometimes written YCBCR or Y'CBCR, is a family of color spaces used as part of the color image pipeline in video and digital photography systems. Y' is the luma component, and CB and CR are the blue-difference and red-difference chroma components. Y' (with prime) is distinguished from Y, which is luminance, meaning that light intensity is non-linearly encoded using gamma correction.

YCbCr is not an absolute color space; it is a way of encoding RGB information. The actual color displayed depends on the actual RGB primaries used to display the signal. Therefore a value expressed as YCbCr is predictable only if standard RGB primary chromaticities are used.

Digital YCbCr (8 bits per sample) is derived from analog R'G'B' as follows:
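The conversion formula itself did not survive in these notes; what follows is a sketch of the commonly used ITU-R BT.601 "studio swing" form (an assumption, since the original equation is missing), with R', G', B' normalized to [0, 1]:

```python
def rgb_to_ycbcr_601(r, g, b):
    """8-bit studio-swing YCbCr from gamma-corrected R'G'B' in [0, 1].

    Sketch using ITU-R BT.601 coefficients: Y' occupies 16..235 and
    Cb, Cr occupy 16..240, centered on 128.
    """
    y  =  16 +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128 -  37.797 * r -  74.203 * g + 112.000 * b
    cr = 128 + 112.000 * r -  93.786 * g -  18.214 * b
    return round(y), round(cb), round(cr)
```

White maps to (235, 128, 128) and black to (16, 128, 128): luma uses the reduced 16..235 range, while neutral colors sit at the 128 midpoint of both chroma channels.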
