diktat java multimedia.pdf


  • 8/10/2019 Diktat Java Multimedia.pdf

    1/139

    Java - Multimedia

    Hands on Lab

    September 2011

    For the latest information, please see bluejack.binus.ac.id


    Information in this document, including URL and other Internet Web site references, is

    subject to change without notice. This document supports a preliminary release of software

    that may be changed substantially prior to final commercial release, and is the proprietary

    information of Binus University.

    This document is for informational purposes only. BINUS UNIVERSITY MAKES NO

    WARRANTIES, EITHER EXPRESS OR IMPLIED, AS TO THE INFORMATION IN THIS

    DOCUMENT.

    The entire risk of the use or the results from the use of this document remains with the

    user. Complying with all applicable copyright laws is the responsibility of the user. Without

    limiting the rights under copyright, no part of this document may be reproduced, stored in

    or introduced into a retrieval system, or transmitted in any form or by any means

    (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without

    the express written permission of Binus University.

    Binus University may have patents, patent applications, trademarks, copyrights, or other

    intellectual property rights covering subject matter in this document. Except as expressly

    provided in any written license agreement from Binus University, the furnishing of this

    document does not give you any license to these patents, trademarks, copyrights, or other

    intellectual property.

    Unless otherwise noted, the example companies, organizations, products, domain names, e-

    mail addresses, logos, people, places and events depicted herein are fictitious, and no

    association with any real company, organization, product, domain name, email address,

    logo, person, place or event is intended or should be inferred.

    2011 Binus University. All rights reserved.

    The names of actual companies and products mentioned herein may be the trademarks oftheir respective owners.


    Table of Contents

OVERVIEW

Chapter 01 Digital Image

Chapter 02 Digital Audio

Chapter 03 Digital Video

Chapter 04 Graphic 2D

Chapter 05 Object 3D

Chapter 06 Multimedia Network Communication


    OVERVIEW

    Chapter 01

    Digital Image

    Chapter 02

    Digital Audio

    Chapter 03

    Digital Video

    Chapter 04

    Graphic 2D

    Chapter 05

    Object 3D

    Chapter 06

    Multimedia Network Communication


    Chapter 01

    Digital Image

    Objectives

1. Color

2. Color Space

    3. Digital Imaging

    4. Image Transformation

    5. Image Enhancement

    6. Java2D API


    1.1. Color

The Color class is used to encapsulate colors in the default sRGB¹ color space² or colors in an arbitrary color space identified by a ColorSpace class. In short, the Color class represents colors in the Java programming language.

    1.2. Color Space

The ColorSpace abstract class serves as a color space tag to identify the specific color space of a Color object or, via a ColorModel object, of an Image, a BufferedImage, or a GraphicsDevice. It represents a system for measuring colors, typically using three separate values or components. The ColorSpace class contains methods for converting between the original color space and one of two standard color spaces, CIEXYZ and RGB. ColorSpace is defined in the java.awt.color package.

    Digital images, specifically digital color images, come in several different forms. The

    form is often dictated by the means by which the image was acquired or by the image's

    intended use.

    One of the more basic types of color image is RGB, for the three primary colors (red,

    green, and blue). RGB images are sometimes acquired by a color scanner or video

    camera. These devices incorporate three sensors that are spectrally sensitive to light in the

red, green, and blue portions of the spectrum. The three separate red, green, and blue values can be made to directly drive the red, green, and blue light guns in a CRT. This type

    of color system is called an additive linear RGB color system, as the sum of the three full

    color values produces white.

    Printed color images are based on a subtractive color process in which cyan,

    magenta, and yellow (CMY) dyes are deposited onto paper. The amount of dye deposited

    is subtractively proportional to the amount of each red, blue, and green color value. The

sum of the three CMY color values produces black.

    The black produced by a CMY color system often falls short of being a true black.

    To produce a more accurate black in printed images, black is often added as a fourth

¹ sRGB is a standard RGB color space created cooperatively by HP and Microsoft in 1996 for use on monitors, printers, and the Internet.

² A color space is the set of all colors which can be portrayed by a single color system.


    color component. This is known as the CMYK color system and is commonly used in the

    printing industry.

    The amount of light generated by the red, blue, and green phosphors of a CRT is not

    linear. To achieve good display quality, the red, blue, and green values must be adjusted -

    a process known as gamma correction. In computer systems, gamma correction often

    takes place in the frame buffer, where the RGB values are passed through lookup tables

    that are set with the necessary compensation values.

    In television transmission systems, the red, blue, and green gamma-corrected color

    video signals are not transmitted directly. Instead, a linear transformation between the

    RGB components is performed to produce a luminance signal and a pair of chrominance

    signals. The luminance signal conveys color brightness levels. The two chrominance

signals convey the color hue and saturation. This color system is called YCC (or, more specifically, YCbCr).

    Another significant color space standard for Java is CIEXYZ. This is a widely-used,

    device-independent color standard developed by the Commission Internationale de

l'Éclairage (CIE). The CIEXYZ standard is based on color-matching experiments on

    human observers.

    1.3. Digital Imaging

    Imaging is shorthand for image acquisition, the process of sensing our surroundings and

    then representing the measurements that are made in the form of an image. The sensing

    phase distinguishes image acquisition from image creation; the latter can be

    accomplished using an existing set of data, and does not require a sensor. (Efford, 2000)

Java supports image manipulation via the Image class and a small number of related classes. The Image³ class is part of the java.awt package, and its helpers are part of java.awt.image. Loading image data into a program is accomplished by the getImage() method, which is directly available to Java applets. The method takes a URL specifying the location of the image as its parameter. Applications can obtain a java.awt.Toolkit object and call its getImage() method.

³ Note that Image is an abstract class; when you manipulate an Image object, you are actually working with an instance of a platform-specific subclass.


    Sample Code
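The original listing did not survive extraction; a minimal sketch of loading an image in an application through the Toolkit, assuming a file named troll.jpg (the image used later in this chapter):

```java
import java.awt.Image;
import java.awt.Toolkit;

public class LoadImageExample {

    // Load an image by file name using the default Toolkit;
    // the file name here is a placeholder.
    public static Image load(String path) {
        return Toolkit.getDefaultToolkit().getImage(path);
    }

    public static void main(String[] args) {
        // getImage() returns immediately and loads the pixel data
        // asynchronously; use a MediaTracker (or ImageIO.read) when
        // the pixels must be available before you continue.
        Image img = load("troll.jpg");
        System.out.println(img != null);
    }
}
```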

    Another way to support image manipulation is by using BufferedImage class. A

buffered image is a type of image whose pixels can be modified. For example, you can draw on a buffered image and then draw the resulting buffered image on the screen or

    save it to a file. A buffered image supports many formats for storing pixels.

An Image object cannot simply be cast to a BufferedImage object. The closest

    equivalent is to create a buffered image and then draw the image on the buffered image.

    This example defines a method that does this.
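The method itself was lost in extraction; a minimal sketch of the conversion, assuming the source Image is already fully loaded (so its width and height are known):

```java
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;

public class ImageConverter {

    // Creates a BufferedImage and draws the Image onto it, making
    // the pixel data accessible for modification.
    public static BufferedImage toBufferedImage(Image image) {
        if (image instanceof BufferedImage) {
            return (BufferedImage) image;
        }
        BufferedImage buffered = new BufferedImage(
                image.getWidth(null), image.getHeight(null),
                BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = buffered.createGraphics();
        g.drawImage(image, 0, 0, null);
        g.dispose();
        return buffered;
    }
}
```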


    1.4. Image Transformation

    Geometric operations change image geometry by moving pixels around in a carefully

    constrained way. We might do this to remove distortions inherent in the imaging process,

    or to introduce a deliberate distortion that matches one image with another. There are

    three elements common to most geometric operations: transformation equations that

    move a pixel to a new location, a procedure for applying these equations to an image, and

    some way of computing a value for the transformed pixel.

An affine transformation is an arbitrary geometric transformation that will move a pixel at coordinates (x, y) to a new position (x', y'), given by a pair of transformation equations,

x' = Tx(x, y),
y' = Ty(x, y).

Tx and Ty are typically expressed as polynomials in x and y. In their simplest form, they are linear in x and y, giving us an affine transformation,

x' = a0 + a1*x + a2*y,
y' = b0 + b1*x + b2*y.

Transformation coefficients for some simple affine transformations:

Transformation                        a0   a1     a2     b0   b1      b2
Translation by (dx, dy)               dx   1      0      dy   0       1
Scaling by factors (sx, sy)           0    sx     0      0    0       sy
Clockwise rotation through angle θ    0    cos θ  sin θ  0    -sin θ  cos θ
Horizontal shear by factor s          0    1      s      0    0       1


    Figure 1. Translation Transformation Figure 2. Rotation Transformation

Figure 3. Scaling Transformation    Figure 4. Shear Transformation

The Java2D API supports affine transformations of images and other graphic objects. A transformation is specified by an instance of the class java.awt.geom.AffineTransform.
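As a small sketch of the coefficient table above, the translation row (x' = x + dx, y' = y + dy) can be expressed with AffineTransform and applied to a point; the coordinate values are arbitrary:

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

public class AffineTransformDemo {

    // Build the translation transform from the coefficient table
    // and apply it to a single point.
    public static Point2D translate(Point2D p, double dx, double dy) {
        AffineTransform t = AffineTransform.getTranslateInstance(dx, dy);
        return t.transform(p, null);
    }

    public static void main(String[] args) {
        // (1, 2) translated by (10, 20) moves to (11, 22).
        System.out.println(translate(new Point2D.Double(1, 2), 10, 20));
    }
}
```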

    1.5. Image Enhancement

    This chapter describes the basics of improving the visual appearance of images through

    enhancement operations.

    A single pixel considered in isolation conveys information on the intensity and

    possibly the colour at a single location in an image, but it can tell us nothing about the

    way in which these properties vary spatially. It follows that the point processes described

    in the preceding chapter, which change a pixel's value independently of all other pixels,

    cannot be used to investigate or control spatial variations in image intensity or colour. For

    this, we need to perform calculations over areas of an image; in other words, a pixel's

    new value must be computed from its old value and the values of pixels in its vicinity.

    These neighbourhood operations are invariably more costly than simple point processes,

    but they allow us to achieve a whole range of interesting and useful effects.


Convolution and correlation are the fundamental neighbourhood operations of image processing. They are linear operations. In practice, this means that, for an image f and a scale factor k,

C[k f(x, y)] = k C[f(x, y)],

where C denotes either convolution or correlation. It also means that, for any two images f1 and f2,

C[f1(x, y) + f2(x, y)] = C[f1(x, y)] + C[f2(x, y)].

The calculations performed in convolution are almost identical to those done for correlation.

In convolution, the calculation performed at a pixel is a weighted sum of grey levels from a neighbourhood surrounding the pixel. The neighbourhood includes the pixel under consideration, and it is customary for it to be disposed symmetrically about that pixel. We shall assume this to be the case in our discussion, although we note that it is not a requirement of the technique. Clearly, if a neighbourhood is centred on a pixel, then it must have odd dimensions, e.g., 3 x 3, 5 x 5, etc. The neighbourhood need not be square, but this is usually the case, since there is rarely any reason to bias the calculations in the x or y direction. Grey levels taken from the neighbourhood are weighted by coefficients that come from a matrix or convolution kernel. In effect, the kernel's dimensions define the size of the neighbourhood in which calculations take place. Usually, the kernel is fairly small relative to the image; dimensions of 3 x 3 are the most common. Figure 7 shows a 3 x 3 kernel and the corresponding 3 x 3 neighbourhood of pixels from an image.

    Figure 7. A 3 x 3 convolution kernel and the corresponding image neighbourhood

    The kernel is centered on the shaded pixel. The result of convolution will be a new

    value for this pixel. During convolution, we take each kernel coefficient in turn and


multiply it by a value from the neighbourhood of the image lying under the kernel. We apply the kernel to the image in such a way that the value at the top-left corner of the kernel is multiplied by the value at the bottom-right corner of the neighbourhood. Denoting the kernel by h and the image by f, the entire calculation is

g(x, y) = h(1, 1) f(x - 1, y - 1) + h(1, 0) f(x - 1, y) + h(1, -1) f(x - 1, y + 1)
        + h(0, 1) f(x, y - 1) + h(0, 0) f(x, y) + h(0, -1) f(x, y + 1)
        + h(-1, 1) f(x + 1, y - 1) + h(-1, 0) f(x + 1, y) + h(-1, -1) f(x + 1, y + 1).

This summation can be expressed more succinctly as

g(x, y) = Σ Σ h(j, k) f(x - j, y - k),

where j and k each run from -1 to 1. For the kernel and neighbourhood illustrated in Figure 7, the result of convolution is 40. Note that a new image (denoted g above) has to be created to store the results of convolution. We cannot perform the operation in place, because application of the kernel to any pixel but the first would make use of values already altered by a prior convolution operation.

Java2D provides two classes to support image convolution: Kernel and ConvolveOp. The Kernel class represents convolution kernels. A Kernel object is constructed by providing the kernel dimensions and a one-dimensional float array of coefficients.

This example creates a 3 x 3 kernel whose coefficients are all equal. Note that each coefficient is normalised, such that the sum of coefficients equals 1.
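The listing itself was lost in extraction; a minimal sketch of constructing such a kernel:

```java
import java.awt.image.Kernel;

public class KernelExample {

    // A 3 x 3 kernel with nine equal coefficients, normalised so
    // that they sum to 1 (a simple averaging kernel).
    public static Kernel createBoxKernel() {
        float ninth = 1.0f / 9.0f;
        float[] coefficients = {
            ninth, ninth, ninth,
            ninth, ninth, ninth,
            ninth, ninth, ninth
        };
        return new Kernel(3, 3, coefficients);
    }

    public static void main(String[] args) {
        Kernel kernel = createBoxKernel();
        System.out.println(kernel.getWidth() + " x " + kernel.getHeight());
    }
}
```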


A convolution kernel is a 2D matrix of numbers that can be used as coefficients for numerical operations on pixels. Suppose you have a 3x3 kernel that looks like this:

1 2 1
2 0 2
1 2 1

As you loop over all pixels in the image, you would, for any given pixel, multiply the pixel itself by zero; multiply the pixel directly above the given pixel by 2; also multiply by 2 the pixels to the left, right, and below the pixel in question; multiply by one the pixels at 2 o'clock, 4 o'clock, 8 o'clock, and 10 o'clock to the pixel in question; add all these numeric values together; and divide by 9 (the kernel size). The result is the new pixel value for the given pixel. Repeat for each pixel in the image.

In the example just cited, the kernel (1 2 1, etc.) would end up smoothing or blurring the image, because in essence we are replacing a given pixel's value with a weighted average of surrounding pixel values. To sharpen an image, you'd want to use a kernel that takes the differences of pixels. For example:

 0 -1  0
-1  5 -1
 0 -1  0

This kernel would achieve a differencing between the center pixel and the pixels immediately to the north, south, east, and west. It would cause a fairly harsh, small-radius (high frequency) sharpening-up of image features.

    Sample code for ConvolveOp:
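The sample did not survive extraction; a minimal sketch of how Kernel and ConvolveOp fit together, applying the averaging kernel above to a BufferedImage:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

public class ConvolveExample {

    // Blurs a BufferedImage by convolving it with a 3 x 3 averaging
    // kernel; filter() returns a new image of the same size.
    public static BufferedImage blur(BufferedImage src) {
        float ninth = 1.0f / 9.0f;
        float[] blurKernel = {
            ninth, ninth, ninth,
            ninth, ninth, ninth,
            ninth, ninth, ninth
        };
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, blurKernel),
                ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }
}
```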


    1.6. Java2D API

    The Java 2D API is a set of classes for advanced 2D graphics and imaging, encompassing

    line art, text, and images in a single comprehensive model. The API provides extensive

    support for image compositing and alpha channel images, a set of classes to provide

    accurate color space definition and conversion, and a rich set of display-oriented imaging

    operators.

Java 2D is part of the core classes of the Java 2 platform (formerly JDK 1.2). The 2D API introduces new classes in the following packages:

java.awt
java.awt.image

In addition, the 2D API encompasses six entirely new packages:

java.awt.color
java.awt.font
java.awt.geom
java.awt.print
java.awt.image.renderable
com.sun.image.codec.jpeg

Java 2D is designed to do anything you want it to do (with computer graphics, at least). Prior to Java 2D, AWT's graphics toolkit had some serious limitations:

All lines were drawn with a single-pixel thickness.

Only a handful of fonts were available.

AWT didn't offer much control over drawing. For example, you couldn't manipulate the individual shapes of characters.

If you wanted to rotate or scale anything, you had to do it yourself.

If you wanted special fills, like gradients or patterns, you had to make them yourself.

Image support was rudimentary.

Control of transparency was awkward.

The 2D API remedies these shortcomings and does a lot more, too. To appreciate what the 2D API can offer, you need to see it in action. Java 2 includes a sample program that demonstrates many of the features of the API. To run it, navigate to the demo/jfc/Java2D directory in the JDK installation directory. Then run the Java2Demo class. The things 2D can do include:


    Shapes

    Arbitrary geometric shapes can be represented by combinations of straight lines

    and curves. The 2D API also provides a useful toolbox of standard shapes, like

    rectangles, arcs, and ellipses.

    Stroking

Lines and shape outlines can be drawn as a solid or dotted line of any width, a

    process called stroking. You can define any dotted-line pattern and specify how

    shape corners and line ends should be drawn.

    Filling

    Shapes can be filled using a solid color, a pattern, a color gradient, or anything

    else you can imagine.

    Transformations

    Everything that's drawn in the 2D API can be stretched, squished, and rotated.

    This applies to shapes, text, and images. You tell 2D what transformation you

    want and it takes care of everything.

    Alpha compositing

Compositing is the process of adding new elements to an existing drawing. The 2D API gives you considerable flexibility by using the Porter-Duff compositing rules.

Clipping

    Clipping is the process of limiting the extent of drawing operations. For example,

    drawing in a window is normally clipped to the window's bounds. In the 2D API,

    however, you can use any shape for clipping.

    Antialiasing

    Antialiasing is a technique that reduces jagged edges in drawings. The 2D API

    takes care of the details of producing antialiased drawing.

    Text

The 2D API can use any TrueType or Type 1 font installed on your system.

    You can render strings, retrieve the shapes of individual strings or letters, and

    manipulate text in the same ways that shapes are manipulated.


    Color

    It's hard to show colors correctly. The 2D API includes classes and methods that

    support representing colors in ways that don't depend on any particular hardware

    or viewing conditions.

    Images

    The 2D API supports doing the same neat stuff with images that you can do with

    shapes and text. Specifically, you can transform images, use clipping shapes, and

    use alpha compositing with images. Java 2 also includes a set of classes for

    loading and saving images in the JPEG format.

    Image processing

    The 2D API also includes a set of classes for processing images. Image

    processing is used to highlight certain aspects of pictures, to achieve aesthetic

    effects, or to clean up messy scans.

    Printing

    Finally, Java developers have a decent way to print. The Printing API is part of

    the 2D API and provides a compact, clean solution to the problem of producing

    output on a printer.


    1.7. Exercises

1.7.1. Exercise 1: Color

For this exercise, this module will explain the usage of the Color class using a JPanel object, which resides in a JFrame object. In this exercise we will change the JPanel color using the Color class.

1. Let's make a JFrame reference named myFrame, then set its size and visibility.

2. Then make a JPanel reference named myPanel, and add it to myFrame.
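The handout's listing for steps 1 and 2 did not survive extraction; a minimal sketch, where the frame size is an arbitrary choice:

```java
import javax.swing.JFrame;
import javax.swing.JPanel;

public class ColorExercise {

    // Step 2: make a JPanel reference named myPanel.
    public static JPanel createPanel() {
        return new JPanel();
    }

    public static void main(String[] args) {
        // Step 1: make a JFrame reference named myFrame, then set its
        // size and visibility.
        JFrame myFrame = new JFrame();
        myFrame.setSize(400, 300);
        myFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        JPanel myPanel = createPanel();
        myFrame.add(myPanel);
        myFrame.setVisible(true);
    }
}
```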


    Output:

    Figure 5. The Resulting JFrame and JPanel

3. Let's examine the setBackground() method that a JPanel has.

Figure 6. The setBackground() method in the JDK documentation

4. The setBackground() method has a single parameter of type Color. This method will set the background of the JPanel object (in our case, myPanel) to the color that we define in the setBackground() method parameter.

5. We can use the Color class to set the myPanel color in two ways:

a. Use one of the available static constant members of the Color class; these members represent general colors which are familiar to us.

b. Create a new object of the Color class, which allows us to define a specific color.
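The listings for both ways were lost in extraction; a minimal sketch, where the particular colors chosen are arbitrary:

```java
import java.awt.Color;
import javax.swing.JPanel;

public class PanelColors {

    // Way (a): use a static constant member of the Color class.
    public static JPanel constantColorPanel() {
        JPanel myPanel = new JPanel();
        myPanel.setBackground(Color.BLUE);
        return myPanel;
    }

    // Way (b): construct a new Color object from explicit red, green,
    // and blue components (each 0-255) to define a specific color.
    public static JPanel customColorPanel() {
        JPanel myPanel = new JPanel();
        myPanel.setBackground(new Color(128, 64, 200));
        return myPanel;
    }
}
```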


    Output:

6. Creating a new object of the Color class to define the myPanel color

    Output:


1.7.2. Exercise 2: Color Space

For this exercise, this module will explain the usage of the ColorSpace class. First, we learn how to make a ColorSpace⁴ object named myColorSpace. To instantiate the ColorSpace class, we use its getInstance() method.

The getInstance() method has one parameter of int type, which defines a specific color space identified by one of the predefined class constants (e.g. CS_sRGB, CS_LINEAR_RGB, CS_CIEXYZ, CS_GRAY, or CS_PYCC).
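The listing itself was lost in extraction; a minimal sketch of instantiating the linear RGB color space:

```java
import java.awt.color.ColorSpace;

public class ColorSpaceExample {

    // Instantiate ColorSpace through its static getInstance() method,
    // passing one of the predefined int constants.
    public static ColorSpace linearRgb() {
        return ColorSpace.getInstance(ColorSpace.CS_LINEAR_RGB);
    }

    public static void main(String[] args) {
        ColorSpace myColorSpace = linearRgb();
        // Linear RGB has three components: red, green, and blue.
        System.out.println(myColorSpace.getNumComponents());
    }
}
```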

In this code, myColorSpace now holds an ICC_ColorSpace object (backed by an ICC profile) for the linear RGB color space.

1.7.3. Exercise 3: Digital Imaging

For this exercise, we will try to load an image file into your program. We will use troll.jpg for this exercise.

⁴ Note that ColorSpace is an abstract class; when you manipulate a ColorSpace object, you are actually working with an instance of a platform-specific subclass.


We will load this image, and then draw it on the window of your program.

1. Let's make a JFrame object first, named myFrame, then set its size and visibility.

2. Then we define a new class named MyPanelClass, which will draw our Image object.


3. Override the paintComponent() method in MyPanelClass.

4. Inspect the drawImage() method that the Graphics and Graphics2D classes have. Any of these overloaded drawImage() methods will draw an image using the Java2D technique. Choose the overload that suits your needs.

5. For this exercise we will use the third overload.

This method asks for the image object to be drawn, the x coordinate, the y coordinate, and the observer.


6. Load the image file, and use the drawImage() method to draw it on the MyPanelClass object.
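The listings for steps 1-6 were lost in extraction; a minimal sketch combining them, assuming troll.jpg sits in the working directory and drawing at position (0, 0):

```java
import java.awt.Graphics;
import java.awt.Image;
import java.awt.Toolkit;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class MyPanelClass extends JPanel {

    // Step 6: load the image file.
    private final Image myImage =
            Toolkit.getDefaultToolkit().getImage("troll.jpg");

    // Step 3: override paintComponent() to draw the Image object.
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        // Steps 4-6: image, x coordinate, y coordinate, observer.
        g.drawImage(myImage, 0, 0, this);
    }

    public static void main(String[] args) {
        // Step 1: make the JFrame, set its size and visibility.
        JFrame myFrame = new JFrame();
        myFrame.setSize(400, 300);
        myFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        // Step 7: construct a MyPanelClass object and add it to myFrame.
        myFrame.add(new MyPanelClass());
        myFrame.setVisible(true);
    }
}
```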

7. Construct a new MyPanelClass object, and add it to myFrame.

    8. Output

    9. To draw another image, please repeat step number 6


1.7.4. Exercise 4: Image Transformation

For this exercise, we will build on our last exercise, and try to transform the image file in your program using the AffineTransform class.

1. These are some of the methods that define geometric transformations in the AffineTransform class:

a. the rotate() method

b. the translate() method

c. the shear() method

d. the scale() method


2. Insert these three new lines of code into your previous code. We will construct a new AffineTransform object, then use the rotate() method. The rotate() method has a double parameter which is used to set the amount of rotation, in radians.
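The three lines were lost in extraction; a minimal sketch, where the 30-degree angle is an arbitrary choice:

```java
import java.awt.geom.AffineTransform;

public class RotateDemo {

    // The three new lines from the exercise: construct an
    // AffineTransform, then call rotate() with an angle in radians.
    public static AffineTransform rotation(double radians) {
        AffineTransform transform = new AffineTransform();
        transform.rotate(radians);
        return transform;
    }

    public static void main(String[] args) {
        // In paintComponent() this transform would then be passed to
        // Graphics2D.drawImage(myImage, transform, this).
        System.out.println(rotation(Math.PI / 6));
    }
}
```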

    Output:


3. Let's do another transformation; this time we use the scale() method. The scale() method takes double parameters which are used to set the amount of scaling along the x and y axes.
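The code was lost in extraction; a minimal sketch, with an arbitrary scaling factor of 2 in each direction:

```java
import java.awt.geom.AffineTransform;

public class ScaleDemo {

    // scale() takes the scaling factors for x and y as doubles.
    public static AffineTransform scaling(double sx, double sy) {
        AffineTransform transform = new AffineTransform();
        transform.scale(sx, sy);
        return transform;
    }

    public static void main(String[] args) {
        // Doubling the image in both directions.
        System.out.println(scaling(2.0, 2.0));
    }
}
```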

    Output:


1.7.5. Exercise 5: Image Enhancement

For this chapter's exercise we will make an application which will open a file, and sharpen or blur that image using the ConvolveOp class.

1. First, we will construct a GUI like this one.

The red area is our custom JPanel class object, the blue area is a regular JPanel object, the Open button is a JButton object, and Sharpen, Normal, and Blur are JToggleButton objects. The following code will construct the blue area; the red area will be covered in the next code.


2. The following is the first half of the custom JPanel class code that will enhance our image.


3. Create a MyCustomPanel object, then add it to the northern portion of myFrame.


4. ConvolveOp needs a BufferedImage object to enhance the image. The following code converts an Image object into a BufferedImage object; this was already explained in the Digital Imaging section.


5. Next, we give the Sharpen, Blur, and Normal buttons an event handler; add the following code.


6. Finally, we write the code to show the sharpened, blurred, or normal image in the MyCustomPanel class; add the following code.
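The handout's MyCustomPanel listing was lost in extraction; a minimal sketch of how the three display modes could be wired up, where the Mode enum, the method names, and the particular sharpen and blur kernels are all assumptions rather than the original code:

```java
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import javax.swing.JPanel;

public class MyCustomPanel extends JPanel {

    public enum Mode { NORMAL, SHARPEN, BLUR }

    private BufferedImage image;   // set after the Open button loads a file
    private Mode mode = Mode.NORMAL;

    // Hypothetical kernels: a center-difference sharpen and a 3 x 3 average.
    private static final float[] SHARPEN_KERNEL = {
         0f, -1f,  0f,
        -1f,  5f, -1f,
         0f, -1f,  0f
    };
    private static final float[] BLUR_KERNEL = {
        1f / 9f, 1f / 9f, 1f / 9f,
        1f / 9f, 1f / 9f, 1f / 9f,
        1f / 9f, 1f / 9f, 1f / 9f
    };

    // Called by the toggle buttons' event handlers.
    public void setMode(Mode mode) {
        this.mode = mode;
        repaint();
    }

    public void setImage(BufferedImage image) {
        this.image = image;
        repaint();
    }

    // Applies the ConvolveOp matching the selected mode.
    public BufferedImage filtered() {
        if (image == null || mode == Mode.NORMAL) {
            return image;
        }
        float[] k = (mode == Mode.SHARPEN) ? SHARPEN_KERNEL : BLUR_KERNEL;
        return new ConvolveOp(new Kernel(3, 3, k)).filter(image, null);
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        BufferedImage toDraw = filtered();
        if (toDraw != null) {
            g.drawImage(toDraw, 0, 0, this);
        }
    }
}
```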


    a. Output: Normal Mode

    b. Output: Sharpen Mode


    c. Output: Blur Mode


    Chapter 02

    Digital Audio

    Objectives

1. Audio Digitization

2. MIDI

3. Audio Compression

4. Java Audio API


    2.1. Audio Digitization

    This chapter is going to discuss the conversion process between digital and analog

    signals. Nearly every piece of electronics in use today makes some use of an analog-to-

    digital or digital-to-analog converter. Because of the prevalence of these types of data

    conversions, it is important for people to understand the limitations and drawbacks of

these processes. This chapter will discuss the conversion process (analog-to-digital and digital-to-analog), the mathematical models of signal conversion, some topics related specifically to these processes (quantization, companding, delta modulation, etc.), and the physical electrical hardware that makes the conversions happen.

    Signal

What is a "signal", exactly? A signal, in the sense that we will be considering here, is a changing value of electric voltage or current through a transmission medium. There are

    two general types of signals: periodic and aperiodic. Periodic signals repeat themselves

    after a certain period of time -- after they have cycled through one period, following

    periods don't contain any new information. Aperiodic signals, on the other hand, don't

    repeat themselves, and therefore can contain information. Signals also can be analog or

    digital signals, and we will discuss them both below.

    Analog

    Analog signals equate levels of electric voltage or current to amounts of information by

    applying some rule. Consider for instance, an analog clock, where the passage of time is

    displayed as the motion of the clock hands. In electric signals, a certain amount of

voltage corresponds directly to a measured physical phenomenon. For instance, on an accelerometer, the amount of acceleration, measured in g's, will correspond directly to volts. So at one g, we have 1 volt output; at 2 g's we have 2 volts output; etc. Analog signals have the advantage that they can represent any fractional quantity, by outputting

    an equivalent fractional quantity of voltage or current.

Put in a different manner: an analog signal is continuous in time and also continuous in value.


    Uses of Analog

    Analog signals have a number of uses. AM and FM radio, for instance, are signals that

    are transmitted in analog. Telephones (at least simple, older telephones) use analog

    signals to transmit voice data to the phone central office. Many electrical components,

    such as sensors, will output analog data, because of the accuracy that can be obtained

    from analog signals.

    Digital

    Digital signals are different from analog signals in that there are generally only two levels

of voltage: high and low. These different voltage levels are put into a sequence to describe the value being transmitted. For convenience, regardless of the actual voltage levels used, a "high" is called a 1, and a "low" is called a 0. Each signal level must be

    transmitted for at least a certain period of time called the Bit Time. A single signal level

    for a single bit time is called a Bit.

    From bits, we have Binary Numbers, a collection of bits that can be arranged to

    form larger quantities than the simple numbers 0 and 1.

    Uses of Digital

Because bits can only be a 0 or a 1, digital transmissions don't have the same amount of accuracy as analog signals. Also, digital systems need complicated digital circuitry to read and understand the signals, which can cost more money than analog hardware does. However, the benefit is that digital signals can be manipulated, created, and read by computers and computer hardware.

One of the best examples of digital signals is the set of control signals and data in use on your computer. Computers are almost completely digital, except for the sound card (which produces analog sound signals), and maybe a few other peripherals. Cellphones now are mostly digital, and the Internet is a digital network.


    2.2. MIDI

    The Musical Instrument Digital Interface (MIDI) standard defines a communication

    protocol for electronic music devices, such as electronic keyboard instruments and

    personal computers. MIDI data can be transmitted over special cables during a live

    performance, and can also be stored in a standard type of file for later playback or

    editing.

    MIDI is both a hardware specification and a software specification. To understand

    MIDI's design, it helps to understand its history. MIDI was originally designed for

    passing musical events, such as key depressions, between electronic keyboard

    instruments such as synthesizers. Hardware devices known as sequencers stored

    sequences of notes that could control a synthesizer, allowing musical performances to be

recorded and subsequently played back. Later, hardware interfaces were developed that connected MIDI instruments to a computer's serial port, allowing sequencers to be

    implemented in software. More recently, computer sound cards have incorporated

    hardware for MIDI I/O and for synthesizing musical sound. Today, many users of MIDI

    deal only with sound cards, never connecting to external MIDI devices. CPUs have

    become fast enough that synthesizers, too, can be implemented in software. A sound card

    is needed only for audio I/O and, in some applications, for communicating with external

    MIDI devices.

    Most programs that avail themselves of the Java Sound API's MIDI package do so

    to synthesize sound. The entire apparatus of MIDI files, events, sequences, and

    sequencers, which was previously discussed, nearly always has the goal of eventually

    sending musical data to a synthesizer to convert into audio. (Possible exceptions include

    programs that convert MIDI into musical notation that can be read by a musician, and

    programs that send messages to external MIDI-controlled devices such as mixing

    consoles.)

    The Synthesizer interface is therefore fundamental to the MIDI package. This

    page shows how to manipulate a synthesizer to play sound. Many programs will simply

    use a sequencer to send MIDI file data to the synthesizer, and won't need to invoke many

    Synthesizer methods directly. However, it's possible to control a synthesizer directly,


    without using sequencers or even MidiMessage objects, as explained near the end of this

    page.

    The synthesis architecture might seem complex for readers who are unfamiliar with

    MIDI. Its API includes three interfaces:

    Synthesizer

    MidiChannel

    Soundbank

    and four classes:

    Instrument

    Patch

    SoundbankResource

    VoiceStatus

    As orientation for all this API, the next section explains some of the basics of MIDI

    synthesis and how they're reflected in the Java Sound API. Subsequent sections give a

    more detailed look at the API.

    2.3. Audio Compression

    Digital audio compression allows the efficient storage and transmission of audio data.

    The various audio compression techniques offer different levels of complexity,

    compressed audio quality, and amount of data compression.

This chapter is a survey of techniques used to compress digital audio signals. The next

section presents a detailed description of a relatively simple approach to audio

compression: μ-law.

μ-law Audio Compression

The μ-law transformation is a basic audio compression technique specified by the

Comité Consultatif International Télégraphique et Téléphonique (CCITT)

Recommendation G.711.[5] The transformation is essentially logarithmic in nature and

allows the 8 bits per sample output codes to cover a dynamic range equivalent to 14 bits of linearly quantized values. This transformation offers a compression ratio of (number of

    bits per source sample)/8 to 1. Unlike linear quantization, the logarithmic step spacings

    represent low-amplitude audio samples with greater accuracy than higher-amplitude

    values. Thus the signal-to-noise ratio of the transformed output is more uniform over the

range of amplitudes of the input signal. The μ-law transformation is


F(x) = 127 · ln(1 + μx) / ln(1 + μ),        0 ≤ x ≤ 1

F(x) = −127 · ln(1 + μ|x|) / ln(1 + μ),     −1 ≤ x < 0

where μ = 255, and x is the value of the input signal normalized to have a maximum value of 1. The CCITT Recommendation G.711 also specifies a similar A-law

transformation. The μ-law transformation is in common use in North America and Japan

for the Integrated Services Digital Network (ISDN) 8-kHz-sampled, voice-grade, digital

    telephony service, and the A-law transformation is used elsewhere for the ISDN

    telephony.
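As an illustrative sketch of the transformation above (not G.711 reference code; the class and method names are invented here), the μ-law curve can be written in Java as:

```java
public class MuLaw {
    static final double MU = 255.0;

    // Maps a sample in [-1, 1] to a value in [-127, 127] using the
    // logarithmic mu-law curve; low amplitudes get finer resolution.
    public static double compress(double x) {
        return Math.signum(x) * 127.0
                * Math.log(1.0 + MU * Math.abs(x)) / Math.log(1.0 + MU);
    }

    public static void main(String[] args) {
        System.out.println(compress(1.0));  // 127.0 (full scale)
        System.out.println(compress(0.0));  // 0.0
        System.out.println(compress(0.01)); // roughly 29: small inputs are expanded
    }
}
```

In a real codec the result would be rounded to an integer code; the point of the sketch is the logarithmic spacing, which gives small-amplitude samples disproportionately many output codes.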

    2.4. Java Audio API

    The Java Sound API is a low-level API for effecting and controlling the input and output

    of sound media, including both audio and Musical Instrument Digital Interface (MIDI)

    data. The Java Sound API provides explicit control over the capabilities normally

    required for sound input and output, in a framework that promotes extensibility and

    flexibility.

    The Java Sound API fulfills the needs of a wide range of application developers.

    Potential application areas include:

Communication frameworks, such as conferencing and telephony

End-user content delivery systems, such as media players and music using

streamed content

Interactive application programs, such as games and Web sites that use dynamic

content

Content creation and editing

Tools, toolkits, and utilities

The Java Sound API provides the lowest level of sound support on the Java platform. It provides application programs with a great amount of control over sound operations,

    and it is extensible. For example, the Java Sound API supplies mechanisms for installing,

    accessing, and manipulating system resources such as audio mixers, MIDI synthesizers,

    other audio or MIDI devices, file readers and writers, and sound format converters. The

    Java Sound API does not include sophisticated sound editors or graphical tools, but it


    provides capabilities upon which such programs can be built. It emphasizes low-level

    control beyond that commonly expected by the end user.

    The Java Sound API includes support for both digital audio and MIDI data. These

    two major modules of functionality are provided in separate packages:

    javax.sound.sampled

    This package specifies interfaces for capture, mixing, and playback of digital

    (sampled) audio.

    javax.sound.midi

    This package provides interfaces for MIDI synthesis, sequencing, and event

    transport.

    Two other packages permit service providers (as opposed to application developers)

    to create custom software components that extend the capabilities of an implementation

    of the Java Sound API:

    javax.sound.sampled.spi

    javax.sound.midi.spi

    Sampled Sound

    The javax.sound.sampled package handles digital audio data, which the Java Sound

    API refers to as sampled audio. Samples are successive snapshots of a signal. In the case

    of audio, the signal is a sound wave. A microphone converts the acoustic signal into a

corresponding analog electrical signal, and an analog-to-digital converter transforms that analog signal into a sampled digital form. The following figure shows a brief moment in

    a sound recording.

    A sampled sound wave

    This graph plots sound pressure (amplitude) on the vertical axis, and time on the

    horizontal axis. The amplitude of the analog sound wave is measured periodically at a

    certain rate, resulting in the discrete samples (the red data points in the figure) that


    comprise the digital audio signal. The center horizontal line indicates zero amplitude;

    points above the line are positive-valued samples, and points below are negative. The

    accuracy of the digital approximation of the analog signal depends on its resolution in

    time (the sampling rate) and its quantization, or resolution in amplitude (the number of

    bits used to represent each sample). As a point of reference, the audio recorded for

    storage on compact discs is sampled 44,100 times per second and represented with 16

    bits per sample.

    The term "sampled audio" is used here slightly loosely. A sound wave could be

    sampled at discrete intervals while being left in an analog form. For purposes of the Java

    Sound API, however, "sampled audio" is equivalent to "digital audio."

    Typically, sampled audio on a computer comes from a sound recording, but the

sound could instead be synthetically generated (for example, to create the sounds of a touch-tone telephone). The term "sampled audio" refers to the type of data, not its origin.

    The Java Sound API does not assume a specific audio hardware configuration; it is

    designed to allow different sorts of audio components to be installed on a system and

    accessed by the API. The Java Sound API supports common functionality such as input

    and output from a sound card (for example, for recording and playback of sound files) as

    well as mixing of multiple streams of audio. Here is one example of a typical audio

    architecture:

    A Typical Audio Architecture

    In this example, a device such as a sound card has various input and output ports, and

    mixing is provided in the software. The mixer might receive data that has been read from

    a file, streamed from a network, generated on the fly by an application program, or


    produced by a MIDI synthesizer. The mixer combines all its audio inputs into a single

    stream, which can be sent to an output device for rendering.

MIDI

    The javax.sound.midi package contains APIs for transporting and sequencing MIDI

    events, and for synthesizing sound from those events.

Whereas sampled audio is a direct representation of a sound itself, MIDI data can be

    thought of as a recipe for creating a sound, especially a musical sound. MIDI data, unlike

    audio data, does not describe sound directly. Instead, it describes events that affect the

    sounds (or actions) performed by a MIDI-enabled device or instrument, such as a

    synthesizer. MIDI data is analogous to a graphical user interface's keyboard and mouse

events. In the case of MIDI, the events can be thought of as actions upon a musical keyboard, along with actions on various pedals, sliders, switches, and knobs on that

    musical instrument. These events need not actually originate with a hardware musical

    instrument; they can be simulated in software, and they can be stored in MIDI files. A

    program that can create, edit, and perform these files is called a sequencer. Many

    computer sound cards include MIDI-controllable music synthesizer chips to which

    sequencers can send their MIDI events. Synthesizers can also be implemented entirely in

    software. The synthesizers interpret the MIDI events that they receive and produce audio

    output. Usually the sound synthesized from MIDI data is musical sound (as opposed to

    speech, for example). MIDI synthesizers are also capable of generating various kinds of

    sound effects.

    Some sound cards include MIDI input and output ports to which external MIDI

    hardware devices (such as keyboard synthesizers or other instruments) can be connected.

    From a MIDI input port, an application program can receive events generated by an

    external MIDI-equipped musical instrument. The program might play the musical

    performance using the computer's internal synthesizer, save it to disk as a MIDI file, or

    render it into musical notation. A program might use a MIDI output port to play an

    external instrument, or to control other external devices such as recording equipment.

    The following diagram illustrates the functional relationships between the major

    components in a possible MIDI configuration based on the Java Sound API. (As with


    audio, the Java Sound API permits a variety of MIDI software devices to be installed and

    interconnected. The system shown here is just one potential scenario.) The flow of data

    between components is indicated by arrows. The data can be in a standard file format, or

    (as indicated by the key in the lower right corner of the diagram), it can be audio, raw

    MIDI bytes, or time-tagged MIDI messages.

    A Possible MIDI Configuration

    In this example, the application program prepares a musical performance by loading

    a musical score that's stored as a standard MIDI file on a disk (left side of the diagram).

    Standard MIDI files contain tracks, each of which is a list of time-tagged MIDI events.

    Most of the events represent musical notes (pitches and rhythms). This MIDI file is read

    and then "performed" by a software sequencer. A sequencer performs its music by

    sending MIDI messages to some other device, such as an internal or external synthesizer.

    The synthesizer itself may read a soundbank file containing instructions for emulating the

    sounds of certain musical instruments. If not, the synthesizer will play the notes stored in

    the MIDI file using whatever instrument sounds are already loaded into it.

    As illustrated, the MIDI events must be translated into raw (non-time-tagged) MIDI

    before being sent through a MIDI output port to an external MIDI instrument. Similarly,

    raw MIDI data coming into the computer from an external MIDI source (a keyboard

    instrument, in the diagram) is translated into time-tagged MIDI messages that can control

    a synthesizer, or that a sequencer can store for later use.


    Service Provider Interfaces

    The javax.sound.sampled.spi and javax.sound.midi.spi packages contain APIs

    that let software developers create new audio or MIDI resources that can be provided

    separately to the user and "plugged in" to an existing implementation of the Java Sound

    API. Here are some examples of services (resources) that can be added in this way:

    An audio mixer

    A MIDI synthesizer

    A file parser that can read or write a new type of audio or MIDI file

    A converter that translates between different sound data formats

    In some cases, services are software interfaces to the capabilities of hardware devices,

    such as sound cards, and the service provider might be the same as the vendor of the

    hardware. In other cases, the services exist purely in software. For example, a synthesizer

    or a mixer could be an interface to a chip on a sound card, or it could be implemented

    without any hardware support at all.

    An implementation of the Java Sound API contains a basic set of services, but the

    service provider interface (SPI) packages allow third parties to create new services. These

    third-party services are integrated into the system in the same way as the built-in services.

The AudioSystem class and the MidiSystem class act as coordinators that let application programs access the services explicitly or implicitly. Often the existence of a service is

    completely transparent to an application program that uses it. The service-provider

    mechanism benefits users of application programs based on the Java Sound API, because

    new sound features can be added to a program without requiring a new release of the

    JDK or runtime environment, and, in many cases, without even requiring a new release of

    the application program itself.

    2.5. Exercises

2.5.1. Exercise 1 - Audio Digitization

    For this exercise, we will try to make a java program which will record your

    sound using the line-in audio jack on your computer via microphone.


The javax.sound.sampled package consists of eight interfaces, twelve top-

    level classes, twelve inner classes, and two exceptions. To record and play audio,

    you only need to deal with a total of seven parts of the package.

    1. Describe the audio format in which you want to record the data. This includes

    specifying the sampling rate and the number of channels (mono versus stereo)

    for the audio. You specify these properties using the aptly named

    AudioFormat class. There are two constructors for creating an AudioFormat

    object:

    The first constructor lets you explicitly set the audio format encoding,

    while the latter uses a default. The available encodings are ALAW,

    PCM_SIGNED, PCM_UNSIGNED, and ULAW. The default encoding used for the

    second constructor is PCM. Here is an example that uses the second

constructor to create an AudioFormat object for single channel recording in 8

    kHz format:
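The code examples were not reproduced in this text, so here is a sketch of both constructors; the 8-bit signed big-endian sample parameters are an assumption consistent with the 8 kHz mono description above:

```java
import javax.sound.sampled.AudioFormat;

public class FormatDemo {
    public static void main(String[] args) {
        // First constructor: the encoding is given explicitly.
        AudioFormat explicit = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                8000.0f, // sample rate in Hz
                8,       // bits per sample
                1,       // channels (mono)
                1,       // frame size in bytes (8 bits * 1 channel)
                8000.0f, // frame rate
                true);   // big-endian byte order

        // Second constructor: encoding defaults to PCM (signed here).
        AudioFormat format = new AudioFormat(8000.0f, 8, 1, true, true);

        System.out.println(format.getEncoding()); // PCM_SIGNED
        System.out.println(format.getChannels()); // 1
    }
}
```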

    2. After you describe the audio format, you need to get a DataLine. This

    interface represents an audio feed from which you can capture the audio. You

use a subinterface of DataLine to do the actual capturing. The subinterface is

    called TargetDataLine. To get the TargetDataLine, you ask the AudioSystem.

    However when you do that, you need to specify information about the line.

    You make the specification in the form of a DataLine.Info object. In

particular, you need to create a DataLine.Info object that is specific to the

    DataLine type and audio format. Here are some lines of source that get the

    TargetDataLine.
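The source lines themselves are not reproduced in this text; a sketch of what they plausibly look like (class name invented for illustration):

```java
import javax.sound.sampled.*;

public class LineDemo {
    public static void main(String[] args) throws LineUnavailableException {
        AudioFormat format = new AudioFormat(8000.0f, 8, 1, true, true);

        // Describe the kind of line we want: a TargetDataLine for this format.
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);

        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("No line matching " + info + " on this system.");
            return;
        }

        // Ask the AudioSystem for a line matching the description.
        TargetDataLine targetDataLine = (TargetDataLine) AudioSystem.getLine(info);
        System.out.println("Got line: " + targetDataLine.getLineInfo());
    }
}
```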


If the TargetDataLine is unavailable, a LineUnavailableException is

thrown.

3. At this point you have your input source. You can think of the

TargetDataLine like an input stream. However, it requires some setup

before you can read from it. Setup in this case means first opening the line

    using the open() method, and then initializing the line using the start()

    method:
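A sketch of this setup step, guarded so it degrades gracefully on machines without a capture line:

```java
import javax.sound.sampled.*;

public class LineSetup {
    public static void main(String[] args) {
        AudioFormat format = new AudioFormat(8000.0f, 8, 1, true, true);
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        if (!AudioSystem.isLineSupported(info)) {
            System.out.println("No capture line available on this system.");
            return;
        }
        try {
            TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
            line.open(format); // acquire the system resources for the line
            line.start();      // line begins delivering captured audio data
            System.out.println("Line open: " + line.isOpen());
            line.stop();
            line.close();
        } catch (LineUnavailableException e) {
            System.out.println("Line unavailable: " + e.getMessage());
        }
    }
}
```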

    4. Your data line is ready, so you can start recording from it as shown in the

    following lines of code. Here you save a captured audio stream to a byte array

    for later playing. You could also save the audio stream to a file. Notice that

    you have to manage when to stop outside the read-loop construct.

5. Summed up, the whole recording code in a record() method looks like this:
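The full listing is not reproduced in this text; here is a sketch of a record() method consistent with steps 1 through 4 (field and buffer names are illustrative):

```java
import javax.sound.sampled.*;
import java.io.ByteArrayOutputStream;

public class CaptureSketch {
    final AudioFormat format = new AudioFormat(8000.0f, 8, 1, true, true);
    final ByteArrayOutputStream out = new ByteArrayOutputStream();
    volatile boolean stopped = false; // set to true from another thread to end capture

    public void record() throws LineUnavailableException {
        DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
        TargetDataLine line = (TargetDataLine) AudioSystem.getLine(info);
        line.open(format);
        line.start();
        byte[] buffer = new byte[4096];
        while (!stopped) { // stopping is managed outside the read loop
            int count = line.read(buffer, 0, buffer.length);
            if (count > 0) {
                out.write(buffer, 0, count); // keep the captured bytes for later playback
            }
        }
        line.stop();
        line.close();
    }
}
```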


    6. Now let's examine playing audio. There are two key differences in playing

    audio as compared to recording audio. First, when you play audio, the bytes

    come from an AudioInputStream instead of a TargetDataLine. Second,

    you write to a SourceDataLine instead of into a ByteArrayOutputStream.

    Besides that, the process is the same. To get the AudioInputStream, you

    need to convert the ByteArrayOutputStream into the source of the

    AudioInputStream. The AudioInputStream constructor requires the bytes


    from the output stream, the audio format encoding used, and the number of

    sample frames:
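A sketch of this conversion, using a silent byte array as a stand-in for real captured audio:

```java
import javax.sound.sampled.*;
import java.io.ByteArrayInputStream;

public class StreamDemo {
    public static void main(String[] args) {
        AudioFormat format = new AudioFormat(8000.0f, 8, 1, true, true);
        byte[] audio = new byte[16000]; // stand-in for bytes from the ByteArrayOutputStream

        // frame length = total bytes / bytes per frame
        AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(audio),
                format,
                audio.length / format.getFrameSize());

        System.out.println(ais.getFrameLength());          // 16000 frames
        System.out.println(ais.getFormat().getChannels()); // 1
    }
}
```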

7. Getting the DataLine is similar to the way you get it for audio recording, but

    for playing audio, you need to fetch a SourceDataLine instead of a

    TargetDataLine:

    8. Setup for the line is identical to the setup for audio recording:

    9. The last step is to play the audio as shown below. Notice that this step is

    similar to the last step in recording. However, here you read from the buffer

    and write to the data line. There is also an added drain operation that works

    like a flush on an output stream.


10. Summed up, the whole playing code in a play() method looks like this:
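The full listing is not reproduced here; a play() method consistent with steps 6 through 9 might look like this (field names are illustrative):

```java
import javax.sound.sampled.*;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class PlaybackSketch {
    final AudioFormat format = new AudioFormat(8000.0f, 8, 1, true, true);
    final ByteArrayOutputStream out = new ByteArrayOutputStream(); // filled by record()
    volatile boolean stopped = false;

    public void play() throws LineUnavailableException, IOException {
        byte[] audio = out.toByteArray();
        AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(audio), format,
                audio.length / format.getFrameSize());

        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
        line.open(format); // setup is identical to recording
        line.start();

        byte[] buffer = new byte[4096];
        int count;
        while (!stopped && (count = ais.read(buffer, 0, buffer.length)) != -1) {
            if (count > 0) {
                line.write(buffer, 0, count); // read from the stream, write to the line
            }
        }
        line.drain(); // like a flush: wait until buffered data has been played
        line.close();
    }
}
```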

    11. Code to stop recording/playing
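The stop code is not reproduced here; the essential idea is a shared volatile flag that the record and play loops check, sketched below:

```java
public class StopDemo {
    private volatile boolean stopped = false; // shared with the record/play loops

    public void stop() { // called from the GUI's Stop button handler
        stopped = true;
    }

    public boolean isStopped() {
        return stopped;
    }

    public static void main(String[] args) {
        StopDemo d = new StopDemo();
        d.stop();
        System.out.println(d.isStopped()); // prints true
    }
}
```

The volatile keyword matters because the flag is written from the GUI thread and read from the audio thread.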


    12. Code to construct the GUI

Put all the code under the Main class, and you're done.
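The GUI listing is not reproduced here; a minimal sketch of such a Main class (the button wiring is left as comments, since the handler details are not shown in this text):

```java
import javax.swing.*;
import java.awt.*;

public class Main {
    public static void main(String[] args) {
        if (GraphicsEnvironment.isHeadless()) {
            System.out.println("No display available; GUI skipped.");
            return;
        }
        JFrame frame = new JFrame("Sound Recorder");
        JPanel panel = new JPanel(new FlowLayout());
        JButton record = new JButton("Record");
        JButton stop = new JButton("Stop");
        JButton play = new JButton("Play");
        // Hypothetical wiring: each handler would invoke the record() method,
        // the stop flag, and the play() method from the earlier steps,
        // running record/play on their own threads so the GUI stays responsive.
        panel.add(record);
        panel.add(stop);
        panel.add(play);
        frame.add(panel);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```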

    Output:


2.5.2. Exercise 2 - MIDI

    For this exercise we will try to make a simple synthesizer. But first, we will try to

understand how the Java Synthesizer works, along with its companions.

How to get a Synthesizer object: since Synthesizer is an interface, we

can't instantiate it directly.

This is how we get a Synthesizer object:
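The code itself is not reproduced in this text; a sketch of obtaining and exercising the default synthesizer (the note number and velocity are chosen arbitrarily):

```java
import javax.sound.midi.*;

public class SynthDemo {
    public static void main(String[] args) {
        try {
            // Synthesizer is an interface; MidiSystem hands us the default implementation.
            Synthesizer synth = MidiSystem.getSynthesizer();
            synth.open();

            MidiChannel[] channels = synth.getChannels();
            channels[0].noteOn(60, 93); // middle C at velocity 93
            Thread.sleep(500);          // let the note sound briefly
            channels[0].noteOff(60);

            synth.close();
        } catch (MidiUnavailableException | InterruptedException e) {
            System.out.println("Synthesizer unavailable: " + e.getMessage());
        }
    }
}
```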

    The Synthesizer interface includes methods for loading and unloading

    instruments from soundbanks. An instrument is a specification for synthesizing a

    certain type of sound, whether that sound emulates a traditional instrument or is

    some kind of sound effect or other imaginary sound. A soundbank is a collection

    of instruments, organized by bank and program number (via the instrument's

    Patch object).
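A sketch of loading a soundbank and inspecting instruments by bank and program number (guarded, since a default soundbank may not be installed):

```java
import javax.sound.midi.*;

public class BankDemo {
    public static void main(String[] args) {
        try {
            Synthesizer synth = MidiSystem.getSynthesizer();
            synth.open();

            Soundbank bank = synth.getDefaultSoundbank(); // may be null if none installed
            if (bank != null) {
                synth.loadAllInstruments(bank);
                for (Instrument ins : synth.getLoadedInstruments()) {
                    Patch patch = ins.getPatch(); // the instrument's bank and program number
                    System.out.println(ins.getName()
                            + "  bank=" + patch.getBank()
                            + "  program=" + patch.getProgram());
                }
            }
            synth.close();
        } catch (MidiUnavailableException e) {
            System.out.println("Synthesizer unavailable: " + e.getMessage());
        }
    }
}
```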


    incompatibility is common. The Java Sound API includes ways to detect whether

    a given synthesizer supports a given instrument.

    An instrument can usually be considered a preset; you don't have to know

    anything about the details of the synthesis technique that produces its sound.

    However, you can still vary aspects of its sound. Each Note On message specifies

    the pitch and volume of an individual note. You can also alter the sound through

    other MIDI commands such as controller messages or system-exclusive messages.

Many synthesizers are multitimbral (sometimes called polytimbral),

    meaning that they can play the notes of different instruments simultaneously.

    (Timbre is the characteristic sound quality that enables a listener to distinguish

one kind of musical instrument from other kinds.) Multitimbral synthesizers can emulate an entire ensemble of real-world instruments, instead of only one

    instrument at a time. MIDI synthesizers normally implement this feature by taking

    advantage of the different MIDI channels on which the MIDI specification allows

    data to be transmitted. In this case, the synthesizer is actually a collection of

    sound-generating units, each emulating a different instrument and responding


    independently to messages that are received on a different MIDI channel. Since

    the MIDI specification provides only 16 channels, a typical MIDI synthesizer can

    play up to 16 different instruments at once. The synthesizer receives a stream of

    MIDI commands, many of which are channel commands. (Channel commands are

    targeted to a particular MIDI channel; for more information, see the MIDI

    specification.) If the synthesizer is multitimbral, it routes each channel command

    to the correct sound-generating unit, according to the channel number indicated in

    the command.

    In the Java Sound API, these sound-generating units are instances of

    classes that implement the MidiChannel interface. A synthesizer object has at

least one MidiChannel object. If the synthesizer is multitimbral, it has more than

one, normally 16. Each MidiChannel represents an independent sound-generating unit.

    Because a synthesizer's MidiChannel objects are more or less independent,

    the assignment of instruments to channels doesn't have to be unique. For example,

    all 16 channels could be playing a piano timbre, as though there were an ensemble

of 16 pianos. Any grouping is possible; for instance, channels 1, 5, and 8 could

    be playing guitar sounds, while channels 2 and 3 play percussion and channel 12

    has a bass timbre. The instrument being played on a given MIDI channel can be

    changed dynamically; this is known as a program change.

    Even though most synthesizers allow only 16 or fewer instruments to be

    active at a given time, these instruments can generally be chosen from a much

    larger selection and assigned to particular channels as required.

That's the introduction part of this MIDI chapter. You may run the code; it will

produce a sound. The noteOn method plays the selected instrument

and tone. Let's continue and make our simple synthesizer, which will look like this:


The list on the left-hand side shows all the instruments that Java currently has on your

system, and the buttons on the right-hand side play the tones.


1. Construct a JList object that will contain the instrument list, and add it to a

JScrollPane, so it will have a scrollbar on its side
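A sketch of this step; the instrument names here are placeholders for what the real code would obtain from the synthesizer's getAvailableInstruments():

```java
import javax.swing.*;

public class InstrumentListDemo {
    public static void main(String[] args) {
        // Placeholder names; the real code fills this array from
        // synthesizer.getAvailableInstruments() instead.
        String[] names = { "Piano 1", "Guitar", "Violin", "Flute" };

        JList<String> instrumentList = new JList<>(names);
        JScrollPane scrollPane = new JScrollPane(instrumentList); // adds the scrollbar

        System.out.println(instrumentList.getModel().getSize());             // 4
        System.out.println(scrollPane.getViewport().getView() == instrumentList); // true
    }
}
```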


2. Create methods that will be called when the button or list is clicked later on

3. Create and instantiate the JButton array objects, add their event handlers, add

them to the JPanel, and add the panel to the JFrame


4. Create an event handler for the instrument list, so that when it is clicked it changes

the loaded instrument

5. Then the simple synthesizer is finished; you may build a more

sophisticated synthesizer, such as this one:


    Chapter 03

    Digital Video

Objectives

1. Video Digitization and Video Compression

2. Java Media Framework API

3. Quick Time

4. Java Quick Time API


    3.1. Video Digitization and Video Compression

    Alternatively referred to as a video digitiser, a video digitizer is software that takes an

analog video still frame and converts it to a digital still image. This is generally

    accomplished with the aid of computer hardware.

    3.2. Java Media Framework API

    The Java Media Framework (JMF) is a recent API for Java dealing with real-time

    multimedia presentation and effects processing. JMF handles time-based media, media

    which changes with respect to time. Examples of this are video from a television source,

    audio from a raw-audio format file and animations.

    Stages

    The JMF architecture is organized into three stages:

    During the input stage, data is read from a source and passed in buffers to the

    processing stage. The input stage may consist of reading data from a local capture device

    (such as a webcam or TV capture card), a file on disk or stream from the network.

    The processing stage consists of a number of codecs and effects designed to modify

    the data stream to one suitable for output. These codecs may perform functions such as

    compressing or decompressing the audio to a different format, adding a watermark of

    some kind, cleaning up noise or applying an effect to the stream (such as echo to the

    audio).

Once the processing stage has applied its transformations to the stream, it passes the information to the output stage. The output stage may take the stream and pass it to a file

    on disk, output it to the local video display or transmit it over the network.

    For example, a JMF system may read input from a TV capture card from the local

    system capturing input from a VCR in the input stage. It may then pass it to the


    The Quicktime architecture and its API (C or Java) can help to simplify this

    process.

    3.4. Java Quick Time API

    If you're a Java or QuickTime programmer and want to harness the power of

    QuickTime's multimedia engine, you'll find a number of important advantages to using

    the QuickTime for Java API. A C Quicktime API also exists. But we focus only on the

    Java API in this course.

    For one thing, the API lets you access QuickTime's native runtime libraries and,

    additionally, provides you with a feature rich Application Framework that enables you to

    integrate QuickTime capabilities into Java software.

Aside from representation in Java, QuickTime for Java also provides a set of packages that forms the basis for the Application Framework found in the

    quicktime.app group. The focus of these packages is to present different kinds of

    media. The framework uses the interfaces in the quicktime.app packages to abstract and

    express common functionality that exists between different QuickTime objects.

    As such, the services that the QuickTime for Java Application Framework renders to

    the developer can be viewed as belonging to the following categories:

creation of objects that present different forms of media, using QTFactory.makeDrawable() methods

    various utilities (classes and methods) for dealing with single images as well as

    groups of related images

    spaces and controllers architecture, which enables you to deal with complex data

    generation or presentation requirements

    composition services that allow the complex layering and blending of different

    image sources

    timing services that enable you to schedule and control time-related activities

    video and audio media capture from external sources

    exposure of the QuickTime visual effects architecture

All of these are built on top of the services that QuickTime provides. The provided

interfaces and classes in the quicktime.app packages can be used as a basis for


developers to build on and extend in new ways, not just as a set of utility classes you can

use.

    The media requirements for such presentations can also be complex. They may

include "standard" digital video, animated characters, and customized musical

    instruments. QuickTime's ability to reference movies that exist on local and remote

    servers provides a great deal of flexibility in the delivery of digital content.

    A movie can also be used to contain the media for animated characters and/or

    customized musical instruments. For example, a cell-based sprite animation can be built

    where the images that make up the character are retrieved from a movie that is built

    specifically for that purpose. In another scenario, a movie can be constructed that

    contains both custom instruments and a description of instruments to be used from

QuickTime's built-in Software Synthesizer to play a tune. In both cases we see a QuickTime movie used to contain media and transport this

    media around. Your application then uses this media to recreate its presentation. The

    movie in these cases is not meant to be played but is used solely as a media container.

    This movie can be stored locally or remotely and retrieved by the application when it is

    actually viewed. Of course, the same technique can be applied to any of the media types

    that QuickTime supports. The sprite images and custom instruments are only two

    possible applications of this technique.

    A further interesting use of QuickTime in this production space is the ability of a

    QuickTime movie to contain the media data that it presents as well as to hold a reference

    to external media data. For example, this enables both an artist to be working on the

    images for an animated character and a programmer to be building the animation using

    these same images. This can save time, as the production house does not need to keep

    importing the character images, building intermediate data containers, and so on. As the

    artist enhances the characters, the programmer can immediately see these in his or her

    animation, because the animation references the same images.

    Following the 2003 release of QTJ 6.1, Apple has made few updates to QTJ, mostly

    fixing bugs. Notably, QuickTime 7 was the first version of QuickTime not to be

    accompanied or followed by a QTJ release that wrapped the new native APIs.

    QuickTime 7's new APIs, such as those for working with metadata and with frame-reordering codecs, are not available to QTJ programmers. Apple has also not offered new

    classes to provide the capture preview functionality that was present in versions of QTJ

    prior to 6.1. Indeed, QTJ is dependent on some native APIs that Apple no longer

    recommends, most notably QuickDraw. In short, QTJ has been deprecated by Apple.

    3.5. Exercises

    3.5.1. Exercise 1 - Video Digitization and Video Compression

    For this exercise we will try to make an image capturer from a webcam, so we will need a webcam. The finished application will look like this:

    Before we start, note that the Java Media Framework only recognizes VfW (Video for Windows) / WDM (Windows Driver Model) supported webcams. To check whether your webcam is supported, open Start -> All Programs -> Java Media Framework 2.1.1e -> JMFRegistry, then go to the Capture Devices tab. There should be a vfw:WDM Image Capture (Win32):0 entry. If your webcam was installed after you installed the Java Media Framework, JMFRegistry won't recognize it yet. To register your webcam, press the Detect


    Capture Device button. If your webcam is still not listed, it is probably not supported.

    1. Code the GUI Part


    2. Detecting Capturing Device using CaptureDeviceManagerclass, and search

    for the webcam through the listed capturing device


    3. Load the captured image from the webcam to the GUI


    4. Code to capture the image, and use a JFileChooser to direct the save file

    directory

    The needed import code
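The code listings for these steps appear only as screenshots in the original PDF. As a substitute, here is a rough outline (not a drop-in listing, and not runnable without the JMF libraries, i.e. jmf.jar on the classpath) of how the pieces fit together; the class and variable names are illustrative:

```java
// Outline only: requires the JMF libraries (jmf.jar) on the classpath.
// Class and variable names are illustrative, not the original listing.
import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.Vector;
import javax.media.*;
import javax.media.control.FrameGrabbingControl;
import javax.media.format.VideoFormat;
import javax.media.util.BufferToImage;

public class WebcamCapture {
    private Player player;

    // Step 2: find the vfw:// webcam among the registered capture devices.
    void findAndStart() throws Exception {
        Vector devices = CaptureDeviceManager.getDeviceList(new VideoFormat(null));
        CaptureDeviceInfo webcam = null;
        for (Object o : devices) {
            CaptureDeviceInfo info = (CaptureDeviceInfo) o;
            if (info.getName().startsWith("vfw:")) { webcam = info; break; }
        }
        // Step 3: create a realized Player and add its visual component to the GUI.
        player = Manager.createRealizedPlayer(webcam.getLocator());
        player.start();
        Component video = player.getVisualComponent(); // add this to your panel
    }

    // Step 4: grab the current frame as an image, ready to be saved to the
    // directory chosen through a JFileChooser.
    BufferedImage captureFrame() {
        FrameGrabbingControl grabber =
            (FrameGrabbingControl) player.getControl(
                "javax.media.control.FrameGrabbingControl");
        Buffer frame = grabber.grabFrame();
        BufferToImage converter =
            new BufferToImage((VideoFormat) frame.getFormat());
        return (BufferedImage) converter.createImage(frame);
    }
}
```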


    3.5.2. Exercise 2 - QuickTime and the QuickTime for Java API

    For this exercise we will try to make a simple video player application using QuickTime for Java.

    1. Build the GUI

    2. Create the QuickTime component


    3. Method to load the movie

    4. Event Handler to the open Button
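The listings for these steps are also screenshots in the PDF. The following outline shows how the pieces connect; it is not runnable without the (now deprecated) QuickTime for Java libraries, and the names are illustrative:

```java
// Outline only: requires QuickTime for Java (QTJava.zip) on the classpath.
import java.awt.*;
import quicktime.QTSession;
import quicktime.app.view.QTFactory;
import quicktime.io.OpenMovieFile;
import quicktime.io.QTFile;
import quicktime.std.movies.Movie;

public class QTPlayer {
    // Steps 2 and 3: open a QuickTime session, load a movie file,
    // and build an AWT component that displays it.
    static Component loadMovie(java.io.File file) throws Exception {
        QTSession.open();                       // start a QuickTime session
        OpenMovieFile omf = OpenMovieFile.asRead(new QTFile(file));
        Movie movie = Movie.fromFile(omf);
        Component c = QTFactory.makeQTComponent(movie).asComponent();
        movie.start();                          // begin playback
        return c;                               // add this to the frame (step 1's GUI)
    }
    // Step 4: call loadMovie(...) from the Open button's ActionListener,
    // typically after the user picks a file with a JFileChooser.
}
```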


    Output:


    Chapter 04

    Graphics 2D

    Objectives

    1. What is Java2D?

    2. What Can Java 2D Do?

    3. Drawing on Components

    4. Drawing on Images

    5. Graphics2D

    6. Line2D

    7. Rectangle2D

    8. Ellipse2D and Arc2D

    9. Text

    10. Animation


    4.1. What is Java2D?

    The Java 2D Application Programming Interface (the 2D API) is a set of classes that can

    be used to create high quality graphics. It includes features like geometric transformation,

    antialiasing, alpha compositing, and image processing.

    Java 2D is part of the core classes of the Java 2 platform (formerly JDK 1.2). The

    2D API introduces new classes in the following packages:

    java.awt

    java.awt.image

    In addition, the 2D API encompasses new packages:

    java.awt.color

    java.awt.font

    java.awt.geom

    java.awt.print

    java.awt.image.renderable

    4.2. What Can Java 2D Do?

    Java 2D is designed to do anything you want it to do (with computer graphics, at

    least). Prior to Java 2D, AWT's graphics toolkit had some serious limitations:

    All lines were drawn with a single-pixel thickness.

    Only a handful of fonts were available.

    AWT didn't offer much control over drawing. For example, you couldn't

    manipulate the individual shapes of characters.

    If you wanted to rotate or scale anything, you had to do it yourself.

    If you wanted special fills, like gradients or patterns, you had to make them

    yourself.

    Image support was rudimentary.

    Control of transparency was awkward.


    And this is what you can do with Java2D

    You can view the Java2D demo at C:\Program Files\Java\[jdk folder]\demo\jfc\Java2D

    This list will explain generally what Java2D can do:

    Shapes

    Arbitrary geometric shapes can be represented by combinations of straight lines

    and curves. The 2D API also provides a useful toolbox of standard shapes, like

    rectangles, arcs, and ellipses.

    Stroking

    Lines and shape outlines can be drawn as a solid or dotted line of any width, a process called stroking. You can define any dotted-line pattern and specify how shape corners and line ends should be drawn.

    Filling

    Shapes can be filled using a solid color, a pattern, a color gradient, or anything

    else you can imagine.


    Transformations

    Everything that's drawn in the 2D API can be stretched, squished, and rotated.

    This applies to shapes, text, and images. You tell 2D what transformation you want

    and it takes care of everything.

    Alpha compositing

    Compositing is the process of adding new elements to an existing drawing. The

    2D API gives you considerable flexibility by using the Porter-Duff compositing

    rules.

    Clipping

    Clipping is the process of limiting the extent of drawing operations. For example,

    drawing in a window is normally clipped to the window's bounds. In the 2D API,

    however, you can use any shape for clipping.

    Antialiasing

    Antialiasing is a technique that reduces jagged edges in drawings. The 2D API

    takes care of the details of producing antialiased drawing.

    Text

    The 2D API can use any TrueType or Type 1 font installed on your system. You

    can render strings, retrieve the shapes of individual strings or letters, and manipulate

    text in the same ways that shapes are manipulated.

    Color

    The 2D API includes classes and methods that support representing colors in

    ways that don't depend on any particular hardware or viewing conditions.

    Images

    The 2D API supports doing the same neat stuff with images that you can do with

    shapes and text. Specifically, you can transform images, use clipping shapes, and use

    alpha compositing with images. Java 2 also includes a set of classes for loading and

    saving images in the JPEG format.

    Image processing

    The 2D API also includes a set of classes for processing images. Image

    processing is used to highlight certain aspects of pictures, to achieve aesthetic effects,

    or to clean up messy scans.


    Printing

    Java developers have a decent way to print. The Printing API is part of the 2D

    API and provides a compact, clean solution to the problem of producing output on a

    printer.

    4.3. Drawing on Components

    Every GUI component (we only use JPanel in our examples) shown on the screen has a paint() method. The system passes a Graphics object to this method. In JDK 1.1 and earlier,

    you could draw on Components by overriding the paint() method and using the

    Graphics to draw things. It works exactly the same way in Java 2, except that it's a

    Graphics2D that is passed to paint(). To take advantage of all the spiffy 2D features,

    you'll have to perform a cast in your paint() method, like this:
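A minimal sketch of that cast (the panel class and drawing calls are illustrative, not the book's exact listing):

```java
import java.awt.*;
import javax.swing.*;

// The Graphics passed to paint() is really a Graphics2D; casting it
// unlocks the 2D API (strokes, transforms, rendering hints, ...).
class CastPanel extends JPanel {
    @Override
    public void paint(Graphics g) {
        Graphics2D g2 = (Graphics2D) g;  // the cast described above
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                            RenderingHints.VALUE_ANTIALIAS_ON);
        g2.setPaint(Color.RED);
        g2.fillRect(10, 10, 50, 30);     // now drawn with 2D features enabled
    }
}
```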

    Note that your component may not necessarily be drawn on the screen. The

    Graphics2D that gets passed to paint() might actually represent a printer or any other

    output device.

    Swing components work almost the same way. Strictly speaking, however, you should implement the paintComponent() method instead of paint(). Swing uses the paint()

    method to draw child components. Swing's implementation of paint() calls

    paintComponent() to draw the component itself. You may be able to get away with

    implementing paint() instead of paintComponent(), but then don't be surprised if the

    component is not drawn correctly.

    4.4. Drawing on Images

    You can use a Graphics or Graphics2D to draw on images as well. If you have an Image that you have created yourself, you can get a corresponding Graphics2D by calling createGraphics(), as follows:


    This works only for an Image you've created yourself, not for an Image loaded from a file. If you have a BufferedImage (Java 2D's new image class), you can obtain a Graphics2D as follows:

    Starter: Code Template

    To maintain simplicity, each of our examples will use this template to manage the

    GUI window. If you are still having difficulties with the GUI, it is recommended

    that you use this template.
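The template itself is a screenshot in the PDF; the sketch below is a stand-in with the same shape, a JFrame hosting a JPanel whose paintComponent() does the drawing. Names and sizes are illustrative.

```java
import java.awt.*;
import javax.swing.*;

class Template {
    // Build the drawing surface; each example replaces the drawing code.
    static JPanel buildPanel() {
        return new JPanel() {
            @Override
            public void paintComponent(Graphics g) {
                super.paintComponent(g);
                Graphics2D g2 = (Graphics2D) g;
                // per-example drawing code goes here
            }
        };
    }

    public static void main(String[] args) {
        JFrame frame = new JFrame("Java 2D Example");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(buildPanel());
        frame.setSize(400, 300);
        frame.setVisible(true);
    }
}
```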

    4.5. Graphics2D

    Rendering is the process of taking a collection of shapes, text, and images and

    figuring out what colors the pixels should be on a screen or printer. Shapes, text, and

    images are called graphics primitives; screens and printers are called output devices. If I

    wanted to be pompous, I'd tell you that rendering is the process of displaying graphics


    primitives on output devices. A rendering engine performs this work; in the 2D API, the

    Graphics2D class is the rendering engine. The 2D rendering engine takes care of the

    details of underlying devices and can accurately reproduce the geometry and color of a

    drawing, regardless of the device that displays it.

    Coordinate Space

    The Java coordinate space is different from what we learn at school. The x value increases as we go to the right, and the y value increases as we go down (the opposite of the regular Cartesian coordinate system). Please keep this in mind, as this is essential to your

    drawings.


    Start Drawing

    Take a look at this sample code; it is useful for getting the big picture of what we will learn.
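The original listing is a screenshot in the PDF; this stand-in (with my own class name, coordinates, and colors) does the same things the text describes, drawing a rectangle and an ellipse with colors and strings in different fonts:

```java
import java.awt.*;
import java.awt.geom.*;
import javax.swing.*;

// A panel that draws a filled rectangle and ellipse, outlines them,
// and renders two strings with different fonts.
class ShapesPanel extends JPanel {
    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;

        Rectangle2D rect = new Rectangle2D.Double(20, 20, 120, 80);
        Ellipse2D ellipse = new Ellipse2D.Double(180, 20, 120, 80);

        g2.setPaint(Color.ORANGE);
        g2.fill(rect);
        g2.setPaint(Color.GREEN);
        g2.fill(ellipse);
        g2.setPaint(Color.BLACK);
        g2.draw(rect);
        g2.draw(ellipse);

        g2.setFont(new Font("Serif", Font.BOLD, 24));
        g2.drawString("Hello, Java 2D", 20, 150);
        g2.setFont(new Font("SansSerif", Font.ITALIC, 18));
        g2.drawString("Shapes and text", 20, 180);
    }
}
```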


    Output:

    What we do here is create a rectangle and an ellipse, set their colors, draw the shapes, and draw strings using different fonts. It's not really hard if you understand the concepts.

    Geometry

    Java 2D allows you to represent any shape as a combination of straight and curved line

    segments. The rectangle and ellipse above are two examples.

    Drawable objects in Java2D implement the java.awt.Shape interface. You can

    refer to the JDK documentation to look for available shapes.


    Points

    The java.awt.geom.Point2D class encapsulates a single point (an x and a y) in User

    Space. It is the most basic of the Java 2D classes and is used throughout the API. Note

    that a point is not the same as a pixel. A pixel is a tiny square (ideally) on a screen or

    printer that contains some color. A point, by contrast, has no area, so it can't be rendered.

    Points are used to build rectangles or other shapes that have area and can be rendered.

    Point2D demonstrates an inheritance pattern that is used throughout

    java.awt.geom. In particular, Point2D is an abstract class with inner child classes that

    provide concrete implementations.

    The subclasses provide different levels of precision for storing the coordinates of

    the point. The original java.awt.Point, which dates back to JDK 1.0, stores the

    coordinates as integers. Java 2D provides Point2D.Float and Point2D.Double for

    higher precision. You can either set a point's location or find out where it is:

    This method sets the position of the point. Although it accepts double values, be aware

    that the underlying implementation may not store the coordinates as double values.

    This method sets the position of the point using the coordinates of another Point2D.

    This method returns the x (horizontal) coordinate of the point as a double.


    This method returns the y (vertical) coordinate of the point as a double.

    Point2D also includes a handy method for calculating the distance between two points:

    Use this method to calculate the distance between this Point2D and the point specified by px and py.

    This method calculates the distance between this Point2D and pt.

    The inner child class Point2D.Double has two constructors:

    This constructor creates a Point2D.Double at the coordinates 0, 0.

    This constructor creates a Point2D.Double at the given coordinates.

    Point2D.Float has a similar pair of constructors, based around floats instead of doubles:

    Furthermore, Point2D.Float provides an additional setLocation() method that

    accepts floats instead of doubles:

    This method sets the location of the point using the given coordinates. Why use floats

    instead of doubles? If you have special concerns about the speed of your application or

    interfacing with an existing body of code, you might want to use Point2D.Float.

    Otherwise, I suggest using Point2D.Double, since it provides the highest level of

    precision.
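The Point2D calls discussed above can be sketched as follows (the class name and coordinates are mine, for illustration):

```java
import java.awt.geom.Point2D;

class PointDemo {
    // Distance between (0, 0) and (3, 4): the classic 3-4-5 triangle.
    static double distance() {
        Point2D a = new Point2D.Double();        // starts at 0, 0
        Point2D b = new Point2D.Double(3, 4);
        a.setLocation(0, 0);                     // redundant here, shown for the API
        return a.distance(b);
    }

    public static void main(String[] args) {
        System.out.println(distance());          // prints 5.0
    }
}
```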

    Shapes and Paths

    Two of Graphics2D's basic operations are filling shapes and drawing their outlines. But

    Graphics2D doesn't know much about geometry, as the song says. In fact, Graphics2D


    only knows how to draw one thing: a java.awt.Shape. The Shape interface represents a

    geometric shape, something that has an outline and an interior. With Graphics2D, you

    can draw the border of the shape using draw(), and you can fill the inside of a shape

    using fill().

    The java.awt.geom package is a toolbox of useful classes that implement the Shape

    interface. There are classes that represent ellipses, arcs, rectangles, and lines. First, I'll

    talk about the Shape interface, and then briefly discuss the java.awt.geom package.

    Lines

    The 2D API includes shape classes that represent straight and curved line segments.

    These classes all implement the Shape interface, so they can be rendered and manipulated

    like any other Shape. Although you could create a single straight or curved line segment

    yourself using GeneralPath, it's easier to use these canned shape classes. It's interesting

    that these classes are Shapes, even though they represent the basic segment types that

    make up a Shape's path.

    4.6. Line2D

    The java.awt.geom.Line2D class represents a line whose coordinates can be retrieved

    as doubles. Like Point2D, Line2D is abstract. Subclasses can store coordinates in any way they wish.

    Line2D includes several setLine() methods you can use to set a line's endpoints:


    This method sets the endpoints of the line to x1, y1, and x2, y2.

    This method sets the endpoints of the line to p1 and p2.

    This method sets the endpoints of the line to be the same as the endpoints of the given

    line. Here are the constructors for the Line2D.Float class. Two of them allow you to specify the endpoints of the line, which saves you the trouble of calling setLine().

    This constructor creates a line whose endpoints are 0, 0 and 0, 0.

    This constructor creates a line whose endpoints are x1, y1 and x2, y2.


    This constructor creates a line whose endpoints are p1 and p2.

    The Line2D.Double class has a corresponding set of constructors:

    The following code will show you how to use Line2D and Point2D
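The original listing is a screenshot; this stand-in (with illustrative names and coordinates) shows Point2D and Line2D working together:

```java
import java.awt.geom.Line2D;
import java.awt.geom.Point2D;

class LineDemo {
    // Build a segment from (0, 0) to (6, 8) using setLine() with two points.
    static Line2D.Double makeLine() {
        Point2D p1 = new Point2D.Double(0, 0);
        Point2D p2 = new Point2D.Double(6, 8);
        Line2D.Double line = new Line2D.Double();
        line.setLine(p1, p2);
        return line;
    }

    public static void main(String[] args) {
        Line2D.Double line = makeLine();
        // segment length via Point2D.distance()
        System.out.println(line.getP1().distance(line.getP2()));  // prints 10.0
        // (3, 4) lies on the segment, so its distance to it is 0
        System.out.println(line.ptSegDist(3, 4));                 // prints 0.0
    }
}
```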


    Output:

    4.7. Rectangle2D

    Like the Point2D class, java.awt.geom.Rectangle2D is abstract. Two inner subclasses,

    Rectangle2D.Double and Rectangle2D.Float, provide concrete representations.


    Because so much functionality is already in Rectangle2D's parent class,

    RectangularShape, there are only a few new methods defined in Rectangle2D. First,

    you can set a Rectangle2D's position and size using setRect():

    These methods work just like the setFrame() methods with the same argument types.

    Two methods are provided to test if a line intersects a rectangle:

    If the line described by x1, y1, x2, and y2 intersects this Rectangle2D, this method returns true.

    If the line represented by l intersects this Rectangle2D, this method returns true.

    Two other methods will tell you where a point is with respect to a rectangle. Rectangle2D

    includes some constants that describe the position of a point outside a rectangle. A

    combination of constants is returned as appropriate.

    These methods return some combination of OUT_TOP, OUT_BOTTOM, OUT_LEFT, and

    OUT_RIGHT, indicating where the given point lies with respect to this Rectangle2D. For points inside the rectangle, these methods return 0.
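A short sketch of both intersectsLine() and outcode() (my own coordinates, for illustration):

```java
import java.awt.geom.Rectangle2D;

class RectDemo {
    public static void main(String[] args) {
        Rectangle2D r = new Rectangle2D.Double(10, 10, 100, 50);

        // A diagonal line passing through the rectangle's interior
        System.out.println(r.intersectsLine(0, 0, 200, 200));   // prints true

        // (5, 5) is up and to the left of the rectangle
        System.out.println(r.outcode(5, 5) ==
            (Rectangle2D.OUT_LEFT | Rectangle2D.OUT_TOP));      // prints true

        // (50, 30) is inside, so the outcode is 0
        System.out.println(r.outcode(50, 30));                   // prints 0
    }
}
```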


    RoundRectangle2D

    A round rectangle is a rectangle with curved corners, represented by instances of

    java.awt.geom.RoundRectangle2D.

    Round rectangles are specified with a location, a width, a height, and the height and

    width of the curved corners.

    To set the location, size, and corner arcs of a RoundRectangle2D, use the following

    method:


    This method sets the location of this round rectangle to x and y. The width and height

    are provided by w and h, and the corner arc lengths are arcWidth and arcHeight. Like

    the other geometry classes, RoundRectangle2D is abstract. Two concrete subclasses are

    provided. RoundRectangle2D.Float uses floats to store its coordinates. One of its

    constructors allows you to completely specify the rounded rectangle:

    This constructor creates a RoundRectangle2D.Float using the specified location,

    width, height, arc width, and arc height.

    RoundRectangle2D.Double has a corresponding constructor:


    Sample Code:
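The sample code is a screenshot in the PDF; a minimal stand-in (my own names and values) looks like this:

```java
import java.awt.*;
import java.awt.geom.RoundRectangle2D;
import javax.swing.*;

class RoundRectPanel extends JPanel {
    // A round rectangle at (20, 20), 120 x 80, with 20 x 20 corner arcs.
    static RoundRectangle2D makeRoundRect() {
        return new RoundRectangle2D.Double(20, 20, 120, 80, 20, 20);
    }

    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        g2.setPaint(Color.CYAN);
        g2.fill(makeRoundRect());
        g2.setPaint(Color.BLACK);
        g2.draw(makeRoundRect());
    }
}
```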

    Output:


    4.8. Ellipse2D and Arc2D

    Ellipse2D

    An ellipse, like a rectangle, is fully defined by a location, a width, and a height. As

    with the other geometry classes, java.awt.geom.Ellipse2D is abstract. A

    concrete inner subclass, Ellipse2D.Float, stores its coordinates as floats:

    This constructor creates an Ellipse2D.Float using the specified location, width,

    and height.

    Another inner subclass, Ellipse2D.Double, offers a corresponding constructor:

    Arc2D

    The 2D API includes java.awt.geom.Arc2D for drawing pieces of an ellipse. Arc2D

    defines three different kinds of arcs.

    This constant represents an open arc. This simply defines a curved line that is a

    portion of an ellipse's outline.

    This constant represents an arc in the shape of a slice of pie. This outline is produced

    by drawing the curved arc as well as straight lines from the arc's endpoints to the center

    of the ellipse that defines the arc.

    This constant represents a chord arc: a straight line is drawn to connect the two endpoints of the arc.


    This method makes this arc have the same shape as the supplied Arc2D.

    This method specifies an arc from a center point, given by x and y, and a radius. The

    angle start, angle extent, and closure parameters are the same as before. Note that this

    method will only produce arcs that are part of a circle. Like the other geometry classes,

    Arc2D is abstract, with Arc2D.Float and Arc2D.Double as concrete subclasses.

    Arc2D.Float has four useful constructors:

    This constructor creates a new OPEN arc.

    This constructor creates a new arc of the given type, which should be OPEN, PIE, or

    CHORD.

    This constructor is the same as the previous constructor, but it uses the supplied

    rectangle as the ellipse's bounding box. The other inner subclass, Arc2D.Double, has four

    corresponding constructors:


    Sample:
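The sample listing is a screenshot in the PDF; a hedged stand-in (my own names, angles, and colors) showing a PIE arc looks like this:

```java
import java.awt.*;
import java.awt.geom.Arc2D;
import javax.swing.*;

class ArcPanel extends JPanel {
    // A PIE arc starting at 45 degrees and sweeping 90 degrees,
    // inside the bounding box (20, 20, 120, 80).
    static Arc2D makePie() {
        return new Arc2D.Double(20, 20, 120, 80, 45, 90, Arc2D.PIE);
    }

    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        g2.setPaint(Color.MAGENTA);
        g2.fill(makePie());
        g2.setPaint(Color.BLACK);
        g2.draw(makePie());
    }
}
```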


    Output:

    There are two things you need to know about arc angles:

    a. They're measured in degrees.

    b. The arc angles aren't really genuine angles. Instead, they're defined relative to

    the arc's ellipse, such that a 45 degree arc angle is always defined as the line from the

    center of the ellipse through a corner of the bounding box of the ellipse.

    4.9. Text

    Drawing strings is simple. You can use one of these methods.

    This method draws the given string, using the current font, at the location specified

    by x and y. If you want to change the font, you can use the Font class.


    This Font constructor creates a new font with the specified name (e.g. Times New

    Roman, Comic Sans MS), the specified font size, and style. The style parameter is filled with

    these constants:

    This constant gives the font a bold style.

    This constant gives the font an italic style.

    This constant gives the font the standard plain style.

    Sample:
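The sample is a screenshot in the PDF; a minimal stand-in (my own names, font choices, and coordinates) draws strings with two different fonts into an image:

```java
import java.awt.*;
import java.awt.image.BufferedImage;

class FontDemo {
    static Font makeBold() {
        // new Font(name, style, size)
        return new Font("Serif", Font.BOLD, 24);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(300, 120, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = img.createGraphics();
        g2.setPaint(Color.WHITE);
        g2.fillRect(0, 0, 300, 120);
        g2.setPaint(Color.BLACK);
        g2.setFont(makeBold());
        g2.drawString("Bold serif text", 10, 40);
        g2.setFont(new Font("SansSerif", Font.ITALIC, 18));
        g2.drawString("Italic sans-serif text", 10, 80);
        g2.dispose();
    }
}
```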
