Introduction to DIP
TRANSCRIPT
2/22/2009
Image Processing and the Applications

Rajashekhara
What is an Image?

• a representation, likeness, or imitation of an object or thing
• a vivid or graphic description
• something introduced to represent something else
Where are we?

[Diagram: Digital Image Processing at the center, surrounded by Imaging, Computer Vision, Display/Printing, Computer Graphics, and Biological Vision]
Computer Vision vs. Computer Graphics

3D information → Image or Display: Transformation (Computer Graphics)
Image or Display → 3D information: Extraction (Computer Vision)
Computer Vision, Computer Graphics, Image Processing

Computer vision estimates 3D data from one or more 2D images.

Computer graphics generates 2D/3D images from 3D descriptions (mathematical functions) of an object.

Computer vision and computer graphics are inverse operations of each other. Both use image processing, which is therefore regarded as a low-level (or basic) operation for computer vision and computer graphics.

Note that computer vision, computer graphics, and image processing are normally considered three overlapping areas, but none of them is a subset of the others.
Computer Vision Means

Machine Vision
Robot Vision
Scene Analysis
Image Understanding
Image Analysis
Image Processing Means

Image processing refers to a set of computational techniques that accept images as input. The results of the processing can be new images or information extracted from the input images. Video is just a time sequence of images called frames, and all image processing techniques can be applied to frames. Image processing has many applications.
Why Image Processing?

Why?
– Coding/compression
– Enhancement, restoration, reconstruction
– Analysis, detection, recognition, understanding
– Visualization
What do we do?

[Diagram: Digital Image Processing comprises Image Processing/Manipulation, Image Coding/Communication, and Image Analysis/Interpretation]
Digital Image
What is an Image?

A visual representation of objects, their components, properties, and relationships; a mapping of a 3D scene onto a 2D plane.
What is a Digital Image?

A digital image contains a fixed number of rows and columns of integers. Each integer is called a pixel (picture element) and represents the brightness at that point of the image.

34 58 98
13 25 39
88 47 17
Digital Image

Digital image = a multidimensional array of numbers (such as an intensity image) or vectors (such as a color image).

Each component in the image is called a pixel and is associated with a pixel value (a single number in the case of intensity images, or a vector in the case of color images).
⎡39 87 15 32⎤   ⎡39 65 65 54⎤   ⎡99 87 65 32⎤
⎢22 13 25 15⎥   ⎢42 47 54 21⎥   ⎢92 43 85 85⎥
⎢37 26 69 28⎥   ⎢67 96 54 32⎥   ⎢67 96 90 60⎥
⎣16 10 10 10⎦   ⎣43 56 70 65⎦   ⎣78 56 70 99⎦
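The matrices above map directly onto multidimensional arrays; a minimal sketch in NumPy (assumed available), using the first matrix as an intensity image and a hypothetical 3-channel array as a color image:

```python
import numpy as np

# Intensity image: a 2-D array, one number per pixel.
gray = np.array([[39, 87, 15, 32],
                 [22, 13, 25, 15],
                 [37, 26, 69, 28],
                 [16, 10, 10, 10]], dtype=np.uint8)

# Color image: a 3-D array; each pixel holds a vector (R, G, B).
color = np.zeros((4, 4, 3), dtype=np.uint8)
color[..., 0] = gray  # illustrative: reuse the gray values as the red channel

print(gray.shape)     # rows x columns
print(color[0, 0])    # a single pixel of a color image is a 3-vector
```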
Digital Image Types: Binary Image

Binary image or black-and-white image. Each pixel contains one bit:
1 represents white
0 represents black
⎡1 1 1 1⎤
⎢1 1 1 1⎥
⎢0 0 0 0⎥
⎣0 0 0 0⎦

Binary data
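In practice a binary image is usually produced by thresholding a gray image; a minimal sketch (the threshold 128 and the sample values are arbitrary):

```python
import numpy as np

gray = np.array([[200, 180, 160, 150],
                 [190, 170,  90,  80],
                 [ 60,  50,  40,  30],
                 [ 20,  10,   5,   0]], dtype=np.uint8)

# Pixels at or above the threshold become 1 (white), the rest 0 (black).
binary = (gray >= 128).astype(np.uint8)
print(binary)
```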
Digital Image Types: Intensity or Gray Image

Intensity image or monochrome image: each pixel corresponds to light intensity, normally represented in gray scale (gray levels).
⎡39 87 15 32⎤
⎢22 13 25 15⎥
⎢37 26 69 28⎥
⎣16 10 10 10⎦

Gray scale values
Digital Image Types: RGB Image
⎡39 87 15 32⎤   ⎡39 65 65 54⎤   ⎡99 87 65 32⎤
⎢22 13 25 15⎥   ⎢42 47 54 21⎥   ⎢92 43 85 85⎥
⎢37 26 69 28⎥   ⎢67 96 54 32⎥   ⎢67 96 90 60⎥
⎣16 10 10 10⎦   ⎣43 56 70 65⎦   ⎣78 56 70 99⎦

Color image or RGB image: each pixel contains a vector representing the red, green and blue components.

RGB components
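An RGB image can be reduced to an intensity image by a weighted sum of its three components; a sketch using the common ITU-R BT.601 luma weights (the constant channel values are made up):

```python
import numpy as np

# Hypothetical 4x4 RGB image with constant channels.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[..., 0], rgb[..., 1], rgb[..., 2] = 39, 65, 99

# Weighted sum of the R, G, B components gives a gray-level image.
gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
print(gray[0, 0])
```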
Digital Image Types: Index Image

Index image: each pixel contains an index number pointing to a color in a color table.
⎡2 5 6⎤
⎢7 4 6⎥
⎣9 4 1⎦

Index value
Index No. | Red component | Green component | Blue component
--------- | ------------- | --------------- | --------------
1         | 0.1           | 0.5             | 0.3
2         | 1.0           | 0.0             | 0.0
3         | 0.0           | 1.0             | 0.0
4         | 0.5           | 0.5             | 0.5
5         | 0.2           | 0.8             | 0.9
…         | …             | …               | …

Color Table
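Rebuilding the RGB image from an index image is a table lookup per pixel; a minimal sketch (entries 1–5 of the table come from the slide; entries 6–9 are invented so every index above resolves):

```python
import numpy as np

# Index image: each pixel stores a row number of the color table.
index = np.array([[2, 5, 6],
                  [7, 4, 6],
                  [9, 4, 1]])

# Color table: index -> (R, G, B). Rows 1-5 follow the slide;
# rows 6-9 are hypothetical placeholders.
table = {1: (0.1, 0.5, 0.3), 2: (1.0, 0.0, 0.0), 3: (0.0, 1.0, 0.0),
         4: (0.5, 0.5, 0.5), 5: (0.2, 0.8, 0.9), 6: (0.0, 0.0, 1.0),
         7: (1.0, 1.0, 0.0), 8: (0.3, 0.3, 0.3), 9: (0.9, 0.1, 0.7)}

# Look up each pixel's index to obtain the RGB image.
rgb = np.array([[table[i] for i in row] for row in index])
print(rgb.shape)  # one (R, G, B) vector per pixel
```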
Human Vision & Image Visualization
In the beginning…
we’ll have a look at the human eye
Cross Section of the Human Eye
Visual Perception: The Human Eye

1. The lens consists of 60–70% water and about 6% fat.
2. The iris diaphragm controls the amount of light that enters the eye.
3. Light receptors in the retina:
   - About 6–7 million cones for bright-light (photopic) vision. The density of cones is about 150,000 elements/mm². Cones are involved in color vision and are concentrated in the fovea, an area of about 1.5 × 1.5 mm².
   - About 75–150 million rods for dim-light (scotopic) vision. Rods are sensitive to low levels of light and are not involved in color vision.
4. The blind spot is the region where the optic nerve emerges from the eye.
Electromagnetic Spectrum

[Figure: the electromagnetic spectrum on a wavelength axis in Angstroms (1 Å = 10⁻¹⁰ m), spanning 10⁻⁴ to 10¹² Å: cosmic rays, gamma rays, X-rays, UV, visible, IR, microwave (SAR), radio frequency]

The whole electromagnetic spectrum is used by "imagers". The human eye is sensitive to electromagnetic waves in the 'visible spectrum':
The human eye is sensitive to electromagnetic waves in the 'visible spectrum', which lies just below a wavelength of 0.000001 m = 0.001 mm (roughly 400–700 nm).
The human eye

• Is able to perceive electromagnetic waves in a certain spectrum
• Is able to distinguish between wavelengths in this spectrum (colors)
• Has a higher density of receptors in the center
• Maps our 3D reality to a 2-dimensional image!
The retinal model is mathematically hard to handle (e.g., what is a neighborhood?).
Easier: a 2D array of cells, modelling the cones/rods.
Each cell contains a numerical value (e.g., between 0 and 255).
5 7 1 0 12 4 …

• The position of each cell defines the position of the receptor
• The numerical value of the cell represents the illumination received by the receptor
• With this model, we can create GRAYVALUE images
• Value = 0: BLACK (no illumination / energy)
• Value = 255: WHITE (max. illumination / energy)
What is light?

• The visible portion of the electromagnetic (EM) spectrum.
• It occurs between wavelengths of approximately 400 and 700 nanometers.
Short wavelengths

• Different wavelengths of radiation have different properties.
• In the X-ray region of the spectrum, radiation carries sufficient energy to penetrate a significant volume of material.
Long wavelengths

• Copious quantities of infrared (IR) radiation are emitted from warm objects (e.g., used to locate people in total darkness).
Long wavelengths

• "Synthetic aperture radar" (SAR) imaging techniques use an artificially generated source of microwaves to probe a scene.
• SAR is unaffected by weather conditions and clouds (e.g., it has provided us with images of the surface of Venus).
Range images

• An array of distances to the objects in the scene.
• They can be produced by sonar or by using laser rangefinders.
Sonic images

• Produced by the reflection of sound waves off an object.
• High sound frequencies are used to improve resolution.
Image Formation

An image is a two-dimensional pattern of brightness.

What do we do to get information on the 3D world?
- Study the image formation process
- Understand how the brightness pattern is produced

Two important tasks:
- Where will the image of some point appear?
- How bright will the image of some surface be?
A Simple Model of Image Formation

The scene is illuminated by a single source. The scene reflects radiation towards the camera. The camera senses it via chemicals on film. Light reaches surfaces in 3D; surfaces reflect; a sensor element receives the light energy. Intensity is important. Angles are important. Material is important.
Geometry and Physics

– The geometry of image formation, which determines where in the image plane the projection of a point in the scene will be located.
– The physics of light, which determines the brightness of a point in the image plane as a function of illumination and surface properties.
Image Formation

Digital image generation is the first step in any image processing or computer vision method. The generated image is a function of many parameters: the reflection characteristics of the object surface, the sensor characteristics of the camera, the optical characteristics of the lens, the analog-to-digital converter, the characteristics of the light source, and the geometric laws on the basis of which the image is acquired.
Image Formation

– The first task is related to the camera projection, which can be either a perspective projection or an orthographic projection. The perspective projection is more general than the orthographic projection but requires more calculations.
– The second task is related to surface reflection properties, illumination conditions, and surface orientation with respect to the camera and light sources.
Geometric Camera Models

The projection of a surface point of a 3-dimensional scene onto the 2-dimensional image plane can be described by a perspective or an orthographic projection.

Pinhole camera: a camera with zero aperture size. All rays from the 3D scene points pass through the optical center C of the lens.
Coordinate Systems

In computer vision we deal with three kinds of coordinate systems: the image coordinate system, the camera coordinate system, and the world coordinate system. The image coordinate system is basically the two-dimensional image plane. The camera coordinate system is one that is attached to the camera; it can be either camera-centered or image-centered. In the camera-centered coordinate system the origin is the focal point and the optical axis is the Z axis. In the image-centered system the origin is positioned in the XY image plane. The world coordinate system is a general coordinate system with some reference axes.
Perspective Projection Setup

Consider the pinhole camera model. The projection of a scene point P of the XYZ space onto the image point P′ in the xy image plane is a perspective projection. The optical axis is defined as the perpendicular from the pinhole C to the image plane. The distance f between C and the image plane is the focal length. The coordinate system of XYZ space is defined such that the XY plane is parallel to the image plane and the origin is at the pinhole C; the Z axis then lies along the optical axis.
Perspective Projection Equations
Perspective Projection Equations

From the similar triangles (CA′P′) and (CAP): x/f = X/Z

From the two similar triangles (A′B′P′) and (ABP): y/f = Y/Z

From the last two equations, the perspective projection equations are obtained:

x = fX/Z,  y = fY/Z
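The equations x = fX/Z, y = fY/Z are easy to check numerically; a minimal sketch (the function name is illustrative):

```python
def perspective_project(X, Y, Z, f):
    """Pinhole projection of scene point (X, Y, Z) with focal length f:
    x = f*X/Z, y = f*Y/Z (Z measured along the optical axis)."""
    if Z == 0:
        raise ValueError("point lies in the plane of the pinhole")
    return f * X / Z, f * Y / Z

# Doubling the depth Z halves the projected coordinates:
print(perspective_project(2.0, 4.0, 10.0, 1.0))  # (0.2, 0.4)
print(perspective_project(2.0, 4.0, 20.0, 1.0))  # (0.1, 0.2)
```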
Perspective Projection

Points go to points. Lines go to lines. Planes go to the whole image or to half-planes. Polygons go to polygons.

Long focal length -> narrow field of view. Small focal length -> large (wide) field of view (wide-angle cameras).
Perspective Projection

• Produces a view where the object's size depends on its distance from the viewer
• An object farther away becomes smaller
Perspective Projection

Horizon – the observer's eye level
Ground line – the plane on which the object rests
Vanishing point – the position on the horizon where depth projectors converge
Projection plane – the plane upon which the object is projected
Vanishing Points

Object edges parallel to the projection plane remain parallel in a perspective projection. Object edges not parallel to the projection plane converge to a single point in a perspective projection: the vanishing point (vp).
Camera with Aperture

In practice, the aperture must be larger to admit more light. Lenses are placed in the aperture to focus the bundle of rays from each scene point onto the corresponding point in the image plane.
Orthographic Projection

Orthographic projection is modeled by rays parallel to the optical axis rather than passing through the optical center. Suppose that the image of a plane lying at Z = Z₀, parallel to the image plane, is formed. The magnification m can be defined as the ratio of the distance between two points in the image to the distance between their corresponding points in the scene plane.
Orthographic Projection
Orthographic Projection

For an object located at an average distance –Z₀, when the variation in Z over its visible surface is not significant compared to –Z₀ (i.e., when the distance between camera and object is very large relative to the variations in object depth), the image of this object will be magnified by a factor m. For all visible points of the object, the projection equations are

x = mX,  y = mY

The scaling factor m is usually set to 1 or –1 for convenience. The simple projection equations are then x = X, y = Y.
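The orthographic case can be sketched the same way; note that, unlike the perspective equations, the result does not depend on the depth Z (the function name is illustrative):

```python
def orthographic_project(X, Y, m=1.0):
    """Orthographic projection with magnification m: x = m*X, y = m*Y.
    Rays are parallel to the optical axis, so depth Z drops out."""
    return m * X, m * Y

# With m = 1 the image coordinates equal the scene coordinates:
print(orthographic_project(2.0, 4.0))        # (2.0, 4.0)
print(orthographic_project(2.0, 4.0, 0.5))   # (1.0, 2.0)
```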
Radiometry Basics

What determines the brightness of an image pixel?

[Diagram: light source properties, surface shape, surface reflectance properties, optics, exposure]
Radiometry Basics

Foreshortening and solid angle
Measuring light: radiance
Light at a surface: interaction between light and surface
– irradiance = light arriving at the surface
– BRDF
– outgoing radiance
Special cases and simplifications: Lambertian, specular, parametric and non-parametric models

[Diagram: incoming and outgoing light directions at a surface]
Foreshortening

Two sources that look the same to a receiver must have the same effect on the receiver.
Two receivers that look the same to a source must receive the same energy.
Solid Angle

By analogy with angle (in radians), the solid angle subtended by a region at a point is the area projected on a unit sphere centered at that point. The solid angle dω subtended by a patch of area dA is given by:

dω = dA cos θ / r²

Measured in steradians (sr). Foreshortening: patches that look the same subtend the same solid angle.
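The formula dω = dA cos θ / r² is directly computable; a minimal sketch:

```python
import math

def solid_angle(dA, theta, r):
    """Solid angle (in steradians) subtended by a small patch of area dA,
    tilted by theta from the line of sight, at distance r."""
    return dA * math.cos(theta) / r ** 2

# A 1 cm^2 patch seen face-on from 1 m away:
print(solid_angle(1e-4, 0.0, 1.0))            # 1e-4 sr
# Tilting the patch to 60 degrees halves its solid angle (cos 60 deg = 0.5):
print(solid_angle(1e-4, math.pi / 3, 1.0))
```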
Radiometry Basics

Radiometry is a branch of physics that deals with the measurement of the flow and transfer of radiant energy.

Radiance is the power of light that is emitted from a unit surface area into some spatial angle; the corresponding photometric term is brightness.

Irradiance is the amount of energy that an image-capturing device gets per unit of effective sensitive area of the camera. Quantizing it gives image gray tones.
Radiometry Basics

Radiance (L): energy carried by a ray
– Power per unit area perpendicular to the direction of travel, per unit solid angle
– Units: Watts per square meter per steradian (W m⁻² sr⁻¹)

Irradiance (E): energy arriving at a surface
– Incident power in a given direction per unit area
– Units: W m⁻²
– For a surface receiving radiance L(x, θ, φ) coming in from dω, the corresponding irradiance is

E(θ, φ) = L(θ, φ) cos θ dω

[Diagram: surface patch dA with normal n; light arriving at angle θ through solid angle dω; foreshortened area dA cos θ]
Radiance – Emitted Light

Radiance = power traveling at some point in a direction, per unit area perpendicular to the direction of travel, per unit solid angle:

L(x, θ, φ) = d²P / (dA cos θ dω)

Units: W/(m² sr). Radiance is constant along a ray.

Radiance transfer: the power received at dA₂ at distance r from an emitting area dA₁ is

P(1→2) = L dA₁ cos θ₁ (dA₂ cos θ₂ / r²) = L dA₁ cos θ₁ dω₂₁

and P(2→1) = P(1→2).
![Page 58: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/58.jpg)
Light at surface: irradiance

Irradiance = unit for light arriving at the surface:

dE(x) = L(x,θ,φ) cos θ dω

Total power = integrate irradiance over all incoming angles:

E(x) = ∫₀^{2π} ∫₀^{π/2} L(x,θ,φ) cos θ sin θ dθ dφ
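As a numerical sanity check (not part of the original slides), the hemisphere integral above can be evaluated with a simple midpoint rule. For constant radiance L₀ the integral works out to πL₀, which the sketch below confirms:

```python
import math

def irradiance(L, n_theta=400, n_phi=400):
    """Numerically integrate E = ∫∫ L(θ,φ) cos θ sin θ dθ dφ
    over the hemisphere (θ in [0, π/2], φ in [0, 2π))."""
    d_theta = (math.pi / 2) / n_theta
    d_phi = (2 * math.pi) / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta  # midpoint rule in θ
        for j in range(n_phi):
            phi = (j + 0.5) * d_phi  # midpoint rule in φ
            total += L(theta, phi) * math.cos(theta) * math.sin(theta) * d_theta * d_phi
    return total

# Constant radiance L(θ,φ) = 1 should give E = π.
E = irradiance(lambda theta, phi: 1.0)
print(E)  # ≈ 3.14159
```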
![Page 59: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/59.jpg)
Bidirectional reflectance distribution

Model of local reflection that tells how bright a surface appears when viewed from one direction when light falls on it from another.

Definition: ratio of the radiance in the outgoing direction to the irradiance in the incident direction:

ρ(θᵢ,φᵢ,θₑ,φₑ) = Lₑ(θₑ,φₑ) / E(θᵢ,φᵢ) = Lₑ(θₑ,φₑ) / (Lᵢ(θᵢ,φᵢ) cos θᵢ dωᵢ)

Radiance leaving a surface in a particular direction: add contributions from every incoming direction:

Lₑ(θₑ,φₑ) = ∫_Ω ρ(θᵢ,φᵢ,θₑ,φₑ) Lᵢ(θᵢ,φᵢ) cos θᵢ dωᵢ

(Figure: surface normal with incident and emitted directions.)
![Page 60: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/60.jpg)
Light leaving surface: BRDF

Many effects:
– transmitted – glass
– reflected – mirror
– scattered – marble, skin
– travel along a surface, leave at some other point
– absorbed – sweaty skin

BRDF = Bi-directional reflectance distribution function. Measures, for a given wavelength, the fraction of incoming irradiance from a direction ωᵢ reflected in the outgoing direction ωₒ [Nicodemus 70]:

ρ(x,θᵢ,φᵢ,θₑ,φₑ) = L(x,θₑ,φₑ) / (L(x,θᵢ,φᵢ) cos θᵢ dω)

Assume:
– surfaces don't fluoresce
– cool surfaces (no self-emission)
– light leaving a surface is due only to light arriving at it

Reflectance equation – the measured radiance (radiosity = power per unit area leaving the surface):

Lₒ(x,θₒ,φₒ) = ∫_Ω ρ(x,θᵢ,φᵢ,θₒ,φₒ) L(x,θᵢ,φᵢ) cos θᵢ dωᵢ
![Page 61: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/61.jpg)
Reflection as convolution

Reflection behaves like a convolution in the angular domain:
– Light – signal
– BRDF – filter

Reflectance equation:

Lₒ(x,θ'ₑ,φ'ₑ) = ∫_{Ω'} ρ(θ'ᵢ,φ'ᵢ,θ'ₑ,φ'ₑ) Lₑ(θᵢ,φᵢ) cos θ'ᵢ dω'ᵢ
             = ∫_{Ω'} ρ(θ'ᵢ,φ'ᵢ,θ'ₑ,φ'ₑ) Lₑ(R_{α,β}(θ'ᵢ,φ'ᵢ)) cos θ'ᵢ dω'ᵢ
![Page 62: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/62.jpg)
Lambertian BRDF

Emitted radiance is constant in all directions.
Models perfect diffuse surfaces: clay, matte paper, …
BRDF = constant = albedo ρ.
One light source = dot product of surface normal and light direction:

Lₒ(x) = ρ Lᵢ cos θᵢ = ρ (N · L)     (albedo ρ, normal N, light direction L)

Lₒ(θₒ,φₒ) = ρ ∫_{Ω'} L(θᵢ,φᵢ) cos θᵢ dωᵢ

Diffuse reflectance acts like a low-pass filter on the incident illumination.
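The one-light-source case above can be sketched in a few lines of Python (an illustration, not part of the original slides): outgoing radiance is the albedo times the clamped dot product of the unit normal and unit light direction.

```python
import math

def lambertian_radiance(albedo, normal, light_dir, light_intensity=1.0):
    """Outgoing radiance of a Lambertian surface: L_o = albedo * L_i * max(N·L, 0).
    `normal` and `light_dir` are 3-vectors; both are normalized here."""
    def unit(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)
    n = unit(normal)
    l = unit(light_dir)
    ndotl = max(sum(a * b for a, b in zip(n, l)), 0.0)  # clamp back-facing light
    return albedo * light_intensity * ndotl

# Light along the normal gives albedo * intensity; grazing light gives 0.
print(lambertian_radiance(0.5, (0, 0, 1), (0, 0, 1)))  # 0.5
print(lambertian_radiance(0.5, (0, 0, 1), (1, 0, 0)))  # 0.0
```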
![Page 63: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/63.jpg)
BRDF for Lambertian surface

Image irradiance = (1/π) × scene radiance
![Page 64: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/64.jpg)
How to represent surface?

Surface normal (n̂) – a directional vector with unit magnitude.
With the camera in the z direction we see a hemisphere.
The depth representation would be z = z(x,y).

(Figure: unit sphere r = 1 in X, Y, Z coordinates with angles θ, φ and surface normal n̂.)
![Page 65: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/65.jpg)
How to represent surface?

Equation of the sphere (surface): x² + y² + z² = a²
z = +√(a² − x² − y²)

If the surface is well behaved:

z(x,y) = z(x₀,y₀) + (x − x₀) ∂z/∂x + (y − y₀) ∂z/∂y + higher-order terms

If the surface is smooth, simply neglect the higher-order terms; i.e., in a small neighborhood you can consider it as a plane – the planar approximation:

z − z₀ = (x − x₀) ∂z/∂x + (y − y₀) ∂z/∂y = p δx + q δy

This is the first-order approximation of the surface; p and q are the components of the gradient of the surface.
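A quick numerical illustration (my addition, not from the slides): for the hemisphere z = √(a² − x² − y²), the slopes satisfy p = ∂z/∂x = −x/z and q = −y/z, and the planar approximation is accurate to second order in the step size.

```python
import math

def z_sphere(x, y, a=1.0):
    """Upper hemisphere z = sqrt(a^2 - x^2 - y^2)."""
    return math.sqrt(a * a - x * x - y * y)

def gradient(x0, y0, a=1.0, h=1e-6):
    """Numeric slopes p = dz/dx and q = dz/dy at (x0, y0), central differences."""
    p = (z_sphere(x0 + h, y0, a) - z_sphere(x0 - h, y0, a)) / (2 * h)
    q = (z_sphere(x0, y0 + h, a) - z_sphere(x0, y0 - h, a)) / (2 * h)
    return p, q

x0, y0 = 0.3, 0.2
p, q = gradient(x0, y0)
z0 = z_sphere(x0, y0)
print(p, -x0 / z0)  # numeric slope matches the analytic gradient -x/z

# Planar (first-order) approximation near (x0, y0):
dx, dy = 0.01, -0.01
approx = z0 + p * dx + q * dy
exact = z_sphere(x0 + dx, y0 + dy)
print(abs(exact - approx))  # small: only higher-order terms remain
```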
![Page 66: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/66.jpg)
Surface normal

The surface normal is perpendicular to the tangent plane:
– p = ∂z/∂x = slope of surface in x direction
– q = ∂z/∂y = slope of surface in y direction

Take two vectors in the tangent plane:

OA = (∂x, 0, p ∂x) → (1, 0, p)
OB = (0, 1, q)

The cross product of the OA and OB vectors gives the surface normal:

n̂ = OA × OB = (−p, −q, 1) / √(p² + q² + 1)

If (p,q) is known, the surface normal n̂ is known.
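The cross-product construction above can be checked directly (an illustrative sketch, not from the slides): the normalized cross product of the tangent vectors (1, 0, p) and (0, 1, q) equals (−p, −q, 1)/√(p² + q² + 1).

```python
import math

def surface_normal(p, q):
    """Unit surface normal from slopes p = dz/dx, q = dz/dy:
    n = (-p, -q, 1) / sqrt(p^2 + q^2 + 1)."""
    m = math.sqrt(p * p + q * q + 1.0)
    return (-p / m, -q / m, 1.0 / m)

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

p, q = 0.5, -0.25
n = surface_normal(p, q)

# Same normal via the cross product of tangent vectors OA = (1,0,p), OB = (0,1,q):
raw = cross((1, 0, p), (0, 1, q))
m = math.sqrt(sum(c * c for c in raw))
print(tuple(c / m for c in raw))
print(n)  # identical
```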
![Page 67: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/67.jpg)
Specular reflection

Smooth specular surfaces:
– Mirror-like surfaces
– Light reflected along the specular direction
– Some part absorbed

Rough specular surfaces:
– Lobe of directions around the specular direction
– Microfacets

Lobe size:
– Very small – mirror
– Small – blurry mirror
– Bigger – see only light sources
– Very big – faint specularities
![Page 68: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/68.jpg)
Diffuse reflection

Dull, matte surfaces like chalk or latex paint.
Microfacets scatter incoming light randomly.
Light is reflected equally in all directions: the BRDF is constant.
Albedo: fraction of incident irradiance reflected by the surface.
Radiosity: total power leaving the surface per unit area (regardless of direction).
![Page 69: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/69.jpg)
Radiosity – summary

Radiance (light energy):
L(θ,φ) = P / (dA cos θ dω)

Irradiance (unit incoming light, energy at the surface):
dE(x) = L(x,θ,φ) cos θ dω

Total energy incoming:
Eᵢ(x) = ∫_ω L(x,θ,φ) cos θ dω

Radiosity (unit outgoing radiance, energy leaving the surface):
Lₒ(x,θₑ,φₑ) = ∫_Ω ρ(x,θᵢ,φᵢ,θₑ,φₑ) L(x,θᵢ,φᵢ) cos θᵢ dωᵢ

Total energy leaving:
Eₒ(x) = ∫_{Ωₒ} [ ∫_{Ωᵢ} ρ(x,θᵢ,φᵢ,θₑ,φₑ) L(x,θᵢ,φᵢ) cos θᵢ dωᵢ ] cos θₑ dωₑ
![Page 70: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/70.jpg)
Interaction of light and matter

What happens when a light ray hits a point on an object?
– Some of the light gets absorbed: converted to other forms of energy (e.g., heat)
– Some gets transmitted through the object: possibly bent, through "refraction"
– Some gets reflected: possibly in multiple directions at once
– Really complicated things can happen: fluorescence

Let's consider the case of reflection in detail. In the most general case, a single incoming ray could be reflected in all directions. How can we describe the amount of light reflected in each direction?
![Page 71: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/71.jpg)
Image formation system

Relation between what the camera captures and what the surface reflects.
![Page 72: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/72.jpg)
Image formation system

Consists of a thin lens and an image plane. The diameter of the lens is d and the focal length is f_p. The system is assumed to be focused:
– rays originating from a particular point on the object meet at a single point in the image plane
– rays originating from an infinitesimal area dA₀ on the object are projected into some area dA_p in the image plane, and no rays from outside the area dA₀ will reach dA_p

When a camera captures the image of an object, the measured gray value is proportional to the image irradiance, which is related to the reflection properties of the object surface.
![Page 73: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/73.jpg)
Image formation system

How to calculate the image irradiance in an image-forming system: the radiant flux dΦ that is emitted from the surface patch dA₀ and passes through the entrance aperture can be calculated by integrating over the solid angle occupied by the entrance aperture as seen from the surface patch.

By assuming that there is no power loss in the medium, the image area dA_p will receive the same flux dΦ that is emitted from dA₀.

By definition, the image irradiance is the incident flux per unit area.
![Page 74: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/74.jpg)
Image formation system

From the previous equations:

Let θᵣ' be the angle between the surface normal and the line to the entrance aperture, and let α be the angle between this line and the optical axis. The solid angle occupied by the surface patch dA₀ as seen from the entrance aperture equals the solid angle occupied by the image area dA_p.
![Page 75: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/75.jpg)
Image formation system

If the size of the lens is small relative to the distance between the lens and the object, then the angle θᵣ' in the previous integral can be treated as constant, and the radiance Lᵣ tends to be constant and can be removed from the integral.

The solid angle occupied by the lens as seen from the surface patch is approximately equal to the foreshortened area π(d/2)² cos α divided by the square of the distance f₀/cos α.
![Page 76: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/76.jpg)
Image formation system

Finally, the expression for the image irradiance is obtained. That is, the image irradiance is proportional to the scene radiance, and the factor of proportionality is a function of the off-axis angle α.

f_p/d → the F-stop number of the camera.
![Page 77: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/77.jpg)
Image formation

E = L (π/4) (d/f_p)² cos⁴ α

Image irradiance is linearly related to scene radiance.
Irradiance is proportional to the area of the lens and inversely proportional to the squared distance between the lens and the image plane.
The irradiance falls off as the angle between the viewing ray and the optical axis increases.
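The irradiance equation above is easy to evaluate directly. A short sketch (mine, not from the slides; the lens parameters are made-up example values) showing both the on-axis value and the cos⁴ falloff:

```python
import math

def image_irradiance(L, d, f_p, alpha):
    """Image irradiance from scene radiance L:
    E = L * (pi/4) * (d/f_p)^2 * cos^4(alpha)
    d: lens diameter, f_p: focal length, alpha: off-axis angle (radians)."""
    return L * (math.pi / 4.0) * (d / f_p) ** 2 * math.cos(alpha) ** 4

# On the optical axis (alpha = 0) with an f/2 lens (f_p/d = 2):
E0 = image_irradiance(1.0, d=0.025, f_p=0.05, alpha=0.0)
print(E0)  # pi/16 ≈ 0.19635

# 30 degrees off-axis, irradiance drops by cos^4(30°) = 9/16 = 0.5625:
E30 = image_irradiance(1.0, d=0.025, f_p=0.05, alpha=math.radians(30))
print(E30 / E0)  # 0.5625
```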
![Page 78: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/78.jpg)
What happens on the image plane? (CCD camera plane)

The lens collects light rays.
An array of small fixed elements replaces the chemicals of film.
Each element generates a voltage signal based on the irradiance value.
![Page 79: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/79.jpg)
Digitization

Analog images are continuous representations of color.
This is somewhat of a problem for computers, which like discrete measurements.
![Page 80: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/80.jpg)
Digitization

object → observe (imaging systems) → record → digitize (sample and quantize) → store (digital storage/disk) → process (digital computer) → refresh/store (on-line buffer) → output (display)
![Page 81: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/81.jpg)
Digital image acquisition process
![Page 82: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/82.jpg)
Image sampling & quantization

Grayscale image
– A grayscale image is a function I(x,y) of the two spatial coordinates of the image plane.
– I(x,y) is the intensity of the image at the point (x,y) on the image plane.
– I(x,y) takes non-negative values.
– Assume the image is bounded by a rectangle [0,a]×[0,b]:
  I: [0,a] × [0,b] → [0, inf)

Color image
– Can be represented by three functions, R(x,y) for red, G(x,y) for green, and B(x,y) for blue.

(Figure: surface plot of intensity I(x,y) over the image rows and columns.)
![Page 83: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/83.jpg)
Image sampling & quantization

The analog signal representing a continuous image is sampled to produce discrete values which can be stored by a computer.
The frequency of digital samples greatly affects the quality of the digital image.
![Page 84: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/84.jpg)
Image sampling & quantization

To create a digital image, we need to convert continuous sensed data into digital form.
This involves two processes: sampling and quantisation.
The basic idea behind sampling and quantization is illustrated in Fig. 3.1.
![Page 85: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/85.jpg)
Image sampling & quantization

Computers handle "discrete" data.

Sampling
– Sample the value of the image at the nodes of a regular grid on the image plane.
– A pixel (picture element) at (i,j) is the image intensity value at the grid point indexed by the integer coordinates (i,j).

Quantization
– A process of transforming a real-valued sampled image to one taking only a finite number of distinct values.
– Each sampled value in a 256-level grayscale image is represented by 8 bits: 0 (black) … 255 (white).
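The two steps can be sketched together (an illustration of the idea, not code from the slides): sample a continuous image function on a regular grid, then map each real-valued sample to one of 256 integer gray levels.

```python
import math

def sample_and_quantize(I, a, b, n_rows, n_cols, levels=256):
    """Sample a continuous image I(x,y) on a regular n_rows x n_cols grid over
    [0,a]x[0,b], then quantize each real-valued sample to one of `levels`
    integer gray values (256 levels -> 8 bits per pixel).
    Assumes I returns values in [0, 1)."""
    img = []
    for i in range(n_rows):
        row = []
        for j in range(n_cols):
            x = (i + 0.5) * a / n_rows   # sampling: grid-point location
            y = (j + 0.5) * b / n_cols
            v = I(x, y)
            row.append(min(int(v * levels), levels - 1))  # quantization
        img.append(row)
    return img

# A smooth test pattern with values in [0, 1):
pattern = lambda x, y: 0.5 + 0.499 * math.sin(2 * math.pi * x) * math.cos(2 * math.pi * y)
digital = sample_and_quantize(pattern, a=1.0, b=1.0, n_rows=8, n_cols=8)
print(all(0 <= v <= 255 for row in digital for v in row))  # True
```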
![Page 86: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/86.jpg)
How sampling works?

– The original analog representation
– Measurements are made at equal intervals
– Discrete samples are taken from the measurements
![Page 87: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/87.jpg)
Image sampling & quantization

Figure 3.1(a) shows a continuous image, f(x,y), that we want to convert to digital form.
To convert it to digital form, we have to sample the function in both coordinates and in amplitude.
An image may be continuous with respect to the x- and y-coordinates and also in amplitude.
![Page 88: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/88.jpg)
Image sampling & quantization

Digitizing the coordinate values is called sampling.
Digitizing the amplitude values is called quantization.
![Page 89: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/89.jpg)
Image sampling & quantization

Fig 3.1 Generating a digital image. (a) Continuous image. (b) A scan line from A to B in the continuous image. (c) Sampling & quantisation. (d) Digital scan line.
![Page 90: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/90.jpg)
Image sampling & quantization

The one-dimensional function shown in Fig. 3.1(b) is a plot of amplitude (gray level) values of the continuous image along the line segment AB in Fig. 3.1(a).
To sample this function, we take equally spaced samples along line AB, as shown in Fig. 3.1(c).
The location of each sample is given by a vertical tick mark in the bottom part of the figure.
![Page 91: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/91.jpg)
Image sampling & quantization

The samples are shown as small white squares superimposed on the function. The set of these discrete locations gives the sampled function.
However, the values of the samples still span (vertically) a continuous range of gray-level values.
In order to form a digital function, the gray-level values also must be converted (quantized) into discrete quantities.
![Page 92: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/92.jpg)
Image sampling & quantization

The right side of Fig. 3.1(c) shows the gray-level scale divided into eight discrete levels, ranging from black to white.
The vertical tick marks indicate the specific value assigned to each of the eight gray levels.
The continuous gray levels are quantized simply by assigning one of the eight discrete gray levels to each sample.
![Page 93: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/93.jpg)
Image sampling & quantization

The assignment is made depending on the vertical proximity of a sample to a vertical tick mark.
The digital samples resulting from both sampling and quantization are shown in Fig. 3.1(d) and Fig. 3.2(b).
![Page 94: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/94.jpg)
How to choose the spatial resolution: Nyquist rate

Nyquist rate: the spatial sampling interval must be less than or equal to half of the minimum period of the image; equivalently, the sampling frequency must be greater than or equal to twice the maximum frequency. If the image is sampled at this rate, no detail is lost.

(Figure: original image with minimum period 2 mm, sampled at a 1 mm spacing – the sampling locations capture every detail.)
![Page 95: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/95.jpg)
Aliased frequency

x₁(t) = sin(2πt), f = 1
x₂(t) = sin(12πt), f = 6

Sampling rate: 5 samples/sec

Two different frequencies, but the same sampled results!

(Figure: both sinusoids plotted over t ∈ [0, 2] pass through the same sample points.)
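The slide's example can be reproduced numerically (my sketch, not from the slides). Because f₂ − f₁ = 5 Hz equals the sampling rate, sin(12πn/5) = sin(2πn/5 + 2πn) = sin(2πn/5), so the two sinusoids produce identical samples:

```python
import math

f1, f2 = 1, 6        # Hz
fs = 5               # samples per second

# Sample both sinusoids at t = n/fs, for t in [0, 2]:
t = [n / fs for n in range(11)]
x1 = [math.sin(2 * math.pi * f1 * tn) for tn in t]
x2 = [math.sin(2 * math.pi * f2 * tn) for tn in t]

# f2 - f1 = 5 Hz = fs, so the 6 Hz signal aliases onto the 1 Hz signal:
print(max(abs(a - b) for a, b in zip(x1, x2)))  # ~0 (up to float rounding)
```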
![Page 96: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/96.jpg)
Image sampling & quantization

Fig. 3.2 (a) Continuous image projected onto a sensor array. (b) Result of image sampling and quantisation.
![Page 97: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/97.jpg)
Image digitization

• Sampling means measuring the value of an image at a finite number of points.
• Quantization is the representation of the measured value at a sampled point by an integer.
![Page 98: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/98.jpg)
Image digitization
![Page 99: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/99.jpg)
Image sampling & quantization

Fig. 3.3. Coordinate convention used to represent digital images.
![Page 100: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/100.jpg)
Image sampling & quantization

Fig. 3.4. A digital image of size M x N.
![Page 101: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/101.jpg)
Image sampling & quantization

It is advantageous to use a more traditional matrix notation to denote a digital image and its elements.

Fig. 3.5 A digital image.
![Page 102: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/102.jpg)
Image sampling & quantization

The number of bits required to store a digitised image is

b = M × N × k

where M & N are the number of rows and columns, respectively.

The number of gray levels is an integer power of 2:

L = 2^k, where k = 1, 2, …, 24

It is common practice to refer to the image as a "k-bit image".
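The storage formula is simple enough to compute directly. A short sketch (mine, not from the slides) for a 1024 x 1024 8-bit image:

```python
def storage_bits(M, N, k):
    """Bits needed for an M x N image with 2**k gray levels: b = M * N * k."""
    return M * N * k

# A 1024 x 1024 image with 8 bits per pixel (256 gray levels):
b = storage_bits(1024, 1024, 8)
print(b)        # 8388608 bits
print(b // 8)   # 1048576 bytes = 1 MiB
print(2 ** 8)   # 256 gray levels (L = 2^k)
```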
![Page 103: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/103.jpg)
Image sampling & quantization

The spatial resolution of an image is the physical size of a pixel in that image; i.e., the area in the scene that is represented by a single pixel. It is the smallest discernible detail in an image. Sampling is the principal factor determining spatial resolution. Gray-level resolution refers to the smallest discernible change in gray level (often a power of 2).

Dense sampling will produce a high-resolution image in which there are many pixels, each of which represents a small part of the scene.

Coarse sampling will produce a low-resolution image in which there are few pixels, each of which represents a relatively large part of the scene.
![Page 104: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/104.jpg)
Image sampling & quantization

Fig. 3.6 Effect of resolution on image interpretation. (a) 8x8 image. (b) 32x32 image. (c) 256x256 image.
![Page 105: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/105.jpg)
Effect of sampling
256x256
64x64
16x16
![Page 106: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/106.jpg)
Examples of Sampling
256x256 pixels
64x64 pixels
128x128 pixels
32x32 pixels
![Page 107: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/107.jpg)
Effect of spatial resolution
![Page 108: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/108.jpg)
Effect of spatial resolution
![Page 109: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/109.jpg)
2/22/20092/22/2009 109109
Can we increase spatial Can we increase spatial resolution by interpolation ?resolution by interpolation ?
Downsampling is an irreversible process; interpolation can only estimate the lost detail, not recover it.
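The point can be seen in a small sketch (a hypothetical 4x4 image; NumPy assumed): downsampling throws detail away, and nearest-neighbor interpolation back to the original size cannot restore it.

```python
import numpy as np

# A hypothetical 4x4 "image" with fine alternating detail.
img = np.array([[ 10, 200,  10, 200],
                [200,  10, 200,  10],
                [ 10, 200,  10, 200],
                [200,  10, 200,  10]])

# Downsample by a factor of 2: keep every other row and column.
small = img[::2, ::2]

# "Restore" the original size by nearest-neighbor interpolation
# (repeating each pixel as a 2x2 block).
restored = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# The alternating detail is gone: the restored image differs from the original.
print(np.array_equal(img, restored))   # False
```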
![Page 110: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/110.jpg)
Image Sampling
original image; sampled by a factor of 2
sampled by a factor of 4 sampled by a factor of 8
![Page 111: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/111.jpg)
Fig. 3.7 Effect of quantisation on image interpretation. (a) 4 levels. (b) 16 levels. (c) 256 levels.
Image sampling & quantization
![Page 112: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/112.jpg)
Effect of Quantization
4 bits / pixel
2 bits / pixel
8 bits / pixel
![Page 113: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/113.jpg)
Effect of quantization levels
256 levels 128 levels
64 levels 32 levels
![Page 114: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/114.jpg)
Effect of quantization
16 levels 8 levels
4 levels 2 levels
In this image, it is easy to see false contours.
![Page 115: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/115.jpg)
Image quantization
• 256 gray levels (8 bits/pixel), 32 gray levels (5 bits/pixel), 16 gray levels (4 bits/pixel)
• 8 gray levels (3 bits/pixel), 4 gray levels (2 bits/pixel), 2 gray levels (1 bit/pixel)
![Page 116: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/116.jpg)
Image representation
The result of sampling and quantisation is a matrix of integer numbers, as shown in Fig. 3.3, Fig. 3.4, and Fig. 3.5.
The values of the coordinates at the origin are (x,y) = (0,0).
The next coordinate values along the first row are (x,y) = (0,1).
The notation (0,1) is used to signify the 2nd sample along the 1st row.
![Page 117: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/117.jpg)
Image representation
[Figure: a 2D function f(x,y) plotted over spatial coordinates x and y]
Images can be represented by 2D functions of the form f(x,y).
The physical meaning of the value of f at spatial coordinates (x,y) is determined by the source of the image.
![Page 118: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/118.jpg)
Image representation
In a digital image, both the coordinates and the image value become discrete quantities.
Images can now be represented as 2D arrays (matrices) of integer values: I[i,j] (or I[r,c]).
The term gray level is used to describe monochromatic intensity.
62 79 23 119 120 105 4 0
10 10 9 62 12 78 34 0
10 58 197 46 46 0 0 48
176 135 5 188 191 68 0 49
2 1 1 29 26 37 0 77
0 89 144 147 187 102 62 208
255 252 0 166 123 62 0 31
166 63 127 17 1 0 99 30
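The matrix above can be loaded directly as a 2D array and indexed with the (row, column) convention just described; this sketch assumes NumPy:

```python
import numpy as np

# The 8x8 gray-level matrix from the slide, as a 2D integer array I[r, c].
I = np.array([
    [ 62,  79,  23, 119, 120, 105,   4,   0],
    [ 10,  10,   9,  62,  12,  78,  34,   0],
    [ 10,  58, 197,  46,  46,   0,   0,  48],
    [176, 135,   5, 188, 191,  68,   0,  49],
    [  2,   1,   1,  29,  26,  37,   0,  77],
    [  0,  89, 144, 147, 187, 102,  62, 208],
    [255, 252,   0, 166, 123,  62,   0,  31],
    [166,  63, 127,  17,   1,   0,  99,  30],
])

print(I[0, 0])   # 62 -- the sample at the origin
print(I[0, 1])   # 79 -- the 2nd sample along the 1st row
print(I.shape)   # (8, 8)
```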
![Page 119: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/119.jpg)
How to select the suitable size and pixel depth of images
Low detail image / Medium detail image / High detail image (Lena image, Cameraman image)
To satisfy the human visual system:
1. For images of the same size, a low-detail image may need more pixel depth.
2. As image size increases, fewer gray levels may be needed.
The word “suitable” is subjective: it depends on the “subject”.
![Page 120: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/120.jpg)
The pixel
Sample location and sample values combine to make the picture element, or pixel.
3 color samples per pixel:
– 1 RED sample
– 1 GREEN sample
– 1 BLUE sample
Information about pixels is stored in a rectangular pattern and displayed to the screen in rows called rasters (from Spalter).
![Page 121: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/121.jpg)
The pixel
Monitor pixels are actually circular light representations of red, green and blue phosphors.
Pixel density is measured using Dots Per Inch (DPI).
Pixel size is measured using Dot Pitch.
DPI and Dot Pitch have an inverse relationship (higher DPI means smaller Dot Pitch).
![Page 122: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/122.jpg)
Image characteristics
Each pixel is assigned a numeric value (bit depth) that represents a shade of gray based on the attenuation characteristics of the volume of tissue imaged.
![Page 123: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/123.jpg)
Pixel depth
The number of bits determines the number of shades of gray the system is capable of displaying on the digital images.
10- and 12-bit pixels can display 1024 and 4096 shades of gray, respectively.
Increasing pixel bit depth improves image quality.
![Page 124: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/124.jpg)
Bit-Depth
Number of bits to represent pixel color:

| Expression | Name  | Colors |
|------------|-------|--------|
| 2^1        | 1-bit | 2      |
| 2^4        | 4-bit | 16     |
| 2^6        | 6-bit | 64     |
![Page 125: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/125.jpg)
Bit-Depth
Number of bits to represent pixel color:

| Expression | Name                | Colors           |
|------------|---------------------|------------------|
| 2^8        | 8-bit               | 256              |
| 2^16       | 16-bit              | 65,536           |
| 2^24       | 24-bit (True Color) | About 16 million |
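The "Colors" column is simply 2 raised to the number of bits; a quick check:

```python
# Number of representable colors (or gray levels) for a pixel depth of
# `bits` is 2**bits: 1 bit -> 2, 8 bits -> 256, 24 bits -> 16,777,216.
for bits in (1, 4, 6, 8, 16, 24):
    print(f"{bits}-bit: {2 ** bits} colors")
```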
![Page 126: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/126.jpg)
Digital image characteristics
A digital image is displayed as a combination of rows and columns known as a matrix.
The smallest component of the matrix is the pixel (picture element).
The location of the pixel within the image matrix corresponds to an area within the patient or volume of tissue referred to as a voxel.
![Page 127: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/127.jpg)
Matrix size
For a given field of view, a larger matrix size includes a greater number of smaller pixels.
![Page 128: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/128.jpg)
Color Fundamentals
Color is used heavily in human vision.
The visible spectrum for humans is 400 nm (blue) to 700 nm (red).
Machines can “see” much more; e.g., X-rays, infrared, radio waves.
![Page 129: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/129.jpg)
HVS
Color perception: light hits the retina, which contains photosensitive cells.
These cells convert the spectrum into a few discrete values.
![Page 130: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/130.jpg)
HVS
There are two types of photosensitive cells:
Cones: sensitive to colored light, but not very sensitive to dim light.
Rods: sensitive to achromatic light.
We perceive color using three different types of cones; each is sensitive in a different region of the spectrum: 445 nm (blue), 535 nm (green), 575 nm (red). They have different sensitivities; we are more sensitive to green than to red.
![Page 131: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/131.jpg)
Color Fundamentals
Humans can discern thousands of color shades and intensities, compared to only about two dozen shades of gray.
When a beam of sunlight passes through a glass prism, the emerging beam of light is a continuous spectrum of colors ranging from violet at one end to red at the other.
If light is achromatic, its only attribute is its intensity, or amount; this is what can be seen on a black-and-white television set. Gray level refers to a scalar measure of intensity that ranges from black, through grays, to white.
![Page 132: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/132.jpg)
Color fundamentals
Chromatic light spans the electromagnetic spectrum from 400 to 700 nm. Three quantities describe the quality of a chromatic light source: radiance, luminance, and brightness. Radiance is the total amount of energy that flows from the light source. Luminance is the amount of energy perceived by the observer. Brightness is a subjective measure that is practically impossible to quantify; it embodies the achromatic notion of intensity.
The human eye contains three types of cones: red, green and blue. Due to the absorption characteristics of the human eye, colors are seen as variable combinations of the so-called primary colors: red (R), green (G), and blue (B). The wavelengths of these colors are 700 nm, 546.1 nm, and 435.8 nm, respectively (per the CIE standard).
The primary colors can be added to produce the secondary colors of light: magenta (red plus blue), cyan (green plus blue), and yellow (red plus green).
![Page 133: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/133.jpg)
Color fundamentals
The characteristics that distinguish one color from another are brightness, hue, and saturation. Brightness embodies the notion of achromatic intensity. Hue is an attribute associated with the dominant wavelength in a mixture of light waves; it is the dominant color perceived by an observer. When we call an object red, blue, orange, or yellow, we are referring to its hue. Saturation means relative purity, or the amount of white light mixed with a hue; the degree of saturation is inversely proportional to the amount of white light added. Hue and saturation taken together are called chromaticity, so a color may be characterized by its brightness and chromaticity. The amounts of red, green, and blue needed to form any particular color are referred to as tristimulus values and denoted X, Y, and Z, respectively. A color is then characterized by its trichromatic coefficients, defined as

    x = X / (X + Y + Z)
    y = Y / (X + Y + Z)
    z = Z / (X + Y + Z)
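The trichromatic coefficients can be computed directly from these definitions; the tristimulus values below are made-up numbers for illustration:

```python
# Trichromatic coefficients from tristimulus values X, Y, Z.
def trichromatic(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s, Z / s

# Hypothetical tristimulus values chosen for easy arithmetic.
x, y, z = trichromatic(25.0, 62.0, 13.0)
print(x, y, z)        # 0.25 0.62 0.13
print(x + y + z)      # sums to 1 (up to floating point)
```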
![Page 134: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/134.jpg)
Color fundamentals
Note from these equations that x + y + z = 1. Another approach for specifying colors is the CIE chromaticity diagram (Fig.), which shows color composition as a function of x (red) and y (green). For any values of x and y, the corresponding value of z (blue) is obtained as z = 1 - x - y. The point marked in the figure has approximately 62% green and 25% red content; the composition of blue is 13%. The positions of the various spectrum colors, from violet at 380 nm to red at 780 nm, are indicated around the boundary of the tongue-shaped chromaticity diagram. Any point within the boundary represents some mixture of spectrum colors. The point of equal energy corresponds to equal fractions of the three primary colors; it represents the CIE standard for white light. Any point on the boundary of the chromaticity chart is fully saturated. As a point progresses toward the point of equal energy, more and more white is added and the color becomes less saturated; saturation is zero at the point of equal energy.
![Page 135: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/135.jpg)
CIE Chromaticity model
The Commission Internationale de l'Eclairage defined 3 standard primaries, X, Y, Z, that can be added to form all visible colors. Y was chosen so that its color matching function matches the sum of the 3 human cone responses.

    [R]   [ 1.9107  -0.5326  -0.2883 ] [X]
    [G] = [-0.9843   1.9984  -0.0283 ] [Y]
    [B]   [ 0.0583  -0.1185   0.8986 ] [Z]

    [X]   [ 0.6067   0.1736   0.2001 ] [R]
    [Y] = [ 0.2988   0.5868   0.1143 ] [G]
    [Z]   [ 0.0000   0.0661   1.1149 ] [B]
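As a sketch, the two CIE conversion matrices can be applied with NumPy; since they are approximate inverses of each other (the entries are rounded to 4 decimals), a round trip should return the original color:

```python
import numpy as np

# Standard CIE RGB <-> XYZ conversion matrices (4-decimal rounding).
RGB_from_XYZ = np.array([[ 1.9107, -0.5326, -0.2883],
                         [-0.9843,  1.9984, -0.0283],
                         [ 0.0583, -0.1185,  0.8986]])
XYZ_from_RGB = np.array([[0.6067, 0.1736, 0.2001],
                         [0.2988, 0.5868, 0.1143],
                         [0.0000, 0.0661, 1.1149]])

rgb = np.array([0.8, 0.4, 0.2])            # a hypothetical color
xyz = XYZ_from_RGB @ rgb
back = RGB_from_XYZ @ xyz

# The round trip matches the input to within the rounding of the entries.
print(np.allclose(back, rgb, atol=1e-2))   # True
```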
![Page 136: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/136.jpg)
CIE Chromaticity model
x, y, z normalize X, Y, Z such that x + y + z = 1.
Actually only x and y are needed, because z = 1 - x - y.
Pure colors are at the curved boundary.
White is (1/3, 1/3, 1/3).
![Page 137: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/137.jpg)
Color fundamentals
Color models provide a standard way of specifying a particular color using a 3D coordinate system.
Hardware oriented:
– RGB: additive system (add colors to black), used for displays.
– CMY: subtractive system, used for printing.
– YIQ: used for TV; good for compression.
Image processing oriented:
– HSI: a good perceptual space for art, psychology, and recognition.
![Page 138: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/138.jpg)
Color fundamentals
• Primary Colors
![Page 139: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/139.jpg)
Color fundamentals
In the RGB model, each color appears in its primary spectral components of red, green, and blue. This model is based on a Cartesian coordinate system; the color subspace of interest is the unit cube. R, G, and B are the three primary colors, and the secondary colors cyan, magenta, and yellow are located at corners of the cube. In this model the gray scale extends from black to white along the line that joins the origin to (1,1,1). All RGB values are assumed to be in the range [0, 1]. An image represented in RGB color consists of three component images, one for each primary color; when fed to an RGB monitor, these three images combine to produce a composite color image. The number of bits used to represent each pixel in RGB space is called the pixel depth. Each RGB pixel has a depth of 24 bits.
![Page 140: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/140.jpg)
Color fundamentals
Secondary colors (additive synthesis):
![Page 141: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/141.jpg)
Color fundamentals
Secondary colors (additive synthesis): adding primary colors:
– (none)    = black
– B         = blue
– G         = green
– G + B     = cyan
– R         = red
– R + B     = magenta
– R + G     = yellow
– R + G + B = white
![Page 142: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/142.jpg)
Color fundamentals
Secondary colors (additive synthesis): weighted adding of primary colors:
– 0.5·R + 0.5·G + 0.5·B = grey
– 1.0·R + 0.2·G + 0.2·B = brown
– 0.5·R + 1.0·G + 0.0·B = lime
– 1.0·R + 0.5·G + 0.0·B = orange
![Page 143: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/143.jpg)
Color fundamentals
Color images can be represented by 3D arrays (e.g. 320 x 240 x 3).
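A minimal NumPy sketch of that layout (rows x columns x channels; note NumPy orders the 320 x 240 example as height first):

```python
import numpy as np

# A 240-row x 320-column color image: one 2D plane per R, G, B channel.
img = np.zeros((240, 320, 3), dtype=np.uint8)

img[:, :, 0] = 255        # fill the red channel -> a pure red image
print(img.shape)          # (240, 320, 3)
print(img[0, 0])          # the top-left pixel's [R G B] values
```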
![Page 144: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/144.jpg)
Color fundamentals - RGB
Additive model.
An image consists of 3 bands, one for each primary color.
Appropriate for image displays.
![Page 145: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/145.jpg)
Color fundamentals - CMY
Primary colors (subtractive synthesis):
![Page 146: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/146.jpg)
Color fundamentals
Cyan, magenta, and yellow are the secondary colors of light, or alternatively the primary colors of pigments. For example, when a surface coated with cyan pigment is illuminated with white light, no red light is reflected from the surface: cyan subtracts red light from reflected white light, which itself is composed of equal amounts of red, green, and blue light.
Most devices that deposit colored pigments on paper, such as color printers and copiers, require CMY data input or perform an RGB to CMY conversion internally. This conversion is performed using the simple operation
![Page 147: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/147.jpg)
CMY model
Cyan-Magenta-Yellow is a subtractive model, which is good for modeling the absorption of colors.
Appropriate for paper printing. The assumption here is that all color values are normalized to the range [0, 1].
    [C]   [1]   [R]
    [M] = [1] - [G]
    [Y]   [1]   [B]
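The conversion is a one-liner; a small sketch (rgb_to_cmy is an illustrative name, values normalized to [0, 1]):

```python
import numpy as np

# CMY is the complement of RGB (all values normalized to [0, 1]).
def rgb_to_cmy(rgb):
    return 1.0 - np.asarray(rgb, dtype=float)

print(rgb_to_cmy([1.0, 0.0, 0.0]))   # red   -> [0. 1. 1.] (magenta + yellow pigment)
print(rgb_to_cmy([1.0, 1.0, 1.0]))   # white -> [0. 0. 0.] (no pigment deposited)
```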
![Page 148: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/148.jpg)
Color fundamentals - CMYK
Equal amounts of the pigment primaries cyan, magenta, and yellow should produce black. In practice, combining these colors for printing produces a muddy-looking black, so in order to produce true black a fourth color, black, is added, giving rise to the CMYK color model. When publishers talk about “four-color printing” they are referring to the three colors of the CMY model plus black.
![Page 149: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/149.jpg)
Color fundamentals - HSI
The RGB and CMY color models are ideally suited for hardware implementations, and RGB matches the fact that the human eye is strongly perceptive to red, green, and blue components. Unfortunately, these and similar color models are not well suited for describing colors in terms that are practical for human interpretation. For example, one does not refer to the color of an object by giving the percentage of each of the primaries composing its color; in other words, we do not think of color images as being composed of three primary images that combine to form a single image.
![Page 150: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/150.jpg)
Color fundamentals - HSI
When humans view a color object, we describe it by its hue, saturation, and brightness. Hue is a color attribute that describes a pure color (pure red, orange, or yellow). Saturation gives a measure of the degree to which a pure color is diluted by white light. Brightness is a subjective descriptor that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in color sensation. We do know that intensity (gray level) is the most useful descriptor of monochromatic images: it is easily measurable and interpretable. One model that decouples the intensity component from the color-carrying information (hue and saturation) in a color image is HSI. As a result, the HSI model is an ideal tool for developing image processing algorithms based on color descriptions that are natural and intuitive to humans.
![Page 151: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/151.jpg)
Color fundamentals
To summarize, RGB is ideal for image generation (image capture by a color camera or image display on a monitor screen), but its use for color description is much more limited. An RGB color image can be viewed as three monochromatic intensity images. In the RGB model, the line joining the black and white vertices represents the intensity axis. To determine the intensity of any color point, pass a plane perpendicular to the intensity axis through it; this gives an intensity value in the range [0, 1]. Saturation increases as a function of distance from the intensity axis, so the saturation of points on the intensity axis is zero; it is the length of the vector from the origin to the point, where the origin is defined by the intersection of the color plane with the intensity axis. Hue can also be determined from an RGB point: consider the plane formed by three points (e.g. black, white, and cyan); all the colors generated by three colors lie in the triangle defined by those colors. The hue of a point is determined by an angle from some reference: usually an angle of 0 from the red axis designates a hue of 0, and hue increases counterclockwise from there.
![Page 152: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/152.jpg)
Color fundamentals
Uniform: equal (small) steps give the same perceived color changes.
Hue is encoded as an angle (0 to 2π).
Saturation is the distance to the vertical axis (0 to 1).
Intensity is the height along the vertical axis (0 to 1).
![Page 153: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/153.jpg)
Color fundamentals - HSI
The three important components of the HSI color space are the vertical intensity axis, the length of the vector to the color point, and the angle this vector makes with the red axis.
To summarize HSI: hue, saturation, and intensity are non-linear functions of RGB. Hue relations are naturally expressed on a circle.
    I = (R + G + B) / 3

    S = 1 - min(R, G, B) / I

    H = θ             if B <= G
    H = 360° - θ      if B > G

    where  θ = cos^-1 { (1/2)[(R - G) + (R - B)] / [(R - G)^2 + (R - B)(G - B)]^(1/2) }
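A direct Python transcription of these formulas (rgb_to_hsi is an illustrative name; inputs normalized to [0, 1]; degenerate grays, where the denominator is zero, get H = 0 by convention here):

```python
import math

def rgb_to_hsi(R, G, B):
    """HSI from the formulas above; R, G, B normalized to [0, 1]."""
    I = (R + G + B) / 3.0
    S = 0.0 if I == 0 else 1.0 - min(R, G, B) / I
    num = 0.5 * ((R - G) + (R - B))
    den = math.sqrt((R - G) ** 2 + (R - B) * (G - B))
    theta = math.degrees(math.acos(num / den)) if den != 0 else 0.0
    H = theta if B <= G else 360.0 - theta
    return H, S, I

H, S, I = rgb_to_hsi(1.0, 0.0, 0.0)   # pure red
print(H, S, I)                        # H = 0 degrees, S = 1, I = 1/3
```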
![Page 154: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/154.jpg)
Color fundamentals - HSI
(Left) Image of food originating from a digital camera. (Center) Saturation value of each pixel decreased 20%. (Right) Saturation value of each pixel increased 40%.
![Page 155: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/155.jpg)
Color fundamentals - YIQ
YIQ has better compression properties.
Luminance Y is encoded using more bits than the chrominance values I and Q (humans are more sensitive to Y than to I and Q).
Luminance alone is used by black-and-white TVs; all 3 values are used by color TVs.

    [Y]   [ 0.299   0.587   0.114 ] [R]
    [I] = [ 0.596  -0.275  -0.321 ] [G]
    [Q]   [ 0.212  -0.532   0.311 ] [B]
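Applying the YIQ matrix above with NumPy shows the intent: for a neutral (white) input, luminance Y carries everything and the chrominance terms come out (near) zero:

```python
import numpy as np

# NTSC YIQ from RGB, using the coefficients on the slide.
YIQ_from_RGB = np.array([[0.299,  0.587,  0.114],
                         [0.596, -0.275, -0.321],
                         [0.212, -0.532,  0.311]])

y, i, q = YIQ_from_RGB @ np.array([1.0, 1.0, 1.0])   # white: R = G = B = 1
print(round(y, 3), round(i, 3), round(q, 3))
# y is 1.0 (full luminance); i and q are near zero, since white has no chrominance.
```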
![Page 156: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/156.jpg)
Color fundamentals - Summary
To print: RGB → CMY (or grayscale).
To compress images: RGB → YUV. Color information (U, V) can be compressed 4 times without significant degradation in perceptual quality.
For color description: RGB → HSI.
To compare images: RGB → CIE Lab. CIE Lab space is more perceptually uniform, hence Euclidean distance in Lab space is meaningful.
![Page 157: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/157.jpg)
Storing Images
With traditional cameras, the film is used both to record and to store the image. With digital cameras, separate devices perform these two functions: the image is captured by the image sensor, then stored in the camera on a storage device of some kind. We look at many of the storage devices currently used.
Removable vs. Fixed Storage
Older and less expensive cameras have built-in fixed storage that can’t be removed or increased. This greatly reduces the number of photos you can take before having to erase to make room for new ones.
Almost all newer digital cameras use some form of removable storage media: usually flash memory cards, but occasionally small hard disks, CDs, and variations of the floppy disk. Whatever its form, removable media lets you remove one storage device when it is full and insert another.
![Page 158: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/158.jpg)
Storing Images
The number of images you take is limited only by the number of storage devices you have and the capacity of each.
The number of images that you can store in a camera depends on a variety of factors, including:
(1) The capacity of the storage device (expressed in Megabytes)
(2) The resolution at which pictures are taken
(3) The amount of compression used
The number you can store is important because once you reach the limit you have no choice but to stop taking pictures or erase some existing ones to make room for new ones. How much storage capacity you need depends partly on what you use the camera for.
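A back-of-the-envelope sketch of factors (1)-(3); all numbers here are hypothetical (a 64 MB card, 1280x960 photos at 3 bytes/pixel, roughly 10:1 compression):

```python
# Rough estimate: card capacity divided by per-image size, where per-image
# size is resolution x bytes/pixel, reduced by the compression ratio.
def images_that_fit(capacity_mb, width, height, bytes_per_pixel=3, compression=10):
    image_mb = width * height * bytes_per_pixel / compression / (1024 * 1024)
    return int(capacity_mb / image_mb)

print(images_that_fit(64, 1280, 960))   # roughly 180 images
```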
![Page 159: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/159.jpg)
Storing images
The advantages of removable storage are many. They include the following:
(1) They are erasable and reusable
(2) They are usually removable, so you can remove one and plug in another; storage is limited only by the number of devices you have.
(3) They can be removed from the camera and plugged into the computer or printer to transfer the images.
![Page 160: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/160.jpg)
Storing Images
Flash Card Storage: As the popularity of digital cameras and other handheld devices has increased, so has the need for small, inexpensive memory devices. The type that has caught on is flash memory, which uses solid-state chips to store your image files. Although flash memory chips are similar to the RAM chips used inside your computer, there is one important difference: they don't require batteries and don't lose images when power is turned off. Your photographs are retained indefinitely without any power to the flash memory components. These chips are packaged inside a case equipped with electrical connectors, and the sealed unit is called a card.
Flash memory cards consume little power, take up little space, and are very rugged. They are also very convenient. You can carry lots of them with you and change them as needed.
![Page 161: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/161.jpg)
Storing images
Until recently, most flash cards have been in the standard PC Card format that is widely used in network computers. However, with the growth of the digital camera and other markets, a number of smaller formats have been introduced. As a result of this competition, cameras support a confusing variety of incompatible flash memory cards, including the following:
PC Cards
CompactFlash
SmartMedia
Memory Sticks
xD-Picture Cards
Each of these formats is supported by its own group of companies and has its own following.
![Page 162: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/162.jpg)
Storing Images

PC Cards:
PC Cards have the highest storage capacities, but their large size has led to their being used mainly in professional cameras.

CompactFlash Cards:
They are generally the most advanced flash storage devices for consumer-level digital cameras.

CompactFlash Terminology:
CompactFlash cards and slots that are 3.3 mm thick are called CompactFlash (abbreviated CF) or CompactFlash Type I (abbreviated CF-I); those that are 5 mm thick are called Type II.

SmartMedia Cards:
They are smaller than CompactFlash cards and generally don't come with storage capacities quite as high.
![Page 163: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/163.jpg)
Storing Images

Memory Sticks:
Sony Memory Sticks, shaped something like a stick of gum, are currently used mainly in Sony products.

xD-Picture Cards:
The xD-Picture Cards are the smallest of the memory cards and are used in very small cameras. They were developed jointly by Fuji and Olympus.

Memory card storage cases:
Cards are easy to misplace, and the smaller they are, the easier they are to lose if you don't find a way to store them safely. One way to keep them safe is to use an inexpensive storage case.
![Page 164: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/164.jpg)
Storing Images

Hard disk storage:
One of the current drawbacks of compact flash memory cards is their limited storage capacity. For high-resolution cameras this is a real drawback. One solution is high-speed, high-capacity hard disk drives. Until recently these drives were too large and expensive to be mounted inside cameras, but that changed with IBM's introduction of Microdrive hard disk drives. These drives, now owned by Hitachi, are smaller and lighter than a roll of film. In fact, they are so small that they can be plugged into a Type II CompactFlash slot in a digital camera or flash card reader. The Hitachi Microdrive fits a CF-II slot and is a marvel of engineering.
![Page 165: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/165.jpg)
Storing Images

Optical storage disks:
CDs are used in a few cameras and have the advantage that they can be read in any system with a CD drive. The disks are write-once, with archival quality and no danger of important files being deleted or written over. Sony's line of Mavicas uses CDs for storage.

Temporary storage:
Portable digital image storage and viewing devices are advancing rapidly; that's good, because they meet a real need. When you are out photographing, if your storage device becomes filled with images, you need a place to store the images temporarily until they are transferred to your main system. One device used for this is a notebook computer. Not only do many people already have one, but its large screen and ability to run software make it attractive. However, a notebook computer is not always the ideal temporary storage device, because of its weight, short battery life, and long start-up time. Hence the introduction of portable hard drives.
![Page 166: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/166.jpg)
Storing Images

FlashTrax from SmartDisk is one of the new multimedia storage/viewer devices.

To use one of these devices, you insert your memory card into a slot, often using an adapter, and quickly transfer your images. You can then erase your camera's storage device to make room for new images and resume shooting. When you get back to your permanent setup, you copy or move your images from the intermediate storage device to the system you use for editing, printing, and distributing them. The speed with which you transfer depends on the connections supported by the device. Most support USB 2 and some support FireWire. The latest trend is to incorporate image storage into multipurpose devices. Many of these devices let you review the stored images on the device itself or on a connected TV. Some also let you print the images directly on a printer without using a computer.
![Page 167: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/167.jpg)
Storing Images

The trend is to go even further and combine digital photos, digital videos, and MP3 music in the same device. With a device like this, one will be able to create slide shows with special transitions, pans, and accompanying music and play them back anywhere.
One way to eliminate or reduce the need for intermediate storage is to use a higher capacity storage device in the camera. For example, some devices store many gigabytes of data, enough to store hundreds of large photos.
![Page 168: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/168.jpg)
Storing Images

The key questions to ask when considering an intermediate storage device are:
(1) What is its storage capacity? What is the cost per megabyte of storage?
(2) Does it have slots or adapters for the storage devices you use?
(3) Does it support the image formats you use? Many support common image formats like JPEG but not proprietary formats such as Canon's RAW and Nikon's NEF.
(4) Does it support video and MP3 music playback? Does it support your camera's movie format, if it has one?
(5) What is the transfer rate, and how long does it take to transfer images from a card to the device?
(6) Can it display images on a TV set or be connected directly to a printer?
![Page 169: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/169.jpg)
Storing Images

(7) If it connects to a TV, does it have a remote control?
(8) Can you view images on the device's own screen?
(9) Are there ways to rotate, zoom in/out, and scroll?
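Question (5) above can be answered with back-of-the-envelope arithmetic. A rough sketch, assuming a sustained transfer rate in megabytes per second (real rates depend on the interface, e.g. USB 2 vs FireWire, and are rarely sustained at the advertised peak):

```python
def transfer_time_seconds(card_mb, rate_mb_per_s):
    """Approximate time to copy a full card at a sustained transfer rate.
    Both figures are illustrative assumptions, not device specs."""
    return card_mb / rate_mb_per_s

# e.g. a 1 GB (1024 MB) card at a sustained 20 MB/s:
# transfer_time_seconds(1024, 20) -> 51.2 seconds
```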
![Page 170: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/170.jpg)
Introduction to Double Buffering
BitBlt -> Stands for Bit-Block Transfer. It means that a "block" of bits, describing a rectangle in an image, is copied in one operation. Usually the graphics card supports this command in hardware. There is a function of this name in the Win32 API, which also occurs in MFC, but the FCL does not provide this function to you directly. Nevertheless, it is essentially packaged for use as the Graphics.DrawImage method.
![Page 171: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/171.jpg)
Introduction to Double Buffering

Memory DC -> DC means "device context". This is represented in the FCL as a Graphics object. So far, the Graphics objects we have used in our programs usually corresponded to the screen; and in one lecture, we used a Graphics object that corresponded to the printer. But it is possible to create a Graphics object that does not correspond to a physical device. Instead, it just has an area of RAM (called a buffer) that it writes to instead of writing to video RAM on the graphics card. When you (for example) draw a line or fill a rectangle in this Graphics object, nothing changes on the screen (even if you call Invalidate), since the memory area being changed by the graphics calls is not actually video RAM and has no connection with the monitor. This "virtual Graphics object" is loosely referred to as a memory DC. It is a "device" that exists only in memory; but usually its pixel format does correspond to a physical device such as the screen, so that when data is copied from this buffer to video memory, it is correctly formatted.
![Page 172: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/172.jpg)
Double Buffering

We can use images as offscreen drawing surfaces by storing them as pictures. This allows us to render any image, including text and graphics, to an offscreen buffer that we can display at a later time. The advantage of doing this is that the image is seen only when it is complete. Drawing a complicated image could take several milliseconds or more, which can be seen by the user as flashing and flickering. This flashing is distracting and causes the user to perceive the rendering as slower than it actually is. Using an offscreen image to reduce flicker is called double buffering, because the screen is considered a buffer for pixels, and the offscreen image is the second buffer, where we can prepare pixels for display.
![Page 173: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/173.jpg)
What is Double Buffering?

Double buffering -> "Double buffering" refers to the technique of writing into a memory DC and then BitBlt-ing the memory DC to the screen.

This works as follows: your program can take its own sweet time writing to a memory DC, without producing any delay or flicker on the screen. When the picture is finally complete, the program can call BitBlt and bang! Suddenly (at the next vertical retrace interval) the entire contents of the memory DC's buffer are copied to the appropriate part of video RAM, and at the next sweep of the electron gun, the picture appears on the screen. This technique is known as double buffering. The name is appropriate because there are two buffers involved: one on the graphics card (video RAM) and one that is not video RAM, and the second one is a "double" of the first in the sense that it has the same pixel format.
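The idea can be sketched without any Windows API at all. The toy model below (plain Python, with character "pixels" standing in for video RAM) mirrors the structure just described: drawing calls touch only the off-screen buffer, and a single copy, playing the role of BitBlt, moves the finished picture to the screen in one operation:

```python
class DoubleBuffer:
    """Toy model of double buffering: draw into an off-screen buffer,
    then copy ("BitBlt") the whole buffer to the screen in one step.
    Buffers are rows of characters standing in for pixels."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.screen = [[' '] * width for _ in range(height)]  # "video RAM"
        self.memory = [[' '] * width for _ in range(height)]  # the memory DC

    def draw_point(self, x, y, ch='*'):
        # Drawing touches only the off-screen buffer; the screen
        # does not change until blit() is called.
        self.memory[y][x] = ch

    def blit(self):
        # The one-shot copy corresponding to BitBlt.
        for y in range(self.height):
            self.screen[y] = self.memory[y][:]

db = DoubleBuffer(4, 2)
db.draw_point(1, 0)
# screen is still blank here: the partial drawing is never visible
db.blit()
# now the finished picture appears on the "screen" buffer
```

The user never sees a half-drawn frame, which is exactly the flicker-avoidance argument made above.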
![Page 174: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/174.jpg)
What is Double Buffering?

[Some books reserve this term for a special case, in which the graphics card has two buffers that are alternately used to refresh the monitor, eliminating the copying phase. But most books use the term double buffering for what we have described.]

Whatever is stored in the memory DC will not be visible unless and until it gets copied to the DC that corresponds to the screen. This is done with BitBlt, so that the display happens without flicker.
![Page 175: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/175.jpg)
Why Use Double Buffering?

Double buffering can be used whenever the computations needed to draw the window are time-consuming. Of course, you could always use space to replace time, by storing the results of those computations. That is, in essence, what double buffering does. The end result of the computations is an array of pixel information telling what colors to paint the pixels. That's what the memory DC stores.

This situation arises all the time in graphics programming. All three-dimensional graphics programs use double buffering. MathXpert uses it for two-dimensional graphics. We will soon examine a computer graphics program to illustrate the technique.

Another common use of double buffering is to support animation. If you want to animate an image, you need to "set a timer" and then, at regular time intervals, use BitBlt to update the screen to show the image in the next position.
![Page 176: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/176.jpg)
Why Use Double Buffering?

The BitBlt will take place during the vertical retrace interval. Meanwhile, between "ticks" of the timer, the next image is being computed and drawn into the memory DC. In the next lecture, this technique will be illustrated.

When should the memory DC be created?

Obviously it has to be created when the view is first created. Less obviously, it has to be created again when the view window is resized. But the memory DC is the same size as the screen DC, which is the same size as the client area of the view window. When this changes, you must change your memory DC accordingly.

When you create a new memory DC, you must also destroy the old one, or else memory will soon be used up by all the old memory DCs (provided the user plays around resizing the window many times).
![Page 177: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/177.jpg)
What is Double Buffering?

It turns out that every window receives a Resize event when it is being resized, and it also receives a Resize event soon after its original creation (when its size changes from zero by zero to the initial size). Therefore, we add a handler for the Resize event and put the code for creating the memory DC in the Resize message handler.
![Page 178: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/178.jpg)
Cathode Ray Tube
![Page 179: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/179.jpg)
Liquid Crystal Display

LCD (Liquid Crystal Display) panels are "transmissive" displays, meaning they aren't their own light source but instead rely on a separate light source and let that light pass through the display itself to your eye. We can describe how an LCD panel works by starting with that light source. The light source is a very thin lamp called a "backlight" that sits directly behind the LCD panel, as shown in Figure 1.
Figure 1
![Page 180: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/180.jpg)
LCD

The light from the backlighting then passes through a polarizing filter (a filter that aligns the light waves in a single direction). From there the now-polarized light passes through the actual LCD panel itself. The liquid crystal portion of the panel either allows the polarized light to pass through or blocks it, depending on how the liquid crystals are aligned at the time the light tries to pass through. See Figure 2.
Figure 2
![Page 181: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/181.jpg)
LCD

The liquid crystal portion of the panel is split up into tiny individual cells, each controlled by a tiny transistor that supplies current. Three cells side by side represent one "pixel" (individual picture element) of the image. An 800 x 600 resolution LCD panel has 480,000 pixels, and each pixel has three cells, for a total of 1,440,000 individual cells. Red, green, and blue are the primary colors of light; all other colors are made up of combinations of the primary colors. An LCD panel uses these three colors to produce color, which is why there are three cells per pixel: one cell each for red, green, and blue. Once the light has passed through the liquid crystal layer and the final polarizing filter, it passes through a color filter so that each cell represents one of the three primary colors of light. See Figure 3.
Figure 3
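The pixel and cell counts quoted above follow directly from the panel's resolution; a quick sketch of the arithmetic:

```python
def lcd_cell_count(width, height, cells_per_pixel=3):
    """Total pixels and individual colour cells (one red, one green,
    one blue cell per pixel) for an LCD panel of a given resolution."""
    pixels = width * height
    return pixels, pixels * cells_per_pixel

# lcd_cell_count(800, 600) -> (480000, 1440000), matching the text
```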
![Page 182: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/182.jpg)
LCD

The three cells per pixel then work in conjunction to produce color. For example, if a pixel needs to be white, each transistor that controls the three color cells in the pixel remains off, allowing red, green, and blue to pass through. Your eye sees the combination of the three primary colors, so close in proximity to each other, as white light. If the pixel needs to be blue, for an area of an image that is going to be sky, the two transistors for the red and green cells turn on, and the transistor for the blue cell remains off, allowing only blue light to pass through in that pixel.

Pros:
1. LCD displays are very thin. They can be mounted in places traditional CRT televisions and monitors cannot.
2. Color reproduction is excellent.
3. Contrast is good, although not great.
4. Pixel structure is very small, which creates a very smooth image.
5. Durable technology.
6. No burn-in issues.

Cons:
1. Very expensive technology per square inch of viewing area.
2. Black levels and details in dark scenes are not as strong as those in competing technologies.
3. Dead pixels can be an issue, although quality has improved as the technology has matured.
4. Sizes above 40" are cost prohibitive.
![Page 183: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/183.jpg)
LCD

Is an LCD panel right for you? It depends on your needs. Below is a list of common scenarios where an LCD panel provides the best performance, followed by a list of scenarios that might suggest the need for a different technology.
Scenarios where an LCD flat panel will perform well:
1. Any application that will require a screen of less than 42" diagonal.
2. Installations that require the monitor/television to be built into a wall or cabinetry, and require a diagonal image of less than 42".
3. Pre-made entertainment centers and bedroom armoires.
4. Any application that requires wall mounting and requires a diagonal image of less than 42".
Scenarios where another technology might be more effective:
1. Any application that requires a large screen — larger than 40" diagonal. LCD displays get cost prohibitive for sizes above 40". If you opt to select an LCD panel of over 40", be prepared to pay.
2. Applications where the best possible image quality is needed. A CRT is still going to give the best shadow detail and color.
3. Tight budgets; CRT technology will be much less expensive per viewing area.
![Page 184: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/184.jpg)
Printers

Inch
A unit of measurement equal to 25.4 millimeters or 2.54 centimeters.

Measurement
When referring to computers, a measurement is the process of determining a dimension, capacity, or quantity of an object or the duration of a task. By using measurements an individual can help ensure that an object is capable of fitting within an area, a storage medium is capable of storing the necessary files, or a task will complete in the required time, or determine how fast a product is compared to another product. Below is a listing of different types of computer measurements you may encounter while working with a computer or in the computer field.
![Page 185: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/185.jpg)
Printers

Pixel
Term that comes from the words Picture Element (PEL). A pixel is the smallest portion of an image or display that a computer is capable of printing or displaying. You can get a better understanding of what a pixel is when zooming into an image, as seen in the example to the right. As you can see in this example, the character image in this picture has been zoomed into at 1600%. Each of the blocks seen in this example is a single pixel of this image. Everything on the computer display looks similar to this when zoomed in upon; the same is true with printed images, which are created by several little dots that are measured in DPI.

PPI
Short for Pixels Per Inch, PPI is the number of pixels per inch a pixel image is made up of. The more pixels per inch the image contains, the higher quality the image will be.

Pixel image
A type of computer graphic that is composed entirely of pixels.
![Page 186: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/186.jpg)
Printers

There seems to be a lot of confusion about what PPI means (apart from the fact that it means Pixels Per Inch, of course). This article is for beginners in computer graphics and digital photography.

Dots Per Inch usually means the maximum number of dots a printer can print per inch. Roughly speaking, the more DPI, the higher quality the print will be. DPI is for printers, PPI is for printed images, but I don't think there is an official definition of the difference. PPI and DPI are sometimes but not always the same; I'll assume for simplicity in this article that they are. However, until you print an image the PPI number is meaningless.
![Page 187: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/187.jpg)
Printers

Until you print an image the PPI number is meaningless.

Imagine, for simplicity's sake, that the image below, when printed on your printer, is one inch (2.54 cm) square. If you count the pixels (blocks, dots) you'll find that there are 10 across the width of the image. If this was printed at a size of 1 inch it would be a 10 PPI image. Here is a 50 PPI version of the same image:
![Page 188: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/188.jpg)
Printers

You need not count the pixels; believe me, the square above is a 50 by 50 pixel image, and if printed so that it covered exactly 1 square inch it would be a 50 PPI image. Now let's look at a 150 PPI image:

PPI is simply how many pixels are printed per inch of paper. You may not be able to see the pixels (because your eyes or your printer are not of high enough quality). The above images are approximations, but you get the idea. I don't care what the "image information" on your camera says, or what the PPI reading in your paint program says; only when you print can you really say what the PPI is. And the same image will have a different PPI when printed at different sizes.
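The relationship is just a division, which also makes plain why the same image has a different PPI at each print size; a minimal sketch:

```python
def printed_ppi(pixels_across, printed_width_inches):
    """PPI of a print: pixel width divided by printed width in inches.
    The PPI is a property of the print, not of the image file."""
    return pixels_across / printed_width_inches

# the 50-pixel-wide square printed one inch wide:
# printed_ppi(50, 1) -> 50.0
# the same image printed two inches wide has half the PPI:
# printed_ppi(50, 2) -> 25.0
```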
![Page 189: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/189.jpg)
Printers

DPI (dots per inch) is a measurement of printer resolution, though it is commonly applied, somewhat inappropriately, to monitors, scanners, and even digital cameras. For printers, the DPI specification indicates the number of dots per inch that the printer is capable of achieving to form text or graphics on the printed page. The higher the DPI, the more refined the text or image will appear. To save ink, a low DPI is often used for draft copies or routine paperwork; this setting might be 300 or even 150 DPI. High resolution starts at 600 DPI for standard printers, and can far exceed that for color printers designed for turning out digital photography or other high-resolution images.
![Page 190: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/190.jpg)
Printers

In the case of monitors, DPI refers to the number of pixels present per inch of display screen. The technically correct term is "PPI", or pixels per inch, but DPI is commonly used instead. A display setting of 1280 x 1024 has about 1.3 million pixels, while a setting of 800 x 600 has 480,000, or less than half as many. With fewer pixels, the picture will not have the clarity that can be achieved at the higher setting. This is because displays create images by using pixels. Each dot or pixel reflects a certain color and brightness. The greater the pixel density, the more detailed the picture can be. Higher resolution also requires more memory and can take longer to 'paint' images, depending on the system's video card, processor, and other components.
![Page 191: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/191.jpg)
Printers

Scanners also operate at different resolutions. Scan time will increase with higher DPI settings, as the scanner must collect and store more data. However, the greater the DPI, or requested resolution, the richer the resulting image. A high DPI setting mimics the original image in a truer fashion than lower DPI settings are capable of doing. If the image is to be enlarged, a high DPI setting is necessary. Otherwise the enlarged picture will look "blocky" or blurry, because the software lacks information to fill in the extra space when the image is enlarged. Instead it "blows up" each pixel to "smear" it over a wider area. Technically, again, the more correct term in this application is sampled PPI, but DPI is more often used.
![Page 192: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/192.jpg)
Printers

Digital cameras have their own specifications in terms of megapixels and resolution, but DPI is often mentioned in this context as well. Since DPI in all cases refers to the output image, a digital camera capable of the most basic current standards of resolution (3.0 megapixels and better) will output an image capable of taking advantage of a very high DPI setting on the printer. However, if your printer is only capable of 600 DPI, the extra resolution of the camera will be lost in the printing process. When buying or upgrading components it is therefore critical that each product is capable of supporting the highest standards of any interfacing product.
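To see how camera resolution and print density interact, consider the largest print a frame supports at a target density. A sketch, assuming a roughly 3-megapixel frame of 2048 x 1536 pixels and the common (but not standardized) 300 pixels-per-inch rule of thumb:

```python
def max_print_inches(pixels_wide, pixels_high, target_ppi=300):
    """Largest print, in inches, that keeps at least target_ppi
    pixels per inch in each direction. The 300 PPI default is a
    rule of thumb, not a hard standard."""
    return pixels_wide / target_ppi, pixels_high / target_ppi

# a 2048 x 1536 frame at 300 PPI prints at about 6.8 x 5.1 inches
```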
![Page 193: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/193.jpg)
Printers

Print quality
The quality of the hard copy produced by a computer printer. Below is a listing of some of the more common reasons why print quality may differ.
1. Type of printer - Each type of printer has its own printing capabilities. With standard printers, dot matrix is commonly the lowest quality, ink jet printers are commonly average quality, and laser printers are commonly the best quality.
2. Low DPI - Printer has a low DPI.
3. Print mode - The mode in which the hard copy was produced may also affect the overall quality of the print. For example, if the mode was draft quality, the printer prints faster but at lower quality.
4. Available toner or ink - If the printer is low on toner or ink the quality can be dramatically decreased.
5. Dirty or malfunctioning printer - If the printer is dirty or is malfunctioning this can also affect the quality of the print.
6. Image quality - It is important to realize that when printing a computer graphic, the quality may not be what you expect for any of the reasons below.
•Printer does not have enough colors to produce the colors in the image. For example, some printers may only have four available inks where others may have six or more available inks. See process color.
•The image is a low quality or low resolution image.
•Image is too small and/or has too many colors in a small area.
![Page 194: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/194.jpg)
Printers

Most people have used printers at some stage for printing documents, but few are aware of how they work. Printed documents are arguably the best way to save data. There are two basic types of printers: impact and non-impact.

Impact printers, as the name implies, have a printing mechanism that touches the paper to create an image. Impact printers were used in the early 70s and 80s. In dot matrix printers, a series of small pins strikes a ribbon coated with ink to transfer the image to the paper.

Other impact printers, like character printers, are basically computerized typewriters. They have a series of bars or a ball with actual characters on them, which strike the ink ribbon to transfer the characters to the paper. Only one character can be printed at a time. Daisy wheel printers use a plastic or metal wheel.
![Page 195: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/195.jpg)
Printers

These types of printers have limited usage, though, because they are limited to printing only characters in one typeface, not graphics.

There are line printers, where a chain of characters or pins prints an entire line, which makes them pretty fast, but the print quality is not so good. Thermal printers are the printers used in calculators and fax machines. They are inexpensive to use. Thermal printers work by pushing heated pins against special heat-sensitive paper.

More efficient and advanced printers have come out now which use newer non-impact technology.

Non-impact printers are those where the printing mechanism does not come into contact with the paper at all. This makes them quieter in operation than impact printers.
![Page 196: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/196.jpg)
Printers

In the mid-1980s inkjet printers were introduced. These have been the most widely used and popular printers so far. Colour printing was revolutionized after inkjet printers were invented. An inkjet printer's head has tiny nozzles, which place extremely tiny droplets of ink on the paper to create an image. These dots are so small that their diameter is less than that of a human hair. The dots are placed precisely, at resolutions of up to 1440 x 720 dots per inch. Different combinations of ink cartridges can be used with these printers.
![Page 197: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/197.jpg)
Printers

How an inkjet printer works

The print head scans the page horizontally, back and forth, while a second motor assembly advances the paper vertically, so the image is printed one strip at a time; each strip takes only about half a second. Inkjet printers became very popular because of their ability to print in colour. Most inkjets use thermal technology: heat fires ink onto the paper through the print head, which can have up to 300 nozzles. The ink is heat-resistant and water-based, and plain copier paper can be used, unlike the thermal paper required by fax machines.
![Page 198: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/198.jpg)
Printers

The latest and fastest printers are laser printers. Like photocopiers, they exploit the principle of static electricity: charge can build up on an insulated object, and oppositely charged objects attract and cling together. Think of nylon clinging to your body, or the static you get after brushing your hair. A laser printer uses this same principle to bind ink to the paper.
![Page 199: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/199.jpg)
Printers

How a laser printer works

Unlike earlier printers, laser printers use toner, static electricity, and heat to create an image on the paper. Toner is dry ink containing colour and plastic particles. It passes through the printer's fuser, and the resulting heat binds it to any type of paper. Laser printing is fast and smudge-free, and the quality is excellent thanks to resolutions from 300 dots per inch up to almost 1200 dpi at the high end.
![Page 200: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/200.jpg)
Printers

The basic components of a laser printer are the fuser, photoreceptor drum assembly, developer roller, laser scanning unit, toner hopper, corona wire, and discharge lamp. The laser beam draws the image on the drum, changing the electrical charge (positive or negative) wherever it hits. The drum is then rolled through the toner; toner sticks to the charged portions of the drum and is transferred to the paper, which then passes through the fuser. The fuser heats the paper, melting the toner's ink and plastic together to fix the image. Laser printers are called "page printers" because the entire page is transferred to the drum before printing. Any type of paper can be used. Laser printers popularized desktop publishing (DTP), since they can print any number of fonts and any graphics.
![Page 201: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/201.jpg)
Printers

How the computer and printer cooperate to print

When we want to print something, we simply issue the "Print" command. The job is sent either to the printer's RAM or to the computer's RAM, depending on the type of printer we have, and printing begins. While printing is going on, the computer can still perform a variety of operations: jobs are placed in a buffer, a special area in RAM, and the printer pulls them off at its own pace. We can also queue up printing jobs this way. This technique of letting printing proceed in the background is called spooling; the computer and the printer stay in constant communication throughout.
![Page 202: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/202.jpg)
Printing Images

In image processing, there are overlapping terms that tend to get interchanged, especially for image and print resolution: dpi (dots per inch), ppi (pixels per inch), and lpi (lines per inch). In addition, the resolution of an image may be stated either by its dimensions in pixels or in inches (at a certain ppi or dpi resolution). Yes, we can understand if your head is swimming. Let's sort this out.

When an image is captured with a camera or a scanner, the result is a digital image consisting of a rectangular array of picture elements, called pixels. The array has a horizontal and a vertical dimension: the horizontal size is the number of pixels in a single row (say 1,280), and the vertical size is the number of rows (say 1,024). That picture would have a "resolution" of 1,280 x 1,024 pixels.
![Page 203: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/203.jpg)
Printing images

The displayed size of an image depends on the number of pixels the monitor displays per inch. The pixel-per-inch (ppi) resolutions of monitors vary, and are usually in the range of 72 ppi to 120 ppi (the latter on larger 21.4" monitors). In most cases, however, a monitor's resolution is given as the number of pixels horizontally and vertically (e.g., 1,024 x 1,280 or 1,280 x 1,600). So the "size" of an image very much depends on how many pixels are displayed per inch; thus we arrive at a resolution given in pixels per inch, or ppi for short.

With LCD monitors the ppi resolution is fixed and can't be adjusted (at least not without a loss of display quality). With CRT monitors you have more flexibility (we won't go into this further).

When an image is printed, its physical size depends on how many image pixels we put down on paper, but also on how each individual image pixel is laid down.
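As a quick check of the pixels-versus-inches distinction above: the printed size of an image is simply its pixel dimensions divided by the output resolution. A minimal sketch (plain Python; `print_size_inches` is a hypothetical helper name for this example):

```python
def print_size_inches(width_px, height_px, ppi):
    """Physical print size, in inches, of a pixel array output at a given ppi."""
    return width_px / ppi, height_px / ppi

# The 1,280 x 1,024 example image printed at 300 pixels per inch:
w, h = print_size_inches(1280, 1024, 300)
# roughly 4.27 x 3.41 inches
```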
![Page 204: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/204.jpg)
How are image pixels produced by printer dots?

Only a few printing technologies can directly produce a continuous colour range within an individual printed image pixel. Most other types of printers approximate the colour of each image pixel with an n x n matrix of fine dots, laid down in a specific pattern using a certain combination of the basic colours available to the printer.

If we want to reproduce an image pixel on paper, we not only have to place a physical printer dot, but also have to give that dot the tonal value of the original pixel. With bitonal images this is easy: if the pixel value is 0, you lay down a black printed dot, and if it is 1, you omit the dot. However, if the pixel has a gray value (say 128 out of 256) and you print with a black-and-white laser printer (just to keep the explanation simple), we must find a different way. The technique used is called rasterization, or dithering.

To simulate different tonal values (let's stick to black-and-white for the moment), a number of printed dots are placed in a certain pattern on the paper to reproduce a single image pixel. At low resolution, we could use a matrix of 3 x 3 printed dots per pixel.
![Page 205: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/205.jpg)
How are image pixels produced by printer dots?

Using more printed dots per image pixel allows more distinct tonal values: with a pattern of 6 x 6 dots, you get 37 tonal grades, which is usually sufficient. For better differentiation, let's call the matrix of printer dots representing one image pixel a raster cell.

Now we see why a printer's dots-per-inch (dpi) resolution has to be much higher than the resolution of a display: on screen, a single dot (also called a pixel) can itself take on different tonal or brightness values, so one screen dot can reproduce one image pixel.

When you print grayscale or colour images on a device with relatively low resolution, you must make a trade-off between a high-resolution image (having as many raster cells per inch as possible) and larger raster cells providing more tonal values per cell.
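The raster-cell arithmetic above can be sketched in code. This is an illustrative ordered-dither sketch, not any actual printer driver's pattern; the threshold matrix and the convention that `ink` runs from 0 (no ink) to 255 (full coverage) are assumptions made for the example (NumPy assumed):

```python
import numpy as np

def tonal_levels(n):
    # An n x n raster cell can contain anywhere from 0 to n*n printed
    # dots, giving n*n + 1 distinguishable tonal values.
    return n * n + 1

def render_cell(ink, n=3):
    """Approximate an ink-coverage value in [0, 255] with an n x n dot
    pattern, using a simple ordered-dither threshold matrix (a sketch,
    not a real driver's proprietary pattern)."""
    thresholds = (np.arange(n * n).reshape(n, n) + 0.5) / (n * n) * 255
    return (ink > thresholds).astype(int)  # 1 = printed dot

print(tonal_levels(6))  # 37, matching the 6 x 6 example in the text
```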
![Page 206: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/206.jpg)
How are image pixels produced by printer dots?

The image impression may be improved when the printer is able to vary the size of its dots. This is done on some laser printers, as well as on some of today's photo inkjet printers. If the dot size can be varied (also called modulated), fewer dots (n x n) are needed to create a given number of tonal values, which results in a finer raster; equivalently, you can achieve more tonal values from a fixed raster-cell size.

There are several different ways (patterns) to place the individual printed dots in a raster cell, and the dithering pattern is partly a secret of the printer driver. The dither pattern is less visible, and the result more photo-like, when the pattern is not identical for all raster cells of the same tonal value but is varied from cell to cell in a somewhat random way.
![Page 207: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/207.jpg)
Linear Systems
![Page 208: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/208.jpg)
Linear Space Invariant System
![Page 209: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/209.jpg)
Linear Space Invariant System
![Page 210: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/210.jpg)
This Property holds
![Page 211: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/211.jpg)
Convolution in 1 Dimension

Let's look at some examples of convolution integrals,

$f(x) = g(x) \otimes h(x) = \int_{-\infty}^{\infty} g(x')\, h(x - x')\, dx'$

So there are four steps in calculating a convolution integral:

1. Fold h(x') about the line x' = 0
2. Displace h(x') by x
3. Multiply h(x - x') by g(x')
4. Integrate
![Page 212: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/212.jpg)
Math of Convolution

$g(x) = h(x) * f(x) = \sum_{n} h(n)\, f(x - n)$

Example kernel: 1 2 1, i.e. h(-1) = 1, h(0) = 2, h(1) = 1.
![Page 213: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/213.jpg)
Convolution (1D)

Input signal / image row: 1 1 2 2 1 1 2 2 1 1
Filter coefficients (mask, kernel, template, window): 1 2 1
Filter response (output signal / image row): 5 7 7 5 5 7 7 5
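The sliding-window sum of products illustrated above is easy to verify numerically (NumPy assumed; `mode='valid'` keeps only the positions where the filter fits entirely inside the signal):

```python
import numpy as np

signal = np.array([1, 1, 2, 2, 1, 1, 2, 2, 1, 1])
kernel = np.array([1, 2, 1])

# Sum of products at each interior position (the filter is symmetric,
# so correlation and convolution give the same result here).
out = np.correlate(signal, kernel, mode='valid')
print(out)  # [5 7 7 5 5 7 7 5]
```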
![Page 214: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/214.jpg)
Math of 2D Convolution/Correlation

Convolution:
$g(x,y) = h(x,y) * f(x,y) = \sum_{m}\sum_{n} h(m,n)\, f(x - m,\, y - n)$

Correlation:
$g(x,y) = h(x,y) \circ f(x,y) = \sum_{m}\sum_{n} h(m,n)\, f(x + m,\, y + n)$
![Page 215: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/215.jpg)
Correlation (1D)

Input: 1 1 2 2 1 1 2 2 1 1
Filter: 1 2 1
Output: 5 7 7 5 5 7 7 5

This process is called Correlation!!
![Page 216: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/216.jpg)
Correlation Vs Convolution

Correlation:
$g(x) = h(x) \circ f(x) = \sum_{n} h(n)\, f(x + n)$

Convolution:
$g(x) = h(x) * f(x) = \sum_{n} h(n)\, f(x - n)$

In both cases the filter 1 2 1 is slid over the input 1 1 2 2 1 1 2 2 1 1; for convolution the filter is flipped first.
In image processing we usually compute CORRELATION but (nearly) always call it CONVOLUTION!
Note: when the filter is symmetric, correlation = convolution!
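The symmetric-filter remark can be checked numerically (NumPy assumed): for a symmetric kernel the two operations coincide, while for an antisymmetric kernel convolution is correlation with the sign flipped:

```python
import numpy as np

f = np.array([3., 1., 4., 1., 5., 9., 2., 6.])
sym = np.array([1., 2., 1.])    # symmetric kernel
asym = np.array([1., 0., -1.])  # antisymmetric kernel

corr_sym = np.correlate(f, sym, mode='same')
conv_sym = np.convolve(f, sym, mode='same')
assert np.allclose(corr_sym, conv_sym)        # identical

corr_asym = np.correlate(f, asym, mode='same')
conv_asym = np.convolve(f, asym, mode='same')
assert np.allclose(conv_asym, -corr_asym)     # sign flipped
```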
![Page 217: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/217.jpg)
Correlation on images

Correlation is the process of moving a filter mask over the image and computing the sum of products at each location. In convolution, the filter is first rotated by 180 degrees.

Example on the slide: a 3 x 3 mask of 1s is moved over a small input image, and the sum of products is written into the output image one location at a time.
![Page 218: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/218.jpg)
Correlation on images

The mask moves to the next location and the sum of products is computed again, filling in the output image value by value.
![Page 219: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/219.jpg)
Applications of Convolution/correlation

– Blurring
– Edge detection
– Template matching
![Page 220: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/220.jpg)
Blurring (smoothing)

Also known as: smoothing kernel, mean filter, low-pass filter.
The simplest filter is a spatial low-pass filter:

1D mean: (1/3) [1 1 1]   weighted: (1/4) [1 2 1]

2D mean: (1/9) [1 1 1; 1 1 1; 1 1 1]

Another mask, the Gaussian filter: (1/16) [1 2 1; 2 4 2; 1 2 1]
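A direct implementation of these masks as a sliding sum of products (NumPy assumed; `filter2d` is a hypothetical helper name, and the brute-force loops are for clarity, not speed):

```python
import numpy as np

mean3 = np.ones((3, 3)) / 9.0             # mean / low-pass mask
gauss3 = np.array([[1., 2., 1.],
                   [2., 4., 2.],
                   [1., 2., 1.]]) / 16.0  # Gaussian mask

def filter2d(img, mask):
    """Sum of products of the mask with each (valid) image neighbourhood."""
    m, n = mask.shape
    H, W = img.shape
    out = np.zeros((H - m + 1, W - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + m, j:j + n] * mask)
    return out

# Both masks sum to 1, so a flat image passes through unchanged.
flat = np.full((5, 5), 10.0)
```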
![Page 221: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/221.jpg)
Applications of smoothing

Blurring to remove identity or other details. The degree of blurring increases with the kernel size.
![Page 222: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/222.jpg)
Applications of smoothing

Preprocessing to enhance objects: smoothing + thresholding.
![Page 223: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/223.jpg)
Uneven illumination

Goal: improve segmentation under uneven illumination, both within an image and between images. (Demo note: IJ, mean=50, sub, TH.)

Solution: "remove the background".
– Algorithm: g(x,y) = f(x,y) − f̄(x,y), where f̄(x,y) is the local mean.
– Use a big kernel for f̄(x,y), e.g., 10-50.
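The background-removal algorithm above, g(x,y) = f(x,y) − f̄(x,y), can be sketched directly (NumPy assumed; the kernel size `k` and the edge padding are illustrative choices):

```python
import numpy as np

def remove_background(img, k=15):
    """g = f minus the local mean of f over a k x k window (a sketch)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    mean = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            mean[i, j] = padded[i:i + k, j:j + k].mean()
    return img - mean

# A constant (evenly lit, featureless) image maps to zero everywhere.
```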
![Page 224: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/224.jpg)
Uneven illumination

Input f(x,y) → mean f̄(x,y) → f(x,y) − f̄(x,y) → edges
![Page 225: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/225.jpg)
Application of smoothing

Removing noise.
![Page 226: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/226.jpg)
Correlation application: Template Matching
![Page 227: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/227.jpg)
Template Matching

Input image, template, output (also shown as a 3D surface).

The filter is called a template or a mask. The brighter the value in the output, the better the match. In practice this is implemented as the correlation coefficient.
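A minimal sketch of template matching via the correlation coefficient mentioned above (NumPy assumed; `match_template` is a hypothetical helper, and the brute-force loops are for clarity, not speed):

```python
import numpy as np

def match_template(img, tmpl):
    """Correlation-coefficient map; the brightest value marks the best match."""
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    H, W = img.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = img[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum() * (t ** 2).sum())
            out[i, j] = (wc * t).sum() / denom if denom > 0 else 0.0
    return out

# An exact copy of the template scores 1.0 at its location.
img = np.zeros((6, 6))
tmpl = np.array([[1., 2.], [3., 4.]])
img[2:4, 3:5] = tmpl
scores = match_template(img, tmpl)
```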
![Page 228: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/228.jpg)
Template Matching
Output
![Page 229: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/229.jpg)
Correlation application: Edge detection
![Page 230: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/230.jpg)
Edge detection
![Page 231: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/231.jpg)
Edge detection

$g_x(x,y) \approx f(x+1,\,y) - f(x-1,\,y)$

$g_y(x,y) \approx f(x,\,y+1) - f(x,\,y-1)$
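The central-difference approximations above amount to simple array shifts (NumPy assumed; taking the first index as x and the second as y is an arbitrary convention for this sketch):

```python
import numpy as np

def gradients(f):
    """Edge responses g_x, g_y on the interior of the image."""
    gx = f[2:, 1:-1] - f[:-2, 1:-1]   # f(x+1, y) - f(x-1, y)
    gy = f[1:-1, 2:] - f[1:-1, :-2]   # f(x, y+1) - f(x, y-1)
    return gx, gy

# A ramp that increases by 1 per step in y has g_y = 2 and g_x = 0.
ramp = np.tile(np.arange(5.0), (5, 1))
gx, gy = gradients(ramp)
```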
![Page 232: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/232.jpg)
Edge detection

(Example results of the gradient approximations on an image.)
![Page 233: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/233.jpg)
Properties of convolution

commutative: $f \otimes g = g \otimes f$

associative: $f \otimes (g \otimes h) = (f \otimes g) \otimes h$
(multiple convolutions can be carried out in any order)

distributive: $f \otimes (g + h) = f \otimes g + f \otimes h$
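These three properties can be spot-checked numerically with NumPy's discrete convolution (a numerical check on one example, not a proof):

```python
import numpy as np

f = np.array([1., 2., 3.])
g = np.array([0., 1., 0.5])
h = np.array([2., -1., 0.])

# commutative
assert np.allclose(np.convolve(f, g), np.convolve(g, f))
# associative
assert np.allclose(np.convolve(np.convolve(f, g), h),
                   np.convolve(f, np.convolve(g, h)))
# distributive (g and h chosen with equal length so they can be summed)
assert np.allclose(np.convolve(f, g + h),
                   np.convolve(f, g) + np.convolve(f, h))
```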
![Page 234: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/234.jpg)
Convolution Theorem

$\mathcal{F}\{ f \otimes g \} = F(k) \cdot G(k)$
In other words, convolution in real space is equivalent to multiplication in Frequency space.
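The discrete analogue of the theorem (for circular convolution) can be verified with the FFT (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=8)
g = rng.normal(size=8)

# Pointwise product of the spectra ...
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

# ... equals direct circular convolution in the signal domain.
direct = np.array([sum(f[m] * g[(n - m) % 8] for m in range(8))
                   for n in range(8)])
assert np.allclose(via_fft, direct)
```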
![Page 235: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/235.jpg)
Proof of convolution Theorem

So we can rewrite the convolution integral,

$f \otimes g = \int_{-\infty}^{\infty} f(x)\, g(x' - x)\, dx$

as,

$f \otimes g = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} F(k)\, e^{ikx}\, dk \int_{-\infty}^{\infty} G(k')\, e^{ik'(x'-x)}\, dk'$

change the order of integration and extract a delta function,

$f \otimes g = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\, F(k) \int_{-\infty}^{\infty} dk'\, G(k')\, e^{ik'x'}\, \underbrace{\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ix(k-k')}\, dx}_{\delta(k-k')}$
![Page 236: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/236.jpg)
Proof of convolution theorem

$f \otimes g = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\, F(k) \int_{-\infty}^{\infty} dk'\, G(k')\, e^{ik'x'}\, \delta(k - k')$

Integration over the delta function selects out the k' = k value, or,

$f \otimes g = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\, F(k)\, G(k)\, e^{ikx'}$
![Page 237: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/237.jpg)
Proof of convolution theorem

$f \otimes g = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\, F(k)\, G(k)\, e^{ikx'}$

This is written as an inverse Fourier transformation. A Fourier transform of both sides yields the desired result:

$\mathcal{F}\{ f \otimes g \} = F(k) \cdot G(k)$
![Page 238: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/238.jpg)
Convolution in 2-D

For such a system, the output h(x,y) is the convolution of the input f(x,y) with the impulse response g(x,y).
![Page 239: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/239.jpg)
Convolution in 2-D
![Page 240: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/240.jpg)
Example of a 3x3 convolution mask
![Page 241: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/241.jpg)
In Plain Words

Convolution is essentially equivalent to computing a weighted sum of image pixels, where the filter is rotated by 180 degrees.

Convolution is a linear operation.
![Page 242: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/242.jpg)
Why Mathematical transformations?

Why
– To obtain further information from the signal that is not readily available in the raw signal.
Raw signal
– Normally the time-domain signal.
Processed signal
– A signal that has been "transformed" by any of the available mathematical transformations.
Fourier transformation
– The most popular transformation.
![Page 243: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/243.jpg)
What is a Transform and Why do we need one?

Transform: a mathematical operation that takes a function or sequence and maps it into another one.
Transforms are good things because…
– The transform of a function may give additional/hidden information about the original function, which may not be available/obvious otherwise.
– The transform of an equation may be easier to solve than the original equation (recall your fond memories of Laplace transforms in DFQs).
– The transform of a function/sequence may require less storage, hence providing data compression/reduction.
– An operation may be easier to apply to the transformed function than to the original one (recall other fond memories of convolution).
![Page 244: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/244.jpg)
Why transform ?
![Page 245: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/245.jpg)
Introduction to Fourier Transform

f(x): continuous function of a real variable x.

Fourier transform of f(x):

$\mathcal{F}\{f(x)\} = F(u) = \int_{-\infty}^{\infty} f(x) \exp[-j 2\pi u x]\, dx$   (Eq. 1)

where $j = \sqrt{-1}$.
![Page 246: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/246.jpg)
Introduction to Fourier transform

u is the frequency variable. The integral in Eq. 1 shows that F(u) is composed of an infinite sum of sine and cosine terms, and each value of u determines the frequency of its corresponding sine-cosine pair.
![Page 247: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/247.jpg)
Introduction to Fourier transform

Given F(u), f(x) can be obtained by the inverse Fourier transform:

$f(x) = \mathcal{F}^{-1}\{F(u)\} = \int_{-\infty}^{\infty} F(u) \exp[j 2\pi u x]\, du$

The above two equations are the Fourier transform pair.
![Page 248: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/248.jpg)
Introduction to Fourier transform

Each term of the FT (F(u) for every u) is composed of the sum of all values of f(x). Using Euler's formula,

$e^{j\theta} = \cos\theta + j\sin\theta$

the discrete transform can be written as

$F(u) = \frac{1}{M} \sum_{x=0}^{M-1} f(x)\, [\cos(2\pi u x / M) - j \sin(2\pi u x / M)]$
![Page 249: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/249.jpg)
Introduction to Fourier transform

The Fourier transform of a real function is generally complex, and we use polar coordinates:

$F(u) = R(u) + jI(u) = |F(u)|\, e^{j\phi(u)}$

$|F(u)| = [R^2(u) + I^2(u)]^{1/2}$

$\phi(u) = \tan^{-1}\!\left[\frac{I(u)}{R(u)}\right]$
![Page 250: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/250.jpg)
Introduction to Fourier transform

|F(u)| (the magnitude function) is the Fourier spectrum of f(x), and φ(u) is its phase angle.

The square of the spectrum,

$P(u) = |F(u)|^2 = R^2(u) + I^2(u)$

is referred to as the power spectrum of f(x) (spectral density).
![Page 251: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/251.jpg)
Introduction to Fourier transform

In two dimensions:

Fourier spectrum: $|F(u,v)| = [R^2(u,v) + I^2(u,v)]^{1/2}$

Phase: $\phi(u,v) = \tan^{-1}\!\left[\frac{I(u,v)}{R(u,v)}\right]$

Power spectrum: $P(u,v) = |F(u,v)|^2 = R^2(u,v) + I^2(u,v)$
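These quantities fall straight out of a complex FFT result (NumPy assumed; `np.arctan2` preserves the quadrant that a bare tan⁻¹ of I/R would lose):

```python
import numpy as np

f = np.array([1., 2., 0., -1.])
F = np.fft.fft(f)

R, I = F.real, F.imag
magnitude = np.sqrt(R ** 2 + I ** 2)   # |F(u)|, the Fourier spectrum
phase = np.arctan2(I, R)               # phi(u), the phase angle
power = magnitude ** 2                 # P(u), the power spectrum

assert np.allclose(magnitude, np.abs(F))
assert np.allclose(power, R ** 2 + I ** 2)
```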
![Page 252: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/252.jpg)
Spatial Frequency decomposition

Any image can be decomposed into a series of sines and cosines added together to give the image:

$I(x) = \sum_i \left[ a_i \cos(k_i x) + i\, b_i \sin(k_i x) \right]$

The Fourier transform yields the amplitudes and the phase. (The slide plots an intensity profile across 0.25 µm myelin against pixel position.)
![Page 253: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/253.jpg)
FT

Fourier Transform of the Myelin Image, with high-frequency and low-frequency regions indicated on the spectrum.
![Page 254: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/254.jpg)
FT reversible

Applying the inverse transform F⁻¹ to the Fourier transform of myelin recovers the original image.
![Page 255: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/255.jpg)
2-D Image Transform: General Transform

$F(u,v) = \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} T(x,y,u,v)\, f(x,y)$

$f(x,y) = \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} I(x,y,u,v)\, F(u,v)$
![Page 256: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/256.jpg)
Discrete Fourier Transform
![Page 257: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/257.jpg)
Discrete Fourier Transform

A continuous function f(x) is discretized into a sequence

$\{ f(x_0),\, f(x_0 + \Delta x),\, f(x_0 + 2\Delta x),\, \ldots,\, f(x_0 + [N-1]\Delta x) \}$

by taking N (or M) samples Δx units apart.
![Page 258: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/258.jpg)
Discrete Fourier Transform

Where x assumes the discrete values 0, 1, 2, …, M-1, define

$f(x) = f(x_0 + x\,\Delta x)$

The sequence {f(0), f(1), f(2), …, f(M-1)} denotes any M uniformly spaced samples from the corresponding continuous function.
![Page 259: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/259.jpg)
Discrete Fourier Transform

The discrete Fourier transform pair that applies to sampled functions is given by:

$F(u) = \frac{1}{M} \sum_{x=0}^{M-1} f(x) \exp[-j 2\pi u x / M]$   for u = 0, 1, 2, …, M-1

and

$f(x) = \sum_{u=0}^{M-1} F(u) \exp[j 2\pi u x / M]$   for x = 0, 1, 2, …, M-1
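A naive transcription of this DFT pair, checked against NumPy's FFT (note that `np.fft.fft` omits the 1/M factor used here, hence the division in the comparison):

```python
import numpy as np

def dft(f):
    """Forward DFT with the 1/M factor, as a naive O(M^2) sum."""
    M = len(f)
    x = np.arange(M)
    return np.array([(f * np.exp(-2j * np.pi * u * x / M)).sum() / M
                     for u in range(M)])

f = np.array([1., 2., 3., 4.])
F = dft(f)
assert np.allclose(F, np.fft.fft(f) / len(f))

# The inverse sum (no 1/M factor) recovers f exactly.
f_rec = np.array([(F * np.exp(2j * np.pi * np.arange(4) * n / 4)).sum()
                  for n in range(4)])
assert np.allclose(f_rec.real, f)
```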
![Page 260: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/260.jpg)
Discrete Fourier Transform

$F(u) = \frac{1}{M} \sum_{x=0}^{M-1} f(x) \exp[-j 2\pi u x / M]$   for u = 0, 1, 2, …, M-1

To compute F(u), we substitute u = 0 in the exponential term and sum over all values of x, then repeat for all M values of u. This takes M x M summations and multiplications.

The Fourier transform and its inverse always exist!
Discrete Fourier Transform

The values u = 0, 1, 2, …, M−1 correspond to samples of the continuous transform at values 0, Δu, 2Δu, …, (M−1)Δu.

i.e. F(u) represents F(uΔu), where:

Δu = 1/(MΔx)
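This spacing can be checked numerically: np.fft.fftfreq returns the DFT bin frequencies for M samples taken Δx apart (the sample count and spacing here are arbitrary choices):

```python
import numpy as np

M, dx = 256, 0.01                        # arbitrary sample count and spacing
freqs = np.fft.fftfreq(M, d=dx)          # DFT bin frequencies
du = freqs[1] - freqs[0]                 # spacing between adjacent bins
assert np.isclose(du, 1.0 / (M * dx))    # matches Δu = 1/(MΔx)
```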
![Page 262: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/262.jpg)
Discrete Fourier Transform

In the 2-variable case, the discrete FT pair is:

F(u,v) = (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x,y) exp[−j2π(ux/M + vy/N)]

for u = 0, 1, 2, …, M−1 and v = 0, 1, 2, …, N−1

and:

f(x,y) = Σ_{u=0}^{M−1} Σ_{v=0}^{N−1} F(u,v) exp[j2π(ux/M + vy/N)]

for x = 0, 1, 2, …, M−1 and y = 0, 1, 2, …, N−1
![Page 263: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/263.jpg)
Discrete Fourier Transform

Sampling of a continuous function is now on a 2-D grid (with Δx, Δy divisions).

The discrete function f(x,y) represents samples of the function f(x₀+xΔx, y₀+yΔy) for x = 0, 1, 2, …, M−1 and y = 0, 1, 2, …, N−1.

Δu = 1/(MΔx),  Δv = 1/(NΔy)
![Page 264: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/264.jpg)
Discrete Fourier Transform

When images are sampled in a square array, M = N and the FT pair becomes:

F(u,v) = (1/N) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x,y) exp[−j2π(ux + vy)/N],  for u, v = 0, 1, 2, …, N−1

and:

f(x,y) = (1/N) Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} F(u,v) exp[j2π(ux + vy)/N],  for x, y = 0, 1, 2, …, N−1
![Page 265: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/265.jpg)
Properties of 2-D Fourier transform

Translation
Distributivity and Scaling
Rotation
Periodicity and Conjugate Symmetry
Separability
Convolution and Correlation
![Page 266: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/266.jpg)
Translation

f(x,y) exp[j2π(u₀x/M + v₀y/N)] ⇔ F(u−u₀, v−v₀)

and

f(x−x₀, y−y₀) ⇔ F(u,v) exp[−j2π(ux₀/M + vy₀/N)]
![Page 267: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/267.jpg)
Translation

The previous equations mean:

– Multiplying f(x,y) by the indicated exponential term and taking the transform of the product results in a shift of the origin of the frequency plane to the point (u₀,v₀).
– Multiplying F(u,v) by the exponential term shown and taking the inverse transform moves the origin of the spatial plane to (x₀,y₀).
– A shift in f(x,y) doesn't affect the magnitude of its Fourier transform.
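The shift property is easy to verify numerically; a short check with an arbitrary random array standing in for an image, and a circular shift so the DFT's periodicity assumptions hold:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((32, 32))                     # arbitrary real "image"
g = np.roll(f, shift=(5, 9), axis=(0, 1))    # circularly shifted copy

F, G = np.fft.fft2(f), np.fft.fft2(g)
# A (circular) shift changes only the phase of the transform:
assert np.allclose(np.abs(F), np.abs(G))

# The phase ramp matches exp[-j2π(u·x0/M + v·y0/N)] with (x0, y0) = (5, 9):
M, N = f.shape
u = np.arange(M)[:, None]
v = np.arange(N)[None, :]
ramp = np.exp(-2j * np.pi * (u * 5 / M + v * 9 / N))
assert np.allclose(G, F * ramp)
```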
![Page 268: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/268.jpg)
Distributivity & Scaling

The Fourier transform is distributive over addition but not over multiplication:

ℑ{f₁(x,y) + f₂(x,y)} = ℑ{f₁(x,y)} + ℑ{f₂(x,y)}

ℑ{f₁(x,y) · f₂(x,y)} ≠ ℑ{f₁(x,y)} · ℑ{f₂(x,y)}
![Page 269: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/269.jpg)
Distributivity and Scaling

For two scalars a and b,

a·f(x,y) ⇔ a·F(u,v)

f(ax, by) ⇔ (1/|ab|) F(u/a, v/b)
![Page 270: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/270.jpg)
Rotation

Polar coordinates:

x = r cos θ,  y = r sin θ,  u = ω cos φ,  v = ω sin φ

which means that f(x,y) and F(u,v) become f(r,θ) and F(ω,φ).
![Page 271: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/271.jpg)
Rotation

Rotating f(x,y) by an angle θ₀ rotates F(u,v) by the same angle (and vice versa):

f(r, θ+θ₀) ⇔ F(ω, φ+θ₀)
![Page 272: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/272.jpg)
Periodicity & Conjugate Symmetry

The discrete FT and its inverse are periodic, with period M in u and period N in v:

F(u,v) = F(u+M, v) = F(u, v+N) = F(u+M, v+N)
![Page 273: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/273.jpg)
Periodicity & Conjugate Symmetry

Although F(u,v) repeats itself for infinitely many values of u and v, only the M, N values of each variable in any one period are required to obtain f(x,y) from F(u,v).

This means that only one period of the transform is necessary to specify F(u,v) completely in the frequency domain (and similarly f(x,y) in the spatial domain).
![Page 274: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/274.jpg)
Periodicity & Conjugate Symmetry

(Shifted spectrum) Move the origin of the transform to u = N/2.
![Page 275: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/275.jpg)
Periodicity & Conjugate Symmetry

For real f(x,y), the FT also exhibits conjugate symmetry:

F(u,v) = F*(−u,−v)    and    |F(u,v)| = |F(−u,−v)|

or, in one dimension:

F(u) = F(u+N)    and    |F(u)| = |F(−u)|

• i.e. F(u) has a period of length N and the magnitude of the transform is centered on the origin.
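A quick numerical check of the 1-D statements, using the periodicity F(−u) = F(N−u), plus np.fft.fftshift for the "shifted spectrum" view mentioned above:

```python
import numpy as np

f = np.random.default_rng(1).random(16)   # arbitrary real 1-D signal
F = np.fft.fft(f)
N = len(f)

# Conjugate symmetry: F(u) = F*(-u); by periodicity, F(-u) = F(N-u)
for u in range(1, N):
    assert np.isclose(F[u], np.conj(F[N - u]))

# Hence the magnitude spectrum is even: |F(u)| = |F(-u)|
assert np.allclose(np.abs(F[1:]), np.abs(F[1:][::-1]))

# np.fft.fftshift moves the zero-frequency term to the center (u = N/2)
Fc = np.fft.fftshift(F)
assert np.isclose(Fc[N // 2], F[0])
```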
![Page 276: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/276.jpg)
Separability

The discrete FT pair can be expressed in separable forms which (after some manipulations) can be expressed as:

F(u,v) = (1/M) Σ_{x=0}^{M−1} F(x,v) exp[−j2πux/M]

where:

F(x,v) = N [ (1/N) Σ_{y=0}^{N−1} f(x,y) exp[−j2πvy/N] ]
![Page 277: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/277.jpg)
Separability in Specific forms

T(x,y,u,v) = T₁(x,u) T₂(y,v)   Separable

T(x,y,u,v) = T₁(x,u) T₁(y,v)   Symmetric
![Page 278: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/278.jpg)
Separability

For each value of x, the expression inside the brackets is a 1-D transform, with frequency values v = 0, 1, …, N−1.

Thus, the 2-D function F(x,v) is obtained by taking a transform along each row of f(x,y) and multiplying the result by N.
![Page 279: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/279.jpg)
Separability

The desired result F(u,v) is then obtained by taking a transform along each column of F(x,v).
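This row-then-column procedure is exactly how separable 2-D transforms are computed in practice; a sketch with NumPy (whose FFT omits the slides' 1/M, 1/N scale factors, so no multiplication by N appears):

```python
import numpy as np

f = np.random.default_rng(3).random((8, 8))   # arbitrary test array

# Pass 1: 1-D transform along each row of f(x,y)     -> F(x,v)
rows = np.fft.fft(f, axis=1)
# Pass 2: 1-D transform along each column of F(x,v)  -> F(u,v)
F = np.fft.fft(rows, axis=0)

# Identical to the direct 2-D transform
assert np.allclose(F, np.fft.fft2(f))
```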
![Page 280: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/280.jpg)
Energy preservation

Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} |f(x,y)|² = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} |g(u,v)|²

‖g‖² = ‖f‖²
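With the symmetric 1/N normalization of the square-array pair above, the 2-D DFT is unitary, so this energy equality holds exactly; in NumPy that normalization is selected with norm="ortho":

```python
import numpy as np

f = np.random.default_rng(2).random((16, 16))   # arbitrary test array
F = np.fft.fft2(f, norm="ortho")                # unitary 2-D DFT

# Energy is preserved: sum |f|^2 == sum |F|^2
assert np.isclose(np.sum(np.abs(f) ** 2), np.sum(np.abs(F) ** 2))
```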
![Page 281: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/281.jpg)
Energy Compaction!
![Page 282: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/282.jpg)
An Atom

Both functions have circular symmetry. The atom is a sharp feature, whereas its transform is a broad smooth function. This illustrates the reciprocal relationship between a function and its Fourier transform.
![Page 283: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/283.jpg)
Original Image – FA-FP
![Page 284: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/284.jpg)
A Molecule
![Page 285: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/285.jpg)
Fourier duck
![Page 286: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/286.jpg)
Reconstruction from Phase of cat & Amplitude of duck
![Page 287: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/287.jpg)
Reconstruction from Phase of duck & Amplitude of cat
![Page 288: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/288.jpg)
Original Image – Fourier Amplitude
Keep Part of the Amplitude Around the Origin and Reconstruct Original Image (LOW-PASS filtering)
![Page 289: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/289.jpg)
Keep Part of the Amplitude Far from the Origin and Reconstruct Original Image (HIGH-PASS filtering)
![Page 290: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/290.jpg)
Example
![Page 291: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/291.jpg)
Reconstruction from phase of one image and amplitude of the other
![Page 292: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/292.jpg)
Example
![Page 293: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/293.jpg)
Reconstruction from phase of one image and amplitude of the other
![Page 294: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/294.jpg)
Reconstruction Example

Cheetah Image: Fourier Magnitude (above), Fourier Phase (below)
![Page 295: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/295.jpg)
Reconstruction example

Zebra Image: Fourier Magnitude (above), Fourier Phase (below)
![Page 296: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/296.jpg)
Reconstruction

Reconstruction with Zebra phase, Cheetah Magnitude
![Page 297: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/297.jpg)
Reconstruction

Reconstruction with Cheetah phase, Zebra Magnitude
![Page 298: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/298.jpg)
![Page 299: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/299.jpg)
Optical illusion
![Page 300: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/300.jpg)
Optical illusion
![Page 301: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/301.jpg)
Optical illusion
![Page 302: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/302.jpg)
Optical illusion
![Page 303: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/303.jpg)
Optical illusion
![Page 304: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/304.jpg)
Optical illusion
![Page 305: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/305.jpg)
Optical illusion
![Page 306: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/306.jpg)
Optical illusion
![Page 307: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/307.jpg)
Optical illusion
![Page 308: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/308.jpg)
Optical illusion
![Page 309: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/309.jpg)
Optical illusion
![Page 310: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/310.jpg)
![Page 311: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/311.jpg)
Optical illusion
![Page 312: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/312.jpg)
Optical illusion
![Page 313: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/313.jpg)
![Page 314: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/314.jpg)
Optical illusion
![Page 315: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/315.jpg)
Optical illusion
![Page 316: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/316.jpg)
Discrete Cosine Transform 1-D
C(u) = a(u) Σ_{x=0}^{N−1} f(x) cos[(2x+1)uπ / 2N],  u = 0, 1, …, N−1

where

a(u) = √(1/N)  for u = 0
a(u) = √(2/N)  for u = 1, …, N−1
![Page 317: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/317.jpg)
IDCT – 1D

f(x) = Σ_{u=0}^{N−1} a(u) C(u) cos[(2x+1)uπ / 2N],  x = 0, 1, …, N−1
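The DCT/IDCT pair above can be implemented directly; a minimal O(N²) sketch following the slide formulas, verified by a round trip:

```python
import numpy as np

def dct(f):
    """1-D DCT per the slides: C(u) = a(u) Σ f(x) cos[(2x+1)uπ / 2N]."""
    N = len(f)
    x = np.arange(N)
    a = np.full(N, np.sqrt(2.0 / N)); a[0] = np.sqrt(1.0 / N)
    return np.array([a[u] * np.sum(f * np.cos((2 * x + 1) * u * np.pi / (2 * N)))
                     for u in range(N)])

def idct(C):
    """Inverse: f(x) = Σ a(u) C(u) cos[(2x+1)uπ / 2N]."""
    N = len(C)
    u = np.arange(N)
    a = np.full(N, np.sqrt(2.0 / N)); a[0] = np.sqrt(1.0 / N)
    return np.array([np.sum(a * C * np.cos((2 * x + 1) * u * np.pi / (2 * N)))
                     for x in range(N)])

f = np.array([1.0, 2.0, 0.5, -1.0])
assert np.allclose(idct(dct(f)), f)   # the pair is exactly invertible
```

With this choice of a(u) the transform is orthonormal (this is the DCT-II / DCT-III pair used in practice).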
![Page 318: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/318.jpg)
1D Basis functions, N=8

(Plots of the eight DCT basis functions, u = 0 through u = 7; each ranges over −1.0 to 1.0.)
![Page 319: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/319.jpg)
1D Basis functions, N=16
![Page 320: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/320.jpg)
Example: 1D signal
![Page 321: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/321.jpg)
DCT
![Page 322: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/322.jpg)
2-D DCT

C(u,v) = a(u) a(v) Σ_{x=0}^{N−1} Σ_{y=0}^{N−1} f(x,y) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N],  u, v = 0, 1, …, N−1

f(x,y) = Σ_{u=0}^{N−1} Σ_{v=0}^{N−1} a(u) a(v) C(u,v) cos[(2x+1)uπ / 2N] cos[(2y+1)vπ / 2N],  x, y = 0, 1, …, N−1
![Page 323: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/323.jpg)
Advantages

Notice that the DCT is a real transform.

The DCT has excellent energy compaction properties.

There are fast algorithms to compute the DCT, similar to the FFT.
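The energy-compaction claim can be illustrated on a smooth test signal (a simple ramp, an arbitrary choice): the first few DCT coefficients capture a larger fraction of the signal energy than the first few (unitary) DFT coefficients:

```python
import numpy as np

N = 32
f = np.arange(N, dtype=float)               # smooth ramp test signal

# Orthonormal DCT matrix D[u, x] = a(u) cos[(2x+1)uπ / 2N]
x = np.arange(N)
u = x[:, None]
a = np.full(N, np.sqrt(2.0 / N)); a[0] = np.sqrt(1.0 / N)
D = a[:, None] * np.cos((2 * x[None, :] + 1) * u * np.pi / (2 * N))

C = D @ f                                    # DCT coefficients
F = np.fft.fft(f, norm="ortho")              # unitary DFT coefficients

k = 4                                        # keep only the first 4 coefficients
dct_frac = np.sum(C[:k] ** 2) / np.sum(C ** 2)
dft_frac = np.sum(np.abs(F[:k]) ** 2) / np.sum(np.abs(F) ** 2)
assert dct_frac > dft_frac                   # DCT packs more energy up front
```

For this ramp the first four DCT coefficients hold nearly all of the energy, while the DFT spreads it across many bins (the implicit periodic extension of a ramp has a jump, producing the "spurious spectral components" mentioned later).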
![Page 324: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/324.jpg)
2-D Basis functions, N=4

(Grid of the 16 basis images for u, v = 0, 1, 2, 3.)
![Page 325: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/325.jpg)
2-D Basis functions, N=8
![Page 326: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/326.jpg)
Separable
![Page 327: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/327.jpg)
Example: Energy Compaction
![Page 328: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/328.jpg)
Relation between DCT & DFT

To relate the two, the N-point sequence f(x) is extended into a 2N-point symmetric sequence:

g(x) = f(x) + f(2N−1−x) = { f(x),       0 ≤ x ≤ N−1
                            f(2N−1−x),  N ≤ x ≤ 2N−1 }

(the two terms have disjoint support when f is taken as zero outside 0 ≤ x ≤ N−1)

f(x) (N-point) → g(x) (2N-point) → G(u) (2N-point DFT) → C(u) (N-point DCT)
![Page 329: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/329.jpg)
From DFT to DCT (Cont.)

The DCT has a higher compression ratio than the DFT:
– the DCT avoids the generation of spurious spectral components.
![Page 330: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/330.jpg)
December 21, 1807

"An arbitrary function, continuous or with discontinuities, defined in a finite interval by an arbitrarily capricious graph can always be expressed as a sum of sinusoids"

J.B.J. Fourier

Jean B. Joseph Fourier (1768–1830)
![Page 331: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/331.jpg)
Frequency analysis

Frequency Spectrum
– basically the frequency components (spectral components) of a signal
– shows which frequencies exist in the signal

Fourier Transform (FT)
– one way to find the frequency content
– tells how much of each frequency exists in a signal

In discrete form (indices written 1-based) and continuous form:

X(k+1) = Σ_{n=0}^{N−1} x(n+1) W_N^{kn},  x(n+1) = (1/N) Σ_{k=0}^{N−1} X(k+1) W_N^{−kn},  W_N = e^{−j2π/N}

X(f) = ∫ x(t) e^{−j2πft} dt,  x(t) = ∫ X(f) e^{j2πft} df
![Page 332: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/332.jpg)
Complex function representation through simple building blocks
– basis functions

Using only a few blocks → compressed representation

Using sinusoids as building blocks → Fourier transform
– frequency-domain representation of the function

Complex Function = Σᵢ weightᵢ · Simple Functionᵢ

F(ω) = ∫ f(t) e^{−jωt} dt,  f(t) = (1/2π) ∫ F(ω) e^{jωt} dω
![Page 333: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/333.jpg)
How does it work, Anyway?

Recall that the FT uses complex exponentials (sinusoids) as building blocks:

e^{jωt} = cos(ωt) + j sin(ωt)

For each frequency of complex exponential, the sinusoid at that frequency is compared to the signal.

If the signal contains that frequency, the correlation is high → large FT coefficients.

If the signal does not have any spectral component at a frequency, the correlation at that frequency is low / zero → small / zero FT coefficient.

F(ω) = ∫ f(t) e^{−jωt} dt,  f(t) = (1/2π) ∫ F(ω) e^{jωt} dω
![Page 334: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/334.jpg)
FT At work

x₁(t) = cos(2π·5t)
x₂(t) = cos(2π·25t)
x₃(t) = cos(2π·50t)
![Page 335: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/335.jpg)
FT At work

x₁(t) →F→ X₁(ω)
x₂(t) →F→ X₂(ω)
x₃(t) →F→ X₃(ω)
![Page 336: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/336.jpg)
FT At work

x₄(t) = cos(2π·5t) + cos(2π·25t) + cos(2π·50t)

x₄(t) →F→ X₄(ω)
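Numerically, the spectrum of x₄ shows exactly three peaks, at 5, 25 and 50 Hz (the sampling rate and duration here are arbitrary choices):

```python
import numpy as np

fs = 200                                   # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)                # 1 second of samples
x4 = (np.cos(2 * np.pi * 5 * t) + np.cos(2 * np.pi * 25 * t)
      + np.cos(2 * np.pi * 50 * t))

X4 = np.abs(np.fft.rfft(x4))
freqs = np.fft.rfftfreq(len(x4), d=1 / fs)

# The three largest spectral peaks sit at 5, 25 and 50 Hz
peaks = freqs[np.argsort(X4)[-3:]]
assert set(np.round(peaks)) == {5.0, 25.0, 50.0}
```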
![Page 337: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/337.jpg)
FT At work

Complex exponentials (sinusoids) as basis functions:

F(ω) = ∫ f(t) e^{−jωt} dt

f(t) = (1/2π) ∫ F(ω) e^{jωt} dω
![Page 338: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/338.jpg)
Stationarity of Signal

Stationary Signal
– signals with frequency content unchanged in time
– all frequency components exist at all times

Non-stationary Signal
– frequency changes in time
– one example: the "Chirp Signal"
![Page 339: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/339.jpg)
Stationary & Non Stationary Signals

The FT identifies all spectral components present in the signal; however, it does not provide any information regarding the temporal (time) localization of these components. Why?

Stationary signals consist of spectral components that do not change in time
– all spectral components exist at all times
– no need to know any time information
– FT works well for stationary signals

However, non-stationary signals consist of time-varying spectral components
– How do we find out which spectral component appears when?
– FT only provides what spectral components exist, not where in time they are located.
– We need some other way to determine the time localization of spectral components.
![Page 340: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/340.jpg)
Stationary & Non Stationary Signals

Stationary signals' spectral characteristics do not change with time.

Non-stationary signals have time-varying spectra.

x₄(t) = cos(2π·5t) + cos(2π·25t) + cos(2π·50t)

x₅(t) = [x₁ ⊕ x₂ ⊕ x₃]   (⊕ = concatenation)
![Page 341: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/341.jpg)
Stationary & Nonstationary signals

Stationary: 2 Hz + 10 Hz + 20 Hz, all present at all times.

Non-Stationary: 2 Hz for 0.0–0.4 s, then 10 Hz for 0.4–0.7 s, then 20 Hz for 0.7–1.0 s.

(Time-domain plots with their magnitude spectra, magnitude vs. frequency in Hz.)
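A sketch of this point (the component frequencies and segment timings follow the slide; the sampling rate is an assumption): the stationary sum and the non-stationary concatenation contain the same frequencies, and the magnitude spectrum alone cannot say when each occurs:

```python
import numpy as np

fs = 100                                   # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)

# Stationary: 2 Hz + 10 Hz + 20 Hz present at all times
stat = (np.cos(2 * np.pi * 2 * t) + np.cos(2 * np.pi * 10 * t)
        + np.cos(2 * np.pi * 20 * t))

# Non-stationary: the same frequencies, one after the other
nonstat = np.concatenate([
    np.cos(2 * np.pi * 2 * t[t < 0.4]),
    np.cos(2 * np.pi * 10 * t[(t >= 0.4) & (t < 0.7)]),
    np.cos(2 * np.pi * 20 * t[t >= 0.7]),
])

freqs = np.fft.rfftfreq(len(t), d=1 / fs)
S = np.abs(np.fft.rfft(stat))
NS = np.abs(np.fft.rfft(nonstat))

# The stationary spectrum peaks exactly at 2, 10 and 20 Hz ...
assert set(freqs[np.argsort(S)[-3:]]) == {2.0, 10.0, 20.0}
# ... and the non-stationary one concentrates energy around the same
# frequencies, with no hint of *when* each component occurred
for f0 in (2.0, 10.0, 20.0):
    band = np.abs(freqs - f0) <= 1.0
    assert NS[band].max() > 2 * NS.mean()
```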
![Page 342: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/342.jpg)
Chirp signals

Two chirps (one sweeping 2 Hz to 20 Hz, the other sweeping 20 Hz to 2 Hz) are different in the time domain but the same in the frequency domain.

At what time do the frequency components occur? The FT cannot tell!

(Time-domain plots with their magnitude spectra, magnitude vs. frequency in Hz.)
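This is easy to reproduce: time-reversing a real signal leaves its magnitude spectrum unchanged, so an up-chirp and the corresponding down-chirp are indistinguishable to the FT (the sampling rate and sweep below are arbitrary choices):

```python
import numpy as np

fs = 100
t = np.arange(0, 1, 1 / fs)
# Linear chirp sweeping 2 Hz -> 20 Hz: instantaneous frequency 2 + 18t
up = np.cos(2 * np.pi * (2 * t + 9 * t ** 2))
down = up[::-1]            # time reversal: sweeps 20 Hz -> 2 Hz

U = np.abs(np.fft.fft(up))
D = np.abs(np.fft.fft(down))

assert not np.allclose(up, down)   # different in the time domain ...
assert np.allclose(U, D)           # ... identical magnitude spectra
```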
![Page 343: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/343.jpg)
Stationary & Non Stationary Signals
Perfect knowledge of what frequencies exist, but no information about where these frequencies are located in time
![Page 344: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/344.jpg)
FFT Vs Wavelet

FFT basis functions: sinusoids.
Wavelet transforms use small waves, called wavelets.
FFT can only offer frequency information.
Wavelets: frequency + temporal information.
Fourier analysis doesn't work well on discontinuous, "bursty" data.
![Page 345: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/345.jpg)
Fourier Vs. Wavelet

Fourier
– loses time (location) coordinate completely
– analyses the whole signal
– short pieces lose "frequency" meaning

Wavelets
– localized time-frequency analysis
– short signal pieces also have significance
– Scale = Frequency band
![Page 346: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/346.jpg)
2/22/2009 346
Shortcomings of FT
Sinusoids and exponentials
– Stretch into infinity in time: no time localization
– Instantaneous in frequency: perfect spectral localization
– Global analysis does not allow analysis of non-stationary signals
Need a local analysis scheme for a time-frequency representation (TFR) of non-stationary signals
– Windowed FT or Short-Time FT (STFT): segment the signal into narrow time intervals, narrow enough to be considered stationary, and take the Fourier transform of each segment (Gabor, 1946)
– Followed by other TFRs, which differed from each other by the selection of the windowing function
![Page 347: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/347.jpg)
2/22/2009 347
Nothing More, Nothing Less
FT only gives what frequency components exist in the signal
The time and frequency information cannot be seen at the same time
A time-frequency representation of the signal is needed
Most transportation signals are non-stationary. (We need to know not only whether an incident happened, but also when.)
ONE EARLIER SOLUTION: SHORT-TIME FOURIER TRANSFORM (STFT)
![Page 348: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/348.jpg)
2/22/2009 348
Short-Time Fourier Transform (STFT)
1. Choose a window function of finite length
2. Place the window on top of the signal at t = 0
3. Truncate the signal using this window
4. Compute the FT of the truncated signal; save it
5. Incrementally slide the window to the right
6. Go to step 3, until the window reaches the end of the signal
For each time location where the window is centered, we obtain a different FT
– Hence, each FT provides the spectral information of a separate time-slice of the signal, providing simultaneous time and frequency information
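The steps above can be sketched directly in code. A minimal illustration (NumPy assumed; the window length, hop size, and Hann window are illustrative choices, not prescribed by the slide):

```python
import numpy as np

def stft(x, win_len=64, hop=16):
    """Naive STFT following the slide's recipe: window, truncate, FT, slide."""
    window = np.hanning(win_len)                        # step 1: finite-length window
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):   # steps 2 and 5: place/slide
        segment = x[start:start + win_len] * window     # step 3: truncate
        frames.append(np.fft.rfft(segment))             # step 4: FT, save
    return np.array(frames)     # rows: time slices, columns: frequency bins

# Each row is the spectrum of one time slice of the signal.
t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 50 * t)
S = stft(x)
assert S.shape == ((1024 - 64) // 16 + 1, 64 // 2 + 1)  # (time slices, freq bins)
```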
![Page 349: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/349.jpg)
2/22/2009 349
STFT
STFT_x(t', ω) = ∫_t [x(t) · W(t − t')] · e^{−jωt} dt
where t' is the time parameter, ω the frequency parameter, x(t) the signal to be analyzed, W(t − t') the windowing function centered at t = t', and e^{−jωt} the FT kernel (basis function). The STFT of the signal x(t) is computed for each window centered at t = t'.
![Page 350: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/350.jpg)
2/22/2009 350
STFT
[Plots: the window centered at t' = −8, t' = −2, t' = 4, and t' = 8 as it slides along the signal]
![Page 351: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/351.jpg)
2/22/2009 351
STFT
STFT provides the time information by computing a different FT for consecutive time intervals, and then putting them together
– Time-Frequency Representation (TFR)
– Maps 1-D time domain signals to 2-D time-frequency signals
Consecutive time intervals of the signal are obtained by truncating the signal using a sliding windowing function
How to choose the windowing function?
– What shape? Rectangular, Gaussian, Elliptic…?
– How wide?
A wider window requires fewer time steps -> low time resolution
Also, the window should be narrow enough that the portion of the signal falling within it is stationary
Can we choose an arbitrarily narrow window…?
![Page 352: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/352.jpg)
2/22/2009 352
Selection of STFT Window
STFT_x(t', ω) = ∫_t [x(t) · W(t − t')] · e^{−jωt} dt
Two extreme cases:
W(t) infinitely long, W(t) = 1:
The STFT turns into the FT, providing excellent frequency information (good frequency resolution), but no time information
W(t) infinitely short, W(t) = δ(t):
STFT_x(t', ω) = ∫_t [x(t) · δ(t − t')] · e^{−jωt} dt = x(t') · e^{−jωt'}
The STFT then gives the time signal back, with a phase factor: excellent time information (good time resolution), but no frequency information
![Page 353: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/353.jpg)
2/22/2009 353
Drawbacks of STFT
![Page 354: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/354.jpg)
2/22/2009 354
Drawbacks of STFT
Unchanged window
Dilemma of resolution
– Narrow window -> poor frequency resolution
– Wide window -> poor time resolution
Heisenberg uncertainty principle
– Cannot know what frequency exists at what time intervals
![Page 355: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/355.jpg)
2/22/2009 355
Heisenberg principle
Δt · Δf ≥ 1/(4π)
Time resolution: how well two spikes in time can be separated from each other in the transform domain
Frequency resolution: how well two spectral components can be separated from each other in the transform domain
Both time and frequency resolutions cannot be arbitrarily high! We cannot precisely know at what time instance a frequency component is located; we can only know what interval of frequencies is present in which time intervals
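For a Gaussian window the bound Δt · Δf ≥ 1/(4π) is met with equality, which makes it a convenient numerical check. A sketch (NumPy assumed) measuring Δt and Δf as the RMS widths of the energy densities |g(t)|² and |G(f)|²:

```python
import numpy as np

# A Gaussian pulse, finely sampled over a wide interval.
dt = 0.001
t = np.arange(-10.0, 10.0, dt)
sigma = 0.5
g = np.exp(-t**2 / (2 * sigma**2))

# RMS time width of the energy density |g(t)|^2.
p_t = np.abs(g)**2
p_t /= np.sum(p_t) * dt
delta_t = np.sqrt(np.sum(t**2 * p_t) * dt)

# RMS frequency width of |G(f)|^2 (f in Hz).
G = np.fft.fft(g)
f = np.fft.fftfreq(len(t), d=dt)
df = f[1] - f[0]
p_f = np.abs(G)**2
p_f /= np.sum(p_f) * df
delta_f = np.sqrt(np.sum(f**2 * p_f) * df)

product = delta_t * delta_f
# The Gaussian attains the lower bound 1/(4*pi), about 0.0796.
assert product >= 1 / (4 * np.pi) - 1e-4
assert abs(product - 1 / (4 * np.pi)) < 1e-3
```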
![Page 356: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/356.jpg)
2/22/2009 356
Drawbacks of STFT
[Diagram: the STFT tiles the time-frequency (T-F) plane with boxes of one fixed size]
![Page 357: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/357.jpg)
2/22/2009 357
Multiresolution analysis
Wavelet Transform
– An alternative approach to the short-time Fourier transform to overcome the resolution problem
– Similar to the STFT: the signal is multiplied with a function
Multiresolution Analysis
– Analyze the signal at different frequencies with different resolutions
– Good time resolution and poor frequency resolution at high frequencies
– Good frequency resolution and poor time resolution at low frequencies
– More suitable for short-duration, higher-frequency components and longer-duration, lower-frequency components
![Page 358: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/358.jpg)
2/22/2009 358
Wavelet Definition
"The wavelet transform is a tool that cuts up data, functions or operators into different frequency components, and then studies each component with a resolution matched to its scale"
![Page 359: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/359.jpg)
2/22/2009 359
Principles of wavelet transform
Split up the signal into a bunch of signals
Representing the same signal, but all corresponding to different frequency bands
Only providing what frequency bands exist at what time intervals
![Page 360: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/360.jpg)
2/22/2009 360
The wavelet transform
Overcomes the preset resolution problem of the STFT by using a variable-length window
Analysis windows of different lengths are used for different frequencies:
– Analysis of high frequencies -> use narrower windows for better time resolution
– Analysis of low frequencies -> use wider windows for better frequency resolution
This works well if the signal to be analyzed mainly consists of slowly varying characteristics with occasional short high-frequency bursts.
Heisenberg's principle still holds!
The function used to window the signal is called the wavelet
![Page 361: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/361.jpg)
2/22/2009 361
Wavelet transform
Scale and shift the original waveform
Compare it to a wavelet
Assign a coefficient of similarity
![Page 362: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/362.jpg)
2/22/2009 362
Definition of continuous wavelet transform
CWT_x(τ, s) = Ψ_x(τ, s) = (1/√|s|) ∫_t x(t) · ψ*((t − τ)/s) dt
– the continuous wavelet transform of the signal x(t) (the signal to be analyzed) using the analysis wavelet ψ(·)
– τ: translation parameter (location of the window)
– s: scale parameter, a measure of frequency; Scale = 1/frequency
– 1/√|s|: a normalization constant
– ψ: the mother wavelet; all kernels are obtained by translating (shifting) and/or scaling the mother wavelet
Wavelet
– Small wave
– Means the window function is of finite length
Mother Wavelet
– A prototype for generating the other window functions
– All the used windows are its dilated or compressed and shifted versions
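The 1/√|s| factor keeps the energy of every dilated kernel equal to that of the mother wavelet. A small numerical check (NumPy assumed; the Ricker "Mexican hat" wavelet is used here purely as a convenient example of a mother wavelet):

```python
import numpy as np

def ricker(t):
    """Ricker ("Mexican hat") wavelet: second derivative of a Gaussian."""
    return (1 - t**2) * np.exp(-t**2 / 2)

dt = 0.001
t = np.arange(-50.0, 50.0, dt)

def daughter(t, tau, s):
    """Translated and scaled kernel with the 1/sqrt(s) normalization."""
    return ricker((t - tau) / s) / np.sqrt(s)

# Energy (squared L2 norm) is the same for every scale and shift.
e1 = np.sum(daughter(t, 0.0, 1.0)**2) * dt
e2 = np.sum(daughter(t, 3.0, 2.0)**2) * dt
e3 = np.sum(daughter(t, -5.0, 0.5)**2) * dt
assert np.allclose([e1, e2, e3], e1, rtol=1e-6)
```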
![Page 363: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/363.jpg)
2/22/2009 363
CWT
for each scale S
    for each position P
        Coefficient(S, P) = ∫ Signal × Wavelet(S, P) dt   (over all time)
    end
end
The result is a coefficient for every (scale, position) pair.
![Page 364: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/364.jpg)
2/22/2009 364
Scaling: the value of "stretch"
Scaling a wavelet simply means stretching (or compressing) it.
f(t) = sin(t), scale factor 1
f(t) = sin(2t), scale factor 1/2
f(t) = sin(3t), scale factor 1/3
![Page 365: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/365.jpg)
2/22/2009 365
Scale
– S > 1: dilates the signal
– S < 1: compresses the signal
High scale -> a stretched wavelet -> non-detailed global view of the signal -> spans the entire signal -> low frequency -> slowly changing, coarse features
Low scale -> a compressed wavelet -> rapidly changing details -> high frequency -> detailed view that lasts a short time
Only a limited interval of scales is necessary
• It lets you either narrow down the frequency band of interest, or determine the frequency content in a narrower time interval
• Scaling = frequency band
• Good for non-stationary data
![Page 366: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/366.jpg)
2/22/2009 366
Scale is (sort of) like frequency
Small scale: rapidly changing details, like high frequency
Large scale: slowly changing details, like low frequency
![Page 367: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/367.jpg)
2/22/2009 367
Scale is (sort of) like frequency
The scale factor works exactly the same with wavelets. The smaller the scale factor, the more "compressed" the wavelet.
![Page 368: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/368.jpg)
2/22/2009 368
Shifting
Shifting a wavelet simply means delaying (or hastening) its onset. Mathematically, delaying a function f(t) by k is represented by f(t − k)
![Page 369: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/369.jpg)
2/22/2009 369
Shifting
[Plots: the wavelet shifted along the signal, with similarity coefficients C = 0.0004 and C = 0.0034 at two different positions]
![Page 370: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/370.jpg)
2/22/2009 370
Computation of CWT
CWT_x(τ, s) = Ψ_x(τ, s) = (1/√|s|) ∫_t x(t) · ψ*((t − τ)/s) dt
Step 1: The wavelet is placed at the beginning of the signal, and s is set to 1 (the most compressed wavelet).
Step 2: The wavelet function at scale "1" is multiplied by the signal, integrated over all times, and then multiplied by 1/√|s|.
Step 3: Shift the wavelet to t = τ, and get the transform value at t = τ and s = 1.
Step 4: Repeat the procedure until the wavelet reaches the end of the signal.
Step 5: Increase the scale s by a sufficiently small value, and repeat the above procedure for all s.
Step 6: Each computation for a given s fills a single row of the time-scale plane.
Step 7: The CWT is obtained once all scales s have been calculated.
![Page 371: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/371.jpg)
2/22/2009 371
Simple steps for CWT
1. Take a wavelet and compare it to a section at the start of the original signal.
2. Calculate a correlation coefficient c.
![Page 372: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/372.jpg)
2/22/2009 372
Simple steps for CWT
3. Shift the wavelet to the right and repeat steps 1 and 2 until you've covered the whole signal.
4. Scale (stretch) the wavelet and repeat steps 1 through 3.
5. Repeat steps 1 through 4 for all scales.
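These steps translate almost line-for-line into code. A minimal CWT sketch (NumPy assumed; the Ricker wavelet and the particular scale/shift grids are illustrative choices):

```python
import numpy as np

def ricker(t):
    """Ricker ("Mexican hat") wavelet used as the analysis wavelet."""
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt(x, t, scales):
    """Continuous wavelet transform on a discrete grid.

    For each scale s and each shift tau (steps 1-5 above),
    correlate the signal with the shifted, scaled wavelet.
    """
    dt = t[1] - t[0]
    coeffs = np.empty((len(scales), len(t)))
    for i, s in enumerate(scales):
        for j, tau in enumerate(t):
            psi = ricker((t - tau) / s) / np.sqrt(s)   # scaled + shifted kernel
            coeffs[i, j] = np.sum(x * psi) * dt        # correlation coefficient
    return coeffs

# A short test signal and a small scale grid.
t = np.linspace(-5, 5, 256)
x = ricker(t)                       # the signal is itself a wavelet at scale 1
C = cwt(x, t, scales=[0.5, 1.0, 2.0])

# The best match occurs at scale 1, shift ~0 (center of the grid).
i, j = np.unravel_index(np.argmax(C), C.shape)
assert [0.5, 1.0, 2.0][i] == 1.0
assert abs(t[j]) < 0.1
```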
![Page 373: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/373.jpg)
2/22/2009 373
WT at work
CWT_x(τ, s) = (1/√|s|) ∫_t x(t) · ψ*((t − τ)/s) dt
[Plot: analysis at low frequency (large scale), a stretched wavelet slid along the signal]
![Page 374: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/374.jpg)
2/22/2009 374
WT at work
![Page 375: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/375.jpg)
2/22/2009 375
WT at work
![Page 376: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/376.jpg)
2/22/2009 376
WT at work
![Page 377: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/377.jpg)
2/22/2009 377
Resolution of Time & Frequency
[Diagram: two tilings of the time-frequency plane. Boxes narrow in time give better time resolution and poor frequency resolution; boxes narrow in frequency give better frequency resolution and poor time resolution]
• Each box represents an equal portion of the plane
• The resolution in STFT is selected once for the entire analysis
![Page 378: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/378.jpg)
2/22/2009 378
Comparison of Transformations
From http://www.cerm.unifi.it/EUcourse2001/Gunther_lecturenotes.pdf, p.10
![Page 379: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/379.jpg)
2/22/2009 379
Discretization of CWT
It is necessary to sample the time-frequency (scale) plane.
At high scale s (lower frequency f), the sampling rate N can be decreased.
The scale parameter s is normally discretized on a logarithmic grid.
The most common base value is 2.
The discretized CWT is not a true discrete transform.
Discrete Wavelet Transform (DWT)
– Provides sufficient information both for analysis and synthesis
– Reduces the computation time significantly
– Easier to implement
– Analyzes the signal at different frequency bands with different resolutions
– Decomposes the signal into a coarse approximation and detail information
![Page 380: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/380.jpg)
2/22/2009 380
Discrete wavelet transforms
The CWT computed by computers is really not the CWT; it is a discretized version of the CWT.
The resolution of the time-frequency grid can be controlled (within Heisenberg's inequality) by the time and scale step sizes.
Often this results in a very redundant representation.
How to discretize the continuous time-frequency plane, so that the representation is non-redundant?
– Sample the time-frequency plane on a dyadic (octave) grid:
CWT_x(τ, s) = (1/√|s|) ∫_t x(t) · ψ*((t − τ)/s) dt
ψ_kn(t) = 2^{−k/2} · ψ(2^{−k} t − n),   k, n ∈ Z
![Page 381: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/381.jpg)
2/22/2009 381
Multiresolution analysis
Analyzing a signal both in the time domain and the frequency domain is often needed
– But the resolutions in both domains are limited by the Heisenberg uncertainty principle
Multiresolution Analysis (MRA) overcomes this. How?
– It gives good time resolution and poor frequency resolution at high frequencies, and good frequency resolution and poor time resolution at low frequencies
– This helps because most natural signals have low-frequency content spread over long durations and high-frequency content of short duration
![Page 382: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/382.jpg)
2/22/2009 382
Discrete wavelet transform
[Diagram: the signal is split by a pair of filters. The lowpass filter yields the approximation (a); the highpass filter yields the details (d)]
![Page 383: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/383.jpg)
2/22/2009 383
Discrete wavelet transform
Dyadic sampling of the time-frequency plane results in a very efficient algorithm for computing the DWT:
– Subband coding using multiresolution analysis
– Dyadic sampling and multiresolution are achieved through a series of filtering and up/down-sampling operations
x[n] -> H -> y[n]
y[n] = h[n] * x[n] = x[n] * h[n] = Σ_{k=1..N} h[k] · x[n − k] = Σ_{k=1..N} x[k] · h[n − k]
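The filtering operation above is ordinary discrete convolution, and its commutativity (h * x = x * h) can be checked directly. A small sketch (NumPy assumed; the 2-tap averaging filter is an illustrative choice):

```python
import numpy as np

def convolve(x, h):
    """Direct convolution: y[n] = sum_k h[k] * x[n - k]."""
    N = len(x) + len(h) - 1
    y = np.zeros(N)
    for n in range(N):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.5])          # a simple 2-tap averaging (lowpass) filter

y = convolve(x, h)
assert np.allclose(y, np.convolve(x, h))             # matches NumPy's convolution
assert np.allclose(convolve(x, h), convolve(h, x))   # h * x == x * h
```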
![Page 384: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/384.jpg)
2/22/2009 384
DWT implementation
2-level DWT decomposition. The decomposition can be continued as long as there are enough samples for down-sampling.
[Diagram: analysis bank. x[n] is filtered by G (half-band highpass) and H (half-band lowpass), each followed by down-sampling by 2 (↓2); the lowpass branch is decomposed again at the next level. Synthesis bank: the subband signals are up-sampled (↑2), passed through the corresponding synthesis filters, and summed to reconstruct x[n].]
Analysis (decomposition):
ỹ_high[k] = Σ_n x[n] · g[2k − n]
ỹ_low[k] = Σ_n x[n] · h[2k − n]
Reconstruction:
x[n] = Σ_k ( ỹ_high[k] · g[2k − n] + ỹ_low[k] · h[2k − n] )
![Page 385: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/385.jpg)
2/22/2009 385
DWT Demystified
[Diagram: a 3-level filter bank. At each level the current approximation passes through g[n] (highpass, |G(jw)| a half-band response) and h[n] (lowpass), each followed by ↓2.
Level 1: d1, level-1 DWT coefficients, length 256, band π/2 ~ π; a1, length 256, band 0 ~ π/2
Level 2: d2, level-2 DWT coefficients, length 128, band π/4 ~ π/2; a2, length 128, band 0 ~ π/4
Level 3: d3, level-3 DWT coefficients, length 64, band π/8 ~ π/4; a3, the level-3 approximation coefficients, length 64, band 0 ~ π/8]
![Page 386: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/386.jpg)
2/22/2009 386
2D DWT
Generalization of the concept to 2D
2D functions -> images: f(x,y) -> I[m,n], the intensity function
Why would we want to take the 2D DWT of an image anyway?
– Compression
– Denoising
– Feature extraction
Mathematical form:
f(x,y) = Σ_i Σ_j a_o(i,j) · s_φ(x − i, y − j)
a_o(i,j) = ⟨f(x,y), s_φ(x − i, y − j)⟩
s_φ(x,y) = φ(x) · φ(y)    s_ψ(x,y) = ψ(x) · ψ(y)
![Page 387: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/387.jpg)
2/22/2009 387
Implementation of 2D DWT
[Diagram: the input image's rows are filtered with H̃ (lowpass) and G̃ (highpass) and down-sampled along the rows (2↓1); each result's columns are then filtered with H̃ and G̃ and down-sampled along the columns (1↓2). This yields four subbands: LL (the next approximation A_{k+1}) and the detail subbands LH, HL, HH (D_{k+1}^(h), D_{k+1}^(v), D_{k+1}^(d)). The LL subband is recursively decomposed in the same way, producing the familiar nested LL/LH/HL/HH tiling of the image.]
![Page 388: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/388.jpg)
2/22/2009 388
Up and down… up and down
2↓1: Downsample columns along the rows (for each row, keep the even-indexed columns, discard the odd-indexed columns)
1↓2: Downsample rows along the columns (for each column, keep the even-indexed rows, discard the odd-indexed rows)
2↑1: Upsample columns along the rows (for each row, insert zeros between every other sample, i.e. column)
1↑2: Upsample rows along the columns (for each column, insert zeros between every other sample, i.e. row)
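A hedged sketch of one 2D decomposition level: applying the 1D Haar average/difference step first along the rows, then along the columns, produces the LL, LH, HL, and HH subbands (NumPy assumed; Haar chosen for brevity):

```python
import numpy as np

def haar_1d(x):
    """One Haar level along the last axis: lowpass and highpass halves."""
    low = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
    high = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
    return low, high

def dwt2_level(img):
    """One 2D DWT level: filter along the rows, then along the columns."""
    L, H = haar_1d(img)                      # along rows (2↓1)
    LL, LH = haar_1d(L.swapaxes(0, 1))       # along columns (1↓2)
    HL, HH = haar_1d(H.swapaxes(0, 1))
    return (LL.swapaxes(0, 1), LH.swapaxes(0, 1),
            HL.swapaxes(0, 1), HH.swapaxes(0, 1))

img = np.arange(16.0).reshape(4, 4)
LL, LH, HL, HH = dwt2_level(img)
assert LL.shape == LH.shape == HL.shape == HH.shape == (2, 2)

# A constant image has all its energy in LL: the detail subbands vanish.
flat = np.ones((4, 4))
_, LH2, HL2, HH2 = dwt2_level(flat)
assert np.allclose(LH2, 0) and np.allclose(HL2, 0) and np.allclose(HH2, 0)
```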
![Page 389: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/389.jpg)
2/22/2009 389
Reconstruction
[Diagram: the four subbands A_{k+1}, D_{k+1}^(h), D_{k+1}^(v), D_{k+1}^(d) (LL, LH, HL, HH) are each up-sampled along the columns (1↑2) and filtered column-wise with the synthesis filters H and G; the pairs are summed, up-sampled along the rows (2↑1), filtered row-wise with H and G, and summed again to recover the original image.]
![Page 390: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/390.jpg)
2/22/2009 390
Subband coding algorithm
Halves the time resolution
– Only half the number of samples remain
Doubles the frequency resolution
– The spanned frequency band is halved
[Diagram: x[n], 512 samples, 0-1000 Hz, is split by Filter 1 into D1 (500-1000 Hz, 256 samples) and A1 (256 samples); Filter 2 splits A1 into D2 (250-500 Hz, 128 samples) and A2 (128 samples); Filter 3 splits A2 into D3 (125-250 Hz, 64 samples) and A3 (0-125 Hz, 64 samples)]
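A hedged sketch of the halving pattern: repeatedly splitting the approximation with the Haar average/difference step reproduces the sample counts in the tree above (512 -> 256 -> 128 -> 64):

```python
import numpy as np

def haar_split(x):
    """One subband split: approximation (lowpass) and detail (highpass)."""
    return ((x[0::2] + x[1::2]) / np.sqrt(2),
            (x[0::2] - x[1::2]) / np.sqrt(2))

x = np.random.default_rng(0).standard_normal(512)
approx = x
detail_lengths = []
for _ in range(3):                      # three levels, as in the diagram
    approx, detail = haar_split(approx)
    detail_lengths.append(len(detail))

assert detail_lengths == [256, 128, 64]   # D1, D2, D3
assert len(approx) == 64                  # A3
```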
![Page 391: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/391.jpg)
2/22/2009 391
Applications of wavelets
Compression
De-noising
Feature extraction
Discontinuity detection
Distribution estimation
Data analysis
– Biological data
– NDE data
– Financial data
![Page 392: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/392.jpg)
2/22/2009 392
Fingerprint compression
Wavelet: Haar, level: 3
![Page 393: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/393.jpg)
2/22/2009 393
Image denoising using wavelet transform
Image de-noising using the wavelet transform utilizes the same principles as signal decomposition and de-noising:
Each column of the image matrix is convolved with a high-pass and a low-pass filter, followed by downsampling.
The same process is applied to the image matrix rows.
Threshold limits δ are chosen for each decomposition level and the coefficients are modified: for k = 0, 1, …, N−1, a coefficient is kept only if |c(k)| > δ.
The image is then reconstructed from the modified wavelet transform coefficients.
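A hedged sketch of this pipeline on a 1D signal (the same idea applies per row/column of an image), using a Haar split and simple hard thresholding, i.e. keeping a detail coefficient only if |c(k)| > δ; the noise level and threshold value are illustrative choices:

```python
import numpy as np

def haar_split(x):
    """One Haar level: approximation (averages) and detail (differences)."""
    return ((x[0::2] + x[1::2]) / np.sqrt(2),
            (x[0::2] - x[1::2]) / np.sqrt(2))

def haar_merge(a, d):
    """Inverse of haar_split: rebuild the signal from the two branches."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.1 * rng.standard_normal(256)

a, d = haar_split(noisy)
delta = 0.3                                    # threshold limit (illustrative)
d_hard = np.where(np.abs(d) > delta, d, 0.0)   # hard thresholding of details
denoised = haar_merge(a, d_hard)

# Thresholding the detail coefficients reduces the error vs. the clean signal.
assert np.mean((denoised - clean)**2) < np.mean((noisy - clean)**2)
```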
![Page 394: Introduction to DIP](https://reader034.vdocument.in/reader034/viewer/2022042513/54678e59af795983338b56d0/html5/thumbnails/394.jpg)
2/22/2009 394
Image enhancement using wavelet transform