CMPT 361 – 02: Image Formation (lecture transcript)
© Machiraju/Zhang/Möller 2
Today
• Input and displays of a graphics system
• Raster display basics: pixels, the frame buffer, raster scan, LCD displays
• Historical perspective
• Image formation
• The ray tracing algorithm
• The z-buffer algorithm

Readings: Sections 1.2–1.5, 11.2 (ray tracing), 6.11.5 (z-buffering)
Overview of a graphics system
• Input devices
• Image formed and stored in the frame buffer
• Output device
Input devices
• Pointing/locator devices: indicate a location on screen
  – Mouse/trackball/spaceball
  – Data tablet
  – Joystick
  – Touch pad and touch screen
• Keyboard devices: send character input
• Choice devices: mouse buttons, function keys
Input devices for CMPT 361
Use simple keyboard and mouse commands to interact with your OpenGL program.
CRT basics
• The screen output is stored in the frame buffer and is converted into voltages across the deflection plates via a digital-to-analog converter (DAC)
• Light is emitted when electrons hit the phosphor
• But light output from the phosphor decays exponentially with time, typically in 10–60 microseconds
  – Thus the screen needs to be redrawn, or refreshed
  – Refresh rate is typically 60 Hz to avoid flicker ("twinkling")
  – Flicker: when the eye can no longer integrate individual light pulses from a point on the screen, e.g., due to a low refresh rate
Shadow-mask color CRTs
• Dots of three differently colored phosphors (R, G, B) are arranged in very small groups (triads) on the coating
• Three electron guns (R, G, B) emit electron beams in a controlled fashion so that only phosphors of the proper colors are excited
• We see a mixture of the three colors
Shadow-mask CRT
Flat-panel displays
• CRTs are no longer the most widely used display devices
• Flat-panel displays, e.g., LCDs (liquid-crystal displays) and plasma displays, dominate
• Flat-panel displays can be emissive (plasma) or non-emissive (LCD, which allows light to pass through)
• Colors can be produced using triads (sub-pixels)
• Screen refresh is no longer necessary
• See p. 8 of the text or Chapter 4 of Foley et al. for more on flat-panel technologies (or check Wikipedia)
High Dynamic Range Displays
[Figure: a high dynamic range display stacks a high-resolution colour LCD in front of a low-resolution, individually modulated LED array]
Raster display basics
• The screen is a rectangular array of picture elements, or pixels
• Resolution determines the detail you can see
  – number of pixels in an image, e.g., 1024×768, 1280×1024, 1366×768, etc.
  – also measured in ppi or dpi – pixels or dots per inch
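The ppi figure follows from the pixel dimensions and the physical diagonal: divide the diagonal pixel count by the diagonal length in inches. A minimal sketch (the 15.6-inch panel size is assumed for illustration):

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density: diagonal pixel count divided by diagonal length in inches."""
    diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
    return diagonal_px / diagonal_in

# A hypothetical 15.6-inch laptop panel at 1366x768:
print(round(pixels_per_inch(1366, 768, 15.6)))  # -> 100
```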
Raster scan pattern
• Horizontal scan rate: number of scan lines per second
• Interlaced (TV) vs. non-interlaced displays

[Figure: raster scan pattern with scan lines, horizontal retrace, and vertical retrace]
Video display standards
• NTSC (National Television System Committee):
  – 525 lines, 30 frames/sec interlaced (the visual system is tricked into thinking the refresh rate is 60 frames/sec)
  – 480 lines visible
  – North America, South America, Japan
• PAL/SECAM:
  – 625 lines, 25 frames/sec interlaced
  – Europe, Russia, Africa, etc.
• HDTV: 1,080 lines
The frame buffer
• Stores per-pixel information
  – Depth of a frame buffer: number of bits per pixel
  – E.g., for color representation: 1 bit => 2 colors, 8 bits => 256 colors, 24 bits => true color (16 million colors)
• The color buffer is only one of many buffers; other information, e.g., depth, can also be stored
• Implemented with a special type of memory in standard PCs or on a graphics card for fast redisplay
• Part of standard memory in earlier systems
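The depth figures above translate directly into memory: width × height pixels, each holding the stated number of bits. A small sketch of the arithmetic, using the slide's own examples:

```python
def frame_buffer_bytes(width, height, bits_per_pixel):
    """Memory for one buffer: one bits_per_pixel entry per pixel, in bytes."""
    return width * height * bits_per_pixel // 8

# A 1024x768 true-color (24 bits/pixel) color buffer:
print(frame_buffer_bytes(1024, 768, 24))  # -> 2359296 bytes (2.25 MB)

# Number of representable colors grows as 2**depth:
print(2 ** 1, 2 ** 8, 2 ** 24)            # -> 2 256 16777216
```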
Comp Graphics: 1950-1960
• Computer graphics goes back to the earliest days of computing
  – Strip charts
  – Pen plotters
  – Simple displays using A/D converters to go from computer to calligraphic CRT
• Cost of refresh for CRT too high
  – Computers slow, expensive, unreliable

adapted from Angel and Shreiner
Comp Graphics: 1960-1970
• Wireframe graphics
  – Draw only lines
• Sketchpad
• Display processors
• Storage tube

[Figure: wireframe representation of a sun object]
Sketchpad
• Ivan Sutherland's PhD thesis at MIT
  – Recognized the potential of man-machine interaction
  – Loop:
    • Display something
    • User moves light pen
    • Computer generates new display
  – Sutherland also created many of the now-common algorithms for computer graphics
Comp Graphics: 1970-1980
• Raster graphics
• Beginning of graphics standards
  – IFIPS
    • GKS: European effort – becomes the ISO 2D standard
    • Core: North American effort – 3D, but fails to become an ISO standard
• Workstations and PCs
Raster Graphics
• Allows us to go from lines and wire frame images to filled polygons
Comp Graphics: 1980-1990
• Realism comes to computer graphics
[Figure: smooth shading, environment mapping, bump mapping]
Comp Graphics: 1980-1990
• Special-purpose hardware
  – Silicon Graphics geometry engine
    • VLSI implementation of the graphics pipeline
• Industry-based standards
  – PHIGS
  – RenderMan
• Networked graphics: X Window System
• Human-Computer Interface (HCI)
Comp Graphics: 1990-2000
• OpenGL API
• Completely computer-generated feature-length movies (Toy Story) are successful
• New hardware capabilities
  – Texture mapping
  – Blending
  – Accumulation, stencil buffers
Computer Graphics: 2000-
• Photorealism
• Graphics cards for PCs dominate the market
  – Nvidia, ATI
• Game boxes and game players determine the direction of the market
• Computer graphics routine in the movie industry: Maya, Lightwave
• Programmable pipelines
Today
• Input and displays of a graphics system
• Raster display basics: pixels, the frame buffer, raster scan, LCD displays
• Historical perspective
• Image formation
• The ray tracing algorithm
• The z-buffer algorithm

Readings: Sections 1.2–1.5, 11.2 (ray tracing), 6.11.5 (z-buffering)
Image formation (aside)
• In computer graphics, we form 2D images via a process analogous to how images are formed by physical imaging systems
  – Cameras
  – Microscopes
  – The human visual system
Elements of image formation
• Object:
  – Exists independent of image formation and viewer
  – Defined in its own object space
  – Formed by geometric primitives, e.g., polygons
• Viewer:
  – Forms the image of the objects via projection onto an image plane
  – The image is produced in 2D, e.g., on a retina, film, etc.
  – Objects need to be transformed into the image space
Also, light and color
• No light => nothing is visible
• How do we, or the camera, see?
  – Light reflected off objects in the scene, or
  – Light transmitted directly into our eyes, e.g., from a light source
• What is reflected color?
  – Both light and object have color
  – When colored light reaches a colored object, some colors are reflected and some absorbed
Light
The electromagnetic spectrum
• Light is the (visible) part of the electromagnetic spectrum that causes a reaction in our visual systems
• Generally these are wavelengths in the range of about 350–750 nm (nanometers)
• Long wavelengths appear as reds and short wavelengths as blues
Simplistic color representation
• RGB color system (R: red, G: green, B: blue)
• Each displayed color has 3 components: R, G, B
• 8 bits per component in a 24-bit true-color display; component values: 0–255
• In OpenGL, RGB values are in [0, 1]
• More details on lights and colors later in the course

(R,G,B) = (255,255,255) white, (0,0,0) black, (255,0,0) red, (255,255,0) yellow, (0,255,255) cyan
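The two conventions above differ only by a scale factor of 255. A small sketch of the conversion (the helper names are made up for illustration):

```python
def to_opengl(rgb255):
    """Map 8-bit components (0-255) to OpenGL-style floats in [0, 1]."""
    return tuple(c / 255.0 for c in rgb255)

def to_byte(rgb01):
    """Map [0, 1] floats back to the nearest 8-bit component values."""
    return tuple(round(c * 255) for c in rgb01)

print(to_opengl((255, 255, 0)))  # -> (1.0, 1.0, 0.0), yellow
print(to_byte((0.0, 1.0, 1.0)))  # -> (0, 255, 255), cyan
```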
Imaging system: pinhole camera
• Use trigonometry (similar triangles) to find the projection of a point at (x, y, z) onto the image plane z = −d:

  x_p = −d·x/z
  y_p = −d·y/z
  z_p = −d
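The projection equations can be checked numerically. A minimal sketch, assuming the pinhole sits at the origin and the camera looks down the negative z-axis with the image plane at z = −d:

```python
def project_pinhole(x, y, z, d):
    """Project (x, y, z) through a pinhole at the origin onto the plane z = -d.

    Visible points have z < 0; similar triangles give x_p = -d*x/z, y_p = -d*y/z.
    """
    assert z != 0, "a point at the center of projection cannot be projected"
    return (-d * x / z, -d * y / z, -d)

# A point 10 units in front of the camera (z = -10), image plane at d = 2:
print(project_pinhole(5.0, 2.5, -10.0, 2.0))  # -> (1.0, 0.5, -2.0)
```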
Synthetic camera model
[Figure: the synthetic camera – center of projection, image plane, and projectors]

The imaging model adopted by 3D computer graphics
First imaging algorithm
• Directly derived from the synthetic camera model:
  – Follow rays of light from a point light source
  – Determine which rays enter the lens of the camera through the imaging window
  – Compute the color of the projection
• Why is this not a good idea?
Ray tracing
• When following light from a light source, many (reflected) rays never intersect the image window and do not contribute to the image
• Reverse the process: cast one ray per pixel from the eye and shoot it into the scene

[Figure: eye, pixel on the image plane, reflected ray, shadow ray, refracted ray]
Ray tracing: the basics
• A point on an object may be illuminated by
  – A light source directly – through a shadow ray
  – Light reflected off another object – through a reflected ray
  – Light transmitted through a transparent object – through a refracted ray

[Figure: eye, pixel on the image plane, reflected ray, shadow ray, refracted ray]
Ray tracing: the algorithm

for each pixel on screen
    determine ray from eye through pixel
    if ray shoots into infinity, return a background color
    if ray shoots into a light source, return the light color appropriately
    find closest intersection of ray with an object    (most expensive step)
    cast off shadow ray (towards light sources)
    if shadow ray hits a light source,
        compute light contribution according to some illumination model
    cast reflected and refracted rays, recursively calculate pixel color contributions
    return pixel color after some absorption
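The loop above can be sketched end to end for a one-sphere scene. Everything here (the scene, light position, the 0.5 reflection weight) is made up for illustration, and refraction is omitted for brevity:

```python
import math

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def scale(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def normalize(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def hit_sphere(origin, direction, center, radius):
    """Smallest positive t with |origin + t*direction - center| = radius, or None.

    Assumes `direction` is unit length (so the quadratic's a = 1)."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

BACKGROUND = (0.0, 0.0, 0.0)
SPHERE = ((0.0, 0.0, -5.0), 1.0, (1.0, 0.0, 0.0))   # center, radius, red color
LIGHT = (5.0, 5.0, 0.0)                             # point light position

def trace(origin, direction, depth=2):
    center, radius, color = SPHERE
    t = hit_sphere(origin, direction, center, radius)
    if t is None:
        return BACKGROUND                       # ray shoots into "infinity"
    point = add(origin, scale(direction, t))    # closest intersection
    normal = normalize(sub(point, center))
    offset = add(point, scale(normal, 1e-4))    # nudge off the surface
    to_light = normalize(sub(LIGHT, point))
    # shadow ray: the point is lit only if nothing blocks the path to the light
    lit = hit_sphere(offset, to_light, center, radius) is None
    diffuse = max(dot(normal, to_light), 0.0) if lit else 0.0
    result = scale(color, diffuse)              # simple illumination model
    if depth > 0:                               # reflected ray, recursively
        refl = normalize(sub(direction, scale(normal, 2.0 * dot(direction, normal))))
        result = add(result, scale(trace(offset, refl, depth - 1), 0.5))
    return result

print(trace((0, 0, 0), (0, 0, -1)))  # hits the sphere head-on: reddish
print(trace((0, 0, 0), (0, 1, 0)))   # misses everything: background
```

The intersection test really is the bottleneck: it runs once per ray, and every shadow and reflection ray multiplies the ray count.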
Ray tracing: pros and cons
• Pros:
  – Follows (approximately) the physics of light transport
  – High level of visual realism – can model both light-object interactions and inter-surface reflections and refractions
• Cons:
  – Expensive: intersection tests and the levels of recursion (rays generated)
  – Inadequate for modeling non-reflective (dull-looking) objects (why is that?)
Which is the most important ray?
[Figure: eye, pixel on the image plane, reflected ray, shadow ray, refracted ray]
The z-buffer algorithm
• Per-polygon operations, vs. per-pixel for ray tracing
• Simple, and often accelerated with hardware
• Works regardless of the order in which the polygons are processed – no need to sort them back to front
• A visibility algorithm, not designed to compute colors
• OpenGL implements this fundamental algorithm:
  – glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH); …
  – glEnable(GL_DEPTH_TEST); …
  – glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); …
The algorithm

for each polygon in the scene
    project its vertices onto the viewing (image) plane
    for each pixel inside the polygon formed on the viewing plane
        determine the point on the polygon corresponding to this pixel
        get pixel color according to some illumination model
        get depth value for this pixel (distance from point to plane)
        if depth value < stored depth value for the pixel
            update pixel color in frame buffer
            update depth value in depth buffer (z-buffer)
        end if
    end for
end for

Question: What does the depth buffer store after running the z-buffer algorithm on the scene? Does the polygon processing order change the depth buffer? The image?
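The nested loop above can be sketched with axis-aligned rectangles standing in for projected polygons of constant depth (the tiny 4×4 screen and the `render` helper are made up for illustration); it also demonstrates the order-independence noted earlier:

```python
def render(polygons, width=4, height=4, far=float("inf")):
    """Rasterize (x0, y0, x1, y1, z, color) rectangles with a z-buffer."""
    depth = [[far] * width for _ in range(height)]   # z-buffer, cleared to "far"
    color = [[None] * width for _ in range(height)]  # frame (color) buffer
    for (x0, y0, x1, y1, z, col) in polygons:
        for y in range(y0, y1):                      # each covered pixel
            for x in range(x0, x1):
                if z < depth[y][x]:                  # closer than stored depth?
                    depth[y][x] = z                  # update depth buffer
                    color[y][x] = col                # update frame buffer
    return color, depth

near_red = (1, 1, 3, 3, 1.0, "red")    # small square, close to the viewer
far_blue = (0, 0, 4, 4, 5.0, "blue")   # full-screen square, farther away

# Processing order does not matter: the near red square wins where it overlaps.
c1, d1 = render([far_blue, near_red])
c2, d2 = render([near_red, far_blue])
assert c1 == c2 and d1 == d2
print(c1[1][1], d1[1][1])  # -> red 1.0
print(c1[0][0], d1[0][0])  # -> blue 5.0
```

After the run, the depth buffer holds, per pixel, the depth of the nearest polygon seen there (or the "far" clear value where nothing was drawn), which is why neither the image nor the final depth values depend on polygon order.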