Computer Graphics: Important Questions (Part 2)


1) Explain polygon rendering methods.
Scan-line algorithms can be used. A polygon surface can be rendered in one of two ways: a) each polygon can be rendered with a single intensity, or b) the intensity can be obtained at each point of the surface using an interpolation scheme.

The different polygon rendering methods are: a) constant-intensity shading (flat shading), b) Gouraud shading, and c) Phong shading (normal-vector interpolation shading).

Constant-intensity shading (flat shading) is a fast and simple method: a single intensity is calculated for each polygon, and all points of the polygon are displayed with the same intensity value. It can quickly display the general appearance of a curved surface. The method is accurate if: the object is a polyhedron and is not an approximation of an object with a curved surface; the light source is sufficiently far from the surface, so that N.L and the attenuation function are constant over the surface (N is the unit normal vector to the surface, L is the unit direction vector to the point light source from a position on the surface); and the viewing position is sufficiently far from the surface, so that V.R is constant over the surface (V and R are unit vectors in the viewing and specular-reflection directions).
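As a minimal sketch (not from the source; the helper names and the simple ambient-plus-Lambertian model are illustrative assumptions), flat shading computes one intensity for the whole polygon:

    import math

    def normalize(v):
        # Scale a 3D vector to unit length.
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def flat_shade(polygon_normal, light_dir, ka=0.1, kd=0.9):
        # One intensity for the entire polygon: I = ka + kd * max(N.L, 0).
        n = normalize(polygon_normal)
        l = normalize(light_dir)
        return ka + kd * max(dot(n, l), 0.0)

    # Every pixel of the polygon is displayed with this single value.
    intensity = flat_shade((0.0, 0.0, 1.0), (1.0, 1.0, 1.0))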

Gouraud shading eliminates the intensity discontinuities of constant-intensity shading. This intensity-interpolation scheme renders a polygon surface by linearly interpolating intensity values across the surface. Intensity values for each polygon are matched with the values of adjacent polygons along the common edges, thus eliminating the intensity discontinuities that can occur in flat shading.

Steps: 1) Determine the average unit normal vector at each polygon vertex. 2) Apply an illumination model to each vertex to calculate the vertex intensity. 3) Linearly interpolate the vertex intensities over the surface of the polygon.

Calculation: at each vertex position V, the unit vertex normal is obtained by averaging the surface normals of all polygons sharing that vertex:

N_V = (Σ_k N_k) / |Σ_k N_k|.

We then determine the intensity at the vertices from a lighting model. The intensity at a point 4 on one edge is linearly interpolated from the intensities at vertices 1 and 2; similarly, the intensity at point 5 is interpolated from vertices 2 and 3; and the intensity at an interior point along a scan line is interpolated between points 4 and 5.
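A sketch of this interpolation step under the labeling above (function and variable names are illustrative; it assumes the scan line actually crosses edges 1-2 and 2-3, i.e. neither edge is horizontal):

    def lerp(a, b, t):
        # Linear interpolation between two scalar values.
        return a + t * (b - a)

    def gouraud_scanline(y, v1, v2, v3, i1, i2, i3):
        # v1..v3 are (x, y) projected vertices; i1..i3 are vertex intensities.
        # Point 4 lies on edge 1-2 and point 5 on edge 2-3 at scan line y.
        t4 = (y - v1[1]) / (v2[1] - v1[1])
        t5 = (y - v2[1]) / (v3[1] - v2[1])
        x4, i4 = lerp(v1[0], v2[0], t4), lerp(i1, i2, t4)
        x5, i5 = lerp(v2[0], v3[0], t5), lerp(i2, i3, t5)
        # Interior intensities are interpolated between points 4 and 5.
        for x in range(int(x4), int(x5) + 1):
            t = (x - x4) / (x5 - x4) if x5 != x4 else 0.0
            yield x, lerp(i4, i5, t)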

Advantages: Gouraud shading can be combined with a hidden-surface algorithm to fill in the visible polygons along each scan line, and it removes the intensity discontinuities associated with the constant-shading model. Disadvantages:

Highlights on the surface are sometimes displayed with anomalous shapes, and the linear intensity interpolation can cause bright or dark intensity streaks, called Mach bands, to appear on the surface. These effects can be reduced by dividing the surface into a greater number of polygon faces or by using other methods, such as Phong shading, that require more calculations.

Phong shading (normal-vector interpolation shading) is a more accurate method: it interpolates normal vectors across the surface and then applies the illumination model to each surface point.

Steps: 1) Determine the average unit normal vector at each polygon vertex. 2) Linearly interpolate the vertex normals over the surface of the polygon. 3) Apply an illumination model along each scan line to calculate projected pixel intensities for the surface points.
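A per-pixel sketch (helper names and the basic Phong illumination model are illustrative assumptions): the linearly interpolated normal is renormalized, then the illumination model is evaluated at that individual surface point.

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def phong_pixel(n_interp, light_dir, view_dir, ka=0.1, kd=0.6, ks=0.3, ns=32):
        # Renormalize the interpolated normal before lighting.
        n, l, v = normalize(n_interp), normalize(light_dir), normalize(view_dir)
        nl = dot(n, l)
        # Specular reflection direction R = 2(N.L)N - L.
        r = tuple(2.0 * nl * nc - lc for nc, lc in zip(n, l))
        return ka + kd * max(nl, 0.0) + ks * max(dot(r, v), 0.0) ** ns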

Drawback: requires considerably more calculation.

2) Explain 3D object representation.
Representation schemes for solid objects are divided into two categories: 1. Boundary representations describe a 3D object as a set of surfaces that separate the object interior from the environment (e.g., polygon facets, spline patches). 2. Space-partitioning representations describe interior properties by partitioning the spatial region containing an object into a set of small, non-overlapping, contiguous solids (e.g., octree representation).
Polygon surfaces: the most commonly used boundary representation is a set of surface polygons. A polygon surface is specified with a set of vertex coordinates and attribute parameters. This information is placed into tables that are used for processing, displaying, and manipulating the objects in a scene. Polygon data tables: geometric tables contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces; attribute tables include parameters specifying the degree of transparency of the object and its surface reflectivity and texture characteristics.

A convenient organization for storing geometric data creates three lists:

Vertex table: contains the coordinate values of each vertex in the object. Edge table: contains pointers back into the vertex table to identify the vertices of each polygon edge. Polygon table: contains pointers back into the edge table to identify the edges of each polygon.

An alternative arrangement is to use just two tables, a vertex table and a polygon table, or only a polygon table. The edge table can also be expanded to include forward pointers into the polygon table so that common edges between polygons can be identified more rapidly. Error checking is easier when all three tables are used. A possible layout is sketched below.
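For instance (a hypothetical two-triangle object, with all names made up for illustration), the three tables can be organized as dictionaries whose entries point back into one another:

    # Vertex table: coordinate values of each vertex.
    vertices = {"V1": (0, 0, 0), "V2": (1, 0, 0), "V3": (1, 1, 0), "V4": (0, 1, 0)}

    # Edge table: pointers back into the vertex table.
    edges = {"E1": ("V1", "V2"), "E2": ("V2", "V3"), "E3": ("V3", "V1"),
             "E4": ("V3", "V4"), "E5": ("V4", "V1")}

    # Polygon table: pointers back into the edge table.
    polygons = {"S1": ("E1", "E2", "E3"), "S2": ("E3", "E4", "E5")}

    # The shared edge E3 appears in both S1 and S2; an expanded edge table
    # would add forward pointers such as {"E3": ["S1", "S2"]} into the
    # polygon table so common edges are identified more rapidly.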

Additional geometric information stored in the data tables includes the slope of each edge and the coordinate extents of each polygon. The equation of a plane surface can be expressed in the form

Ax + By + Cz + D = 0

where (x, y, z) is any point on the plane and the coefficients A, B, C, D are constants describing the spatial properties of the plane.

Values for A, B, C, D can be obtained by solving a set of three plane equations using the coordinate values of three noncollinear points in the plane. For this we select three successive polygon vertices (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) and solve the following set of simultaneous linear plane equations for the ratios A/D, B/D, C/D:

(A/D)x_k + (B/D)y_k + (C/D)z_k = -1, k = 1, 2, 3. The solution to this set of equations can be obtained using Cramer's rule.

Expanding the determinants gives

A = y1(z2 - z3) + y2(z3 - z1) + y3(z1 - z2)
B = z1(x2 - x3) + z2(x3 - x1) + z3(x1 - x2)
C = x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2)
D = -x1(y2 z3 - y3 z2) - x2(y3 z1 - y1 z3) - x3(y1 z2 - y2 z1).

The orientation of a plane surface in space can be described with the normal vector N to the plane, which has Cartesian components (A, B, C), where A, B, C are the plane coefficients. Plane equations are used to identify the position of spatial points relative to the plane surfaces of an object: for any point (x, y, z) not on a plane with parameters A, B, C, D, we have Ax + By + Cz + D ≠ 0 (negative if the point is inside the surface, positive if it is outside).
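A direct transcription of the expanded-determinant solution (the function name is illustrative):

    def plane_coefficients(p1, p2, p3):
        # A, B, C, D from three successive (noncollinear) polygon vertices.
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
        A = y1 * (z2 - z3) + y2 * (z3 - z1) + y3 * (z1 - z2)
        B = z1 * (x2 - x3) + z2 * (x3 - x1) + z3 * (x1 - x2)
        C = x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)
        D = -(x1 * (y2 * z3 - y3 * z2) + x2 * (y3 * z1 - y1 * z3)
              + x3 * (y1 * z2 - y2 * z1))
        return A, B, C, D

    # N = (A, B, C) is the surface normal; the sign of Ax + By + Cz + D
    # then classifies any other point as inside or outside the plane.
    A, B, C, D = plane_coefficients((0, 0, 0), (1, 0, 0), (0, 1, 0))  # z = 0 plane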

Polygon meshes: some graphics packages (for example, PHIGS) provide several polygon functions for modeling objects. A single plane surface can be specified with a function such as fillArea, but when object surfaces are to be tiled it is more convenient to specify the surface facets with a mesh function.

One type of polygon mesh is the triangle strip. This function produces n - 2 connected triangles, given the coordinates for n vertices. Another similar function is the quadrilateral mesh, which generates a mesh of (n - 1) by (m - 1) quadrilaterals, given the coordinates for an n by m array of vertices; for example, 20 vertices in a 4 by 5 array form a mesh of 12 quadrilaterals.

3) Explain three different visible-surface detection (hidden-surface elimination) algorithms.
Visible-surface detection identifies the visible parts of a scene from a chosen viewing position. Some algorithms require more memory storage, some more processing (execution) time, and some apply only to special types of objects. Deciding on a method for a particular application depends on the complexity of the scene, the type of objects, the available equipment, and whether the scene is static or animated. The algorithms are classified as object-space methods or image-space methods, depending on whether they deal with object definitions directly or with their projected images.
1. Object-space methods deal with object definitions directly: they compare objects and parts of objects to each other within the scene definition to determine which surfaces should be labeled as visible (e.g., line-display algorithms).
2. Image-space methods deal with projected images: visibility is decided point by point at each pixel position on the projection plane. Most visible-surface algorithms use image-space methods.
Most of the algorithms use two basic approaches to improve performance: 1. Sorting, which facilitates depth comparisons by ordering the surfaces according to their distance from the view plane. 2. Coherence, which takes advantage of regularity in a scene.

Different visible-surface detection algorithms: 1. Back-face detection. 2. Depth-buffer (z-buffer) method. 3. A-buffer method. 4. Scan-line method.

Back-face detection is a fast and simple object-space method that identifies the back faces of a polyhedron. A point (x, y, z) is inside a surface with plane parameters A, B, C, and D if

Ax + By + Cz + D < 0.

When an inside point is along the line of sight to the surface, the polygon must be a back face. The polygon is a back face if

V . N > 0,

where V is a vector in the viewing direction from the eye (camera) position and N is the normal vector to the polygon surface, with Cartesian components (A, B, C).

If the object descriptions have been converted to projection coordinates and the viewing direction is parallel to the viewing z_v axis, then V = (0, 0, V_z) and V . N = V_z C, so we need to consider only the sign of C, the z component of the normal vector N. In a right-handed viewing system with the viewing direction along the negative z_v axis, the polygon is a back face if C < 0; thus, by examining C we can identify all the back faces.
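A sketch of both tests (names are illustrative):

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def is_back_face(N, V):
        # General object-space test: back face if V . N > 0.
        return dot(V, N) > 0

    def is_back_face_projected(C):
        # After converting to projection coordinates with the viewing
        # direction along the negative z_v axis, only the sign of the
        # normal's z component C matters: back face if C < 0.
        return C < 0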

Depth-buffer method: a commonly used image-space approach that is easy to implement and can be implemented in normalized coordinates. It compares surface depths at each pixel position on the projection plane, and it is referred to as the z-buffer method since depth is usually measured along the z axis of the viewing system. It is usually applied to scenes of polygonal surfaces, for which depth values can be computed very quickly. If surface S1 has the smallest depth from the view plane at a pixel, S1 is visible at that point. Two buffer areas are required: a depth buffer, which stores a depth value for each (x, y) position, all positions initialized to the minimum depth (usually 0); and a refresh buffer, which stores the intensity value for each position, all positions initialized to the background intensity.
Algorithm:
1. Initialize the depth buffer and refresh buffer: depth(x, y) = 0, refresh(x, y) = I_backgnd.
2. For each position on each polygon surface, calculate the depth for each (x, y) position on the polygon. If z > depth(x, y), then set depth(x, y) = z and refresh(x, y) = I_surf(x, y), where I_backgnd is the value of the background intensity and I_surf(x, y) is the projected intensity value for the surface at pixel position (x, y).
3. After all surfaces have been processed, the depth buffer contains depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces.

Depth values for a surface position (x, y) are calculated from the plane equation for each surface:

z = (-Ax - By - D) / C.

The depth of the next position (x + 1, y) along the scan line is

z' = (-A(x + 1) - By - D) / C = z - A/C,

where A/C is constant for each surface, so successive depth values can be obtained with a single subtraction. The method requires no sorting. Drawback: it can find only one visible surface at each pixel position; it cannot accumulate intensity values for more than one surface.
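A compact sketch of the algorithm (the buffer sizes and the per-polygon pixel iterable are assumptions for illustration):

    WIDTH, HEIGHT, BACKGROUND = 640, 480, 0.0

    # Step 1: initialize the two buffers.
    depth = [[0.0] * WIDTH for _ in range(HEIGHT)]
    refresh = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

    def process_surface(surface_pixels):
        # Step 2: surface_pixels yields (x, y, z, intensity) for each
        # projected position of one polygon; larger z means nearer here,
        # matching the z > depth(x, y) test in the text.
        for x, y, z, intensity in surface_pixels:
            if z > depth[y][x]:
                depth[y][x] = z
                refresh[y][x] = intensity

    # Step 3: after all surfaces are processed, refresh holds the
    # intensities of the visible surfaces.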

A-buffer method: the depth-buffer method deals only with opaque surfaces and cannot accumulate intensity values for more than one surface per pixel. The A-buffer (antialiased, area-averaged, accumulation buffer) method is an extension of the ideas in the depth-buffer method in which more than one surface intensity can be taken into consideration at each pixel position. Each position in the A-buffer has two fields: a depth field, which stores a positive or negative real number, and an intensity field, which stores surface-intensity information or a pointer value.

If the depth field is positive, the number stored at that position is the depth of a single surface overlapping the pixel, and the intensity field stores the RGB components of the surface color. If the depth field is negative, multiple surfaces contribute to the pixel, and the intensity field stores a pointer to a linked list of surface data. The surface data includes: RGB intensity components, percent of area coverage, surface identifier, pointer to the next surface, opacity parameter (percent of transparency), and depth.
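A sketch of the two-field cell and linked surface list (field names are illustrative):

    from dataclasses import dataclass
    from typing import Optional, Union

    @dataclass
    class SurfaceData:            # one node of the linked surface list
        rgb: tuple                # RGB intensity components
        coverage: float           # percent of pixel-area coverage
        surface_id: int           # surface identifier
        opacity: float            # opacity parameter (percent transparency)
        depth: float
        next: Optional["SurfaceData"] = None   # pointer to next surface

    @dataclass
    class ABufferCell:
        # depth >= 0: single surface, intensity holds its RGB directly;
        # depth < 0: multiple surfaces, intensity points to a SurfaceData list.
        depth: float = 0.0
        intensity: Union[tuple, SurfaceData, None] = None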

Scan-line method: an image-space method that can deal with multiple surfaces. As each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible. Positions along each scan line are processed from left to right; depth calculations are made for each overlapping surface, and the intensity of the nearest position is entered into the refresh buffer. Tables are set up for the various surfaces:

1. Edge table: coordinate endpoints of each line, the slope of each line, and pointers into the polygon table to identify the surfaces bounded by each line. 2. Polygon table: coefficients of the plane equation for each surface, intensity information for the surfaces, and pointers into the edge table.

To facilitate the search for surfaces crossing a given scan line, we can set up an active list of edges from the information in the edge table. This list contains only the edges that cross the current scan line, sorted in order of increasing x. We also define a flag for each surface that is set on or off to indicate whether a position along a scan line is inside or outside the surface. Scan lines are processed from left to right.

Example (active lists for successive scan lines crossing two surfaces S1 and S2):
For scan line 1, the active list contains edges AB, BC, EH, and FG. Between AB and BC, only the flag for surface S1 is on, so no depth calculations are necessary and the intensity for S1 is entered into the refresh buffer. Similarly, between EH and FG, only the flag for S2 is on. No other position along scan line 1 intersects a surface, so the intensity values in the other areas are set to the background intensity. For scan line 2, the active list contains edges AD, EH, BC, and FG. Between AD and EH, only the flag for S1 is on; between EH and BC, the flags for both surfaces are on, so a depth calculation is needed, using the plane coefficients. If depth(S1) < depth(S2), intensities for S1 are loaded into the refresh buffer until BC; after that, the intensity of surface S2 is stored until FG. The method takes advantage of coherence in passing from one scan line to the next: scan line 3 has the same active list as scan line 2, so it is unnecessary to repeat the depth calculations between EH and BC.

Any number of overlapping polygon surfaces can be processed with this scan-line method. Flags for the surfaces are set to indicate whether a position is inside or outside, and depth calculations are performed when surfaces overlap. When these coherence methods are used, we need to be careful to keep track of which surface section is visible on each scan line. This works only if surfaces do not cut through or otherwise cyclically overlap each other; if any kind of cyclic overlap is present in a scene, we can divide the surfaces to eliminate the overlaps.
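A simplified per-scan-line sketch (the active-edge list and the surface_depth helper are assumed to come from the edge and polygon tables; it does not handle depth changes within a span or intersecting surfaces):

    def scanline_spans(active_edges, surface_depth, background):
        # active_edges: (x, surface_id) crossings for this scan line,
        # sorted by increasing x. surface_depth(s, x) evaluates the plane
        # equation of surface s; smaller values are nearer the view plane.
        spans, flags, prev_x = [], set(), None
        for x, surf in active_edges:
            if prev_x is not None:
                if len(flags) == 1:
                    spans.append((prev_x, x, next(iter(flags))))   # no depth test
                elif flags:
                    nearest = min(flags, key=lambda s: surface_depth(s, prev_x))
                    spans.append((prev_x, x, nearest))
                else:
                    spans.append((prev_x, x, background))
            flags ^= {surf}        # toggle the surface flag at each of its edges
            prev_x = x
        return spans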

4) What are fractals? Explain fractal classification and self-squaring fractals.

Fractal-geometry methods describe natural objects realistically. They are commonly applied in many fields to describe and explain the features of natural phenomena, and they are used to generate displays of natural objects and visualizations of various mathematical and physical systems.

A fractal object has two basic characteristics:

1. Infinite detail at every point: if we zoom in, we see more detail of the object. 2. Self-similarity between object parts and the overall features of the object.
Fractal dimension: the detail variation in a fractal object can be described with a number D, a measure of the roughness, or fragmentation, of the object. More jagged-looking objects have larger fractal dimensions. For a self-similar object divided into n subparts scaled by a factor s, the fractal dimension D can be obtained from

n s^D = 1.

Solving this expression for D, the fractal similarity dimension, we have

D = ln n / ln(1/s).

For a self-similar fractal constructed with different scaling factors for the different parts, the fractal similarity dimension is obtained from the implicit relationship

Σ_k (s_k)^D = 1,

where s_k is the scaling factor for subpart number k. With s = 1/2, the unit line segment is divided into two equal-length subparts; similarly, the square is divided into four equal-area subparts, and the cube is divided into eight.

Fractal generation procedures: a fractal object is generated by repeatedly applying a specified transformation function F to points within a region of space. If P0 = (x0, y0, z0) is a selected initial point, each iteration of the transformation generates the next position: P1 = F(P0), P2 = F(P1), P3 = F(P2), and so on. The transformation function can be applied to points, or to a set of primitives such as lines, curves, color areas, or surfaces.

Classifications:

1. Self-similar fractals. 2. Self-affine fractals. 3. Invariant fractals.

1. Self-similar fractals have parts that are scaled-down versions of the entire object: we construct the object subparts by applying the same scaling parameter s to the overall shape. Statistically self-similar fractals apply random variations to the scaled-down versions of the subparts; they are used to model trees, plants, etc.

2. Self-affine fractals have parts formed with different scaling parameters s_x, s_y, s_z in the different coordinate directions. Statistically self-affine fractals apply random variations to the scaled-down versions; they are used to model water, clouds, etc.

3. Invariant fractals are formed with nonlinear transformations: 1. self-squaring fractals, formed with squaring functions in complex space; 2. self-inverse fractals, formed with inversion procedures.

Geometric construction of self-similar fractals: we start with an initiator, a given geometric shape, and replace subparts of the initiator with a pattern called the generator. Under repeated application of a transformation, a position can behave in one of three ways: the transformed position can diverge to infinity; it can converge to a finite limit point, called an attractor; or it can remain on the boundary of some object.

For some functions, the boundary between those points that move toward infinity and those that tend toward a finite limit is a fractal. The boundary of the fractal object is called the Julia set.

We can locate the fractal boundary by testing the behavior of selected positions: if a selected position either diverges to infinity or converges to an attractor point, we try another nearby position, and we repeat this process until we eventually locate a position on the fractal boundary.

If we use an initiator and a generator, we can construct the snowflake (Koch curve). With scaling factor 1/3 and four subparts per segment, D = ln 4 / ln 3 = 1.2619. The length of the curve increases by a factor of 4/3 at each step while each segment is reduced to 1/3 of its length, so the total length goes to infinity and we get infinite detail.
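A recursive sketch of the construction (names are illustrative; the bump is placed on one fixed side of each segment):

    import math

    def koch_curve(p1, p2, depth):
        # Replace each initiator segment by the four-segment generator;
        # each new segment is 1/3 the length of its parent.
        if depth == 0:
            return [p1, p2]
        (x1, y1), (x2, y2) = p1, p2
        dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
        a = (x1 + dx, y1 + dy)                  # one-third point
        b = (x1 + 2 * dx, y1 + 2 * dy)          # two-thirds point
        ang = math.atan2(dy, dx) - math.pi / 3  # apex of the 60-degree bump
        c = (a[0] + math.hypot(dx, dy) * math.cos(ang),
             a[1] + math.hypot(dx, dy) * math.sin(ang))
        pts = []
        for s, e in ((p1, a), (a, c), (c, b), (b, p2)):
            pts.extend(koch_curve(s, e, depth - 1)[:-1])
        return pts + [p2]

    # Each level multiplies the total length by 4/3: 4^depth segments of
    # length (1/3)^depth, so D = ln 4 / ln 3.
    polyline = koch_curve((0.0, 0.0), (1.0, 0.0), depth=4)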

Affine fractal construction methods: natural terrain and similar objects can be modeled with fractional Brownian motion. This is an extension of standard Brownian motion, a form of "random walk" that describes the erratic, zigzag movement of particles in a gas or other fluid.

Starting from a given position, we generate a straight-line segment in a random direction and with a random length. We then move to the endpoint of the first line segment and repeat the process. This procedure is repeated for any number of line segments, and we can calculate the statistical properties of the line path over any time interval t. Applications include animation frames, TV animation, terrain, and other natural phenomena.

Random midpoint-displacement method:

Fractional Brownian-motion calculations are time-consuming, because the elevation coordinates of the terrain above a ground plane are calculated with Fourier series, which are sums of cosine and sine terms. Fast Fourier transform (FFT) methods are typically used, but it is still a slow process to generate fractal-mountain scenes.

Starting with a straight-line segment, we calculate a displaced y value for the mid position of the line as the average of the endpoint y values plus a random offset r:

y_mid = (y(a) + y(b)) / 2 + r.
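A one-dimensional sketch (the Gaussian offset whose spread shrinks with segment length is an assumption in the spirit of fractional Brownian motion; names are illustrative):

    import random

    def midpoint_displace(a, b, ya, yb, roughness=1.0, min_step=1.0 / 64):
        # Displace the midpoint by the endpoint average plus a random
        # offset, then recurse on the two halves.
        if b - a <= min_step:
            return {a: ya, b: yb}
        mid = (a + b) / 2.0
        r = random.gauss(0.0, (b - a) ** roughness)   # offset shrinks with length
        ymid = (ya + yb) / 2.0 + r
        heights = midpoint_displace(a, mid, ya, ymid, roughness, min_step)
        heights.update(midpoint_displace(mid, b, ymid, yb, roughness, min_step))
        return heights

    # Height profile of a fractal terrain silhouette over [0, 1].
    profile = midpoint_displace(0.0, 1.0, 0.0, 0.0)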

Self-squaring fractals: another method for generating fractals is to repeatedly apply a transformation function to points in complex space. A complex number can be represented as z = x + iy, where x and y are real numbers and i^2 = -1. A complex squaring function f(z) is one that involves the calculation of z^2, and we can use some self-squaring functions to generate fractal shapes. Depending on the initial position selected for the iteration, repeated application of a self-squaring function will produce one of three possible results: the transformed position can diverge to infinity; it can converge to a finite limit point, called an attractor; or it can remain on the boundary of some object.

Iteration of the squaring transformation generates the fractal shape. A function rich in fractals is the squaring transformation

z' = f(z) = λ z (1 - z),

where λ is assigned any constant complex value. For this function, we can use the inverse method to locate the fractal curve; the inverse transformation is the quadratic formula

z = (1 ± sqrt(1 - 4z'/λ)) / 2.

Using complex arithmetic operations, we solve this equation for the real and imaginary parts of z by first computing the discriminant

discr = 1 - 4z'/λ

and then evaluating its complex square root to obtain the two preimages z = (1 ± sqrt(discr)) / 2.
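A sketch of the forward (escape-time) approach to the same function (the λ value, grid, and iteration bounds are illustrative):

    def escapes(z0, lam, max_iter=100, bound=1.0e6):
        # Iterate the self-squaring transformation z' = lam * z * (1 - z).
        z = z0
        for n in range(max_iter):
            z = lam * z * (1 - z)
            if abs(z) > bound:
                return n        # position diverges toward infinity
        return None             # converges to an attractor or sits on the boundary

    # Points where the behavior changes between diverging and converging
    # neighbors lie on the fractal boundary (the Julia set).
    lam = complex(3.0, 0.0)
    samples = {(x, y): escapes(complex(x / 50.0, y / 50.0), lam)
               for x in range(-50, 100) for y in range(-75, 75)}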

Self-inverse fractals: geometric inversion transformations can be used to create fractal shapes. Again, we start with an initial set of points, and we repeatedly apply nonlinear inversion operations to transform the initial points into a fractal. As an example, consider a two-dimensional inversion transformation with respect to a circle with radius r and center at position P0 = (x0, y0). Any point P outside the circle will be inverted to a position P' inside the circle, with P, P', and P0 collinear, such that the two distances satisfy

|P - P0| |P' - P0| = r^2.

5) Explain the different types of parallel projection in 3D viewing.
Once the world-coordinate descriptions of the objects in a scene are converted to viewing coordinates, we can project the 3D objects onto the 2D view plane. There are two types of projection methods:

Perspective projection (the distance to the center of projection is finite).

Parallel projection (the distance to the center of projection is infinite).

Parallel projection preserves relative proportions, is used in drafting to produce scale drawings of 3D objects, and produces accurate views of the various sides of an object, but it does not give a realistic representation of the appearance of a 3D object. A parallel projection is specified with a projection vector that defines the direction of the projection lines. When the projection is perpendicular to the view plane, we have an orthographic parallel projection; otherwise, an oblique parallel projection. Orthographic projection is used to produce the front, side, and top views of an object: projections onto a plane perpendicular to a principal axis give the front and side elevations and the top (plan) view. Axonometric orthographic projections display more than one face of an object.

If the view plane is placed at position z_vp along the z_v axis, then any point (x, y, z) in viewing coordinates is transformed to projection coordinates as x_p = x, y_p = y, while the z-coordinate value is preserved for depth information. Isometric projection: the direction of projection makes equal angles with each principal axis.

An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane. Cavalier projection: the direction of projection makes a 45-degree angle with the projection plane. Cabinet projection: the direction of projection makes a 63.4-degree angle with the projection plane.
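A sketch of the standard oblique-projection equations onto the z = 0 view plane, x_p = x + z L1 cos(φ) and y_p = y + z L1 sin(φ) with L1 = 1/tan(α) (φ, the angle of the projected z axis in the view plane, is a free parameter; names are illustrative):

    import math

    def oblique_project(x, y, z, alpha_deg, phi_deg=45.0):
        # alpha = 45   -> L1 = 1   (cavalier projection)
        # alpha = 63.4 -> L1 = 1/2 (cabinet projection)
        L1 = 1.0 / math.tan(math.radians(alpha_deg))
        phi = math.radians(phi_deg)
        return x + z * L1 * math.cos(phi), y + z * L1 * math.sin(phi)

    cavalier = oblique_project(1.0, 1.0, 1.0, alpha_deg=45.0)
    cabinet = oblique_project(1.0, 1.0, 1.0, alpha_deg=63.4)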

Perspective projection produces realistic views but does not preserve relative proportions. Points are transformed along projection lines that meet at the projection reference point. Suppose we set the projection reference point at position z_prp along the z_v axis and place the view plane at z_vp.

The equations describing coordinate positions along this perspective-projection line have the parametric form

x' = x - x u
y' = y - y u
z' = z - (z - z_prp) u,

where u takes values from 0 to 1 and (x', y', z') is any point along the projection line. On the view plane, z' = z_vp, and we can solve the z' equation for parameter u at this position along the projection line:

u = (z_vp - z) / (z_prp - z).

Substituting this value of u into the equations for x' and y', we obtain the perspective-transformation equations

x_p = x (z_prp - z_vp) / (z_prp - z) = x d_p / (z_prp - z)
y_p = y (z_prp - z_vp) / (z_prp - z) = y d_p / (z_prp - z),

where d_p = z_prp - z_vp is the distance of the view plane from the projection reference point.
One-point perspective: one principal axis is cut by the projection plane, giving one axis vanishing point.
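A direct transcription of these equations (names and sample values are illustrative):

    def perspective_project(x, y, z, z_prp, z_vp):
        # Solve the parametric line at z' = z_vp, then substitute:
        #   u = (z_vp - z) / (z_prp - z),  x_p = x(1 - u),  y_p = y(1 - u),
        # which equals x * d_p / (z_prp - z) with d_p = z_prp - z_vp.
        u = (z_vp - z) / (z_prp - z)
        return x * (1.0 - u), y * (1.0 - u)

    xp, yp = perspective_project(2.0, 1.0, -5.0, z_prp=10.0, z_vp=0.0)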

Two-point perspective: two principal axes are cut by the projection plane, giving two axis vanishing points.

Three-point perspective: three principal axes are cut by the projection plane, giving three axis vanishing points.

Perspective vs. parallel projection. Perspective projection: size varies inversely with distance, so it looks realistic; distances and angles are not (in general) preserved; parallel lines do not (in general) remain parallel. Parallel projection: good for exact measurements; parallel lines remain parallel; angles are not (in general) preserved; less realistic looking.

In a perspective projection, any set of parallel lines in the object that are not parallel to the projection plane are projected onto converging lines.

6) Explain various ray-tracing methods.
Instead of merely looking for the visible surface at each pixel, we continue to bounce the ray around the scene, collecting intensity contributions. Ray tracing is a simple and powerful rendering technique that provides visible-surface detection, shadow effects, transparency, and multiple-light-source illumination. It is highly realistic, but it requires considerable computation time.

Basic ray-tracing algorithm: set up a coordinate system with pixel positions designated in the xy plane. From the projection reference point, we determine a ray path that passes through the center of each screen pixel position; the illumination effects accumulated along that ray path are then assigned to the pixel. Consider one ray per pixel. For each pixel ray, test each surface in the scene to determine whether it is intersected by the ray. If a surface is intersected, calculate the distance from the pixel to the surface intersection point; the smallest calculated intersection distance identifies the visible surface for that pixel. We then reflect the ray off the visible surface along the specular reflection path; if the surface is transparent, we also send a ray through the surface in the refraction direction. Refraction and reflection rays are referred to as secondary rays. The procedure is repeated for each secondary ray, and each successively intersected surface is added to a binary ray-tracing tree. The intensity of the pixel is determined by accumulating the intensity contributions starting at the bottom of the ray-tracing tree; the pixel intensity is the sum of the attenuated intensities at the root node of the ray tree. If no surface is intersected by a pixel ray, the tree is empty and the pixel is assigned the background intensity; if the ray intersects a nonreflecting light source, the pixel can be assigned the intensity of the source.
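A minimal sketch of this loop for a scene of spheres (the headlight-style local term, the scene layout, and all names are simplifying assumptions; refraction is omitted):

    import math

    def sphere_hit(origin, d, center, radius):
        # Smallest positive t with |origin + t*d - center| = radius
        # (d is assumed to be a unit vector).
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(dc * o for dc, o in zip(d, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 1e-6 else None

    def trace(origin, d, spheres, depth=0, max_depth=3):
        # Smallest intersection distance -> visible surface; reflected
        # secondary rays accumulate attenuated contributions up the tree.
        hits = [(sphere_hit(origin, d, s["center"], s["radius"]), s) for s in spheres]
        hits = [(t, s) for t, s in hits if t is not None]
        if not hits:
            return 0.05                     # background intensity
        t, s = min(hits, key=lambda h: h[0])
        p = [o + t * dc for o, dc in zip(origin, d)]
        n = [(pc - cc) / s["radius"] for pc, cc in zip(p, s["center"])]
        dn = sum(dc * nc for dc, nc in zip(d, n))
        local = s["diffuse"] * max(-dn, 0.0)    # light at the eye, for brevity
        if depth == max_depth or s["reflect"] == 0.0:
            return local
        r = [dc - 2.0 * dn * nc for dc, nc in zip(d, n)]   # reflection ray
        return local + s["reflect"] * trace(p, r, spheres, depth + 1)

    scene = [{"center": (0.0, 0.0, -3.0), "radius": 1.0, "diffuse": 0.8, "reflect": 0.3}]
    pixel_intensity = trace((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), scene)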

Antialiased ray tracing: two techniques are used, supersampling and adaptive sampling; in both, the pixel is treated as a finite square area. In supersampling, multiple evenly spaced rays are generated over the pixel area; in adaptive sampling, unevenly spaced rays are used in some regions of the pixel area. The intensities of the pixel rays are averaged to produce the overall pixel intensity.

For example, one ray can be generated through each corner of the pixel. We divide the pixel area into subpixels and repeat the process if: 1. the intensities of the four rays are not approximately equal, or 2. some small object lies between the four rays.

For instance, the pixel can be divided into nine subpixels using 16 rays, one at each subpixel corner. Adaptive sampling is then used to further subdivide those subpixels that do not have nearly equal intensity rays or that subtend some small object. The subdivision process is continued until each subpixel has approximately equal intensity rays, or until an upper bound (say, 256) is reached for the number of rays per pixel.
Distributed ray tracing randomly distributes rays according to the various parameters in an illumination model. Jittering: a better approximation of the light distribution over a pixel area is obtained by using a technique called jittering on a regular subpixel grid. This is usually done by initially dividing the pixel area (a unit square) into 16 subareas and generating a random jitter position in each subarea. The random ray positions are obtained by jittering the center coordinates of each subarea by small amounts, δx and δy, where both δx and δy are assigned values in the interval (-0.5, 0.5). We then choose the ray position in a cell with center coordinates (x, y) as the jitter position (x + δx, y + δy).
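A sketch of jittered ray positions over one pixel (scaling the (-0.5, 0.5) offsets to subarea units, so each ray stays inside its own cell, is an assumption; names are illustrative):

    import random

    def jittered_positions(n=4):
        # Divide the unit-square pixel into n x n subareas (n = 4 gives
        # the 16 subareas described above) and jitter each subarea center.
        cell = 1.0 / n
        rays = []
        for i in range(n):
            for j in range(n):
                cx, cy = (i + 0.5) * cell, (j + 0.5) * cell
                dx = random.uniform(-0.5, 0.5) * cell
                dy = random.uniform(-0.5, 0.5) * cell
                rays.append((cx + dx, cy + dy))
        return rays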

Each subpixel ray is then processed through the scene to determine the intensity contribution for that ray. The ray intensities are averaged to produce the pixel intensity, and if the subpixel intensities vary too much, the pixel is further subdivided.