Handbook of practical camera calibration methods and models
Optical Metrology Centre

CHAPTER 3

IMAGE SENSOR AND IMAGE PROCESSING

Executive summary

The formation of an image on the surface of the sensor by the lens system is the beginning of a chain of interconnected processes, most of which produce error effects that are either random or systematic. For the highest accuracy measurement it is important to know how to minimise, model, or accept these errors.

This chapter provides a description of these errors and their likely magnitude in a logical progression from the lens to digital image data.

3.1 Introduction

Chapter 2 discussed the geometric and radiometric characteristics of a camera where only the lens and an ideal image plane were considered. The interaction of light with the sensor was not defined, apart from the location of the sensor with respect to the lens system. This section of the report discusses the conversion of light into an array of numbers representing the image. It is not intended to provide comprehensive information on all aspects, but rather to concentrate on features that have a bearing on the camera calibration process.

3.2 Overview of sensor

Until recently there were only two types of commercially available solid state image sensors, the CCD and the CID, the CCD being by far the most common (there is probably only one manufacturer of the CID sensor). Now another type of sensor has become commercially viable, the CMOS sensor. The development of the CCD started in 1969 at Bell Labs in the U.S.A., while CID development started in the early 1970s. Of interest here is the way in which the sensor type dictates features that may in turn affect the geometric or radiometric performance of the sensor. For instance, many CCD pixels do not have contiguous active areas, because of the requirement for buried sections of the sensor to transfer charge from the photo-sensor to the amplifier.

One of the differences between the CCD, CMOS, and CID sensor types is the method used to transfer the charge stored at a given pixel site. The CID sensor allows the photo-generated charge at individual photo-sites to be output directly and, if required, non-destructively. The CCD transfers charge, by the manipulation of potential wells, from the generation sites to a position where it is output to an amplifier. The CMOS sensor, like the CID, allows random access in a manner very similar to accessing computer memory. Both CID and CCD sensors exhibit a linear response to light, while the CMOS sensor has a logarithmic response. A significant advantage of the CMOS sensor is the ability of the chip manufacturer to combine other electronics on-chip, such as the A-D converter. This results in less noise (one of the limiting factors in the development of CMOS sensors) and a cheaper device. However, it is only very recently that issues such as pixel response uniformity have been dealt with well enough to allow the CMOS sensor to compete on closer terms with the CCD based sensor.

The CCD sensor is still the most common, and so its operation is briefly discussed. There are three common modes of operation of these sensors: interline transfer, frame transfer, and progressive scan. The interline transfer sensor (Figure 3.1) has columns of photo-site elements that are adjacent to a shielded shift register. Integration for the next field takes place as the previous one is clocked out. Interline transfer should not be confused with interlacing, which results from the order in which data are clocked out of the sensor.

Figure 3.1. Interline transfer scheme

The consequence of this design is that the active pixel size is smaller than the pixel spacing, resulting in lower sensitivity than might otherwise have been obtained. The advantage of interline transfer is that the transfer time (to opaque storage) is short compared with the integration period. For example, in the Pulnix TM6CN camera the transfer time is 64.0 µs and the accumulation time is 40.0 ms.

Figure 3.2. A typical interline transfer camera (Pulnix TM6CN)


Sensor format: 1/2 inch interline transfer CCD
Pixels: 752 (H) x 582 (V)
Cell size: 8.6 (H) x 8.3 (V) microns
Sensing area: 6.41 (H) x 4.89 (V) mm
Chip size: 7.95 (H) x 6.45 (V) mm
Dynamic range: 67 dB
Timing: 625 lines, 2:1 interlace (CCIR)
Clock: 28.375 MHz
Pixel clock: 14.1875 MHz
Horizontal frequency: 15.625 kHz
Vertical frequency: 50.0 Hz
Video output: 1.0 V p-p composite video, 75 Ω
S/N ratio: 50 dB min.
Shutter speed: 1/60 to 1/10000 sec
Minimum illumination: 1.0 lux (F = 1.4) without IR cut filter
AGC: on = 16 dB standard, off = 32 dB max.
Gamma: 0.45 or 1
Dimensions: 45 mm (W) x 39 mm (H) x 75 mm (L)

Table 3.1. Some Pulnix TM6CN camera characteristics.

The frame-transfer sensor moves the entire image from the sensing area to a storage area, where it is then read out (Figure 3.3).

Figure 3.3. Frame transfer sensor configuration

The sensor is sometimes blanked (perhaps with a mechanical shutter) during the transfer time to avoid continued integration, which would result in smearing. In other sensors the image is not blanked but is transferred very quickly, in the hope that smearing will not be significant. Another feature of these sensors is that they often have a 100% fill factor because, unlike the interline transfer scheme, the transfer electronics are beneath the pixels. In general this type of sensor is not as resistant to blooming as interline transfer sensors, due to the poor charge-draining possibilities of the design. In addition, some sensors of this type have been known to exhibit smearing for bright spots, usually seen as asymmetry in the imaged blobs. The progressive scan sensor uses the interline scheme but, instead of having two fields that are exposed at differing times, exposes the whole image at one instant. Some of the electronics are simplified, and these sensors are gaining ground over conventional sensors. Currently, progressive scan cameras are more expensive than


other sensors and are limited to lower resolutions, such as those required for standard CCIR or NTSC cameras, although some sensors are now appearing with a resolution of 1280 x 1024. All methods of image collection may transfer data as an interlaced analogue signal for compatibility with older systems, interlacing being the method originally used to reduce monitor flicker. In the classic case each image frame consists of two temporally separated fields: the odd lines, followed by the even lines. The latest progressive scan Sony XC8500 camera offers three modes: interlaced output with both fields captured at the same time, a field mode where different fields are output at 50 Hz, and a full frame mode where the whole frame is output.

3.3 Error sources in the image collection process

The table below lists most of the error sources involved in the image data production process. The geometric effect of each error is given, together with advice and a measure of the importance of the effect in the use of the camera. Where a camera does not have an external analogue signal, the frame grabber and signal transmission error sources can be ignored. The major error effects are then discussed in detail in a series of notes linked to the table.

Each entry gives the feature, its geometric influence or effect, advice, and an importance rating from * (minor) to ***** (major), with references to the detailed notes where applicable.

BETWEEN LENS AND SENSOR

Diffuser
Effect: Matches the spatial frequencies of the image to the sensor imaging capabilities, to avoid aliasing effects.
Advice: Not likely to have a large effect if fitted; care should be exercised if one is not fitted and a lens is used that is capable of producing spatial frequencies beyond the Nyquist limit, causing aliasing.

Filter
Effect: On some cameras, used to modify the spectral response of silicon to the photopic response of the human eye.
Advice: Care must be taken with IR radiation if no filter is fitted, due to diffusion of photon-generated charges causing smearing. If a filter is fitted then care must be taken if using red laser light (for instance at 670 nm), as much of the power of the signal may be attenuated.
Importance: * (Note 14)

SENSOR SURFACE

Photosensor size < pixel pitch (fill ratio)
Effect: Sampling of the image.
Advice: Subpixel camera shift can increase nominal resolution if applied correctly.
Importance: * (Note 21)

Pixel pitch
Effect: Sampling of the image.
Advice: Do not use a lens with a resolving power capable of exceeding the Nyquist frequency limit of the sensor.
Importance: * (Note 22)

Microlenses
Effect: More light gathering, better signal.
Advice: Consider whether sampling techniques can still be used.

Sensor unflatness
Effect: Large sensors may exhibit some bowing at the edges and cause problems with wide angle lenses.
Advice: Model the distortion or absorb it into other parameters.
Importance: ** (Note 10)

Sensor alignment with respect to the lens
Effect: Shear in the image.
Advice: Use additional parameters.
Importance: **

Warm up of camera
Effect: Image sensor expansion, increase in noise.
Advice: Allow the camera to warm up before use.
Importance: * (Note 15)

SENSOR LIGHT DETECTION

Frame transfer cameras
Effect: Smearing of the image due to exposed cells during transfer.
Advice: Care must be taken with bright targets.
Importance: **

Interlaced imagery
Effect: The two fields are exposed at differing times.
Advice: Take care with moving objects.
Importance: *** (Note 4)

Blooming
Effect: Overflow of charge from pixel sites into neighbouring sites, causing shifts in features.
Advice: Assess the manufacturer's specification and test with bright light sources; some cameras are better than others.
Importance: **

Charge transfer efficiency
Effect: Fall-off of signal as data is clocked off the chip.
Advice: More noticeable for large sensor sizes.
Importance: * (Note 16)

Photo response non-uniformity
Effect: Pixel intensity values are not uniform and can cause image location shifts.
Advice: Calibrate the camera and correct images before use.
Importance: *

Dark current
Effect: Thermally generated noise in the image; usually subtracted electronically, but some effects may remain.
Advice: Small effect for 8 bit A-D converters, but more significant for A-D converters with more bits.
Importance: *

Dark signal non-uniformity
Effect: Individual photosites have slightly varying dark signal characteristics.
Advice: Small effect.

Charge injection noise
Effect: Negative transient voltages cause additional noise in the signal.
Advice: Small effect.
Importance: *

Sensor defects
Effect: Large sensors may have a number of physical defects.
Advice: The defects need to be known and appropriate measures taken to compensate for their presence.
Importance: **

SENSOR ELECTRONICS

Amplifier noise
Effect: Noise in the image data that depends on the data clock-out frequency.
Advice: Use as slow a clock as possible to avoid this noise source.
Importance: ** (Note 17)

Gain error
Effect: Amplifiers do not always have a uniform gain, causing subpixel errors for high accuracy algorithms.
Advice: Assess the linearity of the device using a variable grey density test chart.

Non-linearity in amplifier
Effect: The amplifier may not have a linear gain, causing subpixel errors.
Advice: Small effect.

Power supply breakthrough
Effect: Noise in the power supply can get into the image.
Advice: Small effect, although it can sometimes be detected in some camera and frame grabber combinations.

Automatic Gain Control
Effect: Gain of the signal is automatically altered to take account of lighting. This can cause variations in the available signal to noise ratio of important sections of the image.
Advice: Switch off AGC.
Importance: ***

Gamma correction
Effect: Output signal is modified to make images more acceptable when viewed on a monitor; this causes non-linear intensity changes and therefore poor image location.
Advice: Medium effect; switch to 1.0 rather than 0.45.
Importance: *** (Note 5)

FRAME GRABBER CABLING

Termination of cables
Effect: Ringing in the image.
Advice: Terminate correctly.
Importance: ** (Note 11)

Impedance of cables
Effect: Ringing in the image.
Advice: Use correct impedance cables, especially if the cables are long.
Importance: **

FRAME GRABBER ELECTRONICS

Frame grabber set up
Effect: x or y offsets.
Advice: Take care when swapping frame grabbers; check the image location with respect to the camera.
Importance: ** (Note 12)

Low pass filter
Effect: Used to smooth the input signal, but can cause some geometric distortions.
Advice: Ensure filtering is turned off.
Importance: *** (Note 6)

A-D conversion
Effect: The A-D converter will introduce quantisation, gain, linearity, offset, and missing-code errors.
Advice: Quantisation of small signals affects accuracy of location; obtain the highest signal to noise ratio.
Importance: * (Note 18)

Line-jitter
Effect: Random shifts in x image locations.
Advice: Use a frame grabber with a low jitter PLL or a pixel clock.
Importance: * (Note 19)

Frame grabber A-D offset
Effect: Subpixel image shifts.
Advice: Set the signal offset as close to zero as possible.
Importance: ** (Note 13)

Warm up of frame grabber
Effect: Differences in x axis image size.
Advice: Allow the frame grabber to warm up before use (90 minutes).
Importance: ***** (Note 1)

Sampling remapping
Effect: Differences in the clock rates of camera and frame grabber cause a remapping of the pixels with an arbitrary x scaling.
Advice: Expect the x co-ordinates to be scaled differently from the physical pixel size due to remapping.
Importance: ***** (Note 2)

Fixed pattern noise
Effect: The frame grabber black level may not stabilise for a number of lines into the image.
Advice: Check the image with no lens and a wide source of light, such as might be obtained with a ground glass diffuser.
Importance: * (Note 20)

Phase pattern noise
Effect: The frame grabber may exhibit a fixed or moving phase pattern noise that can be distinguished in addition to the usual electronic noise.
Advice: Adds a systematic error that cannot be modelled.
Importance: *

DIGITAL CAMERA IMAGE COMPRESSION

Compression of images
Effect: JPEG imagery introduces block effects due to the lossy methodology.
Advice: Care must be taken when using digital filters, especially where high precision edges of features are required. Consider lossless compression or other compression schemes.
Importance: *

IMAGE STORAGE

Video tape storage
Effect: Video tapes add noise to the image and introduce timing errors.
Advice: Use the same video tape recorder for recording and playback for the best results.
Importance: *** (Note 7)

IMAGE PROCESSING

Threshold
Effect: Subpixel image shifts.
Advice: Set as close to the background as possible.
Importance: *** (Note 8)

LIGHTING EFFECTS

View point dependent location due to variations in surface reflectivity characteristics
Effect: Shifts in the apparent position of images.
Advice: Use controlled lighting or downgrade the accuracy estimation.
Importance: *** (Note 9)

Laser beam location
Effect: Shifts in position due to speckle.
Advice: Fundamental problem; downgrade the accuracy estimation.
Importance: **** (Note 3)

NOTES

Note 1. Frame grabber warm up (*****)

The change in temperature of the camera and frame-grabber combination has been shown to influence image acquisition (Wong et al., 1990). By allowing the camera and the frame-grabber to warm up at differing times, the effects attributable to each can be isolated and determined. A test field consisting of sixteen circular retro-reflective targets stuck onto a plane matt black glass slide was used. The targets were arranged to cover the entire field of view of the camera, as shown in Figures 3.4 and 3.5. The test field and camera were then firmly fixed onto an optical bench. The test plane was surrounded by black paper to remove the influence of any outside stray light, and a light source was placed behind the camera. In this way the retro-reflective targets produced an image with a high signal to noise ratio.

Figure 3.4. A test field configuration suitable for temperature effect investigations (target array viewed from 665 mm by a Pulnix camera with a 25 mm lens on an optical bench, connected to an EPIX framestore)


Figure 3.5. An image of the test field

Initially, an EPIX frame grabber and a Pulnix camera were switched on together and a series of images was collected over a period of time. The targets in each image were located by a centroid subpixel location algorithm. The differences between the target co-ordinates at the beginning of a 60 minute period and those at the end of the period are illustrated in Figure 3.6.

Figure 3.6. Vectors of warm-up of the frame-store in a 60 minute period
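The drift analysis just described reduces to comparing located target co-ordinates against those from the first frame. A minimal sketch follows, assuming a hypothetical locate_targets(image) helper that returns an (N, 2) array of subpixel target centroids in a fixed order; it illustrates the bookkeeping only, not the original software.

```python
import numpy as np

def warmup_drift(frames, locate_targets):
    """RMS x and y shift of each frame's targets relative to the first frame."""
    ref = locate_targets(frames[0])                     # (N, 2) reference co-ordinates
    drift = []
    for frame in frames[1:]:
        d = locate_targets(frame) - ref                 # per-target shift vectors
        drift.append(np.sqrt(np.mean(d ** 2, axis=0)))  # [RMS x, RMS y] over targets
    return np.array(drift)                              # one row per subsequent frame
```

Plotting the two columns against acquisition time reproduces a curve of the kind shown in Figure 3.7.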

A significant shift in the co-ordinates of all the targets was observed during the period when the frame grabber was warming up. This shift is predominantly in the x co-ordinate direction, and the co-ordinate variation became stable after approximately 60 minutes. The total RMS warm-up shift was as large as 4 pixels. The progression of this effect is illustrated in Figure 3.7.


Figure 3.7. The RMS x and y co-ordinate shifts of all targets over the period of warm-up of the framestore. The effect is due to differences in the clocks used by the camera and the frame grabber.

Recommendations

• Use a pixel-synchronous system based on the camera pixel clock, or a digital camera; this will avoid the problem completely
• If no pixel clock is available, allow the frame-grabber to warm up over a period of around 90 minutes
• Perform a test of the type illustrated in this section to assess the problem with the frame grabber being used
• Use a differential scaling term to allow for differences in the scale of x and y

Note 2. Image resampling (*****)

Frame grabbers vary considerably. Older frame-grabbers often operate at a sampling frequency of 10 MHz, with image storage of 512 x 512 pixels. It is feasible to use such frame grabbers with CCIR or NTSC cameras that have more pixels, but it should be understood how the frame grabber operates. The frequency required to sample the output at a rate compatible with the number of pixels would be 14.1875 or 14.3 MHz, as the horizontal and vertical frequencies of the video signal have remained unchanged over many years. With a 10 MHz clock the frame grabber will sample each line 512 times, resulting in pixels whose nominal size is some 752/512 times bigger than those in the camera (see the worked example after the list below). In addition to this large effect on pixel size, the following points should also be noted:


• Differences between frame grabbers mean that not all of the available pixels in an image are always sampled
• The position of the image sampling will not usually have a one-to-one correspondence with the pixel locations
• The beginning or ending of the sampling period will not always correspond with the beginning or end of the actual image
• The PLL used may sample at a frequency other than the ideal frequency, due to the way it is set up or to the warm-up effects discussed in Note 1
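As a worked example of the scaling referred to above, a sketch under the stated clock assumptions (the exact figures depend on the frame grabber in use):

```python
# Worked example of the Note 2 remapping arithmetic for a 752-pixel CCIR
# camera sampled by an older 512-sample, 10 MHz frame grabber.
camera_pixels_per_line = 752
grabber_samples_per_line = 512
sensor_cell_width_um = 8.6            # physical pixel pitch (Pulnix TM6CN)

# Ratio of active pixels to stored samples: each stored pixel spans
# roughly this many sensor cells in x.
x_scale = camera_pixels_per_line / grabber_samples_per_line
print(f"x scale ~ {x_scale:.3f}")                        # ~1.469
print(f"effective pixel ~ {sensor_cell_width_um * x_scale:.2f} um in x")

# An alternative estimate from the clock ratio; it differs slightly
# because not all of the active line period is necessarily sampled.
print(f"clock-ratio estimate ~ {14.1875e6 / 10e6:.3f}")  # ~1.419
```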

Recommendations

• Do not assume that there is a sensor-pixel to digital-image-value correspondence when using a frame-grabber and camera combination
• Assess the position of the image with respect to the camera by looking at the edges of the image to ensure that all of the pixels are active
• Take care in the definition of the principal point; it is not always easy to transfer cameras between frame grabbers and still end up with the principal point in the same location

Note 3. Laser beam location (****)

The problem of speckle has been noted by Clarke & Katsimbris (1994) and Dorsch et al. (1994). The visual effect can be minimised by arranging for the speckle size to be small in comparison with the sensor pixel size, or by not allowing speckle to form. The first condition requires a large aperture, which unfortunately means an unacceptably narrow depth of field; the second is arranged by using a small aperture. However, Clarke & Katsimbris (1994) have shown that even with an extremely small aperture this method will still cause significant errors in target location. These errors are introduced because the computed centre of the laser target spot changes when it is viewed from different directions (Figure 3.8).


Figure 3.8. Laser speckle locus with a moving background (image co-ordinates in pixels); the computed locations wander about the location of the target when stationary

Recommendations

• Downgrade the accuracy expectation when using laser targets or laser stripes to take into account the speckle effect
• Perform an experiment with the laser target projected onto a moving background, so that the speckle changes while the theoretical location of the laser spot stays in the same place; collect multiple images, compute the centroid location of the target, and monitor what happens

Note 4. Interlaced images (***)

Sensors that produce interlaced images are commonplace. However, with the usual interline cameras the two fields of the image, the odd and the even, are exposed at different times, 1/50 of a second apart. As a result, moving objects can cause full or partial dissociation of the features belonging to the object. In experiments conducted with a rotating target it was found that relatively slow movements produced the problem, and quicker movements caused two separate images to be produced. If the fields are not then considered separately it can be difficult to recognise each object, as the sections of the feature will be broken up in the y direction by the background. In addition to the problem of full or partial dissociation of interlaced features, the issue of blur should also be considered. The exposure time of a sensor is finite, so the resulting image feature will be spread out. If the feature is bright against a dark background it will be enlarged but will still deliver the same total amount of light, so it will appear dimmer. Modern cameras have exposure times that can be reduced from 1/25 to less than 1/10,000 of a second; short exposure times can reduce the effect of blur considerably.

Recommendations

• Restrict the speed of movement of the object


• Use each field separately but sacrifice y resolution (a field-splitting sketch follows this list)

• Use a progressive scan sensor that outputs interlaced imagery but collects the image at one point in time

• Decrease the exposure time of the camera to reduce the effect of blur

• Produce two light pulses, one at the end of field 1 and one at the beginning of field 2, such that both fields are exposed very close together in time
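A minimal sketch of the field-splitting recommendation, assuming the frame is held as a numpy array with the two fields interleaved line by line (field naming conventions vary between cameras):

```python
import numpy as np

def split_fields(frame: np.ndarray):
    """Separate an interlaced frame into its two fields.

    On a CCIR interline camera the two fields are exposed 1/50 s apart,
    so each field is self-consistent in time but has half the y resolution.
    """
    field_a = frame[0::2, :]   # lines 0, 2, 4, ...
    field_b = frame[1::2, :]   # lines 1, 3, 5, ...
    return field_a, field_b
```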

Note 5. Gamma setting (***)

The Gamma level on the camera may be set to 0.45 or 1 on typical cameras. The effect on the image is illustrated in Figure 3.9.

Figure 3.9. Difference in the response of the sensor (log grey level against log relative exposure) with the two Gamma settings, 0.45 and 1.0

The Gamma correction is designed to produce an image that is suited to a monitor for viewing purposes. From an image processing point of view, especially where the highest accuracy is required, such manipulation of the image will cause errors.
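Where a camera cannot be switched to a Gamma of 1.0, the non-linearity can in principle be inverted in software. The sketch below assumes an 8-bit image and an output = input^0.45 law; note that the grey levels have already been quantised through the non-linear mapping, so the errors cannot be fully recovered, which is why switching the camera itself to 1.0 is preferred.

```python
import numpy as np

def linearise(image_8bit: np.ndarray, gamma: float = 0.45) -> np.ndarray:
    """Approximately undo a camera gamma of 0.45 (output = input ** gamma)."""
    v = image_8bit.astype(float) / 255.0   # normalise to 0..1
    return 255.0 * v ** (1.0 / gamma)      # invert the power law
```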

Recommendation

• Always use a Gamma setting of 1.0

Note 6. Low pass filter (***)

Some frame grabbers allow the use of a low-pass filter to smooth the input signal. A filter is a necessary requirement for analogue transmission of video signals, such as the CCIR or NTSC standards, and this filtering is performed in the camera. A line scan camera may output a raw signal that includes unwanted high frequency components and requires filtering to remove them. If the frame grabber's filter is used in addition to the camera's original filter, it is likely to introduce further distortion of the signal.


Recommendation

• Do not use low-pass filters in the analogue signal path unless you are sure of their effect on the signal

Note 7. Video tape storage (***)

Until very recently, video tape storage of image data was the only method of storing and analysing long sequences of images. It is now increasingly possible to store image data to disk, or to compress image sequences using MPEG or JPEG hardware. The influence of storage media on the accuracy and repeatability of image measurements using CCD cameras has been analysed by various workers, testing the effect of video recorders, video disks, and time base correctors on the fidelity of image data. It has been concluded that each stage in the storage process approximately doubled the errors. Hoflinger and Beyer (1993) concluded that all radiometric parameters were degraded by video tape storage. Playback on a machine other than the one on which the images were recorded also degraded performance. The geometric performance when using video tape was found to be a factor of between 1.4 and 2.3 worse.

Recommendations

• If possible use the same tape recorder for recording and play-back

• Reduce the number of recording or transfer operations to a minimum

• Be aware of the degrading effects of tape storage and consider a time base corrector or alternative methods of storage such as MPEG.

Note 8. Threshold (***)

To illustrate the effect of threshold variations on operations such as the centroiding of target features, a simulation experiment can be used. A bright circular feature is produced at a known location, and its centroid is computed using either the intensity values themselves or the squared intensity values. Comparing the estimated location with the true location gives the location error. The intensity values can be given additive noise, and various quantisation levels and target peak heights can be chosen. Figure 3.10 illustrates the effect of varying the threshold level on target location errors.
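A sketch of such a simulation is given below, shown in one dimension for brevity; all parameter values are illustrative assumptions, and the centroid variant used here (threshold-subtracted intensities as weights) is one common choice among several.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.arange(32, dtype=float)
true_centre = 15.63                                   # known subpixel location
signal = 150.0 * np.exp(-0.5 * ((xs - true_centre) / 2.0) ** 2)
image = signal + rng.normal(0.0, 2.0, xs.size)        # additive noise

def centroid_error(threshold, squared=False):
    w = np.clip(image - threshold, 0.0, None)         # ignore pixels below threshold
    if squared:
        w = w ** 2                                    # squared-intensity centroid
    return np.sum(xs * w) / np.sum(w) - true_centre

for t in (5, 20, 60, 100):
    print(t, centroid_error(t), centroid_error(t, squared=True))
```

Repeating this over many noise realisations and subpixel positions reproduces the behaviour plotted in Figure 3.10.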


Figure 3.10. Variation in target location error (pixels) with change in threshold level, for the centroid and squared centroid methods with additive noise.

The centroid method is initially poor due to the background noise, by which the squared centroid is less affected. When the threshold level is high enough to remove the effect of the background noise, the location accuracy of both methods improves; the change is more dramatic for the centroid method than for the squared centroid. As the threshold is increased further, the errors in location increase again, because the target is becoming smaller and there is less information for the algorithm to use. At certain subpixel locations with a non-symmetric target image, the side pixels will be above the threshold on one side of the target image and below it on the other. This produces a systematic bias in the location that, when plotted, is often sinusoidal in nature. The squared centroid method is less sensitive to this effect, except when the target image is small due to a high threshold.

Recommendation

• Set thresholds as low as possible

Note 9. View point dependencies (***)

A serious problem for some applications is the intensity variation that naturally occurs when an object is viewed from a variety of directions. The causes are surfaces that are not consistent in their response to incident light from diffuse, partially distributed, or spot light sources, as well as reflections. Any of these can cause a viewpoint-dependent change in apparent surface intensity and hence an error in subpixel feature location methods. While such effects may be of little consequence in many feature-based computer vision methods, where identification matters more than geometric accuracy, applications that require high geometric accuracy will be severely compromised. Hence, any viewpoint dependencies should be considered not just in the use of a camera but in the calibration methods employed as well.


Recommendations

• Control the operating environment with diffuse reflecting screens and diffuse lighting
• Downgrade expectations of the accuracy obtainable from naturally illuminated scenes
• Use retro-reflective targeting to avoid the effect completely

Note 10. Sensor unflatness (**)

The flatness of the sensor is generally only a problem with medium to large sensors, but the departure can exceed 100 microns at the edges. Estimating sensor unflatness within a calibration procedure has been attempted, but such a method is not recommended because the unflatness characteristic is weak and difficult to model. If the sensor can be measured independently and a permanent flatness map obtained, this is likely to be the best procedure. Problems with thinned sensors have not, to the Authors' knowledge, been reported, but in this case small variations might occur during use due to temperature or orientation effects.

Recommendation

• When the highest accuracy is required from a large sensor, consider having the sensor's surface mapped in 3-D to discover the magnitude of the effect

Note 11. Termination and impedance of cables (**)

This effect can be demonstrated by imaging a sharp edge or a thin white line on a black background. The resultant image is the composite effect of the lens point spread function, the electrical characteristics of the camera, the signal transmission between the array and the framestore, and the framestore circuitry. Intensity profiles of two such images are shown in Figures 3.11 and 3.12. The image in Figure 3.11 was obtained using a 5 m 50 Ω cable, where the phenomenon of ringing can be clearly seen. The cable was replaced by a 2 m 50 Ω cable to achieve the profile shown in Figure 3.12. A longer cable of the recommended 75 Ω impedance should achieve the same result.


Figure 3.11. Intensity profile of a line using a 5 m cable (grey value against pixels in the x direction). Figure 3.12. Intensity profile of a line using a 2 m cable.

Recommendations

• Use the correct impedance cable for the camera and framegrabber
• Use as few electrical joints as possible
• Check that any T-pieces are also of the correct impedance
• Avoid using the signal more than once
• Make cables as short as reasonably possible
• If synchronisation is important, make cables of the same length
• Make sure that terminating resistors are used at the frame grabber end
• Preferably use a digital or intelligent camera

Note 12. Frame grabber x, y offsets (**)

With any given frame-grabber there will be differences in operation that will almost certainly lead to errors of a few pixels in the position of the nominal centre of the sensor. For instance, the EPIX frame grabber can only collect pixels in increments of 4 in the y direction. This can mean either the loss of a number of pixels from the image or the inclusion of some optically inactive pixels. In addition, the beginning of the image may be difficult to estimate from the frame grabber settings. Finally, the first lines of the image may not always be active.

Recommendations

• If swapping from one frame-grabber to another, check the set-up both in the software and by comparing images
• Estimate the principal point for each camera separately
• Consider the way in which the nominal centre of the image is calculated: dividing the sensor size in pixels by 2, or doing the same with the nominal image size reported by the frame-grabber, will probably introduce unwanted shifts (see the example below)
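The pitfall in the last recommendation can be made concrete; the numbers below are from the Pulnix and EPIX example used throughout this chapter.

```python
sensor_pixels_x = 752        # physical sensor columns
grabbed_pixels_x = 512       # columns actually stored by an older frame grabber

print(sensor_pixels_x / 2)   # 376.0: "centre" in sensor co-ordinates
print(grabbed_pixels_x / 2)  # 256.0: "centre" in stored-image co-ordinates
# Neither is the principal point; it should be estimated by calibration
# for each camera and frame grabber combination.
```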


Note 13. Frame grabber A-D offset (**)

A CCD camera signal is sampled during a period known as the "back porch", where a number of CCD sensor pixels are shielded from incident light. The level of this signal is related to electronic noise, which changes in quantity depending on the temperature of the sensor. This voltage is used to adjust the level of the signal supplied to the analogue to digital converter. The relationship between this black-level-clamped signal and the analogue to digital converter is either factory set or adjusted by a potentiometer on the frame-grabber. The level may be adjusted so that the zero light level signal is below the zero threshold of the analogue to digital converter, and this can have some unfortunate effects. A simulation of this effect shows that such an artificial threshold in the frame grabber results in an error in subpixel target location. When the intensity values for a target or line are symmetric there will be no error in location; but as the target intensities become asymmetric, the target location computed by the centroid method will oscillate from one side of the true location to the other. To test whether this effect is significant with realistic intensity images, a simulation was performed for a line with a Gaussian intensity profile with a sigma of 1.5. The effect, with and without a subtraction of six grey levels from the true target values, is illustrated in Figures 3.13 and 3.14 respectively.
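The simulation can be reconstructed along the following lines, as a sketch using the parameter values quoted in the text (not the original implementation): the true centre of a Gaussian line profile is swept in subpixel steps, and the DC offset is clipped off the signal before the centroid is formed.

```python
import numpy as np

xs = np.arange(16, dtype=float)

def centroid_bias(dc_offset, sigma=1.5, peak=150.0):
    """Peak-to-peak systematic centroid error as the line's true centre
    is swept through subpixel positions."""
    biases = []
    for centre in np.linspace(6.0, 8.0, 200):
        profile = peak * np.exp(-0.5 * ((xs - centre) / sigma) ** 2)
        clipped = np.clip(profile - dc_offset, 0.0, None)  # signal lost below A-D zero
        estimate = np.sum(xs * clipped) / np.sum(clipped)
        biases.append(estimate - centre)
    return np.ptp(biases)

print(centroid_bias(dc_offset=6.0))   # oscillating error of a few hundredths of a pixel
print(centroid_bias(dc_offset=0.0))   # far smaller without the offset
```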

Figure 3.13. A simulation of the effect of the threshold on the location of a line with a subtraction of six grey levels (skew = 3 pixels, peak = 150 grey levels, dc offset = 6 grey levels, sigma = 1.5 pixels; error in centroid location plotted in pixels).

Figure 3.14. A simulation of the location of a line without grey level subtraction (peak = 150 grey levels, skew = 3 pixels, dc offset = 0 grey levels, sigma = 1.5 pixels).

The effect on real images can be just as dramatic. Figure 3.15 is a graph of the results of computing the centre of a straight line in the image before adjustment of the frame-grabber from its factory setting.

Figure 3.15. Errors in line location (centroid location of the line, in pixels) using a factory-set frame grabber DC offset.

After adjustment of the framegrabber the results illustrated in Figure 3.16 were obtained.


Figure 3.16. Centroid of the line after frame grabber adjustment.

The problem with this type of error is that it is systematic. Further simulation experiments revealed a similar effect for circular targets. The outliers on the left hand side of Figure 3.16 are the result of an intensity value of one appearing to one side of the centroid computation window. The slight deviation from a straight line is probably caused by tangential lens distortion (the camera was set up to minimise radial lens distortion). Tests were also carried out using this frame-grabber setting to measure the centroid locations of the retro-reflective targets. The results of one of these tests are illustrated in Figure 3.17 (a and b).

Figure 3.17 (a and b). The stationary target centroid location (in pixels) over repeated computations, with three changes in exposure level, before and after adjustment.

Tests carried out using this set-up for three-dimensional measurement gave improved results. Further simulation tests revealed that the squared intensity centroid method is less affected by the DC offset than the centroid method. The problem of an incorrect level of the black-level-clamped signal being applied to the A-D converter would not usually be noticed in the apparent quality of the images. Many frame-grabbers may be set up correctly, but it appears that the adjustment of the frame-grabber should be analysed carefully for high precision measurement. However, it


should be noted that this problem does not occur for images with a background above the zero level.

Recommendation

• If a frame-grabber has the facility to adjust the DC offset it should be set to a level just below the first digit in the A-D converter’s range. This is not a perfect solution but will probably be the best compromise possible.

Note 14. Camera filter (*)

Silicon absorbs photons in the wavelength range of approximately 200-1100 nm, with a peak sensitivity at around 750 nm, and image contrast can change depending on the wavelength. As a consequence, many cameras have an optical filter in the optical path to modify the overall response of the system to mimic the response of the human eye. However, if a laser beam (e.g. at 670 nm) is going to be used to produce a bright target spot, such integral filters can reduce the intensity of light reaching the chip by an unacceptable amount. There is no filter in the Pulnix TM6CN camera, spectral modification being due to the lens (Figure 3.18).

Figure 3.18. The spectral sensitivity of the Sony ICX039ALA sensor, including a lens (response against wavelength in nm)

Recommendations

• Be aware of the spectral sensitivity of silicon sensors when choosing light sources: red sources are better than blue
• Consider whether any sources of light have a high infra-red content if a filter is not being used
• Consider the effect of the lens glass in the modification of the sensor sensitivity, and whether the lens is chromatically corrected at the wavelength being used


Note 15. Warm up of the camera (*)

Warm-up effects for three standard cameras were tested, and changes of up to 1/5 of a pixel were observed (Figure 3.19).

Figure 3.19. Image co-ordinate changes for the first 60 minutes of warm-up for three Pulnix cameras (vector scale: 1 pixel)

It appears from this result that small additional linear image co-ordinate changes occurred, probably due to sensor thermal expansion during camera warm-up.

Recommendation

• For all important measurement tasks it is recommended either that the cameras are allowed to reach a steady temperature state before use, or that an experiment of the type described is used to assess the temperature stability of the sensor

Note 16. Charge transfer efficiency (*)

CTE is a measure of the quantity of charge transferred from one cell to the next in a CCD sensor. It would be 1.0 if perfect; typical CTE values vary between 0.9995 and 0.99999 for common devices. The worst case (the first pixel in each row) overall transfer efficiency for a 2048 pixel array with a four phase clock is 0.99999^(2048 x 4) ≈ 92%. Hence CTE is more important in larger arrays.
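The worst-case arithmetic above, spelled out:

```python
cte = 0.99999               # charge transfer efficiency per transfer
transfers = 2048 * 4        # 2048-pixel row with a four-phase clock
net = cte ** transfers      # fraction of charge surviving all transfers
print(f"net transfer efficiency ~ {net:.3f}")   # ~0.921, i.e. about 92%
```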

Recommendation

• Check the transfer efficiency of large arrays.

Note 17. Amplifier noise (*)

A ground glass screen was placed in front of a typical CCD camera and various levels of illumination were produced with a distributed light source. A 50 x 50 pixel square section of the image was grabbed and analysed. The standard deviation of the resulting noise was computed at each illumination level; the results are illustrated in Figure 3.20.
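A sketch of the measurement, with grab_frame standing in for whatever frame-grabber interface is available (a hypothetical helper, not a real API):

```python
import numpy as np

def patch_noise(grab_frame, x0=100, y0=100, size=50):
    """Mean grey level and standard deviation of a uniformly lit patch."""
    frame = np.asarray(grab_frame(), dtype=float)
    patch = frame[y0:y0 + size, x0:x0 + size]
    return patch.mean(), patch.std()
```

Note that the standard deviation of a single patch mixes temporal noise with fixed pattern effects; differencing two frames of the same scene and dividing the standard deviation of the result by sqrt(2) isolates the temporal component.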


Figure 3.20. The standard deviation of electronic noise at varying intensity levels.

These results illustrate that electronic noise levels are considerable when compared with quantisation noise. The maximum difference between grey levels, at a mean of about 230 intensity levels, was 13 grey levels. Furthermore, with the camera used, the noise increased with the mean intensity level of the background. These measurements were taken on a hot day with a camera that had been switched on for some time. It is well known that the level of electronic noise is related to temperature; for example, dark current noise doubles for approximately every eight degree increase in temperature. In astronomy, and in other low light level applications such as microscopy, cooling is used to obtain good noise immunity and long exposure times. To investigate this subject further, a camera was placed in a freezer and cooled to -3°C. The noise levels were then measured from switch-on to temperature stabilisation. The results, illustrated in Figure 3.21, confirm the expected increase in noise with temperature.

Figure 3.21. Graph of electronic noise (standard deviation of grey levels) against time.

These results are also generally comparable with those obtained by Beyer (1992). For one of the 50 x 50 pixel sets, a histogram of the intensity distribution was plotted (Figure 3.22).


Figure 3.22. Histogram of the intensity distribution about the mean.

This result shows that a Gaussian distribution is a reasonable approximation in practice, with only a small difference further from the mean, where more values were observed than would be expected. This was confirmed by checking the distribution of a randomly generated set of data, whose maximum and minimum values lay inside those obtained in practice. The slight difference may be explained by other noise effects superimposed on the electronic noise.

Recommendation

• Be aware of the level of noise in imagery and consider multiple images where necessary to obtain higher accuracy results

Note 18. A-D Conversion (*)

There are four main sources of error in A-D converters: quantisation, offset, gain, and linearity. The last three errors are temperature dependent; the converter only functions correctly at its normal operating temperature. All four errors are internal to the A-D converter and do not include errors caused by incorrect gain or offset outside the converter. The first error, quantisation, is always present, but its effect can be reduced by using the full range of the converter. The A-D converter should be matched to the dynamic range of the signal requiring conversion.
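For context, the textbook figure for an ideal N-bit converter digitising a full-range sinusoid is SNR ≈ 6.02N + 1.76 dB, which puts the 8, 10, and 12 bit options in perspective:

```python
for bits in (8, 10, 12):
    levels = 2 ** bits
    snr_db = 6.02 * bits + 1.76      # ideal quantisation-limited SNR
    print(f"{bits}-bit: {levels} levels, ideal SNR ~ {snr_db:.1f} dB")
```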

Recommendation

• Choose an A-D converter to match the noise level in the signal. If justified, 10 and 12 bit A-D converters will give better results.

Note 19. Frame grabber line-jitter (*)

The causes of line jitter are well known (Beyer, 1993). The problem originates from the video data transfer standard originally devised for TV transmission. Early cameras (e.g. the Vidicon) did not have discrete pixels, so the output was a continuous function of the time taken for the electron beam to sweep the sensing area. Hence, the initial standard used for data transfer between the camera and the


framestore had no timing synchronisation between the sensor output and the A-D converter conversion period. In the case of the CCD sensor, discrete pixels are the originators of a voltage train. The camera clocks the analogue image intensity data from the CCD sensor at a fixed frequency (14.3 MHz for the CCIR standard). Many framestores use a Phase Locked Loop (PLL) to control the timing of the A-D converter based on the expected frequency. The conversion takes place a given number of clock periods after the beginning of a horizontal line, as determined by a transition in the timing signals that are encoded with the image information in a composite synchronisation signal. Any variation in the ability to determine the start of this period gives rise to line jitter. Line jitter is independent of another potentially serious problem, that of clock period variations between camera and framestore, as described in the series of warm-up tests. Since the CCIR output is an analogue voltage train and the two timing systems are completely independent of each other (apart from the horizontal and vertical synchronisation pulses), there is no guaranteed accurate correspondence between pixel intensity and the A-D conversion period. It is possible for the output of a 752 pixel sensor to be sampled by a 512 x 512 framestore at a different frequency (e.g. 10 MHz) with apparently successful results; conversely, the output from a 752 pixel sensor may be over-sampled, producing an image of superficially higher resolution. There are three possible methods of solving the line jitter problem: synchronise the data output from the camera with the framestore, average results over many frames, or use a digital camera. The first option is available if the camera has a pixel clock output and the framestore can accept a pixel clock input. A disadvantage with some framestores that allow flexible input of signals from cameras is that they often require the camera horizontal and vertical signals in addition to the pixel clock pulses. The EPIX SVMGRB4MB framestore allows the use of horizontal and vertical synchronisation pulses with the camera pixel clock, albeit with some electrical alterations to the framestore card. To analyse the extent of line jitter, a test field was constructed consisting of an array of stretched white lines imaged against a black velvet background. Lighting was optimised to provide even illumination across the test field, but with sufficient light to obtain optimum imaging conditions for automated subpixel location. Image co-ordinate data for the eight lines within each single frame were computed using a subpixel algorithm. Twenty-seven images were taken using all possible camera and lens permutations at three different distances. The experiment was then repeated with the camera rotated by 90° to produce fifty-four images in total. For each image, subpixel image co-ordinate data were computed at eighty positions on each of the eight imaged lines. Each horizontal and vertical image pair was combined in a lens distortion calibration. By these means the systematic effects of radial and tangential lens distortion were removed from the co-ordinate data. The residuals from the calibration represent the errors present in the imaging system (Table 3.2).


RMS image residual (σ)    σx       σy
µm                        0.341    0.291
pixels                    1/25     1/28

Table 3.2. RMS image co-ordinate residual standard deviations for all 54 images.

Although there is a small difference between the x and y co-ordinate residuals, there is insufficient evidence to attribute the difference to line jitter. On analysing the individual images in detail it was observed that some of the lines only covered about four to five pixels, there were small variations in illumination, and some ringing was present. These and other error sources, such as image quantisation and thermal effects, will have contributed to the residuals computed by the lens calibration routine. Consequently these mean residual standard deviations represent the total error budget; the effect of line jitter alone cannot easily be quantified. To analyse the system further, two images which did not exhibit any of the degradations mentioned previously were selected. Care was taken to minimise lens distortions by positioning a line to coincide with the optical axis of the lens. A linear regression was performed using only data computed from the central region of the line. Residuals from both horizontal and vertical lines are shown in Figures 3.23 (a and b).

Figures 3.23a and 3.23b. Sub-pixel residuals (image residual against pixel number) in the x and y directions. Figure 3.23c. Quantisation error for a similar line.

The results obtained can be compared with the theoretical limit imposed by the quantisation error, where it can be seen that quantisation accounts for at least one third of the total error. Interestingly, no significant differences between the x and y directions attributable to line jitter can be seen using the Pulnix TM6CN and EPIX framestore. Analysis of the image co-ordinate residuals from a self-calibrating bundle adjustment does not often demonstrate any significant difference in magnitude between the x and y image co-ordinate residuals. Other frame-grabbers and cameras might produce different results, and methods similar to those discussed might be used to assess them.

Recommendation

Recommendation

• Line-jitter is not the problem it once was. Frame-grabbers usually specify a jitter value in nanoseconds, and this is now usually so small as to be undetectable in most circumstances. Other effects, caused by incorrect thresholding or the settling time of the PLL, are often more significant.
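To put a manufacturer's jitter specification into perspective, a quick conversion sketch follows; the 3 ns figure is an assumed example, not taken from any particular data sheet:

```python
# Converting a frame-grabber's specified sampling jitter into a pixel
# fraction, using the CCIR pixel clock quoted in the text.
pixel_clock_hz = 14.3e6                  # CCIR pixel clock (from the text)
pixel_period_ns = 1e9 / pixel_clock_hz   # ~69.9 ns per pixel
jitter_ns = 3.0                          # hypothetical spec-sheet value
jitter_pixels = jitter_ns / pixel_period_ns
print(f"{jitter_pixels:.3f} pixels")     # ~0.043 pixel, worst case
# A few nanoseconds of jitter corresponds to a few hundredths of a pixel
# per sample, and averages down further over the many samples used in
# subpixel location, which is consistent with it being hard to detect.
```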


Note 20. Fixed pattern noise (*)

Two forms of non-uniformity are covered by this term: fixed pattern noise proper, a non-uniformity in the output signal that is invariant with light intensity, and Photo Response Non-Uniformity (PRNU), where the output signal varies in a non-uniform way as the light intensity increases. Non-uniformities in the sensor can be caused by variations in substrate thickness or pixel element size. However, research has shown that the frame-grabber can itself produce characteristic patterns. A typical pattern was observed for all the Pulnix TM6CN cameras used with the EPIX framestore (Figure 3.24). This image, taken at an RMS grey level of 180, has been enhanced so that its 4 grey-level variations span the full 0-255 range.

Figure 3.24. Sensor output without lens (greyscale enhanced).

Recommendation

• For the highest quality work it is necessary to map the fixed pattern noise and compensate for it in the images.
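A minimal sketch of such a compensation is given below. It assumes that stacks of dark frames (lens capped) and evenly illuminated flat-field frames have been captured beforehand; the two-map offset/gain model and all variable names are illustrative, not the handbook's prescribed method:

```python
import numpy as np

def build_fpn_maps(dark_frames: np.ndarray, flat_frames: np.ndarray):
    """Average stacks of calibration frames (shape N x H x W) into an
    offset map (fixed pattern noise at zero illumination) and a gain
    map (PRNU), normalised to unit mean gain."""
    offset = dark_frames.mean(axis=0)                 # per-pixel offset
    flat = flat_frames.mean(axis=0) - offset          # illumination response
    gain = flat.mean() / np.clip(flat, 1e-6, None)    # per-pixel gain
    return offset, gain

def correct(image: np.ndarray, offset: np.ndarray, gain: np.ndarray):
    """Apply per-pixel offset and gain correction to one raw 8-bit image."""
    return np.clip((image - offset) * gain, 0, 255)

# Hypothetical usage with 8-bit frames from a 576 x 752 sensor:
# offset, gain = build_fpn_maps(darks, flats)
# clean = correct(raw, offset, gain)
```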

Note 21. Sampling the image (*)

To recover signal information (in this case an image) in an undistorted form after a sampling process, the original information must be bandwidth limited to half the sampling frequency. Any image content above this Nyquist limit will be distorted in the form of aliasing. In practice, however, it is unusual to encounter aliasing because lenses, diffusers, and sensors are designed to be compatible, so the higher frequency components reaching the sensor surface are likely to be of low magnitude or insignificant.

If a particular camera has an appropriate lens and sensor, higher resolution images can be constructed from multiple images taken by a low resolution sensor, provided the lens, sensor, or camera is moved by subpixel amounts between exposures. However, this requires that the sensor has an active pixel size smaller than the pixel pitch, otherwise the images will merely be averages of the intensities. Such methods are able to produce enhancements of up to four times the nominal resolution with commercial grade sensors.
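A minimal sketch of the shift-and-add principle behind such methods is shown below, under idealised assumptions: the half-pixel shifts are known exactly and noise is negligible. A real implementation must estimate the shifts and account for the pixel aperture:

```python
import numpy as np

def shift_and_add_2x(frames):
    """Interleave four low-resolution frames, taken at (0,0), (0,0.5),
    (0.5,0) and (0.5,0.5)-pixel offsets, into one 2x-resolution image.
    frames is a list of four equal-size 2-D arrays."""
    h, w = frames[0].shape
    hi = np.zeros((2 * h, 2 * w), dtype=np.float64)
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # half-pixel steps -> HR grid
    for frame, (dy, dx) in zip(frames, offsets):
        hi[dy::2, dx::2] = frame  # place each frame on its own sub-grid
    return hi

# Hypothetical usage: four 288 x 376 frames -> one 576 x 752 image.
# hi_res = shift_and_add_2x([f00, f01, f10, f11])
```

If the active pixel area equals the pixel pitch, each sample is already an average over the full pitch and the interleaved image gains no genuine detail, which is the condition noted in the text.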


Recommendation

• If sufficient control is available, interlaced sensors can be manipulated to produce higher resolution images – it is suggested that only in special cases will the effort required to achieve this be justified.

Note 22. Geometric variations (*)

The geometric position of each pixel and the relative size of the active areas are of vital importance in the photogrammetric process. For example, sub-pixel target image location may be able to achieve 1/100th of a pixel; however, if the physical pixel positions vary then the geometric accuracy will be limited by the sensor grid rather than by the algorithm. Fortunately the fabrication process is generally good, and various investigations have shown that the geometric quality of sensors is excellent.
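The reason pixel placement matters can be seen with a small error-budget sketch; both standard deviations below are illustrative assumptions, not measured values:

```python
import math

# Illustrative error budget (assumed values, not measurements):
sigma_algo = 0.01  # subpixel target location precision (pixels)
sigma_grid = 0.05  # hypothetical pixel-position (fabrication) error (pixels)

# Independent error sources combine in quadrature:
sigma_total = math.sqrt(sigma_algo**2 + sigma_grid**2)
print(f"combined error: {sigma_total:.4f} pixels")  # ~0.051 pixels
# A 0.05 pixel grid error would swamp a 0.01 pixel algorithm, which is
# why the geometric quality of the sensor fabrication matters.
```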

Recommendation

• Fabrication processes are so good that it is unlikely that this issue will need to be considered in the vast majority of applications.

The reader is referred to other literature for further information about these error sources (e.g. Chapter 5 in Atkinson, 1996).

3.4 References and bibliography

Atkinson, K.B. (editor), 1996. Close Range Photogrammetry and Machine Vision. Whittles Publishing, Caithness.
Beyer, H.A., 1987. Some aspects of the geometric calibration of CCD cameras. Proceedings of the ISPRS Intercommission Conference on Fast Processing of Photogrammetric Data, Interlaken, Switzerland: 68-81.
Beyer, H.A., 1989. Calibration of CCD cameras for machine vision and robotics. Automated Inspection and High Speed Vision Architectures III, SPIE Vol. 1197: 88-98.
Beyer, H.A., 1992. Geometric and radiometric analysis of a CCD-camera based photogrammetric close-range system. PhD thesis, ETH Zurich. 186 pages. ISBN 3-906513-24-6.
Beyer, H.A., 1993. Determination of radiometric and geometric characteristics of frame grabbers. Videometrics II, SPIE Vol. 2067: 93-103.
Burner, A.W., Snow, W.L., Shortis, M.R. and Goad, W.K., 1990. Laboratory calibration and characterisation of video cameras. Close Range Photogrammetry Meets Machine Vision, SPIE Vol. 1395, Zurich: 664-671.
Clarke, T.A., 1991. Application of optical techniques to surveying. PhD thesis, City University, London. 310 pages.
Clarke, T.A., Cooper, M.A.R. and Fryer, J.G., 1993. An estimator for the random error in subpixel target location and its use in the bundle adjustment. Optical 3-D Measurement Techniques II, Wichmann, Karlsruhe: 161-168.
Clarke, T.A., Cooper, M.A.R., Chen, J. and Robson, S., 1994. Automated three-dimensional measurement using multiple CCD camera views. Photogrammetric Record. In press.
Clarke, T.A., 1994. An analysis of the properties of targets used in digital close range photogrammetric measurement. SPIE Vol. 2350. In press.
Dahler, J., 1987. Problems in digital image acquisition with CCD cameras. Proceedings of the ISPRS Intercommission Conference on Fast Processing of Photogrammetric Data, Interlaken, Switzerland: 48-59.
Fryer, J.G., 1989. Camera calibration in non-topographic photogrammetry. Chapter 5 in Non-Topographic Photogrammetry, 2nd edition, edited by H.M. Karara, ASP&RS, Falls Church: 59-69.
Fryer, J.G., Clarke, T.A. and Chen, J., 1994. Lens distortion for simple C-mount lenses. International Archives of Photogrammetry and Remote Sensing, XXX(5): 97-111.
Ge, R., 1993. Linejitter detection of CCD cameras. Optical 3-D Measurement Techniques II, Wichmann, Karlsruhe: 239-246.
Hoflinger, W. and Beyer, H.A., 1993. Evaluation of the geometric performance of a standard S-VHS camcorder. SPIE Vol. 2067: 104-115.
Lenz, R., 1989. Image data acquisition with CCD cameras. Optical 3D Measurement Techniques, edited by Gruen and Kahmen, Wichmann, Vienna: 22-34.
Lenz, R., Beuthauser, R. and Lenz, U., 1994. A microscan/macroscan 3x12 bit digital colour CCD camera with programmable resolution up to 20,992 x 20,480 picture elements. International Archives of Photogrammetry and Remote Sensing, XXX(5): 225-230.
Maalen-Johansen, I., 1993. On the precision of subpixel measurements in videometry. Optical 3-D Measurement Techniques II, Wichmann, Karlsruhe: 169-178.
PULNIX, 1992. TM6CN operations and maintenance manual. Pulnix America Inc., Sunnyvale, CA. 27 pages.
Raynor, J.M. and Seitz, P., 1990. The technology and practical problems of pixel-synchronous CCD data acquisition for optical metrology applications. Close Range Photogrammetry Meets Machine Vision, SPIE Vol. 1395, Zurich: 96-103.
Roundy, C.B., Slobodzian, G.E. and Dan, K.J., 1993. Digital signal processing of CCD camera signals for laser beam diagnostic applications. Electro Optics, 23(109): 11.
SONY, 1991. Semiconductor IC data book 1991: CCD cameras and peripherals. Sony Corporation, Tokyo. 862 pages.
Tseng, H., Ambrose, J.R. and Faltahi, M., 1985. Evolution of the solid state image sensor. Journal of Imaging Science, 29(1).
Wong, K.W., Lew, M. and Ke, Y., 1990. Experience with two vision systems. Close Range Photogrammetry Meets Machine Vision, SPIE Vol. 1395, Zurich: 3-7.
