Photography Techniques (Advanced)


Post on 09-Oct-2015









PDF generated using the open source mwlib toolkit. PDF generated at: Wed, 21 Aug 2013 16:46:53 UTC

Photography Techniques: Advanced Skills

Contents

Articles
Zone System
High-dynamic-range imaging
Contre-jour
Night photography
Multiple exposure
Camera obscura
Pinhole camera
Stereoscopy

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License


Zone System

The Zone System is a photographic technique for determining optimal film exposure and development, formulated by Ansel Adams and Fred Archer.[1] Adams described the Zone System as "[...] not an invention of mine; it is a codification of the principles of sensitometry, worked out by Fred Archer and myself at the Art Center School in Los Angeles, around 1939-40."[2]

The technique is based on the late 19th-century sensitometry studies of Hurter and Driffield. The Zone System provides photographers with a systematic method of precisely defining the relationship between the way they visualize the photographic subject and the final results. Although it originated with black-and-white sheet film, the Zone System is also applicable to roll film, both black-and-white and color, negative and reversal, and to digital photography.


Visualization

An expressive image involves the arrangement and rendering of various scene elements according to the photographer's desire. Achieving the desired image involves image management (placement of the camera, choice of lens, and possibly the use of camera movements) and control of image values. The Zone System is concerned with control of image values, ensuring that light and dark values are rendered as desired. Anticipation of the final result before making the exposure is known as visualization.

Exposure metering

Any scene of photographic interest contains elements of different luminance; consequently, the exposure actually is many different exposures. The exposure time is the same for all elements, but the image illuminance varies with the luminance of each subject element.

Exposure is often determined using a reflected-light[3] exposure meter. The earliest meters measured overall average luminance; meter calibration was established to give satisfactory exposures for typical outdoor scenes. However, if the part of a scene that is metered includes large areas of unusually high or low reflectance, or unusually large areas of highlight or shadow, the effective average reflectance[4] may differ substantially from that of a typical scene, and the rendering may not be as desired.

An averaging meter cannot distinguish between a subject of uniform luminance and one that consists of light and dark elements. When exposure is determined from average luminance measurements, the exposure of any given scene element depends on the relationship of its reflectance to the effective average reflectance. For example, a dark object of 4% reflectance would be given a different exposure in a scene of 20% effective average reflectance than it would be given in a scene of 12% reflectance. In a sunlit outdoor scene, the exposure for the dark object would also depend on whether the object was in sunlight or shade. Depending on the scene and the photographer's objective, any of the previous exposures might be acceptable.

However, in some situations, the photographer might wish to specifically control the rendering of the dark object; with overall average metering, this is difficult if not impossible. When it is important to control the rendering of specific scene elements, alternative metering techniques may be required. It is possible to make a meter reading of an individual scene element, but the exposure indicated by the meter will render that element as a medium gray; in the case of a dark object, that result is usually not what is desired. Even when metering individual scene elements, some adjustment of the indicated exposure is often needed if the metered scene element is to be rendered as visualized.
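The reflectance arithmetic above can be sketched numerically. This fragment is illustrative only (the function name is my own): an averaging meter's recommendation shifts by the base-2 logarithm of the ratio of the effective average reflectances, so the same dark object receives different exposures in the two scenes.

```python
import math

def stop_difference(avg_reflectance_a: float, avg_reflectance_b: float) -> float:
    """Difference, in stops, between the exposures an averaging meter
    indicates for two scenes with different effective average reflectance.
    Each stop is a factor of two in light."""
    return math.log2(avg_reflectance_a / avg_reflectance_b)

# The dark 4% object from the text, metered in a 20% scene versus a 12% scene:
# the meter's recommendations differ by log2(0.20 / 0.12), roughly 0.74 stop,
# so the object is rendered lighter or darker depending on its surroundings.
diff = stop_difference(0.20, 0.12)
```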


Exposure zones

In the Zone System, measurements are made of individual scene elements, and exposure is adjusted based on the photographer's knowledge of what is being metered: a photographer knows the difference between freshly fallen snow and a black horse, while a meter does not. Much has been written on the Zone System, but the concept is very simple: render light subjects as light, and dark subjects as dark, according to the photographer's visualization. The Zone System assigns numbers from 0 through 10[5] to different brightness values, with 0 representing black, 5 middle gray, and 10 pure white; these values are known as zones. To make zones easily distinguishable from other quantities, Adams and Archer used Roman rather than Arabic numerals. Strictly speaking, zones refer to exposure,[6]

with a Zone V exposure (the meter indication) resulting in a mid-tone rendering in the final image. Each zone differs from the preceding or following zone by a factor of two, so that a Zone I exposure is twice that of Zone 0, and so forth. A one-zone change is equal to one stop,[7] corresponding to standard aperture and shutter controls on a camera. Evaluating a scene is particularly easy with a meter that indicates in exposure value (EV), because a change of one EV is equal to a change of one zone.

Many small- and medium-format cameras include provision for exposure compensation; this feature works well with the Zone System, especially if the camera includes spot metering, but obtaining proper results requires careful metering of individual scene elements and making appropriate adjustments.

Zones, the physical world, and the print

The relationship between the physical scene and the print is established by characteristics of the negative and the print. Exposure and development of the negative are usually determined so that a properly exposed negative will yield an acceptable print on a specific photographic paper.

Although zones directly relate to exposure, visualization relates to the final result. A black-and-white photographic print represents the visual world as a series of tones ranging from black to white. Imagine all of the tonal values that can appear in a print, represented as a continuous gradation from black to white:

    Full Tonal Gradation

From this starting point, zones are formed by:

• Dividing the tonal gradation into eleven equal sections.

    Eleven-Step Gradation

Note: You may need to adjust the brightness and contrast of your monitor to see the gradations at the dark and light ends of the scales.

• Blending each section into one tone that represents all the tonal values in that section.
• Numbering each section with Roman numerals, from 0 for the black section to X for the white one.
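As a rough sketch, the steps above can be expressed in code. The names and the 0..255 tone scale are illustrative assumptions, not from the source:

```python
# Roman numerals for the eleven zones, 0 (black) through X (white).
ZONES = ["0", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]

def zone_tones(white: int = 255) -> dict:
    """Divide a 0..white gradation into eleven equal sections and blend
    each section to its midpoint tone, keyed by zone numeral."""
    n = len(ZONES)
    return {z: round((i + 0.5) / n * white) for i, z in enumerate(ZONES)}

tones = zone_tones()
# Zone V blends to approximately middle gray on this scale.
```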


Exposure

A dark surface under a bright light can reflect the same amount of light as a light surface under dim light. The human eye would perceive the two as being very different, but a light meter would measure only the amount of light reflected, and its recommended exposure would render either as Zone V. The Zone System provides a straightforward method for rendering these objects as the photographer desires. The key element in the scene is identified, and that element is placed on the desired zone; the other elements in the scene then fall where they may. With negative film, exposure often favors shadow detail; the procedure then is to:

1. Visualize the darkest area of the subject in which detail is required, and place it on Zone III. The exposure for Zone III is important, because if the exposure is insufficient, the image may not have satisfactory shadow detail. If the shadow detail is not recorded at the time of exposure, nothing can be done to add it later.

2. Carefully meter the area visualized as Zone III and note the meter's recommended exposure (the meter gives a Zone V exposure).

3. Adjust the recommended exposure so that the area is placed on Zone III rather than Zone V. To do this, use an exposure two stops less than the meter's recommendation.
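The placement step reduces to simple stop arithmetic. A sketch (the function name and the EV figures are illustrative): the meter's reading always corresponds to Zone V, and moving the metered area down one zone means one stop less exposure, which on the EV scale is one EV higher.

```python
def place_on_zone(metered_ev: float, target_zone: int) -> float:
    """Return the camera EV setting that places the metered area on
    target_zone. The meter's indication renders the area as Zone V;
    each zone below V is one stop less exposure (a higher EV number),
    each zone above V is one stop more (a lower EV number)."""
    return metered_ev + (5 - target_zone)

# A shadow area meters at EV 7 (a Zone V exposure). Placing it on Zone III
# means two stops less exposure, so the camera is set to EV 9.
ev_for_zone_iii = place_on_zone(7, 3)
```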

Development

For every combination of film, developer, and paper there is a normal development time that will allow a properly exposed negative to give a reasonable print. In many cases, this means that values in the print will display as recorded (e.g., Zone V as Zone V, Zone VI as Zone VI, and so on). In general, optimal negative development will be different for every type and grade of paper.

It is often desirable for a print to exhibit a full range of tonal values; this may not be possible for a low-contrast scene if the negative is given normal development. However, the development can be increased to increase the negative contrast so that the full range of tones is available. This technique is known as expansion, and the development usually referred to as plus or N+. Criteria for plus development vary among different photographers; Adams used it to raise a Zone VII placement to Zone VIII in the print, and referred to it as N+1 development.

Conversely, if the negative for a high-contrast scene is given normal development, desired detail may be lost in either shadow or highlight areas, and the result may appear harsh. However, development can be reduced so that a scene element placed on Zone IX is rendered as Zone VIII in the print; this technique is known as contraction, and the development usually referred to as minus or N−. When the resulting change is one zone, it is usually called N−1 development. It sometimes is possible to make greater adjustments, using N+2 or N−2 development, and occasionally even beyond.

Development has the greatest effect on dense areas of the negative, so that the high values can be adjusted with minimal effect on the low values. The effect of expansion or contraction gradually decreases with tones darker than Zone VIII (or whatever value is used for control of high values). Specific times for N+ or N− developments are determined either from systematic tests, or from development tables provided by certain Zone System books.


Additional darkroom processes

Adams generally used selenium toning when processing prints. Selenium toner acts as a preservative and can alter the color of a print, but Adams used it subtly, primarily because it can add almost a full zone to the tonal range of the final print, producing richer dark tones that still hold shadow detail. His book The Print described using the techniques of dodging and burning to selectively darken or lighten areas of the final print.

The Zone System requires that every variable in photography, from exposure to darkroom production of the print, be calibrated and controlled. The print is the last link in a chain of events, no less important to the Zone System than exposure and development of the film. With practice, the photographer visualizes the final print before the shutter is released.

    Application to other media

Roll film

Unlike sheet film, in which each negative can be individually developed, an entire roll must be given the same development, so that N+ and N− development are normally unavailable.[10] The key element in the scene is placed on the desired zone, and the rest of the scene falls where it will. Some contrast control is still available with the use of different paper grades. Adams (1981, 93-95) described use of the Zone System with roll film. In most cases, he recommended N−1 development when a single roll was to be exposed under conditions of varying contrast, so that exposure could be sufficient to give adequate shadow detail but avoid excessive density and grain build-up in the highlights.

Color film

Because of color shifts, color film usually does not lend itself to variations in development time. Use of the Zone System with color film is similar to that with black-and-white roll film, except that the exposure range is somewhat less, so that there are fewer zones between black and white. The exposure scale of color reversal film is less than that of color negative film, and the procedure for exposure usually is different, favoring highlights rather than shadows; the shadow values then fall where they will. Whatever the exposure range, the meter indication results in a Zone V placement. Adams (1981, 95-97) described the application to color film, both negative and reversal.

Digital photography

The Zone System can be used in digital photography just as in film photography; Adams (1981, xiii) himself anticipated the digital image. As with color reversal film, the normal procedure is to expose for the highlights and process for the shadows.

Until recently, digital sensors had a much narrower dynamic range than color film, which, in turn, has less range than monochrome film. But an increasing number of digital cameras have wider dynamic ranges. One of the first was Fujifilm's FinePix S3 Pro digital SLR, which has their proprietary Super CCD SR sensor, specifically developed to overcome the issue of limited dynamic range by using interstitial low-sensitivity photosites (pixels) to capture highlight details.[citation needed] The CCD is thus able to expose at both low and high sensitivities within one shot by assigning a honeycomb of pixels to different intensities of light.

Greater scene contrast can be accommodated by making one or more exposures of the same scene using different exposure settings and then combining those images. It often suffices to make two exposures, one for the shadows and one for the highlights; the images are then overlapped and blended appropriately,[11] so that the resulting composite represents a wider range of colors and tones. Combining images is often easier if the image-editing software includes features, such as the automatic layer alignment in Adobe Photoshop CS3, that assist precise registration of multiple images. Even greater scene contrast can be handled by using more than two exposures and


combining with a feature such as Merge to HDR in Photoshop CS2 and later.

The tonal range of the final image depends on the characteristics of the display medium. Monitor contrast can vary significantly, depending on the type (CRT, LCD, etc.), model, and calibration (or lack thereof). A computer printer's tonal output depends on the number of inks used and the paper on which it is printed. Similarly, the density range of a traditional photographic print depends on the processes used as well as the paper characteristics.


Most high-end digital cameras allow viewing a histogram of the tonal distribution of the captured image. This histogram, which shows the concentration of tones, running from dark on the left to light on the right, can be used to judge whether a full tonal range has been captured, or whether the exposure should be adjusted, such as by changing the exposure time, lens aperture, or ISO speed, to ensure a tonally rich starting image.[12]
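A minimal sketch of how such a histogram can be judged, assuming an image reduced to a flat list of 8-bit tonal values (everything here is illustrative, not a camera API):

```python
def histogram(pixels, bins=8):
    """Count pixels per tonal bin, dark tones on the left, light on the right."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    return counts

def clipped_fraction(pixels, low=4, high=251):
    """Fraction of pixels crushed to near-black or blown to near-white;
    a large value suggests adjusting exposure time, aperture, or ISO."""
    return sum(1 for p in pixels if p <= low or p >= high) / len(pixels)
```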

Misconceptions and criticisms

The Zone System gained an early reputation for being complex, difficult to understand, and impractical to apply to real-life shooting situations and equipment. Noted photographer Andreas Feininger wrote in 1976:

I deliberately omitted discussing the so-called Zone System of film exposure determination in this book because in my opinion it makes mountains out of molehills, complicates matters out of all proportions, does not produce any results that cannot be accomplished more easily with methods discussed in this text, and is a ritual if not a form of cult rather than a practical technical procedure.[13]

Much of the difficulty may have resulted from Adams's early books, which he wrote without the assistance of a professional editor; he later conceded (Adams 1985, 325) that this was a mistake. Picker (1974) provided a concise and simple treatment that helped demystify the process. Adams's later Photography Series, published in the early 1980s (and written with the assistance of Robert Baker), also proved far more comprehensible to the average photographer.

The Zone System has often been thought to apply only to certain materials, such as black-and-white sheet film and black-and-white photographic prints. Adams (1981, xii) suggested that when new materials become available, the Zone System is adapted rather than discarded. He anticipated the digital age, stating:

    I believe the electronic image will be the next major advance. Such systems will have their own inherent andinescapable structural characteristics, and the artist and functional practitioner will again strive to comprehendand control them.

    Yet another misconception is that the Zone System emphasizes technique at the expense of creativity. Somepractitioners have treated the Zone System as if it were an end in itself, but Adams made it clear that the ZoneSystem was an enabling technique rather than the ultimate objective.

Notes

[3] Adams (1981, 30) considered the incident-light meter, which measures light falling on the subject, to be of limited usefulness because it takes no account of the specific subject luminances that actually produce the image.

[4] A typical scene includes areas of highlight and shadow, and has scene elements at various angles to the light source, so it usually is possible to use the term average reflectance only loosely. Here, effective average reflectance is used to include these additional effects.

[5] Adams (1981) designated 11 zones; other photographers, including Picker (1974) and White, Zakia, and Lorenz (1976), used 10 zones. Either approach is workable if the photographer is consistent in her methods.

[6] Adams (1981) distinguished among exposure zones, negative density values, and print values. The negative density value is controlled by exposure and the negative development; the print value is controlled by the negative density value, and the paper exposure and development. Commonly, zone is also used, if somewhat loosely, to refer to negative density values and print values.

[7] Photographers commonly refer to exposure changes in terms of stops, but properly, a stop is a device that regulates the amount of light, while a step is a division of a scale. The standard exposure scale consists of power-of-two steps; a one-step exposure increase doubles the exposure, while a one-step decrease halves the exposure. Davis (1999, 13) recommended the term stop to avoid confusion with the steps of a photographic step tablet, which may not correspond to standard power-of-two exposure steps. ISO standards generally use step.


[8] Adams's description of zones and their application to typical scene elements was somewhat more extensive than the table in this article. The application of Zone IX to glaring snow is from Adams (1948).

[9] The effective speed determined for a given combination of film and developer is sometimes described as an Exposure Index (EI), but an EI often represents a fairly arbitrary choice rather than the systematic speed determination done for use with the Zone System.

[10] If a roll-film camera accepts interchangeable backs, it is possible to use N+ and N− development by designating different backs for different development, and changing backs when the image so requires. Without interchangeable backs, different camera bodies can be designated for different development, but this usually is practical only with small-format cameras.

[11] http://www.luminous-landscape.com/tutorials/digital-blending.shtml

[12] Discussion on how histograms can be used to implement the Zone System in digital photography (http://www.illustratedphotography.com/basic-photography/zone-system-histograms)

[13] Feininger, Andreas, Light and Lighting in Photography, Prentice-Hall, 1976

References

Adams, Ansel. 1948. The Negative: Exposure and Development. Ansel Adams Basic Photography Series/Book 2. Boston: New York Graphic Society. ISBN 0-8212-0717-2

Adams, Ansel. 1981. The Negative. The New Ansel Adams Basic Photography Series/Book 2. ed. Robert Baker. Boston: New York Graphic Society. ISBN 0-8212-1131-5. Reprinted, Boston: Little, Brown, & Company, 1995. ISBN 0-8212-2186-8. Page references are to the 1981 edition.

Adams, Ansel. 1985. Ansel Adams: An Autobiography. ed. Mary Street Alinder. Boston: Little, Brown, & Company. ISBN 0-8212-1596-5

ANSI PH2-1979. American National Standard Method for Determining Speed of Photographic Negative Materials (Monochrome, Continuous-Tone). New York: American National Standards Institute.

Davis, Phil. 1999. Beyond the Zone System. 4th ed. Boston: Focal Press. ISBN 0-240-80343-4

ISO 6:1993. Photography - Black-and-White Pictorial Still Camera Negative Film/Process Systems. International Organization for Standardization (http://www.iso.org).

Latour, Ira H. 1998. "Ansel Adams, the Zone System and the California School of Fine Arts." History of Photography, v. 22, n. 2, Summer 1998, p. 148. ISSN 0308-7298.

Picker, Fred. 1974. Zone VI Workshop: The Fine Print in Black & White Photography. Garden City, N.Y.: Amphoto. ISBN 0-8174-0574-7

White, Minor, Richard Zakia, and Peter Lorenz. 1976. The New Zone System Manual. Dobbs Ferry, N.Y.: Morgan & Morgan. ISBN 0-87100-100-4

Further reading

Farzad, Bahman. The Confused Photographer's Guide to Photographic Exposure and the Simplified Zone System. 4th ed. Birmingham, AL: Confused Photographers Guide Books, 2001. ISBN 0-9660817-1-4

Johnson, Chris. The Practical Zone System, Fourth Edition: For Film and Digital Photography. 4th ed. Boston: Focal Press, 2007. ISBN 0-240-80756-1

Lav, Brian. Zone System: Step-by-Step Guide for Photographers. Buffalo, NY: Amherst Media, 2001. ISBN


High-dynamic-range imaging

Example of an HDR image, including the images that were used for its creation.

HDR photograph of the Leukbach, a river in Saarburg, Germany.

High-dynamic-range imaging (HDRI or HDR) is a set of methods used in imaging and photography to capture a greater dynamic range between the lightest and darkest areas of an image than current standard digital imaging or photographic methods. HDR images can represent more accurately the range of intensity levels found in real scenes, from direct sunlight to faint starlight, and are often captured by way of a plurality of differently exposed pictures of the same subject matter.[1][2][3]

HDR methods provide a higher dynamic range from the imaging process. Non-HDR cameras take pictures at one exposure level with a limited contrast range. This results in the loss of detail in bright or dark areas of a picture, depending on whether the camera had a low or high exposure setting. HDR compensates for this loss of detail by taking multiple pictures at different exposure levels and intelligently stitching them together to produce a picture that is representative in both dark and bright areas.

HDR is also commonly used to refer to display of images derived from HDR imaging in a way that exaggerates contrast for artistic effect. The two main sources of HDR images are computer renderings and merging of multiple low-dynamic-range (LDR)[4] or standard-dynamic-range (SDR)[5] photographs. HDR images can also be acquired using special image sensors, like an oversampled binary image sensor. Tone-mapping methods, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.


High-dynamic-range (HDR) image made from three pictures, taken in Tronador, Argentina.


In photography, dynamic range is measured in EV differences (known as stops) between the brightest and darkest parts of the image that show detail. An increase of one EV, or one stop, is a doubling of the amount of light.

Dynamic ranges of common devices

Device                          Stops      Contrast
LCD display                     9.5        700:1 (250:1 - 1750:1)
Negative film (Kodak VISION3)   13[]       8192:1
Human eye                       10-14[6]   1024:1 - 16384:1
DSLR camera (Pentax K-5 II)     14.1[]     17560:1

High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using exposure bracketing, and then merging them into an HDR image. Digital photographs are often encoded in a camera's raw image format, because 8-bit JPEG encoding doesn't offer enough values to allow fine transitions (and introduces undesirable effects due to the lossy compression).

Any camera that allows manual exposure control can create HDR images. This includes film cameras, though the images may need to be digitized so they can be processed with software HDR methods.

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II.[7] As the popularity of this imaging method grows, several camera manufacturers are now offering built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone-mapped JPEG file.[8] The Canon PowerShot G12, Canon PowerShot S95, and Canon PowerShot S100 offer similar features in a smaller format.[9] Even some smartphones now include HDR modes, and most platforms have apps that provide HDR picture taking.[10]
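The bracket-and-merge idea can be sketched as follows. This is a simplified illustration, not any camera's or editor's actual algorithm: each bracketed frame is rescaled to a common linear exposure, then the frames are averaged with weights that trust well-exposed mid-tones most.

```python
def merge_bracketed(frames, evs):
    """Merge same-scene frames (lists of linear 0..1 pixel values) shot at
    relative exposures evs (in EV; +1 means one stop more light) into a
    single HDR estimate expressed at the base (0 EV) exposure."""
    merged = []
    for i in range(len(frames[0])):
        num = den = 0.0
        for frame, ev in zip(frames, evs):
            v = frame[i]
            weight = 1.0 - 2.0 * abs(v - 0.5)   # favor mid-range values
            num += weight * v / (2.0 ** ev)     # undo the exposure offset
            den += weight
        merged.append(num / den if den > 0 else frames[0][i] / (2.0 ** evs[0]))
    return merged

# Two exposures of one pixel: 0.25 at base EV and 0.5 at +1 EV both imply a
# scene value of 0.25 at base exposure, and the merge recovers it.
```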


Editing

Of all imaging tasks, image editing demands the highest dynamic range. Editing operations need high precision to avoid aliasing artifacts such as banding and jaggies. Photoshop users are familiar with the issues of low dynamic range today. With 8-bit channels, if you brighten an image, information is lost irretrievably: darkening the image after brightening does not restore the original appearance. Instead, all of the highlights appear flat and washed out. One must work in a carefully planned work-flow to avoid this problem.

Scanning film

In contrast to digital photographs, color negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.[11]

Dynamic ranges of photographic material

Material           Dynamic range (f-stops)   Object contrast
photograph print   5                         1:32
color negative     8                         1:256
positive slide     12                        1:4096

When digitizing photographic material with an image scanner, the scanner must be able to capture the whole dynamic range of the original, or details are lost. The manufacturer's declarations concerning the dynamic range of flatbed and film scanners are often slightly inaccurate and exaggerated.[citation needed]

Despite color negative film having less dynamic range than slide film, it actually captures considerably more of the scene's dynamic range than slide film does; that dynamic range is simply compressed considerably.
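The stop counts and contrast ratios in the table above are linked by a simple power of two per stop; a one-line check (illustrative only):

```python
def contrast_ratio(stops: float) -> float:
    """Object contrast corresponding to a dynamic range given in f-stops:
    each stop doubles the amount of light, so the ratio is 2 ** stops."""
    return 2.0 ** stops

# photograph print:  5 stops -> 1:32
# color negative:    8 stops -> 1:256
# positive slide:   12 stops -> 1:4096
```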

    Representing HDR images on LDR displays

Camera characteristics

Camera characteristics must be considered when reconstructing high-dynamic-range images, particularly gamma curves, sensor resolution, and noise.[]

Camera calibration

Camera calibration can be divided into three aspects: geometric, photometric, and spectral calibration. For HDR reconstruction, the important aspects are photometric and spectral calibration.[]

Color reproduction

Because it is human perception of color, rather than color per se, that matters in color reproduction, light sensors and emitters try to render and manipulate a scene's light signal in such a way as to mimic human perception of color. Based on the trichromatic nature of the human eye, the standard solution adopted by industry is to use red, green, and blue filters, referred to as RGB, to sample the input light signal and to reproduce the signal using light-based image emitters. This employs an additive color model, as opposed to the subtractive color model used with printers, paintings, etc.

Photographic color films usually have three layers of emulsion, each with a different spectral curve, sensitive to red, green, and blue light, respectively. The RGB spectral response of the film is characterized by spectral sensitivity and spectral dye density curves.[12]


Contrast reduction

HDR images can easily be represented on common LDR media, such as computer monitors and photographic prints, by simply reducing the contrast, just as all image-editing software is capable of doing.

    Clipping and compressing dynamic range

An example rendering of an HDRI tone-mapped image of a New York City nighttime scene


Scenes with high dynamic range are often represented on LDR devices by cropping the dynamic range, cutting off the darkest and brightest details, or alternatively with an S-shaped conversion curve that compresses contrast progressively and more aggressively in the highlights and shadows while leaving the middle portions of the contrast range relatively unaffected.
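An S-shaped conversion of this kind can be sketched with a logistic curve; the steepness constant here is an arbitrary illustrative choice, not a standard value:

```python
import math

def s_curve(v: float, steepness: float = 6.0) -> float:
    """Compress a linear value in [0, 1] with a sigmoid centred on 0.5,
    rescaled so 0 maps to 0 and 1 maps to 1. Mid-tones change little;
    highlights and shadows are compressed progressively harder."""
    def sig(x):
        return 1.0 / (1.0 + math.exp(-steepness * (x - 0.5)))
    lo, hi = sig(0.0), sig(1.0)
    return (sig(v) - lo) / (hi - lo)
```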

    Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of the entire image while retaining localized contrast (between neighboring pixels), tapping into research on how the human eye and visual cortex perceive a scene, trying to represent the whole dynamic range while retaining realistic color and contrast.

Images with too much tone-mapping processing have their range over-compressed, creating a surreal low-dynamic-range rendering of a high-dynamic-range scene.

Comparison with traditional digital images

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.[][13][14]

HDR images often don't use fixed ranges per color channel, unlike traditional images, in order to represent many more colors over a much wider dynamic range. For that purpose, they don't use integer values to represent the single color channels (e.g., 0-255 in an 8-bit-per-channel interval for red, green, and blue) but instead use a floating-point representation. Common are 16-bit (half precision) or 32-bit floating-point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10-12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.[][15]
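The trade-off between fixed 8-bit channels and floating point can be illustrated with Python's standard struct module (the 0..1000 luminance range is an arbitrary example):

```python
import struct

def roundtrip_half(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision (16-bit float)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

def roundtrip_8bit(x: float, max_value: float) -> float:
    """Round-trip through a fixed 8-bit scale spanning [0, max_value]."""
    return round(x / max_value * 255) / 255 * max_value

# Over a 0..1000 luminance range, a dim value of 0.5 is crushed to 0 by the
# fixed 8-bit scale, while the half float keeps it: floating point maintains
# roughly constant *relative* precision across the whole range.
```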


History of HDR photography

1850

    Photo by Gustave Le Gray

The idea of using several exposures to fix a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, the luminosity range being too extreme. Le Gray used one negative for the sky and another one with a longer exposure for the sea, and combined the two into one picture in positive.[16]

    Mid-twentieth century

    External images

    Schweitzer at the Lamp [17], by W. Eugene Smith[18][19]

In the mid-twentieth century, manual tone mapping was done particularly using dodging and burning, selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This is effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took five days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.[19]

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which features dodging and burning prominently, in the context of his Zone System.

With the advent of color photography, tone mapping in the darkroom was no longer possible, due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response over the years, or shot in black-and-white to use tone mapping methods.


    Exposure/Density Characteristics of Wyckoff's Extended Exposure Response Film

Film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force".[20] This XR film had three layers: an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color.[21] The dynamic range of this extended-range film has been estimated as 1:10⁸.[22] It has been used to photograph nuclear explosions,[23] for astronomical photography,[24] for spectrographic research,[25] and for medical imaging.[26] Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

1980

The desirability of HDR has been recognized for decades, but its wider usage was, until quite recently, precluded by the limitations imposed by the available computer processing power. Probably the first practical application of HDRI was by the movie industry in the late 1980s; in 1985, Gregory Ward created the Radiance RGBE image format, the first HDR imaging file format.

The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel led by Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988. In 1993 the same group introduced the first commercial medical camera that performed real-time capture of multiple images with different exposures to produce an HDR video image.

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping this result. Global HDR was first introduced in 1993,[1] resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.

This method was developed to produce a high-dynamic-range image from a set of photographs taken with a range of exposures. With the rising popularity of digital cameras and easy-to-use desktop software, the term HDR is now popularly used to refer to this process. This composite method is different from (and may be of lesser or greater quality than) the production of an image from one exposure of a sensor that has a native high dynamic range. Tone mapping is also used to display HDR images on devices with a low native dynamic range, such as a computer screen.

1996

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.

Mann's method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision and other image processing operations.
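The general two-step pipeline (radiance map first, tone mapping second) can be sketched in a few lines. The hat-shaped weighting function and the L/(1+L) tone curve below are illustrative assumptions for the sketch, not Mann's published formulas; images are simplified to flat lists of normalized pixel values.

```python
def radiance_map(exposures, times):
    """Step 1: combine differently exposed pixel values (0..1 scale)
    into one floating-point radiance estimate per pixel, using only
    global operations. Hat-shaped weights favour well-exposed mid-tones."""
    def w(v):
        return max(0.0, 1.0 - abs(2.0 * v - 1.0))   # illustrative weighting
    out = []
    for pix in zip(*exposures):                      # one pixel across all exposures
        num = sum(w(v) * v / t for v, t in zip(pix, times))
        den = sum(w(v) for v in pix)
        out.append(num / den if den > 0 else 0.0)
    return out

def tone_map(radiance):
    """Step 2: compress the radiance map back to display range
    with a simple global operator, L / (1 + L)."""
    return [L / (1.0 + L) for L in radiance]

# Two exposures of the same three pixels: a short one and a 4x longer one.
short = [0.02, 0.25, 0.90]
long_ = [0.08, 0.95, 1.00]    # highlights clip in the long exposure
lightspace = radiance_map([short, long_], times=[1.0, 4.0])
display = tone_map(lightspace)
```

Clipped or near-black samples receive low weight, so each radiance estimate is dominated by whichever exposure recorded that pixel best.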

1997

This method of combining several differently exposed images to produce one HDR image was presented to the public by Paul Debevec.

2005

Photoshop CS2 introduced the Merge to HDR function, 32-bit floating point image support for HDR images, and HDR tone mapping for conversion of HDR images to LDR.

Video

While custom high-dynamic-range digital video solutions had been developed for industrial manufacturing during the 1980s, it was not until the early 2000s that several scholarly research efforts used consumer-grade sensors and cameras.[27] A few companies such as RED[28] and Arri[29] have been developing digital sensors capable of a higher dynamic range. The RED EPIC-X can capture HDRx images with a user-selectable 1–3 stops of additional highlight latitude in the 'x' channel. The 'x' channel can be merged with the normal channel in post-production software. With the advent of low-cost consumer digital cameras, many amateurs began posting tone-mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010 the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer-grade HD video cameras.[30] Similar methods have been described in the academic literature in 2001[31] and 2007.[32]

Modern movies are often filmed with cameras featuring a higher dynamic range, and legacy movies can be converted even if manual intervention is needed for some frames (as happened in the past with the colorization of black-and-white films). Also, special effects, especially those in which real and synthetic footage are seamlessly mixed, require both HDR shooting and rendering. HDR video is also needed in all applications in which capturing the temporal aspects of changes in a scene demands high accuracy. This is especially important in the monitoring of some industrial processes such as welding, in predictive driver-assistance systems in the automotive industry, and in surveillance systems, to name just a few possible applications. HDR video can also be used to speed up image acquisition in all applications in which a large number of static HDR images are needed, for example in image-based methods in computer graphics. Finally, with the spread of TV sets with enhanced dynamic range, broadcasting HDR video may become important, but may take a long time to occur due to standardization issues. For this particular application, enhancing current low-dynamic-range (LDR) video signals to HDR in intelligent TV sets seems a more viable near-term solution.[33]

Examples

These are examples of four standard dynamic range images that are combined to produce two resulting tone-mapped images.

Raw material

−4 stops  −2 stops  +2 stops  +4 stops


    Results after processing

Simple contrast reduction

Local tone mapping

These are examples of two standard dynamic range images that are combined to produce a resulting tone-mapped image.

Raw material

    +2 stops -2 stops

    Result after processing


    Final Tone Mapped Image
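The difference between the two processed results shown above (simple contrast reduction versus local tone mapping) comes down to whether the compression curve sees only the pixel itself or also its neighbourhood. A minimal sketch, using an L/(1+L) curve and a box-blur neighbourhood average as illustrative choices:

```python
def global_reduce(lum):
    """Simple contrast reduction: the same curve, L / (1 + L),
    applied identically to every pixel of a 2-D luminance grid."""
    return [[L / (1.0 + L) for L in row] for row in lum]

def local_tone_map(lum, radius=1):
    """Local tone mapping: each pixel is compressed relative to a
    box-blur estimate of its neighbourhood, preserving local contrast."""
    h, w = len(lum), len(lum[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = [lum[j][i]
                  for j in range(max(0, y - radius), min(h, y + radius + 1))
                  for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = lum[y][x] / (1.0 + sum(nb) / len(nb))
    return out
```

The global operator applies one curve everywhere; the local one divides each pixel by its neighbourhood average, so edges and texture survive a stronger overall compression.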



References

[1] "Compositing Multiple Pictures of the Same Scene", by Steve Mann, in IS&T's 46th Annual Conference, Cambridge, Massachusetts, May 9–14, 1993.
[16] J. Paul Getty Museum. Gustave Le Gray, Photographer. July 9 – September 29, 2002. (http://www.getty.edu/art/exhibitions/le_gray) Retrieved September 14, 2008.
[17] http://www.cybergrain.com/tech/hdr/images1/eugene_smith.jpg
[18] The Future of Digital Imaging – High Dynamic Range Photography (http://www.cybergrain.com/tech/hdr/), Jon Meyer, Feb 2004
[19] 4.209: The Art and Science of Depiction (http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/), Frédo Durand and Julie Dorsey, Limitations of the Medium: Compensation and accentuation – The Contrast is Limited (http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/12_Contrast/contrast.html), lecture of Monday, April 9, 2001, slides 57–59 (http://people.csail.mit.edu/fredo/ArtAndScienceOfDepiction/12_Contrast/contrast6.pdf); image on slide 57, depiction of dodging and burning on slide 58
[21] C. W. Wyckoff. Experimental extended exposure response film. Society of Photographic Instrumentation Engineers Newsletter, June–July 1962, pp. 16–20.
[22] Michael Goesele, et al., "High Dynamic Range Techniques in Graphics: from Acquisition to Display", Eurographics 2005 Tutorial T7 (http://www.mpi-inf.mpg.de/resources/tmo/EG05_HDRTutorial_Complete.pdf)
[23] The Militarily Critical Technologies List (http://www.fas.org/irp/threat/mctl98-2/p2sec05.pdf) (1998), pages II-5-100 and II-5-107.
[24] Andrew T. Young and Harold Boeschenstein, Jr., Isotherms in the region of Proclus at a phase angle of 9.8 degrees, Scientific Report No. 5, Harvard College Observatory: Cambridge, Massachusetts, 1964.


Contre-jour photo taken directly against the setting sun, causing loss of subject detail and colour, and emphasis of shapes and lines. Medium: colour digital image.

  • Contre-jour 19

Contre-jour emphasizes the outline of the man and the tunnel entrance. The ground reflections show the position of the man. Medium: digital scan from B&W paper print.

Contre-jour, French for 'against daylight', refers to photographs taken when the camera is pointing directly toward a source of light. An alternative term is backlighting.[1][2]

Contre-jour produces backlighting of the subject. This effect usually hides details, causes a stronger contrast between light and dark, creates silhouettes and emphasizes lines and shapes. The sun, or other light source, is often seen as either a bright spot or as a strong glare behind the subject.[2] Fill light may be used to illuminate the side of the subject facing toward the camera. Silhouetting occurs when there is a lighting ratio of 16:1 or more; at lower ratios such as 8:1 the result is instead called low-key lighting.[citation needed]
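Lighting ratios translate directly into photographic stops, since each stop doubles the light. A small worked example (the function name is ours, for illustration only):

```python
import math

def lighting_ratio_in_stops(key, fill):
    """Difference between the lit and shadow sides of the subject,
    expressed in stops: stops = log2(key / fill)."""
    return math.log2(key / fill)

# Ratios from the text: 16:1 silhouettes the subject, 8:1 reads as low-key.
silhouette = lighting_ratio_in_stops(16, 1)   # 4.0 stops
low_key = lighting_ratio_in_stops(8, 1)       # 3.0 stops
```

So the threshold between low-key lighting and a full silhouette is roughly one stop of extra contrast.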

References

[1] "contre-jour" (http://www.thefreedictionary.com/contre-jour). The Free Dictionary. Retrieved 2011-04-16.
[2] Freeman, Michael (2007). The Complete Guide to Light & Lighting in Digital Photography. ILEX, London: Lark Books. pp. 74–75. ISBN



    Night photography

"The Night Sky", photographed facing north at 6,600 feet (2,000 m) in the Mount Hood National Forest


    The Skyline of Singapore as viewed at night

Night photography refers to photographs taken outdoors between dusk and dawn. Night photographers generally have a choice between using artificial light and using a long exposure, exposing the scene for seconds, minutes, or even hours in order to give the film or digital sensor enough time to capture a usable image. With the progress of high-speed films, higher-sensitivity digital image sensors, wide-aperture lenses, and the ever-greater power of urban lights, night photography is increasingly possible using available light.


    View of Al Ain from top of Jebel Hafeet

In the early 1900s, a few notable photographers, Alfred Stieglitz and William Fraser, began working at night. The first photographers known to have produced large bodies of work at night were Brassai and Bill Brandt. In 1932, Brassai published Paris de Nuit, a book of black-and-white photographs of the streets of Paris at night. During World War II, British photographer Brandt took advantage of the black-out conditions to photograph the streets of London by moonlight.

Photography at night found several new practitioners in the 1970s, beginning with the black-and-white photographs that Richard Misrach made of desert flora (1975–77). Joel Meyerowitz made luminous large-format color studies of Cape Cod at nightfall, which were published in his influential book, Cape Light (1979). Jan Staller's twilight color


Early night photograph of Luna Park, Coney Island, from the Detroit Publishing Co. collection, 1905.

    Chay kenar Boulevard in Tabriz

photographs (1977–84) of abandoned and derelict parts of New York City captured uncanny visions of the urban landscape lit by the glare of sodium vapor street lights.

By the 1990s, British-born photographer Michael Kenna had established himself as the most commercially successful night photographer. His black-and-white landscapes were most often set between dusk and dawn in locations that included San Francisco, Japan, France, and England. Some of his most memorable projects depict the Ford Motor Company's Rouge River plant, the Ratcliffe-on-Soar Power Station in the East Midlands in England, and many of the Nazi concentration camps scattered across Germany, France, Belgium, Poland and Austria.

During the beginning of the 21st century, the popularity of digital cameras made it much easier for beginning photographers to understand the complexities of photographing at night. Today, there are hundreds of websites dedicated to night photography.


Typical subjects of night photography include:

- Celestial bodies (see astrophotography): the moon, stars, planets, etc.
- City skylines
- Factories and industrial areas, particularly those that are brightly lit and are emitting smoke or vapour
- Fireworks
- Nightlife or rock concerts
- Caves (see cave photography)
- Streets, with or without cars
- Abandoned buildings and artificial structures that are lit only by moonlight
- Bodies of water that are reflecting moonlight or city lights: lakes, rivers, canals, etc.
- Thunderstorms
- Amusement rides


    Technique and equipment

The length of a night exposure causes the lights on moving cars to streak across the image

The following techniques and equipment are generally used in night photography:

- A tripod is usually necessary due to the long exposure times. Alternatively, the camera may be placed on a steady, flat object, e.g. a table or chair, low wall, window sill, etc.
- A shutter release cable or self-timer is almost always used to prevent camera shake when the shutter is released.
- Manual focus, since autofocus systems usually operate poorly in low-light conditions. Newer digital cameras incorporate a Live View mode which often allows very accurate manual focusing.
- A stopwatch or remote timer, to time very long exposures where the camera's bulb setting is used.

Long exposures and multiple flashes

The long exposure multiple flash technique is a method of night or low-light photography which uses a mobile flash unit to expose various parts of a building or interior during a long exposure.

This technique is often combined with using coloured gels in front of the flash unit to provide different colours in order to illuminate the subject in different ways. It is also common to flash the unit several times during the exposure while swapping the colours of the gels around to mix colours on the final photo. This requires some skill and a lot of imagination, since it is not possible to see how the effects will turn out until the exposure is complete. By using this technique, the photographer can illuminate specific parts of the subject in different colours, creating shadows in ways which would not normally be possible.

Painting with light

When the correct equipment is used, such as a tripod and shutter release cable, the photographer can use long exposures to photograph images of light. For example, when photographing a subject, try switching the exposure to manual and selecting the bulb setting on the camera. Then trip the shutter and photograph your subject while moving a flashlight or any small light in various patterns. Experiment with this technique to produce artistic results; multiple attempts are usually needed to achieve a desired result.


High ISO

With advanced imaging sensors (backside-illuminated CMOS) and sophisticated image processing, low-light photography is possible at high ISO without a tripod or long exposure, even with small-sensor cameras such as the Sony Cyber-shot DSC-RX100, Nikon 1 J2 and Canon PowerShot G1X, which can give good images up to ISO 400.[1]


An exposure blended night image of the Sydney Opera House

San Francisco – Oakland Bay Bridge from Treasure Island (California), taken by Mikl Barton.

Rainbow Bridge viewed from Odaiba

The Garden of Five Senses, Delhi

Amusement rides

Four-image panorama of Washington Park, 30-second exposures each.

The World Trade Center in New York

The Golden Gate Bridge at night.

Taipei 101 at night, fully lit.

The Space Shuttle Columbia launches for its mission to the Hubble Space Telescope

Toronto (30-second exposure). An exposure blended image consisting of 30-, 2.5- and 10-second exposures

University of New South Wales, Sydney (digital, night mode)

Copenhagen at night


Published night photographers

This section includes significant night photographers who have published books dedicated to night photography, and some of their selected works.

- Brassai: Paris de Nuit, Arts et métiers graphiques, 1932.
- Harold Burdekin and John Morrison: London Night, Collins, 1934.
- Jeff Brouws: Inside the Live Reptile Tent, Chronicle Books, 2001. ISBN 0-8118-2824-7
- Alan Delaney: London After Dark, Phaidon Press, 1993. ISBN 0-7148-2870-X
- Neil Folberg: Celestial Nights, Aperture Foundation, 2001. ISBN 0-89381-945-X
- Karekin Goekjian: Light After Dark, Lucinne, Inc. ASIN B0006QOVCG
- Todd Hido: Outskirts, Nazraeli Press, 2002. ISBN 1-59005-028-2
- Peter Hujar: Night, Matthew Marks Gallery/Fraenkel Gallery, 2005. ISBN 1-880146-45-2
- Rolfe Horn: 28 Photographs, Nazraeli Press. ISBN 1-59005-122-X
- Lance Keimig: Night Photography, Finding Your Way In The Dark, Focal Press, 2010. ISBN 978-0-240-81258-8
- Brian Kelly: Grand Rapids: Night After Night, Glass Eye, 2001. ISBN 0-9701293-0-0
- Michael Kenna: The Rouge, RAM Publications, 1995. ISBN 0-9630785-1-8; Night Work, Nazraeli Press, 2000. ISBN 3-923922-83-3
- William Lesch: Expansions, RAM Publications, 1992. ISBN 4-8457-0667-9
- O. Winston Link: The Last Steam Railroad in America, Harry Abrams, 1995. ISBN 0-8109-3575-9
- Tom Paiva: Industrial Night, The Image Room, 2002. ISBN 0-9716928-0-7
- Troy Paiva: Night Vision: The Art of Urban Exploration, Chronicle Books, 2008. ISBN 0-8118-6338-7; Lost America: The Abandoned Roadside West, MBI Publishing, 2003. ISBN 0-7603-1490-X
- Bill Schwab: Bill Schwab: Photographs, North Light Press, 1999. ISBN 0-9765193-0-5; Gathering Calm, North Light Press, 2005. ISBN 0-9765193-2-1
- Jan Staller:[2] Frontier New York, Hudson Hills Press, 1988. ISBN 1-55595-009-4, http://www.janstaller.net/books/frontier-new-york/
- Zabrina Tipton: At Night in San Francisco, San Francisco Guild of the Arts Press, 2006. ISBN 1-4243-1882-3
- Giovanna Tucker: "How to Night Photography", 2011. ISBN 978-1-4657-4423-4
- Volkmar Wentzel: Washington by Night, Fulcrum Publishing, 1998. ISBN 978-1-55591-410-3

References

[2] http://www.janstaller.net/

External links

- Comprehensive tutorials and articles about how to do night photography (http://thenocturnes.com/resources.html) by The Nocturnes
- Photoblog Wiki (http://www.photoblog.com/wiki/Night): wiki article on night photography
- Short notes discussing the meaning and technique of night photography (http://www.nightfolio.co.uk/night_photography_notes_index.html) by David Baldwin
- Photography for night owls (http://pages.cthome.net/rwinkler/nightphotog.htm): how to take photos in the style of Brassai
- Night Photography Guide (http://adcuz.co.uk/how-to-articles/how-to-create-a-long-exposure-photo/): tutorial by Adam Currie

    Multiple exposure

A multiple exposure composite image of a lunar eclipse taken over Hayward, California in 2004.

In photography and cinematography, a multiple exposure is the superimposition of two or more exposures to create a single image, and double exposure has a corresponding meaning in respect of two images. The exposure values may or may not be identical to each other.


Ordinarily, cameras have a sensitivity to light that is a function of time. For example, a one-second exposure is an exposure in which the camera image is equally responsive to light over the exposure time of one second. The criterion for determining that something is a double exposure is that the sensitivity goes up and then back down. The simplest example of a multiple exposure is a double exposure without flash, i.e. two partial exposures are made and then combined into one complete exposure. Some single exposures, such as "flash and blur", use a combination of electronic flash and ambient exposure. This effect can be approximated by a Dirac delta measure (flash) and a constant finite rectangular window, in combination. For example, a sensitivity window comprising a Dirac comb combined with a rectangular pulse is considered a multiple exposure, even though the sensitivity never goes to zero during the exposure.


    Double exposure


Composer Karlheinz Stockhausen, double exposure made using a film camera, 1980

    Double exposure made using a film camera

In photography and cinematography, multiple exposure is a technique in which the camera shutter is opened more than once to expose the film multiple times, usually to different images. The resulting image contains the subsequent image(s) superimposed over the original. The technique is sometimes used as an artistic visual effect and can be used to create ghostly images or to add people and objects to a scene that were not originally there. It is frequently used in photographic hoaxes.

It is considered easiest to have a manual winding camera for double exposures. On automatic winding cameras, as soon as a picture is taken the film is typically wound to the next frame. Some more advanced automatic winding cameras have the option for multiple exposures, but it must be set before making each exposure. Manual winding cameras with a multiple exposure feature can be set to double-expose after making the first exposure.

Since shooting multiple exposures will expose the same frame multiple times, negative exposure compensation must first be set to avoid overexposure. For example, to expose the frame twice with correct exposure, a −1 EV compensation has to be applied, and −2 EV for exposing four times. This may not be necessary when photographing a lit subject in two (or more) different positions against a perfectly dark background, as the background area will be essentially unexposed.

Medium to low light is ideal for double exposures. A tripod may not be necessary if combining different scenes in one shot. In some conditions, for example recording the whole progress of a lunar eclipse in multiple exposures, a stable tripod is essential.

More than two exposures can be combined, with care not to overexpose the film.
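The compensation figures quoted above (−1 EV for a double exposure, −2 EV for four exposures) follow a general rule: n equal partial exposures each need 1/n of the light, i.e. log2(n) stops less. A tiny sketch (the helper name is illustrative):

```python
import math

def multi_exposure_compensation(n):
    """EV compensation so that n equal partial exposures add up to one
    correctly exposed frame: each partial exposure gets log2(n) stops less."""
    return -math.log2(n)

# Two exposures -> -1 EV, four exposures -> -2 EV, as in the text.
double = multi_exposure_compensation(2)
quad = multi_exposure_compensation(4)
```

Non-power-of-two counts work too, e.g. three exposures need about −1.58 EV each.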


Multiple exposure of one person using Adobe Photoshop

Digital technology enables images to be superimposed over each other by using a software photo editor, such as Adobe Photoshop or GIMP. These enable the opacity of the images to be altered and allow an image to be overlaid over another. They can also set the layers to multiply mode, which 'adds' the colors together rather than making the colors of either image pale and translucent. Many digital SLR cameras allow multiple exposures to be made on the same image within the camera without the need for any external software.


    Long exposures

A four-hour-long exposure, using multiple shorter exposures

With traditional film cameras, a long exposure is a single exposure, whereas with electronic cameras a long exposure can be obtained by integrating together many exposures. This averaging also permits there to be a time-windowing function, such as a Gaussian, that weights time periods near the center of the exposure time more strongly. Another possibility for synthesizing a long exposure from multiple exposures is to use an exponential decay in which the current frame has the strongest weight, and previous frames are faded out with a sliding exponential window.
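The two synthesis strategies just described (plain averaging and an exponential-decay sliding window) can be sketched as follows; frames are simplified to lists of pixel luminances, and the decay constant is an illustrative choice:

```python
def average_exposure(frames):
    """Integrate many short exposures into one long exposure by
    simple averaging, weighting every frame equally."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def decaying_exposure(frames, decay=0.5):
    """Exponential-decay synthesis: the most recent frame carries the
    strongest weight, and older frames fade out geometrically."""
    acc = list(frames[0])
    for frame in frames[1:]:
        acc = [decay * a + (1.0 - decay) * f for a, f in zip(acc, frame)]
    return acc
```

With averaging, a car's headlights contribute equally along their whole path; with the decaying window, the trail fades toward its older end.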

Scanning film with multiple exposure

The multiple exposure technique can also be used when scanning transparencies such as slides, film or negatives with a film scanner, in order to increase dynamic range. With multiple exposure, the original is scanned several times with different exposure intensities. An overexposed scan lights the shadow areas of the image and enables the scanner to capture more image information there; conversely, an underexposed scan gathers more detail in the light areas. Afterwards the data can be combined into a single HDR image with increased dynamic range.

Among the scanning software solutions which implement multiple exposure are VueScan and SilverFast.



    Camera obscura

    A drawing of a camera obscura

Camera obscura for Daguerreotype called "Grand Photographe", produced by Charles Chevalier (Musée des Arts et Métiers)

A projection of an image of the New Royal Palace in Prague Castle created with a camera obscura


The camera obscura (Latin: camera for "vaulted chamber/room", obscura for "dark", together "darkened chamber/room"; plural: camera obscuras or camerae obscurae) is an optical device that projects an image of its surroundings on a screen. It is used in drawing and for entertainment, and was one of the inventions that led to photography and the camera. The device consists of a box or room with a hole in one side. Light from an external scene passes through the hole and strikes a surface inside, where it is reproduced, upside-down, but with color and perspective preserved. The image can be projected onto paper, and can then be traced to produce a highly accurate representation.

The largest camera obscura in the world is on Constitution Hill in Aberystwyth, Wales.[1]

Using mirrors, as in the 18th-century overhead version (illustrated in the History section below), it is possible to project a right-side-up image. Another more portable type is a box with an angled mirror projecting onto tracing paper placed on the glass top, the image being upright as viewed from the back.

As the pinhole is made smaller, the image gets sharper, but the projected image becomes dimmer. With too small a pinhole, however, the sharpness worsens, due to diffraction. Some practical camera obscuras use a lens rather than a pinhole because it allows a larger aperture, giving a usable brightness while maintaining focus. (See pinhole camera for construction information.)
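The trade-off between geometric blur (pinhole too large) and diffraction blur (pinhole too small) has a commonly quoted optimum, often attributed to Lord Rayleigh: d ≈ 1.9·√(f·λ), where f is the hole-to-screen distance and λ the wavelength of light. A quick calculation (function name ours, for illustration):

```python
import math

def optimal_pinhole_diameter_mm(distance_mm, wavelength_mm=550e-6):
    """Rule-of-thumb pinhole diameter balancing geometric and
    diffraction blur: d = 1.9 * sqrt(distance * wavelength).
    Defaults to 550 nm (green light); all lengths in millimetres."""
    return 1.9 * math.sqrt(distance_mm * wavelength_mm)

# A box 100 mm deep wants a hole of roughly 0.45 mm.
d = optimal_pinhole_diameter_mm(100.0)
```

Because the optimum grows only with the square root of the box depth, even a very deep camera obscura still needs a hole well under a centimetre for a sharp image.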



Camera obscura in Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers

The camera obscura has been known to scholars since the time of Mozi and Aristotle.[2] The first surviving mention of the principles behind the pinhole camera or camera obscura belongs to Mozi (Mo-Ti) (470 to 390 BCE), a Chinese philosopher and the founder of Mohism.[3] Mozi referred to this device as a "collecting plate" or "locked treasure room."[4]

The Greek philosopher Aristotle (384 to 322 BCE) understood the optical principle of the pinhole camera.[5] He viewed the crescent shape of a partially eclipsed sun projected on the ground through the holes in a sieve and through the gaps between the leaves of a plane tree. In the 4th century BCE, Aristotle noted that "sunlight travelling through small openings between the leaves of a tree, the holes of a sieve, the openings in wickerwork, and even interlaced fingers will create circular patches of light on the ground." Euclid's Optics (ca. 300 BCE) presupposed the camera obscura as a demonstration that light travels in straight lines.[6] In the 4th century, Greek scholar Theon of Alexandria observed that "candlelight passing through a pinhole will create an illuminated spot on a screen that is directly in line with the aperture and the center of the candle."

In the 6th century, Byzantine mathematician and architect Anthemius of Tralles (most famous for designing the Hagia Sophia) used a type of camera obscura in his experiments.

In the 9th century, Al-Kindi (Alkindus) demonstrated that "light from the right side of the flame will pass through the aperture and end up on the left side of the screen, while light from the left side of the flame will pass through the aperture and end up on the right side of the screen."

Alhazen also gave the first clear description[7] and early analysis[8] of the camera obscura and pinhole camera. While Aristotle, Theon of Alexandria, Al-Kindi (Alkindus) and the Chinese philosopher Mozi had earlier described the effects of a single light passing through a pinhole, none of them suggested that what is being projected onto the screen is an image of everything on the other side of the aperture. Alhazen was the first to demonstrate this with his lamp experiment, in which several different light sources are arranged across a large area. He was thus the first to successfully project an entire image from outdoors onto a screen indoors with the camera obscura.

The Song Dynasty Chinese scientist Shen Kuo (1031–1095) experimented with a camera obscura, and was the first to apply geometrical and quantitative attributes to it in his book of 1088 AD, the Dream Pool Essays.[9] However, Shen Kuo alluded to the fact that the Miscellaneous Morsels from Youyang, written in about 840 AD by Duan Chengshi (d. 863) during the Tang Dynasty (618–907), mentioned inverting the image of a Chinese pagoda tower beside a seashore.[9] In fact, Shen makes no assertion that he was the first to experiment with such a device.[9] Shen wrote of Cheng's book: "[Miscellaneous Morsels from Youyang] said that the image of the pagoda is inverted because it is beside the sea, and that the sea has that effect. This is nonsense. It is a normal principle that the image is inverted after passing through the small hole."[9]

In 13th-century England, Roger Bacon described the use of a camera obscura for the safe observation of solar eclipses.[10] Its potential as a drawing aid may have been familiar to artists by as early as the 15th century; Leonardo da Vinci (1452–1519) described the camera obscura in the Codex Atlanticus. Johann Zahn's Oculus Artificialis Teledioptricus Sive Telescopium, published in 1685, contains many descriptions, diagrams, illustrations and sketches of both the camera obscura and the magic lantern.


Giambattista della Porta is said to have perfected the camera obscura. He described it as having a convex lens in later editions of his Magia Naturalis (1558–1589), the popularity of which helped spread knowledge of it. He compared the shape of the human eye to the lens in his camera obscura, and provided an easily understandable example of how light could bring images into the eye. One chapter of Conte Algarotti's Saggio sopra Pittura (1764) is dedicated to the use of a camera ottica ("optic chamber") in painting.[11]

Camera obscura, from a manuscript of military designs. 17th century, possibly Italian.

The 17th-century Dutch Masters, such as Johannes Vermeer, were known for their magnificent attention to detail. It has been widely speculated that they made use of such a camera, but the extent of its use by artists at this period remains a matter of considerable controversy, recently revived by the Hockney–Falco thesis.

The term "camera obscura" itself was first used by the German astronomer Johannes Kepler in 1604.[12] The English physician and author Sir Thomas Browne speculated upon the interrelated workings of optics and the camera obscura in his 1658 discourse The Garden of Cyrus thus:

For at the eye the Pyramidal rayes from the object, receive a decussation, and so strike a second base upon the Retina or hinder coat, the proper organ of Vision; wherein the pictures from objects are represented, answerable to the paper, or wall in the dark chamber; after the decussation of the rayes at the hole of the horny-coat, and their refraction upon the Christalline humour, answering the foramen of the window, and the convex or burning-glasses, which refract the rayes that enter it.

Four drawings by Canaletto, representing Campo San Giovanni e Paolo in Venice, obtained with a camera obscura (Venice, Gallerie dell'Accademia)

Early models were large, comprising either a whole darkened room or a tent (as employed by Johannes Kepler). By the 18th century, following developments by Robert Boyle and Robert Hooke, more easily portable models became available. These were extensively used by amateur artists while on their travels, but they were also employed by professionals, including Paul Sandby, Canaletto and Joshua Reynolds, whose camera (disguised as a book) is now in the Science Museum (London). Such cameras were later adapted by Joseph Nicéphore Niépce, Louis Daguerre and William Fox Talbot for creating the first photographs.



A freestanding room-sized camera obscura at the University of North Carolina at Chapel Hill. One of the pinholes can be seen in the panel to the left of the door.

A freestanding room-sized camera obscura in the shape of a camera located in San Francisco at the Cliff House in Ocean Beach

Image of the South Downs of Sussex as seen in the camera obscura of Foredown Tower, Portslade, England

A camera obscura created by Mark Ellis is built in the style of an Adirondack mountain cabin, and sits by the shore of Lake Flower in the village of Saranac Lake, NY.

19th-century artist using a camera obscura to outline his subject

Image of a modern-day camera obscura

A modern-day camera obscura in use indoors

In popular culture

In the Mad Men (season 3) episode "Seven Twenty Three", Don Draper and Carlton Hanson help their children Sally Draper and Ernie Hanson's teacher, Suzanne Farrell, cut holes in cardboard boxes to create cameras obscurae, with which the kids and Miss Farrell watch the total solar eclipse of July 20, 1963; additionally, Betty Draper and Henry Francis encounter a couple in downtown Ossining using a similar device. Miss Farrell explains to the children and their fathers how it works and cautions against looking directly into the sun (which Betty Draper does, and as a result feels faint afterward).[13]

Notes
[1] Cliff Railway and Camera Obscura, Aberystwyth (http://www.cardiganshirecoastandcountry.com/cliff-railway-camera-obscura-aberystwyth.php)
[2] Jan Campbell (2005). "Film and cinema spectatorship: melodrama and mimesis" (http://books.google.com/books?id=lOEqvkmSxhsC&pg=PA114&dq&hl=en#v=onepage&q=&f=false). Polity. p. 114. ISBN 0-7456-2930-X.
[3] Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd. p. 82.
[4] Ouellette, Jennifer (2005). Black Bodies and Quantum Cats: Tales from the Annals of Physics. London: Penguin Books Ltd. p. 52.
[5] Aristotle, Problems, Book XV.
[6] The Camera Obscura: Aristotle to Zahn (http://web.archive.org/web/20080420165232/http://www.acmi.net.au/AIC/CAMERA_OBSCURA.html)
[7] :
[8] :
[9] Needham, Volume 4, Part 1, p. 98.
[10] BBC – The Camera Obscura (http://www.bbc.co.uk/dna/h2g2/A2875430)
[12] History of Photography and the Camera – Part 1: The first photographs (http://inventors.about.com/library/inventors/blphotography.)



References
Hill, Donald R. (1993). Islamic Science and Engineering. Edinburgh University Press. p. 70.
Lindberg, D.C. (1976). Theories of Vision from Al-Kindi to Kepler. Chicago and London: The University of Chicago Press.
Nazeef, Mustapha (1940). "Ibn Al-Haitham As a Naturalist Scientist" (Arabic). Published proceedings of the Memorial Gathering of Al-Hacan Ibn Al-Haitham, 21 December 1939. Egypt Printing.
Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd.
Omar, S.B. (1977). "Ibn al-Haitham's Optics". Bibliotheca Islamica, Chicago.
Wade, Nicholas J.; Finger, Stanley (2001). "The eye as an optical instrument: from camera obscura to Helmholtz's perspective". Perception 30 (10): 1157–1177. doi:10.1068/p3210. PMID 11721819.

External links
Timeline – The Camera Obscura in History. The Camera Obscura Journal, Sfumato Press (http://www.obscurajournal.com/history.php). One error on page: Nicéphore Niépce did not use a pinhole camera in 1827; he used a camera obscura with a lens.
An Appreciation of the Camera Obscura (http://brightbytes.com/cosite/cohome.html)
The Camera Obscura in San Francisco (http://www.giantcamera.com/) – the Giant Camera of San Francisco at Ocean Beach, added to the National Register of Historic Places in 2001
Camera Obscura and World of Illusions (http://www.camera-obscura.co.uk/), Edinburgh
Dumfries Museum & Camera Obscura, Dumfries, Scotland (http://www.dumfriesmuseum.demon.co.uk/dumfmuse.html)
Vermeer and the Camera Obscura (http://www.bbc.co.uk/history/british/empire_seapower/vermeer_camera_01.shtml) by Philip Steadman
Paleo-camera (http://www.paleo-camera.com/) – the camera obscura and the origins of art
List of all known Camera Obscura (http://www.foredown.virtualmuseum.info/camera_obscuras/default.asp)
Willett & Patteson (http://www.amazingcameraobscura.co.uk) – camera obscura hire and creation
Camera Obscura and Outlook Tower, Edinburgh, Scotland (http://www.scottish-places.info/features/featurefirst1049.html)
George T Keene builds custom camera obscuras like the Griffith Observatory CO in Los Angeles (http://www.cameraobscuras.com)
Camera obscura in Trondheim, Norway (http://www.ntnu.no/1-2-tre/06), built by students of architecture and engineering from the Norwegian University of Science and Technology (NTNU)

Pinhole camera

Holes in the leaf canopy project images of a solar eclipse on the ground.

A home-made pinhole camera (on the left), wrapped in black plastic to prevent light leaks, and related developing supplies.

A common use of the pinhole camera is to capture the movement of the sun over a long period of time. This type of photography is called solargraphy.

The image may be projected onto a translucent screen for real-time viewing (popular for observing solar eclipses; see also camera obscura), or can expose photographic film or a charge-coupled device (CCD). Pinhole cameras with CCDs are often used for surveillance[citation needed] because they are difficult to detect.

Pinhole devices provide safety for the eyes when viewing solar eclipses because the event is observed indirectly, the diminished intensity of the pinhole image being harmless compared with the full glare of the Sun itself.[citation needed]

Worldwide Pinhole Photography Day is held on the last Sunday of April.[1]

Invention of the pinhole camera

In the 10th century, Persian scientist Ibn al-Haytham (Alhazen) wrote about naturally occurring rudimentary pinhole cameras. For example, light may travel through the slits of wicker baskets or the crossing of tree leaves.[2] (The circular dapples on a forest floor, actually pinhole images of the sun, can be seen to have a bite taken out of them during partial solar eclipses, opposite to the position of the moon's actual occultation of the sun, because of the inverting effect of pinhole lenses.)

Alhazen published this idea in the Book of Optics in 1021 AD. He improved on the camera after realizing that the smaller the pinhole, the sharper the image (though the less light). He provides the first clear description for construction of a camera obscura (Lat. dark chamber).

In the 5th century BC, the Mohist philosopher Mozi in ancient China mentioned the effect of an inverted image forming through a pinhole.[3] The image of an inverted Chinese pagoda is mentioned in Duan Chengshi's (d. 863) book Miscellaneous Morsels from Youyang, written during the Tang Dynasty (618–907).[4] Along with experimenting with the pinhole camera and the burning mirror of the ancient Mohists, the Song Dynasty (960–1279 CE) Chinese scientist Shen Kuo (1031–1095) experimented with camera obscura and was the first to establish geometrical and quantitative attributes for it.[4]

Ancient pinhole camera effect caused by balistrarias in the Castelgrande in Bellinzona

In the 13th century, Robert Grosseteste and Roger Bacon commented on the pinhole camera.[5] Between 1000 and 1600, men such as Ibn al-Haytham, Gemma Frisius, and Giambattista della Porta wrote on the pinhole camera, explaining why the images are upside down.

Around 1600, Giambattista della Porta added a lens to the pinhole camera.[6][7] It was not until 1850 that a Scottish scientist by the name of Sir David Brewster actually took the first photograph with a pinhole camera. Up until recently it was believed that Brewster himself coined the term "pinhole" in "The Stereoscope"[citation needed]. The earliest reference to the term "pinhole" has been traced back to almost a century before Brewster, to James Ferguson's Lectures on select Subjects.[8][9] Sir William Crookes and William de Wiveleslie Abney were other early photographers to try the pinhole technique.[10]


    Selection of pinhole size

An example of a 20-minute exposure taken with a pinhole camera

A photograph taken with a pinhole camera using an exposure time of 2 s

Within limits, a smaller pinhole (with a thinner surface that the hole goes through) will result in sharper image resolution because the projected circle of confusion at the image plane is practically the same size as the pinhole. An extremely small hole, however, can produce significant diffraction effects and a less clear image due to the wave properties of light. Additionally, vignetting occurs as the diameter of the hole approaches the thickness of the material in which it is punched, because the sides of the hole obstruct the light entering at anything other than 90 degrees.

The best pinhole is perfectly round (since irregularities cause higher-order diffraction effects), and in an extremely thin piece of material. Industrially produced pinholes benefit from laser etching, but a hobbyist can still produce pinholes of sufficiently high quality for photographic work.

Some examples of photographs taken using a pinhole camera.

One method is to start with a sheet of brass shim or metal reclaimed from an aluminium drinks can or tin foil/aluminum foil, use fine sandpaper to reduce the thickness of the centre of the material to the minimum, before carefully creating a pinhole with a suitably sized needle.

A method of calculating the optimal pinhole diameter was first attempted by Jozef Petzval. The crispest image is obtained using a pinhole size determined by the formula[11]

d = √(2fλ)

where d is the pinhole diameter, f is the focal length (distance from pinhole to image plane) and λ is the wavelength of light.

For standard black-and-white film, a wavelength of light corresponding to yellow-green (550 nm) should yield optimum results. For a pinhole-to-film distance of 1 inch (25 mm), this works out to a pinhole 0.17 mm in diameter.[12] For 5 cm, the appropriate diameter is 0.23 mm.[13]
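The formula and the two worked examples above can be checked numerically; the following Python sketch (the function name is illustrative, not from any photography library) simply evaluates d = √(2fλ):

```python
import math

def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9):
    """Optimal pinhole diameter d = sqrt(2 * f * lambda), as given above.

    focal_length_m: pinhole-to-film distance in metres
    wavelength_m: wavelength of light in metres (default 550 nm, yellow-green)
    Returns the diameter in metres.
    """
    return math.sqrt(2 * focal_length_m * wavelength_m)

# 1 inch (25.4 mm) focal length -> about 0.17 mm
print(round(optimal_pinhole_diameter(0.0254) * 1000, 2))  # 0.17
# 5 cm focal length -> about 0.23 mm
print(round(optimal_pinhole_diameter(0.05) * 1000, 2))    # 0.23
```

Both results match the 0.17 mm and 0.23 mm figures quoted in the text.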

The depth of field is basically infinite, but this does not mean that no optical blurring occurs. The infinite depth of field means that image blur depends not on object distance, but on other factors, such as the distance from the aperture to the film plane, the aperture size, and the wavelength(s) of the light source.


Pinhole camera construction

Pinhole cameras can be handmade by the photographer for a particular purpose. In its simplest form, the photographic pinhole camera can consist of a light-tight box with a pinhole in one end, and a piece of film or photographic paper wedged or taped into the other end. A flap of cardboard with a tape hinge can be used as a shutter. The pinhole may be punched or drilled using a sewing needle or small diameter bit through a piece of tinfoil or thin aluminum or brass sheet. This piece is then taped to the inside of the light-tight box behind a hole cut through the box. A cylindrical oatmeal container may be made into a pinhole camera.

Pinhole cameras can be constructed with a sliding film holder or back so the distance between the film and the pinhole can be adjusted. This allows the angle of view of the camera to be changed and also the effective f-stop ratio of the camera. Moving the film closer to the pinhole will result in a wide angle field of view and a shorter exposure time. Moving the film farther away from the pinhole will result in a telephoto or narrow angle view and a longer exposure time.

Pinhole cameras can also be constructed by replacing the lens assembly in a conventional camera with a pinhole. In particular, compact 35 mm cameras whose lens and focusing assembly have been damaged can be reused as pinhole cameras, maintaining the use of the shutter and film winding mechanisms. As a result of the enormous increase in f-number while maintaining the same exposure time, one must use a fast film in direct sunshine.

Pinholes (homemade or commercial) can be used in place of the lens on an SLR. Use with a digital SLR allows metering and composition by trial and error, and is effectively free, so is a popular way to try pinhole photography.[14]

Unusual materials have been used to construct pinhole cameras, e.g., a Chinese roast duck by Martin Cheung.[15]

    Calculating the f-number & required exposure


A pinhole camera made from an oatmeal box. The pinhole is in the center. The black plastic which normally surrounds this camera (see picture above) has been removed.

A fire hydrant photographed by a pinhole camera made from a shoe box, exposed on photographic paper (top). The length of the exposure was 40 seconds. There is noticeable flaring in the bottom-right corner of the image, likely due to extraneous light entering the camera box.

The f-number of the camera may be calculated by dividing the distance from the pinhole to the imaging plane (the focal length) by the diameter of the pinhole. For example, a camera with a 0.5 mm diameter pinhole and a 50 mm focal length would have an f-number of 50/0.5, or 100 (f/100 in conventional notation).

Due to the large f-number of a pinhole camera, exposures will often encounter reciprocity failure.[16] Once exposure time has exceeded about 1 second for film or 30 seconds for paper, one must compensate for the breakdown in linear response of the film/paper to intensity of illumination by using longer exposures.
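As a small sketch of the arithmetic above (function names are illustrative; the reciprocity-failure correction itself, which depends on the particular film or paper, is not modelled):

```python
def pinhole_f_number(focal_length_mm, pinhole_diameter_mm):
    """f-number = focal length divided by pinhole diameter."""
    return focal_length_mm / pinhole_diameter_mm

def exposure_factor(pinhole_f, metered_f):
    """Factor by which exposure time grows relative to a meter reading
    at metered_f: exposure scales with the square of the f-number ratio
    (before any reciprocity-failure compensation)."""
    return (pinhole_f / metered_f) ** 2

# 50 mm focal length with a 0.5 mm pinhole gives f/100
print(pinhole_f_number(50, 0.5))        # 100.0
# Relative to a meter reading taken at f/16, roughly 39x the exposure time
print(round(exposure_factor(100, 16)))  # 39
```

In practice the resulting long exposures are then lengthened further to correct for reciprocity failure, as described above.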

Other special features can be built into pinhole cameras, such as the ability to take double images by using multiple pinholes, or the ability to take pictures in cylindrical or spherical perspective by curving the film plane.

These characteristics can be used for creative purposes. Once considered an obsolete technique from the early days of photography, pinhole photography is from time to time a trend in artistic photography.

Related cameras, image forming devices, or developments from it include Franke's widefield pinhole camera, the pinspeck camera, and the pinhead mirror.

NASA (via the NASA Institute for Advanced Concepts) has funded initial research into the New Worlds Mission project, which proposes to use a pinhole camera with a diameter of 10 m and a focal length of 200,000 km to image Earth-sized planets in other star systems.

    Coded apertures

A non-focusing coded-aperture optical system may be thought of as multiple pinhole cameras in conjunction. By adding pinholes, light throughput and thus sensitivity are increased. However, multiple images are formed, usually requiring computer deconvolution.

References
[1] http://www.bbc.co.uk/news/in-pictures-22150973
[2] "Light Through the Ages" (http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Light_1.html).
[3] Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd. p. 82.
[4] Needham, Joseph (1986). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part 1, Physics. Taipei: Caves Books Ltd. p. 98.
[5] A reconsideration of Roger Bacon's theory of pinhole images (http://www.springerlink.com/index/R2717G210K21R7R2.pdf)
[6] History of Photography and the Camera – Pinhole Camera to Daguerreotype (http://inventors.about.com/library/inventors/blphotography.htm)
[7] http://www-history.mcs.st-andrews.ac.uk/Biographies/Porta.html
[9] What is a Pinhole Camera? (http://www.pinhole.cz/en/pinholecameras/whatis.html)


[10] Pinhole photography history (http://photo.net/learn/pinhole/pinhole)
[11] Rayleigh (1891). "Lord Rayleigh on Pin-hole Photography" (http://idea.uwosh.edu/nick/rayleigh.pdf), Philosophical Magazine, vol. 31, pp. 87–99, presents his formal analysis, but the layman's formula "pinhole radius = √(fλ)" appears in Strutt, J.W., Lord Rayleigh (1891), "Some applications of photography", Nature, vol. 44, p. 254.
[12] Equation for calculation with f = 1 in, using Google for evaluation (http://www.google.com/search?q=sqrt(2*1in*550nm)=)
[13] Equation for calculation with f = 5 cm, using Google for evaluation (http://www.google.com/search?q=sqrt(2*5cm*550nm)=)
[14] http://www.pcw.co.uk/personal-computer-world/features/2213298/hands-digital-pinhole-camera
[15] http://www.urbanphoto.net/blog/2010/11/25/how-a-roast-duck-sees-chinatown/
[16] http://www.nancybreslin.com/pinholetech.html

External links
Pinhole Photography by Vladimir Zivkovic (http://www.behance.net/gallery/PICTORIALISM-Winter-Pinhole-Photography/897715)
Worldwide Pinhole Photography Day website (http://www.pinholeday.org/gallery/index.php)
An easy way to convert a DSLR to a pinhole camera (http://www.alistairscott.com/howto/pinhole/)
Pinhole Photography and Camera Design Calculators (http://www.mrpinhole.com/)
Illustrated history of cinematography (http://www.precinemahistory.net/)
How to Make and Use a Pinhole Camera (http://www.kodak.com/global/en/consumer/education/lessonPlans/pinholeCamera/pinholeCanBox.shtml)
Oregon Art Beat: Pinhole Photos by Zeb Andrews (http://watch.opb.org/video/2364990891)


Stereoscopy

Pocket stereoscope with original test image. Used by military to examine stereoscopic pairs of aerial photographs


    View of Boston, c. 1860; an early stereoscopiccard for viewing a scene from nature

Stereoscopy (also called stereoscopics or 3D imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from the Greek "στερεός" (stereos), "firm, solid"[1] + "σκοπέω" (skopeō), "to look", "to see".[2]

Most stereoscopic methods present two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3D depth. This technique is distinguished from 3D displays that display an image in three full dimensions, allowing the observer to increase information about the 3-dimensional objects being displayed by head and eye movements.


Stereoscopy creates the illusion of three-dimensional depth from given two-dimensional images. Human vision, including the perception of depth, is a complex process which only begins with the acquisition of visual information taken in through the eyes; much processing ensues within the brain, as it strives to make intelligent and meaningful sense of the raw information provided. One of the very important visual functions that occur within the brain as it interprets what the eyes see is


Kaiserpanorama consisted of a multi-station viewing apparatus and sets of stereo slides. Patented by A. Fuhrmann around 1890.

Company of ladies watching stereoscopic photographs, painting by Jacob Spoel, before 1868. A very early depiction of people using a stereoscope.


that of assessing the relative distances of various objects from the viewer, and the depth dimension of those same perceived objects. The brain makes use of a number of cues to determine relative distances and depth in a perceived scene, including:[3]

• Stereopsis
• Accommodation of the eye
• Overlapping of one object by another
• Subtended visual angle of an object of known size
• Linear perspective (convergence of parallel edges)
• Vertical position (objects higher in the scene generally tend to be perceived as further away)
• Haze, desaturation, and a shift to bluishness
• Change in size of textured pattern detail

(All the above cues, with the exception of the first two, are present in traditional two-dimensional images such as paintings, photographs, and television.)

Stereoscopy is the production of the illusion of depth in a photograph, movie, or other two-dimensional image by presenting a slightly different image to each eye, and thereby adding the first of these cues (stereopsis) as well. Both of the 2D offset images are then combined in the brain to give the perception of 3D depth. It is important to note that since all points in the image focus at the same plane regardless of their depth in the original scene, the second cue, focus, is still not duplicated and therefore the illusion of depth is incomplete. There are also primarily two effects of stereoscopy that are unnatural for human vision: first, the mismatch between convergence and accommodation, caused by the difference between an object's perceived position in front of or behind the display or screen and the real origin of that light; and second, possible crosstalk between the eyes, caused by imperfect image separation by some methods.

Although the term "3D" is ubiquitously used, it is also important to note that the presentation of dual 2D images is distinctly different from displaying an image in three full dimensions. The most notable difference is that, in the case of "3D" displays, the observer's head and eye movement will not increase information about the 3-dimensional objects being displayed. Holographic displays or volumetric displays are examples of displays that do not have this limitation. Similar to the technology of sound reproduction, in which it is not possible to recreate a full 3-dimensional sound field merely with two stereophonic speakers, it is likewise an overstatement of capability to refer to dual 2D images as being "3D". The accurate term "stereoscopic" is more cumbersome than the common misnomer "3D", which has been entrenched after many decades of unquestioned misuse. Although most stereoscopic displays do not qualify as real 3D displays, all real 3D displays are also stereoscopic displays because they meet the lower criteria as well.

Most 3D displays use this stereoscopic method to convey images. It was first invented by Sir Charles Wheatstone in 1838.[4][5]


    Wheatstone mirror stereoscope

Wheatstone originally used his stereoscope (a rather bulky device)[6] with drawings because photography was not yet available, yet his original paper seems to foresee the development of a realistic imaging method:[7]

    For the purposes of illustration I have emp