


Testing visual search performance using retinal light scanning as a future wearable low vision aid

Lin, S.-K. V., Seibel, E. J., and Furness, T. A. (2003), two-volume special issue on Mediated Reality, Ed. S. Mann & W. Barfield

International Journal of Human-Computer Interaction 15(2):245-263.

INTRODUCTION

According to the National Advisory Eye Council (National Advisory Council, 1998), low vision is defined as having a “visual acuity with best correction in the better eye worse than or equal to 20/200 or a visual field extent of less than 20 degrees in diameter.” Applying this definition, over 3 million Americans are classified as having low vision. When this definition is broadened to visual problems which impede the enjoyment of everyday activities, the number rises to nearly 14 million Americans (National Advisory Council, 1998).

Wearable systems that include head-mounted displays (HMDs) are being designed to improve both the scanning ability and standard clinical measures of vision for people living with visual disabilities or low vision (Massoff, 1998; Peli, 1994). Although many studies have focused on the optical design and image processing issues of HMDs as future low vision aids (Massoff and Rickman, 1992; Peli, 1992; Greene et al., 1992; Everingham et al., 1998), these wearable systems have rarely been evaluated in terms of how well the visually disabled person accomplishes everyday tasks (Geruschat et al., 1999; Oritz et al., 1998). Furthermore, in the case of evaluating navigational or scanning ability, there are no studies in the research literature which test and compare the performance of various interface designs of HMDs as low vision aids.

In this study, we integrated both aspects of HMD design and performance evaluation into a single study of scanning ability. The HMD low vision aid is comprised of a novel retinal light scanning display known as the virtual retinal display (VRD; see below), a video camera, and electronics. Currently, the VRD system is tethered to an electrical outlet and the electronics are a camera-to-display image converter. However, the National Science Foundation is supporting efforts to redesign the retinal light scanning display technology into a low-cost vision aid with a wearable computer used for real-time image conversion, enhancement, and processing. As part of this effort, a portable VRD system has been converted into an HMD low vision aid to test various display interface modes, from fully augmented viewing to camera-only viewing. The scanning performance of low vision volunteers is measured for each of the display interface modes, as a direct corollary to their navigational ability. The results of this study will be used to design the VRD as a wearable low vision aid.

Scanning ability

Among those with low vision, a major concern is the inability to travel safely, independently, and comfortably. Therefore, it would be intuitive to reason that visual function and mobility performance are substantially correlated, and many researchers have shown this relationship (Geruschat et al., 1998; Haymes et al., 1996; Marron and Bailey, 1982; Kuyk et al., 1998 vol. 7). These studies have shown that clinical measures, such as visual acuity, contrast sensitivity, and visual field, are predictive measures of mobility performance.

However, clinical assessments of vision loss are not the only indicators of mobility performance, for mobility involves a visually complex process. Dodds and Davis (1987) recognized that perceptual visual functions such as scanning ability were significantly correlated with mobility performance. Kuyk et al. (1998 vol. 7) evaluated the effect of perceptual visual functions using scanning ability, motion sensitivity, and figure-ground discrimination tests, and found that scanning ability was a more important predictor of mobility performance than contrast sensitivity. In addition, Geruschat and Smith (1997) have utilized scanning ability training tests to improve the mobility performance of low vision individuals. We have selected the scanning ability task not only because it is an important predictor of mobility performance, but also because the task requires motor skills and visual search times that can be easily measured.

The Virtual Retinal Display Technology

The VRD is a unique display interface which scans a modulated, low power laser beam toward the eye and onto the retina, creating a virtual image (Furness and Kollin, 1995). The technology is a derivative of the scanning laser ophthalmoscope (SLO) (Webb et al., 1980), in which a coherent light source is scanned across the retina for the purpose of imaging the retina. Occasionally the SLO was used as a display. Mainster et al. (1982) reported that as the laser beam was modulated, the patient would perceive an image; thus, retinal light scanning was proposed as a new technology for overcoming various vision disabilities.

The VRD is a retinal light scanning display that was developed at the University of Washington, currently designed to accept any VGA resolution video source as an input. To produce a full color or monochrome image, a modulated, low power laser beam is scanned onto the retina in a raster fashion at frequencies of 15.75 kHz horizontally and 60 Hz vertically. The delivery optics converge the raster scan into a 0.8 mm exit pupil having an approximate 40 x 30 degree field of view. The entrance pupil of the eye must be aligned with the exit pupil of the VRD to view the image. A detailed explanation of the VRD can be found in Johnston and Willey (1995).
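As a back-of-the-envelope check (ours, not the authors'), the quoted scan rates determine how many raster lines are drawn per vertical refresh; the doubling to 525 lines assumes an interlaced raster, which the paper does not state:

```python
# Illustrative arithmetic only: lines per vertical refresh implied by the
# quoted 15.75 kHz horizontal and 60 Hz vertical scan rates.
h_rate_hz = 15_750   # horizontal scan rate
v_rate_hz = 60       # vertical refresh rate
lines_per_refresh = h_rate_hz / v_rate_hz
print(lines_per_refresh)        # 262.5
print(2 * lines_per_refresh)    # 525.0 if the raster is interlaced (assumed)
```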

The VRD and Low Vision

Head-mounted displays most commonly use cathode ray tubes (CRTs) or liquid crystal displays (LCDs) as their pixelated source. Miniature CRTs and LCDs are limited in terms of their brightness, contrast, and resolution, which reduces their effectiveness for low vision individuals. Only recently has there been an alternative mini-display technology, the VRD, which is brighter, higher in both contrast and resolution, and has unprecedented depth of focus. The safety issue concerning the higher brightness levels of the VRD was addressed by Viirre et al. (1990), who demonstrated operational levels well below the maximum permissible exposure limits set by the American National Standards Institute (ANSI).

In previous low vision studies, the clinical assessment of visual acuity improved across a wide range of visual diseases when using a retinal light scanning display (Webb and Hughes, 1981; Viirre et al., 1998; Kleweno et al., 1999). In performance evaluations of reading at matched luminance levels, Kleweno et al. (1999) reported that 2 out of 13 subjects showed a significant increase in reading speed, and 6 out of 13 showed increased visual acuity when using the VRD versus a CRT. Since visual acuity has been shown to be significantly correlated with mobility performance (Geruschat et al., 1998; Haymes et al., 1996), a wearable form of the VRD (WVRD) is expected to increase the mobility performance of some low vision individuals. Unlike conventional HMDs, the higher illuminance and contrast levels due to the scanning laser light will allow the low vision aids to be used effectively in bright outdoor environments and in augmented (see-through) modes of display.

Recognizing the advantages of retinal light scanning, the purpose of this study is to test various interface modes using the VRD as a wearable low vision aid. Since scanning ability is an important predictor of mobility performance, it was measured in a simulated environment. Specifically, four different display interface modes (DIMs) of the WVRD were compared with respect to an individual's scanning ability. The four DIMs ranged from a fully augmented or see-through mode to a mode that completely occluded all natural viewing of the surroundings. The research goals of this study were a performance evaluation of scanning ability with respect to: (1) the level of contrast in the surrounding objects to be searched and identified; (2) the design of the display interface mode (DIM) for a given contrast level; and (3) the correlation between scanning performance and the subject's natural visual acuity.

METHODS

Subjects

The subjects were student volunteers who currently attend the University of Washington. The two criteria for screening the subjects were a natural visual acuity of 20/200 or worse and functional retinal vision in their right eye. Informed consent was obtained from each subject before the investigation. The subjects were given a chance to ask any questions concerning the experiment at this time.

Vision Assessments

Once the subjects agreed to and signed the consent form, general information was obtained. This information included the subject's name, sex, age, primary diagnosis, and notes of any previous mobility training. Traditional measures of visual function were also obtained. These included visual acuity, contrast sensitivity, and visual fields. All measures were performed only on the right eye without any optical corrections.

Visual acuity was measured using a Lighthouse ET-DRS acuity chart at a luminance of 85 cd/m^2. Beginning at a testing distance of 3 m, the subject read each line of letters until an entire line was incorrectly identified. For each letter, one chance was given for correct identification. If the subject was unable to identify at least one letter on the first line, then the testing distance was reduced to 2 m. Similarly, if at 2 m the subject was still unable to identify at least one letter on the first line, the testing distance was reduced to 1 m. The visual acuity was reported in units of log MAR (minimum angle of resolution) using the following equation (Geruschat et al., 1998):

log MAR = C – 0.02*x Eqn. (1)

where x is the number of correct letters identified and C is a constant equal to 1.22, 1.4, and 1.7 for testing distances of 3, 2, and 1 m, respectively.
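A minimal sketch (not the authors' software) of this scoring rule, using only the constants stated above:

```python
# Eqn. (1): log MAR = C - 0.02 * x, with C depending on testing distance.

ETDRS_C = {3: 1.22, 2: 1.40, 1: 1.70}   # testing distance (m) -> constant C

def log_mar(correct_letters: int, distance_m: int) -> float:
    """Return log MAR acuity from the number of correctly identified letters."""
    return ETDRS_C[distance_m] - 0.02 * correct_letters

# Example: 10 correct letters read at 3 m gives log MAR = 1.22 - 0.20 = 1.02.
print(log_mar(10, 3))
```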

Contrast sensitivity was measured using the Pelli-Robson chart at a standard illumination of 85 cd/m^2. This chart consisted of same-sized letters with decreasing contrast. The testing distance was 1 m. The subject's contrast sensitivity was reported in units of log CS (logarithm of peak contrast), which was determined as the faintest line with at least 2 out of the 3 letters correctly identified.

Figure 1. Setup of the visual field assessment dome (top view). Subjects hold their chin on a chinrest while keeping their right eye aligned with the fixed LED. The moving LED moves from 180° to 0° along each of the 12 axes.

Visual fields were measured using a transparent acrylic dome providing a 180 degree field of view. The dome was 16 inches in diameter. The subject rested his or her chin on a chin rest, restricting the right eye to remain flush with the opening of the dome. The subject fixed his or her gaze on the point of fixation at the center of the dome. This point of fixation consisted of a single white LED attached to the rear of the dome (Fig. 1).

A second white LED was used for kinetically assessing the visual field. This LED was manually controlled, sliding from 180 to 0 degrees along 12 radial sectors. The points where the subject spotted the appearances and disappearances of the moving LED were marked on the exterior of the dome. The solid angle of the remaining visual field for each sector was summed and divided by the entire solid angle of a hemisphere. This proportion quantified the subject's monocular visual field as a percentage of a hemispherical visual field (%HVF).
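The %HVF calculation can be sketched as follows. This is our interpretation, assuming each of the 12 sectors contributes a 1/12 wedge of the spherical cap bounded by the eccentricity marked for that sector:

```python
import math

def percent_hvf(sector_eccentricities_deg):
    """Estimate %HVF from the marked eccentricity (degrees) of each sector.

    Assumes each sector's remaining field is a 1/12 wedge of the spherical
    cap of half-angle theta; a full hemisphere subtends 2*pi steradians.
    """
    n = len(sector_eccentricities_deg)
    solid_angle = sum(
        (2.0 * math.pi / n) * (1.0 - math.cos(math.radians(theta)))
        for theta in sector_eccentricities_deg
    )
    return 100.0 * solid_angle / (2.0 * math.pi)

# A full 90-degree field in all 12 sectors gives 100 %HVF.
print(percent_hvf([90] * 12))
```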

Head-Mounted Display

A monocular VRD was configured as an HMD for this study, and a functional diagram is shown in Fig. 2. A monochrome (red) version of the VRD was selected because the full-color version is not portable.

Figure 2. Computer generated graphic of the WVRD.

Originally, the portable VRD was not made to be wearable, since the monochrome VRD is heavy and tethered by electrical cables (see Fig. 3). The entire system shown in Figure 3 weighs about 2.5 pounds and can become uncomfortable after 1-2 hours of being worn. Nonetheless, in this study, the HMD version of the VRD will be referred to as a wearable VRD (WVRD), since it performed exactly like a wearable vision aid when the subject was seated and motion was limited. The remote-head video camera used for the WVRD was a Teli CM3710 monochrome CCD. Two unique characteristics of the camera were a small remote head (~30 mm^3) and a VGA output (640 x 480 pixels at a 60 Hz frame rate). The camera lens was a micro video lens (Edmund Scientific) with 43 degree horizontal and 34 degree vertical fields of view.



Figure 3A. Front view of the WVRD. The cables from the VRD system and video camera are fed under the helmet to an electronics control box (not shown). The horizontal bar in front of the helmet allows the user to fine-adjust the optical alignment of the VRD and translate the VRD for use in either eye.

Figure 3B. Side view of the WVRD. The camera line of sight almost matches that of natural sight, being displaced vertically, just above eyebrow level. Note that the reflective mirror below the camera can be adjusted to allow two augmented display modes: VRD images either exactly superimposed on or vertically displaced from the natural view.

Due to the importance of matching the camera and natural lines of sight (Biocca and Rolland, 1998), the camera and mirror were mounted vertically above the eye with the camera line slightly higher than natural viewing (see Fig. 3B). The WVRD was designed for use in either eye; however, we restricted the WVRD to the right eye. This restriction was employed for future studies, in which the WVRD will be compared to a standard LCD-based HMD (V-Cap 1000 right-eye view, Virtual Vision Inc.).

To provide interchangeable augmented and/or occluded DIMs, the video image was projected by the VRD off a 50/50 beamsplitter before entering the pupil of the eye. In the DIM with no occlusion (fully augmented), the beamsplitter reflected the VRD image to the eye while the natural surroundings were visible either through the beamsplitter for central viewing or around the beamsplitter for peripheral viewing. In the DIM with central and periphery occlusion (total occlusion), only the VRD image was displayed while the natural surroundings were blocked by black felt. The other DIMs were combinations of the two extremes: either the central field (beamsplitter) was occluded or the periphery surrounding the beamsplitter was occluded (Fig. 4). An animation of both the camera imaging and redisplay using the WVRD can be observed at the following web site: www.hitl.washington.edu/projects/wlva/anim.

Figure 4. Examples of viewing the WVRD through each of the display interface modes. The naturally viewed or original image has been blurred to illustrate how the WVRD display modes may be perceived by a low vision individual.

Figure 5. Example of a 32 figurine array that was used in the scanning ability tests. There were 32 unique subscripts consisting of combinations of the numbers 0, 1, 2, 3, 4, or 5.

Testing Scanning Ability

As stated earlier, a person's scanning ability - the speed and accuracy of finding an object in a wide field of figurines - is correlated with their navigational ability. Our scanning ability test was derived from a method used by Chedru et al. (1973); however, rather than 36 figurines, we used a total of 32 different figurines, each containing a numerical subscript (Fig. 5). The figurines were projected onto a screen using two slide projectors, with 16 figurines on each projector. The entire projection was 12 ft wide and 9 ft tall. The subject sat 5 ft from the wall, which allowed the projection to subtend 100 degrees horizontally and 84 degrees vertically. At this distance the figurines and subscript numbers subtended 6 degrees and 3 degrees, respectively.
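The quoted viewing angles follow from simple geometry (a check we added, not part of the original protocol):

```python
import math

# Angular subtense of the 12 ft x 9 ft projection viewed from 5 ft.
def subtense_deg(extent_ft: float, distance_ft: float) -> float:
    return 2.0 * math.degrees(math.atan((extent_ft / 2.0) / distance_ft))

print(subtense_deg(12, 5))   # ~100 degrees horizontally
print(subtense_deg(9, 5))    # ~84 degrees vertically
```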

In our testing protocol, the subject was instructed to fix his or her gaze on the center of the screen. A target figurine without a subscript number was then projected onto the center of the blank screen. Once the subject recognized the target, the target would disappear and the subject would be asked to close his or her eyes, keeping the gaze in the same location as the target. The subject's eyes remained closed until instructed to start, and the entire field of distractors was projected onto the wall. Once instructed to start, the subject scanned the projection of distractors and called out the subscript number once the single target was found. The subject's scanning ability was quantified both by the search time and the accuracy of figurine recognition. The search time for an incorrectly identified target was multiplied by an arbitrary factor of 1.5. If this penalized time was above 60 seconds, or the subject was unable to identify the target after 60 seconds, the time was recorded as 60 seconds. In the event that the exit pupil of the VRD became misaligned with the eye, possibly due to erratic head movements, that test was disregarded and another test was used.
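A minimal sketch of the scoring rule described above (our code, not the authors'): incorrect identifications are penalized by the factor of 1.5, and all scores are capped at 60 s:

```python
MAX_TIME_S = 60.0   # cap for unfound or heavily penalized targets
PENALTY = 1.5       # multiplier applied to incorrect identifications

def scored_scan_time(search_time_s: float, correct: bool) -> float:
    if search_time_s >= MAX_TIME_S:          # target never found in time
        return MAX_TIME_S
    time_s = search_time_s if correct else PENALTY * search_time_s
    return min(time_s, MAX_TIME_S)

print(scored_scan_time(12.4, True))    # 12.4
print(scored_scan_time(45.0, False))   # 67.5 penalized, recorded as 60.0
```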

Procedure

Testing took place in a naturally lighted lecture hall at the University of Washington. Before each subject was tested, the room illuminance levels and the contrasts of the figurine projections were measured. All luminance and illuminance values were measured using a Spectrascan PR-650 spectrophotometer (Photo Research, Inc.). For comparison with future studies of conventional HMDs, the power level of the WVRD was matched to the maximum luminance level of a commercial LCD-based HMD (V-CAP 1000, Virtual Vision Inc.). The measured luminance was converted to radiometric units (WattVRD) using the luminous efficiency at the VRD wavelength of 636 nm (Roberts, 1994):

WattVRD = (LMEAS*AVCAP*r)/s Eqn. (2)

In this equation, LMEAS (cd/m^2) is the measured luminance of the V-CAP 1000 at its maximum brightness (22.71 cd/m^2), AVCAP (m^2) is the effective area of the source LCD screen (4.456 x 10^-4 m^2), r is the steradian measure dependent on the pupil size at a viewing distance of 52.70 mm, and s is a photopic radiometric conversion factor (144.113 lm/W at 636 nm). The variable r is a function of the pupil size since the area of the collimated LCD light (1540 mm^2) was significantly larger than the area from the average pupil size (8.6 mm^2). The derivation of this equation assumes that the pupil has the same solid angle for each pixel point source from the LCD screen. The pupil size was assigned a value of 5.8 mm, which was taken from a subject in an earlier iteration of this study. Therefore, the value of r was 9.513 x 10^-3 steradians. To confirm this assumed pupil size, each subject's pupil size was measured using a full-circle pupil diameter gauge, giving an average pupil size of 5.96 mm. This difference of 0.16 mm did not make a significant difference (+0.03 microwatts) in the calibrated power level.
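The quoted value of r is consistent with treating it as the pupil area divided by the square of the viewing distance, which lets Eqn. (2) be checked numerically (our sketch, not the authors' code):

```python
import math

# Worked check of Eqn. (2) using the values quoted above.
L_MEAS = 22.71          # cd/m^2, maximum luminance of the V-CAP 1000
A_VCAP = 4.456e-4       # m^2, effective area of the source LCD screen
S_636  = 144.113        # lm/W, photopic conversion factor at 636 nm

pupil_d_mm, view_mm = 5.8, 52.70
r_sr = (math.pi * (pupil_d_mm / 2.0) ** 2) / view_mm ** 2   # ~9.513e-3 sr

watt_vrd = (L_MEAS * A_VCAP * r_sr) / S_636
print(f"r = {r_sr:.3e} sr, power = {watt_vrd * 1e6:.2f} microwatts")
# Prints roughly r = 9.513e-03 sr and 0.67 microwatts, in line with the
# ~0.7 microwatt VRD setting reported in the Results.
```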

The subject's visual functions were then assessed. The subject was required to remove any optical corrections during both the visual assessment and the scanning ability tests. Prior to the scanning performance evaluation, the subjects were given three practice trials to familiarize themselves with the testing procedure.

Each subject was tested at three contrast levels for each of the following DIMs:

• A-C (Augmented Control) - VRD is turned off (control), no occlusion

• PO-C (Periphery Occluded Control) - VRD is off (control), periphery occluded

• A (Augmented) - See-through with superimposed images, no occlusion

• CO (Center Occluded) - Periphery unobstructed

• CPO (Center and Periphery Occluded) - Camera view only

• PO (Periphery Occluded) - Augmented central view

An image as seen from the perspective of a low vision individual through each of the DIMs is shown in Figure 4. The tests for each of the display interface modes were conducted using slides of three different contrasts: 81.4%, 61.8%, and 38.0%. For each DIM, there were a total of 18 tests, in which 3 sets of 6 corresponded to the 3 different contrasts. Thus, for each DIM, the scanning ability tests provided an equal number of data points for the three contrasts. For each subject, the orders of the slides and DIMs were random. At the conclusion of the entire experiment, each subject was asked survey questions, such as to rate which DIM was most helpful and least helpful in performing the task.
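The resulting design, 6 display conditions (4 DIMs plus the 2 controls) x 3 contrasts x 6 repetitions, implies 108 scanning trials per subject. A sketch of such a schedule follows; the fully flat shuffle is our simplification of the randomization described above:

```python
import random

CONDITIONS = ["A-C", "PO-C", "A", "CO", "CPO", "PO"]
CONTRASTS = [81.4, 61.8, 38.0]      # percent contrast of the slides
REPS_PER_CELL = 6                   # 3 contrasts x 6 = 18 tests per condition

trials = [(dim, contrast, rep)
          for dim in CONDITIONS
          for contrast in CONTRASTS
          for rep in range(REPS_PER_CELL)]
random.shuffle(trials)              # randomized presentation order
print(len(trials))                  # 108 scanning trials per subject
print(trials[:3])
```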

RESULTS

All five subjects had vision that was correctable to normal; nonetheless, these subjects accurately represented low vision subjects when not using any optical correction (i.e., visual acuity was equivalent to 20/200 or worse). Visual acuity, contrast sensitivity, and visual field measurements were made without optical correction and are listed in Table 1. To match the maximum brightness of commercial HMDs, the VRD power level was set to 0.7 microwatts at 636 nm.

Table 1. Clinical assessments of the subjects' vision.

Subject   Log MAR   Acuity Equivalent   Log CS   %HVF    Pupil Diameter (mm)
1         1.62      20/834              0.00     60.06   6.2
2         1.04      20/220              0.90     85.74   6.5
3         1.34      20/438              1.20     78.38   6.0
4         1.14      20/276              1.05     78.10   5.9
5         1.42      20/526              0.45     75.60   5.2

Figure 6 shows a histogram of the scan times for each DIM at the three different contrast levels. For a given DIM and contrast level, the scan time was calculated as the average over all subjects. The subjects correctly identified all targets during the experiment. A few cases of VRD misalignment occurred; however, these only occurred during the time of target display, which did not affect the timing of target identification. No data was lost due to fatigue. In general, all of the DIMs (including both controls) exhibited an increase in scan time as the contrast decreased. In all cases except the PO DIM, the longest scan time was measured at the lowest contrast level.

Figure 7 shows the relative ratios of the scan times of a specific contrast and DIM combination to its corresponding control. The A and CO DIMs were compared to A-C, and the CPO and PO DIMs were compared to PO-C. A ratio > 1.0 means longer times were needed to locate the target. At the highest contrast level, the A and CO DIMs showed a slightly higher performance when compared to the A-C. At the medium contrast level, all DIMs provided lower scanning performance than their corresponding controls (ratios > 1.0). Also at this contrast level, the PO DIM provided the worst performance. However, at the lowest contrast level, the PO DIM showed a significant increase in scanning performance when compared to the PO-C. The remaining three DIMs at this low contrast level showed a lower scanning performance, with the CO DIM being the worst.

Figure 6. Scan times (s) for the three contrasts, grouped in separate DIMs. The DIMs are defined as: A-C = Augmented Control; PO-C = Periphery Control; A = Augmented; CO = Center Occluded; CPO = Center and Periphery Occluded; PO = Periphery Occluded.

Figure 7. Scan times of each contrast level and DIM combination expressed as a ratio to their respective controls. The A and CO DIMs were compared to the A-C, and the CPO and PO DIMs were compared to the PO-C. Ratios less than 1.0 show a higher performance than the control. For each DIM and contrast level combination, N = 30. The DIMs are defined as: A = Augmented; CO = Center Occluded; CPO = Center and Periphery Occluded; PO = Periphery Occluded.
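The relative ratios plotted in Figure 7 can be sketched as follows (assumed analysis code, not the authors'; the example scan times are hypothetical):

```python
from statistics import mean

# Each DIM is divided by the mean scan time of its corresponding control
# at the same contrast level.
CONTROL_FOR = {"A": "A-C", "CO": "A-C", "CPO": "PO-C", "PO": "PO-C"}

def relative_ratio(scan_times, dim, contrast):
    """scan_times maps (dim, contrast) -> list of scored scan times (s)."""
    control = CONTROL_FOR[dim]
    return mean(scan_times[(dim, contrast)]) / mean(scan_times[(control, contrast)])

times = {("PO", 38.0): [3.9, 4.2, 4.0], ("PO-C", 38.0): [4.6, 4.8, 4.5]}
print(relative_ratio(times, "PO", 38.0))   # < 1.0, i.e. faster than its control
```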

Figure 8 shows the correlation between scan time (s) and the subjects' natural acuity (log MAR) for both controls (A-C and PO-C). Each data point was plotted as the average over all of the scan times under the given control for one subject. These two variables lacked statistical significance (A-C: R^2 = 0.1284, PO-C: R^2 = 0.03), and therefore no relationship was demonstrated between scan time and visual acuity.





Figure 8. Averaged scan times (s) over the three contrasts for a given control, plotted as a function of visual acuity (log MAR) for the corresponding controls. The linear relationships are described by the following: A-C = Control: R^2 = 0.1284; PO-C = Periphery Control: R^2 = 0.03.
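The R^2 values quoted here and for Figure 10 correspond to the squared Pearson correlation of a simple linear fit; a small sketch (the scan times below are made up for illustration):

```python
from statistics import correlation   # Python 3.10+

def r_squared(x, y):
    """Squared Pearson correlation; equals R^2 of a simple linear fit."""
    return correlation(x, y) ** 2

acuity = [1.62, 1.04, 1.34, 1.14, 1.42]   # log MAR values from Table 1
scan_time = [4.1, 3.6, 4.4, 3.9, 4.0]     # hypothetical averaged scan times (s)
print(round(r_squared(acuity, scan_time), 3))
```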

In response to the survey questions, 100% of the subjects found that the VRD image appeared brighter, sharper, and clearer in all of the DIMs compared to both of the controls. Additionally, the images displayed in the DIMs with an occluded central view appeared higher in contrast and brighter to the subjects. When asked, all subjects stated a preference for a greater field of view of the VRD image. The DIMs which the subjects found most and least helpful in completing their task are shown in Table 2. Interestingly, four out of five of the most helpful DIMs were also listed as the least helpful, illustrating a very strong and diverse range of subject preference.

DISCUSSION

The results of the five subjects showed that the scan time generally increased as the contrast level decreased for all DIMs. This result was expected because visual acuity is lower for lower contrast images, and lower contrast also makes object recognition more difficult. The three levels of contrast represented the varied contrast of viewing real-world scenery, necessary for navigating with a vision aid.

In the survey results, the subjects perceived the VRD image as clearer for all DIMs when compared to the controls, and as higher in contrast and brightness for the two DIMs with occluded centers. This result may be expected when reviewing Figure 4. Since the material used for occluding the natural scene is black, the central VRD image would be overlaid on a uniform black background instead of superimposed on a complex image. An alternative DIM that was not tested, but shows promise for augmented HMDs, is displacing the augmented view vertically from the identical view as seen through the beamsplitter (Holzel, 1999). This displaced as well as augmented DIM can be tested easily with the adjustable mirror that vertically adjusts the WVRD camera view (refer to Fig. 3B).

In scanning ability comparisons among the DIMs (Fig. 7), we found that at the lowest contrast level, the PO DIM provided the highest scanning performance. This result was surprising because an unobstructed periphery is expected to provide greater situational awareness to an HMD user with normal vision (Rolland et al., 1994). Possibly, the highly blurred and distorted peripheral vision of a low vision user is less important for navigational activities, such as searching, than it is for normally sighted users of HMDs. To explain this result for the PO DIM, the lower contrast object field may have been invisible in the viewer's residual periphery, further eliminating any distractions during the search and resulting in a faster scan time. This hypothetical role of peripheral distractors is supported by the data presented in Figure 6. At the medium and highest contrast levels, the PO-C showed higher scanning performance than the A-C (-8.6% and -3.6%, respectively), which could also be due to occlusion of the high contrast peripheral distractions. At the lowest contrast level, the scan times of both controls were comparable to within +5.6%, as expected, since the low contrast peripheral distractions were effectively eliminated in the A-C. Thus, the addition of peripheral information for low vision subjects by using augmented DIMs may not decrease search times, and eliminating this information shows improved performance.

As illustrated in Figure 8, scanning performance with the WVRD turned off (controls) did not show a strong statistical relationship with subject visual acuity. As reviewed in the research literature (Kuyk et al., 1998 vol. 3), scanning ability and visual acuity are weakly related even though both variables are strong predictors of mobility performance. Therefore, we cannot use this correlation to infer any predictions of visual acuity on mobility performance. Further navigational course studies would need to be employed in the future to directly relate the subject's improved acuity due to the WVRD to mobility performance.

On average, all of the DIMs displayed a lower performance than their corresponding controls (Fig. 9). Among these lower performance DIMs, the CPO DIM showed the highest scanning performance. This result may be attributed to the higher contrast of the occluded central view and the elimination of any peripheral distractors, as described above.




Figure 9. Average scan times (s) for each corresponding DIM. All scan times for their corresponding DIM were averaged (including the controls) and expressed as the ratio of the average DIM scan time to its corresponding average control scan time. Ratios less than 1.0 show a higher performance than the control. For each DIM, N = 90. The DIMs are defined as: A-C = Augmented Control; PO-C = Periphery Control; A = Augmented; CO = Center Occluded; CPO = Center and Periphery Occluded; PO = Periphery Occluded.

Figure 9 also shows that, on average, the CO DIM provided the lowest scanning performance. The occluded center increases the contrast of the VRD image and thus allows the user to focus much more easily on the virtual image. However, the periphery is unobstructed, and the out-of-focus surrounding field may cause a focal depth rivalry similar to the concept of binocular rivalry. This idea of focal depth rivalry is even more pronounced in the A DIM due to the unobstructed center. Thus, this hypothetical rivalry of focal depths may significantly decrease scanning performance.

Since the DIMs and contrasts were randomized for each test, the above results were not influenced by the subject's increased familiarity with using the WVRD during the testing procedure. In addition, these results were not influenced by the variability of the target position in the field of distractors. For example, if the target was placed near the center of the screen, the subject could possibly identify the target in a shorter amount of time than if the target was placed further away. For all contrasts and DIMs, the scan time was compared to the distance of the target from the center of the screen (Fig. 10). The correlation between scan time and target distance is insignificant (R^2 = 0.03), suggesting that target position did not have a consequential effect on scan time.

Figure 10. Scan time as a function of the target distance (inches) from the center of the screen. The correlation coefficient (R^2 = 0.031) shows no relationship between the two variables.

Based on the qualitative questionnaire, the subjects preferred a larger field of view of the projected VRD image. This is understandable, since a larger field of view in the DIMs would provide a larger sense of presence to the user (Hatada et al., 1980). An increase in field of view would also suggest a decrease in scan time (Wells and Venturino, 1990).

Although field of view is certainly one factor influencing the sense of presence, the overall issue of presence is not a major factor in the visual search tasks performed in this study. Since there are no clear depth cues involved in the search tasks, stereopsis would have little influence on search time (Reinhart et al., 1990). Similarly, head tracking would not significantly improve scanning performance (Ehrlich and Singer, 1994; Chung, 1992). However, for a more complex task such as physical navigation, these factors of presence may significantly affect the mobility performance of the subject (Cha et al., 1992; Zenyuh et al., 1988), and should be considered for future navigational studies.

The qualitative questionnaire also showed that there is a wide range of DIM preferences among individuals. The extreme range of subject preference for a specific DIM (Table 2) indicates that two distinct search strategies are employed, reflecting either serial or parallel processing (Nothdurft, 1999). Based on these results, commercialization of a wearable low vision aid would most benefit the low vision user through customization of the system or the ability to easily switch the DIMs of the system.

Another concern for the future development of the VRD as a wearable low vision aid is the stereopsis factor. In this study, the unused eye was patched and therefore binocular rivalry was eliminated. For use in the general marketplace, however, monocular viewing would not suffice due to the lack of depth cues and possible binocular rivalry. In addition to the displaced view suggested earlier, additional DIMs need to be examined that allow biocular and stereoscopic viewing. Both biocular and stereoscopic modes have been shown to be equal in accuracy in the localization of nearby visual objects (Ellis and Menges, 1998). In terms of cost and performance, the biocular mode would currently be the most feasible since one VRD device is capable of producing two exit pupils. However, the stereopsis factor may not become an issue for low vision individuals, and therefore further studies should be performed to determine the need for this factor.




Table 2. Qualitative results from the questionnaire. The DIMs are defined as: A = Augmented; CO = Center Occluded; CPO = Center and Periphery Occluded; PO = Periphery Occluded.

Subject   Most Helpful DIM   Least Helpful DIM
1         CO                 A
2         CO                 CPO
3         CO                 PO
4         A                  CO
5         CPO                CO

Currently, retinal light scanning technology has been licensed to a local startup company (Microvision, Inc.), which has developed tetherless, wearable, and augmented HMD prototypes. These prototypes show promise for retinal light scanning technology to be used as a wearable low vision aid (WLVA). In addition, the augmented feature of the system makes it unique in that there have been no existing commercially developed WLVAs with an augmented mode (Massoff, 1998; Peli, 1994). However, based on the results of our subjects, who attain higher acuity with the WVRD, the augmented mode may not prove to be advantageous, due to the rivalry of different focal depths for the clear VRD image and the blurred background. Further studies using low vision subjects would be worthwhile to determine any relevance of focal depth rivalry in the augmented mode.

CONCLUSIONS

A head-mounted version of the VRD was fabricated to ensure that the augmented camera view and the see-through natural view could be superimposed. The camera placement allowed an almost natural line of sight in a monochrome, monocular device. The WVRD design facilitated rapid switching between four different DIMs for comparisons of scanning ability. Furthermore, the adjustable mirror on the camera can provide an additional augmented view, with the VRD image displaced from the central field of view. Thus, this prototype was effective in representing a future wearable vision aid based upon retinal light scanning technology.

For each DIM tested, scanning performance decreased as the level of contrast in the surroundings decreased, as expected (see Goal 1). At the lowest contrast level, the PO DIM provided the subject with the greatest scanning performance, and the CO DIM provided the worst scanning performance when compared to their controls. At the medium contrast level, all DIMs were at a disadvantage relative to their corresponding controls, with the PO DIM being the least effective. At the highest contrast level, the PO DIM was again the least effective compared to its control, while the remaining three DIMs provided equal performance relative to their controls. On average, the CO DIM provided the worst performance and the CPO DIM provided the best performance (see Goal 2). These results may be unique to low vision users of an augmented HMD. Possibly, objects in the periphery cause distraction in the search, and the greater visual acuity of the retinal scanned inset could cause rivalry due to retinal images at different focal depths.

Scanning ability with respect to visual acuity was evaluated and showed no statistical relationship (see Goal 3). No inferences could be made about visual acuity and mobility performance, and therefore future navigational studies would provide further insight on this issue.

The results from the qualitative responses suggest that a larger field of view was preferable and that the range of DIM preference across subjects was very diverse. Based on these qualitative and quantitative results, we have realized that the design of retinal light scanning as a WLVA is a very complex issue, and that customization or rapid DIM-switching features may be an optimal design strategy. In addition, the augmented mode may not prove to be as advantageous as expected based on the results of our subjects, perhaps due to focal depth rivalry. However, this rivalry may be irrelevant in actual low vision individuals who are not accustomed to being able to focus blurred images to high acuity.

ACKNOWLEDGEMENTS

We would like to thank Nick Kipping (student in Industrial Design) and Robert Burstein (Research Engineer) for their technical assistance, Duff Hendrickson (Experience Designer) for the computer animations, The National Science Foundation for funding this research (grant #'s 9801294 and 9978888), the R.E.U. program, and The Mary Gates Foundation for SL's financial assistance.




REFERENCES

1. Biocca FA, Rolland JP. (1998) Virtual eyes can rearrange your body: adaptation to visual displacement in see-through, head-mounted displays. Presence 7(3): 262-277.
2. Cha K, Horch KW, Normann RA. (1992) Mobility performance with a pixelized vision system. Vision Research 32: 1367-1372.
3. Chedru R, Leblanc M, Lhermitte F. (1973) Visual searching in normal and brain-damaged subjects (contribution to the study of unilateral inattention). Cortex 9(1): 94-111.
4. Chung JC. (1992) A comparison of head-tracked and non-head-tracked steering modes on the targeting of radiotherapy treatment beams. In Proceedings of the 1992 ACM Symposium on Interactive 3D Graphics: 193-196.
5. Dodds AG, Davis DP. (1987) Low vision: assessment and training for mobility. Int J Rehabil Res 10: 327-30.
6. Ehrlich JA, Singer MJ. (1994) Are stereoscopic displays necessary for virtual environments? Proceedings of the Human Factors and Ergonomics Society 38th Annual Meeting: 952.
7. Ellis SR, Menges BM. (1998) Localization of virtual objects in the near visual field. Human Factors 40(3): 415-431.
8. Everingham MR, Thomas BT, Troscianko T. (1998) Head-mounted mobility aid for low vision using scene classification techniques. International Journal of Virtual Reality 3(4): 3-12.
9. Furness TA, Kollin JS. (1995) Virtual Retinal Display. US Patent 5,467,104.
10. Geruschat DR, Deremeik JT, Whited SS. (1999) Head mounted displays: are they practical for school-aged children? J Vis Impairment Blind 93(8): 485-497.
11. Geruschat D, Smith A. (1997) Low vision and mobility. In Blasch BB, Wiener WR, et al. (Eds.), Foundations of Orientation and Mobility (2nd ed.), pp. 83-5. New York, NY, USA: American Foundation for the Blind.
12. Geruschat DR, Turano KA, Stahl JW. (1998) Traditional measures of mobility performance and retinitis pigmentosa. Optom Vis Sci 75(7): 525-37.
13. Greene HA, Beadles R, Pekar J. (1992) Challenges in applying autofocus technology to low vision telescopes. Optom Vis Sci 69(1): 25-31.
14. Hatada T, Sakata H, Kusaka H. (1980) Psychophysical analysis of the “sensation of reality” induced by a visual wide-field display. SMPTE Journal 89: 560-569.
15. Haymes S, Guest D, Heyes A, Johnston A. (1996) Mobility of people with retinitis pigmentosa as a function of vision and psychological variables. Optom Vis Sci 73(10): 621-37.
16. Holzel T. (1999) Are head-mounted displays going anywhere? Information Display, 10/99: 16-18.
17. Johnston RS, Willey SR. (1995) Development of a commercial retinal scanning display. Proceedings of the SPIE, Helmet- and Head-Mounted Displays and Symbology Design Requirements II 2465: 2-13.
18. Kleweno C, Seibel EJ, Viirre E, Furness TA. (1999) Evaluation of a scanned laser display as an alternative low vision computer interface. Technical Digest of Vision Science and its Applications, Technical Meeting, Optical Society of America, Feb. 19-22, Santa Fe, New Mexico: 148-151.
19. Kuyk T, Elliott JL, Fuhr PS. (1998) Visual correlates of mobility in real world settings in older adults with low vision. Optom Vis Sci 75(7): 538-47.
20. Kuyk T, Elliott JL, Fuhr PS. (1998) Visual correlates of obstacle avoidance in adults with low vision. Optom Vis Sci 75(3): 174-182.
21. Mainster MA, Timberlake GT, Webb RH, Hughes GW. (1982) Scanning laser ophthalmoscopy. Ophthalmology 89: 852-7.
22. Marron JA, Bailey IL. (1982) Visual factors and orientation-mobility performance. Am J Optom Physiol Opt 59(5): 413-26.
23. Massoff RW. (1998) Electro-optical head-mounted low vision enhancement. Practical Optometry 9(6): 214-20.
24. Massoff RW, Rickman DL. (1992) Obstacles encountered in the development of the low vision enhancement system. Optom Vis Sci 69(1): 32-41.
25. National Advisory Council. (1998) Vision Research: A National Plan, 1999-2003. U.S. Dept. of Health and Human Services (NIH Publication No. 98-4120).


26. Nothdurft HC. (1999) Focal attention on visual search. Vision Research 39: 2305-2310.
27. Peli E. (1992) Limitations of image enhancement for the visually impaired. Optom Vis Sci 69(1): 15-24.
28. Peli E. (1994) Head-mounted display as a low vision aid. Proceedings of the Second Annual International Conference on Virtual Reality and Persons with Disabilities, Northridge, CA, California State Univ.: 115-22.
29. Oritz A, Jobling JT, Chung STL, Legge GE. (1998) Reading with a head-mounted video magnifier. Invest Ophthalmol Vis Sci 39(suppl): S176.
30. Reinhart WF, Beaton RJ, Snyder HL. (1990) Comparison of depth cues for relative depth judgements. Proceedings of the SPIE vol. 1256: 12-21.
31. Roberts DA. (1994) A guide to speaking the language of radiometry and photometry. The Photonics Design and Applications Handbook (Book 3), pp. H70-73. Pittsfield, MA: Laurin Publishing Co., Inc.
32. Rolland JP, Hollaway RL, Fuchs H. (1994) A comparison of optical and video see-through head-mounted displays. Proceedings of the SPIE vol. 2351: 293-307.
33. Viirre E, Johnston R, Pryor H, Nagata S, Furness TA. (1990) Laser safety analysis of a retinal scanning display system. J Laser Applications 9: 253-260.
34. Viirre E, Pryor H, Nagata S, Furness TA. (1998) The Virtual Retinal Display: a new technology for virtual reality and augmented vision in medicine. In Proceedings of Medicine Meets Virtual Reality 6, 50: 252-257.
35. Webb RH, Hughes GW, Pomerantzeff O. (1980) Flying spot TV ophthalmoscope. Applied Optics 19(17): 2991-97.
36. Webb RH, Hughes GW. (1981) Scanning laser ophthalmoscope. IEEE Trans. on Biomedical Engineering 28: 488-492.
37. Wells MJ, Venturino M. (1990) Performance and head movements using a helmet-mounted display with different sized fields of view. Optical Engineering 29: 870-877.
38. Zenyuh J, Reising JM, Walchli S, Biers D. (1988) A comparison of a stereoscopic 3-D display versus a 2-D display using an advanced air-to-air format. Proceedings of the Human Factors Society 32nd Annual Meeting: 53-57.