
Signal Processing 86 (2006) 3674–3695

Sensory substitution using tactile pin arrays: Human factors, technology and applications

Steven A. Wall*, Stephen Brewster

Glasgow Interactive Systems Group, Department of Computing Science, University of Glasgow, 17 Lilybank Gardens, Glasgow G12 8QQ, UK

*Corresponding author. Tel.: +44 141 330 8430. E-mail addresses: [email protected] (S.A. Wall), [email protected] (S. Brewster).

Received 1 July 2005; received in revised form 5 December 2005; accepted 1 February 2006

Available online 8 June 2006

Abstract

Tactile arrays use a matrix of individually controllable elements to present spatial and temporal patterns of cutaneous information. Early devices of this type were in the field of sensory substitution, to replace vision or hearing for users with a sensory impairment. Many advances have been made due to the appropriation of tactile displays for telerobotics and virtual reality, to represent physical contact with a remote or simulated environment. However, many of these have been limited to engineering prototypes. The recent commercial availability of affordable, portable tactile pin arrays has provided renewed impetus to apply the technology to sensory substitution applications. Lack of access to digitally stored data can prove a significant barrier to blind people seeking careers in numerate disciplines. Tactile displays could potentially provide a discreet and portable means of accessing graphical information in an intuitive non-visual manner. Results are presented from experiments on tactual perception related to understanding graphs and simple visualisations with a commercially available tactile array device. It was found that subjects could discriminate positive or negative line gradients to within ±4.7° of the horizontal, compared to ±3.25° for results with a force feedback mouse and ±2.42° with a raised paper representation.

© 2006 Elsevier B.V. All rights reserved.

doi:10.1016/j.sigpro.2006.02.048

Keywords: Tactile; Blind; Haptic; Multimodal

1. Introduction

The area of haptic (touch-based) human–computer interaction has grown rapidly over the last few years. A range of new applications has become possible now that touch can be used as an interaction technique. Most research in this area has been concentrated on the use of force feedback devices for the simulation of contact forces when interacting with simulated objects. Devices such as the Phantom haptic interface (see Fig. 1) from SensAble Technologies [1] present the illusion of contact with rigid virtual objects using programmable constraint forces supplied to an end effector, such as a thimble, handle or stylus, that the user interacts with. A typical device takes the form of a mechanical framework capable of movement along one or more axes (typically two or three for most applications). Movement of the end effector within this space is tracked by the device; this position is then compared to the position of virtual objects within the space. Collisions with these objects are rendered by using actuators (which can be pneumatic, electromechanical, or based on a braking system) to constrain the motion of the user to the surfaces of simulated objects.
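The track-compare-constrain loop just described can be summarised in a few lines. The following is a minimal sketch of penalty-based rendering for a point interaction with a virtual plane; it is illustrative only, and the device functions read_position() and apply_force(), the stiffness value and the plane geometry are all assumptions rather than details from the paper.

```python
import numpy as np

# Illustrative sketch of a penalty-based point-interaction haptic loop.
# read_position() / apply_force() stand in for a real device API (assumed).

STIFFNESS = 800.0                          # N/m, spring constant of the virtual surface
PLANE_NORMAL = np.array([0.0, 1.0, 0.0])   # virtual floor facing up
PLANE_OFFSET = 0.0                         # plane passes through the origin

def surface_force(position):
    """Restoring force for a point penetrating a rigid virtual plane."""
    # Signed penetration depth of the haptic interaction point.
    depth = PLANE_OFFSET - np.dot(position, PLANE_NORMAL)
    if depth <= 0.0:
        return np.zeros(3)                 # no contact: free-space motion
    # Penalty force: proportional to penetration, along the surface normal.
    return STIFFNESS * depth * PLANE_NORMAL

# Servo loop, typically run at around 1 kHz for stable rigid contact:
# while device_running():
#     p = read_position()                  # track the end effector
#     apply_force(surface_force(p))
```

In practice such loops must run at a high rate (on the order of 1 kHz) so that the spring-like penalty force feels like a rigid surface rather than a soft one.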

Broadly speaking, the human haptic sense can be divided into two distinct channels of sensory experience. Kinaesthetic information refers to the sensation of positions, velocities, forces and constraints that arises from the muscle spindles and tendons. Force feedback haptic interfaces appeal to the kinaesthetic senses by presenting computer-controlled resistive forces to create the illusion of contact with a rigid surface. The cutaneous class of sensations refers to those which arise through direct contact with the skin surface. Cutaneous stimulation can be further subdivided into the sensations of pressure, stretch, vibration, and heat. In some instances pain is also referred to as a separate sensation, though excessive stimulation of any of the other detectable parameters will lead to a feeling of pain.

Exploring a virtual world through a pen-like stylus or a thimble currently gives little provision for cutaneous sensations in force feedback devices, such as the complex distribution of forces on the skin that is perceived when placing a fingertip on a textured surface. Tactile display research seeks to replicate these sensations by using an array of individually controllable mechanical elements to perturb the skin at the user's fingertip (see Fig. 2). Providing distributed cues for surface qualities greatly expands the scope for potential applications of haptic displays [2]. For example, representing the very fine and subtly varying cues for tactile assessment of fabric is problematic by force feedback alone. Tactile displays could allow shoppers and fashion designers to experience clothing materials via the Internet to aid with online purchasing decisions [3,4]. Tactile displays could also provide increased realism in medical training simulations, allowing the user to differentiate between different types of tissue more easily. This is particularly salient for applications that require direct palpation with the fingertips, rather than being mediated via surgical equipment, such as the diagnosis of bovine pregnancy [5].

As early as the 1920s, researchers were interested in using vibration of the skin as a means of information transfer (for example, Gault in 1926, cited in [7]). Tactile-vision substitution systems (TVSS) were the earliest to be developed, in order to present visual information to blind people. In a typical system, a camera receives visual information, which is converted to a tactile representation on a two-dimensional pin array. Some of the earliest work in this area was the development of the Optacon (see Fig. 3), which converted printed letters to a spatially distributed vibrotactile representation on the fingertip, using a miniature handheld camera (summarised in [8]). Although reading speeds were significantly slower than Braille, the Optacon allowed blind people to access any text or graphics without having to wait for it to be converted into Braille. Early pioneering work in TVSS was also performed by Paul Bach-y-Rita and colleagues in the late 1960s. Early systems displayed visual information captured by a tripod-mounted TV camera on a vibrotactile display on the user's back. Due to limited spatial resolution, tactile masking effects and a low dynamic range, the system was not suitable as a day-to-day navigation aid. However, subjects could easily recognise simple shapes and discriminate the orientation of lines. It was also reported that experienced users could perform more complex tasks, such as recognition of faces or electronic assembly, using the system (reported in [9]).

Fig. 1. The Phantom haptic interface. The user grips the stylus to interact with the device. Motion is selectively constrained to convey the illusion of contact with simulated objects.

Fig. 2. A tactile pin array for fingertip display, consisting of three 4 × 8 arrays of pins.

Increases in computing power, the notion of "virtual reality", and the commercial availability of haptic force feedback devices as research tools have effectively shifted the main emphasis of tactile display research from representing remotely sensed real-world information to the challenges inherent in interacting with an environment consisting of simulated physical models on a computer. The concept of connecting haptic feedback to a virtual world was first voiced in 1965 by Ivan Sutherland, who put forward a vision for an "ultimate display". Inspired by Sutherland's vision, Frederick Brooks Jr. and his colleagues became the first team to realise force feedback with a graphical environment in the GROPE project (for an excellent treatise on the development of haptic feedback displays the authors recommend [10]). The scope of force feedback-enabled applications expanded considerably with the development of the Phantom, the first commercially available desktop force feedback device [1]. The drive to create realistic cutaneous stimulation for virtual environments could now be said to be the motivation behind most tactile display research. In particular, the need to combine tactile displays with force feedback displays has led to increased efforts to miniaturise the technology.

Researchers have sought to appropriate haptic feedback devices for presenting information to visually impaired computer users. For example, Sjöström outlined a number of guidelines for using haptic technology to provide novel computer interaction techniques for visually impaired people [11]. These include "rules of thumb" for navigation, gaining an overview, understanding objects, haptic widgets and physical interaction. Several projects have focussed on presenting simple visualisations such as graphs, charts and tables using haptic feedback technology. Graphs provide a means by which sighted people can access numerical information in a manner which affords quick, visual identification of trends, maxima, minima, intersection points, and other features that would be laborious and time-consuming to find in a table of numerical data. Fritz and Barner reported the earliest work that used a Phantom to present graphs to visually impaired users [12]. Wies et al. reported work using the Logitech Wingman force feedback mouse to present graphical information [13]. The most extensive body of work in this area was performed by Yu, Brewster and colleagues on the Multivis project (www.multivis.org). This project adopted a multimodal approach to presenting visualisations using force feedback and stereo sound. In a series of experiments, multimodal feedback was found to offer significant advantages over a single modality alone [14–16].

Due to their more compact size and power requirements, tactile displays offer a potentially much more discreet, affordable means of providing access to data visualisations via the sense of touch. Displays are often small enough to allow mounting on another, standard human–computer interaction device such as a mouse, keyboard or games controller, or on portable devices such as mobile phones and personal digital assistants (PDAs). This paper reviews developments in tactile displays for sensory substitution, along with relevant literature on perceptual and psychophysical issues related to the sense of touch. An experiment is described investigating perceptual issues related to graph exploration with a commercially available tactile display device.

Fig. 3. A visually impaired person using the Optacon device to read text. The handheld camera detects light and dark areas of the page and converts them to a vibrotactile representation, presented to the non-dominant hand on the pin array (image used with permission from www.carroll.org).

2. Methods of tactile presentation for sensory substitution

The earliest structured work in sensory substitution dates back to the 19th century, with the development of embossed raised characters for visually impaired people. Many "low-tech" solutions are still employed for presenting visualisations of numerical data, although they are gradually being augmented by microprocessor-based technology. In recent years several technologies have emerged which can be used to present information to the sense of touch, including vibration (vibrotactile), force feedback and tactile displays.

2.1. Raised static material display

Although several competing alphabets of embossed characters were developed around 1840, Louis Braille was the first to demonstrate the efficiency of raised dot patterns over simplifications of print letters. Each Braille character consists of a 3 row × 2 column "cell", within which it is possible to form 64 individual patterns. Meanings assigned to these patterns include the letters of the alphabet, punctuation marks, and various contractions that stand for frequently recurring letter groups. Despite opposition from educators on the grounds that sighted teachers and visually impaired students would require different representations, and the supposedly arbitrary nature of the encoding in dot patterns, practical considerations led to its widespread acceptance within the blind and visually impaired communities. Inspired by the seminal work of William Wait in the late 19th century (cited in [17]), experimental evidence gradually accrued demonstrating the greater efficacy of the Braille code over embossed letters in terms of reading speed and comprehension of text. The main drawback of Braille is that reading speeds are considerably slower than for visually presented text, at around 104 words per minute (wpm) for experienced adult users [17] (in comparison, the average reading speed for sighted high school students is 250–300 wpm, with some adults reaching two or three times that speed [18]).
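The 64 patterns follow directly from the six binary dot positions (2^6 = 64). As a brief illustration, a cell can be held as a 6-bit mask; the sketch below is ours, not part of the original text, though the dot numbering and the example letters follow standard literary Braille.

```python
# Each Braille cell has 3 x 2 = 6 dot positions; with each dot raised or
# lowered independently, 2**6 = 64 distinct patterns can be formed.

def cell_pattern(dots):
    """Encode a set of raised dot numbers (1-6) as a 6-bit mask.
    Dots are numbered 1-3 down the left column, 4-6 down the right."""
    mask = 0
    for d in dots:
        mask |= 1 << (d - 1)
    return mask

# Standard literary Braille examples:
letter_a = cell_pattern([1])        # dot 1 only
letter_b = cell_pattern([1, 2])     # dots 1 and 2
letter_c = cell_pattern([1, 4])     # dots 1 and 4

print(f"{2 ** 6} possible patterns per cell")   # 64
```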

Raised paper diagrams, produced via embossing or heat-raised paper, have been employed for the presentation of pictorial information, including illustrations, photographs, graphs, and maps, to the visually impaired. Petrie [19] and Challis [20] both describe the use of tactile embossed diagrams as an inexpensive computer interface for visually impaired users. Embossed diagrams still possess many advantages over more hi-tech solutions even today: they are cheap to produce, have no moving parts, and can be quickly and easily explored with the whole of both hands to provide a good overview of the information being displayed. They are, however, limited in size, can easily become cluttered with information, are subject to wear and tear and are inherently non-dynamic in nature. The "Talking Tactile Tablet" (T3), developed in conjunction with the Royal National College for the Blind in Hereford, UK, exploits the benefits of the tactile embossed format by combining a touch-sensitive tablet with traditional tactile diagrams and digitally stored audio information to provide dynamic, nested information for the visually impaired, beyond what is possible with a sheet of paper. For example, the system allows visually impaired users access to the contents of a world encyclopaedia through a raised paper tactile map [21].

2.2. Vibrotactile

Vibrotactile displays consist of a single-element stimulator that is used to encode information in temporal parameters of the vibration signal. Parameters that can potentially be employed include frequency, amplitude, waveform, duration and rhythm [22]. Stimulation can be generated by various means including, but not limited to, solenoids, voice coils or the rotation of an inertial mass by a motor. Most people make use of vibrotactile technology within mobile phones, as a non-audio-based indicator of an incoming call or text message. Vibrotactile information has also found widespread acceptance within the video gaming community as an inexpensive means of providing touch feedback in handheld controllers. Actuators can produce tactile sensations used to represent impacts, the firing of weapons or environmental effects. A small number of gaming software applications have employed vibrotactile display for abstract encoding of information, such as warning indicators, progress bars or tactile "radars". Researchers are currently investigating how more sophisticated vibrotactile parameters can be used to represent information [23], and how vibrotactile feedback can be combined with other sensory modalities and novel interaction methods to create compelling interactions for handheld devices [24].
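To make the parameter space concrete, the sketch below synthesises a single-element vibrotactile drive signal from the parameters listed above; all numeric values (carrier frequency, sample rate, rhythm pattern) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def vibrotactile_signal(freq_hz=250.0, amplitude=1.0, duration_s=0.5,
                        rhythm=(0.1, 0.05, 0.1), sample_rate=8000):
    """Synthesise a single-element vibrotactile drive signal.

    freq_hz, amplitude and duration set the carrier; `rhythm` is an
    alternating on/off pattern in seconds. All defaults are illustrative.
    """
    t = np.arange(0, duration_s, 1.0 / sample_rate)
    carrier = amplitude * np.sin(2 * np.pi * freq_hz * t)
    # Build an on/off gating envelope from the rhythm pattern.
    gate = np.zeros_like(t)
    start, on = 0.0, True
    for seg in rhythm:
        if on:
            gate[(t >= start) & (t < start + seg)] = 1.0
        start += seg
        on = not on
    return carrier * gate
```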


2.3. Force feedback

The need to represent textures and other high spatial frequency details of haptic virtual objects has led many researchers to investigate the presentation of tactile information using force feedback devices. The most common method of rendering haptic sensations with force feedback devices is the point interaction model, whereby the user is represented in the virtual environment by a single point corresponding to the tip of a probe or thimble through which the user interacts with the device. This situation is analogous to exploring objects remotely through an intermediary link, such as a stick or probe. In this situation, the distributed spatial cues on the fingertip correspond to the geometry of the probe, rather than the surface of the object being explored. However, it has been shown that it is still possible to estimate the surface qualities of a texture explored via the vibrations arising through dynamic contact between the probe and the texture [25]. This is an example of the phenomenon of "distal attribution", whereby the resulting vibration sensations are attributed to the properties of the surface at the distal point of the probe, rather than as arising directly from the probe itself [26].

The earliest reported work in the area of force feedback representations of texture was the Sandpaper system, which represented texture as a series of lateral forces created by "virtual springs", rendered on a force feedback joystick [27]. Massie describes the representation of textures on a Phantom haptic interface using sinusoidal perturbations perpendicular to the object surface [28]. Several methods of texture rendering have been investigated, including stochastic [29], image-based and fractal [30], and Fourier series [31] models. However, it is perhaps fair to say that perception of texture via force feedback devices is not fully understood. In particular, robust models of the relationship between the simulated physical characteristics of a texture and user perception remain elusive at present. A greater understanding of how force feedback devices mediate human perception also needs to be developed; for example, the range of a perceptually stable texture space has already been investigated by researchers at Purdue University [32].
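As an illustration of the sinusoidal approach attributed to Massie [28], a texture can be rendered by adding a small spatial sinusoid to the contact force along the surface normal. The sketch below is our paraphrase; the amplitude and wavelength values are assumptions, not parameters from [28].

```python
import numpy as np

# Sketch of sinusoidal texture rendering in the spirit of [28]: a small
# periodic perturbation is added perpendicular to the nominal surface.
# Amplitude and wavelength below are illustrative, not values from the paper.

TEXTURE_AMPLITUDE = 0.3      # N, peak perturbation force
TEXTURE_WAVELENGTH = 0.002   # m, spatial period of the simulated grating

def textured_normal_force(base_force, lateral_pos, normal):
    """Add a spatial sinusoid to the contact force along the surface normal.

    base_force is the nominal contact force vector, lateral_pos the probe's
    position along the surface, and normal the unit surface normal.
    """
    ripple = TEXTURE_AMPLITUDE * np.sin(
        2 * np.pi * lateral_pos / TEXTURE_WAVELENGTH)
    return base_force + ripple * normal
```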

2.4. Tactile displays

Tactile displays can be divided roughly into three categories of stimulation: electro-cutaneous, thermal, and mechanical. Electro-cutaneous displays create touch sensations by passing a small electric current through the skin. They are advantageous as they contain no moving parts, so construction, control and maintenance are relatively simple. They are more limited in application than mechanical displays due to the relatively small difference between the thresholds for signal detection and the onset of pain. The reported range of stimulation varies between 6 and 20 dB; for comparison purposes, the vibrotactile range of the skin is around 40 dB [9,33]. Thermal displays have generated some research interest, but are yet to find sufficient purchase in commercial devices due to the lack of an actuator of suitable bandwidth, safety issues and the fact that human thermal sensing is relatively little understood compared to the other two modes of stimulation [34,35].

Mechanical tactile displays utilise actuated components to actively deform the user's skin via pressure, stretch or other means, in order to induce programmed touch sensations. They can be further categorised by their method of stimulation into vibration, lateral displacement (skin stretch) and skin indentation. Vibration displays present shape information by activating patterns of spatially configured transducers at high temporal frequencies of operation. These elements tend to be much smaller than the vibrotactile transducers described in Section 2.2, for example, pins or wires driven by solenoids or piezoelectric actuators [36]. Devices for lateral displacement present information through spatiotemporal patterns of skin stretch [37,38]. Tactile displays for skin indentation present distributed cues by inducing pressure on the skin via a number of moving elements. They have received the most attention for virtual environment applications as they offer the most significant potential to represent the fingertip deformations that occur in touch interaction with everyday objects. There is also some evidence that a static tactile display is superior to a vibrating display: embossed letters are recognised tactually with the finger pad at 50% accuracy if their height is 4–6 mm, whereas similar accuracy with the Optacon requires a 12–20 mm letter height [39,40].

3. Human tactile perception

The physiological basis of haptic perception carries profound implications for the design of tactile display devices. Similarly, many relevant psychophysical studies have been conducted, which may indicate the degree of information that can be transmitted by mechanical perturbation of the skin. Here we discuss factors arising from neurophysiological, psychophysical and perceptual studies of the sense of touch.

3.1. Mechanoreceptors of the glabrous skin

There are four types of low-threshold, mechanoreceptive nerve endings in the hairless (glabrous) skin of the human hand. They can be classified based upon the size of their receptive field and their response to a step indentation of the skin. A mechanoreceptor's response is measured by the firing rate on its corresponding afferent nerve fibre. Rapidly adapting (RA) units have little to no static response, discharging only as long as the skin stimulus is in motion. RA units can be further classified into Type I (small receptive field) and Type II (large receptive field). Slowly adapting (SA) receptor types are less sensitive to dynamic stimuli than the RA units, but exhibit a response dependent on the amplitude of maintained skin indentation. They can also be classified into Type I (small) and Type II (large) variants depending on the size of their receptive field. For all types of mechanoreceptor, the frequency range within which the units respond increases rapidly with the amplitude of the stimulus [41,42]. The RAII receptors are also frequently referred to as Pacinian corpuscles (PC) in the literature. The characteristics of the four main types of mechanoreceptor are summarised in Table 1.

Experimental correlates between psychophysical studies and neurophysiological measures suggest the mechanoreceptive afferents can also be categorised by function [43]. SAI receptors are responsible for the detection of the form and texture of objects; they have a high sensitivity to the temporal and spatial structure of stimuli. Comparatively, SAII afferents are less sensitive to skin indentation, but more sensitive to lateral skin stretch, and are therefore implicated in the perception of relative movement against a surface. RAI mechanoreceptors are more sensitive than the SAI receptors, but have a poorer spatial resolution. As a result they present a good indication of motion, and are therefore useful in tasks such as slip detection and grasping, but less so in tasks that require spatial acuity. The RAII system is extremely sensitive to high-frequency vibrations. Perception of vibration is of great importance when perceiving remote contact via an intermediary link, such as a tool, probe or a haptic force feedback device. Because of their deeper location in the skin, RAII receptors have essentially no spatial acuity in comparison to the other receptor systems.

The "duplex model" of touch postulates two channels of stimulation: PC (corresponding to the Pacinian corpuscles, the RAII receptors) and non-PC (encompassing the other mechanoreceptive afferents in the glabrous skin). The two channels are distinguished by their spatial and temporal response to stimulation. For frequencies lower than 40 Hz, which primarily excite the non-PC channel, it was shown that the sensation threshold is largely independent of contact area. For frequencies of stimulation greater than 40 Hz, which appeal to the PC channel, contact area has been shown to be one of the most salient factors in determining indentation thresholds. Sensitivity increased with contactor area, illustrating that spatial summation of signals occurs at higher frequencies [44].

Results with a vibrotactile array showed a greater ability to resolve signals within noise at higher frequencies designed to stimulate the PC receptors than at lower frequencies intended to stimulate the non-PC population. Detection rates were also markedly better when the background masking stimulus was targeted at the alternate receptor group (e.g. detecting a high-frequency moving vibratory signal in low-frequency background noise and vice versa) [36]. Rogers [45] also noted an increased acuity for identification of tactile letters at higher frequencies. These results, however, cannot be solely attributed to the spatial acuity of the PC system, as vibrations at this frequency will also likely elicit a response from non-PC receptors [36,46]. Sherrick et al. also noted that spatial localisation performance appeared to be better for frequencies exciting the non-PC receptors (25 Hz stimulus) than the PC system (250 Hz stimulus) at various skin locations, but noted that performance would improve for the PC system at the fingertip due to the marked increase in the density of receptors [47].

Table 1
Properties of the mechanoreceptive afferents of the glabrous skin of the human hand [33,41,43]

Receptor type   Receptive field area (mm²)   Frequency range (peak sensitivity) (Hz)   Amplitude threshold for vibration (µm)   Effective stimulus
RAI             1–100                        1–300 (50)                                2                                        Skin motion
SAI             2–100                        0–100 (5)                                 30                                       Edges, points, corners
RAII (PC)       10–1000                      5–1000 (250)                              0.01                                     Vibration
SAII            10–500                       <8 (0.5)                                  40                                       Skin stretch

3.2. Discrimination: absolute and difference thresholds

The two-point contact discrimination threshold for the fingertips is widely regarded as 0.9 mm when the stimuli are placed against the subject's finger in the absence of any movement lateral to the skin's surface. Two points of contact closer than this threshold cannot be resolved into distinct stimuli. Experimental evidence suggests that active exploration marginally increases sensitivity, decreasing the threshold to 0.7 mm (findings are summarised in [48]).

The most sensitive part of the human hand in terms of psychophysical thresholds for indentation is the fingertip, estimated by psychophysical studies to be in the region of 10 µm for step indentations [33,49]. The highest thresholds for detection occur at low frequencies of stimulation [50]. Minimum thresholds occur in the region of 250 Hz stimulation [44,45,50], corresponding to the maximum sensitivity of the RAII receptors [41]. Discrimination capacity for amplitudes is largely dependent on the location of stimulation, being finer at those points of the body with lower detection thresholds [51]. As with the visual and auditory senses, discrimination capacity is not constant throughout the stimulation intensity scale. Indentation amplitude discrimination has low resolution at low intensities, becomes more sensitive in the range 200–700 µm, then progressively degrades with increasing stimulus intensity [52]. Discrimination capacity for amplitudes is reported to be roughly independent of the frequency of stimulation [53].

The resolution of temporal frequency discrimination is finer at lower frequencies. Investigations by Goff involving the stimulation of the subject's finger with a single probe showed that for lower frequencies (<25 Hz), the discrimination threshold was less than 5 Hz. For frequencies greater than 320 Hz, discrimination capacities were degraded [54]. Measures of discrimination thresholds for frequency are problematic, as the perception of vibratory pitch is dependent not just on frequency, but also on the amplitude of stimulation. This is similar to the effect of volume on pitch perception in audio, but the effect is more pronounced for vibratory stimuli on the skin. Geldard [55], as reported by Tan [53], found that subjects reported a change in pitch when frequency was fixed but the amplitude of stimulation was changed.

4. Sensory substitution

The process of sensory substitution involves the sensing of stimuli by electronic means, transformation of the stimulus via signal processing, and presentation of the transformed stimulus in another modality. The main application of these systems is increasing accessibility for those with sensory impairments. The earliest sensory substitution devices converted visual stimuli to tactile representations for the blind and visually impaired. There are also examples of visual-to-auditory [56] and auditory-to-tactile (see Section 4.3) substitution systems.

4.1. The Optacon

The Optacon was one of the first devices to employ a matrix of pins for tactile-vision substitution, and was the first device of this kind to be developed as a commercial product. The input to the device is a 6 × 24 array of photosensitive cells, which detects patterns of light and dark as material is moved underneath. The display part of the device is a 6 × 24 array of pins on which the user places their fingertip. The output of the camera is represented by vibrating pins on the tactile display. The pins vibrate at a frequency of 230 Hz, which is close to the maximum sensitivity of the PC receptors for vibration [44]. Users are encouraged to select a vibration amplitude that maximises their comfort.
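In signal-processing terms, the camera-to-display mapping reduces to downsampling a luminance image to the 6 × 24 array resolution and thresholding it into on/off pin states. The sketch below illustrates this; only the array geometry comes from the text, and the averaging-and-threshold scheme is a simplifying assumption about the Optacon's optics.

```python
import numpy as np

ROWS, COLS = 6, 24   # Optacon photocell/pin array geometry (from the text)

def image_to_pins(image, threshold=0.5):
    """Reduce a grayscale image (values in 0..1) to a 6 x 24 on/off pin map.

    Simplified stand-in for the Optacon's optics: average the luminance
    over each cell, then mark dark regions (print) as active vibrating pins.
    """
    h, w = image.shape
    pins = np.zeros((ROWS, COLS), dtype=bool)
    for r in range(ROWS):
        for c in range(COLS):
            cell = image[r * h // ROWS:(r + 1) * h // ROWS,
                         c * w // COLS:(c + 1) * w // COLS]
            pins[r, c] = cell.mean() < threshold   # dark print -> pin on
    return pins
```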

Reading speeds with the Optacon are around 10–12 wpm after the initial 9-day training period, reaching 30–50 wpm after further training and experience [8]. Hill extended the width of the Optacon display to allow the simultaneous presentation of more than one letter. Visual reading rates increase with the number of letters displayed at any particular time; however, this was not the case for tactile reading ([57] cited in [8]). When only one or two columns of the display were activated, hence giving only a partial view of letters scanned with the camera, reading rates were found to decrease ([58] cited in [8]). Loomis et al. found a similar result when tactual line drawings were presented: no significant improvement in recognition rate was found for a two-finger aperture, compared to one-finger exploration [59]. Conversely, in a study using a TVSS that presented letters tactually to the user's back, it was found that narrowing the width of the display to show only a smaller portion of the letters actually improved recognition rates. It was proffered that the difference in results between this and the Optacon study may have been due to the greater amount of practice the Optacon subjects had with the display ([60] cited in [8]).

Bliss and colleagues found that a pattern moving across the display produced better recognition rates than a static presentation, in which all elements are turned on and off simultaneously ([61] cited in [8]). Conversely, in a later study a static mode of presentation was found to produce superior recognition rates for letters than a scan mode. For a display time of 26 ms, the rate of correct letter recognition in static mode was 65%, compared to approximately 25% in scan mode. The difference in performance decreased as the display time increased; at longer durations both modes had relatively similar levels of performance.

In an extensive series of investigations with the Optacon, it was found that backwards masking of stimuli (where a second stimulus, or "masker", is presented after the stimulus to be identified) was more disruptive to performance in pattern recognition tasks than forward masking (where the masker is presented before the stimulus to be identified) [7]. Both static presentation of letters and scanning them across the fingertip showed similar masking effects (described in [8]). Increasing the time between the onsets of successive stimuli is the critical factor in reducing the effect of masking, as opposed to increasing the inter-stimulus interval during which no information is presented. Therefore, simply making the presentation of patterns briefer (hence increasing the gap between successive patterns) will not diminish the masking effect [7]. If the time interval between the stimulus and the application of the mask is greater than approximately 200 ms, there is little interference from the masker (results described in [8]).

Spatially separating patterns presented to a single finger will reduce the effect of masking, particularly for short onset intervals between successive stimuli (results by Weisenberger cited in [7]). Increasing the amplitude of vibration has been shown to improve the recognition rate for individual letters; however, reading performance as a whole is largely independent of amplitude. This may be because the increase in intensity of a single letter also increases its effectiveness as a masking stimulus for adjacently presented letters [8].

Kops and Gardner investigated the discrimination of "textures" formed by the periodic spacing of active pins on the Optacon. The textures varied in angular orientation (e.g. horizontal, vertical or oblique lines of elements) and their density (number of Optacon elements active). Texture patterns were scanned back and forth underneath the subject's finger as it rested on the Optacon, with an exploration velocity equivalent to 60 mm/s. Textures that differed in both spatial orientation and density were the most easily discriminated. Stimulation intensity appeared to be the most salient factor for patterns that had a large difference in density; spatial orientation became more salient for the discrimination of texture patterns with smaller densities. Low-density patterns were more accurately discriminated than high-density textures, and the horizontal and vertical orientations were more easily discriminated than the oblique patterns. The vertical texture was the only distinct pattern at high densities. There was also a strong correlation between the subjects' ability to match the tactile patterns with visual representations and their ability to distinguish them tactually. In a less formal study with a sub-group of participants, discrimination ability was found to be significantly higher at slower exploration speeds (30 mm/s) [62].

4.2. Virtual tactile displays

In order to allow exploration of a larger tactile image with a device limited to the size of one or two fingertips, several devices have adopted a strategy of mounting a tactile display on a computer input device, such as a mouse or graphics tablet stylus. This allows the motion of the user's fingertips to be tracked within the limits of a certain workspace and the tactile display to be updated accordingly, depending on where the user is within the "virtual image". Kaczmarek and Bach-y-Rita classify devices of this type as "virtual tactile tablets" [9], after the work of Boyd et al. [63]. We propose a more generic term, "virtual tactile display", which does not exclude devices that do not employ a tablet interface for input or output. These devices can be distinguished from devices such as the Optacon in that control and stimulation occur at the same body location, and they are most commonly employed to represent information that is stored digitally on a computer, rather than present in the user's distal environment. They can also be distinguished by the fact that "active" exploration is necessary in order to perceive the entirety of the image being displayed on the device. It is necessary for the user to integrate the device into a proprioceptive-tactile perceptual feedback loop in order to synthesise a spatial mental representation of the stimulus from the temporally varying tactile cues at the fingertip and proprioceptive information regarding the position of the fingers (Fig. 4).
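Functionally, the update logic of a virtual tactile display is a moving window over a stored image: the tracked device position selects which patch of the virtual image drives the pins at each instant. A minimal sketch follows; the 4 × 8 array size and the one-pixel-per-pin mapping are illustrative assumptions, not a description of any particular device.

```python
import numpy as np

PIN_ROWS, PIN_COLS = 4, 8     # e.g. one finger pad of a pin-array mouse (assumed)

def update_pins(virtual_image, cursor_x, cursor_y):
    """Return the on/off pin states for the patch under the fingertip.

    `virtual_image` is a 2D boolean array holding the full tactile image;
    (cursor_x, cursor_y) is the tracked device position in image pixels.
    Mapping one image pixel per pin is an illustrative simplification.
    """
    h, w = virtual_image.shape
    # Clamp the window so it stays inside the virtual image.
    top = min(max(cursor_y, 0), h - PIN_ROWS)
    left = min(max(cursor_x, 0), w - PIN_COLS)
    return virtual_image[top:top + PIN_ROWS, left:left + PIN_COLS]
```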

Early exponents of active exploration were Katz [64] and Gibson [65], who both recognised the importance of relative movement between skin and surface for tactile perception. Broadly speaking, active perception deals with the voluntary movement of the skin over a surface. In a passive situation the stimulus is presented to the skin by the experimenter; this can be static presentation without movement, movement of the stimulus relative to the skin, or movement of the subject that is not induced voluntarily but by the experimenter in some fashion, such as through an actuated mechanical system. It was initially assumed that the notions of active and passive touch were mutually exclusive concepts. Lederman and Taylor [66], however, stated that active/passive does not represent a dichotomy; rather, a method of exploration can have degrees of activity or passivity depending on the restriction of feedback channels to the subject. Kaczmarek provides a summary of this continuum within the context of tactile display research [9]. Jansson points out that the argument for active versus passive perception can also be seen as an argument for the relative importance of kinaesthetic and cutaneous information [67]. Indeed, active exploration would seem to be important for resolving tactile information larger than the fingertip. Jansson describes a study using an Optacon display mounted on a two degrees of freedom chassis capable of planar movements. The display's output was determined by a controlling PC, thus converting the device to a virtual tactile display. Performance for shape identification (from a set of three) was significantly above chance for both active (84%) and passive (56%) conditions. In the passive condition, the subjects placed their finger on the Optacon and the resulting tactile output from a previous subject's exploration of the stimuli was played back to them [67].

Fig. 4. The user explores the complete image with a virtual tactile display, but must construct a spatial image from tactile and proprioceptive cues over time.

An early virtual tactile display was the Pantobraille, which combined a planar force feedback device with two standard Braille cells for exploration of a 100 × 160 mm workspace [68]. The TVSS developed by the Electronic Vision group in Heidelberg uses a tactile display mounted on a moveable chassis in order to present tactile patterns over a larger surface area of 164 × 159 mm. The display consists of a pin array constructed from four commercially available Braille cells, mounted together to provide a 6 × 8 display, which is a convenient size for three fingers. The tactile display is mounted on a chassis that permits translational movement in the X and Y directions. This permits 2880 pixels in the entire system as the tactile display is swept across the workspace. The device can access visual images across the Internet, stored locally on a computer, or through a camera attached to a PC [69].

Gapenne et al. describe a tactile display for the visually impaired that uses a graphics tablet to provide spatial input with the dominant hand, and two commercial Braille cells to provide tactile output to the non-dominant hand [70]. Central to the group's work is the concept of parallelism, which states that the more individual sensor/stimulator pairs are simultaneously employed (e.g. individual pins on a tactile display), the more precisely and easily information can be explored and extracted [71]. It is argued that a greater number of sensor/stimulator pairs used in parallel not only enriches the sensations received, but also aids decision making during exploration due to the additional contextual information that is made available [70]. In further experiments, it was demonstrated that supplementing the tactile feedback with audio cues regarding the frame of reference (the location of the x and y axes) aided the perception of geometric forms, as presenting the axes in a second modality allowed them to be quickly differentiated from the stimuli [72].

The first commercially available pin-array augmented mouse was released by Virtouch Ltd (www.virtouch2.com). Intended for the tactile presentation of graphics to the visually impaired, the first iteration of the device (no longer produced), the VTMouse (Fig. 5), was equipped with three 4 × 8 pin arrays on which the user rested his/her fingers during operation (a review of the technical specifications of the device and perceptual experiments can be found in [73]). More recently, this model was superseded by the VTPlayer (Fig. 6), a more compact device with two 4 × 4 tactile pin arrays. Jansson and Pedersen describe a study in which blindfolded sighted people and visually impaired users employed the VTPlayer mouse to explore tactile and audio representations of maps of the states of the USA. They compared conditions of audio and multimodal (tactile and audio) feedback. There was no significant difference in performance for multimodal feedback over audio alone, showing that the addition of tactile feedback did not affect the users' performance. However, Jansson and Pedersen noted that their participants had little training with the mouse, and problems were noted where the software lacked auditory feedback in certain situations [74].

Fig. 5. The VTMouse, an example of a virtual tactile display with three 4 × 8 pin arrays, shown with a standard mouse for comparison of size.

Fig. 6. The VTPlayer tactile augmented mouse, a commercially available virtual tactile display with two arrays of 4 × 4 pins.

4.3. Tactile-audio sensory substitution

Work on tactile substitution for the profoundly deaf did not develop much until the late 1970s and early 1980s, comparatively late compared to tactile-vision substitution aids. One of the earliest incarnations of this type of device was the Tacticon, a commercial device that adjusted the perceived intensity of 16 electrodes, each of which corresponded to a range of frequencies in the auditory spectrum, in order to improve speech comprehension, auditory discrimination, and the clarity of the user's speech (results summarised in [9]). Another early device, the Tactile Acoustic Monitor (TAM), was developed in the 1980s at the Biomedical Physics Group of the University of Exeter by Dr Ian Summers, one of the UK's leading exponents of tactile-auditory substitution research. The TAM employed a single vibrotactile stimulator to provide information about the loudness of the user's speech and other sounds in the environment. Sound is picked up by a microphone and compared to a threshold level which can be self-adjusted. If the microphone signal is above the threshold, the vibrator is turned on at a constant amplitude and frequency; if the sound level falls below the threshold, the vibrator is turned off. A UK Medical Research Council study showed that the TAM was also useful for lip reading applications, prompting a variety of experiments investigating speech perception via a single vibratory transducer (a concise summary exists in [75]). Summers and colleagues have recently developed a pin array tactile display for the fingertip. The device consists of a 10 × 10 matrix of pins over an area of 1 cm² [6,36]. At present, the size and weight of the actuators preclude the device from portable applications for the investigation of active touch, but the Biomedical Physics Group have produced smaller devices with fewer stimulators that can be mounted on a mouse to provide a virtual tactile display.
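The TAM's control logic described above amounts to a level-threshold gate on the microphone signal. The sketch below illustrates the idea; the frame length and the RMS level measure are our assumptions, not details of the original device.

```python
import numpy as np

def tam_gate(mic_samples, threshold, frame=160):
    """Sketch of the TAM's logic: gate a fixed-amplitude, fixed-frequency
    vibration on or off according to whether the microphone level exceeds
    a user-adjustable threshold.

    Returns one on/off decision per frame of samples; the frame length and
    the RMS level measure are assumptions, not from the original device.
    """
    n = len(mic_samples) // frame
    decisions = []
    for i in range(n):
        chunk = mic_samples[i * frame:(i + 1) * frame]
        level = np.sqrt(np.mean(np.square(chunk)))   # RMS level of the frame
        decisions.append(level > threshold)          # True -> vibrator on
    return decisions
```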

5. Teleoperation and virtual environments

Researchers became interested in the idea of computer-mediated tactile sensations through the field of teleoperation. Teleoperation is concerned with human control of a remote slave device (typically a robot manipulator arm) using a local master device. The user interacts with the master device, and his/her movements are communicated to the slave and subsequently replicated. This allows the user to perform complex manipulation tasks in hazardous environments (underwater maintenance, space exploration, nuclear reactors, bomb disposal) at no personal risk. User control is typically based upon visual feedback cues obtained from a camera at the remote location; however, improved control of the slave manipulator can potentially be achieved by incorporating other feedback modalities, such as force feedback, to provide the operator with an indication of the contact forces encountered by the slave manipulator in the remote environment. Goertz and Thompson developed the very first teleoperation system to include force display, at the Argonne National Laboratory. Growth in space exploration led to pivotal work by NASA and the Jet Propulsion Laboratory (JPL) with the introduction of computer-based Cartesian control, which allowed master devices to be smaller and kinematically dissimilar to slave devices (see [10] for an extensive history of force feedback devices). Tactile feedback can be used to relay contact information from the tip of a remote tool or the jaws of a slave manipulator to a human operator. Meaningful tactile feedback may be of great importance for the fine control of force and position in grasping and manipulation tasks, and for controlling the level of force exerted at the tip of a tool on a slave device. Motivation for the cross-fertilisation of ideas between earlier sensory substitution work and teleoperation arose from the need to provide increased tactile sensitivity to astronauts during extravehicular activity on space missions. The thick, many-layered protective gloves worn by astronauts preclude tactile information during complex dextrous tasks such as satellite servicing or space vehicle maintenance. Pioneering work in this area was performed by Paul Bach-y-Rita and colleagues, who investigated the presentation of forces sensed at the fingers on electrotactile displays on the torsos of astronauts [76].

The need to provide tactile cues became more prevalent as more sophisticated master–slave systems were developed for applications such as remote surgery. Shimoga provides a review of tactile feedback technology that could potentially be applied in the design of teleoperator systems [77]. The review reports the work of Patrick [78] as some of the earliest work applying vibrotactile feedback to an exoskeleton hand master. Patrick's experiments with a hand exoskeleton enhanced with a vibrating voice coil showed an improvement in mean task error over visual feedback alone [78]. Edin et al. [79] showed that a short-duration, high-frequency vibration display mounted on a master manipulator can be used to represent slip sensations, and hence trigger a quasi-reflexive grasp force increase by the operator during telemanipulation tasks. In related early work on vibration display, Minsky and colleagues describe the Sandpaper system, which presented virtual textures via a two degrees of freedom force feedback joystick [27]. Caldwell and Gosney provide some of the earliest reported work regarding the display of remotely sensed texture and slip to a human operator via a data glove enhanced with piezoelectric vibratory displays [35]. Identification of textures from a set of four surfaces with differing amplitude and frequency characteristics was excellent (above 90% recognition rates). Their system was also capable of presenting thermal and pressure sensations.

Motivated by the potential for the development of tactile displays for distributed fingertip information in teleoperator and haptic force feedback applications, Lederman and Klatzky performed a series of studies using a rigid fingertip sheath to mask distributed cutaneous information [80]. Two-point discrimination distances were found to be substantially larger (over a six-fold increase), and force detection was significantly impaired, though not as drastically (a 1.73-fold increase in threshold). Vibrotactile detection thresholds were not significantly impaired, and subjects could also reliably estimate the roughness magnitudes of tactile raised dot patterns, though discrimination capabilities were significantly diminished compared to performance without the rigid sheath. Performance in perceptual tasks such as judging the orientation of 2D bars and detecting a rigid lump via palpation was significantly less accurate with the finger sheath. Lederman and Klatzky's results showed that spatial acuity was severely reduced, but that it was still possible to use temporally varying cues in the discrimination of vibrotactile thresholds and in judgements regarding the roughness of surface textures. More recently, Jansson and Monaci demonstrated that providing cutaneous information at the fingertips was of considerably more benefit than increasing the number of limited contact points available with a remote or simulated environment via a haptic interface or teleoperator [81].

Both Cohn et al. [82] and Hasser and Weisenberger [83] performed pioneering work in the area of tactile feedback for teleoperation applications. Hasser and Weisenberger report a prototype 5 × 6 element shape memory alloy actuated display. In pilot experiments on pattern identification it was shown that users could reliably identify visual representations of patterns presented tactually with the display. The active elements of the display were driven at frequencies between 5 and 200 Hz; it was observed that subjects were most likely to confuse stimuli based on the spatial similarity of patterns, rather than overall pattern density (the number of active array elements) [83].

Perhaps the earliest exponents of complete teleoperation systems incorporating both force feedback and pin array displays were Robert Howe and colleagues in the mid-1990s [84]. Kontarinis et al. investigated the effect of tactile sensing and display to a human operator in a telemanipulation task [85]. Output from a tactile sensor located on a remote manipulator was relayed to the operator's fingertips via a tactile pin array of 6 × 4 individually controllable elements. Shape memory alloy actuators were used to drive each pin up to a maximum amplitude of 3 mm (a more extensive description of the device can be found in [42]). A teleoperated hand system for thumb and finger grasp was used for the master and slave manipulators (fully described in [86]). The master and slave devices are identical two-fingered hands with two degrees of freedom in each finger, allowing movement in the vertical plane. Subjects were asked to locate a rigid cylinder inside a block of foam rubber in the absence of visual feedback. With tactile feedback, the subjects were able to locate the cylinder with an error of 3 mm or less in 57 out of the 60 trials conducted. Conversely, when tactile feedback was not available, the mean absolute error was over 13 mm.

Richard and Cutkosky [87] explored the effects of augmenting force feedback with tactile ("ungrounded") feedback for surface contact. A single translational degree of freedom haptic interface was augmented with an actuated "force applicator" that provided a contact force at the user's fingertip upon collision with a virtual wall. While there was no quantitative benefit in terms of detecting the boundary between free space and the haptic constraint, subjects reported enhanced realism of the contact sensations.

Building on their earlier expertise with vibration-based display, Professor Darwin Caldwell and colleagues at Salford University developed a more advanced tactile display based on pneumatically controllable pins for the display of local shape and texture to an operator's fingertip in a telemanipulation task [38]. A series of informal evaluations was undertaken with the display attached to a data glove. Users reported a good ability to detect and track edges using the shape display, and were able to discriminate textures based on their frequency and amplitude characteristics. The authors report further in-depth tests as ongoing in this area.

6. Tactile display for multimodal human–computer interaction

The mid-1990s also saw tactile displays used within desktop interaction for the first time. Little of this work focussed on tactile pin arrays; rather, single-pin or vibrotactile output was more common. The aim of this work has generally been to supplement the graphical feedback of current computer systems with another form of output. This is often done because the visual sense is overloaded and users can miss information if it is presented graphically; the tactile sense is underutilised and so is available for information presentation. It is also the case that tactile feedback might be more appropriate than graphics for communicating certain types of information.

Much of the focus of work in this area to date has been on device and hardware development; until recently there were few tactile transducers routinely available, and they were often designed specifically for use in sensory substitution or teleoperation, making them impractical as everyday devices for desktop interaction. Now many mobile telephones and handheld computers have vibrotactile actuators included for alerting. Lee et al. [88] have also recently developed a tactile stylus for use on touch screens and handheld computers. Poupyrev et al. and Fukumoto et al. [89–91] have designed sophisticated tactile displays for handheld computers. These have enabled research to begin into how best to use touch for interaction.

Significant early work in tactile human–computer interaction was done by Akamatsu and colleagues, who investigated the addition of an electromagnetically controlled pin to a standard mouse. This allowed users to feel vibrations through a fingertip. Akamatsu, MacKenzie and Hasbroucq [92] investigated the impact of their tactile feedback on target acquisition in a Fitts' law study [93]. They examined whether targeting was aided when sound, tactile and colour feedback were used to indicate target acquisition in a desktop-type pointing interaction. They compared five conditions: 'normal', the three feedback modes in isolation, and a 'combined' condition in which all feedback modes were used simultaneously. Although they found no significant difference in overall selection time, their results showed that target highlight time was reduced when feedback was present. Tactile feedback appeared to have a greater effect in reducing highlight time than either sound or colour.
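For reference, Fitts' law models the movement time MT to acquire a target of width W at distance D as a linear function of an index of difficulty. The commonly used Shannon formulation (one of several variants, and not necessarily the exact form used in [92]) is:

```latex
% Fitts' law, Shannon formulation: a and b are empirically fitted
% constants; log2(D/W + 1) is the index of difficulty in bits.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```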

In a second study, Akamatsu and MacKenzie [94] examined the contribution of tactile and force feedback to targeting. They found a significant difference between the normal, tactile, force, and force+tactile feedback conditions. Tactile and force+tactile feedback reduced targeting times by 5.6% and 7.6%, respectively. Force feedback alone resulted in slightly higher targeting times. The authors did not consider multiple targets, or the distraction that tactile feedback might cause in more complex, real-world displays where many targets present unneeded feedback as the user moves over them.

There has been less research into the use of tactile feedback in more complex and realistic, multiple-target displays. Vitense et al. [95] investigated how all combinations of visual, audio and tactile feedback influenced user performance in drag-and-drop-type tasks. The feedback modes were activated when the dragged object was over the target. Tactile feedback was provided by vibrating a Logitech Wingman force feedback mouse. Dependent measures included the total trial time, the highlight time (between entering the target and dropping the object), and subjective responses. There were several interesting results, including the fact that tactile feedback increased the total trial time but reduced the target highlight time. Overall, their results showed that some combinations of feedback were very effective, but others were not and actually reduced performance. This study was based on a more realistic task than some of those done previously, but there were still few distracter targets for users to cross on the way to the item of interest.

Cockburn and Brewster [96] looked at combinations of different feedback modalities, including vibration feedback from a Logitech iFeel mouse (www.logitech.com), for selecting small targets on a computer desktop. They found that, in simple Fitts' law-type tasks (where discrete targets are used, so there are no distracters), tactile and audio feedback both reduced targeting time (confirming Akamatsu's results above), but the combination of audio plus tactile was not as good as either used alone. However, in a more realistic task (choosing items from drop-down menus) the tactile feedback caused problems and actually increased targeting time over a standard graphical display. The reason for this was the close proximity of many tactile targets (each menu item gave tactile feedback), causing feedback "overload": as the user moved over a menu to get to a specific item, all the items traversed gave tactile feedback, which distracted the user. Cockburn and Brewster suggest that "… care should be taken to make sure that feedback suits the particular task. It may be that feedback good in one situation is poor in another due to excessive distraction". They give some suggestions as to how the problems of distracters can be avoided so that the advantages of tactile target acquisition can be retained in complex interactions.

Many of the studies described above used very simple tactile cues, for example, vibrating a mouse to indicate that a target has been reached. Brewster and Brown have been investigating more complex cues which encode more information. They have proposed Tactons, or tactile icons: structured, abstract messages that can be used to communicate tactually [22,23]. Information is encoded into Tactons using the basic parameters of cutaneous perception, such as waveform, rhythmic patterns and spatial location on the body. Results have shown that information can be encoded effectively in this way and that users can easily extract it.
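
As a concrete, purely illustrative picture of this idea, a Tacton can be thought of as a small record of cutaneous parameters; the field names and values below are hypothetical and are not taken from [22,23]:

from dataclasses import dataclass
from typing import List

@dataclass
class Tacton:
    # Hypothetical encoding of a structured tactile message.
    waveform: str            # drive waveform, e.g. "sine" or "square"
    rhythm: List[float]      # pulse and gap durations in seconds
    body_site: str           # spatial location of the actuator
    amplitude: float         # normalised intensity, 0.0 to 1.0

# Example: a two-pulse "message received" cue on the wrist.
msg_received = Tacton("sine", [0.1, 0.05, 0.1], "left wrist", 0.8)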

Brewster and King [97] looked at using simple Tactons to present progress information tactually. Progress bars are common in graphical user interfaces of all types, from desktop systems to mobile phones. They compared a standard visual progress bar to a tactile one. The tactile progress bar indicated progress by the time between two tactile pulses; as the download progressed, the pulses got closer together, eventually overlapping when the download completed. They showed that, if users were engaged in a visually demanding task like typing, the tactile display could indicate progress significantly more effectively than the standard visual one, as it did not compete for visual attention with the typing task. This again shows that tactile displays have much untapped potential for improving human–computer interactions.
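
A minimal sketch of the mapping just described, assuming a linear relationship between remaining progress and inter-pulse gap (the actual timings used in [97] are not reproduced here):

def inter_pulse_interval(progress, max_gap=1.0):
    # Map progress (0.0 to 1.0) to the gap, in seconds, between the
    # two tactile pulses; the gap shrinks linearly and the pulses
    # overlap (gap 0) at completion. Parameter values are illustrative.
    return max(0.0, (1.0 - progress) * max_gap)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"progress {p:.2f} -> gap {inter_pulse_interval(p):.2f} s")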

7. Experiment

Tactile displays clearly offer great potential for enabling visually impaired people to access digitally stored and manipulated data. The most commonly employed solutions at present include screen readers, screen magnifiers, Braille displays, and tactile diagrams produced on heat-raised paper. These methods have several drawbacks: they are either unable to respond quickly to dynamic changes in data (hard copies of heat-raised diagrams must be produced), inherently serial in nature and therefore highly memory intensive (e.g. a screen reader or Braille display representing numerical data from a large table), reliant on an abridged form of the data (pre-recorded descriptions of graphs or Braille versions of tables), or simply inaccessible to potential users (only 26% of visually impaired university students read Braille, and screen magnifiers are useless to those with no residual vision).

A virtual tactile display (see Section 4.2) could potentially make graphical representations of large data sets more accessible to visually impaired people, using tactile representations of common visualisations such as line graphs, bar charts and pie charts. The purpose of this study was to investigate the effectiveness of a commercially available, mouse-based tactile display for representing data in the form of a line graph. The study focussed on one of the most fundamental components of comprehending data presented as a line graph: perception of the gradient of lines.

7.1. Equipment

The device used in the study was the VTPlayer tactile mouse (Fig. 6), which is similar to a standard computer mouse except that the user rests their index and middle fingers on two tactile displays located on top of the chassis. Each display consists of a 4 × 4 matrix of pins that are individually controllable through software. Pins can either be raised or lowered; under standard operation, the states of the pins are related to the colour of the pixels surrounding the mouse pointer, with a dark pixel corresponding to a raised pin and a light pixel to a lowered pin (the exact thresholds are unknown). Thus, a user can actively explore a desktop, menu, or other on-screen environment and receive tactile information regarding what is under the mouse pointer.
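
A minimal sketch of this pixel-to-pin mapping, assuming a simple fixed luminance threshold (as noted above, the VTPlayer's exact thresholds are unknown):

import numpy as np

def pin_states(image, cx, cy, threshold=128):
    # Return a 4 x 4 boolean pin matrix for the image region at the
    # pointer position (cx, cy); image is a 2D array of 8-bit grey
    # levels. A dark pixel (below threshold) raises the matching pin.
    patch = image[cy:cy + 4, cx:cx + 4]
    return patch < threshold

# Example: a black diagonal line on a white background.
img = np.full((8, 8), 255, dtype=np.uint8)
np.fill_diagonal(img, 0)
print(pin_states(img, 0, 0).astype(int))   # raised pins shown as 1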

Exploring a complex environment, such as a desktop or other visually oriented graphical user interface, is extremely difficult from tactile information alone. Thus, for the purposes of the study, simple line stimuli were created, consisting of black lines on a white background. When explored with the VTPlayer, the lines were represented by raised pins on a background of lowered pins. The experiment used two data sets, each with lines of a different thickness. From informal observations, it seemed that thicker lines might be easier to locate, but might also obscure gradient information due to the limited resolution of the VTPlayer's tactile displays.

7.2. Previous research

Previously at Glasgow University, researchers studied the perception of line gradient using both the traditional heat-raised paper diagrams used by visually impaired people and an inexpensive force feedback mouse (Logitech's Wingman) [98]. Performance, in terms of identifying positive and negative gradients relative to a horizontal line, was very accurate, and comparable, for both media; it reached ceiling level within roughly 6° of the horizontal. This illustrated that people could perform the task in the absence of tactile cues, using kinaesthetic information from the force feedback mouse. Studies in the psychological literature have focussed on particular phenomena, such as the "haptic oblique effect", which shows reduced accuracy for oblique line orientations compared to vertical or horizontal [99,100].

7.3. Experimental procedure

Twelve participants were recruited for the experiment from the student population at the University of Glasgow. All participants were sighted and right-handed. The stimuli consisted of a set of 19 images of lines, which the participants explored using the VTPlayer. The range of angles used in the data set was chosen on the basis of informal pilot studies with members of the authors' research group. The stimuli consisted of lines between −24° and +24° (relative to the horizontal) at angular increments of 3°, plus lines at −30° and +30°. Thus, the full set of lines was oriented at the following angles: −30°, −24°, −21°, −18°, −15°, −12°, −9°, −6°, −3° (all sloping down), 0° (horizontal), and 3°, 6°, 9°, 12°, 15°, 18°, 21°, 24°, 30° (all sloping upwards). The full data set for the experiment consisted of four repetitions of each of these lines, giving a set of 76 stimuli. The stimuli were generated using Microsoft Excel and copied into bitmap files of 370 × 345 pixels (approx. 11 cm by 10.2 cm). Two sets of lines were generated, using different line thicknesses, to test for the effect of thickness on perception of gradient: the "thick" lines were 3 pixels thick and the "thin" lines 2 pixels thick. All the lines were approximately 5 cm in length and ran through the centre of the bitmap image. During the experiment, the user's mouse movements were constrained by the VTPlayer software to an area of 290 × 265 pixels around the centre of the bitmap, as the edge of the bitmap contained no information and made it harder for participants to locate the stimuli.
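
For illustration, stimuli of this kind could be generated as follows. The originals were produced in Microsoft Excel, so this Python/PIL sketch only approximates them, and the pixels-per-centimetre scale is an assumption:

import math
from PIL import Image, ImageDraw

W, H = 370, 345   # bitmap size in pixels, as used in the study
HALF_LEN = 84     # half the line length; ~5 cm at ~33.6 px/cm (assumed scale)

def make_stimulus(angle_deg, thickness):
    # Draw a centred black line at angle_deg to the horizontal on a
    # white background. Screen y grows downwards, so dy is negated
    # to make positive angles slope upwards.
    img = Image.new("L", (W, H), 255)
    draw = ImageDraw.Draw(img)
    dx = HALF_LEN * math.cos(math.radians(angle_deg))
    dy = HALF_LEN * math.sin(math.radians(angle_deg))
    cx, cy = W / 2, H / 2
    draw.line([(cx - dx, cy + dy), (cx + dx, cy - dy)], fill=0, width=thickness)
    return img

angles = sorted(list(range(-24, 25, 3)) + [-30, 30])   # 19 angles
thick_set = [make_stimulus(a, 3) for a in angles]      # "thick", 3 px
thin_set = [make_stimulus(a, 2) for a in angles]       # "thin", 2 px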

There were two conditions in the experiment, "thick lines" and "thin lines". The order in which participants attempted these conditions was counterbalanced to control for ordering effects: half the participants attempted the "thick lines" condition first, and the other half the "thin lines" condition first. Participants explored the lines using the VTPlayer with their dominant hand (the right hand for all participants). The cursor speed of the VTPlayer was scaled to one-tenth that of normal operation for the purposes of the experiment: piloting showed that novice VTPlayer users tended to move the mouse very quickly, so that the line stimulus was passed over between updates of the tactile display, making it very easy for participants to miss the stimulus and become confused. The participants' task was to indicate whether the line sloped upwards or downwards from left to right. No time limit was imposed. Participants logged their answer by pressing a key on the keyboard with their non-dominant hand, and were instructed to guess if unsure, as some of the stimuli might be close to the limits of their perception. Participants were not blindfolded, as they were required to follow on-screen instructions and use the keyboard to record their answers; however, their dominant hand and the mouse were hidden from view behind a screen (see Fig. 7).

During each step of the experiment the participant was presented with one of the stimuli to explore using the VTPlayer. The position of the mouse was reset to the centre of the bitmap containing the stimulus before each trial, using the VTPlayer software, to make the line stimuli easy to locate. During exploration, the participant received no on-screen visual feedback from a mouse pointer, so that their judgements were based purely on kinaesthetic information from mouse movements and cutaneous stimulation from the VTPlayer's tactile displays. Upon making a decision, an on-screen prompt instructed the participant to hit the return key when ready to receive the next stimulus; while this dialog box was on-screen they were free to take a break if needed, and the mouse input and output were disabled. The order of presentation of stimuli was randomised for each participant. Participants were given 19 practice stimuli (one of each unique line) before the experiment proper. During this practice session the mouse cursor was visible to the participants (they were informed beforehand that they would receive no visual feedback during the controlled experimental conditions); this was so that participants could learn the relationship between their hand movements and the movement of the mouse pointer, and develop strategies of hand movement for locating and exploring the line stimuli.
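
In outline, each trial followed the sequence sketched below; this is a schematic reconstruction with stubbed, hypothetical input/output helpers, not the software actually used in the study:

import random

# Stubbed I/O helpers; in the real experiment these drove the
# VTPlayer and read the keyboard.
def reset_mouse_to_centre(): pass
def show_stimulus(angle): pass
def wait_for_up_down_key(): return "up"
def wait_for_return_key(): pass

def run_condition(angles, repetitions=4):
    # One thickness condition: 19 angles x 4 repetitions = 76 trials,
    # presented in random order, self-paced between trials.
    trials = angles * repetitions
    random.shuffle(trials)
    responses = []
    for angle in trials:
        reset_mouse_to_centre()        # pointer reset to bitmap centre
        show_stimulus(angle)           # tactile output only, no cursor
        responses.append((angle, wait_for_up_down_key()))
        wait_for_return_key()          # participant continues when ready
    return responses

log = run_condition(sorted(list(range(-24, 25, 3)) + [-30, 30]))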

7.4. Results

For the purposes of analysis, the proportion of times that participants responded that a line was increasing from left to right was recorded. Representing the data in this way (as opposed to, say, the proportion of correct responses) allowed standard psychophysical methods to be applied to calculate the difference threshold for correct identification of gradient. Initially, a two-way ANOVA was performed over both the "thick lines" and "thin lines" data sets, using the Minitab software (www.minitab.com). The dependent variable was the proportion of "increasing" responses averaged over all subjects; the independent variables were the stimulus angle and the thickness of the lines. The angle of the line had a highly significant effect on the participants' responses (F(18,18) = 271.3, p < 0.001), and the thickness of the line also had a significant effect (F(1,18) = 6.04, p < 0.05). Fig. 8 illustrates the relationship between the angle of the stimulus, the thickness of the lines and the proportion of times the line was identified as "increasing". The graph indicates a difference in performance for negative gradients, with thin lines identified as "increasing" more often; performance was therefore less accurate for the thin lines. For positive gradients, however, the data for both thicknesses of lines appear similar.

Fig. 7. Experimental set-up for the line discrimination study. The participant operates the VTPlayer with their dominant hand behind the screen, and logs responses on the keyboard with the non-dominant hand.

Fig. 8. Proportion of times the "increasing" answer was given in this experiment, as a function of stimulus angle, for "thick lines" and "thin lines".

This was investigated by performing separate two-tailed t-tests for negative and positive angles. In both cases the independent variable was line thickness and the dependent variable the proportion of times "increasing" was given as the answer. The t-test for negative gradients verified that there was a significant difference between the two line thicknesses (T = 2.31, p = 0.027), confirming that performance was significantly less accurate for thin lines. The t-test for positive gradients showed no significant difference between the two line thicknesses (p = 0.51). As the thickness of the lines had a significant effect only for negative gradients, the two data sets were collapsed into a single data set for the purposes of further analysis. Fig. 9 shows the proportion of positive responses for each angle, averaged over both thickness conditions and all subjects.

The data approximate a cumulative normal curve (an ogive), the typical psychometric function for a difference threshold experiment. The data were converted to z-scores from the normal distribution; when the proportion of responses is expressed as z-scores, the cumulative normal distribution becomes a linear function. Hence, by fitting a straight line to the data plotted as z-scores, the best-fit cumulative normal model can be found (see solid line in Fig. 9). The model was fitted using only the data between ±9°, as this range is the most salient for calculating difference thresholds; when the model was initially fitted to the entire data set, the ceiling effects observed beyond this limit skewed the model, providing a poor estimate of the data around the threshold. The upper difference threshold (DL) corresponds to a 75% response rate, hence the upper DL predicted by the model is +4.7°; the lower DL corresponds to 25%, which the model places at −4.39°.
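
The threshold calculation can be illustrated as follows: proportions are converted to z-scores, a straight line is fitted over the ±9° range, and the 25% and 75% points of the fitted cumulative normal are read off. The response values below are invented for illustration and are not the study's raw data:

import numpy as np
from scipy import stats

# Illustrative proportions of "increasing" responses within the
# +/-9 degree fitting range (not the study's data).
angles = np.array([-9.0, -6.0, -3.0, 0.0, 3.0, 6.0, 9.0])
p_inc = np.array([0.06, 0.17, 0.35, 0.50, 0.65, 0.83, 0.94])

z = stats.norm.ppf(p_inc)                    # proportions -> z-scores
slope, intercept, *_ = stats.linregress(angles, z)

# Invert z = slope * angle + intercept at the 25% and 75% points.
upper_dl = (stats.norm.ppf(0.75) - intercept) / slope
lower_dl = (stats.norm.ppf(0.25) - intercept) / slope
pse = -intercept / slope                     # point of subjective equality
print(f"PSE = {pse:.2f} deg, DLs = {lower_dl:.2f} / {upper_dl:.2f} deg")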

The data from Riedel and Burton's study [98], which employed the same methodology with raised paper stimuli and a force feedback mouse, were obtained and re-analysed using the above method. Cumulative normal curves were fitted to the data, as shown in Fig. 9. When analysed in this way, the data for both force feedback and raised paper showed a significant constant error (CE) between the point of subjective equality (PSE) and the horizontal (0°). The PSE was therefore fixed at 0° when calculating the models for these data, allowing a better comparison with the VTPlayer data in terms of the slope of the psychometric functions and the upper and lower DLs. The DL for raised paper was calculated as ±2.42°; the model predicted a DL of ±3.25° for the force feedback mouse.

Fig. 9. Proportion of times the "increasing" answer was given, as a function of stimulus angle and method of representation: comparison between VTPlayer tactile mouse, raised paper diagram and force feedback mouse results.

7.5. Discussion

Comparing the results for the tactile mouse alongside those for the force feedback mouse allows conclusions to be drawn regarding the relative contributions of tactile and kinaesthetic cues in the line gradient discrimination task. Raised paper diagrams currently provide the richest combination of tactile and kinaesthetic cues, while allowing the user to employ both hands and several fingers for exploration, and are thus the benchmark for discrimination tasks. As can be seen from Fig. 9, the psychometric curve for raised paper is the steepest; the observer's differential sensitivity is therefore finest in this condition. The models also suggest that users of the force feedback mouse were better able to perform the gradient discrimination task than those using the tactile mouse. In the real-world task using raised paper graphs, a large component of identifying the gradient is detection of the edges and contours of the raised lines. Studies have shown that performance in edge detection is degraded when the explorer is deprived of distributed tactile cues on the fingertip by interposing a rigid sheath between the finger and the stimulus [80]; it can thus be concluded that tactile stimulation of the skin is important for performing this task well. However, the tactile mouse users performed worse at the task than the force feedback users. Previous studies have shown that force feedback is good at constraining and guiding a user along a particular path, which may account for the good performance with the force feedback mouse [16]. Several factors differentiate the stimulation obtained from touching a surface directly from that arising from the matrix of stimulators on the tactile mouse. Firstly, the resolution of the mouse's displays is poor compared to the resolution of the mechanoreceptors in the human fingertip. Secondly, only a limited population of the receptors will be stimulated by the action of the mouse; in particular, the SAII afferents are less sensitive to skin indentation than to lateral skin stretch [43] and would therefore not be stimulated by the motion of the tactile displays. Finally, as discussed in Section 4, the update rate of the mouse display is often not fast enough to allow exploration of the stimuli if quick movements are made. The methodology used in this study cannot isolate these effects, but it is likely that they all contributed in some way to the degradation in performance with the tactile mouse when compared to direct finger contact with raised paper stimuli.

7.6. Conclusion

Users of the tactile mouse could successfully discriminate positive and negative gradients to within roughly ±5° of a horizontal line. This performance was worse than the difference thresholds predicted by models derived from the results of a similar experiment using raised paper lines and a force feedback mouse. It appears that the kinaesthetic cues provided by force feedback are more salient than the distributed tactile stimulation provided by the VTPlayer tactile mouse's displays. The displays are, however, of limited resolution and bandwidth, which will have contributed in some way to the result.

8. Future work and research challenges

8.1. Technology

Future developments in tactile display technology will likely lead to devices of reduced size and cost and increased pin resolution (smaller spacing between adjacent elements). In particular, for virtual environment and teleoperation applications seeking to create realistic simulated textures, displays with a resolution approaching that of the human sensory system will be necessary. Reducing the size and cost of devices will also allow them to be more easily and cheaply incorporated into mechanisms with force feedback. One of the most important perceptual cues in embossed tactile representations is the use of height to demarcate differentially salient areas of a diagram, as noted by Challis and Edwards [20]. Therefore, an important technological output of future research will be a tactile array capable of sufficient displacement and control of pins to allow features of different amplitudes to be rendered; this could be used, for example, to tactually distinguish the content of a graph from its axes.

8.2. Human factors

Much of the perceptual and psychophysical research available in the literature has been conducted with vibrating pin arrays, such as the Optacon, as opposed to displays for static skin indentation. Therefore, along with the development of devices, work is also needed on the perceptual issues of static displays. Guidelines need to be developed from psychophysical results for device parameters such as display size, pin resolution, actuator bandwidth and force output. More understanding of perceptual phenomena such as spatial and temporal masking of cues presented by pin arrays is also needed to help designers develop applications and tactile representations.

The commercial availability of virtual tactile displays, such as the VTPlayer, that allow active exploration of a larger image provides an ideal tool for evaluating the relative contributions of kinaesthetic and cutaneous information when interpreting tactile diagrams. By adopting an experimental paradigm based on modes of active and passive touch, it is possible to isolate the contribution of each channel. These results will be important for developers: if tactile information can be communicated by passively received cutaneous cues, devices will be freed from the need to provide kinaesthetic cues, allowing them to be made smaller and more portable.

8.3. Applications

Perhaps the strongest motivating factor for research into tactile displays is now the simulation of realistic haptic virtual environments (VEs) that combine force feedback with tactile displays. However, force feedback interfaces need to be improved to match the movement range, force capabilities and resolution of human movement and sensing. Srinivasan and Basdogan [2] suggest that once tactile displays are developed and integrated with sophisticated force feedback mechanisms "we can expect an explosion in the applications of haptics in VEs". A deeper understanding is required of how users perceive and interpret cues from a tactile display in conjunction with force feedback. Rendering algorithms also need to be developed to ensure consistency of representation across both devices. For display of remotely sensed environments to teleoperators there is also the challenge of developing sensors of sufficient resolution, capable of detecting pressure, slip and vibration, to relay to the operator.

Tactile access to graphically stored data for visually impaired people still needs to be made affordable, discreet and portable, as well as accessible. Virtual tactile displays could potentially provide dynamic access to digitally stored information such as graphs, maps and charts that are currently only available in an abridged or static format. Guidelines and standards for the presentation of tactile cues need to be established, along with how best to combine tactile display with other modalities such as auditory display. As Jansson notes, mouse-based devices such as the VTPlayer may not be the most appropriate for visually impaired users, as they rely on spatially distributed cues to guide exploration [74]; other input devices will therefore need to be investigated. Haptic force feedback displays [16] provide a spatially grounded frame of reference for exploration and force cues for guidance, but they are often bulky and expensive. Graphics tablets [59,71] provide the benefit of a spatial frame of reference and are relatively inexpensive for the individual, but more research is required to validate their use with tactile output. Bimodal combinations of tactile and auditory information [14,20,21,72] have been demonstrated to offer numerous benefits to visually impaired users; establishing guidelines for assigning cues to modalities would help designers develop applications.

References

[1] T. Massie, K. Salisbury, The phantom haptic interface: a device for probing virtual objects, ASME Annual Winter Meeting, New York, 1994, pp. 295–300.
[2] M.A. Srinivasan, C. Basdogan, Haptics in virtual environments: taxonomy, research status, and challenges, Comput. Graph. 21 (4) (1997) 393–404.
[3] M. Govindaraj, A. Garg, A. Raheja, G. Huang, D. Metaxas, Haptic simulation of fabric hand, Eurohaptics, Dublin, Ireland, 6–9 July 2003, pp. 253–260.
[4] P. Dillon, W. Moody, R. Bartlett, P. Scully, R. Morgan, C. James, Sensing the fabric: to simulate sensation through sensory evaluation and in response to standard acceptable properties of specific materials when viewed as a digital image, in: S. Brewster, R. Murray-Smith (Eds.), Haptic Human–Computer Interaction, Springer, Heidelberg, 2001, pp. 205–217.
[5] S. Baillie, A. Crossan, S. Reid, S. Brewster, Preliminary development and evaluation of a bovine rectal palpation simulator for training veterinary students, Cattle Practice, J. Br. Cattle Vet. Assoc. 11 (2) (2003) 101–106.
[6] I.R. Summers, C.M. Chanter, A.L. Southall, A.C. Brady, Results from a tactile array on the fingertip, Eurohaptics, Birmingham, UK, 1–4 July 2001, pp. 26–28.
[7] J.C. Craig, Tactile pattern perception and its perturbations, J. Acoust. Soc. Am. 77 (1) (1985) 238–246.
[8] J.C. Craig, C.E. Sherrick, Dynamic tactile displays, in: W. Schiff, E. Foulke (Eds.), Tactual Perception: A Sourcebook, Cambridge University Press, Cambridge, 1982, pp. 209–233.
[9] K.A. Kaczmarek, P. Bach-y-Rita, Tactile displays, in: W. Barfield, T.A. Furness (Eds.), Virtual Environments and Advanced Interface Design, Oxford University Press, New York, 1995.
[10] G.C. Burdea, Force and Touch Feedback for Virtual Reality, Wiley, New York, NY, 1996.
[11] C. Sjostrom, Using haptics in computer interfaces for blind people, CHI 2001, Seattle, WA, 31 March–5 April 2001, pp. 245–246.
[12] J.P. Fritz, K. Barner, Design of a haptic graphing system, 19th RESNA Conference, Salt Lake City, UT, 1996.
[13] E. Wies, J. Gardner, S. O'Modhrain, V. Bulatov, Web-based touch display for accessible science education, in: S. Brewster, R. Murray-Smith (Eds.), Haptic HCI, Lecture Notes in Computer Science, Springer, Berlin, 2001, pp. 52–60.
[14] W. Yu, S. Brewster, Comparing two haptic interfaces for multimodal graph rendering, IEEE VR2002, 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Orlando, FL, 2002.
[15] W. Yu, D. Reid, S.A. Brewster, Multimodal virtual reality versus printed medium in visualization for blind people, Proceedings of ACM ASSETS, Edinburgh, Scotland, 2002, pp. 57–64.
[16] W. Yu, S.A. Brewster, Evaluation of multimodal graphs for blind people, J. Universal Access Inform. Soc. 2 (2) (2003) 105–124.
[17] E. Foulke, Reading Braille, in: W. Schiff, E. Foulke (Eds.), Tactual Perception: A Sourcebook, Cambridge University Press, Cambridge, 1982, pp. 168–208.
[18] W. Gaver, The SonicFinder: an interface that uses auditory icons, Hum.–Comput. Interact. 4 (1989) 67–94.
[19] H. Petrie, V. Johnson, P. McNally, S. Morley, A.M. O'Neill, D. Majoe, Inexpensive tactile interaction for blind computer users: two application domains, IEE Colloquium on Developments in Tactile Displays, London, UK, 21 January 1997, pp. 2/1–2/3.
[20] B.P. Challis, A.D.N. Edwards, Design principles for tactile interaction, in: S. Brewster, R. Murray-Smith (Eds.), Haptic Human–Computer Interaction, Springer, Berlin, Heidelberg, 2001, pp. 17–24.
[21] L.R. Wells, S. Landau, Merging of tactile sensory input and audio data by means of the Talking Tactile Tablet, Eurohaptics 2003, Dublin, Ireland, 2003, pp. 414–418.
[22] S. Brewster, L.M. Brown, Tactons: structured tactile messages for non-visual information display, 5th Australasian User Interface Conference (AUIC2004), Dunedin, New Zealand, 2004.
[23] L.M. Brown, S. Brewster, H. Purchase, A first investigation into the effectiveness of Tactons, World Haptics 2005, Pisa, Italy, 2005.
[24] J. Linjama, T. Kaaresoja, Novel, minimalist haptic gesture interaction for mobile devices, Third Nordic Conference on Human–Computer Interaction, Tampere, Finland, 2004, pp. 457–458.
[25] R.L. Klatzky, S.J. Lederman, Tactile roughness perception with a rigid link interposed between skin and surface, Percept. Psychophys. 61 (4) (1999) 591–607.
[26] J.M. Loomis, Distal attribution and presence, Presence: Teleoperators Virtual Environ. 1 (1) (1992) 113–119.
[27] M. Minsky, M. Ouh-Young, O. Steele, F. Brooks Jr., M. Behensky, Feeling and seeing: issues in force display, Comput. Graph. 24 (2) (1990) 235–243.
[28] T.H. Massie, Initial Haptic Exploration with the Phantom: Virtual Touch Through Point Interaction, Massachusetts Institute of Technology, Cambridge, MA, 1996.
[29] J.P. Fritz, K.E. Barner, Stochastic models for haptic texture, SPIE International Symposium on Intelligent Systems and Advanced Manufacturing, 1996, pp. 34–44.
[30] C. Basdogan, C. Ho, M.A. Srinivasan, A ray-based haptic rendering technique for displaying shape and texture of 3D objects in virtual environments, Proceedings of the ASME Dynamic Systems and Control Division, 1997, pp. 77–84.
[31] S.A. Wall, W.S. Harwin, Modelling of surface identifying characteristics using Fourier series, Proceedings of the ASME International Mechanical Engineering Congress: Dynamic Systems and Control Division (Haptic Interfaces for Virtual Environments and Teleoperator Systems), Nashville, TN, 1999, pp. 65–71.
[32] S. Choi, H.Z. Tan, An analysis of perceptual instability during haptic texture rendering, 10th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (IEEE VR 2002), 2002, pp. 129–136.
[33] K.A. Kaczmarek, J.G. Webster, P. Bach-y-Rita, W.J. Tompkins, Electrotactile and vibrotactile displays for sensory substitution systems, IEEE Trans. Biomed. Eng. 38 (1991) 1–15.
[34] L.A. Jones, M. Berris, Material discrimination and thermal perception, 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Los Angeles, CA, 22–23 March 2003, pp. 171–178.
[35] D.G. Caldwell, C. Gosney, Enhanced tactile feedback (tele-taction) using a multi-functional sensory system, IEEE International Conference on Robotics and Automation, 2–6 May 1993, pp. 955–960.
[36] I.R. Summers, C.M. Chanter, A broadband tactile array on the fingertip, J. Acoust. Soc. Am. 112 (5) (2002) 2118–2126.
[37] J. Pasquero, V. Hayward, STReSS: a practical tactile display with one millimeter spatial resolution and 700 Hz refresh rate, Eurohaptics, Dublin, Ireland, 6–9 July 2003, pp. 94–110.
[38] D.G. Caldwell, N. Tsagarakis, C. Giesler, An integrated tactile/shear feedback array for stimulation of finger mechanoreceptor, IEEE International Conference on Robotics and Automation, Detroit, MI, May 1999, pp. 287–292.
[39] J.M. Loomis, S.J. Lederman, Tactual perception, in: K. Boff, L. Kaufman, J. Thomas (Eds.), Handbook of Perception and Human Performance, Wiley, New York, 1986, pp. 31–41.
[40] K.O. Johnson, S.S. Hsiao, Neural mechanisms of tactual form and texture perception, Annu. Rev. Neurosci. 15 (1992) 227–250.
[41] R.S. Johansson, U. Landstrom, R. Lundstrom, Responses of mechanoreceptive afferent units in the glabrous skin of the human hand to sinusoidal skin displacements, Brain Res. 244 (1982) 17–25.
[42] D.A. Kontarinis, R.D. Howe, Display of high frequency tactile information to teleoperators, 1993, pp. 40–50.
[43] K.O. Johnson, Neural basis of haptic perception, in: H. Pashler, S. Yantis (Eds.), Stevens' Handbook of Experimental Psychology, Wiley, New York, 2002, pp. 537–583.
[44] R.T. Verrillo, Effect of contactor area on vibrotactile threshold, J. Acoust. Soc. Am. 35 (1963) 1962–1966.
[45] C.H. Rogers, Choice of stimulator frequency for tactile arrays, IEEE Trans. Man–Machine Systems MMS-11 (1970) 5–11.
[46] S.J. Bolanowski, G.A. Gescheider, R.T. Verrillo, C.M. Checkosky, Four channels mediate the mechanical aspects of touch, J. Acoust. Soc. Am. 84 (5) (1988) 1680–1694.
[47] C.E. Sherrick, R.W. Cholewiak, A.A. Collins, The localization of low- and high-frequency vibrotactile stimuli, J. Acoust. Soc. Am. 88 (1) (1990) 169–179.
[48] J.R. Phillips, K.O. Johnson, Neural mechanisms of scanned and stationary touch, J. Acoust. Soc. Am. 77 (1) (1985) 220–224.
[49] R.S. Johansson, A.B. Vallbo, Detection of tactile stimuli: thresholds of afferent units related to psychophysical thresholds in the human hand, J. Physiol. 297 (1979) 405–422.
[50] J. Lofvenberg, R.S. Johansson, Regional differences in sensitivity to vibration in the glabrous skin of the human hand, Brain Res. 301 (1984) 65–72.
[51] F.A. Geldard, The Human Senses, Wiley, New York, 1972.
[52] G. Werner, V.B. Mountcastle, Quantitative relations between mechanical stimuli to the skin and neural responses evoked by them, in: D. Kenshalo (Ed.), The Skin Senses, Charles C. Thomas, Springfield, IL, 1968, pp. 112–138.
[53] H.Z. Tan, Information transmission with a multi-finger tactual display, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, 1996.
[54] G.D. Goff, Differential discrimination of frequency of cutaneous mechanical vibration, J. Exp. Psychol. 74 (1967) 294–299.
[55] F.A. Geldard, Adventures in tactile literacy, Am. Psychol. 12 (1957) 115–124.
[56] P.B.L. Meijer, An experimental system for auditory image representations, IEEE Trans. Biomed. Eng. 39 (2) (1992) 112–121.
[57] J.W. Hill, Limited field of view in reading letter shapes with the fingers, in: F.A. Geldard (Ed.), Cutaneous Communication Systems and Devices, Psychonomic Society, Austin, TX, 1974, pp. 95–105.
[58] J.C. Bliss, M.H. Katcher, C.H. Rogers, R.P. Shephard, Optical-to-tactile image conversion for the blind, IEEE Trans. Man–Machine Systems MMS-11 (1970) 58–64.
[59] J.M. Loomis, R.L. Klatzky, S.J. Lederman, Similarity of tactual and visual picture recognition with limited field of view, Perception 20 (1991) 167–177.
[60] J.M. Loomis, Tactile letter recognition under different modes of stimulus presentation, Percept. Psychophys. 16 (1974) 401–418.
[61] J.C. Bliss, J.G. Linvill, A direct translation reading aid: reading alphabetic shapes tactually, International Conference on Sensory Devices for the Blind, 1966, pp. 389–407.
[62] C.E. Kops, E.P. Gardner, Discrimination of simulated texture patterns on the human hand, Percept. Psychophys. 76 (2) (1996) 1145–1165.
[63] L.H. Boyd, W.L. Boyd, G.C. Vanderheiden, The Graphical User Interface Crisis: Danger and Opportunity, Trace R&D Center, University of Wisconsin, 1990.
[64] L.E. Krueger, David Katz's Der Aufbau der Tastwelt (The World of Touch): a synopsis, Percept. Psychophys. 7 (6) (1970) 337–341.
[65] J.J. Gibson, Observations on active touch, Psychol. Rev. 69 (1962) 477–491.
[66] S.J. Lederman, M.M. Taylor, Fingertip force, surface geometry and the perception of roughness by active touch, Percept. Psychophys. 12 (5) (1972) 401–408.
[67] G. Jansson, Haptic perception of outline 2D shape: the contributions of information via the skin, the joints and the muscles, in: B. Bril, A. Ledebt, G. Dietrich, A. Roby-Brami (Eds.), Advances in Perception–Action Coupling, Editions EDK, Paris, 1998, pp. 25–30.
[68] C. Ramstein, Combining haptic and Braille technologies: design issues and pilot study, ASSETS '96, Vancouver, British Columbia, Canada, 1996, pp. 37–44.
[69] T. Maucher, J. Schemmel, K. Meier, The Heidelberg tactile vision substitution system, International Sensory Aids Conference, Exeter, UK, 23–26 May 2000.
[70] O. Gapenne, C. Lenay, J. Stewart, H. Beriot, D. Meidine, Prosthetic device and 2D form perception: the role of increasing degrees of parallelism, Proceedings of the Conference on Assistive Technology for Vision and Hearing Impairment (CVHI 2001), Castelvecchio Pascoli, Italy, 2001.
[71] N. Sribunruangrit, C. Marque, C. Lenay, O. Gapenne, C. Vanhoutte, J. Stewart, Application of parallelism concept to access graphic information with precision for blind people, EMBEC'02, 2nd European Medical and Biological Engineering Conference, Vienna, Austria, 4–8 December 2002.
[72] A. Ali Ammar, O. Gapenne, C. Lenay, J. Stewart, Effect of bimodality on the perception of 2D forms by means of a specific assistive technology for blind persons, CVHI 2002, EURO-ASSIST-VHI-2 Conference on Assistive Technologies for Vision and Hearing Impairment, Granada, Spain, 6–9 August 2002, pp. 45–52.
[73] P. Kammermeier, G. Schmidt, Application-specific evaluation of tactile array displays for the human fingertip, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Lausanne, Switzerland, 30 September–4 October 2002, pp. 2937–2942.
[74] G. Jansson, P. Pedersen, Obtaining geographical information from a virtual map with a haptic mouse, XXII International Cartographic Conference (ICC2005), A Coruña, Spain, 11–16 July 2005.
[75] I.R. Summers, Single-channel information transfer through the skin: limitations and possibilities, International Sensory Aids Conference, Exeter, UK, 23–26 May 2000.
[76] P. Bach-y-Rita, J.G. Webster, W.J. Tompkins, T. Crabb, Sensory substitution for space gloves and space robots, Space Telerobotics Workshop, Jet Propulsion Laboratory, Pasadena, CA, 20–22 January 1987, pp. 51–57.
[77] K.B. Shimoga, A survey of perceptual feedback issues in dexterous telemanipulation. Part II: finger touch feedback, IEEE Virtual Reality Annual International Symposium, New York, September 1993, pp. 271–279.
[78] N.J.M. Patrick, Design, Construction and Testing of a Fingertip Tactile Display for Interaction with Virtual and Remote Environments, M.Sc. Thesis, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, 1990.
[79] B.B. Edin, R. Howe, G. Westling, M. Cutkosky, A physiological method for relaying frictional information to a human teleoperator, IEEE Trans. Systems Man Cybernet. 23 (2) (1993).
[80] S.J. Lederman, R.L. Klatzky, Sensing and displaying spatially distributed fingertip forces in haptic interfaces for teleoperator and virtual environment systems, Presence: Teleoperators Virtual Environ. 8 (1) (1999) 86–103.
[81] G. Jansson, L. Monaci, Improving haptic displays: providing differentiated information at the contact areas is more important than increasing the number of areas, World Haptics 2005, Pisa, Italy, 18–20 March 2005.
[82] M.B. Cohn, M. Lam, R.S. Fearing, Tactile feedback for teleoperation, Telemanipulator Technology, SPIE 1833 (1992) 240–252.
[83] C.J. Hasser, J.M. Weisenberger, Preliminary evaluation of a shape-memory alloy tactile feedback display, Symposium on Haptic Interfaces for Virtual Environments and Teleoperator Systems, ASME Winter Annual Meeting, New Orleans, LA, 1993, pp. 73–80.
[84] R.D. Howe, The shape of things to come: pin-based tactile shape displays, Proceedings of the 4th International Conference Eurohaptics 2004, Munich, Germany, 5–7 June 2004, pp. 2–11.
[85] D.A. Kontarinis, J.S. Son, W. Peine, R.D. Howe, A tactile shape sensing and display system for teleoperated manipulation, IEEE International Conference on Robotics and Automation, 1995, pp. 641–646.
[86] R.D. Howe, A force-reflecting teleoperated hand system for the study of tactile sensing in precision manipulation, IEEE International Conference on Robotics and Automation, Nice, France, May 1992, pp. 1321–1326.
[87] C. Richard, M.R. Cutkosky, Contact force perception with an ungrounded haptic interface, ASME IMECE 6th Annual Symposium on Haptic Interfaces, Dallas, TX, 15–21 November 1997.
[88] J.C. Lee, P. Dietz, D. Leigh, W. Yerazunis, S.E. Hudson, Haptic pen: a tactile feedback stylus for touch screens, Proceedings of UIST 2004, Santa Fe, NM, USA, 2004, pp. 291–294.
[89] I. Poupyrev, S. Maruyama, Tactile interfaces for small touch screens, Proceedings of UIST 2003, Vancouver, Canada, 2003, pp. 217–220.
[90] I. Poupyrev, S. Maruyama, J. Rekimoto, Ambient touch: designing tactile interfaces for handheld devices, UIST 2002, Paris, France, 27–30 October 2002.
[91] M. Fukumoto, T. Sugimura, ActiveClick: tactile feedback for touch panels, Extended Abstracts of CHI 2001, Seattle, WA, USA, 2001, pp. 121–122.
[92] M. Akamatsu, I.S. MacKenzie, T. Hasbroucq, A comparison of tactile, auditory, and visual feedback in a pointing task using a mouse-type device, Ergonomics 38 (1995) 816–827.
[93] R.W. Soukoreff, I.S. MacKenzie, Towards a standard for pointing device evaluation: perspectives on 27 years of Fitts' law research in HCI, Int. J. Human–Comput. Stud. 61 (2004) 751–789.
[94] M. Akamatsu, I.S. MacKenzie, Movement characteristics using a mouse with tactile and force feedback, Int. J. Human–Comput. Stud. 45 (1996) 483–493.
[95] H. Vitense, J.A. Jacko, V.K. Emery, Multimodal feedback: an assessment of performance and mental workload, Ergonomics 46 (1–3) (2003) 68–87.
[96] A. Cockburn, S. Brewster, Multimodal feedback for the acquisition of small targets, Ergonomics 48 (9) (2005) 1129–1150.
[97] S. Brewster, A.J. King, An investigation into the use of Tactons to present progress information, Proceedings of IFIP Interact 2005, Rome, Italy, 2005.
[98] B. Riedel, A.M. Burton, Perception of gradient in haptic graphs: a comparison of virtual and physical stimuli, Eurohaptics, Birmingham, UK, 2001, pp. 90–92.
[99] E. Gentaz, Y. Hatwell, Role of gravitational cues in the haptic perception of orientation, Percept. Psychophys. 58 (8) (1996) 1278–1292.
[100] E. Gentaz, Y. Hatwell, The haptic oblique effect in the perception of rod orientation by blind adults, Percept. Psychophys. 60 (1) (1998) 157–167.