
KEYNOTE ADDRESSES

PANELS

SHORT PAPERS

ORGANISATION OVERVIEWS

INDUSTRY DAY

DEMOS

VIDEOS

DOCTORAL CONSORTIUM

POSTERS

human computer interaction ’98

HCI’98 CONFERENCE COMPANION

Edited by Jon May, Jawed Siddiqi and Julie Wilkinson

Sponsored by the British HCI Group, a Special Interest Group of the BCS

British Computer Society Conference on Human Computer Interaction

HCI’98 Conference Companion

Edited by

Jon May, Department of Psychology, University of Sheffield

Jawed Siddiqi and Julie Wilkinson

School of Computing and Management Sciences, Sheffield Hallam University

HCI’98 Conference Companion

Edited by Jon May, Jawed Siddiqi and Julie Wilkinson

Adjunct Proceedings of the 13th British Computer Society Annual Conference on Human Computer Interaction, HCI’98. Held at Sheffield Hallam University, Sheffield, September 1998.

ISBN 0 86339 795 6

The use of registered names, trademarks etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use.

The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Copyright reverts to contributors on publication.


Keynote Addresses Chaired by Hilary Johnson

Parallel Universes ............................................................................................ 1
Karen Mahoney

Multimedia Kiosks: A Metropolitan Police Perspective ................................. 2
Gary Fitzpatrick

Portability of User Interfaces: “Writing it once” is not enough ...................... 3
Joëlle Coutaz

Systems, Interactions and Macro-theory .......................................................... 4
Philip J Barnard

Successes and Failures in Groupware Adoption: Case Studies ....................... 5
Jonathan Grudin

Panels Chaired by Andrew Monk

Human-Centred Processes ................................................................................ 6
Jonathan Earthy, Brian Shackel, Hazel Courteney, Simon Hakiel, Brian Sherwood-Jones and Bronwen Taylor

A New User Interface Metaphor for Mobile Personal Technologies .............. 9
Elisa del Galdo, Paul Gough, Matt Jones, Rob Noble and Philip Stenton

Organisational constraints on the proper teaching of HCI: must HCI have its own department to be taught properly? ......................................................... 12
Peter Gregor, Xris Faulkner, Phil Gray, Andrew Monk, Peter Timmer and Steve Draper

Short Papers Chaired by Jon May and Jawed Siddiqi

Communication goals in interaction ............................................................... 14
Ann Blandford and Richard M. Young

Sustaining the paper metaphor with Dynamic-HTML ................................... 16
Gavin J. Brelstaff and Francesca Chessa

User Profile-based Reference Points in Information Visualisation ............... 18
Chaomei Chen and John Davies

Improving online style-guides and guidelines ................................................ 20
Mikael Ericsson

Using ‘Contact Points’ for Web Page Design ................................................ 22
Pete Faraday and Alistair Sutcliffe

Configurable visual changes in a word processor to aid dyslexics ................ 24
Peter Gregor, Peter Andreasen and Alan F. Newell

Design Issues for Interactive Drama ............................................................... 26
Peter Jagodzinski, Dan Livingstone, Mike Phillips, Tom Rogers and Simon Turley

Support for Meeting People on the Internet ................................................... 28
Jun Kakuta, Kazuki Matsui and Hiroyasu Sugano

Usability Requirements for Virtual Environments ......................................... 30
Kulwinder Kaur, Alistair Sutcliffe and Neil Maiden

Context and Frequency of Use in ATMs: Change over a Decade .................. 32
Patrick J. O’Donnell, G.E.W. Scobie and Margaret Martin

SiteSeer: An Interactive Treeviewer for Visualizing Web Activity ............... 34
Eric Sigman, Robert Farrell and Mark Rosenstein

Cognitively Engineering Coordination in Emergency Management ............. 36
Adam Stork, Tony Lambie and John Long

System Support for Rapid Prototyping of Collaborative Internet Information Systems ........................................................................................................... 38
Michael Swaby, Peter Dew, David Morris and Gyuri Lajos


Towards continuous usability evaluation of web documents ........................ 40
Yin Leng Theng, Gil Marsden and Harold Thimbleby

Nonspeech Audio in Television User Interfaces ............................................ 42
Richard van de Sluis, Berry Eggen and Jouke Rypkema

Experiments in How Automated Systems Should Talk to Users .................. 44
David Williams, Christine Cheepen and Nigel Gilbert

Organisational Overviews Chaired by Nick Rousseau

Concepts of Interaction and the Nature of Design: HCI Research at Napier University, Edinburgh ..................................................................................... 46
David Benyon

The Department of Applied Computing at the University of Dundee .......... 48
Ramanee Peiris, Peter Gregor and Alan F. Newell

EDS Human Factors Group ............................................................................ 50
Michael Burnett

Introducing the Benefits Agency and Employment Service’s Model Office - testing the end to end processes ..................................................................... 52
Angela Maguire and Keith Wheeldon

Lucent Technologies OMC-2000 ................................................................... 54
Rod Moyse and Annette Tassone

Industry Day Chaired by Tony Rose and Peter Windsor

Utilities face a challenge. Usability can help. ................................................ 56
Rosalind Barden

Designing for cultural diversity ...................................................................... 58
Girish V. Prabhu and Dan Harel

Usability Process Challenges in a Web Product Cycle .................................. 60
Gayna Williams

The User Interface of Britain’s New En-Route Centre for Air Traffic Control ............................................................................................................ 62
Jim Cozens

Refining the NERC User Interface ................................................................. 64
Roger Attfield

Demonstrations & Videos Chaired by Andrew Stratton, Atif Waraich & Chuck Elliot

Designing a User Interface for Digital Dissection ......................................... 66
Dunja Hövik, Gunnar Berg and Christoffer Schander

The Motivational User Interface .................................................................... 68
Linda Hole, Simon Crowle and Nicola Millard

Demonstration of the Development and Use of User Interaction in Computer Games ............................................................................................ 70
Tim Heaton

A Software Tool For Evaluating Navigation .................................................. 72
Rod McCall and David Benyon

Employment Service: Transforming Customer Services through IT ............ 74
Nick Rousseau, Janet Hinchliff and Bronwyn Robinson

AkuVis: Exploring Visual Noise .................................................................... 76
Katy Börner and Ipke Wachsmuth

Doctoral Consortium Chaired by Steve Brewster

Learning pathways and strategies of novice adult learners: a user-perspective approach ...................................................................................... 78
Joan Aarvold and Bob Heyman

A summary of HCI Engineering Design Principles ....................................... 80
Stephen Cummaford


Cross-Cultural Differences in Understanding Human-Computer Interfaces .. 82
Vanessa Evers

Towards a Formal Representation of Multi-Modal Systems for Usability Assessment ...................................................................................................... 84
Joanne Hyde

Information Gathering and the Workplace Soundscape ................................ 86
Catriona Macaulay

URL, Summary, and Percentage. Click here for the next 16,433 matches: Why a URL, Summary and Percentage representation is not enough ........... 88
Thomas Tan

Criteria of Credibility for Collaborative Virtual Environments ..................... 90
Jolanda G. Tromp

User Interface Design & Evaluation for a Content-Based Image Retrieval System .............................................................................................................. 92
Colin C. Venters

Posters Chaired by Julie Wilkinson

QUASS – a tool for measuring the subjective quality of real-time multimedia audio and video ............................................................................ 94
Anna Bouch, Anna Watson and M. Angela Sasse

Extending Support for User Interface Design in Object-Oriented Software Engineering Methods ...................................................................................... 96
Elizabeth Kemp and Chris Phillips

On the relationship between mouse operating force and display design ...... 98
Kentaro Kotani, Ken Horii and Yutaka Kitamura

Usability Principles Specific to Interactive Authoring Support Environments ................................................................................................ 100
Paula Kotzé

Choosing and using names for information retrieval ................................... 102
Janet A. Pitman and Stephen J. Payne

Designing for cultural diversity .................................................................... 104
Girish V. Prabhu and Dan Harel

Translating the World Wide Web interface into speech .............................. 106
C. Reeves, M. Zajicek, C. Powell and J. Griffiths

Beyond the Interface: Modelling the Interaction in a Visual Development Environment .................................................................................................. 108
Chris Scogings and Chris Phillips

Designing Educational Interfaces from a Constructivist Perspective .......... 110
David Squires and Anne McDougall

Strategies for Developing Substantive Engineering Principles ................... 112
Adam Stork and John Long

From Agents to a Networked Display Manager ........................................... 114
Mark Treglown

Collaborative Virtual Environments: the COVEN Project .......................... 116
Jolanda G. Tromp and Anthony Steed

Touch screen vs Mouse: an experimental comparison using Fitts’ law and mental workload ............................................................................................ 118
Josine van de Ven

Formative evaluation of a focus+context visualization technique ............... 119
Björk S and Holmquist LE

Socialspaces: an environment for dynamic participation in informal real-time group activities ...................................................................................... 119
Boyer D and Wilbur S

Designing intrinsically motivating interaction ............................................. 119
Garcia-Tobin D

Research platform to usable software - or writing the interface .................. 119
Hughes J, Clark A and Sasse A

Economic and social influences on interaction with the web ...................... 119
Johnson C


Parallel Universes

Karen Mahoney

Mahoney Associates, 3 Bleeding Heart Yard, Greville Street, London EC1N, [email protected]

As more products, services and communications migrate to the Web, the need for good human-computer interface design has never been greater. This, then, should be a boom time for human factors HCI specialists, but the reality seems to be very different. From observation, it appears that people from an academic HCI background are seldom employed in the design of commercial web sites. The majority of large-scale work is being done by people working in teams combining visual design and software backgrounds - many of whom are either unaware of academic HCI theory, or find it largely irrelevant.

Anyone who has encountered a web site where they have been unable to find the information that should be there, or who has been defeated in their attempt to purchase something on the web, will recognise that usability remains a key issue with important commercial implications. The question then becomes why it is that HCI expertise is not employed more in commercial web design?

The answer may partly lie in the past. In the early days of interactive multimedia design it was not uncommon to find that many human factors specialists were unable to make the conceptual jump from the issues that were important in designing computer applications to the new issues that arose from design which was focused on communicating content. Even today, there are instances of HCI specialists proposing ideas, or approaches, which to anyone from a media or marketing background seem either naive or ill-informed.

Not surprisingly, as interactive multimedia developed as a commercial industry many of its designers were drawn from media and communications backgrounds, because they did understand that the important issues are about effective communication, tone of voice, mood, narrative, sequence, visual appearance and the ability to provide compelling experiences. Nor was usability entirely neglected. In more traditional media forms, such as print, film and television, there are usability issues, but these have been generally subsumed into the repertoire of knowledge that any competent professional will bring to bear in their work.

Nevertheless the Web is a computer-based medium, and HCI specialists should be able to contribute to making it a more attractive and useful medium for its users. However, for this to happen, the definition of usability will have to be broadened to encompass communications issues, such as the ability to evoke emotion, to engage, to intrigue, to entertain and to inform. If the HCI discipline is to play a role in web design it must begin to recognise the communication and marketing issues that practising design agencies deal with, or resign itself to simply being an academic discipline primarily concerned with developing a critique of design practice.

Now that the interactive medium is playing a central role in both customer and corporate communications, good design means not just good usability and appropriate functionality, but also engaging content, professional production values and well thought out relationship and loyalty programmes. All of these aims are essential elements in effective strategic brand positioning.


Multimedia Kiosks: A Metropolitan Police Perspective

Gary Fitzpatrick

The Metropolitan Police Service, Centre for Applied Research and Technology, Jubilee House, 230-232 Putney Bridge Road, London SW15 2PD

ATTACH stands for Advanced TransEuropean Telematics Applications for Community Help. It is a European project led by the Metropolitan Police Service which uses multimedia kiosks to bring information and advice to the community in Newham, North East London. At the touch of a screen a member of the public can have access to information about local council facilities and police services from kiosks installed in public places such as council offices, shopping precincts, leisure centres and police stations.

The Met has derived some important guidelines about successful deployment of public access networks which will be published as part of the European project. These include "It's not a kiosk project, it's an information project!", and "If it's broke, fix it now!". Attention has been paid to ensuring the users' needs are met in relation to the design of the kiosk hardware, user interface design and kiosk location. A key message from our studies has been that it is vital to locate the kiosks where the public already go.

The ATTACH project was one of the first to use touch screen browsers and web technology to deliver service to the public user. One of the primary goals of ATTACH has been to ensure single data entry, and owner self-maintenance of data directly on the ATTACH database. This has been accomplished by using an Oracle database, which is maintained directly via the web from any "interneted computer", even over slow modems (14.4 kbps). Web pages delivered on demand to the kiosk are built dynamically from suitable templates with the information extracted from the database.
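The keynote abstract gives no implementation details beyond pages being built from templates plus database rows; purely as an illustration of that pattern, here is a minimal TypeScript sketch. The {{field}} placeholder syntax, the field names and the sample values are assumptions for illustration, not taken from the ATTACH system.

```typescript
// Sketch only: kiosk pages "built dynamically from suitable templates with the
// information extracted from the database". The {{field}} syntax, the field names
// and the sample values are illustrative assumptions, not ATTACH code.

// A trivial template language: {{field}} placeholders are replaced by row values.
function fillTemplate(template: string, row: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => row[key] ?? "");
}

// In ATTACH the row would come from the central Oracle database; here it is faked.
const facility: Record<string, string> = {
  name: "Example Leisure Centre",
  address: "1 High Street, Newham (illustrative)",
  openingHours: "Mon to Sun, 07:00 to 22:00",
};

const kioskPageTemplate = [
  "<h1>{{name}}</h1>",
  "<p>{{address}}</p>",
  "<p>Open: {{openingHours}}</p>",
].join("\n");

// The kiosk's touch-screen browser would be served this HTML on demand.
console.log(fillTemplate(kioskPageTemplate, facility));
```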

How to evaluate kiosks and network information systems has been a major part of ATTACH, and the Met has driven the specification and production of management statistics to support the business cases for public sector and, in the future, possible private sector participation. We expect an organic growth in kiosk- and terminal-based information systems across London, and ATTACH will prepare the Met to deliver police information and service in support of the Met's policing strategy into the next millennium.


Portability of User Interfaces: “Writing it once” is not enough

Joëlle Coutaz

CLIPS-IMAG, BP 53, 38041 Grenoble Cedex 9, France, http://iihm.imag.fr/, [email protected]

The need for ubiquitous access to information processing, the success of new consumer devices such as pocket computers and wireless networks, the availability of large electronic boards as well as the development of immersive caves, offer new challenges to the software community of HCI. In particular, user interfaces need to accommodate the variability of a large set of interactional devices without leading to costly development efforts. For example, an electronic agenda should run both on a workstation and on a handheld computer without requiring a complete re-implementation of the design solution.

This talk discusses the plasticity of user interfaces, i.e., their capacity to withstand the variations of interactional devices while preserving usability and minimizing software development costs. Portability, which relies on software and hardware independence, is necessary but not sufficient. Virtual toolkits, such as the Java machine, set the foundation for platform-independent code execution. They offer very limited mechanisms for the automatic reconfiguration of the user interface in response to the variation of interactional devices. All of the current tools for developing user interfaces have an implicit model of a single class of target computers. As a result, the rendering and responsiveness of a Java applet may be satisfactory on the developer's workstation but not easily usable for a remote Internet user. In addition, the maintenance and the iterative nature of the user interface development process make it difficult to maintain consistency among the code developed for multiple target computers.

In the absence of appropriate frameworks and software tools, "writing it once" is not enough. I will present the concept of an abstract user interface as a means of supporting plasticity for multi-target user interfaces. From an abstract, device-independent description of the user interface and a number of UI-related models (e.g., modality and I/O device model of the target computer, context model, user model), a concrete user interface can be generated automatically (or semi-automatically). I will use a real case study to illustrate the requirements for the definition of the abstract user interface and present our early results towards the goal "specify once, generate usable multiple times".
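The abstract does not define the abstract-user-interface notation itself; the following TypeScript sketch is only a rough illustration of the idea that one device-independent description, combined with a small device model, can be mapped to different concrete widgets. The element kinds, device attributes and mapping rules are assumptions, not Coutaz's framework.

```typescript
// Sketch only: one abstract, device-independent description of part of a UI,
// mapped to different concrete widgets under two device models. The notation
// and the mapping rules are illustrative assumptions, not Coutaz's framework.

type AbstractElement =
  | { kind: "choice"; label: string; options: string[] }
  | { kind: "textInput"; label: string };

interface DeviceModel {
  screenWidthPx: number;
  hasPointer: boolean;
}

// Map each abstract element to a concrete widget description suited to the device.
function concretise(el: AbstractElement, device: DeviceModel): string {
  switch (el.kind) {
    case "choice":
      // Room and a pointer: show every option at once; otherwise a scrolling menu.
      return device.screenWidthPx > 600 && device.hasPointer
        ? `radio group "${el.label}": ${el.options.join(" / ")}`
        : `scrolling menu "${el.label}"`;
    case "textInput":
      return device.hasPointer
        ? `text field "${el.label}" with on-screen keyboard`
        : `text field "${el.label}" with keypad entry`;
  }
}

// The electronic-agenda example from the talk: the same description should yield
// a usable interface on a workstation and on a handheld device.
const agendaEntry: AbstractElement[] = [
  { kind: "textInput", label: "Meeting title" },
  { kind: "choice", label: "Reminder", options: ["none", "5 min", "1 hour"] },
];

const workstation: DeviceModel = { screenWidthPx: 1280, hasPointer: true };
const handheld: DeviceModel = { screenWidthPx: 160, hasPointer: false };

for (const device of [workstation, handheld]) {
  console.log(agendaEntry.map((el) => concretise(el, device)).join(" | "));
}
```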


Systems, Interactions and Macro-theory

Philip J Barnard

MRC Cognition and Brain Sciences Unit, 15 Chaucer Rd, Cambridge CB2, [email protected]

The ecosystems populated by information technologies have diversified to an extraordinary extent - from line editor to embodied conversational characters and from radar to virtual battlefields. The diversity of applications has been mirrored in the ways in which we have come to think about interactions with technologies. With each advance, new empirical techniques are brought centre stage, more often than not coupled with changes in the theoretical models of, and perspectives on, interaction. HCI is not simply "interdisciplinary." It is now hard to think how we might put any kind of effective boundaries on what disciplines and topics aren't of potential relevance to the study of HCI. In a scientific context where many things are of relevance, a key issue is the management of analytic complexity. It will be argued that there is a pressing need to go beyond the development and validation of "local" theory and to put in place macro-theories that structure and bind local theories or concepts into a coherent whole.

This presentation will examine the requirements that a body of macro-theory should meet. In doing so it will draw upon earlier theoretical work dealing with interactions within the human mental mechanism (Barnard & May, 1993) and upon concepts developed within HCI specifically to capture properties of interactions - syndetic modelling (Duke, Barnard, Duce & May, 1995), and Interaction Framework (Blandford, Harrison & Barnard, 1995).

Whether dealing with individual human cognition, a human interacting with a computer, a small group collaborating on a task with the support of a range of technologies, or even an entire organisation, each level can be considered as relying on a "system of interactors". Any given system has at its heart a set of basic units that "behave". The basic units at any given level can be decomposed into their constituent parts, and the basic units themselves form part of a superordinate interactor. The behaviour of a system of interactors can then be thought of as a trajectory of interaction, constrained by the configuration of interactors, their capabilities, the requirements that must be met to use that capability, and the regime through which the interactors are co-ordinated and controlled. When viewed this way, relatively simple theoretical abstractions of "interactions" can be applied across levels of analysis and macro-theories developed to bind content at one level of analysis to that at another.

REFERENCES

Barnard, P. & May, J. (1993) Cognitive Modelling for User Requirements. In Byerley, P., Barnard, P. & May, J. (Eds.) Computers, Communication and Usability: Design Issues, Research and Methods for Integrated Services. 101-146. Amsterdam: North Holland, Studies in Telecommunications.

Blandford, A.E., Harrison, M.D. & Barnard, P.J. (1995) Using Interaction Framework to Guide the Design of Interactive Systems. International Journal of Human-Computer Studies, 43, 101-130.

Duke, D.J., Barnard, P.J., Duce, D.A. and May, J. (1995) Systematic development of the human interface. In APSEC'95: Second Asia-Pacific Software Engineering Conference, IEEE Computer Society Press, 313-321.


Successes and Failures in Groupware Adoption: Case Studies

Jonathan Grudin

Information & Computer Science Department, University of California, Irvine, CA 92697-3425, USA

Technology that supports communication, information sharing, and coordination within groups is being successfully introduced. Of course, the challenges that hindered groupware adoption for two decades have not entirely disappeared, and success is likely to remain more the exception than the rule for some time. I have been examining successful cases of groupware introduction, identifying the adoption trajectory, how obstacles were overcome, and the resulting effects on work practice. Specific technologies have ranged from seemingly simple shared calendars on networked PCs to complex software enabling engineers on high-end workstations hundreds of miles apart to navigate simultaneously through huge 3-D CAD models.

Shared calendars, suddenly widespread in some environments, provide insights into success factors, but also demonstrate the emergence of novel behaviors around a new technology, and the significance of seemingly simple features in determining those behaviors. They also illustrate that software that we consider a single application is used so differently by people with different activity structures - even people within a single part of an organization - that it is more appropriately considered a set of applications. This is highly significant to design, acquisition, introduction, and support.

Virtual collocation technologies, supporting synchronous activity, are so useful to some distributed groups that they are being used despite substantial challenges in design and adoption. A central obstacle is the difficulty in understanding the context of remote participants; to minimize the inevitable degradation of this understanding is a key challenge.

These technologies often increase efficiency by enabling greater "visibility" across time and space. This lecture will conclude by considering some implications of these changes.


Human-Centred Processes

Jonathan Earthy1, Brian Shackel2, Hazel Courteney3, Simon Hakiel4, Brian Sherwood-Jones5 and Bronwen Taylor6

1 Systems Integrity and Risk Management, Lloyd’s Register, 29 Wellesley Road, Croydon, TW1 1BL, UK

2 Prof Emer, HUSAT Research Institute, Loughborough University, Leicestershire, LE11 3TU, UK

3 Safety Regulation Group, Civil Aviation Authority, Gatwick Airport South, West Sussex, RH6 0YR, UK

4 Human Factors, IBM UK Ltd., Hursley Park, Winchester, Hampshire, SO21 2JN, UK

5 Human Factors, BAeSEMA, 1 Atlantic Quay, Broomielaw, Glasgow, G2 8JE, UK

6 Human Factors, Philips Design, Emmasingel, PO Box 218, 5600 MD Eindhoven, The Netherlands

ABSTRACT

HCI is not usable by software engineering and needs to be more aware of its own context of use. Representatives from different sectors of industry will present a variety of approaches to the packaging of HCI for industrial use. These approaches are based on the definition of, and provision of support for, the processes by which products are made to be human-centred. The similarities and differences between the presented approaches will be explored. Process issues in HCI will be discussed.

KEYWORDS: Human-centred, Process, Software Process Improvement, Humanware, ISO 13407.

INTRODUCTION

One of the most important problems for HCI is its own usability by the systems development community. If we assess HCI against ISO 9241, part 10 Dialogue principles, 1996 (quoted in the italicised headings below) we generally find the following:

3.2 Suitability for the task: HCI does not support project managers in the effective and efficient development of a usable product.
3.3 Self-descriptiveness: Feedback from HCI activities and any explanations of results are not immediately comprehensible to developers.
3.4 Controllability: Methods and techniques can only be used in standard, expensive and time-consuming fixed steps.
3.5 Conformity with user expectations: HCI is not a programming tool, Rapid Application Development tool or a library of style guides.
3.6 Error tolerance: Techniques are either overly sensitive to minor changes in detail or generate apparently context-free output.
3.7 Suitability for individualisation: HCI appears to propose the use of the same methods for all applications.
3.8 Suitability for learning: HCI seems to change itself regularly and to work to meet different goals from those of system development.


In addition, principles for good practice do not help to define the minimum standard for HCI acceptability. This is an important factor in a commercial environment, where it is necessary to make trade-offs with time, cost, and conflicting requirements.

To improve this situation, and hence the uptake of HCI, we need to take account of the context of use - i.e. the systems development process. We cannot continue to take many inputs but provide no usable outputs. HCI has to integrate, communicate and collaborate, i.e. it has to fit in. One important part of fitting in is to participate in the movement towards the modelling and improvement of the processes by which complex systems are developed and work products exchanged. Process modelling is about the definition of what needs to be done in order to produce a product which meets requirements. The panel intends to demonstrate and discuss the following positions:

1. HCI now has a coherent definition of good usability/human-centred practice;
2. organisations are actually using this definition to give benefit;
3. there are ways of assessing whether human-centred activities have been done;
4. this definition of practice provides a link to the world of software capability.

POSITION OF PANELLISTS ON THIS ISSUE

Hazel Courteney: The Civil Aviation Authority has generated proposals for a human-centred assessment of the flight deck as part of aircraft Type Certification. This includes high-level criteria for relevant topics (such as system feedback and the effects of error), accompanied by the requirement for scheduled development practices to support compliance. Detailed advisory material is being formulated, offering methods that would be deemed acceptable for iterative user involvement, and a new technique for Human Hazard Assessment. Defining a minimum level of acceptability is key to this task.

Jonathan Earthy: Lloyd’s Register has developed an ISO 15504 (Software process assessment) compliant usability process reference model (INUSE, 1998) for use in project planning, education, and process assessment and improvement. The model was developed and tested on the European INUSE project and is based on the forthcoming ISO 13407 Human-centred design processes for interactive systems. It is now being integrated into the new System Lifecycle Standard, ISO 15288. The model presents human-centred development activities in the same form as systems and software activities. It thus gives HCI a ‘licence to operate’ in the software development arena and a means of assessing human factors capability.

Simon Hakiel: User Centred Design (UCD) has been integrated into the Integrated Product Development process used by IBM for product development. This UCD process is specifically designed to integrate ease-of-use deliverables into the product design and development process. It is applicable to all aspects of a product with which users come into contact, not just to conventional user interfaces, and invokes explicit user participation in assessment during all stages of design and development. The UCD process has been specified to include creation of ease-of-use assurance deliverables and project management information, as well as product design deliverables. Emerging key factors in the successful deployment of UCD are explicitly distinguishing between UCD activities and software engineering activities in the development process (Hakiel, 1997), and a top-down mandate for the deployment of UCD in the organisation.


Brian Shackel: Some designers cannot understand the apparent stupidity of users who have difficulty accessing the brilliant functions which, at great cost, have been cleverly built into the software. HCI process method producers need to recognise that they are now in the same situation, with those stupid system designers failing to use their sophisticated usability design guides and procedures. So how do we set about taking our own medicine?

Brian Sherwood-Jones: BAeSEMA has developed a metric to assess the maturity of processes relating to usability and safety in use. The metric is currently being used for in-house benchmarking and process improvement, but the primary aim is as part of tender assessment for the supply of software-intensive, safety-related systems to BAeSEMA acting as prime contractor. The model is generally of the Capability Maturity Model form, but with some tailoring.

Bronwen Taylor: At Philips a programme known as Humanware Process Improvement (HPI) has been set up to bring about a stronger user focus in product creation processes around the company, so that we can design the qualities end users really want into products ranging from shavers to complex medical imaging systems. An HPI model of an "ideal" user-centred process has been developed, defining key practices which address the qualities important to end users. Assessments of how current practice measures up to this model are being carried out to share best practice between the different product divisions.

CONCLUSIONS

A set of processes which result in usable products can be defined and will soon be published in an International Standard. These processes enable HCI to be integrated into system development as an equal partner. Human-centred design processes are in use as the basis of user-oriented product lifecycles in the software and electronic goods sectors. Assessment of the maturity of human-centred processes is seen as a means of ensuring usability in the military and transport sectors.

REFERENCES

Courteney H.Y. & Earthy J.V. (1997) Assessing System Safety To 10-JOKER: Accounting for the Human Operator in Certification and Approval, proceedings of the INCOSE conference at the European Space Agency.

Hakiel S.R. (1997) Usability Engineering and Software Engineering: How do they relate? Advances in Human Factors/Ergonomics, 21B, Design of Computing Systems, 521-524.

INUSE (1998) Usability Maturity Model: Processes, Lloyd’s Register of Shipping project IE2016 INUSE Deliverable D5.1.4p, www.npl.co.uk/inuse.

Taylor B., Sherwood-Jones B. & Earthy J.V. (1998) Tutorials AM4, Capability in human centredness, and PM14, Human-centred processes and their improvement, HCI’98, Sheffield.


A New User Interface Metaphor for Mobile Personal Technologies

Elisa del Galdo1, Paul Gough2, Matt Jones3, Rob Noble1 and Philip Stenton4

1 Canon Research Centre, 1 Occam Court, Occam Road, Surrey Research Park, Guildford, Surrey GU2 5YJ, UK

2 Philips Research Laboratories, Cross Oak Lane, Redhill, Surrey, RH1 5HA, UK

3 School of Computing Science, Middlesex University, Bounds Green Road, London N11 2NQ

4 Hewlett-Packard Laboratories, Filton Rd, Bristol BS12 6QZ

ABSTRACT

As the computer industry advances into the 21st century, major changes are taking place in the manner in which users interact with computers. Advances in technology have enabled the miniaturization of components and products, making truly mobile products a reality. The result is a change in the user's context. Miniaturization allows the creation of products with vast functionality accessed via a very small user interface. Changes in user context and reduction of the user interface make the desktop-like metaphor of the PC inappropriate for the context of mobile use. Currently, these problems have only been addressed by tailoring the existing user interface metaphors and input and output methods. This panel addresses the user interface design challenge that mobile products present by looking beyond current solutions to the creation of new user interface metaphors and the incorporation and integration of alternative input and output methods.

KEYWORDS: Mobile personal technologies, user interface metaphors, user interface miniaturisation

INTRODUCTION

Elisa del Galdo: Previously with Digital and an HCI consultant, Elisa del Galdo is a member of the Product Conceptualisation and Prototyping Group at Canon, where she is working on the design of speech and language user interfaces.

As the computer industry advances into the 21st century, major changes are taking place in the manner in which users interact with computers. Advances in technology have enabled the miniaturisation of components and products. These small computers are referred to as 'personal technologies' and are truly usable in a mobile fashion. The first and most visible change is the ability to use a product while mobile. This has changed the context in which users work. The user is no longer just transporting the computer to be used at a makeshift or temporary desk. Thus, the desktop-like metaphor that many small devices continue to use is likely to be inappropriate for the context of mobile use.

The second change in interaction is that users have less physical space in which to interact with the computer. This problem is magnified by the gains in technology that yield very small products with diverse and abundant functionality. Many of these products possess a level of functionality that is normally accessed using an interface that is at least four or five times larger. Increased capability in a small product may be seen as a benefit, but designing a small user interface that affords easy access to the product's range of functionality can prove to be very difficult. Often, the result is overly complex and many actually restrict users' access to even commonly used functions.

Many vendors have approached this UI design problem by making adjustments to an existing user interface design originally based on a desktop metaphor. The adjustments include miniaturisation, slight 'look and feel' alterations, and the addition of alternative input or output modalities (e.g., handwriting recognition, voice as data, speech recognition). As newer user interface technology matures and designers develop a better understanding of users' requirements for mobile products (both for tasks and interaction), a new metaphor should emerge. This new 'mobile' metaphor will include multi-modal interaction (i.e., using a combination of input or output methods for a single task), spontaneous interaction, and output mechanisms suitable to a small interface, all within a metaphor appropriate for mobile interaction. Addressing the mobile user interface design problem presents many exciting challenges for the HCI community.

Paul Gough: Previously at Xerox, Paul Gough joined Philips in the early eighties. He is the manager of the software research group and is involved in the development of a mobile and personal interfaces research programme.

We believe that the desktop metaphor and direct manipulation interaction style is a deficient combination for mobile users, because of the differences in requirements between mobile and office users. Unlike an office-based user, the mobile user needs to dedicate attention to their task location and, for safety reasons, be aware of their environment. For these reasons, it is difficult to imagine that direct manipulation and a desktop metaphor are the ideal components of a mobile user interface. Much more appropriate are means of interaction which can be undertaken whilst the user is maintaining awareness of location and environment.

We believe that the user interface will use a mixture of metaphors and interaction styles (e.g., speech, gestural, or tactile) to provide access to the available information and functions.

Matt Jones: Senior Lecturer at the School of Computing, Middlesex University. Currently, he is working on an EPSRC-funded project (GR/L70028) on handheld web browsing with collaboration from Reuters.

We are at a "defining moment" for mobile personal technologies. Up until recently, PDAs, email 'phones, in-car navigation systems and the like have been mainly bought by gadget lovers and early adopters. Soon, though, they will really have to work: people will rely on them to get their jobs done, to help them use their leisure time effectively and, in some cases, to keep them alive. To be successful, mobile interfaces need to have qualities which the current "desktop" just cannot deliver - focused navigation for task completion, simple and systematic interactions, and cross-platform independence. The development of new metaphors for the mobile era will have ramifications for interfaces on conventional platforms too - maybe it is time we threw the desktop out of the window altogether.


Rob Noble: a member of the Product Conceptualisation and Prototyping Group at Canon, where he exercises his pet interest in user interface design without being biased by any kind of formal training in HCI.

Today our interaction “language” consists of a vocabulary of mouse clicks, key presses, text and icons. The transition from desktop computer to mobile computing device makes current input devices awkward to use and output devices more limited. The rate of communication with a small or mobile device is lower, and the interaction becomes less efficient. Cramming more information through the display with smaller display elements, extending the user interface hierarchy, or adding buttons where there was previously “space to breathe” is not a good option.

One answer is to enrich the “vocabulary” of communication by using additional forms of input and output, such as natural language. Rather than forcing the user to learn more interaction methods, we can use a natural language like English instead. People have to learn to speak in order to communicate with each other, and we can borrow that learning effort. The device and its user can even improve their communication by creating new, shared understandings that make their communication more efficient.

Philip Stenton: Phil Stenton, previously with British Telecom, is the manager of the Interaction Technology Department at Hewlett-Packard Laboratories in Bristol.

As personal technologies advance towards pervasiveness, the windows metaphor for computing begins to look fragile, yet we hang on to it for the security it gives us, like the well-known face of a trusted friend. A number of pretenders to the windows throne have come and gone. Some have found their niche in the pockets of mobile professionals or in the cabs of delivery professionals. Standardisation, however, is not just in the visual interface; it also allows our personal media to be shipped from one device to another. Lessons from the past suggest that miniaturisation and technology performance are necessary, but not sufficient, for an interface revolution in personal technologies. The real drivers will be a focusing of function and a standardising of intercommunication.


Organisational constraints on the proper teaching of HCI: must HCI have its own department to be taught properly?

Peter Gregor1, Xris Faulkner2, Phil Gray3, Andrew Monk4, Peter Timmer5 and Steve Draper6

1 Dept of Applied Computing, University of Dundee, Dundee DD1 4HN

2 School of Computing, South Bank University, 103 Borough Road, London SE1 0AA

3 Dept of Computing Science, University of Glasgow, Glasgow G12 8QQ

4 Dept of Psychology, University of York, York YO1 5DD

5 Ergonomics & HCI Unit, Department of Psychology, University College London, 26 Bedford Way, London WC1H 0AP

6 Dept of Psychology, University of Glasgow, Glasgow G12 8QQ

ABSTRACT

HCI now has an established place in the curricula of many University departments and is a necessary component in the professional development schemes of the BCS, ACM, IEEE and IEE. But how should HCI teaching be organised when most university teaching is organised around old discipline boundaries? Can it be effectively and harmoniously organised within old departments and if so which, or must separate HCI departments be created? The session is motivated by the pressures experienced by HCI course designers working in the computing sector. The panel outlines the problems, offers three different approaches to addressing them and concludes with a discussion.

PROBLEMS?

Phil Gray is a lecturer in the Computing Science Dept at the University of Glasgow. He has designed and taught HCI courses for students in all four years of the Scottish undergraduate degree, advanced MSc students and practising software engineers.

Xris Faulkner is a Senior Lecturer in HCI who has been lecturing at the School of Computing at South Bank University since 1990. She has taught HCI on a range of courses from HND to MSc levels.

Even when HCI is included in a degree programme, it is often viewed as peripheral to the central concerns of the discipline. HCI doesn’t “belong”, in that it doesn’t share the same academic culture as the department which hosts it. Typically HCI is alone in taking seriously “soft” disciplines like cognitive science, sociology and graphic design. It focuses on interactivity rather than technology, design rather than implementation, methodological rather than design principles. Computer science is neither old nor confident enough not to feel threatened by HCI’s seemingly unscientific nature. Our options for growth within computer science point in two opposing directions: integration or separation. Integration acknowledges that HCI should be part of computer science. HCI can be distributed across other courses, making guest appearances in other subjects. Taken far enough, this option could lead to the dilution or disappearance of HCI as a degree component. Separation emphasises HCI’s fundamental difference from the rest of computer science and software engineering. This path offers the freedom to develop courses which satisfy HCI’s own developing culture, but there is the danger of isolation, with HCI disappearing altogether from the education of “pure” computer scientists.


SOLUTIONS?

Peter Gregor lectures in the Department of Applied Computing at the University of Dundee. He organises the honours and postgraduate HCI & usability courses. Research interests include computer-based interviewing and reading aids for dyslexia.

The University of Dundee offers a degree programme in which HCI and usability engineering is an underlying organising principle. In a major review in 1994, the department identified the increasing trend for competitive edge in industry to be based primarily on software engineering and the development of highly usable software. The resulting Applied Computing degree programme is successful in avoiding the usual tensions and in placing HCI alongside other topics at the heart of what is taught.

Peter Timmer is a Lecturer in Cognitive Ergonomics at the Ergonomics & HCI Unit at University College London. He is co-ordinator of the Human Factors of Human-Computer Interaction option in the M.Sc. in Ergonomics.

The Ergonomics Unit at UCL also has sufficient control over its timetabling and course content to permit a range of selected teaching practices to be adopted where considered appropriate. UCL offers a one-year full-time M.Sc. in Ergonomics. All students choose to specialise in either General Ergonomics or the Human Factors of HCI. HCI is taught as a discipline of design.

Andrew Monk is Reader in Psychology at York University. He is Chair of the British HCI Group and his research is concerned with video communication and making human factors techniques more accessible to designers.

Andrew Monk proposes a third solution, which is as yet far from being implemented. His view is that HCI cannot be taught properly within any existing department, and needs a new department of its own. He bases this on an analysis of HCI research, and how this is clearly about topics distinct from those studied in any of the apparently contributing disciplines.

DISCUSSION

Steve Draper has worked on HCI since the early 1980s, when he was a postdoc with Don Norman in San Diego and co-edited the book “User Centered System Design”. He has taught an HCI module at M.Sc. level for 10 years at the University of Glasgow.

Steve will chair the session and comment on the contributions. Questions to be raised include: are the tensions HCI experiences any different from those in any engineering subject; does teaching HCI within Computer Science reveal a machine-centred rather than a user-centred approach; should the techniques of collecting data on user behaviour be part of any HCI curriculum?

REFERENCES

This panel has been informed by a workshop held at the University of Glasgow on 24th & 25th May 1998. The background material for the workshop can be found at: http://www.psy.gla.ac.uk/~steve/HCI/rep.html


Communication goals in interaction

Ann Blandford1 and Richard M. Young2

1 School of Computing Science, Middlesex University, Bounds Green Road, London N11 2NQ, U.K.

2 Department of Psychology, University of Hertfordshire, Hatfield, Herts. AL10 9AB, U.K.

ABSTRACT

We define a “communication goal” as task-related knowledge that a user expects to communicate to the computer system in the course of completing a task, and discuss the role that communication goals can play in determining the success, or otherwise, of the interaction.

Keywords: cognitive modelling, interaction modelling, human error.

INTRODUCTION

The course of an interaction between a user and a computer system can be strongly influenced by the user’s expectations of what they will have to do. For users familiar with a particular device, those expectations will include much device-specific knowledge. However, users unfamiliar with the device will base their initial expectations on their knowledge of the domain, and have to rely on the device for cues on how to interact with it. For example, the telephone user who wishes to divert her calls to another extension will expect to specify the extension number. She will therefore have the goal of communicating that number to the system, and so will seek an opportunity to satisfy that goal.

We start with a fairly standard set of assumptions about users’ cognition - for example, that a rational user will form plans to achieve her goals and will use her knowledge to form those plans, and that the system she is working with is an important source of knowledge (e.g. Blandford & Young, 1996). We extend the approach by asserting that:

• when the user adopts a task goal, she will recognise that particular task-related information needs to be communicated to the computer system, and will adopt the communication of this information as “communication goals”.

• the user seeks opportunities to achieve “communication goals” and interprets information from the device in the context of these goals.
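As a purely illustrative rendering of these two assertions (and of the goal-to-option matching discussed below), the following TypeScript sketch represents a user's communication goals as data and matches them against the options a device currently displays, using the call-divert telephone example from this paper; the keyword-overlap heuristic is an assumption for illustration, not the authors' cognitive model.

```typescript
// Sketch only: communication goals represented as data and matched against the
// options a device currently offers (the call-divert telephone example). The
// keyword-overlap scoring is an illustrative assumption, not the authors' model.

interface CommunicationGoal {
  info: string;       // what the user expects to have to tell the system
  keywords: string[]; // terms the user might associate with that information
}

// The divert-calls task as a novice user might formulate it.
const goals: CommunicationGoal[] = [
  { info: "divert my calls", keywords: ["divert", "call", "forward"] },
  { info: "target extension number", keywords: ["extension", "number"] },
];

// Options the device happens to be displaying at this point in the dialogue.
const deviceOptions = ["call related features", "messages", "phone setup"];

// Pick, for each goal, the option sharing the most keywords: a crude stand-in for
// the user's semantic matching. Zero or tied scores are exactly where false
// matches, or goals with nowhere to go, show up.
function bestMatch(goal: CommunicationGoal, options: string[]): { option: string; score: number } {
  let best = { option: options[0], score: 0 };
  for (const option of options) {
    const score = goal.keywords.filter((k) => option.includes(k)).length;
    if (score > best.score) best = { option, score };
  }
  return best;
}

for (const goal of goals) {
  const { option, score } = bestMatch(goal, deviceOptions);
  console.log(`"${goal.info}" -> "${option}" (score ${score})`);
}
```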

Not surprisingly, the way the user formulates a task has a direct influence on the communication goals adopted. In interaction, we can identify a set of possibilities: on one dimension, the user may have a superfluous communication goal or may lack an essential communication goal for the current task; on another dimension, for each communication goal, there may be a good match to a device requirement or a mismatch. The presence of a superfluous communication goal is unlikely to lead to an overt error, but may leave the user feeling that they have “missed something”. Here, we focus on the absence of a communication goal and on false matches.

ABSENCE OF A COMMUNICATION GOAL

There may be a range of situations in which the user does not have all the necessary communication goals. We briefly consider two examples.

Our first is use of a particular photocopier. Users of this particular photocopier have to insert an account card to use it and retrieve the card when they have finished. In general, when the card is removed, all copier settings are reset to their default values. Under the rare circumstances when this does not occur, the next user who wants copies according to the default settings will receive copies according to the current settings (e.g. double-sided). In terms of communication goals, the user’s formulation of their task is that they want a copy of this document. Because the parameters “single-sided” and “full-size” are the default values, they are not represented as communication goals, resulting in an error.

Not all cases lead to overt errors; for example, a mobile telephone described by Blandford, Butterworth and Good (1997) has an option to “divert voice calls”, immediately followed by the option to “divert all voice calls”. To the novice user, who is not aware of what the alternatives are, this appears to be the same question. Although this does not lead to an error, an informal test (asking several people to divert calls) indicates that most people are surprised and puzzled by this aspect of the design.

FALSE MATCHES

False matches between a communication goal and a device requirement generally result in overt errors. They are often a consequence of a novice user not knowing the appropriate terminology for working with a particular system. The user has the goal to communicate particular information to the device; the device is offering a range of possible options, and the user has to assess which of those options best matches any of the current goals. One example of misleading options that can result in false matches is the mobile telephone menu discussed above. For this particular telephone, there is a level in the menu hierarchy where the user has to select between options for “call related features”, “messages” and “phone setup”; if we consider the task goal of diverting calls to another number, any of these options could be the most relevant-looking depending on how exactly the user has formulated the goal (as relating to features of calls, to the way the telephone is set up, or to where messages are sent).

CONCLUSION

An analysis in terms of communication goals can help an analyst to identify possible sources of error, or points where the user is likely to find the interaction with a computer system unnatural. Interaction generally integrates elements of both planning and situated behaviour. The work reported here extends an “interactive planning” model, that focuses primarily on the role of user knowledge in determining behaviour, to include semantic matching, the role of the interface as a resource (Suchman, 1987; Fields, Wright and Harrison, 1996) and the importance of task-related communication.

ACKNOWLEDGEMENTS

We are grateful to Jason Good, Richard Butterworth and David Duke for helpful discussions. This work is funded by EPSRC, grant number GR/L00391. See http://www.cs.mdx.ac.uk/puma/.

REFERENCES
Blandford, A.E., Butterworth, R. & Good, J. (1997) Users as rational interacting agents: formalising assumptions about cognition and interaction. In M.D. Harrison & J.-C. Torres (Eds.) Design, Specification and Verification of Interactive Systems. 45-60. Vienna: Springer.
Blandford, A.E. & Young, R.M. (1996) Specifying user knowledge for the design of interactive systems. Software Engineering Journal, 11(6), 323-333.
Fields, B., Wright, P. & Harrison, M. (1996) Designing human-system interaction using the resource model. Proceedings of APCHI '96. 181-191.
Suchman, L. (1987) Plans and Situated Actions. Cambridge: CUP.


Sustaining the paper metaphor with Dynamic-HTML

Gavin J. Brelstaff1 and Francesca Chessa2

1 CRS4, Via N. Sauro 10, 09123 Cagliari, [email protected]

2 Università degli Studi di Sassari, Via M. Zanfarino, 07100 Sassari, [email protected]

ABSTRACT
We describe prototypes that sustain the paper metaphor in two new on-line scenarios: (1) reading annotated works of foreign literature, and (2) paperless medical reporting.

Keywords: Paper metaphor, D-HTML, Open distance learning, CALL, Electronic patient record.

INTRODUCTION
The goal of the paperless office remains somewhat elusive - as the leading printer manufacturers can testify. PC applications, in particular, seem to be designed to produce new printed documents rather than eliminate them. However, a new impetus away from paper is now being led by the technology of the Internet/Intranet. It not only provides new channels to exchange purely digital documents, but its vast readership re-motivates the quest to extend the paper metaphor, while the advent of dual-browser dynamic-HTML (D-HTML) enables new methods (Mudry 1998).

The metaphor of the user interface (UI) as paper page has endured up to today's homepages on the web - due, in part, to its universal familiarity. New users rapidly orient themselves, already knowing how pages should behave or respond. As long as their UI obeys key expected behaviours they happily accept extensions of functionality (Collins 1995). Examples have included: the DEL key, enabling correction without snowpake; VisiCalc, an early spreadsheet, computing totals on the page; spell-checkers red-underscoring misspellings; text entry into blank form fields; and the Mac's cartoon context-sensitive help. A factor common to these successful extensions is their minimal invasion of the user's visual search capacities: action occurs at, or near, where visual attention is already fixed, so normal reading patterns (Yarbus 1967) are not disrupted by external saccades. It is becoming clear that transsaccadic memory is not a good cache of detailed information (Blackmore et al 1995). Ignoring this fact can lead to disorientating UIs: pop-up assistants (Office95); digital footnotes at the foot of the page; hypertext links that obliterate the reader's current page; and Next buttons out of the natural reading flow. Below we outline prototypes that leverage D-HTML to extend the paper metaphor in two on-line scenarios: (1) reading annotated works of foreign literature, and (2) paperless medical reporting.

READING ANNOTATED WORKS OF FOREIGN LITERATURE
Original texts of great works are often inaccessible to foreign literature undergraduates until they complete a year of preparatory language training. Our prototype confronts them with original text from the word go, providing inline annotation designed by tutors to resolve predicted language difficulties as well as to instil literary appreciation and interpretation. We aim to make reading interactive without being too distracting. The interface design is simple:


1. The original text is always visible as black on a light-grey background, except when the cursor moves over a word, where the coherent phrase or sentence containing it becomes highlighted by a white background.

2. Parts of that phrase containing annotation are signalled in a distinguishable coloured font, and respond to a mouse click by replacing, in-line, the original text with that of the annotation while reversing the font/background colours.

3. Successive clicks replace that annotation with another, or return to step 2; moving the cursor off the phrase undoes any annotation or highlighting.

Here we dynamically manipulate CSS-2 style-sheets using client-side JavaScript (CSJS). Text and annotations are imported from CSJS files as a series of calls to a function. Minimal eye movement is needed by the reader, and colour highlights hold rather than distract attention. So a basic grasp of a language may afford an on-line opportunity to appreciate and learn from classics with less resort to reference books.
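As an illustration of this style of in-line annotation, the following sketch (in TypeScript; the element ids, class names and the annotations map are our own assumptions, not the authors' CSJS code) swaps the text of a phrase with its annotation on click, reverses the font/background colours, and restores the original text when the cursor leaves the phrase:

```typescript
// Hypothetical annotation store: phrase element id -> ordered list of annotations.
const annotations: Record<string, string[]> = {
  phrase1: ["gloss: archaic verb form", "tutor note: an ironic echo of the opening line"],
};

function attachAnnotation(span: HTMLElement): void {
  const original = span.textContent ?? "";
  let index = -1; // -1 means the original text is showing

  span.addEventListener("click", () => {
    const notes = annotations[span.id] ?? [];
    if (notes.length === 0) return;
    index = (index + 1) % (notes.length + 1);           // cycle through annotations, then back
    const showingNote = index < notes.length;
    span.textContent = showingNote ? notes[index] : original;
    span.style.color = showingNote ? "white" : "black"; // reverse font/background colours
    span.style.background = showingNote ? "black" : "white";
  });

  span.addEventListener("mouseleave", () => {           // moving off the phrase undoes everything
    index = -1;
    span.textContent = original;
    span.style.color = "black";
    span.style.background = "white";
  });
}

document.querySelectorAll<HTMLElement>(".annotated").forEach(attachAnnotation);
```

A real implementation would, as the paper describes, drive the appearance changes through CSS-2 style-sheet manipulation rather than inline styles, but the interaction pattern is the same.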

PAPERLESS MEDICAL REPORTING
Medics happily fill in paper reports or forms, sign them and file a copy before dispatching them. They are used to being able to see what they wrote when they need to. An electronic form is different: once submitted, transparency is immediately lost; medics neither know where the data has gone nor whether it was manipulated during the process to misrepresent them. Gaining medics' trust remains a barrier to the widespread introduction of the multi-media, electronic patient record (EPR). Our prototype uses CSJS to freeze each form at the moment of submission into a document with identical HTML layout but with all data fields fixed as permanent text. A medic can then digitally sign and email (S/MIME) this document to a secure newsgroup (Brelstaff, "Leveraging Internet-98 Technology for Computer Healthcare Networks", submitted to IEEE Trans. on IT in Biomedicine). Thus a paper-like, electronic archive remains always visible to the medic and authorised colleagues. A newsgroup is maintained for each patient, each hosting a pre-specified set of threads that map onto distinct parts of the patient record (e.g. anagraph, clinical plan, diagnoses, vital signs, exams). Each thread is initialized with a custom blank document. As the patient progresses through the ward, medics successively update the latest version of relevant forms. For this purpose a "Reactivate Form" link is included in each frozen form. When clicked it re-launches that form with the previously filled data fields. In fact, it requests the appropriate blank form from a companion secure web server (running SSL) as a URL with a "search string" specifying the existing data. CSJS parses that data and fills in the blanks. Remarkably, no SQL database is involved, and no text processing occurs on the server.
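A minimal sketch of the two client-side steps described above, freezing a submitted form into plain text and later refilling a blank form from a URL search string (the function names and markup are assumptions for illustration, not the authors' code):

```typescript
// Freeze: replace every input with a span holding its value, so the report
// keeps its layout but can no longer be edited.
function freezeForm(form: HTMLFormElement): void {
  form.querySelectorAll<HTMLInputElement | HTMLTextAreaElement>("input, textarea")
    .forEach((field) => {
      const frozen = document.createElement("span");
      frozen.textContent = field.value;   // fix the entered data as permanent text
      frozen.className = "frozen-field";  // preserve the original layout via CSS
      field.replaceWith(frozen);
    });
}

// Reactivate: the "Reactivate Form" link requests the blank form with the
// existing data in the search string, e.g. blank-report.html?bp=120%2F80,
// and client-side script refills the blanks from location.search.
function refillFromSearchString(form: HTMLFormElement): void {
  const params = new URLSearchParams(window.location.search);
  params.forEach((value, name) => {
    const field = form.elements.namedItem(name);
    if (field instanceof HTMLInputElement || field instanceof HTMLTextAreaElement) {
      field.value = value;
    }
  });
}
```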

ACKNOWLEDGEMENT
This work was supported by the Autonomous Region of Sardinia, and by kit from Hewlett Packard. The authors are grateful for discussions with researchers at CRS4.

REFERENCES
Blackmore, S.J., Brelstaff, G.J. & Nelson, K. (1995) Is the richness of our visual world an illusion? Transsaccadic memory for complex scenes. Perception, 24, 1075-1081.
Collins, D. (1995) Designing Object-Oriented User Interfaces. Benjamin/Cummings Pub. Co.
Mudry, R.J. (1998) The DHTML Companion. Prentice Hall.
Yarbus, A.L. (1967) Eye Movements and Vision. New York: Plenum Press.


User Profile-based Reference Points in Information Visualisation

Chaomei Chen1 and John Davies2

1 Department of Information Systems & Computing, Brunel University, Uxbridge UB8 3PH, UK

2 Knowledge Management Research, BT Labs, Ipswich IP5 3RE, UK

ABSTRACT
This paper briefly describes the design of a novel user interface for exploiting documents accumulated in an information filtering and sharing environment. In addition to visualising inter-document relationships, the visual user interface reveals the interconnectivity between user profiles and documents. The work currently focuses on the role of user profiles, based on the notion of reference points, in order to provide additional heuristics for information sharing.

Keywords: Visual user interface, information visualisation, reference points.

INTRODUCTION
The exponential growth of widely accessible information in modern society highlights the need for efficient information filtering and sharing. Information filtering techniques are usually based on the notion of user profiles in order to estimate the relevance of information to a particular person. Jasper is an information filtering and sharing system (Davies et al., 1995). It maintains a growing collection of annotated reference links to documents on the World Wide Web (WWW). Currently, the interconnectivity among these accumulated documents and user profiles is not readily available in Jasper. In this paper, we describe the design of a novel visual user interface intended to uncover this interconnectivity.

REFERENCE POINTS
Our design is based on the notion of reference points. This concept originated in psychological studies of similarity data and spatial density (Krumhansl, 1978). The underlying principle is that geometric properties such as symmetry, perpendicularity and parallelism are particularly useful in communicating graphical patterns. For example, people often focus on structural patterns such as stars, rings and spikes in a network representation. Reference points, conceptually or visually, play the role of a reference framework in which other points can be placed. In our work, we hypothesized that a number of star-shaped, profile-centred document clusters would emerge if the role of reference points was actively played by user profiles. Users would then be able to share information more effectively, based on the additional information provided by user profiles through the visual user interface.

METHODS
We randomly sampled 127 documents and 11 user profiles from Jasper. The mixed collection was visualised within the Generalised Similarity Analysis (GSA) framework (Chen, 1997). First, we extracted and preserved only the most salient semantic relationships in order to reduce the complexity of the visualisation network. Second, we incorporated user profile-based reference points in order to improve the clarity of the visual user interface. Unique behavioural heuristics were applied to distinguish user profiles from documents, so as to speed up the convergence of our self-organised clustering process. These emergent structures were derived without any prior knowledge of structural relationships. Additional structural cues are likely to result in more efficient results.


RESULTS
The impact of user profile-based reference points can be seen in Figure 1. The left sub-figure shows the self-organised spatial layout without using the mechanism of reference points. The sub-figure in the middle shows the layout when the mechanism of reference points was utilised. In fact, the 11 user profiles, which make up only 8% of the 138 nodes, were associated with 69% of the links in the network, whereas the remaining 127 documents, which make up 92% of the nodes, shared only 31% of the links. Reference points have clearly improved the clarity of the overall structure. Users may now track relevant documents based on their knowledge of their colleagues' expertise.
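The link-share figures above amount to a very small computation. The sketch below (the data model is our assumption, not part of the GSA framework) counts what proportion of a network's links touch at least one profile node:

```typescript
// A link between two nodes, identified by node ids.
interface Link { source: string; target: string; }

// Proportion of links that touch at least one profile node.
function profileLinkShare(profileIds: Set<string>, links: Link[]): number {
  const touching = links.filter(
    (l) => profileIds.has(l.source) || profileIds.has(l.target),
  ).length;
  return touching / links.length;
}

// For the sampled Jasper collection, 11 profile nodes out of 138 (8% of nodes)
// gave a profile link share of 0.69; the remaining document-to-document links
// account for the other 31%.
```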

Figure 1: The role of reference points: disabled (left), enabled (middle), and a close-up look at a cluster (right) (cube = profile; sphere = document).

CONCLUSION
In this paper, we have shown that the quality of information visualisation can be improved by incorporating user profile-based reference points. Preliminary results suggest that this technique is potentially useful for visual user interface design. More work is needed on the human factors of using this type of visual user interface, so as to enable users to gather and share information more efficiently.

ACKNOWLEDGMENTS
This work was supported in part by a 1997 BT short-term research fellowship.

REFERENCES
Chen, C. (1997) Structuring and visualising the WWW by Generalised Similarity Analysis. In Proc. of Hypertext '97 (Southampton, UK, April 1997), ACM Press, pp. 177-186.
Davies, N.J., Weeks, R. & Revett, M.C. (1995) An information agent for WWW. In Proc. of the 4th Int. Conf. on World-Wide Web (Boston, USA, Dec. 1995).
Krumhansl, C.L. (1978) Concerning the applicability of geometric models to similarity data: The interrelationship between similarity and spatial density. Psychological Review, 85(5), 445-463.


Improving online style-guides and guidelines

Mikael Ericsson

Department of Computer and Information Science, Linköpings universitet, 581 83 Linköping
[email protected], http://www.ida.liu.se/~miker/

ABSTRACT
Empirical studies (quantitative and qualitative) were performed in order to assess the perception of different forms of online presentation of formalised design knowledge (e.g. guidelines). The results show that guidelines should be imperative rather than declarative. Qualitative data also suggests that examples should be used for guidance.

KEYWORDS: Design support, guidelines, online support

INTRODUCTION
Making guidelines and style guides useful in professional work is a difficult task, as evidenced by scientific studies and practitioner comments: guidelines are often claimed to be too general, too specific, hard to access, hard to locate, etc. (Chapanis, 1990; Tetzlaff & Schwartz, 1991). Taking the knowledge online, inserting it into the designer's work environment, is one approach to solving some of these problems. There are numerous prototype systems in the form of hypertext guidelines, example databases and intelligent commenting agents. These make the knowledge available and accessible in the design situation and show the feasibility of the approach. There is also evidence that designers prefer online over paper-based guidelines (Fox, 1992). However, it is still unclear what form should be used for presentation. This paper presents a study of different linguistic forms for design knowledge presentation.

EMPIRICAL STUDIES — RESULTS
As part of an investigation of the different behaviours of commenting agents delivering guideline knowledge in the form of comments (Ericsson, 1996), we assessed the effects of different linguistic variations.

Sixteen subjects were observed using a simulated (Wizard-of-Oz) commenting agent in a design support system. Different commenting behaviours were tested, and the overall usefulness evaluated. Two linguistic variants (mood) were used, a declarative one (pointing out a flaw) and an imperative one (suggesting how to remedy a flaw), with the intent to keep their semantic content otherwise similar. The interaction was logged and recorded on video, and the subjects rated the agent with respect to usefulness, understandability, system competence, disturbance and perceived stress. Perceived mental workload was measured using RTLX. Questionnaire questions relevant to guideline form addressed the comprehensibility of the comments, the perception of the competence of the tool and the participant's ability to find objects referred to in a comment.

The participants performed a one-hour design task, and were then asked to rate and to describe their impression of the system and the design support. The participants were then shown three short video clips from their design work, and asked to perform the same kind of rating for this specific situation.

We found three significant effects of mood. Participants using the declarative system rated comments as more difficult to understand than those using the imperative system, both in general (6.3 vs. 2.8, F(1,10)=14.04, p<0.05) and on average in the specific situations (5.7 vs. 2.6, F(1,24)=10.1, p<0.01). Those using the imperative system rated the system's competency lower than the declarative group (4.7 vs. 5.8, F(1,10)=4.0, p<0.10). Participants in the declarative condition reported a higher MWL than those in the imperative group (52 vs. 37.6, F(1,35)=16.7, p<0.01).

As part of a study on the use of and need for (requirements) support for formalised design knowledge, we interviewed 8 professional systems developers. The semi-structured, open-ended interviews contained questions about guideline availability, use, demand and form. The interview answers and comments show that users prefer accessing the "essence" of the design knowledge first, without complete rationale. However, they would like the possibility to immediately access examples that illustrate the guideline (positively or negatively).

CONCLUSIONS AND DISCUSSION
Guidelines presented online in a design situation should be imperative rather than declarative. Imperative comments result in lower MWL and are rated as easier to understand. The fact that an imperative system is considered less competent is in line with other findings showing that systems which are easier to understand are considered less competent (cf. Wærn and Ramberg, 1995).

The preference for imperative forms implies that comments must be "situated" in the particular design moment. A comment phrased in a specific form (imperative) may be easier to relate to the particular design moment than a comment phrased in a general form (declarative), even if the degree of contextualisation is rather low.

ACKNOWLEDGEMENTS
This work was conducted in collaboration with Magnus Baurén, Prof. Yvonne Wærn and Jonas Löwgren. Thanks to the participants of our studies for their time and efforts. This work was financially supported by the Swedish Research Council for Social Sciences and the Humanities (HSFR).

REFERENCES
Chapanis, A. (1990) Specifying human computer interface requirements. Behaviour and Information Technology, 9(6), 479-492.
Ericsson, M. (1996) Commenting tools as design support – a Wizard-of-Oz study. Licentiate thesis, Linköping Studies in Science and Technology, No. 576, Linköpings universitet, Sweden. http://www.ida.liu.se/~miker/research/
Fox, J.A. (1992) The effects of using a hypertext tool for selecting design guidelines. In Proc. of HFS'92, pp. 428-432.
Tetzlaff, L. & Schwartz, D.R. (1991) The use of guidelines in interface design. In Proc. of CHI'91, pp. 329-333.
Wærn, Y. & Ramberg, R. (1995) People's perception of human and computer advice. Computers in Human Behavior.


Using ‘Contact Points’ for Web Page Design

Pete Faraday and Alistair Sutcliffe

Centre for HCI Design, City University, London EC1 0HB, [email protected]

ABSTRACT
The paper explores how 'contact points', or co-references between text and image, should be designed in web pages, via guidelines and a web page design tool.

INTRODUCTION
Almost all web pages contain text and image, yet little work has addressed how to design such combinations. This paper introduces 'contact points' as a key design issue. These are places in the text where the content needs to be related with the image, e.g. for details of an object's appearance, or how to perform an action. The problems are i) how to provide linking references between text and image; ii) how to ensure the message thread can be followed between text and image media.

GUIDELINES
To investigate how viewers form contact points, we performed a number of eye tracking studies (Faraday & Sutcliffe, 1997). Figure 1 shows a result set for LaserWriter instructions with two contact points, 'Clean Wires' and 'Not break'. The studies provided evidence for the effect of contact point design techniques:

- If a clause of text refers to an image, the contact point should be represented explicitly in the text. In all cases subjects re-inspected the image after reading the text, suggesting that the text had provided co-references with the image.
- Contact points should be sequenced to avoid overwhelming the reader with labels and images which refer to later parts of the text. Subjects seemed to process the text as a whole, then rescan the image. This may cause problems as the contact points should be resolved in the order given in the text to provide their value.
- It is important to provide text and image together in close proximity. This will allow the viewer to easily switch between the text and image.
- The referent in the image should be easy to locate. Subjects tended to fixate highlighted objects, such as those with arrows or zooms.
- Viewing should be self paced. Reading and inspecting the image are serial processes which take time.

Figure 1 Single subject eye track results


Figure 2: Tool screen shot (clockwise from top): Content, Page, Timeline views

TOOL SUPPORT
To meet these needs, a dynamic HTML authoring tool was developed. The tool encourages the designer to break their content up into contact points, relating a part of the text with a visual sequence in an image. The tool's content view defines which contact point relates to an image; in Figure 2 the image LWGround has two contact points, 'Clean Wires' and 'Don't break'. As each contact point is created, a contact point 'button' is embedded in the text within the page view. Referents, such as images, labels and animation, can then be dragged onto the image. Figure 2 shows the zoom image is associated with the 'Don't Break' contact point. For more complex sequences, the timeline view can be used to set element order and duration. The tool outputs web pages. In Figure 3, the user clicks the 'Clean Wires' contact point button to reveal the arrow image.
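The behaviour of the generated pages can be sketched in a few lines of script. In this sketch the markup, attribute names and function names are our own illustrative assumptions, not the output of the authors' tool; the point is simply that each contact point button reveals its own referent and hides the others, stepping the reader through the contact points at their own pace:

```typescript
// Wire up every contact point button on the page. Markup assumed:
//   <span class="cp-button" data-referent="cleanWiresArrow">Clean Wires</span>
//   <img id="cleanWiresArrow" src="arrow.gif" style="visibility:hidden">
function wireContactPoints(page: Document): void {
  const buttons = page.querySelectorAll<HTMLElement>(".cp-button");
  buttons.forEach((button) => {
    button.addEventListener("click", () => {
      buttons.forEach((other) => {
        const referent = page.getElementById(other.dataset.referent ?? "");
        // show only the referent belonging to the clicked contact point
        if (referent) {
          referent.style.visibility = other === button ? "visible" : "hidden";
        }
      });
    });
  });
}

wireContactPoints(document);
```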

CONCLUSIONS
We believe that web pages designed with our tool have several benefits: they i) make contact points explicit; ii) step the user through each contact point; iii) draw the user's attention to referents in the image; iv) let the user self pace their viewing. We are currently conducting studies to investigate whether our pages improve comprehension.

REFERENCES
Faraday, P.M. & Sutcliffe, A.G. (1997) Attending to Text and Pictures. Short paper, HCI '97, Bristol, UK.

Figure 3 Web Page Output


Configurable visual changes in a word processor to aid dyslexics

Peter Gregor, Peter Andreasen and Alan F. Newell

Department of Applied Computing, University of Dundee, Dundee DD1 4HN, Scotland

ABSTRACT
The user-centred development of a highly configurable word processing environment to alleviate some of the difficulties encountered by dyslexics when producing and reading text is described. All dyslexic subjects tested were able to use the software to identify and store a visual configuration which they found made reading easier. Successful tests were also carried out to investigate the use of different appearances to alleviate character recognition and reversal problems.

KEYWORDS: Dyslexia, user-centred design, evaluation, accessibility, configuration, word processing

INTRODUCTION AND BACKGROUND
The approach taken in this study was to identify some of the most commonly noted visual problems which dyslexics encounter when reading and producing text. On the basis of these common difficulties, practical ways were identified in which each individual might be able to minimise the consequences of their own particular problems by manipulating the appearance of their word processing environment and of the text presented within it. The work in progress ultimately led to a software system which provides a highly (and easily) configurable environment for dyslexic people to use for reading and producing text.

DYSLEXIA
The most common practical visual problems experienced by dyslexics are: number and letter recognition; letter reversals; word recognition; number, letter and word recollection; spelling problems; punctuation recognition; fixation problems; word additions and omissions; poor comprehension (adapted from Willows, Kruk & Corcos, 1993). The wide-ranging characteristics of dyslexia, however, mean that a single technological approach will not be appropriate for the range of problems presented by a group of dyslexic people. This study approached the problem by developing an easily configurable word processing environment which responded to the various needs of people with dyslexia.

COMPUTER TECHNOLOGY
In this study we offered dyslexic users a range of appropriate visual settings for the display of a word processor, together with the opportunity to easily configure the way in which text is displayed to them. It was envisaged that the users would experiment with the settings and select the combination which best suited them; these settings are then saved and recalled each time that person uses the word processor. This approach affords the potential to make computer-based text significantly easier to read than printed text, as well as improving the usability of computer word processing systems for dyslexics.


EVALUATION
An initial prototype text reading interface presented the user with an easily configurable interface allowing a number of display variables to be altered. Configurable parameters were background, foreground and text colours, font size and style, and the spacing between paragraphs, words and characters. The system was designed so that visual feedback was available on selections before they were confirmed by the user.
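As an illustration only (the field names and storage mechanism below are our assumptions; the actual prototype was a word processor add-in rather than a web application), the configurable parameters listed above amount to a small per-user settings record that is previewed, saved and restored on each session:

```typescript
// One user's display configuration, covering the parameters described above.
interface DisplayConfig {
  backgroundColour: string;     // low colour contrast was commonly chosen
  foregroundColour: string;
  textColour: string;
  fontFamily: string;           // generally a plain sans serif face
  fontSizePt: number;
  characterSpacingEm: number;   // extra spacing between characters
  wordSpacingEm: number;
  paragraphSpacingEm: number;
}

// Saved once the user confirms a previewed selection, reloaded on later sessions.
function saveConfig(userId: string, config: DisplayConfig): void {
  localStorage.setItem(`display-config-${userId}`, JSON.stringify(config));
}

function loadConfig(userId: string): DisplayConfig | null {
  const raw = localStorage.getItem(`display-config-${userId}`);
  return raw ? (JSON.parse(raw) as DisplayConfig) : null;
}
```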

Twelve computer-literate dyslexic students from higher education, with an age range of approximately 18-30, were engaged to assist by providing evaluative feedback throughout the development of the system. Evaluative data was gathered by using "think-aloud" techniques, as well as by the use of questionnaires and interviews. Subjects were asked to personalise a display which optimised their subjective ability to read text.

All the users were able to find settings which made reading subjectively easier for them. Selected screen layouts for individual subjects were found to be extremely varied, highlighting the individual nature of the disorder, although generally low colour contrast with a normal sans serif type was selected. All reported that the ability to vary spacing between the characters, words and lines was beneficial.

On the basis of these promising results, a text production and reading version of the software was produced as an add-in to the industry-standard Word (Microsoft, 1995). This prototype also incorporated configurable features to aid fixation, to alleviate reversal problems and to read text aloud from the screen. To enable subjects to use their preference settings yet preview the document as it would print, a WYSIWYG print preview facility was added. This version was evaluated with seven dyslexic users. Findings confirmed those of the first study, and reversal character configuration was reported to be helpful (unexpectedly, also as a fixation aid). A recurring theme of the research has been that the users appeared to be unaware of how easy it was to improve their reading potential by changing visual aspects of the reading environment. Many of the options which were presented can be achieved (in some cases, with difficulty) in most standard word-processing packages, but very few of the subjects had actually tried any adjustments prior to using this system.

Work is now well advanced on the production of a fully working system, "SeeWord", which will be distributed freely in return for participation in a larger-scale study of the effectiveness of changing the visual environment of the computer to enable dyslexics to improve their reading and text production abilities. The research to date has concentrated on subjectively perceived effects on reading and text production ability. The larger-scale evaluation will include objective measures of these important parameters.

REFERENCES
Microsoft (1995) Word for Windows 95. Microsoft Corporation: Redmond.
Willows, D.M., Kruk, R.S. & Corcos, E. (1993) Visual Processes in Reading and Reading Disabilities. Erlbaum: New Jersey.


Design Issues for Interactive Drama

Peter Jagodzinski, Dan Livingstone, Mike Phillips, Tom Rogers and Simon Turley

Human-Centred Systems Design Research Group, School of Computing, University of Plymouth, Plymouth PL4, [email protected]

ABSTRACT
Interactive television is imminent and promises new convergences of computing and the arts. Design models are needed which comprehend artistic and technical goals.

INTRODUCTION
There is a radical change on the horizon which will affect the way in which we see the boundaries of what constitutes "computing". This is the convergence of digital television with broadband network technology and multimedia computing. When the change is in place, forecast to be by 2010 (Loveridge et al, 1995), interactive multimedia will be accessible to everyone in the world who currently watches television. In this way the centre of gravity of interactions with computers seems likely to shift from being mainly concerned with primarily rational activity, such as information processing, towards encompassing those aspects of our lives which also involve our senses and emotions, culture and values, through the arts, entertainment and education. Potentially the range of new roles for computers which will accompany the forecast market shift is at least as wide as the spectrum offered by the existing entertainment and education industries and, arguably, wider because of the new affordances provided by interactivity.

The purpose of this short paper is to describe research which has begun to explore the design of a new form of computing, namely interactive drama. The research is not concerned with the technology but with the way in which humans may become involved with a form of interactive computing which has emotional, cultural, social and informational elements combined. The issue facing designers of this form of software is the need for design models which comprehend an arts paradigm in which human subjective experience is the target effect sought by the designer, rather than conventional quantitative, objectively measured "performance enhancement". The paper concentrates on one example of this new form of computing, that is the use of interactive drama as a medium for accessing and extending human understanding.

DESIGNING FOR SUBJECTIVE EXPERIENCE
As children and adults we learn far more about the world from experience, direct and vicarious, than we do from formal learning. Such everyday learning probably involves the acquisition of facts, concepts and rules, but these may not be made explicit and may also be embedded in a complex assembly of perceptions and experiences from interactions with the physical world, our interpersonal relationships, and our situation within social groups and cultures. Cultural and social norms, attitudes and expectations are formed by means of our direct experience of the world, but are also conditioned by observing and empathising with the experience of others, as well as by anecdote, literature and the performing arts, in particular through the medium which most of us see most of, television. Our individual identity, the person we become, and our stance with respect to the rest of the world are shaped by such experience. Kozma (1991), Laurillard (1993) and Simpson (1994) have described the power of different media forms in delivering learning resources and the psychological processes which they involve. Bandura (1977) has also discussed the idea that the depiction of social events in the mass media can provide the foundation for mental models of cultural and social norms and conventions.

The aim of the research at Plymouth is to explore the potential of interactive drama within a multimedia learning environment (MLE) as a medium for humanistic learning. The domain in question is that of pregnancy and childbirth, emphasising emotional and social issues, but with access to relevant information sources too. This is a field in which prospective parents are often ill-prepared for the consequences of major life-changing decisions and events and can be over-dependent on the caring services to make decisions for them. The aim of the MLE is to empower prospective parents to take control of their own lives. Interactive drama (ID) provides an accessible, engaging means of enabling parents to experience vicariously, and thus to rehearse and anticipate, the problems, conflicts, anxieties and threats which can be part of childbirth.

The design of the MLE tracks the nine months of pregnancy, highlighting points at which crucial issues arise. For example, a decision has to be made at about week 16 as to whether an amniocentesis should be carried out to test for a number of possible genetic disorders, but this carries the risk of causing a miscarriage. ID portrays the emotional, social and family issues associated with various possible outcomes arising from the different choices that parents could make. Conventional hypermedia links are also used to provide factual information on foetal development, physiological changes, and medical and care issues. Within the ID, interactivity provides access at four levels: version of events; point of view of characters in the drama; internal monologues of the characters; expansion of information by hypermedia. Design and production issues include the requirement for several versions of each scenario to be scripted, acted, directed, videoed and edited, as well as the need for software engineers, human factors specialists, domain specialists and multimedia artists to collaborate. With such diversity of backgrounds, creating a shared vision and complementary goals is a difficult yet vital role for the producer. The research is described in more detail by Jagodzinski et al (1998).

CONCLUSIONS
Interactive drama requires the convergence of design models from HCI, computer games, drama, film and art. Interactivity adds new affordances to these traditional media models and new design paradigms are beginning to emerge, in particular from the young field of interactive multimedia design.

ACKNOWLEDGEMENT
This work was funded by the HEFCE QR initiative.

REFERENCES
Bandura, A. (1977) Social Learning Theory. Englewood Cliffs, NJ: Prentice-Hall.
Jagodzinski, A.P., Livingstone, D., Phillips, M., Rogers, T. & Turley, S. (1998) Transforming perspectives with interactive drama. Submitted.
Kozma, R.B. (1991) Learning with media. Review of Educational Research, 61, 179-211.
Laurillard, D. (1993) Rethinking University Teaching: a framework for the effective use of educational technology. London: Routledge.
Loveridge, D., Georghiou, L. & Nedeva, M. (1995) United Kingdom technology foresight programme delphi survey: a report to the Office of Science and Technology. PREST, University of Manchester.
Simpson, M.S. (1994) Neurophysiological considerations related to interactive multimedia. Educational Technology Research and Development, 42, 75-81.


Support for Meeting People on the Internet

Jun Kakuta, Kazuki Matsui and Hiroyasu Sugano

Information Service Architecture Laboratories, Fujitsu Laboratories Ltd., 64 Nishiwaki, Ohkubo-cho, Akashi 674-8555, Japan

ABSTRACT
We propose a profile card communication service to support Internet communications. Profile cards can be attached to web pages and distributed among people on the Internet. This gives us more opportunities to get to meet people while preserving privacy.

KEYWORDS: Instant Message, vCard, Distribution of Profile, Privacy

INTRODUCTION
There are millions of people on the Internet; however, it is not so easy to meet those we feel to be compatible with. Directory services may help us find communication partners, but they provide little information about them. We have to go to their web page, if they have one, to determine whether they are the kind of person we look for, and then e-mail them to initiate communications. We think, however, that e-mail has inherent barriers for starting communications. In this paper, we describe a service that enables us to easily initiate communications while providing a "profile card" of the person, which can be used for subsequent contact and can be distributed to other people on the Internet.

REQUIREMENTS FOR OUR SERVICE
We think that finding compatible people and sending messages easily can reduce the barriers involved in initiating communications. Web pages provide various kinds of information about page owners, allowing us to decide whether they are to our liking. We consider the ICQ web-pager panel (Mirabilis, 1997) to be pretty good, as it enables page visitors to send messages easily without any other applications. We also focused on this potential and, in addition, think that distributing profiles will provide more opportunities to meet compatible people. Based on these considerations, we propose a profile card communication service to satisfy two requirements: ease in starting communications, and distribution of profiles to meet more compatible people.

PROFILE CARD COMMUNICATION
A profile card is a visual electronic card with a send-message feature and includes various kinds of profile information. It was implemented by extending vCard, the proposed standard electronic card format for use on the Internet. The next sections describe three features of our proposed service using profile cards.

FUN TO CREATE
We prepared the card layout using the standard extension method as stipulated by the vCard specifications. This enables card creators to freely arrange information on the card and choose font attributes for the text, so that they can create original cards easily. This stimulates creativity and makes it fun. Profile cards can be used as a visual representative of an individual on web pages, in e-mail and so on.
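As a rough illustration of what such a card might contain (the X- property names, layout syntax and values below are our own assumptions, not the authors' actual vCard extensions), a profile card could be an ordinary vCard carrying extra profile and layout properties:

```typescript
// A hypothetical profile card, built on the vCard format with extension properties.
const profileCard: string = [
  "BEGIN:VCARD",
  "VERSION:3.0",
  "FN:Hanako Yamada",
  "EMAIL:hanako@example.org",                    // the only field shown to casual visitors
  "X-PROFILE-HOBBY:sailing;photography",         // extra profile information
  "X-CARD-LAYOUT:background=coral;font=serif",   // creator-chosen card appearance
  "X-SEND-MESSAGE-URL:https://cards.example.org/send?card=hanako",
  "END:VCARD",
].join("\r\n");
```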


EASY TO START COMMUNICATION
We added a send-message feature to the card, which enables web page visitors to start communications easily. All they need to send a message is the card embedded in a web page. Messages are processed by a server, and messages are stored on this server if the card's creator cannot immediately receive them. Later, when the creator logs on to the server, they can retrieve the messages that were stored during their absence. Visitors can get cards from web pages and store them in a card folder application. As described above, the profile card itself has a feature that enables it to be used for communication. Collecting cards can greatly enhance our motivation to communicate with others, and we therefore think it is one of the important aspects of our service.

GUARANTEE OF PRIVACY
To ensure privacy, the server should construct the contents of cards from information that is limited by the qualifications of the requester. Although the card embedded in a web page generally includes a limited profile, perhaps only an e-mail address, other profile information should be retrieved from the server. The server determines whether the requester has adequate qualifications for retrieving a profile; only after it judges the qualification to be proper does it agree to the request and reveal the profile. The presence or absence of the card's creator will also be shown, if allowed by the card's creator. Everyone can forward profile cards to a third person. When someone gives a card to a third person, the server should always add a distribution history property to the card information. This makes it possible to communicate with any person in the distribution history. In this way, our service offers users privacy and flexibility, and provides more possibilities to meet more compatible people. Figure 1 shows a snapshot of our service.
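A minimal sketch of the two server-side rules just described (the data shapes, qualification levels and function names are assumptions for illustration, not the authors' protocol):

```typescript
// One profile field, visible only to requesters at or above a qualification level.
interface ProfileField { name: string; value: string; minimumQualification: number; }

interface StoredCard {
  owner: string;
  fields: ProfileField[];
  distributionHistory: string[];   // everyone the card has been forwarded through
}

// Reveal only the fields the requester is qualified to see
// (e.g. an anonymous web visitor might see just the e-mail address).
function viewCard(card: StoredCard, requesterQualification: number): ProfileField[] {
  return card.fields.filter((f) => requesterQualification >= f.minimumQualification);
}

// Record every forwarding step, so the owner can be reached via anyone
// in the distribution history.
function forwardCard(card: StoredCard, forwarder: string): StoredCard {
  return { ...card, distributionHistory: [...card.distributionHistory, forwarder] };
}
```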

Figure 1: Snapshot of Profile Card on a Web Page and Card Folder Application

CONCLUSION
We pointed out that sending messages from web pages could eliminate the barriers to initiating communications with a stranger. We also think that distributing profiles would broaden circles of friends. With these considerations in mind, we proposed the profile card communication service described in the previous sections. This service, if it functions as we expect, would eventually make both cyber life and daily life more enjoyable. We have just started developing a prototype of the service. We will conduct experiments to verify the effects of the profile card communication service.

REFERENCE
Mirabilis Ltd. (1998) ICQ Web-Pager Panel: Enable Your Visitors to Contact You From Your Site. http://www.mirabilis.com/webpanel/index.html


Usability Requirements for Virtual Environments

Kulwinder Kaur, Alistair Sutcliffe and Neil Maiden

Centre for HCI Design, City University, Northampton Square, London EC1V 0HB, UK

ABSTRACT
This paper reports the development of usability requirements for virtual environments and the results of a study to evaluate their impact on interaction success.

INTRODUCTION
Virtual environments (VEs) provide a computer-based interface representing a real-life or abstract 3-dimensional space. VEs offer new possibilities and challenges to human-computer interface design; however, major interaction problems have been found with current VEs, such as disorientation and difficulty finding and understanding available interactions, which result in user frustration and a low acceptability for the VE (Kaur et al., 1996). Guidance is needed on interaction design for VEs to avoid such usability problems.

THEORETICAL RESEARCH
To inform interaction design guidance, models of interaction for VEs were developed by elaborating Norman's (1988) general cycle of interaction. The models consist of twenty-one inter-linked stages of activity, describing task-based, exploratory and reactive behaviour to system-initiated events. The models were evaluated in user studies of interaction behaviour, using verbal protocol analysis (see Kaur et al., in press). The models were then used to define design properties required to support the user during identified stages of interaction. The properties (46 in total) cover various aspects of a VE: the user task, spatial layout, viewpoint and user representation, objects, system-initiated behaviour, actions and action feedback. For example, the property identifiable object states that an object should be easy to identify or recognise where copied from real-world phenomena.

STUDY METHOD
A controlled study was carried out to evaluate the impact of the design properties on interaction success. Eighteen subjects took part in the study and were balanced, according to experience, into a Control (C) and an Experimental (E) group. The control group was given the original version of a virtual business-park application and the experimental group was given a version with some of the missing design properties implemented. For example, Figure 1 shows changes made to the application to implement the properties distinguishable object, identifiable object, clear navigation pathways and declared available action. In the amended version, objects such as walls sharing an edge were made more distinguishable by using textures to emphasise edges; exit doors were made easier to identify by labelling them; areas that the user could not navigate into were marked using 'no-entry' signs (which appeared on approach); and actions to provide information about basic facilities (e.g. lighting) were clearly cued with information signs.


Figure 1: Implementation of four design properties: distinguishable object (top), identifiable object (bottom left), clear navigation pathways (bottom left) and declared available action (bottom right).

Subjects interacted with the virtual business-park to complete nine varied tasks, including familiarisation and exploration, investigating windows, opening a loading bay and comparing toilets in a building. Subjects were asked to provide a concurrent, 'think-aloud' verbal protocol, and their interaction sessions were video-recorded. Following interaction, subjects completed a memory test on the business-park.

RESULTS AND DISCUSSION
There was a general improvement in interaction with use of the amended version (one-tailed t-tests used for the following statistics). The experimental group encountered significantly fewer usability problems (p<0.01; avg. C=134, E=45 problems per subject) and successfully completed significantly more tasks (p<0.01; avg. C=7, E=8.4 tasks). The experimental group also achieved higher scores for the memory test, and this difference approached significance (p=0.064; avg. C=46%, E=52%).

The results are encouraging and show a 66% reduction in usability problems, leading to subjects being able to complete their tasks better and gain more useful information during interaction. The results appear to indicate that the proposed design properties are important requirements for successful interaction, and that a VE interface can be significantly improved by implementing missing design properties. Our future work involves refining the set of design properties in light of detailed results and translating them into concrete guidelines for VE designers. Our overall goal is to address problems of interface design for VEs, using interaction modelling as a theoretical base.

ACKNOWLEDGEMENTS
We thank VR Solutions and The Rural Wales Development Board for loan of the test application, and the EPSRC for funding.

REFERENCES
Kaur, K., Maiden, N. & Sutcliffe, A. (1996) Design practice and usability problems with virtual environments. In Virtual Reality World '96 Conference, Stuttgart, Proceedings (IDG Conferences).
Kaur, K., Maiden, N. & Sutcliffe, A. (in press) Interacting with virtual environments: an evaluation of a model of interaction. Accepted for publication in Interacting with Computers: VR special issue.
Norman, D.A. (1988) The Psychology of Everyday Things. New York: Basic Books.


Context and Frequency of Use in ATMs: Change over a Decade

Patrick J. O’Donnell, G.E.W. Scobie and Margaret Martin

Psychology, University of Glasgow, Glasgow G12 8QQ, United Kingdom

ABSTRACTChanges in the context of use of ATMs over a decade are reported.

KEYWORDS: ATM, cash machine, user behaviour, usage context, evaluation.

INTRODUCTION
Problems exist in the use of ATMs as platforms for the delivery of a range of consumer services. Despite the capacity of the machines to deliver deposit facilities, ticket booking or financial products, their spread has been less pervasive than the generation of cash-only ATMs (Hatta and Iiyama 1991). This paper describes a time-phased cross-sectional study which we conducted with NCR. Essentially it consists of two telephone surveys separated by eleven years, 1987 and 1998. In the mid-1980s somewhere between 40% and 60% of bank account holders were non-users of ATMs. Increasing the levels of use among bank customers was seen as a priority among banks and manufacturers. Initial theorising was in terms of technophobic attitudes, lack of awareness, perceived unreliability and human factors problems.

METHOD:
A telephone survey of 306 individuals, selected on the basis of a stratified random sample of the UK population, was conducted in 1987 (95% response rate). Questions covered demographic variables, plus 82 questions on attitudes to technology, human factors issues with ATMs and patterns of ATM use. In 1998 a more restricted set of questions was asked of a stratified sample of 166 people drawn from the UK population (89% response rate).

RESULTS AND DISCUSSION:

The 1998 sample was compared with that of 1987 on the dimensions of social class, age and sex. None of the individual chi-square comparisons were significant, indicating that both are random samples from the same population. The total combined samples were compared with population norms and showed a slight overrepresentation of women and older age groups. A series of hypotheses about the determinants of general satisfaction, frequency of use and the user/non-user distinction were investigated. Combining both samples, the role of human factors issues in non-use was examined by correlating the problem questions with frequency of use among the user group. In fact, frequency of use correlated positively with human factors issues, e.g. screens dirty, difficult to read, problems with card insertion and retrieval, height of screens, difficulty following instructions and damaged cards. Answers to these questions were added to produce an ergonomics satisfaction index, which was correlated with frequency of use. Regular users reported more problems with ATM use (r=.43, p<.01). The gradient of entry into user status seems to be a steep one, with people either becoming regular users quickly or giving up at the first hurdle. In very few people could their pattern of non-use be attributed to bad experiences with machines.

Ergonomic problems were not trivial. Height of screens was rated a problem by 23%, card insertion by 18%, and 28% had trouble with the instructions. Within the previous three months 12% had forgotten their PIN, 24% had found the machine either out of cash or not working, and 12% had trouble finding a machine. However, the advantages of cash availability outweigh the faults. Attitudinal measures were related to use, but weakly. Non-users saw ATMs as more unreliable (t=2.2, p<.05), thought they made mistakes (t=2.34, p<.05) and worried about the probability of card loss or theft (t=2.8, p<.05). The strongest predictor of use was the question 'Do you ever need access to cash after banking hours?', with non-users saying they have no real need for the facility (t=3.1, p<.01). Fully 92% of the sample of non-users said they personally felt they had no need for the facility. Yet on general attitudinal questions such as 'Are ATMs in general a good thing?' 82% agreed.

The differences in response between the two time periods are the most instructive. The decade saw the extensive linkage of networks, more penetration of ATMs into supermarkets, malls and office locations, and a sustained attempt to make the machines as ubiquitous and as dependable as the telephone. The early survey showed that only 27% of users used the ATM once or more per week and that 53% got cash by other means on a regular basis. Only 34% would go for a night out on the assumption they could get money, and only 22% were confident of finding a machine in a strange town. The impression is of a view of ATMs as an adjunct to the normal pattern of cash requests from a teller. But by 1998 the percentages had changed to 42%, 18%, 62% and 65%. These results on relevant questions are comparable to an American sample (Rogers et al. 1996). The ATM is well on the way to being an invisible mechanism by which cash is routinely and dependably available. Moreover, while in 1987 65% wanted a protected lobby, 72% wanted availability in supermarkets and cinemas, and 92% at places of work, few would have achieved that experience then; now 55% of users have used such a location in the last month. The term mimesis was coined to describe the delivery of a wish by a transparent and unattended mechanism. One subgroup of users, 4% in the early sample and 18% in the recent one, use ATMs regularly more than once a day (not, of course, every day). In this pattern of use the ATM has taken on the role of an external wallet, or rather the bank is now an extension of the individual's purse. The most interesting focus is on the willingness to use new facilities. Few people expressed an interest in these in 1987, but in the sample as a whole willingness to use ATMs for at least ticket purchase (85%), insurance information (51%) and some insurance purchase (38%) has gone up markedly. The change in use and perception is partly due to extra-machine factors such as availability, networking and dependable support.

REFERENCES:
Hatta, K. & Iiyama, Y. (1991) Ergonomic study of ATM operability. International Journal of Human Computer Interaction, 3, 259-309.
Rogers, W.A., Cabrera, E.F., Walker, N., Gilbert, D.G. & Fisk, A.D. (1996) A survey of ATM usage across the adult life span. Human Factors, 38, 156-166.


SiteSeer: An Interactive Treeviewer for Visualizing Web Activity

Eric Sigman, Robert Farrell and Mark Rosenstein

Bellcore, 445 South St., Morristown, NJ 07960, U.S.A.

ABSTRACT
SiteSeer is an interactive visualization tool designed to support the work of web site analysts in such tasks as understanding web site traffic patterns and effective advertisement placement. SiteSeer integrates visualization of the content, structure and utilization aspects of a web space.

KEYWORDS: Visualization, World-Wide Web, Data Filtering, Fisheye View,Traffic Analysis

INTRODUCTION
The growth of commercial web sites has resulted in the need for a tool to aid in monitoring and restructuring a site and introducing new content into it. These tasks depend on the user's knowledge of the content, the organization and the current site visitors' usage patterns. SiteSeer is a prototype visualization tool designed for web analysis. In essence, the tool is an interactive tree viewer that uses a variety of techniques for visual emphasis, focusing and information filtering. This paper describes the tool and its application to two tasks: understanding site traffic patterns and the effective placement of advertisements.

EARLIER WORK
SiteSeer is an outgrowth of our work with AMIT (Animated Multiscale Interactive Treeviewer) (Wittenburg & Sigman, 1997a, 1997b), and retains many of its basic features. AMIT is a tool aimed at integrating search and browsing on the World-Wide Web. It presents a web space as a tree structure. Font scaling and tree pruning are used to provide multifoveal fisheye views (Furnas, 1986), and animation provides transitions between the user's customized views. AMIT has been deployed for a web space of over 12,000 documents (http://www.apparent-wind.com/sailing-page.html).

Initially, an off-line "web walker" collects documents by following the outgoing link structure from a designated root node. The walker generates a directed graph of that space, and the system then represents this graph as a tree structure. In AMIT, the titles of these documents are presented as nodes in a tree. The text collected by the web walker is indexed by the Latent Semantic Indexing (LSI) (Deerwester et al., 1990) module. At runtime, a user's query to an LSI-based search engine returns a list of document hits along with relevancy scores. AMIT generates a view of the tree pruned to show the hits exceeding a threshold; the relevancy scores are reflected in the font size used to render the node. Users customize the tree view through direct manipulation. For example, users can select a set of nodes as foci for a succeeding view. The new view will be reduced to the selected nodes and their paths to the root.
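The prune-and-scale step described above is easy to sketch. In the sketch below the tree type, threshold handling and point sizes are our own assumptions, not AMIT's implementation; the point is only that a node is kept when it is a hit or lies on the path from the root to a hit, and that font size grows with relevancy:

```typescript
// A document node in the browsing tree, with an LSI relevancy score in [0, 1].
interface TreeNode {
  title: string;
  relevancy: number;
  children: TreeNode[];
}

// Prune the tree to hits above the threshold, keeping their paths to the root.
function prune(node: TreeNode, threshold: number): TreeNode | null {
  const keptChildren = node.children
    .map((c) => prune(c, threshold))
    .filter((c): c is TreeNode => c !== null);
  if (node.relevancy >= threshold || keptChildren.length > 0) {
    return { ...node, children: keptChildren };
  }
  return null;
}

// Map a relevancy score onto a font size for rendering the node title.
function fontSizePt(relevancy: number): number {
  const minPt = 8;
  const maxPt = 24;
  return minPt + (maxPt - minPt) * Math.max(0, Math.min(1, relevancy));
}
```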

SITESEER FOR WEB SITE ANALYSIS
SiteSeer extends AMIT to encompass a repository of traffic and ad presentation data. This data is posted against the tree of hyperlinked documents. For example, heavily trafficked documents are represented by larger nodes. In this way, regions or pathways with high utilization become readily apparent. Often, users are concerned with characteristics of the traffic, such as the originating site, type of site, or day of the week. SiteSeer provides easy-to-use filters to extract this data through point-and-click dialog boxes.

SiteSeer was applied to a site where advertisements were dynamically served to visitors. In this case, analysts want to know both how frequently ads are viewed throughout the site and ad effectiveness as measured by the number of visitors clicking on the ad banner. Typically, these analysts are seeking optimal advertisement placement. A visualization that combines structure and traffic supports this task. The user can formulate queries to filter ad data by various parameters, including those available to filter the traffic data.

An important feature is the sequential visualization of a query chain. Here, a view of the tree that results from a query can serve as input to a subsequent query. For example, a user could first query for the most frequently accessed documents, and then, holding that structure fixed, query for ad data that would be overlaid on the current view of the tree. An even more interesting example utilizes the LSI search engine. Since LSI rates the similarities among the documents, it is possible to create a view based on documents that have similar content. A subsequent chained query can then be overlaid on this view. Thus, for example, a user could request pages with sports-related content, and then overlay traffic data to discover promising regions of the site for placing sneaker ads.
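A chained query of this kind can be pictured with a small sketch (ours, not SiteSeer's actual interface or API); the page names, hit counts and ad-click figures below are invented for illustration.

page_hits = {"/home": 5400, "/sports": 3100, "/sports/shoes": 2900, "/contact": 120}
ad_clicks = {"/home": 45, "/sports": 80, "/sports/shoes": 95, "/contact": 1}

# Query 1: restrict the view to the most frequently accessed documents.
view = {page for page, hits in page_hits.items() if hits >= 1000}

# Query 2: hold that structure fixed and overlay ad data on it.
overlay = {page: ad_clicks.get(page, 0) for page in view}

# Promising regions for placing the ads, ranked by clicks within the fixed view.
print(sorted(overlay.items(), key=lambda kv: -kv[1]))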

From the experience with SiteSeer, perhaps the most interesting future direction is to consider the issues of "path analysis." SiteSeer is limited to showing access to documents in a structural path, but does not show actual traversal behaviour of visitors. A visualization that shows these traversals would likely answer questions on how people are actually navigating the site and help improve the site for visitors.

REFERENCES
Deerwester, S., Dumais, S.T., Landauer, T.K., Furnas, G.W. and Harshman, R.A. (1990) Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6).
Furnas, G.W. (1986) Generalized fisheye views. In Proceedings of Human Factors in Computing Systems, CHI '86 (Boston, MA, April).
Wittenburg, K. and Sigman, E. (1997a) Integration of Browsing, Searching, and Filtering in an Applet for Web Information Access. In Proceedings of Human Factors in Computing Systems, CHI '97 (Atlanta, GA, March).
Wittenburg, K. and Sigman, E. (1997b) Visual Focusing Techniques in a Treeviewer for Web Information Access. In Proceedings of the IEEE Symposium on Visual Languages, VL '97 (Capri, Italy, September).


Cognitively Engineering Coordination in Emergency Management

Adam Stork, Tony Lambie and John Long

Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK

ABSTRACT
The Hidden (1992) investigation into the Clapham Junction train accident recommended that the coordination of the emergency services be improved through training. This research proposes a framework for developing Cognitive Engineering design knowledge to support effective training of coordination between the emergency services. This paper describes this framework and illustrates it by outlining a conception of training and a conception of coordination mechanisms.

KEYWORDS: Cognitive Engineering, Coordination, Emergency Management, Training

ENGINEERING
Engineering distinguishes between research which is exploratory, but might (or might not) be of value to design knowledge, for example science knowledge, and research which is intended to acquire validated 'engineering' knowledge which directly supports the design of effective systems.

In this research the aim is to develop validated engineering knowledge to support the design of effective training systems for coordination in emergency management. This short paper describes and illustrates a framework to attain this research aim. The illustrations are drawn from research which analysed the coordination between the emergency services in the Severn Tunnel train accident.

DEVELOPING COGNITIVE ENGINEERING DESIGN KNOWLEDGE
To be validated (and, therefore, effective), cognitive engineering design knowledge needs to be (Dowell and Long, 1989): conceptualised with respect to effective HCI design; operationalised with respect to this conception; generalised with respect to this conception; and tested to ensure likelihood of successful application.

The research started with the following informal expression (IE) of some (substantive) cognitive engineering design knowledge with potential (e.g. Hidden, 1992) to be validated.

'Computerised training in emergency management should train those coordination mechanisms that can be identified in emergency management ineffectiveness'

To attempt to validate the IE, the project: enhanced a conception of (substantive) cognitive engineering design knowledge (Stork and Long, 1994, based on Dowell and Long, 1989); conceptualised in detail additional components required by the IE, particularly 'training', 'coordination mechanisms', and 'emergency management ineffectiveness'; 'proto-operationalised' the emergency management component of the IE by analysing coordination incidents in the Severn Tunnel rail accident (analysis of training came later in the project); and generalised over the proto-operationalisations. 'Proto-' indicates that the Severn Tunnel rail accident is not a design situation, and that the IE would still need to be operationalised and tested in design. The conceptions of training and coordination mechanisms are outlined below to illustrate this framework.

CONCEPTION OF TRAINING
[Figure 1: Conception of training. The diagram shows a training worksystem comprising a trainer, a trainee and computer(s), with its domain, alongside the domain and worksystem to be improved by training.]
Figure 1 shows the conception of training graphically. The current and desired domain and worksystem to be improved by training are included to permit the performance of the training to refer to the current and designed performance of the worksystem to be improved by training, in this case the coordination of the emergency services.

CONCEPTION OF COORDINATION MECHANISMS
Coordination mechanisms were conceptualised as having a content and an attitude component. The content component was conceptualised as: what was intended to be conveyed (for example, the location of the train accident); and its expression (for example, the phrases used during a telephone call). The attitude component was conceptualised as: the rôle of the agents during the coordination (their rank, technical expertise, and technical content); the manner of the coordination (whether gesture or verbal); and the plan or task knowledge (its representation and process) of the agents during the coordination.
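Purely as an illustration of the decomposition just described (it is not part of the authors' framework; the field names and example values are our own assumptions), the two components might be recorded as a simple data structure:

from dataclasses import dataclass

@dataclass
class Content:
    intended: str      # what was intended to be conveyed
    expression: str    # how it was expressed, e.g. the phrases used in a call

@dataclass
class Attitude:
    role: str          # rank / technical expertise of the agents
    manner: str        # e.g. "verbal" or "gesture"
    plan: str          # the agents' plan or task knowledge

@dataclass
class CoordinationMechanism:
    content: Content
    attitude: Attitude

incident = CoordinationMechanism(
    Content("location of the train accident", "phrases used during a telephone call"),
    Attitude("control-room officer", "verbal", "site access plan"),
)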

CONCLUSION
Application of the framework to developing cognitive engineering design knowledge is ongoing and encouraging.

ACKNOWLEDGEMENTS
Research funded by the ESRC Cognitive Engineering programme.

REFERENCES
Dowell, J. & Long, J.B. (1989). Towards a conception for an engineering discipline of human factors. Ergonomics, 32(11), pp. 1513-1535.
Hidden (1992). Report into the Clapham Junction accident. HMSO.
Stork, A. & Long, J. (1994). A Planning and Control Design Problem in the Home: Rationale and a Case Study. In: Proc. Intl. Working Conference on HOIT, K. Bjerg and K. Borreby (eds). University of Copenhagen, Denmark.


System Support for Rapid Prototyping of Collaborative Internet Information Systems

Michael Swaby, Peter Dew, David Morris and Gyuri Lajos

Centre for Virtual Working Environments, Internet Computing Group, School of Computer Studies, University of Leeds, LS2 9JT, [email protected]

This work is investigating mechanisms for user centred design of internet-based collaborative information systems. Of key importance is the design of metaphors through which users interact with these information systems. This paper summarises progress with DiMe (Display Metaphor), an object-oriented scripting language for the automated creation of virtual working environments [Morris et al. 1997]. DiMe enables modelling of user-to-user and user-to-system interaction within collaborative application scenarios. DiMe then supports the generation of Web user interfaces from these models, incorporating integrated access to domain information sources and groupware tools. We have found DiMe to be a useful design tool in developing prototype collaborative systems before crystallising a final implementation.

The DiMe language is used to specify HTML and VRML user interfaces using a recursive container-component approach. Atomic 2D and 3D Web interface components (e.g. an HTML heading or VRML sphere) are synthesised using inheritance and overriding to form increasingly complex compound metaphors (e.g. an HTML page containing frames, forms etc., or a VRML office containing desks, chairs and other objects). The DiMe system architecture is shown in Figure 1. The architecture is based upon a standard 3-tier client-server model, consisting of information management, application and presentation layers. Each application user is assigned a personal DiMe interpreter and metaphor set. This enables interfaces to be customised for particular roles, although many metaphors may, of course, be shared between roles.

DiMe integrates data access within metaphors by supporting query resolution via a runtime information manager. The information manager is responsible for federating underlying data sources into an entity-relationship graph that is made available to DiMe via a uniform interface. Metaphors may also trigger events within applications, e.g. execution of a groupware session upon selection of an object on a page or in a scene. A typical DiMe metaphor describes how to generate a particular user interface for an information (or application) object within a specified interaction style. For example, Figure 2 shows an office metaphor used to provide an interface to a person information object within a 3D interaction style. Here, the $ token denotes information object access and the Action syntax denotes invocation of application operations (in this case, groupware tool execution).

Once a DiMe application has been constructed, it may be reconfigured rapidly by modifying the collection of DiMe metaphor scripts that define its user interfaces. Changes in the underlying information model can be accommodated dynamically by updating the information manager and recoding data access metaphors for those users affected by the change. DiMe has been used to produce prototype HTML and VRML Web interfaces for collaborative design and virtual consultancy scenarios. A library of several hundred re-usable DiMe metaphors has been built up through the development of such applications. Future work aims to develop a toolset around the DiMe architecture and metaphor library to provide a modelling and design environment for collaborative internet information systems.


Figure 1: DiMe system architecture

define OfficeMetaphor
    extends RoomMetaphor
    used_to_display PersonObject
    in_style 3D
{
    set floor to_contain {
        add desk1 as Desk(name=$user) at 2,3
        add chair1 as Chair() at 3,3
    }
    set desk1 to_contain {
        add tele1 as Telephone() at 0,0
            Action select call $user audio
        add video1 as Videophone() at 1,1
            Action select call $user video
    }
}

Figure 2: DiMe metaphor example

ACKNOWLEDGEMENTS
The authors would like to thank Richard Drew, Neil Hunter, Diane Willows, Greg Platt, Thorsten Blaise and Rik Wade, who have made valuable contributions. Michael Swaby would like to acknowledge support from EPSRC and BT Laboratories.

REFERENCES
Morris, D.T., Lajos, G., Dew, P.M., Drew, R.S. and Willows, D. (1997) DiMe: an object oriented scripting language for the automatic creation of virtual environments. In Proceedings of Eurographics UK Conference, April 1997.


Towards continuous usability evaluation of web documents

Yin Leng Theng, Gil Marsden and Harold Thimbleby

School of Computing Science, Middlesex University, Bounds Green Road, London N11 2NQ
{y.theng, g.e.marsden, h.thimbleby}@mdx.ac.uk

ABSTRACT
To ensure continuous usability evaluation of web documents, this paper proposes integrating evaluation methods and techniques into practical authoring tools.

KEYWORDS: web, usability evaluation, authoring tools

INTRODUCTION
As the use of the Internet and the web grows, scalability refers not only to the handling of the increased number of servers, but also to the handling of the increased number of end-users. Because the web was not designed to handle so many and such large applications, with more and more people using them, there are potential problems associated with the web, of which navigation is one of the more pressing. This view was supported by the results of the 8th Web User Survey by the Graphic, Visualisation and Usability Center conducted over October/November 1997 (Pitkow, Kehoe and Morton, 1997). The report showed that navigation is still a problem (16.7%) despite much research effort being invested to address it, as opposed to the top two problems of data privacy (30.5%) and censorship (24.2%).

Often the development process of a web site follows the six stages described in the well-accepted iterative development lifecycle: (1) feasibility study; (2) conceptual design; (3) building; (4) implementation; (5) integration; and (6) maintenance. Unfortunately iterative design, which usually helps improve systems, has problems (Theng, 1997): (i) there is a lack of a disciplined and systematic approach to designing well-structured web documents to meet end-users' behaviour and navigation needs; (ii) prototypes are not thoroughly tested before being developed into the final system; and (iii) end-users involved in the process experience too little of a system to help make significant design contributions.

If the conventional iterative development process is found lacking, better ways to ensure that good web documents are produced are needed. This could be achieved by involving end-users and taking into account their needs throughout the design process. To ensure continuous usability evaluation of web documents, we propose integrating evaluation methods and techniques into practical authoring tools. These evaluation techniques are categorised under real user testing and non-human user testing (Theng, 1997). Real user testing includes observations, surveys, expert evaluation and experiments, and should be carried out before the system is ready for implementation so that qualitative results and impressions can be obtained. Non-human user testing methods are encouraged as a means to perform evaluation early enough to influence design while it can still change direction. Analytic and heuristic evaluation methods, and executable user models, are some ways of evaluating without requiring the attendance of real users. We believe that if designers were to apply these methods and techniques religiously to the development lifecycle, quality web documents could be achieved.


WORK ON CONTINUOUS EVALUATION OF WEB DOCUMENTS
Various authoring tools have been developed at the School of Computing Science (Middlesex University) as illustrations of our approach to provide better support and more systematic usability evaluation of web documents.

HyperAT: Tool supporting the different modes of usability testing
Apart from the basic editing facilities of creating, editing and saving, embodied within HyperAT is an experimental authoring testbed which allows hypertext designers to carry out different modes of usability testing on the web documents created by HyperAT, all within the authoring environment of HyperAT: (i) structural analysis, which formally analyses the structure of the web documents; and (ii) real user evaluation, which analyses end-users' browsing behaviour based on real users' transaction logs. The ability to toggle between different modes makes testing less cumbersome, and hence more convenient for designers, thereby increasing the chance of creating more usable web documents. The first and second modes of usability testing have been implemented in HyperAT. As future work, we also recommend exploring the potential of using executable user modelling or non-human user testing as the third mode of usability testing.

Tools for site authoring and maintenance of the RSA web site
Three tools have been developed as part of current research into distributed web authoring, whilst building a web site for the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA). These tools allow the production of sets of pages that have a consistent and easily changeable design/layout, achieved by separating design from content. Gentler (Thimbleby, 1997a) was written in HyperCard and was used to create the first 'automatic' RSA web site, using a database of information provided by the RSA. Gentler managed the site from 1995 to 1996, when it was superseded. Siteview (Thimbleby, 1997b) was written in Java and built on the ideas of Gentler, using content or 'source' files whose presentation was controlled by a number of design files, thereby allowing quick and reliable alterations to the visual appearance of whole subsections of a web site. Siteview proved useful as it provided semi-automatic generation of certain site elements, such as consistent navigation bars, link checking and a graphical representation of the site structure. Building on the ideas introduced by Gentler and Siteview, we have more recently developed a third tool known as StyleGeezer. This tool concentrates on delivering the core features of the earlier tools in a simpler and more usable way. Using these tools, designers can easily recreate the site, making global changes to the design - for example, changing the background colour across the whole site without editing each individual page. This ensures consistency and makes the maintenance task simpler, thus helping designers to manage the complexity of the design and maintenance processes. Future work includes validating these tools with different types of designers (e.g. novice, intermediate, experienced).

REFERENCES
Pitkow, J., Kehoe, C. and Morton, K. (1997), GVU's WWW User Surveys, http://www.cc.gatech.edu/gvu/user_surveys/survey_1997_10
Theng, Y.L. (1997), "Addressing the 'lost in hyperspace' problem in hypertext," PhD Thesis, Middlesex University.
Thimbleby, H. (1997a), "Gentler: A tool for systematic web authoring," in Buckingham Shum, S. and McKnight, C. (eds.), Special issue of International Journal of Human-Computer Studies on 'Web Usability'.
Thimbleby, H. (1997b), "Distributed web authoring," WebNet'97 World Conference of the WWW, Internet and Intranet, Canada.


Nonspeech Audio in Television User Interfaces

Richard van de Sluis, Berry Eggen and Jouke Rypkema

Philips Research Laboratories, Prof. Holstlaan 4, 5656 AA Eindhoven, The Netherlands
[email protected]

INTRODUCTION
This study explores the end-user benefits of using nonspeech audio in television user interfaces. An Electronic Programme Guide (EPG) served as a carrier for the research. An EPG provides information about current and coming television programmes (Westerink et al., 1998). By using categories, like sports, news, series, etc., the user can search with a certain focus for a TV programme (see Figure 1). For our purposes, the graphical representation of the eleven TV programme categories was extended by adding 'category sounds'. Category sounds are auditory icons representing a specific category of TV programmes. When the user scrolls through the EPG categories, the category sounds provide auditory feedback that is intended to carry the same information as the visual icons, e.g. news (sound of a gong), sports (sound of a cheering audience), or films (part of the tune of a James Bond film). Since the EPG offers the possibility to set a reminder for a coming TV programme, the category sounds are also used as auditory reminders indicating that a TV programme of this category is about to start. Furthermore, certain characteristics of the category sounds are manipulated to represent the urgency of a reminder. For instance, a reminder for a TV programme which starts in 30 minutes seems to come from a source that is spatially further away than a cue for a programme that is going to start within five minutes.

EXPERIMENT 1: USABILITY OF CATEGORY SOUNDS
The first experiment investigated how well users can learn category sounds during 'normal' EPG use and whether they can exploit this knowledge to identify auditory reminders indicating the kind of TV programme that is going to start. Twenty subjects participated in the experiment.

Phase one of the experiment focused on learnability. First, it was tested whether users could learn the category sounds during normal use of the EPG (unintentional learning), without explicit instructions to learn the sounds. They had to perform seven tasks exploring control of the EPG. These tasks were reasonably simple, e.g., "try to find an interesting documentary and set a reminder for it". When all tasks were completed, the category sounds were presented in random order (audio-only) and subjects were asked to write down the category name of each sound. Subjects who did not have a 100%-correct score after unintentional learning were instructed to use the EPG to listen to the category sounds again in order to learn them all (intentional learning). A second test measuring correct identification of the category sounds was performed. If the category sounds were still not recognised 100% correctly, the intentional learning task was repeated until, finally, the 100%-correct level was reached.

Phase two measured the effectiveness of the use of category sounds as reminders. Subjects were presented with 22 short fragments of TV programmes. At the beginning of each fragment a category name was shown on the screen simultaneously with a vocal presentation of this category name. Subjects were instructed to say this category name aloud as soon as they heard the corresponding 'target' category sound. In each fragment, three or four sounds were played, one or two of which were target sounds. At the end of each fragment a question was asked to verify whether subjects had really been watching the TV fragment. The questions appeared on the screen and subjects were instructed to answer them aloud. In phase three of the experiment, the satisfaction of subjects with the category sounds was investigated in an interview.

Learnability: The average number of recognised sounds in the 'unintentional learning' test was 7.1 (standard deviation 1.8) out of 11 sounds (65%). The category sounds for news and sports were easily recognised. All subjects correctly identified the sounds for these categories. The category sounds for talkshows, comedies and films were also well identified. Magazines, gameshows and hobby programmes were most difficult to recognise. Only one subject recognised all category sounds in the unintentional learning test. The other 19 subjects performed an additional 'intentional learning' test, after which the average score was 96%. It took at most 2 extra learning sessions of 1-3 minutes to learn all the category sounds.

Figure 1: Examples of the visual category icons (news, magazines, hobby).


Effectiveness: Subjects were able to detect almost all auditory reminders correctly while watching TV. The average number of target sounds detected correctly was 25.3 out of 26 (97%). The average number of correctly answered questions at the end of each fragment was 20.6 out of 22 (94%), which indicates that subjects had been attending to the TV programmes.

Satisfaction: About 75% of the subjects were positive about the use of category sounds as auditory feedback in the EPG. However, some subjects said that the category sounds could become irritating and that their use should therefore be made optional. The greater part (80%) of the subjects were positive about the use of category sounds as a reminder and found it useful that the reminder also indicated the category of the TV programme.

EXPERIMENT 2: ENCODING URGENCY IN REMINDERS
In the second experiment it was explored whether the urgency of a reminder can be encoded into the category sound. For example, a reminder might occur 30 minutes, 5 minutes and 10 seconds before the programme actually starts. In this experiment the distance between listener and sound source in space is used as a metaphor for distance in time. When the distance to a sound source alters, the acoustic characteristics of the sound change (Nielsen, 1991). People are quite sensitive to these changes. Maybe the audible information about the distance of a sound source can inform the user about the distance in time as well. Two variables were tested: the intuitiveness of the sound-distance metaphor, and the effectiveness with which subjects were able to decode the urgency of the reminder. Seventeen subjects participated in the experiment. For each of the category sounds, three versions were used: a 'nearby', a 'half far' and a 'far away' category sound. To achieve this, three sound parameters were manipulated: overall intensity, intensity of high frequencies and sound reverberation.

In the first task, included to estimate the intuitive understanding of the metaphor, subjects were presented with 11 pairs of category sounds. For each category, the name was vocally presented, followed by the 'nearby' sound and the 'far away' sound of the same category in a randomised order. Subjects were told that the sounds represented TV programmes. They were asked to select the sound that, according to their intuition, represented the programme that would start first. After learning the category names corresponding to the category sounds, the subjects had to perform the second task to estimate the recognition of both category and urgency. In this task, after presenting a sound, subjects immediately had to determine both the category and the urgency, like in a real TV-watching situation. An example was presented of the three urgency variants. In the experimental phase, for each of the eleven category sounds there were three versions, yielding a total of 33 sounds, which were presented in random order.

14 out of the 17 subjects responded 100% correctly to the eleven pairs presented. The overall average of correct responses was 10.5 out of 11 (96%). All subjects were able to determine the category names 100% correctly. Determining the urgency, however, appeared to be somewhat more difficult. Nevertheless, four subjects succeeded in determining the urgency of all 33 sounds correctly. The average correct score was 29.8 out of 33 sounds (90%). Figure 2 shows the bar diagram of the average score. The values at the X-axis represent the actual urgency, that is, the time before the TV programme starts. The times above the bars represent the urgency as interpreted by the subjects. The diagram shows that a '30 min.-reminder' ('far away' sound) was never interpreted as a '10 sec.-reminder' ('nearby' sound), and vice versa. It has to be stated, though, that the second experiment was a sound-only experiment. In a real living-room environment urgency determination and watching TV will be done simultaneously. In such a dual-task situation urgency judgement might be more difficult to perform.
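For readers who want a concrete picture of the parameter manipulation, the sketch below (our own, not the experimental software) derives a 'far away' variant of a sound by lowering the overall level, attenuating high frequencies with a simple one-pole low-pass filter, and adding a crude feedback echo as a stand-in for reverberation; all parameter values are illustrative assumptions.

import numpy as np

def make_distant(signal, sr, gain=0.4, cutoff_hz=2000.0, delay_s=0.08, echo_gain=0.3):
    out = gain * signal                               # 1. lower overall intensity
    alpha = np.exp(-2 * np.pi * cutoff_hz / sr)       # 2. one-pole low-pass:
    lp = np.empty_like(out)                           #    fewer high frequencies
    prev = 0.0
    for i, x in enumerate(out):
        prev = (1 - alpha) * x + alpha * prev
        lp[i] = prev
    d = int(delay_s * sr)                             # 3. crude feedback echo
    rev = lp.copy()
    for i in range(d, len(rev)):
        rev[i] += echo_gain * rev[i - d]
    return rev

sr = 16000
t = np.arange(sr) / sr
gong = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)   # toy stand-in for the news sound
far_away = make_distant(gong, sr)                     # '30 min.' (far away) variant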

CONCLUSIONS
People can easily learn to match the category sounds to the corresponding TV-programme categories, the use of category sounds is effective, and the category sounds were appreciated by a large part of the subjects. The distance of a sound source is a useful metaphor to use in an auditory reminder to indicate the distance in time before a programme is going to start.

REFERENCES
Nielsen, S.H. (1991), Distance Perception in Hearing, Acoustic Laboratory, Institute of Electronic Systems, Aalborg University Press, Aalborg, 125-126.
Westerink, J.H.D.M., van der Korst, M. and Roberts, G. (1998), Evaluating the use of pictographical representations for TV menus, Adjunct Proceedings of CHI'98, 217-218.

Figure 2: The average scores in task 2 (bars show the scores for actual urgencies of 30 min., 5 min. and 10 sec.; the urgency as interpreted by the subjects is marked above each bar).


Experiments in How Automated Systems Should Talk to Users

David Williams1, Christine Cheepen2 and Nigel Gilbert2

1Vocalis Ltd, Great Shelford, Cambridge CB2 5LD, [email protected]

2Department of Sociology, University of Surrey, Guildford GU2 5XH, UK
{christine, nigel}@soc.surrey.ac.uk

ABSTRACT
This paper describes experiments carried out in the domain of automated telephone banking. The results suggest that for this domain the use of human-like talk in system prompts should be avoided.

KEYWORDS: Automated spoken dialogues, naturalness.

INTRODUCTION
This paper describes experiments which investigate the notion of naturalness in human-machine spoken dialogues. The paper focuses on the experimental method and results. For a more detailed theoretical background see Williams and Cheepen (1998). The experimental hypothesis is motivated by the widely-held assumption in the commercial sphere that for dialogues to be perceived as 'natural' or 'friendly' by a novice user, the system output (prompts) must contain a wide variety of human-like person-directed tokens, e.g. 'please', 'thanks', 'I', 'your', etc. This paper proposes that embellishing a dialogue with such tokens will produce no better and possibly worse interaction than a more laconic prompt style. The experiments take a highly goal-directed domain which is typical of current automation targets, i.e. telebanking. A commercially available dialogue provides the dialogue logic and speech recognition performance.

Two prompt sets are compared. The first set (which we call the original set) illustrates the typical, arbitrary use of human-like person-directed tokens in system output. The second set had these tokens stripped out or replaced by material which was not person-directed, in order to produce a 'denatured' prompt set. For example, the original prompt "I'm sorry I didn't understand that" was reduced in the denatured condition to "Not understood". There was no difference in recognition performance or dialogue logic between the two 'systems'. We proposed that there would be no objective or subjective advantage for the original system.
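The relation between the two prompt sets can be summarised as a lookup table, sketched below; apart from the "Not understood" pair quoted above, the prompt wordings are placeholders we have invented, not the actual prompt sets used in the experiments.

ORIGINAL_TO_DENATURED = {
    "I'm sorry I didn't understand that": "Not understood",                      # pair from the paper
    "Please say the amount you would like to transfer": "Amount to transfer?",   # invented
    "Thanks. Your balance is ...": "Balance is ...",                              # invented
}

def prompt(text, denatured):
    # Same dialogue logic and recogniser in both conditions; only the wording changes.
    return ORIGINAL_TO_DENATURED.get(text, text) if denatured else text

print(prompt("I'm sorry I didn't understand that", denatured=True))   # -> Not understood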

EXPERIMENTAL METHODS
A pilot phase was conducted which used 12 subjects in a within-subjects design. Ordering effects were countered by organising the subjects into two groups: group one used the denatured system first, then the original; the reverse order was used for group two. Results were anecdotal, with subjects spontaneously referring to a perception of a more direct and rapid interaction with the denatured prompts. Furthermore, they identified this as the main reason for their preference for this version. A second experiment used 22 naive users from the general public. The experiment used a within-subjects design with the original followed by the denatured system (organisational constraints meant that ordering effects for system type were not addressed; however, the pilot experiment showed the order of system presentation was not a confounding variable). The dependent variable was transaction time for each of the four tasks (Bill Pay, Statement, Balance, Transfer Funds), which was measured from option selection to the end of the last task-related prompt. Also, for each condition, subjects completed a short evaluation questionnaire. On completing both conditions, subjects were simply asked which of the two systems was quicker and which they preferred.

DISCUSSION: TRANSACTION TIMES AND USER PREFERENCE
Our hypothesis, based on our findings in the pilot experiment, was that for the highly goal-directed domain of telebanking, the denatured prompts would perform as well overall (in terms of usability) as the human-like, supposedly 'natural' prompts. The analysis of experimental times shows no significant result in favour of the denatured prompts - the denatured prompts only resulted in a significantly shorter transaction time for one task. However, they clearly performed as well as the original prompts.

For user preference, the hypothesis suggests that the denatured prompts will be preferred by users, or at least be on a par with the original prompts. The subjective results indicate a clear preference for the denatured system. Examples of subject comments on the denatured system corroborate this, e.g. "Much clearer", "Seemed easier", "No fancy language and faster". However, this only makes a good argument for proposing shorter prompts in a highly goal-directed and business-oriented domain. Further work must be done in interpersonal domains, e.g. leisure services.

ACKNOWLEDGMENTS
From an ESRC-funded research project 'Design guidelines for advanced voice dialogues', under the Cognitive Engineering Programme, project no. L127251012.

David Williams is now at Motorola Ltd (LMPS).

REFERENCES
Bevan, N. and McLeod, M. (1992) Usability Assessment and Measurement. In Management and Assessment of Software Quality, 167-191.
Williams, D.M.L., Cheepen, C. and Gilbert, N. (1998) Designing for Naturalness in Automated Speech-based Dialogues: All you gotta do is act naturally. In preparation.


Concepts of Interaction and the Nature of Design: HCI Research at Napier University, Edinburgh

David Benyon

Professor of Human-Computer Systems, Dept. of Computing, Napier University, 219 Colinton Road, Edinburgh, EH14 [email protected]

Research in Human-Computer Interaction at Napier University is centred on the HCI group based in the Department of Computing. The group currently consists of five full-time lecturers, two research fellows and six PhD students, and is active in a number of areas of HCI that may be brought together under the overall research theme of human-computer co-operative systems. We believe that HCI is not merely about a person interacting with a computer; increasingly, it is concerned with how networks of computers (or other information artefacts), people and artificial agents engage in meaningful and effective activities.

The distinctive nature of HCI research at Napier lies in its openness to different approaches to understanding the nature of interaction and to how to design for co-operation between people and information artefacts. The nature of design requires us to consider people engaged in co-operative activity, situated within a socio-cultural environment. Thus we find the concept of interaction, as traditionally understood, to be problematic. There is a need for designers to be aware of non-cognitive and non-engineering approaches to the development of human-computer systems and to see design itself as a human activity. Our research seeks to contribute to the development of appropriate methods for understanding human-computer co-operative systems and to inform our understanding of design.

One area contributing to this theme is to examine how designers abstract the worksystem and how they represent this abstraction in their designs. Developments in areas such as distributed cognition, activity theory and experientialist cognition have raised questions as to how appropriate traditional cognitive psychology can be in this respect. The metaphors employed by designers may be very different from those understood by users, and we are using concepts from experientialist cognition to better understand this relationship. Another area of research is in the application of activity theory to HCI, and an in-depth study of information seeking activities at a national newspaper is being undertaken.

The development of personal and mobile computing systems and the use of systems for collaborative work again changes the nature of what we mean by interaction. For example, novel interfaces are required to facilitate collaborative work amongst individuals who may be mobile or co-located. A distinctive approach to requirements analysis and prototype evaluation is demanded in such environments, where mechanisms of interaction and collaboration relevant to real-time, co-operative interfaces are required.

The nature of interaction is further muddied in systems that employ Intelligent Interface Technology (IIT) such as agent-based interaction, explanation systems or intelligent user interfaces. On the one hand IIT can contribute to the usability of systems by adapting and tailoring information to meet the different needs of different users. On the other hand there is the need for IIT systems to explain and present the reasoning of the system, and to allow users to collaborate with, and assist, the technology when the limits of its 'intelligence' are reached. One investigation in this area is on developing co-operative, semi-automated theorem provers and evaluating these across a range of problems.

Combining IIT with the increasing ubiquity of mobile computing and communication between people through computers leads to alternative perspectives on interaction. Instead of seeing the user as outside the computer, looking in on a world of information, we can view the user inside an information space. This in turn leads us to consider the issue of how users can navigate their way through information spaces. An EC funded project, PERSONA, in collaboration with the Swedish Institute of Computer Science, has been established to look at issues of navigation in information space. This work takes a critical look at the alternatives for assisting users to navigate information spaces, utilising concepts from a wide variety of disciplines. Another EC funded project (FLEX) is looking at interfaces to WebTV.

The concept of 'narrative' as both a method of, and metaphor for, interaction represents a move towards a paradigm of social navigation of information space. In narrative comprehension, readers (users) develop situation models not just of spatial layout but of temporal, causal and personal characteristics of the space. Other methods for enabling more social navigation are also being examined, and an evaluation method based on these ideas is being developed.

The concepts and methods that we are looking at within the wider domain of interaction are being illustrated in a variety of areas. The process of developing Web-based courses is one area which requires careful consideration, as current tools are severely lacking in a number of important respects. A SHEFC funded project with the universities of Glasgow, Heriot-Watt and Glasgow Caledonian has been investigating these issues in the context of collaborative teaching of HCI. A second project is evaluating the usability of text book versus hypertext in an educational setting. Individual differences and task types are being studied to better understand the circumstances in which one medium might be more appropriate than the other.

A significant strand of work within the HCI Group is concerned with building systems for cognitive assessment and remediation. We have strong clinical contacts in a variety of paramedical fields and have designed systems for visual impairment, visual neglect, dyslexia, phonological development and agrammatic aphasia. Alison Crerar's Microworld for Aphasia broke new ground in the field of computer-based language therapy. This work has attracted a lot of international interest and recently a Portuguese version of the software has been prepared for use in Brazil. Continuing work has resulted in a multimedia system suitable for home-based delivery of aphasia therapy and the design and evaluation of a computer-based narrative generator, to help non-speaking people to relate stories and anecdotes which are modified appropriately according to factors such as listener, conversation history and time available.

REFERENCES
Details of all projects can be found at http://www.dcs.napier.ac.uk/hci


The Department of Applied Computing at the University of Dundee

Ramanee Peiris, Peter Gregor and Alan F. Newell

Department of Applied Computing, University of Dundee, Dundee DD1 4HN, Scotland

KEYWORDS: Disability, communication aids, HCI, usability, AAC, multimodal, telecommunications.

INTRODUCTION
Founded in 1980, Dundee's Department of Applied Computing contains one of the largest and most influential academic groups in the world researching into communication systems for disabled people. It also has strong international and national reputations in other aspects of human computer interaction research and was awarded a top grade 5a in the 1996 RAE. The Department has an engineering bias and brings together a unique blend of disciplines including computer scientists and engineers, psychologists, a therapist, a special education teacher, and staff who have benefited from interdisciplinary careers. In all its work it is committed to the principles of usability engineering, with a focus on developing academic and practical insights and producing software which can be commercialised. Research is funded from a wide portfolio of funding agencies: the total research grants awarded in the years 1996/1997 exceeded £1.6 million. The fourteen academic staff gave nine invited lectures and published more than twenty-nine journal articles and book chapters and forty-eight conference papers. Applied Computing has licensed many software products to commercial companies in the USA and Europe, and collaborates in its research with commercial, academic and service organisations worldwide. The Department offers undergraduate and postgraduate Degrees in Applied Computing in a unique programme where the learning of HCI and usability engineering techniques is integral throughout the courses.

MULTIMODAL AND ORDINARY AND EXTRA-ORDINARY HCI
The group developed a concept within the HCI community that extra-ordinary (disabled) people operating in ordinary environments pose similar problems to able-bodied (ordinary) people operating in extra-ordinary (high work load, environmentally unfriendly) situations. They have shown how simultaneous multimodal input, combined with user monitoring and plan recognition, can enhance the reliability of human-system interaction for pilots, air traffic controllers and people with disabilities.

TELECOMMUNICATIONS AND REMOTE LEARNING
This group is investigating how data communication networks can improve the quality of life for disabled and elderly people. They have developed special services relating to interpersonal communication and have demonstrated the advantages of novel graphical forms of communication as an enhancement to live video. This activity has been supported by research in multimedia services and HCI, and is linked with our more recent research into the use of video and other support services for disabled and non-disabled students. Most of the research is collaborative, usually with European partners.

HEALTH INFORMATICS
In collaboration with medical and dental colleagues this research group is advancing the frontiers in clinical decision support, both in asthma treatment within general medical practice and molar extraction in general dental practice. They have taken part in major studies of child growth, the clinical management of cystic fibrosis, and the linkage between asthma and poor growth. They have identified novel approaches to automating the visual inspection of both cervical smears and breast x-ray images. The group are also founding members of the Focal Institute for Scottish Health Informatics, which seeks to bring together all health informatics researchers in Scotland.

INTERACTIVE COMMUNICATION SYSTEMS FOR DISABLED PEOPLE
This group is an international leader in the development of communication systems for disabled non-speaking people which help them to interact more effectively with others. Research projects have investigated several aspects of conversational modelling to aid in this task, including word frequency, openings and closings, giving feedback, topic selections and movement, storytelling, and expressing emotions. A number of commercial products have resulted from this research, including: Predictability, a word prediction system; TALK, a system aiding social conversation; ScripTalker, a communication system based on scripts; and Talk:About, a storytelling aid. Other related research projects include: Blissword, a predictive retrieval system for Blissymbolics which allows users to retrieve symbols which can then be manipulated in a word processor; SeeWord, which enables dyslexic users to configure their word processor interface for optimum legibility when working with text; HAMLET, a system for the investigation of emotion in synthetic speech; and Unicorn, a communication system making use of the internet. Also, the use of predictive and signal processing techniques is being investigated to assist with the maintenance and support of elderly people within the community.

COMPUTER BASED INTERVIEWING AND KNOWLEDGE ELICITATION
Models of the structures of human interviews have been used to develop general purpose software to conduct computer based and computer facilitated interviews. A commercial product based on this work (ChatterBox) has been evaluated in clinical use in a secure mental hospital, and within schools. Further research is focused on more flexible models of computer interviewing and on the potential of computer based interviewing techniques to assist in a variety of settings, from engagement with psychosis sufferers to employment pre-interviewing. This research is leading to new insights concerning human computer interaction.

DIGITAL SIGNAL PROCESSING AND SOFTWARE ENGINEERING
This newly formed research group focuses on Digital Signal Processing in its broadest sense, including image processing and multi-dimensional signal processing. Particular interests cover remote sensing for environmental monitoring and signal processing on-board spacecraft. Recent research has covered data compression for synthetic aperture radar, work on a vision guided autonomous lunar lander, and consultancy on space-based signal processing architectures. Experience in system development within the aerospace industry has provided the foundation for a research initiative in software engineering. Emphasis is on pragmatic software tools which utilise and build on HCI techniques developed within Applied Computing.


EDS Human Factors Group

Michael Burnett

EDS, 4 Roundwood Ave., Stockley Park, Uxbridge, Middlesex UB11 1BQ, United Kingdom.
Tel: +44 181 754 5890, Fax +44 181 754 [email protected]

ABSTRACT
The role of EDS' Human Factors Group is described. The group has adapted its services to link business process re-design to a range of user-centred issues including human-computer interface (HCI) design, user support development and change management. Implications for the wider application of Human Factors are described.

ROLE OF EDS' HUMAN FACTORS GROUP
EDS' Human Factors Group, formed in 1982, consists of 13 applied Psychologists and Ergonomists with between 1 and 20 years of experience. The group plays a pivotal role in EDS UK's multi-disciplinary Strategy & Change Department. The Human Factors Group works alongside specialists in business process re-engineering, management of organisational change, integrated logistics support, training and user support design. EDS, a US multi-national computer services company with over 110,000 personnel, specialises in enabling organisations to take advantage of IT to meet business needs. The consultants in EDS' Human Factors Group typically work in integrated project teams tasked with developing new business, organisational and IT systems. Examples of the type of assignments currently in progress are used to illustrate the role of the group.

Prior to developing an IT-enabled solution, current business processes must be understood and modelled. The current business model forms the basis for the re-engineering of processes to reduce costs and increase effectiveness. One of our teams is currently re-designing human resource processes in a very large organisation of over 250,000 personnel. They are using a business process modelling technique, based on activity analysis, which feeds into a more detailed task analysis for HCI requirements capture. Process Charter, a flow-charting and business modelling tool, is used to represent the current and future business models. Using such tools we can evaluate the impact of HCI design in business terms, for example, through-put time, cost or staff levels required. We have found that positioning Human Factors at this stage in the business change helps to ensure that subsequent IT-enabled solutions are truly usable. Allocation of function decisions between human and computer and between human and human can be influenced right from the start.

In our view many human factors issues relate to the change process. For example, effective HCI prototyping not only gives empirical evidence about usability concerns, it also (if conducted correctly) will involve the user population in the change process in a positive way. One of our teams is currently helping a government agency to introduce MS Office and bespoke business applications at over 30 sites across the UK. This process involves top-down management buy-in, bottom-up communications management, HCI prototyping and the use of organisational re-modelling techniques with staff in each location.

In the past our Human Factors consultants would design the complete HCI with IT staff. However, the scale of the IT applications and the need to introduce off-the-shelf products have changed the HCI design process. In the Command Support System for the Royal Navy our role has been one of advice and specialist intervention. We developed an HCI Style Guide for the IT staff and the users. We then facilitated user reviews of HCI modifications and new applications design. IT requirements and design teams are shown how to use task analysis methods. We check the HCI for consistency and usability. Finally we conduct HCI prototyping and user workload evaluations of critical parts of the system.

A major challenge facing all of our projects is to ensure that the user population can understand and quickly learn new roles. These new roles are likely to involve changed business procedures and new IT operating skills. The user support system is vital in enabling cost-effective business change. Effective user support must be provided during the transition to a new system and in the steady state of system operation. EDS' Human Factors consultants typically work on a range of design products that will form the user support system. These include on-line help and reference tools, intranet applications, a range of training solutions and user documentation. A typical project will require the development of a target audience database containing organisational and user profiles, task-related knowledge and skill needs, user support objectives and media options. Recent projects have included: the design of hypertext tools providing business process and IT procedural help; the introduction of usability principles into web-design standards; and integrated sets of on-line and off-line user documentation. We view the development of the user support system as a continuation of the HCI design process. This includes the need to help organise peer support within the organisation - the most common method through which many users seek help.

LESSONS FOR HUMAN FACTORS
We believe that we have learnt several lessons that have more general value for the application of Human Factors:

• The importance of taking a holistic, multi-disciplinary view of business processes, organisational change, HCI and user support design.

• The use of an integrated approach to link the business process model to more detailed allocation of functions and task analysis models, which in turn are used in HCI style and object design.

• An HCI design process, integrated into the IT design process, that meets usability objectives while also enabling effective organisational change.

• Human Factors tools are developed and used to improve design productivity.

• Metrics are used to assess the value of HCI design characteristics for the target organisation and its business.


Introducing the Benefits Agency and Employment Service's Model Office - testing the end to end processes

Angela Maguire and Keith Wheeldon

Model Office, Level 2, Peel House, Sheffield S3 8PQ, UK. (+44) 114 259 7412/5639

WHO ARE WE?
We are a unique initiative funded by the Benefits Agency and the Employment Service. Model Office, founded in 1996, offers facilities to test new business processes in a safe environment simulating local offices, providing the opportunity to identify and rectify high risk areas of business prior to implementation. Spanning an entire floor, it replicates a Jobcentre and processing centre. Commissions are received from projects wishing to test new or revised business processes. We employ 45 staff from both Agencies, but staffing levels fluctuate in accordance with work loads.

We have four teams responsible for the preparation of test programmes: Scripting - who identify process risk areas and create scenarios to test end to end processes; Technical - responsible for IT provision; Evaluation - responsible for test observation, de-briefs and a final evaluation report; and Administration - who deliver ongoing support. We will use the New Deal test programme as an example for the purpose of this paper.

WHAT IS OUR ROLE IN THIS?
Many of my colleagues and I have a local office background, and we have often had to implement a new process with emerging problems which could have been identified prior to introduction. Through our testing methods, we advise our customers of the highest risk areas in their process so that problems can be addressed before implementation, easing national rollout.

WHAT DO WE TEST?
Testing covers all aspects of new processes: IT, guidance, training, and most importantly users' reactions to and perceptions of process operability and integration. The New Deal test programme examines the process from identification of Jobseekers who are suitable for New Deal, through the support processes we provide, to their entry into work, training, etc.

To ensure that findings are accurate we identify staff skills required to perform tests and recruit from local offices those who have the required training and experience. We ensure they have received any training which has been developed to support the new process before asking them to perform their normal duties using the new process and guidance.

WHO ARE OUR CUSTOMERS?
Our customers are project teams within both Agencies who are responsible for the implementation of new initiatives. Examples include New Deal and Jobseeker's Allowance.

WHERE DO WE START?
Customers are often unsure which areas they want to test. We identify the greatest risks in the process - for example, can staff identify Jobseekers wishing to gain early entry to New Deal? Once risks are identified, we agree which areas to prioritise. Risks are then divided into business activities which become test conditions, from which we create the scripts. We have to work to extremely tight deadlines - sometimes receiving final versions of guidance days before testing is due to commence!

HOW DO WE TEST?
Scripts mirror business scenarios occurring nationally when using new processes. Field staff (Users) are recruited to perform their normal roles using the new processes, guidance and IT. Time is invested making Users feel secure in the test environment, assuring them that we are evaluating the processes and not their skills. Time invested at the start of a programme gives our Users confidence and provides maximum feedback from a committed team. Throughout testing, Users are observed on a one-to-one basis to capture findings. After testing, Users attend a debriefing to discuss the day’s events. From this we collect the softer issues of testing, for example Users’ feelings about new processes, which we have found to be a very productive way of capturing Users’ perceptions. We encourage discussion and welcome Users’ comments based on test findings.

WHO IS INVOLVED IN TESTING?
The most important people are the field staff. They have up-to-date knowledge and experience, and without them our tests would not be valid. The Technical Team are required to provide the latest releases of software and systems, and the Evaluation Team will observe and report to the customer on testing outcomes.

HOW DO WE RELAY FINDINGS TO THE CUSTOMER?
Throughout the test programme we are in constant contact with customers, advising them of early findings. At the end of the test programme the evaluation team collate their findings and observations to compile a final evaluation report. This is sometimes received with mixed feelings as it can highlight issues that need addressing. We sometimes deliver findings to customers through workshops. These are very useful as they include the Users, who give their account of the test programme and their experiences.

SUMMARY
Customers have approached us with little awareness of the benefits testing can provide. Having completed a full test programme and presented our findings, they have realised the value and benefits it has brought to their business. Previously we have had to market our facilities, as we are not a permanent operation within either Agency and this was imperative if Model Office were to continue. However, we now find ourselves in demand, with so much business that we have to closely monitor our future working plans.

SO WHERE DO WE GO FROM HERE?
We are committed to developing our facilities to improve our service to our customers - for example, making more use of our “Usability Suite”, which is furnished with video and editing equipment. We are also in the process of recruiting a Human Factors specialist, who will help us to develop our skills in evaluating human-computer interaction.

Lucent Technologies OMC-2000

Rod Moyse1 and Annette Tassone2

1Lucent Technologies, GSM Systems Engineering, The Quadrant, Stonehill Green, Westlea, Swindon SN5 7DJ, [email protected]

2Lucent Technologies, Advanced Development, Optimus, Windmill Business Park, Swindon, SN5 6PP, [email protected]

ABSTRACT
We describe our efforts to introduce user-centred disciplines as essential elements in the development of mobile telecommunications network management products.

INTRODUCTION
The organisation we wish to describe is the GSM Division of Lucent Technologies. This division has its World Headquarters at Swindon, UK, with development groups on several continents. There had previously been little HCI-related work carried out here, and the authors sought to establish Usability Engineering (UE) and Human Factors (HF) as essential parts of product development. We were based in two very different groups (Systems Engineering and Advanced Development), but our work was focused on a single product: the OMC-2000, and we will concentrate on this in our paper. Lucent Technologies as a whole is a US$26 billion global company with 128 years of experience. The company, with headquarters in Murray Hill, NJ, USA, designs, builds and delivers a wide range of public and private networks, communications systems and software, data networking systems, business telephone systems and microelectronic components. Bell Laboratories is the research and development arm of the company. As a whole the company employs some 150 usability engineers or similar staff.

THE OMC-2000
The OMC-2000 (Operations and Maintenance Centre) is the distributed software system which is used to operate and maintain the complex networks required for Lucent Technologies’ GSM mobile telephone systems. A range of operators are concerned with Fault Management, Configuration Management, and Performance Management. The product has by definition been aimed at a narrow and highly technical sector of the mobile communications market. In this specialised market usability has not always been seen as a priority. As the market has matured and competition has intensified, the rules have changed so that usability is now seen as a major element of product quality, particularly where service providers wish to reduce costs by using less technically sophisticated personnel for routine system operation.

USABILITY MEETS ENGINEERING
For both authors the technical learning curve was daunting as we lacked the telecommunications background required for rapid assimilation. We relied on our initial usability evaluations and interviews to speed up the process. We kept on talking to the experts, casting our nets as wide as possible and synthesising the results. As in many technically-focused domains there had been pressure to view the product in terms of its individual features rather than in terms of the users and what they wished to accomplish. This gave us a product which had some major strengths in functional terms, but which also presented some significant usability challenges. As ever, development resources were finite and hard choices had to be made between usability improvements and new product features.

The product teams had previously established the idea of using scenarios as a focus for specification and development decisions, but had again applied a largely 'engineering' view related to 'time on task' as an issue in competitive analysis, and to the testing of implemented code. Sustained persuasion was required to establish the idea of scenarios as a means of identifying usability issues, designing remedies, and then testing the result. In order to gain a clear view of the scope of what was required, we carried out an initial programme which included co-operative evaluations, observational studies, a questionnaire survey, extended interviews, customer-written 'fault' descriptions, and the collation of documents from internal and external sources. The breadth of this reference proved a major strength in our subsequent campaign. The work and analysis was summarised in a report which raised the most pressing issues, giving a rationale for their selection, relevant scenarios, a statement of recommended solutions, and some prototype designs.

MAKING A DIFFERENCE
Following the initial work, a sustained programme of lobbying and education was required to raise the profile of usability issues and to gain management and engineering support for the necessary solutions. Although each interest group may be striving for a better product, on any given day they may face pressures that seem more urgent and important than ‘usability’. Customer site visits were a highly significant step here. If you come back to the office with clear and graphic evidence of customer requirements, it is hard to ignore. In some senses though, we were pushing at an open door. The need for a strong customer focus was well understood, and was reinforced by the overall company goals. The need for customer site visits was accepted, although these were, as usual, difficult to arrange. Scenarios were much in demand as a means to structure various forms of product testing, and work on a tool to support their wider use was welcomed. The use of new user interface technologies was mostly hampered by the technical challenges of integrating them with our complex, distributed product.

CONCLUSIONS
We are beginning to make a difference, and have established a much greater awareness of usability issues in our organisation. It has not been easy: there is no purpose in completing a report and waiting for something to happen. In order to get results you have to raise your profile and that of the problem, while gathering support for the changes that you believe to be necessary. It pays to stress the business case and to make contact with all the far-flung groups involved with the product as they may have much to offer. Any opportunities to build prototypes and show people what you mean should be quickly grasped as one visualisation can be worth hours of talk. Even when doing all the right things and winning support, you may still get stalled by something like an inflexible architecture. In this case you can start a debate about ways to change it, and the business costs of doing nothing!

Utilities face a challenge. Usability can help.

Rosalind Barden

Principal Consultant, Energy & Utilities Division, Logica UK Limited, Betjeman House, 104 Hills Road, Cambridge, CB2 1LQ, UK

In April 1996, competition was introduced into the UK domestic gas market. British Gas, once the sole supplier to 19 million homes, now faces over 20 rivals, and up to 25% of customers have already opted for a new supplier.
In September of this year, the domestic electricity market will open to competition. As well as the 14 regional electricity companies, British Gas is already promoting itself as an electricity supplier. In addition, other non-energy companies are introducing special deals for their customers. For example, retailers such as Tesco are already signing up recruits, the Trades Union Congress has set up ‘Union Energy’ to sell fuel to its members, and the Daily Telegraph newspaper has a readers’ promotion for ‘dual fuel’.
The energy suppliers, as well as facing the demands of the competitive market, are also regulated. There are rules of competition to be obeyed and pressure from the regulator to increase service and reduce costs.
Call centres play a pivotal rôle in looking after utility customers, but in a changing market there are a number of key challenges:

An old utility company | A modern utility company
knew that the customer had no choice over supplier | appreciates that the customer has freedom of choice
focus on own procedures | focus on customer service
customers come and go as they move in and out of the area | customers come and go as other companies make better offers
maintained a history of the meter | maintains a history of customer contact, even when the customer moves house
supply and distribution dealt with by the same company | regulator demands separation of supply and distribution
dealt with a single product, i.e. gas or electricity, with one or two tariffs available | sells a multitude of products and tariffs and must be able to answer any customer query immediately
often unable to keep up with customer queries and requests, leading to long waits | must respond quickly to customer queries
large numbers of staff, often with many years of experience of company procedures | pressure to reduce staff numbers, high turnover in call centre staff, ever-changing ways of working
approximately half the customers do not pay their bills on time | customers settle bills automatically

As well as having to deal with a competitive market, those now trading in energy must understand that consumers do not treat electricity and gas as commodities. Most people regard energy as something that is ‘always there’.
The successful companies will be those that can differentiate themselves in an apparently undifferentiated market. What they need to sell is the ‘wrapper’ around the core energy products. This wrapper consists of both extra offerings, such as boiler maintenance and insurance, and high-quality customer service.
To assist in this, within the Energy and Utilities Division of Logica, we have developed Flair. This is our model of the architecture for a 21st Century Energy Supplier. This talk will show the importance that usability in the call centre user interface is playing in developing the Flair approach.
HCI techniques have been used to ensure a smooth path for the customer, supporting the call centre agent with appropriate information as customers request it. Good usability practice has also been employed to support customer management principles such as providing helpful and informed responses to the customer while seeking the right opportunities for cross-selling and up-selling.
To achieve this we have concentrated on designing tasks that enable an agent to follow a path that is appropriate to the needs and requests of the customer being served. Much effort has been devoted to the presentation of such processes so that they will make sense both to the inexperienced agent and to the member of staff who has 20 years’ service.
In what will be a fast-changing world, with a short time to market for new products, new tariffs and combined packages, it is essential that services can be introduced with a short lead time. This means that not only must it be possible to provide the software quickly, but training time for agents must be kept to a minimum.
In support of these demands, we have used a consistent and simple user interface. It is based on an allocation of specific areas of the window to various aspects: access to functions, customer information, task details. Each task is presented as a pseudo ‘wizard’, following through in steps where the route taken depends upon the information provided by the customer in both this conversation and in previous ones.
The talk will describe our overall approach to structuring a 21st century energy supply company, present example business processes and show how these have been implemented in our model. Examples of the direct application of these ideas to energy suppliers already in the market will be quoted throughout.
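The step-by-step routing behind such a pseudo ‘wizard’ can be pictured as a small decision function over the data gathered so far. The sketch below is purely illustrative and is not Logica’s Flair implementation; the step names, customer fields and routing rules are hypothetical.

# Illustrative sketch of a wizard whose route depends on customer data.
# Step names, fields and routing rules are hypothetical, not taken from Flair.
def next_step(current_step, customer):
    """Choose the next wizard step from data gathered in this and earlier calls."""
    if current_step == "identify_customer":
        # An existing dual-fuel customer skips product selection.
        return "confirm_details" if customer.get("dual_fuel") else "choose_products"
    if current_step == "choose_products":
        return "choose_tariff"
    if current_step == "choose_tariff":
        # Offer a cross-sell only where earlier information suggests it is relevant.
        return "offer_boiler_cover" if customer.get("owns_boiler") else "confirm_details"
    if current_step == "offer_boiler_cover":
        return "confirm_details"
    return "done"

# Example: walk one customer through the wizard.
customer = {"dual_fuel": False, "owns_boiler": True}
step = "identify_customer"
while step != "done":
    print(step)
    step = next_step(step, customer)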

Designing for cultural diversity

Girish V. Prabhu and Dan Harel

Eastman Kodak Company, Rochester, NY 14650-1916, U.S.A.

ABSTRACT
Products and software developed for sale in multinational markets are most successful when they appropriately accommodate culture and language. The design of products may be either internationalized (based on features that are culture-neutral) or localized (based on features tailored to regional and local markets). Different levels of localization, from no localization, through translation only, to cultural localization, may be applied depending upon the application type and the net return on effort. Cultural localization is successful only when a detailed understanding of the specific culture is available to the designer. This poster describes a methodology based on cultural anthropology, used at Eastman Kodak Company to study and understand users’ needs and preferences for internationalized versus completely localized digital imaging products, and to design products and software that are efficiently and successfully localized to “speak the universal language of photography”.

KEYWORDS: Localization, Cultural localization, Japanese and Chinese design

INTRODUCTION
Eastman Kodak Company, as a global company, serves customers across Asia, Africa, the Middle East, Latin America, Europe (Western and Eastern), and North America (the United States and Canada). Our customers, therefore, come from different countries, speak different languages, have different cultures, and have different buying habits. These elements pose unique challenges not only to marketing organizations, but also to product development organizations. Products marketed outside of the US succeed when they accommodate culture and language appropriately. Culturally targeted design solutions contribute to a competitive advantage, stronger brand recognition, and an increase in sales in the regions we serve.

With this in mind, the Human Factors lab and the Strategic Design and Usability group of Eastman Kodak Company evaluated culture-specific user preferences for overall product design. Our research was conducted from a sociocultural perspective, and the findings include insight into, or recognition of, the local social fabric, attitudes and behaviors, perceptions, beliefs, history, art, architecture, etc. The scope included public access kiosks, in-home imaging, and desktop software. The objective was to research product design and graphical user interface design solutions for issues affecting internationalization (applying design features that are culture-free) and localization (customizing designs for regional and local markets) of Kodak products and software services. The outcomes of this research are cultural characteristics and product appearance and usability objectives that will be used by our designers to develop digital imaging solutions that communicate respect and consideration for different target cultures, deliver in product appearance and ensure ease of use.

METHODOLOGY
Cultural localization research utilized anthropological research methods. Cultural information was collected from both etic and emic perspectives. The overall plan for the research was as follows:

Ethnographic research
• Country-specific preferences for product appearance and user interface design from the etic (outsiders’) viewpoint were compiled for potential imaging users.
• Existing traditional and digital imaging products and software were benchmarked to understand user preferences.

Cultural characteristics
• Based on the identified etic and emic perspectives, cultural characteristics were developed for these countries.

Product appearance and user interface design appearance requirements
• Based on the cultural characteristics, product appearance and user interface design appearance qualities were developed for these countries.
• The findings from the emic and etic views were combined to develop overall product appearance and user interface design characteristics for each country.

Validation research
• Prototypes of suitable products for each country were developed. The North American baseline prototype for each country was translated.
• These localized prototypes were evaluated against the localized North American prototype in the specific countries through focus groups.

Develop guidelines
• Based on the research, the product design and UI design guidelines were refined.

The existing Kodak consumer segmentation was not used in this research because that segmentation was based on US-centric data and was thought inappropriate for the Asian cultures. The research specifically targeted business, home, professional and education-related users with different levels of familiarity with digital technology. The research recruited equal numbers of men and women. The age of the participants for the ethnographic study ranged from 14 to 55 years, whereas the validation research was done using equal numbers of men and women in the age range of 26 to 44 years.

CONCLUSIONS
Research into cultural preferences has broadened our appreciation of the importance and complexity of localized product design for Kodak products for Japan and China. Our research has indicated how elements such as symbology, field formatting, color, interaction styles, screen layout, and typography affect successful product interface localization.

REFERENCES
Fernandes, T. (1995) Global Interface Design: A Guide to Designing International User Interfaces, AP Professional, Boston, MA.
Day, D. (1996) Cultural bases of interface acceptance: Foundations. People and Computers XI, Proceedings of HCI 96, the 11th Annual European Human-Computer Interaction Conference, 20-23 August, Imperial College, London, 35–47.
Zieglar, V. (personal communication) unpublished data.

Usability Process Challenges in a Web Product Cycle

Gayna Williams

Microsoft Corporation, One Microsoft Way, Redmond, WA. [email protected]

ABSTRACT
Improved accessibility to the Internet by product teams and end users is changing the product development cycle and the tools to support it. The changes create opportunities for usability engineers and others engaged in user-centered design, but introduce new workload and scheduling problems.

KEYWORDS: Internet time, product development cycle, usability engineering

INTRODUCTION
Microsoft has six usability groups and over 80 usability engineers, each group reporting into a product development division. As elsewhere, usability engineers choose from a diverse set of established techniques based on a team’s experience with methods, the product being produced, the phase of the project, and the individuals involved in the process. The timing of activities depends on windows of opportunity for influencing development.

The development of products in “Internet time” - very compressed development and testing schedules - dramatically changes this process. The use of the Internet to release beta versions, the existence of Usenet newsgroups devoted to supporting specific products, the World Wide Web (WWW) and the release of products partly written in HTML, and intranets in software corporations greatly increase information exchange. These present tremendous opportunities for feedback and communication, but can also conflict with existing usability practices. This paper describes such challenges and opportunities, focusing on experiences with Internet Explorer (IE).

BETA RELEASES
The WWW enables product teams to use the Internet to distribute beta versions to an international audience of eager end users. The principal goal may be “bug bashing,” but usability feedback is also possible, on a massive scale and in a time frame in development when human-computer interface changes can still be made. In addition, competitors’ beta releases facilitate comparison testing. However, managing this great opportunity effectively is a considerable challenge, for the following reasons.

Web beta releases add to the workload for usability engineers, who now are asked to determine whether users can locate the beta on the web site, understand the download requirements, download it, and find technical support information. In addition, where a full field study was formerly done only at product release, it is now sought after each beta release. Moreover, trade publications now comment on usability in published reviews of beta releases, so usability engineers are under pressure not to wait for feedback from beta users. To manage field tests of each beta, we have trained IE team members in field study techniques, so that they can help conduct and utilize the studies with minimal lead time.

Beta testers do not necessarily understand how to provide useful usability feedback. A new group was formed within the IE product team to manage beta releases and elicit feedback. Educating beta testers on how to report usability problems, and accurately capturing, quantifying, and communicating to teams the rich information from this extensive, but not entirely representative, set of users, are two more new tasks for usability engineers.

HTML: THE PROTOTYPE IS THE CODE
Iterative testing has utilized prototypes up to the code completion deadline, after which there is no hope of change until the next version. In previous product cycles the prototype was the specification (Sullivan, 1996), but now the prototype is often the code: parts of the interface are developed in HTML, enabling non-developers to make changes right up to product release. This improves the ease of iterative testing and change. This means testing is needed to confirm that changes are in fact improvements right up to the “eleventh hour”. Another challenge is to manage this activity alongside other activities that traditionally filled the schedule after code completion, such as field studies.

NEWSGROUPS AND INTRANETS: USEFUL, CHALLENGING
The Internet provides developers (as well as usability engineers) with an ever-open window on the “real world,” rather than periodic glimpses through marketing feedback, printed product reviews, and usability tests. Special newsgroups exist to support products - “Read the newsgroups” is often a mandate from product management. The challenge for usability engineers is to ensure that information from newsgroups is used appropriately. Newsgroups attract certain types of users, whose concerns may not be widespread. A developer may make changes based on what a single “real user” wants, while neglecting more significant problems reported by usability engineers.

These pressures interact. In responding to feedback from newsgroups and external customers, some IE 4.0 feature components were being changed at the time of the beta 2 release and required usability evaluation work, work that was required at a time when a field study had been planned. The field study was canceled to iterate on the interface.

In addition to the external Internet, the Microsoft intranet has improved product development dramatically, enabling rapid dissemination of schedules, specifications, daily versions of the product called “builds,” vision statements, and remote viewing of usability tests. Managing the coordination of this information remains a challenge for usability and the product teams.

REFERENCES
Sullivan, K. (1996) The Windows 95 User Interface: A Case Study in Usability Engineering, in Proceedings of CHI ’96 (Vancouver, Canada), ACM Press, 473-480.

The User Interface of Britain’s New En-Route Centre for Air Traffic Control

Jim Cozens

Catcon Limited, The Chart House, Ballfield Road, Godalming, Surrey, GU7 2HA, UK

INTRODUCTION
For the past 6 years National Air Traffic Services and their prime contractor Lockheed Martin have been engaged in the development of a new Air Traffic Control Centre that will control aircraft ‘en-route’ over England and Wales – essentially planes flying above 15,000 feet. The new centre, known as ‘NERC’, will replace the current centre at West Drayton and is expected to increase air traffic capacity by 40%. This paper discusses four key decisions in the user interface design. For each we describe the design problem and the rationale for the chosen solution, and give an assessment of how the system is working in practice.

DISPLAY MANAGEMENT
The ‘on screen’ part of the NERC user interface is a GUI controlled with mouse and keyboard. One of the first issues in its design was how to organise the displays. To be successful in a real-time environment we needed rapid access with the minimum overhead in managing windows. The familiar ‘desktop’ solution with overlapping windows has too much ‘swapping’ for this context, so we chose an approach we called the ‘workbench’ in which primary data is always present, with ‘tools’ and additional data quickly accessible. The primary data is provided by ‘permanent windows’ – the radar plus a border on the main display, an airspace map plus border on the auxiliary – that tile the display, overlaid by movable windows containing flight data in ‘electronic strip bays’. Other windows are accessed via buttons in the borders and/or directly through the primary data. This design does have one awkward compromise. If the sector for which a controller is responsible is wide but short, having the tall, thin strip bays on the main display means that the radar is used at a smaller scale than is desirable. The preferred alternative is to move the strip bays to the auxiliary display, reducing the effective size of the airspace map and so slowing down access to support information. In the usability trials and other large system tests we have run, the display organisation attracts little comment except for the above issue (no comment is a compliment from our vociferous users). Our observations show that the controllers spend little time managing their displays once they have them set up (some button labels need improving), so we think this aspect of the user interface is a success.

INTERACTION STYLE
In the current en-route centre, the controllers use paper flight progress strips that they update by hand. Although apparently crude, this is a highly efficient system. Early in the project NATS decided that the ‘tactical’ controllers (who talk to aircraft) would continue to use paper strips for their moment-by-moment record. The NERC computer systems would, however, hold the ‘coordination’ data (the plan for where aircraft enter and leave individual sectors of the airspace), which would be entered by the ‘planner’ controllers and used by both tacticals and planners. To be effective, the NERC user interface has to be as efficient as pen and paper for entering coordinations. The interaction style we chose is a combination of mouse plus keypad (number pad plus function keys). We chose the mouse for its all-round strength as a pointing device, the number pad because the main data to be entered is 3-digit levels (the menu selection alternatives we looked at were typically less efficient and/or more error-prone), and function keys for commands that had no simple direct manipulation equivalent. We are currently seeing two problems with the interaction style: the controllers are taking time to become assured with it, and for a few key tasks the interactions are clumsy. We are confident that the first of these is simply a matter of practice – the users with the most experience are now using the system in an assured and skilled manner. The unwieldy interactions are being refined and we expect to improve them to the usability level we already have for the majority of key tasks.

INFORMATION MANAGEMENT
A primary ATC technique is the ‘scan’, in which the controller regularly examines the data for the aircraft for which he/she is responsible to maintain a mental picture of the traffic situation. This approach uses information ‘remembered in the world’ and demands that all the data being used is displayed all the time – interacting with the system during the scan is unduly disruptive. Moreover, what is needed is not only the task data but also contextual data to help maintain the structure of the model. Key features of the user interface design to support this method are: different data displays provide the pertinent data for each member of a control team – the data needed by the tactical controller is on the radar plus the paper strips, the planner’s data is on the electronic strips plus the radar, and specialised arrival and departure lists allow assistants to communicate with airfields; the individual displays are configurable – a variety of data can be selected for display in radar track data blocks, and the electronic strips have collapsed and full forms and can be sorted in half a dozen different ways; and additional data is available on demand – global ‘quicklooks’ provide one extra datum on every radar track datablock (which fits the scan) and a mouse-driven ‘pop-up’ provides full data on one track or flight plan. Usability trials have not uncovered any substantial problems with this approach, but we have identified further applications of ‘quicklook’.

ALERTING AND ATTENTION GETTING
Modern air traffic control systems aid the controller by detecting potential problems before they occur. Both these alerts and unsolicited changes to the data on display need the controller’s attention but may not be as urgent as his/her active task. To balance the need to alert with the need to avoid interruption, the NERC user interface uses different forms of display according to the importance of the data: simple status indicators are black and steady, eg the buttons in the border that bring the strip bays onto the display show an icon indicating whether there are strips in the corresponding bay; changes to data on the electronic strips ‘throb’ between black and dark grey – this gives a low priority ‘notice when you look closely’ alert; alerts that the controller is likely to attend to as the next task use colour infills or hatching, eg if a planning action is overdue, the corresponding strip is marked in orange – these indications are immediately noticeable when the controller scans the display and may distract; flashing is reserved for the one truly urgent system event - the short term conflict alert. The design goes outside this scheme for parts of the workstation that are outside the main visual field: eg flashing is used on the telephone panel for incoming calls and to indicate outstanding messages in the system message area in the top border of the main display.
Our preliminary assessment is that this scheme works as intended. However, this facet of the design is susceptible to the effects of high workload, so we will be monitoring it as we run high-workload tests in the run-up to operation.
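As an illustration only (not NERC code), the mapping from alert importance to display treatment described above can be written as a simple lookup table; the category names and values below are hypothetical paraphrases of the scheme.

# Illustrative sketch of mapping alert importance to display treatment,
# paraphrasing the scheme described above; names and values are hypothetical.
ALERT_PRESENTATION = {
    "status_indicator":         {"style": "steady", "colour": "black"},
    "low_priority_data_change": {"style": "throb", "colours": ("black", "dark grey")},
    "next_task_alert":          {"style": "infill or hatch", "colour": "orange"},
    "urgent_conflict_alert":    {"style": "flash"},
}

def presentation_for(alert_kind):
    """Return the display treatment for a given alert importance."""
    return ALERT_PRESENTATION[alert_kind]

# Example: an overdue planning action is a 'next task' alert.
print(presentation_for("next_task_alert"))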

Refining the NERC User Interface

Roger Attfield

National Air Traffic Services, CAA House, 45-49 Kingsway, London, WC2B 6TE, UK

INTRODUCTION
The NERC user interface was designed by user interface specialists working with user participants and with prototyping of the key aspects. Although we are still working on quantitative assessments of the usability of the resulting system, our qualitative assessments show that we have achieved the high standard for which we strived. However, as with any user interface design, some choices that looked good on paper or in a prototype have proved poor in practice. In this paper we discuss some of these usability issues and the changes we are making to resolve them. With the advantage of hindsight, we also consider whether these problems might have been avoided in the original design.

INTERACTING WITH ELECTRONIC STRIPS
The planner controller’s task is to agree the flight levels at which aircraft will enter and leave his/her sector with the planners for the adjacent sector (‘coordination’). The system supports this task with functionality for electronic ‘offers’. The user interface for receiving and accepting offers works well, but that for making offers does not under some circumstances. The interaction sequence is that the planner selects (‘hooks’) the appropriate flight strip, clicks on the exit level area of the strip to bring up a ‘coord out’ dialog box (or uses a ‘coord out’ function key) and enters the proposed exit level. The system sends the offer to the next sector at the appropriate time (depending on the flight’s route and the sectors involved). The problem with this interaction lies with the need to hook the strip before bringing up ‘coord out’. This was because of a central concept of the user interface design: that flight data input relates to the single ‘hooked’ flight – ensuring that we have a clear context for the interaction and helping make sure that the controller always has the ‘right’ flight. Following this principle, the design only provided the ‘coord out’ and other buttons on a hooked strip. This led to one more interaction than was necessary. On its own, that would have been a minor defect, but the original design had a further misfeature: it was not possible to ‘hook’ from the right-hand end of the strip. This arose because we were trying to disambiguate three interactions with the strips: clicking on embedded buttons in the strip, clicking on the strip itself to hook or unhook, and pressing and holding to bring up a pop-up menu. To keep the elements separate we chose to disallow hooking from the areas of the strip containing buttons. We improved the ‘coord out’ interaction by making the coord out buttons available on an unhooked strip and making the button action hook the strip as well as bring up the dialog box. We also made the whole strip sensitive for hooking. This may seem an elementary error which we should have identified and fixed much earlier in NERC’s development. We missed it because we expected the planners to set the exit levels for a flight immediately after accepting it; in that situation the flight is already hooked, so the usability problem does not occur. The ATC procedures still recommend setting the exit level straight after accept, but there are sufficient cases where that rule cannot be applied that the problem occurs routinely. The root cause of the poor design was that the scenarios we used in developing the design did not include these cases. Knowing that you have a sufficient set of scenarios remains a difficult judgement.
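A minimal sketch of the revised ‘coord out’ behaviour described above: the button is available on an unhooked strip, and pressing it hooks the strip before opening the dialog. This is illustrative only, not NERC code; the class and method names are hypothetical.

# Illustrative sketch of the improved 'coord out' interaction described above;
# class and method names are hypothetical, not NERC code.
class Strip:
    def __init__(self, callsign):
        self.callsign = callsign
        self.hooked = False

class PlannerUI:
    def __init__(self):
        self.hooked_strip = None

    def hook(self, strip):
        # Flight data input always relates to the single hooked flight.
        if self.hooked_strip is not None:
            self.hooked_strip.hooked = False
        strip.hooked = True
        self.hooked_strip = strip

    def press_coord_out(self, strip):
        # Revised behaviour: the button works on an unhooked strip and
        # hooks it before bringing up the 'coord out' dialog box.
        if not strip.hooked:
            self.hook(strip)
        return self.open_coord_out_dialog(strip)

    def open_coord_out_dialog(self, strip):
        return f"coord-out dialog for {strip.callsign}"

ui = PlannerUI()
print(ui.press_coord_out(Strip("BAW123")))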

STANDING AGREEMENT TRAFFIC
The other substantial usability issue for the planner controller arises from a difference between the operational method used at the current London Air Traffic Control Centre (LATCC) and that originally proposed for NERC. At LATCC aircraft are coordinated automatically using ‘standing agreements’, but for NERC it was intended that the planner would explicitly accept every flight. This has proved to cause too high a workload, so the system has been changed to follow the LATCC model. Making this functional change has a significant impact on the user interface. In the original design we had ensured that strips only moved into the ‘accepted’ bay under the planner’s control. This is a key feature because it prevents the strips with which the planner is interacting from moving unpredictably. If the automatically accepted strips were placed in the correct place in the bay they would break this rule. We dealt with this problem by building on an existing feature of the user interface. The strips are ordered by sorting using keys that include dynamic data, hence automatic re-sorting would have caused unpredictable movement. However, the strip order is important, and so we wanted to restore it quickly but without imposing a housekeeping task on our users. Our solution to this was to identify a set of ‘opportunities’ for re-sorting – for example when the planner manually brought a strip into the bay. We adopted this technique for the automatically accepted strips: they are initially put in a separate section of the bay and then moved to their ‘normal’ position at the next re-sort opportunity.
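A minimal sketch of the deferred re-sort idea described above: automatically accepted strips are parked in a separate section and merged into their normal positions only at a re-sort opportunity. This is illustrative only, not NERC code; the class, method and key names are hypothetical.

# Illustrative sketch of deferring the re-sorting of automatically accepted
# strips until a 're-sort opportunity', as described above; names are hypothetical.
class StripBay:
    def __init__(self, sort_key):
        self.sort_key = sort_key   # e.g. a function over (dynamic) flight data
        self.strips = []           # strips in their current display order
        self.pending = []          # auto-accepted strips held in a separate section

    def planner_accept(self, strip):
        # A manual acceptance is itself a re-sort opportunity.
        self.strips.append(strip)
        self.resort_opportunity()

    def auto_accept(self, strip):
        # Do not disturb the planner: park the strip in the pending section.
        self.pending.append(strip)

    def resort_opportunity(self):
        # Merge pending strips into their normal positions and re-sort once.
        self.strips.extend(self.pending)
        self.pending.clear()
        self.strips.sort(key=self.sort_key)

# Example: an auto-accepted strip stays pending until the planner next acts on the bay.
bay = StripBay(sort_key=lambda s: s["estimate_time"])
bay.auto_accept({"callsign": "ABC123", "estimate_time": 1030})
bay.planner_accept({"callsign": "XYZ789", "estimate_time": 1025})
print([s["callsign"] for s in bay.strips])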

SILENT HANDOVER ALERTS
In addition to its primary civil air traffic function, NERC also supports the London Joint Area Operation (LJAO), which controls military flights traversing civil airspace. LJAO procedures include a ‘silent handover’ in which controllers follow a system-mediated dialogue to pass an aircraft from one team to another (silent because the controllers don’t need to speak to each other). This is supported by an existing flight data processing system called EDDUS, which has been combined with the other NERC systems to provide an integrated user interface. As with the civil user interface, the design is built around electronic flight strips. We have found two problems with this part of the LJAO user interface. First, the strip interactions suffer from an equivalent problem to that with the civil strips described above. The same solution has also been applied. Secondly, we found that the offering controller did not notice the display indications telling him or her that the receiving controller had initiated the dialogue. This has been fixed by applying our standard alert for notifying the controller of a new task – we infill the appropriate area on the strip [Cozens 1998]. With hindsight, this highlight should have been part of the original design. It was overlooked because we incorporated the ‘working’ EDDUS user interface directly into the NERC context. The EDDUS user interface does not distinguish between low-priority data updates and those that indicate new work – it demands positive acknowledgement of every change – and we simply mapped this onto the NERC design for low-priority changes. Had we prototyped silent handover we would most likely have identified the need to alert the controller to the start of a new transaction (almost certainly – a sister project looking at the main military ATC centre did prototype silent handover and did find the problem). However, project constraints on time and effort limited the LJAO prototyping to a simple facade to validate the strip layouts. This choice was primarily because the new civil ATC functionality took precedence; the fact that we were transplanting the EDDUS user interface into NERC reinforced this view.

RADAR COLOURS
One of the most difficult and contentious elements of the NERC user interface has been the use of colour, especially for the radar display. Not only was this a pioneering effort – previously only monochrome displays had provided sufficient resolution – but we were also substantially increasing the quantity of information displayed for each radar track by including significant flight plan and status data. To determine an appropriate mixture of colour, symbology and text labelling we prototyped a variety of solutions before settling on a basic scheme. For the actual colours we adopted a scheme devised by a NATS research project, ‘DAWS’. This used groups of colours matched in saturation and brightness to create a ‘layered’ effect. In our original design, the base colour is a mid-grey, background maps use dull colours, mid-range colours are used for highlights and targets, and track data block text is black. We diverged from the DAWS recommendations in two respects: we did not always use filled target symbols and we did not use white ‘flags’ behind the track data block text. We used filled/open symbols to show a piece of status data, and rejected flags both for making the data blocks too prominent and because they obscured the maps. While the system was being developed, ongoing simulation work indicated problems with this scheme, and once we started large-scale tests with the real system it was apparent that the prominence of the targets and the legibility of the track data block text were not sufficient. This has been remedied by making the targets yellow (and normally filled) and track data block text white. With hindsight we can see that when we diverged from the DAWS recommendations we did not go far enough. At the same time, we also made a number of other changes to the colour scheme. In particular we refined the way radar tracks were classified, changing from two main types (foreground and background) to three (foreground, background and ‘others’ – tracks without flight plans). This was needed because the NERC radar system was ‘seeing’ many aircraft outside controlled airspace which are ‘clutter’ for NERC controllers. We were not surprised that the original colours were not good enough; we had concluded the prototyping work with a design we were confident could be made to work and so could commit to the system development.

CONCLUSIONS
We can, arguably, claim to have followed ‘best practice’ in the design of the NERC user interface, and we think our experience shows the ‘usability limits’ inherent in large, bespoke developments (especially under a firm fixed-price contract). We suggest there are three fundamental factors that contribute to these limits: project constraints on effort and time will always restrict the number of iterations of the design (although we feel that had we done more prototyping we might well have improved the ‘wrong’ parts of the design); some aspects of the users’ tasks will only make their presence felt in the real world (this is especially true in a real-time context); and the overall system requirements evolve and/or our understanding of what is required improves over time. The remedy, of course, is to maintain the usability focus through the latter stages of system development and deployment and, where necessary, be prepared to iterate the delivered system.

REFERENCES
Cozens, J. (1998) The User Interface of Britain’s New En-Route Centre for Air Traffic Control, HCI’98 Industry Day Paper.

Designing a User Interface for Digital Dissection

Dunja Hövik, Gunnar Berg and Christoffer Schander

Department of Zoology, Göteborg University, Medicinaregatan 18, Göteborg, Sweden

ABSTRACT
This paper discusses the problems that were faced when designing the user interface of educational software for biology students. The purpose of the software is to let students perform a digital dissection.

KEYWORDS: Interface design, digital dissection.

INTRODUCTION
A large increase in student numbers and a lack of resources forced those responsible for the basic course in Biology at the Department of Zoology, Göteborg University, to try a new educational tool. Work commenced on educational software simulating the task of dissecting the common laboratory rat (Schander et al., 1996). The software is developed in close contact with the target group, which has little or no previous computer experience.

THE FIRST DESIGN
The development team consisted of a senior lecturer and a PhD student with an interest in computers. The programming and the interface design were done by the PhD student. The software has a hierarchical and controlled structure. Performing a single task meant changing between a number of screens. You can navigate either up or down in the hierarchical structure. It does not offer menus or other shortcuts. The interface has a colourful design with large buttons.

The software was not implemented in full. Only half of the planned content for the laboratory rat was implemented. The programming technique builds on moving between different frames using hardly any code at all. It became more and more difficult to keep an overview of the different frames, and it became very time-consuming to navigate between the frames when developing the software.

THE SECOND DESIGN
A software designer was added to the team. A restructuring of the design began because of the growing complexity of the frames. The interface was redesigned with the students’ real-life situation when performing a dissection, the students’ laboratory, as a metaphor (Allwood, 1991). The design uses a more flexible structure, with menus to enable navigation from any module of the software to any other module. A toolbar gives immediate access to video, sound, microscopical images and animations, and navigation to the previous, first and next module. The user can both point at a structure for identification and point at the name of a structure to get its location. The user remains in one single frame for each separate task.

The design uses a minimalist approach concerning colour and decoration. The structure of the software and the programming technique make it possible for the software to grow in content without causing problems for the programmer/developer. The drawback is that you need to be a more skilled programmer than for the previous design.

SUMMARY
The problem concerning the complexity of working with the software in the first design was solved by the second design. Instead, the level of skill needed by the programmer increased. The bright colours of the first design were changed to a basic greyscale interface. The only item with colour on the screen is the laboratory animal being dissected, thus making it easier for the student to focus visually on the anatomy. Menus give the user the possibility to navigate more freely between the different modules if the user so chooses. To help the user concentrate on the task being performed, the user remains in one single frame for each separate task.

REFERENCES
Allwood, C. M. (1991) Människa-dator interaktion, ett psykologiskt perspektiv. Studentlitteratur, Lund, Sweden.
Schander, C. and Berg, G. (1996) Computer based alternatives to animal use in higher education. Försöksdjurens roll i den moderna biologin, Scan-LAS 26th symposium, p. 77.

Figure 1: Screen snapshot from the first design

Figure 2: Screen snapshot from the second design

The Motivational User Interface

Linda Hole1, Simon Crowle1 and Nicola Millard2

1School of Design, Engineering & Computing, Bournemouth University, Poole, BH12 5BB, England

2BT Laboratories, Martlesham Heath, Ipswich, IP5 3RE, England

ABSTRACT
The interface design focused on performance support for the advisers in a customer service centre. It evolved in close collaboration with users handling the incoming calls. A study of the advisers’ motivation to perform their jobs produced a rich set of data which generated concepts presented as motivators in a Motivational User Interface (MUI). The 3D graphical objects which form the MUI were well-received by the users, who offered further design suggestions.

KEYWORDS: GUI, performance support, call centre

INTRODUCTION
Development of the Motivational User Interface (MUI) was focused on the BT Customer Service Centre advisers’ work with a call handling system. The aims of the project were:

• to discover what motivates customer service advisers to achieve their targets;
• to investigate how the interface could support their selling capabilities;
• to ascertain what other performance support could be added to the interface;
• to design an interface which would prove enjoyable and motivating to use.

The designers were based in the customer service centre, so the team gained a clear insight into the working lives of the advisers through interviews, team-based card sorting exercises, and attitude questionnaires. The data gathered provided the foundation for a useful set of concepts which are presented in the MUI.

ADVISER MOTIVATION
The study found that there were four aspects to the advisers’ motivation:

• extrinsic, service-based motivators from both the organisation and the customers, plus
• the advisers’ intrinsic motivation in terms of what they want from the job and in their relationships with the customers and other members of their team.

The advisers felt that their company wanted them to be efficient ‘problem-solvers’, providing information which included both service and product data. They needed to use good interaction skills to exchange information, and to promote customer loyalty. Advisers considered selling to be part of a larger picture in which they represented good service to the customers on behalf of the company. They felt that they were good advisers, but that they could improve performance. The advisers realised that the customers wanted a pleasant and positive response to their calls, and that they preferred to obtain all the information they needed within one phone call. Customers’ reactions could affect the advisers’ spirits throughout their shift, which was often considered tiring. At times, system demands caused them to break their conversation with the customer to concentrate. The advisers did their job for the rewards of a salary and people contact. Teamwork was an important aspect of the advisers’ job: other team members could help them in their work. Many advisers agreed that discussing the work with others helped them do their job better.
The customer service centre advisers valued the intrinsic motivations inherent to their job (customer and team contact, and the satisfaction gained from dealing with difficult problems or customers) at least as much as the extrinsic motivators in place at the call centre. That is not to say that the extrinsic motivators were not effective, but they appeared not to exploit the full potential of the advisers or their teams. To motivate the advisers in their work, a good mixture of both extrinsic and intrinsic motivators should be sought. Addressing this problem involved identifying those aspects of the advisers’ work which they valued most and making them more visible in the work context.

THE MUI OBJECTS
The factors arising from the advisers’ daily work activities were acknowledged in the design of performance-relevant objects which appear in the MUI. The objects are presented as ‘concrete’ components of the advisers’ working environment; their design evolved from earlier work based on the call handling system (Millard, Hole & Crowle, 1997). There are six main elements to the motivational interface, which offers the customer advisers a tailorable working environment. This is achieved initially by their selection of an outside world, which provides the backdrop for their personal workspace. There are currently two environments within which the adviser can work: either a tropical island beach house or a courtyard in Florence. The customer capsule is a self-contained representation of the customer which appears as the call is taken. It acts as a ‘key’ to the database, invoking the relevant customer information to appear at the interface. The customer book is an amalgamation of the three separate books prototyped in the earlier interface: the ‘phonebook’, ‘streetfinder’ and ‘bills file’ appear as sections of the customer book, accessed by filing tabs. Further sections concerning ‘issues’ and ‘bookings’ (for appointments) have been added to the book. Fields of information held in the customer book can be ‘peeled off’ as notes, and then edited and attached or sent to other objects. The communication cube offers a variety of functions, via six faces:

• the customer face provides visual communication with the caller;
• the two buddy faces provide email communication with other team members;
• the team-leader face provides direct access to the line manager;
• the team face provides communication between team members;
• the world face provides intranet access and email communication.

The product information is generated in the form of script bubbles from the intranet site. These are used to break the monotony of reading fixed product scripts to the customer. The moodies provide qualitative measures of the types of callers the adviser has to deal with during the shift. They appear as little images of people whose colour conveys the mood of their enquiry.

PERFORMANCE SUPPORT AT THE INTERFACE
Use of the MUI provides performance support in the following ways:

• the adviser handles a stream of customer calls: these are represented in the outside world, at the customer face of the communication cube, by the customer capsule and the moodies; a phonepad offers dial-out facilities to return calls.

• the adviser acts as a go-between from the customer to the database: database access is simplified by the facilities offered by the customer capsule, the customer book, and email and intranet access via the communication cube.

• motivators encourage the adviser to perform well within the team: the buddy, team-leader, and team faces of the communication cube provide moral support, and the moodies and email enable the advisers to signal that they are having specific difficulties with particular customers.

PROTOTYPING AND EVALUATION
The MUI prototype was developed in Macromedia Director™. It demonstrates the behaviour of the interface objects and expands on their use during call handling scenarios, with a limited amount of data handling. The Director prototype was used as a focus for discussion with the customer advisers, to gauge their initial reactions to the Motivational User Interface idea. The concept was well-received by the users, who offered further design suggestions. The next stage of the work will involve user performance trials, with the MUI re-engineered to act as a live interface for call handling.

ACKNOWLEDGEMENTS
Thanks are due to BT Laboratories for their support for this project.

REFERENCES
Millard, N., Hole, L., & Crowle, S. (1997) From Command to Control: interface design for future customer handling systems, in S. Howard, J. Hammond & G. Lindgaard (eds), Human-Computer Interaction INTERACT '97, London: Chapman & Hall, pp. 294-300.

Demonstration of the Development and Use of User Interaction in Computer Games

Tim Heaton

Software Manager, Gremlin Interactive, The Green House, 33 Bowdon Street, Sheffield, S1 4HA, UK

ABSTRACT
Gremlin Interactive are one of Europe's foremost computer game developers and publishers. The demonstration will endeavour to show the current state of the art in user input and output in games.

KEYWORDS: Computer games, force feedback, motion capture, computer graphics.

INTRODUCTION
Computer games, because of the fast-moving nature of the market and the obvious appeal of innovation, use a bewildering array of human-computer interaction techniques. We will display a series of games demonstrating a variety of user input and user feedback techniques. An informal presentation will attempt to discuss the following:

THE DEVELOPMENT PROCESS
Computer games aren't developed by maniac 16-year-olds in their bedrooms anymore. They're developed by teams of ten or twenty people over an 18-month to 2-year period. They use the latest software engineering techniques, and particular care is taken with object-orientated design, version control, code re-use and quality assurance.

MOTION CAPTURE
Motion capture allows us to record human movement in three dimensions and place that data onto computer-generated characters. This is a quicker and more accurate method of animation than 'hand' animation, where an artist adds animation to a character using a series of computer tools. It does, however, have its own set of problems. Gremlin Interactive has a full-time motion capture studio in Sheffield and is perhaps the most experienced of all games companies in its use.
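As a rough illustration of the basic idea (this is not Gremlin's pipeline; the data shapes and joint names are hypothetical), captured joint rotations can be copied frame by frame onto the corresponding joints of a game character's skeleton:

    # Minimal sketch: retarget captured joint rotations (in degrees) onto a character, frame by frame.
    # A production pipeline would also handle differing bone lengths, filtering and blending.

    mocap_frames = [
        {"hip": (0.0, 0.0, 0.0), "knee_l": (35.0, 0.0, 0.0), "knee_r": (10.0, 0.0, 0.0)},
        {"hip": (0.0, 2.0, 0.0), "knee_l": (40.0, 0.0, 0.0), "knee_r": (5.0, 0.0, 0.0)},
    ]

    character_pose = {"hip": (0.0, 0.0, 0.0), "knee_l": (0.0, 0.0, 0.0), "knee_r": (0.0, 0.0, 0.0)}

    def apply_frame(pose, frame):
        """Copy each captured joint rotation onto the matching character joint."""
        for joint, rotation in frame.items():
            if joint in pose:
                pose[joint] = rotation
        return pose

    for frame in mocap_frames:
        posed = apply_frame(dict(character_pose), frame)
        # the renderer would draw the character in this pose before the next frame arrives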

USER INPUT
Computer games live or die by the speed and appropriateness of their input. The essence of gameplay is frequently determined by the 'feedback loop' of input and instantaneous response. Equally, the move to games in three dimensions has meant the development of a variety of input methods to deal with this. The latest techniques will be demonstrated, including radically new joysticks, 'orb' input devices, VR headwear and possibly a motion control chair.
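A minimal sketch of the input-response feedback loop being described (a generic game loop written for illustration, not code from any of the games shown):

    import time

    def poll_input():
        """Stand-in for reading the joystick or pad state this frame (hypothetical device API)."""
        return {"x_axis": 0.0, "fire": False}

    def update_world(state, controls, dt):
        """Apply the player's input immediately so the response feels instantaneous."""
        state["player_x"] += controls["x_axis"] * 120.0 * dt  # movement speed in units per second
        return state

    def render(state):
        pass  # draw the frame; omitted in this sketch

    state = {"player_x": 0.0}
    previous = time.time()
    for _ in range(3):  # a real game loops until the player quits
        now = time.time()
        dt, previous = now - previous, now
        controls = poll_input()
        state = update_world(state, controls, dt)
        render(state)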

Figure 1: Chris Woods (ex-Sheffield Wednesday) being captured for Actua Soccer

DISPLAY TECHNIQUES
Demonstrations of 3D hardware-accelerated graphics, stereo imaging and three-dimensional sound will be given.

A Software Tool For Evaluating Navigation

Rod McCall and David Benyon

Dept. of Computing, Napier University, Edinburgh, EH14 1LT
{r.mccall,d.benyon}@dcs.napier.ac.uk

ABSTRACT
Traditional methods of evaluating the usability of software systems largely ignore the problem of navigation within computer-based environments. In contrast, the 'Navigation of Information Space' paradigm (Benyon and Höök, 1997) places the navigability of the system as central. In order to make the ideas of navigation available to system developers, we have produced a method of evaluation called ISEN (Information Space: Evaluating Navigation). This demonstration will show how the software version of the method can be used and how it provides a complementary approach to the evaluation of user-system interaction.

KEYWORDS: Navigation, Information Space, Evaluation

INTRODUCTION
The purpose of the research described in this demonstration is to develop a "navigational instrument" which will allow designers to evaluate the navigational features of systems they have designed. As well as being of value at the evaluation stage, it is hoped that such an instrument will draw attention to navigational issues during the design process.

Following an extensive review of navigation (Dahlbäck, 1998; Munro, Höök and Benyon, 1998) from a wide variety of perspectives, such as traditional geography (Lynch, 1967), cognitive (Downs and Stea, 1977) and social approaches to navigation, we have arrived at a number of features that are central to the efficacy of navigation in information spaces. These have been combined into an evaluation approach known as ISEN (Information Space: Evaluating Navigation).

The current version of ISEN exists in two forms: a paper checklist format and a software prototype. The demonstration will illustrate the use of the latter, which includes the basic forms in the checklist as well as graphical or audio examples and references to relevant literature. The software system uses the twelve areas identified in the checklist as important aspects of navigation: use of sound, use of metaphor, the distribution of objects in the space, the conceptual structure and dynamics of the space, navigational aids, transportation aids, informational signs, directional signs, consistency of signs, landmarks, users in space and finally user enjoyment of space.
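A minimal sketch of how the twelve checklist areas might be represented in the software version (the structure and field names are assumptions for illustration, not the ISEN implementation itself):

    # Hypothetical representation of the ISEN checklist: each area holds the evaluator's
    # rating and notes plus pointers to illustrative examples and literature.
    ISEN_AREAS = [
        "use of sound", "use of metaphor", "distribution of objects in the space",
        "conceptual structure and dynamics of the space", "navigational aids",
        "transportation aids", "informational signs", "directional signs",
        "consistency of signs", "landmarks", "users in space", "user enjoyment of space",
    ]

    def new_evaluation(system_name):
        """Create an empty evaluation record with one entry per checklist area."""
        return {
            "system": system_name,
            "areas": {area: {"rating": None, "notes": "", "examples": [], "references": []}
                      for area in ISEN_AREAS},
        }

    evaluation = new_evaluation("word processor revision options")
    evaluation["areas"]["landmarks"]["rating"] = 3
    evaluation["areas"]["landmarks"]["notes"] = "Toolbar icons act as recognisable landmarks."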

BACKGROUND: WHY LOOK AT NAVIGATION?
As users we live, work and relax in information space (Benyon and Höök, 1997). In these spaces we try to find our way around (or navigate). There are three types of activity which we can consider as forms of navigation: wayfinding, exploration and identifying objects. Wayfinding consists of: orientating oneself in the environment, choosing the correct route, monitoring the route and recognising a destination has been reached (Downs and Stea, 1977). We see this as analogous to how we navigate
our way round in computer-based environments. For example, if a user is seeking to set one of the revision options in Word, firstly they have to find where they are in the interface and choose the correct route (go to the Tools menu, select Options...). The users have to monitor their progress and need to be aware, for example, that by selecting Revision the relevant options have not been set. Finally, they have reached the destination when they have set the correct options and clicked on the "OK" button.

Within any information space there exist a number of objects/artefacts (spatial phenomena) which the user will need to get to know and understand. Shum (1990) defined these in the context of hypertext systems in terms of locational information, which deals with distance and direction, and attributional information, which deals with issues such as colour, sound and information content of the object. Borrowing ideas from urban planning (Lynch 1967), it is possible to use his basic concepts (landmarks, districts, paths, nodes and edges) and apply these to navigation in information spaces. Using landmarks as an example, research has indicated that clear landmarks do aid users in finding their way around, and that in HCI terms grouping (creating districts) related operations (icons or menu options) together helps users to gain a better understanding not only of where options are but also of what they might do. In a similar way, we have taken ideas of exploration, awareness, and social activities in space (such as asking others for directions rather than following maps) to compile the twelve categories identified above.

DEMONSTRATION
The demonstration will illustrate the use of the software-based version of the checklist, which contains examples as well as references to literature. We are currently validating the software by carrying out a number of real system evaluations.

ACKNOWLEDGEMENTS
This project is funded under I3, the Esprit Long-Term Research projects programme. Acknowledgements are also due to the other members of the PERSONA project team.

REFERENCES
Benyon, D. R. and Höök, K. (1997). Navigation in Information Space. In S. Howard, J. Hammond and G. Lindgaard (eds.), Human-Computer Interaction, INTERACT '97, Chapman & Hall.
Dahlbäck, N. (ed.) (1998). Towards a Framework for Design and Evaluation of Navigation in Electronic Spaces, 1(1): 13-29. ISSN 1100-3154.
Downs, R. and Stea, D. (1973). Cognitive Representations. In Image and Environment. Chicago: Aldine.
Lynch, K. (1967). The Image of the City, MIT Press.
Munro, Höök and Benyon (eds.) (1998). Workshop on Personalised and Social Navigation in Information Space, Roslagens Pärla, Sweden, Swedish Institute of Computer Science.
Shum, S. (1990). Real and Virtual Spaces: Mapping From Spatial Cognition to Hypertext. Hypermedia, 2(2): 133-158.

Employment Service: Transforming Customer Services through IT

Nick Rousseau1, Janet Hinchliff2 and Bronwyn Robinson3

1 Occ. Psychology Division, Employment Service, B2, Porterbrook House, 7 Pear St., Sheffield S11 8JF, [email protected]

2 Process and Systems Division, Employment Service, L3, Steel City House, West Street, Sheffield, UK.

3 Citizen Connect Ltd., Wharfebank House, Ilkley Road, Otley, LS21 3JP, UK. [email protected]

KEYWORDS: Public Access system, touch-screen, user organisation, Internet.

INTRODUCTION
The exhibit and demonstrations will provide delegates with an appreciation of the HCI issues the Employment Service is addressing as it continuously seeks ways to improve services to jobseekers through greater use of IT. The Employment Service is a large (35,000 employees and a network of 1,000 Jobcentres) public sector organisation that makes extensive use of information. The New Deal initiatives launched by the Labour Government have added a major impetus to change the face of services to jobseekers and have placed greater emphasis on working with other organisations in helping diverse groups of people from welfare into work. Ministers' requirements for change are both broad-ranging and urgent. Human Computer Interaction, for the Employment Service, means helping in the development or procurement of systems that meet real business needs within very tight timescales, and ensuring that system users and other stakeholders are enabled to benefit from these systems in supporting real world tasks. As such, considerable attention needs to be spent on enabling the exploitation of the systems as well as on their design and development. It is HCI in a real world context!

THE EXHIBIT
The exhibit will contrast the technology and environment of Jobcentres of the past with the new style of Jobcentre that is being piloted and rolled out, featuring: self-service touch screen kiosks where jobseekers can search for vacancies directly from a huge database and submit themselves to them using the integral telephone; and new desks and PC-based systems that enable ES people to provide an increasingly professional service with guidance available on-screen. In addition, the exhibit will show delegates other systems that are being developed to manage such services and to support Head Office policy colleagues in interpreting Ministers' wishes and guiding practice in the field. Around the exhibit area, there will be posters describing how these systems have been developed, giving further information about the emerging vision of the ES of the future, and acknowledging the range of organisations and partners who we have worked with to produce this. In particular, the exhibit will make clear to delegates the huge agenda of Human Computer Interaction issues that have been identified in the course of this work, and the practical steps that have been taken to address them.
The exhibit will be in place throughout the duration of the conference. In addition, we will be offering a timetabled demonstration of our Open Access System - the touch-screen kiosks we have developed. We have also invited Lifeskills, a separate company developing a system for jobseekers, to demonstrate their product, to enable delegates to see other ways in which IT can be used to support jobseeking.
N.B. The exhibit will make clear where the Employment Service has benefited from the services of external organisations, but this does not constitute an endorsement of their services or products. Lifeskills is one of a number of organisations developing services and systems for jobseekers with whom the Employment Service is working.

THE OPEN ACCESS PROJECT
Since 1995 the Employment Service has been exploring how much of job-broking could be carried out directly by jobseekers via the use of self-service touch screen kiosks. There are two key issues here:

• what aspects of the process of searching and submitting for suitable vacancies could be taken up by jobseekers?

• what proportion of jobseekers could do this for themselves with the right system and support?

Insofar as job-broking can be left to jobseekers to carry out, the Employment Service will be able to concentrate the time of its staff on tasks and jobseekers where it adds more value (in particular supporting those most disadvantaged in the labour market). The approach taken has combined:

• studies of jobseekers' behaviour in searching for vacancies using the old-style Vacancy Display Boards;

• consultation with Jobcentre staff regarding their perceptions of jobseekers’ requirements;

• evaluation of early system prototypes with jobseekers and heuristic evaluation of the user interface by HCI specialists;

• major pilots of systems in Jobcentres where Vacancy Display Boards have been removed, with evaluations including jobseeker reactions and business impact.

The Open Access System (OAS) represents the second pilot of touch screen technology to assist jobseekers in their search for work. In an earlier pilot, with a more limited system, jobseekers would search for work using the terminals and would take a printout to ES staff to be submitted to the job. The Open Access System takes the pilot a step further and, as well as allowing people to search for jobs using touch screen technology, it also allows them to submit themselves to the job using a phone integral to the terminal. The first system piloted was well received and extensively used by jobseekers. We have identified a number of key challenges in this work:

• We have a very wide range of clients (disabled clients, people with literacy problems, executives, labourers) and it is vital that we avoid further disadvantaging people who might already find difficulty with jobsearch.

• We could not rewrite our vacancies database and this meant we needed to find how to provide the best front end to this data, which constrained the design options.

• We had to re-design current job-broking processes to integrate with the automated system, which we did with help from Jobcentre staff.

• One of the biggest challenges was how to identify jobseekers using the system so that we could monitor what vacancies they apply for. We decided on magnetic swipe cards; this system required the least amount of client activity to access the system.

• The nature of the work led to concerns amongst staff about risks to their jobs - our experience has been that by involving them in the planning and implementation of the pilots, and communicating openly about their scope, staff have accepted the changes.

• The enthusiasm of the developers had to be kept in check. Kiosk technology was new to them and they were keen to produce a system that would look impressive in the I.T. market place. We wanted to maintain the 'simplicity' of the interface, and wanted to ensure the system could be produced in the short timescales available.

The demonstration will provide delegates with an understanding of the system currently being piloted and the issues and challenges we are addressing.

LIFESKILLS CITIZEN CONNECT
Currently in development, this is a WWW tool that harnesses the power of the Internet to provide individuals with an on-line resource giving information, guidance and a support service for finding work, choosing careers and entering into lifelong learning. The package will include:

• tools for individuals to develop profiles of their interests, skills and values;
• guidance packages on key areas of jobsearch such as CVs, being interviewed;
• support for the development and review of key skills such as communication, time management, getting on with people;
• a database of occupational groups, each containing standard information and with video material to help communicate the nature of the work involved.

The package can be linked to job banks so as to provide current vacancy information. Although the focus in the design is to equip the individual for self-discovery and self-navigation, it will be offered through intermediary agencies or possibly in Jobcentres direct. Citizen Connect is intended to be a networking tool which agents can use to empower individuals to navigate the rapidly changing world of work. In particular, there is a facility for directly comparing two different occupations or vacancies. This is intended to enable users to develop an appreciation of their own needs and of the ways in which jobs vary.

The benefits of the resource for the individual are:

• Connected to the world of work and employment
• Given skills in navigating the labour market successfully
• In command of the process
• Equipped to discover themselves in terms of skills and talents
• Able to make informed choices related to employment and learning
• Connected to a support system for sustainable career progression
• Developing confidence and fluency in information and communication technology

The benefits of the resource for employment agencies are:

• Work in partnership with individuals
• Create greater opportunities for dialogue with clients at their level of interest
• Provide individuals with much wider access to services than currently available by providing inter-service connectivity.

AkuVis: Exploring Visual Noise

Katy Börner and Ipke Wachsmuth

Faculty of Technology, University of Bielefeld, PF 1001 31, D-33501 Bielefeld, Germany.

ABSTRACT
The AkuVis (Interactive Visualization of Acoustic Data) project is under development by researchers of the University of Bielefeld and Governmental Institutions. It seeks to create a highly interactive virtual environment of modelled acoustic data in order to sensitize and improve human decision-making.

KEYWORDS: Visualisation, Decision-support.

INTRODUCTION
The AkuVis (Interactive Visualization of Acoustic Data) project is under development by researchers of the University of Bielefeld and Governmental Institutions. It seeks to create a highly interactive virtual environment of modelled acoustic data in order to sensitize and improve human decision-making. In particular, it attempts to enhance the understanding of noise emission data as a basis for governmental decisions about noise protection regulations for streets or industrial areas.

AKUVIS
A well-established method of visualizing data from noise pollution simulations is the two-dimensional plot. However, decision makers are often uncomfortable with this kind of presentation and the complexity inherent in these plots. In AkuVis, acoustic data are mapped into a three-dimensional visual and acoustic space as visual noise that can be sensed by eyes and ears and explored interactively.

Input data provided by the German TÜV are used to extract three-dimensional models of road maps and houses. Furthermore, numerical data of noise pollution at discrete points, modelled for night and day conditions, are mapped onto the three spatial dimensions - x/y for position, z for decibel level. Regions showing the same decibel level show the same colour in the resulting acoustic landscape. To simulate the noise conditions in a certain region, three general kinds of sound are employed. Permanent background noise provides an impression of the general dB value. Transitory noise equals temporary sounds of, e.g., a passing truck. Random events like a bicycle ring or a step on the accelerator pedal are very short. The sounds are merged depending on the decibel value of a selected region and the daytime.
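A minimal sketch of the kind of mapping being described; the thresholds, colours and sound layers below are illustrative assumptions, not values taken from the AkuVis system:

    # Map a noise sample at (x, y) with a decibel value onto landscape height and colour,
    # and choose the sound layers to merge for that region.
    def landscape_point(x, y, decibel):
        colour = ("green" if decibel < 45 else
                  "yellow" if decibel < 60 else
                  "orange" if decibel < 75 else "red")
        return {"x": x, "y": y, "z": decibel, "colour": colour}

    def sound_mix(decibel, daytime=True):
        layers = ["permanent background"]  # conveys the general dB value
        if decibel >= 55:
            layers.append("transitory (passing truck)")
        if daytime and decibel >= 45:
            layers.append("random event (bicycle bell)")
        return layers

    sample = landscape_point(12.0, 7.5, decibel=62)
    print(sample, sound_mix(62, daytime=False))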

The Responsive Workbench developed at the German National Research Center for Information Technology is used as a virtual reality output device. It allows projecting stereoscopic graphics onto a surface of a translucent, 6' by 4' tabletop. Users wear LCD shutter glasses that allow for time-multiplexing different images to different eyes and are synchronized by an infrared signal from an emitter located near the scene. A stylus glove is used to pick virtual objects as well as to manipulate interaction elements. Electromagnetic position sensors keep track of the users' eye and hand positions. One user acts as active viewer, controlling the stereo projection
reference point. Other participants see stereo, but from the tracked person's perspective. A stereo audio system provides acoustic feedback. Several computers are used to process tracker data, run the application and rendering, as well as to simulate the sound in real-time.

Running the application, the user is free to select several data sets. The Houses View presents the houses and streets of the modelled region only. The Night and Day Views project the decibel values for night and day pollution respectively. Moreover, the user may activate certain features: introducing the street names, turning on the sound, replacing active head tracking by a standard normal view, or inserting a virtual sensor in the shape of a human ear into the acoustic landscape.

Visually, users experience a richly detailed, interactively changing landscape illustrating the noise conditions in a city district. Different positions of the tracked glasses result in different perspectives of the scene, giving a free view of previously hidden objects. Acoustically, the landscape can be explored by way of the virtual sensor. The ear's position relative to the acoustic landscape determines the sound level, frequency, and kind of sound samples played to simulate the noise conditions at the selected point. Several persons can discuss what they see in the visual noise. Our project partners, who have also tested the setting, found it very helpful for understanding complex noise emission data.

As a next step, the project aims to implement visual and acoustic zoom functionality such that different regions of a city can be selected and explored in detail. For the acoustic zoom, the height of the ear determines the diameter of the region observed. Placing the ear at street level, users can explore the local street noise at this position. Moving the ear up results in a larger diameter and thus in a global mixture of sounds of a certain region.
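A rough sketch of the acoustic zoom relationship described above (the proportionality constant and data layout are assumptions made for illustration):

    import math

    def zoom_region(ear_x, ear_y, ear_height, samples, k=2.0):
        """Mix all noise samples within a diameter proportional to the ear's height."""
        radius = (k * ear_height) / 2.0  # higher ear -> wider, more 'global' mixture
        return [s for s in samples
                if math.hypot(s["x"] - ear_x, s["y"] - ear_y) <= radius]

    samples = [{"x": 0.0, "y": 0.0, "db": 70}, {"x": 5.0, "y": 1.0, "db": 55}]
    print(zoom_region(0.0, 0.0, ear_height=1.0, samples=samples))  # street level: local noise only
    print(zoom_region(0.0, 0.0, ear_height=8.0, samples=samples))  # raised ear: wider region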

In the accompanying video, the real-world problem, the setting used, and the interactive application are sketched.

ACKNOWLEDGEMENTS
The authors are grateful to Heiko Rommel and Timo Thomas, who did substantial work on the implementation of the system. Additionally, we thank Elke Bernauer for providing data material, as well as Peter Serocka and Marc Latoschik for giving technical advice.

REFERENCES
Börner, K., Fehr, R., Wachsmuth, I. (1998) 'AkuVis: Interactive Visualization of Acoustic Data', 12. Internationales Symposium Informatik für den Umweltschutz, Universität Bremen, Germany.
Börner, K. (1998) 'AkuVis: Interactive Visualization of Acoustic Data', http://www.TechFak.Uni-Bielefeld.DE/techfak/ags/wbski/akuvis/

Learning pathways and strategies of novice adult learners: a user-perspective approach

Joan Aarvold and Bob Heyman

Faculty of Health, Social Work & Education, University of Northumbria at Newcastle

ABSTRACT
The elicitation and analysis of users' perceptions of their learning are not strong features within the dominant HCI research paradigm. The normative theoretical approach of developing user-models, against which subsequent user behaviour is assessed, contains an inbuilt tendency to denigrate users. Competent adults will bring a range of skills and experiences to their learning. Most will succeed to some degree in learning to use a computer. It follows, then, that the techniques they employ, far from being naive or counter-productive, must have some utility. Qualitative designs, where the subjective experiences of the adult learners are the central focus, are suggested.

M.PHIL STAGE (1993-5)
A study based on classroom observations of 34 novice adult computer users in a 'real world' environment suggested that multiple learning pathways operated. The findings supported the view that learners clearly differed in their rate of learning, their reasoning, and their recognition and management of problems. Teachers invariably attempted to correct difficulties, usually by the shortest route. Learners, on the other hand, sought understanding of the events confronting them. Although much was learned from the classroom observations, the learner's sense of the new 'universe' could neither be explained nor understood through observation alone. An individual, user-centred approach was therefore devised.

PH.D STAGE (1996 ->)
The methodological shift was reflected in the move from an etic to an emic perspective, and an empirical shift involved the use of a theoretical sampling frame. Six novice students, with different levels of self-expressed anxiety, were observed during normal, introductory computer classes. To capture as much as possible of the meaning of events for novice computer users, a series of video recordings (31 hours) was made of three different novice (anxious) adult learners. The small numbers in the study are acceptable in qualitative work where the aim is to understand rather than generalise. The learners followed similar introductory, graphical word-processing manuals. The data were analysed using a grounded theory approach (Strauss and Corbin 1990).

SOME FINDINGS: LEARNING STYLES
Contrasting attitudes and expectations were found between similar students. Example 1: Student S explored the new 'universe' with a pioneering spirit. Although anxious, he was not put off by mistakes. He carried on regardless: 'You just have to try something else - it's got to be done'. S didn't understand many of the signs ('I've never heard of them'), but he could follow instructions. His intrepid style helped him through the manual and pleased his teacher. However, history reminds us that such explorers often met a sticky end. Example 2: Student G knew his was going to be a difficult journey. He consulted his 'map' at every stage; unfortunately it was seldom helpful. There were long pauses
when he just stared at the screen and his book, shaking his head. He did complete the manual, although he was unable to describe much of what he had done.

Key themes: meaningful signposts in learning - what are they and where are they to be found? Use of 'success indicators' for novice users (implications for software developers). Learners' relationships with and use of manuals. Courage versus timidity, deep versus superficial learning. Teacher/student perceptions of problems. Management of problems.

USE OF METAPHOR AND ASSOCIATIVE PROCESSES
Without critical questioning at key stages, understanding of events from the learner's view would not have been possible. Example 1: Learner B hesitates when presented with menu options: Restore, Maximise, Minimise, Close etc. B wanted to open Word from the MS icon. She chose Maximise and was concerned at the outcome of her choice. She closed down and started again. When asked about her actions, her 'logic' was revealed. She believed that Maximise would 'bring the program to the right size'. For her, Restore meant 'go back to how you were', and she wanted to move forward.

Example 2: Student P pointed to the cursor and said 'Is that what I write with?' She pointed to the icons in the Program Manager screen and asked if they were like books and whether she would 'find interesting things inside'. P had a literary background and was clearly drawing on previous knowledge to make sense of the new world. Example 3: Student C, who had a sporting interest, when asked what happened when he clicked on a menu option, said 'I have induced Windows to drop that chart'. This phrase did not come from his manual, but by linking the alien with the familiar it 'made sense' and he could progress.

Key themes: true and false signposting; utility value of exemplars and techniques of rehearsal; semiotics; meaning of events; concept formation; methodological eclecticism (Hammersley 1997).

CONCLUSION
Each of the users studied generated his or her own user-model. Qualitative designs (Banister et al. 1994) in HCI research can enhance understanding of the contradictions and inconsistencies in user behaviours. Recent reviewers of HCI research suggest a marriage, or at least an improvement in understanding, between those who advocate normative models and those who support user-oriented models (John and Marks 1997).

REFERENCES
Banister, P., Burman, E., Parker, I., Taylor, M., Tindall, C. (1994) Qualitative methods in psychology: a research guide. Open University Press, Buckingham.
Hammersley, M. (1997) The relationship between qualitative and quantitative research: paradigm loyalty versus methodological eclecticism, in J.T.E. Richardson (ed), Handbook of qualitative research methods, The British Psychological Society, Leicester.
John, B. E. and Marks, S. J. (1997) Tracking the effectiveness of usability evaluation methods, Behaviour & Information Technology, 16(4/5), 188-202.
Strauss, A., Corbin, J. (1990) Basics of qualitative research: grounded theory procedures and techniques. Sage Publications, London.

A summary of HCI Engineering Design Principles

Stephen Cummaford

Ergonomics & HCI Unit, University College London, 26 Bedford Way, London, WC1E 0AP, United Kingdom.

ABSTRACT
There is a need for more formal HCI design knowledge, which can be validated, such that guarantees may be developed. This need would be met by Engineering Design Principles (EDPs). EDPs support the specification then implementation of a class of design solution for a class of design problem within the scope of the EDP.

KEYWORDS: Engineering, design principles, human-computer interaction.

INTRODUCTION
Current best practice in HCI design has produced many technologies that interact with the user to perform effective work. However, the knowledge applied in the design of these technologies is all-too-often not explicitly stated and so not formally conceptualised, although it may be successfully operationalised by designers. Reliance on such 'craft' skills militates against the identification, and so the validation, of successful design knowledge and, as a result, its take-up and re-use. The lack of validation and the consequent ineffective development of design knowledge thus leads to slow and inefficient HCI discipline progress (Long 1996). There is a need for more formal HCI design knowledge, that is, knowledge whose conception is coherent, complete and fit-for-purpose, such that guarantees may be developed and ascribed. HCI Engineering Design Principles (EDPs) would meet this need by establishing these guarantees by means of analytic and empirical testing, leading to their validation.

FUNCTIONAL ROLE OF EDPS
EDPs support the formal specification then implementation of a class of design solutions for a class of design problems. The notion of hierarchical classes of design problem and solution supports carry-forward of design knowledge between similar design instances. Classes are not intended to be a taxonomy of all possible design problems, only those for which a well-formed class of design solution exists.

SUMMARY OF EDP CONCEPTION
The components of a Design Principle are expressed formally; the relationships between the scope, substantive component, methodological component, and performance guarantees are made explicit such that EDPs may be considered coherent and complete internally (Cummaford & Long, 1998). An important feature of EDPs is the possibility of guaranteeing the successful outcome of EDP application. To support this, the design problems to which an EDP may be validly applied are constrained by the scope. The scope specifies a class of performances which are achievable for a class of users interacting with a class of computers, if the EDP is correctly applied. A design problem is within the scope if the user(s), computer(s) and desired performance (Pd), which comprise the design problem, are instances of the respective classes in the scope. Pd is expressed as product goals to be achieved (i.e. work to be done) to a certain level of task quality, whilst incurring an acceptable level of costs to the worksystem (Dowell & Long, 1989).
The methodological component of a Principle supports the specification of a design solution which achieves a desired level of performance by performing a series of task goals. These task goals are sufficient to achieve the product goals specified in the
desired performance of the design problem to a certain level of task quality, whilst incurring some acceptable level of costs to the worksystem. The substantive component specifies worksystem structures and behaviours which are present if the user(s) and computer(s) in the design problem are within the scope. These structures and behaviours are sufficient to perform the task goals specified by the methodological component.
An EDP is conceptualised, operationalised, tested and generalised prescriptive design knowledge, supporting the practices of specification then implementation of a design solution for any design problem within the scope of the EDP. The conception of EDPs presented in Cummaford & Long, it is argued, is coherent and complete. Fitness-for-purpose may only be assessed by empirical testing. Thus, EDPs must be operationalised to support test and generalisation, and so validation. Empirical testing of the implemented system ensures that the principle is fit-for-purpose, that is, it supports the specification then implementation of a system which achieves the desired level of performance stated in the design problem. Furthermore, the explicit scope supports the development of performance guarantees on the basis of empirical testing. The fourth stage of validation, generalisation, involves establishing the generality of the EDP. These four stages of validation support the ascription of a guarantee that a worksystem, which performs the task goal structures specified in the methodological component of the EDP, will attain the achievable performance stated in the EDP. A second guarantee, that the substantive component supports the specification of a worksystem which exhibits the structures and behaviours sufficient to achieve the task goal structures specified in the methodological component, may then be ascribed. A third guarantee, that correct application of the EDP to a design problem within its scope supports the specification then implementation of a design solution which achieves Pd, is then ascribed on the basis of the former guarantees and further empirical testing. EDPs thus support the specification then implementation of a design solution which achieves the desired performance, if the design problem is within the scope of the EDP.

OPERATIONALISATION OF EDP CONCEPTION
A hierarchy of classes of design problem has been hypothesised for Internet-based transaction systems. This is being used to guide EDP development by informing the operationalisation of class-level design problems. When class-level design solutions have been constructed, the task goals sufficient to achieve the product goal stated in Pd will be specified. These task goals will be used to inform development of the methodological component of an EDP and assess Tq. The worksystem structures and behaviours sufficient to achieve these task goals will then be specified and used to inform development of the substantive component of an EDP and assess U and C costs. The proto-EDP will then be operationalised to solve further design problems to support the development of the scope and guarantees.

REFERENCES
Cummaford and Long (1998) Towards a conception of HCI engineering design principles. Proceedings of ECCE-9, the Ninth European Conference on Cognitive Ergonomics, in press.
Dowell, J. and Long, J. B. (1989) Towards a conception for an engineering discipline of human factors. Ergonomics, 32, 1513-1536.
Long, J. B. (1996) Specifying relations between research and the design of human-computer interactions. International Journal of Human Computer Studies, 44(6), 875-920.

Cross-Cultural Differences in Understanding Human-Computer Interfaces

Vanessa Evers

Institute of Educational Technology, The Open University, Walton Hall, Milton Keynes, MK7 6AA

ABSTRACT
This PhD research performed at the Open University investigates cross-cultural understanding of human-computer interface design. The main research will involve observation sessions with users from several cultures and will focus on metaphorical aspects of interface design. A pilot study and a literature review are carried out to form a basis for the research.

KEYWORDS: Interface design, metaphors, cross-cultural user understanding

INTRODUCTION
As software markets become global, more and more software producers sell their products overseas. The research discussed in this paper deals with cultural localisation of software products and culturally diverse users' understanding of interface design features. The study suggests that although much research has been done on cross-cultural attitudes towards computers and interface localisation, there is a need to investigate what the differences in interpreting interface design features are and why these differences occur. From the literature it can be deduced that culture indeed impacts understanding of interface design features (Evers and Day, 1997; del Galdo and Nielsen, 1996; Fernandes, 1995; Interacting with Computers, 1998). Therefore, it is important to find out in what way understanding differs and how this is linked to the users' cultural background.

RESEARCH APPROACH
A thorough review of the literature in this area has been performed and a pilot study for the main research project has just started. The aim of the pilot study is to find out how the meaning of interface design aspects (i.e. colours, pictures, text) varies across cultures and which interface design features are most culturally sensitive. This will be undertaken through observational studies in which students from several cultural backgrounds evaluate the web site of a virtual university campus. The pilot study will investigate how the meaning of interface design features varies across cultures. In other words, when people from different cultures look at an object in an interface (this could be a picture or a name-tag of an icon), will they have a different understanding of what it represents or means?

Also, if people do indeed have different understandings of interface aspects (for example, a picture of a book), then why is that? It could be because of their traditions, education, physical environment and so on.

The interface to be evaluated is the web site of the DirectED virtual campus (http://www.directed.edu/core.html). This site will be evaluated in individual, half-hour sessions, using groups of 6 to 8 young adults from several cultural backgrounds (possibly English, American, Japanese, Dutch and Indian). The pilot study will
provide us with information on how interfaces are perceived across cultures and which interface design features are most culturally sensitive.

So far, research carried out in this area has mostly used self-reported data. As observation is likely to provide more accurate, in-depth information, the main research will also involve observational experiments, probably on a larger scale. The information from the pilot study should provide background for designing the main research project. In the meantime, an ongoing literature review is investigating cross-cultural understanding of metaphors in interface design as a focus for the main research project.

REFERENCES
Evers, V. and Day, D. (1997) The role of culture in interface acceptance. In S. Howard, J. Hammond and G. Lindgaard (eds), Human-Computer Interaction INTERACT '97, London: Chapman and Hall.
Fernandes, T. (1995) Global Interface Design. London: Academic Press.
del Galdo, E. and Nielsen, J. (eds) (1996) International User Interfaces. New York: Wiley.
Shared Values and Shared Interfaces, special issue of Interacting with Computers, vol. 9, no. 4-6.

Towards a Formal Representation of Multi-Modal Systems for Usability Assessment

Joanne Hyde

School of Computing Science, Middlesex University, Bounds Green Road, London, N11 [email protected]

INTRODUCTION
The user interface is being called upon to handle an increasingly diverse range of users, and some designers feel that interaction can be facilitated by the use of systems that exploit more than one means of input or output at a time: so-called "multi-modal" systems. However, their design appears to be device-led, with developers interested in the novel nature of a device rather than its implications for the increased usability of the interface. Existing research tends towards an empirical approach in analysing the success of a particular device. There is thus an absence of appropriate multi-modal usability theory, complicated by the disagreement between various communities over the precise meaning of the term "modality" (e.g. Bernsen, 1995; Coutaz et al., 1993). This impedes research into the complex usability problems posed by multi-modal systems, and results in a lack of appropriate notations to describe multi-modal activity.
This research explores the problem of defining modality. Existing modelling notations and techniques are scoped to see how well they capture multi-modal interface usability issues. A new definition of modality is proposed, with an associated taxonomy to allow the identification and categorisation of instances of modalities at the interface. Further work involves the identification of usability properties of multi-modal systems, and their formal application.

MODALITY RESEARCH
There are several, often competing, definitions of modality, which can result in confusion when designers from different backgrounds come together in the design of interactive computer systems. The computer systems perspective defines modality in terms of input or output devices (e.g. Bernsen, 1995), for example, a keyboard, mouse, monitor or microphone. The user perspective defines modality in terms of the sensory channel through which it is expressed, for example, tactile or visual (e.g. Purchase, 1998). Other definitions use elements from both viewpoints, but have difficulties in integrating them, because of the irreconcilable differences between what is being considered as a modality (e.g. Coutaz et al., 1995). It is this conflict which makes many current definitions unusable in a wider context. There is therefore a need for a definition of modality which is not as broad as one based on sensory channels, yet avoids being device-dependent, and is able to provide a clear basis for relating certain types of communication devices to the capabilities of the user.

NOTATIONAL RESEARCH
In order to scope their ability to identify potential multi-modal usability problems, five different techniques for representing user interaction, covering a wide range of formal system and user approaches, were examined. They included a hierarchical goal-based technique (GOMS), a natural language goal-based method (Cognitive Walkthrough), a means-end planning based technique (Programmable User Model), a diagrammatic representation (State Transition Diagram), and one based on set theory and first-order predicate logic (Z).
The study, based around the examination of a task performed by a prototype robotic arm (see Hyde et al., 1998), found that these techniques were able to identify a wide
range of usability problems, but had difficulty in identifying usability issues directly related to multi-modality. Although Critical-Path-Method (CPM) GOMS (John and Kieras, 1996) initially seemed to be best able to handle multi-modal interaction issues, it was unable to shed much light other than providing comparative information about the two methods of input. The limitations of these techniques with regard to multi-modal usability issues centre around their emphasis on interaction ordering, and their inability to deal with simultaneous complexity. Notations which utilise different interaction paradigms may be more appropriate for describing multi-modal systems.

NEW MODALITY DEFINITION AND TAXONOMY
Key attributes contained within the concept of modality were identified from the literature and the work on notations, and included in a new definition of modality as a temporally based instance of information perceived by a particular sensory channel. This produces a new three-dimensional taxonomy (sensory channel, temporal nature, and information form) to aid modality classification.
The sensory channel refers to three human senses: audio, visual and haptic, since they are the three main channels through which information is perceived and communicated. The temporal nature describes whether a modality is discrete (unchanging within its occurrence, which is brief), continuous (repeated exactly the same more than once) or dynamic (changing in content within its occurrence, which may last for some time). The form of the information relates to its presentation, and can be divided into: lexical (in the form of text); concrete (in the form of the reproduction of a real-life object); and symbolic (a representation of something rather than an actual reproduction of it). The twenty-seven cells derived from this new taxonomy allow for a large coverage of the interaction space, while the classification is small enough to be easily applied (a sketch enumerating these cells is given after the references below).

FURTHER WORK
Investigate taxonomy applications, identify significant properties of multi-modal systems, and develop a notation for multi-modal usability analysis of user interfaces.

ACKNOWLEDGEMENTS
This work is supported by a postgraduate studentship from the School of Computing Science, Middlesex University.

REFERENCES
Bernsen, N. O. (1995) A toolbox of output modalities: representing output information in multi-modal interfaces. In Bernsen, Jensager, Lu & Verjans (eds): Modality theory and information mapping. Amodeus project deliverable D15.
Coutaz, J., Nigay, L., & Salber, D. (1993) The MSM framework: a design space for multi-sensory-motor systems. In Bass, J., Gornostaev, J., & Unger, C. (eds): Lecture Notes in Computer Science no. 753, Human Computer Interaction, Springer-Verlag, Moscow.
Hyde, J. K., Blandford, A. E., & Goodman, H. S. (1998) A Comparison of Five Techniques for Investigating General and Multi-Modal Usability Issues. To be published as a School of Computing Science Technical Report.
John, B. E. & Kieras, D. E. (1996) The GOMS family of user interface analysis techniques: comparison and contrast. ACM Transactions on Computer-Human Interaction, Vol 3, No 4, pp 320-351.
Purchase, H. (1998) Separating text from technology: a semiotic definition of multimedia communication. To appear in Semiotica, 1998.
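To make the twenty-seven-cell coverage referred to above concrete, here is a minimal sketch that enumerates the taxonomy; the cell names follow the taxonomy, but the example classifications are illustrative assumptions rather than examples from the paper:

    from itertools import product

    SENSORY_CHANNELS = ["audio", "visual", "haptic"]
    TEMPORAL_NATURES = ["discrete", "continuous", "dynamic"]
    INFORMATION_FORMS = ["lexical", "concrete", "symbolic"]

    # The full taxonomy: 3 x 3 x 3 = 27 cells.
    CELLS = list(product(SENSORY_CHANNELS, TEMPORAL_NATURES, INFORMATION_FORMS))
    assert len(CELLS) == 27

    # Classifying a few example modalities into cells (illustrative only).
    examples = {
        "error beep": ("audio", "discrete", "symbolic"),
        "scrolling status text": ("visual", "dynamic", "lexical"),
        "vibrating alert": ("haptic", "continuous", "symbolic"),
    }
    for name, cell in examples.items():
        assert cell in CELLS
        print(name, "->", cell)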

Information Gathering and the Workplace Soundscape

Catriona Macaulay

Department of Computing, Napier University, 219 Colinton Road, Edinburgh, EH14, UK

INTRODUCTION
Two frequently cited reasons for an interest in the use of sound in HCI design are increasingly 'cluttered' screens (Brewster, 1996) and the needs of both the physically and the 'situationally' disabled (Newell, 1995). The rapid growth in the amount of information available by electronic means has led to what has been labelled by the media 'information overload'. Researchers interested in visualisation (Hearst, 1995) propose that one solution to 'information overload' might be to move away from the entirely textual presentation of large, dense information spaces. Whilst visualisation techniques provide one alternative, for the reasons cited above they will not always be suitable. Investigations into the use of sound to support information gathering work are therefore particularly appropriate at this point in time. However, whilst researchers and designers interested in using sound have a relatively large body of lab-based work and new technology developments from which to draw inspiration and guidance, there is little by way of 'workplace studies' for this purpose. This project seeks to address this gap by providing a workplace study of information gathering in a UK daily newspaper. The study pays particular attention to two questions: are there aspects of information gathering activity that suggest opportunities for using sound in design, and do existing workplace soundscapes play a part in information gathering?

METHODS
The project, which in part builds upon work done in my MSc, started in November 1996 with an extensive literature review. The review concentrated on three main areas: sound (in 'the world' and in computing), information gathering from various (e.g. IR, HCI, IM) perspectives, and 'social' studies of work practice. From the literature review on sound an informal 'map' of the auditory environment (the soundscape) was developed and has been used as an aid to studying the auditory environment of the field site. The principal empirical method used in this project is an activity-theory oriented ethnography of information gathering at a UK daily newspaper. Responding to criticism that activity theory is difficult to use in practice, Kaptelinin and Nardi (1997) developed the Activity Checklist, which is being used, and validated, in the project.
The main study is being conducted over a period of one year, with typically between one and two days per week spent visiting the newspaper's offices and a further one to two days per week spent writing up field notes. The principal data gathering techniques used have been participant observation, historical research and semi-structured interviews. In addition we have conducted extensive materials research, gathering formal documents (such as guides to using the various digital systems available, memos, Library keyword lists, etc.) and informal working documents (such as story lists, jottings, etc.). We have also monitored journalism-related stories from the media, gathered material from journalism training courses and conducted research into the history of the British press and journalism. Fieldnotes and transcripts have been coded and analysed for themes using the Ethnograph™ software package, alongside the more traditional 'multiple re-readings'.

ANTICIPATED OUTCOMES
Almost any ethnographic study of work practice can claim to have some 'implications for design'; however, this project is at heart an applied ethnographic study of work practice. It has been conducted within a particular intellectual tradition (HCI), and its main audience will be found there. In light of this we propose providing, along with an ethnographic account of information gathering work practice and a validation of the Activity Checklist, an analysis of a 'space of design possibilities' (SDP) for auditory interfaces aimed at information gathering activities. This part of the thesis is intended to satisfy some of the particular needs of the HCI and design communities. From a more 'ethnographic' perspective we might also consider the inclusion of the SDP section a way of answering the call for reflexivity in sociological and ethnographic analyses (Bourdieu & Wacquant, 1992). We are making explicit the concerns that motivated and shaped the fieldwork and that caused us to represent some aspects of our field experience and not others. The space of design possibilities is itself a representation of the ethnography and a reflection upon the way we as researchers have constructed 'the field'. The SDP itself will consist of a number of 'insights' which are presented in narrative and, where appropriate, abstract form. Insights are then linked to relevant work from the HCI and CSCW communities to provide guidance for anyone interested in pursuing a more detailed requirements gathering/development project on the basis of the material presented. The SDP is not intended as an exhaustive analysis of all the fieldwork material that might have any 'implications for design', neither is it intended to be a detailed analysis of a particular part of the fieldwork material for the purposes of developing a single design. It is an investigation into those aspects of the workplace study which seem, to us, to have some relevance for the design of auditory interfaces for information gathering work.
In summary, this project aims to address the lack of workplace studies available for interface designers specifically interested in the use of sound. The contributions will be: a workplace study of information gathering, validation of the Activity Checklist, and a 'space of design possibilities'.

REFERENCES
Bourdieu, P., & Wacquant, L. J. (1992). An Invitation to Reflexive Sociology. Cambridge: Polity.
Brewster, S. A. (1997). Using non-speech sound to overcome information overload. Displays, special issue on multimedia displays, 17, pp 179-189.
Hearst, M. A. (1995). TileBars: Visualization of Term Distribution in Full Text Information Access. Human Factors in Computing Systems: CHI '95 Conference Proceedings (pp. 59-66), Cambridge, MA: Addison-Wesley.
Kaptelinin, V., & Nardi, B. (1997). The Activity Checklist: A Tool for Representing the "Space" of Context. Technical Report, Umeå University.
Newell, A. F. (1995). Extra-ordinary Human Computer Interaction. In A. D. N. Edwards (Ed.), Extra-Ordinary Human-Computer Interaction: Interfaces for Users with Disabilities (pp. 3-18). Cambridge: Cambridge University Press.

URL, Summary, and Percentage. Click here for the next 16,433 matches: Why a URL, Summary and Percentage representation is not enough.

Thomas Tan

School of Computing Science, Middlesex University, Bounds Green Rd, London N11 2NQ, United [email protected]

INTRODUCTION
Internet search services have provided a means for users to locate and access information on the World Wide Web. Even with the proliferation of such services, the limitations of current search engines become more apparent as the body of poorly organised information on the web increases.

TRADITIONAL INFORMATION RETRIEVAL RESEARCH
Traditional information retrieval research (Salton and McGill, 83; Witten, Moffat and Bell, 94) has largely concerned itself with improving the effectiveness of indexing and retrieval mechanisms in terms of processing speed, resource requirements, precision and recall.
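For reference, a minimal sketch of the standard precision and recall measures mentioned above (the conventional definitions, written here for illustration rather than taken from any particular system):

    def precision_recall(retrieved, relevant):
        """Precision: fraction of retrieved documents that are relevant.
        Recall: fraction of relevant documents that were retrieved."""
        retrieved, relevant = set(retrieved), set(relevant)
        hits = retrieved & relevant
        precision = len(hits) / len(retrieved) if retrieved else 0.0
        recall = len(hits) / len(relevant) if relevant else 0.0
        return precision, recall

    # A broad query can retrieve many documents: recall rises, precision falls.
    print(precision_recall(retrieved=["d1", "d2", "d3", "d4"], relevant=["d1", "d5"]))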

Currently, most information retrieval systems, including web search engines, employ the statistical and probabilistic techniques (Excite, 97; Autonomy, 97) developed from traditional approaches to determine the relevancy of a document to a user's query. These techniques represent the current state of the art in relevancy determination. However, in an age where there can be such a thing as too much information, the problem of serving only desired information has become more significant than the perfunctory provision of data.

Current systems of retrieval, typified by those on the web, attempt to mitigate theinadequacies in precision by providing the user with a plethora of search features suchas Boolean keyword matching and proximity searching. These features generally tendto increase the rate of recall instead of precision, which aggravates the problem byretrieving more documents and not necessarily more precise matches.

INFORMATION EXPLORATION AND VISUALISATION INTERFACESCurrent models of displaying retrieval results are limiting in that they convey themany perspectives of both collection and document content inadequately in sequentialhierarchical fashion. The human perception system is more adept at recognisinghighly visual multidimensional content than performing thought intensive processessuch as reading. With low cost, high quality displays increasingly becoming astandard component of today’s computing environment, efforts to address informationretrieval problems are increasingly being directed to enhance user exploration of theinformation space, and to the effective presentation and visualisation of retrievalresults.

Examples of such efforts are the Bifocal Display (Spence, 97) which presents thecontext of the current focus of interest while providing a smooth transition betweenthe focus and the context of an information space; the TileBar system (Hearst, 95)


which provides an effective simultaneous and compact visualisation of multidimensional relevance (statistical) data from returned document sets; the HyperSpace system (Beale, McNab and Witten, 1996); and the Scatter/Gather system (Hearst and Pedersen, 1996).

AIM OF RESEARCH
The aim of the research proposed here is to investigate and devise information retrieval techniques that will provide improved information retrieval performance through the effective presentation and visualisation of enhanced retrieval results. A critical survey of information visualisation techniques by the author is currently ongoing. It is also the author's belief that, in order to conceive of what enhanced retrieval results may be presented, it is important to understand the many properties of a text corpus. An essential second stage will be to identify what those properties are and how they can be used to better represent a document.

It is envisaged that this work will culminate in the prototyping, evaluation (through user studies) and development of information retrieval systems that will allow the user to make more informed judgements about the retrieved documents. This will be accomplished through effective document representations other than the ubiquitous URL, Summary and Percentage representation.

ACKNOWLEDGEMENTS
This work is supported by a postgraduate studentship from the School of Computing Science, Middlesex University.

REFERENCES
Autonomy (1997) ‘Autonomy Agentware Technology White Paper’, http://www.agentware.com/main/tech/whitepaper.htm, 27th March 1998.
Beale, R., McNab, R. and Witten, I. (1996) ‘Visualising Sequences of Queries: A New Tool For Information Retrieval’, [email protected], School of Computer Science, University of Birmingham, UK.
Excite (1997) ‘Information Retrieval Technology and Intelligent Concept Extraction’, http://www.excite.com/Info/tech.html, 27th March 1998.
Hearst, M. (1995) ‘TileBars: Visualisation of Term Distribution Information in Full Text Information Access’, in Proceedings of ACM CHI Conference, 1995.
Hearst, M. and Pedersen, J. (1996) ‘Reexamining the Cluster Hypothesis: Scatter/Gather on Retrieval Results’, in Proceedings of the Nineteenth Annual International ACM SIGIR Conference, Zurich, June 1996.
Salton, G. and McGill, M. (1983) Introduction to Modern Information Retrieval. New York: McGraw Hill.
Spence, R. (1997) ‘The Acquisition of Insight’, http://www.ee.ic.ac.uk/research/information/www/bobs/bobs.html, 21st April 1998.
Witten, I., Moffat, A. and Bell, T. (1994) Managing Gigabytes. Van Nostrand Reinhold.


Criteria of Credibility for Collaborative Virtual Environments

Jolanda G. Tromp

Communications Research Group, Computer Science Department, University of Nottingham, NG7 2RD

ABSTRACT
My thesis is aimed at developing guidelines for usability evaluation and design of collaborative virtual environments; creating 3D interfaces for distributed participants.

KEYWORDS: multi-user virtual reality, methodology of usability testing, interaction design

INTRODUCTION
Collaborative Virtual Environments (CVEs) are a novel application area of computing technology, demanding an understanding of computer mediated human collaboration and human-computer interaction in 3D virtual spaces. A general tendency to ignore or minimise VE evaluation has been observed (Durlach & Mavor, 1995). To date, there are no systematic guidelines for the design and evaluation of CVEs, although work is in progress for single user VEs to find design principles (Kaur, 1997) and a user requirements analysis method (Parent, 1998). The hypotheses for the thesis work reported here are based on the work of Kaur (1997), in order to extend her pioneering work on single user human-computer interaction in VEs.

THE HYPOTHESES
Hypothesis 1: Interface design guidelines are needed specifically for collaboration in CVEs.
Hypothesis 2: General patterns of collaboration can be predicted.
Hypothesis 3: Collaboration design properties can be predicted.
Hypothesis 4: New design properties support CVE collaboration.

To test H1, the need for CVE guidelines, two studies were made, looking at CVE usability problems and CVE design problems. First, the exploration of CVE usability problems involved an Inspection (e.g. Cognitive Walkthrough and a Heuristic Evaluation) based on Nielsen (1994). This Inspection method was redesigned to inquire into the collaborative aspects of CVEs and applied to the COVEN Platform (a CVE being built on Division's dVS, a leading VR product). The results of this inspection are applicable to many CVE systems and applications, not just those developed by COVEN, and identified a significant number of usability issues at system, interaction and application levels (Steed and Tromp, 1998), such as tensions between 3D object representation and 3D object interaction, and tensions between physically remote user collaboration and network latency. Second, the exploration of CVE design problems involved interviewing CVE designers. To date four CVE designers have been interviewed. The preliminary results show that these designers tend to forget about end-user usability issues, do not make use of HCI design guidelines, and feel they lack the skills to create artistically satisfying CVEs.

To test H2 a model of collaboration in CVEs has been developed, and a method to analyze group collaboration in CVEs has been created using this model. The method


identifies atomic actions for users involved in focused and unfocused collaboration, and is tested on video recordings of CVE user interaction. The analysis will reveal sequences of collaborative activity (Bales, 1951).

H3 is tested by creating a Hierarchical Task Analysis (HTA) for CVE collaboration. Predictions of usability problems will be made from the HTA, by describing the difficulties likely to occur if the interface properties needed to perform those collaborative actions are missing or inadequate. Subsequently, user behaviour during representative collaborative tasks is observed using the group collaboration analysis method. These experiments act as the control condition for experiments to test H4. Implementing the design properties in a CVE is hypothesized to improve multi-user collaboration by supporting the users during each stage of their collaboration, and so avoiding usability problems. To test the impact of the design properties, controlled studies during network trials of the application have been planned. The test condition is the third iteration of the COVEN platform design, improved by implementation of the missing design properties. Task performance is assessed for both groups with a post-study test. The observations of breakdown in collaboration will be checked against the predictions made for usability problems. A good match will support the desirable design properties (H3). H4 will be tested by comparing results for the two conditions. Improvements in collaboration are defined as fewer usability problems, better task performance or lower task completion times, and generally improved satisfaction.

CONCLUSIONS
Results from this research will be used to define and refine a set of design properties and evaluation methods for CVEs. The results can be used to help develop guidelines for CVE designers and evaluators.

ACKNOWLEDGEMENTS
Thanks go to Dr. Steve Benford, Kulwinder Kaur, Prof. Alistair Sutcliffe and Prof. John Wilson. Jolanda Tromp is employed on the COVEN Project (ACTS N. AC040).

REFERENCES
Bales, R.F. (1951). Interaction Process Analysis: A Method for the Study of Small Groups. Addison-Wesley Press, Cambridge.
Durlach, N.I. and Mavor, A.S. (1995). Virtual Reality: Scientific and Technological Challenges. National Research Council, National Academy Press, USA.
Kaur, K. (1997). Designing Virtual Environments for Usability. Doctoral Consortium Paper, in: IFIP Proc. of INTERACT'97 Conference on Human-Computer Interaction, Sydney, Australia.
Nielsen, J. and Mack, R.L. (1994). Usability Inspection Methods. John Wiley and Sons, New York, NY.
Parent, A. (1998). Designing life-like virtual environments. Submitted to Presence: Teleoperators and Virtual Environments, MIT Press.
Steed, A. and Tromp, J.G. (1998). Experiences with the Evaluation of CVE Applications. To appear in: Proc. of Collaborative Virtual Environments, 2nd CVE98 Conference, Manchester, June 17-19.


User Interface Design & Evaluation for a Content-Based Image Retrieval System

Colin C. Venters

Department of Information & Library Management, University of Northumbria at Newcastle, Lipman Building, Sandyford Road, Newcastle Upon Tyne, NE1 8ST, England.

RESEARCH BACKGROUND
The developing technology of content-based image retrieval (CBIR) is creating new and exciting opportunities to enhance the access to and retrieval of digitally stored images. Paradoxically, while the underlying technology of CBIR systems is being advanced, both system developers and researchers have generally overlooked the importance of human-computer interaction (HCI) and the crucial role of the user interface. For example, the ability to formulate and communicate a query is an essential feature in all information retrieval systems. However, a number of CBIR systems provide only primitive facilities for image query formulation, or restrict end-users to a single visual-browsing query option to interrogate the database of stored images. The major problem in designing a user interface for CBIR systems lies in the need to provide users with appropriate interface features and tools to facilitate HCI, yet sound design techniques for facilitating retrieval are rare. It is the aim of this doctoral research programme to address this problem. This research project aims to design and evaluate a user interface for a CBIR system, using the ARTISAN system as an example.

RESEARCH TO DATE

Stage I: End-User Group Identification
Potential groups of end-users were selected on the basis of their suitability to utilise the ARTISAN system. UK trade mark and patent companies, patent information network (PIN) libraries, and patent services were identified and selected using a combination of commercial reference sources and professional directories. This enabled the development of a substantial database of both private practice firms and personnel in the UK. In total, 144 companies and 316 potential volunteers were identified. A criterion for selecting volunteers had been developed prior to a postal mail shot in order to provide a cross-section of end-users. This criterion was designed to provide an insight into specific areas of potential volunteers' experience and focused on a number of important areas: the number of years in the profession, level of computer literacy, user interface familiarity, image retrieval system use, and perceived level of image retrieval skill. Users were requested to complete a background questionnaire in order to assess their suitability.

Stage I: User Requirements Gathering
A tape recorded, semi-structured interview format was selected as the method by which the user requirements would be gathered. An interview framework was then developed based on an object-oriented diagram. This provided structure and flexibility, allowing both end-users and the interviewer to explore salient issues. Twenty-three interviews were then conducted between August and September 1997 with the targeted group of end-users. The aim of the interviews was to explore in depth the user


requirements for a CBIR system in general, and for the ARTISAN retrieval system in particular. A number of key areas and issues were explored within the topics of system input, system output, and user interface issues.

Stage II: User Requirements Analysis
No perceived benefit would be gained from full transcription of the 23 interview tapes; as a result, the data collected from the interviews was partially transcribed to enable analysis. The data collected during the user requirements process highlighted a number of functional and data requirements common to the overall system, and a number of system features necessary for visual query formulation. From the data gathered, five methods were proposed for visual query formulation: a pre-defined graphic file, a scanning tool, a free-hand drawing tool, a visual browsing tool, and a visual building tool. Contrary to current system features, these findings suggest that end-users have a range of query needs, which can be supported by the identified query formulation tools. Although the end-user groups have a shared interest in the effective retrieval of the dataset, the findings highlighted a distinction between the user requirements of trademark searchers and those of PIN information officers. This can be attributed to the type of client and the nature of the service they provide. The analysis of the data collected from the interviews has resulted in the production of a number of abstractions of the system: dataflow diagrams, an entity relationship diagram, an object-oriented model, and a viewpoint-oriented model. These findings and abstractions have been evaluated with a small cross-section of the initial end-user population, who validated the findings. As a result of the requirements validation process, these identified features will form the basis for the conceptual and holistic design of the user interface, and the development of a horizontal prototype in the next stage of the project.

CONCLUSION
To assure effective and efficient access and retrieval, the design of the user interface for CBIR systems needs to be dramatically improved. If CBIR systems are to become viable applications, the design of a suitable user interface for query formulation and the manipulation of search results is a fundamental aspect of developing the systems in conjunction with the underlying technology. The outcome of this research will contribute a framework for the design and development of more user-centered interfaces for content-based image retrieval systems.


QUASS – a tool for measuring the subjective quality of real-time multimedia audio and video

Anna Bouch, Anna Watson and M. Angela Sasse

Department of Computer Science, University College London, Gower Street, London WC1E 6BT, England. {A.Bouch, A.Watson, A.Sasse}@cs.ucl.ac.uk

ABSTRACT
There is currently no adequate method of measuring the subjective quality of audio and video experienced by users in desktop videoconferencing, meaning that objective audio and video quality requirements cannot be effectively derived. This paper introduces a new measurement tool, QUASS (QUality ASsessment Slider), which is designed to address this problem.

KEYWORDS Desktop videoconferencing, subjective quality, assessment, measurement.

INTRODUCTION
Desktop videoconferencing has the potential to enable large and distributed audiences all over the world to participate in conferences and meetings without having to leave their respective physical locations. However, despite the huge potential of this communication technology, uptake has been slower than expected. One reason for its slow uptake must be the potential users' concern over whether the quality of the audio and video will be good enough, at a price they can afford. There has been no systematic investigation of the audio and video quality required for different videoconferencing tasks to be accomplished successfully. The complexity of the issues involved in evaluating these media in videoconferences is discussed in Watson & Sasse (1996), but what is required immediately is a method by which subjective quality ratings can be gathered in a dynamic, continuous fashion as a videoconference proceeds. Different videoconferencing tasks and sub-tasks will have different quality requirements, and these must be identified, so that quality thresholds and guidelines can be established.

MEASURING PERCEIVED QUALITY
Assessment of speech and video quality has traditionally been in the domain of bodies such as the ITU (International Telecommunications Union) and the EBU (European Broadcasting Union). Assessment methodologies and rating scales have been developed and standardised across the world by these bodies. However, desktop videoconferencing is a new area of communication and traditional quality rating methods are not suitable for use in videoconferencing assessment. The main reasons for this are: the range of vocabulary on ITU recommended rating scales is not applicable to low-cost videoconferencing quality; conditions over some networks can fluctuate, meaning that quality ratings gathered at the end of a conferencing session can be subject to primacy or recency effects; and measuring only speech or video quality alone does not produce very meaningful results for generalising to multimedia conferences, where the different media can and do interact. These issues are discussed in greater depth in Watson & Sasse (1998).


What is required is a new type of subjective measurement tool that takes into account the dynamic and interactive nature of videoconferencing speech and video quality. This tool must be able to gather real-time continuous assessment results as conferences, and their various sub-tasks, proceed.

ESTABLISHING REQUIRED QUALITY USING QUASS
Through questionnaires and focus groups held after various desktop videoconferencing trials, we at UCL have identified a number of different dimensions that play a role in forming opinions of overall quality, such as packet loss and 'unpredictability' for speech, and speed and 'blockiness' for video. We have developed a software measurement tool, QUASS (QUality ASsessment Slider), which allows users to move a slider up and down an unlabelled continuous scale as a speech file varies in quality along one of the formative quality dimensions. Measurements of the position of the slider are pumped to a file every second, allowing us to compare the objective speech quality with the subjective rating at that instant. The tool can also be used to dynamically control the quality of the speech that the user is receiving, along a particular dimension. In order to prevent the user setting the quality to its maximum level at all times, this condition also provides the user with a 'budget', which decreases according to the quality that is demanded. The expenditure is recorded by the software in order to provide insights into the relationship between payment behaviour and the requested Quality of Service (QoS).
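The core logging idea described above is simple enough to sketch. The following is a minimal illustration only, not the QUASS implementation (whose platform and file format are not described here): an unlabelled on-screen slider whose position is appended to a log file once per second, so that the subjective trace can later be aligned with the objective quality trace. The file name and scale range are illustrative assumptions.

```python
# Minimal sketch of once-per-second slider logging (not the actual QUASS code).
import time
import tkinter as tk

LOG_FILE = "quality_ratings.log"   # hypothetical output file

root = tk.Tk()
root.title("Quality rating")
# Unlabelled continuous scale: no tick values shown to the user.
slider = tk.Scale(root, from_=100, to=0, orient=tk.VERTICAL,
                  showvalue=False, length=300)
slider.pack()

def log_position():
    """Append a timestamped slider reading, then reschedule in one second."""
    with open(LOG_FILE, "a") as log:
        log.write(f"{time.time():.0f}\t{slider.get()}\n")
    root.after(1000, log_position)

root.after(1000, log_position)
root.mainloop()
```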

Although at present QUASS is being used only in a laboratory setting to investigate speech quality dimensions, we believe the technique will also be suitable for the investigation of subjective video quality, and in interactive, multimedia conference situations, so that different task requirements can be identified. We believe that QUASS will prove to be a sensible means of assessing subjective quality, and will allow HCI researchers to make qualified recommendations to users, developers of new systems and network managers as they begin to implement network resource reservations.

ACKNOWLEDGEMENTS
Anna Watson and Anna Bouch are supported by EPSRC CASE awards with BT.

REFERENCES
Watson, A. & Sasse, M.A. (1996) Evaluating audio and video quality in low-cost multimedia conferencing systems. Interacting with Computers, 8(3), 255-275.
Watson, A. & Sasse, M.A. (1998) Measuring perceived quality of speech and video in multimedia conferencing applications. To be presented at ACM Multimedia '98.


Extending Support for User Interface Design in Object-Oriented Software Engineering Methods

Elizabeth Kemp and Chris Phillips

Institute of Information Sciences and Technology, Massey University, Turitea Site, Palmerston North, New Zealand

ABSTRACT
The focus of this research is on extending support for the design of graphical user interfaces (GUIs) in established object-oriented software engineering methods (OOSEMs). Through an examination of current texts, a framework for GUI development is established, and some well known OOSEMs are reviewed in order to determine what support is provided for user interface design. General recommendations are made for the extension of OOSEMs to better support interface design.

KEYWORDS graphical user interface design, software engineering, object-oriented methods.

A GENERAL FRAMEWORK
Object-oriented software engineering methods (OOSEMs) are well established, although still evolving. The focus of this research is on extending support for the design of graphical user interfaces in established OOSEMs. Two published object-oriented approaches to developing GUIs have been reviewed and compared (Collins, 1995; Lee, 1993), and a general framework established:

1. System development should commence with the construction of an object model which is domain or application-focussed, and which is independent of the visible interface. This should be based on an analysis of users and tasks.
2. Development should proceed from the construction of the domain object model, through the design of the visible interface (the look and feel), to the construction of the implementation system involving the GUI and software sub-systems. That is, the design of a software system with a GUI should proceed from the 'outside in'.
3. The development process should exploit synergies between object representations in all parts of the system; in particular, the implementation object structures (both the GUI and software sub-systems) should reflect the object structures of both the problem domain and the visible interface.
4. A software architecture founded on dialogue independence is most appropriate for GUI-based software systems. This should be applied at both the design and implementation phases.
5. Prototyping and evaluation should be used as a means of validating the models constructed at each stage.

REVIEW OF OOSEMS AND USER INTERFACE DESIGN
Four well established OOSEMs (Booch, 1994; Jacobson et al, 1992; Rumbaugh et al, 1991; Coad & Yourdon, 1991) have been reviewed in the light of this framework, and compared. The first three are currently undergoing a process of amalgamation through the Unified Modelling Language (UML) initiative (Fowler, 1997). Detail of this is still emerging, and it is unclear at this stage whether the method associated with the UML (which is essentially a collection of notations) will provide greater support for GUI design. The focus here is on current practice. Table 1 summarises the four methods in relation to support for interface design. The comparison of the four approaches is instructive, the treatment of HCI being somewhat uneven.

RECOMMENDATIONS FOR EXTENSIONS TO ESTABLISHED OOSEMS
Based upon the above analysis, and the earlier framework established for GUI development, the following general recommendations can be made for extending support for user interface design in established OOSEMs:

1. That more complete integration should be provided for the object models developed at each stage. It should be possible to track objects through the development lifecycle.
2. That better support should be provided in OOSEMs for the design of the look and feel of the GUI, including the selection of metaphor.


3. That more suitable notations should be developed for describing the control and sequencing of GUIs, including screen flow. State-based models are not really suited to describing the asynchronous and object-oriented nature of GUIs.
4. That GUI development tools (prototyping tools, GUI builders) should be better integrated into OOSEMs. More control is needed at the implementation stage in connection with the consistency of the GUI and software sub-systems. In particular, GUI tools should connect with the software object model.
5. That a common architecture based on dialogue independence should be adopted at the design and implementation phases.

|                        | OOSE (Jacobson) | OA and D (Booch) | OOA and OOD (Coad/Yourdon) | OMT (Rumbaugh) |
| User model             | User roles; users' conceptual model | User roles | User characteristics; skill level; critical success factor | User roles |
| Task model             | Use cases & scenarios | Use cases & scenarios | Task scenarios | Use cases & scenarios |
| Domain object model    | Problem domain model | Initial object and class models | Initial 5 layer model | Object model |
| Interface object model | Analysis model incorporating interface objects | Revised class and object model | 5 layer model incorporating classes and events | Revised object model |
| Software object model  | Design model composed of blocks | Fine tuned class and object model | Completed 5 layer model with task components added | Completed object model |
| Selection of metaphor  | Not addressed | Not addressed | Supported | Not addressed |
| Screen layout          | Based on prototyping and feel for important objects | Based on examination of scenarios and prototyping | Based on metaphor and interface guidelines, and prototyping | Based on view/presentation objects |
| Screen flow            | Possibly prototyping | Possibly prototyping | Possibly prototyping | Not addressed |
| Control flow           | State transition graphs | State transition diagrams | Command hierarchy | Dynamic model; state charts |

Table 1: Summary of the four methods in relation to support for interface design

REFERENCES
Booch, G. (1994), Object-Oriented Analysis and Design. Menlo Park, Calif.: Benjamin/Cummings.
Coad, P. and Yourdon, E. (1991), Object-Oriented Analysis. Englewood Cliffs, New Jersey: Yourdon Press.
Collins, D. (1995), Designing Object-Oriented User Interfaces. Redwood City: Benjamin/Cummings Publishing Company.
Fowler, M. (1997), UML Distilled: Applying the Standard Object Modelling Language. Reading, MA: Addison-Wesley.
Jacobson, I., Christerson, M., Jonsson, P. and Overgaard, G. (1992), Object-Oriented Software Engineering: A Use Case Driven Approach. Reading, MA: Addison-Wesley.
Lee, G. (1993), Object-Oriented GUI Application Development. Englewood Cliffs, NJ: Prentice Hall.
Rumbaugh, J., Blaha, M., Premerlani, W., Eddy, F. and Lorensen, W. (1991), Object-Oriented Modeling and Design. Englewood Cliffs, NJ: Prentice Hall.


On the relationship between mouse operating force and display design

Kentaro Kotani1, Ken Horii1 and Yutaka Kitamura2

1 Department of Industrial Engineering, Kansai University, Suita, Osaka 564-8680, Japan
2 Faculty of Informatics, Kansai University, Takatsuki, Osaka 569-1095, Japan

ABSTRACT
This study, using a mouse equipped with a force sensitive resistor, examined the relationship between mouse operating force and fundamental design characteristics associated with display conditions such as target size and approaching angles.

KEYWORDS Mouse, Display design, Force sensitive resistors.

INTRODUCTION
A mouse is now an integral part of almost all computer tasks due to the great popularity of PC-based software. Consequently, epidemiological study of mouse use is indispensable in identifying the risk factors of physical disorders caused by using the pointing device. However, recent research on mouse use focuses chiefly on usability issues, and only a few studies deal with mouse use empirically in terms of physical and epidemiological aspects (Dowell and Gscheidle, 1997). Knowledge of such aspects should contribute to improved display design, both in determining an ergonomically risk-free size of buttons to click on and in arranging icons so as to ensure less stressful mouse movements. The objective of this study is to obtain empirical data on the operating force applied to the mouse in relation to the characteristics of such display layouts as target areas and approaching angles to the target.

METHODS
A 5 mm-diameter force-sensitive resistor (FSR) was housed on top of the microswitch inside the mouse for the measurement of operating force. The voltage signals were transmitted to a programmable A/D converter controlled by a PC. Prior to the experiment, a calibration test was conducted and a regression equation with R² of 0.999 was obtained for the relationship between FSR voltage and the force applied to the mouse.

Independent variables used in the study were four approaching angles (0 (horizontal), 30, 60 and 90 (vertical) degrees), three target sizes (10×10 mm, 20×20 mm and 30×30 mm) and two testing sessions. A total of five subjects were chosen from a population of engineering majors who had used a mouse on a daily basis for more than one year. The target size and the approaching angle were chosen as factors to examine the relationship between mouse operating force and the Fitts'-Law-based pointing characteristics demonstrated by Card et al. (1978). The occurrences of wrist and forearm motions in mouse operation throughout the experimental assignment were monitored to compare and discuss the effects of varied approaching angles.

The experimental paradigm consisted of a practice session (approximately 15 minutes) and two testing sessions. Each testing session consisted of 12 trials, which


were fully-crossed combinations of the independent variables of approaching angle and target size. The order of trials was randomized within each session.

RESULTS
The grand average of mouse operating force was 146 gf, roughly twice the minimum mouse operating force (75 g, catalog data). The ANOVA showed a significant main effect of target size on mouse operating force (F(2,8) = 11.64, p < .01), whereas the other effects, including the effect of subject, were not significant. Some interaction terms were also significant: the 2-factor interaction of subject with session (approximate F(28,20) = 9.61, p < .05) and the 4-factor interaction of subject, session, target size and approaching angle (F(24,1080) = 7.92, p < .01). Indices of difficulty (IDs) were calculated from the target sizes and the linear distance between targets. The smallest force (137 gf) was found in the conditions with the highest ID (= 4.04). With respect to target size, when the target area was small, the mouse operating force was accordingly small. However, as the target size increased, a higher mouse operating force (approximately six percent increase) was observed.
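The abstract does not state which formulation of the index of difficulty was used; one common form in the Fitts'-Law tradition cited above, computed from the movement distance D and the target width W, is the following (offered as an assumption for orientation, not as the formula used in this study):

```latex
% Common (original Fitts) formulation of the index of difficulty, in bits;
% D = distance to the target centre, W = target width.
\mathrm{ID} = \log_2\!\left(\frac{2D}{W}\right)
```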

DISCUSSION
The relationship between mouse operating force and ID was, puzzlingly, contrary to the general anticipation. Before the experiment, the largest mouse operating force was expected to be observed with the highest ID. The highest mouse operating force was, in fact, observed in the task with the largest target, i.e., the lowest ID. The results therefore imply that task difficulty did not directly determine mouse operating force. It should be noted, however, that the subjects' comments contrasted with the results: they felt they pressed much harder when they had to click the smallest target. We currently hypothesize that the total amount of muscle contraction employed in pointing with the mouse may be related to the ID; that is, some of the muscle contraction is used for pointing the mouse by flexing the index finger, and the rest of the force merely develops the muscle tension prompted by the task requirement. This hypothesis will be examined in further studies, including measurements of surface EMG during the task and of dragging force.

REFERENCES
Card, S.K., English, W.K. and Burr, B.J. (1978). Evaluation of mouse, rate-controlled isometric joystick, step keys, and text keys for text selection on a CRT. Ergonomics, 21(8), 601-613.
Dowell, W.R. and Gscheidle, G.M. (1997). The effect of mouse location on seated posture. In G. Salvendy, M.J. Smith and R.J. Koubek (eds), Advances in Human Factors/Ergonomics, 21A, Design of Computing Systems: Cognitive Considerations. Elsevier, 607-610.


Usability Principles Specific to Interactive Authoring Support Environments

Paula Kotzé

Department of Computer Science and Information Systems, University of South Africa, P O Box 392, Pretoria 0003, South Africa

ABSTRACT
This poster illustrates an interaction framework and usability principles that can be used in the design and evaluation of interactive authoring support environments aimed at the development of computer-based instructional systems.

KEYWORDS Usability principles, interaction models, interactive authoring support environments, computer-based instruction.

INTRODUCTION
Abstract principles for effective interaction derived from knowledge of psychological, computational, and sociological aspects of the problem domain can be used to direct the design and evaluation of a product from its onset. Three categories of general usability principles can be identified (Dix et al. 1998) — learnability (including predictability, synthesizability, familiarity, generalizability, and consistency), flexibility (including dialogue initiative, multi-threading, task migratability, substitutivity, and customizability), and robustness (including observability, recoverability, responsiveness, and task conformance).

However, in defining these principles it is easy merely to provide general and abstract definitions that are not very helpful to the designer. The principles should be combined to form a close bond with the specific paradigm and application domain under consideration.

Authoring support environments (ASEs) are used to develop interactive multimedia applications. They are widely advocated for the development of computer-based instructional (CBI) material. ASEs can be divided into three main groups according to the interface approach they follow — display-based, map-based and code-based. This poster presentation suggests a framework of principles which, if adhered to, would lead to more usable interactive ASEs for the CBI development domain, and would support the different paradigms on which these authoring approaches are based. The focus is on the principles that could support authors in achieving their goals, as well as the flexibility of the ASE in supporting different authoring approaches.

INTERFACE FRAMEWORK
The interface framework (Kotzé 1997) used in defining the usability principles is specifically geared towards interactive ASEs and the development of CBI material. It consists of a process involving the relationships between four state sets — the internal state of the ASE, the display state of the ASE, the resulting CBI state, and the CBI display state. The internal state space of an ASE refers to the functional state of an ASE during the authoring process, while the display state space refers to the external perceivable renderings of the ASE. The CBI result state refers to the state of the CBI


system under construction, while the CBI display state refers to the way in which the CBI system will be rendered to the ‘learner’ in the delivered product.

A model based on a combination of a modified version of the Sufrin and He (1990) framework and the global state approach of the interactors framework of Duke and Harrison (1994) is used as the basis for the interface framework.

USABILITY PRINCIPLES
The specific usability principles addressed include different levels of object content observability and object structure observability, dynamic and static displays, distinguishable object renderings, representation multiplicity, equal opportunity displays, and task migratability.

Each principle is illustrated by means of a formal definition using the interface framework and the Z notation, as well as an example from an existing commercial ASE. A set of functions linking certain internal and display states is used in defining the various usability principles. There is, for example, a principle known as media content observability which aims to assist the author in determining the ‘appearance’ of the instructional content as well as the manipulation thereof. This principle is defined as a function between the Media_State and the Media_Display state, constrained by its relation to the Media_Result and Media_Result_Display states via the render and result_render mappings:

media_content_observable : Media_State ⇸ Media_Display

∀ is : Media_State • ∃ rs : Media_Result | result(is) = rs •
  (is, render(is)) ∈ media_content_observable ⇔
    (∀ nid : NODE_ID; ic : Media |
        (nid ∈ dom(is.node_instances) ∧
         ic ∈ ran((is.node_instances(nid)).instructional_content)) •
      (render(ic.media_content) ≠ ∅ ∧
       render(ic.media_content) ⊆ render(is) ∧
       render(ic.media_content) = result_render(result(ic.media_content))))

REFERENCES
Dix, A., Finlay, J., Abowd, G. and Beale, R. (1998), Human-Computer Interaction. Prentice Hall Europe.
Duke, D.J. and Harrison, M.D. (1994), Connections from A(V) to Z. Technical Report System Modelling/WP21, AMODEUS II Project, ESPRIT Basic Research Action 7040.
Kotzé, P. (1997), The use of formal models in the design of interactive authoring support environments. DPhil Thesis, Computer Science, University of York.
Sufrin, B. and He, J. (1990), Specification, refinement and analysis of interactive processes. In M.D. Harrison and H.W. Thimbleby (eds), Formal Methods in Human-Computer Interaction. Cambridge University Press.


Choosing and using names for information retrieval

Janet A. Pitman and Stephen J. Payne

School of Psychology, Cardiff University, PO Box 901, Cardiff, CF1 3YG.

INTRODUCTION
Despite the advent of modern graphical user interfaces, access to files on personal computers is nearly always mediated by a lexical name. In many situations the user of the file chooses the name for later retrieval; alternatively, where information is shared across more than one user, a user may be required to use a name that has been chosen by someone else. Choice and use of names was an important topic in early HCI research, but has recently become less fashionable. However, many psychological issues with important usability consequences remain unresolved. This poster describes a pair of experiments that investigate the relationship between choosing and using names.

Informal empirical studies suggest that retrieval of files named by oneself is typically quite successful (Carroll, 1982; Malone, 1983; Nardi and Barreau, 1995). Furthermore, some experimental studies have suggested that self-chosen names are reliably better than names chosen by others. Broadbent and Broadbent (1978) found a 50 per cent advantage in using one's own descriptors for file retrieval. Similarly, Scapin (1982) reports an advantage in recall for individually generated command-names. However, Carroll (1985) compared self-created and externally imposed command names and found only a marginal benefit for one's own set.

An established advantage for self-chosen over imposed names may have two possible explanations. First, the advantage may lie in the processing performed during name-choice; users may remember aspects of the processing episode in addition to the name itself. This explanation would make the "self-choice effect" somewhat analogous to the established "generation effect" in verbal learning. On the other hand, it is possible that self-chosen names are better names, for example because they are more strongly associated with their referents for their particular user. If this explanation has some truth, it suggests that people are able to adapt their name choices to the idiosyncrasies of their own cognitive systems. An interesting related question is whether people can also successfully adapt to the role of "designers", i.e. can they create names which are better suited for others than are the names which they choose for themselves to use? This question becomes of immediate practical importance in co-operative work situations, where files are stored on shared disks and accessed by a community of users.

EXPERIMENT 1
Four groups of participants took part in the study. The "self-choice" group were instructed to choose single-word filenames for paragraphs of text for their own use in subsequent memory tasks. Each member of the "other-imposed" group was yoked to a member of the self-choice group, and used their partner's names in the memory tasks. Participants in the "design" group were instructed to design names for other people to use, although they later also used these names themselves. Each member of the "design-imposed" group was yoked to a member of the design group and used the names that partner had generated. Forty paragraphs of text served as target files. Participants first chose or studied names for these paragraphs and, after a filled delay of ten minutes, were given a recognition test in which they had to choose which of the forty paragraphs was associated with twenty of the names. Participants then attempted to recall the names for the other twenty paragraphs.

For each memory test, three planned comparisons were made. The self-choice effect was tested by comparing performance of the self-choice group with the other-imposed group and with the design-imposed group. Whether participants could adapt to the perceived needs of others was tested by comparing the other-imposed group to the design-imposed group. For the recognition test there was a significant advantage for self-choice (56% correct) over other-imposed (34%) and over design-imposed (36%). The difference between the two imposed conditions was not significant. For the name-recall task performance was generally slightly lower, but the pattern of significant comparisons was identical: the two imposed groups performed at about half the level of the self-choice group. In summary, experiment 1 established a statistically reliable and numerically compelling advantage for self-chosen names.

EXPERIMENT 2
This study used a similar design to experiment 1 but attempted to test directly the two explanations for the self-choice effect. We reasoned that an advantage due to memory-for-processing should be less long-lasting than an advantage due to idiosyncratically better associations. Consequently we introduced a second testing session after a delay of one week. At the beginning of this session, participants in each


of the four groups re-studied the names (again, we assumed that this would lessen any processing differences between the groups). Only the name-recall task was used in both test sessions.

In test session 1, the planned comparisons replicated experiment 1: there was strong evidence for a self-choice effect, but no evidence for adaptive "designing". By the second session, however, the situation had changed. The self-choice group was still significantly better than the other-imposed group (69% versus 46%). However, it was not significantly better than the design-imposed group (61%), which was itself significantly better than the other-imposed group.

This pattern of results suggests two conclusions. First, there is a substantial self-choice advantage which is probably best attributed to both a processing advantage and a name-target associative advantage. Second, people can adapt their name choices to the needs of others: the names chosen by "designers" were better for other people than were the names chosen by the self-choice group. In co-operative work situations, people will produce better filenames if they know, at the time they create the names, that the names will be used by others, and if they make usability by others their primary design criterion.

REFERENCES
Barreau, D. and Nardi, B. (1995): Finding and reminding: File organisation from the desktop. SIGCHI Bulletin, 27(3), 39-43.
Broadbent, D.E. and Broadbent, M.H.P. (1978): The allocation of descriptor terms by individuals in a simulated retrieval system. Ergonomics, 21, 343-354.
Carroll, J.M. (1982): Learning, using and designing filenames and command paradigms. Behaviour and Information Technology, 1(4), 327-346.
Carroll, J.M. (1985): What's in a name?: An essay in the psychology of reference. W.H. Freeman and Co., New York.
Jones, W.P. and Dumais, S.T. (1986): The spatial metaphor for user interfaces: Experimental tests of reference by location versus name. ACM Transactions on Office Information Systems, 4(1), 42-63.
Kirsh, D. (1995): The intelligent use of space. Artificial Intelligence, 73, 31-68.
Lansdale, M.W. (1991): Remembering about documents: memory for appearance, format, and location. Ergonomics, 34(8), 1161-1178.
Malone, T.W. (1983): How do people organise their desks? Implications for the design of office information systems. ACM Transactions on Office Information Systems, 1, 99-112.
Scapin, D.L. (1982): Generation effect, structuring and computer commands. Behaviour and Information Technology, 1(4), 401-410.


Designing for cultural diversity

Girish V. Prabhu and Dan Harel

Eastman Kodak Company, Rochester, NY 14650-1916, U.S.A.

ABSTRACT
Products and software developed for sale in multinational markets are most successful when they appropriately accommodate culture and language. The design of products may be either internationalized (based on features that are culture-neutral) or localized (based on features tailored to regional and local markets). Different levels of localization, from no localization, through translation only, to cultural localization, may be applied depending upon the application type and the net return on effort. Cultural localization is successful only when a detailed understanding of the specific culture is available to the designer. This poster describes a methodology based on cultural anthropology, used at Eastman Kodak Company to study and understand users' needs and preferences for internationalized versus completely localized digital imaging products, and to design products and software that are efficiently and successfully localized to “speak the universal language of photography”.

KEYWORDS Localization, Cultural localization, Japanese and Chinese design

INTRODUCTION
Eastman Kodak Company, as a global company, serves customers in Asia, Africa, the Middle East, Latin America, Europe (Western and Eastern), and North America (United States and Canada). Our customers therefore come from different countries, speak different languages, have different cultures, and have different buying habits. These elements pose unique challenges not only to marketing organizations but also to product development organizations. Products marketed outside of the US succeed when they accommodate culture and language appropriately. Culturally targeted design solutions contribute to a competitive advantage, stronger brand recognition, and an increase in sales in the regions we serve.

With this in mind, the Human Factors lab and the Strategic Design and Usability group of Eastman Kodak Company evaluated culture-specific user preferences for overall product design. Our research was conducted from a sociocultural perspective, and findings include insight into, or recognition of, the local social fabric, attitudes and behaviors, perceptions, beliefs, history, art, architecture, etc. The scope included public access kiosks, in-home imaging, and desktop software. The objective was to research product design and graphical user interface design solutions for issues affecting internationalization (applying design features that are culture-free) and localization (customizing designs for regional and local markets) of Kodak products and software services. The outcomes of this research are cultural characteristics and product appearance and usability objectives that will be used by our designers to develop digital imaging solutions that communicate respect and consideration for different target cultures, deliver in product appearance, and ensure ease of use.


METHODOLOGY
Cultural localization research utilized anthropological research methods. Cultural information was collected from both etic and emic perspectives. The overall plan for the research was as follows:

Ethnographic research
• Country-specific preferences for product appearance and user interface design from the etic (insiders) viewpoint were compiled for potential imaging users.
• Existing traditional and digital imaging products and software were benchmarked to understand user preferences.

Cultural characteristics
• Based on the identified etic and emic perspectives, cultural characteristics were developed for these countries.

Product appearance and user interface design appearance requirements
• Based on the cultural characteristics, product appearance and user interface design appearance qualities were developed for these countries.
• The findings from the emic and etic views were combined to develop overall product appearance and user interface design characteristics for each country.

Validation research
• Prototypes of suitable products for each country were developed. The North American baseline prototype for each country was translated.
• These localized prototypes were evaluated against the localized North American prototype in the specific countries through focus groups.

Develop guidelines
• Based on the research, the product design and UI design guidelines were refined.

The existing Kodak consumer segmentation was not used in this research because those segmentations were based on US-centric data and were thought inappropriate for the Asian cultures. The research specifically targeted business, home, professional and education-related users with different levels of familiarity with digital technology. The research recruited equal numbers of men and women. The ages of the participants in the ethnographic study ranged from 14 to 55 years, whereas the validation research was done using equal numbers of men and women aged 26 to 44 years.

CONCLUSIONS
Research into cultural preferences has broadened our appreciation of the importance and complexity of localized product design for Kodak products for Japan and China. Our research has indicated how elements such as symbology, field formatting, color, interaction styles, screen layout, and typography affect successful product interface localization.

REFERENCES
Fernandes, T. (1995) Global Interface Design: A Guide to Designing International User Interfaces. AP Professional, Boston, MA.
Day, D. (1996) Cultural bases of interface acceptance: Foundations. People and Computers XI, Proceedings of HCI '96, the 11th Annual European Human-Computer Interaction Conference, 20-23 August, Imperial College, London, 35-47.
Zieglar, V., unpublished data.


Translating the World Wide Web interface into speech

C. Reeves1, M. Zajicek2, C. Powell2 and J. Griffiths2

1 IT Services Development, Royal National Institute for the Blind, 224 Great Portland Street, London, W1N 6AA, UK.
2 The Speech Project, School of Computing and Mathematical Sciences, Oxford Brookes University, Oxford OX3 0BP, UK.

ABSTRACT
BrookesTalk, a prototype browser, uses information retrieval to provide a set of complementary options to summarise the Web page. The aim is to enable visually impaired users to effectively browse and use the World Wide Web, alone or with sighted co-workers, using either a visual browser, specific items of large text or speech output. Some initial studies and future areas of work are discussed.

KEYWORDS World Wide Web, browser, visually impaired, information retrieval, HTML, usability

INTRODUCTION
BrookesTalk, a speech output browser using Microsoft speech technology, offers a range of navigation functions for the World Wide Web. These include a list of headings, links, keywords, an abridged version of the page and a page summary. It is expected that the user will pick tools which complement one another for the particular type of page under review. Our hypothesis is that improved provision of summary information will increase orientation, navigation and general usability of the World Wide Web for visually impaired users.

This paper briefly describes the essential functionality behind BrookesTalk and then discusses preliminary evaluation of the usability of BrookesTalk, together with future areas of work.

HOW BROOKESTALK WORKS
Keywords - The list of extracted keywords consists of words which are assumed to be particularly meaningful within the text (Luhn, 1958). These are found using standard information retrieval techniques based on word frequency (Zajicek and Powell, 1997).

Abridged text - The technique is based on ‘word level n-gram analysis’ in automatic document summarisation (Rose and Wyard, 1997). Extraction of three-word key phrases, or trigrams, preserves some word position information; a page is then created consisting of the sentences in which the trigrams appeared. Abridged pages on average worked out to be 20% of the size of the original text and, unlike keyword lists, are composed of well formed, comprehensible sentences.
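The abridging technique just described can be sketched in a few lines. The following is a minimal illustration only, not the BrookesTalk algorithm: the trigram scoring rule (raw frequency) and the cut-off (the ten most frequent trigrams) are assumptions made for the example.

```python
# Minimal sketch of trigram-based abridging: score word trigrams by frequency,
# then keep only the sentences containing a top-scoring trigram, in order.
import re
from collections import Counter

def abridge(text, top_n=10):
    sentences = re.split(r'(?<=[.!?])\s+', text)
    trigram_counts = Counter()
    for sentence in sentences:
        words = re.findall(r'[a-z]+', sentence.lower())
        trigram_counts.update(zip(words, words[1:], words[2:]))
    top = {t for t, _ in trigram_counts.most_common(top_n)}
    kept = []
    for sentence in sentences:
        words = re.findall(r'[a-z]+', sentence.lower())
        if any(t in top for t in zip(words, words[1:], words[2:])):
            kept.append(sentence)
    return ' '.join(kept)
```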

WHAT THE USERS THOUGHT
Keywords - Preliminary experiments with twenty subjects showed that headings and keywords were judged to be roughly comparable in their usefulness as page content indicators; both, however, were significantly more useful than anchors/links.

Page summary - Perceived by users to be an important tool in Web page orientation. The BrookesTalk summary comprises the title, author name, author-defined keywords,


number of words in the page, headings, links and extracted keywords. The number ofwords in a page was found to be particularly useful for page orientation.

BrookesTalk summary facilities become useful for visually impaired users as a resultof the variability in the use of HTML code by Web page authors which can impedestandard orientation methods.

General feedback on BrookesTalk was also obtained from a group of visually impaired users, including those at the Royal National Institute for the Blind (RNIB). Operational simplicity, a small program 'footprint', high flexibility and configurability, and usability by sighted co-workers were all deemed important factors and will influence future studies. Further areas of functionality were highlighted, such as a flexible scratchpad for recording sections of Web pages for later, integrated retrieval. There are also potential benefits in using the summary functions with other, non-Web, formatted documents used during work and leisure.

THE WAY FORWARD WITH BROOKESTALK
It has become apparent that users have varied approaches to the use of BrookesTalk and its functionality. We will attempt to observe and fully understand the different conceptual and navigational techniques used both within and between visually impaired and sighted people. We are currently undertaking user mapping to identify the main user groups and stakeholders, which will form the basis for future usability studies. It is likely that a large-scale trial will involve users performing tightly controlled tasks on specially chosen types of Web pages. We will seek to establish whether different summarisations are required for different Web page types, the level of activity required to gain a clear 'picture' of a Web page, and the level of enhanced usability achieved by providing a closely linked combination of visual and speech output.

Similarly, on the functionality side, we plan to extend page summarisation to include an analysis of the page as a multi-subject document. We will also reconsider the abridged version of the page, which received the most criticism. The trigram analysis can easily pick out the wrong trigrams as significant, and the algorithm for picking trigrams is currently not very stable. New algorithms are being implemented, along with other tools to be developed by the Intelligent Systems Research Group at Oxford Brookes University.

REFERENCES
Luhn, H. P. (1958) The automatic creation of literature abstracts. IBM Journal of Research and Development, 2, 159-165.
Rose, T. and Wyard, R. (1997) A Similarity-Based Agent for Internet Searching. In Proceedings of RIAO'97.
Zajicek, M. and Powell, C. (1997) The use of information rich words and abridged language to orient users in the World Wide Web. IEE Colloquium 'Prospects for spoken language technology', London.


Beyond the Interface: Modelling the Interaction in a Visual Development Environment

Chris Scogings1 and Chris Phillips2

1 Institute of Information & Mathematical Sciences, Massey University, Albany Campus, Auckland, New Zealand

2 Institute of Information Sciences & Technology, Massey University, Turitea Site, Palmerston North, New Zealand

ABSTRACT
A shortcoming of current user interface building tools is that while they permit the designer to construct an interface made up of a set of screens, they provide no model of the interaction. Interface builders could be extended to produce such a model as an automatic by-product of the construction of the interface. This research examines such an extension to Delphi. The interaction model is specified in Lean Cuisine+, a semi-formal graphical notation for describing the behaviour of direct manipulation GUIs.

KEYWORDS interface development tools, interaction model, Lean Cuisine+

INTRODUCTION
A variety of tools and techniques are available to support the development of interactive systems, from pencil-and-paper mockups to full-scale interface development environments (Myers, 1993; Szekely, 1994). Interface builders, such as Visual Basic and Delphi, are capable of producing industrial-strength applications, and are commonly based on general purpose (often object-oriented) programming languages. They provide access to a toolkit of widgets, and support visual programming, which provides designers with immediate feedback on the look and feel of the interface.

A shortcoming of current interface builders is that while they permit the designer to construct a set of screens, they provide no model of the interaction. The interface exists and can be exercised, but no specification or description of the interaction exists outside the code produced. The focus is on the interface rather than the interaction. Interface builders could be extended to produce a model of the interaction as a by-product of the construction of the interface. This research explores such an extension to Borland's Delphi, and briefly reviews its utility. The interaction model is specified in Lean Cuisine+ (Phillips, 1995).

MODELLING THE INTERACTION
Delphi is a visual development tool which can be used to create PC Windows applications. It supports a 'drag-and-drop' approach to creating interfaces. Using Delphi, the designer can define the appearance of screens, and also the behaviour of menus, buttons etc. in linking screens. Thus both the look and the feel of the user interface can be created. A Delphi prototype has been developed for an Automatic Teller Machine (ATM) based on a touch screen. Numeric data, e.g. the PIN, is entered using touch keys. Six Delphi screens (Forms) have been defined, including those for Card Entry, Deposit, Account Balance, and Withdrawal.

The interaction model for the Delphi ATM application has been produced via a conversion process which makes use of a prototype Lean Cuisine+ software support environment (Phillips, 1994). Lean Cuisine+ is a semi-formal graphical notation for specifying the behaviour of GUIs in terms of the constraints and dependencies which exist between selectable dialogue primitives. For the ATM, the conversion process initially produces the Lean Cuisine+ diagram shown in Figure 1 (some detail is omitted). The interaction is represented by a dialogue tree, which shows inter-relationships and some of the constraints which impact the behaviour of the selectable primitives. Further constraints are captured in the form of selection triggers, represented by directed arcs. Additional information can be added to the diagram by editing it using the software support environment.

REVIEW
The Lean Cuisine+ model provides information on the interaction which could otherwise be uncovered only by either exercising the prototype and committing the behaviour to memory, or by studying Delphi code. In particular, concise information in graphical form is presented on the structure of the interaction, involving both intra-screen and inter-screen relationships between selectable options. This should be useful both as documentation and in helping a new user form a mental model of the application. The model can be analysed by the developer for structural shortcomings and inconsistency within the dialogue, including excessive navigation between screens in performing tasks, inconsistencies in selecting options on different screens, and missing options. It can also be matched with the interaction design specification.
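The cited papers do not prescribe a concrete data structure for the extracted model; the following sketch, in Python rather than Delphi, shows one way a dialogue tree with selection triggers might be represented once extracted from the interface definition (node names and the grouping node are illustrative, not taken from the paper):

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Node:
    """A selectable dialogue primitive or grouping in the dialogue tree."""
    name: str
    children: list[Node] = field(default_factory=list)

@dataclass
class Trigger:
    """A selection trigger: selecting `source` causes `target` to become selectable."""
    source: str
    target: str

@dataclass
class InteractionModel:
    root: Node
    triggers: list[Trigger] = field(default_factory=list)

# Illustrative fragment of the ATM dialogue described in the paper.
atm = InteractionModel(
    root=Node("ATM", [
        Node("Card Entry"),
        Node("Transaction", [Node("Deposit"), Node("Account Balance"), Node("Withdrawal")]),
    ]),
    triggers=[Trigger("Card Entry", "Transaction")],
)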

REFERENCES
Myers, B.A. (1993) State of the Art in User Interface Tools. In Hartson, H.R. and Hix, D. (Eds), Advances in Human-Computer Interaction, Norwood, NJ, 11-28.
Phillips, C.H.E. (1995) Lean Cuisine+: An executable graphical notation for describing direct manipulation interfaces. Interacting with Computers, 7(1), 49-71.
Phillips, C.H.E. (1994) Serving Lean Cuisine+: Towards a Software Support Environment. Proc. OZCHI'94, CHISIG of the Ergonomics Society of Australia, 41-46.
Szekely, P. (1994) User Interface Prototyping: Tools and Techniques. Proc. ICSE'94 Workshop on SE-HCI, Sorrento, Italy, Springer, 76-92.

Figure 1: Lean Cuisine+ diagram for the ATM


Designing Educational Interfaces from a Constructivist Perspective

David Squires1 and Anne McDougall2

1 School of Education, King's College, Waterloo Road, London SE1 8WA, UK

2 Faculty of Education, Monash University, Clayton, Victoria 3168, Australia

ABSTRACT
This paper identifies constructivism as the current dominant theory of learning and highlights some issues emerging from the authors' on-going analysis of the implications of constructivism for the design of educational software interfaces.

KEYWORDS Constructivism, Educational Software, Interface Design.

INTRODUCTION
Constructivism, in its various forms, is the predominant theory of learning today. Many writers have expressed the hope that taking a constructivist approach to software design will lead to better educational software and better learning. The potential synergy between multimedia environments and a constructivist approach leads to new challenges for educational software development. Our current work is examining how educational software, and interfaces in particular, can be designed to reflect a constructivist approach.

DESIGN OF CONSTRUCTIVIST SOFTWARE
While constructivism is an umbrella term that covers a range of theories of learning, the essential notions now seem to be widely agreed upon. Constructivism emphasises learning as individual and idiosyncratic. Learners bring different perspectives to learning, interpreting task domains in terms of their own past experiences to build their own conceptual structures. Many constructivists also stress the importance of context, maintaining that learning is 'situated' in the environment in which it takes place. This leads to an emphasis on collaboration between learners and new interpretations of the relationships between teachers and learners.

Personal construction of knowledge and a recognition of the importance of context have far-reaching consequences for the use of educational software. Learners will perceive the function of software and interpret its behaviour in idiosyncratic ways, depending on the way in which they construct knowledge and relate to contextual factors. An educational software package cannot be seen as a fixed entity defined by the designer; rather, it is a personal construct in the mind of the learner. Thus the overriding design rationale changes from one of pedagogic prescription to one of providing rich cognitive experiences.

Four main design thrusts are discernible. First, learners are expected to be active and purposeful, taking a large responsibility for their learning. The implication for software design is that learners should have significant control over the operation of software, with opportunities to explore issues and express their own ideas and concepts. Second, the notion of authenticity is critical. Authentic learning environments are typically complex, providing rich and diverse opportunities for learners to explore ideas in realistic and convincing ways, and to complete useful projects rather than contrived abstract tasks. Third, support for multiple perspectives is an important aspect of constructivist design. It is important that different learners have the opportunity to view a learning situation in different ways. Finally, the social construction of knowledge, with the enhancement of learning by collaboration and discussion, is a critical feature. The idea of learning being situated in a specific context is fundamental here. A situated view assumes distribution of intelligence between all actors in a learning environment, including resources.

HCI REQUIREMENTS IN CONSTRUCTIVIST SOFTWARE
The non-routine cognitive nature of educational tasks emphasised in a constructivist view of learning indicates that interface design for educational software should take account of exploratory, open-ended cognitive activity. However, most of the research in human-computer interaction has been concerned with the routine use of well-known systems such as word processors and spreadsheets to complete well-defined tasks. The cognition involved in the completion of these routine tasks is based on the use of internalised and well-understood procedures, for example moving a block of text in a document. Experienced users will complete these tasks in well-ordered, concise ways without the need for exploratory, problem-solving use of the system. This contrasts with the use of educational software, where the user typically has an incomplete or misconceived understanding of the task in question. Educational software is designed to assist students in the development of understanding and the correction of misconceptions - a non-routine cognitive task.

We are currently examining ways in which software can be designed to reflect a constructivist approach to learning, using data from three empirical studies, each of which provides a detailed account of learning associated with the use of an educational software environment designed to support a constructivist approach. One study (McDougall, 1988) observed young children learning mathematical ideas in Logo, an environment with a purely programmatic interface. The second (Squires, 1994) involved upper secondary students studying photosynthesis in an environment, Bioview, with a direct manipulation interface. The third (Sellman, 1991) recorded the learning processes of secondary students exploring planetary motion in a hybrid programming/direct manipulation environment.

Our work includes investigations of learning with these interfaces in terms of issues such as learner control, time investment by learners, learners' mental models of the learning environment, support for learners' visualisation of abstract concepts and processes, interface metaphors to support learning, learners' attitudes to and use of "errors", and goal setting by learners. Through careful analysis of the learning activities and processes sponsored and supported by the environments, and consideration of related issues of interface design, we are working toward a better understanding of interface design issues to support constructivist views of learning.

REFERENCES
McDougall, A. (1988) Children, Recursion and Logo Programming. Unpublished Ph.D. thesis, Monash University, Melbourne.
Sellman, R. (1991) Hooks for tutorial agents: A note on the design of discovery learning environments. CITE Technical Report No. 145, IET, Open University.
Squires, D.J. (1994) A Comparison of Learner and Designer Models in the Use of Direct Manipulation Educational Software in the Context of Learning About Interacting Variables in Photosynthesis. Unpublished Ph.D. thesis, University of London.


Strategies for Developing Substantive Engineering Principles

Adam Stork and John Long

Ergonomics and HCI Unit, University College London, 26 Bedford Way, London WC1H 0AP, UK

ABSTRACT
This paper seeks to contribute to more effective Human-Computer Interaction (HCI) design practice in the longer term. The ultimate goal of more effective HCI design practice is held to be practice supported by 'engineering principles' (Dowell and Long, 1989). The paper describes two broad overall strategies for developing (substantive) engineering principles.

KEYWORDS Engineering Principles, Human Computer Interaction

INTRODUCTION
This paper describes work that is part of research to improve the emergent discipline of HCI. The research can be characterised by describing the ultimate goal of HCI knowledge to be practice supported by 'engineering principles' (Dowell and Long, 1989), which are conceptualised, operationalised, tested, and generalised to offer an earlier and a better guarantee of success in application. However, a strategy is required to develop such engineering principles. This short paper considers only substantive engineering principles: 'substantive' excludes consideration of any methodological component of the principles.

CONCEPTION OF SUBSTANTIVE ENGINEERING PRINCIPLES
HCI engineering principles are one type of HCI knowledge to support HCI practice. HCI practice is the provision of specific design solutions to specific design problems by applying general design knowledge. General design knowledge is general over types of user, types of computer, and types of domain of application, and includes performance.

The component of specific design problems that relates to particular (general) design knowledge can be conceptualised, and termed a general design problem. Similarly, the component of specific design solutions that relates to particular (general) design knowledge can be conceptualised, and termed a general design solution.

To support a high guarantee of application, engineering principles are conceptualised as consisting of a general design problem, its general design solution, and their relationship.

In the case of Dowell and Long's conception of these general design problems and solutions (1989, as extended by Stork and Long, 1994), the high guarantee requirement for engineering principles suggests that the performance of a general solution must be equal to that of a general problem. If the performance of the two is equal, then the expression of any particular engineering principle can be briefer, because the expression need only contain one of the performances.


STRATEGIES FOR DEVELOPING (SUBSTANTIVE) ENGINEERING PRINCIPLES
The above conception of engineering principles suggests that the overall aim of any strategy for developing engineering principles is to identify a general design problem and its general design solution.

Two broad overall strategies are proposed to this end:

1. Bottom-up strategy. To conceptualise and operationalise specific design problems and their specific design solutions; to generalise over these operationalisations to produce putative engineering principles; and to test further the putative engineering principles by application (the putative engineering principles can be considered, to some extent, tested by their development from specific design problems and their solutions).

2. Top-down strategy. To conceptualise and operationalise general design problems and their general design solutions as putative engineering principles; to test the putative engineering principles by application.

These two strategies imply a continuum of strategies, over the expected initial generality. The first requires initial concepts that must be expected to be general (for example, the concepts of 'structure' and 'behaviour' in the Dowell and Long conception of the HCI general design problem). The second requires initial concepts that are expected to be even more general (for example, the potential concept of 'feedback'). The first strategy appears to offer a more certain initial route to engineering principles, given the difficulty of selecting and representing the initial generality.

CONCLUSION
Both strategies are ongoing (Stork and Long, 1994 and in press; Stork, Lambie and Long, in press). Both have potential, although, as expected, the first appears to be delivering a more certain initial route.

ACKNOWLEDGEMENTS
This research was funded by an EPSRC CASE studentship sponsored by Schlumberger Industries. The views expressed in the paper are those of the authors.

REFERENCES
Dowell, J. and Long, J.B. (1989) Towards a conception for an engineering discipline of human factors. Ergonomics, 32(11), 1513-1535.
Stork, A. and Long, J. (1994) A Planning and Control Design Problem in the Home: Rationale and a Case Study. In Bjerg, K. and Borreby, K. (eds), Proc. International Working Conference on HOIT, University of Copenhagen, Denmark.


From Agents to a Networked Display Manager

Mark Treglown

Department of Electrical and Electronic Engineering, University of Bristol, Merchant Venturers Building, Bristol BS8 1UB, UK

INTRODUCTION
The ESPRIT project 20304 - ETHOS is concerned with devising and evaluating protocols and devices to be employed Europe-wide to support an increasing degree of automation in the home. ETHOS-compliant home appliances contain embedded microprocessors and network hardware and are able to communicate using mains and wireless modems, and to negotiate to make better use of the possibly limited power available to the domestic electricity consumer, to take advantage of cheaper tariffs so as to reduce electricity costs, and to schedule when tasks should be performed. While ubiquitous computing presents a vision of invisible computing machinery embedded in objects and devices encountered in the environment, users and devices may sometimes need to communicate while remote from each other, or the device itself may not be able to provide an adequate display for all tasks and a remote display is required. ETHOS provides such a device, named the UBUI: a networked user interface that supports a number of protocols for displaying text information and providing simple user interfaces. We report on formally specifying the UBUI in the Agent notation and developing an implementation, and we comment on the Agent notation's usefulness in the development of user interfaces.

AGENTS IN THE UBUI SYSTEM'S DEVELOPMENT
The Agent model (Abowd, 1990) describes systems in terms of intercommunicating objects, where each Agent is defined in terms of: a persistent internal state which changes in response to receiving event messages from other agents (described using the Z notation); a description of the one-way communication channels that connect agents and the messages that may be sent or received along each channel; and an external behaviour component that describes the temporal ordering on sequences of event messages that the agent is prepared to respond to, defined using the notation of Communicating Sequential Processes (CSP). While much research effort has seen the Agent model superseded by the Interactor model, for many systems Agent descriptions will be directly comparable to Interactor descriptions, and, as in the case of the UBUI, we wish to describe a physical and temporal separation of the application and the display. This is a simpler task in the Agent model than in the Interactor model, where an Interactor's state and the rendering of that state are more tightly coupled.

Unlike most display managers, which provide a large number of widgets and graphics routines and tend not to restrict the ways in which displays are constructed, the low bandwidth of ETHOS networks means that messages sent between appliances and the display are short and few. This means that the UBUI, rather than the application, must manage the layout of text and other interface components. Initial development followed a route similar to that of Fields et al (1994), where an initial natural language partial specification was transformed into a more concrete set of requirements. The UBUI is to be used by as wide a range of users as possible, so requirements constraining the use and layout of fonts, colours and interface components to meet the needs of users with common visual impairments were placed on the system.

An implementation in Java, chosen for its portability and object-oriented nature, was then developed. While the Agent model captures the requirements of the final program, implementation required information lacking in previous discussions of Agent use. In particular, the internal and external behaviour components proved hard to implement. While refinement of Z is well understood, many internal operations will be calls to the API of the Java Abstract Window Toolkit. Even with this compromise, we find that as one gets closer to replacing internal operations' specifications with window toolkit function calls, the mapping becomes harder and one is forced increasingly to adopt the conventions of the toolkit in terms of program and interaction structure. Transforming external behaviour components into code is also complicated by Abowd's notation differing slightly from standard CSP and being incomplete; a full operational semantics thus had to be completed. The semantics produced is used to transform the CSP external behaviour component to a labelled transition system, which is translated by hand into a state machine integrated into each internal operation. This approach avoided having to develop a CSP compiler for the external behaviour component, but for now requires lengthy compilation by hand and increases programming time. We are now investigating the automation of the procedure undertaken to compile the external behaviour components, but, by producing a labelled transition system, one is able to apply the many available tools and logics developed to prove properties of interactive systems.

THE USABILITY OF AGENTS IN DESIGN AND DEVELOPMENT
While the Agent notation is not accessible to the untrained designer, it relies on fewer additional concepts than other formal notations introduce. Johnson and Gray (1995) claim that an adequate notation for specifying temporal aspects of interaction must offer sufficient expressiveness so that relevant properties of interaction may be described; enable significant properties of the interaction, especially where problems will arise, to be salient in the representation; have a well-formed semantics; and integrate the temporal properties of the system with other relevant properties such as the display state. While the Agent notation clearly has a well-defined semantics, and is expressive to a degree sufficient to capture the design of the UBUI, it is lacking in other regards as a specification notation. The salience of the external behaviour components is low: definitions can become extremely complicated and confusing to follow, and usability problems hard to find. This reflects the Agent notation's capturing of interaction at a very fine and detailed level; greater salience can be found in the high-level agent frames of Fields et al (1994), from which more formal Agents are derived. The degree of integration is also quite low: while the system state and interaction are linked, they can only be seen in terms of how the state changes in response to events, and the overall current display state is distributed among a number of Agents and must be derived from the initial state being subjected to the intervening changes. Other aspects of integration, such as the physical nature of input and output media and information flows to and from the user, are either not part of the Agent notation or extremely hard to discern. While the Agent notation has been employed in the development of a medium-sized user interface component with some success, this was at the expense of the time taken to extend the understanding of the model sufficiently so that an implementation could be derived.
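As a purely illustrative sketch of the hand-translation step described above (the UBUI itself was implemented in Java, and the states and event names below are hypothetical, not taken from the ETHOS specification), a labelled transition system derived from a CSP external behaviour component might guard each internal operation as follows:

# Hypothetical labelled transition system derived from an external behaviour
# ordering such as: display = show_text -> (update -> display | clear -> display)
TRANSITIONS = {
    ("idle", "show_text"): "showing",
    ("showing", "update"): "showing",
    ("showing", "clear"): "idle",
}

class DisplayAgent:
    """Guards each internal operation with the compiled state machine."""
    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        next_state = TRANSITIONS.get((self.state, event))
        if next_state is None:
            # Event not permitted by the external behaviour at this point.
            raise ValueError(f"event {event!r} refused in state {self.state!r}")
        self.state = next_state
        # ... the internal operation for `event` (e.g. a toolkit call) would run here.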
Given the drawbacks of Agents discussed above as a means of capturing interaction designs, and the current lack of needed tool support, one must ask whether further use of the Agent model is currently warranted.

REFERENCES
Abowd, G.D. (1990) Agents: Communicating Interactive Processes. In Diaper, D., Gilmore, D., Cockton, G. and Shackel, B. (eds), Human-Computer Interaction - INTERACT '90, North-Holland, Amsterdam.
Fields, B., Harrison, M. and Wright, P. (1994) From Informal Requirements to Agent-based Specification. SIGCHI Bulletin, 26(2), 65-68.
Johnson, C. and Gray, P. (1995) A Critical Analysis of Interface Specification Notations. Computer Science Department, University of Glasgow, technical report TR-1995-6.


Collaborative Virtual Environments: the COVEN Project

Jolanda G. Tromp1 and Anthony Steed2

1 Communications Research Group, Department of Computer Science, University of Nottingham, NG7 2RD, UK

2 Department of Computer Science, University College London, Gower St, London WC1E 6BT, UK

ABSTRACT
COVEN is a European project that addresses the technical and design-level requirements of VR-based multi-user collaborative activities in professional and citizen-oriented domains. This paper gives an overview of the project, its objectives, methodology and research. It highlights our current and future research on the evaluation of collaborative virtual environment systems and applications.

KEYWORDS collaborative virtual reality, methodology of usability testing, interaction design

INTRODUCTION
This paper describes the COVEN (COllaborative Virtual ENvironments) Project (1995-1999), and its work in the design and evaluation of systems and applications to support collaborative virtual environments (CVEs). COVEN is organized as several concurrent and related threads of activities addressing CVE network, system and application development, and it is based around three 'requirements-design-evaluation' iterations. A major concern of the project is enabling CVE applications that will scale from five to hundreds of simultaneous users. Additional information can be found at http://chinon.thomson-csf.fr/coven/. In order to capture different user and system requirements, several application scenarios have been developed. In the first design iteration the main demonstrators are a virtual conferencing suite aimed at professional users, and a virtual tourist information service for holiday planning (see Figure 1). They are built upon Division's dVS™, a leading VR product. Both demonstrators support collaboration services, such as audio and text communication, expressive avatars, object manipulation, participant roles and group navigation.

Figure 1: Views of the virtual conferencing and holiday planning scenarios

EXPERIMENTAL WORK
A general tendency to ignore or minimize VE evaluation has been observed (Durlach & Mavor, 1995). It is our conviction that this is partly due to the lack of VR-specific evaluation and design tools. To date, there are still few guidelines for usability design and evaluation of CVEs, the only exceptions being work on single-user virtual environments (Kaur, Maiden & Sutcliffe, 1996; Parent, 1998). Our work is therefore based on three hypotheses: there are existing HCI design and evaluation methods for 2D applications which need to be translated to 3D/CVE applications; there are CVE-specific concepts introduced by human behavioral needs which are largely unknown, still under development, and need to be explored; and there are CVE-specific constraints on methodological aspects of evaluations which need to be identified and resolved.

From an analysis of the methodological constraints (Tromp, 1997) we devised an evaluation framework, and three main threads of work were derived. Firstly, we used HCI-informed usability inspections of the initial applications, based on Nielsen (1994), to uncover the main implementation flaws and clean up the overall design. The results of this inspection are applicable to all CVE systems and applications, not just those developed by COVEN (Steed and Tromp, 1998); we identified a significant number of usability issues at system, interaction and application levels. System issues reflect fundamental properties of the CVE system, such as latency and synchronization, that need to be addressed by system designers and worked around at application level. Interaction issues arise due to the complexity of interacting with a 3D scene. For example, reconciling the requirements for text input, 3D object interaction and free navigation in a desktop or immersive system is a difficult problem that is highly task dependent. Application issues are those problems that arise due to the presentation of functionality in the 3D world. For example, there is a tension between making the 3D objects realistic so that the participants recognize them, and having to support the normal affordances of the resulting objects. Secondly, to explore the new human behavioral concepts, we conducted observational evaluations of participants performing tasks in networked trials. To date we have completed the first series of over 30 network trials. The participants completed usability questionnaires after each trial, and those results gave strong emphasis to issues raised in the other research threads. Finally, isolated auxiliary case-controlled experiments took place, focusing on the evaluation of central CVE concepts such as the sense of presence and requirements for collaboration.

Future work, during the second iteration of the demonstrator applications' design cycle, will involve a more detailed investigation of user interaction and user collaboration, through case-controlled network trial experiments. We are also redeveloping our CVE inspection method to address issues such as latency, 3D object interaction and collaboration. Our approach is aimed at extracting CVE interface design guidelines and more appropriate CVE-specific inspection methods.

ACKNOWLEDGEMENTS
COVEN is a European project in the Advanced Communications Technologies and Services Programme, which is part of the Fourth Framework Programme of research and development of the European Union (ACTS N. AC040). The COVEN consortium gathers twelve partners, from both industrial and academic backgrounds: Arax Ltd (UK), Division Ltd (UK), EPFL (CH), IIS Ltd (GR), KPN Research (NL), Lancaster University (UK), SICS (S), Thomson-CSF LCR (F; coordinator), TNO FEL (NL), University College London (UK), University of Geneva (CH), University of Nottingham (UK). Thanks go to Kulwinder Kaur, Alistair Sutcliffe and Anne Parent for sharing their views with us on the topic of methods for usability evaluation and design for VR.

REFERENCES
Durlach, N.I. and Mavor, A.S. (1995) Virtual Reality: Scientific and Technological Challenges. National Research Council, National Academy Press, USA.
Kaur, K., Maiden, N. and Sutcliffe, A. (1996) Design Practice and Usability Problems with Virtual Environments. In Virtual World'96 Conference, Stuttgart, Proc. IDG Conferences.
Nielsen, J. and Mack, R.L. (1994) Usability inspection methods. John Wiley and Sons, New York, NY.
Parent, A. (1998) Designing life-like virtual environments. Submitted to Presence: Teleoperators and Virtual Environments, MIT Press.
Steed, A. and Tromp, J.G. (1998) Experiences with the Evaluation of CVE Applications. To appear in Proc. of Collaborative Virtual Environments, 2nd CVE98 Conference, Manchester, June 17-19.
Tromp, J.G. (1997) Methodology of Distributed CVE Evaluations. In Proceedings of UK VR SIG 1997, Bristol.


Touch screen vs. mouse: an experimental comparison using Fitts' law and mental workload

Josine van de Ven

Dutch IT Group, Van Kinsbergenstraat 5, 8081 CL Elburg, The Netherlands. [email protected]

KEYWORDS: touch screen, mouse, Fitts' law, mental workload, HRV

THE AIM OF THE STUDY
The aim of this study was to compare two different devices, the mouse and the touch screen. We tried to compare these devices using two different methods, Fitts' law and mental workload. Fitts' law is used for studying pointing tasks carried out with indirect pointing devices such as mice and trackballs, and it provides a good model for predicting movement times with these devices. Whether it is equally good for touch screen tasks is something we try to establish in this experiment. In addition, the researchers wanted to gain experience with registering the mental workload of subjects during computer task performance. Mental workload was measured using heart rate variability (HRV), which is derived from heartbeat registration.
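The paper does not state which formulation of Fitts' law was fitted; the sketch below assumes the common Shannon form, MT = a + b log2(D/W + 1), and fits a and b to observed movement times by least squares (the sample data are invented for illustration):

import math
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return math.log2(distance / width + 1)

# Illustrative (invented) trials: target distance, target width, observed movement time (s).
trials = [(256, 32, 0.62), (512, 32, 0.74), (256, 64, 0.51), (512, 16, 0.88)]

ids = np.array([index_of_difficulty(d, w) for d, w, _ in trials])
times = np.array([t for _, _, t in trials])

# Fit MT = a + b * ID by ordinary least squares.
b, a = np.polyfit(ids, times, 1)
predicted = a + b * ids
r_squared = 1 - np.sum((times - predicted) ** 2) / np.sum((times - times.mean()) ** 2)
print(f"a = {a:.3f} s, b = {b:.3f} s/bit, R^2 = {r_squared:.2f}")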

MATERIALS AND METHODS
The experiment was carried out at the University of Nijmegen. Two tasks were used to make the comparison. The first task was used to see if Fitts' law applies to touch screen data at all. This was a simple task in which a button appeared at a random position on the screen. Subjects had to push the button as fast as possible; then a new target appeared and the task started all over again.

The second task was, in principle, identical to the first, but now mental workload was imposed by asking subjects to maintain three mental counters. Characters appeared on the buttons and subjects had to count (silently) the occurrences of the different characters during the task.

CONCLUSION
Results of this experiment show that Fitts' law can be applied to touch screen data in certain situations (depending on the width of the target). Although the fit of the model is not as high as with mouse data, it is reasonably good.

With respect to the HRV data, we found no difference between the mouse and the touch screen. However, we did find some unexpected results regarding the different tasks that need to be investigated further. The first task (simple pointing) seems to cause a higher mental workload than the second task (with three mental counters); we expected to find results indicating the opposite. These results cannot be explained by the theories we consulted. We are currently carrying out another experiment with similar conditions, and plan to include those results in the actual poster presentation in September.


The following posters have also been accepted for presentation during HCI'98, but descriptions were not received in time for inclusion within this Conference Companion.

Formative evaluation of a focus+context visualization technique
Bjork S and Holmquist LE, Dept. of Computer Science, Goteborg University, PO Box 620, S-405 30 Goteborg, Sweden

Socialspaces: an environment for dynamic participation in informal real-time group activities
Boyer D and Wilbur S, Bell Labs, Lucent Technologies

Designing intrinsically motivating interaction
Garcia-Tobin D, School of Computing, University of Plymouth, Drake Circus, Plymouth PL4 8AA. <[email protected]>

Research platform to usable software - or writing the interface
Hughes J, Clark A and Sasse A, HE R&D Unit, UCL, 1-19 Torrington Place, London WC1E 6BT

Economic and social influences on interaction with the web
Johnson C, Dept. of Computer Science, University of Glasgow, Glasgow G12 8QQ