
Understanding Human-Computer Interaction for Information Systems Design

By: James H. Gerlach
Graduate School of Business Administration
University of Colorado at Denver
Campus Box 165
P.O. Box 173364
Denver, Colorado 80217-3364

Feng-Yang Kuo
Graduate School of Business Administration
University of Colorado at Denver
Campus Box 165
P.O. Box 173364
Denver, Colorado 80217-3364

Abstract

Over the past 35 years, information technology has permeated every business activity. This growing use of information technology promised an unprecedented increase in end-user productivity. Yet this promise is unfulfilled, due primarily to a lack of understanding of end-user behavior. End-user productivity is tied directly to functionality and ease of learning and use. Furthermore, system designers lack the necessary guidance and tools to apply effectively what is known about human-computer interaction (HCI) during systems design. Software developers need to expand their focus beyond functional requirements to include the behavioral needs of users. Only when system functions fit actual work and the system is easy to learn and use will the system be adopted by office workers and business professionals.

The large, interdisciplinary body of research literature suggests HCI's importance as well as its complexity. This article is the product of an extensive effort to integrate the diverse body of HCI literature into a comprehensible framework that provides guidance to system designers. HCI design is divided into three major divisions: system model, action language, and presentation language. The system model is a conceptual depiction of system objects and functions. The basic premise is that the selection of a good system model provides direction for designing action and presentation languages that determine the system's look and feel. Major design recommendations in each division are identified along with current research trends and future research issues.

Keywords: User-computer interface, user mental model, human factors, system model, presentation language, action language

ACM Categories: D.2.2, H.1.2, K.6.1

Introduction

The user is often placed in the position of an absolute master over an awesomely powerful slave, who speaks a strange and painfully awkward tongue, whose obedience is immediate and complete but woefully thoughtless, without regard to the potential destruction of its master's things, rigid to the point of being psychotic, lacking sense, memory, compassion, and -- worst of all -- obvious consistency (Miller and Thomas, 1977, p. 512).

The problems of human-computer interaction (HCI), such as cryptic error messages and inconsistent command syntax, are well-documented (Carroll, 1982; Lewis and Anderson, 1985; Nickerson, 1981) and trace back to the beginning of the computer revolution (Grudin, 1990). The impact of problematic HCI designs is magnified greatly by the advent of desktop computers, employed mainly by professionals for enhancing their work productivity. A faulty HCI design traps the user in unintended and mystifying circumstances. Consequently, the user may not adopt the system in his or her work because learning and using the system are too difficult and time-consuming; the business loses its investment in the system.

As concern about HCI problems grew, research was conducted by both practitioners and scholars to find solutions. Initially, researchers focused on enhancing programming environments in order to improve programmers' productivity. With the proliferation of desktop computers, it was discovered that non-technical users were not satisfied with the same type of environment that programmers used. Research has since expanded beyond technical considerations to investigating behavioral issues involving human motor skills, perception, and cognition for developing functional, usable, and learnable software. HCI is now an important scientific discipline built upon computer science, ergonomics, linguistics, psychology, and social science.

Today's system designers are expected to apply these interdisciplinary principles to improve user satisfaction and productivity. This is a formidable task because HCI development is not an aspect of software design that can be illuminated by a single design approach. More importantly, there is a lack of guidance in applying HCI research findings to design practice. Consider a typical interface design based upon many decisions: which functions and objects to include; how they are to be labeled and displayed; whether the interface should use command language, menus, or icons; and how online help can be provided. As will be discussed later, each of these decisions involves consideration of complicated, and sometimes conflicting, human factors. When all decisions are considered at once, interface design becomes overwhelming. Therefore, our first objective in writing this article is to separate HCI design into major divisions and identify the most relevant design goals and human factors. In each division, design subtasks are analyzed within the context of current HCI research. The intent of this classification is to assist designers in relating the research findings to the HCI design process.

Early research emphasized the development of design guidelines. But, after attempts to both write and use guidelines, it was recognized that when a design is highly dependent upon task context and user behavior, the usefulness of guidelines diminishes (Gould and Lewis, 1985; Moran, 1981). The answer to this problem for a particular design is to model the behavior of users doing specific tasks. The model provides a basis for analyzing why a design works or fails. This leads to the emphasis on understanding cognitive processes employed in HCI; Model Human Processor (Card, et al., 1983), SOAR (Laird, et al., 1987), and Task Action Grammars (Payne and Green, 1986) are examples of HCI theoretic models for studying user behavior (to be discussed later). These models provide a basis for explaining why some design guidelines work. Our second objective is to elaborate existing guidelines with their task constraints and theoretic bases so a designer can relate them to new, untested situations.

Our third and last objective is to identify opportunities for HCI research. An exhaustive review of guidelines and theories in user interface design reveals gaps in our knowledge regarding the impact of design choices on human behavior. By noting these opportunities, we hope to interest both practitioners and research scholars in furthering our knowledge of user interface design.

We begin with a framework for organizing HCI design and several theoretic approaches to investigating HCI issues. This is followed by design recommendations and research opportunities for each issue in the framework, and our conclusions.

Overview of User Interface Framework and Theories

Card, et al. (1983) propose the user's recognition-action cycle as the basic behavior for understanding the psychology of HCI. This cycle includes three stages: the user perceives the computer presentation and encodes it, searches long and short-term memory to determine a response, and then carries out the response by setting his or her motor processors in motion. A more elaborate seven-stage HCI model is proposed by Norman (1986) (see Figure 1). Norman's model expands the memory stage to include mental activities, such as interpretation and evaluation of system response, formulation of personal goals and intentions, and specification of action sequences. Four cognitive processors are employed in the elaborated recognition-action cycle: motor movements, perception, cognition, and memory (Olson and Olson, 1990). Except for long-term memory, these processors have limited capacity and constrain users' behavior and, thus, HCI design. Most obvious is the need to satisfy users' motor and perceptual needs: signals must be perceivable, and responses should be within the range of a user's motor skills. But more importantly, the interface must empower the memory and cognitive capacity of its users to learn and reason easily about the system's behavior. Otherwise, the user interface will hinder the user's ability to learn all aspects of the system; a bad interface means the user will not use the system to solve new, difficult problems.

[Figure 1. Physical and Mental Processes in Operating a Computer. Adapted from Norman, 1986, and reprinted from Olson and Olson, 1990, p. 229, by permission of Lawrence Erlbaum Associates. The figure traces the cycle of mental activity (perception, interpretation, evaluation, intention, specification, retrieval from long-term memory, execution of cognitive steps, choice among methods) and physical activity (motor execution, eye saccades).]
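
To make the processor constraints concrete, the sketch below estimates the time for one recognition-action cycle from counts of perceptual, cognitive, and motor steps. This is an illustration only; the cycle-time constants are commonly cited approximations in the Model Human Processor tradition, and the function name and example are our assumptions, not values taken from this article.

    # Illustrative sketch: rough task-time estimate in the spirit of the
    # Model Human Processor (Card, et al., 1983). Cycle times are commonly
    # cited approximations, not values taken from this article.
    PERCEPTUAL_CYCLE_MS = 100   # perceive and encode a display change
    COGNITIVE_CYCLE_MS = 70     # one recognize/decide step
    MOTOR_CYCLE_MS = 70         # one elementary motor action (e.g., a keypress)

    def estimate_response_time_ms(percepts: int, decisions: int, motor_acts: int) -> int:
        """Sum the three stages of the recognition-action cycle."""
        return (percepts * PERCEPTUAL_CYCLE_MS
                + decisions * COGNITIVE_CYCLE_MS
                + motor_acts * MOTOR_CYCLE_MS)

    # Example: notice a prompt, decide between two menu options, press one key.
    print(estimate_response_time_ms(percepts=1, decisions=2, motor_acts=1))  # 310

Even such coarse estimates make clear why an interface that forces extra decisions or extra keystrokes per action taxes the limited-capacity processors described above.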

Overview of the framework

While HCI objectives are clear, it is less obvious how the designer should go about developing interfaces that meet these objectives. Recent research suggests that a system model be employed as the basis of HCI design (Norman, 1986). The system model is a conceptual depiction of the set of objects, permissible operations over the objects, and relationships between objects and operations underlying the interface (Jagodzinski, 1983).

Norman (1986) points out that the selection of a good system model enables the development of clear and consistent interfaces. This is the premise of the interface design framework described in Figure 2. The conceptual aspect of the framework concerns design of the system model such that the underlying process the computer is performing is directly pertinent to the user in a manner compatible with the user's own understanding of that process (Fitter, 1979). The physical aspect of the framework involves the design of action and presentation languages, which consist of patterns of signs and symbols enabling the user to communicate to and from the system (Bennett, 1983).

[Figure 2. The HCI Design Framework. The framework separates conceptual design (the system model) from physical design: the action language (dialog style, syntax, protection mechanisms) and the presentation language (object representation, presentation format, spatial layout, attention and confirmation, user assistance), which engage the user's motor movements, perception, cognition, and memory.]

Designing action and presentation languages based on a coherent system model enables the user to easily develop a mental model of the system through repetitive use. The mental model is the user's own conceptualization of the system components, their interrelations, and the process that changes these components (Carroll and Olson, 1988). The mental model provides predictive and explanatory power for understanding the interaction, enabling the user to reason about how to accomplish goals (Halasz and Moran, 1983; Norman, 1986). Hence, the closer the system model is matched to user expectations, the more easily and quickly user learning takes place. Developing the system model, therefore, requires a study of what the user expectations are.

A system model provides direction for designing action and presentation languages that determine the system's look and feel. When there is close correspondence between the system model and these two languages, the user can manipulate all parts of the system with relative ease. This creates an interface of "naive realism" (diSessa, 1985): one that the user operates unaware of the computational technicalities embedded in the system software. But this naive realism cannot be easily achieved because technological restrictions limit the choice of dialog style and impose rigid syntax rules and recovery procedures. Hence, in specifying an action language, design tradeoffs must be made between satisfying the user's cognitive requirements and satisfying technological constraints. The presentation language complements the action language by displaying the results of system execution such that the user can easily evaluate and interpret the results. It also involves design tradeoffs in choosing proper object representations, data formats, spatial layout, confirmative mechanisms, and user assistance facilities.

Note that in Figure 2 the system model serves as the basis for developing action and presentation languages. The importance of this principle is illustrated by the user interfaces of two spreadsheet packages: IFPS (Execucom, 1979) and 1-2-3 (Lotus, 1989). IFPS's system model resembles linear algebra with a Fortran-like programming language; 1-2-3's resembles a paper spreadsheet and an electronic calculator. The system model choice results in clear differentiation in the action and presentation languages of these two packages. IFPS's action language requires the user to follow strict syntax rules to enter a spreadsheet model. Its presentation is that of an accounting report that can only be viewed in a top-down manner. Also, user actions and system presentations are clearly disjointed in IFPS; that is, the user first enters the algebraic formulae, waits for the system to process them, and receives the output when the system is finished.

In contrast, 1-2-3's action and presentation languages are intertwined. 1-2-3 allows the user to enter the spreadsheet by moving to any cell, row, or column in any order to enter data or specify formulae. Its presentation utilizes the same row-column format used for input; the user obtains an instant result for each action. The properties of 1-2-3's action and presentation languages are more generally accepted than those of IFPS, even though both provide similar capability. Hutchins, et al. (1986) attribute the success of spreadsheet packages like 1-2-3 to their use of a conceptual model that matches the user's understanding of spreadsheet tasks.

Cognitive modeling

As previously mentioned, developing the system model requires a study of user expectations. One approach is to create prototypes, which provide an environment for testing and refining the system model. This, however, is expensive and time-consuming. Alternatively, several cognitive models can be used to analyze and clearly describe user behavior. This type of theoretical analysis can help designers select the best design from several alternatives, resulting in less time needed for HCI design (Lewis, et al., 1990).

GOMS Model

A family of cognitive models based on the GOMS model is proposed by Card, et al. (1983) for predicting user performance. A GOMS model consists of four cognitive components: (1) goals and subgoals for the task; (2) operators, including both overt operators (like key presses) and internal operators (like memory retrieval); (3) methods composed of a series of operators for achieving the goals; and (4) selection rules for choosing among competing methods to achieve the same goal. The majority of GOMS research has centered on the study of experts performing well-learned, repetitive tasks. This has led to the discovery of parameters, such as times for keystroke entry and the scanning of system outputs, useful for predicting skilled-user performance (Card, et al., 1983). But other important aspects of user behavior cannot be easily modeled in GOMS, such as the production of and recovery from errors (Olson and Olson, 1990) and the use of sub-optimal goals or methods in performing routine editing tasks, even when more efficient goals or methods are known (Young, et al., 1989).
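
The sketch below shows one way the four GOMS components might be encoded and used to compare competing methods for the same goal. It is a minimal illustration, not the notation of Card, et al.; the goal, methods, operators, and millisecond values are invented assumptions.

    # Minimal sketch of a GOMS-style analysis: goals, operators, methods, and a
    # selection rule. All names and operator times (in ms) are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Method:
        name: str
        operators: list           # sequence of (operator_name, time_ms) pairs

    @dataclass
    class Goal:
        name: str
        methods: list = field(default_factory=list)

        def select(self, context: dict) -> Method:
            """Selection rule: here, simply prefer the fastest applicable method."""
            return min(self.methods, key=lambda m: sum(t for _, t in m.operators))

    # Goal: delete a word, with two competing methods.
    delete_word = Goal("delete-word", [
        Method("menu-method", [("point-to-menu", 1100), ("mouse-click", 200), ("click-item", 200)]),
        Method("key-method",  [("home-to-keyboard", 400), ("press-shortcut", 280)]),
    ])

    chosen = delete_word.select(context={})
    print(chosen.name, sum(t for _, t in chosen.operators), "ms")   # key-method 680 ms

A real GOMS analysis would, of course, calibrate the operator times empirically and model selection rules observed in users rather than assume the fastest method is always chosen.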

SOAR

SOAR (Laird, et al., 1987) is a general cognitive architecture of human intelligence. Although it has not been applied extensively in HCI research, SOAR has the potential for answering questions not addressed by GOMS. SOAR is an application of artificial intelligence that models users doing both routine and new tasks. In addition to a knowledge base and an engine that performs tasks it knows, SOAR has a learning mechanism. It provides an account of how a user evaluates system responses and formulates a new goal or intention. With SOAR, one can estimate how long it takes a user to recognize an impasse in his or her skill and set up a new goal and action sequence to overcome that impasse.

Formal Grammars

Formal grammars expressed in Backus-Naur form (BNF) can be used to describe the rules of an action language. From these, an analyst can predict the cognitive effort needed to learn the language by examining the volume and consistency of the rules (Reisner, 1981). Task Action Grammars (TAG) are similar languages, which make explicit the knowledge needed for a user to comprehend the semantics and syntax of a user interface (Payne, et al., 1986). In addition to identifying the consistency of grammar rules, TAG can be applied to study how well the task features of the language match user goals.
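
In the spirit of Reisner's grammar-based analysis, the sketch below counts the production rules needed to cover the same small command set under a consistent and an inconsistent syntax. Both grammars and the rule-count measure are invented for illustration and are not drawn from the article or from Reisner's notation.

    # Sketch: comparing two hypothetical command syntaxes by the number of
    # production rules needed to cover the same nine commands (Reisner, 1981,
    # argues that rule volume and consistency predict learning effort).
    consistent_grammar = {
        "<command>": ["<verb> <object>"],
        "<verb>":    ["delete", "copy", "move"],
        "<object>":  ["character", "word", "line"],
    }
    inconsistent_grammar = {      # each verb has its own word order and nouns
        "<command>":     ["delete <del-object>", "<copy-object> copy", "move <move-object>"],
        "<del-object>":  ["character", "word", "line"],
        "<copy-object>": ["char", "wd", "ln"],
        "<move-object>": ["letter", "word", "line"],
    }

    def rule_count(grammar: dict) -> int:
        """Crude proxy for learning effort: total number of rule alternatives."""
        return sum(len(alternatives) for alternatives in grammar.values())

    print(rule_count(consistent_grammar))    # 7  -> fewer, more regular rules
    print(rule_count(inconsistent_grammar))  # 12 -> more rules for the same commands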

Discussion

GOMS, SOAR, and formal grammars collectively provide guidance in the design of system models and action and presentation languages. For example, GOMS suggests that system model design should be guided by analysis of user goals in order to identify methods for achieving these goals; SOAR demonstrates the importance of modeling user knowledge of the system model for solving new, difficult problems; TAG indicates how an action language's organization affects user learning.

It should be noted that each of these theories can explain some, but not all, aspects of human behavior in HCI. For example, the GOMS model can explain the task of selecting an option from a list of choices, but it fails to predict errors a person makes when using a line editor; TAG provides a reason why errors might occur but cannot predict moment-by-moment performance. In addition, psychological attributes, such as preference and attitude, and cognitive functions, such as mental imagery and cognitive style, are not considered in these theories (Olson and Olson, 1990). The specificity of each of these theories results in areas of uncertainty in HCI design, restricting our ability to apply them to practice. A great need for integrating theory and practice remains in HCI research.

System Model Design

Central to the entire HCI design question is the design of the system model, a conceptual description of how the system works. This requires an analysis of user tasks so the system model can be organized to match the user's understanding of these tasks (Carroll and Thomas, 1982; Halasz and Moran, 1982; Moran, 1981). It also requires an analysis of metaphors and abstract models that can adequately portray system functionality (Carroll, et al., 1988). The result of the latter analysis may also help in selecting representations for system objects/functions and in user training.

Analysis of task

The work by Card, et al. (1983) and Norman (1986) indicates that during computer interaction, the user's mental activities center around goal determination and action planning. To ensure that the system model supports these activities, task analysis should emphasize identifying user goals and the methods and objects employed to achieve these goals (Grudin, 1989; Phillips, et al., 1988).

Work Activities and Scenarios

Goals, methods, and objects can be discovered by analyzing users acting out work-related scenarios (Young, et al., 1989). A scenario is a record of a user interacting with some device in response to an event, which is carefully constructed so that the user performs a definite action (like reordering paragraphs of a document or computing the return on a financial investment). A carefully constructed set of events assures that a comprehensive range of situations is studied and the results are applicable to brief, real-life work situations (Young and Barnard, 1987). Scenario analysis produces records of user actions from which specific user goals, methods, and objects needed to achieve these goals are identified. In addition, records of several users completing the same scenario enable the designer to compare different approaches to the same work situation and generate a set of methods and objects for a wide range of users.

Routine Tasks and Complex Work

Task analysis proceeds by studying the cognitive processes involved in handling the events. Researchers have observed that users' mental processes occur at two levels (Bobrow, 1975). Low-level processing involves well-learned, rehearsed procedures for handling routine operations such as data entry or word deletion. High-level processing, which relies upon knowledge of the system model, is used to generate plans of action to handle non-routine tasks.

To support low-level processing, objects need to be organized into logical chunks, and operations need to match the actions users normally make with these objects in the real world (Phillips, et al., 1988). In so doing, learning to associate operations with objects is easy; with practice, operations can be applied almost automatically, and even in parallel, because examination of data content and the meaning of each user action is unnecessary (Shiffrin and Schneider, 1977). For example, the spreadsheet system model supports low-level processing by organizing spreadsheets into cells, rows, and columns; operations like "delete" can be applied to any of these data levels with simple cursor movement and the same menu action choices.

High-level processing is top-down and is guided by user goals and motives; planning is slow, serial, and conscious (Newell and Simon, 1972; Rasmussen, 1980). A plan of action is a goal structure that describes how the user decomposes the problem into a sequence of methods which, when executed, properly handles the work situation. When facing a complex task, a user may divide the entire task into many subtasks and perform these subtasks separately at different times (diSessa, 1986). Thus, to support higher-level processing, one must ensure that nearly all user goals can be easily achieved through combinations of operations described in the system model in either a sequential or distributed manner. This flexibility can be seen in Xerox's Star Workstation, where operations for one goal (like creating a document) can be easily suspended to perform operations for another goal (like creating a spreadsheet) (Bewley, et al., 1983). Star also allows the user to cut a portion of one object (like a spreadsheet) and paste it into another object (a document) to achieve a higher-level goal of creating a report.

Task analysis results can be documented using GOMS, BNF, TAG, or SOAR. To complete the interface design, details of the methods and the operations to be performed on the objects need to be specified later during physical design.

Analysis of metaphors and abstract models

In designing the system model, it is beneficial to search for metaphors analogical to the system model. Presenting metaphors to users helps them relate the concepts in the system model to those already known by a wide set of users. This enables the user to make inferences regarding what system actions are possible and how the system model will respond to a given action.

Metaphors and Composite Metaphors

Metaphors can be drawn from tools and systems that are used in the task domain and the common-sense real world (Carroll, et al., 1988). For example, many use a typewriter as a metaphor for a word processor. Unfortunately, the analogy between a word processor and a typewriter breaks down for depicting block insertion and deletion in word processing. For these actions, the word processor works more like a magnetic tape splicer. Hence, complex systems can be more completely described by a composite of several metaphors, each examined closely for its correspondence to the system's actual goal-action sequence. Since users generally develop disjointed, fragmented models to explain different kinds of system behavior (Waren, 1987), it is easy for them to accommodate composite metaphors in learning the system (Carroll and Thomas, 1982).

Even with composite metaphors, mismatches may still occur. Typical computer systems are more powerful than manual tools and may contain features not embodied in the metaphors, and vice versa. These mismatches may lead the user to form misconceptions about how the system works (Halasz and Moran, 1982). For example, in word processing, document changes need to be saved or the entire work session is lost; there is no such concept applicable to typewriters. Explicitly pointing out the mismatches to the user should prevent such misconceptions (Carroll, et al., 1988).

Abstract Models

Abstract models explicitly represent a system model as a simple, abstract mechanism, which the user can mentally "run" to generate expected system responses (Young, 1981). For example, a hierarchical chart depicting the organization of messages, folders, and files serves as the abstract model of storage for an electronic mail system, while a file cabinet serves as the metaphor (Sein and Bostrom, 1989). Like a metaphor, the abstract model is not intended to fully document every detail of a system model; rather, both provide a semantic interpretation and a framework to which the user can attach each new system concept (Carroll, et al., 1988; Mayer, 1981). But unlike a metaphor, there is a one-to-one mapping from the attributes of an abstract model to those of the system model, although not vice versa. Abstract models are particularly useful for depicting system models that have no real-world counterparts; for instance, a pictorial depiction of interactions among memory, instructions, input, and output can provide a useful high-level description of a BASIC program's execution.

Applying Metaphors and Abstract Models

Metaphors and abstract models are powerful means for conveying the system model to novices. Mayer (1981) reports that novices who lack requisite knowledge are aided by learning abstract models, which enable them to understand system concepts during interactions with the system. Sein and Bostrom (1989) find that abstract models work best for novices who are able to create and manipulate mental images. For other novices, the metaphor is better. Hence, the choice between metaphor and abstract model is dependent upon the user's task knowledge and the ability to conceptually visualize the system model.

In conceptual design, candidate metaphors and abstract models can be identified to provide the designer with building blocks for constructing a consistent, logical system model based upon the user's task model (Waren, 1987). But basing the system model entirely on metaphors may be too limiting for harnessing the full power of the computer. The designer's objective should be to properly balance the users' descriptive model of the task, the normative model of how the task ought to be done, and the new opportunities provided by computer technology.

Iterative system model development methodologies and tools

Task and metaphor analysis must be user-centered and iterative. Initial attempts produce a crude system model; iterative design and testing rework this crude model into a successful system model. For example, questionnaires help determine the basic attributes of the user group like age, computer training, and education. Interviews can be used to identify the basic system capabilities (Olson and Rueter, 1987). Other useful approaches include psychological scaling methodologies and simulation and protocol analysis.

Psychological Scaling Methodologies

To identify the grouping of objects/methods, the designer can solicit user similarity judgments on all pairs of objects/operations based upon user judgment of frequency of occurrence, temporal distance, or spatial distance (McDonald and Schvaneveldt, 1988). From this similarity measurement, clusters of objects/methods can be identified by applying psychological scaling methodologies, such as hierarchical clustering, multidimensional scaling, and network structuring techniques (e.g., Pathfinder) (McDonald, et al., 1988; Olson and Rueter, 1987). These methodologies can be applied to organize system documentation or menu hierarchy. For example, Kellogg and Breen (1987) developed users' views of how various elements of documents (footnotes, captions, etc.) are interrelated; McDonald and Schvaneveldt (1988) organized UNIX documentation according to perceived functionality.
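
As a minimal sketch of one of the scaling techniques named above, the code below applies hierarchical clustering to averaged pairwise dissimilarity judgments to suggest menu groupings. The command names and ratings are invented for illustration, and it assumes the SciPy library is available.

    # Sketch: grouping candidate commands from averaged user dissimilarity
    # ratings via hierarchical clustering. Names and ratings are invented.
    from scipy.cluster.hierarchy import linkage, fcluster

    commands = ["open", "save", "print", "cut", "copy", "paste"]

    # Condensed matrix of pairwise dissimilarities (0 = very similar,
    # 1 = unrelated), ordered (open,save), (open,print), ..., (copy,paste).
    dissimilarity = [0.2, 0.3, 0.9, 0.9, 0.9,
                          0.3, 0.9, 0.9, 0.9,
                               0.9, 0.9, 0.9,
                                    0.2, 0.3,
                                         0.2]

    tree = linkage(dissimilarity, method="average")
    groups = fcluster(tree, t=2, criterion="maxclust")   # ask for two chunks
    for command, group in zip(commands, groups):
        print(group, command)   # file commands cluster apart from edit commands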

Simulation and Protocol Analysis

Requiring users to describe their work requirements in their own language can identify useful metaphors and abstract models (Mayer, 1981). Pencil-and-paper simulations of a proposed interface enable the user to act out typical work scenarios (Gould and Lewis, 1985). This technique, coupled with think-aloud protocol analysis, makes it possible to determine how work is actually done. It is useful for deriving an initial estimate of the users' set of basic functions and data objects.

Another approach is called the Wizard of Oz (Carroll and Aaronson, 1988). This approach employs two linked machines, one for the user and the other for the designer. Both the user's display and the designer's display show a simulated view of the system. To attempt a task, the user enters a command, which is routed to the designer's screen. The designer simulates the computer by evaluating the user input and sending a response to the user's display. This approach has the advantage of putting the user in a work-like situation well before the final system is fully programmed. Finally, user interface management systems like GUIDE, Domain/Dialog, and Prototyper (Hartson and Hix, 1989) or hypermedia tools like Hypercard (Halasz, 1988) can be used for rapid prototyping to evaluate user needs. They are, however, more expensive than the Wizard of Oz in terms of manpower and time needed for creating the prototype.
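
A minimal sketch of the two-machine Wizard of Oz arrangement described above, using a plain TCP socket to relay the user's typed commands to a designer who types the simulated responses. The port number, prompts, and function names are our assumptions, not part of the original studies.

    # Sketch of a Wizard of Oz relay: the user's command travels to the
    # designer's machine, and the designer's hand-typed reply is shown as the
    # "system" response. Run each function on its own machine.
    import socket

    def run_wizard_side(port: int = 5005) -> None:
        """Designer's machine: receive commands, send hand-crafted replies."""
        with socket.create_server(("", port)) as server:
            conn, _ = server.accept()
            with conn:
                while True:
                    command = conn.recv(1024).decode()
                    if not command:
                        break
                    print("User entered:", command)
                    reply = input("Simulated system response> ")
                    conn.sendall(reply.encode())

    def run_user_side(host: str, port: int = 5005) -> None:
        """User's machine: send commands, display the simulated response."""
        with socket.create_connection((host, port)) as conn:
            while True:
                command = input("command> ")
                conn.sendall(command.encode())
                print(conn.recv(1024).decode())

The value of the setup is methodological rather than technical: the user behaves as if a working system existed, while every command and response is available for later protocol analysis.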

Discussion

Much research is still needed if we are to thoroughly understand system model design. Our knowledge of cognitive processes in HCI is still limited, although recent emphases in this area indicate an increasing awareness of its significance among researchers and practitioners (Olson and Olson, 1990). One important strategy is to apply theories like GOMS, TAG, and SOAR to study a broad range of computer tasks for understanding the mental activities involved in solving routine and novel problems. An attempt at this research has been underway; an AI program incorporating means-ends analysis and multiple problem spaces has been used to analyze user task knowledge (Young and Whittington, 1990). This analysis can alert the designer to potential problems of a proposed interface.

Another important strategy is to improve psychological methods for studying users' prior knowledge and cognitive processes. The methods may be applied to investigate how a user forms a mental model of a system and to evaluate the discrepancies between the user's mental model and the system model. This provides feedback regarding the quality of system model design to designers, who can then improve their design strategies.

In addition, guidance is needed for applying metaphors to system model design. Whether or not system models are based upon metaphors, users are likely to generate metaphoric comparisons on their own (Mack, et al., 1983). What happens if this comparison creates user confusion because of the discrepancy between the designer's metaphor choice and the user's own comparative idea? Strategies are needed for portraying metaphors so that the metaphoric comparison is obvious but not distracting. There is also a need for methodologies for evaluating alternative metaphors. Carroll, et al. (1988) hypothesize that the user transforms metaphors into a precise understanding of the system model via a three-stage process: (1) establishing a metaphoric comparison; (2) elaborating aspects of the metaphoric comparison that map meaningfully to the system model; and (3) consolidating to produce a system model from what was learned from each comparison. However, it is unclear how this theory can be applied to analyze metaphor learnability.


Finally, user confusion may arise when system concepts have no analogical descriptions, such as the difference between a line wraparound and a hard carriage control. How can abstract models be useful in these situations? Research is needed to provide principles to guide the development of abstract models and strategies for using these models effectively in user training.

Action Language Design

The next component of the HCI framework to be addressed is the action language design. It involves the creation of a means for the user to easily translate his or her intentions into actions accepted by the system. Because natural language is not yet a viable option, designers must rely upon dialog styles unnatural to novices, relying primarily on keyboards and pointing devices. Designers must also choose a syntax and vocabulary for action specifications, and mechanisms for protecting the user from unintentionally destroying completed work.

Dialog style

Many conversation-based dialog styles have been employed in HCI. In Table 1, these styles are classified according to who initiates the dialogs and the choices available for action specifications (Miller and Thomas, 1977). Recently, direct manipulation styles using pointing and graphics devices have become popular; they differ from conversational styles in many aspects (see Table 2) (Hutchins, et al., 1986; Shneiderman, 1987).

The system model, when designed in accord with user perception of how tasks are conducted, may suggest the dialog style. For example, the "form" style is the natural choice for a system involving database inquiries because forms are widely used for storing data manually and, as a consequence, become the metaphor for that system.

But choosing a dialog style often requires considering human factors other than the system model. The tasks may be complex, suggesting that no single style is sufficient. For example, accounting application interfaces are often a mix of forms, menus, and command languages, each tailored to specific task requirements. User difference also plays an important role. Performance on relatively low-skill, computer-based tasks can vary as much as 9:1 (Egan, 1988). This variance in user performance can be partially attributable to individual differences such as skill level, technical aptitude, age, and cognitive style.

The level of user experience and technical skill is a dominant factor in selecting an appropriate dialog style (Mozeico, 1982). For novices, computer-guided, constrained-choice interfaces are better because the time spent on mental activities, shown in Figure 1, is reduced. Conversely, with experience comes a clear understanding of how tasks can be achieved, decreasing the need for a computer-guided interface and creating a preference for a user-initiated language.

Direct manipulation styles, like Star's iconic desktop interface, are easy to learn because they closely reflect the system model, which in turn closely matches the user's task knowledge. They are easy to use for both novices and experts because of simple push-button actions and a continuous display of the "system states" that guide user actions (Shneiderman, 1987). Still, direct manipulation styles may be slower than conversational styles for experts to use (Hutchins, et al., 1986).

Novices can become expert through experience. This transition is easier if the user possesses technical aptitude, which involves high spatial memory and visualization and/or deductive reasoning ability. These abilities help the user remember, visualize, and locate objects and generate syntactically correct instructions (Egan, 1988).

Cognitive style and age also affect the dialog style decision. A study by Fowler, et al. (1985) shows that field-independent users, autonomous and self-reliant, prefer a user-initiated command structure, while field-dependent users tend to prefer constrained interfaces. Age is a significant factor in predicting user performance, particularly for interfaces requiring the user to possess technical aptitude (Egan, 1988). The loss in performance due to aging can be countered with a simplified interface that reduces the necessity of visualizing important displays.

Multi-style interfaces can be employed to satisfy users varying in skill level, cognitive style, and age. For example, styles ranging from question-answer to menu and command language can all be included within the interface; the user can then choose any style to achieve better performance and satisfaction (Mozeico, 1982). Recently, an implementation integrating natural language with direct manipulation (Cohen, et al., 1989) and another combining command language and direct manipulation (Gerlach and Kuo, 1991) show the practicality of this approach.

Table 1. Taxonomy of Dialog Styles Based on Initiation and Choice

- User-guided, free-response: database language; command language; data mnemonics; text (word) processing.
- System-guided, free-response: question/free answer; form filling; expert system questions; input-in-the-context-of-output.
- System-guided, forced-choice: question/forced answer; command menu selection; data menu selection; embedded menu; accelerated menu.

Table 2. Comparison of Conversational and Direct Manipulation Styles

Conversational style vs. direct manipulation style:
- Sequential dialog, which requires the user to enter parts of an instruction in a predetermined order, vs. asynchronous dialog, which enables the user to enter parts of an instruction in virtually any order.
- A language of strict syntax to describe the user intention vs. direct manipulation of objects.
- Complete specification of user intention is required vs. incremental specification of user intention is allowed.
- Discrete display of the states of system executions (including errors if the command fails to execute) vs. continuous update of objects to reflect system execution results, with few error messages needed.
- Single-threaded dialogs, which force the user to perform tasks serially, vs. multi-threaded dialogs, which permit the user to switch back and forth between tasks.
- Command first, object next is typical vs. object first, command next is typical.
- Modes are often used to increase keystroke efficiency vs. modeless user operations, which are less confusing to the user.

User interface syntax

In interacting with a computer, the user is required to translate his or her goals and intentions into actions understood by the system. Hence, in syntax design, designers must select words that not only represent system objects and functions but also match user expectations. Likewise, the action sequence of entering these words needs to be specified so it can be easily recognized and remembered by users.

Vocabulary

One way to select vocabulary is for designers to select keywords based upon the system model. This approach to vocabulary design, although intuitively appealing, is shown to be impractical because designers' word choices vary significantly among themselves and may differ from users' choices (Carroll, 1985). Barnard (1988) suggests user testing for obtaining specific words.


Novices prefer general, frequently used words that are not representative of system concepts (Black and Sebrechts, 1981; Bloom, 1987). Different novices often assign different words to the same concept (Good, et al., 1984; Landauer, et al., 1983). As a result, words used by some novices may not help others learn the action language.

A better alternative is to have expert users select terms that are highly representative of system concepts; these terms can then be evaluated by novices for learnability (Bloom, 1987). To accommodate both novices' and experts' preferences, synonyms should be included as a part of the action language (Good, et al., 1984). The alternative word choices, even if synonyms are not implemented, can be presented to novice users for learning the concept of the chosen word (Bloom, 1987).
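
As a small illustration of admitting synonyms into an action language, the sketch below maps alternative verbs onto canonical command names before dispatch. The vocabulary is invented for illustration and is not drawn from the cited studies.

    # Sketch: accepting synonyms so that both novice and expert vocabularies
    # reach the same operation. Word choices are invented.
    CANONICAL = {
        "remove": "delete", "erase": "delete", "kill": "delete",
        "duplicate": "copy", "clone": "copy",
    }

    def canonical_verb(word: str) -> str:
        """Map a typed verb onto its canonical command name."""
        verb = word.lower()
        return CANONICAL.get(verb, verb)

    print(canonical_verb("erase"))   # -> delete
    print(canonical_verb("copy"))    # -> copy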

Action Consistency

Consistent keystrokes within and across different systems lend themselves to easy memorization, resulting in faster, easier learning. This helps users in transferring knowledge of a well-learned system to a new system (Polson, 1988; Polson, et al., 1986). It also reduces user errors and the time and assistance needed to enter commands (Barnard, et al., 1981).

Action inconsistency typically occurs in systems employing modes. For example, line editors typically have two modes: one for input and the other for editing. Modes are confusing to novices because identical keystroke sequences generate different results in different modes (Norman, 1983). However, they are efficient for applications in which the number of commands exceeds the number of keys available. With practice, modes allow experts to use fewer keystrokes for command entry; elimination of modes may penalize the experienced user. Norman recommends that modes be employed judiciously. We suggest that techniques for focusing user attention (discussed later) should be used to make modes obvious to the user to reduce confusion.

An action language's consistency is affected by its orthogonality. In an orthogonal language, each basic keystroke component is assigned a unique meaning representing a single action parameter, which can be an operation, an object, or any other qualifier (Bowden, et al., 1989). A single set of rules determines how these unique keystroke components can be combined to form commands. For example, in a word processing system, commands must obey the rule: first, operation (e.g., DELETE); next, object (e.g., LETTER); and last, direction qualifier (e.g., RIGHT). In an orthogonal language, the number of keystrokes per command increases in proportion to the size of the command set; more time is therefore needed to enter commands. But less effort is needed to memorize and recall each keystroke's meaning. This reduction in mental effort may make the memorability-efficiency tradeoff beneficial if ease of learning is critical to the user.
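
The sketch below illustrates the operation-object-qualifier rule described above: one composition rule and three small vocabularies generate the entire command set. The key assignments and vocabularies are invented assumptions, not taken from Bowden, et al.

    # Sketch of an orthogonal command syntax: every command is three
    # keystrokes interpreted by the same rule (operation, object, qualifier).
    OPERATIONS = {"d": "DELETE", "c": "COPY", "m": "MOVE"}
    OBJECTS    = {"l": "LETTER", "w": "WORD", "s": "SENTENCE"}
    QUALIFIERS = {"r": "RIGHT", "l": "LEFT"}

    def parse_command(keys: str) -> tuple:
        """Interpret a three-keystroke command of the form operation-object-qualifier."""
        op_key, obj_key, dir_key = keys      # raises an error if malformed
        return OPERATIONS[op_key], OBJECTS[obj_key], QUALIFIERS[dir_key]

    print(parse_command("dlr"))   # ('DELETE', 'LETTER', 'RIGHT')
    print(parse_command("cwl"))   # ('COPY', 'WORD', 'LEFT')

Because the composition rule never changes, adding a new operation or object enlarges the command set without adding anything new to memorize beyond the single keystroke's meaning.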

Action Efficiency

Many system implementations concentrate on minimizing keystrokes to reduce motor activities through the use of function keys, command abbreviations, and recognition of an option's first letter. But as noted earlier, keystroke efficiency is also a function of memorizing and recalling the keystrokes. For example, when a function key is given multiple meanings whose interpretation depends upon the context in which it is applied, a user can be easily confused because of the increased mental load in recall (Morland, 1983). Offering both whole and abbreviated commands is one way to increase motor efficiency while reducing the mental load. With these options, the user can initially enter the whole command and then quickly make use of abbreviated commands (Landauer, et al., 1983). The importance of reducing the mental load is further illustrated by Lerch, et al.'s (1989) study of spreadsheet users performing financial planning tasks. They found that users perform better using relative referencing of spreadsheet variables (e.g., PREVIOUS REVENUES) than when using absolute row and column coordinates. Absolute row and column coordinates require less keystroke time to enter but additional mental overhead. Overall, relative referencing schemes reduce user errors and allow the user to devote mental capacity to planning the task solution.

Another way of increasing efficiency is for a system to offer multiple methods for doing the same type of task; the efficiency of each method varies in accordance with the task situation. But the user may fail to choose the method that requires the least number of keystrokes for a given task because of the additional mental cost expended in choosing between two methods (Olson and Nilsen, 1987). Further investigation may focus on trade-off decisions between using a well-rehearsed single general method and learning and employing several context-specific methods.

Protection mechanismsThe majority of beginners act recklessly; theymake little effort to read user manuals to acquiresystem knowledge. A survey shows that trial-and-error learning is most widely used (Hiltz and Kerr,1986). A major concern, therefore, is to ensurethat the action language protects the user frombeing penalized for trying the system.

One common technique for this is to provide the user with an "undo" function that reverses a series of actions. Another is to prompt the user to reconsider planned actions that can lead to damaging, irreversible results, such as deleting a file.

A third, more interesting approach is "training wheels," which encourage novices to explore system features during the initial learning stage while protecting them from disaster (Carroll and Carrithers, 1984). They block invocation of non-elementary system features and respond with a message stating that the feature is unavailable. The "training wheels" approach effectively supports exploratory learning by reducing the amount of time users spend recovering from their errors. But they do not help the learner acquire system concepts needed for performing tasks not attempted previously (Catrambone and Carroll, 1987). Research is needed to study what users learn or do not learn from their mistakes. Another interesting question is the effect of combining the abstract model and the "training wheels" approach for providing the user with an interface for learning the system model. We hypothesize this combination will result in deeper user understanding of system concepts.
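
A minimal sketch combining the three protection mechanisms discussed in this subsection: an undo stack, confirmation of irreversible actions, and a training-wheels block on advanced features. The class, command, and feature names are invented for illustration.

    # Sketch: undo, confirmation prompts, and "training wheels" blocking,
    # wrapped around a command interpreter. All names are illustrative.
    class ProtectedInterpreter:
        def __init__(self, blocked_features=("macro", "global-replace")):
            self.blocked = set(blocked_features)   # training wheels: advanced features
            self.undo_stack = []                   # inverse actions, most recent last

        def execute(self, command, do_action, undo_action, irreversible=False):
            if command in self.blocked:
                return f"'{command}' is not available in the training-wheels interface."
            if irreversible and input(f"Really {command}? (y/n) ") != "y":
                return "Cancelled."
            do_action()
            if not irreversible:
                self.undo_stack.append(undo_action)
            return "Done."

        def undo(self):
            if self.undo_stack:
                self.undo_stack.pop()()            # run the most recent inverse action
                return "Undone."
            return "Nothing to undo."

    # Example: a reversible edit followed by undo.
    doc = ["alpha", "beta"]
    ui = ProtectedInterpreter()
    ui.execute("delete-line", do_action=lambda: doc.pop(),
               undo_action=lambda: doc.append("beta"))
    ui.undo()
    print(doc)                                     # ['alpha', 'beta']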

Discussion

An important issue of action-language design concerns trade-offs between efficiency and consistency. Keystroke consistency may increase learnability for novices but decrease efficiency for experts. This issue requires further research in understanding the user's cognitive processes for memorization and recall when interacting with a computer.

Another research issue concerns how to design an interface or suite of interfaces to satisfy all users. For example, multi-style interfaces can be created so all styles are equally functional. The user can then express the same intention in his or her preferred style. To do so, research must address questions related to how interfaces can assist users in transferring knowledge from one dialog style to another. How can one build multi-style interfaces so that mastery of one style is instrumental and perhaps sufficient to facilitate progress to another? Can users move from a style that is system-initiated to one that is user-initiated? Future research should focus on understanding cognitive processes for knowledge transfer, building on the work by Kieras, et al. (e.g., Kieras and Bovair, 1984; Kieras and Polson, 1985).

Finally, there is a need for developing principles to guide the use of speech and gesture devices. Preliminary studies have shown that users prefer these devices (Hauptmann, 1989; Weimer and Ganapathy, 1989). Effective incorporation of such devices in the action language requires further studies to assess their impact on the motor, sensory, perceptual, and cognitive processes of the user.

Presentation Language Design

The last section of the HCI framework concerns presentation language design. An important design objective is for interface displays to guide user actions (Bennett, 1983). This objective requires selecting representations that fit the user's task knowledge; the format of data produced by the system must satisfy task needs and preferences. A display's layout is to be organized so that the collective presentation of various outputs eases user perception and interpretation. Presentations also convey feedback to attract the user's attention and confirm user actions. Finally, online assistance must be designed to help users learn system operations and correct their errors.


Object representation

If the presentation is to adequately reflect the metaphors on which the system model is based, the designer must choose a display appearance that assists users in establishing the analogy between that display and the metaphors. A familiar appearance enables the user to recognize and interpret the representation easily. Examples of this principle are found in the spreadsheet-like interfaces of 1-2-3 and the electronic desktop of Star.

Icons can represent much information and be easily differentiated (Blattner, et al., 1989). An icon can be a concrete picture replicate of a familiar object, such as the trash can icon in Star. System concepts having no pictorial replicates can be depicted by abstract icons composed of geometric shapes and figures. Concrete and abstract icons may also be combined to create hybrid icons (e.g., an icon for deleting a character). Unlike concrete icons, abstract and hybrid icons must be taught to the user. Once learned, however, they are effective in conveying important system concepts.

Presentation formats: table vs. graph

Presenting results in graph or table formats to satisfy both user decision style and task requirements is of great interest to designers of decision support systems. When the task requires a large volume of data, graphs are more effective than tables for allowing the user to summarize the data (Jarvenpaa and Dickson, 1988). Graphs are also good for tasks (such as interpolation, trend analysis, and forecasting) that require identification of patterns from large volumes of data. Conversely, if the task requires pinpointing data with precision, tables are better. Tables also outperform graphs for simple production scheduling decisions. But for complex decisions, graphs are superior (Remus, 1984; 1987). Finally, combining graph and table formats can result in better decisions, albeit with slower performance, compared to using either display alone (Powers, et al., 1984).

Our understanding of the cognitive processes involved in handling tables and graphs is still limited. Johnson and Payne (1985) and Johnson, et al. (1988) demonstrate that if information is presented in a format difficult for the user to comprehend, the user may employ an easier but less effective decision strategy than one that requires more sophisticated reasoning but leads to a better result. Lohse (1991) shows that graphs and tables differ in their cognitive effort. Lohse's research is interesting because it is based on a cognitive model that includes perceptual stores, short-term memory, algorithms for discrimination and encoding, and timing parameters. The model can predict the time needed for a user to understand a graph. It can be an advisory tool for choosing formats to match task needs and has the potential to answer questions regarding how and when graphs and tables can be applied to facilitate problem solving.

Spatial layout

User productivity is enhanced when all needed information is readily available. To display as much information as possible in a limited area, the designer should consider information chunking, placement consistency, and the use of windows and 3-D displays.

Chunking

The display, partitioned into well-organized chunks that match the user's expectations and natural perception abilities, provides a basis for the user to select and evaluate actions (Mehlenbacher, et al., 1989). Chunks can be identified using the psychological techniques discussed in the system model section. The layout can be organized following Gestalt principles: the principles of proximity and closure suggest enclosing each chunk of objects in a separate area; the principle of similarity suggests using the same font or color for objects of the same chunk. Also, spatial consistency of chunks is important because memorization of location is effortless (Mandler, et al., 1977); labels can be used with chunking to improve recognition and recall (Burns, et al., 1986; Jones and Dumais, 1986).
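As a loose illustration of chunking, labeling, and spatial consistency, the sketch below groups display fields into labeled chunks that always appear in the same order; the field names and grouping are invented for illustration and are not drawn from the cited studies.

    # Hypothetical sketch: grouping display fields into labeled chunks so related
    # items appear together (proximity), share formatting (similarity), and keep
    # a fixed position from one display to the next (spatial consistency).
    DISPLAY_CHUNKS = [
        ("Customer", ["name", "account_no", "region"]),
        ("Order",    ["order_no", "date", "amount"]),
        ("Status",   ["shipped", "invoiced"]),
    ]

    def render(record: dict) -> str:
        lines = []
        for label, fields in DISPLAY_CHUNKS:          # fixed chunk order every time
            lines.append(f"[{label}]")                 # label aids recognition and recall
            for f in fields:
                lines.append(f"  {f:<12}{record.get(f, '')}")
            lines.append("")                           # blank line separates chunks
        return "\n".join(lines)

    print(render({"name": "A. Smith", "order_no": "1042", "shipped": "yes"}))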

Placement Consistency

One way proposed to reduce the time spent searching menu items is arranging menus according to frequency of use (Witten, et al., 1984). But this approach may have only a short-term advantage over a menu with a fixed configuration; it may even cause slower performance because the mental effort for searching the menu increases with change and the user becomes disoriented (Somberg, 1987; Trevellyan and Browne, 1987). In the long term, a fixed configuration facilitates searching better than, or as well as, a dynamic menu. The fixed configuration lends itself to memorization, and, therefore, menu selection is effortless once it is learned by the user.
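The two ordering policies contrasted above can be sketched as follows; the menu items and usage counts are invented, and the functions are hypothetical illustrations rather than designs from the cited studies.

    # Hypothetical sketch: a fixed (alphabetical) menu versus a menu re-sorted by
    # frequency of use.  The fixed layout stays stable and is easy to memorize;
    # the dynamic layout changes as usage counts change.
    from collections import Counter

    ITEMS = ["Open", "Save", "Print", "Close", "Export"]
    usage = Counter()

    def fixed_menu() -> list:
        return sorted(ITEMS)                                  # same order every time

    def frequency_menu() -> list:
        return sorted(ITEMS, key=lambda i: (-usage[i], i))    # most-used items first

    for choice in ["Print", "Print", "Save"]:                 # simulate some selections
        usage[choice] += 1

    print(fixed_menu())        # ['Close', 'Export', 'Open', 'Print', 'Save']
    print(frequency_menu())    # ['Print', 'Save', 'Close', 'Export', 'Open']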

Windows and 3-D Displays

A window is a clearly defined portion of the screen that provides a working area for a particular task. Windowing has several benefits. Using multiple windows enables the user to simultaneously perform multiple tasks that may be unrelated. The content of the unfinished task in a window is preserved so the user can easily continue that task later. Windows also serve as visible memory caches for integrating information from multiple sources or monitoring changes in separate windows. These benefits collectively enable windowing to support separate but concurrent task execution.

A drawback of windowing is that operating multiple windows demands higher cognitive processes, i.e., memory, perception, and motor skills. Overuse of windows can cause information overload and loss of user control such that the user may employ an inefficient search strategy in scanning multiple windows (Hendrickson, 1989). Window manipulation is also shown to be difficult for the user, probably because of the complexity of arranging windows (Carroll and Mazur, 1986). Users perform tasks more slowly, although more accurately, with windows (Hendrickson, 1989). Thus, operations for managing windows should be simplified. The window design should employ consistent placement and avoid overcrowded windows to ease user perception and memory load.

Also, 3-D displays can be used to accommodate and condense a large volume of data (Card, et al., 1991). A 3-D display is divided into many 3-D rooms, each used for a distinct application. The user can manipulate objects in the 3-D space to differentiate images, investigate for hidden information, and zoom in for details.

Attention and confirmation

Video and audio effects are useful in drawing a user's attention to important system responses and confirming user actions. Both are important for helping the user judge the status of his or her actions.

People typically have an orienting reflex to things that change in their visual periphery. Hence, video effects such as color, blinking, flashing, and brightness contrast can stimulate user curiosity for critical information (Benbasat, et al., 1986; Morland, 1983). Audio effects can be used to complement video effects or reveal information difficult to represent with video (Gaver, 1986; 1989). In addition, audio feedback can reduce space needs and synchronize user input with system response (Nakatani, et al., 1986).

Often there is delay between user actions and system presentations. In this situation, confirmatory feedback, such as immediate cursor response and changing shapes and shades of icons, is useful (Bewley, et al., 1983; Gould, et al., 1985). Similarly useful are progress indicators that display the percentage of work completed. Graphic-based progress indicators, like a percent-done thermometer or a clock, are considered fun to use (Myers, 1985). Progress indicators also aid in conducting multiple tasks. For example, a user informed that a long time is required for printing a document may decide to spend that time editing another file or retrieving a cup of coffee.
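A minimal, console-based sketch of a percent-done indicator of the kind described above follows; the rendering function is a hypothetical example, not an interface taken from the cited studies.

    # Hypothetical sketch: a textual percent-done indicator that gives the user
    # continuous confirmatory feedback during a long-running operation.
    import sys
    import time

    def show_progress(done: int, total: int, width: int = 30) -> None:
        fraction = done / total
        filled = int(width * fraction)
        bar = "#" * filled + "-" * (width - filled)
        sys.stdout.write(f"\r[{bar}] {fraction:4.0%}")
        sys.stdout.flush()

    for page in range(1, 11):          # simulate printing a 10-page document
        time.sleep(0.1)                # stand-in for the real work
        show_progress(page, 10)
    print()                            # finish the line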

Both visual and auditory cues are shown to motivate users to explore unknown system features (Malone, 1984). Incorporating both video and audio feedback may have significant impact on user learning and satisfaction. Auditory icons, or "earcons," provide intuitive ways to use sound for presenting information to users (Blattner, et al., 1989; Gaver, 1986; 1989). Like visual icons, auditory icons can be constructed by digitizing natural sounds with which the user is familiar; abstract auditory icons can also be created by composing a series of sound pitches (Blattner, et al., 1989). For example, in SonicFinder (Gaver, 1989), a wooden sound is used for opening a file and a metal sound for opening an application, while a scraping sound indicates the dragging of an object. The research in this area could focus on creating game-like interfaces that are fun to learn (Carroll and Mazur, 1986) and on assisting visually impaired users.
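As a rough illustration of the event-to-sound mapping behind auditory icons, the sketch below associates interface events with sound files; the event names, file names, and playback stub are invented here and do not describe SonicFinder's actual implementation.

    # Hypothetical sketch: mapping interface events to auditory icons.  The sound
    # files and the play() stub are placeholders for a real audio facility.
    EARCONS = {
        "open_file":        "wood_tap.wav",     # familiar, concrete sound
        "open_application": "metal_clang.wav",
        "drag_object":      "scrape.wav",
        "error":            "rising_tones.wav"  # abstract earcon built from pitches
    }

    def play(sound_file: str) -> None:
        # Placeholder: a real implementation would hand the file to an audio device.
        print(f"(playing {sound_file})")

    def signal(event: str) -> None:
        sound = EARCONS.get(event)
        if sound:
            play(sound)

    signal("open_file")
    signal("drag_object")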

User assistance

Three types of information have been shown to be valuable for providing user assistance (Carroll and Aaronson, 1988; Kieras and Bovair, 1984). One is "how-to-do-it" information that defines specific action steps for operating the system. Another is "what-it-is-for" information that elaborates on the purpose of each step; this helps users associate steps with individual goals. The third is "how-it-works" information that explains the system model; this is useful for advanced troubleshooting and creative use of the system. All three types of information can be used in writing online error messages and user instructions.
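The three kinds of assistance information can be pictured as fields of a single help record. The sketch below is a hypothetical data structure introduced here for illustration only; the topic and wording are invented.

    # Hypothetical sketch: one help entry carrying the three kinds of assistance
    # information distinguished above.
    from dataclasses import dataclass

    @dataclass
    class HelpEntry:
        topic: str
        how_to_do_it: str     # specific action steps
        what_it_is_for: str   # purpose of the step, tied to the user's goal
        how_it_works: str     # explanation in terms of the system model

    save_help = HelpEntry(
        topic="Saving a document",
        how_to_do_it="Choose File > Save, then type a file name and press Enter.",
        what_it_is_for="Keeps a permanent copy so work survives closing the program.",
        how_it_works="The document in working memory is written to a named file on disk.",
    )

    # Brief assistance shows the first two fields; the "how it works" field is
    # reserved for deeper troubleshooting (see the query-in-depth discussion below).
    print(save_help.how_to_do_it)
    print(save_help.what_it_is_for)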

Error Correction

When novices make errors and are uncertain about what to do next, they often look for instructions from the system message (Good, et al., 1984). Thus, error messages should pinpoint corrective, "how-to-do-it" information and state "what-it-is-for" (Carroll and Aaronson, 1988). In addition, immediate feedback on user errors facilitates learning better than delayed feedback because a user can easily associate the correct action with the exact point of error (Catrambone and Carroll, 1987). The style of error messages is also important: they should reflect users' words, avoid negative tones, and clearly identify the portion of the action in error (Shneiderman, 1987).
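A small sketch of an error message composed along these lines follows; the function and the message wording are hypothetical examples rather than text from the cited studies.

    # Hypothetical sketch: composing an error message that identifies the faulty
    # part of the action, gives corrective "how-to-do-it" information, and states
    # "what-it-is-for", in the user's vocabulary and without a negative tone.
    def error_message(bad_part: str, how_to_fix: str, purpose: str) -> str:
        return f'Check "{bad_part}". {how_to_fix} {purpose}'

    print(error_message(
        bad_part="REPORT-1990",
        how_to_fix="Type the file name exactly as it appears in the file list.",
        purpose="The file name tells the system which document to open.",
    ))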

Online Manuals

When users know the task they wish to perform, brief "guided exploration cards" (Catrambone and Carroll, 1987) help users perform better than long manuals. Specific "how-to-do-it" information can be included for novices to complete tasks quickly in the beginning (Carroll and Aaronson, 1988; Catrambone, 1990). In addition, instructions describing general rules of the system model encourage novices to infer unstated details of the interface, resulting in better user learning of the system (Black, et al., 1989).

The GOMS model described earlier can be used to create online manuals (Gong and Elkerton, 1990). To do so, the designer conducts a GOMS analysis of user tasks. The result is then applied to organize the manual based on possible user goals; for each goal, specific "how-to-do-it" information on methods and operators is then provided. Error avoidance and recovery information can be included to improve user performance.
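To make the goal-oriented organization concrete, the sketch below arranges a tiny manual by user goals, with a method (a sequence of operators) and a recovery note under each goal; the goals and steps are invented for illustration and are not taken from Gong and Elkerton (1990).

    # Hypothetical sketch: an online manual organized by user goals, with
    # "how-to-do-it" methods and error-recovery notes under each goal.
    MANUAL = {
        "Print a document": {
            "method": ["Open the document", "Choose File > Print", "Press Enter"],
            "recovery": "If nothing prints, check that a printer is selected.",
        },
        "Delete a paragraph": {
            "method": ["Select the paragraph", "Press the Delete key"],
            "recovery": "Choose Edit > Undo to restore a paragraph deleted by mistake.",
        },
    }

    def show_goal(goal: str) -> None:
        entry = MANUAL[goal]
        print(goal)
        for i, step in enumerate(entry["method"], start=1):
            print(f"  {i}. {step}")
        print(f"  Recovery: {entry['recovery']}")

    show_goal("Print a document")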

Query-in-Depth

Query-in-depth is a technique designed to provide multi-level assistance to help users at various levels of expertise learn the system (Gaines, 1981; Houghton, 1984). Its low-level help includes brief "how-to-do-it" and "what-it-is-for" information that instructs users' immediate actions. If not satisfied, the user can request more advanced "how-it-works" information for troubleshooting.
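A minimal sketch of the layered lookup implied by query-in-depth follows, using a help record similar to the earlier sketch; the escalation rule (repeating a request yields deeper help) is an illustrative assumption, not a design from Gaines (1981) or Houghton (1984).

    # Hypothetical sketch: query-in-depth help.  The first request for a topic
    # returns brief "how-to-do-it"/"what-it-is-for" text; repeating the request
    # escalates to the deeper "how-it-works" explanation.
    from collections import namedtuple

    HelpEntry = namedtuple("HelpEntry", "how_to_do_it what_it_is_for how_it_works")

    save_help = HelpEntry(
        how_to_do_it="Choose File > Save, then type a file name.",
        what_it_is_for="Keeps a permanent copy of your work.",
        how_it_works="The document in working memory is written to a file on disk.",
    )

    requests_seen = {}

    def help_text(entry, topic):
        level = requests_seen.get(topic, 0)        # how many times help was requested
        requests_seen[topic] = level + 1
        if level == 0:                             # low-level help first
            return f"{entry.how_to_do_it} {entry.what_it_is_for}"
        return entry.how_it_works                  # then advanced troubleshooting help

    print(help_text(save_help, "save"))   # brief, action-oriented help
    print(help_text(save_help, "save"))   # deeper system-model explanation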

Discussion

In the past 10 years, engineers have created sophisticated video and audio technologies for computer input and output. New technologies, like Virtual Reality and Speech I/O, will likely be integrated into normal presentations. To apply them effectively, we need to better understand how they affect the user in performing work. Studies have shown that while auditory memory has less storage capacity than visual memory, it retains signals more than twice as long as visual memory (Cowan, 1984). These differences in attention and memory phenomena must be examined within the context of human-computer interaction. What is the impact on the user's cognitive processes given that only limited capacity is available for motion and perception? How should the various devices be integrated? What are the costs and benefits in terms of hardware, software, user training, and actual user performance? Providing guidance in designing video and audio interfaces is challenging but critical in HCI research in the near future.

Windowing offers many advantages in action and presentation language design that have yet to be explored. For example, one way to implement multi-style interfaces is to allow each style to be operated in a separate window. Or, to adapt to a user's pattern of menu usage, a window for the most recently used menu options, another for the most frequently used options, and a third for the regular menu options can be used in combination. Windows are ideal for user assistance: error messages, online manuals, or confirmatory feedback can be located in windows separated from work dialogs. Complex tasks can also be supported by allowing subtasks in separate windows or 3-D rooms. Again, research is needed to study how windows and 3-D rooms can be effectively applied for these various purposes. The central issue is to understand how they can impact the user's cognitive processes, as discussed in the work by Card, et al. (1991).

Finally, there is a need for research in online advising. Research so far has shown that online advising, even that provided by an expert using the Wizard-of-Oz technique, is of limited use for the novice user (Carroll and Aaronson, 1988). The difficult issues to be addressed are what information should be given and when, what ideas should be left to user inference, and how to use motivational feedback to make learning enjoyable. Studies could also explore the use of video and audio feedback in assisting the user.

Conclusion

Interfaces are complex, cybernetic-like systems that can be built quickly but are difficult to build well. Their complexities necessitate the decomposition of the entire user-interface design problem into small, manageable subproblems, along with a reexamination of their interrelationships into a whole. The framework presented in this article serves this purpose; it organizes research findings into three major divisions: system model, action language, and presentation language. This article reviews current HCI research findings and illuminates their practical implications. The aim of this work is to enable HCI design practice to become more systematic and less intuitive than it is today.

Throughout the literature, two major philosophies of interface design and research can be identified. One is that interface design is often driven by technological advancement; research is conducted to address problems that occur after a design is implemented. This approach generated the mouse, voice, windows, and graphics. The other is that we still know little about the psychological make-up of the user. The work on the psychology of HCI by Card, et al. (1983) and Norman (1986) provides a solid theoretical beginning; much research is needed to expand these theories so they can be useful in addressing a wide range of interface design issues based upon user and task considerations.

Great challenges remain ahead in interface research. We should not limit ourselves to the study of problems concerning only existing technologies. We should explore new, creative uses of advanced technologies to know what, when, and how to apply them effectively. We can save substantial research effort by ceasing to emphasize problems inherent in poorly developed technologies unless they illuminate cognitive processes that will be important to interfaces of the future (Wixon, et al., 1990).

We need to broaden research concerning how people organize, store, and retrieve concepts (Carroll and Campbell, 1986; Newell and Card, 1985; 1986). Theories of exemplar memory, prototype memory, episodic memory, and semantic memory are probably applicable to HCI research. We also need to investigate psychological attributes (such as attitude and preference), work-related factors (such as fatigue and organizational culture), and certain physical limitations (such as hearing and vision impairment). We must study how user interfaces should cope with the limitations imposed by varying user characteristics. More importantly, we must focus on what aspects of user characteristics are important, how they are related to each stage of HCI design, and when during the design stage they must be considered. This focus ensures the applicability of research findings to design.

Finally, we must interrelate the research findings if we are to develop comprehensive theories for the design, implementation, and testing of functional, usable, and learnable interfaces. In this pursuit, the role of the designer in documenting his or her design rationales is especially important. A design rationale is a record of design alternatives and an explanation of why a specific choice is made. To further our understanding of HCI, design rationales should be a co-product of the design process (Maclean, et al., 1989). Comparing and contrasting the design rationales of various systems enables us to capture the range of constraints affecting HCI design and gain insights into why a choice works or does not work.

Some excellent exploratory work has been done in this area. For example, Wixon, et al. (1990) propose the collection of usability data in the context of user tasks to identify both general principles and detailed guidelines for HCI design. Carroll and Kellogg (1989) and Carroll (1990) emphasize the identification of psychological claims embodied in an interface and the application of artifacts as bases for assessing the appropriateness of these claims. In conclusion, data regarding user tasks, user achievement and problems, and changes in the overall environment should be collected on a continuous basis. Assumptions about the psychology of the user performing the task and limitations of technology must be explicitly stated. The collection of design rationales can then be used to develop practical guidelines and principles, which should be repeatedly evaluated to develop theories governing HCI design.

Acknowledgements

We are indebted to the anonymous reviewers for their considerable effort in reviewing this article. We are particularly thankful to the associate editor, Judith Olson, for her insights into the field of HCI. Their many recommendations contributed significantly to this article's development.

References

Barnard, P.J. "Command Names," in Handbook of Human-Computer Interaction, M. Helander (ed.), Elsevier Science Publishers, Amsterdam, 1988, pp. 181-199.

Barnard, P.J., Hammond, N.V., Morton, J., Long, J.B., and Clark, I.A. "Consistency and Compatibility in Human/Computer Dialogue," International Journal of Man-Machine Studies (15), 1981, pp. 87-134.

Benbasat, I., Dexter, A.S., and Todd, P. "An Experimental Program Investigating Color-Enhanced and Graphical Information Presentation: An Integration of the Findings," Communications of the ACM (29:11), December 1986, pp. 1094-1105.

Bennett, J. "Analysis and Design of the User Interface for Decision Support Systems," in Building Decision Support Systems, J. Bennett (ed.), Addison-Wesley, Reading, MA, 1983, pp. 41-64.

Bewley, W.L., Roberts, T.L., Schroit, D., and Verplank, W.L. "Human Factors Testing in the Design of Xerox's 8010 STAR Workstation," Proceedings of CHI'83 Human Factors in Computing Systems, Boston, MA, 1983, pp. 72-77.

Black, J.B., Bechtold, J.S., Mitrain, M., and Carroll, J.M. "On-line Tutorials: What Kind of Inference Leads to the Most Effective Learning?" Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 81-83.

Black, J.B. and Sebrechts, M.M. "Facilitating Human-Computer Communication," Applied Psycholinguistics (2), 1981, pp. 149-177.

Blattner, M.M., Sumikawa, D.A., and Greenberg, R.M. "Earcons and Icons: Their Structure and Common Design Principles," Human-Computer Interaction (4:1), 1989, pp. 11-44.

Bloom, C.P. "Procedures for Obtaining and Testing User-Selected Terminologies," Human-Computer Interaction (3:2), 1987-1988, pp. 155-177.

Bobrow, D.G. "Dimensions of Representations," in Representation and Understanding, D.G. Bobrow and A. Collins (eds.), Academic Press, New York, NY, 1975, pp. 1-34.

Bowden, E.M., Douglas, S.A., and Stanford, C.A. "Testing the Principle of Orthogonality in Language Design," Human-Computer Interaction (4:2), 1989, pp. 95-120.

Burns, M.J., Warren, D.L., and Rudisill, M. "Formatting Space-Related Displays to Optimize Expert and Nonexpert User Performance," Proceedings of CHI'86 Human Factors in Computing Systems, Boston, MA, 1986, pp. 274-280.

Card, S.K., Moran, T.P., and Newell, A. The Psychology of Human-Computer Interaction, Lawrence Erlbaum Associates, Hillsdale, NJ, 1983.

Card, S.K., Robertson, G.G., and Mackinlay, J.D. "The Information Visualizer: An Information Workspace," Proceedings of CHI'91 Human Factors in Computing Systems, New Orleans, LA, 1991, pp. 181-188.

Carroll, J.M. "The Adventure of Getting to Know a Computer," IEEE Computer (15:11), November 1982, pp. 49-58.

Carroll, J.M. What's in a Name, Freeman, New York, NY, 1985.

Carroll, J.M. "Infinite Detail and Emulation in an Ontologically Minimized HCI," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 321-327.


Carroll, J.M. and Aaronson, A.P. "Learning by Doing with Simulated Intelligent Help," Communications of the ACM (31:9), September 1988, pp. 1064-1079.

Carroll, J.M. and Carrithers, C. "Training Wheels in a User Interface," Communications of the ACM (27:8), August 1984, pp. 800-806.

Carroll, J.M. and Campbell, R.L. "Softening Up Hard Science: Reply to Newell and Card," Human-Computer Interaction (2:3), 1986, pp. 227-249.

Carroll, J.M. and Kellogg, W.A. "Artifact as Theory-Nexus: Hermeneutics Meets Theory-Based Design," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 7-14.

Carroll, J.M., Mack, R.L., and Kellogg, W.A. "Interface Metaphors and User Interface Design," in Handbook of Human-Computer Interaction, M. Helander (ed.), Elsevier Science Publishers, Amsterdam, 1988, pp. 67-86.

Carroll, J.M. and Mazur, S.A. "Lisa Learning," IEEE Computer (19:11), November 1986, pp. 35-49.

Carroll, J.M. and Olson, J.R. "Mental Models in Human-Computer Interaction," in Handbook of Human-Computer Interaction, M. Helander (ed.), Elsevier Science Publishers, Amsterdam, 1988, pp. 45-65.

Carroll, J.M. and Thomas, J.C. "Metaphor and the Cognitive Representation of Computing Systems," IEEE Transactions on Systems, Man, and Cybernetics (12:2), 1982, pp. 107-116.

Catrambone, R. "Specific Versus General Procedures in Instructions," Human-Computer Interaction (5:1), 1990, pp. 49-93.

Catrambone, R. and Carroll, J.M. "Learning a Word Processing System with Training Wheels and Guided Exploration," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 169-174.

Cohen, P.R., Dalrymple, M., Moran, D.B., Pereira, F.C.N., Sullivan, J.W., Gargan, R.A., Jr., Schlossberg, J.L., and Tyler, S.W. "Synergistic Use of Direct Manipulation and Natural Language," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 227-234.

Cowan, N. "On Short and Long Auditory Stores," Psychological Bulletin (96), 1984, pp. 341-470.

diSessa, A.A. "A Principled Design for an Integrated Computational Environment," Human-Computer Interaction (1:2), 1985, pp. 1-47.

diSessa, A.A. "Models of Computation," in User Centered System Design, D.A. Norman and S.W. Draper (eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1986, pp. 201-218.

Egan, D.E. "Individual Differences in Human-Computer Interaction," in Cognitive Science and its Application for Human-Computer Interaction, H. Helander (ed.), Elsevier Science Publishers B.V., Hillsdale, NJ, 1988, pp. 543-568.

Execucom Systems Corporation. Cases and Models Using IFPS, Execucom, Austin, TX, 1979.

Fitter, M. "Towards More Natural Interactive Systems," International Journal of Man-Machine Studies (11:3), 1979, pp. 339-350.

Fowler, C.J.H., Macaulay, L.A., and Fowler, J.F. "The Relationship Between Cognitive Style and Dialogue Style: An Explorative Study," in People and Computers: Designing the Interface, P. Johnson and S. Cook (eds.), Cambridge University Press, New York, NY, 1985, pp. 186-198.

Gaines, B. "The Technology of Interaction-Dialogue Programming Rules," International Journal of Man-Machine Studies (14:1), 1981, pp. 133-150.

Gaver, W. "Auditory Icons: Using Sound in Computer Interfaces," Human-Computer Interaction (2:2), 1986, pp. 167-177.

Gaver, W.W. "The SonicFinder: An Interface that Uses Auditory Icons," Human-Computer Interaction (4:1), 1989, pp. 67-94.

Gerlach, J.H. and Kuo, F.Y. "Formal Development of Hybrid User-Computer Interfaces with Advanced Forms of User Assistance," Journal of Systems and Software (16:3), November 1991, pp. 169-184.

Gong, R. and Elkerton, J. "Designing Minimal Documentation Using a GOMS Model: A Usability Evaluation of an Engineering Approach," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 99-106.

Good, M.D., Whiteside, J.A., Wixon, D.R., and Jones, S.J. "Building a User-Derived Interface," Communications of the ACM (27:10), October 1984, pp. 1032-1043.

Gould, J.D., Lewis, C., and Barnes, V. "Cursor Movement During Text Editing," ACM Transactions on Office Information Systems (3:1), January 1985, pp. 22-34.

Gould, J.D. and Lewis, C. "Designing for Usability: Key Principles and What Designers Think," Communications of the ACM (28:3), March 1985, pp. 300-311.

Grudin, J. "The Case Against User Interface Consistency," Communications of the ACM (32:10), October 1989, pp. 1164-1173.

Grudin, J. "The Computer Reaches Out: The Historical Continuity of Interface Design," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 261-268.

Halasz, F.G. "Reflections on Notecards: Seven Issues for the Next Generation of Hypermedia Systems," Communications of the ACM (31:7), July 1988, pp. 836-852.

Halasz, F.G. and Moran, T.P. "Analogy Considered Harmful," Proceedings of the Conference on Human Factors in Computing Systems, Gaithersburg, MD, 1982, pp. 383-386.

Halasz, F.G. and Moran, T.P. "Mental Models and Problem Solving in Using a Calculator," Proceedings of CHI'83 Human Factors in Computing Systems, Boston, MA, 1983, pp. 212-216.

Hartson, H.R. and Hix, D. "Human-Computer Interface Development: Concepts and Systems for Its Management," Computing Surveys (21:1), March 1989, pp. 5-92.

Hauptmann, A.G. "Speech and Gestures for Graphic Image Manipulation," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 241-245.

Hendrickson, J.J. "Performance, Preference, and Visual Scan Patterns on a Menu-Based System: Implications for Interface Design," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 217-222.

Hiltz, S.R. and Kerr, E.B. "Learning Modes and Subsequent Use of Computer-Mediated Communication Systems," Proceedings of CHI'86 Human Factors in Computing Systems, Boston, MA, 1986, pp. 149-155.

Houghton, R.C. "Online Help Systems: A Conspectus," Communications of the ACM (27:2), February 1984, pp. 126-133.

Hutchins, E.L., Hollan, J.D., and Norman, D.A. "Direct Manipulation Interfaces," in User Centered System Design, D.A. Norman and S.W. Draper (eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1986, pp. 87-124.

Jagodzinski, A.P. "A Theoretical Basis for the Representation of On-Line Computer Systems to Naive Users," International Journal of Man-Machine Studies (18), 1983, pp. 215-252.

Jarvenpaa, S.L. and Dickson, G.W. "Graphics and Managerial Decision Making: Research-Based Guidelines," Communications of the ACM (31:6), June 1988, pp. 764-774.

Johnson, E.J. and Payne, J.W. "Effort and Accuracy in Choice," Management Science (31:4), April 1985, pp. 395-414.

Johnson, E.J., Payne, J.W., and Bettman, J.R. "Information Displays and Preference Reversals," Organizational Behavior and Human Decision Processes (42), 1988, pp. 1-21.

Jones, W.P. and Dumais, S.T. "The Spatial Metaphor for User Interfaces: Experimental Tests of Reference by Location versus Names," ACM Transactions on Office Information Systems (4:1), January 1986, pp. 42-63.

Kellogg, W.A. and Breen, T.J. "Evaluating User and System Models: Applying Scaling Techniques to Problems in Human-Computer Interaction," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 303-308.

Kieras, D.E. and Bovair, S. "The Role of Mental Knowledge in Learning to Operate a Device," Cognitive Science (8), 1984, pp. 191-219.

Kieras, D.E. and Polson, P.G. "An Approach to the Formal Analysis of User Complexity," International Journal of Man-Machine Studies (22), 1985, pp. 366-394.

Laird, J.E., Newell, A., and Rosenbloom, P.S. "SOAR: An Architecture for General Intelligence," Artificial Intelligence (33), 1987, pp. 1-64.

Landauer, T.K., Galotti, K.M., and Hartwell, S. "Natural Command Names and Initial Learning: A Study of Text-Editing Terms," Communications of the ACM (26:7), July 1983, pp. 495-503.

Lerch, F.J., Mantei, M.M., and Olson, J.R. "Skilled Financial Planning: The Cost of Translating Ideas into Action," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 121-126.

Lewis, C., Polson, P., Wharton, C., and Rieman, J. "Testing a Walkthrough Methodology for Theory-Based Design of Walk-Up and Use Interfaces," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 235-242.

Lewis, M.W. and Anderson, J.R. "Discrimination of Operator in Problem Solving: Learning from Examples," Cognitive Psychology (17), 1985, pp. 26-65.

Lohse, J. "A Cognitive Model for the Perception and Understanding of Graphs," Proceedings of CHI'91 Human Factors in Computing Systems, New Orleans, LA, 1991, pp. 137-144.

Lotus Development Corporation. Lotus 1-2-3, Lotus Development Corporation, Cambridge, MA, 1989.

Mack, R.L., Lewis, C.H., and Carroll, J.M. "Learning to Use Word Processors: Problems and Prospects," ACM Transactions on Office Information Systems (1:3), July 1983, pp. 254-271.

Maclean, A., Young, R.M., and Moran, T.P. "Design Rationale: The Argument Behind the Artifact," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 247-252.

Malone, T.W. "Heuristics for Designing Enjoyable User Interfaces: Lessons from Computer Games," in Human Factors in Computing Systems, J.C. Thomas and M. Schneider (eds.), Ablex, Norwood, NJ, 1984, pp. 1-12.

Mandler, J.M., Seegmiller, D., and Day, J. "On the Encoding of Spatial Information," Memory & Cognition (5), 1977, pp. 10-16.

Mayer, R.E. "The Psychology of How Novices Learn Computer Programming," Computing Surveys (13:1), March 1981, pp. 121-141.

McDonald, J.E. and Schvaneveldt, R.W. "The Application of User Knowledge to Interface Design," in Cognitive Science and Its Application for Human-Computer Interaction, R. Guinden (ed.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1988, pp. 289-338.

Mehlenbacher, B., Duffy, T.M., and Palmer, J. "Finding Information on a Menu: Linking Menu Organization to the User's Goals," Human-Computer Interaction (4:3), 1989, pp. 231-251.

Miller, L.A. and Thomas, J.C., Jr. "Behavioral Issues in the Use of Interactive Systems," International Journal of Man-Machine Studies (9), 1977, pp. 509-536.

Moran, T. "An Applied Psychology of the User," Computing Surveys (13:1), March 1981, pp. 1-12.

Morland, D.V. "Human Factors Guidelines for Terminal Interface Design," Communications of the ACM (26:7), July 1983, pp. 100-104.

Mozeico, H. "A Human/Computer Interface to Accommodate User Learning Stages," Communications of the ACM (25:2), February 1982, pp. 100-104.

Myers, B.A. "The Importance of Percent-Done Indicators for Computer-Human Interfaces," Proceedings of CHI'85 Human Factors in Computing Systems, San Francisco, CA, 1985, pp. 11-17.

Nakatani, L.H., Egan, D.E., Ruedisueli, L.W., Hawley, P.M., and Lewart, D.K. "TNT: A Talking Tutor 'N' Trainer for Teaching the Use of Interactive Computer Systems," Proceedings of CHI'86 Human Factors in Computing Systems, Boston, MA, 1986, pp. 29-34.

Newell, A. and Card, S. "The Prospects of Psychological Science in Human-Computer Interaction," Human-Computer Interaction (1:3), 1985, pp. 209-242.

Newell, A. and Card, S. "Straightening Out Softening Up: Response to Carroll and Campbell," Human-Computer Interaction (2:3), 1986, pp. 251-267.

Newell, A. and Simon, H.A. Human Problem Solving, Prentice-Hall, Englewood Cliffs, NJ, 1972.

Nickerson, R.S. "Why Interactive Computer Systems Are Sometimes Not Used by the People Who Might Benefit from Them," International Journal of Man-Machine Studies (4), 1981, pp. 469-483.

Norman, D.A. "Design Rules Based on Analysis of Human Error," Communications of the ACM (26:4), April 1983, pp. 254-258.

Norman, D.A. "Cognitive Engineering," in User Centered System Design, D.A. Norman and S.W. Draper (eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1986, pp. 31-61.

Olson, J.R. and Nilsen, E. "Analysis of the Cognition Involved in Spreadsheet Software Interaction," Human-Computer Interaction (3:4), 1987, pp. 309-349.

Olson, J.R. and Olson, G.M. "The Growth of Cognitive Modeling in Human-Computer Interaction Since GOMS," Human-Computer Interaction (5:2-3), 1990, pp. 221-266.


Olson, J.R. and Rueter, H.H. "Extracting Expertise from Experts: Methods for Knowledge Acquisition," Journal of Expert Systems (4:3), 1987, pp. 152-168.

Payne, S.J. and Green, T.R.G. "Task-Action Grammars: A Model of the Mental Representation of Task Languages," Human-Computer Interaction (2:2), 1986, pp. 93-134.

Phillips, M.D., Howard, B.S., Ammerman, H.L., and Fligg, C.M., Jr. "A Task Analytic Approach to Dialogue Design," in Handbook of Human-Computer Interaction, M. Helander (ed.), Elsevier Science Publishers, Amsterdam, 1988, pp. 835-857.

Polson, P., Muncher, E., and Englebeck, G. "A Test of a Common Elements Theory of Transfer," Proceedings of CHI'86 Human Factors in Computing Systems, Boston, MA, 1986, pp. 78-83.

Polson, P. "The Consequences of Consistent and Inconsistent Interfaces," in Cognitive Science and Its Application for Human-Computer Interaction, R. Guinden (ed.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1988, pp. 59-107.

Powers, M., Lashley, C., Sanchez, D., and Shneiderman, B. "An Experimental Comparison of Tabular and Graphical Data Presentation," International Journal of Man-Machine Studies (20), 1984, pp. 545-568.

Rasmussen, J. "The Human as a System Component," in Human Interaction with Computers, H.T. Smith and T.R.G. Green (eds.), Academic Press, London, 1980, pp. 67-96.

Reisner, P. "Using a Formal Grammar in Human Factors Design of an Interactive Graphics System," IEEE Transactions on Software Engineering (7:2), March 1981, pp. 1409-1411.

Remus, W. "An Experimental Investigation of the Impact of Graphical and Tabular Data Presentations on Decision Making," Management Science (30:5), May 1984, pp. 533-542.

Remus, W. "A Study of Graphical and Tabular Displays and Their Integration with Environmental Complexity," Management Science (33:9), September 1987, pp. 1200-1204.

Sein, M.K. and Bostrom, R.P. "Individual Differences and Conceptual Models in Training Novice Users," Human-Computer Interaction (4:3), 1989, pp. 197-229.

Shiffrin, R.M. and Schneider, W. "Controlled and Automatic Information Processing: Perceptual Learning, Automatic Attending, and a General Theory," Psychological Review (84:2), March 1977, pp. 127-190.

Shneiderman, B. Designing the User Interface, Addison-Wesley, Reading, MA, 1987.

Somberg, B.L. "A Comparison of Rule-Based and Positionally Constant Arrangements of Computer Menu Items," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 255-260.

Trevellyan, R. and Browne, D.P. "A Self-Regulating Adaptive System," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 103-107.

Waren, Y. "Mental Models in Learning Computerized Tasks," in Psychological Issues of Human-Computer Interaction in the Work Place, M. Frese, E. Ulich, and W. Dzida (eds.), Elsevier Science Publishers, Amsterdam, 1987, pp. 275-294.

Weimer, D. and Ganapathy, S.K. "A Synthetic Visual Environment with Hand Gesturing and Voice Input," Proceedings of CHI'89 Human Factors in Computing Systems, Austin, TX, 1989, pp. 235-240.

Witten, I.H., Cleary, J., and Greenberg, S. "On Frequency-Based Menu-Splitting Algorithms," International Journal of Man-Machine Studies (21), 1984, pp. 135-148.

Wixon, D., Holtzblatt, K., and Knox, S. "Contextual Design: An Emergent View of System Design," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 329-336.

Young, R.M. "The Machine Inside the Machine: Users' Models of Pocket Calculators," International Journal of Man-Machine Studies (15), 1981, pp. 51-85.

Young, R.M. and Barnard, P.J. "The Use of Scenarios in Human-Computer Interaction Research: Turbo-Charging the Tortoise of Cumulative Science," Proceedings of CHI + GI 1987 Human Factors in Computing Systems, Toronto, Ontario, 1987, pp. 291-296.

Young, R.M., Barnard, P., Simon, T., and Whittington, J. "How Would Your Favourite User Model Cope with These Scenarios?" SIGCHI Bulletin (20:4), April 1989, pp. 51-55.

Young, R.M. and Whittington, J. "Using a Knowledge Analysis to Predict Conceptual Errors in Text-Editor Usage," Proceedings of CHI'90 Human Factors in Computing Systems, Seattle, WA, 1990, pp. 91-97.


About the Authors

James H. Gerlach is associate professor of information systems at the University of Colorado at Denver. In addition to human-computer interaction, his research interests include software engineering and EDP auditing. His work has appeared in ACM Transactions on Information Systems, IEEE Computer, Decision Support Systems, Journal of Systems and Software, The Accounting Review, and Auditing. Dr. Gerlach received an M.S. in computer science and a Ph.D. in management, both from Purdue University.

Feng-Yang Kuo is assistant professor of information systems in the Graduate School of Business, University of Colorado at Denver. He received his Ph.D. in management information systems from the University of Arizona. His research interests include human-computer interaction, database management, office automation, and decision support systems. Dr. Kuo's work has appeared in MIS Quarterly, Communications of the ACM, Information Management, and Decision Support Systems.

