
Technical Report 888

Strategies of Computer-Based Instructional Design: A Review of

Guidelines and Empirical Research

Dudley J. Terrell
Anacapa Sciences, Inc.

May 1990


United States Army Research Institute
for the Behavioral and Social Sciences

Approved for public release; distribution is unlimited


U.S. ARMY RESEARCH INSTITUTE

FOR THE BEHAVIORAL AND SOCIAL SCIENCES

A Field Operating Agency Under the Jurisdiction

of the Deputy Chief of Staff for Personnel

EDGAR M. JOHNSON
Technical Director

JON W. BLADES
COL, IN
Commanding

Research accomplished under contract
for the Department of the Army

Anacapa Sciences, Inc.

Technical review by

Charles A. Gainer
Gabriel P. Intano
Gena M. Pedroni
John E. Stewart

NOTICES

FINAL DISPOSITION: This report may be destroyed when it is no longer needed. Please do not return it to the U.S. Army Research Institute for the Behavioral and Social Sciences.

NOTE: The findings in this report are not to be construed as an official Department of the Army position, unless so designated by other authorized documents.

UNCLASSIFIED
REPORT DOCUMENTATION PAGE (Form Approved, OMB No. 0704-0188)

1a. REPORT SECURITY CLASSIFICATION: Unclassified
1b. RESTRICTIVE MARKINGS: --
2a. SECURITY CLASSIFICATION AUTHORITY: --
2b. DECLASSIFICATION/DOWNGRADING SCHEDULE: --
3. DISTRIBUTION/AVAILABILITY OF REPORT: Approved for public release; distribution is unlimited.
4. PERFORMING ORGANIZATION REPORT NUMBER(S): AS1690-324-89
5. MONITORING ORGANIZATION REPORT NUMBER(S): ARI Technical Report 888
6a. NAME OF PERFORMING ORGANIZATION: Anacapa Sciences, Inc.
6c. ADDRESS: P.O. Box 489, Fort Rucker, AL 36362-5000
7a. NAME OF MONITORING ORGANIZATION: U.S. Army Research Institute Aviation Research and Development Activity
7b. ADDRESS: ATTN: PERI-IR, Fort Rucker, AL 36362-5354
8a. NAME OF FUNDING/SPONSORING ORGANIZATION: U.S. Army Research Institute for the Behavioral and Social Sciences
8b. OFFICE SYMBOL: PERI-I
8c. ADDRESS: 5001 Eisenhower Avenue, Alexandria, VA 22333-5600
9. PROCUREMENT INSTRUMENT IDENTIFICATION NUMBER: MDA903-87-C-0523
10. SOURCE OF FUNDING NUMBERS: Program Element No. 63007A; Project No. 795; Task No. 3309; Work Unit Accession No. C06
11. TITLE (Include Security Classification): Strategies of Computer-Based Instructional Design: A Review of Guidelines and Empirical Research
12. PERSONAL AUTHOR(S): Terrell, Dudley J. (Anacapa Sciences, Inc.)
13a. TYPE OF REPORT: Interim
13b. TIME COVERED: FROM 88/06 TO 89/09
14. DATE OF REPORT (Year, Month): 1990, May
15. PAGE COUNT: 17
16. SUPPLEMENTARY NOTATION: All research on this project was technically monitored by Mr. Charles A. Gainer, Chief, U.S. Army Research Institute Aviation Research and Development Activity (ARIARDA), Fort Rucker, Alabama.
17. COSATI CODES: FIELD 05, GROUP 08
18. SUBJECT TERMS: Computer-based instruction; Computer-assisted instruction; Computer-based training; Part-task training; Instructional development (Continued)
19. ABSTRACT: A survey of literature was conducted to examine empirical research for the numerous guidelines and recommendations that have been published about design strategies for computer-based instruction. The guidelines and experiments were categorized as pertaining to (a) strategies for presenting instructional material, (b) strategies for questioning and interactivity, or (c) strategies for programming response feedback and remediation procedures. Strategies for presenting instructional material were analyzed in literature on orienting instructions and objectives, stimulus display duration, sequencing instructional material, sequencing levels of difficulty, graphics, and review of material. Strategies for questioning and interactivity were analyzed in literature on prelesson questions, question types, question placement, number of questions, and answering questions. Strategies for programming response feedback and remediation procedures were analyzed in literature on feedback for correct and incorrect responses, latency of feedback, and placement of feedback. Each guideline was classified according to (Continued)
20. DISTRIBUTION/AVAILABILITY OF ABSTRACT: Same as report
21. ABSTRACT SECURITY CLASSIFICATION: Unclassified
22a. NAME OF RESPONSIBLE INDIVIDUAL: Charles A. Gainer
22b. TELEPHONE (Include Area Code): (205) 255-4404
22c. OFFICE SYMBOL: PERI-IR

DD Form 1473, JUN 86. Previous editions are obsolete.
UNCLASSIFIED


ARI Technical Report 888

18. SUBJECT TERMS (Continued)

Instructional design; Training effectiveness; Training strategies; Learning

19. ABSTRACT (Continued)

whether authors of instructional guidelines made recommendations that contradicted the guideline, and whether empirical research supports the guideline, contradicts the guideline, or does not exist to evaluate the guideline. Very few of the guidelines are supported by empirical research or agreement among experts. Most of the guidelines are contradicted by other guidelines and are insufficiently supported by research. Instructional design guidelines requiring further research are identified.


Technical Report 888

Strategies of Computer-Based Instructional Design:
A Review of Guidelines and Empirical Research

Dudley J. Terrell
Anacapa Sciences, Inc.

ARI Aviation R&D Activity at Fort Rucker, Alabama
Charles A. Gainer, Chief

Training Research Laboratory
Jack H. Hiller, Director

U.S. Army Research Institute for the Behavioral and Social Sciences
5001 Eisenhower Avenue, Alexandria, Virginia 22333-5600

Office, Deputy Chief of Staff for Personnel
Department of the Army

May 1990

Army Project Number: 20263007A795
Training and Simulation

Approved for public release; distribution is unlimited.


FOREWORD

The U.S. Army Research Institute for the Behavioral and Social Sciences performs research and develops programs to improve training effectiveness and to contribute to training readiness. Of special interest are research and development programs that apply computers and other advanced technologies to part-task trainers and training strategies. Research that identifies the most effective strategies for designing computer-based trainers and training programs will enhance the development and procurement of specific part-task training systems required by the Army training community.

This report surveys the literature and evaluates the existing research for computer-based instructional design strategies. Instructional design issues requiring further research and development are identified. With an understanding of the current state of the art in instructional design, training developers (Directorate of Training and Doctrine (DOTD), U.S. Army Aviation Center (USAAVNC)) can design new systems to use the most effective strategies and evaluate the effectiveness of new strategies.

This work was conducted within the Training Research Laboratory program under Research Task 3309, entitled "Techniques for Tactical Flight Training." The Army Research Institute Aviation Research and Development Activity (ARIARDA) at Fort Rucker, Alabama, was responsible for the execution of the work. This work is a part of a Memorandum of Agreement (MOA) between ARIARDA and DOTD, USAAVNC, dated 15 March 1984 and updated in October 1989. This information was provided as a preliminary draft to the DOTD. However, the primary application was a literature review to support research for the empirical development of a total training system for aviation knowledge and skills.

EDGAR M. JOHNSON
Technical Director


STRATEGIES OF COMPUTER-BASED INSTRUCTIONAL DESIGN:
A REVIEW OF GUIDELINES AND EMPIRICAL RESEARCH

EXECUTIVE SUMMARY

Requirement:

This literature survey was conducted to examine the empirical research support for the numerous guidelines and recommendations that have been published about computer-based instructional design strategies.

Procedure:

Two types of publications were selected and reviewed: (a) reports of empirical research comparing at least one strategy of computer-based instruction (CBI) with another, and (b) published guidelines for the development of CBI. The guidelines and experiments were categorized according to whether they pertained to strategies for (a) presenting instructional material, (b) questioning and interactivity, or (c) programming response feedback and remediation procedures. Each section of the report begins with a list of guidelines for some aspect of instructional design and a discussion of the recommendations made by the authors of the guidelines. Then the empirical research relevant to each guideline is reviewed, and guidelines that lack research support are identified. Finally, each guideline discussed in the report is classified by whether (a) authors of instructional guidelines made another recommendation that contradicted the guideline, and (b) empirical research supports the guideline, contradicts it, or does not exist to evaluate it.

Findings:

Only 5 of the 57 guidelines reviewed are supported by empirical research. Authors of guidelines agree that CBI should present questions, corrective feedback for incorrect responses, and multiple trials for items that are answered incorrectly. Although some authors do not agree that CBI should present prelesson questions or that computers should be programmed to control adaptively the number of trials, empirical research suggests that these strategies are effective.

Although the experimental evidence is not strong enough to justify complete rejection, 8 of the 57 guidelines are contradicted by the results of empirical research. Empirical research tends to contradict recommendations to use graphics often, to provide increasingly informative feedback for successive errors, to train under mild speed stress, to present information before practice, to randomize the sequence of material, to present part-task training prior to whole-task training, to permit the student to control the presentation of reviews, and to program the computer to control the presentation of reviews.

Finally, 44 of the guidelines have been insufficiently addressed by empirical research. For many of the guidelines, the empirical research either has produced mixed results and further research is required, is inconclusive because of inadequate experimental designs, or simply does not exist. Some of the guidelines, however, are self-evident and do not require empirical research. Other guidelines are stated in terms that are too general to evaluate their usefulness. Some are very specific, and an evaluation of the possibility of presenting them generally is required.

Utilization of Findings:

The CBI designer and the educational technology researcher should be aware of the contradictions and potential limitations of the recommendations in the literature. Research should be designed to address the current gaps in knowledge of computer-based instructional strategies. Existing operational problems in part-task training may be addressed in a manner that extends this knowledge. Instructional designers currently working on training development should recognize when a particular strategy is being applied in a case that potentially tests the generality of a recommendation or research result.


STRATEGIES OF COMPUTER-BASED INSTRUCTIONAL DESIGN:
A REVIEW OF GUIDELINES AND EMPIRICAL RESEARCH

CONTENTS

INTRODUCTION
    Background
    Procedure

PRESENTATION OF INSTRUCTIONAL STIMULI
    Orienting Instructions and Objectives
    Stimulus Display Duration
    Sequencing Instructional Material
    Sequencing Levels of Difficulty
    Graphics
    Review of Material
    Summary

QUESTIONS
    Interactivity
    Prelesson Questions
    Question Types
    Question Placement
    Number of Questions or Problems
    Answering Questions
    Summary

PROGRAMMING RESPONSE FEEDBACK AND REMEDIATION STRATEGIES
    Feedback for Correct Responses
    Feedback for Incorrect Responses
    Latency of Feedback
    Placement of Feedback
    Summary

OTHER GUIDELINES ABOUT INSTRUCTIONAL STRATEGIES

DISCUSSION
    Guidelines Supported by Research
    Guidelines Contradicted by Research
    Guidelines Lacking Research Support
    Conclusion

REFERENCES


LIST OF TABLES

Table 1. Listing and classification of CBI guidelines


STRATEGIES OF COMPUTER-BASED INSTRUCTIONAL DESIGN:
A REVIEW OF GUIDELINES AND EMPIRICAL RESEARCH

Introduction

Background

Training technology has progressed from Pressey's teaching machines and Link's simulators of the 1920s to the computer-based tutors, part-task trainers, and simulators of the 1980s (Benjamin, 1988; Flexman & Stark, 1987). A new field of research and development, called instructional design, has emerged in industry and academia to provide services in educational technology. Textbooks on the application of learning theory to instructional design (for example, Gagné, Briggs, & Wager, 1988), journals specializing in the publication of educational technology research (for example, Journal of Computer-Based Instruction, Instructional Development), and guidelines for the development of computer-based instruction (for example, Alessi & Trollip, 1985; Kearsley, 1986) have been published. Professional organizations, such as the Association for the Development of Computer-Based Instructional Systems and the Computer Systems Technical Group of the Human Factors Society, convene regularly to discuss developments in computer-based instruction research.

Computer-based instruction (CBI) and simulation are rapidly becoming an integral part of military training. Operation of sophisticated weapons systems during fast-paced combat requires a multitude of skills that may be trained most effectively with simulators and part-task trainers. The development of such training technology requires a thorough understanding of (a) the skills and abilities required to operate the systems, (b) theories of learning and principles of instruction, and (c) the state of the art in computer technology.

Over the past decade, an enormous amount of research has been conducted to investigate the training potential of CBI. The majority of this research has had as its primary objective the comparison of CBI with other methods of instruction. Reviewers have often concluded that computers can measurably improve training effectiveness (for example, Eberts & Brock, 1987). However, Clark (1985) suggests that the training effectiveness of CBI is due primarily to effective instructional strategies and not to the medium per se. Most of the media comparison research provides very little information about effective strategies of computer-based instructional design (Reeves, 1986; Shlechter, 1986).

The present survey of literature was conducted to examine the empirical research support for the numerous guidelines and recommendations that have been published about computer-based instructional design strategies. Two types of publications were selected and reviewed: (a) reports of empirical research comparing at least one strategy of CBI with another and (b) published guidelines for the development of CBI. Reports of experiments comparing CBI with other methods of instruction, reports of computer-based instructional development (without experimentation), and reports describing new technology or new applications of existing technology were not considered.

The guidelines and experiments were categorized according to whether they pertained to strategies for (a) presenting instructional material, (b) questioning and interactivity, or (c) programming response feedback and remediation procedures. In the following sections, the guidelines and research in each of these areas are presented. Each section begins with a list of guidelines and a discussion of the recommendations made by their authors. Then the empirical research relevant to each guideline is reviewed. Finally, the guidelines that lack research support are identified.

Presentation of Instructional Stimuli

Designers of CBI must make many decisions about the organization and presentation of training material. They must decide what sorts of events should occur on the monitor, such as tutorials, drills, and simulations, and how long each instructional event should take. They must determine how students should be instructed to interact with a lesson, how graphics can be used most effectively, and whether the student or the computer should control the training variables. This section presents some of the guidelines and empirical research pertaining to orienting instructions, stimulus display duration, sequencing of instructional material, sequencing various levels of difficulty, use of graphics, and review of material.

Orienting Instructions and Objectives

Orienting instructions are directions that are provided to inform students about the nature of the interaction required during a lesson, the operation of a training device, or both. Objectives are statements of the benefits to be realized by the student as a result of successful participation in the lesson or training. Not all recommendations about the presentation of orienting instructions or objectives must be supported by empirical research. If a training device is complicated to operate or unfamiliar to the student, operating instructions obviously are necessary for training to occur. Similarly, if a student must choose from a set of lessons or training devices, statements of objectives may enable the student to select a lesson addressing a particular interest or training deficiency. However, training effectiveness research is required to investigate the possible benefits of orienting instructions or objectives in any particular lesson or with a device that is not difficult to operate.

Guidelines. Authors of instructional design guidelines have four different, and somewhat incompatible, recommendations about the use of orienting instructions and objectives. Statements made about the utility of orienting instructions and statements of objectives include the following:

" they are ineffective and unnecessary," they limit the amount or breadth of learning," they should be included in CBI design, and" they should be a student-controlled option.

Hannafin and Hughes (1986) suggest that orienting instructions are typically unnecessary. Although orienting instructions may be necessary for poorly organized lessons, well organized lessons are not enhanced by orienting instructions. Statements of behavioral objectives may be beneficial only when students have had some experience in the use of objectives as orienting activities.

Statements of objectives may enhance training described in the objectives, but they may obscure any incidental training potential of a lesson. Hannafin and Hughes (1986) warn that orienting activities may be effective only for learning that is "context-bound or limited in application" (p. 248). Jonassen and Hannum (1987) and Wager and Wager (1985) state that orienting instructions appear to increase the speed of learning and reduce lesson errors, but they may hinder the retention of lesson material.

Despite the conclusions and recommendations described above, these and other authors still advocate the use of orienting instructions and objectives in CBI. Jonassen and Hannum (1987) advise instructional designers to include cognitive maps and structured overviews to illustrate where topics fit into the content area. Wager and Wager (1985) recommend that lessons inform the student of special conditions and expectations of performance. Alessi and Trollip (1985) state that computer-based tutorials should have a concise and accurate statement of objectives, although not necessarily in behavioral terms. However, they add that this recommendation does not apply to CBI designed for young children. Buehner (1987) suggests that CBI consistently include an overview of the training, describing opportunities for review and reinforcement. According to MacLachlan (1986), "...at the beginning of a computerized tutorial, the student can be told the benefits he can expect by giving attention to the material in the tutorial.... It would be desirable, then, to preface a tutorial with instructions indicating the manner in which the student should be watchful" (p. 65). Finally, Jonassen and Hannum (1987) recommend that lesson overviews be presented as an option for the student to choose.

Research. There is very little research support for recommendations about orienting instructions and objectives. Ho, Savenye, and Haas (1986) and Rieber and Hannafin (1986) found no differences between groups of subjects who viewed orienting instructions and those who did not. However, Hannafin, Phillips, and Tripp (1986) found that orienting instructions improved posttest scores when the ensuing lesson followed the instructions after a 5-second interval but not after a 20-second interval.

The research evidence is inadequate to support either the inclusion or exclusion of orienting instructions or objectives in CBI. Furthermore, no research has been located that addresses the assertion that orienting instructions and objectives limit the amount or breadth of learning in CBI.

Stimulus Display Duration

Except in the case of animation, the CBI designer must decide whether a stimulus display should remain on the screen until removed by the student, or should be removed automatically after a prescribed amount of time has passed. If the latter alternative is chosen, the designer must determine optimal durations for displays.

Guidelines. Authors of instructional design guidelines disagree in their recommendations about stimulus display duration. Alessi and Trollip (1985) advocate student control of display duration for the presentation of questions and graphics. However, Eberts and Brock (1984) assert, "If the task can be shortened with no loss of those aspects of the task that are important for training, then more practice trials can be achieved in the same amount of time" (p. 268). Shortening a task may require computer control, not student control, of display durations. Furthermore, Schneider, Vidulich, and Yeh (1982) recommend that training be conducted under mild speed stress; speed stress is difficult to achieve when the control of display duration is left to the student.

Research. Control of display duration was examined in four experiments. Tennyson and Park (1984) compared two strategies of computer-controlled display durations with a condition of student-controlled duration. In one computer-control strategy, display duration increased following correct responses and decreased following incorrect responses. In the other computer-control strategy, display duration decreased following correct responses and increased following incorrect responses. Posttest scores were higher when display duration increased following correct responses and decreased following incorrect responses, but there were no significant differences between the student-control strategy and either computer-control strategy.
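The superior adaptive rule reduces to a simple update after each response. The sketch below is an illustrative Python rendering of that rule, not the software used in the cited studies; the step size, bounds, and starting duration are arbitrary assumptions.

```python
# Illustrative sketch of response-sensitive display-duration control
# (assumed step size and bounds; not the procedure used in the cited studies).

def update_display_duration(duration, was_correct, step=2.0,
                            minimum=5.0, maximum=60.0):
    """Lengthen the display after a correct response, shorten it after an error."""
    if was_correct:
        duration += step      # give the successful student more time per trial
    else:
        duration -= step      # pace up the display after an incorrect response
    return max(minimum, min(maximum, duration))

# Example: a run of responses and the resulting trial durations (seconds).
duration = 20.0
for correct in [True, True, False, True]:
    duration = update_display_duration(duration, correct)
    print(duration)   # 22.0, 24.0, 22.0, 24.0
```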

Tennyson, Park, and Christensen (1985) and Tennyson, Welsh, Christensen, and Hajovy (1985) each compared two strategies for manipulating trial display duration. In one strategy, display duration increased following correct responses but was unchanged following incorrect responses. In the other strategy, display duration was controlled by the student. Posttest scores were higher for students who used the former strategy.

Two of these three experiments support the recommendation that the computer, rather than the student, control display duration. However, the results of these experiments also suggest that student control of display duration might be appropriate after the student has progressed beyond initial skill or knowledge acquisition. In each of these experiments, a student's response immediately terminated the trial display and initiated the feedback procedure. In the most effective strategy of each experiment, correct responses produced longer display durations. As the frequency of correct responses increased, each subject had more opportunity to respond before a trial was automatically terminated. Therefore, the students who were responding correctly essentially controlled their own trial durations. Tennyson, Park, and Christensen (1985) and Tennyson, Welsh, Christensen, and Hajovy (1985) showed that student control was not an effective strategy when it was unrelated to the student's correct responding.

All three experiments contradict the recommendations to decrease trial durations or train under speed stress. That is, if correct responses result in longer display durations, successful performance could make the program run slower and probably become less stressful, rather than quicker and more stressful.

Finally, one experiment appears to support student control of display duration. Belland, Taylor, Canelos, Dwyer, and Baker (1985) compared three strategies for controlling duration of a textual display. In one strategy, the student controlled the duration of each lesson frame. In another strategy, the frame duration equalled one second per line of text plus eight extra seconds per frame. In the third strategy, the frame duration equalled one second per line of text plus one extra second per frame. Posttest scores were higher for students using the student-control and the one-plus-eight-second strategy than for students using the one-plus-one-second strategy. While appearing to support the student-control recommendation, the results of the research conducted by Belland et al. (1985) merely indicate that one second per line plus one second per frame is an insufficient amount of time to read text.
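For concreteness, the two computer-paced conditions reduce to a one-line formula; the short sketch below simply computes the durations the two pacing rules would assign to a frame of a given length. The ten-line frame is an arbitrary example.

```python
# Frame duration under the pacing rules described by Belland et al. (1985):
# one second per line of text plus a fixed number of extra seconds per frame.

def frame_duration(lines_of_text, extra_seconds):
    return lines_of_text * 1.0 + extra_seconds

ten_line_frame = 10
print(frame_duration(ten_line_frame, 8))  # one-plus-eight rule: 18.0 seconds
print(frame_duration(ten_line_frame, 1))  # one-plus-one rule: 11.0 seconds
```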

The issue of controlling text duration may not require empirical research. Unless the goal of instruction is to increase a student's reading speed, text should not be presented faster than it can be read. However, if research is required, a better test of student control of text duration would be to compare pairs of students of comparable reading speed. One student in each pair should control the display duration for the pair, thereby creating an apparent computer-control condition for the other student.

Sequencing Instructional Material

Instructional designers must decide the sequence in which instructional events occur. For example, lessons could begin with presentations of information, follow with practice drills or questions, and end with remediation. Part tasks could be trained prior to whole tasks. The same subject matter could be trained in a tutorial or information-oriented lesson, a flashcard-type drill, or a simulation or game.

Guidelines. Instructional designers must decide whether the student, the computer, or the designer should determine the sequence of events. Authors of instructional design guidelines offer the following conflicting recommendations:

" always allow the student to control the sequence ofinstruction,

" never allow the student to control the sequence ofinstruction, and

" conditionally allow the student to control thesequence of instruction.

Kearsley (1986) recommends that the student control all aspects of sequencing. Specifically, students should be able to get to any lesson or part of the program from a main menu and should be able to decide the order in which they would like to complete the lessons. However, Kearsley adds that the computer should inform students if they choose to skip important prerequisite information. Eberts and Brock (1984) advise that a program should promote a strategy of active search through instructional material, and therefore, should be designed so students can freely access different parts of the material. Buehner (1987) says that instruction should include opportunities for the student to adjust the order of presentation. Alessi and Trollip (1985) recommend that designers provide the capability to browse through the questions in a computer-based test.

Jonassen and Hannum (1987) advocate the opposite position. Claiming it is not effective for students to control the sequence of instruction, they advise: "Do not allow learners to determine the sequence of the instructional content," and "Do not allow an option for learners to skip examples" (p. 14).

However, Jonassen and Hannum (1987) contradict themselves when they advise designers to give "learners with high internal locus of control the ability to sequence the lesson content" (p. 13). Consequently, they suggest that instructional programs include a test for internal locus of control and permit students with a high degree of internal locus of control to sequence the lesson content for themselves. Jonassen and Hannum do not define internal locus of control in their article; however, the phrase was used originally to refer to the tendency of some individuals to believe that their own actions are responsible for the consequences of their behavior (Rotter, 1966). Jonassen and Hannum's recommendation assumes that students who believe they are the causes of their own reinforcers also know the most effective training sequences for themselves. Alessi and Trollip (1985) recommend that control of sequence be given to advanced students when the subject matter is simple, but not to less advanced students and not if the subject matter is complicated.

In the event that students are not permitted to control the sequence of instruction, effective sequences must be identified and programmed into the lesson. Authors of design guidelines recommend the following:

" information should be presented before practice,

" concrete material should be presented before abstractmaterial,

" chronological ordering of material should be used inpreference to random sequencing,

" the sequence of practice material should berandomized, and

* part-task training should sometimes precede whole-tasktraining.

Alessi and Trollip (1985) assert that programs are more efficient and more successful when they begin with the presentation of information. For initial acquisition of information, Jonassen and Hannum (1987) advocate sequences that begin with concrete, real-world experiences and proceed to more abstract experiences. MacLachlan (1986) advocates thematic organization of material and recommends chronological organization as the most effective theme for computer-based tutorials. In contrast, Jonassen and Hannum advise designers to randomize the presentation of practice material during computer-based drills.

Eberts and Brock (1984) recommend that early training should be mostly part-task training, and later training should contain a high percentage of whole-task training. However, Holding (1987) notes that the value of part-task training depends on the tradeoff between task difficulty and the degree of interdependence of the parts of a task. Large tasks that can be broken down into relatively independent parts are more appropriately trained with part-task training.

Research. There is very little research to support recommendations about the allocation of sequence control in CBI. Gray (1987) compared two strategies of sequence control in CBI. In one strategy, students could choose to see a review of each trial, but otherwise the trials were presented in a linear fashion. In the other strategy, students could choose to review the current trial, progress to the next trial, or branch anywhere in the lesson. Students using the branching strategy performed better on a posttest administered immediately following the instruction than the students using the linear strategy. However, there were no differences between groups on a second posttest administered one week after the lesson.

Except for the results of the follow-up posttest, this experiment appears to support the recommendation that students be allowed to control the sequence of instruction. However, the effectiveness of the linear strategy was not adequately assessed. The student-control strategy may have been merely better than a relatively ineffective strategy of trial sequencing. This does not contradict the possibility that an instructional designer could determine a sequence that is superior to the student-control strategy.

A demonstration that the student-control strategy is more effective than an ineffective training sequence does not refute a recommendation that the computer should control the instructional sequence. For example, Barsam and Simutis (1984) compared two trial sequencing strategies in a computer-based course on map contour line interpretation. In both strategies the computer presented a simplified contour map in the upper half of the screen and a three-dimensional depiction of the terrain in the lower half of the screen. The computer cursor could be placed at any location on the map and rotated. Subsequently, the three-dimensional view corresponding to the cursor's position and direction would appear on the lower half of the screen. In the student-control strategy, the student could select the placement and rotation of the cursor on each trial. In the computer-control strategy, the placement and rotation of the cursor was randomly selected on each trial. Students with high spatial ability test scores in the student-control condition had higher posttest scores than students with high spatial ability test scores in the computer-control condition. However, there were no differences between the two conditions for students with medium or low spatial ability test scores.

The results of this experiment cannot be taken to support recommendations for student control of trial selection for high ability students or recommendations against student control for low ability students. As with the Gray (1987) research, this experiment does not demonstrate the effectiveness of the comparison strategy for trial sequencing. It is possible that a random sequence of trials is an ineffective instructional strategy, and students of high spatial ability were simply able to design a lesson that was superior to random trial presentations, while students of low spatial ability were not.

No research was found to support Alessi and Trollip's (1985) recommendation that CBI should present information before practice. Lahey (1981) compared three presentation sequences of rules, examples, and practice trials in a concept learning task: rule-examples-practice, examples-rule-practice, and practice-examples-rule. According to Alessi and Trollip, the first and second sequences should be more effective than the third. However, posttest results revealed no differences among the three sequences.

No research was found to support the recommendations that CBI should present concrete material before abstract material, chronologically order the instructional material, or randomize the sequence of practice material. In fact, the results of Barsam and Simutis's (1984) research could be taken as refutation of the recommendation to randomize the sequence of practice material.

Wightman and Lintern (1985) reviewed empirical research on three strategies of part-task training: segmentation, fractionation, and simplification. In segmentation, a task is divided temporally or spatially into subtasks with identifiable end points. In fractionation, a task is divided into subtasks that are normally executed simultaneously. In simplification, a difficult task is made easier by reducing the complexity of stimuli and response requirements. Although many of the experiments that Wightman and Lintern reviewed did not employ computer-based instructional media, the results typically suggest that segmentation is an effective strategy of part-task training. However, the methods of fractionation and simplification were not necessarily more effective than whole-task training. The effectiveness of part-task training prior to whole-task training appears to be a function of the task partitioning strategy, an issue that may be independent of computer-based instructional design considerations.

A program of research by Tennyson and associates demonstrated the effects of response-sensitive trial sequencing strategies in a concept classification task (Park & Tennyson, 1980, 1986; Tennyson, Park, & Christensen, 1985; Tennyson, Welsh, Christensen, & Hajovy, 1985). In each trial of a concept classification task, a subject was required to identify the category to which a particular example belongs. Park and Tennyson (1980) compared a response-sensitive strategy with random presentation of trials. In the response-sensitive strategy, the trial following each incorrect response contained an example from the same category as the previous incorrect response. The response-sensitive strategy produced higher posttest scores with fewer training trials than the random presentation.

Then, Tennyson, Welsh, Christensen, and Hajovy (1985) compared two response-sensitive strategies. One strategy was identical to the Park and Tennyson (1980) strategy: the trial following each incorrect response contained an example from the same category as the previous incorrect response (hereafter called the previous response strategy). In the other strategy, the trial following each incorrect response contained an example from the same category as the previous example (hereafter called the previous example strategy). Posttest scores showed no differences between these two strategies.

Tennyson, Welsh, Christensen, and Hajovy (1985) then demonstrated that a third strategy was superior to the previous example strategy. In the new strategy, incorrect responses early in the lesson were followed by a trial containing an example from the same category as the previous example, but incorrect responses later in the lesson were followed by a trial containing an example from the same category as the previous response. Tennyson, Park, and Christensen (1985) affirmed and extended this finding by demonstrating the superiority of the new strategy over the previous response strategy as well.
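Read as selection rules, the three strategies differ only in which earlier event determines the category of the next example after an error. The following Python sketch is an illustrative rendering of that logic under assumed data structures (an item pool keyed by category and a simple trial counter); it is not the courseware used in these studies, and the handling of correct responses and the early/late threshold are assumptions.

```python
import random

# Illustrative sketch of the three response-sensitive sequencing rules
# discussed above. The item pool, category labels, and the threshold that
# separates "early" from "late" trials are assumptions for illustration.

def next_item(pool, last_item, last_response_category, was_correct,
              trial_number, strategy, early_phase_trials=10):
    """Select the next example after a response in a concept classification drill."""
    if was_correct:
        # The studies do not specify selection after correct responses;
        # pick at random from the whole pool (assumption).
        return random.choice([item for cat in pool for item in pool[cat]])

    if strategy == "previous_response":
        category = last_response_category     # category the student (wrongly) chose
    elif strategy == "previous_example":
        category = last_item["category"]      # category of the item just shown
    else:  # hybrid: previous example early in the lesson, previous response later
        if trial_number < early_phase_trials:
            category = last_item["category"]
        else:
            category = last_response_category
    return random.choice(pool[category])
```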

Park and Tennyson (1986) found no differences between the previous example and previous response strategies on a computer-based multiple-choice posttest (reproducing the results of Tennyson, Welsh, Christensen, and Hajovy, 1985). However, subjects using the previous example strategy performed significantly better than subjects using the previous response strategy on a paper-and-pencil posttest of concept definitions.

These results demonstrate that a method of response-sensitive (but computer-controlled) trial sequencing can produce better learning than random sequences. However, it is unclear if the computer-control method is superior to student control of trial sequencing. Furthermore, when programming response-sensitive sequencing strategies for other kinds of tasks, instructional designers will find very little guidance in the results of these experiments.

Sequencing Levels of Difficulty

For any given subject matter, instructional material may be organized and presented at various levels of difficulty. The designer must determine what aspects of a task or subject matter are difficult for students and how the content should be presented for optimal training effectiveness.

Guidelines. Authors of design guidelines make several conflicting recommendations about the sequencing of various levels of difficulty in CBI, including the following:

" simple material should be presented first andcomplicated material should be presented later,

" material that the student already knows should not bepresented,

* the difficulty level of material should not varyduring the session,

" the student should be allowed to control the diffi-culty level, and

" the student should not be allowed to control thedifficulty level.

Most authors recommend that instructional presentations begin with simple material and progress to more complex material. Wheaton, Rose, Fingerman, Korotkin, and Holding (1976) assert that mastery of a task will be quicker if errors are prevented in the first few trials. Eberts and Brock (1984) agree by recommending that the early stages of training be designed so the accuracy rate of responding is relatively high. Wager and Wager (1985) recommend that designers sequence material from simple to complex and build on what the student already knows. Jonassen and Hannum (1987) advise designers to adapt the context of more difficult tasks to a content area familiar to the student and have the student progress from easy to more difficult problems. Finally, Alessi and Trollip (1985) state that computer-based tutorials should show students how to relate new information to what they already know.

These recommendations conflict with MacLachlan's (1986) advice to avoid the presentation of "information that students already know" and "provide a means of branching around information which is not needed by certain students" (p. 67). Similarly, the advice of Alessi and Trollip (1985) to keep item difficulty fairly constant throughout a computer-based drill session conflicts with recommendations to progress from easy to difficult problems. Ensuring errorless performance may be incompatible with keeping item difficulty constant throughout a session.

Kearsley (1986) advises designers to let the student determine the difficulty level of a presentation. However, other authors of design guidelines disagree. Asserting that student control of difficulty level is ineffective, Eberts and Brock (1984) recommend that the computer be programmed to determine the appropriate level of difficulty. Similarly, Jonassen and Hannum (1987) recommend the use of courseware that analyzes each student's responses and adjusts the difficulty level accordingly.
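In practice, the computer-controlled alternative amounts to a small adaptive rule that maps recent performance onto a difficulty setting. The sketch below is a minimal Python illustration of that idea, assuming an arbitrary accuracy window and thresholds; none of these values come from the guidelines cited above.

```python
from collections import deque

# Minimal sketch of courseware-controlled difficulty adjustment: raise the
# difficulty when recent accuracy is high, lower it when accuracy is low.
# The window size and thresholds are illustrative assumptions.

class DifficultyAdapter:
    def __init__(self, levels=5, window=10):
        self.level = 1                      # start with the easiest material
        self.levels = levels
        self.recent = deque(maxlen=window)  # rolling record of correct/incorrect

    def record(self, was_correct):
        self.recent.append(was_correct)
        if len(self.recent) < self.recent.maxlen:
            return self.level               # wait for a full window of responses
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy >= 0.9 and self.level < self.levels:
            self.level += 1                 # student is succeeding; step up
            self.recent.clear()
        elif accuracy <= 0.6 and self.level > 1:
            self.level -= 1                 # student is struggling; step down
            self.recent.clear()
        return self.level
```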

Research. None of the recommendations about the sequencing of different levels of difficulty are supported by computer-based instructional strategies research. Only one report was located that addresses the organization of material according to difficulty level. Tennyson, Welsh, Christensen, and Hajovy (1985) compared two types of content organization: taxonomic and schematic. These authors define a taxonomic structure as the organization of lesson content that might be recommended by a subject matter expert. In a schematic structure, the lesson content is organized into a hierarchy of prerequisite skills and knowledge. An initial experiment revealed no differences between the two content organization strategies. In a second experiment, preliminary training in the operation of the computer was added to each content organization condition. Results of the second experiment demonstrated the superiority of schematic over taxonomic structure. This is weak support for the recommendation to present easy material first and difficult material later.

Graphics

Many instructional objectives can be met through the presentation of textual material. In some cases, however, text alone may be insufficient to achieve the training objectives; additional nontextual graphics may be required. In other cases, graphics may enhance training effectiveness by illustrating information more clearly and quickly than comparable textual presentations. Instructional designers should understand how computer graphics modify the training effectiveness of CBI. Such an understanding may support decisions about when graphics should be used and what sorts of graphics should be used to achieve particular training objectives.

Guidelines. Authors of instructional design guidelines are more consistent in their recommendations about the use of graphics than in other areas of computer-based instructional design. They offer the following guidelines:

• graphics should be used often,
• graphics should be used for highlighting important material,
• graphic information should be placed on the same screen as, and to the left of, textual material,
• high quality graphics are usually unnecessary and should be avoided, and
• animation should be used.

Several authors advise designers to use graphics rather than text whenever possible (Kearsley, 1986; Kearsley & Frost, 1985). Noting that illustrations and pictures are remembered more easily than words, MacLachlan (1986) recommends that designers use concrete rather than abstract representations. Braden (1986) advocates "a balance between iconic and digital displays, appealing to both right and left brain hemispheres" (p. 22). Buehner (1987) recommends a combination of verbal (e.g., text, voice) and spatial (e.g., graphics, pictures) material in CBI.

Several authors suggest the use of graphics to highlight important material (Jonassen & Hannum, 1987; MacLachlan, 1986). However, Jonassen and Hannum (1987) warn designers never to use flashing text nor more than three types of highlighting cues in any lesson. Similarly, Kearsley and Frost (1985) warn designers to use color sparingly for emphasis. Instead, they recommend using variations of type styles and sizes to highlight material.

Alessi and Trollip (1985) assert that graphic information should be presented on the same display as corresponding textual information. Wager and Wager (1985) agree, but suggest that when it is impossible for all the information to be on the same screen, the program should allow switching between the textual and graphic displays. Buehner (1987) recommends that spatial material be placed to the left of textual material to facilitate processing by the right hemisphere of the brain.

Some authors feel that excessive detail or realism should be avoided in graphic presentations (Alessi & Trollip, 1985) and that medium resolution graphics are sufficient (Jonassen & Hannum, 1987). Alessi and Trollip advise designers to provide "just as much detail as is necessary to convey the necessary information" (p. 194), but give no guidance about how to determine the optimal level of detail for a specific display. Claiming that lack of visual detail arouses curiosity and enhances training effectiveness, MacLachlan (1986) asserts that "one way to induce [curiosity] is to present the human subject with a blurred picture" (p. 65).

Finally, MacLachlan (1986) recommends animated graphics during tutorials. Kearsley (1986) recommends animation to convey sequence or cause-and-effect relationships.


Research. None of the research on the effects of graphics in CBI supports a recommendation to use graphics. For example, Reid and Beveridge (1986) examined the effects of illustrations accompanying text in a computer-based science lesson. Posttest scores showed no differences between subjects who viewed pictures plus text and those who viewed text only.

Surber and Leeder (1988) compared three types of feedback messages in a computer-based spelling drill. In a graphic feedback condition, correct responses were followed by a 3-second, multicolored, cartoon-style display of a word such as "WOW," "SUPER," or "GOOD WORK." Incorrect responses were followed by the word "NO" in the same graphic style and then the correct spelling of the word. In one of the nongraphic feedback conditions, responses were followed by the same words as in the graphic condition; however, the feedback appeared as normal text characters rather than graphic displays. In the other nongraphic feedback condition, correct responses were followed only by a presentation of the next trial, and incorrect responses were followed by a presentation of the correct spelling. The different forms of feedback, and consequently, the graphics, had no effect on the posttest scores or on the tendency of students to return to use the computer a second time.

Research support for the recommendation to use animation is mixed. Moore, Nawrocki, and Simutis (1979) presented computer-based lessons on audio psychophysiology to three groups of subjects. The lessons differed among groups in the level of complexity of graphic displays. For one group, graphics consisted of static alphanumeric characters and schematic drawings. For another group, graphics consisted of static line drawings of ear physiology. For the third group, graphics consisted of animated line drawings. No significant differences were found among groups on any of four posttests or in the time required to complete lessons.

Rieber and Hannafin (1986) compared textually presented orienting instructions with an animated version of the same instructions and found no differences. As noted previously, research suggests that orienting instructions may not enhance training effectiveness. Therefore, Rieber and Hannafin's research may not constitute a valid assessment of the utility of animation in CBI.

Animated graphics may be more useful than still graphics for training the perception of full-motion visual stimuli. McDonald and Whitehill (1987) found that full-motion graphics had a more beneficial effect on posttest performance than still graphics for training the visual recognition of moving stimuli.

No literature was found that addresses recommendations about the placement of graphics on the display or the use of graphics for highlighting. Furthermore, no literature was found that addresses the assertion that high quality graphics are unnecessary (Alessi & Trollip, 1985; Jonassen & Hannum, 1987; MacLachlan, 1986).

Review of Material

Instructional presentations typically conclude with a review of instructional material. Remediation for an incorrect answer usually consists of a review of instructional segments pertaining to the test question. Instructional designers are concerned about whether the student or the computer should control the presentation of such reviews.

Guidelines. Some authors of instructional design guidelines recommend that students be allowed to control the presentation of reviews. For example, Cohen (1985) suggests that CBI programs be designed to enable students to go back through a lesson and reexamine the questions. Alessi and Trollip (1985) recommend that computer-based tests be designed in such a way that students can mark questions for subsequent review. Jonassen and Hannum (1987) tell the designer to give the student the option of reviewing material before answering questions.

In contrast, other authors recommend that the designer or the computer control the presentation of reviews. For example, Jonassen and Hannum (1987) advise designers to "select items again at intervals for review" (p. 10). Wager and Wager (1985) suggest a spaced review after the student first masters the material. Braden (1986) recommends that whole displays or major segments of displays be repeated to reinforce learning.

Research. The research shows no differences between student-control and computer-control review strategies. Ho et al. (1986) compared three review strategies during CBI: computer-controlled review, student-controlled review, and no review. In the computer-control strategy, a video summary was presented after each instructional segment. In the student-control strategy, each student chose whether or not to view the summary. In the no-review strategy, each instructional segment was followed immediately by the next instructional segment. Students using the two review strategies had significantly higher posttest scores than students using the no-review strategy; however, there were no differences between the computer-controlled and student-controlled review strategies.

Hannafin and Colamaio (1986) compared three strategies of review during a computer-based lesson on cardiopulmonary resuscitation: designer-imposed strategy, learner-selected strategy, and linear strategy. For each strategy, 12 questions were presented during the lesson. In the designer-imposed strategy, if a question was answered correctly, the lesson advanced to the next segment. If the question was answered incorrectly, a review of the appropriate video segment was shown and the question was repeated. If the question was answered incorrectly on the second attempt, the correct answer was presented and the lesson advanced to the next segment. In the learner-selected strategy, the students were given options of reviewing each video segment, repeating each question, or advancing to the next segment regardless of whether their answers were correct or incorrect. Finally, in the linear strategy, students observed a preset sequence of video segments, and responses to questions were followed only by feedback about whether their answers were correct or incorrect. No video segment reviews, question repetitions, or sequence choices were presented to the students in this condition. Subjects in the designer-imposed and learner-selected conditions scored higher on a posttest than did subjects in the linear condition. However, there were no differences between scores for the designer-imposed and learner-selected strategies.
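The designer-imposed strategy is essentially a two-attempt remediation loop. The Python sketch below restates that branching logic in code form; the function and helper names (present_segment, ask, show_review, show_answer) are placeholders, not part of the courseware described by Hannafin and Colamaio.

```python
# Sketch of the designer-imposed review strategy described above:
# correct -> advance; incorrect -> review and re-ask; incorrect again ->
# show the answer and advance. Helper functions are assumed placeholders.

def run_designer_imposed_lesson(segments, questions,
                                present_segment, ask, show_review, show_answer):
    for segment, question in zip(segments, questions):
        present_segment(segment)
        if ask(question):                 # first attempt answered correctly
            continue                      # advance to the next segment
        show_review(segment)              # review the relevant video segment
        if ask(question):                 # second attempt
            continue
        show_answer(question)             # two misses: present the correct answer
        # then advance to the next segment
```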

Finally, Kinzie and Sullivan (1988) compared two strategies for review following incorrect responses in a computer-based science lesson. A learner-control strategy permitted the student to decide whether to review material or to proceed with the lesson. A program-control strategy required each student to review the material. Posttests showed no differences between the two strategies.

Summary

A few of the experiments comparing methods of information presentation have produced results that can help designers of CBI. There is some support for increasing the stimulus display duration following correct responses so that advanced students may gradually take control of stimulus display duration (Tennyson & Park, 1984; Tennyson, Park, & Christensen, 1985; Tennyson, Welsh, Christensen, & Hajovy, 1985), but this contradicts the recommendations made by Alessi and Trollip (1985), Eberts and Brock (1984), and Schneider et al. (1982). Tennyson's research also demonstrates an effective trial sequencing strategy for concept acquisition (Park & Tennyson, 1980, 1986; Tennyson, Park, & Christensen, 1985; Tennyson, Welsh, Christensen, & Hajovy, 1985) and provides weak support for sequencing lesson material from simple to complex (Tennyson, Welsh, Christensen, & Hajovy, 1985). Wightman and Lintern's (1985) review of empirical research suggests that part-task training should precede whole-task training only for tasks that can be partitioned temporally and recombined in a backward chaining procedure. Finally, one experiment demonstrated that animated graphics are more effective than static graphics for training in the perception of dynamic visual stimuli (McDonald & Whitehill, 1987).

Many of the published guidelines and recommendations are either not supported or not addressed by research. Experiments comparing lessons with and without orienting instructions or objectives have produced contradictory results (Hannafin et al., 1986; Ho et al., 1986; Rieber & Hannafin, 1986). Furthermore, the research does not address the assertion that orienting instructions and objectives limit the amount or breadth of learning. Research support for student control of trial sequencing comes from experiments with possibly inadequate comparison conditions (Barsam & Simutis, 1984; Gray, 1987). One experiment fails to support the recommendation to present information before practice (Lahey, 1981). No research was located that addresses recommendations to present concrete material before abstract material or to use temporal versus random sequencing strategies (cf. Barsam & Simutis, 1984). Recommendations to avoid presenting material that the student already knows and to avoid varying the difficulty level of material are not addressed by empirical research. There is also no research that addresses recommendations about student control of difficulty level, the use of graphics, graphics highlighting, placement of graphics, or quality of graphics. Finally, research shows no differences between student control and computer control of reviews.

Questions

One characteristic of nearly all training media is the presentation of questions to test the student's understanding of the subject matter presented during instruction. Textbooks, educational films, tapes, and lecturers frequently query students in an attempt to introduce lesson material, review material, or test for comprehension of material. Whether and when to present questions, how to present questions, and how students should respond to questions are issues that must be addressed by designers of CBI.

Interactivity

Eberts and Brock (1984) advise that training systems should be designed such that students can interact with the training program. Hannafin (1985) suggests that the effective ingredients of interactivity are question, response, and feedback procedures. Response and feedback procedures are discussed in later sections. The importance of questions is discussed in the following paragraphs.

Guideline. Most authors of instructional design guidelines agree that questions are essential in CBI. For example, Jonassen and Hannum (1987) advise designers to ask questions about material before, during, or after instruction. Wager and Wager (1985) assert that postlesson questions increase retention of material.

Research. Experiments designed to compare computer-based presentations with questions to those without questions tend to support the recommendation to include questions in CBI. There is one exception. McMullen (1986) compared three kinds of CBI programs that differed in the amount of interactivity and feedback. An information-only program presented textual material without any questions. A flashcard-type drill program presented questions after the text presentation. Subjects were required to enter an answer, but no feedback was provided for responses. An educational game program was identical to the flashcard-type drill, except feedback was provided for responses. Results showed no differences in posttest scores among the three programs.

On the other hand, Schaffer and Hannafin (1986) compared four versions of a videotape lesson that varied in the level of interactivity and complexity of feedback. In one version, subjects viewed the videotape without questions. In a second version, subjects viewed the videotape and were presented with questions but were provided no feedback for their responses. In a third version, subjects viewed the videotape and questions and were informed whether they were correct or incorrect in their responses. Finally, in a fourth version, subjects viewed the videotape and questions, were provided with feedback, and, when incorrect, were shown a review of the section of videotape containing the information. Overall, posttest scores increased as a function of interactivity and feedback complexity.


Two experiments showed that presenting questions during a lesson is more effective than highlighting the same material. However, the effect only holds for the material that was highlighted or questioned. Schloss, Schloss, and Cartwright (1985) presented two versions of a computer-based lesson to groups of subjects. One version asked open-ended, sentence completion questions at various intervals during the lesson. The other version presented highlights, instead of questions, at various intervals during the lesson. A highlight was defined as a sentence entailing a question and its corresponding answer from the other version of the lesson. All subjects were given two posttests after viewing the lesson. One test contained 16 multiple-choice questions paralleling 16 of the questions (or highlights) from the previous lesson. The other test contained 16 questions paraphrased from the text of the previous lesson. Subjects who viewed the lesson with questions performed better on the former posttest than did the subjects who viewed the lesson with the highlights. There were no differences between groups on the latter posttest.

Schloss, Sindelar, Cartwright, and Schloss (1986) replicated the above study using multiple-choice questions instead of open-ended questions. Again, subjects who viewed the lesson with questions performed better than subjects who viewed the lesson with the highlights on a posttest containing multiple-choice questions paralleling the questions (or highlights) from the previous lesson.

Prelesson Questions

Prelesson questions may function in a manner similar to orienting instructions or statements of objectives by preparing the student for the forthcoming lesson material. Recommendations and research about orienting instructions and objectives have already been discussed. However, some authors of instructional design guidelines have made specific recommendations about the use of prelesson questions.

Guidelines. Some authors recommend the use of prelesson questions; others recommend that prelesson questions not be used because they may interfere with comprehensive learning.

MacLachlan (1986) asserts that prelesson questions enhance learning. Jonassen and Hannum (1987) recommend that lessons begin with a pretest. They advise instructional designers to challenge students to answer questions presented at the beginning of a lesson.


In contrast, Wager and Wager (1985) believe that prelesson questions serve to focus attention on information related to answering the questions, but may hinder the retention of material that is not introduced by the questions. Hannafin and Hughes (1986) agree that learners tend to focus effort on information related to prelesson questions to the detriment of information not presented in the prelesson questions.

Research. Dalton, Goodrum, and Olsen (1988) compared three strategies for presenting prelesson questions during a computer-based lesson on division rules. Subjects in one group received a pretest of 20 items with feedback indicating whether each response was correct or incorrect. Subjects in a second group received the same pretest until five items were missed. After missing five items, the pretest ended and the lesson began. Subjects in a third group received no pretest. Posttest scores were significantly higher for subjects in the second group than in the first and third groups. Dalton et al. speculate that the 20-item pretest produced too much failure prior to the lesson and may have decreased student motivation. However, the brief pretest was superior to no pretest, suggesting that limited prelesson questioning enhances learning.

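The abbreviated pretest in the second condition can be expressed as a simple termination rule: stop questioning after the fifth miss. The following sketch is illustrative only; the item format, answer checking, and console interaction are assumptions rather than details from Dalton et al.

def abbreviated_pretest(items, miss_limit=5):
    # items: list of (question, answer) pairs; returns the number answered correctly.
    misses = 0
    correct = 0
    for question, answer in items:
        response = input(question + " ").strip().lower()
        if response == answer.lower():
            print("Correct.")
            correct += 1
        else:
            print("Incorrect.")
            misses += 1
            if misses >= miss_limit:
                break      # end the pretest and begin the lesson
    return correct
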
Question Types

Most computer-based instructional programs use multiple-choice questions, typically for the convenience of programming response processing and remediation routines. Yet there are many other types of questions that can be presented during CBI.

Guidelines. Authors of instructional design guidelines recommend that designers of CBI include the following types of questions:

•  application questions,
•  discrimination questions, and
•  rhetorical questions.

Jonassen and Hannum (1987) recommend questions that require students to think about the application of the lesson material. Alessi and Trollip (1985) advocate questions that require the student to apply a rule or principle to a new situation rather than to recall or recognize answers. Wedman and Stefanich (1984) assert that test items should require the student to apply principles (or perform procedures) in ways consistent with how principles will be applied (or procedures performed) outside the learning situation. For concept acquisition, Wedman and Stefanich recommend test items that require the student to discriminate between examples and non-examples of each concept. Jonassen and Hannum (1987) also recommend the use of rhetorical questions.

Research. Experiments evaluating various types of questions have not addressed the application, discrimination, and rhetorical questioning strategies recommended by Alessi and Trollip (1985), Jonassen and Hannum (1987), and Wedman and Stefanich (1984). Dalton and Hannafin (1987) found that subjects learned more from a lesson emphasizing application of concepts than from one that presented definitions of concepts without describing their application. However, this experiment did not compare question types. Similarly, Tennyson and his associates (Park & Tennyson, 1980, 1986; Tennyson, Park, & Christensen, 1985; Tennyson, Welsh, Christensen, & Hajovy, 1985) used a discrimination procedure similar to the one recommended by Wedman and Stefanich (1984). However, the focus of the Tennyson research was on trial sequencing strategies and not the effectiveness of discrimination questions in concept acquisition.

Schloss et al. (1986) compared three ratios of higher cognitive questions to factual questions in a computer-based lesson. They defined a higher cognitive question as one that requires a student to generate an answer that is not presented directly in the previous text. A factual question requires a student to recognize or recall information that is presented directly in the previous text. One version of the lesson contained 15 higher cognitive questions and 45 factual questions; a second version contained 30 higher cognitive questions and 30 factual questions; and a third version contained 45 higher cognitive questions and 15 factual questions. Posttest results revealed no differences among the three ratios.

Merrill (1987) compared computer-based lessons with high-level questions to lessons with low-level questions. Lessons with high-level questions produced higher posttest scores than lessons with low-level questions. Unfortunately, Merrill provides no definition or explanation of a high-level or low-level question. So, while Schloss et al. (1986) demonstrated that there is no need to distinguish between higher cognitive and factual questions when designing CBI, Merrill demonstrated a need to distinguish between high-level and low-level questions; however, insufficient explanation of that distinction reduces the usefulness of his research results.


Question Placement

In addition to decisions about whether to ask questions and the type of questions to ask, designers must decide when and where to present questions during a computer-based lesson. The issue of prelesson questions has been discussed previously; this section discusses the placement of questions within a lesson.

Guidelines. Authors of instructional design guidelines provide very little guidance about the placement of questions within a lesson. Alessi and Trollip (1985) suggest that questions should occur "frequently" during a lesson, and that only one question should appear on each display during a computer-based test. Wager and Wager (1985) recommend that each question be placed after the text passage or diagram to which it refers.

Research. Very little research has been conducted to support decisions about the placement of questions. Schloss et al. (1985) investigated the placement of questions in a computer-based lesson by comparing four ratios of questions to lesson frames. One program presented a question after every lesson frame (1:1); a second program presented three questions after every three lesson frames (3:3); a third program presented five questions after every five frames (5:5); and a fourth program presented all 90 frames before presenting all 90 questions (90:90). There were no differences in posttest scores due to the different question/frame ratios. The results of this research do not support the recommendation that questions should occur frequently during a lesson (Alessi & Trollip, 1985).

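The question/frame ratios compared by Schloss et al. (1985) can all be generated by a single interleaving routine parameterized by block size. The sketch below is a hedged illustration of that idea; frames and questions are assumed to be parallel lists of strings, which is an assumption about the materials rather than a description of the original lesson.

def interleave(frames, questions, block_size):
    # Yields ('frame', ...) and ('question', ...) events in alternating blocks.
    # block_size=1 reproduces the 1:1 ratio, block_size=3 the 3:3 ratio, and
    # block_size=90 the 90:90 condition in which all frames precede all questions.
    assert len(frames) == len(questions)
    for start in range(0, len(frames), block_size):
        for frame in frames[start:start + block_size]:
            yield ("frame", frame)
        for question in questions[start:start + block_size]:
            yield ("question", question)
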
Number of Questions or Problems

Most CBI tutorials, drills, and tests are designed to be interactive by presenting discrete questions for students to answer or problems for students to solve. Simulations also may be divided into discrete segments for practice and feedback. Designers of CBI should present as many opportunities for interactivity as are necessary to achieve the training objective. However, it is not obvious how the number of questions or problems should be determined. The number could be fixed for all students receiving the training, or the number could be based upon each student's performance during the lesson. As with other decisions during CBI development, the student could be given control of the number of questions or problems presented.


Guidelines. Authors of instructional design guidelines make contradictory recommendations about the number of questions or problems required in CBI. Some authors claim that the student should control the number of questions or problems; others claim that the computer should be programmed to control adaptively the number of questions or problems.

Jonassen and Hannum (1987) advise designers to let students select the number of practice problems they want to work in a lesson. Specifically, they recommend that (a) the computer should ask students after each practice problem whether they want to work another practice problem or to discontinue the lesson, and (b) students should be allowed to stop working practice problems and return to other parts of the lesson whenever they choose. Similarly, Kearsley (1986) asserts that programs should allow students to skip questions.

Jonassen and Hannum (1987) also advise designers to provide more examples and exercises for students with low prior knowledge or low within-lesson performance; furthermore, they suggest that the number should be based upon student achievement during the lesson. The computer could either advise students to solve an appropriate number of problems, or it could simply present the appropriate number.

Research. Empirical research tends to support Jonassen and Hannum's latter recommendation about controlling the number of problems in CBI. Specifically, it appears most effective to program the computer to determine the number of problems necessary for each student to master the content. However, if the computer simply informs the students of the number of problems necessary for mastery, students may effectively select the number of problems themselves.

Tennyson (1980) compared three strategies for selecting the number of problems in a concept acquisition task. In the student-control strategy, the student decided when to stop the practice problems and take the posttest. In the computer-control strategy, the computer selected the number of problems based upon an assessment of the student's pretest and lesson performance. In the third strategy, called student control with advisement, the computer advised the student about the number of problems necessary for mastery based upon an assessment of the student's pretest and lesson performance, but the student decided when to stop the problems and to take the posttest. Students who were advised about the number of problems had higher posttest scores than students who controlled the number of problems without advisement. There were no differences in posttest scores between students who were advised and students whose problems were controlled by the computer. However, students in the advisement condition selected fewer problems and took less time than students in the computer-control condition.

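A rough sense of the advisement strategy can be conveyed in a few lines of code. The sketch below is a simplified stand-in: the mastery estimate is a plain proportion-correct rule, not the Bayesian algorithm of Rothen and Tennyson (1978), and the criterion, minimum attempts, and advice wording are illustrative assumptions.

def advise(correct, attempted, criterion=0.85, min_attempts=5):
    # Return advice about how many more practice problems to work; the student
    # remains free to ignore the advice and take the posttest at any time.
    if attempted < min_attempts:
        return "Work at least %d more problems before deciding." % (min_attempts - attempted)
    if correct / attempted >= criterion:
        return "You appear to have reached mastery; you may stop and take the posttest."
    # Estimate how many additional correct answers would raise the proportion
    # correct to the criterion.
    needed = 0
    c, a = correct, attempted
    while c / a < criterion:
        c, a, needed = c + 1, a + 1, needed + 1
    return "You are advised to work about %d more problems." % needed
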
Tennyson (1981) and Johansen and Tennyson (1983) replicated this experiment with subjects of different ages and with a rule learning task instead of a concept acquisition task. The results were reproduced. From the Tennyson research, designers of CBI might conclude that student control over the number of problems is effective only if the students are advised about the number of problems to select. Results reported by Goetzfried and Hannafin (1985) appear to contradict this conclusion.

Goetzfried and Hannafin (1985) compared three versions of a computer-based lesson in mathematics. In the student-control version, the students controlled the presentation of the problems, but they were advised to solve at least four problems before advancing in the lesson. In the adaptive computer-control version, the students did not control the presentation of the problems; rather, they received additional instruction or problems according to the accuracy of their responses during the lesson. In the linear computer-control version, the computer presented a fixed sequence and number of problems. In this version, the students were not permitted to respond; instead, the problems were presented for student viewing, and the correct solution was presented on the following frame. There were no differences in posttest scores among these three versions, and the linear computer-control version took significantly less time to complete.

While the results of the Goetzfried and Hannafin experiment seem contradictory to those of the Tennyson research, the two procedures differed substantially. First, the algorithms for adaptive selection of the number of problems were different. In the Goetzfried and Hannafin (1985) computer-control/adaptive condition, "each student was required to solve correctly all four problems in each section before advancing to the next divisibility rule" (p. 274). The Tennyson research used a complex algorithm developed from Bayesian probability theory (see Rothen & Tennyson, 1978). Consequently, the advisement procedures were different. Goetzfried and Hannafin advised each subject to "solve at least four problems correctly before advancing to the next section" (p. 274). Tennyson's advice came from the Bayesian algorithm. Finally, Goetzfried and Hannafin's subjects were seventh grade students enrolled in remedial mathematics classes; Tennyson's subjects were average high school and college students.


Answering Questions

The student's response in CBI might be an important variable in training effectiveness. Multiple-choice questions, the most common type, require students to select an answer by typing a single character or moving a cursor to the selected answer. Fill-in-the-blank questions require more typing. Simulations may require student responses of an entirely different nature.

Guidelines. Authors of instructional design guidelines offer the following, somewhat incompatible, recommendations about answering questions during CBI:

•  the student should be permitted to see the answers to questions before responding,
•  the student should not be permitted to see the answers to questions before responding,
•  overt responses are not necessary and should not be required,
•  overt responses are necessary and should be required,
•  the student should be permitted to change an answer at any point during a quiz, and
•  the student should control the sequence of questions.

Alessi and Trollip (1985) recommend that designers allow students to see the answer to a question upon request during computer-based drills. However, Jonassen and Hannum (1987) suggest that (a) the student should be required to generate an answer first and then receive feedback, and (b) the student should not be permitted to look back through the presentation for answers.

Jonassen and Hannum (1987) note that students can respond covertly as well as overtly. Asserting that overt responding is not necessary for learning, they recommend that designers give directions to students to stimulate covert responding. However, they contradict themselves when they state that "responses requiring multiple keypresses are required for deeper/more meaningful mental processing" (p. 11). Wager and Wager (1985) advise designers to ensure that the student make a substantive response before being shown the answer.

Kearsley (1986) recommends that programs permit students to change their answers to any question at any point during a pretest, posttest, or mid-lesson quiz. Furthermore, he advocates that programs permit students to back up and skip questions during pretests, posttests, and exams.


Research. None of the recommendations about student responses are addressed by computer-based instructional strategies research. One experiment suggests that a brief forced response-delay will enhance scores on multiple-choice tests. Stokes, Halcomb, and Slovacek (1988) presented computer-based quizzes to three groups of students in a psychology course. The groups differed in the period of forced response-delay following the presentation of a multiple-choice question: 0 sec, 30 sec, and 60 sec. Subjects in the 30-sec group had significantly higher quiz scores than subjects in the other two groups. However, there were no differences among groups on final examination scores or course grades. The brief response-delay appeared to improve immediate recognition of the correct answers, but it did not affect retention over a longer period.

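A forced response-delay is straightforward to implement: display the item, then refuse to accept a response until the delay has elapsed. The console-based sketch below is illustrative only; the option format is an assumption, and the 30-second default simply mirrors the most effective condition reported by Stokes et al. (1988).

import time

def ask_with_delay(question, options, delay_seconds=30):
    # options: list of (label, text) pairs, e.g., [("a", "first choice"), ...].
    print(question)
    for label, text in options:
        print("  %s. %s" % (label, text))
    time.sleep(delay_seconds)      # wait out the forced delay before accepting a response
    return input("Your answer: ").strip().lower()
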
Summary

The recommendation to include questions in CBI is generally supported by the research literature (Schaffer & Hannafin, 1986; Schloss et al., 1985; Schloss et al., 1986; but cf. McMullen, 1986). However, there is very little research to guide designers of CBI in making decisions about specific question and response strategies. There is some evidence that a brief prelesson quiz will enhance posttest performance (Dalton et al., 1988). Merrill (1987) has shown that high-level questions are superior to low-level questions, although the distinction between the two is unclear. Tennyson's research suggests that students can effectively control the number of practice problems if the computer is programmed to (a) analyze each student's achievement during a lesson and (b) present advice about the appropriate number of problems required for mastery (Johansen & Tennyson, 1983; Tennyson, 1980; Tennyson, 1981). Finally, there is some evidence that a brief, forced delay of response following the presentation of a multiple-choice question increases the probability that the student will answer correctly (Stokes et al., 1988).

Most of the guidelines and recommendations about question and response strategies remain unsupported by empirical research. No research was located to evaluate the relative effectiveness of application, discrimination, or rhetorical questions. There is no experimental support for presenting questions "frequently" during a lesson. No research was located to support the recommendations to permit students to see answers or to prevent students from seeing answers before responding. No experiments were found that address the issue of overt versus covert responding. Finally, no experiments were found that address recommendations to permit students to change answers at any point during a quiz or to control the sequence of questions.

Programming Response Feedback and Remediation Strategies

If interactivity is a critical element of CBI, decisions about feedback and remediation are as important as decisions about the presentation of material and questions. Designers must determine what sorts of events should follow correct and incorrect responses, when these events should occur, and how these events should be presented on the screen. This section presents some of the guidelines and empirical research pertaining to feedback for correct and incorrect responses, remediation strategies, latency of feedback, and placement of feedback.

Feedback for Correct Responses

Instructional designers must decide what sorts of events, if any, should follow correct responses by students. The designer must decide if and how the computer should reinforce correct responding.

Guidelines. Authors of instructional design guidelines make the following three recommendations about feedback for correct responses:

•  feedback is unnecessary and should not be used,
•  feedback should be brief, and
•  feedback should explain why the responses are correct.

Cohen (1985) asserts that feedback after a correct response is not as important as feedback after an incorrect response in CBI. However, Wager and Wager (1985) recommend a short affirmation of each correct response. They warn designers to avoid correct response feedback that is time-consuming. Conversely, Jonassen and Hannum (1987) warn designers to avoid feedback that simply indicates whether an answer is correct or incorrect. Instead, they recommend that correct answers be followed by feedback indicating that the answer is correct and explaining why it is correct. Wager and Wager (1985) suggest that the computer reinforce correct responses by displaying the answer in the context of the item whenever possible.

Research. No research was located that addresses recommendations about feedback for correct responses in CBI.


Feedback for Incorrect Responses

As with feedback for correct responses, designers of CBI must determine the most effective consequences of incorrect responses during a lesson. In addition to informing students that a response is incorrect, the computer is capable of providing remedial training or instruction. Therefore, it is important to determine the most effective methods for remediating errors in CBI.

Guidelines. Authors of instructional design guidelines provide the following recommendations for feedback and remediation of errors in CBI:

•  corrective feedback is necessary and should be provided,
•  feedback should be specific to the type of error,
•  novel, entertaining, or auditory stimuli should not be used as feedback for incorrect responses, and
•  multiple trials should be presented when an item is missed.

In addition, some authors make specific recommendations for remediation in cases of multiple incorrect responses and partially incorrect responses.

As stated previously, Jonassen and Hannum (1987) warn designers to avoid presenting feedback that simply indicates whether an answer is correct or incorrect. Feedback for incorrect responses should be corrective (Alessi & Trollip, 1985; Kearsley, 1986). Specifically, Wager and Wager (1985) advise that feedback should focus on correcting misconceptions represented by incorrect answers. Cohen (1985) agrees with this advice, noting that informational feedback is better than simple knowledge of results following incorrect responses.

Different types of errors should produce different types of feedback (Jonassen & Hannum, 1987). For example, Alessi and Trollip (1985) suggest a special method of feedback following errors in discrimination problems. Jonassen and Hannum (1987) suggest that if the specific error can be determined, feedback should indicate what error was made, why it was an error, and how it can be corrected. If the specific error cannot be determined, the computer should present the correct answer and an explanation of why it is correct.

Wager and Wager (1985) warn designers to avoid corrective feedback that is entertaining or novel. Jonassen and Hannum (1987) add a warning never to use auditory cues to signal incorrect responses.

Alessi and Trollip (1985) recommend that questions answered incorrectly be repeated at variable intervals later in the instructional session. Jonassen and Hannum (1987) reinforce this recommendation by suggesting that the computer select items for review. If a program permits multiple opportunities to attempt a correct answer, Wager and Wager (1985) recommend providing increasingly informative feedback after each successive wrong answer. However, they caution designers to restrict the number of trials allowed before presenting the student with the correct answer or corrective feedback. If an answer is partially incorrect (and partially correct), feedback should prompt a correct response by telling students why their answers are incomplete (Wager & Wager, 1985).

Research. Several experiments have been conducted to examine the effects of different types of feedback and remediation strategies for incorrect responses. Consequently, there is some experimental support for the recommendation to provide corrective feedback rather than only knowledge of results. Two of the conditions in the Schaffer and Hannafin (1986) experiment, discussed previously, provide a test of error feedback complexity. Subjects in both conditions viewed a computer-based videotape lesson and answered questions. Subjects in one condition were only informed whether they were correct or incorrect in their responses. Subjects in the other condition were provided with similar feedback and, when answers were incorrect, were shown a review of the section of videotape containing information relevant to the question. Posttest scores were higher for subjects who were shown the reviews.

Similarly, Waldrop, Justen, and Thomas (1986) compared three types of feedback for responses on a 20-trial computer-based drill. In the minimal feedback condition, subjects were merely informed whether their responses were correct or incorrect. In the extended feedback condition, subjects were not only informed of the correctness of the answers, but were also provided with an explanation of the correct answer. In a third condition, minimal feedback was provided for the first two incorrect responses on a problem, but extended feedback was provided if the subject answered incorrectly a third time. Subjects in the extended feedback condition scored higher on a posttest than subjects in the minimal feedback condition. However, posttest scores of subjects in the minimal plus extended feedback condition were no different than the scores of the subjects in the other two conditions.

Two experiments examined the complexity of remediation strategies for incorrect responses. Seigel and Misselt (1984) compared three strategies of feedback following incorrect responses in a flashcard-type drill for learning foreign words. In the first strategy, all incorrect responses were followed by a presentation of the correct answer. In the second strategy, errors were classified as out-of-list errors or discrimination errors. An out-of-list error occurred when a subject's incorrect answer (a translation) was a word that was not being taught in the lesson. A discrimination error occurred when a subject's incorrect answer was a word that was taught in the lesson but was incorrect on that particular trial. Out-of-list errors were followed by a presentation of the correct answer (as in the first strategy). Discrimination errors were followed by a presentation of the correct answer and the correct translation for the subject's incorrect response. The third strategy was identical to the second, except that discrimination errors were also followed by additional discrimination training. Posttest scores showed no differences between the first and second remediation strategies. However, subjects using the third strategy had fewer posttest errors than the other subjects.

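The error classification in the second and third strategies hinges on whether the student's wrong answer is itself one of the translations being taught. The sketch below illustrates that classification; the dictionary structure and the feedback wording are assumptions for the sake of illustration, not details from Seigel and Misselt.

def classify_and_respond(prompt_word, response, vocabulary):
    # vocabulary maps each foreign word in the lesson to its correct translation.
    correct = vocabulary[prompt_word]
    if response == correct:
        return "Correct."
    if response in vocabulary.values():
        # Discrimination error: the response is a translation taught in the lesson,
        # but it belongs to a different foreign word.
        other = next(word for word, meaning in vocabulary.items() if meaning == response)
        return ("Incorrect. '%s' means '%s'. '%s' is the translation of '%s'."
                % (prompt_word, correct, response, other))
    # Out-of-list error: the response was not taught in the lesson.
    return "Incorrect. '%s' means '%s'." % (prompt_word, correct)
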
Merrill (1987) compared two types of feedback for errors during a computer-based science lesson. Corrective feedback was a complete description of the correct answer. Attribute isolation feedback was not fully explained in the report. However, the attribute isolation feedback may have been less comprehensive, more specific, and more detailed than the corrective feedback. According to Merrill, "attribute isolation helps to focus attention on the critical and variable attributes of a concept" (p. 18). Results showed no differences in posttest scores between these two types of feedback.

Seigel and Misselt (1984) demonstrated that multiple repetitions of a problem following an incorrect response produced higher posttest scores than a single immediate repetition of the problem. They compared two strategies for repeating problems to which incorrect responses occur. In one strategy, the problem is repeated immediately following the incorrect response. In the other strategy, the problem is repeated on the next, 4th, and 9th trials following the error. Subjects exposed to the multiple repetition strategy made significantly fewer posttest errors than subjects exposed to the single repetition strategy. These results support the recommendation to provide multiple opportunities to respond to a missed item.

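The multiple-repetition strategy is essentially a scheduling rule: when an item is missed, queue it for re-presentation on the 1st, 4th, and 9th trials after the error. The drill loop below is a hedged sketch of that rule; the item format, answer checking, and console interaction are assumptions rather than a description of the original drill.

from collections import defaultdict

def run_drill(items, spacing=(1, 4, 9)):
    # items: list of (question, answer) pairs; missed items are re-presented
    # on the trials given by 'spacing', counted from the trial of the error.
    queue = list(items)
    scheduled = defaultdict(list)          # trial number -> items due for review
    trial = 0
    while queue or any(scheduled.values()):
        trial += 1
        if scheduled[trial]:
            question, answer = scheduled[trial].pop(0)
        elif queue:
            question, answer = queue.pop(0)
        else:
            continue                       # nothing due yet; advance to the next scheduled trial
        response = input("Trial %d: %s " % (trial, question)).strip().lower()
        if response == answer.lower():
            print("Correct.")
        else:
            print("Incorrect. The answer is %s." % answer)
            for offset in spacing:
                scheduled[trial + offset].append((question, answer))
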
The Waldrop et al. (1986) experiment, described previously, serves as a weak test of the recommendation that multiple errors be followed by increasingly more informative feedback (Wager & Wager, 1985). In this case, the results fail to support the recommendation; that is, the posttest scores for subjects in the minimal plus extended feedback condition were no different than the scores for the other two conditions. Thorkildsen and Friedman (1986) examined a different approach to remediating multiple incorrect responses. They compared two branching strategies termed extensive (EXT) and minimal (MIN). For both strategies, the first occurrence of an incorrect answer to a particular question is followed by a repetition of the question. After a second incorrect answer to the question, EXT branches to a simpler question than the original one, while MIN presents the original question for the third time. After a third incorrect response to a question, EXT presents an even simpler question with a prompt for the correct answer, while MIN presents the correct answer. Posttest scores showed no significant differences between these two branching strategies. However, subjects in the EXT condition spent significantly less time on the system. Recommendations about effective remediation of multiple errors will probably have to wait for definitive research on effective remediation of single errors.

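The EXT and MIN branches can be written as one routine that differs only at the second and third errors. The sketch below is illustrative; it assumes that each item carries progressively simpler and prompted versions of its question that share the same answer, which is a simplifying assumption rather than a feature of the Thorkildsen and Friedman materials.

def default_ask(question):
    return input(question + " ").strip().lower()

def present_item(item, strategy, ask=default_ask):
    # item: dict with 'question', 'simpler', 'prompted', and 'answer' keys.
    # strategy: "EXT" or "MIN". Returns True if the student eventually answered correctly.
    answer = item["answer"]
    if ask(item["question"]) == answer:
        return True
    # First error: both strategies simply repeat the original question.
    if ask(item["question"]) == answer:
        return True
    # Second error: EXT branches to a simpler question; MIN repeats the original again.
    if ask(item["simpler"] if strategy == "EXT" else item["question"]) == answer:
        return True
    # Third error: EXT presents an even simpler, prompted question; MIN gives the answer.
    if strategy == "EXT":
        return ask(item["prompted"]) == answer
    print("The correct answer is %s." % answer)
    return False
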
No research was located that addresses the recommendations to avoid novel, entertaining, or auditory stimuli in error feedback (Jonassen & Hannum, 1987; Wager & Wager, 1985). Similarly, no research was located that addresses the recommendation to provide corrective feedback for partially incorrect responses (Wager & Wager, 1985). However, if a partially incorrect response is considered an incorrect response, research supports the recommendation to present corrective feedback (Schaffer & Hannafin, 1986; Waldrop et al., 1986).

Latency of Feedback

In addition to decisions about the kind of feedback for responses in CBI, designers must determine the temporal parameters of feedback. For example, designers must decide whether feedback should occur immediately after responses or after some delay. If delayed, designers must determine how much time should pass between the response and the feedback.


Guidelines. Authors of instructional design guidelines make the following contradictory recommendations about latency of feedback:

•  immediate feedback should always be presented,
•  delayed feedback should always be presented, and
•  immediate feedback should be presented at some times and delayed feedback should be presented at other times.

Alessi and Trollip (1985) recommend giving immediate feedback for incorrect answers during drills. For tests, detailed feedback should be given immediately after the test. Cohen (1985) agrees with this recommendation, stating that immediate feedback is better than delayed or end-of-session feedback for students exhibiting low mastery of the material. Jonassen and Hannum (1987) also agree that immediate feedback is more effective for initial acquisition of material. Additionally, they assert that immediate feedback about consequences of decisions is more effective than delayed feedback. Wheaton et al. (1976) recommend that "information about the correctness of action should be available quickly" (p. 78), but note that "delay of [feedback] has little or no effect on acquisition" (p. 77).

On the other hand, Cohen (1985) maintains that immediate feedback can impede the pace of learning, and MacLachlan (1986) asserts that students with higher skill levels may learn better under conditions of delayed feedback. Cohen recommends immediate feedback for the initial acquisition of material and for recognition or immediate recall of ideas. However, if students have prior knowledge of material, informational feedback should be delayed, and knowledge of results should be presented immediately after each response. End-of-session feedback should be provided when comprehension, long-term retention, and application of information are the training objectives. However, Cohen cautions that delayed feedback should be presented no longer than 15 or 20 minutes after the responses occur. After long periods without feedback, "the effect of the feedback becomes negligible and confusing" (Cohen, 1985, p. 36).

Jonassen and Hannum (1987) assert that end-of-session feedback facilitates learning of more abstract material, particularly for higher achievement learners. For computer-based simulations, Alessi and Trollip (1985) recommend immediate feedback (regardless of fidelity) with beginning students, and natural feedback (regardless of immediacy) with more advanced students.


Research. No research was found that supports the recommendations about latency of feedback in CBI. Gaynor (1981) presented computer-based math problems to four groups of subjects. One group received immediate feedback for responses. A second group received feedback after a 30-second delay. A third group received feedback only at the end of the session. A fourth group received no feedback for responses. No significant differences in posttest scores were observed among the groups.

One experiment appears to support a recommendation to delay corrective feedback during computer-based simulations. Munro, Fehling, and Towne (1985) compared two strategies for presenting error feedback and remediation during a computer-simulated air intercept controller task. For both strategies, errors were immediately followed by a tone and a brief message in an area of the computer display reserved for instructional messages. In one strategy, an additional instructional message was shown immediately following the error notice. In the other strategy, the instructional messages were not shown unless the subject requested them. Subjects using the latter strategy committed fewer errors overall and on the last ten trials than the subjects using the former strategy.

These results could be taken as support for a recommendation to delay corrective feedback during a simulation, or at least to present corrective feedback under the student's control. However, because of the dynamic nature of the air intercept controller's task, it is likely that subjects in the immediate feedback condition observed fewer of the instructional messages than subjects in the delayed feedback condition. The corrective feedback appeared in an area of the screen reserved for messages, not in the area of the screen that the subject had to view to perform the task. The comparison in this experiment may have been between an instructional message condition and a no-message condition, and subject control and delay of feedback were not the causal factors.

Placement of Feedback

In addition to the content and latency of feedback, instructional designers must decide how to structure the feedback on the computer display. Wager and Wager (1985) recommend that feedback be placed on the same screen as the question and below the student's response. The results of the Munro et al. (1985) experiment suggest that placement of feedback is an important issue in the design of CBI. However, no research has been located to evaluate different strategies for placement of feedback.

Summary

As with the presentation of instructional material and questions, there is very little research to guide instructional designers in making decisions about feedback and remediation strategies. There is some experimental support for the recommendation to provide corrective feedback, instead of simple knowledge of results, following incorrect responses (Schaffer & Hannafin, 1986; Seigel & Misselt, 1984; Waldrop et al., 1986). Also, there is evidence that multiple repetitions of a problem following an incorrect response are more effective than a single repetition of the problem (Seigel & Misselt, 1984).

However, the research has not unequivocally determined whether and how different types of errors should be followed by different types of feedback (Merrill, 1987; Seigel & Misselt, 1984). No research was located to show that feedback for correct responses is unnecessary, should be brief, or should explain why a response is correct. No evidence was found that multiple errors should be followed by increasingly informative feedback (Thorkildsen & Friedman, 1986; Waldrop et al., 1986). No research was located that addresses recommendations to avoid novel, entertaining, or auditory stimuli in error feedback. No research was located to support any of the recommendations about latency of feedback in CBI or to evaluate different strategies for placement of feedback.

Other Guidelines about Instructional Strategies

Authors of instructional design guidelines give far more advice than the empirical research currently supports. For example, Kearsley and Frost (1985) warn designers to avoid cluttering the screen with too much information. Certainly, a program should not present subjects with ineffective visual displays, but how much information is too much? When is a screen cluttered? Braden (1986) and Kearsley and Frost suggest presenting only one primary concept or idea per visual display. This recommendation lacks empirical research support. Furthermore, Braden recommends that lists be restricted to seven or fewer items per display.

Several authors offer guidelines about the presentation of textual material. Alessi and Trollip (1985) recommend normal upper and lower case text. Buehner (1987) asserts that double-spaced text is easier to read than single- or triple-spaced text. Wager and Wager (1985) warn designers to avoid abbreviations. They advise designers to spell words completely and use complete sentences. MacLachlan (1986) opposes the use of cliches and recommends the use of subordinate words and concepts. For example, MacLachlan prefers the use of a specific term, such as "raven," over the use of a more general term, such as "bird." No experiments were found that address such issues.

A few authors advocate mnemonics in CBI. MacLachlan (1986) specifically recommends the use of rhyme, rhythm, and the method of loci in textual material. Jonassen and Hannum (1987) suggest that designers periodically include directions for the student to "generate mental images of the content" (p. 10). The use of mnemonics in learning has empirical support in traditional educational research. There is no reason to assume that the value of mnemonics would be less for CBI.

Several recommendations about instructional strategies are so vague that their value to designers is questionable. For example, Eberts and Brock (1984) recommend that CBI include "examples and analogies to make the training effective" (p. 280). They also advise designers to make the learning intrinsically motivating by utilizing challenge, fantasy, and curiosity in the learning environment. Finally, they recommend that the computer display should provide information in a manner that can be used to form an accurate internal representation of the system or concept being trained. Kearsley and Frost (1985) advise designers to "organize information functionally on the screen as much as possible to reduce confusion and unnecessary cognitive processing" (p. 10). Finally, Jonassen and Hannum (1987) suggest that designers "organize material to an appropriate top-level structure: description, compare/contrast, temporal, explanation, definition, example, problem solution, causation" (p. 10).

Discussion

Although there are some consistencies in the literature, the guidelines and research in computer-based instructional strategies are characterized by contradictions. In some cases, authors of instructional design guidelines contradict each other with their recommendations. In other cases, empirical research contradicts the experts' recommendations. Many of the recommendations, however, lack empirical research altogether.


Each guideline discussed in this review was classified by (a) whether authors of instructional guidelines made another recommendation that contradicted the guideline and (b) whether empirical research supports the guideline, contradicts the guideline, or does not exist. The guidelines and their categories are listed in Table 1. If any guideline contradicted another guideline, both of the guidelines were classified under "Some Authors Disagree." Otherwise, the guidelines were classified under "No Disagreement Among Authors." If empirical research supports or contradicts a guideline, the guideline was classified under "Research Supports" or "Research Contradicts," respectively. Often, a single experiment exists to support or contradict a guideline. Such evidence may or may not be considered conclusive, depending upon the strength of the research and the specificity of the guideline. Finally, a guideline was classified under "Insufficient Research" if (a) empirical research produced mixed results and further research is required, (b) empirical research is inconclusive because of inadequate experimental designs, or (c) empirical research on that guideline was not located.

Table 1

Listing and Classification of CBI Guidelines

(Each guideline is marked for author agreement, either No Disagreement Among Authors or Some Authors Disagree, and for research status, either Research Supports, Research Contradicts, or Insufficient Research.)

 1. Present questions (No Disagreement; Research Supports)
 2. Corrective feedback is necessary for incorrect responses (No Disagreement; Research Supports)
 3. Multiple trials should be presented when an item is missed (No Disagreement; Research Supports)
 4. Present prelesson questions (Some Disagree; Research Supports)
 5. Program the computer to control adaptively the number of trials (Some Disagree; Research Supports)
 6. Use graphics often (No Disagreement; Research Contradicts)
 7. Provide increasingly informative feedback for successive errors (No Disagreement; Research Contradicts)
 8. Train under mild speed stress (Some Disagree; Research Contradicts)
 9. Present information before practice (Some Disagree; Research Contradicts)
10. Randomize the sequence of material (Some Disagree; Research Contradicts)
11. Part-task training should precede whole-task training (Some Disagree; Research Contradicts)
12. Permit student to control the presentation of reviews (Some Disagree; Research Contradicts)
13. Program the computer to control the presentation of reviews (Some Disagree; Research Contradicts)
14. Use graphics to highlight important material (No Disagreement; Insufficient Research)
15. Place graphic information on the same screen as and to the left of the text (No Disagreement; Insufficient Research)
16. High quality graphics are unnecessary (No Disagreement; Insufficient Research)
17. Use animation (No Disagreement; Insufficient Research)
18. Present application questions (No Disagreement; Insufficient Research)
19. Present discrimination questions (No Disagreement; Insufficient Research)
20. Present rhetorical questions (No Disagreement; Insufficient Research)
21. Present questions frequently (No Disagreement; Insufficient Research)
22. Present each question after the text passage to which it refers (No Disagreement; Insufficient Research)
23. Permit student to change answers at any point during a quiz (No Disagreement; Insufficient Research)
24. Permit student to control the sequence of questions (No Disagreement; Insufficient Research)
25. Feedback should be specific to the type of error (No Disagreement; Insufficient Research)
26. Do not present novel, entertaining, or auditory feedback for errors (No Disagreement; Insufficient Research)
27. Place feedback on the same screen as the question (No Disagreement; Insufficient Research)
28. Orienting instructions are ineffective (Some Disagree; Insufficient Research)
29. Present orienting instructions (Some Disagree; Insufficient Research)
30. Lesson objectives are ineffective (Some Disagree; Insufficient Research)
31. Lesson objectives limit learning (Some Disagree; Insufficient Research)
32. Present lesson objectives (Some Disagree; Insufficient Research)
33. Present lesson objectives as a student-controlled option (Some Disagree; Insufficient Research)
34. Permit student to control duration of questions and graphics (Some Disagree; Insufficient Research)
35. Shorten tasks without losing important aspects for training (Some Disagree; Insufficient Research)
36. Always permit student to control the sequence of instruction (Some Disagree; Insufficient Research)
37. Never permit student to control the sequence of instruction (Some Disagree; Insufficient Research)
38. Conditionally permit student to control the sequence of instruction (Some Disagree; Insufficient Research)
39. Present concrete material before abstract material (Some Disagree; Insufficient Research)
40. Use chronological ordering of material (Some Disagree; Insufficient Research)
41. Present simple material before complex material (Some Disagree; Insufficient Research)
42. Do not present material that the student already knows (Some Disagree; Insufficient Research)
43. Do not vary the difficulty of material during the session (Some Disagree; Insufficient Research)
44. Permit the student to control the difficulty level of material (Some Disagree; Insufficient Research)
45. Do not permit the student to control the difficulty level of material (Some Disagree; Insufficient Research)
46. Prelesson questions limit learning (Some Disagree; Insufficient Research)
47. Permit student to control the number of problems (Some Disagree; Insufficient Research)
48. Permit student to see answer before responding (Some Disagree; Insufficient Research)
49. Require student to respond before presenting answer (Some Disagree; Insufficient Research)
50. Present questions that evoke covert responses (Some Disagree; Insufficient Research)
51. Require overt responses (Some Disagree; Insufficient Research)
52. Feedback for correct responses is unnecessary (Some Disagree; Insufficient Research)
53. Feedback for correct responses should be brief (Some Disagree; Insufficient Research)
54. Feedback for correct responses should explain why responses are correct (Some Disagree; Insufficient Research)
55. Immediate feedback should always be presented (Some Disagree; Insufficient Research)
56. Delayed feedback should always be presented (Some Disagree; Insufficient Research)
57. Present immediate feedback sometimes and delayed feedback at other times (Some Disagree; Insufficient Research)

In the following sections, each guideline is evaluated in terms of its usefulness for computer-based instructional design. Some of the guidelines are self-evident and do not require empirical validation. Other guidelines are stated in terms that are so general that it is not possible to evaluate their usefulness. Some of the guidelines are very specific, and an evaluation of their generalizability is required. Generalizability may exist on different dimensions, and CBI researchers and designers need to understand the important dimensions in educational technology. The present review and interpretation of the literature suggests that there are at least three important dimensions along which generalizability may vary: (a) training format (e.g., tutorials, drills, or simulations), (b) training objectives (e.g., acquisition vs. sustainment training, verbal/conceptual vs. nonverbal/procedural training), and (c) target population (e.g., high school or college students, military personnel, industrial workers).

Guidelines Supported by Research

Guidelines 1-5 in Table 1 are supported by empirical research. Authors of guidelines agree that CBI should present questions, corrective feedback for incorrect responses, and multiple trials for items that are answered incorrectly. The recommendation to present questions applies principally to tutorials. However, questions might also be incorporated into some types of training simulations, such as those designed to train complex decision-making skills.

The recommendation to present corrective feedback means that feedback that simply informs the student of whether an answer is correct or incorrect is not as effective as feedback that presents more information. Precisely how much information is sufficient probably depends upon the instructional content, the purpose of the instruction, and the type of student. The recommendation to present multiple problems following incorrect responses is supported by a single experiment. The generalizability of this guideline needs to be evaluated.

Although some authors do not agree that CBI should present prelesson questions or that computers should be programmed to control adaptively the number of trials, empirical research suggests that these strategies are effective. The recommendation to present prelesson questions is supported by a single experiment. Therefore, the generalizability of this guideline should be evaluated. The recommendation to program the computer to control adaptively the number of trials is supported by a series of experiments in which age of student and instructional content have varied.

Guidelines Contradicted by Research

Although authors agree that designers of CBI should use graphics often and provide increasingly informative feedback for successive errors (guidelines 6 and 7, Table 1), empirical research fails to support these guidelines. However, the experimental evidence is not strong enough to justify complete rejection of these guidelines. For example, while experiments that compare graphics to text have not produced results suggesting that graphics should be used often, no research has been located to show that graphics should be avoided. Whether graphics are useful in CBI probably depends upon many other factors, such as the training format, training objectives, and target population.

The recommendation to provide increasingly informative feedback for successive errors was contradicted by several experiments. Consequently, its rejection may be warranted. However, it seems reasonable to suppose that errors made repeatedly require more extensive remedial effort than errors made only once or twice.

Some authors do not agree with the next six guidelines in Table 1 (guidelines 8-13), all of which are contradicted by empirical research. Again, the experimental evidence does not necessarily justify rejection of each guideline. For example, the recommendation to train under mild speed stress is contradicted by several experiments showing that correct responses should be followed by increases in stimulus display duration. However, these experiments were not explicitly designed to test the effectiveness of speed stress during training. Certainly, the recommendation is sensible if the purpose of the training program is to increase the speed with which a student performs a task. Further research is required to determine whether or not speed stress during training is useful when increasing performance speed is not a training objective.

Only one experiment contradicts the recommendation to present information before practice (Lahey, 1981), but the validity of this guideline is logically questionable. By definition, computer-based drills and simulations emphasize practice. Often, the presentation of information is simultaneous with practice. Only tutorials may require information to be presented separate from practice. In addition to the experiment by Lahey, the demonstrated effectiveness of drills and simulations contradicts the recommendation to present information before practice.

The recommendation to randomize the sequence of instructional material is contradicted by other recommendations about sequence and by the results of Barsam and Simutis (1984). Randomization can occur at several levels, and at some levels it may be necessary for effective instruction. The response choices in a multiple-choice question can and should be randomized, and ordinarily, the order of items in a test should be randomized. However, there are some types of instructional material that simply cannot be learned if the material is presented in a random manner. When sequence of instruction is a critical training variable, the most effective sequence must be determined.

The general consensus about whether part-task training should precede whole-task training is that it depends upon the partitioning strategy for the particular task (Wightman & Lintern, 1985). However, Wightman and Lintern note that the effects of the segmentation strategy may be confounded with the backward chaining technique used in most of the training effectiveness research. Further research is required to separate the effects of these two variables.

Finally, the contradictions in recommendations about the presentation of reviews may be resolved if designers consider that there are several different kinds of reviews. Instructional material can be reviewed at the end of lengthy instructional segments prior to questioning or ending a session. Reviews may follow incorrect responses during a drill or test. Questions or items in a drill may be reviewed. Whether or not students should control the presentation of reviews may depend upon which type of review one is considering.


Guidelines Lacking Research Support

The remaining guidelines in Table 1 have been insufficiently addressed by empirical research. The next two sections discuss the guidelines that are not contradicted by any other guideline (guidelines 14-27), and those that are contradicted by another guideline (guidelines 28-57).

No disagreement among authors. Research has not been located to compare strategies of highlighting, but there is no reason to assume that the use of graphic highlighting would be any less effective than other methods, such as capitalizing and underlining. Computer display design has been studied in other settings (Brown, 1988), and computer-based instructional designers might benefit from the results of that research.

It would be simple enough to test the recommendation to place graphic information on the same screen as, and to the left of, text. However, it seems doubtful that any distinction between right and left brain specificity of visual processing would result in performance differences during CBI that have practical significance. When a student attends to a visual stimulus, such as text or graphics on a computer screen, that stimulation impinges on receptors in the fovea. The fovea must shift so frequently during inspection of stimuli on the screen that any spatial differences in placement of text and graphics may be irrelevant for training effectiveness.

The assertion that high quality graphics are unnecessary has not been adequately tested. This is an important area for research, because high quality graphics may be more expensive to produce than low quality graphics. The quality of graphics required for effective training may be an issue of fidelity. The general consensus is that maximum physical fidelity (between a computer graphic and the real-world stimulus that the graphic represents) is not always necessary for effective training. However, it is inappropriate to say that high quality graphics are never necessary. It seems reasonable to assume that different types of images used for different purposes in training require different levels of graphic quality. Further research is required to elucidate the relationship among types of images, training functions of images, and quality of graphics.

Similarly, animated graphics are often more expensive to produce than static graphics. The results of McDonald and Whitehill (1987) suggest that animation is required for effective training in the perception of dynamic visual stimuli. However, it is not clear whether animation is worth the cost for many other applications of CBI.

Application, discrimination, and rhetorical questions are sensible alternatives to simple recall questions. Research has shown that application and discrimination training are effective approaches in CBI. It is likely that application and discrimination questions would enhance training similarly. Rhetorical questions may serve as useful stimuli in a training program. However, by definition, it would be impossible to record, evaluate, and remediate a response to such a stimulus.

Although one experiment showed no differences between tutorials with different frame-to-question ratios (Schloss et al., 1985), these results are surprising, and the recommendation to present questions frequently should be researched further. Frequent presentation of questions facilitates more interaction with the training device. If interactivity is a critical aspect of training, it is reasonable to assume that more frequent questions would be effective.
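
One way to study this variable would be to treat the frame-to-question ratio as a single design parameter, as in the hypothetical Python sketch below; nothing in the reviewed research prescribes this particular implementation.

    def interleave(frames, questions, frames_per_question=2):
        # Yield instructional frames and, after every `frames_per_question`
        # frames, the next question. Lowering the ratio increases the
        # frequency of interaction with the training device.
        pending = iter(questions)
        for count, frame in enumerate(frames, start=1):
            yield ("frame", frame)
            if count % frames_per_question == 0:
                question = next(pending, None)
                if question is not None:
                    yield ("question", question)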

No one disagrees that questions should be presented after the text passages to which they refer, assuming that the recommendation does not refer to prelesson questions. It seems sensible to evaluate or reinforce students' knowledge of a fact only after they have been presented with that fact. Prelesson questions, however, have been shown to enhance training effectiveness (Dalton et al., 1988). Although further research is required, it is likely that intralesson questions presented prior to a text passage are also effective.

Although this guideline has not been tested, it seems sensible to permit students to change answers at any point during a computer-based quiz. The only exceptions might be programs designed to train students to answer questions rapidly under time pressure.
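
A minimal sketch of such a quiz loop is shown below, assuming a text-mode interface; the allow_revision flag is hypothetical and simply illustrates how revision could be disabled for speeded drills.

    def administer_quiz(questions, allow_revision=True):
        # Collect answers in a dictionary keyed by item number so that any
        # answer can be changed before the quiz is submitted.
        answers = {}
        for number, question in enumerate(questions, start=1):
            answers[number] = input(f"{number}. {question}\nAnswer: ").strip()
        while allow_revision:
            command = input("Item number to change, or 'submit': ").strip()
            if command.lower() == "submit":
                break
            if command.isdigit() and 1 <= int(command) <= len(questions):
                answers[int(command)] = input("New answer: ").strip()
        return answers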

No one has contradicted the recommendation to permit students to control the sequence of questions, and no research has been located to address that recommendation. If the sequence of questions is unimportant for training effectiveness, there is no reason why students should be restricted from modifying it. However, further research is required to elucidate the effects of various sequential strategies before declaring that sequence is an unimportant variable.


The guideline that feedback should be specific to the type of error has been evaluated in experiments with mixed results. If different types of potential errors can be identified during instructional design, it would be possible to program different types of feedback. However, the added cost of such an effort justifies further research to determine the level of specificity required in feedback.
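
The added programming effort is modest when errors fall into a small number of identifiable categories. The Python sketch below illustrates the idea for an arithmetic drill; the error taxonomy and feedback messages are invented for illustration and are not taken from the reviewed studies.

    # Hypothetical error categories and feedback messages for an arithmetic drill.
    FEEDBACK_BY_ERROR = {
        "format":    "Enter the answer as a number only, without units or spaces.",
        "sign":      "Check the sign of your answer.",
        "magnitude": "The sign is right, but the size is off; re-check your carrying.",
    }

    def classify_error(expected, response):
        # Classify a wrong response so feedback can address its likely cause.
        try:
            value = float(response)
        except ValueError:
            return "format"
        if expected != 0 and value == -expected:
            return "sign"
        return "magnitude"

    def feedback_for(expected, response):
        try:
            if float(response) == float(expected):
                return "Correct."
        except ValueError:
            pass
        return FEEDBACK_BY_ERROR[classify_error(expected, response)]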

No research was located on the effects of novel, entertaining, or auditory stimuli in error feedback. A stimulus that is novel or entertaining to a designer might not be so to a student. Furthermore, students of different ages may respond to such stimuli in different ways. Presumably, the recommendation to avoid auditory stimuli is made to prevent a student's embarrassment when making an error in a CBI classroom. However, many CBI programs present auditory stimuli as a part of the training. Headphones may be used to prevent the stimuli at one workstation from disrupting the training at nearby workstations. Under such conditions of privacy, there is no reason to assume that auditory stimuli would be any less effective than visual stimuli for remediating errors.

The guideline to place feedback on the same screen as the question assumes that such feedback always fits on one screen. If feedback is extensive, it may be necessary to present it on several screens.

Some authors disagree. Authors of instructional design guidelines contradict each other in their recommendations about the presentation of lesson objectives and orienting instructions. Guidelines about orienting instructions seem self-evident. It cannot be denied that CBI using novel equipment or complicated interactivity routines must provide instructions on how to use the equipment or interact with the program properly. When the equipment is simple to operate or the interactivity routines are obvious, orienting instructions are not required. Because of research with other educational media, some authors suggest that lesson objectives are ineffective or may limit the amount of material retained from CBI. Other authors recommend that lesson objectives (and orienting instructions) be presented, at least as a student-controlled option. These assertions have not been tested with CBI. If lesson objectives are ineffective or detrimental to training effectiveness, they should not be presented. Further research is required to evaluate these guidelines.

The recommendations to permit students to control the duration of questions and graphics are contradicted by recommendations to train under speed stress or to program the computer to control stimulus display durations. Visual stimuli should be presented long enough for students to adequately view them. The best strategy for determining stimulus duration probably depends upon the type of training involved.

It makes sense to shorten tasks, if this can be done without losing important aspects for training. For most tasks, more training means more learning. More trials can be presented if unnecessary time-consuming events are eliminated from the training. Designers must empirically determine which events are important and which are unnecessary for the specific tasks to be trained.

Guidelines 36-40 in Table 1 are about sequence of instructional material. Authors of design guidelines disagree about whether students should control the sequence of instruction. Research must be conducted to determine if sequencing is an important variable for training effectiveness and what sequence is most effective for particular applications. If sequence is an important training variable, students should not be permitted to control it. Students are no better than trained instructional designers (and probably worse) at determining the most effective sequences of instruction. If an effective sequence can be determined, that sequence should be presented. Whether or not any particular sequence (such as concrete material before abstract material, or a chronological sequence) is effective depends upon the nature of the instructional material. Most material can be presented in several different sequences. The design of CBI must include formative evaluations to determine the most effective sequences.

Sequencing instructional material according to difficulty level is a related issue (guidelines 41-45). For most material, simple material may serve as a prerequisite for more difficult material. In such cases, the prerequisite material should be presented first. Research is required to determine when the difficulty level should be raised during CBI for each application. Again, it makes no sense to depend upon students to determine the most effective level of difficulty for training. However, it is possible that students using CBI for skill sustainment training could effectively select their own difficulty level. This possibility should be evaluated in empirical research.
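
A simple form of computer-controlled difficulty adjustment is sketched below in Python; the mastery thresholds, window size, and level names are illustrative assumptions, since the review argues only that such criteria should be set empirically for each application.

    from collections import deque

    class DifficultyController:
        # Raise or lower the difficulty level from recent performance rather
        # than leaving the choice to the student.
        def __init__(self, levels=("easy", "moderate", "hard"), window=5):
            self.levels = levels
            self.index = 0
            self.recent = deque(maxlen=window)

        def record(self, correct):
            self.recent.append(bool(correct))
            if len(self.recent) < self.recent.maxlen:
                return
            rate = sum(self.recent) / len(self.recent)
            if rate >= 0.8 and self.index < len(self.levels) - 1:
                self.index += 1      # prerequisite material has been mastered
                self.recent.clear()
            elif rate <= 0.4 and self.index > 0:
                self.index -= 1      # drop back to simpler material
                self.recent.clear()

        @property
        def level(self):
            return self.levels[self.index]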

One experiment has shown that limited prelesson questioning enhances learning and extensive prelesson questioning could be detrimental (Dalton et al., 1988). However, similar to the assertion about lesson objectives, the assertion that prelesson questions limit learning requires further research (guideline 46).

The recommendation to permit students to control the number of problems (guideline 47) is contradicted by the empirically supported recommendation to program the computer to control adaptively the number of problems. In fact, the research shows no differences between adaptive computer control and student control when students are advised about the number of problems required to master the material. Adaptively generated advice appears to be a necessary component of the effective student-control strategy. If students follow such advice, there is no reason to prevent them from controlling the number of problems. Whether or not students will follow the advice is a topic for further research.
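
The essential feature of that strategy is the advice itself, which can be generated from a simple mastery criterion, as in the hypothetical Python sketch below (a consecutive-correct criterion is assumed here purely for illustration).

    def advise(responses, required_streak=4):
        # Advise the student how many more correct answers are recommended
        # before ending practice; the student remains free to ignore the advice.
        streak = 0
        for correct in reversed(responses):
            if not correct:
                break
            streak += 1
        remaining = max(required_streak - streak, 0)
        if remaining == 0:
            return "You have met the mastery criterion; you may stop or continue."
        return f"At least {remaining} more correct answers in a row are recommended."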

Guidelines 48-51 in Table 1 address the importance of a student's response in CBI. If the interaction between student and computer is an important phenomenon in training, the nature of the student's response is important. Permitting students to see answers before attempting to answer questions themselves is one form of interaction. Whether this type of interaction is as effective as requiring an overt response and then providing feedback must be determined in empirical research.

The focus of feedback research has been on remediation of errors. Guidelines for designing feedback for correct responses have not been evaluated (guidelines 52-54). Certainly, students should be informed when they have made a correct response. Therefore, it is erroneous to say that feedback for correct responses is unnecessary. However, whether this feedback should be brief or extensive may depend upon other training issues. Further research is required in this area.

Whether feedback should be presented immediately following a response or should be delayed for some time period probably depends upon other training issues (guidelines 55-57). Research in basic learning processes suggests that immediate feedback is most effective for most training applications. The generalizability of this conclusion has not been sufficiently explored.

The implications of this review may differ for the CBI designer and the educational technology researcher. Both should be aware of the contradictions and potential limitations of the recommendations in the literature. Research can be designed to address the current gaps in our knowledge of computer-based instructional strategies; in the meantime, designers must continue to develop material to meet current training and educational requirements. Instructional designers should recognize when a particular strategy is being applied in a case that potentially tests the generalizability of a recommendation or research result. Consequently, formative and summative evaluations during CBI development will contribute to the body of knowledge in instructional design strategies (Gagné et al., 1988). Educational technology researchers should identify the important dimensions along which the generalizability of instructional strategies may vary. Several have been suggested here, but a clearer exposition requires further research.


References

Alessi, S. M., & Trollip, S. R. (1985). Computer-based instruction: Methods and development. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Barsam, H. F., & Simutis, Z. M. (1984). Computer-based graphics for terrain visualization training. Human Factors, 26, 659-665.

Belland, J. C., Taylor, W. D., Canelos, J., Dwyer, F., & Baker, P. (1985). Is the self-paced instructional program, via microcomputer-based instruction, the most effective method of addressing individual learning differences? Educational Communication and Technology Journal, 33, 185-198.

Benjamin, L. T., Jr. (1988). A history of teaching machines. American Psychologist, 43, 703-712.

Braden, R. A. (1986). Visuals for interactive video: Images for a new technology (with some guidelines). Educational Technology, 26, 18-23.

Brown, C. M. (1988). Human-computer interface design guidelines. Norwood, NJ: Ablex.

Buehner, L. J. (1987). Instructional design: Impact of subject matter and cognitive styles (Technical Paper 86-11). Wright-Patterson Air Force Base, OH: Air Force Human Resources Laboratory.

Clark, R. E. (1985). Evidence for confounding in computer-based instruction studies: Analyzing the meta-analyses. Educational Communication and Technology Journal, 33, 249-262.

Cohen, V. B. (1985). A reexamination of feedback in computer-based instruction: Implications for instructional design. Educational Technology, 25, 33-37.

Dalton, D. W., Goodrum, D. A., & Olsen, J. R. (1988). Computer-based mastery pretests and continuing motivation. 30th ADCIS International Conference Proceedings (pp. 227-232). Bellingham, WA: Association for the Development of Computer-Based Instructional Systems.


Dalton, D. W., & Hannafin, M. J. (1987). The effects of knowledge- versus context-based design strategies on information and application learning from interactive video. Journal of Computer-Based Instruction, 14, 138-141.

Eberts, R., & Brock, J. F. (1984). Computer applications to instruction. In F. A. Muckler (Ed.), Human factors review (pp. 239-284). Santa Monica, CA: The Human Factors Society.

Eberts, R. E., & Brock, J. F. (1987). Computer-assisted and computer-managed instruction. In G. Salvendy (Ed.), Handbook of human factors (pp. 976-1011). New York: Wiley and Sons.

Flexman, R. E., & Stark, E. A. (1987). Training simulators. In G. Salvendy (Ed.), Handbook of human factors (pp. 1012-1038). New York: Wiley and Sons.

Gagné, R. M., Briggs, L. J., & Wager, W. W. (1988). Principles of instructional design (3rd ed.). New York: Holt, Rinehart and Winston.

Gaynor, P. (1981). The effect of feedback delay on retention of computer-based mathematical material. Journal of Computer-Based Instruction, 8, 28-34.

Goetzfried, L., & Hannafin, M. J. (1985). The effect of the locus of CAI control strategies on the learning of mathematical rules. American Educational Research Journal, 22, 273-278.

Gray, S. H. (1987). The effect of sequence control on computer assisted learning. Journal of Computer-Based Instruction, 14, 54-56.

Hannafin, M. J. (1985). Empirical issues in the study of computer-assisted interactive video. Educational Communication and Technology Journal, 33, 235-247.

Hannafin, M. J., & Colamaio, M. E. (1986). The effects of locus of instructional control and practice on learning from interactive video. 28th ADCIS Conference Proceedings (pp. 249-255). Bellingham, WA: Association for the Development of Computer-Based Instructional Systems.


Hannafin, M. J., & Hughes, C. W. (1986). A framework for incorporating orienting activities in computer-based interactive video. Instructional Science, 15, 239-255.

Hannafin, M. J., Phillips, T. L., & Tripp, S. D. (1986). The effects of orienting, processing, and practicing activity on learning from interactive video. Journal of Computer-Based Instruction, 13, 134-139.

Ho, C. P., Savenye, W., & Haas, N. (1986). The effects of orienting objectives and review on learning from interactive video. Journal of Computer-Based Instruction, 13, 126-129.

Holding, D. H. (1987). Concepts of training. In G. Salvendy (Ed.), Handbook of human factors (pp. 939-962). New York: Wiley and Sons.

Johansen, K. J., & Tennyson, R. D. (1983). Effect of adaptive advisement on perception in learner-controlled, computer-based instruction using a rule-learning task. Educational Communication and Technology Journal, 31, 226-236.

Jonassen, D. H., & Hannum, W. H. (1987). Research-based principles for designing computer software. Educational Technology, 27, 7-14.

Kearsley, G. (1986). Authoring: A guide to the design of instructional software. Reading, MA: Addison-Wesley.

Kearsley, G. P., & Frost, J. (1985). Design factors for successful videodisc-based instruction. Educational Technology, 25, 7-13.

Kinzie, M. B., & Sullivan, H. J. (1988). The effects of learner control on motivation. 30th ADCIS International Conference Proceedings (pp. 247-252). Bellingham, WA: Association for the Development of Computer-Based Instructional Systems.

Lahey, G. F. (1981). The effect of instructional sequence on performance in computer-based instruction. Journal of Computer-Based Instruction, 2, 111-116.

MacLachlan, J. (1986). Psychologically based techniques for improving learning within computerized tutorials. Journal of Computer-Based Instruction, 13, 65-70.


McDonald, B. A., & Whitehill, B. V. (1987). Visual recognition training: Effectiveness of computer graphics (Technical Report 87-21). San Diego, CA: Navy Personnel Research and Development Center.

McMullen, D. W. (1986). Drills vs. games -- any differences: A pilot study. 28th ADCIS Conference Proceedings (pp. 40-45). Bellingham, WA: Association for the Development of Computer-Based Instructional Systems.

Merrill, J. (1987). Levels of questioning and forms of feedback: Instructional factors in courseware design. Journal of Computer-Based Instruction, 14, 18-22.

Moore, M. V., Nawrocki, L. H., & Simutis, Z. M. (1979). The instructional effectiveness of three levels of graphics displays for computer-assisted instruction (Technical Paper 359). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences. (AD A072 335)

Munro, A., Fehling, M. R., & Towne, D. M. (1985). Instruction intrusiveness in dynamic simulation training. Journal of Computer-Based Instruction, 12, 50-53.

Park, O., & Tennyson, R. D. (1980). Adaptive design strategies for selecting number and presentation order of examples in coordinate concept acquisition. Journal of Educational Psychology, 72, 362-370.

Park, O., & Tennyson, R. D. (1986). Computer-based response-sensitive design strategies for selecting presentation form and sequence of examples in learning of coordinate concepts. Journal of Educational Psychology, 78, 153-158.

Reeves, T. C. (1986). Research and evaluation models for the study of interactive video. Journal of Computer-Based Instruction, 13, 102-106.

Reid, D. J., & Beveridge, M. (1986). Effects of text illustration on children's learning of a school science topic. British Journal of Educational Psychology, 56, 294-303.


Rieber, L. P., & Hannafin, M. J. (1986). The effects of textual and animated orienting activities and practice on learning from computer-based instruction. 28th ADCIS Conference Proceedings (pp. 308-313). Bellingham, WA: Association for the Development of Computer-Based Instructional Systems.

Rothen, W., & Tennyson, R. D. (1978). Application of Bayes's theory in designing computer-based adaptive instructional strategies. Educational Psychologist, 12, 317-323.

Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs, 80 (Whole No. 609).

Schaffer, L. C., & Hannafin, M. J. (1986). The effects of progressive interactivity on learning from interactive video. Educational Communication and Technology Journal, 34, 89-96.

Shlechter, T. M. (1986). An examination of the research evidence for computer-based instruction in military training (Technical Report 722). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences. (AD A174 817)

Schloss, C. N., Schloss, P. J., & Cartwright, G. P. (1985). Placement of questions and highlights as a variable influencing the effectiveness of computer assisted instruction. Journal of Computer-Based Instruction, 12, 97-100.

Schloss, P. J., Sindelar, P. T., Cartwright, G. P., & Schloss, C. N. (1986). Efficacy of higher cognitive and factual questions in computer assisted instruction modules. Journal of Computer-Based Instruction, 13, 75-79.

Schneider, W., Vidulich, M., & Yeh, Y. (1982). Training spatial skills for air-traffic control. In R. E. Edwards & P. Tolin (Eds.), Proceedings of the Human Factors Society 26th Annual Meeting (pp. 10-14). Santa Monica, CA: Human Factors Society.

Siegel, M. A., & Misselt, A. L. (1984). Adaptive feedback and review paradigm for computer-based drills. Journal of Educational Psychology, 76, 310-317.



Stokes, M. T., Halcomb, C. G., & Slovacek, C. P. (1988). Delaying user responses to computer-mediated test items enhances test performance. Journal of Computer-Based Instruction, 15, 99-103.

Surber, J. R., & Leeder, J. A. (1988). The effect of graphic feedback on student motivation. Journal of Computer-Based Instruction, 15, 14-17.

Tennyson, R. D. (1980). Instructional control strategies and content structure as design variables in concept acquisition using computer-based instruction. Journal of Educational Psychology, 72, 525-532.

Tennyson, R. D. (1981). Use of adaptive information for advisement in learning concepts and rules using computer-assisted instruction. American Educational Research Journal, 18, 425-438.

Tennyson, R. D., & Park, S. I. (1984). Process learning time as an adaptive design variable in concept learning using computer-based instruction. Journal of Educational Psychology, 76, 452-465.

Tennyson, R. D., Park, O., & Christensen, D. L. (1985). Adaptive control of learning time and content sequence in concept learning using computer-based instruction. Journal of Educational Psychology, 77, 481-491.

Tennyson, R. D., Welsh, J. C., Christensen, D. L., & Hajovy, H. (1985). Interactive effect of information structure, sequence of information, and process learning time on rule learning using computer-based instruction. Educational Communication and Technology Journal, 33, 213-223.

Thorkildsen, R. J., & Friedman, S. G. (1986). Interactive videodisc: Instructional design of a beginning reading program. Learning Disability Quarterly, 3, 111-117.

Wager, W., & Wager, S. (1985). Presenting questions, processing responses, and providing feedback in CAI. Journal of Instructional Development, 8, 2-8.

Waldrop, P. B., Justen, J. E., III, & Thomas, M. A., II. (1986). A comparison of three types of feedback in a computer-assisted instruction task. Educational Technology, 25, 43-45.


Wedman, J. F., & Stefanich, G. P. (1984). Guidelines for computer-based testing of student learning of concepts, principles, and procedures. Educational Technology, 24, 23-28.

Wheaton, G. R., Rose, A. M., Fingerman, P. W., Korotkin, A. L., & Holding, D. H. (1976). Evaluation of the effectiveness of training devices: Literature review and preliminary model (Research Memo 76-6). Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences. (AD A076 809)

Wightman, D. C., & Lintern, G. (1985). Part-task training for tracking and manual control. Human Factors, 27, 267-283.
