
Page 1: Chapter 10 Designing and Conducting Formative Evaluations

CHAPTER 10

Designing and Conducting Formative Evaluations

Carolyn Jenkins-Haigler

Page 2: Chapter 10 Designing and Conducting Formative Evaluations

BACKGROUND

In a formative evaluation, evidence of an instructional program's worth is gathered for use in making decisions about how to revise the program while it is being developed. This is why it is called "formative" evaluation: the instruction is in its developmental stages and is not yet "grown up." The idea is to find out whether your newly developed course works at teaching the objectives you need to teach to the learners who need to learn them, before you present it to your target audience. In any given formative evaluation, you can find out how to make your instruction more:

Effective

Efficient

Interesting/Motivating

Usable

Acceptable

You do this by carrying out procedures that will provide you with evidence as to the effectiveness of your instruction. The emphasis is on collecting data and revising the instruction.

Page 3: Chapter 10 Designing and Conducting Formative Evaluations

OBJECTIVES

Describe the purposes for and various stages of formative evaluation of instructor-developed materials, instructor-selected materials, and instructor-presented instruction.

Describe the instruments used in a formative evaluation.

Develop an appropriate formative evaluation plan and construct instruments for a set of instructional materials or an instructor presentation.

Collect data according to a formative evaluation plan for a given set of instructional materials or instructor presentation.

Page 4: Chapter 10 Designing and Conducting Formative Evaluations

THE CONCEPT OF FORMATIVE EVALUATION

Definition

The collection of data and information during the development of instruction that can be used to improve the effectiveness of the instruction.

Purpose

To obtain data that can be used to revise the instruction to make it more efficient and effective.

Page 5: Chapter 10 Designing and Conducting Formative Evaluations

THE ROLE OF SUBJECT-MATTER, LEARNING, AND LEARNER SPECIALISTS

It is important to have the instruction reviewed by specialists.

A subject-matter expert (SME) may be able to comment on the accuracy and currency of the instruction.

A learning specialist may be able to critique your instruction in light of what is known about enhancing that particular type of learning.

A learner specialist may be able to provide insights into the appropriateness of the material for the eventual performance context.

Page 6: Chapter 10 Designing and Conducting Formative Evaluations

THE THREE PHASES OF FORMATIVE EVALUATION

I. One-to-One Evaluation

II. Small-Group Evaluation

III. Field Trial

Page 7: Chapter 10 Designing and Conducting Formative Evaluations

ONE-TO-ONE EVALUATION

Criteria

Selecting Learners

Data Collection

Procedures

Assessments and Questionnaires

Learning Time

Data Interpretation

Outcomes

Purpose

To identify and remove the most obvious errors in the instruction

To obtain initial performance indications and reactions to the content by learners

Criteria

Clarity

Impact

Feasibility

Page 8: Chapter 10 Designing and Conducting Formative Evaluations

CRITERIA

During the development of the instructional strategy and the instruction itself, designers and developers make a myriad of translations and decisions that link the content, learners, instructional format, and instructional setting.

The one-to-one trials provide designers with their first glimpse of the viability of these links and translations from the learners' perspective.

The three main criteria and the decisions designers will make during the evaluation are as follows:

1. Clarity: Is the message, or what is being presented, clear to individual target learners?

2. Impact: What is the impact of the instruction on the individual learner's attitudes and achievement of the objectives and goals?

3. Feasibility: How feasible is the instruction given the available resources (time/context)?

Page 9: Chapter 10 Designing and Conducting Formative Evaluations

SELECTING LEARNERS

One of the most critical decisions by the designer in the formative evaluation is the selection of learners to participate in the study.

This is not an experiment; there is no need for random selection of large numbers of learners.

Actually, the designer wants to select a few learners who represent the range of ability in the group, because prior learning or ability is usually one of the major determiners of the ability to learn new skills and information.

Page 10: Chapter 10 Designing and Conducting Formative Evaluations

DATA COLLECTION

Data on the clarity of the instruction fall into three categories.

The first category, message, relates to how clear the basic message is to the learner, determined by such factors as vocabulary, sentence complexity, and message structure. Regardless of whether the learner reads, hears, or sees the message, he or she must be able to follow it.

The second category, links, refers to how the basic message is tailored for the learner, including contexts, examples, analogies, illustrations, demonstrations, and so forth. When these links are also unfamiliar to the learner, the basic message will undoubtedly be more complex.

The third category, procedures, refers to characteristics of the instruction such as the sequence, the size of segment presented, the transitions between segments, the pace, and the variation built into the presentation. The clarity of the instruction may change for the learner when any one of these elements is inappropriate for her or him.
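The three clarity categories above can also serve as a simple scheme for recording one-to-one observations. The Python sketch below is only an illustration of that idea, not part of the chapter; the class names, fields, and example notes are assumptions made for this example.

from dataclasses import dataclass, field

# One observation recorded during a one-to-one session, filed under one of the
# three clarity categories described above: message, links, or procedures.
@dataclass
class ClarityNote:
    category: str         # "message", "links", or "procedures"
    location: str         # where in the instruction the problem appeared
    learner_comment: str
    suggested_fix: str = ""

@dataclass
class OneToOneSession:
    learner_id: str
    notes: list = field(default_factory=list)

    def add_note(self, category, location, comment, fix=""):
        self.notes.append(ClarityNote(category, location, comment, fix))

    def by_category(self, category):
        """Return all notes filed under a single clarity category."""
        return [n for n in self.notes if n.category == category]

# Hypothetical use during a session:
session = OneToOneSession("learner_01")
session.add_note("message", "Lesson 2, page 3", "Did not know the term 'criterion'")
session.add_note("procedures", "Practice set 1", "Too many steps before feedback")
print(len(session.by_category("message")))   # -> 1

Sorting the notes by category in this way keeps the later data interpretation step focused on message, links, and procedures rather than on a single undifferentiated list of comments.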

Page 11: Chapter 10 Designing and Conducting Formative Evaluations

PROCEDURES

The typical procedure in a one-to-one evaluation is to explain to the learner that a new set of instructional materials has been designed and that you would like his or her reaction to them.

You should say that any mistakes learners might make are probably due to deficiencies in the materials and are not their fault.

Encourage the learners to be relaxed and to talk about the materials.

You should have the learners not only go through the instructional materials but also take the test(s) provided with the materials.

Page 12: Chapter 10 Designing and Conducting Formative Evaluations

ASSESSMENTS AND QUESTIONNAIRES

After the students in the one-to-one trials have completed the instruction, they should review the posttest and attitude questionnaire in the same fashion.

After each item or step in the assessment, ask the learners why they made the particular responses that they did.

This will help you spot not only mistakes but also the reasons for the mistakes, which can be quite helpful during the revision process.

Page 13: Chapter 10 Designing and Conducting Formative Evaluations

LEARNING TIME

One design interest during one-to-one evaluation is determining the amount of time required for learners to complete the instruction. Any estimate obtained at this stage is very rough, because of the interaction between the learner and the designer.

You can attempt to subtract a certain percentage of the time from the total time, but experience has indicated that such estimates can be quite inaccurate.
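As a purely illustrative sketch of the kind of adjustment described above, the short calculation below assumes a session length and an interaction overhead; both numbers are invented for the example, not values from the chapter.

# Rough learning-time estimate from a one-to-one session.
# Both numbers below are assumptions made for illustration; the chapter warns
# that estimates of this kind can be quite inaccurate.
session_minutes = 90      # total clock time for the one-to-one session
assumed_overhead = 0.30   # share of time spent interacting with the designer

estimated_learning_time = session_minutes * (1 - assumed_overhead)
print(f"Estimated unassisted learning time: {estimated_learning_time:.0f} minutes")  # about 63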

Page 14: Chapter 10 Designing and Conducting Formative Evaluations

DATA INTERPRETATION

The information on the clarity of the instruction, its impact on the learner, and the feasibility of the instruction needs to be summarized and focused.

Particular aspects of the instruction found to be weak can then be reconsidered in order to plan revisions likely to improve the instruction for similar learners.
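One simple way to summarize and focus such data is to tally, for each objective, how many of the one-to-one learners missed its posttest items. The sketch below is a hypothetical illustration; the objective labels and results are invented, not taken from the chapter.

from collections import Counter

# Hypothetical one-to-one results: for each learner, the objectives whose
# posttest items were missed (labels and data invented for illustration).
missed_by_learner = {
    "learner_01": ["obj_2", "obj_4"],
    "learner_02": ["obj_2"],
    "learner_03": ["obj_2", "obj_4", "obj_5"],
}

error_counts = Counter(obj for missed in missed_by_learner.values() for obj in missed)

# Objectives missed by the most learners are candidates for revision.
for objective, count in error_counts.most_common():
    print(f"{objective}: missed by {count} of {len(missed_by_learner)} learners")

A tally like this only shows where errors cluster; the interview notes from the one-to-one sessions are still needed to understand why the errors occurred.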

Page 15: Chapter 10 Designing and Conducting Formative Evaluations

OUTCOMES

The outcomes of one-to-one trials are instruction that

(1) contains appropriate vocabulary, language complexity, examples, and illustrations for the participating learner;

(2) either yields reasonable learner attitudes and achievement or is revised with the objective of improving learner attitudes or performance during subsequent trials; and

(3) appears feasible for use with the available learners, resources, and setting. The instruction can be refined further using small-group trials.

Page 16: Chapter 10 Designing and Conducting Formative Evaluations

FIELD TRIAL

Purpose

To determine whether the changes/revisions made in the instruction after the small-group stage were effective.

To see whether the instruction can be used in the context for which it was intended.

In this final stage of formative evaluation, the instructor attempts to use a learning context that closely resembles the intended context for the ultimate use of the instructional materials.

Page 17: Chapter 10 Designing and Conducting Formative Evaluations

SMALL-GROUP EVALUATION

Purposes

To determine the effectiveness of changes made following the one-to-one evaluation.

To identify any remaining learning problems that learners may have.

To determine whether learners can use the instruction without interacting with the instructor.

Page 18: Chapter 10 Designing and Conducting Formative Evaluations

FORMATIVE EVALUATION

To Determine Weaknesses in the Instruction

Focusing the design only on the goals and objectives of the instruction would be too limited.

Data on learners' achievement of goals and objectives would be insufficient, though important, because these data will only provide information about where errors occur rather than why they occur.

Page 19: Chapter 10 Designing and Conducting Formative Evaluations

FORMATIVE EVALUATION HAS SIX STAGES

Design Review

Expert Review

One-To-One

Small Group

Field Trials

Ongoing Evaluation

Page 20: Chapter 10 Designing and Conducting Formative Evaluations

DESIGN REVIEW

Does the instructional goal match the problem identified in the needs assessment?

Does the learner & environmental analysis match the audience?

Does the task analysis include all the prerequisite skills?

Are the test items reliable and valid, and do they match the objectives?

Page 21: Chapter 10 Designing and Conducting Formative Evaluations

EXPERT REVIEW

Is the content accurate & up-to-date?

Does it present a consistent perspective?

Are examples, practice exercises, & feedback realistic & accurate?

Is the pedagogy consistent with current instructional theory?

Is the instruction appropriate to the audience?

Page 22: Chapter 10 Designing and Conducting Formative Evaluations

ONE-TO-ONE REVIEW

Is the message clear?

What is the impact on learner attitudes and on achievement of objectives & goals?

Feasibility of training

Page 23: Chapter 10 Designing and Conducting Formative Evaluations

SMALL GROUP REVIEW

Look for the effects caused by the changes made in the one-to-one review

Identify any remaining learning problems

Page 24: Chapter 10 Designing and Conducting Formative Evaluations

FIELD TRIAL REVIEWS

Look for the effects of changes made after the small-group evaluation

Can the instruction be used in the context for which it was intended?

Page 25: Chapter 10 Designing and Conducting Formative Evaluations

ONGOING EVALUATION

Project Size

Life span of content

Audiences change

One-To-One

Small Group Tryouts

Field Trials

Page 26: Chapter 10 Designing and Conducting Formative Evaluations

LEARNER EVALUATION

Do learners understand the instruction?

Do they know what to do during the practice & the tests?

Can they interpret graphics in the text?

Can they read all the material?

How much time does it take?


Page 28: Chapter 10 Designing and Conducting Formative Evaluations

SUMMARY

Formative evaluation of instructional materials is conducted to determine the effectiveness of the materials and to revise them in areas where they are ineffective. Formative evaluations should be conducted on newly developed materials as well as on existing materials that are selected based on the instructional strategy. Evaluations are necessary for both mediated and instructor-presented materials. The evaluations should be designed to produce data to pinpoint specific areas where the instruction is faulty and to suggest how it should be revised. An iterative process of formative evaluation containing at least three cycles of data collection, analysis, and revision is recommended. Each cycle focuses on different aspects of quality.

The first cycle, one-to-one evaluation, is conducted to pinpoint gross errors in the materials. These errors typically relate to both the clarity of the vocabulary, concepts, and examples used and the motivational value of all five components of the instructional materials. Evaluations can also be conducted with content experts and individuals familiar with the characteristics of target learners. One-to-one evaluations must be conducted with representatives of the target population. An interactive interview process is used so the evaluator can learn what was wrong with the materials and why it was wrong.

Page 29: Chapter 10 Designing and Conducting Formative Evaluations

THE END

Carolyn Jenkins-Haigler