
Page 1: Evaluation design

Evaluation Design

Curriculum Evaluation (EDU 5352)
Mina Badiei (GS31016)

Page 2: Evaluation design

What Is Evaluation Design?

• The plan for an evaluation project is called a "design".

• It is a particularly vital step in providing an appropriate assessment.

• A good design maximizes the quality of the evaluation and helps minimize and justify the time and cost needed to perform the work.

Page 3: Evaluation design

Design Process

1. Identifying evaluation questions and issues
2. Identifying research designs and comparisons
3. Sampling methods
4. Data collection instruments
5. Collecting and coding qualitative data

Page 4: Evaluation design

Evaluation Design Approaches

• Quantitative
• Qualitative
• Mixed-Method

Page 5: Evaluation design

Quantitative approach

• Quantitative data can be counted, measured, and reported in numerical form and answer questions such as who, what, where, and how much.

• The quantitative approach is useful for describing concrete phenomena and for statistically analyzing results.

• Data collection instruments can be used with large numbers of study participants.

• Data collection instruments can be standardized, allowing for easy comparison within and across studies (see the sketch below).
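As a rough illustration of this kind of numerical comparison (not part of the original slides), here is a minimal Python sketch; the groups and scores are hypothetical:

```python
# A minimal sketch (hypothetical data): summarizing scores from a
# standardized instrument so two groups can be compared numerically.
from statistics import mean, stdev

# Hypothetical posttest scores from a standardized instrument.
program_group = [78, 85, 92, 70, 88, 81, 79, 90]
comparison_group = [72, 80, 75, 68, 77, 74, 79, 71]

for name, scores in [("program", program_group),
                     ("comparison", comparison_group)]:
    print(f"{name}: n={len(scores)}, "
          f"mean={mean(scores):.1f}, sd={stdev(scores):.1f}")
```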

Page 6: Evaluation design

• Experimental designs tend to be rigorous in that they control for external factors and enable you to argue, with some degree of confidence, that your findings are due to the effects of the program rather than other, unrelated, factors.

• They are rarely applicable in educational settings where there is a chance that students may be denied an opportunity to participate in a program because of the evaluation design.

Experimental

• Quasi-experimental designs are those in which participants are matched beforehand, or after the fact, using statistical methods.

• These studies offer a reasonable solution for schools or districts that cannot randomly assign students to different programs, but still desire some degree of control so that they can make statistical statements about their findings (see the matching sketch below).

Quasi-Experimental
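To make the "matched after the fact" idea concrete, here is a minimal sketch of one simple matching strategy (nearest neighbor on a pretest covariate); the data are hypothetical and real studies would use more covariates and formal methods such as propensity scores:

```python
# A minimal sketch (hypothetical data) of after-the-fact matching:
# each program student is paired with the comparison student whose
# pretest score is closest, then matched outcomes are compared.
from statistics import mean

# (pretest, posttest) pairs; all values are hypothetical.
program = [(55, 70), (60, 78), (72, 85), (80, 90)]
comparison_pool = [(50, 58), (56, 62), (61, 66),
                   (70, 74), (79, 80), (85, 88)]

matched_diffs = []
available = list(comparison_pool)
for pre, post in program:
    # Nearest neighbor on the pretest covariate.
    match = min(available, key=lambda c: abs(c[0] - pre))
    available.remove(match)  # match without replacement
    matched_diffs.append(post - match[1])

print(f"mean matched posttest difference: {mean(matched_diffs):.1f}")
```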

• It is intended to demonstrate trends or changes over time.

• The purpose of the design is not to examine the impact of an intervention, but simply to explore and describe changes in the construct of interest.

• Time series offer more data points, but there is little control over extraneous factors (a minimal trend sketch follows).

Time-series Study
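Since the design describes change over time rather than testing an intervention, a basic analysis is just a trend estimate. A minimal sketch with hypothetical repeated measurements:

```python
# A minimal sketch (hypothetical data): describing a trend across
# repeated measurements rather than testing an intervention's impact.
from statistics import mean

# Mean reading score at each of six measurement occasions (hypothetical).
occasions = [1, 2, 3, 4, 5, 6]
scores = [61.0, 62.5, 62.0, 64.1, 65.3, 66.0]

# Ordinary least-squares slope: change in score per occasion.
x_bar, y_bar = mean(occasions), mean(scores)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(occasions, scores))
         / sum((x - x_bar) ** 2 for x in occasions))
print(f"estimated trend: {slope:+.2f} points per occasion")
```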

Page 7: Evaluation design

• It is intended to show a snapshot in time.
• It might be used to answer questions like:
- What do parents think about our school?
- What do parents see as the strengths and weaknesses of the school environment?
(A minimal tabulation sketch follows.)

Cross-sectional
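Because a cross-sectional study is a one-time snapshot, analysis is often simple tabulation of responses. A minimal sketch with hypothetical parent answers:

```python
# A minimal sketch (hypothetical responses): tabulating a one-time
# parent survey, e.g. "What do you think about our school?"
from collections import Counter

responses = ["positive", "positive", "neutral", "negative",
             "positive", "neutral", "positive"]

counts = Counter(responses)
for answer, n in counts.most_common():
    print(f"{answer}: {n} ({100 * n / len(responses):.0f}%)")
```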

• Case studies are those which seek to follow program implementation or impact on an individual, group, or organization, such as a school or classroom.

• Although their findings rarely generalize beyond the setting studied, case studies are an excellent way to collect evidence of program effectiveness, to increase understanding of how an intervention is working in particular settings, and to inform a larger study to be conducted later.

Case-studies

Page 8: Evaluation design

Types of Experimental Design

Post-test only design
• The least complicated of the experimental designs.
• It has three steps:
1) Decide what comparisons are desired and meaningful.
2) Ensure that the students in the two or more comparison groups are similar.
3) Collect the posttest information to determine whether differences occurred.

Pre-post design
• It is employed when a pretreatment measure can supply useful information.
• This design is commonly used in the field-trial stage.
(A minimal analysis sketch for both designs follows.)
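A minimal sketch of how the two designs are typically analyzed (hypothetical scores; assumes SciPy is installed): a post-test-only comparison uses an independent-samples t-test across groups, while a pre-post design uses a paired t-test on the same students.

```python
# A minimal sketch (hypothetical scores) contrasting the two designs.
from scipy import stats  # assumes SciPy is installed

# Post-test only: two similar groups, each measured once after the program.
program_post = [82, 88, 75, 91, 79, 85]
comparison_post = [74, 80, 71, 83, 69, 77]
t, p = stats.ttest_ind(program_post, comparison_post)
print(f"post-test only: t={t:.2f}, p={p:.3f}")

# Pre-post: the same students measured before and after treatment.
pre = [60, 64, 58, 71, 66, 62]
post = [68, 70, 63, 78, 71, 69]
t, p = stats.ttest_rel(pre, post)
print(f"pre-post: t={t:.2f}, p={p:.3f}")
```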

Page 9: Evaluation design

Example:

• Equilibrium Effects of Education Policies: A Quantitative Evaluation

By: Giovanni Gallipoli, Costas Meghir, and Giovanni L. Violante

The paper compares partial and general equilibrium effects of alternative education policies on the distribution of education and earnings. The numerical counterpart of the model, parameterized through a variety of data sources, yields education enrollment responses that are broadly in line with reduced-form estimates. Through numerical simulations, the authors compare the effects of alternative policy interventions on optimal education decisions, inequality, and output. It is a kind of quasi-experimental design.

Page 10: Evaluation design

Qualitative Approach

• Qualitative data are reported in narrative form.

• A qualitative approach can provide important insights into how well a program is working and what can be done to increase its impact.

• Qualitative data can also provide information about how participants – including the people responsible for operating the program as well as the target audience – feel about the program.

• It promotes understanding of diverse stakeholder perspectives (e.g., what the program means to different people).

• Stakeholders, funders, policymakers, and the public may find quotes and anecdotes easier to understand and more appealing than statistical data.

Page 11: Evaluation design

Observation

Interview

Focus Groups

Document Studies

Key-informants

Page 12: Evaluation design

Observation

• Observational techniques are methods by which an individual or individuals gather firsthand data on programs, processes, or behaviors being studied.

• They provide evaluators with an opportunity to collect data on a wide range of behaviors, to capture a great variety of interactions, and to openly explore the evaluation topic.

• By directly observing operations and activities, the evaluator can develop a holistic perspective, i.e., an understanding of the context within which the project operates.

• Observational approaches also allow the evaluator to learn about things the participants or staff may be unaware of or that they are unwilling or unable to discuss in an interview or focus group.

Page 13: Evaluation design

When to use observations

• Observations can be useful during both the formative and summative phases of evaluation. For example, during the formative phase, observations can be useful in determining whether or not the project is being delivered and operated as planned.

• In the hypothetical project, observations could be used to describe the faculty development sessions, examining the extent to which participants understand the concepts, ask the right questions, and are engaged in appropriate interactions.

• Observations during the summative phase of evaluation can be used to determine whether or not the project is successful. The technique would be especially useful in directly examining teaching methods employed by the faculty in their own classes after program participation.

Page 14: Evaluation design

Interviews

• Interviews provide very different data from observations: they allow the evaluation team to capture the perspectives of project participants, staff, and others associated with the project.

• In the hypothetical example, interviews with project staff can provide information on the early stages of the implementation and problems encountered.

• An interview, rather than a paper and pencil survey, is selected when interpersonal contact is important and when opportunities for follow up of interesting comments are desired.

• Two types of interviews are used in evaluation research: structured interviews, in which a carefully worded questionnaire is administered; and in-depth interviews, in which the interviewer does not follow a rigid form.

Page 15: Evaluation design

Contd.

Structured interviews:
• The emphasis is on obtaining answers to carefully phrased questions.
• Interviewers are trained to deviate only minimally from the question wording to ensure uniformity of interview administration.

In-depth interviews:
• The interviewers seek to encourage free and open responses; there may be a trade-off between comprehensive coverage of topics and in-depth exploration of a limited set of questions.
• In-depth interviews also encourage capturing respondents' perceptions in their own words. This allows the evaluator to present the meaningfulness of the experience from the respondent's perspective.
• In-depth interviews are conducted with individuals or with a small group of individuals.

Page 16: Evaluation design

When to use interviews

• Interviews can be used at any stage of the evaluation process. They are especially useful in answering questions such as those suggested by Patton (1990):
• What does the program look and feel like to the participants? To other stakeholders?
• What are the experiences of program participants?
• What do stakeholders know about the project?
• What thoughts do stakeholders knowledgeable about the program have concerning program operations, processes, and outcomes?
• What are participants' and stakeholders' expectations?
• What features of the project are most salient to the participants?
• What changes do participants perceive in themselves as a result of their involvement in the project?

Page 17: Evaluation design

Focus Groups

• Focus groups combine elements of both interviewing and participant observation.

• The focus group session is, indeed, an interview, not a discussion group, problem-solving session, or decision-making group (Patton, 1990).

• The hallmark of focus groups is the explicit use of the group interaction to generate data and insights that would be unlikely to emerge without the interaction found in a group.

• Focus groups are a gathering of 8 to 12 people who share some characteristics relevant to the evaluation. They were originally used as a market research tool to investigate the appeal of various products.

Page 18: Evaluation design

Contd.

• The focus group technique has been adopted by other fields, such as education, as a tool for gathering data on a given topic.

• Focus groups conducted by experts take place in a focus group facility that includes recording apparatus (audio and/or visual) and an attached room with a one-way mirror for observation. There is an official recorder, who may or may not be in the room.

• Participants are paid for attendance and provided with refreshments.

Page 19: Evaluation design

When to use focus groups

• When conducting evaluations, focus groups are useful in answering the same type of questions as in-depth interviews, except in a social context.

• Specific applications of the focus group method in evaluations include:

- identifying and defining problems in project implementation;
- identifying project strengths, weaknesses, and recommendations;
- assisting with interpretation of quantitative findings;
- obtaining perceptions of project outcomes and impacts; and
- generating new ideas.

Page 20: Evaluation design

Other Qualitative Methods

• Document Studies: Guba and Lincoln (1981) defined a document as "any written or recorded material" not prepared for the purposes of the evaluation or at the request of the inquirer. Documents can be divided into two major categories: public records and personal documents.

• Key Informant: A key informant is a person (or group of persons) who has unique skills or a professional background related to the issue/intervention being evaluated, is knowledgeable about the project participants, or has access to other information of interest to the evaluator.

• Key informants can help the evaluation team better understand the issue being evaluated, as well as the project participants, their backgrounds, behaviors, and attitudes, and any language or ethnic considerations. They can offer expertise beyond the evaluation team. They are also very useful for assisting with the evaluation of curricula and other educational materials. Key informants can be surveyed or interviewed individually or through focus groups.

Page 21: Evaluation design

Example:

• A Qualitative Evaluation Process for Educational Programs Serving Handicapped Students in Rural Areas

By: Lucille Annese Zeph

The paper describes a qualitative methodology designed to evaluate special education programs in rural areas serving students with severe special needs. A rationale is provided for the use of the elements of aesthetic criticism as the basis of the methodology, and specific descriptions of the steps for its implementation and validation are provided. Some practical limitations and particular areas of usefulness are also discussed.

Page 22: Evaluation design

Mixed method

• In recent years evaluators of educational and social programs have expanded their methodological repertoire with designs that include the use of both qualitative and quantitative methods. Such practice, however, needs to be grounded in a theory that can meaningfully guide the design and implementation of mixed-method evaluations.

• In many cases a mixture of designs can work together as a design for evaluating a large, complex program.

• The ideal evaluation combines quantitative and qualitative methods. A mixed-method approach offers a range of perspectives on a program's processes and outcomes.

• For example, the impact of a reading intervention on student performance may be compared for all students in a school over a period of time using repeated measures from exams administered for this purpose, but could also include more focused case studies of particular classes to learn about crucial implementation issues (see the sketch below).
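A minimal sketch of how the two strands of that reading example might connect (all class names, gains, and the selection threshold are hypothetical): quantitative gains are computed for every class, and outlying classes are flagged for qualitative case-study follow-up.

```python
# A minimal sketch (hypothetical data) of a mixed-method design:
# quantitative gains for every class, with outlying classes flagged
# for qualitative case-study follow-up.
from statistics import mean

# Mean exam gain per class across the repeated measures (hypothetical).
class_gains = {"1A": 4.2, "1B": 0.3, "2A": 5.1, "2B": 4.8, "3A": -0.5}

overall = mean(class_gains.values())
print(f"school-wide mean gain: {overall:.1f} points")

# Classes far below the school-wide mean become qualitative case studies
# to learn why implementation differed there.
for cls, gain in class_gains.items():
    if gain < overall - 2.0:  # arbitrary illustrative threshold
        print(f"class {cls} (gain {gain:+.1f}): select for case study")
```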

Page 23: Evaluation design

Benefits

• It increases the validity of your findings by allowing you to examine the same phenomenon in different ways.

• It can result in better data collection instruments. For example, focus groups can be invaluable in the development or selection of a questionnaire used to gather quantitative data.

• It promotes greater understanding of the findings. Quantitative data can show that change occurred and how much change took place, while qualitative data can help you and others understand what happened and why.

• It offers something for everyone. Some stakeholders may respond more favorably to a presentation featuring charts and graphs. Others may prefer anecdotes and stories.

Page 24: Evaluation design

Example:

• A Mixed Methods Evaluation of a 12-Week Insurance-Sponsored Weight Management Program Incorporating Cognitive–Behavioral Counseling

By: Christiaan Abildso, Sam Zizzi, Diana Gilleland, James Thomas, and Daniel Bonner

A sequential mixed methods approach was used to assess the physical and psychosocial impact of a 12-week cognitive–behavioral weight management program and explore factors associated with weight loss. Quantitative data revealed a program completion rate and mean percentage weight loss that compare favorably with other interventions, and differential psychosocial impacts on those losing more weight. Telephone interviews revealed four potential mechanisms for these differential impacts: (a) fostering accountability, (b) balancing perceived effort and success, (c) redefining "success," and (d) developing cognitive flexibility.