

Chapter 8: CURRICULUM EVALUATION

    LEARNING OUTCOMES

When you complete this module, you will be able to:

State what curriculum evaluation is

List reasons for evaluating the curriculum

Explain the characteristics of the CIPP model

Describe the features of Stake's model of curriculum evaluation

Explain the characteristics of Eisner's Connoisseurship model

Compare the different instruments of data collection

    OVERVIEW

    8.0 Introduction

8.1 What is curriculum evaluation?

8.2 The CIPP evaluation model

8.3 Application of the CIPP evaluation model

8.3 Stake's evaluation model

8.4 Eisner's connoisseurship evaluation model

8.5 Data collection methods

8.5.1 Interviews
8.5.2 Observations
8.5.3 Tests
8.5.4 Surveys
8.5.5 Content analysis
8.5.6 Portfolio

Discussion Questions

Readings


Don't Make Physical Education (PE) an Examination Subject

The Education Ministry has asked ministry officials to look into introducing Physical Education (PE) as an examination subject. I think PE should not be an exam subject.

In the 1950s, 1960s, 1970s and 1980s, PE was never an exam subject and yet the country produced world-class sportsmen and women in badminton, weightlifting, hockey, athletics and other events: people like Jegathesan, Mokhtar Dahari, Tan Aik Huang, Rajamani, Ng Boon Bee, Nurul Huda, Marina Chin, Karu Selvaratnam, Nashtar Singh, Zaiton Sulaiman, Ghani Minhat, Tan Aik Mong, Dhanapal Naidu and many others.

We had no sports schools in those days. All schools were sports schools. How did we produce excellent sportsmen and sportswomen? We had supportive parents, interested headmasters, dedicated and committed PE teachers, coaches and disciplined sportsmen and sportswomen.

The sporting calendar for Term 1 (January to April) had football, athletics and cross country. In Term 2 (May to August) it was athletics and cricket. In Term 3 (September to December) it was hockey and rugby. As for the court games, they were played all year round.

There were inter-house games, and if your school had six houses you would play at least five matches for your house. There were inter-school games and the rivalry was very intense.

Today, inter-house games are extinct, and even if schools do have them, it is on a knock-out basis. It is the same for inter-school games.

Sporting activities have become a burden to schools. There is little organisation and the faster they are over, the better. The school saves money and teachers have more time for completing the syllabus and revision in preparation for national examinations.

- Retired Physical Education Teacher

[Source: Letters to the Editor, New Straits Times, February 1, 2005]

8.0 Introduction

In Module 7, we discussed the implementation of the curriculum plan. We looked at why people resist change and the roles of teachers, students, administrators and parents in ensuring the successful implementation of change. In this chapter, we will focus on determining whether the curriculum plan implemented has achieved its goals and objectives as planned. In other words, the curriculum has to be evaluated to determine whether all the effort, in terms of finance and human resources, has been worthwhile. Various stakeholders want to know the extent to which the curriculum has been successfully implemented. The information collected from evaluating a curriculum forms the basis for making judgements about how successfully the programme has achieved its intended outcomes and about the worth or value of the programme.


8.1 Curriculum Evaluation

What is evaluation? Evaluation is the process of collecting data on a programme to determine its value or worth with the aim of deciding whether to adopt, reject, or revise the programme. Programmes are evaluated to answer the questions and concerns of various parties. The public want to know whether the curriculum implemented has achieved its aims and objectives; teachers want to know whether what they are doing in the classroom is effective; and the developer or planner wants to know how to improve the curriculum product.

McNeil (1977) states that curriculum evaluation is an attempt to throw light on two questions: Do planned learning opportunities, programmes, courses and activities as developed and organised actually produce desired results? How can the curriculum offerings best be improved? (p. 134).

Ornstein and Hunkins (1998) define curriculum evaluation as a process or cluster of processes that people perform in order to gather data that will enable them to decide whether to accept, change, or eliminate something: the curriculum in general or an educational textbook in particular (p. 320).

Worthen and Sanders (1987) define curriculum evaluation as the formal determination of the quality, effectiveness, or value of a programme, product, project, process, objective, or curriculum (pp. 22-23).

Gay (1985) argues that the aim of curriculum evaluation is to identify its weaknesses and strengths, as well as problems encountered in implementation; to improve the curriculum development process; and to determine the effectiveness of the curriculum and the returns on the finance allocated.

Oliva (1988) defined curriculum evaluation as the process of delineating, obtaining, and providing useful information for judging decision alternatives. The primary decision alternatives to consider based upon the evaluation results are: to maintain the curriculum as is; to modify the curriculum; or to eliminate the curriculum.

Evaluation is a disciplined inquiry to determine the worth of things, where things may include programmes, procedures or objects. Generally, research and evaluation are different even though similar data collection tools may be used. The three dimensions on which they may differ are:

First, evaluation need not have as its objective the generation of knowledge. Evaluation is applied while research tends to be basic.

ACTIVITY 8.1

Read the newspaper report at the beginning of the chapter and answer the following questions:

1. Do you think physical education should be made an examination subject?

2. Do you agree with the writer's opinions on the state of sports in schools?


When the cook tastes the soup,

that's formative evaluation;

when the guests taste the soup,

that's summative evaluation.

- Robert Stake

Second, evaluation presumably produces information that is used to make decisions or forms the basis of policy. Evaluation yields information that has immediate use, while research need not.

Third, evaluation is a judgement of worth. Evaluation results in value judgements, while research need not and, some would say, should not.

8.2 Formative and Summative Evaluation

As mentioned earlier, evaluation is the process of determining the significance or worth of programmes or procedures. Scriven (1967) differentiated between formative evaluation and summative evaluation. The two terms have come to mean different things to different people, but in this chapter Scriven's original definitions will be used.

8.2.1 Formative evaluation

The term formative indicates that data is gathered during the formation or development of the curriculum so that revisions to it can be made. Formative evaluation may include determining who needs the programme (e.g. secondary school students), how great the need is (e.g. students need to be taught ICT skills to keep pace with the expansion of technology) and how to meet the need (e.g. make a subject on ICT compulsory for all secondary school students). In education, the aim of formative evaluation is usually to obtain information to improve a programme.

In formative evaluation, experts would evaluate the match between the instructional strategies and materials used and the learning outcomes, or what the curriculum aims to achieve. For example, it is possible that in a curriculum plan the learning outcomes and the learning activities do not match: you want students to develop critical thinking skills, but there are no learning activities which provide opportunities for students to practise critical thinking. Formative evaluation by experts is useful before full-scale implementation of the programme. Review of the curriculum plan by experts may provide useful information for modifying or revising selected strategies.

In formative evaluation, learners may also be included to review the materials to determine whether they can use the new materials: for example, do they have the relevant prerequisites, and are they motivated to learn? From these formative reviews, problems may be discovered. For example, the curriculum document may contain spelling errors, a confusing sequence of content, or inappropriate examples or illustrations. The feedback obtained could be used to revise and improve instruction, or to decide whether or not to adopt the programme before full implementation.

8.2.2 Summative evaluation

The term summative indicates that data is collected at the end of the implementation of the curriculum programme. Summative evaluation can occur just after new course materials have been implemented in full (i.e. to evaluate the effectiveness of the programme), or several months to years after the materials have been implemented in full. It is important to specify what questions you want answered by the evaluation and what decisions will be made as a result of the evaluation. You may want



to know if learners achieved the objectives or whether the programme produced the desired outcomes; for example, whether the use of specific simulation software in the teaching of geography enhanced the decision-making skills of learners. These outcomes can be determined through formal assessment tasks such as marks obtained in tests and examinations. Also of concern is whether the innovation was cost-effective. Was the innovation efficient in terms of time to completion? Were there any unexpected outcomes? Besides quantitative data used to determine how well students met specified objectives, data could also include qualitative interviews, direct observations, and document analyses.
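As a minimal sketch of how such quantitative summative data might be analysed, the example below compares matched pre- and post-test marks for the same learners. The marks, the class size and the 5% significance level are illustrative assumptions, not data from this module:

```python
# Hypothetical sketch: comparing learners' marks before and after a new
# curriculum unit as one summative-evaluation check. All figures are invented.
from scipy import stats

pre_test = [52, 61, 48, 70, 55, 63, 58, 49, 66, 54]   # marks before the unit
post_test = [60, 68, 50, 78, 62, 70, 59, 55, 73, 61]  # marks after the unit

# Paired t-test: did mean marks change for the same learners?
t_stat, p_value = stats.ttest_rel(post_test, pre_test)
mean_gain = sum(b - a for a, b in zip(pre_test, post_test)) / len(pre_test)

print(f"Mean gain: {mean_gain:.1f} marks, t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("The gain is unlikely to be due to chance alone (5% level).")
else:
    print("No statistically significant difference was detected.")
```

Such a comparison answers only the question of whether learners achieved the objectives; the qualitative interviews, observations and document analyses mentioned above would still be needed to interpret why the results turned out as they did.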

8.3 Curriculum Evaluation Models

How should you go about evaluating a curriculum? Several experts have proposed models describing how and what should be involved in evaluating a curriculum. Models are useful because they help you define the parameters of an evaluation, the concepts to study and the procedures to be used to extract important data. Numerous evaluation models have been proposed, but three models are discussed here.

8.3.1 Context, Input, Process, Product Model (CIPP Model)

Daniel L. Stufflebeam (1971), who chaired the Phi Delta Kappa National Study Committee on Evaluation, introduced a widely cited model of evaluation known as the CIPP (context, input, process and product) model. The approach, when applied to education, aims to determine whether a particular educational effort has resulted in a positive change in a school, college, university or training organisation. A major aspect of Stufflebeam's model is centred on decision making, or the act of making up one's mind about the programme introduced. For evaluations to be done correctly and to aid the decision-making process, curriculum evaluators have to:

first, delineate what is to be evaluated and determine what information has to be collected (e.g. how effective the new science programme has been in enhancing the scientific thinking skills of children in the primary grades);

second, obtain or collect the information using selected techniques and methods (e.g. interview teachers, collect test scores of students);

third, provide or make available the information (in the form of tables or graphs) to interested parties. To decide whether to maintain, modify or eliminate the new curriculum or programme, information is obtained by conducting the following four types of evaluation: context, input, process and product.


SELF-TEST 8.1

1. Identify the key words in the five definitions of curriculum evaluation.

2. Why do you need to evaluate a curriculum?

3. What's the difference between formative and summative evaluation?


Stufflebeam's model of evaluation relies on both formative and summative evaluation to determine the overall effectiveness of a curriculum programme (see Figure 8.1). Evaluation is required at all levels of the programme implemented.

[Figure 8.1: Formative and summative evaluation across the four stages of the CIPP Model (context, input, process, product)]

a) Context Evaluation (What needs to be done, and in what context?)

This is the most basic kind of evaluation, with the purpose of providing a rationale for the objectives. The evaluator defines the environment in which the curriculum is implemented, which could be a classroom, a school or a training department. The evaluator determines needs that were not met and the reasons why they are not being met. Also identified are the shortcomings and problems in the organisation under review (e.g. a sizable proportion of students in secondary schools are unable to read at the desired level, the ratio of students to computers is large, a sizable proportion of science teachers are not proficient enough to teach in English). Goals and objectives are specified on the basis of context evaluation. In other words, the evaluator determines the background in which the innovations are being implemented. The techniques of data collection would include observation of conditions in the school, background statistics of teachers and interviews with the players involved in the implementation of the curriculum.

b) Input Evaluation (How should it be done?)

This is evaluation whose purpose is to provide information for determining how to utilise resources to achieve the objectives of the curriculum. The resources of the school and various designs for carrying out the curriculum are considered. At this stage the evaluator decides on the procedures to be used. Unfortunately, methods for input evaluation are lacking in education. The prevalent practices include committee deliberations, appeal to the professional literature, the employment of consultants and pilot experimental projects.



c) Process Evaluation (Is it being done?)

This is the provision of periodic feedback while the curriculum is being implemented.

d) Product Evaluation (Did it succeed?)

This addresses the outcomes of the initiative. Data is collected to determine whether the curriculum managed to accomplish what it set out to achieve (e.g. to what extent students have developed a more positive attitude towards science). Product evaluation involves measuring the achievement of objectives, interpreting the data and providing decision makers with information that will enable them to decide whether to continue, terminate or modify the new curriculum. For example, product evaluation might reveal that students have become more interested in science and are more positive towards the subject after the introduction of the new science curriculum. Based on these findings, the decision may be made to implement the programme throughout the country.

SELF-TEST 8.2

1. What is the difference between context evaluation and input evaluation according to the CIPP model? Give specific examples.

2. What is the difference between process evaluation and product evaluation according to the CIPP model? Give specific examples.

[Figure: The CIPP model, with the programme's core values at the centre, surrounded by goals (context), plans (input), actions (process) and outcomes (product)]

8.3.2 Case Study: Evaluation of a Programme on Technology Integration in Teaching and Learning in Secondary Schools

The integration of information and communication technology (ICT) in teaching and learning is growing rapidly in many countries. The use of the internet


and other computer software in teaching science, mathematics and social sciences is more widespread today. To evaluate the effectiveness of such a programme using the CIPP model would involve examining the following:

Context: Examine the environment in which technology is used in teaching and learning.

How did the real environment compare to the ideal? (e.g. the programme required five computers in each classroom, but there were only two computer labs of 40 units each for 1,000 students)

What problems are hampering the success of technology integration? (e.g. technology breakdowns, not all schools had internet access)

About 50% of teachers do not have basic computer skills.

Input: Examine what resources are put into technology integration (identify the educational strategies most likely to achieve the desired result).

Is the content selected for using technology appropriate?

Have we used the right combination of media? (internet, video clips, etc.)

Process: Assess how well the implementation works (uncover implementation issues).

Did technology integration run smoothly?

Were there technology problems?

Were teachers able to integrate technology in their lessons as planned?

What are the areas of the curriculum in which most students experienced difficulty?

Product: Address the outcomes of the learning (gather information on the results of the educational intervention to interpret its worth and merit).

Did the learners learn using technology? How do you know?

Does technology integration enhance higher order thinking?

ACTIVITY 8.2

With reference to Case Study 8.3.2:

1. Suggest other questions you would ask regarding process evaluation (i.e. implementation issues).

2. What data collection techniques would you recommend for carrying out product evaluation to determine the teaching and learning outcomes of technology integration?



8.3.3 Stake's Countenance Model

The model proposed by Robert Stake (1967) suggests three phases of curriculum evaluation: the antecedent phase, the transaction phase and the outcome phase. The antecedent phase includes conditions existing prior to instruction that may relate to outcomes. The transaction phase constitutes the process of instruction, while the outcome phase relates to the effects of the programme. Stake emphasises two operations: descriptions and judgements. Descriptions are divided according to whether they refer to what was intended or what was actually observed. Judgements are separated according to whether they refer to the standards used in arriving at the judgements or to the actual judgements themselves.

[Figure 8.3: Stake's Countenance Model. Antecedents (student and teacher characteristics, curriculum content, instructional materials, community context) lead to Transactions (instruction: communication flow, time allocation, sequence of events, social climate), which lead to Outcomes (student achievement, attitudes, motor skills, effects on teachers and the institution)]

8.3.4 Eisner's Connoisseurship Model

Elliot Eisner, a well-known art educator, argued that learning is too complex to be broken down into a list of objectives and measured quantitatively to determine whether it has taken place. He argued that the teaching of small, manageable pieces of information prohibits students from putting the pieces back together and applying them to new situations. As long as we evaluate students on small bits of information, students will only learn small bits of information. Eisner contends that evaluation has always driven, and will always drive, the curriculum. If we want students to be able to solve problems and think critically, then we must evaluate problem solving and critical thinking, skills which cannot be learned by rote practice. So, to evaluate a programme we must make an attempt to capture the richness and complexity of classroom events.

He proposed the Connoisseurship Model, in which he claimed that a knowledgeable evaluator can determine whether a curriculum programme has been successful using a combination of skills and experience. The word connoisseurship comes from the Latin word cognoscere, meaning to know. For example, to be a connoisseur of food, paintings or films, you must have knowledge about and experience with different types of food, paintings or films before you are able to criticise. To be a


food critic, you must be a connoisseur of different kinds of foods. To be a critic, you must be aware of and appreciate the subtle differences in the phenomenon you are examining. In other words, the curriculum evaluator must seek to be an educational critic. When employing the procedure of educational criticism, the following questions may be asked:

What has happened in the classrooms as a result of the implementation of the new curriculum?

What are some of the events that took place? (e.g. more students are participating in field work, more students are asking questions in class, even academically weak students are talking in group activities)

How did students and teachers organise themselves in these events?

What were the reactions of participants in these events? (e.g. students enjoyed working collaboratively on projects)

How can the experiences of learners be made more effective, as suggested by students, teachers and administrators? (e.g. more resources are needed for fieldwork, more computers are needed to integrate the internet in teaching and learning)

You will notice that these questions place more emphasis on the process of learning and the quality of the experiences of those involved in the implementation of the curriculum; namely, students, teachers and administrators. According to the Connoisseurship Model, evaluators provide a description and an interpretation of the curriculum plan implemented:

1) Description: The evaluator records the actions, the features of the environment and the experiences of students, teachers and administrators. People who read the evaluation report will be able to visualise what the place looks like and the processes taking place. The aim here is to help the reader see the school or classroom and get a feel for what the curriculum evaluator or critic is attempting to understand and to help others understand.

2) Interpretation: The evaluator explains the meaning of the events reported by putting them in context. For example, why academically weak students were motivated to ask questions; why reading comprehension skills improved; why enthusiasm for doing science experiments increased; and so forth.

To be able to describe and interpret the implementation of a curriculum, the evaluator has to collect data. The following are examples of activities an evaluator may engage in:

o The evaluator observes what is going on in the classroom and records teachers and students in action using videotapes, audiotapes and photographs.

o The evaluator keeps notes of what is done, what is said and, more importantly, what is not said. The evaluator should strive to describe the tone of the curriculum in action (Ornstein and Hunkins, 1998).

o The evaluator interviews students, teachers and administrators about the quality of the curriculum.

o The evaluator analyses students' work.


One of the great benefits of Elliot W. Eisner's work has been the way in which he has both made the case for a concern with connoisseurship and criticism, and mediated these concerns for educators and researchers. The importance of his advocacy of these ideas cannot be overstated, especially at a time when rather narrow concerns with instrumental outcomes and an orientation to the technical dominate. Together they offer educators a more helpful and appropriate means to approach evaluation. For example, Eisner has been:

Advocating moving beyond technocratic and behaviouristic modes of thinking, and arguing for a concern with 'expressive outcomes'.

Calling on educators to attend to fundamentals. Eisner has consistently warned against educational fads and fashions. He has criticised dominant paradigms and invited educators and others to ask questions such as 'what is basic in education?'.

Arguing that schools should help children create meaning from experience, and that this requires an education devoted to the senses, to meaning-making and to the imagination. Eisner argues for a curriculum that fosters multiple 'literacies' in students (especially by looking to non-verbal modes of learning and expression) and a deepening of the 'artistry' of teachers.

Over the time that Eisner has been writing there have been significant shifts in the context in which schools have to operate. While there have been other voices calling for changes in the culture of schooling (notably Howard Gardner in this arena), the impact of globalisation, growing centralisation in many schooling systems, reaction against more process-oriented forms of pedagogy, and a growing instrumentalism in education have served to make Eisner's message both more pertinent to schools and more difficult to respond to.


8.5 Phases of Curriculum Evaluation

1. Aspects of the curriculum to be evaluated: The evaluator determines what is to be evaluated, which may be the total school system, a particular district, a particular grade level or a particular subject. The objectives of the evaluation activity are clearly stated.

2. Data collection: Identify the information to be collected and the tools for collecting the data, which may involve interviews, the giving of questionnaires, tests, the collection of documents and so forth. The evaluator also identifies the people from whom data is to be collected.

3. Analysis of information: The data collected is analysed and presented in the form of tables and graphs. Statistical tools are often used to compare significant differences and to establish correlations or relationships between variables.

4. Reporting of information: Reports are written describing the findings and interpretation of the data. Based on the findings, conclusions are made on the effectiveness of the curriculum implementation effort. Recommendations are made to reconsider certain aspects of the curriculum.
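To make the analysis phase concrete, the sketch below shows one way an evaluator might test for a relationship between two variables collected during an evaluation. The variable names and all the figures are illustrative assumptions, not data from this module:

```python
# Hypothetical sketch of the "Analysis of information" phase: checking whether
# two variables collected during an evaluation are related. Data are invented.
from scipy import stats

hours_of_ict_use = [2, 5, 1, 4, 6, 3, 7, 2, 5, 4]            # per learner, per week
science_scores   = [55, 68, 50, 63, 72, 60, 75, 53, 66, 62]  # end-of-term marks

# Pearson correlation between ICT use and science marks
r, p_value = stats.pearsonr(hours_of_ict_use, science_scores)
print(f"Pearson correlation r = {r:.2f} (p = {p_value:.3f})")

# A strong positive r suggests the two variables move together, but correlation
# alone does not show that ICT use caused the higher marks.
```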

8.6 Instrumentation for Curriculum Evaluation

No matter what evaluation model is used in evaluating a curriculum, the methods of data collection and the instruments used are more or less similar. The common instruments used in curriculum evaluation are interviews, observations, tests, surveys, content analysis and portfolios (records of work or products).

8.6.1 Questionnaires and Checklists

When you need to quickly and/or easily get lots of information from people in a non-threatening way, questionnaires and checklists are useful data collection techniques. Questionnaires and checklists can be completed anonymously and are relatively inexpensive to administer. Since the data collected is quantitative, it is easy to compare and analyse, and the instruments can be administered to many people, so massive amounts of data can be obtained. They are also easy to design, as many sample questionnaires already exist. However, the information obtained may not be accurate, as it relies on how truthfully subjects respond



to the questions. There is also the fear that the wording used can bias clients' responses. Questionnaires are impersonal, and since only a sample of subjects is given the instrument, we may not get the full story.
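Because questionnaire data is quantitative, it can be tabulated quickly. A minimal sketch of such a tally is shown below; the item wording, the five-point scale and the response counts are assumptions for illustration only:

```python
# Hypothetical sketch: tallying responses to one Likert-scale questionnaire item
# (1 = strongly disagree ... 5 = strongly agree). All data are invented.
from collections import Counter

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 1, 4, 3, 5, 4]  # 15 teachers, one item

counts = Counter(responses)
total = len(responses)
mean_rating = sum(responses) / total

print("Item: 'The new curriculum materials are easy to use in class'")
for rating in range(1, 6):
    n = counts.get(rating, 0)
    print(f"  {rating}: {n:2d} responses ({100 * n / total:.0f}%)")
print(f"Mean rating: {mean_rating:.1f} (n = {total})")
```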

8.6.2 Interviews

Interviews are usually one-on-one situations in which an individual asks questions to which a second individual (who may be a teacher, principal, student or parent) responds. The person asking the questions is called the interviewer, while the person giving answers to the questions is called the interviewee. Interviews are used when you want to fully understand someone's impressions or experiences, or to learn more about their answers to questionnaires. There are two general types of interview, depending on the extent to which the responses required are unstructured or structured.

In an unstructured interview, the interviewer does not follow a rigid script and there is a great deal of flexibility in the responses. For example: "Why do you think the recommended textbook for the course is difficult for low-ability learners?" The teacher responding to such a question will give a variety of reasons. Some of the reasons given may be of a general nature, while others may be specific to certain sections of the textbook. This makes the task of keeping track of responses more difficult. The open-endedness of the question requires that the interviewer record all responses and make sense of them later. The advantage of the unstructured interview is that it allows the evaluator to gather a variety of information, especially in relation to the interviewee's knowledge, beliefs or feelings about a particular situation.

In a structured interview, the questions asked usually require very specific responses. For example: "Is the recommended textbook difficult for low-ability learners because a) there is too much content, b) the language used is beyond the comprehension of low-ability learners, or c) there are too few examples and illustrations?"

Regardless of which type of interview is used, evaluators should ensure that each question is relevant for its intended purpose. In the end, the data must be translated into a form that can be analysed, and this has to be done carefully to preserve accuracy and to maintain the sense of the data. The advantages of interviews are that they can capture a full range and depth of information, they develop a relationship with teachers and students, and they are more flexible. However, interviews can take much time, can be hard to analyse and compare, can be costly, and the interviewer can bias clients' responses.

8.6.3 Observations

Observations are used to gather accurate information about how a programme actually operates, particularly about its processes. In other words, they allow the evaluator to view the operations of a programme as they are actually occurring, for example, how the people involved adapt to events as they occur.

8.6.4 Documents

When we want an impression of how a programme operates without interrupting the programme, we can review memos, minutes and other records to get comprehensive and historical information about the implementation of the programme. However, we should be quite clear about what we are looking for, as there may be a large volume of documents.


Table: A Summary of Data Collection Instruments

Questionnaires, surveys, checklists
Overall purpose: when you need to quickly and/or easily get lots of information from people in a non-threatening way.
Advantages: can be completed anonymously; inexpensive to administer; easy to compare and analyse; can be administered to many people; can gather lots of data; many sample questionnaires already exist.
Challenges: might not get careful feedback; wording can bias clients' responses; impersonal; in surveys, may need a sampling expert; doesn't get the full story.

Interviews
Overall purpose: when you want to fully understand someone's impressions or experiences, or learn more about their answers to questionnaires.
Advantages: get a full range and depth of information; develop a relationship with the client; can be flexible with the client.
Challenges: can take much time; can be hard to analyse and compare; can be costly; the interviewer can bias clients' responses.

Documentation review
Overall purpose: when you want an impression of how a programme operates without interrupting it, drawn from a review of applications, finances, memos, minutes, etc.
Advantages: gives comprehensive and historical information; doesn't interrupt the programme or the client's routine in the programme; information already exists; few biases about the information.
Challenges: often takes much time; information may be incomplete; need to be quite clear about what you are looking for; not a flexible means of getting data, as data is restricted to what already exists.

Observation
Overall purpose: to gather accurate information about how a programme actually operates, particularly about processes.
Advantages: view the operations of a programme as they are actually occurring; can adapt to events as they occur.
Challenges: can be difficult to interpret observed behaviours; can be complex to categorise observations; can influence the behaviour of programme participants; can be expensive.

Focus groups
Overall purpose: to explore a topic in depth through group discussion, e.g. about reactions to an experience or suggestion, or to understand common complaints; useful in evaluation and marketing.
Advantages: quickly and reliably capture common impressions; can be an efficient way to get much range and depth of information in a short time; can convey key information about programmes.
Challenges: can be hard to analyse responses; need a good facilitator for safety and closure; difficult to schedule six to eight people together.

Case studies
Overall purpose: to fully understand or depict the client's experiences in a programme, and to conduct a comprehensive examination through cross-comparison of cases.
Advantages: fully depict the client's experience of programme input, process and results; a powerful means of portraying the programme to outsiders.
Challenges: usually quite time-consuming to collect, organise and describe; represent depth of information rather than breadth.


8.7 Case Study: Evaluation of a Mathematics Curriculum in South Africa

Background: The Mathematics Learning and Teaching Initiative (MALATI) was commissioned by the Education Initiative of the Open Society Foundation for South Africa in 1996 to develop, pilot and disseminate alternative approaches and tools for teaching and learning mathematics.

Method: Based on project workers' observations and written field notes made during the implementation of the MALATI curriculum, the following findings were obtained:

Findings:

a. A number of teachers had not yet received the most basic communications issued to schools regarding Curriculum 2005.

b. Teachers had difficulty interpreting certain aspects of the official curriculum document. Lack of clarity led to confusion.

c. The curriculum document had content errors.

d. The content knowledge of teachers was not adequate to handle some of the topics in the curriculum, such as statistics.

e. Learners did not have the prior experience assumed in the curriculum; e.g. in grade 9, the teaching of probability assumes that learners had done some statistics in the earlier grades.

f. Teachers were continuing to teach the topics they were used to and were reluctant to use the MALATI materials.

g. The curriculum suggested that group work be used in teaching probability and data handling, but learners were not accustomed to group discussion and listening to one another.

h. The teaching of the topic took a longer time, as teachers struggled to deal with learners' everyday experiences in the teaching of probability.

Recommendations:

1) Teachers need workshops on selected aspects of the content.

2) Selected parts of the curriculum documents need to be rewritten to reduce confusion.

3) Teachers should be convinced not to treat the teaching of probability and statistics as new content but to teach it for its mathematical value.

[Source: Karin Brodie and Craig Pournara, 2003. Towards a framework for developing and researching groupwork in mathematics classrooms. http://www.hsrcpress.ac.za]

ACTIVITY 8.3

1) What are some of the problems identified in the implementation of the MALATI programme?

2) Based on the findings, list the recommendations made.



DISCUSSION QUESTIONS:

1. Identify some problems in the implementation of the Primary School Integrated Curriculum (KBSR) and the Secondary School Integrated Curriculum (KBSM).

2. Describe how the teaching of science and mathematics in English was implemented in your school.

3. New curricula often fail to become established in schools because the importance and complexity of the implementation phase are not understood. Discuss.

READINGS

Ben-Peretz, M. (1990). The Teacher-Curriculum Encounter. Buffalo: State University of New York Press.

o Chapter 1: Patterns of teachers' involvement in the curriculum endeavour

o Chapter 3: Teachers' concerns about curriculum issues

o Chapter 7: Implications for teacher education and staff development

[available at eBrary]

Ornstein, A. and Hunkins, F. (1998). Curriculum: Foundations, principles and issues. Boston, MA: Allyn & Bacon. Chapter 10: Curriculum implementation.

Sowell, E. (2000). Curriculum: An integrative introduction. Upper Saddle River, NJ: Prentice-Hall. Chapter 1: Overview of curriculum processes and products.