Evaluation of Teaching Excellence, a Guide for Administrators
Heather McGovern, PhD
Director of the Institute for Faculty Development
Associate Professor of Writing and BASK
November 2011


Page 1: Evaluation of Teaching Excellence, a Guide for Administrators

Evaluation of Teaching Excellence, a Guide for Administrators
Heather McGovern, PhD
Director of the Institute for Faculty Development
Associate Professor of Writing and BASK
November 2011

Page 2: Evaluation of Teaching Excellence, a Guide for Administrators

Candidates should provide multiple ways for teaching to be evaluated. Stockton policy states that “evidence of teaching performance should be demonstrated by a teaching portfolio, as outlined below, which should contain the following:

- A self-evaluation of teaching
- Student evaluations of teaching and preceptorial teaching
- Peer evaluations of teaching
- Other evidence of effectiveness in teaching”

Page 3: Evaluation of Teaching Excellence, a Guide for Administrators

Student ratings should be less than half of the evaluation of teaching. The IDEA Center “strongly recommends that additional sources of evidence be used when teaching is evaluated and that student ratings constitute only 30% to 50% of the overall evaluation of teaching.” Primary reasons:

- “some components of effective teaching are best judged by peers and not students”
- “it is always useful to triangulate information...”
- no instrument is fully valid
- no instrument is fully reliable

Page 4: Evaluation of Teaching Excellence, a Guide for Administrators

How student ratings align with Stockton’s definition of “excellence in teaching”:

- “A thorough and current command of the subject matter, teaching techniques and methodologies of the discipline one teaches.
- Sound course design and delivery in all teaching assignments…as evident in clear learning goals and expectations, content reflecting the best available scholarship or artistic practices, and teaching techniques aimed at student learning.
- The ability to organize course material and to communicate this information effectively.
- The development of a comprehensive syllabus for each course taught, including expectations, grading and attendance policies, and the timely provision of copies to students.
- …respect for students as members of the Stockton academic community, the effective response to student questions, and the timely evaluation of and feedback to students.”

“Where appropriate, additional measures of teaching excellence are:

- Ability to use technology in teaching
- The capacity to relate the subject matter to other fields of knowledge
- Seeking opportunities outside the classroom to enhance student learning of the subject matter”

Color key from the original slide: orange marked criteria for which student ratings may be a valid measure; red marked criteria for which student ratings should be a valid measure and those used at Stockton elicit the information; underlining marked criteria for which student ratings may be one of the best sources.

Page 5: Evaluation of Teaching Excellence, a Guide for Administrators

Reliability and representativeness: # of classes needed for evaluation

The IDEA Center “recommends using six to eight classes, not necessarily all from the same academic year, that are representative of all of an instructor’s teaching responsibilities.”

In a person’s first few years at Stockton, evaluators will not be able to do what is ideal. This makes using teaching observations and other evidence of good teaching in a file even more important.

Page 6: Evaluation of Teaching Excellence, a Guide for Administrators

The number of student responders affects interrater reliability (the consistency of student responses). IDEA reports the following median reliabilities:

- 10 raters: .69
- 15 raters: .83
- 20 raters: .83
- 30 raters: .88
- 40 raters: .91

Reliability ratings below .70 are highly suspect.

To respond to this issue, starting in Fall 2010 Stockton began using a small-class instrument (which gathers qualitative rather than quantitative data) for classes of fewer than 15 students, usually determined after the last day to withdraw. Which instrument is used is not a faculty option; it is dictated. Many faculty will continue to have unreliable data from earlier years, and even from last year, in their files.
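The size-to-reliability relationship above can be expressed as a simple lookup. This is an illustrative sketch (the `MEDIAN_RELIABILITY` table and the `reliability_estimate` helper are this guide's own naming, not an IDEA tool), using the median values reported above:

```python
# Median interrater reliability by number of raters, per the values
# IDEA reports (see the list above). Illustrative lookup only.
MEDIAN_RELIABILITY = {10: 0.69, 15: 0.83, 20: 0.88, 30: 0.88, 40: 0.91}
MEDIAN_RELIABILITY = {10: 0.69, 15: 0.83, 20: 0.83, 30: 0.88, 40: 0.91}

def reliability_estimate(n_raters):
    """Return the reported median reliability for the largest listed
    rater count <= n_raters, plus a flag for the below-.70 'highly
    suspect' threshold. Returns (None, True) below the smallest count."""
    brackets = [n for n in sorted(MEDIAN_RELIABILITY) if n <= n_raters]
    if not brackets:
        return None, True
    r = MEDIAN_RELIABILITY[brackets[-1]]
    return r, r < 0.70
```

A class of 10 responders lands at .69 and is flagged, which is why the small-class instrument described above matters for classes under 15.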

Page 7: Evaluation of Teaching Excellence, a Guide for Administrators

The percentage of student responders affects representativeness

Higher response rates mean more representative data; lower response rates mean less representative data.

Online classes using the IDEA online currently have, as a group, the lowest response rate at Stockton.

A low response rate can cause a course with enrollment high enough for the regular IDEA form to nonetheless provide unreliable data.

Page 8: Evaluation of Teaching Excellence, a Guide for Administrators

It matters whether faculty say something is “important” or “essential.”

In the IDEA Progress toward Relevant Objectives scores on page one of the report, items of minor importance do not count at all, and items rated “essential” count double those rated “important.”

Faculty choice also reflects their philosophy of teaching for the class.
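The weighting rule above (minor = 0, important = 1, essential = 2) can be sketched in a few lines. The `weighted_progress` helper is a hypothetical illustration of the rule, not the IDEA Center's actual computation:

```python
# Weights per the slide above: "minor" objectives do not count at all,
# and "essential" objectives count double "important" ones. Sketch only.
WEIGHTS = {"minor": 0, "important": 1, "essential": 2}

def weighted_progress(ratings):
    """ratings: iterable of (importance, mean progress rating) pairs.
    Returns the weighted mean progress, or None if nothing counts."""
    total = sum(WEIGHTS[imp] * r for imp, r in ratings)
    weight = sum(WEIGHTS[imp] for imp, _ in ratings)
    return total / weight if weight else None
```

For example, one essential objective rated 4.0 pulls the mean twice as hard as one important objective rated 3.0, while a minor objective is ignored entirely.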

Page 9: Evaluation of Teaching Excellence, a Guide for Administrators

The objectives faculty choose affect some of the summary report data:

- Item A on page one, and column one in the graph report, show Progress on Relevant Objectives.
- The “Summary Evaluation” provided on page one of the IDEA report weights Progress toward Relevant Objectives at 50% and Excellent Teacher and Excellent Course at 25% each.
- Data on page two report student ratings on only the items faculty selected.
- Data on page four report all ratings. On the small-class form, students list the objectives they feel they progressed on.
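The Summary Evaluation weighting above reduces to simple arithmetic. The `summary_evaluation` helper is a hypothetical illustration of the stated 50/25/25 weights, not IDEA's own code:

```python
def summary_evaluation(pro, excellent_teacher, excellent_course):
    """Summary Evaluation weighting described above: Progress on
    Relevant Objectives at 50%, Excellent Teacher and Excellent
    Course at 25% each. Illustrative sketch only."""
    return 0.5 * pro + 0.25 * excellent_teacher + 0.25 * excellent_course
```

So a class with strong PRO but weaker Excellent Teacher and Excellent Course ratings still lands closer to the PRO score, since PRO carries half the weight.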

Page 10: Evaluation of Teaching Excellence, a Guide for Administrators

How many objectives should faculty select? Usually three to five, but this is a rule of thumb. Faculty may have good reason to select as few as one or more than five. Choices logically related to the course at hand may be good choices. IDEA (and so Sonia and I) advise faculty that:

- It is harder for students to make progress if the class has many objectives.
- Research indicates that student ratings tend to decrease when larger numbers of objectives are selected.

Page 11: Evaluation of Teaching Excellence, a Guide for Administrators

Myths about objectives

Myth: faculty have to choose a certain number. No. If they choose none as important or essential, then by default all will be treated as important, which usually lowers the mean in composite scores.

Myth: faculty must mark at least one objective essential or at least one important. No. Any combination is OK, but if all selections are essential, or all are important, they will be equally weighted.

Page 12: Evaluation of Teaching Excellence, a Guide for Administrators

Disciplinary codes (see the row in the small table on the first page and the columns on the other pages): ideally, a faculty member’s code is as good a match to their class as possible.

In most cases, matches were selected for faculty; faculty can check and ask for a change.

Many comparisons are too general to be of much help.

If “NA” appears, it is because IDEA’s database is insufficient; it is no fault of the faculty member.

Page 13: Evaluation of Teaching Excellence, a Guide for Administrators

STUDENT RATINGS, BASIC INFORMATION

Page 14: Evaluation of Teaching Excellence, a Guide for Administrators

Remember what the results report

Student ratings reflect student perception, which is not the same as student learning and may differ from reality.

Page 15: Evaluation of Teaching Excellence, a Guide for Administrators

Outliers affect mean scores. Evaluators can check the statistical detail on page 4. Standard deviations of .7 are typical; standard deviations over 1.2 indicate “unusual diversity.” If the distribution is bimodal, the class may have contained “two types of students who are so distinctive that what ‘works’ for one group will not for the other. For example, one group may have an appropriate background for the course while the other may be under-prepared….” (IDEA) In these cases, IDEA recommends detailed item examination; there may be issues beyond the instructor’s control.

Page 16: Evaluation of Teaching Excellence, a Guide for Administrators

Scores and comments can be affected by the halo effect

“the tendency of raters to form a general opinion of the person being rated and then let that opinion color all specific ratings. If the general impression is favorable, the "halo effect" is positive and the individual receives higher ratings on many items than a more objective evaluation would justify. The "halo effect" can also be negative; an unfavorable general impression will lead to low marks "across the board", even in areas where performance is strong.”

Page 17: Evaluation of Teaching Excellence, a Guide for Administrators

How can you know? Look at the pattern of student responses on page 4 or on the student forms. If a form gives someone a 5 all the way down, regardless of whether the class covered a particular learning objective, that is the halo effect. In most cases, the same holds for a 1, or any other number, all the way down.

Page 18: Evaluation of Teaching Excellence, a Guide for Administrators

The Error of Central Tendency can affect scores

“Most people have a tendency to avoid the extremes (very high and very low) in making ratings. As a result, ratings tend to pile up more toward the middle of the rating scale than might be justified. In many cases, ratings which are "somewhat below average" or "somewhat above average" may represent subdued estimates of an individual's status because of the "Error of Central Tendency.”

Page 19: Evaluation of Teaching Excellence, a Guide for Administrators

Things evaluators should check:

- The teacher selected objectives. If not, by default all will be considered “important,” and the PRO scores on the first page of the report are worthless.
- The objectives the teacher chose seem reasonable for the course.
- The teacher discusses problematic objective choices or irregularities in the class.

Page 20: Evaluation of Teaching Excellence, a Guide for Administrators

IDEA compares class results to three groups (pages one and two):

1) Three years of IDEA student ratings at multiple institutions
2) Classes at your institution in the most recent five years
3) Classes in the same discipline in the most recent five years, where at least 400 classes with the same disciplinary code were rated

Page 21: Evaluation of Teaching Excellence, a Guide for Administrators

The validity of comparisons varies. It depends on a number of factors, including how “typical” a class is compared to classes at Stockton or to all classes in the IDEA database, and how well the class aligns with other classes that share its IDEA disciplinary code.

Page 22: Evaluation of Teaching Excellence, a Guide for Administrators

External factors can affect comparisons and ratings:

- Students in required courses tend to report lower ratings.
- Students in lower-level classes tend to report lower ratings.
- Ratings run arts and humanities > social science > math (this may be because of differences in teaching quality, the quantitative nature of the courses, both, or other factors).
- Race, gender, age, culture, height, physical attractiveness, and more may be factors, as they are in many other areas of life.

Page 23: Evaluation of Teaching Excellence, a Guide for Administrators

Some external factors don’t usually affect ratings:

- Time of day of the course
- Time in the term at which ratings are given (after midterm)
- Age of student
- Level of student
- Student GPA

Page 24: Evaluation of Teaching Excellence, a Guide for Administrators

We should use converted scores when making comparisons

IDEA states that “Institutions that want to make judgments about teaching effectiveness on a comparative basis should use converted scores.”

Converted scores are reported in the graph and lower table on page one and on page two.

Page 25: Evaluation of Teaching Excellence, a Guide for Administrators

Why we should use converted scores: the 5-point averages of progress ratings on “Essential” or “Important” objectives vary across objectives. For instance, the average for “gaining factual knowledge” is 4.00, while that for “gaining a broader understanding and appreciation for intellectual/cultural activity” is 3.69.

Unconverted averages disadvantage “broad liberal education” objectives.

Using converted averages “ensures that instructors choosing objectives where average progress ratings are relatively low will not be penalized for choosing objectives that are particularly challenging or that address complex cognitive skills.”
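One common way to put objectives with different baseline averages on a comparable footing is a T-score-style conversion (mean 50, SD 10) against each objective's own distribution. The sketch below assumes that form of conversion and a hypothetical across-class standard deviation of 0.7; it illustrates why conversion matters, and is not IDEA's published formula:

```python
def converted_score(raw_avg, objective_mean, objective_sd):
    """Standardize a raw 5-point progress average against its own
    objective's distribution (T-score style: mean 50, SD 10).
    Illustrative assumption, not IDEA's published method."""
    return 50 + 10 * (raw_avg - objective_mean) / objective_sd

# With the averages from the slide (factual knowledge: 4.00; broader
# understanding: 3.69) and a hypothetical SD of 0.7, the same raw 3.9
# is below average for one objective and above average for the other.
factual = converted_score(3.9, 4.00, 0.7)
broader = converted_score(3.9, 3.69, 0.7)
```

This is exactly the effect described above: without conversion, an instructor choosing the “broader understanding” objective would look worse despite outperforming that objective's baseline.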

Page 26: Evaluation of Teaching Excellence, a Guide for Administrators

Norming sorts people into broad categories

Scores are normed; therefore, it is unrealistic to expect most people to score above the “similar” range. Statistically, 40% of people always score in the similar range, with 30% above and 30% below it.

Many teachers teach well, so the comparative standard is relatively high.

Because the instrument is not perfectly valid or reliable, trying to compare scores within the five major categories IDEA provides is not recommended.
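The 30/40/30 split above can be expressed as a percentile banding. The `norm_category` name and the exact percentile cutoffs are this sketch's assumptions; it only illustrates why 40% of instructors always land in the middle band:

```python
def norm_category(percentile):
    """Map a percentile rank to the broad bands described above,
    using the 30/40/30 split (cutoffs are this sketch's assumption)."""
    if percentile < 30:
        return "below similar"
    if percentile < 70:
        return "similar"
    return "above similar"
```

Because the bands are defined by percentiles, they are fixed by construction: no amount of overall improvement in teaching changes the fraction of instructors in each band.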

Page 27: Evaluation of Teaching Excellence, a Guide for Administrators

Why we should use adjusted averages in most cases: adjusted scores adjust for “student motivation, student work habits, class size, course difficulty, and student effort. Therefore, in most circumstances, the IDEA Center recommends using adjusted scores.”

Page 28: Evaluation of Teaching Excellence, a Guide for Administrators

How are they adjusted? “Work Habits (mean of Item 43, As a rule, I put forth more effort than other students on academic work) is generally the most potent predictor…Unless ratings are adjusted, the instructors of such classes would have an unfair advantage over colleagues with less dedicated students.”

Page 29: Evaluation of Teaching Excellence, a Guide for Administrators

How are they adjusted, part II: “Course Motivation (mean of Item 39, I really wanted to take this course regardless of who taught it) is the second most potent predictor. …unless ratings are adjusted, the instructors of such classes would have an unfair advantage over colleagues with less motivated students.”

Page 30: Evaluation of Teaching Excellence, a Guide for Administrators

How are they adjusted, part III: “Size of Class…is not always statistically significant; but when it was, it was always negative – the larger the class, the lower the expected rating.”

Page 31: Evaluation of Teaching Excellence, a Guide for Administrators

How are they adjusted, part IV: “Course Difficulty, as indicated by student ratings of item 35, Difficulty of subject matter” is complicated because the instructor influences students’ perception of difficulty.

Therefore, “A statistical technique was used to remove the instructor’s influence on “Difficulty” ratings in order to achieve a measure of a class’s (and often a discipline’s) inherent difficulty. Generally, if the class is perceived as difficult (after taking into account the impact of the instructor on perceived difficulty), an attenuated outcome can be expected.”

Notable examples: in “Creative capacities” and “Communication skills” “high difficulty is strongly associated with low progress ratings.”

In two cases, high difficulty leads to high ratings on progress toward objectives: “Factual knowledge” and “Principles and theories.”

Page 32: Evaluation of Teaching Excellence, a Guide for Administrators

How are they adjusted, part V: “Student Effort is measured with responses to item 37, I worked harder on this course than on most courses I have taken.” Here, because responses reflect both students’ general habits and how well the teacher motivated students, the latter is statistically removed from the ratings, leaving the fifth extraneous factor, “student effort not attributable to the instructor.” Usually, student effort is negatively related to ratings.

A special case is “classes containing an unusually large number of students who worked harder than the instructor’s approach required,” which get low progress ratings, perhaps because the students were unprepared for the class, or lack self-confidence and so underachieve “or under-estimate their progress in a self-abasing manner.”

Page 33: Evaluation of Teaching Excellence, a Guide for Administrators

A critical exception to using adjusted scores“We recommend using the unadjusted score if the average progress rating is high (for example, 4.2 or higher).”

In these cases, students are so motivated and hard-working that the teacher has little opportunity to influence their progress, but “instructors should not be penalized for having success with a class of highly motivated students with good work habits.”

Page 34: Evaluation of Teaching Excellence, a Guide for Administrators

Bottom Line

For evaluation purposes, use the higher of the two scores (adjusted or raw).
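The bottom line reduces to taking a maximum. The `score_for_evaluation` helper is a hypothetical sketch of that rule, not part of any IDEA report:

```python
def score_for_evaluation(raw, adjusted):
    """Bottom line above: for evaluation purposes, use the higher of
    the raw (unadjusted) and adjusted scores. Illustrative sketch."""
    return max(raw, adjusted)
```

This captures both the general recommendation to use adjusted scores and the exception on the previous slide: when a class of motivated, hard-working students leaves raw progress high (e.g., 4.2 or above) and adjustment would pull it down, the higher raw score is the one used.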

Page 35: Evaluation of Teaching Excellence, a Guide for Administrators

Myths about IDEA, page 3

Myth: effective teaching means students make progress on all 12 learning objectives.

Myth: effective teachers employ all 20 teaching methods.

Page 36: Evaluation of Teaching Excellence, a Guide for Administrators

Attend to other evidence. Student ratings should not be the most important element in evaluating teaching excellence, despite my focus today. Philosophy of teaching, reflection, teaching observations, and other evidence (syllabi, assignments, direct evidence of student learning, etc.) should compose more of the evidence for or against a candidate as an excellent teacher.

Page 37: Evaluation of Teaching Excellence, a Guide for Administrators

References

Cashin, William. “Student Ratings of Teaching: The Research Revisited.” IDEA Paper 32, 1995. http://www.theideacenter.org/sites/default/files/Idea_Paper_32.pdf

Cashin, William. “Student Ratings of Teaching: A Summary of the Research.” IDEA Paper 20, 1988. http://www.theideacenter.org/sites/default/files/Idea_Paper_20.pdf

Colman, Andrew, Claire Norris, and Carolyn Preston. “Comparing Rating Scales of Different Lengths: Equivalence of Scores from 5-Point and 7-Point Scales.” Psychological Reports 80 (1997): 355–362.

Hoyt, Donald, and William Pallett. “Appraising Teaching Effectiveness: Beyond Student Ratings.” IDEA Paper 36. http://www.theideacenter.org/sites/default/files/Idea_Paper_36.pdf

“Interpreting Adjusted Ratings of Outcomes.” The IDEA Center, 2002 (updated 2008). http://www.theideacenter.org/sites/default/files/InterpretingAdjustedScores.pdf

Pallett, Bill. “IDEA Student Ratings of Instruction.” Stockton College, May 2006.

“Using IDEA Results for Administrative Decision-making.” The IDEA Center, 2005. http://www.theideacenter.org/sites/default/files/Administrative%20DecisionMaking.pdf