
Final Reports from the Measures of Effective Teaching Project

Tom Kane, Harvard University

Steve Cantrell, Bill & Melinda Gates Foundation


The MET project is unique … in the variety of indicators tested,

5 instruments for classroom observations

Student surveys (Tripod Survey)

Value-added on state tests

in its scale,

3,000 teachers

22,500 observation scores (7,500 lesson videos x 3 scores)

900+ trained observers

44,500 students completing surveys and supplemental assessments in year 1

3,120 additional observations by principals/peer observers in Hillsborough County, FL

and in the variety of student outcomes studied.

Gains on state math and ELA tests

Gains on supplemental tests (BAM & SAT9 OE)

Student-reported outcomes (effort and enjoyment in class)


Two Past Reports:

Learning about Teaching (Student Surveys)

Gathering Feedback for Teaching (Classroom Observations)


Have we identified effective teachers or … teachers with exceptional students?

To find out, we randomly assigned classrooms to 1,591 teachers.


Have We Identified Effective Teachers?

KEY FINDINGS

Following random assignment in Year 2, the teachers with greater measured effectiveness in Year 1 did produce higher student achievement.

The magnitude of the impacts was consistent with predictions (the sketch below illustrates this check).

They also produced higher achievement on the supplemental assessments, with impacts about 70 percent as large as those on state tests.


Organizing Observations by School Personnel

KEY FINDINGS

Adding an observation by a second observer increases reliability twice as much as having the same observer score an additional lesson (the sketch after this list illustrates why).

Short observations provide a time-efficient way to incorporate more than one observer per teacher.

School administrators rate their own teachers higher than do outside observers. However, (1) they rank their teachers similarly to others and (2) they discern bigger differences between teachers than peers do (which increases reliability).

Although average scores are higher across the board, letting teachers choose which lessons are observed produces similar rankings and slightly higher reliability.


There are many roads to reliability.


Combining Measures Using Weights

KEY FINDINGS

The best way to identify teachers who produce large student achievement gains on state tests is to put 65 to 90 percent of the weight on a teacher's past history of gains on such tests (a toy composite example follows this list). However, the resulting composite does not predict student achievement gains on more cognitively challenging assessments as well.

Balanced weights have somewhat less predictive power with respect to state achievement gains, but they offer (1) better ability to predict other outcomes and (2) improved reliability (less volatility).

It is possible to go too far. Weighting state tests at less than one-third results in (1) worse predictive power with respect to other outcomes and (2) less reliability.


What’s the best we could do with master’s degrees and experience alone?

[Chart: Higher Order .14; State .13]


Feedback for Better Teaching


January 2013

Steve Cantrell, Bill & Melinda Gates Foundation


Set expectations. Use multiple measures. Balance weights.

Monitor validity. Ensure reliability. Assure accuracy.

Make meaningful distinctions. Prioritize support and feedback. Use data for decisions at all levels.


Actual scores for 7,500 lessons.

Framework for Teaching (Danielson):

Unsatisfactory: Yes/no questions; posed in rapid succession; teacher asks all questions; same few students participate.

Basic: Some questions ask for explanations; uneven attempts to engage all students.

Proficient: Most questions ask for explanation; discussion develops, teacher steps aside; all students participate.

Advanced: All questions high quality; students initiate some questions; students engage other students.



[Scatterplot series, Achievement Gains: classrooms plotted by 2009 average performance (below vs. above) against 2010 predicted performance (below vs. above); classrooms with very low prior student achievement are flagged, and almost all perform at or above prediction.]

Classroom Observation


Student Surveys


The Library of Practice

MET Longitudinal Database

Professional Development Studies

Working with Key Partners to Implement Feedback and Evaluation Systems

This Symposium and Your Good Work!


You can find this slide presentation, the current reports, and all past reports at www.metproject.org.
