
A District-initiated Appraisal of a State Assessment’s Instructional Sensitivity

HOLDING ACCOUNTABILITY TESTS ACCOUNTABLE

Stephen C. Court

Presented in Symposium, American Educational Research Association (AERA) Annual Meeting

May 2, 2010
Denver, Colorado

Accountability

Basic premise:

Teaching → Learning → Proficiency

High proficiency rates = Good schools

Low proficiency rates = Bad schools

Accountability

Basic Assumption

State assessments distinguish well-taught students from not so well-taught students with enough accuracy to support accountability decisions.

Accountability

Q: Is the assumption warranted?

A: Only if the tests are instructionally sensitive.

When tests are insensitive, accountability decisions are based on the wrong things – e.g., socioeconomic status (SES).

Kansas: SES [chart]

Kansas: Test Scores [chart]

Kansas: Exemplary by SES [chart]

The Situation in Kansas

Basic Question

Can the instruction in low-poverty districts truly be that much better than the instruction in high-poverty districts?

Or, do instructionally irrelevant factors (such as SES) distort or mask the effects of instruction?

Multi-district Study

• Purpose
  – To compare instructional sensitivity appraisal models and methods
  – To appraise the instructional sensitivity of the Kansas state assessments

• District-initiated because no state-level study had been initiated
  – Indicator-level analysis
  – Loss/gain analysis, because there are no indicator-level cut scores

• Based initially on the empirical approach recommended by Popham (2008)

Tactical Variations

• A variety of practical constraints and preliminary findings raised several conceptual and methodological issues.

• The original design underwent several revisions.

• Several tactical variations involving
  – data collection
  – data array, analysis, and interpretation

Tactical Variations

See the paper for details…

• discusses the issues and design revisions

• provides an exegesis of the item-selection criteria and test-construction practices that yield instructional insensitivity

• describes, demonstrates, and compares the tactical variations employed in the collection, array, and analysis of the data, as well as in the interpretation of the results

Due to time constraints, let’s focus just on the “juiciest jewels”…

Study Participants

575 teachers responded:
  – 320 teachers (grades 3-5 reading and math)
  – 129 reading teachers (grades 6-8)
  – 126 math teachers (grades 6-8)

14,000 students

• Only grade 5 reading is included in this study.

• To be reported in June at CCSSO in Detroit:
  – other reading results (grades 3-8)
  – all math results (grades 3-8)

A Gold Standard

By recommending that teachers be asked to identify their best-taught indicators, Popham (2008) transformed the instructional sensitivity issue in a fundamental way – both conceptually and operationally:

For the first time since instructional sensitivity inquiries began about 40 years ago, there now could be a gold standard independent of the test itself – a huge breakthrough!

Old and New Model

Old model (learning status):
  A = Non-Learning
  B = Learning
  C = Slip
  D = Maintain

New model (classification accuracy):
  A = True Fail
  B = False Pass = II-E (instructionally irrelevant easiness)
  C = False Fail = II-D (instructionally irrelevant difficulty)
  D = True Pass
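
To make the new model concrete, here is a minimal sketch in Python (the data layout and function name are mine, not the paper's) that tallies students into the four cells, given the teacher-based gold standard and each student's pass/fail outcome on an indicator:

    # Hypothetical sketch: tallying students into cells A-D of the new model.
    # "best_taught" flags students in the teacher-nominated best-taught group;
    # "passed" flags students who passed the indicator.
    from collections import Counter

    def tally_cells(best_taught, passed):
        cells = Counter()
        for taught, ok in zip(best_taught, passed):
            if not taught and not ok:
                cells["A"] += 1   # true fail
            elif not taught and ok:
                cells["B"] += 1   # false pass (II-E)
            elif taught and not ok:
                cells["C"] += 1   # false fail (II-D)
            else:
                cells["D"] += 1   # true pass
        return cells

    print(tally_cells([1, 1, 0, 0], [1, 0, 1, 0]))
    # Counter({'D': 1, 'C': 1, 'B': 1, 'A': 1})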

Initial Analysis Scheme

Initial logic:

If best-taught students outperform other students, the indicator is sensitive to instruction.

If mean differences are small or in the wrong direction, the indicator is insensitive to instruction.

Problem

But significant performance differences between best-taught and other students do not necessarily represent instructional sensitivity.

Affluent students who are provided ineffective instruction typically end up in Cell B (false pass).

Challenged students who are provided effective instruction typically end up in Cell C (false fail).

Problem

Thus: Means-based and DIF-driven (differential item functioning) approaches that evaluate between-group differences are not appropriate for appraising instructional sensitivity.

Instead: Focus on the degree to which indicators accurately distinguish effective from ineffective instruction – without confounding from instructionally irrelevant easiness or difficulty.

Conceptually Correct

Rather than comparing group differences in terms of means, let's look instead at the combined proportion of true fails and true passes. That is,

(A + D) / (A + B + C + D)

which can be shortened to

(A + D) / N = Malta Index

Malta Index

(A + D) / N ranges from 0 to 1
(completely insensitive to totally sensitive)

In practice:

A value of .50 = chance, equivalent to random guessing

Totally Sensitive

(A + D) / N = (50 + 50) / 100 = 1.0

A perfectly sensitive item or indicator would cluster students into Cell A or Cell D.

Totally Insensitive

(A + D) / N = (0 + 0) / 100 = 0.0

A perfectly insensitive test clusters students into Cell B or Cell C.

Useless

(A + D) / N = (25 + 25) / 100 = 0.50

0.50 = mere chance

An indicator that cannot distinguish true fail or pass from false fail or pass is totally useless – no better than random guessing.
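
The index and the three cases above can be reproduced in a few lines of Python (a minimal sketch; the function name is mine, not the paper's):

    # Malta Index = (A + D) / N, computed from the four cell counts.
    # A = true fail, B = false pass, C = false fail, D = true pass.
    def malta_index(a, b, c, d):
        return (a + d) / (a + b + c + d)

    print(malta_index(50, 0, 0, 50))    # 1.0  -> totally sensitive
    print(malta_index(0, 50, 50, 0))    # 0.0  -> totally insensitive
    print(malta_index(25, 25, 25, 25))  # 0.5  -> useless (chance)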

Malta Index Parallels

The Malta Index is similar conceptually to:

– the Mann-Whitney U statistic

– the Wilcoxon rank-sum statistic

– the Area Under the Curve (AUC) in Receiver Operating Characteristic (ROC) curve analysis

But its interpretation is embedded in the context of instructional sensitivity appraisal.
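
The kinship with AUC is easy to see empirically. In the following sketch (invented data, with scikit-learn as an assumed tool; neither comes from the study), the Malta Index is raw classification accuracy, while AUC for a binary indicator reduces to the average of sensitivity and specificity, which is why the two statistics track closely without being identical:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    # 1 = best-taught (teacher-based gold standard), 0 = other students
    taught_well = np.array([1] * 60 + [0] * 40)
    # 1 = passed the indicator, 0 = failed it
    passed = np.array([1] * 48 + [0] * 12 + [1] * 10 + [0] * 30)

    malta = np.mean(taught_well == passed)    # (A + D) / N        = .78
    auc = roc_auc_score(taught_well, passed)  # (sens + spec) / 2  = .775

    print(f"Malta Index = {malta:.2f}, AUC = {auc:.3f}")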

Malta Index

Compared to these other approaches, the Malta Index is easier to…
  – compute
  – understand
  – interpret

Thus, it is more accessible conceptually to measurement novices, such as
  – teachers
  – reporters
  – policy-makers

ROC Analysis

Malta Index values can be depicted graphically as ROC curves.

Informal Evaluation

Malta Index values can be evaluated informally via acceptability criteria (Hosmer & Lemeshow, 2000):

  – .90-1.0 = excellent (A)
  – .80-.90 = good (B)
  – .70-.80 = acceptable (C)
  – .60-.70 = poor (D)
  – .50-.60 = fail (F)
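
Those bands translate directly into a lookup function (a sketch; treating each band as closed at its lower cutoff is my assumption, since the slide leaves the interval endpoints ambiguous):

    # Map a Malta Index value onto the Hosmer & Lemeshow acceptability labels.
    def grade(mi):
        bands = [(0.90, "excellent (A)"), (0.80, "good (B)"),
                 (0.70, "acceptable (C)"), (0.60, "poor (D)"),
                 (0.50, "fail (F)")]
        for cutoff, label in bands:
            if mi >= cutoff:
                return label
        return "below chance"

    print(grade(0.72))  # acceptable (C)
    print(grade(0.54))  # fail (F)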

            Teacher Ratings     Prior Data:           Prior Data:
            (Most vs. Less)     (Best vs. Not Best)   (Best vs. Worst)
Indicator   MI      AUC         MI      AUC           MI      AUC
1           .51     .51         .56     .56           .64     .64
2           .50     .51         .54     .63           .64     .66
3           .50     .54         .56     .56           .59     .59
4           .57     .55         .62     .62           .68     .68
5           .53     .54         .72     .72           .79     .79
6           .52     .50         .61     .61           .69     .69
7           .53     .50         .56     .56           .62     .63
8           .55     .53         .56     .56           .59     .59
9           .52     .54         .57     .57           .64     .64
10          .52     .52         .57     .57           .64     .64
11          .51     .56         .59     .60           .68     .68
12          .52     .50         .57     .57           .63     .63
13          .66     .52         .56     .56           .58     .58
14          .64     .58         .58     .58           .62     .62
Average     .54     .53         .64     .59           .64     .65

Summary and Interpretations

• AUC and the Malta Index yield very similar but not identical results

• Identical conclusions overall: grade 5 reading indicators lack instructional sensitivity
  – No indicator was graded better than a "C"
  – Most were in the "Poor" to "Useless" range
  – Averages ranged from "Poor" to "Useless"

Summary and Interpretations

Low instructional sensitivity values for grade 5 reading were disappointing, especially given:

– Local contractor (CETE)
– Guidance from the TAC (including Popham and Pellegrino)
– Concerns from the KAAC (including Court)

If Kansas assessments lack instructional sensitivity, what about other states’ assessments?

Conclusion

Dear U.S. Department of Education:

Please make instructional sensitivity…

– An essential component in reviews of Race to the Top (RTTT) funding applications

– A critical element in the approval process of state and consortia accountability plans

When the Department revised its Peer Review Guidance (2007) to include alignment as a critical element of technical quality, states were compelled to conduct alignment studies that they otherwise would not have conducted.

Instructional sensitivity deserves similar Federal endorsement.

Presenter’s email: [email protected]

Questions, comments, or suggestions are welcome.