
Human Resources Research Organization (HumRRO)

66 Canal Center Plaza, Suite 700, Alexandria, Virginia 22314-1578 | Phone: 703.549.3611 | Fax: 703.549.9661 | www.humrro.org

A Statistical Investigation of the Effects of Computer Disruptions on Student and School Scores

Presented at: National Conference on Student Assessment, New Orleans, LA

Presenters: Bethany H. Bynum (HumRRO), Gene Hoffman (HumRRO), Art Thacker (HumRRO), Matt Swain (James Madison University)

June 25, 2014

2

Computer Disruptions

● In 2013, several states experienced computer problems during student testing.
  – Minnesota’s Comprehensive Assessment
  – Kentucky's high school end-of-course exams
  – Indiana’s ISTEP exam
  – Oklahoma Student Testing Program

● Thousands of students were affected by widespread computer disruptions.
  – Kicked off the test
  – Long load times

● States were concerned that these disruptions could detrimentally impact student scores

3

Research Question

● Minnesota and Oklahoma:
  – Commissioned a study to examine the impact of computer disruptions on student scores.
  – The type of disruption was different, but we used the same statistical approach.

● “On average, would student scores have been different if the computer disruptions had not occurred?”

● Things to consider:
  – Minnesota allowed students the opportunity to review items or retake the test, but did not keep a record of who took advantage of this opportunity.
  – Oklahoma allowed students who were kicked off to retake the test.

4

Defining Disruption

● Oklahoma: Computer issues occurred on April 28 and April 30.
  – Students were kicked off the test in the middle of the exam.
  – The vendor had a record of every student who was kicked off.

● Minnesota: Computer issues occurred on April 16 and 23.
  – Long item latencies: the amount of time it takes an online page to load
  – Abnormal restarts: students were unexpectedly logged out and required to log back in to resume the test
  – Administrative pauses: test paused by a teacher or test proctor
  – No clear record of who was or was not interrupted

5

Defining Disruption

Page Latency in Seconds

Date      Latency Records    Mean    Median    90th    95th     99th
April 16          129,980    4.46      0.20    2.20   23.41   107.80
April 23        1,072,104    2.56      0.12    0.73    2.63    80.54
April 30        2,176,700    0.20      0.10    0.35    0.55     1.29

Percentage of Students with Indicated Latencies

Date           N    > 30 sec    20-30 sec    15-20 sec    10-15 sec    5-10 sec
April 16  23,077       11.08         2.61         1.37         1.62        3.48
April 23  65,866       16.69         4.68         2.66         2.95        6.61
April 30  95,025        0.25         0.09         0.11         0.19        1.15

6

Research Approach

● Basic approach:
  – Based on the premise that student scores are consistent over time.
  – Match disrupted students to similarly performing non-disrupted students.
    • Using propensity matching (prior-year scores, gender, ethnicity, FRP lunch, LEP, school-level FRP lunch, school-level achievement); see the sketch after this list.
  – Examine differences in 2013 test scores between the two groups.
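
The deck does not include code or name a matching algorithm; as one illustration, a minimal propensity-matching sketch in Python (pandas and scikit-learn) might look like the following. The file name, column names, and the 1:1 nearest-neighbor choice are all assumptions.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("students_2013.csv")  # hypothetical input file
covariates = ["prior_score", "gender", "ethnicity", "frp_lunch", "lep",
              "school_frp_lunch", "school_achievement"]  # hypothetical names

# Encode categorical covariates and estimate each student's propensity
# to be in the disrupted group.
X = pd.get_dummies(df[covariates], drop_first=True)
df["pscore"] = LogisticRegression(max_iter=1000).fit(
    X, df["disrupted"]).predict_proba(X)[:, 1]

# 1:1 nearest-neighbor matching on the propensity score (with replacement).
disrupted = df[df["disrupted"] == 1]
controls = df[df["disrupted"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["pscore"]])
_, idx = nn.kneighbors(disrupted[["pscore"]])
matched = controls.iloc[idx.ravel()]

# Difference in 2013 scores between disrupted students and their matches.
print(disrupted["score_2013"].mean() - matched["score_2013"].mean())
```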

● Student-level Statistical Investigation:
  1. Are there mean differences in scores between the disrupted and non-disrupted groups?
  2. Does disruption add to the prediction of the 2013 test scores?
  3. Are there differences in prediction between the disrupted and non-disrupted groups?
  4. Would disrupted students have performed differently under normal testing conditions?
  5. Compare the distribution of differences between the disrupted and non-disrupted samples.

7

Student-level Statistical Investigation

1. Are there mean differences in scores between the disrupted and non-disrupted groups?
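
As an illustration of the mean-difference check, a small Python sketch using scipy might look like this; the score arrays are made-up stand-ins for the matched samples.

```python
import numpy as np
from scipy import stats

# Hypothetical 2013 scale scores for the matched samples.
disrupted_scores = np.array([640.0, 652.0, 618.0, 655.0, 630.0])
matched_scores = np.array([645.0, 650.0, 625.0, 660.0, 628.0])

t, p = stats.ttest_ind(disrupted_scores, matched_scores)

# Cohen's d with a pooled standard deviation, one common effect-size choice.
pooled_sd = np.sqrt((disrupted_scores.var(ddof=1) +
                     matched_scores.var(ddof=1)) / 2)
d = (disrupted_scores.mean() - matched_scores.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```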

8

Student-level Statistical Investigation

2. Does disruption add to the prediction of the 2013 test scores?
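
One common way to test incremental prediction is a hierarchical regression and a change in R². A sketch with statsmodels, using hypothetical file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matched_students.csv")  # hypothetical matched sample

base = smf.ols("score_2013 ~ score_2012", data=df).fit()
full = smf.ols("score_2013 ~ score_2012 + disrupted", data=df).fit()

# A small, non-significant change in R-squared would suggest disruption
# adds little to the prediction of 2013 scores.
print(f"delta R^2 = {full.rsquared - base.rsquared:.4f}")
print(full.summary().tables[1])  # coefficient and p-value for 'disrupted'
```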

9

Student-level Statistical Investigation

3. Are there differences in prediction between the disrupted and non-disrupted groups?
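
One way to test whether prediction differs by group is a group-by-predictor interaction. A sketch with statsmodels, assuming a 0/1 disrupted indicator and hypothetical column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matched_students.csv")  # hypothetical matched sample

# A significant interaction would mean the slope relating 2012 to 2013
# scores differs for disrupted vs. non-disrupted students.
model = smf.ols("score_2013 ~ score_2012 * disrupted", data=df).fit()
print(model.params)
print(model.pvalues[["disrupted", "score_2012:disrupted"]])
```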

10

Student-level Statistical Investigation

4. Would disrupted students have performed differently under normal testing conditions?

  – For each disrupted student, apply the non-disrupted prediction equation and compute the predicted score.
  – Take the difference between the predicted score and the observed score (the score obtained under the disrupted conditions).
  – For the non-disrupted sample, compute the difference between predicted scores and observed scores (see the sketch after this list).

● Large numbers of students with notable differences between observed and predicted scores provide another piece of evidence about the impact of the computer disruptions.
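
A minimal sketch of this predicted-versus-observed comparison, assuming hypothetical file and column names and a simple prior-year prediction equation:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("matched_students.csv")  # hypothetical matched sample
non_disrupted = df[df["disrupted"] == 0]
disrupted = df[df["disrupted"] == 1]

# Prediction equation built only from non-disrupted students.
eq = smf.ols("score_2013 ~ score_2012", data=non_disrupted).fit()

# Observed minus predicted, for both groups.
diff_disrupted = disrupted["score_2013"] - eq.predict(disrupted)
diff_control = non_disrupted["score_2013"] - eq.predict(non_disrupted)
print(diff_disrupted.describe())
print(diff_control.describe())
```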

11

Student-level Statistical Investigation

5. Compare the distribution of differences between the disrupted and non-disrupted samples.

– Determine the difference in observed and predicted scores at the 5th, 10th, 90th, and 95th percentiles for the non-disrupted group.

– Apply these cuts to the disrupted group and determine what percent of students fall at or above the 90th and 95th cuts and what percent fall at or below the 5th and 10th cuts.

– Percentages larger than 5 and 10 percent of the sample would provide evidence that the computer disruptions may have impacted scores (see the sketch below).
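
A sketch of the percentile-cut comparison; the difference arrays here are simulated stand-ins for the observed-minus-predicted differences from the previous step:

```python
import numpy as np

rng = np.random.default_rng(0)            # stand-in data for the sketch
diff_control = rng.normal(0, 10, 5000)    # non-disrupted differences
diff_disrupted = rng.normal(-1, 11, 800)  # disrupted differences

p5, p10, p90, p95 = np.percentile(diff_control, [5, 10, 90, 95])

# If disruption had no effect, roughly 5/10/10/5 percent of disrupted
# students should fall beyond each cut.
print("<= 5th cut: ", np.mean(diff_disrupted <= p5) * 100)
print("<= 10th cut:", np.mean(diff_disrupted <= p10) * 100)
print(">= 90th cut:", np.mean(diff_disrupted >= p90) * 100)
print(">= 95th cut:", np.mean(diff_disrupted >= p95) * 100)
```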

12

Student-level Statistical Investigation

Percent of Disrupted Students with Predicted and Observed Score Differences at the 5th, 10th, 90th, and 95th Percentiles of Non-Disrupted Students

13

School-Level Score Differences

● School-level Statistical Investigation:

1. Would school-level scores differ if disrupted students were excluded?

2. Does disruption add to the prediction of 2013 school-level means?

3. Would there be differences in % proficient if predicted scores were used in place of observed scores?

14

School-Level Score Differences

Would school-level scores differ if disrupted students were excluded?

● All students included (baseline)
● Students in the disrupted sample excluded
  – School-level means increased slightly (d = -.02 to .22)
● Students who tested on April 16 or 23 excluded
  – School-level means dropped on average by .09 theta points (d = .01 to .26)
● Predicted scores used in place of observed scores for students in the disrupted sample
  – School-level means increased for some grades and decreased for others (d = .002 to .09)
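
A sketch of one of these scenarios (recomputing school means with disrupted students excluded), with hypothetical file and column names; the standardized difference is a rough analogue of the d values above:

```python
import pandas as pd

df = pd.read_csv("students_2013.csv")  # hypothetical input file

baseline = df.groupby("school_id")["score_2013"].mean()
excluded = df[df["disrupted"] == 0].groupby("school_id")["score_2013"].mean()

# Standardized difference per school between the two scenarios.
sd = df["score_2013"].std()
print(((excluded - baseline) / sd).describe())
```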

15

School-Level Score Differences

Does disruption add to the prediction of 2013 school-level means?

● Multiple Regression:
  – 2012 school-level means
  – Percent of students disrupted
  – Interaction between 2012 school-level mean and percent of students disrupted

● Results:
  – Percent of students disrupted and the interaction term were not significant predictors
  – ΔR² was small
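
A sketch of this school-level regression with statsmodels, assuming a hypothetical school-level file with 2012/2013 means and the percent of students disrupted:

```python
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("school_means.csv")  # hypothetical school-level file

base = smf.ols("mean_2013 ~ mean_2012", data=schools).fit()
full = smf.ols("mean_2013 ~ mean_2012 * pct_disrupted", data=schools).fit()

# Non-significant disruption terms and a small delta R^2 would match the
# results reported on this slide.
print(full.pvalues[["pct_disrupted", "mean_2012:pct_disrupted"]])
print(f"delta R^2 = {full.rsquared - base.rsquared:.4f}")
```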

16

School-Level Score Differences

Would there be differences in % proficient if predicted scores were used in place of observed scores?
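
A sketch of this comparison, assuming a hypothetical proficiency cut score and a file that already contains predicted scores:

```python
import pandas as pd

df = pd.read_csv("students_with_predictions.csv")  # hypothetical file
PROFICIENT_CUT = 650.0                             # hypothetical cut score

# School-level percent proficient under observed vs. predicted scores.
observed = (df["score_2013"] >= PROFICIENT_CUT).groupby(df["school_id"]).mean()
predicted = (df["predicted_2013"] >= PROFICIENT_CUT).groupby(df["school_id"]).mean()

# Difference in percent proficient per school.
print(((observed - predicted) * 100).describe())
```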

17

Conclusions

Student-level Summary: “The evidence shows that there were disruption effects; however, the effects on students’ scores were neither widespread nor large. In addition, the evidence shows that disruptions were not consistently detrimental, but at times were beneficial.”

School-level Summary: “School-level analyses suggest that there may be a small impact on school-level means for schools that experienced disruption, but the direction of the impact is not consistent and adjusting school-level scores based on this data would be beneficial for some schools and detrimental to others.”
