

The Reading Teacher, 64(8), pp. 606–611 © 2011 International Reading Association. DOI: 10.1598/RT.64.8.6. ISSN: 0034-0561 print / 1936-2714 online.

Organizing and Evaluating Results From Multiple Reading Assessments

Jim Rubin

While school and district policies often dictate the administration of specific standardized tests at prescribed intervals, there are alternative assessment options. The mandated tests tend to give a snapshot of a child's ability, whereas use of a variety of assessments gives teachers a more comprehensive portrait.

The professional literature is replete with suggestions of ways to measure reading ability beyond standardized testing, and many teachers come up with their own additional methods through experience with their students. One purpose for assessment is to determine the level of text that will challenge students, motivating them to read rather than causing frustration. Sound classroom practice is characterized by a teacher's ability to choose reading material at children's instructional level (Vacca & Vacca, 2008). Text at this level is a step beyond what children can read independently, but with guidance from a teacher, students can learn and problem solve at a higher level of proficiency. In this way, the instructional level is much like the zone of proximal development described by Vygotsky (1978).

In this article, I describe how data from a range of assessments can be organized and analyzed to provide a comprehensive picture of the achievement of students individually and as a group. This approach also helps teachers to consider the reliability of various forms of assessment and to choose reading material at an appropriate level to support student learning.

Using Multiple Assessments

There are many different assessments available that can assist in gauging how well students read. Taken together, data from a variety of assessments can help advise a teacher about the text difficulty that students can handle, in addition to pinpointing their specific strengths and weaknesses in reading (Dennis, 2009).

At the elementary level, many districts use a standardized reading assessment designed to cover five essential elements of reading (phonemic awareness, phonics, fluency, vocabulary, and comprehension). However, research has questioned the wisdom of using results from only one tool to pass judgment on how well students comprehend text—which, after all, is the main reason for measuring reading ability (Afflerbach, 2005; Allington, 2002; Buly & Valencia, 2002).

Some popular alternatives for assessing reading comprehension include the cloze test (Grant, 1978; Vacca & Vacca, 2008), Informal Reading Inventories (Flippo, Holland, McCarthy, & Swinning, 2009; Johnston, 1997), and running records (Clay, 1966; Fawson, Ludlow, Reutzel, Sudweeks, & Smith, 2006; Ross, 2004). Each offers the classroom teacher an opportunity to exert control over the process of assessment by choosing text that is appropriate for use in testing and designing questions that will give a valid picture of reading comprehension on a variety of levels. By using these instruments, teachers need not rely exclusively on the scores from a single standardized test to make decisions concerning how to offer instruction that will target the needs of diverse students.

Each of these assessments focuses on different elements of reading. The cloze test has been recommended for supporting readers who struggle with comprehension and vocabulary (Palumbo & Loiacono, 2009). The processes involved in providing the right words to fill deletions in a text passage require students to make sense of syntax as well as semantics.
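A cloze passage is usually built by deleting words at a regular interval and asking students to supply them; every fifth word is one common interval, though the article does not prescribe a particular scheme. The short Python sketch below is an illustrative way to generate such a passage. The function name, deletion interval, and blank format are assumptions made for this example, not part of the article.

    # Minimal cloze-passage generator (illustrative sketch, not from the article).
    # Deletes every nth word and keeps the deleted words for later scoring.
    # Punctuation stays attached to words in this simple version.

    def make_cloze(text: str, n: int = 5) -> tuple[str, list[str]]:
        """Return (passage with numbered blanks, deleted words in order)."""
        words = text.split()
        deleted = []
        out = []
        for i, word in enumerate(words, start=1):
            if i % n == 0:
                deleted.append(word)
                out.append(f"__({len(deleted)})__")
            else:
                out.append(word)
        return " ".join(out), deleted

    if __name__ == "__main__":
        sample = ("The students walked to the library after lunch and "
                  "chose new books to read during quiet time.")
        passage, answers = make_cloze(sample, n=5)
        print(passage)   # text with every fifth word replaced by a blank
        print(answers)   # the deleted words, used to score percent correct

The percent of blanks filled correctly is what feeds the cloze column of the score mapping discussed later in the article.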

Informal Reading Inventories (IRIs) focus on evaluating comprehension through postreading questions and have been recommended for reporting reading growth over the course of the school year (Paris, 2002). Researchers have questioned the reliability of IRIs from commercial publishers (Spector, 2005), but the option remains for classroom teachers to develop their own assessments of this type based on knowledge of individual students' progress and the use of more open-ended questioning and retelling formats (Rogers et al., 2006).


Running records are widely used for assessing reading progress (Bean, Cassidy, Grumet, Shelton, & Wallis, 2002) and have been found to be reliable when students are tested with a minimum of three passages (Fawson et al., 2006). This assessment tool measures contextual reading accuracy through an oral reading under untimed conditions and has been found to be an accurate predictor of future reading success (Wilson, 2005), as well as a valid means for measuring progress with development of comprehension strategies (Johns, 2005).

Teachers often have little say in the administration of standardized tests, but they can feel empowered by their capacity to use alternative assessments. Another benefit of using multiple instruments is that they provide several data sources that each reflect slightly different aspects of the skills involved in reading. When results are taken together, they can give teachers a comprehensive portrait of student achievement. However, each assessment uses a different rating scale, and it can be difficult to know how to aggregate the numbers for a whole class of students in order to make valid judgments about instruction and how to choose reading material that is appropriate for each child. Furthermore, due to student differences in preference for testing formats, familiarity with processes, and performance variability due to affective factors, the validity of a single test score for a particular student can be questionable. A mechanism for comparing scores from test to test is needed to provide the teacher with reliable data for decision making.

PAUSE AND PONDER

■ What are the different ways I assess reading in my classroom? How can I ensure that I look at results from different assessments both individually and as a whole to get a complete picture of my students' achievement?

■ How can I recognize reading assessment scores that are not reliable?

■ How can I use assessment data to guide my decisions about differentiated instruction?

Visualizing and Aggregating Multiple Assessments

To assist in organizing assessment data, scores from each assessment can be related to three reading levels—independent, instructional, and frustration—according to the measurement scale for each instrument. A text that a student can read with a high degree of comprehension without teacher assistance is described as being at his or her independent reading level. A text at the instructional level is one for which the student would benefit from having teacher support to fully understand the content. The frustration level represents text that is too hard for the student, even with teacher support (Vacca & Vacca, 2008).

Teachers who provide instructional-level reading material for each student will maximize learning potential for portions of the lesson that include teacher support. Table 1 illustrates how scores on four reading assessments map to independent, instructional, and frustration levels. The cloze test, IRIs, and running records have scoring systems that relate to these levels. For other types of assessments, the procedure would be to map each of the three levels to the instruments' scoring systems. For standardized tests, students who score from a high B to an A (85th to 100th percentile) would be considered at the independent level, while those who are achieving a C to a middle B (70th to 84th percentile) are reading at the instructional level; students who score below a C (under the 70th percentile) are at the frustration level.

Table 1: Mapping Scores on Reading Assessments to Reading Ability Level

Text-level reading ability | Standardized test score (percentile) | Cloze test (percent correct) | Informal Reading Inventory (raw score on a 10-question IRI) | Running record (accuracy rate)
1. Frustration   | < 70   | < 40  | < 7  | < 90
2. Instructional | 70–84  | 40–60 | 7–8  | 90–94
3. Independent   | 85–100 | > 60  | 9–10 | 95–100
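For teachers who keep scores in a spreadsheet or a short script, the Table 1 mapping can be written once and reused for every student. The Python sketch below is a minimal illustration; the function name and data layout are choices made for this example, while the thresholds themselves come from Table 1.

    # Map a raw score on each instrument to a reading level, using the
    # thresholds in Table 1 (1 = frustration, 2 = instructional, 3 = independent).
    # Names and data layout are illustrative, not from the article.

    # Lower bound of the instructional band and of the independent band.
    TABLE_1_THRESHOLDS = {
        "standardized": (70, 85),    # percentile
        "cloze": (40, 61),           # percent correct; scores above 60 are independent
        "iri": (7, 9),               # questions answered correctly out of 10
        "running_record": (90, 95),  # oral reading accuracy rate (%)
    }

    def reading_level(instrument: str, score: float) -> int:
        instructional_min, independent_min = TABLE_1_THRESHOLDS[instrument]
        if score >= independent_min:
            return 3
        if score >= instructional_min:
            return 2
        return 1

    if __name__ == "__main__":
        # Alice's raw scores from Table 2: cloze 58%, 82nd percentile,
        # 9 of 10 IRI questions correct, 96% running-record accuracy.
        print(reading_level("cloze", 58),           # 2 (instructional)
              reading_level("standardized", 82),    # 2
              reading_level("iri", 9),              # 3 (independent)
              reading_level("running_record", 96))  # 3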


Table 2 presents scores for students in a fictitious class on four reading assessment instruments; the number in parentheses following each score gives the reading level as determined from the mapping in Table 1 (1 = frustration, 2 = instructional, 3 = independent). The composite score for each student is calculated by averaging the reading-level indications. For example, on the cloze and standardized tests, Alice performed at the instructional level, while on the IRI and with running records, she read independently. Her composite score is therefore calculated as

(2 + 2 + 3 + 3) ÷ 4 = 2.5

Table 2: Class Reading Assessment Profile

Student    | Cloze test | Standardized reading test (percentile) | Informal Reading Inventory | Running records | Composite score
Alice      | 58% (2) | 82 (2) | 90% (3)  | 96% (3) | 2.5
Ben        | 32% (1) | 40 (1) | 65% (1)  | 86% (1) | 1
Carol      | 40% (2) | 55 (1) | 75% (2)  | 92% (2) | 1.75
David      | 60% (2) | 84 (2) | 90% (3)  | 89% (1) | 2
Ed         | 62% (3) | 86 (3) | 95% (3)  | 94% (2) | 2.75
Faye       | 28% (1) | 38 (1) | 40% (1)  | 82% (1) | 1
Glenda     | 44% (2) | 60 (1) | 80% (2)  | 91% (2) | 1.75
Henry      | 46% (2) | 61 (1) | 80% (2)  | 92% (2) | 1.75
Ida        | 66% (3) | 94 (3) | 100% (3) | 98% (3) | 3
Jason      | 32% (1) | 38 (1) | 85% (3)  | 95% (3) | 2
Kelly      | 26% (1) | 36 (1) | 50% (1)  | 85% (1) | 1
Lynn       | 50% (2) | 75 (2) | 85% (3)  | 91% (2) | 2.25
Marguerite | 48% (2) | 62 (1) | 80% (2)  | 92% (2) | 1.75
Nelson     | 60% (2) | 81 (2) | 90% (3)  | 94% (2) | 2.25
Olive      | 54% (2) | 77 (2) | 90% (3)  | 90% (2) | 2.25
Patty      | 32% (1) | 42 (1) | 65% (1)  | 88% (1) | 1
Quinn      | 48% (2) | 54 (1) | 85% (3)  | 89% (1) | 1.75
Ron        | 32% (1) | 60 (1) | 84% (2)  | 95% (3) | 1.75
Sally      | 50% (2) | 76 (2) | 86% (3)  | 92% (2) | 2.25
Tom        | 32% (1) | 38 (1) | 70% (2)  | 86% (1) | 1.25
Ursula     | 54% (2) | 75 (2) | 87% (3)  | 93% (2) | 2.25

Note. Numbers in parentheses indicate the text level each child can read as indicated by each score: 1 = frustration; 2 = instructional; 3 = independent.
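Once the level code (1, 2, or 3) has been noted beside each score, the composite is simply the average of those codes. The short Python sketch below shows that step; the dictionary layout is an assumption made for the example, since the article specifies only the arithmetic.

    # Average the recorded level codes (1 = frustration, 2 = instructional,
    # 3 = independent) to get each student's composite score, as in the
    # article's worked example for Alice: (2 + 2 + 3 + 3) / 4 = 2.5.
    # The dictionary layout is illustrative, not prescribed by the article.

    levels_by_student = {
        # student: level codes for cloze, standardized test, IRI, running record
        "Alice": [2, 2, 3, 3],
        "Ben":   [1, 1, 1, 1],
        "David": [2, 2, 3, 1],
        "Jason": [1, 1, 3, 3],
    }

    def composite(levels: list[int]) -> float:
        return sum(levels) / len(levels)

    for student, levels in levels_by_student.items():
        print(f"{student:8s} {composite(levels):.2f}")
    # Alice 2.50, Ben 1.00, David 2.00, Jason 2.00 -- matching Table 2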


Interpreting Assessment Data to Guide Instructional Decision Making

Presenting data in this way can serve as a check to see if any student's scores appear contradictory. If a single student receives scores at each of the levels on the individual assessments, this would indicate an inconsistency in results that should be investigated before making determinations about differentiated instruction. In such cases, the student should be retested on the two assessments whose results are most contradictory or sent to a reading specialist for further evaluation.

David (Table 2, row 4), for example, has scores on the individual assessments ranging from frustration to independent. His weak scores on running records, for which oral reading is required, might indicate that he needs extra practice in developing elements of reading fluency. However, his scores with other assessments indicate that his comprehension skills are strong. A lack of experience reading aloud might account for some of the issues on the running records result.

Jason, Quinn, and Ron show a similar range in assessment results. For Jason, with such a wide discrepancy in scores (two assessments showing frustration and two showing independent), there is a need to readminister all of the assessments to understand the issues. Perhaps personal problems on the day of testing affected his performance during the cloze and standardized tests, because the other two scores indicate he has strong abilities.

Quinn's scores, with the exception of those on the IRI, indicate she will benefit from using reading material that is below her grade level. Retesting with a different IRI (perhaps one with open-ended questioning) would be appropriate to check the reliability of the first score.

In Ron's case, the stronger scores on the IRI and running records might make one suspect that he has the ability to read accurately, but these scores contradict both the standardized test and the cloze procedure. In order to validate findings, it would be a good idea to retest with running records, focusing on the portion that entails a retelling of the content to check for comprehension.

Figure 1 displays a scatter plot recording the composite scores presented in Table 2. This visual representation gives a class profile, allowing the teacher to see how the class as a whole relates to each reading level. In this fictitious class, most of the students fall into the mid- to high instructional category, with two students in the independent category and six students at the low end of the frustration level. Knowing this will facilitate a teacher's decision concerning using mixed-ability grouping for specific activities, choosing material that is appropriate for whole-class readings, and choosing material for targeted groups whose members are working on the same level.

Figure 1: Class Profile

[Scatter plot omitted: each asterisk marks one student's composite score on the 1–3 scale, divided into three equal bands (boundaries at 1.67 and 2.33). Two students plot in the top band, thirteen in the middle band, and six in the bottom band.]

Note. Numbers reflect a breakdown of the composite score into three equal categories, which range from 1–3. Asterisks indicate each individual student's composite score in the class example provided.
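Both the contradiction check described above and the three-band class profile shown in Figure 1 can be expressed in a few lines of code. The Python sketch below is illustrative: the band boundaries follow the figure's note (three equal categories across the 1–3 range), the flagging rule follows the article's advice to investigate scores that span all three levels, and the sample data repeat a few rows of Table 2.

    # Flag students whose assessments span all three levels (a signal to retest
    # or refer, per the article) and bin composite scores into the three equal
    # bands used in Figure 1. Data layout and names are illustrative.

    levels_by_student = {
        "Alice": [2, 2, 3, 3],   # composite 2.50
        "David": [2, 2, 3, 1],   # composite 2.00 -- spans all three levels
        "Kelly": [1, 1, 1, 1],   # composite 1.00
    }

    def composite(levels: list[int]) -> float:
        return sum(levels) / len(levels)

    def needs_follow_up(levels: list[int]) -> bool:
        """True if the scores touch frustration, instructional, and independent."""
        return {1, 2, 3} <= set(levels)

    def band(score: float) -> str:
        if score >= 2.33:
            return "independent"
        if score >= 1.67:
            return "instructional"
        return "frustration"

    for student, levels in levels_by_student.items():
        c = composite(levels)
        flag = "  <- investigate further" if needs_follow_up(levels) else ""
        print(f"{student:8s} {c:.2f} {band(c)}{flag}")

Counting how many students fall into each band reproduces the class profile that Figure 1 presents graphically.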


With many viable assessment approaches to choose from, determining how to aggregate data from a variety of sources and come away with a clear understanding of student needs can be the most daunting task of all. The approach to managing and analyzing data described here offers the classroom teacher a means of organizing a wide array of information in order to better understand how to differentiate instruction and choose reading materials that correspond to each student's instructional level.

Note

The author thanks Marino Alvarez of Tennessee State University for ideas that contributed to the approach described here.

Take ACTION!

To organize assessment data from numerous instruments, follow these steps to create a grid similar to the one shown in Table 2.

1. Draw a table with one row for each student in your class and one column for each assessment instrument you use regularly in your classroom.

2. For each assessment you use, create a guide that maps scores by range to each of three reading levels (frustration, instructional, independent). See Table 1 for an example.

3. Following administration of an assessment, record student scores in your table. Beside each entry, note whether the score indicates the frustration (1), instructional (2), or independent (3) level, using your mapping guide.

4. Review the scores for each student. Do any show a range across all levels? If so, dig deeper to determine why.

5. Calculate an average of the scores for each student, and use the composite scores to develop a class profile. Use the profile to guide instructional decisions concerning whole-class activities.

6. Use the individual and whole-class data to inform your decisions about reading groups and reading material that is appropriate for your students.

References

Afflerbach, P. (2005). National Reading Conference policy brief: High stakes testing and reading assessment. Journal of Literacy Research, 37(2), 151–162. doi:10.1207/s15548430jlr3702_2

Allington, R.L. (Ed.). (2002). Big Brother and the national reading curriculum: How ideology trumped evidence. Portsmouth, NH: Heinemann.

Bean, R.M., Cassidy, J., Grumet, J.E., Shelton, D.S., & Wallis, S.R. (2002). What do reading specialists do? Results from a national survey. The Reading Teacher, 55(8), 736–744.

Buly, M.R., & Valencia, S.W. (2002). Below the bar: Profiles of students who fail state reading assessments. Educational Evaluation and Policy Analysis, 24(3), 219–239. doi:10.3102/01623737024003219

Clay, M.M. (1966). Emergent reading behavior. Unpublished doctoral dissertation, University of Auckland, New Zealand.

Dennis, D. (2009). "I'm not stupid": How assessment drives (in)appropriate reading instruction. Journal of Adolescent & Adult Literacy, 53(4), 283–290. doi:10.1598/JAAL.53.4.2

Fawson, P.C., Ludlow, B.C., Reutzel, D.R., Sudweeks, R., & Smith, J.A. (2006). Examining the reliability of running records: Attaining generalizable results. The Journal of Educational Research, 100(2), 113–126. doi:10.3200/JOER.100.2.113-126

Flippo, R.F., Holland, D.D., McCarthy, M.T., & Swinning, E.A. (2009). Asking the right questions: How to select an Informal Reading Inventory. The Reading Teacher, 63(1), 79–83. doi:10.1598/RT.63.1.8

Grant, P. (1978, May). Using the cloze procedure as an instructional device: What the literature says. Paper presented at the 23rd Annual Convention of the International Reading Association, Houston, TX.

Johns, J.L. (2005). Fluency norms for students in grades one through eight. Illinois Reading Council Journal, 33(4), 3–8.

Johnston, P.H. (1997). Knowing literacy: Constructive literacy assessment. York, ME: Stenhouse.

Palumbo, A., & Loiacono, V. (2009). Understanding the causes of intermediate and middle school comprehension problems. International Journal of Special Education, 24(1), 75–81.

Paris, S.G. (2002). Measuring children's reading development using leveled texts. The Reading Teacher, 56(2), 168–170.

Rogers, T., Winters, K.L., Bryan, G., Price, J., McCormick, F., House, L., Mezzarobba, D., & Sinclaire, C. (2006). Developing the IRIS: Toward situated and valid assessment measures in collaborative professional development and school reform in literacy. The Reading Teacher, 59(6), 544–553. doi:10.1598/RT.59.6.4

Ross, J. (2004). Effects of running records assessment on early literacy achievement. The Journal of Educational Research, 97(4), 186–195. doi:10.3200/JOER.97.4.186-195


Spector, J.E. (2005). How reliable are Informal Reading Inventories? Psychology in the Schools, 42(6), 593–603. doi:10.1002/pits.20104

Vacca, R.T., & Vacca, J.L. (2008). Content area reading: Literacy and learning across the curriculum (9th ed.). Boston: Allyn & Bacon.

Vygotsky, L.S. (1978). Mind in society: The development of higher psychological processes (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds. & Trans.). Cambridge, MA: Harvard University Press.

Wilson, J. (2005). The relationship of Dynamic Indicators of Basic Early Literacy Skills (DIBELS) oral reading fluency to performance on Arizona Instrument to Measure Standards (AIMS). Tempe, AZ: Tempe School District No. 3.

Rubin teaches at Union College, Barbourville, Kentucky, USA; e-mail [email protected].

MORE TO EXPLORE

IRA Books

■ Understanding and Using Reading Assessment, K–12 by Peter Afflerbach

■ Essential Readings on Assessment edited by Peter Afflerbach

■ Standards for the Assessment of Reading and Writing (Rev. ed.) prepared by the Joint Task Force on Assessment of the International Reading Association and the National Council of Teachers of English


Copyright of Reading Teacher is the property of International Reading Association and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.