
The Mid-South Educational Researcher

May, 1990

1989 DISTINGUISHED DISSERTATION

President's Column, John Petry
1989 Distinguished Dissertation: The Relationship Between Age of Norms and the Equipercentile Assumption, Gloria Turner
Reviewers Needed
Editorial, Judy Boser
Position Available

A Refereed Journal and a Publication of the Mid-South Educational Research Association

Editorial Policy

The Researcher, MSERA's research journal and newsletter, includes research articles and scholarly material in four of the five issues each year, the exception being issue number 4, the Annual Meeting program issue. The purpose of the Researcher is to provide a means of communication with members of the organization (MSERA) and to encourage research and to promote the sharing of knowledge through publication of research studies and other articles of interest to the membership.

Manuscripts that meet submission requirements will be acknowledged when received. Notification of the results of the review may take approximately six to eight weeks. The Researcher editorial staff reserves the right to make editorial changes in manuscripts in order to improve clarity, to maintain APA style, to correct grammar and spelling, and to fit available space.

Notification of change of address should be sent to:

Judith A. Boser, Editor
MSERA Researcher
212 Claxton, UTK
Knoxville, TN 37996-3400

Submission Requirements and Directions

Length. Text should not exceed 2,000 words, excluding tables, charts, figures, and references.

Manuscript preparation. The APA Publication Manual should be used as a style guide. All manuscripts are to be typed and double spaced. Tables, charts, and figures should be kept to a minimum. Each chart and figure should be on a separate page and should be in camera-ready form.

Cover Page. The title of the manuscript should appear on a separate cover page along with the following information about each of the contributing authors: name; position; institution, school or business; and address. Home and work telephone numbers for the primary author should also be included. If the manuscript is based on thesis or dissertation research, the major professor or dissertation director and granting institution should also be included.

Review Process. Each manuscript will undergo blind review by a minimum of two members of the Editorial Review Board of the Researcher.

General Directions. Send three (3) copies, along with a self-addressed, stamped, letter-sized envelope, to:

W. Newton Suter
Publications Editor, MSERA Researcher
Department of Educational Leadership
University of Arkansas at Little Rock
2801 S. University Avenue
Little Rock, AR 72204

Judith A. Boser, Editor
Frances Fowler, Associate Editor

The University of Tennessee, Knoxville

W. Newton Suter, Publications Editor
University of Arkansas at Little Rock

The Bureau of Educational Research and Service
212 Claxton
The University of Tennessee
Knoxville, Tennessee 37996-3400

President's Column

John Petry
1990 MSERA President

The Board of the Association met on Friday, March 16, 1990, at Memphis State University. The meeting consisted of the introduction of new members, committee reports, reports by officers, and new business.

New Members. Collin Ballance, Tennessee State Department of Education, was introduced as a new board member replacing Tom Saterfiel, who assumed a new position in a different state. Collin was elected by the Board of the Association in accordance with constitutional provisions. Judith Collier, Memphis State University doctoral student, was introduced as the new chair of the Graduate Student Advisory Committee.

Archives. Archives Committee Chair Vernon Gifford led a discussion about the advisability of describing the archives service in the Proceedings and in the Researcher so that the membership will be better acquainted with its location and ways of accessing materials.

Constitution and Bylaws. Upon the recommendation of Harry Bowman, Chair, that the editor of the Researcher have an official office in the MSERA organization, a motion was passed unanimously to amend the Bylaws to give non-voting status on the Board of the Association to editors and committee chairs.

Evaluation. Jim Flaitz, Chair, led a discussion that revolved around ways of increasing participation in completing the evaluation form at the business meeting each year.

Local Arrangements. The report of Susan Kappelman, Local Arrangements Chair for 1990, was presented. The Annual Meeting this year will be at the Monteleone Hotel in New Orleans, November 14-16.

Jim Craig, Local Arrangements Chair for 1991, announced that the meeting in Lexington, Kentucky, will be at the Marriott Griffins Gate, November 13-15, 1991.

Judy Boser, Chair for 1992, announced that after looking at three Knoxville sites, her committee recommended that the Hyatt Regency be the site of the Annual Meeting in November, 1992.

Program. Following a report by Rob Kennedy, Program Chair, a motion was passed that a list of "finalists" for the outstanding thesis/dissertation and paper awards be printed on a special sheet to be given out at registration so that conference attendees may be informed about all who participated.

Publications. Dan Fasko's report from the publications committee resulted in passage of a motion that MSERA copyright the Researcher as a way of increasing its visibility and reputation. In response to another recommendation that MSERA consider going to a separate journal format for articles, the committee was directed to present to the Board a specific set of recommendations, including details about the costs of such a change.

Judy Boser, Editor of the Researcher, announced that this is the last year that the University of Tennessee, Knoxville will be publishing the Researcher. She presented a detailed budget and schedule of publication, and announced that Newton Suter is the Publications Editor for the Researcher. She was commended by the Board for her dedication and leadership in editing the Researcher.

LERA Affiliate. LERA Affiliate Representative Bea Baldwin has been working to increase the effectiveness of the relationship between MSERA and LERA, resulting in a relationship that is the strongest it has ever been. LERA members will be involved in registration and other tasks at the Annual Meeting.

Executive Secretary. Harry Bowman stated that he will be introducing the MSERA outstanding paper at AERA in Boston and manning the booth of the Special Interest Group on State and Regional Organizations at AERA on Thursday, April 19.

Treasurer. Treasurer Jeff Gorrell reported that the total amount in the bank account as of March 7, 1990, was $4,853.27. A breakdown was given of the numbers of non-student and student members and of old and new members who have paid dues for 1990.

New Business. A motion was passed that annual dues and the registration fee for the Annual Meeting be combined into one fee for those who attend the meeting. The total amount of the new combined fee will remain the same as the total of the current two independent fees. Those not attending the annual meeting will still pay the usual dues.

A limited placement service will be provided at the Annual Meeting. This will include provision of a bulletin board, simple distributable forms for contacting potential employers and candidates, and a place to post messages.

Committees Expanded

President John Petry announced the following additions to MSERA committees for 1990:

Graduate Student Advisory Committee: Sharla Fasko

Membership Committee: Collin Ballance

Annual Meeting Program Committee: Beatrice Baldwin, David Bell, Ernest Bentley, Judy Boser, John L. Burns, Shirley Byrne, Glynn N. Creamer, Jim Flaitz, Harold Griffin, Stan Henson, Lynda N. Lee, M. Nan Lintz, Elaine McNiece, Sid Mitchell, Lee Napier, Dave Naylor, Sue Peterson, Patty Phelps, Ernest A. Rakow, Jesse Rancifer, Selvin Royal, and Glenda Thurman

The Relationship Between Age of Norms and the Equipercentile Assumption

Gloria Turner
The University of Alabama

Doctoral dissertation completed under the direction of James McLean
Winner of the 1989 MSERA Distinguished Dissertation Award

Introduction

Historically, there have been many problems associated with the measurement of change in experimental designs, with the two major problems being reliability (Bereiter, 1963; Cronbach & Furby, 1970; Linn & Slinde, 1977; Lord, 1956; Stanley, 1971) and regression toward the mean (Cronbach & Furby, 1970; Lord, 1958; Thorndike, 1924). Other issues have included problems with comparability (Diederich, 1956; Lord, 1956, 1958; O'Connor, 1972) and problems associated with validity (Angoff, 1971; Linn, 1981; Linn & Slinde, 1977; Lord, 1958).

The use of achievement tests for measuring change in quasi-experimental designs, which are common in educational research, has all the problems associated with their use in experimental designs. Other problems with using achievement tests for measuring change are unique to quasi-experimental designs. Some of these measurement problems can be eliminated, or at least reduced, by the use of a control group, but a control group is difficult to establish in many educational evaluations. Though randomization in experiments is preferable, randomization is not always possible in social experiments because of practical (Horst, Tallmadge, & Wood, 1975), ethical (Boruch, 1987; Cook & Campbell, 1979), and legal (Boruch, 1987) considerations. With the use of quasi-experiments, the researcher has to "make all the threats explicit and then rule them out one by one" (Cook & Campbell, 1979, p. 56).

Large-scale evaluation of educational programs is relatively new. Efforts at evaluation of educational programs initially incorporated such models as the posttest comparison with matched groups model, analysis of covariance, and regression models (Horst, Tallmadge, & Wood, 1975). Problems with implementing these designs in educational settings (Cronbach & Furby, 1970; Horst, Tallmadge, & Wood, 1975; Tallmadge, Wood, & Gamel, 1981) have given way to the almost exclusive use of the norm-referenced model (Merkel-Keller, 1986).

The norm-referenced model incorporates the use of a local or national norm group as the estimate of no-treatment posttest scores. The equipercentile assumption of the norm-referenced model asserts that students, without treatment, will maintain their achievement status with respect to the norm group from pretest to posttest (Horst, Tallmadge, & Wood, 1975). For a student receiving treatment, the difference between the observed posttest score and the estimated no-treatment score is defined to be the gain.
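To make the model concrete, here is a minimal sketch in Python of how a no-treatment expectation and a gain estimate would be computed under the equipercentile assumption. The lookup functions and the norms tables are hypothetical placeholders, not actual Stanford series norms; in the study itself, gains were expressed as NCE differences rather than raw scaled-score differences.

# Minimal sketch of the norm-referenced (equipercentile) gain computation.
# The norms tables below are illustrative placeholders only.

def percentile_rank(score, norms):
    """Percentile rank of a scaled score in a norms table, given as
    (scaled_score, percentile) pairs sorted by score."""
    rank = 0
    for s, pr in norms:
        if score >= s:
            rank = pr
    return rank

def score_at_percentile(pr, norms):
    """Inverse lookup: the scaled score at a given percentile."""
    for s, p in norms:
        if p >= pr:
            return s
    return norms[-1][0]

# Hypothetical pretest and posttest norms (scaled score, percentile).
pretest_norms = [(430, 10), (450, 25), (470, 50), (490, 75), (510, 90)]
posttest_norms = [(450, 10), (470, 25), (490, 50), (510, 75), (530, 90)]

observed_pre, observed_post = 450, 495  # hypothetical student scores

# Equipercentile assumption: without treatment, the student keeps the
# same percentile standing, so the expected no-treatment posttest score
# is the posttest-norms score at the pretest percentile.
pr_pre = percentile_rank(observed_pre, pretest_norms)        # 25
expected_post = score_at_percentile(pr_pre, posttest_norms)  # 470
gain = observed_post - expected_post                         # 25 points
print(pr_pre, expected_post, gain)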

Potential Threat to the Validity of the Equipercentile Assumption

Title I, the largest federally supported elementary and secondary program in the nation's history, was created by the Elementary and Secondary Education Act (ESEA) of 1965 in an effort to aid academically disadvantaged students. It was the first federal social legislation to require evaluation of its programs, and, as such, has had a major impact on educational evaluation practices and policies. Title I later became known as Chapter I of the Education Consolidation and Improvement Act of 1981. Since a vast majority of school districts use the norm-referenced model to evaluate their Chapter I programs, it is important that its validity be tested.

The validity of the norm-referenced model and its equipercentile assumption depends on several factors, including the quality of normative data. "Society in general and education in particular are continually undergoing change. . . . For norms to be representative of current populations, they must be continually updated. . . . The older the norms are, the more caution must accompany their interpretation" (Hopkins & Stanley, 1981, pp. 70-71). Misinterpretation can result, as shown recently when it was reported that "90% of the school districts in the United States . . . claim to be above average" (Cannell, 1988, p. 6). The fact that the scores appear to be inflated could be attributed in part to the fact that the norms used are not current. It is not uncommon for national commercial achievement tests to be revised and renormed only every seven to nine years.

As evidence of the need for current cohorts in the norms used in program evaluations, results from the National Assessment of Educational Progress (1983, 1985) show that, whether increasing or decreasing, achievement levels of students do change over time. These results are complemented by a study which examined the extent to which national performance on an achievement test battery changed over a 4-year period (Wiser & Lenke, 1987). The researchers in the Wiser and Lenke study, using the same edition of the test series used in the present study, found that national performance improved in many areas over the 4-year period. Use of the earlier norms to assess students four years later overestimated the standing of these students relative to the national population in many cases.

Despite the potential threat to the validity of the equipercentile assumption due to the lack of up-to-date norms, a search of the literature revealed no research aimed at studying this relationship. It seemed clear that an investigation of the relationship that age of norms has to the validity of the equipercentile assumption would be useful in view of the extensive testing now associated with the evaluation of Chapter I and other federal programs.

Purpose of the Study

There are many threats to the internal validity of the norm-referenced model, including threats due to history, instrumentation, maturation, regression, and selection (Crawford & Kimball, 1986; Hiscox & Owen, 1978; Linn, Dunbar, Harnisch, & Hastings, 1982; Murray & Arter, 1980). An important consideration in studying the validity of the model is its major assumption, the equipercentile assumption. The relatively few studies that have been conducted of the equipercentile assumption have resulted in inconsistent and inconclusive findings (Faddis & Arter, 1979; Hiscox & Owen, 1978; House, 1985; Linn, 1980; Powell & Raffeld, 1980; Powell, Schmidt, & Raffeld, 1979; Powers, Slaughter, & Helmick, 1983; Tallmadge, 1985; Wood, 1980). The purpose of this study was to examine empirically the relationship that age of norms has to the validity of the equipercentile assumption of the norm-referenced model for program evaluation.

Four separate analyses were performed to examine the relationship between the age of norms and the equipercentile assumption. The analyses examined two subject areas (reading and mathematics) and two types of norms (national and state).

The Norms

Both the national norms and the state norms in this study were derived from administration of the same form of the Stanford series. The Stanford series, a norm-referenced achievement test, was designed to reflect what was being taught in schools throughout the country at the time of test development (Harcourt Brace Jovanovich, 1983, 1987).

National Norms. The two sets of national norms (1982 and 1986) were derived from national testing using test scores sampled from the entire population of students in the nation. The intent was to obtain normative data that were descriptive of achievement in the nation's schools, and, therefore, samples were chosen to represent the national population in terms of school system enrollment, geographic region, and socioeconomic status (Harcourt Brace Jovanovich, 1983, 1987).

The 1982 standardization took place from April 26 to May 14, 1982, with 200,000 students from kindergarten through grade 12 and junior and community colleges participating (Harcourt Brace Jovanovich, 1983). The 1986 standardization took place from April 28 to May 16, 1986, with approximately 150,000 students from kindergarten through grade 12 participating, providing updated normative data for the Stanford series in order to reflect more current achievement levels (Harcourt Brace Jovanovich, 1987). In both standardizations, the Otis-Lennon School Ability Test was administered, except at the kindergarten level, and these scores were used to ensure that the 1986 samples would match the 1982 samples in ability (Harcourt Brace Jovanovich, 1983, 1987). These norms were used to derive NCE equivalents of pretest and posttest scores from the obtained scaled scores in the present study.

State Norms. The four sets of state norms (1985, 1986, 1987, and 1988) used in this study were derived as part of the statewide testing program for Alabama public schools over 4 consecutive years, using test scores from the entire population of students in the state's schools at grades 1, 2, 4, and 5. The intent was to obtain descriptive data on achievement in the state's schools that could be used for instructional purposes. Therefore, sampling was not used in the state norms.

The 1985 norming took place from April 1 to April 19, 1985, with approximately 214,000 students from grades 1, 2, 4, and 5 participating. The 1986 norming took place from March 31 to April 18, 1986, with approximately 212,000 students from grades 1, 2, 4, and 5 participating. The 1987 norming took place from March 30 to April 17, 1987, with approximately 217,000 students from grades 1, 2, 4, and 5 participating. The 1988 norming took place from April 4 to April 15, 1988, with approximately 220,000 students from grades 1, 2, 4, and 5 participating. These norms were used to derive NCE equivalents of pretest and posttest scores from the obtained scaled scores in the present study.

Subjects

Students included in the study were from a rural area in Alabama in which the school system served approximately 3,550 students from kindergarten through grade 12. The students systemwide, as well as the students included in this study, were approximately 77% black and 23% white, and approximately 52% male and 48% female. Approximately 70% of the system's total school population received free lunches, and approximately 8% received reduced lunches. The system's mean national individual percentile ranks for both reading and mathematics were consistently lower than the state's percentile ranks for the previous 4 years. Utilizing the 1982 national norms of the Stanford series, the system's mean national individual stanine ranks for both reading and mathematics were consistently in the 4th and 5th stanines, with the mathematics scores higher than those for reading. According to the Otis-Lennon School Ability Test, which provides a measure of the abilities needed to acquire the desired cognitive outcomes of formal education (Otis & Lennon, 1982), administered in the spring of 1988, the mean School Ability Indexes (SAI) for grades 1 through 6 ranged from 92 to 97. (According to Otis and Lennon, the SAI has a mean value of 100 and a standard deviation of 16 for unselected groups.)

The students in the study were the approximately 1,760 students enrolled in grades 1 through 6 during the 1987-1988 school year and included only those students having both pretest and posttest scores. The study was composed of the 312 grade 1 students (104 in Chapter I reading and 52 in Chapter I mathematics), the 317 grade 2 students (105 in Chapter I reading and 43 in Chapter I mathematics), the 316 grade 3 students (126 in Chapter I reading and 45 in Chapter I mathematics), the 270 grade 4 students (123 in Chapter I reading and 34 in Chapter I mathematics), and the 263 grade 6 students (103 in Chapter I reading and 32 in Chapter I mathematics). Special education students were included. However, the 127 repeaters in grades 1 through 6 were not included.

In order to be eligible for the Chapter I reading or mathematics programs in the school system, students had to score below the 50th national individual percentile on the previous year's achievement test. Students who were eligible were then selected to participate in Chapter I based on need, with priority given to those scoring the lowest.

Instruments

The norm-referenced tests used in this study are part of the Stanford series, which is published by Harcourt Brace Jovanovich, and included the seventh edition of the Stanford Achievement Test and the second edition of the Stanford Early School Achievement Test (SESAT). The appropriate on-grade level tests were administered at all grade levels. The Total Reading and Total Mathematics domains were used for both pretests and posttests at all grade levels except grade 1, for which the Mathematics domain of SESAT 1 was used for the pretest. Comparable domains were linked in the scaled score equating procedure, making it possible to compare scores across the Mathematics/Total Mathematics domains and across the Total Reading domains (Harcourt Brace Jovanovich, 1983, 1987).

Several reliability coefficients were reported for the Stanford Achievement Test series. The Kuder-Richardson Formula 20 reliabilities ranged from .93 to .97 for Total Reading and from .81 to .96 for Mathematics/Total Mathematics for the spring test administration (Gardner et al., 1985). Correlation coefficients were also calculated from the performance of students tested in both the fall and spring of the same school year as a measure of consistency over time. These reliability coefficients ranged from .70 to .88 for Total Reading and from .66 to .89 for Mathematics/Total Mathematics (Gardner et al., 1985).
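As a point of standard psychometric background (not taken from the dissertation), the Kuder-Richardson Formula 20 coefficient cited above is conventionally defined as

r_{KR20} = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} p_i q_i}{s_X^2} \right)

where k is the number of items, p_i is the proportion of examinees answering item i correctly, q_i = 1 - p_i, and s_X^2 is the variance of total scores.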

A very important aspect of validity for an achievement test is the extent to which the content of the test is representative of the curriculum being taught. To check for evidence of validity for use of the Stanford series in the school system, the content of the Stanford series was compared with the instructional objectives of the school system's curriculum. Where discrepancies indicated objectives were to be tested but were not in the curriculum, an effort was made to include the objectives in the curriculum prior to testing.

Data Collection

In order to prepare for testing, which occurred in the spring in 1987 and 1988, training workshops that included instructions in standardized testing procedures as well as practice with teacher-dictated subtests were held for each school. To insure proper administration and to reduce stakeholder bias during testing, random monitoring of testing sessions was conducted by principals and the central office staff. All answer documents were electronically scored by Psychological Corporation.

Once the scaled scores for both pre- (administered spring 1987) and posttesting (administered spring 1988) were obtained, they were converted to NCEs. For hypotheses 1 and 3, the scaled scores were converted to NCEs using each of the 1985, 1986, 1987, and 1988 state norms. For hypotheses 2 and 4, the scaled scores were converted to NCEs using both the 1982 and 1986 national norms.
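The NCE scale underlying these conversions is a normalized standard score with a mean of 50 and a standard deviation of 21.06, defined so that NCEs of 1, 50, and 99 coincide with those percentile ranks. A minimal sketch of the percentile-to-NCE step in Python (the norms lookup that turns a scaled score into a percentile rank is omitted, and scipy is assumed to be available):

# Convert a percentile rank to a Normal Curve Equivalent (NCE).
from scipy.stats import norm

def percentile_to_nce(pr):
    """Map a percentile rank (0 < pr < 100) to an NCE via the
    inverse normal; NCEs have mean 50 and SD 21.06."""
    return 50 + 21.06 * norm.ppf(pr / 100.0)

print(round(percentile_to_nce(50), 2))  # 50.0
print(round(percentile_to_nce(99), 2))  # about 99
print(round(percentile_to_nce(25), 2))  # about 35.8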

Data Analysis

The following four null hypotheses were examined in this study:

1. There is no significant difference (p < .05) between Chapter I and non-Chapter I NCE gains in reading achievement for different ages of state norms.

2. There is no significant difference (p < .05) between Chapter I and non-Chapter I NCE gains in reading achievement for different ages of national norms.

3. There is no significant difference (p < .05) between Chapter I and non-Chapter I NCE gains in mathematics achievement for different ages of state norms.

4. There is no significant difference (p < .05) between Chapter I and non-Chapter I NCE gains in mathematics achievement for different ages of national norms.

For each of these four hypotheses, there are four independent variables and one dependent variable. The independent variables are student, age of the norms, grade level of students, and program (Chapter I and non-Chapter I). The dependent variable is gain score in achievement. Analysis of variance was used to test each hypothesis using a factorial design with three crossed factors and a nested factor with repeated measures. The factorial design for hypotheses 1 and 3 consisted of four levels of age of norms, two levels of grade, and two levels of program. The factorial design for hypotheses 2 and 4 consisted of two levels of age of norms, six levels of grade, and two levels of program. For each of the four hypotheses, students were nested within repeated measures.
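The design can be pictured as a long-format data layout with one row per student per age of norms, students being nested within program-by-grade cells and crossed with the repeated norms factor. The fragment below uses entirely hypothetical values and exists only to show the structure the design implies:

# Long-format layout implied by the design for hypotheses 1 and 3.
# Values are hypothetical; only the structure matters.
import pandas as pd

rows = [
    # student, program, grade, norms_year, nce_gain
    (1, "Chapter I",     2, 1985,  2.1),
    (1, "Chapter I",     2, 1986,  2.8),
    (1, "Chapter I",     2, 1987,  3.5),
    (1, "Chapter I",     2, 1988,  4.0),
    (2, "non-Chapter I", 5, 1985, -1.2),
    (2, "non-Chapter I", 5, 1986, -2.0),
    (2, "non-Chapter I", 5, 1987, -2.9),
    (2, "non-Chapter I", 5, 1988, -3.4),
]
df = pd.DataFrame(rows, columns=["student", "program", "grade",
                                 "norms_year", "nce_gain"])
print(df)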

The data were analyzed using SAS's general linear models procedure (PROC GLM) (Freund, Littell, & Spector, 1986). If significance was found at the .05 level with a three-way interaction, main effects and simple effects were disregarded and a follow-up procedure (Dunn's multiple comparison) was performed where appropriate on the simple, simple age effects in order to determine which levels of the variables were significant. If no significance was found with the three-way interaction but significance was found at the .05 level with two-way interactions, main effects were disregarded and a follow-up procedure (Dunn's multiple comparison) was performed where appropriate on the simple age effects in order to determine which levels of the variables were significant. Main effects were not examined due to the fact that either a significant three-way interaction or significant two-way interactions were found involving the variables of age, grade, and program for each hypothesis.
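As a rough check on the reported analyses, each F ratio in Tables 1 through 4 is the effect mean square divided by the corresponding within-subjects error mean square. A small sketch using the A x G values from Table 1 (scipy supplies the F-distribution p-value; the slight difference from the printed 128.17 is rounding in the original):

# Recompute an F ratio from Table 1 (state reading norms): the A x G
# interaction, MS = 391.999 with df = 3, tested against the error term
# As (P x G), MS = 3.058 with df = 1524.
from scipy.stats import f

ms_effect, df_effect = 391.999, 3
ms_error, df_error = 3.058, 1524

F = ms_effect / ms_error          # about 128.2
p = f.sf(F, df_effect, df_error)  # P(F' > F) under the null
print(round(F, 2), p < .05)       # 128.19 True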

Results

This study has provided empirical evidence to support several findings concerning the relationship that age of norms has to the achievement gains of Chapter I and non-Chapter I students.

Using state reading norms, a statistically significant three-way interaction for age of norms by program by grade (see Table 1) was found in the following cases: (a) Chapter I students in grade 2 generally demonstrated increasing NCE achievement gains as the norms used were more recent; and (b) both Chapter I students and non-Chapter I students in grade 5 generally showed decreasing gains in NCE achievement scores as more recent norms were used. While these results were statistically significant, an eta square of .16 suggests that the practical significance may be limited.

National reading norms indicated a statistically significant three-way interaction for age of norms by program by grade (see Table 2) in the following cases: (a) more up-to-date norms led to decreases in NCE achievement gains for Chapter I students in grade 1, both Chapter I and non-Chapter I students in grade 3, and Chapter I students in grade 6; and (b) more up-to-date norms showed increases in NCE achievement gains for both Chapter I and non-Chapter I students in grade 2 and grade 4. Once more, an eta square of .14 limits assumptions of a practical difference.

State mathematics norms indicated a statistically significant two-way interaction for age of norms by grade (see Table 3) in the following cases: (a) significant increases in NCE achievement gains were found in grade 2 using 1988 norms only; and (b) in general, more up-to-date norms yielded decreasing NCE achievement gains for students in grade 5. An eta square of .02 once more indicated that limited practical differences exist.

When age of norms by program was investigated using state mathematics norms, a statistically significant two-way interaction (see Table 3) was observed. In general, the more up-to-date the norms, the smaller the mean NCE achievement gains for non-Chapter I students. An eta square of .05 was found, suggesting caution in assumptions of practical differences.

National mathematics norms indicated a statistically significant two-way interaction in an age of norms by grade comparison (see Table 4). In grades 2, 3, 4, and 6, more up-to-date norms showed decreases in NCE achievement gains. An eta square of .04 suggests that caution should be taken in interpreting practical differences.

The empirical evidence from these analyses can be used to support several conclusions, including conclusions relative to changes in achievement gains using different ages of norms. Also supported are conclusions concerning the relationship between age of norms and the equipercentile assumption.

Table 1
Analysis of Variance for NCE Reading Achievement Gains Using State Norms

Source            SS          df    MS         F
Between subjects
  P               62153.587      1  62153.586   83.30*
  G                8655.277      1   8655.277   11.60*
  P x G            3160.061      1   3160.061    4.24*
  s (P x G)      379025.519    508    746.113
Within subjects
  A                 315.684      3    105.228   34.41*
  A @ G2 x P1       189.773      3     63.258   20.69*
  A @ G5 x P1       542.888      3    180.963   59.18*
  A @ G2 x P2        11.266      3      3.755    1.23
  A @ G5 x P2       824.972      3    274.991   89.93*
  A x P              24.952      3      8.317    2.72*
  A x G            1175.997      3    391.999  128.17*
  A x P x G (a)      73.173      3     24.391    7.98*
  As (P x G)       4660.898   1524      3.058

Note. All significant three-way interactions for program effects and grade effects were omitted. P = Program (P1 = Chapter I, P2 = non-Chapter I); G = Grade (G2 = Grade 2, G5 = Grade 5); A = Age; s = student.
(a) Eta square = .16.
*p < .05.

Table 2
Analysis of Variance for NCE Reading Achievement Gains Using National Norms

Source            SS          df    MS         F
Between subjects
  P               40895.156      1  40895.156  135.71*
  G               11204.323      5   2240.865    7.44*
  P x G           14953.673      5   2990.735    9.93*
  s (P x G)      440848.009   1463    301.332
Within subjects
  A                    .905      1       .905     .18
  A @ G1 x P1       855.252      1    855.252  169.13*
  A @ G2 x P1       200.748      1    200.748   39.70*
  A @ G3 x P1       135.316      1    135.316   26.76*
  A @ G4 x P1       806.479      1    806.479  159.48*
  A @ G5 x P1         3.889      1      3.889     .77
  A @ G6 x P1        30.778      1     30.778    6.09*
  A @ G1 x P2         9.748      1      9.748    1.93
  A @ G2 x P2       236.073      1    236.073   46.68*
  A @ G3 x P2       146.747      1    146.747   29.02*
  A @ G4 x P2       374.750      1    374.750   74.11*
  A @ G5 x P2          .090      1       .090     .02
  A @ G6 x P2        18.867      1     18.867    3.73
  A x P              53.024      1     53.024   10.49*
  A x G            2559.993      5    511.999  101.25*
  A x P x G (a)     508.423      5    101.685   20.11*
  As (P x G)       7398.193   1463      5.057

Note. All significant three-way interactions for program effects and grade effects were omitted. P = Program (P1 = Chapter I, P2 = non-Chapter I); G = Grade (G1 = Grade 1, G2 = Grade 2, G3 = Grade 3, G4 = Grade 4, G5 = Grade 5, G6 = Grade 6); A = Age; s = student.
(a) Eta square = .14.
*p < .05.

Table 3
Analysis of Variance for NCE Mathematics Achievement Gains Using State Norms

Source            SS          df    MS         F
Between subjects
  P               21839.315      1  21839.315   26.34*
  G                6511.747      1   6511.747    7.85*
  P x G            1639.153      1   1639.153    1.98
  s (P x G)      417832.123    504    829.032
Within subjects
  A                  16.654      3      5.551    1.75
  A @ G2             92.229      3     30.743    9.72*
  A @ G5            564.857      3    188.286   59.52*
  A @ P1              8.912      3      2.971     .94
  A @ P2            158.770      3     52.923   16.73*
  A x P (a)          25.677      3      8.559    2.71*
  A x G (b)         296.531      3     98.844   31.25*
  A x P x G          12.610      3      4.203    1.33
  As (P x G)       4783.103   1512      3.163

Note. All significant two-way interactions for program effects and grade effects were omitted. P = Program (P1 = Chapter I, P2 = non-Chapter I); G = Grade (G2 = Grade 2, G5 = Grade 5); A = Age; s = student.
(a) Eta square = .05. (b) Eta square = .02.
*p < .05.

Table 4
Analysis of Variance for NCE Mathematics Achievement Gains Using National Norms

Source            SS          df    MS         F
Between subjects
  P               33473.689      1  33473.689   90.32*
  G               21757.436      5   4351.487   11.74*
  P x G            5738.546      5   1147.709    3.10*
  s (P x G)      539600.042   1456    370.604
Within subjects
  A                   6.814      1      6.814    1.69
  A @ G1           3350.134      1   3350.134  832.71*
  A @ G2           1562.400      1   1562.400  388.35*
  A @ G3             79.057      1     79.057   19.65*
  A @ G4             97.971      1     97.971   24.35*
  A @ G5            147.181      1    147.181   36.58*
  A @ G6            162.242      1    162.242   40.33*
  A x P              10.836      1     10.836    2.69
  A x G (a)        2878.892      5    575.778  143.12*
  A x P x G          25.921      5      5.184    1.29
  As (P x G)       5857.725   1456      4.023

Note. All significant two-way interactions for program effects and grade effects were omitted. P = Program (P1 = Chapter I, P2 = non-Chapter I); G = Grade (G1 = Grade 1, G2 = Grade 2, G3 = Grade 3, G4 = Grade 4, G5 = Grade 5, G6 = Grade 6); A = Age; s = student.
(a) Eta square = .04.
*p < .05.

Conclusions and Discussion

The results of this study indicate that achievement gains varied according to age of norms, grade level of students, and program of students. These achievement gains varied for both reading and mathematics. The findings do not support the equipercentile assumption.

Achievement Gains Using Different Ages of Norms

The achievement gains in this study fluctuated using different ages of norms for both reading and mathematics. For example, there was a difference of 3.057 NCEs in reading achievement for non-Chapter I students in grade 5 when comparing gains using 1985 state norms to gains using 1988 state norms. Although there was no overall finding that the more up-to-date norms resulted in either consistently increasing or consistently decreasing trends in achievement gains, trends were observed for gains when examined by grade and by program.

In examining gains by grade, gains using national reading norms (including those not significant) tended to decrease with more up-to-date norms, while gains using national mathematics norms tended to significantly increase with more up-to-date norms. Grade 2 gains for both reading and mathematics state norms (including those not significant) tended to increase with more up-to-date norms, while grade 5 gains tended to significantly decrease with more up-to-date norms.

An examination of gains by program indicated that, although the magnitude of the differences varied, the direction of the gains (whether increasing or decreasing) was the same for both Chapter I and non-Chapter I students at the same grade level. For example, at grade 6, both Chapter I and non-Chapter I students decreased in reading achievement gains using more recent national norms; however, the decrease was significant for Chapter I students, but not for non-Chapter I students.

Brennan (1988) reported that there have been both increases and decreases in student performance for the past 30 years. The results of this study support the fact that achievement is changing and is not static. When achievement trends have been relatively stable, old norms may be appropriate. It is when achievement levels have fluctuated that the recency of the norms becomes critical (Drahozal & Frisbie, 1988). Because of the significant differences in achievement gains found by varying age of norms in this study, this caution should also be applied to reports of achievement gains.

Equipercentile Assumption

The equipercentile assumption is not supported by the findings in this study, as shown by the significant interactions found among age, grade, and program. This is consistent with the results of other studies of the equipercentile assumption (Faddis & Arter, 1979; Linn, 1980; Powers, Slaughter, & Helmick, 1983). Although there were a few instances where the NCE gains approached zero, in no case was the mean NCE achievement gain for non-Chapter I students equal to zero (the no-treatment expectation of the equipercentile assumption), regardless of the age of the norms. In fact, the mean NCE achievement gain for non-Chapter I students was as large in magnitude as -10.855 with one set of norms.

Neither non-Chapter I nor Chapter I students maintained a constant percentile status using any of the six sets of norms. Not only were the NCE gains not zero, but the magnitude of the gains also changed as the age of the norms changed. In many cases, the magnitude of these changes in gains was statistically significant. This suggests that even if the equipercentile assumption were true for a specific set of norms, it would not necessarily be true for other sets of norms, particularly in times of changing achievement trends. Because the equipercentile assumption deals with achievement gains, it is important that it be examined not only in relation to the year that the tests were normed, but also in relation to the general direction of the achievement trends.

It is apparent from this study that using a constant reference group over time may have presented a "noise level" in achievement gain that is unacceptable. Because achievement was changing over time, some of the apparent gain that would have been attributed to treatment may have been due simply to this factor.

Implications

Though other possible threats to the validity of the equipercentile assumption have been examined in previous studies, the results of this study suggest that age of norms is yet another threat. Because the norm-referenced model has been used almost exclusively by Chapter I programs, it is important that age of norms be examined in relation to achievement gains. Changes in achievement over time make it difficult to accurately depict the achievement growth that is due to treatment relative to a constant reference group, resulting in achievement gains that may be either an overestimation or an underestimation. For example, if a Chapter I program reports an NCE growth of 3 using old norms during a period of increasing achievement, the result would be an overestimation of the actual gain due to treatment. On the other hand, an NCE growth of 3 using old norms during a period of decreasing achievement would underestimate any actual gain due to treatment. In order to more accurately measure the gain due to treatment, the change in achievement over time and its direction should be considered.

The relationship that age of norms has to achievement gain becomes even more important in light of new federal legislation. The Hawkins-Stafford School Improvement Amendments of 1988 now require that every school receiving Chapter I funds must show an increase in achievement (Jennings, 1988). Because of changes in achievement over time, the most accurate estimate of achievement gain would be made with the use of the most recent norms available. When current norms are not available, reporting the empirical norming dates and the achievement trends becomes even more important.

A more feasible alternative to solving the problems presented with age of norms might be the creation of annual norms. Although there is opposition to annual norming (Lenke & Keene, 1988; Qualls-Payne, 1988), one major testing company plans to provide "annual national norms updates" in the near future (Drahozal & Frisbie, 1988), while another company has already begun offering "annual national normative trend data" (Williams, 1988). This could help to avoid the error of inaccurately labeling an educational program as effective or ineffective when gain scores are related to age of norms.

Though age of norms has been recognized as an important issue in measurement of achievement (Drahozal & Frisbie, 1988; Lenke & Keene, 1988; Williams, 1988), it has not been applied to studies of achievement gains. A weakness of previous studies of the equipercentile assumption is that norming dates have not been included in the reports; therefore, it is not possible for the reader to determine the age of the norms. This study examined the age of norms in relation to the equipercentile assumption and found that the magnitude of the NCE gains varied according to age of norms.

The measurement of gain scores has presented psychometricians and evaluators with one of their greatest challenges. The most widespread model for measuring gain in educational evaluations in recent years has been the norm-referenced model. The factors influencing the accuracy of the norm-referenced model in general and its equipercentile assumption in particular are varied and interwoven. One very important factor is the relationship that age of norms has to the validity of the equipercentile assumption.

While the age of norms used in calculating achievement gains was the primary focus of this study, grade level and program were also considered. Significant three-way interactions were found among age of norms, grade, and program using both national and state reading norms. That is, significant differences in mean NCE reading achievement gains were found for both Chapter I and non-Chapter I students at different grade levels as ages of national and state norms varied. Evidence was also found to support the fact that there were significant differences in mean NCE mathematics achievement gains at each grade level tested using different ages of both national and state norms.

Based on the findings in this study, each of the null hypotheses was rejected. The equipercentile assumption was not supported by the findings in this study, as shown by the significant interactions among age of norms, grade, and program. Not only were the NCE gains not zero, but the magnitude of the gains also changed as the age of the norms changed, providing empirical evidence to support the fact that a relationship exists between age of norms and achievement gains for both Chapter I and non-Chapter I students. However, no discernible pattern was found indicating that either consistently increasing or consistently decreasing trends in achievement gains were associated with more recent norms.

Based on the research design and results of this study, several recommendations and implications should be considered. First, it is recommended that a replication of this study be undertaken with a more representative sample of students so that results can be generalized to a larger population. Second, the results of this study suggest that age of norms is a threat to the validity of the equipercentile assumption, which has implications for evaluations that are based on this assumption. Third, a recommended alternative for solving the problems found with age of norms in this study is the creation of annual norms. It is hoped that methods for program evaluations that do not saddle educational researchers with unrealistic demands for perfection, yet at the same time can be used with confidence, adequately reflecting the reality of treatment, can be validated.

References

Angoff, W. H. (1971). Scales, norms, and equivalent scores. In R. L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 508-600). Washington, DC: American Council on Education.

Bereiter, C. (1963). Some persisting dilemmas in the measurement of change. In C. W. Harris (Ed.), Problems in measuring change (pp. 3-20). Madison: University of Wisconsin Press.

Boruch, R. F. (1987). Conducting social experiments. In D. S. Cordray, H. S. Bloom, & R. J. Light (Eds.), Evaluation practice in review (pp. 45-66), New directions for program evaluation (No. 34). San Francisco: Jossey-Bass.

Brennan, A. (1988). Riverside Publishing Company position paper on the Friends for Education report. (Available from Riverside Publishing Company, Test Division, Chicago)

Cannell, J. J. (1988). Nationally normed elementary achievement testing in America's public schools: How all 50 states are above the national average. Educational Measurement, 7(2), 5-9.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.

Crawford, J., & Kimball, G. (1986, April). Chapter I evaluations: An alternative model to assess achievement growth. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

Cronbach, L. J., & Furby, L. (1970). How we should measure "change" - or should we? Psychological Bulletin, 74, 68-80.

Diederich, P. B. (1956). Pitfalls in the measurement of gains in achievement. The School Review, 61, 59-63.

Drahozal, E. C., & Frisbie, D. A. (1988). Riverside comments on the Friends for Education report. Educational Measurement, 7(2), 12-16.

Faddis, B. J., & Arter, J. A. (1979, April). An empirical comparison of ESEA Title I evaluation models A and B. Paper presented at the annual meeting of the American Educational Research Association, San Francisco.

Freund, R. J., Littell, R. C., & Spector, P. C. (1986). SAS system for linear models. Cary, NC: SAS Institute.

Gardner, E. F., Madden, R., Rudman, H. C., Karlsen, B., Merwin, J. C., Callis, R., & Collins, C. S. (1985). Stanford achievement test series technical data report. New York: Harcourt Brace Jovanovich.

Harcourt Brace Jovanovich. (1983). Stanford achievement test series multilevel norms booklet, national. New York: Author.

Harcourt Brace Jovanovich. (1987). Stanford 7 plus multilevel norms booklet, national. New York: Author.

Hiscox, S. B., & Owen, T. R. (1978, March). Behind the basic assumption of Model A. Paper presented at the annual meeting of the American Educational Research Association, Toronto.

Hopkins, K. D., & Stanley, J. C. (1981). Educational and psychological measurement and evaluation (6th ed.). Englewood Cliffs: Prentice-Hall.

Horst, D. P., Tallmadge, G. K., & Wood, C. T. (1975). A practical guide to measuring project impact on student achievement (Stock No. 017-080-01460-2). Washington, DC: U.S. Government Printing Office.

House, G. D. (1985). TIERS evaluation model equivalence and equipercentile assumption validity. Unpublished manuscript.

Jennings, J. F. (1988). Working in mysterious ways: The federal government and education. Phi Delta Kappan, 70, 62-65.

Lenke, J. M., & Keene, J. M. (1988). A response to John J. Cannell. Educational Measurement, 7(2), 16-18.

Linn, R. L. (1980). Evaluation of Title I via the RMC models: A critical review. In E. L. Baker & E. S. Quellmalz (Eds.), Educational testing and evaluation: Design, analysis, and policy (pp. 121-142). Beverly Hills: Sage.

Linn, R. L. (1981). Measuring pretest-posttest performance changes. In R. A. Berk (Ed.), Educational evaluation methodology: The state of the art (pp. 84-109). Baltimore: Johns Hopkins University Press.

Linn, R. L., Dunbar, S. B., Harnisch, D. L., & Hastings, C. N. (1982, April). The validity of the Title I evaluation and reporting system. In E. R. Reisner, M. C. Alkin, R. F. Boruch, R. L. Linn, & J. Millman, Assessment of the Title I evaluation and reporting system (pp. 7-26). Washington, DC: U.S. Department of Education.

Linn, R. L., & Slinde, J. A. (1977). The determination of the significance of change between pre- and posttesting periods. Review of Educational Research, 47, 121-150.

Lord, F. M. (1956). The measurement of growth. Educational and Psychological Measurement, 16, 421-437.

Lord, F. M. (1958). Further problems in the measurement of growth. Educational and Psychological Measurement, 18, 437-451.

Merkel-Keller, C. (1986, April). The evolution of evaluation: Title I to Chapter I. Paper presented at the annual meeting of the American Educational Research Association, San Francisco. (ERIC Document Reproduction Service No. ED 269440)

Murray, S., & Arter, J. (1980). Internal validity of the ESEA Title I evaluation models. In G. Echternacht (Ed.), Measurement aspects of Title I evaluations (pp. 17-31), New directions for testing and measurement (No. 8). San Francisco: Jossey-Bass.

National Assessment of Educational Progress. (1983, April). The third national mathematics assessment: Results, trends, and issues (Report No. 13-MA-01). Denver: Education Commission of the States.

National Assessment of Educational Progress. (1985). The reading report card (Report No. 15-R-01). Princeton: Educational Testing Service.

O'Connor, E. F., Jr. (1972). Extending classical test theory to the measurement of change. Review of Educational Research, 42, 73-97.

Otis, A. S., & Lennon, R. T. (1982). Otis-Lennon School Ability Test manual for administering and interpreting. New York: Harcourt Brace Jovanovich.

Powell, G. D., & Raffeld, P. C. (1980, April). An investigation of the equipercentile assumption and the one-group pre-post design. Paper presented at the annual meeting of the American Educational Research Association, Boston. (ERIC Document Reproduction Service No. ED 190627)

Powell, G., Schmidt, J., & Raffeld, P. (1979, April). The equipercentile assumption as a pseudo-control group estimate of gain. Paper presented at the annual meeting of the American Educational Research Association, San Francisco. (ERIC Document Reproduction Service No. ED 174673)

Powers, S., Slaughter, H., & Helmick, C. (1983). A test of the equipercentile hypothesis of the TIERS norm-referenced model. Journal of Educational Measurement, 20, 299-302.

Qualls-Payne, A. L. (1988). SRA response to Cannell's article. Educational Measurement, 7(2), 21-22.

Stanley, J. C. (1971). Reliability. In R. L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 356-442). Washington, DC: American Council on Education.

Tallmadge, G. K. (1985). Rumors regarding the death of the equipercentile assumption may have been greatly exaggerated. Journal of Educational Measurement, 22, 33-39.

Tallmadge, G. K., Wood, C. T., & Gamel, N. N. (1981, February). User's guide: Title I evaluation and reporting system (Vol. 1). Mountain View, CA: RMC Research Corporation.

Thorndike, E. L. (1924). The influence of the chance imperfections of measures upon the relation of initial score to gain or loss. Journal of Experimental Psychology, 7, 225-232.

Williams, P. L. (1988). The time-bound nature of norms: Understandings and misunderstandings. Educational Measurement, 7(2), 18-21.

Wiser, B., & Lenke, J. M. (1987, April). The stability of achievement test norms over time. Paper presented at the annual meeting of the National Council on Measurement in Education, Washington, DC.

Wood, C. T. (1980, July). The adequacy of the equipercentile assumption in the norm-referenced evaluation model. Mountain View, CA: RMC Research Corporation.

Reviewers Needed

Publications Editor Newton Suter is still accepting interested individuals to serve as reviewers for manuscripts submitted for publication in the MSERA Researcher. MSERA members who would like to become actively involved in the organization through this activity are encouraged to contact Dr. Suter by phone (501) 569-3357 or by mail. All correspondence and manuscripts should be sent to:

W. Newton Suter, PhD
Publications Editor, MSERA Researcher
Department of Educational Leadership
University of Arkansas at Little Rock
2801 S. University Avenue
Little Rock, AR 72204

Carl Martray, MSERA Past President, has been selected to serve as Dean of the College of Education and Behavioral Sciences at Western Kentucky University.

Tom Saterfiel, who was elected to the MSERA Board of Directors representing State Education Agencies, is leaving the Mississippi Department of Education to become vice president of Research for American College Testing. Collin Ballance, of the Tennessee Department of Education, will replace Tom on the MSERA Board of Directors.

1990 Annual Meeting

Monteleone Hotel, New Orleans, Louisiana

Editorial

As a result of Board action, the MSERA Researcher becomes a copyrighted publication, effective with this issue. This is one way in which the publications committee and the MSERA Board hope to make the MSERA Researcher more appealing to potential authors. The move to expand the Researcher from a newsletter to a refereed publication was undertaken in an attempt to provide an additional outlet to MSERA members for publication of their research.

The Researcher has never been overwhelmed with submissions of manuscripts, and some issues have been published without a research article because there were none to be included. In an organization with membership that annually ranges from approximately 400 to 600, depending on the time of year, there should be enough submissions to at least include one article in each of the four regular issues of the Researcher if there is a need for this type of service to members. Not all manuscripts submitted were accepted, and it is the feeling of those of us involved in the review process that the organization wanted the articles that were published to reflect the high quality of work that is characteristic of our members as well as being of interest to our readers.

Dr. Newton Suter, the new publications editor who is now handling the manuscript review process, has asked that we again call for individuals to serve as reviewers. Reviewing manuscripts is a way of providing service to MSERA, but it also provides an opportunity for growth on the part of the individuals who do the reviewing. If MSERA is going to have a refereed publication of any type, we (MSERA members) must support this dimension of MSERA, through submissions of our work and service as reviewers, if it is to grow and succeed.

On another note, the 1990 MSERA Membership Directory will be mailed separately from this issue of the Researcher. In the past, the directory was of the same dimensions (8 1/2 x 11 inches) as the Researcher, facilitating joint mailing. This year the directory will be produced in the 7 by 8 1/2 inch booklet size that was used for the Annual Meeting Program Issue last fall.

Position Available

Position: Education Specialist, GS-1710-13
Location: Air War College, Maxwell AFB, AL
Type Appointment: Career/Career Conditional
Open: June 1, 1990
Close: June 14, 1990

Major Duties: The incumbent serves as Chief of Evaluation charged with designing, implementing, analyzing, and reporting upon evaluation projects which provide assessments of the Air War College's institutional effectiveness; the curriculum of the institution, to include individual instructional periods, primary courses, Advanced Studies and Electives, Field Studies, Field Trips, and the total Air War College educational experience; and the students' achievement of desired learning outcomes.

Qualification Requirement: Successful completion of a full 4-year course of study leading to a bachelor's degree from an accredited college or university. This study must have included 24 semester hours in education, six semester hours of which must have been completed in courses in tests and measurements. In addition, candidates must present two academic years of appropriate experience in teaching, educational research, development of education or training aids, educational testing, guidance counseling, education administration, or other comparable activity, plus one year of experience in evaluating educational programs. NOTE: Candidates will not qualify if they do not possess six semester hours or equivalent of education courses in the tests and measurements discipline.

How to Apply: Submit SF 171 (Personal Qualification Statement) to:

Office of Personnel Management
Huntsville Area Office
Building 600, Suite 341
3322 Memorial Parkway, South
Huntsville, AL 35801-5311

This form may be obtained from any Office of Personnel Management or Federal Civilian Personnel Office. An Equal Opportunity Employer.

Papers submitted for the 1990 Annual Meeting program must be postmarked on or before July 15, 1990.

1990 MSERA BOARD OF DIRECTORS

President
John R. Petry
302 Ball, Memphis State University
Memphis, TN 38152
(901) 678-2362

Past President
Carl Martray
Office of the Dean, College of Education
Western Kentucky University
Bowling Green, KY 42010
(502) 745-4662

Vice President/President-Elect
Gypsy Abbott Clayton
Dept. of Human Services in Education
University of Alabama at Birmingham
Birmingham, AL 35291
(205) 934-1800

Secretary/Treasurer
Jeffrey Gorrell
Department of EFLT, Auburn University
Auburn, AL 36849-5221
(205) 844-3050

Board of Directors: Alabama
Anne Tishler
University of Montevallo, UM Station 6345
Montevallo, AL 35115
(205) 665-6345

Board of Directors: Arkansas
Don Wright
P.O. Box 76
State University, AR 72467
(501) 972-2317

Board of Directors: Kentucky
Lynda N. Lee
406 Combs, Eastern Kentucky University
Richmond, KY 40475-0940
(606) 622-1124

Board of Directors: Louisiana
James R. Flaitz, Jr.
103 Girard Hall, USL Box 43091
Lafayette, LA 70504-3091
(318) 231-5261

Board of Directors: Mississippi
Joe Blackbourn
Drawer LH, MSU
Mississippi State, MS 39762
(601) 325-3041

Board of Directors: Tennessee
Timothy J. Pettibone
212 Claxton, The University of Tennessee
Knoxville, TN 37996-3400
(615) 974-2272

Board of Directors: Local Education Agency
Joan Butler
Early Childhood Center, Sudduth Elementary School
Greenfield Drive
Starkville, MS 39759
(601) 324-4150

Board of Directors: State Dept. of Education
Collin Ballance
Tennessee State Department of Education
134 Cordell Hull
Nashville, TN 37219
(615) 741-2361

Board of Directors: At Large
C. David Bell
Arkansas Tech University, Crabaugh
Russellville, AR 72801
(501) 968-0290

Board of Directors: At Large
Glenelle Halpin
5086 Haley Center
Auburn University, AL 36849
(205) 826-4457

Board of Directors: At Large
Diana Lancaster
1100 Florida Ave., Box 140
New Orleans, LA 70119-2799
(504) 948-8683

Board of Directors: At Large
James Turner
P.O. Box 6331
Mississippi State, MS 39762
(601) 325-0375

LERA Affiliate Representative
Beatrice Baldwin
P.O. Box 782, Southeastern Louisiana University
Hammond, LA 70402
(504) 549-5019

Executive Secretary
Harry Bowman
302 Ball, Memphis State University
Memphis, TN 38152
(901) 678-2363

The Researcher
Published at the University of Tennessee for the Mid-South Educational Research Association
Bureau of Educational Research and Service
212 Claxton Building
The University of Tennessee
Knoxville, Tennessee 37996-3400

Non-Profit Org.
U.S. Postage PAID
The University of Tennessee, Knoxville