
DOCUMENT RESUME - ERIC

ED 287 431    HE 020 900

AUTHOR: Alexander, Joanne M.; Stark, Joan S.
TITLE: Focusing on Student Academic Outcomes: A Working Paper.
INSTITUTION: National Center for Research to Improve Postsecondary Teaching and Learning, Ann Arbor, MI.
SPONS AGENCY: Office of Educational Research and Improvement (ED), Washington, DC.
PUB DATE: 86
GRANT: OERI-86-0010
NOTE: 43p.; From the NCRIPTAL Program on Research Leadership, Design and Integration.
AVAILABLE FROM: NCRIPTAL, Suite 2400 School of Education Building, The University of Michigan, Ann Arbor, MI 48109-2748 (Technical Report No. 86-A002.0, $5.00).
PUB TYPE: Reports - Descriptive (141) -- Information Analyses (070)
EDRS PRICE: MF01/PC02 Plus Postage.
DESCRIPTORS: Academic Achievement; Basic Skills; *Classification; *College Students; *Educational Assessment; *Evaluation Methods; Higher Education; *Outcomes of Education; *Research Design; Standardized Tests; Student Development
IDENTIFIERS: *College Outcomes Assessment

ABSTRACT: Current methods and instruments for assessing college student academic outcomes are identified and described, and possible outcome measures for NCRIPTAL's (National Center for Research to Improve Postsecondary Teaching and Learning) research program are suggested. Section I defines college outcomes from several perspectives, including pressures for outcome assessment, emphasis on quality, and issues that hinder assessment. Section II reviews various approaches to outcome assessment as well as existing typologies for classifying outcomes, including works of Pace, Astin, Bowen, Ewell, and Lenning. A typology tentatively adopted by NCRIPTAL researchers is introduced and delimits the discussions of outcomes in Section III, which presents several common outcome measures. They are grouped in three categories: (1) academic-cognitive outcomes (Graduate Record Examination, Undergraduate Assessment Program of the College Entrance Examination Board, American College Testing (ACT) Program achievement tests, ACT College Outcome Measures Project, College Level Examination Program, critical thinking and higher level outcome measures, basic skills); (2) academic-motivational outcomes; and (3) academic-behavioral outcomes (career/life goal exploration, diversity, persistence, faculty-student relationships). Appended are charts showing the categories of the NCHEMS Outcome Structures, and lists of basic skills tests for college students. Seventy references are included. (LB)

***********************************************************************
Reproductions supplied by EDRS are the best that can be made from the original document.
***********************************************************************


"PERMISSION TO REPRODUCE THIS MATERIAL HAS BEEN GRANTED BY

NCRIPTAL

TO THE EDUCATIONAL RESOURCES INFORMATION CENTER (ERIC)"

U.S. DEPARTMENT OF EDUCATION
Office of Educational Research and Improvement

EDUCATIONAL RESOURCES INFORMATION CENTER (ERIC)

This document has been reproduced as received from the person or organization originating it.

Minor changes have been made to improve reproduction quality.

Points of view or opinions stated in this document do not necessarily represent official OERI position or policy.

BEST COPY AVAILABLE


Focusing on Student Academic Outcomes

A Working Paper



Papers in this series available from NCRIPTAL

Approaches to Research on the Improvement of Postsecondary Teaching and Learning: A Working Paper
Patricia J. Green and Joan S. Stark

Postsecondary Teaching and Learning Issues in Search of Researchers: A Working Paper
Carol D. Vogel and Joan S. Stark

Teaching and Learning in the College Classroom: A Review of the Research Literature
Wilbert J. McKeachie, Paul R. Pintrich, Yi-Guang Lin, and David A. F. Smith

Psychological Models of the Impact of College on Students
Harold A. Korn

Designing the Learning Plan: A Review of Research and Theory Related to College Curricula
Joan S. Stark and Malcolm A. Lowther, with assistance from Sally Smith

Faculty as a Key Resource: A Review of the Research Literature
Robert T. Blackburn, Janet H. Lawrence, Steven Ross, Virginia Polk Okoloko, Jeffery P. Bieber, Rosalie Meilarui, and Terry Street

The Organizational Context for Teaching and Learning: A Review of the Research Literature
Marvin W. Peterson, Kim S. Cameron, Lisa A. Mets, Philip Jones, and Deborah Ettington

Electronic Information: Literacy Skills for a Computer Age
Jerome Johnston

Design in Context: A Conceptual Framework for the Study of Computer Software in Higher Education
Robert Kozma and Robert Bangert-Drowns



Focusing on Student Academic Outcomes

A Working Paper

by Joanne M. Alexander and Joan S. Stark

Grant Number OERI-86-0010

Joan S. Stark, Director
Wilbert J. McKeachie, Associate Director

Suite 2400 School of Education Building
The University of Michigan

Ann Arbor, Michigan 48109-2748

(313) 936-2748



©1986 by the Regents of The University of Michigan

The project presented, or reported herein, was performed pursuant to a grant from the Office of Educational Research and Improvement/Department of Education (OERI/ED). However, the opinions expressed herein do not necessarily reflect the position or policy of the OERI/ED, and no official endorsement by the OERI/ED should be inferred.



Contents

Introduction 1

I. Outcomes and Outcome Assessment 3
    Current Pressures for Outcome Assessment 3
    An Emphasis on Quality 3
    Issues that Hinder Assessment 4
    Purposes of Outcome Assessment 5
    Locus of Assessment 5
    NCRIPTAL's Research Mission and Model 6

II. Approaches to Outcomes and Outcome Assessment 9
    NCRIPTAL's Delimited Outcome Framework 12
    Type of Outcome Measures 12
    Level of Outcome Measures 14
    Form of Outcome Measures 14

III. Outcome Measures and Outcome Research 17
    Academic-Cognitive Outcomes 17
        Graduate Record Examination 17
        American College Testing Program Achievement Tests 19
        Undergraduate Assessment Program of the College Entrance Examination Board: Area Tests and Field Tests 19
        ACT College Outcome Measures Project 20
        College Level Examination Program 20
        National Teacher Examination 20
        Critical Thinking and Higher Level Outcome Measures 21
        Basic Skills 23
    Academic-Motivational Outcomes 23
        Involvement and Quality of Effort 23
    Academic-Behavioral Outcomes 24
        Career and Life Goal Exploration 24
        Exploration of Diversity 25
        Persistence 25
        Faculty-Student Relationships 25

Conclusion 26

Appendixes 27
    A. Categories of the NCHEMS Outcome Structures 27
    B. Basic Skills Tests for College Students 31

References 35



Tables

1. A Typology of Student Outcomes (Astin, Panos, & Creager) 10
2. Outcomes Over Time (Astin) 10
3. Propositions and Caveats About Type, Level, and Form of Outcome Measurement 13
4. A Whole-Person Approach to College Student Outcomes 13
5. NCRIPTAL's Outcome Framework 14

Figures

1. Parameters of Outcome Assessment 6
2. Variables in NCRIPTAL's Research Agenda 7



Focusing on Student Academic Outcomes

Introduction

The purpose of this working paper is (1) to identify and describe some current methods and instruments for assessing college student academic outcomes, (2) to suggest possible outcome measures for NCRIPTAL's research program, and (3) to suggest methods of outcome assessment for other researchers and practitioners. In the process of this exploration we review existing literature on outcome assessment and other related research literature on college outcomes to set the stage for a discussion of appropriate methods for recognizing improved teaching and learning. We also examine typologies and frameworks for understanding outcomes developed by several scholars.

The literature on college outcomes and their measurement is evolving rapidly. We acknowledge a substantial debt to several scholars, particularly Alexander Astin, Howard R. Bowen, Peter Ewell, C. Robert Pace, Ernest Pascarella, and others, from whose earlier reviews we have selected and summarized material liberally. Along with these scholars and other organizations, such as the American Association for Higher Education, which has gathered a substantial collection of literature on outcome assessment, NCRIPTAL desires to be of service to educators who seek a broad, nontechnical summary of this emerging field.

Undoubtedly, there is new and important literature that we have overlooked or that is still in press. It is particularly difficult to know of valuable work in progress at individual campuses, but we believe that this general summary will encourage practitioners and researchers to inform us of their successes as well as the difficulties they encounter. Therefore, this paper is a working document to be updated periodically as NCRIPTAL learns of the work of educators who have developed new measures and new techniques.

This paper is a working document in another sense as well. It has been developed during the first few months of our existence as a national center to conduct research and provide leadership in improving postsecondary teaching and learning. Concurrently, several NCRIPTAL researchers are developing reviews of outcome measures that can be used to assess specific aspects of student academic development. Because of their concurrent development and the technical nature of these related reviews, they are mentioned only briefly in this overview. A list of the concurrently developed papers follows the title page.

As NCRIPTAL's work proceeds over the next several years, an important goal is to understand connections among more specific areas of research on student growth. Thus, future syntheses will describe and develop more completely the relationships among potential measures of student outcomes within the NCRIPTAL typology described later in this paper.

The paper is organized in the following manner: Section I defines college outcomes from several perspectives and discusses the importance of outcome assessment at the postsecondary level. Pressure from the academic community and from federal and state agencies has increased interest among educators in assessing student achievement. The resulting interest has brought into focus a number of issues about outcome assessment, including choices among appropriate models for outcome assessment. NCRIPTAL's mission in improving postsecondary teaching and learning, and its relation to outcome assessment, is introduced.

Section II reviews various approaches to outcome assessment as well as existing typologies for classifying outcomes. The recent works of Pace, Astin, Bowen, Ewell, Lenning, and others are discussed. A typology tentatively adopted by NCRIPTAL researchers is introduced and delimits the discussions of outcomes in Section III.

Section III presents a number of common outcome measures as potential measures both for NCRIPTAL and for the research community and educators in general. Areas of the NCRIPTAL typology in which outcome measures are underdeveloped are noted.




I. Outcomes and Outcome Assessment

Before engaging the issues of outcome assessment, a review of the definitions of outcomes that have been used by researchers is in order. In their groundbreaking review, Feldman and Newcomb (1969) refer to the impact of college on students rather than to outcomes. They view impact as the influence of colleges on student orientations and characteristics. Bowen (1977) takes an economic approach to outcomes, defining them as the result of transformed institutional resources. The primary product of the transformation is individual learning; additional products include changes in other intangible individual qualities.

Pace (1979) defines outcomes as changes that are widely accepted as goals of higher education and that are the result of events and experiences in college designed to help students attain these goals. Astin (1980) uses a value-added approach to outcomes. He specifies that outcomes are the measured differences between the entry characteristics of a student and the characteristics of that student on exit from college. Ewell's (1983) definition of outcomes coincides with Astin's. He defines outcomes as any change or consequence that occurs as a result of enrollment in an educational institution and participation in its programs.

The major differences among these definitions relate to whether they address the question of what the outcomes are or the question of why outcomes occur. Pace (1979) claims that Feldman and Newcomb, through the use of the term impact, are attempting to explain the causes of certain outcomes. The question that Feldman and Newcomb address, therefore, is why certain changes occur. In contrast, Pace prefers simply to address the issue of change. He claims that by measuring change the "what" question being addressed is: What are the outcomes of college? This question, he believes, is much simpler to address and provides primary evidence of the results of a college education.

For the purpose of NCRIPTAL's research program for improving teaching and learning, both the "what" and the "why" questions are important. This paper focuses on determining what changes occur; other NCRIPTAL literature reviews focus on various reasons for the outcomes. This paper also addresses the question of which measures may be most appropriate for measuring the outcomes of postsecondary learning.

The outcome definition most closely suited to NCRIPTAL's model is Astin's. NCRIPTAL's use of a value-added, change-based model will assist in the discovery of the effects of various instructional, programmatic, and individual characteristics on the teaching and learning process. The framework NCRIPTAL has adopted to focus its work is presented in Figure 1 (see page 7).
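Astin's value-added definition, which NCRIPTAL tentatively adopts, can be written as a simple gain score. The notation below is ours, offered only as an illustration; none of the authors reviewed here present the definition this way:

```latex
O_i = M_i^{\mathrm{exit}} - M_i^{\mathrm{entry}}
```

Here $O_i$ is the outcome attributed to college for student $i$, and $M_i^{\mathrm{entry}}$ and $M_i^{\mathrm{exit}}$ are the same measure administered at entry and at exit. In practice, researchers often prefer regression-adjusted (residual) gains to raw differences, since raw gain scores are sensitive to measurement error and to entry-level differences; the definitions reviewed here do not commit to either form.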

Current Pressures for Outcome Assessment

Currently there are pressures for outcome assessment in postsecondary education from two directions: from the academic community itself and from employers of college graduates. First, the academic community is calling on postsecondary institutions to use assessment as a means of improving the quality of education. A number of books and articles have pointed to an apparent lack of quality control in collegiate education and have suggested measurement of student progress as one means of rectifying the situation. The National Institute of Education's Involvement in Learning (NIE Study Group, 1984) calls for the systematic assessment of students' knowledge, capacities, and skills as a way of addressing problems in the undergraduate curriculum. In To Reclaim a Legacy (1984), William Bennett also called for curricular reform, minimum standards, and assessment as a way of standardizing the meaning of the undergraduate degree. Integrity in the College Curriculum (Association of American Colleges, 1985) posits that the absence of institutional accountability is a grave problem and that the measurement of student progress poses a solution to the dilemma. Access to Quality Undergraduate Education (Southern Regional Education Board, 1985) calls for a cooperative effort within the educational community to find ways of improving quality while maintaining access. The report suggests that new ways of measuring student progress and performance are needed to resolve the access/quality problem.

An Emphasis on Quality

A key issue raised in all of these reports is quality. For years the worth of a college education went unquestioned; it was typically assumed that college graduates left college with an increased amount of knowledge and understanding. Today, however, public attitudes have changed. A number of indicators point to a decrease in the quality of college education, and this has resulted in a call for accountability for improved teaching and learning. Among the cited indicators are: (1) a large number of students who need remedial courses at the college level, (2) a decline in student scores on verbal sections of standardized tests, (3) a decline in graduate scores on standardized tests and professional licensing exams, and (4) an increased number of students pursuing professional and occupational studies rather than a liberal arts education (Hartle, 1985). Despite disagreements over the appropriateness of these indicators, many believe the quality problem is real.

These reports have pointed to assessment as a solution to the problem. The common themes in these reports include a need for stronger student performance, clearer expectations of what college students should learn, and more rigorous measurement of educational achievement.

The expressions of concern about educational quality are not unique to the academic community. Spokespersons from private industry, government, and accreditation agencies have called for increased institutional accountability as well (Ewell, 1985). The private sector, as the major employer of college graduates, desires more uniformly high quality in the graduates it hires. In addition, state legislators are becoming increasingly concerned with the return states are getting from their investment in higher education. These constituencies represent the second source of pressure for postsecondary assessment.

Issues that Hinder Assessment

Despite this increased pressure on institutions to evaluate and assess student outcomes, very few colleges actually have established student assessment programs (Ewell, 1985). This limited response may be due to a number of concerns and problems revolving around student assessment.

One of the most difficult issues surrounding outcome assessment is the question of what the outcomes of college should be. At the secondary level, the assessment of basic skills provides a commonly accepted level of achievement. At the college level, however, there is no common base level of higher academic skills that is universally accepted. As Turnbull (1985) stated,

Beyond basic skills there lies an immense realm of disagreement about collegiate goals.... It is essential to realize that the purposes of higher education are a matter of fundamental debate. (p. 24)


This lack of consensus exists within colleges as well as between colleges. Hartle (1985) recognizes the problem within institutions.

The central problem is that measuring educational achievement may well require more agreement about the ends and means of a higher education than we have at most institutions. (p. 15)

The development of consensus on minimum requirements is crucial for developing a successful outcome assessment program within a given institution. Consensus among institutions would be even more difficult to attain and might result only in agreement on very minimal outcomes.

A common set of stated outcomes for all institutions would make it difficult to take into account the broad range of institutional goals and missions. To be useful and effective, assessment programs must address the diversity of institutional goals. In a society with a diversified and decentralized system of postsecondary education, assessment programs must be tailored to fit the needs of the various institutions.

In his discussion of accountability, Bowen (1974) recognizes the diverse goals of institutions and calls for a matching of assessment programs to college goals. He proposes that to attain true institutional accountability an institution must (1) define goals and order priorities, (2) measure and identify outcomes, (3) compare the outcomes with the goals to determine the degree to which goals have been met, and (4) measure the cost and determine whether it is reasonable (p. 2).

Once the questions of consensus and outcomes have been resolved, the method of measurement arises as an issue. Numerous instruments are available for measuring both cognitive and affective outcomes among students. Deciding on the appropriate measures is difficult. Harris (1985) offers guidelines for practitioners getting started with an assessment program in higher education.

Assessing improved teaching and learning is also somewhat hindered by the political problems it can pose for faculty and administrators. Ewell (1983) identifies a number of concerns that cause administrators to avoid assessment. First of all, they fear that no positive impact will be found and that results will reflect badly on their leadership. Ewell believes this fear is unfounded, given a number of successful attempts in finding positive outcomes. Administrators are also concerned about the misinterpretation of quantitative results. Although some outcomes may be qualitative in nature, the measures used most typically are quantitative descriptors. Administrators and faculty fear that people may place too much weight on the numbers, losing sight of the fact that the measures are only proxies for the actual outcomes. In addition, the false precision of quantitative measures may further complicate the public interpretation of the results. These fears among educators have hindered the development of student outcome assessment programs.

Purposes of Outcome Assessment

Even if agreement could be reached on what the outcomes of college should be, assessment holds varied meanings for various parties. According to Hartle (1985), the term assessment, in education, is often used interchangeably with evaluation and measurement. Yet there are subtle differences among these terms. Assessment is the process of gathering data (measurement) and assembling evidence into an interpretable form for some intended use. Once the information is gathered, judgments (evaluations) may be made based on the evidence. Measurement, therefore, is only a part of the assessment process and is not necessarily synonymous with assessment, whereas evaluation implies judgments based on the collated measures.

In addition to the problem of defining assessment, individuals tend to associate assessment with numerous and diverse activities. Hartle (1985) mentions six separate, non-parallel, but overlapping activities that may be believed to represent assessment in higher education. One activity uses multiple measures and observers to monitor students' intellectual and personal growth. Another assessment activity may be associated with state-mandated requirements for evaluating student progress or academic program success. A third assessment activity is the value-added method for measuring student progress, which involves pre- and post-testing and attribution of gains to the college experience. A fourth activity involves the use of standardized testing to measure the extent of student knowledge. A fifth assessment activity uses assessment in fund allocation, frequently by rewarding institutions based on performance criteria. Finally, measurement of changes in student attitudes and values is a sixth activity considered part of the assessment domain.

Locus of Assessment

In addition to the issues surrounding agreement on assessment outcomes and activities, issues arise over the level of analysis at which assessment occurs. Data gathering for assessment programs can occur at one or more of three levels: the individual student, the academic department or program, and the institution. The usefulness of assessment at each level depends on the purpose of the investigation. Some individuals have argued that assessment should focus on the individual learner (Hartle, 1985; Bowen, 1979). Others view the academic department as the appropriate level of analysis for investigating teaching and learning environments (Winteler, 1981). At a higher level of aggregation, measures have been developed to assess institutional outcomes. Pascarella (1985), however, asserts that institutional differences are much more difficult to pinpoint and that differences in outcomes are more effectively explained through individual characteristics.

One final important issue concerning outcome assessment is whether the process is internally or externally administered. While some institutions have established their own programs (e.g., Alverno College), others have received state mandates to use standardized testing to assess the quality of education (e.g., Tennessee and Florida). In general, academic institutions tend to feel threatened by external control and fear loss of autonomy. Some assessment proponents indicate that these fears should inspire institutions to initiate internally directed programs before state-level initiatives are realized. Institutional assessment programs may have the flexibility needed to address most appropriately the particular goals of the institution.

There is strong opinion among some authorities that the pressure for educational assessment will not dissipate. For example, basing his conclusion on several factors, Hartle (1985) warns that assessment is not a passing fad. First, now that the issue of student access has been resolved, the focus on quality assurance is essential. Second, there is widespread public concern over the lack of value placed on teaching in some colleges. Finally, state governments are well informed about and interested in the quality of education.

Such predictions, and the abundance of existing definitions and interpretations, indicate a continuing need to specify at least eight parameters for effective discussion about assessment of student outcomes.

1. What are the purposes or incentives for assessment? Assessment activities to satisfy state mandates may differ substantially from those undertaken to improve learning within an institution.

2. The type of assessment being discussed is important. For example, is information on student values, student academic achievement, or the employment rate of new graduates to be gathered? Is outcome information to be gathered only on those outcomes upon which some group has achieved consensus, or, with the possibility of improving understanding or consensus, should it also be gathered in areas about which little agreement exists?

3. At what level is the assessment to occur? For example, is it important to learn about progress of individual students or groups of students, about students in specific academic programs, about the programs themselves, or about the institution as an aggregate of students and programs?

4. What will be the form of assessment? For example, is the assessment to focus on measures of student change (the "value-added" approach) or to determine whether students have reached some expected level of achievement? Can these two forms be effectively combined? Will multiple measures or a single measure of each outcome be used?

5. What agency will be responsible for administration of an assessment program? Possibilities range from external groups such as state agencies and legislatures to groups of faculty in specific academic programs.

6. If evaluation is to follow assessment, what will be the locus of evaluation? That is, once the information has been gathered in an interpretable form, who will make evaluative judgments about what types of change or stability are suggested?

7. What will be the locus of decisions about the appropriate use of the information or about any evaluative judgments that are made?

8. What will be the use of the evaluative judgments made on the basis of assessment activities? For example, if judgments are made, will they be about the merit or worth of students, the merit or worth of programs, the merit or worth of institutions? Or will they be focused on specific recommendations for improvement of student learning, of program quality, or of institutional functioning?

These eight parameters may be interrelated in many ways; certainly decisions about each will influence decisions about the others. Many observers would agree, however, that the most crucial linkage is that between the purpose of assessment and the use of data gathered in the assessment process. Another logical linkage (perhaps less clear) may exist between the level at which assessment is undertaken and the level of administration. Leaving other linkages unspecified as unique to a given situation, we have illustrated the importance of these relationships by arranging the eight parameters as shown in Figure 1.

[Figure 1. Parameters of Outcome Assessment. The figure arranges the policy parameters (use, locus of administration, locus of decisions about change, locus of evaluation) opposite the technical parameters (level of assessment, type of assessment, form of assessment), with purpose linking the two sets.]

One implication of the arrangement in Figure 1 is that the parameters on the left, those having to do with loci of assessment and subsequent evaluations and decisions, are basically policy issues. Those on the right, describing type, form, and level of assessment, are largely technical issues. Both sets strongly depend on the purposes and planned uses of assessment. In the following section, we describe briefly NCRIPTAL's mission. Our purpose in doing so is to show how our efforts assume a specific institutional purpose for assessment activities, namely, improvement in the teaching and learning environment. We expect, therefore, that institutions will also be responsible for using information they gather through use of outcome measures. Consequently, the type, level, and form of outcome measures selected, developed, and used in NCRIPTAL's research agenda will be those most suitable for use by institutions interested in improving teaching and learning.

NCRIPTAL's Research Mission and Model

The National Center for Research to Improve Postsecondary Learning and Teaching will focus its research, development, and dissemination activities on five aspects of college learning environments that affect learner outcomes: classroom learning and teaching strategies, curricular structure and integration, faculty attitudes and teaching behaviors, organizational practices, and the use of emerging information technology. While recognizing multiple student outcomes of college, such as cognitive development, personal development, and career development, the Center initially will emphasize cognitive development of undergraduate students in colleges that concentrate on teaching as their primary mission. This emphasis was chosen because the recent dramatic progress of research in cognition holds great promise for improving learning and teaching. Furthermore, student cognitive development is intimately linked to career development and to other important outcomes such as the development of a sense of self-efficacy, personal responsibility, and motivation.

Students' cognitive and affective characteristics, which vary with their diverse backgrounds, are important conditioners as well as predictors of learning experiences. Since learners of many backgrounds and ages now attend college and since instructors may select an increasing variety of potentially effective strategies, the Center will attempt to discover optimum combinations of learner characteristics and instructional processes to facilitate cognitive development.

To complete this mission, a research framework for NCRIPTAL's work has been developed. Considered simply, this model (see Figure 2) includes three general research variables: student characteristics (independent variables), teaching/learning environments (alterable variables), and student outcomes (dependent variables). Student characteristics are motives, learning styles, prior knowledge, skills, and other characteristics that students bring with them to college. These characteristics interact with institutional environments to determine learning.

The teaching/learning environments are influenced by the faculty, the curriculum, the teaching and learning strategies, the institutional practices, and by the technological environment. Although extracurricular and interpersonal factors also influence the institutional environment, these variables will not be included in NCRIPTAL's research agenda. Additionally, the environmental factors interact and overlap, and these interactions will be recognized in our research agenda.

Student outcomes here are defined as the results of students' involvement in teaching/learning environments. These outcomes may be both long-term (measurable throughout life after completion of college) and short-term (measurable during or immediately following the college experience), but NCRIPTAL will focus on the short-term outcomes as most directly applicable to improving teaching and learning. Although briefly characterized in the usual manner as "independent" variables, student characteristics and goals may also change as a result of the college experience. Thus, a feedback loop is included in the Center model indicating that education is an iterative process whereby the typically independent variables are affected over time by the teaching/learning environments.

[Figure 2. Variables in NCRIPTAL's Research Agenda. The figure links student characteristics (independent variables), teaching/learning environments (alterable variables), and student outcomes (dependent variables), with a feedback loop from the environments back to student characteristics.]


II. Approaches to Outcomes and Outcome Assessment

A number of researchers have attempted to classify outcomes and specify approaches for assessing outcomes. In this section, approaches by Ewell, Astin, Lenning et al., and Bowen are discussed.

Ewell (1983) discusses three approaches that have been used to measure student outcomes: the academic investigation perspective, the student-personnel perspective, and the management perspective. These approaches are based on the purposes of the investigators and thus use different perspectives on outcomes, have different goals for using outcomes, and involve different data requirements.

Academic investigation (research) is the oldest and most commonly used reason for measuring student outcomes. The college experience is investigated in a typical research fashion: theories about student growth are developed, tested, and refined as a result of data collection. From this perspective, most of the research on student outcomes has been done by psychologists and sociologists. Frequently psychologists have focused on the impact of college on personal and cognitive development, while sociologists have concentrated on such issues as the impact of college on social mobility and socialization of students into the professional fields. In this perspective the goal is to explain (and ultimately to predict) human behavior, and the data collected must have high empirical quality and be objective. While some of the relationships discovered in this research have been used by institutional policymakers, it should be noted that decisional utility is not the goal; the purpose is to successfully account for a given outcome.

The student-personnel approach uses student outcomes as a means for evaluating students for admission to programs and placement on completion of the program. The data are also used for counseling students in career selection and for evaluating the effectiveness of programs for meeting student needs. In this perspective the goal of outcome measurement is to gain assessment information about individual students. Data are considered useful if they provide information for student placement or if they are diagnostic of student problems. The theoretical constraints of data collection are not crucial when using this approach.

The management perspective for measuring outcomes is a still different approach to outcome assessment. From this perspective the focus is on the use of outcome assessment as a method to improve administrative decisions, particularly those involving program planning and budgeting. The goal of outcome assessment in this perspective is to improve the quality of resource-allocation decisions. To meet this goal, data must be empirically valid, reliable, and perceived by the decision makers as relevant to the decision.

Ewell's classification of approaches to student outcomes is useful because it calls attention to varied uses of outcomes and the ways in which different goals influence the collection of student outcome information.

In addition to classifying approaches to outcome assessment based on proposed uses, researchers have attempted to classify types of educational outcomes. Astin (1974) developed a taxonomy of student outcomes involving three dimensions: type of outcome, type of data, and time. The types of outcome are split into two domains: cognitive and affective. The cognitive domain includes outcomes such as basic skills, general intelligence, and higher-order cognitive processes. The affective domain includes outcomes often described as attitudes, values, and self-concept.

The data dimension is also split into two domains: behavioral and psychological. This dimension distinguishes between outcome data that are covert and those that are observable. The behavioral domain refers to observable activities of the individual. The psychological domain refers to the internal states or traits of the individual. While the actual outcomes may be the same, the ways in which the information is gathered to represent them are different.

The primary two dimensions of Astin's approach are shown in Table 1. This typology has been widely accepted as a method for classifying outcomes. In Astin's typology the third dimension, time, stresses the importance of including both the long- and short-term outcomes of college. Some examples of applying the time dimension to the outcome cells are provided in Table 2.

In addition to the typology, Astin (1974) provided some insights into the assessment of educational outcomes. To him the fundamental purpose of assessment is to produce information that is useful for decision making. Thus measurement should begin with a value statement, an idea about what future state would be desirable or important.

TABLE 1

A Typology of Student Outcomes

                 OUTCOME

DATA             Affective                      Cognitive

Psychological    Self-concept                   Knowledge
                 Values                         Critical thinking ability
                 Attitudes                      Basic skills
                 Beliefs                        Special aptitudes
                 Drive for achievement          Academic achievement
                 Satisfaction with college

Behavioral       Personal habits                Career development
                 Avocations                     Level of educational attainment
                 Mental health                  Vocational achievements
                 Citizenship                    Level of responsibility
                 Interpersonal relations        Income
                                                Awards or special recognition

Source: Alexander W. Astin, R. J. Panos, and J. A. Creager, National Norms for Entering College Freshmen - Fall 1966 (Washington, D.C.: American Council on Education, 1967), p. 16.

TABLE 2

Outcomes Over Time

OUTCOME     DATA            SHORT-TERM INDICATOR             LONG-TERM INDICATOR

Affective   Behavioral      Choice of major field of study   Current occupation
Affective   Psychological   Satisfaction with college        Job satisfaction
Cognitive   Behavioral      Persistence                      Job stability
Cognitive   Psychological   LSAT score                       Score on law boards

Source: Astin, 1974, p. 33.

Lenning and Associates (1983) at the National Center for Higher Education Management Systems (NCHEMS) developed an extensive framework for identifying the universe of "outputs" and outcomes of postsecondary institutions. In developing this taxonomy, the authors sought to develop an exhaustive list of outcomes to assist in the assessment of managerial effectiveness. As a result of the management perspective, Lenning et al. did not focus exclusively on student outcomes but rather included them in two of the several categories: human characteristics outcomes and knowledge, technology, and art forms outcomes. Viewed in Astin's terms, the human characteristics outcomes include primarily affective and personality characteristics, as well as skill outcomes. The knowledge, technology, and art forms category includes the typically cognitive outcomes: both specialized and general knowledge and scholarship. Additional outcome categories in this framework include (1) economic (e.g., economic security, standard of living), (2) resource and service provision (e.g., teaching, facility provisions), and (3) other maintenance and change (e.g., traditions, organizational operation). A listing of the complete NCHEMS taxonomy is included in Appendix A. Clearly this framework includes both long- and short-range student outcomes as well as outcomes at the program and institutional level.

Bowen (1974) took a slightly different approach from the two previous researchers when discussing outcomes. Instruction is related to the outcomes of learning and changes in human traits. Research and scholarship relate to the outcomes of preservation, discovery, and interpretation of knowledge, artistic and social criticism, philosophical reflection, and advancement of the fine arts. Public service results in societal outcomes such as improved health, solutions to social problems, and agricultural productivity (pp. 2-3).

Of these three services, Bowen believes that instruction is the primary goal of higher education and that bringing about desired changes in students is central to this mission. Bowen's approach could be viewed, therefore, as primarily academic in nature. He focused on investigating the changes that occur among students without emphasizing the use of these measures in either placement or decision making.

In a later work, Bowen (1977) broadened his view of student learning and offered a more elaborate catalogue of accepted goals. This catalogue of goals serves also as a typology of student outcomes derived from three widely accepted goals of instruction. These three general goals are: educating the whole person, addressing the individuality of students, and maintaining accessibility. The first goal, educating the whole person, refers to the idea that education should cultivate both the intellectual and affective dispositions of persons, thereby enhancing intellectual, moral, and emotional growth. The second goal, addressing individuality, requires that the uniqueness of individuals be taken into account in the educational process. Accessibility refers to the notion that education should be readily available to a broad range of persons.

According to Bowen, the catalogue of goals derived from these general goals constitutes both a model for the educational system and the criteria by which the system can be judged. While Bowen recognized that his goal typology has utopian qualities, he posits that it provides a useful model that can be used to shape and guide institutional functioning.

In Bowen's scheme specific educational goals are divided into two groups: goals for individual students and goals for society. The five categories of goals for individual students include: cognitive learning, emotional and moral development, practical competence, direct satisfactions from college education, and avoidance of negative outcomes. In a further subdivision, cognitive learning includes ten specific areas of learning. They are:

1. Verbal skills: Ability to read, speak, and write clearly and correctly.

2. Quantitative skills: Understanding of mathematical and statistical concepts.

3. Substantive knowledge: Acquaintance with Western culture and traditions and familiarity with other cultures. Knowledge of contemporary philosophy, art, literature, natural science, and social issues. Understanding of facts, principles, and vocabulary within at least one selected field.

4. Rationality: Ability to think logically and analytically, and to see facts clearly and objectively.

5. Intellectual tolerance: Openness to new ideas, curiosity, and ability to deal with ambiguity and complexity.

6. Esthetic sensibility: Knowledge of and interest in literature, the arts, and natural beauty.

7. Creativeness: Ability to think imaginatively and originally.

8. Intellectual integrity: Respect for and understanding of the contingent nature of truth.

9. Wisdom: Balanced perspective, judgment, and prudence.

10. Lifelong learning: Sustained interest in learning. (Bowen, 1977, pp. 35-36)

Bowen's remaining four categories of student goals are focused primarily on affective and long-term student outcomes.

Bowen also suggested seven principles that should be used in the identification of outcomes and thus in outcome assessment at particular colleges. The first principle is that inputs should not be confused with outputs. Bowen claims that high institutional expenditures (an input) do not guarantee equivalently high outcomes; the difference between inputs and outputs has too often been ignored. The only valid outcome measurement is of the development and changes that occur in students as a result of their college experience.

The second principle suggests that assessment should be linked to all educational goals, not just to those developments easily measured or related to economic success. Bowen offers his catalogue of goals as a starting point on which to build an assessment plan.

The third principle states simply that educational outcomes should relate to the person as a whole, and the fourth principle posits that outcome assessment should include the study of alumni as well as current students. The fifth principle suggests that outcome assessment should measure changes that occur as a result of the college experience.

The sixth principle states that an evaluation scheme must be practical: not too time-consuming or expensive. The assessment should focus on major goals of the institution and need not be based on the entire population of students. However, results must be reported in a form that the general public can read and understand.

The final principle asserts that assessment should be controlled from within the institution rather than being imposed by external agencies. Assessment programs should be designed for each institution, keeping the special missions and philosophies of the institutions in mind.

Ewell (1983) mentions additional outcome dimensions that should be considered. These include whether (1) the effects are short- or long-term, (2) the student is aware or unaware of the outcome, (3) the effect is direct or indirect (i.e., how closely the outcome is connected to the educational program), and (4) the outcome is intended or unintended. These dimensions represent important differences between outcomes that should be considered in outcome research and assessment.

In more recent work than that reviewed earlier, Astin (1979) identifies three core measures of student outcomes that should be included in a student outcome data base. First, students' successful completion of a program of study should be included. More specifically, information is needed to determine whether students' accomplishments are consistent with their original goals. Second, a measure of cognitive development must be included, and more than grade point average and class standing are needed. Preferably, repeated measurement will be used so that change can be assessed by comparing performance at two points in time. Third, measures of student satisfaction should include satisfaction with the quality of the curriculum, teaching, student services, facilities, and other aspects of the college.

Beyond these essential measures, the student data should include information gathered on entry, during the educational process, and at exit or another designated point of time. Student characteristics should be recorded when students first enroll, information on what happens to the student while enrolled at the college must be available, and measures of the degree of attainment of desired behavioral objectives at exit must also be accessible. This approach, developed by Astin, is known as the "value-added" approach. It asserts that outcome measures alone tell us very little about institutional effectiveness or impact. By controlling for entry characteristics, however, a more accurate picture of outcomes will emerge. In the absence of such data, outcome measures may be grossly misinterpreted when used for assessing institutional effectiveness because most outcomes are highly dependent on the characteristics of students at entry.
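The value-added logic described above can be sketched numerically. The sketch below is our own illustration, not an instrument from the literature reviewed here: the entry and exit scores are hypothetical, and the residual from a simple least-squares fit stands in for "change beyond what entry characteristics predict."

```python
def value_added_residuals(entry, exit_scores):
    """Residual gains from an ordinary least-squares fit of exit on entry.

    A positive residual means a student's exit score exceeds what entry
    score alone would predict; this is a simplified stand-in for the
    "value-added" idea of controlling for entry characteristics.
    """
    n = len(entry)
    mean_x = sum(entry) / n
    mean_y = sum(exit_scores) / n
    # Slope and intercept of the least-squares line of exit on entry.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(entry, exit_scores))
    sxx = sum((x - mean_x) ** 2 for x in entry)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # Residual = observed exit score minus the score predicted from entry.
    return [y - (intercept + slope * x) for x, y in zip(entry, exit_scores)]

entry_scores = [450, 500, 550, 600, 650]  # hypothetical entry test scores
exit_scores = [480, 540, 570, 660, 680]   # the same students at exit
residuals = value_added_residuals(entry_scores, exit_scores)
```

Exit scores alone would rank these students identically with or without adjustment; the residuals, by contrast, separate growth from starting position, which is the point of Astin's argument.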


NCRIPTAL's Delimited Outcome Framework

As discussed earlier, NCRIPTAL's mission includes both conducting basic research on the effects of various aspects of the teaching and learning environment on student outcomes and providing leadership and assistance to institutions in their own assessment and evaluation efforts. Thus, in the terms of Ewell's "perspectives," we must engage in a dual approach, combining the academic-investigative spirit of basic research and a management perspective that can help institutions construct their own assessment processes and uses of the information.

Fulfilling this dual mission with available resources requires delimitation of the arena in which our work will be conducted and a selection of outcome measures and assessment principles that seem most closely related to practical concerns in improving teaching and learning. Existing typologies, such as that proposed by Astin (see Tables 1 and 2), the list of principles by Bowen, and the important distinctions mentioned by Ewell, as well as the work of many other scholars, have been helpful in formulating our plans. In Table 3 we have summarized some of these propositions, attempting to group them as accurately as possible under the "technical parameter" headings discussed earlier, namely, "type of outcome to be measured," "level of measurement," and "form of measurement." This grouping forms the basis for our discussion of outcome measures to be used in NCRIPTAL's work. It bears repeating that only these three parameters of type, level, and form are discussed because we have already focused our work on a specific purpose (improvement of teaching and learning) and assume that results will be used for decisions consonant with that purpose. Furthermore, our efforts are based on the assumption that the administrative locus of assessment activities and evaluative decisions about this information all rest within the college or university.

Type of Outcome Measures

However desirable it might be for researchers and institutions to follow Bowen's suggestion to assess all possible outcomes and relate outcomes to the development of the whole person, such a global program would readily encounter problems of feasibility and lack of consensus. Nonetheless, our discussion of outcome measures begins with the whole-person approach in an effort to determine which subsets of this universe are of greatest importance.

TABLE 3

Propositions and Caveats about Type, Level, and Form of Outcome Measurement

BOWEN
  Assess all outcomes, even those difficult to measure
  Relate outcomes to whole person
  Focus on changes attributable to college
  Focus on major institutional goals
  Study alumni as well as current students
  Separate inputs and outputs
  Use practical and feasible means

EWELL
  Distinguish intended and unintended outcomes
  Distinguish outcomes of which student is aware and unaware
  Distinguish outcomes closely linked to educational program
  Distinguish short- and long-term outcome measures

ASTIN
  Record whether students completed program and whether accomplishments were consistent with their goals
  Measure at various points in time; include information at entry, during program, and on exit
  Use measures of cognitive development beyond grade point average
  Include measures of student satisfaction

(In the original table, these propositions are arranged by type, level, and form of outcome measures.)

During such discussions we found many benefits, but some pitfalls, in Astin's encompassing four-fold typology of student outcomes (see Table 1). Specifically, although Astin acknowledged interactions between affective and cognitive outcomes, his typology used these concepts as two different primary dimensions. Consequently, the typology made little provision for attention to cognitive-personal outcomes or affective-academic outcomes. Yet many cognitive psychologists and personality theorists believe that, particularly for students who enter college with undeveloped motivation or low self-efficacy, affective outcomes may be related to academic as well as to personal and social growth. As a result of these and related discussions, we drew a slightly different type of typology framework which notes three "arenas" of student growth in college and three forms through which changes in these arenas may be observed. The resulting nine-cell framework, which we stress was derived a priori from our accumulated experience, is shown in Table 4.

The arena dimension refers to the various aspects of life in which the outcome is important.

The three arenas are personal, social, and academic. The personal domain includes outcomes like personal worth, feelings about oneself, satisfaction with personal accomplishments, ability to make decisions, and using one's skills appropriately. The social arena outcomes include ability to function in interpersonal relationships, citizenship, social responsibility, social awareness, and contributions to society. The academic arena includes academic achievement, self-efficacy, motivation, critical-thinking abilities, problem-solving skills, and goal exploration behaviors.

TABLE 4

A Whole-Person Approach to College Student Outcomes

FORM OF              ARENAS OF GROWTH AND DEVELOPMENT
DEMONSTRATED
CHANGE               Social        Personal        Academic

Cognitive
Motivational
Behavioral

The form dimension also has three categories: cognitive, motivational, and behavioral. This dimension specifies the form in which the outcome is demonstrated. Cognitive outcomes are internal outcomes. Typically they occur within individuals' mental processes and their existence is inferred, usually through testing. Motivational outcomes consist largely of the feelings that individuals have about themselves, their capabilities, and the world around them. These outcomes are generally self-reported, though some social-psychological methods exist that tap these attitudes more discretely. Behavioral outcomes may be reported by the individual or directly observed.

As mentioned earlier, NCRIPTAL's research program will focus on the academic arena shown in Table 4. In selecting this subset of the universe of college outcome measures for attention, we risk posing for others the same difficulty that Astin's typology posed for us. We acknowledge that the personal and social arenas cannot be separated from the academic arena; one's personal and social development affects one's academic development and the reverse is also true. Nonetheless, by constructing a framework that includes three cells, academic-cognitive, academic-motivational, and academic-behavioral, we are able to encompass a broad set of outcomes of primary concern to colleges and the public as well as to incorporate recent theories of cognitive development. Table 5 shows a more detailed view of the academic arena and the types of outcomes that seem to fit into each of the three major cells.

At first glance, some observers will believe we have violated Bowen's principle of separating inputs and outputs by classifying as outcomes some of those items listed in the academic-motivational cell. Traditionally, motivation, self-efficacy, involvement, and effort have been viewed as fixed attributes students bring to the educational process. Our view that these characteristics are subject to change (in an intended or unintended direction) as a result of the educational process is, in part, what caused us to modify previously existing outcome typologies. Although little attention has been given to these ideas, most colleges would agree, for example, that improved motivation is an outcome to be sought. While the original motivation a student brings to college is an input, a new motivational level based on educational experiences becomes an outcome the student takes to the next stage of learning.

TABLE 5

NCRIPTAL's Outcome Framework

FORM OF MEASUREMENT   ACADEMIC ARENA

Cognitive             Achievement (facts, principles, ideas, skills)
                      Critical-thinking skills
                      Problem-solving skills

Motivational          Satisfaction with college
                      Involvement/effort
                      Motivation
                      Self-efficacy

Behavioral            Career and life goal exploration
                      Exploration of diversity
                      Persistence
                      Relationships with faculty

An additional previously neglected aspect of the iterative outcomes conception relates to Ewell's distinction between student awareness or lack of awareness of changes. Although we have not included it in the list at this time, if students are to take increased responsibility for their learning, awareness itself may be an outcome to be sought.
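As a purely organizational aid, the academic-arena framework can be represented as a small lookup structure. The sketch below is ours, not part of NCRIPTAL's materials; the outcome labels follow the framework's three cells, and the function name is hypothetical.

```python
# The academic arena as a mapping from form of measurement to example
# outcomes (labels follow NCRIPTAL's framework; the structure is ours).
ACADEMIC_OUTCOMES = {
    "cognitive": [
        "achievement",
        "critical-thinking skills",
        "problem-solving skills",
    ],
    "motivational": [
        "satisfaction with college",
        "involvement/effort",
        "motivation",
        "self-efficacy",
    ],
    "behavioral": [
        "career and life goal exploration",
        "exploration of diversity",
        "persistence",
        "relationships with faculty",
    ],
}

def form_of(outcome):
    """Return the form-of-measurement cell an outcome falls into, if any."""
    for form, outcomes in ACADEMIC_OUTCOMES.items():
        if outcome in outcomes:
            return form
    return None
```

For example, `form_of("persistence")` returns `"behavioral"`, mirroring that outcome's cell placement, while an outcome outside the academic arena (say, job satisfaction) returns `None`.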

Level of Outcome Measures

As already mentioned, both practicality and technical difficulties have caused us to set aside Bowen's suggestion that alumni be studied in addition to current students. Instead, NCRIPTAL's agenda will focus on outcome measures that can be related directly to classroom and program educational experiences. In general, our unit of analysis will be the individual student and groups of students sharing a common educational experience in a course or program. Whenever possible, outcome measures for special populations of students (e.g., minorities, women, adult students) will be examined in relation to similar data for traditional students.

Astin's point about whether students' eventual accomplishments are consistent with their goals will be a special focus of one of our research programs. In fact, goals of students at college entry are subject to change in both intended and unintended directions. Since there would likely be disagreement about what constitutes positive change, we have included an academic-behavioral outcome called "career and life goal development." The implication is that the student should gain in ability to explore, consider, and make decisions about eventual goals.

Form of Outcome Measures

For many institutions, there may be an inherent conflict between observing Bowen's caveat about feasibility of measurement and adopting Astin's value-added approach, which statistically controls for student entry characteristics when observing changes in student outcomes over time. This is particularly true if measures of cognitive development, such as reasoning skills and critical thinking, are used to supplement more traditional measures of academic achievement. In developing new measures and in assisting institutions with the use of already developed measures, NCRIPTAL will attempt to help simplify the appropriate use of outcome measures.

The next section of this paper describes some of the academic measures already in use by colleges and alerts the reader to some new measures that NCRIPTAL staff hope to make available for future use.


III: Outcome Measures and Outcome Research

As discussed in the previous sections, outcome measurement recently has received increased emphasis at the college level. Numerous measures are available to assess learning in college. It is difficult for educators to choose among the widely diverse types of measures. In this section, some of the available measures will be reviewed. Reliability and validity information is included when available, and scholarly research that has been conducted using the measures is reported. The purpose is to describe the utility of each instrument for measuring improved learning and teaching by examining the measure's properties and the results it has produced as a college outcome measure. New measures and new reports on their uses are appearing daily. This review should be considered background for future updates.

The measures will be divided into three sections consistent with the cells in NCRIPTAL's typology: academic-cognitive outcomes, academic-motivational outcomes, and academic-behavioral outcomes.

Academic-Cognitive Outcomes

The following available measures are reviewed here:

Graduate Record Examination
American College Testing Program Achievement Tests
Undergraduate Assessment Program of the College Entrance Examination Board
The American College Testing Program College Outcome Measures Program (COMP)
College Level Examination Program
National Teacher Examination
Measures of critical thinking
Measures of basic skills

Graduate Record Exam

The Graduate Record Exam, produced by the Educational Testing Service, was initially introduced as a general achievement test to measure knowledge in three general categories: social studies, natural sciences, and humanities. These tests were different from the typical achievement tests of that era because the items were meant to evaluate students' ability to read, understand, and interpret knowledge rather than to test simply their possession of knowledge. Though the items were somewhat content-embedded, the initial area tests of the GREs were developed to test ability to generalize from information that was given (Pace, 1979).

The general test of the GREs is a test of developed verbal, quantitative, and analytical abilities that have been acquired by students over time. The GRE general test is offered to college seniors and graduates and is used by some graduate schools for admission decisions, fellowship awards, and prediction of an applicant's success in graduate school.

Educational Testing Service also produces twenty GRE advanced subject tests that measure knowledge specific to certain fields. These tests are intended for college seniors and are fairly comprehensive. These scores also are used as admission criteria in some graduate schools.

The K-R 20 reliability coefficients for the verbal and quantitative sections exceed .90, and the coefficient for the analytical section is .86 (Cohn, 1985). Though these values are highly respectable, Jaeger (1985) warns that an internal consistency measure such as the K-R 20 may actually overestimate the reliability of the general sections.
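For readers unfamiliar with the K-R 20 statistic, it is an internal-consistency coefficient computed from dichotomous (right/wrong) item data. A minimal sketch of the computation follows; the response matrix is entirely hypothetical and chosen only to illustrate the boundary case.

```python
import numpy as np

def kr20(item_scores):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item responses.

    item_scores: persons x items array of 0/1 values.
    """
    k = item_scores.shape[1]                   # number of items
    p = item_scores.mean(axis=0)               # proportion answering each item correctly
    q = 1.0 - p                                # proportion answering incorrectly
    total_var = item_scores.sum(axis=1).var()  # variance of examinees' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical data: perfectly consistent examinees (all-correct or all-wrong)
# yield the maximum K-R 20 of 1.0; real item data falls below this bound.
responses = np.array([[1, 1, 1, 1]] * 3 + [[0, 0, 0, 0]] * 3)
print(kr20(responses))  # → 1.0
```

Because the formula's error term is built from item-level covariation on a single occasion, it can exceed the reliability estimated from parallel forms, which is the basis of Jaeger's caution above.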

The validity of the GRE general tests is somewhat questionable since there is little evidence to support the predictive power of the tests for graduate school achievement. The validity coefficients for predicting first-year grade-point averages for the three sections of the general tests are around .20 to .30 (Cohn, 1985). It should be noted, however, that though these correlations are only moderate, the sample of students is limited to those who have been accepted into a graduate program. Thus both the range of scores and the number of students included in the sample are small.

In combination with undergraduate grade point average, the predictive validities of the GRE general test for graduate school success range from .32 to .56 (Jaeger, 1985). Because ETS encourages the use of GRE scores in conjunction with other admission criteria (e.g., G.P.A., letters of recommendation), this combined validity justifies its use.

Numerous researchers have used the GREs as a measure of differences in teaching and learning at both the institutional and individual levels. Many of these studies have been reviewed by Pascarella (1985), and we have drawn freely from that review.

At the individual level, Nichols (1964) attempted to assess the effects of different colleges on the GRE verbal and quantitative scores of students. Nichols examined structural and organizational characteristics of colleges (private vs. public, faculty-student ratio, enrollment, and library books per student) and the environmental characteristics (using Astin's 1963 Environmental Assessment Technique). The college structural characteristics were not significantly correlated with the GRE scores, but three aspects of the EAT were significantly correlated with the verbal and quantitative scores. The amount of variance accounted for by these EAT variables, however, was small in comparison to the amount of variance accounted for by the students' entry characteristics.

Astin (1968) examined variation on the humanities, natural science, and social science GRE tests as a function of traditional indices of institutional quality. These indices included intelligence of student body, financial resources, library size, and student-faculty ratio. Astin found, however, that all of the partial correlations indicating a relationship between institutional characteristics and GRE scores became trivial after accounting for numerous student entry characteristics (e.g., aspirations, high school achievement, family background). Apparently, student characteristics are more predictive of GRE area scores than institutional characteristics. This finding indicates that changes in learning may not be attributed to institutional characteristics but perhaps must be examined at a lower programmatic level.

In further analyses of the same data, Astin and Panos (1969) found that institutional characteristics other than traditional quality indices explained some GRE variance. These characteristics included institutions where students made frequent use of automobiles and where students were undecided about their careers. Also, GRE scores were higher at institutions where there was a generally flexible curriculum, where there was a technical emphasis, and where there was a large enrollment. These partial correlations, however, were also quite small, indicating that pre-enrollment characteristics may be more meaningful than institutional characteristics for determining college outcomes.

At the institutional level, Rock, Centra, and Linn (1970) and Centra and Rock (1971) attempted to explain the relationship between college characteristics and student learning. Their dependent measure was residual scores on the three area tests of the GREs: the humanities, social sciences, and natural sciences. To obtain these residuals, the authors regressed the average institutional GRE score on the average SAT score, which yielded predicted GRE scores for each institution. The predicted scores were then subtracted from the actual scores, which produced the residual score. The proportion of students majoring in the various areas was also taken into account.
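The regress-and-subtract procedure just described can be sketched numerically. The institutional means below are hypothetical stand-ins, not the 1970 study's actual data; the point is only the mechanics of forming a residual achievement score.

```python
import numpy as np

# Hypothetical institutional means (illustrative values only)
mean_sat = np.array([480.0, 520.0, 560.0, 600.0, 640.0])
mean_gre = np.array([495.0, 510.0, 575.0, 590.0, 650.0])

# Regress average institutional GRE score on average SAT score
slope, intercept = np.polyfit(mean_sat, mean_gre, 1)
predicted_gre = intercept + slope * mean_sat

# Residual achievement: actual minus predicted. A positive residual marks
# an institution whose students score above expectation given their input.
residuals = mean_gre - predicted_gre
print(np.round(residuals, 1))
```

By construction the residuals of an ordinary least squares fit sum to zero across institutions, so a high residual for one college necessarily implies below-expectation scores elsewhere in the sample.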

Rock, Centra, and Linn (1970) examined the influence of institutional characteristics typically associated with quality on GRE residual scores. Only two of the factors, the income a college receives per student and the proportion of faculty with a doctorate, were consistently related to colleges with high residual achievement.

Centra and Rock (1971) focused on the differences between environmental characteristics of colleges and their potential influence on learning. The five factors they used, which were derived from the Questionnaire on Student and College Characteristics, were faculty-student interaction, curriculum flexibility, cultural facilities, student activism, and degree of academic challenge. Centra and Rock found a positive relationship between achievement and faculty-student interaction, curricular flexibility, and availability of cultural activities.

In his review of large-scale unpublished surveys, Pace (1979) discusses two studies that examine academic achievement during college using the GREs as the outcome measure. An ETS compilation of 3,035 scores of seniors from various colleges showed that students who majored in one of three subareas (social science, natural science, and humanities) scored higher on that section of the GRE area tests than students who majored in another area. Pace states that the results simply attest to the fact that "students know most what they study most" (p. 25).

The second study Pace reviewed involved the advanced tests of the GREs. Harvey and Lannholm (1960) tested 300 upperclassmen at 29 institutions both before the students had taken any upper-division courses and again at the end of their senior year. Students who had majored in psychology, economics, or chemistry were included in the sample. The differences in scores were typically close to a standard deviation higher after having taken the upper-level courses. This evidence supports the contention that students learn from studying in a specific field.

These studies seem to reveal that students learn in college, and the more they study in a certain field, the more they learn in that field. The GRE tests appear to be a reasonable measure for examining learning at the college level. While few studies have conducted pre- and post-tests of college students' learning using the GREs, this option appears to have potential as a useful measure of differential teaching and learning.


American College Testing Program

The ACT tests are designed to measure educational development in the areas of mathematics usage, English usage, social studies reading, and natural sciences reading. The ACT Assessment Program also includes an Interest Inventory and a Student Profile. The program was developed in 1954 as a college admissions test and as a tool for guidance and counseling of freshmen in college. The program originally grew out of the Iowa Tests of Educational Development but has since become independent.

Five educational development scores are reported after the ACT tests have been taken. Four individual scores are reported for the four subsections, and a composite score representing an average of the four sections is also included.

The content validity of the tests is acceptable and reasonable according to two reviewers (Aiken, 1985; Kifer, 1985). Predictive validity for the ACT tests is also quite high. The validity ranges from .4 to .5 with college freshman grade-point average (Aiken, 1985). However, when high school grade point average is already included in the regression equation, the inclusion of ACT scores improves the predictive validity by only .10. Kifer (1985) questions the value of the effort invested to obtain this limited increase in predictive power.

Another issue mentioned by numerous reviewers is the amount of overlap within the four sections of the ACT (Hill, 1978; Kifer, 1985; Aiken, 1985). There is agreement among these critics that too much emphasis is placed on reading in the various sections of the test. The intercorrelations of the four sections range from .53 to .68, indicating that similar abilities are being tested.

The reported reliabilities for the various subsections are as follows: English usage, .92; mathematics usage, .91; social studies reading, .88; and natural sciences reading, .88 (Aiken, 1985). These reliabilities have improved over the years and are adequate at these levels for individual decisions based on test scores.

The ACT test scores have been used by researchers to examine learning and cognitive development at the college level. Leming, Munday, and Maxey (1969) examined cognitive growth in the first two years of college in five institutions using tests of the American College Testing Program. Samples of students were chosen from two state colleges, one liberal arts college, a junior college, and a state university. ACT tests were administered at the beginning of the freshman year and again at the end of the sophomore year. Differences between pre- and post-test scores were significant on all the composite scores for all groups except the female sample at one institution. Students made the greatest gains in social studies and natural sciences and somewhat lesser gains in English and mathematics.

Dumont and Troelstrup (1981) also used the ACT tests as a measure of cognitive gains in college. They pre-tested students at one institution at the beginning of their freshman year and again four years later. Students made significant gains in all areas, showing even more of an increase in ACT subscores than the sophomores in Leming, Munday, and Maxey's sample.

The use of pre- and post-tests for students highlights the actual learning that occurs in college. The ACT tests seem capable of tapping the cognitive development of students in general education areas.

Undergraduate Assessment Program: Area Tests and Field Tests

The Undergraduate Assessment Program (UAP) is closely related to the GRE. The UAP area tests are fundamentally the same as the GRE area tests and are similar to the GRE advanced tests; however, the Business field test is the only field test still available (Pace, 1979). The area tests are divided into three sections: humanities, social sciences, and natural sciences.

As part of the standardization of the UAP test, 47,000 seniors from 211 colleges were given the area tests. Some of the colleges also administered the tests to other classes. Despite the problems of comparing cross-sectional data, especially when the population of colleges was so heterogeneous and the sample sizes so different, the results showed that, within the three major domains, seniors and juniors scored higher than sophomores and freshmen (Pace, 1979).

Further results from ETS (1976) indicate that when the UAP test is in the student's area of interest (i.e., humanities, social sciences, or natural sciences), the scores are substantially higher than the scores of the total group of students from that institution (including the "interest" group in the total). Seniors within their area of interest scored higher than sophomores having the same academic interest (ETS, 1976).

ETS revised the area tests in 1978 and published a new guide (ETS, 1978). This new guide reports results similar to the previous one. Seniors who majored in humanities, social science, or natural science scored considerably higher than sophomores and freshmen. The mean scores on the three area tests increase with each year of college as well (remember that this is a cross-sectional data set). Thus, the evidence reported by ETS leads one to believe that the more one concentrates on a particular field, the more one learns in that field. Students are learning in college, and the UAP tests seem able to capture some aspects of students' cognitive development.

ACT College Outcome Measures Project

The American College Testing Program has developed a unique achievement test called the College Outcome Measures Program. This project represents a new direction in achievement testing (Pace, 1979). Rather than testing general knowledge and specific content from an academic discipline, the COMP is an effort to measure students' ability to apply facts, concepts, and skills to real-world activities. Specifically, the content of the Measurement Battery involves three areas related to adult functioning: functioning within social institutions, using science and technology, and using the arts. Three types of competencies are measured within these three areas: communicating, solving social problems, and clarifying social values.

The COMP is unusual in its format and in the materials used for testing. Unlike the typical multiple-choice, paper-and-pencil format, the COMP materials include film excerpts, taped newscasts, art prints, magazine and newspaper articles, and other realistic materials that might be encountered in life. An actual item on the COMP might require a recorded or a written response: the student may have to write a persuasive memo and justify a decision in a speech. The idea is to make the test as realistic and as relevant to real life as possible.

The COMP, therefore, takes considerable time to administer as well as to evaluate and score. Taking the test requires approximately six hours, and rating the responses takes about one hour. Standardized rating scales have been developed to aid judges in their evaluation process (Forrest & Steele, 1978).

In addition to the six-hour battery, the overall COMP includes two additional tests: an Activity Inventory and an Objective Test. The Objective Test is an effort to maintain the advantages of the Measurement Battery while decreasing the time factor. This test uses the same stimuli as the Measurement Battery, but the student is given four options from which to choose, two of which are considered good answers. This test can be machine-scored, and test-taking time is reduced to two and one-half hours.

The Activity Inventory measures the amount of experience that an individual has had in the six areas that are covered in the Measurement Battery. The score is meant to supplement either the Objective Test or the Measurement Battery with an experience factor.


College Level Examination Program


The College Level Examination Program (CLEP) is a program originally developed for giving college credit to students by examination. CLEP now offers two types of examinations: the General Examinations and the Subject Examinations. The examinations were developed to assess the knowledge of students who have acquired knowledge outside the classroom. The General Exams measure college-level achievement in English composition, humanities, mathematics, natural sciences, and social sciences and history. The test is for general education requirements and covers material typically studied in the first two years of college. The Subject Examinations are more advanced and require specific knowledge in a particular field. Only the General Exams will be discussed here.

Reliability coefficients for the General Exams are quite high, on the order of .90 and above (Aleamoni, 1985). Validity of the exams, however, is somewhat in question. The primary validity information comes from data used for norming the tests. These data showed that the more courses taken in an area (i.e., humanities, history and social sciences, science, and math), the higher the score on the test (Pace, 1979). These validity tests were based on a national sample of college sophomores (N = 2600). The manual from which information was made available, however, did not include the correlations between number of courses taken and test scores. The fact that a relationship exists is fairly weak evidence for validity. Aleamoni (1985) states that many colleges have had to develop their own validation studies in order to make a case for the appropriateness of the exams.

National Teacher Examination

The National Teacher Examination, initially produced by the Educational Testing Service, was a test designed for college seniors and teachers and provided a standardized measure of academic preparation in three areas: general education, professional education (for teachers), and subject field preparation. The purpose of the test was threefold: it could be used to assist colleges in reviewing their programs and policies; state departments could use the scores for teacher certification purposes as well as for attaining profiles of prospective teachers' knowledge and skills; and school administrators could use the scores as a standardized measure for evaluating the competencies of prospective teachers. ETS, however, warned against the use of these exams as a sole determinant of graduation, certification, and selection decisions. Colleges are warned in the exam booklet against using absolute cut-off scores for any type of decision (Merwin, 1978).

As described above, the NTE actually is two exams: the Common Examination and the Area Examinations. Only the Common Examination will be discussed here. The Common Examination covers general education and professional education: the professional education section consists of 110 items; the general education sections consist of 45 items on Written English Expression, 65 items on Social Studies, Literature, and the Fine Arts, and 50 items on Science and Mathematics. These sections are generally seen as including knowledge a well-rounded educator should possess.

Because of the increasing concern with teacher certification and the emerging needs of institutions, ETS expanded the National Teacher Examinations just described into the National Teacher Examination Program. This new program is composed of three sections: the Pre-Professional Skills Tests (PPST), the NTE Core Battery, and the NTE Specialty Area Tests.

The PPST is an assessment of basic skills initially developed for use in colleges to determine whether a student had the basic skills necessary to enter a teacher training program. There has been an increase, however, in the use of the PPST as an initial teacher certification test. The PPST measures skills required for the beginning teacher. The battery consists of three tests: Communications Skills, General Knowledge, and Professional Knowledge. The Communications Skills test covers listening, reading, and writing abilities. The General Knowledge test covers mathematics, science, social studies, literature, and fine arts. The Professional Knowledge test covers knowledge and skills needed for developing instructional plans and their implementation as well as the professional behavior required of teachers. The NTE Specialty Area tests are content-specific tests available in 28 areas.

Recent research by Ayres and Bennett (1983) and Ayres (1983) used the old NTE as an outcome variable to assess differences in learning across institutions and individuals. The authors judged the NTE to be a reasonable measure of learning usually expected during general undergraduate study. Using the institution as a unit of analysis (N = 15), Ayres and Bennett (1983) explained 88% of the variance in NTE scores by including in the regression equation average institutional SAT score, average number of courses taken in general education, average faculty salary, average educational attainment of faculty, institutional age, library size, and institutional size. The average educational attainment of faculty members accounted for the largest percentage of the variance.
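The "variance explained" figure in studies of this kind is the R-squared of an ordinary least squares regression. A minimal sketch of that computation at the institution level follows; the fifteen institutions and three predictors here are hypothetical stand-ins, not the study's actual variables or data.

```python
import numpy as np

# Hypothetical data for 15 institutions: stand-ins for predictors such as
# average SAT score, general-education coursework, and faculty attainment.
rng = np.random.default_rng(42)
X = rng.normal(size=(15, 3))
nte_mean = X @ np.array([0.8, 0.4, 0.2]) + rng.normal(scale=0.3, size=15)

# Ordinary least squares with an intercept term
A = np.column_stack([np.ones(len(nte_mean)), X])
coef, *_ = np.linalg.lstsq(A, nte_mean, rcond=None)
fitted = A @ coef

# R^2: proportion of variance in mean NTE scores explained by the predictors
ss_res = ((nte_mean - fitted) ** 2).sum()
ss_tot = ((nte_mean - nte_mean.mean()) ** 2).sum()
print(round(1 - ss_res / ss_tot, 2))
```

With only 15 cases and seven predictors, as in the study above, R-squared is prone to capitalizing on chance, which is one reason such institution-level findings warrant cautious interpretation.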

Using the same data, but examining it with the student as the unit of analysis (N = 2,229), Ayres (1983) investigated the effects of the racial composition of the institution on NTE scores. Ayres found that when controlling for aptitude (SAT score), black students in a primarily white institution performed better than black students in a predominantly black institution.

These successful attempts at explaining variation on the NTE provide evidence for its usefulness as a measure of some aspects of college achievement. The evidence presented by ETS (in Merwin, 1978) suggests, on the other hand, that the predictive validity of the NTE limits its usefulness as a selection or certification tool in education. As an academic outcome measure of the effectiveness of teaching and learning, however, it may be useful and appropriate.

Critical Thinking and Higher Level Outcome Measures

A number of tests have been developed to measure the higher level cognitive processes of college students. These cognitive processes include critical thinking, complex reasoning and judgment, abstract thinking, and flexibility of thought. In this section, some research findings are briefly reviewed, and available information on the tests used is reported. The measures used in this research, however, are often not standardized and are developed by researchers for their particular interests. Neither can we claim to have exhausted the extensive and growing literature in this area. Within the NCRIPTAL work, McKeachie and colleagues present a more complete review of such measures.

Evidence supports the notion that critical thinking abilities improve over the college years. Lehmann (1963) used the American Council on Education's Test of Critical Thinking in a longitudinal study of students at Michigan State University. The ACE test taps five dimensions of critical thinking: (1) defining a problem, (2) selecting information relevant to the problem, (3) recognizing stated and unstated assumptions, (4) formulating and selecting relevant hypotheses, and (5) drawing valid conclusions. Lehmann tested 1,051 students on entering college and again at the end of the freshman year and every subsequent year. All students tested had significantly higher scores as seniors than they did as freshmen. The most significant gains occurred during the freshman year.

Keeley, Brown, and Kreutzer (1982) used a cross-sectional design and administered open-ended and essay measures of critical thinking to 145 freshmen and 155 seniors at a large state university. Two experimental conditions were employed, with half of each class cohort receiving each treatment. In one condition, students were given very general instruction on how to respond to the items. The other group received specific instructions for writing critical evaluations of the items.

In the general instruction condition, seniors had significantly higher scores in six of the seven criticism categories on which they were judged. These categories included general criticisms, understanding of structure, logical inconsistency, explicit criticism, and essay length. In the specific instruction condition, seniors were more skilled at identifying the controversy and conclusions of the essay and identifying assumptions. The seniors also received higher overall scores in this condition.

Another critical thinking measure, the Watson-Glaser Critical Thinking Appraisal (Watson & Glaser, 1964), has been used by a number of researchers and may be the most widely used critical thinking instrument. This instrument is designed to measure three dimensions of critical thinking: inference, deduction, and recognition. Recognition refers to the ability to recognize unstated assumptions. Inference refers to the ability to distinguish between valid and invalid inferences drawn from data. Deduction refers to the ability to reason deductively, from the general to the specific.

Mentkowski and Strait (1983) studied the development of critical thinking skills during the college years using the Watson-Glaser. This research was part of a comprehensive outcome evaluation program that Alverno College uses to examine cognitive development of students.

The design was longitudinal, with more than 700 freshmen tested at the beginning of their freshman year and again at the end of the sophomore and senior years. Significant increases in scores on all three dimensions of the Watson-Glaser were found between the sophomore and senior years. Significant increases on the inference and deduction scales were found between the freshman and sophomore years.

An additional assessment tool used in Alverno's evaluation program is a Piagetian formal reasoning task. The task essentially measures the student's ability to reason abstractly: there were two proportionality problems, two conservation of volume problems, and one problem dealing with the separation of variables.

Mentkowski and Strait (1983) found significant increases in formal reasoning between the freshman and sophomore years. A similar increase in formal reasoning ability was also found by Eisert and Tomlinson-Keasey (1978) in a study of 55 freshmen. They found significant increases between the start and end of the freshman year.

Another measure available is the Test of Thematic Analysis (Winter & McClelland, 1976). This broad essay exam measures thinking and reasoning ability. In the test, students are given two different groups of Thematic Apperception Test stories and are asked to describe the differences between the two groups in an essay. The essays are judged on nine reasoning and thinking criteria.

Winter and McClelland (1978) conducted a multiple-institution study of cognitive development using the Test of Thematic Analysis. Samples were drawn from three institutions: an elite liberal arts college, a state teachers college, and a community college. Longitudinal and cross-sectional data were collected from the liberal arts college; the design for the teachers college and community college was cross-sectional.

Longitudinal data from 30 students at the liberal arts college showed significant differences from freshman to senior year on the Thematic Analysis score. The cross-sectional designs resulted in significant findings only at the elite liberal arts college. Data from the teachers college and community college did not show significantly reliable increases.

Winter, McClelland, and Stewart (1981) sought to measure intellectual flexibility in reasoning using Stewart's Analysis of Argument Test (1977). This test confronts a subject with a controversial statement and asks the subject to write two essays: one defending the statement and one attacking it. The two essays are scored on ten criteria. These criteria are meant to measure the extent to which the subject can evaluate an argument and construct a coherent evaluation of an argument.

The study involved the same samples as those in Winter and McClelland's 1978 study. Statistically significant differences were found between freshman and final-year students at all three institutions on the total score of the Analysis of Argument Test.

The reflective judgment interview (RJI) is another available measure of higher level cognitive skills. Reflective judgment refers to the development of complex reasoning and judgment skills. In the interview, the subject is confronted with four controversial dilemmas and a set of standardized questions designed to tap level of reasoning. Level of reasoning is determined based on a scheme of intellectual development similar to Perry's (1970). Schmidt and Davison (1981) classify level of reasoning along a multilevel continuum, ranging from dualism to probabilism. Dualism is a simple and illogical reasoning pattern, and probabilism is reasoning based on evidence and logic.

Brabeck (1983) reviews ten studies that have used the RJI as a dependent variable. Many of these studies used a cross-sectional design to measure differences in reflective judgment across educational levels. The results support the idea of a reflective judgment continuum, with more advanced students showing higher levels of reasoning ability. Postsecondary education does appear to influence the development of reasoning ability.

NCRIPTAL researchers McKeachie, Pintrich, Lin, and Smith provide a detailed and more technical review of measures of two other academic-cognitive outcomes, problem-solving skills and knowledge representation. Their investigation will be incorporated into this background paper in its next revision. Readers are referred to the list of NCRIPTAL technical reports noted at the beginning of this report.

In sum, many higher-level cognitive processes appear to improve in college, and many measures are available for evaluating improvement in these areas. Most of these measures, however, were developed specifically for a given research program, though they may be useful for other purposes.

Basic Skills

Although basic skills are generally not viewed as an outcome of college, they are becoming an increasingly important part of the college curriculum. As access to college for disadvantaged students has improved, so has the concern for keeping those students in college and helping them to succeed through the use of remedial and basic skills programs.

Concern for basic skills programs at the four-year college level has grown out of fairly recent changes in the student population. Two-year community and junior colleges have always been concerned with basic skills as both a requirement and an outcome of their programs.

For the purposes of evaluating outcomes as well as the maintenance of skills in both two- and four-year colleges, basic skills measures are considered here as outcome measures that some colleges may wish to incorporate or use as a pre-test at entry.

Basic skills usually refer to fundamental abilities in mathematics, English composition, reading, and vocabulary. While most programs for identifying basic skills at the college level are developed at the institutions, tests are available for evaluating students in this area. Appendix B presents information on a number of reading, math, and vocabulary tests designed for the college level. While not an inclusive list, it represents some available tests of basic skills.

Wolfe (1983) studied the development of basic skills in college. He investigated the progression of students in vocabulary and mathematical ability using 1979 follow-up data from the National Longitudinal Study of the High School Class of 1972. He found that, when controlling for ethnicity, parents' education, father's occupation, and the 1972 scores, postsecondary education significantly improved both vocabulary and mathematics performance.

Academic-Motivational Outcomes

Measures covered in this section include those related to motivation, self-efficacy, student involvement, and satisfaction with college.

Two motivational outcomes, motivation and self-efficacy, are discussed in detail in a companion report by McKeachie et al. Another companion paper by Korn discusses self-efficacy as both an input and an outcome variable in college. Rather than repeat their efforts, we refer the reader interested in these outcomes to those two reviews.

Involvement and Quality of Effort

The relatively new concepts of involvement and quality of effort have both motivational and behavioral components. We have chosen to include them as motivational outcomes because of the ties between these concepts and motivation.

Astin's (1984) theory of student involvement can be simply summarized as "Students learn by becoming involved" (Astin, 1985, p. 133). Astin defines involvement as the amount of physical and psychological energy that the student devotes to the academic experience. He considers involvement closely linked to motivation but prefers the term involvement because it has more behavioral implications.

Involvement theory has five postulates. First, involvement consists of investing energy in objects. Second, involvement exists along a continuum. Third, involvement has quantitative and qualitative aspects. Fourth, the amount of student learning and personal development is related to the amount of student involvement. Fifth, the effectiveness of an institutional policy or practice is related to its capacity to increase student involvement.

Astin (1977) conducted a large-scale longitudinal study investigating the effects of various forms of involvement on numerous student outcomes. His general conclusion was that most forms of involvement lead to greater-than-average changes in the entry characteristics of freshmen. In some instances, involvement was more strongly related to outcomes than either entry or institutional characteristics.

Most of the outcomes Astin measured were motivational outcomes. More research is needed to determine the effects of involvement on cognitive outcomes. No obvious instrument is yet available to measure student involvement.

Pace (1984) discusses the concept of quality of effort as an important determinant of student outcomes. He posits that because education is both a process and a product, the quality of the educational experience must be taken into account. The quality of this process is not the sole responsibility of the institution or its faculty members; rather, students must take some responsibility for their own progress by taking advantage of opportunities provided to them by the institution. Thus, by measuring quality of student effort, one can better assess the quality of the educational process.

Pace developed an instrument to measure the quality of student experiences by determining the extent to which students take part in activities and opportunities intended to promote student learning and development. Pace's Quality of College Student Experiences (1984), a standardized self-report survey, includes fourteen scales of activities (e.g., student union, athletic and recreational facilities, experiences in writing, library experiences) that reflect increasing amounts of effort and potential value. The scored responses provide a measure of the quality of effort students have invested in the various aspects of college life.

The survey also collects information on the college environment, student background, and gains made during college. Pace suggests that the instrument be used by institutions for program evaluation, resource allocation, and faculty and staff discussions. The instrument can also be used as a research tool for investigating the relationship between quality of student effort and institutional characteristics.

Pace (1984) reports reliability and validity information in the manual. For reviews of the instrument and comments on the psychometric features reported in the manual, see Miller (1985) and Brown (1985).

Pace also reports data from studies using the Quality of College Student Experiences survey. The results support the idea that quality of effort is an important predictor of student achievement: effort measures significantly increase the amount of explained variance in student achievement (Pace, 1984). This increase in explanatory power occurs even when student characteristics, college status variables, and college environment ratings have already been included in the regression equation.
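Pace's incremental-variance argument is, in effect, a hierarchical regression: enter the background and environment predictors first, then add the effort measure and compare R-squared values. The sketch below illustrates the mechanics on fabricated data; the variable names and coefficients are invented for illustration and are not Pace's data.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 200
background = rng.normal(size=n)    # stand-in for entering student characteristics
environment = rng.normal(size=n)   # stand-in for college environment rating
effort = rng.normal(size=n)        # stand-in for a quality-of-effort score
# Achievement depends on all three, so effort carries unique variance.
achievement = 0.5 * background + 0.3 * environment + 0.4 * effort + rng.normal(size=n)

r2_base = r_squared(np.column_stack([background, environment]), achievement)
r2_full = r_squared(np.column_stack([background, environment, effort]), achievement)
print(f"R^2 without effort: {r2_base:.3f}")
print(f"R^2 with effort:    {r2_full:.3f}  (increment: {r2_full - r2_base:.3f})")
```

The size of the R-squared increment at the final step is the quantity Pace interprets as the unique contribution of student effort.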


The concepts of quality of effort and involvement are often considered process variables; that is, variables that moderate the relationship between available learning opportunities and student outcomes. However, when education is viewed as an iterative process, with current outcomes influencing future achievement, these concepts can be considered outcomes in the sense that the development of effort and involvement is itself an outcome that in turn influences the educational process. For this reason, we are including these as possible outcome measures at the college level.

Academic-Behavioral Outcomes

Behavioral measures of students' cognitive development are difficult to find in the literature. Most measures used in research and practice are either motivational or cognitive. As mentioned earlier, involvement and quality of effort measures can be considered behavioral in the sense that they measure self-reports of participation in various activities. Other behavioral measures of academic outcomes that NCRIPTAL researchers are considering include goal-exploring behaviors, relationships with faculty, and persistence.

Career and Life Goal Exploration

One desired outcome of college is the exploration and development of life options, including both appropriate career choices and recognition of the values to be gained from liberal education. Interestingly, however, the literature in these areas tends to advocate one of these types of student development to the exclusion of the other.

A wide variety of instruments exists to measure students' career exploration activities. These include vocational development inventories, career maturity scales, and search procedures that help students identify occupational groups with similar personality characteristics or interests. Most of these instruments are designed to assist the "undecided college student" and appear to be based on the assumption that being undecided is both economically inefficient and psychologically unsettling to the college student.

In a different mode, considerable rhetoric advances the value of liberal education in preference to early (or premature) decisions about career specialization. Although this openness to liberal learning is highly valued by many educators, the authors know of no instrument designed to measure such student proclivities. Most existing information is based on surveys of entering college students. In recent years the percentage of students espousing vocational goals has risen substantially, while the percentage who desire general education has remained relatively stable.


From the standpoint of improving teaching and learning, the important question is whether student openness to considering various educational and career alternatives changes because of specific educational experiences. For example, does study of liberal arts subjects result in students placing greater or less value on this knowledge? Does studying career-oriented subjects close one's mind to the value of liberal education?

Such questions have been studied by only a few researchers. To illustrate, Mentkowski and Doherty (1984) report that students at a college with a competency-based liberal arts curriculum moved from strong initial career orientations toward an appreciation of the liberal arts.

Instruments that measure student goals in a multidimensional fashion, so that change in either direction can be assessed, await development.

Exploration of Diversity

We have used the term "exploration of diversity" to represent a complex set of educational outcomes that are not captured in other categories of our framework. One of the best-known frameworks for measuring such outcomes in the academic-cognitive sense is Perry's scheme of intellectual development (Perry, 1970). In this scheme, intellectual development is measured by students' movement from a position of dualism (right/wrong) to a more balanced consideration of a variety of viewpoints (relativism), and finally to selecting and justifying one's own point of view (commitment). A variety of paper-and-pencil measures of student change on the Perry dimensions are under development.

In the academic-behavioral sense, exploration of diversity may be reasonably well captured by some portions of Pace's Quality of College Student Experiences survey, discussed earlier. Attendance at campus events, for example, might be a measure of the impact of college in broadening student horizons as well as an index of student effort.

Our intent in listing these exploratory behaviors as outcomes is to stimulate thinking about broad behavioral observations through which colleges might measure the extent to which students become more likely to participate in further education and cultural affairs and to exhibit other behaviors generally attributed to educated people. One of the deficiencies in previous outcome typologies has been a strong dependence on high-inference measures and a notable lack of actual behavioral observations in assessing student outcomes. We will be exploring measures of these types of outcomes in the future.

Persistence

Why students drop out of college has been studied by numerous researchers in an effort to understand the determinants of attrition. Tinto's (1975) model of attrition posits that the academic and social integration of the student into the institution, and the student's interaction with these systems, are the primary determinants of persistence in college.

In testing the Tinto model, Munro (1981) found that academic integration had a strong effect on persistence, while social integration was not a significant predictor. Similarly, Pascarella, Duby, and Iverson (1983) found that academic integration was a significant predictor of student persistence. In addition, they found that entry characteristics were more predictive of persistence in a non-residential setting than in a residential setting.

Pascarella, Smart, and Ethington (1986) studied persistence among students at two-year colleges over a period of nine years. They found that both academic and social integration were important predictors of persistence when the students were tracked for this longer period of time.

Edwards and Waters (1983) attempted to explain persistence by using academic course involvement, academic ability, academic performance, and satisfaction with both courses and college in general as predictors. In a replication study that included a personal needs/college climate discrepancy index and a voluntary/involuntary attrition breakdown, they found that the discrepancy index was marginally significant as a predictor of voluntary attrition.

As an outcome variable, persistence has thus most often been correlated with various independent variables in hopes of identifying facilitating conditions. While it is not a direct measure of improved learning at the college level, attendance is a prerequisite for continued cognitive development that can be directly linked to the college experience. Quite possibly, persistence as a dichotomous variable is not as useful an outcome for achieving teaching and learning improvement as involvement or quality of effort, which could be construed to represent various levels of persistence.
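The persistence studies above typically model a dichotomous persisted/withdrew outcome from integration measures. A minimal logistic-regression sketch of that design follows; the integration scores and coefficients are synthetic stand-ins, not data from any study cited here.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Logistic regression of a 0/1 outcome on X via plain gradient descent."""
    X1 = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X1 @ w))          # predicted persistence probability
        w -= lr * X1.T @ (p - y) / len(y)      # gradient step on log-likelihood
    return w

rng = np.random.default_rng(1)
n = 300
academic = rng.normal(size=n)   # synthetic academic-integration score
social = rng.normal(size=n)     # synthetic social-integration score
# Simulate persistence with academic integration weighted more heavily.
logit = 0.8 * academic + 0.2 * social
persisted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

w = fit_logistic(np.column_stack([academic, social]), persisted)
print("intercept, academic, social coefficients:", np.round(w, 2))
```

In this fabricated setup the academic-integration coefficient dominates, echoing the pattern Munro (1981) reported; with real data, either coefficient could of course prevail.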

Faculty-Student Relationships

An additional behavioral outcome to be considered here, student interactions with faculty, can be considered both an outcome and a process variable. Informal interactions with faculty are included in Tinto's model of attrition as an important aspect of academic integration. In this sense, relationships are considered process variables in the educational cycle: they determine and affect educational outcomes. However, when education is viewed as an iterative process, relationships with faculty can be viewed as outcomes that may affect future educational outcomes.

Pascarella (1980) completed a comprehensive review of the relationship between informal student-faculty interaction and college outcomes. He concluded that the extent and quality of student-faculty interactions had significant positive associations with students' educational aspirations, their attitudes toward college, their academic achievement, their intellectual and personal development, and their persistence in college. Bean and Kuh (1984), on the other hand, found no significant relationship between informal contact with faculty and student grade point average.

A recent study by Volkwein, King, and Terenzini (1986) investigated the relationships that develop between faculty members and transfer students. In this subsample of college students, perceptions about the quality and strength of their relationships with faculty were significantly related to intellectual growth.

Thus far, most measures of faculty-student relationships have been student self-reports of the number of hours per semester of non-class-related interactions with faculty. There is room for the development of other valid measures of this association.

In sum, there are several academic-behavioral outcomes of college, but few measures have been developed. Though these outcomes have been investigated less frequently than the cognitive and affective outcomes, there is evidence to support their importance and salience for college students.

Conclusion

In this working paper, NCRIPTAL has specified its concerns for the technical parameters of outcome measures that are part of the current discussions of assessment, assuming that the policy parameters are held constant through our collaborative work with institutions that desire to improve teaching and learning. Further, we have delineated the type and form of outcomes through which we hope to measure the effectiveness of various alterations in teaching and learning environments. We have briefly reviewed some of the forms of outcome measurement that are available for our use and that of others, and we have identified some gaps in available measurement techniques. As we learn more about new instruments and techniques that are being used with apparent success in the purposive improvement of teaching and learning, we will expand the information in this paper.


Appendixes

Appendix A. Categories of the NCHEMS Outcome Structures

Categories of the NCHEMS Outcomes Structure

CAT. CODE    ENTITY BEING MAINTAINED OR CHANGED

1000 Economic Outcomes
  1100 Economic Access and Independence Outcomes
    1110 Economic Access
    1120 Economic Flexibility, Adaptability, and Security
    1130 Income and Standard of Living
  1200 Economic Resources and Costs
    1210 Economic Costs and Efficiency
    1220 Economic Resources (including employees)
  1300 Economic Production
    1310 Economic Productivity and Production
    1320 Economic Services Provided
  1400 Other Economic Outcomes

2000 Human Characteristics Outcomes
  2100 Aspirations
    2110 Desires, Aims, and Goals
    2120 Dislikes, Likes, and Interests
    2130 Motivation or Drive Level
    2140 Other Aspirational Outcomes
  2200 Competence and Skills
    2210 Academic Skills
    2220 Citizenship and Family Membership Skills
    2230 Creativity Skills
    2240 Expression and Communication Skills
    2250 Intellectual Skills
    2260 Interpersonal, Leadership, and Organizational Skills
    2270 Occupational and Employability Skills
    2280 Physical and Motor Skills
    2290 Other Skill Outcomes

NOTE: The fourth-level categories, into which any of the categories listed here can be divided, are "maintenance" (a fourth digit of "1") and "change" (a fourth digit of "2").

SOURCE: Oscar T. Lenning, Young S. Lee, Sidney S. Micek, and Allan L. Service, A Structure for the Outcomes of Postsecondary Education (Boulder, Colo.: National Center for Higher Education Management Systems, 1977), p. 27.


Appendix A (continued)

Categories of the NCHEMS Outcomes Structure, continued

CAT. CODE    ENTITY BEING MAINTAINED OR CHANGED

2000 Human Characteristics Outcomes, continued
  2300 Morale, Satisfaction, and Affective Characteristics
    2310 Attitudes and Values
    2320 Beliefs, Commitments, and Philosophy of Life
    2330 Feelings and Emotions
    2340 Mores, Customs, and Standards of Conduct
    2350 Other Affective Outcomes
  2400 Perceptual Characteristics
    2410 Perceptual Awareness and Sensitivity
    2420 Perception of Self
    2430 Perception of Others
    2440 Perception of Things
    2450 Other Perceptual Outcomes
  2500 Personality and Personal Coping Characteristics
    2510 Adventurousness and Initiative
    2520 Autonomy and Independence
    2530 Dependability and Responsibility
    2540 Dogmatic/Open-Minded, Authoritarian/Democratic
    2550 Flexibility and Adaptability
    2560 Habits
    2570 Psychological Functioning
    2580 Tolerance and Persistence
    2590 Other Personality and Personal Coping Outcomes
  2600 Physical and Physiological Characteristics
    2610 Physical Fitness and Traits
    2620 Physiological Health
    2630 Other Physical or Physiological Outcomes
  2700 Status, Recognition, and Certification
    2710 Completion or Achievement Award
    2720 Credit Recognition
    2730 Image, Reputation, or Status
    2740 Licensing and Certification
    2750 Obtaining a Job or Admission to a Follow-up Program
    2760 Power and/or Authority

Appendix A (continued)

Categories of the NCHEMS Outcomes Structure, continued

CAT. CODE    ENTITY BEING MAINTAINED OR CHANGED

2000 Human Characteristics Outcomes, continued
    2770 Job, School, or Life Success
    2780 Other Status, Recognition, and Certification Outcomes
  2800 Social Activities and Roles
    2810 Adjustment to Retirement
    2820 Affiliations
    2830 Avocational and Social Activities and Roles
    2840 Career and Vocational Activities and Roles
    2850 Citizenship Activities and Roles
    2860 Family Activities and Roles
    2870 Friendships and Relationships
    2880 Other Activity and Role Outcomes
  2900 Other Human Characteristic Outcomes

3000 Knowledge, Technology, and Art Form Outcomes
  3100 General Knowledge and Understanding
    3110 Knowledge and Understanding of General Facts and Terminology
    3120 Knowledge and Understanding of General Processes
    3130 Knowledge and Understanding of General Theory
    3140 Other General Knowledge and Understanding
  3200 Specialized Knowledge and Understanding
    3210 Knowledge and Understanding of Specialized Facts and Terminology
    3220 Knowledge and Understanding of Specialized Processes
    3230 Knowledge and Understanding of Specialized Theory
    3240 Other Specialized Knowledge and Understanding
  3300 Research and Scholarship
    3310 Research and Scholarship Knowledge and Understanding
    3320 Research and Scholarship Products


Appendix A (continued)

Categories of the NCHEMS Outcomes Structure, continued

CAT. CODE    ENTITY BEING MAINTAINED OR CHANGED

3000 Knowledge, Technology, and Art Form Outcomes, continued
  3400 Art Forms and Works
    3410 Architecture
    3420 Dance
    3430 Debate and Oratory
    3440 Drama
    3450 Literature and Writing
    3460 Music
    3470 Painting, Drawing, and Photography
    3480 Sculpture
    3490 Other Fine Arts
  3500 Other Knowledge, Technology, and Art Form Outcomes

4000 Resource and Service Provision Outcomes
  4100 Provision of Facilities and Events
    4110 Provision of Facilities
    4120 Provision of Sponsorship of Events
  4200 Provision of Direct Services
    4210 Teaching
    4220 Advisory and Analytic Assistance
    4230 Treatment, Care, and Referral Services
    4240 Provision of Other Services
  4300 Other Resource and Service Provision Outcomes

5000 Other Maintenance and Change Outcomes
  5100 Aesthetic-Cultural Activities, Traditions, and Conditions
  5200 Organizational Format, Activity, and Operation
  5300 Other Maintenance and Change


Appendix B. Basic Skills Tests for College Students

Nelson-Denny Reading Test, Forms E and F

Purpose: To evaluate reading comprehension, vocabulary development, and reading rate.

Use: To screen, predict college success, and to diagnose reading difficulties.

Reliability: Test-retest reliabilities are .89 to .95 for the vocabulary subtest, .75 to .82 for the comprehension subtest, and .62 to .82 for reading rate.

Validity: Limited information available; the comprehension subtest is context dependent.

Degrees of Reading Power

Purpose: To measure reading effectiveness.

Use: To predict probability of success for students in prose materials of varying difficulty.

Reliability: K-R 20 coefficients vary between .93 and .97.

Validity: Correlations with CAT reading test range from .77 to .85.
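The K-R 20 coefficients quoted throughout this appendix come from Kuder and Richardson's formula 20, which can be computed directly from a persons-by-items matrix of right/wrong responses. The sketch below uses a toy answer matrix, not data from any of the tests listed here.

```python
import numpy as np

def kr20(scores):
    """Kuder-Richardson formula 20 for a persons-by-items 0/1 score matrix:
    r = k/(k-1) * (1 - sum(p*q) / var(total)), where p is each item's
    proportion correct, q = 1 - p, and var(total) is the variance of
    examinees' total scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                       # number of items
    p = scores.mean(axis=0)                   # proportion correct per item
    q = 1 - p
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * q).sum() / total_var)

# Toy matrix: 6 examinees x 4 items (1 = correct, 0 = incorrect).
answers = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(f"KR-20 = {kr20(answers):.2f}")  # ~0.78 for this toy matrix
```

Values reported in the appendix (e.g., .93 to .97 for the Degrees of Reading Power) would come from exactly this computation applied to much larger operational samples.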

Prescriptive Reading Performance Test

Purpose: To evaluate reading and spelling patterns and determine how a studentemploys visual and auditory modalities in reading.

Use: To make a preliminary assessment of reading level and reading comprehension; to identify strengths and weaknesses.

Reliability: Test-retest is .98; Spearman-Brown corrected split-half is .98.

Validity: Pearson correlations with six other reading measures range from .65 to .94.
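The Spearman-Brown corrected split-half figure reported here is obtained by correlating scores on two halves of the test and stepping the correlation up to full length with the formula 2r/(1+r). A minimal sketch, on made-up item responses, follows:

```python
import numpy as np

def split_half_reliability(scores):
    """Split a persons-by-items 0/1 matrix into odd/even item halves,
    correlate the two half-test scores, and apply the Spearman-Brown
    correction 2r/(1+r) to estimate full-length reliability."""
    scores = np.asarray(scores, dtype=float)
    odd = scores[:, 0::2].sum(axis=1)
    even = scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

rng = np.random.default_rng(2)
ability = rng.normal(size=50)
# 20 synthetic items whose pass probability tracks a common ability.
items = (rng.random((50, 20)) < 1 / (1 + np.exp(-ability[:, None]))).astype(int)
print(f"Spearman-Brown split-half: {split_half_reliability(items):.2f}")
```

The correction is needed because each half is only half as long as the full test, and reliability rises with test length.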

Self-scoring Reading Placement Test

Purpose: To evaluate reading and mathematical skills necessary for success in a two-year college, for students entering postsecondary institutions with open-door policies.

Use: To assist in placing students in college courses.

Reliability: K-R 20 coefficients available in manual.

Validity: Predictive validities for English and mathematics grades range from .06 to .70, with a median of .40.


Appendix B (continued)

Test of Mathematical Abilities

Purpose: To assess mathematical attitudes and aptitudes.

Use: For individual assessment.

Reliability: Internal consistency reliabilities range from .57 to .96.

Validity: Correlations of .26 to .31 between attitude subscale scores and three standardized mathematics tests.

All Vocabulary Scale

Purpose: To evaluate vocabulary level.

Reliability: K-R 21 coefficients range from .60 to .90; split-half coefficients range from .70 to .90 for all 80-word tests.

Validity: Correlates between .50 and .75 with vocabulary tests and variables such as non-verbal, reading, and mathematics ability and intelligence.

Comprehensive Tests of Basic Skills: Reading

Purpose: To evaluate reading and reference skills.

Reliability: K-R 20 coefficients for total reading scores are .94 to .97; for reference skills, .76 to .94.

Validity: Information not available.

Iowa Silent Reading Tests

Purpose: To measure vocabulary, reading comprehension, and reading efficiency.

Reliability: Median alternate-forms reliabilities for vocabulary, comprehension, and efficiency are .86, .83, and .77, respectively.

Validity: No predictive validity information is available; correlations with other reading tests are in the .70s and .80s.

Reading Progress Scale

Purpose: To evaluate "reading-input" performance.

Reliability: Alternate-form estimate is .84.

Validity: Only available for grades 3-6.


Appendix B (continued)

Sequential Tests of Educational Progress: Reading Series II

Purpose: To measure sentence and passage comprehension.

Reliability: Internal consistency and alternate-forms reliability information are available in the manual.

College Board Achievement Test in Mathematics, Level 1

Purpose: To measure the mathematical abilities of persons with at least three years of college preparatory mathematics courses.

Reliability: K-R 20 coefficient is .88.

Validity: A good predictor of college grades, but does not add much predictive ability when SAT scores and high school GPA are already included.

Information for this appendix was gathered from reviews in the Mental Measurements Yearbook.


References

(NOTE: In some instances references have been made to early drafts of works that may subsequently have been published in another form. - J.A. and J.S.)

Aiken, L. R. (1985). Review of ACT assessment program. In J. V. Mitchell (Ed.), The ninth mental measurements yearbook (pp. 29-31). Lincoln, NE: University of Nebraska Press.

Aleamoni, L. M. (1978). Review of CLEP general examinations. In O. T. Buros (Ed.), The eighth mental measurements yearbook (pp. 29-30). Highland Park, NJ: The Gryphon Press.

Association of American Colleges. (1985, February). Integrity in the college curriculum: A report to the academic community. Washington, DC: Association of American Colleges.

Astin, A. (1963). Further validation of the Environmental Assessment Technique. Journal of Educational Psychology, 54, 217-226.

Astin, A. (1968). Undergraduate achievement and institutional excellence. Science, 161, 661-668.

Astin, A. W. (1974). Measuring the outcomes of higher education. In H. R. Bowen (Ed.), Evaluating institutions for accountability: New directions for institutional research (pp. 23-46). San Francisco: Jossey-Bass.

Astin, A. W. (1977). Four critical years. San Francisco: Jossey-Bass.

Astin, A. W. (1980). When does a college deserve to be called 'high quality'? Current Issues in Higher Education, 1, 1-9.

Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25, 297-308.

Astin, A. W. (1985). Achieving educational excellence. San Francisco: Jossey-Bass.

Astin, A., & Panos, R. (1969). The educational and vocational development of college students. Washington, DC: American Council on Education.

Ayres, Q. (1983). Student achievement at predominantly white and predominantly black universities. American Educational Research Journal, 20, 291-304.

Ayres, Q., & Bennett, R. (1983). University characteristics and student achievement. Journal of Higher Education, 54, 516-532.

Bean, J. P., & Kuh, G. D. (1984). The reciprocity between student-faculty informal contact and academic performance of university undergraduate students. Research in Higher Education, 21, 461-477.

Bennett, W. J. (1984). To reclaim a legacy: A report on the humanities in higher education. Washington, DC: National Endowment for the Humanities.

Bowen, H. R. (1974). The products of higher education. In H. R. Bowen (Ed.), Evaluating institutions for accountability: New directions for institutional research (pp. 1-22). San Francisco: Jossey-Bass.

Bowen, H. R. (1977). Investment in learning: The individual and social value of American higher education. San Francisco: Jossey-Bass.

Bowen, H. R. (1979). Evaluating educational quality: A conference summary. In A. W. Astin (Ed.), Evaluating educational quality: A conference summary. Washington, DC: Council on Postsecondary Accreditation.

Brabeck, M. (forthcoming). Critical thinking skills and reflective judgment development: Redefining the aims of higher education. Journal of Applied Developmental Psychology.

Brown, R. D. (1985). Review of college student experiences. In J. V. Mitchell (Ed.), The ninth mental measurements yearbook (pp. 365-366). Lincoln, NE: University of Nebraska Press.


Focusing on Student Academic Outcomes

Buros, O. K. (Ed.). (1978). The eighth mental measurements yearbook. Highland Park, NJ: The Gryphon Press.

Centra, J., & Rock, D. (1971). College environments and student academic achievement. American Educational Research Journal, 8, 623-634.

Cohn, S. J. (1985). Review of Graduate Record Examinations General Test. In J. V. Mitchell (Ed.), The ninth mental measurements yearbook (pp. 622-624). Lincoln, NE: University of Nebraska Press.

Dumont, R., & Troelstrup, R. (1981). Measures and predictors of educational growth with four years of college. Research in Higher Education, 14, 31-47.

Educational Testing Service. (1976). Undergraduate Assessment Program Guide. Princeton, NJ: Educational Testing Service.

Educational Testing Service. (1978). Undergraduate Assessment Program Guide. Princeton, NJ: Educational Testing Service.

Edwards, J. E., & Waters, L. K. (1983). Predicting university attrition: A replication and extension. Educational and Psychological Measurement, 43, 233-236.

Eisert, D., & Tomlinson-Keasey, C. (1978). Cognitive and interpersonal growth during the college freshman year: A structural analysis. Perceptual and Motor Skills, 46, 995-1005.

Ewell, P. T. (1983). Information on student outcomes: How to get it and how to use it. Boulder, CO: NCHEMS.

Ewell, P. T. (1984). The self-regarding institution: Information for excellence. Boulder, CO: NCHEMS.

Ewell, P. (1985). Assessment: What's it all about? Change, 17(6), 32-36.

Feldman, K. A., & Newcomb, T. M. (1969). The impact of college on students. San Francisco: Jossey-Bass.

Forrest, A. F., & Steele, J. M. (1978). College Outcome Measures Project. Iowa City, IA: American College Testing Program.

Grant, M. K., & Hoeber, D. R. (1978). Basic skills programs: Are they working? Washington, DC: American Association for Higher Education.

Harris, J. (1985). Assessing outcomes in higher education: Practical suggestions for getting started. Washington, DC: American Association for Higher Education.

Hartle, T. W. (1985). The growing interest in measuring the educational achievement of college students. Washington, DC: American Association for Higher Education.

Harvey, P., & Lannholm, G. (1960). Achievement in three major fields during the last two years of college (Graduate Record Examination Special Report No. 60-Z). Princeton, NJ: Educational Testing Service.

Hills, J. R. (1978). Review of ACT Assessment. In O. K. Buros (Ed.), The eighth mental measurements yearbook (pp. 622-626). Highland Park, NJ: The Gryphon Press.

Jaeger, R. M. (1985). Review of Graduate Record Examinations General Test. In J. V. Mitchell (Ed.), The ninth mental measurements yearbook (pp. 624-626). Lincoln, NE: University of Nebraska Press.

Keeley, S., Browne, M., & Kreutzer, J. (1982). A comparison of freshmen and seniors on general and specific essay tests of critical thinking. Research in Higher Education, 17, 139-154.

Kifer, E. (1985). Review of ACT Assessment Program. In J. V. Mitchell (Ed.), The ninth mental measurements yearbook (pp. 31-36). Lincoln, NE: University of Nebraska Press.

Lehmann, I. (1963). Changes in critical thinking, attitudes, and values from freshman to senior years. Journal of Educational Psychology, 54, 305-315.


Lenning, O. T., Lee, Y. S., Micek, S. S., & Service, A. L. (1977). A structure for the outcomes of postsecondary education. Boulder, CO: NCHEMS.

Lenning, O. T., Munday, L., & Maxey, J. (1969). Student educational growth during the first two years of college. College and University, 44, 145-153.

McKeachie, W. J., Pintrich, P. R., Lin, Y., & Smith, D. A. F. (1986). Teaching and learning in the college classroom: A review of the research literature (Tech. Rep. No. 86-B-001.0). Ann Arbor: University of Michigan, National Center for Research to Improve Postsecondary Teaching and Learning.

Mentkowski, M., & Doherty, A. (1984). Abilities that last a lifetime. AAHE Bulletin, 36(6), 3-14.

Mentkowski, M., & Strait, M. J. (1983). A longitudinal study of student change in cognitive development, learning strategies, and generic abilities in an outcome-centered liberal arts curriculum (Final Report to the National Institute of Education: Research Report No. 6). Milwaukee, WI: Alverno College Productions. (ERIC Document Reproduction Service No. ED 239 562)

Merwin, J. C. (1978). Review of National Teacher Examinations. In O. K. Buros (Ed.), The eighth mental measurements yearbook (pp. 514-516). Highland Park, NJ: The Gryphon Press.

Milholland, J. E. (1978). Review of CLEP General Examinations. In O. K. Buros (Ed.), The eighth mental measurements yearbook (pp. 30-32). Highland Park, NJ: The Gryphon Press.

Miller, J. K. (1985). Review of college student experiences. In J. V. Mitchell (Ed.), The ninth mental measurements yearbook (pp. 365-366). Lincoln, NE: University of Nebraska Press.

Mitchell, J. V. (1978). Review of National Teacher Examination. In O. K. Buros (Ed.), The eighth mental measurements yearbook (pp. 516-518). Highland Park, NJ: The Gryphon Press.

Mitchell, J. V. (Ed.). (1985). The ninth mental measurements yearbook. Lincoln, NE: University of Nebraska Press.

Munro, B. H. (1981). Dropouts from higher education: Path analysis of a national sample. American Educational Research Journal, 16, 133-141.

Nichols, R. (1964). Effects of various college characteristics on student aptitude test scores. Journal of Educational Psychology, 55, 45-54.

Pace, C. R. (1979). Measuring outcomes of college: Fifty years of findings and recommendations for the future. San Francisco: Jossey-Bass.

Pace, C. R. (1984). Measuring the quality of college student experiences. Los Angeles: University of California at Los Angeles, Higher Education Research Institute.

Pascarella, E. (1980). Student-faculty informal contact and college outcomes. Review of Educational Research, 50, 545-595.

Pascarella, E. (1986). College environmental influences on learning and cognitive development: A critical review and synthesis. In J. C. Smart (Ed.), Higher education: Handbook of theory and research (Vol. 1, pp. 1-61). New York: Agathon Press.

Pascarella, E., Duby, P. B., & Iverson, B. K. (1983). A test and reconceptualization of a theoretical model of college withdrawal in a commuter institution setting. Sociology of Education, 56(2), 88-100.

Pascarella, E. T., Smart, J. C., & Ethington, C. A. (1986). Long-term persistence of two-year college students. Research in Higher Education, 24, 47-71.

Perry, W. (1970). Forms of intellectual and ethical development in the college years. New York: Holt, Rinehart and Winston.

Rock, D., Centra, J., & Linn, R. (1970). Relationships between college characteristics and student achievement. American Educational Research Journal, 7, 109-121.

Schmidt, J., & Davison, M. (1981). Reflective judgment: How to tackle the tough questions. Moral Education Forum, 6, 2-14.


Southern Regional Education Board. (1985). Access to quality undergraduate education. The Chronicle of Higher Education, July 3, 9-12.

Stewart, A. (1977). Analysis of argument: An empirically derived measure of intellectual flexibility. Boston: McBer.

Study Group on the Conditions of Excellence in American Higher Education. (1984). Involvement in learning: Realizing the potential of American higher education. Washington, DC: National Institute of Education.

Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45, 89-125.

Turnbull, W. W. (1985). Are they learning anything in college? Change, 17(6), 23-26.

Volkwein, J. F., King, M. C., & Terenzini, P. T. (1986). Student-faculty relationships and intellectual growth among transfer students. The Journal of Higher Education, 57, 413-430.

Watson, G., & Glaser, E. (1964). Critical thinking appraisal. New York: Harcourt Brace Jovanovich.

Winteler, A. (1981). The academic department as environment for teaching and learning. HigherEducation, 10, 25-35.

Winter, D., & McClelland, D. (1978). Thematic analysis: An empirically derived measure of the effects of liberal arts education. Journal of Educational Psychology, 70, 8-16.

Winter, D., McClelland, D., & Stewart, A. (1981). A new case for the liberal arts: Assessing institutional goals and student development. San Francisco: Jossey-Bass.

Wolfle, L. (1983). Effects of higher education on ability for blacks and whites. Research in Higher Education, 19, 3-10.
