
Journal of Social Work Education, 49: 408-419, 2013
Copyright © Council on Social Work Education
ISSN: 1043-7797 print/2163-5811 online
Taylor & Francis Group
DOI: 10.1080/10437797.2013.796767

Direct and Indirect Measures of Learning Outcomes in an MSW Program: What Do We Actually Measure?

Orly Calderón

This study offers a unique perspective on assessment of learning by comparing results from direct and indirect measures in a social work graduate program across two campuses of a single university. The findings suggest that students' perceptions of learning are not necessarily reflective of content and applied skills mastery. Perception of learning appears to be a separate construct from actual learning, and it may reflect the students' satisfaction with their experiences in the program, rather than their attainment of content and skills. Thus, students' satisfaction with their educational experience deserves the attention of educators and administrators who are interested in improving program quality.

Postsecondary education programs engage in learning outcomes assessment to improve curriculum content and delivery and to satisfy accreditation requirements set by accrediting agencies. Price and Randall (2008) note that learning outcomes can be measured directly (namely, assessing students' mastery of content or skills) or indirectly (i.e., assessing opinions or attitudes toward learning). Similarly, Nichols and Nichols (2005) distinguish between indicators of knowledge and skills attainment (direct learning outcome measures) and attitudinal indicators of perception of knowledge and skills attainment (indirect learning outcome measures).

Traditionally, social work education programs have used student-focused direct and indirect measures (e.g., tests, papers, and students' course evaluations) as assessment strategies that exist on a continuum and that assess the same construct: namely, students' learning (Holden, Barker, Meenaghan, & Rosenberg, 1999). More recently, however, experts have pointed out differences in the relative utility of direct and indirect measures of learning outcomes. For example, Suskie (2009) stated that direct measures of students' learning are "visible . . . evidence of exactly what students have . . . learned" (p. 20). In contrast, the very construct validity of indirect measures as indicators of students' learning has been criticized as weak and inaccurate (Allen, 2004). Suskie (2009) noted that such measures merely provide "proxy signs that students are probably learning" (p. 20) and therefore are not convincing.

Allen (2004) distinguishes between direct measures that assess actual learning and indirect measures that assess perception of learning. The first category includes standardized and locally developed embedded assignments and course activities, portfolios of students' work, and competence interviews. The latter includes surveys, interviews, reflective essays, and focus groups that target students' perceptions of their learning. Though both types of measurement are limited, the consensus among experts in outcome assessment of learning in higher education is that, despite the resource cost involved in developing direct measures (Allen, 2004), such strategies feature high construct and content validity in terms of assessing students' learning.

Accepted: October 2011

Orly Calderón is associate professor at Long Island University and an NYS licensed psychologist. Address correspondence to Orly Calderón, Long Island University, 720 Northern Boulevard, Brookville, NY 11548, USA. E-mail: [email protected]

The field of social work education has been considering issues of assessing learning outcomes, leading to recent shifts in the policies of its accrediting agency, the Council on Social Work Education (CSWE). Attainment of learning has been redefined in terms of actual practice behaviors (CSWE, 2008) that presumably require direct measures of actual learning while possibly diminishing the value of indirect measures of perceived learning.

Within the field of social work, several direct assessment strategies have been used successfully. Adams (2004, p. 121) described the use of Classroom Assessment Techniques (CAT) in assessing ongoing learning in a social welfare policy class. These brief assessment tools, which may include anonymous polls or brief responses to open-ended questions, help instructors identify the level of content mastery and comprehension that students attain as the course progresses. Adams found that data gathered through CATs are useful for identifying and ultimately minimizing barriers to learning knowledge, values, and skills in a social welfare policy class.

Regehr, Bogo, Regehr, and Power (2007) developed an evaluation system that helped field instructors to better assess students' performances in the field practicum and to identify students who were experiencing difficulties. Regehr et al. discovered that use of rating scales to assess students' behaviors in the field was not useful in effectively evaluating the competency level of students' performances. Instead, they developed an instrument composed of vignettes that described contextual practice behaviors. The participating field instructors were asked to match the practice behavior patterns of their second-year master's of social work degree (MSW) students with the scenarios described in the vignettes. These vignettes were assigned score values ranging from exemplary to unsuitable for practice, but to avoid a response bias, the score assignment of each vignette was not revealed to the field instructors. The findings indicated that field instructors were able to reliably identify the competency level of their students by matching their students' behaviors with several vignettes with the same score values. Regehr et al. point out that the vignette matching method of assessment was effective in identifying students who had exhibited potentially problematic performance levels.

Smith, Cohen-Callow, Hall, and Hayward (2007) assessed MSW students' development of critical thinking skills in the context of critical appraisal of research. Using a pre- and posttest design, Smith et al. administered the Critical Appraisal Skills Program, an aptitude test, to MSW students enrolled in an introductory research class. The results suggested a small but statistically significant increase in students' correct responses to several of the test's items on the posttest, compared to the pretest. The authors described the utility of such assessment in helping to shape social work research curriculum to focus on training students to be effective consumers of research.

Alter and Adkins (2006) used direct measures of learning outcomes to assess writing skills of applicants to an MSW program. Students were asked to produce a writing sample in response to a prompt about a case study that they had previously discussed during their orientation. To measure students' writing proficiency, the writing samples were assessed along the following dimensions: (a) students' understanding of the prompt; (b) students' abilities to extract sufficient and relevant information from the case study to support their written positions; (c) students' abilities to logically organize their written responses; (d) students' abilities to use a persuasive voice in their writing; and (e) students' abilities to demonstrate use of appropriate writing mechanics (Alter & Adkins, 2006, p. 344). The authors described how the results of this direct assessment of students' writing skills can advance a movement toward a social work curriculum that provides more support for writing skills.

Concomitant with the focus on developing and using direct measures of learning outcomes, much work has been done during the past decade to develop valid and reliable indirect measures of learning in social work programs. For example, these measures assess students' perceptions of self-efficacy (Holden, Anastas, & Meenaghan, 2003, 2005; Holden, Barker, Rosenberg, & Onghena, 2008; Holden, Meenaghan, Anastas, & Metrey, 2002). Such assessment instruments derive their construct and criterion validity from Bandura's definition of perceived self-efficacy and its relationship to task performance and perseverance (as cited in Holden et al., 2002). Results from studies that test the psychometric properties of self-efficacy scales indicate that they are valid, reliable, and reasonably correlate with measures of actual performance (Holden et al., 2003; Holden et al., 2008), especially in relationship to mastery of specific content areas (Holden et al., 2005).

However, evidence also exists to suggest that indirect measures of learning are not good predictors of actual learning as measured by mastery of content and skills. For example, Fortune, Lee, and Cavazos (2005) found that social work students' ratings on achievement motivation scales (including self-rating of skills and self-efficacy) were not significantly correlated with the field instructors' evaluations of the students' performance. Outside of the social work education field, Price and Randall (2008) found that results from students' knowledge surveys did not correlate with actual knowledge in a management program research class at Georgia Southern University.

The question arises whether direct and indirect measures of learning outcomes exist along a continuum of evaluation strategies (a perspective that may indeed discourage the use of indirect measures in the assessment of actual learning) or whether they represent a two-factor solution that provides data on two separate constructs (a perspective that may validate indirect measures as indicators of learning experiences other than actual content mastery and skills attainment). The current study offers a unique opportunity to address this question by comparing results from direct and indirect measures of learning outcomes in a social work graduate program across two campuses of a single university in the United States. Though MSW students on either campus followed the same curriculum in terms of content and skills acquisition, they represent two separate and independent cohorts. Thus, although the data were collected in a single university, they reflect scores from two independent groups, thereby increasing the validity of this study.

This study is based on educational outcome data that were collected at the end of the 2007-2008 academic year (AY) from students in the foundation year of an MSW program at a large private university in the New York metropolitan area. The study tests the hypothesis that there will be significant differences between the two cohorts' scores on direct and indirect measures of learning outcomes. Although the two campuses are located in different geographical settings (urban and suburban), the location setting per se is not hypothesized to be an important factor. Therefore, and for the sake of convenience, the two cohorts are referred to as Cohorts A and B, respectively, throughout this article. Scores on direct measures of learning outcomes are operationally defined using two separate assessment mechanisms: (1) students' scores on the Area Concentration Achievement Test, Version A (PACAT, n.d.), and (2) field instructors' ratings of students' performances as reflected on the Field Instructors' Evaluation of Student Performance (Barretti, 2005). Scores on indirect measures of learning outcomes are operationally defined as students' ratings of course objectives achievement as reflected by scores on the Students' Course Objective Achievement Survey (S-COAS), an evaluation instrument developed within the social work department of the university. A detailed description of these instruments appears in the Instruments subsection of the Method section.

METHOD

Participants and Sampling Procedures

This study uses an availability sample. Educational outcome assessment data were collected from students and field instructors of the MSW program's foundation year. The participants included 64 students, and 80% of those attended Campus B. The field instructors were social workers from various social service agencies, schools, and hospital agencies in close geographic proximity to the two campuses.

Procedure

One faculty member was assigned to oversee the program's outcome assessment on both campuses. This person was responsible for overseeing data collection and analysis and then providing it to faculty, interested students, and other stakeholders in a timely manner for the purpose of curriculum and program development. The data reported in this article were collected as part of the program's required ongoing assessment procedure. The university's Office of Sponsored Research has granted this author permission to share these data in the aggregate via this article.

Data regarding achievement of course objectives (indirect measures) were collected from students via surveys distributed in class on the last day of each course. The students completed these surveys voluntarily and anonymously. Data regarding students' mastery of foundation curriculum content (direct measures) were collected at the end of the spring semester of the concentration year via a standardized test that was administered to all second-year students during class time in the last week of the semester. Participation in this test was voluntary, and scores from this particular measure were not factored into the students' grades or academic records. This article presents data from the content mastery test, which was administered at the end of AY 2008-2009 to students who completed the indirect measures regarding achievement of the same content objectives the year before. Hence, the results reflect data collected regarding foundation year curriculum despite administration of the test during the concentration year (which is a result of strategic constraints in the program's delivery of certain foundation year curriculum). Data regarding achievement of practical skills that correspond to program objectives (direct measures) were collected at the end of each semester during AY 2007-2008 from the field instructors of the same students who completed the indirect measures. Although these surveys are not anonymous and are used for grading students' performance, this author did not have access to students' identifying information when analyzing these data, thus maintaining confidentiality and avoiding an experimenter's bias through a blind design.


Instruments: Indirect Measures of Learning Outcomes

Students' Course Objective Achievement Survey (S-COAS). This instrument was originally designed in compliance with the CSWE Educational Policy and Accreditation Standards (EPAS; 2001) and was recently revised (after the completion of the current study) to comply with the new CSWE EPAS (2008) and to reflect assessment of learning competencies rather than learning objectives. This instrument, which is the result of a collaborative effort of the social work faculty members at the university, assesses the degree to which students believe they have achieved the course objectives as outlined in the course syllabus. The instrument derives its content validity from the CSWE EPAS (2001), which has served as the basis for the course objectives for each course in the program. Students are asked to rate the degree to which course learning objectives have been met using a 5-point Likert scale ranging from 1 = This objective was met to a very small degree to 5 = This objective was met to a very great degree. Consistent with other self-report measures of indirect learning (Suskie, 2009), this instrument draws its construct validity from the assumption that students' perception of course objectives achievement is indicative of students' perceptions of their actual learning of course content. The S-COAS features a good level of reliability (a Cronbach's alpha reliability coefficient of .76). A sample S-COAS instrument appears in Figure 1.
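Readers wishing to replicate this kind of internal-consistency check can compute Cronbach's alpha from a respondents-by-items matrix of Likert ratings. The following is a minimal sketch; the response matrix is hypothetical illustration data, not the study's S-COAS responses.

```python
# Minimal sketch: Cronbach's alpha for a respondents x items Likert matrix.
# The sample ratings below are hypothetical, not the study's S-COAS data.
import numpy as np

def cronbach_alpha(ratings):
    """ratings: 2-D array, rows = respondents, columns = survey items."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                               # number of items
    item_variances = ratings.var(axis=0, ddof=1)       # per-item variance
    total_variance = ratings.sum(axis=1).var(ddof=1)   # variance of sum scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five hypothetical students rating four course objectives on a 1-5 scale.
sample = [[4, 5, 4, 4],
          [3, 4, 3, 4],
          [5, 5, 4, 5],
          [2, 3, 3, 2],
          [4, 4, 5, 4]]
print(f"Cronbach's alpha = {cronbach_alpha(sample):.2f}")
```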

Instruments: Direct Measures of Learning Outcomes

Area Concentration Achievement Test (ACAT) Version A. This standardized test, published by PACAT (n.d.), assesses students' comprehension and mastery of CSWE curriculum content areas. The ACAT has a mean of 500 and a standard deviation of 100. Our students took Version A. This form is thought to have better content validity than other available forms because it features eight content areas that reflect the CSWE foundation curriculum, and it corresponds well with our program's curricular objectives. The eight content areas of Version A are (1) diversity, (2) populations at risk, (3) social and economic justice, (4) values and ethics, (5) policy and services, (6) social work practice, (7) human behavior in the social environment, and (8) research methods. The ACAT features moderate reliability, which is evident through its mean odd-even reliability coefficient of .67.
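An odd-even coefficient of this kind can be illustrated by correlating the two half-test scores and applying the Spearman-Brown step-up. The sketch below simulates 0/1 item responses under a simple logistic model, since PACAT's item-level data are not published; everything in it is hypothetical.

```python
# Minimal sketch: odd-even (split-half) reliability with a Spearman-Brown
# correction. Item responses are simulated; ACAT item data are not public.
import numpy as np

def odd_even_reliability(items):
    """items: examinees x items matrix of 0/1 scored responses."""
    items = np.asarray(items, dtype=float)
    odd = items[:, 0::2].sum(axis=1)    # total on odd-numbered items
    even = items[:, 1::2].sum(axis=1)   # total on even-numbered items
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)    # step up to full test length

rng = np.random.default_rng(42)
ability = rng.normal(size=(200, 1))              # latent examinee ability
p_correct = 1 / (1 + np.exp(-(ability - 0.2)))   # logistic item model
responses = (rng.random((200, 40)) < p_correct).astype(int)
print(f"estimated odd-even reliability = {odd_even_reliability(responses):.2f}")
```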

Field Instructors' Evaluation of Student Performance (FIESP). This quantitative instrument was designed originally by Marietta Barretti (2005) of the Social Work Department at Long Island University; it derives its content validity from the 2001 CSWE EPAS. Each program learning objective corresponds to one of the objectives stated in the 2001 EPAS and has been operationally defined to reflect a corresponding practical skill. Using a 6-point Likert scale, field instructors are asked to rate the students' performances on each of these skills. The scale features the following response options: 1 = No evidence of behavior; 2 = Behavior present, in a minimum degree; 3 = Behavior present to some degree; 4 = Behavior present most of the time and to the degree expected; 5 = Student surpassed expectations of a graduate social work student; and N = No opportunity to observe (Long Island University, n.d.).
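Because the scale includes an N option, ratings of "no opportunity to observe" must be kept out of any mean skill score. The sketch below shows one reasonable scoring convention, dropping N responses before averaging rather than treating them as zeros; this convention and the ratings are assumptions for illustration, not a documented FIESP procedure.

```python
# Minimal sketch: averaging FIESP indicator ratings while treating
# "N" (no opportunity to observe) as missing rather than as a zero.
# The ratings below are hypothetical.
import statistics

def mean_rating(ratings):
    """ratings: iterable of '1'-'5' strings, or 'N' for not observed."""
    observed = [int(r) for r in ratings if r != "N"]
    return statistics.mean(observed) if observed else None

one_student = ["4", "5", "N", "4", "3"]  # ratings on five skill indicators
print(f"mean skill rating = {mean_rating(one_student):.2f}")
```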

In 2009 the instrument was revised to ensure its compliance with the new CSWE EPAS (2008); it is currently being used for ongoing evaluation of the program learning competencies. Examples of FIESP items that correspond to the program's learning objectives appear in Table 1. Reliability scores for the FIESP version used in this study are not available.

MSW PROGRAM
STUDENTS' COURSE OBJECTIVE ACHIEVEMENT SURVEY

SWK 677 (Soc. & Psych. aspects of subs. abuse)   Section: ____   Semester/Year: Spring 2008

This survey is your opportunity to evaluate individual courses and the curriculum. Your responses will be compiled and analyzed and will be used by faculty and committees to make changes in the curriculum where needed. A student volunteer should collect and return the surveys to the Social Work Department.

Please evaluate the degree to which each course objective has been met using the following scale:

1. Very small degree
2. Small degree
3. Moderate degree
4. Great degree
5. Very great degree

By the end of this course, students will:

1. Identify various governmental policies regarding alcohol and drug use and the practices of the entities charged with implementing these policies.
2. List at least five (5) socio-ethno-cultural groups and be able to discuss at least three (3) specific strategies for working with each group.
3. Develop a culturally competent style that serves as their foundation for their work with persons who are substance abusers.

FIGURE 1 Sample of Students' Course Objective Achievement Survey (S-COAS) items. Note. MSW = master's of social work degree; Soc. = sociology; Psych. = psychology; subs. = substance; SWK = social work (course title).

Design

This educational outcome assessment study used a group survey design that compares scores on direct and indirect measures of learning outcomes across the entire MSW student population from both campuses (for a detailed discussion of survey design research models, see Engel & Schutt, 2008).

Data Analysis

Achievement of program learning objectives across the curriculum. Descriptive data are calculated for the S-COAS for Cohorts A and B. The mean score of each course objective's achievement is entered as a data point for calculating achievement of the corresponding program objective, using a curricular mapping technique. This method of data analysis treats N as the number of course objectives that correspond to the program's 12 foundation learning objectives, rather than as the number of students who have completed the S-COAS. These program objectives are reflected in 70 course objectives across the foundation curriculum. Essentially, N reflects the number of times that students have been exposed to opportunities to attain a specific program learning objective. Measures of central tendency are then calculated for each program objective. An independent t-test analysis is used to test for differences between the cohorts' ratings of objective achievement, as sketched after Table 1.

TABLE 1
Sample FIESP Items That Correspond to Program's Learning Objectives

Program objective: Apply critical thinking skills within the context of professional social work practice.
FIESP indicator: Develops strategies for approaching differential tasks.

Program objective: Understand the value base of the profession and its ethical standards, principles, and practice.
FIESP indicator: Maintains client confidentiality.

Program objective: Practice without discrimination as reflected in field practice.
FIESP indicator: Communicates in a manner that is sensitive and respectful of clients' diversities.

Note. FIESP = Field Instructors' Evaluation of Student Performance.
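To make the curricular-mapping aggregation concrete, the following is a minimal sketch of the analysis described above: each course objective's mean S-COAS rating becomes one data point under its mapped program objective, and cohorts are compared with an independent t-test. The mapping and the ratings are hypothetical stand-ins for the 70 mapped course objectives.

```python
# Minimal sketch: curricular mapping of S-COAS means to program objectives,
# then an independent t-test between cohorts. All numbers are hypothetical;
# here N is the number of mapped course objectives, not students.
import numpy as np
from scipy import stats

cohort_a = {"critical thinking": [4.1, 3.9, 4.0, 4.0, 4.1],
            "advocacy":          [3.8, 3.9, 3.8]}
cohort_b = {"critical thinking": [4.3, 4.2, 4.4, 4.2, 4.3],
            "advocacy":          [4.6, 4.6, 4.6]}

for objective in cohort_a:
    a, b = cohort_a[objective], cohort_b[objective]
    t, p = stats.ttest_ind(a, b)          # independent-samples t-test
    print(f"{objective}: M_A = {np.mean(a):.2f}, M_B = {np.mean(b):.2f}, "
          f"t({len(a) + len(b) - 2}) = {t:.2f}, p = {p:.4f}")
```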

FIESP. Measures of central tendency are calculated for each cohort (i.e., students from Campuses A and B) for each field course. Ratings of indicators of program objectives are averaged to yield a mean score that represents the student's degree of skill acquisition associated with the respective program learning objectives.

Mastery of curriculum content areas. Results from the ACAT are analyzed by an independent agent from PACAT, Inc. to provide standardized ipsative and normative scores. The scores reported here were achieved by the same students who had completed the S-COAS and the FIESP during AY 2007-2008 to yield single-group data.

RESULTS

Results were analyzed for direct and indirect indicators of content mastery and for direct indicators of skills (practice behaviors) attainment.

Content Mastery

Indirect measure/curricular mapping of S-COAS. Results from the indirect measure (curricular mapping of S-COAS) indicate that students on each campus rated all the learning objectives as achieved to a great degree (M > 3.5). However, students from Campus B rated achievement of six learning objectives significantly higher than students from Campus A (see Table 2).

Direct measure—ACAT. Results on the ACAT were computed separately for the two cohorts. The results indicated that students from Campus A of the MSW program achieved an overall standard score of 544, which is in the 67th percentile of the standardized comparison group. This means that students from Campus A scored higher than 67% of other MSW students across the United States who took this test during the past 6 years. The results further indicated that students from Campus B of the MSW program achieved an overall standard score of 475, which is in the 40th percentile of the standardized comparison group. Table 3 presents the respective cohorts' standard scores on each of the ACAT's content areas.

TABLE 2
Achievement of Foundation Program Objectives Across the Curriculum: Significant Differences Between Campuses

Learning Objective                          Mean Score    Mean Score    Significant
                                            Campus B      Campus A      Difference

1. Critical thinking                        4.28          4.01          t(157) = -2.18*
2. Practice without discrimination          4.55          3.86          t(22) = -3.46*
3. Advocacy                                 4.60          3.83          t(21) = -5.33*
4. Connection between history and
   current practice                         4.35          3.04          t(28) = -2.49*
5. Evaluate research                        4.42          3.93          t(28) = -2.35*
6. Communication skills                     4.36          3.96          t(37) = -2.46*

*p < .05.

TABLE 3
ACAT Content Area Scores

                                     Standard Score           Percentile
Content Area                         Campus B   Campus A      Campus B   Campus A

Overall                              475        544           40         67
Diversity                            456        508           33         53
Populations at risk                  484        509           44         54
Social and economic justice          516        539           56         65
Values and ethics                    441        521           28         58
Policy and services                  428        500           24         50
Social work practice                 471        553           39         70
HBSE                                 507        538           53         65
Research methods                     548        585           68         80

Note. ACAT = Area Concentration Achievement Test; HBSE = human behavior in the social environment.
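The standard-score-to-percentile conversions reported above can be approximated under a normality assumption, as in the minimal sketch below; the normal-curve conversion is an assumption of this sketch, not PACAT's documented scoring procedure.

```python
# Minimal sketch: converting an ACAT standard score (mean 500, SD 100)
# to a norming-group percentile, assuming approximately normal scores.
from scipy import stats

def acat_percentile(standard_score):
    z = (standard_score - 500) / 100   # z score relative to the norm group
    return 100 * stats.norm.cdf(z)     # percent of norm group scoring below

print(f"544 -> {acat_percentile(544):.0f}th percentile")  # ~67th (Campus A)
print(f"475 -> {acat_percentile(475):.0f}th percentile")  # ~40th (Campus B)
```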

Skills Attainment

Direct measure—FIESP. Students on each campus demonstrated to their field instructors that practical skills associated with each of the foundation program's objectives were present most of the time and to the expected degree, evidenced by a mean score of 4.0 or higher on each of the indicators associated with specific program objectives (see Table 4). An independent t-test yielded no significant differences between the performance levels of students from the two campuses on any of the practical skills indicators.

TABLE 4
Mean Scores on FIESP

Course Objective                       Campus   Mean   Standard Deviation

Critical thinking                      A        4.26   .01
                                       B        4.22   .01
Ethical standards                      A        4.34   .00
                                       B        4.39   .14
Practice without discrimination        A        4.20   .02
                                       B        4.21   .12
Advocacy                               A        4.30   .26
                                       B        4.34   .19
History and current practice           A        4.22   .14
                                       B        4.21   .19
Generalist skills                      A        4.22   .07
                                       B        4.31   .07
HBSE                                   A        4.10   .04
                                       B        4.10   .33
Social policy                          A        4.55   .02
                                       B        4.66   .01
Evaluate research                      A        4.82   .01
                                       B        4.29   .01
Communication skills                   A        4.36   .06
                                       B        4.35   .13
Use of supervision                     A        4.32   .03
                                       B        4.31   .05
Organizational change                  A        4.43   .07
                                       B        4.36   .08

Note. FIESP = Field Instructors' Evaluation of Student Performance; HBSE = human behavior in the social environment.

DISCUSSION

This outcome assessment study compares the results of direct and indirect assessment of learning among MSW students. The results provide only partial support for the hypothesis that differences exist between students' perceptions of their learning and their actual learning. The findings indicate a consistency between students' perceptions of their learning (indirect measure of learning) and their attainment of practice skills as rated by their field instructors (direct measures of learning). The findings suggest that students believe they have achieved the program learning objectives to a great degree, and they have also demonstrated satisfactory presence of practice behaviors associated with those learning objectives. Further, the perceptions of Cohort A students regarding achievement of program learning objectives are consistent with their demonstrated mastery of learning as evidenced by their scores on an objective standardized test. (The reported perceived achievement of learning objectives to a great degree is conceptually consistent with the above-average achievement on a standardized content mastery test.) However, students from Cohort B demonstrated a difference between their reported perceived learning and their actual learning. By their reports, they believed they had achieved the program's learning objectives to a great degree but demonstrated a below-average performance on a standardized content mastery test. Also, it is interesting that students from Cohort B, although demonstrating a lower level of content mastery compared to students from Cohort A (40th percentile vs. 67th percentile), perceived that they had achieved the program's learning objectives to a greater degree than students in Cohort A along several dimensions: critical thinking, practice without discrimination, advocacy, connection between history and current practice, evaluating research, and communication skills.

Essentially, the findings suggest a pattern that mimics an interaction effect for the measure (direct vs. indirect) × cohort (A and B, respectively) variables, because the cohort that reported a lower perception of achievement (indirect measure) in the curricular areas of nondiscriminatory practice, advocacy, and research evaluation actually scored higher on direct measures of learning within the corresponding content areas of diversity, policy and services, and research methods. Though it is possible that the ACAT scores (direct measures) could have been affected by the advanced-year curriculum (the test was administered at the end of the second year), such an effect does not negate the discrepancy between groups on this measure, nor the pattern of scores within groups when comparing this measure to the indirect measures. These findings indicate that students' perceptions of their educational achievement are not necessarily reflective of their abilities to demonstrate mastery of content and applied skills. Thus, it appears that students' perceptions of their educational success may be a separate construct from actual educational success. Perception of learning may reflect the students' satisfaction with their experiences in the program, which is not necessarily related to their actual learning. Satisfaction may be influenced by factors not related to actual learning but to students' perceptions of the social and emotional components of their learning environment. In the absence of more information about students' characteristics (e.g., prior educational experiences), it is difficult to speculate on factors that may affect perception of learning. Nevertheless, and regardless of factors that affect perception of learning, the data indicate that indirect measures address an aspect of the learning experience that may be independent of actual attainment of content and skills. Therefore, indirect measures may be useful for evaluating a construct that, though part of the educational experience, is distinct from their original purpose as an indicator of learning outcomes.

The findings indicate that a two-factor solution may be a more appropriate approach to assessment of learning outcomes than either a continuum approach or a direct measure approach alone. Direct measures can effectively assess knowledge and skills attainment, whereas indirect measures can effectively measure students' learning experiences. Additional research is required to identify factors that play a role in students' ratings of the learning experience and to further study the relationship among those factors, perception of learning, and actual learning.

These findings are particularly timely for the field of social work education in view of the current EPAS requirement (CSWE, 2008) that social work programs focus on measurement of actual competencies, thus challenging the indirect approach to assessment and steering social work programs toward an exclusively direct assessment of learning outcomes. A two-factor solution, supported by the current findings, suggests that as social work education programs move toward assessment of competencies (i.e., actual practice behaviors), faculty must not neglect assessment of other factors that play a role in educational outcomes. Hence, they can capture the full educational experiences of students and develop evidence-based strategies for quality improvement of those experiences.

This study is limited because it uses a small sample. In addition, it uses nonstandardized instruments that have been designed with various scales of measurement, thus making it difficult to conduct an accurate comparison of the scores. To protect students' anonymity and confidentiality, access to data from one instrument in particular (the FIESP) was limited, which further interfered with establishing the psychometric properties of the instrument. Procedures for future data collection in the department are being modified to address this problem for the purpose of future research. Finally, this study does not provide insight into factors that affect students' degree of perceived learning. Currently, this author is completing a study that incorporates a qualitative component into an educational outcomes assessment to better understand how students think about their learning experiences.

Nevertheless, the strength of this study lies in the unique opportunity it affords to compare educational outcomes of two cohorts within the same program, with the same curriculum content. The findings, particularly those that reflect differences between cohorts, have implications for future studies, which can look at factors that may be associated with actual and perceived learning outcomes, such as differences in students' academic preparedness and admission criteria and in the way that content, albeit identical, is infused into the curriculum.

REFERENCES

Adams, P. (2004). Classroom assessment and social welfare policy: Addressing challenges to teaching and learning. Journal of Social Work Education, 40, 121-142.

Allen, M. J. (2004). Assessing academic programs in higher education. Bolton, MA: Anker Publishing.

Alter, C., & Adkins, C. (2006). Assessing student writing proficiency in graduate schools of social work. Journal of Social Work Education, 42, 337-353.

Barretti, M. (2005). Field instructors' evaluation of student performance (Unpublished instrument). Department of Social Work, Long Island University, NY.

Council on Social Work Education. (2001). Educational policy and accreditation standards. Alexandria, VA: Author.

Council on Social Work Education. (2008). Educational policy and accreditation standards. Retrieved from http://www.cswe.org/Accreditation/2008EPASDescription.aspx

Engel, R. J., & Schutt, R. K. (2008). Survey research. In R. M. Grinnell & Y. A. Unrau (Eds.), Social work research and evaluation: Foundations of evidence-based practice (pp. 265-304). New York, NY: Oxford University Press.

Fortune, A. E., Lee, M., & Cavazos, A. (2005). Achievement motivation and outcome in social work field education. Journal of Social Work Education, 41, 115-129.

Holden, G., Anastas, J., & Meenaghan, T. (2003). Determining attainment of the EPAS foundation program objectives: Evidence for the use of self-efficacy as an outcome. Journal of Social Work Education, 39, 425-440.

Holden, G., Anastas, J., & Meenaghan, T. (2005). EPAS objectives and foundation practice self-efficacy: A replication. Journal of Social Work Education, 41, 559-570.

Holden, G., Barker, K., Meenaghan, T., & Rosenberg, G. (1999). Research self-efficacy: A new possibility for educational outcomes assessment. Journal of Social Work Education, 35, 463-476.

Holden, G., Barker, K., Rosenberg, G., & Onghena, P. (2008). The Evaluation Self-Efficacy Scale for assessing progress toward CSWE accreditation related objectives: A replication. Research on Social Work Practice, 18, 42-46.

Holden, G., Meenaghan, T., Anastas, J., & Metrey, G. (2002). Outcomes of social work education: The case for social work self-efficacy. Journal of Social Work Education, 38, 115-133.

Long Island University. (n.d.). LIU MSW combined program summary of learning outcome assessment AY 11-12. Retrieved from http://www.liu.edu/~/media/Files/CWPost/Academics/SHPN/LIU_Post_SummaryLearning.ashx-2012-08-08

Nichols, J. O., & Nichols, K. W. (2005). A road map for improvement of student learning and support services through assessment. New York, NY: Agathon Press.

PACAT, Inc. (n.d.). Area Concentration Achievement Test. Retrieved from http://www.collegeoutcomes.com

Price, B. A., & Randall, C. H. (2008). Assessing learning outcomes in quantitative courses: Using embedded questions for direct assessment. Journal of Education for Business, 83(5), 288-294.

Regehr, G., Bogo, M., Regehr, C., & Power, R. (2007). Can we build a better mousetrap? Improving the measurement of practice performance in the field practicum. Journal of Social Work Education, 43, 327-344.

Smith, C. A., Cohen-Callow, A., Hall, D. M. H., & Hayward, R. A. (2007). Impact of a foundation-level MSW research course on students' critical appraisal skills. Journal of Social Work Education, 43, 481-495.

Suskie, L. (2009). Assessing student learning: A common sense guide. San Francisco, CA: Jossey-Bass.
