
CONTEMPORARY EDUCATIONAL PSYCHOLOGY 4, 26-39 (1979)

An Attributional Approach to the Validity of Student Ratings of Instruction

RUSSELL AMES AND SING LAU

Purdue University

The study related students’ perceptions of the causes of their performance in three different components of a course (i.e., quizzes, individual papers, and group projects) to their ratings of the course and instructor in order to explore the convergent and discriminant validity of the course ratings. Multiple regression techniques were used to relate three internal and two external attribution cue items to seven course evaluation factors. Regression analyses indicated: (a) Multiple correlations between the five attribution cue items and seven course evaluation factors were highly significant; (b) after variance due to instructor had been removed, the attributional cue items accounted for from 10-25% of the criterion variance; and (c) internal attributions for performance were significantly related to positive course evaluations, and external attributions for performance were related to negative course evaluations.

A major question involved in using student ratings to evaluate instruction is the validity of these ratings (Costin, Greenough, & Menges, 1971; Crittenden & Norr, 1975; Frey, 1973; McKeachie, Lin, & Mann, 1971). A valid measure is one that is reliable, shows convergent and discriminant validity, and relates to performance criteria (Crittenden & Norr, 1975). In the convergent-discriminant approach, a valid measure would be one which correlates with factors theoretically related to teaching effectiveness and which can be distinguished from “biasing factors.” Social perception variables associated with student attitudes, values, and personality traits have been among the more common bias factors investigated (Crittenden & Norr, 1973; Crowe & Feldhusen, 1976; Tatenbaum, 1975; Rezler, 1965). Tatenbaum (1975), for example, found that student ratings were inflated when student needs were congruent with the teacher’s orientation. More generally, these researchers have indicated that if students perceive themselves and the teacher to be similar on a given characteristic, they are attracted to the teacher and rate the instructor and course more positively. Critics of instructor ratings have argued that if such biasing factors are operating, instructors may attempt to make themselves attractive to students by administering high grades at the expense of quality instruction.

Requests for reprints should be sent to the first author at Program in General Administration, University College, University of Maryland, College Park, MD 20742.

This research was supported in part by an instructional improvement grant to the first author from the Exxon Education Foundation.

0361-476X/79/010026-14$02.00/0 Copyright © 1979 by Academic Press, Inc. All rights of reproduction in any form reserved.


From an attribution theory framework (Heider, 1958; Kelley, 1967, 1971; Weiner, 1974), students’ ratings of instruction may be more related to their perception of the causes of their course performance than to their relative success or failure. For example, students who have attributed their poor performance to their own lack of effort have blamed themselves for the failure, not the course or instructor. In contrast, students who attribute a poor performance to some fault in the instruction may rate the course and instructor more negatively. The present study was an attempt to examine a set of social perception variables which may mediate the rating process, and further, to use the attribution theory framework to provide evidence for the convergent and discriminant validity of student instructional ratings.

Weiner and Kukla (1970) and Weiner (1974) have derived an attributional model for how persons explain and interpret achievement events. According to their model, persons who attribute the causes of success internally take personal responsibility for the outcome and experience positive affect, whereas persons who attribute the causes of success externally do not take such responsibility and, hence, do not experience positive feelings. In contrast, persons who attribute the causes of failure internally take personal responsibility for the failure and experience strong negative affect, whereas persons who attribute the causes of failure externally do not take such personal responsibility and do not experience such negative feelings about themselves.

As a set of mediating cognitions, this internal versus external attribution pattern for achievement events could differentially affect course and instructor ratings. Weiner, Frieze, Kukla, Reed, Rest, and Rosenbaum (1971) found evidence that teachers’ evaluations of students were affected by their beliefs about the causes of student performance; thus, students’ ratings of teachers may also be affected by attributional beliefs. In the Weiner et al. (1971) study, successful students were rated more positively than failing students, but effort attributions had the greatest effect on teacher evaluations. Specifically, teachers rated most positively those students who they believed tried hard but lacked ability, and rated most negatively those students who they believed had the ability but did not try, regardless of level or quality of performance.

If a similar attributional model holds for student ratings of teachers, students may rate their instructor more or less positively as a function of their beliefs about the causes of their own performance. Students who attribute the cause of their own performance internally may rate a course positively regardless of how well or poorly they have performed. When students succeed, they should make internal ability and effort attributions only when they believe a course tested their capabilities and was worth their time investment. Positive course evaluations logically follow from such ability and effort ascriptions for success. An internal attribution for failure may also lead to positive course evaluations because the student has blamed himself as the cause for the failure, not the instructor. A negative course rating, however, might result when students attribute the cause of their performance externally to task ease or difficulty and good or bad luck. A course that is too easy is not challenging, and a course that is too difficult should have been adjusted to meet student ability levels. Further, course performance due to luck cannot be logically associated with any positive course features.

Thus, the present study was designed to examine the relationship between various combinations of internal and external attributional cues and student evaluations of course and instructor. Five attributional cues were selected for the study on the basis of Weiner’s (1974, p. 6) attributional analysis of achievement settings: intentionality, ability, and effort as internal cues, and task ease and luck as external cues. As evidence for convergent and discriminant validity, it was expected that internal cues would show a strong positive correlation and that external cues would show a negative relationship with course ratings.

Further evidence of convergent and discriminant validity was gathered by examining whether or not these causal cues would relate to various components of a course in theoretically predictable ways. In the present study, it was possible to distinguish between a course component, i.e., group work, in which the instructor was actively involved in class presentations and leading discussions, and components, i.e., quizzes and individual projects, in which the instructor was not engaged in such activity. Thus, it was reasoned that if the ratings were valid, then students’ attributions for performance on group projects ought to show a strong relationship to the course evaluation factors, e.g., presentation and discussion, assessing instructor behavior critical to successful implementation of that group component of the course. In contrast, attributions for performance on quizzes and individual projects, which required little instructor involvement, ought to show a weaker relationship to these group-project-related instructor behaviors.

METHOD

Subjects. Instructional ratings were obtained from 361 students (264 females and 97 males) enrolled in a multisection undergraduate educational psychology course (junior-senior level) at Purdue University. The ten sections of approximately 36 students each were taught by eight different instructors (4 males and 4 females).

Course organization. The course was organized into three separate components of instruction: quizzes, group projects, and individual papers. In the first component, students prepared for short mastery quizzes with the aid of a study guide and completed them on an independent, outside-class schedule. The group projects emphasized application of basic information. Students worked in small discussion groups of 4-6 persons on simulated educational problems and prepared a written report of their proposed solutions. Individual papers were completed outside class as a final test of student ability to apply course concepts to problems of teaching. The quiz and individual paper dimensions required little in-class direct instruction, while the group project component occupied approximately 90% of class time. The instructor skills necessary for effective management of the in-class, group project component included lecture-presentation, discussion, personal attention, and organization-planning. A more detailed description of the design of this course is reported elsewhere (see Feldhusen, Ames, & Linden, 1974).

Performance in the course was assessed on a point system in which students earned a specified number of points for work on quizzes, group projects, and individual papers. Information was given at the beginning of the course regarding the point equivalents of final letter grades (A, B, C, etc.).

Course evaluation questionnaire. The Endeavor Instructional Rating (EIR) form (Frey, 1973) was administered during the last class of the semester in order to assess student evaluations of the course and instructor. The EIR contains 21 items that are each rated on a 7-point scale. The response fields for 14 of the items are based on a frequency dimension (never, seldom, sometimes, often, always) while the response fields for the other 7 items are based on degree of agreement (definitely no, no, yes, definitely yes). A factor analysis of the EIR by Frey, Leonard, and Beatty (1975) showed seven reliable factors, each comprised of three items: clarity of presentation, workload, personal-attention, class discussion, organization-planning, grading, and student accomplishment. Moreover, the same study showed that three of the factors were highly correlated with an external criterion of “good teaching,” i.e., final exam performance, suggesting the validity of these scales.
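As a schematic illustration of how such three-item factors are typically scored, the sketch below averages each factor's items from a respondents-by-21-item matrix of 1-7 ratings. The item numbering and item-to-factor assignment shown are hypothetical; the actual EIR keying is not reproduced in this article.

```python
# Hypothetical sketch of scoring the seven three-item EIR factors as item means.
# The item-to-factor mapping below is assumed for illustration only.
import pandas as pd

FACTOR_ITEMS = {
    "presentation": ["item01", "item02", "item03"],
    "workload": ["item04", "item05", "item06"],
    "attention": ["item07", "item08", "item09"],
    "discussion": ["item10", "item11", "item12"],
    "planning": ["item13", "item14", "item15"],
    "grading": ["item16", "item17", "item18"],
    "accomplishment": ["item19", "item20", "item21"],
}

def score_eir(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one column per factor, each the mean of that factor's three items."""
    return pd.DataFrame({factor: responses[items].mean(axis=1)
                         for factor, items in FACTOR_ITEMS.items()})
```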

Attributional questionnaire. In addition to the EIR, students also completed a questionnaire asking them to attribute the causes of their performance on tests, group projects, and individual papers. Five questionnaire items corresponding to each of the selected attributional cues were used and were repeated three times, once for each course component. Intentionality was assessed by the item: It was very important to do my best on the quizzes (group projects) (individual papers). The other internal cues of ability and effort were assessed in the items: (1) My performance on (a) the quizzes, (b) group projects, or (c) individual papers was a function of my ability (a) to understand concepts and principles of educational psychology, (b) to work well with others in small problem solving groups, or (c) to apply what I know to practical situations; (2) My performance on the quizzes (group projects) (individual papers) was a function of how much effort I put into them. The external cues of luck and task ease were assessed in the items: (1) Success on the quizzes (group projects) (individual papers) was mostly a function of luck; (2) Success was mostly a function of how easy the quizzes (group projects) (individual papers) were. The poles on the intentionality item were labeled “not important” (1) to “very important” (7), and the poles for all the remaining scales were labeled “definitely no” (1) to “definitely yes” (7).

Data analysis. In order to test for the convergent and discriminant validity of the student instructional ratings, a series of multiple regression analyses were performed using the attributional cue items as predictors and the instructional rating factors as criterion variables. First, a set of seven stepwise multiple regression analyses were performed (one for each course evaluation factor) controlling for differences between instructors. Ten categorical variables were established, one for each section of the course, and were entered as dummy variables which were forced into the regression equation first to account for differences between instructors. The five attributional cue items were then allowed to enter the equation freely according to their capacity to improve the equation. This first set of regression analyses was performed using the average of the attributional cue items across the three performance criteria: quizzes, group projects, and individual papers. Next, a set of 21 stepwise regression analyses (a set of seven for each course component) were performed entering the internal cue items intentionality, ability, and effort as predictors in that fixed order. A second set of 21 regression analyses were performed using the external cue factors, luck and task ease, as predictors. All analyses were performed using the Statistical Package for the Social Sciences (SPSS) regression analysis computer program with the statistical option for pairwise deletion of missing data (Nie, Hull, Jenkins, Steinbrenner, & Bent, 1975).
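The original analyses were run in SPSS. As a rough modern sketch of the same design (hypothetical column names such as section and the cue columns; listwise rather than the authors' pairwise deletion, for simplicity), a forced block of section dummies followed by free forward entry of the five cues could look like this:

```python
# Hypothetical sketch (not the authors' SPSS code): a forced block of dummy-coded
# section variables followed by forward entry of the attribution cues, ranked at
# each step by their increase in R^2.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def r_squared(df: pd.DataFrame, predictors: list[str], criterion: str) -> float:
    """Fit OLS with an intercept and return R^2 (listwise deletion on these columns)."""
    data = df[predictors + [criterion]].dropna()
    X = sm.add_constant(data[predictors])
    return sm.OLS(data[criterion], X).fit().rsquared

def forced_then_stepwise(df, forced, candidates, criterion, min_gain=0.001):
    """Forced block first, then forward entry of candidates by increase in R^2."""
    entered = list(forced)
    r2 = r_squared(df, entered, criterion)
    history = [("forced block", r2, r2)]          # (step, cumulative R^2, increase)
    remaining = list(candidates)
    while remaining:
        gains = {c: r_squared(df, entered + [c], criterion) - r2 for c in remaining}
        best = max(gains, key=gains.get)
        if gains[best] < min_gain:
            break
        entered.append(best)
        r2 += gains[best]
        history.append((best, r2, gains[best]))
        remaining.remove(best)
    return history

# Illustrative usage with hypothetical column names:
# df = pd.read_csv("ratings.csv")
# dummies = pd.get_dummies(df["section"], prefix="sec", drop_first=True).astype(float)
# df = pd.concat([df, dummies], axis=1)
# cues = ["intent", "ability", "effort", "luck", "ease"]   # averaged across components
# for factor in ["presentation", "attention", "discussion", "planning",
#                "workload", "grading", "accomplishment"]:
#     print(factor, forced_then_stepwise(df, list(dummies.columns), cues, factor))
```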

RESULTS

Factor Analysis of EIR

The 21 EIR items were factor-analyzed, and a principal components analysis with varimax rotation yielded the same seven factors reported by Frey et al. (1975). The Cronbach alpha interitem reliability estimates for each of the seven factors were: clarity of presentation, r = .90; workload, r = .87; personal-attention, r = .88; class discussion, r = .86; organization-planning, r = .84; grading, r = .87; student accomplishment, r = .88. All reliability estimates were significant at or beyond the .01 level. Intercorrelations among the seven factors were low to moderate, but generally significant (p < .01), ranging from r = .05 to .68.
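A minimal sketch of the Cronbach alpha computation for one three-item factor, assuming a simple respondents-by-items array (not the authors' code):

```python
# Cronbach's alpha for a set of items: alpha = k/(k-1) * (1 - sum(item variances)/variance(total)).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items belonging to one factor."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Illustrative 7-point ratings for a hypothetical three-item factor:
# three_items = np.array([[6, 7, 6], [4, 5, 4], [7, 7, 6], [3, 4, 3]])
# print(round(cronbach_alpha(three_items), 2))
```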

Effects of Course Performance and Instructor

Since the major convergent-discriminant attribution hypotheses suggested that internal and external attributions would mediate ratings regardless of course performance, a series of regression analyses were performed to test the effects of course achievement against the seven course evaluation factors. Student grades were reasonably normally distributed; approximately 50% of the students obtained point totals falling in the B- to B+ range, 30% obtained point totals equalling a grade of A, and 20% obtained a total equal to a C. Sex, major area, and number of completed semesters were also included in these analyses. In general, these items did not make a significant contribution to the prediction of any of the course evaluation factors. Course achievement, however, did enter as a significant predictor of one course evaluation item, instructor fairness in grading, and was correlated significantly (r = .22, p < .001) with this criterion.

The results of the regression analyses in which instructor was entered first as a dummy variable are presented in Table 1. For each of the seven regressions, the instructor variable is listed first, followed by the attributional cue items listed in the rank order of their importance in improving the predictability of the ratings. The reported F values signify the importance of the variable to the regression equation as of the last step.

All but one (workload) of the multiple correlations produced by combining the instructor variable and the five attributional cue items are above .50, and all are highly significant (p < .001), accounting for from 25% (grading) to 36% (accomplishment) of the criterion variance (see Table 1). The instructor variable accounts for 8 to 18% of the criterion variance.


TABLE 1
STEPWISE MULTIPLE REGRESSION ANALYSES: ATTRIBUTION CUES x COURSE EVALUATION FACTORS WITH INSTRUCTOR VARIABLE FORCED IN FIRST

Rank  Variable        R       R²      Increase in R²   Simple r    F

Presentation
 --   Instructor(a)   .424    .180    .180             --(b)       6.23***
 1    Ability         .574    .329    .149             .426(c)     63.04***
 2    Effort          .588    .346    .020             .403        7.33**
 3    Luck            .591    .350    .003             -.199       1.43
 4    Intent          .591    .350    .000             .336        .06
 5    Ease            .592    .350    .000             -.094       .07

Attention
 --   Instructor      .431    .186    .186             --          6.??***
 1    Ability         .549    .302    .116             .376        46.88***
 2    Ease            .563    .317    .016             -.163       6.45**
 3    Intent          .577    .333    .016             .353        6.?3**
 4    Effort          .577    .333    .000             .323        .01
 5    Luck(d)         --      --      --               --          NS

Discussion
 --   Instructor      .372    .138    .138             --          4.55***
 1    Ability         .500    .250    .112             .381        42.1?***
 2    Luck            .520    .270    .020             -.236       ?.89**
 3    Effort          .526    .277    .006             .362        2.43
 4    Intent          .526    .277    .001             .290        .21
 5    Ease            .527    .277    .000             -.121       .10

Planning
 --   Instructor      .414    .172    .171             --          5.??***
 1    Effort          .527    .278    .106             .370        41.63***
 2    Ability         .532    .283    .005             .342        2.01
 3    Luck            .535    .286    .003             -.181       1.18
 4    Intent          .535    .287    .001             .300        .25
 5    Ease            .536    .287    .000             -.066       .18

Workload
 --   Instructor      .308    .095    .095             --          2.97**
 1    Effort          .383    .147    .052             .222        17.34***
 2    Ability         .414    .171    .024             .078        ?.2?**
 3    Intent          .432    .187    .016             .233        5.37*
 4    Ease            .436    .190    .003             -.094       1.15
 5    Luck            .438    .192    .002             -.079       .65

Grading
 --   Instructor      .306    .094    .094             --          ?.94**
 1    Ability         .474    .224    .130             .413        47.62***
 2    Luck            .490    .240    .016             -.241       6.01*
 3    Intent          .503    .253    .013             .350        4.85*
 4    Effort          .504    .254    .000             .372        .14
 5    Ease            .504    .254    .000             -.123       .01

Accomplishment
 --   Instructor      .288    .083    .083             --          2.57**
 1    Ability         .579    .335    .252             .547        107.5***
 2    Intent          .605    .366    .030             .482        13.43**
 3    Luck            .605    .366    .001             -.186       .27
 4    Ease            .605    .366    .000             -.043       .08
 5    Effort          .605    .366    .000             .447        .02

Note. These regressions are based on Ns ranging from 279 to 283 (a maximum of 82 subjects were deleted for missing data). A question mark replaces a digit that is illegible in the available copy.
(a) Since the instructor factor was forced into the equation first, it was not assigned a rank.
(b) The instructor factor, as the first step in the equation, was based on the separate entries of ten sections; thus an overall simple r based on all ten sections taken together was not obtained.
(c) For an N of 279, any correlation above .149 was significant at p < .01.
(d) The luck cue had such a nonsignificant relationship to the attention criterion that it did not enter the equation.
* p < .05. ** p < .01. *** p < .001.

For each equation, excepting workload, the introduction of the first attributional cue item results in a 10% or greater increase in R². And, for student accomplishment, attribution to ability accounts for a 25% increase in criterion variance while the instructor variable only accounts for 8%. After the entry of the first attributional cue item, successive attributional cues show a marked decrease in the criterion variance for which they can account. Among the five attributional cues, ability entered first after instructor in five of the equations and second in the other two, and thus appeared to be the most significant attributional cue when the criterion was averaged across the three course components.
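As context for these increments, the conventional test for the change in R² when q predictors are added to a model already containing p predictors is the hierarchical F ratio below (a standard textbook formulation, not necessarily the exact statistic printed by the authors' SPSS run, which, as noted above, was evaluated as of the last step):

\[
F(q,\; N - p - q - 1) \;=\; \frac{\left(R^2_{\mathrm{full}} - R^2_{\mathrm{reduced}}\right)/q}{\left(1 - R^2_{\mathrm{full}}\right)/(N - p - q - 1)}
\]

With N near 280 and the ten section dummies forced in first, the ability cue's .149 increment over the instructor-only R² of .180 yields an F on the order of 60 on 1 and roughly 268 degrees of freedom, in line with the large, highly significant values reported for the first cue entered in each equation.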

Examination of the simple correlations in Table 1 indicates that the relationships between the internal cues and the course evaluation factors were always positive, and the correlations between the external cues and course evaluation factors were negative. The correlations between internal cues and course evaluations tended to be consistently higher than those between the external cues and evaluation factors.


TABLE 2
INTERCORRELATIONS OF INTERNAL AND EXTERNAL ATTRIBUTION CUES

Attribution cues    Intent    Effort    Ability    Luck

Intent                --
Effort               .626
Ability              .602      .747
Luck                -.209     -.327     -.275
Ease                -.032     -.220     -.101      .516

Note. For an N of 325 (36 subjects were deleted for missing data), any correlation above .11 is significant at p < .01.

Internal versus External Analysis

The correlation matrix presented in Table 2 showing the intercorrelations of the attributional cues clearly indicates that internal cues were negatively related to the external cues. Moreover, the internal cues were strongly and positively related to each other while the external cues, luck and task ease, also showed a high, positive intercorrelation. Thus, the internal and external attribution cue blocks each appeared to reflect a distinctly different locus of attributional direction.

The results of the regression equations in which the internal and external cue blocks were separated and used to predict the course evaluation factors are presented in Table 3. The first equation listed for each course evaluation factor is based on an average of the attributional items across the three course components. Regression equations are then reported for each course component taken separately. The multiple Rs reported in Table 3 show that the internal cues account for approximately twice as much variance as the external cues in predicting course evaluations. Additionally, the internal factors show a positive relationship to course evaluations while the external factors show a negative one. Thus, persons who attributed their performance to internal factors evaluated the course more positively, and persons who attributed their performance to external factors evaluated the course more negatively. Again, when the attributional cue items were averaged across the three course dimensions, ability appeared to be the most important cue.
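Table 3's entries are standardized (beta) coefficients from separate internal-cue and external-cue equations. A simplified sketch of that logic (hypothetical column names; one simultaneous fit per block rather than the fixed-order stepwise entry described under Method) is:

```python
# Hypothetical sketch (not the authors' SPSS analysis): regress one course-evaluation
# factor on the internal cue block and, separately, on the external cue block,
# reporting standardized (beta) coefficients and the multiple R for each block.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def standardized_betas(df: pd.DataFrame, predictors: list[str], criterion: str):
    """Return (betas, multiple R) from an OLS fit on z-scored variables."""
    data = df[predictors + [criterion]].dropna()
    z = (data - data.mean()) / data.std(ddof=1)            # z-score every variable
    fit = sm.OLS(z[criterion], sm.add_constant(z[predictors])).fit()
    return fit.params[predictors], np.sqrt(fit.rsquared)

# Illustrative usage for the group-project component (hypothetical column names):
# betas_int, R_int = standardized_betas(df, ["intent_group", "ability_group", "effort_group"],
#                                       "presentation")
# betas_ext, R_ext = standardized_betas(df, ["luck_group", "ease_group"], "presentation")
```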

The pattern of relationships between the internal attribution cue items and course evaluation factors appears to differ between the three course components (see Table 3). Internal attributions made for performance on group projects consistently account for more criterion variance than internal attributions made for either quizzes or individual papers. The only exception to this rule appears in the relation between internal attribution cues and the grading factor.


TABLE 3
BETA COEFFICIENTS FOR ATTRIBUTION CUES FROM SEPARATE INTERNAL AND EXTERNAL REGRESSION ANALYSES PREDICTING COURSE EVALUATION FACTORS FOR EACH COURSE COMPONENT

                          Internal cues                          External cues
Course component      Intent    Ability   Effort    R         Luck      Ease      R

Presentation
  Overall(a)           .08       .26**     .16      .45***    -.22**     .03      .20**
  Quizzes              .09       .21*      .04      .30***    -.21***    .04      .20**
  Individual           .13*      .06       .23**    .36***     .03      -.23***   .21***
  Group                .30***    .23***    .17**    .48***    -.26***    .18**    .24***

Attention
  Overall              .20**     .25**     .02      .41***    -.15*     -.07      .20**
  Quizzes              .15*      .19**    -.04      .27***    -.13*      .01      .13
  Individual           .14*      .10       .18*     .36***    -.02      -.20**    .21***
  Group                .30***    .19**     .02      .43***    -.20**    -.01      .20**

Discussion
  Overall              .05       .23**     .15      .40***    -.25***    .02      .24***
  Quizzes              .11       .12       .06      .24***    -.13       .07      .12
  Individual           .00       .19*      .20**    .36***    -.14*     -.14*     .25***
  Group                .21**     .15**     .13*     .42***    -.26***    .06      .24***

Planning
  Overall             -.09       .12       .22**    .39***    -.21**     .06      .19**
  Quizzes              .14*      .14       .06      .28***    -.22***    .10      .20**
  Individual           .03       .02       .27**    .31***     .00      -.14*     .14*
  Group                .20**     .12       .15**    .41***    -.2?***    .09      .18**

Workload
  Overall              .21**    -.3?***    .29***   .30***    -.04      -.07      .10
  Quizzes              .21***   -.10       .15      .2?**     -.00      -.04      .04
  Individual           .06      -.15       .22**    .1?*      -.01      -.04      .06
  Group                .23***   -.11       .15*     .28***    -.09      -.08      .15*

Grading
  Overall              .13       .27***    .09      .44***    -.25***    .02      .24***
  Quizzes              .13*      .23**     .08      .38***    -.20***    .05      .19**
  Individual           .18**     .10*      .09      .37***    -.12      -.22      .3?**
  Group                .1?**     .14*      .10      .4?***    -.22***    .17**    .21***

Accomplishment
  Overall              .24***    .41***   -.01      .58***    -.24***    .10      .20**
  Quizzes              .24***    .32***   -.04      .45***    -.20       .06      .15**
  Individual           .17**     .25***    .10      .45***    -.04      -.12      .14*
  Group                .33***    .22***    .12      .54***    -.24***    .15**    .23***

Note. These regression equations are based on Ns ranging from 290 to 294 (a maximum of 71 subjects were deleted for missing data). A question mark replaces a digit that is illegible in the available copy.
(a) The overall equation is based on the average of the attribution cue items across the three course components.
* p < .05. ** p < .01. *** p < .001.


The strongest relationships appear to be between internal attributions for group performance and the presentation, attention, discussion, planning, and accomplishment factors. Note that four of the above five factors were deemed critical to the effective implementation of the group project component.

When the analysis is broken down by type of course component, ability appears as the most significant cue only in the regression equations for quizzes, i.e., ability is a significant contributor in four equations, accounting for the most variance in each (see Table 3). For group projects, intentionality makes a significant contribution in seven of seven equations, and accounts for more variance than the other two internal cues in five of those regressions. Effort, however, appears to be the most significant cue for individual papers, accounting for the most criterion variance in five of the seven equations. As external cues, luck and ease generally appear less important than any of the internal cues. Examination of Table 1 shows that luck made a significant contribution in only two equations, discussion and grading, and ease made a significant contribution in only one equation. When the external cues are taken separately as predictors, as in Table 3, luck appears much more frequently as a significant contributor to the regression equations than does task ease.

DISCUSSION

The results clearly indicate that attributional judgments about the causes of student performance in a course are significantly related to student evaluations of instruction. Further, the five attributional cues of intentionality, ability, effort, luck, and ease consistently accounted for 10-25% of the variance in predicting the seven course evaluation factors even after the variance associated with the individual instructor had been removed. The attributional cues, in fact, accounted for as much variance as the instructor factor. While a great deal of variance remains to be accounted for, the relationships obtained between the attributional cues and course evaluations in both simple and multiple correlations seem particularly significant in view of the fact that no consistent relationship was obtained between course performance and the evaluations. From a validity standpoint, it would be expected that a major portion of variance in instructor ratings ought to be accounted for by differences between instructor styles and class performance. The rating process, however, is an act of perceiving, and the present study indicates that attributional beliefs affecting the perceptual processes of student raters are as important in determining course ratings as are any objective differences between instructors.

The internal and external attribution evidence for convergent and discriminant validity was also impressive. A consistent positive relationship was obtained between the internal cues and course evaluations, and a consistent negative relationship was obtained between the external cues and course evaluations. It appears that when students attributed the causes of their performance to internal cues, they rated the course more positively; and when they attributed the causes of their performance externally, they rated the course more negatively. This attribution pattern suggests that students may rate courses on the basis of a rational, cognitive information-processing assessment process rather than on the basis of positive or negative feelings associated with a good or poor performance and grade.

A rational assessment process may involve the following set of cognitions associated with internal attributions and positive course evaluations: a good course is one that challenges student ability and effort, and thus students who intend to do well, have the ability, and put forth effort, succeed, while those who do not, fail (see Weiner et al., 1971). Cognitions associated with external attributions may lead to negative evaluations. If students believe that their intentions, ability, and effort have little to do with how well or poorly they performed, and instead believe that performance is mostly a function of luck or ease, they have little evidence upon which to rate the course positively. There is little support in the present study for bias in ratings related to positive or negative affect associated with a good or poor performance. Thus, instructors who want better ratings may need to be more concerned with how students are attributing the causes of their performance than with the grade they give for the performance.

The structure of the grading system may also be an important determinant of student beliefs in their own internal control over course performance. The grading system for the course used in this study was based upon criterion-referenced measurement procedures that provided students with opportunities to improve their grades by redoing quizzes, group projects, and individual papers. The average and modal student performance in the course was a point total equivalent to a grade of B, and while some students received As and others a grade of C or less, they may all have believed that a B was obtainable given some ability, effort, and positive intentions.

Further evidence for convergent and discriminant validity was obtained when separate analyses were done for each type of course component. As expected, group performance attributions showed the strongest relationship to most of the course evaluation factors. This strong relationship between group performance attributions and course evaluation factors was expected because the instructor was most active in presenting, discussing, giving attention, and organizing during this component, and further, this phase occupied 90% of class time. The strong relationship between group performance attributions and ratings of accomplishment also seems logical since the component involving most of the in-class instructional time may have contributed most to perceptions of accomplishment. Regressions for workload and grading ratings did not show this differential correlation pattern between attributions and evaluations across the three course components. Students may have perceived that each course component contributed equally to the workload (considering both in- and out-of-class work) and the final grade. Additionally, it should be noted that as the complexity of the course task being attributed increased (i.e., quizzes are less complex than group projects and individual papers), the number of causal cues significantly related to the course evaluation factors increased. Logically, more complex tasks ought to involve more causal factors than less complex ones.

The attributional cues of intentionality, ability, and effort were most important for predicting positive course evaluations. For group projects, the intentionality cue was most predominant. Since the group component was a novel approach to instruction, positive ratings may have required that students perceive the importance of this kind of innovative teaching to accomplishing course goals. Based on this analysis, instructors could improve ratings by focusing on the intentionality dimension, using a variety of techniques to increase student perceptions of the intrinsic importance of doing well on a particular course assignment. Further, instructors may need to develop methods for increasing the strength of student beliefs in an effort-by-performance or ability-by-performance covariation (see Weiner, 1974). Clearly, courses that do not engender a covariation between internal attribution cues and performance may not receive positive evaluations.

The apparently nonbiased relationship obtained between internal and external attribution cues and course evaluations may only hold when a behaviorally oriented rating scale such as the EIR is used. Many researchers (e.g., Brown, 1976; Crittenden & Norr, 1975; Sullivan & Skanes, 1974) have noted that a systematic social perception bias is most likely to appear when the rating items ask for very global and highly evaluative judgments (e.g., “This instructor was among the best I’ve ever had.”). Evidence from the present study suggests that when the elimination of bias is important, as it ought to be when the ratings contribute to decisions regarding promotion and tenure, a behaviorally oriented scale would serve the purpose most effectively. In support of the present study, the relationships between the social perception variables and course evaluations were studied in a naturalistic setting (see Crittenden & Norr, 1975). Since the present study found little evidence for bias when behaviorally oriented course ratings were taken, future research may need to focus specifically on conditions under which attributional bias in the rating process does occur.

It should be noted that the results of the present study may not generalize to college courses using a more traditional approach. The course used as the source of data in this study, however, has many more similarities to a traditional college format than some other novel and highly structured approaches (e.g., the Personalized System of Instruction; Keller, 1968) in that instructors meet with students for regularly scheduled class hours and the instructor plays a significant and important role in classroom activities. In particular, there is a great deal of teacher-student interaction during implementation of the small group project component. In fact, the factor structure of course ratings in traditional courses and the structure found in ratings of courses using the format employed in this study have been shown to be virtually identical, indicating that students tend to rate the same dimensions of teacher behavior in the present format as they do in more traditionally structured courses (Aversano & Feldhusen, Note 1). Thus, students have the opportunity to observe many of the same behavioral dimensions that would be observed in a lecture format. It seems reasonable to suggest, then, that the results of the present study may be generalized to other, more traditional course formats. The most critical underlying dimension may be the degree to which the instructor or course format establishes a belief that student performance is a function of the internal factors of intention, ability, and effort.

REFERENCES

BROWN, D. L. Faculty ratings and student grades: A university wide multiple regression analysis. Journal of Educational Psychology, 1976, 68, 573-578.

COSTIN, F., GREENOUGH, W. T., & MENGES, R. J. Student ratings of college teaching: Reliability, validity, and usefulness. Review of Educational Research, 1971, 31, 566-570.

CRITTENDEN, N. S., & NORR, J. L. Student values and teacher perception: A problem in person perception. Sociometry, 1973, 36, 143-151.

CRITTENDEN, N. S., & NORR, J. L. Some remarks on "student ratings: validation." American Educational Research Journal, 1975, 12, 429-434.

CROWE, M. H., & FELDHUSEN, J. F. Student perceptions of organizational features in relationship to course ratings. Contemporary Educational Psychology, 1976, 1, 376-383.

FELDHUSEN, J. F., AMES, R., & LINDEN, K. W. Designing instruction to achieve higher level goals and objectives. Educational Technology, 1974, 14(10), 21-24.

FREY, P. W. Student ratings of teaching: Validity of several rating factors. Science, 1973, 182, 83-85.

FREY, P. W., LEONARD, D. W., & BEATTY, W. W. Student ratings of instruction: Validation research. American Educational Research Journal, 1975, 12, 435-447.

HEIDER, F. The psychology of interpersonal relations. New York: John Wiley, 1958.

KELLER, F. S. Goodbye teacher. Journal of Applied Behavior Analysis, 1968, 1, 79-89.

KELLEY, H. H. Attribution theory in social psychology. In D. Levine (Ed.), Nebraska symposium on motivation. University of Nebraska Press, 1967.

KELLEY, H. H. Attribution in social interaction. Morristown, N.J.: General Learning Press, 1971.

MCKEACHIE, W. J., LIN, Y., & MANN, W. Student ratings of teacher effectiveness: Validity studies. American Educational Research Journal, 1971, 8, 435-445.

NIE, N. H., HULL, C. H., JENKINS, J. G., STEINBRENNER, K., & BENT, D. H. Statistical package for the social sciences (2nd ed.). New York: McGraw-Hill, 1975.

REZLER, A. G. The influence of needs upon the student's perception of his instructor. Journal of Educational Research, 1965, 58, 282-286.

SULLIVAN, A. M., & SKANES, G. R. Validity of student evaluation of teaching and the characteristics of successful instructors. Journal of Educational Psychology, 1974, 66, 584-590.

TATENBAUM, T. J. The role of student needs and orientations in student ratings of teachers. American Educational Research Journal, 1975, 12, 417-429.

WEINER, B. Achievement motivation and attribution theory. Morristown, N.J.: General Learning Press, 1974.

WEINER, B., FRIEZE, I., KUKLA, A., REED, L., REST, S., & ROSENBAUM, R. Perceiving the causes of success and failure. Morristown, N.J.: General Learning Press, 1971.

WEINER, B., & KUKLA, A. An attributional analysis of achievement motivation. Journal of Personality and Social Psychology, 1970, 15, 1-20.

REFERENCE NOTES

1. AVERSANO, F., & FELDHUSEN, J. F. Instructor rating factors under a novel instructional format: A factor analytic study. Manuscript submitted for publication, 1977.