TRANSCRIPT
Mythbusters: Exploring teaching staff beliefs about student feedback in evaluation surveys
Carolyn Newbigin BA, B Psych, Grad Dip Psych, Grad Cert Social Research (ADSRI)
Social Research Specialist, Planning & Quality Unit, UTS
& Dr Peter Kandlbinder BEd (SCAE), MEd (UTS), PhD
Senior Lecturer, Institute for Interactive Media and Learning, UTS
November 2012
Introduction
• The use of student evaluations of teaching (SETs) – why is this still considered contentious for teaching staff?
• Myths abound regarding the validity of survey instruments, the accuracy of student perceptions of teaching, and the ‘popularity contest’, to name a few
• Of interest to the current study:
• How adamant are teachers in their views?
• How accurate are these views compared with the literature?
• What are some of the common features of teaching staff who disagree with the ways that SETs are used in universities?
Method
• Online survey sent to every member of teaching staff who was subject to an SET in the first semester of this year (at UTS, this is the Student Feedback Survey, or SFS)
• Measures included:
• A list of common ‘myths’ based on an extensive literature review
• Job role (e.g., tutor, lecturer, senior lecturer, course coordinator, subject coordinator)
• Age, Gender, Length of Service
• Attitudes towards institutional uses of the SFS (based on work done by Stein and colleagues in New Zealand universities)
• Data Driven Instruction (based on Harris, 2011; McLeod, 2005)
Participants
Faculty | Number of Respondents | % of Total Respondents | % of Total UTS Teaching Staff in Each Faculty
Faculty of Arts and Social Sciences (FASS) | 101 | 22.4% | 16.7%
Faculty of Law | 37 | 8.2% | 10.3%
Faculty of Design Architecture and Building (DAB) | 44 | 9.8% | 10.7%
Faculty of Engineering and IT (FEIT) | 75 | 16.6% | 18.6%
Faculty of Nursing, Midwifery and Health (NMH) | 30 | 6.7% | 6.8%
Faculty of Science | 55 | 12.2% | 15.7%
UTS Business School | 97 | 21.5% | 21.3%
UTS Pharmacy | 6 | 1.3% | Data unavailable
Other | 3 | 0.7% | N/A
Did not respond to this question | 3 | 0.7% | N/A
Total | 451 | 100.0% | 100.0%
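One way to read the last two columns is as a representativeness check: does each faculty's share of respondents roughly match its share of UTS teaching staff? The sketch below is only an illustration of that check, not part of the study; it runs a chi-square goodness-of-fit test on the table's figures, leaving out the rows with no published staff share (Pharmacy, Other, non-response).

```python
# Illustrative representativeness check on the table above (not from the study).
from scipy.stats import chisquare

respondents = {"FASS": 101, "Law": 37, "DAB": 44, "FEIT": 75,
               "NMH": 30, "Science": 55, "Business": 97}
staff_share = {"FASS": .167, "Law": .103, "DAB": .107, "FEIT": .186,
               "NMH": .068, "Science": .157, "Business": .213}

observed = list(respondents.values())
total = sum(observed)
# Rescale the staff shares so the expected counts sum to the observed total.
share_sum = sum(staff_share.values())
expected = [total * staff_share[f] / share_sum for f in respondents]

chi2, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a small p would suggest over/under-representation
```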
Myth 1: “The student is expecting a high grade”
What do teaching staff believe?
[Chart: % of responses for “The student is expecting a high grade”: Much Lower 5.8%, Lower 19.5%, No Impact 25.1%, Higher 40.5%, Much Higher 9.1%]
Myth 1: “The student is expecting a high grade”
What do we know?
Evidence FOR:
• Nowell and others (2007; 2010) confirmed that higher expected grades were significantly positively correlated with student evaluations of teaching.
• Langbein (2008) found that an unexpected grade increase correlated with an increase of approximately 10% in mean scores given by students on teaching evaluations.
• Significant correlations were also reported by Feldman (1997), Centra (2003) and Spooren & Mortelmans (2006).

Evidence AGAINST:
• A review by Aleamoni (1999) cited 24 studies which found no relationship between grade expectation and teacher evaluation ratings, alongside 37 studies which found significant positive relationships between these factors.
• Jones and colleagues (2012) argued that this myth is particularly problematic because there is a risk of grade inflation, and of deflating the challenge of course work, in order to get students to give favourable ratings.
• This is countered by the argument that better teachers encourage their students to work harder and therefore earn higher grades, known in some of the literature as the ‘Validity Hypothesis’ (see Spooren & Mortelmans, 2006).
Myth 2: “The lecturer is ‘popular’ or entertaining”
What do teaching staff believe?
[Chart: % of responses for “The teacher is ‘popular’ or entertaining”: Much Lower 0.3%, Lower 0.5%, No Impact 7.3%, Higher 61.4%, Much Higher 30.6%]
Myth 2: “The lecturer is ‘popular’ or entertaining”
What do we know?
Evidence FOR:
• One of the main studies supporting the ‘popularity contest’ myth is the ‘Dr Fox’ study conducted by Naftulin and colleagues in 1973 (cited and critiqued in Kulik, 2001). In this study an actor delivered an enthralling but factually incorrect mathematics lecture to a group of medical educators and then received highly favourable feedback. This study is considered so methodologically flawed as to be a cautionary tale.
• Another widely cited study, by Shevlin and colleagues (2000), argued that the ‘charisma’ of the lecturer predicted student ratings of teaching; however, the research used only a single item (“The lecturer has charisma”) to measure this construct.

Evidence AGAINST:
• As cited in a review by Aleamoni (1999), several studies, including those by Tang (1997), Johannessen (1997) and Marsh & Bailey (1993), indicate that students rate teachers on the basis of factors such as instructional effectiveness, being prepared and organised, responding to questions and being courteous to students, rather than on being merely ‘entertaining’.
• Spooren & Mortelmans (2006) demonstrated that a higher-order factor influences student ratings on seven sub-dimensions of teaching effectiveness (e.g., clarity of objectives, presentation skills and help from the teacher during the learning process). They describe this factor as ‘teacher professionalism’ rather than popularity.
Myth 3: Surveys being administered online rather than in class has an impact on SET results
What do teaching staff believe?
[Chart: % of respondents for “The survey is now collected online”: Much Lower 12.6%, Lower 37.4%, No Impact 42.0%, Higher 6.3%, Much Higher 1.8%]
Myth 3: Surveys being administered online rather than in class has an impact on SET results
What do we know?
Evidence FOR:
• Nowell and colleagues (2010) found that, when controlling for student characteristics, teacher characteristics and other background factors in a multiple regression analysis, the results of surveys collected online are significantly lower than those collected in class.
• Another persistent myth about online data collection is that students are more likely to respond if they have extreme views, and it is accepted that there is greater variability in responses due to lower response rates (Sax, Gilmartin & Bryant, 2003).

Evidence AGAINST:
• A number of studies (Layne, DeCristoforo & McGinty, 1999; Dommeyer et al., 2002, 2004; cited by Nowell et al., 2010) found no significant differences in the mean ratings of instructors between online and in-class surveys.
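The Nowell et al. (2010) finding described above comes from a regression in which the online/in-class comparison is made while holding other factors constant. The sketch below illustrates that general approach only; it is not the study's code, and the file name, column names and choice of controls (rating, online, expected_grade, class_size, discipline, teacher_experience) are all hypothetical.

```python
# Rough sketch of an "online vs in-class, controlling for covariates" regression.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("set_ratings.csv")  # hypothetical file: one row per survey response

# 'online' is assumed to be a 0/1 indicator for online administration.
model = smf.ols(
    "rating ~ online + expected_grade + class_size + C(discipline) + teacher_experience",
    data=df,
).fit()

# The coefficient on 'online' is the adjusted difference between online and
# in-class ratings; a significantly negative value would match Nowell et al. (2010).
print(model.params["online"], model.pvalues["online"])
```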
MYTHS BUSTED??
What are some of the common features of teaching staff who disagree with the ways that SETs are used in universities?
• What was measured by the ‘Attitudes towards the SFS’ scale:
• Institutional uses of the data, including:
• Promotion applications
• Salary review
• Learning and teaching awards
• Course reaccreditation
• Informing faculty decision making
• Summary reporting to external bodies (de-identified)
What are some of the common features of teaching staff who disagree with the ways that SETs are used in universities?
What might we expect to have an impact on views towards the use of teaching surveys?
• Background factors:
• Age?
• Gender?
• Full time or part time?
• Length of time spent working in the tertiary sector?
• Job role: tutors vs. lecturers vs. senior lecturers?
Predictors entered in three steps (hierarchical regression):
Step 1: Background variables
(Age, Gender, Length of Service, Full or Part time)
Step 2: Job role
(Tutor, Lecturer, Senior Lecturer)
Step 3: Data Driven Instruction
(DDI Scale)
What are some of the common features of teaching staff who disagree with the ways that SETs are used in universities?
Step 1: Background variables
(Age, Gender, Length of Service, Full or Part time)
ΔF(4,214) = 1.95, p = .104, ΔR² = .035
NOT SIGNIFICANT
Step 2: Job role
(Tutor, Lecturer, Senior Lecturer)
ΔF(1,213) = 3.08, p = .081, ΔR² = .014
NOT SIGNIFICANT
Step 3: Data Driven Instruction
ΔF(1,212) = 83.69, p < .001, ΔR² = .269
VERY SIGNIFICANT
E.g., DDI scale items include: ‘I make changes in my instruction based on my SFS results’; ‘I use SFS results to set targets and goals’
DDI also correlates positively and significantly with implementing ‘closing the loop’ strategies and with interest in SET data analysis workshops
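For anyone wanting to run this kind of analysis on their own evaluation data, the sketch below shows the general shape of a hierarchical (blockwise) regression with an incremental F test on the change in R-squared at each step, which is the form of the statistics reported above. It is not the authors' code, and the data file and column names (attitude, age, gender, service, fulltime, role, ddi) are hypothetical stand-ins for the survey measures.

```python
# Minimal sketch of a hierarchical regression with incremental F tests (illustrative only).
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def f_change(reduced, full):
    """Incremental F test for the R-squared gained by the block of added predictors."""
    df_num = full.df_model - reduced.df_model      # number of predictors added
    df_den = full.df_resid                         # residual df of the fuller model
    delta_r2 = full.rsquared - reduced.rsquared
    f = (delta_r2 / df_num) / ((1 - full.rsquared) / df_den)
    return delta_r2, f, df_num, df_den, stats.f.sf(f, df_num, df_den)

df = pd.read_csv("staff_survey.csv")               # hypothetical data file

step0 = smf.ols("attitude ~ 1", data=df).fit()                                    # intercept only
step1 = smf.ols("attitude ~ age + gender + service + fulltime", data=df).fit()    # background
step2 = smf.ols("attitude ~ age + gender + service + fulltime + role", data=df).fit()
step3 = smf.ols("attitude ~ age + gender + service + fulltime + role + ddi", data=df).fit()

for reduced, full, label in [(step0, step1, "Step 1: background"),
                             (step1, step2, "Step 2: job role"),
                             (step2, step3, "Step 3: DDI")]:
    dr2, f, dfn, dfd, p = f_change(reduced, full)
    print(f"{label}: dR2 = {dr2:.3f}, F({dfn:.0f},{dfd:.0f}) = {f:.2f}, p = {p:.3f}")
```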
Data Driven Instruction
So…
Understanding how to interpret and use your own teaching data is essentially the best predictor of positive attitudes towards the use of such data in universities
Next Steps
• Write up of preliminary results for publication
• Qualitative analysis of staff comments in the survey
• Post-study follow up with staff: different feedback mechanisms and a second survey (closing the loop, SFS data analysis workshop and a wait list control group)
• Structural equation modelling to more fully explore the predictors
• Match staff survey data with SFS results (with permission)