Leibniz University Hannover
Faculty of Humanities
Institute of Psychology
Measuring Education Science Students' Statistics Anxiety:
Conceptual Framework, Methodological Considerations, and Empirical Analyses
Research Report Leibniz University Hannover: Institute of Psychology 2018
Dr. Günter Faber Leibniz University Hannover, Institute of Psychology
Dr. Heike Drexler Leibniz University Hannover, Institute of Psychology
MA Alexander Stappert University of Vechta, Department of Social and Education Sciences
BA Joana Eichhorn Leibniz University Hannover, Education Science Master’s Programme
URL: http://www.psychologie.uni-hannover.de/guenter_faber.html
Mail: [email protected]
Copyright © Faber, Drexler, Stappert, Eichhorn 2018
DOI: 10.13140/RG.2.2.18109.31201
Table of contents
1 Abstract
2 Introduction
2.1 Conceptual perspectives on statistics anxiety
2.2 Methodological perspectives on statistics anxiety measures
2.3 Approaching refined measurement
3 Validation framework and objectives
4 Method
4.1 Participants and procedure
4.2 Measures
4.3 Data analyses
5 Results
5.1 Scale formation
5.2 Validation results
5.3 Additional multivariate analyses
6 Conclusion
7 References
8 Appendix
1 Abstract
At higher education levels, the acquisition and application of research methods appears to represent a great challenge for a considerable number of students. As relevant empirical findings have consistently demonstrated, many students in social or human science programmes tend to develop and maintain negative attitudes toward compulsory methods courses and, in particular, perceive statistical demands as threatening study events; they are thus at risk of developing a heightened level of statistics anxiety. University settings therefore need to pay attention to this phenomenon and strive for a sound analysis of the students' concerns. Accordingly, over the past decades several questionnaires have been developed to assess university learners' statistics anxiety in order to obtain appropriate information for designing adaptive instructional and guidance strategies. However, against the theoretical and methodological background of test anxiety research, current instruments for assessing university students' statistics anxiety predominantly emphasize the affective component of the construct. In order to unfold the construct in a more exhaustive and differentiated manner, a scale for measuring university students' worry, avoidance, and emotionality cognitions was developed. In two samples of education science majors, the present study aimed at analyzing the scale's psychometric properties and at gaining preliminary validation results. In both samples, principal component analysis led to the formation of a unidimensional scale which proved sufficiently reliable. Its relations to domain-specific self-beliefs and background variables turned out as theoretically expected; thus, for the time being, the scale can claim criterion validity. In particular, the scale's total sum score correlated substantially with the students' mathematical self-concept, entity beliefs, and negative instrumental values.
Keywords: statistics anxiety, math self-concept, implicit entity beliefs, utility values
2 Introduction
Higher education settings usually need to manage a broad range of students' individual backgrounds, learning approaches, and study outcomes. In order to ensure an adaptive and effective academic environment, they must therefore rely on appropriate conceptualizations of the students' learning processes, which should help to analyze the individual and contextual factors of university achievement, to explain existing differences in the students' performance, and, eventually, to access reliable and valid information about the substantial determinants of university success or failure. For that purpose, educational research has offered several modeling perspectives to unfold the complexity of learning processes typical for university contexts. Though each of them focuses on learning and achievement in a different manner, all these conceptualizations widely agree in stressing the importance of students' individual aptitudes, bodies of knowledge, and motivational characteristics (Credé & Kuncel, 2008; De Clercq, Galand, Dupont, & Frenay, 2013; Entwistle, 2007; Price, 2014; van Rooij, Brouwer, Bruinsma, Jansen, Donche, & Noyens, 2018). For the most part, empirical research in the field has verified and further differentiated these conceptual assumptions. In particular, relevant meta-analyses have unanimously demonstrated the crucial role of students' cognitive abilities, prior knowledge, and latest school grades in predicting academic outcomes at university level. Furthermore, they have evidenced the impact of students' motivational orientations on their academic attainment (Richardson, Abraham, & Bond, 2012; Robbins, Lauver, Le, Davis, Langley, & Carlstrom, 2004; Schneider & Preckel, 2017).
The motivational orientations especially cover various types and levels of academic self-beliefs, which may essentially regulate the students' learning approach and thus affect their academic outcomes (Pintrich, 2004). The students' competence and control beliefs must be recognized as cognitive-motivational key constructs (Krampen, 1988; Schunk & Zimmerman, 2006). Competence beliefs refer to an individual's perceived capabilities to succeed in a given academic activity or task (Eccles & Wigfield, 2002). Control beliefs refer to an individual's perceived likelihood of accomplishing desired academic outcomes, or of avoiding undesired academic outcomes, by means of his or her own behavioral attempts (Perry, Hall, & Ruthig, 2005). Both types of beliefs result from an individual's cumulative success or failure experiences and must be seen as important personal resources for coping with academic requirements in a certain educational setting. Over time, competence and control beliefs will operate in a mutually reinforcing manner and contribute to implementing the students' more or less adaptive learning approach (Drew & Watkins, 1998; Robbins, Lauver, Le, Davis, Langley, & Carlstrom, 2004; Van Overwalle, 1989). Accordingly, students with high competence and control beliefs will develop a strong sense of self-efficacy and evolve learning strategies to persistently strive for successful outcomes, whereas students with low competence and control beliefs will develop strong concerns about not mastering the academic tasks at hand and possibly rely on less adaptive learning strategies. In the latter case, students are seriously prone to increasingly experience feelings of personal threat and, in the long term, to stabilize trait-like test anxieties (Putwain, 2018; Respondek, Seufert, Stupnisky, & Nett, 2017). As empirical findings from various university settings have consistently substantiated, an individually heightened level of test anxiety predicts less useful learning strategies (Spada, Nikcevic, Moneta, & Ireson, 2006; Virtanen, Nevgi, & Niemi, 2013) and leads to significantly lowered learning outcomes (Credé &
Kuncel, 2008; Feinberg & Halperin, 1978; Hancock, 2001; Kassim, Hanafi, & Hancock, 2007; Schneider & Preckel, 2017). Hence, the emergence, maintenance, and consequences of sustained test anxieties indicate a potential risk factor in students' learning, one which foreseeably threatens to impede their academic perspectives in a certain study domain or programme. Consequently, higher education settings should carefully pay attention to this phenomenon and strive for the implementation of appropriate instructional or guidance strategies.
Against this background, the learning of research methods, most notably the acquisition of statistical knowledge and competencies, appears to represent a stress- or anxiety-provoking domain in higher education contexts. Quite a number of undergraduate and graduate students in social science, education, psychology, and business programmes struggle with statistics (Mji & Onwuegbuzie, 2004; Onwuegbuzie & Wilson, 2003). Moreover, in comparison with other academic courses, their individually reported level of anxiety appears to be highest in the statistics domain (Dykeman, 2011). When dealing with the requirements of quantitative methods courses, which are commonly compulsory for earning their degree, these students mostly suffer from strong failure expectations and frequently experience feelings of apprehension and personal threat. They also perceive methodological competencies as difficult to acquire and as less useful for their current studies and later professional development (Murtonen & Lehtinen, 2003; Murtonen, Olkinuora, Tynjälä, & Lehtinen, 2008; Zeidner, 1991). As a result, they are at risk of developing and maintaining a heightened level of anxiety in the face of statistical analyses, particularly when confronted with statistical tasks of data gathering, processing, and interpreting (Cruise, Cash, & Bolton, 1985). Over the past decades, numerous empirical analyses from most diverse conceptual, methodological, and institutional backgrounds have yielded a vast amount of insights into the development, structure, and consequences of higher education students' statistics anxiety, and have thus progressively established a broad-based research line to clarify and elaborate the construct.
2.1 Conceptual perspectives on statistics anxiety
Structurally, the students' emerging statistics anxiety has to be considered a multidimensional construct reflecting the complex interplay of several cognitive, motivational, and physiological components (Rost & Schermer, 1989a). Against the background of relevant empirical findings from test anxiety research, it can be defined as a domain-specific form of performance or evaluation anxiety which manifests as repeatedly occurring worry cognitions, task-irrelevant and interfering thoughts, and marked states of emotional tension and physiological arousal (Zeidner, 1991). The worry component of test anxiety refers to the students' mental anticipation of failure and its negative consequences, whereas the emotionality component refers to their feelings of tenseness, nervousness, or distress, and the physiological component refers to their perceptions of bodily symptoms. Whereas some conceptual approaches subsume physiological reactions under the emotionality component, other approaches differentiate or extend these components (Cassady & Johnson, 2002; Schwarzer & Quast, 1985; Zeidner, 1998). Over the past decades, across a wide range of student samples and measurements, there has been ample evidence for these components being distinguishable but mutually reinforcing (Deffenbacher, 1980; Hodapp & Benson, 1997; Kieffer & Reese, 2009; Liebert & Morris,
1967; Sarason, 1984). In most cases, these components have been demonstrated to negatively affect students' learning processes and achievement outcomes. However, the worry component has generally turned out to most strongly predict academic performance or test results (Cassady & Johnson, 2002; Deffenbacher, 1980; McIlroy, Bunting, & Adamson, 2000; Rana & Mahmood, 2010; Schwarzer & Jerusalem, 1992; Seipp, 1991; Steinmayr, Crede, McElvany, & Wirthwein, 2016; von der Embse, Jester, Roy, & Post, 2018). The debilitating effect of worry cognitions on students' performance appears to be mainly caused by their strongly biased and task-irrelevant mode of information processing. High-anxious students usually tend to perceive threatening cues within a certain evaluation setting in an extremely attentive or even hypersensitive manner (Wine, 1982; Fernández-Castillo & Caurcel, 2015; Putwain, Connors, & Symes, 2010). They produce more and stronger self-doubts which interfere with the process of task completion, in particular by thinking more frequently and intensely about the failure they expect as inevitable (Sarason, 1984; Schwarzer, 1996). Thus, high-anxious students' attention and cognitions operate in a more self-focused than task-focused way. As a result, their cognitive resources must be seen as heavily preoccupied and markedly impeded (Ashcraft & Moore, 2009; Hopko, Ashcraft, Gute, Ruggiero, & Lewis, 1998; Richards, French, Keogh, & Carter, 2000). In the long term, high-anxious students tend to use less adaptive coping strategies. They prefer behavioral responses that immediately reduce their cognitive worries and emotional apprehension rather than fundamentally enhancing their task-relevant knowledge and learning approach (Rost & Schermer, 1989b; Zeidner, 1998). However, these strategies will neither help them handle their feelings of threat and distress nor facilitate their mastery of the academic task. Instead, they will most likely lead to the very failure outcome the students already fear. From a transactional perspective, the emergence and maintenance of test anxiety crucially contributes to establishing a self-reinforcing cycle of negative anxiety-outcome relations (Spielberger & Vagg, 1995; Zeidner, 1998).
Based on this conclusive body of evidence, a theoretically and methodologically sound framework for the statistics anxiety construct should explicitly take into account its cognitive, emotional, and physiological components. In particular, it is essential that it addresses relevant worry cognitions, as these are assumed to be closely linked to the debilitating effects of statistics anxiety on statistical learning and performance.
To date, however, most empirical analyses in the field have emphasized the emotional and physiological components of statistics anxiety (Cruise, Cash, & Bolton, 1985). They define the construct as feelings of anxiety, or as habitual anxiety, in the face of statistically loaded situations (Macher, Paechter, Papousek, & Ruggeri, 2012; Onwuegbuzie & Wilson, 2003). Admittedly, the measurement items used in these empirical analyses cover a wide range of statistical tasks that students typically encounter in everyday study or course contexts, and these items can be empirically assigned to various situation- or task-specific dimensions.
Factor analyses of the task- and course-related items of Zeidner's (1991) Statistics Anxiety Inventory thus led to the formation of a content-specific and a test-specific subscale. Likewise, factor analyses of the widely used Statistics Anxiety Rating Scale (Cruise, Cash, & Bolton, 1985) and the conceptually related Statistical Anxiety Scale (Vigil-Colet, Lorenzo-Seva, & Condon, 2008) yielded separate subcomponents concerning the students' interpretation and
test anxiety, their fear of asking for help and fear of statistics teachers (Baloğlu, 2002; Chew
& Dillon, 2014a; Chiesi & Primi, 2010; Chiesi, Primi, & Carmona, 2011; DeVaney, 2016;
Hanna, Shevin, & Dempster, 2008; Liu, Onwuegbuzie, & Meng, 2011; Oliver, Sancho, Ga-
liana, & Cebrià i Iranzo, 2014; Macher, Paechter, Papousek, & Ruggeri, 2012; Papousek, Rug-
geri, Macher, Paechter, Heene, Weiss, Schulter, & Freudenthaler, 2012; Sesé, Jiménez, Mon-
tano, & Palmer, 2015).
Similarly, the Statistics Anxiety Measure (Earp, 2007; Vahedi, Farrokhi, & Bevrani, 2011)
and the Statistics Comprehensive Anxiety Response Evaluation (Griffith, Mathna, Sapping-
ton, Turner, Evans, Gu, Adams, & Morin, 2014) revealed some distinct task- or situation-
specific subcomponents referring to statistically relevant course requirements and situations.
In summary, these approaches to measuring the statistics anxiety construct undoubtedly represent typically anxiety-evoking task features and test situations in a highly elaborate way. However, with the exception of the Statistics Anxiety Measure (Earp, 2007) and the short research scale used by Hong and Karstensson (2002), both of which include some worry items, it is notable that the other instruments fail to differentiate students' cognitive, emotional, and physiological anxiety reactions, in particular with regard to the critical worry component. Williams's (2013, 2015) findings underscore this conceptual shortcoming: they demonstrated that scores on the interpretation and test/class anxiety subscales of the Statistics Anxiety Rating Scale (Cruise, Cash, & Bolton, 1985) were only moderately correlated with students' tendency to worry.
In contrast, a concurrently operating research line in the test anxiety field has already decomposed the statistics anxiety construct and assessed the students' worry and emotionality responses separately. In most cases, however, a composite score including both components was used because of their high interrelation (Benson, 1989; Benson, Bandalos, & Hutchinson, 1994; Finney & Schraw, 2003; González, Rodríguez, Faílde, & Carrera, 2016; Hong & Karstensson, 2002). In this way, although only total anxiety scores were available, the interpretation of students' responses explicitly allowed for a traceable cognitive perspective. Furthermore, one relevant study analyzed the relations of both the worry and the emotionality component with several cognitive-motivational and achievement variables. In line with the findings of test anxiety research, its results demonstrated that the worry component better explained the students' performance in a statistics exam (Bandalos, Yates, & Thorndike-Christ, 1995).
While this research line essentially contributes to refining the statistics anxiety construct with respect to its motivationally operating components, its task- or situation-specific references appear less differentiated. That is, in all of these studies the items for assessing statistics anxiety referred exclusively to taking a statistical test or exam. This contextual limitation challenges the representativeness, or content validity, of the worry and emotionality measures, because their scores account for only a particular part of the relevant learning setting (Cohen, Manion, & Morrison, 2011; Haynes, Richard, & Kubany, 1995).
An appropriate framework of the statistics anxiety construct should merge the conceptual strengths of both research lines and therefore consider the relevant worry, emotionality, and bodily reactions students experience when dealing with the typically occurring task
features and examination procedures of their statistics course, or of any other course that requires statistical knowledge and skills. Refining the construct in this way, research in the field should advance in developing measurement tools that help to determine students' statistics anxiety in a psychologically as well as contextually more differentiated and, thus, more representative manner. Consequently, their construction should thoroughly specify the construct's key facets (Guttman & Greenbaum, 1998) and transform its a priori assumptions into concrete operationalizations mapping out both relevant anxiety reactions and typically threatening situations (Zeidner, 1998).
Moreover, another issue crucial to the conceptualization of the statistics anxiety construct concerns the role of students' avoidance tendencies. At the very beginning of empirical test anxiety research, Mandler and Sarason (1952) already posited that anxiety responses manifest as "implicit attempts at leaving the test situation" (p. 166). Subsequently, empirical findings provided support for this assumption and yielded sound evidence that students' avoidance cognitions are substantially related to their anxiety responses (Blankstein, Flett, & Watson, 1992; Galassi, Frierson, & Sharer, 1981; Genc, 2017; Hagtvet & Benson, 1997; Middleton & Midgley, 1997; Skaalvik, 1997; Zeidner, 1998). In particular, the analyses of Elliot and McGregor (1999), Pekrun, Elliot, and Maier (2009), and Putwain and Symes (2012) clearly demonstrated that students with heightened avoidance orientations reported a greater extent of worry cognitions and lower scores on subsequent exam performance. Conceptually, these empirical findings primarily originate from achievement motivation research, in most cases from goal orientation theories, which have developed several instruments to assess students' striving not to perform worse than others in a certain academic setting (Elliot & Murayama, 2008). Though it does not unfold the avoidance issue in an exhaustive manner, this motivational approach unarguably contributes to broadening and refining the test anxiety construct and thus offers promising perspectives for integrating the role of students' avoidance tendencies (Putwain, 2008).
To date, only a few conceptualizations of the test anxiety construct have addressed this issue and posited that students' escape or avoidance cognitions constitute an essential part of their worries and represent an important factor in eliciting interfering, task-irrelevant thoughts (Pekrun, Goetz, Kramer, Hochstadt, & Molfenter, 2004; Schwarzer & Quast, 1985). Future research efforts are required to further determine the relevance of avoidance thoughts within the worry component. Accordingly, in order to strive for a more differentiated analysis of the construct, research in the field should place enhanced emphasis on students' avoidance cognitions. Hence, it should provide appropriate measures designed not only to assess the students' worries about threatening failure outcomes, but also to capture their thoughts of avoiding involvement with threatening tasks or situations altogether. Currently available questionnaires either do not consider the students' avoidance cognitions at all (Griffith, Mathna, Sappington, Turner, Evans, Gu, Adams, & Morin, 2014; Onwuegbuzie & Wilson, 2003; Vigil-Colet, Lorenzo-Seva, & Condon, 2008) or include just a single item assessing avoidance behavior (Earp, 2007). They therefore do not allow for a proper clarification of the role of avoidance cognitions within the students' worry responses. The same applies to the single study that examined students' avoidance goals as predictors of statistics anxiety and demonstrated a moderate association (Zare, Rastegar, & Hosseini, 2011).
Research in the field should therefore refine its conceptualization of students' worry responses and develop instruments that explicitly capture avoidance cognitions with respect to statistically loaded tasks and situations. In this way, an important step toward the substantive and structural stages of construct validation would be taken (Benson, 1998).
2.2 Methodological perspectives on statistics anxiety measures
Students' statistics anxiety is commonly assessed using questionnaires consisting of items in Likert-scale format. With respect to statistically relevant task features or situations, these questionnaires usually offer response categories which ask students to rate, for each task- or situation-related item, the subjectively experienced intensity of their anxiety. The response alternatives range from "no anxiety" to "high anxiety" in the Statistics Anxiety Rating Scale (Cruise, Cash, & Bolton, 1985), from "not at all anxiety evoking" to "extremely anxiety evoking" in the Statistics Anxiety Inventory (Zeidner, 1991), from "not anxious" to "very anxious" in the Statistics Anxiety Measure (Earp, 2007), from "no anxiety" to "considerable anxiety" in the Statistical Anxiety Scale (Vigil-Colet, Lorenzo-Seva, & Condon, 2008), and from "no anxiety" to "severe anxiety" in the Statistics Comprehensive Anxiety Response Evaluation (Griffith, Mathna, Sappington, Turner, Evans, Gu, Adams, & Morin, 2014).
Thereby, it is left to the students to interpret the meaning of these categories and to decide implicitly to what extent their answer reflects their cognitive worries, negative emotions, or bodily reactions. Although these anxiety components have been demonstrated to be substantially associated, the ambiguity of such anchor labels (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003) hinders a differentiated and valid clarification of the students' anxious reactions and, hence, does not sufficiently comply with the conceptual perspectives on, and the empirical knowledge about, the test anxiety construct (Rost & Schermer, 1989a). Rather, these response categories risk compressing, or even concealing, the students' actual understanding of the response they have individually decided on (Krosnick & Presser, 2010). As a consequence, this response format may introduce a method-driven restriction into the interpretation of research data. From both a conceptual and a methodological perspective, future scale development in the field is therefore advised to apply response labels asking for the students' agreement with clearly and distinguishably worded items assessing their cognitive, emotional, and physiological reactions (Cohen, Manion, & Morrison, 2011; Rost & Schermer, 1989a).
Furthermore, the anchors currently in use might be considered emotionally loaded and may therefore trigger biased responses (Lee, 2006; Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). In principle, such biased responding can operate in different ways and produce contradictory effects. On the one hand, students experiencing a heightened level of statistics anxiety may be prone to avoid extreme statements such as "high anxiety", "extremely anxiety evoking", or "considerable anxiety" and thus tend to report a socially and emotionally more acceptable extent of statistics anxiety (Lunneborg, 1964; Zeidner, 1998), inasmuch as they would judge their anxious feelings a manifestation of personal weakness. They should therefore be motivated to report a somewhat lowered or trivialized level of anxiety and, thus, to protect their sense of self-worth (Harter, 1990). They might also feel embarrassed by the response wording and try to repress their anxiety symptoms; hence, the response wording
would provoke a certain kind of coping strategy (Rost & Schermer, 1989b). On the other hand, students with a heightened level of statistics anxiety may process these anchor labels as threatening stimuli in a highly sensitive way (Bar-Haim, Lamy, Pergamin, Bakermans-Kranenburg, & van IJzendoorn, 2007; Fernández-Castillo & Caurcel, 2015; Lawson, 2006) and, accordingly, report an inflated level of statistics anxiety (Mathews & MacLeod, 2002). Although empirical evidence clarifying the prevalence and direction of this emotional response bias is scarce, its potentially confounding effect on research data must be seen as highly plausible (Gaye-Valentine & Credé, 2013). In particular, possible response bias would affect the students' responses in an uncontrollable way and, eventually, seriously diminish their validity. Work on developing new measurement instruments in the field should take this methodological flaw into account and, as a precaution, use unloaded response categories.
2.3 Approaching refined measurement
To overcome the conceptual limitations and methodological shortcomings of current instruments, a new scale to assess university students' statistics anxiety was developed as a first attempt, while adopting the particular strengths of existing instruments. It was designed to approach a refined measure of the construct by meeting the following criteria: its items should (1) capture the students' anxiety responses in a clearly differentiable way and hence consider their worry and avoidance cognitions as well as their emotional reactions; (2) embed these various anxiety reactions in a representative range of statistically loaded task features and course situations that students typically encounter; and (3) use statements and response labels free of emotionally or motivationally biasing wording.
The construction of this scale for measuring university students’ “Worry, Avoidance, and
Emotionality Cognitions Encountering Statistical Demands” (WAESTA) largely followed the
procedure of facet theory using a mapping sentence (Guttman & Greenbaum, 1998). This map-
ping sentence served as a heuristic device to cover all major facets of the construct and, thus,
to achieve sufficient content validity (Edmundson, Koch, & Silverman, 1993). A pool of eli-
gible items was drawn up using this constitutive mapping sentence which included con-
ceptually relevant anxiety components, situational references, and intended response catego-
ries (Zeidner, 1998).
In particular, each item to represent the statistics anxiety domain was specified with respect to
four key facets: a relevant reaction facet referring to the worry, avoidance, and emotionality
component, and three contextual facets referring to the (1) outcome in a statistics exam, the
(2) individual learning of statistical procedures and handling of statistical demands, and the
(3) public mastering of statistical content. Furthermore, an additional range facet defined the
response categories to assess the students’ perceived magnitude of individual anxiety re-
actions. A four-point format was chosen as the response range – in order to avoid artificial
complexities in the respondents’ decision making (Lee & Paek, 2014; Lozano, Garcia-Cueto,
& Muñiz, 2008) and to ensure a cognitive-motivationally realistic and manageable number of
rating references.
For instance, statistics anxiety should manifest, more or less, as a student’s experienced or
anticipated worry about the exam outcome, the understanding of a certain statistical formula,
or the presentation of research findings.
This four-facet mapping sentence was used as a cross-classification template to systemati-
cally operationalize the statistics anxiety construct, as it allowed the various facet elements
to be combined in a differentiated manner (Hox, 1997). That way, a final scale version
with 17 four-point Likert-type rating items was built (Table 1).
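As a schematic illustration (not part of the original construction procedure), the cross-classification logic of the mapping sentence can be sketched by crossing the reaction facet with the three contextual facets; the facet labels below are our own paraphrases:

```python
from itertools import product

# Facet elements paraphrased from the mapping sentence (illustrative
# shorthand labels, not the original item wording)
reactions = ["worry", "avoidance", "emotionality"]
contexts = ["course exam outcome",
            "learning/handling of statistical demands",
            "public mastering of statistical content"]

# Every reaction x context cell of the template is a candidate item slot;
# the final scale filled these nine cells with 17 items in total
item_slots = list(product(reactions, contexts))
```

Crossing the two facets yields the nine cells of Table 1, which the 17 items were distributed across.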
All items concerned a mentally imaginable situation that the students should easily be able to
anticipate. Eight items referred to the students’ worries about their potentially expected failure
to master the course exam and to cope with several statistical requirements. Four items con-
cerned their cognitions to preferably avoid the statistics course and particular statistical de-
mands. Five items related to their emotional tension when confronted with a certain statistical
task (Appendix).
Table 1. WAESTA items assessing relevant anxiety reactions to statistically loaded task
features and course situations

Test Anxiety  | Course Exam | Understanding,           | Oral Task Presentation,
Component     | Outcome     | Explanation, Application | Explanation
--------------|-------------|--------------------------|------------------------
Worry         | 01, 14, 16  | 05, 08                   | 10, 11, 12
Avoidance     | 03          | 06, 17                   | 04
Emotionality  |             | 02, 07, 09, 13           | 15

Example (Item 07): “I would be quite nervous if I were asked to explain a chart from a
research report.”
Response range: Does not apply (1) … Applies in full (4)
The statistically loaded task requirements considered included the understanding of course
contents, the interpretation of quantitative research results, and the application of statistical
formulas and procedures. Likewise, the oral presentation and explanation of statistical content
in the public course situation was included. To warrant conceptual clarity, the avoidance items
should neither tap the students’ avoidance reactions of suppressing or substituting individually
occurring threat cognitions (Williams, 2015), nor should they refer to the students’ actual
avoidance strategies for coping with disliked or threatening academic events (Onwuegbuzie,
2004; Rost & Schermer, 1989b). Rather, they should operationalize the students’ mentally
processed avoidance thoughts or even escape illusions before or during statistical task
completion – and, thus, might indicate a specific subcomponent of worry cognitions.
Moreover, the component of physiological symptoms or bodily tensions was not explicitly
addressed. Instead, it was assumed to be indirectly inferable from the emotionality items.
Conceptually, this restriction seemed justifiable, since the students’ perceived affective state
should always reflect their actual physiological arousal (Zeidner, 1998).
3 Validation framework and objectives
As the WAESTA scale was to be examined for the first time, the present study was in-
tended as an exploratory investigation of its structural and criterion validity (John & Benet-
Martinez, 2000). A further aim was to describe its psychometric properties.
As test anxiety is assumed to be a multifaceted construct (Zeidner, 1998), the first step was to
analyze the factor structure of the WAESTA scale. As the WAESTA items were theoretically
designed to tap the worry, avoidance, and emotionality components of the statistics anxiety
construct, a clear three-factor solution might, at best, be expected. However, relevant research
findings in the test
anxiety field had demonstrated that these components are substantially correlated (Deffen-
bacher, 1980; Cassady & Johnson, 2002; Chin, Williams, Taylor, & Harvey, 2017; Gaye-
Valentine & Credé, 2013; Hodapp & Benson, 1997; Hong & Karstensson, 2002; Sarason,
1984; Sparfeldt, Schilling, Rost, Stelzl, & Peipert, 2005; von der Embse, Jester, Roy, & Post,
2018). Furthermore, in certain research contexts dealing with school students’ domain- or sub-
ject-specific test anxieties, all worry, avoidance, and emotionality items repeatedly loaded on
one common anxiety factor (Faber, 1993, 1995a, 2012b, 2015). Therefore, an accurate pre-
diction of the scale’s ultimate factor structure seemed difficult. Rather, the present study
explored the scale’s underlying structure tentatively – taking into account three alternatives:
a three-factor solution separating the worry, avoidance, and emotionality components; a
two-factor solution with the worry and avoidance items loading on a first factor and the
emotionality items loading on a second factor; and a one-factor solution subsuming all items.
Presuming the avoidance component to represent a specific worry element, the two-factor
solution appeared particularly plausible. Based on this initial analysis, the final version of the
WAESTA scale was to be determined and its psychometric properties examined.
The second step was to test the scale’s external validity, in particular its relationships with
selected convergent and discriminant criterion variables. For this purpose, the validation
framework included variables representing the students’ cognitive-motivational orientations
and the statistically relevant background.
As relevant cognitive-motivational variables, the students’ competence and control beliefs
were considered. Both types of beliefs are well proven to regulate students’ engagement and
learning approach in the long term. In particular, they essentially affect students’ anxiety
experience. As unfavorable competence and control beliefs usually come along with increased
expectancies of failure, they provoke a strong sense of personal threat and, thus, lead to an
individually heightened level of anxiety. For the purpose of scale validation, the students’
domain-specific self-concepts were assessed as relevant competence beliefs, and their implicit
competence theories as relevant control beliefs.
There is sound evidence that domain-specific academic competence beliefs and self-con-
cepts substantially predict the individually existing magnitude of test anxiety (Ahmed, Min-
naert, Kuyper, & van der Werf, 2012; Faber, 2012a; Goetz, Pekrun, Hall, & Haag, 2006).
Correspondingly, in the statistics domain the crucial role of self-concepts had been well esta-
blished. In most cases, high-anxious students reported a lowered self-concept of their own
mathematics or statistical competencies (Bandalos, Yates, & Thorndike-Christ, 1995; Benson,
1989; González, Rodrígez, Faílde, & Carrera, 2016; Macher, Paechter, Papousek, & Ruggeri,
2012; Onwuegbuzie & Wilson, 2003; Williams, 2014; Zeidner, 1991). Therefore, the
WAESTA scale scores should be assumed to correlate negatively and substantially with the
students’ mathematics self-concept. In addition, to clarify the domain-specificity of the
WAESTA scale, its relation to the students’ verbal self-concept was concurrently analyzed.
In line with the multidimensional nature of academic self-beliefs (Green, Martin, & Marsh,
2007; Marsh & O’Mara, 2008), research in the field had consistently demonstrated that
students’ mathematics anxiety is substantially related to their performance and motivation
in the mathematics but not in the verbal domain (Arens, Becker, & Möller, 2017; Faber, 1995b,
2015; Goetz, Frenzel, Pekrun, Hall, & Lüdtke, 2007; Gogol, Brunner, Preckel, Goetz, &
Martin, 2016; Hembree, 1990; Hill, Mammarella, Devine, Caviola, Passolunghi, & Szücs,
2016; Marsh, 1988; Marsh & Yeung, 1996; Sparfeldt, Schilling, Rost, Stelzl, & Peipert, 2005;
Streblow, 2004). Across diverse educational settings a clear subject-specific pattern of
relations between mathematics anxiety and performance or motivation variables had been sub-
stantiated – evidently indicating the convergent and discriminant validity of mathematics
anxiety measures. From this validation perspective, the WAESTA scale could claim prelimi-
nary subject-specificity if its correlation with the verbal self-concept variable turned out to be
distinctly weaker than that with the mathematics self-concept variable.
Besides, the students’ control beliefs largely manifest as causal attributions to explain indi-
vidually experienced outcomes (Skinner, 1996) which, in turn, affect their expectancies of
future attainment. Students who attribute their failure to stable, internal factors are less likely
to strive to improve their competence – for instance, by increasing their learning effort or
altering their learning strategies (Weiner, 1985). In a narrow sense, students’ control beliefs
may essentially refer to the perceived malleability of their own abilities or competencies.
Accordingly, students develop implicit theories or mindsets which may stress an entity view
of more or less fixed and unchangeable abilities – or an incremental view of more or less
modifiable and changeable abilities (Dweck & Leggett, 1988; Yeager & Dweck, 2012).
Though these implicit theories have been demonstrated to correlate only marginally with
students’ actual ability level (Spinath, Spinath, Riemann, & Angleitner, 2003), they signifi-
cantly affect their motivationally relevant goal orientations, effort beliefs, learning strategies
and, eventually, their task performance (Blackwell, Trzesniewski, & Dweck, 2007; Burnette,
O’Boyle, VanEpps, Pollack, & Finkel, 2013; Cury, DaFonseca, Zahn, & Elliot, 2008; Jones,
Wilkins, Long, & Wang, 2012). In principle, these implicit theories might not only concern
an individual’s general cognitive ability but might also emerge in a domain-specific manner,
referring to the perceived malleability of certain skills or competencies (Dweck & Molden,
2005) – e.g.,
in the areas of mathematics and language (Davis, Burnette, Allison, & Stone, 2011; Räty, Ka-
sanen, Kliskinen, Nykky, & Atjonen, 2004; Vogler & Bakken, 2007), foreign language learn-
ing (Lou & Noels, 2017), academic writing (Karlen & Compagnoni, 2017), music (Smith,
2005), physical activities (Ommundsen, 2003), biology and science (Chen & Pajares, 2010;
Dai & Cromley, 2014). Consequently, they should also play a motivationally crucial role in
the students’ learning of statistics. With respect to the statistics domain, an entity view of one’s own
competencies would diminish or even suspend any control perspective. Following the findings
of attribution analyses, this loss of control should increase the students’ expectancies of in-
evitable failure and, thus, should heighten their statistics anxiety (Bandalos, Yates, & Thorn-
dike-Christ, 1995; Carden, Bryant, & Moss, 2004; Covington, 1986; Faber, 2012a; Leppin,
Schwarzer, Belz, Jerusalem, & Quast, 1987).
Unfortunately, previous studies in the field had seldom analyzed the role of implicit theories.
If at all, they had referred to the students’ general ability beliefs rather than to specific beliefs
about statistical competencies (Zonnefeld, 2015), or had only considered the students’ beliefs
about mastering statistical demands through strategy use and effortful behavior (Schutz,
Drogosz, White, & Distefano, 1998) – thus reflecting an incremental view of learning. How-
ever, these learning control beliefs had been demonstrated to significantly predict course
grades. Moreover, in another study the students’ implicit beliefs concerning the malleability
of their methodological competencies had been analyzed. Though not directly tapping the
issue of statistics, the findings identified a motivational type of students showing stable
negative perceptions of methods learning and, in particular, reporting lowered incremental
scores (Lapka, Wagner, Schober, Gradinger, Reimann, & Spiel, 2010).
Accordingly, against the background of these research findings, the WAESTA scale scores
should reasonably be assumed to correlate positively and substantially with the students’
entity view of statistical competence as less or not malleable.
Furthermore, as another motivational criterion variable, the students’ task values were consid-
ered. Conceptually, from an expectancy-value perspective on achievement motivation, task
values concern the students’ perceived importance or adequacy of a certain activity for
fulfilling their personal needs and attaining their personal goals (Eccles & Wigfield, 2002).
In particular,
these task values may manifest as attainment, intrinsic, or utility components and have been
demonstrated to regulate the students’ motivational orientations, learning strategies, and aca-
demic choices (Wigfield, Hoa, & Klauda, 2009). Task values evidently emerge in a task- or at
least domain-specific manner (Gaspard, Häfner, Parrisius, Trautwein, & Nagengast, 2017; Ko-
sovich, Hulleman, Barron, & Getty, 2015; Selkirk, Bouchey, & Eccles, 2011). Thus, they
should characteristically affect the students’ learning and performance in the statistics domain
as well. Previous research in the field had primarily analyzed the students’ perceived utility or
worth of statistical knowledge and competencies as an attitudinal construct (Nolan, Beran, &
Hecker, 2012) – focusing on the usefulness of statistics in personal and professional contexts
(Cruise, Cash, & Bolton, 1985; Dauphinee, Schau, & Stevens, 1997; Ramirez, Emmioğlu, &
Schau, 2010; Schau, Stevens, Dauphinee, & Del Vecchio, 1995). As relevant empirical find-
ings consistently showed, the students’ ratings of the worth of statistics appeared to positively
correlate with their statistical achievement as well as with their self-reported learning strategies
to a slight extent only (Arumugam, 2014; Emmioǧlu & Capa-Aydin, 2012; Hood, Creed, &
Neumann, 2012; Kesici, Baloğlu, & Deniz, 2011; Nasser, 2004; Slootmaeckers, 2012). In
comparison, the relations of utility perceptions with domain-specific measures of academic
self-concept were stronger. Students with low competence beliefs valued statistics as less im-
portant (Baloğlu, 2002; Chiesi & Primi, 2009; Dauphinee, Schau, & Stevens, 1997; Dempster
& McCorry, 2009; Stanisavljevic, Trajkovic, Marinkovic, Bukumiric, Cirkovic, & Milic,
2014; Vanhoof, Kuppens, Sotos, Verschaffel, & Onghena, 2011). Similarly, students’ statistics
anxiety was also moderately correlated with their utility ratings – indicating those students
suffering from a heightened level of statistics anxiety tended to perceive statistics as less
useful (Baloğlu, 2002; Chew & Dillon, 2014c; Nasser, 2004; Papanastasiou,
2005; Papanastasiou & Zembylas, 2008; Papousek, Ruggeri, Macher, Paechter, Heene, Weiss,
Schulter, & Freudenthaler, 2012; Tremblay, Gardner, & Heipel, 2000).
In summary, these correlational findings draw a pattern which is completely in line with the
social-cognitive processing of task values (Bornholt & Wilson, 2007; Guo, Parker, Marsh, &
Morin, 2015): they operate both as mediating and mediated variables between the students’
actual competencies and self-beliefs (Chiesi & Primi, 2010; Lalonde & Gardner, 1993; Nasser,
2004; Ratanaolarn, 2016; Sesé, Jiménez, Montano, & Palmer, 2015).
Accordingly, the WAESTA scale scores should be assumed to correlate inversely and sub-
stantially with the students’ perceived value of statistical competence.
Students’ prior mathematical learning, in particular their latest school grade, has been well
proven as a relevant background variable for explaining their statistical self-beliefs and
competencies. From the perspective of self-concept development (Marsh & O’Mara, 2008),
previous failure experience in mathematics will evidently lead to low competence beliefs in
the statistics domain and, eventually, contribute to strengthening the emergence of domain-
specific anxiety responses. In various studies, students with poor school grades in mathematics
reported a heightened level of statistics anxiety (Beurze, Donders, Zielhuis, de Vegt, &
Verbeek, 2013; Birenbaum & Eylath, 1994; Chiesi & Primi, 2010; Galli, Ciancaleoni, Chiesi,
& Primi, 2008; Lalonde & Gardner, 1993).
Accordingly, the WAESTA scale scores should be assumed to correlate inversely, though low
in magnitude, with the mathematics grade the students had last earned at school.
Students’ prior experience of statistics should also be considered as a relevant background
variable. Following the analysis of Sutarso (1992) who demonstrated significantly lower anx-
iety scores in the subgroup of students with more statistics courses taken before the exam,
Onwuegbuzie (2003) also substantiated a small but significant relation between prior statistics
experience and self-concept. Unfortunately, he did not explicitly assess the statistics anxiety
variable. Instead he had used the computational self-concept subscale from the STARS instru-
ment (Cruise, Cash, & Bolton, 1985). Although there is theoretical and empirical overlap be-
tween the self-concept and anxiety variables, they represent distinct constructs and should not
be treated as interchangeable. This is particularly important as the STARS subscale does not
measure self-concept in a particularly sophisticated way and includes several items that are
unrelated to competence beliefs. In another study, the students’ previous experiences did not
significantly predict their grades in a statistics exam (Slootmaeckers, 2012). These findings
seemingly suggest the potential role of prior statistics experiences to explain rather the stu-
dents’ self-beliefs than their actual attainment. The study of Dempster and McCorry (2009)
clearly lent support to this perspective. Being more frequently and persistently involved with
learning statistics might increase the students’ confidence with typical requirements and meth-
ods (Waples, 2016). Thereby, they might reduce their feelings of apprehension and fear of
failure. However, other relevant studies did not substantiate any significant impact of prior
statistics experiences on the students’ statistics anxiety (Birenbaum & Eylath, 1994; Chew &
Dillon, 2012; 2014b; Schutz, Drogosz, White, & DiStefano, 1998; Townsend, Moore, Tuck,
& Wilton, 1998). Most noteworthy, in some cases advanced students’ anxiety level did not
decrease after completion of a statistics course or program (Chau, 2018; van der Westhuizen,
2014). In order to further clarify the role of prior statistics experiences in predicting the
individually existing level of statistics anxiety, the validation of the WAESTA scale included
this variable.
According to the exposure or contact hypothesis, the WAESTA scale scores should be
assumed to correlate inversely, though low in magnitude, with the students’ prior statistics
experience.
4 Method
4.1 Participants and procedure
In order to conceptually and methodologically extend the scope of preliminary analyses
(Faber, Drexler, Stappert, & Eichhorn, 2018), in the present study the data of both a construc-
tion sample and a validation sample were further analyzed.
The construction sample consisted of 113 graduate students (n = 94 females, n = 19 males)
from a German university Master’s course in educational sciences (n = 80) and special edu-
cation (n = 33). They were all enrolled in a compulsory course on empirical research methods
and theory of science; accordingly, the participation rate was high at 82 per cent.
Seventy-four of the students had already acquired elementary statistical knowledge during
their first degree, whereas 39 were required to attend a course in basic descriptive and
inferential statistics. The subgroups with and without statistical knowledge did not signi-
ficantly differ with respect to gender (chi-square test, p > .05) or age (Mann-Whitney U-test,
p > .05). Also, there was no significant difference in gender ratio within each subgroup
(binomial test, p > .05).
The validation sample was intended to scrutinize the findings from the construction sample
one year later. It consisted of 87 graduate students from the same Master’s courses: educational
sciences (n = 59) and special education (n = 28). The sample was predominantly female (n =
74). As with the construction sample, all students were enrolled in a compulsory course on
empirical research methods and theory of science. The participation rate was rather high at 89
per cent. Fifty of the students had acquired statistical knowledge during their first degree,
whereas 37 had to attend an introductory statistics course. Once again, there were no subgroup
differences in gender, gender ratio, or age (Stappert, 2017).
In both samples, all relevant data concerning the self-belief and background variables under
consideration were gathered at the beginning of the course’s first term. For that purpose, a
questionnaire including all items to measure the students’ self-concept, statistics anxiety,
implicit theories, task values, and relevant background information was administered.
Whereas the background items as well as the self-concept items were presented separately and
blockwise, the other self-belief items were all presented in a mixed order (Schriesheim, Ko-
pelman, & Solomon, 1989). Furthermore, to prevent a priming effect of the self-concept items,
they were presented at the end of the questionnaire (Podsakoff, MacKenzie, Lee, & Podsakoff,
2003).
4.2 Measures
The students’ academic self-concepts in mathematics and language (German) were as-
sessed using nine six-point rating items for each subject. These items referred to the students’
most recent learning experiences at school and addressed their competence beliefs with regard
to meeting subject-specific demands – also in comparison with former classmates. The
wording of the items was strictly parallel. Most of the items originated from well-proven
instruments (Faber, 2012a; Möller, Streblow, Pohlmann, & Köller, 2006; Rost, Sparfeldt, &
Schilling, 2007). For the purpose of this study they were adapted, phrased retrospectively, and
presented in an economical grid-style format. Sample item: “I tried hard in mathematics/
German, but I did not perform very well.” Exploratory factor analysis (with varimax rotation)
revealed a two-factor solution allowing for a clear distinction between the subject-specific
self-concept facets. Hence, it was possible to build two scales for measuring the subject-
specific academic self-concepts. In both samples, reliability was most appropriate for
both the mathematics and the language self-concept scale (Table 2). In line with the
multifaceted nature of the construct, the self-concept variables correlated only weakly in
the construction sample (r = .14, p > .05) and in the validation sample (r = -.05, p > .05). High
scale scores indicated positive competence beliefs. For the construction
sample, an additionally conducted confirmatory factor analysis using three item parcels for
each subject revealed an appropriate fit of the measurement model (χ2/df = 1.143, TLI = 0.989,
CFI = 0.997, RMSEA = 0.036). Here again, the latent mathematics and language variables
correlated only weakly (r = -.25, p < .01). Remarkably, the relation between both self-
concept measures turned out to be negatively signed. From the perspective of self-concept
development, however, this finding is explainable as it indicates a dimensional comparison
effect in the students’ academic self-perceptions across non-matching domains or
subjects (Arens, Becker, & Möller, 2017; Marsh, Kuyper, Seaton, Parker, Morin, Möller, &
Abduljabbar, 2014).
To assess students’ implicit theory of statistical competencies, a short scale with five four-
point rating items was administered in the construction sample. As current instruments in the
field only allowed for measuring the students’ implicit intelligence theory (İlhan & Cetin,
2013; Kooken, Welsh, McCoach, Johnston-Wilder, & Lee, 2016), a new scale was created.
Following the recommendations of Hong, Chin, Dweck, Lin and Wan (1999), all items tapped
an entity view of personal statistical competence. Due to insufficient item-test correlations
(rit < .22), two items had to be deleted. With an average item intercorrelation of Fisher’s
z’ = .44 and an average item-test correlation of Fisher’s z’ = .52, the final scale’s reliability
appeared just acceptable (John & Benet-Martinez, 2000). Sample item: “To work with
statistics, you need a talent that I simply do not have.” High scale scores indicated that
students perceived their statistical ability as fixed and, hence, less malleable in nature.
In the validation sample, a slightly revised scale with four four-point Likert items was used
(Stappert, 2017). In view of sample size and item number its reliability appeared to be
sufficient (Table 2).
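The averaging of correlations via Fisher’s z’ reported above can be reproduced with a minimal sketch (the function name is ours, not the authors’):

```python
import numpy as np

def fisher_mean_r(correlations):
    """Average a set of correlations via Fisher's z transform:
    r -> z = artanh(r), average the z values, back-transform with tanh."""
    z = np.arctanh(np.asarray(correlations, dtype=float))
    return float(np.tanh(z.mean()))
```

Because the z transform stretches large correlations, the Fisher-averaged mean is slightly larger than the simple arithmetic mean when the correlations are heterogeneous.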
The utility value students attributed to statistical competence was measured by means of a
short scale. In the case of the construction sample it consisted of five four-point rating items
dealing with the perceived utility of statistics for the students’ current studies and intended
career. To reduce acquiescence or social desirability bias in the students’ responses, scale
items were both positively and negatively worded. Sample item: “Statistics will not play an
important role in my future professional life”. With an average item intercorrelation of Fisher’s
z’ = .40 and an average item-test correlation of Fisher’s z’ = .49 the scale’s reliability was just
acceptable (Table 2). High scale scores indicated that students considered statistics less
important. In the validation sample, an extended version of the scale was used (Eichhorn,
2018). It consisted of eight four-point Likert items. Its reliability was once more just acceptable
(Table 2). Here again, high sum scores indicated that students perceived statistics as less
useful for their current studies and later professional development.
Table 2. Descriptive statistics and reliabilities of the scales for measuring self-belief and
background validation variables
Items AM SD zS zK α
Latest School Grade in Mathematics
Construction Sample 1 3.06 0.91 0.76 -0.15
Validation Sample 1 3.11 1.36 -0.50 -2.28*
Prior Statistics Experiences
Construction Sample 1 1.15 0.91 1.46 -1.62
Validation Sample¹ 1 2.0
Academic Self-Concept in Mathematics
Construction Sample 9 29.81 9.81 1.37 -2.04 .93
Validation Sample 9 31.01 11.43 -0.33 -1.84 .94
Academic Self-Concept in Language (German)
Construction Sample 9 45.52 7.32 -3.61*** -0.01 .92
Validation Sample 9 41.45 8.53 -2.12* -0.16 .90
Implicit Entity Beliefs
Construction Sample 3 5.43 2.02 3.59*** 0.13 .70
Validation Sample 4 7.87 2.61 2.15* -0.32 .81
Negative Utility Value of Statistics
Construction Sample 5 11.06 2.89 0.41 -1.18 .72
Validation Sample 8 16.18 3.95 0.55 -0.10 .75
¹In the validation sample, a binary response format was used. Therefore, the item’s
mode is reported.
Significance: *p < .05, ***p < .001
Note. AM = arithmetic mean, SD = standard deviation, zS = z-standardized skewness,
zK = z-standardized kurtosis, α = internal consistency (Cronbach’s coefficient alpha)
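The internal consistencies (α) reported in Table 2 follow the standard Cronbach formula; as a minimal sketch (our own helper, not the authors’ analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)
```

For perfectly parallel items alpha equals 1; uncorrelated items drive it toward 0.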
Finally, as a relevant background variable, the students’ most recent school grade in mathematics
was inquired in both samples. Students were also asked to report their experience with statistics
during their first degree. In the construction sample, a four-point Likert scale ranging
from “no experience” to “a lot of experience” was used. As this response format turned out to be too
ambiguous in the students’ understanding, in the validation sample their previous experience
with statistics was assessed with a dichotomous format – which referred to the successful pas-
sing of a statistics exam (response categories “yes” = 2 and “no” = 1).
4.3 Data analyses
Given the conceptual difficulty of unequivocally predicting the WAESTA scale’s ulti-
mate factor structure, a series of principal component analyses (PCA) was conducted in order
to explore and clarify the latent structure underlying the relations among all worry, avoidance,
and emotionality items (Kline, 2013). Following the methodological recommendations of Pi-
tuch and Stevens (2016), only factor loadings equaling or exceeding an absolute value of |0.4|
– and, thus, explaining around 16% of the item variance – were accepted and interpreted,
and consequently used for the purpose of final scale formation.
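The loading criterion can be illustrated with a small sketch of an unrotated PCA on the correlation matrix (our own illustrative code, not the original analysis; a loading of |0.4| corresponds to 0.4² = 16% explained item variance):

```python
import numpy as np

def pca_loadings(data, n_components, cutoff=0.4):
    """Unrotated principal-component loadings from the correlation matrix.
    Loadings below |cutoff| are zeroed in the 'masked' copy, mirroring the
    |0.4| interpretation rule (0.4**2 = 16% of the item variance)."""
    r = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(r)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    masked = np.where(np.abs(loadings) >= cutoff, loadings, 0.0)
    return loadings, masked
```

With all components retained, each item’s squared loadings sum to its total variance of 1, which is a convenient sanity check on the decomposition.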
Accordingly, principal component analyses were calculated for the construction sample. The
Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett’s test of sphericity demon-
strated that the inter-item correlations were appropriately strong (KMO = .878, BTS p < .001).
In spite of the small number of participants in the validation sample, a principal component
analysis of the scale items was also conducted. As the items’ communalities ranged from
h = .412 to h = .660 (Costello & Osborne, 2005) and the Kaiser-Meyer-Olkin measure revealed
an appropriately high score (KMO = .901), this procedure appeared reasonable (de Winter,
Dodou, & Wieringa, 2009; MacCallum, Widaman, Zhang, & Hong, 1999).
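For reference, the Kaiser-Meyer-Olkin measure compares the sums of squared correlations and squared partial correlations; a hedged sketch using the standard formula (`kmo` is our own helper name):

```python
import numpy as np

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy:
    KMO = sum r_ij^2 / (sum r_ij^2 + sum p_ij^2) over i != j, where the
    partial correlations p_ij come from the inverse correlation matrix."""
    r = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                       # partial correlation matrix
    mask = ~np.eye(r.shape[0], dtype=bool)   # off-diagonal entries only
    r2 = (r[mask] ** 2).sum()
    p2 = (partial[mask] ** 2).sum()
    return r2 / (r2 + p2)
```

Values approaching 1 indicate that item correlations are largely shared (factorable) rather than item-pair specific.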
In order to further clarify possible differences in the statistics anxiety variable and to dis-
close potential nonlinear relations, a series of two-way analyses of variance (ANOVA) with
the students’ mathematics self-concept, implicit entity beliefs, and negative utility values as
factor variables was additionally run. In each case, the independent variables were factorized
using a median split, thus identifying subgroups with lowered and heightened levels. In
particular, separate 2 × 2 analyses and a comprehensive 2 × 2 × 2 analysis were conducted
for the construction sample.
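The factorization step can be sketched as follows (illustrative helper names; the two-way ANOVAs themselves would be run with standard statistics software):

```python
import numpy as np

def median_split(scores):
    """Dichotomize a continuous predictor at its median:
    0 = lowered-level subgroup, 1 = heightened-level subgroup."""
    scores = np.asarray(scores, dtype=float)
    return (scores > np.median(scores)).astype(int)

def cell_means(y, factor_a, factor_b):
    """Cell means of the outcome scores for a 2 x 2 design,
    e.g. mathematics self-concept x implicit entity beliefs."""
    y = np.asarray(y, dtype=float)
    return {(a, b): y[(factor_a == a) & (factor_b == b)].mean()
            for a in (0, 1) for b in (0, 1)}
```

Inspecting the four (or, for the 2 × 2 × 2 case, eight) cell means is what makes potential nonlinear or interactive patterns visible.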
Both samples contained missing data (5.7% and 7.3%). As the missing values showed no
systematic pattern in either the construction (MCAR test p = .182) or the validation sample
(MCAR test p = .178), they were treated as "missing completely at random" (Little, 1988)
and estimated by means of the two-step iterative expectation-maximization (EM) algorithm.
Although this method risks underestimating the standard errors of imputed data, its use was
considered justified in view of the small number of missing values (Graham, 2012).
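The EM logic can be illustrated with a deliberately simplified bivariate sketch: the E-step fills each missing value with its regression prediction from a complete covariate, the M-step re-estimates means and covariances from the completed data, and the two steps are iterated to convergence. This sketch omits the residual-variance correction of full EM, which is exactly the simplification behind the tendency to understate standard errors noted above; it is not the algorithm actually used in the study:

```python
def em_impute(x, y, iters=200):
    """EM-style single imputation for one incomplete variable y (None =
    missing) given a complete covariate x. Simplified sketch: no residual
    variance is added to imputed values."""
    obs = [v for v in y if v is not None]
    # initial fill: observed mean
    ys = [yi if yi is not None else sum(obs) / len(obs) for yi in y]
    for _ in range(iters):
        mx = sum(x) / len(x)
        my = sum(ys) / len(ys)              # M-step: moments of completed data
        slope = (sum((a - mx) * (b - my) for a, b in zip(x, ys))
                 / sum((a - mx) ** 2 for a in x))
        ys = [yi if yi is not None else my + slope * (xi - mx)  # E-step
              for xi, yi in zip(x, y)]
    return ys

completed = em_impute([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, None])
```

With perfectly linear data the iteration converges to the value on the regression line (here 8.0), while observed values are left untouched.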
5 Results
5.1 Scale formation
For determining the final version of the WAESTA scale in the construction sample, descriptive
item statistics were calculated first. To test for significant deviations from normality, the
items' z-standardized skewness and kurtosis scores were used (Field, 2009). The avoidance
item 04 as well as the worry item 11 showed a significant negative skew, indicating that most
students agreed with the statements: they would prefer to give a presentation without any
statistical content, and during a presentation they would strongly hope not to be
asked statistical questions. Furthermore, the analysis revealed significant negative kurtosis
scores for items 03, 06, 14, and 15, indicating platykurtic distributions. Responses to these
items thus varied comparatively widely across students (Table 3).
Table 3. Descriptive statistics, factor loadings and corrected item-test correlations of
WAESTA items (WR = worry, AV = avoidance, EM = emotionality): Results from the
construction sample (Faber, Drexler, Stappert, & Eichhorn, 2018)
Item AM SD zS zK a rit
01 WR 2.37 0.82 1.49 -0.67 .486 .436
02 EM 2.71 0.94 -1.30 -1.69 .688 .645
03 AV 2.74 1.08 -1.33 -2.61** .581 .515
04 AV 2.88 0.97 -2.12* 1.65 .719 .664
05 WR 2.51 0.91 -0.50 -1.68 .725 .663
06 AV 2.50 0.97 -0.21 -2.08* .758 .697
07 EM 2.17 0.96 1.71 -1.78 .697 .620
08 WR 2.24 0.84 1.49 -0.83 .718 .670
09 EM 2.20 0.87 1.59 -1.04 .668 .583
10 WR 2.69 0.93 -1.63 -1.43 .647 .694
11 WR 2.91 0.97 -2.09* -1.75 .740 .676
12 WR 2.26 0.80 1.52 -0.45 .734 .618
13 EM 2.96 0.93 -1.51 -0.45 .683 .493
14 WR 2.69 1.07 -0.67 -2.18* .551 .474
15 EM 2.87 0.84 -1.30 -2.77** .532 .637
16 WR 2.75 0.84 -1.45 -0.83 .689 .442
17 AV 2.07 0.87 1.55 -1.52 .488 .442
Total Sum Score (Items 01-17)
WAESTA 42.66 10.20 -0.49 -1.34
Significance: *p < .05, **p < .01
Note. AM = arithmetic mean, SD = standard deviation, zS = z-standardized
skewness, zK = z-standardized kurtosis, a = factor loading, rit = corrected item-
test correlation
For the data of the construction sample, the principal component analyses did not statistically
separate the three anxiety components. Given the eigenvalues (e1 = 7.385, e2 = 1.219, e3 =
1.132), neither a varimax nor an oblique rotation procedure yielded a loading pattern separating
the worry, avoidance, and emotionality items in a conceptually proper way. Rather, all
analyses led to a unidimensional structure (Table 3). This solution revealed sufficiently
high factor loadings and explained 43.59 per cent of the variance.
Nonetheless, for further clarification, three provisional subscales representing the students’
worry, avoidance, and emotionality cognitions were formed, and their interrelations as well as
their relations with criterion variables were examined. In line with the PCA result, the
subscales correlated strongly. Furthermore, comparisons of dependent correlation coefficients
by means of Steiger's Z formula (Hoerger, 2013; Steiger, 1980) revealed no significant
differences between coefficients (p > .05). Accordingly, the worry, avoidance, and emotionality
subscales appeared to be equally associated with the achievement and motivation variables
under consideration (Table 4).
Table 4. Correlations among the provisional WAESTA subscales and with criterion variables
(construction sample)
                          AV       EM       SGM      MSC       VSC     IEB      NUV
WAESTA-Worry              .72***   .80***   -.23*    -.38***   .03     .58***   .41***
WAESTA-Avoidance (AV)              .69***   -.28**   -.43***   .05     .56***   .57***
WAESTA-Emotionality (EM)                    -.22*    -.34***   .12     .49***   .32***
Significance: *p < .05, **p < .01, ***p < .001
Note. SGM = Most Recent School Grade in Mathematics, MSC = Mathematics Self-Concept,
VSC = Verbal Self-Concept, IEB = Implicit Entity Beliefs, NUV = Negative Utility Value
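The dependent-correlation comparison reported above can be sketched with Steiger's (1980) Z using the Dunn–Clark covariance term, the variant implemented in common calculators such as Hoerger's (2013); whether the authors used exactly this variant is not stated:

```python
import math

def steiger_z(r12, r13, r23, n):
    """Steiger's (1980) Z for comparing two dependent correlations r12 and
    r13 that share variable 1; r23 is the correlation between the two
    non-shared variables, n the sample size. Sketch with the Dunn-Clark
    covariance term."""
    z12, z13 = math.atanh(r12), math.atanh(r13)     # Fisher z-transforms
    rm = (r12 + r13) / 2.0
    # covariance of the two correlations induced by the shared variable
    num = (r23 * (1 - 2 * rm ** 2)
           - 0.5 * rm ** 2 * (1 - 2 * rm ** 2 - r23 ** 2))
    c = num / (1 - rm ** 2) ** 2
    return (z12 - z13) * math.sqrt((n - 3) / (2 - 2 * c))

# Equal correlations can never differ significantly: Z is exactly 0.
z_equal = steiger_z(0.40, 0.40, 0.60, n=113)
```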
Consequently, all WAESTA items were used to build the scale's final version. For its total
score, neither the z-standardized skewness and kurtosis scores nor the Shapiro-Wilk W-test
(W = .988, df = 113, p = .443) indicated any significant deviation from the normal distribution
assumption. High total scores indicate stronger worry, avoidance, and emotionality
cognitions. The scale's reliability was estimated in various ways and turned out to be
adequate: its internal consistency (Cronbach's coefficient alpha) amounted to α = .92, its
split-half reliability (odd-even method with Spearman-Brown correction) to r12 = .89, and its
standard error of measurement (based on coefficient alpha) was se = 2.67.
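The three reliability estimates rest on standard formulas, sketched below: Cronbach's α from item and total-score variances, the Spearman-Brown correction of a half-scale correlation, and the standard error of measurement se = SD·√(1 − α). Whether the authors used exactly these variants (e.g. sample vs. population variances) is not stated:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from an item x person score matrix
    (one list of scores per item; sample variances)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

def spearman_brown(r_half):
    """Spearman-Brown corrected split-half reliability (doubled length)."""
    return 2 * r_half / (1 + r_half)

def sem(sd_total, alpha):
    """Standard error of measurement based on coefficient alpha."""
    return sd_total * (1 - alpha) ** 0.5

# Two perfectly parallel items yield alpha = 1:
a = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```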
In the validation sample, the PCA revealed eigenvalues (e1 = 8.422, e2 = 1.216, e3 =
1.006) indicating a solution with one common factor. All factor loadings were substantial
(ranging from amin = .612 to amax = .812). Accordingly, the unidimensional scale structure
was fully replicated, explaining 52.64 per cent of the variance. For the total sum score,
the z-standardized skewness (zS = -0.096) and kurtosis values (zK = -0.173) did not indicate
any significant deviation from the normal distribution assumption. Here again, the scale's
reliability was estimated in various ways and turned out to be adequate: its internal
consistency (Cronbach's coefficient alpha) amounted to α = .94, its split-half reliability
(odd-even method with Spearman-Brown correction) to r12 = .91, and its standard error of
measurement (based on coefficient alpha) was se = 2.45.
5.2 Validation results
As an initial approach to the external validity of the WAESTA scale, its zero-order
correlations with the criterion variables under consideration were analyzed first. Because the
criterion variables were not normally distributed, both Pearson and Spearman correlation
coefficients were calculated as a precaution. As both types of coefficients conveyed quite
similar information, however, the construct relations are reported in terms of Pearson's r.
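That the two coefficients converge here is no accident: Spearman's rho is simply Pearson's r computed on ranks, so the two agree whenever relations are roughly linear and distributions not too heavily distorted. A self-contained sketch:

```python
def pearson(x, y):
    """Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho = Pearson's r on ranks (average ranks for ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(v):
            j = i
            while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1          # 1-based average rank
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    return pearson(ranks(x), ranks(y))

rho = spearman([1, 2, 3, 4], [10, 30, 20, 40])
```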
As the results consistently demonstrated (Table 5), the WAESTA scores were closely and
significantly associated with the students' mathematics self-concept but not with their
language self-concept. This finding may be considered preliminary evidence that the
WAESTA scale measures a domain-specific rather than a general facet of the students' test
anxiety experience.
Table 5. Relations of WAESTA sum scores with background and self-belief variables
(zero-order correlations): Results from the construction and the validation sample
                      Most Recent    Prior        Academic      Academic      Implicit   Negative
                      School Grade   Experiences  Self-Concept  Self-Concept  Entity     Utility
                      Mathematics    Statistics   Mathematics   Language      Beliefs    Value
Construction Sample   -.29***        -.23*        -.43***        .09          .62***     .49***
Validation Sample     -.22*          -.18*        -.38***       -.08          .80***     .32***
Significance: *p < .05, **p < .01, ***p < .001
Furthermore, the scale scores correlated most strongly with the students' entity views of their
own statistical competence: a heightened level of statistics anxiety went along with the
conviction that one's statistical competencies are hardly, if at all, malleable. With the
negative instrumental value of statistics, the WAESTA sum score correlated moderately
positively; students reporting a higher level of statistics anxiety tended to perceive statistical
competencies as less important. Finally, the relation between the WAESTA score and the most
recent school grade in mathematics was positive and significant, though low in magnitude.
Hence, students with a heightened level of statistics anxiety had been less successful in
mastering mathematical demands at school. Similarly, students with prior experience of
statistics displayed a lower level of statistics anxiety. Overall, the analyses in the construction
and the validation sample revealed a highly comparable pattern of criterion-related
correlations.
To obtain more differentiated validation results, a series of regression analyses with the
WAESTA scale as dependent variable was computed for both samples. As this procedure
allows the covariations among all predictor variables to be controlled with respect to their
empirical overlap and multicollinearity (Field, 2009), it should help to unravel the complexity
of construct relations. In particular, a sequence of regression models including an increasing
number of predictor variables was tested consecutively (Table 6). In both samples the
standardized residuals of the WAESTA sum scores did not violate the normal distribution
assumption. In each case, the Shapiro-Wilk W-test confirmed the standardized residuals to be normally
distributed (construction sample: W = .986, df = 113, p = .308; validation sample: W = .979,
df = 87, p = .182).
Table 6. Multiple regression of WAESTA sum scores on background and self-belief vari-
ables (standardized beta weights and squared multiple regression coefficients): Results
from the construction and the validation sample
        Most Recent    Prior        Academic      Implicit   Negative
        School Grade   Experiences  Self-Concept  Entity     Utility
Model   Mathematics    Statistics   Mathematics   Beliefs    Value      R2
Construction Sample
A       -.039          -.449***                                         .177
B       -.005          -.244**      -.436***                            .236
C       -.136          -.221***     -.014         .592***               .478
D       -.158          -.248***      .031         .487***    .313***    .559
Validation Sample
A        .184          -.533***                                         .154
B        .180          -.144        -.519***                            .165
C        .030           .036        -.087         .772***               .616
D        .023           .032        -.081         .799***    -.059      .614
Significance: *p < .05, **p < .01, ***p < .001
Note. R2 = adjusted multiple regression coefficient squared
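The adjusted R² reported in Table 6 penalizes the raw coefficient of determination for the number of predictors k relative to the sample size n. A small sketch with hypothetical values (not taken from the study):

```python
def adjusted_r2(r2, n, k):
    """Adjusted squared multiple correlation:
    1 - (1 - R^2) * (n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Hypothetical illustration: R^2 = .50 with 4 predictors and n = 113.
adj = adjusted_r2(0.50, n=113, k=4)
```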
The results for regression models A and B clearly demonstrated that the mathematics self-concept
explained most of the anxiety variance. However, when the entity beliefs were added to the
regression equation in models C and D, the predictive power of the students' mathematics self-
concept was reduced to a minimal, nonsignificant level. Instead, the students' entity beliefs
contributed largely to the WAESTA sum score. As the mathematics self-concept and the entity
belief variable were substantially correlated in both samples (r = -.51 in the construction
sample and r = -.45 in the validation sample), while the entity belief variable was more
strongly related to the anxiety variable (r = .63 in the construction sample and r = .80 in the
validation sample), the massive decline in the self-concept's beta weight must be seen as a
result of multicollinearity (Marsh, Dowson, Pietsch, & Walker, 2004). This predictive pattern
occurred in both samples.
Moreover, only in the construction sample did the students' negative value of statistics
substantially and independently explain additional variance in the WAESTA sum score. The
difference between samples might be due to the fact that the value variable was not
comparably formatted in both samples. Similarly, it was only in the construction sample that
prior experience of statistics contributed incrementally to the variance in the WAESTA score.
Students with more experience reported statistics anxiety to a lesser degree. Once again, the
difference between samples can probably be traced back to the scale format: in the validation
sample, the dichotomous experience score was treated as a dummy variable, which might have
reduced empirical variance and, thus, lowered the correlation coefficients. Apart from this, all
regression analyses demonstrated that the students' statistics anxiety was essentially and most
closely predicted by their control beliefs – that is, their perceived malleability of individual
competencies in the statistics domain.
Finally, for further clarification of the WAESTA scale's validity, mean differences between
the educational science and the special education students were analyzed. As the comparison
groups were small and unequal in size, the nonparametric Mann-Whitney U-test for
independent samples was used (Zimmerman, 1987). In the construction sample, a significantly
higher level of statistics anxiety was found in the special education subgroup (Z = -2.314,
p = .021, effect size r = 0.22). In the validation sample, the level of statistics anxiety did not
differ significantly between educational science and special education students (Z = -0.374,
p > .05). However, given the small size of the validation sample, this finding should be
interpreted cautiously.
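The U-test and its effect size can be sketched as follows, with the effect size computed as r = |Z|/√N; applied to the reported Z = -2.314 and N = 113, this formula reproduces r ≈ 0.22. The sketch uses the plain normal approximation without SPSS-style tie correction, so it is an illustration rather than the authors' exact computation:

```python
import math

def mann_whitney(a, b):
    """Mann-Whitney U with normal approximation (no tie correction) and the
    effect size r = |Z| / sqrt(N)."""
    n1, n2 = len(a), len(b)
    # U for sample a: pairs where the a-value exceeds the b-value
    # (ties count half)
    u = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mean_u) / sd_u
    r = abs(z) / math.sqrt(n1 + n2)
    return u, z, r

u, z, r = mann_whitney([1, 2], [3, 4])
```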
5.3 Additional multivariate analyses
In order to scrutinize the role of the WAESTA scores within the multivariate context of
relevant self-belief variables, mean differences depending on the students' domain-specific
self-concept, implicit entity belief, and negative value responses were analyzed.
As the first two-way analysis of variance demonstrated, mean differences in the students'
statistics anxiety were significantly explained by main effects of both the entity belief and the
self-concept variable (Table 7). Accordingly, students reporting a lowered sense of malleability
of their statistical competencies and a poorer mathematics self-concept displayed a heightened
level of worry, avoidance, and emotionality cognitions (Figure 1). The interaction effect of the
self-concept and implicit entity belief variables was not statistically significant.
A quite similar effect pattern emerged for the negative value variable. Mean differences in the
WAESTA responses were significantly explained by main effects of both the negative value
and the self-concept variable (Table 7). Students who perceived the availability of statistical
competencies as less useful and, in addition, reported a poorer mathematics self-concept
showed a heightened level of statistics anxiety (Figure 1). Thus, in both analyses the students'
statistics anxiety was substantially predicted by distinct effects of the domain-specific self-beliefs
under consideration – with the effect of the mathematics self-concept being the less
powerful in each case. Here again, the interaction effect of the two independent variables was
not statistically significant.
Bringing both perspectives together by concurrently analyzing the role of all self-belief
variables, a three-way analysis of variance revealed significant main effects of the entity
beliefs and the negative value, but no longer of the self-concept variable (Table 7). Instead,
students with strong entity beliefs and negative utility values reported a considerably
heightened level of statistics anxiety (Figure 2).
Figure 1. Mean statistics anxiety scores depending on the students’ implicit entity theory,
negative utility value, and mathematics self-concept: separate two-way analyses of vari-
ance from the construction sample.
Figure 2. Mean statistics anxiety scores depending on the students’ implicit entity theory,
negative utility value, and mathematics self-concept: comprehensive three-way analysis
of variance from the construction sample.
Though this effect pattern appeared somewhat more pronounced for the subgroup with a
poorer mathematics self-concept, the difference did not reach statistical significance. Thus,
the additive impact of lowered expectations of improving one's own statistical skills and
negative perceptions of the utility of statistical skills best predicted interindividual differences
in statistics anxiety. None of the interaction effects was statistically significant. This finding is
completely in line with the regression results (Table 6). However, it offers a more descriptive
view of the relations among constructs: within the cognitive-motivational interplay of all
self-belief variables, the explanatory role of the
mathematics self-concept was largely mediated by the students' entity beliefs and utility
values.
Table 7. Differences in students’ statistics anxiety depending on their mathematics self-
concept, implicit entity beliefs, and negative instrumental value. Results from the con-
struction sample.
WAESTA Sum Scores (Statistics Anxiety)
Factor Variables df F p η2
Mathematics Self-Concept (MSC) 1,112 5.749 .018 .050
Implicit Entity Beliefs (IEB) 1,112 48.671 .000 .309
MSC × IEB 1,112 0.875 .352 .008
Mathematics Self-Concept (MSC) 1,112 7.100 .009 .061
Negative Utility Value (NUV) 1,112 24.349 .000 .183
MSC × NUV 1,112 0.260 .611 .002
Mathematics Self-Concept (MSC) 1,112 3.342 .070 .031
Implicit Entity Beliefs (IEB) 1,112 35.411 .000 .252
Negative Utility Value (NUV) 1,112 13.469 .000 .114
MSC × IEB 1,112 1.339 .250 .013
MSC × NUV 1,112 0.774 .381 .007
IEB × NUV 1,112 0.268 .606 .003
MSC × IEB × NUV 1,112 0.003 .956 .000
Note. df = degrees of freedom, F = F-value, p = probability, η2 = partial eta
squared (effect size)
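The partial η² values in Table 7 can be approximately recovered from each F-value and its degrees of freedom via η²p = F·df1 / (F·df1 + df2); a sketch (values come out close to, though not always identical with, the tabled ones, which were presumably computed directly from sums of squares):

```python
def partial_eta_squared(f, df_effect, df_error):
    """Partial eta squared recovered from an F-value and its degrees of
    freedom: F * df1 / (F * df1 + df2)."""
    return f * df_effect / (f * df_effect + df_error)

# E.g. the self-concept main effect in the first analysis of Table 7:
eta = partial_eta_squared(5.749, 1, 112)
```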
6 Conclusion
The present study aimed to examine the internal and external validity as well as the
psychometric properties of the newly developed WAESTA scale for measuring education
science students' worry, avoidance, and emotionality cognitions in the domain of statistics
learning. Conceptually, this measurement approach was intended to integrate the strengths of
both a more situation-focused and a more response-focused research line in the field. Data
were gathered in a construction sample and, one year later, in a validation sample of education
science majors. As a substantive result, the scale was demonstrated to represent the construct
in a unidimensional manner. In principal component analyses, all scale items were empirically
supported. Accordingly, the final scale version included all items as initially administered in
both samples. Its internal consistency was fully sufficient. Furthermore, its relations with
various self-belief and background variables largely turned out as theoretically predicted.
Specifically, the total WAESTA score correlated more strongly with the students' mathematics
than with their language self-concept – the scale can thus claim domain-specific validity.
These findings held for both the construction and the validation sample.
For the time being, the WAESTA scale can be considered internally and externally valid as
well as having adequate psychometric properties. Nevertheless, some results definitely deserve
further attention.
In particular, the scale’s underlying structure consistently appeared to be unidimensional.
This finding indicates the strong empirical overlap among the worry, avoidance, and emo-
tionality responses – and, thus, the cognitive-motivational interplay of anxiety components.
Similarly, close relations had been already found elsewhere (Deffenbacher, 1980; Hodapp &
Benson, 1997; Cassady & Johnson, 2002; Chin, Williams, Taylor, & Harvey, 2017; Hong &
Karstensson, 2002; Sarason, 1984; Sparfeldt, Schilling, Rost, Stelzl, & Peipert, 2005),
especially with respect to domain- or task-specific facets of test anxiety (Faber, 1993, 1995a,
2012b, 2015). Nonetheless, in most cases worry and emotionality cognitions were demon-
strated to differently correlate with students’ performance and motivation scores (Faber, 2000,
2002; Seipp, 1991; von der Embse, Jester, Roy, & Post, 2018). By no means, this result does
challenge the need for a separate assessment of worry, avoidance, and emotionality cognitions.
Rather this approach should ensure to obtain a most differentiated measuring of statistics anx-
iety and, thereby, should contribute to reducing the interpretation ambiguity of item responses.
In that regard, it should certainly increase the scale’s cognitive-motivational representativity
and content validity. Likewise, the students’ avoidance cognitions were found to be most clo-
sely related to their worry, but only slightly less closely related to their emotionality cogni-
tions. Therefore, according to relevant findings (Galassi, Frierson, & Sharer, 1981; Hagtvet &
Benson, 1997), avoidance cognitions must be seen as an important feature within the students’
anxiety experience and, thus, should have contributed to completing and differentiating the
measuring of statistics anxiety. Hence, future research should more strongly pay attention to
this issue and reconsider its theoretical framework as well as its methodological implications.
Apart from this conceptual perspective, the WAESTA scale's content and format should be
further refined. In particular, the proportions of worry, avoidance, and emotionality items
should be more balanced – especially, the number of avoidance items should be extended.
Furthermore, given the ambiguous research findings (Lee & Paek, 2014; Lozano, García-
Cueto, & Muñiz, 2008; Wakita, Ueshima, & Noguchi, 2012; Weng, 2004), subsequent scale
analyses should also compare scale versions with different numbers of response categories in
order to examine their psychometric and validity properties.
With respect to the validation results, both the correlation and the regression analyses
suggest, at first glance, that the students' implicit entity beliefs are sufficient to explain their
statistics anxiety. Without any doubt, the entity beliefs play a crucial role in the prediction of
statistics anxiety – as should be expected from the view of social-cognitive theories (Dweck
& Leggett, 1988; Schunk & Zimmerman, 2006; Zeidner, 1998). However, in the students'
cognitive-motivational processing, they will operate in a more complex manner. According to
relevant theoretical conceptions and empirical findings in the field (Blackwell, Trzesniewski,
& Dweck, 2007; Chiesi & Primi, 2010; Emmioǧlu, 2011; Onwuegbuzie, 2003; Sesé, Jiménez,
Montano, & Palmer, 2015), it should be assumed that implicit beliefs actually mediate the
effects of the students' self-concept and learning background on the dependent anxiety
variable (Baron & Kenny, 1986). The massive decline in the self-concept variable's beta
weights when the entity belief variable was added to the regression equation apparently
supports this assumption (Table 6). The same applies to the results of the additionally run
analyses of variance (Table 7). Admittedly, within the present validation framework this
indirect effect cannot be adequately substantiated with correlational or regression analyses,
but only with multivariate modeling methods (Kline, 2011). Future research should make
every effort to apply such modeling techniques in order to clarify the role of entity beliefs in
the statistics domain.
Beyond the purpose of scale validation, the empirical findings concerning the students' entity
beliefs might even extend previous research in two respects. At the level of construct
specificity, the measurement of entity beliefs did not refer to the perceived malleability of
general cognitive abilities but asked about the perceived malleability of statistical
competencies. As this entity belief variable was only very weakly correlated with the
language self-concept (construction sample r = .07; validation sample r = -.12), it can be
considered domain-specific. Hence, for the domain of statistics learning, this particular
finding appears to be in line with the recommendations of the implicit theories approach
(Dweck & Molden, 2005). At the level of construct relations, the results allow for refining the
nomological scope of the statistics anxiety framework – at least as it refers to the type and
role of self-belief variables (Bandalos, Yates, & Thorndike-Christ, 1995; González,
Rodríguez, Faílde, & Carrera, 2016; Onwuegbuzie & Wilson, 2003; Zeidner, 1991).
The present study undeniably suffers from some conceptual and empirical limitations. First of
all, the composition and size of both student samples do not allow the empirical findings to be
generalized as would be required. Instead, the findings reported here can claim only a sort of
local validity – all the more so as the data were collected in one particular university setting.
Their external validity must, therefore, be considered weak. Further analyses should remedy
this problem and examine the WAESTA scale with other student samples from other
education science contexts.
Moreover, the validation framework is still incomplete in several respects and should be
extended (Benson, 1998). The number of items for measuring the students' implicit entity
theory and utility value should be increased – not least in order to enhance their internal
consistencies. However, simply increasing the number of items will not necessarily improve
reliability and criterion validity to a considerable extent (Gogol, Brunner, Goetz, Martin,
Ugen, Keller, Fischbach, & Preckel, 2014). Further validation efforts should therefore
carefully explore options for broadening the constructs' operationalizations in a conceptually
and contextually adequate manner – and not merely to enhance the statistical variance of item
responses. The construct of entity beliefs in particular might allow for response variability
merely in its context or situation facet; as relevant studies in the field have correspondingly
shown, its potential operationalization space is conceivably restricted (Blackwell,
Trzesniewski, & Dweck, 2007; Hong, Chiu, Dweck, & Wan, 1999; Lou & Noels, 2017). One
way to refine the construct might be to extend it with the students' beliefs about perceived
effort and learning control (Blackwell, Trzesniewski, & Dweck, 2007; Schutz, Drogosz,
White, & Distefano, 1998; Tempelaar, Rienties, Giesbers, & Gijselaers, 2015). Furthermore,
this study assessed the students' mathematics self-concept retrospectively. As a proportion of
both samples did not have any prior experience of statistics, using a measure of statistical
self-concept would have been misguided. However, provided that further research includes
participants comparable in their statistical background, their self-concept in the statistics
domain should
definitely be used to elaborate the scale's validation (González, Rodríguez, Faílde, & Carrera,
2016; Sproesser, Engel, & Kuntze, 2016). Likewise, concurrent measures of the students'
self-efficacy in mastering certain statistical tasks should help to further differentiate the
scale's criterion validity (Finney & Schraw, 2003; Olani, Harskamp, Hoekstra, & van der
Werf, 2010; Perepiczka, Chandler, & Becerra, 2011). Not least, an appropriate validation of
the WAESTA scale will require analyzing its relations with other instruments for measuring
statistics anxiety – for instance, by comparing it with the German adaptation of the STARS
questionnaire (Papousek, Ruggeri, Macher, Paechter, Heene, Weiss, Schulter, &
Freudenthaler, 2012).
Another crucial point concerns the measurement of the students' prior statistics experience. In
both samples, this variable had been operationalized differently; its relations to the statistics
anxiety score can therefore not be compared across samples. However, the multivariate
findings indicate that the students' informally experienced exposure to statistics (as assessed
in the construction sample) was more strongly related to their anxiety scores than their prior
passing of a formal exam (as assessed in the validation sample). Possibly, this particular result
can be traced to both the binary item format and the number of students in the validation
sample. Moreover, in line with research findings reported elsewhere (Dempster & McCorry,
2009), it might also indicate that everyday encounters with statistical demands play the more
important role in the emergence and maintenance of anxious reaction patterns. Further
validation efforts should bear this perspective in mind and always analyze informal
experiences with statistics as a cognitive-motivationally decisive background variable –
instead of merely focusing on structural information (Hannigan, Hegarty, & McGrath, 2014;
Onwuegbuzie, 2003). However, as additional analyses of the data from both samples showed,
the relation between the students' prior statistics experiences and their statistics anxiety was
largely superseded by the domain-specific self-concept and value variables (Faber & Drexler,
2018).
Another considerable shortcoming of the present study concerns the lack of a relevant
performance measure. As only some of the students in both samples had already passed an
exam in introductory statistics, sufficiently robust data were not available. Further validation
studies should analyze the relation between the WAESTA scores and suitable measures of the
students' actual statistics performance (Hanna & Dempster, 2012). This relation should be
especially instructive, inasmuch as relevant studies have commonly reported low to moderate
correlations (Bandalos, Yates, & Thorndike-Christ, 1995; Finney & Schraw, 2003; Macher,
Paechter, Papousek, & Ruggeri, 2012; Sesé, Jiménez, Montano, & Palmer, 2015; Tremblay,
Gardner, & Heipel, 2000; Vigil-Colet, Lorenzo-Seva, & Condon, 2008; Zeidner, 1991).
However, these results do not really indicate a general flaw in the measures' criterion
validity. Rather, they reflect the motivational consequences of statistics anxiety within a
strongly restricted setting (Pekrun, 1988). As successfully passing the statistical requirements
of the Master's degree is mandatory, the students' increasingly experienced worry, avoidance
tendencies, and feelings of apprehension should dispose them to strengthen their learning
effort in order to avoid an impending failure (Macher, Papousek, Ruggeri, & Paechter, 2015;
Martin & Marsh, 2003). Accordingly, a moderate relation with the students' statistical
performance should also be assumed for the WAESTA scale. Moreover, in order to further
elucidate the relation between the students' statistics anxiety and actual achievement, it
should also be enlightening to
analyze their domain-specific learning approach (Chiesi, Primi, Bilgin, Lopez, del Carmen
Fabrizio, Gozlu, & Tuan, 2016; Markle, 2017).
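The criterion validity check suggested here essentially amounts to correlating WAESTA sum scores with exam outcomes. A minimal sketch of such an analysis, using purely hypothetical anxiety and grade values (all data invented for illustration):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: WAESTA sum scores paired with statistics exam grades
# (German grading: a higher grade means weaker performance)
waesta = [12, 18, 25, 31, 22, 15, 28, 20]
grades = [1.7, 2.3, 2.7, 3.3, 2.0, 1.3, 3.0, 2.3]

r = pearson_r(waesta, grades)  # a positive r of low-to-moderate size would
                               # mirror the relations reported in the literature
```

In an actual validation study one would of course also test the coefficient for significance and, given the restricted setting discussed above, interpret its size against the low-to-moderate benchmarks from prior research rather than expecting a strong association.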
Finally, as both samples in this study were small and predominantly female, gender was
not included in the validation analyses. Relevant findings in the field have consistently shown female students to report a higher level of statistics anxiety (Benson, 1989; Hong & Karstensson, 2002; Macher, Paechter, Papousek, & Ruggeri, 2012; Onwuegbuzie & Wilson,
2003). Interestingly, despite the apparently heightened anxiety level of female students, some
studies did not substantiate any significant disadvantage in their exam performance (Bradley
& Wygant, 1998; Macher, Paechter, Papousek, Ruggeri, Freudenthaler, & Arendasy, 2013).
This finding needs further clarification with respect to the underlying motivational and behavioral processes. On the one hand, female students might have overrated their actual anxiety level (Zeidner, 1998), possibly due to a self-derogatory gender stereotyping effect (Bieg, Goetz, Wolter, & Hall, 2015; Pomerantz, Altermatt, & Saxon, 2002). On the other hand, pursuing a more adaptive coping strategy to avoid feared failure, they might have intensified their learning efforts (Martin & Marsh, 2003). Given a larger sample size with a more adequately
balanced gender ratio, this issue should also be examined with respect to the WAESTA scale.
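A gender comparison of this kind would typically contrast mean WAESTA scores across groups, for instance via Welch's t statistic, which tolerates unequal group variances. A minimal sketch with invented group data (values hypothetical, for illustration only):

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical WAESTA sum scores for two gender groups
female = [26, 31, 22, 28, 33, 25, 30, 27]
male = [21, 24, 19, 26, 22, 20, 25, 23]

t = welch_t(female, male)  # t > 0 would mirror the higher anxiety levels
                           # reported for female students in the literature
```

With a sufficiently balanced sample, the same comparison could be extended to a test of measurement invariance across gender groups before mean differences are interpreted.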
In summary, the present findings yield important information concerning the internal and
external validity of the newly developed WAESTA scale. However, they must be regarded as preliminary in nature and thus represent only a first step in method development.
7 References
Ahmed, W., Minnaert, A., Kuyper, H., & van der Werf, G. (2012). Reciprocal relationships
between math self-concept and math anxiety. Learning and Individual Differences, 22, 385-
389.
Arens, A.K., Becker, M., & Möller, J. (2017). Social and dimensional comparisons in math
and verbal test anxiety: Within- and cross-domain relations with achievement and the me-
diating role of academic self-concept. Contemporary Educational Psychology, 51, 240-252.
Arumugam, R.N. (2014). Student’s attitude towards introductory statistics course at public
universities using partial least square analysis. Interdisciplinary Journal of Contemporary
Research in Business, 6, 94-123.
Ashcraft, M.H., & Moore, A.M. (2009). Mathematics anxiety and the affective drop in perfor-
mance. Journal of Psychoeducational Assessment, 27, 197-205.
Baloğlu, M. (2002). Psychometric properties of the Statistics Anxiety Rating Scale. Psycho-
logical Reports, 90, 315-325.
Bandalos, D.L., Yates, K., & Thorndike-Christ, T. (1995). Effects of math self-concept, per-
ceived self-efficacy, and attributions for failure and success on test anxiety. Journal of Edu-
cational Psychology, 87, 611-623.
Bar-Haim, Y., Lamy, D., Pergamin, L., Bakermans-Kranenburg, M.J., & van IJzendoorn, M.H.
(2007). Threat-related attentional bias in anxious and nonanxious individuals: A meta-ana-
lytic study. Psychological Bulletin, 133, 1-24.
Baron, R.M., & Kenny, D.A. (1986). The moderator-mediator variable distinction in social
psychological research: Conceptual, strategic, and statistical considerations. Journal of Per-
sonality and Social Psychology, 51, 1173-1182.
Benson, J. (1989). Structural components of statistical test anxiety in adults: An exploratory
model. Journal of Experimental Education, 57, 247-261.
Benson, J. (1998). Developing a strong program of construct validation: A test anxiety ex-
ample. Educational Measurement: Issues and Practice, 17, 10-17, 22.
Benson, J., Bandalos, D., & Hutchinson, S. (1994). Modeling test anxiety among men and
women. Anxiety, Stress, and Coping, 7, 131-148.
Beurze, S.M., Donders, A.R.T., Zielhuis, G.A., de Vegt, F., & Verbeek, A.L.M. (2013). Statis-
tics anxiety: A barrier for education in research methodology for medical students? Medical
Science Educator, 23, 377-384.
Bieg, M., Goetz, T., Wolter, I., & Hall, N. (2015). Gender stereotype endorsement differentially predicts girls' and boys' trait-state discrepancy in math anxiety. Frontiers in Psychology, 6, 1404.
Birenbaum, M., & Eylath, S. (1994). Who is afraid of statistics? Correlates of statistics anxiety
among students of educational sciences. Educational Research, 36, 93-98.
Blackwell, L.S., Trzesniewski, K.H., & Dweck, C.S. (2007). Implicit theories of intelligence
predict achievement across an adolescent transition: A longitudinal study and an interven-
tion. Child Development, 78, 246-263.
Blankstein, K.R., Flett, G.L., & Watson, M.S. (1992). Coping and academic problem-solving ability in test anxiety. Journal of Clinical Psychology, 48, 37-46.
Bornholt, L.J., & Wilson, R. (2007). A general mediated model of aspects of self knowledge
(M-ASK): Children’s participation in learning activities across social context. Applied Psy-
chology: An International Review, 56, 302-318.
Bradley, D.R., & Wygant, C.R. (1998). Male and female differences in anxiety about statistics
are not reflected in performance. Psychological Reports, 82, 245-246.
Burnette, J.L., O’Boyle, E.H., VanEpps, E.M., Pollack, J.M., & Finkel, E.J. (2013). Mind-sets
matter: A meta-analytic review of implicit theories and self-regulation. Psychological Bul-
letin, 139, 655-701.
Carden, R., Bryant, C., & Moss, R. (2004). Locus of control, test anxiety, academic procra-
stination, and achievement among college students. Psychological Reports, 95, 581-582.
Cassady, J.C., & Johnson, R.E. (2002). Cognitive test anxiety and academic performance.
Contemporary Educational Psychology, 27, 270-295.
Chau, Q. (2018). Exploration of statistics anxiety among doctoral students in health sciences
related disciplines. Seton Hall University in South Orange, Department of Interprofessional
Health Sciences and Health Administration: Dissertation.
Chen, J.A., & Pajares, F. (2010). Implicit theories of ability of Grade 6 science students: Re-
lation to epistemological beliefs and academic motivation and achievement in science. Con-
temporary Educational Psychology, 35, 75-87.
Chew, K.H., & Dillon, D. (2012). Statistics anxiety fluctuates over a semester and decreases
with experience. Presentation at the 24th Annual Convention of the Association for Psycho-
logical Science (APS). Chicago, May 24-27, 2012.
Chew, P.K.H., & Dillon, D. (2014a). Reliability and validity of the Statistical Anxiety Scale
among students in Singapore and Australia. Journal of Tropical Psychology, 4(7), 1-7.
Chew, P.K.H., & Dillon, D.B. (2014b). Individual differences in statistics anxiety among stu-
dents in Singapore. In P. Mandal (Ed.), Proceedings of the International Conference on
Managing the Asian Century (pp. 293-302). Singapore: Springer Publishing.
Chew, P.K.H., & Dillon, D. (2014c). Statistics anxiety update: Refining the construct and re-
commendations for a new research agenda. Perspectives on Psychological Science, 9, 196-
208.
Chiesi, F., & Primi, C. (2009). Assessing statistics attitudes among college students: Psycho-
metric properties of the Italian version of the Survey of Attitudes toward Statistics (SATS).
Learning and Individual Differences, 19, 309-313.
Chiesi, F., & Primi, C. (2010). Cognitive and non-cognitive factors related to students' statis-
tics achievement. Statistics Education Research Journal, 9(1), 6-26.
Chiesi, F., Primi, C., Bilgin, A.A., Lopez, M.V., del Carmen Fabrizio, M., Gozlu, S., & Tuan,
N.M. (2016). Measuring university students’ approaches to learning statistics: An invarian-
ce study. Journal of Psychoeducational Assessment, 34, 256-268.
Chiesi, F., Primi, C., & Carmona, J. (2011). Measuring statistics anxiety: Cross-country va-
lidity of the Statistical Anxiety Scale (SAS). Journal of Psychoeducational Assessment, 29,
559-569.
Chin, E.C.H., Williams, M.W., Taylor, J.E., & Harvey, S.T. (2017). The influence of negative
affect on test anxiety and academic performance: An examination of the tripartite model of
emotions. Learning and Individual Differences, 54, 1-8.
Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in education (7th ed.).
Abington: Routledge.
Costello, A.B., & Osborne, J.W. (2005). Best practices in exploratory factor analysis: Four
recommendations for getting the most from your analysis. Practical Assessment, Research
and Evaluation, 10(7), 1-9.
Covington, M.V. (1986). Anatomy of failure-induced anxiety: The role of cognitive mediators.
In R. Schwarzer (Ed.), Self-related cognitions in anxiety and motivation (pp. 247-263).
Hillsdale: Erlbaum.
Credé, M., & Kuncel, N.R. (2008). Study habits, skills, and attitudes: The third pillar support-
ing collegiate academic performance. Perspectives on Psychological Science, 3, 425-453.
Cruise, R.J., Cash, R.W., & Bolton, D.L. (1985). Development and validation of an instrument
to measure statistical anxiety. Paper presented at the annual meeting of the Statistical Edu-
cation Section. Proceedings of the American Statistical Association, Chicago 1985 (pp. 92-
97). Washington: American Statistical Association.
Cury, F., Da Fonseca, D., Zahn, I., & Elliot, A. (2008). Implicit theories and IQ test perfor-
mance: A sequential mediational analysis. Journal of Experimental Social Psychology, 44,
783-791.
Dai, T., & Cromley, J.G. (2014). Changes in implicit theories of abilities in biology and drop-
out from STEM majors: A latent growth curve approach. Contemporary Educational Psy-
chology, 39, 233-247.
Dauphinee, T.L., Schau, C., & Stevens, J.J. (1997). Survey of Attitudes Toward Statistics:
Factor structure and factorial invariance for women and men. Structural Equation Mode-
ling, 4, 129-141.
Davis, J.L., Burnette, J.L., Allison, S.T., & Stone, H. (2011). Against the odds: academic un-
derdogs benefit from incremental theories. Social Psychology of Education, 14, 331-346.
De Clercq, M., Galand, B., Dupont, S., & Frenay, M. (2013). Achievement among first-year
university students: an integrated and contextualised approach. European Journal of Psy-
chology of Education, 28, 641-662.
Deffenbacher, J.L. (1980). Worry and emotionality in test anxiety. In I.G. Sarason (Ed.), Test
anxiety: Theory, research and applications (pp. 111-128). Hillsdale: Erlbaum.
Dempster, M., & McCorry, N.K. (2009). The role of previous experience and attitudes toward
statistics in statistics outcomes among undergraduate psychology students. Journal of Sta-
tistics Education, 17(2), 1-6.
DeVaney, T.A. (2016). Confirmatory factor analysis of the Statistical Anxiety Rating Scale
with online graduate students. Psychological Reports, 118, 565-586.
de Winter, J.C.F., Dodou, D., & Wieringa, P.A. (2009). Exploratory factor analysis with small
sample sizes. Multivariate Behavioral Research, 44, 147-181.
Drew, P.Y., & Watkins, D. (1998). Affective variables, learning approaches and academic
achievement: a causal modelling investigation with Hong Kong tertiary students. British
Journal of Educational Psychology, 68, 173-188.
Dweck, C.S., & Leggett, E.L. (1988). A social-cognitive approach to motivation and person-
ality. Psychological Review, 95, 256-273.
Dweck, C.S., & Molden, D.C. (2005). Self-theories. Their impact on competence motivation
and acquisition. In A.J. Elliot & C.S. Dweck (Eds.), Handbook of competence and motiva-
tion (pp. 122-140). New York: Guilford Press.
Dykeman, B.F. (2011). Statistics anxiety: Antecedents and instructional interventions. Educa-
tion, 132, 441-446.
Earp, M.S. (2007). Development and validation of the Statistics Anxiety Measure. University
of Denver, College of Education: Dissertation.
Eccles, J.S., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual Review of
Psychology, 53, 109-132.
Edmundson, E.W., Koch, W.R., & Silverman, S. (1993). A facet analysis approach to content
and construct validity. Educational and Psychological Measurement, 53, 351-368.
Eichhorn, J. (2018). Die Erfassung von studentischen Valenzkognitionen zur Nützlichkeit sta-
tistischer Kompetenzen. Entwicklung, psychometrische Prüfung und vorläufige Validierung
eines entsprechenden Befragungsinstruments [Measuring educational science majors' utility value of statistical competence. Scale development, psychometric properties, and preliminary validation results]. Leibniz University Hannover, Faculty of Humanities (Institute
of Psychology): unpublished Master’s thesis.
Elliot, A.J., & McGregor, H.A. (1999). Test anxiety and the hierarchical model of approach
and avoidance achievement motivation. Journal of Personality and Social Psychology, 76,
628-644.
Elliot, A.J., & Murayama, K. (2008). On the measurement of achievement goals: Critique,
illustration, and application. Journal of Educational Psychology, 100, 613-628.
Emmioǧlu, E. (2011). A structural equation model examining the relationships among math-
ematics achievement, attitudes toward statistics, and statistics outcomes. Middle East Tech-
nical University Ankara, Graduate School of Social Sciences: Doctoral thesis.
Emmioǧlu, E., & Capa-Aydin, Y. (2012). Attitudes and achievement in statistics: A meta-
analysis study. Statistics Education Research Journal, 11(2), 95-102.
Entwistle, N.J. (2007). Research into student learning and university teaching. In N.J. Ent-
wistle & P.D. Tomlinson (Eds.), British Journal of Educational Psychology Monograph
Series II(4) – Student learning and university teaching (pp. 1-18). Leicester: British Psy-
chological Society.
Faber, G. (1993). Eine Kurzskala zur Erfassung von Leistungsangst vor schulischen Recht-
schreibsituationen: LARs [A short scale for measuring test anxiety in the spelling domain].
Empirische Pädagogik, 7, 253-284.
Faber, G. (1995a). Die Diagnose von Leistungsangst vor schulischen Rechtschreibsituationen:
Neue Ergebnisse zu den psychometrischen Eigenschaften und zur Validität einer entspre-
chenden Kurzskala [Measuring elementary school children’s spelling-specific anxiety: Fur-
ther psychometric and validation results]. Praxis der Kinderpsychologie und Kinderpsychi-
atrie, 44, 110-119.
Faber, G. (1995b). Schulfachbezogen erfragte Leistungsängste bei Grundschulkindern: Eine
Analyse ihrer differentiellen Beziehungen zu ausgewählten Selbstkonzept- und Leistungs-
maßen [Domain-specific test anxieties in elementary school children: An analysis of their
differential relations to academic self-concept and achievement]. Psychologie in Erziehung
und Unterricht, 42, 278-284.
Faber, G. (2000). Rechtschreibängstliche Besorgtheits- und Aufgeregtheitskognitionen: Empi-
rische Untersuchungsergebnisse zum subjektiven Kompetenz- und Bedrohungserleben
rechtschreibschwacher Grundschulkinder [Fourth graders' worry and emotionality cognitions in the spelling domain]. Sonderpädagogik, 30, 191-201.
Faber, G. (2002). Rechtschreibängstliche Besorgtheits- und Aufgeregtheitskognitionen: Em-
pirische Untersuchungsergebnisse zu ihrer Bedeutung für das Selbstwertgefühl und die
Schulunlust rechtschreibschwacher Grundschulkinder [Relations of fourth graders' worry
and emotionality cognitions with their global self-esteem and negative school attitudes: An
analysis in the spelling domain]. Sonderpädagogik, 32, 3-12.
Faber, G. (2012a). Elementary school children’s spelling-specific self-beliefs. Longitudinal
analyses of their relations to academic achievement, school attitudes, and self-esteem. New
York: Nova Science Publishers.
Faber, G. (2012b). Measuring self-perceptions of oral narrative competencies and anxiety in
the EFL context. Electronic Journal of Research in Educational Psychology, 10, 1343-
1382.
Faber, G. (2015). lereLA-GS4. Skalen zur schulfachspezifischen Leistungsangst im Lesen und
Rechnen für das vierte Grundschuljahr [Rating scales for the measurement of fourth graders' test anxiety in the reading and mathematics domain]. In Leibniz-Zentrum für Psychologische Information und Dokumentation (Hrsg.), PSYNDEX Tests. Datenbanksegment
Psychologischer und Pädagogischer Testverfahren (Dok.-Nr. 9006927). Universität Trier:
ZPID.
Faber, G., & Drexler, H. (2018). Predicting education science students’ statistics anxiety: The
role of prior experiences within a framework of domain-specific motivation constructs (sub-
mitted).
Faber, G., Drexler, H., & Stappert, A. (2018). BEVAST-EWL. Skala zu studentischen Besorgt-
heits-, Vermeidungs- und Aufgeregtheitskognitionen bezüglich statistischer Anforderungen
in erziehungswissenschaftlichen Lehr-Lern-Kontexten [Measuring education science students' worry, avoidance, and emotionality cognitions in statistics]. In Leibniz-Zentrum für
Psychologische Information und Dokumentation (Hrsg.), PSYNDEX Tests. Datenbankseg-
ment Psychologischer und Pädagogischer Testverfahren (Dok.-Nr. 9007626). Trier: ZPID.
Faber, G., Drexler, H., Stappert, A., & Eichhorn, J. (2018). Education science students' statistics anxiety: Developing and analyzing a scale for measuring their worry, avoidance, and
emotionality cognitions. International Journal of Educational Psychology, 7, 248-285.
Feinberg, L.B., & Halperin, S. (1978). Affective and cognitive correlates of course perfor-
mance in introductory statistics. Journal of Experimental Education, 46, 11-18.
Fernández-Castillo, A., & Caurcel, M.J. (2015). State test-anxiety, selective attention and con-
centration in university students. International Journal of Psychology, 50, 265-271.
Field, A. (2009). Discovering statistics using SPSS (3rd ed.). Los Angeles: Sage Publications.
Finney, S.J., & Schraw, G. (2003). Self-efficacy beliefs in college statistics courses. Contem-
porary Educational Psychology, 28, 161-186.
Galli, S., Ciancaleoni, M., Chiesi, F., & Primi, C. (2008). Who failed the introductory statistics
examination? A study on a sample of psychology students. Paper presented at the 11th In-
ternational Congress on Mathematical Education (ICME). Monterrey, Mexico, July 6-13.
Galassi, J.P., Frierson, H.T., & Sharer, R. (1981). Behavior of high, moderate, and low test
anxious students during an actual test situation. Journal of Consulting and Clinical Psycho-
logy, 49, 51-62.
Gaspard, H., Häfner, I., Parrisius, C., Trautwein, U., & Nagengast, B. (2017). Assessing task
values in five subjects during secondary school: Measurement structure and mean level dif-
ferences across grade level, gender, and academic subject. Contemporary Educational Psy-
chology, 48, 67-84.
Gaye-Valentine, A., & Credé, M. (2013). Assessing the construct validity of test anxiety: The
influence of test characteristics and impact on test score criterion validity. Testing Psicome-
trica Metodologia, 20, 117-130.
Genc, A. (2017). Coping strategies as mediators in the relationship between test anxiety and
academic achievement. Psihologija, 50, 51-66.
Goetz, T., Frenzel, A.C., Pekrun, R., Hall, N.C., & Lüdtke, O. (2007). Between- and within-
domain relations of students’ academic emotions. Journal of Educational Psychology, 99,
715-733.
Goetz, T., Pekrun, R., Hall, N., & Haag, L. (2006). Academic emotions from a social-cognitive
perspective: Antecedents and domain-specificity of students' affect in the context of Latin
instruction. British Journal of Educational Psychology, 76, 289-308.
Gogol, K., Brunner, M., Goetz, T., Martin, R., Ugen, S., Keller, U., Fischbach, A., & Preckel,
F. (2014). “My questionnaire is too long!” The assessments of motivational-affective con-
structs with three-item and single-item measures. Contemporary Educational Psychology,
39, 188-205.
Gogol, K., Brunner, M., Preckel, F., Goetz, T., & Martin, R. (2016). Developmental dynamics
of general and school-subject-specific components of academic self-concept, academic in-
terest, and academic anxiety. Frontiers in Psychology, 7, 356.
González, A., Rodríguez, Y., Faílde, J.M., & Carrera, M.V. (2016). Anxiety in the statistics
class: Structural relations with self-concept, intrinsic value, and engagement in two samples
of undergraduates. Learning and Individual Differences, 45, 214-221.
Graham, J.W. (2012). Missing data. Analysis and design. New York: Springer.
Green, J., Martin, A.J., & Marsh, H.W. (2007). Motivation and engagement in English, math-
ematics and science high school subjects: Towards an understanding of multidimensional
domain specificity. Learning and Individual Differences, 17, 269-279.
Griffith, J.D., Mathna, B., Sappington, M., Turner, R., Evans, J., Gu, L., Adams, L.T., & Mo-
rin, S. (2014). The development and validation of the Statistics Comprehensive Anxiety
Response Evaluation. International Journal of Advances in Psychology, 3(2), 21-29.
Guo, J., Parker, P.D., Marsh, H.W., & Morin, A.J.S. (2015). Achievement, motivation, and
educational choices: A longitudinal study of expectancy and value using a multiplicative
perspective. Developmental Psychology, 51, 1163-1176.
Guttman, R., & Greenbaum, C.W. (1998). Facet theory: Its development and current status.
European Psychologist, 3, 13-36.
Hagtvet, K., & Benson, J. (1997). The motive to avoid failure and test anxiety responses: Em-
pirical support for integration of two research traditions. Anxiety, Stress, and Coping, 10,
35-57.
Hancock, D.R. (2001). Effects of test anxiety and evaluative threat on students’ achievement
and motivation. Journal of Educational Research, 94, 284-290.
Hanna, D., & Dempster, M. (2012). The effect of statistics anxiety on students’ predicted and
actual test scores. Irish Journal of Psychology, 30, 201-209.
Hanna, D., Shevlin, M., & Dempster, M. (2008). The structure of the statistics anxiety rating
scale: A confirmatory factor analysis using UK psychology students. Personality and Indi-
vidual Differences, 45, 68-74.
Hannigan, A., Hegarty, A.C., & McGrath, D. (2014). Attitudes towards statistics of graduate
entry medical students: the role of prior learning experiences. BMC Medical Education, 14,
70.
Harter, S. (1990). Causes, correlates, and the functional role of global self-worth: A life-span
perspective. In R.J. Sternberg, & J Kolligian (Eds.), Competence considered (pp. 67-97).
New Haven: Yale University Press.
Haynes, S.N., Richard, D.C.S., & Kubany, E.S. (1995). Content validity in psychological as-
sessment: A functional approach to concepts and methods. Psychological Assessment, 7,
238-247.
Hembree, R. (1990). The nature, effects, and relief of mathematics anxiety. Journal for Re-
search in Mathematics Education, 21, 33-46.
Hill, F., Mammarella, I.C., Devine, A., Caviola, S., Passolunghi, M.C., & Szücs, D. (2016).
Maths anxiety in primary and secondary school students: Gender differences, developmen-
tal changes and anxiety specificity. Learning and Individual Differences, 48, 45-53.
Hodapp, V., & Benson, J. (1997). The multidimensionality of test anxiety: A test of different
models. Anxiety, Stress, and Coping, 10, 219-244.
Hoerger, M. (2013). ZH: An updated version of Steiger's Z and web-based calculator for testing
the statistical significance of the difference between dependent correlations [Retrieved from
http://www.psychmike.com/dependent_correlations.php].
Hong, E., & Karstensson, L. (2002). Antecedents of state test anxiety. Contemporary Educa-
tional Psychology, 27, 348-367.
Hong, Y.-Y., Chiu, C.-Y., Dweck, C.S., Lin, D.M.-S., & Wan, W. (1999). Implicit theories,
attributions, and coping: A meaning system approach. Journal of Personality and Social
Psychology, 77, 588-599.
Hood, M., Creed, P.A., & Neumann, D.L. (2012). Using the expectancy value model of moti-
vation to understand the relationship between student attitudes and achievement in statistics.
Statistics Education Research Journal, 11(2), 72-85.
Hopko, D.R., Ashcraft, M.H., Gute, J., Ruggiero, K.J., & Lewis, C. (1998). Mathematics anx-
iety and working memory: Support for the existence of a deficient inhibition mechanism.
Journal of Anxiety Disorders, 12, 343-355.
Hox, J.J. (1997). From theoretical concept to survey question. In L. Lyberg, P. Biemer, M.
Collins, E. de Leeuw, C. Dippo, N. Schwarz, & D. Trewin (Eds.), Survey measurement and
process quality (pp. 47-69). Hoboken: Wiley.
İlhan, M., & Cetin, B. (2013). Mathematics Oriented Implicit Theory of Intelligence Scale:
Validity and reliability study. GESJ: Education Science and Psychology, 25, 116-134.
John, O.P., & Benet-Martinez, V. (2000). Measurement: Reliability, construct validation, and
scale construction. In H.T. Reis & C.M. Judd (Eds.), Handbook of research methods in
social and personality psychology (pp. 339-369). New York: Cambridge University Press.
Jones, B.D., Wilkins, J.L.M., Long, M.H., & Wang, F. (2012). Testing a motivational model
of achievement: How students’ mathematical beliefs and interests are related to their
achievement. European Journal of Psychology of Education, 27, 1-20.
Karlen, Y., & Compagnoni, M. (2017). Implicit theory of writing ability: Relationships to
metacognitive strategy knowledge and strategy use in academic writing. Psychology Learn-
ing and Teaching, 16, 47-63.
Kassim, M.A.B., Hanafi, S.R.B.M., & Hancock, D.R. (2007). Test anxiety and its consequen-
ces on academic performance among university students. In R.V. Nata (Ed.), Progress in
education. Volume 15 (pp. 17-37). New York: Nova Science Publishers.
Kesici, S., Baloğlu, M., & Deniz, M.E. (2011). Self-regulated learning strategies in relation
with statistics anxiety. Learning and Individual Differences, 21, 472-477.
Kieffer, K.M., & Reese, R.J. (2009). Measurement of test and study worry and emotionality
in college students. A psychometric evaluation of the Test and Study Attitudes Inventory.
Educational and Psychological Measurement, 69, 303-321.
Kline, R.B. (2011). Principles and practice of structural equation modeling (3rd ed.). New
York: Guilford Press.
Kline, R. (2013). Exploratory and confirmatory factor analysis. In Y. Petscher, C. Schatschnei-
der, & D.L. Compton (Eds.), Applied quantitative analysis in education and the social
sciences (pp. 171-207). New York: Routledge.
Kooken, J., Welsh, M.E., McCoach, D.B., Johnston-Wilder, S., & Lee, C. (2016). Develop-
ment and validation of the Mathematical Resilience Scale. Measurement and Evaluation in
Counseling and Development, 49, 217-242.
Kosovich, J.J., Hulleman, C.S., Barron, K.E., & Getty, S. (2015). A practical measure of stu-
dent motivation: Establishing validity evidence for the Expectancy-Value-Cost Scale in
middle school. Journal of Early Adolescence, 35, 790-816.
Krampen, G. (1988). Competence and control orientations as predictors of test anxiety in stu-
dents: Longitudinal results. Anxiety Research, 1, 185-197.
Krosnick, J.A., & Presser, S. (2010). Question and questionnaire design. In P.V. Marsden &
J.D. Wright (Eds.), Handbook of survey research (2nd ed., pp. 263-314). Bingley: Emerald.
Lalonde, R.N., & Gardner, R.C. (1993). Statistics as a second language? A model for predict-
ing performance in psychology students. Canadian Journal of Behavioural Science, 25,
108-125.
Lapka, D., Wagner, P., Schober, B., Gradinger, P., Reimann, R., & Spiel, C. (2010). Metho-
denlehre: Alptraum oder Herausforderung für Psychologiestudierende? Eine Typologie auf
der Basis des sozialkognitiven Motivationsmodells von Dweck [Methodology: Nightmare
or challenge for psychology undergraduate students? A typology based on Dweck’s social-
cognitive motivation model]. Psychologie in Erziehung und Unterricht, 57, 209-222.
Lawson, D.J. (2006). Test anxiety: A test of attentional bias. University of Maine, Graduate
School: Doctoral dissertation.
Lee, J., & Paek, I. (2014). In search of the optimal number of response categories in a rating
scale. Journal of Psychoeducational Assessment, 33, 664-673.
Lee, S.H. (2006). Constructing effective questionnaires. In J.A. Pershing (Ed.), Handbook of
human performance technology: principles, practices, and potential (3rd ed., pp. 760-779).
San Francisco: Pfeiffer.
Leppin, A., Schwarzer, R., Belz, D., Jerusalem, M., & Quast, H.-H. (1987). Causal attribution
patterns of high and low test-anxious students. In R. Schwarzer, H.M. van der Ploeg, & C.
D. Spielberger (Eds.), Advances in test anxiety research. Volume 5 (pp. 67-86). Lisse: Swets
and Zeitlinger.
Liebert, R.M., & Morris, L.W. (1967). Cognitive and emotional components of test-anxiety:
A distinction and some initial data. Psychological Reports, 20, 975-978.
Little, R.J.A. (1988). A test of missing completely at random for multivariate data with missing
values. Journal of the American Statistical Association, 83, 1198-1202.
Liu, S., Onwuegbuzie, A.J., & Meng, L. (2011). Examination of the score reliability and va-
lidity of the statistics anxiety rating scale in a Chinese population: Comparisons of statistics
anxiety between Chinese college students and their Western counterparts. Journal of Edu-
cational Enquiry, 11, 29-42.
Lozano, L.M., Garcia-Cueto, E., & Muñiz, J. (2008). Effect of the number of response cat-
egories on the reliability and validity of rating scales. Methodology, 4(2), 73-79.
Lunneborg, P.W. (1964). Relations among social desirability, achievement and anxiety meas-
ures in children. Child Development, 35, 169-182.
Lou, N.M., & Noels, K.A. (2017). Measuring language mindsets and modeling their relations
with goal orientations and emotional and behavioral responses in failure situations. Modern
Language Journal, 101, 214-243.
MacCallum, R.C., Widaman, K.F., Zhang, S., & Hong, S. (1999). Sample size in factor anal-
ysis. Psychological Methods, 4, 84-99.
Macher, D., Paechter, M., Papousek, I., & Ruggeri, K. (2012). Statistics anxiety, trait anxiety,
learning behavior, and academic performance. European Journal of Psychology of Educa-
tion, 27, 483-498.
Macher, D., Paechter, M., Papousek, I., Ruggeri, K., Freudenthaler, H. H., & Arendasy, M.
(2013). Statistics anxiety, state anxiety during an examination, and academic achievement.
British Journal of Educational Psychology, 83, 535-549.
Macher, D., Papousek, I., Ruggeri, K., & Paechter, M. (2015). Statistics anxiety and perfor-
mance: blessings in disguise. Frontiers in Psychology, 6, 1116.
Mandler, G., & Sarason, S.B. (1952). A study of anxiety and learning. Journal of Abnormal
and Social Psychology, 47, 166-173.
Markle, G. (2017). Factors influencing achievement in undergraduate social science research
methods courses: A mixed methods analysis. Teaching Sociology, 45, 105-115.
Marsh, H.W. (1988). The content specificity of math and English anxieties: The High School
and Beyond study. Anxiety Research, 1, 137-149.
Marsh, H.W., Dowson, M., Pietsch, J., & Walker, R. (2004). Why multicollinearity matters:
A reexamination of relation between self-efficacy, self-concept, and achievement. Journal
of Educational Psychology, 96, 518-522.
Marsh, H.W., Kuyper, H., Seaton, M., Parker, P.D., Morin, A.J.S., Möller, J., & Abduljabbar,
A.S. (2014). Dimensional comparison theory: an extension of the internal/external frame of
reference effect on academic self-concept formation. Contemporary Educational Psy-
chology, 39, 326-341.
Marsh, H.W., & O’Mara, A.J. (2008). Self-concept is as multidisciplinary as it is multidimen-
sional. A review of theory, measurement, and practice in self-concept research. In H.W.
Marsh, R.G. Craven, & D.M. McInerney (Eds.), Self-processes, learning, and enabling hu-
man potential. Dynamic new approaches (pp. 87-115). Charlotte: Information Age
Publishing.
Marsh, H.W., & Yeung, A.S. (1996). The distinctiveness of affects in specific school subjects:
An application of confirmatory factor analysis with the National Educational Longitudinal
Study of 1988. American Educational Research Journal, 33, 665-689.
Martin, A.J., & Marsh, H.W. (2003). Fear of failure: Friend or foe? Australian Psychologist,
38, 31-38.
Mathews, A., & MacLeod, C. (2002). Induced processing biases have causal effects on anx-
iety. Cognition and Emotion, 16, 331-354.
McIlroy, D., Bunting, B., & Adamson, G. (2000). An evaluation of the factor structure and
predictive utility of a test anxiety scale with reference to students’ past performance and
personality indices. British Journal of Educational Psychology, 70, 17-32.
Middleton, M.J., & Midgley, C. (1997). Avoiding the demonstration of lack of ability: An
underexplored aspect of goal theory. Journal of Educational Psychology, 89, 710-718.
Mji, A., & Onwuegbuzie, A.J. (2004). Evidence of score reliability and validity of the Statis-
tical Anxiety Rating Scale among technikon students in South Africa. Measurement and
Evaluation in Counseling and Development, 36, 238-251.
Möller, J., Streblow, L., Pohlmann, B., & Köller, O. (2006). An extension to the internal/
external frame of reference model to two verbal and numerical domains. European Journal
of Psychology of Education, 21, 467-487.
Murtonen, M., & Lehtinen, E. (2003). Difficulties experienced by education and sociology
students in quantitative methods courses. Studies in Higher Education, 28, 171-185.
Murtonen, M., Olkinuora, E., Tynjälä, P., & Lehtinen, E. (2008). “Do I need research skills in
working life?”: University students’ motivation and difficulties in quantitative method cour-
ses. Higher Education, 56, 599-612.
Nasser, F.M. (2004). Structural model of the effects of cognitive and affective factors on the
achievement of Arabic-speaking pre-service teachers in introductory statistics. Journal of
Statistics Education, 12(1), 1-18.
Nolan, M.M., Beran, T., & Hecker, K.G. (2012). Surveys assessing students’ attitudes toward
statistics: A systematic review of validity and reliability. Statistics Education Research
Journal, 12(2), 103-123.
Olani, A., Harskamp, E., Hoekstra, R., & van der Werf, G. (2010). The roles of self-efficacy
and perceived teacher support in the acquisition of statistical reasoning abilities: a path ana-
lysis. Educational Research and Evaluation, 16, 517-528.
Oliver, A., Sancho, P., Galiana, L., & Cebrià i Iranzo, M. (2014). Nueva evidencia sobre la
Statistical Anxiety Scale (SAS). Anales de Psicología, 30, 150-156.
Ommundsen, Y. (2003). Implicit theories of ability and self-regulation strategies in physical
education classes. Educational Psychology, 23, 141-157.
Onwuegbuzie, A.J. (2003). Modeling statistics achievement among graduate students. Educa-
tional and Psychological Measurement, 63, 1020-1038.
Onwuegbuzie, A.J. (2004). Academic procrastination and statistics anxiety. Assessment and
Evaluation in Higher Education, 29, 3-19.
Onwuegbuzie, A.J., & Wilson, V.A. (2003). Statistics anxiety: Nature, etiology, antecedents,
effects and treatments – a comprehensive review of the literature. Teaching in Higher Edu-
cation, 8, 195-209.
Papanastasiou, E.C. (2005). Factor structure of the “Attitudes Toward Research” scale. Statis-
tics Education Research Journal, 4(1), 16-26.
Papanastasiou, E.C., & Zembylas, M. (2008). Anxiety in undergraduate research method cour-
ses: its nature and implications. International Journal of Research and Method in Educa-
tion, 31, 155-167.
Papousek, I., Ruggeri, K., Macher, D., Paechter, M., Heene, M., Weiss, E.M., Schulter, G., &
Freudenthaler, H. (2012). Psychometric evaluation and experimental validation of the
Statistics Anxiety Rating Scale. Journal of Personality Assessment, 94, 82-91.
Pekrun, R. (1988). Anxiety and motivation in achievement settings: towards a system-theore-
tical approach. International Journal of Educational Research, 12, 307-323.
Pekrun, R., Elliot, A.J., & Maier, M.A. (2009). Achievement goals and achievement emotions:
Testing a model of their joint relations with academic performance. Journal of Educational
Psychology, 101, 115-135.
Pekrun, R., Goetz, T., Perry, R.P., Kramer, K., Hochstadt, M., & Molfenter, S. (2004). Beyond
test anxiety: Development and validation of the Test Emotions Questionnaire (TEQ). Anx-
iety, Stress, and Coping, 17, 287-316.
Perepiczka, M., Chandler, N., & Becerra, M. (2011). Relationship between graduate students'
statistics self-efficacy, statistics anxiety, attitude toward statistics, and social support. The
Professional Counselor, 1(2), 99-108.
Perry, R.P., Hall, N.C., & Ruthig, J.C. (2005). Perceived (academic) control and scholastic
attainment in higher education. In J. Smart (Ed.), Higher education: Handbook of theory
and research. Volume 20 (pp. 363-436). Dordrecht: Springer.
Pintrich, P.R. (2004). A conceptual framework for assessing motivation and self-regulated
learning in college students. Educational Psychology Review, 16, 385-407.
Pituch, K.A., & Stevens, J.P. (2016). Applied multivariate statistics for the social sciences.
Analyses with SAS and IBM’s SPSS (6th ed.). New York: Routledge.
Podsakoff, P.M., MacKenzie, S.B., Lee, J.-Y., & Podsakoff, N.P. (2003). Common method
biases in behavioral research: A critical review of the literature and recommended remedies.
Journal of Applied Psychology, 88, 879-903.
Pomerantz, E.M., Altermatt, E.R., & Saxon, J.L. (2002). Making the grade but feeling dis-
tressed. Gender differences in academic performance and internal distress. Journal of Edu-
cational Psychology, 94, 396-404.
Price, L. (2014). Modelling factors for predicting student learning outcomes in higher educa-
tion. In D. Gijbels, V. Donche, J.T.E. Richardson, & J.D. Vermunt (Eds.), Learning patterns
in higher education: dimensions and research perspectives (pp. 56-77). London: Routledge.
Putwain, D.W. (2008). Deconstructing test anxiety. Emotional and Behavioural Difficulties,
13, 141-155.
Putwain, D.W. (2018). An examination of the self-referent executive processing model of test
anxiety: control, emotional regulation, self-handicapping, and examination performance.
European Journal of Psychology of Education (in press).
Putwain, D.W., Connors, L., & Symes, W. (2010). Do cognitive distortions mediate the test
anxiety-examination relationship? Educational Psychology, 30, 11-26.
Putwain, D.W., & Symes, W. (2012). Achievement goals as mediators of the relationship be-
tween competence beliefs and test anxiety. British Journal of Educational Psychology, 82,
207-224.
Räty, H., Kasanen, K., Kiiskinen, J., Nykky, M., & Atjonen, P. (2004). Children’s notions of
the malleability of their academic ability in the mother tongue and mathematics. Scandina-
vian Journal of Educational Research, 48, 413-426.
Ramirez, C., Emmioğlu, E., & Schau, C. (2010). Understanding students’ attitudes toward
statistics: New perspectives using an expectancy-value model of motivation and the Survey
of Attitudes Toward Statistics. Proceedings of the Joint Statistical Meetings (JSM 2010),
Section on Statistics Education (pp. 830-837), Vancouver, July 31-August 5.
Rana, R.A., & Mahmood, N. (2010). The relationship between test anxiety and academic
achievement. Bulletin of Education and Research, 32(2), 63-74.
Ratanaolarn, T. (2016). The development of a structural model of graduate students’ statistics
achievement. International Journal of Behavioral Science, 11, 153-168.
Respondek, L., Seufert, T., Stupnisky, R., & Nett, U.E. (2017). Perceived academic control
and academic emotions predict undergraduate university student success: Examining effects
on dropout intention and achievement. Frontiers in Psychology, 8, 243.
Richards, A., French, C.C., Keogh, E., & Carter, C. (2000). Test-anxiety, inferential reasoning
and working memory load. Anxiety, Stress, and Coping, 13, 87-109.
Richardson, M., Abraham, C., & Bond, R. (2012). Psychological correlates of university stu-
dents’ academic performance: A systematic review and meta-analysis. Psychological Bul-
letin, 138, 353-387.
Robbins, S.B., Lauver, K., Le, H., Davis, D., & Carlstrom, A. (2004). Do psychosocial and
study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130,
261-288.
Rost, D.H., & Schermer, F.J. (1989a). The various facets of test anxiety: A subcomponent
model of test anxiety measurement. In R. Schwarzer, H.M. van der Ploeg & C.D. Spielber-
ger (Eds.), Advances in test anxiety research. Volume 6 (pp. 36-52). Lisse: Swets and Zeit-
linger.
Rost, D.H., & Schermer, F.J. (1989b). The assessment of coping with test anxiety. In R.
Schwarzer, H.M. van der Ploeg, & C.D. Spielberger (Eds.), Advances in test anxiety re-
search. Volume 6 (pp. 179-191). Lisse: Swets and Zeitlinger.
Rost, D.H., Sparfeldt, J.R., & Schilling, S.R. (2007). DISK-Gitter mit SKSLF-8. Differentielles
Selbstkonzept-Gitter mit Skala zur Erfassung des Selbstkonzepts schulischer Leistungen und
Fähigkeiten [The self-concept grid: A questionnaire for measuring subject-specific facets
of academic self-concept]. Göttingen: Hogrefe.
Sarason, I.G. (1984). Stress, anxiety, and cognitive interference: Reactions to tests. Journal of
Personality and Social Psychology, 46, 929-938.
Schau, C., Stevens, J., Dauphinee, T.L., & Del Vecchio, A. (1995). The development and va-
lidation of the Survey of Attitudes Toward Statistics. Educational and Psychological Mea-
surement, 55, 868-875.
Schneider, M., & Preckel, F. (2017). Variables associated with achievement in higher educa-
tion: A systematic review of meta-analyses. Psychological Bulletin, 143, 565-600.
Schriesheim, C.A., Kopelman, R.E., & Solomon, E. (1989). The effect of grouped versus ran-
domized questionnaire format on scale reliability and validity: A three-study investigation.
Educational and Psychological Measurement, 49, 487-508.
Schunk, D.H., & Zimmerman, B.J. (2006). Competence and control beliefs: Distinguishing
the means and ends. In P.A. Alexander & P.H. Winne (Eds.), Handbook of educational
psychology (2nd ed., pp. 349-367). New York: Erlbaum.
Schutz, P.A., Drogosz, L.M., White, V.E., & Distefano, C. (1998). Prior knowledge, attitude,
and strategy use in an introduction to statistics course. Learning and Individual Differences,
10, 291-308.
Schwarzer, R. (1996). Thought control of action: Interfering self-doubts. In I.G. Sarason, G.R.
Pierce, & B.R. Sarason (Eds.), Cognitive interference: Theory, methods and findings (pp.
99-115). Mahwah: Erlbaum.
Schwarzer, R., & Jerusalem, M. (1992). Advances in test anxiety theory: A cognitive process
approach. In K.A. Hagtvet, & T.B. Johnson (Eds.), Advances in test anxiety research.
Volume 7 (pp. 2-17). Lisse: Swets and Zeitlinger.
Schwarzer, R., & Quast, H.-H. (1985). Multidimensionality of the anxiety experience: Evid-
ence for additional components. In H.M. van der Ploeg, R. Schwarzer, & C.D. Spielberger
(Eds.), Advances in test anxiety research. Volume 4 (pp. 3-14). Lisse: Swets and Zeitlinger.
Seipp, B. (1991). Anxiety and academic achievement: A meta-analysis of findings. Anxiety
Research, 4, 27-41.
Selkirk, L.C., Bouchey, H.A., & Eccles, J.S. (2011). Interaction among domain-specific ex-
pectancies, values, and gender: Predictors of test anxiety during early adolescence. Journal
of Early Adolescence, 31, 361-389.
Sesé, A., Jiménez, R., Montano, J.-J., & Palmer, A. (2015). Can attitudes towards statistics
and statistics anxiety explain students’ performance? Revista de Psicodidáctica, 20, 285-
304.
Skaalvik, E.M. (1997). Self-enhancing and self-defeating ego orientation: Relations with task
and avoidance orientation, achievement, self-perceptions, and anxiety. Journal of Educa-
tional Psychology, 89, 71-81.
Skinner, E.A. (1996). A guide to constructs of control. Journal of Personality and Social Psy-
chology, 71, 549-570.
Slootmaeckers, K. (2012). Too afraid to learn statistics?! Attitudes towards statistics as a
barrier to learning statistics and to acquiring quantitative skills. Paper presented at the 4th
International Conference on Education and New Learning Technologies (EDULEARN12),
Barcelona 2012, July 2-4.
Smith, B.P. (2005). Goal orientation, implicit theory of ability, and collegiate instrumental
music practice. Psychology of Music, 33, 36-57.
Spada, M.M., Nikcevic, A.V., Moneta, G.B., & Ireson, J. (2006). Metacognition as mediator
of test anxiety on a surface approach to studying. Educational Psychology, 26, 615-624.
Sparfeldt, J.R., Schilling, S.R., Rost, D.H., Stelzl, I., & Peipert, D. (2005). Leistungsängstlich-
keit: Facetten, Fächer, Fachfacetten? Zur Trennbarkeit nach Angstfacette und Inhaltsbe-
reich [Test anxiety: The relevance of anxiety facets and school subjects]. Zeitschrift für
Pädagogische Psychologie, 19, 225-236.
Spielberger, C.D., & Vagg, P.R. (1995). Test anxiety: A transactional process model. In C.D.
Spielberger & P.R. Vagg (Eds.), Test anxiety: Theory, assessment, and treatment (pp. 3-
14). Philadelphia: Taylor & Francis.
Spinath, B., Spinath, F.M., Riemann, R., & Angleitner, A. (2003). Implicit theories about per-
sonality and intelligence and their relationship to actual personality and intelligence. Per-
sonality and Individual Differences, 35, 939-951.
Sproesser, U., Engel, J., & Kuntze, S. (2016). Fostering self-concept and interest for statistics
through specific learning environments. Statistics Education Research Journal, 15(1), 28-
54.
Stanisavljevic, D., Trajkovic, G., Marinkovic, J., Bukumiric, Z., Cirkovic, A., & Milic, N.
(2014). Assessing attitudes towards statistics among medical students: Psychometric pro-
perties of the Serbian version of the Survey of Attitudes Towards Statistics (SATS). PLOS
ONE, 9(11), e112567.
Stappert, A. (2017). Die Erfassung impliziter Fähigkeitstheorien von Studierenden im Um-
gang mit statistischen Anforderungen. Entwicklung, psychometrische Prüfung und vor-
läufige Validierung eines entsprechenden Befragungsinstruments [Measuring educational
science majors’ implicit theories of statistical competence. Scale development, psycho-
metric properties, and preliminary validation results]. Leibniz University Hannover, Faculty
of Humanities (Institute of Educational Psychology): unpublished Master’s thesis.
Steiger, J.H. (1980). Tests for comparing elements of a correlation matrix. Psychological
Bulletin, 87, 245-251.
Steinmayr, R., Crede, J., McElvany, N., & Wirthwein, L. (2016). Subjective well-being, test
anxiety, academic achievement: Testing for reciprocal effects. Frontiers in Psychology, 6,
1994.
Streblow, L. (2004). Bezugsrahmen und Selbstkonzeptgenese [The effects of social and di-
mensional comparison on the formation of academic self-concepts]. Münster: Waxmann.
Sutarso, T. (1992). Some variables in relation to students’ anxiety in learning statistics. Paper
presented at the Annual Meeting of the Mid-South Educational Research Association,
Knoxville 1992, November 11-13.
Tempelaar, D.T., Rienties, B., Giesbers, B., & Gijselaers, W.H. (2015). The pivotal role of
effort beliefs in mediating implicit theories of intelligence and achievement goals and
academic motivations. Social Psychology of Education, 18, 101-120.
Townsend, M.A.R., Moore, D.W., Tuck, B.F., & Wilton, K.M. (1998). Self-concept and anx-
iety in university students studying social science statistics within a co-operative learning
structure. Educational Psychology, 18, 41-54.
Tremblay, P.F., Gardner, R.C., & Heipel, G. (2000). A model of the relationships among mea-
sures of affect, aptitude, and performance in introductory statistics. Canadian Journal of
Behavioural Science, 32, 40-48.
Vahedi, S., Farrokhi, F., & Bevrani, H. (2011). A confirmatory factor analysis of the structure
of the Statistics Anxiety Measure: An examination of four alternative models. Iranian Jour-
nal of Psychiatry, 6, 92-98.
van der Westhuizen, S. (2014). Postgraduate students’ attitude towards research, their research
self-efficacy and their knowledge of research. South African Journal of Higher Education,
28, 1414-1432.
Vanhoof, S., Kuppens, S., Sotos, A.E.C., Verschaffel, L., & Onghena, P. (2011). Measuring
statistics attitudes: Structure of the Survey of Attitudes Toward Statistics (SATS-36). Statis-
tics Education Research Journal, 10(1), 35-51.
Van Overwalle, F. (1989). Success and failure of freshmen at university: a search for determi-
nants. Higher Education, 18, 287-308.
van Rooij, E., Brouwer, J., Bruinsma, M., Jansen, E., Donche, V., & Noyens, D. (2018). A
systematic review of factors related to first-year students’ success in Dutch and Flemish
higher education. Pedagogische Studiën, 94, 360-404.
Vigil-Colet, A., Lorenzo-Seva, U., & Condon, L. (2008). Development and validation of the
Statistical Anxiety Scale. Psicothema, 20, 174-180.
Virtanen, P., Nevgi, A., & Niemi, H. (2013). Self-regulation in higher education: Students’
motivational, regulational, and learning strategies, and their relationship to study success.
Studies for the Learning Society, 3(1-2), 20-36.
Vogler, J.S., & Bakken, L. (2007). Motivation across domains: Do goals and attributions chan-
ge with subject matter for Grades 4 and 5 students? Learning Environments Research, 10,
17-33.
von der Embse, N., Jester, D., Roy, D., & Post, J. (2018). Test anxiety effects, predictors, and
correlates: A 30-year meta-analytic review. Journal of Affective Disorders, 227, 483-493.
Wakita, T., Ueshima, N., & Noguchi, H. (2012). Psychological distance between categories
in the Likert scale: Comparing different numbers of options. Educational and Psychological
Measurement, 73, 533-546.
Waples, J.A. (2016). Building emotional support with students in statistics courses. Scholar-
ship of Teaching and Learning in Psychology, 2, 285-293.
Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psycho-
logical Review, 92, 548-573.
Weng, L.-J. (2004). Impact of the number of response categories and anchor labels on coeffi-
cient alpha and test-retest reliability. Educational and Psychological Measurement, 64, 956-
972.
Wigfield, A., Hoa, L.W., & Klauda, S.L. (2009). The role of achievement values in the regu-
lation of achievement behaviors. In D.H. Schunk & B.J. Zimmerman (Eds.), Motivation and
self-regulated learning. Theory, research, and applications (pp. 169-195). New York:
Routledge.
Williams, A.S. (2013). Worry, intolerance of uncertainty, and statistics anxiety. Statistics Edu-
cation Research Journal, 12(1), 48-59.
Williams, A. (2014). An exploration of preference for numerical information in relation to
math self-concept and statistics anxiety in a graduate statistics course. Journal of Statistics
Education, 22(1), 1-16.
Williams, A.S. (2015). Statistics anxiety and worry: The roles of worry beliefs, negative pro-
blem orientation, and cognitive avoidance. Statistics Education Research Journal, 14(2),
53-75.
Wine, J.D. (1982). Evaluation anxiety. A cognitive-attentional construct. In H.W. Krohne &
L. Laux (Eds.), Achievement, stress, and anxiety (pp. 207-219). Washington: Hemisphere.
Yeager, D.S., & Dweck, C.S. (2012). Mindsets that promote resilience: When students believe
that personal characteristics can be developed. Educational Psychologist, 47, 302-314.
Zare, H., Rastegar, A., & Hosseini, S.M.D. (2011). The relation among achievement goals and
academic achievement in statistics: the mediating role of statistics anxiety and statistics self-
efficacy. Procedia – Social and Behavioral Sciences, 30, 1166-1172.
Zeidner, M. (1991). Statistics and mathematics anxiety in social science students: Some inte-
resting parallels. British Journal of Educational Psychology, 61, 319-328.
Zeidner, M. (1998). Test anxiety. The state of the art. New York: Plenum.
Zimmerman, D.W. (1987). Comparative power of Student t test and Mann-Whitney U test for
unequal sample sizes and variances. Journal of Experimental Education, 55, 171-174.
Zonnefeld, V.L. (2015). Mindsets, attitudes, and achievement in undergraduate statistics
courses. University of South Dakota, Graduate Faculty: Dissertation.
8 Appendix
WAESTA scale for measuring education science students’ worry (WR), avoidance (AV), and
emotionality (EM) cognitions when encountering statistical demands (from Faber, Drexler, Stappert,
& Eichhorn, 2018) – originally worded in German (Faber, Drexler, & Stappert, 2018).
01 WR I will hardly be able to meet my degree program's statistics requirements
right away.
02 EM I would be very uncomfortable if I had to work on a statistical problem.
03 AV If I could, I would rather take two other courses than do one statistics
course.
04 AV When presentation topics are being assigned in the course, I would make
sure that I receive a topic that doesn't involve statistics.
05 WR It would be difficult for me to discuss statistical content adequately in my
papers.
06 AV When preparing presentations, I would rather omit anything that has to do
with statistics.
07 EM I would be quite nervous if I were asked to explain a chart from a research
report.
08 WR I have difficulty understanding statistical content in a lecture.
09 EM I would have trouble extracting the relevant information from a table of
statistical values.
10 WR If I had to comment on statistical data in a course, I would be worried that
I would make a fool of myself.
11 WR If I had to give a presentation including statistical findings in a course, I
would hope that no one had any follow-up questions.
12 WR I would hardly be able to present a report on statistical research findings
adequately.
13 EM I would feel very tense if I had to apply a statistical formula.
14 WR Despite careful preparation for a statistics exam, I would worry about not
passing it.
15 EM The thought of having to explain a statistical problem in a course makes
me quite nervous.
16 WR If I took a statistics course, I would be concerned that I would quickly
forget everything I had learned.
17 AV In scientific texts, I would skip over statistical tables and diagrams if pos-
sible.
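The subscale composition above (eight WR, four AV, and five EM items) can be summarized in a short scoring sketch. This is purely illustrative: the item-to-subscale assignment follows the labels printed with each item, but the response format (here a generic numeric rating) is an assumption, as this excerpt does not specify the rating scale used.

```python
# Illustrative WAESTA subscale scoring. The item assignment below mirrors
# the WR/AV/EM labels printed with items 01-17; the rating values passed in
# are assumed to be numeric (the excerpt does not state the response format).

SUBSCALES = {
    "WR": [1, 5, 8, 10, 11, 12, 14, 16],   # worry (8 items)
    "AV": [3, 4, 6, 17],                   # avoidance (4 items)
    "EM": [2, 7, 9, 13, 15],               # emotionality (5 items)
}

def score_waesta(responses):
    """responses: dict mapping item number (1-17) to a numeric rating.
    Returns the mean rating per subscale."""
    if set(responses) != set(range(1, 18)):
        raise ValueError("expected ratings for all 17 items")
    return {name: sum(responses[i] for i in items) / len(items)
            for name, items in SUBSCALES.items()}
```

For example, a respondent who rates every item 2 would receive a mean of 2.0 on all three subscales.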
Recommended citation:
Faber, G., Drexler, H., Stappert, A., & Eichhorn, J. (2018). Measuring education science students’ statistics anxiety.
Conceptual framework, methodological considerations, and empirical analyses. Leibniz University Hannover, Fa-
culty of Humanities, Institute of Psychology: Research report.