This article was downloaded by: [112.209.40.91] On: 23 February 2013, At: 00:13. Publisher: Routledge. Informa Ltd Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.
Communication Education
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/rced20

Assessment of Oral Communication: A Major Review of the Historical Development and Trends in the Movement from 1975 to 2009
Sherwyn Morreale, Philip Backlund, Ellen Hay & Michael Moore
Version of record first published: 09 Mar 2011.

To cite this article: Sherwyn Morreale, Philip Backlund, Ellen Hay & Michael Moore (2011): Assessment of Oral Communication: A Major Review of the Historical Development and Trends in the Movement from 1975 to 2009, Communication Education, 60:2, 255-278

To link to this article: http://dx.doi.org/10.1080/03634523.2010.516395
Assessment of Oral Communication: A Major Review of the Historical Development and Trends in the Movement from 1975 to 2009

Sherwyn Morreale, Philip Backlund, Ellen Hay & Michael Moore
This comprehensive review of the assessment of oral communication in the commu-
nication discipline is both descriptive and empirical in nature. First, some background on
the topic of communication assessment is provided. Following the descriptive back-
ground, we present an empirical analysis of academic papers, research studies, and books
about assessing communication, all of which were presented or published from 1975 to
2009. The results are outlined of content and thematic analyses of a database of 558
citations from that time period, including 434 national convention presentations, 89
journal articles, and 35 other extant books and publications. Three main themes and
eight subthemes are identified in the database, and trends evident in the resulting data
are considered. The study concludes with a discussion of the trends and overarching
themes gleaned from the research efforts, and the authors’ recommendations of best
practices for how to conduct oral communication assessment.
Keywords: Assessment; Evaluation; Communication Assessment; Communication Skills
Assessment; Evaluating Communication; Assessing Communication; Assessing Student
Learning; Learning Outcomes; Program Evaluation
Sherwyn Morreale (Ph.D., University of Denver, 1989) is Associate Professor and Director of Graduate Studies in the Communication Department, University of Colorado at Colorado Springs. Philip Backlund (Ph.D., University of Denver, 1977) is a professor of Communication Studies at Central Washington University. Ellen Hay (Ph.D., Iowa State University, 1982) is a professor and chair of the Department of Communication Studies at Augustana College, Rock Island, IL. Michael Moore (Ph.D., University of Missouri, 1973) is an emeritus professor of Communication from Morehead State University. Sherwyn Morreale can be contacted at

ISSN 0363-4523 (print)/ISSN 1479-5795 (online) © 2011 National Communication Association
DOI: 10.1080/03634523.2010.516395
Communication Education, Vol. 60, No. 2, April 2011, pp. 255-278

Access to and success in college are substantially influenced by prior academic achievement. Learning is a continuum; gaps and weaknesses at one point, whether in high school or college, create barriers to successful performance at the next level. Student learning outcomes data are essential to better understand what is working and what is not, to identify curricular and pedagogical weaknesses, and to use this information to improve performance. (Kuh & Ikenberry, 2009, p. 1)
As a national initiative, early mandates to assess student learning were often perceived
as an inappropriate expectation of faculty set by college administrators and legislators
external to their campuses. Many faculty members firmly believed their current
practices for grading knowledge and performance were quite sufficient. Times and
attitudes evolved, and assessment is now institutionalized on the majority of
American campuses (Ewell, 2009). As we will detail in the following report,
assessment in higher education and in the communication discipline has developed
considerably over the last 35 years. As a policy and practice in higher education,
assessment likely will be with us for years to come; it is not going away, nor should it.
Today a wide range of organizations external to campuses (regional accrediting bodies, legislatures, state boards of education, and others) endorse and mandate assessment. They require assessment as part of their accountability processes to ensure faculty at institutions of higher education are doing their jobs well. These two processes, assessment and accountability, may be a source of confusion for some, because both are often collectively referred to as assessment. In reality, assessment and accountability are two different processes, with one potentially embedded in the other. Put simply, when we assess our own performance or that of our students, it is assessment; when others assess our performance or that of our department, program, or institution, it is accountability (Frye, 2006).
Another persistent source of confusion about these two processes is the language
used in their discussion and application. Precision of language about assessment
could support greater clarity of practice and perhaps more enthusiastic support and
participation. To that end and to inform the report that follows, Table 1 presents
some of the more commonly used terms in the assessment movement.
The present study examines the development and evolution of the assessment
movement in the communication discipline by providing a comprehensive overview
of historical trends in related scholarship over the last 35 years. The goal of this
research study is to serve the needs of scholars, teachers, and administrators who are
committed to engaging in oral communication assessment effectively, both in and
outside the communication discipline. We begin with some descriptive background
for this study and then outline the method and results of gathering and analyzing
data about assessing communication from national convention programs, educa-
tional journals, as well as other books and publications.
Background to the Present Study
Some understanding of assessment in general, and in the communication discipline
in particular, is in order. A brief sketch of the history of the assessment movement
includes references to communication’s role therein. Then we detail the National
Table 1 Common Terms Used in the Assessment Initiative

Assessment and accountability: In general, when we assess our own performance, it is considered assessment; when others assess our performance, it is considered accountability. That is, assessment is a set of initiatives we take to monitor the results of our actions and to improve ourselves; accountability is a set of initiatives others take to monitor the results of our actions and to penalize or reward us accordingly.

Assessment: The systematic process of determining educational objectives, gathering, analyzing, and using information about student learning and learning outcomes to make decisions about programs, individual student progress, or accountability.

Measurement: The systematic investigation of people's attributes or behaviors.

Benchmark: A criterion-referenced objective standard that is used for comparative purposes. A program can use its own data as a baseline or benchmark against which to compare future performance. It can also use data from another program as a benchmark.

Direct assessment: Direct assessment of student learning requires students to display their knowledge and skills as they respond to or are evaluated using an assessment instrument. Objective tests, essays, presentations, and classroom assignments all meet this criterion.

Indirect assessment: Indirect assessments such as surveys and interviews ask students to reflect on their learning rather than demonstrate it.

Formative assessment: An assessment that is used for improvement (on an individual or program level) rather than for making final decisions or for accountability. It is also used to provide feedback to improve teaching, learning, and the curricula, as well as to identify students' strengths and weaknesses.

Summative assessment: A sum total or final product measure of achievement at the end of an instructional unit or course of study.

Performance-based assessment: An assessment technique involving the gathering of data through systematic observation of a behavior or process and evaluating these data based on a clearly articulated set of performance criteria to serve as the basis for evaluative judgments. Evaluating speeches is a good example of this type of assessment.

Evaluation: This term broadly covers all potential investigations of institutional functioning, based on formative, summative, or performance-based assessment processes. Evaluation may include assessment of learning, but it might also include nonlearning-centered investigations (e.g., satisfaction with instructional facilities).

Objectives: The specific knowledge, skills, or attitudes that students are expected to achieve through their college experience (e.g., any expected/intended student outcomes).

Outcomes: The results of instruction; the specific knowledge, skills, or developmental attributes that students actually develop through their college experience (viz., the assessment results).

Rubric: A scoring tool that lists the criteria for an assignment or task, or "what counts" (e.g., purpose, organization, and mechanics) in a piece of writing. A rubric also articulates gradations of quality for each criterion it contains, from excellent to poor.

Norm: An interpretation of scores on a measure that focuses on the rank ordering of students, not their performance in relation to criteria.

Value-added: The effects educational providers have had on students during their programs of study; the impact of participating in higher education on student learning and development above that which would have occurred through natural maturation. Value-added factors are usually measured as longitudinal change or difference between pretest and posttest.
Communication Association’s involvement in assessing communication programs
and student learning outcomes. We compare assessment of communication to that in
other disciplines and describe the current status of communication assessment
nationally.
General Historical Background
In the 1960s, poor results from what was termed the choice-based curriculum led to calls for undergraduate curriculum reform in the 1980s. During that time, many students were deemed not adequately prepared for college, and students graduating from college sometimes lacked skills necessary for workplace success, a condition that some might say has not changed sufficiently yet. Over 20 national reports on skills assessment were published by a variety of associations and agencies between 1983 and 1989 (Hay, 1989, 1992).
As the 20th century drew to a close, interest in assessment continued to increase. The Goals 2000: Educate America Act and President Bush's No Child Left Behind program focused on improving education with an emphasis on assessing learning outcomes. The Goals 2000 Act codified into law six national education goals developed in 1989 and added two goals to encourage parental participation and the professional development of educators ("Clinton intends," 1993). The national goal on literacy and lifelong learning was of particular importance to communication educators: "The proportion of college graduates who demonstrate an advanced ability to think critically, communicate effectively, and solve problems will increase substantially" (Lieb, 1994, p. 1). Even though the "communicate effectively" portion of the objective might not have received the full attention some communication educators believed it deserved, it does provide a national rationale for assessing communication education.
National Communication Association (NCA) Assessment Initiatives
NCA, formerly known as the Speech Communication Association, has actively
developed a national assessment agenda since the 1970s. The Speech Communication
Association Task Force on Assessment and Testing, formed in 1978, was charged with
gathering, analyzing, and disseminating information about the testing of speech
communication skills (Backlund & Morreale, 1994). This task force has evolved into
the NCA Communication Assessment Division (CAD), which addresses activities
such as defining communication skills and competencies, publishing summaries of
assessment procedures and instruments, publishing standards for effective commu-
nication programs, and developing guidelines for program review. A significant
portion of the research on assessment supported by NCA has focused on two inter-
related areas: communication programs and student learning outcomes.
Communication program assessment. The purpose of program assessment is con-
tinuous improvement of departmental educational efforts through self-evaluation.
Such assessment provides an opportunity for department members to demonstrate
the unique contributions of their departments to administrators and to fend off
threats based upon fiscal constraints or political motivations. Program assessment
requires department members to examine curricula, educational experiences, and
student learning. Evaluation may be part of a campus-wide effort (Backlund, Hay,
Harper, & Williams, 1990) and/or part of a departmental initiative (Makay, 1997;
Shelton, Lane, & Waldhart, 1999). Work in this area is also associated increasingly
with mandates from state agencies and accreditation boards. NCA has a set of
Guidelines for Developing and Assessing Undergraduate Programs in Communication
available on the association’s website (NCA, 2008).
Communication learning outcomes assessment. The purpose of learning outcomes
assessment is to examine actual student learning in any course or other arena in
which teaching and learning may occur. Faculty ultimately must ‘‘own’’ assessment of
student learning; they are the ones who write student-focused learning objectives,
select appropriate instruments of assessment, collect, analyze, and interpret the data,
and then use the data for course and program improvement. Developing student-
learning outcomes in communication begins with defining communication compe-
tence as it relates to the desired educational outcomes of the instructional program.
Several publications, including the published proceedings of NCA’s national
assessment conference in 1993, discuss various approaches to examining commu-
nication competence (Christ, 1994; Morreale, Brooks, Berko, & Cooke, 1994;
Morreale, Spitzberg, & Barge, 2006).
Assessment in Communication Compared to Other Disciplines
While the terms in Table 1 apply to assessment in all disciplines, assessment within
the communication discipline tends to be unique. Methods of assessment used in
other academic areas cannot always be adapted to communication, particularly to the
assessment of oral communication skills. Comparing communication skills assessment with assessment in other academic areas calls attention to three distinctions,
which can be addressed by well-designed assessment programs (Morreale &
Backlund, 2007).
First, assessment of learning in many disciplines can use such methods as
achievement tests as well as objective and subjective tests of content. By contrast,
communication is generally seen as a process skill, similar to reading and writing.
While it is important to assess students’ knowledge about how they should
communicate, it is equally if not more important to assess their communication
performance in authentic situations. Thus, communication skills have generally been
assessed with performance measures, while communication knowledge has been
assessed with more traditional assessment tools such as paper-and-pencil tests and
essays.
Second, due to the interactive nature of communication, assessment of communication performance encounters other challenges. The appropriateness and effectiveness of communication are generally judged based on the situation and on the perceptions of the viewer, that is, the impression made by the communicator on the observer. As a result, there may be more than one correct answer or way of
performing. To complicate matters further, evaluation of a communicator depends
on criteria that are often culturally bound, thus making assessment more difficult
than in other academic subjects.
Third, in the communication discipline, we are faced with the uncertainty of not
knowing whether our educational programs have worked. Assessment results can
only be predictive of a certain potential or propensity to communicate competently
in the future. The determination of competence in communication will be affected by
numerous factors impinging on any interaction at any given time. Determining
whether a student has achieved a given level of maintainable competence requires
observation of the student’s performance in a multitude of diverse situations.
Current Status of Communication Assessment
The assessment of student learning about communication has taken root nationally.
Two efforts external to the communication discipline support this observation. The
College Board recently published content standards for English language arts as well
as math and statistics; these content standards are recommended as secondary school
assessment tools to test for college readiness. Communication is considered critical in
the language arts standards, which prominently include rubrics for assessing
speaking, listening, and media literacy (see http://professionals.collegeboard.com
for a full description of The College Board standards).
The Association of American Colleges and Universities now offers its Liberal
Education for America’s Promise (LEAP) program as a primary vehicle for advancing
and enhancing undergraduate liberal education for all students. Communication
competence and its assessment figure significantly in the LEAP program (see http://
www.aacu.org/leap for a full description of the LEAP program). The essential
learning outcome for intellectual and practical skills includes written and oral
communication, information literacy, as well as teamwork and problem solving. The
learning outcome for personal and social responsibility includes intercultural
knowledge and competency.
More information on the nature and status of communication assessment is
available in an updated edition of an NCA publication on large-scale assessment
(Morreale & Backlund, 2007). However, to further investigate the evolution of
assessment in communication studies, we conducted an analysis of convention
presentations, journal articles, and other publications, as described in the following
section.
Method
This study utilized a triangulated methodology to examine trends in research related
to oral communication assessment over a 35-year period extending back to 1975,
when assessment began to take on national impetus. Content and thematic analyses
were used to develop a database of citation items, code those items, identify themes
and categories, and produce a comprehensive description of how communication
assessment has been approached over the years.
Content Analysis
First, a content analysis process was conducted to identify and count presentations
at conventions of the NCA and scholarly articles on communication assessment
in leading, national communication education journals. Additionally, a list was
developed of extant publications, such as books and monographs that address
assessment of communication. More specifically, for the time period from 1975 to
2009, we reviewed events and presentations listed in NCA and Speech Communica-
tion Association (SCA) convention programs. For the same time period, we
examined the tables of contents of Communication Education, the Association for
Communication Administration Bulletin, and Communication Teacher (formerly
Speech Communication Teacher). Some past issues of Speech Communication Teacher
were not included because they are not available in any electronic database. The tables
of contents for journals published by the four regional communication associations
also were reviewed but not included because most were not available electronically.
We also searched an array of databases and the Internet for books and other nonserial
publications, such as conference proceedings within and outside of the communica-
tion discipline. Assessment, evaluation, assessing, and evaluating were the initial
keywords used in this search. These keywords were adjusted during the data gathering
process based on the results of queries conducted in various databases. The results of
the content analysis processes were recorded in RefWorks, a computerized
bibliographic database system that is housed on the campus of one of the authors,
but was electronically accessible to all authors involved in this study. The resulting
database contained a total of 558 items, including 434 convention presentations, 89
journal articles, and 35 other books and nonserial publications. After developing the
database, we subjected the items in the database to thematic analysis using a
qualitative coding and categorizing process (Saldana, 2009).
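The keyword-driven screening step described above can be sketched as follows. The record structure, keyword list, and sample titles are illustrative assumptions, not the authors' actual RefWorks schema or data:

```python
# Sketch of the keyword-based screening described above. The record
# fields, keyword list, and sample items are invented for illustration.

KEYWORDS = {"assessment", "evaluation", "assessing", "evaluating"}

def matches_keywords(title: str, keywords=KEYWORDS) -> bool:
    """Return True if any search keyword appears as a word in the title."""
    words = title.lower().split()
    return any(kw in words for kw in keywords)

def screen_items(items):
    """Keep only candidate items whose titles match the search keywords."""
    return [item for item in items if matches_keywords(item["title"])]

candidates = [
    {"title": "Assessing Communication Competence in the Basic Course",
     "year": 1994, "source": "convention"},
    {"title": "A History of the Land-Grant University",
     "year": 1994, "source": "book"},
]
database = screen_items(candidates)
print(len(database))  # 1
```

In practice, as the authors note, such keyword lists were adjusted iteratively as queries against the various databases returned results.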
Thematic Analysis
Thematic analysis of the items in the database, including convention presentations,
journal articles, and other extant publications, was used to determine trends and
analyze patterns in the evolution of oral communication assessment from 1975 to
2009. The first step in the thematic analysis process involved identifying the general
theme or main focus of each item. Each of the four researchers in this study worked
independently, engaging in a first cycle coding process of all the items in the database
to develop a preliminary list of main themes. After comparing the four sets of themes,
three broad categories of themes emerged, focused on the why, what, and how of
assessing communication.
Next, the authors collaborated to identify subthemes for each of the three
categories and a description of each subtheme to be used in a second cycle coding
process. The goal of this collaboration was to agree on a set of subthemes that are
comprehensive and mutually exclusive. If the set of subthemes was comprehensive,
then each of the presentations, articles, and publications in the database could be
assigned to one of the themes and subthemes. If the subthemes were mutually
exclusive, it would improve the likelihood that each item would clearly fall into one
subtheme rather than another. According to Saldana (2009), the goal of this type of
coding and categorizing process is to organize and group similarly coded data into
categories or ‘‘families’’ (p. 8) because they share some characteristics. Table 2 presents
a description of the three categories or main themes and their associated subthemes.
A pilot test of these subthemes was conducted to ensure their viability before
engaging in second cycle coding of the entire database of articles, presentations, and
publications. All four raters categorized the same 32 items from the database.
Table 2 Themes and Subthemes Used for the Thematic Analysis Process

Theme 1: What is communication assessment, and why do we do it?
This category of bibliographic items includes theoretical issues, fundamentals of oral communication assessment, and reflections about the oral communication process. Items also focus on why we engage in the communication process.

  Subtheme A: General overview. These items discuss basic assessment concepts and assessment "language," and they reflect upon and provide a sense of where the movement is at or could be headed.

  Subtheme B: Rationale. These items explain why we do communication assessment.

Theme 2: What is assessed?
This category of bibliographic items includes the qualities, knowledge, abilities, and dispositions that are commonly assessed. These items focus more on what should be considered in the assessment process rather than how to do the assessment.

  Subtheme C: Student learning outcomes. These items aid in defining what could and should be assessed. The items identify typical aspects of student learning that are or should be assessed.

  Subtheme D: Program/departmental evaluation. These items focus on evaluation that occurs at the unit or programmatic level, and provide guidance in what qualities and procedures to consider in such a review or evaluation.

Theme 3: How is it assessed?
This category of bibliographic items considers the "how to's" of specific assessment practices and processes.

  Subtheme E: Assessment guidelines and frameworks. These items focus on how to develop and organize assessment efforts; they also explain how departments can gather and analyze assessment data.

  Subtheme F: Assessment in specific contexts and courses. These items focus on assessment practices and processes with an emphasis on how to assess specific knowledge sets, behavioral skills, and dispositions across a range of situations and contexts (e.g., interpersonal, group, public, organizational, K-12).

  Subtheme G: Assessment strategies and techniques. These items focus on data-gathering strategies (e.g., portfolio, survey, behavioral coding) and their use in both classroom and nonclassroom contexts (e.g., applied situations, communication centers and labs).

  Subtheme H: Assessment instruments. These items focus on assessment instruments/measures, including the criteria for evaluating various assessment instruments.
Cronbach’s alpha was used to calculate the consistency of their ratings, and a
coefficient of .862 was achieved. Cronbach’s alpha coefficient is a measure of internal
consistency reliability, and it is useful for understanding the extent to which the
ratings from a group of judges are consistent (Stemler, 2004).
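The consistency check described above can be sketched in a few lines: Cronbach's alpha treats each rater as an "item" and each coded citation as a case, after the subtheme letters are mapped to numbers (here A through H become 1 through 8). The four sample rating vectors below are invented for illustration, not the authors' pilot data:

```python
# Minimal sketch of an inter-rater consistency check using
# Cronbach's alpha. Sample ratings are invented; subtheme codes
# A..H are mapped to the numbers 1..8 before computing alpha.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(ratings):
    """ratings: one list per rater, one numeric code per coded item."""
    k = len(ratings)                              # number of raters
    rater_vars = sum(variance(r) for r in ratings)
    totals = [sum(col) for col in zip(*ratings)]  # sum across raters
    return (k / (k - 1)) * (1 - rater_vars / variance(totals))

# Four raters each coding the same six citations (1-8 = subthemes A-H),
# with a couple of disagreements:
r1 = [6, 7, 1, 4, 8, 3]
r2 = [6, 7, 1, 4, 8, 3]
r3 = [6, 7, 2, 4, 8, 3]
r4 = [6, 5, 1, 4, 8, 3]
alpha = cronbach_alpha([r1, r2, r3, r4])
print(round(alpha, 3))
```

With largely agreeing raters, alpha approaches 1.0; the .862 pilot coefficient reported above indicates a similarly consistent set of ratings.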
Since a reliable and consistent set of subthemes emerged in the pilot test,
Cronbach’s alpha was calculated to determine the two most consistent and reliable
pairs of coders who would engage in second cycle coding and work as two
independent teams to code half of the entire database of items. One pair of coders
achieved a reliability coefficient of .90 and the second pair a coefficient of .75.
The two teams then each took half of the database and used the set of subthemes in
Table 2 to code their items. Any items about which the two coders in a pair disagreed
were coded by a third coder in order to determine the subtheme for that item. The
results of content and thematic analysis are presented next followed by a discussion of
trends and overarching themes evident in the results.
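The pair-coding and tie-breaking procedure just described reduces to a simple rule: within a coding pair, agreement stands, and any disagreement is settled by a third coder. A sketch, with invented codes and a stand-in adjudicator:

```python
# Sketch of the disagreement-resolution rule described above. The
# item codes and the stand-in third coder are invented for illustration.

def resolve(code_1, code_2, third_coder):
    """Return the final subtheme code for one database item."""
    if code_1 == code_2:
        return code_1                    # the pair agreed
    return third_coder(code_1, code_2)   # third coder settles it

# A stand-in third coder that simply sides with the first coder:
adjudicate = lambda c1, c2: c1

pair_codes = [("F", "F"), ("A", "D"), ("G", "G")]
final = [resolve(c1, c2, adjudicate) for c1, c2 in pair_codes]
print(final)  # ['F', 'A', 'G']
```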
Results
The results of the content analysis and coding of the NCA/SCA convention programs,
the national communication journals, and other books and publications provide a
detailed picture of how interest in communication assessment has been approached
over the years. Patterns, in 5-year periods extending from 1975 to 2009, became
evident when the items in the database were coded into subthemes. In the appendix
to this writing, full citations are provided for the journal references and books that
were identified and coded for this study. They are organized by theme and subtheme
to facilitate the reader’s ability to tie specific studies or books to a particular topic or
subtheme. A list of the 434 coded convention papers is available directly from the
authors of this study.
To interpret the results now presented in Tables 3-6, the eight subthemes, identified by subtheme letter (e.g., A, B, C), are briefly listed again here:

Theme One: What Is Communication Assessment, and Why Do We Do It?
- Subtheme A: General overview of assessment.
- Subtheme B: Rationale for doing assessment.

Theme Two: What Is Assessed?
- Subtheme C: Student learning outcomes assessment.
- Subtheme D: Program/department evaluation.

Theme Three: How Is It Assessed?
- Subtheme E: Assessment guidelines and frameworks.
- Subtheme F: Assessment in specific contexts and courses.
- Subtheme G: Assessment strategies and techniques.
- Subtheme H: Assessment instruments.
Table 3 presents all of the coded citations, including convention papers, journal
articles, and other extant publications, identified by subthemes during each of the
seven five-year periods. A total of 558 relevant citations were spread out over 1975 to
2009. As Table 3 shows, the majority of the citations (383; 68.6%) occurred from
1990 to 2004. The subtheme occurring most frequently (123; 22.0%) was assessment
in specific contexts and courses (F). The next most popular subthemes were
assessment strategies and techniques (G; 83; 14.8%), general overview of assessment
(A; 82; 14.6%), and program/department evaluation (D; 81; 14.5%). The least
popular subtheme was rationale for doing assessment (B; 14; 2.0%).
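The row and column totals reported above can be checked mechanically; the counts below are taken directly from Table 3 (rows are subthemes A-H, columns are the seven 5-year periods):

```python
# Reproducing the Table 3 totals reported above. Each row lists a
# subtheme's counts for the periods 1975-79 through 2005-09.
table3 = {
    "A": [7, 15, 7, 17, 12, 13, 11],
    "B": [0, 1, 3, 2, 5, 2, 1],
    "C": [5, 1, 6, 15, 6, 5, 4],
    "D": [0, 6, 5, 20, 31, 13, 6],
    "E": [0, 2, 8, 13, 16, 8, 10],
    "F": [2, 9, 9, 32, 28, 30, 13],
    "G": [1, 3, 6, 22, 22, 23, 6],
    "H": [6, 5, 7, 16, 22, 10, 10],
}

grand_total = sum(sum(row) for row in table3.values())    # all citations
# Citations in the three periods from 1990 to 2004 (columns 3, 4, 5):
mid_span = sum(sum(row[3:6]) for row in table3.values())
print(grand_total, mid_span, round(100 * mid_span / grand_total, 1))
# 558 383 68.6
```

This confirms the figures in the text: 558 citations overall, 383 (68.6%) of them from 1990 to 2004.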
Table 4 presents the coded convention papers, identified by subthemes during each
of the seven five-year periods. A total of 434 relevant citations were spread out over
1975 to 2009. As with the total number of citations, this table indicates the majority
of the convention presentations (322; 74.2%) occurred from 1990 to 2004. The most
popular subtheme for convention presentations also was assessment in specific
contexts and courses (F; 113; 26%). The next most popular subthemes were program/
departmental evaluation (D; 66; 15.2%) and assessment strategies and techniques (G;
65; 15%). The least popular subtheme of convention papers was rationale for doing
assessment (B; 12; 2.7%).
Table 5 presents the coded journal articles, identified by subthemes during each of
the seven five-year periods. A total of 89 relevant articles were spread out over 1975 to
Table 3 Total Citations of Convention Papers, Journal Articles, and Books and Other Extant Publications by Theme and Years (1975 to 2009)

Theme  1975-79  1980-84  1985-89  1990-94  1995-99  2000-04  2005-09  Total
A            7       15        7       17       12       13       11     82
B            0        1        3        2        5        2        1     14
C            5        1        6       15        6        5        4     42
D            0        6        5       20       31       13        6     81
E            0        2        8       13       16        8       10     57
F            2        9        9       32       28       30       13    123
G            1        3        6       22       22       23        6     83
H            6        5        7       16       22       10       10     76
Total       21       42       51      137      142      104       61    558
Table 4 Convention Papers by Theme and Years (1975 to 2009)

Theme  1975-79  1980-84  1985-89  1990-94  1995-99  2000-04  2005-09  Total
A            1        2        5       13       10       12        8     51
B            0        0        3        2        5        1        1     12
C            3        1        4        4        5        4        3     24
D            0        5        1       11       30       13        6     66
E            0        1        3        7       16        8        9     44
F            1        7        8       28       26       30       13    113
G            1        0        3       15       19       21        6     65
H            4        1        4       13       21        8        8     59
Total       10       17       31       93      132       97       54    434
264 S. Morreale et al.
2009. This table reveals that the five-year period from 1990 to 1994 was the most
popular for publications of journal articles on assessment (34; 38.2%). A steady
decline in journal articles is evident in each five-year period since 1994. Interestingly,
the most popular subtheme for journal articles was general overview of assessment
(A; 20; 22.4%), rather than assessment in specific contexts and courses (F; 8; 9%).
Table 6 presents the books and other extant publications, identified by subthemes
during each of the seven five-year periods. A total of 35 relevant books were published
from 1975 to 2009. The five-year period from 1990 to 1994 was the most popular for
books and other publications (10; 28.6%). The most popular subtheme for books was
general overview of assessment (A; 11; 31.4%).
Discussion of Results, Trends, and Overarching Themes
Based on the results just outlined, we now discuss several trends in oral
communication assessment over the past 35 years, including some recommendations
for how to conduct assessment research in the future.
One general trend indicates that communication assessment received considerable
attention in the 1990s up until the early 2000s (see Table 3). But since 2005,
assessment publications and convention presentations have declined. This changing
pattern of interest parallels national attention toward assessment. With calls for
Table 5 Journal Articles by Theme and Year (1975 to 2009)
Theme   1975–79  1980–84  1985–89  1990–94  1995–99  2000–04  2005–09  Total
A             3       10        1        4        0        0        2     20
B             0        1        0        0        0        0        0      1
C             1        0        2        8        1        1        1     14
D             0        1        4        7        1        0        0     13
E             0        0        5        5        0        0        1     11
F             1        1        1        3        2        0        0      8
G             0        2        1        5        1        2        0     11
H             2        2        2        2        1        2        0     11
Total         7       17       16       34        6        5        4     89
Table 6 Books and Other Extant Publications by Theme and Year (1975 to 2009)
Theme   1975–79  1980–84  1985–89  1990–94  1995–99  2000–04  2005–09  Total
A             3        3        1        0        2        1        1     11
B             0        0        0        0        0        1        0      1
C             1        0        0        3        0        0        0      4
D             0        0        0        2        0        0        0      2
E             0        1        0        1        0        0        0      2
F             0        1        0        1        0        0        0      2
G             0        1        2        2        2        0        0      7
H             0        2        1        1        0        0        2      6
Total         4        8        4       10        4        2        3     35
educational reform in the 1980s, state legislatures and accrediting agencies demanded
more accountability in the 1990s. Individuals in communication programs needing
to respond to these mandates began to share more information as well as their
resources for assessing communication. Additionally, in the early 1990s, NCA
convened a major national conference on assessment that produced an array of new
assessment instruments (Morreale et al., 1994). Then it appears that assessment
pressures lessened from 2005 to 2009. That may be, in part, because the discipline has
largely worked out what assessment involves and how to do it. However, now
might be the right time to revisit earlier discussions of assessment guidelines
and frameworks, as well as strategies and techniques (i.e., subthemes E and G), with a
focus on contemporary instructional challenges. For example, it would be helpful to
have more studies that focus on assessing the role of communication in enhancing
student learning (i.e., subtheme C) in online environments, in distance education
settings, and in global contexts.
A second trend suggests there has been a change over time in the venues for
discussions about communication assessment. From 1975 to 1994, journal articles
and other publications (see Tables 5 and 6) were more numerous, keeping pace with
presentations in the convention format (see Table 4). Since 1995, there has been a
shift such that almost all of the conversations about assessment are conducted at
conventions, not in academic journals. Perhaps because of the demise of the
Association of Communication Administration Bulletin, there are fewer journals in
which assessment manuscripts can be published. Or perhaps department and
program chairs, who believe assessment to be a concern, might not have the time
to prepare manuscripts for journal publication. In either case, we may be well advised
to begin seeking more publications in scholarly journals, focused on valid and reliable
best practices in assessment of student learning outcomes at the program and
departmental levels (i.e., subthemes C and D).
A third trend, indicated by the subthemes reported in the results, suggests that
most of the attention over time has focused topically on assessment in specific
contexts and courses (see Table 3, subtheme F), with this discussion occurring mainly
at the convention level. These how-to’s of assessment appear to have dominated
convention discussions. Approximately 26% of convention papers were about
assessment of knowledge, abilities, and dispositions in different situations and
contexts, 15% were about program and department evaluation, and 14% about
assessment strategies and techniques. These papers, and the how-to recommendations
they contain, could provide useful models for departments interested in course-
based assessment and the assessment instruments for use therein (i.e., subthemes F
and H). Meta-analyses of the content of the accessible convention papers could make
a useful contribution to the assessment literature in communication.
In contrast to the subtheme topics at conventions, general overviews of the
assessment process were the primary focus in journal articles and other publications
(see Tables 5 and 6). In these more permanent records, we have not given attention to
the how-to’s of assessment. The publication of such applied studies would be useful,
if they are characterized by the academic rigor expected in journal publications.
Finally, rationale writings, explaining why we do assessment, appear to be the least
popular subtheme across all outlets: conventions, journals, and books. Despite the
popular notion that we need to argue for communication assessment, not many of us
are writing rationale statements, which might be a topic worth revisiting and
updating.
The results and trends are next examined briefly in light of the three questions that
emerged from the original thematic analysis of the 558 citations: What is
communication assessment and why do we do it? What is assessed? How is it
assessed?
What Is Communication Assessment, and Why Do We Do It?
What is communication assessment? Of the 558 citations in the database, 17% (i.e.,
95) provided answers to the what and why of assessment. And while there is no
shortage of definitions, the scholars appear to accept as a basic premise that
assessment is how we document our efforts to develop student learning. This leads to
a specific definition of assessment as the process of gathering and analyzing
information from multiple sources in order to develop a deep understanding of
what students know, understand, and can do with their knowledge as a result of their
educational experiences (Teaching and Learning Center, 2010). Assessment of
communication is thus considered the process of documenting, usually in measurable
terms, student gains in communication knowledge, skills, attitudes, and beliefs.
Why do we assess communication? Recently, one of the coauthors of this article
participated in an accreditation visit to a university. The faculty interviewed for this
accreditation visit seemed enthusiastic about assessment. Two faculty members, self-
described curmudgeons, stated they formerly were strong skeptics but now were
strong supporters of the concept of assessment. Their reasons for a change of heart
mirror the essential rationale for assessing communication identified in this study’s
database. Those studies summarily point to four reasons for engaging in
communication assessment:
• It is good for students. As the citations in our database strongly suggest, the
primary motivation for assessment is to develop a stronger educational program
for students.
• It brings faculty together. Determining how to assess student learning outcomes
for a communication program requires faculty to think beyond their individual
courses and collaboratively examine the entire educational program.
• It satisfies the needs of external agencies. Assessment provides information and
data to outside agencies. State legislatures, state boards of education, and every
regional accrediting body require some form of assessment.
• It is the right thing to do. Improvement is the ultimate response to the question
of why we conduct assessments. If each faculty member and each administrator is
committed to student learning, then assessment of that learning is obviously
appropriate.
The 95 reviewed citations in the what and why category consistently reinforced the
point that the fundamental reason for communication assessment is improvement.
They provide a useful rationale for continuing to refine and enhance assessment
programs in the communication discipline.
What Is Assessed?
Indeed, there is a common interest in using assessment to meet others' expectations
about improving student learning. However, what is actually assessed, while
described in 22% (123) of the 558 items in our database, appears to vary
considerably. These 123 citations center on assessing specific competencies (e.g.,
media literacy, service learning) in specific courses (e.g., fundamentals, public
relations) and at multiple levels (e.g., pre-K, secondary, college). Furthermore, these
citations also center on communication competence at the institutional level, often as
part of general education, and on the wide range of skills believed to comprise
communication competence. Another less frequent emphasis is on speaking and
listening skills, with a modest level of interest specifically in public speaking and
listening. There remains very little focus on more cognitive (as opposed to
behavioral) skills; less than 5% of the 123 items reviewed displayed a focus on
cognitive skills.
With regard to the context in which assessment is conducted, the greatest interest
continues to be in assessment of the academic program/major and the academic
department. Without question, the academic level receiving the most attention in the
literature is the college and university level, with over 54% of the 123 items focusing
specifically on assessment of communication competence and/or academic programs
at this level. Only 4% address assessment at the community college level. While there
is some interest in assessment of communication competence or specific
competencies within K-12, that interest appears very limited.
Based on the 123 citations in this category, our discipline clearly needs to continue
efforts to enhance assessment of communication competence at the college and
university level, to include the knowledge, skills, and dispositions of our students
upon graduation. Moreover, some focus on assessing communication at other levels
of the educational enterprise also may be in order.
How Is It Assessed?
As noted in the results, communication educators have spent considerable time and
energy focusing on how to do assessment. Approximately 61% (340) of the total of
558 citations were categorized into the four subthemes responding to this "how to do it"
question. These papers, articles, and texts have provided overviews of assessment
plans, many explaining how specific departments devised their response to
assessment mandates. They have also offered suggestions on how to assess the
various abilities and contexts (i.e., public speaking, listening, interpersonal,
intercultural, and organizational) as well as professional applications (i.e.,
communication assessment for teachers, lawyers, doctors, or engineers). Different assessment
strategies such as surveys, interviews, focus groups, capstone courses, and portfolios
have been examined but to a lesser extent. Finally, communication educators have
developed and reported on instruments to evaluate effectiveness in public speaking,
interpersonal interaction, listening, and various dimensions of motivation to
communicate.
Scholars in the communication discipline now need to look more closely at the
content of the 340 studies categorized here as how is communication assessed. Using
this body of literature, we need to develop research-driven models for student
learning and program assessment.
Based on these overarching trends gleaned from our research efforts,
recommendations for best practices in regard to communication assessment will conclude this
report. But first, we mention several limitations to the present study.
Limitations of Study
This review of the assessment literature was limited by two constraints. First, we did
not examine the journals and convention programs of affiliated organizations. For
example, examining the work of the International Listening Association or the
International Communication Association might have expanded our understanding
of assessment. While we did consider including the journals of the regional
communication associations, their convention programs were not readily accessible
for the 35-year time frame of interest in this study.
Second, because many of the convention sources were unavailable in their entirety,
we were only able to examine the titles of convention papers and some of the other
books. Had we had the opportunity to review the content of all 558 manuscripts in
our database, we could have engaged in more substantive analysis of the content of
the studies. Such analysis would have provided yet more useful insights into the how-
to’s of communication assessment, a topic reserved for a later study. Given this
limitation, the following recommendations for best practices, while supported by the
data from the qualitative thematic analysis, also are based on the authors’ extensive
experience with assessment, accountability, accreditation, program review, and
external evaluation.
Recommendations for Best Practices in Oral Communication Assessment
The results of this study imply that communication assessment has become a permanent
part of the fabric of academic life in higher education. This permanence highlights
the following critical issues for consideration by communication department and
program administrators, scholar teachers, and scholarly associations.
Recommendations for Administrators
Legislatures, accrediting bodies, state boards of education, and internal reviewers will
continue to inquire about whether the communication education received by
students produces the desired effects. Department and program administrators as
well as directors need to be fully cognizant of any expectations or requirements on
their campuses related to communication instruction and programs. In addition to
curricula for communication majors, communication expectations in other majors
or in general education are typical examples. Not only should quality instruction be
in place to satisfy those expectations, but also assessment of student achievement
should occur. While the departments’ needs and the requirements imposed by other
constituencies may vary, administrators’ responses can be guided by fundamental
questions related to incorporating the assessment of student learning outcomes in
programmatic assessment (i.e., subthemes C and D). Administrators need to facilitate
strategic planning discussions with faculty as a way to define, review, and redefine
academic programs in the communication discipline. The following set of questions,
derived from our consulting activities, suggests that assessment can serve as an integral
part of that process:
1. Who are we and why do we exist? What is the mission of our program?
2. What do we want to accomplish? What are our goals and objectives? Who do we
serve?
3. What assessment procedures can we use to determine if the goals and objectives
are met?
4. What is our assessment plan, and is it sufficiently rigorous?
5. What are the results of our assessment program, and how are they being used?
6. What changes will we make to our goals/objectives/outcomes/processes based on
the results?
7. What evidence is there that this assessment process is a continuous cycle of
improvement?
Recommendations for Scholar Teachers
Communication faculty members have responsibilities and opportunities regarding
communication assessment at two levels. At the campus level, faculty members need
to support their administrators, chairs, or program directors in the development of
rigorous course-based assessment activities. At its core, student learning occurs in
courses, and only faculty members know how to best assess learning in their
particular courses (i.e., subthemes F, G, and H). Their recommendations about
assessment can and should provide the essential substance for departmental
assessment plans and programs. At a disciplinary level, communication faculty
need to turn their attention back to the national dissemination of what they learn
through conducting rigorous assessment of student learning at the local level. Such
sharing should begin to occur through publications as well as in discussions and
presentations at conferences and conventions. Basic course directors and faculty, for
example, could begin to report about their best practices for course-based assessment
in multiple sections of the same course. Communication faculty might also want to
consider framing the publication of their assessment efforts under the umbrella of the
scholarship of teaching and learning, a well-respected initiative in the
communication discipline (Huber & Morreale, 2001).
Recommendations for Scholarly Associations
Finally, NCA may want to take a greater role in ensuring that faculty and departments
have access to an archival history of assessment in the discipline, as well as the most
recent assessment resources to meet their needs. Convention papers on assessment
could be collected in a centralized location, and reports of updated assessment
practices could be solicited for inclusion on the association-based assessment website.
NCA could also help to provide a national venue for an ongoing dialogue about
communication assessment in general (i.e., subthemes A and B). Without such
resources, the discipline may find itself reinventing the assessment wheel every 10–15
years.
Conclusion
The time is right for our discipline to become fully engaged once again in
communication assessment. This re-engagement should be characterized by rigor
and an emphasis on valid and reliable results from the assessment process. Moreover,
all stakeholders should recognize the development of communication assessment
programs as a genuinely constructive activity, rather than just another expectation of
administrators and legislators. By developing and implementing best practices and
disseminating the how-to’s of those practices at conventions, in academic journals,
and in other publications, we can collaborate to assess communication programs and
student learning more effectively.
References
Backlund, P., Hay, E.A., Harper, S., & Williams, D. (1990). Assessing the outcomes of college:
Implications for speech communication. Association for Communication Administration
Bulletin, 72, 13–20.
Backlund, P., & Morreale, S.P. (1994). History of the Speech Communication Association’s
assessment efforts and present role of the committee on assessment and testing. In S.P.
Morreale, M. Brooks, R. Berko, & C. Cooke (Eds.), 1994 SCA summer conference proceedings
and prepared remarks (pp. 9–16). Annandale, VA: Speech Communication Association.
Christ, W.G. (Ed.). (1994). Assessing communication education: A handbook for media, speech, and
theatre educators. Hillsdale, NJ: Erlbaum.
Clinton intends to establish national academic standards. (1993, February 24). Greeley Tribune,
p. A1.
Ewell, P.T. (2009). Assessment, accountability, and improvement: Revisiting the tension (NILOA
Occasional Paper No. 1). Urbana, IL: National Institute of Learning Outcomes Assessment.
Frye, R. (2006). Assessment, accountability, and student learning outcomes. Dialogue, 1(2), 1–12.
Retrieved from http://pandora.cii.wwu.edu/dialogue/default.htm
Hay, E.A. (1989). Education reform and speech communication. In P.J. Cooper & K.M. Galvin
(Eds.), The future of speech communication education (pp. 12–17). Annandale, VA: Speech
Communication Association.
Hay, E.A. (1992). Assessment trends in speech communication. In E.A. Hay (Ed.), Program
assessment in speech communication (pp. 3�7). Annandale, VA: Speech Communication
Association.
Huber, M.T., & Morreale, S.P. (Eds.). (2001). Disciplinary styles in the scholarship of teaching and
learning: A conversation. Washington, DC: American Association for Higher Education and
The Carnegie Foundation for the Advancement of Teaching.
Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes
assessment in American higher education. Urbana, IL: National Institute for Learning
Outcomes Assessment.
Lieb, B. (1994, October). National contexts for developing postsecondary communication assessment.
Washington, DC: U.S. Department of Education, Office of Educational Research and
Improvement.
Makay, J.J. (1997). Assessment in communication programs: Issues and ideas administrators must
face. Journal of the Association for Communication Administration, 1, 62–68.
Morreale, S.P., & Backlund, P. (Eds.). (2007). Large-scale assessment in oral communication: K-12
and higher education (3rd ed.). Washington, DC: National Communication Association.
Morreale, S.P., Brooks, M., Berko, R., & Cooke, C. (Eds.). (1994). 1994 SCA summer conference
proceedings and prepared remarks. Annandale, VA: Speech Communication Association.
Morreale, S.P., Spitzberg, B.H., & Barge, J.K. (2006). Human communication: Motivation, knowledge,
and skills (2nd ed.). Belmont, CA: Wadsworth.
National Communication Association. (2008). Guidelines for developing and assessing under-
graduate programs in communication. National Communication Association Insider, 1.
Retrieved from http://www.natcom.org/index.asp?bid=14887
Saldana, J. (2009). The coding manual for qualitative researchers. New York: Sage.
Shelton, M.W., Lane, D.R., & Waldhart, E.S. (1999). A review and assessment of national education
trends in communication instruction. Communication Education, 48, 228–237.
Stemler, S.E. (2004). A comparison of consensus, consistency, and measurement approaches to
estimating interrater reliability. Practical Assessment, Research & Evaluation, 9. Retrieved
from http://PAREonline.net/getvn.asp?v=9&n=4
Teaching and Learning Center. (2010). Assessing students in a learner-centered classroom [Work-
shop]. Eugene, OR: University of Oregon. Retrieved from http://tep.uoregon.edu/workshops/
teachertraining/learnercentered/assessing/assessing.html
Appendix
The following are all the journal references and books that were identified and coded for this
study. They are organized by subtheme to facilitate the reader’s ability to locate writings
relevant to a particular topic or subtheme.
Subtheme A: General Overview
Aitken, J.E., & Neer, M. (1992). The public relations need in assessment reporting. Association for
Communication Administration Bulletin, 81, 53–59.
Allen, R.R., & Brown, K.L. (Eds.). (1976). Developing communication competence in children: A
report of the Speech Communication Association’s national project on speech communication
competencies. Skokie, IL: National Textbook Company.
Allen, R.R., & Wood, B.S. (1978). Beyond reading and writing to communication competence.
Communication Education, 27, 286–292.
Anderson, R.S., & Speck, B.W. (Eds.). (1998). Changing the way we grade student performance:
Classroom assessment and the new learning paradigm. New Directions for Teaching and
Learning No. 74. San Francisco, CA: Jossey-Bass.
Anonymous. (1979). The 7th annual SCA seminar on assessment of programs. Association for
Communication Administration Bulletin, 30, 4–6.
Ashmore, T.M. (1980). The rhetoric of quality assessment. Association for Communication
Administration Bulletin, 32, 26–32.
Backlund, P.M., Booth, J., Moore, M., Parks, A.M., & VanRheenen, D. (1982). A national survey of
state practices in speaking and listening skill assessment. Communication Education, 31,
125–129.
Becker, S.L. (1980). The rhetoric of quality assessment: A "no" vote. Association for Communication
Administration Bulletin, 32, 33–35.
Becker, S.L. (1984). Evaluation: Reactivity and validity. Association for Communication Adminis-
tration Bulletin, 48, 46–48.
Clevenger, T., Jr. (1980). Evaluation of the quality assessment seminar. Association for Commu-
nication Administration Bulletin, 32, 47–50.
Cole, S.S. (1990). Outcomes assessment: The tip of the iceberg. Association for Communication
Administration Bulletin, 74, 34–36.
Cooper, P.J. (1987). A response to assessment. Association for Communication Administration
Bulletin, 60, 55.
Cooper, P.J., & Galvin, K.M. (Eds.). (1989). The future of speech communication education.
Annandale, VA: Speech Communication Association.
Goldberg, A.A. (1980). "The rhetoric of quality assessment": A response. Association for
Communication Administration Bulletin, 32, 36–37.
Goulden, N.R. (1992). Theory and vocabulary for communication assessments. Communication
Education, 41, 258–269.
Gray, P.A. (1984). Assessment of basic oral communication skills: A selected, annotated bibliography.
Annandale, VA: Speech Communication Association.
Hay, E.A. (1992). A national survey of assessment trends in communication departments.
Communication Education, 41, 247–257.
Hunt, G.T. (1990). The assessment movement: A challenge and an opportunity. Association for
Communication Administration Bulletin, 72, 5–12.
Kurylo, A. (2007). Teaching about assessment in professional organizations. Communication
Teacher, 21, 93–98.
Larson, C.E. (1978). Problems in assessing functional communication. Communication Education,
27, 304–309.
Larson, C.E., Backlund, P.M., Redmond, M.V., & Barbour, A. (1978). Assessing functional
communication. Annandale, VA: Speech Communication Association.
McCroskey, J.C. (2007). Raising the question #8 assessment: Is it just measurement?
Communication Education, 56, 509–514.
Morreale, S.P., & Backlund, P.M. (1996). Large scale assessment of oral communication: K-12 and
higher education (2nd ed.). Annandale, VA: Speech Communication Association.
Morreale, S.P., Backlund, P.M., Hay, E.A., & Jennings, D.K. (Eds.). (2007). Large scale assessment of
oral communication (3rd ed.). Washington, DC: National Communication Association.
Mottet, T.P. (2004). Seminar in communication assessment. Communication Teacher, 18, 111–115.
Rubin, D.L., & Mead, N.A. (1984). Large scale assessment of oral communication skills: Kindergarten
through grade 12. Annandale, VA: Speech Communication Association.
Rubin, R.B. (1984). Communication assessment instruments and procedures in higher education.
Communication Education, 33, 178–180.
Spitzberg, B.H. (1983). Communication competence as knowledge, skill, and impression.
Communication Education, 32, 323–329.
Sullivan, J. (1980). Quality assessment: An insider’s view. Association for Communication
Administration Bulletin, 32, 38–40.
Wood, B.S. (1976). Children and communication: Verbal and nonverbal language development.
Englewood Cliffs, NJ: Prentice-Hall.
Subtheme B: Rationale
Allen, T.H. (2002). Charting a communication pathway: Using assessment to guide curriculum
development in a re-vitalized general education plan. Communication Education,
51, 26–39.
McCaleb, J.L. (Ed.). (1987). How do teachers communicate? A review and critique of assessment
practices [Teacher Education Monograph No. 7]. Washington, DC: ERIC Clearinghouse on
Teacher Education.
Subtheme C: Student Learning Outcomes
Backlund, P.M. (1985). SCA national guidelines for essential speaking and listening skills for
elementary school students. Communication Education, 34, 185–195.
Backlund, P., Hay, E.A., Harper, S., & Williams, D. (1990, April). Assessing the outcomes of college:
Implications for speech communication. Association for Communication Administration
Bulletin, 72, 13–20.
Bassett, R.E., Whittington, N., & Staton-Spicer, A. (1978). The basics in speaking and listening
for high school graduates: What should be assessed? Communication Education, 27,
293–303.
Canary, D.J., & MacGregor, I.M. (2008). Differences that make a difference in assessing student
communication competence. Communication Education, 57, 41–63.
Christ, W.G. (Ed.). (1994). Assessing communication education: A handbook for media, speech, and
theatre educators. Hillsdale, NJ: Erlbaum.
Clark, R.A. (2002). Learning outcomes: The bottom line. Communication Education, 51, 396–404.
Jones, E.A., & Melander, L. (1993). Speech communication skills for college students. University Park,
PA: National Center on Postsecondary Teaching, Learning, and Assessment.
Litterst, J.K. (1990). Communication competency assessment of non-traditional students.
Association for Communication Administration Bulletin, 72, 60–67.
Phillips, G.M., Kelly, L., & Rubin, R.B. (1991). Communication incompetencies: A theory of training
oral performance behavior. Carbondale, IL: Southern Illinois University Press.
Quianthy, R.L. (1990). Communication is life: Essential college sophomore speaking and listening
competencies. Annandale, VA: Speech Communication Association.
Rubin, R. (1985). Ethical issues in the evaluation of communication behavior. Communication
Education, 34, 13–17.
Rubin, D., & Bazzle, R.E. (1981). Development of an oral communication assessment program: The
Glynn County speech proficiency examination for high school students. Brunswick, GA: Glynn
County Board of Education.
Rubin, D.L., & Hampton, S. (1998). National performance standards for oral communication
K-12: New standards and speaking/listening/viewing. Communication Education, 47,
183–193.
Runkel, R. (1990). Assessing problem-solving abilities in the theatre curriculum: A cumulative
and sequential approach. Association for Communication Administration Bulletin, 74, 44–48.
Smith, R.M., & Hunt, G.T. (1990). Defining the discipline: Outcome assessment and the prospects
for communication programs. Association for Communication Administration Bulletin, 72,
1–4.
Vangelisti, A.L., & Daly, J.A. (1989). Correlates of speaking skills in the United States: A national
assessment. Communication Education, 38, 132–143.
Wenger, P.E., & Fischbach, R.M. (1983). Speech communication instruction: An interdisciplinary
assessment. Association for Communication Administration Bulletin, 45, 36–38.
Wood, B.S. (1977). Development of functional communication competencies grades K-6. Annandale,
VA: Speech Communication Association.
Subtheme D: Program/Departmental Evaluation
Aitken, J.E., & Neer, M. (1992). A faculty program of assessment for a college level competency-
based communication core curriculum. Communication Education, 41, 270–286.
Comer, K.C. (1987). Development of quality factors for college and university theatre programs.
Association for Communication Administration Bulletin, 60, 11�14.
Downey, B.J. (1980). A proposed instrument for program quality assessment. Association for
Communication Administration Bulletin, 32, 4�7.
Hagood, A.D. (1986). Program evaluation in a major university. Association for Communication
Administration Bulletin, 56, 9�11.
Hay, E.A. (Ed.). (1992). Program assessment in speech communication. Annandale, VA: Speech
Communication Association.
McBath, J.H. (1990). Use of departmental review as part of the assessment process. Association for
Communication Administration Bulletin, 72, 38�44.
McGlone, E.L. (1984). Program evaluation and elimination: A case study. Association for
Communication Administration Bulletin, 50, 19�25.
Nebergall, R.E. (1980). ACA seminar/workshop on assessment of programs: Report on session
three. Association for Communication Administration Bulletin, 32, 45�46.
Parker, B.L., & Drummond-Reeves, S.J. (1992). Alumni outcomes assessment: Boise State University
survey, 1990. Association for Communication Administration Bulletin, 79, 1�11.
Platt, R.W. (1985). External evaluation of small college communication programs. Association for
Communication Administration Bulletin, 54, 40�42.
Reynolds, B. (1986). Program evaluation in undergraduate only institutions. Association for
Communication Administration Bulletin, 56, 12�13.
Smith, R.M. (1990). Issues, problems, and opportunities in assessment of communication
programs. Association for Communication Administration Bulletin, 72, 21�26.
Symons, J.M. (1990). Assessment guidelines for theatre programs in higher education. Association
for Communication Administration Bulletin, 72, 35�37.
Taylor, A. (1983). Curriculum accreditation: A case against. Association for Communication
Administration Bulletin, 44, 41�43.
Valentine, C.A. (1980). Program evaluations and standards: An overview. Association for
Communication Administration Bulletin, 32(2), 8�12.
Subtheme E: Assessment Guidelines and Frameworks
Buerkel-Rothfuss, N.L. (1990). Communication competence: A "test out" procedure. Association for Communication Administration Bulletin, 72, 68–72.
Crocker-Lakness, J., Manheimer, S., & Scott, T. (1991). The Speech Communication Association’s criteria for the assessment of oral communication. Annandale, VA: Speech Communication Association.
Hawkins, K.W. (1987). Use of the Rasch model in communication education: An explanation and example application. Communication Education, 36, 107–118.
Hay, E.A. (1990). Nontraditional approaches to assessment. Association for Communication Administration Bulletin, 72, 73–75.
King, P.E., & Witt, P.L. (2009). Teacher immediacy, confidence testing, and the measurement of cognitive learning. Communication Education, 58, 110–123.
Knight, M.E., & Lumsden, D. (1990). Outcomes assessment: Creating principles, policies, and faculty involvement. Association for Communication Administration Bulletin, 72, 27–34.
Parker, B.L., & Drummond-Reeves, S.J. (1992). Outcomes assessment research: Guidelines for conducting communication alumni surveys. Association for Communication Administration Bulletin, 79, 12–19.
Rubin, R.B. (Ed.). (1983). Improving speaking and listening skills: New directions for college learning
assistance (No. 12). San Francisco: Jossey-Bass.
Rubin, R.B., & Graham, E.E. (1988). Communication correlates of college success: An exploratory investigation. Communication Education, 37, 14–28.
Rubin, R.B., Graham, E.E., & Mignerey, J.T. (1990). A longitudinal study of college students’ communication competence. Communication Education, 39, 1–14.
Spitzberg, B.H., & Hurt, H.T. (1987). The measurement of interpersonal skills in instructional contexts. Communication Education, 36, 28–45.
Stiggins, R.J., Backlund, P.M., & Bridgeford, N.J. (1985). Avoiding bias in the assessment of communication skills. Communication Education, 34, 135–141.
Willmington, S.C. (1989). Oral communication assessment procedures and instrument development. Association for Communication Administration Bulletin, 69, 72–78.
Subtheme F: Assessment in Specific Contexts and Courses
Backlund, P.M., Brown, K.L., Gurry, J., & Jandt, F. (1982). Recommendations for assessing speaking and listening skills. Communication Education, 31, 9–17.
Bostrom, R.N., & Brown, M.H. (1990). Listening behavior: Measurement and application. New York:
Guilford Press.
Chesebro, J.W., McCroskey, J.C., Atwater, D.F., Bahrenfuss, R.M., Cawelti, G., Gaudino, J.L., & Hodges, H. (1992). Communication apprehension and self-perceived communication competence of at-risk students. Communication Education, 41, 345–360.
Erickson, J.G., & Omark, D.R. (Eds.). (1981). Communication assessment of the bilingual, bicultural
child: Issues and guidelines. Baltimore, MD: University Park Press.
Ford, W.S.Z., & Wolvin, A.D. (1993). The differential impact of a basic communication course on perceived communication competencies. Communication Education, 42, 215–223.
Redmond, M.V. (1998). Outcomes assessment and the capstone course in communication. Southern Communication Journal, 64, 68–75.
Ritter, E.M. (1977). Accountability for interpersonal communication instruction: A curriculum perspective. Central States Speech Journal, 28, 204–209.
Rubin, R.B., & Feezel, J.D. (1985). Teacher communication competence: Essential skills and assessment procedures. Central States Speech Journal, 36, 4–13.
Rubin, R.B., Welch, S.A., & Buerkel, R. (1995). Performance-based assessment of high school speech instruction. Communication Education, 44, 30–39.
Trank, D.M., & Steele, J.M. (1983). Measurable effects of a communication skills course: An initial study. Communication Education, 32, 227–236.
Subtheme G: Assessment Strategies and Techniques
Angelo, T.A., & Cross, K.P. (1993). Classroom assessment techniques: A handbook for college teachers
(2nd ed.). San Francisco: Jossey-Bass.
Arneson, P., & Arnett, R.C. (1998). The "praxis" of narrative assessment: Communication competence in an information age. Association for Communication Administration Bulletin, 27, 44–58.
Backlund, P. (1992). Using student ratings of faculty in the instructional development process. Association for Communication Administration Bulletin, 81, 7–12.
Dick, R.C., & Robinson, B.M. (1992). Assessing self-acquired competency portfolios in speech communication: National and international issues. Association for Communication Administration Bulletin, 81, 60–68.
Emmert, P., & Barker, L.L. (1989). Measurement of communication behavior. New York: Longman.
Larson, V.L., & McKinley, N.L. (1987). Communication assessment and intervention strategies for
adolescents. Eau Claire, WI: Thinking Publications.
Lederman, L.C. (1990). Assessing educational effectiveness: The focus group interview as a technique for data collection. Communication Education, 39, 117–127.
Lederman, L.C., & Ruben, B.D. (1984). Systematic assessment of communication games and simulations: An applied framework. Communication Education, 33, 152–159.
Malinauskas, M.J. (1990). "Get what you like": An assessment scheme. Association for Communication Administration Bulletin, 72, 76–79.
McNeilis, K.S. (2002). Assessing communication competence in the primary care medical interview. Communication Studies, 53, 400–428.
Morreale, S.P., Brooks, M., Berko, R., & Cooke, C. (Eds.). (1994). 1994 summer conference proceedings and prepared remarks: Assessing college student competency in speech communication. Annandale, VA: Speech Communication Association.
Neer, M.R. (1989). The role of indirect tests in assessing communication competence. Association for Communication Administration Bulletin, 69, 64–71.
Northwest Regional Educational Laboratory. (1998). Improving classroom assessment: A toolkit for professional developers. Portland, OR: Northwest Regional Educational Laboratory.
Rubin, D.L., Daly, D., McCroskey, J.C., & Mead, N.A. (1982). A review and critique of procedures for assessing speaking and listening skills among preschool through grade twelve students. Communication Education, 31, 285–303.
Saraceni, I.J. (1990). The video camera as an assessment tool in the acting class. Association for Communication Administration Bulletin, 74, 37–43.
Stiggins, R.J. (Ed.). (1981). Using performance rating scales in large-scale assessments of oral communication proficiency. Portland, OR: Clearinghouse for Applied Performance Testing.
Stitt, J.K., Simonds, C.J., & Hunt, S.K. (2003). Evaluation fidelity: An examination of criterion-based assessment and rater training in the speech communication classroom. Communication Studies, 54, 341–353.
Young, R., & He, A.W. (Eds.). (1998). Talking and testing: Discourse approaches to the assessment of
oral proficiency. Philadelphia, PA: John Benjamins.
Subtheme H: Assessment Instruments
Bostrom, R.N. (1990). Assessing achievement with standardized tests: The NTE speech communication examination. Association for Communication Administration Bulletin, 72, 45–50.
Carlson, R.E., & Smith-Howell, D. (1995). Classroom public speaking assessment: Reliability and validity of selected evaluation instruments. Communication Education, 44, 87–97.
Hayes, D.T. (1978). Toward validation of a measure of speech experience for prediction in the basic college-level speech communication course. Communication Studies, 29, 20–24.
Morreale, S.P. (2007). Assessing motivation to communicate (2nd ed.). Washington, DC: National
Communication Association.
Morreale, S.P., Moore, M., Surges-Tatum, D., & Webster, L. (2007). Competent speaker speech
evaluation form (2nd ed.). Washington, DC: National Communication Association.
Papa, M.J., & Graham, E.E. (1991). The impact of diagnosing skill deficiencies and assessment-based communication training on managerial performance. Communication Education, 40, 368–384.
Rubin, R.B. (1982). Assessing speaking and listening competence at the college level: The communication competency assessment instrument. Communication Education, 31, 19–32.
Rubin, R.B. (1985). The validity of the communication competency assessment instrument. Communication Monographs, 52, 173–185.
Rubin, R.B., & Martin, M.M. (1994). Development of a measure of interpersonal communication competence. Communication Research Reports, 11, 33–44.
Rubin, R.B., & Roberts, C.V. (1987). A comparative examination and analysis of three listening tests. Communication Education, 36, 142–153.
Rubin, R.B., Sisco, J., Moore, M.R., & Quianthy, R. (1983). Oral communication assessment
procedures and instrument development in higher education. Annandale, VA: Speech
Communication Association.
Sayer, J.E., & Chase, L.J. (1978). Contemporary graduate study: Evaluating the post-coursework comprehensive written examination. Association for Communication Administration Bulletin, 26, 30–33.
Spitzberg, B.H. (2007). Conversational skills rating scale (2nd ed.). Washington, DC: National
Communication Association.
Thomson, S., & Rucker, M.L. (2002). The development of a specialized public speaking competency scale: Test of reliability. Communication Research Reports, 19, 18–28.
Watson, K.W., & Barker, L.L. (1983). Watson–Barker listening test. Auburn, AL: Spectra.
Watson, K.W., Barker, L.L., & Roberts, C.V. (1989). Watson–Barker high school listening test: Development and administration. Auburn, AL: Spectra.