
Assessment as an Element of Design: An Iterative Mapping Strategy for Curriculum Design

Dr. Donald L. McEachron and Dr. Fred Allen School of Biomedical Engineering, Science and Health Systems

Drexel University Philadelphia, PA 19104

and Mr. Mustafa Sualp

AEFIS, LLC Philadelphia, PA 19106


Abstract

The impact of curriculum design on student learning has not been as well studied as other aspects of higher education. Research studies on student learning typically focus on the interplay between student personality and/or cognitive factors and instructional delivery systems within the context of a single course or course sequence. Such studies provide results related primarily to student achievement within that specific course or sequence. Curricula, however, are complex systems of courses and other activities with significant dependencies and interactions. This paper details the methodology used by the School of Biomedical Engineering, Science and Health Systems at Drexel University to develop a student learning outcome-driven alignment and integration of our courses into a competency-based curriculum.

We began the process with two requirements, one internal and one external. The external requirement was imposed by an external accreditor, ABET, Inc., which requires that a set of predefined student learning outcomes be used to evaluate student performance in all engineering programs; these are the ABET a-k criteria. The internal requirement is similar: Drexel University has developed a set of Drexel Student Learning Priorities (DSLPs), and programs are required to evaluate student performance and ensure that all Drexel students attain an adequate level of achievement on all the DSLPs. The Undergraduate Curriculum Committee (UGCC) created a combined set of student learning outcomes that covered all the internal and external requirements. These were then decomposed into measurable learning indicators. The UGCC then established an initial mapping of these criteria into our courses by matching course content and objectives to the learning indicators. The mapping was refined through interviews with course instructors and compiled into a list of courses with their associated learning indicators.

This initial listing displayed certain anomalies that had not been apparent previously. Some performance criteria were over-represented and others were under-represented within the curriculum. As a result, a secondary review of the situation in consultation with the faculty led to a limited redesign of the curriculum and a partial redistribution of the performance criteria to create a more balanced and efficient treatment of the criteria. Creating categories or levels by which a course might be associated with a performance criterion further refined the list. The levels used were Introduced, Reinforced, and Emphasized, as used by the New Jersey City University Business Administration Program. The courses, with their associated levels of performance criteria, were then placed into a temporal context by creating a template of the courses in the order in which students would progress through the curriculum. Again, numerous anomalies appeared. For example, a course might be listed as reinforcing a topic before that topic had been introduced, or the same topic might be introduced as many as 10 times without being reinforced or emphasized. In consultation with the faculty, the UGCC is further refining the curriculum and the category associations to remove these anomalies and ensure a coherent and progressive placement of each learning outcome and indicator.


The I, R and E designations do not lend themselves to effective assessment. Therefore, Bloom's taxonomy of educational objectives will be applied to each category, resulting in a curricular map associating courses and other activities with specific cognitive characteristics (knowledge, comprehension, application, analysis, synthesis, and evaluation). The resulting mapping will be used to develop multiple assessment tools, both internal (in-class assignments, problems, etc.) and external (standardized examinations, performance evaluations, etc.).

Introduction

The processes of assessment and evaluation, when applied to higher education, are often viewed with considerable angst by participating faculty1. This attitude can be especially negative when the process is viewed as imposed by outside forces, such as when it is driven by the accrediting process exemplified by ABET's approach to engineering programs. However, properly understood and applied, assessment and evaluation systems can lead to substantial improvements in curricular design and implementation. Since preliminary research indicates that overall curriculum design is an important factor in student academic achievement2,3, processes that improve curriculum design can be of vital importance in ensuring program success. The purpose of this paper is to discuss one such process - the mapping of learning indicators into a curriculum - which is often considered particularly onerous, and to provide a practical method of implementing it.

Basic Definitions and Terminology

Following the example of one of the greatest teachers of all time, Socrates, we will begin with definitions of selected key terms to ensure that we are communicating in a consistent fashion and therefore achieve a common understanding.

Assessment is the process by which one determines the current state of an individual, course, program, curriculum, academic unit, or institution. Assessment is primarily concerned with the collection and organization of data. These data can be quantitative, qualitative, or both, and can be in any format: test scores, written documents, portfolios, external reviews, etc.

Evaluation is the process of making value judgments based upon the assessment data.

Taken together, assessment and evaluation are key elements of the design process. To design a course, curriculum, or program, it is necessary to establish the goals and objectives to be attained. Creating an assessment plan during the design process helps define these goals and objectives in measurable terms. By having a preliminary decision matrix in place to guide evaluation, the goals and objectives can be further refined and better understood by all the stakeholders.


Modes of Assessment. There are several modes in which assessment can be done. These are:

1. Institutional
2. Programmatic
3. Course/Activity
4. Instructor
5. Student

Broadly speaking, these can be categorized as Institutional (#1) and Academic (#2-5). Institutional assessment refers to the entire entity and how resources are obtained and allocated in relation to the mission and strategic plans of the institution. For example, if an institutional goal is to raise the SAT scores of incoming freshmen by 20 points over 2 years, the effectiveness of programs in the Undergraduate Admissions Office in accomplishing this task can be assessed. What programs were initiated, what was the cost in terms of various investments, and what was the result in terms of SAT scores are typical questions to be assessed in this scenario. Once those data are available, decisions are made about the efficacy of those approaches and the return on investment. The latter are value judgments and form the evaluation part of the process.

Academic assessment is a subset of institutional assessment in education. Clearly, a major goal of any institution of learning is to educate students and prepare them for future activities; thus, academic assessment is necessary for a complete valuation of any institution of higher learning. This paper, however, is concerned only with academic assessment and not with other aspects of institutional metrics. This is not to say these other aspects are not vital to the health of a college or university - they are - but rather that this paper is more narrowly focused on academic concerns.

The levels of assessment are presented above in order of size, not importance. The hierarchy does seem to imply a cascading assessment approach, where each level is subsumed within the level above it. To a certain extent this is true, and it forms an important aspect of assessment planning. For example, Drexel University has an academic mission, the College of Engineering has a mission, the Department of Electrical and Computer Engineering has a mission, as does the BS program in electrical engineering and, finally, the Telecommunications/Digital Signal Processing track. Each mission must incorporate the goals of the level above while providing additional academic criteria. Thus, a student in the telecommunications track is simultaneously an undergraduate electrical engineer in the Department of Electrical and Computer Engineering, a member of the College of Engineering, and a Drexel University student, and therefore all the individual academic missions apply to him or her.

On the other hand, what is the final endpoint, the ultimate goal of the academic enterprise? A student educated to the level and in a manner consistent with all the applicable academic missions. Thus, assessment should focus on student performance and achievement. This is the true measure of the success of any educational enterprise. In the final analysis, it does not matter how many students graduate or how many are retained if those students are inadequately prepared to contribute productively to their communities upon graduation.


The average SAT scores of incoming freshmen may be a factor in a college or university's ranking in US News and World Report, but it is how much value is added to each student's capabilities that determines the actual contribution that a college or university is making to the world. The importance of institutional assessment notwithstanding, such data can be grossly misleading if not balanced with clear academic assessment that is both student-centered and evidence-based.

Categories for Assessment. The definitions presented here are based primarily on those provided by ABET, Inc. for academic assessment and may differ slightly from those used by Middle States and other accrediting bodies. These differences are minor, however, and should not detract from the discussion that follows.

Program Educational Objectives (PEOs). PEOs are characteristics of individuals who graduate from a specific program, academic unit, and institution, as measured post-graduation. The measurements are taken usually at 3 and 5 years after graduation, although there is no set protocol. An example of an objective (from the School of Biomedical Engineering, Science and Health Systems) is: As a result, a majority of graduates will identify opportunities to contribute to society from a variety of positions, ranging from biomedical engineering, biotechnology design and development to practicing physicians, lawyers, innovators, entrepreneurs and business managers. The graduate may also pursue further education in the form of graduate and professional degrees. A majority in this instance could be defined as 67% or more. Each program must generate its own objectives in line with that program's expectations for its graduates.

Student Outcomes (also called Student Learning Outcomes, or SLOs). SLOs are characteristics of students at the time of graduation4. These characteristics can include knowledge, skills, attitudes, etc. Some examples are given below (source in parentheses):

1. An ability to design and conduct experiments, as well as to analyze and interpret data (Engineering Programs, ABET, Inc. http://www.abet.org/)

2. Understand accounting and business terminology used in business scenarios, and be proficient with commonly used office software programs (Butte College, Business Education Program http://www.butte.edu/departments/careertech/businessed/slos.html)

3. To think philosophically about our existence in the world and to demonstrate a philosophic approach to ethical issues (Seattle University http://www.seattleu.edu/assessment/SLO.asp)

4. Communication Skills Learning Goal: Students graduating with a BADM degree will be able to effectively present information orally and in writing (California State University, Chico – College of Business http://www.csuchico.edu/cob/_documents/learningGoals_BADM.pdf )

5. The ability to analyze and evaluate artwork from various perspectives and to receive responsively suggestions about and criticisms of his or her own work from others. (Dickinson State University Bachelor of Arts degree in Art http://www.dsu.nodak.edu/Catalog/fine_arts/art_majors_minors.htm)

6. Recognize the relationship between structure and function at all levels: molecular, cellular, and organismal. (University of San Francisco Bachelor of Science degree in Biology http://www.usfca.edu/biology/outcomes.htm)

It should be noted that there are other outcomes related to students, such as retention, normative time to graduation, etc. Such data fall under the auspices of Institutional Assessment and are not directly addressed in this paper. However, it is reasonable to expect that enhancements in the learning environment for students will affect these data, so a close collaboration between the personnel and systems involved in institutional and academic assessment is critical.

Again, it is important to recognize the hierarchical nature of SLOs. Course SLOs should be consistent with curricular SLOs, which should be consistent with program SLOs, discipline SLOs (such as engineering, business, art, information systems, nursing, etc.), and finally with SLOs resulting from the institutional mission.

Learning indicators are the constituent elements of a student learning outcome5. While SLOs are a useful set of requirements by which to define the success of an educational program, they are not always easy to measure. Two examples of such SLOs are:

a. Ability to function on multidisciplinary teams (ABET d);
b. Understanding of professional and ethical responsibilities (ABET f).

In order to measure student achievement, SLOs are decomposed into measurable components we have labeled learning indicators (LIs). Rubrics create categories of student achievement within a given LI6-8. The number of levels varies between 3 and 5, depending on the resolution one wishes to achieve. Figure 1 displays the relationships between an SLO, its associated learning indicators, and their rubrics: rubrics are categories or levels of achievement within a given performance criterion, and learning indicators 1-4 are components of the SLO. Figure 2 displays a specific example from the School of Biomedical Engineering, Science and Health Systems at Drexel University. In the example, there are 4 categories of achievement for each learning indicator, and the description provided in each box is the metric that determines the placement of any specific student's accomplishments into one of the categories. Taken together, the four descriptive categories associated with the learning indicator (Problem-Solving Abilities: the graduate is able to creatively solve problems from both analytic and synthetic perspectives using multiple approaches, integrating the life sciences, engineering, and the humanities) form the rubric for that indicator.
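To make the decomposition concrete, the following minimal sketch (in Python) represents a single SLO with one learning indicator, a four-category rubric, and a helper that summarizes how many students reach a given rubric level. This is an illustration only, not the School's tooling; every class name, descriptor, threshold, and score is invented for the example.

# Minimal illustrative sketch (not the School's actual tooling): one SLO broken
# into a learning indicator with a four-category rubric, plus a helper that
# summarizes how many students reach a given rubric level. All names,
# descriptors, and scores below are invented for the example.
from dataclasses import dataclass

@dataclass
class LearningIndicator:
    name: str
    rubric: dict            # category number (1 = lowest) -> descriptor

@dataclass
class StudentLearningOutcome:
    title: str
    indicators: list

slo = StudentLearningOutcome(
    title="Ability to design and conduct experiments, analyze and interpret data",
    indicators=[
        LearningIndicator(
            name="Problem-Solving Abilities",
            rubric={
                1: "Solves routine problems only with substantial guidance",
                2: "Solves standard problems independently",
                3: "Combines analytic and synthetic approaches",
                4: "Creatively integrates life sciences, engineering, and humanities",
            },
        ),
        # ...three more indicators would complete the outcome shown in Figure 1
    ],
)

def attainment(placements, level=2):
    """Fraction of students placed at or above `level` on each indicator.
    `placements` maps an indicator name to one rubric category per student."""
    return {name: sum(1 for c in cats if c >= level) / len(cats)
            for name, cats in placements.items()}

# Ten (invented) student placements on one indicator:
scores = {"Problem-Solving Abilities": [1, 2, 2, 3, 4, 2, 1, 3, 2, 4]}
print(attainment(scores))   # {'Problem-Solving Abilities': 0.8}

A summary of this kind is what later feeds a standard or benchmark, such as a target share of students at or above a given rubric level, as discussed under Standards or Benchmarks below.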


Figure 1. Relationships between a Student Learning Outcome, its Learning Indicators, and their Rubrics. [The SLO decomposes into Learning Indicators 1-4; each indicator carries its own set of rubric levels, e.g., Rubrics 1.1-1.4 for Indicator 1 and Rubrics 3.1-3.5 for Indicator 3.]

Figure 2. Example of the Relationship of Outcomes, Learning Indicators, and Rubrics


Standards or Benchmarks are terms used in countless different ways. In this context, a standard or benchmark is the overall level of achievement that determines success for a given criterion. In Figure 1, for example, a standard of 70% of students achieving at the x.2 level or higher on all four learning indicators could be used to indicate that this specific student learning outcome has been attained by the program.

Learning indicators are measurable categories associated with student learning outcomes. The indicators are what is being measured in a student-centered, academic assessment; remaining to be determined is when, where, and how learning is being measured. Learning indicators differ from course grades in two somewhat paradoxical ways. First, they are specific performance metrics, whereas course grades are usually a combination of many different metrics; learning indicators therefore provide a resolution that cannot be achieved using class grades. Second, they are general characteristics of students that should be applicable in many different situations and environments. Thus, while a course grade is measured within the context of a specific course, learning indicators can be measured across courses, in extra-curricular activities, and during employment and service activities, as well as within particular courses. They can be tracked during a student's progression through the academic program and provide key intervention points in curricular re-design.

Mapping is the process by which learning indicators are associated with specific events within a program or curriculum.

The Mapping Process

The question that concerns us is how to create accurate and useful curricular maps. Although many kinds of mappings are possible, two specific types often provide the most useful information: 1) a coverage map and 2) a tracking map. A coverage map associates each learning indicator with a specific course or courses or other curricular event(s) (e.g., co-operative education). The assessment approach is described, along with the timing of data collection and, most importantly, a plan for intervention. In the coverage map, all educational experiences related to the learning indicator are listed to the extent possible, but only one or two are selected for data collection. The second type of map is a tracking map. Tracking maps chart learning indicators over a curriculum or program in terms of when and how proficiency in each indicator is imparted to the student.
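To make the distinction concrete, the sketch below contrasts the two map types as plain data structures. Every course number, term, indicator label, and intervention note is invented for illustration and does not reflect the School's actual records.

# Illustrative sketch only: the two map types as plain data structures.

# Coverage map entry for one learning indicator: where it is taught, where data
# will actually be collected, when, and what happens if results fall short.
coverage_map = {
    "LI-3 Problem-Solving Abilities": {
        "taught_in":   ["ENGR 101", "BMES 201", "Senior Design"],
        "assessed_in": ["Senior Design"],       # only one or two collection points
        "assessment":  "external design review",
        "timing":      "spring term, senior year",
        "intervention": "revise design-sequence problem sets if the benchmark is missed",
    },
}

# Tracking map rows: for each term and course, which indicator is addressed and
# at what developmental level (I = Introduce, R = Reinforce, E = Emphasize).
tracking_map = [
    {"term": "Fall, Year 1",   "course": "ENGR 101",      "indicator": "LI-3", "level": "I"},
    {"term": "Fall, Year 3",   "course": "BMES 201",      "indicator": "LI-3", "level": "R"},
    {"term": "Spring, Year 5", "course": "Senior Design", "indicator": "LI-3", "level": "E"},
]

Note that the tracking map records every exposure to an indicator, while the coverage map deliberately commits to only one or two assessment points; the caution in the next paragraph about converting one into the other applies directly to these structures.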

There is a temptation to try to convert tracking maps directly into coverage maps, especially after all the effort that goes into creating tracking maps for a curriculum. Most of the rest of this paper concerns the creation and iterative re-design of the curriculum using those maps. However, it would be a mistake to attempt such a direct conversion. Simply because a learning indicator has been mapped to a specific academic experience does not mean it must be assessed within that experience.


After all, the purpose of the experience is to create a permanent enhancement of student performance; theoretically, this could be assessed at any time after student participation in that experience. Practically speaking, assessments should be timed so as to allow faculty to determine whether the various academic experiences were successful or need to be revised. It is in this phase of mapping that quality management techniques and statistical design and analysis approaches for quality control are applied.

Initial Stages

Prior to beginning the mapping process, outcomes must be established, learning indicators created, and rubrics developed by which learning can be measured. Engineering programs have an advantage in this regard, since ABET provides a list of general and program outcomes as a starting point. Thus, engineering faculty do not have to start completely from scratch in developing program outcomes. Moreover, there is a wealth of knowledge and experience in creating performance metrics and rubrics in the field of engineering from which faculty can draw examples that can be refined for use in specific programs. Our experience at the School of Biomedical Engineering, Science and Health Systems was that learning indicators and rubrics found through Web searching proved to be very useful starting points from which our final criteria and metrics were derived. These are then reviewed, refined, and modified by the appropriate curriculum committee(s). As a final step, the SLOs, LIs, and rubrics are approved by the faculty as a whole.

Initial Assignments

Once learning indicators have been created and accepted, they must be initially assigned to specific parts of the curriculum. Our experience at the School of Biomedical Engineering, Science and Health Systems at Drexel University suggests that this activity is best done by a small committee of faculty working from course syllabi. Providing individual faculty with a list of 50-70 learning indicators and asking them to assign the appropriate criteria to their specific courses proved to be unworkable. It takes some time to become familiar with the indicators, and faculty found it difficult to work through all of them to select those most appropriate to their courses. Moreover, many courses taught to engineering students are provided by academic units outside of engineering, whose faculty are not entirely familiar with this type of assessment process. The varied backgrounds and attitudes of the faculty involved created an uneven application of the criteria, making it more difficult to ascertain the accuracy of the assignments. In our case, the Undergraduate Curriculum Committee, consisting of 3-5 faculty members working together with a common understanding of the indicators, was able to create a reasonable initial assignment by matching course objectives and descriptions with the appropriate indicators. These preliminary assignments were then forwarded to the faculty instructors to verify the accuracy of each assignment.


Operating in this manner allows faculty instructors to edit the assignments rather than create them de novo and makes more effective use of the faculty's time.

Although it is possible to proceed from this initial assignment of LIs to a first curricular map, we suggest an additional refinement be undertaken before proceeding further. Learning is a developmental process, and student performance on the indicators should improve as students progress through the curriculum; one should not expect the same level of accomplishment in solving engineering problems from freshmen and seniors! In order for a curricular map to be useful, it should reflect this developmental learning process. Therefore, it seems reasonable to assign developmental or application levels to the criteria. One such set of levels is: Introduce, Reinforce, and Emphasize. Another possible set could be: Introduce, Practice, Review, Utilize. There are many different ways of creating these levels, and there is no one absolute best method. The method chosen should reflect the developmental approaches to learning involved in your program while maximizing understanding of, and buy-in from, your faculty. The initial levels should be relatively simple to facilitate the mapping process; they can - and will - be refined later.

An examination of the initial curricular mapping can be surprisingly revealing. Unfortunately, educational curricula often seem to have more in common with biological evolution than intelligent design. Courses and sequences are often created for reasons that were more applicable to past situations than present circumstances. Resource availability (especially in terms of faculty) often dictates what courses are taught and what subjects are emphasized. The individualistic nature of the academic process, while advantageous to students in terms of providing various unique perspectives, can create a disadvantage in terms of a curriculum that lacks coherence and does not achieve overall program outcomes. Nothing reveals these problems faster than curricular maps.

Consider the initial tracking map of learning indicators provided in Figure 3, which identifies the courses or curricular events responsible for measuring a set of learning indicators at the School. From this map it is fairly clear that the coverage of learning indicators is rather uneven. This is not necessarily a design flaw - the distribution may reflect the actual goals of the program. However, we found that this is not always true. It is especially disconcerting when an important LI does not appear to be covered at all, or is mapped at a reinforcement level but never at an introductory level. On the other hand, such maps also reveal when certain criteria are over-emphasized. A map of this kind reveals what the curriculum has actually become, not what the faculty or administration think it is. It is a very revealing experience for all concerned.

The map in Figure 3 is useful for revealing any disconnects between the relative importance of various learning indicators, as indicated by faculty deliberations, and the importance of those same indicators as currently implemented. It allows for adjustments when faculty expectations and curricular implementation do not match. Furthermore, by changing the fixed parameter of Figure 3 from a learning indicator to a course or curricular event, these same data can be re-mapped as shown in Figure 4.


This approach can reveal even more discrepancies in the implementation of the current curriculum.

Figure 3. An Initial Tracking Map*

Note that Learning Indicators are also known as performance criteria or performance indicators.
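The kinds of anomalies described above can also be flagged mechanically once an initial tracking map exists. The sketch below is purely illustrative and is not the committee's actual procedure; the course numbers, indicator IDs, and the "three introductions" threshold are invented. It scans a term-ordered list of course-to-indicator assignments and reports indicators that are reinforced or emphasized before being introduced, introduced repeatedly without reinforcement, or never mapped at all.

# Purely illustrative sketch: mechanical consistency checks over a term-ordered
# tracking map. Course numbers, indicator IDs, and thresholds are invented.
from collections import defaultdict

# (term_index, course, indicator, level), already sorted by term_index
tracking = [
    (1, "ENGR 101", "LI-2", "I"),
    (2, "ENGR 102", "LI-2", "I"),
    (3, "BMES 201", "LI-7", "R"),   # reinforced before any introduction
    (4, "BMES 202", "LI-2", "I"),
    (5, "BMES 301", "LI-2", "I"),   # repeatedly introduced, never reinforced
]
all_indicators = {"LI-2", "LI-7", "LI-9"}   # LI-9 is never mapped at all

def find_anomalies(rows, indicators):
    seen = defaultdict(list)            # indicator -> levels in term order
    anomalies = []
    for _term, course, li, level in rows:
        if level in ("R", "E") and "I" not in seen[li]:
            anomalies.append(f"{li}: {level} in {course} before any introduction")
        seen[li].append(level)
    for li in sorted(indicators):
        levels = seen[li]
        if not levels:
            anomalies.append(f"{li}: never mapped to any course or experience")
        elif levels.count("I") >= 3 and not {"R", "E"} & set(levels):
            anomalies.append(f"{li}: introduced {levels.count('I')} times, never reinforced or emphasized")
    return anomalies

for problem in find_anomalies(tracking, all_indicators):
    print(problem)

As with the maps themselves, such a check only raises questions; whether a flagged pattern is a design flaw or a deliberate choice remains a faculty judgment.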


Figure 4. Another Tracking Map

In Figure 4, the numbers in the Introduce, Reinforce, and Emphasize columns refer to specific learning indicators that are mapped to the courses listed in the first column. As in the first tracking map (Figure 3), this kind of mapping reveals unsuspected anomalies in the curriculum. BMES 304 and BMES 315 have no learning indicators mapped to them, while BMES 325 and BMES 326 have 14 and 17 learning indicators mapped to them, respectively. Is this realistic, or does it reveal flaws in either the curriculum design or the initial mapping of criteria? After all, why should one offer a course that does not support any learning indicators vital to the program? On the other hand, is it not too much to ask of one course that it carry the burden of 17 learning indicators?

Again, the mapping in and of itself does not determine a course of action. There may be reasons why a course has no LI mapped to it: the course may reflect a more specialized student interest that is not reflected in the student learning outcomes (and associated learning indicators), which are, after all, program-specific and not course-specific. Courses that carry an apparently excessive burden of criteria may be gateway courses - courses so fundamental to understanding the field that they must be associated with many learning indicators. These courses would then require more extensive scrutiny to ensure that they fulfill their curricular function. The answers to these questions depend upon the program and the specific design and outcomes associated with it. However, to find the answers, one must first ask the questions, and that is one of the functions of curricular maps: to generate questions by providing an integrated view of the curriculum and its correspondence to learning indicators.

Refining the Mapping

It is possible to improve and enhance a curricular design using the mapping strategies discussed so far as an initial condition. It is also not clear that external accreditors would require any additional refinements, so is there any point to further refinements and iterative mapping?


Perhaps not solely from the point of view of accreditation, although this is by no means certain. However, a vital function of LI mapping is curricular design, and in that sense some possible enhancements remain.

One such refinement is to place the mapping into the proper temporal context. After revising the mapping based upon the data provided by the approaches illustrated in Figures 3 and 4, a template can be created showing the relationships between the learning indicators, the levels expected, the academic experiences with which the criteria are associated, and the timing of these experiences. A portion of such a mapping is provided in Figure 5. In the example shown, three terms are displayed: the initial terms of the freshman, pre-junior, and senior years. Drexel University uses the quarter system and emphasizes three six-month co-operative education experiences, leading to a five-year undergraduate degree; the pre-junior year corresponds to the third year in this five-year curriculum.

As with each of the previous mappings, this version provides yet additional data. Again, the relationship between specific learning indicators and the academic experiences (courses, etc.) can be viewed, along with the level of performance expected. However, now we can see how the developmental levels change chronologically over the curriculum. Even after several revisions using the previous approaches, these maps provided some surprises. Occasionally, the levels associated with a specific criterion were developmentally inappropriate; for example, criteria were reinforced without being introduced, or emphasized before any reinforcement. These indicate gaps between what we expected the students to be able to do and the educational opportunities provided to help them learn to accomplish those tasks at the levels expected. Once these inconsistencies were noted, the Curriculum Committee was able to work with the faculty to ensure smoother transitions and a more coherent curriculum.

Further Refinements

As a further refinement to the maps, and to aid in revising the curriculum toward a more developmentally appropriate dynamic, we created a translational matrix between the levels used in the maps and Bloom's cognitive taxonomy9,10. This required that the basic three levels of Introduce, Reinforce, and Emphasize be further subdivided to map to the appropriate levels in the taxonomy. For example, when a performance criterion is first introduced, it is mapped to Bloom's knowledge level. In our original maps, a criterion might be introduced in several different courses; after 2-3 'introductions', the level for the criterion is moved up in the taxonomy to Bloom's comprehension level, and so on. We use this matrix to determine the appropriate levels of performance expected at each developmental stage in the curriculum and then apply it to the distribution of student performance on the rubrics for each criterion. It is possible, of course, to move directly from the distribution of Introductions, Reinforcements, and opportunities for Emphasis to revised distributions on the rubrics for each performance criterion, but we do not recommend such an approach. The use of Bloom's taxonomy helps to clarify the expectations at each developmental stage and makes it easier for faculty to understand what is expected in terms of student learning on each criterion mapped to their specific courses.


Using the taxonomy also reinforces the idea that a student's progress through the academic curriculum is a developmental process, with each step depending on those that precede it. Assessment should be geared to a gradual increase in student achievement, as indicated by greater numbers of students attaining higher rubric categories as they proceed from freshman to senior status and eventual graduation. Our initial matrix is displayed in Table 1.

Figure 5. Performance Mapping in Temporal Context

For those unfamiliar with Bloom's Taxonomy and its current revisions, Benjamin Bloom and a group of educational psychologists created the hierarchy in 1956 to emphasize the cognitive results of education9-11. The idea was to reform educational processes to place greater emphasis on developing higher-order thinking among students. The approach was hierarchical in the sense that each step up in the system required mastery of the processes below it; for example, understanding requires knowledge, while application requires both understanding and knowledge, and so on. Figure 6 is a graphical display of the six levels of the original cognitive taxonomy.


There is a certain degree of arbitrariness to all of this insofar as it represents the design phase of the process: we have not yet established that the maps we have created are accurate. What is placed on syllabi and what is taught in a course do not necessarily match, and no data have yet been collected in the process we have been describing to determine to what extent the syllabi are a true representation of the students' learning experiences. This is, of course, the reason for assessing the process and determining actual student performance. However, there is a subtle problem in analyzing performance data in order to determine the accuracy of these tracking maps. Student performance is assessed and evaluated in order to ascertain whether the curriculum is succeeding in reaching student learning outcomes for the participants. To be effective, students should be assessed at several different stages in the program so that timely intervention can be conducted; early detection is preferable not just in the case of medical problems but in all cases where an intervention is planned. However, when a problem is observed, what does that actually mean? There are at least two possibilities: either the academic experience associated with creating the expected level of student performance has not achieved its objective, or the mapping of the learning indicators into that experience is incorrect. If the mapping itself is incorrect, then adjusting the academic experience will not necessarily have the desired effect on student performance.

Table 1. Matrix Relating the Level at Which Learning Indicators are Presented to Bloom's Learning (Cognitive) Domains

Mapping Level: Bloom's Learning Domain and Associated Abilities

Basic Introduction (I): Knowledge, the ability to recall information or data.

Advanced Introduction (2I-3I): Comprehension, the ability to determine and understand the meaning of instructions or problems and to translate this information into one's own terms/words.

Basic Reinforcement (R): Application, the ability to use a concept in a new situation or circumstance and/or to apply learned material in novel situations.

Intermediate Reinforcement (2R-3R): Analysis, the ability to decompose material and/or concepts into constituent parts and determine their relationships and overall structure, and to distinguish between facts and inferences.

Advanced Reinforcement (>3R): Synthesis, the ability to construct a new structure or pattern from diverse elements.

Emphasize (E): Evaluation, the ability to make value judgments concerning ideas, materials, products, processes, etc.
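Read as a rule, Table 1 translates the cumulative number of exposures to a criterion into an expected Bloom level. A minimal sketch of that lookup follows; the thresholds follow Table 1, while the function name and the treatment of unmapped criteria are our own illustrative choices, not part of the matrix itself.

# Sketch of Table 1 read as a lookup rule: given how many times a criterion has
# been introduced (I) and reinforced (R) so far, and whether it has reached an
# Emphasize (E) experience, return the Bloom level students should be expected
# to demonstrate.
def expected_bloom_level(n_introductions, n_reinforcements, emphasized):
    if emphasized:
        return "Evaluation"
    if n_reinforcements > 3:
        return "Synthesis"
    if n_reinforcements >= 2:
        return "Analysis"
    if n_reinforcements == 1:
        return "Application"
    if n_introductions >= 2:
        return "Comprehension"
    if n_introductions == 1:
        return "Knowledge"
    return "not yet introduced"

# Example: a criterion introduced twice and reinforced once so far.
print(expected_bloom_level(2, 1, False))   # Application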


Figure 6. Graphic Display of Bloom’s Taxonomy of Educational Objectives (redrawn from 12)

One method to check the accuracy of the mapping process is to use student evaluations. Many student course evaluations ask students to rate their learning on course objectives. The School uses a version of these in which students rate their state of knowledge on each objective on the syllabus before and after participating in the specific course. We have been able to use these data to monitor the success of various courses in reaching their objectives, and this allows for revisions in the course syllabi. By having students rate the learning indicators mapped to each course in a similar manner, a preliminary check on the accuracy of the mappings can be obtained. Additional confirmation can be obtained by examining the assignments, examinations, and other deliverables required of students for each course to determine whether those deliverables reflect learning opportunities associated with the mapped learning indicators. The advantage of using student evaluations is that they can be made a regular part of any assessment system with minimal additional overhead in terms of labor or cost. Our experience has shown that the data are quite reliable when applied to course objectives, and we have every reason to believe they can be applied to learning indicators with equal efficacy.

One can expect changes in the matrix presented in Table 1 as well. The table represents a first pass at the association between mapping levels and expected cognitive achievement and is by no means intended to be the final word on the matter. This association may need revision as the outcomes of assessment indicate how students are really progressing through the curriculum. This again reflects the iterative nature of curriculum mapping, assessment, and re-design: the process is continuous.
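As a rough illustration of the pre/post self-rating check described above, the sketch below averages students' before-and-after ratings for the learning indicators mapped to a course and flags indicators with little self-reported gain. The 1-5 scale, the 0.5 threshold, and all of the ratings are invented for the example and are not the School's instrument.

# Rough illustration only: average before/after self-ratings per mapped
# indicator and flag indicators with little self-reported gain.
ratings = {
    # indicator: one (before, after) self-rating pair per student
    "LI-3 Problem-Solving": [(2, 4), (1, 3), (3, 4), (2, 2)],
    "LI-6 Teamwork":        [(3, 3), (4, 4), (2, 3), (3, 3)],
}

for indicator, pairs in ratings.items():
    mean_gain = sum(after - before for before, after in pairs) / len(pairs)
    note = "" if mean_gain >= 0.5 else "  <- little gain reported; revisit the mapping?"
    print(f"{indicator}: mean self-reported gain {mean_gain:+.2f}{note}")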


From Tracking to Assessment

All the maps discussed above are tracking maps, which allow faculty and administrators to determine when and where (and occasionally if) learning indicators are being taught, track the developmental stages of student learning and performance, and revise the curriculum if needed to provide an enhanced learning experience. By providing maps of the pathways by which student performance on specific criteria develops, the expectations for student performance can be more clearly defined for both students and faculty. This should provide a more coherent and understandable experience for students and give reasonable responses to student inquiries such as 'Why am I learning this?', 'When will I ever use this?', and 'What is this for?'.

However, tracking maps do not, by themselves, generate useful assessments. Their function is to show how learning indicators are supposed to be learned and at what levels; such maps do not demonstrate that the actual learning is taking place. In addition, simply because a performance criterion is mapped to a particular course or other academic experience does not mean that assessment of that criterion must take place there. If every performance criterion were assessed at every location in the curriculum to which it was mapped, the amount of data generated would be unmanageable. Tracking maps do, however, provide insight into where critical development occurs in sets of learning indicators and thus highlight potential assessment locations. For example, in Figure 5, ENGR 101, ENGR 202, and Senior Design all have large numbers of learning indicators associated with them. This indicates that critical levels of student learning should be taking place within these course experiences; thus, some form of assessment should be located close to the time when these courses have been completed. In the case of Senior Design, the School has arranged for external reviewers to evaluate student performance. In other cases, we are creating faculty review boards for writing assignments at different locations within the curriculum and developing embedded problems to be added to assignments and examinations for assessment purposes. We are also creating a second-year examination and revising our co-operative education employer survey to reflect those learning indicators associated with co-op experiences. An additional idea under consideration is an ePortfolio design that follows students from the freshman through the sophomore, junior, and senior design classes to track the development of this critical engineering perspective and skill set. To ensure coverage, a coverage map should be created for each performance criterion.

Again, let us emphasize that tracking maps are not designed to be translated one-to-one into coverage maps. Tracking maps provide a developmental and chronological chart of the progress students make on each learning outcome and indicator throughout their academic careers. These maps also highlight critical educational experiences where significant learning is to take place and where a program's educational resources should be adequately concentrated. Once these maps exist, they can then be used to identify those critical points in the process where assessment should take place to ensure that students are making the necessary progress.
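One way to operationalize this selection, sketched below under invented data, is to favor experiences with many mapped indicators as candidate assessment points and then add smaller experiences only where needed so that every indicator is assessed at least once. The threshold of three, the course names, and the indicator IDs are hypothetical; this is a sketch of the idea, not the School's procedure.

# Hypothetical sketch: choose assessment points from a tracking map by favoring
# experiences with many mapped indicators, then close any coverage gaps.
course_to_indicators = {
    "ENGR 101":      {"LI-1", "LI-2", "LI-3", "LI-5"},
    "BMES 325":      {"LI-2", "LI-4"},
    "Co-op survey":  {"LI-6", "LI-7"},
    "Senior Design": {"LI-1", "LI-3", "LI-4", "LI-5", "LI-6"},
}

def choose_assessment_points(mapping, min_indicators=3):
    chosen = {c for c, lis in mapping.items() if len(lis) >= min_indicators}
    covered = set().union(*(mapping[c] for c in chosen))
    uncovered = set().union(*mapping.values()) - covered
    for course, lis in mapping.items():        # add smaller experiences only as needed
        if course not in chosen and lis & uncovered:
            chosen.add(course)
            uncovered -= lis
    return chosen

print(choose_assessment_points(course_to_indicators))
# e.g. {'ENGR 101', 'Senior Design', 'Co-op survey'}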


The decision-making process that follows the analysis of such assessment data is beyond the scope of the present paper.

Knowledge Management and AEFIS

As can be imagined, the process of iterative mapping can be extremely time-consuming and error-prone, especially if done manually or using multiple Excel spreadsheets as illustrated above. Automated software systems, such as the AEFIS (Academic Evaluation, Feedback and Intervention System) Solution Platform, greatly increase the efficiency and effectiveness of the process by providing a dedicated software system designed for the purposes of mapping, curriculum design, and assessment. For example, Figures 7-9 show an ongoing mapping process using AEFIS in which a person can pivot from a student learning outcome and learning indicator view (Figure 7) to a course view (Figure 8) to a term-by-term view (Figure 9) with the click of a mouse. As a knowledge-management system, of course, AEFIS manages more than just the mapping aspects of curriculum design and assessment. The key point, however, is that software systems are available that support assessment-driven mapping and curriculum design, such that many of the more tedious aspects of the process can be automated. This does not mean that everything can be handed over to computers. The human element - from designating learning outcomes and indicators to understanding and delivering the subject matter in a developmentally appropriate manner - is the most critical part of the process. Systems like AEFIS cannot substitute for those aspects of the process, but they can release faculty creativity and innovation by reducing the manual labor involved in mapping and curriculum design.

Figure 7. AEFIS-generated student learning outcomes map


Figure 8. AEFIS-generated course view map

Figure 9. AEFIS-generated term-by-term map


Conclusion

Assessment is often considered a 'necessary evil' by faculty and administrators forced to engage in the process as a requirement of accreditation. However, properly applied, the design of assessment and evaluation systems can provide considerable insight into curriculum design. The mapping of learning indicators into a curriculum forces faculty to confront the need to create a developmentally sound and rational educational process to obtain the desired student learning outcomes. The iterative process of mapping, design, re-mapping, and re-design, although tedious, is surprisingly revealing. If faculty work with the process, the result can be a significant increase in the coherence of the curriculum with a concomitant enhancement of student learning.

Bibliography

1. Meirovich, G. & Romar, E.J. (2006). The difficulty in implementing TQM in higher education instruction. Quality Assurance in Education, 14: 324-337.

2. Ishiyama, J. (2005). The structure of an undergraduate major and student learning: A cross-institutional study of political science programs at thirty-two colleges and universities. The Social Science Journal, 42: 359-366.

3. Van der Hulst, M. & Jansen, E. (2002). Effects of curriculum organization on study progress in engineering studies. Higher Education, 43: 489-506.

4. Besterfield-Sacre, M., Shuman, L.J., Wolfe, H., Atman, C.J., McGourty, J., Miller, R.L., Olds, B.M., & Rogers, G.M. (2000). Defining the outcomes: A framework for EC-2000. IEEE Transactions on Education, 43: 100-110.

5. Duerden, S. & Garland, J. (1998). Goals, objectives, and learning indicators: A useful assessment tool for students and teachers. Frontiers in Education Conference, 1998, FIE'98, 28th Annual, 2: 773-777.

6. Mertler, C.A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25). Retrieved September 27, 2006, from http://PAREonline.net/getvn.asp?v=7&n=25.

7. Moskal, B.M. (2000). Scoring rubrics: What, when and how? Practical Assessment, Research & Evaluation, 7(3). Retrieved September 27, 2006, from http://PAREonline.net/getvn.asp?v=7&n=3.

8. Moskal, B.M. & Leydens, J.A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10). Retrieved September 27, 2006, from http://PAREonline.net/getvn.asp?v=7&n=10.


9. Bloom, B.S. (Ed.), Engelhart, M.D., Furst, E.J., Hill, W.H., & Krathwohl, D.R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook 1: Cognitive domain. David McKay: New York.

10. Anderson, L.W. (Ed.), Krathwohl, D.R. (Ed.), Airasian, P.W., Cruikshank, K.A., Mayer, R.E., Pintrich, P.R., Raths, J., & Wittrock, M.C. (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's Taxonomy of Educational Objectives. Longman: New York.

11. Krathwohl, D.R. (2002). A revision of Bloom's taxonomy: An overview. Theory into Practice, 41: 212-218.

12. Forehand, M. (2005). Bloom's taxonomy: Original and revised. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. Retrieved February 5, 2009, from http://projects.coe.uga.edu/epltt/index.php?title=Bloom%27s_Taxonomy.