Design and Development of LOBE:
Learning Object Evaluation Instrument
1st Edition, Released November 2016
Technical Report: TR-ET-2016-02
Inter-disciplinary Programme in Educational Technology
Indian Institute of Technology Bombay, Mumbai
Design and Development of LOBE:
Learning Object Evaluation Instrument
1st Edition November 2016
Authors
Gargi Banerjee, Sahana Murthy, Sridhar Iyer*
Inter-disciplinary Programme in Educational Technology
Indian Institute of Technology Bombay
*Department of Computer Science and Engineering
*Contact email: [email protected]
The distribution and usage of this document are governed by the Creative Commons license –
Attribution-NonCommercial-ShareAlike 2.5 India. See https://creativecommons.org/licenses/by-nc-sa/2.5/in/ for
details. A brief excerpt from the license is given below.
You are free:
to copy, distribute, display, and perform the work
to make derivative works
Under the following conditions:
Attribution. You must attribute the work in the manner specified by the author or licensor.
Noncommercial. You may not use this work for commercial purposes.
Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting
work only under a license identical to this one.
Learning Object Evaluation, 2016, 1st Edition
Abstract
This document presents the design and development of an instrument for evaluating learning
objects. This learning object evaluation (LOBE) instrument evaluates learning objects that contain
visualization (video/animation/simulation) as the core component. The learning objects may also
contain associated components like assessment questions and learning designs based on the
visualization.
LOBE is designed to evaluate the quality of learning objects in a teacher-led instructional setting. It
evaluates quality in terms of the teaching-learning potential of the individual components as well
as the integrated learning object. It also evaluates the quality of the support provided to teachers
for effective use of the learning object. LOBE, in its current version, has undergone pilot testing
for face and construct validity. The target users of LOBE are external evaluators giving feedback
on quality to learning object creators or e-learning companies.
This document provides: (i) the rationale, underlying philosophy, and key features of LOBE, (ii)
details of the constructs and criteria to be addressed to evaluate learning object quality, (iii)
recommendations for scoring learning object quality using LOBE, and (iv) illustrative examples
of how LOBE can be implemented to evaluate learning object quality. In addition, the entire
instrument is provided in this document.
Table of Contents
1. INTRODUCTION
1.1. LOBE in context of existing learning object evaluation
1.2. Scope of LOBE
1.3. Summary
2. FOUNDATIONS OF LOBE INSTRUMENT
2.1 Underlying philosophy of LOBE
2.2 Theoretical underpinnings
2.2.1 Constructive alignment
2.2.2 Meaningful learning with ICT
2.2.3 Technological Pedagogical Content (TPACK) framework
2.3 Design and Development process of LOBE
3. LOBE INSTRUMENT DETAILS
3.1 Part I: LOBE for individual components of learning object
3.2 Part II: LOBE for integration design and usage support provided by learning object
3.3 Part III: LOBE quality for user testing
4. RECOMMENDED USE OF LOBE
4.1 Target Evaluators
4.2 Customizing LOBE for evaluators
4.3 Scoring using LOBE
4.4 Interpreting LOBE scores
5. PILOT TESTING OF LOBE
6. CONCLUSION
6.1. Limitations
6.2. Future Work
7. REFERENCES
Annexure A: Learning Object Evaluation Instruments
Annexure A (i): LOBE Part I – Evaluation of individual learning object components
Annexure A (ii): LOBE Part II – Evaluation of integrated learning object and usage support
Annexure A (iii): LOBE Part III – Evaluation of learning object through user testing
Annexure B: Glossary of Educational Technology terms used in LOBE
Annexure C: List of reviewers & Acknowledgement
1. INTRODUCTION
The Learning Object Evaluation (LOBE) instrument is designed to evaluate the quality of learning
objects in a teacher-led instructional setting. By the term 'learning object' we mean a web-based
or digital resource which is "a collection of content items, practice items, and assessment items
that are combined based on a single learning objective" (Hodgins, 1994). Hodgins' definition of
learning object was chosen from the many that exist since it aligns best with the type of resource
LOBE has been designed to evaluate. LOBE interprets a learning object to consist of a core
visualization component (videos/animations/simulations) and associated components of learning
designs and assessment targeting a single learning objective or a small set of learning objectives.
The central visualization component of the learning object may explain concepts, depict processes,
allow users to manipulate variables, and so on. The learning design component contains guidelines
for teachers on creating learning activities based on the content of the visualization component,
thus supporting them in the orchestration process while teaching with the learning object. The
assessment component contains formative and summative assessment questions based on the
visualization content. Each of these three components, as well as the interplay between them,
plays an important role in determining learning and teaching effectiveness. Thus, an instrument
that evaluates learning objects should encompass the teaching-learning effectiveness of each
individual component, their integrated design, and the quality of support provided for effective
use. These form the focus of evaluation by LOBE – a three-part evaluation instrument.
1.1. LOBE in context of existing learning object evaluation
A survey of existing evaluation instruments for learning objects reveals that they can be broadly
categorized into summative evaluation, i.e. evaluation of the end-product, and formative evaluation,
i.e. evaluation during the production process. In this sub-section, we present a brief summary of the
existing summative evaluation measures, since LOBE evaluates the quality of the finished product.
The typical purpose of summative evaluation instruments is to aid in the selection of good quality
learning objects based on quality ratings. These ratings are derived from explicit evaluations, implicit
evaluations, or a combination of both. Explicit evaluation collects direct feedback about the
learning object from users through instruments like questionnaires. Implicit evaluation is
determined from indirect feedback derived from learning object usage data such as number of visits,
number of downloads, etc. Explicit evaluation results are often combined with implicit results to
rank the learning objects in a repository, such as in a recommender system (Sanz-Rodriguez et
al., 2010). The target users of summative evaluation instruments are of two types – (i) learning
object users, i.e. students and teachers, and (ii) peer reviewers, i.e. experts reviewing the learning
object for its effectiveness. The instruments targeting users, like LOES-S, LOES-T and QAMLM Part
C (Kay & Knaack, 2009; CEMCA, 2009), capture users' experience of learning with a particular
learning object. On the other hand, peer-review instruments like LORI and PMLQ (Nokelainen, 2006;
Leacock & Nesbit, 2007) are used by experts to judge potential learning effectiveness along
multiple dimensions like content, presentation design and others.
In this landscape of learning object evaluation, we needed an instrument that would evaluate
learning objects in a teacher-led instructional setting. We did not find a learning object evaluation
instrument customized to our requirements, i.e. one that evaluates the learning and teaching
effectiveness of each individual learning object component and the usage support provided to
teachers. Therefore, we created the LOBE instrument. A subset of the questions in LOBE is adapted
from existing robust instruments, but the majority of the questions were framed by us to evaluate
learning object quality in a teacher-led instructional setting.
1.2. Scope of LOBE
LOBE is designed for learning objects that contain dynamic visualization (videos/animations/
simulations) as a core component. The learning object may also contain other components
associated with the visualization, like assessment questions and learning designs.
LOBE is designed for learning objects from the science and engineering domains. It contains
criteria and operationalizing questions on affordances critical to learning in these domains, such as
questions on variable manipulation. The criteria are grouped under key constructs such as content,
pedagogy, technology, pedagogical content, technological content and technological pedagogy. Also,
LOBE evaluates the pedagogy-related constructs in greater detail than the technology-related ones.
Certain evaluation criteria, such as adaptation to different learning styles, are out of scope for LOBE.
LOBE targets learning objects useful in face-to-face instructional settings like classrooms and
laboratories, where the teacher is present as a facilitator during student learning with the learning
object. In such settings students may or may not have direct access to the visualization.
The target user of LOBE is an evaluator reviewing the quality of learning objects in order to give
detailed feedback to the learning object creators or the e-learning companies producing them.
The prerequisite for effective use of LOBE is that the evaluator has basic knowledge of
active learning strategies and constructivist learning theory. LOBE provides these evaluators with
an instrument that enables them to gauge the teaching-learning potential of a learning object
based on the recommendations of established teaching-learning theories on teaching with learning
objects. LOBE can also be used for peer review within an e-learning company, i.e. a parallel
production team can evaluate a learning object produced by another team in the company.
1.3. Summary
LOBE falls into the category of summative evaluation instruments for peer review. However, the
purpose of LOBE is not to aid in the selection of learning objects by end-users. Instead, the objective
is to provide feedback to learning object creators (such as e-learning companies) about the
teaching-learning effectiveness of the individual components and the integrated learning object. It
also provides feedback on the quality of support the learning object provides for student-centered
use of the resource. The evaluation score for each construct (like pedagogy, technology, content,
etc.) provides feedback on the quality of the learning object vis-à-vis the theory-recommended
state. LOBE is a summative evaluation instrument in that it evaluates the quality of the end-product.
LOBE consists of three parts, where each part contains a set of criteria and a set of questions to
measure how well each criterion is incorporated in the learning object. The criteria are derived
from learning object evaluation theory and align with the objective of LOBE. They are sorted
under appropriate constructs. The evaluation scores from LOBE provide feedback to learning
object creators on the specific constructs and criteria along which the quality of their learning
object needs to improve to enhance its effectiveness as a teaching-learning resource.
2. FOUNDATIONS OF LOBE INSTRUMENT
This section addresses the following questions in the context of evaluating a learning object as a
teaching-learning resource:
(i) What constructs and criteria should be evaluated to assess quality of individual
components of learning objects as a teaching-learning resource?
(ii) What criteria should be evaluated to assess quality of integration of the components into a
learning object?
(iii) What criteria should be evaluated to assess quality of support provided for effective use
of the learning object?
We answer the above questions through an analysis of existing learning object evaluation
frameworks (Nokelainen, 2006; Leacock & Nesbit, 2007; Hadjerrouit, 2010), learning theories
and teaching principles (Biggs, 1996; Howland et al., 2012). This led to the identification of a
superset of necessary evaluation criteria recommended by existing teaching-learning theories,
brought under a common nomenclature. A subset of these criteria, those aligned to the scope of
LOBE, was included in the instrument. For example, theory recommends inclusion of a criterion
like adaptation, but it is not included in LOBE because measuring adaptation capacity is not within
the scope of LOBE. The operationalizing questions corresponding to each criterion are either
sourced from a survey of existing instruments which have reported face validity values, or phrased
by us, like the questions operationalizing interaction design principles or questions on appropriate
selection of media. A pilot test of the face validity of such questions was done with evaluators
drawn from the target sample population. Thus LOBE provides learning object creators with a
theory-informed set of evaluation questions that assess the teaching-learning effectiveness of
learning objects. The LOBE instruments have undergone pilot testing for construct and face validity.
2.1 Underlying philosophy of LOBE
LOBE considers a learning object to be an effective teaching-learning resource if it supports the
constructivist philosophy of teaching and learning. Hence, LOBE interprets effectiveness as follows:
(i) The learning object promotes higher order thinking skills among students rather than rote learning:
LOBE interprets a learning object as an effective learning resource when it promotes active
meaning-making by students rather than simple information transmission. This stems
from the constructivist philosophy of learning. Thus the LOBE instrument is based on teaching-
learning theories like Meaningful Learning with ICT (Howland et al., 2012). Details about the
theoretical underpinnings of LOBE are given in the next sub-section. LOBE addresses user interface
design principles like multimedia principles and interaction design principles, since they test
whether the interface design of the learning object facilitates active meaning-making by students.
(ii) The learning object supports the design of student-centered learning activities with the
visualization rather than mere demonstration of the visualization:
LOBE focuses not only on the quality of the individual components as teaching-learning
resources; it also evaluates their integration in terms of constructive alignment. Thus theories like
constructive alignment (Biggs, 1996) form part of the theoretical basis of LOBE. Details of
constructive alignment are given in the next sub-section. LOBE ensures that each component of a
learning object upholds the constructivist philosophy.
2.2 Theoretical underpinnings
As mentioned in the previous sub-section, LOBE is based on the following principles of teaching
and learning with digital resources:
● Constructive Alignment (Biggs, 1996)
● Meaningful Learning with ICT (Howland et al., 2012)
● Technological Pedagogical Content Knowledge (TPACK)
(Koehler & Mishra, 2009)
2.2.1 Constructive alignment
Constructive alignment was chosen as one of the theoretical bases of LOBE because it is one of
the fundamental principles of student-centered teaching-learning and aligns with the objective of
LOBE. This principle specifies that, to successfully attain the learning objective of a teaching unit,
both the teaching-learning activity and the assessment should map to the learning objective within
a constructivist pedagogy (Fig. 1). Empirical studies have shown the positive impact of this
principle on student learning in multiple science domains (Morris, 2008; Hoddinott, 2000). Thus
LOBE measures the constructive alignment between the learning objective and each of the
individual components of the learning object, i.e. assessment questions, learning designs and
visualization content.
Figure 1: Constructive alignment principle
2.2.2 Meaningful learning with ICT
Constructive alignment does not focus on ICT; that focus is brought in by Meaningful Learning
with ICT, another fundamental teaching principle. Meaningful Learning with ICT (Howland et al.,
2012) outlines the dimensions that should be incorporated within learning designs to ensure
meaningful learning. The five dimensions are: a) Active – students actively engage with ICT
content; b) Constructive – students use ICT to construct their own knowledge through self-reflection
and articulation; c) Authentic – students devise solutions to real-life problems using ICT;
d) Intentional – students set their learning goals, evaluate their understanding and self-diagnose
their errors through ICT; and e) Cooperative – students do group activities with their peers using
ICT. LOBE evaluates the extent to which the five dimensions of meaningful learning are
incorporated by each of the individual components as well as by the integrated learning object.
2.2.3 Technological Pedagogical Content (TPACK) framework
TPACK (Koehler & Mishra, 2009) is another important framework for teaching with ICT. It
describes the knowledge base instructors need in order to design effective activities with ICT,
i.e. the knowledge teachers must acquire to teach effectively using ICT tools like visualizations.
It contains seven constructs – Content (C), Pedagogy (P) and Technology (T), as well as the
interactions between these three, i.e. Pedagogical Content (PC), Technological Content (TC),
Technological Pedagogy (TP) and Technological Pedagogical Content (TPC). Since the objective
of LOBE is to evaluate learning objects as a teacher-led resource, we chose the set of seven
TPACK constructs as the basis for sorting the evaluation criteria obtained from existing theory.
The significance of these constructs is:
(i) Content (C) – measures the teacher's knowledge of the content or subject matter
(ii) Pedagogy (P) – measures the teacher's knowledge about how students learn and about
different teaching strategies and assessment strategies
(iii) Technology (T) – measures the teacher's knowledge about using different ICT tools in teaching
(iv) Pedagogical Content (PC) – measures the teacher's knowledge of teaching and assessment
strategies suited to the specific content
(v) Technological Pedagogy (TP) – measures the teacher's knowledge of teaching and assessment
strategies that exploit the affordances of an ICT tool
(vi) Technological Content (TC) – measures the teacher's knowledge of using different
technologies to teach content
(vii) Technological Pedagogical Content (TPC) – measures the teacher's knowledge of teaching
and assessment strategies that exploit the affordances of the technology to teach specific content.
2.3 Design and Development process of LOBE
In this sub-section, we describe the stepwise process undertaken to identify the requisite criteria
and sort them into constructs to create the LOBE instrument, followed by pilot testing for validity.
Step 1: Identify criteria for learning object evaluation
Step 2: Group criteria under constructs & classify them as mandatory, non-mandatory
Step 3: Operationalize criteria into questions
Step 4: Test validity of LOBE instrument
Step 1 - Identify Criteria for Learning Object Evaluation
The broader field of e-learning evaluation frameworks was studied to ensure that all criteria
relevant to learning object evaluation were included in LOBE. A meta-analysis was done of
thirteen e-learning evaluation frameworks (Abiagam & Usoro, 2009; Teng, 2004; Jung, 2011;
Ozkan et al., 2009; Sun et al., 2008; Phipps & Merisotis, 2000) in order to identify specific
evaluation criteria. Each of the frameworks was found to refer to a subset of criteria, but none of
them covered all. Our starting point was a union of the criteria from the different frameworks.
Two filters were applied for inclusion of a criterion into LOBE. The first filter was that the criterion
should be validated by learning theory or educational technology research, be recommended
by multiple evaluation frameworks, and be relevant to learning objects. Certain e-learning
evaluation criteria, such as course development, institutional support and reliability of online
examinations, that were not applicable to learning objects were excluded from LOBE. The criteria
were finalized only after the identified set started repeating across multiple research studies with
no new criteria appearing. Within the resulting criteria superset, similar criteria from
different frameworks were brought under a common nomenclature. The second filter was that the
criteria should be aligned to the scope of LOBE (described in Sec. 1.2).
Step 2 - Group criteria under constructs and classify them as mandatory, non-mandatory
In this step, the criteria were classified into mandatory and non-mandatory categories and then
grouped under relevant constructs. A criterion was termed mandatory if it had been recommended
by teaching-learning theories and multiple learning object evaluation frameworks, reflecting its
relevance, and also fit the objective of LOBE. The mandatory criteria, i.e. criteria which
must necessarily be evaluated in any quality evaluation of a learning object, are starred in LOBE.
The un-starred criteria are those which may be important for a subset
of the target audience but not for all. For example, the criterion Alignment, which measures the
extent of alignment of the learning objective with the assessment questions and with the learning
activity built into the visualization (visualization activity), is a mandatory criterion in LOBE as
per the teaching principle of Constructive Alignment (Biggs, 1996). A criterion like Value add-on,
which evaluates the value addition provided by the learning object over the textbook, has been
included as non-mandatory since it is likely to be relevant for only a subset of the target audience.
On the other hand, the criterion Metadata retrieval was identified as mandatory from analysis of
existing literature on learning object evaluation. However, the objective of LOBE does not focus
on open-source resources and ease of searching, so the Metadata retrieval criterion was made
non-mandatory in LOBE.
LOBE contains three parts, with Part I evaluating the individual components, Part II evaluating the
integrated learning object and usage support, and Part III providing questions to be evaluated
during user studies. Once the criteria set for LOBE was identified, the criteria were grouped under
appropriate constructs in Part I, based on the definitions provided for each construct in the TPACK
framework (Koehler & Mishra, 2009). The seventh construct of TPACK, i.e. Technological
Pedagogical Content (TPC), is not included in Part I. This is because TPC represents the support
provided to teach the content
with student-centered pedagogy using the learning object. Questions operationalizing TPC are
therefore included in Part II. The construct-criteria mapping in LOBE Part I is shown in Table 1.
Table 1: Shortlisted construct-criteria mapping in LOBE Part I (* = mandatory criterion)

1. Content – deals with the subject matter presented in the learning object.
Criteria: i) Content coverage*; ii) Content accuracy*; iii) Language comprehensibility (concise &
comprehensible)*; iv) Accommodates socio-cultural differences; v) Content updated to reflect
recent advances in the field.

2. Pedagogy – deals with incorporation of the established teaching-learning theories/principles to
teach the subject matter.
Criteria: i) Learning objective validity; ii) Learning objective explicitness; iii) Constructive;
iv) Alignment; v) Feedback quality; vi) Assessment question framing; vii) Time estimate for
assessment question response; viii) Prior knowledge.

3. Technology – deals with the technical aspects of the learning object, like its graphic design or
user interface design.
Criteria: i) Interaction design principles* (Proximity, Visibility, User action feedback, Consistent,
Affordance, Mapping); ii) User interface usability*; iii) Reusability; iv) Ease of use*;
v) Accommodates differently abled students; vi) Standard compliance; vii) Metadata retrieval.

4. Pedagogical content – deals with application of established teaching-learning theories/principles
to teach the subject matter.
Criteria: i) Addressing misconceptions*; ii) Connection to real life*; iii) Content sequencing*;
iv) Transfer potential*.

5. Technological pedagogical – deals with utilizing the technological affordances of the learning
object to teach the subject matter.
Criteria: i) Appropriate choice of media*; ii) Group activity*; iii) Multimedia principles*
(Coherence, Signaling, Redundancy, Contiguity, Segmenting, Personalization); iv) Active
learning*; v) User control of pace*; vi) Navigation.

6. Technological content – deals with utilizing the technological affordances to present the subject
matter in a meaningful way.
Criteria: i) Choice of media; ii) Value addition; iii) Compatibility with other resources.
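For readers who want to work with this mapping programmatically, for example to generate
customized question sets or per-construct scores, the structure in Table 1 can be captured as a
small data model. The Python sketch below is illustrative only; the class and variable names
(Criterion, PART_I_CRITERIA) are our own and are not part of LOBE.

```python
# Illustrative data model for the Table 1 mapping; all names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Criterion:
    name: str                 # e.g. "Content accuracy"
    construct: str            # one of the six Part I constructs
    mandatory: bool           # True for starred (*) criteria
    questions: List[str] = field(default_factory=list)

PART_I_CRITERIA = [
    Criterion("Content accuracy", "Content", True,
              ["Is the content accurate?"]),
    Criterion("Reusability", "Technology", False,
              ["Can visualization be reused across modules/curriculums?"]),
    # ... remaining criteria transcribed from Table 1 and Annexure A (i)
]

# List the mandatory criteria per construct:
for c in (c for c in PART_I_CRITERIA if c.mandatory):
    print(f"{c.construct}: {c.name}")
```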
Step 3 - Operationalize criteria into questions
We proceeded through the following process to operationalize the criteria into questions:
i) We did a comparative study of the operationalization of each criterion in existing learning object
evaluation instruments. This yielded a large number of questions mapped to the same criterion.
ii) A short-listing mechanism was applied for the inclusion of such questions into LOBE:
- Questions that had a robust instrument as their source were taken; only when a robust
source was not found was a non-robust instrument considered.
- When multiple questions addressing the same criterion competed for inclusion, the
question from the robust source instrument was favored. If both source instruments were
robust, the more clearly worded question (as judged in the pilot testing with students and
peer-reviewers) was included.
iii) We framed operationalizing questions for those criteria, like Multimedia principles or
Interaction design principles, for which operationalizing questions were not found in the existing
instruments.
The construct and face validity of such questions were confirmed through a focus group interview
with experts and evaluators drawn from the sample target population. Details of the methodology
followed for validity testing are given in the next step.
Step 4 - Test Validity of LOBE instrument
A pilot test of the validity of LOBE was carried out for (i) face validity and (ii) construct
validity.
i) Construct validity –
Construct validity is the extent to which the questionnaire measures what it claims to measure.
This was assessed through a focus group interview with an expert panel. The panel comprised ten
researchers from the educational technology field, with specializations in learning design research,
assessment design and interaction design. They were given LOBE and the following
set of four focus questions to deliberate on, of which the last two were related to construct validity:
- Is there any other criterion that needs to be evaluated for measuring the teaching-learning
potential of learning objects?
- Are the operationalizing questions comprehensible?
- Does the set of questions mapped to a construct measure the said construct?
- Are there questions mapped to a construct that seem to measure another construct
instead?
The panel debated for 2.5 hours, applying the questionnaire to learning objects from multiple
repositories, until they agreed on a common set of modifications to be made in LOBE. The panel
recommendations were incorporated into revised versions of LOBE through three iterations and
cross-checked with the experts.
ii) Face validity –
Face validity measures the extent to which the target users of the instrument are able to correctly
comprehend what each question intends. This validity was tested in two rounds. In round 1, the
expert panel tested face validity during the focus group interview described above. In round 2,
face validity was tested with six evaluators drawn from the sample population. To measure face
validity, the evaluators were asked at the end of their evaluation whether there were any questions
that they found ambiguous or were not able to understand. The feedback received was incorporated
into LOBE before it was administered to the next evaluator. The last few evaluators raised no
further issues, which points to the face validity of LOBE.
3. LOBE INSTRUMENT DETAILS
LOBE is scoped to those learning objects which contain visualization
(video/animation/simulation) as their core component. A learning object may also contain
associated components like assessment and learning designs based on the visualization. LOBE
evaluates the effectiveness of such a learning object as a teaching-learning resource in a
face-to-face instructional setting where the teacher is present to facilitate learning using the
learning object. LOBE considers a learning object to be of high quality if it promotes
student-centered constructive learning and the application of higher order thinking skills. The
objective of LOBE is to evaluate the teaching-learning effectiveness of the learning object. The
evaluation results generated through LOBE provide constructive feedback to the learning object
creator on the affordance design quality, the integration design quality and the quality of support
provided for effective use of the learning object as a teaching-learning resource. LOBE consists
of three parts: (i) Part I evaluates the quality of the affordances provided by the individual
components of a learning object, i.e. the core visualization component, the assessment component
and the learning design component. (ii) Part II evaluates the quality of the integration of the
individual components into the learning object; it also evaluates the support provided by the
learning object for its effective use. (iii) Part III is a short six-item questionnaire that contains
questions on criteria that need to be evaluated through user studies with the learning object's
target student/teacher population.
3.1 Part I: LOBE for individual components of learning object
Part I is a questionnaire containing forty-eight questions. The objective of Part I is to evaluate the
affordance quality of each of the individual components of the learning object. It contains a set of
criteria, like feedback quality and interaction design, which are derived from theory on teaching-
learning with ICT tools like visualizations. The criteria are sorted into six constructs of the
Technological Pedagogical Content Knowledge (TPACK) framework – Content, Pedagogy,
Technology, Pedagogical
content, Technological content and Technological pedagogy. Each criterion is accompanied by its
corresponding operationalizing questions. The questions are color coded according to the
component they evaluate (visualization = purple, assessment = grey, learning design = orange);
if the learning object does not contain one of the associated components, the corresponding
questions can be dropped from the question set.
The criteria set is further grouped into mandatory and non-mandatory criteria. The mandatory
criteria are marked with a '*' symbol; a mandatory criterion has to be evaluated for any type of
learning object. Evaluation of the non-mandatory criteria is less critical: learning object creators
can decide which non-mandatory criteria to include in the evaluation instrument, based on their
design decisions. Thus criteria like constructive or active learning are marked mandatory in LOBE,
whereas criteria like reusability or prior knowledge are marked non-mandatory. How the criteria
are to be scored by evaluators is explained in Sec. 4.3. The complete list of criteria and questions
of Part I is given in Annexure A (i). Illustrative examples of questions from Part I are given in
Table 2.
Table 2: Examples of evaluation questions from Part I

Construct: Pedagogy
Criterion: Assessment Feedback Quality*
Question: Is the student provided with corrective feedback when asked to answer assessment
questions?
Remark: '*' = example of a mandatory criterion; grey font = assessment component evaluation.

Construct: Pedagogy
Criterion: Visualization Feedback Quality*
Question: Is the student provided with corrective feedback when asked to do some activity within
the visualization?
Remark: '*' = example of a mandatory criterion; purple font = visualization component evaluation.

Construct: Content
Criterion: Language comprehensibility
Question: Is the language used in the learning designs easy to understand?
Remark: Not starred = non-mandatory; orange font = learning design component evaluation.
3.2 Part II: LOBE for integration design and usage support provided by learning object
Part II is a nineteen-question questionnaire. The objective of Part II is to evaluate the effectiveness
of the integration of the different components into a single learning object. It also evaluates the
quality of support provided for effective use of the learning object. As in Part I, the questions are
color coded, with integration design questions in brown and usage questions in green. The criteria,
as in Part I, are marked as mandatory and non-mandatory, on the same basis as in Part I, but the
criteria are not further grouped into constructs. How the criteria
are to be scored is explained in the next section. The complete list of criteria and questions of Part
II is given in Annexure A (ii). Illustrative examples of questions from Part II are given in Table 3.
Table 3: Examples of evaluation questions from Part II

Criterion: Alignment*
Question: (Evaluate for each LO) Are the learning designs aligned to the learning objective?
Remark: '*' = example of a mandatory criterion; brown font = evaluation of integration between
different learning object components.

Criterion: Cooperative*
Question: Do the learning designs provide support for group activity using the visualization?
Remark: '*' = example of a mandatory criterion; green font = support provided by the learning
object for its effective use.

Criterion: Compatibility with other resources like textbook
Question: Can the visualization be used in combination with the textbook?
Remark: Not starred = non-mandatory; green font = support provided by the learning object for
its effective use.
3.3 Part III: LOBE quality for user testing
This is a short six-question questionnaire. The objective of Part III is to provide evaluators with a
question set that can be evaluated only through user testing with the target audience of the learning
object being evaluated, i.e. student and teacher users. This questionnaire, like Part II, contains a
set of criteria accompanied by corresponding questions that measure the quality of each criterion.
The complete list of criteria and questions of Part III is given in Annexure A (iii).
Illustrative examples of questions from Part III are given in Table 4.
Table 4: Examples of evaluation questions from Part III

Criterion: Language comprehensibility
Question: Is the language used in the learning designs easy to understand for the target teacher
population?
Remark: Non-mandatory criterion.

Criterion: Cognitive Loading*
Question: Do students have to remember only small sections of the content at the same time?
Remark: '*' = example of a mandatory criterion.
4. RECOMMENDED USE OF LOBE
4.1 Target Evaluators
LOBE is primarily designed to be used by evaluators for evaluating learning objects, for example
an external evaluator reviewing the quality of learning objects created by e-learning companies.
The evaluation score generated through LOBE is expected to give learning object creators, such
as e-learning companies, feedback about the effectiveness of their learning object as a teaching-
learning resource and a learning design aid. The prerequisite for evaluators is a basic knowledge
of active learning and teaching strategies and constructivist learning theory.
Another category of evaluators is peer reviewers, i.e. reviewers from parallel learning object
creation teams within the same e-learning company. Parallel teams like the Instructional
Designers (IDs), Subject Matter Experts (SMEs) and Graphic Designers (GDs) can use LOBE as
a quality checklist to provide constructive feedback to the production team.
4.2 Customizing LOBE for evaluators
The LOBE instrument is scoped to learning objects which contain a core visualization
(video/animation/simulation) component along with associated components of assessment
questions and learning designs. Part I of the instrument contains forty-eight questions and
evaluates the effectiveness of the affordances of the individual components of the learning object.
The questions pertaining to each component are color coded: visualization questions appear in
purple, assessment questions in grey, and learning design questions in orange. Thus, if the learning
object being evaluated does not contain one or both of the associated components, the
corresponding questions can be dropped from the evaluation instrument for that particular object.
Part II of the instrument contains nineteen questions, color coded with questions on the integrated
learning object in brown and questions on learning object use in green. Hence, if a learning object
comprises only a visualization component, then in Part II only the green-coded questions should
make it into the customized evaluation instrument. If the learning object contains the visualization
and one of the associated components but not both, then the questions that evaluate integration
with the missing component are dropped, while the usage-level questions are retained. Part III of
LOBE contains a set of six questions and is to be used only if the learning object creator wants to
carry out user testing of their learning objects with target student or teacher populations.
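As a concrete illustration of these customization rules, the sketch below filters a question set by
the components actually present in a learning object. It assumes each question is tagged with the
component(s) it evaluates, mirroring the color coding; the function and data names are
hypothetical and not part of LOBE itself.

```python
# Hypothetical sketch of the customization logic; mirrors the color coding
# by tagging each question with the component(s) it requires.
from typing import List, Set, Tuple

Question = Tuple[str, Set[str]]  # (question text, required components)

PART_I_SAMPLE: List[Question] = [
    ("Is the content accurate?", {"visualization"}),
    ("Are the assessment questions unambiguous?", {"assessment"}),
    ("Is the language used in the learning designs easy to understand?",
     {"learning_design"}),
]

def customize(questions: List[Question], present: Set[str]) -> List[Question]:
    """Keep only questions whose required components are all present."""
    return [q for q in questions if q[1] <= present]

# Example: a learning object with a visualization and assessment questions,
# but no learning design component (e.g. the OSCAR animations in Sec. 5).
for text, _ in customize(PART_I_SAMPLE, {"visualization", "assessment"}):
    print(text)
```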
In Part I the questions are grouped under the following constructs – Content, Pedagogy,
Technology, Pedagogical Content, Technological Content and Technological Pedagogy – and Part
I can be used to give scores per construct. The learning object creator thus gets remedial feedback
on the quality of the learning object in terms of the score obtained for each construct vis-à-vis the
total score possible for that construct. If parallel production teams are serving as
evaluators, then each team can take up a subset of Part I mapped to its expertise. For example, the
Subject Matter Expert (SME) team can take up the Content, Pedagogy and Pedagogical Content
questions for evaluation, whereas the Instructional Design (ID) team can take up the Pedagogy,
Pedagogical Content and Technological Content questions. Part II of LOBE is a set of questions
not segregated into constructs. The score generated by the Part II evaluation vis-à-vis the total
possible score gives the learning object creator remedial-level feedback on which part of the
integrated design needs improvement, as well as feedback on different aspects of the usage
support provided.
4.3 Scoring using LOBE
To score the effectiveness of the learning object, we suggest the following scheme on a scale of 0
to 3 for all parts of LOBE – Parts I, II and III:
i) Score 3 ==> Target level: all required aspects are present and correct
ii) Score 2 ==> Acceptable level: major aspects are present and correct, but a few aspects
need improvement
iii) Score 1 ==> Inadequate level: one or more major aspects are missing or incorrect, though
a few aspects may be correct
iv) Score 0 ==> Missing or incorrect level: none of the required aspects is present, or the
given aspects are entirely incorrect
Illustrative examples of the application of the scoring scheme are given below.
Example 1:
For the criterion 'Assessment Feedback Quality', the operationalizing question in LOBE is – 'Is
the student provided with corrective feedback when asked to answer assessment questions?'
As per the scoring scheme outlined above, a learning object will get:
Score 3 ==> Target level: feedback informs students of the correct response along with an
explanation; in addition, remedial help guides students to revisit specific content portions for
better understanding.
Score 2 ==> Acceptable level: feedback informs students of the correct response along with an
explanation of why the chosen response is correct or incorrect.
Score 1 ==> Inadequate level: feedback is provided to students but only in a binary format of
correct or incorrect, without further explanation.
Score 0 ==> Missing or incorrect level: no feedback is provided to students by the assessment
questions.
Example 2:
For the criterion 'Learning objective validity', the operationalizing question in LOBE is – 'Are
learning objectives written correctly?'
As per the scoring scheme outlined above, a learning object will get:
Score 3 ==> Target level: learning objectives specify what students should be able to do, use
measurable action verbs, and specify the conditions under which the performance will be
carried out.
Score 2 ==> Acceptable level: learning objectives specify what students should be able to do
and use action verbs, but they are not clear on the conditions under which the performance is
to be carried out.
Score 1 ==> Inadequate level: learning objectives specify what students should be able to do,
but they do not use specific measurable action verbs, leading to ambiguity or multiple
interpretations of the performance.
Score 0 ==> Missing or incorrect level: learning objectives are not valid since they are not
student-centered; that is, they do not indicate students' measurable performance level. Instead
they may indicate what the teacher is supposed to do, or use non-measurable action verbs.
4.4 Interpreting LOBE scores
Once the evaluation of the selected learning object is complete, it is recommended that the scores
per construct be calculated for Part I. The scores obtained per question are totaled and compared
against the total score possible for the construct. This tells the learning object creator which
constructs need attention to improve quality. A tentative interpretation of the score (in percent)
may be:

Percent      Interpretation
100 – 75     Excellent
74 – 50      Good
49 – 25      Needs Improvement
24 – 0       Poor
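To make the arithmetic concrete, the sketch below totals per-question scores by construct,
expresses each total as a percentage of the maximum possible (3 per question), and maps the
percentage to the interpretation bands above. The function names are ours and are illustrative
only; LOBE itself prescribes only the 0-3 scale and the interpretation bands.

```python
# Illustrative scoring helper; names are hypothetical, not part of LOBE.
from collections import defaultdict
from typing import Dict, List, Tuple

def construct_percentages(scores: List[Tuple[str, int]]) -> Dict[str, float]:
    """scores: (construct, score in 0..3) per question -> percent per construct."""
    total, count = defaultdict(int), defaultdict(int)
    for construct, score in scores:
        assert 0 <= score <= 3, "LOBE scores lie on a 0-3 scale"
        total[construct] += score
        count[construct] += 1
    return {c: 100.0 * total[c] / (3 * count[c]) for c in total}

def interpret(percent: float) -> str:
    """Map a percentage to the tentative interpretation bands above."""
    if percent >= 75: return "Excellent"
    if percent >= 50: return "Good"
    if percent >= 25: return "Needs Improvement"
    return "Poor"

# Example: two Pedagogy questions and two Content questions.
ratings = [("Pedagogy", 3), ("Pedagogy", 2), ("Content", 1), ("Content", 0)]
for construct, pct in construct_percentages(ratings).items():
    print(f"{construct}: {pct:.0f}% -> {interpret(pct)}")
```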
5. PILOT TESTING OF LOBE
In this section we present the application of LOBE Parts I and II to learning objects from the
OSCAR repository (http://oscar.iitb.ac.in). These learning objects contain a core
visualization component and an associated assessment component; they do not contain a
learning design component. To evaluate the OSCAR animations, we therefore drop the orange-
coded questions from Part I that pertain to the learning design component. Similarly, we use
those Part II questions that evaluate the integration of the visualization and assessment
components along with the usage support, and drop the learning design related questions. Given
below are illustrative examples (Figs. 2, 3) of evaluating the quality of the OSCAR animation
'Structure of DNA' using LOBE. These examples also demonstrate the scoring mechanism
outlined in Sec. 4.3.
Figure 2: Illustrative example of judging the evaluation score for the criterion Learning objective
validity (score = 3, as all required components of the learning objective are present)
Figure 3: Illustrative example of judging the evaluation score for the criterion Assessment
feedback quality (score = 1, as the feedback merely indicates whether the response is correct or
incorrect)
6. CONCLUSION
Learning Object Evaluation (LOBE) is grounded in constructivist teaching-learning and evaluates
the effectiveness of a learning object as a teacher-led resource. Its theoretical underpinnings are
Constructive alignment (Biggs, 1996), Meaningful Learning with ICT (Howland et al., 2012) and
the Technological Pedagogical Content Knowledge (TPACK) framework (Koehler & Mishra,
2009). LOBE is a set of three instruments that evaluate the teaching-learning potential of the
individual components and the integrated learning object, as well as the support provided to
teachers for its effective use. Each instrument contains a list of criteria operationalized into
questions for the evaluator to respond to. A subset of these questions is adapted from existing
learning object evaluation instruments; the majority of the questions were framed by us. A pilot
test of face validity and construct validity was done for all parts of LOBE with educational
technology researchers.
The target users of the instrument are reviewers who evaluate learning objects created by
e-learning companies. The feedback generated by LOBE Part I provides learning object creators
and e-learning companies with a quality assessment of the individual components of the learning
object, along the six constructs of Content, Pedagogy, Technology, Pedagogical content,
Technological content and Technological pedagogy. The feedback generated by LOBE Part II
provides learning object creators with an assessment of the integration design of the different
components; it also evaluates the quality of support provided for teacher use of the learning
object. Part III is a six-item questionnaire that the learning object creator can use when doing user
studies to improve the quality of their learning object.
6.1. Limitations:
The LOBE instrument, in its current version, has the following limitations:
● LOBE leaves the evaluation scale to the judgement of the individual evaluator; it does not
describe in detail what a particular score, for example 1, means for a given criterion.
● Pilot validity testing of the LOBE instruments was done with eight evaluators who are
educational technology researchers. This testing needs to be scaled up to establish the
robustness of LOBE.
6.2. Future Work:
As part of future work, we need to design a descriptive scale for scoring each criterion. We plan
to convert LOBE ver. 1 into an analytical rubric. The feedback generated from the rubric will
provide learning object creators with concrete guidance on what modifications have to be made
to improve the quality of their product. We believe LOBE can be used by learning object
creators/e-learning companies for formative evaluation during the production process, but this
needs to be tested. We also plan to conduct reliability and validity testing of the four-point
analytical rubric, to be designed, with potential reviewers, and a usefulness survey of LOBE with
e-learning companies that create learning objects.
7. REFERENCES
Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347-364.
CEMCA (2009). Quality assurance for multimedia learning materials. Retrieved July 10, 2010, from cemca.org/finalQAMLM.pdf
Chyung, Y. (2007). Learning object-based e-learning: content design, methods, and tools. Retrieved April 20, 2010.
Hadjerrouit, S. (2010). A conceptual framework for using and evaluating web-based learning resources in school education.
Journal of Information Technology Education, 9, 53-79.
Hoddinott, J. (2000). Biggs' constructive alignment: evaluation of a pedagogical model applied to a web course. In Proceedings
of ED-MEDIA 2000, World Conference on Educational Multimedia, Hypermedia & Telecommunications, Montreal (pp. 1631-1632).
Hodgins, W. (1994). Learning architectures, APIs, and learning objects. CedMA Working Group.
Howland, J. L., Jonassen, D., & Marra, R. M. (2012). Meaningful learning with technology (4th ed.). Boston, MA: Allyn & Bacon.
Kay, R., & Knaack, L. (2009). Assessing learning, quality, and engagement in learning objects: The Learning Object Evaluation
Scale for Students (LOES-S). Educational Technology Research and Development, 57(2), 147-168.
Koehler, M. J., & Mishra, P. (2009). What is technological pedagogical content knowledge? Contemporary Issues in Technology
and Teacher Education, 9(1), 60-70.
Leacock, T. L., & Nesbit, J. C. (2007). A framework for evaluating the quality of multimedia learning resources. Educational
Technology & Society, 10(2), 44-59.
Morris, M. M. (2008). Evaluating university teaching and learning in an outcome-based model: replanting Bloom. Doctoral
dissertation, University of Wollongong.
Nokelainen, P. (2006). An empirical assessment of pedagogical usability criteria for digital learning material with elementary
school students. Educational Technology & Society, 9(2), 178-197.
Sanz-Rodriguez, J., Dodero, J., & Sánchez-Alonso, S. (2010). Ranking learning objects through integration of different quality
indicators. IEEE Transactions on Learning Technologies, 3(4), 358-363. doi:10.1109/TLT.2010.23
Schmidt, D., Baran, E., Thompson, A., Koehler, M., Mishra, P., & Shin, T. (2009). Examining pre-service teachers' development
of technological pedagogical content knowledge in an introductory instructional technology course. In C. Crawford et al. (Eds.),
Proceedings of Society for Information Technology and Teacher Education International Conference (pp. 4145-4151). Chesapeake,
VA: AACE.
Weiss, R. E., Knowlton, D. S., & Morrison, G. R. (2002). Principles for using animation in computer-based instruction:
theoretical heuristics for effective design. Computers in Human Behavior, 18(4), 465-477.
Annexure A: Learning Object Evaluation Instruments
Annexure A (i): LOBE Part I – Evaluation of individual learning object components
This instrument evaluates the teaching-learning effectiveness of the individual components of a
learning object. Here, the term 'learning object' refers to a digital education resource that contains
a visualization component (video/animation/simulation) at its core. It may include associated
components like assessment questions and learning designs based on the visualization. How to
customize LOBE and how to score learning objects with it are described in Sec. 4.2 and Sec. 4.3
respectively. For each question, tick (√) the appropriate score: Missing/Incorrect (0),
Inadequate (1), Acceptable (2) or Target (3).
Key:
● Constructs are represented in blue bands
● Questions evaluating the visualization component are in purple font
● Questions evaluating the assessment component are in grey font
● Questions evaluating the learning design component are in orange font
● Starred criteria = mandatory criteria, i.e. criteria which must be evaluated for learning
object quality
Content (C) Level

Content Accuracy*
1. Is the content accurate?

Content Scope*
2. Is the scope/coverage of the content as per grade/curriculum specification?

Language comprehensibility
3. Is the language used for content presentation easy to understand?
4. Is the language used in the assessment questions easy to understand?
5. Is the language used in the learning designs easy to understand?

Accommodates socio-cultural differences in content presentation
6. Does the content presentation adequately represent diverse gender/race/socio-economic
class/caste?

Content updated to reflect recent advances in the field
7. Is the content up-to-date w.r.t. recent advances in the field?
Pedagogical (P) Level
Criteria Operationalizing Question Missing/
Incorrect
(0)
Inadequate
(1)
Acceptable
(2)
Target
(3)
Learning objective explicitness | 8. Are the learning objectives explicitly stated?
Learning objective validity | 9. Are the learning objectives written correctly?
Learning objectives for constructive learning* | 10. Do the learning objectives target students' meaning-making w.r.t. the content rather than simple transmission of information?
Visualization support for constructive learning* | 11. Does the visualization promote students' meaning-making w.r.t. the content rather than simply transmitting information?
Assessment questions for constructive learning* | 12. Do the assessment questions require students to make meaning w.r.t. the content?
Learning design for constructive learning* | 13. Does the learning design require students to make meaning w.r.t. the content rather than simply receive transmitted information?
Visualization feedback quality | 14. Is the student provided with corrective feedback when asked to do an activity within the visualization?
Assessment feedback quality | 15. Is the student provided with corrective feedback when asked to answer assessment questions?
Assessment question framing* | 16. Are the assessment questions unambiguous?
Time estimate for assessment question response | 17. Is the estimated time required to answer the assessment questions, where applicable, adequate?
Prior Knowledge | 18. Have the pre-requisites of the content been stated? 19. Does the visualization build on prior concepts?
Technological (T) Level
Criteria | Operationalizing Question | Missing/Incorrect (0) | Inadequate (1) | Acceptable (2) | Target (3)
Proximity Principle | 20. Are related visual elements grouped together on-screen (e.g., input parameters/output parameters)?
Visibility Principle | 21. Are all currently irrelevant elements dimmed/hidden on-screen, so that users can see but not access them?
User Action Feedback Principle | 22. Does the visualization provide appropriate visual (like a red cross or a green tick) or textual feedback or response (like a popup) to user action?
Consistency Principle | 23. Are the panels/buttons in similar positions throughout the content? 24. Do the panels/buttons have similar behavior throughout the visualization?
Affordance Principle | 25. Can you identify clickable and non-clickable objects in this content?
Mapping Principle | 26. Is the association of different interactions with the user (e.g., same color/shape association) carried forward within and across modules?
User Interface usability* | 27. Is the use of fonts and colors appropriate?
Ease of use* | 28. Can the content be viewed with minimal scrolling of the screen?
Reusability | 29. Can the visualization be reused across modules/curricula/other learning environments like lab, self-study, etc.?
Accommodates differently enabled students | 30. Is the visualization usable by differently abled students?
Note: Respond to the following questions only if you are creating content for public use.
Metadata retrieval | a) Does the visualization conform to the IMS Global Learning Consortium's Content Packaging Specification 3, SCORM, or Tin Can? b) Has the educational level been clearly identified in the metadata record? c) Have the technical requirements for the visualization (browser compatibility/Java plugin or Flash specification) been provided? d) Do the learning designs conform to the IMS LD specification?
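Criterion (b) above can be spot-checked mechanically. The sketch below illustrates one way an evaluator might look for educational-level metadata in a SCORM/IMS-style imsmanifest.xml; the default file name and the tag names searched for are assumptions about a typical LOM-tagged package, not something LOBE specifies.

```python
# Illustration only: one way to check whether an educational level is
# declared in a SCORM/IMS-style manifest. The file name and the LOM-style
# tag names below are assumptions about a typical package, not LOBE
# requirements.
import xml.etree.ElementTree as ET

def find_educational_metadata(manifest_path="imsmanifest.xml"):
    """Collect text from LOM-style <educational>/<context>/<typicalagerange>
    elements, matching tags regardless of XML namespace."""
    tree = ET.parse(manifest_path)
    hits = []
    for elem in tree.iter():
        tag = elem.tag.rsplit("}", 1)[-1]  # strip '{namespace}' prefix if present
        if tag.lower() in ("educational", "context", "typicalagerange"):
            text = "".join(elem.itertext()).strip()
            if text:
                hits.append((tag, text))
    return hits

if __name__ == "__main__":
    try:
        found = find_educational_metadata()
        print("Educational-level metadata:", found if found else "none found")
    except FileNotFoundError:
        print("No imsmanifest.xml found to inspect.")
```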
Pedagogical Content (PC) Level
Criteria | Operationalizing Question | Missing/Incorrect (0) | Inadequate (1) | Acceptable (2) | Target (3)
Misconception* | 29. Does the visualization cover student misconceptions in the content?
Connection to real life* | 30. Does the visualization provide connection to real-life application of the content? 31. Does the learning design provide connection to real-life application of the content? 32. Do the assessment questions include connection to real-life application of the content?
Content sequencing* | 33. Does the visualization content follow a sequence based on an existing textbook/curriculum/educational theory?
Transfer potential* | 34. Does the visualization provide multiple examples of the same concept (e.g., different types of signal transformation)? 35. Does the visualization provide multiple examples of the same scenario (e.g., through variable manipulation)?
Technological Content (TC) Level
Criteria | Operationalizing Question | Missing/Incorrect (0) | Inadequate (1) | Acceptable (2) | Target (3)
Choice of media based on content type* | 36. Is the type of visualization chosen (static, video, animation, simulation) mapped to the content type (declarative, procedural, situated)? (Refer Annexure B: Glossary for a detailed explanation.)
Need of visualization* | 37. Does the visualization exploit the instructional power of the visualization medium? (Refer Annexure B: Glossary for a detailed explanation of the Weiss graph.)
Support for group activity* | 38. Does the visualization provide support for group activity (like a multi-touch screen or recording team responses)?
Support for constructive learning | 39. Does the visualization provide support for constructive learning (like slider bars/drag-drop/drop-down)?
Coherence Principle | 40. Does the visualization exclude extraneous material (e.g., 3D graphics of a heart may be unnecessary to teach the circulatory system compared to 2D graphics)?
Technological Pedagogical (TP) Level
Criteria | Operationalizing Question | Missing/Incorrect (0) | Inadequate (1) | Acceptable (2) | Target (3)
Signaling Principle | 41. Does the visualization include cues that highlight the main ideas on-screen?
Redundancy Principle | 42. Does the visualization use graphics and narration instead of animation, narration, and on-screen text?
Contiguity Principle | 43. Are corresponding printed words and graphics placed near rather than far from each other on-screen?
Segmenting Principle | 44. Is the narrated visualization presented in segments rather than as one continuous unit?
Modality Principle | 45. Does the visualization use graphics and narration rather than graphics and on-screen text?
Personalization Principle | 46. Does the visualization use a standard-accented human voice speaking in conversational style rather than a machine voice or a foreign-accented human voice speaking in formal style?
User Control of pace* | 47. Does the visualization help students learn at their own pace (presence of play/pause/back buttons, speed control, access to previous segments)?
Navigation | 48. Does the visualization offer optional navigation routes for students to progress?
Annexure A (ii): LOBE Part II – Evaluation of integrated learning object and
usage support
This instrument evaluates the teaching-learning effectiveness of the integrated learning object and the quality of the support provided for its effective use by teachers. Here, the term ‘learning object’ refers to a digital education resource that contains a visualization component (video/animation/simulation) at its core. It may also include assessment questions and learning designs based on the visualization. How to customize and score learning objects using LOBE is described in Sec. 4.2 and Sec. 4.3 respectively. Tick (√) the appropriate score from (0) to (3) for each criterion.
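Sec. 4.2 (not reproduced here) describes how LOBE is customized to a given learning object. Purely as an illustration of that idea, the sketch below filters an instrument's questions by the components actually present in the learning object under review; the component tags and question texts are hypothetical stand-ins, not LOBE's actual data.

```python
# Illustration only of the customization idea in Sec. 4.2: drop questions
# that evaluate components absent from the learning object under review.
# The component tags and question texts are hypothetical stand-ins.
QUESTIONS = [
    ("visualization",   "Is the scope/coverage of content in assessment "
                        "questions as per the content presented in visualization?"),
    ("assessment",      "Are teachers provided with multiple assessment strategies?"),
    ("learning_design", "Do learning designs expose teachers to multiple "
                        "teaching strategies?"),
]

def customize(questions, components_present):
    """Keep only the questions whose target component exists in this LO."""
    return [text for component, text in questions if component in components_present]

# Example: a learning object with a visualization and assessment questions,
# but no learning designs.
for question in customize(QUESTIONS, {"visualization", "assessment"}):
    print(question)
```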
Key:
● Questions evaluating the integrated learning object are in grey font
● Questions evaluating the learning design component are in green font
● Starred criteria (*) are mandatory, i.e., criteria that must be included in any evaluation of learning objects
Criteria | Operationalizing Question | Missing/Incorrect (0) | Inadequate (1) | Acceptable (2) | Target (3)
Formative Assessment alignment | 1. Is the scope/coverage of content in the assessment questions as per the content presented in the visualization? 2. (Evaluate for each LO) Are the assessment questions aligned to the learning objective?
Summative Assessment alignment | 3. Is the scope/coverage of content in the assessment questions as per the content presented in the visualization? 4. (Evaluate for each LO) Are the assessment questions aligned to the learning objective?
Learning Design alignment | 5. (Evaluate for each LO) Are the learning designs aligned to the learning objective? 6. (Evaluate for each LO) Are the learning designs aligned to the learning objective?
Assessment strategy* | 7. Are teachers provided with multiple assessment strategies (formative/summative/diagnostic) to use with the learning object?
Teaching strategy* | 8. Do the learning designs expose teachers to multiple teaching strategies (flash card/role play/fishbowl/jigsaw) to teach using the learning object?
Learning activity implementation* | 9. Are teachers provided with guidance on classroom implementation of learning activities using the learning object? 10. Does the learning object provide support to teachers to adapt teaching strategies for students of different achievement levels?
Ease of use* | 11. Can the user find the different components (e.g., assessment questions/learning designs/visualization) easily? 12. Are clear instructions given to teachers on how to use the different components of the learning object together?
Selection of teaching strategies | 13. Does the learning object provide guidance to teachers on selecting student-centered teaching strategies appropriate to their learning objective?
Value addition* | 14. Does the visualization provide learning benefits beyond the textbook?
Compatibility with other resources like the textbook | 15. Can the visualization be used in combination with the textbook?
Learning gap diagnosis* | 16. Does the learning object provide opportunities to diagnose students' learning gaps and fix them?
Active Learning* | 17. Does the learning design actively engage the students?
Cooperative* | 18. Does the learning design provide support for group activity using the visualization?
Integrated use of components* | 19. Does the learning object guide the teacher on teaching the content in a student-centered way using its different components?
Annexure A (iii): LOBE Part III – Evaluation of learning object through user testing
This instrument evaluates the teaching-learning effectiveness of a learning object through testing with its target users. Here, the term ‘learning object’ refers to a digital education resource that contains a visualization component (video/animation/simulation) at its core. It may also include assessment questions and learning designs based on the visualization. How to customize and score learning objects using LOBE is described in Sec. 4.2 and Sec. 4.3 respectively. Tick (√) the appropriate score from (0) to (3) for each criterion.
Criteria | Operationalizing Question | Missing/Incorrect (0) | Inadequate (1) | Acceptable (2) | Target (3)
Language comprehensibility | 1) Is the language used for content presentation easy to understand for the target student population? 2) Is the language used in the assessment questions easy to understand for the target student population? 3) Is the language used in the learning designs easy to understand for the target teacher population?
Cognitive Loading* | 4) Do students have to remember only small sections of the content at the same time? 5) How much mental effort do you think students have to put in to learn this content from the visualization? 6) How difficult are the learning object activities for the targeted student population?
Annexure B: Glossary of Educational Technology terms used in LOBE
This annexure explains certain educational technology principles used in LOBE, for the benefit of evaluators.
(i) [Refer Part I Qs. 37] The Weiss graph (Weiss et al., 2002) (Fig. 4) indicates when dynamic visualizations should be used for effective teaching and learning. It states that a visualization is to be used only when an invisible phenomenon/process needs to be made visible, or when a system involving displacement w.r.t. time or space is to be shown.
Figure 4: Weiss Graph (Weiss et al., 2002)
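The decision rule just stated can be expressed compactly. The fragment below merely restates the two conditions from Weiss et al. (2002) as a predicate; the function and parameter names are our own, chosen for illustration.

```python
# This fragment only encodes the rule above as a predicate; the function
# and parameter names are ours, for illustration.
def dynamic_visualization_warranted(invisible_phenomenon: bool,
                                    displacement_in_time_or_space: bool) -> bool:
    """True when either of Weiss et al.'s (2002) conditions for using a
    dynamic visualization holds."""
    return invisible_phenomenon or displacement_in_time_or_space

# e.g., current flow in a circuit is invisible, so visualization is warranted
print(dynamic_visualization_warranted(True, False))   # True
# e.g., a static labelled diagram of a leaf involves neither condition
print(dynamic_visualization_warranted(False, False))  # False
```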
(ii) [Refer Part I Qs. 36] The graph (Fig. 5) indicates what type of visualization (video/animation/simulation) is to be used based on the content type (Chyung, 2007); a placeholder encoding of such a mapping is sketched after Fig. 5 below. The content types are:
a) Declarative content = content that deals with knowing what. E.g., what is the formula for calculating the area of a circle?
b) Procedural content = content that deals with knowing how. E.g., how to examine the cell structure of an onion under a microscope.
c) Situated content = content that deals with knowing when and why. E.g., when the radius of the circle is varied, the area of the circle changes accordingly.
Figure 5: Content type vs. Visualization Type graph
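Fig. 5 does not reproduce in this text-only edition, so the actual pairings it shows are not recoverable here. The sketch below is therefore only a placeholder illustration of how an evaluator might encode a content-type-to-visualization mapping when applying Q36; every pairing in it is a hypothetical example, not Fig. 5's real content.

```python
# Placeholder mapping only: Fig. 5's real pairings are not available in
# this transcript, so these suggested media are hypothetical examples of
# how such a mapping could be encoded.
SUGGESTED_MEDIA = {
    "declarative": ["static", "video"],         # knowing what
    "procedural":  ["video", "animation"],      # knowing how
    "situated":    ["animation", "simulation"], # knowing when & why
}

def media_matches_content(content_type, chosen_medium):
    """Check (LOBE Part I, Q36) whether the chosen visualization type fits
    the content type under this placeholder mapping."""
    return chosen_medium in SUGGESTED_MEDIA.get(content_type, [])

print(media_matches_content("situated", "simulation"))    # True
print(media_matches_content("declarative", "simulation")) # False
```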
Annexure C: List of reviewers & Acknowledgement
We acknowledge the constructive reviews received from the following reviewers for construct validity testing of the first version of LOBE:
- Aditi Kothiyal
- Jayakrishnan M.
- Rekha Ramesh
- Shitanshu Mishra
- Sameer Shahasrabudhe
We acknowledge the constructive reviews received from the following reviewers for face validity testing of the first version of LOBE:
- Aditi Kothiyal
- Anurag Deep
- Ashutosh Raina
- Jayakrishnan M.
- Kavya Alse
- Rekha Ramesh
- Prajish Prasad
- Shitanshu Mishra
- Soumya Narayana
- Sameer Shahasrabudhe
We thank our colleagues at IDP-Educational Technology, IIT Bombay for general support.