Web-based Survey and Regression Analysis to Determine USAIM Students’ Online
Compatibility – A Pilot Study
Dr S. Sanyal MBBS, MS (Surgery), ADPHA, MSc (Health Informatics)
Associate Professor, Faculty of Gross Anatomy and Neurosciences University of Seychelles American Institute of Medicine (USAIM), Seychelles
Original research conducted in USAIM
May 2008
Web-based Survey and Analysis of USAIM Students’ Online Compatibility – Pilot Study
COPYRIGHT AND DISCLAIMER
Copyright Notice
Attention is drawn to the fact that copyright of this project rests with its author and USAIM. This copy of
the project has been supplied on condition that anyone who consults it is understood to recognise that its
copyright rests with its author and USAIM, and that no quotation from the project and no information
derived from it may be published without the prior written consent of the author or USAIM.
Restrictions on Use
This project may be made available for consultation within the USAIM Library and may be photocopied or
lent to other libraries solely for the purposes of education, research and consultation.
Disclaimer
The opinions expressed in this work are entirely those of the author except where indicated in the text.
Disclosures and conflicts of interest
The author discloses no incentives, financial or otherwise, and no conflicts of interest during the conduct of
this study and the production of this treatise.
Signature
2008-05-01
******
USAIM Online Survey; Dr S. Sanyal, Assoc. Prof., Faculty of Anatomy & Neurosciences, USAIM, Seychelles May 2008 i
ACKNOWLEDGEMENTS
The author wishes to thank all those who helped in the research project and the production of this treatise,
either directly or indirectly. The first and foremost is Dr Fauzia Alkhairy, MD; President of University of
Seychelles American Institute of Medicine (USAIM), Fort Wayne, Indiana, USA. Without her permission
the whole project would not have taken off in the first place. Next are Mr Tariq Alkhairy, Managing
Director of USAIM, and Dr Rana Shinde, PhD; Dean of USAIM, whose tacit support during the conduct of
the student survey within the USAIM campus was invaluable. The author thanks them all.
Then the author would like to thank the entire student body of USAIM for being such enthusiastic
participants in the Web-based survey. Such was the enthusiasm that many students completed the survey
from home, from their personal Internet connections, due to paucity of time during regular college hours.
Such a response was heart-warming, to say the least.
Next the author would like to acknowledge the tacit cooperation of other staff members and colleagues in
the faculty, notably Dr Sanjay Kulkarni, MD, Department of Microbiology and Immunology, USAIM; he
was a great morale-booster during the process of the survey, by being there when it was needed most. Dr
Justin Gnanou, MD, Department of Biochemistry, USAIM, though he is no longer with us in USAIM,
deserves special thanks for making the SPSS v.10 software package available to the author.
There are two researchers whom the author has never met. They are Franz Faul and Edgar Erdfelder of the
Department of Psychology, University of Bonn, Germany. They deserve thanks in absentia for taking the
pains to make the G*Power power analysis software package available free of charge to researchers all over
the world.
The author also gratefully acknowledges M/s eLearners™Advisor for enabling the use of an adaptation of
their questionnaire for the purpose of this Web-based survey.
Finally, how can the author overlook the silent contribution of his lovely spouse? During the trying months
of the project, she bore with his infrequent phone calls, taciturn monosyllabic responses and pre-occupations
with the project with silent fortitude and patient forbearing, which only the deep unspoken understanding
capabilities of a woman can bring forth.
***********
TABLE OF CONTENTS
TITLE PAGE
COPYRIGHT AND DISCLAIMER ................ Page i
ACKNOWLEDGEMENTS ................ Page ii
ABSTRACT ................ Page iv
CHAPTER 1: PRELIMINARIES AND LITERATURE REVIEW ................ Page 1 – 7
CHAPTER 2: MATERIALS AND METHODS ................ Page 8 – 15
CHAPTER 3: STATISTICAL ANALYSIS AND RESULTS ................ Page 16 – 40
CHAPTER 4: DISCUSSION ................ Page 41 – 57
REFERENCES ................ Page 58 – 61
ABSTRACT

Immediate objective: To identify technical glitches (problems) in the newly-devised Web-based questionnaire and to devise a future-proof system through troubleshooting.

Short term goals: (1) Determine USAIM students’ online learning preparedness against 5 set parameters; (2) Devise a mathematically objective scoring system for each parameter; (3) Determine overall online preparedness (Compatibility Score) of USAIM students; (4) Determine robustness of the LS questionnaire currently being used for the study; (5) Identify relationships between students’ personal and general characteristics vis-à-vis online learning; (6) Suggest improvements to the questionnaire and survey; and (7) Suggest ways of overcoming barriers to online learning.

Long term goals: (a) Use the Compatibility Score as a baseline for future studies in USAIM and elsewhere; and (b) Render online learning and examinations a regular and feasible option for USAIM.

Design: A one-shot, cross-sectional, non-experimental pilot study, designed as a Web-based questionnaire survey of USAIM students.

Setting: The study was conducted within the campus of USAIM.

Participants / Data sources: Thirty-five students from PC-1 to PC-5 were the participants. Their feedback from the questionnaire provided the data for statistical analysis.

Main outcome measures: The following mathematical outcomes were generated: (A) Weighted scores for technology access parameters; (B) Weighted scores for personal parameters; (C) Weighted scores for technical proficiencies; (D) Weighted scores for online LS preferences; (E) Weighted scores for students’ general considerations; (F) Overall Compatibility Score of USAIM students; (G) Correlation, internal consistency and factor analysis scores of online LS questionnaire items; (H) Correlation and regression analysis scores of personal factors vs. general considerations; (I) Predictive model and formula of students’ online learning characteristics; and (J) Power analysis scores vis-à-vis sample size.

Results: Five problems and their tentative solutions were identified. Weighted scores (expressed as percentage of maximum) of measured parameters were: Type of Internet access (63.7%); Primary computer (80%); Motivation (70%); Schedule (58.5%); Hours of online study (57.8%); Technical proficiencies (73.9%); Online LS preferences (64%); Online concerns (64%); Education level (54.3%); Age (73.3%); Overall Compatibility Score of USAIM students (64%). Pearson’s correlation (‘Pro-Yes’ vs. ‘Anti-No’): r = 0.3 (p = 0.48; 2-tailed; N = 8). Reliability coefficients (intra-class, Cronbach α, Guttman, Spearman-Brown): 0.42 to 0.45. PCA factor analysis: Component-1 (‘Anti-onlineness’ factor). Pearson’s correlation (‘Concerns’ vs. ‘Age’): r = -0.963 (p = 0.037; 2-tailed; N = 4). Regression analysis (‘Concerns’ vs. ‘Age’): ‘Concerns’ = 80.261 – 0.898(‘Age’). Regression analysis (‘Concerns’ vs. ‘Motivation’): ‘Concerns’ = 80.261 + 0.638(‘Motivation’). Predictive model: ‘Concerns’ = Constant + [0.638(‘Motivation’)] – [0.898(‘Age’)]. Post hoc power analysis (N = 35): power = 0.43 (1-tailed), 0.30 (2-tailed). Compromise power analysis (N = 35): power = 0.77 (1-tailed), 0.68 (2-tailed). A priori analysis: required N = 102 (1-tailed), 128 (2-tailed).

Conclusions: Glitches in the Web-based questionnaire are attributable to excessive ‘hits’ on the Google site server from a single user-session. Average online readiness and overall online Compatibility of USAIM students are in the ‘Good’ category. The learning style questionnaire needs to be re-structured, and the questionnaire as a whole needs to be rendered more robust from a research perspective. Students’ online concerns are directly proportional to their motivation and inversely proportional to their age. Subject recruitment for a formal study needs to be at least 3.7 times that of this pilot study; this would render the results of a robust statistical analysis more valid. Overall, USAIM students are poised on the threshold of the introduction of online courses and examinations. Once these are introduced, the natural progression of the learning curve would take care of the ongoing hurdles.
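The predictive model and sample-size figures reported above can be illustrated numerically. A minimal sketch (coefficients taken from the Results; the input scaling and the example values are assumptions, since the abstract does not restate the variables’ units):

```python
def predicted_concerns(motivation: float, age: float) -> float:
    """Predictive model from the Results:
    'Concerns' = 80.261 + 0.638*('Motivation') - 0.898*('Age')."""
    return 80.261 + 0.638 * motivation - 0.898 * age

# Hypothetical student with Motivation = 50 and Age = 25:
score = predicted_concerns(50, 25)  # 80.261 + 31.9 - 22.45 = 89.711

# A priori analysis: required N = 128 (2-tailed) vs. the pilot's N = 35,
# i.e. roughly 3.7 times more participants, as the Conclusions note.
ratio = 128 / 35  # ~3.66
```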
***********
CHAPTER-1: PRELIMINARIES AND LITERATURE REVIEW
“The voyage of discovery is not to seek for new landscape, but to install a pair of new eyes.” ~~Anon~~
1. Introduction

Implementing online technologies to impart learning constitutes the next big wave to hit the
educational arena after chalk and blackboard. This is not surprising, considering the ubiquity of
computers and the Internet, and the inexorable progression of related technologies.[1] The Sloan Consortium (Sloan-
C™) defines Online Courses as those where 80% or more of the course content is delivered online. There
are usually no face-to-face (F-2-F) interactions between faculty and students. Other types of imparting
education, based on a decreasing proportion of course content delivered online, are: Hybrid / blended Course
(30-79% of course content delivered online, with both online discussions and face-to-face meetings);
Web-facilitated Course (1-29% of course content (assignments, syllabus etc.) delivered through a course management
system (CMS) or Web pages, using Web technologies to facilitate an essentially F-2-F course delivery
program); and Traditional (no course content delivered online; delivery only orally or in writing).[2]
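The Sloan-C taxonomy above amounts to a simple threshold rule on the proportion of course content delivered online. As a toy sketch (the function name is this sketch’s own, not Sloan-C terminology):

```python
def sloan_course_type(pct_online: float) -> str:
    """Classify a course by the % of content delivered online,
    per the Sloan-C thresholds quoted in the text."""
    if pct_online >= 80:
        return "Online"
    if pct_online >= 30:
        return "Hybrid / blended"
    if pct_online >= 1:
        return "Web-facilitated"
    return "Traditional"
```

For example, a course delivering 85% of its content online falls in the "Online" category, while one delivering 50% is "Hybrid / blended".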
2. Models of online education

A radically different classification identifies 5 new ‘Models’ for online learning, aimed at improving
learning at affordable costs. In the Supplemental Model the basic structure of traditional course (number of
class meetings etc) is retained; only some technology-based out-of-class activities are added to encourage
greater student engagement with course content. In the Replacement Model the key characteristic is a
reduction in class meeting time, replacing face-to-face time with online, interactive learning activities by
students. The Emporium Model is based on the premise that a student learns best when he wants to learn
rather than when the instructor wants to teach. This model therefore eliminates all class meetings and
replaces them with a learning resource center featuring online materials and on-demand personalized
assistance. In the Fully Online Model, the single-handed, monolithic, repetitive, labor-intensive task of a
professor in a traditional course is transferred to the online scenario. This model assumes that the
instructor must be responsible for all interactions, personally answering every inquiry, comment or
discussion. However, newer software systems have been developed (viz. Academic Systems software) that
present the course content so effectively that instructors do not have to spend any time delivering content.
These 4 course-redesign models treat all students the same; they therefore represent a ‘one-size-fits-all’
approach. In contrast, the Buffet Model offers students an assortment of interchangeable paths that match
their individual learning styles (LS), abilities and tastes at every stage of the course. Even the best ‘fixed-
menu’ of teaching would fail for some students. In contrast, the ‘buffet’ strategy suggests a large variety of
offerings that can be customized to fit the needs of the individual learner.[3]
3. Growth of online education
Over the last five years there has been a progressive increase in online courses in universities worldwide,
and in the USA in particular. This increase spans the entire educational continuum: in the numbers of
courses being offered, number of colleges and higher education schools offering them, and numbers of
students enrolling for online courses, both in absolute terms as well as in proportion to those enrolling for
traditional courses. From 1.6 million online students in 1998 in the US, the number had escalated to 3.48
million by 2006. [2,4] Among all colleges offering distance learning, the proportion using the Internet had
grown from 22% in 1995 to 60% in 1998.[4] Overall, students in USA who were taking at least one online
course in 2006 represented 20% of total enrollments in higher education. This represented a jump of nearly
10% over 2005.[2] It is projected that this growth will continue, albeit at a slower rate, into the future.[1,2]

4. Advantages of online education

More and more universities and colleges worldwide are jumping on the online bandwagon. The reasons
cited by Sloan-C™ for adopting online courses, in order of importance, are to: increase student access, attract
students from outside traditional service areas, grow continuing and / or professional education, increase rate
of degree completion, enhance value of college/university brand, provide pedagogic improvements, improve
student retention, increase the diversity of the student body, optimize physical plant utilization, improve
enrollment management responsiveness, increase strategic partnerships with other institutions, reduce /
contain costs, and enhance alumni and donor outreach. Therefore these are the purported advantages of
online education.[2] Improving student retention is a contentious issue. Statistics of Foothill College, Los
Altos, CA showed that students in on-line computer classes had a drop rate of 30% compared to a drop rate
of 10-15% in on-ground classes.[5] On the other hand, the University of North Carolina (UNC) School of
Public Health has cited 10 essentially different reasons why online learning excels over traditional
education; Student-centred learning; Writing intensity; Highly interactive discussions; Geared to lifelong
learning; Enriched course materials; On-demand interaction and support services; Immediate feedback;
Flexibility; Intimate learner community; and Faculty development and rejuvenation.[6] Here again there is an
apparent contradiction. Low acceptance of online instruction by faculty has been cited as one of the barriers
to online education.[2]
5. The online framework

In terms of their engagement in online courses and their attitudes towards them, institutions have been classified
into 5 categories by Sloan-C™. These are: (A) Fully engaged: those that have online courses that they have
fully incorporated into their formal strategic long term plans; (B) Engaged: Those that have online course(s)
that they believe are strategic to their long term plans but have not yet incorporated them into the formal
long term strategy; (C) Not yet engaged: Those that do not have any online courses yet but believe they are
critical to their long term strategy, and are therefore expected to implement some form of online courses in
the future; (D) Non-strategic online: Those that have some online course(s) but do not believe that it is
important for their long term strategy; and (E) Not interested: Those that do not have any online courses and
do not believe that it is important for their long term strategy.[2]
6. USAIM in the online framework

The University of Seychelles American Institute of Medicine (USAIM) is a pre-clinical medical school in
Seychelles that was established in 2001. Commensurate with its progressive-minded philosophy it believes
in adopting the latest technologies in imparting education. In collaboration with another organization,
Boolean Education from Mumbai, India, USAIM introduced its online M.Ch (Orthopedics) Certification
program as part of its AACME-accredited (American Academy of Continuing Medical Education) CME
activity (Figure-1). This is a 6-month course, 5 of which are entirely online, covering one module every
month; and the sixth month includes a 3-day F-2-F Instructional Course Lecture Series (ICLS).[7] Thus, as
per the Sloan-C™ definition (and according to its self-declaration) it is a Hybrid / blended course. But since
more than 80% of the M.Ch Orthopedics certification course content is delivered online, it is closer to the
definition of a true Online Course.[2] Apart from all examinations of M.Ch certification program, which are
fully online, USAIM is also on the verge of introducing fully online and automated examinations for its
routine Pre-clinical semesters (of which the author is an Associate Professor) on a regular basis. Therefore,
since USAIM already has an online course and it has fully incorporated online activities into its formal
strategic long term plans, it conforms to Sloan-C™ categorization of a ‘Fully Engaged’ institution.[2]
Figure-1: Screenshot from the Website showing the online M.Ch Certification course offered by USAIM, Seychelles and Boolean Education, India

7. Barriers to online learning

In spite of all the purported benefits and advantages of online courses, they are not without their barriers.
Some of the identified hurdles to widespread adoption of online learning are: students need more
self-imposed discipline in online courses; variable / low acceptance of online instruction by faculty; lower
student retention rates in online courses; high costs of developing online courses; high costs of delivering online
courses; and lack of acceptance of online degrees by employers. These are the identified barriers from the
institutional perspective. Not all of these barriers are given identical weightings by all institutions; in fact,
some institutions do not consider some of them barriers at all.[2]

Of arguably greater importance are those potential barriers that can be identified from the students’
perspective. Eight barrier factors were determined by a factor-analytic study in 2005: (a) administrative
issues, (b) social interaction, (c) academic skills, (d) technical skills, (e) learner motivation, (f) time and
support for studies, (g) cost and access to the Internet, and (h) technical problems. Independent variables that
affected student ratings of these barrier factors included: gender, age, ethnicity, type of learning institution,
self-rating of online learning skills, effectiveness of learning online, online learning enjoyment, prejudicial
treatment in traditional classes, and the number of online courses completed.[8]

8. Background of the present pilot study

The findings from the aforementioned 2005 study provided the impetus to determine how many of
those factors applied to USAIM students, and in what way, on the theoretical assumption that they were all
to be enrolled for online courses in the near future. A more focussed search of the Web-based literature,
based on the factors identified by the 2005 study, corroborated that technical skills, learner motivation, and
access to Internet (technologies) were of special significance from the online learning perspective.[5,9,10]
Another factor that had not been addressed in the abovementioned study pertained to students’ learning style
(LS) preferences; more specifically their online LS preferences. And finally, prior academic skills and age
also have an indirect role in students’ online learning endeavours.[8-10]

Technology access: Having access to technology (viz. computer and Internet) is to the online learner what
pen and paper is to the traditional student. For the former, computer and Internet access are the primary
instruments of learning. Having to use a computer with inadequate computing power or an erratic / slow
Internet connection can impede the online learner significantly. Consequently the capabilities of the
technology used by the online learner, and access to the same, play important roles in the overall success in
online learning.[10]
Self-motivation: Implicit within the structure of most traditional forms of learning is a certain level of
external motivation. Online learning is more loosely structured and relies more heavily on internal
motivation of the learner. Online learners must schedule time for learning on their own and then stick to that
schedule, not because of external impositions by others but because they must meet their self-imposed personal goals. To
that extent they must be sufficiently internally motivated and must be able to put in sufficient numbers of
hours of self-study without exhortation from anyone.[5,8-10]
Techno-skills: Technical skills are also a key factor in the success of an online learner. This does not imply
advanced computer skills. However, a minimum level of technical ability is essential, which can make all
the difference between success and failure. The determining factor in what constitutes this ‘minimum’ level
is simply having enough technical knowledge to ensure that the technology does not become a barrier in the
learning process. If online students have to submit a paper electronically, they should spend their
time writing the paper, not figuring out how to attach the file to an outgoing e-mail.[10,11]
Learning styles: Research has revealed that everyone has different preferences in how they learn. These
preferences are called ‘learning styles’ (LS). There is considerable confusion about the exact definition of
LS.[12] In one review there were 7 definitions / descriptions of LS. The most ‘accurate’ definition appears to
be that of Keefe, who described LS as characteristic cognitive, affective, and psychological behaviours that
serve as relatively stable indicators of how learners perceive, interact with and respond to the learning
environment.[13] Grasha defined LS as personal qualities that influence a student’s ability to acquire
information, to interact with peers and the teacher, and otherwise participate in learning experiences.[14]
Theis described LS as a set of biological and developmental characteristics that make identical instruction
for learners either effective or ineffective.[15] LS pertains to a preference of the student. Some students find
that they have a dominant LS, which they utilise most frequently (or prefer to do so), and use other styles
less frequently. Other students find that they use different styles in different circumstances. Everyone has a
mixture of LS. There is no right mixture; nor are they fixed.[16] Some empirical evidence suggests that
learners also have different preferences when it comes to online learning. Some prefer to learn through
lectures, while others find that project-based learning better suits their abilities and interests.[14,17] Knowing one’s
‘online LS’ can be important in ensuring that one selects a style of online learning delivery (e.g.
synchronous or asynchronous) in which one would excel.[10,18]
Considering all these parameters, the present study was narrowed down to focus on 5 factors. These
pertained to students’ access to technology, personal issues, technical competencies, online LS preferences
and some general aspects of students. Further perusal of the literature revealed several resources that had
considered some or all of these factors in determining students’ readiness for online learning.[10,19-24]
Therefore these considerations formed the basis for the present study.

It was decided to incorporate the parameters identified in the preceding paragraphs into a newly-devised
Web-based questionnaire system. The details of creation of a Web-based questionnaire system are described
in the next chapter. It was decided to pilot this new Web-based system among the students of USAIM at the
time of conducting the survey. Therefore, the present study was actually based on two background issues;
the issue of online compatibility of USAIM students, and the piloting aspect of a newly-introduced Web-
based questionnaire system for the study. The two were to go hand in hand during the course of the study.

9. Selection criteria for questionnaire – based on literature review

The survey instrument (questionnaire) and the questions themselves had to conform to the requirements of
the study that had been planned, apart from fulfilling the precepts of a good questionnaire (described in
Chapter-2). Therefore a set of parameters and a scoring system was applied to the various survey
instruments that were available. The parameters were: (a) Number of question items: Between 30 and 40 was
considered ideal (reason described in Chapter-2 and Chapter-4); therefore it scored 1 point, anything less
scored 0; (b) Ease of administration: This depended on the format and presentation of the questionnaire,
whether script-based or plain .doc format; (c) Easy questions referred to the wording, sentence construction,
user-friendliness and ease of understanding; it was graded from 1+ to 5+; (d) Mixed response items referred
to bimodal (Yes/No), scaled (Very/Somewhat/Not) or multi-option question items. Question items should
not ideally be mixed too much (elaborated under ‘Discussion’); (e) Dimensions referred to all the groups
described earlier; whether they were deficient or whether there were any extra groups in the questionnaire.
Each dimension scored 1 point; absence scored negative point(s); (f) Scoring method and interpretation of
results: If it was automatically performed by the parent organization, it was better than manual scoring; (g)
Validity: This referred to the accuracy (how accurately it measures what it is purported to measure) or
otherwise of the items in the questionnaire; ideally questionnaire items should be independently peer-
validated. Based on the overall score, the questionnaire from eLearners™Advisor[10] was selected for this
study because it scored the maximum points (Table-1). The questionnaire was not being piloted; rather the
Web-based system that was being introduced for the first time was being piloted.

                                 Pace          AASU                OLE       eL        DVC       ION       DuPa
Number of Qs items               31 (1)        35 (1)              23 (0)    40 (1)    32 (1)    12 (0)    10 (0)
Ease of administration           Yes (1)       No (1)              Yes (1)   Yes (1)   No (0)    Yes (1)   Yes (1)
Easy Qs items                    5+            3+                  2+        4+        1+        4+        4+
Mixed response options           No (1)        No (1)              No (1)    Yes (0)   No (1)    No (1)    No (1)
No. of dimensions                4             3                   4         5         0         0         0
Extra dimension(s)               Time Mx (1)   None (0)            None (0)  None (0)  ?         ?         ?
Deficient dimension(s)           LS, Gen (-2)  Personal, Gen (-2)  Gen (-1)  None (0)  ?         ?         ?
Scoring method                   Self (0)      Parent org (1)      Parent org (1)  Parent org (1)  Parent org (1)  Parent org (1)  Self (0)
Automatic result interpretation  No (0)        Yes (1)             Yes (1)   Yes (1)   Yes (1)   Yes (1)   No (0)
Instrument self-validated        Yes (1)       Yes (1)             Yes (1)   Yes (1)   Yes (1)   Yes (1)   Yes (1)
Instrument peer-validated        No (0)        No (0)              No (0)    No (0)    No (0)    No (0)    No (0)
Total score                      12            10                  10        14        6         9         7

Table-1: Score for each parameter is given in parentheses; see text for details. Pace[24]; AASU[19]: Armstrong Atlantic State University; OLE[20]: OnlineLearning.net™; eL[10]: eLearners™Advisor (acknowledged in the ‘Acknowledgements’ section); DVC[21]: Diablo Valley College; ION[22]: Illinois Online Network, University of Illinois; DuPa[23]: College of DuPage

10. Objective and goals of study

The immediate objective of the pilot study was to identify technical glitches (problems) in the newly-
devised Web-based questionnaire survey system and try to devise a future-proof system through
troubleshooting. During the course of this pilot, the following additional goals were fulfilled. These
pertained to the online compatibility issue alluded to earlier.
1. Determine USAIM students’ online learning preparedness against 5 set parameters
2. Devise a mathematically objective scoring system for each parameter
3. Determine overall online preparedness (Compatibility Score) of USAIM students
4. Determine robustness of LS questionnaire currently being used for the study
5. Identify relationships between students’ personal / general characteristics vis-à-vis online learning
6. Suggest improvements to questionnaire and survey
7. Suggest ways of overcoming barriers to online learning
8. Use Compatibility Score as baseline for future studies in USAIM and elsewhere (long term goal)
9. Render online learning and examinations a regular and feasible option for USAIM (long term goal)

11. Study outcomes measured

The following mathematical outcomes were generated:
A. Weighted scores for technology access parameters
B. Weighted scores for personal parameters
C. Weighted scores for technical proficiencies
D. Weighted scores for online LS preferences
E. Weighted scores for students’ general considerations
F. Overall Compatibility Score of USAIM students
G. Correlation, internal consistency and factor analysis scores of online LS questionnaire items
H. Correlation and regression analysis scores of personal factors vs. general considerations
I. Predictive model and formula of students’ online learning behaviour characteristics
J. Power analysis scores vis-à-vis sample size

12. Summary and usefulness of research

This preliminary chapter provided the introduction, background information and current status of online
education, provided the background of USAIM, its role in online education, the basis of this study, the
rationale behind questionnaire selection, and the objectives, goals and expected outcome measures from this
study. This research would be useful from a number of perspectives. It would tell us how the innovatively-
designed Web-based survey system performs. It would provide baseline data about USAIM students’ online
learning potential; the so-called ‘Compatibility Score’. It would identify deficiencies or lacunae in students
that need to be addressed. The Web-based nature of the survey itself would inform us about
students’ online potentialities. If they can successfully undertake the online survey, it would automatically
mean they possess basic online skills. Finally it would pave the way for implementation of future online
courses and examinations in USAIM. The next chapter would describe the methodology involved in creating
the Web-based questionnaire and its pilot administration to students of USAIM during the course of survey.
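As a cross-check of the instrument-selection scheme in Section 9, the Table-1 totals can be re-computed by summing per-parameter points. A sketch (two instruments shown; the dict layout, and the assumption that each ‘+’ grade and each dimension contributes one point, are this sketch’s own, chosen because they reproduce the published totals):

```python
# Per-parameter points as read from Table-1 in Chapter 1:
# [Qs items, ease, easy Qs (+grades), mixed, dimensions, extra,
#  deficient, scoring, auto-interpretation, self-validated, peer-validated]
table1_points = {
    "Pace": [1, 1, 5, 1, 4, 1, -2, 0, 0, 1, 0],
    "eL":   [1, 1, 4, 0, 5, 0,  0, 1, 1, 1, 0],
}

# Total score per instrument is simply the sum of its points.
totals = {name: sum(points) for name, points in table1_points.items()}
# Pace -> 12 and eL -> 14, matching Table-1; eL (eLearners Advisor)
# scores highest and was therefore selected for the study.
```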
***********
CHAPTER-2: MATERIALS AND METHODS
"Don't be afraid to take a big step. You can't cross a chasm in two small jumps." ~~David Lloyd George~~
1. Introduction
Following up from the selection of the questionnaire that was described in the previous chapter, this chapter
describes the contents and structure of the questionnaire, the method of creation of the new Web-based
survey system (and troubleshooting its attendant glitches), piloting the Web-based questionnaire and
conducting the survey to its successful conclusion. The quotation from David Lloyd George aptly reflects
the ethos of this chapter insofar as it relates to the Web-based questionnaire itself.

2. Survey preliminaries
This study was conducted in USAIM, Seychelles, from February 2008 to March 2008. A preliminary round
of discussions with the President, Managing Director and Dean of USAIM culminated in their collective and
tacit approval for the study. This was followed by submission of the study proposal in the form of a
preliminary abstract, which was accepted by the Dean. Then a notice was inserted in the student notice
board, detailing the purpose, scope and depth of the study, and the approximate time it would take to
complete the questionnaire. It also contained an FAQ to clear common anticipated doubts and allay
apprehensions. It was stressed that there were no wrong or right answers so that students would respond
honestly, without any misgivings. The students were also informed that all results would be statistically
aggregated and no individually identifiable data would be asked for or displayed. Informed consent of study
participants was implicit.

3. Study design, setting, participants and data sources
The study was conducted within the campus of USAIM. It was designed as a Web-based questionnaire
survey of USAIM students. It was a one-shot, cross-sectional, non-experimental study, with data collected at
a single point in time to reflect a cross-section of the current student population. Therefore the current
students from PC-1 to PC-5 were the participants. There were 35 students at the time of conducting the
survey, all of whom were included in the study. Their feedback from the questionnaire provided the data for
statistical analysis.

4. Questionnaire
It was decided at the outset that, unlike the previous surveys conducted by the author in USAIM, which were
paper-based, this study would utilize a Web-based questionnaire.[25,26] There were several reasons for this.
Firstly, the study itself was about students’ online preparedness. Therefore it made logical sense to use a
Web-based questionnaire. If the students could access and answer the questions online, it would be a
significant reflection on their online capabilities. Secondly, online questions are easy to administer, less
time-consuming, more efficient, and eco-friendly insofar as they do not entail any usage of paper.[25]
Thirdly, the software in the online system allowed automatic percentage calculation, thereby reducing the
time and effort of manual calculation.

The contents of the questionnaire were selected from several questionnaires that were available on the net,
which had been used to test for students’ online preparedness.[10,19-24] The selection criteria for the
questionnaire were described in Chapter-1. Forty questions were selected and sorted into five ‘match for
online learning’ question groups, numbered sequentially from A to E. Group A had 2 questions pertaining to
technology access. Group B had 3 questions pertaining to personal facts (insofar as they impacted the
students’ online learning capabilities). Group C had 16 technical proficiency-related questions. Group D
contained 16 questions to determine students’ learning style (LS) preferences. Eight questions in this group
were worded in such a way that a ‘Yes’ response indicated pro-online LS preference. The other 8 reflected
anti-online LS preference. The pro-online and anti-online LS preference questions were alternately arranged
in Group D. Finally group E contained 3 questions of a general nature. All questions had provision for only
one answer except the question about motivation for online learning (first question in group B), which
permitted students to select up to 3 options. Appendix-1 gives a sample of the questionnaire. The purpose
behind adapting the questionnaire from existing ones rather than creating a fresh questionnaire from scratch
was that these had already been tried and tested on student populations elsewhere; i.e. they were self-validated, if
not entirely peer-validated. This obviated the time-wastage on piloting the questionnaire itself, which a
newly-generated questionnaire would have entailed.[26,27]
4.1 Precepts of good questionnaire design
While preparing the questionnaire, every effort was made to keep within the principles of good
questionnaire design, both paper and Web-based.[27-30] It was within 2 pages, as per the stipulations of good
questionnaire design.[27] Estimated time of completion was not more than 20 minutes. It had 40 questions,[30] with
easy wording and user-friendly sentence construction. All required single-option selection except the third
question, which permitted up to 3 selections. Sixteen question-options were ranked (Very / Some / None) and
16 had bimodal (Yes / No) options. Among the rest, the number of available options ranged from 3 to 5.
There were no open-ended questions, as advocated by some.[27] Though not strictly within the rules,
differently ranked questions were somewhat intermixed.[27] At the outset, a short introduction mentioning the
background and aims of the evaluation was given. Users were told what to expect so that they would be
mentally prepared and were informed that they would be anonymous.[30] The questions had been piloted
elsewhere; that was one of the main reasons for their selection. Therefore it was not considered necessary for the
author himself to pilot the questionnaire again, as suggested by some.[26,27] There were certain drawbacks in
the questionnaire that have been discussed in Chapter-4. 5. Generating a Web-based questionnaire Perlman described Web-based questionnaire creation using customizable Web-based PERL (Practical
Extraction and Report Language) CGI (Common Gateway Interface) script. It was based on established
questionnaires and automated the process of data collection. However, it was hard to customize completely,
its questionnaire terminology did not closely match the domain required for this study, and it offered no
analysis tools.[25] Therefore, a different technique was employed in this study. The Google-based blog-site
(URL: http://www.blogger.com) was used as the platform for creating the Web-based questionnaire survey
system. A new blog-site was created in February 2008. First, an introduction to the survey and USAIM logo
(Figure-1a); and essentially a repetition of the earlier paper-based instructions (Figure-1b), was entered in
the main ‘Blog Posts’ box (Figure-1c). In the main blog posts page, below the ‘Blog Posts’ box was an
option; ‘Add a Page Element’ (Figure-1c). Clicking on this opened a dialog box that enabled one to ‘Create
a poll’ (Figure-1d). This allowed entry of a question followed by as many answer-options as desired, set the
limit of selection of options (single / multiple options), and also set the time limit for the poll (Figure-1d).
Clicking on the ‘Save’ button of ‘Create a poll’ dialog box saved the question in the ‘Page Element’ section
of the main blog page. Forty of these poll-creating ‘Page Elements’ were added in succession to constitute
40 questions of the questionnaire. For each question the student had to select one of the options through the
radio buttons and click ‘Vote’ in order to save his/her poll (Figure-1e). The process had to be repeated for
each of the 40 questions. Clicking on the ‘Show results’ link for any question (Figure-1e) revealed the
percentage scores for that question (Figure-1f). The blog-site was published for public viewing. The author
personally accessed the blog-site and checked it several times for usability, until opening, loading and
refreshing of the page elements were satisfactorily achieved, and all buttons, options and links were found to
be successfully operating. Finally, the URL http://sanyalonlineusaimsurvey.blogspot.com/ was made
available to the students through a general notice.[30]
Figure-1a,b: Shows creation of a blog post for the online survey, with USAIM logo, introduction and instructions for the survey. Figure-1c,d: Shows the method of creating an online question (with options) through the ‘Create a poll’ dialog box. Figure-1e,f: Shows the resultant online questionnaire and students’ responses, automatically expressed as percentages of total respondents.

6. Sequence of survey
After the preliminary notice, the URL of the blog-site containing the survey was released to the students and
they were given 2 weeks to complete the questionnaire. Throughout the release period the author regularly
visited the site to keep track of the numbers of students responding to the questionnaire. Moreover, the
researcher was always available to resolve doubts and queries and to troubleshoot glitches. After 1 week a reminder
notice was issued for those who had not visited the site or attempted the survey. After all students had
completed the questionnaire, the author visited the site and manually extracted the percentage scores for
each option (through the ‘Show results’ link) for each question, which had been automatically calculated by
the blog-site server. The raw data were entered on an MS® Excel® worksheet and tabulated for further
analysis. The result scores were analyzed with a specific view towards arriving at the stated goals of the
study. This is detailed in the next chapter. At the conclusion of the analysis, a summary of the aggregated
results, without identifying anybody, and USAIM students’ online Compatibility Score was put on the
student notice board for everybody’s information.

7. Troubleshooting technical glitches in the Web-based questionnaire
This was a study to pilot the newly-devised Web-based questionnaire and identify the technical glitches in
the system. They required ongoing identification and correction throughout the process of the survey. In
fact, it was part and parcel of the survey process itself. Therefore it is apt to describe the glitches and the
measures taken to circumvent them in this chapter itself; technical explanations are dealt with in Chapter-4.

Several glitches were encountered that required ongoing attention. Firstly, the full blog-site page required
considerable time to open fully, after numerous (~ 40) ‘clicking’ sounds. This was because each question
was entered as a separate poll in a separate page element in the blog-site, as per the feature of that site.
Therefore during opening of the page, each question (element) required a separate response from the site
server. There was nothing that could be done about this, except to warn the students in advance and
encourage patience. Secondly, the most serious problem encountered was the en masse error message, “This
page cannot be displayed”, corresponding to each question, even though the Internet connection was stable.
This occurred when several students simultaneously tried to access the site from different machines on the
USAIM server. Investigation revealed that the Google server, which was the main site server for the blog-
site, interpreted these simultaneous hits as suspected virus attacks on its server, and therefore tried to shut
out the USAIM server. Therefore, perforce the students had to access the site one at a time when they were
within the USAIM network. The third problem was when students tried to progress rapidly through the
questions; after a certain time, the last few questions tended to display the same error message, possibly as a
result of the same erroneous interpretation by the Google server. A fourth situation, similar to the previous,
was encountered when students clicked on the ‘Vote’ button and, without waiting for the page to refresh,
progressed to the next question and clicked on its option radio-button. The same error message was
displayed. Therefore the students had to be told to progress through the questions at a moderate, but not too
rapid, pace. They were instructed to wait for the page to refresh after each ‘voting’. Another problem was
encountered when one student immediately followed another (usually, but not always, on the same machine).
The page would open with the questions displayed in the post-voting mode of the previous student, asking
if the student wanted to change his poll opinion. If the next student clicked on this, the page got refreshed
but the previous student’s response got erased. This also required that there should be a sufficient time gap
between two students’ access to the blog-site. Because of these problems many students had to complete the
questionnaire in more than one sitting. Not all students encountered all these problems, however. Quite a
few managed to complete the whole questionnaire without encountering a single glitch, especially those who
accessed it from home on their personal laptops through their own Internet connection provided by their
personal ISP.

8. Conclusion
Three aspects of this study were covered in this chapter. Firstly, the nuts and bolts of the whole survey
process (questionnaire, Web-based system and the survey proper) were exhaustively described. Secondly, it
described the fulfilment of the immediate objective of this study, namely to assess the functioning and
identify the problems in the newly-devised Web-based system. Thirdly, the process described herein led to
the generation of data that led to the fulfilment of other goals of this study. These are discussed in Chapter-3.
***********
APPENDIX-1: Text of the online questionnaire
USAIM Students’ Online Survey Questionnaire
Available at URL: http://sanyalonlineusaimsurvey.blogspot.com/ (Total: 40 questions)

Q1) What type of Internet access will you have at home during your online studies?
No Access; Dial-up/Modem Access; High-Speed/Wireless Access

Q2) Please select the option that best describes the primary computer you will be using
Purchased 1 - 2 years ago; Purchased 3 - 4 years ago; Purchased > 4 years ago; I plan to buy a new PC soon; I'm unsure what PC I will use

Q3) Which of the following would be your motivation(s) for undertaking the online course? (Select up to 3)
A) It seems like the fastest and easiest way to study B) To increase my earning potential in my future career C) To qualify for a good position or career D) A personal interest or goal (I like this method of learning) E) Outside influences rather than my own goals or needs (My lecturer is telling me to do it!)

Q4) My schedule is...
A) Predictable (I can devote regular blocks of time for online study) B) Somewhat Unpredictable (My schedule changes often, but I can usually put in some time for online study) C) Very Unpredictable (My schedule is rarely the same; however, I shall see what I can do)

Q5) Each day I could dedicate the following number of hours for online study:
A) 1 to <2 B) 2 to <3 C) 3 to <4 D) 4 to <5 E) 5 or more

Q6) Fast and accurate typing on a computer keyboard: A) Very skilled B) Some skills C) No skills
Q7) Open files saved on a floppy disk, hard drive or CD: A) Very skilled B) Some skills C) No skills
Q8) Save a file with a new name, file type or file location: A) Very skilled B) Some skills C) No skills
Q9) Copy, cut and paste text/files between windows/programs: A) Very skilled B) Some skills C) No skills
Q10) Format fonts and document layout using a word processor: A) Very skilled B) Some skills C) No skills
Q11) Insert a picture/object into a word processing document: A) Very skilled B) Some skills C) No skills
Q12) Solve basic computer problems (e.g. computer freezes): A) Very skilled B) Some skills C) No skills
Q13) Learn new software programs or applications: A) Very skilled B) Some skills C) No skills
Q14) Visit a web site (if you are given the address/URL): A) Very skilled B) Some skills C) No skills
Q15) Send and receive e-mail messages: A) Very skilled B) Some skills C) No skills
Q16) Send and receive attachments/files through e-mail: A) Very skilled B) Some skills C) No skills
Q17) Use search engines to find answers and resources: A) Very skilled B) Some skills C) No skills
Q18) Use "message boards" or "forums" or "newsgroups": A) Very skilled B) Some skills C) No skills
Q19) Use a "chat room" or "instant messaging": A) Very skilled B) Some skills C) No skills
Q20) Download and install software or a "plug-in": A) Very skilled B) Some skills C) No skills
Q21) Protect your PC from threats (viruses, spyware, hackers): A) Very skilled B) Some skills C) No skills
Q22) Socializing with my classmates is important for my education: A) Yes B) No
Q23) I am comfortable building online relationships and networking online: A) Yes B) No
Q24) I always need to share my knowledge, thoughts and experiences with others: A) Yes B) No
Q25) I am a disciplined student and I can usually stick to my study plan: A) Yes B) No
Q26) I have difficulty completing assignments on time, and sometimes need extension dates: A) Yes B) No
Q27) I prefer to learn through independent projects instead of structured assignments: A) Yes B) No
Q28) I prefer lecture-based learning rather than discussion-based / project-based learning: A) Yes B) No
Q29) I have decent computer reading speed and I can learn well that way: A) Yes B) No
Q30) I do not participate much in group discussions unless specifically called upon to do so: A) Yes B) No
Q31) I prefer working alone on assignments instead of in study-groups: A) Yes B) No
Q32) I prefer verbal discussions rather than submitting my ideas in writing: A) Yes B) No
Q33) I prefer structuring my own projects instead of being given specific directions: A) Yes B) No
Q34) I prefer hearing verbal explanations instead of reading written ones: A) Yes B) No
Q35) I have good writing skills and can effectively communicate my ideas in writing: A) Yes B) No
Q36) I am much more comfortable communicating face-to-face rather than with email: A) Yes B) No
Q37) I am good at structuring my own learning; independent study courses are right for me: A) Yes B) No

Q38) What is your age?
A) <18 years B) 18 to <19 years C) 19 to <20 years D) 20 to <21 years E) = or > 21 years

Q39) What is the highest level of education that you have completed till date?
A) Class 10 or equivalent B) High school (10 + 2) C) Some college courses (e.g. Pre-med) D) Bachelor’s degree

Q40) Do you have any concerns about the quality of online courses?
A) No concerns at all (Hey, cool man!) B) Some concerns (I’m just a wee bit worried!) C) Many concerns (Gee, I’m highly worried!!) D) Unimaginably concerned (It ain’t for me dude!)
*******************
CHAPTER-3: STATISTICAL ANALYSIS AND RESULTS

“You are never granted a wish without the power to make it come true. You have to work for it, however.”
~~Anon~~

1. Introduction
Continuing from the data collection described in the previous chapter, this chapter deals with univariate and
bivariate statistical analysis of the data and the output generated therefrom. The results of the survey
pertained to five broad sets of student-specific parameters that were considered relevant for online learning.
These were; technical access parameters (type of Internet access, primary computer that would be used for
online courses); personal parameters (motivation for online learning, scheduling ability for same, ability to
devote certain numbers of hours of self-study per day); technical proficiency (which included 16 items);
learning style preferences insofar as they pertained to online learning (which also included 16 items); and
general considerations (concerns about online learning, previous education levels, age groups).

A. UNIVARIATE STATISTICS

2. Weightings and score-generation
All results were collected as percentages of students responding to the various item-options in each question. In
order to render the crude percentage results more meaningful, they were fitted into a scoring system.[10,23]
Moreover, since the options for each question were not of equal importance, there also had to be a logical
weighting system for the options, based on their relative importance.[10] Each item of the parameters under
study was given a weighting factor that ranged from ‘0’ to ‘5’. ‘0’ was the weighting for the response that
was not at all useful, and ‘5’ weighting was allotted to the most important response-option in the online
learning context. The intermediate weightings ranged sequentially between the two extremes, depending on
their relative order of importance from the online learning point of view. However, for each parameter the
actual weighting range was variable; some had ranges 0-3, others 0-4, 0-2, 1-3, 1-4 or 1-5, etc.,
depending on the number of response-options for that parameter.

3. Technology access parameters

3.1 Type of Internet access
This parameter pertains to type, speed and bandwidth of Internet connection that students would have for
their online course. The weighting for items in this parameter ranged from ‘0’ to ‘3’; ‘0’ being for ‘No
access’ and ‘3’ for ‘High-speed / Wireless access’, the latter being considered ideal for online
courses.[11] ‘Cable modem’ and ‘Dial-up’ received weightings of ‘2’ and ‘1’ respectively, the former being
considered superior to latter. Slightly less than half (48%) of the students had access to high-speed or
wireless Internet at home. The weighted score collectively secured by the students for this parameter was
191, out of a theoretical maximum (Max) of 300. This worked out to 63.7% of Max for the parameter ‘Type
of Internet access’ (Table-1, Figure-1).
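The weighting-and-scoring arithmetic described above can be reproduced in a few lines. The following is an illustrative sketch only (the study's actual calculations were tabulated manually on an Excel worksheet), and the function name is an assumption:

```python
# Illustrative re-computation of the scoring scheme (not the author's own code;
# the study tabulated these figures manually in MS Excel).
# Each option contributes (% of students choosing it) x (its weight); the
# theoretical Max assumes 100% of students chose the top-weighted option.
def weighted_score(results):
    """results: list of (percent_of_students, weight) pairs for one parameter.
    Returns (collective score, theoretical Max, score as % of Max)."""
    score = sum(pct * w for pct, w in results)
    max_score = 100 * max(w for _, w in results)
    return score, max_score, round(100 * score / max_score, 1)

# 'Type of Internet access' figures from Table-1:
# No access (w=0), Dial-up (w=1), Cable modem (w=2), High-speed/Wireless (w=3)
internet_access = [(19, 0), (19, 1), (14, 2), (48, 3)]
print(weighted_score(internet_access))   # (191, 300, 63.7), as in Table-1
```

The same function applies unchanged to any single-select parameter whose weights start at 0, e.g. the ‘Schedule’ parameter of Table-4.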
Type of Internet access      %     W    P     Max
No Access                    19    0    0
Dial-up modem Access         19    1    19
Cable modem access           14    2    28
High-Speed/Wireless Access   48    3    144   300
Total                        100        191   300
Final score                             63.7% 100%

Table-1: Shows the percentage of students with various types of Internet access at home. The items have been weighted (W) from 0 to 3 based on their relative importance. The next column gives the product (P) of the previous 2 columns, the collective score for this parameter and the % of the maximum (Max) possible score.

Figure-1: Shows histogram of percentages of students with various types of Internet access at home for online studies.
3.2 Primary computer
This parameter refers to the main computer that students claimed they would use at home for their online
courses. Weightings for items in this parameter ranged from ‘0’ (those who were unsure which PC to use) to
‘4’ (those possessing the latest PCs). Older PCs secured lower weightings. Two-thirds (68%) of the students had
purchased their primary computer within the last 1 or 2 years. The collective weighted score for this
parameter was 320, out of a theoretical Max of 400, giving students a score of 80% of Max for the
parameter labelled as ‘Primary computer’ (Table-2, Figure-2).

Primary computer            %     W    P     Max
Purchased 1 - 2 years ago   68    4    272   400
Purchased 3 - 4 years ago   11    3    33
Purchased > 4 years ago     5     2    10
Plan to buy new PC soon     5     1    5
Unsure what PC to use       11    0    0
Total                       100        320   400
Final score                            80%   100%

Table-2: Shows the percentage of students with various types of primary computer at home for online studies. The items have been weighted (W) from 0 to 4 based on their relative importance. The next column gives the product (P) of the previous 2 columns, the collective score for this parameter and the percentage of the maximum (Max) possible score.

Figure-2: Shows histogram of percentages of students with various types of primary computer at home for online studies.
4. Personal parameters

4.1 Motivation
This was considered the single most important parameter under personal factors, with the capacity
to determine whether a student would be able to pursue an online course successfully or not.[5,8-11,24] The
weightings employed for items in this parameter were thus; ‘Outside influences’ carried ‘0’ weight because
they were not students’ internal motivations. ‘Personal interest’ carried the maximum weight of ‘2’ for the
converse reason. Other response items (‘Fast easy learning’, ‘Earning potential’, ‘Good position’) were
equivocal response-options; they may or may not be applicable to a given situation. Therefore they could all
be considered as motivations under certain, but not all, circumstances. They carried a uniform weighting-
factor of ‘1’ each. Personal interest was the most frequent (50%) motivating factor for pursuing online
studies. One-third or more (33% - 44%) cited other reasons also, because students were permitted to tick up
to three response options in this parameter. A small proportion (16%) admitted being influenced by outside
sources, viz. lecturers. The collective weighted score for this parameter was 210, out of a theoretical Max of
300, giving them a score of 70% for the parameter labelled as ‘Motivation’ to pursue online courses, as one
of the personal factors (Table-3, Figure-3).

Motivation            %     W    P      Max
Fast, easy learning   44    1    44     100
Earning potential     33    1    33     100
Good position         33    1    33     100
Personal interest     50    2    100    200
Outside influences    16    0    0      0
Total                 *          210    300 (3/5 of 500)
Final score                      70.0%  100%
*Students were permitted to tick up to 3 choices

Table-3: Shows the percentage of students with various motivating factors for online studies. The items have been weighted (W) from 0 to 2 based on their relative importance (see text). The next column gives the product (P) of the previous 2 columns, the collective score for this parameter and the percentage of the maximum (Max) possible score. The Max of 300 for this parameter was computed as three-fifths of 500 (3/5 x 500), because students were permitted to tick up to a maximum of 3 response-options.

Figure-3: Shows histogram of % of students with various motivating factors for online studies.
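The adjusted-maximum rule for this multi-select item can be sketched as follows. This is an illustrative reconstruction with an assumed helper name, not the study's own (manual) computation:

```python
# Illustrative sketch of the Table-3 arithmetic (assumed helper, not the
# author's code). Because students could tick up to 3 of the 5 options, the
# theoretical Max (sum of 100% x weight over all options, i.e. 500 here) is
# scaled by 3/5 before the final percentage is computed.
def multiselect_score(results, max_choices):
    """results: (percent ticking, weight) per option; up to max_choices ticks."""
    score = sum(pct * w for pct, w in results)
    full_max = sum(100 * w for _, w in results)        # 500 for 'Motivation'
    adj_max = full_max * max_choices / len(results)    # 3/5 of 500 = 300
    return score, adj_max, round(100 * score / adj_max, 1)

motivation = [(44, 1), (33, 1), (33, 1), (50, 2), (16, 0)]
print(multiselect_score(motivation, 3))   # (210, 300.0, 70.0), as in Table-3
```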
4.2 Schedule
This parameter, also under personal factors, refers to how predictably students could devote a fixed number
of hours of self-study at home during their online course. Ability to maintain predictable hours of study was
also considered important for successful online studies.[10,11,24] Therefore the weighting scheme for items in
this parameter, which ranged from ‘0’ to ‘2’, was straightforwardly based on this aspect of the schedule,
with ‘0’ for ‘Unpredictable’ ‘1’ for ‘Somewhat unpredictable’ and ‘2’ for ‘Predictable’ study schedules.
Somewhat less than half (47%) admitted that they had somewhat unpredictable schedules insofar as it
pertained to devoting notional self-study time for the online course. However, more than one-third (35%)
indicated they had the ability to maintain predictable study hours. The collective weighted score for this
parameter was 117, out of a theoretical Max of 200, giving them a score of 58.5% of Max for the parameter
labelled as ‘Schedule’ to pursue online courses (Table-4, Figure-4).

Schedule                 %     W    P      Max
Predictable              35    2    70     200
Somewhat unpredictable   47    1    47
Unpredictable            18    0    0
Total                    100        117    200
Final score                         58.5%  100%

Table-4: Shows the percentage of students with various degrees of predictability of schedule for online studies. The items have been weighted (W) from 0 to 2 based on their relative importance. The next column gives the product (P) of the previous 2 columns, the collective score for this parameter and the percentage of the maximum (Max) possible score.

Figure-4: Shows histogram of percentage of students with various degrees of predictability of schedule for online studies.
4.3 Hours of online study
This parameter pertains to the number of hours that students can devote daily as part of their online studies.
A 3-credit course requires 9 to 12 hours (some say 10 to 15 hours) of study per week.[10,11,24] The options in
this study pertained to numbers of hours/day; the rationale being those who could study more hours/day
were more likely to complete their scheduled weekly credits. The weighting for this parameter ranged from
‘5’ (capable of 5+ hours of study/day) to ‘1’ (capable of 1-2 hours/day). The largest proportion (42%) of students stated
they could devote 3-4 hours of online study/day. About one-third (32%) could devote 2-3 hours/day. Since
no student could put in 5+ hours, it was considered realistic to peg the Max possible score for this parameter
at 400. The collective weighted score for this parameter was 231; considered out of a Max of 400, it gave a
score of 57.8% for the parameter labelled as ‘Hours of online study’ per day (Table-5, Figure-5). Hours of online study % W P Max
1 to <2 21 1 21 2 to <3 32 2 64 3 to <4 42 3 126 4 to <5 5 4 20 *400 5 or more 0 5 0 Total 100 231 400 Final score 57.8% 100%
Table-5: Shows percentage of students capable of various hours of online study per day. The items have been weighted (W) from 1 to 5 in ascending order of importance. Next column gives the product (P) of previous 2 columns, the collective score for this parameter and the percentage of maximum (Max) possible score. [*Since no student could put in the maximum number of hours as per the response items, it was considered realistic to consider the next highest (400) as the maximum possible score.] Figure-5: Shows histogram of percentage of students capable of various hours of online study per day.
5. Technical proficiency

This parameter considered 16 items to determine how proficient the students were in handling computers.
They had to grade their capability for each item in terms of ‘Very skilled’, ‘Some skills’ and ‘No skills’.
Therefore it was logical to allocate weights of ‘2’, ‘1’ and ‘0’ respectively to each of these skill levels,
according to a modified Likert scale. Students proved to be most proficient in visiting Websites and
Emailing messages, achieving weighted scores of 98% in each. Using search engines scored 90%. They were moderately weak in downloading / installing software (58%) and protecting PCs from viruses (60%). Using message boards and basic problem-solving were their Achilles heel, achieving only 48% and 50% weighted scores respectively. There were very few items for which students professed no skills; these were the same two Achilles-heel items (24% and 23% respectively). The collective weighted score for this parameter was 2366, out of a theoretical maximum of 3200, giving them a score of 73.9% for the parameter labelled as ‘Technical proficiencies’ to pursue online courses (Table-6, Figure-6).

Item                       Very skilled (%)  W1  Score1  Some skills (%)  W2  Score2  No skills (%)  W3  Score3  ΣScore  Max   %
Keyboard typing            32                2   64      58               1   58      10             0   0       122     200   61%
Opening files              53                2   106     47               1   47      0              0   0       153     200   77%
Saving files               72                2   144     28               1   28      0              0   0       172     200   86%
Copying-pasting files      63                2   126     32               1   32      5              0   0       158     200   79%
Formatting document        47                2   94      47               1   47      6              0   0       141     200   71%
Inserting picture/object   70                2   140     20               1   20      10             0   0       160     200   80%
Basic problem solving      23                2   46      54               1   54      23             0   0       100     200   50%
Learning new software      42                2   84      48               1   48      10             0   0       132     200   66%
Visiting website           95                2   190     5                1   5       0              0   0       195     200   98%
Emailing messages          95                2   190     5                1   5       0              0   0       195     200   98%
Attaching files to msgs    73                2   146     27               1   27      0              0   0       173     200   87%
Using search engines       80                2   160     20               1   20      0              0   0       180     200   90%
Using message board        19                2   38      57               1   57      24             0   0       95      200   48%
Using chat room            59                2   118     36               1   36      5              0   0       154     200   77%
Download/install software  33                2   66      50               1   50      17             0   0       116     200   58%
Protecting PC from virus   35                2   70      50               1   50      15             0   0       120     200   60%
Total                                                                                                            2366    3200
Final score                                                                                                      73.9%   100%  73.9%
Table-6: Shows percentage of students with various skill levels in 16 different items of computer proficiency. The items have been weighted (W) from 2 to 0 in descending order of skill levels. Each Score-column gives the product of previous 2 columns. ΣScore-column (Σ = summation) gives the sum of all 3 Score-columns, the collective weighted score for this parameter and the % of maximum (Max) possible score. Last column gives the % score of each item of technical proficiency (item-wise ΣScore divided by 200, expressed as %), and last field in that column gives the average value of that column, which is the same as the final % of collective score.
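The per-item arithmetic of Table-6 can be sketched in the same way, with the three skill levels weighted 2/1/0 and a per-item maximum of 200. A short illustration over four of the 16 items (percentages from the table; names are mine):

```python
# Per-item weighted scores for the technical-proficiency parameter
# (Table-6): weights 2 ('Very skilled'), 1 ('Some skills'), 0 ('No skills').

def item_score(very, some, none):
    """Weighted score out of a per-item maximum of 200 (100% x weight 2)."""
    return 2 * very + 1 * some + 0 * none

# (% very skilled, % some skills, % no skills) for four sample items
sample = {
    "Keyboard typing":       (32, 58, 10),
    "Basic problem solving": (23, 54, 23),
    "Visiting website":      (95, 5, 0),
    "Using message board":   (19, 57, 24),
}
for name, levels in sample.items():
    s = item_score(*levels)
    print(f"{name}: {s}/200 = {s / 2:.0f}%")  # e.g. Keyboard typing: 122/200 = 61%
```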
Figure-6: Shows histogram of % of students with various skill levels in 16 items of computer proficiency. The roughly horizontal blue line represents the average, which lies between 70% and 75% (actually 73.9%).

6. Learning style preferences

Some learning style (LS) preferences favour online methods of study; others favour face-to-face (F-2-F) on-campus traditional methods of study.[14] There were 16 questions in this parameter, 8 of which were
worded in such a way that a ‘Yes’ answer to them indicated pro-online LS preference. ‘Yes’ answer to
remaining 8 questions indicated anti-online LS preference. Conversely, a ‘No’ answer to any of these
questions indicated the opposite LS preference. The two sets of questions were arranged alternately with
each other. Since this study was initiated from the online learning perspective, ‘Yes’ answer to pro-online
LS type of questions was given weighting of ‘1’ and ‘No’ answer was given weighting of ‘0’. The reverse
weighting was given for the anti-online LS type of questions.

The percentages of ‘Yes’ and ‘No’ responses to the alternating anti-online and pro-online LS preference questions were rather ambiguous. Most students gave high responses to both opposing sets of questions. This
has been further analyzed under bivariate statistics and the results described later. From the online LS
perspective, the weighted score collectively secured by the students for this parameter was 801, giving them
an aggregated pro-online LS preference score of 64.0% for the parameter labelled as ‘Learning style
preferences’ to pursue online courses (Table-7).

Item                                    Yes (%)  W1  Score1  No (%)  W2  Score2  ΣScore  Max
Prefer socializing                      89       0   0       11      1   11      11      100
Prefer online networking                68       1   68      32      0   0       68      100
Need knowledge sharing                  65       0   0       35      1   35      35      100
Disciplined study plan                  50       1   50      50      0   0       50      100
Timely assignment difficult             35       0   0       65      1   65      65      100
Prefer independent projects             62       1   62      38      0   0       62      100
Prefer lecture-based learning           53       0   0       47      1   47      47      100
Good computer reading speed             76       1   76      24      0   0       76      100
Non-participation in discussion boards  53       0   0       47      1   47      47      100
Prefer working alone                    69       1   69      31      0   0       69      100
Prefer verbal discussions               69       0   0       31      1   31      31      100
Prefer own structured projects          56       1   56      44      0   0       56      100
Prefer verbal explanations              62       0   0       38      1   38      38      100
Good writing skills                     82       1   82      18      0   0       82      100
Prefer F-2-F communication              86       0   0       14      1   14      14      100
Prefer own structured learning          50       1   50      50      0   0       50      100
Total                                            513             288             801     1600
Final score                                      64.0%           36.0%                   50.1%
Table-7: Shows percentage of students with pro- and anti-online learning style (LS) preferences. The W1 column weights ‘Yes’ answers to pro-online questions as ‘1’ and ‘Yes’ answers to the opposite questions as ‘0’; the W2 column is weighted exactly opposite. Score1 and Score2 give the products of the respective previous 2 columns. The ΣScore column (Σ = summation) gives the aggregate of the two Score columns, the individual-item and collective weighted scores for this parameter, and the final score for pro-online LS preference expressed as % of the theoretical maximum (Max) of 1600.

7. General considerations

7.1 Online concerns
This parameter pertains to the concern level of students at the prospect of facing an online course.[10] The
response-options for this parameter ranged from ‘No concerns’ to ‘Highly concerned’. The weighting ranged
in descending order from ‘3’ (‘No concerns’) to ‘0’ (‘Highly concerned’), in a modified Likert fashion. It
was considered appropriate to give a weighting of ‘0’ to the last category of response because highly
concerned individuals were not considered particularly fit candidates for online courses. Forty-three percent of students professed some concerns, while just under one-third (31%) had no concerns regarding online
courses. The collective weighted score for this parameter was 192, out of a theoretical Max of 300, giving
them an aggregated score of 64% for the parameter labelled as ‘Online concerns’ (Table-8, Figure-7).

Online concerns    %    W    P    Max
No concern         31   3    93   300
Some concerns      43   2    86
Many concerns      13   1    13
Highly concerned   13   0    0
Total                        192  300
Final score                  64%  100%
Table-8: Shows percentage of students with various levels of concern regarding online courses. The items have been weighted (W) in descending order from 3 to 0. The P column gives the product of the previous 2 columns, the collective score for this parameter and the % of maximum (Max) possible score.

Figure-7: Shows histogram of % of students with various levels of online concern.
7.2 Prior education
This parameter indicates the prior education levels of students, before being considered for online courses. It
was theorized that those with high prior levels of education would be cognitively more capable and receptive
to online courses. Therefore the weighting was performed in ascending order of prior education levels,
ranging from ‘1’ (10+2 (high school) education) to ‘3’ (Bachelor’s degree). Exactly half the students had
completed 10+2, while 37% had completed Pre-medical. The collective weighted score for this parameter
was 163, out of a theoretical Max of 300, giving them an aggregated score of 54.3% for the parameter
labelled as prior ‘Education level’ (Table-9, Figure-8).
Education level   %    W    P     Max
10 + 2            50   1    50
Pre-med           37   2    74
Bachelor          13   3    39    300
Total                       163   300
Final score                 54.3% 100%
Table-9: Shows percentage of students at various prior education levels. The items have been weighted (W) in ascending order from 1 to 3. The P column gives the product of the previous 2 columns, the collective score for this parameter and the % of maximum (Max) possible score.

Figure-8: Shows histogram of % of students at various prior education levels.

7.3 Age
This parameter was designed to give an indirect assessment of the level of mental maturity of students prior
to undertaking online courses.[8] Students in their twenties are generally considered more mature than those in their late teens. Therefore the weighting was allocated in ascending order from ‘1’ (18 to <19 years) to ‘4’ (21+ years). The largest group of respondents (40%) was 21 years or older, while half that number (20%) was 18 to <19 years.
Exactly one-third (33%) were 20 – 21 years. The collective weighted score for this parameter was 293, out
of a theoretical Max of 400, giving them an aggregated score of 73.3% for the parameter labelled as ‘Age’
(Table-10, Figure-9).

Age                %    W    P     Max
18 to <19 years    20   1    20
19 to <20 years    7    2    14
20 to <21 years    33   3    99
21 years or more   40   4    160   400
Total                        293   400
Final score                  73.3% 100%
Table-10: Shows percentage of students in various age groups prior to undertaking online courses. The items have been weighted (W) in ascending order from 1 to 4. The P column gives the product of the previous 2 columns, the collective score for this parameter and the percentage of maximum (Max) possible score.

Figure-9: Shows histogram of % of students in various age groups prior to undertaking online courses.
8. Overall USAIM Compatibility Score for Online Learning

All the collective weighted scores computed in the preceding sections for each parameter were tabulated in a
summary table and depicted in the form of a histogram (Table-11, Figure-10). A modified grading scheme
was created based on earlier analyses.[10,23] This modification took into consideration the characteristics
unique to the USAIM students under study (diverse ethnic and educational backgrounds, mixed age groups). This scheme graded the weighted scores from ‘Excellent’ to ‘Poor’, and allotted points ranging
from 10 (‘Excellent’) to 0 (‘Poor’) (Table-12). The grading scheme (and the corresponding points) was then
applied to the summary table. Computation of total points showed that students of USAIM had secured 64
points against a theoretical maximum of 100 points, corresponding to a ‘Good’ score (Table-11). This was
the overall USAIM institutional Compatibility Score for online learning. From a different perspective, the
average of all the percentages works out to 66.0%. This also corresponds to the ‘Good’ category (Table-11).
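The banding-and-points procedure just described is mechanical and can be sketched in a few lines (band boundaries from Table-12, parameter scores from Table-11; function names are mine):

```python
# Applying the Table-12 grading scheme to the ten weighted scores of
# Table-11 to reproduce the overall compatibility score of 64/100.

def grade(score):
    """Map a weighted score (%) to (grade, points) per the grading scheme."""
    bands = [(80, "Excellent", 10), (70, "Very good", 8), (60, "Good", 6),
             (50, "Average", 4), (40, "Fair", 2)]
    for cutoff, name, points in bands:
        if score >= cutoff:
            return name, points
    return "Poor", 0

# Internet access, primary computer, motivation, scheduling, study hours,
# technical competency, LS preferences, concerns, education, age
scores = [63.7, 80, 70, 58.5, 57.8, 73.9, 64, 64, 54.3, 73.3]
total_points = sum(grade(s)[1] for s in scores)
average = sum(scores) / len(scores)
print(total_points)       # 64
print(round(average, 2))  # 65.95, reported as 66% in Table-11
```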
Parameter                 W score (%)  Grade      Points
Type of Internet Access   63.7         Good       6
Primary Computer          80           Excellent  10
Motivation                70           Very good  8
Scheduling Capability     58.5         Average    4
Hours of Online Study     57.8         Average    4
Technical Competency      73.9         Very good  8
Online LS Preferences     64           Good       6
Online Concern Levels     64           Good       6
Prior Education Level     54.3         Average    4
Age Levels                73.3         Very good  8
COMPATIBILITY SCORE       66% (Avg.)              64/100
Table-11: Grading scheme applied to all weighted scores (W scores %) for all parameters, to arrive at the USAIM Compatibility Score for online learning.

Figure-10: Shows histogram of all weighted scores of USAIM students regarding their fitness to undertake online courses.

Grading Scheme
Weighted score   Grade      Points
80% or more      Excellent  10
70 to < 80%      Very good  8
60 to < 70%      Good       6
50 to < 60%      Average    4
40 to < 50%      Fair       2
< 40%            Poor       0
Table-12: Grading scheme that was applied to all the scores tabulated in Table-11.

B. BIVARIATE STATISTICS

1. Learning style preference results

The percentages of ‘Yes’ and ‘No’ responses to the alternating anti-online and pro-online LS preference
questions were rather ambiguous. Weighted scores for ‘Pro-Yes’ and ‘Anti-Yes’ were both 64%. That meant a majority of students showed a propensity to mark ‘Yes’ on all questions (Tables-13, 14; Figures 11, 12).
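That propensity is easy to verify from the Table-7 percentages: the mean ‘Yes’ rate is virtually identical across the two opposing question sets. A quick check (percentages transcribed from Table-7):

```python
# Mean 'Yes' rates on the pro-online and anti-online question sets,
# illustrating the response ambiguity noted above.

pro_yes  = [68, 50, 62, 76, 69, 56, 82, 50]   # 'Yes' % on 8 pro-online items
anti_yes = [89, 65, 35, 53, 53, 69, 62, 86]   # 'Yes' % on 8 anti-online items

mean_pro = sum(pro_yes) / len(pro_yes)
mean_anti = sum(anti_yes) / len(anti_yes)
print(f"mean 'Yes' on pro-online items:  {mean_pro:.1f}%")   # 64.1%
print(f"mean 'Yes' on anti-online items: {mean_anti:.1f}%")  # 64.0%
```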
Pro-online preferences Yes (%) No (%)
Prefer online networking 68 32
Disciplined study plan 50 50
Prefer independent projects 62 38
Good computer reading speed 76 24
Prefer working alone 69 31
Prefer own structured projects 56 44
Good writing skills 82 18
Prefer own structured learning 50 50
Anti-online preferences Yes (%) No (%)
Prefer socializing 89 11
Need knowledge sharing 65 35
Timely assignment difficult 35 65
Prefer lecture-based learning 53 47
Non-participation in discussion boards 53 47
Prefer verbal discussions 69 31
Prefer verbal explanations 62 38
Prefer F-2-F communication 86 14
Tables-13, 14: Show % of responses to pro-online and anti-online LS preference questions.

Figures-11, 12: Show histograms of the same findings as the tables.
Therefore, in order to assess the reliability of items in the questionnaire, some bivariate statistical analyses
were conducted on the results from this parameter, using SPSS for Windows statistical package, Release
10.0.1, Standard Version, Copyright © SPSS Inc., 1989 – 1999 (Figure-13).
Figure-13: Screenshot of Statistical Package for Social Sciences (SPSS) used for bivariate analyses.

1.1 Scatterplot matrix of LS preference results

Before calculating correlations, it was considered advisable to plot the variables to see how they were
related. An initial analysis by means of a scatterplot matrix [Box-1] revealed that the data were uniformly
scattered about, without being tightly packed. Some matrices showed linear association (Figure-14).
Figure-14: Initial scatterplot matrix of the data, giving a visual demonstration of the overall distribution and of how the variables are related to each other.

1.2 Frequency histograms and tests of normality of LS scores

Since many statistical tests assume data are normally distributed, it was decided to check the distribution of
LS data by means of frequency distributions and histograms with normal curves superimposed on them
[Box-2a]. Kolmogorov-Smirnov and Shapiro-Wilk tests of normality (for <50 cases) were conducted [Box-
2b]. All significance values were high (p = 0.2 (K-S), 0.6-0.74 (S-W); df = 8; Lilliefors Significance
Correction); distribution of data did not differ significantly from normal distribution. Frequency distribution
histograms with normal curves superimposed on each showed that though none of the data followed a
classical bell-shaped symmetric distribution about a mean (which is typical of normally distributed data),
they did not vary widely from a normal distribution either (Table-15, Figures-15a,b,c,d).

           Kolmogorov-Smirnov(a)       Shapiro-Wilk
           Statistic  df  Sig.*        Statistic  df  Sig.
PROYES     .135       8   .200         .943       8   .606
ANTINO     .143       8   .200         .956       8   .740
PRONO      .135       8   .200         .943       8   .606
ANTIYES    .143       8   .200         .956       8   .740

* This is a lower bound of the true significance. (a) Lilliefors Significance Correction.
Box-1: SPSS syntax for scatterplot matrix
SPSS Data Editor Window: <Graphs> <Scatter…>; Scatterplot dialog box: <Matrix> <Define>; in the Scatterplot Matrix dialog box transfer all data into the <Matrix Variables> list box; endorse <OK>
SPSS Syntax Editor:
GRAPH
/SCATTERPLOT(MATRIX)=proyes antino prono antiyes
/MISSING=LISTWISE
/TITLE= 'Online LS Preference'
USAIM Online Survey; Dr S. Sanyal, Assoc. Prof., Faculty of Anatomy & Neurosciences, USAIM, Seychelles May 2008 24
Web-based Survey and Analysis of USAIM Students’ Online Compatibility – Pilot Study
Table-15: Shows Tests of Normality. Shapiro-Wilk and Kolmogorov-Smirnov in the Explore procedure are formal tests to check for normality. A low significance value (generally <0.05) indicates that the distribution of the data differs significantly from a normal distribution
Box-2a: SPSS syntax for frequency distribution histograms
SPSS Data Editor Window: <Analyze> <Descriptive Statistics> <Frequencies>; Frequencies dialog box: enter all variables; <Charts>, endorse <Histogram> <With normal curve>, <Continue>; <OK>
SPSS Syntax Editor:
FREQUENCIES VARIABLES=proyes antino prono antiyes
/HISTOGRAM NORMAL
/ORDER= ANALYSIS
Box-2b: SPSS syntax for tests of normality
SPSS Data Editor Window: <Analyze> <Descriptive Statistics> <Explore>
SPSS Syntax Editor:
/PLOT BOXPLOT HISTOGRAM NPPLOT
/COMPARE GROUP
/STATISTICS DESCRIPTIVES
/CINTERVAL 95
/MISSING LISTWISE
/NOTOTAL.
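The descriptive statistics that SPSS reports alongside these plots can be reproduced from the Table-7 percentages with the Python standard library alone. A sketch (note SPSS reports the sample standard deviation, i.e. the n - 1 form; the printed values match the figures up to rounding convention):

```python
# Reproducing the means and standard deviations shown with Figures-15a-d
# (and later in Table-18a) from the Table-7 percentages. 'No' percentages
# are exact complements of 'Yes' percentages, so PRONO/ANTIYES are derived.

from statistics import mean, stdev  # stdev uses the n - 1 denominator

ls_vars = {
    "PROYES": [68, 50, 62, 76, 69, 56, 82, 50],
    "ANTINO": [11, 35, 65, 47, 47, 31, 38, 14],
}
ls_vars["PRONO"] = [100 - x for x in ls_vars["PROYES"]]
ls_vars["ANTIYES"] = [100 - x for x in ls_vars["ANTINO"]]

for name, data in ls_vars.items():
    print(f"{name}: mean = {mean(data):.2f}, SD = {stdev(data):.2f}, N = {len(data)}")
```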
[Figures-15a,b,c,d appear here: frequency histograms with normal curves superimposed. PROYES: Mean = 64.1, Std. Dev = 11.76; ANTINO: Mean = 36.0, Std. Dev = 17.82; PRONO: Mean = 35.9, Std. Dev = 11.76; ANTIYES: Mean = 64.0, Std. Dev = 17.82; N = 8.00 in each case.]
Figures-15a,b,c,d: Show frequency histograms of LS data with normal curves superimposed on them to see if the data appear normal. Normally distributed data are symmetric about the mean and bell-shaped. Though none of the data above followed a classical bell-shaped symmetric distribution about a mean, they did not vary widely from a normal distribution either.

Box-2a,b: Show SPSS syntax for plotting frequency histograms and for the Kolmogorov-Smirnov test of normality.

1.3 Bivariate correlations of LS scores

It was postulated that there would be a positive correlation between ‘Yes’ responders to pro-online LS
preference questions (‘Pro-Yes’) and ‘No’ responders to anti-online LS preference questions (‘Anti-No’).
The same correlation was expected between ‘Pro-No’ and ‘Anti-Yes’ responders. In order to test these
postulates, a Pearson’s correlation matrix was drawn up [Box-3]. It correlated the percentage figures of
‘Yes’ and ‘No’ responders to both sets of questions. There was weak positive correlation (r = 0.3; N = 8)
between ‘Pro-Yes’ and ‘Anti-No’ responders, and between ‘Pro-No’ and ‘Anti-Yes’ responders. There was
equivalent weak negative correlation (r = -0.3; N = 8) between opposite responders. The high p-values (p =
0.48; 2-tailed) indicated the correlations were not significant, and the two variables were not linearly related.
The direction of association (+ or -) was as per the postulates (Table-16).
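The reported r = 0.291 (and the ±1.000 entries of the matrix below) can be checked directly from the Table-7 percentages. A pure-Python sketch of the standard Pearson formula (function name is mine):

```python
# Pearson's r between the 'Pro-Yes' and 'Anti-No' percentages (Table-16).
# Because 'No' percentages are exact complements of 'Yes' percentages,
# PROYES vs PRONO correlates at exactly -1, as seen in the matrix.

from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

pro_yes = [68, 50, 62, 76, 69, 56, 82, 50]
anti_no = [11, 35, 65, 47, 47, 31, 38, 14]

print(f"r(PROYES, ANTINO) = {pearson_r(pro_yes, anti_no):.3f}")  # 0.291
print(f"r(PROYES, PRONO)  = {pearson_r(pro_yes, [100 - x for x in pro_yes]):.3f}")
```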
                                 PROYES   ANTINO   PRONO    ANTIYES
PROYES    Pearson Correlation    1.000    .291     -1.000   -.291
          Sig. (2-tailed)        .        .484     .000     .484
          N                      8        8        8        8
ANTINO    Pearson Correlation    .291     1.000    -.291    -1.000
          Sig. (2-tailed)        .484     .        .484     .000
          N                      8        8        8        8
PRONO     Pearson Correlation    -1.000   -.291    1.000    .291
          Sig. (2-tailed)        .000     .484     .        .484
          N                      8        8        8        8
ANTIYES   Pearson Correlation    -.291    -1.000   .291     1.000
          Sig. (2-tailed)        .484     .000     .484     .
          N                      8        8        8        8

** Correlation is significant at the 0.01 level (2-tailed)
Box-3: SPSS syntax for bivariate Pearson’s correlation matrix
SPSS Data Editor Window: <Analyze> <Correlate> <Bivariate>; Bivariate Correlations dialog box: enter variables in the Variables list box; endorse the <Pearson’s> check box under Correlation Coefficients, the <2-Tailed> check box under Tests of Significance, and the <Flag significant correlations> check box; <OK>
SPSS Syntax Editor:
CORRELATIONS
/VARIABLES=proyes antino prono antiyes
/PRINT=TWOTAIL NOSIG
/MISSING=PAIRWISE.
Table-16: Shows Pearson’s r correlation coefficient matrix between percentages of responders to online learning style questions, significance values, and the number of cases with non-missing values. The correlation coefficients on the main diagonal are always 1.0, because each variable has a perfect positive linear relationship with itself. Correlations above the main diagonal are a mirror image of those below. The significance level (or p-value) is the probability of obtaining results as extreme as the one observed (i.e. how likely it is that there is no linear association). If the p-value is <0.05 the correlation is significant and the two variables are linearly related. If the p-value is >0.05 the correlation is not significant; even then, the variables may be correlated, but the relationship is not linear.

Box-3: Shows SPSS syntax for the bivariate Pearson’s correlation matrix.

1.4 Reliability analysis of LS scores

It was also considered necessary to perform reliability analysis of LS scores [Box-4] in order to determine
the extent to which the items in the LS questionnaire were related to each other, and get an overall index of
the repeatability or internal consistency of LS scale as a whole. Of the models available through SPSS,
Cronbach’s Alpha (α), Split-half (Guttman, Spearman-Brown (equal / unequal length)), and intra-class
correlation coefficients (to compute inter-rater reliability estimates) were chosen because these were
appropriate for the nature of the questionnaire. Cronbach’s α coefficient, Guttman split-half reliability
coefficient and intra-class correlation coefficients were all 0.42. Standardized item α and Spearman-Brown
reliability coefficients (equal / unequal length) were all 0.45. Hotelling’s T-Squared multivariate test
rejected the null hypothesis that all items in the scale have the same mean (p = 0.003) (Table-17).

Reliability Coefficients of 2 items (‘Pro-Yes’ and ‘Anti-No’)
Cronbach Alpha coefficient                          0.4223
Standardized item alpha                             0.4508
Correlation between forms                           0.2910
Guttman Split-half reliability                      0.4223
Equal-length Spearman-Brown reliability             0.4508
Unequal-length Spearman-Brown reliability           0.4508
Average Measure Intraclass Correlation coefficient  0.4223
Hotelling's T-Squared                               18.9556 (F = 18.9556; Prob. = 0.0033)
Box-4: SPSS syntax for reliability analysis
SPSS Data Editor Window: <Analyze> <Scale> <Reliability Analysis>; enter variables in the Reliability Analysis dialog box; <Statistics>: endorse the <Hotelling’s T-Squared> and <Intraclass Correlation Coefficient> check boxes, <Continue>; endorse <Alpha>, and then <Split-half>, from the Model list box; <OK> each time
SPSS Syntax Editor:
RELIABILITY /VARIABLES=proyes antino
/FORMAT=NOLABELS
/SCALE(ALPHA)=ALL/MODEL=ALPHA
/STATISTICS=HOTELLING ANOVA
/ICC=MODEL(MIXED) TYPE(CONSISTENCY) CIN=95 TESTVAL=0.
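The α = 0.4223 reported below can be reproduced from the table percentages with the textbook Cronbach formula; for a two-item scale, the standardized α reduces to the Spearman-Brown formula 2r / (1 + r), which is why the Spearman-Brown coefficients equal the standardized α here. A sketch (function name is mine):

```python
# Cronbach's alpha for the two-item scale ('Pro-Yes', 'Anti-No'), computed
# from the Table-7 percentages: alpha = k/(k-1) * (1 - sum of item
# variances / variance of the summed scale).

from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

pro_yes = [68, 50, 62, 76, 69, 56, 82, 50]
anti_no = [11, 35, 65, 47, 47, 31, 38, 14]

alpha = cronbach_alpha([pro_yes, anti_no])
print(f"alpha = {alpha:.4f}")  # alpha = 0.4223

r = 0.291  # inter-item correlation from Table-16
print(f"standardized alpha = {2 * r / (1 + r):.4f}")  # 0.4508
```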
Table-17: Shows results of reliability analysis (Cronbach Alpha and Split-half (Guttman and Spearman-Brown)) using SPSS®.

Box-4: Shows SPSS syntax for reliability analysis.

1.5 Data reduction – Factor analysis of LS scores

The purpose of factor analysis was to determine if there were a limited number of factors or components in
the data that could account for the majority of variances observed among the variables. The objectives of
this analysis were to: (a) determine how many important components were present in the data through a
principal component analysis (PCA); (b) determine to what extent the important (extracted) components
were able to explain the observed correlations between variables; (c) rotate the factors / components in order
to make their interpretation more understandable; (d) determine which items had high loadings on each
rotated component / factor; (e) try to identify and name the rotated components / factors; and (f) provide a
list of factor loadings for each item.

1.5.1 Sequence of factor analysis

Through the SPSS Data Editor Window, <Analyze> <Data Reduction> <Factor…> was selected. In the Factor Analysis dialog box (Figure-16a), all variables were entered in the Variables: box, and <Descriptives>, <Extraction>, <Rotation> and <Scores> were successively endorsed [Box-5]. In each sub-dialog box, appropriate options were endorsed, as
shown in the screenshots below (Figures-16b,c,d,e). The meaning of each option, its significance in factor analysis, and its interpretation are described in the output section of factor analysis.
Box-5: SPSS Syntax Editor
FACTOR
/VARIABLES proyes antino prono antiyes /MISSING LISTWISE
/ANALYSIS proyes antino prono antiyes
/PRINT UNIVARIATE INITIAL CORRELATION SIG DET REPR EXTRACTION ROTATION
/PLOT EIGEN ROTATION
/CRITERIA MINEIGEN(1) ITERATE(25)
/EXTRACTION PC
/CRITERIA ITERATE(25)
/ROTATION VARIMAX
/SAVE REG(ALL)
/METHOD=CORRELATION.
Figure-16 (a-e): Screenshots of various dialog boxes for Factor Analysis. See text for details.

Box-5: SPSS Syntax Editor commands for Factor Analysis.

1.5.2 Factor analysis outputs

Univariate descriptives: The first output was univariate descriptives of the variables, giving the number of valid observations, the mean, and the standard deviation for each variable (Table-18a).

          Mean    Std. Deviation   Analysis N
PROYES    64.13   11.76            8
ANTINO    36.00   17.82            8
PRONO     35.88   11.76            8
ANTIYES   64.00   17.82            8

Table-18a: Table of descriptive statistics of the variables.

Correlation matrix: The next output was the correlation matrix between variables. This has been described
earlier under bivariate correlations. The shortcomings of factor analysis in this study accruing from this
result are elaborated in Chapter-4 under ‘Discussion’.
Communalities: Next was a table of estimated communalities (Table-18b). These are estimates of that part
of the variability in each variable that is shared with others, and which is not due to measurement error or
variance specific to the variable. They indicate the amount of variance in each variable that is accounted for.
These do not play any direct role in Principal Component Analysis (PCA). Initial communalities are
estimates of the variance in each variable accounted for by all components / factors. For PCA, this is always
equal to 1.0 (for correlation analyses). Extraction communalities are estimates of variance in each variable
accounted for by factors / components in the factor solution. Small values indicate variables that do not fit
well with the factor solution, and should be dropped from the analysis. In this study all extraction
communalities were equal to 1.

          Initial   Extraction
PROYES    1.000     1.000
ANTINO    1.000     1.000
PRONO     1.000     1.000
ANTIYES   1.000     1.000
Extraction Method: Principal Component Analysis

Table-18b: Table of initial and extraction communalities.

Total Variance Explained: This table gives eigenvalues, variance explained, and cumulative variance explained
for the factor solution, and the relative importance of each of 4 principal components (Table-18c). The first
panel gives values based on ‘Initial Eigenvalues’. For the initial solution, there are as many components /
factors as there are variables. The ‘Total’ column gives the amount of variance in the observed variables
accounted for by each component / factor. The ‘% of Variance’ column gives the percent of variance
accounted for by each specific factor or component, relative to the total variance in all variables. The
‘Cumulative %’ column gives the percent of variance accounted for by all factors / components up to and
including the current one. Thus, the Cumulative % for the second factor is the sum of the % of Variance for
the first and second factors. In a good factor analysis, there would be few factors that explain a lot of
variance, and the rest of the factors would explain relatively small amounts of variance. The ‘Extraction Sums of
Squared Loadings’ group gives information about the extracted factors / components. For PCA, these values
will be the same as those under ‘Initial Eigenvalues’.

There are 5 methods of rotation in the SPSS package: Varimax, Direct Oblimin, Quartimax, Equamax and
Promax. Varimax (orthogonal; right-angled; uncorrelated) rotation was requested because it simplifies
interpretation of factors / components. The results are seen in ‘Rotation Sums of Squared Loadings’ group.
The variance accounted for by rotated factors / components may be different from those reported for the
extraction; but the Cumulative % for the set of factors or components are always the same.
            Initial Eigenvalues               Extraction Sums of Squared Loadings   Rotation Sums of Squared Loadings
Component   Total      % of Var.  Cum. %      Total   % of Var.  Cum. %             Total   % of Var.  Cum. %
1           2.582      64.551     64.551      2.582   64.551     64.551             2.582   64.551     64.551
2           1.418      35.449     100.000     1.418   35.449     100.000            1.418   35.449     100.000
3           2.288E-16  5.719E-15  100.000
4           1.249E-16  3.123E-15  100.000
Extraction Method: Principal Component Analysis

Table-18c: Shows the table of Total Variance Explained. This table gives eigenvalues, variance explained, and cumulative variance explained for the factor solution (see text for details).
Table-18c shows the importance of each of the 4 principal components. According to the Kaiser Criterion (which is the default in SPSS), factors / components with eigenvalues >1 (for a correlation matrix) are extracted. Only the first 2 extracted components had eigenvalues >1.00 (Component-1: 2.582; Component-2: 1.418), and
together these explained 100% of total variability in the data. This led to the conclusion that a 2-factor
solution would probably be adequate. The middle part of the table shows eigenvalues and % of variance
explained for the 2 factors of the initial solution that are regarded as important. Clearly the 1st factor of the
initial solution is much more important than the 2nd. In the right-hand part of the table, the eigenvalues and % of variance explained for the 2 rotated factors are displayed. Taken together, the 2 rotated factors explain exactly the same amount of variance as the 2 factors of the initial solution. The effect of rotation should be to spread
the importance more or less equally between the 2 rotated factors. But here rotation has not resulted in any
change in the importance of the 2 extracted factors. This drawback is highlighted below and in Chapter-4.
[Scree plot: Y-axis Eigenvalue (0.0 to 3.0); X-axis Component Number (1 to 4)]
Figure-17: Shows the scree plot of factors for extraction. It shows the factors / components with eigenvalues over 1; thereafter the slope (‘scree’) of the plot flattens out. This is a graphic display of the first column of the table in the preceding section, and gives an idea of the number of factors to be extracted.

Scree plot: The previous conclusion was supported by the scree plot (which simply displays the same data visually). The Kaiser criterion retains all variables with eigenvalues greater than 1; the scree plot method looks at where the slope of the plot flattens out (the ‘scree’) and extracts the factors / components before the flattening. We have fulfilled the first objective of factor analysis: ‘to determine how many important factors / components are present in the data through a principal component analysis.’

Component Matrix: The next output reported the loadings for each variable on the unrotated components / factors (Table-18d). Each number represents the correlation between the item / variable and the unrotated factor / component. These correlations help to interpret the factors / components by looking for a common thread
among the variables that have large loadings for a particular factor / component. This is further elaborated
after the factors have been rotated in Table-18f.
Component
            1       2
PROYES    -.803    .595
ANTINO    -.803   -.595
PRONO      .803   -.595
ANTIYES    .803    .595
Extraction Method: Principal Component Analysis. a. 2 components extracted.
Table-18d: Shows the loadings for each item / variable on the unrotated components / factors
Reproduced Correlations: The next outputs are the ‘Reproduced Correlations’ and ‘Residuals’ from the factor
analysis (Table-18e). This shows the predicted pattern of relationships if the factor analysis solution is
assumed to be correct. If the solution is a good one, the reproduced correlations would be close to the
observed values. Residuals show the difference between the predicted and observed values. For a good
factor analysis solution, most of these values should be small. Table-18e shows the extent to which the
original correlation matrix could be reproduced from the 2 extracted factors. The reproduced correlations were the same as the original correlations. The small residuals showed that there was very little difference between the
reproduced correlations and the correlations actually observed between the variables. We have fulfilled the
second objective of factor analysis; ‘To what extent the important factors / components were able to explain
the observed correlations between variables.’ PROYES ANTINO PRONO ANTIYES Reproduced Correlations PROYES 1.000b .291 -1.000 -.291 ANTINO .291 1.000b -.291 -1.000 PRONO -1.000 -.291 1.000b .291 ANTIYES -.291 -1.000 .291 1.000b
Residualsa PROYES 1.110E-16 -1.221E-15 3.331E-16 ANTINO 1.110E-16 -1.110E-16 1.110E-16 PRONO -1.221E-15 -1.110E-16 -2.776E-16 ANTIYES 3.331E-16 1.110E-16 -2.776E-16 Extraction Method: Principal Component Analysis a Residuals are computed between observed and reproduced correlations. There are 0 (.0%) non-redundant residuals with absolute values > 0.05 b Reproduced communalities Table-18e: Shows reproduced correlations. See text for details Rotated Solutions: A rotation method (orthogonal or oblique) must be selected to obtain a rotated solution.
In this study (Varimax; orthogonal rotation), the outputs are rotated pattern matrix and factor / component
transformation matrix. These two are displayed below (Tables-18f,g). Rotated pattern matrix: This output reports the loadings for each item / variable on the components / factors
after rotation. Each number represents the partial correlation between the item and the rotated factor. These
correlations can help to interpret the factors / components. This is done by looking for a common thread
among the item / variables that have large loadings for a particular factor / component. Table-18f shows that
variables Pro-No and Anti-Yes (anti-online items of questionnaire) loaded strongly positively on Component
1. Variables Pro-Yes and Anti-No (pro-online items of questionnaire) loaded equally strongly negatively on
Component 1. All items showed equivocal loadings on Component 2. Comparison with Table-18d showed
that rotation had not produced any difference in the loadings of variables on the components. We have
fulfilled the 3rd and 4th objectives of factor analysis; ‘to rotate the factors / components and determine which
items have high loadings on each rotated component.’

Component
            1       2
PROYES    -.803    .595
ANTINO    -.803   -.595
PRONO      .803   -.595
ANTIYES    .803    .595
Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. Rotation converged in 1 iteration.
Table-18f: Shows the rotated pattern matrix (see text for details, and also Figure-18)
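The reproduced correlations of Table-18e can be recovered from these loadings: for any two items, the reproduced correlation is the sum, across the extracted components, of the products of the two items’ loadings. A minimal sketch using the loadings of Tables-18d/f (illustrative only; small deviations from ±1.000 reflect rounding of the published loadings):

```python
# Reproduced correlation between two items = sum over components of
# (loading of item A) x (loading of item B). Loadings from Tables-18d/f.
loadings = {
    "PROYES":  (-0.803,  0.595),
    "ANTINO":  (-0.803, -0.595),
    "PRONO":   ( 0.803, -0.595),
    "ANTIYES": ( 0.803,  0.595),
}

def reproduced_corr(a, b):
    """Sum of products of loadings across the 2 extracted components."""
    return sum(la * lb for la, lb in zip(loadings[a], loadings[b]))

# PROYES vs ANTINO: (-0.803)(-0.803) + (0.595)(-0.595) = 0.291 (Table-18e)
print(round(reproduced_corr("PROYES", "ANTINO"), 3))
```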
Factor transformation matrix: This output is used to compute the rotated factor matrix from the original (un-
rotated) factor matrix. It gives information about the extent to which the factors have been rotated, and
describes the specific rotation applied to the factor solution. If the off-diagonal elements are close to zero,
the rotation was relatively small. If the off-diagonal elements are large (greater than ±0.5), a larger rotation
was applied. The angle of rotation can be calculated by treating the correlation coefficient as a cosine.
Cosine of 0° is 1 and cosine of 90° is 0.[31] Table-18g implies that the factors / components are mutually at
90° to each other (orthogonal) in this study. This is further elaborated under ‘Discussion’ in Chapter-4.
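Treating each entry of the transformation matrix as a cosine, as described above, the implied angles can be recovered in a couple of lines. An illustrative sketch using the Table-18g values:

```python
import math

# Factor transformation matrix from Table-18g (Varimax rotation).
transformation = [[1.000, 0.000],
                  [0.000, 1.000]]

# Treat each coefficient as a cosine and recover the angle in degrees.
angle_diagonal = math.degrees(math.acos(transformation[0][0]))  # cos 0 deg = 1
angle_off_diag = math.degrees(math.acos(transformation[0][1]))  # cos 90 deg = 0

# Diagonal entries of 1 mean the axes were left unchanged by the rotation;
# off-diagonal entries of 0 mean the two components are mutually at 90 deg.
print(angle_diagonal, angle_off_diag)
```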
Component      1       2
1            1.000    .000
2             .000   1.000
Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
Table-18g: Shows the factor / component transformation matrix (see text for details)

Component plot: Finally, SPSS produced a 2-dimensional plot of the 4 variables on axes representing the two rotated factors / components (Figure-18).
[Component Plot in Rotated Space: X-axis Component 1 (-1.0 to 1.0); Y-axis Component 2 (-1.0 to 1.0); ‘antiyes’ and ‘prono’ lie at the positive end of Component 1, ‘proyes’ and ‘antino’ at the negative end]
Figure-18: Shows the 2 components / factors plotted in rotated space; the location of the 4 original variables in relation to the rotated components / factors is seen. Anti-online responses load strongly positively on Component 1 and pro-online responses load strongly negatively on Component 1. However, they are all equivocally distributed with respect to Component 2. See this figure in conjunction with Table-18f.

Naming the factors: Anti-online items of the questionnaire (Pro-No and Anti-Yes) loaded strongly
positively on Component 1, and pro-online items of the questionnaire (Pro-Yes and Anti-No) loaded equally
strongly negatively on Component 1 (Table-18f). Therefore it seems logical to identify Component 1 as
‘Anti-onlineness’ factor. All items showed equivocal loading on Component 2. Therefore Component 2
cannot be named appropriately. We have fulfilled the 5th objective of factor analysis; ‘to try to identify and
name the rotated factors / components.’

Saving factor scores as variables: There are 3 methods of saving factor scores as variables: Regression,
Bartlett and Anderson-Rubin (Figure-16e). All are methods for estimating factor score coefficients. In
Regression the scores produced have Mean = 0 and a variance equal to the squared multiple correlations
between estimated factor scores and true factor values. The scores may be correlated even when factors are
orthogonal. In Bartlett the scores produced have a Mean = 0. The sum of squares of the unique factors over
the range of variables is minimized. Anderson-Rubin is a modification of Bartlett that ensures orthogonality
of estimated factors. The scores produced have a Mean = 0, SD = 1, and are uncorrelated. In this study, the
Regression method was employed. The Saved Factor scores were added to the data. One variable was
created for each factor in the solution. A table in the output showed the name of each new variable and a
variable label indicating the method used to calculate factor scores (screenshot in Figure-19). These are
standardized scores, obtained by applying the rotated factor loadings to the standardized score of each item
on each of the variables (like making a prediction using a regression equation).

It can be seen from Figure-19 that question numbers 3, 4 and 7 had low standardized scores on the first
factor (-0.900, -1.012 and -1.015 respectively). Therefore these 3 questions could be said to be low in ‘Anti-
onlineness’; by default they could be considered more pro-online in nature. In contrast, question number 8
had a high standardized score on first rotated factor (1.515). Therefore it could be said to be high on ‘Anti-
onlineness’. We have fulfilled the 6th objective of factor analysis; ‘to provide a list of factor loadings for
each item.’
Figure-19: Screenshot of SPSS Data Editor Window showing saved factor scores as variables

2. Personal factors vs. general considerations
2.1 Bivariate correlations – personal vs. general characteristics
Three personal parameters, ‘Motivation’ of the student, predictability of ‘Schedule’ and ‘Hours of study’
that the student was capable of putting in were analyzed against two general considerations; ‘Online
concerns’ and ‘Age’. Since these parameters per se were nominal variables, the weighting factors allotted to
them, ranked according to their importance, multiplied by the percentage of students (i.e. the weighted
scores) were considered for analysis. Table-19 shows the correlation matrix between weighted scores of the
five parameters; ‘Motivation’ (for online studies), predictability of ‘Schedule’, ‘Hours’ (of online study
capable), (online) ‘Concerns’, and ‘Age’ at the time of undertaking the present study. Since the portions of the matrix above and below the red diagonal line are mirror images, only the top part is considered. There
was a significant negative correlation between ‘Concerns’ and ‘Age’ (r = - 0.96; p = 0.037; 2-tailed; N = 4).
There was also a strong negative correlation between the weighted scores for predictability of ‘Schedule’
and that for ‘Hours’, which fell just short of significance (r = - 0.996; p = 0.058; 2-tailed; N = 3).
                                 MOTIVATI   SCHEDULE   HOURS    CONCERNS   AGE
MOTIVATI   Pearson Correlation    1.000      .752      -.069     -.585     .780
           Sig. (2-tailed)          .        .458       .912      .415     .220
           N                        5          3          5         4        4
SCHEDULE   Pearson Correlation     .752     1.000      -.996      .969    -.924
           Sig. (2-tailed)         .458        .        .058      .159     .249
           N                        3          3          3         3        3
HOURS      Pearson Correlation    -.069     -.996      1.000     -.276     .007
           Sig. (2-tailed)         .912      .058         .       .724     .993
           N                        5          3          5         4        4
CONCERNS   Pearson Correlation    -.585      .969      -.276     1.000    -.963*
           Sig. (2-tailed)         .415      .159       .724        .      .037
           N                        4          3          4         4        4
AGE        Pearson Correlation     .780     -.924       .007     -.963*   1.000
           Sig. (2-tailed)         .220      .249       .993      .037       .
           N                        4          3          4         4        4
* Correlation is significant at the 0.05 level (2-tailed)
Table-19: Shows the correlation matrix between 5 parameters within personal factors and general considerations

2.2 Linear regression – personal vs. general characteristics
Given the correlations in Table-19, an attempt was made to develop a predictive model of relationships. Multiple
linear regressions were tried using one measured dependent variable against several independent variables,
to determine the predictive power (if any) of the latter. The SPSS syntax [Box-6] is; SPSS Data Editor
Window; select <Analyze>, <Regression>, <Linear>; Linear Regression dialog box, enter dependent /
independent variables in respective list boxes; endorse successively <Statistics>, and then <Plots>;
appropriate options in each dialog box endorsed as shown in Figures-20a,b. Three models of linear
regression were tried; ‘Motivation’ vs. ‘Hours’ and ‘Age’ vs. ‘Hours’ models did not fit the existing data
well. However, the ‘Motivation’ and ‘Age’ vs. ‘Concerns’ model fitted the data well. The outputs from this multiple linear regression are shown sequentially below.
Box-6: SPSS Syntax Editor output for multivariable linear regression:
REGRESSION
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS CI R ANOVA COLLIN TOL CHANGE
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT concerns
  /METHOD=ENTER motivati age
  /PARTIALPLOT ALL
  /RESIDUALS HIST(ZRESID) NORM(ZRESID).
Figure-20a,b: Shows the ‘Statistics’ and ‘Plots’ dialog boxes and the options that were endorsed in each.

2.2.1 Multiple linear regression outputs
Variables entered: ‘Motivation’ and ‘Age’ were entered as independent variables and ‘Concerns’ as the dependent variable. Out of the 5 methods (Enter, Stepwise, Backward, Remove, Forward), the ‘Enter’ method was selected (Table-20a).

Model   Variables Entered   Variables Removed   Method
1       AGE, MOTIVATI       .                   Enter
a. All requested variables entered. b. Dependent Variable: CONCERNS
Table-20a: Variables Entered / Removed

Model summary: Table-20b displays R, R-squared, adjusted R-squared and the standard error. R is the correlation between the observed and predicted values of the dependent variable. The meanings of the range,
direction, strength and significance of R have been described earlier. R-squared is the proportion of
variation in dependent variable explained by the regression model. R-squared values range from 0 to 1. The
sample R-squared tends to optimistically estimate how well the model fits the population. Adjusted R-
squared attempts to correct R-squared to more closely reflect the goodness-of-fit of the model in the
population. R-Squared is used to determine which model is best. Small values indicate that the model does
not fit the data well. A model with a high value of R-squared that does not contain too many variables
should be chosen; in this case R-Squared = 0.998. Models with too many variables are often over-fit and
hard to interpret. In this case there were only 2 independent variables.
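The Adjusted R-Square of Table-20b and the F-statistic of Table-20c can be reproduced from the reported quantities (n = 4 cases, k = 2 predictors; sums of squares taken from Table-20c). A hedged sketch, for illustration only:

```python
# Adjusted R-squared: corrects R-squared for the number of predictors.
# n = number of cases, k = number of independent variables.
n, k = 4, 2
r_squared = 0.998                       # from Table-20b
adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - k - 1)
print(round(adj_r_squared, 3))          # 0.994, as in Table-20b

# F-statistic: regression mean square / residual mean square (Table-20c).
ss_regression, df_regression = 6983.089, 2
ss_residual,   df_residual   = 14.911, 1
msr = ss_regression / df_regression     # 3491.544 (MSR)
mse = ss_residual / df_residual         # 14.911 (MSE)
f_statistic = msr / mse                 # ~234.2, matching SPSS's 234.153
print(round(f_statistic, 1))
```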
Model   R      R-Square   Adjusted R-Square   Std. Error of the Estimate   Change Statistics: R Square Change   F Change   df1   df2   Sig. F Change
1       .999   .998       .994                3.86                         .998                                 234.153    2     1     .046
a. Predictors: (Constant), AGE, MOTIVATI. b. Dependent Variable: CONCERNS
Table-20b: Shows the Model Summary. A high R-Square indicates that the model fits the data well. See text for details

ANOVA table: Table-20c summarizes the results of an analysis of variance (ANOVA). The sum of squares,
degrees of freedom (df), and mean square are displayed for 2 sources of variation, regression and residual.
The output for Regression displays information about the variation accounted for by this model. The output
for Residual displays information about the variation that is not accounted for by this model. The output for
Total is the sum of the information for Regression and Residual. A model with a large regression sum of
squares in comparison to the residual sum of squares indicates that the model accounts for most of variation
in the dependent variable. In this case, Regression Sum of Squares = 6983.089, which was much higher
than the Residual Sum of Squares (= 14.911). A very high residual sum of squares indicates that the model fails to explain a lot of the variation in the dependent variable; it may be necessary to look for additional factors
that help account for a higher proportion of the variation in the dependent variable. The mean square is the
sum of squares divided by the df. The F-statistic is the regression mean square (MSR) divided by the
residual mean square (MSE).The regression df is the numerator df and the residual df is the denominator df
for the F-statistic. The total number of df is the number of cases minus 1 (N – 1). If the significance value
(Sig.) of the F-statistic is <0.05 then the independent variables do a good job explaining the variation in the
dependent variable. If Sig. is >0.05 then the independent variables do not explain the variation in the
dependent variable. In this case Sig. = 0.046.

Model          Sum of Squares   df   Mean Square      F-statistic   Sig.
1  Regression  6983.089         2    3491.544 (MSR)   234.153       .046
   Residual      14.911         1      14.911 (MSE)
   Total       6998.000         3
a. Predictors: (Constant), AGE, MOTIVATI. b. Dependent Variable: CONCERNS
Table-20c: Shows the Analysis of Variance (ANOVA). A high Regression Sum of Squares and a low significance value indicate that the independent variables do a good job of explaining the variation in the dependent variable. See text for details.

Coefficients: Table-20d shows the estimates of the coefficients in the regression model. The unstandardized
coefficients are the coefficients of the estimated regression model. In this case, the estimated model is
Concerns = 80.261 + 0.638Motivation - 0.898Age. If independent variables are measured in different units,
the standardized coefficients (beta; β) attempt to make the regression coefficients more comparable. If the
data were transformed to z scores prior to regression analysis, one would get the β values as unstandardized
coefficients. The t-statistics help to determine the relative importance of each variable in the model. t-values
well < -2 or > +2 are guides regarding useful predictors. In this case t(Constant) = 19.094; t(Motivation) =
5.746; t(Age) = -17.538. Each of them conforms to the stipulated criteria. ‘Age’ was more important than
‘Motivation’ in predicting ‘Concerns’.
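The estimated model can be applied directly to new weighted scores. A minimal sketch; the input values below are purely hypothetical illustrations, not survey data:

```python
# Estimated model from Table-20d:
# Concerns = 80.261 + 0.638*Motivation - 0.898*Age
def predict_concerns(motivation, age):
    """Predicted weighted 'Concerns' score from weighted 'Motivation' and 'Age' scores."""
    return 80.261 + 0.638 * motivation - 0.898 * age

# Hypothetical weighted scores, for illustration only:
print(round(predict_concerns(60.0, 50.0), 3))  # 80.261 + 38.28 - 44.9 = 73.641
```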
Model         B        Std. Error   Beta      t-statistic   Sig.   95% CI for B (Lower, Upper)   Tolerance   VIF
1 (Constant)  80.261   4.203                   19.094       .033   (26.852, 133.671)
  MOTIVATI      .638    .111         .424       5.746       .110   (-.773, 2.049)                .392        2.554
  AGE          -.898    .051       -1.294     -17.538       .036   (-1.548, -.247)               .392        2.554
a. Dependent Variable: CONCERNS
Table-20d: Shows the Coefficients (unstandardized B, standardized Beta, t-statistics, significance, confidence intervals and collinearity statistics). Extreme t-values indicate good predictor capacity. See text for details

Collinearity diagnostics: Table-20e displays statistics that help to determine if there are any problems with
collinearity. Collinearity / multicollinearity is an unhealthy situation where correlations among independent
variables are strong. Eigenvalues provide an indication of how many distinct dimensions there are among
the independent variables. When several eigenvalues are close to zero, the variables are highly inter-
correlated and small changes in the data values may lead to large changes in the estimates of the
coefficients. In this case, only the 3rd dimension eigenvalue is very low (5.553E-02; Table-20e). Condition indices are the square roots of the ratios of the largest eigenvalue to each successive eigenvalue. A condition
index >15 indicates a possible problem and an index >30 suggests a serious problem with collinearity. In
this case the highest condition index = 6.992; well below 15. The variance proportions are the proportions
of the variance of the estimate accounted for by each principal component associated with each of the
eigenvalues. Collinearity is a problem when a component associated with a high condition index contributes
substantially to the variance of 2 or more variables. In this case the highest condition index (6.992), that
associated with 3rd dimension, contributed very highly to variance proportion of Motivation (0.99), and
somewhat to that of Age (0.68). But this condition index is <15. Furthermore, Table-19 shows that
correlation between ‘Motivation’ and ‘Age’ is moderate and insignificant (r = 0.78; p = 0.22; 2-tailed; N = 4). Therefore a problem of collinearity was suspected but not definitely confirmed.
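The condition indices of Table-20e can be reproduced as the square roots of the ratios of the largest eigenvalue to each eigenvalue. A short sketch (the slight discrepancy for the 2nd dimension, 3.436 vs. SPSS’s 3.439, reflects rounding of the published eigenvalues):

```python
import math

# Eigenvalues of the three dimensions from Table-20e.
eigenvalues = [2.715, 0.230, 5.553e-02]

# Condition index = sqrt(largest eigenvalue / this eigenvalue).
largest = max(eigenvalues)
condition_indices = [math.sqrt(largest / ev) for ev in eigenvalues]

print([round(ci, 3) for ci in condition_indices])  # ~[1.0, 3.436, 6.992]
```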
Model 1   Dimension   Eigenvalue   Condition Index   Variance Proportions (Constant, MOTIVATI, AGE)
          1           2.715        1.000             .02, .01, .02
          2            .230        3.439             .52, .00, .30
          3           5.553E-02    6.992             .46, .99, .68
a. Dependent Variable: CONCERNS
Table-20e: Shows the Collinearity Diagnostics. Very low eigenvalues, very high condition indices and high variance proportions suggest problems of collinearity. In this case collinearity is suspected but not positively confirmed. See text for details

Residual statistics: Table-20f displays statistics about the residuals and predicted values. For each case, the
predicted value is the value predicted by the regression model. And for each case, the residual is the
difference between the observed value of the dependent variable and the value predicted by the model.
Residuals are estimates of the true errors in the model. If the model is appropriate for the data, the residuals
should follow a normal distribution. Standardized predicted values are predicted values standardized to have
mean = 0 and SD = 1. Similarly, standardized residuals are ordinary residuals divided by the sample SD of
the residuals and have mean = 0 and SD = 1. The minimum, maximum, mean, SD and sample size (N) are
displayed for predicted value, residual, standardized predicted value, and standardized residual. In this case
the residuals are very low. Therefore estimated true errors in the model were minimal.
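The ‘Std. Residual’ figures of Table-20f can be reproduced by dividing each residual by the standard error of the estimate (the square root of the residual mean square, about 3.86; Tables-20b,c), which is the divisor SPSS uses for standardized residuals here. A short sketch:

```python
import math

# Standardized residual = residual / standard error of the estimate,
# where the std. error of the estimate is sqrt(residual mean square).
mse = 14.911                          # residual mean square, Table-20c
std_error_estimate = math.sqrt(mse)   # ~3.86, as in Table-20b

residuals = [-2.75, 2.62]             # minimum and maximum residuals, Table-20f
std_residuals = [r / std_error_estimate for r in residuals]

# ~[-0.712, 0.678]; Table-20f reports -.711 and .679 from unrounded residuals.
print([round(z, 3) for z in std_residuals])
```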
                       Minimum   Maximum   Mean       Std. Deviation   N
Predicted Value          .43      90.38    48.00      48.25            4
Residual               -2.75       2.62    2.55E-15    2.23            4
Std. Predicted Value   -.986       .878     .000       1.000           4
Std. Residual          -.711       .679     .000        .577           4
a. Dependent Variable: CONCERNS
Table-20f: Shows the Residuals Statistics. Residuals reflect the true errors in the model and should be low. See text for details.

Partial regression plots: Regression plots between ‘Concerns’ (Y-axis) vs. ‘Motivation’ (X-axis) and ‘Age’ (X-axis) are shown respectively in Figures-21a,b. ‘Concerns’ increases linearly with ‘Motivation’ and decreases linearly with ‘Age’. In each case, the red line indicates the interpolation between the values.
[Figure-21a: Partial Regression Plot; Dependent Variable CONCERNS (Y-axis, -20 to 20) vs. MOTIVATI (X-axis, -30 to 20)]
[Figure-21b: Partial Regression Plot; Dependent Variable CONCERNS (Y-axis, -60 to 40) vs. AGE (X-axis, -40 to 60)]
Figure-21a: Shows the regression plot of ‘Concerns’ vs. ‘Motivation’; ‘Concerns’ increases linearly with ‘Motivation’. Figure-21b: Shows the regression plot of ‘Concerns’ vs. ‘Age’; ‘Concerns’ decreases linearly with ‘Age’. In each case, the red line indicates the interpolation between the values.

2.2.2 Predictive model
The standard Regression Equation is;
Y = A + BX
Where,
• Y is the value on the Y-axis
• X is the value on the X-axis
• A is the intercept on Y-axis (the value taken by Y when X = 0); this is Constant for the equation
• B is the slope of the regression line, also called the Regression Coefficient. It reflects the average change in Y for a unit change in X (Slope = Opposite / Adjacent). If B is negative, then Y decreases as X increases; if B is positive, Y increases as X increases.
Regression formula derived from this statistical exercise is;
‘Concerns’ = Constant + [0.638(‘Motivation’)] – [0.898(‘Age’)]
Where,
• ‘Concerns’ = Weighted scores for online ‘Concerns’ of students; this corresponds to ‘Y’ of the
regression equation
• Constant = 80.261; this corresponds to ‘A’ of the regression equation
• ‘Motivation’ = Weighted scores for ‘Motivation’ of students for online studies; this corresponds to
‘X’ of one regression equation (say ‘X1’)
• ‘Age’ = Weighted scores for ‘Age’ of students at time of study; this corresponds to ‘X’ of another
regression equation (say ‘X2’)
• 0.638 corresponds to one ‘B’; slope of one regression line (say ‘B1’)
• -0.898 corresponds to another ‘B’; slope of another regression line (say ‘B2’)
Therefore, the individual regression equations are;
Equation 1: ‘Concerns’ = 80.261 + 0.638(‘Motivation’) ……..[Y = A + B1X1]
Equation 2: ‘Concerns’ = 80.261 – 0.898(‘Age’) ……………...[Y = A - B2X2]

3. Power of study
It was decided to calculate the power of the tests to detect a difference, using the software package G*Power
(Figure-22).[32]
Figure-22: G*Power statistical power calculation package from the Department of Psychology, Bonn University.

In order to determine which test was most successful in generating the best power, given the sample size and
other criteria of this study, G*Power package was used to perform 3 types of power analyses; (a) Post hoc
power analysis to determine the power of the study with the sample size (N = 35) that was used in the study;
G*Power generated power values for the given sample size, effect size and alpha (α) levels; (b) A priori
power analysis to determine the ideal sample size required; G*Power generated the sample size for given
effect size, α levels and power values; and (c) Compromise power analysis to arrive at ‘compromise’ power
value, accepting equal α and beta (β) error probabilities; G*Power generated α and β values for the given
sample size, effect size and β / α ratio.

3.1 Post-hoc power calculation and results
First a post-hoc power analysis was performed. This calculated the power of the statistical study to detect a
(small, medium or large) difference. As a preliminary, it was necessary to compute Cohen’s effect size
index ‘d’, which was defined as:
USAIM Online Survey; Dr S. Sanyal, Assoc. Prof., Faculty of Anatomy & Neurosciences, USAIM, Seychelles May 2008 37
Web-based Survey and Analysis of USAIM Students’ Online Compatibility – Pilot Study
d = | μ1 − μ2 | / σ

where μ (mu) refers to the population means, and σ (sigma) refers to the population standard deviation (taken as 3 for general purposes).

Cohen’s effect size convention stipulated that when values of ‘d’ are 0.2, 0.5 and 0.8, the effect sizes are considered ‘small’, ‘medium’ and ‘large’ respectively. A medium-size effect (Cohen’s ‘d’ = 0.5) was
specified at the outset, before analyzing the data. This could be interpreted in the context of the present study
to mean the power of the test to detect a ‘medium’ sized effect. This choice of effect size was determined by
the theoretical context of the study and related research results published previously.[33-38] Another
requirement was to select the acceptable level of α, (the probability (p) value). A p value of 0.05 was
selected for this study, which is generally considered suitable. Delta (δ) was the non-centrality parameter of
the t-distribution. The statistical program used the sample size to compute the relevant noncentrality
parameter of the noncentral t-distribution, which occurs if H1 (the alternative hypothesis) is true. Thus,
given the sample size, Cohen’s effect size ‘d’ value (0.5), and α level (0.05), post hoc analysis gave the power of the test and the noncentrality parameter value δ. The power of a test is defined as 1 − β.[33] This analysis
was performed with the following protocols; t-test for means and accuracy mode. The criteria were; (a)
Cohen’s effect size, d = 0.5, (b) α = 0.05, (c) Sample size = 35, (d) Two-tailed and one-tailed. The results
with 1-tailed criteria were; (a) Power = 0.43, (b) Critical t(34) = 1.7, and (c) noncentrality parameter δ = 1.5.
The 2-tailed results, keeping other criteria same, were; (a) Power = 0.30; (b) Critical t(33) = 2.03; and (c) δ
= 1.48.

3.2 A priori power calculation and results
A priori analysis should ideally have been performed at the beginning. But given the poor power from the
post hoc study, it was performed afterwards, in order to determine the number of subjects that would have
been required to generate more power. In this analysis the acceptable α and β levels and effect size ‘d’ were
known; α = β = 0.05. Therefore, power (1- β) = 0.95. This was considered as the ideal power value. Size of
the effect, d = 0.5 (medium effect). Given these parameters, a priori analysis gave the total sample size
required and the noncentrality value δ.[33] This analysis was performed with the same protocols as previously. The
criteria were; (a) Effect size, d = 0.5, (b) α = 0.05, (c) Power (1 – β) = 0.95; (d) two-tailed and one-tailed.
The results with 1-tailed criteria were; (a) Total sample size required = 102, (b) Critical t(100) = 1.66; (c)
Noncentrality parameter δ = 2.52; and (d) Actual power = 0.8059. The 2-tailed results, keeping all the other
criteria constant, were; (a) Total sample size required = 128; (b) Critical t(126) = 1.98; (c) δ = 2.83; and (d) Actual power = 0.8015.

3.3 Compromise power calculation and results
Since the sample size required was far too many to procure under the constraints of the present study,
compromise power analysis was performed next. In this the maximum possible sample size and Cohen’s
effect size (d = 0.5) were fixed. It was necessary to specify the relative seriousness of α and β error
probabilities, denoted by ‘q’. Type I (α) error, or false positive error, is the probability of falsely accepting
H1 and rejecting H0, when in fact the latter is true. Type II (β) error, or false negative error, is the
probability of falsely accepting H0 when in fact H1 is true. In this study, as in any basic research, both errors
were considered equally seriously. Thus, q = β / α = 1 was chosen. Given this parameter, pre-determined
sample size (N = 35), and Cohen’s effect size (d = 0.5), compromise power analysis computed α, power (1-
β) and the non-centrality parameter δ. The same protocol as previously was used for this test. The criteria
included; (a) Total sample size = 35; (b) Cohen’s effect size, d = 0.5; (c) β / α = q = 1; and (d) two-tailed and
one-tailed. The results with 1-tailed criteria were; (a) Power = 0.77, (b) α = 0.23, (c) Critical t(33) = 0.74,
and (d) noncentrality parameter δ = 1.48. The 2-tailed results, keeping all other criteria constant, were; (a)
Power = 0.68, (b) α = 0.31, (c) Critical t(33) = 1.02, and (d) δ = 1.48. 3.4 Summary of power analyses results Table-21 summarizes the post hoc, a priori and compromise power analyses results, and gives the power
values, required sample sizes and α values for both 1-tailed and 2-tailed tests.
Test                          Power             Required N        Alpha (α)
                              1-tail   2-tail   1-tail   2-tail   1-tail   2-tail
Post hoc analysis (N = 35)    0.43     0.30     –        –        –        –
A priori analysis             –        –        102      128      –        –
Compromise analysis (N = 35)  0.77     0.68     –        –        0.23     0.31
Table-21: Summary of results of power analyses

C. UNIVARIATE / BIVARIATE RESULTS SUMMARY

Table-22 summarizes all the results of statistical analysis in this study. The most notable ones are highlighted in blue; they are the overall Compatibility Score and the Predictive model formula.

Parameter                                                    Score / Result
A. Weighted scores for technology access parameters
   • Type of Internet access                                 63.7%
   • Primary computer                                        80%
B. Weighted scores for personal parameters
   • Motivation                                              70%
   • Schedule                                                58.5%
   • Hours of online study                                   57.8%
C. Weighted scores for technical proficiencies               73.9%
D. Weighted scores for online LS preferences                 64%
E. Weighted scores for students' general considerations
   • Online concerns                                         64%
   • Education level                                         54.3%
   • Age                                                     73.3%
F. Overall Compatibility Score of USAIM students             64%
G. LS Score results
   • Pearson's correlations ('Pro-Yes' vs. 'Anti-No')        r = 0.3; p = 0.48; 2-tailed; N = 8
   • Reliability coefficients (Intra-class, Cronbach α,
     Guttman, Spearman-Brown)                                0.42 to 0.45
   • PCA factor analysis                                     Component-1 ('Anti-onlineness' factor)
H. Personal parameters vs. General considerations
   • Pearson's correlation ('Concerns' vs. 'Age')            r = -0.963; p = 0.037; 2-tailed; N = 4
   • Regression analysis ('Concerns' vs. 'Age')              'Concerns' = 80.261 – 0.898('Age')
   • Regression analysis ('Concerns' vs. 'Motivation')       'Concerns' = 80.261 + 0.638('Motivation')
I. Predictive model                                          'Concerns' = Constant + [0.638('Motivation')] – [0.898('Age')]
J. Power analysis
   • Post hoc (N = 35)                                       Power = 0.43 (1-tailed); 0.30 (2-tailed)
   • Compromise (N = 35)                                     Power = 0.77 (1-tailed); 0.68 (2-tailed)
   • A priori                                                Required N = 102 (1-tailed); 128 (2-tailed)
Table-22: Summary of all results of statistical analysis. The most important ones are highlighted in blue.

4. Summary and conclusions

This chapter detailed all the study outcomes that had been earmarked for measurement in the first chapter. The results are summarized in Table-22 of this chapter. Statistical analysis showed that the overall online Compatibility Score of USAIM students was 64%. It also showed that students' Motivation and Age are reasonably good predictors of their online Concerns; the latter were directly related to motivation and inversely related to age. The significance of these results is critically analyzed and discussed in the next chapter.
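The simple linear regressions behind the predictive model in Table-22 can be sketched with ordinary least squares; the numbers below are hypothetical, purely to illustrate the computation, and are not the study data:

```python
from statistics import mean

def simple_ols(x, y):
    """Least-squares intercept a and slope b for y = a + b*x."""
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical (Age, Concerns-score) pairs -- NOT the study data
age = [20, 25, 30, 35]
concerns = [62, 58, 53, 49]
a, b = simple_ols(age, concerns)
# the negative slope b means Concerns fall as Age rises,
# the same direction of association reported in the study
```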
***********
CHAPTER-4: DISCUSSION

“In a day when you don’t come across any problems – you can be sure you are traveling in a wrong path.”
~~Swami Vivekananda~~

1. Introduction

The previous chapter dealt with the statistical analysis of data and enumerated the outcome measures succinctly. This chapter discusses the analysis and the results accruing therefrom, critically compares them with similar studies in the literature, and covers the shortcomings of this pilot study, the lessons learned, and future directions in terms of research and educational implications for USAIM.

2. Two-tier scoring of results

There are many surveys of this nature that assess the online readiness of students.[10,19-24] All of them assess
the students on an individual basis. In contrast, this survey was meant to be on an institutional basis, to
reflect the online readiness of students of USAIM as a whole. Therefore, a 2-tier method of scoring and
grading (each with several steps) was adopted in this study. The 1st-tier consisted of generating weighted
scores at the level of the individual parameters. The 2nd-tier consisted of aggregating these parameter scores
to generate an overall Compatibility Score for the institution as a whole.

A 5-step method was employed for the 1st-tier of scoring. Firstly, the results obtained through the survey
were not individual students’ scores for each parameter, but the collective scores of all the students,
aggregated as percentages. Secondly, weights were attached to the individual response items in order to sort
them in order of importance from the online perspective. Thirdly, the product of the raw percentage score
and weighting factor gave the true weighted score for that item. Fourthly, the sum of all the weighted scores
for all items gave the overall institutional weighted score for that parameter. Finally, this was expressed as
an institutional percentage of maximum for that parameter. Thus, if all students had selected the most
important option in any parameter; i.e. the item carrying the maximum weight, then the institutional score
would have been 100% for that parameter.

A 3-step method was employed for the 2nd-tier of scoring. Firstly, a grading-cum-point system was devised.
Secondly, each parameter score, in turn, was allotted a grade and point, which became the institutional point
for that parameter. And finally, aggregation of all these points gave the overall institutional score for online
readiness. This was termed the overall institutional Compatibility Score for online learning.

This method of scoring and grading the results served 4 purposes. Firstly, it provided a standardized method
of assessing the online readiness of students on an institutional basis. Secondly, it gave a baseline value of
online readiness of students of USAIM. Thirdly, it provided a commonly uniform and standard basis for
comparing with the online readiness of other institutions, notably those affiliated to USAIM. Fourthly, it
fulfilled the first three goals of this study; (1) determine USAIM students’ online preparedness against 5 set
parameters; (2) devise a mathematically objective scoring system for each parameter; and (3) determine the
overall online learning preparedness (Compatibility Score) of USAIM students.

A possible drawback of this system, in its present form, is that it was rather tedious, time-consuming,
manual and therefore potentially error-prone. This set up a vicious cycle of repeated manual checks to detect
possible errors, which in turn consumed more time. The formulae and commands could instead be fed into spreadsheet software, MS® Excel® for example, so as to render the process automatic, less time-consuming and relatively error-free. This could therefore be a project for future consideration.

2.1 Comparison with other scoring methods

As mentioned earlier, other surveys have also devised their own methods of scoring, mostly for individual
assessment. Some were Web-based forms with automated scoring systems,[10,19-22] while others required manual
scoring by the students themselves.[23] In our system, only the initial percentage calculation was automatic;
the rest had to be performed manually. One system based the scores for its tool on a 100-point scale, and
calibrated them to a range that may be expected on a typical exam. The overall compatibility factor arrived at by that system was a combination of 4 sub-factors (similar to our first 4 parameters), weighted to provide an
accurate assessment of the student’s readiness for study in an online degree program. Their grading system
was as shown in Table-1 below.[10] This was similar to our 2nd-tier of grading, though our ranges were
slightly more relaxed, keeping in mind the average capabilities of our student intake. Another system had 3
options; ‘a’, ‘b’, ‘c’ in each of 10 questions. It employed the scoring scheme as shown in Table-2.[23] This
kind of system (the number of questions and method of scoring) was not suitable for an institutional analysis
of the kind performed in our study.

Grade        Score
Excellent    90% or above
Very good    80 – 89%
Good         70 – 79%
Table-1: A method of grading adopted by one center

Scoring: 3 points for each 'a' selected; 2 points for each 'b'; 1 point for each 'c'.
Score     Interpretation
>20       distance-learning course was a real possibility for him/her
11 – 20   distance-learning courses may work; may need to make adjustments in schedule and study habits
<10       distance learning may not be the best alternative; advised to talk to counsellor
Table-2: A scoring system adopted by another center

3. Correlations of LS scores

3.1 Statistical assumptions

All parametric tests (Pearson's product moment correlation coefficient (PMCC) 'r' in this study) assume that the populations from which the samples were taken are normally distributed and that the sample variances are the same. If the data are not normally distributed one would need to try
transforming the data (using the SPSS 'Transform' option) or use a nonparametric procedure that does not require the normality assumption. All non-parametric tests (e.g. Spearman's rho (ρ) correlation coefficient or Kendall's tau (τ) rank correlation coefficient) make no assumptions about the underlying distribution of the sample. They are used when the data are qualitative, i.e. measured on an ordinal (rank) scale, or not normally distributed. They are not as powerful as parametric tests when data are normally distributed. That is why applying both kinds of test to a data set gives variable and somewhat contradictory results.[39]

3.2 Tests of normality

Therefore it was incumbent on our part to determine the normality (or otherwise) of our LS score data
distribution, before applying the appropriate correlation test. One method was to plot frequency distribution
histograms, with normal curves superimposed on them, to check for normality of data distribution. A more
objective method was to conduct tests of normality. Kolmogorov-Smirnov (with Lilliefors Significance
Correction) and Shapiro-Wilk tests in the SPSS Explore procedure are formal tests to check for normality.
The latter test is used only if there are <50 cases. The K-S statistic tests the hypothesis that the data are
normally distributed. A significance value <0.05 rejects the hypothesis, i.e. it indicates the distribution of
data differs significantly from a normal distribution. All the K-S significance values in this study were high
(p = 0.2; df = 8; Lilliefors Significance Correction), thus indicating that the distribution of data did not differ
significantly from normal distribution. These were confirmed on the frequency distribution histograms with
normal curves superimposed on each. Though none of the data followed a classical bell-shaped symmetric
distribution about a mean (which is typical of normally-distributed data), they did not vary widely from a
normal distribution either (Table-15, Figures-15a,b,c,d in Chapter-3). 3.3 Correlation coefficients Correlation is usually tested using a version of the t test. The formula is: t = √ {(n-2) / (1-r2)}. Correlation
coefficients show how one variable increases or decreases as the other variable increases. Since the
distribution of percentage figures in this study did not differ significantly from normal distribution about the
mean, a parametric correlation (Pearson’s PMCC ‘r’) was employed. PMCC assumes the data are normally
distributed. It is a measure of linear association between two variables. It indicates how closely the points lie
to a line. The values of 'r' range from -1 to +1. The closer it is to '0', the weaker the linear association between the two continuous variables. A value of '0' indicates there is no linear relationship at all between the two variables. Values of -1 or +1 indicate that the variables are perfectly linearly related; i.e. the scatterplot
(graph showing relationship between two variables) points lie in a straight line. The sign of the correlation
coefficient indicates the direction of the relationship (positive or negative). Negative values of ‘r’ indicate
that one variable decreases as the other increases. The absolute value of the correlation coefficient indicates
the strength, with larger absolute values indicating stronger relationships (Table-3). Generally, values above
0.4 are clinically significant, but it varies depending on the context of the study. Clinical significance may
not be the same as statistical significance. The statistical significance of a correlation is based on whether
one could be confident that the observed correlation is different from zero. The p-value attached to the
correlation coefficient shows how likely it is that there is no linear association (i.e. ‘r’ is ‘0’) between the
two variables. There are several caveats to consider when determining correlations between two variables. A
correlation coefficient may be strong, but insignificant because of sample size. A significant correlation does
not imply cause and effect. Correlation does not give any information about the magnitude of increase or
decrease of the two variables. Nor does the correlation coefficient give a measure of agreement. Also, the
variables may be strongly associated but not linearly.[39]
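A minimal sketch of the computation behind 'r' and its t statistic follows; the paired scores are hypothetical, not the survey data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical paired scores (not the study data)
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
r = pearson_r(x, y)                      # 0.8
t = r * sqrt((len(x) - 2) / (1 - r**2))  # t statistic with n-2 df
```

The t value is then compared against the t distribution with n-2 degrees of freedom to obtain the p-value.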
Table-3: Correlation strengths
Pearson's PMCC 'r'   Degree of association
0.8 to 1.0           Strong
0.5 to <0.8          Moderate
0.2 to <0.5          Weak
0 to <0.2            Negligible

The correlation matrix in Table-16 (Chapter-3) showed there was weak positive correlation (r = 0.3; N = 8)
between percentages of ‘Pro-Yes’ and ‘Anti-No’ responders, and between ‘Pro-No’ and ‘Anti-Yes’
responders. There was equivalent weak negative correlation (r = -0.3; N = 8) between opposite responders.
Since the values were not relatively close to -1 or 1, the variables were not strongly correlated. But the
direction of association (+ or -) was as per the earlier postulates. The high p-values (p = 0.48; 2-tailed) indicate the correlations were not significant and that the two variables were not reliably linearly related. Even though the correlations between variables were not significant, the scatterplot matrix of the variables (Figure-14) shows these variables appear to be weakly related, not necessarily linearly. The insignificant correlation may be due to low sample size. This is discussed further later in this chapter.

4. Reliability analysis of LS scores

Reliability is the level of consistency of the measurement or procedure, or the degree to which an instrument
measures the same way each time it is used under the same condition with the same subjects. In short, it is
the repeatability of the measurement. Reliability is estimated rather than measured. It can be considered as
the relationship between a so-called 'true score' and an actual 'observed score', which includes a degree of error. Every measure has a tolerance level within which its error is acceptable. Influences on the reliability of a measure include: sample size, number of equivalent measures (up to 8), and the instrument / human
characteristics. Reliability is usually estimated by 3 means; (a) Internal consistency reliability; (b) Test-
retest reliability; and (c) Inter-rater reliability. Some other methods of assessing reliability include;
equivalent (alternative / multiple / pure) measures (using different measures for same thing), and split-half
random allocation (questionnaires containing two different sections measuring the same thing; results
between the two sections would be compared). In this study we employed internal consistency reliability.

4.1 Internal consistency reliability

Internal consistency reliability assesses the consistency of results across the items within the test. It is most
commonly indicated by Cronbach’s alpha (α). Cronbach’s α coefficient is a model of internal consistency
that is based on the average inter-item correlation. It is a statistical test to show the degree of similarity
between different measures. It assesses how well the set of items in a scale measures a particular preference.
This enables one to develop equivalent measures. The higher the score, the more reliable the scale is. The
widely accepted social science view is that α should be > 0.70 for a set of items to be considered a scale. At
this level of α, the standard error of measurement will be over half of a standard deviation (SEM > ½SD).
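Cronbach's α itself is straightforward to compute from item and total-score variances; a minimal sketch with hypothetical item scores (not the study's questionnaire data):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns.

    items: list of k lists, each holding one item's scores
    across the same respondents.
    """
    k = len(items)
    item_vars = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Two perfectly consistent hypothetical items -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```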
However, Tuckman has opined that α reliability should be > 0.75 for achievement tests and > 0.5 for attitude
tests.[40] Other commonly used measures of scale reliability models that are available through SPSS package
are Split-half (Guttman, Spearman-Brown), Guttman’s Lambda (λ), Parallel and Strict parallel. The split-
half model splits the scale into two parts and examines the correlation between the parts using Guttman
split-half reliability or Spearman-Brown reliability (equal and unequal length). Intra-class correlation
coefficients can also be used to compute inter-rater reliability estimates.

Using reliability analysis, we could determine the extent to which the items in the questionnaire were related
to each other, and we could get an overall index of the repeatability or internal consistency of the scale as a
whole. Intra-class correlation coefficient, Cronbach α and split-half models were employed for reliability
analysis in this study. These models of analysis were appropriate for the nature of the questionnaire and the
items within them. The results of reliability analysis of scales and items within scales were all < 0.5.
Therefore, according to Tuckman’s criteria,[40] repeatability or internal consistency of the scale as a whole
was low. This could possibly explain the considerable degree of overlap among the responses between the two opposing scales.

5. Factor analysis of LS scores

Factor analysis is a multivariate statistical method commonly used in both internal and external construct validation studies. Factor analysis can be pictured through the metaphor of vectors. It uses a mathematical
model based on vector algebra. A vector can be depicted geometrically as an arrow drawn in relation to
coordinate axes. In an item factor study, factors are axes, items are vectors. The first most important factor is
drawn as the vertical axis. The second factor is not drawn at right angles because it is correlated with the
first one. If it had been at right angles to the first factor it would have been orthogonal (uncorrelated); i.e.
correlation = 0. Correlation coefficient is geometrically interpreted as the cosine of the angle between two
vectors (items), or two axes (factors). Thus a correlation of 0 corresponds to 90° (i.e. the 2 vectors are
orthogonal; at right angles to each other), and a correlation of 1 corresponds to 0° (i.e. item vectors are
collinear).[31,41,42]

Factor analysis attempts to identify underlying variables, or factors, that explain the pattern of correlations
within a set of observed variables. It is used in data reduction to identify a small number of components /
factors that explain most of the variance observed in a much larger number of known variables. The purpose
is to ascertain whether the correlations among the variables can be accounted for in terms of a comparatively
few latent variables or factors. There are 7 methods of factor / component extraction; principal components,
unweighted least squares, generalized least squares, maximum likelihood, principal axis factoring, alpha
factoring, and image factoring. In this study principal component analysis (PCA; total variance in the data is
considered) was used. In PCA, first an initial solution is obtained that includes initial communalities,
eigenvalues of the eigenvectors, and the percentage of variance explained by the numbers of extracted
factors. By default, factors with eigenvalues greater than 1 (when analyzing a correlation matrix) are
extracted (Kaiser’s criterion). To use a different eigenvalue as the cut-off value for factor extraction, a
number between 0 and the total number of variables in the analysis should be entered. Next, this finding is
confirmed with the ‘scree plot’ test. It is a plot of the variance associated with each factor. It graphically
groups factors to separate the retainable constructs from those that are not useful, thus determining how
many factors should be kept. Typically the plot shows a distinct break between the steep slope of the large
factors and the gradual trailing of the rest (the ‘scree’). In the scree plot, components are ignored beyond the
point where the smooth decrease of eigenvalues appears to level off to the right of the plot. The Kaiser
criterion sometimes retains too many factors, while the scree test sometimes retains too few. However, both
do quite well when there are relatively few components and many cases (i.e. under normal conditions).[40-45]
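The eigenvalue extraction underlying Kaiser's criterion can be sketched with a simple power iteration, a stand-in for the full eigendecomposition SPSS performs. The matrix below is the study's own 4-variable correlation matrix, reproduced in Table-4 later in this chapter:

```python
def dominant_eigenvalue(matrix, iters=200):
    """Largest eigenvalue of a symmetric matrix via power iteration."""
    n = len(matrix)
    v = [1.0] + [0.0] * (n - 1)
    lam = 0.0
    for _ in range(iters):
        # multiply, normalize, then take the Rayleigh quotient
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        lam = sum(v[i] * sum(matrix[i][j] * v[j] for j in range(n))
                  for i in range(n))
    return lam

# The study's 4-variable correlation matrix (Table-4, this chapter)
R = [[ 1.000,  0.291, -1.000, -0.291],
     [ 0.291,  1.000, -0.291, -1.000],
     [-1.000, -0.291,  1.000,  0.291],
     [-0.291, -1.000,  0.291,  1.000]]

lam1 = dominant_eigenvalue(R)  # eigenvalue of the first component
```

For this matrix the dominant eigenvalue works out to 2(1 + 0.291) ≈ 2.582 and the second to 2(1 − 0.291) ≈ 1.418; both exceed 1, consistent with Kaiser's criterion extracting two components in this study.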
The components first extracted most often do not correspond to constructs of interest to the researcher. The
first components are mathematical abstractions that pick up the most variance in the correlations among the
variables. Such components need to be rotated into a different configuration in order to reveal constructs of
greater interest or to discern the patterns better. There are 5 methods of component rotation; Varimax, Direct
Oblimin, Quartimax, Equamax and Promax. Varimax is an orthogonal (uncorrelated; right-angled) rotation
method that minimizes the number of variables that have high loadings on each factor. It simplifies the
interpretation of the factors. Direct Oblimin is a method for oblique (non-orthogonal) rotation, with delta (δ)
value. When δ = 0 (default), solutions are most oblique. As δ becomes more negative, the factors become
less oblique. To override the default δ = 0, one should enter a number < 0.8. Quartimax is a rotation method
that minimizes the number of factors needed to explain each variable. It simplifies the interpretation of the
observed variables. Equamax is a rotation method that is a combination of Varimax method (simplifies
factors), and Quartimax method (simplifies variables). The number of variables that load highly on a factor
and the number of factors needed to explain a variable are minimized. Promax (with Kappa (κ) value) is an
oblique (non-orthogonal) rotation, which allows factors to be correlated. It can be calculated more quickly
than a Direct Oblimin rotation, so it is useful for large datasets. κ is a parameter that controls calculation of
Promax rotation. The default value (κ = 4) is appropriate for most analyses.[40-42]
5.1 Relevance to present study – shortcomings in factor analysis

Principal Component Analysis (PCA) with Varimax rotation was employed in this study to elucidate the
factor constructs of the LS scores. There may be grounds for contention here, insofar as a reliability analysis and a factor analysis were both performed on the same set of scores. The former analysis provides a
measure of internal consistency; i.e. how similar all the items are. In other words it is giving a measure of
support for a single-factor solution. Factor analysis provides the opposite; i.e. when there may be more than
one cluster, where is the clustering of correlations? In essence, accepting more than one factor sets aside the earlier internal consistency argument. These arguments notwithstanding, other researchers have performed both analyses on the same sets of data.[40-42,44,45] Moreover, our study demonstrated poor internal consistency reliability. Therefore, there were reasonable grounds for performing PCA factor analysis to try to elucidate the constructs of the LS questionnaire.

Correlation matrix: One of the first steps during factor analysis is generating a correlation matrix of
the variables under consideration, with significance levels, Determinant, Kaiser-Meyer-Olkin (KMO)
Measure of Sampling Adequacy (MSA), and Bartlett’s test of sphericity (Figure-16b, Chapter-3). This
output is reproduced in Table-4 below. The correlation matrix is essentially same as Table-16, Chapter-3.
There should be relatively few correlations near ‘0’. If we see a lot of very small correlations, we should
reconsider using factor analysis with the data. Most of the significance values (i.e. p-values for testing
whether the corresponding correlations are different from ‘0’) should also be small for a useful factor
analysis. The determinant of the correlation matrix indicates its invertibility. If the determinant is
‘0’, the correlation matrix cannot be inverted, and certain factor extraction methods will be impossible to
compute. The KMO MSA tests whether the partial correlations among variables are small. Bartlett's test of
sphericity tests whether the correlation matrix is an identity matrix, which would indicate that the factor
model is inappropriate. In this study we see that there were several small correlations, high p-values, a '0' determinant value, and a correlation matrix close to an identity matrix and not positive definite.

Correlation   PROYES   ANTINO   PRONO    ANTIYES
PROYES         1.000     .291   -1.000    -.291
ANTINO          .291    1.000    -.291   -1.000
PRONO         -1.000    -.291    1.000     .291
ANTIYES        -.291   -1.000     .291    1.000
a. Determinant = .000
b. This matrix is not positive definite
Table-4: Correlation matrix with KMO measure of sampling adequacy and Bartlett's test of sphericity.

Extraction communalities are estimates of variance in each variable accounted for by factors / components
in the factor solution (Table-18b, Chapter-3). Small values indicate variables that do not fit well with the
factor solution, and should be dropped from the analysis. In this study all extraction communalities were =
1. This is also not an ideal situation; extraction communalities should be high, but <1.

Total variance explained: In the right-hand part of the table (Table-18c, Chapter-3), the eigenvalues and % of
variance explained for the 2 rotated factors are displayed. Taken together, the 2 rotated factors explain just
the same amount of variance as the 2 factors of the initial solution. In a good factor analysis dataset, rotation should change the division of importance between the two factors, spreading it more or less equally between them. But here we see that rotation has not resulted in any change in the importance of the 2 extracted factors. That means rotation was of no use in elucidating the principal components.
Component matrix and rotated solutions: These are given in Table-18d (Component matrix), Table-18f
(Rotated pattern matrix) and Table-18g (Factor transformation matrix), respectively in Chapter-3. We notice
that there is no difference between the first two matrices. In other words, rotation of principal components
had no effect on factor loadings. This is further clarified from Table-18g (factor transformation matrix).
There was no rotation of the extracted factors at all from their original axes. For this reason, item loadings on Component-2 were equivocal.

Conclusion from factor analysis: From all this it could be concluded that this was not an ideal model for
factor analysis. However, a factor analysis was nonetheless performed, partly because it was a useful
exercise for future studies of this sort, and partly to determine if we could arrive at some partial solutions
through the analysis. Through this exercise we could walk through the steps of PCA factor analysis and
fulfil our immediate stepwise objectives therein (enumerated in Chapter-3). We did elucidate the item
loadings on Component-1, which we could assign a name, and we could tentatively identify a few question-
items in the questionnaire that were ambiguously worded and needed to be modified or removed. These tests
also fulfilled our fourth goal of this study; 'Determine the robustness of the LS questionnaire currently being used for the study'.

6. Errors, sample size and power

Errors concern the incorrect rejection or acceptance of the null hypothesis. There are 2 types of errors: Type I (α) error and Type II (β) error. Type I (α) error: This is the probability of detecting a significant difference when the parameters are really the same (significance level α); i.e. the risk of a false positive result. Here the null hypothesis is falsely rejected. Type II (β) error: This is the probability of not detecting a significant
difference when there is really a difference; i.e. the risk of a false negative result. In this there is failure to
reject the null hypothesis when it is indeed false. One minus Type II error (1 – β) is the power of the test to
detect a difference (Table-5). The power of a test depends on: (a) significance level; (b) size of the
difference we wish to detect; and (c) sample size. The determinants of sample size (these determine random errors) are: (i) magnitude of the difference in outcomes between the 2 intervention groups; (ii) probability of α (Type I) error; i.e.
false positive result. This is usually set at 0.05. (iii) Probability of β (Type II) error; i.e. false negative result.
This is usually set at 0.20. Therefore by default, the power of the test (1 – β) is usually set at 0.8. (iv)
Proportion of sample demonstrating the outcome of interest and the variability of observations. An
insufficient sample size may lead to failure to detect the true difference that may exist (Type II error).[33-35]
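The trade-off between these two error probabilities is what the compromise power analysis of Chapter-3 exploits. With equally serious errors (q = β/α = 1) the calculation has a closed form under a normal approximation; this is a sketch only (G*Power uses the noncentral t distribution, and the 17/18 group split of N = 35 is an assumption):

```python
from math import sqrt
from statistics import NormalDist

def compromise_alpha_power(n1, n2, d):
    """Compromise analysis with equally serious errors (q = beta/alpha = 1),
    normal approximation to the one-tailed two-group t-test."""
    phi = NormalDist().cdf
    delta = d * sqrt(n1 * n2 / (n1 + n2))  # noncentrality parameter
    c = delta / 2            # q = 1 puts the critical value midway
    alpha = 1 - phi(c)       # Type I error probability
    power = phi(c)           # 1 - beta; equals 1 - alpha when q = 1
    return alpha, power

# N = 35 assumed split into groups of 17 and 18, d = 0.5
alpha, power = compromise_alpha_power(17, 18, 0.5)  # ≈ 0.23, ≈ 0.77
```

This approximation reproduces the reported one-tailed compromise figures (δ ≈ 1.48, α = 0.23, power = 0.77).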
Actual situation              Conclusion from observations
                              Intervention has an effect          Intervention has no effect
Intervention has an effect    True positive (1 – β); Power        False negative (β); Type II error
Intervention has no effect    False positive (α); Type I error    True negative (1 – α)
Table-5: Type I and II errors. The true positive rate (1 – β) is the power of the test to correctly detect a difference when the null hypothesis is indeed false.

6.1 Relevance to this study – low power
This pilot study suffered from the serious drawback of low power due to low sample size. This was
determined by G*Power software package (described in Chapter-3),[32] which performed 3 types of power
analyses; (a) Post hoc power analysis to determine the power of the test with sample size N = 35 that was
used in the study; (b) A priori power analysis to determine the ideal sample size required; and (c)
Compromise power analysis to arrive at a ‘compromise’ power value, accepting equal α and β error
probabilities. These calculations related to, and were interrelated with, three statistical entities: effect
size, power of the study and sample size. Effect size was interpreted as a measure of the ‘distance’ between H0
(null hypothesis) and H1 (alternative hypothesis). It referred to the underlying population rather than a
specific sample. Power of study related to its ability to detect an effect of specified size. Sample size was the
number of subjects summed over all groups of the study design.[33] The compromise power approach is
somewhat paradoxical and therefore controversial; the concept was developed by Erdfelder for situations
such as this study, where practical considerations precluded following the recommendations derived from a
priori power analysis.[33] The power of the test to detect a difference (using our present student sample size
of 35) was 0.4 (1-tailed) and 0.3 (2-tailed) by the Post hoc analysis method. By the Compromise analysis
method (accepting a β/α ratio = 1), the power was 0.77 (1-tailed) and 0.68 (2-tailed). In neither case did it reach
the accepted requirement of 0.8. A priori analysis revealed that in order to achieve a power of 0.8, we would
have required a total sample size of 102 (1-tailed analysis) or 128 (2-tailed analysis).

7. Regression analysis of personal vs. general characteristics data

7.1 Correlation vs. experimental design

This study was of non-experimental, one-shot, cross-sectional design. However, we tried to establish
certain correlations between parameters derived from students’ personal and general characteristics.
This is the process of discovering relationships between two or more variables by analysing survey data. In such
designs it is strictly only possible to investigate associations. In a correlation analysis the causality can run in
either direction, from the X-axis variable to the Y-axis variable or vice versa; the actual direction of the
causal link, if any, rests purely on the researcher’s interpretation of the results. There is therefore always
a temptation for the researcher to infer causal links between parameters from their correlation findings,
and drawing such inferences is fraught with risk. This situation is referred
to as confounding of the dependent variable, because it leaves the researcher in a quandary as to which
one caused which. Thus, it may appear that experimental designs offer the best method to investigate
causality, due to the high degree of control inherent in such designs. In experimental designs, the direction
of causation is always from independent variable (usually X-axis) to dependent variable (generally Y-axis).
However, such strict control also has its own problems: naturalism is reduced, and along with it,
the ability to generalise. Therefore in order to resolve the issue of confounding the dependent variable
without going into the complexities of an experimental design, some linear regression models were tried on
the parameters, in order to arrive at some tentative conclusions about causal relationships.
7.2 Predictive linear regression model

Preliminary bivariate correlation between personal parameters and general characteristics showed a significant
negative correlation between ‘Concerns’ and ‘Age’ (r = - 0.96; p = 0.037; 2-tailed; N = 4). This meant that
there was a strong linear correlation between the two parameters; when the weighted percentage scores for
‘Concern’ were decreasing, the weighted percentage scores for ‘Age’ were increasing and vice versa. There
was also a strong negative correlation between the weighted scores for predictability of ‘Schedule’ and that
for ‘Hours’ (of online study capable), which fell just short of significance (r = - 0.996; p = 0.058; 2-tailed; N
= 3). No cause-effect relationship was implied in any of these correlations. Thereafter, 3 models of linear
regression were tried; ‘Motivation’ vs. ‘Hours’ and ‘Age’ vs. ‘Hours’ models did not fit the existing data
well. However, ‘Motivation’ and ‘Age’ vs. ‘Concerns’ model fitted the data well (R-Squared = 0.998).
Unlike the results of factor analysis of LS scores, results of regression analysis were quite satisfactory. The
independent variables did a good job explaining the variation in the dependent variable (p = 0.046). Both
‘Motivation’ and ‘Age’ were useful predictors of ‘Concern’; t being well above +2 or well below -2
[t(Motivation) = 5.746; t(Age) = -17.538]. The estimated predictive model was ‘Concerns = 80.261 +
0.638·Motivation – 0.898·Age’. A problem of collinearity (strong correlations among the independent variables) was
suspected but not definitely confirmed. Estimated true errors in the model were minimal. The individual
regression equations were; Equation 1: ‘Concerns’ = 80.261 + 0.638(‘Motivation’); and Equation 2:
‘Concerns’ = 80.261 – 0.898(‘Age’). This meant that ‘Concerns’ regarding online courses increased with
internal ‘Motivation’ and decreased with ‘Age’. This fulfilled the fifth goal of the study: ‘Identify
relationships between students’ personal / general characteristics vis-à-vis online learning’.

7.3 Comparison with other studies in literature

Several studies and articles have attested to the importance of internal motivation in successfully
undertaking online courses.[5,8-11] In a large-scale (N = 1,056), exploratory factor analysis study that
determined the underlying constructs that comprise student barriers to online learning, eight factors were
found: (a) administrative issues, (b) social interaction, (c) academic skills, (d) technical skills, (e) learner
motivation, (f) time and support for studies, (g) cost and access to the Internet, and (h) technical problems.
Independent variables that significantly affected student ratings of these barrier factors included: gender,
age, ethnicity, type of learning institution, self-rating of online learning skills, effectiveness of learning
online, online learning enjoyment, prejudicial treatment in traditional classes, and the number of online
courses completed.[8] Our pilot study was not as exhaustive as this: it was based in a single institution, and we
did not consider administrative, social, gender or ethnic issues. But the importance of skills, motivation, time,
technical access and age identified in our study is corroborated by that larger study.

In another study, multiple linear regression and discriminant function analysis were used to examine
whether individual differences predicted WebCT use, while analysis of covariance determined whether Web
use influenced academic achievement. The number of hits, length of access, and use of the bulletin board
was predicted by age, with older students using WebCT more. These factors were also influenced by ability
and achievement orientation. Only bulletin board use influenced achievement, with those posting messages
outperforming those not using, or passively using bulletin boards. Individual differences will determine the
extent to which students utilise this facility.[9] This study also corroborates some of our findings: older age
and achievement orientation (read motivation) influence online predilection. It is significant that these
authors also used statistical techniques similar to ours, albeit on a larger sample, to arrive at their conclusions.

Other articles have abundantly emphasized the role and importance of the technical environment, technical
skills, the amount of time to be devoted (usually 10-15 hours / week), self-motivation, time-dependency,
discipline (following the course schedule and completing assignments), self-organization (scheduling
study time and online time to ensure all course obligations are met), self-direction (being able to motivate
oneself), and learning styles (which will obviously impact students’ success online).[10,11]
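The predictive model reported in section 7.2 (‘Concerns = 80.261 + 0.638·Motivation – 0.898·Age’) can be illustrated with a small ordinary-least-squares sketch. The four data points below are hypothetical, generated exactly from the published equation rather than taken from the study’s weighted scores, so the fit simply recovers the reported coefficients.

```python
import numpy as np

# Hypothetical weighted scores (NOT the study's data; for illustration only).
age        = np.array([20.0, 24.0, 29.0, 35.0])
motivation = np.array([60.0, 72.0, 65.0, 81.0])

# 'Concerns' generated exactly from the model reported in section 7.2:
# Concerns = 80.261 + 0.638*Motivation - 0.898*Age
concerns = 80.261 + 0.638 * motivation - 0.898 * age

# Ordinary least squares for Concerns ~ Motivation + Age.
X = np.column_stack([np.ones_like(age), motivation, age])
coef, _, _, _ = np.linalg.lstsq(X, concerns, rcond=None)
intercept, b_motivation, b_age = coef
# Noise-free data, so the published coefficients are recovered:
# intercept ~ 80.261, b_motivation ~ 0.638, b_age ~ -0.898
```

With real, noisy data the same call would yield the least-squares estimates, and the t statistics and R-Squared quoted in section 7.2 would then describe how well those estimates explain the variation in ‘Concerns’.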
8. Shortcomings of pilot study

8.1 Sample size

A low sample size could have been a serious drawback if this had been attempted as a full-fledged study. But
given the fact that this was a pilot study, the relatively small size of medical college, and other practical and
logistic constraints, 35 students could arguably be considered as not too serious a drawback. This figure was
based on practical considerations rather than any misplaced belief in the law of small numbers (described
later). With small numbers there is a tendency to overestimate power, overestimate significance,
underestimate the width of confidence intervals (CI), misrepresent the importance of sampling variability,
and place exaggerated confidence in the validity of conclusions.[35] All these factors may have been at play
in all the results discussed in this study. This emphasizes the fact that future studies should include larger
numbers of subjects.

8.2 Limited sampling variability and representation

An extension of the sample size problem is the issue of sampling variability and sampling limitations. There
is a tendency to view a sample randomly drawn from a population as highly representative of that population
in all essential characteristics, often referred to as the representation hypothesis. There is also a tendency to
view sampling as a self-correcting process. Both these tendencies generate expectations about the
characteristics of samples. The law of large numbers guarantees that very large samples will indeed be highly
representative of the population from which they are drawn. Undue faith in the ‘law of small numbers’, which
asserts that the law of large numbers applies to small samples as well, stems from excessive reliance on this
supposed self-corrective tendency, and leads researchers to believe that small samples should also be
highly representative and similar to one another.[36] In this pilot study there may have been poor sampling
variability and poor sample representation.
8.3 Selection bias

This study essentially used a convenience sample, i.e. subjects available in the college and being taught
by the author. Strictly speaking, therefore, it was a non-random selection of subjects, and such results
cannot be generalised beyond this setting. It also means that individual groups may have reacted differently
to the survey; some volunteering enthusiastically, others interpreting it as an order from their instructor, etc.

8.4 Other shortcomings related to sample size and variability

Limited sample variability (and size, of course) may lead to an exaggerated belief in the likelihood of
successfully replicating an obtained finding. It also detracts from gender-wise stratification, logical
stratification of subjects, extrapolation of findings to whole population, and generalization to other settings
and situations; i.e. inadequate confirmation of external validity.

8.5 Shortcomings in questionnaire

The questionnaire used in the pilot had several potential weaknesses. To enumerate briefly: the
questionnaire was not peer-validated; 40 questions was a shade more than the ideal recommendation of 30
for Web-based questionnaires;[30] it contained only forced-choice questions, without any open-ended questions
for eliciting respondents’ qualitative responses. At least one open-ended question should be included to enable
users to express their opinions freely.[27,30] It had mixed scales and ranges. This last situation should be
avoided as far as possible because students tend to mark only the easy options and leave out the rest.[27]
There was no link to a ‘Privacy Statement’ page at the bottom of the web-based questionnaire page.[30] The
LS questions in the questionnaire were ambiguously worded; all 3 sets of statistical tests confirmed this.
Non-standard questionnaire delivery may have produced results with a large error variance or ‘noise’.[39]
Their identification in this pilot study could pave the way for more reliable questionnaire development and
administration. The manual method of scoring the results, with its attendant time overheads and propensity to
errors, has been mentioned earlier.

8.6 Shortcoming in timing of survey

Another factor that was not considered during delivery of the questionnaire was the time of day. There
are reports that subjects may differ in their cognitive performance according to the match between their
time-of-day preference and the time of a test. Ideally, this factor should also have been considered, and the
subjects’ time-of-day preferences should have been measured using the Morningness–Eveningness
Questionnaire (Horne and Östberg’s MEQ) and compared with the time of day at which students completed the study.[15]
8.7 Insufficient power

It is essential to have as much power in the research design as possible. Explicit computation of power, with
Cohen's medium effect size (d = 0.5), should always be carried out before any study is done.[34] Such
computations will often lead to the realization that there is no point in conducting the study unless the
selected sample size is multiplied by ‘x’, ‘x’ being a variable figure more than unity. In this pilot study ‘x’
worked out to be nearly 3.7 (the a priori sample size (128) divided by the actual sample size (35)). A serious
outcome of such underpowered studies is research consisting of seemingly contradictory results.[35]

8.8 Coachability and fakeability

As with any questionnaire dealing with interests, attitudes and preferences, the validity of the scales depends
upon honest responses from each respondent; it is probable that a coached subject or one with prior
knowledge of what was involved could influence the scores if he/she knew what profile was desired.
Similarly, all psychometric tests have implications for faking and possibility of self-fulfilling prophesies.
Faking implies attempting to obtain scores that are different from the actual score. However, this depends on
insight into the self. Further, beliefs in one’s ability may indeed affect how people approach psychometric
tests and performance on them.[41,46] In spite of all attempts to explain everything to the subjects in explicit
detail, these possibilities can never be ruled out completely in any study.

8.9 Possible errors in correlation assumptions

Several assumptions in correlation analysis were detailed earlier. Some of these could be, and were, tested
prior to performing the correlations. However, there were many that were assumed but not explicitly tested.
Homoscedasticity was assumed; i.e., the error variance was assumed to be the same at any point along the
linear relationship. Minimal measurement error was also assumed. Measurement error reduces systematic
covariance and lowers the correlation coefficient. The descriptive statistics tables for each set of variables in
Chapter-3 show that these assumptions were not necessarily true. Low reliability attenuates the
correlation coefficient; therefore, internal reliability was assumed. Unrestricted variance was also assumed.
If variance is restricted in one or both variables due to poor sampling, this can also lead to attenuation of the
correlation coefficient. This study suffered from poor sampling; so assumption of unrestricted variance was
not justified. For purposes of assessing significance of correlation, common underlying normal distributions
were assumed. Even this assumption was not strictly true; the population from which the sample was drawn
approximated a normal distribution but was not perfectly normally distributed.[39]
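The attenuation effects mentioned above can be made concrete with Spearman's classical attenuation formula, a standard psychometric result (not a computation performed in this study): the observed correlation equals the true correlation scaled by the square root of the product of the two measures' reliabilities.

```python
def attenuated_r(r_true, reliability_x, reliability_y):
    """Spearman's attenuation formula: the correlation observed between two
    error-laden measures, given the true correlation and each measure's
    reliability (both in [0, 1])."""
    return r_true * (reliability_x * reliability_y) ** 0.5

# Example (illustrative figures): a true correlation of 0.9 measured with
# reliabilities of 0.7 and 0.8 is observed as roughly 0.67.
```

This is why low internal reliability of the questionnaire would, on its own, have pulled the observed correlation coefficients towards zero.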
8.10 Glitches in Web-based questionnaire system

Fifty percent of information systems fail, possibly due to technical problems; data content and format issues;
user problems related to skills, competence and motivation; and organisational problems.[47] The problems
encountered with the newly-introduced Web-based questionnaire system (and the measures taken to tackle
them) were elaborated in Chapter-2. These glitches were not anticipated at the time of creating the Web-
based questionnaire. Piloting of the system enabled identification of the problem, with a view towards
finding some solutions. They were all possibly related to the same situation, namely creation of online
questions through multiple ‘Page Elements’. Figure-1 below explains the problems in terms of relationships
between user sessions, page views and site server ‘hits’, as recorded in site server logs. Each Webpage
viewed by the user records ‘hits’ on the site server in terms of number of images and HTML pages. One user
may have visited 4 Web pages on a Website containing a mixture of images and HTML content. The site
server log would record 10 ‘hits’, even though only one user, in one session, visited only 4 pages. A
Webpage using frames would count as multiple page views. Also different users might be using the same
computer / IP address one after the other, or they may be sitting on the same institutional proxy server, and
would all appear as a single visitor in site server logs.[30]

Our blog-site containing the Web-based questionnaire page had 2 images, a variable amount of HTML
content and 40 Page Elements. Each time a student accessed the Webpage, the Google site server (which
hosts the blog-site) would have recorded more than 40 ‘hits’. If this is multiplied by the number of students
using multiple computer terminals, all sitting on USAIM server, the number of hits on Google site server
would be enormous within a short span of time, all apparently emanating from a single user / session / IP
address (namely the USAIM proxy server; (domain = usaim.ed)). This possibly explains the error messages
received during the survey. In fact, this pilot study proved to be a good testing ground for identifying and
rectifying the same problems, in order to render future applications of the system more robust.
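The hits / page views / sessions distinction described above can be sketched with a toy server log; the session id and resource names are hypothetical. One visitor requesting 4 HTML pages that together pull in 6 images produces 10 hits, 4 page views and 1 session.

```python
# Toy server log: (session_id, requested_resource) pairs; names are made up.
log = [
    ("visitor-1", "page1.html"), ("visitor-1", "logo.gif"),
    ("visitor-1", "page2.html"), ("visitor-1", "logo.gif"), ("visitor-1", "photo.jpg"),
    ("visitor-1", "page3.html"), ("visitor-1", "logo.gif"),
    ("visitor-1", "page4.html"), ("visitor-1", "logo.gif"), ("visitor-1", "photo.jpg"),
]

hits = len(log)                                                 # every request counts
page_views = sum(1 for _, res in log if res.endswith(".html"))  # HTML pages only
sessions = len({sid for sid, _ in log})                         # distinct visitors

# hits == 10, page_views == 4, sessions == 1
```

The same arithmetic, scaled up to 40 Page Elements per page and many students behind one proxy, explains how the survey generated an enormous hit count from what the server logs saw as a single visitor.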
Figure-1: The relation between hits, page views and visits (sessions) in server logs. Each Webpage viewed by the user records ‘hits’ on the site server in terms of the number of images and HTML pages. One user may have visited 4 Web pages on a Website containing a mixture of images and HTML content; the site server log would record 10 ‘hits’, even though only one user, in one session, visited only 4 pages. Adapted from Boulos MNK, Bath University.[30]

9. Lessons learned from study and future directions

9.1 Improvements and amendments

All the shortcomings described so far conveyed their own respective messages for this study. The lessons
learned could be summarized into the following essential requirements: (a) an adequate sample size, preferably
around 128 subjects; (b) adequate stratification by gender and age; (c) adequate power of the test, at least
1 – β = 0.8; (d) more scientific conduct of the survey itself (e.g. morning-evening preference considerations
etc.); (e) robust (reliable and valid) questionnaire development; (f) trouble-free Web-based questionnaire
delivery and survey administration, perhaps with the expertise of an IT specialist; (g) more detailed
statistical analysis; and (h) an automated system of scoring the results, possibly through an embedded
spreadsheet.

The questionnaire should have a uniform number of response-options,[27] the scales should not cross each
other,[27] the learning style component needs to be reframed and re-worded to render it less ambiguous, a set
of time-management questions should be included,[24] at least one open-ended question should be
included,[27,30] and a privacy statement incorporated in the questionnaire.[30] Robust statistical analysis
should include internal consistency, test-retest and inter-rater reliability estimations;[40] construct
(convergent and discriminant) and criterion (concurrent and predictive) validity measurements;[41,42,44,45] and
regression analysis, possibly through structural equation modeling (vide infra).[48]
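Of the reliability estimations listed above, internal consistency is conventionally reported as Cronbach's alpha. A minimal sketch of the standard formula, alpha = k/(k – 1) × (1 – Σ item variances / variance of total scores), follows; the response matrix is made up for illustration.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    sum_item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1.0)) * (1.0 - sum_item_var / total_var)

# Made-up responses: items that rise and fall together give high alpha.
responses = [
    [1, 2, 1],
    [3, 3, 4],
    [4, 5, 5],
    [2, 2, 2],
]
alpha = cronbach_alpha(responses)  # alpha ~ 0.966 here
```

Values of about 0.7 or higher are conventionally taken as acceptable internal consistency for a questionnaire scale.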
9.2 Structural equation modeling

Structural equation modeling (SEM) is also known as analysis of covariance structures, or causal
modelling. In view of the fact that multiple variables and correlations were involved in this study, the ideal
solution would be to analyse the models through SEM. Such an approach would throw new light on the study
from the following perspectives; (a) Establish the cause-effect relationships between variables, rather than
mere correlations, and depict them through path diagrams with directional arrows; (b) Relationships
between SEM path coefficients and beta weights in multiple regression, as a way of explaining the variances
(this would be possible through calculation of R-Squared (sum of the products of the standardised path
coefficients and their simple correlations), which would tell the researcher how much percentage of the
variance in the Effect could be explained by the Predictor(s), including the error values); (c) Calculate the
direct, indirect and total effects that one or more variables have upon the other, discounting the unanalysed
and spurious effects; (d) Enable specification and testing of the measurement model before the testing of the
full structural model; so as to assess the validity and reliability of the constructs before their use in the full
model; and then conduct the multi-variable analysis; and (e) Enable testing for goodness of fit (Chi-Squared
values (along with df and p-values), Root Mean Square Error Approximations (RMSEA) etc) and squared
multiple correlations of the endogenous variables, to determine their appropriateness as models for
prediction of invariance between variables.[48] Amos (Analysis of MOment Structures) is SEM software from
SPSS, designed specifically for visual SEM. With Amos, one can specify, view, and modify the model
graphically, then assess the model’s fit, make any modifications, and print out a final model.[49]
With the sets of suggestions and improvements in the questionnaire and survey method detailed in the
preceding paragraphs, the 6th goal of this study is fulfilled; ‘Suggest improvements to questionnaire and
survey’, for rendering future studies more accurate.

9.3 Guidelines to USAIM students
Technology and technical skills (computer, Internet etc) are the easiest factors to improve in a timely
manner. Personal factors, which include motivation and scheduling issues, may not be as easy to change.
Learning styles (insofar as they relate to online learning) have developed over one’s lifetime as a student,
and may be a good fit for certain methods of online instruction and a worse fit for others.

Technical aspects to consider: Consider purchasing a new computer every 3-4 years. Investigate the various
high-speed options available at home.

Personal aspects: Motivation is an important aspect of success as an online student. Try to plan out your
studying schedule as soon as your course starts. A typical 3-credit online course will require up to 12 or 15
hours / week to complete; use this to determine how many courses you can take given your available weekly time.

Skills to strengthen: Fast and accurate typing on a computer keyboard; understanding common aspects of
software programs or applications; using a ‘chat room’ or ‘instant messaging’.

Learning styles to consider: Online classes typically have a large, and often mandatory, discussion component. Many
online courses are designed around student-centred projects. Some students find learning online to be
isolating while others can find it to be very interactive and personal. If you feel isolated, consider sending a
private e-mail to another student who has similar interests and ask them about their hobbies. Remember to
spend some time building relationships with your classmates and instructors. Most online students are most
successful when they play an active role in the teaching and learning process. Try to take the initiative each
week to communicate with at least one other student or the instructor about the course contents. Depending
on your predilection you can try one of the following options; try to select an online program that does not
contain a large amount of group work on assignments or projects; or try to select an online program that uses
message boards or other forms of asynchronous communication tools; or try to select an online program that
includes the use of video or audio communication tools.[10] This fulfils the 7th and final goal of this pilot
study, ‘Suggest ways of overcoming the barriers to online learning’.

9.4 Long term educational implications

The two long term goals of this pilot study are: (1) use the Compatibility Score as a baseline for future studies
in USAIM and elsewhere, possibly in its affiliated institutions; and (2) render online learning and examinations
a regular and feasible option for USAIM. These would be the next items on the agenda.

10. Conclusions

10.1 Objectives and goals achieved

This study fulfilled a number of objectives and goals. It enabled identification of 5 glitches (problems) and
tentative solutions to them through piloting of the newly-devised Web-based questionnaire. This met the
immediate goal of the pilot study. It determined USAIM students’ online learning preparedness against 5 set
parameters; devised a mathematically objective scoring system for each parameter; determined the overall
online preparedness (Compatibility Score) of USAIM students; determined the robustness of LS
questionnaire currently being used for the study; identified the relationships between students’ personal and
general characteristics vis-à-vis online learning; suggested improvements to questionnaire and survey; and
suggested ways of overcoming the barriers to online learning. Finally it paved the way for using
the Compatibility Score as a baseline for future studies in USAIM and elsewhere, and rendering online learning
and examinations a regular and feasible option for USAIM.

10.2 Overall analysis

Glitches in the Web-based questionnaire are attributable to excessive ‘hits’ on the Google site server from a
single user-session. Average online readiness and overall online Compatibility of USAIM students are in the
‘Good’ category. The learning style questionnaire needs to be restructured, and the questionnaire as a whole
needs to be rendered more robust from a research perspective. Students’ online concerns increase with their
motivation and decrease with their age. Subject recruitment for a formal study needs to be at least 3.7 times
that of this pilot study. This would render the results of a robust statistical analysis
more valid. Overall, USAIM students are poised on the threshold of introduction of online courses and
examinations. Once they are introduced, the natural progression of the learning curve would take care of the
ongoing hurdles.

10.3 Personal reflections from study

Several personal milestones were crossed by the author during the course of this pilot study. This was the
third survey conducted by the author in USAIM in less than 3 years. This was the first Web-based survey of
USAIM. The Web-based system was entirely the author’s original idea and was created from scratch. To the
best of the author’s knowledge, no one else has tried to create a Web-based survey entirely from a blog-site.
The author also became involved in the much-talked-about technique of ‘problem-based’ or ‘case-based’ (or
‘discovery’, or ‘inquiry’) learning.[50] It started out as a pilot study to test a Web-based system and to
determine USAIM students’ online readiness. What initially purported to be a simple estimation of online
Compatibility Score turned out to be a deeper statistical analysis, with correlations, reliability analysis,
factor analysis and multivariable regression analyses. The author was forced to learn ‘on the job’: as each
academic problem presented itself or was perceived, the author had to go back and learn about the problem
from the ground up, then apply that learning to the present situation. There was nothing prescriptive
about this research; every step of the way the author had to take a thorny problem or a collection of data or
observations and try to make sense of it. Hence the allusion to problem-based learning was made. Finally, all
these prompted the urge to explore the technical and statistical aspects of this study further through a more
robustly designed piece of research. This succinctly summarizes not only the study itself but also the ethos,
philosophy, guiding principle and motivation behind it.
***********
REFERENCES

1. British Broadcasting Corporation News [homepage on the Internet]. London: BBC News; 1999 October 5 [cited 2008 May 1]. Online Future of Higher Education; [about 1 screen]. Available from: http://news.bbc.co.uk/2/low/uk_news/education/465922.stm
2. Elaine Allen I, Seaman J. Sloan Consortium™ [homepage on the Internet]. Needham, MA: Sloan-C™; © 2007 October [cited 2008 May 1]. Online Nation: Five Years of Growth in Online Learning; [31 pages]. Available from: http://www.sloan-c.org/publications/survey/pdf/online_nation.pdf
3. Twigg CA. Improving Learning and Reducing Costs: New Models for Online Learning. Educause Review. 2003 Sept/Oct; 38(5):28-38 [cited 2008 May 1]. Available from: http://www.educause.edu/ir/library/pdf/ERM0352.pdf
4. British Broadcasting Corporation News [homepage on the Internet]. London: BBC News; 1999 December 22 [cited 2008 May 1]. US colleges move courses online; [about 1 screen]. Available from: http://news.bbc.co.uk/1/low/education/575327.stm
5. Cellilo J. On Course Workshop [homepage on the Internet]. Los Altos, CA: 2001 [cited 2008 May 1]. Motivation in online courses; [about 2 screens]. Available from: http://www.oncourseworkshop.com/Motivation015.htm
6. Kassop M. Ten Ways Online Education Matches, or Surpasses, Face-to-Face Learning. The Technology Source. 2003 May/June [cited 2008 May 1]. Available from: http://technologysource.org/article/ten_ways_online_education_matches_or_surpasses_facetoface_learning/
7. University of Seychelles American Institute of Medicine [homepage on the Internet]. Seychelles: USAIM [2007 April 30; cited 2008 May 1]. M.Ch Orthopedics Certification Program; [about 16 screens]. Available from: http://www.mch-orth.com/index.html
8. Muilenburg LY, Berge ZL. Student barriers to online learning: A factor analytic study. Distance Education. 2005 May; 26(1):29-48. DOI: 10.1080/01587910500081269
9. Hoskins SL, van Hooff JC. Motivation and ability: which students use online learning and what influence does it have on their achievement? British Journal of Educational Technology. 2005 March; 36(2):177-192. DOI: 10.1111/j.1467-8535.2005.00451.x
10. eLearners Advisor [homepage on the Internet]. eLearners™ Advisor; © 2002-2004 eLearners.com Inc. [cited 2008 May 1]. Available from: http://www.elearners.com/advisor/index.asp
11. eLearn Space [homepage on the Internet]. eLearnspace; 2002 [2002 October 14; cited 2008 May 1]. Preparing Students for Elearning: Elearning Course; [about 4 screens]. Available from: http://www.elearnspace.org/Articles/Preparingstudents.htm
12. Suskie L. Towson University [homepage on the Internet]. Towson, MD: © 2007 [2002 Aug 28; cited 2008 May 1]. Theories and Instruments for Identifying Student Learning Styles; [about 11 pages]. Available from: http://wwwnew.towson.edu/iact/teaching/SuskieLearningStylesTheoriesandInstruments.doc
13. Heineman PL. Personality Project [homepage on the Internet]. © 1995 [cited 2008 May 1]. Cognitive Versus Learning Style; [about 2 screens]. Available from: http://www.personality-project.org/perproj/others/heineman/cog.htm
14. Diaz DP, Cartnal RB. Students' learning styles in two classes: Online distance learning and equivalent on-campus. College Teaching. 1999; 47(4):130-5 [cited 2008 May 1]. Available from: http://home.earthlink.net/~davidpdiaz/LTS/html_docs/grslss.htm
USAIM Online Survey; Dr S. Sanyal, Assoc. Prof., Faculty of Anatomy & Neurosciences, USAIM, Seychelles May 2008 58
Web-based Survey and Analysis of USAIM Students’ Online Compatibility – Pilot Study
15. Kra¨tzig GP, Arbuthnott KD. Perceptual Learning Style and Learning Proficiency: A Test of the Hypothesis. Journal of Educational Psychology. 2006; 98(1):238-46. 16. Learning Styles Online [homepage on the Internet]. Advanogy®; © 2003 – 2007 [cited 2008 May 1]. Free learning styles inventory, including graphical results; [about 3 pages]. Available from: http://www.learning-styles-online.com/inventory/ 17. University of Illinois [homepage on the Internet]. Urbana-Champaign, Illinois: Illinois Online Network (ION); © 1998-2006 ION and Board of Trustees of UOI [last modified 2006 August 25; cited 2008, May 1]. Learning Styles and the Online Environment; [about 2 pages]. Available from: http://www.ion.uillinois.edu/resources/tutorials/id/learningStyles.asp 18. Academy Internet [homepage on the Internet]. Academy Internet; © 2001-2005 [cited 2008 May 1]. The E-Learning Process 7: Delivery; [about 2 screens]. Available from: http://www.academyinternet.com/elearning/delivery.html 19. Armstrong Atlantic State University [homepage on the Internet]. USA: AASU; [cited 2008 May 1]. Web Coursework: Are you ready for online learning? [about 3 screens]. Available from: http://www.it.armstrong.edu/dlassess.htm 20. Online Learning [homepage on the Internet]. Los Angeles, CA: OnlineLearning.net™; Copyright © 2006 OnlineLearning.net [cited 2008 May 1]. The OnlineLearning.net Self-Assessment QuizTM; [about 2 screens]. Available from: http://www.onlinelearning.net/OLE/holwselfassess.html?s=825.t010b360n.058u214d00 21. Jester C. Diablo Valley College: © Copyright 2000 by Suzanne Miller [cited 2008 May 1]. A Learning Style Survey for College; [about 2 screens]. Available from: http://www.metamath.com/multiple/multiple_choice_questions.html 22. ibid 16. Self Evaluation for Potential Online Students; [about 2 screens]. Available from: http://www.ion.uillinois.edu/resources/tutorials/pedagogy/selfEval.asp 23. College of DuPage [homepage on the Internet]. 
Center for Independent Learning (CIL); Copyright © 1996 College of DuPage [updated 26 May 2006; cited 2008 May 1]. Are distance-learning courses for Me? [about 2 screens]. Available from: http://www.cod.edu/dept/CIL/CIL_Surv.htm 24. PACE [homepage on the Internet]. New York City, NY: PACE; 2006 [last updated 2006 August 2; cited 2008 May 1]. Is Online Learning for Me? [about 3 screens]. Available from: http://appserv.pace.edu/execute/page.cfm?doc_id=4678 25. Perlman G. Association for Computing Machinery [homepage on the Internet]. ACM; Copyright © 2008 [cited 2008 May 1]. Web-Based User Interface Evaluation with Questionnaires; [about 4 screens]. Available from: http://www.acm.org/~perlman/question.html 26. Boulos MNK. Semantic Web [homepage on the Internet]. © 2001, 2002 HealthCyberMap.org [last revised 2002 April 17; cited 2008 May 1]. Formative Evaluation Questionnaire of HealthCyberMap Pilot Implementation; [about 10 screens]. Available from: http://healthcybermap.semanticweb.org/questionnaire.asp 27. Bonharme E, White I. Napier University [homepage on the Internet]. Edinburgh, UK: School of Computing (DCS), Napier University, Marble project [last updated 1996 June 18 (archived); cited 2008 May 1]. Questionnaires; [about 1 screen]. Available from: http://web.archive.org/web/20040228081205/www.dcs.napier.ac.uk/marble/Usability/Questionnaires.html
USAIM Online Survey; Dr S. Sanyal, Assoc. Prof., Faculty of Anatomy & Neurosciences, USAIM, Seychelles May 2008 59
Web-based Survey and Analysis of USAIM Students’ Online Compatibility – Pilot Study
28. Berdie DR, Niebuhr MA. Questionnaires: Design and use. 2nd ed. Metuchen, NJ: Scarecrow Press; 1986. 29. Smart Draw [homepage on the Internet]. Copyright © 2008 SmartDraw™ [cited 2008 May 1]. Questionnaire Design; [about 1 screen]. Available from: http://survey.smartdraw.com/questionnaire-design.html 30. Boulos MNK. Bath University [homepage on the Internet]. Bath, UK: Bath University [2004 June 16; cited 2008 May 1]. Notes on Evaluation Methods (Including User Questionnaires) for Web-based Medical/ Health Information and Knowledge Services; [about 6 pages]. Available from: http://staff.bath.ac.uk/mpsmnkb/unit11/1_2_1.htm 31. Joyce DE. Clark University [homepage on the Internet]. Worcester, MA: Department of Mathematics and Computer Science, Clark University; © 1996, 1997, 1999 [cited 2008 May 1]. Cosines; [about 3 screens]. Available from: http://www.clarku.edu/~djoyce/trig/cosines.html 32. Faul F, Erdfelder E. GPOWER: A priori, post hoc, and compromise power analyses for MS – DOS (Computer program). Bonn, FRG: Bonn University, Department of Psychology. 1992 33. Erdfelder E, Faul F, Buchner A. G*Power: A general power analysis program. Behaviour Research Methods, Instruments, & Computers. 1996; 28:1-11. 34. Gatsonis C, Sampson AR. Multiple correlation: Exact power and sample size calculations. Psychological Bulletin. 1989 Nov; 106(3): 516-24. 35. Tversky A, Kahneman D. Belief in the Law of Small Numbers. Psychological Bulletin. 1971; 76(2):105-10. 36. Maxwell SE. Sample size and multiple regression analysis. Psychological Methods. 2000 Dec; 5(4): 434-58. 37. Green SB. How Many Subjects Does It Take to Do a Regression Analysis? Multivariate Behavioural Research. 1991; 26(3):499-510. 38. Jackson DL. The Effect of the Number of Observations per Parameter in Misspecified Confirmatory Factor Analytic Models. Structural Equation Modelling: A Multidisciplinary Journal. 2007; 14(1):48-76. 39. Garson GD. North Carolina State University [homepage on the Internet]. 
North Carolina; [last updated 2006 May 9; cited 2008 May 1]. Correlation; [about 7 pages]. Available from: http://www2.chass.ncsu.edu/garson/PA765/correl.htm 40. Zywno MS. A Contribution to Validation of Score Meaning for Felder-Soloman’s Index of Learning Styles. 2003: Proceedings of 2003 American Society for Engineering Education Annual Conference & Exposition, Copyright © 2003, ASEE; 2003. 41. Bunderson VC. Herrmann International [homepage on the Internet]. Herrmann International™; 1980 [cited 2008 May 1]. The Validity of the Herrmann Brain Dominance Instrument; [about 28 pages]. Available from: http://www.herrmanninternational.us/home/friendlyDownload.cfm?actualFile=100331.pdf&directory=100021_resources&saveName=HBDI-Validation-in-1980.pdf. 42. Bunderson VC. The Validity of the Herrmann Brain Dominance Instrument. Princeton, NJ. Educational Testing Service. 1988. 43. Martinez M. The Training Place [homepage on the Internet]. Oro Valley, Arizona: The Training Place, Inc; Copyright © 1996-2005 [cited 2008 May 1]. Learning Orientation Questionnaire – Interpretation
USAIM Online Survey; Dr S. Sanyal, Assoc. Prof., Faculty of Anatomy & Neurosciences, USAIM, Seychelles May 2008 60
Web-based Survey and Analysis of USAIM Students’ Online Compatibility – Pilot Study
Manual; [about 48 screens]. Available from: http://www.trainingplace.com/source/research/LOQPKG-Manual2005.pdf 44. Shearer CB. Multiple Intelligence Research [homepage on the Internet]. Kent, Ohio: MI Research and Consulting Inc; [2006 Jan; cited 2008 May 1]. Using a Multiple Intelligences Assessment to Facilitate Teacher Development; [about 9 screens]. Available from: http://www.miresearch.org/files/Teacher_Development_and_the_MIDAS.doc 45. Shearer CB. Reliability, Validity and Utility of a Multiple Intelligences Assessment for Career Planning. Paper presented at the 105th Annual Convention of the American Psychological Association; 1997: Chicago, IL: American Psychological Association; 1997. 46. Furnham A, Dissou G. The Relationship Between Self-Estimated and Test-Derived Scores of Personality and Intelligence. Journal of Individual Differences. 2007; 28(1):37–44. 47. Lyytinen K, Hirschheim R. Information systems failure - A survey and classification of empirical literature. Oxford Surveys in Information Technology. 1987; 4:257–309. 48. McKenzie K, Gow K. Exploring the first year academic achievement of school leavers and mature-age students through structural equation modelling. Learning and Individual Differences. 2004; 14:107-23. 49. University of Nottingham [homepage on the Internet]. Nottingham: IS Academic & Research Applications University of Nottingham; [cited 2008 May 1]. AMOS; [about 2 screens]. [Available from: http://www.nottingham.ac.uk/is/services/software/is-machines/appls/amos.phtml 50. Felder RM, Silverman LK. North Carolina State University [homepage on the Internet]. Raleigh, NC: NCSU; Engr. Education; 78(7):674–81 [published 1988; cited 2008 May 1]. Learning and Teaching Styles In Engineering Education; [about 10 screens]. Available from: http://www.ncsu.edu/felder-public/Papers/LS-1988.pdf
*****************