STANISLUS SOCHIMA
CHAPTER ONE
INTRODUCTION
Background of the Study
Mathematics is considered by many people, institutions, and
employers of labour, among others, as very important. Mathematics is
considered indispensable because it has substantial use in all human
activities, including school subjects such as Introductory Technology,
Biology, Chemistry, Physics and Agricultural Science. Its unique
importance explains why the subject is given priority as a school subject.
In fact, the International Association for the Evaluation of Educational
Achievement (IEA) (2004) has also associated the learning of mathematics
with basic preparation for adult life. Also, mathematics is used for analysing
and communicating information and ideas to address a range of practical
tasks and real-life problems (Gray and Tall, 1999). Again, employers in the
engineering, construction, pharmaceutical, financial and retail sectors, have
all expressed their continuing need for people with appropriate mathematical
skills (Smith, 2005). This situation demands that every child should be
included in mathematics instruction right inside the classrooms (Sydney,
1995; Hill, 2001), at the secondary school level of education.
There is ample evidence that, all over the world, the performance of the
majority of secondary school students in mathematics is generally poor, as
variously reported by individuals and groups. For instance, on the
international scene, the National Research Council reported in the late 1980s
that students' study of mathematics is getting worse worldwide, especially with regard to
the enrolment and performance of minority groups in mathematics/science
courses (Ezeife, 2002). Locally, similar reports of students' poor
performance in mathematics were noted (Chief Examiners' Reports, 1993-
2000; Raimi, 2001; Igbo, 2004; Aguele, 2004). It is unfortunate that the
general performance of students in mathematics has been observed to be
poor (Agwagah, 2000; Ekele, 2002; Kurume, 2004). This situation cannot be
allowed to continue unchecked. Several researchers (Usman and
Harbor-Peters, 1998; Harbor-Peters, 2001; Ikeazota, 2002; Igbo, 2004)
have offered reasons for this consistent poor performance in mathematics.
Some attributed it to poor teaching of the subject by teachers. Specifically, accusing
fingers have been pointed at the way mathematics is taught in schools, and
the lack of relevance of mathematics content to students' real-life
experiences (Ezeife, 2002). Others reported that students detest
mathematics, suggesting that they are not working hard enough or taking
the subject seriously. For instance, students' inability to switch to a thinking
mode suited to the particular problem, for example, to alternate between
numeric, graphic, or symbolic forms of representing mathematical ideas,
deterred them from solving a wide range of mathematical problems (Tall, 2005).
Other researchers (Usman and Harbor-Peters, 1998; Unodiaku, 1998;
and Aguele, 2004) have also examined the incidence of errors as a determinant
of students' achievement in mathematics. Among these errors are the
process errors committed by students while solving mathematical
problems. Teachers' inability to diagnose these process errors, among other
factors, according to Harbor-Peters and Ugwu (1995) and Aguele (2004),
has contributed to the poor performance of students in both internal and
external examinations over the years. Therefore, if the poor performance of
students in mathematics is to be halted, these errors or weaknesses relating
to the process skills should be identified among JS3 students before further
learning of mathematics at the SS1 level. It becomes necessary, therefore, to
investigate the students' specific areas of weakness as indicated by the
process errors they committed. The mathematics readiness test (MATHRET)
indicates the frequency of these process errors, from which one can find out
the extent students entering the senior secondary school possess the
knowledge of the Js 3 mathematics curriculum contents in readiness for
senior secondary school mathematics work. This situation demands that a
mathematics readiness test ( MATHRET) need to be developed with which to
know whether the JS 3 students posses the background learning experiences
that can enable them cope with SS1 mathematics work. Okonkwo (1998)
developed and validated mathematics readiness test for JS 1 students. Also,
Obienyem (1998) identified mathematical readiness levels of JS1 entrants.
Both studies centred on pupils of primary six intending to begin a new
mathematics programme at the JS1 level. This, together with the paucity of
instruments for determining the readiness level of JS3 students intending to
begin a new mathematics programme at the SS1 level, for remedying the
mathematics deficiencies of Nigerian secondary school students, and for
improving the teaching and learning of the subject, motivated this researcher to
develop and validate a mathematics readiness test for senior secondary
school students.
Readiness is a condition which reflects possession of particular
subject-matter knowledge, or adequate subject-matter sophistication, for
learning further or increasingly complex tasks (Ausubel, Novak and Hanesian,
1978). Moreover, the quality of education received is a significant
determinant of pupils' developmental readiness, as well as of their
subject-matter readiness, for further learning (Ausubel et al., 1978). Lack
of readiness for a given task, therefore, signals failure in it. More so, when
a pupil is prematurely exposed to a learning task before he is adequately
ready for it, he not only fails to learn the task in question (or learns it with
undue difficulty), but also learns from this experience to fear, dislike, and
avoid the task (Ausubel et al., 1978). Thus, readiness becomes an essential
factor in any learning, which involves acquisition of sequential skills (Gagne,
1967; Zylber, 2000). Lack of prerequisite skills in a given task invariably
inhibits acquisition of subsequent related skills. This is particularly so with
Mathematics (Igbo, 2004) because of the nature of its structure (Piaget,
1979), the sequential procedure used in its instruction (Gagne, 1962; 1968)
and the hierarchical pattern of its organization (Igbo, 2004). Thus, effective
teaching and learning of mathematics may be achieved with reliable assessment
of readiness based on diagnostic information (the process errors students
commit in solving mathematics problems). A readiness test has been defined
as a test that determines the possession of prerequisite knowledge for a further
learning task (Ausubel et al., 1978). A diagnostic test has been defined as a test
that analyzes and locates specific strengths and weaknesses and
sometimes suggests causes (Burns, Roe and Ross, 1988). An achievement test,
on the other hand, measures what students have learned (Annie and
Mildred, 1999), and so cannot determine students' specific areas of strength
and weakness (process errors). It becomes necessary to investigate the
students’ specific areas of weaknesses as indicated by the process errors
they committed. Process skills are thought processes that are related to
cognitive development. They are commonly brought into use while
performing mathematical operations. The errors resulting from the violation
or wrong use of these skills are referred to as process errors (Payne and
Squibb, 1990). Harbor-Peters and Ugwu (1995) classified the errors
which students commit in geometrical theorems as conceptual, logical,
drawing/construction, translation and applied errors. Other researchers have
also investigated the process errors students committed in
some other aspects of mathematics, including inequalities
(Isinenyi, 1990), longitude and latitude (Ubagu, 1992), sequences and
series (Usman and Harbor-Peters, 1998) and simultaneous linear equations
(Unodiaku, 1998).
Process errors which mar students' readiness levels for the senior
secondary school mathematics programme, according to Usman and Harbor-
Peters (1998), Aguele (2004) and Ezugwu (2006), could be influenced by the
sex and school location of the students. Hence the need to investigate
whether the readiness level of JS3 students is likely to be influenced by
their sex and school location. Such investigation focused on
whether the school is urban or rural, and on the type of school attended,
whether public or privately owned. Rural inhabitants work with people they
know well and are accustomed to relationships of great intimacy, whereas
urban dwellers know each other in narrow, segmented ways that have little
to do with family or friendship (Encyclopaedia Britannica, 2003). For the
purpose of this study, schools located in places where the inhabitants
are accustomed to relationships of great intimacy and work with
people they know well are classified as rural schools, while urban schools
are classified as schools located where the dwellers know each other in
narrow, segmented ways that have little to do with family or friendship. A
private school was defined as one privately owned and cared for by an
individual, a group of people, or organizations such as higher
institutions, the army, the police or the road safety corps. A public school was defined as one
owned and cared for by a government, normally through its agency charged
with the responsibility of administration and supervision of educational
system.
Statement of the Problem
Research reports over the years have not only shown that senior
secondary school students perform poorly in mathematics but also indicated
that the process errors which students commit while solving mathematics
problems have contributed to their poor performance. These process errors
(weaknesses) committed by students in solving mathematics problems
contribute to their lack of readiness for further mathematics learning.
Some recent mathematics literature indicated that mathematics
readiness tests were developed and used to determine the readiness levels
of pupils advancing from primary six to Junior secondary school one (JSS1)
where they intend to resume a new mathematics programme. The literature
also suggested that a readiness test should be developed and used to
determine the readiness levels of JS3 students intending to begin a new
mathematics programme in senior secondary school mathematics work. One
may therefore ask the following questions: To what extent can a readiness test
in mathematics be developed and validated for senior secondary school
students? Would the instrument be sensitive to such variables as gender,
school type and location?
Purposes of the Study
The main purpose of this study is to develop and validate a
mathematics readiness test (MATHRET) for senior secondary school
students. Specifically, the study intended to achieve the following purposes:
1. To develop the instrument MATHRET.
2. To determine the validity of the instrument.
3. To establish the reliability of the instrument.
4. To find out the percentage of senior secondary school entrants who
are 'ready', 'fairly ready', or 'not ready' for senior secondary
school mathematics learning.
5. To find out if the mathematical readiness of senior secondary class
one (SS1) students varies on the basis of their sex.
6. To find out if school location (urban or rural) has an influence on the
mathematical readiness of entrants into the senior secondary school
mathematics programme.
7. To find out the influence of school type (private or public), in
terms of the mean errors, on the mathematical readiness of SS1
entrants.
Scope of the Study
The study covered junior secondary school students in Nsukka and
Obollo-Afor education zones. It focused on junior secondary three (JS3)
students because Obienyem (1998) and Okonkwo (1998) both suggested
that a mathematics readiness test should be developed and used to determine
the readiness of senior secondary school entrants for the senior secondary
school mathematics programme. The study also covered the junior secondary (JS3)
students' mathematics curriculum. The JS3 students' mathematical readiness
was investigated at the point of entry into the senior secondary school (SS1)
mathematics programme. The content areas covered by the study are
therefore all those covered by the National Curriculum in Mathematics for
junior secondary schools (JS3), namely:
i. Number and numeration
ii. Algebraic processes
iii. Geometry and mensuration
iv. Everyday statistics
The mathematics readiness test (MATHRET) was used to determine the
extent of mastery of the JS3 mathematics learning experiences, focusing on
the frequency of the process errors they committed.
Significance of the Study
The study was considered significant from the following perspectives.
Given the consistent reports of poor performance among senior secondary
school students, and the dire need to improve upon it and to prepare the JS3
students intending to gain entrance into the senior secondary school (SS1)
level, it is now necessary to provide a clear tool which could be used to
assess the readiness level of JS3 students at the point of entry into SS1. This situation
makes the development and use of the MATHRET quite necessary.
The MATHRET, as a readiness test in secondary school mathematics, will
help teachers and school administrators in the placement of students from
one level (JS3) to another (SS1). The frequency of the process errors the
students commit on the MATHRET at the point of entry into senior
secondary one will indicate their depth of coverage or mastery of the junior
secondary school mathematics curriculum; in other words, how ready they
are for senior secondary school mathematics work. By publishing the MATHRET
as a booklet and disseminating it to junior secondary school
mathematics teachers, the instrument will become accessible to teachers.
Teachers can then use it in determining the readiness levels of their students
for further learning of mathematics. The teachers and school administrators
can then carry out remedial instruction for the students found 'not ready'.
Identification of the weaknesses that affect the level of
mathematical readiness of entrants into senior secondary school
mathematics programme will enable mathematics teachers, educational
planners and mathematics educators take effective measures in harnessing
teaching and learning of senior secondary school mathematics. The teaching
and learning can be harnessed by promoting only the students identified as
being ‘ready’ or ‘fairly ready’ and then subjecting the students identified as
‘not ready’ to remedial instruction. The MATHRET remedial package aspect of
this work is very helpful in this direction (see Appendix: R).
Determination of the mathematical readiness of junior secondary class
three (JS3) students at the point of admission into the senior secondary school
mathematics programme will enable mathematics teachers to make
comprehensive and effective instructional decisions in pursuing the senior
secondary school mathematics programme. Such decisions will be based on
how 'ready', 'fairly ready' or 'not ready' the students are.
Determination of the mathematical readiness of senior secondary class one
(SS1) students at the point of entry into the senior secondary school mathematics
programme will indicate their specific areas of strength and weakness as well
as the nature of the weaknesses or deficiencies in the junior secondary school
mathematics curriculum contents. Based on this information, teachers can
then lay more stress on the specific areas in which students exhibit difficulties,
using various instructional strategies, especially during 'entry behaviour'.
The MATHRET will serve as an indicator of the quality of the junior
secondary school mathematics curriculum contents that were taught and
learnt. The diagnosis of the students' learning experiences may help to
reveal how prepared junior secondary school mathematics teachers are
to teach junior secondary school mathematics. This gives room for
teachers to effectively use the MATHRET remedial package entrenched in
this work (see Appendix: R).
Educational administrators can organise seminars, workshops,
symposia, etc. for junior secondary school mathematics teachers based on
the mathematical concepts or skills in which students exhibit difficulties, or
on the identified process errors students committed in solving mathematical
problems. Such seminars and workshops are very important because
weaknesses in particular concepts or skills suggest that the teachers are
handicapped in teaching those aspects well. Workshops and seminars
organised for teachers will increase their teaching skills, which in turn will
enhance the readiness level of the students, as teachers will tend to teach better.
Remedial programmes can be organised by educational and school
administrators for deficient students based on the areas in which the students
exhibit weaknesses (the areas in which they committed errors). Based on the
revealed areas of weakness, possible remediation can be prescribed for the
specific areas of deficiency.
Researchers can carry out research on more effective teaching
methods, instructional materials or strategies that can be applied in teaching
to address students’ areas of weaknesses in junior secondary school
mathematics. Emphasis should be laid on those skills in which students commit
more errors than others (i.e. errors that occur more frequently than
others).
Curriculum workers can allot more weight in the curriculum to areas where
students exhibit lack of readiness or deficiencies, and indicate teaching aids to be
used. They should also include possible methods teachers can use in
teaching the topics or units in which students show evidence of lack of mastery or
weakness.
Authors of junior secondary school mathematics textbooks can
organise and treat the skills or concepts sufficiently to assist
students in understanding and mastering the areas in which they are deficient.
This will help the students to be 'ready', or at least 'fairly ready', for
senior secondary school mathematics work.
Research Questions
The following research questions guided the study.
1. To what extent can the validity of the MATHRET be determined?
2. To what extent can the reliability of the MATHRET be determined?
3. What percentage of senior secondary school entrants are 'ready', 'fairly
ready' or 'not ready' (in terms of their mean frequency of errors on
the MATHRET) for senior secondary school mathematics learning?
4. To what extent do male and female students vary in terms of their
mean frequency of errors on the mathematics readiness test?
5. To what extent do students in urban and rural schools vary in terms of
their respective mean frequency of errors committed on the
MATHRET?
6. To what extent does school type influence the subjects' mathematical
readiness (in terms of their mean frequency of errors committed on
the MATHRET) for the senior secondary school mathematics programme?
Hypotheses
The following hypotheses guided the study. The hypotheses were
tested at the 5% level of significance.
1. Gender is not a significant factor influencing the degree of readiness of
SS1 entrants for senior secondary school mathematics, as determined
by the mean frequency of errors committed by male and female
students.
2. Location is not a significant factor influencing the degree of
readiness of SS1 entrants for senior secondary school mathematics, as
measured by the mean frequency of errors committed by urban and
rural students.
3. School type is not a significant factor influencing the degree of
readiness of SS1 entrants for senior secondary school mathematics, as
measured by the mean frequency of errors committed by students in
public and private schools.
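Each hypothesis compares mean error frequencies between two groups at the 5% level. As an illustration only (with invented data, and assuming a two-sample t-test; the study's own analysis may use a different procedure), the comparison can be sketched as:

```python
# Hypothetical sketch of testing hypothesis 1 (gender) with a pooled
# two-sample t-test at the 5% level. All data values are invented.
import math

def two_sample_t(a, b):
    """Pooled-variance t statistic and degrees of freedom for two samples."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)  # sample variances
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    t = (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

# Invented error counts on the test for two groups of students
male_errors = [5, 7, 6, 9, 4, 8]
female_errors = [6, 8, 7, 10, 5, 9]
t, df = two_sample_t(male_errors, female_errors)
# |t| is compared with the two-tailed critical value t(0.025, df) from
# statistical tables; if |t| exceeds it, the null hypothesis of no
# difference in mean error frequency is rejected at the 5% level.
```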
CHAPTER TWO
LITERATURE REVIEW
The literature review presented in this chapter is organized under the
following broad headings:
A. Theoretical Framework:
- Concept of Readiness
- Constructs of a Mathematics Readiness Test
- Purposes of Readiness Test
- Approaches Used in Developing Readiness Test
- Development of a Test that has Maximum Validity and Reliability
- Diagnosis of a Mathematics Readiness Test
- Interpretation of Mathematics Readiness Test Scores
B. Empirical Framework
- Studies on Readiness Testing Procedures
- Sex, Location and Type of Junior Secondary School Attended and
Readiness for Mathematics
C. Summary of Literature Review
Theoretical Framework
Mathematics and science educators, generally, have made tremendous
efforts, through research, to raise the attainment level in mathematics
education. One such area has been determining the readiness levels of
students for further learning through diagnosis of students' learning
experiences. One aspect of this diagnosis on which researchers have focused
serious attention is students' weaknesses (errors) in mathematics, which mar
their performance in the subject. This draws strength from Piaget's theory
in The Child's Conception of Number (1952). Piaget claims that
understanding class inclusion is an essential prerequisite for understanding
addition and subtraction. He further argued (p. 190) that children may
appear to understand the words "two and six makes eight", but will not
understand what this means until they understand how the set "eight" can be
broken down into its subsets "two" and "six" and then reconstituted again. It
is by knowing whether JS3 students have gained mastery of the
prerequisite skills in JS3 mathematics work (based on the process errors
they committed), with which they can understand SS1 mathematics
work, that a mathematics readiness test for senior secondary school (SS1)
students can be sought. This is the main thrust of this study.
Concept of Readiness
Readiness has been defined as a condition which reflects possession
of particular subject-matter knowledge, or adequate subject-matter
sophistication, for a particular learning task (Ausubel, Novak and Hanesian, 1978).
Without an adequate definition of readiness, it is difficult to determine
what academic programmes and supports are necessary to nurture and
enhance children's readiness (Ackerman and Barnett, 2005). Piotrkowski
(2000), in Ackerman and Barnett (2005), pointed out that at its broadest,
readiness is considered as the social, political, organizational, educational and
personal resources that support children's success at school entry. The abilities to
attend selectively, show appropriate social responses and stay engaged in
academic tasks are all implicated as factors that contribute to and define
"school readiness" (Rimm-Kaufman, 2004). Ausubel, Novak and Hanesian
(1978) conceived of readiness as a condition which reflects possession of
particular subject-matter sophistication for particular learning tasks. This
conception of readiness appears to suggest the enabling effect of this
"condition" in, say, a learning situation. Bruner (1966), in Meisels (2002),
pointed out that the idea of "readiness" is a mischievous half-truth, largely
because it turns out that one "teaches" readiness or provides opportunities
for its nurture; one does not simply wait for it. He concluded that readiness,
in these terms, consists of mastery of those simple skills that permit one to
reach higher skills. This conception suggests the necessity of "mastery" of skills
or knowledge in learning. When the mastery is not in the learner, the
level of what is learned (if any) is low. Similarly, David and Flavell (1989)
emphasized “mastery” in their conception of readiness in learning. They
asserted that the problem of the child's "readiness to learn" can in fact be
reduced to the question of whether he has mastered all the steps in the
sequence that precede and are prerequisite for the concept to be learned.
The School Goal Team Report (2006) conceived of readiness as a "condition or
state of the person that makes it possible for him to engage profitably in a
given activity". The report also conceived of it as "preparedness to respond
or react”. These concepts of readiness suggest the enabling effect of this
“condition” or “state”, in, say, learning, which is indicative of its importance
in learning. Burns, Roe and Ross (1988) described readiness as the notion
that a person needs to be in a state of preparedness (i.e. not just ready and
willing, but also intellectually and physically able) before he can learn new
knowledge or skills. Similarly, Meisels (2002) noted that waiting for children
to demonstrate their readiness by learning something spontaneously without
some intervention or preparation of the environment is, in his views,
fruitless. Here, again, readiness is being looked at as "preparedness" to
learn. Similarly, Hinde (1970) shared the view that readiness to learn has
mental, emotional and physical components when he looked at it as a
physiological condition, cognitive competence and basic knowledge which
result in certain actions being carried out in preference to all others; this also
appears to suggest the necessity of readiness for achievement, for when a
child is mentally and physically retarded he cannot participate gainfully in a
learning activity. Readiness also involves children's physical development, such
as rate of growth; health status, such as the ability to see and hear; and
physical abilities, such as the ability to move around the environment,
assisted or unassisted (School Goal Team Report, 2006). In agreement with
Burns et al.'s and Hinde's conceptions
of readiness as having mental and physical components, Ferguson (2002)
remarked that children learning about numbers and how to perform
arithmetic processes do not learn a new process properly until they have
developed the physical and cognitive competencies needed to understand it.
These conceptions of readiness suggest, therefore, that readiness is a state,
condition, mastery or preparedness of a person that enables him to profit
from learning activities and without which he may not likely achieve success.
However, the notion that a person needs to be in a "state of
preparedness" (Burns et al., 1988) before he or she can learn new knowledge
or skills does not guarantee an ultimately satisfactory outcome in learning such
new knowledge or skills. This is because one might be prepared to perform
an activity but still fail to perform it owing to some external stimuli which
hinder the process, in which case he will not enjoy the activity. If, for any
reason, he is forced to engage in the activity
![Page 18: CHAPTER ONE INTRODUCTION Background of the Study STANISLUS SOCHIMA.pdfseries ( Usman and Harbor- Peters, 1998) and simultaneous linear equations ( Unodiaku, 1998). Process errors which](https://reader034.vdocument.in/reader034/viewer/2022042400/5f0e9fee7e708231d4402336/html5/thumbnails/18.jpg)
18without readiness, it will lead to dissatisfaction or frustration. These
assertions suggest therefore, that although readiness is necessary in
learning, yet it is not a sufficient condition for learning to materialize.
The nature of readiness has also been looked at from a cognitive point of
view. Cognitive readiness refers to the adequacy of existing cognitive
processing equipment or capacity, at a given level of development, for
coping with the demands of a specified cognitive learning task (Ausubel, 1978).
Similarly, early signs of cognitive ability and maturity have been shown to be
linked to children’s performance in school, and for this reason this highly
intuitive approach to assessing readiness has been used as an indication that
a child is prepared for the school environment (Meisels, 1999). This appears
to admit that readiness has to do with the ability to profit from practice or
learning experience. Furthermore, an individual manifests readiness when
the outcomes of his or her learning activity, in terms of increased knowledge
or academic achievement, are reasonably commensurate with the amount of
effort and practice involved. It has been suggested that readiness is not a
simple construct but has to do with a broad spectrum of experience judged
necessary for effective performance within some area or level (Johnson,
1976). Johnson’s conception suggests that readiness is not a dichotomous
construct that can be scored either ‘yes’ or ‘no’, ‘pass’ or ‘fail’.
Readiness is not an end in itself; it is the beginning of an active teaching and
learning engagement (Samuel, 2002). It is associated with acquisition of
adequate knowledge or skill for building subsequent ones. It appears to
suggest a gradation, and readiness has been looked at as the “degree” to
which one has acquired some prior knowledge essential for learning a new
skill or acquiring new knowledge (Udegboka, 1987). In addition, it could be
argued that if performance at the next higher level results from readiness,
then differential performance at that higher level could be attributed to
differential readiness, other things being equal. Downs and Parking (1958)
considered “readiness” as the interval between the period when a child finds
difficulty in understanding the contents of instruction, owing to his young
age, intellectual immaturity and insufficient experience of the subject-matter,
and the period when his mind can cope with the work. This conception of
“readiness” suggests a stage of a child’s development that may not permit
him to participate gainfully in
learning. Similarly, Gibson (1972) noted that the term “readiness” is used
by educators to refer to a stage in development that must be reached before
a particular task can be accomplished. Downs and Parking’s (1958) and
Gibson’s (1972) conceptions of “readiness” suggest that the concept
discussed so far is a narrow one; for a more appropriate discourse, the
concept of maturational readiness should be examined to give readiness a
broader view.
Nwabuisi (1986) listed four components of readiness: the cultural,
the personal, the cognitive and the motivational. Nwabuisi’s conception of
readiness as having cultural and cognitive components tends to be an
integral aspect of the relevant preparatory training of which the School Goal
Team Report conceived readiness to consist. Moreover, children from
various cultures and with various experiences will express their
competencies differently and should be expected to show different patterns
of development (School Goal Team Report (SGTR), 2006). It is to the extent
that these cultures and experiences vary, from culture to culture and from
one individual to another, that the degree of readiness for any learning task
would differ across schools. Such differences in learning could be due to
micro-cultural differences as well as to differences in cognitive approaches
to the acquisition of knowledge or skills. Furthermore, SGTR’s “personal”
component of readiness incorporates maturation and perhaps other
physiological dispositions that are considerable factors in profitable
engagement in any learning activity.
The concept of readiness as discussed seems incomplete
without the idea of Mathematics readiness. Mathematics is a technical
subject in which performance requires the acquisition of different
combinations of skills or knowledge in different areas of the subject, such as
knowledge of numbers, shapes, and simple patterns (School Goal Team
Report, 2006). How, for example, can we classify someone who has not
acquired the necessary knowledge or skill in some aspects of Mathematics
but has mastered other aspects of it? Judging from Bruner’s conception of
readiness, Mathematics readiness may, therefore, be defined as the
all-encompassing personal factors capable of bringing about adequate
progress in Mathematics learning under stipulated instructional conditions.
Such requirements for personal factors that can lead to adequate progress in
learning under a stipulated condition may vary from one aspect of
Mathematics learning to another. It is to the extent that requirements for
personal factors vary for any given aspect of Mathematics that readiness for
Mathematics depends on the aspect of Mathematics chosen. Thus,
readiness, as Nwabuisi (1986) perceived, is not an absolutely bounded
concept but has to do with degree. He asserted that a child may be ready
for one kind of approach to Mathematics learning but not for another. A
child may likewise be ready for one content area or unit in a Mathematics
curriculum but not for other content areas or units in the same curriculum.
In summary, readiness is associated with a state or condition
necessary for one to tackle the next, harder work successfully, but often such
conditions are inadequate for a learner to participate fruitfully in a given
learning task. It is, however, considered to be associated with such factors
as motivation, preparatory achievement and maturity. It is to the extent
that a child’s state or condition is devoid of these factors that the child’s
readiness will fall short of the “absolute”. And since this is more observable
in real learning situations than in theory, readiness as a construct needs to
be defined in contextual terms. It then follows that an instrument to be used
in measuring readiness should more appropriately be evaluated in the
context of which factors of readiness it tends to measure (the intellectual,
the motivational, the physiological, or a combination of these factors).
Constructs of a Mathematics Readiness Test
Experience has shown that a person’s philosophical perspective on
the nature of Mathematics influences his choice of Mathematics content and
his approach to Mathematics teaching. It is this choice of content and
approach that invariably determines what qualifies as appropriate entry
behaviour. This may be related to the statement that the constructs
involved in the assessment of Mathematics readiness depend on one’s
perception of what Mathematics is and what Mathematics learning should be
able to achieve in one’s life. For instance, Stein (1969) conceived
Mathematics as completely a human creation. Like Stein, Wilder (1974) saw
Mathematics as being as much a function of the cultural demands of the
time as any of man’s other adaptive mechanisms. Hardy (1941), arguing the
contrary as far as Mathematics learning is concerned, observed that our
function is to discover or observe it, and that the theorems which we prove,
and which we describe grandiloquently as our ‘creations’, are simply our
notes of our observations.
In view of this conception of Mathematics, its learning would tend to
be restricted to the memorization of a set of algorithms, which then
become the standard solutions to problems, with little or no understanding
required of the child. Thus, a test of readiness for a subsequent topic in
Mathematics would need only a demonstration of the ability to reproduce
appropriate algorithms, without bothering about the ability to reason about
or analyse them. This may be why Lassa and Paling (1983) remarked that
people’s ideas of Mathematics depend a great deal on their experiences and
knowledge of the subject. This suggests that there is no spiritual or Divine
intervention in acquiring knowledge of Mathematics. The statement is in line
with the most popular opinions, which suggest that Mathematics is a human
creation whose major value lies in its usefulness as a tool by which man
obtains knowledge and analyses and addresses a range of practical tasks
and real-life problems (Dienes, 1972; Northern Ireland Curriculum Council,
1990; Jennifer, 1993). To Lassa and Paling (1983), Mathematics is a way of
using information, knowledge of shapes and measures, and the ability to
calculate, in thinking through, visualizing and using relationships.
Continuing, Dienes (1972) opined that in Mathematics man tries to establish
relationships between groups and relationships between relationships
themselves. At first glance this conception appears simple, yet it is
comprehensive enough for most practical purposes. It suggests the
inclusion of such basic ideas as the acquisition of information about terms
for, and meanings of, numbers and operations, as well as other techniques
of processing information, and the application of these in the secondary
school curriculum. It also suggests the need for Mathematics learning to
include the development of the ability to reason with these basic ideas.
Insofar as readiness involves not only maturation and motivation, but
also some preparatory training (English and English, 1958), a Mathematics
readiness test for senior secondary school should assess achievement in
those aspects of Junior Secondary School Mathematics required as an
essential foundation for senior secondary school Mathematics. In this direction,
Hiebert and Carpenter (1982) have argued that a readiness test useful in
classroom settings should provide information on a child’s capabilities across
classroom settings should provide information on a child’s capabilities across
a range of concepts or skills and not just readiness to learn only one
concept. Thus, such achievement assessed by a Mathematics readiness test
should be broad based. Again, such a test should also provide information
concerning the individual’s capabilities in such reasoning as are needed for
further achievement in future learning. In other words, a Mathematics
readiness test should not only provide information concerning the child’s
past learning in Mathematics but also a valid prognosis about future
Mathematics learning.
In the concept of readiness, it is basically assumed that mastery of a
subordinate concept is a prerequisite to further learning (Udegboka, 1987).
This implies that factors that affect concept acquisition and achievement
would influence readiness as well. Preparatory achievement has been
identified as a condition for readiness (English and English, 1958). Several
studies have found significant relationships between verbal competence and
performance in Mathematics. For instance, Balow (1964) worked with a
sample of 368 sixth-grade children in California and found a correlation of
.46 between their performance on the Arithmetic Reasoning and Reading
subscales of the Stanford Achievement Test. In a similar study, Muscio
(1962) found a correlation of .78 between the scores of 2006 sixth-grade
California children on the California Vocabulary Test and the Quantitative
Reasoning subscale of the Functional Evaluation in Mathematics Test.
Earlier, Murray (1949) had cited evidence suggesting that performance on a
geometry test, clearly dependent on spatial ability, was also closely related
to the verbal ability of his subjects. Harrison (1944) commented strongly
that vocabulary is an important factor in solving Mathematics word problems
and as such should be taught during Mathematics lessons. One of Johnson’s
(1944) findings, that pupils given specific training in Mathematics vocabulary
made gains in problem-solving ability, amounts to a confirmation of
Harrison’s comment. In another similar study, Vander Linder (1994) found
superior achievement in both arithmetic concepts and word problems by an
experimental group of nine fifth-grade classes over their control
counterparts, even with initial matching on I.Q. and achievement test scores
in vocabulary, reading comprehension, arithmetic concepts and arithmetic
word problems. The experimental group was taught a different list of eight
quantitative terms each week for 20 to 24 weeks before the final test was
administered to the group.
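Correlation coefficients of the kind reported in these studies (the .46 and .78 above) are Pearson product-moment correlations between two sets of scores. The following is a minimal computational sketch, assuming two purely hypothetical lists of pupils’ reading and arithmetic scores (illustrative values, not the data of Balow or Muscio):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Sum of cross-products of deviations from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Square roots of the sums of squared deviations
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical reading and arithmetic-reasoning scores for ten pupils
reading    = [42, 55, 38, 60, 47, 52, 35, 58, 44, 50]
arithmetic = [40, 58, 35, 62, 45, 55, 30, 60, 43, 48]

print(round(pearson_r(reading, arithmetic), 2))
```

A coefficient near 1 indicates that pupils strong in reading tend also to be strong in arithmetic, which is the pattern the studies above report at lower magnitudes.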
Another study, by Call and Wiggin (1973), using second-year
Algebra students, yielded similar results. An experimental group was taught
by Wiggin, an English language teacher with some training in teaching
reading but none in teaching Mathematics; Wiggin taught algebra,
stressing understanding of the meaning of words and of Mathematics
symbols. The control group was taught by Call, a trained Mathematics
teacher. The results of the study revealed that the experimental group
performed better on the criterion test even when initial differences in
reading and Mathematics test scores were statistically controlled for.
Considering this weight of evidence on the influence of reading ability on
performance, not only in the predominantly verbal aspects of Mathematics
but also in relatively non-verbal aspects such as geometry, a valid
Mathematics readiness test should incorporate a test of reading, either as a
separate subscale or as an integral part of other subscales. Suffice it to say
that a test of readiness for senior secondary school Mathematics should
include at least some word problems as distinct from non-verbal ones.
Romberg (1969) attempted to draw, from Harrison’s review of
80 Piagetian studies, findings related to Mathematics learning and
instruction. Romberg was optimistic that much of Piaget’s work on human
cognitive development should have particular impact on Mathematics
education, because most of Piaget’s observations were made on
Mathematics tasks such as geometry, logic and spatial relations. It has been
noted that the wholesale application of Piagetian tasks to Mathematics
readiness testing is flawed on the basis of empirical evidence (see Bailey,
1974; Michaels, 1977). It need be pointed out, however, that their use as
tests of readiness in those aspects of Mathematics that necessarily call for
logical reasoning might be promising, as Howlett’s (2001) investigation
suggests. Since secondary school Mathematics involves many situations of
this type, a readiness test for senior secondary school Mathematics should
be able to prognosticate a child’s performance on logical reasoning tasks
such as class inclusion and figure synthesis and analysis.
In summary, a readiness test for senior secondary school Mathematics
should be able to measure achievement in number manipulation and
Mathematics concepts, and the ability to apply such concepts. Moreover, it
should be able to prognosticate performance on numerical and perceptual
reasoning tasks. Thus, a readiness test should be an embodiment of items
that test these constructs. MATHRET is a diagnostic instrument composed of
items testing numerical and perceptual ability skills.
Purposes of Readiness Test
The primary purpose of a test is to constitute an objective check on both
students’ academic progress and their ultimate achievement, so that if either
is deficient, suitable remedial measures may be instituted. Thus, a really
adequate evaluation programme not only assesses the extent to which
student achievement realizes educational objectives but also attempts to
account for unsatisfactory achievement, irrespective of whether this inheres
in unsuitable instructional methods or materials, incompetent teaching,
inadequate student morale, or insufficient readiness and aptitude (Ausubel,
1963). Apart from its monitoring purpose, data from evaluation can be
used to facilitate students’ learning. One readiness test form, for instance,
states that the purpose of the examination is to obtain information to assist
the student’s academic adviser in making an informed recommendation for
the student’s first Mathematics course at the University of Vermont
(MRTR Form, 2006). Moreover, evaluation data can be used by Mathematics
teachers to formulate and clarify their expectations to students. It has been
shown that students distribute their study time and apportion their learning
effort in direct proportion to the predicted likelihood of various topics and
kinds of information being represented on the examination (Keislar, 1961).
It is evident, therefore, that if teachers wish to influence learning outcomes
in particular ways by the kinds of testing devices they use, they must
formulate their objectives clearly, communicate these objectives explicitly to
students, and construct reliable and valid measuring instruments that test
the degree to which the objectives are realized. For if a test is to be useful,
its scores must be both reliable and valid (Atkinson and Atkinson, 1993).
Different types of tests, such as essay or multiple-choice tests, can be
used to assess individual students’ readiness. For instance, the MDTP on-line
multiple-choice tests were designed to help individual students review their
readiness for certain Mathematics courses and may be useful in preparing for
some Mathematics placement tests used by California colleges and
universities (UCSD, 2006). Here the purpose is to ascertain how well an
individual will profit from some subsequent course of training (Hildreth, in
Leslie, 1968). If students possess an adequate subject-matter background,
the tendency is that they will profit from the senior secondary school
Mathematics programme.
Furthermore, an examination is itself an essential learning experience,
in the sense that it forces students to review, consolidate, clarify, and
integrate subject-matter prior to testing. Feedback from a test enables a
student to confirm, clarify and correct ideas, and identifies areas that need
further thought and study. Each on-line test includes a diagnostic scoring
report to help students identify strengths and weaknesses in particular topic
areas (UCSD, 2006). Merely identifying the correct answers on a multiple-
choice test significantly increases retest scores a week later (Plowman and
Stroud, 1992). This corrective function of feedback is extremely important,
since students often feel “certain” about incorrect answers (Ausubel, 1978).
Instructors teaching precalculus at Rowan University normally require their
students to take the Precalculus Readiness Test to evaluate their precalculus
background and preparation for calculus (Rowan University instructions,
2006). The purpose of a mastery test, on the other hand, is to separate
pupils into two groups: those who have achieved at least as high as a
certain level and those who have not (Ahman and Glock, 1971).
In addition, tests play a significant motivating role in school learning. The
desire for academic success, the fear of failure and the avoidance of guilt, to
mention a few, are legitimate motives in an academic setting, and it is
hardly likely that a student will study regularly, systematically and
conscientiously in the absence of periodic examinations. Frequent quizzing
markedly facilitates classroom learning (Ausubel, 1978). Again, through
testing students gain the experience of being subject to external appraisal
and consequently learn how to evaluate their own learning outcomes
independently; such self-evaluation enhances school achievement (Duel,
1958, in Ausubel, 1978).
A readiness test is also used to facilitate teaching. It is from readiness
test results that teachers obtain essential feedback on the
effectiveness of their instructional efforts. Such results reveal how
effectively teachers present and organize materials, how clearly they explain
ideas, how well they communicate with less sophisticated individuals, and
how efficacious particular instructional techniques or materials are.
Feedback from examinations identifies areas requiring further explication,
clarification and review, and is invaluable in the diagnosis of learning
difficulties, both individual and group (Ausubel, 1968, in Ezeife, 2002). Like
feedback from any other test, it indicates the curriculum contents which
students have mastered and the areas in which they lack prerequisite skills
or knowledge. For instance, in schools that have a formal first-grade
programme using basal textbooks early in the term, readiness tests help the
teacher screen out, at the beginning of the term, those pupils who would
almost certainly fail if they were to undertake the difficult work to come
(Hildreth, in Leslie, 1968). Similarly, the ACT Mathematics Test measures
what students have learned in three years of high school Mathematics,
including algebra 1, geometry, algebra 2 and some trigonometry, together
with students’ proficiency in applying the knowledge and skills acquired in
the first two years of high school science courses (Ferguson, 2002).
MATHRET is administered to beginning SS1 students within their first one or
two weeks of entrance into the SS1 level. Students who exhibit mastery will
be retained at this level, while students who show a lack of prerequisite
skills or knowledge may be subjected to a remedial programme or to
repeating the JS3 level. Apart
from the fact that MATHRET possesses diagnostic potential (UCSD, 2006) as
well as being suitable for assessing individualized instruction or groups of
students such as JS3 students, it assesses the readiness of JS3 students to
profit from the senior secondary school Mathematics programme. In other
words, it assesses whether the JS3 students are sophisticated in terms of
the adequacy of subject-matter knowledge that will enable them to profit
from the next higher level of schooling.
Usually, in most uses of a readiness test, information is gathered
for the purpose of improving the nature of plans, decisions and
adjustments. For instance, a decision could be taken regarding an
adolescent’s vocational plans. In particular, the information provided by a
good readiness test is one helpful basis for making necessary adjustments in
the first-grade programme (Hildreth, in Leslie, 1968). Moreover, the
Readiness Assessment is an instrument Ancilla College uses to determine the
best classes for students to take as they start their education there (Student
Success Centre, 2006). With regard to grade placement, for instance, it is
assumed that a student’s scores on achievement tests tell something about
how well he will do in one grade, level or class as compared with another.
Specifically, we might judge that a fifth-grade pupil with grade-equivalent
scores in reading and Mathematics at the 7.8 level, using local norms,
would, in terms of academic work, be more appropriately placed in the sixth
grade, or even in the seventh grade (Ausubel, 1963). A readiness
assessment thus places students in classes that are neither too hard nor too
easy for them (Student Success Centre, 2006). That is to say, JS3 students
who score at or above the cut-off point, as will be suggested in this study,
will be classified “ready”, while students who score below the cut-off point
will be classified “not ready”. Ahman, et al (1971) have said as much: the
purpose of a mastery test is to separate the pupils into two groups, those
who have achieved at least as high as a certain level and those who have
not. For vocational decisions, it is assumed that measured aptitudes are in
some way related to success or satisfaction in an occupation. For instance,
one might want to infer that because a youngster has a higher readiness
test score on verbal than on non-verbal tasks, he would be more successful
in a verbal than in a non-verbal kind of occupation.
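The dichotomous “ready”/“not ready” decision described above amounts to comparing each pupil’s score against a single cut-off mark. The following is a minimal sketch in which the cut-off value and the pupils’ scores are invented for illustration (the cut-off to be suggested by this study would be substituted):

```python
CUT_OFF = 50  # hypothetical cut-off mark; the study's own suggested value would replace this

def classify(score, cut_off=CUT_OFF):
    """Mastery-style dichotomous decision: 'ready' at or above the cut-off."""
    return "ready" if score >= cut_off else "not ready"

# Illustrative JS3 readiness-test scores
scores = {"pupil A": 63, "pupil B": 50, "pupil C": 41}
decisions = {name: classify(mark) for name, mark in scores.items()}
print(decisions)
```

Note that a pupil scoring exactly at the cut-off is classified “ready”, matching the “at or above” rule stated above.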
Readiness test results frequently remind schools of the large
differences among individuals who have been assumed to be at the same
level. Basically, information obtained from psychological testing has made
the selection and classification of individuals possible. Specifically,
achievement tests are used not only for educational purposes but also in the
selection of applicants for industrial and government jobs (Anastasi, 1968).
When applicants for a job are considered, it is helpful to estimate which
applicants are most likely to perform well later on (Ingule, Ruthie and
Ndombuk, 1996). In this manner, a readiness test is effectively employed as
an adjunct to skilful interviewing, so that test scores may be properly
interpreted in the light of other background information about the individual.
Indeed, readiness tests constitute an essential part of the total personnel
programme: from the assembly-line operator or filing clerk to top
management, there is scarcely a type of job for which some kind of
psychological test has not proved helpful in such matters as hiring, job
assignment, transfer, promotion or termination (Anastasi, 1968). MATHRET
can prove as helpful in such matters as determining the degree of students’
mastery of the junior secondary school Mathematics curriculum contents as
they advance into the senior secondary school Mathematics programme,
such that students who have an adequate background knowledge of the
junior secondary school Mathematics programme can be promoted, and vice
versa.
Furthermore, even prior to World War I, psychologists had begun to
recognize the need for tests of special aptitudes to supplement the global
intelligence tests. These special aptitude tests were developed particularly
for use in vocational counselling and in the selection and classification of
industrial and military personnel (Anastasi, 1968). Commenting on the
value of psychological tests in industry, Ruch (1948) pointed out that the
United States civil service found in one study that 93 per cent of the
appointees selected by psychological tests were more efficient than the
average employees selected by other means. Similarly, readiness tests used
for admission to schools or programmes, or for educational diagnosis, not
only affect individuals but also assign value to the content being tested
(ACES, 2006). Moreover, readiness tests are used to evaluate potential
employees, to revise curricula, to select applicants for college, to award
scholarships, and to place students in homogeneous sections of courses
(Goldman, et al, 1971). MATHRET can likewise be used to select JS3
students as they advance into the senior secondary school level. Again,
psychological tests such as MATHRET can be used to select, or to award
scholarships to, students who have shown evidence of mastery of the junior
secondary school Mathematics programme. Such students can then profit
from the senior secondary school Mathematics programme, since they are
‘ready’ for it.
33 Apart from vocational counselling and selection of personnel for
industrial works, readiness test is as much used to assist in guidance
counselling, and the individualization of instruction. Systematic
measurement and evaluation of aptitude, achievement, motivation,
personality, attitudes, and interests are necessary for individualizing
instruction and for purposes of individualizing guidance and counselling
(Goldman, 1971). This suggests that readiness tests can be used to detect the aptitude levels of students as well as the adequacy of the background knowledge they previously acquired. The forms of test results also vary from
pass/fail, to holistic judgments, to a complex series of numbers meant to convey minute differences in behaviour (ACES, 2006). The test results thus
point up to schools the necessity of trying to individualize instruction within
the group and they provide an objective basis for starting differentiated
instruction (Trazler, Jacobs, Selover, and Townsend, 1973). Invariably, this will assist the teacher in preparing teaching materials and methods to suit the ability levels of the learners and, more importantly, in individualizing instruction and in guiding and counselling the students individually. We must
know the current aptitude levels of pupils and the current state of their
subject-matter knowledge before we can “prepare curriculum materials
appropriate to ability levels (and) adopt teaching methods to the learners
and the content to be learned” (Adkins, 1958). It is test results that can give us such information. Based on this information we can, for instance,
determine grade placement, promotion and grouping of the learners.
Precisely, a readiness test can be used to promote those that have demonstrated adequate subject-matter knowledge as “ready”, or to group those students that lack mastery of a given subject-matter content together as “not ready”. Those that are “ready” can profit from the next higher level of school, and so should be promoted, while those grouped “not ready” need to be subjected to a further remedial programme. In the absence of such information, intelligent decisions cannot be made about grade placement, grouping, the pacing of study, promotion, choice of courses, academic and vocational goals, and remedial work (Goldman, 1971).
Ideally, the information provided by a good readiness test is one helpful basis for making necessary adjustments in the first grade programme (Hildreth in Leslie, 1968). A Test Readiness Review, for instance, is used to assess whether a contractor’s progress is adequate to meet the next milestone (GPI, 2006). It is on the basis of such information that adjustment can be made, probably by individualizing instruction or organising remedial programmes for the deficient students.
In counselling, the need for information has necessitated the use of readiness tests. Bordin’s (1955) classification of counselling needs includes dependence, lack of information, choice-anxiety, and lack of assurance. Quite frequently, especially with adolescents who come voluntarily for counselling, lack of confidence in their ability to make decisions leads to an attempt to become dependent on the counsellor. This dependence is noted primarily in those instances in agencies and schools in which an individual comes (or is sent) seeking help with a particular problem. The original purposes of assessment were intellectual evaluation and diagnostic labelling (Greenspoon and Gerten, 1986). Many schools apply batteries of tests on a
programmatic basis, giving the same battery of achievement, aptitude, and
other tests to entire classes (Goldman, 1971). So a candidate who, say, lacks assurance about whether he or she will succeed in a given programme may be given a readiness test. Information from the readiness test result will determine whether the candidate should undertake the programme or go back to acquire background knowledge of the subject matter.
Psychological tests can be used in solving a wide range of practical
problems. For instance, examples may be drawn from studies on the nature
and extent of individual differences, the identification of psychological traits,
the measurement of group differences and the investigation of biological and
cultural factors associated with behavioural differences. For all such areas of
research – and for many others – the precise measurement of individual
differences made possible by well-constructed tests is an essential
prerequisite (Ausubel, 1968).
Furthermore, readiness tests could be used to facilitate conversation. This is most relevant with counselees who find it difficult to begin talking, particularly when strong feelings or long-suppressed thoughts are involved. In this kind of situation, Kirk (1961) suggested that tests such as a sentence completion or the Thematic Apperception Test may be especially helpful. This suggests that if the counselee completes the sentence correctly, he is ‘ready’ for the conversation; otherwise, he lacks the necessary prerequisite that would enable him to profit from the conversation. Moreover, counsellors use tests to lay the groundwork for later counselling. In this case, high school and college counsellors spend a considerable amount of time in “routine” interviews with students; included
frequently in such interviews are reports of results of tests taken at the time of admission or at other group-testing occasions (ACES, 2006).
So far, the review suggests that readiness tests can and need to be used for various purposes and in all aspects of human endeavour, in both the public and private sectors, and especially in identifying students’ areas of strength and weakness in Mathematics.
Validity of a test.
Validity of a test has been defined as the degree to which a test
measures what it is supposed to measure (Georgetown, 2006). According to
this statement, a ruler may be a valid measuring device for length, but isn’t
very valid for measuring volume. A readiness test has been looked at as a test which should separate those who are capable of learning a particular concept from those who are not (Hiebert and Carpenter, 1982). In the classroom situation, a readiness test can be useful only to the extent that it is relatively free from both Type I and Type II errors. Here, a Type I error consists in classifying as ready a student who is actually not ready, and a Type II error in classifying as not ready one who is ready. In appraising these assertions, the emphasis should be on “relatively”, because no test is error-free.
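The two error types can be made concrete with a small numerical sketch. The sketch below is purely illustrative: the cut-off score and the student records are invented, and do not come from MATHRET or any cited study.

```python
# Hypothetical illustration of Type I and Type II errors in readiness
# classification. The cut-off score and student records are invented.
CUT_OFF = 50  # a student scoring >= 50 is classified "ready"

# (test_score, actually_ready) pairs for six imaginary students
students = [
    (72, True),   # correctly classified ready
    (55, False),  # Type I error: classified ready, actually not ready
    (48, True),   # Type II error: classified not ready, actually ready
    (30, False),  # correctly classified not ready
    (65, True),
    (41, False),
]

# Type I: test says "ready" but the student is not ready
type_i = sum(1 for score, ready in students if score >= CUT_OFF and not ready)
# Type II: test says "not ready" but the student is ready
type_ii = sum(1 for score, ready in students if score < CUT_OFF and ready)

print(f"Type I errors (false 'ready'):      {type_i}")
print(f"Type II errors (false 'not ready'): {type_ii}")
```

A valid readiness test is one whose cut-off keeps both counts low relative to the number of students classified.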
The basic concept in test theory states that the score obtained by any individual on a test has two components: his true score (the exact measure of his ability) plus the error associated with his score on this particular test (Walter and Nancy, 1971). Similarly, Guilford (1954) stated that among a group of testees, all test scores are partly due to error variance. The most that a test constructor can do is to minimize error variance, not to eliminate its occurrence entirely. From a theoretical point of view, one way of minimizing error variance is to maximize reliability (Nunnally, 1981). However, maximizing the reliability of a test is a necessary, but not a sufficient, condition for guaranteeing minimum error variance.
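The true-score decomposition just described can be put in numbers. A minimal sketch, with invented variance figures (not drawn from any cited study), showing how observed-score variance splits into true and error components and why minimizing error variance is the same as maximizing reliability:

```python
# Classical test theory sketch: observed score X = true score T + error E.
# The variance figures below are invented for illustration only.
var_true = 80.0    # variance of true scores across testees
var_error = 20.0   # error variance (assumed independent of true scores)

# When T and E are uncorrelated, observed variance is their sum.
var_observed = var_true + var_error

# Reliability is the proportion of observed variance that is true variance,
# so driving error variance down is exactly what drives reliability up.
reliability = var_true / var_observed

print(f"observed variance: {var_observed}")  # 100.0
print(f"reliability:       {reliability}")   # 0.8
```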
Georgetown (2006) upheld that validity is the extent to which a test actually measures what it says it measures. Georgetown further remarked that it is perhaps more important that a test serves the purpose for which it is designed. A readiness test such as MATHRET should therefore be capable of separating students into two distinct groups: those that have acquired the prerequisite knowledge (‘ready’) for the next higher tasks, and those that lack the prerequisite knowledge or skill, otherwise referred to as ‘not ready’. To the extent that a readiness test can do this, to that extent it is a
valid readiness test (Hiebert and Carpenter, 1982). Therefore, the validity of a test has to do with what the test measures and how well it does so (Anastasi, 1976). Thorndike and Hagen (1977) thus recommended that judgments of test validity be made in terms of the specific functions the test is intended to serve, and not in general terms. This recommendation suggests, for instance, that a valid Mathematics readiness test may not be valid as a general aptitude test.
Test theory suggests that the concept of validity assumes that test score variance can be broken into variance in some trait and error variance (Guilford, 1954). Furthermore, true variance can be divided into two parts: variance shared with other traits and variance peculiar to a particular test. In view of this theory, if two tests share a common factor, the implication is that scores on them are inter-correlated and that a score on one test can be used to predict a score on the other. However, each test possesses another factor, which is specific to it. How valid a test is depends on its purpose – for example, a ruler may be a valid measuring device for length but is not very valid for measuring volume (Georgetown, 2006). Therefore, each test score
is a composite of different weights of scores in some common factors, the
error factor and unique factor (Guilford, 1954). Test validity may be defined
in terms of what a test score predicts or measures. Such a test score is a valid predictor of anything (except itself) with which it significantly correlates. The basis of validity, therefore, is common factor variance.
Test reliability, however, is considered on the basis of true variance, which equals the sum of specific factor variance and common factor variance. In view of this conception, validity may be regarded as the proportion of test score variance which is common factor variance, whereas the reliability of a test is the proportion of test score variance which is explained by true variance. Therefore, reliability sets the limit of validity (Guilford, 1954).
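Guilford’s decomposition can be illustrated numerically. A minimal sketch, with invented variance figures (none taken from any cited study), showing why validity, defined as the common-variance proportion, can never exceed reliability, the true-variance proportion:

```python
# Sketch of the variance decomposition described above.
# All variance figures are invented for illustration only.
var_common = 55.0    # variance shared with other measures (common factors)
var_specific = 25.0  # variance peculiar to this particular test
var_error = 20.0     # error variance

var_total = var_common + var_specific + var_error   # total test score variance
var_true = var_common + var_specific                # true variance

validity = var_common / var_total      # proportion of common factor variance
reliability = var_true / var_total     # proportion of true variance

# Because common variance is one part of true variance, validity can
# never exceed reliability: reliability sets the ceiling on validity.
assert validity <= reliability
print(f"validity = {validity}, reliability = {reliability}")
```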
The literature identifies a number of approaches to establishing validity in readiness testing. Each of these approaches deals with one aspect of validity. One such aspect is construct validity. Construct validity is the extent to which a test actually measures what it claims to measure (Thorndike and Hagen, 1977) or adequately measures the underlying construct (Ibecker, 2006). Two tests with reliabilities of .81 and .89, said to measure alphabetizing ability, were found to correlate only .09 and .00 with a criterion of alphabetizing on the job (Mosier, 1947). Criterion reliability was found to be .40. This reveals that even though the two tests measured whatever they claimed to measure consistently, that construct was not alphabetizing ability. Therefore,
they had no construct validity for the measurement of alphabetizing ability. The reliability estimates of both tests were less than .90, probably owing to poor construction of the tests, for well-constructed tests usually have a reliability coefficient of r = .90 or more (Atkinson and Atkinson, 1993). Other authors have also described approaches to construct validation (Guilford, 1954; Thorndike and Hagen, 1977; Beihler and Snowman, 1990; Ibecker, 2006). Criterion-related validity has been looked at as the extent to which a
test can predict an individual’s behaviour in a specified situation (Guilford,
1954). Moreover, criterion-related validity refers to how strongly the scores on the test are related to other behaviours (Ibecker, 2006). This is usually determined empirically by correlating scores on the test with scores on some criterion, which should be a direct and independent measure of what the test is designed to predict (Anastasi, 1976). Another aspect of validity is
content validity. It is the extent to which a test adequately covers a representative sample of some behaviour domain of interest. Its assessment is usually rational, and it is commonly used in evaluating achievement tests (Anastasi, 1976).
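The empirical procedure behind criterion-related validity, correlating test scores with an independent criterion, can be sketched briefly. The scores below are invented for illustration; the Pearson product-moment coefficient is the same statistic the studies reviewed later (e.g. Arowosegbe, 1990) report.

```python
# Criterion-related validity sketch: correlate readiness test scores with
# an independent criterion measure. All scores below are invented.
from math import sqrt

test_scores = [45, 52, 60, 48, 70, 66, 55, 40]  # hypothetical readiness test
criterion =   [50, 55, 65, 46, 75, 70, 58, 44]  # hypothetical later achievement

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

validity_coefficient = pearson_r(test_scores, criterion)
print(f"criterion-related validity coefficient: {validity_coefficient:.2f}")
```

A coefficient near 1 would indicate that the test ranks students much as the criterion does; a coefficient near 0, as in the Mosier (1947) alphabetizing example, would indicate no criterion-related validity regardless of how reliable the test is.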
Nunnally (1981) suggested that a readiness test could be validated against any criterion it is designed to predict, provided such a criterion has the qualities of relevance, freedom from bias, reliability, and availability demanded by Thorndike and Hagen (1977). Similarly, a test may be used to predict future behaviour provided you specify what you want to predict, that is, the criterion (Ibecker, 2006). This procedure can be carried out by correlating the scores on the criterion with the corresponding readiness test scores. Predictive validity, that is, the ability to predict, is the crux of the matter
in any measure of readiness, as in all other tests designed for selection and placement. Nunnally (1981) stressed that predictive validity is determined exclusively by the degree of correlation between the two measures involved. In this sense, predictive validity is the degree to which a current measure (often called the predictor) is related to the variable of interest (the criterion), which is not observed until some time in the future (Ingule, Ruthie and Ndambuki, 1996). Nunnally (1981) further remarked that if the correlation is high, no other standards are needed. This suggests that although the application of theory and common sense is necessary in test construction, empirical analysis is the final determinant of predictive validity.
There is a danger in validating a test on purely empirical considerations; likewise, Mosier (1947) and Ibecker (2006) argued that there is danger in validating a test on purely rational grounds, for a test can only be as valid as the criterion against which it has been validated. A study aimed at predicting the success of research scientists in administrative positions was cited by Travers (1951) in Hill (2001). The study revealed that research scientists from rural backgrounds and from families of skilled craftsmen tended to be more successful administrators than their counterparts from large-city backgrounds and retail-merchant families. Although the result of the study was empirical, the finding was not meaningful, because it was discovered that the judges who rated the success of the experimental groups had interests to protect. Using such ratings as a criterion for validating a test would perpetuate bias, because it would yield systematic variance irrelevant to the construct of interest. The findings of Mosier (1947) and of Travers (1951) in Hill (2001) suggest that validation by
rationalization alone or by empiricism alone is subject to error. The implication is that each of these approaches should be checked against the other. On this conception, if an item in a test acquired an unreasonable scoring weight by the empirical approach, that item should be dropped and used only after its validity has been rationalized. This is Guilford’s (1954) submission.
The findings of Travers (1951) in Hill (2001) call for good design in empirical studies, for a good design hardly introduces bias into one’s findings, but a poor design does. This point is worth mentioning especially in a country such as Nigeria, where continuous assessment scores could be hypothesized as a relevant criterion measure for validating a readiness test. By the true nature of continuous assessment,
which is systematic and comprehensive (Ipaye, 1982; Federal Ministry of
Education, Science and Technology, 1985; Adedibu, 1988; Odili, 1991), it
should produce a valid measure of student attainment of educational
objectives. Moreover, evidence of the validity of continuous assessment has been demonstrated in various studies (such as Okedera, 1980; Arowosegbe, 1990; Odili, 1991), which revealed significant positive correlations between continuous assessment scores and end-of-course examination grades. However, other studies (such as Ali and Akubue, 1988; Iwuala, 1988) have found sufficient reason to question even the reliability (a prerequisite for validity) of continuous assessment scores in the Nigerian school system. Little wonder, then, that disagreement exists among the researchers. A critical study of the situation, however, shows design flaws in some of the studies that concluded that continuous assessment scores are
valid scores. For instance, Arowosegbe (1990), working with a sample drawn from Bendel State, computed Pearson product-moment coefficients of correlation between students’ continuous assessment scores and their junior school certificate examination scores. Arowosegbe used scores obtained in English studies, Mathematics, social studies and integrated science, whereas the Odili study involved integrated science scores. Both studies revealed significant results, and the authors concluded that continuous assessment scores had predictive validity.
However, since in the school system continuous assessment forms 30 per cent of the final certificate examination score (Implementation Committee, National Policy on Education, 1992, 1994), correlation between these two sets of scores is a matter of mathematical and logical necessity. Such a ‘significant’ correlation does not, therefore, necessarily attest to the validity of continuous assessment. It would attest to that validity only if the studies had correlated continuous assessment scores with the final examination scores alone, before the two were combined to produce the published results which the researchers used, provided, of course, that the said examinations themselves validly measured achievement.
Okedera’s (1980) approach was not consistent with the above. Okedera administered a teacher-made achievement test to 30 adult learners (10 female and 20 male) in an adult literacy class in Ibadan two weeks before they sat for the final school leaving certificate examination. The same test was administered to the same subjects a week later. Test and retest scores were each correlated with the final school leaving certificate examination scores, and the results were found to be significantly positive. The researcher concluded that the result of the study was evidence of the predictive validity of teacher-made tests, and suggested that the primary school leaving examination could be abolished in favour of continuous assessment. This design is riddled with flaws, which constitute a serious blow to the validity of the conclusions.
One such flaw is the sample: a small group that was not randomly drawn, with only 30 subjects selected from one classroom, cannot but threaten the external validity of the results (Campbell and Stanley, 1966 in Zuriel, 2004). Another flaw is that the two administrations of the test took place not more than two weeks before the certificate examination, which makes the result difficult to accept as evidence of predictive validity. The main concern with these and many other predictive measures is predictive validity, because without it they would be worthless (AllPsych Online, 2006). On this ground, this researcher argues that, for practical significance, the interval between administrations of the predictor and criterion tests should be at least a school year. Otherwise, school heads and administrators may base their decisions on predictions of students’ achievement over periods as short as two weeks.
A study conducted by Okeke (1985), in which 1040 students that sat for the West African School Certificate Examinations in Anambra State in 1981/82 and 1982/83 were used, tends to be better designed than Okedera’s (1980), for it correlated the students’ scores in their school certificate mock examinations with the corresponding scores in the substantive school certificate examination. The
result of the study was inconclusive. Although little variation was noted from one student to another, correlations varied markedly from school to school. This may be evidence of inter-school differences in the degree of (predictive) validity of mock school certificate examination scores, and even of teacher-made tests. The study failed to specify the intervals between the mock examination and the substantive school certificate examination as taken in the different schools sampled for the study. Again, this situation casts some doubt on the validity of continuous assessment scores, which depend solely on the quality of teacher-made tests.
However, teachers have an idea of what could be done about continuous assessment, so its prospects are within expectation. Evidence that teachers have an idea of what could be done (see Harbour-Peters and Nworgu, 1990) and could do it (Ipaye, 1982) has been provided. Nevertheless, a lot more needs to be done in this direction to realize the potential of continuous assessment. As it stands now, in a developing country such as Nigeria, it appears too early to use continuous assessment scores as a criterion measure in the validation of a readiness test such as MATHRET.
MATHRET, like any other test, may also be validated against ratings (Anastasi, 1976). One major problem associated with ratings is that they are prone to error. Kissane (1986) cited Endean and Cares, who found that teachers are poor predictors of mathematical aptitude. Their findings revealed a discrepancy between teachers’ nominations of Mathematically able Year 8 Australian students and the students’ achievement on the Mathematical scale of the Scholastic Aptitude Test (SAT-M), with some bright students (using SAT-M as the criterion) overlooked. Still on teacher nomination, Stanley (1976) reported that SAT-M scores obtained several years earlier predicted Year 11 students’ success in a difficult Mathematics competition much better than teacher nominations did. This reveals that ratings may not pass the “relevance” test for valid criteria (Thorndike and Hagen, 1977). Apart from failing to be relevant, they also fail the “freedom from bias” test (Travers, 1951).
Some expert opinions insist that this is not a problem as such, and that it is needless to validate every test empirically. For instance, Ebel (1979) wondered why empirical evidence is demanded of the validity of most tests. Ebel further argued that in all validation studies, the validity of some test (especially the criterion measure) used in the process ends up being only rational; in other words, the validity of such tests depends only on “expert” judgement. Nevertheless, it seems more reasonable to use the same expert judgement that initially validated the criterion test to validate the test of interest directly instead, since the validity of a test cannot exceed that of its criterion measure (Guilford, 1954). With the same expert judgment used in the first instance to validate the test directly, the length of the validation chain is reduced. That this approach may be regarded as wholly rational, owing to the fallibility of “expert” judgement, hardly flaws Ebel’s argument; if it did, the flaw would occur as well in the criterion measure already considered “valid” initially. The researcher is quite consistent with Ebel’s (1979) assertion on this point. Earlier, Ebel (1961) had disagreed with correlating some criterion measure with test scores as evidence of the validity of a test, for three reasons.
First, situations encountered in Education and Psychology are influenced by too many interacting variables that change too fast for any mathematical model to keep effective track of or explain satisfactorily; mathematical models like correlation require a degree of stability that cannot be guaranteed in any empirical study in Psychology or Education. Second, validity is not necessarily a quality of a test but depends on the use to which the test is put. This suggests that Ebel wished the concern of validity to be with the use a test is put to, and not with the test itself per se. Third, Ebel pointed out that criterion measures do not exist for most tests but have to be made by the same process by which the tests of interest are made, and therefore first need validation of their own. Ebel consequently recommended that expert judgment be used as evidence of content (including construct) validity, except for tests whose criteria are obvious, easy and simple to measure. Validity evidence will continue to gather, either enhancing or contradicting previous findings (ACES, 2006).
Although the point has been made that validity is not principally a quality of a test but has to do with the use to which the test is put, one need not conclude that this nullifies the importance of empirically determining the indices of validity. Regardless of the form a test takes, its most important aspect is how the results are used and the way those results impact individual persons and society as a whole (ACES, 2006). It also needs to be stressed that the test user should have some idea of the degree of confidence to be placed in the use of a test. Validity, remember, is a matter of degree rather than all or none (ACES, 2006). This degree of
confidence is given by a combination of sufficient standardization information and a validity index associated with it. One reason to consider test scores when making academic decisions is that large-scale tests are administered under standardized conditions (ACES, 2006). Lyman (1986) insisted that any good test manual should contain both. Moreover, Ebel’s (1961) claim that criterion measures do not exist for most tests seems to be nullified by the fact that it is common variance that is at issue in validity (Guilford, 1954; Kerlinger, 1973). Insofar as no perfect criterion measure exists (unless, of course, the test scores themselves), any trait whose scores correlate significantly with those on a given test is a valid criterion (Nunnally, 1981). Indeed, the higher the correlation coefficient, the better the validity is considered to be. Further, correlation yields a test of the hypothesized relationship between criterion scores and test scores. Yet this does not constitute evidence that the test measures the construct it is tagged as measuring (Kerlinger, 1973); it only provides evidence that the tests measure the same construct, while the correlation coefficient provides a measure of the extent to which they measure it, as well as of the degree of confidence one can have in using one as a predictor of the other. The construct which the test actually measures can be inferred from an examination of the tasks that comprise the test (Ebel, 1961).
Furthermore, the construct a test measures is defined by the nature and variety of tasks it expects subjects to perform (Ebel, 1961). If we have a difficult time defining the construct, we are going to have an even more difficult time measuring it (AllPsych Online, 2006). As a result, if what is to be measured is given in quantitative form, as in the process of measurement found in the physical sciences, what a test purports to measure coincides with what it actually measures. Consequently, the demand for empirical evidence of validity is nullified. This situation demands nothing more of the test constructor than the claim that his test measures testees’ competence in carrying out the type of tasks that make up the test, the reason being that test scores constitute the best criterion we can get of what a test is intended to measure (Ebel, 1979). One major problem here is that this approach may make it very difficult to compare tests. Since rationalized tests measuring the same construct may share no common variance after all (see Mosier, 1947), it is not certain that any two given tests really measure similar or the same constructs, an assurance which is required for test classification. This therefore confirms Guilford’s (1954) call for both empirical evidence and rational judgment in test validation.
Ebel (1979) disagrees with the predictive studies common with selection
tests (such as promotion or admission tests), pointing out that a great
number of unforeseen and unmeasured variables affect them so much that
our best predictions can only be imprecise and crude. Yet disagreement with
Ebel (1979) could be in order, since what obtains in much testing (including
achievement testing) is a prediction of anticipated "success" in a subsequent
"programme". For a test to be a valid screening device for some future
behaviours, it must have predictive validity (AllPsych Online, 2006).
Predictive studies that estimate the degree of confidence (or risk) involved
in using such tests are therefore preferable to cases in which predictions are
made without any knowledge of that degree of confidence (or risk). The
researcher is of the opinion that efforts at validation should continue to seek
agreement between the empirical and rational approaches and provide test
users with the degree of risk involved in using any test.
Reliability of a test.
Reliability is synonymous with the consistency of a test, survey,
observation, or other measuring device (AllPsych Online, 2006). The
concept of reliability assumes that whatever a test measures must be some
enduring trait among individuals (Guilford, 1954). Moreover, differences
among individuals with regard to possession or non-possession of such a
trait are true differences and constitute true variance among the individuals.
As a result, different “administrations” of the test should rank the individuals
in a consistent manner on the trait if the test is to be said to possess high
reliability. If fluctuations occur among individuals in a differential manner,
Guilford is of the opinion that different “administrations” would rank
individuals differently, leading to a tag of low reliability for the test. Such
differential rate of change among the individuals constitutes error variance.
The reliability of a test reflects the extent to which this error variance has
been reduced (Nunnally, 1981). Reliability is thus the proportion of test
variance that is true variance: the proportion of the variance of scores
obtained on a test which is due to true differences (rather than random
differences) among the testees on the trait that the test measures.
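The relationship just described can be sketched numerically. The following is a minimal illustration, with invented variance figures that are not drawn from the study:

```python
# Sketch of reliability as the proportion of total score variance that is
# true variance (hypothetical variance figures, for illustration only).
def reliability(true_variance, error_variance):
    """r_xx = true variance / (true variance + error variance)."""
    total_variance = true_variance + error_variance
    return true_variance / total_variance

# If 80 of 100 units of score variance reflect true differences among
# testees, the reliability is 0.80:
print(reliability(80.0, 20.0))  # -> 0.8
```

Reducing the error variance term, as the passage notes, is what raises the coefficient.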
Although reliability is necessary, it is not a sufficient guarantee that a test
is of high quality. It is only to the extent that test scores are reliable
that they can be useful for any purpose at all (Ebel, 1968). It is also to that
extent that the test measures whatever it measures with precision.
Reliability is essential to both the testee and the test user: to the testee,
because decisions about him depend on the test scores; and to the test
user, who should have an idea of the degree of confidence he can place on
the scores generated from the test as he makes decisions based on them
(Ebel, 1979).
Measurement of reliability is carried out in terms of a correlation
coefficient. In operational terms, Ebel (1979:275) defined test reliability
thus: the reliability coefficient of a set of scores from a group of examinees
is the coefficient of correlation between that set of scores and another set of
scores on an equivalent test obtained independently from the members of
the same group. From this definition, it can be deduced that reliability is not
an intrinsic property of a test per se but depends on the particular group of
examinees tested, for the size of the reliability coefficient can be affected by
the level of ability of the testees and the range of their talents as well.
Reliability computed via coefficient alpha usually takes values from 0.00 to
1.00, with 1.00 indicating identical ordering between the test and the
hypothetical equivalent form; coefficient alpha may also take values less
than zero (WISC, 2006). In addition, being a correlation coefficient, the
reliability coefficient
yields information on the degree to which relative rather than absolute
values of pairs of scores on each of the testees agree. According to Guilford
(1954), the standard approach to reliability aims at determining the extent
to which a test correlates with itself. In the literature, different approaches
have been adopted in answering different questions. The procedure in each
case involves using deviations and correlations of two sets of scores from
the same test and sample. Such approaches identified in the literature
include test-retest (Anastasi, 1976; Ebel, 1979; Watson et al., 1991 in
Ibecker, 2006; AllPsych Online, 2006), reader reliability (Ebel, 1951),
alternative form (Thorndike and Hagen, 1977), split-half reliability
(Cronbach, 1951; Horst, 1951; Guilford, 1954; Georgetown, 2006), Kuder-
Richardson (Ebel, 1979; Cronbach, 1957; Horst, 1953; Guilford, 1965) and
analysis of variance (Hoyt, 1941; Guilford, 1954; Engelhart, 1972).
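As one illustration of a single-administration estimate from this family of approaches, coefficient alpha can be computed from the item variances and the variance of total scores. This is a minimal sketch, not taken from the study, and the item data below are invented:

```python
import statistics

def cronbach_alpha(item_scores):
    """Coefficient alpha from a list of per-item score lists, where
    item_scores[i][j] is testee j's score on item i."""
    k = len(item_scores)  # number of items
    sum_item_vars = sum(statistics.pvariance(item) for item in item_scores)
    totals = [sum(testee) for testee in zip(*item_scores)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Three items that rank four testees identically yield alpha = 1.0:
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(round(cronbach_alpha(items), 2))  # -> 1.0
```

With less consistent items, the ratio of summed item variances to total variance rises and alpha falls, which is the sense in which alpha reflects internal consistency.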
Choice of Coefficient of Reliability.
A reliability coefficient is often the statistic of choice in determining the
reliability of a test (AllPsych Online, 2006). Some practical considerations
determine which particular measure or measures of reliability a test
constructor should adopt. The alternative form approach to reliability
estimation yields the most dependable coefficient because of its sensitivity to all
forms of error variance (Thorndike and Hagen, 1977). However, it requires
using two parallel forms of the test that will test the same material and give
the same result (Georgetown, 2006). One major problem with this
alternative form approach is that constraints of money, time or even
administrative set-up of cooperating agencies might hamper the
development of two forms of a test. Obviously, two forms of a test imply a
much larger initial pool of items and demand more time for their trial on the
sampled subjects. Such demand of more time on the part of the subjects
might be so disruptive of the cooperating agencies' programmes that, as
Thorndike and Hagen observed, the agencies might refuse permission for
such a project. Such refusal applies equally to the alternative form and
retest approaches, both of which demand two
administrations of a test. Moreover, as Anastasi (1976) rightly noted, the
development of truly parallel forms of a test is not an easy task. To
determine parallel forms reliability, a reliability coefficient is calculated on
the scores of the two measures taken by the same group of subjects
(AllPsych Online, 2006). Consequently, despite inherent weaknesses arising
from their tendency to miss certain sources of error variance, the
approaches to reliability estimation which need only one administration of a
test might prove the only option left for some test constructors. Specifically, we may
define an index of reliability in terms of the proportion of true score
variability that is captured across subjects or respondents, relative to the
total observed variability (Statsoft, 2003). However, this leaves such
constructors bound, as Anastasi (1976) has argued, to report whatever
approach they use in assessing the reliability of their instruments, so as to
enable users to evaluate such instruments more appropriately.
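Among the single-administration approaches just discussed, split-half reliability can be sketched as follows: correlate scores on the two halves (commonly odd versus even items), then step the half-test correlation up to full length with the Spearman-Brown formula. A minimal sketch with invented half-test scores:

```python
import statistics

def pearson(x, y):
    """Pearson product-moment correlation of two score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(odd_half, even_half):
    """Correlate the two half-test scores, then apply the
    Spearman-Brown step-up: r_full = 2r / (1 + r)."""
    r_half = pearson(odd_half, even_half)
    return 2 * r_half / (1 + r_half)

# Perfectly consistent halves give a full-test reliability of 1.0:
print(split_half_reliability([1, 2, 3, 4], [2, 4, 6, 8]))  # -> 1.0
```

The step-up corrects for the fact that the correlation was computed on half-length tests, which are less reliable than the full instrument.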
Evaluation of Reliability Coefficient.
A critical review of the evaluation of the reliability of a test suggests, in
general, that the bigger the reliability coefficient of a test, the better the
test, provided that such direct comparison uses similar approaches in
estimating reliability. Where different approaches are adopted, however,
Thorndike and Hagen (1977) strongly opined that the coefficient that is
sensitive to more sources of error variance should be preferred. Other
considerations should include the composition of the sample tested
(Anastasi, 1976), the idea being that the more heterogeneous a sample is
(i.e. the wider the range of the talents of the testees), the higher the
coefficient of reliability emanating therefrom. This implies that a test is
more likely to rank consistently a group comprising a wide range of classes
in terms of Mathematics readiness than it would a group consisting only of
subjects that belong to the same class. The ability
level of subjects used in the determination of the reliability of a test has also
been identified as a factor in determining the size of the reliability coefficient
(Ebel, 1979; Georgetown, 2006). If the items of a test are not too difficult
(i.e. are appropriate) for the ability of the subjects, the test, it has been argued, is
likely to rank them consistently thereby yielding high reliability (Guilford,
1954; Statsoft, 2003). For those for whom the test is too difficult, scores
are likely to be influenced by guesswork, leading to high error variance and
consequently low reliability. And for those for whom the test is too easy,
discrimination would not be effective, resulting also in low
reliability. It appears therefore, that a test would be best administered on
subjects it is designed for and this group should be reported for appropriate
utilization of the test by the consumer (Lyman, 1986). The higher the
correlation coefficient, the better the reliability (Burger, 1997).
So far, the review has shown the need for considering validity and
reliability in the measurement process. Moreover, the review has shown
that for a test to be useful, it has to be valid and reliable. Various
approaches to the calculation of the validity and reliability of a test have
been examined, and, more interestingly, what meaning to make of the
result of each of these approaches. It appears that the development of any
useful test, such as a mathematics readiness test (MATHRET), requires that
high validity and reliability be built into its development.
Evaluation.
Considering the review of various approaches to item analysis in test
development, it becomes necessary to evaluate the results. Various
approaches to determining item validity appear to result in tests of about the
same levels of reliability (Ely, 1951). As similarity is noted in item selection
using different indices, one may rightly suggest that a test constructor
needs to choose those indices that call for the least amount of labour. In
consideration of this, Ferguson's (1942) method and the analysis of variance
may be overlooked; the phi coefficient tends to be preferred instead.
Guilford (1954) suggested that the point biserial should be applied in a
situation where only the coefficient is to be calculated. Guilford's
recommendations are quite in order, especially where validation of a test
such as a readiness test demands an external criterion. However, Englehart
(1965) pointed out that for an achievement test, where the criterion is
usually interval (i.e. the total score), the discrimination index D would be
the best option considering its simplicity and comparable efficiency. The
discrimination index (D) of the MATHRET was therefore determined (see
Appendix: G).
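The discrimination index D named above is conventionally computed as the difference between the proportions of the upper and lower scoring groups answering an item correctly. A minimal sketch with hypothetical counts (not the MATHRET data):

```python
def discrimination_index(upper_correct, lower_correct, group_size):
    """D = p_upper - p_lower for a single item, where each p is the
    proportion of that extreme group answering the item correctly."""
    return (upper_correct - lower_correct) / group_size

# Hypothetical item: 24 of 27 upper-group testees answer correctly,
# against 9 of 27 in the lower group:
print(round(discrimination_index(24, 9, 27), 2))  # -> 0.56
```

A D near zero (or negative) flags an item that fails to separate strong from weak testees, which is what makes the index attractive for its simplicity.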
Development of a Test that has Maximum Validity and Reliability.
In developing a test such as the MATHRET, the traditional practice is to
start with a blueprint. A test blueprint has been described as a two-
dimensional table of specification outlining the content and behavioural
objectives to be sampled, as well as the respective numbers of items to do
the job (Ebegbulem, 1982; Ohuche and Akeju, 1988; STI, 2006). This
definition suggests that a blueprint enhances the validity and reliability of a
test. Selection techniques as well as the item analysis adopted in developing a
test also help to achieve better validity and reliability. If the
sample is not representative of the population demographically, selection
bias is introduced (Georgetown, 2006). In this discourse, it will be assumed
that a test blueprint, selection technique and item analysis exist.
Item analysis is built into test development so as to maximize the
reliability and validity of the test. Anastasi (1976) noted that a high validity
as well as a desired distribution of final test scores can be built into a test
when developing it. This suggests that the assumptions of different indices
need to be borne in mind by the test maker. To Lemke and Wiersma
(1976), most approaches to item analysis are suited to power tests and are
therefore unsuitable for speeded tests, the contention being that speeded
tests yield item difficulty indices whose values depend on the position of an
item on the test rather than on its intrinsic quality (Guilford, 1954). In
addition, Wersma's (1949)
finding revealed that the size of an item-total correlation coefficient in a
speeded test is dependent on the position of the item. All in all, these
results suggest that item analysis of a speeded test would lead to unreliable
indices of reliability as well as validity.
Allen and Yen (1979) have suggested that an item pool of one and a half to
three times the number of items expected to compose the final form of the
test is required in initiating item analysis.
A similar observation was made by Lemke and Wiersma (1976), who
recommended that an item pool of at least double the expected final length
of a test is necessary in initiating item analysis. To pre-test the final form of
the test, Allen and Yen recommended a minimum of 50 subjects, though
they also expressed a preference for "several hundred" testees
representative of the population to which the final form of the test is to be
administered. To Nunnally (1981), at least 300 examinees are required for
pre-testing, and that number in any case should not be less than five times
the number of items. Guilford (1954) cited
Conrad, who recommended three preliminary test administrations. Conrad
insisted that the first administration be undertaken by the test constructor
himself, using 100 subjects, so as to uncover gross defects in the proposed
test. Four hundred subjects were suggested for a second administration for
purposes of item analysis, and finally a third administration, involving the
final form of the test, would be conducted for purposes of determining the
test's reliability. The standards set by these authors appear quite stringent
and need not be regarded as the minimum for good results. In this vein,
Guilford (1954) remarked that while Conrad's prescription represents "good
workmanship in test construction", it need not be swallowed whole and
entire, arguing that the amount and kind of pre-testing required for a test
would differ from one situation to another.
In the item analysis process, test constructors usually use extreme groups
in estimating item difficulty and validity indices for the whole sample from
these sub-sample results (Mehrens and Lehmann, 1993; Ipaye, 1982). In
the process of analysis, the distribution is split at different points, which
enables the use of the upper and lower 50, 33, 27, or 25 percents.
Guilford (1954) noted that the more extreme these groups are the sharper is
the discrimination between them and the less likely that a chance reversal
would occur in a different sample. Such a situation, however, may lead to
an increased standard error of the difference between the proportions
passing an item in the two groups. It becomes essential, then, to determine
an optimum point of split that would minimize standard error and yet
maximize discrimination. Under a normal distribution, Lemke and Wiersma
(1976) pointed out that this optimum point is attained with the upper and
lower 27%. However, for flatter-than-normal distributions, the optimum
point of split results from the use of the extreme 33% (Cureton, 1957). It
seems that the
27% rule might be quite robust, because some studies (e.g. Ely, 1951;
Kuang, 1952) have compared the reliabilities of tests developed from item
analysis using tail proportions that ranged from 10% to 50%, and no
consistent significant difference was found. This implies that any suitable
tail proportion might serve as well as any other. In such a situation,
therefore, the test constructor should aim at a normal distribution of total
scores in his final test construction.
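The extreme-group split described above can be sketched directly; the 27% tail fraction is the optimum cited for normal distributions, and the score list below is invented for illustration:

```python
def extreme_groups(total_scores, tail_fraction=0.27):
    """Return the indices of the testees in the upper and lower tails of
    the total-score distribution (default: the classic 27% tails)."""
    ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    n_tail = max(1, round(len(total_scores) * tail_fraction))
    return ranked[-n_tail:], ranked[:n_tail]  # (upper, lower)

# With 100 testees, each tail holds 27:
upper, lower = extreme_groups(list(range(100)))
print(len(upper), len(lower))  # -> 27 27
```

Item difficulty and discrimination indices would then be estimated from these two sub-samples rather than the whole group, as the passage describes.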
Anastasi (1976) pointed out that a skewed distribution of test scores
could be normalized by adjusting the proportion of items of appropriate
difficulty. Positive skewness occurs when there are insufficient easy items
to ensure discrimination at the lower end of the range of scores. On the
other hand, negative skewness results from injection of more difficult items
to effect discrimination at the upper end of the range. From the foregoing, it
appears that level of difficulty of test items determines the range of scores.
Brogden (1946), working with a number of hypothetical free-response tests,
found that the level of difficulty of items also affects test validity. He found
that if all items are of equal difficulty, maximum validity is obtained when
the level of difficulty is .50. In addition, he found that increasing the range
of item difficulty decreases test validity provided that the tetrachoric item
intercorrelations are not more than .40 and the number of items in the test
does not exceed 150. Gulliksen (1945) had earlier confirmed that
reliability of a test increases as mean test item difficulty approaches .50, as
dispersion of item difficulty decreases and as average item intercorrelation
increases. The results of these two studies therefore suggest that similar
conditions govern the maximization of both validity and reliability.
In his work, Brogden (1946) assumed that the tetrachoric
intercorrelations of the free-response test items he worked with reflected
only one common factor, which he considered to be the criterion. He then
defined the correlation between test score and this criterion (or common
factor) as "validity". Brogden's work was extended by Lord (1952) to
multiple-choice tests where, unlike in free-response tests, guessing is a
factor. Working with hypothetical tests whose scores were adjusted for
guessing, so as to retain the assumption of only one factor accounting for
the tetrachoric intercorrelations and make his findings comparable with
Brogden's, Lord found that reliability and "validity" are maximized by
minimizing the variability of item difficulty, with items somewhat easier
than half-way between a chance percentage of correct answers and a
perfect score. He crowned his conclusion by asserting that as multiple-
choice items become more difficult, the influence of guessing increases and
reliability decreases. To Guilford (1954), under ordinary conditions (i.e.
when tetrachoric item intercorrelations range between .10 and .30), the
optimal uncorrected difficulty level for two-choice items is .85; for three-
choice items, .77; while it is .74 and .69 for four- and five-choice items
respectively. In another submission, Cronbach and Warrington (1952)
opined that multiple-choice tests discriminate best when items are at a level
easier than median difficulty after correction for chance. One would then
wonder how the use of such easy items could yield optimal discrimination
when Anastasi (1976) has suggested that the use of easier items reduces
the discriminating power of a test for the most able testees. Cronbach and
Warrington's work, however, suggests that the effect of such reduction for
the most able testees can only be serious when item intercorrelation is very
high.
Several studies have demonstrated that reliability of a multiple-choice
test can be increased by increasing the number of alternatives. For
instance, Denney and Remmers (1940), Remmers and Ewart (1941),
Remmers and House (1941) and Georgetown (2006) have demonstrated
that the increase in item-total correlation can be predicted using the
Spearman-Brown prophecy formula by substituting the ratio of alternative
responses for the ratio of test lengths. The authors found this to hold for
two to five alternatives using a vocabulary test, an arithmetic test, an
attitude inventory and a third-grade arithmetic test. However, when seven
alternatives were used, the formula seemed to over-predict the correlation.
The situation in these studies was that the alternative distracters added to
the items were as attractive as the original ones.
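The prediction these studies describe follows the Spearman-Brown prophecy formula, with the ratio of alternative counts standing in for the ratio of test lengths. A hedged sketch; the correlation figure below is invented, not taken from the studies cited:

```python
def spearman_brown(r, ratio):
    """Predicted coefficient when a test is 'lengthened' by the given
    ratio: r_new = ratio * r / (1 + (ratio - 1) * r). The studies cited
    substitute the ratio of alternative counts for the length ratio."""
    return ratio * r / (1 + (ratio - 1) * r)

# Moving from two-choice to five-choice items corresponds to a ratio of
# 5/2; a hypothetical coefficient of .60 would be predicted to rise:
print(round(spearman_brown(0.60, 5 / 2), 2))  # -> 0.79
```

The over-prediction reported for seven alternatives would show up as the observed coefficient falling short of the value this formula yields.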
Some other studies did not find as much increase in item-total correlation
following an increase in the number of alternatives as the studies above.
Plumlee (1952) studied the loss in item-total biserial correlation due to
chance in five-choice tests as compared with completion tests of parallel
content. She found that the mean biserial coefficients for completion tests
were actually .08 greater than for the multiple-choice forms,
instead of the .13 which a prediction formula would suggest. Thus,
considering these conditions, one may conclude that correction for chance
would probably overcorrect. The contradiction in these findings seems to
arise from the levels of difficulty of the items used in composing the tests
for the studies. It seems that with relatively easy multiple-choice items, the
need for guesswork is largely reduced and so reliability is increased. The
findings of Bryan, Burke and Stewart (1952) appear to corroborate this
assertion. The
study was designed to investigate the effect of correcting or not correcting
total score or item mean for chance on the size of item-total biserial
coefficient in several achievement tests. The researchers found that
correction of total score only, did not change the average item-test
correlation. But these coefficients were somewhat lower in the more difficult
tests than in their easier counterparts. However, correction of item
proportions (with or without correction of total score) consistently increased
the average item validity indices. The correlation of MATHRET scores using the
Test-retest method yielded a high coefficient of correlation of 0.96 (see
Appendix: D).
Diagnosis of a Mathematics Readiness Test.
Concept of diagnostic Testing.
A diagnostic test has been looked at from different perspectives. Some
authors looked at it from an educational point of view, while some insisted
that it is entirely a medical term. For instance, teachers may use sensory
tests for screening, but diagnosis and treatment would require a medical or
other type of specialist (Annie and Mildred, 1999). The California
Mathematics Diagnostic Testing Project, however, offered a web-based
Mathematics Analysis Readiness Test, which is a diagnostic test of topics
needed for success in a precalculus course (Math. Arizona, 2007). Math. Arizona
(2007) further revealed that this multiple-choice test is designed to be taken
without a calculator to obtain a more reliable indication of readiness for a
precalculus course. The MATHRET is an essay test equally designed to be
taken by beginning SS1 students without a calculator, so that they can show
their working fully, thereby exhibiting their areas of mastery and weakness
in the JS3 Mathematics curriculum contents.
Furthermore, a diagnostic test has been defined as a test used to
diagnose, analyze or identify specific areas of weakness and strength; to
determine the nature of weaknesses or deficiencies; diagnostic achievement
tests are used to measure skills (Glossary, 2007). That is to say that the
MATHRET, as a measuring instrument, can be used to identify JS3 students'
areas of strength and weakness as they prepare to resume the Mathematics
programme at SS1 level. Similarly, Glossary (2007) defined a
diagnostic test as a test intended to locate learning difficulties or patterns of
error. Glossary (2007) further revealed that such tests yield measures of
specific knowledge, skills, or abilities underlying achievement within a broad
subject, and thus provide a basis for remedial instruction. The MATHRET
was designed to measure specific knowledge, skills, or abilities with which
one can locate specific learning difficulties in students' work. Based on
identified specific learning difficulties of students, or the patterns of errors
they committed,
one can use the cut-off point suggested in this study (a frequency of 29
errors or below, versus 30 errors or above) to determine who is "ready" or
"not ready" respectively, and make recommendations for remedial
instruction.
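The cut-off rule above can be expressed directly. The function below is only a sketch of the study's suggested classification, not part of the MATHRET itself:

```python
def readiness(error_frequency, cutoff=29):
    """Classify a testee by total frequency of errors committed on the
    MATHRET: 29 errors or fewer -> 'ready'; 30 or more -> 'not ready'."""
    return "ready" if error_frequency <= cutoff else "not ready"

print(readiness(25))  # -> ready
print(readiness(30))  # -> not ready
```

Testees falling on the "not ready" side of the cut-off would then be recommended for remedial instruction, as described above.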
Purposes of diagnostic Testing.
Kraw Parker (2007), commenting on diagnostic testing for Mathematics
students, pointed out that the need to assess the current mathematical
ability of students on entry to any course is self-evident. It was further
revealed (Kraw Parker, 2007) that the variety of different examinations and
assorted mathematical backgrounds (including access and mature students)
reinforce these demands, in order to help students achieve a common core
of mathematical skills. Harling (1991) noted that the
national assessment system provides teachers and others with the means of
identifying the need for further diagnostic assessments for particular pupils
where appropriate to help their educational development. In support of
Harling (1991) for further diagnostic assessment of deficient students, Annie
and Mildred (1999) advocated for learning style inventories. The use of
learning style inventories is advocated by those who subscribe to the trait-
treatment or the Aptitude by Treatment Interaction (ATI) concept. The two authors
further hinted that ATI and Trait-treatment concepts of learning are referred
to as diagnostic-prescriptive approaches to teaching. It is hoped that when
the diagnosis of the students' learning abilities is investigated via the analysis
of their error patterns, there would be clear indication of those that are
“ready” and those that are “not ready” for senior secondary school
Mathematics learning. The MATHRET was developed by the researcher
himself to be used purposely to identify specific areas of students’ strengths
and weaknesses.
Diagnostic information concerning a student lacking readiness is a
determinant of whether the threat/problem of the student can be treated in
the presence or absence of the student, and of how such a problem/threat
can be treated by recommendation. In counselling, for instance, Goldman (1971)
noted that precounselling diagnostic information is intended to help the
counsellor (with or without the client's collaboration) to decide whether the
client's needs are within the purview of his services. The threat here
consists of those prerequisite skills the students have not been able to
master, which made them deficient or resulted in a lack of readiness for
senior secondary school Mathematics learning. In determining who is "ready" or "not
ready”, the MATHRET was administered to a sample of 300 beginning SS1
students in the form of a group test, and the scores obtained were used in the diagnosis
of the students’ learning abilities. In this regard, Annie and Mildred noted
that for diagnosis, the primary types of data needed are samples of
students’ work and scores on group tests.
In summary, diagnosis is associated with identification of students’
areas of strengths and weaknesses. The weaknesses emanated from the
errors students committed in solving the MATHRET items. Readiness of an
entrant into SS1 was determined by the cut-off point of 29 errors committed
by the student relative to a total possible frequency of 59 errors.
The literature review revealed that diagnosis is a factor that determines
readiness and that primary data needed for diagnosis are samples of
students’ work and scores on group tests as applicable to MATHRET scores
obtained from a sample of 300 beginning SS1 students.
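The cut-off rule just described reduces to a small computation. The sketch below is illustrative only: the function name and sample data are not from the study, and whether a student committing exactly the cut-off frequency counts as "not ready" is an assumption, since the text does not spell out the boundary case.

```python
# Hypothetical sketch of the MATHRET readiness rule described above:
# a student committing 29 or more of a maximum 59 error frequencies
# is classified "not ready". Names and sample data are illustrative,
# and treating exactly 29 errors as "not ready" is an assumption.

CUT_OFF = 29       # cut-off frequency of errors reported in the text
MAX_ERRORS = 59    # maximum total frequency of errors on the MATHRET

def classify(error_frequency: int) -> str:
    """Classify a beginning SS1 entrant from the errors committed."""
    return "not ready" if error_frequency >= CUT_OFF else "ready"

error_counts = [12, 29, 40, 28]
print([classify(e) for e in error_counts])
# ['ready', 'not ready', 'not ready', 'ready']
```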
Interpretation of Mathematics Readiness Test Scores
The major concern in all testing procedures is what meaning to attach
to a given test score, for the assignment of a score is not an end but a
means to an end in testing (Schofield, 1973). From the test, one gets an
idea about the trait of interest as manifested in the testee. In the
literature, two major approaches to the interpretation of test scores are
identifiable (Popham and Husek, 1969; Ebel, 1962; Engelhart, 1972). One
approach is concerned with measuring a given score against an apparently
absolute standard. In this regard, Lyman (1986) noted that this "absolute"
standard is the maximum obtainable score in the test; the scores of other
testees play no role in determining a given testee's score.
Following this approach, a standard (criterion) is set ahead of time to
determine what constitutes an acceptable level of performance. A testee's
score is thus supposed to be an index of the level of acquisition of the
"content" which the test is designed to measure, and is evaluated against the
predetermined standard. Lyman (1986), however, strongly maintained
that this standard is not absolute, strictly speaking, for a testee's score
depends on how easy or difficult the test items are to the testee, even for
a given content area. Again, Ebel (1962) was of the opinion that the
standard set in this criterion-referenced approach to the interpretation of test
scores is wholly determined by the expectations of the group and not by some
true absolute. Criterion-referenced interpretation of scores is mainly
utilized in achievement testing where mastery of content is at issue
(Engelhart, 1972), and the use of percentage-correct scores is an example
(Lyman, 1986).
The second approach to the interpretation of scores compares
each testee's score with the performance of others in the group. In this
case, one's score does not depend on how difficult the test is but on how the
group performs (Engelhart, 1972). The group whose scores are used as the
reference is the normative group, and their average performance is the
norm. Norm-referenced testing is mainly associated with standardized tests
(Lyman, 1980).
In readiness testing the aim is to differentiate testees on a continuum
representing readiness (see Guilford, 1954). Readiness has been defined as
preparedness to engage profitably in some activity (Burks, 1968). It needs to
be remarked that there are situations in which it becomes important to
define a point on the continuum of readiness above which the degree of
preparedness qualifies one for designation as "ready" and below which one
is considered "not ready". At this point a readiness test constructor is
confronted, as part of the problem of giving meaning to scores, with the
problem which all selection test constructors must face: the determination of a
cut-off score.
Ebel (1979) identified several approaches to the determination of
cut-off scores which could as well be applied to readiness
testing. One such approach has to do with determining the point at
which competent performance starts to take place, that is to
say, the minimum essentials of competence that qualify one as "ready".
Tasks could be developed to test such a minimum competence point. From
a theoretical point of view, Ebel (1979) insisted that the ideal cut-off score
here should be 100 percent. The problem that appears to exist here is that
such a minimum competence point might not be easy to identify, because it
may well be a debatable issue among experts; therefore, items to be
used in testing it may be difficult to produce. Again, examinee
performance may not be typical, so scores generated using such a
test cannot have perfect reliability. This, therefore, calls for the need to have
lower-than-perfect cut-off scores and tasks broad enough to include the
essential fundamentals.
Attempts at meeting this situation have resulted in test constructors
arbitrarily fixing cut-off scores without presenting rationales for their choice
(Ebel, 1979). However, the rationale demanded has been provided by Ebel
(1979) himself in his attempt to determine the cut-off score. Ebel (1979)
contended that in a well-constructed test, no testee should score less than
the expected chance score, and that the best testees should not score near the
maximum possible score. In this regard, he defined the ideal mean score as
midway between the chance score and the maximum score, and the cut-off
score as midway between the ideal mean and the expected chance score.
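Ebel's definitions translate directly into arithmetic. The sketch below assumes a multiple-choice test where the expected chance score is the number of items divided by the options per item; that framing, and the example figures, are illustrative rather than taken from the study.

```python
# Ebel's (1979) cut-off, as described above:
#   ideal mean = midway between expected chance score and maximum score
#   cut-off    = midway between ideal mean and expected chance score
# The chance-score formula assumes blind guessing on multiple-choice
# items with equally likely options (an illustrative assumption).

def ebel_cutoff(n_items: int, n_options: int) -> float:
    chance = n_items / n_options      # expected score by blind guessing
    maximum = n_items                 # maximum obtainable score
    ideal_mean = (chance + maximum) / 2
    return (ideal_mean + chance) / 2

# e.g. a 60-item test with 4 options per item:
# chance = 15, ideal mean = 37.5, cut-off = 26.25
print(ebel_cutoff(60, 4))  # 26.25
```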
One could argue that most tests are not ideal, being either easier or
more difficult than even the constructor intended, and that readiness has
been looked at as possessing a cultural component, suggesting its dependence
on the group (Bruner, 1964). It would then follow that a purely
criterion-referenced approach to the interpretation of readiness test scores may
not be appropriate. However, this may be a different issue, for
having a cultural component only suggests that the level of readiness may vary
from "culture" to "culture"; the prerequisite skills for any topic in
Mathematics remain the same everywhere, irrespective of the fact that mastery
of them may exist in one classroom but not in another. To determine what should
be the cut-off score for readiness in particular should thus not be a "cultural"
matter. It suggests, therefore, that a criterion-referenced approach to the
determination of a cut-off score is quite in order in readiness testing. Thus,
although an ideal test may not exist, if a test becomes easy or difficult (to a
particular group) because the skills it tests are mastered by most or few in
the population of interest, the skewness of the distribution should not be the
worry of the test constructor. In other words, if there is evidence to show
that the distribution of scores for a representative sample of the population
is similar to what is obtainable in the population, Ebel's (1979) suggested
approach should be accepted. However, if the distribution as obtained in
such a representative sample can be hypothesized to differ from what
obtains in the population, Ebel's approach should be regarded as inappropriate.
An approach to the interpretation of selection test scores, such as
readiness test scores, which is wholly norm-referenced is also found in the
literature (e.g. Allen and Allen, 1979; Cronbach and Warrington, 1952). The
approach consists of stipulating some predetermined proportion that should be
selected. This proportion is based on, say, the funds available in the school or
some other considerations in the system. In adopting this approach, the
cut-off score is obtained from a frequency distribution of scores, as the score
above which the predetermined proportion of testees falls. One major
problem that could be perceived here is that many of those
who fall within the group that "qualifies" may not really possess the
competence of interest. In other words, they may not possess the
prerequisite Mathematical and reasoning skills that make for true readiness
for a given Mathematics programme. This problem could also manifest in
another approach, designed for distributions whose skewness differs
extremely from what is believed to obtain in the population; there, it
appears that the problem is more with the instrument and not
with the true distribution of the trait in the population. It may not be in
order if either the wholly criterion-referenced or the wholly norm-referenced
approach is used to determine a cut-off score in this situation. Ebel
(1979) recommended an approach that can be adapted to readiness testing
for situations whereby a test is either easier or more difficult than there is
reason to believe obtains in the population. In this particular case, an "ideal"
standard is combined with a group-related standard to arrive at the cut-off
score, as follows:
i. determine the average of the actual mean and the ideal mean;
ii. determine the average of the lowest score obtained and the expected chance score;
iii. fix the cut-off score midway between these two averages.
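The three steps above can be sketched as a single function. The input figures in the example are purely illustrative observed statistics, not values from the study.

```python
# Sketch of the combined ideal/actual standard described in steps i-iii:
#   i.  average of the actual mean and the ideal mean
#   ii. average of the lowest score obtained and the expected chance score
#   iii. cut-off midway between those two averages
# All example inputs are illustrative, not from the study.

def combined_cutoff(actual_mean: float, ideal_mean: float,
                    lowest_score: float, chance_score: float) -> float:
    avg_means = (actual_mean + ideal_mean) / 2     # step i
    avg_floor = (lowest_score + chance_score) / 2  # step ii
    return (avg_means + avg_floor) / 2             # step iii

# Example: actual mean 30, ideal mean 37.5, lowest score 8, chance score 15
print(combined_cutoff(30, 37.5, 8, 15))  # 22.625
```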
The advantage of this approach is that it combines the ideal with the
actual to arrive at a practicable and sensible standard. An approach similar
to this combines a criterion score with the need to satisfy certain situational
considerations. According to Ebel (1979), this approach fixes a cut-off score
subject to the selection of a stipulated maximum and minimum number or
proportion of the testees. If fewer testees than this minimum meet the
cut-off standard, a new cut-off score is defined midway between the former
cut-off and the minimum score obtained by the proportion initially stipulated.
If the initial cut-off score is met by more than the maximum number
stipulated, the cut-off score is increased to a position midway between the
initial cut-off and the minimum score obtained by the stipulated maximum
proportion.
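This situational adjustment can also be sketched in code. Reading "the minimum score got by the stipulated proportion" as the lowest score within the top-ranked group of that size is my interpretation of the passage, not a detail the source spells out, and the data are illustrative.

```python
# Sketch of the situational adjustment described above: an initial cut-off
# is moved midway toward the score that would admit the stipulated minimum
# or maximum number of testees. Interpreting "the minimum score got by the
# stipulated proportion" as the lowest score in the top-k group is an
# assumption; the sample scores are illustrative.

def adjusted_cutoff(scores, initial_cutoff, min_n, max_n):
    ranked = sorted(scores, reverse=True)
    n_meeting = sum(s >= initial_cutoff for s in ranked)
    if n_meeting < min_n:                  # too few qualify: lower the cut-off
        floor = ranked[min_n - 1]          # lowest score in the top min_n
        return (initial_cutoff + floor) / 2
    if n_meeting > max_n:                  # too many qualify: raise the cut-off
        floor = ranked[max_n - 1]          # lowest score in the top max_n
        return (initial_cutoff + floor) / 2
    return initial_cutoff

scores = [55, 52, 48, 44, 40, 35, 30, 22]
# With a cut-off of 50 only 2 testees qualify; if at least 4 are needed,
# the cut-off drops midway toward the 4th-ranked score (44): (50 + 44) / 2
print(adjusted_cutoff(scores, 50, min_n=4, max_n=6))  # 47.0
```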
These several approaches seem to provide for a variety of situations
on which users of Mathematics readiness tests could gainfully lay their
hands. Every user therefore has options to consider in selecting an approach
that can appropriately satisfy his need. It is worth pointing out that this
decision is ultimately a rational one. It is also rational to accommodate the
advice of Guilford (1954) and subject it to empirical test, so as to assess the
validity of forcing readiness (a continuous variable) into a dichotomy at the
particular cut-off point finally decided upon. If such validity (i.e. correlation
with a criterion measure) is found to be low, that particular cut-off score may
not be an appropriate point of discrimination between those who are "ready"
and those who are "not ready".
Apart from using a readiness test for selection purposes, which forces a
continuous variable (readiness) into a dichotomy of "ready" and "not ready"
testees, there is as well the possible use of a readiness test for counselling
purposes. This use, as in all standardized tests, compares the testee with a
reference (normative) group of testees to find out how a testee measures up
among other testees on the test. This approach "defines" a testee's
degree of readiness relative to that of the other members of a sample of the
population. It uses derived scores like T-scores, Z-scores, percentiles and
stanines (Lyman, 1986). Within a school's system of programmes, such
scores could aid classification and placement as well.
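The derived scores just mentioned can be computed from a normative group's raw scores; a minimal sketch follows (stanines omitted for brevity). The norm-group data are illustrative, not from the study.

```python
# Sketch of the derived scores mentioned above: Z-scores, T-scores and
# percentile ranks computed against a normative group. Raw data are
# illustrative; stanines are omitted for brevity.
from statistics import mean, pstdev

norm_group = [34, 40, 45, 45, 48, 50, 52, 55, 60, 61]
mu, sigma = mean(norm_group), pstdev(norm_group)

def z_score(raw):          # standard score: distance from the mean in SD units
    return (raw - mu) / sigma

def t_score(raw):          # T = 50 + 10Z, avoiding negatives and decimals
    return 50 + 10 * z_score(raw)

def percentile_rank(raw):  # percent of the norm group scoring below `raw`
    return 100 * sum(s < raw for s in norm_group) / len(norm_group)

raw = 55
print(round(z_score(raw), 2), round(t_score(raw), 1), percentile_rank(raw))
# 0.74 57.4 70.0
```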
In summary, a testee's score on a readiness test may be interpreted in
consideration of a predetermined score. Theoretically, this score is not
affected by other testees' scores; it has, however, been shown to depend on
how easy or difficult the test may be. Moreover, it has been suggested that
even the determination of what criterion score to use as the point of
acceptance is clearly done in view of expected performances within the group.
Another approach to the interpretation of test scores is by consideration of
overall performance within some reference group. In using a readiness test for
the purpose of selection, the need often arises for a cut-off score that
separates those regarded as "ready" for a given set of activities from those
"not ready" for such activities. With or without reference to the overall
performance of the group, cut-off scores could be fixed, as in the MATHRET, in
which committing 55% and above of the maximum frequency of errors is regarded
as "not ready" and vice versa. What is left for the test user is to determine
for himself which approach or cut-off best fits his particular situation; he
may further test the validity of his choice empirically. For counselling
purposes, as for other psychological purposes, a readiness test score is
interpreted with reference to the performance of a defined (normative) group,
just as the MATHRET score is used to determine which group (male/female;
urban/rural; public/private schools) is more "ready" or "fairly ready" than
the other, and vice versa.
Empirical Framework:
On Readiness Testing
A good number of published Mathematics readiness tests have focused on
the kindergarten and primary school levels, and a good number of published
studies on readiness testing were carried out in kindergartens and primary
schools. However, the basic principles concerning all readiness testing are
the same no matter the stratum of the school system one might be
interested in. The review in this section will therefore be primarily
concerned with literature on Mathematics readiness tests and testing
procedures, without strict restriction to a particular stratum of the school
system.
Studies on Readiness Testing Procedures
In the literature, several approaches to readiness testing are identified.
Many early Mathematics readiness tests centred on performance on Piagetian
tasks. Piaget articulated some stages in cognitive development, with
characteristic intellectual operations possible during each stage (see
Hilgard, Atkinson and Atkinson, 1975; Hill, 2001). Although differences
have been found in the average ages at which children become able to
perform the tasks associated with the stages, depending on intelligence,
cultural and socio-economic factors, the order of progression appears the
same for all children (Duruji, 1975; Hilgard, Atkinson and Atkinson, 1975;
Hill, 2001). Incidentally, one approach to Mathematics readiness testing
involves correlating performance on some Mathematics tasks with performance
on some concrete Piagetian tasks. Freyberg (1966) adopted this approach in
his studies. He administered tests of conservation of quantity and weight,
numerical correspondence, spatial relations, classification and causal
relations to a sample of 151 children aged between 5 years 9 months and 7
years 10 months. Later, he administered a 120-item arithmetic computation
test and a 25-item word problem test to the sample subjects. He then
correlated the scores on arithmetic computation and word problems with the
sum of the scores on the Piagetian tasks, which yielded correlation
coefficients of .52 and .57 respectively. His conclusion was that Piagetian
tasks were good predictors of performance in Mathematics and hence good
Mathematics readiness tests. A finding by Dimitrovsky and Almy (1975)
confirmed this report: the two researchers found a high correlation
between the conservation ability of kindergarten children and their arithmetic
achievement scores obtained at the end of first grade (i.e. primary one).
Some authors opined that training could influence performance on
some Piagetian tasks (Bruner, 1964; Gibson, 1972; Ann, Bill, and Wilber
Dulton, 1996). In an attempt to further the investigation, Bearison (1975)
designed a study to compare such performances due to training with those
effected spontaneously through maturation. Bearison conducted a longitudinal
study following some children over a four-year period. When the children
were at the kindergarten stage, some of them were identified as capable of
conserving liquid quantity; they were regarded as natural conservers. Some
of the other nonconservers were trained to conserve liquid quantity, while
the rest were left on their own. A comparison of the third-grade arithmetic
achievement scores of the three groups of children was then made. One of
Bearison's findings revealed that early natural or spontaneous conservers
performed significantly better than the trained conservers on the arithmetic
achievement test. No significant difference was found between the performance
of the trained conservers and their later-conserving peers. The findings
suggest that there is a relationship between early spontaneous conservation
and arithmetic achievement. However, there is no explanation of what factor
is responsible for this relationship. Since Hilgard, Atkinson and Atkinson
(1975) have noted that intelligence is a factor in the age at which
conservation is acquired, it is quite plausible that general intelligence,
rather than some particular Mathematical ability, might be the factor that
accounted for the relationship found by Bearison. At this juncture,
Bearison's finding can only be regarded as inconclusive until a study is
designed to control for intelligence as well as any other likely confounding
variable.
Furthermore, in correlating a general measure of concrete operations
with achievement in Mathematics readiness testing, a significant correlation
still fails to provide information concerning the nature of the relationship,
which is important for possible manipulation of variables in teaching as well
as in research. A teacher or researcher would, in addition to discovering the
existence of a relationship between a Mathematics concept and a Piagetian
task, be required to identify which Mathematical concepts are related to
which reasoning abilities, so as to plan a programme aimed at a particular
result.
Another approach to Mathematics readiness testing involves correlating
specific measures of logical reasoning with Mathematics achievement. Among
such measures of logical reasoning is conservation of number. This approach
is noted in the results of some research, such as that of Steffe (1970),
which indicated the existence of a significant relationship between
performance in number conservation and first-grade children's addition
skills. In view of this approach, such a relationship is looked at as
evidence that those who fail the logical reasoning test are not likely to
perform well in, say, first-grade Mathematics. On the other hand, Michael
(1977) and Broody (1979) found that young children acquire the ideas of
addition and subtraction before they conserve number. These findings suggest
that although number conservers are likely to do well in addition and
subtraction, some children that have not acquired number conservation can
still add and subtract accurately. A similar finding was reported by
Pennington, Wallach and Wallach (1980): although number conservers performed
significantly better than their nonconserving counterparts on some arithmetic
tasks, a good number of nonconservers performed successfully on most
computation problems. Thus, the results of these studies suggest that the use
of number conservation tasks as tests of Mathematics readiness would
invariably yield a substantial number of false rejects, an error that Hiebert
and Carpenter (1982) insisted a readiness test should be relatively free
from.
In another approach to Mathematics readiness testing, performance in
class inclusion was used as a measure of logical reasoning. Evidence of such
use was noted in studies such as one done by Howlett (1974), in which he
reported that first-grade children's class inclusion performance correlated
significantly with their scores on a missing addend test. An earlier study by
Dodwell (1962) contradicted this finding, since Dodwell did not find any
clearly defined relationship between class inclusion and fundamental number
concepts in his sample of 5- to 8-year-old children. Looking at these
apparently contradictory findings in perspective, one might conclude that the
correlation found by Howlett might not have resulted from logical reasoning
on the part of the children, as such reasoning involving numbers cannot
precede acquisition of the basic concept of a number. As a result, the use of
such a Piagetian task as a Mathematics readiness test will lead to Type II
error, as the number conservation approach has been shown to do.
In a literature review, Hiebert and Carpenter (1982) reported that some
studies had sought to establish a sequence in which conservation and a
variety of standard measurement concepts were acquired. The authors remarked
that while some studies suggested that the ability to conserve preceded a
grasp of the inverse relationship between unit size and the number of units
in a given measured quantity, some others found that the acquisition of some
measurement skills precedes the appearance of conservation. On the issue of
assessing pupils on a variety of measurement tasks in elementary school
arithmetic, Hiebert and Carpenter (1982) also revealed that transitive
reasoning tasks were often used. However, contradicting the expectation that
transitive reasoning ability should develop before a child could perform a
number of measurement tasks, Bailey (1974) had earlier reported that children
gained proficiency in a variety of standard measurement skills before they
passed the transitivity test.
These studies revealed that the apparent relationship between
Piagetian task performance and achievement in Mathematics is not general.
In other words, not all Piagetian tasks are necessary for learning concepts
in Mathematics. The global correlations noted could possibly emanate from a
variety of relationships between Mathematical tasks and specific Piagetian
tasks; the interpretation of such global correlations is therefore not a
simple affair. This provides further justification for the demand for indices
of correlation that can define the nature of the relationship between
specific measures of achievement and Piagetian tasks in Mathematics.
From Hiebert and Carpenter's (1982) review of the literature, it could
be deduced that different Mathematical tasks need different levels or types
of logical reasoning ability. Tasks like calculations, routine number
manipulation, or straightforward applications of memorized algorithms
require simple skills and could be handled without necessarily acquiring
Piagetian abilities. However, many children depend on their ability in
conservation, class inclusion and transitivity in solving problems that lack
standard approaches and which invariably need logical thought. These last
types of problems tend to involve a combination of two or more techniques.
Case (1978) reviewed some studies which revealed that the real
developmental constraint children have in learning Mathematics in school is
their inability to deal with more than one aspect of a problem
simultaneously. Children can therefore solve more complex Mathematical
problems than their level of cognitive development on a Piagetian scale
might suggest, if such problems are capable of being segmented and solved
in a sequence of one aspect at a time. It is at this juncture that one can
possibly perceive where the resolution of the apparent contradiction between
Bruner's (1964) findings and propositions and the Piagetian theory of
cognitive developmental stages lies. Bruner's contention was that any child,
irrespective of his age, could be taught to perform any cognitive task, no
matter its complexity, provided such a task could be broken into bits he can
solve. In demonstrating this postulate, Bruner successfully taught some
third-grade children to solve quadratic equations using some wooden blocks.
It then follows that insofar as each piece of a problem can be handled
independently of the others, and not as integrated with the other parts of
the problem, any limitation of the hypothesis will not pose a severe problem.
Several techniques, skills and algorithms in Mathematics possess this
characteristic. This information-processing analysis explains why Piagetian
abilities are often not a factor responsible for some children's performance
in Mathematics, even when such abilities are logical prerequisites. It needs
to be pointed out, however, that not all problems in Mathematics can be
handled in such a mechanical, routinized way; some call for an integrative
manner of solving them. For instance, tasks such as finding a missing addend
and those requiring the use of the inverse relation between unit size and
number of units are among such tasks, and this is the more reason why they
correlate highly with Piagetian tasks (see Howlett, 1974; Hiebert and
Carpenter, 1982; Hill, 2006).
It is pertinent to remark that the high incidence of error emanating
from the use of Piagetian tasks for measuring Mathematics readiness in
children's acquisition of prerequisite skills is not a flaw in the validity
of Piaget's theory of stages of cognitive development. It is possible that
misapplication of the theory might be responsible for such errors. This is
so because Piaget (1964), cited in Hill (2001), himself distinguished between
physical knowledge and logical Mathematical knowledge: logical reasoning
abilities like conservation and transitivity are critical for acquiring
logical Mathematical knowledge but not physical knowledge. Algorithmic skill
is a kind of physical knowledge and can be used by children to solve certain
kinds of Mathematics problems without acquiring conservation and other
logical reasoning abilities.
In summary, Piagetian tasks should prove quite valid in assessing
readiness to perform logical Mathematical tasks but may not be good tests
of readiness to perform mechanical Mathematical assignments.
Evidence from the literature revealed that another approach to
Mathematics readiness testing involves correlating performance on a simple
work sample in an appropriate area of Mathematics with subsequent scores in
the area of Mathematics of interest. Zylber (2000), Zuriel and Galinka
(1999) and Zuriel (2002) have carried out prognosis tests using this
approach. In the Children's Conceptual and Perceptual Analogies Modifiability
(CCPAM) version (Zuriel and Galinka, 1999; Zuriel, 2002), simple materials
are provided for the subjects to learn, and a test on the material follows
immediately. The overall scores from this test are correlated with the
scores on the "Didactic Assessment of Mathematics for Preschool Children", a
readiness test developed for this study (Zulber, 2000), as evidence of
validity.
Another approach to Mathematics readiness testing was reported by
Anastasi (1976). This involves correlating a combination of prerequisite
arithmetic and other skills with later performance in Mathematics. In
Anastasi's (1976) work, some of the test subscales contain materials similar
to the numerical subtests of some general aptitude tests. Thorndike and
Hagen (1977) noted that the authors of tests using these approaches have
offered evidence suggesting that, for example, a Mathematics readiness test
provides a better prediction of achievement in Mathematics than the
numerical and other Mathematics-related subscales of a general aptitude
test.
In summary, it has been noted that a number of approaches have been
used in Mathematics readiness testing. Empirical evidence suggested that
some of these approaches were valid but some were not. Empirical support
exists for the use of Piagetian tasks only as measures of readiness for
aspects of Mathematics that require logical reasoning to solve. Many other
Mathematical problems have been found to be solvable by relatively simple
applications of algorithmic skills rather than by complex logical reasoning.
Other testing procedures involved simple work samples of Mathematical tasks
and combinations of prerequisite skills.
On Mathematics Readiness Tests
A good number of the readiness tests reviewed in this section are directly related to Mathematics. Some other readiness tests, such as reading and language readiness tests, have been included as well to highlight strengths and weaknesses that could be instructive to test constructors while constructing other readiness tests.
Didactic Assessment of Mathematics for Preschool Children (DAMPC)
This is a Mathematics readiness test developed by Zylber (2000) and reported by Zuriel (2004) to examine preschool children's Mathematics knowledge level. The test included an open, coded questionnaire for the quantitative assessment of Mathematics knowledge, precise intervention guidelines for the teacher, various learning aids, and a coding key. It is composed of 314 items administered to a sample of 100 kindergarten children selected from both public and religious kindergartens, of average age 70.97 months (SD = 4.72). The 314 items were divided into four sub-tests, namely 'serial meaning', 'quantitative meaning', 'conservation' and 'correspondence', with four possible scores for each item. For
independent work, the child receives 3 points, for performance after verbal
intervention, 2 points, for performance after intervention utilizing learning
aids, 1 point, and 0 is given if the child cannot answer the question. The
child’s Mathematical readiness score is the sum of points received for the
314 items. There was a lack of information concerning the method of sampling, the validity and reliability estimates of the subtests, and the teachers' ratings of the instrument. The lack of information on these aspects leaves the researcher with no confidence in the use of this Mathematics assessment for preschool children.
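The mediated scoring rule described above (3, 2, 1 or 0 points per item, depending on how much help the child needs) can be sketched as follows. This is only an illustration: the category labels used here are assumed for the sketch and are not the DAMPC's own terms.

```python
# Illustrative sketch of the DAMPC-style scoring rule described above.
# The level names ("independent", "verbal", "aids", "none") are assumed
# labels for the four response categories, not the test's own terms.
POINTS = {"independent": 3, "verbal": 2, "aids": 1, "none": 0}

def readiness_score(item_outcomes):
    """Sum the points earned across all administered items."""
    return sum(POINTS[outcome] for outcome in item_outcomes)

# A child who answers two items unaided, one after verbal intervention,
# and fails one scores 3 + 3 + 2 + 0 = 8.
print(readiness_score(["independent", "independent", "verbal", "none"]))  # 8
```

On this rule, a child who answered all 314 items independently would obtain the maximum readiness score of 3 × 314 = 942 points.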
R-B Number Readiness Test
This is a readiness test developed by Dorothy A. Roberts, designed for use with children of 4 to 6 years. Johnson (1976) pointed out that the
R-B Number Readiness Test was designed to measure concepts of counting,
ordinality, cardinality, one-to-one correspondence, vocabulary (shorter,
longest, more than, most, order, etc), writing of simple numerals,
recognition of shapes and patterns involving shapes, recognition and
matching of numerals. The test is made up of 20 pictorial items and must
be administered orally to groups of not more than eight children at a time,
preferably administering it using an assistant to help less mature children in
turning pages, following directions, and so on. The test yielded raw scores,
which were used in ranking children, or identifying children in need of
additional readiness activities prior to formal Mathematics instruction.
According to Johnson (1976), there was no evidence of reliability and validity of this test. It becomes difficult, therefore, to evaluate it. It is even difficult to see how scores yielded by the test can be used to identify those in need of additional readiness activities, because no standard whatsoever has been shown against which possible comparisons might be made.
Arithmetic Concept Individual Test (ACIT)
This test was developed by three authors, namely G. Melnick, S. Freedland and B. Lehrer. The ACIT is designed for use with individual primary and intermediate level educable mentally retarded children to assess their arithmetic readiness skills (Melnick and Freedland, 1972). The test is based on Piagetian concepts and, according to the authors, on research on the relations between Piagetian concepts and arithmetic achievement. Moreover, the test attempts to assess how testees tackle quantitative relations and so offers insight into why particular children do not progress in arithmetic skills. It covers concepts such as spatial relations, classification, class inclusion, differentiation of length and number concepts, conservation of number, and one-to-one correspondence.
Thirty educable mentally retarded children of mean chronological age 10.4 years, mental age 6.8 years and I.Q. 66.3 were used in establishing the validity of the ACIT. There was no evidence of reliability of the ACIT. It was reported that Pearson product-moment intercorrelations among the subscales of the test ranged from low through moderate to high (between seriation and conservation of number), thus suggesting that the subscales are heterogeneous. Its validity was measured against the Arithmetic Concept Screening Test (Melnick, Mischio and Lehrer, 1971). Moreover, it was reported that correlations between the subscales of the ACIT and the criterion were generally moderate and significant. Seriation appears to be the best predictor of arithmetic skill performance, correlating .70 with it. Mental age and I.Q. follow in that order, correlating .68 and .50 respectively. Moreover, class inclusion was reported to be significantly related to both mental age and I.Q., at .40 and .40.
Problems such as the lack of information concerning the method of sample composition, the small size of the validation sample and the absence of reliability evidence for the ACIT make it difficult to assess the degree of stability of performances on the test and the generalizability of its results. However, it could be argued that since validity sets a floor on reliability, as test theory suggests (Guilford, 1954), and the reported validity figures are generally moderate, reliability might not be too low. Even so, the ACIT can only be used with caution until sufficient information on its psychometric properties is available for a more confident decision on it.
ZIP Test
There is also a Mathematics readiness part in the Mathematics section
of the ZIP test (Scott, Jr, 1970) designed for use with migrant children aged
between 6 and 12 years. The test is capable of assessing a child’s
proficiency in a sequence of behaviourally defined reading and Mathematics
skills, for the purpose of placement. The Mathematics section of the ZIP
yielded concurrent validity of .94, validated using independent judgments of
experienced migrant teachers against ZIP Mathematics test scores for 69 students. Test-retest reliability was established for the Mathematics subtest using 125 students, which yielded .93.
Furthermore, there is insufficient information available regarding any rational assessment of the Mathematics readiness part of the ZIP test. No information was reported on the reliability and validity of the readiness part, except for the whole Mathematics subtest. Moreover, the use of teachers' ratings as the criterion for validation of the subtest is suspect, because studies such as Stanley's (1976) and Kissane's (1986) suggest that the validity of the approach is low. Again, there was no report on the rater reliability of the ratings used in the ZIP test to enhance proper assessment of those ratings.
Arithmetic Concept Screening Test (ACST)
The authors, G. Melnick, G. Mischio, Z. Berstein and B. Lehrer, developed this test and described it as a diagnostic test. The test was designed for the placement and evaluation of educable mentally retarded children with regard to arithmetic skills. A Mathematics readiness part was noted to be contained in the test. Moreover, the authors maintained that the test was equally designed to control for extraneous factors which cause educable mentally retarded children to fail (e.g. irrelevant stimuli, abstract test stimuli and responses, and lack of intermediary reinforcement). It consists of 123 items divided into 5 subtests reflecting 6 levels of ability
(Melnick, Mischio and Lehrer, 1971). The subtests were administered
separately. The test covered some concepts which include one-to-one
correspondence, form and size discrimination, rational counting, more and
less, visual clustering, before and after, and identification of symbols. The test also covered reversibility of addends, money concepts, rule of likeness, ordinal numbers, addition and subtraction facts, multiplication and division readiness, multiplication and division, and number sequence.
Seventy-nine educable mentally retarded students, with mean chronological age of 9 years 3 months (SD 1.4 years), mean mental age of 6 years 5 months (SD 1.0 year) and mean I.Q. of 66.9 (SD 7.6), were used in validating the test. It was reported that items within each of the 5 subtests yielded Kuder-Richardson reliability estimates of .74, .82, .90, .89 and .95. Moreover, it was reported that content validity was estimated from various sources and is also indicated by the high correlations between scores at various levels, confirming the hierarchical nature of the abilities upon which the tests
were based. Again, it was reported that independent item analyses for the different subtests yielded relatively low standard errors of measurement (ranging from 1.56 to 1.98) and relatively high mean discrimination indices (ranging from .37 to .67). Further, the authors reported that, except for one mixed factor, varimax-rotated factor analysis for each subtest yielded factors corresponding to the concepts at each level, and the absence of factors corresponding to two concepts tested by only single items. It was also reported that intercorrelations of ACST scale scores with mental age and I.Q. are generally significant and in the moderate range.
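Kuder-Richardson formula 20 reliabilities and standard errors of measurement of the kind reported above are computed from dichotomously scored item data. A minimal sketch, with a made-up score matrix:

```python
from math import sqrt

def kr20(item_matrix):
    """Kuder-Richardson formula 20 reliability for 0/1-scored items.
    item_matrix: one row of item scores (0 or 1) per examinee."""
    n = len(item_matrix)      # number of examinees
    k = len(item_matrix[0])   # number of items
    totals = [sum(row) for row in item_matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n  # proportion passing item j
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_t)

def sem(sd_total, reliability):
    """Standard error of measurement: total-score SD times sqrt(1 - reliability)."""
    return sd_total * sqrt(1 - reliability)

# Items that always agree give perfect internal consistency (KR-20 = 1.0).
print(kr20([[1, 1], [0, 0], [1, 1], [0, 0]]))
```

The standard errors of measurement quoted for the ACST (1.56 to 1.98) follow from the subtest score spread and the KR-20 values by the `sem` relation above.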
The concern of the researcher with this test is in its readiness aspect. Reports on its validity suggest that this part might have a measure of construct validity, as revealed by the result of the factor analysis. However, Thorndike and Hagen (1977) have warned that factorial validity needs to be checked against an external criterion. It can be argued that it is difficult to assess its validity as a readiness test, since no such correlations have been reported on the multiplication and division readiness part of the ACST. But for the small size of, and lack of information on the method of composition of, the validation sample, the validity and reliability figures reported would suggest that if the test is relevant to a user's particular needs, it might be a useful one.
Preschool Language Scale
Johnson (1976) reported that this test was designed for children whose ages fall between 2 and 9 years, but with emphasis on the 4½ to 6 years bracket. The test was designed purposely to assess school readiness in terms of integrated auditory and visual perceptual modalities (i.e. the ability to integrate hearing and sight). The author noted that it has five sections, namely: visual-vocal integration (involving a chain response requiring visual-auditory-vocal association, discrimination and memory); vocabulary; auditory response (basically, the ability to follow directions or instructions); integrated auditory memory (aimed at assessing auditory memory); and discriminative visual-auditory memory.
It was reported that scores of 2500 kindergarten children used in this test correlated .77 with scores on an unspecified language test. The correlation was found significant at the .001 level of significance. It was also reported that the Pearson product-moment correlation of scores on this test with scores on the Stanford-Binet Intelligence Test, computed using a random sample of 57 students selected from the testing population, is .78, also found significant at the .001 level. Furthermore, this test is reported to have a formula for predicting mental age correctly to within 20 I.Q. points in 98.6% of cases.
There is no evidence of reliability concerning this test. Although its validity as a measure of whatever the Stanford-Binet Intelligence Test measures is sizeable, the reported correlation coefficient of .77 with a language test cannot be interpreted without more information concerning the nature of that language test.
Academic Readiness Scale (ARS)
This test, developed by Burks (1968), was designed for use in kindergarten and first grade. The ARS is a 14-item, 5-point rating scale designed to sample the child's functioning in memory, motor, attention, perceptual-motor, persistence, vocabulary, word recognition, interest in curriculum, humour, social, and emotional aspects of behaviour. The rating on the scale was performed by the teacher.
110 kindergarten children, rated twice by their teachers at an interval of 10 days between ratings, were used to validate the scale. The test-retest reliability estimates for the different categories yielded a range from .64 to .83. Academic Readiness Scale scores obtained at the beginning of the session were correlated with end-of-year scores on the Stanford Achievement Test reading subtest. Correlation coefficients between all the categories of the ARS and the word recognition and reading comprehension sections of the Stanford Achievement Test reading subtest were positive and significant, most at the .01 level (Burks, 1968). It was reported that nine out of the fourteen items showed significant differences between two schools, one of higher socio-economic status and the other of lower socio-economic status. Factor analysis was reported to have been used in the analysis of the ARS, in which four factors emerged, namely academic skills, motor-concentration, perceptual-motor and social-emotional.
Furthermore, validity coefficients of the ARS were reported to have been found significant, but their values are not known, and the best interpretation of the situation can only be the existence of common factors between the correlated measures. Specific figures were not reported; these, if reported, would have enhanced evaluation of the instrument. Again, with fourteen items used to measure about twelve functions in this instrument, it would appear that most of the functions are measured by single items only, a situation that yields very unreliable measures. Thus, even the reported moderately high reliability indices for categories within this instrument might not be quite dependable, and anyone who would use this instrument with any reasonable degree of confidence needs first of all to cross-validate it.
Horst-Reversal Test (HRT)
This is a reading readiness test (Johnson, 1976) designed for 5 to 6 year olds. The validity of the test was established and provided in the form of correlations between Horst-Reversal Test (HRT) scores obtained on first grade children at the beginning of the school session and Wide Range Achievement Test (WRAT) scores and teachers' ratings at the end of the session. HRT scores correlated .69 with WRAT scores and .66 with teachers' ratings; WRAT scores correlated .84 with teachers' ratings. Only one factor in the process of learning to read (i.e. reversal) was measured by the test. However, this single factor is a very important one according to Johnson (1976), who also hinted that the HRT is mostly used in batteries with tests of other aspects of reading ability.
There was no report concerning the size of the validation sample. Moreover, no report was given on the reliability of this test; thus, it is difficult to evaluate the validity figures with confidence. Considering its face validity, the validity figures appear reasonable, and the teachers' ratings used also appear consistent with WRAT scores in view of the size of their correlation. However, there was also no information on the reliability of the WRAT scores, which makes it difficult to evaluate their high correlation with teachers' ratings with confidence.
J-K (Johnson-Kenney) Screening Test
This test was developed by Rosalie C. Johnson and Rose Kenney for use in the educational process. The test consists of 10 subtests that sample a variety of perceptual-motor and cognitive skills. It was designed to enhance the detection of early learning difficulties in children aged between 5 years 6 months and 6 years 6 months, because the tasks on the test are those most children in that age bracket can solve easily. Clearly, the test is an academic readiness test for first grade students.
Validated on 171 first grade children sampled from 4 elementary schools in Marin County, California, USA, the J-K Screening Test scores obtained at the beginning of the school year were correlated with teachers' dichotomous subjective ratings of the students at the end of the session. The ratings were based on academic performance and made as either satisfactory or unsatisfactory. A correlation coefficient of .66 was obtained using the Pearson product-moment method and found significant at the .001 level. Seitz, Johnson and Kenney (1973) reported that a second correlation of .65, significant at the .001 level, was obtained on a random sample of 375 first graders in San Francisco, applying a procedure similar to the first in a cross-validation study. On the same data, a biserial coefficient was computed which gave an identical result, .65, also found significant at the .001 level. Communalities and the Kuder-Richardson formula 20 were used in obtaining the reliability coefficients of the subtests, which yielded values ranging from .16 to .99 with a median of .68.
The use of the Pearson product-moment correlation is not appropriate in a situation involving a dichotomized variable (see Hilgard, Atkinson and Atkinson, 1975; Downie and Heath, 1974). The biserial coefficient computed on the San Francisco sample is, therefore, the only recommendable index of validity reported on this test. It appears sizeable at face value. However, in so far as there is no information on the validity of the teachers' ratings, the reported validity coefficient of the J-K Screening Test, whose criterion measure the ratings were, cannot be interpreted with confidence until more information is available. Further, some reported subtest reliabilities are very low; obviously, scores from such subscales are capable of large fluctuations in subsequent testing. It would have been more informative if multiple correlations between the different subscales of the test and the criterion measures had been reported instead of global test scores only. This approach would have defined the nature of the relationship between test and criterion scores, and the possible fluctuations among them which apparently counter-balanced themselves to produce identical values across samples.
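The conventional index when the criterion is scored as a genuine dichotomy (e.g. satisfactory/unsatisfactory ratings) is the point-biserial coefficient; the biserial coefficient used in the study above additionally assumes a normal continuum underlying the dichotomy. A minimal sketch of the point-biserial, with hypothetical data:

```python
from math import sqrt

def point_biserial(scores, passed):
    """Point-biserial correlation between continuous test scores and a
    dichotomous criterion (1 = satisfactory, 0 = unsatisfactory)."""
    n = len(scores)
    group1 = [s for s, g in zip(scores, passed) if g == 1]
    group0 = [s for s, g in zip(scores, passed) if g == 0]
    p = len(group1) / n            # proportion rated satisfactory
    q = 1 - p
    mean1 = sum(group1) / len(group1)
    mean0 = sum(group0) / len(group0)
    mean_all = sum(scores) / n
    sd = sqrt(sum((s - mean_all) ** 2 for s in scores) / n)  # population SD
    return (mean1 - mean0) / sd * sqrt(p * q)

# Higher scorers rated satisfactory: a strong positive coefficient.
print(point_biserial([1, 2, 3, 4], [0, 0, 1, 1]))
```

The point-biserial is algebraically the Pearson r with the dichotomy coded 0/1; the caution in the passage above concerns treating an artificially dichotomized criterion as if it were a continuous Pearson variable.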
Literature reviewed in this section was on different approaches to Mathematics readiness testing, with empirical evidence used in appraising their appropriateness. In this regard, two criteria enunciated by Hiebert and Carpenter (1982) were adopted as standards for the usefulness of a readiness test, viz: it should provide information on a child's capabilities across a range of concepts or skills, instead of readiness to learn just a single concept, and it should be free from both Type I and Type II errors. Some readiness tests were also appraised for usefulness on the basis of validation and other psychometric information reported on them. So far, it appears that most of them lack sufficient information for a reliable appraisal of their usefulness, quite unlike the MATHRET, whose reliability and validity estimates have been established.
Sex, Location and Type of Junior Secondary School Attended and
Readiness for Mathematics.
In the literature, not much information was found directly linking students' sex, location and type of junior secondary school attended with their level of readiness for senior secondary school Mathematics. However, evidence from some studies can be found with implications for such relationships. This section is therefore concerned with reviewing studies whose findings have implications for possible relationships between these variables and level of Mathematics readiness. It has been noted earlier that readiness comprises an achievement aspect and a reasoning aspect. This suggests that variables that relate to achievement and to the ability to reason in ways relevant to Mathematics might be related to Mathematics readiness as well.
In Mathematics achievement studies, one major strand of evidence concerns sex differences. For instance, Mitchelmore (1973) carried out a longitudinal study involving secondary school students in Ghana purposely to investigate the possible effect of sex on performance in modern Mathematics. Earlier, he had noted an inverse relationship between age and achievement in Mathematics. In the process, he composed a sample of older boys and younger girls and still found boys performing significantly higher than girls at all class levels.
Hilton and Berglund (1974), in another longitudinal study, followed 1849 subjects over a period of four years to find out the possible trend in the relationship of sex to performance in Mathematics. The authors used two tests: the Mathematics subscales of the Sequential Tests of Educational Progress (STEP) and the School and College Ability Test (SCAT). In STEP, no significant sex difference was found in grade 5, but males scored significantly higher in all subsequent grades. In SCAT, the sex difference appeared to increase with age, being statistically greatest in grade 11. In the non-academic group, females scored higher in grade 5 but in grade 11 males scored higher. Such apparent superiority of females over their male counterparts in a particular grade might not be the first or only report of female superiority over males on a Mathematics test. For
instance, Barrick (1980) used 360 randomly selected subjects, on whom the Fennema-Sherman and Sandman tests were administered. It was reported that females consistently achieved higher scores than their male counterparts; however, none of the reported differences was found statistically significant. Given these apparently conflicting findings, it is difficult to explain whether they were caused by "experimental errors" or by situational differences that might have introduced confounding variables which contaminated the results, for most studies indicated that sex differences in Mathematics achievement favour males. Even in their own study, using 360 fifth form subjects, Obioma and Ohuche (1980) found boys superior to girls
in Mathematics achievement. Similarly, a study that investigated deficiency in Mathematics among form 3 students of 126 secondary schools in the four former eastern states of Nigeria, namely Anambra, Imo, Rivers and Cross River, found that girls were more deficient in Mathematics than boys (Obioma, 1985), again suggesting that males are superior in Mathematics achievement. This finding is similar to that of Hill (1980), who, using a random sample of 186 male and 182 female subjects, administered competency tests in reading, Mathematics and writing. Analysis of the Mathematics scores revealed that sex is a significant predictor (P < 0.01) of Mathematics achievement, with the mean performance of the male subjects greater than that of the female subjects.
With regard to studies on Mathematics reasoning, there is evidence to
suggest the existence of sex differences. For instance, Kostick (1994) did a
study on a reasoning test in which 900 subjects were assessed on their ability to use information and principles in induction. It was found that male subjects performed significantly higher than their female counterparts, even after adjustments were made for previous knowledge, personality traits, practice effects, and knowledge of pertinent principles (Kostick, 1994). In another study, Sommer (1958) used a sample comprising 156 hospital patients, 95 students of elementary psychology classes and 76 student psychiatric nurses to investigate a possible sex effect in the ability to recall orally given quantitative information. The result showed that male subjects recalled significantly more than the female subjects. However, Onibokun (1979) contradicted this finding, reporting no significant difference between male and female subjects' Mathematical abilities. Onibokun administered the McCarthy Scales of Children's Abilities (MSCA) to a random sample of Nigerian children. It is possible that this is the source of his contradictory finding, because an American test might not have been a valid test of the Nigerian children's ability.
So far, the review suggests that sex might be a factor in Mathematics achievement and quantitative reasoning, and the same might well be the case with Mathematics readiness. The few contradictory findings reported might suggest some yet-to-be-investigated phenomena which interact with sex to produce different results in different situations.
In the literature, it was noted that location is a variable that has an effect on Mathematics achievement. Although Mitchelmore (1973) found a significant difference (P < 0.001) among subjects from different locations, Obioma and Ohuche (1980) found no significant location difference in their subjects' performance in Mathematics. The researcher is inclined towards Mitchelmore's finding, considering that different schools are differently equipped with resources, both material and administrative, which can be brought into the teaching/learning situation and affect it differently. Moreover, Obioma (1985) found a significant location difference among his subjects in Mathematics. Similarly, Unodiaku (1998) also found a significant location difference in his subjects' level of errors committed in Mathematics. This suggests a location effect in achievement and, perhaps, in Mathematics readiness. The apparent contradiction in the findings of these studies seems to imply either interaction effects or uncontrolled situational differences that call for investigation.
Not much could be deduced from the literature on the possible influence of the type of school attended (in the context of public versus privately owned schools) on subjects' achievement in, or readiness for, Mathematics. However, considering the difference in commitment evident between the public and private sectors of the national economy, probably due to differences in levels of effective supervision or other motivation, one could expect the educational enterprise to experience such differences too. It follows that one could as well expect products of such different types of junior secondary school to move into senior secondary school with significantly different levels of readiness.
In summary, there seems to be notable evidence in the literature concerning the influence of sex and location on Mathematics achievement and Mathematical reasoning and hence, possibly, on Mathematics readiness. Still, the contradictory results of some other published studies might be attributable to other situational differences. Furthermore, while no evidence was found in the literature on the possible influence of type of school attended on Mathematics readiness, it is within expectation that some form of relationship might exist between these variables. There is need for empirical evidence to clarify this notion.
Summary of Literature Review.
Readiness is a condition or state of preparedness or mastery, reflecting possession of the prerequisite knowledge of a particular subject-matter with which to tackle the next, harder work successfully. The review revealed that readiness tests can be used for various purposes in the public and private sectors of human endeavour, and it highlighted a number of approaches used by researchers in readiness testing. It could be deduced from the review that the development of any useful test requires that high validity and reliability be built into its development. The literature revealed that readiness can be determined from either of the two aspects of diagnosis (i.e. weaknesses or strengths). Students’ weaknesses in mathematics learning are the process errors they commit in solving mathematics problems. The literature revealed that most of the mathematics readiness tests studied concerned pupils advancing from one primary school level to another; other studies concerned pupils advancing from primary school (primary six) to junior secondary school class one (JS1). No study was conducted for students advancing from junior secondary (JS3) to senior secondary (SS1) level.
CHAPTER THREE
RESEARCH METHODOLOGY
This chapter presents the following: research design, area of the study, population of the study, sample and sampling technique, development and content validation of the instrument (MATHRET), the instrument (MATHRET) for data collection, administration of the instrument, scoring of the instrument, reliability of the instrument, and method of data analysis. Each of these sub-sections is described below.
Design of the study:
The study adopted a survey research design. The instrument was used to survey JS3 students’ readiness for senior secondary school mathematics work, from which the influence of gender, school type and location could be established. This design is considered adequate as it enabled the researcher to obtain the necessary data from a large sample of students through the administration of a common test to all the students included in the sample (Obienyem, 1998). The design was successfully applied by Ozouche (1993) in an investigation into difficult areas of ordinary level mathematics in secondary schools. More so, it was successfully applied by Obienyem (1998) in an identification of the mathematics readiness level of junior secondary school class one students in Anambra State.
Area of the Study
The area of this study is Enugu State. The study was carried out in the Nsukka and Obollo-Afor education zones. These two education zones were randomly sampled (using the simple random sampling method) out of the six education zones in the area (Enugu State), namely Udi, Awgu, Obollo-Afor, Enugu, Agbani and Nsukka.
Nsukka urban has four (4) boys’ schools, three (3) girls’ schools, eighteen (18) mixed schools and six (6) private schools. The rural locality has three (3) boys’ schools, three (3) girls’ schools, thirteen (13) mixed schools and eight (8) private schools.
Urban schools in Obollo-Afor zone consist of two (2) boys’, two (2) girls’, seven (7) mixed and five (5) private schools. Rural schools in Obollo-Afor comprise two (2) boys’, three (3) girls’, nine (9) mixed and nine (9) private schools. These gave a total of sixty-nine (69) public and privately owned schools (Ministry of Education, Enugu, 2007/08; Post Primary Schools Management Board (PPSMB), Enugu, 2007/08) (see Appendix B).
Population of the Study
The population of this study is 54,031. This figure comprised all the newly admitted senior secondary class one students in the 69 public and private secondary schools in the two education zones (Nsukka and Obollo-Afor) sampled from the six education zones in Enugu State. Specifically, the total students’ population of Nsukka zone is 31,905, with males numbering 13,731 and females 18,174. The total students’ population of Obollo-Afor zone is 22,126; of this figure, the male students numbered 9,850 while the females numbered 12,276 (Ministry of Education, Enugu, 2007/08; PPSMB, Enugu, 2007/08).
Sample and Sampling Technique
A sample of 300 students was used for the study. The choice of this figure (300) was based on the fact that the instrument (MATHRET) comprised thirty (30) essay questions. Each student’s script needed to be thoroughly scrutinized and marked step by step; with this process one can identify all the skills that students missed or failed (process errors) or got correct. Obviously, marking 30 essay questions across such a large number of scripts (300) is tedious and painstaking, and increasing the sample size would have taken more time than the study allowed. This study adopted a multi-stage sampling technique.
The first stage was based on the simple random sampling technique (balloting without replacement), in which two education zones (Nsukka and Obollo-Afor) were sampled out of the six education zones, namely Enugu, Nsukka, Udi, Agbani, Awgu and Obollo-Afor. The next stage involved the use of the simple random sampling technique (balloting without replacement) in selecting school type (in terms of whether public or private). This resulted in 102 schools of both types being sampled. The next stage involved using the proportionate sampling technique to sample 19 schools from the 102; the same proportionate sampling technique was then used to sample 17 intact classes. The last stage involved using the simple random sampling technique to sample the required number of students per class. For instance, in a boys’ school with 20 boys in a class from which the researcher intended to select 7 students, the researcher simply cut out 20 pieces of paper of equal size. He wrote ‘yes’ on seven pieces and
‘no’ on the remaining thirteen pieces, and rumpled the pieces of paper. He then put the pieces of paper in a bag and shuffled them. The researcher then requested the students to pick just once (without replacement). Those who picked the 7 ‘yes’ pieces constituted the effective sample from that class.
In the case of a mixed school (boys and girls), each class was grouped into boys and girls, and a similar technique was applied in selecting the number of boys and girls required from the mixed class. This simple random sampling technique was adopted in the 17 intact classes, which resulted in 148 boys and 152 girls. These figures (148 and 152) brought the effective number of sampled subjects to the 300 students used for the study.
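The balloting procedure described above is equivalent to drawing a simple random sample without replacement. A minimal sketch in Python, with an illustrative class of 20 boys and a quota of 7 (the names and seed are hypothetical, not taken from the study):

```python
import random

def ballot_sample(class_roll, quota, seed=None):
    """Simulate balloting without replacement: 'yes' slips equal to the
    quota are mixed with 'no' slips and each student draws exactly once."""
    rng = random.Random(seed)
    slips = ["yes"] * quota + ["no"] * (len(class_roll) - quota)
    rng.shuffle(slips)  # the 'rumpling and shuffling in a bag' step
    return [student for student, slip in zip(class_roll, slips) if slip == "yes"]

# Illustrative class of 20 boys from which 7 are to be selected.
boys = [f"boy_{i}" for i in range(1, 21)]
selected = ballot_sample(boys, 7, seed=1)
print(len(selected))  # 7 students form the effective sample
```

Because every assignment of slips to students is equally likely, each student has the same 7/20 probability of selection, which is exactly a simple random sample.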
The Mathematics Readiness Test: Approaches used in Development
and Content Validation of MATHRET (First Version).
The first part of the test development involved the researcher analyzing the junior secondary class three (JS3) mathematics section of the National Curriculum for Junior Secondary Schools, Volume 1, Science (Federal Ministry of Education, Science and Technology, 1985), which outlines objectives (see Appendix A and F). Specifications should begin with an outline of the objectives of the course as well as of the subject matter to be covered (Anastasi, 1968). With the objectives in hand, it may be useful to create a test blueprint that specifies each outcome and the types of items planned to assess those outcomes (STI, 2006). In the national curriculum, no weights were attached to the content areas. In view of this, two JS3 mathematics teachers, each holding a degree in mathematics education with at least six years’ post-qualification mathematics teaching experience, and two lecturers, one in mathematics education and the other in
measurement and evaluation, were requested to independently suggest weights they considered appropriate for the different sections of the JS3 mathematics curriculum. The judges were to consider the activities/materials required in each content area of the curriculum as the determinant of the percentage weight to be assigned to that content area. The percentage weights the judges assigned to the different sections are indicated in Table 1 below, where A represents the researcher, B and C the two teachers, and D and E the two lecturers.
Table 1: The mean values of percentage weights assigned to different content areas of the JS3 mathematics curriculum by 5 ‘judges’.

Content Area                A    B    C    D    E    Mean (X)   % weight
Number and Numeration       19   16   20   17   18   18.0       30%
Algebraic Processes         11    7    6    7    8    7.8       13%
Geometry and Mensuration    20   17   18   19   18   18.4       30%
Everyday Statistics         18   15   16   14   19   16.4       27%
Total                                                60.6       100%
The mean weights for the respective content areas were summed, giving 60.6. The proportion 18:60.6 expressed as a percentage gave 30% for Number and Numeration; 7.8:60.6 gave 13% for Algebraic Processes; 18.4:60.6 gave 30% for Geometry and Mensuration; and 16.4:60.6 gave 27% for Everyday Statistics. It is useful for persons designing their first test to construct a table of specifications (Hopkins and Antes, 1978) in order to increase their chances of identifying an item pool that will represent their domain of interest (Golden, Sawicki and Franzen (1990), in Goldstein (1992)). The researcher therefore developed a test blueprint (Table 2) for the Mathematics Readiness Test (MATHRET), employing the mean value of the weights assigned to each content area by the ‘judges’ as the weight of the corresponding content area of the test blueprint (see Table 2 below).
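The normalization of the judges’ mean weights into the blueprint percentages can be reproduced directly from the figures in Table 1:

```python
# Mean weights assigned by the five judges (figures from Table 1).
mean_weights = {
    "Number and Numeration": 18.0,
    "Algebraic Processes": 7.8,
    "Geometry and Mensuration": 18.4,
    "Everyday Statistics": 16.4,
}

total = sum(mean_weights.values())  # 60.6
# Each content area's share of the total, rounded to a whole percentage.
percent = {area: round(100 * w / total) for area, w in mean_weights.items()}
print(percent)
# Number and Numeration -> 30, Algebraic Processes -> 13,
# Geometry and Mensuration -> 30, Everyday Statistics -> 27
```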
The National Policy on Education Implementation Committee’s (2000) guidelines aim at ensuring the acquisition of the appropriate levels of literacy, numeracy and manipulative skills needed to lay a solid foundation for life-long learning. These guidelines suggest that at the senior secondary school level most attention should be directed to the higher levels of behavioural objectives (i.e. application to synthesis), and the lower levels (knowledge, comprehension) to the junior secondary school level. In developing the MATHRET, therefore, a classification broader than the Bloom, Krathwohl and Masia (1956) Taxonomy was used, in conjunction with Ohuche and Akeju (1988). Ohuche and Akeju’s classification comprises knowledge (K); understanding and use of concepts and processes (UCP); and decision making (DM). Relative to Bloom et al.’s Taxonomy, K represents the knowledge category, UCP combines comprehension and application, while DM combines analysis, synthesis and evaluation.
Table 2: A blueprint of the MATHRET for trial testing

Content                           Weight   K    UCP   DM   Total
Number and Numeration             30%      7    4     -    11
Algebraic Processes               13%      3    2     -     5
Geometry and Mensuration          30%      2    6     3    11
Everyday Statistics               27%      3    4     2     9
Total                                                      36
The six teachers (A, B, C, D, E and F), all possessing equal qualifications as well as mathematics teaching experience, alongside two lecturers (G and H), one in mathematics education and the other in measurement and evaluation, were requested to rate the blueprint independently on a five-point scale based on:
1. Coverage of the cognitive domain, adopting the classification scheme of Ohuche and Akeju (1988);
2. Coverage of the JS3 mathematics curriculum contents.
The rating scale was organized as follows:
1. for grossly inadequate coverage
2. for poor coverage
3. for fair coverage
4. for good coverage
5. for very good coverage
Table 3: Teachers’ ratings of the MATHRET blueprint on a 5-point scale.

                                 A   B   C   D   E   F   G   H   Mean (x)
Coverage of cognitive domain     5   5   5   5   4   4   5   5   4.8
Coverage of curriculum content   5   5   5   5   5   5   5   5   5.0
The mean rating for the coverage of the JS3 mathematics curriculum content was 5, and the mean rating for the coverage of the cognitive domain was 4.8. These mean ratings of 5 and 4.8 suggested a high degree of agreement among the raters, and that the coverage of both the behavioural domains and the content, as shown in the blueprint, was adequate. Subsequently, a pool of 36 essay items covering all the content areas in Table 2 was produced. The 36 items on the blueprint were used to produce the Mathematics Readiness Test (MATHRET) for trial testing and item analysis. Such analysis may be facilitated by the item difficulty index, which represents the percentage of a given group that fails an item (WISC, 2006) or, alternatively, the percentage of persons who answer it correctly (Anastasi, 1968). Items are measured so as to discard those of unsuitable difficulty. The difficulty levels and discrimination indices of the items were used to establish the item difficulty (ID) and discrimination index (DI) of the MATHRET (see Appendix G).
The validity of the item pool was assessed using the discrimination index D, as recommended by Engelhart (1972) (see Appendix P). In selecting items to compose the final form of the MATHRET, three criteria were adopted, in line with the literature.
i. In case more items qualified than were specified in the blueprint, priority should be given to those with indices near .50 (Guilford, 1954).
ii. Item D must not be less than .20 (Lemke and Wiersma, 1976).
iii. Item validity index P should range between .25 and .75 (see Allen and Yen, 1979), where P stands for the proportion of the students that pass each item (see Appendix P).
The 36 items covered four content areas, namely Number and Numeration, Algebraic Processes, Geometry and Mensuration, and Everyday Statistics. The developed items were pilot tested so as to carry out the analysis of the items.
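The three retention criteria above can be expressed as a simple filter over item statistics. The item numbers and indices below are hypothetical, for illustration only:

```python
def retain_item(p, d):
    """Apply the selection criteria: difficulty/validity index p must lie
    in [.25, .75] and discrimination index D must be at least .20."""
    return 0.25 <= p <= 0.75 and d >= 0.20

def priority_key(p):
    """When more items qualify than needed, prefer difficulty indices
    near .50 (Guilford, 1954): smaller distance from .50 ranks first."""
    return abs(p - 0.50)

# Hypothetical item statistics: (item number, difficulty p, discrimination D).
items = [(1, 0.48, 0.44), (2, 0.90, 0.35), (3, 0.60, 0.15), (4, 0.55, 0.52)]
kept = sorted((it for it in items if retain_item(it[1], it[2])),
              key=lambda it: priority_key(it[1]))
print([it[0] for it in kept])  # items 1 and 4 survive; item 1 is nearest .50
```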
Pilot Testing of the First Version of MATHRET
The Mathematics Readiness Test (MATHRET) was field-tested so as to establish the reliability of the instrument and its subscales. The MATHRET was administered to all 80 students of three secondary schools randomly sampled from Enugu education zone of Enugu State, during their first week of the 2007/2008 session. The mean and standard deviation of the distribution of their scores on the MATHRET are 67.14 and 4.64 respectively (see Appendix S). The four content areas of the MATHRET were used to compose three subscales, namely Number Manipulation (NUMAP), Computational Skills (COMPUS) and Mathematical Concepts (MACOPS). The NUMAP has 8 items, COMPUS 17 items and MACOPS 11 items. The Kuder-Richardson (KR20) method was used to determine the internal consistency reliability estimates of the subscales. The reliability estimates of NUMAP, COMPUS and MACOPS are .91, .88 and .81 respectively (see Appendix S).
After the trial testing and item analysis, six (6) of the thirty-six (36) items used in the pilot testing were found to be defective and were discarded, retaining 30 items. The developed instrument (MATHRET) is thus composed of 30 items, which were used in the second version of the MATHRET (i.e. for data collection).
The Second Version of the MATHRET
The second version of the study consisted of using the developed instrument (MATHRET) for data collection. In this regard, the reliability and validity of the MATHRET were established using sampled subjects different from those used in the first version.
Table 4: A blueprint of the second version of the MATHRET for JS3 students

Content                           Weight   K    UCP   DM   Total
Number and Numeration             20%      3    3     -     6
Algebraic Processes               16.7%    2    3     -     5
Geometry and Mensuration          33.3%    3    5     2    10
Everyday Statistics               30%      2    5     2     9
Total                                                      30
The MATHRET blueprint (Table 4 above) was subjected to twelve experts for rating. Nine of them were mathematics teachers (M, N, O, P, Q, R, S, T and U), all possessing equal qualifications and mathematics teaching experience, alongside three lecturers (V, W and X), one in mathematics education and the other two in measurement and evaluation. They were requested to rate the blueprint independently on a five-point scale based on:
1. Coverage of the cognitive domain, adopting the classification scheme of Ohuche and Akeju (1988);
2. Coverage of the JS3 mathematics curriculum contents.
The rating scale was organized as follows:
1. for grossly inadequate coverage
2. for poor coverage
3. for fair coverage
4. for good coverage
5. for very good coverage
The results of the rating of the second version of the MATHRET by the twelve experts are displayed in Table 5 below.

Table 5: Teachers’ ratings of the second version of the MATHRET blueprint on a 5-point scale.

                                 M   N   O   P   Q   R   S   T   U   V   W   X   Mean (x)
Coverage of cognitive domain     5   5   4   5   5   5   5   5   5   5   5   5   4.9
Coverage of curriculum content   5   5   5   5   5   5   5   5   5   5   5   5   5.0
The mean rating for the coverage of the cognitive domain of the second version of the MATHRET blueprint was 4.9, and the mean rating for the coverage of the curriculum content was 5. These mean ratings of 4.9 and 5 indicated a high degree of agreement among the raters, and showed that the coverage of both the behavioural domains and the content of the JS3 mathematics curriculum, as shown in the blueprint (Table 4 above), was adequate. Consequently, the MATHRET was subjected to pilot testing in the second version of the study.
Pilot Testing of second version of MATHRET
The Mathematics Readiness Test (MATHRET) was field-tested so as to establish its reliability estimates (see Appendix E). A sample gets a pre-test to establish a baseline of behaviour (Georgetown, 2006). More so, the pilot testing was aimed at identifying problems that could be encountered during the main study, with a view to finding practical solutions to such problems. For instance, the researcher may find that some questions are either too hard or too easy, or variably interpretable by the testees (Ali, 2006). Test developers pre-test items to identify those which are too difficult or too easy, or which fail to discriminate clearly between high and low achievers (Leslie, 1968).
The MATHRET was administered to all 110 JS3 students of four secondary schools randomly selected from Udi education zone of Enugu State, during their second week of the 2007/2008 session. Data obtained from the pilot testing were used to provide the item analysis data for the instrument. The item discrimination indices of the MATHRET were found to range from .44 to .74, with a mean of .61 (see Appendix H). All 30 items of the MATHRET thus qualified for retention and were retained.
The MATHRET is a 30-item essay group test made up of four content areas, namely Number and Numeration, Algebraic Processes, Geometry and Mensuration, and Everyday Statistics. These four content areas were used to compose three subscales, namely Number Manipulation (NUMAP), Computational Skills (COMPUS) and Mathematical Concepts (MACOPS). The NUMAP has 6 items, COMPUS 15 items, and MACOPS 9 items. The Kuder-Richardson (KR20) method was used to establish the internal consistency reliability estimates of NUMAP, COMPUS and MACOPS. Their reliability estimates are 0.91, 0.90 and 0.97 respectively (see Appendix D). Note that a reliability coefficient of .70 or higher is considered ‘acceptable’ in most social science research situations (UCLA, 2006). The three subscales are described hereunder.
Number Manipulation (NUMAP)
This subscale tests the ability to add, subtract, divide, multiply and evaluate powers of numbers. All the items are numerical and involve both positive and negative numbers. Given the nature of this subscale, all items in this section fall within the knowledge and understanding categories, corresponding to the knowledge and comprehension categories of the cognitive domain of the taxonomy of educational objectives (Bloom et al., 1956).
Computational Skills (COMPUS)
This subscale tests the ability to simplify and evaluate mathematical expressions involving one or two variables, and to calculate from a given quantity or data.
Mathematical Concepts (MACOPS)
Items here are intended to measure knowledge and understanding of mathematical ideas as well as terms, notations and figure series. The mathematical ideas included are only those considered prerequisite for understanding the initial mathematics concepts taught in the senior secondary school curriculum. They are concepts usually assumed by SS1 mathematics teachers in lessons that introduce the major segments of the curriculum. Knowledge and understanding in the context of this section correspond to the knowledge and comprehension categories of the cognitive domain of the Taxonomy of Educational Objectives of Bloom et al. (1956), or the K and UCP categories of the Ohuche and Akeju (1988) classification.
Instrument for data collection
The instrument for data collection for this study was a Mathematics
Readiness Test (MATHRET) developed by the researcher. The instrument
was composed of thirty (30) essay test items that were designed to elicit the
necessary analyzable data from the students (see Appendix: C).
Administration of the instrument
The researcher administered the MATHRET to a sample of 300
students in Nsukka and Obollo-Afor education zones of Enugu state.
Scoring of the Instrument
Each omission or failure of a skill was counted as an error committed. The total frequency of errors committed by each student was later transformed into a percentage. This approach to interpreting the readiness levels of the students measures a given score against an apparently absolute standard; in this regard, Lyman (1986) noted that this ‘absolute’ standard is the maximum obtainable score on the test. The criteria for assessment in achievement tests in the secondary school education system are that a student scoring from 0% to 39% is classified as ‘fail’; one scoring from 40% to 54% is classified as ‘pass’; and one scoring 55% or above is classified as having obtained a credit. These criteria were adopted in this work but in reverse order, since a higher frequency of errors indicates lower readiness. Thus, a student whose frequency of errors was 55% or more of the maximum obtainable frequency of errors (17,700) (see Appendix E) was classified as ‘not ready’ (fail); one whose error frequency ranged from 40% to 54% was classified as ‘fairly ready’ (weak pass); and one whose error frequency ranged from 0% to 39% was classified as ‘ready’ (pass).
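The reversed cut-off points can be written as a small classification function. This is a sketch of the rule as stated above, with the error percentage taken relative to the maximum obtainable frequency of errors:

```python
def readiness(error_pct):
    """Classify a student from the percentage of the maximum obtainable
    errors committed, using the study's reversed cut-off points."""
    if error_pct >= 55:
        return "not ready"     # fail
    if error_pct >= 40:
        return "fairly ready"  # weak pass
    return "ready"             # pass

print(readiness(30), readiness(47), readiness(60))
# ready fairly ready not ready
```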
Reliability of the instrument
The researcher used the Kuder-Richardson (KR20) method to establish the reliability of the MATHRET. The instrument was administered to 110 SS1 entrants of the 2007/08 session in Udi education zone within their first two weeks of the first term. The data collected with the instrument during the trial testing were used to find its internal consistency. A reliability estimate of .91 was obtained for the internal consistency of the items (see Appendix D). The reliability estimates of the MATHRET subscales NUMAP, MACOPS and COMPUS were .91, .97 and .90 respectively (see Appendix E).
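KR20 is computed from a students-by-items matrix of dichotomous (0/1) scores as KR20 = (k/(k-1)) * (1 - sum(p_i * q_i) / variance of total scores), where p_i is the proportion passing item i and q_i = 1 - p_i. A minimal sketch with toy data (not the study’s scores):

```python
from statistics import pvariance

def kr20(matrix):
    """Kuder-Richardson formula 20 for a students x items matrix of 0/1
    scores: KR20 = k/(k-1) * (1 - sum(p_i * q_i) / var(total scores))."""
    k = len(matrix[0])                     # number of items
    totals = [sum(row) for row in matrix]  # each student's total score
    var_total = pvariance(totals)          # population variance of totals
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in matrix) / len(matrix)  # proportion passing item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

# Toy data: 4 students x 3 items (illustrative only).
scores = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
print(round(kr20(scores), 2))  # 0.75
```

Treating each scored skill as committed/not committed yields exactly the 0/1 data this formula requires.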
Method of Data Analysis
The data obtained with the instrument (MATHRET) were analyzed using the following statistical tools:
a) Kuder-Richardson (KR20);
b) Percentage;
c) The t-test statistic, used to test the significance of the difference in mean errors between:
i. Male and female students;
ii. Urban and rural students; and
iii. Public and Private Secondary School Students.
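The independent-samples t statistic used for these comparisons can be sketched with the pooled-variance formula; the error scores below are illustrative, not the study’s data:

```python
from math import sqrt
from statistics import mean, variance

def t_statistic(group_a, group_b):
    """Independent-samples t-test with pooled variance:
    t = (mean_a - mean_b) / sqrt(s_p^2 * (1/n_a + 1/n_b))."""
    na, nb = len(group_a), len(group_b)
    # Pooled sample variance across the two groups.
    sp2 = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / sqrt(sp2 * (1 / na + 1 / nb))

# Illustrative error scores for two small groups.
male_errors = [40, 44, 46, 50]
female_errors = [48, 50, 53, 55]
print(round(t_statistic(male_errors, female_errors), 2))  # -2.5
```

The resulting t value is compared with the critical value for n_a + n_b - 2 degrees of freedom at the chosen significance level.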
CHAPTER FOUR
RESULTS
This chapter presents the data and their analysis in accordance with the research questions and hypotheses posed in the study.
Research Question One
To what extent can the validity of the MATHRET be determined?
The validity of the MATHRET was established through content validation. Content validity was inferred from the high degree of agreement among the raters, who were requested to independently assess the weights given to the different content areas of the JS3 mathematics curriculum, the adequacy of the MATHRET table of specifications, and its adherence to the final blueprint. The mean ratings of the raters on the 5-point scale were 5 for the coverage of the JS3 mathematics curriculum content and 4.9 for the coverage of the cognitive domain (see Table 6 below).
Table 6: Teachers’ ratings of the second version of the MATHRET blueprint on a 5-point scale.

                                 M   N   O   P   Q   R   S   T   U   V   W   X   Mean (x)
Coverage of cognitive domain     5   5   4   5   5   5   5   5   5   5   5   5   4.9
Coverage of curriculum content   5   5   5   5   5   5   5   5   5   5   5   5   5.0
Research Question two
To what extent can the reliability of the MATHRET be determined?
The answer scripts collected from the 110 students during the trial testing were used to find the internal consistency of the instrument. The Kuder-Richardson (KR20) procedure was used to test the internal consistency of the items used in the trial testing. A reliability coefficient of .91 was obtained as the internal consistency of the items (see Appendix D).
Research Question three
What percentage of senior secondary school entrants are ‘ready’, ‘fairly ready’ or ‘not ready’ (in terms of their mean error scores on the MATHRET) for senior secondary school mathematics learning?
Table 7: Contingency table showing the percentage of 300 senior secondary school entrants that are ‘ready’, ‘fairly ready’ and ‘not ready’ for senior secondary school mathematics.

Total frequency of errors committed by students (out of the 17,700 maximum): 14,508
Total frequency of errors committed by students classified as ‘ready’: 863
Total frequency of errors committed by students classified as ‘fairly ready’: 1,267
Total frequency of errors committed by students classified as ‘not ready’: 12,378
Number of students who committed 39% or less of the 17,700 frequency of errors (‘ready’): 38
Number of students who committed between 40% and 54% inclusive (‘fairly ready’): 47
Number of students who committed 55% or more of the 17,700 frequency of errors (‘not ready’): 215
Percentage ‘ready’: 12.67%
Percentage ‘fairly ready’: 15.67%
Percentage ‘not ready’: 71.66%

(See Appendix J and L.)
From the data in Table 7, it can be seen that:
i. students classified ‘not ready’ recorded a higher frequency of errors (12,378) than their counterparts classified ‘ready’ or ‘fairly ready’, who committed 863 and 1,267 errors respectively;
ii. fewer than one-third (38 + 47 = 85) of the 300 students used for the study were found ‘ready’ or ‘fairly ready’ for the senior secondary school mathematics programme. In other words, a total of 28.34% (12.67% + 15.67%) of the 300 students were found ‘ready’ or ‘fairly ready’, against 71.66% found ‘not ready’, an indication of a lack of readiness of JS3 students for the senior secondary school mathematics programme.
Research Question Four
To what extent do male and female students vary in terms of their
scores on the mathematics readiness test?
Table 8: Contingency table showing the mean error difference between male
and female students as measured by the MATHRET.
S/N  Skill  Total freq. of errors   Total freq. of errors
            committed by males      committed by females
A. Comprehension skills
1    A      198                     271
2    B      209                     269
3    C      202                     263
4    D      213                     402
5    E      235                     311
6    F      225                     312
B. Process skills
7    G      294                     333
8    H      307                     345
9    I      170                     465
10   J      305                     504
11   K      230                     298
12   L      241                     291
13   M      246                     273
14   N      228                     392
15   O      231                     272
16   P      253                     359
17   Q      248                     331
18   R      230                     278
19   S      275                     291
20   T      235                     255
21   U      299                     319
C. Transformation skill
22   V      430                     128
D. Carelessness skill
23   W      248                     211
E. Encoding skills
24   X      368                     547
25   Y      549                     442

Grand total of frequencies of errors committed by males: 6668
Total frequency of errors committed by females: 7840
Total number of male students: 148
Total number of female students: 152
Mean errors: male 45.05; female 51.58. Mean error difference: 6.53
(See Appendix: J and O).
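The group means in Table 8 are simply the total error frequency divided by the group size; the reported figures can be checked with one-line arithmetic:

```python
# Mean errors per group = total error frequency / number of students (Table 8).
male_mean = round(6668 / 148, 2)                # 45.05, as reported
female_mean = round(7840 / 152, 2)              # 51.58, as reported
difference = round(female_mean - male_mean, 2)  # 6.53, in favour of the males
```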
From Table 8 above, the researcher found that:
i. The mean errors committed by male students was 45.05;
ii. The mean errors committed by female students was 51.58;
iii. The mean error difference was 6.53 in favour of the male students;
iv. The males were more ‘ready’ than their female counterparts for senior
secondary school mathematics learning.
Research Question Five
To what extent do students in urban and rural schools vary in terms of
their respective mean errors committed on the MATHRET?
Table 9: Contingency table showing the mean error difference between urban
and rural students as measured by the MATHRET.

Means of errors made by urban and rural students:
Total frequency of errors committed by students: urban 6395; rural 8113
Total number of students: urban 144; rural 156
Mean errors committed on the MATHRET: urban 44.41; rural 52.01; difference in means 7.60
(See Appendix: M for raw scores per candidate).

From the data in Table 9 above, it can be seen that:
i. The mean of the errors committed by the urban students was 44.41,
with a total of 6395 errors;
ii. The mean of the errors committed by the rural students was 52.01,
with a total of 8113 errors;
iii. The difference in the means of errors committed by the urban and rural
students was 7.60, in favour of the urban students;
iv. The urban students were found to be more ‘ready’/‘fairly ready’ than
their rural counterparts for senior secondary school mathematics learning.
Research Question Six
To what extent do private and public secondary school students vary
(in terms of their means of errors made on the MATHRET)?
Table 10: Contingency table showing the difference in means of errors
committed by students from private and public secondary schools
as measured by the MATHRET.
Means of errors made by public and private students:
Total frequency of errors made by the students: public 8964; private 5544
Total number of students: public 186; private 114
Means of errors committed by the students on the MATHRET: public 49.55; private 46.42; difference in means 3.13
From the data in Table 10 above, it could be deduced that:
i. Students attending privately owned secondary schools committed more
errors than their counterparts schooling in public schools;
ii. A mean difference of 3.13 was found in favour of students schooling in
public secondary schools;
iii. Students of publicly owned secondary schools appeared to be more
‘ready’/‘fairly ready’ than students of privately owned secondary schools.
Hypothesis One
There is no significant difference in the means of errors committed by
male and female students, influencing their degree of readiness for senior
secondary school mathematics as measured by the MATHRET.
Table 11: t-table for the difference in mean errors made by male and female
SS1 entrants as measured by the MATHRET.

Sex     No.   Mean    S.D.    Difference      t-cal.   t-critical   Decision
                              between means
Male    148   45.05   14.98   6.53            3.07     1.645        Reject H0
Female  152   51.58   9.59
(See Appendix: L)
From the Student’s t distribution, with df (v) = 284 and α = 0.05, the
t-critical value is 1.645 (Table 11). The t-calculated value (3.07) is
greater than the t-critical value (1.645); the hypothesis was therefore
rejected. This implies that there is a significant difference in the mean
error scores made by male and female SS1 entrants as measured by the
MATHRET. This significant difference suggests that sex is a significant
factor influencing readiness for the senior secondary school mathematics
programme. The table revealed that female students committed more errors
than their male counterparts, meaning that males are more ‘ready’ for the
senior secondary school mathematics programme compared with their female
counterparts.
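Tables 11 to 13 report only the resulting t values. As a minimal sketch of the standard pooled-variance two-sample t statistic on which such summary-data comparisons are conventionally based (the function name and the equal-variance form are our assumptions; the thesis does not show its computation):

```python
import math

def pooled_t(n1, mean1, sd1, n2, mean2, sd2):
    """Two-sample t statistic from summary data, pooled-variance form."""
    # Pooled variance across the two groups.
    sp2 = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    # Standard error of the difference between the two means.
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return abs(mean1 - mean2) / se
```

With the Table 11 summary statistics (n = 148, mean 45.05, S.D. 14.98 versus n = 152, mean 51.58, S.D. 9.59), this form also yields a t value exceeding the one-tailed critical value of 1.645 at α = 0.05, so the decision to reject H0 is unchanged under this sketch.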
Hypothesis Two
There is no significant difference in the means of errors made by urban
and rural SS1 entrants, influencing their degree of readiness for senior
secondary school mathematics as measured by the MATHRET (scores).
Table 12: t-table for the difference in means of errors made by urban and
rural SS1 entrants as measured by the MATHRET (scores).

Location   No.   Mean    S.D.    Difference      t-cal.   t-critical   Decision
                                 between means
Urban      144   44.41   13.09   7.60            26.48    1.645        Reject H0
Rural      156   52.01   11.47
(See Appendix: M)
From the Student’s t distribution, with α = 0.05 and df (v) = 284,
the t-critical value is 1.645. The t-calculated value (26.48) is greater
than the t-critical value (1.645); the hypothesis was therefore rejected.
This means that there is a significant difference in the means of errors
made by urban and rural SS1 entrants as measured by the MATHRET. This
significant difference in the means of errors committed by urban and rural
students suggests that location is a significant factor influencing the
degree of readiness of students advancing from the junior secondary school
level (JS3) to the senior secondary school level (SS1), where they intend
to receive higher mathematics learning.
Hypothesis Three
There is no significant difference in the means of errors made by public
and private senior secondary class one entrants, influencing their degree
of readiness for senior secondary school mathematics as measured by the
MATHRET.
Table 13: t-table for the difference in means of errors made by public and
private senior secondary class one entrants as measured by the MATHRET.

School type   No.   Mean    S.D.    Difference      t-cal.   t-critical   Decision
                                    between means
Public        186   49.55   10.53   3.13            1.83     1.645        Reject H0
Private       114   46.42   16.22
(See Appendix: N)
From the Student’s t distribution, with α = 0.05 and df (v) = 284, the
t-critical value is 1.645. The t-calculated value (1.83) is greater than
the t-critical value (1.645); the hypothesis was therefore rejected. This
means that there is a significant difference in the means of errors made by
public and private senior secondary school class one (SS1) entrants as
measured by the MATHRET. This significant difference in the means of errors
committed by public and private students suggests that school type is a
significant factor influencing the degree of readiness of junior secondary
school class three (JSS3) students intending to resume a new mathematics
programme at the SS1 level.
CHAPTER FIVE
DISCUSSION, CONCLUSIONS, IMPLICATIONS, RECOMMENDATIONS,
SUGGESTIONS AND SUMMARY.
This chapter presents the following: discussion of results, conclusions
drawn from the findings, educational implications of the study,
recommendations, suggestions for further studies and summary of the work.
Discussion of results
The results of this study are discussed around the findings in both the
research questions and the hypotheses.
Readiness of Beginning SS1 Students as Measured by MATHRET.
The total frequency of errors committed by the students was 14508.
Of this figure, 863 and 1267 frequencies of errors were committed by the
students classified as ‘ready’ and ‘fairly ready’, respectively, while
12378 frequencies of errors were committed by the students classified as
‘not ready’. The total number of students that committed 39% and below of
the maximum obtainable frequency of errors (17700), otherwise referred to
as those that were ‘ready’, was 38 out of the total sample of 300 students
used for the study. Moreover, the number of students that committed between
40% and 54% inclusive, otherwise referred to as ‘fairly ready’, was 47,
while the number that committed 55% and above of the maximum obtainable
frequency of errors (17700) was 215 out of the 300 subjects used for the
study. The percentage found to be ‘ready’ was 12.67 percent, the percentage
found to be ‘fairly ready’ was 15.67 percent, and the percentage found to
be ‘not ready’ was 71.66 percent. Students committed the highest
frequencies of errors on the skills y and j, recording 549 and 809
frequencies of errors respectively, while the least frequency of errors
occurred on the skill w, with 459 (see Appendix: J and O). This result
suggests that the percentages of students found ‘ready’ and ‘fairly ready’,
as against those ‘not ready’, were not encouraging. Only 12.67 percent and
15.67 percent of the sampled subjects were found ‘ready’ and ‘fairly
ready’, respectively, for the senior secondary school mathematics
programme, suggesting that the junior secondary school students generally
lacked evidence of readiness for senior
secondary school mathematics learning. In the literature, it was noted that
general poor performance in mathematics was related to conceptual and
procedural errors (Kalu, 1990; Onugwu, 1991) which students committed in
solving mathematical problems. Invariably, this situation may mar JS3
students’ readiness for senior secondary school mathematics learning. This
lack of readiness may be attributed to the different error categories
(comprehension, transformation, process, encoding and carelessness)
students commit in their mathematics work (Onugwu, 1991; Unodiaku, 1998).
The result also suggested that the students were most deficient in the
ability to write down answers correctly (or with the appropriate signs,
where necessary), in which they committed 989 frequencies of errors. The
next skill in which the students were found most deficient was the ability
to divide numbers, in which they recorded 809 frequencies of errors. That
means the students had not mastered writing down answers correctly (or with
the appropriate signs, where necessary) and dividing number by number,
thereby exhibiting procedural errors as noted by Onugwu (1991).
The above findings may be associated with the quality and quantity of
mathematics taught and learned at the junior secondary school level. Senior
secondary class one (SS1) mathematics learning depends on previously
acquired junior secondary school mathematics experience, and lack of this
is an indication of unpreparedness for the senior secondary school
mathematics programme. Mathematical unpreparedness of secondary school
students at the point of admission may be related to the inability of most
teachers to integrate manipulatives into their lessons; this makes lessons
boring and uninteresting, and mathematical concepts are learnt in a
disconnected and distorted manner (Ibeaja and Nworgu, 1992; Obienyem,
1998). This mathematical unpreparedness appears to be more prevalent among
students attending private secondary schools than among their counterparts
in the public secondary schools. Students in the private schools committed
more errors than their counterparts schooling in the public schools, with a
mean error difference of 3.13 in favour of the public schools. The mean
frequency of errors committed by the students in the public schools was
49.55, while that committed by the students in the private schools was
46.42. Obviously, JS3 students in the public schools appear to be more
ready than their counterparts in the privately owned secondary schools.
This disparity in the readiness of the private and public junior secondary
school (JS3) students for senior secondary school mathematics work suggests
that public secondary school students received more, and better quality,
mathematics instruction at the junior secondary school (JS3) level than
their counterparts in the private secondary schools.
The MATHRET frequencies of errors committed by the students revealed
that the beginning SS1 students were generally ‘not ready’ for the senior
secondary school mathematics programme, owing to their notably poor
performance on the MATHRET. The question of the errors students commit in
mathematics as a determinant of their lack of readiness for the senior
secondary school mathematics programme is, however, inconclusive; there is
therefore a need for further investigation to clarify this notion.
Gender Factor in the SS1 Mathematical Readiness as Measured by
MATHRET (scores)
This study sought to investigate how far the male and female students
vary in terms of their scores on the mathematics readiness test. A closer
observation of the mean errors by sex displayed in Table 8 suggests
interesting results. The mean error difference of 6.53 was in favour of the
male students. The result of the hypothesis on the sex-related influence on
the occurrence of errors between male and female students was found to be
significant (see Appendix: P).
Gender was found to be a significant factor of variance in the errors
students committed in solving the MATHRET items at p = 0.05. The males
committed more errors than the females in the following skills: v, w, and y
(see Appendix J for what the abilities v, w, and y represent). It is
therefore evident that:
i. Female students acquired more of the ability to translate word
problems into numeric form than their male counterparts;
ii. Female students acquired more of the ability to write down values or
expressions over which they have mastery than the male students;
iii. Female students gained more of the ability to write down answers
correctly (or with the appropriate signs, where necessary) than the
male students;
iv. However, the male students appeared to be more ‘ready’ than their
female counterparts in terms of mastery of the abilities represented
by a to u (see Appendix J for what the abilities a–u represent).
This shows that male students were more ready than their female
counterparts in the mastery of most of the skills, while the females were
more ready than the males in the mastery of a few other skills.
Furthermore, male students from public secondary schools appeared to be
more ready for senior secondary school mathematics learning than their male
counterparts from privately owned secondary schools. Considered as whole
groups, however, students from private schools showed evidence of being
more ready for senior secondary school mathematics than their counterparts
from public schools, since students from public schools committed a higher
mean of errors (49.55) than students from private schools, who committed a
mean of 46.42 errors, a mean error difference of 3.13 (see Appendix: N).
This observed difference was found significant (t = 1.83, p = 0.05, 284
degrees of freedom; see Appendix: N). The observed difference in the
readiness levels of public and private students could be attributed to the
fact that qualified mathematics teachers are more gainfully employed and
better paid by the federal or state government in public schools than they
can be paid in private schools by proprietors, thereby robbing private
school students of being taught by qualified mathematics teachers.
Moreover, mathematics teaching aids are made readily available to public
schools by the government but not to private schools, giving the public
schools an edge over their private counterparts. There is a need to seek
possible solutions to bridge this gap, to enhance better teaching and
learning of mathematics, especially among the privately owned secondary
schools springing up everywhere without moderation by the government. The
observed mean error difference of 3.13 in favour of the privately owned
secondary schools needs further enquiry, since students in public schools
had greater access to mathematics teaching facilities and human resources
than their counterparts in the privately owned secondary schools.
The observed gender-related differences in the readiness of the male
and female students, in terms of the errors they committed in solving the
MATHRET items, seem to some extent to compare favourably with earlier
postulates and research findings. In addition to sex differences in general
and specific learning abilities, there are also differences in cognitive
style (Biehler and Snowman, 1999). Isinenyi (1990), Ubagu (1992) and
Unodiaku (1998) all reported that male students achieve more in
problem-solving (computation) than girls, as is the case in this study, in
which girls committed more computational (process) errors than boys. In
other words, males are more ready than females in computation or process
skills. This finding contradicts the earlier finding of Obienyem (1998),
who reported no differential influence of sex on mean mathematical
readiness scores: the entire body of students was not mathematically ready
for the JS3 mathematics programme at the point of admission, and hence
performance in mathematics, by that finding, was not influenced by the
students’ sex. Obienyem’s (1998) report of no differential influence of sex
contradicts previous reports on the influence of sex on students’
achievement in mathematics (Isinenyi, 1990; Biehler and Snowman, 1990;
Ubagu, 1992; Unodiaku, 1998).
The gender factor in the SS1 entrants’ mathematical readiness levels as
measured by the MATHRET therefore appears to be inconclusive. There is a
need for further enquiry to clarify this issue.
Location Factor in the SS1 Entrants’ Mathematical Readiness as
Measured by MATHRET (scores).
This study sought to investigate whether some or all of the errors
identified are peculiar to students in urban-located schools or in rural
areas. The urban students showed more deficiency in the skills represented
by p, s, v, w, x and y than their rural counterparts (see Appendix: O). In
other words, the rural students are more ready than the urban students in
terms of mastery of these skills.
However, the urban students showed more mastery (readiness) than the
rural students in the rest of the skills, represented by a to o, q, r, t
and u (see Appendix: O). It was observed that students in public schools
located in the rural areas committed more errors than students from public
schools located in the urban areas. These observed differences could be the
reason why the subjects’ location and the type of junior secondary school
attended (public or private) were found to significantly influence
readiness for senior secondary school mathematics at the alpha level of
0.05.
These findings are consistent with psychologists’ observations. The
psychologists Sawrey and Telford (1958) observed, quite rightly, that the
intelligence of children varies directly with the environment in which they
are raised. The authors believed that children raised in superior, average,
or inferior environments tend to show superior, average, or inferior
intelligence, respectively. It is obvious, they argued, that environment is
what determines intelligence. Perhaps this viewpoint could account for the
differences in readiness levels between urban and rural students.
Such apparent variation in the mastery of the skills between urban and
rurally located schools is in line with several axioms. One such axiom is
that children’s intelligence varies directly with the environment in which
they are raised; that environment is what determines intelligence. Another
axiom holds that localities with different socio-cultural, economic and
physical circumstances present differential learning experiences and
stimulation to the learner. When an environment is grossly deficient in
motivation, the development of the learner raised in it is correspondingly
retarded (Unodiaku, 1998). A child’s intelligence is an interplay of his
talent and experience, the latter being mainly culturally determined (Hunt,
1961, in Unodiaku, 1998). Nwagu (1990) pointed out that the available
resources and the sub-cultural and geo-physical conditions differ somewhat
between rural and urban settings in particular.
In consideration of the above axioms, the findings of this study on the
influence of school location as a factor in the mathematical readiness of
the beginning SS1 students are within expectation. School location was
found to significantly influence the readiness of JS3 students intending to
resume the mathematics programme at the senior secondary school level. The
findings of this study are quite consistent with those of Isinenyi (1990),
Akukwe (1990), Ubagu (1992) and Odo (1990), all of whom reported that
students from rural localities committed more errors than their
counterparts from urban schools. In other words, students from
urban-located schools are more ready than their counterparts attending
rurally located schools.
A possible explanation for such differences in readiness levels between
urban and rural students, with urban students being more mathematically
ready than their rural counterparts, could be the agricultural occupation
with which rural families and inhabitants are naturally preoccupied. The
majority of parents, guardians and other rural inhabitants are illiterate
peasant farmers and petty traders who cannot offer any academic guidance to
their children and wards attending school. Apart from the fact that their
lack of educational background hinders them from assisting their children
or wards educationally, the lack of basic educational settings in their
homes appears to be the major contributing factor responsible for their
children’s and wards’ lack of readiness for the senior secondary school
mathematics programme. This is in contradistinction to what is obtainable
educationally in urban areas, where almost all the inhabitants are literate
parents and wards who have mini-libraries, TV sets, chalkboards and radios
in their homes. These items of home equipment supplement what the urban
students receive from school, enabling them to have an edge over their
counterparts from the rurally located schools in terms of readiness for
senior secondary school mathematics learning.
Another possible reason for the significant difference due to location
is that extra-mural and evening classes are usually made compulsory in
urban areas by the Parent-Teacher Association (PTA). Moreover, most parents
in urban localities pay part-time teachers to come and teach their children
and wards in the evenings. In the rural areas, by contrast, the illiterate
parents and guardians vehemently kick against the implementation of evening
and extra-mural lessons, and the parents of rural students show no concern
over their wards’ or children’s poor attendance at school or lateness to
school. The success of extra-mural lessons, evening lessons, regular
attendance and punctuality at school is adducible to the literate status of
the parents and guardians in the urban areas. These differences in the home
backgrounds of the children tend to give a plausible explanation for the
significant differences in the mean error scores committed, or the variance
in the readiness levels, of urban and rural students (see Table 12;
Appendix: M).
Finally, the significant mean difference due to school location could be
viewed in the light of the fact that schools located in urban areas are
usually better endowed with resources (equipped libraries, qualified
mathematics teachers, funds, among others) for improving the quality and
quantity of teaching and learning. Worst of all, in the rural schools
almost all the school equipment (e.g. sports, laboratory, guidance and
counselling (G&C), and dramatic arts equipment, school plant/generator,
building materials, and so on) supplied free of charge to schools by the
government was vandalized and carted away by thieves. This situation
deepens the educational backwardness of the rural students. The above
explanations are merely hypothetical and require further enquiry.
Conclusions.
The conclusions of this study are based entirely on the problems
investigated. The results of the data analysis, itemized as follows,
revealed that:
i. Twenty-eight point thirty-three percent (28.33%) of the students
were found to be ready for the senior secondary school mathematics
programme, while the remaining seventy-one point sixty-seven percent
(71.67%) were found ‘not ready’ for senior secondary school
mathematics learning.
ii. The males were more ‘ready’ than their female counterparts for
senior secondary school mathematics learning, with a mean error
difference of 6.53 in favour of the males.
iii. Gender was a significant factor in the readiness of male and female
SS1 entrants for senior secondary school mathematics learning as
measured by MATHRET (scores). This factor was found significant at
p < 0.05.
iv. School location was a significant factor influencing the readiness
of SS1 entrants as measured by MATHRET (scores). This location
factor was found significant at p < 0.05.
v. The interaction effect of gender and type of junior secondary
school attended (public or private) significantly influenced the
readiness of JS3 students intending to resume the new mathematics
programme at SS1 level, with students from public schools
appearing more ready than students from private secondary schools
for senior secondary school mathematics learning.
vi. The interaction effect of location and type of junior secondary
school attended was a significant factor influencing the
mathematics readiness of beginning SS1 students. The interaction
effect was found significant at p<0.05, with urban students being
more ready than their rural counterparts for the senior secondary
school mathematics programme.
Hence the MATHRET, with its acceptable degree of validity, could be used to
determine the readiness level of SS1 entrants intending to resume
mathematics learning.
Educational implications of the findings.
It could be inferred from the study that:
1. Since the MATHRET was found to have an acceptable index of internal
consistency reliability, it could be used to determine the readiness
level of JS3 students intending to resume the mathematics programme
at the senior secondary school level.
2. The MATHRET was also found to have acceptable reliability and
validity indices and is therefore useful for the evaluation of mathematics.
This implies that the MATHRET could be used by teachers to determine
the readiness level of students advancing from one level to another, say
from JS1 to JS2, JS2 to JS3, and so on.
3. The MATHRET possesses diagnostic potential. It has implications
for the development of positive attitudes towards mathematics, which will
culminate in increased students' performance.
4. Male and female students used in the study showed convincingly unequal
readiness levels as measured by the MATHRET, with males being more
ready than their female counterparts, suggesting that MATHRET scores
are subject to gender differences. The implication is that the issue of
males performing better than females, earlier reported in the literature,
appears to persist, especially as it concerns evaluating students in
science subjects and mathematics in particular.
5. In teaching female students mathematics, caution needs to be taken by
paying attention to every detail of the necessary skills, algorithms and
principles embedded in each topic, to enhance mastery of those skills.
6. Variations in the readiness levels of urban and rural students were
noted and found significant in the study. The implication is that the
location factor in students' performance, reported earlier in the literature,
is still at work, implying the need for enhanced mathematics teaching
methods, strategies or facilities to be employed in teaching students in
rural areas to get them ready for senior secondary school mathematics
learning.
7. Students from private and public schools performed significantly
differently, with students from public schools being more ready than
students from private schools for senior secondary school mathematics
instruction. The implication is that if the MATHRET continues to be
used for assessing students, those from public schools will continue to
appear more ready than those from private schools.
8. This work supplies resource materials to institutions that may want to
use readiness tests in teaching and evaluation.
9. Entry behaviour was recommended for use in teaching mathematics,
to determine the prerequisite knowledge a student has before a new
related topic is introduced. This study provides such materials for
teachers to avail themselves of in teaching and evaluating mathematics.
Recommendations.
The recommendations were made based on identified areas of
students' weaknesses, followed by recommendations on what teachers should
do about the errors committed by students.
1. As this readiness test (MATHRET) has passed through the processes of
validation and reliability estimation and has been found capable of
identifying students' areas of strength and weakness, by which students
could be classified as 'ready', 'fairly ready' or 'not ready', teachers are
recommended to adopt and adapt it for use in mathematics instruction
and evaluation.
2. As the Mathematics Readiness Test (MATHRET) has been found quite
innovative for the teaching and evaluation of mathematics, teachers
are encouraged to use the MATHRET in their teaching and evaluation.
3. As the MATHRET has been successfully used to determine the
readiness levels of JS3 students intending to resume the new mathematics
programme at senior secondary class one level, examination agencies
should introduce mathematics readiness tests into the evaluation of
mathematics.
4. The study has also demonstrated that use of the MATHRET enhances
students' performance in mathematics by separating those 'ready' and
'fairly ready' from those 'not ready'; it therefore needs to be entrenched
in the junior secondary school mathematics curriculum for teaching and
evaluation.
5. Mathematics textbook authors and other test developers may use this
MATHRET as a guide for developing future tests.
Suggestions for further Studies
The following themes are suggested for further research:
1. Since a mathematics readiness test has been successfully developed for
JS3 students intending to resume the programme in senior secondary
school class one, mathematics readiness tests should also be
developed for students intending to resume mathematics programmes
at other levels.
2. Readiness Test should be developed and validated in other subjects.
3. More tasks on the JS3 national curriculum content areas in
mathematics should also be developed and validated for use in junior
secondary school mathematics instruction and evaluation.
4. As this mathematics readiness test was validated in the Nsukka and
Obollo-Afor education zones, it should equally be validated in the other
education zones of Enugu State.
5. As this MATHRET was validated in Enugu State, it should also be
validated in other states.
6. This MATHRET was developed according to the three approaches
stipulated by Goldstein (1992). Readiness tests should also be
developed according to the approaches of other test developers.
Summary of the work
Mathematics is indispensable in the life of every school child, in our
everyday life, and for science and technological breakthroughs. Ample
evidence revealed that the performance of the majority of senior secondary
school students in mathematics has been generally poor. The curriculum
planners organized mathematics hierarchically, in a sequential order, so
that the learning of one aspect becomes a prerequisite for the learning of the
next, harder one. They recommended the use of 'entry behaviour' in
teaching mathematics, to enable the teacher to know whether the learner
has acquired the necessary prerequisite skills, experience and knowledge to
profit from the instruction he intends to present. This is to say that the teacher intends to
find out if the child is ‘ready’ for the new instruction considering whether he
has background experience before undertaking a new instruction. Readiness
level of the learner therefore becomes a factor that determines the teacher’s
instruction of the new topic or otherwise. Readiness levels of students
have been determined through diagnosis of the students' learning experiences,
together with recommendations on what teachers should do concerning the
errors committed by the students. Mathematics readiness tests were developed
and used to determine the readiness of primary six pupils intending to
resume new mathematics learning in JS1. The researchers have suggested
that mathematics readiness tests be developed and used to determine
the readiness levels of JS3 students intending to resume new mathematics
programme in SS1. Unfortunately, no readiness test has been developed for
prospective senior secondary school entrants. This is complicated by
the fact that secondary school teachers find it difficult to diagnose their
students' learning experiences in mathematics tasks, so as to determine
their areas of weakness (process errors) that mar their progress in
learning mathematics.
The following research questions guided the study:
1. To what extent can validity of the MATHRET be determined?
2. To what extent can the reliability of the MATHRET be determined?
3. What percentage of senior secondary school entrants are ‘ready’,
‘fairly ready’ or ‘not ready’ (in terms of scores on the MATHRET) for the
senior secondary school mathematics learning?
4. To what extent do male and female students vary in terms of their mean
errors committed on the mathematical readiness test?
5. To what extent do students in urban and rural schools vary in terms of
their respective mean errors committed on the MATHRET?
6. To what extent do school types influence the subjects’ mathematical
readiness (in terms of their mean errors committed on the MATHRET) for
senior secondary school mathematics programme?
The following hypotheses were tested at the 5% level of significance.
1. Gender is not a significant factor that influences the degree of
readiness for senior secondary school mathematics as determined by the
mean errors committed by male and female students.
2. Location is not a significant factor that influences the degree of
readiness for senior secondary school mathematics as measured by the
mean errors committed by urban and rural students.
3. The interaction effects of gender and type of junior secondary school
attended (in terms of whether public or private) are not significant factors
influencing readiness for senior secondary school mathematics.
4. The interaction effects of location (urban or rural) with type of junior
secondary school attended (private or public) are not significant factors
influencing readiness for senior secondary school mathematics programme.
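To illustrate how a hypothesis of this kind might be checked, the sketch below computes a two-sample (Welch) t statistic on hypothetical MATHRET error counts for two groups. Both the data and the use of a simple t test (rather than the study's actual analysis procedure, which is not reproduced here) are assumptions made purely for illustration:

```python
import math

# Illustrative Welch two-sample t statistic on hypothetical error counts.
# These numbers are invented for the sketch; they are NOT the study's data.

def welch_t(a, b):
    def mean(x):
        return sum(x) / len(x)

    def svar(x):  # sample variance (n - 1 denominator)
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)

    se = math.sqrt(svar(a) / len(a) + svar(b) / len(b))
    return (mean(a) - mean(b)) / se

male_errors   = [12, 9, 15, 11, 8, 14, 10, 13]
female_errors = [18, 16, 21, 17, 19, 15, 20, 22]

t = welch_t(female_errors, male_errors)
# For a two-tailed test at the 5% level, |t| beyond roughly 2.0 would be
# significant for samples of this size.
print(round(t, 2), abs(t) > 2.0)
```

A larger mean error count indicates lower readiness, so a significant positive t here would correspond to the kind of gender difference the first hypothesis addresses.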
Literature was reviewed under two broad headings: theoretical and
empirical. It was observed that readiness tests have been very efficacious in
various ways, but unfortunately none had been developed for the senior
secondary school mathematics programme.
A descriptive survey study was designed and executed in the Nsukka and
Obollo-Afor education zones of Enugu State. The population was 54,031
senior secondary school class one entrants.
Nineteen schools were purposively sampled from the two education
zones to give a sample size of 300.
The instrument used was the MATHRET. Thirty essay items were developed
and given to validators. Based on the validators' results, the instrument was
administered to the 300 sampled subjects. The Kuder-Richardson formula (KR-20)
was adopted in establishing the reliability of the MATHRET.
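The KR-20 computation mentioned above can be sketched concretely. The fragment below is an illustration only, using hypothetical dichotomously scored item data rather than the study's responses; it implements the classic Kuder-Richardson formula 20, which assumes items scored 0/1:

```python
# Illustrative KR-20 (Kuder-Richardson formula 20) computation.
# The item scores below are hypothetical, NOT the study's data.

def kr20(scores):
    """scores: one row per examinee, each row a list of 0/1 item scores."""
    n_items = len(scores[0])
    n_examinees = len(scores)
    # Proportion passing each item (p); pq = p * (1 - p) per item.
    p = [sum(row[i] for row in scores) / n_examinees for i in range(n_items)]
    pq_sum = sum(pi * (1 - pi) for pi in p)
    # Population variance of the examinees' total scores.
    totals = [sum(row) for row in scores]
    mean_t = sum(totals) / n_examinees
    var_t = sum((t - mean_t) ** 2 for t in totals) / n_examinees
    return (n_items / (n_items - 1)) * (1 - pq_sum / var_t)

data = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
]
print(round(kr20(data), 3))  # 0.648
```

Note that KR-20 strictly applies to dichotomously scored items; for polytomously scored essay items, Cronbach's alpha is the usual generalization.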
The findings of the study.
1. A 30-item essay mathematics readiness test was developed by the
researcher.
2. A reliability coefficient of 0.91 was obtained for the test through pilot testing.
3. The mean ratings of the MATHRET blueprint on a 5-point scale, for coverage
of the cognitive domain and coverage of curriculum content, were 4.8 and 5.0
respectively. The validity of the MATHRET was established through content
validation.
4. Only 38 (12.67%) and 47 (15.67%) of the 300 subjects used
for the study were found 'ready' and 'fairly ready' respectively for the senior
secondary school mathematics programme.
5. A mean error difference of 6.53 was found to exist between male and female
students' readiness levels (in favour of males) as measured by MATHRET
scores.
6. The urban students were found to be more ready for senior secondary
school mathematics teaching/learning than their rural counterparts, as
measured by the MATHRET, with a mean error difference of 7.60 in
favour of the urban students.
7. The mean error difference between public and private SS1 entrants as
measured by the MATHRET was found to be 3.93 in favour of public
secondary school students.
8. The mean error difference between male and female SS1 entrants as
measured by MATHRET scores was found significant at p<0.05.
9. The mean error difference between urban and rural beginning SS1
students as measured by MATHRET scores was found significant at
p<0.05.
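The percentages reported in the findings can be checked arithmetically. The short sketch below reproduces the reported proportions from counts of 38 'ready' and 47 'fairly ready' subjects out of 300 (counts consistent with the 12.67% and 15.67% of finding 4 and with the 28.33%/71.67% split in the conclusions):

```python
# Arithmetic check of the readiness percentages reported in the findings.
n = 300
ready, fairly_ready = 38, 47  # counts consistent with finding 4

def pct(k):
    return round(100 * k / n, 2)

print(pct(ready))                     # 12.67  -> 'ready'
print(pct(fairly_ready))              # 15.67  -> 'fairly ready'
print(pct(ready + fairly_ready))      # 28.33  -> 'ready' or 'fairly ready'
print(pct(n - ready - fairly_ready))  # 71.67  -> 'not ready'
```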
REFERENCES
ACES Report (2006). ACES Validity Handbook: What is Test Validity? http://www.collegeboard.com/highered/apr/aces/handbook/testvalid.htm.
Ackerman, D. J. and Barnett, W. S. (2006). Prepared for Kindergarten: What Does "Readiness" Mean? Policy report. www.nieer.org/resources/policyreports/report5.pdf.
Adedibu, A. A. (1988). Continuous Assessment in the 6-3-3-4 System of Education. In Akpa, G. O. and Udoh, S. U. (eds), Towards Implementing the 6-3-3-4 System of Education in Nigeria. Jos: Techsource Electronic Press.
Adkins, D. C. (1998). Measurement in Relation to the Educational Process. Educational and Psychological Measurement, 18, 221-240.
Aguele, L. I. (2004). Remediation of Process Errors Committed by Senior Secondary School Students in Sequences and Series. Unpublished Ph.D. Thesis, UNN.
Agwagah, U. N. V. (2000). Teaching Number Bases in Junior Secondary School. Abacus, Journal of the Mathematical Association of Nigeria, 26(1).
Ahman, J. S. and Glock, M. D. (1971). Evaluating Pupils' Growth: Principles of Tests and Measurements. Boston: Allyn and Bacon Inc.
Akukwe, A. C. (1990). School Location and School Type as Factors in Secondary School Mathematics Achievement in Imo State. Unpublished M.Ed. Thesis, University of Nigeria, Nsukka.
Ali, A. and Akubue, A. (1988). An Evaluation of Secondary School Teachers' Performance on Continuous Assessment Practices. In Akpa, G. O. and Udoh, S. U. (eds), Towards Implementing the 6-3-3-4 System of Education in Nigeria. Jos: Techsource Electronic Press.
AllPsych Online (2006). Research Methods: Validity and Reliability. http://allpsych.com/researchmethods/validityreliability.html.
Almy, M. C. (2006). Young Children's Thinking. New York: Teachers College Press, Columbia University.
Amoke, A. O. and Ezike, R. O. (1989). Teaching of Difficult Concepts in Senior Secondary Schools: Longitude and Latitude. In Ohuche, R. and Ali, A. (eds), Teaching Senior Secondary School Mathematics Creatively. Onitsha, Nigeria: Summer Publishers.
Anastasi, A. (1968). Psychological Testing (3rd ed.). New York: Macmillan Publishing Co. Inc.
Ann, A. C., Bill, V., Wilbur, D. M. and Dutton, L. (2006). Mathematics Children Use and Understand. Napa, California: Rattle Ok Publishers.
Ward, A. W. and Murray-Ward, M. (1999). Assessment in the Classroom. New York: Wadsworth Publishing Company.
Arowosegbe, J. O. (1990). The Continuous Assessment Scores as a Predictor of Academic Achievement in the Junior Secondary Certificate Examination in Ondo State. Unpublished M.Ed. Project. Nsukka: University of Nigeria.
Atkinson, R. L., Atkinson, R. C., Smith, E. E., Bem, D. J. and Nolen-Hoeksema, S. (1993). Introduction to Psychology (11th ed.). New York: Harcourt Brace Jovanovich Publishers Inc.
Ausubel, D. P. (1963). The Psychology of Meaningful Verbal Learning. New York: Grune and Stratton.
Ausubel, D. P. (1968b). Symbolization and Symbolic Thought: Response to Furth. Child Development, 30, 997-1001.
Ausubel, D. P., Novak, J. D. and Hanesian, H. (1978). Educational Psychology: A Cognitive View (2nd ed.). New York: Werbel and Peck.
Bailey, T. G. (1994). Linear Measurement in the Elementary School. Arithmetic Teacher, 21, 520-526.
Balow, I. H. (1994). Reading and Computation Ability as Determinants of Problem Solving. Arithmetic Teacher, 11, 18-22.
Baroody, A. J. (1979). The Relationships among the Development of Counting, Number Conservation and Basic Arithmetic Abilities. Dissertation Abstracts International, 39, 6640A-6641A.
Barrack, S. W. (1980). Achievement in and Attitude toward High School Mathematics with Respect to Sex and Socio-economic Status. Dissertation Abstracts International, 41(5), 1812A.
Bearison, D. J. (1975). Induced versus Spontaneous Attainment of Concrete Operations and Their Relationship to School Achievement. Journal of Educational Psychology, 67, 576-580.
Bloom, B. S., Krathwohl, D. R. and Masia, B. B. (1956). Taxonomy of Educational Objectives. Handbook I: Cognitive Domain. New York: Longman Inc.
Bolvin, J. O. (1968). Programmed Instruction in the Schools: An Application of Programming Principles in Individually Prescribed Instruction. In Lange, P. C. (ed.),
Programmed Instruction: Sixty-sixth Yearbook of the National Society for the Study of Education, pp. 217-254. Chicago: University of Chicago Press.
Bordin, E. S. (2000). Psychological Counselling. New York: Appleton-Century-Crofts.
Brogden, H. E. (1996). Variation in Test Validity with Variation in the Distribution of Item Difficulties, Number of Items and Degree of Their Intercorrelation. Psychometrika, 11, 197-214.
Bruner, J. S. (1964). Some Theorems on Instruction Illustrated with Reference to Mathematics. In Hilgard, E. R. (ed.), Theories of Learning and Instruction (pp. 306-335). Chicago: The National Society for the Study of Education.
Bruner, J. S. (1966). Toward a Theory of Instruction. Cambridge, MA: Belknap Press.
Bryan, M. M., Burke, P. J. and Stewart, W. (1999). Correction for Guessing in the Scoring of Pre-tests: Effect upon Item Difficulty and Item Validity Indices. Educational and Psychological Measurement, 12, 45-56.
Burks, H. F. (1968). Manual of Burks' Behaviour Rating Scales. Huntington Beach, California: Arden Press.
Burns, P. C., Roe, B. D. and Ross, E. P. (1988). Teaching Reading in Today's Elementary Schools (4th ed.). Boston: Houghton Mifflin Company.
Call, R. J. and Wiggin, W. A. (1966). Reading and Mathematics. Mathematics Teacher, 54, 149-157.
Campbell, D. P. and Stanley, R. N. (1966). Revision of the Strong Vocational Interest Blank. Personnel and Guidance Journal, 44, 744-749.
Case, R. (2000). A Developmentally Based Theory and Technology of Instruction. Review of Educational Research, 48, 439-463.
David, E. and Flavell, J. H. (1989). Studies in Cognitive Development: Essays in Honour of Jean Piaget. New York and London: Oxford University Press.
Denney, H. R. and Remmers, A. H. (2000). Reliability of Multiple-Choice Measuring Instruments as a Function of the Spearman-Brown Prophecy Formula, II. Journal of Educational Psychology, 31, 699-704.
Derogatis, L. R. (1992). SCL-90-R: Administration, Scoring and Procedures Manual-II. Baltimore, MD: Clinical Psychometric Research.
Dienes, Z. P. (2002). Some Reflections on Learning Mathematics. In Lamon, W. E. (ed.), Learning and the Nature of Mathematics (pp. 51-67). Chicago: Science Research Associates Inc.
Dodwell, P. C. (1998). Relation between the Understanding of the Logic of Classes and of Cardinal Number in Children. Canadian Journal of Psychology, 16, 152-160.
Downs, L. W. and Parking, D. (1958). The Teaching of Arithmetic in Tropical Primary Schools. London: Oxford University Press.
Downie, N. M. and Heath, R. W. (1974). Basic Statistical Methods (4th ed.). New York: Harper and Row Publishers.
Duel, H. J. (1998). Effect of Periodic Self-Evaluation on Student Achievement. Journal of Educational Psychology, 49, 197-199.
Ebegbulem, C. E. (1982). Constructing Tests for Continuous Assessment. Lagos: Macmillan Nigeria Publishers Ltd.
Ebel, R. L. (1961). Writing the Test Item. In Educational Measurement (pp. 185-249). Washington, D.C.: American Council on Education.
Ebel, R. L. (1962). Content Standard Test Scores. Educational and Psychological Measurement, 22, 15-25.
Ebel, R. L. (1967). The Relation of Item Discrimination to Test Reliability. Journal of Educational Measurement, 4, 125-128.
Ebel, R. L. (1968). The Value of Internal Consistency in Classroom Examinations. Journal of Educational Measurement, 5, 71-73.
Ebel, R. L. (1979). Essentials of Educational Measurement (3rd ed.). Englewood Cliffs, N.J.: Prentice-Hall Inc.
Ekele, J. (2002). Development and Standardization of Quadratic Aptitude Test for Upper Primary School Pupils. Unpublished Ph.D. Thesis, University of Nigeria, Nsukka.
Ely, J. H. (2001). Studies in Item Analysis II: Effects of Various Methods upon Test Reliability. Journal of Applied Psychology, 35, 154-203.
Engelhart, M. D. (1972). Methods of Educational Research. Chicago: Rand McNally and Company.
English, H. B. and English, A. C. (1958). A Comprehensive Dictionary of Psychological and Psychoanalytical Terms. London: Longmans.
Ezeife, A. N. (2002). Interactions of Culture and Mathematics in an Aboriginal Classroom. http://ehlt.flinders.edu.au/education/iej/articles/v3n3/Ezeife/paper.pdf.
Ezugwu, G. G. (2006). Development and Validation of an Instrument for Students' Appraisal of Teaching Effectiveness in Colleges of Education. Unpublished Ph.D. Thesis, UNN.
Fan, C. T. (2000). Item Analysis Table. Princeton, N.J.: Educational Testing Service.
Federal Ministry of Education (F.M.E.), Science and Technology (1985). National Curriculum for Junior Secondary Schools. Vol. 1: Science. Ibadan: Heinemann Educational Books (Nigeria) Limited.
Ferguson, R. L. (2002). ACT Mathematics Test. www.universityofcalifornia.edu/regents/regmeet/may02/30act.pdf.
Freyberg, P. S. (1966). Cognitive Development in Piagetian Terms in Relation to School Attainment. Journal of Educational Psychology, 57, 164-187.
Gagne, R. M. (1962). The Acquisition of Knowledge. Psychological Review, 69(4), 355-365.
Gagne, R. M. (1967). Instruction and the Conditions of Learning. In Siegel, L. (ed.), Instruction: Some Contemporary Viewpoints. San Francisco: Chandler Press.
Gagne, R. M. (1968). Learning Hierarchies. Educational Psychologist, 6(1), 3-6.
Georgetown University, Department of Psychology (2006). Research Methods and Statistics Resources. http://www.georgetown.edu/departments/psychology/researchmethods/researchanddesign/validityandreliability.htm.
Gibson, J. T. (1972). Educational Psychology (2nd ed.). Englewood Cliffs, N.J.: Prentice-Hall Inc.
Glossary (2007). www.wrightslaw.com/links/glossary.assessment.htm.
Goldman, L. (1971). Using Tests in Counselling (2nd ed.). Los Angeles: Goodyear Publishing Company Inc.
Goldstein, G. (1992). Handbook of Psychological Assessment (2nd ed.). New York: Pergamon Press.
Gray, E. M. and Tall, D. O. (1999). Duality, Ambiguity, and Flexibility: A "Proceptual" View of Simple Arithmetic. Journal for Research in Mathematics Education, 25, 116-146.
Greenspoon, J. and Gersten, C. D. (1986). A New Look at Psychological Testing: Psychological Testing from the Standpoint of a Behaviourist. American Psychologist, 22, 843-853.
Guilford, J. P. (1954). The Phi Coefficient and Chi Square as Indices of Item Validity. Psychometrika, 6, 11-19.
Guilford, J. P. (1965). Fundamental Statistics in Psychology and Education (4th ed.). New York: McGraw-Hill Book Company.
Harison, C. W. (1944). Factors Associated with Successful Achievement in Problem Solving in Sixth Grade Arithmetic. Journal of Educational Research, 38, 111-118.
Harbor-Peters, V. F. and Ugwu, P. N. (1995). Process Errors Committed by Students in Some Geometric Theorems. Nigeria Research in Education, 7, 141-152.
Harling, P. (1991). 100s of Ideas for Primary Maths: A Cross-Curricular Approach. London: Hodder and Stoughton.
Hawlett, K. D. (2001). A Study of the Relationship between Piagetian Class Inclusion Tasks and the Ability of First Grade Children to do Missing Addend Computation and Verbal Problems. Dissertation Abstracts International, 34, 6259A-6260A.
Hiebert, J. and Carpenter, T. P. (1982). Piagetian Tasks as Readiness Measures in Mathematics Instruction: A Critical Review. Educational Studies in Mathematics, 13, 329-345.
Hildreth, G. E. (1941). The Difficulty Reduction Tendency in Perception and Problem Solving. Journal of Educational Psychology, 32, 305-313. Reprinted in Barnette, W. L., Jr. (ed.) (1968), Readings in Psychological Tests and Measurements. Ontario: The Dorsey Press.
Hilgard, E. R., Atkinson, R. C. and Atkinson, R. L. (2005). Introduction to Psychology (6th ed.). New York: Harcourt Brace Jovanovich Inc.
Hill, M. (1980). Variables Predicting Student Achievement in State Mandated Competence Tests. Dissertation Abstracts International, 41(5), 2015A.
Hill, B. (2001). The Importance of Mathematics in Early Childhood Education. Saint Martin's College, USA: Saint Martin's College Press.
Hinde, R. A. (1970). Animal Behaviour: A Synthesis of Ethology and Comparative Psychology. New York and London. In Encyclopaedia of Psychology, Vol. 3 (Phas to Z). London: Search Press.
Hollins, E. R. (1996). Culture in School Learning: Revealing the Deep Meaning. New Jersey: Lawrence Erlbaum Associates.
Hilton, J. C. and Berglund, G. W. (1998). Sex Differences in Mathematics: A Longitudinal Study. The Journal of Education, XVIII(2), 295-304.
House, J. M. and Remmers, H. H. (1941). Reliability of Multiple-Choice Measuring Instruments as a Function of the Spearman-Brown Prophecy Formula, IV. Journal of Educational Psychology, 32, 372-376.
Human Sciences Research Council (HSRC) (2005). The Wider Mathematics Community. In Smith, A. (2005), Making Mathematics Count. Mathematically Correct, http://mathematicallycorrect.com.
Ibeaja, V. F. A. and Nworgu, B. G. (1992). Mode of Task Presentation and Pupils' Performance on Mathematical Tasks Involving Addition and Subtraction. Abacus, 12, 18.
Ibecker, T. J. (2006). Reliability and Validity, Part II: Validity. http://web.uccs.edu/lbecker/Psy590/relval-11.htm.
IEA (1978). IEA = International Association for the Evaluation of Educational Achievement.
Igbo, J. N. (2004). Effect of Peer Tutoring II on the Mathematics Achievement of Learning-Disabled Children. Unpublished Ph.D. Thesis, University of Nigeria, Nsukka.
Ikeazota, N. N. (2002). Identification and Remediation of Senior Secondary School Students' Process Errors in Quadratic Equations. Unpublished Ph.D. Thesis, University of Nigeria, Nsukka.
Implementation Committee, National Policy on Education (1992). Guidelines on Uniform Standards for the Junior Secondary School Certificate Examinations (Revised). Lagos: Federal Ministry of Education.
Implementation Committee, National Policy on Education (n.d.). Some Questions and Answers on Continuous Assessment. Lagos: Federal Ministry of Education.
Ingule, F. O., Ruthie, C. R. and Ndambuka, P. W. (1996). Introduction to Educational Psychology. Nairobi-Kampala: East African Educational Publishers.
Ipaye, T. (1982). Continuous Assessment in Schools. Ilorin: University of Ilorin Press.
Isinenyi, M. M. (1990). Common Errors Committed by Junior Secondary School Students in Solving Problems Involving Inequality. Unpublished M.Ed. Thesis, University of Nigeria, Nsukka.
Iwuala, U. H. (1988). An Investigation of the Current Continuous Assessment Practices in Anambra State Secondary Schools. Unpublished M.Ed. Project, University of Nigeria, Nsukka.
Johnson, H. C. (2000). The Effect of Instruction in Mathematical Vocabulary upon Problem Solving in Arithmetic. Journal of Educational Research, 38, 97-110.
Johnson, A. P. (1976). Notes on a Suggested Index of Item Validity: The U-L Index. Journal of Educational Psychology, 42, 499-504.
Johnson, O. C. (2004). Tests and Measurement in Child Development: Handbook II, Vol. 1. San Francisco: Jossey-Bass Inc.
Johnson, D. D. (2005). Teaching Children to Read. University of Wisconsin-Madison: Addison-Wesley Publishing Company.
Kalu, J. M. (1990). A Diagnostic Study of SS3 Students' Conceptual Difficulties in Field Physics. Unpublished M.Ed. Thesis, University of Nigeria, Nsukka.
Keislar, E. R. (1997). Shaping of a Learning Set in Reading. Paper presented at the meeting of the American Educational Research Association, Atlantic City.
Kerlinger, F. N. (1973). Foundations of Behavioural Research (2nd ed.). New York: Holt, Rinehart and Winston.

Kersh, B. Y. (1967). Engineering Instructional Sequence for the Mathematics Classroom. In J. M. Scandura (ed.), Research in Mathematics Education. Washington, D.C.: National Council of Teachers of Mathematics.

Kirk, S. A. (2003). Educating the Retarded Child. Boston: Houghton Mifflin Press.

Kissane, B. V. (1996). Selection of Mathematically Talented Students. Educational Studies in Mathematics, 17(3), 221-241.

Kooker, E. W. and Williams, C. S. (1959). College Students' Ability to Evaluate their Performance on Objective Tests. Journal of Educational Research, 53, 69-72.

Kostick, M. N. (1994). A Study of Transfer: Sex Differences in the Reasoning Process. Journal of Educational Psychology, 45(8), 449-458.

Kraw, H. (2007). http://math.arizona.edu/~krawczyk/intro.html

Kuang, H. P. (2002). A Critical Evaluation of the Relative Efficiency of Three Techniques in Item Analysis. Educational and Psychological Measurement, 12, 248-266.

Kuder, G. F. and Richardson, M. W. (1937). The Theory of the Estimation of Test Reliability. Psychometrika, 2, 151-160.

Kurume, M. S. C. (2004). Effects of Ethnomathematics Approach on Students' Achievement and Interest in Geometry and Mensuration. Ph.D. Thesis, University of Nigeria, Nsukka.

Lassa, P. N. and Paling, D. (2003). Teaching Mathematics in Nigerian Primary Schools. Ibadan: University Press Limited.

Lemke, E. and Wiersma, W. (1976). Principles of Psychological Measurement. Chicago: Rand McNally College Publishing Company.

Leslie, W. and Barnette, Jr. (1968). Readings in Psychological Tests and Measurements. Ontario: The Dorsey Press.

Lord, F. M. (1998). The Relation of the Reliability of Multiple-Choice Tests to the Distribution of Item Difficulties. Psychometrika, 7, 181-194.

Lyman, H. B. (1986). Test Scores and What They Mean (4th ed.). Englewood Cliffs, NJ: Prentice-Hall Press.

McNemar, Q. (1962). Psychological Statistics (3rd ed.). New York: Wiley.

Mehrens, W. A. and Lehmann, D. (1993). Standardized Tests in Education. New York: Holt, Rinehart and Winston.
Meisels, S. J. (2002). Assessing Readiness. In Pianta, R. C. and Cox, M. (eds.), The Transition to Kindergarten: Research, Policy, Training, and Practice. Baltimore: The MD Press Limited.

Melnick, G. and Freedland, S. (2002). Arithmetic Concept Individual Test. Curriculum Research and Development Centre in Mental Retardation. New York: Yeshiva University Press Limited.

Melnick, G.; Mischio, G. and Lehrer, B. (2004). Arithmetic Concept Screening Test Manual. Curriculum Research and Development Centre in Mental Retardation. New York: Yeshiva University Press Limited.

Michael, E. R. (2000). Acquisition Order of Number Conservation and Arithmetic Logic of Addition and Subtraction. Dissertation Abstracts International, 37, 4116B-4117B.

Michael, W. B.; Hertzka, A. F. and Perry, N. C. (1953). Errors in Estimates of Item Difficulty Obtained from Use of Extreme Groups on a Criterion Variable. Educational and Psychological Measurement, 42, 499-504.

Mitchelmore, M. C. (1973). Performance in a Modern Mathematics Curriculum. West African Journal of Education, XVII(2), 295-304.

Monsier, C. I. (1987). A Critical Examination of Face Validity. Educational and Psychological Measurement, 7, 191-205.

Murray, J. E. (1999). An Analysis of Geometric Ability. Journal of Educational Psychology, 40, 118-124.

Muscio, R. D. (1962). Factors Related to Quantitative Understanding in the Sixth Grade. Arithmetic Teacher, 9, 258-262.

Nevo, B. (1985). Face Validity Revisited. Journal of Educational Measurement, 22, 289-293.

New Encyclopaedia Britannica (1995). London: Encyclopaedia Britannica Inc., 15th Edition, Vol. ID, p. 243.

Nie, N. H.; Hull, C. H.; Jenkins, J. G.; Steinbrenner, K. and Bent, D. H. (1975). Statistical Package for the Social Sciences (2nd ed.). New York: McGraw-Hill.

Nunnally, J. C. (1981). Psychometric Theory. New Delhi: Tata McGraw-Hill Publishing Company, Ltd.

Nwabuise, E. M. (1986). Factors Affecting the Learning Process. In V. O. C. Nwachukwu (ed.), Educational Psychology. Ibadan: Heinemann Educational Books (Nig.) Ltd.

Nwachukwu, V. C. (1997). Transfer of Learning. In V. O. C. Nwachukwu (ed.), Educational Psychology. Ibadan: Heinemann Educational Books (Nig.) Ltd.
Nworgu, B. G. (1990). Evaluating the Effects of Resource Material Types Relative to Students' Cognitive Achievement, Retention and Interest in Integrated Science. Unpublished Ph.D. Thesis, University of Nigeria, Nsukka.

Obienyem, C. (1998). Identification of Mathematics Readiness Level of Junior Secondary School Class One Students in Anambra State. Unpublished M.Ed. Thesis, University of Nigeria, Nsukka.

Obioma, C. O. and Ohuche, R. O. (1980). Sex and Environment as Factors in Secondary School Mathematics Achievement. Paper presented at the 1980 Annual Conference of the Mathematical Association of Nigeria, Bayero University, Kano, August 9-14.

Odili, J. N. (1991). Relationship between Continuous Assessment Cumulative Scores and Junior School Certificate Examination Results in Integrated Science in Selected LGAs of Bendel State. Unpublished M.Ed. Project, University of Nigeria, Nsukka.

Odo, I. O. (2000). Gender and School Location as Factors of Students' Difficulty in Secondary School Geometry. Unpublished M.Ed. Thesis, University of Nigeria, Nsukka.

Ohuche, R. O. (1990). Explore Mathematics with Your Children. Onitsha: Summer Education Publishers Limited.

Ohuche, R. O. and Akeju, S. A. (1988). Measurement and Evaluation in Education. Onitsha: Africana-Fep Publishers Limited.

Okedera, J. T. (1980). Teacher-Made Tests as a Predictor of Academic Achievement in the Experimental Adult-Literacy Classes in Ibadan, Nigeria. The Counsellor, 3(1 & 2), 41-56.

Okeke, F. N. (1985). Students' Mock WASC/GCE Scores in Science Subjects as Predictors of Their Achievement in WASC/GCE O'Level in Anambra State. Unpublished M.Ed. Project, University of Nigeria, Nsukka.

Okonkwo, S. C. (1998). Development and Validation of a Mathematics Readiness Test for Junior Secondary School Students. Unpublished Ph.D. Thesis, University of Nigeria, Nsukka.

Onibokun, O. M. (1979). Sex Differences in Quantitative and Other Aptitude Scores. ABACUS, 14, 52-58.

Onugwu, J. I. (1991). Identification of Kinds of Errors Secondary School Students Make in Solving Problems in Mathematics. Unpublished M.Ed. Thesis, University of Nigeria, Nsukka.

Ozouche, C. N. (1993). Difficult Areas of Ordinary Level Mathematics for the Secondary Schools. Unpublished M.Ed. Thesis, University of Nigeria, Nsukka.
Payne, S. J. and Squibb, H. R. (1990). Algebra Mal-Rules and Cognitive Accounts of Errors. Cognitive Science, 14, 445-481.

Pennington, B. F.; Wallach, I. and Wallach, M. A. (1980). Non-Conservers' Use and Understanding of Number and Arithmetic. Genetic Psychology Monographs, 101, 231-243.

Piaget, J. (1952). In Martin Hughes (1989), Children and Number: Difficulties in Learning Mathematics. Britain: T. J. Press (Padstow) Limited.

Piaget, J. (1964). Development and Learning. Journal of Research in Science Teaching, 12, 176-186.

Piaget, J. (1979). Concept of Structure. In Scientific Thought: Some Underlying Concepts, Methods and Procedures. Paris: Mouton/UNESCO.

Piotrkowski, E. (2000: 540). In Ackerman, D. J. and Barnett, W. S. (2006). www.nieer.org/resources/policyreports/reports5.pdf.

Plowman, L. and Stroud, J. B. (1992). The Effect of Informing Pupils of the Correctness of Their Responses to Objective Test Questions. Journal of Educational Research, 36, 16-20.

Plumlee, L. B. (2000). The Effect of Difficulty and Chance Success on Item-Test Correlation and on Test Reliability. Psychometrika, 17, 69-86.

Popham, W. J. and Husek, T. R. (1969). Implications of Criterion-Referenced Measurement. Journal of Educational Measurement, 6(1), 1-9.

Raimi, A. (2001). Excerpts from Poor Performance Review. University of Rochester, Washington. http://www.mathematicallycorrect.com/mspap.tm.

Remmers, H. H. and Ewart, E. (2001). Reliability of Multiple-Choice Measuring Instruments as a Function of the Spearman-Brown Prophecy Formula, III. Journal of Educational Psychology, 32, 61-66.

Romberg, T. A. (2006). Current Research in Mathematics Education. Review of Educational Research, 39, 473-491.

Ross, C. C. and Henry, L. K. (1939). The Relation between Frequency of Testing and Progress in Learning Psychology. Journal of Educational Psychology, 30, 604-611.

Rowan University (2006). Instructions for Students: Precalculus Readiness Testing. http://www.rowan.edu/mars/depts/math/readinesstest/instructions%20for%20students…

Samuel, J. M. (2002). Readiness to Learn. www.Gera.org/Library/reports/inquiry-3.
Sawrey, J. M. and Telford, C. W. (1958). Educational Psychology. Boston: Allyn and Bacon, Inc., p. 243.

School Goal Team Report (2006). NC's School Readiness Definition. www.fpg.unc.edu/schoolreadiness/definition.htm.

Scott, Jr., N. C. (1990). ZIP Test: A Quick Locator Test for Migrant Children. Journal of Educational Measurement, 7, 49-50.

Sfard, A. (1991). On the Dual Nature of Mathematical Conceptions: Reflections on Processes and Objects as Different Sides of the Same Coin. Educational Studies in Mathematics, 22, 1-36.

Smith, B. O. (1938). Logical Aspects of Educational Measurement. New York: Columbia University Press.

Smith, M. R. (2005). Making Mathematics Count: The Report of Professor Adrian Smith's Inquiry into Post-14 Mathematics Education. Mathematics for the Citizen, UK.

SPSS Inc. (2000). Statistical Package for the Social Sciences Personal Computer (SPSS/PC), Version 3.0. Chicago, IL: SPSS Inc.

STAN (1974). Journal of STAN, 12(4), 49-52.

Stanley, J. C. (2000). Test: Better Finder of Great Mathematics Talent than Teachers Are. American Psychologist, 31(4), 313-314.

StatSoft (2003). Reliability and Item Analysis. http://www.statsoft.com/textbook/streliab.htm.

Steffe, J. P. (1970). Differential Performance of First Grade Children when Solving Arithmetic Addition Problems. Journal for Research in Mathematics Education, 1, 144-161.

Stein, S. K. (1969). Mathematics: The Man-Made Universe. San Francisco: W. H. Freeman and Company.

STI Annual Report (2006). Selecting Test Items. http://www.edtech.vt.edu/edtech/id/assess/items.html.

Student Success Centre (2006). Readiness Assessment (Placement Test) Information. http://www.ancilla.edu/studentssucess/successfaq.htm.

Sydney, S. L. (1995). En-chanting, Fascinating, Useful Number. Teaching Children Mathematics, 1, 486-491.
Tall, D. D. (2005). The Special Position of Mathematics. In the Report of Adrian Smith's Inquiry into Post-14 Mathematics Education: Mathematics for the Citizen, UK.

Thorndike, R. L. and Hagen, E. P. (1977). Measurement and Evaluation in Psychology and Education (4th ed.). New York: John Wiley and Sons.

Traler, D.; Jacobs, V. L.; Selover, M. O. and Townsend, T. J. (1973). Cognition and Instruction. Hillsdale, NJ: Erlbaum.

Travers, R. M. W. (1951). Rational Hypotheses in the Construction of Tests. Educational and Psychological Measurement, 11, 128-137.

Ubagu, M. K. (1992). Process Errors Committed by Students in Solving Mathematics Problems on Longitude and Latitude. Unpublished M.Ed. Thesis, University of Nigeria, Nsukka.

UCLA (2006). Reliability and Validity. www.ats.ucla.edu/STAT/SPSS/faq/alpha.html.

UCSD (2006). Mathematics Diagnostic Testing Project. The California State University/University of California. http://mdtp.ucsd.edu/onlineTests.shtml.

Udegboka, K. O. (1987). Principles, Theories and Practice of Education. Onitsha: Etukokwu Publishers Ltd.

University of Vermont (MRT Registration Form, 2006). Mathematics Readiness Test Registration Form. http://www.emba.uvm.edu/~burgmeie/mrt.php3.

Unodiaku, S. S. (1998). Analysis of Errors Committed by SS1 Students in Solving Word and Symbolic Problems on Simultaneous Linear Equations. Unpublished M.Ed. Thesis, University of Nigeria, Nsukka.

Usman, K. O. and Harbor-Peters, V. F. A. (1998). Process Errors Committed by Senior Secondary School Students in Mathematics. Journal of Science, Technology and Mathematics Education.

Vande Linder, L. F. (1994). Does the Study of Quantitative Vocabulary Improve Problem Solving? Elementary School Journal, 65, 143-152.

Walter, D. and Nancy, H. (1971). Topics in Measurement: Reliability and Validity. USA: McGraw-Hill Inc.

Warrington, W. G. and Cronbach, L. J. (1952). Efficiency of Multiple-Choice Tests as a Function of the Spread of Item Difficulties. Psychometrika, 17, 127-296.

West African Examinations Council (2000). Chief Examiner's Report. Lagos: West African Examinations Council.
Watson, C. G.; Juba, M. P.; Manifold, V.; Kucala, T. and Anderson, P. E. D. (1991). The PTSD Interview: Rationale, Description, Reliability, and Construct Validity of a DSM-III Based Technique. Journal of Clinical Psychology, 47, 179-188.

Wesman, A. G. (1949). Effect of Speed on Item-Test Correlation Coefficients. Educational and Psychological Measurement, 9, 51-57.

Wiggins, J. S. (1973). Personality and Prediction: Principles of Personality Assessment. Reading, MA: Addison-Wesley.

Wilder, R. L. (2001). Evolution of Mathematical Concepts. London: Transworld Publishers Ltd.

WISC (2006). Summary of Test Statistics: What Do Those Numbers Mean? http://testing.wisc.edu/WhatDoThoseNumbersMean.htm.

Zeidner, M. (1987). Essay versus Multiple-Choice Type Classroom Exams: The Student's Perspective. Journal of Educational Research, 80, 352-358.

Zylber, D. (2000). Didactic Assessment of Mathematics for Preschool Children. In Zylber, D. (2004), The Link between Preschool Mathematical Knowledge and the Changing Cognitive Ability of Analogical Thinking. Master's Degree Thesis, School of Education, Ramat Gan: Bar Ilan University.

Zuriel, P. (2002). Children's Conceptual and Perceptual Analogies Modifiability Test (Second Part). In Zylber (2004), The Link between Preschool Mathematical Knowledge and the Changing Cognitive Ability of Analogical Thinking. Master's Degree Thesis, School of Education, Ramat Gan: Bar Ilan University.

Zuriel, P. and Galinka (1999). Children's Conceptual and Perceptual Analogies Modifiability Test (First Part). In Zylber (2004), The Link between Preschool Mathematical Knowledge and the Changing Cognitive Ability of Analogical Thinking. Master's Degree Thesis, School of Education, Ramat Gan: Bar Ilan University.
APPENDIX: A

National Curriculum for Junior Secondary Schools, Vol. 1: Science. Federal Ministry of Education, Science and Technology, 1985. YEAR 3
A: Number and Numeration

Objectives:
1. Students will be able to apply binary numbers as a two-way classification system using punch cards.
2. Students will have gained competence in applying the basic operations to common and decimal fractions in word problems.

Content: Binary counting system. The punched card: 1 = yes, 0 = no; intersection presented as 'yes, yes'; complement presented as 'no'. The interpretation of word problems into numerical expressions and equations using brackets and fractions.

Activities/Materials: Producing and using simple punch cards. Collecting data on simply-made cards (3 or 4 holes only). Library reading and other activities requiring the use of 'punch tape' (Telex). Students could code their names in binary and write them on strips of paper (to serve as tapes). Translation between numbers and words, e.g. ((2 + 7) − 3)/9 is the same as 'one ninth of the difference between the sum of 2 and 7 and the number 3'; 'from the product of 10 and 7 subtract 24 and then divide the result by 3' is the same as ((10 × 7) − 24)/3.

Remarks: Students are not required to make their own punch cards.
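The punch-card coding and word-translation activities above can be sketched in a few lines of Python. This is an illustration added for clarity, not part of the 1985 curriculum; the 5-bit alphabet code (A = 1 ... Z = 26) is an assumed encoding for the demonstration.

```python
# Illustrative sketch of two activities: coding a name in binary (as on a
# punch tape) and translating a word statement into a numerical expression.

def to_binary_tape(name: str) -> str:
    """Encode each letter's alphabet position (A = 1 .. Z = 26) in 5-bit binary."""
    return " ".join(format(ord(ch) - ord("A") + 1, "05b") for ch in name.upper())

print(to_binary_tape("ADA"))  # 00001 00100 00001

# "From the product of 10 and 7 subtract 24, then divide the result by 3":
result = (10 * 7 - 24) / 3
print(result)
```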
A: Number and Numeration (continued)

Objectives:
3. Students will be able to solve problems involving inverse proportion.
4. Students will be able to: (i) identify non-rational numbers, and (ii) determine the approximate value of some non-rational numbers.
5. Students will be able to use approximation in measurement.

Content: The concept of inverse proportion. Study of applications such as speeds, productivity, consumption, reciprocals and compound interest. Non-rational numbers. Decimal places and significant figures. Problems in mensuration involving volume, area of land, distances, consumer arithmetic, games and athletics timing, etc.

Activities/Materials: Preparation of speed, time and distance tables. Use of ready reckoners. Other practical problems on inverse proportion. Preparation and use of reciprocal tables. Compound interest. Trial-and-error approach to square roots. Experiments with circles to obtain, by graphs, the constant C/D, i.e. π. Some historical approaches, e.g. Archimedes' approximation of π; applying Pythagoras' Theorem to the diagonals of a unit square. Interpretation of data such as population. Rounding off in multiplication and addition to a reasonable degree of accuracy. Calculation using standard form, e.g. (1.36 × 10^-5) × (2.43 × 10^0).

Remarks: Compound interest should be considered on a yearly basis with the use of the formula. This should be related to the pupils' work in Science and Geography. Relate calculation using standard form to the use of a calculating machine.
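Two of the activities above, inverse proportion and the trial-and-error square root, can be sketched as follows; the worker/days numbers are illustrative, not taken from the syllabus.

```python
# Illustrative sketch: an inverse-proportion calculation and a
# trial-and-error square root, mirroring the activities described.

def inverse_proportion(x1: float, y1: float, x2: float) -> float:
    """If y is inversely proportional to x, then x1*y1 = x2*y2, so y2 = x1*y1/x2."""
    return x1 * y1 / x2

# 8 workers take 6 days; 12 workers take 8*6/12 = 4 days.
print(inverse_proportion(8, 6, 12))  # 4.0

def trial_sqrt(n: float, places: int = 2) -> float:
    """Trial-and-error square root, correct to the given number of decimal places."""
    step = 10 ** -places
    guess = 0.0
    while (guess + step) ** 2 <= n:
        guess += step
    return round(guess, places)

print(trial_sqrt(2))  # 1.41
```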
B: Algebraic Processes

Objectives:
1. Students will be able to factorize algebraic expressions.
2. Students will be able to solve simple equations involving fractions.
3. Students will be able to solve simultaneous linear equations in two variables: (i) graphically, and (ii) by calculation.
4. Students will be able to solve problems involving variation.
5. Students will be able to change the subject of a formula.

Content: Factorization of expressions of the form a^2 − b^2; 3a − cb − 3b + ac; a^2 + 2ab + b^2. Solution of simple equations involving fractions. Graphical treatment of simultaneous linear equations. Simultaneous linear equations of the form x + 3y = 5, 2x + y = 7. Direct variation: y = kx; inverse variation: y = k/x; partial variation: y = kx + c; joint variation: y = kxc. Change of subject of a formula.

Activities/Materials: Working problems on factorization with numerical examples. Solve a variety of simple equations involving fractions, with practical applications to word problems. Construct tables of values and use the tables to draw two linear graphs on the same axes. Solution of simultaneous equations by standard methods. Application to word problems. Solve a variety of problems, e.g. (i) if 1 packet of sugar costs x kobo, what will be the cost of 20 packets? (ii) speed and time problems. Exercises involving change of formula, e.g. if 2x + y = d, express x in terms of y and d.

Remarks: Note its use for rapid calculations. The intersection of the two lines is the solution of the two linear equations. When the two lines are parallel, there is no simultaneous solution. Encourage checking of the accuracy of answers by substitution. The students should see these as examples of relationships between two variables.
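The "solution by calculation" of the syllabus example x + 3y = 5, 2x + y = 7 can be sketched as below; the elimination helper is an illustration, not a method prescribed by the syllabus.

```python
# Sketch of solution by elimination for a pair of simultaneous linear
# equations a1*x + b1*y = c1 and a2*x + b2*y = c2.

def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        # Parallel lines: no unique simultaneous solution, as the remarks note.
        raise ValueError("lines are parallel: no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

x, y = solve_2x2(1, 3, 5, 2, 1, 7)
print(x, y)  # 3.2 0.6

# Check accuracy by substitution, as the remarks recommend.
assert abs(x + 3 * y - 5) < 1e-9 and abs(2 * x + y - 7) < 1e-9
```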
C: Geometry and Mensuration

Objectives:
1. Students will be able to draw views and plans of common solids.
2. Students will be able to identify similar figures.
3. Students will be able to compare lengths, areas and volumes of similar figures.
4. Students will be able to determine the sine, cosine and tangent of an acute angle.
5. Students will be able to solve further problems on areas.

Content: Views, plans and sketches of the cube, cone, cuboid, cylinder and sphere. Similar shapes: triangles, rectangles, squares, cubes and cuboids. Enlargements and scale factors. Use of the scale factor to calculate lengths, areas and volumes in practical problems. The sine, cosine and tangent of an acute angle. Use of similar right-angled triangles. Areas of triangles, parallelograms, trapeziums and circles.

Activities/Materials: Use models of solids to identify and draw their plans and views. Freehand sketches of objects from different angles should also be included. Compare angles and sides of similar figures by measurement, sliding, rotation or tracing. Identify corresponding sides and angles. Examples like the pinhole camera could serve as illustration. Find the ratio of corresponding sides, areas and volumes as appropriate. Practical examples leading to calculation of lengths, areas and volumes of similar objects. Determine the values of sine, cosine and tangent of acute angles from ratios of appropriate sides. Application to finding distances and lengths in practical problems. Use examples arising from physical or technical situations and other everyday problems, e.g. concentric circles, figures related to metalwork or woodwork, road signs, roofing, tiling.

Remarks: Note its use for rapid calculations. Note that: 1. in similar figures, (i) corresponding angles are equal, and (ii) the ratio of corresponding sides is a constant; 2. all squares are similar and all cubes are similar. It should be possible to solve the problems without using logarithm tables. Note that lengths may be calculated using trigonometric ratios.
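The scale-factor facts in the remarks above follow from the constant ratio of corresponding sides: areas scale by k squared and volumes by k cubed, and the trigonometric ratios come from sides of a right-angled triangle. A brief sketch, with all numbers made up for the demonstration:

```python
import math

# For similar figures with scale factor k, areas scale by k**2 and
# volumes by k**3.

def scaled_area(area: float, k: float) -> float:
    return area * k ** 2

def scaled_volume(volume: float, k: float) -> float:
    return volume * k ** 3

print(scaled_area(10, 2))    # 40: doubling lengths quadruples the area
print(scaled_volume(5, 3))   # 135: tripling lengths multiplies volume by 27

# Tangent of an acute angle from opposite and adjacent sides (6 and 8):
print(math.degrees(math.atan(6 / 8)))  # about 36.87 degrees
```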
C: Geometry and Mensuration (continued)

Objectives:
6. Students will be able to perform constructions, using a pair of compasses and a ruler.

Content: Bisection of a line segment. Bisection of an angle. Construction of angles of size 90°, 60°, 45°, 30°. Copying a given angle.

Activities/Materials: Bisection of line segments and angles using compasses and a straight edge. Checking accuracy of construction by measurement or paper folding. Applications to constructing triangles and related figures.

D: Everyday Statistics

Objectives:
1. Students should have consolidated their knowledge of statistical presentations and concepts.

Content: Revision of earlier work and further examples. Mean, median, mode and range.

Activities/Materials: Students should suggest and investigate further relevant situations statistically. Students should be led to deduce that: 1. the arithmetic sum of the deviations from the mean is zero; 2. the product of the mean and the number of items is equal to the total sum of the items. Students should also consider distributions with the same mean and different ranges, and distributions with the same range but different means. The position of the mode relative to the extreme values should also be considered, along with possible explanations for its position.

Remarks: It should be possible to do all examples without using an assumed mean and grouped data.
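The two deductions above hold for any set of scores; a minimal check with an illustrative set:

```python
# Minimal check of the two statistical deductions on illustrative scores.
scores = [2, 4, 6, 8, 10]
mean = sum(scores) / len(scores)

# 1. The arithmetic sum of the deviations from the mean is zero.
deviations = [x - mean for x in scores]
print(sum(deviations))  # 0.0

# 2. The product of the mean and the number of items equals the sum of the items.
print(mean * len(scores) == sum(scores))  # True
```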
APPENDIX: B

Distribution of Sampled Subjects

Schools were stratified by location (urban/rural), education zone (Nsukka/Obollo) and school type (boys/girls/mixed), and from each sampled school the numbers of students shown were drawn.

| Stratum | Zone | School type | School | Male | Female | Total | Drawn (M) | Drawn (F) | Drawn (Total) |
| Urban | Nsukka | Boys | A | 401 | – | 401 | 12 | – | 12 |
| Urban | Obollo | Boys | B | 341 | – | 341 | 10 | – | 10 |
| Urban | Nsukka | Girls | C | – | 370 | 370 | – | 11 | 11 |
| Urban | Obollo | Girls | D | – | 341 | 341 | – | 10 | 10 |
| Urban | Nsukka | Mixed | P | 271 | 200 | 471 | 8 | 6 | 14 |
| Urban | Nsukka | Mixed | E | 275 | 202 | 477 | 7 | 8 | 15 |
| Urban | Nsukka | Mixed | F | 250 | 340 | 590 | 7 | 10 | 17 |
| Urban | Obollo | Mixed | G | 200 | 537 | 737 | 6 | 16 | 22 |
| Urban | Obollo | Mixed | H | 401 | 270 | 671 | 12 | 8 | 20 |
| Rural | Nsukka | Boys | I | 251 | 201 | 452 | 7 | 6 | 13 |
| Rural | Obollo | Boys | J | 401 | – | 401 | 12 | – | 12 |
| Rural | Nsukka | Girls | K | 351 | – | 351 | 10 | – | 10 |
| Rural | Obollo | Girls | L | – | 402 | 402 | – | 12 | 12 |
| Rural | Obollo | Girls | M | – | 353 | 353 | – | 10 | 10 |
| Rural | Nsukka | Mixed | N | 202 | 272 | 474 | 6 | 8 | 14 |
| Rural | Obollo | Mixed | O | 201 | 302 | 503 | 6 | 9 | 15 |
| Rural | Nsukka | Mixed | Q | 303 | 250 | 553 | 9 | 7 | 15 |
| Rural | Obollo | Mixed | R | 640 | 504 | 1144 | 19 | 15 | 34 |
| Rural | Obollo | Mixed | S | 538 | 553 | 1091 | 17 | 16 | 33 |

Totals (as given in the source): 69, 28, 19 (school counts); 148 male and 152 female students drawn, 300 in all.
APPENDIX: C

Mathematics Readiness Test (MATHRET), Second Version

Time: 1½ hours. Class: SS1

Instructions: You are not allowed to write on the question paper. You shall return the question paper along with your answer script. Write your name, sex (i.e. male or female), school and class very clearly on your answer script. Show clearly the processes you use in solving the following questions. Answer all the questions. Do not start answering the questions until you are told to start.
1. Translate the following statement into a numerical expression: subtract the sum of nineteen and three from the product of nineteen and three, and divide the result by seven.

2. Find one-sixth of the difference between the sum of sixteen and eleven and the number three.

3. Write down the value of x, if 0.0000218 = 2.18 × 10^x.

4. Write down the approximate value of 0.046487 to two significant figures.

5. Multiply 1.12 by 0.11 and leave your answer in standard form.

6. A quantity of food took 40 students 15 days to consume. How many days will it take 25 students to consume the same quantity of food?

7. Write down the factors of x^2 − y^2.

8. Solve for x: 122 x 1 3x

9. Use a calculation method to solve the simultaneous linear equations:
x − 3y = 10
2x − y = 15

10. Y varies inversely as the square root of x. If x = 100 when y = 8, write down the equation connecting x and y.

11. Make t the subject of the formula: R = 35t
12. Use the plans drawn in (a) and (b) below to sketch a cuboid and a cone respectively.

(a) [Side view and top view drawn. Complete the cuboid.]

(b) [Side view and top view drawn. Complete the plan of the cone and draw its side and top views in the spaces provided.]

13. Identify the figures that are similar, giving two reasons.

14. How many lines of symmetry does the figure drawn below have?

[Figures for questions 13 and 14: Fig. 1 (angles 80°, 40°, 60°), Fig. 2 (angles 40°, 60°, 80°), Fig. 3 (angles 40°, 50°), Fig. 4, Fig. 5 (angles 60°, 30°), and a plan figure.]
15. Calculate the value of x in the diagram below.

16. Using the diagram drawn below, find the side marked y.

17. Calculate the angle marked in the diagram below.

18. Write down four major steps in copying ∠ABC shown below.

19. Identify the angle constructed in the diagram above.

20. Two similar rectangles have a pair of corresponding sides in the ratio 3 : 7. If 270 cm² is the area of the first rectangle, find the area of the second rectangle.

21. Two similar cuboids have heights 6 cm and 3 cm. If the volume of the second cuboid is 1800 cm³, calculate the volume of the first cuboid.

22. Calculate the volume of a cone whose height is 4.2 cm and whose base diameter is 5 cm. (Take π = 22/7.)

23. Find the value of n if the mean of the scores 9, 10, 7, n, 3, 9 and 0 is 8.

[Diagrams for questions 15-19: a right-angled triangle with hypotenuse 10 cm, an angle of 30° and a side marked x; a right-angled triangle with an angle of 60°, a side of 16 cm and a side marked y; a right-angled triangle with sides 6 and 8 and a marked angle A; and an angle ∠ABC.]
16624. Calculate the median of the following set of scores.
2.5, 4.5, 3.1, 2, 4.2, 2.5, 3.9, 4.3.
The following are the examination marks scored by 15 students. Use the
information to answer questions 25 and 26.
5, 8, 0, 4, 5, 7, 8, 5, 11, 4, 7, 0, 4, 5, 7.
25. Calculate the modal mark.
26. Find the range of the distribution.
27. A bag contains 35 ripe fruits and 25 unripe ones. Calculate the
probability of selecting an unripe fruit.
28. In a class of 21 girls and 42 boys, what is the probability of selecting a
girl as prefect of the class?
29. The figure drawn below shows the family spending in a day.
Which food item is the least expensive?
30. The figure shown below represents the population of students in the junior
section of a school.
Which section has the greatest number of students?
[Pie charts: family spending on Yam, Rice, Beans, Meat and Potato; school population in JS I, JS II and JS III, with one sector marked 130°]
APPENDIX: D
INTERNAL CONSISTENCY RELIABILITY ESTIMATE OF THE MATHRET
Item  Pass  Fail  p      q      pq
1     73    37    .6636  .3364  .2232
2     80    30    .7273  .2727  .1983
3     79    31    .7182  .2818  .2024
4     75    35    .6818  .3182  .2169
5     82    28    .7455  .2545  .1897
6     74    36    .6727  .3273  .2202
7     78    32    .7091  .2909  .2063
8     81    29    .7364  .2636  .1941
9     77    33    .7     .3     .21
10    75    35    .6818  .3182  .2169
11    83    27    .7545  .2455  .1852
12    85    25    .7727  .2273  .1756
13    69    41    .6273  .3727  .2338
14    79    31    .7182  .2818  .2024
15    72    38    .6545  .3455  .2261
16    92    18    .8364  .1636  .1368
17    90    20    .8182  .1818  .1487
18    94    16    .8545  .1455  .1243
19    91    19    .8273  .1727  .1429
20    93    17    .8455  .1545  .1306
21    89    21    .8090  .1910  .1545
22    78    32    .7091  .2909  .2063
23    80    30    .7273  .2727  .1983
24    69    41    .6273  .3727  .2338
25    79    31    .7182  .2818  .2024
26    77    33    .7     .3     .21
27    79    31    .7182  .2818  .2024
28    81    29    .7364  .2636  .1941
29    74    36    .6727  .3273  .2202
30    78    32    .7091  .2909  .2063
n = 30; s² = 46.92; x̄ = 80.2; s = 6.85; Σpq = 5.8127
Reliability of MATHRET for JSS 3 students using
KR-20 = [k/(k − 1)][1 − Σpq/S²],
where N = number of subjects, k = number of items,
p = proportion of the examinees who pass each item,
q = (1 − p) = proportion of the examinees who do not pass each item, and
S² = variance of the total test scores.
Here, N = 110, k = 30, x̄ = 80.2, s = 6.85, S² = 46.92.
KR-20 = (30/29)(1 − 5.8127/46.92)
= (1.034)(1 − 0.1239)
= (1.034)(0.8761)
= 0.91
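The KR-20 computation above can be sketched in Python (a minimal illustration; the variable names are ours, and the pass counts are those tabulated for the 30 items):

```python
# KR-20 reliability from the item pass counts tabulated above.
# pass_counts[i] = number of the 110 examinees who passed item i+1.
pass_counts = [73, 80, 79, 75, 82, 74, 78, 81, 77, 75,
               83, 85, 69, 79, 72, 92, 90, 94, 91, 93,
               89, 78, 80, 69, 79, 77, 79, 81, 74, 78]

N = 110                 # number of examinees
k = len(pass_counts)    # number of items (30)
s_squared = 46.92       # variance of the total test scores (as reported)

p = [c / N for c in pass_counts]          # proportion passing each item
sum_pq = sum(pi * (1 - pi) for pi in p)   # sum of p*q over items (close to 5.8127)

kr20 = (k / (k - 1)) * (1 - sum_pq / s_squared)
print(round(kr20, 2))   # → 0.91
```

The same routine, fed each subscale's pass counts and variance, reproduces the subscale estimates in Appendix E.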
APPENDIX: E
INTERNAL CONSISTENCY RELIABILITY ESTIMATES OF THE MATHRET
SUBSCALE 1: NUMAP
Item  Pass  Fail  p      q      pq
1     92    18    .8364  .1636  .1368
2     90    20    .8182  .1818  .1487
3     94    16    .8545  .1455  .1243
4     91    19    .8273  .1727  .1429
5     93    17    .8455  .1545  .1306
6     89    21    .8090  .1910  .1545
k = 6; s² = 3.5; x̄ = 91.5; Σpq = .8378
KR-20 = (6/5)(1 − 0.8378/3.5)
= 1.2 (1 − 0.2394)
= 1.2 (0.7606)
= 0.91
SUBSCALE 2: CAMPUS
Item  Pass  Fail  p      q      pq
1     73    37    .6636  .3364  .2232
2     80    30    .7273  .2727  .1983
3     79    31    .7182  .2818  .2024
4     75    35    .6818  .3182  .2169
5     82    28    .7455  .2545  .1897
6     74    36    .6727  .3273  .2202
7     78    32    .7091  .2909  .2063
8     81    29    .7364  .2636  .1941
9     77    33    .7     .3     .21
10    75    35    .6818  .3182  .2169
11    83    27    .7545  .2455  .1852
12    85    25    .7727  .2273  .1756
13    69    41    .6273  .3727  .2338
14    79    31    .7182  .2818  .2024
15    72    38    .6545  .3455  .2261
Σpq = 3.1011
k = 15; x̄ = 77.47; s² = 19.84
KR-20 = (15/14)(1 − 3.1011/19.84)
= (1.071)(1 − 0.1563)
= 1.071 (0.8437)
= 0.90
SUBSCALE 3: MACOPS
Item  Pass  Fail  p      q      pq
1     78    32    .7091  .2909  .2063
2     80    30    .7273  .2727  .1983
3     69    41    .6273  .3727  .2338
4     79    31    .7182  .2818  .2024
5     77    33    .7     .3     .21
6     79    31    .7182  .2818  .2024
7     81    29    .7364  .2636  .1941
8     74    36    .6727  .3273  .2202
9     78    32    .7091  .2909  .2063
Σpq = 1.8738
k = 9; x̄ = 78.55; s = 3.66; s² = 13.40
KR-20 = (9/8)(1 − 1.8738/13.40)
= 1.125 (1 − 0.1398)
= 1.125 (0.8602)
= 0.97
APPENDIX: F
Outline of content/objective tested and item analysis data on the second
version of the Mathematics Readiness Test (MATHRET)

Item No  Content/objective tested                                           D    P    Obj.
Number and Numeration
1   Interpretation of word problems into numerical expressions              .74  .66  K
2   Writing down word problems into numerical expressions                   .7   .72  Ucp
3   Approximation in measurement; calculation using standard form           .64  .71  K
4   Approximation in measurement and significant figures                    .66  .68  K
5   Calculation using standard form                                         .17  .74  Ucp
6   Solving problems involving inverse proportion                           .73  .67  Ucp
Algebraic processes
7   Factorization of algebraic expressions                                  .17  .7   K
8   Simple equations involving fractions                                    .68  .74  Ucp
9   Using calculation method to solve problems on simultaneous linear
    equations in two variables                                              .73  .7   Ucp
10  Knowledge of variation                                                  .16  .68  K
11  Change of subject of a formula                                          .68  .75  Ucp
Geometry and mensuration
12  Drawing views and plans of common solids                                .61  .77  Ucp
13  Identification of similar figures                                       .66  .63  K
14  Identification of similar shapes                                        .63  .72  Ucp
15  Calculation of lengths of triangles                                     .62  .65  Ucp
16  Length of right-angled triangles                                        .68  .84  Ucp
17  Angle associated with right-angled triangle                             .68  .82  Ucp
18  Knowledge of construction                                               .71  .85  K
19  Idea of construction                                                    .73  .83  K
20  Comparison of estimates involving areas of similar figures              .62  .81  Ucp
21  Comparison of estimates involving volumes of similar figures            .68  .81  Dm
22  Calculation of volume of solid figures                                  .67  .71  Dm
Everyday Statistics
23  Concept of mean                                                         .73  .73  Ucp
24  Concept of median                                                       .62  .63  Ucp
25  Concept of mode                                                         .66  .72  Ucp
26  Idea of range                                                           .72  .7   Ucp
27  Concept of probability                                                  .61  .72  Dm
28  Idea of probability                                                     .66  .74  Dm
29  Knowledge of pie-chart                                                  .18  .87  K
30  Concept of pie-chart                                                    .69  .71  Ucp
P = (NR/NT) × 100,
where P = percentage of pupils who answered the test item correctly,
NR = number of pupils who answered the test item correctly, and
NT = total number of pupils who attempted the test item.
Discriminating index (D) = (Ru − Rl)/N,
where Ru = the number of upper achievers who scored the item correctly,
Rl = the number of lower achievers who scored the item correctly, and
N = the number in the group.
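These two indices can be sketched as simple functions (an illustration only; the function names and the sample counts below are ours, not figures from the study):

```python
def facility(n_correct, n_attempted):
    """P: percentage of pupils who answered the item correctly."""
    return n_correct / n_attempted * 100

def discrimination(r_upper, r_lower, group_size):
    """D: (Ru - Rl) / N, for equal-sized upper and lower groups."""
    return (r_upper - r_lower) / group_size

# Illustration: 77 of 110 pupils answered correctly; in the two extreme
# groups of 27, 25 upper and 11 lower achievers got the item right.
print(facility(77, 110))                          # → 70.0
print(round(discrimination(25, 11, 27), 2))       # → 0.52
```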
APPENDIX: G
Analysis of MATHRET Items. Class: SS1
Computation of the item difficulty (ID) and discriminating index (DI)
of the first version of the MATHRET.

S/N  ID   DI   RMK
1    .19  .13  X
2    .57  .6   √
3    .61  .74  √
4    .59  .7   √
5    .68  .64  √
6    .74  .66  √
7    .12  .17  X
8    .61  .73  √
9    .69  .71  √
10   .73  .68  √
11   .59  .73  √
12   .4   .21  X
13   .62  .6   √
14   .66  .63  √
15   .78  .61  √
16   .59  .66  √
17   .36  .42  X
18   .58  .68  √
19   .57  .68  √
20   .68  .71  √
21   .58  .73  √
22   .56  .62  √
23   .62  .68  √
24   .71  .53  √
25   .19  .26  X
26   .49  .67  √
27   .61  .73  √
28   .54  .62  √
29   .72  .66  √
30   .15  .29  X
31   .59  .72  √
32   .71  .61  √
33   .69  .58  √
34   .58  .66  √
35   .68  .69  √
36   .67  .62  √

Key: √ = good item, X = bad item.

P = (NR/NT) × 100, where P = percentage of pupils who answered the test item correctly, NR = number of pupils who answered the test item correctly, and NT = total number of pupils who attempted the test item.
Discriminating index (D) = (Ru − Rl)/N, where Ru = the number of upper achievers who scored the item correctly, Rl = the number of lower achievers who scored the item correctly, and N = the number in the group.
APPENDIX: H
Analysis of MATHRET Items. Class: SS1
Computation of the item difficulty (ID) and discriminating index (DI)
of the first version of the MATHRET.

S/N  ID   DI   RMK
1    .68  .62  √
2    .56  .5   √
3    .59  .72  √
4    .55  .61  √
5    .64  .62  √
6    .71  .63  √
7    .61  .55  √
8    .59  .7   √
9    .65  .72  √
10   .71  .65  √
11   .58  .74  √
12   .51  .6   √
13   .6   .55  √
14   .62  .6   √
15   .71  .59  √
16   .52  .63  √
17   .47  .44  √
18   .59  .69  √
19   .54  .65  √
20   .64  .68  √
21   .57  .71  √
22   .55  .6   √
23   .6   .67  √
24   .7   .5   √
25   .58  .44  √
26   .49  .56  √
27   .6   .72  √
28   .52  .63  √
29   .47  .5   √
30   .51  .6   √

Key: √ = good item.

Item difficulty (ID) = (Ru + Rl)/2N and
Discriminating index (DI) = (Ru − Rl)/N, where
Ru = the number of upper achievers who scored the item correctly,
Rl = the number of lower achievers who scored the item correctly, and
N = the number in the group.
APPENDIX: I
Names of schools sampled for the study.
1. St. Theresa's College, Nsukka, denoted by A
2. Boys' Secondary School, Orba, denoted by B
3. St. Cyprian's Girls' Secondary School, Nsukka, denoted by C
4. Girls' Secondary School, Obolo-Afor, denoted by D
5. Model Secondary School, Nsukka, denoted by E
6. Community Secondary School, Obimo, denoted by F
7. Community Secondary School, Obolo-Afor, denoted by G
8. Onward International School, Nsukka, denoted by H
9. Oxford Secondary School, Obolo-Afor, denoted by I
10. Boys' Secondary School, Aku, denoted by J
11. Boys' Secondary School, Ibagwa-Aka, denoted by K
12. Girls' Secondary School, Aku, denoted by L
13. Girls' Secondary School, Imiliki, denoted by M
14. Community Secondary School, Ohebe-dim, denoted by N
15. Community Secondary School, Ede-Oballa, denoted by P
16. Community Secondary School, Umunko, denoted by O
17. Community Secondary School, Amala, denoted by Q
18. In-land Secondary School, Opi, denoted by R
19. Model Secondary School, Orba, denoted by S
APPENDIX: J
Mathematics Readiness Test (MATHRET) skills.
A. Comprehending skills:
a) Ability to understand the concept of inverse proportion.
b) Ability to identify similar figures
c) Ability to identify lines of symmetry
d) Ability to understand the procedure of construction/bisection of angle
e) Ability to understand when to use formula appropriately
f) Ability to identify the quantity of objects from figure drawn.
B. Process skills:
g) Ability to add numbers
h) Ability to multiply
i) Ability to subtract numbers
j) Ability to divide numbers
k) Ability to write numbers in standard form.
l) Ability to approximate numbers to a given number of significant figures.
m) Ability to factorize algebraic expression
n) Ability to multiply expression
o) Ability to add expressions.
p) Ability to sketch solid objects
q) Ability to find unknown side of a right-angled triangle.
r) Ability to find unknown angle of a right-angled triangle
s) Ability to compare areas/volumes of similar figures.
t) Ability to compare angles in a figure drawn.
u) Ability to balance equation.
C. Transformation skill:
(v) Ability to translate word problems into numerical expressions.
D. Carelessness skill:
(w) Ability to always write down values or expressions of which one has mastery.
E. Encoding skill:
(x) Ability to draw accurately or write down the values in the diagrams
accordingly.
(y) Ability to write down the answers correctly (or with the appropriate signs,
where necessary).
The Mathematics Readiness Test process skills, the number of points
allocated to each level of skill, and the maximum frequency of the points
(errors) committed by individual students:
Comprehending skill --- 6 points
Transformation skill --- 1 point
Process skill --- 15 points
Carelessness skill --- 1 point
Encoding skill --- 2 points
Total --- 25 points
Maximum frequency of the 25 points (errors) per script of a student
= 59 (see marking scheme, Appendix K).
Maximum frequency of the Errors for the 300 students
= 59 × 300 = 17700.
APPENDIX: K
Solution (MATHRET)
1.
7319319
MI Transf (v)
= 7
2257 M1 process (h)
M1 process (g)
= 735 M1 process (j)
= 5 M1 process (j)
A1 Encod (y) 6 marks
2. 6
31116 M1 Transf. (v)
= 6
323 M1 process (g)
= 624 M1 process (j)
= M1 process (j)
= 4 A1 Encod (y) 5 marks
3. 0.0000218 = 2.18 × 10⁻⁵
2.18 × 10⁻⁵ = 2.18 × 10^x  M1 process (k)
x = −5  A1 Encod. (y)  2 marks
4. 0.046  M1 process (l)
A1 Encod (y)  2 marks
5. 100001232
10011
100112
= 0.1232  M1 process (h); M1 process (j)
= 1.232 × 10⁻¹  M1 process (k)
A1 Encod (y)  4 marks
6. 40 students = 15 days;
25 students = ?
(40 × 15)/25 = 24 days.  M1 process (h); M1 compr
A1 Encod. (y)  3 marks
7. (x − y)(x − y)  M1 process (m)
A1 Encod (y)  2 marks
8. L.C.M. of x − 12 and x − 1 is (x − 12)(x − 1).
2/(x − 12) = 3/(x − 1)
2(x − 1) = 3(x − 12)
2x − 2 = 3x − 36  M1 process (n)
2x − 3x = 2 − 36  M1 process (u)
x = 34  A1 Encod (y)  3 marks
9. x − 3y = 10 ………………… equ (1)
2x − y = 15 ………………… equ (2)
(1) × 1: x − 3y = 10 ………………… equ (3)  M1 process (n)
(2) × 3: 6x − 3y = 45 ………………… equ (4)  M1 process (n)
(4) − (3): 5x = 35  M1 process (i); M1 process (o)
x = 35/5 = 7  M1 process (j)
Substitute 7 for x in equ (1) to get
7 − 3y = 10
−3y = 10 − 7  M1 process (u)
−3y = 3
y = 3/(−3) = −1  M1 process (j)
A1 Encod. (y)  8 marks
10. y ∝ 1/x, so y = k/x, where k is a constant.
8 = k/10
k = 80  M1 process (u)
y = 80/x  A1 Encod. (y)  2 marks
11. R = √(5t/3); R² = 5t/3 (by squaring both sides)
3R² = 5t  M1 process (u)
t = 3R²/5  A1 Encod. (y)  2 marks
12. (a)  M1 process (p); A1 Encod. (x)
(b)  M1 process (p); A1 Encod. (x)
Side view is a rectangle  M1 Process (p); A1 Encod. (x)
Top view is a circle without centre  M1 Process (p); A1 Encod. (x)
8 marks
13. Figures (1) and (2) are similar triangles because their:
(i) corresponding angles are equal;  M1 Compr (b)
(ii) corresponding sides are proportional.  A1 Encod (y)  2 marks
14. 5 lines of symmetry
M1 Compr (c)  1 mark
15. sin 30° = 10/x  M1 process (n)
x = 10/sin 30°  M1 process (q)
= 10/(1/2)  M1 process (j)
x = 20 cm  A1 Encod (y)  4 marks
16. cos 60° = y/16
y = 16 cos 60°  M1 process (n)
= 16 × ½  M1 process (q); M1 process (h)
= 8 cm  A1 Encod (y)  4 marks
17. tan θ = 6/8
= 0.75  M1 process (j)
θ = tan⁻¹(0.75)  M1 process (r)
θ = 37°  A1 Encod (y)  3 marks
18. (i) Draw BC  M1 Compr (d)
(ii) With B as centre, describe an arc to cut at C and B  M1 compr (d)
(iii) With the same radius, put the point at C and cut again at B  M1 compr (d)
(iv) Join AB  A1 Encod (y)  4 marks
19. Angle = 45°  A1 Encod (y)  1 mark
20. Ratio of areas = 3² : 7² = 9 : 49  M1 process (j); M1 process (h)
Area of the second rectangle = 49/9 × 270  M1 process (j); M1 process (h)
= 1470 cm²  A1 Encod (y)  5 marks
21. The scale factor is 6 : 3  M1 process (j)
= 2 : 1
Vol. of the first cuboid = (2)³ × vol. of the second cuboid  M1 process (s)
= 8 × 1800  M1 process (h)
= 14400 cm³  A1 Encod (y)  4 marks
22. Volume of cone = ⅓πr²h  M1 compr (e)
r = d/2 = 5/2 = 2.5
= ⅓ × (22/7) × (2.5)² × 4.2  M1 process (s)
= ⅓ × (22/7) × 6.25 × 4.2  M1 process (h)
= 27.5 cm³  A1 Encod (y)  4 marks
23. (22 + 7n)/7 = 8  M1 process (g)
22 + 7n = 8 × 7
7n = 56 − 22  M1 process (h)
7n = 34  M1 process (i)
n = 34/7  M1 process (g)
= 4.86  A1 Encod. (y)  5 marks
24. Ordered scores: 2, 2.5, 2.5, 3.1, 3.9, 4.2, 4.3, 4.5
Median = (3.1 + 3.9)/2  M1 process (g)
= 7/2  M1 process (j)
= 3.5  A1 Encod (y)  3 marks
25. From the frequency table, 5 is the score with the highest frequency (4).
The modal mark is 5  A1 Encod (y)  1 mark
26. The range of the distribution is 11 − 0 = 11.
M1 process (i); M1 compr (e); A1 Encod (y)  3 marks
27. Total number of fruits in the bag = 35 + 25 = 60 M1 process (g)
A1 Encod (y)
Marks  Tally  Frequency
0      ||     2
4      |||    3
5      ||||   4
7      |||    3
8      ||     2
11     |      1
Prob. of selecting unripe fruits = 25/60  M1 process (j)
= 5/12  A1 Encod (y)  4 marks
28. Total number of students = 21 + 42 = 63  M1 process (g)
Prob. of selecting a girl = 21/63 = 1/3  M1 process (j)
A1 Encod (y)  3 marks
29.
From the above figure, meat is the least expensive M1 Compr (f)
A1 Encod (y) 2 marks
30. From the above figure, JS II = 90°
JS III = 130°; JS I = 360° − 130° − 90° = 140°  M1 process (j); M1 process (t)
JS I has the greatest number of students  A1 Encod. (y)  3 marks
N.B: a, b, c, d, …, y represent the Mathematics Readiness Test
(MATHRET) skills (see Appendix J). M1 and A1 are the marks/points allotted to
the skills accordingly. Total frequency of marks/errors = 59.
APPENDIX: L
Standard deviation of raw scores (x and y) of errors committed by 148 male
and 152 female students respectively, as measured by MATHRET.
S/N  Male (x)  Key  Female (y)  Key  x²  y²
1 58 X 58 X 3364 3364
2 58 X 23 3354 784
3 58 X 56 X 3481 3136
4 29 58 X 841 3136
5 56 X 29 3136 841
6 56 X 59 X 2916 3136
7 56 X 23 3481 784
8 56 X 59 X 3481 3249
9 28 59 X 784 3481
10 58 X 28 3136 784
11 59 X 59 X 3481 3136
12 58 X 59 X 3364 3136
13 28 23 784 784
14 59 X 59 X 3481 3136
15 58 X 57 X 3364 3249
16 56 X 22 3136 784
17 59 X 57 X 3025 3249
18 23 56 X 784 3136
19 59 X 56 X 3481 3136
20 28 √ 23 784 784
21 57 x 59 X 3249 3481
22 28 57 X 784 2704
23 22 57 X 784 3136
24 58 X 58 X 3364 3364
25 28 59 X 784 3025
26 58 X 58 X 3364 3364
27 57 X 59 X 3249 3481
28 56 X 58 X 3136 3364
29 29 57 X 841 3249
30 58 X 57 X 3364 3249
31 56 X 59 X 2809 3481
32 23 58 X 676 3136
33 22 55 X 729 3025
34 59 X 59 X 3025 3481
35 56 X 58 X 3025 3364
36 28 56 X 784 3136
37 56 X 29 3136 841
38 29 58 X 841 3364
39 57 X 59 X 3249 3481
40 59 X 28 3249
3481
784
41 57 X 23 3249 729
42 57 X 58 X 3249 3364
43 56 X 29 3236 841
44 58 X 59 X 3364 3481
45 56 X 58 X 3136 3364
46 58 X 59 X 3364 3025
47 59 X 58 X 3481 3364
48 58 X 22 3364 784
49 59 X 59 X 3481 3025
50 56 X 58 X 3136 3364
51 58 X 23 3364 729
52 52 X 59 X 2704 3481
53 23 56 X 784 2704
54 57 X 58 X 3136 3364
55 59 X 53 X 3025 2809
56 59 X 54 X 2809 2916
57 59 X 59 X 3481 3481
58 58 X 56 X 3249 2916
59 28 59 X 784 3481
60 56 X 57 X 3136 3249
61 28 58 X 784 3364
62 22 58 X 841 3025
63 58 X 22 3364 784
64 29 59 X 841 3136
65 58 X 56 X 3025 3136
66 23 57 X 676 3249
67 22 58 X 841 3364
68 57 X 23 3249 784
69 56 X 58 X 3136 3364
70 59 X 29 X 3025 841
71 23 f.R 59 X 729 3481
72 56 X 59 X 3136 3481
73 22 28 729 784
74 28 59 X 784 3481
75 58 X 57 X 3364 3249
76 59 X 27 2916 729
77 57 X 59 X 3249 3136
78 23 58 X 784 3025
79 28 58 X 784 3364
80 23 58 X 676 3136
81 58 X 58 X 2809 3364
82 29 59 X 841 3481
83 59 X 23 2704 625
84 56 X 23 2809 676
85 27 58 X 729 3364
86 34 X 59 X 900 3481
87 23 58 X 676 3364
88 27 57 X 729 3249
89 22 56 X 676 3136
90 55 X 59 X 3481 3481
91 58 X 28 3025 784
92 56 X 23 3136 729
93 59 X 59 X 2601 3481
94 59 X 29 3481 841
95 58 X 58 X 3364 3364
96 28 57 X 784 3249
97 22 59 X 729 3481
98 23 56 X 729 3136
99 59 X 59 X 3136 3481
100 28 58 X 784 3364
101 29 59 X 841 3481
102 23 56 X 784 3481
103 29 59 X 841 3481
104 28 58 X 784 3364
105 59 X 59 X 3481 3481
106 58 X 29 3025 841
107 59 X 59 X 3481 3481
108 28 59 X 784 3025
109 58 X 59 X 3364 3481
110 59 X 58 X 2916 3364
111 22 55 X 484 3025
112 23 58 X 676 3364
113 59 X 58 X 3249 3364
114 29 53 X 841 2809
115 29 58 X 841 3364
116 23 58 X 784 3364
117 28 56 X 784 3136
118 26 58 X 676 3364
119 22 58 X 676 3364
120 24 59 X 576 3025
121 28 23 784 729
122 28 58 X 784 3364
123 55 X 59 X 3025 3481
124 28 58 X 784 3364
125 49 X 59 X 1936 3481
126 48 X 48 X 3364 3364
127 55 X 57 X 3025 3249
128 28 57 X 784 3249
129 49 X 59 X 2025 3481
130 58 X 55 X 3364 3025
131 52 X 28 2704 784
132 58 X 56 X 3364 3136
133 58 X 23 3025 784
134 56 X 56 X 3136 3136
135 35 X 58 X 900 3364
136 27 28 729 784
137 23 59 X 676 3481
138 59 X 58 X 3481 3364
139 58 x 59 X 3364 3136
140 56 x 57 X 3136 3025
141 54 X 57 X 3481 3249
142 57 X 55 X 3249 3025
143 58 X 56 X 3364 3136
144 55 X 58 X 2401 3364
145 25 59 X 841 3136
146 57 x 59 X 3249 3364
147 59 x 23 3481 676
148 51 X 56 X 2916 3136
149 59 X 3481
150 58 X 3364
151 56 X 3136
152 57 X 3249
Key: √ = ready; f.R = fairly ready; x = not ready.
Total errors committed by male students = 6668
Total errors committed by female students = 7840
Total number of students 'ready' = 38
Total number of students 'fairly ready' = 47
Total number of students 'not ready' = 215
Percentage of students 'ready' = 38/300 × 100 = 12.67%
Percentage of students 'fairly ready' = 47/300 × 100 = 15.67%
Percentage of students 'not ready' = 215/300 × 100 = 71.66%
Standard Deviation (S.D.) of error scores (x) made by male students:
Σx = 6668; Σx² = 333571; N = 148; x̄ = 45.05
S.D. = √(Σx²/N − x̄²) = √(333571/148 − (45.05)²) = √(2253.86 − 2029.50) = √224.36 = 14.98
S1² = 224.40
Standard Deviation (S.D.) of error scores (y) made by female students:
Σy = 7840; Σy² = 418389; N = 152; ȳ = 51.58
S.D. = √(Σy²/N − ȳ²) = √(418389/152 − (51.58)²) = √(2752.56 − 2660.50) = √92.06 = 9.59
S2² = 91.97
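The shortcut variance formula S.D. = √(Σx²/N − x̄²) used in these computations can be sketched as follows (variable names are ours; carrying the unrounded mean gives 14.97, a hundredth below the 14.98 obtained from the rounded mean):

```python
# S.D. via the shortcut formula sqrt(sum(x^2)/N - mean^2),
# using the totals reported above for the 148 male students.
sum_x = 6668       # total errors committed by male students
sum_x2 = 333571    # sum of squared error scores
n = 148

mean = sum_x / n                    # close to the reported 45.05
variance = sum_x2 / n - mean ** 2   # Σx²/N − x̄²
sd = variance ** 0.5
print(round(mean, 2), round(sd, 2))   # → 45.05 14.97
```

Feeding in the female totals (Σy = 7840, Σy² = 418389, N = 152) reproduces the 9.59 obtained above.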
APPENDIX: M
Standard deviation of raw scores (x and y) of errors committed by urban and
rural students respectively, as measured by MATHRET.

S/N  Urban (x)  Rural (y)  x²  y²  S/N  Urban (x)  Rural (y)  x²  y²
1 49 55 2401 3025 40 54 50 2916 2500 2 55 52 2025 2704 41 55 53 3025 2809 3 28 51 784 2601 42 53 48 2809 2304 4 47 57 2209 3249 43 46 53 2116 2809 5 41 58 1681 3364 44 29 50 841 2500 6 56 59 3131 3481 45 48 57 2304 3249 7 49 56 2401 3136 46 47 55 2209 3025 8 29 55 841 3025 47 53 51 2809 2601 9 47 54 2209 2916 48 49 59 2401 3481 10 29 53 841 2809 49 55 49 3025 2401 11 51 50 2601 2500 50 28 46 784 2116 12 56 54 3136 2916 51 49 58 2401 3364 13 37 55 1369 3025 52 28 59 784 3481 14 48 52 2304 2704 53 29 56 841 3136 15 50 51 2500 2601 54 47 57 2209 3249 16 53 52 2809 2704 55 48 53 2304 2809 17 55 50 3025 2500 56 34 48 1156 2304 18 27 54 729 2916 57 56 56 3136 3136 19 54 49 2916 2401 58 41 54 1681 2916 20 26 58 676 3364 59 27 57 729 3249 21 55 56 3025 3136 60 42 41 1764 1681 22 43 53 1849 2809 61 23 34 529 1156 23 52 56 2704 3136 62 59 50 3481 2500 24 53 54 2809 2916 63 54 47 2916 2209 25 55 52 3025 2704 64 52 49 2704 2401 26 49 56 2401 3136 65 54 51 2916 2601 27 37 59 1369 3481 66 51 53 2601 2809 28 49 52 2401 2704 67 29 28 841 784 29 41 48 1681 2304 68 50 43 2500 1849 30 43 44 1849 1936 69 43 49 1849 2401 31 53 55 2809 3025 70 28 55 784 3025 32 40 50 1600 2500 71 59 50 3481 2500 33 53 53 2809 2809 72 47 56 2209 3136 34 55 51 3025 2601 73 51 51 2601 2601 35 28 48 784 2304 74 23 42 529 1764 36 49 56 2401 3136 75 54 51 2916 2601
37 40 59 1600 3481 76 28 59 784 3481
201 38 52 47 2704 2209 77 59 54 3481 2916 39 51 54 2601 2916 78 55 43 3025 1849 79 43 38 1849 1444 119 57 54 3249 2916 80 42 55 1764 3025 120 23 54 529 2916 81 49 51 2401 2601 121 59 56 3481 3136 82 47 46 2209 2116 122 48 53 2304 2809 83 48 56 2304 3136 123 50 52 2500 2704 84 47 28 2209 784 124 52 58 2704 3364 85 51 52 2601 2704 125 43 50 1849 2500 86 53 54 2809 2916 126 51 43 2601 1849 87 29 38 841 144 127 26 46 676 2116 88 59 30 3481 900 128 22 59 484 3481 89 28 46 784 2116 129 42 56 1764 90 55 52 3025 2704 130 39 53 1521 91 50 38 2500 1444 131 28 56 784 92 49 46 2401 2116 132 37 55 1369 93 52 49 2704 2401 133 22 59 484 94 29 33 2401 1089 134 41 54 1681 95 42 59 1764 3481 135 27 59 729 96 58 52 3364 2704 136 33 53 1089 97 27 46 729 2116 137 50 55 2500 98 46 51 2116 2601 138 43 51 1849 99 49 58 2401 3364 139 28 55 784 100 56 50 3136 2500 140 27 59 729 101 53 54 2809 2916 141 36 58 1296 102 41 54 1681 2916 142 28 53 784 103 49 47 2401 2209 143 45 54 2916 104 53 42 2809 1764 144 58 55 3025 105 57 43 3249 1849 145 55 3025 106 56 56 3136 3136 146 59 3481 107 48 51 2304 2601 147 57 3249 108 53 49 2809 2401 148 59 3481 109 42 58 1764 3364 149 58 3364 110 53 56 2809 3136 150 53 2809 111 40 51 1600 2601 151 59 3236 112 28 53 784 2809 152 56 3136 113 28 52 784 2704 153 59 3481 114 57 59 3249 3481 154 58 3364 115 37 48 1369 2304 155 59 3481 116 49 57 2401 3249 156 58 3364 117 55 52 3025 2709 118 50 50 2500 2500
Total errors committed by urban students = 6395
Total errors committed by rural students = 8113
Standard deviation (S.D.) of error scores (x₂) made by urban students:
N₂ = 144; Σx₂ = 6395; Σx₂² = 308672; x̄₂ = 44.41
S.D. = √(Σx₂²/N₂ − x̄₂²) = √(308672/144 − (44.41)²) = √(2143.56 − 1972.22) ≈ √171.35
S₂ = 13.09
S₂² = 171.35
Standard deviation (S.D.) of error scores (x₁) made by rural students:
N₁ = 156; Σx₁ = 8113; Σx₁² = 442496; x̄₁ = 52.01
S.D. = √(Σx₁²/N₁ − x̄₁²) = √(442496/156 − (52.01)²) = √(2836.51 − 2705.04) = √131.47 ≈ 11.47
S₁ = 11.47
S₁² = 131.56
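As a cross-check, the two standard deviations can be recomputed from the summary statistics alone. This is a minimal illustrative sketch, not part of the original analysis; it uses full-precision means, so the last digit may differ slightly from the hand-rounded values above.

```python
import math

def sd_from_summary(n, sum_x, sum_x2):
    """Population standard deviation from N, the sum of scores, and the sum of squared scores."""
    mean = sum_x / n
    variance = sum_x2 / n - mean ** 2
    return mean, math.sqrt(variance)

mean_urban, sd_urban = sd_from_summary(144, 6395, 308672)   # ≈ 44.41, 13.09
mean_rural, sd_rural = sd_from_summary(156, 8113, 442496)   # ≈ 52.01, 11.48
```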
t_cal = (x̄₁ − x̄₂) / √(S₁²/n₁ + S₂²/n₂)
      = (52.01 − 44.41) / √(131.56/156 + 171.35/144)
      = 7.60 / √(0.84 + 1.19)
      = 7.60 / 1.42
      ≈ 5.35
APPENDIX: N
Standard deviation of raw scores (x and y) of errors committed by students in public and private schools respectively, as measured by the MATHRET.
S/N  Public (x)  Private (y)  x²  y²  |  S/N  Public (x)  Private (y)  x²  y²
1 58 59 3364 3481 38 52 58 2704 3364 2 59 27 3481 729 39 56 59 3136 3481 3 28 56 784 3136 40 27 59 729 3481 4 54 59 2916 3481 41 58 28 3364 784 5 58 29 3364 841 42 28 59 784 3481 6 28 58 784 3364 43 58 59 3364 3481 7 59 26 3481 676 44 27 58 729 3364 8 58 59 3364 3481 45 56 28 3136 784 9 28 58 784 3364 46 28 59 784 3481 10 29 27 841 729 47 58 58 3364 3364 11 28 58 784 3364 48 53 27 2809 729 12 58 59 3364 3481 49 29 59 841 3481 13 29 26 841 676 50 28 57 784 3249 14 28 59 784 3481 51 55 59 3025 3481 15 28 59 784 3481 52 57 58 3249 3364 16 57 58 3249 3364 53 53 59 2809 3481 17 58 27 3364 729 54 28 58 784 3364 18 29 59 841 3481 55 54 59 2916 3481 19 28 58 784 3364 56 53 27 2809 729 20 29 59 841 3481 57 29 59 841 3481 21 28 56 784 3136 58 28 57 784 3249 22 58 25 3364 625 59 54 26 2916 676 23 54 59 2916 3481 60 59 59 3481 3481 24 55 59 3025 3421 61 54 57 2916 3249 25 59 59 3481 3481 62 53 28 2809 784 26 28 58 784 3364 63 59 58 3481 3364 27 58 29 3364 841 64 29 59 841 3249 28 28 59 784 3481 65 28 59 784 3481 29 52 57 2704 3249 66 27 59 729 3481 30 51 28 2601 784 67 29 58 841 3364 31 28 59 784 3481 68 55 59 3025 3481 32 27 59 729 3481 69 28 29 784 841 33 58 28 3364 784 70 27 59 729 3481 34 28 59 784 3481 71 29 29 841 841 35 29 29 841 841 72 55 58 3025 3364 36 52 58 2704 3364 73 28 57 784 3249 37 58 59 3364 3481 74 54 59 2916 3481
205 75 52 57 2704 3249 116 28 784 76 29 58 841 3364 117 59 3481 77 28 59 784 3481 118 52 2704 78 54 57 2916 3249 119 28 784 79 28 58 784 3364 120 29 841 80 52 59 784 3381 121 51 2601 81 28 58 784 3364 122 58 3364 82 59 59 3481 3481 123 52 2704 83 56 58 3136 3364 124 59 3481 84 58 29 3364 841 125 28 784 85 28 58 784 3364 126 29 841 86 52 59 2704 3481 127 28 784 87 58 59 3364 3481 128 50 2500 88 27 57 729 3249 129 59 3481 89 57 57 3249 3249 130 28 784 90 52 58 2704 3364 131 54 2916 91 58 57 3364 3249 132 59 3481 92 29 29 841 841 133 28 784 93 54 59 2916 3481 134 52 2704 94 27 58 729 3364 135 27 728 95 28 28 784 784 136 55 3025 96 59 58 3481 3364 137 51 2601 97 58 28 3364 784 138 50 2500 98 56 57 3136 3249 139 52 2704 99 55 58 3025 3364 140 51 2601 100 29 55 841 3025 141 53 2809 101 26 28 3136 784 142 55 3025 102 57 54 3249 2916 143 54 2916 103 53 59 2809 3481 144 57 3249 104 55 28 3025 784 145 28 784 105 29 57 841 3249 146 54 2916 106 28 55 784 3025 147 58 3364 107 54 58 2916 3364 148 59 3481 108 53 56 2809 3136 149 58 3364 109 54 50 2916 250 150 58 3364 110 53 2809 151 57 3249 111 51 2601 152 59 3481 112 28 784 153 28 784 113 59 3481 154 59 3481 114 58 3364 155 58 3364 115 29 841 156 53 2809
206 157 57 3249 158 59 3481 159 54 2916 160 58 3364 161 54 2916 162 58 3364 163 58 3364 164 58 3364 165 54 2916 166 53 2809 167 54 2916 168 53 2809 169 54 2916 170 56 3136 171 54 2916 172 57 3249 173 55 3025 174 57 3249 175 25 625 176 52 2704 177 57 3249 178 58 3364 179 53 2809 180 58 3364 181 54 2916 182 58 3364 183 52 2704 184 58 3364 185 55 3025 186 52 2704 187 57 3249 188 54 2916 189 55 3025 190 57 3249 191 57 3249 x = 8964, 2x = 452640, y 5544, 2y = 299594 Number of x = 186 Number of y = 144 Mean (x) = 49.55 Mean (y) = 46.42 Standard deviation (S.D or errors made by students in Public schools.
S.D. = √(Σx²/N − (Σx/N)²) = √(452640/186 − 2322.62) = √(2433.55 − 2322.62) = √110.93 ≈ 10.53
S₁² = 110.93
Standard deviation (S.D.) of errors made by students in private schools:
S.D. = √(Σy²/N − (Σy/N)²) = √(299594/114 − 2365.03) = √(2628.02 − 2365.03) = √262.99 ≈ 16.22
S₂² = 262.99
t_cal = (x̄ − ȳ) / √(S₁²/n₁ + S₂²/n₂)
      = (49.55 − 46.42) / √(110.93/186 + 262.99/114)
      = 3.13 / √(0.60 + 2.31)
      = 3.13 / 1.71
      ≈ 1.83
Total frequency of errors committed by students from public schools = 8964. Number of students from public schools = 186. Mean (x̄) = 49.55. Total frequency of errors committed by students from private schools = 5544. Number of students from private schools = 114. Mean (ȳ) = 46.42.
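The public/private comparison above can be recomputed in Python from the summary statistics. This is an illustrative sketch, not part of the original analysis; it reproduces the printed value up to intermediate rounding.

```python
import math

def t_from_summary(mean1, var1, n1, mean2, var2, n2):
    """t statistic for two independent means, computed from summary statistics only."""
    standard_error = math.sqrt(var1 / n1 + var2 / n2)
    return (mean1 - mean2) / standard_error

t_cal = t_from_summary(49.55, 110.93, 186, 46.42, 262.99, 114)   # ≈ 1.84 (1.83 with hand-rounded steps)
```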
APPENDIX: O
TOTAL FREQUENCY OF ERRORS COMMITTED ON MATHRET SKILLS.
Errors category | Skill to be mastered | Total freq. of errors committed on the skill | Freq. of errors males committed | Freq. of errors females committed | Freq. of errors urban students committed | Freq. of errors rural students committed | Freq. of errors private students committed | Freq. of errors public students committed
A Comprehending skills A 469 198 271 187 282 190 353 B 478 209 269 213 265 217 335 C 465 202 263 218 247 204 333 D 615 213 402 237 378 245 352 E 546 235 311 226 320 242 386 F 537 225 312 236 301 241 378 B Process skill G 627 294 333 283 344 294 415 H 652 307 345 300 352 289 445 I 635 170 465 193 442 210 449 J 809 305 504 328 481 337 554 K 528 230 298 247 281 238 372 L 532 241 291 228 304 236 378 M 519 246 273 244 275 232 369 n 620 228 392 231 389 241 461 o 503 231 272 220 283 221 356 p 612 253 359 364 248 373 321 q 579 248 331 241 338 251 410 r 509 321 278 239 270 243 348 s 566 275 291 286 280 289 359 t 490 235 255 243 247 248 324 u 618 299 319 285 333 251 394 C Transformation Skills
v 558 430 128 397 161 388 170 D Carelessness Skill
w 459 248 211 232 227 239 220 E Encoding Skill x 593 368 547 345 248 315 242
y 989 547 442 521 468 547 442 5292 9216
APPENDIX: P
Outline of content/objectives tested and item analysis data on the first version of the Mathematics Readiness Test (MATHRET).
Item No  Content/objective tested  D  P  Obj

Number and Numeration
1 Interpretation of word problems into numerical expressions .71 .64 K
2 Interpretation of word problems into numerical expressions .11 .16 K
3 Writing down word problems as numerical expressions .7 .72 Ucp
4 Writing down the value of an expression in numerical terms .18 .15 K
5 Approximation in measurement; calculation using standard form .64 .71 K
6 Approximation in measurement; significant figures .66 .68 K
7 Calculation using standard form .17 .74 Ucp
8 Solving problems involving inverse proportion .73 .67 Ucp

Algebraic processes
9 Factorization of algebraic expressions .17 .7 K
10 Simple equations involving fractions .68 .74 Ucp
11 Factorization of algebraic expressions .17 .19 Ucp
12 Using the calculation method to solve problems on simultaneous linear equations in two variables .73 .7 Ucp
13 Knowledge of variation .16 .68 K
14 Change of subject of a formula .68 .75 Ucp

Geometry and mensuration
15 Drawing views and plans of common solids .61 .77 Ucp
16 Identification of similar figures .66 .63 K
17 Identification of similar shapes .63 .72 Ucp
18 Calculating lengths of triangles .62 .65 Ucp
19 Lengths of right-angled triangles .68 .84 Ucp
20 Angle associated with a right-angled triangle .68 .82 Ucp
21 Knowledge of construction .71 .85 K
22 Idea of construction .73 .83 K
23 Comparison of estimates involving areas of similar figures .62 .85 Ucp
24 Comparison of estimates involving volumes of similar figures .68 .81 DM
25 Calculation of volumes of similar figures .67 .71 DM
Everyday Statistics
26 Concept of mean .73 .73 Ucp
27 Idea of mean .21 .19 Ucp
28 Concept of median .62 .63 Ucp
29 Concept of mode .66 .72 Ucp
30 Idea of range .72 .7 Ucp
31 Concept of probability .61 .72 DM
32 Idea of probability .66 .74 DM
33 Knowledge of pie-chart .18 .67 K
34 Concept of pie-chart .69 .71 Ucp
35 Idea of mode .2 .24 Ucp
36 Concept of probability .18 .22 DM
P = (NR/NT) × 100, where P = percentage of pupils who answered the test item correctly, NR = number of pupils who answered the test item correctly, and NT = total number of pupils who attempted the test item.

Discrimination index (D) = (RU − RL)/N, where RU = the number of upper achievers who answered the item correctly, RL = the number of lower achievers who answered the item correctly, and N = the number in each group.
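The two indices can be sketched directly from their definitions. The item counts below are hypothetical, chosen only to illustrate the formulas:

```python
def difficulty(n_correct, n_attempted):
    """Item difficulty P: percentage of pupils who answered the item correctly."""
    return 100 * n_correct / n_attempted

def discrimination(upper_correct, lower_correct, group_size):
    """Discrimination index D = (RU - RL) / N for equal-sized upper and lower groups."""
    return (upper_correct - lower_correct) / group_size

# Hypothetical item: 70 of 80 pupils correct overall;
# 25 of the 27 upper achievers and 6 of the 27 lower achievers got it right.
p = difficulty(70, 80)           # 87.5
d = discrimination(25, 6, 27)    # ≈ 0.70
```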
APPENDIX: S
Internal consistency reliability estimate of the first version of the MATHRET.
Item Pass Fail p q pq
1 70 10 .875 .125 .1093
2 68 12 .85 .15 .1275
3 71 9 .8875 .1125 .0998
4 65 15 .8125 .1875 .1523
5 60 20 .75 .25 .1875
6 72 8 .9 .1 .09
7 69 11 .8625 .1375 .1185
8 65 15 .8125 .1875 .1523
9 70 10 .875 .125 .1093
10 74 6 .925 .075 .0694
11 66 14 .825 .175 .1443
12 69 11 .8625 .1375 .1185
13 70 10 .875 .125 .1093
14 65 15 .8125 .1875 .1523
15 60 20 .75 .25 .1875
16 68 12 .85 .15 .1275
17 67 13 .8385 .1625 .1362
18 58 22 .8625 .1375 .1185
19 74 6 .925 .075 .0694
20 65 15 .8125 .1875 .1523
21 70 10 .875 .125 .1093
22 71 9 .8875 .1125 .0998
23 72 8 .9 .1 .09
24 64 16 .8 .2 .16
25 70 10 .875 .125 .1093
26 59 21 .7375 .2625 .1936
27 61 19 .7625 .2375 .1881
28 58 22 .8625 .1375 .1185
29 68 12 .85 .15 .1275
30 66 14 .825 .175 .1443
31 70 10 .875 .125 .1093
32 60 20 .8125 .1875 .1523
33 71 9 .8875 .1125 .0998
34 74 6 .925 .075 .0694
35 67 13 .8385 .1625 .1362
36 70 10 .875 .125 .1093
K = 36, X̄ = 67.14, S.D. = 4.64, S² = 21.55, Σpq = 4.6251
Subscale 1: NUMAP
Item Pass Fail p q pq
1 68 12 .85 .15 .1275
2 65 15 .8125 .1875 .1523
3 63 17 .7875 .2125 .1673
4 67 13 .8385 .1625 .1362
5 70 10 .875 .125 .1093
6 69 11 .8625 .1375 .1185
7 66 14 .825 .175 .1443
8 65 15 .8125 .1875 .1523
K = 8, X̄ = 66.63, S = 2.33, S² = 5.41, Σpq = 1.1077
KR-20 = [K/(K − 1)][1 − Σpq/S²]
      = (8/7)(1 − 1.1077/5.41)
      = (8/7)(1 − 0.2048)
      = (1.14)(0.7952)
      ≈ 0.91
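The KR-20 computation above can be checked with a short function; the inputs are the NUMAP summary values from the table, and this sketch is only a verification aid, not part of the original analysis:

```python
def kr20(k, sum_pq, variance):
    """Kuder-Richardson formula 20: (K / (K - 1)) * (1 - sum(pq) / S^2)."""
    return (k / (k - 1)) * (1 - sum_pq / variance)

r_numap = kr20(8, 1.1077, 5.41)   # ≈ 0.91
```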
Subscale 2: CAMPUS
Item Pass Fail p q pq
1 72 8 .9 .1 .09
2 64 16 .8 .2 .16
3 66 14 .825 .175 .1443
4 70 10 .875 .125 .1093
5 70 10 .875 .125 .1093
6 68 12 .85 .15 .1275
7 65 15 .8125 .1875 .1523
8 67 13 .8385 .1625 .1362
9 68 12 .85 .15 .1275
10 60 20 .75 .25 .1875
11 69 11 .8625 .1375 .1185
12 70 10 .875 .125 .1093
13 71 9 .8875 .1125 .0998
14 68 12 .85 .15 .1275
15 59 21 .7375 .2625 .1936
16 66 14 .825 .175 .1443
17 69 11 .8625 .1375 .1185
K = 17, X̄ = 67.18, S = 3.59, S² = 12.90, Σpq = 2.2554
KR-20 = [K/(K − 1)][1 − Σpq/S²]
      = (17/16)(1 − 2.2554/12.90)
      = (1.0625)(1 − 0.1748)
      = (1.0625)(0.8252)
      ≈ 0.88
Subscale 3: MACOBS
Item Pass Fail p q pq
1 70 10 .875 .125 .1094
2 69 11 .8625 .1375 .1185
3 63 17 .7875 .2125 .1673
4 68 12 .85 .15 .1275
5 68 12 .85 .15 .1275
6 65 15 .8125 .1875 .1523
7 70 10 .875 .125 .1093
8 64 16 .8 .2 .16
9 67 13 .8385 .1625 .1362
10 66 14 .825 .175 .1443
11 68 12 .785 .125 .1093
K = 11, X̄ = 67.09, S = 2.34, S² = 5.49, Σpq = 1.4616
KR-20 = [K/(K − 1)][1 − Σpq/S²]
      = (11/10)(1 − 1.4616/5.49)
      = (1.1)(1 − 0.2662)
      = (1.1)(0.7338)
      ≈ 0.81
APPENDIX: T
Mathematics Readiness Test (MATHRET), First Version.
Time: 1½ hr.  Class: SS 1
Instructions
You are not allowed to write on the question paper. You shall return the
question paper along with your answer script. Write your name, sex (i.e male or
female), school and class very clearly on your answer script. Show clearly the
processes you use in solving the following questions. Answer all the questions. Do
not start answering the questions until you are told to start.
1. Translate the following statement into a numerical expression: subtract the sum of nineteen and three from the product of nineteen and three, and divide the result by seven.
2. Write down the product of three multiplied by one-fifth and two.
3. Find one-sixth of the difference between the sum of sixteen and eleven and
the number three.
4. Write down the value of x, if 2.1x = 5.8 × 10.
5. Write down the value of x, if 0.0000218 = 2.18 × 10^x.
6. Write down the approximate value of 0.046487 to two significant figures.
7. Multiply 1.12 by 0.11 and leave your answer in standard form.
8. A quantity of food took 40 students 15 days to consume. How many days
will it take 25 students to consume the same quantity of food?
9. Write down the factors of x² − y².
10. Solve for x: (2x − 1)/3 = (x + 1)/2
11. Write down the factors of x² + y².
12. Use calculation method to solve the simultaneous linear equations.
x- 3y =10
2x –y = 15
13. y varies inversely as the square-root of x. If x = 100 when y = 8, write down
the equation connecting x and y.
14. Make t the subject of the formula R = 35t
15. Use the plans drawn in (a) and (b) below to sketch cuboid and cone
respectively.
(a) __________________
(Complete the cuboid)
(b) Draw (side view)  Draw (top view)
(Note: complete the plan of the cone and draw its side and top views in the
spaces provided).
16. Identify figures that are similar, giving two reasons.
[Figures (1)–(5): plane shapes with marked angles — 40°, 60°, 80° (Figs. 1 and 2); 40°, 50° (Fig. 3); 60°, 30° (Figs. 4 and 5).]
17. How many lines of symmetry does the figure drawn below have?
18. Calculate the value of x in the diagram below.
19. Using the diagram drawn below, find the side marked y.
20. Calculate the angle marked θ in the diagram below.
21. Write down four major steps in copying ∠ABC shown below.
22. Identify the angle constructed in the diagram above.
[Diagrams for items 18–22, showing: a triangle with sides 6 and 8 and unknown x; a right-angled triangle with hypotenuse 16 cm, a 60° angle and unknown side y; triangle ACB with a 10 cm side, a 30° angle and the angle θ to be found; and an angle ABC for the construction items.]
23. Two similar rectangles have a pair of corresponding sides in the ratio 3 : 7. If 270 cm² is the area of the first rectangle, find the area of the second rectangle.
24. Two similar cuboids have heights 6 cm and 3 cm. If the volume of the second cuboid is 180 cm³, calculate the volume of the first cuboid.
25. Calculate the volume of a cone whose height is 4.2 cm and the diameter of whose base is 5 cm. (Take π = 22/7.)
26. Find the value of n if the mean of the scores 9, 10, 7, n, 3, 9 and 0 is 8.
27. Find n if the mean of the scores 7, 0, 3, n, 4, 10 and 5 is 6.
28. Calculate the median of the following set of scores.
2.5, 4.5, 3.1, 2, 4.2, 2.5, 3.9, 4.3.
The following are the examination marks scored by 15 students. Use the
information to answer questions 29 and 30.
5, 8, 0, 4, 5, 7, 8, 5, 11, 4, 7, 0, 4, 5, 7.
29. Calculate the modal mark.
30. Find the range of the distribution.
31. A bag contains 35 ripe fruits and 25 unripe ones. Calculate the probability of selecting an unripe fruit.
32. In a class of 21 girls and 42 boys, what is the probability of selecting a girl as prefect of the class?
33. The figure drawn below shows a family's spending in a day. Which food item is the least expensive?
[Pie chart with sectors: Yam, Rice, Beans, Meat, Potato.]
34. The figure shown below represents the population of students in the junior section of a school. Which section has the greatest number of students?
35. Calculate the mode of these scores
7, 1, 0, 7, 5, 2, 0, 9, 12.
36. In a class of 10 boys and 15 girls, what is the probability of selecting a boy
of 15 years of age?
[Pie chart for item 34: sectors JS I, JS II, JS III (130°).]
APPENDIX: R
Recommendations on what teachers should do about the errors students committed.
1. As the study showed that the error of students' inability to write down answers correctly (or with the appropriate signs, where necessary) ranked highest among the 25 skills (989 errors), teachers are encouraged, during the teaching process, to draw students' attention to writing answers with the appropriate units or signs. For instance, if an area was calculated in square metres, the answer should carry the unit m². Again, if a volume was calculated in litres, the answer should carry the unit L (litres). Moreover, if an angle was measured as 28 degrees, it should be written as 28° and not 28. Teachers should emphasize to students that positive and negative numbers are not the same because of the negative sign. For instance, −5 is not the same as 5; therefore, if the answer to a question is −5, writing down only 5 implies an entirely different number, thereby committing this error. All these illustrations were noted in the students' scripts.
2. The next highest frequency of errors was students' inability to divide numbers, on which 809 errors were recorded. To prevent further occurrence of this error among students who may fall victim to it, teachers should draw students' attention to the facts that:
a. 2 divides every number whose last digit is an even number or zero;
b. 5 divides every number whose last digit is 5 or zero.
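The two last-digit rules can be illustrated, and checked against actual division, with a small sketch (the sample numbers are arbitrary):

```python
def divisible_by_2(n):
    """Last-digit rule: divisible by 2 if the number ends in an even digit or 0."""
    return str(abs(n))[-1] in "02468"

def divisible_by_5(n):
    """Last-digit rule: divisible by 5 if the number ends in 5 or 0."""
    return str(abs(n))[-1] in "05"

# The shortcuts agree with actual division for these sample numbers:
for n in (34, 57, 120, 85):
    assert divisible_by_2(n) == (n % 2 == 0)
    assert divisible_by_5(n) == (n % 5 == 0)
```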
3. The third highest frequency of errors occurred in students' inability to multiply numbers, on which they committed 652 errors. Teachers should encourage students to learn the multiplication tables from 2 times up to 12 times by heart, as a prerequisite for proper learning of mathematics involving the multiplication of numbers.
4. Students seriously demonstrated their inability to subtract numbers as well as to add them: inability to subtract accounted for a total of 635 errors, while inability to add yielded 627 errors. This lack of prerequisite knowledge and skill in the subtraction and addition of numbers requires that, before teachers engage students in the learning process, the students' attention be drawn to the prerequisite arithmetic methods of "carrying" and of working with addends, before harder secondary school mathematics problems are introduced.
5. Students showed a lack of ability to multiply expressions, committing 620 errors. Teachers should inform students that in multiplying expressions, constants multiply constants while letters multiply letters. For instance, 2r × 4n = (2 × 4)(r × n) = 8rn. Also, 3t × 5t = (3 × 5)(t × t) = 15t². In balancing an equation, whatever is done to one side of the equation must also be done to the other side. For instance, to find the value of x in the equation 5x = 10, divide both sides by the coefficient of x (i.e. 5); the value of x therefore becomes 2 (i.e. x = 2). In most of the students' scripts it was observed that students who failed these two aspects were mixing numbers and letters in multiplication, or dividing one side of an equation by a number while leaving the other side undivided.
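The worked examples in this recommendation can be verified numerically by substituting sample values; the values chosen here are arbitrary illustrations:

```python
# Substituting arbitrary sample values confirms 2r x 4n = 8rn and 3t x 5t = 15t^2.
r, n, t = 3.0, 5.0, 2.0
assert 2 * r * 4 * n == 8 * r * n
assert 3 * t * 5 * t == 15 * t ** 2

# Balancing 5x = 10: divide BOTH sides by the coefficient of x.
coefficient, rhs = 5, 10
x = rhs / coefficient   # x = 2.0
```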
6. Students demonstrated an inability to understand the procedure for the construction and bisection of angles, recording a total of 615 errors, and an inability to sketch solid objects, on which they recorded 612 errors. For remediation, teachers should draw students' attention to using a ruler to draw straight lines, marking the required measurement on the line segment drawn (say 8 cm or 5 cm, as may be required), using a pair of compasses to draw arcs, and drawing another line segment to cut the previous one (as the case may be). Teachers should lead students to bisect lines or angles using a pair of compasses and not a protractor. Teachers should emphasize that in all constructions and bisections of angles, all arcs required in the process must be clearly shown (see Appendix C, items 18 and 19). In the scripts of the students who committed these two errors, there was no evidence of the use of a pair of compasses, as no arcs were shown. In some students' scripts the angle was bisected accurately but no arc was showing, implying that such students must have used a protractor to divide the angle; this step is no longer construction but a wrong procedure.
Furthermore, teachers should build into students the prerequisite skills that could enable them to sketch solid objects, such as knowledge of terms like plan, edges, top view, side view, front view, parallel projection, proportional drawing, and orthogonal projection (see Appendix K, item 12). If teachers build these terms into students, future students who might commit the error of inability to sketch solid objects will tend to avoid it, thereby enhancing their readiness for senior secondary mathematics learning.
7. Students showed an inability to draw accurately or to write down the values in diagrams correctly, recording a total of 593 errors, and an inability to find the unknown side of a right-angled triangle, with 579 errors committed on this. In remediation of these errors, it is suggested that teachers teach students that knowledge of the similarity of triangles lies in the equality of their corresponding (1) angles and (2) sides. In the case of, say, a pie chart, as in items 29 and 30 of the MATHRET, the size of the space each item occupies determines which item is larger (in size or quantity) than the other; similarly, the size of the angles in a pie chart determines which sector is larger than the other. All this information should be inculcated in students, so that any student who could fall victim to this error will avoid it through mastery of the skill. Teachers should introduce the idea of SOHCAHTOA in teaching students how to find the unknown side of a
right-angled triangle when one side is unknown, another side is given, and a non-included angle is given, where sin, cos and tan are as defined by SOHCAHTOA.
When students master the use of this idea/principle, students who could have failed problems on this aspect will take correction. This step will increase the mathematical readiness level of JS 3 students aspiring to resume mathematics learning at senior secondary one level.
8. Students showed a lack of ability to compare areas/volumes of similar figures, with a record of 566 errors, as well as an inability to translate word problems into numeric form, on which they committed a total of 558 errors. Teachers should teach students to gain knowledge of ratio and proportion as prerequisite entry behaviour for comparing areas or volumes of similar figures. Students' attention should also be drawn to the principle of scale factor (see Appendix K, items 20 and 21). Translating a word problem into numeric form requires that students look for cue words in the given word problem, and teachers should teach them to do so. For instance, in Appendix C, item 1, we have: "subtract the sum of nineteen...". Here the cue words are "subtract" and "sum"; teachers should then teach students these cue words: subtraction is the arithmetic word meaning the difference between two numbers, while sum means addition. Inculcating these into students will remedy this handicap in readiness abilities among JS3 students.
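The scale-factor principle mentioned here can be sketched against MATHRET items 23 and 24, whose answers follow directly from squaring or cubing the length ratio:

```python
def scaled_area(area, length_ratio):
    """Areas of similar figures scale by the square of the length ratio."""
    return area * length_ratio ** 2

def scaled_volume(volume, length_ratio):
    """Volumes of similar solids scale by the cube of the length ratio."""
    return volume * length_ratio ** 3

# Item 23: corresponding sides in ratio 3 : 7, first area 270 cm^2.
second_area = scaled_area(270, 7 / 3)      # ≈ 1470 cm^2
# Item 24: heights 6 cm and 3 cm, second volume 180 cm^3.
first_volume = scaled_volume(180, 6 / 3)   # 1440.0 cm^3
```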
(The SOHCAHTOA relations referred to above are: SOH — sin θ = Opposite/Hypotenuse; CAH — cos θ = Adjacent/Hypotenuse; TOA — tan θ = Opposite/Adjacent.)
9. Students showed that they lacked the ability to understand when to use a formula, recording a total of 546 errors, and an inability to identify the quantity of objects from a figure drawn, on which they
committed a total of 537 errors. Teachers should let students understand the components of a formula before applying it in solving problems. For instance, the formula for the volume of a cone is V = (1/3)πr²h, which is made up of one-third (1/3), the area of the circular base (πr²), and the height (h) of the cone (see Appendix K, item 22). Similarly, the formula for finding the range of a distribution (say of scores or marks) is "the highest score minus the least score" (see Appendix K, item 26). When these remediations are considered, adapted and adopted in teaching JS3 students mathematics, the frequency of these errors among future students will be eliminated or reduced drastically, leading to an enhanced readiness level of those who might otherwise have fallen victim to them.
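The cone-volume formula can be checked against MATHRET item 25 (height 4.2 cm, base diameter 5 cm, so radius 2.5 cm, with π taken as 22/7); this sketch is only an illustration of the formula's components:

```python
def cone_volume(radius, height, pi=22 / 7):
    """V = (1/3) * pi * r^2 * h: one-third of the base area times the height."""
    return pi * radius ** 2 * height / 3

volume = cone_volume(2.5, 4.2)   # ≈ 27.5 cm^3
```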
10. Results showed that the total frequency of errors committed by students due to their inability to approximate numbers to a given number of significant figures was 532, while their inability to write numbers in standard form yielded 528 errors. Teachers should draw students' attention to two basic issues in the approximation of numbers. First, when approximating a number of three, four or more digits to two significant figures, if the next digit is 4 or less than 4, record it as zero; but if the digit is more than 4 (i.e. 5, 6, ..., 9), add 1 to the preceding digit. Moreover, a zero immediately after the decimal point is not regarded as significant, and therefore should not be counted among the digits required. For instance, the approximate values of 4378, 3547 and 0.02657 to 2 significant figures are 4400, 3500 and 0.027 respectively. Based on this prerequisite information, students can solve the MATHRET item: 0.046487 ≈ 0.046. Teachers should also lead students to use the standard form method in calculation. Thus, when a number is written as a product of the form A × 10ⁿ, where A is a number such that 1 ≤ A < 10 and n is an integer, the number is regarded as
being written in standard form. It should be emphasized to the students
that n can be a positive or a negative integer (see Appendix: K item 3). Based
on these points, the errors students commit on these aspects will be
remedied, and future students who might fall victim to these errors will tend
to take correction by mastering the skills discussed above.
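The rounding and standard-form rules above can be sketched in a short program (a minimal illustration only; the helper names round_sig and standard_form are the writer's own, not part of the MATHRET):

```python
from math import floor, log10

def round_sig(x, sig=2):
    """Round x to `sig` significant figures; leading zeros after the
    decimal point are not counted as significant."""
    if x == 0:
        return 0
    digits = sig - int(floor(log10(abs(x)))) - 1
    return round(x, digits)

def standard_form(x):
    """Express x as (A, n) with x = A * 10**n and 1 <= A < 10."""
    n = floor(log10(abs(x)))
    return x / 10 ** n, n

# The worked examples from the text:
print(round_sig(4378))      # 4400
print(round_sig(3547))      # 3500
print(round_sig(0.02657))   # 0.027
print(round_sig(0.046487))  # 0.046
print(standard_form(4378))  # (4.378, 3)
```

The sketch reproduces the paper-and-pencil rule numerically: the exponent n locates the leading significant digit, and rounding is then ordinary rounding at that position.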
11. Remediation of students’ inability to: (a) factorize algebraic expressions and
(b) find the unknown angle of a right-angled triangle. The frequencies of
errors students committed on (a) and (b) above were 519 and 509 respectively.
Teachers should first of all teach the students the concept of a factor:
a factor is a number which divides another number without remainder.
From the MATHRET, the factors (x – y) and (x + y) of the expression x² –
y² are those expressions that can divide x² – y² without remainder (see
Appendix: K item 7). Using the long division method, a teacher can use one of
the factors, say x + y, to divide x² – y² to get the other factor, x – y, and vice
versa. Teachers should use SOHCAHTOA in teaching the students how to
find the unknown angle of a right-angled triangle (see Appendix: K item 17).
Students should be acquainted with SOHCAHTOA and how to use the
formula, such that TOA stands for Tan = Opposite/Adjacent. So, considering
MATHRET item 17, the unknown angle of a right-angled triangle is found
when two sides (the opposite and the adjacent) are given. The ability
to use logarithm tables and/or a calculator should be taught as a prerequisite
required to know that 37° = tan⁻¹ (0.75) (see Appendix: K item 17).
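A quick numerical check of the difference-of-squares factorization and the tan⁻¹ step can be sketched as follows (an illustration only; the side lengths 3 and 4 are assumed values chosen to give the ratio 0.75 quoted in the text, not the actual MATHRET figures):

```python
import math

# (x + y) divides x**2 - y**2 exactly: the product of the two factors
# recovers the original expression for any sample values.
for x, y in [(5, 3), (7.5, 2.5), (-4, 9)]:
    assert math.isclose((x - y) * (x + y), x ** 2 - y ** 2)

# SOHCAHTOA: TOA means tan(angle) = opposite / adjacent, so the
# unknown angle is the inverse tangent of that ratio.
opposite, adjacent = 3, 4
angle = math.degrees(math.atan(opposite / adjacent))
print(round(angle))  # 37, i.e. 37 degrees = tan^-1(0.75)
```

This mirrors what a logarithm table or calculator does for the student: evaluate the ratio, then invert the tangent.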
12. Students demonstrated inability to (a) add expressions, with a frequency of
503 errors, and (b) compare angles in a figure. Such students should be
acquainted with rudiments/skills such as: a minus followed by a minus gives a
minus (i.e. -4 – 3 = -7, always), while subtracting a negative number gives
either a minus (-) or a plus (+), since the two minus signs become a plus.
Examples: -4 – 3 = -7; -4 – (-3) = -4 + 3 = -1; but -5 – (-8) = -5 + 8 = 3. Based
on this prerequisite knowledge/skills, students can then solve expressions
such as item 9 in the MATHRET. In solving item 9, students can easily see that
equation (4) minus equation (3) is -3y – (-3y), which gives -3y + 3y = 0; and
6x – x gives 5x. Also, 45 – 10 gives 35 (see Appendix: K item 9). Students
should be informed that:
a. the angle formed by two perpendicular lines is 90°; (b) the sum of
angles at a point is 360°; (c) the sum of angles on a straight line is
180°. Considering MATHRET item 30 (see Appendix: K), students
ought to know that since JS II was indicated with 90° and JS III with 130°,
then JS I should be 360° – 130° – 90° = 140°. They should therefore
compare the three angles that make up the 360° at the point,
i.e. 90°, 130° and 140°. They will realize that 140° is the
greatest angle and it represents the JS I section. The JS I section therefore
has the greatest number of students (see Appendix: K item 30).
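The sign rules and the angles-at-a-point reasoning above can be verified with a few lines of arithmetic (a sketch only; the two simultaneous equations are reconstructed from the differences quoted in the text, 6x – x, -3y – (-3y) and 45 – 10, and may differ in form from the actual MATHRET item):

```python
# Sign rules: subtracting a negative turns the two minus signs into a plus.
assert -4 - 3 == -7
assert -4 - (-3) == -1
assert -5 - (-8) == 3

# Elimination as described: equation (4) minus equation (3) cancels the
# -3y terms, leaving 5x = 35, hence x = 7.
x = (45 - 10) / (6 - 1)
print(x)  # 7.0

# Angles at a point sum to 360 degrees (MATHRET item 30).
js2, js3 = 90, 130
js1 = 360 - js3 - js2
print(js1)  # 140

sections = {"JS I": js1, "JS II": js2, "JS III": js3}
print(max(sections, key=sections.get))  # JS I: the largest sector
```

The dictionary comparison at the end makes the final step of item 30 explicit: the section with the greatest angle is the section with the most students.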
13. Inability to (a) understand the concept of inverse proportion, and (b) identify
similar figures were among the areas in which students committed lower
frequencies of errors, with 469 and 478 errors respectively. Although
students demonstrated evidence of mastery in these areas, remediation is still
needed for the students who committed these errors and for students who may
fall victim to them in future. Teachers should introduce illustrative examples in
explaining the concept of inverse proportion. For instance, a teacher may
introduce the idea that if a plot of land takes 2 men 5 days to cultivate, the
same plot of land will take more men fewer days to cultivate. That means,
instead of more men more days, the inverse refers to more men fewer days.
Teachers should introduce the idea of lines of symmetry for different solid
objects before teaching students similarity of solid figures, such as the
similar triangles found among the MATHRET items. For the similar triangles
in the MATHRET, the two major considerations for their similarity are (1) the
corresponding angles must be equal; and (2) the corresponding sides must be
in the same ratio (see Appendix: K item 13).
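The inverse-proportion example above (2 men, 5 days) can be sketched as a constant man-days product (an illustration only; days_needed is a hypothetical helper, not part of the study):

```python
# Inverse proportion: men * days stays constant for the same plot of land.
men, days = 2, 5
man_days = men * days  # 10 man-days of work in total

def days_needed(workers):
    """More men, fewer days: days = man_days / workers."""
    return man_days / workers

print(days_needed(2))  # 5.0
print(days_needed(5))  # 2.0 -- more men, fewer days
```

The fixed product is what distinguishes inverse from direct proportion: doubling the workers halves the days, rather than doubling them.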
14. The last two skills on which students showed evidence of mastery were
(1) the ability to identify lines of symmetry, with a record of 465 errors,
and especially (2) the ability to write down values or expressions already
mastered, with a record of 459 errors. Although students demonstrated
evidence of mastery of these two skills in their scripts, there is still a need
for teachers to state generally that a line of symmetry divides a figure into
two identical parts.