Report on
Normalization Approach
for +2 Marks Across Various Boards –
Validation and Refinement
Submitted to
The Ministry of Human Resource Development
Govt. of India, New Delhi
by
Committee Chaired by Prof. S.K. Joshi
February 2013
TABLE OF CONTENTS

CHAPTER 1 : INTRODUCTION
1.1 Need for Inclusion of Board Marks and its Normalization
1.2 Committees for Normalization Scheme
1.3 Potential Challenges for Implementation of Scheme
1.4 Terms of Reference
1.5 Outline of the Report

CHAPTER 2 : DIFFERENT APPROACHES TO NORMALIZATION
2.1 Description of Different Schemes Considered for Normalization
2.2 Selection of a Normalization Scheme
2.3 Summary

CHAPTER 3 : DATA ANALYSIS AND FINE TUNING OF SELECTED METHOD
3.1 Methods Chosen for Fine Tuning
3.2 Description of the Data Used
3.3 Results and Analysis
3.4 Further Discussion and Selection of Procedure
3.5 Final Choice
3.6 Finer Issues
3.7 Summary

CHAPTER 4 : ALGORITHM AND FLOWCHART FOR NORMALIZATION
4.1 Source and Data Output
4.2 Descriptive Algorithm
4.3 Flowchart
4.4 Proposed Timelines for Processing and Analysis of Results

CHAPTER 5 : CONCLUSIONS
CHAPTER 6 : ACKNOWLEDGEMENTS
CHAPTER 7 : REFERENCES
CHAPTER 8 : APPENDICES
Appendix A : Report of “Ramasami Committee” on “Examination and Admission System in Engineering Programmes” (without Annexure)
Appendix A1 : MHRD, GoI Order F.No. 19-4/2010-TS I dated 11th November, 2010 constituting “Ramasami Committee” to assess the examination and admission system in engineering programmes
Appendix A2 : MHRD, GoI Order F.No. 19-2/2010-TS I dated 8th March, 2010 constituting “Acharya Committee” to look into the strengthening and rationalizing of JEE, GATE, JMET, JAM, etc.
Appendix B : MHRD, GoI Order No.F.33-5/2012-TS III dated 13th August, 2012 constituting “Joshi Committee” for Validating Normalization Formula of +2 Marks
Appendix C : Input from ACER to Chairman, CBSE
Appendix D : Input from CAER to Chairman, CBSE
Appendix E : Input from the “Core Committee” to Chairman, CBSE
Appendix F : Analyses carried out for Validation / Fine-tuning
CHAPTER 1 : INTRODUCTION
1.1 Need for Inclusion of Board Marks and its Normalization
Admissions to several well established institutes of higher learning in India are highly
competitive, with selection ratios ranging between 1:100 and 1:20. Since the school board
examinations vary in their standards, higher education institutions have traditionally
used an additional examination, called an entrance examination, to overcome the
difficulty of preparing a common merit list for admissions. A student,
unsure of which institution he or she will be admitted to, appears for several entrance
examinations. This puts a huge stress on students as well as on schools and society. In
order to boost the prospects of better performance in entrance examinations, students
often neglect school education in favour of coaching classes. In
general, it is felt that since school board performance is not taken into account during
admission to different professional courses, the schooling system is not able to
enforce and improve standards.
It is well accepted that entrance examinations should be designed and conducted to
evaluate both the scholastic level and the aptitude of a student for the course in which he
or she wishes to pursue higher studies. However, besides the proliferation of entrance
examinations, the preparation required of students for these examinations has also
increased considerably, owing to the level of competition and the diverse nature of
such tests.
The school system is the backbone of secondary education. Any student transiting from
the secondary system should have performed well there to be eligible to enter the
tertiary system. In order, inter alia, to address the above issue, the MHRD, Govt. of India
had constituted the “Ramasami Committee” [Appendix A] on “Examination and Admission
System in Engineering Programmes”. The report submitted by the committee suggested
that if the performance of a student is taken into account in “percentile” form, rather than
as the actual “percentage” of marks, it becomes possible to compare student
performance across different boards.
Further, the committee undertook an exercise in collaboration with the Indian Statistical
Institute (ISI) to assess the following questions with the help of sample data from a few boards:
(a) Do the aggregate scores from different boards exhibit sufficient stability over the
years, so that these can be used as criteria for admissions with a reasonable
degree of confidence?
(b) What is the best way of standardizing different board scores in order to make them
comparable for the purpose of selection?
It was found that the year-to-year variation in the aggregate scores is minimal, while there
is substantial variation of aggregate scores from board to board, particularly when non-
science subjects are included in the aggregate.
The stability of the aggregate scores of different boards over the years indicated
stability of the examination processes that produce these scores.
The Ramasami Committee report [Appendix A] was discussed in the 4th meeting of the NIT
Council held on July 04, 2012. It was felt that a combination of school and national level
test performance would help develop an alternative admission system wherein the multiplicity of
tests and dependency on coaching would be reduced by incorporating +2 (or its
equivalent) results. It was decided by the NIT Council that the NIT System would
give 40% weightage to performance in class XII Board marks, normalized on a
percentile basis, and the remaining 60% weightage to performance in JEE Main,
with a combined merit list prepared accordingly. This system would be
implemented from 2013. The members of the council resolved to facilitate
implementation of the new admission policy in close coordination with the CBSE.
1.2 Committees for Normalization Scheme
There were several committees constituted by the Ministry of Human Resource
Development, Govt. of India, from time to time to develop alternative test schemes for
admission to institutions like IITs and NITs. Some of these committees had considered the
possibility of factoring in the +2 marks for preparation of overall merit list for admission to
the centrally funded technical institutions. A particular mention is made of the “Ramasami
Committee” report [Appendix A] on “Examination and Admission system in Engineering
Programmes”, submitted to MHRD in November 2011. The report dealt with the basic
aspects of normalization and with the help of expertise from ISI, it came out with a clear
recommendation that +2 marks from all the boards across the country can be considered.
The committee further suggested that a group be set up under MHRD to further
evaluate the detailed modus operandi to make use of board results.
Subsequently the NIT council, in its 4th meeting held on July 04, 2012, authorized its
Chairman to constitute a committee consisting of a few Directors, Chairman CBSE
and other experts to look into issues relating to normalization of class XII Board
marks on percentile basis.
Keeping these recommendations in view, the Ministry constituted a committee for
validation of normalization of board marks for its use in JEE-Main 2013. As per Order
No.F.33-5/2012-TS III dated 13th August, 2012 issued by Ms. Amita Sharma (Additional
Secretary, MHRD & Member Secretary, Council of NITs), Ministry of Human Resource
Development, Govt. of India constituted “Joshi Committee” for validating normalization
formula of +2 Marks [Appendix B].
Simultaneously the Chairman-CBSE and his team also started exploring the
possibilities of combining AIEEE/JEE-Main marks with +2 scores as CBSE was
identified as the nodal agency for JEE-Main examinations for 2013. The Chairman
CBSE instituted the following assignments / committees:
1. “Australian Council for Educational Research, Australia (ACER)” for combining
AIEEE/JEE-Main scores and board assessment for tertiary selection purposes in
India [Appendix C]
2. “Indian Centre for Assessment, Evaluation and Research (CAER) India” to
evaluate some possible options for aggregating subject examinations at the level of
scores for entry into Indian Tertiary Institutions [Appendix D].
3. A “Core Committee” to evaluate the possible normalization strategies. The
members of the group consisted of experts from ISI, IITs, and other prominent
institutions working in the relevant domain [Appendix E].
These were in addition to the direct role being played by CBSE in evaluation of possible
methodologies to consider +2 marks for JEE-Main examinations.
For the present work assigned to the “Joshi Committee”, the inputs made available by
above referred activities were considered and their recommendations were deliberated
upon. Some of the suggested aspects, found useful as input to the “Joshi Committee”, are
discussed in the subsequent chapters of the report.
The “Joshi Committee” held its first meeting on 30th October, 2012 and expressed the
need for understanding various approaches as well as some data analysis before arriving
at a decision. Meanwhile, the “Core Committee” was also invited to join the “Joshi
Committee”.
Subsequently a joint meeting of both these committees was held on 30th November, 2012.
It was brought to the attention of the members of the joint meeting that there is already a
decision to use a weighted combination of JEE-Main scores and normalized board scores
for admission to NITs, and that the weights for these two components have also been set
as 60% and 40%, respectively. Therefore, the task before the members was to arrive
at a suitable normalization procedure for the +2 board scores only.
Grading across different boards differs due to the subjective nature of evaluation,
different syllabi and other local factors. In order to normalize marks of different boards, it is
appropriate that a large and heterogeneous population of students from the different
boards is subjected to a common test. It was felt that since the students of various
boards will be writing a common entrance examination (JEE-Main), it makes sense
to normalize their board marks by aligning them with the common entrance test.
Several possible mechanisms for normalization of board scores were considered at length
in the first joint meeting. Some of them were proposed by the Core Committee, while the
others had been proposed by others to the Joshi Committee. It was decided that some of
these methods of normalization (as described in Chapters 3 and 4) would be tried out on
the 2012 data compiled by CBSE, and the results of the analysis would be examined in
detail in the next few meetings before arriving at a final decision to choose the most
appropriate approach. The findings of various groups on this follow-up analysis are
included in Appendix F of this report. The next three meetings were held on November 30,
2012, December 28, 2012 and January 29, 2013 respectively, at which the committee
arrived at the decisions reported in Chapters 2, 3 and 4 of this report.
1.3 Potential Challenges for Implementation of Scheme
During the course of evaluation of strategies for normalization of board marks, it was felt
that there are many challenges to the entire normalization process apart from validation of
the normalization scheme, which was the primary task assigned to the committee. Some
of the likely challenges are:
• Collection of data
• Formatting of data
• Validation / Authentication of data
• Adherence to time frames for data delivery
These were in addition to some expected bottlenecks / difficulties with
respect to marks of re-evaluation/re-examination cases, as well as the differing
combinations and numbers of subjects/choices offered by the various boards across
the country.
Keeping the above in mind, the validation work was undertaken with the available data
from CBSE and five other boards, to complete the exercise in the given time frame.
The potential challenges mentioned above highlight the difficulty of undertaking the
task in limited time, though with proper planning it is not insurmountable. Although these
points do not strictly fall under the purview of the “Joshi Committee”, some effort has
been made to suggest a few important guidelines to make the exercise comprehensive and to
facilitate timely completion of normalization work for declaration of JEE-Main 2013 results.
1.4 Terms of Reference
As per the terms of reference defined by the MHRD Order [Appendix B] the main objective
of the “Joshi Committee” was
• Validating the normalization formula using actual results of various Boards
and refining it based on its validation.
However, in view of the above, the committee adopted the following detailed objectives to
undertake the task in a comprehensive manner:
• To evaluate implementation methodologies and effectiveness of various possible
schemes
• To validate the chosen scheme for its consistency and application for JEE - Main
2013
• To identify and enlist relevant issues, which are not covered under the scope of
current study, for proper implementation of the scheme.
1.5 Outline of the Report
In addition to the brief background described in this Chapter, the following Chapters
report the work undertaken by the committee and several of its members towards
finalization of a suitable normalization approach, its validation and its algorithmic description.
Chapters 2 and 3, which are the heart of the report, describe different approaches to
normalization, data analysis and fine tuning of the selected approach. For the benefit of
users of this report, Chapter 4 provides the details of Algorithm and Flow-Chart of the
selected scheme.
Conclusions are provided in Chapter 5, which also highlights a few other aspects that
are critical for implementation of the scheme.
Acknowledgements and References are provided in Chapters 6 and 7
respectively. Finally, Chapter 8 provides the Appendices, which supplement the details
discussed in the main report.
While the content of the Appendices has been used as an important input for the preparation
of this report, the views expressed there do not necessarily represent the collective and
considered view of the “Joshi Committee”.
CHAPTER 2 : DIFFERENT APPROACHES TO NORMALIZATION
Normalization of marks of +2 board examinations should ideally be a process that
produces a single score for each qualifying student, which represents the candidate’s
general performance at that level, is comparable with similar scores of other candidates of
the same board or other boards, and is compatible with the JEE-Main for eventual
computation of a composite score.
2.1 Description of Different Schemes Considered for Normalization
The committee considered various schemes for normalization of the Board marks.
The schemes differed in the way they treated one or more of the following factors.
• Variation in the marks distributions in different examinations
• Disparity across subjects
• Combining marks of different subjects
The various schemes are briefly listed below along with their salient features, assumptions
and limitations. These schemes had been proposed and examined by various groups, as
described in Appendix F. Detailed description of these and related techniques of
normalization (known in the literature as the problem of equating) can be found in Kolen
and Brennan (2004) and other references given in Appendix F.
2.1.1 Normalizing Board Marks by Adjusting for the Mean and Standard Deviation
Analysis of past data of board examinations reveals that the distribution of marks
varies across subjects and also from board to board. This method
normalizes marks in a particular subject by subtracting the mean and dividing by the
standard deviation of the marks in that subject for each board. The normalized scores
obtained through this linear transformation have mean zero and standard deviation 1. One
may use another linear transformation on these normalized scores so that the further
transformed scores have a specified mean (other than 0) and a specified standard
deviation (other than 1), common to all the subjects and all the boards. These scores may
then be added over subjects to obtain a single aggregate score for each student.
As for the common mean and standard deviation to be used for the second linear
transformation, one can use the mean and the standard deviation of JEE-Main marks.
The main assumption underlying this method is that the marks in a particular subject,
adjusted as above for mean and standard deviation, are comparable across different
boards. However, analysis of past data indicates that the transformed scores obtained as
above have different distributions from board to board. Thus, it is difficult to justify the
assumption.
In a variation of the above method, aggregation over subjects can also be done before
transformation. The underlying assumption would be that the aggregate marks, adjusted
for mean and standard deviation, are comparable across different boards. This assumption
is also difficult to justify for the same reason.
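As a sketch, the two-step linear transformation described above can be implemented as follows. The marks and the target scale here are illustrative, not actual board data.

```python
from statistics import mean, pstdev

def rescale(marks, target_mean, target_sd):
    """Standardize marks to mean 0 and SD 1 within a board, then map
    them linearly onto a common target scale (for example, the mean
    and standard deviation of the JEE-Main marks)."""
    m, s = mean(marks), pstdev(marks)
    return [target_mean + target_sd * (x - m) / s for x in marks]

# Illustrative (made-up) subject marks from one board, mapped to a
# common scale with mean 50 and SD 15
board_marks = [62, 71, 55, 88, 49, 77]
scaled = rescale(board_marks, 50.0, 15.0)
```

By construction the rescaled marks have exactly the target mean and standard deviation; the rescaled scores of each subject can then be added to obtain the aggregate.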
2.1.2 Normalizing Board Marks using Percentiles
In this method, the percentile scores in a subject in a board are used in lieu of marks. This
action amounts to using a non-linear transformation on the marks, which brings them to a
common uniform scale from 0 to 100. These percentile scores can be further transformed
monotonically into another chosen scale as well, which is the same for all the boards.
These scores may then be aggregated over different subjects.
A possible second transformation is one that makes the transformed subject scores have
the same distribution as the JEE-Main marks.
The main assumption here is that the percentile scores in a particular subject are
comparable across different boards. This can happen if, for example, the merit distribution
in that subject is the same from one board to another, and the students obtain marks in the
order of their respective merits. While this assumption is difficult to validate in practice, it
appears to be equitable to all the boards.
In a variation of the above method, aggregation over subjects can also be done before
computation of percentiles. The underlying assumption would then be: the percentiles of
aggregate marks are comparable across different boards. This can happen if aggregate
marks for a board are in the order of merit and the merit distribution of students is the
same in all boards.
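As a sketch, percentile scores under the "percentage of candidates at or below a given mark" convention can be computed as follows. This is one common convention; the report does not fix the exact definition here.

```python
from bisect import bisect_right

def percentile_scores(marks):
    """Return the percentile of each candidate among all candidates:
    the percentage of candidates whose mark is at or below his/her own."""
    ordered = sorted(marks)
    n = len(marks)
    # bisect_right counts candidates with marks <= x, so ties share
    # the same (highest applicable) percentile
    return [100.0 * bisect_right(ordered, x) / n for x in marks]

# Four candidates with distinct marks land at the 25th, 50th, 75th
# and 100th percentiles under this convention
print(percentile_scores([40, 55, 70, 85]))  # -> [25.0, 50.0, 75.0, 100.0]
```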
2.1.3 Use of a Rasch Model for Adjusting Difficulty Level of Subjects
In this method, a multi-level logistic model (see Andrich, 2005) is fitted to the marks
obtained by the students in different subjects of a particular board. According to this model,
the performance of a student in a subject is determined by the difference between the
‘achievement level’ of the student and the ‘difficulty level’ of the subject. The intention of
using this model is to take into account the relative difficulties of the different subjects. The
fitted model gives a single achievement level of a student for all the subjects, which can be
used to generate the normalized board score. Aggregation is not required.
This model, which is an extension of Rasch’s (1960) model for dichotomous achievement
scores, makes specific assumptions about the nature of dependence of a student’s subject
rank (within the board) on the difficulty level of the subject and a student’s achievement.
For large data sets, the model is often found not to fit the data. Further, fitting such a
complex model to an extremely large data set generally involves numerical problems.
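For the dichotomous case, the Rasch model referred to above reduces to a single logistic curve in the difference between achievement and difficulty. A minimal sketch follows; the multi-level extension fitted to graded subject marks is considerably more involved.

```python
import math

def rasch_probability(theta, delta):
    """Rasch model for a dichotomous item: the probability of success
    depends only on the difference between the student's achievement
    level (theta) and the item's difficulty level (delta)."""
    return 1.0 / (1.0 + math.exp(-(theta - delta)))

# A student whose achievement equals the item's difficulty succeeds
# with probability 0.5; an easier item raises that probability
p_equal = rasch_probability(1.2, 1.2)   # 0.5
p_easier = rasch_probability(1.2, 0.2)  # greater than 0.5
```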
2.1.4 Use of Scores of an Auxiliary Examination for Tracking Board Marks
This method makes use of the marks of the JEE-Main, in which all the students under
consideration would appear, for ‘tracking’ the scores of different boards. Presuming that
merit distributions in different boards are different, this method attempts to compensate for
this difference on the basis of the differential performance of students of different boards in
JEE-Main. Specifically, the percentile score of any student in a subject in a particular board
examination is converted into the JEE-Main mark in that subject corresponding to the
same percentile among the JEE-Main candidates of that board. These converted scores
may then be aggregated over subjects.
The assumption underlying this approach is that the JEE-Main marks in a subject
adequately capture differences, if any, in merit distributions (for that subject) in different
boards. This could happen if (a) students of a particular board do not have any advantage
or disadvantage over students of another board, in respect of the JEE-Main examination of
any subject, and (b) students appearing in JEE-Main from a particular board constitute a
representative sample of that board. A drawback of this method is that the set of JEE-Main
subjects is rather limited; many subjects in board examinations
have no corresponding subject in JEE-Main.
In a variation of the above method, the percentile score of any student in a particular board
examination may be computed on the basis of aggregate marks, and then converted into
the aggregate JEE-Main marks corresponding to the same percentile among the JEE-Main
candidates of that board. The underlying assumption is that the JEE-Main aggregate
marks adequately capture differences, if any, in overall merit distributions in different
boards.
2.2 Selection of a Normalization Scheme
Selection of various aspects of a normalization scheme is made by analysing their
suitability to the present context, and also by making use of available data, where possible.
The choices driven mostly by analysis of suitability are made in this section. Further fine
tuning of the selected method on the basis of data analysis is done in the next chapter.
2.2.1 Selection of Subjects for Aggregation
The candidates eligible for JEE-Main have different combinations of subjects, many of
which have no direct correspondence with JEE-Main subjects. For many candidates
eligible for JEE-Main, Physics and Mathematics are the only subjects common to
the two sets of examinations.
The number of subjects differs from one board to another, and sometimes even within a
board. Almost all boards make students choose at least five subjects (ICSE and IGCSE
are known exceptions). Since the decision to use board marks is meant to increase
emphasis on school studies, it was decided that five subject marks would be used for
aggregation. These are:
1. Physics
2. Mathematics
3. Any one of the subjects Chemistry, Biology, Biotechnology and Computer Science
4. One language
5. Any subject other than the above four subjects.
In respect of 3, 4 and 5, the best mark in a given category is to be chosen. In respect of
boards/examinations assessing only four subjects, the aggregate marks may be
extrapolated to a five-subject aggregate, provided the subject combination satisfies the
eligibility condition of JEE-Main. This provision is applicable only for the 2013
normalization process.
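The selection rule above can be sketched as follows. The subject names and marks are illustrative, and which subjects count as languages is passed in explicitly, since that varies by board.

```python
def five_subject_aggregate(marks, languages):
    """Apply the five-subject rule: Physics, Mathematics, the best of
    the category-3 sciences, the best language, and the best remaining
    subject. `marks` maps subject name to mark (out of 100);
    `languages` is the set of subjects counted as languages."""
    group3 = {"Chemistry", "Biology", "Biotechnology", "Computer Science"}
    chosen = {"Physics": marks["Physics"], "Mathematics": marks["Mathematics"]}
    # Category 3: best of Chemistry/Biology/Biotechnology/Computer Science
    s3 = max((s for s in group3 if s in marks), key=lambda s: marks[s])
    chosen[s3] = marks[s3]
    # Category 4: best language
    s4 = max((s for s in languages if s in marks), key=lambda s: marks[s])
    chosen[s4] = marks[s4]
    # Category 5: best mark among subjects not already chosen
    s5 = max((s for s in marks if s not in chosen), key=lambda s: marks[s])
    chosen[s5] = marks[s5]
    return sum(chosen.values())

# Hypothetical candidate: 80 + 90 (Physics, Mathematics) + 75 (Biology,
# best of category 3) + 85 (English) + 95 (best remaining subject)
marks = {"Physics": 80, "Mathematics": 90, "Chemistry": 70, "Biology": 75,
         "English": 85, "Economics": 60, "Physical Education": 95}
total = five_subject_aggregate(marks, languages={"English"})  # 425
```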
2.2.2 Handling Disparity across Subjects
Of all the methods considered, the only method that makes an explicit adjustment for the
possibly different difficulty levels of various subjects is the one based on Rasch models.
The number of parameters in this model is more than the total number of students. As
indicated before, fitting of such a model to data sets of lakhs of students entails
computational difficulties as well as the possibility of statistical lack of fit. Further, the
model can at most account for disparities among subjects within a board. Therefore, it was
decided that no adjustment would be made for subject disparity. It was observed that
normalization would eventually be done on aggregate marks (see sections below), where
the effect of subject disparity, if any, would be moderated by the existence of some
common subjects.
2.2.3 Aggregation Before or After Transformation
Aggregation at the last stage requires separate transformation of subject scores. An
attractive aspect of this strategy is that one can adjust for disparity in marking across
subjects/boards. However, a major difficulty with this choice is that subject-specific
13
transformation involves much greater computation (with greater scope of error), which has
not been experienced by the implementing agency (CBSE) at this scale. This may
jeopardize the publication schedule of results.
On the other hand, aggregate of untransformed subject marks is widely used, understood,
and accepted as a basis for comparing merits of students having different combinations of
subjects. Any monotone transformation of the aggregate marks would leave undisturbed
the relative ranking of the students within the board.
It was decided that aggregation would be done this year before transformation of
the board scores. The possibility of subject-wise transformation before aggregation may
be considered in future – after some experience has been gained in respect of
normalization at this scale.
2.2.4 Selection of Transformation
Past data show discrepancies in the distribution of aggregate marks across different boards.
Some disparity remains even after the mean and standard deviation are adjusted for. As an
example, consider the aggregate marks of six boards – Assam, CBSE, Jharkhand,
Maharashtra, Mizoram and Uttarakhand – for class XII in the year 2012. Figure 2.1 shows
the percentiles against the linearly transformed aggregate marks (adjusted for mean and
standard deviation) of the respective boards.
Figure 2.1: Percentiles of different board marks adjusted for mean and standard deviation
[Figure: percentile in board (0 to 100, y-axis) plotted against linearly transformed
board aggregate marks (adjusted for mean and standard deviation, x-axis) for the
Assam, CBSE, Jharkhand, Maharashtra, Mizoram and Uttarakhand boards]
There is considerable dissimilarity among the distributions of the linearly transformed
aggregate marks for different boards. A linear transformation is clearly inadequate.
The committee had been asked to validate/refine a normalization scheme of board marks
so that a weighted sum of the JEE-Main marks and normalized board scores (with 60%
and 40% weights, respectively) can be used for drawing up the merit list for admissions. A
weighted combination is meaningful only if the two components are similar in terms of
range, distribution, and so on. A simple way of achieving this, while leaving the JEE-Main
marks intact, is to ensure that the transformed board scores have the same distribution as
the JEE-Main scores.
This parity can be achieved in two ways:
• By using a version of the Method described in Section 2.1.2, i.e., by transforming
percentile scores of all boards (based on aggregates of subject marks) identically in
such a way that the set of transformed scores of the students of each board have
the same distribution as the JEE-Main All-India aggregate marks;
• By using a version of the Method described in Section 2.1.4, i.e., by transforming
percentile scores of each board (based on aggregates of subject marks) in such a
way that the set of transformed scores of the students of each board have the same
distribution as the JEE-Main aggregate marks of candidates appearing from that
board.
The choice of one of these options is made in Chapter 3 after relevant data analysis.
2.2.5 Selection of the Group for Normalization
The percentile score corresponding to the aggregate board marks of a student may be
computed from different groups of students within a board. The following groups were
considered:
(i) All the students;
(ii) All the students that passed the board examination;
(iii) All the students (who appeared in the board examination) whose subject combinations
meet the eligibility criteria of JEE-Main.
The five-subject combination mentioned in Section 2.2.1 is not even defined for all the
students (option (i) above). Further, as different boards have different pass marks, and the
pass percentages also vary somewhat from board to board, it would not be fair to use the
set of passed candidates (option (ii) above) for normalization. For admission to
engineering courses, the set of students having appropriate subject combinations (option
(iii)) appears to be the most natural group for normalization.
Option (iii) was chosen for the purpose of normalization on the basis of percentiles.
2.3 Summary
In this chapter, various approaches for normalization that were considered have been
reported. The process of selection of an appropriate approach has also been described.
Based on these criteria, two approaches were shortlisted; these are described in detail
in the next chapter.
CHAPTER 3: DATA ANALYSIS AND FINE TUNING OF SELECTED
METHOD
After considering various approaches to normalization and the criteria for selection that
were described in Chapter 2, two procedures were shortlisted for further study. These are
examined in this chapter more closely, and a final choice is made on the basis of data
analysis.
3.1 Methods Chosen for Fine Tuning
The two procedures short-listed in Section 2.2.4 of Chapter 2 can be described through
the following steps.
Procedure 1
1. Note down the aggregate marks A0 obtained by each student in JEE-Main.
2. Compute the percentile P of each student on the basis of his/her aggregate marks B0 in
his/her own board, computed from the list of five subjects specified in Section
2.2.1 (each mark out of 100). The percentile is to be computed among all students
of the board whose subject combinations meet the eligibility criteria of JEE-Main.
3. Determine the JEE-Main aggregate marks corresponding to percentile P at the
All-India level. Regard this as the normalized board score B1 of the student.
4. The composite score used for drawing the merit list is C1 = 0.6 × A0 + 0.4 × B1.
Procedure 2
1. Note down the aggregate marks A0 obtained by each student in JEE-Main.
2. Compute the percentile P of each student on the basis of his/her aggregate marks B0 in
his/her own board, computed from the list of five subjects specified in Section
2.2.1 (each mark out of 100). The percentile is to be computed among all students
of the board whose subject combinations meet the eligibility criteria of JEE-Main.
3. Determine the JEE-Main aggregate marks corresponding to percentile P among
the set of aggregate scores obtained in the JEE-Main by the students of that
board. Regard this as the normalized board score B2 of the student.
4. The composite score used for drawing the merit list is C2 = 0.6 × A0 + 0.4 × B2.
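The two procedures can be sketched as follows; they differ only in the reference population used to invert the percentile. The percentile convention ("at or below") and the simple non-interpolating inverse below are illustrative assumptions, not the committee's exact specification.

```python
from bisect import bisect_right

def percentile(mark, population):
    """Percentage of the population at or below `mark` (one convention)."""
    pop = sorted(population)
    return 100.0 * bisect_right(pop, mark) / len(pop)

def mark_at_percentile(p, population):
    """Mark in `population` at percentile p, using a simple
    nearest-from-below inverse without interpolation."""
    pop = sorted(population)
    idx = max(0, min(len(pop) - 1, round(p / 100.0 * len(pop)) - 1))
    return pop[idx]

def composite_score(a0, b0, board_aggregates, reference_jee_marks):
    """Composite = 0.6 * JEE-Main aggregate + 0.4 * normalized board
    score, where the normalized board score is the JEE-Main mark at
    the student's board percentile. For Procedure 1 the reference is
    the All-India JEE-Main marks; for Procedure 2 it is the JEE-Main
    marks of candidates from the student's own board."""
    p = percentile(b0, board_aggregates)
    b = mark_at_percentile(p, reference_jee_marks)
    return 0.6 * a0 + 0.4 * b
```

For example, a student with JEE-Main aggregate 100 at the 75th percentile of a toy board maps, against reference marks [40, 80, 120, 160], to a normalized board score of 120 and hence a composite of 0.6 × 100 + 0.4 × 120 = 108.0.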
3.2 Description of the Data Used
In order to understand the difference between the above two procedures, these were
implemented in respect of the 2012 data on AIEEE scores and board scores. The data
available for analysis were marks of +2 examinations of 2012 for all students of CBSE,
Assam (AM), Jharkhand (JH), Maharashtra (MR), Mizoram (MZ) and Uttarakhand (UK)
boards, marks of all students in the 2012 AIEEE examination (precursor of JEE-Main,
2013), and matched pairs of board and AIEEE marks for a subset of students of the above
six boards (after discarding those cases where a perfect match could not be ensured). The
matching was done on the basis of the names of the candidate and his/her parents. The
following relevant sets of data were extracted in respect of the six boards.
• Data Set A: Aggregate marks in board examinations 2012 (expressed as
percentage) and the corresponding percentiles for all students taking the
examination.
• Data Set B: Aggregate marks in AIEEE examination 2012 and the corresponding
percentiles computed from the set of students from the chosen board.
• Data Set C: Aggregate marks in AIEEE examination 2012 and the corresponding
percentiles computed from the set of all students taking the examination.
• Data Set D: Matched pairs of AIEEE aggregate marks and board aggregate
percentages.
Note that Data Set C is common to all the boards, while the other three sets are compiled
separately for the six boards.
It was planned that the 2012 data would be used to study the normalized scores for
analysis. Accordingly, the percentiles (P) of aggregate board marks, the transformed board
scores B1 and B2 and the composite scores A1 and A2 were computed for students of six
boards, while treating the AIEEE marks as JEE-Main marks and the available board
aggregate marks (in percentage) as five-subject aggregate marks. The relation between B1
and P was compared with that between B2 and P. The two composite scores were
compared with the original AIEEE aggregate marks (A0).
All-India ranks and percentiles on the basis of these scores can only be computed when
the above calculations are done for all the boards. For the limited purpose of the present
analysis, ranks and percentiles were computed from the set of scores from the six boards
only. This required computation of ranks (R1 and R2) and percentiles (P1 and P2)
corresponding to the composite scores A1 and A2, respectively, and fresh computation of
ranks (R0) and percentiles (P0) from the marks A0.
3.3 Results and Analysis
3.3.1 Treatment of Percentiles of Different Boards
The graphs in Figure 3.1 show how the transformed board scores under Procedures 1 and
2 (B1 and B2, respectively) depend on the percentile scores (P) in the board.
Figure 3.1: Graphs showing how the two transformed board scores depend on board percentile
It can be seen that while the dependence relation shown in the first graph is the same for
all the boards, those in the second graph vary greatly from board to board. The extent of
this variation arising from Procedure 2 is brought out by the following examples.
• The transformed score of a student at the 80th percentile of Maharashtra Board is
less than that of a student at the 50th percentile of CBSE.
• The transformed board scores of the top students of CBSE, Maharashtra board and
Jharkhand board are 346, 331 and 274, respectively. In fact, these respective
scores are shared by the top five students of each of the three boards.
What these numbers show is that if one presumes any difference in the merit distribution
across boards and attempts to compensate for it on the basis of the performance of the
students of various boards in AIEEE (through Procedure 2), then the amount of
adjustment is very large. The extent of adjustment is weighed against the underlying
assumptions of the two procedures in Section 3.4. In any case, extreme levels of
differential treatment given to percentile scores of various boards seem to go
against the principle of fairness to all boards. Currently the NITs treat the five toppers
of each board at par, and admit them directly.
[Figure 3.1 comprises two panels plotting the transformed board score against the percentile in board (P): the left panel shows B1 and the right panel shows B2, with separate curves for CBSE, AM, JH, MR, MZ and UK.]
3.3.2 Distribution of Students of Each Board in Different Percentile Ranges
Tables 3.1, 3.2 and 3.3 show the state-wise composition of students at different ranges of
percentile computed from AIEEE marks only (P0), percentile computed from the composite
score by Procedure 1 (P1) and percentile computed from the composite score by Procedure 2
(P2), respectively. Figures 3.2, 3.3 and 3.4 show these in the form of composite bar charts.
                        Proportion of students in percentile range
  Board                  < 50   50 to 75   75 to 95   > 95   Total
  Assam                   73%      19%        7%        1%    100%
  CBSE                    41%      27%       25%        7%    100%
  Jharkhand               83%      14%        3%        0%    100%
  Maharashtra             74%      18%        7%        1%    100%
  Mizoram                 79%      15%        6%        0%    100%
  Uttarakhand             82%      15%        3%        0%    100%
  All matched samples     50%      25%       20%        5%    100%

Table 3.1: Distribution of students of six boards in different percentile ranges of P0
Figure 3.2: Distribution of students of six boards in different percentile ranges of P0

                        Proportion of students in percentile range
  Board                  < 50   50 to 75   75 to 95   > 95   Total
  Assam                   30%      39%       27%        4%    100%
  CBSE                    48%      25%       21%        6%    100%
  Jharkhand               61%      23%       15%        1%    100%
  Maharashtra             57%      23%       17%        3%    100%
  Mizoram                 34%      34%       27%        4%    100%
  Uttarakhand             50%      25%       23%        2%    100%
  All matched samples     50%      25%       20%        5%    100%

Table 3.2: Distribution of students of six boards in different percentile ranges of P1
Figure 3.3: Distribution of students of six boards in different percentile ranges of P1

                        Proportion of students in percentile range
  Board                  < 50   50 to 75   75 to 95   > 95   Total
  Assam                   56%      32%       11%        1%    100%
  CBSE                    42%      28%       24%        6%    100%
  Jharkhand               81%      14%        5%        0%    100%
  Maharashtra             72%      17%       10%        2%    100%
  Mizoram                 63%      26%       11%        0%    100%
  Uttarakhand             76%      17%        6%        0%    100%
  All matched samples     50%      25%       20%        5%    100%

Table 3.3: Distribution of students of six boards in different percentile ranges of P2
Figure 3.4: Distribution of students of six boards in different percentile ranges of P2
The distributions across different boards are seen to have least discrepancy in the case of
percentiles corresponding to Procedure 1. In other words, Procedure 1 is fairer than
Procedure 2 and the existing selection procedure.
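The board-wise distributions in Tables 3.1-3.3 come down to binning each board's students by their all-India percentile. A minimal sketch of that binning (the function name and range labels are illustrative, not from the report):

```python
def range_distribution(percentiles):
    """Proportion of students falling in the percentile ranges used in
    Tables 3.1-3.3: below 50, 50 to 75, 75 to 95, and 95 and above."""
    bins = {"< 50": 0, "50 to 75": 0, "75 to 95": 0, ">= 95": 0}
    for p in percentiles:
        if p < 50:
            bins["< 50"] += 1
        elif p < 75:
            bins["50 to 75"] += 1
        elif p < 95:
            bins["75 to 95"] += 1
        else:
            bins[">= 95"] += 1
    n = len(percentiles)
    # Report each bin as a whole-number percentage, as in the tables
    return {k: round(100 * v / n) for k, v in bins.items()}
```

Applying this per board to P0, P1 and P2 reproduces the three tables; the closer a board's distribution is to the pooled row, the fairer the procedure is to that board.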
3.3.3 Relation Between AIEEE Score and Normalized Scores
The association between the original scores (AIEEE only) and the composite scores by the
two procedures is now examined. In order to get a perspective, one can compare the
association between the AIEEE aggregate marks and matched board aggregate marks
(before transformation). The Spearman rank correlations between these quantities are
given in Table 3.4.
  Board                 Spearman rank correlation with AIEEE aggregate marks
  Assam                                  0.4124
  CBSE                                   0.6891
  Jharkhand                              0.1926
  Maharashtra                            0.4322
  Mizoram                                0.2919
  Uttarakhand                            0.3235
  All matched samples                    0.4841
Table 3.4: Spearman Rank correlation between AIEEE and board aggregate marks
The rank correlation is found to be the largest in the case of students of CBSE,
indicating greater alignment of that board's examination with AIEEE. However, none of the
rank correlations is very large. This finding indicates that the board examinations are
different from the entrance examination for engineering courses, and appear to
capture a different aspect of ability/achievement than what the AIEEE captures.
Therefore, utilization of the board marks is expected to bring in some departure from the
existing scenario.
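Spearman's rank correlation, used in Tables 3.4 and 3.5, is simply the Pearson correlation of the rank vectors, with tied observations receiving their average rank. A self-contained sketch follows; in practice a library routine such as scipy.stats.spearmanr would be used instead.

```python
def ranks(xs):
    """Average ranks (1-based); tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the block of ties
        avg = (i + j) / 2 + 1           # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because only ranks enter the computation, any monotone relation between board marks and AIEEE marks yields a correlation of 1, which is why this measure suits marks on incomparable scales.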
Table 3.5 summarizes the Spearman rank correlations between the AIEEE score and the
two composite scores.
  Board                 Rank correlation     Rank correlation
                        between A0 and A1    between A0 and A2
  Assam                      0.7283               0.8347
  CBSE                       0.9464               0.9302
  Jharkhand                  0.5782               0.7163
  Maharashtra                0.7691               0.8337
  Mizoram                    0.5936               0.7025
  Uttarakhand                0.5996               0.7435
  All matched samples        0.8559               0.9106
Table 3.5: Spearman Rank correlation between AIEEE marks and the two normalized scores
It is found that:
(a) Both the composite scores have higher rank correlation with AIEEE as compared to
the rank correlation of the raw board marks with AIEEE marks, reported in the
previous table;
(b) The variation of the rank correlations from board to board follows the pattern
observed in the previous table (highest correlation for CBSE, low correlation for
Jharkhand, Mizoram and Uttarakhand, and so on);
(c) The composite score A2 generally has a higher rank correlation with A0 than the composite score A1.
The higher rank correlation of Procedure 2 shows that it produces a composite score
more closely aligned with the AIEEE marks. The board marks contain information that
is different from the AIEEE marks (as seen from Table 3.4), and this additional information
appears to have been partially lost by the different transformations used for different
boards under Procedure 2.
3.3.4 Movement of Students from AIEEE Merit List to New Merit List
The level of movement of students from the merit list by AIEEE scores to the merit list by
composite scores is studied now. Figure 3.5 shows scatter-plots of the ranks of 1000
randomly chosen students computed from the two procedures (R1 and R2) against the rank
on the basis of AIEEE marks alone (R0).
Figure 3.5: Movement of ranks from merit list as per current procedure to merit list according to the
two normalization procedures considered (for 1000 randomly selected candidates)
The plot for Procedure 2 (on the right side) is seen to be less scattered around a
straight line. In other words, Procedure 2 produces milder movement of ranks than
Procedure 1. This finding is in line with the higher rank correlation of that procedure with
A0 observed previously. The scatter-plot of board marks percentage against AIEEE
marks, given in Figure 3.6, shows a much wider spread, indicating how different the board
marks are from the AIEEE marks. Therefore, the movement of ranks observed in the
case of Procedure 1 makes sense.
Figure 3.6: Relation between board marks and AIEEE marks (for 1000 randomly selected
candidates)
3.3.5 Limitations of the Data Analysis
It should be noted that the analysis presented in Sections 3.3.2 to 3.3.4 is based on 2012
data from six +2 boards only. Analysis on a larger scale could not be done due to
non-availability of data in the requisite format during the time available for analysis. The
reported quantities (distribution of students in different percentile ranges, rank correlations
and movement of students from one rank list to another) would change if data from all the
boards are used. However, the relative standings of the two procedures emerging
from this data analysis have been corroborated with general reasoning. These
qualitative conclusions would not change even if the data from all the boards are
taken into consideration.
3.4 Further Discussion and Selection of Procedure
3.4.1 Common and Exclusive Assumptions
The procedures compared here depend on the following common assumptions:
1. The aggregate scores of the students in a board examination are in the order of
their general merit within that board.
2. The aggregate scores of the students in JEE-Main are in the order of their merit in
respect of engineering admissions.
These assumptions may be violated when a student’s performance in a particular
examination is not indicative of his/her merit. However, normalization cannot be expected
to set right a particularly poor (or good) performance of a student. No fairer assumption in
this regard is possible.
Procedure 1 involves the following additional assumption:
3. General merit distribution does not vary from one board to another.
On the other hand, Procedure 2 involves the following alternative assumption:
4. The differences in performance patterns by students of different boards in
AIEEE/JEE-Main capture the difference of merit distributions across boards.
This assumption would hold if
4A. students of a particular board do not enjoy any particular advantage or
disadvantage over students of another board, in respect of the JEE-Main
examination, and
4B. students appearing in JEE-Main from a particular board constitute a
representative sample of that board.
We closely examine the assumptions 4A and 4B in Sections 3.4.2 and 3.4.4, respectively.
3.4.2 Confounding Factors
It may appear at first glance that Assumption 3 is contradicted by the different levels of
performance in AIEEE/JEE-Main by students of different boards as seen in Figure 3.1
(Rowley, 2012, p.C18). However, this need not be the case. Some possible factors that
can explain the difference include
(a) different levels of alignment of instructional/examination pattern of different boards
with the AIEEE/JEE-Main (as indicated by the different correlations observed in
Table 3.4),
(b) different extents of instruction in English and Hindi – the only available languages
for AIEEE/JEE-Main,
(c) different levels of access to coaching (one of the major issues mentioned in Chapter
1 of this report) enjoyed by students of different boards.
In the presence of these factors, it appears likely that students of some boards have an
advantage over students of other boards, in respect of the AIEEE/JEE-Main. Such a
scenario amounts to violation of Assumption 4A (related to Procedure 2). The said factors
confound any suspected difference in merit distributions among different boards (i.e.,
violation of Assumption 3 related to Procedure 1). Confounding is a major statistical issue
when different groups are compared (see Liao, 2002). Adoption of Procedure 2
amounts to ignoring these confounding factors and attributing the considerable
differences in performance patterns solely to presumed difference in merit
distributions. Such a decision based on presumption would not be fair to all the
boards.
3.4.3 Anomalous Effect of Differential Treatment of Board Marks
Consider two students of different boards who are at the same percentile of JEE-Main and
also at a common percentile in their respective boards. Depending on the performance of
the other students of the two boards in JEE-Main, the two students would have different
values of normalized board score, and consequently different ranks, under Procedure 2.
As a specific example observed in the matched data set from six boards, five students
(four from CBSE, one from Maharashtra) were tied at AIEEE aggregate marks of 130 and
board percentile of 93.1. According to Procedure 1, they shared the rank 20,352 (ties were
not broken). However, according to Procedure 2, the four CBSE students had a shared
rank of 18,077 and the student of the Maharashtra board had a rank of 34,175. The
considerably poorer rank of the Maharashtra student is entirely attributable to the weaker
performance of other students of that board in AIEEE as compared to those of CBSE. This
highlights the anomalous effect of the differential treatment of students of different boards
under Procedure 2.
3.4.4 Levels of Participation in AIEEE by Students of Different Boards
The popularity of AIEEE/JEE-Main among different sections of students varies from board
to board. This is brought out by Table 3.6, which shows the composition of AIEEE
candidates in terms of performance in their respective board examinations, as computed
from the matched data on 2012 AIEEE and board marks. There is substantial difference in
board performance patterns of AIEEE candidates.
          Proportion of AIEEE candidates in percentile range of board aggregate marks
  Board                  0-20   20-50   50-80   Above 80   Grand Total
  Assam                    0%      0%      2%       98%        100%
  CBSE                    29%     15%     19%       37%        100%
  Jharkhand               16%      5%      5%       74%        100%
  Maharashtra             11%      8%     11%       69%        100%
  Mizoram                  1%      1%      2%       96%        100%
  Uttarakhand             14%      3%      4%       79%        100%

Table 3.6: Proportion of AIEEE candidates in different percentile ranges of board marks
The above finding indicates violation of Assumption 4B stated in Section 3.4.1, and
puts a question mark against Procedure 2.
3.5 Final Choice
In view of the findings of the foregoing section, Procedure 1 was finally selected for
normalization.
3.6 Finer Issues
• Scores of different years. It was decided that percentile of a student in a board
would be computed with respect to all the students that appeared from that board in
that particular year.
• Smaller boards. The assumption of a common distribution of ‘merit’ for all the
boards in all the years may be problematic for boards with small number of
candidates. However, any attempt to rectify the ‘problem’ (e.g., by pooling scores of
different boards and/or different years) would require further assumptions that are
difficult to justify. In other words, the solution would be worse than the ‘problem’.
Therefore, it was decided that the smaller boards would not be treated differently.
Smaller boards have a marginal effect on the overall merit list anyway.
• Handling revision of board marks. It was decided that in the case of any revision of
marks after a cut-off date no revision would be done in the merit-list.
• Calculation of percentiles. The percentile of a candidate will be calculated as

  100 × (Number of candidates in the 'group' with aggregate marks less than the candidate) / (Total number of candidates in the 'group')

For the purpose of calculating percentiles of board marks, 'group' means the
collection of candidates (appeared) in the board who satisfy the eligibility criteria of
JEE-Main. For the purpose of calculating percentiles of JEE-Main marks, 'group'
means the collection of candidates (appeared) in JEE-Main.
• Determination of normalized board scores for a board percentile. When the list of
JEE-Main percentiles contains no exact match for a board percentile, nearest
matches from the upper and lower side will be used. The normalized score for the
board percentile will be obtained by linear interpolation of the JEE-Main aggregate
marks corresponding to the nearest upper and lower percentiles.
• Precision. Board percentiles, normalized board scores and composite scores will be
calculated up to eight places after decimal. After the merit order is determined, the
composite scores/percentiles will be reported up to five places after decimal.
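The percentile and interpolation rules in the bullets above can be sketched as follows. The look-up table is represented here as two parallel lists sorted by percentile, and the function names are illustrative:

```python
from bisect import bisect_left, bisect_right

def percentile(marks, group):
    """Percentile as defined above: 100 * (number of candidates in the
    group with aggregate marks less than the candidate) / (group size),
    kept to eight decimal places."""
    group = sorted(group)
    return round(100.0 * bisect_left(group, marks) / len(group), 8)

def normalized_board_score(p, jee_percentiles, jee_marks):
    """Linear interpolation between the nearest lower and upper JEE-Main
    percentiles; `jee_percentiles` is sorted ascending and `jee_marks[i]`
    is the aggregate mark at `jee_percentiles[i]`."""
    i = bisect_right(jee_percentiles, p) - 1   # nearest lower (or equal) match
    j = bisect_left(jee_percentiles, p)        # nearest upper (or equal) match
    i, j = max(i, 0), min(j, len(jee_percentiles) - 1)
    p_lo, p_hi = jee_percentiles[i], jee_percentiles[j]
    m_lo, m_hi = jee_marks[i], jee_marks[j]
    if p_hi == p_lo:                           # exact match or off either end
        return m_lo
    return m_lo + (p - p_lo) / (p_hi - p_lo) * (m_hi - m_lo)
```

An exact percentile match returns the corresponding mark unchanged; a board percentile falling between two entries returns a proportionally interpolated mark.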
3.7 Summary
The two procedures shortlisted for fine tuning were validated using 2012 data. The
relevant analysis has been reported in this chapter and resulted in selecting one of
the two procedures for normalization. The next chapter explains this method
through an algorithm and flow-chart.
CHAPTER 4 : ALGORITHM AND FLOWCHART FOR NORMALIZATION
Procedure 1 described in Section 3.1 is chosen for computing the normalized composite
score and All India Rank for every student of every board, satisfying eligibility criteria of
JEE-Main. This chapter gives a description of the source data and output, a step-wise
descriptive algorithm and a set of flowcharts for the implementation of this procedure.
4.1 Source Data and Output
For the sake of completeness, details about the structured source data required as basic
inputs to compute the normalized board score of each student, along with the nature of the
output, are described here.
The source data sets for the algorithm are as follows.
1. Board data sets, consisting of Roll number, name, date of birth, father’s name,
marks in different subjects and aggregate marks in respect of each student of a
board (one data set for each board, each of the years 2011, 2012 and 2013).
2. JEE-Main data set, consisting of JEE-Main Roll number, name of student, date of
birth, father's name, name of Board, year of passing board examination, Board roll
number, marks in different subjects and aggregate marks (A0) in respect of each
student who appeared in JEE-Main 2013.
The output of the algorithm is the following.
1. Merit list, consisting of JEE-Main Roll number, name of Board, year of passing
board examination, board roll number, normalized composite score (A1) and All
India Rank in respect of each student who appeared in JEE-Main 2013.
4.2 Descriptive Algorithm
Before the main algorithm starts, some pre-processing of the two source data sets is
needed.
4.2.1 Pre-processing of the Board Data Sets
The pre-processing steps indicated below use the board data as source and create an
augmented board data set as output. This is done for every board, and for each of the
years 2011, 2012 and 2013. These pre-processing steps are also shown as a flowchart
(Section 4.3.1).
Step 1: For all the students of a particular board in a particular year, filter out students
having inappropriate subject combination, while retaining only those who have
Mathematics, Physics, and at least one out of Chemistry, Biology, Biotechnology
and Computer Science.
Step 2: Compute five-subject aggregates from the marks in the following subjects, while
using the best marks in categories 3, 4 and 5.
1. Physics
2. Mathematics
3. Any one of the subjects Chemistry, Biology, Biotechnology and Computer
Science
4. One language
5. Any subject other than the above four subjects.
Step 3: Express these aggregates as percentages and save as a new field in Board
data set. In case the student has only four subjects, use the corresponding
aggregate to calculate the percentage.
Step 4: Compute number of students at each distinct value of five-subject aggregate,
and write this frequency distribution into a look-up table (LUT-B);
Step 5: Compute the percentile (up to eight decimal places) for each value of five-
subject aggregate by using LUT-B and the formula

  100 × (Number of eligible candidates with aggregate marks less than the candidate) / (Total number of eligible candidates)

and augment the look-up table (LUT-B) with this field.
Step 6: Compute percentile of each student by using the augmented LUT-B obtained in
Step 5, and save as another field in the Board data set.
Step 7: Repeat Steps 1-6 for all the three years 2011, 2012 and 2013.
Step 8: Repeat Steps 1-7 for all the +2 boards.
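Steps 1-3 above can be sketched for a single student as below. The language list is illustrative (boards offer many more languages), and the tie-breaking by subject name inside `best` is an arbitrary detail of this sketch rather than a rule from the report:

```python
CAT3 = {"Chemistry", "Biology", "Biotechnology", "Computer Science"}
LANGUAGES = {"English", "Hindi"}   # illustrative subset only

def five_subject_percentage(marks):
    """Steps 1-3 of Section 4.2.1 for one student: `marks` maps subject
    name to marks out of 100. Returns the aggregate as a percentage,
    or None if the subject combination is ineligible."""
    if "Mathematics" not in marks or "Physics" not in marks:
        return None                          # Step 1: Maths and Physics required

    def best(pool, exclude):
        cands = [(m, s) for s, m in marks.items() if s in pool and s not in exclude]
        return max(cands) if cands else None

    chosen = {"Mathematics", "Physics"}      # categories 1 and 2
    third = best(CAT3, chosen)               # category 3: best science option
    if third is None:
        return None                          # Step 1: ineligible combination
    chosen.add(third[1])
    lang = best(LANGUAGES, chosen)           # category 4: best language
    if lang:
        chosen.add(lang[1])
    other = best(marks.keys(), chosen)       # category 5: best remaining subject
    if other:
        chosen.add(other[1])
    total = sum(marks[s] for s in chosen)
    # Step 3: express as a percentage; a four-subject student is divided
    # by 400 rather than 500, as the text specifies
    return 100.0 * total / (100 * len(chosen))
```

In the full algorithm this percentage is then tabulated into LUT-B (Step 4) and converted to a percentile (Steps 5-6).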
4.2.2 Pre-processing of the JEE-Main Data Set
The pre-processing steps indicated below use the JEE-Main data as source and create, as
output, a look-up table relating percentiles with aggregate marks. These pre-processing
steps are also shown as a flowchart (Section 4.3.2).
Step 1: Compute number of students at each distinct value of aggregate marks of JEE-
Main exam, and write this frequency distribution into a look-up table LUT-JEE.
Step 2: Compute the percentile (up to eight decimal places) for each value of aggregate
scores of the JEE-Main exam by using LUT-JEE and the formula

  100 × (Number of candidates with aggregate marks less than the candidate) / (Total number of candidates appeared in JEE-Main 2013)

and augment the look-up table (LUT-JEE) with this field.
4.2.3 Main Algorithm for Generation of the Merit List
The main algorithm described here uses the following as source
1. the augmented board data sets described in Section 4.2.1,
2. the JEE-Main data set described in Section 4.1,
3. the look-up table described in Section 4.2.2,
and creates the merit list as output.
Step 1: Create the merit list data set by copying the following fields from the JEE-Main
data set:
(i) JEE-Main Roll number
(ii) Name of Board
(iii) Year of passing board examination
(iv) Board roll number
(v) Aggregate marks ����.
Step 2: For each student in the merit list data set, use the name of board and year of
passing board examination to identify the appropriate board data set, and
then use the board roll number to obtain the board percentile (P) of the
student. Save this as another field of the merit list data set.
Step 3: For each student in the merit list data set, convert the board percentile (P) into
the transformed board score (B1) in the following way.
a) Determine P2, the largest JEE-Main percentile in the look-up table LUT-
JEE that is smaller than or equal to P. If there is no percentile smaller
than P, then define P2 as the smallest percentile in the list.
b) Determine P3, the smallest JEE-Main percentile in the look-up table LUT-
JEE that is greater than or equal to P. If there is no percentile greater
than P, then define P3 as the largest percentile in the list.
c) Determine, from the look-up table LUT-JEE, the JEE-Main aggregate
marks M2 that correspond to the percentile P2.
d) Determine, from the look-up table LUT-JEE, the JEE-Main aggregate
marks M3 that correspond to the percentile P3.
e) Compute the normalized board score (up to eight decimal places)

  B1 = M2 + ((P - P2) / (P3 - P2)) × (M3 - M2)   if P3 ≠ P2,
  B1 = M2                                        if P3 = P2.

Augment the merit list data set with the new field B1.
Step 4: Compute the composite score A1 = 0.6 × A0 + 0.4 × B1 (up to eight decimal places)
and augment the merit list data with this field.
Step 5: Compute the rank of each student who appeared in JEE-Main and augment the
merit list data with this field.
The algorithm is also shown in a flowchart (Section 4.3.3).
NOTE: While normalizing the board marks, an accuracy of up to eight digits after the
decimal should be maintained while evaluating percentiles and equivalent marks, to
help break ties between students. The composite score/percentile of a student in the
final merit list should be displayed up to five places after the decimal.
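Steps 4 and 5, together with the tie behaviour noted in Section 3.4.3 (students with tied composite scores share a rank), can be sketched as follows; the function and field names are hypothetical:

```python
def merit_list(students):
    """Steps 4-5 of Section 4.2.3: `students` is a list of
    (roll_no, a0, b1) tuples. Computes the composite score
    A1 = 0.6*A0 + 0.4*B1 to eight decimal places and assigns ranks,
    with tied scores sharing the same rank."""
    scored = [(roll, round(0.6 * a0 + 0.4 * b1, 8)) for roll, a0, b1 in students]
    scored.sort(key=lambda t: -t[1])          # highest composite score first
    out, prev_score, prev_rank = {}, None, 0
    for i, (roll, a1) in enumerate(scored, start=1):
        rank = prev_rank if a1 == prev_score else i
        out[roll] = (a1, rank)                # ties keep the earlier rank
        prev_score, prev_rank = a1, rank
    return out
```

With this convention, two tied students at rank 1 are followed by the next student at rank 3, so the rank always equals one plus the number of students strictly ahead.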
4.3 Flowchart
The flowcharts corresponding to the three parts of the algorithm are provided in the next
three sections.
4.3.1 Flowchart for Pre-processing of the Board Data Sets
[Flowchart rendered as a linear sequence of its boxes:]
START
→ Input: Board Data Set (29 boards, 3 years); process each Board-N data set in turn
→ Filter out students having inappropriate subject combination (Step 1: Section 4.2.1)
→ Compute five-subject aggregate marks of each student (Step 2: Section 4.2.1)
→ Compute percentage of marks for each student (Step 3: Section 4.2.1)
→ Compute number of students tied at each distinct percentage of marks; write into LUT-B (Step 4: Section 4.2.1)
→ Compute percentile for each percentage of marks; augment LUT-B (Step 5: Section 4.2.1)
→ Compute percentile (P) for each student by accessing LUT-B (Step 6: Section 4.2.1)
→ Repeat for the three years 2011, 2012 and 2013 (Step 7: Section 4.2.1)
→ Repeat for all boards (Step 8: Section 4.2.1)
→ Output: Augmented Board Data Set (29 boards, 3 years)
→ END
4.3.2 Flowchart for Pre-processing of the JEE-Main Data Set
[Flowchart rendered as a linear sequence of its boxes:]
START
→ Input: JEE-Main Data Set (aggregate marks obtained by each student in JEE-Main)
→ Compute number of students tied at each distinct aggregate of marks; write into LUT-JEE (Step 1: Section 4.2.2)
→ Calculate percentile for each value of aggregate marks of JEE-Main; augment LUT-JEE (Step 2: Section 4.2.2)
→ Output: LUT-JEE
→ END
4.3.3 Flowchart for Main Algorithm for Generation of the Merit List
[Flowchart rendered as a linear sequence of its boxes:]
START
→ Inputs: Augmented Board Data Set (29 boards, 3 years) from the flowchart of Section 4.3.1; JEE-Main Data Set; LUT-JEE from the flowchart of Section 4.3.2
→ Create the merit list data set by copying JEE-Main Roll No, Board name, Year of passing, Board roll No and JEE-Main aggregate marks (A0) (Step 1: Section 4.2.3)
→ For each student in the merit list, identify the appropriate board data set and obtain the board percentile (P) of the student; write into merit list (Step 2: Section 4.2.3)
→ For each student in the merit list, convert the board percentile (P) to the normalized board score (B1), using LUT-JEE; write into merit list (Step 3: Section 4.2.3)
→ Compute composite marks A1 from 60% of A0 and 40% of B1 (Step 4: Section 4.2.3)
→ From A1, compute the rank of each student who appeared in JEE-Main (Step 5: Section 4.2.3)
→ Output: JEE-Main Merit List
→ END
4.4 Proposed Timelines for Processing and Analysis of Results
Declaration of JEE (Main) results by the CBSE : 7th May 2013
Declaration of Class XII results by the Boards : 25th May – 10th June 2013
Obtaining Results Database from the Boards : 25th May – 10th June 2013
Mapping of Roll Numbers & Authentication
of the Candidates : 01st June – 15th June 2013
Computation of Board percentiles (P) : 01st June – 20th June 2013
Computation of normalized (modified) Board
scores, composite scores and rank list: 20th June – 30th June 2013
Declaration of JEE (Main) results : 01st July 2013
NOTE : Those applying for re-evaluation / re-checking shall be permitted to submit
their results by 25th June 2013.
CHAPTER 5 : CONCLUSIONS
The task assigned to the committee was found to be more than challenging in view of the
diversity of examination processes followed by various boards, general operational
constraints, the absence of a unique approach and limitations of time. However, based
on the available inputs from various reports and the thoughts and ideas received
from several experts, the committee finalised its recommendations, which are
presented in Chapter 4 with a functional description of the approach in the
form of a set of algorithms and a set of flowcharts.
A statistical problem of this magnitude and nature has to deal with inherent variations and
non-linearity existing in the data to be processed, and also with confounding of various
factors. It is recognized that no statistical procedure can produce a perfect normalization.
While making the recommendations, the committee would like to point out clearly that there is
no unique and perfect method to solve the given problem. Accordingly, it was decided to
finalise a method that is workable and practicable under the given set of conditions.
Differing views on the method are possible, and some readers of this report may find a
slightly different method more reasonable. However, in the opinion of the committee,
the recommended approach is the best under the present circumstances, and one
can expect to make it more robust based on the experience gained in future years.
Hence, it is essential, in the interest of clarity and operability, that the approach be
publicised amongst the stakeholders well in advance through public discourses in the form
of standard presentations, media coverage, blogs, etc. Further, the CBSE, which is
going to conduct JEE-Main, should also be requested to constitute a Core Group for
Normalization and its implementation, mainly focussing on the following aspects.
Data Collection: All data collected should be in the standard formats designed by the
group.
Nature of Data: The boards should be asked to submit subject-wise/aggregate marks data
in respect of all the students who have Physics, Maths, language, elective and any one of
Chemistry, Biology, Biotechnology and Computer Science.
Validation of Data: The group should evolve methods to cross check the accuracy and
the nature of data supplied.
Timeline for Data Collection: The results should be submitted in a given time frame as
decided by the Core group at CBSE. Results of re-evaluation or the re-examination cases,
if any, should also be delivered in the prescribed time limit. In the absence of such results
(corrected ones), the available results should be considered to be valid for all practical
purposes including that for generation of merit lists.
Data Processing: Normalizing the marks and making the stakeholders aware of the
processes and its outcome.
The committee sincerely hopes that, with the above aspects fully taken care of, the new
initiative of the MHRD, Govt. of India in empowering school education and making the
admission system more robust will become a reality from the ensuing academic year.
CHAPTER 6 : ACKNOWLEDGEMENTS
The committee would like to thank the Ministry of Human Resource Development, Govt. of
India for providing an opportunity to the committee to undertake this study of an important
task. The committee would also like to thank several experts, particularly from ISI, the IITs,
the NITs, CBSE and several other institutions, who have contributed immensely to the
validation of the normalization process. The committee also records its appreciation for all
the Boards of Secondary Education, CBSE and various committees, for sharing their data
and findings to give shape to the report.
The committee also acknowledges several experts and agencies, namely Prof. Jim
Tognolini, CAER Delhi, ACER Australia, and the Ramasami Committee and all its members,
for their contributions in finalising the right approach and its subsequent validation. Finally,
the committee records its gratitude to CBSE for hosting all meetings and providing the
required logistics, and to CCB-2012 for supporting it financially.
CHAPTER 7: REFERENCES
1. Rasch, G. (1960). Probabilistic Models for Some Intelligence and Attainment Tests.
Danish Institute for Educational Research, Copenhagen; Expanded edition (1980):
University of Chicago Press, Chicago.
2. Andrich, D. (2005). The Rasch model explained. In Applied Rasch Measurement: A
Book of Exemplars, S. Alagumalai, D.D. Curtis and N. Hungi (Eds.), Chapter 3,
Springer-Verlag, 308-328.
3. Kolen, M.J. and Brennan, R.L. (2004). Test Equating, Scaling and Linking: Methods
and Practices; Second Edition, Springer, New York.
4. Rowley, G. (2012). Combining AIEEE Scores and Board Assessments for Tertiary
Selection Purposes in India. Note prepared on behalf of the Australian Council for
Educational Research (ACER) to advise Chairman, CBSE; included in Appendix C
of this report.
5. Liao, T.F. (2002). Statistical Group Comparison. Wiley, New York.
CHAPTER 8: APPENDICES
Appendix A : Report of “Ramasami Committee” on “Examination and Admission
system in Engineering Programmes” (without annexure)
Appendix A1 : MHRD, GoI Order F.No. 19-4/2010-TS I dated 11th November, 2010
constituting “Ramasami Committee” to assess the examination and
admission system in engineering programmes
Appendix A2 : MHRD, GoI Order F.No. 19-2/2010-TS I dated 8th March, 2010
constituting “Acharya Committee” to look into the strengthening and
rationalizing JEE, GATE, JMET, JAM, etc.
Appendix B : MHRD, GoI Order No. F.33-5/2012-TS III dated 13th August, 2012
constituting “Joshi Committee” for validating normalization formula of
+2 Marks
Appendix C : Input from ACER to Chairman, CBSE
Appendix D : Input from CAER to Chairman, CBSE
Appendix E : Input from the “Core Committee” to Chairman, CBSE
Appendix F : Analyses carried out for Validation / Fine-tuning
Part 1: Analysis by D Sengupta and AG Bhatt, ISI
Part 2: Analysis by BM Gupta, CBSE
Part 3: Analysis by Neeraj Mishra, IIT Kanpur
Part 4: Analysis by BS Daya Sagar, ISI Bangalore
Part 5: Analysis by Jim Tognolini and Jon Twing, CAER
APPENDIX A

Alternate Admission System for Engineering Programmes in India

Expert Committee
T. Ramasami, Ashok Thacker, D. Acharya, B.K. Gairola, Mukul Tuli, P. Arora

Submitted to
Ministry of Human Resource Development, Government of India
September, 2011
Background
The current system, based on multiple entrance examinations for admission into
engineering programmes, has no parallel in other parts of the world. Most nations
employ just one test, usually an assessment of scholastic aptitude, instead of a
plethora of evaluation tests.
The current selection systems in India have, no doubt, resulted in visible benefits;
but the future of Indian youth may need a paradigm shift in admission systems for
engineering programmes to ensure opportunity for larger sections of society. The
extreme level of competitiveness in the screening processes employed for deciding
access to professional education is not without psychological and sociological
implications for society: it influences the mindset and behaviour of the youth.
The Ministry of Human Resource Development has been grappling with the need to
design and develop an alternative to the current system of multiple examinations for
deciding admission of students to engineering programmes in the country. A
committee was constituted under the Chairmanship of Professor D Acharya, Director,
IIT Kharagpur. The Acharya Committee presented in its interim report an alternative
to the present examination system for admission into engineering colleges, including
the IITs. While there was unanimity that the present examination system of JEE,
AIEEE etc. has to change to reduce the burden on students arising from the
multiplicity of entrance examinations, there was also emphasis that any new system
has to recognize the diversity of learning within the country.
In order to address comprehensively the reality of this diversity of learning, the
Ministry enlarged the committee with Dr T Ramasami, Secretary, Department of
Science and Technology, Government of India as the Chair and Prof Acharya as the
expert member from the IITs. The enlarged committee included some alumni of the
IITs, including one who graduated from an IIT within the last five years. The
composition of the committee is given in Annexure 1.
Underlying Philosophy behind Alternatives to the Current Test Scheme
“Unity in diversity” is the Indian brand value. Unification, while retaining the diversity
of the educational and learning systems in the country, is the underlying strategy of
the proposed alternative Test Scheme for deciding admission into engineering
colleges, including the IITs. An overarching philosophy behind developing test
schemes for reducing the multiplicity of entrance examinations is presented in
Annexure 2.
Lessons from the Acharya Committee Report
The interim report of the Acharya Committee (Annexure 3) formed the main basis on
which this alternative test scheme for engineering colleges, including the IITs, has now
been developed. Some key recommendations of the Acharya Committee are:
• Screening based on normalized Board scores at Standard X and/or Standard
XII and a Multiple Choice examination replacing the two-stage JEE from 2006.
• The entry barrier is to be raised to 60% in the +2 examinations.
• Factors other than the Standard XII marks and the All India Rank (AIR) based on
Physics, Chemistry and Maths (PCM) testing, such as raw intelligence, logical
reasoning, aptitude, comprehension and general knowledge, need to be
considered.
• School performance needs to be factored more significantly into the selection process.
From the discussions held by this committee the following additional desirable
features of the admission process were identified:
• Decisions based on a one-time test need to be re-examined. Opportunities to
improve must be built in.
• Students must be relieved of the pressure of multiple JEEs. Currently a
student appears, on average, in 5 JEEs, all within a few days of the Board
Examinations.
• The influence of coaching for JEE needs to be minimised.
• Urban-rural and gender bias has to be eliminated, or at least minimised.
• The objective type of examination lends itself to undue influence of coaching.
The conventional pen-and-paper examination, with well designed, long,
problem-solving-oriented questions, should be revived by keeping the numbers
in any JEE within reasonable limits.
• JEEs, especially the IIT-JEE, have become a huge money-spinning activity
for coaching centres, with attendant undesirable consequences.
Recognising the realities of the current situation in the admission system for
engineering programmes
The present system of multiple competitive examinations, as observed by the Acharya
Committee, has emerged because of the large demand-supply gap in access to high-
quality engineering education and the unevenness in levels of excellence across
various centres. The diversity challenge associated with the various school boards is
one of the reasons for the emergence of multiple entrance examinations for deciding
admission into engineering programmes.
It must be recognised that some competitive examinations, such as the joint
entrance examination conducted by the IITs, have proved their process integrity
and gained global acclaim. IIT-JEE is a proven system that works. AIEEE is another
large-scale entrance examination which has gained a high level of social acceptance.
Any alternative proposed should match the process integrity and robustness of JEE
and AIEEE.
Since millions of talented youngsters compete for fewer than tens of thousands of
slots in elite engineering institutions, the use of high-band filters like IIT-JEE or
AIEEE may, perhaps, seem essential.
Nevertheless, even while it must be recognised that most high performers in such
competitive examinations are extremely talented, it is not clear whether IIT-JEE-type
examinations are missing a section of the talent base which should not be missed.
Concerns have been expressed that the models employed by the current examination
systems could promote guessing behaviour among students seeking admission into
engineering programmes. The psychological and sociological dimensions of testing
and evaluation systems that focus on extremely narrow-width, high band-filters are
not unimportant. The unintended consequences of asymmetries in the types of
clientele, and the challenges of social behaviour posed by such extremes, cannot be
discounted.
The vast majority of youth living in smaller towns and far-flung places, as well as
economically weaker segments of society, are not able to join the competitive stream
today. For the youth, the future seems to be decided by success or otherwise in
one competitive examination or another. The present system seems to be unwittingly
promoting a societal behaviour and a mindset towards differentiation rather than
integration.
Alternative test schemes for admission: what should they aim at?
The Alternative Test Scheme should ideally
1. evaluate the ability of the learners rather than their preparedness and
competitiveness
2. reveal, in a transparent manner, the latent potential of the learners, to match the
emerging opportunities in the engineering education sector and link to the
development of the national economy
3. aim to provide more proportional representation of various regions and parental
income levels without causing rural-urban divides
4. reduce the burden of education administration on faculty in elite engineering
institutions, so that their greater participation in research and academic roles can
be further facilitated
5. match the rigour and process integrity of the best global models in the admission
systems currently employed in engineering programmes in the country, and
6. offer opportunities to retain the “unity in diversity” principle of the country by
permitting scientific methods of giving allowance to scholastic performance in the
various board examinations in deciding admission criteria for engineering
programmes in the country.
Process adopted for developing the Alternative Test Scheme
Education is much too important for any committee to overlook the consequences of
inadvertent errors in decision making. Therefore, the committee chose to engage as
many stakeholders as possible in designing the Alternative Test Scheme for
admission into engineering programmes.
There are many state school boards which conduct their own examinations for
assessing their students and issuing certificates. The sheer diversity of these
examinations poses challenges of normalization and of deciding eligibility for
admission into national centres of excellence.
The multiplicity of competitive examinations, leading to duplication of effort, may be a
direct result of the diversities and complexities involved in the inter-comparison of the
scoring systems of the various school boards. As a result, most elite institutions
disregard performance in school examinations. They develop their own competitive
test methods and depend too heavily on ranks and scores. Consistency of
performance across different examinations is not considered necessary. Performance
in a single examination comes to influence the entire career opportunities of a
candidate, with attendant social implications.
While competitive examinations of the IIT-JEE type, based on multiple choices and
negative scoring, are celebrated, a recent analysis points out the inherent limitations
of such systems on the one hand and the benefits of non-negative scoring methods
on the other (see Karandikar, Current Science, 99, No. 8, 25th October 2010).
Alternative admission systems for engineering programmes should find innovative
ways of retaining the diversity of the many school boards and yet derive value from
their test scores for decision-making by educational institutions. Such an innovation
now seems possible and realistic. In order to select the best possible alternatives,
wide-spread consultations and a research study were undertaken.
Consultation
Several consultations with stakeholders were held. The process of consultation
included those with:
1. the public, through an opinion poll
2. states and school boards
3. educators from elite institutions like the IITs
4. professional experts in evidence-based criteria selection, and
5. statistical experts, for a modelling study reconstructing past scenarios
Research Plan
Past data on scores in the examinations of different school boards were sourced and
analyzed to design normalization methods based on sound statistical tools.
Evidence-based and objective criteria for assessing the inter-operability of the test
scores of various school boards were examined with the professional help of experts.
Different statistical models were constructed and investigated for reliability and ease
of implementation. Systems of evaluation based on technology tools were prioritized.
The interim report of the Acharya Committee made some important observations and
recommendations on an Alternative Test System (Annexure 3) based on its own
research findings. Some attempt has been made to reconstruct past scenarios using
data on students who passed the IIT-JEE entrance examinations during the last five
years.
The committee also recommends a research study involving a pilot test among a
select group of students and an evaluation of various test models, for minimizing the
number of examinations but not their rigour and challenge. It is considered necessary
also to consult experts in the social sciences in devising a system of reporting test
results which gives institutions sufficient input for decision making and selection of
candidates, without leading to negative psychological and sociological outcomes for
the youth.
Public Participation in Opinion Survey
An on-line opinion survey was carried out among the people of India, seeking public
opinion on the current competitive examination systems employed for admission into
engineering programmes. Specific views were sought on:
• a multi-parametric grading system as against single-test models, and
• screening-out as against selection strategies
A special questionnaire, presented in Annexure 4, was designed and hosted on the
national portal of India website maintained by NIC. The survey remained open for
three weeks, between 1st and 21st June 2011. More than 2000 people responded to
the study. A social network presence through Facebook was also established, which
received about 400 hits. A detailed report of the findings from the public opinion
survey is presented in Annexure 5.
The survey also sought information on responder profiles, opinions on various models,
suggestions for alternative national test systems, and risk mitigation strategies for
implementation. The suggestions received are compiled in the report on public
opinion presented in Annexure 5.
Analysis and Internalization of Some Key Recommendations Emanating from
Public Opinion
An overwhelming majority of respondents to the public opinion poll (more than 70%)
expressed support for Alternative Test Schemes that avoid multiple entrance
examinations for admission into engineering education in the country. Public opinion
supports a) weighing, in some transparent manner, the scores obtained in school
board examinations, b) a mix of an aptitude test (like the Scholastic Aptitude Test,
SAT, of the USA) and an advanced test (like the IIT Joint Entrance Examination),
c) offering candidates more than one chance to take the National Level Test, and
d) conducting the national level test more than once each year.
One of the serious concerns expressed by the public, with respect to both the
National Level Test and the School Board Examinations, is the level of process
integrity in setting the question papers and in the conduct of the examinations.
These concerns are presented in Annexure 5.
Consultation and Cooperation with School Boards
Consultations were held with school boards to seek permission for access to data
and to enrol boards in the research. An attempt was made to learn the concerns of
the states and school boards. The committee believes that it is necessary to build
social trust in the alternative admission systems among the stakeholders. Innovations
are required for managing the diversity challenges of school board scores before
they can be employed as inputs to alternative admission systems in elite engineering
institutions like the IITs.
Consultation with Faculty of Elite Institutions and Opinion Leaders in Academic
Bodies
Consultations with the faculty of some elite institutions and opinion leaders in
academic bodies were held in the process of developing an alternative admission
system. This consultation process, at various stages, focused on a) learning about
their concerns, b) gathering experience, c) debating alternatives and d) building trust.
The faculty and Directors of the IITs participated in the selection of various approaches.
Results of the public opinion survey were presented to a committee of Directors of
IITs, and a copy of the report contained in Annexure 5 was provided to them for study.
The committee believes that the enrolment of faculty involved in some of the
competitive examinations is critical, because they are truly important shareholders.
The consultation attempted to a) address the concerns of senior faculty, b) test some
of the hypotheses, c) convince faculty with opposing views, if any, and d) enrol some
of the faculty in implementation work.
Research on Examination Methodologies for Screening for
Admission into engineering programmes
1. Work of experts of Indian Statistical Institute for normalization of scores
of various school boards
The selection of evidence-based and objective criteria is critical for the acceptance of
alternatives in preference to the currently established admission systems, which
enjoy a high level of acceptance among stakeholders and shareholders. The
application of rigorous, open-minded research methodologies was considered
necessary. A team of experts was assembled to work in a time-bound manner.
Evidence-based identification of criteria was the focus for the development of
alternatives to the current admission systems.
One of the most important points considered necessary by both this committee and
the Acharya Committee is that there should be a rigorous and scientific approach to
factoring the scores of school boards into the admission systems for engineering
programmes in the country.
The Indian Statistical Institute (ISI), the leading institution in the field, was assigned
the task of developing methods for the normalization of score data emanating from
the various school boards. For the pilot testing of the normalization concepts, data
from the Central Board of Secondary Education (CBSE), the Tamil Nadu State School
Board Examination (TNSSBE), the West Bengal State Board Examination (WBSSBE)
and the Indian Council for School Examination (ICSE) were selected. The findings of
the experts from ISI are presented in Annexures 6 and 6A.
ISI carried out all the required research investigations. For the same school board,
data were analyzed as per equations 1 and 2.

Normalized score = (X1 – X2) / (X3 – X2)          eq. 1

where X1 is the mark obtained by the candidate, X2 is the mark of the selected
percentile rank holder, and X3 is the maximum mark scored by any candidate. In this
correlation, scores range between 0 and 1, as shown in Figure 1 (Annexure-6). The
correlation of the ratio of the score obtained by the candidate to the score at the
selected percentile cutoff, as in eq. 2, seems to maintain linearity over a larger range,
as in Figure 2 (Annexure-6).

Normalized score = X1 / X2          eq. 2
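For illustration only (the marks below are hypothetical; the actual computation is described in Annexure 6), the two transformations can be sketched as:

```python
def normalize_eq1(x1, x2, x3):
    """Eq. 1: (X1 - X2) / (X3 - X2).
    x1: mark obtained by the candidate
    x2: mark of the selected percentile rank holder (the cut-off)
    x3: maximum mark scored by any candidate"""
    return (x1 - x2) / (x3 - x2)

def normalize_eq2(x1, x2):
    """Eq. 2: X1 / X2, the ratio of the candidate's mark to the
    cut-off holder's mark."""
    return x1 / x2

# Hypothetical example: cut-off mark 60, top mark 95, candidate's mark 81
print(normalize_eq1(81, 60, 95))  # 0.6
print(normalize_eq2(81, 60))      # 1.35
```

Note that eq. 1 pins the cut-off holder to 0 and the topper to 1, while eq. 2 is a pure ratio and so is not bounded above by 1.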
The stability of the scores of each board over different years was first tested by
examining the profiles of percentile scores over a period of time. The experts of ISI
reported that, through a monotone transformation, it will be possible to map the
profiles of all boards onto one selected board and create a normalization routine.
Profiles for the four boards are presented in Figures 3 and 4 (Annexure-6).
Normalized percentile ranks with different cut-offs (for example 75%) have been
computed for all boards as in eq. 3.

Normalized percentile rank = (Percentile rank of student – 75) × 100 / (100 – 75)          eq. 3

When the normalized percentile rank is plotted against the percentile rank with, say,
a cut-off at 75%, a linear relation is obtained, as in Figure 5 (Annexure-6). The
experts from ISI report that the linear correlation of Figure 5 (Annexure-6) will be
the same for any board in any year.
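Eq. 3 is a simple linear rescaling of the ranks above the cut-off onto a 0-100 scale; a minimal sketch (illustrative values only):

```python
def normalized_percentile_rank(rank, cutoff=75.0):
    """Eq. 3 with a configurable cut-off (75 in the text's example):
    (rank - cutoff) * 100 / (100 - cutoff)."""
    return (rank - cutoff) * 100.0 / (100.0 - cutoff)

# A candidate exactly at the cut-off maps to 0; the topper maps to 100.
print(normalized_percentile_rank(75.0))   # 0.0
print(normalized_percentile_rank(90.0))   # 60.0
print(normalized_percentile_rank(100.0))  # 100.0
```

Being affine in the percentile rank, this transformation necessarily plots as a straight line for any board in any year, which is consistent with the linearity reported for Figure 5.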
2. Some Recent Work on Selection of Types of Examinations for Screening
Recently, Karandikar (Current Science, 99, No. 8, October 2010; Annexure 7) has
analyzed the consequences of multiple choice tests and negative marking as
practiced recently in several screening examinations, including the entrance
examinations employed for admission into engineering programmes in the country.
The impact of marking schemes with negative scoring and multiple choices has been
examined using statistical principles. Models were postulated for the distribution of
marks and the guessing behaviour of candidates when they do not know the correct
answer. The work simulated the statistical outcomes of such tests and the
probabilities of candidates who should not have been selected getting selected
through random guessing. The probabilities of gate-crashing into the selection list
through multiple choice examinations with a unique right answer and negative
marking have been examined.
The work highlights the value of traditional question-answer tests, where the
candidate is required to write down the solution along with the steps, over objective
tests with multiple choices and one right answer. It recommends that if, for practical
reasons, screening tests must resort to multiple choice questions evaluated by
computer, a better alternative would be to design tests with more than one correct
answer, giving credit only when a student selects all the right answers and no wrong
answer.
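One way to read this credit rule is as all-or-nothing marking per question (a hypothetical sketch of the idea, not Karandikar's exact scheme):

```python
def credit(selected, correct):
    """Full credit only when the candidate marks every right option and
    no wrong one; otherwise zero (and no negative marking)."""
    return 1 if selected and set(selected) == set(correct) else 0

print(credit({"A", "C"}, {"A", "C"}))       # 1: all right options, none wrong
print(credit({"A"}, {"A", "C"}))            # 0: incomplete
print(credit({"A", "B", "C"}, {"A", "C"}))  # 0: includes a wrong option
```

Under such a rule the probability of earning credit by blind guessing falls sharply, since a guesser must hit the exact subset of right options among all possible subsets.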
The recent work of Karandikar further reiterates and supports the position of the
committee that some weighting of the school board examinations would be gainful.
Since school boards can deploy traditional question-answer tests, where a candidate
is required to write down solutions, any weighting scheme which takes into
consideration the scores obtained in school board examinations would be valuable in
light of this work.
The merits of conducting objective, multiple-choice tests of advanced knowledge for
admission into education programmes are to be evaluated in light of other factors as
well. Whereas such tests are useful for assessing aptitude, proficiency in advanced
knowledge is perhaps better tested through examinations where the candidates are
expected to write down the solutions, as was the case in the IIT-JEE in earlier years
and in school board examinations currently.
General Approach Suggested for an Alternative Admission System for
Engineering Programmes
The committee suggests an approach that employs the scores obtained by the same
candidate in different types of examinations, rather than relying entirely upon
performance in one screening-type examination like IIT-JEE or AIEEE.
Now that a reasonable model has been devised by the professional experts from ISI
for the normalization of scores from different boards, the committee recommends one
of the two possible specific approaches below.
Approach 1
• weighing consistency of performance in school board examinations, employing
them to test the ability to write solutions, and
• one objective screening test with two sections: one testing aptitude and the
other advanced knowledge in domain areas.
Approach 2
• weighing consistency of performance in school board examinations, employing
them to test the ability to write solutions, and
• one objective aptitude test based on multiple choices and computer-based
correction systems.
Objective tests for the assessment of aptitude, employing multiple choices and
computer-assisted evaluation, could be designed on the general pattern of the
Scholastic Aptitude Test of the USA.
Advanced tests for evaluating knowledge in domain areas could be designed and
fashioned after the Joint Entrance Examination of the IITs, with one improvement
suggested by Karandikar, namely answer choices bearing more than one right
answer, to avoid gate-crashing of the wrong candidates into the selection list.
Both the Aptitude and Advanced tests could be included in the same paper, giving
the candidate the option of taking both sections or the aptitude section alone.
Each candidate might be permitted a maximum of three chances to take the National
Level Screening Test. The committee recommends that the National Level Screening
Test be conducted at least twice a year.
Individual institutions could be given the liberty of choosing weighting factors for the
different examinations within a specified guideline. For example, the IITs could
choose about 40% weighting for school board scores and 30% each for the aptitude
and advanced tests, whereas some state-based institution could weigh school board
scores, as per the revised normalized system, as high as 70% and the National Level
Screening Aptitude test at 30%.
The committee believes it is important to avoid multiple screening tests, and to
weight the multiple types of tests already being conducted proportionally, so as to
avoid over-weighting any one mode of testing in which preparedness and
gate-crashing of non-ideal candidates cannot be ruled out.
Suggestions for Factoring Normalization of Board Scores into the Screening
Process
The aggregate percentage scores of candidates in the class XII examinations of their
respective boards could first be converted into percentile ranks within their own
boards, and then normalized through percentile ranks as in eq. 3 with a common
cut-off, so that each candidate is accorded a normalized percentile rank irrespective
of the board which conducted the examination. This could be expressed in the form
of a normalized grade for the school board, termed A1.
A similar exercise could be carried out for the aggregate percentage in the subject
examinations relevant to the higher education desired by the candidate (for example,
all science subjects for seeking admission into engineering), termed A2.
By according equal weighting to both the aggregate percentages and the subject
scores, half of (A1 + A2) could be computed for each candidate and reported as A3,
corresponding to class XII performance.
Performance in the aptitude section of the National Level Screening Test could be
evaluated separately, and each candidate accorded a national score A4.
Performance in the advanced section of the National Level Screening Test could
likewise be evaluated, and each candidate accorded a national score A5.
Suggestion of different options
Option 1: Deployment of Scores as criteria based on class XII performance only
• Equal weighting of school board scores A1 and A2
• Equal weighting of aptitude score A4 and advanced score A5
Normalized score = {A1 + A2 + A4 + A5}/4
Option 2: Deployment of Scores as criteria based on class XII performance only
• Equal weighting of board score A3
• Equal weighting of aptitude score A4 and advanced score A5
Normalized score = {A3 + A4 + A5}/3
Option 3: Deployment of Scores as criteria based on consistency of
performance at class X and class XII levels as well as in the National Level
Aptitude and Advanced Tests
• 0.1 weighting each for aggregate and subject performance at class X and
class XII levels: 0.1 × (normalized score at class X in aggregate
+ normalized score at class X in subjects of choice + normalized score at
class XII in aggregate + normalized score at class XII in subjects of choice)
• 0.3 weighting of aptitude score: 0.3 A4
• 0.3 weighting of advanced score: 0.3 A5
Normalized score = 0.1 {Normalized class X aggregate +
normalized class X subject score + normalized class XII aggregate +
normalized class XII subject score} + 0.3 A4 + 0.3 A5
Option 4: Deployment of School Board Performance for screening but not as
a determinant of National ranks
• Specify a cut-off normalized percentile rank for school performance,
say the 80th or 85th percentile rank
• 50% weighting of the National Level Aptitude score A4 for candidates
passing the percentile rank cut-off
• 50% weighting of the National Level Advanced score A5 for candidates
passing the percentile rank cut-off
Normalized score = 0.5 A4 + 0.5 A5
Option 5: Deployment of School Board performance as subject score and the
National Level Aptitude Test in combination, avoiding the Advanced Testing
system, according individual institutions freedom to select the mixing
proportions within a pre-specified guideline
Option 6: Equal weighting of School Board performance as subject score and
the National Level Aptitude Test as an objective test system, where
Normalized score = 0.5 A2 + 0.5 A4
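The options that come with an explicit formula can be summarised in one sketch (A1, A2, A4 and A5 as defined above; the input values below are hypothetical):

```python
def composite_scores(A1, A2, A4, A5):
    """Composite normalized score under each option that has an explicit
    formula in the text. A3 is half of (A1 + A2), as defined earlier."""
    A3 = (A1 + A2) / 2.0
    return {
        "Option 1": (A1 + A2 + A4 + A5) / 4.0,
        "Option 2": (A3 + A4 + A5) / 3.0,   # the committee's 1st preference
        "Option 4": 0.5 * A4 + 0.5 * A5,    # after the percentile-rank screen
        "Option 6": 0.5 * A2 + 0.5 * A4,    # the committee's 2nd preference
    }

# Hypothetical normalized inputs
print(composite_scores(A1=80.0, A2=90.0, A4=70.0, A5=60.0))
```

Options 3 and 5 are omitted from the sketch: Option 3 additionally requires the four class X and class XII normalized components, and Option 5 leaves the mixing proportions to the individual institution.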
Further Work Suggested
1. There are as many as 42 school boards in the country conducting examinations
at school levels, on slightly varying schedules. Such differing schedules may
pose challenges; some work may be required to align the time schedules of the
board examinations and the National Screening Tests.
2. Although ISI seems to have developed a scientific methodology for the
normalization of school boards' scores, based on a pilot study involving four
typical school boards, it may be necessary to access data from all 42 boards
and test-run the findings of the ISI experts.
3. It will be beneficial to apply the recommended methodology to candidates
selected for admission into the IITs and NITs during the last four years, using
data on current students sourced from IIT-JEE and AIEEE as well as their
school board scores at class X and XII levels. This will help in ground-truthing
and revalidating the proposed methods.
Recommendations of the Committee
The committee makes the following recommendations for the consideration of the IIT
Council.
A. Normalization of School Board Scores
• ISI has proposed a method for the normalization of the scores of candidates of
various school boards and demonstrated its potential to derive normalized scores.
This method seems to offer the possibility of factoring performance in school
board examinations into a criterion for merit-ranking of students for admission
into higher education.
• ISI may be commissioned by the IIT Council to further refine the methodology
and establish its potential by proving its utility for the normalization of all board
scores over a period of time, based on past data.
• The method of ISI may also be revalidated by some other institution for ease
of application.
B. National Screening Test Scheme
• One National Screening Test (NST) with two sections, namely Aptitude and
Advanced, could be designed and developed.
• The test could be of 3.5 to 4 hours' duration, with an option for candidates to
opt out of the advanced section after examining the paper for, say, 15 minutes.
• The aptitude section could employ multiple choice questions which enable
evaluation by computer.
• The advanced section could involve multiple choices with multiple right answers,
minimizing gate-crashing by candidates with limited merit.
• An expert committee of educators could be constituted to design best-fit
models of National Screening Test methodologies.
C. Testing and Evaluation related Organizational Matters
• The IITs may be assigned the task of designing the Alternative Screening Test.
• While the question papers may be set by experts drawn from educational
institutions like the IITs, IISc, NITs etc., the logistics of conducting and
evaluating the examinations may be assigned to a specialist organization,
taking into account the large scale of the operation and the need for
professionalization.
D. Enrollment of Policy Bodies
• A project for reconstructing past scenarios may be commissioned to the IITs,
NITs and leading universities, employing the methods developed through this
research.
E. Order of Preference of the Committee
The committee has considered the various options. An order of preference is
indicated below for discussion and finalization by the IIT Council.
Recommended order of Preference of options
1st Preference: Option 2
Equal weighting of the school board score at class XII (combining
aggregate and science scores) A3, the national level aptitude score
A4 and the advanced score A5; {A3 + A4 + A5}/3
2nd Preference: Option 6
Equal weighting of School Board performance as subject score
and the National Level Aptitude Test as an objective test system;
0.5 A2 + 0.5 A4
3rd Preference: Option 5
Deployment of School Board performance as subject score and the
National Level Aptitude Test in combination, avoiding the Advanced
Testing system, according individual institutions freedom to select
the mixing proportions within a pre-specified guideline
4th Preference: Option 4
Deployment of school board performance as a screen but not
as a determinant of national ranks (for example, a specified cut-
off on the normalized percentile rank score for school performance,
say the 80th or 85th percentile rank).
Equal weighting of the national-level aptitude score A4 and the
national-level advanced score A5 for candidates passing the
percentile-rank cut-off: 0.5 A4 + 0.5 A5
5th Preference: Option 1
Deployment of scores as criteria based on class XII performance.
Equal weighting of school board scores A1 and A2 and equal
weighting of aptitude scores A4 and advanced scores A5:
(A1 + A2 + A4 + A5)/4
6th Preference: Option 3
Deployment of scores as criteria based on consistency of
performance at class X and class XII levels as well as in the
national-level aptitude and advanced tests.
Equal weighting for aggregate as well as subject performance at
class X and class XII levels: 0.1 × (normalized score at class X in
aggregate + normalized score at class X in subjects of choice +
normalized score at class XII in aggregate + normalized score at
class XII in subjects of choice); one-third weighting of the aptitude
score, 0.3 A4; one-third weighting of the advanced score, 0.3 A5:
0.1 {normalized class X aggregate + normalized class X subject
score + normalized class XII aggregate + normalized class XII
subject score} + 0.3 A4 + 0.3 A5
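Taken at face value, the option formulas above can be sketched in code. This is a hypothetical illustration only: the report's A1–A5 notation is kept, and the assumption that all components are already expressed on a common normalized scale is mine, not the committee's.

```python
# Sketch of the six composite-score options (hypothetical; assumes A1..A5
# are already normalized to a common scale, which the report leaves open).

def option_2(A3, A4, A5):
    """1st preference: equal weighting of board, aptitude and advanced scores."""
    return (A3 + A4 + A5) / 3

def option_6(A2, A4):
    """2nd preference: equal weighting of board subject score and aptitude test."""
    return 0.5 * A2 + 0.5 * A4

def option_4(A4, A5, board_percentile, cutoff=80):
    """4th preference: board score used only as a screen, not in the rank."""
    if board_percentile < cutoff:
        return None  # candidate screened out by the board-performance cut-off
    return 0.5 * A4 + 0.5 * A5

def option_1(A1, A2, A4, A5):
    """5th preference: equal weighting of board scores and both test scores."""
    return (A1 + A2 + A4 + A5) / 4

def option_3(x_agg, x_subj, xii_agg, xii_subj, A4, A5):
    """6th preference: consistency across class X, class XII and both tests."""
    return 0.1 * (x_agg + x_subj + xii_agg + xii_subj) + 0.3 * A4 + 0.3 * A5
```

Note that the weights in each option sum to 1, so every composite stays on the same scale as its inputs.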
Concluding Remarks
Complexities of developing alternative test schemes for deciding admission in
engineering programmes arise from a) diversity and b) scale of operations. The
committee is conscious of the ground realities and the challenge of suggesting
alternative methods for some test and evaluation systems, which have gained social
esteem and trust. Therefore, the committee has relied on scientific tools for gathering
evidence as much as possible and not on perception based approaches. The
committee is of the view that changes in paradigms are essential in this phase of
development of India.
One National Screening Test for admission into engineering programmes supported
by methodologies for factorizing scores obtained in school board examinations while
retaining their diversities seems the way forward. The committee does make a strong
case for such a change in paradigm.
Some options have been recommended. The committee has consciously adopted a
probabilistic rather than deterministic approach, taking into account the complexities
involved in the exercise. The committee is also conscious of the fact that some of
the recommendations may have relevance beyond admission into IITs, to other
engineering programmes.
As a measure of abundant caution, the committee recommends selection from
among the six options by an expert committee, taking into account the challenge of
convincing society of the security of the normalization methodologies for school
board examination scores developed by ISI on the basis of scientific tools.
Acknowledgement
The committee thanks the Ministry of Human Resource Development for the
opportunity to participate in this important National endeavor. Members of the
committee have consulted several experts and students individually and collectively.
Many experts from NIC, DST, IITs, ISI, Chennai Mathematical Institute and general
public participated in this study and in preparation of this draft report. Their support
and cooperation is acknowledged. The help of Dr. Parveen Arora, Scientist,
Department of Science and Technology, in the preparation of the report is gratefully
acknowledged.
---xxxxx----
Post Script
The draft report was presented to the IIT Council in the meeting held on 14th Sept,
2011 at IIT Delhi. The Council accepted and approved the principle enshrined in
the report.
The Council has authorized a small group of IIT Directors to meet and select the
preferred options, while indicating a preference for Options 2 and 6.
The Committee recommended that an Internal Committee may analyse and select
the preferred options from among those recommended in this report.
There is a latent potential to enlarge the scope of this work and embark upon a
single National Test Scheme for admission into tertiary education after due
consultations with States and other experts from the academic sector.
While the challenges involved in formulating a National Test Scheme would be
enormous, the benefits to the next generation of learners could be significant. The
Committee recommends a further examination of the possibility for a national test
scheme for tertiary education after due consultations and examination.
APPENDIX A1
APPENDIX A2
APPENDIX B
APPENDIX C
Preliminary Advice
Combining AIEEE Scores and Board Assessments
for Tertiary Selection Purposes in India
Prepared for Mr Vineet Joshi, Chairman, CBSE
by
Glenn Rowley
Australian Council for Educational Research
October 22, 2012
Contact
Dr Glenn Rowley
Principal Research Fellow
Australian Council for Educational Research
Private Bag 55
CAMBERWELL VIC 3124
Tel: +613 9277 5443 Fax: +613 9277 5500
Email: glenn.rowley@acer.edu.au
Table of Contents
The data supplied .................................................................................................................. 3
The advice ............................................................................................................................ 3
Assumptions ......................................................................................................................... 4
The first datasets provided..................................................................................................... 5
The second datasets provided ................................................................................................ 7
Board AM ......................................................................................................................... 8
Board BR .......................................................................................................................... 9
Board CBS ...................................................................................................................... 10
Board GU ........................................................................................................................ 11
Board KL ........................................................................................................................ 12
Board MR ....................................................................................................................... 13
Board RJ ......................................................................................................................... 14
Board TN ........................................................................................................................ 15
Board UP ........................................................................................................................ 16
Differences by Board .......................................................................................................... 17
Demonstration of the procedures ......................................................................................... 18
Conclusion .......................................................................................................................... 21
The Request
To provide advice on how a set of scores from the national examinations (AIEEE) in Physics,
Chemistry and Mathematics can be combined with scores provided by 29 different Boards
across India to generate a single score that can be used for tertiary selection in Engineering
across all 29 Boards.
The data supplied
Initially, I was provided with three files:
EEECBS10.zip
EEECBS11.zip
EEECBS12.zip.
On October 18 I was provided with a further set of data files:
EEEAM12.xlsx
EEEBR12.xlsx
EEECBS12.xlsx
EEEGU12.xlsx
EEEKL12.xlsx
EEEMR12.xlsx
EEERJ12.xlsx
EEETN12.xlsx
EEEUP12.xlsx.
This advice is provided on the basis of an appraisal of these files and some limited trialling of
the proposed strategies.
The advice
On the basis of my examination of the data provided, I propose that the following procedures
be considered.
1. For each Board, the AIEEE and Board data need to be scrutinised carefully, to ensure
that the data are free from irregularities and evidence of manipulation. Once satisfied
with the quality of the data, apply the procedures that follow within each set of Board
scores.
2. Apply a transformation to the AIEEE Total scores (the sum of examination scores in
Physics, Chemistry and Mathematics) to convert the distribution from the (typically)
skewed raw score distribution to a normal distribution.
3. Adjust the mean and standard deviation of this normalised distribution to the same
mean and standard deviation as that of the original AIEEE Total scores from that
Board.
4. Apply a transformation to each set of Board scores to convert the distribution from the
(typically) skewed raw score distribution to a normal distribution.
5. Adjust the mean and standard deviation of this normalised distribution to the same
mean and standard deviation as that of the original AIEEE Total scores from that
Board.
6. Form a composite score by combining the transformed AIEEE scores and the
transformed Board scores in 60:40 proportion (60% AIEEE Total and 40% Board
scores).
7. The composite will have the same mean, but not the same standard deviation, as the
original distribution of AIEEE Total scores from that Board, and its distribution will not be
quite normal. Normalise the distribution and adjust the mean and standard
deviation to match the mean and standard deviation of the AIEEE Total scores from
that Board.
8. Repeat the process in all 29 Boards.
At the conclusion of this process, each Board will have a set of scores that is normally
distributed, and with means and standard deviations that match those of the scores obtained
by its own candidates on the AIEEE Total. To the extent possible, scores from different
Boards will now be comparable.
9. Aggregate the scores from all 29 Boards into a single data file. These scores are now
suitable for use in tertiary selection.
10. If the scores are to be made publicly available, they need to be converted to a scale
more suited for this purpose, and one that will not be confused with either AIEEE
scores or Board scores. Three possibilities would be:
a. A normal distribution with an arbitrarily chosen mean and standard deviation
(such as the mean 500, standard deviation 100 commonly used to present the
results of international surveys);
b. Percentiles (to one or two decimal places), ranging from 100 down to
essentially zero; and
c. Percentiles (to one or two decimal places), ranging from 100 down to a
number that reflects a relevant proportion; e.g. the percentage of the age
cohort across all 29 Boards who present for the AIEEE examination (as used
in the Australian Tertiary Entrance Rank).
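Steps 2–7 of the procedure above can be sketched in code. The sketch below is a minimal, stdlib-only illustration of one plausible reading of the advice: the specific transformation used to normalise a distribution is not stated in the paper, so rank-based normalisation via the inverse normal CDF is my assumption, and all function names are mine.

```python
from statistics import NormalDist, mean, pstdev

def normalise(scores):
    """Rank-based normalisation (an assumed method): replace each score by the
    normal deviate of its mid-rank percentile; ties share their average rank."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    nd = NormalDist()
    return [nd.inv_cdf((r - 0.5) / n) for r in ranks]

def rescale(values, target_mean, target_sd):
    """Adjust a set of scores to a target mean and standard deviation."""
    m, s = mean(values), pstdev(values)
    return [target_mean + target_sd * (v - m) / s for v in values]

def composite_scores(aieee_total, board_marks, w_aieee=0.6, w_board=0.4):
    """Steps 2-7 for one Board: normalise both score sets, anchor each to the
    AIEEE Total mean/SD, blend 60:40, then normalise and anchor once more."""
    m, s = mean(aieee_total), pstdev(aieee_total)
    a = rescale(normalise(aieee_total), m, s)   # steps 2-3
    b = rescale(normalise(board_marks), m, s)   # steps 4-5
    comp = [w_aieee * x + w_board * y for x, y in zip(a, b)]  # step 6
    return rescale(normalise(comp), m, s)       # step 7
```

By construction, the final scores for each Board come out with the mean and standard deviation of that Board's AIEEE Total scores, which is the anchoring property the advice relies on for cross-Board comparability.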
Assumptions
The procedures described above are based on the following assumptions:
1. In populations of the size we are dealing with here, distributions of achievement will
be normal, or approximately so.
2. AIEEE Total scores are directly comparable with one another, whether they come
from the same Board or from different Boards.
3. Board scores from candidates within the same Board are directly comparable to one
another.
4. Board scores from candidates from different Boards are not directly comparable to
one another.
5. Differences in the distributions of AIEEE Total scores across Boards provide the best
available indication of the differences in distributions of achievement across the
Boards.
The first datasets provided
The first datasets provided to me were in the files EEECBS10.zip, EEECBS11.zip and
EEECBS12.zip. For each of the years 2010, 2011 and 2012, they contained data for the
Board CBSE as follows:
AIEEE Physics Board marks: Physics
AIEEE Chemistry Board marks: Chemistry
AIEEE Mathematics Board marks: Mathematics
The AIEEE marks for year 2011 were examined first, and were distributed as follows:
The distributions are as would be expected in any situation where a set of common
examinations has been administered to a large population of candidates. There is a positive
skew, indicating that the examination is able to separate candidates at the upper end better
than at the lower end of the achievement scale – a desirable attribute given its current use.
The distributions of Board scores were quite different:
These bear no resemblance to any known distribution, and defy explanation. They appear to
contain at least two separate populations. One is small, nearly normal, and at the lower end
of the reported scores. The other is larger and is almost completely separate from the first in
Physics and Chemistry, and completely separate in Mathematics. For each subject, the
second, larger distribution has peaks that suggest human intervention to raise candidates over
some score, such as a passing score, or perhaps the score required to gain some reward,
such as course selection or a scholarship. In Mathematics the distribution approximates a
uniform distribution, suggesting that the bulk of the scores submitted were rankings or
percentiles.
In the absence of any explanation for these irregularities, I cannot recommend that such data
could contribute usefully to tertiary selection. AIEEE Total scores appear to be quite
well-suited for tertiary selection, and to form any combination with scores distributed like these
would detract seriously from their effectiveness.
Fortunately, these problems were absent from the 2012 data that were provided next. The
remainder of the report will be based on the 2012 data, which were free of these irregularities.
The second datasets provided
On October 18 I was provided with a further set of data files with the filenames
EEEAM12.xlsx
EEEBR12.xlsx
EEECBS12.xlsx
EEEGU12.xlsx
EEEKL12.xlsx
EEEMR12.xlsx
EEERJ12.xlsx
EEETN12.xlsx
EEEUP12.xlsx.
These appear to be 2012 data from nine Boards, for which I will use the acronyms AM, BR,
CBS, GU, KL, MR, RJ, TN and UP. Each file includes variables labelled EEE_PHY_M,
EEE_CHE_M, EEE_MAT_M, EEE_TOT and BOARD_MRK, which I interpret as AIEEE
marks in Physics, Chemistry and Mathematics, Total AIEEE scores and Board scores. It is
not clear how the Board scores were arrived at, but if they were arrived at by a process that is
uniform within each Board, they are potentially usable for selection.
For each of the nine Boards in turn, the distributions of the four AIEEE scores (Physics,
Chemistry, Mathematics and Total) and the Board scores are portrayed in the next nine pages of
charts.
[Charts: score distributions for Boards AM, BR, CBS, GU, KL, MR, RJ, TN and UP]
From these charts, it appears that:
1. In each of the nine Boards, the AIEEE scores are distributed in the manner expected.
The single-subject scores are more “chunky,” because they have fewer possible score
points. The AIEEE Total scores in each Board are smoother and have similar
distributions – close to normal, but with some positive skew.
The Board scores are more varied:
2. For Board AM the score distribution is close to normal, with slight “bunching” at
three points. These scores could be used, but the precision of selection would suffer
slightly because of the inability of the scores to discriminate between candidates at
these levels. Given that the levels are below the point at which the keenest selection
decisions are made, this may not be a major issue.
3. For Board BR the score distribution is close to normal, with slight “bunching” at one
point, suggesting human intervention after the marks had been awarded. These scores
could be used for selection. As for Board AM, the problematic scores are below
average and well below the point at which the keenest selection decisions are made,
so this need not be a major issue.
4. For Board CBS, the score distribution is both positively skewed and asymmetrical.
Neither of these presents a problem; both can be taken care of by normalising the
scores. There will be some loss of precision because the examination discriminated
less well at the crucial upper end of the score range.
5. For Board GU, the scores are reasonably close to normally distributed and very
suitable.
6. For Board KL, the scores are significantly positively skewed, but this can be corrected
by normalisation. The scores do not discriminate as well at the top end of the scale as
they do in the middle and lower score ranges. The scores are usable, but would be
more useful for selection if there was greater discrimination in the upper score ranges.
7. For Board MR, the scores are slightly positively skewed, but close enough to
normally distributed and very suitable.
8. For Board RJ, the scores are close to normally distributed and very suitable.
9. For Board TN the score distribution is highly skewed in the negative direction,
indicating that in the very highest score ranges, very little discrimination was made.
This can be corrected by normalisation, but the consequence will be that candidates
who are not exceptionally high scorers will be moved significantly downwards. The
scores are insufficiently precise for us to know whether or not they deserve better than
this. This is a disappointing set of scores, and it could be argued that the addition of
the Board scores to the AIEEE Total scores would detract from the quality of
selection.
10. For Board UP, the scores are close to normally distributed and very suitable.
In summary, for seven of the nine Boards, the Board scores could contribute usefully to
selection. For the remaining two (KL and TN), the Board scores are too heavily concentrated
in the high score range, which detracts significantly from their usefulness.
Differences by Board
There are substantial differences in average achievement levels by Board, as evidenced in the
means and standard deviations of the AIEEE scores presented in the table that follows.
Table 1. AIEEE Total scores: Comparisons by Board
AIEEE Total Scores Percent of Candidates in the top:
Board N Mean SD 1% 2% 5% 10%
AM 5,764 28.56 28.22 0.05% 0.24% 1.02% 2.91%
BR 40,103 26.01 28.32 0.06% 0.32% 1.21% 2.70%
CBS 182,399 58.24 50.14 1.94% 3.78% 9.07% 17.33%
GU 27,501 37.50 33.19 0.27% 0.53% 1.91% 5.22%
KL 15,819 41.15 33.90 0.16% 0.52% 2.28% 6.83%
MR 70,602 28.83 33.02 0.39% 0.75% 1.85% 3.96%
RJ 17,188 35.84 37.21 0.34% 1.07% 3.23% 7.07%
TN 14,739 32.01 26.81 0.04% 0.14% 0.65% 2.49%
UP 28,628 25.13 26.41 0.07% 0.19% 0.74% 1.82%
All 402,743 43.09 43.36 1.00% 2.00% 5.00% 10.00%
From Table 1 it is apparent that the differences in achievement by Board are too great to
ignore. Board scores from different Boards cannot be treated as equivalent.
Scores from Board CBS have a dramatically higher mean, combined with an equally
dramatically higher standard deviation, than those of all other Boards. The combined effect of these
two elements is that there are many more candidates from Board CBS at the highest levels of
achievement than in any other Board. This is clearly seen by examining the percent of
candidates in each Board who score in the top 1%, 2%, 5% and 10% of scores, shown in the
last four columns of Table 1. Given that these are the ranges within which many selection
decisions are likely to be made, it is important that any selection scores we generate
preserve these differences.
The proposal in this paper is that this be done by transforming each Board’s scores (however
they have been arrived at) so that they are normally distributed, and adjusting the mean and
standard deviation of each set of Board scores to match the mean and standard deviation of that Board’s
candidates on their AIEEE Total scores. In this way we can have confidence that both the
AIEEE Total scores and the adjusted Board scores, before they are combined, adequately
represent differences within and between the Boards.
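The "percent of candidates in the top x%" columns of Table 1 can be reconstructed by finding the national cut-off score for each top slice and counting how many of a Board's candidates clear it. The sketch below is my own construction, not the author's code, and the function names are invented for illustration.

```python
# Sketch (hypothetical) of the Table 1 "top x%" columns: compute national
# cut-off scores, then each Board's share of candidates above each cut-off.

def top_slice_cutoffs(all_scores, fractions=(0.01, 0.02, 0.05, 0.10)):
    """National cut-off score for each 'top x%' slice of the pooled scores."""
    ranked = sorted(all_scores, reverse=True)
    return {f: ranked[max(int(len(ranked) * f) - 1, 0)] for f in fractions}

def board_top_shares(board_scores, cutoffs):
    """Fraction of this Board's candidates at or above each national cut-off."""
    n = len(board_scores)
    return {f: sum(s >= c for s in board_scores) / n for f, c in cutoffs.items()}
```

Applied to all 402,743 pooled AIEEE Total scores and then to each Board's scores in turn, this reproduces the pattern in Table 1: a Board with a higher mean and larger spread, like CBS, places a disproportionate share of its candidates above the national cut-offs.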
Demonstration of the procedures
To demonstrate how the scores would be combined and to show how the score distributions
would appear at each stage, I will use the first of the Board data sets provided to me (Board
AM) for purposes of demonstration.
The steps involved are as outlined in the first section.
1. For each Board, the AIEEE and Board data need to be scrutinised carefully, to ensure
that the data are free from irregularities and evidence of manipulation. Once satisfied
with the quality of the data, apply the procedures that follow within each set of Board
scores.
Comment: Both sets of scores are appropriate and can be converted to normal with
little change.
2. Apply a transformation to the AIEEE Total scores (the sum of examination scores in
Physics, Chemistry and Mathematics) to convert the distribution from the (typically)
skewed raw score distribution to a normal distribution.
3. Adjust the mean and standard deviation of this normalised distribution to the same
mean and standard deviation as that of the original AIEEE Total scores from that
Board.
4. Apply a transformation to the Board scores to convert the distribution from the
(typically) skewed raw score distribution to a normal distribution.
5. Adjust the mean and standard deviation of this normalised distribution to the same
mean and standard deviation as that of the original AIEEE Total scores from that
Board.
6. Form a composite score by combining the transformed AIEEE scores and the
transformed Board scores in 60:40 proportion (60% AIEEE Total and 40% Board
scores).
7. The composite will have the same mean, but not the same standard deviation, as the
original distribution of AIEEE Total scores from that Board, and its distribution will be near
to but not quite normal. Normalise the distribution and adjust the mean and standard
deviation to match the mean and standard deviation of the AIEEE Total scores from
that Board.
The final scores used for selection will correlate with both AIEEE scores and Board scores.
The correlation with AIEEE scores is likely to be higher than with Board scores because of
the higher weight accorded to the AIEEE scores. Scatter plots are shown below.
As anticipated, the selection scores correlate with both the AIEEE Total scores and the Board
scores, but more closely with the former.
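The claim that a 60:40 composite correlates more closely with its 60% component can be checked numerically. The scores below are invented toy data standing in for one Board's transformed scores; the Pearson correlation is written out in stdlib-only Python for illustration.

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient (stdlib-only, for illustration)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Toy data (invented) standing in for one Board's transformed scores:
aieee = [30.0, 45.0, 50.0, 62.0, 70.0, 81.0, 90.0]
board = [55.0, 40.0, 60.0, 58.0, 75.0, 70.0, 88.0]

# The 60:40 blend described in step 6:
selection = [0.6 * a + 0.4 * b for a, b in zip(aieee, board)]

r_aieee = pearson(selection, aieee)
r_board = pearson(selection, board)
# With the higher weight on the AIEEE component, r_aieee exceeds r_board.
```

Since the composite places 60% of its weight on the AIEEE component, its covariance with that component is correspondingly larger, which is why the scatter against AIEEE Total is the tighter of the two.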
Conclusion
The processes described are simple and manageable enough to be applied routinely. But the
quality of the selection decisions made depends heavily on the quality of the data that are
used to facilitate these decisions.
To some extent the process may be self-correcting. If there is a tendency in a Board to award
greater numbers of high scores than is warranted, normalisation will adjust most of those high
scores downwards, with the unfortunate result that the highest achievers in the Board will be
disadvantaged. As this occurs (and particularly if it can be understood and anticipated), the
temptation to inflate the scores of higher achievers should soon disappear. This has been the
experience when such procedures have been used here.
Data quality is an important issue. There appears to be no problem with the quality of the AIEEE
data, provided the total scores are used. The quality of Board data appears mixed. There are
no major problems in the 2012 data that I have received from seven of the nine Boards. For
the remaining two (Boards KL and TN), there is a tendency to push too many candidates into
the highest score ranges, with a resulting lack of discrimination where it matters most.
In my view, the stated policy can be implemented if all the data are of the quality of Boards
AM, BR, CBS, GU, MR, RJ and UP. It can be implemented with some loss of precision
where the data are of lesser quality (as in Boards KL and TN).
Implementation of this policy needs to be accompanied by a concerted effort to provide
professional development at the Board level. The professional development should include:
1. Setting examinations to discriminate across the whole achievement range.
2. Developing scoring schemes to achieve discrimination across the whole achievement
range.
3. Training examiners to ensure uniform standards of marking throughout the Board.
4. Developing an understanding of the importance of data quality in the selection
process and the detriment to data quality when proper procedures are not followed.
Indian Centre for Assessment Evaluation and Research (CAER) 2012
APPENDIX D
INDIAN CENTRE FOR ASSESSMENT, EVALUATION AND RESEARCH (CAER)
Some Possible Options for Aggregating Subject Examinations at the Level of Scores for Entry into Indian Tertiary Institutions
Jim Tognolini and Jon Twing
9/28/2012
This paper provides some options for scaling the examination scores across Boards and subjects so that students are neither advantaged nor disadvantaged by the group of students that are being compared within their Board. The paper will also provide some data to show some of the effects that can occur to the rank order if the scaling is done.
Aggregating Subject Examination Scores
Some Possible Options for Aggregating Subject Examinations at the Level of Scores for Entry into Indian Institutes of Technology
1.0 Introduction

The Indian Institutes of Technology Joint Entrance Examination (JEE) is an annual entrance examination for the 16 Indian Institutes of Technology (IITs). It has been used for entry since 1960¹. In the early years it was called the Common Entrance Examination (CEE) and was initiated in response to the IIT Act of 1961. There have been a number of changes and reforms to the test over the years. The most recent changes have just been accepted and will be implemented in 2013.

The latest structure will comprise two JEE examinations: JEE (Main) and JEE (Advanced). Candidates wishing to gain entry into an Indian Institute of Technology (IIT) will have to sit the JEE (Main). The students who perform best on this examination (approximately 1.5 lakhs) will then be eligible to sit the JEE (Advanced) exam. In addition, the JEE (Main), which was until 2012 known as the All India Engineering Entrance Examination (AIEEE), is also to be used for admission to various Central Engineering Institutes other than IITs. The final rank list for entry into these (non-IIT) institutes will be prepared by giving 40% weighting to (“normalised”) Grade XII Board examination scores and 60% to scores on the JEE (Main).

This paper provides some options for “normalising”, “scaling” or “equating” the Board marks so that they can legitimately be compared and contribute, along with the JEE (Main) score, towards a Tertiary Entrance Score. This change has enhanced the importance of the examinations conducted by the various Boards across India, because students have to perform well in their Board examinations (be in the top 20 percentile) in order to be eligible for entry. However, this decision has also introduced an issue regarding the comparability of the Board examinations. Given that there is going to be a single rank order of merit produced for entry into IITs, it is problematic to just take the scores of students from the various Boards across the country, because there has been no attempt to adjust for the relative differences in the “ability” of various cohorts or for the differences in “difficulty” of various content or subject examinations.
In other words, it may be a lot easier to beat 20 percent of the cohort of students in one Board than it is in another, particularly if the subjects are not the same. To just take the top 20% from each Board without adjusting for the “ability” of the comparable cohort would not be considered “fair” by most communities, and so there is a need to scale the results to produce a single rank order of merit of aggregated examination results across the country.

This paper provides some options for scaling the examination scores across Boards and subjects so that students are neither advantaged nor disadvantaged by the group of students that are being compared within their Board and the relative difficulty of the subjects that they have chosen to take in their various programmes. The paper will also provide some data to show some of the effects that can occur to the rank order if the scaling is done.
¹ It was originally referred to as the Common Entrance Examination (CEE).
2.0 Background to Scaling

A serious limitation of traditional testing is that the “raw” score that a student obtains on the test (typically obtained by summing the marks the student obtains on the test) can only be interpreted in terms of the particular test that has been used. When a test is constructed to assess performance on a subject, the test questions or items that are used represent only a sample of the possible set of items that could have been used. The score obtained by the student is dependent on the particular items chosen for that particular examination. If a different set of items (or even a few different items) had been taken, a different score would probably have been obtained, and hence a different rank ordering of students on raw score might be realised. To this end, the score on the subject is peculiar to the set of items through which it is defined.

In many testing situations it is necessary to compare the scores of students who have taken different forms of a test. One of these situations occurs when the test contains a choice of items. In practice, the tests cannot be expected to be of equal difficulty for students at all ability levels. Therefore, a comparison of total scores on the different forms of the test would not be fair to the students who have taken the more difficult set of items.

A second situation, and the one that is more the focus of this paper, occurs when scores obtained from different combinations of subjects need to be compared and included in the calculation of a student’s single Tertiary Entrance Score (TES). The scores in this situation are defined in the metric of the subject (and as such are not strictly comparable) and depend upon the relative difficulty of the subject for the particular cohort of students attempting it.
In India and the UK there is an added layer of complexity, in that the examinations are conducted by different Boards and there is no alignment of difficulty across the Boards for examinations in the same subject, or across different subjects. There are numerous other situations, ranging from monitoring the performance of students over time, when tests that differ in content and difficulty are used in different years (e.g. the Programme for International Student Assessment [PISA] and international mathematics and science surveys [e.g. TIMSS]), to comparing school assessments prepared by different schools across the country. In this latter case, in order that valid comparisons among students from different schools can be made, the different teachers’ assessments must be placed onto a common scale before they are compared.

The process of transforming scores on one test so that they can be compared directly to scores on another is referred to as equating, scaling or linking. As an integral part of the test construction process, test equating has received widespread coverage in the measurement literature. It is beyond the scope of this report to provide a survey of all of the topics associated with test equating. However, it is the aim of this report to provide an overview of some of the more common methods for linking different tests and examinations.

3.0 Equating, Scaling and Linking Tests and Examinations

Equating, scaling and linking are terms used to describe the empirical procedures used in transforming the scores of tests and examinations to ensure that it makes no difference which set of items students have taken. After equating has been carried out, it is possible to compare the performance of students, even though the students have scores based upon tests composed of different items.
Béguin (2000) makes the following distinction between the terms equating, linking and scaling, which describe the statistical procedures used to adjust the scores on different test forms so that they can be used interchangeably (see Angoff, 1971; Kolen and Brennan, 1995; Petersen, Kolen and Hoover, 1989).
Aggregating Subject Examination Scores
D4
Equating is the process used to adjust the scores on equivalent test forms. A process related to equating but different in purpose is linking… Linking is used for tests that are purposefully built to be different in statistical characteristics. From a statistical point of view, equating is a special case of linking or scaling to achieve comparability. (Béguin, 2000, page 3)
In essence, equating ensures that the measures are interchangeable. Scaling, on the other hand, refers to the process of associating numbers with the performance of students. When two tests have been equated, they are placed on the same scale. However, when two tests have been scaled they have not necessarily been equated (Kolen, 1985). A definition of equating promulgated by Angoff (1971) states that to equate two test forms is
… to convert the system of units of one form to the system of units of the other – so that scores derived from the two forms after conversion will be directly equivalent. (Angoff, 1971, page 562)
Lord (1977, 1980) has proposed a definition of equating which introduces the notion of equity.
Tests X and Y can be considered to be equated if and only if it is a matter of indifference to each examinee whether he takes Test X or Test Y.
(Lord, 1977, page 128)

There are a number of implicit conditions inherent in these definitions. The first is that the tests to be equated must be measures of the same variable. An analogy from the physical sciences would be equating degrees Fahrenheit and degrees Centigrade: both are measures of temperature. Similarly, the equating of different currencies, such as Australian dollars, French francs and Italian lira, is possible because they are measures of the same variable, purchasing power. In the case of equating tests, it makes sense to equate tests that obviously measure the same variable. For example, equating a mathematics test from the Central Board of Secondary Education (CBSE) to one from another Indian Board is worthwhile. Angoff (1971) would suggest, however, that it makes little sense to equate tests measuring performance on different variables. For example, he suggests that equating a test that is a measure of mathematics to a test that is a measure of artistic aptitude is worthless.[2]
The second condition implied by equating is that the resulting equivalence should not depend on the students, whose responses are used to develop the transformation, thus making the equating generalisable. As Angoff (1971) states:
…in order to be truly a transformation of only systems of units, the conversion must be unique, except for random error associated with the unreliability of the data and the method used for determining the transformation; the resulting conversion should be independent of the individuals from whom the data were drawn to develop the conversion and should be freely applicable to all situations. (Angoff, 1971, page 562)

[2] Angoff's premise might be construed to mean that linking a mathematics problem-solving test with a mathematics number-sense assessment would not be allowed because the two tests measure different things. However, both tests measure aspects of a higher-order variable (i.e., mathematics skill), such that the linking becomes plausible. Similarly, while it might not make sense to link an art test with an arithmetic test, if the two scores will be used as indicators of a higher-order measure (e.g. Tertiary Entrance Scores) the need or requirement is not so clear.
The condition is an extension of a basic measurement principle developed explicitly by Thurstone (1959).
If a scale value is to be regarded as valid, the scale values of the statements should not be affected by the opinions of the people who help to construct it. This may turn out to be a severe test in practice, but the scaling method must stand the test before it can be accepted as being more than a description of the people who construct the scale.
(Thurstone, 1959, page 228)

Rasch (1960) also stressed the need for this kind of invariance and referred to it later in his writings as "specific objectivity".
Individual-centred statistical techniques require models in which each individual is characterized separately and from which, given adequate data, the individual parameters can be estimated. It is further essential that comparisons between individuals become independent of which particular instruments – tests, or items or other stimuli – within the class considered have been used. Symmetrically, it ought to be possible to compare stimuli belonging to the same class – measuring the same thing – independent of which individuals within the class were instrumental for the comparison. (Rasch, 1960, page vii)
Specific objectivity is a property taken for granted in the field of physical measurement: no scientist asks which thermometer was used to measure temperature; it is simply assumed that thermometers are calibrated before measures are taken.

A third condition, proposed by Lord (1980), is that the two tests must be equally reliable or perfectly parallel. In practice this condition is rarely, if ever, met. A less rigorous definition has been used in connection with constructing statistically equivalent tests.
Non-parallel tests X and Y (that is, tests measuring the same unidimensional ability but differing in difficulty or reliability) can be considered to be equated if any two examinees of equal true ability, one taking test X and the other taking test Y, would be expected to obtain the same score when performance on test X and test Y is expressed on a common score scale.
(Kolen, 1981, page 1)

Kolen (1981) referred to the above as the definition of equating for non-parallel tests. Whitely and Dawis (1974) refer to this definition as the equating of "tau-equivalent" measures, where "tau" refers to the symbol "τ", which stands for the ideal true score of a person.

Test equating procedures are generally classified into two categories: horizontal and vertical equating.
Horizontal equating describes the process of equating two or more tests that are designed to measure the same property at the same academic level. Vertical equating is the procedure used to equate tests that measure the same property at different academic levels (Holmes, 1982; Mislevy, 1992).

4.0 Methods of Equating
4.1 Classical Test Theory (CTT) Equating
Research into methods of equating tests (particularly tests whose items are dichotomously scored) has been going on for more than 50 years. Many different methods for test equating have been proposed, but until the advent of latent trait methods the linear and equipercentile equating methods were the most commonly used, and most research, at least until the last decade, focused extensively on them.
The traditional methods of equating tests revolve around matching the shapes of the score distributions. In the case of the linear scaling method, the assumption is that the only difference between two tests to be equated is a difference in origin and unit. The method adjusts for these differences by transforming the scores so that the mean (origin) and standard deviation (unit) of the same group of students are equal on the variables being equated. This type of equating underpins most statistical procedures that are used to moderate school assessments before they are combined with examination scores to produce Tertiary Entrance Scores.
The equipercentile scaling method assumes that, in general, scores on different tests cannot be equated by adjusting the origin and unit size only. The method requires the cumulative frequency distribution of each test, and assigns the same scaled score to scores on Test X and Test Y if their percentile ranks are the same. That is, equivalent scores are scores on Test X and Test Y that have the same percentile rank. This method is generally used in Australian states to adjust for differences among subjects. Once it has been carried out, the scaled scores from the different subjects are added, and the resulting score is expressed as the Tertiary Entrance Score (TES). While linear linking establishes equivalence between means and standard deviations only, equipercentile scaling extends the linking so that the first four moments of the distributions (mean, standard deviation, skewness and kurtosis) are equivalent.
Both of these equating methods assume that the students who have taken the two tests are the same students, or at least randomly equivalent groups. If this is not the case, more advanced equating methods with additional assumptions must be used (see Gulliksen, 1950; Braun and Holland, 1982; Angoff, 1971; Dorans, 1990; and Marco, Petersen and Stewart, 1983).
4.2 Rasch Theory Equating
The development of Rasch models arose from an equating problem at the level of tests: reading tests administered to the same pupils at different stages, to measure the improvement in reading ability, had to be equated (Rasch, 1960/1980). The important characteristic of these unidimensional models for measurement was that they had one parameter for a student, the ability, and one parameter for the test, its difficulty. Moreover, no assumptions were needed regarding the distribution of student abilities or test difficulties. Thus the student and test parameters, together with the form of the model, were considered to determine the probability of an error in reading each word.
It was from the solution to this problem at the level of the test that Rasch proceeded to the model for dichotomous items, which he later generalized to items with more than two categories.
One of the advantages of using the method developed by Rasch (1960/1980, 1968, 1977) is that it provides an explicit framework for evaluating the validity of equating any two tests.
When items in different tests (subjects taken by students) have been

• constructed to measure the same property; and,
• shown to fit the requirements of the Rasch model,

then they can be transformed onto a single common scale.
Once the items are on a common scale, they share a common calibration. The measures that result from scores on any tests drawn from the scale are automatically equated, and no further collection or analysis of data is needed.
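The invariance that makes this common calibration possible can be illustrated with a minimal sketch of the dichotomous (simple) Rasch model. This is an illustration only; the function name and the example values are not from the report.

```python
import math

def rasch_p(ability, difficulty):
    """Probability of a correct response under the simple (dichotomous)
    Rasch model. It depends only on the difference (ability - difficulty),
    which is what allows calibrated items to share one common scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# The same ability yields comparable expected performance on any
# calibrated item, regardless of which items happen to be administered.
p_on_easy_item = rasch_p(ability=1.0, difficulty=-0.5)
p_on_hard_item = rasch_p(ability=1.0, difficulty=1.5)
```

Because only the difference between ability and difficulty matters, comparisons between students do not depend on which calibrated items were used — Rasch's "specific objectivity".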
5.0 Scaling Examination Results from Different Indian Examination Boards
The basic problem confronted in this paper is the following. Changes to the requirements for entry into some institutions from 2013 onwards mean that candidates will be rank ordered on the basis of a tertiary entrance score obtained by giving 60% weighting to ("normalised") Grade XII Board examination scores and 40% to scores on the JEE-Main. This change has enhanced the importance of the examinations conducted by the various Boards across India, because students must perform well in their Board examinations (be in the top 20 percentile) in order to be eligible for entry.

The suggested change highlights an equity issue that has always existed in the system but has never surfaced as a problem, because there has never before been a need to compare the performance of students across the whole country on the basis of their examination scores. It would be problematic (unfair) simply to take the raw scores of students on examinations conducted within the various Boards and aggregate them to arrive at a score used to rank order aspiring candidates for competitive entry, because no attempt would have been made to adjust for the relative difficulty of the various subjects taken. Since different students can take different subjects to produce their final scores, and these subjects can vary in difficulty, it is necessary, in the interests of fairness, to take account of the relative difficulty of subjects before generating a final score. It would be unfair for someone who sat the "easiest" subjects to gain entry ahead of someone who had chosen the most "difficult" subjects purely because the former subjects were inherently easier. Ultimately the aim should be to ensure that students are neither advantaged nor disadvantaged by their subject choice, nor advantaged or disadvantaged by the Board that set the examinations. It is with this basic principle of equity that the following optional scaling procedures are considered.
Option 1

One option is to add up the scores obtained by students on the examinations conducted by their particular Boards and then compare students on their aggregates without any adjustment. The major problem with this method is that it assumes that everybody has done exactly the same examinations, irrespective of Board and subject. This assumption cannot be sustained, so this is not a viable option.

Option 2

A second option, which could address the issue of different entry cut-offs across Boards, involves first "normalizing" the distributions from the various examination Boards so that they all have a mean of zero and a standard deviation of one, and then giving them a common mean and standard deviation, so that the scores for students at the same locations on the distributions of all the examinations would be the same. For example, after normalization a 75 in subject X would mean that the student had beaten, say, 80% of the cohort, and the same mark in every other cohort would mean the same. This would remove the anomaly identified by a journalist of the Hindustan Times, who compared the scores of students in the top 20 percentile of the various boards:
“In a quirky scenario, a student of the West Bengal class 12 board will need just 58% to be eligible to take the IIT-Joint Entrance Exam (JEE) next year while an aspirant from the Tamil Nadu board will have to score nearly 78% to make the cut. Preliminary data of seven boards across the country shows that the percentage required to be in the top 20 percentile — a necessary condition to be eligible for IIT-JEE next year — will vary for different boards”. (Hindustan Times, 16 July 2012)
However, this option would not overcome the problem of the comparability of the cohorts: it is not fair to students who are competing within more able cohorts. For example, it could be easier to beat 80% of the candidature in some subjects than in others, purely because the students taking one subject are less able than those taking another. This could logically lead to students moving from one jurisdiction (Board) to another to maximise their chances of beating other students in the race for tertiary entrance. Option 3 takes account of the relative "ability" of the various cohorts of students.

Option 3

The next group of options does address the measurement issue left unresolved by the first two options, in that they use a common test to scale for differences across subjects and across Boards. The All India Engineering Entrance Examination (AIEEE) is a test taken by significant numbers of students from national and state examination Boards throughout India (it will be replaced by the JEE (Main) in 2013). It can be used to take account of the general ability of the cohort of students taking each subject, by scaling the distribution of the students in the various subjects (irrespective of Board) to the distribution of the scores of the same
group of students on the AIEEE. A national test of this nature enables the Board differences and the subject differences to be taken care of concurrently. Option 3, therefore, involves using a linear scaling method to adjust the mean (origin) and standard deviation (unit) of the scores of the students in each subject to the mean and standard deviation of those same students on the scaling test (the AIEEE). Using this transformation, every score on the subject examination can be converted to an equivalent score on the AIEEE and, as a consequence, be directly compared. The following equation is used for the transformation:
$$S_{psb} = \left(\frac{X_{psb} - M_{sb}}{SD_{sb}}\right) SD_{A,sb} + M_{A,sb}$$

where $S_{psb}$ is the scaled score in subject 's' for person 'p' within board 'b';

$X_{psb}$ is the examination score in subject 's' for person 'p' within board 'b';

$M_{sb}$ is the mean examination score for all persons attempting subject 's' within board 'b';

$SD_{sb}$ is the standard deviation of the examination scores for all persons attempting subject 's' within board 'b';

$SD_{A,sb}$ is the standard deviation of the AIEEE ('A') scores of all of the persons taking subject 's' within board 'b'; and,

$M_{A,sb}$ is the mean of the AIEEE ('A') scores of all the students taking subject 's' within board 'b'.
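The transformation can be sketched in code. The following is a minimal illustration of the equation (the function name and the marks are invented; an operational implementation would apply the function to every (board, subject) group of records):

```python
from statistics import mean, pstdev

def linear_scale(subject_scores, aieee_scores):
    """Scale one (board, subject) group's examination scores onto the
    AIEEE metric: standardise each score against the group's subject
    mean and SD, then rescale by the same group's AIEEE SD and mean."""
    m_s, sd_s = mean(subject_scores), pstdev(subject_scores)
    m_a, sd_a = mean(aieee_scores), pstdev(aieee_scores)
    return [((x - m_s) / sd_s) * sd_a + m_a for x in subject_scores]

# Illustrative marks for one subject in one board, paired with the same
# students' AIEEE scores (invented numbers, for demonstration only).
subject_marks = [40, 55, 60, 75, 90]
aieee_marks = [30, 80, 95, 150, 205]
scaled = linear_scale(subject_marks, aieee_marks)
```

After the transformation the scaled scores have the same mean and standard deviation as the group's AIEEE scores, so scores from different subjects and Boards sit on one common metric.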
One of the problems with this method of equating is that the scaled scores can have values below the minimum score of the scaling test and above its maximum score. This can be adjusted by fixing the minimum scaled score to zero and the maximum scaled score to 100. This scaling method adjusts for the mean and spread of the distribution but primarily retains the shape of the examination score distribution. If, on the other hand, it is desirable for the scaled scores to conform closely to the distribution of scores on the scaling test (AIEEE), then it is necessary to equate the percentile distributions. For the particular problem outlined above it is important that the distribution of scores on the examinations closely match the distribution of scores on the AIEEE, so an equipercentile scaling procedure is more appropriate. The following option describes an equipercentile scaling method for solving the problem.

Option 4
Option 4 is referred to as the equipercentile scaling method. It assumes that, in general, scores on different tests cannot be equated using the mean and standard deviation only. It requires the cumulative frequency distribution of each test, and assigns the same score to equivalent percentile ranks on the subject examination and the AIEEE. Once this scaling has been carried out, the scaled scores for the different subject examinations are comparable. To illustrate equipercentile scaling, consider the following steps, in which the mathematics scores from a board are scaled onto the AIEEE distribution.

STEP 1: Determine the score 's' in the AIEEE (ALL) distribution that corresponds to, for example, a percentile rank of 50.
STEP 2: Convert the score 's' to a percentile rank 'p' in the AIEEE (subject) distribution.
STEP 3: Convert the percentile rank 'p' to a score 'e' in the subject distribution.
STEP 4: The ordered pair (e, s) is used as one of the points in the equating process.
STEP 5: A similar strategy is used to equate all of the subject scores to the AIEEE scale.

Once this process has been completed for all subjects across all boards, scores in any two subjects are considered equivalent if they correspond to the same score on the AIEEE. Furthermore, the scaled scores can be added, and the resulting total score used to summarise the overall performance of candidates on a common scale. After scaling, it is appropriate to answer the question, "What is the score in mathematics in the Board X examination that corresponds to a score of 75 in another subject from another Board?" The aggregates that result from the scaled scores can be compared directly, and it is possible to ascertain the top 20 percentile of candidates across India, irrespective of which examinations they used to generate the aggregate and which examination Board administered the examinations.

Illustrative Example

The data set used to illustrate equipercentile equating started with the total pool of students (in excess of 185,000) and all subjects.
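The steps above can be sketched as follows. This is a simplified nearest-rank illustration with invented helper names; an operational equating would interpolate between score points and possibly smooth the distributions.

```python
import math

def percentile_rank(scores, x):
    """Percentile rank of score x in a distribution (midpoint convention:
    ties receive half weight)."""
    below = sum(s < x for s in scores)
    equal = sum(s == x for s in scores)
    return 100.0 * (below + 0.5 * equal) / len(scores)

def score_at_percentile(scores, p):
    """Score holding percentile rank p (nearest-rank lookup on the
    sorted distribution)."""
    ordered = sorted(scores)
    idx = max(0, math.ceil(p / 100.0 * len(ordered)) - 1)
    return ordered[min(idx, len(ordered) - 1)]

def equipercentile_link(subject_scores, aieee_scores, x):
    """Convert subject score x to the AIEEE score with the same
    percentile rank -- the essence of Steps 1 to 5."""
    return score_at_percentile(aieee_scores, percentile_rank(subject_scores, x))
```

Applying `equipercentile_link` to every observed score in a subject produces the conversion table for that subject; repeating this for all subjects places them all on the AIEEE scale.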
The first step of the analysis was to remove subjects with insufficient numbers of students: it was arbitrarily decided that any subject with fewer than 100 candidates would be removed from further analysis for the purposes of this example. This resulted in 39 subjects being available for the analysis, of which (again arbitrarily) 35 were retained. Table 1 provides the names and a brief description of the 35 subjects to be analysed.
TABLE 1
Subjects used in Illustrative Example
Variable Name   Description             Note
BENG_TOT        AIEEE TOTAL SCORE       Engineering Entrance Exam
MRK_041         FUNC-ENG                Language
MRK_042         PUNJABI                 Language
MRK_043         BENGALI                 Language
MRK01           ENGLISH ELECTIVE        Language
MRK02           HINDI ELECTIVE          Language
MRK07           GEOGRAPHY               Course Content
MRK08           ECONOMICS               Course Content
MRK11           MUSIC HIND.VOCAL        Music
MRK12           MUSIC HIND.INS.MEL      Music
MRK14           PSYCHOLOGY              Course Content
MRK16           MATHEMATICS             Course Content
MRK17           PHYSICS                 Course Content
MRK18           CHEMISTRY               Course Content
MRK19           BIOLOGY                 Course Content
MRK20           BIOTECHNOLOGY           Course Content
MRK21           ENGG. GRAPHICS          Course Content
MRK22           PHYSICAL EDUCATION      Course Content
MRK23           PAINTING                Course Content
MRK26           APP-COMMERCIAL ART      Course Content
MRK33           HOME SCIENCE            Course Content
MRK34           INFORMATICS PRAC.       Course Content
MRK35           ENTREPRENEURSHIP        Course Content
MRK36           MULTIMEDIA & WEB T      Course Content
MRK40           COMPUTER SCIENCE        Course Content
MRK41           FUNCTIONAL ENGLISH      Language
MRK42           PUNJABI                 Language
MRK43           BENGALI                 Language
MRK46           MARATHI                 Language
MRK48           MALAYALAM               Language
MRK51           KANNADA                 Language
MRK62           ENGLISH CORE            Language
MRK63           HINDI CORE              Language
MRK65           SANSKRIT CORE           Language
MRK66           TYPOGRAPHY &CA ENG      Course Content
After the calculation and inspection of the descriptive statistics were completed, each subject was statistically linked to the AIEEE total score BENG_TOT using the procedure outlined above.
The linking method used a common type of statistical test form equating known as the "randomly-equivalent groups design" (Kolen & Brennan, 2004). The linking used no smoothing. While the assumption of randomly equivalent groups is not a strong assumption for the current linking, given that the linking was performed on subsets of data with common persons, the ability to generalise the linking is limited. In
other words, if larger or different groups of candidates were used, different conversion tables between each subject and the AIEEE total score would be realised.
Using the linking methodology previously described, each subject (see Table 1) was equated to the AIEEE total score (variable BENG_TOT) using the R package "equate" (Albano, 2011). In total, 34 "pair-wise" linkings were completed. Each individual linking resulted in a conversion table, which provided an AIEEE total score equivalent for each individual subject scale score. For example, when the 0-100 scale of MRK01 was equated to the -51 to 345 scale of BENG_TOT, two column vectors resulted: one column held the BENG_TOT total scale score and the other held the MRK01 scale score equivalent. In this way a simple conversion of MRK01 (in this case the English Elective language course) to the BENG_TOT scale (AIEEE total) was made. An equipercentile linking of this kind was then performed for all of the remaining subjects, so that each subject had a linked AIEEE total score equivalent.

A comparison of a composite score obtained from a simple average of these "equated scores" with the empirically obtained AIEEE shows the differences possible between decisions resulting from the composite score and decisions made from the use of the AIEEE score only. In addition, a simple composite of all subject scores was obtained by averaging the individual scale scores. This "non-equated" composite does not adjust for group ability or for subject and test differences. The results of these analyses are presented in the next section.

As outlined above, the equipercentile equating allows for comparison among three key derived or total score composites. First is the total score for AIEEE (BENG_TOT), named in these analyses AIEEE. Second is the composite score obtained by averaging scores on each subject examination (see Table 1) across all of India, after converting these scores to their equivalent AIEEE score via the equipercentile linking procedure described above.
This variable has been named AIEEE-EC for “AIEEE Equated Composite”. Finally, there is the simple composite score obtained by averaging the individual subject scores (see Table 1) without any linking or equating adjustment. This variable has been named SS-C for “Scale Score – Composite”. Table 2 presents the summary descriptive statistics for these composites.
TABLE 2
Descriptive Statistics
            AIEEE       AIEEE-EC    SS-C
Mean        58.0295     61.1826     67.9993
SD          50.0268     46.3334     15.6629
Min         -51         -29         9
Max         345         330         99
N-count     185123      184947      185123

Table 2 reveals the expected similarities between AIEEE and AIEEE-EC. Also seen in this table is the restriction of range imposed on the AIEEE-EC (minimum and maximum scores less extreme than those of AIEEE), most likely due to the 0-100 originating scale of each linked test relative to the much larger AIEEE scale, which ranges from -51 to 345. Table 3 provides the simple inter-correlations amongst the three scores.
TABLE 3
Simple Correlations
AIEEE AIEEE-EC SS-C
AIEEE 1.0000
AIEEE-EC 0.5661 1.0000
SS-C 0.5625 0.8572 1.0000
N-count = 184,947

One result that seems surprising on first review is the strength of the correlation between the two derived composite scores, AIEEE-EC and SS-C, relative to the correlations between AIEEE and the composite scores AIEEE-EC and SS-C. Why would AIEEE-EC correlate more highly with SS-C than with what it is supposed to be equivalent to, namely AIEEE? This is likely an artifact of the data: the rank orderings of the two derived composite scores were predestined to stay similar, given the linking procedure used in this paper. The equating adjustment used to obtain AIEEE-EC relied on the rank ordering (percentile rank) of each subject scaled score. As such, the composite generated from AIEEE-EC should indeed rank students in a similar way to the simple composite SS-C, since both rely on the rank ordering of the students taking each examination. The relatively weak correlations between AIEEE and AIEEE-EC, and between AIEEE and SS-C, testify that the rank ordering of students based only on AIEEE differs from the rank ordering of students on the two composite scales (i.e. AIEEE-EC and SS-C). This is likely due to the fact that the various subjects each have different relationships to AIEEE, with subjects such as physics and mathematics likely to be more strongly related, and areas like psychology and languages less so.

Table 4 shows a misclassification table. It is used to summarise the differences that occur between a rank order of achievement based on total scaled scores (AIEEE-EC) and a rank order of achievement based on raw scores (SS-C). For the purposes of this illustrative example, the top 10th percentile is used as the cut-off score. IN on the scaled score means that a student is in the top 10% of students on the AIEEE-EC, and OUT means below the top 10%. Similarly, IN on the raw score means that a student is in the top 10% of students on the SS-C, and OUT means below the top 10%.
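The IN/OUT cross-classification can be computed as sketched below. The helper names are invented for illustration, and the 10% cut-off mirrors the cut-off used in this example.

```python
def top_decile_flags(scores):
    """True for candidates in the top 10% of the given score list
    (ties broken by position, for simplicity of illustration)."""
    n_in = max(1, round(0.10 * len(scores)))
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    flags = [False] * len(scores)
    for i in ranked[:n_in]:
        flags[i] = True
    return flags

def misclassification(raw_composite, scaled_composite):
    """2x2 counts of top-10% membership on the raw vs the scaled
    composites, as summarised in Table 4."""
    raw_in = top_decile_flags(raw_composite)
    scaled_in = top_decile_flags(scaled_composite)
    cells = {"in_in": 0, "in_out": 0, "out_in": 0, "out_out": 0}
    for r, s in zip(raw_in, scaled_in):
        key = ("in" if r else "out") + "_" + ("in" if s else "out")
        cells[key] += 1
    return cells
```

The off-diagonal cells ("in_out" and "out_in") are the candidates whose selection decision would change depending on whether scaling is carried out.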
TABLE 4
Misclassification Table Showing the Differences in Results Before and After Scaling
                          SCALED SCORES
                     IN               OUT               TOTAL
RAW       IN     10,380 (5.6%)     7,475 (4.0%)     17,855 (9.6%)
SCORES    OUT     8,471 (4.6%)   158,797 (85.8%)   167,268 (90.4%)
          TOTAL  18,851 (10.2%)  166,272 (89.8%)   185,123 (100.0%)
Table 4 shows the students who are in the top 10% on both the scaled scores and the raw scores (10,380 or 5.6% of the sample); those who are in the top 10% on neither score (158,797 or 85.8%); those who are in the top 10% on the raw score but not on the scaled score (7,475 or 4.0%); and those who are in the top 10% on the scaled score but not on the raw score (8,471 or 4.6%). It is this last group of students who would be unfairly disadvantaged if scaling were not carried out.

It is important to stress the following when interpreting these results. Firstly, when producing a single rank order of merit based on an aggregate of subject scores in which not all students have attempted the same subjects, it is essential that scaling (or equating) be carried out to produce scores that are more valid than those produced by simply aggregating the raw scores. Secondly, this example uses only the data from the CBSE in 2012, where in most cases every student has done at least the BENG_TOT, Chemistry, Mathematics, Physics, and English Foundation. When the results from different Boards are included, the impact of scaling on students will become much more significant.

Some Potential Problems with Equipercentile Linking (Scaling)

One of the main concerns in using this procedure to render the scores across subjects and examination Boards comparable is that the anchor test (AIEEE in this case), irrespective of which one is used, will be differentially valid for different subjects. That is, it will correlate better with some subjects than with others. The higher the correlation between the scores on the anchor test and the scores on a subject, the more valid it is to equate scores in that subject. McGaw (1983) summarises the problem of using a test like the AIEEE or JEE (Main) as the anchor test for equating different subject tests as follows:
In each rescaling ASAT (anchor test in Australia) is used essentially to identify the characteristic of the candidates enrolled in order to determine how they stand in relation to the other students who might have enrolled. In a subject like chemistry where the correlation with ASAT is high, the ASAT scores of the students enrolled give a reasonable indication of their relative standing in chemistry in a population where all students took chemistry. In a case like economics
or French, ASAT provides a less valid indication of the selectiveness of the students enrolled. ASAT is thus a more valid rescaling variable for chemistry than for economics or French. (McGaw, 1983, page 9)
A further consequence of the lack of homogeneity in the inter-subject correlations is that the aggregates constructed from different combinations of subjects will have will have substantial differences. When scores are aggregated to form averages the variances of the sum will be the sum of the variances of the individual subjects that comprise the aggregate plus twice the covariance between each pair of subjects. Aggregates comprised of subjects that have relatively high inter-correlations will have greater variance. The effect will be that the results for persons above the mean will be pushed higher above the mean of the aggregate. Those students who take combinations of subjects which do not have a high inter-correlation will not be pushed as high. In the competition for entry into it’s the problem is less likely to be an issue because the applicants will be inclined to include subjects like subjects in their aggregates. In educational measurement there is a need to distinguish between actually aggregating scores as has been carried out in this option and the process of measuring. The next option provides an alternative that uses one of the family of Rasch Models to construct a measurement scale which can then be used to measure the property of interest. One of the advantages of using such a measurement model is that it provides an explicit framework for evaluating the validity of equating any two tests. Once it has been established that equating has been conducted at the level of tests, and then the achievement of the students can be located on the same scale. Together with the achievement measure the model provides an estimate of standard error and an index of fit between the person’s profile of results and the model. Option 4 The fourth option is to develop a measurement scale using an extension of the Simple Logistic Model (SLM) of Rasch. 
The Extended Logistic Model (ELM) is a generalisation of the SLM for cases where the items have more than two ordered response categories. It evolves from an elaboration of Rasch's generalised model as a consequence of substantial work done by Andersen (1973, 1977) and Andrich (1978). This study uses the ELM, with the subjects treated as polytomous items whose scores range from 0 to 100. The probability of a score of x is

    Pr{X = x; β, δ} = exp[x(β − δ)] / γ

where β is the student achievement, δ is the subject difficulty, and

    γ = Σ_{k=0}^{m} exp[k(β − δ)]

is the normalizing sum taken over the possible scores k = 0, 1, ..., m (here m = 100).
In this option, the model is used to first create a scale and then measure performance of students against the scale. The intention in this paper is to demonstrate how the model can be used to generate aggregate scores for students which take account of the relative difficulties of the different subjects and enables their direct comparison on a single rank order of merit
Aggregating Subject Examination Scores
D16
that can then be used to directly compare performance for the purposes of competitive entry into IITs. It is not the intention of this paper to go into detail regarding the model. However, Tognolini (1989) and Tognolini and Andrich (1995, 1996) have demonstrated how the model can be used to generate scores for entry into universities. For the scale to be generated there has to be something that the students have done in common. In 2012 the AIEEE was taken by most students wanting to apply to the IITs; in future this will be replaced by the JEE (Main).

Illustrative Example

Data from the CBSE 2012 examination results were used to demonstrate the model. The data consisted of results for 185,123 students in 68 subjects (BENG_TOT is the same as the AIEEE) (see Table 5). It can be seen from Table 5 that a number of subjects had no enrolments for the sample of students chosen. These subjects were removed for the illustrative example, as were other subjects with very small numbers of candidates. The final sample consisted of 31 subjects (see Table 6).
TABLE 5
CBSE Sample of Subjects and Student Numbers for 2012
SUBJECT No. of students
BENG_TOT 185123
Mathematics 184947
Physics 185123
Chemistry 185123
English Elective (001) 192
Hindi Elective (002) 1056
Urdu Elective (003) 36
Sanskrit Elective (022) 27
History (27) 13
Political Science (28) 41
Geography (29) 318
Economics (30) 4203
Music Car Vocal (031) 0
Music Car Ins (032) 0
Music Hind Vocal (34) 5176
Music Hind Ins Mel (35) 308
Music Hind Ins Per (36) 76
Psychology (037) 164
Sociology (039) 59
Biology (44) 23735
Biotechnology (045) 700
Eng Graphics (046) 2353
Phys Education (048) 91163
Painting (049) 8720
Graphics (50) 8
Sculpture (051) 66
APP Commercial Art (052) 1270
Fashion Studies (053) 56
Business Studies (054) 4
Accountancy (055) 35
Dance Kathak (056) 56
Dance Bhar (057) 3
Dance Odissi (059) 0
Home Science (064) 650
Informatics Practice (065) 9718
Entrepreneurship (066) 274
Multi Media & Web (067) 894
SUBJECT No. of students
Agriculture (068) 51
Graphic Design (071) 7
Mass Media (072) 18
Computer Science (83) 46152
Functional English (101) 5571
Punjabi (104) 444
Bengali (105) 380
Tamil (106) 38
Telugu (107) 84
Marathi (109) 121
Manipuri (111) 1
Malayalam (112) 155
Oriya (113) 5
Assamese (114) 0
Kannada (115) 187
Arabic (116) 25
Tibetan (117) 14
French (118) 7
German (120) 5
Russian (121) 0
Nepali (124) 44
Limboo (125) 1
Lepcha (126) 0
Bhutia (195) 1
English Core (301) 179346
Hindi Core (302) 20077
Urdu Core (303) 18
Sanskrit Core (322) 1818
Typography (607) 133
Marketing (613) 1
Geo Spatial Technology (740) 32
TABLE 6
CBSE Subjects and Student Numbers used in the Illustrative Example
Serial Number SUBJECT No. of students
1 BENG_TOT 185123
2 Mathematics (41) 184947
3 Physics (42) 185123
4 Chemistry (43) 185123
5 English Elective (001) 192
6 Hindi Elective (002) 1056
7 Geography (29) 318
8 Economics (30) 4203
9 Psychology (037) 164
10 Sociology (039) 59
11 Biology (44) 23735
12 Biotechnology (045) 700
13 Eng Graphics (046) 2353
14 Phys Education (048) 91163
15 Painting (049) 8720
16 APP Commercial Art (052) 1270
17 Fashion Studies (053) 56
18 Home Science (064) 650
19 Informatics Practice (065) 9718
20 Entrepreneurship (066) 274
21 Multi Media & Web (067) 894
22 Agriculture (068) 51
23 Computer Science (83) 46152
24 Functional English (101) 5571
25 Punjabi (104) 444
26 Malayalam (112) 155
27 Kannada (115) 187
28 Nepali (124) 44
29 English Core (301) 179346
30 Hindi Core (302) 20077
31 Sanskrit Core (322) 1818
The BENG_TOT (AIEEE) had a range of scores from -51 to 345. For the purposes of this illustrative study students with scores below 0 and above 100 were excluded. In practice these scores would be transformed to a metric where the top score was aligned to 100 and the lowest score was aligned to 0 for the equating exercise.
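The transformation described can be sketched as a simple linear rescaling; the bounds −51 and 345 come from the text, and the function name is illustrative:

```python
# Sketch of the rescaling described above: align the lowest AIEEE score to 0
# and the highest to 100 before the equating exercise (bounds from the text;
# the function name is illustrative, not from the study).
def rescale(score: float, lo: float = -51.0, hi: float = 345.0) -> float:
    return 100.0 * (score - lo) / (hi - lo)

assert rescale(-51) == 0.0
assert rescale(345) == 100.0
print(round(rescale(147), 1))   # 50.0: the midpoint of the range maps to 50
```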
In addition, for the purpose of this example, a random sample of 80,000 students was selected for the analysis. Consequently, the example to illustrate the Rasch (IRT) modelling option used a sample of 80,000 students and 31 subjects. The Rasch Unidimensional Models for Measurement (RUMM) program was used to analyse the data. Table 7 shows the subjects in difficulty order (i.e. from the easiest to the most difficult): Painting was relatively the easiest subject, followed by Commercial Art and Informatics Practice, while the AIEEE was the most difficult. Once the subjects (items) have been calibrated to produce a scale (Tertiary Entrance Scale), the students can be measured, that is, placed along the measurement scale. The following equation is used to convert the scores on the different subjects into a single measure of person achievement.
    r_n = Σ_i x_ni = Σ_i E[X_ni; β_n, δ_i]

This equation can be used to measure the overall achievement, β_n, for person n: the sum of the raw scores, r_n, is converted, using the model, into a measure β_n on the underlying measurement scale that can be directly compared. The students can take any number of subjects, in any combination, and the resulting β_n will be comparable. Because the model takes account of the relative difficulties of the subjects used to generate the raw score, it is possible for two students to have the same total raw score r_n but different measures β_n, since the totals were compiled from different combinations of subjects. Students in the 80,000 sample used in this example sat different numbers of subjects; some sat 7 subjects, some sat 6 and some sat 5. While the resulting β_n's are comparable irrespective of the number of subjects attempted, only those students who sat 6 subjects (including BENG_TOT) were retained for the comparative stage of the exercise (number of students in final sample: 69,140).
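The conversion from a raw total to a measure can be sketched as follows. This is an illustrative implementation, not the RUMM program: given subject difficulties in logits (the example values echo Table 7), it solves for the achievement value whose model-expected total equals the observed raw total, which is valid because the expected total is strictly increasing in achievement:

```python
import numpy as np

# Illustrative sketch (not the RUMM implementation): find the achievement
# whose model-expected total score equals the observed raw total r_n.
def expected_score(beta: float, delta: list, m: int = 100) -> float:
    k = np.arange(m + 1)
    total = 0.0
    for d in delta:                       # one polytomous item per subject
        logits = k * (beta - d)
        logits -= logits.max()            # numerical stabilisation
        p = np.exp(logits)
        p /= p.sum()
        total += (k * p).sum()            # expected mark on this subject
    return total

def estimate_beta(r_n: float, delta: list, m: int = 100,
                  lo: float = -10.0, hi: float = 10.0, tol: float = 1e-8) -> float:
    # Bisection works because expected_score is increasing in beta.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if expected_score(mid, delta, m) < r_n:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Six difficulties echoing Table 7: Chemistry, Physics, Mathematics,
# English Core, Phys Education, AIEEE (in logits).
delta = [0.262, 0.352, 0.539, -0.025, -0.509, 1.355]
beta = estimate_beta(r_n=420, delta=delta)       # raw total of 420 out of 600
assert abs(expected_score(beta, delta) - 420) < 1e-3
```

Two students with the same raw total but different subject combinations would pass different `delta` lists here and so obtain different measures, which is exactly the point made in the text.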
TABLE 7
CBSE Subjects in Difficulty Order (Logits where the larger the value, the more difficult the subject)
Serial Number SUBJECT Difficulty3
1 Painting (049) -2.498
2 APP Commercial Art (052) -2.363
3 Informatics Practice (065) -0.917
4 Eng Graphics (046) -0.565
5 Phys Education (048) -0.509
6 Agriculture (068) -0.489
7 Hindi Elective (002) -0.283
8 Hindi Core (302) -0.255
9 Home Science (064) -0.225
10 English Core (301) -0.025
11 Sociology (039) -0.009
12 Punjabi (104) 0.093
13 Geography (29) 0.098
14 Nepali (124) 0.165
15 Psychology (037) 0.252
16 Chemistry (43) 0.262
17 Sanskrit Core (322) 0.266
18 Entrepreneurship (066) 0.268
19 Multi Media & Web (067) 0.302
20 Malayalam (112) 0.31
21 Biology (44) 0.336
22 Fashion Studies (053) 0.338
23 Physics (42) 0.352
24 English Elective (001) 0.376
25 Computer Science (83) 0.428
26 Biotechnology (045) 0.506
27 Mathematics (41) 0.539
28 Functional English (101) 0.6
29 Economics (30) 0.645
30 Kannada (115) 0.649
31 BENG_TOT 1.355
3 These are expressed in logits (i.e. logarithmic units). The more positive the value, the harder the subject.
The following misclassification table (Table 8) summarises the disparities between a rank order of achievement based on scaled scores and one based on raw scores.
TABLE 8
Misclassification Table Showing the Differences in Results Before and After Scaling

                          SCALED SCORES
                     IN               OUT              TOTAL
RAW      IN     13,449 (19.5%)     438 (0.6%)     13,887 (20.0%)
SCORE    OUT       480 (0.6%)   54,773 (79.2%)    55,253 (80.0%)
         TOTAL  13,929 (20.1%)  55,211 (79.9%)    69,140 (100.0%)
Table 8 shows the students who are in the top 20% on both scaled scores and raw scores (13,449, or 19.5% of the sample); those in the top 20% on neither score (54,773, or 79.2%); those in the top 20% on the raw score but not on the scaled score (438, or 0.6%); and those in the top 20% on the scaled score but not the raw score (480, or 0.6%). It is this last group of students who would be unfairly disadvantaged if scaling were not carried out.

While it appears that less than 1% of students would be affected by the introduction of scaling, it is important to stress the following when interpreting these results. Firstly, when producing a single rank order of merit from an aggregate of subject scores in which not all students have attempted the same subjects, it is essential that scaling (or equating) be carried out to produce scores that are more valid than those produced by simply aggregating the raw scores. That is, it is the right thing to do. Secondly, this example only uses data from the CBSE in 2012, where in most cases every student has taken at least the BENG_TOT, Chemistry, Mathematics, Physics and English Foundation; the only real variation occurs in the choice of the sixth subject. When the results from different Boards are included, the impact of scaling on students will become much more significant. Thirdly, one of the main advantages of using a measurement model to govern the scaling process is that it provides a means of explicitly evaluating the validity of that process. This has not been done at this stage, as the intention of this paper is to show the effects that scaling has on the rank ordering of the top 20% of students based upon their subject scores.

Some Potential Problems with Rasch Equating

One of the main issues with equating using the Rasch model is that, with such large numbers, the data generally do not fit the model. This criticism can be addressed, not by forcing the data to fit the model, but by showing that for the purposes of equating the model is robust to the magnitude of variation that typically occurs. At the same time, with feedback from the process to the examiners who set the subject papers, it is anticipated that the fit to the model will improve over time. A second, more practical, issue is that the current programs used for Rasch modelling of the type advanced here are generally not built to handle the volumes of data expected when all students are included in the scaling process. Once again, this is a problem that can be readily addressed, as it primarily relates to the amount of memory allocated within the program for the analysis.
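A misclassification table of the kind shown in Table 8 can be built by cross-tabulating top-20% membership under the two orderings. The sketch below uses synthetic raw and scaled scores (not the study's data) purely to show the construction:

```python
import numpy as np

# Sketch with synthetic scores (not the study's data): cross-tabulate
# membership of the top 20% under raw totals against membership of the
# top 20% under scaled measures.
rng = np.random.default_rng(1)
n = 69_140
raw = rng.normal(300, 60, n)
scaled = raw + rng.normal(0, 15, n)      # scaling reorders some students

in_raw = raw >= np.quantile(raw, 0.80)         # top 20% on raw totals
in_scaled = scaled >= np.quantile(scaled, 0.80)  # top 20% on scaled measures

table = np.array([
    [(in_raw & in_scaled).sum(),  (in_raw & ~in_scaled).sum()],
    [(~in_raw & in_scaled).sum(), (~in_raw & ~in_scaled).sum()],
])
assert table.sum() == n
# Off-diagonal cells are the students whose top-20% status changes with scaling.
print(table)
```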
6.0 Conclusion
The problem of combining information from disparate sources has long confronted researchers, statisticians and lay people alike. A number of methods enable the aggregation of data across variables to generate a combined score that has meaning as some augmented variable. In the United States, for example, the Dow Jones Industrial Average (DJIA) is an indicator of the status of one of the North American stock markets; it is a composite score that averages many different stock holdings. Similarly, in education, makers of achievement and aptitude assessments have traditionally created composite scores that combine disparate pieces of information. The Stanford Achievement Test Series, Tenth Edition (Pearson, 2012), for example, creates a total test score by combining measures of mathematics, reading, science and history. The ACT Assessment (2009), a premier college entrance examination used in the US, has a Composite made up of English, mathematics, reading and science measures. In Australia, most states and territories use scaling or equating of one form or another to produce a single rank order for tertiary entrance when students have taken different combinations of subjects.

Hence, while at first reading it seems illogical to combine scores from, say, mathematics with language, it is often done, and done to provide a higher-level variable representing a more generalised ability or skill. While the principles of statistical test-form equating and scaling impose strong requirements when combining or linking scales, this paper investigated some options that might be considered when combining subjects to form a composite that can be used to rank order students across the country for entry into tertiary institutions, in a way that ensures students are not advantaged or disadvantaged by the subjects they choose or the Board they have chosen to accredit their performance.
While the procedures do have their limitations, the question becomes one of magnitude of error: do decisions that result from linking disparate scores become more accurate after scaling than they would be if the scores were not scaled at all? The answer to this must be yes. The results of this investigation are relatively straightforward.

Firstly, the application of the Rasch scaling model reproduces a rank ordering of subjects in terms of difficulty that is intrinsically valid; there is very little incongruence between a priori expectations of the ordering of difficulty and what was found. No examination system actually uses this option to solve the equating problem, but there is good research evidence that the option could meet the needs of systems. The Curriculum Council of Western Australia is currently investigating, once again, the prospect of using this measurement model to produce its Tertiary Entrance Score. The work is being pioneered by the Pearson Psychometric Laboratory (PPL) at the University of Western Australia (UWA). The seminal work on this option was carried out in 1990 (Tognolini, 1989), and it is currently very close to being implemented using the distribution of the sum of scores on the students' other subjects as the common measure, rather than a common examination. The equipercentile equating method is used in one form or another to scale for differences between subjects in New South Wales, in Queensland (using the Core Skills Test (CST) as the common measure) and in Victoria.

Secondly, the equipercentile linking showed the various relationships between composite scores and how, in the aggregate, the subjects taken by most students, the languages (MRK_041, MRK_042, MRK_043) as well as mathematics, physics and chemistry (MRK16, MRK17, MRK18), are likely dominating the composite score, making the composite seem more similar to the AIEEE than it should be. This suggests that a more extreme situation (where, say, the five easiest subjects taken by the most liberal Boards are compared against the five most difficult subjects offered by the most conservative Boards) might reveal a greater potential for bias in composite scores that do not use linking to adjust for differences in group ability and subject difficulty.

Thirdly, the correlations after the equipercentile linking suggest that there are substantial differences between the ordering produced by composite scores and the ordering produced by the AIEEE total score alone. This is a serious fairness concern given the current desire to mix subjects and Board results into a single tertiary determination. Finally, the analyses have shown that, with relative ease, and exploiting the fact that most students will always sit the AIEEE examination, little additional effort or time will be lost in providing some linking adjustment before reporting scores.
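The equipercentile method referred to above can be sketched in a few lines. The example uses synthetic data, and `equipercentile` is an illustrative helper, not code from the study: a score on one form is mapped to the score on the other form that sits at the same percentile rank:

```python
import numpy as np

# Hedged sketch of equipercentile linking (synthetic data, not the study's):
# a score on form X is mapped to the form-Y score at the same percentile rank,
# using empirical quantiles.
rng = np.random.default_rng(2)
x = rng.normal(60, 12, 50_000)   # scores on form X (e.g. one board)
y = rng.normal(50, 15, 50_000)   # scores on form Y (e.g. the anchor)

def equipercentile(score, x_scores, y_scores):
    rank = (x_scores <= score).mean()          # percentile rank of score in X
    return np.quantile(y_scores, min(rank, 1.0))

linked = equipercentile(72.0, x, y)
# 72 is one sd above the X mean, so the linked value should land roughly
# one sd above the Y mean (about 65 here).
assert 60.0 < linked < 70.0
```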
7.0 References
ACT. (2009). ACT’s College Readiness System: Meeting the challenge of a changing world. Iowa City, IA: ACT.
Albano, A. (2011). Statistical Methods for Test Score Equating. R Package Version 1.1-4. Installed from web http://www.r-project.org.
Angoff, W. H. (1971). Scales, norms, and equivalent scores. In: R.L.Thorndike (ed.), Educational measurement (2nd ed., pp. 508-600). Washington DC: American Council of Education.
Béguin, A. A. (2000). Robustness of equating high-stakes tests. Unpublished doctoral dissertation, Universiteit Twente, The Netherlands.
Braun, H. I., & Holland, P. W. (1993). Observed-score test equating: A mathematical analysis of some ETS equating procedures. In: P.W.Holland, & H. Wainer (Eds.), Differential Item Functioning (pp.25-29). Hillsdale, NJ: Erlbaum.
Dorans, N. J. (1990). Equating methods and sampling designs. Applied Measurement in Education, 3, 3-17.
Gulliksen, H. (1950). Theory of mental tests. New York: Wiley.
Holmes, S. E. (1982). Unidimensionality and vertical equating with the Rasch model. Journal of Educational Measurement, 19, 139-147.
Kolen, M. J. (1981). Comparison of traditional and item response theory methods for equating tests. Journal of Educational Measurement, 18, 1-11.
Kolen, M. J. (1985). Standard errors of Tucker equating. Applied Psychological Measurement, 9, 209-223.
Kolen, M. J., & Brennan, R. L. (1995). Test Equating. New York: Springer.
Kolen, M. J. & Brennan, R. L. (2004). Test Equating, Scaling, and Linking. (2nd ed.), New York: Springer.
Lord, F. M. (1977). Some item analysis and test theory for a system of computer-assisted test construction for individualized instruction. Applied Psychological Measurement, 1, 447-455.
Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Erlbaum.
Marco, G. L., Petersen, N. S. & Stewart, E. E. (1983). A test of the adequacy of curvilinear score equating models. In D. Weiss (Ed.), New horizons in testing (pp.147-176). New York: Academic.
Mislevy, R. J. (1992). Linking educational assessments: Concepts, issues, methods, and prospects. Princeton, NJ: ETS Policy Information Center.
Petersen, N. S., Kolen, M. J. & Hoover, H. D. (1989). Scaling, norming and equating. In R.L.Linn (Ed.), Educational Measurement (3rd ed.,pp.221-262). New York: American Council on Education and Macmillan.
Rasch, G. (1960/1980). Probabilistic models for some intelligence and attainment tests. Copenhagen: Danish Institute for Educational Research; expanded edition (1980), with foreword and afterword by B. D. Wright. Chicago: The University of Chicago Press.
Rasch, G. (1968). A mathematical theory of objectivity and its consequences for model construction. Paper presented at the European Meeting on Statistics, Econometrics and Management Sciences, Amsterdam, 2-7 September 1968.
Rasch, G. (1977). On specific objectivity: An attempt at formalizing the request for generality and validity of scientific statements. Danish Yearbook of Philosophy, 14, 58-94.
Thurstone, L. L. (1959). The measurement of values. Chicago: University of Chicago Press.
Tognolini, J., & Andrich, D. (1996). Analysis of profiles of students applying for entry to universities. Applied Measurement in Education, 9(4).
Tognolini, J. (1989). Psychometric profiling and aggregating of public examinations at the level of test scores. Unpublished doctoral thesis, Murdoch University, Western Australia.
Whitely, S. E. and Dawis, R. V. (1974), The nature of objectivity with the Rasch model. Journal of Educational Measurement, 11: 163–178.
APPENDIX E: Input from the Core Committee to Chairman, CBSE
(From Proceedings of the 2nd Meeting of the Core Committee held on 31st October, 2012)

During the first meeting of the Core Committee, the Chairman had explained that for admissions to NITs, 60% weight would be put on the students' performance in the JEE-Main examination and the remaining 40% weight would be on Board scores at the class XII level. The task of the Core Committee is restricted to recommending appropriate normalization of the scores without altering these weights.

The Chairman, CBSE, had informed the Core Committee that all the boards have agreed to provide scores data in a specified proforma within a stipulated time, and as such the time available for processing that data would be short. He had also informed the Group that, in view of the disparity of subjects chosen by students within a board and between different boards, the aggregate would be computed from five subjects (with equal weights on a language, three science subjects and one other subject).

The Core Committee considered some preliminary analyses of the Board and AIEEE (precursor of the JEE-Main) scores data, and took into account the following facts.

1. The aggregate as well as subject score distributions vary greatly from one board to another. Some boards award high scores to many students, while others generally award low scores.

2. The distributions of aggregate as well as subject scores obtained by students of different boards in the AIEEE vary considerably. Students of some boards generally do better in AIEEE, while students of some other boards generally perform poorly in AIEEE.
The Core Committee considered the following proposals for arriving at normalized Board scores.
Option 1. Use the percentile of the aggregate score1 of each student in his/her board examination of that year as the normalized Board score.

Option 2. Use different linear transformations2 for the aggregate scores of different boards, so that the set of all transformed scores within a board for a particular year has a fixed mean and a fixed standard deviation.

Option 3. Use different linear transformations for the aggregate scores of different boards, so that the set of all transformed scores within a board for a particular year has the same mean and standard deviation as the set of scores obtained by the students of that board in JEE-Main.

Option 4. Use different non-linear transformations for the aggregate scores of different boards, so that the set of all transformed scores within a board for a particular year matches specified percentiles of the set of scores obtained by the students of that board in JEE-Main.

Option 5. Use variations of the above options, where the scores of the 'failed candidates' in a board and students with negative aggregate score in JEE-Main are ignored while computing mean and standard deviation.

Option 6. Use variations of the above options, with subject-wise normalization before aggregation.

Option 7. Use Rasch models that take into account relative difficulty levels of different subjects.

1 The percentile of the score of a candidate in a particular examination is the percentage of candidates appearing in that examination who received the same or a lower score. This is different from the score expressed as a percentage, and the relation between percentile and percentage score varies from one examination to another. For example, in the year 2007, a student at the 5th percentile of the CBSE class XII examination received 24.4% in aggregate marks, and a student at the 80th percentile received 78.8%. In the same year, a student at the 5th percentile of the class XII examination of the Tamil Nadu board received 41.5% in aggregate marks, while a student at the 80th percentile received 76.5%.

2 A linear transformation of a score means multiplication/division of the score by one number and addition/subtraction of another number. For example, the z-score of a student in an examination is a linearly transformed version of the original score, obtained by subtracting the mean score (computed from all scores of that examination) from the student's score and then dividing the difference by the standard deviation of the scores. A further linear transformation of the z-score could produce a set of scores with any desired mean and standard deviation.

The Core Committee proceeded with the selection of the normalization scheme by reasoning as follows.

1. A Rasch (multi-level logistic) model, favoured by the Indian Centre for Assessment, Evaluation and Research (CAER), makes specific assumptions about the dependence of score on the difficulty level of a subject and a student's achievement. It is possible to check whether the data at hand fit the model, and large data sets are often found not to fit it. Further, in the present situation, time for analysis would be limited and the data would be vast. It would not be prudent to use complicated models of this kind for the purpose of normalizing scores of so many boards. Thus, Option 7 is ruled out.

2. Any method based on separate normalization of each subject score for each board would be unsuitable for the present purpose, as the time available for analysis would be short, and there may be issues with the format and contents of data files provided by the different boards in 2013, which would be the first year of compilation of multi-source data at such a scale. Thus, Option 6 (various versions of which were mentioned by CAER) is ruled out. However, the possibility of subject-wise normalization may be revisited in future, after a reliable mechanism for obtaining detailed scores from the various boards is put in place.

3. As different boards have different pass marks, and the pass percentages also vary somewhat from board to board, it was decided that Option 5 would not be used. This would also simplify computations, which would be an important issue at least in the year 2013.

4. Some versions of Option 3 had been studied at IIT Kanpur, while a version of Option 4 was suggested by the Australian Council for Educational Research (ACER). These two options use JEE-Main scores as an anchor for calibrating board scores. Anchoring is generally advocated as a strategy to eliminate the presumption that students of all the boards have the same distribution of 'merit' (implied in Options 1 and 2). However, the following counter-arguments were given by different members of the Core Committee.

(a) Anchoring with respect to the aggregate JEE-Main score does not eliminate assumptions; it merely replaces one assumption by another. The assumption implied by anchoring with respect to JEE-Main scores is that the latter score adequately represents the relevant 'merit'. This assumption is neither weaker nor more verifiable than the assumption of the same 'merit' distribution in all boards.
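The percentile definition used in Option 1 can be sketched directly. The helper below is illustrative, and the small score list is made up (though it includes the 2007 figures quoted in the footnote):

```python
# Sketch of the footnote's percentile definition: the percentage of candidates
# in the same examination with the same or a lower score. The helper and the
# small score list are illustrative, not from the committee's data.
def percentile(score: float, all_scores: list[float]) -> float:
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100.0 * at_or_below / len(all_scores)

board_scores = [24.4, 41.5, 55.0, 62.3, 76.5, 78.8, 81.2, 90.1]
print(percentile(62.3, board_scores))  # 50.0: half the candidates scored <= 62.3
```

Note that under this definition the top scorer is at the 100th percentile, and the percentile is computed only against candidates of the same examination, which is why the same percentage mark maps to very different percentiles in different boards.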
(b) There is no reason to presuppose that scores of JEE-Main, based only on a few subjects, would be a fairer indicator of ‘merit’. This notion is further weakened by the effect of highly specialized coaching, to which poor students have less access.
(c) Anchoring with respect to the aggregate JEE-Main score can only be done on the basis of scores obtained by candidates of a particular board that actually appeared for JEE-Main. This is mostly a subset of all the candidates appearing for the class XII examinations of that board. This subset may not be a representative sample of the board. Thus, there is the additional (hidden) assumption that sampling of students appearing for JEE-Main takes place similarly in all the boards.
(d) When anchoring with respect to the aggregate JEE-Main score is used, a candidate may be unfairly penalized for the weak performance of other students of his/her board in JEE-Main.
(e) Anchoring with respect to the aggregate JEE-Main score effectively increases the importance given to JEE-Main, and therefore distorts the decision to use 60 : 40 weight ratio on the JEE-Main scores and the board scores.
(f) If one is prepared to embrace the assumption that the JEE-Main score adequately represents the requisite ‘merit’, then a cleaner strategy (not involving the additional assumption mentioned in (c) above) is not to use board scores at all. However, it is the prevailing discomfort with this assumption that must have led to the recent initiative to move away from AIEEE scores alone and to give importance to board examinations. Moving back towards that assumption would indicate muddled thinking regarding its acceptability.
In view of (a)-(f) above, Options 3 and 4 were discarded.

5. Options 1 and 2 seek to address the disparity of marks distributions among the different boards. Option 2 uses a linear transformation of the aggregate scores to bring parity in the average value and the spread of scores across the boards; a variation of this method is currently used for admission to engineering colleges in the state of Kerala. However, even after this transformation, the scores from different boards continue to have different distributions. This difference can only be removed by a non-linear transformation that brings parity in the aggregate score distributions of the examinations conducted by all the boards. Therefore, Option 2 was discarded in favour of Option 1.
Following these arguments, the Core Committee chose Option 1 as the preferred normalization scheme.
The following is a summary of the Group's deliberations and decisions on some related issues.
Distribution of scores. It is possible to use a further transformation of the percentiles of aggregate scores so that the transformed percentiles have any desired distribution. For example, the suggestion from ACER was to ensure that the transformed percentiles have the normal distribution. It was observed that, as such, the percentiles have a uniform distribution over the range 0 to 100. Any further transformation would cause a higher concentration of 'normalized scores' in certain ranges and a lower concentration in others. Ranges with less concentration of scores would be more amenable to efficient discrimination for selection, at the expense of other ranges. Since the normalized scores would eventually be used for admission to different engineering colleges catering to a wide variety of students, it was felt that the best possible discrimination over the entire range of scores is desirable. This requirement corresponds to the choice of not transforming the percentiles at all.
Range of scores. In order to make the JEE-Main scores and the board scores comparable in all respects before the 60% and 40% weights are used for combining them, it was decided that the JEE-Main scores would also be replaced by the percentile of a student. Thus, the JEE-Main percentile and the Board examination percentile would be used as normalized score from the two sets of examinations. These normalized scores, as well as the weighted sum of scores, will be in the range 0 to 100.
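The decided combination can be sketched in one line; the weights come from the text, and the variable names are illustrative. Because both components are percentiles in [0, 100], the weighted sum also lies in [0, 100]:

```python
# Sketch of the decided 60:40 combination (weights from the text; names are
# illustrative). Both inputs are percentiles in [0, 100], so the result is too.
def composite(jee_main_percentile: float, board_percentile: float) -> float:
    return 0.6 * jee_main_percentile + 0.4 * board_percentile

print(composite(90.0, 75.0))  # 84.0
```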
Scores of different years. Sometimes students passing from school boards in different years compete together for admission to engineering colleges. The score distributions in a particular board vary from year to year. Therefore, it was decided that percentile of a student in a board would be computed with respect to all the students that appeared from that board in that particular year. This will also make the task of computing percentiles easier for the boards.
Smaller boards. It was noted that the assumption of a common distribution of 'merit' for all the boards in all the years may be problematic for boards with a small number of candidates. However, any attempt to rectify the 'problem' (e.g., by pooling scores of different boards and/or different years) would require further assumptions that are difficult to justify; in other words, the solution would be worse than the 'problem'. Therefore, it was decided that the smaller boards would not be treated differently.
Handling revision of board scores. Board scores of some students are modified after review. A modified score of a candidate may have an impact on the percentiles of other candidates. In fact, this difficulty applies to all the options considered above. It was decided that any revision of score after a cut-off date (26th June for 2013 admissions) would entail fresh computation of the percentile of the candidate concerned, but the percentiles of the other candidates would not be revised. The boards may be asked to provide the revised percentile corresponding to any score revised after the cut-off date. The NITs would be free to continue their existing policies on revised scores, or devise a new one.
Discrimination against ‘good’ boards. The Core Committee was informed of an apprehension that students of boards with a historical record of good performance in AIEEE might feel disadvantaged by the new admission criterion. However, the Core Committee felt that there is no possibility of discrimination against the students of these boards. Good performance in AIEEE or JEE-Main may not be an adequate measure of a student’s general academic standing, which is sought to be factored in through the percentile in the board examination. The pattern of education and examinations in a particular board may be better aligned (as compared to other boards) with AIEEE, and this may have led to better performance of the students of those boards in AIEEE. Besides, education/examinations in some boards may have more consistent standards than other boards. The students of such boards will
continue to benefit from these attributes. However, different levels of performance in AIEEE by students of different boards can never be regarded as proof of higher intrinsic ‘merit’ of students of one board compared to another. Any presumption of different ‘merit’ distributions among students of different boards should attract the burden of proof.
Post-selection performance. It was felt by the members of the Core Committee that any empirical study on the appropriateness of a particular scheme of normalization has to be based on the performance of students in engineering colleges after selection. Such data are not available at the moment. It would be desirable to initiate steps for collecting data on post-selection academic performance and collating that with pre-selection performance (in Boards/JEE-Main) of students, for meaningful analysis.
APPENDIX F
Analyses carried out for validation and fine tuning
The two procedures for normalization mentioned in Chapter 4 had been short-listed
in the second meeting of the “Joshi Committee” held on 30th November, 2012.
Several groups/individuals had been asked to compare these procedures on the
basis of marks of AIEEE and six Board examinations of 2012. This appendix is a
compilation of these findings. Some of these analyses are based on marks of fewer
than six boards.
Part 1: Analysis by Debasis Sengupta and Abhay G. Bhatt of ISI
The Alternative Normalization Schemes
The two procedures for normalization and computation of composite score (for drawing merit
list) are described in Table 1.
Table 1: Description of the two procedures for normalization and subsequent computation of composite score

Procedure 1 (adaptation of the Core Group's suggestion to the above format):
1. Note down the aggregate marks (A0) obtained by each student in JEE-Main.
2. Compute the percentile (P) of each student on the basis of aggregate scores in his/her own board.
3. Determine the JEE-Main aggregate marks corresponding to percentile P at the All-India level. Regard this as the normalized board score of the student (B1).
4. The composite marks used for drawing the merit list is A1 = 0.6A0 + 0.4B1.

Procedure 2 (adaptation of Prof. Jim Tognolini's suggestion to the above format):
1. Note down the aggregate marks (A0) obtained by each student in JEE-Main.
2. Compute the percentile (P) of each student on the basis of aggregate scores in his/her own board.
3. Consider the set of aggregate marks obtained by different students of the above board in JEE-Main. Determine the JEE-Main marks corresponding to percentile P computed from this set. Regard this as the normalized board score of the student (B2).
4. The composite marks used for drawing the merit list is A2 = 0.6A0 + 0.4B2.
Interpretation of the two procedures
Procedure 1 amounts to treating students at the same percentile from different boards at
par. The normalized board marks awarded to a student at this percentile (irrespective of the board) are according to the JEE-Main marking pattern.
Procedure 2 amounts to treating students at the same percentile from different boards
differently. The normalized board marks awarded to a student at this percentile varies from
one board to another. Specifically, the normalized board marks awarded to a student at a
certain percentile of a particular board is the same as the JEE-Main marks corresponding to
that percentile, computed from the set of JEE-Main marks received by students of that board
only.
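The two procedures can be sketched as a single routine whose only difference is the reference set used in step 3. This is a minimal sketch under two stated assumptions: a "less than or equal" percentile convention and a nearest-rank empirical quantile, neither of which is fixed by the report.

```python
def percentile_of(score, scores):
    # Percentile of `score` within `scores` (<= convention; an assumption).
    return 100.0 * sum(1 for s in scores if s <= score) / len(scores)

def marks_at_percentile(p, scores):
    # Marks corresponding to percentile p, via a nearest-rank empirical
    # quantile (the report does not fix an interpolation rule).
    ordered = sorted(scores)
    idx = max(0, min(len(ordered) - 1, round(p / 100.0 * len(ordered)) - 1))
    return ordered[idx]

def composite(a0, board_score, own_board_scores, jee_all, jee_own_board, procedure):
    # Steps 1-4 of Table 1; `procedure` selects the step-3 reference set.
    p = percentile_of(board_score, own_board_scores)    # step 2
    ref = jee_all if procedure == 1 else jee_own_board  # step 3
    b = marks_at_percentile(p, ref)                     # B1 or B2
    return 0.6 * a0 + 0.4 * b                           # step 4: A1 or A2
```

For example, with hypothetical JEE-Main marks `jee_all = [30, 60, ..., 300]` and a student at the 50th board percentile, Procedure 1 maps the percentile into the All-India marks list, whereas Procedure 2 would map it into the marks of that student's board alone.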
Description of Data and Plan of Analysis
The data available for analysis were marks of 2012 class XII examinations for all students of
CBSE, Assam (AM), Jharkhand (JH), Maharashtra (MR), Mizoram (MZ) and Uttarakhand
(UK) boards, marks of all students in the 2012 AIEEE examination (precursor of JEE-Main,
2013), and matched pair of board and AIEEE marks for a subset of students of the above six
boards. The following relevant sets of data were extracted in respect of the six boards.
1. Data Set A: Aggregate marks in board examinations 2012 (expressed as percentage)
and the corresponding percentiles for all students taking the examination.
2. Data Set B: Aggregate marks in AIEEE examination 2012 and the corresponding
percentiles computed from the set of students from the chosen board.
3. Data Set C: Aggregate marks in AIEEE examination 2012 and the corresponding
percentiles computed from the set of all students taking the examination.
4. Data Set D: Matched pair of AIEEE aggregate marks and board aggregate
percentage.
Note that Data Set C is common to all the boards, while the other three sets are compiled
separately for the six boards.
It was planned that the 2012 data would be used to study the normalized scores for analysis.
Accordingly, the composite scores A1 and A2 were computed for the students of six boards,
while treating the AIEEE marks as JEE-Main marks and the available board aggregate
marks (in percentage) as five-subject-aggregate marks. These composite scores were
compared with the original AIEEE aggregate marks (A0).
All-India ranks and percentiles on the basis of these scores can only be computed when the
above calculations are done for all the boards. For the limited purpose of the present
analysis, ranks and percentiles were computed from the set of scores from the six boards
only. This required computation of ranks (R1 and R2) and percentiles (P1 and P2)
corresponding to the composite scores A1 and A2, respectively, and fresh computation of
ranks (R0) and percentiles (P0) from the marks A0.
Results of Analysis
The tables below show the state-wise composition of students at different percentile ranges
on the basis of AIEEE marks only (A0), composite score by procedure 1 (A1) and composite
score by procedure 2 (A2), respectively.
Table 2: State-wise composition of students at different percentile ranges on the basis of
AIEEE marks only (A0)
Number of students in percentile range
Board P0<50 50<=P0<75 75<=P0<95 95<=P0 Grand Total
AM 3721 971 374 27 5093
CBSE 77689 51374 46625 12226 187914
JH 8724 1444 310 16 10494
MR 37677 9071 3654 545 50947
MZ 76 14 6 0 96
UK 3475 621 124 7 4227
Grand Total 131362 63495 51093 12821 258771
Table 3: State-wise composition of students at different percentile ranges on the basis
composite score by procedure 1 (A1)
Number of students in percentile range
Board P1<50 50<=P1<75 75<=P1<95 95<=P1 Grand Total
AM 1515 1998 1388 192 5093
CBSE 90310 47673 39010 10921 187914
JH 6363 2423 1588 120 10494
MR 29170 11617 8591 1569 50947
MZ 33 33 26 4 96
UK 2123 1044 970 90 4227
Grand Total 129514 64788 51573 12896 258771
Table 4: State-wise composition of students at different percentile ranges on the basis
composite score by procedure 2 (A2)
Number of students in percentile range
Board P2<50 50<=P2<75 75<=P2<95 95<=P2 Grand Total
AM 2865 1634 541 53 5093
CBSE 78527 52137 45300 11950 187914
JH 8485 1496 493 20 10494
MR 36469 8439 5141 898 50947
MZ 60 25 11 0 96
UK 3212 738 263 14 4227
Grand Total 129618 64469 51749 12935 258771
It may be observed that the results from the two procedures (Tables 3 and 4) differ vastly.
The difference can be directly attributed to the variation among the normalized board marks
(B2) awarded to students of different boards under Procedure 2, arising from the different
levels of their performance in the AIEEE examination. This fact is illustrated through Figure
1, which shows the normalized marks (B2) according to Procedure 2, awarded to students of
comparable percentiles of two boards: CBSE and Maharashtra.
Figure 1: Normalized board marks (B2) according to Procedure 2, awarded to students of
different percentiles of CBSE and Maharashtra Board
As an example, the normalized score (B2) awarded to a student at the 80th percentile of the
Maharashtra state board and that awarded to a student at about the 48th percentile of CBSE
will be the same, 46.
The vast difference in the pattern of AIEEE performance of students of different boards may
be attributed to various factors such as (a) different mediums of instruction, (b) different
amounts of emphasis to languages given by different boards, (c) different levels of alignment
of instructional pattern of different boards with the AIEEE, (d) different levels of access to
coaching, and so on. In the presence of such factors, the students of some boards receive relatively lower marks in AIEEE (A0) than students of other boards. Their composite scores
would also be generally less because of the 60% weight given to A0 while computing the
composite score. Using Procedure 2 amounts to subjecting these students to further
disadvantage by assigning smaller ‘normalized board score’ (B2) as compared to students of
other boards who have done better in AIEEE. The above analysis shows that the degree of
unequal treatment of students of different boards can be very high.
Procedure 2 amounts to using the JEE-Main scores for ‘tracking’ of marks of different
boards. However, the ‘tracking’ method is not meant for differential treatment of students at
comparable percentile of different large boards. In the present test case, it produces glaring
distortions that illustrate why Procedure 2 cannot be statistically justified.
[Figure 1 plot: transformed board score (y-axis, −100 to 350) against percentile in board (x-axis, 0 to 100), with separate curves for CBSE and MR]
Part 2: Analysis by B.M. Gupta of CBSE
Steps of computation
1. The following data files have been prepared in respect of a Board:
   a. Data Set A : The aggregate scores in 5 subjects of the board in a particular year (for ALL students of the board for whom 5 subjects are well defined), including the students who did not appear for AIEEE.
   b. Data Set B : The aggregate scores in AIEEE (for ALL students of the above board who appeared for AIEEE in that year).
   c. Data Set C : The aggregate scores in AIEEE (for ALL students who appeared for AIEEE in that year).
   d. Data Set D : Mapped data (AIEEE + Board) for all students of the said board in the year for whom matching was possible, i.e. all students of the board who appeared in AIEEE as well as in the Board examination in that year.

   B0 : Marks scored in the Board out of 360
   A0 : Marks scored in AIEEE out of 360
   R0 : All India AIEEE Rank
   P0 : AIEEE Percentile Rank
2. Data Set D has been extended by computing the following additional quantities for each student, at the All India level and at the Board level:
   1. Modified Board Score
      B1 (All India level): calculate the percentile rank of B0 w.r.t. Data Set A, then take the AIEEE score corresponding to this percentile rank as obtained from Data Set C.
      B2 (Board level): calculate the percentile rank of B0 w.r.t. Data Set A, then take the AIEEE score corresponding to this percentile rank as obtained from Data Set B.
   2. Modified Composite Score
      A1 = 0.6A0 + 0.4B1 (All India level); A2 = 0.6A0 + 0.4B2 (Board level)
   3. Modified Percentile Rank
      P1 = percentile rank of A1 w.r.t. Data Set C; P2 = percentile rank of A2 w.r.t. Data Set C
   4. Modified Rank
      R1: arrange all candidates (irrespective of Board) in descending order of modified composite score A1 and generate All India rank R1.
      R2: arrange all candidates (irrespective of Board) in descending order of modified composite score A2 and generate All India rank R2.
Data Used

CODE  | STATE       | Board Data Set A (records) | AIEEE Data Set B (records) | Mapped (AIEEE+Board) Data Set D (records)
AM    | Assam       | AMOUT12.DBF (156170)       | AMEEE12.DBF (11051)        | AMDDD12.DBF (5093)
JH    | Jharkhand   | JHOUT12.DBF (322929)       | JHEEE12.DBF (17768)        | JHMDDD12.DBF (10491)
MZ    | Mizoram     | MZOUT12.DBF (10795)        | MZEEE12.DBF (139)          | MZDDD12.DBF (96)
UK    | Uttarakhand | UKOUT12.DBF (131399)       | UKEEE12.DBF (8388)         | UKDDD12.DBF (4227)
MR    | Maharashtra | MROUT12.DBF (1141591)      | MREEE12.DBF (121110)       | MRDDD12.DBF (50947)
CBSE  | CBSE        | CBSOUT12.DBF (759358)      | CBSEEE12.DBF (319785)      | CBSDDD12.DBF (187914)
TOTAL | All 6       | (2522242)                  | (478241)                   | (258771)

Data Set C : EEEOUT12.DBF (1061824)

Note : Computations carried out up to four decimal places.
Results
TABLE 1 : Correlation Coefficients between Marks
• AIEEE score : A0
• Modified composite score (All India Based) : A1
• Modified composite score (Board Based) : A2
Sl. No. Board A0 and A1 A0 and A2 A1 and A2
1. AM 0.8270 0.8936 0.9846
2. JH 0.6240 0.7381 0.9814
3. UK 0.6806 0.7942 0.9749
4. MZ 0.7469 0.7833 0.9546
5. MR 0.8433 0.8715 0.9928
6. CBSE 0.9556 0.9481 0.9989
7. ALL 6 0.9041 0.9389 0.9771
TABLE 2 : Correlation Coefficients between Ranks
• All India Rank AIEEE : R0
• Modified All India Rank (All India Based) : R1
• Modified All India Rank (Board Based) : R2
Sl. No. Board R0 and R1 R0 and R2 R1 and R2
1. AM 0.7377 0.8425 0.9828
2. JH 0.5933 0.7120 0.9718
3. UK 0.5777 0.7156 0.9778
4. MZ 0.7686 0.8333 0.9921
5. MR 0.6003 0.7441 0.9753
6. CBSE 0.9464 0.9303 0.9987
7. ALL 0.8573 0.9105 0.9614
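The coefficients in Tables 1 and 2 are standard Pearson correlations (applied to marks in Table 1 and to ranks in Table 2). A self-contained sketch of the formula, since the report does not state which software was used:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient of two equal-length sequences:
    # covariance divided by the product of standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related sequences give a coefficient of 1.0:
print(round(pearson([1, 2, 3, 4], [10, 20, 30, 40]), 4))  # 1.0
```

Applying this to ranks, as in Table 2, is equivalent to the Spearman rank correlation when there are no ties.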
TABLE 3 : Distribution of Candidates on different percentile scores
Rows: P0 range × Board × type (P2 or P1 row). Columns: number of candidates whose P1/P2 falls in each of the ranges 95&ABV, 90-<95, 85-<90, 80-<85, 75-<80, 70-<75, 65-<70, 60-<65, 55-<60, 50-<55, 25-<50, 00-<25, followed by TOTAL.
95&
ABV AM
P2 186 53 8 3 --- --- --- --- --- --- --- --- 250
P1 186 49 5 5 1 2 2 --- --- --- --- --- 250
95&
ABV JH
P2 259 82 80 45 24 10 4 1 --- --- --- --- 505
P1 234 64 47 47 33 29 23 17 6 2 3 --- 505
95&
ABV UK
P2 130 33 12 16 8 3 --- 2 1 --- --- --- 205
P1 127 25 15 3 5 6 6 9 3 3 3 --- 205
95&
ABV MZ
P2 3 1 --- --- --- --- --- --- --- --- --- --- 4
P1 3 1 --- --- --- --- --- --- --- --- --- --- 4
95&
ABV MR
P2 1752 560 160 61 7 2 --- --- --- --- --- --- 2542
P1 1839 453 123 70 41 13 3 --- --- --- --- --- 2542
95&
ABV CBSE
P2 7461 1719 165 29 1 --- --- --- --- --- --- --- 9375
P1 7394 1846 125 10 --- --- --- --- --- --- --- --- 9375
95&
ABV TOT
P2 10147 2397 240 37 --- --- --- --- --- --- --- --- 12821
P1 9712 2787 297 25 --- --- --- --- --- --- --- --- 12821
90-
<95 AM
P2 44 106 41 25 15 8 2 2 1 --- --- --- 244
P1 48 89 39 15 14 13 10 5 4 2 5 --- 244
90-
<95 JH
P2 84 58 77 70 81 59 35 16 14 5 3 --- 502
P1 81 52 53 48 39 38 47 46 33 28 36 1 502
90-
<95 UK
P2 36 53 30 20 21 11 13 8 5 2 7 --- 206
P1 40 44 23 14 12 15 7 9 11 7 24 --- 206
90-
<95 MZ
P2 1 --- 2 --- --- --- --- --- 1 1 --- --- 5
P1 1 1 1 --- --- --- --- --- --- --- 2 --- 5
90-
<95 MR
P2 535 716 515 288 211 117 47 9 --- 1 --- --- 2439
P1 536 770 414 210 151 130 110 81 27 9 1 --- 2439
90-
<95 CBSE
P2 1692 4333 2215 616 159 35 9 --- --- --- --- --- 9059
P1 1682 4353 2380 556 78 10 --- --- --- --- --- --- 9059
90-
<95 TOT
P2 2314 5889 3188 944 234 53 8 --- --- --- --- --- 12630
P1 2070 5159 3749 1306 291 49 6 --- --- --- --- --- 12630
TABLE 3 : Distribution of Candidates on different percentile scores (contd.)
85-
<90 JH
P2 48 55 52 58 61 77 43 37 24 26 13 --- 494
P1 48 53 35 47 40 42 40 47 30 38 73 1 494
85-
<90 UK
P2 16 27 28 15 25 29 17 17 11 9 6 --- 200
P1 17 23 21 16 13 15 15 17 10 15 36 2 200
85-
<90 MZ
P2 --- 1 --- 2 1 1 --- --- --- --- --- --- 5
P1 --- 1 1 1 1 --- 1 --- --- --- --- --- 5
85-
<90 MR
P2 169 502 486 423 338 258 225 128 63 8 4 --- 2604
P1 131 566 469 333 243 204 182 169 146 103 58 --- 2604
85-
<90 CBSE
P2 214 2267 3215 2149 944 341 112 37 4 --- --- --- 9283
P1 274 2131 3318 2377 915 215 50 3 --- --- --- --- 9283
85-
<90 TOT
P2 376 2911 4330 3018 1371 533 167 57 8 --- --- --- 12771
P1 653 2073 3559 3455 2026 756 197 46 6 --- --- --- 12771
80-
<85 AM
P2 2 33 41 35 43 28 18 16 16 10 7 --- 249
P1 1 36 43 22 24 23 25 14 14 12 34 1 249
80-
<85 JH
P2 40 60 35 70 62 56 51 59 36 40 51 1 561
P1 42 54 39 62 41 47 29 30 41 46 126 4 561
80-
<85 UK
P2 6 23 23 19 16 15 13 21 18 12 18 --- 184
P1 7 22 18 19 11 14 11 4 9 16 49 4 184
80-
<85 MZ
P2 --- --- 1 --- --- --- 1 1 --- --- 1 --- 4
P1 --- --- 1 --- --- --- --- 1 --- 1 --- 1 4
80-
<85 MR
P2 52 279 391 359 339 301 273 216 136 103 54 --- 2503
P1 29 307 410 319 279 222 204 189 165 152 227 --- 2503
80-
<85 CBSE
P2 25 759 2139 2656 2000 1137 551 196 82 21 4 --- 9570
P1 41 728 2037 2789 2302 1183 362 104 22 2 --- --- 9570
80-
<85 TOT
P2 74 1061 2678 3518 2795 1628 826 317 111 41 5 --- 13054
P1 274 965 1772 2809 3014 2353 1232 468 121 40 6 --- 13054
75-
<80 AM
P2 2 7 38 34 36 19 29 25 17 13 25 1 246
P1 2 14 34 29 26 16 16 22 21 14 43 9 246
75-
<80 JH
P2 17 39 32 35 56 44 64 53 72 40 82 1 535
P1 19 38 32 36 42 37 28 43 36 53 159 12 535
75-
<80 UK
P2 10 21 23 25 10 22 24 20 16 12 43 1 227
P1 10 24 18 24 9 14 13 15 16 12 59 13 227
75-
<80 MZ
P2 --- 1 --- --- --- --- --- 1 1 --- 1 --- 4
P1 --- 1 --- --- --- --- --- --- 1 --- 1 1 4
75-
<80 MR
P2 18 166 280 272 253 262 256 244 207 167 165 --- 2290
P1 7 175 310 255 215 210 192 174 185 180 386 1 2290
75-
<80 CBSE
P2 1 254 997 1878 2180 1762 1168 691 325 103 60 --- 9419
P1 3 252 932 1825 2320 2036 1291 554 146 47 13 --- 9419
75-
<80 TOT
P2 19 376 1214 2328 2826 2386 1719 966 476 208 120 --- 12638
P1 136 578 809 1452 2207 2610 2295 1476 713 238 124 --- 12638
TABLE 3 : Distribution of Candidates on different percentile scores (contd.)
70-
<75 AM
P2 --- 2 21 23 26 22 31 27 22 13 42 1 230
P1 --- 4 24 23 15 20 18 22 22 17 56 9 230
70-
<75 JH
P2 14 24 25 23 18 26 26 22 28 24 68 7 305
P1 15 28 23 22 16 22 15 15 22 24 86 17 305
70-
<75 UK
P2 4 23 27 22 17 22 14 18 11 10 41 3 212
P1 3 29 22 22 14 18 14 10 13 7 46 14 212
70-
<75 MZ
P2 --- --- --- 1 --- --- 1 --- 1 2 1 --- 6
P1 --- --- --- 1 --- --- 1 --- --- 1 3 --- 6
70-
<75 MR
P2 13 108 167 249 239 255 229 237 223 197 365 2 2284
P1 2 110 195 260 217 217 178 182 157 166 592 8 2284
70-
<75 CBSE
P2 1 52 403 1102 1722 1951 1663 1296 751 424 269 --- 9634
P1 1 57 361 1004 1732 2119 1978 1370 668 233 111 --- 9634
70-
<75 TOT
P2 7 153 597 1373 2153 2448 2201 1668 1130 604 515 --- 12849
P1 54 404 520 815 1383 1979 2315 2237 1646 907 589 --- 12849
65-
<70 AM
P2 --- 6 18 19 17 31 18 19 26 24 63 3 244
P1 --- 9 19 19 14 25 15 12 19 21 76 15 244
65-
<70 JH
P2 25 46 34 29 54 62 60 55 60 57 215 17 714
P1 30 44 43 38 44 53 37 44 39 46 242 54 714
65-
<70 UK
P2 4 5 12 16 28 11 14 20 20 22 65 8 225
P1 3 8 13 14 27 10 11 13 15 14 73 24 225
65-
<70 MZ
P2 --- 1 --- --- 3 --- --- --- --- --- 1 --- 5
P1 --- 1 --- 1 2 --- --- --- --- --- 1 --- 5
65-
<70 MR
P2 3 78 160 235 276 267 265 284 270 261 726 3 2828
P1 2 70 180 274 263 250 213 200 220 207 892 57 2828
65-
<70 CBSE
P2 --- 9 151 515 1085 1599 1663 1497 1184 831 772 --- 9306
P1 --- 19 139 432 1006 1575 1878 1815 1329 688 425 --- 9306
65-
<70 TOT
P2 --- 77 307 677 1291 1834 1951 1857 1533 1088 1439 1 12055
P1 19 303 440 522 724 1184 1622 1918 1944 1573 1805 1 12055
60-
<65 AM
P2 --- 2 12 15 21 34 26 27 26 20 85 7 275
P1 --- 5 10 21 25 23 25 20 18 21 93 14 275
60-
<65 JH
P2 4 20 24 22 29 25 33 31 37 33 151 25 434
P1 7 22 28 21 32 18 30 22 35 19 152 48 434
60-
<65 UK
P2 1 8 8 9 12 10 14 12 4 14 35 7 134
P1 1 10 10 8 11 10 16 6 5 8 34 15 134
60-
<65 MZ
P2 --- --- --- --- --- --- --- 1 1 --- --- 1 3
P1 --- --- --- --- --- --- 1 --- 1 --- --- 1 3
60-
<65 MR
P2 --- 44 81 153 186 188 198 251 245 272 838 36 2492
P1 --- 36 98 177 195 178 186 197 204 184 907 130 2492
60-
<65 CBSE
P2 --- 3 72 254 631 1027 1437 1465 1360 1129 1614 14 9006
P1 --- 6 67 219 527 992 1446 1693 1653 1208 1195 --- 9006
60-
<65 TOT
P2 --- 40 143 456 857 1469 1885 2111 1978 1713 2859 20 13531
P1 10 215 400 409 568 817 1248 1701 2175 2067 3872 49 13531
TABLE 3 : Distribution of Candidates on different percentile scores (contd.)
55-
<60 AM
P2 1 2 5 10 23 25 35 23 27 25 106 13 295
P1 --- 3 10 13 25 29 25 19 22 25 92 32 295
55-
<60 JH
P2 10 34 32 19 26 28 36 51 36 48 189 36 545
P1 15 33 37 11 43 29 31 37 24 41 180 64 545
55-
<60 UK
P2 3 4 14 15 16 11 15 21 25 23 86 24 257
P1 3 6 17 15 14 13 17 16 18 20 81 37 257
55-
<60 MZ
P2 --- --- 1 --- --- --- --- 1 1 --- 2 --- 5
P1 --- --- 1 --- --- --- --- 1 1 --- 2 --- 5
55-
<60 MR
P2 3 29 97 134 151 178 159 190 190 206 728 50 2115
P1 1 25 110 165 151 175 165 159 156 145 727 136 2115
55-
<60 CBSE
P2 --- 1 18 102 299 600 962 1268 1323 1248 2665 74 8560
P1 --- 2 17 91 233 543 902 1299 1605 1471 2394 3 8560
55-
<60 TOT
P2 --- 11 98 213 522 919 1343 1630 1854 1808 4559 148 13105
P1 2 141 287 394 395 533 822 1149 1500 1928 5735 219 13105
50-
<55 AM
P2 1 1 1 12 9 11 11 14 19 20 61 12 172
P1 --- 2 6 15 8 12 8 15 15 15 54 22 172
50-
<55 JH
P2 3 21 24 18 7 27 21 32 39 38 164 67 461
P1 6 24 23 19 23 20 26 28 26 33 138 95 461
50-
<55 UK
P2 --- 2 3 8 7 10 9 6 11 17 62 19 154
P1 --- 2 4 7 13 9 6 10 12 11 44 36 154
50-
<55 MZ
P2 --- --- --- --- 1 --- 1 --- --- 1 4 --- 7
P1 --- --- --- --- 1 --- 1 1 --- 1 2 1 7
50-
<55 MR
P2 --- 22 65 127 144 169 172 210 251 298 1315 199 2972
P1 --- 11 84 152 161 185 201 170 224 226 1182 376 2972
50-
<55 CBSE
P2 --- --- 12 55 186 419 749 1057 1309 1420 4143 227 9577
P1 --- --- 12 50 157 327 644 1021 1443 1711 4115 97 9577
50-
<55 TOT
P2 1 10 49 120 298 535 860 1166 1430 1605 5398 483 11955
P1 2 91 254 307 336 371 534 731 976 1322 6484 547 11955
25-
<50 AM
P2 --- --- 2 28 35 44 58 84 72 101 533 343 1300
P1 --- --- 13 48 57 61 70 70 77 73 433 398 1300
25-
<50 JH
P2 11 58 78 82 66 76 105 115 126 135 1039 744 2635
P1 18 70 104 103 97 127 118 111 125 120 818 824 2635
25-
<50 UK
P2 --- 13 30 44 39 47 53 44 73 62 444 310 1159
P1 --- 18 47 53 50 55 66 65 64 55 352 334 1159
25-
<50 MZ
P2 --- --- 1 2 --- 2 2 1 --- 1 9 6 24
P1 --- --- --- 2 1 3 1 2 --- 1 8 6 24
25-
<50 MR
P2 2 39 121 208 317 445 563 614 685 761 5397 2927 12079
P1 --- 21 131 281 487 565 667 678 663 744 4493 3349 12079
25-
<50 CBSE
P2 --- --- 8 36 181 504 1035 1780 2863 3794 27832 9095 47128
P1 --- --- 9 33 130 384 807 1468 2402 3806 30254 7835 47128
25-
<50 TOT
P2 --- 9 89 242 520 997 1751 2795 3801 4915 34876 13191 63186
P1 2 201 679 1069 1444 1491 1760 2059 2575 3367 32932 15607 63186
TABLE 3 : Distribution of Candidates on different percentile scores (contd.)
00-
<25 AM
P2 --- --- --- 3 7 9 10 12 21 29 350 894 1335
P1 --- --- 4 8 17 15 28 40 32 44 373 774 1335
00-
<25 JH
P2 9 28 32 53 41 35 46 52 53 80 647 1727 2803
P1 9 43 61 69 74 65 100 85 107 76 609 1505 2803
00-
<25 UK
P2 --- --- 2 2 12 21 25 22 17 28 250 685 1064
P1 --- --- 4 16 31 34 29 37 35 44 256 578 1064
00-
<25 MZ
P2 --- 1 --- --- --- 1 --- --- --- --- 5 17 24
P1 --- --- 1 --- --- 1 --- --- 2 1 5 14 24
00-
<25 MR
P2 --- 3 25 39 86 106 157 166 277 270 3143 9527 13799
P1 --- 2 22 52 145 200 245 347 400 429 3274 8683 13799
00-
<25 CBSE
P2 --- --- --- 3 4 19 45 117 190 416 9627 37576 47997
P1 --- --- --- 3 3 12 37 61 131 229 8474 39047 47997
00-
<25 TOT
P2 --- 1 4 18 68 139 223 376 618 957 14910 50862 68176
P1 --- 23 174 368 555 794 906 1155 1284 1499 13148 48270 68176
TOT AM P2 254 255 253 256 255 254 254 256 254 255 1273 1274 5093
P1 254 255 254 255 254 255 254 256 254 255 1273 1274 5093
TOT JH P2 524 525 525 524 525 525 524 524 525 526 2622 2625 10494
P1 524 525 525 523 524 527 524 525 524 526 2622 2625 10494
TOT UK P2 210 212 212 211 211 212 211 211 212 211 1057 1057 4227
P1 211 211 212 211 210 213 211 211 211 212 1057 1057 4227
TOT MZ P2 4 5 5 5 5 4 5 5 5 5 24 24 96
P1 4 5 5 5 5 4 5 5 5 5 24 24 96
TOT MR P2 2547 2546 2548 2548 2547 2548 2544 2549 2547 2544 12735 12744 50947
P1 2547 2546 2546 2548 2548 2549 2546 2546 2547 2545 12739 12740 50947
TOT CBSE P2 9394 9397 9395 9395 9392 9394 9394 9404 9391 9386 46986 46986 187914
P1 9395 9394 9397 9389 9403 9396 9395 9388 9399 9395 46981 46982 187914
TOT TOT P2 12938 12935 12937 12944 12935 12941 12934 12943 12939 12939 64681 64705 258771
P1 12934 12940 12940 12931 12943 12937 12937 12940 12940 12941 64695 64693 258771
TABLE 4 : Distribution of Candidates on different ranks
Rows: R1/R2 range. Columns — R0 1-1000: MR, CBSE, TOTAL; R0 1001-2000: AM, JH, MR, CBSE, TOTAL (each board given as paired sub-columns, R2 then R1).
1-1000 33 37 655 599 688 636 --- 1 --- 1 14 24 196 189 210 215
1001-2000 3 3 221 249 224 252 --- --- 1 1 10 17 315 269 326 287
2001-3000 4 --- 61 75 65 75 1 --- --- --- 9 3 224 218 234 221
3001-4000 --- --- 13 26 13 26 --- --- --- --- 6 2 122 141 128 143
4001-5000 --- --- 7 8 7 8 --- --- 1 --- 3 --- 49 74 53 74
5001-6000 --- --- --- --- --- --- --- --- --- --- 2 --- 24 32 26 32
6001-7000 --- --- --- 1 --- 1 --- --- --- --- 2 --- 9 16 11 16
7001-8000 --- --- 2 1 2 1 --- --- --- --- --- --- 5 4 5 4
8001-9000 --- --- --- 1 --- 1 --- --- --- --- --- --- --- 2 --- 2
9001-10000 --- --- 1 --- 1 --- --- --- --- --- --- 1 3 3 3 4
10001-15000 --- --- --- --- --- --- --- --- --- --- 1 --- 2 2 3 2
15001-20000 --- --- --- --- --- --- --- --- --- --- --- --- 1 --- 1 ---
20001-25000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
25001-30000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
30001-40000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
40001-50000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
50001-100000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
100001-150000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 40 40 960 960 1000 1000 1 1 2 2 47 47 950 950 1000 1000
Rows: R1/R2 range. Columns — R0 2001-3000: AM, MR, CBSE, TOTAL; R0 3001-4000: AM, JH, MR, CBSE, TOTAL (each board given as paired sub-columns, R2 then R1).
1-1000 1 1 5 13 66 75 72 89 1 1 --- --- --- 3 20 29 21 33
1001-2000 --- --- 8 11 213 170 221 181 --- --- --- --- 6 12 112 98 118 110
2001-3000 --- --- 5 5 213 180 218 185 --- --- --- --- 8 11 157 126 165 137
3001-4000 --- --- 5 5 166 168 171 173 --- --- --- --- 3 5 164 137 167 142
4001-5000 --- --- 3 2 140 138 143 140 --- --- --- --- 4 3 147 143 151 146
5001-6000 --- --- 2 1 80 107 82 108 --- --- --- --- 4 --- 129 122 133 122
6001-7000 --- --- 2 --- 26 58 28 58 --- --- --- --- 2 1 76 106 78 107
7001-8000 --- --- 4 --- 23 24 27 24 --- --- --- --- 3 --- 62 73 65 73
8001-9000 --- --- 1 --- 10 13 11 13 --- --- --- 1 2 --- 42 54 44 55
9001-10000 --- --- 2 --- 6 8 8 8 --- --- --- --- 2 1 15 29 17 30
10001-15000 --- --- --- 1 15 18 15 19 --- --- --- --- 2 --- 27 37 29 37
15001-20000 --- --- 1 --- 3 2 4 2 --- --- 1 --- --- --- 10 7 11 7
20001-25000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- 1 1 1 1
25001-30000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
30001-40000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
40001-50000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
50001-100000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
100001-150000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 1 1 38 38 961 961 1000 1000 1 1 1 1 36 36 962 962 1000 1000
TABLE 4 : Distribution of Candidates on different ranks (contd.)
Rows: R1/R2 range. Columns — R0 4001-5000: AM, JH, UK, MR, CBSE, TOTAL (each board given as paired sub-columns, R2 then R1).
1-1000 --- --- --- --- --- --- --- 3 9 15 9 18
1001-2000 --- 1 --- 1 --- 1 3 10 60 63 63 76
2001-3000 --- 1 --- --- --- --- 6 14 123 95 129 110
3001-4000 1 --- --- --- --- --- 9 9 119 102 129 111
4001-5000 --- --- 1 --- --- --- 6 4 131 106 138 110
5001-6000 --- --- --- --- --- --- 5 1 123 112 128 113
6001-7000 --- --- --- --- --- --- 6 1 106 110 112 111
7001-8000 1 --- --- --- 1 --- 1 --- 80 98 83 98
8001-9000 --- --- --- --- --- --- 4 --- 72 75 76 75
9001-10000 --- --- --- --- --- --- --- --- 37 55 37 55
10001-15000 --- --- --- --- --- --- 2 1 71 97 73 98
15001-20000 --- --- --- --- --- --- 1 2 13 18 14 20
20001-25000 --- --- --- --- --- --- 2 --- 6 5 8 5
25001-30000 --- --- --- --- --- --- --- --- 1 --- 1 ---
30001-40000 --- --- --- --- --- --- --- --- --- --- --- ---
40001-50000 --- --- --- --- --- --- --- --- --- --- --- ---
50001-100000 --- --- --- --- --- --- --- --- --- --- --- ---
100001-150000 --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 2 2 1 1 1 1 45 45 951 951 1000 1000
Rows: R1/R2 range. Columns — R0 5001-6000: AM, JH, UK, MR, CBSE, TOTAL (each board given as paired sub-columns, R2 then R1).
1-1000 --- 1 --- --- --- --- --- 4 --- 3 --- 8
1001-2000 --- 1 --- --- --- 1 4 6 26 27 30 35
2001-3000 1 1 --- --- --- --- 4 10 73 68 78 79
3001-4000 --- --- --- --- --- --- 7 7 89 65 96 72
4001-5000 --- --- --- --- 1 --- 5 2 110 89 116 91
5001-6000 1 --- --- --- --- --- 1 5 101 95 103 100
6001-7000 --- --- --- --- --- --- 4 3 121 86 125 89
7001-8000 --- --- --- --- --- --- 2 --- 101 105 103 105
8001-9000 --- --- --- --- --- --- 2 --- 85 97 87 97
9001-10000 1 --- --- --- --- --- --- --- 61 78 62 78
10001-15000 --- --- --- --- --- --- 8 2 145 187 153 189
15001-20000 --- --- --- --- --- --- 2 --- 25 37 27 37
20001-25000 --- --- --- 1 --- --- --- --- 16 18 16 19
25001-30000 --- --- 1 --- --- --- --- --- 2 1 3 1
30001-40000 --- --- --- --- --- --- --- --- 1 --- 1 ---
40001-50000 --- --- --- --- --- --- --- --- --- --- --- ---
50001-100000 --- --- --- --- --- --- --- --- --- --- --- ---
100001-150000 --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 3 3 1 1 1 1 39 39 956 956 1000 1000
TABLE 4 : Distribution of Candidates on different ranks (contd.)
Rows: R1/R2 range. Columns — R0 6001-7000: AM, JH, UK, MR, CBSE, TOTAL (each board given as paired sub-columns, R2 then R1).
1-1000 --- 1 --- --- --- --- --- --- --- --- --- 1
1001-2000 1 1 --- 1 --- --- --- 2 10 20 11 24
2001-3000 --- --- --- 1 --- --- 2 5 38 33 40 39
3001-4000 1 --- --- --- --- 1 3 9 78 62 82 72
4001-5000 --- --- --- --- --- --- 2 6 80 65 82 71
5001-6000 --- --- 1 --- --- --- 3 5 97 80 101 85
6001-7000 --- --- --- --- --- --- 4 3 108 88 112 91
7001-8000 --- --- --- --- --- --- 3 1 83 89 86 90
8001-9000 --- --- 1 --- --- --- 4 --- 93 67 98 67
9001-10000 --- --- --- --- --- --- 2 1 79 90 81 91
10001-15000 --- --- --- --- 1 --- 8 --- 219 259 228 259
15001-20000 --- --- --- --- --- --- 1 --- 50 80 51 80
20001-25000 --- --- --- --- --- --- --- --- 17 23 17 23
25001-30000 --- --- --- --- --- --- --- 1 8 4 8 5
30001-40000 --- --- --- --- --- --- 1 --- 2 2 3 2
40001-50000 --- --- --- --- --- --- --- --- --- --- --- ---
50001-100000 --- --- --- --- --- --- --- --- --- --- --- ---
100001-150000 --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 2 2 2 2 1 1 33 33 962 962 1000 1000
R0 range: 7001-8000 (rows: R1 and R2 range)
AM JH MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- ---
1001-2000 1 1 --- --- --- 4 4 12 5 17
2001-3000 --- 2 --- 1 4 9 28 29 32 41
3001-4000 --- --- --- 1 4 7 53 42 57 50
4001-5000 --- 1 --- --- 6 5 61 41 67 47
5001-6000 --- --- --- --- 4 5 72 63 76 68
6001-7000 2 --- --- --- 2 4 78 69 82 73
7001-8000 --- --- --- --- 2 2 79 69 81 71
8001-9000 --- --- --- --- 2 --- 78 67 80 67
9001-10000 --- --- 1 --- 4 --- 78 67 83 67
10001-15000 1 --- 1 --- 8 3 294 313 304 316
15001-20000 --- --- --- --- 1 --- 95 136 96 136
20001-25000 --- --- --- --- 2 1 19 31 21 32
25001-30000 --- --- --- --- --- --- 8 10 8 10
30001-40000 --- --- --- --- 1 --- 7 5 8 5
40001-50000 --- --- --- --- --- --- --- --- --- ---
50001-100000 --- --- --- --- --- --- --- --- --- ---
100001-150000 --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- ---
TOTAL 4 4 2 2 40 40 954 954 1000 1000
TABLE 4 : Distribution of Candidates on different ranks (contd.)
R0 range: 8001-9000 (rows: R1 and R2 range)
AM JH UK MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- 1 --- --- --- --- 2 4 --- 5 2 10
2001-3000 --- 2 --- --- --- 1 2 7 18 18 20 28
3001-4000 --- --- --- --- --- --- 5 9 48 45 53 54
4001-5000 1 --- --- --- --- --- 2 11 56 47 59 58
5001-6000 --- --- --- --- --- --- 6 1 53 45 59 46
6001-7000 1 --- --- --- --- --- 4 5 80 53 85 58
7001-8000 --- --- --- --- --- --- 6 2 74 67 80 69
8001-9000 1 --- --- --- 1 --- 4 3 74 69 80 72
9001-10000 --- --- --- --- --- --- 1 --- 74 61 75 61
10001-15000 --- --- --- --- --- --- 6 3 284 291 290 294
15001-20000 --- --- --- --- --- --- 6 --- 124 162 130 162
20001-25000 --- --- --- --- --- --- 1 --- 37 56 38 56
25001-30000 --- --- --- --- --- --- --- --- 18 21 18 21
30001-40000 --- --- --- 1 --- --- 2 3 6 7 8 11
40001-50000 --- --- 1 --- --- --- 1 --- 1 --- 3 ---
50001-100000 --- --- --- --- --- --- --- --- --- --- --- ---
100001-150000 --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 3 3 1 1 1 1 48 48 947 947 1000 1000
R0 range: 9001-10000 (rows: R1 and R2 range)
AM JH MR CBSE TOTAL
R1 R2 R1 R2 R1 R2 R1 R2 R1 R2
1-1000 --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- 3 --- 3 --- 6
2001-3000 --- 1 --- 2 3 6 10 15 13 24
3001-4000 --- 1 --- --- 5 9 30 27 35 37
4001-5000 --- --- --- --- 2 5 36 31 38 36
5001-6000 --- --- --- --- 5 7 64 49 69 56
6001-7000 --- --- --- --- 4 6 63 51 67 57
7001-8000 1 --- 2 --- 3 --- 62 50 68 50
8001-9000 --- --- --- --- 2 2 58 59 60 61
9001-10000 --- --- --- --- 6 4 71 46 77 50
10001-15000 1 --- --- --- 6 3 303 283 310 286
15001-20000 --- --- --- --- 6 2 156 202 162 204
20001-25000 --- --- --- --- 3 2 54 84 57 86
25001-30000 --- --- --- --- 2 1 19 26 21 27
30001-40000 --- --- --- 1 3 --- 16 17 19 18
40001-50000 --- --- 1 --- --- --- 3 2 4 2
50001-100000 --- --- --- --- --- --- --- --- --- ---
100001-150000 --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- ---
TOTAL 2 2 3 3 50 50 945 945 1000 1000
TABLE 4 : Distribution of Candidates on different ranks (contd.)
R0 range: 10001-15000 (rows: R1 and R2 range)
AM JH UK MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- 2 --- --- --- 2
2001-3000 --- 9 --- 2 --- --- 2 24 3 16 5 51
3001-4000 1 3 --- --- --- 3 14 23 48 54 63 83
4001-5000 --- 1 --- --- --- --- 19 27 103 103 122 131
5001-6000 3 1 2 --- --- --- 17 26 129 104 151 131
6001-7000 3 2 --- --- --- --- 9 17 177 139 189 158
7001-8000 3 --- --- --- --- --- 18 14 233 167 254 181
8001-9000 2 --- --- 1 --- --- 12 10 233 198 247 209
9001-10000 --- 1 --- --- --- --- 7 14 267 211 274 226
10001-15000 2 1 --- --- 3 --- 37 42 1296 1096 1338 1139
15001-20000 2 --- --- 3 --- --- 35 14 1085 1096 1122 1113
20001-25000 2 --- 1 --- --- --- 28 6 648 842 679 848
25001-30000 --- --- --- --- --- --- 15 4 268 396 283 400
30001-40000 --- --- 3 1 --- --- 9 7 185 256 197 264
40001-50000 --- --- 1 --- --- --- 8 1 59 58 68 59
50001-100000 --- --- --- --- --- --- 1 --- 7 5 8 5
100001-150000 --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 18 18 7 7 3 3 231 231 4741 4741 5000 5000
R0 range: 15001-20000 (rows: R1 and R2 range)
AM JH UK MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- --- --- --- --- ---
2001-3000 --- 3 1 2 --- 2 --- 2 --- 1 1 10
3001-4000 2 2 --- 2 --- 1 1 20 2 3 5 28
4001-5000 --- 2 --- 1 --- --- 8 24 10 19 18 46
5001-6000 --- 2 1 2 --- 1 16 22 32 36 49 63
6001-7000 1 1 1 1 --- --- 21 21 59 53 82 76
7001-8000 --- 1 --- 1 1 --- 8 21 76 69 85 92
8001-9000 2 1 --- --- 1 --- 10 15 107 82 120 98
9001-10000 --- --- --- --- --- --- 13 17 131 86 144 103
10001-15000 4 1 2 3 1 --- 49 55 882 660 938 719
15001-20000 1 --- 3 4 1 --- 35 29 1005 864 1045 897
20001-25000 2 --- 1 1 --- --- 32 10 888 855 923 866
25001-30000 1 --- 3 --- --- --- 19 4 639 762 662 766
30001-40000 --- --- 5 2 --- --- 26 5 641 895 672 902
40001-50000 --- --- 1 2 --- --- 7 6 173 249 181 257
50001-100000 --- --- 4 1 --- --- 7 1 64 75 75 77
100001-150000 --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 13 13 22 22 4 4 252 252 4709 4709 5000 5000
TABLE 4 : Distribution of Candidates on different ranks (contd.)
R0 range: 20001-25000 (rows: R1 and R2 range)
AM JH UK MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- --- --- --- --- ---
2001-3000 --- --- --- --- --- --- --- --- --- --- --- ---
3001-4000 --- --- --- 1 --- --- 1 6 --- --- 1 7
4001-5000 --- 4 --- --- --- 3 2 14 1 6 3 27
5001-6000 --- 6 --- 2 --- 1 9 18 8 7 17 34
6001-7000 --- 3 --- 3 --- 1 8 17 9 11 17 35
7001-8000 --- 5 --- 2 --- 1 10 18 25 25 35 51
8001-9000 --- 1 1 1 --- --- 9 9 37 39 47 50
9001-10000 --- 3 --- --- 2 --- 13 17 58 47 73 67
10001-15000 8 1 1 3 1 --- 36 61 516 358 562 423
15001-20000 9 2 4 --- 1 --- 31 30 746 556 791 588
20001-25000 5 1 3 1 2 --- 32 21 831 707 873 730
25001-30000 1 --- --- --- --- --- 27 19 773 766 801 785
30001-40000 3 --- 3 --- --- --- 33 5 1038 1208 1077 1213
40001-50000 --- --- 1 --- --- --- 22 5 436 659 459 664
50001-100000 --- --- 2 2 --- --- 14 7 228 317 244 326
100001-150000 --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 26 26 15 15 6 6 247 247 4706 4706 5000 5000
R0 range: 25001-30000 (rows: R1 and R2 range)
AM JH UK MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- --- --- --- --- ---
2001-3000 --- --- --- --- --- --- --- --- --- --- --- ---
3001-4000 --- --- --- 1 --- --- --- 1 --- --- --- 2
4001-5000 --- 1 1 --- --- 1 --- 6 --- --- 1 8
5001-6000 --- 4 --- 1 1 2 2 17 --- --- 3 24
6001-7000 --- 1 --- 1 --- 2 5 18 1 1 6 23
7001-8000 --- 4 --- 1 --- --- 14 22 4 8 18 35
8001-9000 --- 1 --- 3 --- 1 7 14 13 17 20 36
9001-10000 --- 6 --- --- --- 2 12 20 20 15 32 43
10001-15000 5 5 1 2 --- 1 53 89 267 186 326 283
15001-20000 5 1 1 4 3 --- 55 50 454 313 518 368
20001-25000 3 1 4 4 1 1 38 27 657 483 703 516
25001-30000 7 1 2 2 2 --- 24 14 750 607 785 624
30001-40000 3 --- 4 2 2 --- 46 18 1254 1290 1309 1310
40001-50000 2 1 5 --- 1 --- 27 7 704 974 739 982
50001-100000 2 1 7 4 --- --- 31 11 500 730 540 746
100001-150000 --- --- --- --- --- --- --- --- --- --- --- ---
150001-200000 --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 27 27 25 25 10 10 314 314 4624 4624 5000 5000
TABLE 4 : Distribution of Candidates on different ranks (contd.)
R0 range: 30001-40000 (rows: R1 and R2 range)
AM JH UK MZ MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2001-3000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
3001-4000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
4001-5000 --- --- --- 3 --- --- 2 --- --- 4 --- --- 2 7
5001-6000 --- 2 1 1 --- 1 --- 2 1 10 --- --- 2 16
6001-7000 1 7 --- 3 --- 4 --- --- 3 23 --- --- 4 37
7001-8000 --- 6 1 --- --- 2 --- --- 6 15 --- 7 7 30
8001-9000 --- 7 1 4 --- 3 --- --- 12 35 11 9 24 58
9001-10000 --- 17 --- --- --- 2 --- --- 16 31 10 6 26 56
10001-15000 4 20 2 7 --- 6 --- --- 91 189 167 130 264 352
15001-20000 10 14 2 3 4 --- --- --- 104 115 444 287 564 419
20001-25000 22 2 4 2 6 1 --- --- 84 64 743 495 859 564
25001-30000 11 2 5 --- 1 --- --- --- 64 54 1046 714 1127 770
30001-40000 23 1 3 5 7 --- --- --- 85 55 2384 1907 2502 1968
40001-50000 6 2 4 2 --- --- --- --- 76 31 1975 2125 2061 2160
50001-100000 4 1 17 10 1 --- --- --- 146 64 2367 3468 2535 3543
100001-150000 --- --- --- --- --- --- --- --- 2 --- 21 20 23 20
150001-200000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 81 81 40 40 19 19 2 2 690 690 9168 9168 10000 10000
R0 range: 40001-50000 (rows: R1 and R2 range)
AM JH UK MZ MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2001-3000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
3001-4000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
4001-5000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
5001-6000 --- --- --- --- --- --- 1 --- --- 1 --- --- 1 1
6001-7000 --- --- --- 1 --- 2 --- --- --- 4 --- --- --- 7
7001-8000 --- 1 --- 3 --- 5 --- 1 --- 7 --- 1 --- 18
8001-9000 --- 3 --- 4 --- --- --- --- 3 15 1 --- 4 22
9001-10000 --- 3 --- 3 --- 3 --- --- 4 23 --- 1 4 33
10001-15000 --- 22 1 9 1 7 --- --- 55 148 40 34 97 220
15001-20000 2 21 1 7 1 3 --- --- 88 129 168 111 260 271
20001-25000 5 11 7 4 5 3 --- --- 88 111 295 163 400 292
25001-30000 9 2 5 7 4 1 --- --- 55 83 524 320 597 413
30001-40000 21 6 10 6 6 1 --- --- 114 117 1596 1053 1747 1183
40001-50000 24 3 6 6 3 2 --- --- 110 62 1957 1571 2100 1644
50001-100000 15 4 32 18 10 6 --- --- 255 92 4321 5603 4633 5723
100001-150000 --- --- 8 2 4 1 --- --- 25 5 120 165 157 173
150001-200000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
Above 200000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
TOTAL 76 76 70 70 34 34 1 1 797 797 9022 9022 10000 10000
TABLE 4 : Distribution of Candidates on different ranks (contd.)
R0 range: 50001-100000 (rows: R1 and R2 range)
AM JH UK MZ MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2001-3000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
3001-4000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
4001-5000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
5001-6000 --- --- --- --- --- --- --- --- --- --- --- 1 --- 1
6001-7000 --- --- 1 1 --- 2 --- --- --- --- 1 --- 2 3
7001-8000 --- 1 --- 2 1 2 --- --- --- 3 --- --- 1 8
8001-9000 --- 4 1 3 1 3 --- --- --- 7 --- --- 2 17
9001-10000 --- 4 --- 2 1 4 1 --- 1 18 --- --- 3 28
10001-15000 --- 50 2 46 2 35 --- 2 59 201 6 7 69 341
15001-20000 5 68 2 40 1 52 2 1 135 384 54 29 199 574
20001-25000 11 71 4 52 2 36 --- 1 190 410 177 103 384 673
25001-30000 15 57 14 38 9 19 --- 1 220 449 366 204 624 768
30001-40000 46 89 36 67 22 33 --- 2 431 797 1675 849 2210 1837
40001-50000 80 68 49 31 38 17 --- 1 481 613 3241 1760 3889 2490
50001-100000 356 181 231 237 128 57 5 1 2232 1735 25538 22958 28490 25169
100001-150000 99 21 314 202 73 33 2 2 1646 1047 10478 15026 12612 16331
150001-200000 2 --- 103 37 29 16 1 --- 399 130 974 1566 1508 1749
Above 200000 --- --- 3 2 2 --- --- --- --- --- 2 9 7 11
TOTAL 614 614 760 760 309 309 11 11 5794 5794 42512 42512 50000 50000
R0 range: 100001-150000 (rows: R1 and R2 range)
AM JH UK MZ MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2001-3000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
3001-4000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
4001-5000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
5001-6000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
6001-7000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
7001-8000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
8001-9000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
9001-10000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
10001-15000 --- 2 1 13 --- 6 --- --- --- 2 --- --- 1 23
15001-20000 --- 7 2 29 --- 27 1 1 2 26 --- --- 5 90
20001-25000 --- 18 2 48 --- 38 --- --- 16 116 2 2 20 222
25001-30000 2 41 3 36 2 51 --- --- 40 164 9 2 56 294
30001-40000 2 107 13 92 5 63 --- 1 140 533 49 18 209 814
40001-50000 9 108 21 108 19 77 --- 1 200 632 161 66 410 992
50001-100000 369 475 285 422 186 204 2 5 1971 2983 7768 3584 10581 7673
100001-150000 514 213 477 451 250 164 6 4 2976 2471 18887 17185 23110 20488
150001-200000 95 20 725 422 229 94 5 2 3156 1904 9648 14851 13858 17293
Above 200000 --- --- 150 58 59 26 --- --- 449 119 1092 1908 1750 2111
TOTAL 991 991 1679 1679 750 750 14 14 8950 8950 37616 37616 50000 50000
TABLE 4 : Distribution of Candidates on different ranks (contd.)
R0 range: 150001-200000 (rows: R1 and R2 range)
AM JH UK MZ MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2001-3000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
3001-4000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
4001-5000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
5001-6000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
6001-7000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
7001-8000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
8001-9000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
9001-10000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
10001-15000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
15001-20000 --- 2 --- 18 --- 10 --- --- --- 1 --- --- --- 31
20001-25000 --- 4 --- 34 --- 11 --- --- 1 11 --- --- 1 60
25001-30000 --- 3 1 35 --- 24 1 1 3 25 --- --- 5 88
30001-40000 2 54 2 113 1 55 --- 1 28 133 3 2 36 358
40001-50000 3 81 20 99 6 71 --- 3 42 282 7 2 78 538
50001-100000 146 560 249 513 93 282 4 10 887 2603 1043 295 2422 4263
100001-150000 632 454 380 611 249 245 9 9 2373 3294 7614 3579 11257 8192
150001-200000 510 175 1021 777 418 246 12 3 5051 4221 15946 15736 22958 21158
Above 200000 45 5 1089 562 352 175 1 --- 4272 2087 7484 12483 13243 15312
TOTAL 1338 1338 2762 2762 1119 1119 27 27 12657 12657 32097 32097 50000 50000
R0 range: Above 200000 (rows: R1 and R2 range)
AM JH UK MZ MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
1001-2000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2001-3000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
3001-4000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
4001-5000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
5001-6000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
6001-7000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
7001-8000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
8001-9000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
9001-10000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
10001-15000 --- --- --- --- --- --- --- --- --- --- --- --- --- ---
15001-20000 --- --- --- 1 --- --- --- --- --- --- --- --- --- 1
20001-25000 --- --- --- 5 --- 1 --- --- --- 1 --- --- --- 7
25001-30000 --- --- --- 19 --- 8 --- --- 1 1 --- --- 1 28
30001-40000 --- 12 1 63 --- 23 --- 1 1 16 --- --- 2 115
40001-50000 --- 31 --- 75 --- 51 1 3 7 52 --- --- 8 212
50001-100000 33 384 118 509 26 333 3 8 193 1210 99 31 472 2475
100001-150000 312 623 293 749 145 315 4 11 964 2775 1123 323 2841 4796
150001-200000 811 548 716 1006 368 375 13 12 3305 4881 6463 2978 11676 9800
Above 200000 734 292 3973 2674 1430 863 20 6 16128 11663 21486 25839 43771 41337
TOTAL 1890 1890 5101 5101 1969 1969 41 41 20599 20599 29171 29171 58771 58771
TABLE 4 : Distribution of Candidates on different ranks (contd.)
R0 range: TOTAL (rows: R1 and R2 range)
AM JH UK MZ MR CBSE TOTAL
R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1 R2 R1
1-1000 2 5 --- 1 --- --- --- --- 52 84 946 910 1000 1000
1001-2000 2 5 1 3 --- 2 --- --- 36 74 961 916 1000 1000
2001-3000 2 19 1 8 --- 3 --- --- 49 96 948 874 1000 1000
3001-4000 5 6 --- 5 --- 5 --- --- 63 112 932 872 1000 1000
4001-5000 1 9 3 4 1 4 2 --- 62 113 931 870 1000 1000
5001-6000 4 15 5 6 1 5 1 2 77 119 912 853 1000 1000
6001-7000 8 14 2 10 --- 11 --- --- 76 123 914 842 1000 1000
7001-8000 5 18 3 9 3 10 --- 1 80 105 909 857 1000 1000
8001-9000 5 17 4 17 3 7 --- --- 74 110 914 849 1000 1000
9001-10000 1 34 1 5 3 11 1 --- 83 147 911 803 1000 1000
10001-15000 25 102 11 83 9 55 --- 2 421 800 4534 3958 5000 5000
15001-20000 34 115 16 109 11 92 3 2 503 782 4433 3900 5000 5000
20001-25000 50 108 26 152 16 91 --- 1 517 780 4391 3868 5000 5000
25001-30000 46 106 34 137 18 103 1 2 470 819 4431 3833 5000 5000
30001-40000 100 269 80 353 43 175 --- 5 920 1689 8857 7509 10000 10000
40001-50000 124 294 110 323 67 218 1 8 981 1691 8717 7466 10000 10000
50001-100000 925 1606 945 1716 444 882 14 24 5737 8706 41935 37066 50000 50000
100001-150000 1557 1311 1472 2015 721 758 21 26 7986 9592 38243 36298 50000 50000
150001-200000 1418 743 2565 2242 1044 731 31 17 11911 11136 33031 35131 50000 50000
Above 200000 779 297 5215 3296 1843 1064 21 6 20849 13869 30064 40239 58771 58771
TOTAL 5093 5093 10494 10494 4227 4227 96 96 50947 50947 187914 187914 258771 258771
TABLE 5 : Mean and Standard Deviation of Marks
- AIEEE score : A0
- Modified composite score (All India Based) : A1
- Modified composite score (Board Based) : A2

Sl. No.  Board   Mean (SD) of A0   Mean (SD) of A1   Mean (SD) of A2
1.       AM      28.48 (27.74)     68.01 (36.25)     51.06 (29.30)
2.       JH      20.36 (22.61)     45.80 (33.68)     32.11 (25.28)
3.       UK      21.16 (21.94)     54.76 (38.49)     35.73 (27.16)
4.       MZ      23.54 (23.14)     66.35 (37.00)     51.10 (40.71)
5.       MR      27.98 (31.59)     51.37 (39.26)     42.01 (36.31)
6.       CBSE    58.86 (50.15)     60.51 (46.36)     68.97 (48.63)
7.       ALL     49.99 (47.72)     58.17 (44.52)     61.26 (46.86)
Observations
1. There is a high correlation between:
   - AIEEE marks (A0) and the modified composite score, All India based (A1)
   - AIEEE marks (A0) and the modified composite score, Board based (A2)
   - the All India AIEEE rank (R0) and the modified All India AIEEE rank, All India based (R1)
   - the All India AIEEE rank (R0) and the modified All India AIEEE rank, Board based (R2).
2. Ranking based on the modified composite score A1 (All India based) should be used. It may be difficult to justify ranking based on the modified composite score A2 (Board based).
3. Since there is a significant difference between the number of candidates who appeared in a Board and the number of candidates who appeared in AIEEE from the same Board, percentile ranks/scores should be computed only over those students who appeared for JEE (Main) in that year.
Part 3: Analysis by Neeraj Mishra of IIT Kanpur
Steps of computation
1. The following data files have been prepared in respect of a particular Board:
   a. Data Set A : the aggregate scores in 5 subjects of the board in a particular year, as discussed in the meeting, for ALL students of the board for whom these 5 subjects are well defined (including students who did not appear for AIEEE).
   b. Data Set B : the aggregate scores in AIEEE for ALL students of the above board appearing for AIEEE in that year.
   c. Data Set C : the aggregate scores in AIEEE for ALL students appearing for AIEEE in that year.
   d. Data Set D : the aggregate board score in five subjects (B0), the aggregate AIEEE score (A0), the AIEEE rank (R0) and the AIEEE percentile rank (P0) for all students of the said board in that year for whom matching was possible. The percentile rank is between 0% and 100%.
2. Data Set D has been extended by computing the following additional quantities for each student:
   a. Modified board score, first type (All India based) (B1): take B0, calculate the percentile rank of B0 with respect to Data Set A, then calculate the AIEEE score corresponding to this percentile rank as obtained from Data Set C. Call this score B1.
   b. Modified board score, second type (Board based) (B2): take B0, calculate the percentile rank of B0 with respect to Data Set A, then calculate the AIEEE score corresponding to this percentile rank as obtained from Data Set B. Call this score B2.
   c. Modified composite score, first type (All India based) (A1): A1 = 0.6A0 + 0.4B1.
   d. Modified composite score, second type (Board based) (A2): A2 = 0.6A0 + 0.4B2.
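The steps above can be sketched in code. This is an illustrative reconstruction, not the committee's actual program: `np.percentile`'s default linear interpolation stands in for whatever score-lookup rule was actually used, and the function names are ours.

```python
import numpy as np

def percentile_rank(pool, value):
    """Percentile rank (0-100) of `value` within a score pool."""
    pool = np.asarray(pool, dtype=float)
    return 100.0 * np.mean(pool <= value)

def score_at_percentile(pool, pct):
    """Score located at percentile `pct` (0-100) of a pool."""
    return float(np.percentile(np.asarray(pool, dtype=float), pct))

def modified_scores(b0, a0, set_a, set_b, set_c):
    """B1, B2 and the composites A1, A2 for one candidate (steps 2a-2d)."""
    p = percentile_rank(set_a, b0)      # percentile rank of B0 in Data Set A
    b1 = score_at_percentile(set_c, p)  # equivalent AIEEE score, all-India pool (Set C)
    b2 = score_at_percentile(set_b, p)  # equivalent AIEEE score, board pool (Set B)
    a1 = 0.6 * a0 + 0.4 * b1            # modified composite, All India based
    a2 = 0.6 * a0 + 0.4 * b2            # modified composite, Board based
    return b1, b2, a1, a2
```

When the board pool and the all-India pool coincide, B1 and B2 (and hence A1 and A2) agree, which is a useful sanity check.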
Data used
The data used is the same as in Part 2 of this Appendix, except for the data in respect of
Maharashtra Board.
Results
Rank Correlations Based On Combined Data of Five Boards:
        AIEEE    A1       A2
AIEEE   1.0      .8786    .9089
A1               1.0      .9750
A2                        1.0
Board Specific Rank Correlations:
Sl. No.  Board   AIEEE and A1        AIEEE and A2     A1 and A2
                 (All India Based)   (Board Based)
1.       AM      0.7350              0.8547           0.9638
2.       JH      0.5823              0.6963           0.9502
3.       UK      0.6013              0.7519           0.9346
4.       MZ      0.6100              0.7841           0.9472
5.       CBSE    0.9311              0.9163           0.9983
6.       ALL     0.8786              0.9089           0.9750
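Rank correlations of this kind are Pearson correlations computed on ranks (Spearman's method). A minimal sketch, assuming average ranks for ties; this is not the analysis code actually used:

```python
import numpy as np

def average_ranks(x):
    """Ranks 1..n, with tied values sharing their average rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="mergesort")
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    for v in np.unique(x):          # average out ranks over ties
        tied = x == v
        ranks[tied] = ranks[tied].mean()
    return ranks

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return float(np.corrcoef(average_ranks(x), average_ranks(y))[0, 1])
```

Because only ranks enter, any monotone rescaling of either score (such as the equipercentile mapping itself) leaves the coefficient unchanged.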
Recommendations
1. Ranking based on the modified composite score A1 should be used. It may be difficult to justify ranking based on the modified composite score A2.
2. It may be meaningful to consider percentile ranks/scores among only those students who appeared for JEE (Main) in that year.
Part 4: Analysis by B.S. Daya Sagar of ISI
Steps of computation and data used
The steps of computation are identical to those described in Part 2. The data used
are also the same, except for the data in respect of CBSE and Maharashtra Board.
Results
The following charts show the scatter of the AIEEE ranks (R0) of the candidates of
the four boards against the composite ranks obtained by Procedures 1 and 2, (R1
and R2, respectively).
[Scatter charts not reproduced in this transcript.]
Representation of boards in different percentile ranges:
The table shows the number of students of the four boards in different upper percentile
ranges, computed according to AIEEE (P0), Procedure 1 (P1, All India based equating) and
Procedure 2 (P2, Board based equating)
Percentile   Uttarakhand        Mizoram         Jharkhand           Assam
range        P0    P1    P2     P0   P1   P2    P0    P1    P2      P0    P1    P2
>95             7   315    77    0    5    0      18   201   188      34   306   304
>75           189  1574  1528    7   46   45     471  2748  2748     510  2583  2583
>50          1022  2717  2717   27   79   80    2353  5890  5890    1716  4429  4429
Part 5: Analysis by Jim Tognolini and Jon Twing of CAER
Introduction
This text represents a brief report on the efforts to investigate the normalization of CBSE and Board scores for the AIEEE assessment. The report focuses exclusively on the equipercentile scaling method ("option 4") as outlined in a communication from Sri B. M. Gupta on or about 2012-12-10.

This brief report is preliminary; the authors intend to finalize it in a paper published from the Centre (CAER) once the source data, verification and variable decisions have been clarified.
Procedures
The procedures used in this brief report followed those outlined as "option 4" of the equipercentile scaling method. The difference for this report is that the linking is between the composite (aggregate) Board-specific score, obtained by averaging across five different subjects, and the AIEEE score obtained by the same candidates; these two variables were linked via the equipercentile procedure.
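The linking described above amounts to tabulating, percentile by percentile, the score each distribution attains, yielding pairs of equivalent scores such as those in Table 2 below. A rough sketch, using NumPy's default linear percentile rule rather than the SAS "Type 3" rule the report cites:

```python
import numpy as np

def equivalence_table(aieee_scores, board_scores):
    """Equivalent AIEEE / Board scores at each integer percentile,
    in the spirit of Table 2 (values rounded to whole marks)."""
    rows = []
    for p in range(101):  # percentiles 0..100
        rows.append((p,
                     round(float(np.percentile(aieee_scores, p))),
                     round(float(np.percentile(board_scores, p)))))
    return rows
```

A candidate's board score is then declared equivalent to the AIEEE score sitting in the same row of this table.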
Data Set
The primary data set used in this brief report was SETD, provided by Sri B. M. Gupta on or about 2012-12-10. The data set used was the "mapped" AIEEE and Board specific scores. These matched scores existed for four State Boards and the CBSE. The table below presents these data:
Table 1. Data Set Used in Brief Report
Code   State / Board   Data Set D File   N-Count
AM     Assam           AMDDD12.DBF       5,093
JH     Jharkhand       JHDDD12.DBF       10,494
MZ     Mizoram         MZDDD12.DBF       96
UK     Uttarakhand     UKDDD12.DBF       4,227
CBS    CBSE            CBSDDD12.DBF      187,914
Variables
The variables used in this brief report include BENG_TOT, which represents the AIEEE total marks (out of 360) for each candidate in the State / Board specific files, and MRK_360, which is the average State / Board mark (also out of 360).
Assumptions
The files came matched and cleaned, so the following assumptions were made: no missing data, representativeness for generalization to a broader population, and fidelity of the composite score. None of these assumptions has been tested.

All data were rounded to the nearest whole number for ease of interpretation.
Results
The results of this brief report are presented in Table 2 below. Table 2 shows, for each percentile point, the corresponding (equivalent) AIEEE and Board composite scores. Percentiles were calculated following the method used in the SAS statistical package, commonly called "Type 3", which takes the nearest order statistic, with exact ties resolved by

    γ = 0 if g = 0 and j is even, and γ = 1 otherwise,

where j and g are the integer and fractional parts of the index position. This procedure is fully documented in Becker et al. (1988) and Hyndman & Fan (1996).
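The "Type 3" rule can be written out directly. The sketch below is our reading of the Hyndman & Fan type 3 definition (with offset m = -1/2), not code taken from the report:

```python
import numpy as np

def quantile_type3(x, p):
    """Hyndman & Fan type 3 (the SAS default cited above): nearest order
    statistic, with exact ties resolved toward the even-numbered one."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    h = n * p - 0.5                # continuous position (1-based), m = -1/2
    j = int(np.floor(h))           # integer part
    g = h - j                      # fractional part
    gamma = 0 if (g == 0 and j % 2 == 0) else 1
    k = min(max(j + gamma, 1), n)  # clamp to a valid order statistic
    return xs[k - 1]
```

Unlike interpolating definitions, this rule always returns a score that actually occurred in the data, which matters when mapping marks between discrete score scales.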
Table 3 provides the summary statistics and correlations associated with the variables and data used. The correlations are simple Pearson correlations between the two variables and have not been corrected in any way.
References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth &
Brooks/Cole.
Hyndman, R. J. and Fan, Y. (1996). Sample quantiles in statistical packages, American
Statistician, 50, 361–365.
Table 2. Equivalent Scores by State / Board for each Percentile.
Percentile
Assam CBSE Jharkhand Mizoram Uttarakhand
AIEEE BOARD AIEEE BOARD AIEEE BOARD AIEEE BOARD AIEEE BOARD
0 -35 135 -66 46 -36 32 -11 130 -32 65
1 -18 165 -9 125 -20 106 -11 130 -18 110
2 -12 172 -4 137 -16 116 -10 139 -14 118
3 -9 177 -1 145 -13 122 -7 148 -11 126
4 -7 181 2 153 -11 127 -6 153 -10 132
5 -6 185 4 161 -10 130 -6 156 -8 136
6 -4 189 5 166 -8 133 -6 165 -7 141
7 -3 192 7 172 -7 136 -6 166 -6 145
8 -1 194 9 177 -6 138 -5 168 -5 148
9 -1 197 10 181 -5 140 -5 171 -4 151
10 0 199 11 185 -4 143 -4 176 -3 153
11 1 202 12 188 -3 144 -3 180 -2 156
12 2 204 13 192 -2 146 0 183 -1 158
13 3 205 14 194 -1 148 0 183 -1 160
14 4 207 15 197 -1 149 0 183 0 162
15 4 209 16 199 0 150 1 183 1 164
16 5 210 17 202 0 152 1 183 2 166
17 6 212 18 204 1 153 1 184 2 168
18 6 213 19 206 2 155 3 186 3 169
19 7 215 20 208 2 156 3 187 4 171
20 8 216 21 210 3 157 4 188 4 173
21 9 217 22 212 4 158 4 189 4 174
22 9 219 23 213 4 160 5 189 5 175
23 9 220 24 215 4 162 7 190 5 177
24 10 221 24 217 5 162 7 191 6 178
25 11 222 25 218 6 162 7 192 6 180
26 11 223 26 220 6 163 8 192 7 181
27 12 225 27 222 7 164 8 194 7 182
28 13 226 28 223 7 166 9 194 8 183
29 13 227 29 225 8 166 10 194 8 184
30 14 228 29 226 8 168 11 195 9 185
31 14 230 30 228 9 168 12 197 9 186
32 14 230 31 229 9 169 12 197 10 187
33 15 232 32 230 9 171 12 198 10 188
34 15 233 32 232 10 171 13 199 11 189
35 16 233 33 233 10 172 14 201 11 190
36 16 235 34 235 11 173 15 202 12 192
37 17 236 35 236 11 174 15 204 12 192
38 17 237 36 238 12 174 15 204 13 194
39 18 238 36 239 13 176 15 204 13 194
Table 2. Equivalent Scores by State / Board for each Percentile (contd).
Percentile
Assam CBSE Jharkhand Mizoram Uttarakhand
AIEEE BOARD AIEEE BOARD AIEEE BOARD AIEEE BOARD AIEEE BOARD
40 19 239 37 240 13 176 15 205 14 196
41 19 240 38 242 13 177 16 205 14 197
42 19 240 39 243 14 178 16 206 14 198
43 20 242 40 245 14 179 16 207 15 199
44 20 243 41 246 14 179 17 210 15 200
45 21 243 42 248 15 180 17 210 16 202
46 22 245 42 249 15 181 18 210 16 202
47 22 246 43 251 16 182 19 210 17 203
48 23 246 44 252 17 183 19 210 17 204
49 24 248 45 253 17 184 19 212 18 205
50 24 248 46 255 18 184 19 214 19 207
51 24 249 47 256 18 185 20 216 19 207
52 25 251 48 258 19 186 20 216 19 209
53 25 252 49 259 19 186 20 216 20 210
54 26 253 50 261 19 188 21 217 20 211
55 26 254 51 262 20 189 22 217 21 212
56 27 255 52 264 20 189 22 219 21 213
57 28 256 53 265 21 190 22 220 22 215
58 28 256 54 266 21 191 23 224 22 216
59 29 257 55 269 22 192 23 224 23 217
60 29 258 56 270 23 193 23 226 24 218
61 30 259 57 271 23 194 23 228 24 219
62 30 261 59 273 24 195 23 229 24 220
63 31 261 60 274 24 196 23 229 25 221
64 32 262 61 276 25 197 24 230 26 222
65 33 264 62 277 25 197 24 232 26 224
66 33 264 64 279 26 198 26 232 27 225
67 34 265 65 281 26 199 27 233 27 227
68 34 266 66 282 27 200 29 235 28 228
69 35 267 68 284 28 201 31 237 29 229
70 36 268 69 285 29 202 31 238 29 230
71 36 269 71 287 29 203 31 238 30 232
72 37 270 73 288 29 204 33 240 30 233
73 38 271 74 290 30 204 34 242 31 235
74 39 271 76 292 30 205 34 242 32 236
75 40 272 78 293 31 207 34 243 33 238
76 41 274 80 294 32 208 34 244 34 240
77 42 274 82 297 33 209 34 247 34 241
78 43 276 84 298 34 210 36 248 35 243
79 44 276 87 300 34 211 38 250 36 244
Table 2. Equivalent Scores by State / Board for each Percentile (contd).
Percentile
Assam CBSE Jharkhand Mizoram Uttarakhand
AIEEE BOARD AIEEE BOARD AIEEE BOARD AIEEE BOARD AIEEE BOARD
80 45 277 89 302 35 216 39 251 37 245
81 47 279 92 303 36 216 39 251 37 247
82 47 279 94 305 37 216 40 252 38 248
83 49 281 97 307 38 217 44 253 39 251
84 50 282 100 308 39 218 46 253 40 253
85 52 282 104 310 40 220 46 256 41 254
86 54 284 107 312 41 221 49 256 42 257
87 55 285 111 314 42 222 49 258 43 259
88 57 287 115 315 44 225 49 258 44 261
89 59 288 119 318 45 226 50 262 45 264
90 61 289 125 319 47 228 52 263 47 266
91 64 291 130 321 49 230 52 265 49 269
92 67 292 136 323 50 232 53 271 50 272
93 71 294 144 325 53 234 54 272 52 274
94 74 297 152 328 55 236 62 274 55 277
95 80 300 161 330 59 240 69 280 58 282
96 86 302 173 333 63 243 71 282 62 284
97 94 306 189 335 68 247 75 284 69 289
98 104 312 209 338 77 253 87 310 77 296
99 123 318 239 342 94 260 100 318 91 305
100 245 347 345 356 257 301 102 320 204 330
Table 3. Associated Summary Statistics
Statistic
Assam CBSE Jharkhand Mizoram Uttarakhand
AIEEE Board AIEEE Board AIEEE Board AIEEE Board AIEEE Board
Mean 28.46 246.46 58.86 252.8 20.36 184.67 23.54 218.35 21.16 207.98
SD 27.74 34.72 50.15 51.28 22.61 33.27 23.14 37.61 21.94 43.12
N-Count 5093 5093 187914 187914 10494 10494 96 96 4227 4227
Missing 0 0 0 0 0 0 0 0 0 0
Skewness 1.7 -0.25 1.63 -0.38 1.6 -0.07 1.17 0.31 1.42 0.02
Kurtosis 5.74 -0.37 3.37 -0.35 7.34 0.11 1.75 0.19 5.61 -0.25
Min -35 135 -66 46 -36 32 -11 130 -32 65
Max 245 347 345 356 257 301 102 320 204 330
Correlation 0.4624 0.4624 0.2345 0.4549 0.3838
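The statistics in Table 3 correspond to standard sample moments. A sketch of how such a summary could be computed; the report does not say which skewness/kurtosis conventions or degrees-of-freedom choice were used, so `ddof=1` and moment-based (excess) skewness/kurtosis are assumptions:

```python
import numpy as np

def summary_stats(x):
    """Mean, sample SD, moment skewness and excess kurtosis, as one
    plausible reading of the Table 3 conventions."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    sd = x.std(ddof=1)                    # sample SD (ddof=1 is an assumption)
    z = (x - mean) / sd
    skew = float((z ** 3).mean())         # 0 for a symmetric sample
    kurt = float((z ** 4).mean() - 3.0)   # excess kurtosis (normal -> 0)
    return float(mean), float(sd), skew, kurt

def pearson(x, y):
    """Simple (uncorrected) Pearson correlation, as stated in the text."""
    return float(np.corrcoef(x, y)[0, 1])
```

The negative kurtosis values for the Board columns in Table 3 are consistent with this "excess" convention, under which flatter-than-normal distributions score below zero.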