
DOCUMENT RESUME

ED 310 689                                                HE 022 794

AUTHOR          Mitchell, Karen J.
TITLE           Use of MCAT Data in Admissions. A Guide for Medical School Admissions Officers and Faculty.
INSTITUTION     Association of American Medical Colleges, Washington, D.C.
PUB DATE        87
NOTE            24p.
AVAILABLE FROM  Association of American Medical Colleges, One Dupont Circle, Suite 200, Washington, DC 20036 ($5.00 plus $2.50 postage and handling).
PUB TYPE        Guides - Non-Classroom Use (055)
EDRS PRICE      MF01/PC01 Plus Postage.
DESCRIPTORS     Academic Ability; Access to Education; Achievement Tests; Admission Criteria; Aptitude Tests; College Admission; *College Applicants; *College Entrance Examinations; Higher Education; *Medical Schools; Prior Learning; Standardized Tests
IDENTIFIERS     *Medical College Admission Test

ABSTRACT
A description of the standardized, multiple-choice Medical College Admission Test (MCAT) and how to use it is offered. Medical school admissions officers, medical educators, college faculty members, and practicing physicians are active participants in selecting content, drafting test specifications, and authoring questions for the exam. The MCAT is designed to: assess understanding of science concepts and principles identified as prerequisite to the study and practice of medicine; evaluate basic analytical skills in the context of medically relevant problems and data; and help admissions committees predict which of their applicants will perform adequately in the medical school curriculum. Its six scores are Biology Knowledge, Chemistry Knowledge, Physics Knowledge, Science Problems, Skills Analysis--Reading, and Skills Analysis--Quantitative. Six chapters are as follows: a description of the MCAT; an explanation of what MCAT scores show about examinees; ways to interpret an applicant's MCAT scores; methods of comparing MCAT performance among applicants; a discussion of how to consider the six MCAT scores; and ways to evaluate the admissions process at an institution. Two appendices discuss the assessment of the selection process and provide sources of additional information. (SM)

Reproductions supplied by EDRS are the best that can be made from the original document.


USE OF MCAT DATA IN ADMISSIONS

A GUIDE FOR MEDICAL SCHOOL ADMISSIONS OFFICERS AND FACULTY

Karen J. Mitchell, Ph.D.
Association of American Medical Colleges
Section for Student and Educational Programs
One Dupont Circle, N.W.
Washington, D.C. 20036

(202) 828.056,i


Copyright 1987 by the Association of American Medical Colleges. All material subject to this copyright may be photocopied for the noncommercial purposes of scientific or educational advancement. Printed in the U.S.A.


Preface ..... 1
What Is the Medical College Admission Test (MCAT)? ..... 3
What Do MCAT Scores Tell Me About Examinees? ..... 5
How Do I Interpret Applicants' MCAT Scores? ..... 9
How Can I Compare MCAT Performance Among Applicants? ..... 11
How Should I Consider the Six MCAT Scores? ..... 15
How Can I Evaluate the Admissions Process at My Institution? ..... 19
Appendixes
Appendix A: Assessment of the Selection Process ..... 23
Appendix B: Sources of Additional Information ..... 27

Preface

This guide provides admissions officers and medical school faculty members who serve on admissions committees with information about the design, objectives, interpretation, and use of the Medical College Admission Test (MCAT) in selecting students. The following questions are addressed:

• What is the Medical College Admission Test?
• What do MCAT scores tell me about examinees?
• How do I interpret applicants' MCAT scores?
• How can I compare MCAT performance among applicants?
• How should I consider the six MCAT scores?
• How can I evaluate the admissions process at my institution?

Procedures for examining and assessing the selection process at your institution are discussed in Appendix A. Sources of additional information about the MCAT and student selection are referenced in Appendix B.

The information about methods for ranking applications and summarizing MCAT scores at the preinterview stages of selection is based in part on AAMC's 1986 survey on institutional admissions practices and the use of test data in student selection. Responses from medical school admissions officers were valuable in developing the guide.

Special thanks are due several colleagues who provided critical reviews of this work: Robert L. Linn, Ph.D., Loretta A. Shephard, Ph.D., and Richard M. Jaeger, Ph.D., who are external technical consultants to the MCAT testing program, and the members of the Group on Student Affairs (GSA) Committee on Admissions: Daniel A. Burr, Ph.D., Julian J. Dwornik, Ph.D., J. Donald Hare, M.D., Diane J. Klepper, M.D., C. Howard Krukofsky, Leonard E. Lawrence, M.D., Billy B. Rankin, Rue W. Schoultz, Ph.D., Henry M. Seidel, M.D., and Robert J. Welch. Appreciation is also extended to Mary H. Littlemeyer, August G. Swanson, M.D., James B. Erdmann, Ph.D., and Robert L. Beran, Ph.D., of AAMC's Division of Academic Affairs, for their many useful suggestions. Thanks go also to Pat Coleen and Caryle Thohey, who carefully and efficiently prepared several preliminary drafts of this guide, and to Brenda George for her meticulous work on the final manuscript.

What Is the Medical College Admission Test (MCAT)?

The MCAT is a standardized, multiple-choice exam. Medical school admissions officers, medical educators, college faculty members, and practicing physicians are active participants in selecting content, drafting test specifications, and authoring questions for the exam. The development and maintenance of the examination is the responsibility of the Association of American Medical Colleges.

The MCAT is designed to achieve several purposes:

• To assess understanding of science concepts and principles identified as prerequisite to the study and practice of medicine
• To evaluate basic analytical skills in the context of medically relevant problems and data
• To help admissions committees predict which of their applicants will perform adequately in the medical school curriculum

The MCAT tests for knowledge of material covered in first-year, introductory undergraduate courses in general biology, general chemistry, organic chemistry, and general, noncalculus physics. Six scores are reported:

• Biology Knowledge
• Chemistry Knowledge
• Physics Knowledge
• Science Problems
• Skills Analysis: Reading
• Skills Analysis: Quantitative

What Do MCAT Scores Tell Me About Examinees?

The MCAT provides admissions committees with a standardized measure of academic achievement for all examinees. Differences in undergraduate curricular emphases, evaluation standards, and grading scales make much preadmission information, such as undergraduate grades from different colleges, difficult to interpret. In contrast, the MCAT provides assessment information on a common scale for all examinees.

Evidence exists for the predictive validity of MCAT scores to performance in medical school basic and clinical sciences programs and on the National Board of Medical Examiners (NBME) tests. The AAMC Spring 1986 survey of medical school admissions officers reveals that they use MCAT scores to:

• Identify applicants likely to succeed in medical school and those likely to experience academic difficulty
• Diagnose applicants' specific strengths and weaknesses in science preparation and analytical skills
• Interpret the transcripts and letters of evaluation for applicants from unfamiliar undergraduate institutions

MCAT scores are intended to be only one of several measures of applicants' qualifications. Consistencies and disparities in the information provided by multiple sources of data help provide a more complete picture of the applicant. Many medical school admissions committees evaluate MCAT data in conjunction with applicants':

• Undergraduate grade point averages
• Breadth and difficulty of undergraduate coursework
• Quality of the degree-granting undergraduate institutions
• Personal comments on American Medical College Application Service (AMCAS) and/or institutional application forms
• Letters of evaluation from undergraduate advisers, faculty members, premedical committees, community leaders, research sponsors, and/or employers
• Medical school interview results
• Participation in activities/events demonstrating motivation, responsibility, maturity, integrity, resourcefulness, tolerance, perseverance, dedication to service, and/or other relevant noncognitive characteristics


• Involvement in extracurricular activities such as student governance and community service during undergraduate and graduate years
• Gender, racial, and ethnic backgrounds
• Involvement in and quality of academic programs at the graduate and postgraduate levels
• Involvement in and quality of undergraduate and graduate health-related work and research experience
• State or county of legal residence

Although information provided by the MCAT and other preadmission data may overlap, notable differences are seen between MCAT scores and data garnered from other sources. Research shows, for example, that:

• Correlations between the six individual MCAT scores and Biology, Chemistry, Physics, and Math GPA range from .41 to .58;
• Correlations between the six individual MCAT scores and nonscience GPA range from .25 to .40;
• Correlations between the six individual MCAT scores and selectivity of the undergraduate institution, as measured by the average institutional combined Scholastic Aptitude Test score, range from .31 to .39; and
• Correlations between number of course hours in Biology, Chemistry, and Physics and MCAT scores in those areas range from .06 to .17.

The correlations between 1) MCAT and science and nonscience GPAs, and 2) MCAT and undergraduate institutional selectivity are moderate. MCAT, GPA, and selectivity provide partially redundant pictures of academic potential. Correlations between MCAT and number of course hours in the sciences are low, which suggests that the record of undergraduate work provides useful data that are independent of the MCAT. These correlations indicate that performance on the MCAT is relatively unaffected by enrollment in advanced science courses.

How Do I Interpret Applicants' MCAT Scores?

The MCAT scores are reported on a scale ranging from one (lowest) to 15 (highest). The average scaled score on each of the six tests was set at eight when the test was introduced in 1977; the mean score varies slightly as characteristics of the examinee population change. The standard deviation of each test is 2.5. Scaled scores are converted to percentile scores to reveal an examinee's standing in relation to other examinees.

As a standardized test of cognitive achievement, the MCAT assesses preadmission characteristics of examinees. As a fixed-point assessment, MCAT scores reflect neither the dynamics of development nor the increased reliability that repeated, cumulative assessment can provide. MCAT scores may be fallible indices of cognitive achievement due to factors such as:

• An applicant's poor health on the test date
• An applicant's lack of understanding of test instructions
• Less than optimal test room conditions or other factors beyond the examinee's control

The scores reported for applicants should be considered approximate estimates of achievement rather than infallible indicators.

Because MCAT scores are not exact representations of individuals' achievement levels, confidence bands are needed to show the range of test scores within which an examinee's achievement level probably lies. On the MCAT, a confidence band corresponds to plus or minus one scaled score from the individual's reported score. In 68 percent of cases, this confidence band includes the score an individual would receive on an instrument with perfect measurement properties. In Figure 1, confidence bands, shown by bracketed lines, correspond to plus or minus one scaled score from the reported score; the X's indicate the individual's reported scores.


[Figure 1: confidence bands (bracketed lines) and reported scores (X's) for a hypothetical MCAT examinee on the six MCAT tests, plotted on the 1-to-15 scale. Each band extends one scaled-score point above and below the reported score.]

Plus or minus two scaled scores defines a 95 percent confidence band. This means that in 95 percent of cases, the score best representing an applicant's achievement level lies within two points of the reported score.
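These band widths follow from the measurement properties of the scale. As a rough check (an inference from figures given elsewhere in this guide, not an additional statement from the MCAT program), the scale standard deviation of 2.5 and the test reliabilities of .84 to .88 reported later imply a standard error of measurement of about one scaled-score point, which matches the width of the 68 percent band:

    \mathrm{SEM} \;=\; \sigma\sqrt{1 - r_{xx}} \;\approx\; 2.5\sqrt{1 - .84} \;=\; 1.0
    \qquad (\text{about } 0.9 \text{ when } r_{xx} = .88)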

When confidence bands overlap, as for example with the Chemistry Knowledge and Science Problems scores in Figure 1, you can conclude that performance in the two areas is essentially equivalent. When bands do not overlap, such as in Biology and Physics Knowledge, you can judge that interpretable performance differences exist between the two areas.

Similar conclusions can be drawn when comparing the MCAT scores of two examinees. If, for example, confidence bands for Biology scores of two applicants overlap, you can conclude that the preparation and performance of the two students is essentially equivalent. If bands do not overlap, you can conclude that performance differences exist.
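This overlap rule reduces to a simple comparison of reported scores. The sketch below is illustrative only, assuming the plus or minus one point (68 percent) bands described above; the function name and example scores are hypothetical, not part of MCAT score reporting.

    # Sketch of the confidence-band overlap rule for comparing two reported
    # MCAT scaled scores. Bands of +/- BAND points overlap exactly when the
    # reported scores differ by no more than 2 * BAND points.
    BAND = 1   # use 2 for the +/- two-point (95 percent) band

    def bands_overlap(score_a: int, score_b: int, band: int = BAND) -> bool:
        """True if the confidence bands around two reported scores overlap."""
        return abs(score_a - score_b) <= 2 * band

    # Hypothetical applicants' Biology Knowledge scores.
    print(bands_overlap(8, 9))    # True:  treat preparation as essentially equivalent
    print(bands_overlap(6, 10))   # False: an interpretable difference exists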

Drawing conclusions about performance differences for examinees at extreme ends of the MCAT score scale is generally unwarranted. In many cases, differences between levels of predicted performance in the basic and clinical sciences for examinees scoring one scale point apart at the extremes are small. This is true at both the high and low ends of the scale.

How Can I Compare MCAT Performance Among Applicants?

Many methods can be used to rank or compare applicants on MCAT performance before they are interviewed. Some methods of considering applicants' MCAT scores are more appropriate psychometrically and/or more consonant with the goals of medical education programs than others. The Standards for Educational and Psychological Testing (1985), prepared by a joint committee of the American Educational Research Association, the National Council on Measurement in Education, and the American Psychological Association, state that test scores should never be used in isolation to make accept/reject decisions about applicants. Respondents to the 1986 AAMC survey on the use of MCAT data describe several approaches to comparing or ranking application folders using MCAT data and other information at the preinterview stages of selection:

• APPROACH A. Consider MCAT scores and other admission data without formal weights to rank application folders or sort applications into categories representing varying levels of predicted success in medical school.

Using this approach, you might sort applications into categories based on a number of variables. Category One might include applications above given MCAT scores, grade point averages (GPAs), and patterns of course enrollment and research involvement. Placement in Category Two might require slightly lower qualifications. Any number of sorting categories may be used; subsequent screening activities may vary by category. The cutoffs for inclusion in each category should be based on research or cumulative experience about academic difficulty and attrition at your institution. This approach is more fully discussed in Appendix A.

• APPROACH B. Consider MCAT scores with other admission data to create a formula-based combined minimum score, ranking of application folders, or sorting of applications into categories representing varying levels of predicted success in medical school.

A formula considering MCAT scores, GPAs, selectivity of undergraduate institution, and state of residence, for instance, can be developed and applications ranked or sorted into groups using the formula at initial screening. Institutional research on the predictive validity of preadmission data to performance in the basic and clinical sciences can be used to derive a formula-based system. Alternatively, your admissions committee can set up a screening formula based on judgments about the relative importance of various personal and academic characteristics of candidates. This approach is also described in Appendix A.
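For illustration only, here is a minimal sketch of the kind of formula-based index Approach B describes. The weights, variable names, and example values are hypothetical placeholders; in practice they would come from your own validity studies or committee judgment, as discussed in Appendix A.

    # Hypothetical formula-based screening index (Approach B). The weights are
    # placeholders, not values recommended by the AAMC or this guide.
    def screening_index(mcat_scores, gpa, selectivity, in_state):
        """Combine preadmission data into a single score for initial screening.

        mcat_scores: the six scaled scores (1-15) in a fixed order.
        gpa: undergraduate grade point average on a 0.0-4.0 scale.
        selectivity: a local rating of the undergraduate institution.
        in_state: whether the applicant is a legal resident of the state.
        """
        mcat_weights = [1.0, 1.0, 1.0, 1.0, 1.5, 1.5]   # hypothetical per-test weights
        weighted_mcat = sum(w * s for w, s in zip(mcat_weights, mcat_scores))
        return weighted_mcat + 3.0 * gpa + 0.5 * selectivity + (1.0 if in_state else 0.0)

    # Applications can then be ranked or sorted into categories by the index.
    index = screening_index([9, 8, 7, 8, 10, 9], gpa=3.5, selectivity=2.0, in_state=True)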

• APPROACH C. Establish minimum scores for the six MCAT tests and retain applications of those with scores above the minimum levels for further consideration.

Past institutional experience with MCAT and school performance can be used to define minimum scores. If, for example, applicants scoring below five in Chemistry Knowledge or Skills Analysis: Reading performed poorly in the basic science program at an institution, application folders for individuals scoring below these levels might be removed from the applicant pool.

• APPROACH D. Define ranges of exemplary, mid-level, and unacceptable MCAT scores at the preinterview stage(s); subsequent screening activities may vary by score range.

This is an extension of Approach C. In addition to minimum levels, MCAT ceilings may be established. To illustrate, many faculties find little difference in medical school performance among students who scored in the upper MCAT levels. These admissions committees assign MCAT scores of nine, for example, to all folders with MCAT scores of nine and above. The rationale is that finer upper-level discriminations are unnecessary for predicting success in medical school and other criteria should be used for selecting candidates with scores in this range.
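The following brief sketch shows how Approaches C and D might be applied together at initial screening. The floor of five and the ceiling of nine echo the examples in the text but are illustrative only, not recommended cutoffs; the data layout is likewise hypothetical.

    # Approaches C and D sketched together: drop folders below per-test minimums,
    # then cap scores at a ceiling so that finer distinctions among high scorers
    # do not drive selection. All thresholds here are illustrative.
    FLOORS = {"Chemistry Knowledge": 5, "Skills Analysis: Reading": 5}
    CEILING = 9

    def passes_floors(scores):
        """Approach C: retain only folders at or above the minimum scores."""
        return all(scores[test] >= minimum for test, minimum in FLOORS.items())

    def capped(scores):
        """Approach D: treat all scores at or above the ceiling as equivalent."""
        return {test: min(score, CEILING) for test, score in scores.items()}

    applicant = {"Biology Knowledge": 11, "Chemistry Knowledge": 8, "Physics Knowledge": 7,
                 "Science Problems": 9, "Skills Analysis: Reading": 10,
                 "Skills Analysis: Quantitative": 8}
    if passes_floors(applicant):
        screened = capped(applicant)   # Biology 11 and Reading 10 both become 9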

With any of these screening approaches, and particularly with Approaches C and D, you may choose to establish different criteria for special categories of applicants, such as those who are educationally or financially disadvantaged or those interested in practicing in underserved communities. Average test scores for some of these applicants may fall below those of the majority group. Remember that the MCAT is a fixed-point assessment; much useful information is provided by academic records, GPAs, and other preadmission data reflecting the cumulative experience and performance of special applicants. Your decisions about alternate criteria for such applicants should be based on your knowledge of the performance of past and present students who have special characteristics and on decisions about the goals of the medical education program in your school. In your decisions, you should also consider the support services that are available at your institution.

Since MCAT scores are only one source of data upon which to base decisions in admissions, admissions committees should use MCAT scores with other information about applicants at all stages that draw on test data for decision making. The importance and ease of using such sources as letters of evaluation for applicants, interview performances, and accounts of extracurricular and research experience increase as admissions decision making proceeds and the size of the applicant pool you are considering decreases. The importance of MCAT data, concomitantly, decreases.

MCAT research on test-repeater performance and on score gains for examinees receiving university-based or commercial preparation for the MCAT also bears on the consideration of scores at the preinterview stages of decision making. The data suggest that:

• Average MCAT retest gains range from .41 to 1.15 scale score points. Repeaters who initially score below 8 gain more than those with higher initial scores. Retest gains are smaller for both of the Skills Analysis tests.
• Average coaching gains range from .05 to .55 scale score points. Half-point gains are realized in the science areas of assessment after coaching. Little coaching gain is observed for Skills Analysis: Reading and Skills Analysis: Quantitative.

Research indicates that you should treat data for reexaminees or for students who received coaching in the same way that you treat data for other applicants.

How Should I Consider the Six MCAT Scores?

As earlier stated, the MCAT consists of four science, one reading, and one quantitative area of assessment. The six MCAT scores are purposefully reported separately. Independent consideration of the level of performance indicated by each score yields a profile of the candidate's strengths and weaknesses.

The separate subject matter scores provide diagnostic information and have value in clarifying applicants' achievement levels. Separate scores also allow medical schools to weight MCAT content areas in ways relevant to their curricula. Separate scores provide students with information about specific strengths and weaknesses and allow them to focus their preparation for future testing and/or entry to medical school. Similarly, admissions committees can use scores diagnostically. For example, they can evaluate applicants' weaknesses in relation to the availability of instruction in relevant areas within the medical school setting. Admissions committees are also enabled to compare individual MCAT scores to corresponding course enrollments and grades on applicants' transcripts.

Results of the survey on MCAT use indicate, however, that MCAT users often summarize the six area scores in a number of ways. Some of the reported methods are more psychometrically sound and more consistent with the goals of medical education than others. Several examples of methods respondents currently use in considering the six MCAT scores follow. These are methods that are currently in use; not all of them exemplify sound practice.

• METHOD A. The six areas of assessment are considered individually and equally.
• METHOD B. The six scores are considered individually. Some areas of assessment are explicitly or implicitly accorded more weight than others. Local validity data or cumulative experience are used to define weights for each test.
• METHOD C. The six scores are summed or averaged. The scores are assigned unequal weights in computing the total or mean. Validity data and/or experience are used to define weights.
• METHOD D. The four science scores are summed or averaged and the two skills scores are summed or averaged to yield MCAT Science and Skills composites.
• METHOD E. Scores for the six areas of assessment are summed or averaged. Each score is weighted equally in computing the total or mean.
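To make the arithmetic behind Methods D and E concrete, here is a small sketch with hypothetical scores. Note how the equally weighted mean of Method E implicitly counts the four science tests twice as heavily as the two Skills Analysis tests, the imbalance criticized below.

    # Hypothetical six MCAT scaled scores for one applicant.
    scores = {"Biology Knowledge": 9, "Chemistry Knowledge": 8, "Physics Knowledge": 7,
              "Science Problems": 8, "Skills Analysis: Reading": 11,
              "Skills Analysis: Quantitative": 10}
    science = ["Biology Knowledge", "Chemistry Knowledge", "Physics Knowledge", "Science Problems"]
    skills = ["Skills Analysis: Reading", "Skills Analysis: Quantitative"]

    # Method D: separate Science and Skills composites.
    science_mean = sum(scores[t] for t in science) / len(science)   # 8.0
    skills_mean = sum(scores[t] for t in skills) / len(skills)      # 10.5

    # Method E: one equally weighted mean of all six scores.
    overall_mean = sum(scores.values()) / len(scores)               # about 8.8
    # Four of the six terms are science tests, so science drives two-thirds of
    # the Method E index even for this reading/quantitative-strong applicant.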


If the admissions process at your school calls for the simple or weighted summation or averaging of scores at some point in decision making, the method you use should reflect the curricular emphases and goals of your school. For example, if data or experience indicate that Physics and Quantitative scores are less related to success in your program than performance in the other areas, these scores should be given lesser weight in computing the total or average score. Alternatively, if data indicate, or if it is judged, that basic analytical skill is as important to success in your program as achievement in biology, chemistry, and physics, Method D may be a good way to summarize scores at your institution. Research shows that, in general, the science scores are strong predictors of performance in the basic sciences. Skills Analysis: Reading and Skills Analysis: Quantitative data appear to predict clinical performance. Skills Analysis: Reading is also a good predictor of basic science performance for special categories of applicants, such as those who are educationally disadvantaged.

In examining MCAT data for their applicants, many committees sum or average the six test scores with equal weights at some point in admissions deliberations (Method E). This method is often cited as an example of improper use of score information in admissions decision making. It results in a selection index that nominally weights the sciences by a factor of four and the Skills Analysis tests by a factor of two. The computation of a summary index with equal subpart weights places heavy emphasis on science preparation in selecting medical students and devalues measurement of the skills assessed by the Reading and Quantitative sections. The creation of such indices runs counter to the admonition of the report of the Panel on the General Professional Education of the Physician and College Preparation for Medicine (GPEP) that admissions committees examine applicants' credentials for evidence of scholarly endeavor in the natural and social sciences and in the humanities. The GPEP Panel recommended that "the relative weights accorded to the scores on the six sections of the MCAT [be made] consistent with the best use of the examination as a predictive instrument." Again, local research on the predictive validity of the six MCAT scores to performance in the basic and clinical sciences and on the NBME can be used to derive a summary score that reflects an institution's goals and curricular emphases. Guidelines for executing such analyses appear in Appendix A. Admissions committees can also weight the individual scores based on judgments about the relative importance of tested preadmission skills to success in the medical curriculum. It is important to consider the content implications of any score summary methods that are used.

Many committees find that it is unnecessary to sum or average scores after initial screening. Certainly by the interview stage, you should use individual, rather than combined, scores.

How Can I Evaluate the Admissions Process at My Institution?

One of the central concerns of admissions officers is whether the tests and data they use in the admissions process are valid. It is important to know whether preadmission data predict adequately how well students will perform in the basic and clinical sciences at your institution.

Evidence for the reliability and validity of the MCAT is available for a number of schools. For example, analyses of examinee data and research conducted at several medical schools indicate that:

• The reliabilities of the MCAT tests range from .84 to .88.
• The median multiple correlation between MCAT and year 1 grades is .41. When GPA is added to MCAT, the median multiple correlation is .52.
• The median multiple correlation between MCAT and year 2 grades is .37. When GPA is added to MCAT, the median multiple correlation is .51.
• The median multiple correlation between MCAT and NBME, Part I is .54. When GPA is added to MCAT, the median multiple correlation is .59.
• Correlations between the six individual MCAT scores and the NBME, Part II total range from .34 to .62.
• Correlations between the six individual MCAT scores and the NBME, Part III total range from .11 to .58.
• MCAT scores also predict probability of academic difficulty. Little variation in probability is seen between MCAT scores of 8 and 15. Probability of academic difficulty increases systematically below 8.

These data reflect the usefulness of MCAT scores for predicting performance in medical school and on the NBME. Research on the correlations between MCAT and performance in years three and four by a number of institutions is currently in progress.

Examining the relationships between preadmission data (such as undergraduate GPAs, MCAT scores, accounts of extracurricular activity, and interview ratings) and performance in the basic sciences and clinical setting is likely to provide important directions for admissions decision making at your institution. Such assessment will help your committee shape a selection process that results in the effective and equitable identification of promising physician candidates.

The Standards for Educational and Psychological Testing referenced earlier state that those concerned with admissions should portray the relevance of selection procedures and selection criteria to admissions decision making and to the subsequent performance of selected candidates at an institution. According to the authors, admissions personnel should also be cognizant of possible unintended consequences of selection procedures and attempt to avoid actions that have negative results. For example, overreliance on undergraduate science GPA and science MCAT scores may result in the admission of students who perform admirably in a school's basic science program but do less well in the clinical setting. Narrow selection criteria may result in an entering class that is more homogeneous than you would like or may cause inequities in the selection of minority or disadvantaged applicants. The Standards go on to say that the relationships between preadmission predictors and medical school performance measures should be described by correlation coefficients and regression equations. These data summary techniques are described in Appendix A.

Conducting validation studies at one's own institution is important for a number of reasons. A standardized test, such as the MCAT, must necessarily compromise the institution-specific requirements of individual schools in order to be useful generally. It is important, therefore, to assess the validity of MCAT and other preadmission data at an institutional level; differences in applicant pools and curricular emphases are likely to underlie predictive differences among schools.

Such efforts should begin with a statement of the goals of the overall admissions process and of each admissions screening stage. Faculty members should also specify the skills they consider critical to success in the medical school curriculum and in the practice of medicine. Once admissions committee members have articulated their goals and identified critical skills, these goals and skill listings can be used to suggest the types of preadmission data needed for decision making at each screen. This approach, beginning with (1) the articulation of the goals of your admissions committee for each screening stage, (2) the identification of skills that underlie success in your program and in the profession, and continuing with (3) the examination of preadmission and performance data that speak to these skills and goals, is more completely described in Appendix A. Text and software references are also provided in the Appendix for additional explication. A case study is presented for illustrative purposes. Research of the type described there is likely to result in increased admissions decision-making validity and the selection of competent and concerned health-care providers.

Appendix A: Assessment of the Selection Process

As was stated in the body of this guide, the general approach to evaluating the selection process at your institution should be to (1) articulate the goals of your admissions committee for each screening stage, (2) identify skills that underlie success in your program and in the profession, and (3) examine preadmission and performance data that speak to these skills and goals. For example, if the objective of the initial evaluation of application folders is to retain in the pool those applicants with a low risk of academic failure at your institution, you might use information about the relationship between preadmission characteristics and academic difficulty for recent graduating classes to suggest initial screening criteria. Data might suggest, for instance, that students with MCAT scores below 5 and/or grade point averages less than 2.5, in combination with records of delayed college or university graduation, experience academic difficulty at your medical school. The initial screen might then be structured to remove applications of persons with these characteristics.

If a second review of applicant folders is accomplished to select a fixed number or percentage of students with potential for academic and clinical success in your program and in the profession, correlational analyses of the relationship between selected screening and school performance data might be executed on data for recent graduating classes to define procedures for identifying such students. The critical skill listings drafted by your faculty should be used to identify relevant school performance and promising preadmission data. Guidelines for computing correlational analyses can be found in univariate and multivariate statistics texts such as Fundamental Statistics in Psychology and Education by Guilford and Fruchter (1978) and Multiple Regression in Behavioral Research by Kerlinger and Pedhazur (1973). Descriptions of these procedures may also be found in documentation for many statistical analysis software packages, e.g., the Statistical Package for the Social Sciences (SPSS) User's Guide, SPSS, Inc. (1983), or the Statistical Analysis System (SAS) User's Guide, SAS Institute, Inc. (1985). Correlations between preadmission and individual or combined school performance data can be computed. In most cases, a more parsimonious picture of predictor/criterion relationships will be derived from a multi-correlational approach called multiple regression. This approach considers both the correlations among predictors and the relations between predictor and criterion (that is, medical school performance) variables.
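As a minimal sketch of the multiple regression analysis described above, the following uses ordinary least squares from NumPy. The predictor layout, variable names, and numbers are placeholders for whatever your institutional records actually contain, not data from any study cited in this guide.

    # Sketch: regress first-year medical school grades on preadmission predictors
    # (ordinary least squares). All values below are placeholders.
    import numpy as np

    # One row per recent student: [mean MCAT scaled score, undergraduate GPA].
    predictors = np.array([[9.0, 3.6], [8.0, 3.2], [11.0, 3.8], [7.0, 3.4], [10.0, 3.5]])
    year1_grades = np.array([82.0, 75.0, 90.0, 71.0, 84.0])   # criterion measure

    # Add an intercept column and solve y = X b by least squares.
    X = np.column_stack([np.ones(len(predictors)), predictors])
    coeffs, _, _, _ = np.linalg.lstsq(X, year1_grades, rcond=None)

    # Multiple correlation R between predicted and observed criterion scores.
    predicted = X @ coeffs
    R = np.corrcoef(predicted, year1_grades)[0, 1]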

Researchers at the University of Arizona have examined their admissions process in a manner similar to that described here. They have documented the relationship between traditional preadmission predictors, interview ratings, and medical school performance. In addition to undergraduate GPA data and MCAT scores, interview ratings were obtained on scales addressing the applicant's manner, maturity, motivation, verbal skills and expression, and interpersonal contact. Performance criteria were compiled for three basic science courses and three clinical clerkship scales: Internal Medicine Clinical Skills, Internal Medicine Maturity, and Obstetrics/Gynecology Attitude.

Regression equations were calculated, and findings indicated that basic science grades were well predicted by GPA, MCAT, and interview ratings. GPA and MCAT predicted the clinical data at a lesser level, with interview ratings contributing strongly to the prediction of clinical criteria. This research stresses the importance of using multiple data sources in admissions decision making. The methods used by these researchers are consonant with the correlational procedures described by the authors of the Standards for Educational and Psychological Testing (1985). It is likely that the preadmission and performance data used in this study do not represent the range of information sources that are relevant to, available at, and potentially useful for assessing selection procedures at your institution. As reported, these procedures also do not allow for the inclusion of preadmission data that are not proved useful by correlational analyses but that reflect qualities deemed desirable by your committee and medical school faculty.

Several caveats to the general correlational and regression procedures described by the Standards should be noted. The first is that often basic science and clinical grades or ratings do not draw distinctions among students that are useful for research purposes. For example, the vast majority of students may receive passing marks or high passes in a pass/fail system. This type of grading policy limits the amount of information available from correlational analyses. Rating systems with numerous gradations are more useful analytically. Alternatively, medical school performance data may not reference attributes that a faculty considers critical to the effective provision of patient care; for instance, values and attitudes that promote concern for the individual and society are important to physician practice. Special attention should be paid to preadmission information that mirrors these judgments even if criterion data ignore such characteristics.

A second and related limitation is that much of the valuable information provided in application folders is difficult to quantify for research or selection purposes. For example, it is difficult to quantify data provided by personal statements, letters of evaluation, and accounts of extracurricular work or research experience. Efforts should be made to assess these rich sources of information as early in the decision-making process as is possible. These sources may reflect personal characteristics that account for important differences between students in medical school performance.

A third limitation to the approach described in the Standards is that medical school performance data are unavailable for applicants who were not selected. The non-selected pool typically represents a broader range of preadmission characteristics than the selected group. It is plausible that many applicants who were not admitted would have succeeded in medical school. The characteristics of students for whom school performance data are obtainable reflect the selection constraints imposed in previous years. Because data for selected students are limited, the information provided by correlational analyses is somewhat restricted.

Despite these caveats, research of this type is likely to inform admissions decision making and provide for the selection of promising candidates. Careful articulation of the goals of your medical education program, identification of the skills deemed critical to physician performance, and the establishment of links between these skills, goals, and available applicant data at each screening stage are likely to result in increased admissions decision-making validity.

At some point in the admissions process, there should be an opportunity to examine more carefully the qualifications of special categories of applicants, such as educationally or financially disadvantaged applicants or those interested in practicing in underserved communities, who do not pass established screens. Information gleaned from statistical examinations of preadmission and performance data should be supplemented by judgments about institutional objectives for and the potential accomplishments of special students. The availability of institutional support services should also be considered with regard to these applicants.

When the interview pool has been selected and interviews are complete, the files of your candidates are enhanced by information not communicated by the written application materials. The weights assigned by your committee to available preadmission data are likely to change with the addition of interview information. As with earlier screens, program objectives and judgments about the relative importance of various applicant characteristics should be used in the final admissions stages to shape a decision-making model for student selection.

Appendix B: Sources of Additional Information

Additional information is available in a number of AAMC publications:

1. Information about the specific content of the MCAT, its organization, and scoring scheme appears in The MCAT Student Manual (Washington, D.C.: Association of American Medical Colleges, 1984).

2. Technical information on the psychometric characteristics of the MCAT, its reliability and validity, and performance characteristics for gender and racial/ethnic population groups, is referenced in An Annotated Bibliography of Research on the Medical College Admission Test (Washington, D.C.: Association of American Medical Colleges, 1987).

3. Data on MCAT validity and on the effect of coaching and practice on MCAT test results are reported in the Medical College Admission Test Interpretive Studies Series (Washington, D.C.: Association of American Medical Colleges, 1984, 1986).

4. Performance data for first-time and repeating examinees are published each year in the MCAT Summary of Score Distribution Reports (Washington, D.C.: Association of American Medical Colleges). These reports also provide MCAT data by gender, year in college, undergraduate major, racial/ethnic group, and state of residence. Trend data on demographic and selected academic variables for medical school applicants and matriculants are published annually in Trends in Medical School Applicants and Matriculants (Washington, D.C.: Association of American Medical Colleges).

5. Information about the selection and retention of minority applicants appears in workshop materials for the Simulated Minority Admissions Exercise (Washington, D.C.: Association of American Medical Colleges, 1986).

If important topics are not covered in this document or in these other references and you would like additional information, please call or write the author.