
Combining Test Data

MANA 4328

Dr. Jeanne Michalski

michalski@uta.edu

Selection Decisions

First, how to deal with multiple predictors?

Second, how to make a final decision?

Developing a Hiring System

OK, Enough Assessing: Who Do We Hire??!!

Interpreting Test Scores

Norm-referenced scores
- Test scores are compared to other applicants or a comparison group.
- Raw scores should be converted to Z scores or percentiles.
- Use “rank ordering.”

Criterion-referenced scores
- Test scores indicate a degree of competency.
- NOT compared to other applicants.
- Typically scored as “qualified” vs. “not qualified.”
- Use “cut-off scores.”
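A minimal sketch of the norm-referenced conversion in Python (the raw scores for the applicant pool are hypothetical): each raw score is standardized to a Z score against the pool, given a percentile rank, and listed in rank order.

    # Convert raw scores to Z scores and percentile ranks relative to
    # the applicant pool (hypothetical scores, for illustration only).
    from statistics import mean, stdev

    raw_scores = [78, 85, 62, 91, 70, 88, 74]

    m, s = mean(raw_scores), stdev(raw_scores)

    def z_score(x):
        """Standardize a raw score against the comparison group."""
        return (x - m) / s

    def percentile_rank(x):
        """Percent of the comparison group scoring at or below x."""
        return 100 * sum(score <= x for score in raw_scores) / len(raw_scores)

    for x in sorted(raw_scores, reverse=True):  # rank ordering, best first
        print(f"raw={x}  z={z_score(x):+.2f}  pct={percentile_rank(x):.0f}")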

Setting Cutoff Scores

Based on the percentage of applicants you need to hire (the yield ratio): “Thorndike’s predicted yield.”
- Example: you need 5 warehouse clerks and expect 50 to apply.
- 5 / 50 = .10 (10%), so 90% of applicants are rejected.
- The cutoff score is therefore set at the 90th percentile.
- For reference, a Z score of 1 corresponds to the 84th percentile.

Based on a minimum proficiency score:
- Derived from a validation study linked to job analysis.
- Incorporates the SEM (validity and reliability).
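The yield arithmetic maps directly to a Z cutoff under a normal curve. A short sketch using Python’s statistics.NormalDist and the warehouse-clerk numbers above:

    # Thorndike's predicted yield: the selection ratio fixes the cutoff
    # percentile, which maps to a Z score under the normal curve.
    from statistics import NormalDist

    positions, applicants = 5, 50
    selection_ratio = positions / applicants   # 0.10 -> hire the top 10%
    cutoff_percentile = 1 - selection_ratio    # reject the bottom 90%

    z_cutoff = NormalDist().inv_cdf(cutoff_percentile)
    print(f"{cutoff_percentile:.0%} percentile -> z = {z_cutoff:.2f}")  # 90% -> z = 1.28

    # For reference: a Z score of 1 sits at about the 84th percentile.
    print(f"z = 1 -> {NormalDist().cdf(1):.0%} percentile")             # ~84%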

Selection Outcomes

[Figure: scatterplot of prediction against job performance, showing the regression line, the cut score at the 90th percentile, and the resulting No Pass / Pass regions.]

Selection Outcomes

                          PREDICTION
                      No Hire              Hire
PERFORMANCE
High Performer        False Negative       True Positive
                      (Type II error)
Low Performer         True Negative        False Positive
                                           (Type I error)

Selection Outcomes

[Figure: the same scatterplot divided into quadrants by the prediction line (unqualified vs. qualified) and the cut score, with high and low performers separated on the performance axis.]

Dealing With Multiple Predictors

“Mechanical” techniques are superior to judgment.

1. Combine the predictors: compensatory or “test assessment” approach
2. Judge each predictor independently: multiple hurdles / multiple cutoffs
3. Profile matching
4. Hybrid selection systems

Compensatory Methods

Unit weighting: Score = P1 + P2 + P3 + P4

Rational weighting: Score = (.10)P1 + (.30)P2 + (.40)P3 + (.20)P4

Ranking: Score = Rank(P1) + Rank(P2) + Rank(P3) + Rank(P4)

Profile matching: D² = Σ (P_ideal – P_applicant)²

Multiple regression: Predicted job performance = a + b1x1 + b2x2 + b3x3
(x = predictors; b = optimal regression weights)

Issues:
- Compensatory: assumes high scores on one predictor compensate for low scores on another.
- Assumes a linear relationship between predictor scores and job performance (i.e., “more is better”).
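A minimal sketch of the first two methods above, in Python; the predictor scores are hypothetical, and the weights are those from the rational-weighting example. Note how the compensatory logic lets the high third score offset the low fourth one.

    # Compensatory scoring: unit weighting and rational weighting.
    def unit_weighted(p):
        """Unit weighting: Score = P1 + P2 + P3 + P4."""
        return sum(p)

    def rational_weighted(p, weights=(0.10, 0.30, 0.40, 0.20)):
        """Rational weighting: each predictor times a judged weight."""
        return sum(w * x for w, x in zip(weights, p))

    applicant = (80, 70, 90, 60)          # hypothetical predictor scores
    print(unit_weighted(applicant))       # 300
    print(rational_weighted(applicant))   # 8 + 21 + 36 + 12 = 77.0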


Multiple Cutoff Approach

Sets a minimum score on each predictor.

Issues:
- Assumes a non-linear relationship between predictors and job performance.
- Assumes the predictors are non-compensatory.
- How do you set the cutoff scores?
- If an applicant fails the first cutoff, why continue? (See the sketch below.)
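A minimal sketch contrasting the two non-compensatory designs: a multiple cutoff scores every predictor, while a multiple hurdle evaluates them in sequence and stops at the first failure, which answers the “why continue?” question. The test names and cutoff values are hypothetical.

    # Multiple cutoff vs. multiple hurdle (hypothetical cutoffs).
    CUTOFFS = {"test_1": 70, "test_2": 65, "interview": 3, "background": 1}

    def passes_multiple_cutoff(scores):
        """Qualified only if every predictor meets its minimum."""
        return all(scores[k] >= c for k, c in CUTOFFS.items())

    def first_failed_hurdle(scores):
        """Evaluate hurdles in order; return the first failure, else None."""
        for k, c in CUTOFFS.items():
            if scores[k] < c:
                return k      # reject here; later assessments never run
        return None

    applicant = {"test_1": 75, "test_2": 60, "interview": 4, "background": 1}
    print(passes_multiple_cutoff(applicant))   # False
    print(first_failed_hurdle(applicant))      # "test_2"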

Multiple Hurdle Model

[Figure: flowchart — Test 1 → Test 2 → Interview → Background; a Pass at each stage advances the applicant toward the finalist decision, while a Fail at any stage leads to Reject.]

Profile Matching Approach

Emphasizes the “ideal” level of a KSA: e.g., too little attention to detail may produce sloppy work; too much may represent compulsiveness.

Issues:
- Non-compensatory.
- Small errors in the profile can add up to a big mistake in the overall score.
- Little evidence that it works better.

Profile Match Example

[Figure: bar chart of the ideal profile (scale 0–4.5) across Detail, Experience, C. Service, and Sales Apt.]

Profile Match Example

[Figure: line chart (scale 0–6) comparing John, Sam, and Sue against the ideal profile across Detail, Experience, C. Service, and Sales Apt.]
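A minimal sketch of the D² computation for profiles like those charted above; since the plotted values are not recoverable, the ratings for the ideal profile and for John, Sam, and Sue are hypothetical stand-ins.

    # Profile matching: D^2 = sum of (ideal - applicant)^2 per attribute;
    # the smallest D^2 is the closest match. All ratings are hypothetical.
    IDEAL = {"Detail": 4, "Experience": 3, "C. Service": 4, "Sales Apt": 3}

    applicants = {
        "John": {"Detail": 5, "Experience": 2, "C. Service": 4, "Sales Apt": 3},
        "Sam":  {"Detail": 2, "Experience": 3, "C. Service": 3, "Sales Apt": 5},
        "Sue":  {"Detail": 4, "Experience": 4, "C. Service": 4, "Sales Apt": 2},
    }

    def d_squared(profile):
        """Squared distance from the ideal profile (lower is better)."""
        return sum((IDEAL[k] - profile[k]) ** 2 for k in IDEAL)

    for name in sorted(applicants, key=lambda n: d_squared(applicants[n])):
        print(f"{name}: D^2 = {d_squared(applicants[name])}")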

Making Finalist Decisions

Top-Down Strategy
- Maximizes efficiency, but may need to address adverse impact issues.

Banding Strategy
- Creates “bands” of scores that are statistically equivalent (based on reliability).
- Then hire from within bands, either randomly or based on other factors (including diversity).

Banding

Grouping like test scores together; a function of test reliability.
- Based on the Standard Error of Measure (SEM): a band of ± 2 SEM corresponds to a 95% confidence interval.
- If the top score on a test is 95 and the SEM is 2, then scores between 95 and 91 should be banded together.

Applicant total scores: 94, 93, 89, 88, 87, 87, 86, 81, 81, 80, 79, 79, 78, 72, 70, 69, 67
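A minimal sketch of banding applied to the applicant scores above, using the 2-SEM rule from the example (SEM = 2, so each band spans 4 points down from its top score):

    # Group sorted scores into successive 2-SEM bands.
    SEM = 2
    scores = [94, 93, 89, 88, 87, 87, 86, 81, 81, 80, 79, 79, 78, 72, 70, 69, 67]

    def bands(scores, sem):
        """Split scores (best first) into bands from each top score down 2*sem."""
        remaining, result = sorted(scores, reverse=True), []
        while remaining:
            floor = remaining[0] - 2 * sem
            band = [s for s in remaining if s >= floor]
            result.append(band)
            remaining = remaining[len(band):]
        return result

    for i, band in enumerate(bands(scores, SEM), start=1):
        print(f"Band {i}: {band}")
    # Band 1: [94, 93]  Band 2: [89, 88, 87, 87, 86]  Band 3: [81, 81, 80, 79, 79, 78] ...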

Information Overload!!

Leads to:
- Reverting to gut instincts
- Mental gymnastics

Combined Selection Model

Selection Stage            Selection Test                      Decision Model
Applicants → Candidates    Application Blank                   Minimum Qualification (Hurdle)
Candidates → Finalists     Four Ability Tests + Work Sample    Rational Weighting (Hurdle)
Finalists → Offers         Structured Interview                Unit Weighting (Rank Order)
Offers → Hires             Drug Screen + Final Interview       Hurdle
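A minimal sketch of how such a hybrid pipeline composes hurdle and compensatory stages; the applicants, scores, and cutoffs are hypothetical, and the stages are abbreviated to three of the four above.

    # Hybrid selection: hurdle stages narrow the pool, then survivors
    # are rank-ordered on a final compensatory score.
    def hurdle(pool, test, cutoff):
        """Keep only applicants whose score on `test` meets the cutoff."""
        return [a for a in pool if a[test] >= cutoff]

    pool = [
        {"name": "A", "app_blank": 1, "ability": 82, "interview": 4},
        {"name": "B", "app_blank": 1, "ability": 65, "interview": 5},
        {"name": "C", "app_blank": 0, "ability": 90, "interview": 3},
    ]

    candidates = hurdle(pool, "app_blank", 1)       # minimum qualifications
    finalists = hurdle(candidates, "ability", 70)   # ability-test hurdle
    offers = sorted(finalists, key=lambda a: a["interview"], reverse=True)
    print([a["name"] for a in offers])              # ['A']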

Alternative Approach

Rate each attribute on each tool: Desirable / Acceptable / Unacceptable.

Develop a composite rating for each attribute:
- Combining scores from multiple assessors
- Combining scores across different tools
- A “judgmental synthesis” of the data

Use the composite ratings to make final decisions.

Who Do You Hire??

Name     Interview   Work Sample   Knowledge Test   Personality Inventory
Lee      Excellent   Good          90%              Hire
Maria    Excellent   Very Good     85%              Hire
Alan     Good        Excellent     90%              Caution
Juan     Marginal    Good          81%              Hire
Frank    Excellent   Poor          70%              Hire
Tamika   Good        Good          75%              Hire

Categorical Decision Approach

1. Eliminate applicants with unacceptable qualifications

2. Then hire candidates with as many desirable ratings as possible

3. Finally, hire as needed from applicants with “acceptable” ratings

Optional: “weight” attributes by importance

Sample Decision Table

Name     Customer Service   Attention to Detail   Conscientiousness   Computer Skills   Work Knowledge
Lee      Acceptable         Desirable             Desirable           Acceptable        Acceptable
Maria    Desirable          Desirable             Acceptable          Acceptable        Desirable
Alan     Desirable          Acceptable            Unacceptable        Acceptable        Acceptable
Juan     Acceptable         Acceptable            Acceptable          Acceptable        Acceptable
Frank    Desirable          Desirable             Desirable           Unacceptable      Unacceptable
Tamika   Acceptable         Desirable             Acceptable          Acceptable        Acceptable
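A minimal sketch of the categorical decision approach applied to the table above; it reproduces the hiring actions shown in the two tables that follow.

    # Categorical decision rule: drop anyone with an Unacceptable rating,
    # then rank the rest by their count of Desirable ratings.
    ratings = {
        "Lee":    ["Acceptable", "Desirable", "Desirable", "Acceptable", "Acceptable"],
        "Maria":  ["Desirable", "Desirable", "Acceptable", "Acceptable", "Desirable"],
        "Alan":   ["Desirable", "Acceptable", "Unacceptable", "Acceptable", "Acceptable"],
        "Juan":   ["Acceptable"] * 5,
        "Frank":  ["Desirable", "Desirable", "Desirable", "Unacceptable", "Unacceptable"],
        "Tamika": ["Acceptable", "Desirable", "Acceptable", "Acceptable", "Acceptable"],
    }

    eligible = {n: r for n, r in ratings.items() if "Unacceptable" not in r}
    ranked = sorted(eligible, key=lambda n: eligible[n].count("Desirable"),
                    reverse=True)
    print(ranked)   # ['Maria', 'Lee', 'Tamika', 'Juan'] -- the hire order below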

More Positions than Applicants

Name     Customer Service   Attention to Detail   Conscientiousness   Computer Skills   Work Knowledge   Hiring Action
Lee      Acceptable         Desirable             Desirable           Acceptable        Acceptable       Hire
Maria    Desirable          Desirable             Acceptable          Acceptable        Desirable        Hire
Alan     Desirable          Acceptable            Unacceptable        Acceptable        Acceptable       Not Hire
Juan     Acceptable         Acceptable            Acceptable          Acceptable        Acceptable       Hire
Frank    Desirable          Desirable             Desirable           Unacceptable      Unacceptable     Not Hire
Tamika   Acceptable         Desirable             Acceptable          Acceptable        Acceptable       Hire

More Applicants than Positions

Name     Customer Service   Attention to Detail   Conscientiousness   Computer Skills   Work Knowledge   Hiring Action
Lee      Acceptable         Desirable             Desirable           Acceptable        Acceptable       Hire 2
Maria    Desirable          Desirable             Acceptable          Acceptable        Desirable        Hire 1
Alan     Desirable          Acceptable            Unacceptable        Acceptable        Acceptable       Not Hire
Juan     Acceptable         Acceptable            Acceptable          Acceptable        Acceptable       Hire 4
Frank    Desirable          Desirable             Desirable           Unacceptable      Unacceptable     Not Hire
Tamika   Acceptable         Desirable             Acceptable          Acceptable        Acceptable       Hire 3

Selection

Top-down selection (rank) vs. cutoff scores:
- Is the predictor linearly related to performance?
- How reliable are the tests?

1. Top-down method – rank order
2. Minimum cutoffs – passing scores

Final Decision

- Random selection
- Ranking
- Grouping
- The role of discretion or “gut feeling”
