
Page 1:

Measuring the Effects of Unit Nonresponse in Establishment Surveys

Clyde Tucker and John Dixon, U.S. Bureau of Labor Statistics
David Cantor, Westat

Page 2:

Acknowledgement

We would like to thank Bob Groves and Mike Brick for the use of their materials from their short course “Practical Tools for Nonresponse Bias Studies.”

We also thank Bob for the use of materials from his 2006 article “Nonresponse Rates and Nonresponse Bias in Household Surveys,” Public Opinion Quarterly, 70, 646-675.

Page 3:

Factors Affecting Nonresponse Outside Survey Organization Control

• Three clusters of factors have been identified as outside the control of the survey organization (Willimack et al., 2002):

– External environmental attributes (“climate”)
– Characteristics of the sample unit
– Characteristics of the establishment employee(s) who decide whether to join a survey, the priority given to responding, and the length of participation

• Effects attributable to all three factors are widespread and substantially affect nonresponse

• Even if nonresponse is not increasing, these factors make it hard to maintain the status quo

Page 4:

Some Examples of These Factors

• Downsizing decreases the staff available to provide data

• Increased firm size due to mergers and acquisitions increases the complexity of reporting and the reporting burden

• “Gatekeeping” poses significant barriers

• Attitudes of owners or key managers toward government and data confidentiality

• Whether or not anyone in the firm actually uses data products from the survey

• Staff turnover

• The growing accounting practice of using third parties, such as payroll processing or accounting firms

Page 5:

Nonresponse Error for Sample Mean

In simplest terms,

$$\bar{y}_r = \bar{y}_n + \frac{m}{n}\left(\bar{y}_r - \bar{y}_m\right)$$

or, in words,

Respondent Mean = Full Sample Mean + (Nonresponse Rate) × (Respondent Mean − Nonrespondent Mean)

where $\bar{y}_r$ is the respondent mean, $\bar{y}_n$ the full-sample mean, $\bar{y}_m$ the nonrespondent mean, and $m/n$ the nonresponse rate ($m$ nonrespondents out of $n$ sampled units).
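A quick numeric check of this identity, as a minimal sketch with made-up numbers (not survey data):

```python
# Illustrative numbers only: respondent mean equals full-sample mean
# plus (nonresponse rate) times (respondent - nonrespondent gap).
n, m = 1000, 300                 # sampled units, nonrespondents
ybar_r, ybar_m = 52.0, 47.0      # respondent and nonrespondent means

ybar_n = ((n - m) * ybar_r + m * ybar_m) / n   # full-sample mean: 50.5
rhs = ybar_n + (m / n) * (ybar_r - ybar_m)     # 50.5 + 0.3 * 5.0 = 52.0
print(ybar_r, rhs)                             # both 52.0
```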

Page 6:

Thinking Causally About Nonresponse Rates and Nonresponse Error

• Key scientific question concerns the mechanisms of response propensity that create covariance with the survey variable:

$$E(\bar{y}_r - \bar{y}_n) \approx \frac{\sigma_{yp}}{\bar{p}}$$

where $\sigma_{yp}$ is the covariance between the survey variable, $y$, and the response propensity, $p$, and $\bar{p}$ is the mean response propensity

• What mechanisms produce the covariance?
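A small simulation can make this concrete. The sketch below uses an invented data-generating process (all names and parameters are assumptions): a latent trait drives both y and p, and the realized bias of the respondent mean is compared with the σ_yp/p̄ approximation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical mechanism: a latent establishment trait z raises the
# survey variable y while lowering the response propensity p.
z = rng.normal(size=n)
y = 2.0 + 0.5 * z + rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 - 0.8 * z)))   # logistic propensity

respond = rng.uniform(size=n) < p

bias_observed = y[respond].mean() - y.mean()   # realized nonresponse bias
bias_approx = np.cov(y, p)[0, 1] / p.mean()    # sigma_yp / p-bar

print(f"observed: {bias_observed:+.4f}  approx: {bias_approx:+.4f}")
```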

Page 7:

Reporting Bias

• The relative bias provides a measure of the magnitude of the bias. Interpreted like a percent, it is useful for comparing bias across survey measures that are on different scales:

$$\mathrm{RelB}(\hat{y}) = \frac{B(\hat{y})}{\hat{y}}$$

• Where: $\mathrm{RelB}(\hat{y})$ = the relative bias with respect to the estimate, $\hat{y}$, and $B(\hat{y})$ = the bias of the estimate.

Page 8:

Reporting Bias

• The bias ratio provides an indication of how confidence intervals are affected by bias:

$$\mathrm{BR}(\hat{y}) = \frac{B(\hat{y})}{se(\hat{y})}$$

• Where: $se(\hat{y})$ = the standard error of the estimate.
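A minimal sketch putting the two diagnostics together; the estimate, benchmark value, and standard error below are invented numbers:

```python
# Hypothetical values, for illustration only.
estimate = 105.0    # respondent-based estimate
truth = 100.0       # assumed true value (e.g., from a benchmark source)
se = 2.0            # standard error of the estimate

bias = estimate - truth
rel_bias = bias / estimate    # relative bias w.r.t. the estimate
bias_ratio = bias / se        # bias in standard-error units

print(f"relative bias = {rel_bias:.1%}, bias ratio = {bias_ratio:.2f}")
```

A bias ratio well above 1 means nominal confidence intervals will cover the true value far less often than advertised.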

Page 9:

What does the Stochastic View Imply?

• Key issue is whether what influences survey participation also influences the survey variables

• Increased nonresponse rates do not necessarily imply increased nonresponse error, although lower propensities will tend to increase error

• Hence, investigations are necessary to discover whether the estimates of interest might be subject to nonresponse errors because of a correlation between p and y

Page 10:

Alternative Causal Models for Studies of Nonresponse Rates and Nonresponse Bias

1. Separate Causes Model
2. Common Cause Model (a common cause Z drives both P and Y)
3. Survey Variable Cause Model (Y itself drives P)
4. Nonresponse-Measurement Error Model
5. Nonresponse Error Attenuation Model

[Path diagrams relating the response propensity P, the survey variable Y, its reported value Y*, and external causes X and Z are not reproduced here.]

Page 11:

A More Specific Theory Relating Nonresponse to Bias

• Levels of bias will differ by subpopulations

• Differences between estimates from the total sample and from respondents alone will be greatest at either end of the nonresponse continuum, but the potential bias is greatest when response rates are low

– For example: Bias in a business survey may be greatest in the Services sector because it often has the lowest response rates

Page 12:
[figure-only slide; no recoverable text]
Page 13:
[figure-only slide; no recoverable text]
Page 14:
[figure-only slide; no recoverable text]

Nonresponse Bias Study Techniques

1. Comparison to other estimates (benchmarking)

2. Nonresponse bias for estimates based on variables available on the sample

3. Altering the weighting adjustments

4. Studying variation within the respondent set

5. Followup of nonrespondents

Page 15:

Weights and Response Rates

• A base or selection weight is the inverse of the probability of selection of the unit. The sum of all the sampled units’ base weights estimates the population total.

• When units are sampled using a complex sample design, we suggest using base weights to compute response rates that reflect the percentage of the sampled population that responds. Unweighted rates are useful for other purposes, such as describing the effectiveness of the data collection effort.

• Weighted response rates are computed by summing the units’ base weights by disposition code rather than summing the unweighted counts of units.

• In establishment surveys, it is useful to include a measure of size (e.g., number of employees or students) to account for the units’ relative importance. The weight for computing response rates is then the base weight times the measure of size. A sketch of these computations follows.
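A minimal sketch of the three rates; the data frame, column names, and disposition codes are hypothetical:

```python
import pandas as pd

# Hypothetical sample file: one row per sampled unit, with a base weight,
# a measure of size, and a final disposition code ("R" = respondent).
df = pd.DataFrame({
    "base_weight": [10.0, 10.0, 2.5, 2.5, 1.0, 1.0],
    "employees":   [5, 8, 40, 60, 900, 1200],
    "disposition": ["R", "NR", "R", "R", "NR", "R"],
})

resp = df["disposition"].eq("R")

unweighted = resp.mean()
weighted = df.loc[resp, "base_weight"].sum() / df["base_weight"].sum()

# Size-weighted rate: base weight times the measure of size.
sw = df["base_weight"] * df["employees"]
size_weighted = sw[resp].sum() / sw.sum()

print(f"unweighted {unweighted:.2%}, weighted {weighted:.2%}, "
      f"size-weighted {size_weighted:.2%}")
```

Note how the three rates can diverge sharply when large units respond at different rates than small ones.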

Page 16:

Weights and Nonresponse Analysis

• A general rule is that weights should be used in nonresponse analysis studies so that relationships at the population level can be examined. Guides for choosing the specific weights to use are:

– Use base weights for nonresponse bias studies that compare all sampled respondents and nonrespondents. Weights adjusted for nonresponse may be misleading in this situation.

– Use fully adjusted weights for nonresponse bias studies that compare survey estimates with data from external sources. One important exception is when the survey weights are poststratified. In this case, weights prior to poststratification are generally more appropriate.

Page 17:

1. Comparison to Other Estimates -- Benchmarking

• Data or estimates from another source that are closely related to the respondent estimates are used to evaluate bias due to nonresponse in the survey estimates

• Assumes that the alternative data source has different sources of measurement error and/or is a superior measure relative to the target survey

Page 18:

1. Benchmarking Survey Estimates to those from Another Data Source

• Another survey or administrative record system may contain estimates of variables similar to those being produced from the survey

• Difference between estimates from survey and other data source is an indicator of bias (both nonresponse and other)

Page 19:

1. How to Conduct a Nonresponse Bias Benchmark Study

1. Identify comparison estimates
• surveys with very high response rates
• administrative systems with different measurement error properties

2. Assess the major reasons why the survey estimates and the estimates from the comparison sources differ

3. Compute estimates from the survey (using final weights) and from the comparison source so that they are as comparable as possible (often requires estimates for domains)

4. The difference is an estimate of the direction, or perhaps the magnitude, of the bias (a sketch of this comparison follows)
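A sketch of step 4 under the simplest possible setup; the domains and numbers are invented:

```python
# Hypothetical domain estimates from the survey (final weights) and from
# a benchmark source, compared as differences and relative differences.
survey = {"Manufacturing": 1250.0, "Retail": 980.0, "Services": 2100.0}
benchmark = {"Manufacturing": 1300.0, "Retail": 975.0, "Services": 2310.0}

for domain in survey:
    diff = survey[domain] - benchmark[domain]
    rel = diff / benchmark[domain]
    print(f"{domain:<14} diff={diff:+8.1f}  relative={rel:+.1%}")
```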

Page 20:

Pros and Cons of Benchmark Comparison to Estimate NR Bias

• Pros
– Relatively simple to do and often inexpensive
– Estimates from the survey use final weights and are thus relevant
– Gives an estimate of bias that may be important to analysts

• Cons
– Estimated bias contains errors from the comparison source as well as from the survey; this is why it is very important that the comparison source be highly accurate
– Measurement properties are generally not consistent for the survey and the comparison source; this is often the largest source of error
– Item nonresponse in both data sets reduces comparability
– Hard to find comparable data for establishment surveys (IRS records?); more common in household surveys

Page 21:

2. Using Variables on Respondents and Non-respondents

• Compare statistics available on both respondents and non-respondents

• The extent to which there is a difference is an indication of the bias; a sketch of the comparison follows
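A minimal sketch, assuming a frame variable (here, frame employment) is known for every sampled unit; all values are invented:

```python
import numpy as np

# Hypothetical frame variable known for respondents and nonrespondents,
# plus a response indicator for each sampled unit.
frame_emp = np.array([12, 40, 7, 250, 33, 9, 85, 410, 18, 61], dtype=float)
responded = np.array([1, 1, 0, 0, 1, 1, 1, 0, 1, 1], dtype=bool)

ybar_r = frame_emp[responded].mean()    # respondent mean
ybar_m = frame_emp[~responded].mean()   # nonrespondent mean
nr_rate = (~responded).mean()           # nonresponse rate

# Deterministic decomposition from earlier: bias of the respondent mean
# equals the nonresponse rate times the respondent-nonrespondent gap.
bias = nr_rate * (ybar_r - ybar_m)
print(f"resp mean {ybar_r:.1f}, nonresp mean {ybar_m:.1f}, bias {bias:+.1f}")
```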

Page 22:

Possible Sources of Data on Respondents and Non-respondents

• Sampling frame variables
• Matched variables from other data sets
• Screener information

Page 23:

Pros and Cons of Using Data on Both Respondents and Non-respondents

• Pros
– Measurement properties for the variables are consistent for respondents and nonrespondents
– Bias is strictly due to nonresponse
– Provides data on the correlation between the propensity to respond and the variables

• Cons
– Bias estimates are for the auxiliary variables; only variables highly correlated with the key survey statistics are relevant
– The method assumes no nonresponse adjustments are made in producing the survey estimates; if variables are highly correlated, then they could be used in adjustment

Page 24:

The CES Study

• J. Dixon and C. Tucker (ICES3), “Assessing Bias in Estimates of Employment”

• Collects employment, hours and earnings monthly from a current sample of over 300,000 establishments

• Tracks the gains and losses in jobs in various sectors of the economy

• In this paper, nonresponse bias work on this survey focuses on estimating bias for establishment subpopulations with different patterns of nonresponse using data from the 2003 CES and QCEW (state UI files)

Page 25:

Link relative estimate of employment (Y)

Let $Y_t$ be the estimate for a primary cell for month $t$. Then

$$Y_t = R_{t,t-1} \times Y_{t-1}$$

• where $R_{t,t-1}$ is the ratio of the total sample employment in month $t$ to the total sample employment in month $t-1$, computed over all sample units reporting data for both months.
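A minimal sketch of the link relative computation; the matched-pair data and the prior-month estimate are invented:

```python
import pandas as pd

# Hypothetical matched sample: units reporting employment in both months.
matched = pd.DataFrame({
    "emp_prev": [120, 45, 300, 18, 77],
    "emp_curr": [118, 47, 310, 18, 80],
})

R = matched["emp_curr"].sum() / matched["emp_prev"].sum()  # R_{t,t-1}
Y_prev = 10_000.0    # previous month's estimate for the cell
Y_curr = R * Y_prev  # link relative estimate for month t

print(f"R = {R:.4f}, Y_t = {Y_curr:.0f}")
```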

Page 26:

Estimate of Bias

• Using the most recent employment reports in the QCEW (not CES) for both responders and nonresponders

• Compare the link relative for respondents to that for nonrespondents

• Results presented are not weighted by probability of selection, but weighted results show similar patterns

• At this point, not comparing the link relative of responders to the entire sample

Page 27:

Quantile Regression

• Bias analysis performed at the establishment level on subpopulations defined by size and industry

• Testing for the difference in employment between CES responders and nonresponders: $Y = a + bX + e$, where $X$ is an indicator of nonresponse (essentially a t-test)

• Since firm size is theorized to relate to nonresponse, the coefficient relating nonresponse to employment is likely to differ across firms of different sizes

• Quantile regression examines the coefficients for different quantiles of the distribution of firm sizes

• Since industries can be expected to have different patterns, the quantile regressions are done by industry group (see the sketch below)
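A sketch of this kind of test using statsmodels' QuantReg on simulated data; the data-generating process, variable names, and quantiles are illustrative assumptions, not the CES specification:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000

# Simulated data: an outcome with a nonresponse effect that is larger in
# the upper part of the distribution, so quantile slopes differ.
nonresp = rng.integers(0, 2, size=n).astype(float)
y = rng.normal(0.0, 1.0, size=n) + nonresp * rng.exponential(0.3, size=n)

X = sm.add_constant(nonresp)           # intercept + nonresponse indicator
for q in (0.25, 0.50, 0.75, 0.90):
    fit = sm.QuantReg(y, X).fit(q=q)
    print(f"q={q:.2f}  nonresponse coefficient = {fit.params[1]:+.3f}")
```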

Page 28:

Distribution of size and the quantile regression curve

Page 29:

Quantile regression using the log of size.

[Figure: link relative based estimates, NAICS 2 = Agriculture, Forestry, Fishing and Hunting; y-axis: logrel1, from -5.0 to 7.5; x-axis: nflag1 (0, 1)]

Page 30:

MSA percent bias predicted by response rate for Mining

Page 31:

MSA percent bias predicted by response rate for Food Manufacturing

Page 32:

MSA percent bias predicted by response rate for Retail Trade

Page 33:

MSA percent bias predicted by response rate for Accommodation and Food Services

Page 34:

Hing (1987). "Nonresponse bias in expense data from the 1985 National Nursing Home Survey." Proceedings of the Survey Research Methods Section, American Statistical Association, 401-405.

• Purpose: Estimate cost of care in nursing homes
• Target population: Nursing home facilities in the U.S.
• Sample design: Stratified list sample of facilities, with facilities sampled with probabilities proportionate to estimated number of beds; second-stage sample of residents and staff
• Mode of data collection: In-person interview of facility administrator, with drop-off self-administered Expense questionnaire for the accountant
• Response rate: Facility questionnaire: 93%; Expense questionnaire: 68% of those responding to the Facility interview
• Target estimate: Estimated cost of care
• Nonresponse error measure: Comparison of Facility questionnaire items for respondents and nonrespondents to the Expense questionnaire

Page 35:

Using the Facility Questionnaire to Estimate Nonresponse Bias based on Participation in Expense Questionnaire

Tables 2 and 5 from Hing (1987)

Characteristic    Response rate (%)   Rel-bias of number of beds (%)
Ownership
  Proprietary            58                     -15
  Nonprofit              89                      32
  Government             94                      32
Bed size
  <50                    61                     -11
  50-99                  65                       3
  100-199                68                      -1
  200+                   73                       1

Page 36:

Conclusions

• Smaller nursing homes underrepresented; thus, respondent estimates overestimate averages on size-related attributes

• Analysis suggested poststratification by ownership type would significantly reduce biases

• Limitation:
– Nonresponse bias estimate does not reflect nonresponse on the Facility questionnaire

Page 37:

3. Weighting Adjustments

• Alter estimation weights and compare the estimates using the various weights to evaluate nonresponse bias. Weighting methods may include poststratification, raking, calibration, logistic regression, or even imputation.

Page 38:

Adjust Weights Using Model of Characteristics

• Weighting can reduce nonresponse bias if the weights are correlated with the estimate. Auxiliary data in weighting that are good predictors of the characteristic may give alternative weights that have less bias. If the estimates using the alternative weights do not differ from the original estimates, then either the nonresponse is not resulting in bias or the auxiliary data do not reduce the bias.

• If the estimates vary by the weighting scheme, then the weighting approach should be carefully examined and the one most likely to have lower nonresponse bias should be used.

Page 39:

How to Conduct Nonresponse Bias Analysis Using Weights From Modeling Characteristics

1. Use a weighting method such as calibration estimation with these variables to produce alternative weights.

2. Compute the difference between the estimates using the alternative weights and the estimates using the regular weights as a measure of nonresponse bias for the estimate. A sketch follows.
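A minimal sketch of the comparison, using simple poststratification as the alternative weighting; the data, size classes, and frame totals are invented:

```python
import pandas as pd

# Hypothetical respondent file with base weights, a survey variable, and
# a size class; frame_totals are assumed known population counts.
df = pd.DataFrame({
    "size_class": ["small", "small", "medium", "large"],
    "base_w":     [50.0, 50.0, 10.0, 2.0],
    "y":          [3.0, 5.0, 40.0, 800.0],
})
frame_totals = {"small": 150.0, "medium": 25.0, "large": 3.0}

# Poststratify: scale base weights so they sum to frame totals per class.
cell_w = df.groupby("size_class")["base_w"].transform("sum")
df["adj_w"] = df["base_w"] * df["size_class"].map(frame_totals) / cell_w

est_base = (df["base_w"] * df["y"]).sum() / df["base_w"].sum()
est_adj = (df["adj_w"] * df["y"]).sum() / df["adj_w"].sum()
print(f"base-weight mean {est_base:.1f}, adjusted mean {est_adj:.1f}, "
      f"difference {est_adj - est_base:+.1f}")
```

The difference between the two estimates is the indicator of potential nonresponse bias described in step 2.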

Page 40:

Pros and Cons of Comparing Alternative Estimates Based on Modeling the Estimate

• Pros
– If good predictors are available, then the use of these in the weighting is likely to reduce the bias in the statistics being evaluated
– If the differences in the estimates are small, this is evidence that nonresponse bias may not be large

• Cons
– Recomputing weights may be expensive
– If good correlates are not available, then a lack of differences may indicate poor relationships rather than the absence of bias
– The approach is limited to statistics that have a high correlation with the auxiliary data

Page 41:

4. Studying Variation Within the Respondents: Level of Effort

• Some nonresponse models assume that units requiring more effort to respond (more callbacks, incentives, refusal conversion) are similar to the units that do not respond

• Characteristics are estimated for respondents by level of effort (e.g., response propensity scores)

• Models are fitted to see whether they fit and can be used to estimate characteristics of nonrespondents

Page 42:

Analyze Level of Effort

1. Attach level-of-effort data to respondents (e.g., number of callbacks, ever refused, early or late responder)

2. Compute statistics for each level of effort separately (usually unweighted or with base weights only)

3. If there is a (linear) relationship between level of effort and the statistic, one may decide to extrapolate to estimate the statistic for those that did not respond (see the sketch below)

4. It is often more appropriate to do the analysis separately for the major reasons for nonresponse
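A minimal sketch of step 3: fit a linear trend in level of effort and extrapolate one step beyond the observed range. All numbers are invented:

```python
import numpy as np

# Hypothetical mean of a survey statistic by contact-attempt group.
attempts = np.array([1, 2, 3, 4, 5], dtype=float)
means = np.array([48.0, 49.5, 50.8, 52.1, 53.0])

# Fit a linear trend and extrapolate to a notional "attempt 6" group
# standing in for the nonrespondents.
slope, intercept = np.polyfit(attempts, means, deg=1)
predicted_nonresp = intercept + slope * 6

print(f"trend {slope:+.2f} per attempt; extrapolated nonrespondent "
      f"mean approx {predicted_nonresp:.1f}")
```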

Page 43:

Pros and Cons of Using Level of Effort Analysis to Estimate Bias

• Pros
– Simple to do, provided data collection systems capture the pertinent information
– In some surveys may provide a reasonable indicator of the magnitude and direction of nonresponse bias

• Cons
– Highly dependent on model assumptions that have not been validated in many applications
– Difficult to extrapolate to produce estimates of nonresponse bias without other data

Page 44:

Brick et al. (2007). "Relationship between length of data collection period, field costs, and data quality." Paper presented at ICES III, Montreal, Canada.

• Purpose: Estimate amount and condition of research space for science and engineering
• Target population: Colleges and universities
• Sample design: Census
• Mode of data collection: Web and mail
• Response rate: 94%
• Target estimate: Square feet
• Nonresponse error measure: Relative bias when level of effort is cut by 25%

Page 45:

Relative bias for Facilities Survey estimates for academic institutions, by field and response level

[Figure, two panels: "Net assignable square feet - FY03" and "Net assignable square feet of new construction projects - FY05"; y-axis: absolute relative difference (0.0 to 1.0); x-axis: response level (70 to 95)]

Page 46:

Summary

– The patterns are not consistent
– The 75% response level generally exhibits larger bias than the other response levels, but generally not statistically significant
– No significant bias if data collection were terminated at the 88% response level

Page 47:

5. Followup of Nonrespondents

• Use of respondent data obtained through extraordinary efforts as a comparison to respondent data obtained with traditional efforts

• “Effort” may include callbacks, incentives, change of mode, use of elite corps of interviewers

Page 48:

How to Do a Nonrespondent Followup Study

1. Define a set of recruitment techniques judged to be superior to those in the ongoing effort

2. Determine whether budget permits use of those techniques on all remaining active cases

• If not, implement 2nd phase sample (described later)

3. Implement enhanced recruitment protocol

4. Compare respondents obtained in the enhanced protocol with those obtained in the initial protocol (a sketch of one such comparison follows)
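A sketch of step 4 for a single proportion, using a pooled two-sample z-test; the counts and proportions are invented, not the study's data:

```python
import math

# Hypothetical comparison of a proportion from the initial protocol with
# the same proportion from the enhanced-protocol followup respondents.
p1, n1 = 0.423, 1500   # initial-protocol respondents
p2, n2 = 0.168, 400    # enhanced-protocol (followup) respondents

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"difference {p1 - p2:+.3f}, z = {z:.2f}")
```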

Page 49:

Pros and Cons of a Nonresponse Followup Study

• Pros
– Direct measures are obtained from previously nonrespondent cases
– The same measurements are used
– Nonresponse bias on all variables can be estimated

• Cons
– Followup response rates are rarely 100%
– Requires an extended data collection period

Page 50:

Marker et al. (2005). Terrorism Risk Insurance Program: Policyholders Survey. Final report prepared for the Department of Treasury. Westat: Rockville, MD.

• Purpose: Estimate use of terrorism insurance by businesses
• Target population: Businesses and state/local government offices in the U.S.
• Sample design: Stratified sample
• Mode of data collection: Web and mail
• Response rate: 17%
• Target estimate: Estimated percent that have insurance
• Nonresponse error measure: Indicators of use of terrorism insurance

Page 51:

Follow-up Procedures

• Contacted a follow-up sample of 1,000 nonrespondents to the survey

• A shortened instrument was used to collect critical measures

• Interviews conducted by telephone

Page 52:

Selected comparisons from non-response follow-up

Estimate                                                       Survey   Follow-up   Diff
% with Commercial Property Insurance                            92.3      89.3       3.0
% with Terrorism Risk under TRIA through Workers Compensation   42.6      16.8      25.7*
% with Terrorism Risk under TRIA through Umbrella Policy        35.7      31.7       4.0

* p<.000

Page 53:

Summary

• Out of 8 estimates, one was statistically significant

• Some indication that nonresponse leads to overestimates of certain types of insurance
– The significant difference may have been due to measurement error on the follow-up instrument

Page 54:

Five Things You Should Remember from this Lecture

1. The three principal types of nonresponse bias studies are:
– Comparing surveys to external data
– Studying internal variation within the data collection, and
– Contrasting alternative postsurvey adjusted estimates

2. All three have strengths and weaknesses; using multiple approaches simultaneously provides greater understanding

3. Nonresponse bias is specific to a statistic, so separate assessments may be needed for different estimates

4. Auxiliary variables correlated with both the likelihood of responding and key survey variables are important for evaluation

5. Thinking about nonresponse before the survey is important because different modes, frames, and survey designs permit different types of studies