
Collecting, Managing, and Assessing Data Using Sample Surveys

Collecting, Managing, and Assessing Data Using Sample Surveys provides a thorough, step-by-step guide to the design and implementation of surveys. Beginning with a primer on basic statistics, the first half of the book takes readers on a comprehensive tour through the basics of survey design. Topics covered include the ethics of surveys, the design of survey procedures, the design of the survey instrument, how to write questions, and how to draw representative samples. Having shown readers how to design surveys, the second half of the book discusses a number of issues surrounding their implementation, including repetitive surveys, the economics of surveys, Web-based surveys, coding and data entry, data expansion and weighting, the issue of nonresponse, and the documenting and archiving of survey data. The book is an excellent introduction to the use of surveys for graduate students as well as a useful reference work for scholars and professionals.

Peter Stopher is Professor of Transport Planning at the Institute of Transport and Logistics Studies at the University of Sydney. He has also been a professor at Northwestern University, Cornell University, McMaster University, and Louisiana State University. Professor Stopher has developed a substantial reputation in the field of data collection, particularly for the support of travel forecasting and analysis. He pioneered the development of travel and activity diaries as a data collection mechanism, and has written extensively on issues of sample design, data expansion, nonresponse biases, and measurement issues.


Collecting, Managing, and Assessing Data Using Sample Surveys

Peter Stopher


Cambridge University Press
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Tokyo, Mexico City

Cambridge University Press
The Edinburgh Building, Cambridge CB2 8RU, UK

Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521681872

© Peter Stopher 2012

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2012

Printed in the United Kingdom at the University Press, Cambridge

A catalogue record for this publication is available from the British Library

ISBN 978-0-521-86311-7 Hardback
ISBN 978-0-521-68187-2 Paperback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


To my wife, Carmen, with grateful thanks for your faith in me and your continuing support and encouragement.


Contents

List of figures
List of tables
Acknowledgements

1 Introduction
  1.1 The purpose of this book
  1.2 Scope of the book
  1.3 Survey statistics

2 Basic statistics and probability
  2.1 Some definitions in statistics
    2.1.1 Censuses and surveys
  2.2 Describing data
    2.2.1 Types of scales
      Nominal scales
      Ordinal scales
      Interval scales
      Ratio scales
      Measurement scales
    2.2.2 Data presentation: graphics
    2.2.3 Data presentation: non-graphical
      Measures of magnitude
      Frequencies and proportions
      Central measures of data
      Measures of dispersion
      The normal distribution
      Some useful properties of variances and standard deviations
      Proportions or probabilities
      Data transformations
      Covariance and correlation
      Coefficient of variation
      Other measures of variability
      Alternatives to Sturges' rule

3 Basic issues in surveys
  3.1 Need for survey methods
    3.1.1 A definition of sampling methodology
  3.2 Surveys and censuses
    3.2.1 Costs
    3.2.2 Time
  3.3 Representativeness
    3.3.1 Randomness
    3.3.2 Probability sampling
      Sources of random numbers
  3.4 Errors and bias
    3.4.1 Sample design and sampling error
    3.4.2 Bias
    3.4.3 Avoiding bias
  3.5 Some important definitions

4 Ethics of surveys of human populations
  4.1 Why ethics?
  4.2 Codes of ethics or practice
  4.3 Potential threats to confidentiality
    4.3.1 Retaining detail and confidentiality
  4.4 Informed consent
  4.5 Conclusions

5 Designing a survey
  5.1 Components of survey design
  5.2 Defining the survey purpose
    5.2.1 Components of survey purpose
      Data needs
      Comparability or innovation
      Defining data needs
      Data needs in human subject surveys
      Survey timing
      Geographic bounds for the survey
  5.3 Trade-offs in survey design

6 Methods for conducting surveys of human populations
  6.1 Overview
  6.2 Face-to-face interviews
  6.3 Postal surveys
  6.4 Telephone surveys
  6.5 Internet surveys
  6.6 Compound survey methods
    6.6.1 Pre-recruitment contact
    6.6.2 Recruitment
      Random digit dialling
    6.6.3 Survey delivery
    6.6.4 Data collection
    6.6.5 An example
  6.7 Mixed-mode surveys
    6.7.1 Increasing response and reducing bias
  6.8 Observational surveys

7 Focus groups
  7.1 Introduction
  7.2 Definition of a focus group
    7.2.1 The size and number of focus groups
    7.2.2 How a focus group functions
    7.2.3 Analysing the focus group discussions
    7.2.4 Some disadvantages of focus groups
  7.3 Using focus groups to design a survey
  7.4 Using focus groups to evaluate a survey
  7.5 Summary

8 Design of survey instruments
  8.1 Scope of this chapter
  8.2 Question type
    8.2.1 Classification and behaviour questions
      Mitigating threatening questions
    8.2.2 Memory or recall error
  8.3 Question format
    8.3.1 Open questions
    8.3.2 Field-coded questions
    8.3.3 Closed questions
  8.4 Physical layout of the survey instrument
    8.4.1 Introduction
    8.4.2 Question ordering
      Opening questions
      Body of the survey
      The end of the questionnaire
    8.4.3 Some general issues on question layout
      Overall format
      Appearance of the survey
      Front cover
      Spatial layout
      Choice of typeface
      Use of colour and graphics
      Question numbering
      Page breaks
      Repeated questions
      Instructions
      Show cards
      Time of the interview
      Precoding
      End of the survey
      Some final comments on questionnaire layout

9 Design of questions and question wording
  9.1 Introduction
  9.2 Issues in writing questions
    9.2.1 Requiring an answer
    9.2.2 Ready answers
    9.2.3 Accurate recall and reporting
    9.2.4 Revealing the data
    9.2.5 Motivation to answer
    9.2.6 Influences on response categories
    9.2.7 Use of categories and other responses
      Ordered and unordered categories
  9.3 Principles for writing questions
    9.3.1 Use simple language
    9.3.2 Number of words
    9.3.3 Avoid using vague words
    9.3.4 Avoid using 'Tick all that apply' formats
    9.3.5 Develop response categories that are mutually exclusive and exhaustive
    9.3.6 Make sure that questions are technically correct
    9.3.7 Do not ask respondents to say 'Yes' in order to say 'No'
    9.3.8 Avoid double-barrelled questions
  9.4 Conclusion

10 Special issues for qualitative and preference surveys
  10.1 Introduction
  10.2 Designing qualitative questions
    10.2.1 Scaling questions
  10.3 Stated response questions
    10.3.1 The hypothetical situation
    10.3.2 Determining attribute levels
    10.3.3 Number of choice alternatives or scenarios
    10.3.4 Other issues of concern
      Data inconsistency
      Lexicographic responses
      Random responses
  10.4 Some concluding comments on stated response survey design

11 Design of data collection procedures
  11.1 Introduction
  11.2 Contacting respondents
    11.2.1 Pre-notification contacts
    11.2.2 Number and type of contacts
      Nature of reminder contacts
      Postal surveys
      Postal surveys with telephone recruitment
      Telephone interviews
      Face-to-face interviews
      Internet surveys
  11.3 Who should respond to the survey?
    11.3.1 Targeted person
    11.3.2 Full household surveys
      Proxy reporting
  11.4 Defining a complete response
    11.4.1 Completeness of the data items
    11.4.2 Completeness of aggregate sampling units
  11.5 Sample replacement
    11.5.1 When to replace a sample unit
    11.5.2 How to replace a sample
  11.6 Incentives
    11.6.1 Recommendations on incentives
  11.7 Respondent burden
    11.7.1 Past experience
    11.7.2 Appropriate moment
    11.7.3 Perceived relevance
    11.7.4 Difficulty
      Physical difficulty
      Intellectual difficulty
      Emotional difficulty
      Reducing difficulty
    11.7.5 External factors
      Attitudes and opinions of others
      The 'feel good' effect
      Appropriateness of the medium
    11.7.6 Mitigating respondent burden
  11.8 Concluding comments

12 Pilot surveys and pretests
  12.1 Introduction
  12.2 Definitions
  12.3 Selecting respondents for pretests and pilot surveys
    12.3.1 Selecting respondents
    12.3.2 Sample size
      Pilot surveys
      Pretests
  12.4 Costs and time requirements of pretests and pilot surveys
  12.5 Concluding comments

13 Sample design and sampling
  13.1 Introduction
  13.2 Sampling frames
  13.3 Random sampling procedures
    13.3.1 Initial considerations
    13.3.2 The normal law of error
  13.4 Random sampling methods
    13.4.1 Simple random sampling
      Drawing the sample
      Estimating population statistics and sampling errors
      Example
      Sampling from a finite population
      Sampling error of ratios and proportions
      Defining the sample size
      Examples
    13.4.2 Stratified sampling
      Types of stratified samples
      Study domains and strata
      Weighted means and variances
      Stratified sampling with a uniform sampling fraction
      Drawing the sample
      Estimating population statistics and sampling errors
      Pre- and post-stratification
      Example
      Equal allocation
      Summary of proportionate sampling
      Stratified sampling with variable sampling fraction
      Drawing the sample
      Estimating population statistics and sampling errors
      Non-coincident study domains and strata
      Optimum allocation and economic design
      Example
      Survey costs differing by stratum
      Example
      Practical issues in drawing disproportionate samples
      Concluding comments on disproportionate sampling
    13.4.3 Multistage sampling
      Drawing a multistage sample
      Requirements for multistage sampling
      Estimating population values and sampling statistics
      Example
      Concluding comments on multistage sampling
  13.5 Quasi-random sampling methods
    13.5.1 Cluster sampling
      Equal clusters: population values and standard errors
      Example
      The effects of clustering
      Unequal clusters: population values and standard errors
      Random selection of unequal clusters
      Example
      Stratified sampling of unequal clusters
      Paired selection of unequal-sized clusters
    13.5.2 Systematic sampling
      Population values and standard errors in a systematic sample
      Simple random model
      Stratified random model
      Paired selection model
      Successive difference model
      Example
    13.5.3 Choice-based sampling
  13.6 Non-random sampling methods
    13.6.1 Quota sampling
    13.6.2 Intentional, judgemental, or expert samples
    13.6.3 Haphazard samples
    13.6.4 Convenience samples
  13.7 Summary

14 Repetitive surveys
  14.1 Introduction
  14.2 Non-overlapping samples
  14.3 Incomplete overlap
  14.4 Subsampling on the second and subsequent occasions
  14.5 Complete overlap: a panel
  14.6 Practical issues in designing and conducting panel surveys
    14.6.1 Attrition
      Replacement of panel members lost by attrition
      Reducing losses due to attrition
    14.6.2 Contamination
    14.6.3 Conditioning
  14.7 Advantages and disadvantages of panels
  14.8 Methods for administering practical panel surveys
  14.9 Continuous surveys

15 Survey economics
  15.1 Introduction
  15.2 Cost elements in survey design
  15.3 Trade-offs in survey design
    15.3.1 Postal surveys
    15.3.2 Telephone recruitment with a postal survey with or without telephone retrieval
    15.3.3 Face-to-face interview
    15.3.4 More on potential trade-offs
  15.4 Concluding comments

16 Survey implementation
  16.1 Introduction
  16.2 Interviewer selection and training
    16.2.1 Interviewer selection
    16.2.2 Interviewer training
    16.2.3 Interviewer monitoring
  16.3 Record keeping
  16.4 Survey supervision
  16.5 Survey publicity
    16.5.1 Frequently asked questions, fact sheet, or brochure
  16.6 Storage of survey forms
    16.6.1 Identification numbers
  16.7 Issues for surveys using posted materials
  16.8 Issues for surveys using telephone contact
    16.8.1 Caller ID
    16.8.2 Answering machines
    16.8.3 Repeated requests for callback
  16.9 Data on incomplete responses
  16.10 Checking survey responses
  16.11 Times to avoid data collection
  16.12 Summary comments on survey implementation

17 Web-based surveys
  17.1 Introduction
  17.2 The internet as an optional response mechanism
  17.3 Some design issues for Web surveys
    17.3.1 Differences between paper and internet surveys
    17.3.2 Question and response
    17.3.3 Ability to fill in the Web survey in multiple sittings
    17.3.4 Progress tracking
    17.3.5 Pre-filled responses
    17.3.6 Confidentiality in Web-based surveys
    17.3.7 Pictures, maps, etc. on Web surveys
      Animation in survey pictures and maps
    17.3.8 Browser software
      User interface design
      Creating mock-ups
      Page loading time
  17.4 Some design principles for Web surveys
  17.5 Concluding comments

18 Coding and data entry
  18.1 Introduction
  18.2 Coding
    18.2.1 Coding of missing values
    18.2.2 Use of zeros and blanks in coding
    18.2.3 Coding consistency
      Binary variables
      Numeric variables
    18.2.4 Coding complex variables
    18.2.5 Geocoding
      Requesting address details for other places than home
      Pre-coding of buildings
      Interactive gazetteers
      Other forms of geocoding assistance
      Locating by mapping software
    18.2.6 Methods for creating codes
  18.3 Data entry
  18.4 Data repair

19 Data expansion and weighting
  19.1 Introduction
  19.2 Data expansion
    19.2.1 Simple random sampling
    19.2.2 Stratified sampling
    19.2.3 Multistage sampling
    19.2.4 Cluster samples
    19.2.5 Other sampling methods
  19.3 Data weighting
    19.3.1 Weighting with unknown population totals
      An example
      A second example
    19.3.2 Weighting with known populations
      An example
  19.4 Summary

20 Nonresponse
  20.1 Introduction
  20.2 Unit nonresponse
    20.2.1 Calculating response rates
      Classifying responses to a survey
      Calculating response rates
    20.2.2 Reducing nonresponse and increasing response rates
      Design issues affecting nonresponse
      Survey publicity
      Use of incentives
      Use of reminders and repeat contacts
      Personalisation
      Summary
    20.2.3 Nonresponse surveys
  20.3 Item nonresponse
    20.3.1 Data repair
      Flagging repaired variables
      Inference
      Imputation
      Historical imputation
      Average imputation
      Ratio imputation
      Regression imputation
      Cold-deck imputation
      Hot-deck imputation
      Expectation maximisation
      Multiple imputation
      Imputation using neural networks
      Summary of imputation methods
    20.3.2 A final note on item nonresponse
      Strategies to obtain age and income
      Age
      Income

21 Measuring data quality
  21.1 Introduction
  21.2 General measures of data quality
    21.2.1 Missing value statistic
    21.2.2 Data cleaning statistic
    21.2.3 Coverage error
    21.2.4 Sample bias
  21.3 Specific measures of data quality
    21.3.1 Non-mobility rates
    21.3.2 Trip rates and activity rates
    21.3.3 Proxy reporting
  21.4 Validation surveys
    21.4.1 Follow-up questions
    21.4.2 Independent measurement
  21.5 Adherence to quality measures and guidance

22 Future directions in survey procedures
  22.1 Dangers of forecasting new directions
  22.2 Some current issues
    22.2.1 Reliance on telephones
      Threats to the use of telephone surveys
      Conclusions on reliance on telephones
    22.2.2 Language and literacy
      Language
      Literacy
    22.2.3 Mixed-mode surveys
    22.2.4 Use of administrative data
    22.2.5 Proxy reporting
  22.3 Some possible future directions
    22.3.1 A GPS survey as a potential substitute for a household travel survey
      The effect of multiple observations of each respondent on sample size

23 Documenting and archiving
  23.1 Introduction
  23.2 Documentation or the creation of metadata
    23.2.1 Descriptive metadata
    23.2.2 Preservation metadata
    23.2.3 Geospatial metadata
  23.3 Archiving of data

References
Index

Figures

2.1 Scatter plot of odometer reading versus model year
2.2 Scatter plot of fuel type by body type
2.3 Pie chart of vehicle body types
2.4 Pie chart of household income groups
2.5 Histogram of household income
2.6 Histogram of vehicle types
2.7 Line graph of maximum and minimum temperatures for thirty days
2.8 Ogive of cumulative household income data from Figure 2.5
2.9 Relative ogive of household income
2.10 Relative step chart of household income
2.11 Stem and leaf display of income
2.12 Arithmetic mean as centre of gravity
2.13 Bimodal distribution of temperatures
2.14 Distribution of maximum temperatures from Table 2.4
2.15 Distribution of minimum temperatures from Table 2.4
2.16 Income distribution from Table 2.5
2.17 Distribution of vehicle counts
2.18 Box and whisker plot of income data from Table 2.5
2.19 Box and whisker plot of maximum temperatures
2.20 Box and whisker plot of minimum temperatures
2.21 Box and whisker plot of vehicles passing through the green phase
2.22 Box and whisker plot of children's ages
2.23 The normal distribution
2.24 Comparison of normal distributions with different variances
2.25 Scatter plot of maximum versus minimum temperature
2.26 A distribution skewed to the right
2.27 A distribution skewed to the left
2.28 Distribution with low kurtosis
2.29 Distribution with high kurtosis
3.1 Extract of random numbers from the RAND Million Random Digits
4.1 Example of a consent form
4.2 First page of an example subject information sheet
4.3 Second page of the example subject information sheet
5.1 Schematic of the survey process
5.2 Survey design trade-offs
6.1 Schematic of survey methods
8.1 Document file layout for booklet printing
8.2 Example of an unacceptable questionnaire format
8.3 Example of an acceptable questionnaire format
8.4 Excerpt from a survey showing arrows to guide respondent
8.5 Extract from a questionnaire showing use of graphics
8.6 Columned layout for asking identical questions about multiple people
8.7 Inefficient and efficient structures for organising serial questions
8.8 Instructions placed at the point to which they refer
8.9 Example of an unacceptable questionnaire format with response codes
9.1 Example of a sequence of questions that do not require answers
9.2 Example of a sequence of questions that do require answers
9.3 Example of a belief question
9.4 Example of a belief question with a more vague response
9.5 Two alternative response category sets for the age question
9.6 Alternative questions on age
9.7 Examples of questions with unordered response categories
9.8 An example of mixed ordered and unordered categories
9.9 Reformulated question from Figure 9.8
9.10 An unordered alternative to the question in Figure 9.8
9.11 Avoiding vague words in question wording
9.12 Example of a failure to achieve mutual exclusivity and exhaustiveness
9.13 Correction to mutual exclusivity and exhaustiveness
9.14 Example of a double negative
9.15 Example of removal of a double negative
9.16 An alternative that keeps the wording of the measure
9.17 An alternative way to deal with a double-barrelled question
10.1 Example of a qualitative question
10.2 Example of a qualitative question using number categories
10.3 Example of unbalanced positive and negative categories
10.4 Example of balanced positive and negative categories
10.5 Example of placing the neutral option at the end
10.6 Example of distinguishing the neutral option from 'No opinion'
10.7 Use of columned layout for repeated category responses
10.8 Alternative layout for repeated category responses
10.9 Statements that call for similar responses
10.10 Statements that call for varying responses
10.11 Rephrasing questions to remove requirement for 'Agree'/'Disagree'
11.1 Example of a postcard reminder for the first reminder
11.2 Framework for understanding respondent burden
14.1 Schematic of the four types of repetitive samples
14.2 Rotating panel showing recruitment, attrition, and rotation
18.1 An unordered set of responses requiring coding
18.2 A possible format for asking for an address
18.3 Excerpt from a mark-sensing survey
20.1 Illustration of the categorisation of response outcomes
20.2 Representation of a neural network model
23.1 Open archival information system model

Page 23: Collecting managing and assessing data using sample surveys

xxii

2.1 Frequencies and proportions of vehicle types page 18 2.2 Frequencies, proportions, and cumulative values for household

income 19 2.3 Minimum and maximum temperatures for a month (°C) 20 2.4 Grouped temperature data 21 2.5 Disaggregate household income data 22 2.6 Growth rates of an investment fund, 1993–2004 26 2.7 Speeds by kilometre for a train 27 2.8 Measurements of ball bearings 29 2.9 Number of vehicles passing through the green phase of a traffic light 32 2.10 Sorted number of vehicles passing through the green phase 32 2.11 Number of children by age 34 2.12 Deviations from the mean for the income data of Table 2.5 38 2.13 Outcomes from throwing the die twice 40 2.14 Sorted number of vehicles passing through the green phase 43 2.15 Deviations for vehicles passing through the green phase 44 2.16 Values of variance and standard deviation for values of p and q 47 2.17 Deviations for vehicles passing through the green phase raised to third

and fourth powers 57 2.18 Deviations from the mean for children’s ages 58 2.19 Data on household size, annual income, and number of vehicles for

forty households 59 2.20 Deviations needed for covariance and correlation estimates 61 3.1 Heights of 100 (fictitious) university students (cm) 76 3.2 Sample of the first and last five students 76 3.3 Sample of the first ten students 76 3.4 Intentional sample of ten students 77 3.5 Random sample of ten students (in order drawn) 77 3.6 Summary of results from Tables 3.2 to 3.5 77 6.1 Internet world usage statistics 112

Tables

Page 24: Collecting managing and assessing data using sample surveys

List of tables xxiii

6.2 Mixed-mode survey types (based on Dillman and Tarnai, 1991) 121 11.1 Selection grid by age and gender 222 13.1 Partial listing of households for a simple random sample 272 13.2 Excerpt of random numbers from the RAND Million Random Digits 273 13.3 Selection of sample of 100 members using four-digit groups from

Table 13.2 274 13.4 Data from twenty respondents in a fictitious survey 276 13.5 Sums of squares for population groups 286 13.6 Data for drawing an optimum household travel survey sample 299 13.7 Optimal allocation of the 2,000-household sample 299 13.8 Optimal allocation and expected sampling errors by stratum 300 13.9 Results of equal allocation for the household travel survey 300 13.10 Given information for economic design of the optimal allocation 301 13.11 Preliminary sample sizes and costs for economic design of the

optimum allocation 301 13.12 Estimation of the final sample size and budget 302 13.13 Comparison of optimal allocation, equal allocation, and economic

design for $150,000 survey 302 13.14 Comparison of sampling errors from the three sample designs 303 13.15 Desired stratum sample sizes and results of recruitment calls 305 13.16 Distribution of departments and students 310 13.17 Two-stage sample of students from the university 311 13.18 Multistage sample using disproportionate sampling at the first stage 313 13.19 Calculations for standard error from sample in Table 13.18 315 13.20 Examples of cluster samples 316 13.21 Cluster sample of doctor’s files 320 13.22 Random drawing of blocks of dwelling units 326 13.23 Calculations for paired selections and successive differences 332 18.1 Potential complex codes for income categories 406 18.2 Example codes for use of the internet and mobile phones 407 19.1 Results of an hypothetical household survey 424 19.2 Calculation of weights for the hypothetical household survey 424 19.3 Two-way distribution of completed surveys 424 19.4 Two-way distribution of terminated surveys 425 19.5 Table 19.3 expressed as percentages 425 19.6 Sum of the cells in Tables 19.3 and 19.4 425 19.7 Cells of Table 19.6 as percentages 426 19.8 Weights derived from Tables 19.7 and 19.5 426 19.9 Results of an hypothetical household survey compared to

secondary source data 427 19.10 Two-way distribution of completed surveys by percentage

(originally shown in Table 19.5) 427 19.11 Results of factoring the rows of Table 19.10 428

Page 25: Collecting managing and assessing data using sample surveys

List of tablesxxiv

19.12 Second iteration, in which columns are factored 428 19.13 Third iteration, in which rows are factored again 429 19.14 Weights derived from the iterative proportional fitting 429 20.1 Final disposition codes for RDD telephone surveys 439 23.1 Preservation metadata elements and description 504

Acknowledgements

As is always the case, many people have assisted in the process that has led to this book. First, I would like to acknowledge all those, too numerous to mention by name, who have helped me over the years to learn and understand some of the basics of designing and implementing surveys. They have been many and they have taught me much of what I now know in this field. However, having said that, I would particularly like to acknowledge those whom I have worked with over the past fifteen years or more on the International Steering Committee for Travel Survey Conferences (ISCTSC), who have contributed enormously to broadening and deepening my own understandings of surveys. In particular, I would like to mention, in no particular order, Arnim Meyburg, Martin Lee-Gosselin, Johanna Zmud, Gerd Sammer, Chester Wilmot, Werner Brög, Juan de Dios Órtuzar, Manfred Wermuth, Kay Axhausen, Patrick Bonnel, Elaine Murakami, Tony Richardson, (the late) Pat van der Reis, Peter Jones, Alan Pisarski, Mary Lynn Tischer, Harry Timmermans, Marina Lombard, Cheryl Stecher, Jean-Loup Madre, Jimmy Armoogum, and (the late) Ryuichi Kitamura. All these individuals have inspired and helped me and contributed in various ways to this book, most of them, probably, without realising that they have done so.

I would also like to acknowledge the support I have received in this endeavour from the University of Sydney, and especially from the director of the Institute of Transport and Logistics Studies, Professor David Hensher. Both David and the university have provided a wide variety of support for the writing and production of this book, for which I am most grateful.

However, most importantly, I would like to acknowledge the enormous support and encouragement from my wife, Carmen, and her patience, as I have often spent long hours working on this book, and her unquestioning faith in me that I could do it. She has been an enduring source of strength and inspiration to me. Without her, I doubt that this book would have been written.

As always, a book can see the light of day only through the encouragement and support of a publisher and those assisting in the publishing process. I would like to acknowledge Chris Harrison of Cambridge University Press, who first thought that this book might be worth publishing and encouraged me to develop the outline for it, and then provided critical input that has helped to shape the book into what it has become. I would also like to thank profusely Mike Richardson, who carefully and thoroughly copy-edited the manuscript, improving immensely its clarity and completeness. I would also like to thank Joanna Breeze, the production editor at Cambridge. She has worked with me through all the delays I have caused in the book production, and has still got this book to publication in a very timely manner. However, as always, and in spite of the help of these people, any errors that remain in the book are entirely my responsibility.

Finally, I would like to acknowledge the contributions made by the many students I have taught over the years in this area of survey design. The interactions we have had, the feedback I have received, and the enjoyment I have had in being able to teach this material and see students understand and appreciate what good survey design entails have been most rewarding and have also contributed to the development of this book. I hope that they and future students will find this book to be of help to them and a continuing reference to some of those points that we have discussed.

Peter Stopher
Blackheath, New South Wales

August 2011


1 Introduction

1.1 The purpose of this book

There are a number of books available that treat various aspects of survey design, sampling, survey implementation, and so forth (examples include Cochran, 1963; Dillman, 1978, 2000; Groves and Couper, 1998; Kish, 1965; Richardson, Ampt, and Meyburg, 1995; and Yates, 1965). However, there does not appear to be a single book that covers all aspects of a survey, from the inception of the survey itself through to archiving the data. This is the purpose of this book. The reader will find herein a complete treatment of all aspects of a survey, including all the elements of design, the requirements for testing and refinement, fielding the survey, coding and analysing the resulting data, documenting what happened, and archiving the data, so that nothing is lost from what is inevitably an expensive process.

This book concentrates on surveys of human populations, which are generally more challenging, and more difficult both to design and to implement, than most surveys of non-human populations. In addition, because of the background of the author, examples are drawn mainly from surveys in the area of transport planning. However, the examples are purely illustrative; no background is needed in transport planning to understand the examples, and the principles explained are applicable to any survey that involves human response to a survey instrument. In spite of this focus on human participation in the survey process, there are occasional references to other types of surveys, especially observational and counting types of surveys.

In writing this book, the author has tried to make this as complete a treatment as possible. Although extensive references are included to numerous publications and books in various aspects of measuring data, the reader should be able to find all that he or she requires within the covers of this book. This includes a chapter on some basic aspects of statistics and probability that are used subsequently, particularly in the development of the statistical aspects of surveys.

In summary, then, the purpose of this book is to provide the reader with an extensive and, as far as possible, exhaustive treatment of issues involved in the design and execution of surveys of human populations. It is the intent that, whether the reader is a student, a professional who has been asked to design and implement a survey, or someone attempting to gain a level of knowledge about the survey process, all questions will be answered within these pages. This is undoubtedly a daunting task. The reader will be able to judge the extent to which this has been achieved. The book is also designed so that someone who has no prior knowledge of statistics, probability, surveys, or the purposes to which surveys may be put can pick up and read this book, gaining knowledge and expertise in doing so. At the same time, this book is designed as a reference book. To that end, an extensive index is provided, so that the user of this book who desires information on a particular topic can readily find that topic, either from the table of contents or through the index.

1.2 Scope of the book

As noted in the previous section, the book starts with a treatment of some basic statistics and probability. The reader who is familiar with this material may find it appropriate to skip this chapter. However, for those who have already learnt material of this type but not used it for a while, as well as those who are unfamiliar with the material, it is recommended that this chapter be used as a means for review, refreshment, or even first-time learning. It is then followed by a chapter that outlines some basic issues of surveys, including a glossary of terms and definitions that will be found helpful in reading the remainder of the book. A number of fundamental issues, pertinent to overall survey design, are raised in this chapter. Chapter 4 introduces the topic of the ethics of surveys, and outlines a number of ethical issues and proposes a number of basic ethical standards to which surveys of human populations should adhere. The fifth chapter of the book discusses the primary issues of designing a survey. A major underlying theme of this chapter is that there is no such thing as an 'all-purpose survey'. Experience has repeatedly demonstrated that only surveys designed with a clear purpose in mind can be successful.

The next nine chapters deal with all the various design issues in a survey, given that we have established the overall purpose or purposes of the survey. The first of these chapters (Chapter 6) discusses and describes all the current methods that are available for conducting surveys of human populations, in which people are asked to participate in the survey process. Mention is also made of some methods of dealing with other types of survey that are appropriate when the objects of the survey are observed in some way and do not participate in the process. In Chapter 7, the topic of focus groups is introduced, and potential uses of focus groups in designing quantitative and qualitative surveys are discussed. The chapter does not provide an exhaustive treatment of this topic, but does provide a significant amount of detail on how to organise and design focus groups. In Chapter 8, the design of survey instruments is discussed at some length. Illustrations of some principles of design are included, drawn principally from transport and related surveys. Chapters 9 and 10 deal with issues relating to question design and question wording, and with special issues relating to qualitative and preference surveys. Chapter 11 deals with the design of data collection procedures themselves, including such issues as item and unit nonresponse, what constitutes a complete response, the use of proxy reporting and its effects, and so forth. The seventh of this group of chapters (Chapter 12) deals with pilot surveys and pretests – a topic that is too often neglected in the design of surveys. A number of issues in designing and undertaking such surveys and tests are discussed. Chapter 13 deals with the topic of sample design and sampling issues. In this chapter, there is extensive treatment of the statistics of sampling, including estimation of sampling errors and determination of sample sizes. The chapter describes most of the available methods of sampling, including simple random samples, stratified samples, multistage samples, cluster samples, systematic samples, choice-based samples, and a number of sampling methods that are often considered but that should be avoided in most instances, such as quota samples, judgemental samples, and haphazard samples.

Chapter 14 addresses the topic of repetitive surveys. Many surveys are intended to be done as a 'one-off' activity. For such surveys, the material covered in the preceding chapters is adequate. However, there are many surveys that are intended to be repeated from time to time. This chapter deals with such issues as repeated cross-sectional surveys, panel surveys, overlapping samples, and continuous surveys. In particular, this chapter provides the reader with a means to compare the advantages and disadvantages of the different methods, and it also assists in determining which is appropriate to apply in a given situation.

Chapter 15 builds on the material in the preceding chapters and deals with the issue of survey economics. This is one of the most troublesome areas, because, as many companies have found out, it is all too easy to be bankrupted by a survey that is undertaken without a real understanding and accounting of its costs. While information on actual costs will date very rapidly, this chapter attempts to provide relative data on costs, which should help the reader estimate the costs of different survey strategies. This chapter also deals with many of the potential trade-offs in the design of surveys.

Chapter 16 delves into some of the issues relating to the actual survey implementation process. This includes issues relating to training survey interviewers and monitoring the performance of interviewers, and the chapter discusses some of the danger signs to look for during implementation. This chapter also deals with issues regarding the ethics of survey implementation, especially the relationships between the survey firm, the client for the survey, and the members of the public who are the respondents to the survey. Chapter 17 introduces a topic of increasing interest: Web-based surveys. Although this is a field that is as yet quite young, there are an increasing number of aspects that have been researched and from which the reader can benefit. Chapter 18 deals with the process of coding and data entry. A major issue in this topic is the geographic coding of places that may be requested in a survey.

Chapter 19 addresses the topics of data expansion and weighting. Data expansion is outlined as a function of the sampling method, and statistical procedures for expanding each of the different types of sample are provided in this chapter. Weighting relates to problems of survey bias, resulting either from incomplete coverage of the population in the sampling process or from nonresponse by some members of the subject population.


This is an increasingly problematic area for surveys of human populations, resulting from a myriad of issues relating to voluntary participation. Chapter 20 addresses the issue of nonresponse more completely. Here, issues of who is likely to respond and who is not are discussed. Methods to increase response rates are described, and reference is made again to the economics of the survey design. The question of computing response rates is also addressed in this chapter. This is usually the most widely recognised statistic for assessing the quality of a survey, but it is also a statistic that is open to numerous methods of computation, and there is considerable doubt as to just what it really means.

Chapter 21 deals with a range of other measures of data quality, some that are general and some, by way of example, that are specific to surveys in transport. These measures are provided as a way to illustrate how survey-specific measures of quality can be devised, depending on the purposes of the survey. Chapter 22 discusses some issues of the future of human population surveys, especially in the light of emerging technologies and their potential application and misapplication to the survey task.

Chapter 23, the final chapter in the book, covers the issues of documenting and archiving the data. This all too often neglected area of measuring data is discussed at some length. A list of headings for the final report on the survey is provided, along with suggestions as to what should be included under the headings. The issue of archiving data is also addressed at some length. Data are expensive to collect and are rarely archived appropriately. The result is that many expensive surveys are effectively lost soon after the initial analyses are undertaken. In addition, knowledge about the survey is often lost when those who were most centrally involved in the survey move on to other assignments, or leave to work elsewhere.

1.3 Survey statistics

Statistics in general, and survey statistics in particular, constitute a relatively young area of theory and practice. The earliest instance of the use of statistics is probably in the middle of the sixteenth century, and relates to the start of data collection in France regarding births, marriages, and deaths, and in England to the collection of data on deaths in London each week (Berntson et al., 2005). It was then not until the middle of the eighteenth century that publications began to appear advancing some of the earliest theories in statistics and probability. However, much of the modern development of statistics did not take place until the late nineteenth and early twentieth centuries (Berntson et al., 2005):

Beginning around 1880, three famous mathematicians, Karl Pearson, Francis Galton and Edgeworth, created a statistical revolution in Europe. Of the three mathematicians, it was Karl Pearson, along with his ambition and determination, that led people to consider him the founder of the twentieth-century science of statistics.

It was only in the early twentieth century that most of the now famous names in statistics made their contributions to the field. These included such statisticians as Karl Pearson, Francis Galton, C. R. Rao, R. A. Fisher, E. S. Pearson, and Jerzy Neyman, among many others, who all made major contributions to what we know today as the science of statistics and probability.

Survey sampling statistics is of even more recent vintage. Among the most notable names in this field of study are those of R. A. Fisher, Frank Yates, Leslie Kish, and W. G. Cochran. Fisher may have given survey sampling its birth, both through his own contributions and through his appointment of Frank Yates as assistant statistician at Rothamsted Experimental Station in 1931. In this post, Yates developed, often in collaboration with Fisher, what may be regarded as the beginnings of survey sampling in the form of experimental designs (O'Connor and Robertson, 1997). His book Sampling Methods for Censuses and Surveys was first published in 1949, and it appears to be the first book on statistical sampling designs.

Leslie Kish, who founded the Survey Research Institute at the University of Michigan, is also regarded as one of the founding fathers of modern survey sampling methods, and he published his seminal work, called Survey Sampling, in 1965. Close in time to Kish, W. G. Cochran published his seminal work, Sampling Techniques, in 1963.

Based on these efforts, the science of survey sampling cannot be considered to be much over fifty years old – a very new scientific endeavour. As a result of this relative recency, there is still much to be done in developing the topic of survey sampling, while technologies for undertaking surveys have undergone and continue to undergo rapid evolution. The fact that most of the fundamental books on the topic are about forty years old suggests that it is time to undertake an updated treatise on the topic. Hence, this book has been undertaken.

Page 33: Collecting managing and assessing data using sample surveys

6

2 Basic statistics and probability

2.1 Some definitions in statistics

Statistics is defined by the Oxford Dictionary of English Etymology as 'the political science concerned with the facts of a state or community', and the word is derived from the German statistisch. The beginning of modern statistics was in the sixteenth century, when large amounts of data began to be collected on the populations of countries in Europe, and the task was to make sense of these vast amounts of data. As statistics has evolved from this beginning, it has become a science concerned with handling large quantities of data, but also with using much smaller amounts of data in an effort to represent entire populations, when the task of handling data on the entire population is too large or expensive. The science of statistics is concerned with providing inputs to political decision making, with the testing of hypotheses (understanding what would happen if …), with drawing inferences from limited data, and, given the data limitations, with doing all these things under conditions of uncertainty.

A word used commonly in statistics and surveys is population. The population is defined as the entire collection of elements of concern in a given situation. It is also sometimes referred to as a universe. Thus, if the elements of concern are pre-school children in a state, then the population is all the pre-school children in the state at the time of the study. If the elements of concern are elephants in Africa, then the population consists of all the elephants currently in Africa. If the elements of concern are the vehicles using a particular freeway on a specified day, then the population is all the vehicles that use that particular freeway on that specific day.

It is very clear that statistics is the study of data. Therefore, it is necessary to understand what is meant by data. The word data is a plural noun from the Latin datum, meaning given facts. As used in English, the word means given facts from which other facts may be inferred. Data are fundamental to the analysis and modelling of real-world phenomena, such as human populations, the behaviour of firms, weather systems, astronomical processes, sociological processes, genetics, etc. Therefore, one may state that statistics is the process for handling and analysing data, such that useful conclusions can be drawn, decisions made, and new knowledge accumulated.


Another word used in connection with statistics is observation. An observation may be defined as the information that can be seen about a member of a subject population. An observation comprises data about relevant characteristics of the member of the population. This population may be people, households, galaxies, private firms, etc. Another way of thinking of this is that an observation represents an appropriate grouping of data, in which each observation consists of a set of data items describing one member of the population.
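To make the idea of an observation concrete, the short sketch below shows one way of representing an observation in code. It is purely illustrative: the household characteristics and values used are invented for the example rather than drawn from any survey discussed in this book.

```python
from dataclasses import dataclass

# One observation: a grouping of data items that together describe
# a single member of the subject population (here, a household).
@dataclass
class HouseholdObservation:
    household_id: int     # a label identifying the member of the population
    suburb: str           # a descriptive data item
    household_size: int   # a numeric data item
    annual_income: float  # a numeric data item
    owns_home: bool       # a yes/no data item

# A collection of such observations, one per member observed.
observations = [
    HouseholdObservation(1, "Newtown", 3, 82_000.0, True),
    HouseholdObservation(2, "Parramatta", 2, 64_500.0, False),
]
print(len(observations), "observations, each describing one household")
```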

A parameter is a quantity that describes some property of the population. Parameters may be given as numbers, proportions, or percentages. For example, the number of male pre-school children in the state might be 16,897, and this number is a parameter. The proportion of baby elephants in Africa might be 0.39, indicating that 39 per cent of all elephants in Africa at this time are babies. This is also a parameter. Sometimes, one can define a particular parameter as being critical to a decision. This would then be called a decision parameter. For example, suppose that a decision is to be made as to whether or not to close a primary school. The decision parameter might be the number of schoolchildren that would be expected to attend that school in, say, the next five years.

A sample is some subset of a population. It may be a large proportion of the population, or a very small proportion of the population. For example, a survey of Sydney households, which comprise a population of about 1,300,000, might consist of 130,000 households (a 10 per cent sample) or 300 households (a 0.023 per cent sample).

A statistic is a numerical quantity that describes a sample. It is therefore the equivalent of a parameter, but for a sample rather than the population. For example, a survey of 130,000 households in Sydney might have shown that 52 per cent of households own their own home or are buying it. This would be a statistic. If, on the other hand, a figure of 54 per cent was determined from a census of the 1,300,000 households, then this figure would be a parameter.

Statistical inference is the process of making statements about a population based on limited evidence from a sample study. Thus, if a sample of 130,000 households in Sydney was drawn, and it was determined that 52 per cent of these owned or were purchasing their homes, then statistical inference would lead one to propose that this might mean that 676,000 (52 per cent of 1,300,000) households in Sydney own or are purchasing their homes.
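As a minimal worked illustration of these definitions, the sketch below restates the Sydney home-ownership example in code, using the same figures as the text. The scaling up from the sample statistic to a population figure is the inference step only; it makes no allowance for sampling error, which is taken up in Chapter 13.

```python
population_size = 1_300_000  # households in Sydney (the population, as in the example)
sample_size = 130_000        # households actually surveyed (the sample)

sampling_fraction = sample_size / population_size  # 0.10, i.e. a 10 per cent sample

# A statistic: the proportion observed in the sample.
sample_proportion_owning = 0.52

# Statistical inference: using the sample statistic to make a statement
# about the corresponding population parameter.
inferred_owner_households = sample_proportion_owning * population_size

print(f"Sampling fraction: {sampling_fraction:.1%}")
print(f"Inferred owner or buying households: {inferred_owner_households:,.0f}")  # 676,000
```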

2.1.1 Censuses and surveys

Of particular relevance to this book is the fact that there are two methods for collecting data about a population of interest. The first of these is a census, which involves making observations of every member of the population. Censuses of the human population have been undertaken in most countries of the world for many years. There are references in the Bible to censuses taken among the early Hebrews, and later by the Romans at the time of the birth of Christ. In Europe, most censuses began in the eighteenth century, although a few began earlier than that. In the United States of America, censuses began in the late eighteenth century. Many countries undertake a census once in each decade, either in the year ending in zero or in one. Some countries, such as Australia, undertake a census twice in each decade. A census may be as simple as a head count (enumerating the total size of the population) or it may be more complex, by collecting data on a number of characteristics of each member of the population, such as name, address, age, country of birth, etc.

A survey is similar to a census, except that it is conducted on a subset of the population, not the entire population. A survey may involve a large percentage of the population or may be restricted to a very small sample of the population. Much of the science of survey statistics has to do with how one makes a small sample represent the entire population. This is discussed in much more detail in the next chapter. A survey, by definition, always involves a sample of the population. Therefore, to speak of a 100 per cent sample is contradictory; if it is a sample, it must be less than 100 per cent of the population.

2.2 Describing data

One of the first challenges for statistics is to describe data. Obviously, one can provide a complete set of data to a decision maker. However, the human mind is not capable of utilising such information efficiently and effectively. For example, a census of the United States would produce observations on over 300 million people, while one of India would produce observations of over 1 billion people. A listing of those observations represents something that most human beings would be incapable of utilising. What is required, then, is to find some ways to simplify and describe data, so that useful information is preserved but the sheer magnitude of the underlying data is hidden, thereby not distracting the human analyst or decision maker.
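The small sketch below is one illustration of this kind of simplification; the data are randomly generated rather than real census figures. A long listing of individual records is reduced to a handful of frequencies and proportions that can be taken in at a glance, which is exactly the sort of summary developed in the remainder of this chapter.

```python
import random
from collections import Counter

random.seed(1)

# A long 'listing' of one characteristic for many individuals (invented data).
eye_colours = random.choices(
    ["brown", "blue", "green", "other"],
    weights=[55, 30, 10, 5],
    k=100_000,
)

# Describing the data: a few frequencies and proportions in place of 100,000 rows.
counts = Counter(eye_colours)
for colour, count in counts.most_common():
    print(f"{colour:>6}: {count:>6} ({count / len(eye_colours):.1%})")
```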

Before examining ways in which data might be presented or described, such that the mind can grasp the essential information contained therein, it is important to understand the nature of different types of data that can be collected. To do this, it seems useful to consider the measurement of a human population, especially since that is the main topic of the balance of this book.

In mathematical statistics, we refer to things called variables. A variable is a characteristic of the population that may take on differing or varying values for different members of the population. Thus, variables that could be used to describe members of a human population may include such characteristics as name, address, age or date of birth, place of birth, height, weight, eye colour, hair colour, and shoe size. Each of these characteristics provides differing levels of information that can be used in various ways. We can divide these characteristics into four different types of scales, a scale representing a way of measuring the characteristic.

2.2.1 Types of scales

Nominal scales

Each person in the population has a name. The person's name represents a label by which that person can be identified, but provides little other information. Names can be ordered alphabetically or can be ordered in any of a number of arbitrary ways, such as the order in which data are collected on individuals. However, no information is provided by changing the order of the names. Therefore, the only thing that the name provides is a label for each member of the population. This is called a nominal scale. A nominal scale is the least informative of the different types of scales that can be used to measure characteristics, but its lack of other information does not render it of less value. Other examples of nominal data are the colours of hair or eyes of the members of the population, bus route numbers, the numbers assigned to census collection districts, names of firms listed on a country's stock exchange, and the names of magazines stocked by a newsagency.

Ordinal scales

Each person in the population has an address. The address will usually include a house number and a street name, along with the name of the town or suburb in which the house is located. The address clearly also represents a label, just as does the person's name. However, in the case of the address, there is more information provided. If the addresses are sorted by number and by street, in most places in the world this will provide additional information. These sorted addresses will actually help an investigator to locate each home, in that it is expected that the houses are arranged in numerical order along the street, and probably with odd numbers on one side of the street and even numbers on the other side. As a result, there is order information provided in the address. It is, therefore, known as an ordinal scale. However, if it is known that one person lives at 27 Main Street, and another person lives at 35 Main Street, this does not indicate how far apart these two people live. In some countries, they could be next door to each other, while in others there might be three houses between them or even seven houses between them (if numbering goes down one side of the street and back on the other). The only thing that would be known is that, starting at the first house on Main Street, one would arrive at 27 before one would arrive at 35. Therefore, order is the only additional information provided by this scale. Other examples of ordinal scales would be the list of months in the year, censor ratings of movies, and a list of runners in the order in which they finished a race.

Interval scales
Each person in the population has a shoe size. For the purposes of this illustration, the fact that there are slight inconsistencies in shoe sizes between manufacturers will be ignored, and it will be assumed, instead, that a man’s shoe size nine is the same for all men’s shoes, for example. Shoe size is certainly a label, in that a shoe can be called a size nine or a size twelve, and so forth. This may be a useful way of labelling shoes for a lot of different reasons. In addition, there is clearly order information, in that a size nine is smaller than a size twelve, and a size seven is larger than a size five. Furthermore, within each of children’s, men’s, and women’s shoes, each increase in a size represents a constant increase in the length of the shoe. Thus, the difference between a size nine and a size ten shoe for a man is the same as the difference between a size eight and a size nine, and so on for any two adjacent numbers. In other words, there is a constant interval between each shoe size. On the other hand, there is no natural zero in this scale (in fact, a size of zero generally does not exist), and it is not true that a size five is half the length of a size ten. Therefore, shoe size may be considered to be an interval scale. Women’s dress sizes in a number of countries also represent an interval scale, in which each increment in dress size represents a constant interval of increase in size of the dress, but a size sixteen dress is not twice as large as a size eight. In many cases, the sizing of an item of clothing as small, medium, large, etc. also represents an interval scale. Another example of an interval scale is the normal scale of temperature in either degrees Celsius or degrees Fahrenheit. An interval of one degree represents the same increase or decrease in temperature, whether it is between 40 and 41 or 90 and 91. However, we are not able to state that 60 degrees is twice as hot as 30 degrees. There is also not a natural zero on either the Celsius scale or the Fahrenheit scale. Indeed, the Celsius scale sets the temperature at which water freezes as 0, but the Fahrenheit scale sets this at 32, and there is not a particular physical property of the zero on the Fahrenheit scale.

Ratio scales
Each member of the population has a height and a weight. Again, each of these two measures could be used as a label. We might say that a person is 180 centimetres tall, or weighs 85 kilograms. These measures also contain ordinal information. We know that a person who weighs 85 kilograms is heavier than a person who weighs 67 kilograms. Furthermore, we know that these measures contain interval information. The difference between 179 centimetres and 180 centimetres is the same as the difference between 164 centimetres and 165 centimetres. However, there is even more information in these measures. There is ratio information. In other words, we know that a person who is 180 centimetres tall is twice as tall as a person who is 90 centimetres tall, and that a person weighing 45 kilograms is only half the weight of a person weighing 90 kilograms. There are two important new pieces of information provided by these measures. First, there is a natural zero in the measurement scale. Both weight and height have a zero point, which represents the absence of weight or the absence of height. Second, there is a multiplicative relationship among the measures on the scale, not just an additive one. Therefore, both weight and height are described as ratio scales. Other examples of ratio scales are distance or length measures, measures of speed, measures of elapsed time, and so forth. However, it should be noted that measurement of clock time is interval-scaled (there is no natural zero, and 5 a.m. is not a half of 10 a.m.), while elapsed time is ratio-scaled, because zero minutes represents the absence of any elapsed time, and twenty minutes is twice as long as ten minutes, for example.

Measurement scales
The preceding sections have outlined four scales of measurement: nominal, ordinal, interval, and ratio. They have also demonstrated that these four scales are themselves an ordinal scale, in which the order, as presented in the preceding sentence, indicates increasing information content. Furthermore, each of the scale types, as ordered above, contains the information of the previous type of scale, and then adds new information content. Thus an ordinal scale also has nominal information, but adds to that information on order; an interval scale has both nominal and ordinal information, but adds to that a consistent interval of measurement; and a ratio scale contains all nominal, ordinal, and interval information, but adds ratio relationships to them.

There are two other ways in which scales can be described, because most scales can be measured in different ways. The first of these relates to whether the scale is continuous or discrete. A continuous scale is one in which the measurement can be made to any degree of precision desired. For example, we can measure elapsed time to the nearest hour, or minute, or second, or nanosecond, etc. Indeed, the only thing that limits the precision by which we can measure this scale is the precision of our instruments for measurement. However, there is no natural limit to precision in such cases. This is a continuous scale. A discrete scale, on the other hand, cannot be subdivided beyond a certain point. For example, shoe sizes are a discrete scale. Many shoe manufacturers will provide shoes in half-size increments, while others will provide them only in whole-size increments. Subdivision below half sizes simply is not done. Similarly, any measurement that involves counting objects, such as counting the number of members of a population, is a discrete scale. We cannot have fractional people, fractional houses, or fractional cars, for example.

The second descriptor of a scale is whether it is inherently exact or approximate. By their nature, all continuous scales are approximate. This is so because we can always increase the precision of measurement. Generally, numbers obtained from counting are exact, unless the counting mechanism is capable of error. However, other discrete scales may be approximate or exact. In most clothing or shoe sizes, the measure would be considered approximate, because sizes often differ between manufacturers, and between countries. A size nine shoe is not the same size in the United States and in the United Kingdom, for example, nor is it necessarily the same size from two different shoe makers in the same country.

It is important to recognise what type of a scale we are dealing with, when information is measured on scales, because the type of scale will also often either dictate how the information can be presented or restrict the analyst to certain ways of presentation. Similarly, whether the measure is discrete or continuous will also affect the presentation of the data, as will, in some cases, whether the data are approximate or exact.

2.2.2 Data presentation: graphics

It is appropriate to start with some simple rules about graphical presentations. There are four principal types of graphical presentation: scatter plots, pie charts, histograms or bar charts, and line graphs.

A scatter plot is a plot of the frequency with which specific values of a pair of variables occur in the data. Thus, the X-axis of the plot will contain the values of one of the variables that are found in the data, and the Y-axis will contain the values of the other variable. As such, any type of measure can be presented on a scatter plot. However, if all values occur only once – i.e., are unique to an observation – then a scatter plot is of no particular interest. Therefore, although any data can theoretically be plotted on a scatter plot, data that represent unique values, or continuous data, which will probably have frequencies of only one or two at most for any pair of values, will not be illuminated by a scatter plot.

An example of a scatter plot is provided in Figure 2.1, which shows a scatter plot of odometer readings of cars versus the model year of the vehicle. The Y-axis is a ratio-scaled variable, and the X-axis is an interval-scaled variable. The scatter plot indicates that there probably is a relationship between odometer readings and model year, such that the higher the model year value, the lower the odometer reading, as would be expected. This is a useful scatter plot.

Figure 2.2 illustrates a scatter plot of two nominal-scaled variables: fuel type versus body type. It is not a very useful illustration of the data. First, we cannot tell how many points fall at each combination of values. Second, all it really tells us is that there are no taxis (body type 5) in this data set, that all vehicle types use petrol (fuel type 1), that all except motorcycles (body type 6) use diesel (fuel type 2), and that only cars (body type 1), four-wheel drive (4WD) vehicles (body type 2), and utility/van/panel vans (body type 3) use dual fuel (fuel type 4). This illustrates that nominal data – both fuel type and body type are nominal scales – may not produce a useful scatter plot.

Figure 2.1 Scatter plot of odometer reading versus model year

Figure 2.2 Scatter plot of fuel type by body type

A pie chart is a circle that is divided into segments representing specific values in the data, with the length of the segment along the circumference of the circle indicating how frequently the value occurs in the data. Again, pie charts can be used with any type of data, when the information to be presented is the frequency of occurrence. However, they will generally not work with continuous data, unless the data are first grouped and converted to discrete categories. An example of a pie chart is provided in Figure 2.3. This shows that the pie chart works well for nominal data, in this case the vehicle body type from a survey of households.

Figure 2.4 shows a pie chart for category data – i.e., discrete data. The data are reported household incomes from a survey of households. The categories were those used in the survey. Income, being measured in dollars and with a natural zero, is actually a ratio scale. In the categories collected, income is a ratio-scaled discrete measure. Again, the pie chart provides a good representation of the data.
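To make the construction concrete, the short Python fragment below is an illustrative sketch (added here, not the author’s code) that draws a pie chart of this kind with the widely used matplotlib library, using the vehicle body type frequencies reported later in Table 2.1; the variable names are arbitrary.

```python
import matplotlib.pyplot as plt

# Vehicle body type frequencies (see Table 2.1)
body_types = ["Car", "4WD", "Utility vehicle", "Truck", "Taxi", "Motorcycle", "Other"]
counts = [1191, 96, 134, 10, 0, 19, 7]

# A pie chart shows each category's share of the total; it suits these data
# because body type is a nominal scale and only frequencies are being displayed.
plt.pie(counts, labels=body_types, autopct="%1.1f%%")
plt.title("Vehicle body types")
plt.show()
```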

Figure 2.3 Pie chart of vehicle body types

Figure 2.4 Pie chart of household income groups

A histogram or bar chart is used for presenting discrete data. Such data will be interval- or ratio-scaled data. Histograms can be constructed in several different ways. When presenting complex information, bars can be stacked, showing how different classes of items add up to a total within each bar. Bars can also be plotted so that each bar touches the next, or they may be plotted with gaps between. There is no particular rule for plotting bars in this manner, and it is more a matter of personal preference. Examples of two types of histograms are shown in Figures 2.5 and 2.6. Histograms can also be used to indicate the frequency of occurrence of specific values of both nominal and ordinal data. In this case, it is preferred that the bars do not touch, the spaces indicating that the scale is not interval or ratio.

Figure 2.5 shows ratio-scaled discrete data on household incomes, this time in a two-dimensional histogram or bar chart. Note that the bars touch, indicating the underlying continuous nature of the data. Figure 2.6 shows a histogram of nominal data frequencies of vehicle type for household vehicles. Two instructive observations may be made of this histogram. First, the dominance of the car tends to make the histogram somewhat less useful. In contrast, the pie chart really communicated the information better. Second, the bars do not touch, in this case clearly indicating the discrete categories of a nominal scale.

Figure 2.5 Histogram of household income

Figure 2.6 Histogram of vehicle types

The fourth type of chart is a line graph. This is much more restricted in application than the other types of charts. A line graph should be used only with continuous data, whether interval- or ratio-scaled. It is inappropriate to use line graphs to present data that are discrete, or data that are nominal or ordinal in nature. An example of a line graph is shown in Figure 2.7.

Temperature is inherently a continuous measurement. It is therefore appropriate to use a line graph to present these data. This case demonstrates the use of two lines on the same graph. This allows one not only to see the maximum and minimum temperatures, but also to deduce that there may be a relationship between the two.

A special type of line graph is an ogive. An ogive is a cumulative frequency line. Even when the original data are discrete in nature, the ogive can be plotted as a line, although a cumulative histogram is preferable. Generally, it makes sense to create cumulative graphs only of interval- or ratio-scaled data, although the data may be either discrete or continuous. Figure 2.8 shows an ogive for the income data used in Figure 2.5.

Figure 2.7 Line graph of maximum and minimum temperatures for thirty days

The ogive is essentially an S-shaped curve, in that it starts with a line that is along the X-axis and ends with a line that is parallel to the X-axis, with the line climbing more or less continuously from the X-axis at the left to the top of the graph at the right. A special case of the ogive is a relative ogive, in which the proportions or percentage of observations are used, not the absolute counts, as in Figure 2.8. A relative ogive for the same data will have the same shape, but the scale of the Y-axis changes, as shown in Figure 2.9.

A step chart, which is the discrete version of an ogive, could also be drawn for the income data. It can use either the count, the proportion, or the percentage for the Y-axis. A step chart is shown in Figure 2.10.
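The cumulative counts and proportions that underlie an ogive or step chart are simple running totals. The fragment below is a minimal Python sketch (added for illustration, not part of the original text), using the valid household income class counts reported later in Table 2.2.

```python
from itertools import accumulate

# Valid household income class counts, lowest class first (see Table 2.2)
counts = [28, 2, 11, 67, 155, 97, 129, 72, 105, 133, 101]

cum_counts = list(accumulate(counts))        # running totals for the ogive
total = cum_counts[-1]                       # 900 valid observations
cum_props = [c / total for c in cum_counts]  # Y-axis of a relative ogive

print(cum_counts)                            # ends at 900
print([round(p, 3) for p in cum_props])      # climbs from about 0.031 to 1.0
```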

Figure 2.8 Ogive of cumulative household income data from Figure 2.5

Figure 2.9 Relative ogive of household income

2.2.3 Data presentation: non-graphical

Graphical presentations of data are very useful. As can be seen in the preceding section, the adage that ‘a picture is worth a thousand words’ is clearly interpretable as ‘a picture is worth a thousand numbers’. Indeed, one can grasp rather readily from the graphs what is potentially a large amount of data, which the human mind would have difficulty grasping as raw data. However, pictures are not the only ways in which data can be presented for easier assimilation. There are also numeric ways to describe data. Ideally, what one would like would be some summary variables that would give one an idea about the magnitude of each variable in the data, the dispersion of values, the variability of the values, and the symmetry or lack of symmetry in the data.

Measures of magnitude
These measures could include such concepts as frequencies of occurrence of particular values in the data, proportions of the data that possess a particular value, cumulative frequencies or proportions, and some form of average value. Each of these measures is considered separately.

Frequencies and proportions
Frequencies are simply counts of the number of times that a particular value occurs in the data, while proportions are frequencies divided by the total number of observations in the data. Table 2.1 shows the frequencies of occurrence of the different vehicle types used in the earlier illustrations of graphical presentations.

For nominal data, cumulative frequencies or proportions are not sensible, because the scale does not contain any ordered information. Thus, to produce a cumulative frequency distribution for the entries in Table 2.1 would not make sense. Moreover, it should be noted that frequencies and proportions are generally sensible only for discrete data. However, if, in continuous data, there are large numbers of observations with the same value and the data set is large, then frequencies and proportions may possibly be useful. This, for example, might be the case for national data derived from a census.
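As a small added illustration (not the author’s code), frequencies and proportions of a nominal variable can be tallied in Python as follows; the short list of body types used here is purely hypothetical.

```python
from collections import Counter

# A hypothetical list of observed vehicle body types (nominal data)
observations = ["Car", "Car", "4WD", "Car", "Motorcycle", "Car", "Utility vehicle", "4WD"]

freq = Counter(observations)                  # frequency of each value
n = sum(freq.values())                        # total number of observations
proportions = {k: v / n for k, v in freq.items()}

for body_type, count in freq.most_common():
    print(f"{body_type:16s} {count:3d} {proportions[body_type]:.3f}")
```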

Figure 2.10 Relative step chart of household income


For the income data plotted in Figure 2.5, both frequencies or proportions, and cumulative frequencies or proportions, make sense. These are shown in Table 2.2. From the information in Table 2.2 it is possible to grasp several things about the data on household income, such as the fact that the largest group is the one with $15,600–$25,999 annual income, followed by $78,000–$103,999 and $36,400–$51,999. It can also be seen that 16 per cent of households would not report their income. When the non-reported income is excluded, one can see that the proportions change substantially, and that more than a half of the population have incomes below $52,000. In effect, this table has summarised over 1,000 pieces of data and made them comprehensible, by presenting just a handful of numbers.

In the case of the income data shown in Table 2.2, the groups were defined in the survey itself. However, one may also take data that are collected as continuous measures and group them, both to display as a histogram and to present them in a table, similar to Table 2.2. In such a case, it is necessary to know into how many categories to group the data.

Number of classes or categories
Sturges’ rule (Sturges, 1926) provides guidance on how to determine the maximum number of classes into which to divide data, whether grouping already discrete data or continuous data. There are a number of elements to the rule.

(1) Interval classes must be inclusive and non-overlapping.
(2) Intervals should usually be of equal width, although the first and last interval may be open-ended for some types of data.
(3) The number of classes depends on the number of observations in the data, according to equation (2.1):

k = 1 + 3.322 × (log10 n)    (2.1)

where k = the number of classes and n = the number of observations.
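Equation (2.1) is easily computed. The following minimal Python sketch (an added illustration, not the author’s code) reproduces the two worked cases discussed in the text.

```python
import math

def sturges_classes(n: int) -> int:
    """Maximum number of classes suggested by Sturges' rule, k = 1 + 3.322 log10(n)."""
    k = 1 + 3.322 * math.log10(n)
    return math.floor(k)   # truncate, as in the worked examples in the text

print(sturges_classes(900))   # 10 (10.81 truncated)
print(sturges_classes(30))    # 5  (5.91 truncated)
```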

Suppose, for example, that the income data had been collected as actual annual income, and not in income classes. One might then ask the question as to how many classes would be the maximum that could be used for income. This would be obtained by substituting 900 into the above equation, because one should not include the missing data. This would result in a value for k of 10.81, which would be truncated to 10. Therefore, Sturges’ rule would indicate that the maximum number of intervals that should be used for these data is ten. The data were actually collected in eleven classes. Therefore, this would suggest that the design was marginally appropriate and there should not be a need to group together any of the classes with the number of valid observations obtained. However, the intervals used violate Sturges’ rule in one respect, in that they are not of equal size. This is not uncommon with income grouping, where it is often the case, as here, that the lower incomes are divided into smaller classes than the higher incomes. This is generally done to keep the population of the classes more nearly equal.

Table 2.1 Frequencies and proportions of vehicle types

Vehicle type       Frequency   Proportion
Car                1,191       0.817
4WD                96          0.066
Utility vehicle    134         0.092
Truck              10          0.007
Taxi               0           0.000
Motorcycle         19          0.013
Other              7           0.005
Total              1,457       1.000

Table 2.2 Frequencies, proportions, and cumulative values for household income

Income range        Frequency   Proportion   Cumulative   Cumulative proportion   Cumulative proportion
                                             frequency    (including missing)     (excluding missing)
None                28          0.0262       28           0.0262                  0.0311
$1–$4,159           2           0.0019       30           0.0280                  0.0333
$4,160–$8,319       11          0.0103       41           0.0383                  0.0456
$8,320–$15,599      67          0.0626       108          0.1009                  0.1200
$15,600–$25,999     155         0.1449       263          0.2458                  0.2922
$26,000–$36,399     97          0.0907       360          0.3364                  0.4000
$36,400–$51,999     129         0.1206       489          0.4570                  0.5433
$52,000–$62,399     72          0.0673       561          0.5243                  0.6233
$62,400–$77,999     105         0.0981       666          0.6224                  0.7400
$78,000–$103,999    133         0.1243       799          0.7467                  0.8878
$104,000+           101         0.0944       900          0.8411                  1.0000
Don’t know          1           0.0009       901          0.8421
Refused             169         0.1579       1,070        1.0000
Total               1,070       1.0000

Suppose that the temperature data used in Figure 2.7 were to be grouped into classes. The raw data are shown in Table 2.3. There are thirty observations of daily maximum and minimum temperatures in this data set. Applying Sturges’ rule, the value of k is found to equal 5.91, suggesting that five intervals would be the most that could be used. For the high temperatures, the range is twenty-two to thirty-three. If this range is divided into groupings of two degrees, this would produce six intervals, while using three degrees would produce four intervals. In this case, given that k was found to be close to six, it would be best to use six intervals of two degrees per interval. For the low temperatures, the range is from sixteen to twenty-three. Grouping these also into groups of two degrees in size, which is preferable when one wants to look at both minimum and maximum temperatures on the same graph, or in side-by-side graphs, would result in four groups. Because this is less than the maximum of six, it is acceptable. In this case, grouping is sensible only if what one wants to do is to create a histogram of the frequency with which various maximum and minimum temperatures occur. Such a frequency table is shown in Table 2.4.

There is a second variant of Sturges’ rule for binary data. This variant defines the number of classes, as shown in equation (2.2):

k = 1 + log2(n) (2.2)

Table 2.3 Minimum and maximum temperatures for a month (°C)

Day          Maximum temperature   Minimum temperature
Sunday       23                    18
Monday       26                    19
Tuesday      25                    19
Wednesday    27                    17
Thursday     32                    22
Friday       29                    21
Saturday     26                    20
Sunday       27                    19
Monday       30                    22
Tuesday      31                    21
Wednesday    33                    23
Thursday     24                    20
Friday       25                    18
Saturday     27                    19
Sunday       28                    20
Monday       32                    22
Tuesday      24                    18
Wednesday    26                    16
Thursday     25                    17
Friday       22                    17
Saturday     28                    19
Sunday       27                    20
Monday       28                    20
Tuesday      29                    21
Wednesday    28                    20
Thursday     26                    19
Friday       27                    20
Saturday     30                    21
Sunday       29                    20
Monday       31                    23


When n is less than 1,000, the two equations result in approximately the same number of classes. For example, for 900 cases, this second formula gives k equal to 10.81, which is the identical result. For the thirty-observation case, the second formula gives 5.91, which is almost identical. It has been pointed out in various places (see Hyndman, 1995) that Sturges’ rule is good only for samples less than 200, and that it is based on a flawed argument. Nevertheless, it is still the standard used by most statistical software packages. There are two other rules that may be used, and these are discussed later in this chapter, because they utilise statistical measures that have not been discussed at this point. All the rules produce similar results for small samples, but diverge as the sample size becomes increasingly large. The other possible problems with Sturges’ rule are, first, that it may lead to over-smoothed data and, second, that its requirement for equal intervals may hide important information.

Stem and leaf displays
Another way to display discrete data is to use a stem and leaf display. Essentially, the stem is the most aggregate level of grouping of the data, while the leaf is made up of more disaggregate data. Table 2.5 shows some household data when the actual income was collected, rather than having people respond to pre-defined classes.

A stem and leaf display would be constructed, for example, by using the tens of thousands of dollars as the stem and the thousands as the leaf. This, like a histogram, provides a picture of the distribution of the data, as shown in Figure 2.11. This graphic shows clearly the nature of the distribution of incomes.
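The construction just described can be sketched in a few lines of Python (an added illustration, not the book’s own code); it uses a handful of the incomes from Table 2.5, with tens of thousands of dollars as the stem and thousands as the leaf.

```python
from collections import defaultdict

# A few of the annual incomes from Table 2.5 (dollars)
incomes = [22358, 24679, 37455, 46223, 22790, 8917, 12135, 100563, 3877, 55341]

stems = defaultdict(list)
for income in sorted(incomes):
    stem = income // 10000            # tens of thousands of dollars
    leaf = (income % 10000) // 1000   # thousands of dollars
    stems[stem].append(leaf)

for stem in sorted(stems):
    leaves = " ".join(str(leaf) for leaf in stems[stem])
    print(f"{stem:2d} | {leaves}")
```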

Stem   Leaf
0      3 4 4 6 6 7 9 9
1      2 3 3 4 6 6 8 9 9
2      0 0 1 2 2 3 4 4 5 6 7
3      1 3 4 6 7 7 8 9 9
4      1 4 5 6 7 7
5      0 4 5 5 7
6      6 7 8 9
7      0 1 2 6
8      9
9      1 6
10     1

Figure 2.11 Stem and leaf display of income

Table 2.5 Disaggregate household income data

Household   Annual     Household   Annual     Household   Annual
number      income     number      income     number      income
1           $22,358    21          $9,226     41          $70,135
2           $24,679    22          $96,435    42          $100,563
3           $37,455    23          $55,341    43          $3,877
4           $46,223    24          $89,367    44          $2,954
5           $22,790    25          $12,984    45          $6,422
6           $38,656    26          $21,444    46          $16,351
7           $49,999    27          $36,339    47          $19,222
8           $76,450    28          $20,105    48          $56,778
9           $53,744    29          $44,446    49          $41,237
10          $18,919    30          $34,288    50          $24,892
11          $44,881    31          $25,678    51          $31,084
12          $26,570    32          $4,122     52          $68,008
13          $12,135    33          $7,390     53          $71,039
14          $46,990    34          $65,809    54          $13,133
15          $37,855    35          $47,001    55          $18,259
16          $32,568    36          $23,874    56          $14,249
17          $8,917     37          $39,007    57          $36,898
18          $19,772    38          $67,445    58          $91,045
19          $72,455    39          $54,890    59          $6,341
20          $69,078    40          $22,378    60          $15,887

Table 2.4 Grouped temperature data

Temperature range   Number of highs   Number of lows   Cumulative number of highs   Cumulative number of lows
16–17               0                 4                0                            4
18–19               0                 9                0                            13
20–21               0                 12               0                            25
22–23               2                 5                2                            30
24–25               5                 0                7                            30
26–27               9                 0                16                           30
28–29               7                 0                23                           30
30–31               4                 0                27                           30
32–33               3                 0                30                           30

Central measures of data
There are at least six different averages that can be computed, which provide different ways of assessing the central value of the data. The six that are discussed here are:

(1) arithmetic mean;
(2) median;
(3) mode;
(4) geometric mean;
(5) harmonic mean; and
(6) quadratic mean.

The arithmetic mean
The arithmetic mean is simply the total of all the values in the data divided by the number of elements in the sample that provided valid values for the statistic.

Mathematically, it is usually written as equation (2.3):

\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}    (2.3)

In words, the mean of the variable x is equal to the sum of all the values of x in the data set, divided by the number of observations, n. It is important to note that values of x that contribute to the estimation of the mean are only those that are valid, and that n is also a count of the valid observations. Thus, in the income data we used previously, the missing values would be removed, and a mean, if it was calculated, would be based on 900 observations, not on the 1,070 survey returns.

The sample mean – i.e., the value of the mean estimated from a sample of observations – is normally denoted by the symbol x̄, while the true mean from the population is denoted by the Greek letter μ. It is a convention in statistics to use Greek letters to denote true population values, and the equivalent Roman letter to denote the sample estimate of that value. Put another way, the parameter is denoted by a Greek letter, and the statistic by the equivalent Roman letter.

Using the temperature data from Table 2.3, the sum of the maximum temperatures is found to be 825, which yields an arithmetic mean of 27.5°C. Similarly, the sum of the minimum temperatures is 591, which gives an arithmetic mean of 19.7°C. In each of these cases there were thirty valid observations, so the total or sum was divided by thirty to give the arithmetic mean. Similarly, using the income data from Table 2.5, the sum of the incomes is $2,248,437. With sixty valid observations of income, the arithmetic mean of income is $37,474.
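These calculations are easily checked; the following short Python fragment (an added illustration, not the author’s code) reproduces the mean of the maximum temperatures.

```python
# Maximum daily temperatures from Table 2.3 (°C)
max_temps = [23, 26, 25, 27, 32, 29, 26, 27, 30, 31, 33, 24, 25, 27, 28,
             32, 24, 26, 25, 22, 28, 27, 28, 29, 28, 26, 27, 30, 29, 31]

# Arithmetic mean: sum of the valid observations divided by their number
mean_max = sum(max_temps) / len(max_temps)
print(mean_max)   # 27.5
```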

The arithmetic mean (usually referred to simply as the mean, because it is the mean most often used) can also be understood by considering it as being the centre of gravity of the data. This is shown in Figure 2.12. In each of the two distributions shown in the figure, the fulcrum or balance point represents the mean. In the distribution on the left the mean is at thirteen, while in the one on the right it is at fourteen.

Figure 2.12 illustrates two important facts. First, the symmetry or lack of it in a distribution of values will affect where the mean falls. Second, the arithmetic mean is influenced by extreme values. If the value of twenty were removed from the data distribution on the right of Figure 2.12, the mean would shift to thirteen. On the other hand, if the extreme value had been at twenty-five instead of twenty, the mean value would shift to 14.5. These changes come about by changing one out of nine observations, suggesting some substantial sensitivity of the mean to a relatively small change in the data.

The median
The median is the central value of the data, or it can be defined to be the value for which half the data are above the value and half are below. For any data, the median value is most easily found by ordering the data in increasing or decreasing value and then finding the midpoint value. For the temperature data, this is seen fairly easily in the grouped data of Table 2.4. For the maximum temperature, the dividing point between the first fifteen values and the last fifteen values is found at 27°C, which is therefore the median value. Similarly, for the minimum temperatures, the median is 20°C. Note that the median must be a whole number of degrees in these cases, because the data are reported only in whole numbers of degrees. Note that the medians of each of these two variables are not exactly the same as the means, although they are very close.

For already grouped data, the median must be a range. Looking back at the income data in Table 2.2, and using the cumulative proportions with the missing data excluded, it can be seen that the median falls in the interval $36,400–$51,999. For the income data in Table 2.5, the median can be an actual value. However, because there is an even number of observations, the median actually falls between the thirtieth and the thirty-first observations, so between $32,568 and $34,288. By interpolation, the median would be $33,428. Comparing this to the mean, it is noted that the mean is quite a bit higher at $37,474.
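A brief Python sketch of the median calculation (added here as an illustration, using the standard statistics module):

```python
import statistics

# Maximum temperatures from Table 2.3: both central values are 27, so the median is 27°C
max_temps = [23, 26, 25, 27, 32, 29, 26, 27, 30, 31, 33, 24, 25, 27, 28,
             32, 24, 26, 25, 22, 28, 27, 28, 29, 28, 26, 27, 30, 29, 31]
print(statistics.median(max_temps))        # 27.0

# With an even number of observations the median interpolates halfway between the two
# central values; for the Table 2.5 incomes these are $32,568 and $34,288
print(statistics.median([32568, 34288]))   # 33428.0
```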

Figure 2.12 Arithmetic mean as centre of gravity
Source: Ewart, Ford, and Lin (1982: 38).

The mode
The mode is the most frequently occurring value in a set of observations. For the maximum temperature data, the mode occurs at 27°C, for which there are five observations. For the minimum temperature, the mode occurs at 20°C, for which there are eight days on which this temperature occurs. For the income data in Table 2.2, the mode is the $15,600–$25,999 class. This is quite different from the median. For the income data in Table 2.5, there is no mode for the ungrouped data, because each value is unique. To find a mode, it is necessary to group the data. This has, effectively, been done in the stem and leaf display, from which it can be determined that the mode is in the range of $20,000–$24,999, which contains eight households. Using classes of $5,000 for the ranges, there is no other range that has as many households in it. If ranges of $10,000 were used, then the mode would be in the range $20,000–$29,999.

Unlike all the other mean values, there may be more than one mode. In fact, the limit on the number of modes that can occur is the number of observations, if each value occurs only once in the data set. However, this is not a useful result, and data in which each value occurs only once, as in the income data, should be grouped to provide more useful information. Data may be distributed bimodally or trimodally, or more. This means that there will be multiple peaks in the data distribution. Figure 2.13 shows a possible bimodal distribution of daily maximum temperatures. There are two modes in the underlying data, one at 23°C and one at 27°C. Knowing that there are two modes in a data set provides information on the appearance of the underlying distribution, as shown in Figure 2.13.

The geometric mean
The geometric mean is similar to the arithmetic mean, except that it is determined from the product of all the values, not the sum, and the nth root of the product is taken, rather than dividing by n. Thus, the geometric mean is written as shown in equation (2.4):

\bar{x}_g = \sqrt[n]{\prod_{i=1}^{n} x_i}    (2.4)

Figure 2.13 Bimodal distribution of temperatures

It is most useful when looking at growth over time periods. For example, suppose an individual had investments in a mutual fund over a period of twelve years, and the fund experienced the growth rates shown in Table 2.6. The question one might like to ask is: ‘What is the average annual growth rate over the twelve years?’ If one were to estimate this using the arithmetic mean, one would obtain the answer that the average growth rate is 5.85 per cent. However, the geometric mean produces a value of 5.77 per cent. Although this difference does not appear to be numerically large, it has a significant effect on calculations of the value of the investment at the end of twelve years. If one assumes that the actual initial investment was $10,000, then the actual fund would stand at $19,609.82. This is exactly the result that would be obtained by using the geometric mean. However, the arithmetic mean would estimate the fund as being $19,782.92 – a difference of $173.10.

The arithmetic mean is obtained from equation (2.5):

\bar{x} = (1.052 + 1.067 + 1.103 + 1.139 + 1.116 + 1.065 + 1.059 + 1.038 + 1.002 + 1.021 + 1.016 + 1.024) / 12 = 12.702 / 12 = 1.0585    (2.5)

This produces an estimated annual average growth rate of 5.85 per cent. Using this to estimate the actual value of the fund at the end of twelve years, assuming an initial investment of $10,000, one would calculate equation (2.6):

V_{12} = $10,000 × (1.0585)^12 = $10,000 × 1.978292 = $19,782.92    (2.6)

The geometric mean is obtained from equation (2.7):

\bar{x}_g = (1.052 × 1.067 × 1.103 × 1.139 × 1.116 × 1.065 × 1.059 × 1.038 × 1.002 × 1.021 × 1.016 × 1.024)^(1/12) = (1.96058)^(1/12) = 1.0577    (2.7)

This produces the estimated annual geometric mean growth rate of 5.77 per cent. To estimate the value of the fund at the end of twelve years, one estimates in the same manner as for the arithmetic mean, as in equation (2.8):

V_{12} = $10,000 × (1.0577)^12 = $10,000 × 1.960982 = $19,609.82    (2.8)

Table 2.6 Growth rates of an investment fund, 1993–2004

Year   Growth (percentage change)
1993   5.20
1994   6.70
1995   10.30
1996   13.90
1997   11.60
1998   6.50
1999   5.90
2000   3.80
2001   0.20
2002   2.10
2003   1.60
2004   2.40

The reader can readily verify that this is identical to the amount calculated by applying each year’s growth rate, compounded, to the amount of the fund at the end of the previous year. Note that both the arithmetic and geometric means are obtained by using the compounding formula (1 + growth rate) to obtain the average rate of growth.
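The growth-rate calculations in equations (2.5) to (2.8) can be reproduced with the short Python sketch below (an added illustration, not the author’s code; it uses math.prod, available from Python 3.8).

```python
import math

# Annual growth rates from Table 2.6, expressed as percentages
rates = [5.20, 6.70, 10.30, 13.90, 11.60, 6.50, 5.90, 3.80, 0.20, 2.10, 1.60, 2.40]
factors = [1 + r / 100 for r in rates]                   # compounding factors (1 + growth rate)

arithmetic = sum(factors) / len(factors)                 # 1.0585
geometric = math.prod(factors) ** (1 / len(factors))     # about 1.0577

# Only the geometric mean reproduces the true final value of a $10,000 investment
print(10000 * math.prod(factors))    # about 19,609.8
print(10000 * geometric ** 12)       # essentially the same value
print(10000 * arithmetic ** 12)      # about 19,782.9, an overestimate
```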

The harmonic mean
The harmonic mean is obtained by summing the inverse of the values for each observation, taking the inverse of this value, and multiplying the result by the number of observations. It may be written as shown in equation (2.9):

\bar{x}_h = \frac{n}{\sum_{i=1}^{n} (1/x_i)}    (2.9)

The harmonic mean is used to estimate a mean from rates such as rates by time or distance. A good example would be provided by estimating the average speed of a train when the train’s speed changes every one kilometre, because of track condition, signals, and congestion. Suppose that the speeds for each kilometre of a twenty kilometre train trip were as shown in Table 2.7. If one were to take the arithmetic mean, this would give a mean speed of 58.25 kilometres per hour (km/h). This would suggest that the time taken for the trip was 20 × 60 / 58.25 minutes, or 20.6 minutes, when it was actually 22.7 minutes (see Table 2.7).

Table 2.7 Speeds by kilometre for a train

Kilometre of trip   Speed (km/h)   Time taken (minutes)
1                   40             1.5
2                   45             1.333
3                   55             1.091
4                   60             1
5                   70             0.857
6                   65             0.923
7                   50             1.2
8                   35             1.714
9                   40             1.5
10                  60             1
11                  70             0.857
12                  80             0.75
13                  100            0.6
14                  90             0.667
15                  70             0.857
16                  60             1
17                  60             1
18                  45             1.333
19                  40             1.5
20                  30             2
Total               –              22.683

The harmonic mean is calculated as shown in equation (2.10):

\bar{x}_h = 20 / (1/40 + 1/45 + 1/55 + 1/60 + 1/70 + 1/65 + 1/50 + 1/35 + 1/40 + 1/60 + 1/70 + 1/80 + 1/100 + 1/90 + 1/70 + 1/60 + 1/60 + 1/45 + 1/40 + 1/30) = 20 / 0.378055 = 52.903    (2.10)

This gives a harmonic mean speed of 52.903 km/h. Using this figure, rather than the arithmetic mean speed, the time taken for the twenty kilometre trip is 20 × 60 / 52.903 minutes, or 22.7 minutes, which is the correct figure.
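A minimal Python check of this calculation (added as an illustration, not the author’s code):

```python
# Speeds over each of the twenty kilometres of the trip, from Table 2.7 (km/h)
speeds = [40, 45, 55, 60, 70, 65, 50, 35, 40, 60, 70, 80, 100, 90, 70, 60, 60, 45, 40, 30]

n = len(speeds)
arithmetic_mean = sum(speeds) / n                   # 58.25 km/h, misleading here
harmonic_mean = n / sum(1 / v for v in speeds)      # about 52.9 km/h

# Trip time in minutes implied by each mean (20 km, 60 minutes per hour)
print(20 * 60 / arithmetic_mean)   # about 20.6 minutes, too low
print(20 * 60 / harmonic_mean)     # about 22.7 minutes, the actual time
```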

The quadratic mean
The quadratic mean is also known as the root mean square (RMS). It is given by summing the squared values of the observations, dividing these by the number of observations, and taking the square root of the result, as shown in equation (2.11):

\text{RMS} = \sqrt{\frac{\sum_{i=1}^{n} x_i^2}{n}}    (2.11)

The quadratic mean is most often used with data whose arithmetic mean is zero. It is often used for estimating error when the expected value of the average error is zero. For example, suppose that one is assessing the accuracy of a machine that produces ball bearings of nominally 100 millimetres (mm) in diameter. Measurements are taken of a number of ball bearings, and the actual diameters are found to be those shown in Table 2.8, which also shows how much each one deviates from 100 mm.

The arithmetic mean of the deviations is −0.11 mm. However, the RMS is ±0.81 mm. The latter value gives a much clearer idea of the amount by which the ball bearings actually deviate from the desired diameter, because it does not allow the negative values to compensate for the positive ones. It shows, more precisely, the tolerance in the manufacturing process.
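The following short Python fragment (an added illustration, not the author’s code) reproduces both figures from the deviations in Table 2.8.

```python
import math

# Deviations of the ball bearing diameters from the nominal 100 mm (Table 2.8)
deviations = [-1.5, 0.2, -0.4, -1.1, 0.6, 0.3, 0.7, -0.9, -0.1, 1.1]

mean_dev = sum(deviations) / len(deviations)                        # -0.11 mm
rms = math.sqrt(sum(d * d for d in deviations) / len(deviations))   # about 0.81 mm

print(round(mean_dev, 2), round(rms, 2))
```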

Relationships between mean (arithmetic), median, and mode
There are relationships between the arithmetic mean (referred to hereafter as the mean), the median, and the mode that can tell us more about the underlying data. In the temperature data from Table 2.3, it was found that the mean high temperature was 27.5°C, the median was 27°C, and the mode occurred at 27°C. In this case, it can be seen that the mode, median, and mean are all quite close. For the low temperatures, the mean was 19.7°C, the median was 20°C, and the mode was also 20°C. Again, the values are very similar. In contrast, for the income data of Table 2.5, the mean is $37,474, the median is $33,428, and the mode would be in the range $20,000–$24,999. These values are not particularly close.


For the mean, mode, and median to be the same value, the data must be distributed symmetrically around the mean and median, and the distribution must be unimodal – i.e., have one mode – which must occur at the mean value.

Plotting the temperature data, as shown in Figure 2.14 for the high temperatures and Figure 2.15 for the low temperatures, shows distributions that are very nearly symmetrical and that meet the conditions for a coincidence of mean, mode, and median.

Using Sturges’ rule, with sixty observations on income, incomes should be grouped into seven equal steps. This can be done by setting the intervals to $15,000. The result is shown in Figure 2.16. In contrast to the temperature data, Figure 2.16 shows that the data are not symmetrical but, rather, that they are skewed to the right, meaning that there is a longer tail to the distribution to the right than to the left. This leads to a median and a mode that are both below the mean.

Table 2.8 Measurements of ball bearings

Ball bearing   Diameter (mm)   Deviation (mm)
1              98.5            −1.5
2              100.2           0.2
3              99.6            −0.4
4              98.9            −1.1
5              100.6           0.6
6              100.3           0.3
7              100.7           0.7
8              99.1            −0.9
9              99.9            −0.1
10             101.1           1.1

Figure 2.14 Distribution of maximum temperatures from Table 2.4


The reverse would also be true. If we were to estimate the mean, median, and mode for a particular set of data, and were to find that the mean was less than the mode or the median, we would know that the distribution was skewed to the left, so that it would have a long tail on the left and a short tail on the right – the mirror image of Figure 2.16.

Figure 2.15 Distribution of minimum temperatures from Table 2.4

Figure 2.16 Income distribution from Table 2.5

Another aspect of the mean, median, and mode is their relative sensitivity to extreme values in the data. It has already been noted that the mean is sensitive to extreme values, as was shown in the diagram in Figure 2.12. It can be illustrated further by examining what happens to the mean of income if a single extreme value is dropped. For example, suppose that the observation of $100,563 were found to be erroneous in some way and was removed from the data. This would change the mean from $37,474 to $36,405. The mode would be unchanged. The median from the grouped data would also be unchanged. Similarly, suppose that the low income of household 44 of $2,954 were found to be erroneous and was removed from the data; the mean would increase to $38,059. However, the mode would again be unaffected, as would the median. Of course, if the income data had happened to include a millionaire with, say, $1,005,630 as his or her income, in place of the individual with the income of $100,563, the mean would change from $37,474 to $52,558, while the mode and median would remain unchanged. This shows the effect of a dramatic extreme value.

In the temperature data, the effects are much less marked for two reasons: the spread of the data is much smaller than in the income data, so that the extreme values in the data are not particularly extreme, and the data set is very small. However, suppose that, for one day, the data entry person had mistakenly entered the high and low temperatures in reverse, and suppose that, on that day, the real high had been 34°C and the real low 15°C. Then the mean would change for the high temperature from 27.5°C to 26.9°C, and the mean low temperature from 19.7°C to 20.3°C. However, in each case, the mode and median would remain unchanged.

In summary, the mean is sensitive to extreme values in the data, while the mode and median are not. The relative values of the mean, mode, and median indicate information about the nature of the distribution, with coincident or nearly coincident values indicating a nearly symmetrical distribution, while having the mode and median below the mean indicates a right-skewed distribution, and above indicates a left-skewed distribution. It is also worth noting that the mode and median can be coincident but different from the mean, as well as all three values being different from one another. As also noted, under the discussion of the mode, if there is more than one mode this provides further information about the shape of the underlying distribution. In the case of a symmetrical, bimodal distribution, the mean and the median could be coincident, with two modes lying equidistant from the mean and median on either side. Similarly, in a trimodal distribution, the mean and median could be coincident with one another and with one of the three modes, with the other two lying at equal distances either side of the mean, median, and third mode.

Examples

Example 1. A number of vehicles were observed passing through a traffic signal during the green phase, with observations as shown in Table 2.9. Calculate the mean, median, and mode of the data. Deduce the shape of the distribution.

Summing these values, it is found that the total number of vehicles observed is 138. Dividing by thirty, because there are thirty observations, gives a mean of 4.6. The mode and median can be found best by sorting the data by the number of vehicles. This produces Table 2.10.

Table 2.10 shows that the mode is six, and the median is five. Therefore, these data describe a skewed distribution, with the median and the mode both being larger than the mean, so that it is left-skewed, and none of the three values is coincident. Moreover, the distribution is unimodal. It can also be seen that, because the next most frequent number of vehicles occurs only four times, compared to the mode, which occurs eight times, the distribution must be sharply peaked.

As confirmation, a plot of the data is shown in Figure 2.17.


Table 2.9 Number of vehicles passing through the green phase of a traffic light

Green phase   Number of vehicles   Green phase   Number of vehicles
1             5                    16            1
2             7                    17            2
3             6                    18            6
4             6                    19            3
5             4                    20            4
6             2                    21            5
7             7                    22            6
8             8                    23            3
9             3                    24            1
10            2                    25            6
11            4                    26            7
12            5                    27            2
13            7                    28            3
14            6                    29            6
15            5                    30            6

Table 2.10 Sorted number of vehicles passing through the green phase

Green phase   Number of vehicles   Green phase   Number of vehicles
16            1                    15            5
24            1                    21            5
6             2                    3             6
10            2                    4             6
17            2                    14            6
27            2                    18            6
9             3                    22            6
19            3                    25            6
23            3                    29            6
28            3                    30            6
5             4                    2             7
11            4                    7             7
20            4                    13            7
1             5                    26            7
12            5                    8             8


Example 2. A small community in outback Australia has forty-seven school-age children. Their ages are shown in Table 2.11. Because of the small number of children, it is desired to split the children into two equal-sized groups. What is the age cut-off that should be used to do this? What is the age that occurs most frequently in this community? What is the average age of the children?

To determine the age at which to split the children into two equal-sized groups, it is necessary to find the median. With forty-seven children, the median value will be that of the age of the twenty-fourth child, counting from five upwards. This is age thirteen. Therefore, one group should comprise the five- to twelve-year-olds, of which there are twenty-three, and the other the thirteen- to eighteen-year-olds, of which there are twenty-four.

The age that occurs most frequently is fifteen, for which there are seven children. This is the mode, and there is only one mode in these data. The average age of the children can be calculated by using a variation on the formula for a mean, in which we use the frequency of occurrence of each age, as shown in equation (2.12):

\bar{x} = \frac{\sum_{i=1}^{h} f_i x_i}{\sum_{i=1}^{h} f_i}    (2.12)

where f_i = the frequency of occurrences of the ith value of x, and h = the number of distinct values of x.

In this case, the mean is determined from equation (2.13):

\bar{x} = (3 × 5 + 4 × 6 + 3 × 7 + 2 × 8 + 1 × 9 + 5 × 10 + 4 × 11 + 1 × 12 + 3 × 13 + 2 × 14 + 7 × 15 + 3 × 16 + 5 × 17 + 4 × 18) / 47 = 568 / 47 = 12.1    (2.13)

Figure 2.17 Distribution of vehicle counts


Thus, the median age is found to be thirteen, the mode is fifteen, and the mean is 12.1. The mode and median are, again, both larger than the mean, indicating a distribution of ages that is skewed to the left.
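Equation (2.12) can be sketched in Python using the age frequencies of Table 2.11, shown below (an added illustration, not the author’s code):

```python
# Ages and numbers of children from Table 2.11
ages =   [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
counts = [3, 4, 3, 2, 1,  5,  4,  1,  3,  2,  7,  3,  5,  4]

# Weighted mean: sum of frequency times value, divided by the total frequency
total = sum(counts)                                            # 47 children
mean_age = sum(f * x for f, x in zip(counts, ages)) / total

print(round(mean_age, 1))   # 12.1
```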

Table 2.11 Number of children by age

Age     Number of children
5       3
6       4
7       3
8       2
9       1
10      5
11      4
12      1
13      3
14      2
15      7
16      3
17      5
18      4
Total   47

Measures of dispersion
Measures of dispersion should provide additional information to that given by the central measures of the data. What is needed now is to get some idea of how much the values vary. For example, in the temperature data, it was noted that there was not a very large amount of variation in the temperatures compared, say, to the income data. However, this was something observed from a rather small amount of data. If one had data consisting of hundreds of observations, one would be hard pressed to deduce this.

The range
Intuitively, the first measure that one might think of as providing a measure of variability of the data is that of the range of the data – i.e., the minimum and maximum values, and the difference between them. For the maximum temperatures, this would provide values of 22°C and 33°C. For the minimum temperatures, it would provide values of 16°C and 23°C. For the income data, it would provide values of $2,954 and $100,563. Certainly, this provides useful information. However, the range is too sensitive to outliers. If the income data had included the millionaire, then the range would change to $2,954 and $1,005,630. This is a huge increase in the range, yet there is only one value different, so the data are not much more dispersed than when the millionaire was not in the data set. Similarly, there might have been one day when the temperature reached 42°C, while all the other temperatures remained the same. The range would now be extended to 22°C to 42°C, which is almost double the original range, yet the data are hardly more dispersed. As a result, it must be concluded that the range, while giving valuable information, is not the best measure of the dispersion of the data.

The interquartile range
Another possibility is to use a sub-part of the range, in which the extreme values at either end of the range are removed. The range that is most frequently used is called the interquartile range. To understand how this is determined, it is necessary first to define fractiles.

Fractiles
Any set of data can be divided into equal-size groupings. The median divides the data into two equal-size groups. The median may, therefore, also be called the fiftieth percentile, because it is the boundary dividing the data into two equal halves: 50 per cent of the data has a value less than the fiftieth percentile, and 50 per cent has a value more than the fiftieth percentile. If the data are divided into thirds, the boundaries of each third of the data can be called a tertile. Similarly, if the data are divided into four equal-size groups, so that one-quarter of the data falls into each group, then the boundaries are the quartiles. One can obviously talk of quintiles, sextiles, and so forth. Similarly, the data can be divided into percentage groupings. One might, for example, be interested in the ninetieth percentile, which might be used in determining relatively rare events, which would be those occurring beyond the ninetieth percentile. In setting speed limits in countries such as the United States, Australia, and the United Kingdom, among others, the eighty-fifth percentile speed is often used as the speed limit. This would be the speed that is exceeded by only 15 per cent of all vehicles.

One might, for example, use the lowest 10 per cent of incomes to define poverty. Therefore, in the income data of Table 2.5, the tenth percentile income would be the income of the household ranked sixth from the lowest. This is the household with an income of $7,390. The seventh household has an income of $8,917, so that the tenth percentile could be defined as being $7,391. The eighty-fifth percentile maximum temperature would be the temperature that is exceeded on only four to five days of the month. This would be 30°C, because there are five days that had higher temperatures than this.

The interquartile range is defined as the difference between the first and third quartiles, or the difference between the twenty-fifth and seventy-fifth percentiles. In the income data, the interquartile range is found to be from $18,259 to $53,744, or a range of $35,485. This is now unaffected by the millionaire, or even by the household with an income of $100,563. For the temperature data, the interquartile range of the maximum temperatures is 26°C to 29°C, or 3°C; the interquartile range for the low temperatures is 19°C to 21°C, or 2°C. This range is clearly much less sensitive to extreme values than the range. However, it provides still rather limited data about dispersion, and actually ignores the values of one-half of the data (the lowest 25 per cent and the highest 25 per cent). It would be preferable for a measure of dispersion to use all the data.
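As an added sketch (not the author’s code), quartiles and the interquartile range can be obtained with Python’s standard statistics module; note that different interpolation conventions shift the quartile cut points slightly, so the values need not match the text exactly.

```python
import statistics

# Minimum temperatures from Table 2.3 (°C)
min_temps = [18, 19, 19, 17, 22, 21, 20, 19, 22, 21, 23, 20, 18, 19, 20,
             22, 18, 16, 17, 17, 19, 20, 20, 21, 20, 19, 20, 21, 20, 23]

# statistics.quantiles with n=4 returns the three quartile cut points
q1, q2, q3 = statistics.quantiles(min_temps, n=4)
print(q1, q2, q3, q3 - q1)   # quartiles and the interquartile range (about 2°C)
```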


Box and whisker plot Before leaving the interquartile range, it is appropriate to mention the box and whisker plot. This is a plot that summarises much of the information discussed in this section and previous sections. It is used to display the maximum and minimum values in the data, the interquartile range, and the median. A box and whisker plot of the income data is shown in Figure 2.18. This shows the interquartile range by the box, with the median represented as a bar across the box, and the whiskers extend to the minimum and maximum values in the data. This is a useful summary of the income data. A similar plot could be obtained for each of the maximum and minimum temperatures, and these are shown in Figures 2.19 and 2.20.

Variability It has been noted that the main problem with ranges is that they use only two values from the data, and ignore the rest. There could be two sets of data with quite different dispersions but the same range. Therefore, what is needed to measure variability is, clearly, something that uses all the data points.

Deviations from zero or the mean One possible way to use all the data would be to estimate deviations of the data from a fixed value, such as zero or the mean. However, the mean deviation from zero is the mean, and the mean deviation from the mean is zero. Therefore, neither of these measures adds anything new. The reader can readily confirm that this is so by considering any of the data sets provided in this chapter. Of course, the mean deviation from the mean is zero because the negative deviations will cancel out the positive ones. An alternative would be to consider the mean absolute deviation from the mean. This would be given by equation (2.14):

\[ \mathrm{MAD}_x = \frac{1}{n} \sum_{i=1}^{n} \left| x_i - \bar{x} \right| \tag{2.14} \]


Figure 2.18 Box and whisker plot of income data from Table 2.5


The mean absolute deviation (MAD) implies that one is interested not in the signs of the deviations but only in their magnitudes. This also gets away from the problem that deviations sum either to the mean or to zero. Applying this to the income data of Table 2.5, the mean absolute deviation from the mean is $20,685; the data for calculating this are shown in Table 2.12. However, if the household with an income of $100,563 is removed from the data, then this value drops to $19,856. If, on the other hand, the millionaire is included in place of this household, then the value


Figure 2.19 Box and whisker plot of maximum temperatures


Figure 2.20 Box and whisker plot of minimum temperatures


changes to $40,757. Clearly, the mean absolute deviation is very sensitive to extreme values in the data.

If the same value is estimated for the data on maximum and minimum temperatures, the mean absolute deviations are, respectively, 2.2°C and 1.4°C. These mean absolute deviations indicate, as would be expected, that the temperature data are much less dispersed than the income data, and that the minimum temperatures are less dispersed than the maximum temperatures. The mean absolute deviation of the large income data set (with 872 observations) is not readily calculated, because the data were collected in ranges. Picking a particular value to represent each range, to estimate a mean and the mean absolute deviation, is a perilous undertaking, because the choice of such values is arbitrary, but it can have a significant effect on the statistics calculated from them.
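For raw (ungrouped) observations, the mean absolute deviation of equation (2.14) can be computed with a few lines of code. The sketch below is written in Python and uses a small hypothetical list of temperature readings, since the full observation tables appear earlier in the chapter.

```python
def mean(data):
    return sum(data) / len(data)

def mean_absolute_deviation(data):
    """Mean absolute deviation from the mean, as in equation (2.14)."""
    m = mean(data)
    return sum(abs(x - m) for x in data) / len(data)

# Hypothetical daily maximum temperatures (degrees Celsius), for illustration only.
temps = [27, 29, 25, 31, 28, 26, 30, 24]
print(round(mean_absolute_deviation(temps), 2))   # prints 2.0 for this illustrative list
```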

Squared deviations from the mean Although the mean absolute deviation is fairly easy to calculate, it does not possess some of the properties that statisticians desire in a measure. Specifically, if there is a sample and the MAD is estimated for it, this will be a biased estimate of the population MAD. What this means is that the sample size could be increased and the error between the sample estimate and the true population value will not necessarily get smaller, and might even get larger, whereas an unbiased estimator will always get closer and closer to the true value as the sample size is increased. Another way to understand the bias is to consider a situation in which a number of independent samples are drawn and a statistic is estimated from each sample. If the average of these statistics (such as the mean) is estimated, the averaged result will be closer to the true population value than any of the individual sample values, if

Table 2.12 Deviations from the mean for the income data of Table 2.5

Household number   Income deviation   Household number   Income deviation   Household number   Income deviation

 1   –$15,116    21   –$28,248    41    $32,661
 2   –$12,795    22    $58,961    42    $63,089
 3       –$19    23    $17,867    43   –$33,597
 4     $8,749    24    $51,893    44   –$34,520
 5   –$14,684    25   –$24,490    45   –$31,052
 6     $1,182    26   –$16,030    46   –$21,123
 7    $12,525    27    –$1,135    47   –$18,252
 8    $38,976    28   –$17,369    48    $19,304
 9    $16,270    29     $6,972    49     $3,763
10   –$18,555    30    –$3,186    50   –$12,582
11     $7,407    31   –$11,796    51    –$6,390
12   –$10,904    32   –$33,352    52    $30,534
13   –$25,339    33   –$30,084    53    $33,565
14     $9,516    34    $28,335    54   –$24,341
15       $381    35     $9,527    55   –$19,215


the statistic is unbiased. The MAD is biased, so that averaging MADs from a number of samples will not necessarily provide a closer estimate to the true value.

A statistic that does provide an unbiased estimate is the mean of the squared deviations from the mean. Squaring the deviations from the mean takes care of the signs, just as happens with the absolute deviations. Therefore, again, this measure focuses on the magnitude of the deviations, not the sign. However, an unbiased estimator can be obtained from the squared deviations, which is not possible from the absolute deviations.

The mean of the squared deviations is known as the variance, and it is calculated from equation (2.15):

\[ \sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2 \tag{2.15} \]

The Greek letter sigma squared, σ², is customarily used to denote the population value of the variance, while s² is used to denote the estimate of the variance from sample data. However, an unbiased estimate of the variance is obtained by dividing by (n – 1) instead of n. Thus, the sample variance is given by equation (2.16):

\[ s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 \tag{2.16} \]

The biased and unbiased nature of these statistics can be illustrated by considering a die. The population of numbers on a die is one, two, three, four, five, six. The mean of these values is 3.5 (1 + 2 + 3 + 4 + 5 + 6 = 21; 21 / 6 = 3.5). The variance is 2.917, because this is the entire population. Therefore, the variance is calculated as

[(–2.5)² + (–1.5)² + (–0.5)² + (0.5)² + (1.5)² + (2.5)²] / 6 = [2 × (6.25 + 2.25 + 0.25)] / 6 = 2.917

The mean absolute deviation is calculated as

(2.5 + 1.5 + 0.5 + 0.5 + 1.5 + 2.5) / 6 = 9 / 6 = 1.5

Now, suppose that the die is thrown twice, and this is used as a sample of the values. For each sample, the mean, the mean absolute deviation, and the variance can all be estimated. There are thirty-six possible outcomes from doing this. For example, two ones (1, 1) could be thrown, or a one and a four (1, 4) could be thrown, or a five and a two (5, 2), etc. Table 2.13 shows the thirty-six possible outcomes. For each outcome, the next column shows the mean absolute deviation for the two values thrown. The third column, labelled ‘Variance (biased)’, shows the value of the sample variance, dividing by n = 2. The fourth column, labelled ‘Variance (unbiased)’, uses the sample variance obtained using the correct, unbiased formula, where division is by (n – 1), which, in this case, is 1. The final column contains a value of the mean absolute deviation obtained by dividing by n – 1 instead of n, to see if this corrects the problem.


If the average of all the mean absolute deviations is found, this should give a correct estimate of the true population mean absolute deviation. However, it gives an underestimate of 0.972. Thus, the sample mean absolute deviations do not produce a population estimate. If the average of the variance estimates is found, using n in

Table 2.13 Outcomes from throwing the die twice

Outcome    MAD    Variance (biased)    Variance (unbiased)    MAD (‘corrected’)
1, 1    0      0      0      0
1, 2    0.5    0.25   0.5    1
2, 1    0.5    0.25   0.5    1
1, 3    1      1      2      2
3, 1    1      1      2      2
1, 4    1.5    2.25   4.5    3
4, 1    1.5    2.25   4.5    3
1, 5    2      4      8      4
5, 1    2      4      8      4
1, 6    2.5    6.25   12.5   5
6, 1    2.5    6.25   12.5   5
2, 2    0      0      0      0
2, 3    0.5    0.25   0.5    1
3, 2    0.5    0.25   0.5    1
2, 4    1      1      2      2
4, 2    1      1      2      2
2, 5    1.5    2.25   4.5    3
5, 2    1.5    2.25   4.5    3
2, 6    2      4      8      4
6, 2    2      4      8      4
3, 3    0      0      0      0
3, 4    0.5    0.25   0.5    1
4, 3    0.5    0.25   0.5    1
3, 5    1      1      2      2
5, 3    1      1      2      2
3, 6    1.5    2.25   4.5    3
6, 3    1.5    2.25   4.5    3
4, 4    0      0      0      0
4, 5    0.5    0.25   0.5    1
5, 4    0.5    0.25   0.5    1
4, 6    1      1      2      2
6, 4    1      1      2      2
5, 5    0      0      0      0
5, 6    0.5    0.25   0.5    1
6, 5    0.5    0.25   0.5    1
6, 6    0      0      0      0
Total      35      52.5    105     70
Average    0.972   1.458   2.917   1.944


calculating each sample variance, then the average of these values is 1.458, which is also an underestimate of the population value of 2.917. However, if each value is calculated by dividing by (n – 1) instead, an average value of 2.917 is obtained, which is exactly correct. When the same procedure, dividing by (n – 1) instead of n, is applied to the MAD, a value of 1.944 results, which overestimates the true population MAD of 1.5. Hence, there seems to be no way to obtain an unbiased estimate of the MAD. It is, therefore, not suitable for estimating population values from a sample, whereas the variance, estimated using (n – 1), is able to provide unbiased population estimates from a sample.
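The enumeration in Table 2.13 is easy to reproduce. The sketch below loops over all thirty-six equally likely two-throw samples, computes the MAD and the biased and unbiased sample variances for each, and averages them; it recovers the averages of 0.972, 1.458, and 2.917 discussed above.

```python
from itertools import product

faces = [1, 2, 3, 4, 5, 6]
mads, var_biased, var_unbiased = [], [], []

for a, b in product(faces, repeat=2):      # all 36 possible two-throw samples
    m = (a + b) / 2
    mads.append((abs(a - m) + abs(b - m)) / 2)              # MAD, dividing by n = 2
    var_biased.append(((a - m) ** 2 + (b - m) ** 2) / 2)    # variance, dividing by n
    var_unbiased.append(((a - m) ** 2 + (b - m) ** 2) / 1)  # variance, dividing by n - 1

print(round(sum(mads) / 36, 3))          # 0.972, below the population MAD of 1.5
print(round(sum(var_biased) / 36, 3))    # 1.458, below the population variance of 2.917
print(round(sum(var_unbiased) / 36, 3))  # 2.917, matching the population variance
```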

It is now possible to proceed to estimate the variance from each of the earlier data sets, used through much of this chapter. For the income data in Table 2.5, the variance is found to be 643,845,896 squared dollars. This is a very large number, which is the normal result when the variable takes large values, such as the annual income figures. For the maximum temperatures the variance is 7.57, and for the minimum temperatures it is 3.18. The units of both of these variances are squared degrees Celsius. Two results are immediately apparent. First, the variances of the temperature data are much smaller than the variance of the income data; second, the variance of the maximum temperatures is about twice as great as that of the minimum temperatures. In all cases, the estimates were obtained with (n – 1). If n had been used, the values for the maximum and minimum temperatures would have been 7.32 and 3.08, respectively, and, for income, the result would have been an estimated variance of 633,115,131. The differences in these values are not very large, and will become smaller as the sample sizes become larger.

As with the MAD, it is also useful to examine the effect on the variance of extreme values in the data. Because extreme values do signify some information about the dispersion of the data, it would be expected that there would be some effect on the variance, but it would be desirable that it would not be excessive. In the interquartile range, it was found that extreme values had no effect, while they had an enormous potential effect on the range. The MAD was found to be very sensitive to extreme values, although not as sensitive as the range. Inclusion of the millionaire in the income data has a very sizeable effect on the variance, which increases from 643,845,896 squared dollars to 16,231,870,828 squared dollars. If the income of $100,563 is removed, then the variance decreases to 585,158,933. Thus, it must be concluded that the variance is very sensitive to extreme values, although less so than the range.

The standard deviation It has just been shown that the values of the variance can be quite large, and that the units of the variance are the square of the units of the variable. This means that the variance, although a useful statistic, is not easily interpreted. The standard deviation is the square root of the variance. It therefore has the same measurement units as the variable, and will be of a similar order of magnitude to the variable in question. Thus, the standard deviation for the population is given by σ in equation (2.17):

\[ \sigma = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2} \tag{2.17} \]


and the standard deviation for a sample is given by s in equation (2.18):

\[ s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2} \tag{2.18} \]

In the previous examples for the variance, the standard deviation of household income is ±$25,374. For the maximum temperatures the standard deviation is ±2.75°C, and for the minimum temperatures it is ±1.78°C. It is interesting to compare these to the MADs, which are $20,685 for the income data, and 2.2°C and 1.4°C for the maximum and minimum temperatures, respectively. It appears, from these examples, that the MAD is generally less than the standard deviation, which was also the case for the example of the numbers obtained from throwing a die.

It should be noted that the standard deviation, being a square root, may be either positive or negative. Correctly, it should always be shown with the plus/minus (±) sign in front of it. It is also worth noting that the magnitude of the standard deviation is usually similar to the magnitude of the variable being assessed. Thus, instead of a variance in hundreds or thousands of millions of squared dollars, the standard deviation is in tens of thousands of dollars, similar to the variable of annual household income.

Examples

Example 1. For the data provided in Table 2.9 of vehicles passing through the green phase of a traffic signal, find the interquartile range, plot a box and whisker plot, and determine the MAD and the variance and standard deviation of the data. Table 2.14 shows the sorted data (presented earlier in Table 2.10), which is easiest to use to establish the interquartile range.

From this it can be estimated that the twenty-fifth percentile is at three vehicles per green, and the seventy-fifth percentile is at six vehicles per green. Therefore, the interquartile range is three to six, or a size of three vehicles per green. Figure 2.21 shows the box and whisker plot for these data.

The mean of the data was earlier found to be 4.6. Therefore, the MAD, the variance, and the standard deviation can be estimated by computing the deviations from 4.6. These are shown in Table 2.15.

From these deviations, the mean absolute deviation can be calculated as

MAD = 1.72 vehicles per green

Similarly, we are able to calculate the variance and the standard deviation as

s² = 3.97 (vehicles per green)²

s = 1.99 vehicles per green

As before, the standard deviation is larger than the MAD. In addition, it can be seen that the standard deviation is smaller than the interquartile range.
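The figures in this example can be checked with a short sketch. The thirty green-phase counts below are taken from Table 2.15 (phases 1 to 30), and the calculation applies equations (2.14), (2.16), and (2.18).

```python
counts = [5, 7, 6, 6, 4, 2, 7, 8, 3, 2, 4, 5, 7, 6, 5,
          1, 2, 6, 3, 4, 5, 6, 3, 1, 6, 7, 2, 3, 6, 6]   # vehicles per green, phases 1-30

n = len(counts)
mean = sum(counts) / n                                   # 4.6 vehicles per green
mad = sum(abs(x - mean) for x in counts) / n             # 1.72
var = sum((x - mean) ** 2 for x in counts) / (n - 1)     # 3.97
sd = var ** 0.5                                          # 1.99

print(mean, round(mad, 2), round(var, 2), round(sd, 2))
```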


Example 2. Using the data from Table 2.11, on the numbers of children by age, calculate the interquartile range, the MAD, the variance, and the standard deviation, and present the interquartile range on a box and whisker plot.

The twenty-fifth percentile age is eight and the seventy-fifth percentile is sixteen; hence, the interquartile range is eight to sixteen, or eight years.

Table 2.14 Sorted number of vehicles passing through the green phase

Green phase    Number of vehicles    Green phase    Number of vehicles
16    1    15    5
24    1    21    5
 6    2     3    6
10    2     4    6
17    2    14    6
27    2    18    6
 9    3    22    6
19    3    25    6
23    3    29    6
28    3    30    6
 5    4     2    7
11    4     7    7
20    4    13    7
 1    5    26    7
12    5     8    8


Figure 2.21 Box and whisker plot of vehicles passing through the green phase


The box and whisker plot is shown in Figure 2.22. The mean was found earlier to be 12.1 years, so the deviations can be estimated for each age group. Again, the calculation of the MAD, variance, and standard deviation must be done using the frequency of occurrence of each age. Thus, the MAD is given by equation (2.19):

\[ \mathrm{MAD} = \frac{1}{n} \sum_{i=1}^{h} f_i \left| x_i - \bar{x} \right| \tag{2.19} \]

In the same way, the sample variance is obtained from equation (2.20):

\[ s^2 = \frac{1}{n-1} \sum_{i=1}^{h} f_i (x_i - \bar{x})^2 \tag{2.20} \]

The mean absolute deviation is found to be

MAD = 3.70 years

The variance and standard deviation are found to be

s² = 17.86 (years)²

s = 4.23 years

Again, the standard deviation is larger than the MAD and smaller than the interquartile range.
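Because the ages are recorded as a frequency table rather than as individual observations, the calculation weights each age by its frequency, as in equations (2.19) and (2.20). A sketch using the counts from Table 2.11:

```python
ages = {5: 3, 6: 4, 7: 3, 8: 2, 9: 1, 10: 5, 11: 4,
        12: 1, 13: 3, 14: 2, 15: 7, 16: 3, 17: 5, 18: 4}   # age: number of children

n = sum(ages.values())                                      # 47 children
mean = sum(age * f for age, f in ages.items()) / n          # about 12.1 years
mad = sum(f * abs(age - mean) for age, f in ages.items()) / n           # about 3.70
var = sum(f * (age - mean) ** 2 for age, f in ages.items()) / (n - 1)   # about 17.86

print(round(mean, 1), round(mad, 2), round(var, 2), round(var ** 0.5, 2))
```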

Table 2.15 Deviations for vehicles passing through the green phase

Green phase    Vehicles    Deviation    Green phase    Vehicles    Deviation
 1    5    0.4    16    1    3.6
 2    7    2.4    17    2    2.6
 3    6    1.4    18    6    1.4
 4    6    1.4    19    3    1.6
 5    4    0.6    20    4    0.6
 6    2    2.6    21    5    0.4
 7    7    2.4    22    6    1.4
 8    8    3.4    23    3    1.6
 9    3    1.6    24    1    3.6
10    2    2.6    25    6    1.4
11    4    0.6    26    7    2.4
12    5    0.4    27    2    2.6
13    7    2.4    28    3    1.6
14    6    1.4    29    6    1.4
15    5    0.4    30    6    1.4


The normal distribution

In the preceding sections of this chapter, the distributions of variables in a population or a sample have been discussed. The various graphs that were shown in a previous section of this chapter each depict the distribution of the data points from a sample or a population. In the early years of statistics, researchers noted that a certain pattern of distribution seemed to arise quite frequently in various different settings and for a wide variety of variables. This was a symmetrical distribution, with a fairly high peak. Because of the many occasions on which this distribution, or something very similar to it, arose, it was termed the ‘normal’ distribution. It is a very important distribution, because it has a number of properties that have been analysed extensively, and that are readily understood. The normal distribution is shown in Figure 2.23.


Figure 2.23 The normal distribution


Figure 2.22 Box and whisker plot of children’s ages


The normal distribution has the property that the mean, mode, and median are coincident and occur (of course) at the point of symmetry of the distribution. Another very important property of the normal distribution is that 68 per cent of the values lie between –1 and +1 standard deviations from the mean, and approximately 95 per cent lie between –2 and +2 standard deviations from the mean. The normal distribution can take a variety of shapes and positions, depending on the relative magnitude of the mean and the standard deviation, and depending on the absolute value of the mean. Figure 2.24 illustrates three different normal distributions, with standard deviations of one, three, and ten, all with a mean of zero.

The principal reason for the importance of the normal distribution in the context of this chapter is the additional information that it offers with respect to the standard deviation. For example, if one was to assume that the temperatures in the previous examples were distributed normally, then knowledge that the standard deviation of the maximum temperatures is ±2.75°C also suggests that about 95 per cent of maximum temperatures (presumably in the month of observation) will lie within 5.5°C of the mean. Because the mean was found to be 27.5°C, this would mean that one could expect (if the temperatures are normally distributed) that the maximum temperature would lie between 22°C and 33°C about 95 per cent of the time. Indeed, in the observations of high temperatures over that month, the actual range of temperatures observed was from 22°C to 33°C.

The income data are not normally distributed, but are skewed and have a minimum value of $0, whereas the normal distribution goes from minus infinity to plus infinity. Therefore, this idea of two standard deviations from the mean cannot be readily applied to the income data. Nevertheless, the general idea that 95 per cent of the data lie between +2 and −2 standard deviations is applicable in a large number of cases, and offers additional insights from calculation of the mean and the standard deviation.
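The 68 per cent and 95 per cent figures quoted above can be recovered from the cumulative normal distribution. The sketch below uses the error function from Python's standard library; it involves no survey data and simply evaluates the probability that a normally distributed variable lies within one or two standard deviations of its mean.

```python
from math import erf, sqrt

def prob_within(k):
    """Probability that a normal variable lies within k standard deviations of its mean."""
    return erf(k / sqrt(2))

print(round(prob_within(1), 4))   # about 0.6827
print(round(prob_within(2), 4))   # about 0.9545
```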

Some useful properties of variances and standard deviations

There are a few properties of variances and standard deviations that are useful in survey applications. Variances and standard deviations are very important in various calculations relating to sampling. Variance and standard deviation are properties of the population. A large variance and standard deviation indicate that the measure


Figure 2.24 Comparison of normal distributions with different variances


concerned has considerable variability in the population. A small variance and standard deviation are indicative of variables that vary little in the population.

Proportions or probabilities

The first useful property of variances and standard deviations is for a proportion or probability. Suppose that the object of interest is an attribute for which the measure is the proportion of the population that possesses a particular value of this attribute. The proportion of the population with this value of the attribute is denoted as p, and the proportion that does not possess this value as q, where q = (1 – p). In this case, the variance of the proportion p is given by equation (2.21):

\[ s_p^2 = pq = p(1 - p) \tag{2.21} \]

Similarly, the standard deviation is given by equation (2.22):

\[ s_p = \sqrt{pq} = \sqrt{p(1 - p)} \tag{2.22} \]

Note that the variance and standard deviation are a function only of the proportion that possesses the value of the attribute, and not of the sample size. It can be shown also that the maximum value of the variance and standard deviation occurs when p = q = 0.5. At that value, the variance and standard deviation will be as shown in equation (2.23):

\[ s^2 = 0.5 \times 0.5 = 0.25 \qquad s = \sqrt{0.5 \times 0.5} = 0.5 \tag{2.23} \]

It can be shown rather easily that this is the largest value that the variance and standard deviation can take. Table 2.16 shows the range of possible values of the variance and standard deviation. The number of observations does not affect the result, so that any number would show the same values of the variance and standard deviation. If the values of p and q are reversed, there is no difference in the variances and standard deviations.

Table 2.16 Values of variance and standard deviation for values of p and q

p      q      pq       Variance    Standard deviation
0.5    0.5    0.25     0.25        0.5
0.6    0.4    0.24     0.24        0.49
0.7    0.3    0.21     0.21        0.458
0.8    0.2    0.16     0.16        0.4
0.9    0.1    0.09     0.09        0.3
0.95   0.05   0.0475   0.0475      0.218
0.99   0.01   0.0099   0.0099      0.099


From Table 2.16, it is clear that the maximum value of the variance and the standard deviation occurs at values of p and q of 0.5. This is a very useful fact, which is used subsequently in discussions about sample sizes (Chapter 9).
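Table 2.16 can be generated directly from equations (2.21) and (2.22). The short sketch below tabulates pq and its square root for a range of values of p, confirming that the maximum occurs at p = q = 0.5.

```python
for p in [0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99]:
    q = 1 - p
    variance = p * q                 # equation (2.21)
    std_dev = variance ** 0.5        # equation (2.22)
    print(p, round(q, 2), round(variance, 4), round(std_dev, 3))
```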

Data transformations

It is useful to understand what happens when various transformations are made of data, and the effect this has on the variance of the data.

Adding a constant In various circumstances, one may wish to add a constant to a variable, so that xi becomes yi = xi + a. The effect of this is to add the same constant to the mean, but to leave the variance unchanged. Equation (2.24) shows the results:

\[ \bar{y} = \bar{x} + a \qquad s_y^2 = s_x^2 \tag{2.24} \]

Multiplying by a factor In other circumstances, one may wish to multiply a variable xi by a factor, so that a new variable is created, yi = bxi. The effect of this is to multiply the variance by b², and also multiply the mean by b. This is shown in equation (2.25):

\[ \bar{y} = b\bar{x} \qquad s_y^2 = b^2 s_x^2 \tag{2.25} \]

Linear transformation If both a multiplying factor and a constant are applied to a variable, resulting in a linear transformation, then the mean and variance are changed as shown in equation (2.26):

\[ y_i = bx_i + a \qquad \bar{y} = b\bar{x} + a \qquad s_y^2 = b^2 s_x^2 \tag{2.26} \]

An example of such a transformation is provided from the temperature data used earlier in this chapter. One might wish to transform the temperatures from degrees Celsius to degrees Fahrenheit. To do this, the number of degrees is multiplied by a factor of 1.8 and a constant of thirty-two is added. Using equation (2.26), this would yield the mean of the maximum temperatures in degrees Fahrenheit of 81.5. Similarly, the standard deviation, which will be the factor multiplied by the original standard deviation, would be ±4.95°F. The reader can readily check that these are the correct results by converting each value in Table 2.3 to degrees Fahrenheit, and then estimating the mean and standard deviation.
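The effect of a linear transformation can also be checked numerically. The sketch below applies equation (2.26) to the summary values quoted above and, as a cross-check, transforms a small hypothetical list of Celsius readings directly; the ratio of the standard deviations equals the multiplying factor, whatever the readings are.

```python
b, a = 1.8, 32                       # Celsius to Fahrenheit: F = 1.8 * C + 32

mean_c, sd_c = 27.5, 2.75            # summary values for the maximum temperatures
print(b * mean_c + a)                # 81.5, the transformed mean
print(b * sd_c)                      # 4.95, the transformed standard deviation

def sd(data):
    m = sum(data) / len(data)
    return (sum((x - m) ** 2 for x in data) / (len(data) - 1)) ** 0.5

# Direct check on a small hypothetical sample of Celsius readings.
celsius = [24, 26, 27, 28, 30, 33]
fahrenheit = [b * c + a for c in celsius]
print(round(sd(fahrenheit) / sd(celsius), 2))   # 1.8, the factor b
```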


Adding or subtracting variables In yet other circumstances, one may wish to add together two or more variables to form a new third variable, or one may wish to subtract the values of two variables to form a new third variable. The addition or subtraction of variables will result in changes to the means and variances of the transformed variables. First, it is assumed that the variables to be added or subtracted are independent variables. In the event that variables are added together, the mean of the composite variable will be the sum of the means of the individual variables, while the variance will be the sum of the variances. Further, if the addition is done with multiplying factors, then these multiplying factors will appear in each of the mean and the variance in the same manner as just shown for a simple transformation. If variables are subtracted one from another, the mean will be the difference in the means, but the variance is the sum of the variances. Equation (2.27) shows the results of adding independent variables, while equation (2.28) shows the results of subtracting independent variables:

\[ y_i = b_1 x_{1i} + b_2 x_{2i} + b_3 x_{3i} + \dots + b_k x_{ki} + a \]
\[ \bar{y} = b_1 \bar{x}_1 + b_2 \bar{x}_2 + b_3 \bar{x}_3 + \dots + b_k \bar{x}_k + a \]
\[ s_y^2 = b_1^2 s_{x_1}^2 + b_2^2 s_{x_2}^2 + b_3^2 s_{x_3}^2 + \dots + b_k^2 s_{x_k}^2 \tag{2.27} \]

\[ y_i = b_1 x_{1i} - b_2 x_{2i} - b_3 x_{3i} - \dots - b_k x_{ki} + c \]
\[ \bar{y} = b_1 \bar{x}_1 - b_2 \bar{x}_2 - b_3 \bar{x}_3 - \dots - b_k \bar{x}_k + c \]
\[ s_y^2 = b_1^2 s_{x_1}^2 + b_2^2 s_{x_2}^2 + b_3^2 s_{x_3}^2 + \dots + b_k^2 s_{x_k}^2 \tag{2.28} \]

It is important to recall that these equations hold if and only if the variables x1, x2, etc. are independent variables. In the event that they are not independent, then the above equations for the variance will have to be modified by the addition of twice the covariance for each pair of variables that are not independent when the composite variable is formed by addition, and by the subtraction of the same quantity when the composite variable is formed by subtraction. Suppose one were to form a new composite variable from two other variables that are not independent of each other. If the two variables are to be added together, the appropriate formulae are those shown in equation (2.29), and, if subtracted, the appropriate formulae are shown in equation (2.30):

\[ y_i = b_1 x_{1i} + b_2 x_{2i} \qquad \bar{y} = b_1 \bar{x}_1 + b_2 \bar{x}_2 \]
\[ s_y^2 = b_1^2 s_{x_1}^2 + b_2^2 s_{x_2}^2 + 2 b_1 b_2 \operatorname{cov}(x_1, x_2) \tag{2.29} \]

\[ y_i = b_1 x_{1i} - b_2 x_{2i} \qquad \bar{y} = b_1 \bar{x}_1 - b_2 \bar{x}_2 \]
\[ s_y^2 = b_1^2 s_{x_1}^2 + b_2^2 s_{x_2}^2 - 2 b_1 b_2 \operatorname{cov}(x_1, x_2) \tag{2.30} \]
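A minimal sketch of equations (2.29) and (2.30) for two variables, assuming the sample variances and the covariance have already been estimated, is given below; the numbers used are purely illustrative.

```python
def var_of_combination(b1, b2, var1, var2, cov12, subtract=False):
    """Variance of y = b1*x1 + b2*x2 (or b1*x1 - b2*x2 when subtract=True),
    allowing for non-independence through the covariance of x1 and x2."""
    sign = -1 if subtract else 1
    return b1 ** 2 * var1 + b2 ** 2 * var2 + sign * 2 * b1 * b2 * cov12

# Hypothetical example: two correlated measures combined with unit weights.
print(var_of_combination(1, 1, 4.0, 9.0, 2.5))                 # sum: 4 + 9 + 5 = 18
print(var_of_combination(1, 1, 4.0, 9.0, 2.5, subtract=True))  # difference: 4 + 9 - 5 = 8
```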


Covariance and correlation

This introduces the concept of covariance, which is now defined. Just as the variance measures how much the values of a variable vary within the population or the sample, so the covariance tells how similarly two variables vary together. If two variables are completely unrelated to one another, the covariance will be zero or close to it. If two variables vary almost identically to each other, then their variances will be approximately equal to one another and their covariance will be equal to the variance of either one.

The covariance is defined in equation (2.31):

\[ \operatorname{cov}(x, y) = s_{xy} = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y}) \tag{2.31} \]

It is the sum of the products of the deviations of x from its mean and the deviations of y from its mean, summed over the sample or the population, and divided by (n – 1) for a sample, or divided by n if for the population. As with the variance, the population value is written as σxy, and for the sample it is written sxy.

It can also be seen that, in the special case in which xi is equal to yi, this formula reduces to the formula for the variance (equations 2.15 and 2.16). This shows the important close relationship between covariance and variance. Further, there is another relationship between variance and covariance, called the correlation. The correlation is defined by equation (2.32):

\[ R = \frac{s_{xy}}{s_x s_y} \tag{2.32} \]

Covariance can be negative or positive. If x increases as y decreases and vice versa, then sxy will be negative. This necessarily follows, because values of x below its mean will produce negative values, which will, in this case, be multiplied by values of y that are above the mean of y, and therefore positive. Similarly, values of x that are above the mean of x will be multiplied by values of y that are below its mean, again producing negative values. If x increases as y increases, then covariance is positive.

As noted earlier, if x and y are very similar to one another, the covariance will approximate the variance of either one, and the variances of the two will be almost equal. In this case, the correlation will approach the value of one. If there is no relationship between the two variables, then the covariance will tend towards zero, while the individual variances will be non-zero values. The correlation will approach zero. If the relationship between x and y is an inverse one, then the covariance will be negative, and so will the correlation. Thus, it can be seen that the correlation is a useful measure of the strength of relationship between two variables.


However, there is a restriction on this. If the relationship between two variables is non-linear, then the correlation and the covariance will not measure it effectively. For example, if variable x first increases with increasing values of y and then decreases, proportionately to the earlier increases, the covariance will tend towards zero, as will the correlation, even though there may be a clear relationship between the two variables. Thus, it should be stated that correlation measures the strength of the linear relationship between two variables. It is not capable of measuring the strength of a non-linear relationship.

One further measure of importance is the coefficient of determination, also known as R-squared (R²). This is the square of the correlation, and is given by equation (2.33):

\[ R^2 = \frac{s_{xy}^2}{s_x^2 s_y^2} \tag{2.33} \]

Values of R² must lie between zero and one, and cannot take negative values. It is a better measure of the strength of association between variables x and y, because its approach to one is slower than that of the correlation coefficient, R. Whereas a correlation of 0.9 might suggest a close relationship, the value of 0.81 for R² suggests a somewhat less strong relationship, and is a better measure of the true strength of the relationship.

One could use the maximum and minimum temperatures to illustrate this. Looking back at Figure 2.7, it would seem that the maximum and minimum temperatures are related, in that higher maximum temperatures seem to be associated with higher minimum temperatures, and vice versa. One could, therefore, explore the covariance and correlation between these variables, using the data in Table 2.3. This shows that the covariance of the maximum and minimum temperatures is 3.92, which is somewhat larger than either of the standard deviations. Using this covariance and the standard deviations, the correlation coefficient, R, is 0.826 and the coefficient of determination, R², is 0.681. The result of this calculation does, indeed, suggest that there is a relationship between the maximum and minimum temperatures, but that it is certainly not a perfectly linear relation. The actual plot of maximum versus minimum temperatures is shown in Figure 2.25. This shows an appropriate use of a scatter plot. Had there been a correlation of one between the temperatures, all points would have lain on a straight line, connecting the lowest combined values of minimum and maximum temperatures to the highest combined values. The fact that the points show some scatter, and that there is not clear evidence of a combined highest or lowest value of the two temperatures, shows the lack of a strict linear relationship between the two measures. Had there been a correlation of zero between the two, then the points would be scattered around a horizontal line that would be at the mean of the maximum temperatures, or 27.5°C.
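Covariance, correlation, and the coefficient of determination can be computed with a few lines of code. The sketch below defines the sample covariance of equation (2.31) and the correlation of equation (2.32), and applies them to a small hypothetical paired sample rather than to the temperature observations of Table 2.3.

```python
def covariance(x, y):
    """Sample covariance, as in equation (2.31)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)

def correlation(x, y):
    """Sample correlation coefficient R, as in equation (2.32)."""
    return covariance(x, y) / (covariance(x, x) * covariance(y, y)) ** 0.5

# Hypothetical paired observations, for illustration only.
min_temp = [17, 18, 19, 20, 21, 22]
max_temp = [24, 26, 27, 29, 30, 33]

r = correlation(min_temp, max_temp)
print(round(r, 3), round(r ** 2, 3))   # R and the coefficient of determination R-squared
```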

Coefficient of variation

Variance, standard deviation, covariance, and correlation have all been defined and discussed, all of which provide very useful information about the data that might be


collected in a survey. Correlation has also been seen to be a particularly useful measure, in that it has no units and is therefore readily comparable between variables measured on radically different scales. For example, a correlation can be computed between annual household income and car ownership, even though these two variables are measured in vastly different units, and the correlation will remain a value between –1 and +1. One could also estimate the correlation between car ownership and the number of workers in a household, which would also produce a correlation between –1 and +1, and the two correlations thus estimated would be directly comparable one to the other. On the other hand, the standard deviations of these variables and the covariances will have very different values, and would not be readily comparable.

There is a similar problem with the standard deviation. Among the variables of car ownership, annual household income, and number of workers in the household, there are vastly different values for the standard deviation, which is measured in one case in cars per household, in another in dollars per household, and in the third by workers per household. Clearly, there is some difficulty in making comparisons between these.

However, just as it was possible to develop a unitless and standard measure of covariance, called the correlation coefficient, so, also, there is a measure, called the coefficient of variation, that offers a unitless measure of the standard deviation. The coefficient of variation, written as cv, is obtained by dividing the standard deviation by the mean, as shown in equation (2.34):

\[ cv = \frac{s}{\bar{x}} \tag{2.34} \]

The coefficient of variation therefore provides a ready comparison between quite disparate variables, and gives an idea of the relative variability of the data. The coefficient


Figure 2.25 Scatter plot of maximum versus minimum temperature


of variation also provides some additional information. If the value is less than one, then the standard deviation is less than the mean, and the data are not widely dispersed. If the value of the cv is one, the standard deviation and the mean are equal, suggesting moderate variability. If the cv is greater than one, then the data values are quite variable.

From the temperature data, the mean of the maximum temperatures was 27.5°C, and it was 19.7°C for the minimum temperatures. The mean income from Table 2.5 was $37,474. The standard deviations of these three separate sets of observations are, respectively, ±2.75°C, ±1.78°C, and ±$25,374. The coefficients of variation are then ±0.1 for the maximum temperatures, ±0.09 for the minimum temperatures, and ±0.677 for income. These values are all dimensionless and are, therefore, directly comparable. They show that the coefficients of variation of the maximum and minimum temperatures are almost the same, whilst the coefficient of variation for income is almost seven times as large as that for the temperatures.
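A minimal sketch of equation (2.34), using the summary values just quoted:

```python
def coefficient_of_variation(std_dev, mean):
    return std_dev / mean

# Summary values taken from the text above.
print(round(coefficient_of_variation(2.75, 27.5), 2))     # maximum temperatures: 0.1
print(round(coefficient_of_variation(1.78, 19.7), 2))     # minimum temperatures: about 0.09
print(round(coefficient_of_variation(25374, 37474), 3))   # household income: about 0.677
```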

Other measures of variability

There are a number of other useful measures of variability that can be used to help explain the data we have collected in a sample survey. The two principal measures we introduce are skewness and kurtosis.

Skewness The sample coefficient of skewness, usually written as g1, is given by equation (2.35).

\[ g_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^3}{n s^3} \tag{2.35} \]

The population coefficient of skewness is given by equation (2.36):

\[ \gamma_1 = \frac{\sum_{i=1}^{n} (x_i - \mu)^3}{n \sigma^3} \tag{2.36} \]

The coefficient of skewness can be positive, negative, or zero. Positive values indicate that the distribution of data is skewed to the right – i.e., that there is a long tail to the distribution to the right (positive) side of the distribution. Negative values indicate that the distribution is skewed to the left (or negative) side of the distribution. A value of zero, or very close to it, indicates that the distribution is symmetrical (or nearly so). Again, this provides further information about the appearance of the distribution, without necessarily having to plot the distribution of values. Figure 2.26 shows a distribution that is skewed to the right. It has a coefficient of skewness of 1.411. Figure 2.27 shows a distribution that is skewed to the left; it has a coefficient of skewness of –1.834. The coefficient of skewness for the maximum temperatures is 0.128 and for the minimum temperatures it is –0.059, showing that both distributions are almost symmetrical, with a slight right skewing for the maximum temperatures and


a slight left skewing for the minimum temperatures. The minimum temperatures are slightly closer to symmetry than the maximum temperatures. In contrast to this, the skewness of the income data from Table 2.5 is 0.73, showing that there is a definite skew to the right. All these are clearly borne out by the distributions that were shown earlier in the chapter (see Figures 2.14, 2.15, and 2.16).

Kurtosis The final variability measure of interest is the fourth moment of the deviations from the mean. This is called the sample coefficient of kurtosis and is given by equation (2.37):

\[ g_2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^4}{n s^4} \tag{2.37} \]

The population coefficient of kurtosis is given by equation (2.38):

\[ \gamma_2 = \frac{\sum_{i=1}^{n} (x_i - \mu)^4}{n \sigma^4} \tag{2.38} \]

The coefficient of kurtosis measures how peaked the data distribution is and how fat the tails are. Irrespective of the mean and the standard deviation, a normal distribution


Figure 2.26 A distribution skewed to the right


Figure 2.27 A distribution skewed to the left


has a kurtosis of three. Equation (2.39) shows what is termed the excess kurtosis, which is often the formula that is built into spreadsheets and statistical software, although it is often incorrectly referred to as the kurtosis:

\[ g_2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^4}{n s^4} - 3 \tag{2.39} \]

A distribution that has a kurtosis of less than three (or excess kurtosis less than zero) is said to be platykurtic, while a distribution that has a kurtosis of more than three (excess kurtosis greater than zero) is said to be leptokurtic. Platykurtic distributions are less peaked and have thinner tails. Leptokurtic distributions are more peaked and have fatter tails. Figures 2.28 and 2.29 show two distributions, with the distribution in Figure 2.28 having a lower kurtosis than the one in Figure 2.29.

The kurtosis of Figure 2.28 is 1.386 (excess kurtosis of –1.614), while that of Figure 2.29 is 5.953 (excess kurtosis of 2.953). The distribution of Figure 2.28 is not as peaked and has thinner tails than that of Figure 2.29.


Figure 2.28 Distribution with low kurtosis


Figure 2.29 Distribution with high kurtosis


Again, using the temperature and income data that have been used throughout much of this chapter, the coefficient of kurtosis for the maximum temperatures is 2.544, for the minimum temperatures it is 2.619, and for the income data it is 2.740. From this, it can be concluded that none of the three distributions is as peaked as a normal distribution, and that the least peaked distribution is the maximum temperatures, followed by the minimum temperatures, followed by income. In other words, all three data sets are platykurtic.

Examples

Example 1. Continuing the use of the number of vehicles passing through a signalised intersection during the green phase, calculate the coefficient of variation, the skewness, and the kurtosis for these data.

It will be recalled that the mean of the number of vehicles was found to be 4.6 vehicles per green phase and that the variance and standard deviation were found to be 3.97 and 1.99, respectively. The coefficient of variation is found by dividing the standard deviation by the mean:

cv = 1.99 / 4.6 = 0.433

This indicates that, relatively, the data are not highly variable. Recall that the temperature and income cvs were between 0.09 and 0.677. Thus, the green light traffic data show a lower cv than the income data, but greater than the temperature data.

To estimate the coefficient of skewness, it is necessary to estimate the cubed deviations from the mean for the data. Table 2.15 provided the deviations. Table 2.17 shows the same data as Table 2.15, but adds in columns for the cubed deviations and the deviations to the fourth power (needed for the coefficient of kurtosis). The sums of the cubed deviations and the deviations to the fourth power are also shown at the bottom of Table 2.17.

This shows that the sum of the cubed deviations is –63.84. The cube of the standard deviation of these data is 7.916 (= 1.993³). Hence the skewness is given by

g1 = –63.84 / (27 × 7.916) = –0.2987

Therefore, the traffic signal data are skewed to the left by a small amount.

Similarly, the coefficient of kurtosis is given by

g2 = 842.496 / (26 × (1.993)⁴) = 2.054

Hence, the data distribution is not as peaked as the normal distribution.
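The sums in Table 2.17 can be verified with the sketch below. The final two lines reproduce the divisions exactly as they appear in the worked example above (divisors of 27 and 26, with s = 1.993), rather than deriving them from a general formula, so the results match the figures quoted in the text.

```python
counts = [5, 7, 6, 6, 4, 2, 7, 8, 3, 2, 4, 5, 7, 6, 5,
          1, 2, 6, 3, 4, 5, 6, 3, 1, 6, 7, 2, 3, 6, 6]   # vehicles per green, phases 1-30

mean = sum(counts) / len(counts)                    # 4.6
sum_cubes = sum((x - mean) ** 3 for x in counts)    # -63.84
sum_fourth = sum((x - mean) ** 4 for x in counts)   # 842.496

s = 1.993                                           # sample standard deviation from Example 1
g1 = sum_cubes / (27 * s ** 3)                      # about -0.299, as in the worked example
g2 = sum_fourth / (26 * s ** 4)                     # about 2.05, as in the worked example
print(round(sum_cubes, 2), round(sum_fourth, 3), round(g1, 4), round(g2, 3))
```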

Example 2. Using the data from Table 2.11 on number of children by age, calculate the coefficient of variation, the coefficient of skewness, and the coefficient of kurtosis.

It was found previously that the mean of the ages was 12.1 and the standard deviation was ±4.23 years. Using these data, the coefficient of variation can then be calculated as

cv = ±4.23 / 12.1 = 0.35


From this, it can be seen that the distribution of children’s ages is a little less variable than the number of vehicles passing through the green traffic signal.

Table 2.18 shows the deviations from the mean, the cubes of these values, and the values raised to a fourth power.

Calculation of the skewness and kurtosis will again need to use frequencies of occurrence for the calculation. The sum of the weighted cubed deviations is –787.814. Using this in the formula for the coefficient of skewness, the coefficient is

g1 = –787.814 / (44 × (4.2264)³) = –0.237

Table 2.17 Deviations for vehicles passing through the green phase raised to third and fourth powers

Green phase    Vehicles    Deviation    (Deviation)³    (Deviation)⁴
 1    5     0.4       0.064      0.0256
 2    7     2.4      13.824     33.1776
 3    6     1.4       2.744      3.8416
 4    6     1.4       2.744      3.8416
 5    4    −0.6      −0.216      0.1296
 6    2    −2.6     −17.576     45.6976
 7    7     2.4      13.824     33.1776
 8    8     3.4      39.304    133.6336
 9    3    −1.6      −4.096      6.5536
10    2    −2.6     −17.576     45.6976
11    4    −0.6      −0.216      0.1296
12    5     0.4       0.064      0.0256
13    7     2.4      13.824     33.1776
14    6     1.4       2.744      3.8416
15    5     0.4       0.064      0.0256
16    1    −3.6     −46.656    167.9616
17    2    −2.6     −17.576     45.6976
18    6     1.4       2.744      3.8416
19    3    −1.6      −4.096      6.5536
20    4    −0.6      −0.216      0.1296
21    5     0.4       0.064      0.0256
22    6     1.4       2.744      3.8416
23    3    −1.6      −4.096      6.5536
24    1    −3.6     −46.656    167.9616
25    6     1.4       2.744      3.8416
26    7     2.4      13.824     33.1776
27    2    −2.6     −17.576     45.6976
28    3    −1.6      −4.096      6.5536
29    6     1.4       2.744      3.8416
30    6     1.4       2.744      3.8416
Total    138    0    −63.84    842.496


Similarly, the sum of the weighted deviations to the fourth power is 24,850.48, so the coefficient of kurtosis is given by

g2 = 24,850.48 / (43 × (4.2264)⁴) = 1.811

It can be concluded that the age data are skewed to the left to a small degree, that the data distribution is much less peaked than a normal distribution, and that the skewing is almost the same as in the traffic data, while the distribution is flatter than that for the traffic data.

Example 3. Table 2.19 shows data collected in a survey for forty households, from which information was collected on the number of persons in the household, the household’s income, and the number of motorised vehicles available to the household for private use. Determine the mean, variance, and standard deviation for each of the three variables. Determine the covariance between each of persons and vehicles and income and vehicles, and also estimate the correlation coefficient and coefficient of determination for these two pairs of measures. In addition, determine whether or not each of the distributions of persons, vehicles, and income is skewed, and how peaked the distributions are in relation to the normal distribution.

The totals of each variable are shown at the bottom of Table 2.19, from which the means can be calculated for each measure. It can be seen that the total number of people in the forty households is ninety-six, which yields an average of 2.4 persons per household. The total number of vehicles is fifty-one, so the mean is 1.275. The total of the annual household incomes is $1,658,435, from which the average annual household income of these households can be determined to be $41,461.

Table 2.18 Deviations from the mean for children’s ages

Age    Number of children    Deviation from mean    (Deviation)³    (Deviation)⁴
 5    3    −7.0851    −355.6634    2519.9127
 6    4    −6.0851    −225.3225    1371.1113
 7    3    −5.0851    −131.4922     668.6520
 8    2    −4.0851     −68.1726     278.4925
 9    1    −3.0851     −29.3637      90.5901
10    5    −2.0851      −9.0654      18.9022
11    4    −1.0851      −1.2777       1.3864
12    1    −0.0851      −0.0006       0.0001
13    3     0.9149       0.7658       0.7006
14    2     1.9149       7.0216      13.4456
15    7     2.9149      24.7667      72.1923
16    3     3.9149      60.0012     234.8983
17    5     4.9149     118.7251     583.5210
18    4     5.9149     206.9383    1224.0179


Table 2.19 Data on household size, annual income, and number of vehicles for forty households

ID number    Number of persons    Household income    Number of vehicles
 1    2     $59,378    1
 2    5     $67,899    2
 3    2     $53,497    1
 4    1     $49,201    1
 5    1     $14,346    1
 6    2     $10,950    1
 7    1     $16,370    1
 8    3     $50,344    2
 9    1         $0     1
10    2     $22,329    1
11    2     $17,344    1
12    1     $37,892    1
13    4     $47,132    1
14    2     $31,250    1
15    2     $20,578    1
16    2     $23,921    1
17    2     $18,752    1
18    2     $39,876    2
19    2     $21,554    1
20    4     $33,142    1
21    4     $27,808    2
22    3     $87,361    2
23    3     $61,027    2
24    2     $15,652    1
25    3     $47,588    1
26    2     $50,033    2
27    4     $57,925    1
28    4     $28,794    1
29    2     $49,961    1
30    1      $9,032    1
31    2    $115,873    1
32    4     $93,588    3
33    1     $17,891    1
34    4     $22,422    1
35    2     $19,877    1
36    1    $106,251    1
37    4     $79,069    2
38    2     $25,145    1
39    2     $19,822    2
40    3     $87,561    1
Total            96    $1,658,435         51
Squared total   278    100,021,447,013    75


A short cut method for estimating the sum of the squared deviations from the mean is given by equation (2.40):

\[ \sum_{i=1}^{n} (x_i - \bar{x})^2 = \sum_{i=1}^{n} x_i^2 - n\bar{x}^2 \tag{2.40} \]

The squared totals are shown in the last row of Table 2.19. From this the variance of the number of people in the household is found to be

Var(HHSIZE) = (278 – 40 × (2.4)²) / 39 = 47.6 / 39 = 1.221

The standard deviation for household size is ±1.105 persons per household.

Similarly, the variance and standard deviation of income are estimated as follows:

Var(INCOME) = (100,021,447,013 – 40 × (41,461)²) / 39 = 801,571,302

The standard deviation for income is ±$28,312.03. Finally, for vehicles, the same procedure shows that the variance is 0.256 and the standard deviation is ±0.506 vehicles per household.

Next, the question asks for the covariances between each of household size and vehicles and income and vehicles. The covariances can be estimated, as seen before, by determining the deviations from the mean and then multiplying the appropriate values together. Table 2.20 shows the deviations of interest for this problem. From the covariances shown in Table 2.20, and the variances estimated previously, the correlation, R, is found to be:

for persons per household and vehicles per household,

R = 0.24/(1.105 × 0.506) = 0.441

for income per household and vehicles per household,

R = 5358.6/(28312.03 × 0.506) = 0.384

The coefficients of determination for these two pairs of variables are:

for persons per household and vehicles per household,

R2 = 0.194

for income per household and vehicles per household,

R2 = 0.147

These correlations and coefficients of determination show a very low level of relationship between persons and vehicles in a household and between income and vehicles in a household, with the latter a slightly less strong relationship than the former.

The skewness of each of the variables – persons per household, vehicles per household, and income per household – is computed by using equation (2.36). This produces the values of 0.567 for persons per household, 1.657 for vehicles per household, and 0.977 for income per household. This shows that all three of the variables – household size, vehicles, and income – are skewed to the right.


Table 2.20 Deviations needed for covariance and correlation estimates

ID number    Household size    Income    Vehicles    Size × vehicles    Income × vehicles
 1    −0.4     $17,917    −0.275     0.11     −4,927.21
 2     2.6     $26,438     0.725     1.885    19,167.64
 3    −0.4     $12,036    −0.275     0.11     −3,309.93
 4    −1.4      $7,740    −0.275     0.385    −2,128.53
 5    −1.4    −$27,115    −0.275     0.385     7,456.59
 6    −0.4    −$30,511    −0.275     0.11      8,390.49
 7    −1.4    −$25,091    −0.275     0.385     6,899.99
 8     0.6      $8,883     0.725     0.435     6,440.27
 9    −1.4    −$41,461    −0.275     0.385    11,401.74
10    −0.4    −$19,132    −0.275     0.11      5,261.27
11    −0.4    −$24,117    −0.275     0.11      6,632.14
12    −1.4     −$3,569    −0.275     0.385       981.44
13     1.6      $5,671    −0.275    −0.44     −1,559.56
14    −0.4    −$10,211    −0.275     0.11      2,807.99
15    −0.4    −$20,883    −0.275     0.11      5,742.79
16    −0.4    −$17,540    −0.275     0.11      4,823.47
17    −0.4    −$22,709    −0.275     0.11      6,244.94
18    −0.4     −$1,585     0.725    −0.29     −1,149.03
19    −0.4    −$19,907    −0.275     0.11      5,474.39
20     1.6     −$8,319    −0.275    −0.44      2,287.69
21     1.6    −$13,653     0.725     1.16     −9,898.33
22     0.6     $45,900     0.725     0.435    33,277.59
23     0.6     $19,566     0.725     0.435    14,185.44
24    −0.4    −$25,809    −0.275     0.11      7,097.44
25     0.6      $6,127    −0.275    −0.165    −1,684.96
26    −0.4      $8,572     0.725    −0.29      6,214.79
27     1.6     $16,464    −0.275    −0.44     −4,527.63
28     1.6    −$12,667    −0.275    −0.44      3,483.39
29    −0.4      $8,500    −0.275     0.11     −2,337.53
30    −1.4    −$32,429    −0.275     0.385     8,917.94
31    −0.4     $74,412    −0.275     0.11    −20,463.33
32     1.6     $52,127     1.725     2.76     89,919.29
33    −1.4    −$23,570    −0.275     0.385     6,481.72
34     1.6    −$19,039    −0.275    −0.44      5,235.69
35    −0.4    −$21,584    −0.275     0.11      5,935.57
36    −1.4     $64,790    −0.275     0.385   −17,817.28
37     1.6     $37,608     0.725     1.16     27,265.89
38    −0.4    −$16,316    −0.275     0.11      4,486.87
39    −0.4    −$21,639     0.725    −0.29    −15,688.18
40     0.6     $46,100    −0.275    −0.165   −12,677.53
Sum of cross-products                9.6      214,345.38
Covariance                           0.24       5,358.63


Using equation (2.39) to estimate the excess kurtosis, we find that for persons per household this is −0.636, for vehicles per household it is 2.018, and for income it is 0.292. Taking the skewness and excess kurtosis together, this shows the following results: the distribution for persons per household is skewed to the right and is platykurtic (less peaked than the normal distribution); the distribution for vehicles per household is more skewed to the right, and is leptokurtic (much more peaked than a normal distribution); and the distribution for income per household is also skewed to the right and is slightly leptokurtic (not much more peaked than a normal distribution).

Alternatives to Sturges’ rule

It was noted earlier that there are some alternatives to Sturges’ rule for determining the number of intervals into which to classify continuous data that are to be used in discrete classes. Two rules may be used to determine the class width, and the number of intervals can be calculated by dividing the range of the data by the width of an interval. Scott (1979) has proposed that the width of the interval should be based on the standard deviation, as shown in equation (2.41):

h = 3.5s n^(−1/3)    (2.41)

A second method, due to Freedman and Diaconis (1981), is shown in equation (2.42):

h = 2(IQ) n^(−1/3)    (2.42)

IQ in equation (2.42) is the interquartile range. Given that it has already been seen that the standard deviation tends to be somewhat smaller than the interquartile range, these two alternative derivations will give similar results for the interval width. The number of intervals is then determined from k = R/h for either definition of the interval width.

As an illustration of the use of these formulae, we previously used Sturges’ rule to determine the number of intervals for the temperature data. It was found, from Sturges’ rule, that the number of intervals should be six, given that we had thirty observations. The range is 12°C. The interquartile range is 4°C and the standard deviation is 2.75°C. Using Scott’s rule, the interval width should be

h = 3.5 × 2.75 × (30)^(−1/3) = 3.09

From this, we can determine that the number of intervals should be 12 / 3.09, or four intervals. This is noticeably fewer than Sturges’ rule suggested. Similarly, if we apply the rule of Freedman and Diaconis, we obtain:

h = 2 × 4 × (30)^(−1/3) = 2.57

This would suggest five intervals, which is still fewer than the six obtained from Sturges’ rule.


For the income data from 900 households, the range is from zero to $104,000. The interquartile range is $70,200 and the standard deviation (using the midpoints of each range, which is not strictly correct, but is necessary if we are to estimate a standard deviation) is $32,059. The mean income from the 900 households is $52,123.6. Scott’s rule would suggest that the interval width should be

h = 3.5 × 32,059 × (900)^(−1/3) = $11,622

Given that the range is $104,000, this suggests that there should be nine intervals of just over $11,600 each. The rule of Freedman and Diaconis would result in

h = 2 × 70,200 × (900)^(−1/3) = $14,542

This would reduce the number of intervals to seven, rather than the eleven used in the data collection, and the nine suggested by Scott’s rule.
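
Both rules are straightforward to script. The sketch below reproduces the two worked examples; the helper that turns the range divided by the width into a number of intervals rounds to the nearest whole number, which is the convention the worked figures above appear to follow.

```python
# A minimal sketch of Scott's rule and the Freedman-Diaconis rule for class width.
def scott_width(std_dev, n):
    return 3.5 * std_dev * n ** (-1 / 3)

def freedman_diaconis_width(iqr, n):
    return 2.0 * iqr * n ** (-1 / 3)

def n_intervals(data_range, width):
    # The worked examples in the text round to the nearest whole number of intervals.
    return round(data_range / width)

# Temperature data: n = 30, range 12, interquartile range 4, standard deviation 2.75.
print(scott_width(2.75, 30), n_intervals(12, scott_width(2.75, 30)))                    # about 3.09 and 4
print(freedman_diaconis_width(4, 30), n_intervals(12, freedman_diaconis_width(4, 30)))  # about 2.57 and 5

# Income data: n = 900, range 104,000, interquartile range 70,200, standard deviation 32,059.
print(scott_width(32059, 900), n_intervals(104000, scott_width(32059, 900)))                      # about 11,622 and 9
print(freedman_diaconis_width(70200, 900), n_intervals(104000, freedman_diaconis_width(70200, 900)))  # about 14,542 and 7
```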


3 Basic issues in surveys

3.1 Need for survey methods

Much scientific enquiry needs data. Data are needed to permit scientists to test theories and hypotheses about phenomena around them, to estimate models that might be used to predict the behaviour of various phenomena, to calibrate or tune the models as new information becomes available or as the situation changes, to validate models, to describe situations, to develop ways of simulating the real world in laboratory or controlled conditions, and so forth. Some areas of science and engineering can develop data through undertaking laboratory experiments. In such cases, there is no need for a survey to collect the relevant data; rather, the data are collected from observations and measurements made during the laboratory experiments. However, in many instances in both science and engineering, as well as in most instances in the social sciences, laboratory experiments are either not possible or will not provide the information needed.

When laboratory experiments are not feasible or possible, observation of the real-world phenomenon of interest becomes necessary. There may be limited situations in which observation can be made of the universe of interest. However, in most cases, this is not possible, and it is necessary, instead, to measure some subset of the universe. In the same way that it is not usually possible to measure the universe, it is also often not possible to measure everything that may be relevant about the universe, or those members of the universe that are measured in a survey. For example, one might be interested in measuring the population of Sydney, Australia, and desire to measure everything relating to their purchasing patterns for groceries and other non-capital items. However, the population of Sydney is approximately 4 million people in about 1.3 million households. Measuring all those people would be much too expensive. Further, it may be desired to measure their buying behaviour for a period of the past twelve months, and also to obtain information on a wide variety of social and demographic descriptors of the households and people. However, to measure all the desired information is, again, likely to prove too burdensome for people to provide. Therefore, in two respects, it becomes necessary to reduce what is measured: one must measure a sample of the population, and one must measure only as much about that population as it can reasonably be expected that people will be willing to tell.


The first thing that this reduced measurement leads to is the requirement that the data collected should be representative of the entire population. Although representativeness, as is discussed later in this book, is not always a requirement of surveys, when the issue at hand is the measurement of a large population through a sample and the interest is in describing the full population, then representativeness is essential. A useful dictionary definition of being representative is that it entails exhibiting a likeness, or being typical. Without, for the moment, delving further into what makes a sample representative, it can be stated that increasing the sample size will generally increase the representativeness of the sample. However, increasing the sample size also increases the costs. This gives rise to a basic tension in survey design: how to obtain greater representativeness without unduly increasing the cost.

3.1.1 A definition of sampling methodology
Survey sampling methodology can be defined as the science of choosing a sample that provides an acceptable compromise between sample cost and sample representativeness. Fortunately, representativeness increases less and less as the sample size grows larger, so that compromise can be reached, usually, at relatively modest sample sizes. Nevertheless, as is discussed in the next section, the cost of any survey, particularly of human populations in which the human subjects provide responses to survey questions, is large.

3.2 Surveys and censuses

As was defined in Chapter 1, a survey concerns a sample of the population such that the fraction of the population to be surveyed is always less than 100 per cent. A census involves the measurement of every member of the population. Thus, it is axiomatic that a sampling fraction of 100 per cent will define a census. In its original form, a census was simply a head count. However, a correct definition of a census is that it is an enumeration (or count) of the population, together with statistics relating to the members of the population. In most countries around the world, today, a census of the population is undertaken every five or ten years, usually collecting a fairly wide range of data about the members of the population. In some cases, the census may be designed so that the information about the population is collected from everyone, while, in other cases, much of the information may be collected only from a sample.

Censuses must necessarily be expensive to conduct, if for no other reason than the sheer size of the task to be undertaken, and the significant amount of time they take to conduct. Often, the use of a sample survey is preferred for reasons of:

(1) economy;
(2) speed and the timeliness with which the data can be produced;
(3) feasibility – some types of measurement are destructive (e.g., sampling specimens of the concrete used to construct a building and testing to determine the force required to break the concrete); and


(4) quality and accuracy, when it is simply not possible to hire and train sufficient personnel to conduct a good census (Kish, 1965).

One might ask why, given these advantages of sample surveys, a census would ever be done. Perhaps the strongest reason for conducting a census is that the average person does not understand that a small sample can be representative of a large population. Therefore, sample surveys, especially small samples, are often not credible to the public at large. The advantages of a census, as outlined by Kish (1965), are:

(1) data can be obtained for small units of the population, such as individual suburbs of a large metropolitan area;

(2) public acceptance of the resulting data may be easier to attain;
(3) public compliance with the requirements to complete the census forms may be easier to achieve (in most countries, completion of the national census is required by law);

(4) problems of complete coverage may be easier to avoid or correct (although national censuses typically have serious problems with the homeless, and with illegal immigrants); and

(5) sampling statistics and those qualified to use them are not required.

Given these competing merits of sample surveys and censuses, it is worth looking at the relative costs and time requirements of each. However, in doing this, it must be kept in mind that both the cost and time requirements will be affected significantly by the amount of information to be collected, the response rate that is achievable (particularly in a survey that is ‘voluntary’, compared to a census that is ‘compulsory’), and the size of the workforce that can be put in place to undertake the survey or census.

3.2.1 Costs
Three contrasting censuses are offered by considering the United States (up to 2000), Australia, and the United Kingdom. In the United States, the census is carried out every ten years and involves the collection of a small amount of data about every member of the population, and much more detailed information about a sample of the population. Everyone receives what is known as the short form, while 16.7 per cent of the population receives the long form, which collects the very detailed information. The approximate cost of the 2000 census in the United States was $6.5 billion (US Census Bureau, 2005) over the decade from 1990 to 2000 for planning and executing the census. The population covered was approximately 281 million people. This equated to about $23 per person, or about $52 per household.

In Australia, the census is carried out every five years, and all members of the population provide detailed information about themselves, their households, and the homes in which they live. The cost of the 2001 census was A$240 million (ABS, 2005) for a population of approximately 20 million people. This equates to about $12 per person, or about $27 per household.


In the United Kingdom, the most recent census for which cost figures are available, in April 2001, cost £255 million for a population of about 59.6 million people – a cost of £4.30 per person, or about £9.40 per household. For all three censuses, these per capita costs are very low, and they show the economies of scale that exist in a census. However, the total magnitude of the costs is very large.

The author’s field of work is in transport, and surveys are frequently undertaken in this field. These surveys usually entail collecting social and demographic data from the household and the persons in the household, and then collecting detailed data about everywhere that each member of the household travelled during a specific day, including the purpose of the travel, how they travelled, when they travelled, and with whom they travelled. Such a survey, if conducted by post, currently costs upwards of A$100 per complete household, whereas, if the information is collected by a face-to-face interview, it will cost well over A$300 per completed household. In the United States, the current costs of conducting a telephone survey are on the order of US$175 and upwards. The typical cost of a household travel survey, which would normally involve a sample of 2,000 to 5,000 households, is usually between about $500,000 and $1 million.

3.2.2 Time
Surveys tend not only to be expensive to conduct but also to take a long time to undertake. In general, the US Decennial Census takes about three years from initiating the collection of the data to the publication of the initial data. Taking into account the time for testing the instruments and procedures, it will often take five years or longer to complete the census. The Australian Bureau of Statistics (ABS), working with a much smaller population, still takes more than a year from the start of the data collection procedure to the initial release of data, with releases then continuing usually for another couple of years after that. Planning for the next census usually commences before the last data tabulations have been released, so that the five-year cycle for the ABS is, effectively, an ongoing process.

The typical household travel survey takes about eighteen months to two years to complete. A statewide survey undertaken in Michigan in the United States in the early part of the twenty-first century took approximately two years to complete for a sample of around 15,000 households. A survey of the North Central Texas Council of Governments region in the mid-1990s took almost three years to complete. In New South Wales, Australia, there is a continuing household travel survey. This survey has now been running for thirteen years (as of 2010), with data being collected throughout the year every year. For this survey, it takes approximately eighteen months from when data are collected until they are available for release. Data collection actually takes a year at a time, so that the total time from commencing data collection until release of data is between two and two and a half years.

All this serves to illustrate the fact that undertaking surveys of human populations is not a trivial exercise. The time required to plan, design, execute, and analyse the survey is substantial. Not only that, but the public and private sectors both tend not to allow sufficient time or money for data collection. For some reason, data collection is viewed by many public and private agencies as a rather uninteresting exercise, albeit one that cannot be forgone. One result of this is that it is almost universally the case that agencies seeking data collection will budget neither sufficient time nor money to undertake the survey (Stopher and Metcalf, 1996).

3.3 Representativeness

Most surveys have, as a key requirement, the need to be representative of the population of interest. In statistical terms, representativeness can be defined in the following way:

(1) sample means are statistically no different from population means;
(2) sample variances are statistically no different from population variances; and
(3) sample covariances are statistically no different from population covariances.

To put this a little more simply, a sample is representative if the mean of any characteristic of interest measured in the sample is statistically equal to the mean of that characteristic as it would be measured in the entire population. Similarly, if one were to estimate the variance of any characteristic of interest in the sample, the value should be statistically equal to the variance of the whole population. Moreover, if one were to measure the covariance between two characteristics in the sample, it should have statistically the same value as the covariance between those two characteristics in the entire population. (Recall that the mean, variance, and covariance are explained in some detail in Chapter 2.)

However, one may ask what is meant by being ‘statistically equal’ or ‘statistically no different’. Statistics recognises that, when dealing with samples from a large population, there is always going to be some level of imprecision in estimating any value from the sample that one might wish to use to describe the population. Hence, these statements of ‘no difference’ or ‘equality’ are interpreted as being approximately no different or the same, as is considered reasonable when dealing with a sample. How much imprecision one would accept for estimates from a sample depends on the sample size, with a greater degree of imprecision in the value being expected for smaller sample sizes, while larger sample sizes would be associated with relatively little imprecision in the value.
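
One concrete way to operationalise ‘statistically no different’ for a mean is to ask whether the population value falls inside a confidence interval built around the sample estimate, with the interval widening as the sample gets smaller. The sketch below is a simplified illustration of that idea, assuming a 95 per cent level, a normal approximation (the 1.96 multiplier), and hypothetical data; it is not the only, or necessarily the preferred, test.

```python
# A minimal sketch: is a sample mean 'statistically no different' from a known
# population mean?  Uses a 95 per cent confidence interval with a normal approximation.
import math

def sample_mean_ci(sample, z=1.96):
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)   # sample variance
    half_width = z * math.sqrt(var / n)                    # z times the standard error
    return mean - half_width, mean + half_width

# Hypothetical sample of household sizes, and an assumed population mean of 2.6 persons:
sample = [1, 2, 2, 3, 4, 2, 1, 3, 2, 5, 2, 3]
low, high = sample_mean_ci(sample)
print("not statistically different" if low <= 2.6 <= high else "statistically different")
```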

As Yates (1965) suggests, one might think that the best way to achieve representativeness would be to make a very purposeful selection of elements of the population for the sample. If one concentrates on the first requirement for representativeness, this might be accomplished by choosing elements for the sample that are thought to be close to the average, or ‘typical’. However, such a selection will certainly violate the second requirement for representativeness, in that the variance of the sample will be much less than that of the population. It will probably also violate the third requirement, in that covariances will not be similar to the population’s values. Another possibility is that the analyst chooses a sample in which not only are some elements chosen that would seem to be typical or average, but also others that are believed to represent the extremes. Again, this is likely to result in violation of most, if not all, of the conditions of representativeness. In addition, as Yates (1965) also points out, there may be both conscious and unconscious biases inherent in the way in which a purposeful sample is drawn. The analyst may have a subconscious or conscious desire to prove a point or provide evidence for a particular policy or decision, and may intentionally or unintentionally bias the selection of sample elements towards this desire.

Representativeness can generally be achieved when the probability that any element of the population has of being included in or excluded from the sample is known. Knowing this probability would be difficult under conditions such as those described in the preceding paragraph, when the person drawing the sample is making conscious or unconscious judgements as to which elements to include and which to exclude. In fact, the only well-understood method of having known probabilities of inclusion or exclusion is one of using random sampling.

3.3.1 Randomness
The topic of randomness is a fascinating one, but it is beyond the scope of this book to do justice to it. The interested reader is referred to other sources for more information on the concept, such as Bennett (1998), Chaitin (1975), or Cover (1974). The concept of randomness is tied up implicitly with ‘fairness’. From very young ages, children use such methods as drawing straws, tossing a coin, or ‘one potato, two potato’ as a means to select someone under the notion of fairness. Such procedures are attempts at using randomness to make a choice that is considered fair, or, if one likes to put it in such a way, one in which there is an equal probability that anyone can get chosen. Many games played by children and adults are anchored in randomness, such as any game using dice, or spinners, or other similar devices to pick a number. While games of chance and the use of such devices as dice and straws are extremely old, definitions of what is meant by randomness are very recent. Early definitions of randomness were put forward in the eighteenth century, but clear definitions of randomness did not really come forth until the late twentieth century.

Although dictionaries will often describe randomness as accidental or haphazard, these are actually rather different concepts from true randomness. One of the earlier definitions of randomness has to do with a lack of pattern or predictability. This can be demonstrated with an illustration used by Chaitin (1975). In his paper, Chaitin proposes two alternative sequences of binary digits:

01010101010101010101

01101100110111100010

The first of these shows a repetition of the number sequence ‘01’ ten times. It appears that this is constructed from a simple rule that is completely predictable. If one were to be asked to extend this sequence, one could, with certainty, suggest that it would continue as ‘01’ repeated as many times as desired. However, the second sequence does not provide such an obvious pattern. Indeed, one cannot discern a pattern in it, and it is not possible to predict with any certainty what the next two digits will be. The arrangement is, therefore, apparently a random assortment of zeroes and ones. There is no pattern and no predictability.

In fact, the second series was generated by tossing a coin twenty times; the ones represent heads and the zeroes represent tails. As has already been alluded to, the tossing of a coin is generally regarded as a random process. So, one might propose that this is a good definition of randomness. Admittedly, there are some problems with such a definition, as discussed by Chaitin (1975). However, they are not considered here, nor is it intended to seek for more precise definitions of randomness. For the purposes of defining randomness for survey sampling, it is sufficient to assert that randomness means a lack of pattern or predictability. It is not haphazard, nor accidental, but it is unpredictable and patternless.

3.3.2 Probability sampling
Representativeness is achieved generally through the strict application of probability sampling. Probability sampling means that the probability of each element in the population being sampled is known, and this knowledge can be attained through the application of randomness in the choice process. Probability samples have the important property of being measurable, which means that one can draw statistical inferences about the population from which they are drawn, based on measures of variability that are estimated from the sample data.

There are two primary cases of probability sampling. The first of these is equal probability sampling and the second is unequal probability sampling. In the first instance, equal probability sampling, this is also known as EPSEM (equal probability of selection method) sampling. It involves either giving every member of the population an equal probability of being selected, or having variable probabilities that compensate for each other in the various stages of a multistage selection. A multistage selection could, for example, involve sampling counties, then blocks, and then dwellings. If the probabilities of each county were proportional to the proportion of total dwellings in the county, and the probabilities of each block proportional to the proportion of total dwellings in the block, this would mean that each county and each block would have different probabilities of being sampled, but that the final sampling of dwellings would have equal probabilities for the dwellings. A special case of EPSEM sampling is simple random sampling (SRS). In SRS, random drawings from the population are used to select the sample and achieve equal probabilities of selection for the entire population.
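
As a rough illustration of these ideas, the sketch below draws a simple random sample from a dwelling list and then performs a two-stage selection in which blocks are drawn with probability proportional to their dwelling counts and a fixed number of dwellings is drawn within each selected block, so that dwellings end up with roughly equal overall selection probabilities. All names and counts are hypothetical, and the PPS routine is a simplified one for illustration only.

```python
# A minimal sketch of simple random sampling and a two-stage selection in which
# blocks are drawn with probability proportional to size (PPS).  Names and
# dwelling counts are hypothetical.
import random

random.seed(1)  # fixed seed so the example is reproducible

# Simple random sampling (SRS) without replacement from a listing of 100 dwellings.
dwellings = [f"dwelling_{i}" for i in range(1, 101)]
srs = random.sample(dwellings, 10)

def pps_without_replacement(sizes, k):
    # Successive PPS draws without replacement (adequate for illustration;
    # production designs use more careful selection schemes).
    remaining = dict(sizes)
    chosen = []
    for _ in range(k):
        total = sum(remaining.values())
        r = random.uniform(0, total)
        cumulative = 0.0
        for unit, size in remaining.items():
            cumulative += size
            if r <= cumulative:
                chosen.append(unit)
                del remaining[unit]
                break
    return chosen

# Two-stage selection: two blocks by PPS, then five dwellings per selected block,
# which keeps the overall selection probability of a dwelling roughly equal.
blocks = {"block_A": 400, "block_B": 250, "block_C": 150, "block_D": 200}
second_stage = {
    b: random.sample([f"{b}_dwelling_{i}" for i in range(1, blocks[b] + 1)], 5)
    for b in pps_without_replacement(blocks, 2)
}
print(srs)
print(second_stage)
```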

The other principal form of probability sampling is unequal probability sampling, which may be the unintended result of simple random sampling when there are real-world deficiencies in the sampling, or the result of an intentional procedure of using different probabilities to improve some other aspect of the results of sampling. This is dealt with in detail in Chapter 9. Nevertheless, with unequal probabilities, the sample is still a probability sample and is still measurable. Samples that are not probability samples can, in some cases, approximate a probability sample, in which case they are also approximately measurable, while others do not approximate probability samples and are definitely not measurable.

The sole method of ensuring a probability sample is the use of randomness. There is simply no other way to ensure that the probabilities of elements of the population being sampled are known. This means that it is quite critical that there is a known method to generate random numbers for use in drawing a sample.

Sources of random numbers
Unlike the situation when many of the current books on survey sampling were written, there appear now to be numerous easy ways of producing random numbers. However, this is deceptive. While most scientific calculators and most computer spreadsheets have the capability to produce random numbers, most of these are not truly random. They may serve for a relatively small drawing of random numbers, but, as the sample size grows, patterns emerge; in addition, there are problems with the distribution of numbers, as predictability also appears. In the author’s opinion, the best source of random numbers is the 1 million random digits produced by the RAND Corporation (RAND, 1955), which are now available on the Web at http://ftp.rand.org/software_and_data/random/digits.txt. These random digits are free to copy and paste into a spreadsheet or into any other desired format. They were produced by the RAND Corporation by building a special machine to produce the random numbers. The results were subjected to all known tests for randomness at the time they were produced, and only when all the tests had been satisfied were the numbers released. This involved reconstructing the random-number-generating machine several times. A small extract of the numbers is shown in Figure 3.1.

It should be noted that the first column of five digits is simply a line number, and is not a set of random digits. It is provided so that one can easily determine whereabouts one is in the set of random digits. Each row then consists of ten groups of five random digits. They are grouped into fives for convenience only and to make them readable, but there is no requirement to use them in groups of five. In all, there are 20,000 such lines, numbered from 00000 to 19999. Subsequently, the use of these random numbers is discussed in much more detail.
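
As one possible way of putting the published digits to work, the sketch below assumes the table has been saved locally as digits.txt in the layout of Figure 3.1 (a five-digit line number followed by ten groups of five digits per line). It strips the line numbers, runs the digits together, and reads them off in pairs to select persons numbered 1 to 100, treating ‘00’ as 100 and skipping repeats; the file name and those reading conventions are assumptions for illustration, not the only way the digits can be used.

```python
# A minimal sketch of drawing a sample of ten persons (numbered 1-100) from the
# RAND random digits, assuming they have been saved locally as 'digits.txt' in the
# layout of Figure 3.1: a line number followed by ten groups of five digits.
def digit_stream(path):
    with open(path) as f:
        for line in f:
            groups = line.split()[1:]        # drop the leading line number
            for ch in "".join(groups):
                yield ch

def draw_sample(path, sample_size, start_line=20):
    stream = digit_stream(path)
    # Skip ahead to the chosen starting line (fifty digits per line).
    for _ in range(start_line * 50):
        next(stream)
    chosen = []
    while len(chosen) < sample_size:
        pair = next(stream) + next(stream)            # read the digits two at a time
        person = 100 if pair == "00" else int(pair)   # '00' treated as person 100 (assumed convention)
        if person not in chosen:                      # discard repeats
            chosen.append(person)
    return chosen

print(draw_sample("digits.txt", 10))
```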

3.4 Errors and bias

Errors exist in all sample surveys by the very nature of a sample survey. They exist because a sample can never be a complete representation of the population from which it is drawn. Error is the amount by which the sample fails to represent the population. Error can be of two types: systematic or random. Systematic error is usually termed bias. Bias is undesirable, and steps should be taken to avoid it as far as humanly possible.


00000 10097 32533 76520 13586 34673 54876 80959 09117 39292 74945

00001 37542 04805 64894 74296 24805 24037 20636 10402 00822 91665

00002 08422 68953 19645 09303 23209 02560 15953 34764 35080 33606

00003 99019 02529 09376 70715 38311 31165 88676 74397 04436 27659

00004 12807 99970 80157 36147 64032 36653 98951 16877 12171 76833

00005 66065 74717 34072 76850 36697 36170 65813 39885 11199 29170

00006 31060 10805 45571 82406 35303 42614 86799 07439 23403 09732

00007 85269 77602 02051 65692 68665 74818 73053 85247 18623 88579

00008 63573 32135 05325 47048 90553 57548 28468 28709 83491 25624

00009 73796 45753 03529 64778 35808 34282 60935 20344 35273 88435

00010 98520 17767 14905 68607 22109 40558 60970 93433 50500 73998

00011 11805 05431 39808 27732 50725 68248 29405 24201 52775 67851

00012 83452 99634 06288 98083 13746 70078 18475 40610 68711 77817

00013 88685 40200 86507 58401 36766 67951 90364 76493 29609 11062

00014 99594 67348 87517 64969 91826 08928 93785 61368 23478 34113

00015 65481 17674 17468 50950 58047 76974 73039 57186 40218 16544

00016 80124 35635 17727 08015 45318 22374 21115 78253 14385 53763

00017 74350 99817 77402 77214 43236 00210 45521 64237 96286 02655

00018 69916 26803 66252 29148 36936 87203 76621 13990 94400 56418

00019 09893 20505 14225 68514 46427 56788 96297 78822 54382 14598

00020 91499 14523 68479 27686 46162 83554 94750 89923 37089 20048

00021 80336 94598 26940 36858 70297 34135 53140 33340 42050 82341

00022 44104 81949 85157 47954 32979 26575 57600 40881 22222 06413

00023 12550 73742 11100 02040 12860 74697 96644 89439 28707 25815

00024 63606 49329 16505 34484 40219 52563 43651 77082 07207 31790

Figure 3.1 Extract of random numbers from the RAND Million Random Digits


Random error, on the other hand, is always present and is a function of sampling. It is also directly a function of the sample size, such that it decreases as the sample size grows larger. Random error is also known as sampling error, because it is present only for samples. By definition, a census has no sampling error. In a sample survey, random error can be reduced by increasing the sample size, whereas systematic error or bias cannot generally be reduced by increasing the sample size, and may actually increase as the sample size increases. Sampling error may be calculated, on the basis of certain assumptions about the sample and its characteristics. As a result, it is usually possible to know the amount of sampling error in any sample that is drawn according to correct principles of probability sampling. On the other hand, bias cannot usually be estimated, is not necessarily dependent on characteristics of the population, and may be unknown even as to its existence, under certain circumstances.

A simple illustration of a systematic error or bias is provided if one were, for example, measuring the length of a sample of garden hoses produced by a mechanical manufacturing process. If the tape measure being used to measure the lengths of the garden hoses has, in fact, stretched as a result of repeated use, so that what it measures as 1 centimetre (cm) is actually 1.1 cm, then all measurements of the lengths of hoses will be underestimated. If the person doing the measuring is unaware of the stretching of the tape measure, then this error will actually be an unknown error. It will not be reduced by sampling more garden hoses, because, no matter how many are sampled, the tape measure is still stretched by 10 per cent. Therefore, all measurements with this tape measure will be wrong by 10 per cent. Even a census of the garden hoses produced will be in error by the same amount.
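
The difference between this kind of systematic error and ordinary sampling error can be seen in a small simulation: hoses with a known true mean length are ‘measured’ with a tape that under-reads by about 10 per cent plus some random reading error. Increasing the number of hoses measured shrinks the random error in the estimated mean but leaves the bias untouched. All of the lengths and error magnitudes below are hypothetical.

```python
# A minimal sketch: random (sampling) error shrinks with sample size, systematic
# error (bias) does not.  All lengths and error magnitudes are hypothetical.
import random

random.seed(2)
TRUE_MEAN = 20.0        # true mean hose length, metres
BIAS_FACTOR = 1 / 1.1   # a stretched tape under-reads lengths by about 10 per cent

def measured_mean(n):
    readings = []
    for _ in range(n):
        true_length = random.gauss(TRUE_MEAN, 0.5)                   # manufacturing variation
        reading = true_length * BIAS_FACTOR + random.gauss(0, 0.2)   # biased tape plus reading noise
        readings.append(reading)
    return sum(readings) / len(readings)

for n in (10, 100, 10_000):
    print(n, round(measured_mean(n), 3))   # settles near 18.2, not the true 20.0
```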

Another example of a systematic bias is one that arises whenever a survey of human populations is carried out by telephone. If the desire is to survey the entire population, then a systematic error is introduced in that those households that have no telephones will be systematically excluded. If households that do not have telephones are different from those that do on characteristics of importance to the survey, the result is a bias in the survey measurements. This bias cannot be reduced by increasing the sample of households that are telephoned in the survey. It can be reduced only by using another method to contact the households with no telephones. For example, suppose a measure of interest was the number of workers in the household, and suppose also that households with no telephone also have fewer workers than those with telephones; then no amount of additional sampling of households with telephones will yield the correct estimate of the number of workers per household in the entire population.

3.4.1 Sample design and sampling error
Given that it is known that sampling error decreases as sample size increases, and that cost increases as sample size increases, the purpose of sample design is to achieve the lowest amount of sampling error for the lowest cost. Usually this will require a compromise, by which the survey designer is willing to accept a certain level of sampling error, in order to keep costs within reason. Sample design also involves seeking methods to reduce sampling error without significantly increasing the sample size. To do this, it is necessary to understand more about the determinants of sampling error. This is dealt with in detail in Chapter 9 of this book. Sampling itself may also have a cost associated with it. More expensive sample designs may have to be forgone, at the expense of increased sampling error, depending on the relative costs.

A very important point that needs to be understood is that sampling error is generally unaffected by the size of the population (unless this is small in comparison to the sample size), and is also unaffected by how any variable of interest in the survey may be distributed. In Chapter 2 the normal distribution was introduced, and it was noteworthy that most of the variables used as examples in the chapter, such as income, maximum and minimum temperatures, and cars passing through a green traffic light phase, did not exhibit distributions that were identical with the normal distribution. It is not a requirement of the statistics of sampling that any variable to be measured should have a normal distribution. What is important is that, if a large number of independent samples were drawn from the population, the resulting distribution of estimates of the mean of the sample would have a normal distribution, or would approach a normal distribution, if sufficient samples were drawn.
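
That last statement is easy to demonstrate by simulation: even when the variable itself is strongly skewed, the means of repeated independent samples cluster around the population mean and follow an approximately normal distribution. The sketch below uses a skewed exponential population purely as an illustration; the population choice, sample size, and number of repetitions are arbitrary.

```python
# A minimal sketch: sample means from a skewed population are approximately
# normally distributed.  The exponential population and sizes are illustrative.
import random
import statistics

random.seed(3)
population_mean = 2.0   # exponential population: mean 2, strongly right-skewed

def one_sample_mean(n):
    return statistics.mean(random.expovariate(1 / population_mean) for _ in range(n))

means = [one_sample_mean(50) for _ in range(5000)]    # 5,000 independent samples of size 50
print(round(statistics.mean(means), 3))    # close to the population mean of 2.0
print(round(statistics.stdev(means), 3))   # close to sigma / sqrt(n) = 2 / sqrt(50), about 0.28
```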

3.4.2 Bias
It is important to note a few points about bias. First, as has already been explained, bias arises from two primary sources:

(1) measurement; and
(2) sampling.

Bias in measurement arises when the measuring device is not accurate. It will arise from physical measurements when a physical measuring device has a systematic error in it, and it will arise in questions when the questions have a systematic error in them. An example of a biased question might be a question that asks what a person’s total income was for the past year. By not clearly stating if this is meant to mean gross income, before any deductions are made, and including income from all sources, not just job-related earnings, some people will report gross income, some will report income net of certain deductions, some may exclude anything other than job-related earnings, while others may include income from a variety of sources. The result will be a biased estimate of income. However, the extent of the bias will never be known.

Sometimes, one hears of the idea of a self-cancelling bias. Reporting bias with respect to income is a good illustration of a bias that could be thought of as being self-cancelling. Assume that a survey is being conducted in which people are asked to report their income. Assume this time that the question is so phrased that there is no doubt as to what income is to be reported. Some people will report their income accurately. However, some will assume that the survey firm will communicate their income information to the tax authorities, and will intentionally give a lower income than their real income. Others will want to have the interviewer think they are really wealthier than they are, and will report their income as being higher than it really is. Others will guess the amount, because they really don’t know, and some will guess high while others will guess low. Taking all this together, some may claim that, although the individual household incomes may not be correct, the overall average income will be. This may or may not be true. Unfortunately, if the analysis that is done with the results were to seek for a relationship between income and, say, spending on groceries, the errors in individual household income reports will result in an erroneous relationship between income and grocery expenditure. Therefore, the overestimates and underestimates do not cancel each other out.

In sampling, there are several potential sources of bias. An intentional or judgemental sample, in which the researcher draws a sample based on presumed knowledge about ‘typical’ members of the population, or about means and variability in the population, will always result in biases, as discussed at the beginning of this chapter. Another source of bias is sampling on the basis of an attribute that is related to the analysis. For example, if a sample is drawn from the telephone book, and the purpose of the survey is to determine the proportion of households that have a telephone, the result is obviously biased. While this is a rather blatant example, which readers of this book would certainly not fall into, there are many situations in which such an error can occur far more subtly, especially if a relationship between the sampling method and the behaviour of interest is not immediately apparent.

An example of biases resulting from incorrect sampling is provided by considering Table 3.1, which shows 100 observations of the heights of students at a university.

Suppose that the researcher wants to draw a sample of ten of these students. First, a sample is drawn that consists of the first five and the last five students, assuming that students are not in any particular order, and that this is an easy method to draw a sample. This produces the sample shown in Table 3.2. Second, another alternative sample is drawn, consisting of the first ten students. This is shown in Table 3.3. Another sample that is drawn is an intentional sample, based on trying to pick students that are in the middle and at the extremes. This is shown in Table 3.4. Finally, Table 3.5 shows a random sample, using the random numbers from Figure 3.1, and starting at line 00020. This produces the following selections: 91, 15, 68, 28, 46, 84, 95, 90, 37, and 20.

The means, medians, and standard deviations for the entire population of Table 3.1 and for each of the samples provided in Tables 3.2 to 3.5 are shown in Table 3.6. These show some of the potential problems from non-random samples.

As can be seen, the first two samples of ten observations produce means and medians that are far too high, while the standard deviations are much too low. The intentional sample does much better on the mean and median, but has a standard deviation that is much too high. The random sample has the closest mean and standard deviation, although the median is a little higher than that of the intentional sample. It is unlikely that any amount of new draws of the intentional sample will bring both the standard deviation and the mean and median into alignment with the population. However, repeated random drawings will produce an average that will be much closer to the population values.
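
That claim about repeated random drawings can be verified directly with the heights of Table 3.1. The sketch below draws the ten-student random sample 1,000 times and averages the resulting means and standard deviations, which come out close to the population figures in Table 3.6 (the average sample standard deviation sits slightly below the population value, as is usual for small samples).

```python
# A minimal sketch: repeated simple random samples of ten students from the
# population of Table 3.1, with their statistics averaged over many draws.
import random
import statistics

heights = [
    178, 175, 183, 183, 180, 178, 170, 165, 185, 163, 173, 175, 160, 170, 188, 183, 180, 173, 196, 180,
    183, 175, 163, 168, 178, 165, 168, 160, 180, 178, 183, 191, 165, 173, 170, 180, 175, 163, 173, 185,
    152, 168, 170, 180, 183, 185, 160, 165, 173, 180, 175, 157, 178, 178, 150, 170, 173, 183, 188, 175,
    170, 173, 165, 183, 168, 175, 178, 160, 155, 185, 173, 165, 152, 157, 168, 160, 155, 180, 183, 191,
    147, 163, 173, 175, 180, 170, 165, 185, 157, 152, 175, 170, 160, 163, 183, 180, 173, 178, 160, 168,
]

random.seed(4)
draws = [random.sample(heights, 10) for _ in range(1000)]
print(round(statistics.mean(statistics.mean(d) for d in draws), 1))    # close to the population mean of 172
print(round(statistics.mean(statistics.stdev(d) for d in draws), 1))   # a little below the population value of 10.2
```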

Another source of bias in sampling comes from discarding sampling units through some subjective process. Again, a rather blatant example could be suggested, in which a survey is being done of shoppers visiting a particular shopping centre. The survey


Table 3.1 Heights of 100 (fictitious) university students (cm)

Person Height   Person Height   Person Height   Person Height   Person Height

1  178    21  183    41  152    61  170    81  147
2  175    22  175    42  168    62  173    82  163
3  183    23  163    43  170    63  165    83  173
4  183    24  168    44  180    64  183    84  175
5  180    25  178    45  183    65  168    85  180
6  178    26  165    46  185    66  175    86  170
7  170    27  168    47  160    67  178    87  165
8  165    28  160    48  165    68  160    88  185
9  185    29  180    49  173    69  155    89  157
10  163   30  178    50  180    70  185    90  152
11  173   31  183    51  175    71  173    91  175
12  175   32  191    52  157    72  165    92  170
13  160   33  165    53  178    73  152    93  160
14  170   34  173    54  178    74  157    94  163
15  188   35  170    55  150    75  168    95  183
16  183   36  180    56  170    76  160    96  180
17  180   37  175    57  173    77  155    97  173
18  173   38  163    58  183    78  180    98  178
19  196   39  173    59  188    79  183    99  160
20  180   40  185    60  175    80  191    100  168

Table 3.2 Sample of the first and last five students

Person Height   Person Height

1  178    96  180
2  175    97  173
3  183    98  178
4  183    99  160
5  180    100  168

Table 3.3 Sample of the first ten students

Person Height   Person Height

1  178    6  178
2  175    7  170
3  183    8  165
4  183    9  185
5  180    10  163


firm has hired a group of undergraduate male students to undertake the survey and has told them to intercept every tenth person entering the shopping centre and interview them. In counting the persons entering the shopping centre, the students may reject certain people, on the basis that they think they will be hard to interview, or may tend to always pick attractive young female shoppers to interview, justifying their selection by skewing their counts so that they invariably end with such a person as the ‘tenth’ person to interview.

Perhaps the most troublesome of sources of bias in surveys is that of nonresponse. This is such a significant issue that the whole of Chapter 20 is devoted to this topic. Suffice it to say at this point that refusals to participate in a survey, and premature terminations during the survey process, will almost invariably result in bias. The bias arises from the fact that those who are willing to respond and those who are not willing to respond will usually differ in some important ways that relate to the goals of the

Table 3.4 Intentional sample of ten students

Person Height   Person Height

1  178    73  152
16  183   74  157
19  196   80  191
26  165   86  170
57  173   91  175

Table 3.5 Random sample of ten students (in order drawn)

Person Height   Person Height

91  175   84  175
15  188   95  183
68  160   90  152
28  160   37  175
46  185   20  180

Table 3.6 Summary of results from Tables 3.2 to 3.5

Sample   Mean   Median   Standard deviation

Population                          172   173   10.2
First and last five                 176   178    7.3
First ten                           176   178    7.8
Judgemental (intentional) sample    174   174   13.6
Random sample                       173   175   12.0


survey. For example, in household travel surveys, it has been demonstrated time after time that one-person households and large households are less likely to respond, and that those who travel a great deal are also less likely to respond. The result of this will be biases in the survey measurements of travel. The large households that are missing will, on the average, undertake more travel as a household than smaller households. As a result, both this group of households and those with people who travel much more than the average will be missing or under-represented in the sample. Unless some sort of correction is made to the data, the results of this will be an underestimate of the average amount of travel that is undertaken. One-person households will also add to this problem, when one looks at person travel, rather than household travel, because such households have only one person to do all the travelling that is required, and will often have a person travel rate that is higher than that of persons in multi-person households.

Other types of surveys will incur their own particular patterns of nonresponse. For example, marketing surveys focusing on purchase of a particular product will tend to receive poor response from people who are not interested in that product, while political surveys will tend to receive poor responses from those who are undecided and those who are uninterested in the political issues of concern in the survey.

3.4.3 Avoiding bias
There are several ways in which bias can be avoided or reduced. The principal ways to do this are the following.

(1) Adhere to strict rules of random sampling. Any departure from strict random sampling is likely to be a source of bias.

(2) Avoid using subjective judgements to drop sampling units or to select them.
(3) Perform sample selection away from field operations, except when this is the only feasible method of sampling.
(4) Do not have surveyors or interviewers perform sampling.
(5) Oversee interviewers or surveyors so as to ensure that the survey is conducted as originally designed.
(6) Design measurement procedures as carefully as possible in order to avoid measurement biases.

No survey will be completely free of bias. Any level of nonresponse, non-adherence to strict random sampling, and almost any form of measurement will be the cause of at least some minimal amount of bias. However, following the steps listed above will result in minimal bias in most surveys.

3.5 Some important definitions

The previous chapter began with some definitions that were important for understanding the statistical issues introduced in that chapter. Some of these definitions are repeated here, especially for the reader who may have skipped Chapter 2. Moreover, definitions are needed at this point that are specific to surveys, as opposed to definitions that might apply to statistics more generally.

The elements of a population are the units for which information is sought – e.g., people, households, families, etc.

Expansion factors are those factors that, when multiplied by the sample statistics, provide estimates of the population values or parameters.

The population is the aggregate of all the elements. A population must normally be described in terms of:

• content;
• elements or units;
• extent; and
• time.

For example, a population might be described as follows: all persons over four years of age, living in households (but not group quarters), within the Sydney Metropolitan Region in the spring of 2010. Thus, the elements are persons, the content is those over four years of age living in households, the extent is the Sydney Metropolitan Region, and the time is the spring of 2010.

The survey population may differ from the population because of non-coverage – e.g., omitting households that do not have land-line telephones. Otherwise, the survey population should be the same as the population.

The population value and the true value are the values of a parameter that would be derived from the entire population. The difference between the population value and the true value is observation error (or measurement error), which may or may not exist.

The sample value or statistic is the estimate of a parameter that is obtained from the elements in the sample. Sample statistics are random variables, because their values depend on the sample design and the particular elements included in the sample. Repeated sampling from the same population will produce different sample values for each sample.

The sample distribution is the distribution of the values of a statistic from all possible samples of the population.

The sampling bias is a systematic error that arises from faulty design or execution of the survey. It is not susceptible of estimation, and is to be avoided or minimised.

The sampling error is the error in a sample statistic that arises from having a sample in place of the population. It is a function of the variance in the population and of the sample size. It is susceptible of estimation for a sample that is drawn using random sampling procedures.

The sampling frame or sample listing is the listing of all the sampling units in the population from which a sample is to be drawn. Sampling frames are required for some forms of sampling, but may be unnecessary for others.


The sampling units contain the elements and are used for selecting elements into the sample. For example, households may be the sampling units, when persons are the elements. However, it is also possible for sampling units and elements to be one and the same. Sampling units are usually referred to as just units.

A stratum (plural: strata) is a subgroup of the population based on some criterion. For example, households may be grouped by income level. The households in one specific income level would then comprise a stratum. Similarly, vehicles could be grouped by the number of cylinders in the engine. Vehicles with four-cylinder engines would then constitute one stratum.

Weights are those factors that are needed to correct the sample because of design features of sampling or nonresponse from sample elements.

Additional definitions are introduced elsewhere in this book, at the time that they are specifically needed. However, the above definitions will be helpful in proceeding into the next chapters of this book.


4 Ethics of surveys of human populations

4.1 Why ethics?

At any time that one is dealing with human populations, it is necessary to consider the ethics of what one may or may not ask members of the population to do. Increasingly, governments and universities alike are becoming extremely sensitive to issues of ethics, especially in relation to the privacy of individuals, and the burden that may be placed upon them by surveys. Several market and survey research organisations around the world have set up codes of practice, regulations, or ethical standards that they request or require their members to practise.

The dictionary defines ethics as relating to morals, or treating of morality or duty, or a set of moral principles. In the context of surveys of human populations, this clearly has to do with what it is morally correct to ask people to do, and how to treat those who are asked to participate in surveys. Perhaps one of the greatest threats to the ability of governments and agencies to undertake survey research that will ultimately benefit people is the unscrupulous and immoral use of surveys by entities that seek to profit from the use of such devices. It is, therefore, perhaps of even greater importance that those agencies engaging in survey research of an altruistic nature adhere to correct moral principles in all matters concerning interaction with the potential participants in a survey.

There are a number of survey and market research organisations around the world that have taken up the responsibility of promulgating ethical standards for surveys and of striving to ensure that such surveys adhere to appropriate ethical standards. In North America, there are at least two such organisations, with the Council of American Survey Research Organizations (CASRO) and the Marketing Research Association (MRA) being two of the principal ones. There are similar organisations in many other countries, such as the British Market Research Association (BMRA), the South African Market Research Association (SAMRA), the China Market Research Association (CMRA), the Japan Market Research Association (JMRA), the Market Research Society of Australia (MRSA), the Australian Association of Market Research Organisations (AMRO), and the World Association of Opinion and Marketing Research Professionals (still known by its original acronym ESOMAR – the European Society for Opinion and Marketing Research), among many others. All these organisations have codes of practice, or ethical standards, or codes of privacy, which they maintain and with which they expect their members to comply. It is, perhaps, an indication of the importance of ethics in survey research that there are so many different organisations around the world and that all have codes of practice or ethics that they have developed and expect their members to uphold.

Establishing a code of ethics for the conduct of surveys serves several purposes. First, it brings all surveys undertaken by those who subscribe to the code under a single, uniform standard of behaviour with respect to the public who may be the subject of the surveys. Second, it provides a reference that survey respondents, clients, and practitioners can refer to, evaluate, and update. Survey respondents can be reassured as to how they are to be treated in a survey, clients can be guaranteed that respondents and the resulting data will be treated according to strict professional standards, and practitioners can know what is expected of them in the conduct of surveys. Third, the existence of a code of ethics allows those agencies that commission surveys to demand that those who undertake surveys on their behalf conform to the published code of ethics. This helps to preserve those agencies from adverse public reaction, and assures the agencies that a certain minimum level of concern for respondents and data will be observed in the data collection effort.

4.2 Codes of ethics or practice

Clearly, it is not possible to include within this book all the different codes of practice and ethics that are promulgated by all the various organisations that exist to develop and maintain such codes. For the purposes of this book, a set of principles that were originally promulgated by the Marketing Research Association as a Respondent Bill of Rights (Stopher et al., 2008a) have been extracted, although this has now been superseded by an expanded Code of Marketing Research Standards (MRA, 2007). To these have been added some elements from the rights of respondents that are laid out in the ICC/ESOMAR International Code on Market and Social Research (International Chamber of Commerce [ICC], 2007). The essential elements of ethical conduct of surveys include, as a minimum, the following (see also van der Reis and Harvey, 2006).

(1) The privacy of the individual and the confidentiality of the information that he or she provides in the survey must be protected at all times.

(2) The name, address, telephone number, and any other personal information of the respondent will not be disclosed to third parties without the respondent’s permission, and the client has no rights to know these details unless the respondent has provided explicit permission for this.

(3) Survey personnel must take all reasonable precautions to ensure that respondents are in no way directly harmed or adversely affected as a result of their participation in the survey.


(4) Children under the age of fourteen may not be interviewed without the consent of a parent, guardian, or responsible adult, and survey personnel must take special care when interviewing children and young people.

(5) Survey personnel will always be prepared to identify themselves, the research or survey company that they represent, and the nature of the survey being conducted, if requested by a respondent.

(6) The respondent will not be sold anything or asked for money as part of the survey. This includes the guarantee that respondents will not be required to bear any part of the telephone, postal, or other costs of contacting them.

(7) Persons will be contacted at reasonable times to participate in the survey, and they may request to be recontacted at a later time or date, if that is more convenient.

(8) A person’s decision to refuse participation, not answer specific questions, or terminate the interview while in progress will be respected without question, if that is the respondent’s firm decision.

(9) A survey participant will be advised in advance if the interview is to be recorded, and will be informed of the purpose of the recording.

(10) Respondents may not be surveyed or observed without their knowledge, other than in public places where they are normally liable to be observed and/or overheard by other people present. Methods of data collection, such as the use of hidden tape recorders, one-way mirrors, or invisible identifiers on postal questionnaires, may be used in a survey only if the method has been fully disclosed to the respondent and the respondent agrees to its use.

(11) A research agency may not release research findings prior to the public release of the findings by the organisation that commissioned the study, unless approval of the client organisation is obtained to do so.

(12) A research agency must ensure the reasonable safety of its fieldworkers during the execution of a survey.

(13) A research agency must ensure the security of all research records in its possession.

(14) Researchers must not undertake non-research activities – e.g., telemarketing, list building – and research activities simultaneously.

(15) No unauthorised person should be able to identify an individual respondent from the data provided.

(16) Raw data should not be freely available on the internet.

(17) Only selected and fully trained interviewers and survey personnel, registered with an organisation known to adhere to a code of ethical conduct, may be permitted to approach or interview respondents.

(18) The respondent is assured of the highest professional standards in the collection and reporting of the information provided.

Adherence to ethical standards of this nature is designed to protect the public and to maintain confidence in the collection of data for the purposes of benefiting the population at large. Failure by survey researchers to adhere to such standards undermines public confidence and ultimately leads to discrediting the whole activity of survey research.

4.3 Potential threats to confidentiality

The issue of maintaining confidentiality is perhaps one of the most important aspects of ethical considerations. Therefore, it is worthwhile to consider some of the obvious threats to it and steps that can be taken to remove such threats. The issue of confidentiality is taken so seriously by organisations that collect census data that information that could possibly lead to discovery of the person or household that provided a specific response is normally suppressed. For this reason, the US Bureau of the Census and the Australian Bureau of Statistics, among others, release only aggregate data at detailed levels of geography, and suppress even that information if the total number of persons or households falls below an established threshold number. In the same way, both of these organisations, as well as others, also release unit records of a sample of respondents to the census, but do so only with the removal of all potential identifying information, and provide geographic locations usually only to large geographic units that may contain as many as 100,000 households.
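
The threshold suppression rule described here is simple to apply. The following is a minimal sketch in Python, using invented area codes, counts, and an arbitrary threshold of five households; statistical agencies apply considerably more elaborate disclosure-control rules in practice.

THRESHOLD = 5

# Hypothetical counts of households by small geographic area
area_household_counts = {
    "Area-001": 212,
    "Area-002": 3,     # below the threshold, so unsafe to release
    "Area-003": 47,
}

def suppress_small_cells(counts, threshold=THRESHOLD):
    # Replace any count below the threshold with None, i.e. suppress it
    return {area: (n if n >= threshold else None) for area, n in counts.items()}

print(suppress_small_cells(area_household_counts))
# {'Area-001': 212, 'Area-002': None, 'Area-003': 47}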

Data from surveys are normally stored on electronic media. Retaining data on hard drives of computers, especially when such computers are open to use by a number of different people, can represent a potential threat to confidentiality and privacy. Storage of the data on removable electronic media, such as CD-ROMs, tapes, or disks, should be accomplished as soon as possible, with the data then being removed from computer hard drives, so as to restrict access solely to those who are authorised to use the data. However, if all identifying information has been removed, and there is no possible way for the respondent to be traced, then data in this form can remain on computer hard drives for access by researchers. This should not present any problems relating to the ethical standards discussed previously.

The retention of details, such as names, addresses, and telephone numbers, as part of the survey record is a threat to privacy and confidentiality. However, there are many situations in which such data are needed for further research. This is especially true when respondents are to be contacted multiple times in the execution of a survey. In such cases, contact details must be retained. The question then becomes one of how to handle such needs. The best method to handle this situation is probably to give each record a unique identification number and then remove the name, address, and telephone number data, storing them in a separate confidential file that also includes the identification numbers. The confidential file can be held on a disk or CD, or other storage medium, and kept locked away for use only when there is a legitimate need to match the data to the unit records.
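
A minimal sketch of this separation is shown below. It assumes the raw survey data are held in a CSV file and uses hypothetical column names for the contact details; the confidential file it writes would then be moved to secure, locked-away storage and deleted from the working computer.

import csv

CONTACT_FIELDS = ["name", "address", "phone"]   # hypothetical column names

def split_confidential(raw_path, unit_path, contact_path):
    # Split a raw survey file into an anonymised unit-record file and a
    # separate confidential contact file, linked only by an ID number.
    with open(raw_path, newline="") as f:
        reader = csv.DictReader(f)
        rows = list(reader)
        unit_fields = [c for c in reader.fieldnames if c not in CONTACT_FIELDS]

    with open(unit_path, "w", newline="") as fu, open(contact_path, "w", newline="") as fc:
        unit_writer = csv.DictWriter(fu, fieldnames=["record_id"] + unit_fields)
        contact_writer = csv.DictWriter(fc, fieldnames=["record_id"] + CONTACT_FIELDS)
        unit_writer.writeheader()
        contact_writer.writeheader()
        for i, row in enumerate(rows, start=1):
            record_id = f"R{i:06d}"            # unique identification number
            unit_writer.writerow({"record_id": record_id,
                                  **{k: row[k] for k in unit_fields}})
            contact_writer.writerow({"record_id": record_id,
                                     **{k: row.get(k, "") for k in CONTACT_FIELDS}})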

However, van der Reis and Harvey (2006) raise an interesting issue in relation to this ethical requirement. In certain cultures and situations, it is possible that a survey may encounter the situation in which respondents want their identities to be known to those for whom the survey is being done. In cases when it is believed that this may be important, the best method is to permit people to indicate during the survey if they wish their name and details to be divulged to the survey client or not. For example, at the conclusion of the survey, respondents could be told to write in or provide to the interviewer their name and contact details if they would like to have this information passed on to the survey client. If they prefer to remain anonymous, then they do not provide any details.

4.3.1 Retaining detail and confidentiality

If, for some reason, it is necessary to retain address information in the record, perhaps because the location of the household is important to the research purposes, then there are methods by which the information can be made less susceptible to breaking confidentiality without the loss of important research information. One such method is to take such items as street addresses and randomly add or subtract a small value from the street number. Under most street numbering schemes in Western nations, such changes in numbers will usually not affect the locational information materially, although it may help to preserve the anonymity of the respondent. In North America, where street numbers are often skipped, so that number 3533 may be next door to 3537 and 3527, for example, the use of the addition and subtraction of two or four to the street number will often result in the creation of a fictitious address, meaning that the actual house is not identifiable. In Australia and the United Kingdom, where numbers are normally not skipped, the random addition or subtraction of a small even number, even if known to have been applied, still protects the individual respondent from identification, by virtue of locating every respondent at the ‘wrong’, although nearby, address. Indeed, it could be that a few respondents end up with the addition or subtraction of zero, in which case the user of the data can never know if the address provided is real or falsified.
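
The following is a minimal sketch of this street-number perturbation; the function name, the set of offsets, and the assumption that the address begins with a simple numeric street number are all illustrative.

import random
import re

def fuzz_street_number(address, offsets=(-4, -2, 0, 2, 4)):
    # Randomly shift the leading street number by a small even amount,
    # so the released address is near, but usually not at, the true dwelling.
    match = re.match(r"(\d+)(.*)", address)
    if not match:
        return address                      # no leading number; leave unchanged
    number, rest = int(match.group(1)), match.group(2)
    fuzzed = max(1, number + random.choice(offsets))
    return f"{fuzzed}{rest}"

print(fuzz_street_number("3533 Maple Avenue"))   # e.g. '3529 Maple Avenue'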

In surveys that are undertaken for land use and transport needs, the locations of households and the places to which they travel are increasingly frequently coded to latitude and longitude locations, with the location being given to six decimal places of a degree. This provides address location down to a few metres. Clearly, even without the specific address, if the geographic coordinates are provided at this level of detail, the household location can be identified very precisely. Two possibilities exist to protect confidentiality when such coordinates are provided. The first is to round the latitude and longitude to, say, only three decimal places. This will provide location to the nearest 100 metres or so, which, in most urban areas, would be sufficient to preserve anonymity. The second possibility is to add or subtract a random number between, say, zero and 0.001 to each of the latitude and longitude. This would have the effect of moving the location of each geographic point by up to about 140 metres in any direction. If the area has pockets of low-density development, where 140 metres of shifting would not be sufficient to move the observation to a potentially different address, then a number larger than 0.001 could be used.
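
Both approaches are straightforward to implement. The sketch below illustrates them with an invented coordinate pair; an offset of up to 0.001 degrees in each coordinate corresponds to a shift of roughly 100 to 140 metres, as described above.

import random

def coarsen(lat, lon, decimals=3):
    # Round coordinates to about 100-metre resolution (three decimal places)
    return round(lat, decimals), round(lon, decimals)

def jitter(lat, lon, max_offset=0.001):
    # Shift each coordinate by a random amount of up to max_offset degrees,
    # moving the point up to roughly 140 metres in a random direction
    return (lat + random.uniform(-max_offset, max_offset),
            lon + random.uniform(-max_offset, max_offset))

home = (-34.928497, 138.600739)    # hypothetical six-decimal-place location
print(coarsen(*home))              # (-34.928, 138.601)
print(jitter(*home))               # e.g. (-34.929102, 138.600153)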

In summary, one possible way to retain detail yet protect confidentiality is to add fuzziness to locations, so that the researcher is no longer sure of the precise location, while retaining a location that is within 100 or 200 metres or so of the true location. There will be situations in which this is still insufficient to protect confidentiality, so that decisions on whether this is a satisfactory procedure must be made on a case-by-case basis. When in doubt, it is best to remove all identifying information from the unit records.

4.4 Informed consent

In many market research surveys, the first that the potential respondent knows about the survey is a knock at the door by an interviewer, or a telephone call, or a package of material received in the post, with a request to complete the survey. As is discussed in Chapter 16, response to a survey may often be improved through the use of a pre-notification letter. However, that is not the subject of this discussion. Irrespective of whether potential respondents are forewarned of the survey through a pre-notification letter or are ‘cold contacted’ by an interviewer in person, or on the telephone, or through a postal survey, it is always assumed that, by completing the survey, the respondent is implicitly consenting to the collection of the data requested.

In contrast, there is an increasing trend towards the requirement to obtain a signed consent from respondents to participate in the survey. For example, at the University of Sydney, it is a requirement that any survey of human populations must include two important components: a consent form, and a subject information sheet. The consent form briefly summarises the purpose of the survey, explains what the person will be asked to do, assures the respondent of the confidentiality of the information that will be provided, and requests a signature that the respondent is willing to participate in the survey. An example of a consent form (with appropriate information removed) is shown in Figure 4.1.

The subject information sheet provides more detailed information about what the survey is intended for, and what respondents will be asked to do. An example is shown in Figures 4.2 and 4.3, which display the two pages of a recent subject information sheet. These two sheets are provided to potential respondents at the beginning of the survey, usually in the first contact made. Until the consent form is signed and returned, the survey does not proceed. The exception to this is when the survey is responded to on the internet. In this case, the same information as in the consent form is displayed, and the potential respondent is told that, by clicking on the ‘Continue’ button, he or she is consenting to take part in the survey. Thus, in this case, a signature is not obtained, but the fact of the person continuing on into the survey is taken as an explicit indication of consent.

Figure 4.1 Example of a consent form (the household consent form for the Adelaide Household Travel Study, which identifies the Institute of Transport and Logistics Studies and the sponsoring Department of Transport and Urban Planning, explains that participation is voluntary and involves providing vehicle odometer readings every four months, gives toll-free study and ethics committee contact numbers, assures respondents of confidentiality, and asks the household to sign, to circle a preferred response method of post, internet, or phone, and to return the form in the reply-paid envelope provided)

Although many survey researchers might fear that the use of the consent form would decrease response rates, the author’s experience has been to the contrary. Either the form has no apparent effect on response rates or, in some cases, it appears to have had the effect of increasing the response rate. There is no evidence, to date, to suggest that it has a negative effect on response. Potentially, because of so many arenas in which people are asked to sign their consent for various things in which they will participate, the presence of this consent form may provide reassurance that the survey is being conducted with care and in accordance with appropriate ethical standards. This may be a useful direction in which to move in the future in surveys, whether handled by a university or other agency.

Figure 4.2 First page of an example subject information sheet (a letter to residents describing the Adelaide Household Travel Study, listing the contents of the survey package – an introductory letter, a consent form, a household and vehicle form, an example odometer survey, and a reply-paid envelope – and explaining the postal response option)

Figure 4.3 Second page of the example subject information sheet (describing the internet and telephone response options, the follow-up call to be made if no reply is received, the confidentiality assurances, and the study contact details)

4.5 Conclusions

Adherence to accepted ethical standards is most important in the conduct of surveys of human populations. The codes of ethics that have been developed by a number of market research organisations around the world are in substantial agreement on what constitutes the moral duty of those who conduct surveys of human populations. A review of the codes from a number of different organisations from around the world shows very close agreement in the basic elements of ethical conduct, with most showing only slight wording differences from one another.

Ethics in survey conduct have to do with what is morally and professionally appropriate behaviour on the part of the survey firm and the ultimate client for the survey research. These codes of ethics also provide potential respondents with assurances that the survey will be conducted according to the highest professional standards. In this day and age of telemarketing, and various intrusions into the private lives of individuals, adherence to, publication of, and access to a code of ethics is one of the only ways that reputable survey research can continue to be carried out. As is discussed in Chapter 20, nonresponse to surveys has become a major issue. One of the antidotes to this trend is maintaining strict ethical conduct in undertaking any survey of a human population.

5 Designing a survey

5.1 Components of survey design

Survey design is a complex process. The various elements of the survey that must be designed are:

• the sampling methodology;
• the survey methodology; and
• the survey instrument.

Each of these elements is interdependent with the others. We cannot determine the sampling methodology without knowing how we will undertake the survey. Similarly, the survey methodology puts certain constraints on the survey instrument and the sampling methodology. The choice of survey instrument will also affect the survey methodology and, sometimes, the sampling methodology that can be employed. Figure 5.1 shows a schematic of the overall process of undertaking a survey, and shows how different elements of the process interrelate.

Studying Figure 5.1, we see that the process begins with defining the survey purpose. This chapter deals specifically with this element of the survey design. Once we have defined the purpose, we can move into the preliminary planning, which also then feeds into collecting background information, setting up the survey organisation, and selecting the survey method, which assumes that we also know the population we wish to survey. The design of survey methods is dealt with in Chapter 6. Once the survey method has been chosen, it is possible to proceed into both sampling design (Chapter 14) and design of the survey instrument (Chapters 8 to 10). These two elements interact one with the other, and both are dependent on the selection of the survey method.

The survey organisation is set up in the next step, and fieldworkers are selected, briefed, and trained. These aspects are discussed in Chapter 16. The next step in the process is to conduct pretests of elements of the survey and then to undertake a full pilot survey, as discussed in Chapter 12. This step feeds back to each of the instrument design, the sampling, the survey method, and the interviewer training, and any or all of these may be modified and retested in pretests or a pilot survey. Once the designs have been finalised, the survey is implemented, as discussed in Chapter 16. Following the implementation of the survey, the data collected often have to be coded, and then entered into a database. This step may also entail some aspects of editing of the data. This is covered in Chapter 18. Weighting and expansion of the data may also be required (Chapter 19), and analysis will be performed either with weighted and expanded data, or with the edited data, depending on the needs and nature of the analysis. This will then lead to presentation of the results, and also archiving of the data, both of which are discussed in Chapter 23.

There are a number of feedback loops, shown in dashed lines in Figure 5.1. These indicate feedbacks that are likely to change the way in which the next survey is accomplished, because most of them occur after the implementation of the survey. Data coding may influence future survey instrument design for the same overall survey purpose. The results of data analysis will also affect data coding and future sampling, and even the definition of the overall survey purpose.

Figure 5.1 Schematic of the survey process (the schematic links the following steps: define survey purpose; undertake preliminary planning; collect background information; set up survey organisation; select survey method; design sampling; design survey instrument; train and brief fieldworkers; conduct pretests and pilot survey; implement the survey; code and enter the data; edit the data; weight and expand the data; data analysis; present the results; create preservation metadata; archive the data)

The complexity of the schematic in Figure 5.1 shows rather clearly how complex the survey design process is. It is quite possible that not all interactions are shown here. What is certain is that there are also some missing elements from the process, such as the possible use of focus groups to assist in the design of the survey instrument and the survey methodology. Measurement of the data quality is also a part of presenting the results, but it is not shown as an explicit step, in order not to overcomplicate the schematic.

The perfect survey has never yet been designed. It is unlikely that a perfect survey will ever be designed. For those familiar with the Arthurian legends, the pursuit of the perfect survey design might be likened to the pursuit of the Holy Grail. In these legends, the Knights of the Round Table were assigned the quest to seek for the Holy Grail, the cup used by Christ at the Last Supper. No one knew for certain that it even existed, let alone that it was in Britain, where the quest took place. However, there were myths and legends that said that it was to be found in the British Isles. As the Arthurian legends unfold, the knights go on many adventures as they pursue the Holy Grail. It becomes known that only the Perfect Knight will ever find the grail. There is one perfect knight, and he, indeed, does find the grail, but upon finding it he dies, and no one else sees the grail. This is the story of the perfect survey, which can be considered to be the Holy Grail of survey researchers, and the story tells how readily this will be achieved.

More pragmatically, every survey that has ever been designed can be improved. None is perfect, and it is unlikely that a perfect survey will ever be designed. Therefore, the quest for those who embark upon survey design is always to learn from the past and make surveys better and better, with the ever-present realisation that it is unlikely that anyone will be able to design the perfect survey.

5.2 Defining the survey purpose

No survey should be designed without a clear statement of the purposes to which the data will be put. Failure to define the purpose is likely to lead to a survey that will fail to be useful. There is, in fact, no such thing as a general-purpose survey – i.e., a survey that is designed to be useful for a range of unspecified uses. Some may be tempted to think that the periodic censuses that are carried out are general-purpose surveys. However, all censuses known to this author are very purposeful. There are usually strict mandates from government regarding the principal purposes of the census, and it is designed specifically to serve those purposes. The fact that census data can then be used to support other uses, not necessarily included in the original mandates for the census, is actually a benefit of a carefully designed, purpose-specific survey.

In the mid-1970s, the US Department of Transportation commissioned a survey to be undertaken in Baltimore, Maryland. This survey was intended to be a general-purpose survey of travel behaviour that could be used to support an emerging need for research and development in the area of travel forecasting. However, the survey was designed and undertaken without a specific purpose in mind. The research and development it was intended to support had not progressed to the point at which the survey could be designed to serve those specific analytical purposes. The survey involved about 1,000 face-to-face household interviews, which cost, at that time, around $500 per completed household to undertake – hence the survey cost of around $500,000 (Talvitie, 1997). To the best of our ability to determine this, the resulting data set was never used to support any research. It has been lost now for some years. This should be an adequate demonstration of the failure of general-purpose surveys.

5.2.1 Components of survey purpose

The survey purpose should include:

(1) defining the data that should be measured;
(2) describing how the data will be used;
(3) determining who the user of the data will be;
(4) defining the time period over which the data should be collected; and
(5) defining the geographic area, or other dimensions of the data collection.

Each of these elements is dealt with in turn in the following subsections.

Data needs

There are a number of different ways in which the data needs can be determined. In some cases, standard tests and procedures may define precisely what data must be collected. In such cases, there will be little question about what must be included, and there will be clear justification for inclusion of each element. In some cases, the survey purpose may be so tightly defined that it defines precisely what has to be measured. This would be the case, for example, if the survey purpose were to count the number of vehicles making a left turn at a specific intersection per hour, or if the purpose were to determine the voting intentions of the voting public at an upcoming election.

Another possible source of the definition of data needs may be past surveys, when comparability to the past surveys is of paramount importance. However, this situation gives rise to some potential problems, and it is discussed further in a subsequent subsection. The data needs may also be defined by one or more technical persons, who would usually be the eventual users of the data. Finally, the data needs may be defined by a committee that may comprise technical and non-technical people alike. This is often the case when data are being collected for public agencies, especially when more than one agency is involved. In such situations, the committee may be made up of representatives of the various agencies involved in commissioning or funding the data collection, and also those that represent different groups of the public that may be considered to be stakeholders in the data collection effort. However, it is important to remember the tongue-in-cheek definition of a camel – a horse designed by a committee. There are many examples in which the product of a committee that has provided input to the data needs is somewhat ungainly, and may include elements that have only very limited usefulness.

Among the reasons that it is so important to define data needs are:

(1) data needs define much about the survey design process;
(2) data needs will define what must be measured in the survey;
(3) data needs may often limit the survey methodologies that can be applied;
(4) data needs may often limit the sampling methodologies that can be used; and
(5) data needs will dictate many of the detailed elements of the survey.

An example, in this case taken from a transport situation, may help to explain how data needs can define much of the survey process. In Florida, in the mid-1980s, the author was involved in designing a survey of the use of buses in Miami (Stopher et al., 1986). At this time, there was an ordinance in effect that prohibited public money from being used to translate any materials into any language other than English, and the survey was publicly funded. Usually, a survey of this type would have been conducted by handing bus riders a short questionnaire, and asking them to complete it while they rode on the bus. The survey would have been designed to be returned on the bus, although it would have been possible to post it back to the survey organisers, if time or other circumstances did not permit on-bus completion. However, more than half of the bus ridership at that time in Miami spoke only Spanish. Therefore, it was clear that a self-administered questionnaire that would be only in English would garner a very poor response rate and would also be quite biased in coverage.

It was established that the information required was to know the pattern of use of the bus system in Miami. Therefore, the most important information was to know approximately where each passenger boarded the system, where he or she alighted, and what, if any, transfers were made by the passenger. It was concluded that passengers could not be asked for any information if the survey was to obtain good coverage of bus riders. Instead, the following method was devised. Each bus route was divided up into segments, a segment being defined as the part of the route from one timing point to the next (usually covering four to eight bus stops). Each such segment was given a designated colour. A printer then printed cards of the same size and weight as a standard business card, on which was printed the route number, and the cards were printed on stock that corresponded in colour to the route segments. For example, route 83 might start with a blue segment, followed by green, yellow, red, purple, white, orange, and pink. Cards were printed with the number 83 on them on stock of each of these colours. When a passenger boarded the bus, he or she would be handed a card of the appropriate colour, denoting the segment of the route in which he or she boarded. Surveyors on the bus also had a set of small boxes, each with a colour indicated on the box. As passengers got off the bus, they returned the card to the surveyor, who placed it in a box whose colour corresponded to the segment of the route where the passenger alighted.

A passenger making a transfer was required to purchase a transfer, which was stamped with the date and the route number issuing the transfer. When a passenger boarded a bus with a transfer, he or she was required to give the transfer to the bus driver in place of paying a fare. (If another transfer was required, it would be obtained from the driver at the same time.) This transfer procedure was the one in use on all Miami buses at this time. This procedure was used by collecting from the driver any transfers handed in and adding these to the box of the colour appropriate to the segment where the transfer was handed in.

To help passengers understand what was required of them, cartoon-like panels were devised showing a person receiving the card as he or she boarded, taking a seat on the bus, while holding the card, and then returning the card to the surveyor as he or she alighted. No words were used. The panels were placed in the advertising panels at the front and on the sides of the bus on the inside. Thus, no matter where a passenger was sitting or standing, he or she would be able to see one of these panels. Surveyors were, for the most part, bilingual, so they could answer verbal questions, and could also ask passengers to take the cards and return them. Surveyors made a count, after leaving each bus stop, of the number of passengers on board the bus. The number of cards given to each surveyor of each colour was also recorded at the start and end of each bus trip. By counting the number of cards by colour returned in each segment of the route, it was possible to determine how many people rode from each segment to each other segment of each route surveyed, by time of day. In addition, by counting the number of transfers handed in for each segment, and determining the routes on which the transfers had been issued, it was also possible to reconstruct the transfer pattern of passengers, and determine the location at which transfers took place.
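
The tabulation implied by this card procedure amounts to building a segment-to-segment matrix from pairs of colours: the colour of each returned card gives the boarding segment, and the colour of the box it was dropped into gives the alighting segment. The sketch below illustrates the idea; the segment colours and counts are invented.

from collections import Counter

# Each returned card yields a (boarding segment, alighting segment) pair.
# The data here are invented, purely to illustrate the tabulation.
returned_cards = [
    ("blue", "yellow"), ("blue", "red"), ("green", "red"),
    ("blue", "yellow"), ("yellow", "pink"),
]

od_matrix = Counter(returned_cards)
for (board, alight), riders in sorted(od_matrix.items()):
    print(f"boarded in {board} segment, alighted in {alight} segment: {riders}")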

The survey was extremely successful in providing exactly the information desired, without asking passengers any questions. Furthermore, whereas the usual response rate for an on-board questionnaire survey is between 20 and 45 per cent, the response rate to this survey was over 95 per cent. Of course, if one had needed to know where passengers lived, what purpose they were travelling for, and details about their socio-demographic characteristics, this survey could not have been done in this manner. However, such information was not required, thereby permitting this simple but effective procedure to be designed.

Another example is drawn from a situation that existed in southern California in the late 1980s. The South Coast Air Quality Management District had implemented a regulation that required employers with more than 100 employees to implement various strategies to increase the average vehicle occupancy of employees travelling to and from work. The regulation was known as Regulation XV, and it has since been repealed by the California state legislature. However, a key element of this regulation was the requirement that employers carry out an annual survey of their employees to determine the average vehicle occupancy, or average vehicle ridership (AVR), as it was called in the regulation. Clearly, it would have been possible for employers who were so minded to declare one day each year as a ride-sharing day, encourage all employees to share a ride that day (by car-pooling, van-pooling, or riding the bus) or to use bicycles or walk all the way to work. However, anticipating this, the regulation stipulated that the average vehicle ridership was to be determined as the arithmetic average over a five-day week, from Monday to Friday. In addition, it was required by the regulation that the survey had to ascertain what strategies to encourage ride-sharing or the use of non-motorised vehicles were being used by the employer and what number of employees were taking advantage of these. Strategies included such things as close-in designated parking spots for car and van pools, subsidised public transport passes, guaranteed ride-home programmes for those car-pooling or van-pooling, showers for those bicycling or walking, etc.

In this case, the regulation itself specified the data that had to be collected, requiring employees to indicate their means of travel to and from work for each day for a week, and the questions they were to be asked about their familiarity with and use of incentives provided by the employer. It was also specified in the regulation that each employee who failed to complete a survey was to be counted as driving alone to work, thereby penalising the employer in the calculation of the average vehicle ridership for its employees.
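
Although the text of the regulation is not reproduced here, the arithmetic can be illustrated. The sketch below assumes a common definition of AVR: employee arrivals over the Monday-to-Friday week divided by the vehicles those arrivals bring to the site, with every non-respondent counted as driving alone. The mode categories and vehicle factors shown are illustrative assumptions only.

# Vehicles brought to the site per employee arrival, by mode (illustrative)
VEHICLES_PER_ARRIVAL = {
    "drive_alone": 1.0,
    "carpool_2": 0.5,      # two employees share one car
    "vanpool_8": 1.0 / 8,
    "bus": 0.0,
    "bicycle": 0.0,
    "walk": 0.0,
    "no_response": 1.0,    # regulation penalty: counted as driving alone
}

def weekly_avr(daily_mode_counts):
    # daily_mode_counts: list of five dicts mapping mode -> number of employees
    employees = sum(sum(day.values()) for day in daily_mode_counts)
    vehicles = sum(n * VEHICLES_PER_ARRIVAL[mode]
                   for day in daily_mode_counts
                   for mode, n in day.items())
    return employees / vehicles

week = [{"drive_alone": 60, "carpool_2": 20, "bus": 10, "no_response": 10}] * 5
print(round(weekly_avr(week), 2))   # 500 employee arrivals / 400 vehicles = 1.25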

Comparability or innovation

As noted earlier, one of the requirements often placed on surveys is that the data collected be comparable with a previous survey. However, as noted near the beginning of this chapter, no survey is perfect, and improvement to surveys is always possible. Not only that but, as time goes by, the requirements placed on surveys change. Human beings react differently over time to specific elements of a survey, requiring that surveys continually be adapted to changing circumstances, preferences, and issues. Therefore, to repeat the exact same survey design time after time is an undesirable procedure, in that it will render the survey increasingly unsuitable for its purpose.

Herein lies a serious dilemma for the survey designer. If comparability is important, then data must be measured in the same way and by the same process, so that there can be no change introduced as a result of methodological differences. However, because human beings do not remain unchanged, even to hold the design of a survey constant does not guarantee that there is no methodological component to any change that is measured. A methodology that was suitable to the population in 1950 may have serious methodological flaws in 2010. These flaws may well mean that, even using the same instrument and procedures in 2010 as were used in 1950, the measurements are not comparable.

The other problem with holding design constant is that it prevents the correction of previous errors and the introduction of new and innovative ways of surveying a human population. For comparability, the introduction of such new methods is a major challenge. Now, one is required to determine how much effect on the underlying data the changes in instrument, methodology, etc. may have had, and how much change arose as a result of the elapsed time on the population being measured, constituting a real behaviour change. For example, suppose one were interested in measuring the effectiveness of a drug abuse education programme, aimed at helping schoolchildren to resist experimenting with addictive drugs. To determine, over time, how effective the programme is, one might expect the same survey to be conducted on a particular age group of children at each evaluation point. However, one of the challenges of this type of survey is to get schoolchildren to give an accurate accounting of experimentation with illegal substances. As time goes by, new and better methods are found, whereby children can be persuaded to give more accurate accountings of the behaviours that are the subject of the survey. If the same instrument and methods are used each time that the programme is evaluated, then the potential is that children will actually have learned how to lie about their behaviour better, thereby giving increasingly erroneous results as time goes by. On the other hand, if better instruments and procedures are used, in which children are less able to lie or obscure the truth about their use of such substances, then the measurement becomes increasingly accurate. In neither case is there actual comparability between the surveys over time.

Another illustration of this is provided by surveys about childbirth out of wedlock. Surveys undertaken in the 1950s would be likely to understate the number of out-of-wedlock births, because it was considered to be socially unacceptable and immoral to give birth out of wedlock, and considerable shame accrued to any woman who did so. However, surveys undertaken in the 2000s would encounter a vastly different situation, especially in a country such as Australia, where de facto households are very common. The number of out-of-wedlock births has grown considerably, and now society attaches no stigma to the situation. However, this also means that a survey designed in the 1950s to ascertain such information would be quite inappropriate for asking for the same information currently. Comparability would not be likely to be achieved by using the exact same survey instrument and procedures in 2010 as were used in, say, 1950.

In general, this author suggests that comparability cannot be achieved by holding the survey instrument and procedures constant. In other words, it is an illusion that keeping the survey the same permits a comparison that would be lost by improving the instrument and intentionally fielding a better survey. Whether or not it is possible to establish strict comparability over time is actually very much open to question, and will probably vary with the topic of the survey and the degree to which population situations have changed. As a general rule, this author recommends that, when new methods are available that should make measurement more accurate, they should be used. Even when comparability is a desirable output, the improvement of methods should be given greater priority.

One possible method that can be used to discern methodological and societal changes that may have impacted on ongoing or repeated surveys is to conduct a small sample survey using the previous instrument and procedures. If this is done on a random subsample of the population, then comparing the results of this subsample and the main survey sample will provide an estimate of the extent of societal change and survey methodological impact. This may not work in all cases, but it will work in a majority of situations. It may also be possible to use either a control group to benchmark the effects of such changes or supplementary information (data gained from another source) as a way to discern the effects of changes in survey methodology and societal changes affecting the issues being measured. Some surveys may also not be subject to societal change, in which case determining the methodological effect is likely to be much simpler.
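
The arithmetic behind such a comparison is straightforward, as the following sketch with invented proportions shows: the old-instrument subsample carries the societal change alone, and the gap between the new and old instruments in the current wave estimates the methodological effect. In practice, the sampling error around each of these estimates would also need to be taken into account.

# Invented proportions reporting some behaviour of interest
previous_wave_old_instrument = 0.42       # earlier survey, old instrument
current_old_instrument_subsample = 0.47   # current wave, old instrument (small random subsample)
current_new_instrument_main = 0.52        # current wave, new instrument (main sample)

# Change measured with the same instrument isolates societal change
societal_change = current_old_instrument_subsample - previous_wave_old_instrument

# Difference between instruments in the same wave isolates the method effect
method_effect = current_new_instrument_main - current_old_instrument_subsample

print(f"estimated societal change: {societal_change:+.2f}")
print(f"estimated methodological effect: {method_effect:+.2f}")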

Defining data needs

There is probably little difficulty in starting the process of defining data needs. Most individuals and groups involved in commissioning and conducting surveys of human populations for various purposes will not be hard put to come up with a ‘wish list’ of data items. The real challenge is not in deciding possible questions to ask but, rather, in reducing the list to a reasonable set. The issue of respondent burden is discussed later in this book. However, it is appropriate to introduce the notion here. Whenever people are asked to complete a survey, irrespective of the method of survey administration, the completion of the survey imposes some degree of burden on the individual. This burden may just be a matter of the time it requires for him or her to answer the survey questions, but may also include time required to look up certain records or information to complete the survey, or challenges in terms of thinking about things in a different way. The more questions that are asked in a survey, all other things being equal, the more burdensome the survey becomes, and the more likely it is that people will not have the patience to complete the survey. Therefore, it always behoves the designer of a survey to be parsimonious with the questions to be included.

With this goal in mind, the major challenge in defining data needs is to reduce the initial ‘wish list’ to a list of ‘must-have’ data items. It is suggested that there are some simple procedures by which this may be done. This involves reviewing each data item that has been proposed and then employing the following steps.

(1) Determine how each data item will be used.
(2) Determine the accuracy that is required for each data item.
(3) Define any specific ranges required for categorical data, and the reasons they are needed.
(4) Question the need for the data item.
(5) If the need is still substantiated, keep the item; if not, drop it.

In preparation for the design of the instrument itself, the next step in the procedure should be to define the order in which the data items will be requested. Sometimes a review of the order will show that a data item thought to be necessary is not, because the information it will provide has already been acquired at an earlier stage, or will be obtained at a later stage, through other questions. It is sometimes the ordering of the questions themselves that reveals this.

The next step is to review the data items together. In this process, two things should be looked for: duplication or overlap in what two or more different questions are establishing, and illogical questions. Again, lack of logic in the questioning is not always apparent until questions are looked at together. Any duplication, overlap, or illogical questioning should be seen as a flag to drop or change certain questions.

Data needs in human subject surveys

Human subject surveys require special care in the overall design. It is most important to consider the viewpoint of the potential respondent in the design. A basic principle of design should be to make the survey as simple and straightforward as possible for the respondent, even though this may be at the expense of the survey researcher. Data can always be transformed or modified by the analyst. However, surveys that are designed for the convenience of the analyst will often result in surveys that are difficult for the respondent. Brög (2000) has made a telling case for treating the respondent as the ‘customer’ and taking the respondent’s viewpoint into account in all aspects of survey design.

The survey should be designed in such a way as to make it easy for the respondent to answer, and should follow a logical progression that fits with the way in which the respondent thinks. In questions for which categories are provided for the respondent to tick the answer, it is imperative to make sure that every respondent will find a category that fits his or her situation. If this cannot be done, then it is probably better to offer an open-ended question and allow the respondent to give an answer in his or her own words.

A clue that there is a problem in a question is given when the survey designer finds it difficult to phrase a question so that it is clear and unambiguous. Often, when this is a substantial struggle, it may suggest that the question should not be asked, or that the information should be acquired through a different means. There is more discussion of this in Chapter 9, where question design and question wording are considered at length. However, in the matter of data needs, this is definitely a point at which the need for a piece of data that is proving awkward to request should be questioned.

Survey timing

An important, and often overlooked, aspect of survey design is the answer to the question of when the survey should be conducted. By ‘when’ is not meant the time of day when people should be contacted so much as the period of time within which the survey is to be undertaken. Some of the issues that need to be addressed, irrespective of whether the survey is to be done by interviewers in a face-to-face situation, by telephone, or by post, are whether or not the survey should be conducted on weekdays or weekends or both, whether the survey should be conducted over public holidays, and whether the survey should be conducted during the major school holidays or vacation periods. Some surveys must be restricted to certain times of the year, and this is also to be specified as part of the timing issue.

Recently, the author had the experience of a survey being conducted on a weekend with face-to-face home interviews. Unbeknownst to the survey designers, the weekend in question included one of the hottest days on record for that month, as well as a day on which a major sports event was taking place within the city where the survey was being done. As might be expected, this combination of situations resulted in a poor response rate for that weekend. Had it been ascertained in advance that this particular sports event was on, then the survey would probably not have been put in the field on that weekend. The effects of weather are more difficult to handle, and it may be questionable to let weather dictate when a survey is to be done. Nevertheless, weather can often impact the conduct of a survey, whether it is excessive heat or cold, a snowstorm, or another more extreme weather event, such as a tornado or cyclone.

In the transport field, it is often considered by transport analysts and researchers that the data should be collected from a time in the year that is considered to be ‘representative’. This has frequently been interpreted to mean collecting data only during the spring and autumn, on the grounds that summers are unrepresentative, because schools are not in session and many people are away from home on summer holidays, and, similarly, that winters are unrepresentative, this time because of inclement weather that may affect transport, as well as holidays and other events that occur in that season. However, from an air quality viewpoint, omitting these two seasons could seriously compromise the value of the data. Summer is usually the time when the highest levels of ozone are experienced, resulting from exhaust emissions, while winter is often the season of the highest levels of carbon monoxide production from cars. Therefore, to omit these two seasons from the survey would mean that travel behaviour that leads to the most extreme air quality degradation would not be included in the measurement.

It is almost a trivial point that political polls need to be undertaken most often in the months, weeks, and days leading up to an election. Although political surveys are often also useful at other times, a survey designed to determine voting intentions in an upcoming election must, obviously, be undertaken in the period preceding the election in question. A survey on gift purchases by people for the Christmas season should also predominantly be done either in the period immediately preceding Christmas or, if a recall survey is being done, in the period immediately following Christmas. A survey on this topic undertaken in July will probably not yield very useful results.

Hence, it is an important part of the design process to determine the timing of the survey. This needs to define the days, weeks, months, or even years during which the survey should be conducted, and whether there are any specific periods during which the survey fieldwork should be suspended. Most often, such times of suspension will relate to times when it is expected to be hard to find people at home, when interviewers may be difficult to find, and when people are likely to be distracted from the survey, so that serious consideration to completing the survey is not likely to be given. Depending on the specific nature and goals of the survey, other aspects of timing may be specified. These may include a date by which fieldwork must be completed, or a particular time of the year that should be omitted from fieldwork.

Geographic bounds for the survey

Almost every survey is also restricted to a certain geographic area. The definition of this geographic area is also important in defining the overall survey design. Several consequences flow from this. Failure to define the geographic area may result either in a lack of coverage of all the population that it was desired to include in the survey, or in the collection of data that are not useful to the survey purpose, because the households concerned live outside the area of interest. It is therefore most important that a clear definition be provided of the area that is to be covered. Usually, this will be done by using some level of administrative geography, such as specifying the towns, counties, or states for which the survey is to be done. The geographic extent of the survey may impact a number of aspects of the survey design.

A recent household travel survey was undertaken for the entire state of Michigan in the United States. The geographic extent of the survey was defined as being the entire state, and only the state, of Michigan. The fact that the survey was to be representative of the state of Michigan led in turn to the specification of ways in which to subdivide the state to ensure that households living in all the different possible situations within the state would be measured, and in sufficient number to be able to make some statistically reliable statements about them. Michigan includes a large metropolitan area – Detroit – and a number of significant-sized cities, such as Ann Arbor, Flint, and Lansing. In addition, there are areas of the state that are rural in nature, and the state also consists of both an upper and a lower peninsula. These two rural areas are somewhat unlike each other, because the upper peninsula has limited access to the rest of the state, and contains no large cities. It is also an area that is much frequented for winter sports and for summer homes. As a result, the state was divided into seven geographic subdivisions for the purpose of the survey: the metropolitan area of Detroit, the larger urbanised areas, the small cities, the small urban areas, the upper peninsula rural, the northern lower peninsula rural, and the southern lower peninsula rural. The boundaries of these different areas were set so that the entire state was included, and no regions overlapped any other regions.

This type of geographic specificity is likely to be important in many different types of surveys. Political surveys will be restricted geographically to the electorates of concern in an upcoming election. Social surveys of various types may be restricted to urban areas only, to rural areas only, or to some combination of these two, depending on survey purpose. As with the data needs and the timing, the geographic extent and the desired geographic coverage (as in the case of the Michigan example) will set certain constraints on the design of the survey.

5.3 Trade-offs in survey design

The final point that needs to be made about survey design in general is that it involves trade-offs. As has been noted already, the larger the sample, the greater the accuracy of the resulting data, assuming that there are no biases present. On the other hand, the larger the sample, the greater the cost of the survey, and the more of almost all survey resources that will be required. There is another trade-off, involving the quality of the survey. The quality of the survey can be lowered, which will either lower the cost of the survey or increase the quantity of data. There are also ‘sub’-trade-offs, whereby survey length can be traded off against sample size, either of which can be traded off against the quantity of data. Similarly, the survey method, quality control, and the quality of the data can be traded off against one another. The tensions and trade-offs in survey design are illustrated in Figure 5.2.

As is the way with trade-offs, it is not normally possible to obtain a high quality of survey, with a large quantity of data, with few survey resources. Similarly, a high quality of survey cannot be achieved by using a poor survey method and little quality control; nor can a large quantity of data be obtained from a short survey and a small sample size. All these elements of survey design are interrelated, and all affect one another. Most often, survey design commences with a limited budget. At the same time, the client for the survey will demand that the data are of high quality and that a large quantity of data should be obtained. The art of survey design is to select survey methods, quality control procedures, survey length, and sample size that will provide the best quality and quantity within the available budget.

A further trade-off that is implicit in almost all survey work is that of time. To obtain high-quality data in substantial quantity involves time. However, usually the demands of the survey are that the data be produced in as short a time as possible. Thus, there is a further tension here relating to time: it takes time to produce high-quality data. It also takes time to produce a substantial quantity of high-quality data.

It seems to be almost axiomatic, as Stopher and Metcalf (1996) have observed, that agencies desiring a survey to be accomplished will provide neither sufficient resources nor sufficient time to undertake the quality of survey and to produce the quantity of data they desire. Therefore, the survey designer is always faced with trying to make trade-offs, either by reducing the quality while achieving the desired quantity of data, or by reducing the quantity to obtain an adequate quality. The tools that are at the designer’s disposal to achieve this are the survey method, the quality control procedures, the survey length, and the sample size. How this can be achieved will become clearer in the following chapters, as alternative methods and designs are explored.

Figure 5.2 Survey design trade-offs (survey resources, survey quality, survey method, quality control, survey length, data quality, and sample size). Source: Richardson, Ampt, and Meyburg, 1995.

6 Methods for conducting surveys of human populations

6.1 Overview

There are a number of different ways in which a human population can be surveyed. The method that probably preceded any others was that of face-to-face interviewing. In this method there is a trained interviewer, who intercepts the respondent by some means and then conducts the survey by asking questions and writing down the answers. A second, and quite different, method is to write out the questions in a questionnaire of some type, and send this questionnaire to prospective respondents in the post. The respondent then reads through the questionnaire and writes down his or her responses to the questions, and sends back the completed form to the survey agency.

A third survey method, which it has been possible to use extensively only since about the 1970s, is a telephone survey. In its pure form, a telephone survey differs from the face-to-face survey only in that the interviewer and respondent are no longer face to face but, instead, are speaking to one another over the telephone. The survey is conducted by the interviewer again asking questions of the respondent, and noting down the answers. A fourth type of survey is one that has become feasible only since the late 1990s. In it, the respondent is presented with questions on the internet, and then responds to the questions by either typing in answers or clicking on a ‘radio button’ to indicate the answer that the respondent prefers. These four methods represent the four basic methods that are available for interactive surveys with human populations – i.e., surveys in which the respondent provides answers to questions posed by the survey organisation.

It might be thought that the likely direction of development is for newer methods of conducting surveys to replace older ones. Indeed, in some fields and in some parts of the world, face-to-face interviews have largely been replaced by the use of telephone interviews. In other parts of the world, and for other survey purposes, the face-to-face interview continues to be the preferred method. With the advent of the internet and Web-based surveys, some have suggested that this method will replace all other survey methods. However, Dillman (2002) suggests that ‘[the] future of surveying is far more likely to evolve towards the use of different survey [methods] for different studies than it is to be the disappearance of older [methods] in favour of only one or two new ones’.

In addition to surveys that require interaction between the surveyor and the target sample, the other category of surveys of human populations is that of an observational survey, in which a survey person observes the behaviour of the survey subjects, noting down relevant information about the subject and the subject’s behaviour of concern. For example, in transport, an observational survey might be one in which a count is made of vehicles passing through an intersection. The survey subjects are the vehicles and the persons driving them. Several facts may be noted down about the vehicle and its occupants. These may include the type of vehicle, the number of occupants, the licence plate information, which leg of the intersection the vehicle approaches on, whether it continues straight through the intersection or turns right or left, whether the vehicle stops prior to entering the intersection, and so forth.

Other observational surveys might include observing in a shopping centre which stores people patronise, the order in which stores are visited, and whether purchases are made in the stores. All these items can be ascertained without the participation of the visitor to the shopping centre.

6.2 Face-to-face interviews

As noted in the preceding section, these are usually conducted by using trained interviewers. Face-to-face interviews may be conducted at a place, such as a person’s home, workplace, or school, or by intercepting people in a particular activity, such as visiting a shopping centre, waiting for a plane, riding on a bus or train, or visiting a theatre. In a face-to-face interview, the key elements are that the interviewer has a script for the interview, and a method to record the answers provided by the respondent. To ensure that there is comparability between the responses obtained by an interviewer with different respondents and between interviewers, it is normally a requirement that the interviewer use the provided script exactly as it is written, without any deviation from it.

It is extremely important that interviewers are trained and present themselves well when undertaking interviews. Further discussion about interviewer training and presentation is provided in Chapter 16 on survey implementation. Suffice it to say here that interviewers need to be familiar with the survey and its purposes, so that they can respond to questions about the survey and so that they understand the importance of each question and of adhering precisely to the wording of the questionnaire.

There are some important issues in survey design that must be considered for interview surveys. The amount of time that an interviewer has to spend recording the answer of the respondent is quite critical. There is nothing more off-putting to most survey respondents than long silences between questions, during which the interviewer is furiously writing down the respondent’s answer. First, the silences themselves are a challenge for most respondents. Second, the time taken to write out lengthy answers prolongs the interview considerably. Third, sizeable amounts of writing by the interviewer are likely to offer many opportunities for error in recording the answers of the respondent. As a result, face-to-face interview surveys should include, whenever possible, categorical answers for which the interviewer simply has to determine which response from among a list matches that given by the respondent, and make a mark to indicate the correct answer. When such categorical answers are not possible or are inappropriate, the design should still minimise the amount of writing required by the interviewer.

The traditional form of face-to-face interview is one that uses a paper and pencil format, and it is therefore also referred to as a PAPI (paper and pencil interview). In this type of interview, the interviewers are provided with a supply of survey forms, usually with a clipboard to make writing easier, and use a pen or pencil to write in the answers provided by the respondent. More recently, there has emerged the computer-assisted personal interview (CAPI), in which the computer screen replaces the paper survey, and the computer keyboard replaces the pen or pencil (if no interviewer is present, the term used is computer-assisted self-interview – CASI). There are several advantages to a CAPI survey. First, as the interviewer enters the response to each question, it is recorded in a database stored on the computer, thus avoiding the data entry task from the PAPI survey and removing one of the potential error sources. Second, the computer-based survey can simplify progression through a survey that has skip patterns, by revealing to the interviewer only those questions that are appropriate to this respondent, conditioned on answers already provided. This simplifies the task of the interviewer, speeds up the interview, and can also avoid the appearance of a bulky questionnaire that may be off-putting to the respondent, who tends to assume that every question has to be asked and answered.

Third, the computer-based survey can include built-in checks on the answers provided by the respondent, and can provide warnings to the interviewer when two answers provided by the respondent are incompatible or inconsistent. The interviewer can then re-question the respondent to determine which of the answers may have been incorrect or misunderstood. Fourth, the computer also offers a possible way to reduce item nonresponse, although this must be used carefully, so as not to offend the respondent or result in termination of the interview. This is done by telling the respondent who has declined to answer a particular question that the computer will not allow the interviewer to proceed to the next question until an answer is recorded to the one the respondent has not answered. However, this technique must be used with considerable caution, because it can have much more serious impacts on the completion of the survey if used unwisely. Fifth, the CAPI interview can also be conducted more speedily, because many questions can be answered simply by pressing one key, which then takes the interviewer directly to the next question, as compared to writing even a tick or a cross in a box and then turning the page. Speed is also provided by the fact that the interviewer does not have to figure out the skip patterns; skips are handled automatically by the computer software.
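
To make the skip-pattern and consistency-check ideas concrete, the following is a minimal sketch, in Python, of how a CAPI script might decide which questions to reveal to the interviewer and flag incompatible answers. The question identifiers, wording, and the single consistency rule are invented for illustration only; they are not drawn from any particular CAPI package or from the surveys discussed in this book.

# Minimal sketch of CAPI-style skip logic and a consistency check.
# Question identifiers, conditions, and the example rule are hypothetical.

questions = [
    {"id": "owns_car", "text": "Does your household own a car? (y/n)"},
    {"id": "num_cars", "text": "How many cars does your household own?",
     "ask_if": lambda a: a.get("owns_car") == "y"},       # skip pattern
    {"id": "drove_today", "text": "Did you drive a car today? (y/n)"},
]

def consistency_check(answers):
    """Return a warning if two answers are incompatible, else None."""
    if answers.get("owns_car") == "n" and answers.get("drove_today") == "y":
        return "Respondent reports driving today but owns no car - please re-ask."
    return None

def run_interview():
    answers = {}
    for q in questions:
        condition = q.get("ask_if")
        if condition and not condition(answers):
            continue                      # question hidden from the interviewer
        answers[q["id"]] = input(q["text"] + " ").strip().lower()
        warning = consistency_check(answers)
        if warning:
            print("CHECK:", warning)      # prompt the interviewer to re-question
    return answers

if __name__ == "__main__":
    print(run_interview())

In a real CAPI system the same idea is simply expressed in the survey software’s own scripting facilities, and the answers would be written to the interview database rather than printed.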

On the negative side, some respondents may be put off by the use of a computer, or may even have an irrational fear associated with a computer-based survey. For such cases, it is usually wise for the interviewer to have an alternative PAPI survey form, so that the interview can be conducted in that manner if necessary. Computers are also known to ‘misbehave’ by freezing or locking up, failing to save input data, or corrupting the data in some way. The computer, itself, can also become a distraction to the respondent, who may wish to see the screen, or who may be distracted into wanting to ask technical questions about the computer and the software for the survey. Nevertheless, as computers become both cheaper and more powerful, especially in terms of the notebook computers that are most suited to this type of survey, the CAPI survey is becoming increasingly common.

6.3 Postal surveys

The second type of survey is a postal survey. In this type of survey, respondents are recruited by some means and then are sent a survey form in the post. This form is to be completed by the respondent and then sent back to the survey agency, usually by post. In this case, there is no interviewer to elucidate the meaning of questions, or to answer side questions of the respondent. This type of survey is a fully self-administered one.

A postal survey requires a high level of attention to the design of the survey instrument. Complex skip patterns must be avoided, because most respondents either will not follow the patterns or will perceive the difficulty of doing so to be too great, and will refuse to complete the survey as a result. Second, people do not read instructions. Probably the first question that a computer software technical support person asks is whether the caller to technical support has read the manual. In more than 90 per cent of cases, the answer is probably that they have not done so. Even when completing an income tax return, few people read the instructions, even though their money is at stake and failure to read the instructions could cost them a significant amount of money. It should be fairly clear therefore that, in the case of a self-administered survey, to which the respondent is providing answers out of the goodness of his or her heart, and from which the respondent has little to gain by doing so, instructions are unlikely to be read. This does not mean that instructions should not be provided, but it does mean that the survey form should be able to be completed reasonably well without the need to read the instructions. Questions should be self-explanatory and should generally not leave the respondent in doubt as to what is being asked and how the respondent should answer. Instructions, then, should be more by way of providing additional information or help that the respondent may or may not need. A good principle on which to work is to assume that any instructions provided will be read, if at all, only when all else fails. Further, a respondent may not even realise that he or she has failed to answer questions correctly if the answers seem to fit, even though they are wrong.

Postal surveys also rely on people to be able to read and to understand the questions. This means that considerable effort needs to be expended on ensuring that questions are completely unambiguous and are expressed in sufficiently simple language that they can be understood by someone of limited linguistic ability. One procedure used by the author in designing surveys is to try to remove all multisyllabic words and replace them with one-syllable words whenever possible. For example, respondents are asked to ‘fill out’ the survey form, not ‘complete’ it. A question on income, which could be phrased as ‘What was your annual income last year?’, might be replaced with ‘How much money did you earn last year?’.

In a self-administered survey, when a skip is inevitable it should be as simple as possible, such as a simple branch conditional on a response. The skip can be explained further by using arrows that point the respondent to the next question from the answers already provided. Another device that can be used is that of colour. For example, the responses to a question can be grouped into answers that result in a specific branch, with the same colour being used for the branch question. An example of this concerns a situation common in transport surveys. A person may be asked what means of transport he or she used for a particular journey. Conditional on the means of transport, there may then be different questions to be answered. Thus, if a person travelled by car, there may be questions about the cost of parking, who was driving the car, and which of the household’s cars was used. On the other hand, if the person used public transport, the questions may be about the fare paid, where he or she got on and got off the public transport vehicle, etc. If the person travelled on foot or by bicycle, then questions might be asked about the distance that was walked or bicycled, whether there were footpaths or bicycle paths provided, etc. In such a case, the answers to the question on the means of transport could be grouped separately into car, public transport, and walk/bicycle, with each of the groups having a particular colour background. The same colour background is then used for the questions that follow, so that the respondent is able to see that, if he or she answered the question as a car user, which has a pink background, then the next questions to be answered also have a pink background. Other similar devices can be used to make skip patterns clearer to respondents.

However, a word of caution is needed here. If these devices become excessive in use, or if arrows are required to span pages or to cross other questions, and so forth, then the skip patterns are too complex, and the survey questionnaire needs to be simplified. There is much more to be added on this topic, which is to be found in Chapter 8 on the design of survey instruments. Suffice it to say here that the design of the survey instrument is absolutely crucial for self-administered surveys.

The postal survey is normally conducted with no personal contact between the survey agency and the respondents. Recruitment is undertaken without any direct contact, and the survey form is sent out by post and then returned by post. However, a variation on this is when potential respondents are intercepted in some way or while undertaking some activity, are handed a survey, and asked to complete it. The survey may be designed then to be handed back, returned by post, or returned to a marked receptacle for completed surveys. Examples of such surveys include surveys that are conducted while a person is travelling, such as an airline survey, or an on-board bus survey. Other examples include surveys handed out while someone is shopping in a particular store or shopping centre, staying in a hotel, or eating in a restaurant.

6.4 Telephone surveys

The telephone interview survey is similar to a face-to-face interview survey, but is conducted over the telephone instead of in the presence of the respondent. In this case, respondents are contacted through a telephone call, and the interviewer asks questions on the phone and records the respondent’s answers. Most often, telephone interviews are performed when the respondent is at home, although surveys conducted with people at their workplaces may be carried out from time to time. These latter are generally less common, because they involve people responding during their employer’s time – a procedure that would be considered unfavourably by most employers.

As with the face-to-face interview, the interviewer again has a script for the interview that must be followed precisely, to ensure the comparability of the responses from each respondent. Again, the training of the interviewers is extremely important, so that interviewers are completely familiar with the survey and its goals and understand the importance of adhering to the specific wording of the survey questionnaire.

Many of the same issues arise with telephone interviews as arise for face-to-face interviews, with some being of even greater importance in the telephone survey. The amount of time required by the interviewer to record responses is of even more critical importance in the telephone interview, because the respondent cannot see what the interviewer is doing. Long pauses during a telephone survey are likely to lead to termination of the interview by the respondent, or at least to an increasing level of frustration and annoyance on the part of the respondent. Indeed, the need for telephone interviews to run smoothly without significant pauses, and to be completed relatively rapidly, is much greater than for face-to-face interviews. There is no possibility of eye contact during a telephone interview, so it is much more important that interviewers in a telephone interview have the ability to strike a rapport with the respondent through verbal contact alone.

Perhaps rather naturally, these requirements for the telephone interview led to the development of computer-assisted methods before the computer was used in any other form of surveying human populations. The computer-assisted telephone interview (CATI) is probably the most common form of application of computers in interviewing at this time. The CATI interviewer is equipped with a computer terminal, keyboard, and screen, and, while speaking on the phone with the respondent, is able to read the questions and instructions from the screen and enter responses directly via the keyboard to the computer. The computer assistance offers all the same advantages that are discussed for the CAPI survey; in particular, the ability of the computer to speed up the interview and to simplify progression through a survey with skip patterns is of enormous benefit to the time-critical nature of the telephone interview.

As is discussed in more detail later in this book, the CATI survey also offers considerable benefits in terms of supervision. It is possible for supervisors to be able to listen in to the entire interview, to see the responses entered into the computer by the interviewer, and to monitor all aspects of the interview. Such close supervision is not possible in face-to-face interviews, unless a supervisor accompanies each interviewer for each survey – a procedure that would be far too expensive to consider. A disadvantage of a telephone survey is that respondents cannot be shown visual prompts during the survey. In some surveys this does not present a significant problem, but in others it can be a significant disadvantage. For example, in many attitudinal surveys, respondents may be asked to rate various statements on a scale of importance or preference, or some other such scale. In a face-to-face interview, the interviewer can have a copy of the scale and show it to the respondent, from which the respondent chooses his or her responses to each of the attitudinal questions. In a telephone interview, the scale cannot be shown to the respondent, but has to be provided verbally. This requires the respondent to be able to recall the scale throughout the questioning, or requires the interviewer to repeat the dimensions of the scale from time to time. The repetition may eventually annoy the respondent and will certainly add to the length of the interview. Similar concerns may arise with questions such as income or occupation. For income questions, it is usually found best to request a respondent to indicate the income grouping into which his or her income falls. For occupation, which is most difficult to ascertain through an open-ended question, it is often found best to show a respondent a list of occupational categories and ask him or her to pick the one that fits best to his or her actual occupation. In both these cases a face-to-face interviewer would show a card, and a postal survey would provide the listing of all the categories from which the respondent can choose. A telephone interview does not permit showing the list of categories, and requires the respondent either to remember the list of categories or to be presented with the categories one at a time, and be asked to stop the interviewer when the appropriate one is reached. Again, this adds to the length of the survey, and it can also be a problem in a listing, such as occupation, that has no obvious order, for which the respondent may need to have the list repeated in order to find the correct category. Further issues relating to question design for a telephone interview are handled in Chapter 9 on question wording.

Finally, it must be noted that the ease of using the telephone to contact people has been recognised by many businesses involved in marketing products and services. As a result, people are often contacted by marketing companies and others in an effort to sell a product or service, often using an introduction to this by purporting to conduct some type of survey. This has led to an abuse of telephone surveys, which has had negative consequences throughout the world on the response rates to genuine telephone interviews. It has made it necessary for a genuine telephone survey to begin with an assurance that the interviewer is not selling anything and that this is a genuine survey. It has also led some countries to introduce a ‘Do not call’ registry (US Federal Trade Commission [FTC], 2003) or similar device to allow people to place their names on a list of telephone numbers that are not to be called in marketing efforts. Although most genuine surveys in the public interest are exempt from these ‘Do not call’ registers, many individuals do not understand this exemption and may become annoyed at the calls received for a genuine survey, believing that registering on this list should prevent any survey calls of any description. Where such registers do not exist, increasing numbers of private telephone subscribers are opting for unlisted or ‘silent’ telephone numbers, in the hopes that such procedures will reduce the number of such calls that they will receive. In addition, call screening devices, such as caller ID displays and answering machines, are used to screen out potential marketing and survey calls. All these procedures are making it increasingly difficult to use the telephone survey as an effective survey device. There are methods to circumvent some of these problems, which are discussed later in this chapter.

6.5 Internet surveys

The fourth type of survey, noted at the outset, is the internet survey. This survey is similar to the postal survey, in that it is self-administered and can be undertaken at the time of the respondent’s choosing. However, unlike the postal survey, it is potentially interactive, with earlier responses to survey questions conditioning later questions in the survey, and even offering the ability to refer back to the answer to an earlier question. In addition, unlike the postal survey, the internet survey offers the capability to embed cross-checks and validity checks on respondent answers, and it can repeat a question when an apparently illogical response has been obtained. Like telephone and face-to-face interviews, the internet survey can also have built into it the requirement that a given question is answered before the survey can proceed, offering a possible mechanism to reduce item nonresponse.

At present, however, an internet survey cannot be considered as a sole survey mechanism if the goal is to sample from the general population. Computer penetration, the internet, and the capability to use the internet are by no means universal in any country at the time of writing. Bonnel and Madre (2006) have documented the extent of internet penetration and people’s ability to use internet browsers, worldwide, as shown in Table 6.1.

As the table shows, although the growth in internet penetration is enormous, globally only 15 per cent of the population had internet availability as of the end of 2005. Even with North America at 68 per cent and Australia and New Zealand (Oceania) at 53 per cent, there are still substantial proportions of the population that do not have access to the internet. Although statistics do not appear to be available on the extent of internet use skills, it is reasonable to assume that a significant proportion of those with internet access lack the skills needed to undertake an internet survey, making this a survey method that offers probably less than 50 per cent coverage of the population, even in North America, and substantially less than 50 per cent in most of the rest of the world. At present, then, the internet must generally be considered as a survey mode that needs to be used in conjunction with one of the other methods of undertaking a survey, in order to provide adequate population coverage. Exceptions would be in the cases of surveys that do not seek to represent the population at large, and in surveys that are specifically related to internet use.

Considerable skill is needed to develop a good internet survey (Alsnih, 2006). Many of the design issues are covered in more detail in later chapters of this book. At this point, two specific issues that it is important to raise are as follows. First, the respondent should be informed at the outset of the survey (and before actually beginning the survey) how much time it is expected to take to complete the survey. This is particularly important if, once started on the survey, it is not possible to stop and resume from the same place. Second, it is advisable, for an internet survey that will take longer than about five minutes to complete, to provide the means for a person to stop entering responses and be able to return in another internet session and pick up where he or she left off.
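
As one illustration of the save-and-resume recommendation, the sketch below (in Python) shows how partial responses might be keyed to a resume code so that a respondent can leave the survey and later pick up where he or she left off. The token scheme, the file-based storage, and all names are assumptions made purely for illustration; a production survey platform would use its own session and database handling.

# Sketch of save-and-resume for a self-administered internet survey.
# The token scheme and JSON file storage are illustrative assumptions only.

import json
import secrets
from pathlib import Path

STORE = Path("partial_responses")
STORE.mkdir(exist_ok=True)

def start_survey():
    """Create a resume token shown to the respondent at the start."""
    token = secrets.token_urlsafe(8)
    (STORE / f"{token}.json").write_text(json.dumps({"answers": {}, "next_q": 0}))
    return token

def save_progress(token, answers, next_q):
    """Record everything answered so far and where to resume."""
    (STORE / f"{token}.json").write_text(
        json.dumps({"answers": answers, "next_q": next_q}))

def resume_survey(token):
    """Return saved answers and the index of the next unanswered question."""
    state = json.loads((STORE / f"{token}.json").read_text())
    return state["answers"], state["next_q"]

# Example: respondent answers two questions, closes the browser, returns later.
tok = start_survey()
save_progress(tok, {"q1": "yes", "q2": "bus"}, next_q=3)
answers, next_q = resume_survey(tok)
print(f"Resuming at question {next_q} with {len(answers)} answers saved.")

The essential design point is simply that partially completed responses are stored against an identifier that the respondent can use again, rather than being discarded when the session ends.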

Many of the same issues and comments apply to internet surveys as to postal surveys. For example, the same concern about instructions is also appropriate to the internet survey. Generally, people with access to the internet will want to move immediately to responding to the questions in the survey and will not trouble to read through extensive instructions. Therefore, once again, the survey questions should flow with a minimal need for instruction. Similarly, even though it may be possible that people with internet access have somewhat higher education levels than the public at large, it is still advisable to keep the language simple and straightforward. This will also assist in ensuring that questions are not ambiguous.

6.6 Compound survey methods

The preceding sections of this chapter have dealt with four primary methods of surveying human populations. However, in real-world applications it is customarily the case that elements of these methods are compounded to form an actual survey method. To understand the way in which this occurs, it is useful to think of the process for conducting a travel survey, which falls into four stages or steps, namely the pre-recruitment contact, recruitment, survey delivery, and collection of the data (Ampt and Stopher, 2005). This is illustrated in Figure 6.1.

6.6.1 Pre-recruitment contact

The pre-recruitment contact is not included in all surveys. Its use and application are discussed further in Chapter 11, and also in Chapter 20, relating to reducing nonresponse.

Table 6.1 Internet world usage statistics

World regions             Population       Internet usage  Internet usage    User growth  Penetration (% of
                          (2005 estimate)  (2000)          (November 2005)   (2000–5)     population)
Africa                      896,721,874      4,514,400      23,917,500        429.8 %       2.7 %
Asia                      3,622,994,130    114,303,000     332,590,713        191.0 %       9.2 %
Europe                      804,574,696    103,096,093     285,408,118        171.6 %      35.5 %
Middle East                 187,258,006      5,284,800      16,163,500        392.1 %       8.6 %
North America               328,387,059    108,096,800     224,103,811        107.3 %      68.2 %
Latin America/Caribbean     546,723,509     18,068,919      72,953,597        303.8 %      13.3 %
Oceania                      33,443,448      7,619,500      17,690,762        132.2 %      52.9 %
World total               6,420,102,722    360,971,012     972,828,001        169.5 %      15.2 %

Source: www.Internetworldstats.com/stats.htm (accessed 11 January 2006).

The pre-recruitment contact is usually in the form of a letter that explains the purposes of the survey, indicates the importance of responding to the survey, and outlines benefits that may be expected to arise from completion of the survey. Most often, this letter is sent by post, but it may also be delivered by an interviewer, or sent by courier. The latter two methods may be of particular use when the survey is extremely important and it is desired to impress upon prospective respondents how vital the survey is. It may also be used when postal delivery is considered to be less reliable, or when address information is not adequate to permit the local post office to deliver all such letters, or when it is combined with an in-field enumeration of addresses.

It is also possible for the pre-recruitment contact to be made by e-mail, when the main survey is to be done on the internet and the population to be surveyed is restricted to those with e-mail and internet access. It is important to note that the pre-recruitment contact should not be made by telephone. The purpose of this contact is to be non-intrusive and to permit those who receive it to make a decision not to be troubled further. A telephone contact would fail on the count of being intrusive.

In general, intercept surveys, such as those that might be done in a hotel or restaurant, at a shopping centre, on board a vehicle, and at other similar locations or by similar methods, do not have pre-recruitment contact. In these cases, it is generally not feasible to undertake such a contact.

Figure 6.1 Schematic of survey methods (the four stages – pre-recruitment contact, recruitment, survey delivery, and data collection – and the post, phone, e-mail, internet, and face-to-face options available at each)

6.6.2 Recruitment

Recruitment can be undertaken by any of the following methods: post, telephone, face-to-face contact, and e-mail. Recruitment can also be carried out via a pop-up window in an internet browser. Recruitment is the process of gaining the agreement of potential respondents to participate in the survey, and often requires them to provide contact details, or for the survey agency to provide a Web address or other information for respondents to use. In Chapter 4, the increasing use of a consent form was noted. Obtaining a signed consent form, when required, is usually also part of the recruitment process.

Recruitment is done by post most frequently when there is access to a good address list, but not to other means of contact. If the survey delivery is to be done by the internet, postal recruitment may involve sending out by post the URL for the survey site, together with the user ID and password to be used for initial access to the internet survey. If the survey delivery is to be done by telephone, the postal recruitment may involve sending out a notification of the time of the telephone call, or requesting the household to return a form or postcard indicating the telephone number on which contact is to be made. If the survey delivery is to be by post, then the recruitment and survey delivery will normally be combined, resulting in the full survey package being delivered through the post. If the survey is to be delivered face to face, then the postal recruitment is most likely to be a form or postcard to be returned, indicating the time (and, if appropriate, place) of the interview.

Recruitment can also be done by telephone, which is probably the most common method that has been in use in the last decade of the twentieth century and the first decade of the twenty-first century. One of the reasons for the popularity of telephone recruitment is the ease of generating a random sample for this method. Unlike postal recruitment, or the other methods yet to be discussed, no listing is necessarily required for drawing a sample. Instead, a method called random digit dialling (RDD) is often used. This method is explained shortly.

In telephone recruitment, a script is prepared and is used to solicit the cooperation of the household contacted to participate in the survey. If the survey delivery is to be by post, then the telephone recruitment will collect or confirm the address for delivery of the survey, and may also be used to collect some preliminary information about the person or household to whom the survey is to be sent. This preliminary information may be used to control the sample, either through screening or by tabulating the number of different types of persons or households recruited so as to determine when sufficient of a particular type have been recruited. If the survey delivery is to be by telephone interview, then the recruitment will normally proceed directly into the survey delivery. If the survey delivery is to be by face-to-face interview, the telephone recruitment will usually collect or confirm the address for the interviewer to visit and will also set an appointment for the time for the interview. If the survey delivery is to be by the internet, then the telephone recruitment will provide the URL, the user ID, and the password needed to access the survey website. As with the case of the postal survey delivery, preliminary data may be collected by telephone, both to control the sampling and to provide data on those who indicate that they will do the survey, but who subsequently fail to do so.

Recruitment may also be carried out by a face-to-face visit for any of the survey delivery methods. Again, as with telephone recruitment, the recruitment may collect certain details required for the postal delivery of surveys, or may collect a telephone number and set an appointment for a telephone interview, or may set an appointment for the face-to-face interview (usually this is done at the time of the recruitment visit), and will provide details of the website and how to access it for an internet survey.

Recruitment by e-mail is most often done for an internet survey, and is used to provide details of Web access for the selected e-mail recipients. However, there is no reason why e-mail recruitment should not be done for any of the other three methods of survey delivery. For a postal delivery of the survey, the e-mail could request the recipients to send back details of their postal addresses so that the survey can be sent out. Similarly, if the survey is to be delivered by telephone, the e-mail could request a preferred telephone number and offer a selection of times for contact, with the recipient picking one or two such times and sending this information back by e-mail. Finally, for a face-to-face delivery, e-mail recruitment could be used to obtain the address for the interview, and to set an appointment for the time of the interviewer’s visit. Notwithstanding these potentials, there are possible problems with using e-mail as a recruitment device. With the increasing amount of spam that is sent to e-mail accounts, an e-mail recruitment of this type may be relegated to spam without further examination. It also may be classified as spam and removed from the recipient’s incoming e-mail by various spam-blocking software products. At this time, it is probably best used for internet surveys to specific groups of individuals who will be likely to see and, potentially, act upon such e-mails. Use in connection with membership of a professional society, hobby group, listserve, etc. is most likely to be successful.

Finally, recruitment can be done by using a pop-up window in an internet browser. However, this may suffer in a similar manner to the e-mail, in that many recipients may have pop-up blockers that will prevent the pop-up window from being displayed. Again, in theory, the internet pop-up window could be used to recruit people for any type of survey delivery, but it is probably used at this point only to recruit for internet surveys.

Random digit dialling

Before proceeding further through the survey process, it is useful to digress and define the process of random digit dialling. This can take at least two possible forms.

In the first of these, it is assumed that telephone numbers are initially assigned by the telephone company on the basis of exchange codes. The exchange codes may represent two, three, or four digits that identify a geographic area. In the United States and Canada, it is common for most areas to be divided into three-digit exchange codes. In Australia, the country is divided into four-digit exchange codes. In the United Kingdom, most areas are divided into exchange codes, but these may vary in length from three digits (especially in London) to as many as five digits elsewhere in the country.

For RDD by the first method, it is necessary, first, to make a list of the exchange codes that cover the geographic area within which the survey is to be carried out. If the boundaries of these areas do not map exactly into the study area, then a pre-qualifying question will be needed in the recruitment step to find out where the person is located, to determine whether or not he or she is eligible to be included. If the survey area covers more than one area code, then the list will need to be of area codes and exchange codes.

To produce a random telephone number, the exchange code (and area code, if applicable) is first selected by a random procedure. This is followed by generating a second random number of the required number of digits to make up a telephone number. For example, in Australia, telephone numbers consist of the area code (two digits), an exchange code (four digits), and a telephone number in the exchange code (four digits). Thus, supposing the survey was to be conducted solely within one area code, the initial step is to select one of the candidate four-digit exchange codes in a random procedure, and then to select a four-digit number at random, which is appended to the exchange code. This ‘creates’ a telephone number. Of course, it cannot be known, at this stage, whether the number produced by this means is a working telephone number, and, if it is, whether the number is assigned to a business, a pay telephone, a residence, or some other entity.

The second method uses a selection of listed telephone numbers, usually from a published telephone directory, and creates a telephone number for use in the survey by adding or subtracting a random value from the selected telephone numbers. Most often, the number added or subtracted is one, but it is not necessary that this is the case. Numbers can be randomly picked from zero to nine and from zero to minus nine, and added to the listed telephone numbers.

Neither of these methods requires there to be an available list of all telephone numbers. Both methods will produce contact with people who have unlisted or ‘silent’ numbers. Both methods will also, potentially, generate a substantial number of ineligible telephone numbers. These will include numbers that are not in service, numbers that have never been assigned, numbers for businesses, numbers for pay phones, etc. In some countries, the telephone companies are willing to provide lists of blocks of numbers that are currently not in use, those that are reserved for telephone company testing, and blocks that are assigned to major businesses. In some countries, the yellow pages, providing telephone numbers for all businesses that have registered as a business with the telephone company, are available either on line or on a CD-ROM. When this is the case, the numbers generated can be matched against these sources of additional business numbers, and, if the numbers are required for a residential-based survey, the business number matches can be dropped from the available set.
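
The following is a minimal sketch, in Python, of the two RDD approaches just described, followed by a purge against a list of known business numbers. The exchange codes, the Australian-style digit lengths (four-digit exchange code plus four-digit suffix within one area code), and the business list are made-up values for illustration only.

# Sketch of the two random digit dialling (RDD) approaches described above.
# Exchange codes, digit lengths, and the business list are illustrative only.

import random

# Method 1: random exchange code plus a random suffix.
exchange_codes = ["9351", "9660", "9518"]            # hypothetical codes in one area code

def rdd_from_exchanges():
    exchange = random.choice(exchange_codes)
    suffix = f"{random.randint(0, 9999):04d}"
    return exchange + suffix

# Method 2: perturb a listed (directory) number by a random value from -9 to +9.
def rdd_from_listed(listed_number):
    offset = random.randint(-9, 9)
    return f"{int(listed_number) + offset:08d}"

# Purge step: drop generated numbers that match a known business list.
known_business_numbers = {"93511234", "96605555"}    # hypothetical

def generate_sample(n, listed_directory):
    numbers = set()
    while len(numbers) < n:
        candidate = random.choice(
            [rdd_from_exchanges(), rdd_from_listed(random.choice(listed_directory))])
        if candidate not in known_business_numbers:
            numbers.add(candidate)
    return sorted(numbers)

print(generate_sample(5, listed_directory=["93512000", "96601000"]))

Either generation method alone would normally be used in practice; the sketch simply draws from both to show that they produce numbers of the same form, which can then be screened for eligibility in the recruitment call.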

In some countries, especially the United States and Canada, there are commercially produced lists of random telephone numbers available for purchase by survey firms and agencies. At the last checking, the usual price for these lists was $1 per telephone number, although quantity discounts were sometimes available. Essentially, these commercial lists have been produced by using the first of the two methods explained here, and then non-working and non-residential numbers have been purged from the lists, using the procedures outlined in this section. Because the firms that produce these lists do this for the entire country and are continually updating their lists, they have been able to establish good contacts with the telephone companies and are able to access lists of non-productive numbers more readily than may be possible for the average survey firm or agency.

The value of this procedure is that it does not require an initial listing of all eligible telephone numbers, which would normally be an impossible task to produce. It also permits unlisted numbers to be included in the sample. As a result, it provides one of the best mechanisms for producing a truly random sample for a survey, provided that there is not a response problem when the residential telephone numbers are called. Nonresponse to a telephone survey is probably the largest problem, by far, with using this method.

6.6.3 Survey delivery

As noted earlier, each of the four principal methods of conducting a survey can be used for survey delivery – that is, getting the survey questionnaire in front of the respondents. Possibly the two most common methods still of delivering the survey are by post or by interview. However, it is important to keep in mind that the method of delivering the survey and the method of collecting the data can be quite different.

If the collection of data is to be done by post, telephone, or face-to-face interview, it is possible to deliver the survey form by post. In this case, the survey must be developed as a self-administered questionnaire, and respondents are expected to fill out the survey at their convenience or at a pre-appointed time, as the case may be. If the collection of the data is to be on the survey forms and they are to be collected by post, then a stamped addressed envelope should be sent with the survey questionnaire, with instructions for respondents to complete the survey, place it in the envelope, and drop it in a post box. It should be stressed that the posting back of the survey will not cost the respondent anything.

If the data are to be collected by interview, whether by telephone or by an interviewer visit, then the survey forms are to be filled out and kept in readiness for the interviewer. At the appointed time, an interviewer will either knock on the door or telephone the household.

A second method to deliver the survey is by having an interviewer take the survey forms to the address and leave them there for completion. An advantage of this method over postal delivery is that it offers the interviewer an opportunity to provide instructions on how to complete the survey. The personal delivery of the survey will also help to reinforce the importance of the survey, though it also, of course, contributes to making the survey more expensive. Again, interviewer delivery of the survey form can permit any of post, telephone, or face-to-face interview for collection of the data.

A survey may also be delivered by e-mail, as an attachment that the recipient is asked either to print off and fill out by hand or to open in a particular software and complete on the computer. If the survey form is to be printed, then any of post, telephone, and face-to-face interview can be used to collect the data. If the survey form is to be completed on the computer, it will normally be returned by e-mail or by uploading to a website.

If the survey is made available on an internet site, it will usually be designed for return by internet. Indeed, the process of completing the survey on the internet website will usually result in the data being recorded at the time of completion. It is conceivable, though probably not very sensible, that a survey that is completed on the internet could be printed out and returned by post, picked up by an interviewer, or even collected over the telephone. However, such a method of collection removes just about all the advantages of using the internet in the first place, and seems, therefore, to be inappropriate. If the survey delivery is by internet, then this means that the respondent goes to a website and fills out the survey on that website, simultaneously recording the data on the internet server where the survey is located. Hence, this method of survey delivery also dictates the method of data collection.

6.6.4 Data collection

The final phase is that of the actual data collection. Again, as Figure 6.1 shows, there are numerous possible ways in which this can be done, with few of them conditioned on the way in which the survey was delivered. A survey that was delivered by post, and is, therefore, a paper and pencil survey, can be collected by having the recipient send it back by post. However, it is also possible to send an interviewer to the recipient’s address and have the interviewer collect the survey. When this is done, it is also possible to have the interviewer check the completeness of the survey and answer any questions that the respondent may have about how to answer specific questions. Although it might be considered a possibility to have the interviewer ask the respondent all the survey questions verbally and enter these either to a paper survey or into a computer, in a CAPI, these options are really rather unlikely ones. Respondents would be likely to be annoyed that, having already written down their answers to the questions, they were then being asked to repeat all the information to an interviewer to whom they could as easily hand the completed form.

It is possible to collect the data by telephone. In this case, the survey will have been completed in a pencil and paper version, and the telephone interviewer now goes through all the questions, with the respondent reading back to the interviewer what is recorded on the survey form. As has been noted previously in this chapter, this procedure can be done with the telephone interviewer writing the information down on a paper form, or by having the interviewer enter it directly into a computer, in a CATI. This method does not have the same disadvantages, of having respondents read back the information they have already written down, as it would if the interviewer were present with the respondent. It has also been found to generate a higher response rate than leaving respondents to post back the completed forms. However, it is not generally known how many times respondents are actually reading from filled-out surveys, as compared to responding for the first time when the interviewer asks the questions on the telephone, even though the question may be asked as to whether the respondent is using a previously completed form.

The same options apply if the survey form was left by an interviewer, when the survey was also a paper and pencil survey, or if the delivery was by an e-mail attachment that was to be printed out and filled out by pen or pencil. The completed survey can be collected by post, by the return of the interviewer, or by telephone. In addition, as for the postal delivery of the survey, the telephone collection of the survey can be to a paper and pencil or a CATI survey.

Of course, a survey can be conducted without delivering a physical survey form. If the survey is conducted by interview, whether telephone or face to face, without providing a form for respondents to look at or complete beforehand, this will normally be the method by which the data are collected. In other words, if a physical survey form is not delivered to the respondent, then the survey is delivered by means of verbal questions, to which verbal answers are given. Because the answers are provided verbally, the interviewer must either write them down, enter them into a computer, or record them on a recording device, such as a video camera or tape recorder. The telephone interview permits collection of the data only by written record (of the interviewer), computer recording through keyboard entry, or audio recording. If the method of survey delivery is a face-to-face interview, then all the methods of recording the data are available.

6.6.5 An example

It is useful to consider an example of a common method of mixing the different procedures discussed in this chapter. In household travel surveys in the United States, it has been customary since the mid-1980s to use mixed methods of conducting the survey. In these surveys, there may or may not be a pre-notification letter sent out to potential respondents. However, if one is used, it is usually sent by post to those households within the sample for which it has been possible to ascertain an address. Typically, this will be less than half the total sample. The next step is to use RDD for sampling and recruit households by telephone. The addresses for the pre-notification letter would have been obtained by matching the RDD-generated phone numbers to directory listings, and sending the letter only to those households for which a match was obtained.

Recruitment by telephone is used to obtain an address for the household (or confirm the one found through matching the telephone number), introduce the purposes of the survey, especially for those who did not receive the pre-notification letter, collect some data to ensure that the household is in the geographic study area, and to determine such things as the number of people in the household, the number of vehicles available for the use of household members, and the address to which survey materials can be sent. Following successful recruitment, the household is sent a package of survey materials by post. During the recruitment telephone call the household was instructed when to complete the survey forms, and this information is reiterated in the survey package. Household members are asked to fill out the survey forms and then to have them near the telephone, for retrieval by phone at a pre-appointed time.

At the time that was previously agreed for data retrieval, the household is telephoned again and, using a CATI procedure, the telephone interviewer proceeds to collect the data that have been recorded on the survey forms, entering them directly into the computer. Thus, this procedure uses a combination of postal and telephone surveys. In some cases, in addition to retrieving the data by telephone, households are still asked to post their completed survey forms back to the survey agency. This is usually done as a means to check the accuracy and completeness of the telephone retrieval.

6.7 Mixed-mode surveys

In the preceding discussion, it is assumed that all respondents to a survey are surveyed using the same method, whether it is a ‘pure’ survey method or a compound one. It is known that the method of administering the survey can have effects on the answers provided by respondents (Bonnel, 2003; Goulias, 2000). This has led to the conventional approach, which assumes that one survey method will be used for the entire sample. As is mentioned elsewhere, response rates to surveys are eroding, especially in developed countries, where busy lifestyles, telephone blocking technologies, and concerns over privacy conspire to make it increasingly difficult to reach target populations. This erosion of response rates leads to the notion that different population segments may need to be surveyed using different methods or survey modes. The introduction of internet surveys adds further impetus to this. As has been noted, the internet is still not available to significant elements of the general population, so that internet surveys lack complete coverage. However, internet surveys could potentially be used in conjunction with another mode of surveying, such that those with internet access use that mode and those without use some other mode.

Tailoring the survey mode to the segment of the sample is an idea that has been put forward by a number of survey researchers. The problems with using such designs still merit research. As noted by Bonnel (2003), ‘[T]here have been very few experiments with a rigorously controlled methodological framework that aim to compare different survey modes.’ Indeed, it remains the case that most available comparisons are of different methods used at different times in different places, rather than a comparison within a single population at a single point in time.

An example of mixing modes in a survey is provided by the Sydney Household Travel Survey (Battellino and Peachman, 2003). In this survey, data are collected by means of a face-to-face interview. However, if the results from a particular household are incomplete because one member of the household has not been interviewed face to face, then attempts are made to complete the interview by telephone.

Dillman and Tarnai (1991) have developed a useful taxonomy of mixed-mode surveys, which shows that there are actually a substantial number of different ways in which survey modes can be mixed. These are shown in Table 6.2. The Sydney Household Travel Survey clearly falls into category 6 of this table. The compound methods discussed prior to this fit into category 4. The types of mixed-mode surveys of concern in this section of this chapter fall primarily into categories 1 and 6.

One of the principal arguments used for mixed-mode surveys of types 1 and 6 is to increase response rates. There are a number of challenges and issues in this regard. First, as Zmud (2006) points out, one of the foremost problems for mixed-mode surveys is the availability of sampling frames. As has been discussed already, there are problems for telephone numbers as a sampling frame, arising from unlisted or silent numbers, ‘Do not call’ registers, and the use of screening devices such as caller ID and answering machines. In many parts of the world, address lists are difficult, if not impossible, to access, if they even exist. For postal contact, this raises serious issues of coverage. It also hampers the use of face-to-face interviewing, which is further impeded by the increasing construction of ‘gated’ or ‘walled’ communities, making such places off-limits to door-knocking interviewers. A similar problem exists with secure blocks of flats. Even the internet is not immune to these problems, with the emergence of ‘Do not contact’ registers in some parts of the world, or with some internet providers.

The second issue with mixed-mode surveys, also noted by Zmud (2006), is that different types of data may be collected by different survey modes. For example, in the area of transport surveys, travel data have traditionally been collected by self-report diaries. An emerging survey mode is to equip respondents with a passive Global Positioning System (GPS) device that they carry with them while they undertake their daily activities (see the further discussion in Chapter 22). The GPS device requires nothing other than to be charged each night. If both the diary and the GPS device are used in the same survey, how compatible are the data from these two modes? Is it really possible to combine the data, and, if so, how does one do so? These are questions that have not been answered in the survey literature. Apart from this rather obvious difference, there is other evidence, as documented by Bonnel (2003), Bradburn and Sudman (1979: 1–13), Holbrook, Green, and Krosnick (2003), and Sudman and Bradburn (1982: 261–80), among others, that different survey modes may produce different responses to the same questions. In other words, the survey mode may elicit different responses from people, perhaps because of the nuances of the way in which the survey is presented. There is likely to be a significant difference between the way in which people process information that they obtain visually and the way they process information that is presented aurally only. When people read, they tend to read what they expect to see, rather than what is written. When people hear something asked, they also tend to hear only partially, but the misinterpretations from reading are likely to be different from those from hearing. In addition, when an interviewer is present, there is more chance that the interviewer can correct a misperception on the part of the respondent arising from his or her not having heard exactly what was asked. On the other hand, when a survey is self-administered, no one is available to correct the misinterpretations from misreading questions or misunderstanding the meanings of phrases.

Table 6.2 Mixed-mode survey types (based on Dillman and Tarnai, 1991)

(1) To collect the same data from different respondents within a sample.
    Example: Survey early respondents by post; use telephone or face-to-face surveys for those who cannot be contacted by post.

(2) To collect follow-up panel data from some respondents at a later time.
    Example: Postal survey for initial contact; telephone survey for the follow-up panel study.

(3) To collect the same data from different sample frames in the same population for combined analysis.
    Example: Collecting data from households and from persons riding buses, the households being sampled by RDD, and the bus passengers from intercept surveys.

(4) To collect different data from the same respondents during a single data collection period.
    Example: Initially recruit and interview respondents by telephone, followed by a postal survey to the same respondents.

(5) To collect data from different populations for comparison purposes.
    Example: Use of target and control groups to evaluate the effect of a treatment administered to the target group.

(6) To increase the response rate for another mode.
    Example: Give respondents the option of responding by several modes – e.g., internet, telephone, post.

Source: Adapted from Zmud (2006).

Moreover, questions may, of necessity, be asked in different ways when different survey modes are used. For example, in face-to-face interviews and postal surveys, respondents may be presented with seven- or nine-point scales for attitudinal questions, whereas, in a telephone survey, it may be necessary to restrict such questions to five-point scales (Dillman, 2000). Furthermore, telephone interviews may often ask for a ‘Yes’/‘No’ response, whereas an internet survey may ask for a person to mark all the answers that apply (Smyth, Christian, and Dillman, 2006). Research shows that the differences in these question formats produce significantly different responses (Rasinski, Mingay, and Bradburn, 1994; Smyth et al., 2006). Indeed, Smyth, Christian, and Dillman (2006) conclude that there is little evidence of a survey mode effect, per se, but much more evidence of a question format effect. This suggests that changing question formats between survey modes should be avoided as far as possible, although it must be recognised that this is not always feasible, because some question formats will not work in some survey modes. Smyth, Christian, and Dillman (2006) suggest that survey researchers and designers should not continue to assume that certain single-mode question formats are appropriate in situations in which mixed-mode surveys are to be used, and should, rather, search for a common format that works in all survey modes.
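
One way to work towards such a common format is to store a multiple-response item as one forced-choice yes/no variable per option, so that an internet ‘tick all that apply’ question and a telephone series of yes/no items yield the same data record. The sketch below illustrates this idea in Python, with hypothetical option labels; it is an illustration of the principle, not a prescription drawn from the studies cited.

```python
# Sketch only: storing a multiple-response question in a common format across modes.
# Option labels and variable names are hypothetical.

OPTIONS = ["car", "bus", "train", "bicycle", "walk"]

def from_internet_checklist(ticked):
    """Internet mode: the respondent ticks any options that apply."""
    return {opt: ("yes" if opt in ticked else "no") for opt in OPTIONS}

def from_telephone_items(answers):
    """Telephone mode: the interviewer asks a forced-choice yes/no item per option."""
    return {opt: answers.get(opt, "no") for opt in OPTIONS}

# Both modes end up with one yes/no variable per option, so the two data sets
# can be pooled without a format conversion at analysis time.
web_record = from_internet_checklist({"car", "walk"})
cati_record = from_telephone_items({"car": "yes", "bus": "no", "train": "no",
                                    "bicycle": "no", "walk": "yes"})
print(web_record == cati_record)  # True when the underlying answers are identical
```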

Given these concerns and issues for mixed-mode surveys, the question naturally arises as to why they would even be considered. There are probably two major reasons for considering them. The first is coverage of a population. As different elements of the population become harder and harder to contact, mixed-mode surveys become almost a necessity if one is to include all segments of the population. For example, people who live in gated communities or secure blocks of apartments are also more likely to be people with unlisted telephone numbers. In a survey that is to be conducted using face-to-face interviews, such people are very hard to include within the sample. Resorting to alternative survey modes, such as RDD telephone contact, internet surveys, or postal surveys, may be the only way in which these population segments can be included in the sample coverage. The other major motive for undertaking mixed-mode surveys is that of attempting to increase the response rate and reduce bias in the survey. However, there are many unanswered questions as to the appropriateness of mixed-mode surveys for this purpose, as discussed by Ampt and Stopher (2005).

6.7.1 Increasing response and reducing bias

Increasing response and reducing bias are important goals for any survey. The question that must be addressed is whether or not mixed-mode surveys are a possible way to achieve them. As is discussed elsewhere in this book, response rates are falling almost throughout the world in the early twenty-first century. Falling response rates bring with them the likelihood that the samples obtained from voluntary surveys are increasingly biased – i.e., that the people who respond to the survey are significantly different from those who do not respond, so that the behaviours to be measured in a survey are no longer representative of the population at large. Error and bias are discussed and differentiated in Chapter 3. The necessity of avoiding or minimising bias is dealt with in that chapter.

The question to be addressed here is whether a mixed-mode survey is an effective way to reduce bias and increase response rates. Dillman et al. (2001) conclude that, while a mixed-mode survey using telephone interview, mail, interactive voice response, and the internet did improve response, it did not appear to reduce nonresponse error, based on demographics. In the field of transport surveys, there are a few instances of comparisons of survey methods, and also a few nonresponse surveys that shed some light on this question. A rather useful aspect of transport surveys is that a key variable to be measured is the number of trips or journeys made per day by survey respondents. This key variable can be used as an indicator of whether or not different modes produce different results, and whether there might be evidence of a reduction in bias. Bonnel (2003) suggests that postal surveys collect fewer trips than telephone surveys, but that telephone surveys collect fewer trips than face-to-face interviews. However, these were comparisons of different survey modes in different situations, and they are not comparisons of mixed-mode surveys. On the other hand, experiments using GPS to measure the travel that people also report in a telephone interview have shown that telephone interviews seriously under-report travel compared to what is measured by the GPS device (Wolf et al., 2003; Zmud and Wolf, 2003). In contrast, using GPS with a face-to-face interview survey (Stopher, Xu, and FitzGerald, 2005) showed a much lower discrepancy between the interview results and the GPS results. This bears out the suggestions of Bonnel (2003) that telephone interviews are less accurate than face-to-face interviews.
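
As a simple illustration of how the trip-rate indicator can be examined, the following sketch (Python, with invented data values) compares mean reported trips per day with the trips detected by a GPS device for the same respondents, and computes an apparent under-reporting rate.

```python
# Sketch only: comparing reported trip rates with GPS-measured trip rates
# for the same respondents. Data values are invented for illustration.

respondents = [
    # (person id, trips reported in interview, trips detected by GPS device)
    ("p01", 3, 4),
    ("p02", 2, 2),
    ("p03", 5, 7),
    ("p04", 0, 1),
]

reported = sum(r for _, r, _ in respondents)
measured = sum(g for _, _, g in respondents)

reported_rate = reported / len(respondents)
measured_rate = measured / len(respondents)
under_reporting = 1 - reported / measured

print(f"Mean trips/day reported: {reported_rate:.2f}")
print(f"Mean trips/day from GPS: {measured_rate:.2f}")
print(f"Apparent under-reporting: {under_reporting:.0%}")
```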

The dilemma in using mixed-mode surveys can be illustrated quite simply. Suppose that a survey is being undertaken to assess a particular type of behaviour that is affected by such characteristics as the size of the household, the ages of the members of the household, the presence of children in the household, the income of the household, the owner/renter status of the household, the education level of members of the household, and the ownership of vehicles in the household. Suppose further that the survey is conducted by face-to-face interviews, but with the option that those who do not respond to the face-to-face interviews may complete the survey either by a telephone interview or on the internet. Suppose, further, that the face-to-face interview is found to be biased against people who are aged between fifteen and forty-nine, and those who live in rented property. It is this bias or lack of coverage that the mixed-mode survey is intended to reduce.

In examining the telephone interview survey, it is found that this is biased against people in low- and high-income households, those with less than a high school education, older persons, large households, and young single people. The internet survey is biased against low-income people, those with less than a college education, those living in rented property, and those who do not own a car. Thus, each survey mode has brought its own coverage biases with it. It is unclear that adding the results from these three different survey modes actually reduces coverage bias. If the behaviour of interest to the survey is highly correlated with these measures and also with whether or not a person responds to the survey in any mode, then the addition of the other survey modes will probably not improve the bias situation, and may even exacerbate it.
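
One simple way to see how each mode contributes its own coverage bias is to compare the demographic mix of respondents obtained by each mode with census benchmarks. The sketch below does this in Python, with invented shares and group labels.

```python
# Sketch only: checking coverage bias by survey mode against census benchmarks.
# Population groups and shares are invented for illustration.

census_share = {"renter": 0.30, "low_income": 0.25, "no_car": 0.12}

mode_share = {
    "face_to_face": {"renter": 0.18, "low_income": 0.26, "no_car": 0.11},
    "telephone":    {"renter": 0.28, "low_income": 0.15, "no_car": 0.10},
    "internet":     {"renter": 0.20, "low_income": 0.14, "no_car": 0.05},
}

for mode, shares in mode_share.items():
    gaps = {grp: shares[grp] - census_share[grp] for grp in census_share}
    print(mode, {grp: f"{gap:+.0%}" for grp, gap in gaps.items()})
# A persistent negative gap for the same group in every mode indicates that
# adding modes will not, by itself, remove the coverage bias for that group.
```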

This problem becomes more evident when a particular focus for this hypothetical survey is postulated. Suppose, for example, that the survey in question is a personal travel survey. Suppose now that the people who are least likely to respond to the face-to-face interview are those who travel a lot, and those who travel hardly at all. The first group are not there in the sample because they are never at home when the interviewer calls, or they are just dashing off to some other place and do not have time for the interview. The second group are not in the sample because they do not feel that their behaviour is relevant, and are not amenable to argument to take part in the survey. The first of these groups are also in the fifteen–forty-nine year age group, while the second tend to be elderly and also those living in rented properties. Now, the telephone survey option is offered. The high-income earners and the large households are among those who, again, travel a great deal. The low-income earners and the elderly are among those who possibly travel much less than others. A new bias has been introduced with this survey that is still a bias against both extremes in travel behaviour. Most of the biased coverage of the internet survey is, again, towards those who are likely to travel less (non-car-owning households, low-income households, and renters), so that none of these three survey modes has actually permitted the survey to penetrate further into the biases in the behaviour that is the subject of the survey.

Similar results to this hypothetical example are reported through actual nonresponse surveys, in which the nonresponse usually demonstrates a bias against or for the behaviour that is being measured. In a nonresponse survey relating to a Dutch survey about licit and illicit drug use (Abraham, Kaal, and Cohen, 2002: 81–97), it was found that those who refused to participate tended to have lower rates of lifetime cannabis use, while those who were not at home when the interviewer came to the house were more likely to be high lifetime cannabis users and to have higher rates of recent alcohol use. In this same exercise, a mixed-mode survey was used, and it was found that one method would tend to overestimate actual drug use, whereas the other would underestimate it. However, because of differences in the types of respondents to each survey mode, these two effects are not self-cancelling. In a Dutch Time Budget Survey (van Ingen, Stoop, and Breedveld, 2008), it was also found that response to the survey was biased by the behaviour under measurement. In this case, it was found that people who were very busy were actually over-represented in the sample, while those who were not busy were under-represented. Using a mixed-mode survey approach to correct the coverage might not necessarily penetrate to those who were unresponsive to the initial survey mode.

It can be concluded that more studies are needed of the effects of mixed-mode surveys. It is not sufficient simply to look at the demographics of those who respond and those who do not respond to particular survey modes. Rather, it is necessary to have an understanding of the relationship between the survey mode and the behaviour or attitudes that are the subject of the survey. If mixed-mode surveys are warranted, there are also several rules that need to be implemented to ensure that survey mode effects are minimised. Chief among these is the adoption of question formats that are as similar as possible in all modes. For example, a format that may be easy to implement in an internet survey (such as ‘Tick all that apply’) may not work in other survey modes, such as a telephone interview. It is necessary, then, to adopt for the internet survey a question format that works in the telephone mode when both these modes are to be used. Second, a single sampling frame should be used, to ensure that the mixed-mode surveys do not relate to different populations. This may mean that it is necessary to undertake the initial recruitment by a common method, offering the alternative survey modes only for the completion of the post-recruitment survey. Third, it is essential that the survey researcher understands the relationship between willingness to participate in the survey and the behaviour to be measured. This is essential to choosing the right mix of survey modes and to determining what potential effect the mixed-mode survey will have on the findings of the survey.

Notwithstanding the potential pitfalls and difficulties surrounding mixed-mode surveys, it is almost certain that these types of surveys will become more prevalent, in an effort to improve response rates and decrease coverage error. If designed carefully and administered well, mixed-mode surveys probably can improve survey accuracy. However, undertaking such surveys without adequate design care and without an understanding of the relationships between response and survey mode is likely to lead to later difficulties with the data, and could well result in a mixed-mode survey that fails to achieve the goals of improved coverage and increased accuracy.

6.8 Observational surveys

Observational surveys fall into three main categories: surveys in which a human observer watches members of the target population and notes down relevant information on a previously prepared survey form; surveys in which a human observer watches members of the target population and records the required information through machine-assisted procedures; and surveys in which there are no human observers, but the information required is recorded completely automatically for processing later.

In the first and second of these, it is necessary to determine what the observer needs to be able to see and note, and to determine an appropriate location from which this can be done, while also trying to ensure that the presence of the observer does not change the behaviour that is to be observed. Attention must be paid to the order in which characteristics for observation can be observed and whether it is possible for one observer to observe everything that is required. In some instances, it may be necessary to use two observers, with each one tasked to observe different characteristics. Machine assistance may be provided, whether this is as simple as counting devices or as complex as a pre-programmed computer with an attached video camera. Usually, when mechanical counters are provided, it is necessary to record the readings on the counters at periodic intervals, especially in case of failure on the part of the machine. Cameras, audio recorders, computers, and other devices may also be used by observers to obtain the observations of the population.
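
As an illustration of the periodic-reading procedure, the sketch below (Python, with invented readings) converts cumulative counter readings taken at intervals into interval counts and flags readings that suggest the counter may have failed or been reset.

```python
# Sketch only: turning periodic readings from a cumulative mechanical counter
# into interval counts and flagging possible device failures. Readings are invented.

readings = [
    # (time of reading, cumulative count shown on the counter)
    ("07:00", 120),
    ("08:00", 310),
    ("09:00", 455),
    ("10:00", 455),   # no change - the device may have stopped counting
    ("11:00", 610),
]

for (t0, c0), (t1, c1) in zip(readings, readings[1:]):
    interval = c1 - c0
    flag = ""
    if interval == 0:
        flag = "  <- check device: no counts recorded"
    elif interval < 0:
        flag = "  <- check device: counter reset or misread"
    print(f"{t0}-{t1}: {interval} counts{flag}")
```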

The third method relies entirely on some type of mechanical or electronic device, or a combination thereof. For example, it is increasingly common for closed-circuit television (CCTV) to be used for the surveillance of properties, or even, in some parts of the world, public areas of streets, parks, etc. The only human intervention that may be required is to change the video tapes from time to time. Once the data have been recorded, they may be viewed when and if needed. Other devices may record information only when a particular behaviour takes place. For example, enforcement cameras on automatic toll gates will photograph the licence plates of vehicles that fail to pay a toll, but will not do so for vehicles that have paid the required amount. Therefore, the only data recorded will be the licence plates of those vehicles that did not pay the required toll. In a similar way, speed cameras will photograph the licence plates only of vehicles that exceed the speed limit.

Again, the use of automatic recording devices for observational surveys requires attention to be paid to the correct siting of the devices, so that they are able to collect the desired data and are not likely to influence the behaviour to be observed, unless that is part of the survey purpose. The range of devices that can be used for such observational surveys is limited only by human ingenuity. In the author’s specialist area of transport and traffic, there are a huge number of devices that can be used, and the number seems to increase year by year as new devices are created or adapted for some new survey purpose.


7 Focus groups

7.1 Introduction

A focus group represents a type of qualitative research, with roots in sociology. Focus groups were originally conceived by Robert K. Merton (Kaufman, 2003), although the name was apparently coined by Ernest Dichter (Ames, 1998). Focus groups have been used widely in marketing and advertising, but they also play an important role in the design and the evaluation of surveys. It is these latter uses of focus groups that are the primary focus of this chapter.

It is not the intent of this book to provide an exhaustive treatment of focus groups but, rather, to outline briefly what they are and how they are conducted, and to discuss where they fit in the survey design process and the evaluation of surveys. Numerous good resources are available that describe precisely how to design and conduct a focus group (e.g., Krueger and Casey, 2000; McNamara, 1999; inter alia). This is not the purpose of this chapter. Focus groups have been used for some time in marketing (Morgan, 1988), as a means to test new concepts or ideas, to help design new products, etc. In the United States, the automotive industry has used focus groups for many years to assist in designing new model cars. Focus groups are also used in the medical profession (Powell and Single, 1996), though such use is in its infancy, according to Oz et al. (2006). Indeed, Oz et al. (2006) quote several different areas of medicine in which focus groups have been used, such as academic nursing, psychiatry, paediatrics, and behaviour-changing processes, such as quitting smoking. There are also references that can be found to investigations into cancer treatments (Frazier et al., 2010) and other specific patient areas and clinical studies. Focus groups are used extensively in the social sciences. However, the specific use of focus groups that is the subject of this chapter is that of assisting in the design and evaluation of surveys. The different disciplines that use focus groups also use different approaches to the design of focus groups and the mechanisms for using them. In this chapter, the ways in which they can be used to assist in the design and evaluation of surveys are considered specifically, and those approaches that are best suited to this are discussed. Other uses of focus groups may be more suited to using different approaches.


The focus group is clearly a survey method in its own right. It is part of a battery of in-depth and qualitative survey methods that have been developed to help shed light on a number of different issues in a variety of research fields. With entire books written on the topic of how to design and use focus groups (examples include Barbour and Kitzinger, 1998; Bloor et al., 2000; Fern, 2001; Krueger, 2000; and Morgan, 1996), it would be quite inappropriate to attempt to cover the entirety of this topic in a single chapter. Therefore, discussion is restricted here to a particular use of focus groups, which is to assist in the design of more traditional qualitative and quantitative surveys and to provide a means to evaluate a survey after it has been conducted.

7.2 Definition of a focus group

A focus group is an ‘in-depth, qualitative interview with a small number of carefully selected people’ (American Statistical Association [ASA], 1998), who are brought together to discuss a specific topic. It is also defined as ‘a research technique that collects data through group interaction on a topic determined by the researcher’ (Morgan, 1996). Usually a focus group consists of between six and twelve people, although it is often best to keep it to no more than ten, and it will last about ninety minutes or so. It involves directed discussion with a moderator, whose role is to pose the questions and ensure that the group does not get sidetracked. As Morgan (1996) also points out, a focus group is specifically designed for data collection, which distinguishes it from such groups as those used for ‘therapy, decision making, education, organising, or behaviour change’. It is not a group interview, which involves interviewing a number of people at the same time, with the emphasis on questions from the researcher and responses from the participants (Gibbs, 1997). Group interviews also do not allow interactive discussion among the participants, whereas focus groups are designed to foster interactive discussion.

7.2.1 The size and number of focus groups

The size of the group is quite critical to the success of a focus group (Cameron, 2000). If there are too few participants, the discussion will become rather limited and the extent of any interaction will be much less than desired. If the group is too large, then some members will not have the chance to participate fully in the discussion. Cameron suggests that four is the minimum number that can be used successfully, while ten is probably a reasonable maximum.

Members of the focus group are usually selected on the basis of the homogeneity of the group with respect to the issue to be discussed, or the similarity of the members on some demographic characteristics of relevance. Unlike the majority of surveys, focus groups are not selected with an attempt to provide representativeness in terms of a particular target population. However, it is also rather common for several focus groups to be organised to discuss a particular issue, such that each of the groups is selected with the aim of obtaining differing views on the same issue. The number of focus groups is usually small; too many groups will lead to the same types of discussions arising within different groups, providing no new data on the topic under study. Morgan (1996) suggests that there should be about three to five groups, although Cameron (2000) points out that, when less standardised questions are used, or there is a low level of moderation, more groups may be needed, because of the variability between the groups. Indeed, Cameron notes one study that used as many as thirteen groups.

As an example, focus groups are sometimes used by political pollsters. In such cases, different focus groups might be held for men and women, for those supporting different major political parties, and for younger and older voters. Because the goal of focus groups is to have members engage in discussion, it is important to consider carefully who the members of a particular group should be, so that the purposes of the group are not undermined by conflict or total polarisation. There is an issue here concerning the construction of focus groups, as to whether they should be naturally occurring groups, or whether they should be constructed by the researcher (Krueger, 1994). For the most part, it would appear that constructed focus groups are best (Cameron, 2000). It has been noted that discussion of controversial or sensitive topics is usually enhanced by bringing together people who share some common key characteristics, whether this be age, ethnicity, educational level, or whatever. When research is place-based, which may occur in a number of areas of research, it may be unavoidable that some members of the focus group will know one another. This may raise problems about confidentiality, especially because people tend to over-disclose rather than under-disclose information in a focus group setting (Morgan and Krueger, 1993, cited in Cameron, 2000).

7.2.2 How a focus group functions

Focus groups are conducted with a moderator, who is normally a person who has been trained and is skilled in maintaining good group dynamics. It is not necessary, although it may sometimes be the case, that the moderator is a specialist in the topic area of the focus group. Indeed, there are many instances in which it may be preferable that the moderator is not particularly knowledgeable in the topic area. The roles of the moderator are to keep the discussion focused on the topic area, to pose questions from time to time that will help to move the discussion along, and to ensure that, as far as possible, all members of the group participate in the discussion.

A focus group session requires several questions to be devised to focus the discussion of the group. It is generally recommended that about five or six questions be posed (McNamara, 1999). However, it is also advisable to have a set of secondary questions developed, in case the initial questions do not produce the in-depth, interactive discussion that the researcher desires. The questions must be designed carefully so that they elicit discussion rather than simple ‘Yes’ or ‘No’ answers (ASA, 1998). Questions should also not be leading questions – i.e., suggesting the opinion of the conveners of the focus group, or leading to the answer for which the conveners are hoping. This means that the questions need to be easily understood and clearly stated, neutral in terms of the issues for which the focus group has been assembled, sequenced so that the easier questions are raised early and more difficult ones later, and so that, if applicable, less intimate issues are discussed before more intimate ones (ASA, 1998). Some considerable time and effort needs to be expended on the specific wording of questions, so that they engender the discussion that is desired, without leading or confusing the participants. It may also be useful to have discussion aids. For example, in designing a survey, prototype mock-ups of the survey may be useful.

It is also very important that, once the moderator has put a question before the group, the moderator metaphorically steps away and leaves the group to talk to one another about the question that has been posed. As pointed out in the ASA pamphlet (ASA, 1998), this is a completely different dynamic from that of an interview. In the case of a focus group, it is a free discussion, with each participant being encouraged to state and debate his or her view. The moderator’s role is to make sure that the group does not go off on a side issue or unrelated topic, while still making sure that there is scope for the group to raise issues that were not thought of by the conveners but that are germane to the topic area. The moderator also has a role in making sure that each member of the focus group is heard, so that all have an opportunity to engage in the discussion.

Unlike most other forms of survey, it is essential that the focus group is recorded. This may be solely an audio recording, or may be both video and audio recording. The latter is preferred, because of the richer information obtained from both visual and audible records. In many cases, specially equipped rooms are used for holding the focus groups. These rooms may have a wall that is a one-way mirror and permits observers to sit on the other side of the wall and observe the interactions in the group, while also listening in to the discussion. However, modern video and CCTV is largely making the use of one-way mirrors obsolete. In addition to the audio and video recording that will usually be under way, the observers may also take notes during the focus group. Such note-taking can be of considerable value, because individual observers pick up specific nuances in the discussion that may be very revealing, especially when reviewed in the context of the recordings. The room itself will usually be equipped with both audio and video recording capabilities. Ideally, the room will have a table, with chairs surrounding it, in such a manner that the members of the group can make eye contact with one another. The table should be round or oval, so that there is no dominant position around the table. The moderator will occupy one of the seats, or may stand during the session (especially if there is a need to point to various discussion aids), interjecting only when needed.

While such specialised focus group rooms offer what may be considered to be the ideal venue for a focus group, it is not essential to use them. A successful focus group can be held in a room that possesses none of the sophisticated equipment just mentioned. All that is really essential for holding the focus group is a room of sufficient size to accommodate comfortably the group members, the moderator, and possibly one or two additional observers. Portable recording equipment can be brought into the room. A video camera should be used only if it can be operated relatively inconspicuously, so that focus group members do not ‘perform’ for the camera. (In focus group rooms equipped with one-way mirrors, the video camera is usually on the observers’ side of the mirror, so that focus group members are not actually able to see it.)

Recalling the discussions in Chapter 4 about survey ethics, it should be clear that it is absolutely necessary for focus group members to be informed that the session will be recorded and how, and that they have given consent individually to this. It is also usual that focus group members will be paid both their transport costs to get to and from the location of the focus group and an honorarium in addition. The payment for transport is necessary, so as to adhere to the ethical standards of surveys. The honorarium is advisable, in order to gain the cooperation of potential focus group members in giving up their time to participate. It also conveys a sense of the importance and value of their contributions to the study.

7.2.3 Analysing the focus group discussions

After the focus group is concluded, a report is developed from the recordings and any notes taken, with a view to determining the answers to the questions raised in the focus group and to identifying any new issues that arose through the group dynamics. When multiple groups are used, these results will be compared among the different groups, and conclusions drawn as to the relationship between the characteristics that define the different groups and the issues under discussion within the groups. Sometimes a full transcript of the session may be produced, while on other occasions only a partial transcript is needed.

The process of analysing the discussions of the focus group is by no means trivial (Buttram, 1990; Cameron, 2000). Usually, it is best to analyse the discussion by picking out the thematic content, rather than simply looking at the sequential discussion. By listing the principal themes of the discussion, it is then possible to determine what relevant points were made about each theme. Making a note of key quotes as the review proceeds is also very useful. These key quotes may become a part of the final report on the focus groups. Other methods of analysis are also described by Cameron (2000). It is also important to highlight the specific areas of agreement among the participants and the areas of disagreement or contradiction. Again, there is considerable information available for those who are designing focus groups, so as to direct their analysis. Finally, it is important to achieve a balance between the use of direct quotes from the focus group members and the researcher’s analysis and interpretation of the discussions that took place. Too many quotes may make the report appear to be either or both repetitive and chaotic. Too few quotes may lose the vitality of the discussion for the reader.
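
A first, purely mechanical pass over a transcript can be sketched as follows (Python, with invented themes, keywords, and transcript lines). Such keyword tagging only suggests candidate quotes and theme counts; it does not replace the researcher's own reading of the discussion.

```python
# Sketch only: a very simple first pass at thematic coding of a focus group
# transcript. Themes, keywords, and transcript lines are invented.

theme_keywords = {
    "survey length":    ["long", "time", "took"],
    "question clarity": ["confusing", "unclear", "understand"],
    "incentives":       ["payment", "voucher", "incentive"],
}

transcript = [
    ("P3", "The diary took far too long to fill in each evening."),
    ("P1", "Some of the questions were confusing to me."),
    ("P5", "A small voucher would definitely have helped."),
    ("P2", "I did not understand what counted as a trip."),
]

coded = {theme: [] for theme in theme_keywords}
for speaker, statement in transcript:
    lowered = statement.lower()
    for theme, keywords in theme_keywords.items():
        if any(word in lowered for word in keywords):
            coded[theme].append((speaker, statement))  # candidate key quote

for theme, quotes in coded.items():
    print(f"{theme}: {len(quotes)} mention(s)")
```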

7.2.4 Some disadvantages of focus groups

Focus groups have some problems and issues (Stycos, 1981). The researcher for whom the focus group is being undertaken has much less control over the focus group than is the case with a one-on-one interview. It is possible, depending on the skills of the moderator and the composition of the focus group, that the focus group may spend a considerable amount of time discussing issues that are not relevant to the research. Another disadvantage of a focus group is that the data are difficult to analyse. Discussion within the focus group consists mainly of reactions to things said by different members of the focus group, and this can often pose difficulties in terms of classification and usefulness for analysis. It may also be hard to get the members of the focus group to gel as a group and to stay on target. Focus groups are not representative of the population, partly because of their small size and partly because of the time commitments required to participate. Therefore, results from focus groups must be used with care.

Elsewhere in this book, the issues of conditioning and contamination are discussed (see Chapter 14, in particular). Focus groups are very susceptible to both these effects. Conditioning refers to situations in which behaviour or responses to questions are changed as a result of the survey process. This is always a danger with any method in which the subjects of the survey participate in the survey process. It has been likened to Heisenberg’s uncertainty principle in quantum mechanics (Walvis, 2003). Clearly, some members of a focus group may change their minds about certain issues of relevance to the focus group while participating in the focus group discussions. This is a form of conditioning. Contamination in this context refers to situations in which survey respondents may provide answers that are based on what the respondent believes the moderator of the focus group wants to hear, rather than on the feelings and opinions of the respondent. Both these effects will tend to invalidate the outcome of the focus group. Neither of these effects can be avoided easily.

Other criticisms of focus groups are not hard to find (e.g., Rushkoff, 2005; Stycos, 1981), and it is easy to point to situations, especially in product testing and marketing, when the results of focus groups have been less than helpful. Focus group members can get bored, can rebel, or can simply try to please rather than providing genuine opinions and evaluations. These are all dangers of focus groups, although they can be mitigated by using a well-trained moderator, in particular a moderator who is not identified with the researcher and may have only limited knowledge of the issues being researched through the focus group.

7.3 Using focus groups to design a survey

As noted at the beginning of this chapter, the major interest here is in how to use focus groups to design a survey. This can be done in several ways. Focus groups can be selected and asked to complete a pilot version of the proposed survey, prior to the time of the session. At the session, questions may be asked of the group that pertain to such things as the ease of completing the survey, the length of time it took, the difficulty of understanding or answering any of the questions, etc. The overall design of the survey form may also be discussed, particularly if the survey under consideration is to be provided as a self-administered survey.

Other pertinent questions can also be raised, such as the method by which the survey should be administered. Discussion about the pros and cons of alternative methods of administration – self-administered paper and pencil, self-administered Web-based, or interviewer-administered at home or by telephone – may prove very valuable in the final design decisions for a survey. The acceptability and value of pre-notification letters can be ascertained, as can alternative methods of recruitment. One might also pursue the issue of the use of incentives, determining whether incentives would change the attitudes of group members about completing the survey, what form the incentive should take, and what value it should have.

Focus groups may be used to refine questions and categories for answer. A good example of this comes from work conducted recently by the author in relation to evacuation from bush fires in Australia. The desire was to design a computer-assisted survey that would determine when people decided to evacuate as a bush fire approached their property (Alsnih, Rose, and Stopher, 2005). We knew that the size and ferocity of the fire would be an issue in the decision on whether or not to evacuate, and also that the prevailing weather conditions would be important. Initially, we had designed a set of categories to describe the type of fire, using terms commonly used by the Rural Fire Service, among others. From the focus groups, we found that this terminology was not one that the members of the focus groups typically used in their thinking. Instead, they defined just two categories of fire, which were described by the adjectives ‘hot’ and ‘cold’. A hot fire is one that is burning through the trees, and possibly moving in the crowns of the trees (also known by the Rural Fire Service as a ‘crown fire’). Similarly, a ‘cold’ fire is one that is relatively slow-moving, and probably confined to the grass and scrub on the forest floor. We received similar feedback on the weather descriptors. For example, whereas we had started out with different levels of humidity, each expressed in the percentage of relative humidity, we found that the residents of the areas that were prone to bush fires thought simply in terms of ‘humid’ and ‘not humid’. The different percentage levels of relative humidity were generally of little value to them.

Another valuable insight that came from these focus groups concerned the nature of the evacuation, which arose as a focus-group-defined issue, because we did not know enough to have been able to pose the correct questions. In this regard, the focus groups made us aware that there were three levels of evacuation: stay put, evacuate children and elderly or frail relatives, and evacuate everyone. We also found that the second of these three evacuation levels often involved a vehicle trip out of the threatened area to evacuate those members of the household who should be protected from the worst of the fires, and a vehicle trip back so that the household member involved in the transport could return to help fight the fire and protect the home. Failure to include the three levels of evacuation in the final survey would have rendered much of the survey very difficult for respondents to answer and also would have produced somewhat meaningless results.


7.4 Using focus groups to evaluate a survey

In addition to the uses explained above, focus groups can be used to evaluate pilot surveys, and also an entire survey that is to be conducted again on a future occasion, particularly, for example, a survey that is to be administered repeatedly on the same or different populations. In the process of evaluation, a similar approach would be used to that described for designing a survey instrument. However, in forming the focus groups, it may be useful to assemble groups based on completion performance. For example, one group might comprise respondents who completed the survey and did so with few errors. Another group might consist of respondents who completed the survey but who made a number of errors in completing it, or had a moderate level of item nonresponse in their completed surveys (see Chapter 20 for an explanation of item nonresponse). Yet another group might comprise those who terminated the survey prior to completion, while a final group (if they can be persuaded to participate) would be made up of those who refused to undertake the survey.
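
The grouping just described can be expressed as a simple classification rule applied to the completed sample. The sketch below (Python, with hypothetical field names and thresholds) assigns each sample member to one of the four evaluation groups.

```python
# Sketch only: assigning survey respondents to evaluation focus groups on the
# basis of completion performance. Field names and thresholds are invented.

def assign_group(record):
    """Return an evaluation-group label for one sample member."""
    if record["refused"]:
        return "refusers"
    if not record["completed"]:
        return "terminated before completion"
    if record["item_nonresponse_rate"] <= 0.05 and record["errors"] <= 1:
        return "completed, few errors"
    return "completed, errors or item nonresponse"

sample = [
    {"id": "h001", "refused": False, "completed": True,
     "item_nonresponse_rate": 0.02, "errors": 0},
    {"id": "h002", "refused": False, "completed": True,
     "item_nonresponse_rate": 0.15, "errors": 4},
    {"id": "h003", "refused": False, "completed": False,
     "item_nonresponse_rate": 0.40, "errors": 2},
    {"id": "h004", "refused": True, "completed": False,
     "item_nonresponse_rate": 1.00, "errors": 0},
]

for member in sample:
    print(member["id"], "->", assign_group(member))
```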

Discussion within each group would, of course, follow somewhat different lines. Those in the first group would be asked to talk about what they found easy or hard to complete in the survey, and why they were motivated to complete the survey, as well as other potentially useful opinions and reactions to the conduct and content of the survey. Those in the second group might focus more on where they found difficulty in responding, or why they chose not to respond to certain questions. The opinions expressed in this group might be very constructive in connection with future survey design for identifying and dealing with specific question wording issues or other related issues that the designer may be able to change.

The third group would be asked about their reasons for terminating the survey, and whether there were particular issues that arose that led them to terminate, or whether it was a matter of the length of time the survey had taken to that point, or other influences that contributed to termination. The fourth group would be asked questions relating to why they would not undertake the survey at all, and a focus here might be on ways of approaching potential respondents so that they would be more motivated to attempt the survey.

A rather specialised use of focus groups for survey evaluation was made by the author and his associates (Swann and Stopher, 2008). In this use of focus groups, respondents in a survey that used GPS devices to track travel behaviour were invited to participate in focus groups, following the conduct of the survey over at least two waves of a panel. Four focus groups were held, with the population segmented by the length of the survey in which the households had participated and by the level of performance of the household. The length of the survey related to the fact that some respondents were asked to carry a small GPS device for a week, while others were asked to carry the device for twenty-eight days. The level of performance of the household was defined as either carrying the devices throughout most or all of the requested period, or failing to carry the device for more than a few days. The four groups varied in size from seven to eleven participants, although the original intent was to have each group the same size, at about ten participants. Partly by design and partly on the basis of the directions that the discussions took, there were six broad themes that emerged in the discussions (Swann and Stopher, 2008).

(1) Respondents’ understanding of the survey task.
(2) The form and functions of the devices.
(3) Patterns of respondent behaviour in undertaking the task.
(4) Reactions to the survey documents and survey administration.
(5) Respondent attitudes and perceptions of issues relevant to the study.
(6) Curiosities about the study displayed by respondents.

A number of useful lessons were learnt from the focus groups. Among these were that respondents preferred to be asked to carry the devices for longer than seven days, but not for as long as twenty-eight days (there seemed to be a consensus that around fifteen days would be ideal, allowing respondents time to get into the habit of carrying and charging the devices, and to gain confidence that the devices were working correctly). Another finding that emerged was that respondents needed more information than just a light on the device to be assured that it was working. Yet another interesting finding was that focus group members indicated little concern over any invasion of privacy from carrying the devices.

Although it is not appropriate to describe all the findings from the focus groups, the ones noted here serve to illustrate the sort of useful information that can be obtained from an evaluative focus group in a survey exercise, in which the focus group is not the survey mechanism itself, but is used to assess the strengths and weaknesses of the survey. Among the subsequent results from this example of an evaluative focus group were a redesign of the GPS device itself and some refinements to the survey materials for use in future surveys. It was also noted, although it has not yet been possible to put this into operation, that a recruitment process that involved group instruction on how to use the devices might produce substantial dividends in improving the overall performance of this type of survey.

7.5 Summary

Referring back to Figure 5.1 (page 91), focus groups should be used as a precursor to completing the design of the survey instruments, and should also be used prior to finalising the survey method. Further, focus groups can also be used as part of the evaluation of a completed survey, as they may shed light on strange results that were obtained in the survey, or provide inputs to the design of future similar surveys. Given the richness of the information that can be obtained in this way, it is unfortunate that focus groups are not used more often in survey design. Potentially, they could save substantial amounts of money with regard to the need to redesign questions and survey methods, or make it more certain that the final survey achieves its desired goals in terms of the quality and quantity of data and its fitness for purpose.

Page 163: Collecting managing and assessing data using sample surveys

Focus groups136

However, a word of caution is required here. As noted earlier, a focus group should be posed about five or six questions at most. As can be seen from the list of possible lines of enquiry, it would be very easy to exceed this number for a focus group on survey instrument design and data collection procedures, and also one on evaluating an already completed survey. If a wide range of issues is to be determined from focus groups, then multiple focus groups should be implemented, with different groups restricted to different topic areas within the survey design.

Page 164: Collecting managing and assessing data using sample surveys

137

8 Design of survey instruments

8.1 Scope of this chapter

In this chapter, the various aspects of the design of a survey instrument are considered. Designing a survey instrument is a complex process and cannot be considered in isolation from other aspects of the overall design of the survey. For example, the decision as to the survey methodology impacts directly on the design of the survey instrument.

It is also important to realise that some surveys may use multiple instruments. For example, in surveys that are focused on households and individuals, it may be necessary and desirable to provide an instrument for the household as a whole, and individual instruments for each individual within the household. In household travel surveys, there are usually four different instruments: a form for recording data about the household, a form for each individual in the household, a form for the vehicles available for use by household members, and a daily diary for each household member, in which they record details of their travel. A similar set of survey instruments is also often used in time use surveys, except that there is less interest in household vehicles, so information about them may be collected on the household survey form. For time use surveys, the diary will collect information about what the person is doing throughout the day or days of the survey.

The design principles that are discussed in this chapter work for all types of survey instruments. Some of the specific requirements of instruments used in observational surveys are also discussed. Distinctions are drawn, where appropriate, between requirements for interviewer-administered instruments and self-administered instruments.

8.2 Question type

There are three principal types of questions that may be included in a survey conducted on a human population: classification questions, attitude and opinion questions, and behaviour questions. Classification questions are those questions that allow the researcher to classify the respondent into appropriate demographic or other groupings. They provide a description of the respondent. It is usually important that many of the classification questions are questions that are also asked in a national census, so that it is possible to compare the respondents to the survey with the population as a whole. While classification questions tend to be more objective, there can still be interpretation issues with these questions that may require very careful treatment in the way the questions are asked.

Attitude and opinion questions are subjective in nature. These questions have no right or wrong answer, but ask instead for a respondent to indicate such things as the strength of his or her agreement or disagreement with statements, the importance of different concepts, the degree of satisfaction or dissatisfaction, the strength of preferences, etc. Most, but certainly not all, questions of this type are asked on a scale, which may contain anything from three to nine or more points. For questions of this type there are a number of design issues of considerable importance, including such things as order effects, and potential biases arising from primacy, recency, affirmation, etc.

Behaviour questions relate to the activities of the respondent and usually involve the respondent in reporting activities in the past, present, or future (Zmud, 2006). Again, like the classification questions, these are factual questions requiring objective reporting by the respondent. However, as with the classification questions, they may not be as simple to construct as might at first appear. For behaviour and classification questions alike, the goal is to have the respondent report these truthfully and accurately. For attitude and opinion questions, the goal is to have the respondent report as accurately as possible his or her feelings relating to the issues of concern.

8.2.1 Classification and behaviour questions

Some authors treat these as the same general type of question, such as Sudman and Bradburn (1982). To the extent that both are asking for objective information and that both are subject to similar problems in design, this is appropriate. As pointed out by Zmud (2006), there are two key issues that are important in designing these types of questions: the perceived level of threat implied by the question and the presence of memory or recall error. Of course, good question design is still essential, but these two issues are of importance in designing the survey content.

When respondents perceive that there is some level of threat in a question, they may answer untruthfully or refuse to answer at all. A major problem here is that questions that the survey designer feels are unthreatening may be perceived quite differently by specific groups of potential respondents. In a study reported by Zmud (2006), some surprising results were found for three groups, comprising African Americans, English-speaking Hispanics, and Spanish-speaking Hispanics. From a list of twelve classification questions, all three groups found questions about the number of vehicles owned to be non-threatening, but a question on the types of vehicles owned was perceived as threatening. Similarly, reporting the address of their workplace was not perceived as threatening, but reporting who they work for was. Recent immigrants (Spanish-speaking Hispanics) found questions on where they live and their owner/renter status to be threatening. African Americans found race/ethnicity and household income questions to be threatening.

However, the level of threat in all these questions could be reduced by explaining the reasons why the questions were being asked. Understanding why the information was requested and how it related to the survey in question significantly mitigated the degree of threat perceived by each of these groups. Threatening questions are not, automatically, to be avoided. There are ways of asking threatening questions that will still elicit good responses (Vinten, 1995). In general, survey researchers need to be aware of what questions are considered threatening, because they will tend to lead to overstatements or understatements of behaviour, or an exaggeration or diminution of classification characteristics. Although Zmud’s (2006) study did not seem to find that household income was perceived universally as a threat, the author’s experience is that this is perceived rather generally as a threatening question. The result is that it is often misreported. In some instances people fear that the income information provided in the survey will be provided to the government tax authorities, and, as a result, lie about their income and understate it. In other instances people have no such fear but, rather, are concerned that the interviewer or survey researcher will think less of them if they report their real income, and therefore exaggerate their income to be more acceptable, as they see it. Both these types of responses undermine the acquisition of accurate and truthful classification information.

Mitigating threatening questions

There are a number of techniques that can be used to mitigate threatening questions. Vinten (1995) lists ten possible ways in which these questions can be mitigated, although the survey designer is urged to use these cautiously (Sudman and Bradburn, 1982). The first of these is to repeat the question several times, on the basis that those who do not answer the question truthfully at first may do so subsequently. However, this can also be regarded as an antagonistic approach, and it is probably better to do this by asking the question on separate occasions. For example, one could ask a sensitive question, such as household income, in the recruitment interview, and again in the subsequent follow-on interview. If different answers are obtained on each occasion, it would be prudent to accept the last answer as the more accurate one, on the basis of the theory of repetitive questioning.

Second, the question could be embedded among non-threatening questions. This should have the effect of reducing the apparent importance of the question to the respondent, making him or her less sensitive to the question. However, care needs to be taken not to ask the question in a section of the survey in which it is completely out of place, in that this may have the opposite effect to the desired one. Third, if the planned survey has a number of threatening questions, then it may be worthwhile considering using impersonal interview methods, such as a postal survey or an internet survey, in which the respondent does not have to provide his or her response to another human being. A similar effect can also be obtained in a face-to-face interview by allowing the respondent to key in the information to a computer in a CAPI survey.

Fourth, diaries and panels may offer a solution, in that repetitive questioning over time becomes less threatening. Keeping diaries reduces the need for respondents to

Page 167: Collecting managing and assessing data using sample surveys

Design of survey instruments140

rely on memory, while panels offer opportunities to provide the same information at intervals of time and reduce the perceived threat. However, both these techniques are limited in application. When they are appropriate, they offer a useful method for miti-gating perceived threat in questions.

Fifth, using an appropriate time frame can be helpful. Generally, asking about past behaviour is seen as less threatening than present or future behaviour. For example, in asking about a particular undesirable behaviour, the question might be posed ‘Did you ever…?’ followed up by asking, in the event the person replies in the affirmative, whether the respondent did so in the past year, or month, or other appropriate time period. However, the reverse questioning approach may be necessary for a desirable behaviour, because admitting to never having behaved in this way could be seen as increasing the threat. In this case, the question might be asked only about a short time period in the past, such as the past week, or day, or month. An issue (dealt with later in this chapter) that is of concern when asking about behaviour in the past is that of ‘telescoping’. This is an issue for many questions relating to events in the past, and is dealt with as a separate question design issue. However, one other aspect of timing is important, namely the choice of word to deal with the frequency of a behaviour. It is generally good practice to avoid using words such as ‘usual’, ‘normal’, ‘regular’, ‘typical’, etc. These are all vague words, and will lead to vague answers. Some will be found impossible to answer because the respondent does not regard any of the behaviours of interest as usual or normal or regular. Instead, it is better to ask about a specific occasion, such as ‘the last time’, ‘last week’, or ‘yesterday’. This is discussed further in the next chapter.

Sixth, open-ended questions may reduce the threat. In this case, the respondent is free to choose the words to express the behaviour or condition. The main reason that this is helpful for threatening questions is that closed-ended questions require a list of responses that includes extreme values. In a threatening context, respondents are likely to be reluctant to choose the extremes in the scale of responses. However, by using their own words, they can report an extreme without feeling that this is necessarily extreme. For example, suppose one were questioning about alcohol use. A question might be asked about how many glasses of wine a person consumed in the past week. If offered as a closed-ended question, the options might be, for example, none, one, two, three, four, five, six, seven, and eight or more. There is an implication, then, that eight or more glasses in a week are excessive. Respondents may wish to avoid indicating that answer from a social desirability viewpoint. On the other hand, if the question is asked with an open end, a respondent may feel no problem about writing in an answer of ‘About twenty’, for example.
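
To make the contrast concrete, the sketch below shows the same alcohol question specified in both forms; the question wording and category labels are simply illustrative, not taken from any particular survey. The top-coded category of ‘eight or more’ in the closed form is what signals an implied extreme to the respondent, whereas the open form carries no such signal.

```python
# Illustrative only: the same question specified as a closed question with a
# top-coded category list, and as an open question with a free write-in answer.

closed_question = {
    "text": "How many glasses of wine did you drink in the past week?",
    "format": "closed",
    "responses": ["none", "one", "two", "three", "four",
                  "five", "six", "seven", "eight or more"],  # implies 8+ is extreme
}

open_question = {
    "text": "How many glasses of wine did you drink in the past week?",
    "format": "open",
    "responses": None,  # respondent writes in any value, e.g. 'about twenty'
}
```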

Seventh, questioning about other people is often seen as threatening. If the other people are to be named, this will probably result in unwillingness to respond. This may be mitigated by not requesting or providing identification for those about whom the respondent is to report. Although this may lose some quality in the information, it is much more likely that such anonymous questions will be answered.

Eighth is to formulate a longer question. Interestingly, Vinten (1995) reports that longer questions have been known to increase the reporting of undesirable behaviour by 30 per cent or more. However, this may also lead to the over-reporting of desirable behaviours. The reasons that longer questions appear to lead to this result are that they may aid the respondent’s recall, they may give the respondent more time to think as he or she absorbs the questions, and the length of the reply is likely to be related to the length of the question.

Ninth, the questions should use words that are likely to be familiar to the respondent. This may require using focus groups in the design stage of the survey in order to determine which words are familiar. However, a word of caution is necessary, to the extent that it must be made certain that the alternative, familiar words convey exactly the same meaning as is desired in the survey. The use of slang words can sometimes lead to changing the sense of the question and asking a question that was not intended.

Finally, loading the question may be a way to mitigate the threat. While this contradicts the principle that we should avoid asking leading questions, it may be the only way to get truthful answers to some threatening questions. The loading might, for example, be done for socially undesirable behaviours by including in the question the notion that ‘everybody does it’. For example, a question might be phrased as ‘At some time or another almost everybody cheats a bit on their income tax return. By how much would you say you under-reported your income for the last tax year?’ It might be possible simply to assume the behaviour. However, this is likely to offend the respondent who has never behaved in this way. Thus, for example, asking the question ‘How often would you say that you have under-reported your income to the tax authorities?’ might be considered offensive to people who are always honest in reporting their incomes, even if one of the answers provided for the question is ‘Never’. Another possibility is to use authority as a way to justify the behaviour, such as ‘Economists say that more than 90 per cent of individuals understate their income at least once in their lives’.

For desirable behaviour, one possibility is to use a casual approach, such as ‘Did you happen to…?’ (Vinten, 1995). However, this approach is likely to increase the threat for questions relating to undesirable behaviour, because people do not just ‘happen’ to do it. Another possibility is to provide, in the question, reasons why the behaviour might not be engaged in. For example, a question might be phrased ‘Many people do not, on principle, give money to someone who is begging on the street. When were you last begged for money on the street? Did you give that person anything?’ An example of mitigation of this type that was used recently by the author concerns a survey in which people were asked to carry a GPS logging device with them for a number of days. A form was provided for respondents to report what happened on each day with respect to carrying the GPS device with them. One of the possibilities was that the person might have forgotten to take the GPS device with them on any given day. To mitigate the possible threat posed by the statement ‘I forgot to take the device with me today’, it was phrased instead as ‘Oops, I forgot to take the device with me today’. As a result, it was found that respondents generally reported this situation honestly (as indicated by there being no data recorded for that day on the GPS device). In this case, simply introducing the word ‘Oops’ at the beginning of the statement, thereby making the statement more casual, removed most of the perceived threat in the question.

Although not included in the list of design procedures in these other sources, one additional technique that is worth mentioning is to place threatening questions close to or at the very end of the survey. There are two reasons for this. First, respondents will have become used to responding to questions by the time that they encounter the perceived threats. This may make it more likely that they will have a reduced sensitivity to the questions when they are encountered. Second, if respondents have completed all the earlier questions, a useful, although incomplete, response may still be obtained, even should they refuse to answer the last question or two on the survey because of perceived threat. In many instances, this will mean that useful information is still obtained.

8.2.2 Memory or recall error

The problem of memory error is twofold. A respondent may genuinely forget about the occurrence of some event that is of interest to the survey researcher, or a respondent may forget exactly when an event occurred, and misreport it in terms of frequency or the last time it occurred. This latter problem is often one of telescoping, as has already been mentioned. Telescoping is when a respondent believes that an event occurred either much more recently than it did (often true of significant events in the life of a respondent) or that it occurred much longer ago than in reality (often true of insignificant events in the life of a respondent). In either case, when asked about occurrences within a specified time period, the respondent will allocate the event to an incorrect time period, or may report more or fewer occurrences within the specified time period than actually happened.

Sudman and Bradburn (1982) point out that the most serious problem with behavioural questions that are not perceived as threatening is that ‘human memory is fallible and depends on the length and recency of the time period and the saliency of the topic’. One of the important things that survey researchers must keep in mind is that, quite often, the behaviours about which we wish to question people are not considered particularly important to those we are surveying. This means that we tend to fail on the issue of saliency, so that the behaviours are easily forgotten, and respondents will have difficulty in recalling those events about which we wish to question them. Second, the issue of recency is also important. Asking people about a behaviour in the past year may be asking far too much in terms of their ability to recall.

There are excellent illustrations of these problems in the area of travel surveys. Richardson, Ampt, and Meyburg (1995) report that studies of the results of travel surveys show that the number of trips reported by respondents in such surveys is consistently lower than the number of trips actually taking place. This has been confirmed further in recent studies, in which the results of household travel surveys have been compared against the information obtained from equipping the respondents with GPS devices (Forrest and Pearson, 2005; Stopher, Xu, and FitzGerald, 2005; Wolf, 2006; Wolf et al., 2003). These studies confirm that there are consistently fewer trips reported by respondents than they actually carry out. The results vary from an under-reporting of about 7 to 11 per cent in a face-to-face interview to an average of 20 to 30 per cent in CATI surveys. This under-reporting cannot be due to ignorance, because the respondents actually participated in the travel. It must, therefore, be a result, at least in part, of the fallibility of human memory. It must be stressed, also, that in these travel surveys it is usually the case that respondents are asked to report their travel for a day as yet in the future (at the time of recruitment) and are furnished with a diary or memory jogger in which to record what they do on the appointed day. Nevertheless, a number of people still under-report their travel.
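
As a rough illustration of how such comparisons are made, the sketch below computes an under-reporting rate by comparing diary-reported trip counts with trips detected from GPS records for the same households; the household identifiers and trip counts are invented for the example.

```python
# Hypothetical data: trips reported in a travel diary versus trips detected
# from a GPS trace for the same households on the same day.
diary_trips = {"hh01": 6, "hh02": 3, "hh03": 0, "hh04": 5}
gps_trips = {"hh01": 7, "hh02": 4, "hh03": 2, "hh04": 5}

reported = sum(diary_trips.values())
observed = sum(gps_trips.values())
under_reporting = 100 * (observed - reported) / observed

print(f"{reported} of {observed} trips reported "
      f"({under_reporting:.1f} per cent under-reported)")
# With these made-up counts the shortfall is about 22 per cent, in the range
# quoted above for CATI surveys.
```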

There are probably at least three reasons for this under-reporting, all of which provide important lessons concerning survey instrument design and related issues. First, many of the individual small amounts of travel that people perform on a daily basis are not considered by most to be particularly important. Therefore, even when people are writing things down at the end of the day (which is probably how most people fill out the prospective diaries or memory joggers), they forget short trips and even some longer, habitual trips, because they lack salience.

Second, there is a problem with the word ‘trip’. It is used by transport researchers to mean any travel from a single origin to a single destination, without stops other than for traffic-related reasons on the way. Many people define a trip differently from this. Often it means a major journey to a place, or it may be construed as a round trip – i.e., the travel to a destination and back again. For example, at least two dictionaries define a trip as an excursion to some place. This clearly connotes something much more significant than simply driving down the road to the nearest petrol station to fill up the car’s petrol tank, or travelling to the nearest convenience store to buy a bottle of milk or a loaf of bread. Hence, part of the problem would appear to arise from the use of a specialised word, for which respondents understand a different meaning from the survey researcher. While this is important and is a potential contributor to the issue, a number of the surveys that have been assessed do not use the word ‘trip’ yet still experience severe under-reporting of travel.

The third issue here is probably one of the time period. If it were possible to ask people about any travel that they performed in the past hour, or two hours, it is likely that a fairly accurate reporting could be obtained. However, when asking about an entire day or even longer, there is a problem that the time period mismatches what people are inclined to remember. Thus, the point made by Sudman and Bradburn (1982) about the length of the time period may be partly at the root of this problem.

There are perhaps two other points that can be drawn from this. The first is a word of warning about the interaction between survey design and respondent behaviour. Even though travel surveys have moved from retrospective data collection (‘What did you do yesterday…?’) to prospective data collection (‘Next Wednesday, please tell me what you do …?’), there is actually no way to guarantee – unless the interviewer is to follow the respondent throughout the day – that the respondent will not still wait until the end of the day (Wednesday, for example), and then try to recall what he or she did during that day. The main difference here is that the respondent is forewarned about the need to recall what was done, as opposed to learning about the need for recall only after the day has already passed.

The second point to note is that of respondent burden. Possibly one of the other reasons that people omit to tell researchers about some of their short trips to buy a newspaper or a loaf of bread is that they perceive that there is a significant effort required to report everything the survey asks about that event. It may be that a major reason for under-reporting in these and other surveys is that the respondent wants to reduce as far as possible the burden of work that he or she must undertake to complete the survey, and so will be selective in what he or she ‘claims’ to remember. This is borne out even more strongly in the transport arena by a problem that is discussed in more depth later in this book: false reporting by the respondent that he or she did not go anywhere on the survey day. It has been determined to be a common response by a minority of respondents to claim that they have not left home all day, when they realise that, by so doing, the amount of work required to complete the survey is greatly reduced. In other words, this false reporting of non-mobility is actually a form of nonresponse, although the respondent believes, perhaps, that it will not be detected, so that he or she will still ‘look good’ in the eyes of the interviewer – very often an important consideration for respondents.

However, as Zmud (2006) points out, it is also important to note that the GPS studies tend to show that there is a significant proportion of the samples in most studies that are perfect reporters – i.e., they report all the travel that is measured by the objective use of a GPS device. Furthermore, of those who do fail to report all this travel, usually it is only a single ‘trip’ that is missed. A further demonstration of the points made by Sudman and Bradburn (1982) is also evident in the transport research arena. Over the past two decades there has been a shift away from using recall of travel in surveys of travel behaviour to using a date in the future when people are asked to record their travel. One of the major advantages found in this shift has been an increase in the amount of travel reported. Although travel reports are still inaccurate, they became more accurate with the shift from retrospective to prospective reporting. This illustrates one of the ways in which it may be necessary to deal with the memory problems of respondents.

Another interesting note comes from this same area of travel surveys. Recent work in using GPS devices has included a prompted recall survey, in which respondents carry GPS devices for a period and are then provided with maps showing where they travelled during that period. They are then asked to respond to survey questions concerning the purpose of the travel, who accompanied them, etc. (Stopher, Bullock, and Horst, 2002). Using the maps and tables from the GPS device showed that people were perfectly able to recall all their travel activities, even quite trivial ones, as much as three or four weeks after the travel had been completed. In this case, the memory aid provided by the traces was sufficient to provide virtually 100 per cent recall.

Other, less involved, recall aids can also be devised for different types of surveys. Another form of memory aid also used in travel surveys is to ask people, after they have reported travel from one place to another, if they made any stops as they travelled to the next place. This question will often bring to mind a stop that would otherwise have been overlooked. Sudman and Bradburn (1982) also suggest the use of a list of possible behaviours that might have been undertaken. However, they also caution that any such list must be as exhaustive as possible, because items not mentioned on the list may then be intentionally excluded by the respondent as seemingly not relevant or not asked for. This will lead to substantial under-reporting of such behaviours. On the other hand, if being exhaustive means that the list becomes long, then it is important that behaviours that are frequently thought or known to be under-reported be at the top of the list, so that people will read them even if they stop reading before coming to the end of the list – a frequent occurrence with long lists in surveys. For example, again in travel surveys, visits to a petrol station, a stop to purchase fast food, a visit to a local store for bread, milk, or a newspaper, and a stop to buy a cup of coffee are all often under-reported, so they should appear near the top of a list of activities that would be used as a memory aid.
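
The sketch below illustrates this ordering rule; the activities and their assumed rates of omission are hypothetical, but the principle is simply that the most-missed activities are sorted to the top of the memory aid.

```python
# Each entry pairs an activity with an assumed share of occurrences missed in
# past diary surveys (hypothetical figures).
prompt_items = [
    ("Travel to work or school", 0.02),
    ("Visit to a petrol station", 0.35),
    ("Stop to buy fast food", 0.30),
    ("Buying bread, milk, or a newspaper", 0.40),
    ("Stop to buy a cup of coffee", 0.38),
    ("Social visit", 0.10),
]

# Place the most frequently missed activities first, so respondents see them
# even if they stop reading before the end of the list.
ordered = sorted(prompt_items, key=lambda item: item[1], reverse=True)
for activity, _missed_rate in ordered:
    print(activity)
```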

8.3 Question format

When considering the design of questions, an important issue is the question format to use. Basically, there are three types of format that can be used in designing questions. The first format consists of open or open-ended questions, for which the respondent writes in his or her answer in his or her own words. The second consists of field-coded questions, for which the respondent provides an answer in his or her own words and the interviewer records the answer by selecting the appropriate code. The third consists of closed or closed-ended questions, for which the respondent selects from among a list of possible answers, indicating the answer that comes closest to describing the appropriate response.

8.3.1 Open questions

These questions have a number of advantages and some disadvantages. They provide an opportunity for respondents to provide their own views, without being straitjacketed into a response set defined by the survey designer. They also provide an easier opening in an interview for the interviewer to probe for additional information. They are especially good for developing ideas or new concepts, or for exploring unknown behaviours and activities. They may be particularly useful when they follow a closed question, when they may be used to obtain more in-depth understanding of the answers to the prior question. In many surveys that are conducted over a period of time, or that are funded by public agencies, it is often desirable to be able to provide news releases from time to time about the survey. Open questions are often a good source of ‘quotable quotes’ that may be incorporated in news releases or other public documents concerning the survey.

On the negative side, open questions may often be considered to be more threatening, partly because they involve the respondent in thinking about how to answer, and also because they may seem to be probing in greater depth into the thinking or behaviour of the respondent. Open questions may also result in vague answers that are not easily analysed. In an interviewer survey, open questions are much more prone to interviewer bias, both in the way in which the questions are asked and in what the interviewer records from the answer. Interviewers are rarely expected to write a verbatim account of the answer provided by the respondent but, rather, to summarise what the respondent says. This process is fraught with potential problems.

In the same way, there may be respondent bias, especially as a result of particularly loquacious respondents. There are many stories from interviewer-based surveys of the respondent who will not stop talking. Because such loquacious respondents will usually depart from the central concern of the question, there may be a bias in what is actually recorded as the response of this person. Another disadvantage of open questions is that they require double processing. First, whatever is provided verbally as an answer, or whatever is written down in a self-administered survey, has to be interpreted and classified in some way, and then it has to be entered into the computer. Such double processing opens the door to additional error in recording the responses. Finally, open questions can be tiring to both the respondent and the interviewer and can lengthen the interview substantially.

In summary, open questions require careful design, and they should not be used excessively in a survey instrument if one of the goals is to minimise respondent burden. They are not appropriate for simple factual questions in which the answer set is already well defined. For example, there would be no point in asking a respondent’s gender with an open question, nor would it be necessary or appropriate to ask for the age of the respondent with an open question. In fact, many years ago, when the author had no experience with surveys, he designed a survey in which one question was an open question asking ‘Sex?’. Several respondents decided to have some fun at the expense of the author by writing in ‘Yes, please’ as their answer – not exactly a helpful response when what one wishes to know is the gender of the respondent. Generally, open questions cannot be used as a precursor to a branch in a survey. In an interviewer-based survey, the interviewer will have to be very sure of classifying the open response correctly to know which leg of the branch to follow. In effect, this requires the question to become a field-coded question, which is discussed in the next section of this chapter. It is not always possible for the survey designer to know the types of answers that might be obtained, so that use as a conditioning question for a branch would be ill-advised.

8.3.2 Field-coded questions

These questions can appear only in interviewer-based surveys. As noted earlier, these entail having the interviewer ask what is essentially an open question and then record the answer by deciding on one of a list of codes for the answer. A common question in which field coding is used is that of type of employment. Usually, such a question will be asked in an open-ended form, with the respondent providing an answer in his or her own words. The interviewer then selects what he or she believes is the appropriate classification from a list of employment types.

Field coding eliminates the double processing, in the sense that the only item recorded is the interviewer’s selection of the appropriate code. However, herein lies the major disadvantage: there is no record of the actual answer given by the respondent. Use of field-coded questions also requires more intensive interviewer training, so that interviewers will be likely to pick the correct codes most of the time. However, no matter how well trained an interviewer is, there will remain the potential for interviewer bias, in a similar manner to what was discussed for the open-ended question. As interviewers process and censor the information provided by the respondent and select a particular category to describe the answer, it is possible for interviewer bias to creep into the responses.

Field coding cannot be used in self-administered surveys, whether these are administered by post or the internet. It should be used circumspectly in interviewer-based surveys, because of the potential to introduce bias and because of the lack of record of what the respondent actually said.

8.3.3 Closed questions

Closed questions are those questions in which the possible answers are provided in the survey, and respondents are asked to choose the answer that most nearly fits their response. Closed questions are most useful for reporting factual information, for which the list of possible responses is likely to be well defined. However, care must still be taken to ensure that the answers provided are exhaustive and mutually exclusive. They must also be unambiguous. They are restrictive, in that the respondent generally does not have the freedom to offer an alternative that he or she feels is not included in the list. However, there is a way to ameliorate this by including a category called ‘Other – please specify’, in which a respondent who does not feel that any of the categories fits his or her case can write in a response.

Closed questions can also be used to help define the meaning of the question. However, in general, this is not good practice. If the meaning of the question is not clear without the answer set, this should probably be taken as an indication that a better question wording must be developed.

There are potential issues with the list of answers to closed questions that require the attention of the survey designer. The ordering of the list can result in possible bias in the answers. The first possible problem is that of primacy. This is the tendency to select the first item in the list, even if it is not the correct one, or to select an item early in the list, because it seems to be more or less a fit, without troubling to read all the way down the list. Thus, for example, if the list were a list of possible occupations, rather than looking through the entire list and then choosing the one that most closely describes a person’s occupation, a respondent may simply pick the first occupation category that seems to be vaguely similar to his or her actual occupation. This is much more of a problem in interviewer-administered surveys, especially telephone interviews, in which the respondent will be likely to remember the first one or two categories and not the remainder. It is also a problem in self-administered surveys, particularly when the list of possible answers is long. This can be overcome by ordering lists so that less common categories appear early in the list, and more common ones come later. In interviewer-administered surveys, it is a little more difficult to overcome. In some cases, it can be done by asking the respondent to stop the interviewer when he or she says the one that most nearly corresponds, but this again will probably work only when the list begins with less common categories and moves to more common ones.

The second problem is that of recency. Recency is the opposite of primacy. Recency means that the respondent tends to remember the last thing that he or she heard or read, rather than the earlier ones. Thus, in an interviewer-administered survey, if a respondent has to wait until the interviewer has read a list of categories, he or she will tend to remember only the last ones read, rather than the earlier ones. Another recency effect is that the most recent occurrence or experience may dominate a person’s memory of a repetitive event. For example, suppose we are surveying people about their use of local bus service. If, on the last occasion that the respondent used the bus, it was very crowded and ran very late, this will dominate in the mind of the respondent, who will then be likely to report that the buses are always overcrowded and always run late, even though this experience might have been an isolated event. Another form of recency is that something the respondent has only recently noticed is then assumed to have occurred only recently. In application to survey design, the main issue here is in listing categories for answers. The combined primacy and recency effects mean that respondents will tend to remember best the first and the last items in a list and not those in the middle of the list. Therefore, in designing a list for interviewer administration, in particular, the most obvious categories should appear in the middle of the list, with the least common ones occurring at the beginning and the end.
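
One way of operationalising this advice is sketched below. Given response categories ranked from least to most common (the occupation labels and their ranking here are assumed), the function places the most common categories in the middle of the read-out list and the rarest at the two ends.

```python
def middle_out(categories_least_to_most_common):
    """Arrange categories so the most common fall mid-list and the rarest
    at the beginning and end, countering primacy and recency effects."""
    ordered = [None] * len(categories_least_to_most_common)
    left, right = 0, len(categories_least_to_most_common) - 1
    place_left = True
    for category in categories_least_to_most_common:  # rarest first
        if place_left:
            ordered[left] = category
            left += 1
        else:
            ordered[right] = category
            right -= 1
        place_left = not place_left
    return ordered

# Occupation categories listed from least to most common (assumed ranking).
by_frequency = ["Armed forces", "Agricultural", "Trades", "Clerical", "Professional"]
print(middle_out(by_frequency))
# ['Armed forces', 'Trades', 'Professional', 'Clerical', 'Agricultural']
```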

On the topic of primacy and recency, Rosnow and Robinson (1967) suggest that there are primacy-bound, recency-bound, and free variables. Topics that are nonsalient, controversial, interesting to the respondent, or highly familiar to the respondent tend to be primacy-bound. Topics that are salient, uninteresting to the respondent, or moderately unfamiliar tend to be recency-bound. Neutral topics and those that are very unfamiliar to the respondent will tend to produce either order effect, depending on the way in which they are presented and any possible bias conveyed by the interviewer or the instrument. Rosnow and Robinson (1967) suggest that strength and reinforcement are free variables, in that they can lead the respondent to either primacy or recency, according to how they are used. For example, if arguments for one side are perceived more strongly than arguments against, then the stronger arguments will tend to dominate.

Another effect of concern is that of range. This has to do with the number of categories used in a response that is not naturally categorical. For example, in asking people’s ages, a survey might offer a set of categories such as under fifteen, fifteen to eighteen, nineteen to twenty-five, twenty-six to fifty-four, and fifty-five and over. This may lead people who are in their late fifties or even early sixties to put themselves into the twenty-six to fifty-four category, because they do not want to be perceived as being ‘old’, as the set of available categories implies that anyone aged fifty-five and over is old. Similar issues arise with income categories, although the result can be the opposite here, in that people will tend to put themselves in the highest income category if it actually starts at a relatively low income value. Mitigating this effect is best done by providing a wide range of categories, especially at the ends of the scale, where there is a tendency to send unintended messages as to what constitutes an extreme value.

The fourth effect is that of affirmation bias. Strictly defined, affirmation bias is the bias that arises when a respondent answers ‘Yes’ to a sequence of questions requiring a ‘Yes’ or ‘No’ response, or indicates agreement or satisfaction with a series of perception or attitude questions (Richardson, Ampt, and Meyburg, 1995). This bias is best dealt with by using language that sometimes presents items positively and sometimes negatively, interspersing these in a somewhat random fashion, so that the respondent is not lulled into providing a pattern response of alternating ‘Yes’ and ‘No’.
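
A minimal sketch of this practice is shown below, using hypothetical attitude statements: positively and negatively worded items are pooled and presented in a mixed order, so that a respondent cannot simply agree with every statement out of habit.

```python
import random

# Hypothetical attitude statements about a local bus service.
positive_items = [
    "The bus service in my area is reliable.",
    "Public transport is good value for money.",
]
negative_items = [
    "Buses in my area are usually overcrowded.",
    "It is difficult to find timetable information.",
]

# Pool the items with their polarity, then mix the order so that agreement
# does not always mean the same thing from one statement to the next.
items = ([(text, "+") for text in positive_items] +
         [(text, "-") for text in negative_items])
random.seed(7)       # fixed seed so every respondent sees the same mixed order
random.shuffle(items)

for text, polarity in items:
    print(polarity, text)
```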

Another bias closely related to affirmation bias, and also sometimes defined as affirmation bias, is better termed the demand characteristic (Orne, 1961; Rosnow, 2002). This has to do with the desire of the respondent to give responses that he or she thinks the interviewer or survey researcher wants or expects to hear. For example, the author worked at one time on travel surveys in South Africa. When asking many black Africans how long they had waited for a bus, it turned out that they would try to determine what the interviewer really wanted to hear. If the respondent thought that the interviewer wanted to hear that bus service was not good, the respondent would report a very long waiting time. If the respondent thought that the interviewer wanted to hear that the bus service was good, he or she would report a very short waiting time. It was much more important to these respondents to tell the interviewer what they thought the interviewer wanted to hear than to report the factual information. The demand characteristic can also arise when the questions are phrased in such a way as to seem to make a judgement about the respondent. In these cases, respondents will tend to want to respond with an answer that they feel will make them appear to be ‘good’ in the eyes of the interviewer or researcher.

Another, and closely related, effect is the social desirability bias. This is easily confused with the demand characteristic but is the bias that arises from respondents wishing to give answers that are more favourable to their own self-esteem (Richardson, Ampt, and Meyburg, 1995). Fisher (1993) defines social desirability bias as the tendency for people to bias their responses to self-report questions towards their own perceptions of what is ‘correct’ or socially acceptable. Chung and Monroe (2003) define it as the tendency for people to ‘deny socially undesirable behaviours and to admit to socially desirable ones’. Generally, it appears that the social desirability bias is more prevalent in interviewer surveys (Tourangeau and Smith, 1996) than in self-administered surveys. Avoidance of this bias is most readily accomplished by ensuring that questions are posed that do not tend to suggest judgements of social acceptability or correctness. However, this cannot always be avoided. Some surveys will be particularly prone to this bias, especially ones relating to recreational and illicit drug use, certain types of political surveys, and ones relating to people’s habits and private behaviours.

There is a further bias that can arise, which is that of rationalisation bias. This arises when respondents deliberately or subconsciously report responses that justify their behaviour. This is quite common in transport survey applications, when questions are asked about alternative ways a person could travel. For example, a person who uses his or her car, suspecting that it might be thought more appropriate to ride the bus, might deliberately or subconsciously report the car as being faster or cheaper to operate than it really is, as a way to justify that he or she uses the car. In the same way, this respondent may report the bus as taking longer and costing more than it really does. These responses are indicators of rationalisation bias and arise because most people want to appear to behave rationally in the eyes of other people, especially when they know that they are not, in fact, doing so.

In many instances, there is little that one can do in the design of a survey to avoid these last four biases completely. However, being aware that these biases exist may assist the survey designer in phrasing questions in such a way as to make it less likely that respondents will perceive a way to interject these biases. Similarly, the answer sets to such questions require attention, so that there is not an apparent bias in them that may result in affirmation, social desirability, demand characteristic, or rationalisation biases.

8.4 Physical layout of the survey instrument

Two aspects of physical layout are considered here. First, the introduction to the survey and question ordering issues and related matters are considered. Second, the actual appearance of the survey instrument is discussed.

8.4.1 Introduction

Zmud (2003) states: ‘It is a well-established fact that most refusals in telephone surveys occur in the first minute of the contact.’ Other survey researchers have reported similarly (examples include Collins et al., 1988; Dillman, 1978; and Groves and Lyberg, 1988). While not as well documented, the initial contact in face-to-face surveys, postal surveys, and internet surveys is also critical. For face-to-face surveys, the initial contact may be as critical as it is for the telephone survey. In postal and internet surveys, the respondent may simply disregard the introduction that is provided in the survey, and plunge directly into completion. However, if it is read, it is likely that the introduction will have a significant influence on whether or not the respondent completes the survey.

There are two important aspects of survey introductions: length and content. There is general agreement that introductions need to be short. Dillman (1978) suggests that long introductions increase the perceived ‘cost’ to the respondent of participating in the survey. Bradburn and Sudman (1979) suggest that long introductions make both the interviewer and the respondent nervous, especially if the introduction includes references to individual questions or sections of the survey. In addition, the wording used in the introduction should be similar to everyday word usage. For example, many telephone surveys in the United States have been designed to start with a sentence in which the interviewer says ‘Hello, my name is …, and I am calling …’. However, this is immediately a clue to the person hearing it that this is a survey or a marketing call. In normal English vernacular, people do not say ‘Hello, my name is…’ when they call someone they do not know. Rather, they say ‘Hello, this is…, and I’m calling …’. Although there is no documentation of response differences to these two different introductions, anecdotally the second is to be preferred to the first.

A number of survey researchers also believe that the word ‘survey’ is ill-advised in referring to a survey, and recommend using the word ‘study’ instead. This may be particularly important in this opening sentence, where the inclusion of the word ‘survey’ may be like the proverbial red rag to a bull. Sudman and Bradburn (1982) suggest that the introduction should consist of the following: an introduction of the interviewer and who he or she represents, together with a one- or two-sentence description of what the study is about. At this stage, it is not necessary to give detailed information about the purposes of the study, or to address possible sensitive issues that the study may explore. There are three ways in which additional information can be provided:

(1) as an answer to a question for further information asked by the respondent at the outset or later in the survey;

(2) through a pre-notification letter that is sent to potential respondents before the first contact is made by an interviewer; and

(3) through the initial questions themselves.

However, it is important to note that information about the type of study may also condition the willingness of different population segments to participate. Lynn and Clarke (2001) undertook a study of six surveys on very different topics in different years in the United Kingdom. They conclude that women are more likely to be reluctant to take part in health surveys, men in attitude surveys, and that both men and women are reluctant to take part in surveys about finances. Those who occupy a house that they own or are buying are also found to be less likely to respond to a survey about finances, but more likely to respond to an attitude survey.

Three other pieces of information are usually necessary in the opening of the survey:

(1) an assurance of the confidentiality of information collected from the respondent;
(2) an assurance that the respondent is not being asked to buy anything; and
(3) an indication of how long the survey is expected to take.

It is most important that each of these statements be made as honestly as possible. Of particular importance is the indication of how long the interview will take. If interviews may take varying lengths of time, depending either on the answers that the respondent gives or on the amount the respondent has to say, then the length of the interview can be specified with a proviso that ‘it depends on how much you have to say’ (Sudman and Bradburn, 1982).

There has been some discussion in the literature and among survey researchers themselves about the value of appealing to ‘altruistic’ benefits from completing the survey, such as improving society or helping some (relatively large) group of people. However, Dillman et al. (1996) find that such appeals do not appear to influence cooperation with the survey interviewer. In the same way, Dillman et al. (1996) also find that the strength of the confidentiality statement has little effect on participation. Therefore, a simple statement that the information provided will not be divulged in such a way as to identify the respondent is probably all that is necessary.

Although most surveys do not benefit from this situation, Dillman et al. (1996) find that the most influential statement that can be made in the introduction is an indication that the respondent is required by law to complete the survey. This mandatory statement is found to improve response rates by twenty percentage points or more. Unfortunately, apart from the repeated censuses of most countries, and a few additional surveys that may be mandated from time to time by the governments of some countries, most of the surveys that readers of this book will be designing do not have this mandatory requirement.

An example of an introduction in a telephone recruitment interview that has been used successfully by the author is as follows.

Hello, this is <interviewer’s name> and I’m calling on behalf of the University of Sydney. You should have received a letter in the post in the past couple of days about an important travel study that your household has been chosen to take part in.

In two brief sentences, this introduction reveals who the interviewer is, says with whom he or she is affiliated, refers to the pre-notification letter, in which there was more information about the study, notes that this is an important travel study, and tells the household that it has been chosen specifically to take part in the study. At the same time, the interviewer was provided with additional information, should the potential respondent ask further questions about the survey, confidentiality, how the respondent was selected, etc. In this case, more details on the purposes of the survey and the safeguards on confidentiality were provided in the pre-notification letter and were not repeated in the introduction. In the execution of the survey, relatively few respondents asked for further information about either of these two issues; the reference to the pre-notification letter appeared to be sufficient.

As noted by Zmud (2003), three elements of Cialdini’s (1988) theory of influence are probably applicable to survey design. These three are:

• reciprocity;
• scarcity; and
• authority.

Reciprocity has to do with the fact that, if a person is given something, he or she is more inclined then to repay or reciprocate in some way. This is discussed further in Chapter 20 in relation to incentives. Cialdini’s scarcity principle is that people assign more value to something that is limited by amount or time. Thus, indicating that a household will represent 500,000 households in the nation, or that the respondent was specially selected for this study, are applications of this principle. Scarcity is also the reason that personalisation of the survey materials is often effective. Personalisation suggests to the respondent that he or she is one of only a select few who have been chosen to participate.

The authority principle, applied to surveys, indicates that people are more likely to comply with a request to participate in a survey if the request comes from someone known to the respondents, and who is recognised as having legitimate reasons for collecting the information. In the case of surveys undertaken for public agencies, the signatory of the pre-notification letter and any covering letter for the survey itself should be a locally known political figure. However, care must be taken not to choose someone who is notorious rather than well known. In the case of surveys undertaken for private companies, it may be useful to find a local celebrity who would be willing to sign the cover letter, or a corporate figure who is well known. In the event that it is not possible to use a well-recognised name, the letter should be signed by someone who clearly has the authority to collect the information, as shown by their title and affiliation.

8.4.2 Question ordering

The order in which the respondent is presented with questions is a very important aspect of the instrument design, and considerable care and thought should be devoted to determining the most appropriate order. The following subsections consider which questions should be used in the opening of the survey, the main body, and at the end.

Opening questions

First, if there are screening questions, either to determine if this respondent is eligible to be included in the sample, or to determine something about the sequence of questions to be asked, then these questions should usually be asked at the beginning of the survey. This is a courtesy to respondents, who should not be asked to spend a significant amount of time answering questions when, in fact, they are not eligible for inclusion. In addition, in most interview surveys, it is usually necessary to establish that the person being spoken to is old enough to participate in answering questions. Often, the first question that should be asked is to request to speak to someone in the household who is over a specific age. This is ethically necessary (see Chapter 4).
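
The fragment below sketches how such a screening step might gate the rest of an interview script; the minimum age, question wording, and the ask() function are all assumptions made for the illustration, not a prescribed design.

```python
MINIMUM_AGE = 18  # assumed eligibility threshold for this illustration

def run_interview(ask):
    """'ask' is any function that poses a question and returns the reply."""
    # Screening question asked first, so ineligible contacts are released
    # before any substantive questions are posed.
    age = int(ask("To begin, may I ask how old you are?"))
    if age < MINIMUM_AGE:
        return ask("Is there someone in the household aged 18 or over "
                   "that I could speak to?")

    # Eligible respondent: continue with an opening opinion question,
    # then the main body of the survey.
    opening_opinion = ask("How concerned are you about traffic in your area?")
    return {"age": age, "opening_opinion": opening_opinion}
```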

Second, it is reported by a number of survey researchers (such as Sudman and Bradburn, 1982) that respondents are generally more interested in a survey if they have the opportunity to express their opinions early in the survey. Therefore, it is suggested that, following any required screening questions, the next questions seek opinion, preference, attitude, or similar information from the respondent, even if this is not central to the purposes of the survey. Indeed, one could suggest that the use of such introductory questions is the one exception to the rule of confining the survey to only those questions of relevance to the survey purposes. Even if these questions are not strictly needed by the survey client, they should be orientated to the topic of the survey, thereby also helping to focus the respondent on the survey issues. These questions are possibly less necessary in self-administered surveys, although they are still likely to be beneficial in capturing the interests of prospective respondents. In the case of self-administered surveys, these questions should be closed-ended questions, because questions that require respondents to write out their answers are likely to be perceived as threatening.

In interviewer-administered surveys, these opening questions are intended to serve as a means to establish rapport between the interviewer and the respondent, and to help the respondent to relax. It is important that the interviewer be nonjudgemental about, but interested in, the answers provided by the respondent. Often it is appropriate to remind the respondent that there are ‘no right or wrong answers to these questions’, which will frequently have the effect of helping the respondent to relax. This will contribute to the feeling of rapport and to increasing empathy.

Body of the survey

After completing the opening questions, the body of the survey is the next focus. Probably the most important aspect of the questions in the main body of the survey is that they should follow a logical sequence. In the case of a survey that is eliciting opinions, a logical sequence should be observed in which one moves from general questions to specific questions. One such possible sequence that could be used in a survey about policies on greenhouse gas emissions might be as follows.

(1) How concerned are you about the issue of global warming?
(2) How do you think this country is doing in controlling greenhouse gas emissions at present?
(3) Do you think we ought to be doing more about greenhouse gas emissions?
(4) If ‘Yes’, what do you think we should be doing?
(5) Some people think that the best way to control greenhouse gas emissions is through agreeing with other countries around the world to limit our production of these emissions, while others think that the proposed agreements would be damaging to this country. How do you feel about this?

The first question is rather broad, but focuses the respondent on the issue of global warming. The second question focuses more specifically on the issue of greenhouse gases, specifically in relation to global warming. The third question aims at eliciting the respondent’s opinion about whether the country should be doing more about controlling greenhouse gas emissions. The fourth question offers the respondent an opportunity to talk about the things that he or she thinks should be done, if more should be done, and the fifth question specifically aims at determining whether the respondent thinks this country should sign agreements with other nations to limit greenhouse gas production. If the fifth question had been asked at the outset, it is likely that it would then have influenced how the respondent answered the other four questions. It would also have required the respondent to think about what the survey was trying to find out, whereas the sequence presented here leads the respondent to the issue.

When a survey is asking about behaviour, the questions relating to behaviour should be ordered as far as possible in a sequence that relates to the sequence of events in the behaviour. For example, in a transport survey, in which the respondent is being questioned about his or her travel behaviour, a logical sequence would be to ask as follows.

(1) From where did you start out (origin of the trip)?
(2) At what time did you start out?
(3) What was the reason for your travel?
(4) By what means did you travel?
(5) Did you make any stops along the way, other than for traffic?
(6) To where were you going?
(7) At what time did you arrive at your destination?
(8) How much did it cost you to travel to your destination?

While this is somewhat oversimplified, it illustrates the point about a logical sequence relating to behaviour. In this case, the respondent is metaphorically taken from the beginning of his or her trip to the end of it, with the intervening questions relating to what was happening at that point in the travel. Thus, the starting location and time are asked first, the reason for travel is asked next, and then the means of travel. Following this, the survey asks about anything that might have happened along the way, then asks where the person was going, when he or she arrived there, and how much it cost the respondent to travel to that destination. Cost is asked last, because often the last thing that the person does at the end of the trip is to pay for parking, if payment is required.
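
A CAPI or internet instrument might store these questions as an ordered list so that the script itself enforces the start-to-finish sequence of the trip; the sketch below is purely illustrative.

```python
# The trip questions encoded in their logical order; a CAPI script would work
# through the list from the start of the trip to its end.
trip_questions = [
    "From where did you start out?",
    "At what time did you start out?",
    "What was the reason for your travel?",
    "By what means did you travel?",
    "Did you make any stops along the way, other than for traffic?",
    "To where were you going?",
    "At what time did you arrive at your destination?",
    "How much did it cost you to travel to your destination?",
]

def ask_trip(ask):
    # Questions are asked strictly in list order (dicts preserve insertion order).
    return {question: ask(question) for question in trip_questions}
```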

If the survey had asked first where the person was going, then what time he or she left to go there, then how much it cost, and then from where he or she started out, and so forth, the respondent would be asked to think about the trip in an illogical sequence. Two things are likely to result from this: first, frustration with the questions, which do not appear to the respondent to follow in any particular order; and, second, difficulty in recalling the answers.

Another illustration of order is useful here. Suppose that the survey were collecting information about employment history. The questions might now be ordered in the following way.

(1) In what year did you leave school?
(2) What was your first job?
(3) When did you start that job?
(4) How long did you work in that job?
(5) Who did you work for in that job?
(6) What was your next job?
(7) When did you start that job?

To most of us, it would seem natural to think back to the earliest time we had a job and then to move forward in a chronological sequence. On the other hand, if the survey was intending to ask only about the most recent ten years of employment history, it might start out by asking about the current job, and then work backwards from now until ten years ago.

Once a respondent is thinking about a particular issue, it is also logical to ask all the questions about that issue. Sudman and Bradburn (1982) note that some survey researchers believe that it is better to switch back and forth between different topics, on the grounds that it reduces the monotony of the survey. However, they point out that most respondents would find this confusing rather than stimulating or interesting. In other cases, the interview switches back and forth between topics so that the interviewer can ask a question multiple times to check on the validity of the response. However, Sudman and Bradburn (1982) also point out that most respondents resent being asked the same question multiple times, and would also be impatient when their train of thought keeps being changed by the question order. This may lead to decreased reliability in the responses and to premature termination of the survey. It is recommended that such switching back and forth between topics not be used, unless there are other important reasons for doing so.

As a general point in the design of a survey, it is useful to inform the respondent of each change of topic by an introductory sentence, or, in a self-administered survey, through a new heading. For example, the preceding set of questions could have been introduced by a phrase to the effect that the interviewer is now going to ask some questions about the respondent’s employment history. After completing that section, the next questions might be about residential history, and could be introduced by a phrase such as ‘Now, I would like to ask you about the places you have lived’. In some instances, it will be particularly important to indicate why the questions are to be asked, especially if the new topic might seem to the respondent to be unrelated to the main purpose of the survey, or unlikely to be related to the preceding questions.

It is very important to be aware of conditioning effects in a survey. It is probably true to say that all surveys will have some conditioning effects. These effects may be internal to the survey, whereby the responses to later questions are affected by the answers provided earlier in the survey, or they may even be external to the survey and affect the behaviour under study. Within the survey, conditioning effects occur when answers to questions early in the survey influence the way in which the respondent answers questions later on. For example, in the sequencing of the questions about greenhouse gas emissions used earlier in this chapter, inverting the order of the questions would be likely to lead to conditioning effects. They would arise in this case because the answer about making agreements on greenhouse gas emissions among different countries would be likely to influence responses to the other questions, about whether the country should be doing more about greenhouse emissions and what should be done, and whether the respondent is even concerned about global warming.

On the other hand, there are instances when conditioning effects are sought intentionally, in a sense. This might occur, for example, when the topic to be discussed is one that the respondent may not have thought about or may not have opinions about in advance of the survey. In this case, the inverted sequence of questions, leading from specific to general, may be a way to assist the respondent in reaching a judgement on a general issue. This could be important, also, when the research needs to make sure that all respondents have based their judgements on the same dimensions of the overall issue. As suggested by Kahn and Cannell (1957), if the survey was attempting to find out how respondents felt about the company at which they work, the survey might first ask questions about the respondents’ foremen, the physical conditions of the work and the workplace, the content of the job, and so forth, ending up by asking the respondent to consider all those things that had just been raised and then say what he or she thinks of this company as a place to work.

One other aspect of conditioning has been alluded to already. This is the conditioning effect of having the respondent become used to and willing to answer questions. When sensitive or threatening questions need to be asked, it was suggested earlier that these should be asked near the end of the survey or should be surrounded by non-threatening questions. Either of these two strategies is actually based on a form of conditioning, in which respondents are assumed to become more willing to answer and to see questions as less threatening after they have become conditioned to responding to many other non-threatening or non-sensitive questions. Therefore, conditioning is both a negative and a positive outcome of survey questions.

Another issue relating to question ordering is that of the phrasing of questions that lead to branching. This can be a particular problem in self-administered surveys. The issue here is that respondents will fairly quickly realise if answering ‘Yes’ to a question that has a ‘Yes’/‘No’ option always leads to follow-up questions, while ‘No’ does not. In such cases, respondents may start to choose the ‘No’ or ‘Don’t know’ answer, even when these are not the truthful or factually correct responses. This can be overcome by changing the phrasing of questions so that sometimes ‘Yes’ leads to follow-up questions and sometimes ‘No’ leads to follow-up questions.

Another way to deal with this is a little more subtle. Suppose the survey is asking questions about the small appliances owned by a household. For each appliance owned, it is then desired to ask a series of questions about the make and model, the date of purchase, the degree of satisfaction with the appliance, etc. If the survey is ordered so that a respondent is asked first if he or she owns the appliance in question, and is then asked the series of questions about it, the respondent learns quickly that many questions can be avoided by simply stating that he or she does not own the appliance in question. However, if the respondent is first asked just to state whether or not he or she owns each type of appliance, and only after answering about ownership of all relevant appliances are the detailed questions on each owned appliance asked, the respondent cannot anticipate that an untruthful negative answer would have allowed him or her to avoid a significant number of questions. Of course, this may be less effective in self-administered surveys, when some respondents will read ahead, before answering the questions, and therefore deduce the potential usefulness of lying in answering such questions.

Similarly, in attitudinal questions, attributes or statements should vary between being positive and negative, so that the respondent does not always tick the same rating, without thinking about the question. For example, suppose respondents are being asked to indicate their strength of agreement with a series of statements relating to using their cars. The design might pose the following statements.

• Car travel offers more flexibility than walking, bicycle, or public transport.
• Driving is more convenient than walking, bicycle, or public transport.
• Reducing car use would not make a difference to the environment.
• Driving your car allows you to be independent.
• Needing to transport others makes it too difficult to use alternatives to the car.
• Driving allows you to save time.
• Reducing car use will not reduce traffic in your local area.
• Driving is not stressful.

For a respondent who is an habitual car user, it is likely that he or she will agree with all these statements. Conversely, for someone who uses public transport most of the time, by choice, and perhaps does not even own a motor car, by choice, it is likely that he or she will disagree with all these statements. The problem is that the respondent may get into a rote response, in which all questions are answered the same way. In other words, the respondent drops into an automatic response mode, in which no thinking is done about the question. To overcome this problem, the questions need to be rewritten so that the habitual car user is presented with statements on which he or she will sometimes agree and sometimes disagree. These same questions could, for example, have been written in the following way to avoid this problem of a rote response.

• Car travel does not offer more flexibility than walking, bicycle, or public transport.
• Driving is not more convenient than walking, bicycle, or public transport.
• Reducing car use would make a difference to the environment.
• Driving your car allows you to be independent.
• Needing to transport others makes it too difficult to use alternatives to the car.
• Driving allows you to save time.
• Reducing car use will reduce traffic in your local area.
• Driving is stressful.

Now the respondent must pay more attention to each statement to decide if he or she agrees or disagrees with it. In fact, this also then becomes a test of rote responding. Any respondent who marks the same level of agreement with each of these statements is almost certainly not paying attention and has responded by rote. It may, therefore, be justified to discard the answers of that respondent.
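This kind of check can be automated at the data-cleaning stage. The sketch below is purely illustrative (the book itself contains no code): it assumes a five-point agreement scale and hypothetical variable names, and simply flags any respondent whose ratings across a mixed positive and negative battery never vary.

```python
def flag_rote_responses(ratings_by_respondent, min_distinct=2):
    """Flag respondents who ticked the same rating for every statement in a
    battery that mixes positively and negatively worded items.

    ratings_by_respondent maps a respondent identifier to a list of ratings
    (e.g., 1 = strongly disagree ... 5 = strongly agree). The names and the
    five-point scale are assumptions made for this illustration only."""
    flagged = []
    for respondent_id, ratings in ratings_by_respondent.items():
        answered = [r for r in ratings if r is not None]  # ignore skipped items
        if answered and len(set(answered)) < min_distinct:
            flagged.append(respondent_id)  # identical answer to every item
    return flagged


# Respondent 'B' ticks the same box for all eight car-use statements.
sample = {
    'A': [5, 4, 2, 5, 4, 4, 2, 1],
    'B': [4, 4, 4, 4, 4, 4, 4, 4],
}
print(flag_rote_responses(sample))  # ['B']
```

Whether responses flagged in this way are then discarded remains a judgement for the analyst, as noted above.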

The end of the questionnaire
Quite frequently, respondents find some demographic questions to be threatening. Although it is acknowledged that some demographic questions may be required for initial screening and eligibility concerns, it is generally considered wise to place most of the demographic questions at the end of the survey. There are two reasons for placing these questions there. First, by the time the respondent comes to these questions, he or she has already answered a number of questions and has become habituated to responding – a form of conditioning. As a result, questions that would otherwise be considered to be threatening or invasions of privacy may now be accepted as being non-threatening, or the invasion of privacy is no longer an issue in the respondent’s mind. Second, in the event that the respondent still feels threatened by some questions, or is inclined to protect his or her privacy, the failure to answer some of the demographic questions may not jeopardise some use of the data already collected. Thus, rather than losing an entire respondent, the respondent may simply be considered as having provided an incomplete or partial response, with the remaining responses still being useful in some contexts. Any other questions that traditionally prove difficult to get all respondents to answer should also be placed near the end of the survey for the same reasons.

Finally, in all surveys, the end of the questionnaire should offer the respondent an opportunity to provide comments, whether about the survey itself or about the topic of the survey. This is especially important in self-administered surveys in which rapport could not be established with an interviewer, and the respondent may feel that there are things he or she wishes to say that were not provided for in the main body of the survey. However, it is also useful in interviewer-administered surveys, because, even if a rapport has been established, the respondent may still get to the end of the survey and now feel that there are things that he or she wishes to say either about the survey or the topics of the survey.

It is also important to bear in mind that fatigue is more likely to be an issue at the end of the questionnaire. Therefore, it is wise not to leave to the end of the survey those questions that are considered to be most critical to the researcher. Fatigue may lead to carelessness in answering the last few questions, superficial responses to these questions, or avoidance of answering the questions altogether. Any of these results would lead to degradation of the survey results, so care must be taken in deciding which questions to place at the end of the survey. Again, placing classification questions at the end is appropriate, because these questions are factual in nature, so little thinking is normally required to answer them and correct answers should usually ensue even if the respondent is becoming fatigued.

8.4.3 Some general issues on question layout
As Zmud (2006) points out, the questionnaire should follow the normal reading pattern. In Western cultures, this means that people read from left to right and from the top of the page to the bottom. The questionnaire should be laid out in this manner. In some instances, it may be reasonable to arrange the page layout in columns. However, when this is done, the columns should still follow the same pattern of reading from left to right and top to bottom within the column, and columns should progress from left to right across the page.

Dillman (2000) also points out that the goal of any questionnaire design, and especially the design of a self-administered questionnaire, is to have each respondent or interviewer read and comprehend each word in the same order, and to have them also comprehend the words forming the same groupings. One of the major sources of error in surveys arises from these issues, through missed words, questions that are not understood the same way by each respondent, and answer categories that some respondents do not see. Achieving this goal of good survey design for a self-administered survey, whether on the internet or in a paper survey, is a major challenge for those who design survey instruments. Some of these issues are addressed in Chapter 9 on question wording, while others are addressed in this section. It must be kept in mind that the reader of a survey instrument is actually reading the instrument in two languages, as pointed out by Dillman (2000). One language is the written word in the language of the survey. The second language is the symbols and arrangements of words and symbols on the page. The importance of the second language cannot be overstressed. The symbols, the layout, and the arrangement of words on the page provide cues to the reader as to the order in which things should be read, and the importance attached to them. Dillman (2000) provides an excellent and detailed treatment of these issues.

Sudman and Bradburn (1982) provide a checklist of seventeen major points that should be considered in survey layout. Dillman (2000) makes many of the same points and adds to them. In this section, these various points are discussed, as they pertain to both interviewer-administered and respondent-administered surveys. Most of these points are most critical for respondent-administered surveys, but most are also quite important for interviewer-administered surveys. If the interviewer becomes confused or misdirected by the survey form, this will lead to mistakes in the interview and to frustration for the interviewer and the respondent alike.

Overall format
For more than twenty-five years of survey design the author has preferred a booklet style of survey form. As pointed out by Sudman and Bradburn (1982), a booklet format is easier to read, makes it easier to turn the pages, and is not prone to losing pages. A format that consists of sheets of paper that are corner-stapled is hard to handle and very liable to having the last page pulled off, inadvertently. While booklet styles are a little more complex to set up, and are more expensive to produce (requiring the capability to ‘saddle-stitch’ the booklet – i.e., staple the pages along the central spine), these added requirements are well worthwhile in the overall presentation of the survey form.

Dillman (2000) is even stronger in his support of booklet questionnaires. He describes four unacceptable formats, based in all cases on the fact that these formats have sufficiently large negative consequences for the survey as to render them unacceptable. These include printing on both sides of a sheet and stapling the sheets together in the corner; printing pages in landscape (horizontal) mode, rather than portrait (vertical) mode; using unusual folding patterns, such as an accordion style of fold-out, or a questionnaire that folds and unfolds like a road map; and using unusual shapes, such as diamond-shaped pages, square pages, etc.

The author has generally found that a convenient booklet size is obtained either by taking regular A4 (or US letter or legal) size sheets and folding these in half to create the booklet, or using A3 (or US 11 inch × 17 inch) sheets to form an A4 (or US letter) size booklet through the same process. If using the larger size (A3 or 11 inches × 17 inches), Dillman (2000) suggests that the pages should be divided into columns, so that the respondent does not have to read across the entire page width. This is less important in an interviewer-administered survey, when using full A4 size pages in a booklet is usually quite workable.

When setting up such a survey form in a word processor, if it is desired to lay out the sheets for the full-paper-size printing it is important to note how this should be done. On the first sheet, which would be displayed in landscape format on a computer, and which would be divided into two columns, representing the print area of each of the half pages in the booklet, the first page in the computer document should have the back of the cover on the left side and the front of the cover on the right side. The next document page will contain the inside of the front cover on the left, and the inside of the back cover on the right. If done in this manner, the booklet will come out printed correctly from a simple print of the document to any printer, with either the printer itself printing on both sides of the sheet or the sheets being fed manually to print on both sides.

After completing the cover, the next computer document page will have the third page from the end of the booklet on the left side, and the third page from the front of the booklet on the right. The fourth document sheet will have the fourth page of the booklet on the left and the fourth from the back on the right. This pattern continues through the entire booklet. Figure 8.1 shows schematically how this should appear for a booklet with N pages. Note also that N must be a multiple of four.

By following this pattern, and printing double-sided copies either manually or with a printer capable of duplex printing, the booklet will result. Note that, when printing this manually, it is necessary first to print the ‘odd’ pages and then, after replacing the sheets the correct way round in the document feeder, to print the ‘even’ pages.
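To make the pagination rule concrete, the short sketch below works out which two booklet pages share each document page for a booklet of N pages. It is offered purely as an illustration (the book contains no code), and the function name is hypothetical; the rule itself is simply the pattern just described.

```python
def booklet_imposition(n_pages):
    """Return the (left, right) booklet page numbers for each document page,
    following the pattern described in the text: document page 1 holds the
    back cover (page N) on the left and the front cover (page 1) on the right,
    document page 2 holds page 2 on the left and page N-1 on the right, and
    so on through the booklet."""
    if n_pages % 4 != 0:
        raise ValueError("A saddle-stitched booklet needs a multiple of four pages")
    layout = []
    for k in range(1, n_pages // 2 + 1):  # one document page per side of a sheet
        if k % 2 == 1:  # odd document pages: page counted from the back on the left
            layout.append((n_pages - (k - 1), k))
        else:           # even document pages: page counted from the front on the left
            layout.append((k, n_pages - (k - 1)))
    return layout


# An eight-page booklet: [(8, 1), (2, 7), (6, 3), (4, 5)]
print(booklet_imposition(8))
```

The final pair produced is the centre spread of the booklet, which is a quick way to check that the imposition has been set up correctly.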

This is the preferred format whether the survey form is to be filled out by the respondent or the interviewer. Of course, if the survey is very short, the entire survey may be contained on one or two sides of a single sheet. In this case, a booklet is unnecessary.

It is also recommended that, when a survey form is to be produced with double-sided printing, a heavy weight of paper is used. For example, rather than the standard 80 gsm (grams per square metre) paper that is used in most computer printing applications, 100 gsm is preferred for double-sided printing. In addition, it may be advisable to use an even heavier stock for the cover of the booklet or for a single-sheet survey form, with 120 to 140 gsm being the preferred weight of paper in these cases. Similarly, if the survey is a single- or multiple-sheet form or a booklet to be completed on board a vehicle (such as many on-bus and air surveys), the entire survey form may be best printed on heavier weight stock, such as 120 to 140 gsm.

Appearance of the survey
For self-administered surveys of all types (postal, handed out, internet, etc.), the appearance of the survey form will impact respondents’ willingness to answer the survey. The survey form should have a professional appearance, both as to layout and printing. In the early twenty-first century, with colour laser printers and sophisticated word-processing and layout software, there is no reason why a survey should not have a highly professional finish to it. Generally, it is no longer necessary to have survey forms typeset and printed, although this may still be the way to go for very large surveys. Rather, almost all surveys can be produced readily from commonly available software and using available printers.

Even for interviewer-administered surveys, a professional appearance is ideal. This will give confidence to the respondent in a face-to-face survey and will generally make the task of the interviewer easier to accomplish. It will also usually make the handling of the survey forms, both in the interview and for later data entry purposes, that much easier.

Figure 8.1 Document file layout for booklet printing

Front cover
The cover of the survey should always show the title of the study, the name of the organisation for which the study is being done and that is conducting the survey, and, if relevant, the date of the survey. This should be the case for all surveys, irrespective of the mode of administration. For self-administered surveys, the front of the survey may also contain a graphic (such as the logo of the organisation undertaking the survey) and some brief instruction.

Spatial layout
Whether for use by interviewers or respondents, survey questionnaires should be open and include plenty of unused space. A survey form that is filled with questions looks daunting to the respondent and is likely to be confusing to respondent and interviewer alike. As Sudman and Bradburn (1982) note, questions should not be crowded. In general, it is far preferable to increase the length of the survey instrument than to squeeze as much as possible into a small space, resulting in a cluttered and confusing questionnaire. In mail surveys, there tends to be somewhat more reaction to survey length, in that people may be put off if the survey is too long. However, as pointed out by Sudman and Bradburn (1982), the salience of the survey topic will have a considerable impact on the effects of length on response. People will generally respond well to even quite long surveys on topics that are of high salience, while on topics of low salience, a short survey is almost mandatory.

It is of utmost importance to make sure that it is clear what is to be recorded either by the interviewer or the respondent. First, the type of mark to be used should be shown and specified clearly, so that there is no confusion in the mind of the person completing the survey form. If boxes are to be ticked, then this should be shown clearly. If a cross is to be used, or if circles are to be filled in, this should also be shown clearly. If a mark-sensing form is to be used, requiring circles to be filled in completely in black pencil, then this must be indicated clearly. In this last case, it is also usually advisable to provide the type of pencil to be used, because many respondents will not have one readily available.

Second, it should be clear to which answer each box or circle belongs. In the example shown in Figure 8.2, the question responses are likely to lead to confusion, because there is no distinction as to which box or line each response belongs to. In question 17, the second box could equally well belong to ‘Male’ as to ‘Female’. Similarly, in the questions that have multiple answers, such as housing type (question 22), the placement of the boxes and the responses is such as to lead to serious confusion for the respondent (or interviewer) and the data entry person.

The layout is also cluttered in this case. There is very little open space in the format, and a respondent may look at this and feel it is going to be quite difficult to fill out. For the respondent or interviewer, and the data entry person, the questionnaire should be columned, so that the boxes to tick line up vertically.

When questions require responses to be written in, it is important that enough space, both vertically and horizontally, is left for the person to write in the required information. When laying out a survey form in a word processor, it is often not easy to see how much vertical space will actually appear on the form, and the survey designer is advised to print out the drafts of the form frequently, to ensure that sufficient space is being provided. In a form that is being set up in a 12-point typeface, generally a single-spaced line will not accommodate most people’s handwriting. Attention should also be paid to ensuring that the line provided is long enough for the information that people are likely to need to answer the question. In Figure 8.2, the lines for the responses to the ‘other’ option in questions 22, 23, and 24 are all too short. In addition, none of the lines to be written on allow adequate vertical space for writing the response.

In contrast, Figure 8.3 shows how these questions could have been laid out, so as to provide the respondent and interviewer, and the data entry person, a much cleaner and clearer survey. A larger and more open font has been used in Figure 8.3. The answers have been columned, so that the eye is led much more easily to the responses, and the boxes are now much more clearly associated with the responses. Further, the spaces to write in a response are longer and provide more vertical space. There is much more unused space in this layout, so that it does not look as though it would be as difficult to complete. It is, of course, true that the layout of Figure 8.3 occupies more than twice the space of Figure 8.2. However, this is a worthwhile increase in space, which is likely to be associated with an increase in response, not a decrease.

Choice of typeface
This layout also demonstrates the need to use care in selecting the typeface. Generally, a serif-style typeface, such as is used in this book, is less desirable in a survey. Figure 8.2 uses the common Times New Roman serif typeface, which is the default for many computer word-processing packages. It has a rather cluttered appearance to it.

Figure 8.2 Example of an unacceptable questionnaire format

Figure 8.3 Example of an acceptable questionnaire format

In contrast, Figure 8.3 uses a sans serif typeface called Tahoma. This is preferable to the common Arial typeface that is also a default for many computer software packages. Tahoma has a cleaner, more open feel to it, and is easier for most people to read. There is also an important difference in type size between the two figures. For the purposes of cramming in more questions in less space, Figure 8.2 was set in a 10-point type size, whilst Figure 8.3 uses just one point more at 11 points.

It is particularly important that one keeps in mind the needs of different population subgroups who may have to read and respond to a questionnaire. Many surveys are conducted on people of all ages. It is important to remember that older people sometimes have difficulty in reading small type, especially when it is a serif style. Distinction between some letters becomes more difficult, and a person may read something that is not the word actually presented in the survey.

For interviewer-administered surveys, the typeface and size are more of an issue in terms of allowing for clear and non-hesitant reading, so that the interviewer does not stumble over sentences or questions. Similar layouts should be used, whether the survey is to be interviewer-administered or self-administered.

There are two other issues with respect to typefaces that need to be addressed. Both require caution in their use. First, there is the use of different typefaces in the same questionnaire. One of the problems with the proliferation of available typefaces in word-processing software is that the unwary can be tempted into using a number of different typefaces. This is ill-advised. It is probably best to stick to one typeface, or at most two different typefaces, in a survey instrument. The use of different typefaces can become hard on the eyes, and may detract from the overall professional appearance of the survey. Sometimes it can be useful to use one typeface for questions and a different one for answers. This may assist respondents in complex surveys. Alternatively, there may be one typeface used for instructions and another one for everything else in the survey.

The second issue relates to the use of typeface enhancements, such as bold and italic. Again, the best advice here is to be judicious in the use of such enhancements. Some survey designers seem to feel that they should boldface every word in a question to which they wish to give emphasis. As before, this will tend to defeat the purpose of the text enhancement, leading instead to respondents ignoring all enhancements. Again, one possibility is to use one enhancement to pick out either the questions or the responses. Another enhancement, such as italic, could be used for all instructions.

Two principles are worth observing in relation to both the typeface issues. Do not overuse either enhancements or changes in font, and be consistent in the use to which they are put. Either enhancement or changed typeface can be used to distinguish between the three main groupings of material in a questionnaire: questions, responses, and instructions. However, these should be done consistently throughout. On the other hand, if typeface enhancement is used to provide emphasis, especially in questions that people typically misread, then the number of cases of this use should be small.

Use of colour and graphics
Colour can be particularly helpful in designing surveys. When colours are used, they should be pastel shades, and the colours should be chosen carefully so that the black text shows clearly against the coloured background. Colour can be used very effectively to lead respondents or interviewers through a survey that has complex skip patterns. For example, the author has used colour by creating a coloured background around a subset of answers, from which the respondent is then directed to another set of questions with the same colour background. At other times, different colour backgrounds can be used to distinguish between the questions to be answered from one branch of a question and those to be answered from another branch.

When the use of colour is considered to be too expensive, similar effects can be achieved by using light grey shading with different patterns of shading. While not as versatile, the careful use of shading can help to clarify skip patterns, and to distinguish branches in a branched question. Of course, with both colour and shading, the key principle is to use them judiciously. Excessive use of either one of these will lead to a saturation effect, whereby the respondent is no longer able to see the guidance that is supposed to be provided by these devices.

A second alternative with colour is to use coloured paper, instead of white. In most of the surveys the author has designed, booklets have covers that are printed on coloured stock instead of white. This can be a useful device when it is desired to distinguish between different survey forms. This can be useful in both interviewer- and respondent-administered surveys. For example, the author has designed a number of surveys that use a diary, with different households being asked to complete the diaries on different days of the week. For each day of the week different colour covers were used, so that these could be readily distinguished for sending out and after return. This made the packaging for mailing simpler and easier, and also allowed instant sorting of returns by day of week. In other cases, the use of coloured stock can assist respondents, as it becomes possible to refer to forms by colour in the instructions to respondents. This also works well for interviewers, who may need to know that they have three different coloured forms to complete for each person or household that they are interviewing.

So far as graphics are concerned, these, again, can be useful to assist the respondent in finding his or her way through a survey and even in clarifying questions. The most useful graphic aid is the arrow. Figure 8.4 shows part of a diary survey in which arrows are used to help guide the respondent. The arrows are used to direct from ‘Yes’/‘No’ branches to the appropriate conditional questions.

Arrows should never cross questions or responses. If there is an apparent need for them to do so, then the survey instrument should be redesigned to prevent such a need. Short arrows, such as those shown in Figure 8.4, are preferred. Other graphics can also be used if these may be helpful to key a person as to what is required.

Figure 8.5 shows the use of some limited graphics to assist the respondent in picking up the instructional information that has been provided. Similar graphics may be used sparingly in a survey instrument, especially one that is to be filled out by the respondent. Graphics should generally not be necessary for interviewer-administered surveys.

Figure 8.4 Excerpt from a survey showing arrows to guide respondent

Figure 8.5 Extract from a questionnaire showing use of graphics

Question numbering
One of the features that should be noted in all the examples of surveys shown in this chapter is that the questions are numbered, in all cases. This should always be the case in any survey. Sub-parts of questions should be lettered. This helps to prevent questions from being accidentally omitted or skipped when they should not be. It aids both respondents in self-administered surveys and interviewers in interview-administered surveys. It is also essential for referencing questions in the coding and analysis process. Usually, the database into which the survey data are entered will use the question numbers as part of the identification of each item of data. This makes it easier to go back to the original survey to check for possible errors or corrections in the data.

Dillman (2000) points out that, once people have responded to one question, they are then immediately looking to find the next question to which they are to respond. Providing question numbers in a consistent format helps the respondent find the next question. Thus, numbering from 1 for the first question without any skipped numbers is a very important aid to navigating through the survey. Likewise, using letters for the sub-questions also ensures that the navigational path is clear, and distinguishes the sub-question from the main question, guiding the respondent on what to look for next.

Sudman and Bradburn (1982) also suggest that sub-parts of questions should be indented. In general, this is good practice, but it may not be appropriate if something other than a straight sequential questioning is used, as in Figure 8.3, where the use of the boxes in the sub-parts of the questions would mean that indenting is not needed, nor is it practical. However, as a general rule, indenting will help to clarify that these are sub-part questions. Again, this helps to ensure that respondents do not inadvertently fail to answer questions.

Dillman (2000) points out that simple numbering is essential. Often, one finds combinations of letters and numbers used for question numbering. For example, in a travel survey, questions relating to the person’s travel, which are asked first in the survey, might be numbered T1, T2, etc., indicating to the survey designer that these are travel questions. Next are attitude questions, designated A1, A2, etc., which are then followed by questions about the person completing the survey (P1, P2, etc.) and the household in which the person lives (H1, H2, etc.). These letter designations may be helpful to the agency or the survey researcher, but they are clearly confusing to the respondent, who would expect A questions to come first, followed by H, then P, and lastly T. Because this is not the order of these letter and number question designations, the navigational path has become confused. Simple numbers are much to be preferred. The respondent does not need to know the underlying groupings of these questions.

Similarly, it is not uncommon for survey researchers to divide a survey instrument into parts, with each part numbered and the questions within each part starting over from question 1 each time. Again, this is confusing for the respondent, who is now faced with multiple questions labelled 1, 2, 3, etc. This is not overcome, Dillman (2000) warns, by using a numbering system such as I-1, I-2, …, II-1, II-2, etc. Again, these complex numbers add difficulty to the task for the respondent, and can easily confuse.

In the event that a survey is divided into sections, question numbers should still run continuously from the first question in the survey to the last. Thus, part A may contain six questions, which would be numbered 1, 2, 3, …, 6, while part B will then start at question 7, and so on through all the parts of the survey. Sometimes the objection is raised that labelling all the questions from 1 to the number of questions in the survey will increase nonresponse when respondents realise how many questions they are expected to answer. However, like similar objections to survey length, there is no evidence that such a nonresponse effect arises.

Page breaks
Questions should never be split across a page break. When this happens, the interviewer or respondent may assume that the part of the question showing on the initial page is all that there is of the question, and proceed to answer that. Splitting a question across the page break may also be frustrating to respondents to self-administered surveys, especially if the remainder of the question is on the back of the page on which the first part of the question appears. Because a good survey design involves maintaining a reasonable amount of open space within the layout, it should never be difficult to ensure that questions and their responses are wholly contained on one page. Sudman and Bradburn (1982) also point out that a long question with a number of sub-parts should not be followed by a short question that appears at the bottom of the page. This will tend to result in respondents inadvertently missing the short question.


Repeated questions
It is often desirable to ask the same questions about different members of the household. When this is needed, the best way to do this in a self-administered survey is to put the questions in columns and the members of the household in rows. If necessary, this arrangement may need to span two pages. An example of the way this may be done is shown in Figure 8.6.

The same procedure can be used for questions that require the same answer categories, as is often the case with attitudinal questions. Figure 8.7 shows a comparison between an inefficient and an efficient way to organise such questions.

Figure 8.6 Columned layout for asking identical questions about multiple people

Figure 8.7 Inefficient and efficient structures for organising serial questions

Instructions
It is always necessary to include some instructions in a survey. However, the survey designer for a self-administered survey should bear in mind that most respondents will not read the instructions. As a result, the survey should always be designed so that it can be filled out correctly even if a respondent does not read the directions on what to do. It should flow intuitively and questions should be self-explanatory. Nevertheless, it is usually advisable to include some instructions that will help those who take the trouble to read them.

When instructions are included, it is a good idea to distinguish them from the questions and response sets by using a different font or font enhancement. For example, italics usually work well for one-line instructions. Sometimes it may be necessary to use all capitals, to draw particular attention to the instructions.

Figure 8.8 Instructions placed at the point to which they refer

Looking at Figure 8.8, it can be seen that the instructions there are provided in bold. In one instance, all capitals are used, where the instruction is to mark ONE of the options only. Instructions should generally appear just before the question to which they apply (Dillman, 2000; Sudman and Bradburn, 1982). Skip instructions should appear immediately after the answer, as is also shown in Figure 8.8.

Some survey researchers provide instructions in a separate booklet or in a separate section of the survey instrument. Dillman (2000) points out that neither of these is a good option, because they both tend to increase the difficulty of the task for the respondent completing the survey. Government-sponsored surveys are particularly prone to use separate booklets of instructions, which tend to include extensive descriptions of highly improbable situations that may arise in completing the survey. The longer the instructions that are provided, the less likely it is that they will be read. Moreover, the fact that a separate booklet or a separate section of the survey is required for instructions will lead many respondents to assume that the survey must be very complicated, and therefore to make the decision not to respond. Lengthy instructions, provided separately from the questions themselves, also make it more difficult for respondents to find out information to help with a specific question.

Dillman (2000) also points out that the decision to create a separate instruction booklet or section in the survey has tended to lead some survey designers to create more cryptic survey instruments. This, in turn, creates an increased reliance on the instructions to explain how to complete the survey, and generally results in even lengthier instructions. Further, when the instructions are in a separate section of the survey instrument, the respondent is required to keep flipping back and forth between the instruction section and the questions. Most respondents will soon lose patience with this process, and will either fill out the survey without the benefit of the instructions or will give up on the survey altogether.

Instructions may be used more extensively in interviewer-administered surveys. However, care should be taken that the instructions are always clear, are in a different typeface, so that the interviewer does not accidentally read them out to the respondent, and are provided only when they are necessary. It should be kept in mind that interviewers can and should be trained in the administration of the survey, before going into the field, so that only those instructions that are needed to deal with complex issues should normally be necessary. As with respondent-administered surveys, instructions should be provided with the questions to which they refer. However, for interviewers, a separate training booklet can be created, which the interviewer will use in the training sessions and possibly in the first few interviews. Nonetheless, the interviewer should be able to set it aside for most of the interviews.


Show cards
In face-to-face interviews, it may often be necessary to provide show cards that show the possible set of answers for a respondent. This will help overcome some of the primacy and recency issues that were discussed earlier in this chapter. Show cards are particularly useful for attitudinal questions that are to be answered on a numeric scale of such things as importance, satisfaction, or agreement. They can also be useful for such questions as income, level of education, and employment status, among others. Indeed, in any instance for which the list of possible responses is more than about three or four items, a show card is useful.

When using show cards, the arrangement of the responses may be vertical, horizontal, or across and down. Ideally, the responses should be numbered, in which case the interviewer simply inserts the appropriate number. In addition, the numbering must be clearly shown on the card, so that respondents are required only to pick the appropriate number.

Time of the interview
In any interviewer-administered survey, space should be provided on the survey for the interviewer to record the time at which the interview started and the time at which it ended. Space should also be provided for the interviewer to identify himself or herself. Other information, specifically concerning any special problems that may have arisen, should also be recorded. Special problems could relate to problems that the respondent had that may have impacted the survey, problems that the interviewer encountered in conducting the survey, or environmental issues that may have impacted the results of the interview.

The times of starting and ending the survey make it possible to know at what time of the day the survey was conducted and how long the interview took. Interviews that take much more or much less time than usual, and for which there is no note of special problems encountered, may suggest that the interviewer needs further training. Determination of the average length of time taken for the interview will also help the survey designer. A note of special problems may provide explanations for unusual answers obtained in a particular interview.

Precoding
Closed questions should always be precoded, so that data entry can be performed rapidly and analysis undertaken without problems. However, data codes should not appear on self-administered survey forms. It is optional as to whether or not they are included on an interviewer-administered form. However, if their inclusion is distracting to the interviewer, they should be omitted. Figure 8.9 shows the unacceptable survey format shown earlier in Figure 8.2 with the addition of response codes. This adds enormously to the clutter and to the generally overwhelming feel and look of the instrument.

The presence of the response codes in Figure 8.9 adds to the clutter and confusion of the survey form, and they should never be used in a respondent-administered survey.


In this case, they are also confusing for the data entry person, because of the overall bad formatting of this questionnaire. Another way of including response codes that the author has seen is the addition of a column on the right-hand side of each page of the survey form, labelled ‘For office use only’ and usually containing the list of codes for each question. Again, this adds clutter and confusion to the survey form. It is also an open invitation to some respondents to cause difficulty by marking or obliterating codes in this column.

Interestingly, Dillman (2000) does not mention the inclusion or exclusion of answer codes, nor the possible use of a column on the survey form for coding answers. His illustrations even of bad designs of questionnaires do not include either of these devices. Apparently, Dillman considers the idea of including such codes on the questionnaire so absurd as to not even require discussion. However, surveys continue to appear that use such devices. They always result in making the survey look more cluttered, more difficult to answer, and less appealing.

Although most respondents are aware, at some level of consciousness, that their answers to the survey will be reduced to numbers entered into a computer, most respondents do not need to be reminded of this fact. The inclusion of response codes and ‘For office use only’ columns emphasises this point and can lead to significant response problems. There are many other issues in constructing codes for question responses, which are dealt with in Chapter 18 of this book.
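For readers who also handle the data processing, the downstream role of precoding can be sketched in a few lines of code. The following Python fragment is purely illustrative and is not drawn from the text: the question, the code values, and the function name are assumptions, but they show how a precoded closed question lets data entry staff key a single digit that is later translated back into its response label at analysis time.

# Illustrative sketch only: a precoded closed question on marital status.
# The codes mirror those that would be printed on the interviewer's copy.
MARITAL_STATUS_CODES = {
    1: 'Never married',
    2: 'Married',
    3: 'Separated',
    4: 'Divorced',
    5: 'Widowed',
}

def label_for(code):
    """Translate a keyed-in code back to its response label at analysis time."""
    return MARITAL_STATUS_CODES.get(code, 'Invalid or missing code')

# Data entry captures only the digits, which is fast and unambiguous.
entered_codes = [2, 5, 1, 3]
print([label_for(code) for code in entered_codes])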

End of the survey

There are two important items that should appear at the end of every survey: an invitation to provide comments on the survey or its topic, and a ‘Thank you’ for participating in the survey. These should be provided in both interviewer- and self-administered surveys. However, they are especially important in self-administered surveys. In most instances, interviewers will be pre-trained to ask for comments and to offer thanks to the respondent.

17. What is your sex? 1□ Male 2□ Female

18. What is your present marital status? 1□ Never married 2□ Married 3□ Separated 4□ Divorced 5□ Widowed

19. How many children do you have in each of the following age groups? 1____ under 5 2____ 5–12 3____ 13–18 4____ 19–25 and over

20. How old are you? _____

21. Do you hold a current driver’s licence? 1□ Yes 2□ No

22. What type of house do you live in? 1□ Apartment/flat 2□ Town house 3□ Villa 4□ Semi-detached 5□ Detached 6□ Other (specify) ______

23. What is the highest level of education you have completed? 0□ None 1□ Primary school 2□ Secondary school 3□ TAFE/secondary college 4□ University 5□ Other (specify) _________

24. What is your job status? 1□ Employed full-time 2□ Employed part-time 3□ Unemployed/Unpaid work 4□ Retired/pensioner 5□ Other (specify) _________

Figure 8.9 Example of an unacceptable questionnaire format with response codes


Figure 8.5 shows the inclusion of a ‘Thank you’ at the end of the diary from which those pages were extracted.

In a self-administered survey, when offering respondents the opportunity to provide comments, adequate space should be provided, preferably with lines, for people to write in their comments. Again, surveys exist in which comments were invited, but little or no space was provided for those comments. This sends the message that the respondent’s opinions and comments are not considered to be of much value.

Some final comments on questionnaire layout

In summary of the points already made in this chapter, it is important that survey instruments are laid out in a logical fashion, and that considerable attention is paid to the information that the survey is trying to obtain. It should be apparent from the length of this chapter and the many issues discussed that the creation of a survey instrument is not a simple procedure, and that there are many pitfalls into which the unwary may fall.

Survey instruments should be as visually appealing as possible, using good design principles in the placement of material, symmetry, consistency, etc. Survey instruments should convey the importance of the survey and the respondent through the care and attention paid to the design. Both interviewer-administered and respondent-administered surveys should be designed carefully, so that completion of the survey will be as easy as possible and will be interpreted identically for each respondent. It is extremely important to keep in mind throughout the design process that the respondent is the customer, and to design the survey so that it is customer-friendly.

As is discussed in Chapter 12, pilot surveys and pretests should be considered an essential element of the design process. Only by field testing a survey is it possible to determine what works and what does not work. Design features that appear simple and straightforward to the survey designer will often fail in field tests, because respondents place different interpretations on these design aspects from the ones the designer does. In the same way, focus groups should also be a part of the design of surveys, providing information on the language to use, the issues of relevance, and the categories to use for answers.


9 Design of questions and question wording

9.1 Introduction

Chapter 8 provides considerable information on survey instrument design and offers many directions on how to accomplish a good design for both interviewer-administered and respondent-administered surveys. In this chapter, the specific issue of how to write questions in a survey is addressed. As with many of the topics covered in this book, there are entire books written on the topic of question wording and designing questions. This chapter provides an overall guide to the topic, as well as references to more in-depth treatment of certain topics.

The goal of writing questions in a survey is to have every respondent interpret the question exactly the same way, have every respondent be able to respond accurately to each question, and have every respondent be willing to respond. This is not an easy task. Dillman (2000) illustrates how difficult this can be, by using as an example a question asked of students about the amount of time that they study each day. In one version of the survey, the respondents were asked to select a response from a set that began with less than half an hour, and had a maximum of more than 2.5 hours. In a second version, the response range was given as starting at less than 2.5 hours, and increasing to a maximum of more than 4.5 hours. In both response sets, the responses incremented by 0.5 hours, so that version one had categories of less than 0.5 hours, 0.5–1.0 hours, 1.0–1.5 hours, etc. In addition to this, the surveys were conducted by mail and by telephone for each version.

The results of this survey were that, from the mail version that started from less than 0.5 hours, 23 per cent claimed to study 2.5 hours or more, while the mail version with categories from less than 2.5 hours to 4.5 hours and more produced 69 per cent of students claiming to study 2.5 hours or more. By telephone, these figures were, respectively, 42 per cent and 70 per cent. By the same token, in the first version by mail, 77 per cent indicated that they studied less than 2.5 hours, while, in the second version by mail, only 31 per cent claimed to study less than 2.5 hours per day. Although the question was asked identically in each case, the response categories were different, leading respondents to answer differently in each of the two versions.


Similar results were found when asking a question on how much time the students spent watching TV, with the first version yielding 17 per cent indicating that they watched more than 2.5 hours of TV per day, while the second version produced 32 per cent who admitted to watching more than 2.5 hours per day. These two examples show clearly the challenge of writing questions appropriately, and also demonstrate that the issue is not limited to the questions themselves but extends to the response sets provided with them.

9.2 Issues in writing questions

There are several issues that should be addressed in writing questions. The following subsections deal with several of these.

9.2.1 Requiring an answer

The first issue is to make sure that each question requires an answer from each respondent to whom it is addressed. A recent survey in Australia was filled with questions that did not require an answer. Such questions are frustrating to the respondent, who feels that he or she should be able to indicate some response to all questions addressed to the respondent. They are also problematic for analysis, because the survey researcher cannot be certain whether a lack of response indicates that the question did not apply or that the respondent skipped the question, either intentionally or unintentionally. An example of a question that does not require an answer is shown in Figure 9.1.

Question 6 in Figure 9.1 provides no way for a respondent to indicate that he or she does not read a daily newspaper. Instead, it makes the assumption that all respondents must at least read a daily newspaper sometimes. For the respondent who does not read a newspaper, not only is there nothing to answer in question 6, but there also is nothing to answer in question 7 either.

6. Please indicate which daily newspapers you read
(Frequently / Sometimes for each): Sydney Morning Herald; Daily Telegraph; The Age; Herald Sun; Courier Mail; Canberra Times; Adelaide Advertiser; West Australian; Hobart Mercury; Financial Review; The Australian

7. Is your main or occasional newspaper…
(Main / Occasional for each): Home-delivered; Work-supplied; Other

Figure 9.1 Example of a sequence of questions that do not require answers


When such a respondent confronts these two questions, he or she will be rather frustrated at not being able to tell the survey researcher that he or she does not read a daily newspaper, and will also be frustrated that there is no way to answer question 7 either. The survey researcher, receiving back the postal survey form with nothing ticked in any of the boxes for these two questions, is then unable to determine if the respondent skipped the questions as a form of intended or unintended nonresponse, or if he or she actually does not read a daily newspaper. The questions could easily have been rewritten in the manner shown in Figure 9.2.

By simply adding an opening question to ascertain whether or not the respondent reads a daily newspaper at all, the respondent is able to make an appropriate indication. The new question 6 in Figure 9.2 can be answered by all respondents. Questions 7 and 8 are then skipped by any respondent who answers ‘Never’ to question 6. The respondent is satisfied that he or she is able to answer each relevant question, and the survey researcher is clearly able to distinguish between those who never read a daily newspaper and those who did not respond to the questions.

A possible reason for asking the questions as posed in Figure 9.1 is the erroneous notion of saving questions and keeping the survey shorter. However, the frustration to respondents and the ambiguity provided to the survey researcher far outweigh any potential problem of adding slightly to the length of the survey by asking the qualifying question, shown in Figure 9.2.

As a basic principle of question wording, all questions that are addressed to a respondent should require an answer from the respondent, and it should be obvious whether a question that has no answer is in fact a case of item nonresponse – i.e., that the respondent has decided not to answer the question. Any question that is to be considered by a respondent but that does not require an answer should be rewritten.

6. Do you read a daily newspaper… Frequently □  Sometimes □  Never □ (Skip to question 9)

7. Please indicate which daily newspapers you read and how often
(Frequently / Sometimes for each): Sydney Morning Herald; Daily Telegraph; The Age; Herald Sun; Courier Mail; Canberra Times; Adelaide Advertiser; West Australian; Hobart Mercury; Financial Review; The Australian

8. Is your main or occasional newspaper…
(Main / Occasional for each): Home-delivered; Work-supplied; Other

Figure 9.2 Example of a sequence of questions that do require answers


9.2.2 Ready answers

A second issue has to do with whether or not respondents can be expected to have a ready answer to the questions that are posed. In general, if a respondent has the answer available, the question is likely to be answered accurately. When respondents do not have ready answers, accuracy will suffer, and there is an increased chance that respondents may decide not to answer. A question about the respondent’s gender is usually readily answered, as is a question about a person’s age, although some people prefer not to divulge their age. Although Dillman (2000) suggests that people can answer a question on age irrespective of how it is asked, this author suggests that some ways of asking age will be more likely to be answered than others. Therefore, it is more than just an issue, in this case, of the question content. This is dealt with later in this chapter.

Another question that is often asked is that of income. This is a question to which respondents may or may not have a ready answer. Most people know the amount of their take-home pay. Thus, if the question asks a person to report his or her net personal income, on the basis of one pay period, then most people who earn income will know this figure more or less correctly. If asked to report their gross personal income – i.e., before any taxes or other payroll deductions – many people will not know this figure without consulting a recent pay slip. Furthermore, if asked to report household income, for any household with more than one wage earner, this is not likely to be readily known by any respondent.

While gender, age, and income are factual questions, and there is variation in how readily people can and will answer these, attitude and belief questions are much less likely to produce a ready answer. Consider the question shown in Figure 9.3. This is a belief question to which the average respondent is unlikely to have a ready answer. Instead, he or she will need to think about who the present prime minister is and where he or she is from, and also about previous prime ministers and where they were from. After pondering this information, the respondent will then have to decide on the strength of his or her agreement or disagreement with the statement.

Another version of this question is shown in Figure 9.4. In this version, the response requested is more vaguely stated. This will result in a vaguer response. The more vague the response set provided, the more vague will be the answer provided by the respondent. Vagueness of the question and vagueness of the answer categories will both contribute to decreased accuracy and reliability in the answers. In fact, a question such as that shown in Figure 9.4 may result in a different answer each time it is posed to the same respondent, if sufficient time has elapsed that the respondent is no longer able to recall the answer given previously.

It is necessary for the survey designer to consider how readily respondents can answer questions, both to determine what level of accuracy and reliability can be expected in the answers, and to establish the amount of effort that may be required in writing the questions. The less ready the answers are, the more time may be needed to draft and redraft the question, to test it, and to redesign it. Factual questions to which respondents may be expected to have ready answers may be able to be written more rapidly and require less testing.


9.2.3 Accurate recall and reporting

This topic has already been dealt with to some extent in Chapter 8. In designing questions and question wordings, it is necessary to be conscious of the salience of the behaviours about which it is wished to obtain information. Behaviours that are not salient to people will be recalled only with difficulty for any period longer than a day or two. Behaviours that are salient will be recalled for much longer. For example, what people ate for breakfast on Wednesday, three weeks ago, is unlikely to be recalled readily, because this is not salient information. On the other hand, asking people where they went on their annual holiday in each of the past three years is much more likely to be recalled and reported accurately. Nevertheless, even with salient behaviours, there will still be some tendency for telescoping to occur, as was discussed in Chapter 8.

For behaviours that are not of particular importance to people, it is best either to ask about them in relation to an immediate past period of time, which should generally be short, or to ask them to report it for a period in the future. In the area of travel surveys, for example, the daily travel that most people undertake is not very important to them, and is not easily recalled, especially for days in the past. Even asking people about the travel they undertook on the day preceding the interview has been found to be subject to significant error, by way of people forgetting short trips that they undertook. On the other hand, asking people about long-distance travel only, with ‘long distance’ typically defined as in excess of 150 kilometres, is usually more successful for a period of even as much as six months prior to the interview.

5. The prime minister of Australia is more likely to be a person from New South Wales than any other state or territory.
How strongly do you agree or disagree with this statement?
Agree strongly □  Agree somewhat □  Disagree somewhat □  Disagree strongly □

Figure 9.3 Example of a belief question

5. The prime minister of Australia is more likely to be a person from New South Wales than any other state or territory.
On a scale of 1 to 7, where 1 means entirely agree and 7 means entirely disagree, please use a number to indicate how strongly you agree or disagree with this statement.
1 □  2 □  3 □  4 □  5 □  6 □  7 □

Figure 9.4 Example of a belief question with a more vague response


In travel surveys that focus on all travel conducted in a period of twenty-four or more hours, it has been found that there is a considerable improvement in accuracy by setting a date in the future at the time of recruitment, and asking respondents to record all their travel on that future day.

It is necessary to determine ahead of the survey design whether people are able to recall the behaviours in question and for how long in the past they can recall with any degree of accuracy. The survey questions then need to be posed within that limitation. As a general rule, recall of behaviour will be best if the period of time is short. However, for events that may occur only infrequently, such as an annual holiday, the period of time may have to be extended, commensurate with the behaviour frequency.

While on the subject of this particular issue of recalling past behaviours, a word of caution is necessary. Vague words, such as ‘usual’, ‘typical’, ‘normal’, and ‘regular’, should be avoided in asking for reporting of behaviour. The reason to avoid these words is that their very vagueness will result in inaccurate and unreliable responses. Furthermore, these words presuppose that the behaviour in question is routine, which may not be the case for many respondents. For example, suppose one is enquiring about what a person eats for breakfast. One respondent might have scrambled eggs two days per week, cold cereal on two days, hot cereal on one day, a boiled egg on one day, and toast only on one day. If the survey asks this respondent what he or she usually has for breakfast, the respondent cannot give an answer. While he or she has eggs on three days, cereal on three days and toast only on one day, no one type of breakfast is eaten more often than any other one. Hence, using a question wording that asks what this respondent typically, or usually, or normally, or regularly has for breakfast is actually unanswerable. At worst, the respondent will refuse to answer. At best, the respondent will pick either eggs or cereal as the answer to give, assuming that it doesn’t really matter that much anyway. In either case, the response is actually wrong. In such a case, the survey researcher either should ask about a specific day, or should ask for the respondent to report what he or she had for breakfast on each of the past seven days.

Recall data requests should be kept simple, as well as being related to events or behaviour that are as recent as possible. The more complex the requests, the less accurate the data are likely to be and the more unreliable will be the results.

9.2.4 Revealing the data

Although respondents may know the answers to questions that are posed in the survey, they may not wish to reveal that information. As has already been observed, people are often reluctant to reveal information about their incomes. Indeed, in most travel behaviour surveys of the past twenty years or so, reported nonresponse rates to income questions run at around 20 per cent – i.e., about one-fifth of all respondents do not wish to reveal income. Certain types of behaviour are also ones that people are often reluctant to reveal. Such behaviours as those relating to sex, drug use, petty theft, and other similar activities are much less likely to be reported on at all, or at least truthfully.

In this instance, question wording may not play a primary role in gaining more accurate information.


Instead, evidence suggests that people are more willing to report such behaviours and respond to questions that probe into what may be considered private areas if the survey is a self-administered one, as opposed to an interviewer-administered one (de Leeuw, 1992; Fowler, Roman, and Xi, 1998; inter alia). Dillman and Tarnai (1991) find that, in response to a question about driving after drinking alcoholic beverages, 52 per cent responded ‘Never’ in a self-administered survey, while 63 per cent responded ‘Never’ in a telephone interview. There is also evidence to suggest that younger people are more willing to reveal the truth of such behaviours in computer surveys than in other forms of self-administered survey.

Thus, in this case, it is incumbent upon the survey researcher to determine ahead of time whether or not the behaviours and other data that are desired are such that people are willing to reveal them, and to choose an appropriate survey mode for enquiry about such behaviours or other data. Alternatively, the survey researcher needs to be aware of the potential for nonresponse or untruthful responses from a survey that probes such areas.

9.2.5 Motivation to answer

Although the motivation to answer a survey may seem to be somewhat removed from question wording, there are elements of question wording that can have significant impacts on motivation. First, there is a substantial difference between interviewer-administered surveys and self-administered surveys with respect to motivation. In an interviewer-administered survey, the interviewer is able to provide motivation through tone of voice, additional things spoken outside the questions themselves that may encourage or persuade respondents to participate, and even in body language in a face-to-face survey. In a self-administered survey, the survey researcher must rely on the printed survey form to provide the motivation, because none of these other interviewer assists are available. In addition, it must be remembered that people will not read lengthy instructions or letters that might be intended to motivate. Motivation must come partly from the design of the survey itself and partly from the obvious care and attention paid in creating the survey form.

Second, there are certain aspects of the survey that will definitely decrease motivation. Most important among these are:

(1) separating the instructions from the questions to which they apply;
(2) using complex question formats;
(3) using unfamiliar language or using words in a way that is unfamiliar; and
(4) requiring respondents to undertake a very large task.

While there are other motivational elements to a survey, such as offering incentives to people to complete the survey (see Chapter 20), using follow-up reminders (see Chapters 16 and 20), and creating a respondent-friendly survey form, there are issues that come down to wording alone.


Sometimes it is necessary to modify the question wording; sometimes a question needs to be split into two or more questions; and sometimes a question format may need to be changed, such as asking for ratings instead of a ranking of different attributes. Each of these may improve the motivation of the respondent.

9.2.6 Influences on response categories

The choice of response categories may appear to indicate to the respondent certain prior judgements or indications of ‘expected’ behaviour, which will influence the respondent’s choice of response. The earlier example of the question on study habits illustrates this point. When the response categories were shown as less than 0.5 hours, 0.5 to one hour, etc., there was an implicit recognition that students might study as little as an hour or less per day. Similarly, there was an implicit indication that studying more than 2.5 hours per day represented studying a great deal. As a result, using this low scale of response categories, some students may have felt that it was not ‘cool’ to indicate that they studied the most at 2.5 hours per day or more, while feeling much more comfortable to indicate that they studied only one or two hours per day.

On the other hand, when the high scale was used, in which the lowest response was less than 2.5 hours and the highest response was more than 4.5 hours, students apparently felt less comfortable to indicate that they studied the minimum amount, and more comfortable to indicate a category at the upper end of the scale. Indeed, there appeared to be a consistent reluctance in this case to use the lowest response category. Thus, the choice of categories appears to influence the responses obtained. This is a case in which there is something more than just the words alone that is influencing respondents on how or what to answer. In fact, it is possible that respondents to this question saw implicit within the categories the concepts of highness and lowness, and chose a category not on the basis of the actual hours that they studied but, rather, on the basis of whether they believed that they were among those who study the most, or among those who study the least.

The category ranges and the layout of the categories will both influence the categories that respondents will choose. This is true even when the question relates to factual information. For example, Figure 9.5 illustrates two alternative sets of response categories for a question on age. It is likely that people in their fifties and sixties will be more likely to lie about their ages in the first set of responses, while being more likely to be truthful in the second set. The reason for this is that the first response set gives a value judgement that anyone over the age of fifty is ‘old’, whereas the second set suggests that only someone over the age of eighty-four is ‘old’.

Care in setting response categories is clearly necessary, as part of good question wording. It is also important to note that, the more vague the categories are, the more vague will be the answers and the more prone the survey will be to measurement error. For example, the question in Figure 9.4 has a vague response range from 1 to 7, which requires each respondent to put his or her own definitions on the meaning of each value between 1 and 7. Not only will each respondent be likely to differ from all other respondents in setting these values, but the same respondent, asked to respond on such a scale at two different times, is not likely to be consistent in the definitions.


9.2.7 Use of categories and other responses

Consider the questions shown in Figure 9.6. These different ways of asking for age are not equivalent to one another in the eyes of most respondents. Experience shows that respondents are least likely to answer a question that asks them to state their current age in years. Those, especially females, who do not wish to reveal their age are particularly likely to find such a question offensive or prying, and refuse to answer it. The question asking the respondent to mark a category generally works much better. In this case, the respondent feels that he or she has not fully revealed his or her age, and is more comfortable responding. The wider the categories, the more likely it is that an answer will be forthcoming from all respondents. Additionally, because many people are sensitive to the passing of each decade (forty, fifty, sixty, etc.), it is usually better if the age groups run, as shown in Figure 9.6, from the midpoint of each decade to the midpoint of the next decade.

In the author’s experience, the best results are actually obtained from the last version of the question, although similarly good results are also obtained from the question before that. For reasons about which one can speculate endlessly, even people who are sensitive to revealing their ages seem untroubled to respond with birth year or date. Perhaps this is because there are so many applications and forms in this day and age that require this information, and that even use the information on the date of birth to confirm a person’s identity, so that sensitivity to this question has been lowered substantially.

(a) 17. How old are you? Please tick the box that indicates your present age.
    Under 25 □  25–30 □  31–35 □  36–50 □  over 50 □

(b) 17. How old are you? Please tick the box that indicates your present age.
    Under 25 □  25–34 □  35–44 □  45–54 □  55–64 □  65–74 □  75–84 □  85 and over □

Figure 9.5 Two alternative response category sets for the age question


Perhaps it is also the fact that the person responding with this information does not make the connection that by so doing he or she is revealing his or her age. It is interesting to note that these last two questions do not mention age, so it is possible that the respondent is not even immediately aware that this is a question about age.

Similar issues arise with income, which is another ‘sensitive’ question, as has been noted previously. Again, asking a respondent to state his or her income to the nearest dollar (or pound, etc.) is generally the least successful method. Asking respondents to tick the category that includes their income is much more effective, and usually subject to considerably less nonresponse. Once more, though, the width of the categories is important. The narrower the categories, the less effective is the categorical question in overcoming reluctance to reveal information. A compromise must therefore be sought between the level of detail that the analyst would like and the lack of detail that will gain the highest level of cooperation from respondents. In Australia, the United States, and Canada currently, it would generally be advisable to use categories of income that are at least $20,000 in size – e.g., $0–$19,999, $20,000–$39,999, $40,000–$59,999, etc. The highest income category would usually need to be set as some amount and over.

(a) 27. How old are you?  ___________ Years
(b) 27. Please write in your current age...  ___________ Years
(c) 27. How old are you? Please tick the box that describes your present age.
    Under 25 □  25–34 □  35–44 □  45–54 □  55–64 □  65–74 □  75–84 □  85 and over □
(d) 27. What is your date of birth?  _______ Day _________ Month ________ Year
(e) 27. In what year were you born?  19_____

Figure 9.6 Alternative questions on age


If comparison of income data is to be made to census statistics, which will often be the case, then the categories used should be able to be mapped to aggregations of the census categories.
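A minimal sketch of this kind of banding is given below. The Python function and the particular band boundaries are assumptions made only for illustration; in a real survey the bands would be chosen so that they can be built up from the income categories used in the relevant census.

# Illustrative only: $20,000-wide income bands, with an open-ended top band.
def income_band(annual_income):
    """Place a reported annual income into a $20,000-wide category."""
    if annual_income < 0:
        raise ValueError('income cannot be negative')
    if annual_income >= 100000:           # assumed open-ended top band
        return '$100,000 and over'
    lower = int(annual_income // 20000) * 20000
    upper = lower + 19999
    return '${:,}-${:,}'.format(lower, upper)

print(income_band(45250))   # prints: $40,000-$59,999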

In an interviewer-administered survey, it is also possible to use another device to get at income when a respondent is unwilling to respond. Having found that the respondent is unwilling to indicate which of a set of categories of income his or her income falls into, the interviewer asks: ‘Can you tell me if your income is above or below $70,000 per annum?’ Assuming that the respondent is willing to answer this question, and indicates that his or her income is below $70,000, the interviewer then asks the respondent if his or her income is above or below $35,000. This procedure continues until the respondent refuses to answer further, or until an income with similar specificity to the original categories has been obtained.
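The halving procedure described above can be expressed as a simple algorithm. The sketch below is not from the text: the starting range, the stopping width, and the function names are assumptions, chosen only to show how each ‘above or below?’ probe halves the remaining range until a bracket of acceptable width is reached or the respondent declines to continue.

# Rough sketch of the interviewer's 'above or below' fallback for income.
def probe_income(answers_above, low=0, high=140000, stop_width=17500):
    """Narrow an income range by repeatedly asking 'is it above X?'.

    answers_above is a callable that takes a threshold and returns True,
    False, or None (None meaning the respondent refuses to go further).
    """
    while (high - low) > stop_width:
        midpoint = (low + high) / 2
        reply = answers_above(midpoint)
        if reply is None:                 # respondent will not answer further
            break
        if reply:
            low = midpoint
        else:
            high = midpoint
    return low, high

# Example: a cooperative respondent whose income is about $55,000.
low, high = probe_income(lambda threshold: 55000 > threshold)
print('Income is between ${:,.0f} and ${:,.0f}'.format(low, high))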

Ordered and unordered categories

In the various examples of categorical responses used so far, all the response categories have been ordered. In other words, they proceed from lowest to highest, from strongest agreement to strongest disagreement, etc. However, in some instances, survey researchers have developed categories that are unordered. An example of an unordered set of categories is shown in the questions in Figure 9.7, which were designed to be asked after establishing the make and model of one of a household’s vehicles.

Both the questions in Figure 9.7 have unordered response categories. In this case, the choices are probably not very difficult to make, even though the responses are unordered.

23. What is the body type of this vehicle?
Car □  Four-wheel drive □  Utility/van/panel van □  Truck □  Taxi □  Motorcycle □  Other (please specify) _________

24. What type of fuel does this vehicle use?
Petrol □  Diesel □  LPG/LNG □  Dual fuel □  Other (please specify) _________

Figure 9.7 Examples of questions with unordered response categories


However, this is a case in which, especially in an interview, primacy and recency effects, as discussed in Chapter 8, could arise. More difficult situations can arise in questions in which mixed ordered and unordered categories are posed, or when the respondent is asked to rank the categories or is asked to choose among categories that may be confusing to the respondent, and which also may not necessarily meet the requirements of being exhaustive and mutually exclusive. An example of such a question is shown in Figure 9.8, from a fictitious evaluation survey on a university unit of study. Here, the categories represent a mix of ordered and unordered categories.

This question seeks information about both the usefulness and the relevance of the unit of study, but is likely to result in an uninterpretable response. It is probably best rewritten as two questions, one relating to relevance and one to usefulness, as shown in Figure 9.9. This would allow students to provide a much more clearly interpretable evaluation of the unit of study in question. Another possible way of asking the question, still using unordered categories, is shown in Figure 9.10. This provides what is known as a head-to-head comparison of concepts.

Understanding these different structures of questions is very important for writing appropriately worded questions. Using different structures, such as completely open-ended questions, closed-ended questions with ordered response categories, closed-ended questions with unordered response categories, closed-ended questions with head-to-head comparisons, and partially closed-ended questions, can achieve the goals of the survey that is being designed. Examples of all these types of questions are provided by Dillman (2000), among other places.

9.3 Principles for writing questions

There are a number of principles that should be observed in writing questions. As Dillman (2000: 50) points out, ‘The rules, admonitions, and principles for how to word questions … present a mind-boggling array of generally good but often conflicting and confusing directions’. One of the problems that is encountered in following a list of specific rules or admonitions is that they cannot generally be applied to every questionnaire. It is necessary to consider the nature of the population that is being studied, the relationships between questions, and the goals of the study.

7. Which of these five statements best describes this unit of study?
Both useful and relevant to my degree □
Useful, but not necessarily relevant to my degree □
Definitely relevant, but not very useful to my degree □
The most relevant class I have taken □
The most useful class I have taken □

Figure 9.8 An example of mixed ordered and unordered categories


Keeping in mind that there will be exceptions to any rule, and that certain admonitions may not be appropriate in specific situations, the following subsections provide guidance in writing questions that the author has found have served well over more than forty years of designing questionnaires.

9.3.1 Use simple language

In general, the language that is used in a survey should avoid using words that are specialised, or that may not be understood by the majority of respondents. In designing questionnaires in the United States, the author generally advocates using a vocabulary that would be understood by a fifth grade student in school – i.e., a child of about eleven years of age. However, for most adults, it is hard to recall what vocabulary was understood at the age of eleven, and this is in any case liable to change over time, through influences such as television and new technological developments.

7. This unit of study was very useful to my degree. Please indicate how strongly you agree or disagree with this statement.
Agree strongly □  Agree somewhat □  Neither agree nor disagree □  Disagree somewhat □  Disagree strongly □

8. This unit of study was very relevant to my degree. Please indicate how strongly you agree or disagree with this statement.
Agree strongly □  Agree somewhat □  Neither agree nor disagree □  Disagree somewhat □  Disagree strongly □

Figure 9.9 Reformulated question from Figure 9.8

7. Which one of the following do you feel best describes this unit of study?
Very useful to my degree □
Very relevant to my degree □
Both very useful and very relevant to my degree □
Neither very useful nor very relevant to my degree □

Figure 9.10 An unordered alternative to the question in Figure 9.8


A more workable principle is first to look for all multisyllabic words in the questionnaire and consider replacing them with words of one or, at most, two syllables. Sometimes this will also mean replacing one multisyllabic word with two or more shorter words. Some examples that the author encounters frequently are as follows.

• ‘Complete’ replaced by ‘fill out’, as in ‘When you have filled out [completed] this survey, please place it in the envelope and post it back to us’.
• ‘Occupation’ replaced by ‘job’, as in ‘Please tell me what job you do’ in place of ‘Please tell me your occupation’.
• ‘Employment’ replaced by ‘job’, as in ‘Please tell us about your job [employment]’.
• ‘Location’ replaced by ‘place’, as in ‘What was the next place [location] you visited?’.
• ‘Participate’ replaced with ‘take part’, as in ‘Thank you for agreeing to take part [participate] in this survey’.
• ‘Destination’ replaced by ‘place you went’, as in ‘What was the next place you went [destination]?’.
• ‘Occupants of this household’ replaced by ‘people who live here’, as in ‘Please tell me the number of people who live here [occupants of this household]’.
• ‘Questionnaire’ replaced by ‘study’, as in ‘Your answers [responses] to this study [questionnaire] will be kept in strictest confidence’.
• ‘Indicate’ replaced by ‘tell us’, as in ‘Please tell us [indicate] your age’.

At other times, it may be a matter of finding a more usual synonym for a word. For example, it may be better to use the word ‘answer’ in place of the word ‘response’, as in the penultimate bullet point above. Both are two-syllable words, but ‘answer’ is more frequently used in common English. On the other hand, if the survey is directed to specialists in a particular field, it would be inadvisable to replace complex words that are well understood within that profession with simpler words and phrases. To do so might suggest that the survey designer was not knowledgeable of the speciality and possibly lacked understanding of the issues.
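When the draft questionnaire exists electronically, the search for long words described above can be partly mechanised. The short Python helper below is not from the text and relies on a crude vowel-group count to estimate syllables, so it can only flag candidates for the designer to reconsider, not make the decision; the example sentence is invented.

import re

def estimated_syllables(word):
    # Very rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r'[aeiouy]+', word.lower())))

def flag_long_words(question, max_syllables=2):
    """Return words with more than max_syllables (estimated) syllables."""
    words = re.findall(r"[A-Za-z']+", question)
    return [w for w in words if estimated_syllables(w) > max_syllables]

print(flag_long_words('Please indicate your occupation and the location of your employment'))
# prints: ['indicate', 'occupation', 'location', 'employment']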

9.3.2 Number of words

As a general principle, it is better to use fewer words than more in writing a question. However, this may seem to conflict with the previous rule of using simple words in place of complex ones. Indeed, the reason that one often resorts to long words in technical areas is to be able to communicate more efficiently with fewer words. Nevertheless, in question writing, after determining that words that are as simple as possible have been used, the next goal should be to reduce the number of words. The first problem with lengthy questions is that respondents will get tired of listening to or reading a long question, will skip over words and phrases after reaching their tolerance point, and answer the question based on that part of the question to which they really listened, or that they actually read.


The second problem is that the attention paid to words in a long question is uneven, and certain words will be retained in the respondent’s mind. Quite often, the words remembered may actually be unimportant ones, or be ones that produce the wrong emphasis with respect to the purpose of the question.

For example, in a recent survey that was designed to elicit perceptions about willingness to use the car less, the following statement was prepared for respondents to indicate strength of agreement:

I would find it difficult to reduce my car usage because I feel very dependent on my car.

After careful consideration, this statement was reduced to:

I am very dependent on my car.

There is redundancy in the first question. People who are very dependent on their car will obviously find it difficult to reduce their use of it, especially if the alternative is to walk, cycle, or use public transport more. In the same survey, another statement was reduced by changing around the words and tightening up the statement, as in ‘I enjoy driving because I can listen to music at my leisure’, which was replaced by ‘I can listen to music at my leisure when I drive’. In this case, the fact of enjoying the experience was an unnecessary additional idea, which could be taken out of the question without materially altering the information gained from the respondent.

On the other hand, it is also important that questions are asked by using complete sentences, rather than picking out specific words and phrases and using them to ask the question. For example, the author has seen many surveys that pose the question of the respondent’s age by simply writing ‘Age?’, or even ‘Your age?’. This may seem to be appropriate in terms of shortening the questions, but it is not a recommended procedure. As noted earlier in this book, posing the question of gender by simply writing ‘Sex?’ is open to all sorts of misinterpretations. It is really not much longer to write out the questions as ‘Please tell us your age’ or ‘Please tell us your gender’. Not only is the use of a couple of words or a phrase open to misinterpretation, but it may also result in an outright mistake about the word used. Dillman (2000) cites an example of this when a question was posed about the city or town and county in which the person lived, in the state of Idaho in the United States. In the shortened version, the county was asked by just posing the question ‘Your county’. In a significant number of cases, this question was read as ‘Your country’. Instead, Dillman points out that the question should have been posed as ‘In what Idaho county do you live?’. By using a complete sentence and adding the name of the state before the word ‘county’, the likelihood of anyone misreading the word ‘county’ as ‘country’ is almost completely eliminated.

9.3.3 Avoid using vague words

In the area of behaviour surveys, one of the worst offences that is frequently committed is the use of vague words relating to particular behaviours. For example, questions are asked along the lines of ‘What mode of travel do you usually use to get to work?’.


The problem with words such as ‘usually’, ‘generally’, ‘typically’, and ‘normally’, as noted in Chapter 8, is that they are vague and all imply that people are routine in what they do. This is often not the case. For example, in the case of the mode of travel to work, the respondent might drive two days a week, take the bus on two days, and work from home on one day. It is then impossible to answer a question about what this respondent does ‘usually’.

Another way in which vague wording often appears in a survey is in asking about how frequently a certain behaviour occurs. For example, the question shown in Figure 9.11 might be asked. In this case, the respondent has to try to decide what is meant by concepts such as ‘rarely’, ‘occasionally’, and ‘regularly’. Each respondent will be likely to interpret these differently, leading to inconsistent results. As an alternative, the question could be asked as shown in the second option of Figure 9.11. This provides something much more precise, and is not open to different interpretations by different respondents.

However, care must also be taken not to ask for more specificity than the respondent is likely to be able to provide readily. For example, one may be asking a question about the number of nights the respondent has spent in a hotel in the past year. If the respondent is simply asked to specify the number of nights, this is information unlikely to be readily available and will require some thought. There is then a high probability that the respondent will just make a guess.

(a) Original wording
9. How often did you travel to work by public transport in the past year?
Never □  Rarely □  Occasionally □  Regularly □

(b) An alternative wording
9. How often did you travel to work by public transport in the past year?
Not at all □  Fewer than 12 times □  About once per month □  Twice or three times per month □  About once per week □  Twice or three times per week □  More than three times per week □

Figure 9.11 Avoiding vague words in question wording


Instead, the number of nights could be requested in terms of mutually exclusive and exhaustive categories, such as none, one to seven, eight to fourteen, fifteen to thirty, thirty-one to sixty, and more than sixty, or some other appropriate grouping. The usefulness of the suggested categories is that they map into weeks and months fairly readily, which may help the respondent to recall and pick the correct one. The categories could also have been given that way, as in: none, up to one week, more than a week but not more than two weeks, over two weeks but less than one month, one to two months, more than two months.

It is important, in asking questions of this type, to be very clear as to what information is actually required and how it will be used. In such a case as the illustration used in the preceding paragraph, if the information is to be used simply to classify respondents into those who did not stay in a hotel last year, those who stayed few nights, etc., then the categories are all that are required. On the other hand, if the intent is to build a model, or produce tabulations of some type that require exact numbers and the estimation of means and variances, etc., then the categories may not serve the purpose.
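The grouping suggested above can be sketched in code; the function name below is invented, and the break points simply follow the example categories in the text (none, one to seven, eight to fourteen, fifteen to thirty, thirty-one to sixty, more than sixty).

def nights_category(nights):
    """Assign a reported number of hotel nights to a mutually exclusive category."""
    if nights < 0:
        raise ValueError('nights cannot be negative')
    if nights == 0:
        return 'None'
    if nights <= 7:
        return '1-7 nights (up to one week)'
    if nights <= 14:
        return '8-14 nights (one to two weeks)'
    if nights <= 30:
        return '15-30 nights (two weeks to one month)'
    if nights <= 60:
        return '31-60 nights (one to two months)'
    return 'More than 60 nights'

print(nights_category(12))   # prints: 8-14 nights (one to two weeks)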

9.3.4 Avoid using ‘Tick all that apply’ formats

The main problem with the format in which the respondent is offered a list of responses and told to tick all that apply is that it will be especially prone to generate primacy effects. This will occur because respondents will go down the list, ticking as many options as they feel apply, until they feel that they have ticked enough to represent what they consider to be a satisfactory answer. Of course, some respondents will diligently read the entire list first, and then literally tick all those that apply, but many will not. This will lead to inconsistency in the answers and a tendency that items early in the list will still end up with more ticks than items later in the list.

Recent research, as noted by Dillman (2000), suggests that this problem arises more in self-administered surveys, such as those undertaken by post or the internet. On the other hand, in an interviewer-administered survey, there may be more likelihood of a recency effect – i.e., that respondents will be more likely to select the items heard last. However, Dillman (2000) also points out that his own research has shown inconsistencies in this regard, with postal surveys being as likely to generate recency response biases as primacy response biases. Nevertheless, the recommendation remains that avoidance of ‘Tick all that apply’ formats is desirable in order to remove the potential for satisficing behaviour on the part of respondents who decide not to read the entire list, but to stop when they feel they have provided a satisfactory response.

9.3.5 Develop response categories that are mutually exclusive and exhaustive

This issue has been mentioned a number of times in the past chapters. Overlapping categories cause a problem whereby the respondent who falls into the overlap, and who is instructed to choose one response only, does not know which option to choose. For example, in a survey of the amount of time that students study each day, categories were given of less than half an hour, half an hour to one hour, one hour to one and a half hours, one and a half to two hours, two to two and a half hours, and more than two and a half hours.


The problem arises for a student who estimates that he or she studies for exactly one hour per day, or one and a half hours per day, etc. For each of these study periods, there are two alternative categories that appear to include the value. For a student who wishes to be honest, but also wants to give the impression of being more studious, the choice would be likely to be the higher of the two categories, and vice versa. Clearly, this is again providing vague or inconsistent data.

It would probably have been best, in this case, to specify the categories as less than thirty minutes, thirty to fifty-nine minutes, one hour to one hour and twenty-nine minutes, one hour and thirty minutes to one hour and fifty-nine minutes, etc. Now there is no overlap. Note also that the two extreme categories in this case are less than an amount and more than an amount. This provides an exhaustive categorisation. Even the student who studies for ten or twelve hours per day will find a category that fits. Of course, this student may be rather dissatisfied with the categories, because he or she would like to let the survey researcher know what a large amount of time he or she normally spends studying. This is an issue in the use of closed-ended questions of this type.
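Where category boundaries are numeric, the two requirements can even be checked mechanically before the instrument is printed. The following sketch is not from the text; it assumes bands are written as half-open intervals [lower, upper), with None marking the open-ended top band, and simply verifies that adjacent bands meet exactly and that the final band is open-ended.

def check_bands(bands):
    """Raise ValueError if bands overlap, leave gaps, or the top band is closed."""
    for (low1, high1), (low2, high2) in zip(bands, bands[1:]):
        if high1 is None or high1 != low2:
            raise ValueError('gap or overlap between {} and {}'.format(high1, low2))
    if bands[-1][1] is not None:
        raise ValueError('top band should be open-ended to be exhaustive')

# Study time per day, in minutes, following the corrected categories above.
study_bands = [(0, 30), (30, 60), (60, 90), (90, 120), (120, 150), (150, None)]
check_bands(study_bands)                  # passes silently

overlapping = [(0, 60), (30, 90), (90, None)]
try:
    check_bands(overlapping)
except ValueError as error:
    print(error)                          # prints: gap or overlap between 60 and 30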

Another issue of mutual exclusivity arises when responses to a question are inappropriately mixed, resulting in a lack of exhaustiveness and confusion of categories, which then become effectively no longer mutually exclusive. An example is shown in Figure 9.12.

In this case, the categories of response mix up the source and the location, resulting in categories that are not mutually exclusive and also not exhaustive. It is often necessary to include a category of ‘Other (please specify)’ to ensure exhaustiveness. A preferable design of the question in Figure 9.12 is shown in Figure 9.13, which uses alternative wording and breaks the question into two: one about source and one about location.

Mutual exclusivity would appear not to be a problem if respondents are instructed to tick all that apply. However, as noted in the previous section, such responses lead to a variety of problems of their own.

11. From what source did you learn about the train timetable change? (Please tick one only)
Radio □  Television □  The internet □  Someone at work □  While travelling to work □  At home □

Figure 9.12 Example of a failure to achieve mutual exclusivity and exhaustiveness
Source: adapted from Dillman (2000).


In particular, the order in which the categories are presented in ‘Tick all that apply’ questions will generate different responses, as has been shown in various research studies (see, for example, Israel and Taylor, 1990). Such questions will tend to be subject to primacy effects in written surveys and recency effects in interview surveys, as noted in the previous section, much more than is the case with mutually exclusive response categories. Thus, this is a further argument against using ‘Tick all that apply’ responses.

9.3.6 Make sure that questions are technically correct

There is nothing that is likely to undermine confidence in a survey more than technical errors in the survey form. The credibility of the survey designer will be open to considerable question when such errors are present. Technical errors may include such issues as misspelling the name of a person or organisation referred to in the survey, or misidentifying the entity responsible for a particular action or behaviour. Unfortunately, such errors are very easy to make, and it is likely that every survey designer has made such an error at some time or another. Technical errors are most likely to creep into a survey when the designer is designing a survey on a topic about which he or she knows very little – a situation that will arise for most survey designers at some time.

Avoidance of such errors can be achieved through two processes. First, the survey should always be checked over by someone who is an expert in the area of the survey and who has not been involved in the design of the survey. This will often help to find and correct technical errors. Second, as is discussed in Chapter 12, the survey should always be subjected to a pilot survey, during which such errors as remain in the survey are likely to be located.

11. From what source did you learn about the timetable changes?

(Please tick one only)

Radio

Television

The internet

Newspaper

Another person

Other (please specify)

12. Where were you when you learned about the timetable changes?

(Please tick one only)

At work

At home

Travelling to work

Somewhere else

Figure 9.13 Correction to mutual exclusivity and exhaustiveness


9.3.7 Do not ask respondents to say ‘Yes’ in order to say ‘No’

It is quite common in ordinary conversation to use double negatives. As a result, they tend to creep into survey design, and they are, in fact, probably rather common in many surveys, especially those on political issues. This arises in political surveys in particular because voters are often asked to support a measure that will result in something not being done or not happening. Thus, for example, voters may be asked to support a measure to not build a bypass around a particular town centre. This might lead to a survey question for a poll of voters that would be written as shown in Figure 9.14.

Because the measure that voters will see on the ballot is expressed in the negative, survey designers are often reluctant to change the wording for the purposes of a political survey. However, the problem that may arise here is that many people will read the question quickly and will read the question as asking if they favour or oppose having a bypass built around the town centre. They will, therefore, answer the question the opposite of the way they would actually vote. Therefore, it is wiser by far to reword the question, as shown in Figure 9.15.

The advantage of this revised question is that it is much harder to misread, and will generally produce a more accurate response. Another alternative way around this problem is shown in Figure 9.16, which also avoids the double negative. In this latter alternative, the wording preserves the wording of the measure itself, and also gives the response options that are closest to those that would be found in the election itself. It also clearly avoids the double negative of the original question.

9.3.8 Avoid double-barrelled questions

Double-barrelled questions are questions that are seeking a single answer to what is actually a two-part question. An example of such a question would be ‘Do you enjoy driving to work and listening to the radio while driving?’. The respondent may wish

6. Do you favour or oppose not having a bypass built around the town centre?

Favour

Oppose

Figure 9.14 Example of a double negative

6. Do you favour or oppose having a bypass built around the town centre?

Favour

Oppose

Figure 9.15 Example of removal of a double negative


to indicate that he or she enjoys driving to work, but does not listen to the radio while driving. Alternatively, he or she may not enjoy driving to work, but does enjoy listening to the radio while driving. This leaves either of these respondents in a quandary as to how to respond to the question as posed, especially when it is posed with just a ‘Yes’/‘No’ response set. Such questions should be avoided, no matter how much they may appear to cover the specific situation that is the subject of the survey question. Rather, the question should be split into two, so that the respondent can provide an independent response to each part of the question. In this case, the two questions would be ‘Do you enjoy driving to work?’ and ‘While driving to work, do you enjoy listening to the radio?’. Alternatively, the question could have been asked rather differently, with an exhaustive set of response options, as shown in Figure 9.17.

9.4 Conclusion

As was pointed out earlier in the chapter, no set of rules, advice, or admonitions will work or apply in all cases. It is important to consider the specific context of each survey in determining which rules to apply and how to apply them. Some of these rules and items of advice may even be in conflict with one another at various times. Therefore, slavishly abiding by all these rules is not a guarantee of a clearly written and understandable survey, which respondents will find easy to answer. Rather, the careful consideration of these rules and admonitions should lead to a good design, especially as the reader seeks to understand the reasons that lie behind each one.

6. In the coming election, you will be asked to vote on this referendum question: ‘No bypass should be built around the town centre.’ If the election were held today, would you vote for or against this?

For

Against

Figure 9.16 An alternative that keeps the wording of the measure

17. Some people report that they enjoy driving to work and listening to the radio while driving. Which of the following best describes how you feel about driving to work and listening to the radio?

I do not enjoy driving to work, nor do I enjoy listening to the radio while I drive.

I enjoy driving to work, but not with the radio on while I drive.

I enjoy listening to the radio while I drive to work, but do not enjoy driving.

I enjoy both driving to work and listening to the radio while I drive.

Figure 9.17 An alternative way to deal with a double-barrelled question


In written surveys, there are clearly issues about the order in which questions are presented, whether the respondent reads all the words, and whether those words are all understood as the designer had intended. There are also many non-verbal messages provided by the layout, the use of colour and graphics, and other issues that have been dealt with in this and preceding chapters. In interview surveys, there are issues relating to whether or not the interviewer reads all the words, whether the respondent hears all the words, and whether intonations, body language, and other non-verbal communications change the respondent’s perception of the question. Careful attention must be paid to all these aspects. However, it can only be re-emphasised that a poorly written survey will not elicit accurate and complete responses from respondents, no matter what may be done in the way of design, layout, interviewer training, etc.


10 Special issues for qualitative and preference surveys

10.1 Introduction

Chapters 8 and 9 provide guidance on survey instrument design and question wording. However, there are some special issues that arise in qualitative and preference surveys that are dealt with separately in this chapter. These have to do with certain aspects of layout, question design, and question wording. Qualitative surveys or questions are those that address the reporting of people’s feelings, opinions, preferences, attitudes, values, and thoughts, as opposed to those that address the factual reporting of behaviours and situations. In the field of transport research, as in some areas of marketing and other fields, there is also a substantial and important area of work in what is variously called stated preference and stated choice experiments. These are qualitative experiments that are based more or less loosely on an area of psychological enquiry that was originally known as conjoint analysis. While it is not possible for a book such as this to go in depth into the psychological techniques themselves, there is much that can and should be covered in the design of surveys that are used in such fields of enquiry. At the same time, it also needs to be recognised that the dividing line between what represents an issue or question of survey design and what is actually an issue of the underlying psychological theory is often very blurred. As a result, there are issues that are discussed in this chapter that may seem at times to deal with the issues of the theory underlying the questioning.

10.2 Designing qualitative questions

Many of the issues discussed in Chapters 8 and 9 apply to the design of qualitative questions, and, indeed, qualitative questions have been used in some examples relating to the issues in those chapters. It is also appropriate to repeat the warning within Chapter 9 that any set of rules, guidance, or admonitions on survey design will have exceptions, and the survey designer must always consider the specific circumstances of the survey that he or she is designing. It is probably the case that just about every rule will have its exceptions. However, in this chapter, a number of further items of advice are offered with respect to the design of qualitative questions and preference surveys.


10.2.1 Scaling questions

A common form of question in qualitative surveys is the scaling question, in which respondents are asked to read statements, attributes, or other questions and respond by indicating, on a scale provided, the strength of their feelings about it. Such scales are commonly defined as scales of agreement, satisfaction, importance, or preference. An example is shown in Figure 10.1.

Sometimes the scales are presented as in Figure 10.1, as a series of verbal labels. Sometimes they are presented as a numbered scale with only the ends of the scale being labelled. An example of this is shown in Figure 10.2. There is nothing wrong with this second format, except that it provides less specificity about the meaning of the numbers. This falls into the area (discussed in the previous chapter) of vagueness in the response categories. The analyst is less certain of what each person means when he or she responds to this question using the numbers.

Although this may not be a requirement of the underlying psychological theory, in terms of survey design it would generally be better to restrict the number of categories to no more than can be labelled with verbal gradations. The English language is actually quite rich in modifiers, so that one could use such categories as ‘somewhat satisfied’, ‘satisfied’, ‘very satisfied’, ‘extremely satisfied’, and similarly for the negative side of the scale. However, even with the richness of English, it is probably arguable as to whether more than about five-point, or at most seven-point, scales should ever be used.

1. Please indicate how strongly you agree or disagree with the following statement: ‘The teaching in this unit of study helped me to learn effectively.’

□ Strongly disagree
□ Disagree
□ Neither agree nor disagree
□ Agree
□ Strongly agree

Figure 10.1 Example of a qualitative question

1. Please choose a number between 1 and 7, where 1 indicates strong agreement and 7 strong disagreement, to indicate how strongly you agree or disagree with the following statement: ‘The teaching in this unit of study helped me to learn effectively.’

□ 1 Strongly agree
□ 2
□ 3
□ 4
□ 5
□ 6
□ 7 Strongly disagree

Figure 10.2 Example of a qualitative question using number categories


With scaling questions of this type, it is desirable that the number of positive and negative categories should be equal. Thus, the response categories shown in Figure 10.3 should not be used but, rather, those shown in Figure 10.4. The problems with the unbalanced response categories are that they imply, first, that the midpoint or neutral point is biased towards one side of the measurement scale – satisfaction in this case – and, second, that there is only one possible category for a negative response and three for a positive response, which biases the responses yet further. This is a case in which the question layout communicates something different from the language. The result of asking the question with the response options of Figure 10.3 is that there will be overwhelmingly more indications of satisfaction than dissatisfaction, whereas Figure 10.4 allows for an unbiased response.

Another related issue that arises with these questions is whether or not to provide an odd number of response categories. Sometimes attitudinal questions are posed without offering an option for a neutral opinion or no opinion. It can be argued that it is easier for respondents to tick the ‘No opinion’ category than to think about the question and report a true opinion. On the other hand, not providing a neutral response ‘forces’ respondents into either a positive or a negative response, with the result of invalid responses, or respondent frustration and higher rates of termination. Two other issues arise if a neutral response category is included. First, there is the question of where to place the neutral response. There is evidence (Willits and Janota, 1996) that the central position of the neutral response will result in a substantially larger number of people choosing this option; Willits and Janota indicate that the percentage nearly tripled. The alternative that they use is to place the neutral response at the end of the response set, as shown in Figure 10.5.

5. How satisfied were you with the helpfulness of the customer service staff you contacted today?

□ Completely satisfied
□ Mostly satisfied
□ Somewhat satisfied
□ Neither satisfied nor dissatisfied
□ Dissatisfied

Figure 10.3 Example of unbalanced positive and negative categories

5. How satisfied were you with the helpfulness of the customer service staff you contacted today?

□ Completely satisfied
□ Somewhat satisfied
□ Neither satisfied nor dissatisfied
□ Somewhat dissatisfied
□ Completely dissatisfied

Figure 10.4 Example of balanced positive and negative categories


The second issue is that the neutral category can confound two possible situations: the respondent is truly neutral, or the respondent is undecided or has no opinion. In this case, Dillman (2000) suggests that the neutral response should remain in the middle of the range of responses, but the ‘No opinion’ option should be placed at the end, as shown in Figure 10.6. The wording has been chosen carefully to distinguish between the two options, and the ‘No opinion’ option is at the end of the list.

In a face-to-face interview, the options illustrated here can be shown to respondents on a card. In a telephone interview, it will be necessary to read the alternative responses. In this case, it is useful to phrase the question to ask the respondent to stop the interviewer when he or she comes to the response that best describes the respondent’s opinion, and to read the options in the same order as shown in these illustrations. Reading the entire list and then asking the respondent to choose will incur problems of recency and will be overly demanding of the respondent to remember five or six categories.

One might question whether it is preferable to place the answer boxes to the right or the left of the response category in a written or internet survey. In the layouts provided in this chapter, it is probably not important and will not be likely to change the responses given. However, there is a slight preference to place them first, because it is easier to have them line up vertically, and to have the first word of the response category also line up vertically.

As was discussed in Chapter 8, it is preferable for the layout of the responses to be as shown in the examples in this chapter, when the attitudinal responses apply to just

5. How satisfied were you with the helpfulness of the customer service staff you contacted today?

□ Completely satisfied
□ Somewhat satisfied
□ Somewhat dissatisfied
□ Completely dissatisfied
□ Neither satisfied nor dissatisfied

Figure 10.5 Example of placing the neutral option at the end

5. How satisfied were you with the helpfulness of the customer service staff you contacted today?

□ Completely satisfied
□ Somewhat satisfied
□ Neither satisfied nor dissatisfied
□ Somewhat dissatisfied
□ Completely dissatisfied
□ No opinion

Figure 10.6 Example of distinguishing the neutral option from ‘No opinion’


the one question. In other words, the responses should be listed vertically under the question, not horizontally at the side of the question. However, it is frequently the case that the survey designer wishes to ask a series of attitudinal questions, with the same response categories for each question. In this case, layout in a columned design is more efficient. There are, however, some important points that should be made concerning the use of such a layout. First, in a written or internet questionnaire, the column headings should not be rotated, but the text should be horizontal. Second, while numbers can be used for each question, the interpretation of the numbers should be given verbally for each.

Figure 10.7 shows part of a set of questions asked in a nonresponse survey, laid out in the preferred format. This layout could have used a box under each response category and asked the respondent to tick the box, or could have used a circle and asked the respondent to place a cross in the circle, etc.

Figure 10.8 shows an example in which boxes are used and the table format is less obvious. This layout works only for a relatively short list of statements, because otherwise there is too much of a disconnect between the words at the headings of each column and the boxes further down in the list.

Another area of concern with attitudinal questions of this type, especially in written or internet surveys, but also to a lesser extent in interview surveys, is a habitual response. This may occur when all the statements are positive or all are negative, so that a person begins by always agreeing or, conversely, always disagreeing with the statements. After a while the respondent stops concentrating on what is being stated and just automatically goes down the page ticking the same response category, or answering with the same degree of agreement or disagreement to an interviewer. To avoid this rote response, it is advisable to change the sense of the statements from time to time. Figure 10.9 shows a battery of questions relating to use of the car that maintain the same sense in each question. In this example, a person who favoured using his or her

1. You did not complete a survey we sent to you recently. We would like to ask you some questions about that. Please read through the following statements and circle the number that shows how strongly you agree or disagree with each one as a reason that you did not complete the survey.

Reason for not completing the survey | Strongly agree | Agree | Neither agree nor disagree | Disagree | Strongly disagree
a. The survey form was too long | 1 | 2 | 3 | 4 | 5
b. I don’t care about the issues in the survey | 1 | 2 | 3 | 4 | 5
c. You called me at a bad time | 1 | 2 | 3 | 4 | 5
d. I didn’t like the questions asked | 1 | 2 | 3 | 4 | 5
e. I didn’t understand the questions | 1 | 2 | 3 | 4 | 5

Figure 10.7 Use of columned layout for repeated category responses


car would be likely to agree or strongly agree with all the statements, and would then continue down the page answering the same to every statement, but probably taking less and less time to think about the statements. Similarly, someone who was against car use would disagree or strongly disagree with each statement, and would also start responding by rote.

A better design is to change the sense of statements in a haphazard manner, so that respondents must think about each statement to be sure how to answer. Figure 10.10

1. You recently did not complete a survey we sent to you. We would like to ask you some questions about that. Please read through the following statements and tick the box that shows how strongly you agree or disagree with each one as a reason that you did not complete the survey.

Reason for not completing the survey | Strongly agree | Agree | Neither agree nor disagree | Disagree | Strongly disagree
(a) The survey form was too long | □ | □ | □ | □ | □
(b) I don’t care about the issues in the survey | □ | □ | □ | □ | □
(c) You called me at a bad time | □ | □ | □ | □ | □
(d) I didn’t like the questions asked | □ | □ | □ | □ | □
(e) I didn’t understand the questions | □ | □ | □ | □ | □

Figure 10.8 Alternative layout for repeated category responses

6. Please tell me how strongly you agree or disagree with the following statements by ticking the box that shows the strength of your agreement or disagreement.

Strongly agree | Agree | Neither agree nor disagree | Disagree | Strongly disagree
(a) It is useless to reduce my car use if other people don’t do the same | □ | □ | □ | □ | □
(b) Reducing my car use would not be better for my health | □ | □ | □ | □ | □
(c) Driving my car allows me to be independent | □ | □ | □ | □ | □
(d) Reducing my car use would not save me money | □ | □ | □ | □ | □
(e) Using a car is more acceptable than using other forms of transport | □ | □ | □ | □ | □
(f) Reducing my car use would not save me time | □ | □ | □ | □ | □

Figure 10.9 Statements that call for similar responses


shows how this might have been done with the statements offered in Figure 10.9. In this case, only statements (b) and (d) were changed from pro-car statements to anti-car statements. However, the effect is to require the respondent to pay attention to each statement and indicate agreement or disagreement after some thought.

In all the examples shown in this chapter, it should be noted that both sides of the attitude scales have been used in the main part of the question. It is important that this is always done in questions of this type. If the questions were asked in the form ‘Please indicate how strongly you agree with the following statements…’, even with the provision of both ‘Agree’ and ‘Disagree’ response options the question will tend to lead respondents to agreement. Psychologically, such a question is suggesting to the respondent that agreement is actually the social norm. Therefore, a respondent who wishes to answer according to what might be considered socially acceptable will tend to opt for the ‘Agree’ responses, regardless of his or her true feelings.

A shortcoming of the ‘Agree’/‘Disagree’ questions, and, to a lesser extent, even of the satisfaction questions, is that there are potential cultural problems with disagreement. In some societies, people are taught that disagreement with authority is inappropriate. For people from such societies it may not be possible for them to answer ‘Agree’/‘Disagree’ questions truthfully, nor may it be possible for them to indicate dissatisfaction with specific matters. In such cases, it may be necessary to ask the questions in a different way. For example, looking at question (a) in Figure 10.7, it could be reworked into the question shown in Figure 10.11.

This is still a closed-ended question with ordered categories, but it now removes the necessity to indicate agreement or disagreement. Instead, it focuses on the attribute of length in connection with the survey form. In present-day mixed cultures, this may be a more productive way to ask such qualitative questions, thereby eliminating the potential for cultural biases to creep into the responses.

6. Please tell me how strongly you agree or disagree with the following statements by ticking the box that shows the strength of your agreement or disagreement.

Strongly agree | Agree | Neither agree nor disagree | Disagree | Strongly disagree
(a) It is useless to reduce my car use if other people don’t do the same | □ | □ | □ | □ | □
(b) Reducing my car use would be better for my health | □ | □ | □ | □ | □
(c) Driving my car allows me to be independent | □ | □ | □ | □ | □
(d) Reducing my car use would save me money | □ | □ | □ | □ | □
(e) Using a car is more acceptable than using other forms of transport | □ | □ | □ | □ | □
(f) Reducing my car use would not save me time | □ | □ | □ | □ | □

Figure 10.10 Statements that call for varying responses


10.3 Stated response questions

Stated response questions (sometimes also referred to as ‘stated preference’ or ‘stated choice’) are questions that pose hypothetical alternatives to respondents and ask them to indicate which of the alternatives is preferred or which behaviour they believe they would undertake (Louviere, Hensher, and Swait, 2000). The technique has been developed extensively in the transport arena, and also, to a somewhat lesser extent, in the areas of marketing and the microeconomics of consumer behaviour. Other application areas have also adopted the technique in recent years. It is an outgrowth of conjoint analysis, to which, for purposes of survey design, it remains closely related. It is not the intent of this book to deal extensively with the theory and practice of stated response techniques. There are excellent resources available that do a far better job than could be done within these pages (examples include Lee-Gosselin, 2003; and Louviere, Hensher, and Swait, 2000). Rather, it is the intent here to address a few issues that relate directly to the design and layout of questions that are posed in such surveys.

10.3.1 The hypothetical situation

The first issue that should be of concern in stated response surveys relates to the nature of the hypothetical situation or choices provided to the respondent. It is important, for the reliability of the response, that hypothetical situations that are presented to respondents should not be too far from current experiences. The more extreme the hypothetical situations are, the less reliable will be the responses. For example, if one were questioning airline passengers for the route between Sydney and Los Angeles, for which current flying times are on the order of twelve to fourteen hours, and offered a hypothetical alternative in which the flight took thirty minutes, this would represent something too extreme to produce rational and reliable responses. Indeed, it has been advocated in many cases that the hypothetical alternatives offered in a stated response experiment should be pivoted off actual experiences (Stopher, 1997) and should not vary from those experiences by too large an amount. Specifically, it is recommended that attribute values should generally not differ from current experience by more than about 40 per cent. Thus, in the case of the Sydney to Los Angeles air travel scenario discussed above, the greatest decrease that should probably be considered would be to a flying time of about seven and a half hours.
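To make the 40 per cent guideline concrete, a minimal sketch in Python is given below. The function name, the particular variation percentages, and the rounding are illustrative assumptions; the only elements taken from the text are the idea of pivoting off the respondent’s reported value and the cap of roughly 40 per cent either side of it.

```python
def pivoted_levels(reported_value, variations=(-0.4, -0.2, 0.0, 0.2, 0.4)):
    """Generate hypothetical attribute levels pivoted off a respondent's
    reported value, keeping every level within about +/-40 per cent of
    current experience."""
    capped = [max(min(v, 0.4), -0.4) for v in variations]  # enforce the 40 per cent cap
    return [round(reported_value * (1.0 + v), 2) for v in capped]

# Example: a reported Sydney-Los Angeles flying time of 12.5 hours gives
# levels of 7.5, 10.0, 12.5, 15.0 and 17.5 hours; the largest decrease
# offered (7.5 hours) matches the 'about seven and a half hours' above.
print(pivoted_levels(12.5))
```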

1. Thinking about the survey that we sent you, do you think that the survey form was too long, about right, or too short?

Too long
About right
Too short

Figure 10.11 Rephrasing questions to remove requirement for ‘Agree’/‘Disagree’


10.3.2 Determining attribute levels

A related issue to this is how to determine appropriate attribute levels in a stated response experiment. For behavioural reasons, the best designs appear to be ones that are pivoted from the actual current knowledge of the sampled respondents (Hensher, 2004; Hensher and Greene, 2003; Stopher, 1998). This is an outgrowth from supporting theories in economics and behavioural and cognitive psychology, such as those expounded by Hensher (2004), Starmer (2000) and others. Indeed, Starmer states (2000: 353):

While some economists might be tempted to think that questions about how reference points are determined sound more like psychological than economic issues, recent research is showing that understanding the role of reference points may be an important step in explaining real economic behaviour in the field.

As a result, it is recommended that stated response designs are developed by first ascertaining the current experience of each respondent and then selecting attribute levels that relate to the experience of the respondent. This has various modelling implications, which cannot be explored in this book. The reader is referred to other sources for further understanding of these issues (such as Bliemer and Rose, 2005; and Rose et al., 2008).

In general, this means that the stated response survey must be preceded by a revealed choice survey, in which the respondents indicate what choices they currently make in the relevant situation, and the stated response survey is then designed around the results of the revealed choice survey. This could be accomplished in three ways, at least. In one approach, the revealed response survey would precede the stated response survey by days or weeks, and a customised stated response survey would be designed for each respondent in the intervening period, using the information provided in the revealed choice survey. In the second approach, the revealed choice and stated response surveys can be combined, but the revealed choice questions are asked first and the respondent is then asked about percentage or proportionate responses around the revealed choice situation just provided. This may be less desirable, in that many people may find it difficult to think in terms of percentage differences.

The third option gets around this problem by using a computer-assisted method to conduct the survey, in which the computer calculates the attribute values as percentages of the revealed choice situation as the survey proceeds. Because there may be issues relating to recall of the situation after a lapse of time, or difficulty in gaining the cooperation of respondents on a second contact, as is required in the first of these options, this author recommends the third option as being the best. This option is also readily handled in an internet survey, when the calculations can again be performed in the background as the survey proceeds.

10.3.3 Number of choice alternatives or scenarios

There has been considerable discussion in the literature about the number of choice situations that each respondent can be asked to handle. Some researchers in this area


have maintained that no more than perhaps eight or twelve choice situations should be presented to one respondent, with some even suggesting that four alternatives are the most that should be presented. However, Stopher and Hensher (2000) find no evidence that there were differences in the ability of respondents to handle the choice experiment with up to thirty-two choice situations provided to respondents. In their experiment, Stopher and Hensher found little difference in the results from four up to thirty-two choice situations or profiles offered to respondents, with the number of attributes being used to describe the choice situations kept constant throughout. Clearly, an unreasonably large number of choice situations will result in information overload. On the other hand, the complexity of the experiment is not a function of the number of alternatives presented to the respondent alone, but also of the number of attributes used to describe each alternative. Thus, one cannot define some absolute maximum number of choice alternatives that can be presented to respondents, as has sometimes been done in past designs, or stated as a limitation on the design of stated response experiments. It does appear that people can handle more complex choice situations than has often been assumed to be the case. Of course, as experimental designs are made more complex, people will use different coping strategies to handle the complexity (Hensher, 2006). Although designers of these types of surveys must be cognisant of the respondent burden aspects of their designs, it is not possible to state any hard rules about numbers of choice alternatives, or, indeed, numbers of attributes and attribute levels to be used. Rather, the designer needs to consider complexity as an issue, and take care in making a decision about the level of complexity to employ.

10.3.4 Other issues of concern

There are three issues of concern in stated response surveys that cannot be addressed as specific design issues, but are nevertheless of concern in the design process and in reviewing the results of, say, a pilot or a full survey (the issue of pilot surveys is discussed more fully in Chapter 12). However, avoidance of these issues is often a matter of design, so they are relevant to this discussion. The specific issues are data inconsistency, lexicographic responses, and random responses.

Data inconsistency

Norheim (2003) identifies several potential forms of inconsistency. The two that are perhaps of the greatest concern are situations in which a respondent chooses the ‘worst’ alternative or fails to choose the ‘best’ alternative, and situations in which a respondent chooses an outlier alternative.

The worst alternative in this case may be defined as being an alternative for which all attribute levels are less favourable than for any of the offered alternatives that were not chosen. An example in transport could be the choice of a means of travel to go to work, and the respondent chooses the one that costs most, takes the longest time, is the least convenient, and is also the least comfortable. Likewise, the best alternative is the one that offers the highest or best levels of each attribute used to describe it.


Inconsistencies of this type could be taken as indicators that the respondent has not understood the choice task.

The situation of a respondent choosing outliers arises when dominant alternatives – i.e., worst or best – are not included in the design. Norheim (2003) suggests that, among the reasons for not including such alternatives, are the facts that they will generally not reveal any new information about the respondent’s choices and that they may be regarded by respondents as a waste of time, and therefore may result in respondents not taking the choice task seriously. If such dominant alternatives are not included, then searching for this form of data inconsistency requires searching for outliers, which are choices made that would seem to have very low probabilities associated with them. Again, if a respondent chooses these as preferred alternatives, it would tend to indicate failure to understand the choice task.
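A check for the first form of inconsistency is straightforward to automate once the choice data are in machine-readable form. The sketch below flags a choice task in which the respondent selected an alternative that is dominated by another offered alternative, i.e., no more favourable on any attribute and strictly less favourable on at least one. The data layout, the function name, and the example values are illustrative assumptions, not part of any particular software package.

```python
def chose_dominated(alternatives, chosen_index, higher_is_better):
    """Return True if the chosen alternative is dominated by another offered
    alternative: no more favourable on any attribute and strictly less
    favourable on at least one.

    alternatives     : list of dicts mapping attribute name -> level
    chosen_index     : index of the alternative the respondent selected
    higher_is_better : dict mapping attribute name -> True/False
    """
    chosen = alternatives[chosen_index]

    def more_favourable(a, b, attr):
        # Is level a strictly more favourable than level b for this attribute?
        return a > b if higher_is_better[attr] else a < b

    for i, other in enumerate(alternatives):
        if i == chosen_index:
            continue
        never_better = all(not more_favourable(chosen[k], other[k], k) for k in chosen)
        sometimes_worse = any(more_favourable(other[k], chosen[k], k) for k in chosen)
        if never_better and sometimes_worse:
            return True
    return False

# Example: a mode that costs more and takes longer than the other offered mode,
# where lower cost and lower time are better, is a dominated choice.
alts = [{"cost": 5.0, "time": 40}, {"cost": 3.5, "time": 30}]
print(chose_dominated(alts, 0, {"cost": False, "time": False}))  # True
```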

Lexicographic responses

Lexicographic responses are defined in this context as responses in which respondents appear to choose on the basis of only one attribute and do not make trade-offs between the attributes (van der Reis, Lombard, and Kriel, 2004). Two possible reasons can be put forward for such a response pattern. First, Norheim (2003) suggests that this might result from the survey being too complicated or having an unbalanced design. The respondent uses this mechanism to simplify the task (Sammer, 2003). In this case, obviously, the presence of lexicographic responses should be taken as an indicator that the survey task needs to be simplified. Second, it is possible that a lexicographic response results from a situation in which the respondent feels that none of the offered alternatives are acceptable, but is not given an option to indicate this (Ortúzar and Lee-Gosselin, 2003). Either this calls for adding the option ‘None of the above’ to the choice options, or it requires that the alternatives are rethought, so that this situation no longer arises. An example of a lexicographic response might arise in a survey to determine choices among hypothetical designs of a public transport vehicle, in which the only attribute that the respondent uses is that of personal safety, ignoring such attributes as cost, travel time, vehicle and ride comfort, etc. Van der Reis, Lombard and Kriel (2004) report that they found a group of respondents who, in follow-up surveys, said that they considered only price in choosing between alternatives. However, they note that there was anecdotal evidence that most of the respondents who provided lexicographic responses either hurried through the survey task, seemingly giving little thought to the choice process, or appeared confused at the task and, therefore, presumably simplified it by concentrating on just one attribute.
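One simple screen for lexicographic responding is to ask, for each attribute, whether the chosen alternative was strictly best on that attribute in every choice task the respondent completed. The sketch below does this; the data layout and names are illustrative assumptions, and a non-empty result should be treated only as a flag for closer review, since a respondent may legitimately place an extreme value on one attribute.

```python
def lexicographic_attributes(tasks, choices, higher_is_better):
    """Return the set of attributes on which the chosen alternative was
    strictly best in every choice task completed by one respondent.

    tasks            : list of choice tasks, each a list of dicts (attribute -> level)
    choices          : list of chosen indices, one per task
    higher_is_better : dict mapping attribute name -> True/False
    """
    candidates = set(higher_is_better)
    for alternatives, chosen_index in zip(tasks, choices):
        for attr in list(candidates):
            chosen_level = alternatives[chosen_index][attr]
            others = [alt[attr] for i, alt in enumerate(alternatives) if i != chosen_index]
            if higher_is_better[attr]:
                best = all(chosen_level > level for level in others)
            else:
                best = all(chosen_level < level for level in others)
            if not best:
                candidates.discard(attr)
    return candidates

# Example: a respondent who always picked the cheapest alternative.
tasks = [[{"cost": 2.0, "time": 50}, {"cost": 3.0, "time": 35}],
         [{"cost": 4.0, "time": 20}, {"cost": 2.5, "time": 45}]]
print(lexicographic_attributes(tasks, [0, 1], {"cost": False, "time": False}))  # {'cost'}
```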

Random responses

Norheim (2003) describes this as the situation in which respondents seem to be focused only on making any choice, simply in order to get the survey task over. The result will be choices that show no apparent rhyme or reason, or a situation in which the respondent appears to have made choices that are inconsistent with one another. Several possibilities are put forward as a reason for random responses. Norheim (2003) suggests


fatigue as a potential cause of such responses, arising either because the interview is too long, or because it is too complicated. As a result of either of these situations, the respondent loses concentration and then focuses simply on getting through the task of responding to all the questions. This can be tested for by splitting survey responses into those given early in the survey and those given late in the survey, to determine if there appear to be consistent responses early on that subsequently become inconsistent. However, random responses may arise for other reasons, such as a respondent who either attaches little importance to the survey, or has too little time to complete it (Ortúzar and Lee-Gosselin, 2003), while another possibility is that the survey may have triggered cultural sensitivities (Morris and Adler, 2003), resulting in such a response pattern. If random responses appear to arise because the survey is too long, this can be corrected by shortening or simplifying the survey task. If they appear to arise because of too little importance being attached to the survey, or too little time being available to the respondent, then the recruitment process may need revision. If it is a result of cultural sensitivity, then the survey may need to be changed, or the sensitivities addressed through design changes.
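The split-half check described above can be sketched very simply: compute some per-task inconsistency flag (for instance, whether a dominated alternative was chosen, as in the earlier sketch) and compare its rate between the early and late halves of the respondent’s tasks. The function and the example flags below are illustrative assumptions.

```python
def split_half_inconsistency(flags):
    """Split a respondent's per-task inconsistency flags (True = the answer
    looks inconsistent, for example a dominated alternative was chosen)
    into the tasks answered early and those answered late, and return the
    inconsistency rate for each half."""
    def rate(xs):
        return sum(xs) / len(xs) if xs else 0.0

    half = len(flags) // 2
    return rate(flags[:half]), rate(flags[half:])

# Example: inconsistencies concentrated in the later tasks, which might point
# to fatigue rather than a general failure to understand the choice task.
print(split_half_inconsistency([False, False, False, False, True, True, False, True]))
# (0.0, 0.75)
```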

10.4 Some concluding comments on stated response survey design

Although stated response techniques have been in use for some significant time, this is still a field that is comparatively new in survey design, and there are many issues that have yet to be explored. In this chapter, the author has made no attempt to be exhaustive in treating this topic but, rather, has sought to raise a few concerns and issues that may help to lead to better design of instruments. In the coming years it is likely that much more experience will be gained, new designs developed, and significant issues dealt with that surround the design of such surveys.

In the meantime, those who engage in designing these types of surveys should rely on some of the standard mechanisms that can be used to improve designs, such as the use of focus groups, pilot surveys and pretests, and adherence to generally good design principles, such as those discussed in this and the preceding chapters on instrument and question designs. Issues relating to the detailed design of a stated choice experiment are best covered in other texts, not in a broad-based text on survey design.


11 Design of data collection procedures

11.1 Introduction

Chapter 6 deals with the alternative methods for conducting surveys of human populations, such as by post, internet, telephone, or in person (face-to-face). Chapter 8 deals with the design of survey instruments, and Chapters 9 and 10 with wording, question design, and layout issues. In this chapter, a number of issues relating to the overall design of data collection procedures are dealt with that have not been the subject of these earlier chapters. These issues include such items as the number and type of contacts, making initial contacts, who should respond to the survey, what constitutes a complete response, an introduction to item and unit nonresponse issues, sample replacement, incentives, and respondent burden.

11.2 Contacting respondents

In most sample surveys, it is necessary to contact some sample elements more than once. At the outset of the planning of the survey, it is necessary and appropriate to plan the number, type, and schedule of contacts. In many surveys of households and individuals, there may be a pre-notification contact, a recruitment contact, an interview contact, and a retrieval contact. In addition, there may be various reminder contacts that are also necessary. The actual contacts required will depend on the method by which the survey is being done. In the following subsections, some of the different options are discussed.

11.2.1 Pre-notification contacts

It is well worth considering, for most surveys, designing and sending out a pre-notification letter. Pre-notification or advance letters have been found by several researchers to improve response. Dillman (1978) points out that such letters provide potential survey respondents with evidence that the survey is legitimate and is not a sales device of some type. It is also likely that such letters reduce the anxiety that can be caused by a cold contact.


The author suggests that the pre-notification letter should be printed on the official stationery of the sponsoring agency, or on a letterhead that is created specifically for the survey. The letter should outline briefly the purposes of the study, its importance, and how the results from the study will be used. It should indicate why the person to whom the letter is sent is important to the study and how he or she or the household was selected. It should indicate the confidentiality of the information that the prospective respondent will be asked to provide and should provide a toll-free telephone number and other contact details that the person can use if he or she has questions about the study or the letter. If an incentive is to be offered, it is usually a good idea to indicate what that incentive is and when the respondent can expect to receive it. The letter should be signed by someone who is high-ranking in the organisation whose letterhead is being used. It is preferable if the signatory is a well-known and respected public figure. However, one should be warned that a well-known public figure who is highly controversial or is not well respected will be more harmful than useful for this purpose.

The letter should be written in simple, declarative language. In English, it is advisable to avoid multisyllabic words, and words that may not be readily understood by people who have only a few years of formal education. It is also best to use a number of short paragraphs. A letter that is written as one or two lengthy paragraphs will appear daunting to read and, as a result, will probably not be read.

Some researchers have suggested that pre-notification letters may have the negative effect of increasing refusals, because potential respondents are forewarned of the study. Collins et al. (1988), in their study, report that a number of refusals occurred in the telephone interview as soon as the advance letter was mentioned and referred to the letter specifically. On the other hand, it is likely that these were hard-core refusers who would not have been recruited under any circumstances, in which case the advance letter may have provided an opportunity for them to prepare their response, but may not have increased refusal rates.

Finally, the pre-notification letter should be sent out in a white envelope, using either direct address printing onto the envelope or handwritten addresses. The envelope should have the return address of the agency clearly indicated on it, matching what appears on the letterhead inside. It is a good idea to use stamps rather than some form of bulk mailing, because this again affirms to the recipient the importance of this study and the importance of the recipient’s participation in the study. Commemorative stamps are even better, because they add an appeal to the envelope. An envelope of an unusual size is also a good idea. Rather than using an envelope of the size generally used for bills and other less welcome material, a larger envelope will usually get the attention of the recipient.

Probably the only types of survey in which a pre-notification letter is not appropriate are a postal survey and, maybe, an internet survey, although the latter is not a clear case. However, a word of caution is necessary. It is inadvisable to send out massive mailings of pre-notification letters when they result either in many people receiving letters who are not subsequently approached to participate in the study, or result in a


long period of time elapsing from the receipt of the letter to contact being made about the survey. As Groves and Lyberg (1988) point out, the impact of a pre-notification letter can be negated if the recipient cannot remember having seen the letter when approached to participate in the study. Ideally, pre-notification letters should be sent out only a few days to a week or so before contact is expected to be made. Groves and Lyberg suggest that about 75 per cent of those sent a pre-notification letter will recall having received it, and 75 per cent of those will recall actually reading the letter. The fact that over 55 per cent will recall both receiving and reading the letter means that it has potential to affect response rates significantly.

Groves and Lyberg (1988) also suggest that there are two potential effects of the letter. For the respondent, the letter provides legitimacy to the later contact by the interviewer, and may reduce his or her concerns about the survey. For the interviewer, they suggest that it may increase his or her confidence in making a cold contact, although an experiment they ran failed to provide support to this. Traugott, Groves, and Lepkowski (1987) indicate that the pre-notification letter in cold-contact telephone surveys showed indications of an increase in response rate of 5 to 13 percentage points.

11.2.2 Number and type of contacts

The number and the type of contacts required in a survey will depend on the method that is being used to undertake the survey. Usually, contacts are made to recruit the respondent, to retrieve information or pose the questions, and to provide reminders, as appropriate. Beyond the pre-notification letter that is discussed in the preceding subsection, this subsection considers the number and type of contacts that should be made following the pre-notification letter, to provide the survey, and retrieve the responses. However, before examining the number and type of contacts, it is appropriate to consider the nature of reminder contacts, and these comments apply by and large to all types of surveys.

Nature of reminder contacts

As suggested by Dillman (2000), a successful interviewer in a face-to-face interview is the ideal on which to model the contact process for all surveys. In a face-to-face interview situation, a good interviewer has the opportunity to persuade a respondent to participate, and is able to do so by being responsive to voiced concerns, to body language, and to other forms of communication. Further, the good interviewer is able to tailor the amount of persuasion required by reading the situation. In this process, the amount of persuasion used may be very little, or may be considerable, depending on what is called for in the specific situation.

As far as possible, follow-up procedures in all other types of survey are designed to emulate as much of this process as it is possible and reasonable to do. Of course, one has to bear in mind that, in most cases, the specific concerns of an individual respondent cannot be known, because of a lack of direct contact, or because it is


much easier for a respondent, for example on the telephone, to break off the contact before allowing the interviewer to determine the nature of his or her objections to the survey. It must also be kept in mind that many failures to respond initially may have little or nothing to do with the survey itself – the instrument, the approach, or the subject of the survey – but may have everything to do with human nature. By this is meant the possibility that a person receives the survey in a postal or internet survey, notes that it will take some investment of time to make a response, and puts it on one side, with every intention of getting back to it in due course, when time is more available than at the time of receipt. However, as days pass by, the survey is given an increasingly low priority, until it is eventually forgotten completely, lost, or even thrown away.

To counteract this very human response, most surveys need to employ a sequence of reminders. These reminders need to be distinguishable from one another, need to some extent to increase in persuasiveness, but also need to remain on the side of ‘gentle’ reminders, so as to not offend or actually turn away potential respondents. Because there is no mechanism of feedback from the potential respondent to the survey team, all potential respondents have to receive the identical reminders. As can be seen readily, this is different from the face-to-face situation, in which the good interviewer can often deduce the level of persuasion to apply in each specific situation.

Along with Dillman (2000), this author recommends the use of a ‘reserved’ approach. Great care should be taken with any escalation of the intensity of the requests for cooperation, and reminders must remain consistent both with good survey ethics, as discussed earlier in this book, and with principles of good business practice. As will be noted in what follows, a more frequent reminder process and one with more steps than is outlined by Dillman (2000) is recommended here. It is also prudent to use more than one method to send the reminders, such as post, telephone, or e-mail (when appropriate), and to alternate methods through the reminder process, as a method of getting the attention of those who have simply forgotten or not been able to find the time to respond so far. Wording is very important in the reminders, especially those sent by post or e-mail. It must always be borne in mind that, by the time the potential respondent receives each reminder, there is every possibility that the survey has already been completed and sent back. Therefore, the opening sentences of the reminder should first recall the fact that a survey was sent, and then thank the respondent if he or she has, in fact, already completed it and returned or submitted it. The next sentences in the reminder should indicate why it is important for the respondent to complete and return the survey. Details of a toll-free or local phone number and an internet site (if possible) where the person can ask questions, or obtain answers, or even request another copy of the survey, should then be provided. An example of a postcard reminder that the author designed is shown in Figure 11.1.

In the next subsections, the tailoring of the contact and reminder procedure to different survey methods is discussed. This is not intended to be exhaustive, but is intended to provide sufficient examples to show how the process can be adapted to different survey methods.


Postal surveys

In postal surveys, it will usually be the case that the only contact information available for the potential survey participants is the postal address. In this case, the probable contacts will be an initial mailing of the survey materials to the addresses that have been selected for the sample; reminder postcards that are sent at intervals to attempt to obtain compliance with the survey request; and a possible re-mailing of the original survey materials, in case the recipient threw away or lost the original mailing. It is important to tread a careful line between neglecting the prospective respondents and harassing them. It is suggested that, for postal surveys, the first reminder, in the form of a postcard, be sent out about four days after the initial mailing of the survey. If surveys have already been received back, then efforts should be made to send the postcard only to those recipients of the survey from whom a response has not been obtained as yet. However, the wording of the reminder should always state upfront that the recipient should disregard the reminder if he or she has already returned the survey, and should thank him or her for doing so. This same preamble to the reminder should appear on each reminder, in fact.

A second reminder should be sent about a week later, and a third one about one week after that. If a fourth reminder is considered necessary, it should be a complete re-mailing of the original survey package. At this point, at three and a half weeks since the original mailing, it is highly probable that those recipients who have still not completed the survey will have lost or destroyed it. Whether or not further reminders should be made at this point will depend on the productivity of

The University of Sydney

Dear <NAME> ,

Two weeks ago your household agreed to take part in the ‘Your travel counts’ travel diary study. If you have already posted your diaries, thank you and please ignore this reminder.

If you have already filled in the diaries, please return these to us by placing them in the self-addressed stamped envelope and dropping it into the nearest post box. If you have not yet filled in the diaries, please do so starting tomorrow.

Please return the filled-in diaries as soon as possible. Your responses are important, even if you do not travel much. If you have any questions, please contact Natalie on (02) 9351 0070. Thank you for your help.

Natalie Lauer
Research Analyst
Institute of Transport Studies
The University of Sydney

Figure 11.1 Example of a postcard reminder for the first reminder


the reminders already implemented. A good rule is always to discontinue reminder efforts once the productivity falls to less than 1 per cent increases in the response rate.

Postal surveys with telephone recruitment

In the instance that the survey is a postal survey with telephone recruitment, then telephone numbers of prospective respondents will be known. The initial contact for such a survey (after any pre-notification letter) will be the recruitment telephone call. Following a successful recruitment, the next contact will be the mailing of the survey to respondents who provided an address and agreed to take part in the survey. Following this, there will be a need to send reminders to many respondents, because of the normal human response to put off doing the task of completing the survey. (The issue of the number of attempts for telephone recruitment is addressed in the next subsection, on telephone surveys.) With respect to reminders for the postal survey, in this case it becomes possible to alternate telephone and postal reminders, which is probably the optimal reminder pattern. The author has found that this type of survey is best conducted with the following pattern of contacts:

(1) pre-notification letter;
(2) telephone recruitment;
(3) posting of survey materials;
(4) telephone confirmation about three days later that postal materials have arrived, and to answer any questions about what is to be done;
(5) postcard reminder about four days after the telephone confirmation call;
(6) telephone reminder one week after the telephone confirmation call;
(7) postcard reminder one week after the first postcard reminder;
(8) repeat reminders 6 and 7 for a further week;
(9) telephone reminder three weeks after the confirmation call; and
(10) re-mailing of the survey package four days later.
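For fieldwork management, a contact pattern of this kind can be held as simple data so that each day’s tasks can be generated automatically. The sketch below encodes the list above, assuming that day 0 is the recruitment call and that the survey materials are posted the following day; the exact day offsets are one illustrative reading of the intervals given, not prescriptions from the text.

```python
# Day offsets are counted from the recruitment call (day 0); the pre-notification
# letter is assumed to go out about a week before recruitment.
CONTACT_SCHEDULE = [
    (-7, "post",  "pre-notification letter"),
    (0,  "phone", "recruitment call"),
    (1,  "post",  "survey materials"),
    (4,  "phone", "confirmation that materials arrived; answer questions"),
    (8,  "post",  "postcard reminder 1"),
    (11, "phone", "telephone reminder 1"),
    (15, "post",  "postcard reminder 2"),
    (18, "phone", "telephone reminder 2"),
    (22, "post",  "postcard reminder 3"),
    (25, "phone", "telephone reminder 3"),
    (29, "post",  "re-mailing of the survey package"),
]

def contacts_due(day):
    """Return the contact actions scheduled for a given day offset."""
    return [(channel, action) for d, channel, action in CONTACT_SCHEDULE if d == day]

print(contacts_due(8))  # [('post', 'postcard reminder 1')]
```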

Reminders 4 to 9 can then be repeated after the re-mailing of the package. In addition, a final reminder may be sent using certified or priority mail. However, usually by about the fourth week, if responses have not been obtained, it is very unlikely that further reminders will be productive. Although there is no study that the author is aware of that documents this, anecdotally the response appears to increase by about one-half of the response to each prior contact. Dillman (2000) suggests that response rates without any reminders will be about 20 to 40 percentage points lower than those that would be attained with a reminder procedure. This is more or less consistent with the suggestion here. Thus, if the initial response from the mailing of the survey packages were to be 20 per cent of respondents, then the second reminder (the postcard) will usually produce a response of about another 10 per cent. The next telephone call will produce another 5 per cent or so, and the next postcard about 2.5 per cent. This is likely to continue through the entire reminder process. Because one does not usually ask people to which contact they responded, and because there is a lag time that is not

Page 244: Collecting managing and assessing data using sample surveys

Contacting respondents 217

able to be fully determined between when a person fills out the survey and when it is received by the survey agency, it is probably impossible ever to be sure of the response created by each reminder. However, the pattern indicated here is a good guide to expect in terms of the productivity of reminders.
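
As an illustration only, the following short sketch (in Python, not drawn from any of the studies cited) shows how the halving pattern described above translates into a cumulative response rate; the 20 per cent starting figure and the constant halving are simply the assumptions of the example.

```python
# Illustrative sketch only: cumulative response when each reminder yields
# roughly half the response of the prior contact (assumed decay pattern).
def cumulative_response(initial_rate=0.20, n_reminders=4, decay=0.5):
    """Return cumulative response rates after the initial contact and each reminder."""
    rates = [initial_rate]
    increment = initial_rate
    for _ in range(n_reminders):
        increment *= decay                    # each reminder adds about half the previous gain
        rates.append(rates[-1] + increment)
    return rates

# An initial mail-out yielding 20 per cent, followed by four reminders, gives
# cumulative responses of roughly 20, 30, 35, 37.5 and 38.75 per cent.
print([round(100 * r, 2) for r in cumulative_response()])
```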

Two possibilities exist for when to stop the reminders. The first is when the responses to the reminders have dwindled to the point that it is not considered worthwhile to continue them. The second is when the desired number of responses has been received. If there is a need to reach a certain minimum target value, then additional reminders may be warranted, even when their productivity is low, if the survey still remains below the target. However, it must be cautioned that the use of too many reminders will be considered by some people to be a form of harassment and will reflect negatively on the survey. As in so many areas of survey design and execution, this is an area in which a careful trade-off is required: using sufficient reminders to obtain the desired response level without using so many as to be perceived as harassment by the potential respondent.

Telephone interviews
In surveys conducted entirely over the telephone, the number of contact attempts is usually the key issue, because a reminder process is not generally appropriate or relevant. The main issue in telephone interviews is the number of call attempts. In determining the number of call attempts, there is again a trade-off. If too few attempts are made, the data are likely to be biased by not including those respondents who are harder to get hold of at the times at which calls are made for recruitment. On the other hand, the more call attempts that are made, the more expensive is the survey, and the question arises as to whether those who are really hard to get hold of will ever actually complete the survey. On the first issue, there is substantial evidence to suggest that, in most surveys, there are significant differences between those who complete the survey after only one or two attempts and those who require numerous call attempts before they are contacted. In an analysis of respondents who completed a survey against those who were not contacted, Black and Safir (2000) report that there were significant differences, with non-contacts being more likely to have experienced telephone service interruptions, to be members of one- or two-person households, and to be renters. They also find them to have been significantly different with respect to age, and race and ethnicity. Complete households also tended to be larger, to be more likely to include children under the age of eighteen, and not to have experienced food concerns. The research was undertaken in the United States, and non-contacts were also found to be less likely to be US citizens. As a result of a survey carried out in Canada, Elspett-Koeppen et al. (2004) find that respondents who were hard to reach (defined as those requiring more than eleven call attempts) were more likely to be male, to be younger, to have a higher household income, and to be fully employed. An analysis of certain health factors and issues also showed significant differences. On the other hand, Biemer and Link (2006) report that they found no evidence of nonresponse bias between a 40 per cent response rate and a 70 per cent response rate in a number of telephone surveys.



Drew, Choudhry, and Hunter (1988) report on the effect of the number of call attempts on the eventual response rate. They find that five call attempts obtained response rates of around 86 per cent, while nine attempts increased the rates to around 90 per cent. It took seventeen attempts to increase the response rate further, to about 93 per cent. On the other hand, one attempt achieved a response rate of only 54 per cent. Similarly, Sangster and Meekins (2004) report that the majority of interviews, refusals, and ineligibles were obtained by the sixth or seventh call attempt and that, by the twentieth attempt, there was less than a 2 per cent chance of completing an interview. In another study, this one a health study, Kristal et al. (1993) report that 82 per cent of interviews were completed within eleven call attempts and that an additional 3 per cent were obtained by a further eleven call attempts, with most of those being completed by the fifteenth call attempt. Lind (1998) undertook an experiment to compare the results of twenty versus thirty call attempts, as well as other aspects of the telephone protocol. They report that they found no significant difference in the response rate between twenty and thirty call attempts, thus further confirming the finding of these various studies that a modest limit on the number of call attempts is unlikely to result in biases in the survey results.

Dennis et al. (1999) report that, in an immunisation survey, 69 per cent of interviews were complete by the fifth call attempt, 86 per cent by the tenth, and 97.6 per cent by the twenty-fifth. Interestingly, they also report a decline in the eligibility rate after the tenth call attempt, whereas the eligibility rate had climbed from one up to ten attempts. Item nonresponse was shown to climb with increasing call attempts, whilst the data quality indicators declined fairly consistently. A number of demographic variables were also analysed against the number of call attempts. Similar results were found in the other studies reported here, with a number of the demographic variables showing little change as the number of call attempts was increased above six, while a few variables changed significantly. The primary variables that showed significant differences for the higher numbers of call attempts were race (specifically the percentage of blacks), marital status, low income, and the presence of a second telephone line.

In a more recent study, Rogers and Godard (2005) examine the effect of the number of call attempts on the representation of people whose first language is not English. Like the other studies reviewed in this chapter, Rogers and Godard (2005) find no significant differences in the survey findings between those who required up to six call attempts compared to those requiring seven or more call attempts, up to twenty-nine. However, they do find a difference in the demographics of the sample according to the number of call attempts. In keeping with other studies, they find that Hispanic respondents required more call attempts on average, as did those respondents who were born outside the United States. More call attempts were also required for African Americans than for white non-Hispanic respondents. Interviews conducted in Spanish, Mandarin, or Cantonese required the largest numbers of call attempts, with well over one-third requiring more than six. The effects of language persisted after controlling for such things as education and employment status.



Kalsbeek et al. (1994) undertook a comprehensive study of the cost-efficiency of a survey in relation to the number of call attempts. They report very similar findings to the other studies cited here. In their study, they obtained a 40 per cent response rate with only one call attempt. This increased rapidly to around 90 per cent with five call attempts, and then barely increased after nine call attempts. The incremental response rate is quite high for the first six attempts and then falls sharply on the seventh attempt, reaching a low at the ninth attempt and remaining low from then on. At the same time, Kalsbeek et al. find that the relative costs of the survey also continued to climb up to the ninth attempt and then levelled out. An examination of gender, age, income, race, and household type revealed that there were only minor changes for race, gender, and younger people over the range of one to fifteen call attempts. On the other hand, the sample composition changed dramatically for the elderly, poor people, and persons from single-person families. However, by seven call attempts, there were no significant differences on any of these measures. Estimates of sample bias converged to zero by the sixth call attempt and remained close to zero for increases in call attempts beyond this. Kalsbeek et al. (1994: 149) conclude that, ‘while the practice of allowing up to 15 call attempts is technically the most cost-efficient option among those considered, relatively little cost-efficiency is lost by reducing the allowable number of attempts to as few as 6 attempts’.

Taking the results of all these studies together, the overall findings appear to be as follows. A maximum of between five and seven call attempts appears to be appropriate, if the major concerns are with cost-efficiency and the representativeness of the findings of the survey. However, if the actual make-up of the respondents is of greater importance, then a higher limit on call attempts may be necessary, so that certain demographic groups, such as the elderly, the poor, those whose native language is different from that of the country in which the survey is being conducted, and those who fall into certain racial minorities, are represented more correctly in the final response make-up. However, in general, increasing the number of call attempts beyond twelve to fifteen appears unwarranted, even with concerns about the demographics of the sample.
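
The diminishing returns that underlie this recommendation can be illustrated with a simple sketch. The model below is an assumption for illustration only (a constant 40 per cent chance of reaching the remaining non-contacts on each attempt) and is not taken from any of the studies cited; it nonetheless reproduces the general shape reported, with most of the achievable response obtained within about six attempts.

```python
# Illustrative sketch (assumed constant per-attempt contact probability; not
# taken from the cited studies): incremental and cumulative response by attempt.
def yield_by_attempt(p_contact=0.40, max_attempts=10):
    remaining, cumulative, table = 1.0, 0.0, []
    for attempt in range(1, max_attempts + 1):
        gained = remaining * p_contact        # share of the sample reached on this attempt
        cumulative += gained
        remaining -= gained
        table.append((attempt, 100 * gained, 100 * cumulative))
    return table

for attempt, incremental, total in yield_by_attempt():
    print(f"attempt {attempt:2d}: +{incremental:4.1f}%  cumulative {total:5.1f}%")
```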

Face-to-face interviews
Face-to-face interviews involve sending an interviewer to a specific location, such as the potential respondent’s home or workplace, and recruiting the respondent to undertake the survey, which is then undertaken at that time. As with the pure telephone survey, there is no relevant reminder process for such a survey. The issue is instead similar to that for the telephone survey, and concerns the number of attempts to be made to find a respondent at the sampled location. In contrast to telephone surveys, making additional call attempts for face-to-face interviews can be a very expensive proposition, depending on the locations at which interviews are to take place. For intercept surveys, there is usually no opportunity to make additional attempts. These would be the cheapest types of face-to-face surveys with respect to interviewer time, because the interviewer does not travel from one respondent to another. Typically, such a survey would be undertaken at a place such as an airport, a shopping centre, a specific store, on board a vehicle, etc. In such a survey, there will usually be one opportunity for contact. If it is successful, then the interview proceeds. If not, then this is logged as a refusal.

A face-to-face interview at a workplace, when there are multiple interviews to be undertaken at each workplace, represents the next cheapest method of face-to-face interviewing. Depending on the exact nature of the survey, repeat contacts may, again, not be an issue, because the survey may function in much the same fashion as the intercept survey. On the other hand, if the survey involves interviewing preselected persons at the workplace, it is usually necessary to set appointments with these individuals before the interviewer proceeds to visit the establishment. In this case, the number of contacts will depend on whether or not the selected individuals keep to their appointments, and on the survey protocol with respect to calling back for someone who was unable to be interviewed at the appointed time. No rules on contacts are appropriate for such a survey.

When the face-to-face survey involves interviewing people at their homes, the usual sampling method will have been to draw a sample of addresses, and then to have interviewers visit each address and attempt to recruit and interview there. It is not usual for such surveys to be undertaken through prior appointments, although this certainly is a possibility. If prior appointments are made, then the only issue to determine in terms of the number of contact attempts is the number of times that the interviewer should return to a specific address if no one is at home at the original appointed time. The principal issue here is a trade-off between potential bias, by not trying often enough to contact a hard-to-find respondent, and the expense of making multiple attempts on different days and at different times of day. To decrease potential bias, the more visits that can be made to find a given sampled respondent, the better. However, it is not usually affordable to make more than two or three attempts, and the usual protocol when two or three attempts fail is to select the household next door, in a specified direction. This represents a cost-saving method of sample substitution that is normally practised. Documentation of the potential biases from fewer contact attempts and from selecting the next-door household is largely not available.

Internet surveys
Because internet surveys are still relatively new, there is not much literature on the issue of reminders for such surveys. In Dillman’s (2000) book Mail and Internet Surveys, specific reference is not made to internet surveys in the reminder process described. The process can follow exactly the same procedure as that for a postal survey, with or without telephone recruitment. In other words, reminders can be conducted entirely by post, if neither the e-mail address nor the telephone number of the potential respondents is known. On the other hand, if either or both the telephone number and e-mail address are known, these can be used to provide some of the reminders. In work that the author has conducted recently, it has been found that e-mail reminders in which the URL of the survey site is repeated are particularly effective for the internet version of surveys.



11.3 Who should respond to the survey?

In part, the answer to the question of who should respond to a survey is determined by the nature and design of the survey itself. Many surveys are designed so that one person, representing either him- or herself alone, or as spokesperson for his or her household, responds. Some surveys, particularly various types of censuses and those in the fields of transport planning, sociology, voting patterns, health issues, etc., are designed to elicit information from all the members of a household. In surveys of people undertaking a particular activity (intercept surveys), or at workplaces, the issue of who should respond is also pre-defined by the nature of the survey in most instances. Therefore, this section concentrates on two types of surveys: those in which a specific person is targeted within a household or other unit, and those in which all the members of a household are to be included.

11.3.1 Targeted person
In many types of surveys, it is undesirable to select more than one person from a household. This may be because it is important that other household members are not able to overhear the responses of another household member, or because sampling more than one member of a household will not provide sufficiently independent data, or because the purpose of the survey is in other ways best served by having one person represent each sampled household, or because the purpose of the survey is to measure information about only one person in a household. Usually, when this is the case, households, which are generally the units that can be sampled the most readily (the sampling units), are sampled by some random sampling method, and the task is then to select one person from each randomly sampled household.

A number of methods exist for doing this. Probably the least satisfactory method is to survey the person answering the telephone or the door, or, if that person is not of a sufficient age to be surveyed, the first adult member of the household who is available to speak on the phone or to the interviewer at the door. A pure postal survey – i.e., one in which the only contact with the household is through the post – does not lend itself readily to a single adult respondent per household, unless there is a means to identify a household member by name so that the survey materials can be addressed to that person. For example, in the case of certain marketing surveys that are based on previous purchases, for which a record has been obtained of the name of the purchaser, a postal survey can be mailed specifically to the person who made the purchase. In other cases, lists may be available of the names and addresses of persons who have a particular interest or have registered with some service or organisation, and postal surveys can be sent directly to the individuals concerned. However, in many instances, surveys are intended to be made of a random sample of all the population within a geographic area, in which case the selection of a specific person in each household demands some alternative procedure.

A useful method for selecting a person from a household is to use the ‘nearest birthday’ method. After establishing that there is more than one person in the household of at least the minimum age for the survey, this involves selecting that person over the minimum age for the survey whose birthday is nearest to the day on which contact is made with the household. In effect, this provides a random selection of adults from households. Another similar method is to use a grid to select the person to interview. The grid may be constructed using any readily available characteristics of individuals. For example, a grid may be constructed that uses the relative age of the person in the household and the gender of the person. In this case, interviewers rotate through the grid, asking for a different gender and relative age for each household contacted, based on initial questions to determine the number of adults by gender in the household. For example, initially, the interviewer may ask how many males there are in the household over the age of fourteen, and how many females over the age of fourteen, when the survey is to be conducted with one person in the household who is over that age. In this case, the grid might be constructed as shown in Table 11.1.

Using a grid such as the one shown in this table, the interviewer establishes the type of household from the questions about the numbers of adult males and females and then chooses the selection provided in the grid. In the same survey, there may be three or four versions of this grid, so that different selections are made by different interviewers, or different grids may be used on different days of the week. Ideally, the grid is constructed by random drawings from the appropriate numbers of males and females in each household. The result should be an unbiased selection of a specific individual from each sampled household. In this case, there will potentially be a slight bias in selecting from households in which there are more than three adults of either gender, unless there are sufficient different grids to allow all possible adults to be sampled with equal probability.

Table 11.1 Selection grid by age and gender

Number of                                  Number of female adults
male adults    0                  1                    2                    3                     4+
0              –                  Oldest female        Youngest female      Oldest female         Second oldest female
1              Oldest male        Oldest male          Youngest female      Second oldest female  Oldest male
2              Oldest male        Oldest female        Oldest male          Youngest male         Second oldest female
3              Youngest male      Second oldest male   Oldest female        Youngest female       Youngest male
4+             Third oldest male  Youngest male        Second oldest male   Second oldest female  Youngest female



However, in Western societies, the number of households with more than three males or more than three females over the age of fourteen years will be relatively small, and the biases resulting from the use of the grid will be of little concern.
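
By way of illustration, a simple sketch of how these two selection rules might be coded is given below. The function and field names are hypothetical, the ‘nearest birthday’ is interpreted here as the next birthday after the contact date, and only a few cells of a Table 11.1-style grid are shown.

```python
# Illustrative sketch (hypothetical names): within-household respondent selection.
from datetime import date

def days_until_birthday(birthday, today):
    """Days from the contact date to the person's next birthday (leap-day births ignored)."""
    next_bd = birthday.replace(year=today.year)
    if next_bd < today:
        next_bd = birthday.replace(year=today.year + 1)
    return (next_bd - today).days

def nearest_birthday_adult(adults, today):
    """adults: list of (name, date_of_birth) for eligible household members."""
    return min(adults, key=lambda person: days_until_birthday(person[1], today))

# A fragment of a selection grid keyed by (number of male adults, number of
# female adults), in the style of Table 11.1 (example cells only).
SELECTION_GRID = {
    (0, 1): "oldest female",
    (1, 1): "oldest male",
    (2, 3): "youngest male",
    (3, 2): "oldest female",
}

adults = [("Alex", date(1980, 11, 3)), ("Sam", date(1975, 3, 21)), ("Jo", date(1992, 7, 9))]
print(nearest_birthday_adult(adults, today=date(2011, 10, 1)))   # Alex (birthday 3 November)
print(SELECTION_GRID[(2, 3)])                                    # youngest male
```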

Other grids can be constructed for selection, and all will work on a similar principle. Either the grid or the nearest birthday method will provide a randomised selection of a person to interview, when only one person in a household is desired. It is important to note that a method that was used some years ago is no longer considered acceptable, and that is to request the ‘head of the household’, or to address postal materials to such a labelled individual. This is because it is no longer considered politically correct to assume that there is a head of a household. In the 1940s, 1950s, and 1960s this almost certainly meant a man, and the term has therefore taken on sexist overtones. Survey designers would do well to stay away from such a label. On the other hand, the US Census Bureau often uses the term ‘any responsible adult’, especially when it wishes to obtain information on an entire household through only one adult in the household. This continues to provide reasonably good information for Census Bureau purposes. In contrast, using the person who answers the door or the telephone as the respondent, when the survey is focusing on the person rather than the household, is likely to create a bias. In many modern societies, women are more likely to answer the telephone and men are more likely to answer the door.

If the goal of the sampling is to choose adults proportionately, then the methods just discussed will not be suitable: only one person will be picked regardless of the size of the household. However, it should be clear to the reader that simple changes to Table 11.1 would permit, say, one-third or one-half of the adults in a household to be picked. All that would be required is to increase the number of adults designated in each cell, based on the total number of adults in the household. Suppose one wishes to choose one-half of the adults in sampled households. For households with even numbers of adults, this is easily accomplished. Those combinations of male and female adults that result in an even number in Table 11.1 would have two selections when the total number of adults is four, three selections for a six-adult household, etc. When the number of adults is odd, then the different grids (by day of week, or interviewer) will have the number of adults rounded down half the time and rounded up half the time. Thus, half the grids will have no one selected in a one-adult household, and the other half will have one adult selected. For households with three adults, half will have two selected and half will have one selected. The other alternative, which is discussed in more detail in Chapter 13, is to weight the sampled adults from households with odd numbers of adults, so that one adult is always selected from a one-adult household, two are always selected from a three-adult household, and so forth. By applying an appropriate weight to these individuals, it is still possible to achieve the equivalent of an equal-probability sample.
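
A brief sketch of this logic follows; it is an illustration of the rounding and weighting idea described above rather than the author’s exact procedure, and the details of weighting are taken up in Chapter 13.

```python
# Illustrative sketch: selecting about one-half of the adults in a household,
# either by alternating the rounding of odd counts across grid versions or by
# a fixed rounding rule combined with a weight.
import math

def n_to_select(n_adults, grid_version, fraction=0.5):
    """Alternate rounding for odd counts: even-numbered grid versions round down,
    odd-numbered versions round up."""
    exact = n_adults * fraction
    if exact.is_integer():
        return int(exact)
    return math.floor(exact) if grid_version % 2 == 0 else math.ceil(exact)

def selection_weight(n_adults, n_selected):
    """Inverse of the within-household selection probability, e.g. selecting
    two of three adults gives each selected adult a weight of 1.5."""
    return n_adults / n_selected

print(n_to_select(3, grid_version=1))        # 2 adults selected under this grid version
print(selection_weight(3, 2))                # 1.5
```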

11.3.2 Full household surveys
In surveys in which it is important to collect information on each member of the household, there are two principal options. The first is to have one person in the household report on everyone. The second option is to survey each member of the household separately. In reality, the second option cannot normally be applied to all households, because ethical concerns will usually mean that children under a certain age are not asked to participate directly in the survey. This raises two important issues: proxy reporting – i.e., the reporting by one person of information about another person – and the definition of what constitutes a complete household survey. The issue of proxy reporting is also, of course, relevant to the first option, in which one person reports on all members of the household.

Proxy reporting
A great deal has been written on the topic of proxy reporting in surveys. As a general rule, adherence to appropriate ethical considerations in surveys requires that proxy reports are obtained for children below a certain age. As suggested in Chapter 4, this age might generally be fourteen. In many cases, when a survey is done by post, children under fourteen cannot be trusted to take the survey task seriously and complete a survey accurately and correctly, even if the topic of the survey is such that ethical issues do not arise in having younger children participate. However, in certain cases, having younger children respond to a survey may be essential to the purposes of the survey. There will, of course, be a minimum age below which a child would not have the understanding or ability to respond. In all cases when a child is deemed too young to respond, but when the survey requires information about the child, it is necessary to have a responsible adult act as a proxy in completing the survey. In the case of all such surveys, this form of proxy reporting is considered necessary. There may also be cases when proxy reporting of adults is required. This may occur when the adult is physically or mentally unable to complete the survey, but when a valid response is required for that person. Again, a proxy report will normally be obtained in such cases.

However, in general, an extensive search of the literature indicates that, in most surveys, proxy reports are not otherwise considered desirable. It is not uncommon, in self-administered surveys – e.g., postal surveys – and in telephone surveys, that a wife will provide responses on behalf of her husband. Although this may seem to be a sexist remark, it is in fact the case that men are more likely to refuse to respond to a survey by post or telephone, while women will respond. Sometimes, when a household is recruited, the wife will agree to undertake the survey, and thereby commits her husband to participate. However, when it comes to actually doing the survey, the husband may refuse, or the wife may be unwilling to press the issue, and so she ends up completing the survey on her husband’s behalf.

As a general rule, it is found that proxy reports tend to be inaccurate, especially when they have to do with reporting behaviours that may not be directly observable by the person making the proxy report. A good example of this is provided in the field of transport planning. Most household travel surveys aim to collect data from all members of the household about the travel that each person undertakes on a specific day. Usually, in modern surveys, the day is set in the future at the time of recruitment, and survey materials are provided before the designated day, in the form of some type of diary for recording activities and travel. In theory, the idea is that people will take the diary with them during the designated day, filling in information about each journey they make and each activity they do throughout that day. In fact, it is known that most people will still fill the diary out at the end of the day, relying on memory of what they have done through the day to permit completion. However, the prior distribution of the diary serves two important purposes, in that people know in advance that their travel is to be measured on that day, and in that they know what information is asked about the travel. Nonetheless, it is not uncommon in these surveys that one or more members of the family (such as a spouse, teenage children, etc.) may be unwilling to take the time to fill in the diary, and a parent (usually the mother/wife) will fill the diaries out on behalf of other family members. The problem is that the proxy individual may not be aware of what the family member actually did in the way of travel and activities through the day.

In an analysis of the Nationwide Personal Transportation Survey carried out in the United States in 1995, Greaves (2000) finds the following results. Overall, proxy reporting when the information was written down into a diary showed under-reporting of travel of about 18 per cent. Within the proxy reporting in this case, it was found that travel between home and work and between home and school was overestimated, while all other travel purposes were underestimated by the proxy respondent. Similarly, when no diary was used, the proxy reporting underestimated travel by almost 40 per cent against someone responding for him- or herself without a diary filled out, and by 56 per cent against a self-report using a diary. Again, the patterns of under- and over-reporting were very similar to the previous case. This suggests a high degree of unreliability in having other people in a household report the travel behaviour of household members.

In general, whether surveys relate to smoking behaviour, purchasing, desire for children, measuring time use, voting behaviour, or travel behaviour, inter alia, the results appear very similar. Proxy responses tend to be relatively deficient compared to self-reports, and reliance on proxy reports is likely to lead to error. However, an interesting discussion of this is given by Schwarz and Wellens (1994). They find that proxy reports tend to underestimate the variability of behaviour over time. As a result, they find that, if the survey relates to recent behaviours, the correspondence between proxy and self-reports is generally rather poor, whereas, when the period of behaviour to be reported on is more distant, the proxy and self-reports tend to converge.

11.4 Defining a complete response

Two primary issues arise in defining a complete response. The first concerns the possibility of missing items of information in the set of information that is to be obtained on each unit of the sample. The second concerns the composition of the unit itself, and usually applies only to surveys in which the unit of the sample is made up of more than one separate entity.



11.4.1 Completeness of the data items
As a general rule, in any survey, there is a possibility that some data items will be missed. In an observational survey, this may arise from observer error, if the observer misses out a piece of information that is required, forgets to record the item, or is unable to observe that item for some units of the population. In a participatory survey, this may arise from interviewer error, respondent error, or respondent refusal to answer some questions. Interviewer error may occur when an interviewer inadvertently skips over a question, forgets to record the answer to a question, or records it indecipherably. Earlier chapters dealing with question design and instrument layout focused on how to use the design to minimise the occurrence of such omissions and problems. Nevertheless, even in the best-designed survey, such interviewer errors will occur, simply because of the fallibility of human beings. Respondents, in self-administered surveys, may also omit certain questions inadvertently, which is again an issue of design, tempered by human fallibility. However, in both interviewer-administered and self-administered surveys, respondents may also sometimes be unwilling to respond to certain questions, especially those that they regard as threatening, or as probing into matters that are considered to be too personal to reveal. Again, good design of the survey and its questions should reduce the occurrence of such problem questions, but they will arise. Indeed, in surveys of human populations in North America and Australia, it is often reported that refusals to provide household or personal income information run between 10 and 20 per cent of the sample.

Although there are a number of issues to be discussed relating to this item nonresponse, which are the subject of Chapter 20, the specific issue that needs to be addressed in this chapter is that of when such omissions of data should result in a decision to classify the entire observation of that unit of the sample as incomplete, and therefore of no potential use to the analyst. This is far from a trivial issue, especially from the point of view of the client for the survey and the survey research firm undertaking the survey. The number of responses to be obtained is frequently a matter of contractual agreement between the client and the survey firm, and these responses are either explicitly or implicitly defined as ‘complete’ responses.

A strict interpretation of the notion of ‘completeness’ would, of course, result in any observation being rejected if there is any missing information in that observation. However, because of human fallibility and sensitivity, such a strict interpretation is likely to lead to other, more serious, issues in the survey response. Probably the most serious consequence of a strict interpretation of completeness is the substantial likelihood that this interpretation will lead to serious biases in the final sample. Consider, for example, a self-administered survey. Varying levels of literacy in the sample may lead to less well educated respondents being more likely not to respond to some questions. Sensitivity to specific questions, such as was outlined in Chapter 8 relating to threatening questions, may lead to certain segments of the population refusing to answer a specific question. In this case, a decision to classify all such responses as incomplete would bias the sample against those population groups.



A non-trivial consequence of this strict interpretation is an economic one. For a survey firm to be required to meet sample requirements with every single observation having no missing items whatsoever will drive the cost of the survey up very substantially. For example, given the problems experienced in almost every survey in which income is requested, one might immediately assume that at least 125 per cent of the sample size goal will need to be surveyed, to allow for discarding all those who refuse to divulge their income. Other sensitive questions could lead to further escalation in the sample size, resulting in an effective factoring up of the cost of the survey by the percentage of over-sampling that would be required to make up for item nonresponse. Richardson and Meyburg (2003) provide a cogent argument for a more relaxed definition of what constitutes a complete response. Using data from the Victoria Activity and Travel Survey (VATS) carried out in that Australian state in 1995, they show the consequences of two different definitions of completeness. In the first definition, completeness is defined as having no items missing. This definition led to the exclusion of 30 per cent of the households in the sample. Further, the retained households represented only 67.2 per cent of all the information collected in the survey from both complete and incomplete households. However, Richardson and Meyburg argue that, in this case of a travel survey, the households that are more likely to be excluded are larger households and households that undertake more travel, because, in both cases, these households have more information to provide to the survey and therefore a bigger chance of omitting some item of information. Indeed, when looking at household size, it is found that the average household size of the ‘complete’ data was only 95 per cent of the average household size of all the data. Similarly, the average amount of daily travel was only 97 per cent of that of all households.

As an alternative, Richardson and Meyburg (2003) propose a definition of completeness that would allow up to 20 per cent of the data items to be missing in a complete household. Under this definition, only 3.2 per cent of the sample was required to be excluded, and 96 per cent of the data collected was actually in the usable sample. At the same time, the average household size of the retained households was the same as that of all households in the sample, and average daily travel was actually 2 per cent higher than that of all households. The reader is referred to Richardson and Meyburg (2003) for more information on the biases found in this simple comparison.

This discussion leads to the conclusion that some compromise is required in the definition of a complete response. Moreover, following the arguments of Richardson and Meyburg, it is probably wise to err on the side of leniency in the decision on what constitutes a complete response. At least two possibilities exist here. First, one might define completeness in terms of a simple percentage of items that may be missing in what is still termed a complete response. In the VATS survey analysed by Richardson and Meyburg, there were a total of 115 data items per household. Allowing 20 per cent of these to be missing would mean that at least ninety-two items were provided for each household in the acceptable final data set. Whether 20 per cent is the best figure or some other percentage should be adopted is a matter of judgement, and is also likely to vary both with the subject matter of the survey and with the number of data items to be recorded in each response. An alternative method of defining completeness is to define key variables in the data and to define a complete response as being one in which at least all key data items are provided.

The concept of key data items is an important one for defining sample size, because it is the key data items that are used in setting the accuracy requirements of the survey. Because the accuracy requirements for the key data items lead directly to defining the required sample size, it would make sense that all observations that contribute to the final sample size should be complete on these key items. This is the definition that this author recommends for a complete response – i.e., that each complete response must be complete on the key data items, which are also those data items on which the sample size has been based. As a general rule, it will be found that there are usually no more than three to five key data items in most surveys. Although this might seem likely to lead to accepting some very incomplete data, it will generally be found that most responses are more than 90 per cent complete, and that rejection of only those responses in which one or more key data items are missing will lead to a very small proportion of the sample being rejected.
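
As a simple illustration of this recommendation, a completeness check might be coded along the following lines. The field names and the 20 per cent threshold (the figure from the Richardson and Meyburg discussion above) are examples only, to be set to suit the particular survey.

```python
# Illustrative sketch (hypothetical item names): a response is complete if all
# key items are present and the overall share of missing items is acceptable.
KEY_ITEMS = ["household_size", "number_of_vehicles", "household_income"]   # examples only

def is_complete(response, all_items, key_items=KEY_ITEMS, max_missing_share=0.20):
    missing = [item for item in all_items if response.get(item) in (None, "")]
    if any(item in missing for item in key_items):
        return False                                 # a missing key item always fails
    return len(missing) / len(all_items) <= max_missing_share

all_items = KEY_ITEMS + ["dwelling_type", "number_of_workers", "postcode"]
response = {"household_size": 3, "number_of_vehicles": 2, "household_income": None,
            "dwelling_type": "house", "number_of_workers": 1, "postcode": "2006"}
print(is_complete(response, all_items))              # False: income, a key item, is missing
```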

One final issue relating to this topic should be addressed. As a general rule, this author recommends that the completeness of a response should be assessed after all attempts have been made to clean and repair the data. As is discussed in Chapter 20, there are a number of techniques available for repairing data that are provided erroneously, and for inferring or imputing values of some missing data items. At the outset of the survey, the client for the survey should agree with the survey firm the extent to which repair, inference, and imputation are to be undertaken. Only after such repairs and cleaning of the data have been completed should the completeness of the observations be assessed. If no repair, imputation, or inference is permitted for key variables, then this step will make no difference to the retained sample. However, if repair, inference, and imputation are permitted, then this should be performed first, and then all the data that are considered complete following these steps should be retained for use.

11.4.2 Completeness of aggregate sampling units
In those cases where the sampling units are made up of aggregates of elements, we also need to consider what constitutes a complete response in terms of entire elements being omitted. Principally, this applies to household-based surveys, when the survey is required to obtain information about all household members, and the completeness issue concerns whether a household is considered complete if one or more members of the household did not provide any response. This is often referred to as unit nonresponse, and it is dealt with at much greater length in Chapter 20.

Very similar issues arise with respect to bias in the sample if a strict definition of a complete household is applied. In surveys to which each member of the household is supposed to respond, households that include teenage children will often present a major problem with respect to completion. This will obviously end up biasing the sample against households with teenagers.



Some household-based surveys use a sliding scale for defining complete households. For example, such a sliding scale may indicate that there must be one person who provides complete information for one- and two-person households, at least two persons for three- and four-person households, at least three persons for five- and six-person households, etc. Others may specify a percentage completion, such as 50 per cent of household members, while others may specify completion in terms of percentages or numbers of adults and children for whom responses must be obtained.

Richardson and Meyburg (2003) argue for adopting a lenient definition of what constitutes a complete household, with a recommendation of a range of 20 to 50 per cent. They argue that such definitions will be more cost-effective, because fewer data will be discarded as unacceptable, and will also be less biased, because the more lenient definitions do not tend to exclude certain classes of households from the final sample. This author concurs with this position, and would argue for either a sliding scale, such as the one described in the preceding paragraph, or a constant percentage, which might range between 33 and 50 per cent.
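
A sketch of these two household-level rules is given below; the thresholds shown are simply the example figures from the preceding paragraphs, not fixed standards.

```python
# Illustrative sketch: a household is complete under either a sliding scale
# (half the members, rounded up) or a constant-percentage rule.
import math

def required_responders(household_size):
    """Sliding scale from the text: 1 for one- and two-person households,
    2 for three- and four-person households, 3 for five- and six-person, etc."""
    return max(1, math.ceil(household_size / 2))

def household_complete(household_size, members_responding, rule="sliding", threshold=0.5):
    if rule == "sliding":
        return members_responding >= required_responders(household_size)
    return members_responding / household_size >= threshold     # e.g. 0.33 to 0.5

print(household_complete(5, 3))                                 # True under the sliding scale
print(household_complete(5, 2, rule="percent", threshold=0.5))  # False: only 40 per cent responded
```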

11.5 Sample replacement

The issue of sample replacement arises because of the inability to contact all the original sample. Zmud (2003), inter alia, points out that failure to make contact with sample units is both a significant issue in modern surveys and one that appears to be a growing problem, especially in surveys that rely on telephone contacts using random digit dialling. Others, such as Triplett (1998), also report an increase in the number of attempts that must be made to contact sample units. In designing the sample, the survey designer should initially build in the anticipated refusal and non-contact rate. The first issue here is at what point to replace a sample unit with another one. For example, suppose one is recruiting respondents by means of a telephone call. Some numbers may be called a number of times and a human response is never obtained on that number. At some point, the decision must be made to replace this number with a new one. Similarly, with a face-to-face interview, an interviewer can attempt to find someone at home to interview on a number of occasions and be unsuccessful each time. The question must be to determine how many attempts there should be before the sample unit is dropped and replaced with another.

The second issue that must be dealt with is how to select a replacement sample. Strict adherence to the sampling procedure determined at the outset of the design is paramount for avoiding bias in the survey. Therefore, any replacement sample should be drawn in the same manner as the original sample, and should represent a continuation of the strict random sampling or other sampling procedure.

11.5.1 When to replace a sample unit
The first issue, as noted above, is when to replace a sample unit. The decision to replace a sample unit is required in almost all surveys of human populations, as well as in some surveys of inanimate objects, etc. As is discussed in more detail in Chapter 21, the requirement to replace a sample will lead to a decrease in the correctly calculated response rate. As such, sample replacement represents a worsening representation of the underlying study population. On the other hand, failure to replace a sample will result in a decreasing sample size, and a decision to make a large number of attempts before abandoning a sample unit and replacing it will lead to escalating survey costs. The issue is how to balance the competing needs of representativeness and economy.

Groves and Lyberg (1988) report that the number of non-contacts in a survey is an inverse function of the number of attempts made to contact sample units. De Leeuw and de Heer (2002) find that, in Europe, systematic efforts to reduce non-contacts were what made the difference between survey response rates that remained more or less stable and those that were declining.

Consider two extremes in survey design. In the first of these, the decision is made to replace sample units after a first unsuccessful attempt at contact. In this day and age, when all the adults in a household are often working outside the home, this will tend to bias the sample against households in which all adults work outside the home and towards households in which there is at least one adult who works at home only. For many survey purposes, this will result in a very substantial bias in the responses obtained. Many marketing and opinion poll surveys are actually conducted in this way, because time is sometimes critical and it is necessary that all respondents are interviewed on a specific day or within a period of a very few days, thus not permitting repeat attempts to contact respondents who are not available on the first contact. In the specific case of travel surveys, the bias created by this survey process will be significant, because those most likely to be unavailable are probably those who travel the most, and are therefore often not at home to be reached by telephone or interview at the time that the interviewer first calls. On the other hand, people who have mobility-related disabilities and others who travel very little are more likely to be found at home, so that the survey will over-represent such people, while under-representing the very mobile.

Another feature of this survey will be that there will be a number of potential respondents who provide a ‘soft’ refusal, and these respondents will never be included in the final sample. A ‘soft’ refusal is defined as one in which the respondent does not indicate a complete unwillingness to respond, but possibly responds by saying that this is not a good time, that he or she is too busy, or giving some similar indication of unwillingness to participate at this particular moment. In contrast, a ‘hard’ refusal is one in which the potential respondent provides a strong indication that he or she will not participate in the survey. There is varying evidence about the biases resulting from soft refusals, which probably again depend largely on the topic of the survey. In some cases the failure to follow up with such potential participants may introduce no significant bias in the results, while in other cases it may result in substantial biases.

In the second survey, each selected sample unit that is not contacted successfully is tried again and again, with no limit on the number of attempts to be made. In addition, all potential participants who provide a soft refusal are also re-contacted subsequently, in an attempt to convert the soft refusals into participants. This is done not just by re-contacting the sample unit, but also by using specially trained interviewers who have a proven track record of being persuasive and able to gain the cooperation of many soft refusals in subsequent contacts. In this survey, there are minimal problems of coverage bias, because every sample unit that can be contacted is eventually contacted and becomes either a participant or a hard refusal. In this survey, additionally, the times and days of contact are varied through a large range, so that those sample units that are not available at certain times have an increasing chance of being contacted successfully.

This second survey is now a very expensive one. If it is a face-to-face home-based survey, it may require the interviewer to travel to a specific address many times, without ever achieving a successful interview. In a telephone survey, it will require interviewers to telephone a particular household a large number of times, without successfully completing the survey. In both cases, but more assuredly with the face-to-face interview, the costs of the survey will escalate rapidly. In addition, the need to transfer all soft refusal contacts to specially trained interviewers will result in increasing costs for these sample units, especially if the conversion of the sample unit from a soft refusal to a participant takes multiple contact attempts. Thus, this survey obtains a largely unbiased sample, but at a potentially huge cost.

Clearly, the ideal lies between these two extremes. Stopher et al. (2008b) report on an analysis of one travel survey with respect to the issue of response conversion. In this survey, there were initially 2,315 refusals. Up to six attempts were made to convert those refusals to participants. Of the 2,315 initial refusals, 521 were converted to completing the recruitment interview, while 172 households ended up completing the entire survey. Two further analyses of this survey are reported. First, the number of call attempts that was required for conversion was examined. This showed that 52 per cent of the successful conversions to complete the recruitment stage were achieved with only one subsequent call attempt. By three call attempts, 92 per cent of the conversions had been achieved. This suggests that a large number of attempts is not worthwhile. In a second analysis, Stopher et al. (2008b) report that, when a household was found to be non-contactable on one or more occasions after an initial soft refusal, the conversion rate was much lower. In this case, only 9 per cent went on to complete the recruitment interview, and only 3 per cent completed the entire survey. Sebold (1988) also reports on the decreasing productivity of additional call attempts. An interesting finding from her work is that there were demographic biases found in non-contact follow-up efforts on the characteristics of household size and marital status. Those who were difficult to contact were more likely to be from small households and to have a ‘never married’ head of household.

In a further study of the call history files of two surveys, Stopher et al. (2008b) report on the number of calls required to make an initial contact, as well as the number required to reach a household that initially requested a subsequent callback (which could be considered to be the softest version of a soft refusal). In one survey analysed on this issue, the survey design had called for ten attempts to be made before the number was replaced. The analysis found that the first two or three callbacks were generally very productive, although at a rapidly declining rate, with more than 50 per cent of successful contacts resulting on the first callback (second call attempt), declining to around 14 per cent by the fourth call attempt. In all cases, by the fifth call attempt (fourth callback) the productivity represented fewer than 10 per cent of successful contacts. By the seventh call attempt (sixth callback) the productivity dropped below 1 per cent. This suggests that, in line with the attempts to convert soft refusals, no more than five callbacks should usually be considered. This applies to a telephone survey.

In the case of a face-to-face survey, the number of potential callbacks will need to be set much lower than this, because the cost of repeated callbacks is very high compared to a telephone survey. The literature does not provide any examples of studies of the number of callbacks that should be used for a face-to-face survey, probably because economics will dictate this more strongly than the statistics relating to subsequent callback productivity. Having undertaken some surveys of this type, this author would suggest that a maximum of two attempts should normally be specified as the requirement for a good survey, with the ability to replace a non-contactable sample unit by a near neighbour after the failure of the second attempt. For example, in a survey conducted by the author in South Australia, the survey firm was permitted to replace non-contactable units on the second failed attempt by the household right next to the one initially sampled. The theory here is that such a replacement should provide a household of similar characteristics to the one being replaced, because of the usual degree of homogeneity in neighbourhoods. However, again, the specific number of contact attempts and the replacement strategy will be conditioned by the nature of the survey.

In terms of bias, Keeter et al. (2000) find that young people and better-educated people were more difficult to contact. They find that households with one adult and those with employed adults were harder to contact, while those with incomes between $25,000 and $35,000 (in the 1990s) had the highest contact rates for first calls on Monday to Thursday evenings (Dennis et al., 1999). For the same time period, households with low incomes (below $15,000) and those with high incomes (over $75,000) had the lowest contact rates for first calls. This implies that any survey about a topic that correlates at all with income will be biased as a result of these differential contact rates, especially if the number of contacts attempted is low.

Reviewing a number of recent surveys, it was found that the number of attempts made to contact a household after a failure to make contact on the initial attempt varied from none to six. In other words, some surveys did not pursue a contact any further after the first failure to contact, while others made up to six additional attempts. Harpuder and Stec (1999) undertook a study to examine the trade-offs between making additional callbacks and the cost and amount of residual bias in the data. They find that, after the second callback, the costs rose steadily and linearly for a CATI-type survey. However, bias appeared to drop to a global minimum at the fourth callback, and thereafter remained steady or increased slightly. Using comparisons to the US census, they also find that, even after twenty callbacks, their sample remained biased with respect to race and age. Overall, they conclude that the appropriate number of callbacks is between four and six, but suggest that six or seven be used until such time as more precise confidence intervals can be computed on the results. Interestingly, another study (Stopher et al., 2008b) also examined the optimal number of call attempts, using household travel surveys conducted by CATI methods in the United States. This study also concludes that six call attempts constituted the maximum number that should be used, based on the rate of success of converting a non-contact or refusal into a complete survey.

In summary, in a face-to-face survey, economics will generally dictate that a smaller number of callback attempts should be used than in any other type of human subject survey, with a limit usually of no more than two or three attempts being made to obtain a contact. In a telephone-based survey, for which economics will allow more attempts to be made, the number of attempts should still be limited to six or seven. In a postal survey, there is usually no option for making multiple attempts at contact, with the only possibility in this case being to issue reminders, as discussed earlier in this chapter.
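
These rules of thumb can be collected into a simple mode-specific policy, sketched below purely as an illustration; the limits shown are the figures suggested in this section, not universal standards.

```python
# Illustrative sketch: contact-attempt limits by survey mode, following the
# rules of thumb discussed above (figures are indicative only).
CONTACT_POLICY = {
    "face_to_face": {"max_attempts": 3, "replacement": "nearest neighbour"},
    "telephone":    {"max_attempts": 7, "replacement": "next unit from the reserve sample"},
    "postal":       {"max_attempts": 1, "replacement": "reminders only, no re-contact"},
}

def should_replace(mode, attempts_made):
    """Flag a sample unit for replacement once the mode-specific limit is reached."""
    return attempts_made >= CONTACT_POLICY[mode]["max_attempts"]

print(should_replace("telephone", 5))        # False: keep trying
print(should_replace("face_to_face", 3))     # True: replace, e.g. with the next-door household
```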

11.5.2 How to replace a sample

The second issue is how a sample should be replaced when the prescribed number of contacts has been attempted without success. The main problem with sample replacement is that it actually represents a failure to adhere to the original sample, and, as such, presents a serious potential for survey bias. In the unlikely event that non-contacts or refusals are randomly distributed through the sample, then no bias would arise. However, such an eventuality is very rare in practice, especially with surveys of human subjects. It is generally much more likely that those who cannot be contacted and those who refuse to participate will be drawn more heavily from certain population subgroups than others and will, therefore, result in a bias to the final sample. While such biases are generally unavoidable, and often cannot be determined until after the completion of the survey, at which time demographic profiles can be determined and compared to population data, such as from a census, it is important that additional bias is not added by the method used to replace a sample that is unsuccessful.

The previous subsection introduced one possible method of replacement, which is efficient for face-to-face surveys undertaken at residences. That is, during the fieldwork, when a household is not contactable or refuses to participate, it may be replaced by the nearest neighbour, defined in an unambiguous manner, so that all interviewers will use exactly the same process for replacement. As nearly as possible, this nearest neighbour replacement should result in a new sample member that is as similar as possible to the one being replaced, but is also not drawn in some process that leads to specific biases. However, this is not the only method that can be used for face-to-face samples. Another method is to use a similar procedure to the one that is outlined in the following paragraphs for a telephone interview, namely drawing an initial sample that is larger than the desired sample, by an amount that allows for expected non-contacts and refusals. The reason that this is not offered as the first method is because, in a face-to-face household-based survey, this will be a significantly more expensive option, except when the sample is drawn from a very compact geographic area.

For most other surveys, the best method of generating replacement samples is to draw an initial random sample that is equal to the size of the required sample plus an allowance for expected refusals and non-contacts. The key in survey design is to strike an appropriate balance between the number of attempts made to obtain a response from a sampled unit and the replacement of the non-responding sample unit with a new member of the sample. On the one hand, one could argue that the least bias will occur if the number of attempts to contact is made very large, so that the need for replacement arises only rarely. However, Harpuder and Stec (1999) find that there was little evidence that even as many as sixty attempts would reduce the bias significantly, beyond what could be achieved with four to six attempts. On the other hand, repeated attempts to obtain a response from a sample unit involve additional survey cost that may never result in a successful contact. Most experienced survey researchers know that there is a proportion of people in any population that are hard-core non-responders. The size of this group is rarely known, but it does appear to have been increasing in size over the past few decades. However, the presence of such a group means that there will be some proportion of the population that will never respond, no matter how many attempts are made. Thus, the key design issue here is to make repeated attempts to contact, until it is estimated that one has left predominantly only the hard-core non-responders. A design in which, say, only one or two attempts are made to contact each sample unit will necessarily be biased, almost irrespective of the subject of the survey.

The experienced survey designer will know, from past surveys, the level of non-contacts and refusals likely to be encountered, or this information can be estimated from conducting a pilot survey (see Chapter 12). Using this information, one can then determine the size of the sample that needs to be drawn initially so as to allow for obtaining the desired sample size. Suppose, for example, that past experience and a confirming pilot survey indicate that the non-contact rate in a particular survey is expected to be about 25 per cent, and that refusals are likely to run at 30 per cent of those contacted. Now suppose that the desired sample size for this particular survey has been determined to be 10,000 households. This will be the number that is obtained from 70 per cent of successful contacts (100 per cent minus the 30 per cent of refusals). Hence, the number of successful contacts required is 10,000 ÷ 0.7, or approximately 14,286. However, this number represents 75 per cent of all attempts to make contact (100 per cent minus the 25 per cent of non-contacts). Hence, the total number of samples required is 14,286 ÷ 0.75, or 19,048. Allowing for some margin of error, the prudent survey designer would then draw an initial random sample of at least 20,000 households. Working this backwards, to check that the computations are correct, we will now suppose that we draw an initial sample of 20,000 households. Assuming a non-contact rate of 25 per cent, this means that we should contact 15,000 households successfully. However, with a 30 per cent refusal rate, we can expect 4,500 of these contacts to refuse to participate. This will leave us with a successful survey completion of 10,500 households. This is slightly larger than the aimed-for 10,000, because we rounded up the required sample from 19,048 to 20,000.
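
The same calculation can be expressed as a short function. The Python sketch below reproduces the worked example; the function name and the 5 per cent safety margin are illustrative assumptions rather than a prescribed procedure.

import math

def initial_sample_size(desired_completes, noncontact_rate, refusal_rate,
                        safety_margin=0.05):
    """Inflate the desired number of completed surveys to the initial sample draw."""
    contacts_needed = desired_completes / (1.0 - refusal_rate)
    attempts_needed = contacts_needed / (1.0 - noncontact_rate)
    return math.ceil(attempts_needed * (1.0 + safety_margin))

# Worked example from the text: 25% non-contact, 30% refusal, 10,000 completes wanted.
print(initial_sample_size(10000, noncontact_rate=0.25, refusal_rate=0.30))  # 20000

# Forward check: 20,000 drawn, 75% contacted, 70% of contacts complete the survey.
print(20000 * 0.75 * 0.70)  # 10500.0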

11.6 Incentives

Incentives are often considered to be a means to reduce respondent refusals. However, more than any other topic in this book, perhaps, the impact of incentives is likely to vary enormously from one country to another. In the United States today, it is difficult to carry out almost any public service survey without offering an incentive, because incentives have a significant and substantial effect in increasing response rates. As noted by others, such as Ryu, Couper, and Marans (2005), incentives have been the subject of a large number of studies, which include issues of whether or not to offer incentives, the type of incentive to use (monetary or non-monetary), and when the incentive is offered (prepaid, or promised in return for a completed survey). There are also extensive citations in the literature indicating that incentives, at least in North America, increase response rates (see, for example, Goyder, 1987; Nederhof, 1983; and Sudman and Bradburn, 1974). However, there are a few citations that also note that there are cases in which incentives may actually decrease response rates (Dohrenwend, 1970; Groves, 2004). The author has also found that, in the case of Adelaide, Australia, in one instance incentives decreased the response rate. This was in a pilot study that requested survey respondents to carry GPS devices with them for a period of a week. In the pilot survey, one-half of the participants were not offered an incentive, while the other half were offered a prepaid A$5 incentive. The response rate was lower for the half that was offered an incentive than for the half that was not.

The practice of offering incentives, though widespread, is not consistent. In market research for private companies and for products and services in the private sector, incentives are probably much more common. In surveys for government agencies and other non-profit agencies, the practice is much more variable. In a review of household travel surveys, Stopher et al. (2008a) find that somewhere between one-quarter and one-third of these surveys offered incentives to respondents during the 1990s. It is by no means uncommon to find public agencies shying away from the use of incentives on the grounds that spending public money on incentives would be considered frivolous or wasteful by a majority of the public. There is certainly anecdotal evidence to suggest that some of the public, possibly a minority, do view the spending of public money on incentives in this way.

Although there is much agreement in the literature that incentives will impact response rates, generally favourably, there is little in the literature that addresses the issue of the effect of incentives on nonresponse error, or on the distributions of characteristics of respondents (Ryu, Couper, and Marans, 2005). There are speculative comments about the possibility that incentives may bias responses, but there are few studies that have actually examined the effect, probably because there are few cases in which researchers have had sufficient funds to carry out side-by-side incentivised and non-incentivised surveys that are otherwise identical in nature. This section summarises what is currently published on the effects of incentives on different types of surveys, with the majority of the information coming from US researchers, and therefore reflecting specifically what has been experienced in that particular market.

There are several dimensions along which one can compare and discuss incentives. Incentives can be provided in the form of money, gifts, participation in a lottery, or donations to a charity. They can also be provided prior to the completion of the survey or as a ‘reward’ for completing the survey, and they can be provided to everyone (all prospective respondents in a prepaid incentive, or all respondents in a post-paid incentive) or can be used differentially, to try to increase response from nonrespondents. However, the basis of any incentive is that of reciprocity. This, as Cialdini et al. (1975) have suggested, is such an extremely powerful rule that it probably overwhelms the effects of most other factors. Reciprocity is the psychological principle that underlies the golden rule of treating others as you would have them treat you. In this specific instance, offering an incentive to a person to complete a survey creates in the potential respondent a desire to return the favour (of the incentive) by fulfilling the request to complete the survey. However, Dillman (1978) cautions that the incentive must be in balance with the desired response. As Kalfs and van Evert (2003) note: ‘[T]he social standard of reciprocity only works if the gift or favour received is seen as fair; if it is seen as an attempt to coerce the respondent, make him feel guilty, or bribe him, the gift has an adverse effect.’

11.6.1 Recommendations on incentives

From the wealth of literature on the topic of incentives, it is possible to distil a few recommendations about the use of incentives, their type, and the means of administration. Nevertheless, the survey researcher is advised to pilot-test alternatives, because none of these findings can be considered to be applicable to all situations and all cultures, and there may be sufficiently large differences, particularly between cultures, that recommendations that are based largely on US practices may not work in other countries and situations.

In terms of the type of incentive, the broad conclusions that emerge from the literature are that monetary incentives work consistently the best (Church, 1993; Kropf, Scheib, and Blair, 1999; Singer, 2002; Singer, Groves, and Corning, 1999; Stopher et al., 2008a; Tooley, 1996; Zmud, 2003; inter alia). Church (1993), in studying mail surveys, concludes that prepaid incentives worked better than promises of an incentive upon survey completion, that monetary incentives were better than gifts, and that increasing amounts of money resulted in increased response. The average value of monetary incentive tested in the surveys reviewed by Church was $1.38. Similar results have been reported in face-to-face surveys (Ryu, Couper, and Marans, 2005). Likewise, Singer, Groves, and Corning (1999) conclude much the same for telephone and face-to-face surveys, finding that the effect of incentives seemed to be independent of the mode of survey (telephone or face-to-face), although they report that the effects of incentives were smaller in telephone and face-to-face surveys than in mail surveys. Singer, Groves, and Corning also conclude that money is more effective than gifts.

There then comes the question of the size of the monetary incentive to be offered. As noted earlier, Kalfs and van Evert (2003) suggest that the amount should be sufficient to be seen as fair, in relation to the burden of the survey and in compensation for potential lack of interest in the survey topic, and in personal reasons for doing the survey. It should not be of such a size as to appear to be an attempt to make the potential respondent feel coerced, bribed, or guilty. How much this will be will clearly depend greatly on the nature of the survey, and the culture and context in which the survey is administered. It is probably always going to be necessary, therefore, to assess the amount of the incentive through a pilot survey that tests different magnitudes of incentive. Referring back to the work of Church (1993), it is clear, given the average size of an incentive in his meta-analysis, that most incentives were probably in the range of $1 to $2. Elsewhere, there are instances of much larger incentives being offered. Singer (2002) notes incentives of $20 and $10 being used in panel surveys. Wagner (1997) reports that he used a $50 incentive for respondents participating in a first-time application of GPS technology in a travel survey. Stopher et al. (2008a) recommend that incentives should be offered in the range of $1 to $2 for the majority of household travel surveys in the United States.

In one of the few studies on this issue, Tooley (1996) finds that there is evidence that incentives offered to a household were much less effective than incentives offered to individual respondents. This is logical, in that the appeal here is to the self-interest of the prospective respondent. Therefore, an incentive offered to an entire household is much less directly in the self-interest of a respondent than is the offer of an incentive to each member of the household who is expected to respond to the survey. Of course, in cases in which the survey involves only a single member of the household responding on behalf of the entire household, this point will not apply.

Another type of monetary or gift incentive is a lottery. Singer (2002) reports that there are only a few studies that have examined the effects of lotteries, and that the conclusions are somewhat mixed. However, she suggests that lotteries act much like promised incentives and that their value effect is probably comparable to the value of the lottery prize divided by the sample size. Thus, a lottery prize of $500 in a survey of 10,000 respondents could be expected to have the same effect as a promised cash incentive of 5 cents per respondent. Given that promised incentives have a much lower effect on response rates than prepaid ones, the effect of a lottery is likely to be rather small.
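
The equivalence can be stated as a one-line calculation. The following Python lines simply restate the arithmetic above; the function name is an illustrative assumption.

def lottery_value_per_respondent(prize, sample_size):
    """Approximate per-respondent value of a lottery prize, following Singer (2002)."""
    return prize / sample_size

# A $500 prize in a survey of 10,000 respondents is worth about 5 cents per person,
# roughly equivalent to a promised (not prepaid) incentive of that amount.
print(lottery_value_per_respondent(500, 10000))  # 0.05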

A further type of incentive that is sometimes offered is that of a charitable donation. Zmud (2003) reports that offers of such contributions have been found to have no effect on response rate, pointing out that Dillman (1978) has noted that self-interest is ‘a very compelling factor in survey participation’, and clearly a charitable donation does not cater to self-interest but, rather, to the altruism that probably already would have persuaded the respondent to complete the survey. In other words, a donation to charity will appeal only to those who are already inclined to respond to the survey out of altruism anyway. In this context, it is interesting to note that Porst and von Briel (1995) (as cited by Singer, 2002), concluding that little is known about why respondents choose to participate in a survey, undertook an analysis that suggested that about 31 per cent of respondents participated for altruistic reasons, 38 per cent because they were interested in the topic of the survey, and 30 per cent for personal reasons (including the fact that they had promised to do the survey).

A study in Canada drew very clear conclusions on the value of charitable gifts, lotteries, and money (Warriner et al., 1996). Even the title of the paper makes clear the conclusions: ‘Charities, no; lotteries, no; cash, yes: main effects and interactions in a Canadian incentives experiment’. The 1994 survey on which this paper is based used three levels of prepaid cash incentives, amounting to C$2, C$5, and C$10. There was a control group that was offered no incentive. These four groups were also offered a secondary incentive, of either a charitable donation ranging from C$2 to C$10, or participation in a lottery with a prize up to C$200. Thus, there were twenty subgroups in the survey, ranging from a group that was offered no incentive, four groups offered only the charitable donation or the lottery, three groups offered only a prepaid cash incentive, and twelve groups that were offered a combination of a prepaid cash incentive and either a charitable donation or participation in the lottery. The lottery was offered to a total of almost 400 participants, so its average value would be 50 cents per participant. Through both a cell-by-cell review and logistic regressions looking for main and interaction effects, it was found that the prepaid cash incentives had a significant effect on response rates. The charitable donations actually appeared to have a depressing effect on response rates, although this was not always statistically significant.

This also conforms with results from another experiment, carried out by Hubbard and Little (1988). Warriner et al. (1996) find no effect from the lottery on response rates, either alone or in combination with the prepayments of cash. They also find that the C$10 incentive achieved an almost negligible improvement in response over the C$5. They point out that, at the time of this survey, C$10 would have approached the average wage rate (it took about a half-hour to complete the survey, so C$10 equates to C$20 per hour), and conclude that this moved the incentive out of the range of reciprocity and into the range of a payment that is in ‘bad taste’. One other study that compared different incentives is worth mentioning, drawn from the transport literature. Goldenberg, Stecher, and Cervenka (1995) report on a pilot survey in which four alternative incentives were tested, consisting of no incentive, a prepaid cash incentive, a gift, and both cash and a gift. In this pilot survey, it was found that the prepaid cash incentive had the largest significant effect on response, and was far superior to the gift. Interestingly, combining the gift and the cash actually produced a depressing effect on the response rate, similar to the experience of Warriner et al. (1996) with combining a cash incentive with a charitable donation.

With respect to administration of the incentives, as the above discussion has already revealed, there appears to be overwhelming evidence that prepaid incentives are far more effective than promised or contingent incentives – i.e., incentives that are sent to respondents who actually complete the survey. Some would argue that sending out incentives to all those who are sampled to participate in a survey is financially wasteful, especially when response rates may be on the order of 40 per cent or less in many telephone and mail surveys. However, apart from the much greater effectiveness (Church, 1993, finds that contingent incentives did not increase response rates), the use of a contingent incentive adds both administrative and equity headaches. For example, using a contingent incentive means that a mailing (in the case of telephone and postal surveys) has to be undertaken to send out the incentive to each respondent, and that there must therefore also be a careful collection of the addresses of those who respond. In addition, the question has to be addressed as to whether or not the quality of the response must be assessed. Clearly, some ‘respondents’ might send back a blank mail survey, in the hopes that the receipt of the survey package will trigger the payment of the incentive. If it does not, then this implies that the survey package must be assessed for completeness. In that case, is the incentive still sent out to respondents who do not provide sufficiently complete responses to be usable, or who fill out the survey form with rubbish? This is a thorny issue, and adds to the headaches of undertaking the survey. It can be avoided entirely by using prepaid incentives.

Singer, Groves, and Corning (1999) consider the effects of differential incentives. Such incentives arise when no incentive, or only a very small one, is offered at the outset of the survey, and an incentive, or a larger one, is then offered in an attempt to convert a refusal into a completed survey. However, as Singer, Groves, and Corning point out, offering differential incentives could have unintended consequences on the survey. Principally, they raise the concern that, when other survey participants get to hear that incentives, or larger incentives, have been offered to other participants, it may make them more mistrustful of surveys in general and also less likely to respond to future surveys. However, their research concludes that most respondents (in the United States) believe that survey organisations use incentives to boost response to the surveys, and that this belief is affected by the respondent’s own past experience of being offered an incentive. They also find that only a half of those who believed that incentives were used also believed that incentives were distributed equally to all participants, and that about 75 per cent of the respondents to their survey felt that the practice of paying differential incentives was unfair. However, they do not find evidence that knowledge of differential incentive payments will significantly affect participation in future surveys, whether the survey is from the same or a different organisation. Nonetheless, Singer, Groves, and Corning conclude that further research is needed on the unintended consequences from differential incentives, especially in their effects on increasing expectations for payment, and the perceptions of inequity in paying differential incentives.

In summary, the recommendations on incentives are that, if they are to be used, they should be monetary in nature, and should be prepaid. Gifts, lotteries, and charitable donations are found to be either less effective or ineffective in increasing response rates and should, therefore, be avoided. Incentives can be expected to have the largest effect on a postal survey, and lesser effect on a telephone or face-to-face survey, although they are significant in all modes of survey. The value of the incentive should be in keeping with the nature of the burden of completing the survey and the likely level of interest in the survey topic. Small incentives, in the range of $1 to $2 per person in the United States, have been found to be quite effective when provided as prepaid incentives in household travel surveys. Incentives should be offered to individual respondents, not to households, even when the survey is a household survey. If incentives are to be applied in a differential manner, to encourage response from those who have previously refused to respond, this may increase perceptions of inequity in the use of incentives, but it does not currently appear to be a significant problem with respect to future participation in surveys.

Finally, given that most studies of incentives have been conducted in the United States, and that mores change over time, it is strongly suggested that any planned incentive should be tested in a pilot survey to determine whether or not it will be effective in the specific context of that survey and the sample population. Furthermore, the pilot survey should also be used to establish the appropriate amount of the cash incentive, to ensure that it is seen as appropriate to the task for which it is offered.

11.7 Respondent burden

Respondent burden is a topic of considerable concern to the survey designer, or should be. It can be defined as the effort, time, and cost required for a respondent to provide answers to a survey. As pointed out in one definition on the Web (Canadian Outcomes Research Institute, 2008), completing a survey can cause the respondent to experience annoyance, anger, or frustration, feelings that will be exacerbated by the complexity, length, and frequency of surveys. In tangible terms, burden can be measured by the time required to complete the survey and to comply with any requests made by the survey, and by any actual costs that have to be incurred by the respondent in completing it. It also has intangible components, such as the effort required to think about the survey topics and to respond with answers to the survey questions. The tangible effects can generally be measured, while the intangible effects are likely to remain strictly perceptual.

There can be little question that respondent burden affects the response rate to a survey. The more burdensome a survey appears to be, or actually eventuates to be, the lower the response to it is likely to be. Therefore, survey designers need to be acutely aware of the burdens they are creating in the surveys they design. In 1978 Bradburn (1978) suggested that respondent burden was rather like the weather: everybody talked about it but nobody did anything about it. An internet search suggests, unfortunately, that little has changed since 1978. There is still a fair amount of talk about respondent burden, especially in the medical and transport planning literatures, but there are few concrete studies on the topic and little in the way of clear recommendations as to what to do about it. In two international conferences on travel survey methods, in 1997 and 2001 (TRB, 2000; Stopher and Jones, 2003), workshops were organised on the topic of respondent burden (Ampt, 2000, 2003; Ortúzar and Lee-Gosselin, 2003). Ampt (2003) proposes a framework for analysing respondent burden that involves a series of ‘Yes’/‘No’ questions to be answered by the survey researcher about prospective respondents to a survey. The framework is shown here in Figure 11.2.

11.7.1 Past experience

In most instances, the survey researcher can only guess at the past experience of a potential respondent with respect to burden. However, assessing the overall amount of surveying that goes on in the country or area where the survey is to be done will provide a reasonable indicator of how important this is likely to be. Respondent burden in this area could be defined simply as the number of survey contacts the respondent has experienced within a given period of time. However, this may not always be an indicator of future response. Although DeMaio (1980) reports that one of the more common reasons for not responding to a survey was unfavourable previous experience as a survey respondent, Presser (1989) reports the contrary: that respondents in a panel survey (a survey repeated on a number of occasions with the same respondents) who had refused to respond to an early wave, but were approached again on a subsequent wave, often did respond when re-contacted. This suggests that burden may not always be perceived as being cumulative.

Figure 11.2 Framework for understanding respondent burden (a flow chart of ‘Yes’/‘No’ questions covering past experience of burden; the appropriate moment and medium; relevance and the ‘feel good’ factor; the perceived attitudes and opinions of external people; difficulty – physical, intellectual, and emotional; and willingness)


11.7.2 Appropriate moment

The appropriateness of the moment relates to when the potential respondent is expected to undertake the survey. In the case of a face-to-face or telephone interview, it is the moment when the interviewer makes contact with the potential respondent and requests him or her to undertake the survey. In the case of a postal or internet survey, it is the moment when the potential respondent elects to undertake the survey. Clearly, in the case of postal and internet surveys, the potential respondent has much greater control over the moment at which the survey is done. While a response to a telephone or face-to-face interview can always be a request for a later callback, there is much less perceived control by the respondent. One advantage of postal and internet surveys is therefore the respondent’s greater perceived control over the appropriate moment.

The appropriateness of the moment relates both to whether the respondent perceives that he or she has time to undertake the survey at this moment, and also to whether the respondent is engaged in some other activity that he or she is willing to forgo for the time it takes to do the survey. There will certainly be cases when some people perceive themselves to be so busy that there is never a good time to be able to respond, while others may have particular times when they consider that a survey response would be inappropriate, for example on a day set apart for religious activities.

11.7.3 Perceived relevance

The perceived relevance has already been discussed in relation to incentives. Richardson and Ampt (1993) point out a real danger in the initial contact. In a survey, they note that a pre-notification letter was sent out to prospective respondents, addressed to the ‘Householder’. Many renters interpreted this to mean that it was addressed to the owner of the house, not the tenant, and some became quite distressed when follow-up reminders were sent, because they had already assumed that the survey was not relevant to them. This clearly illustrates the impact that the initial contact can have in showing relevance. In a similar vein, if the pre-notification letter, or other first contact, comes from someone who is familiar to respondents and has a perceived relationship to the topic of the survey, this will help to define the relevance of the study with little effort.

In terms of the relevance of the survey, Ampt (2003) points out that it is not just a matter of the topic itself, which is often beyond the control of the survey researcher, but also relates to such issues as whether or not the respondent believes that the survey will help deal with the problem or issue about which the survey is being carried out. Moreover, there is the question of whether or not the survey materials provide sufficient information about how the respondent’s information will be used. The relevance of the survey should be addressed throughout the survey, especially when questions are being asked that may not appear to the respondent to be directly relevant to the purposes of the survey. Of course, the purpose of the survey should be stated clearly at the outset, and no survey should be carried out without a clearly defined purpose. Perhaps of equal importance is the fact that, after a survey is completed, there should be media releases or other information that shows that the survey data have been used for the stated purposes. As Ampt (2003) points out, this is, unfortunately, rarely done, although national censuses show a good example of this, as there are often news articles provided a year or two after the census that demonstrate how the data are being used.

11.7.4 Difficulty

Chapters 8 and 9, which deal with survey design and question wording, addressed a number of the issues that may contribute to difficulty in completing a survey. Even though a survey may be clearly relevant to a potential respondent, and may come at an appropriate moment, the respondent may not complete the survey if he or she finds it too difficult. If the respondent does proceed to complete the survey, but finds difficulty in doing so, then the burden is substantially increased. Ampt (2003) suggests that difficulty may be manifest in three different forms: physical, understanding or perception, and emotional.

Physical difficulty

Ampt (2003) points out that it is important to realise how people are often questioned in a survey. For example, the respondent may simply hear questions from an interviewer with no visual cues; or a respondent may hear questions but receive occasional visual cues, such as a show card; or a respondent may read the survey questions on paper or on a computer screen, and be asked to respond by entering answers; or there may be some combination of these methods of administration.

Among physical difficulties, the respondent may have visual impairment ranging from blindness to difficulty in seeing, even with corrective lenses. Another, increasingly common, physical difficulty is the inability to read or to read at the language level of the survey questionnaire. This would tend to favour an interviewer-administered survey. Two other elements may contribute to visual difficulty. The first of these is colour blindness. Many people are colour-blind between red and green, and the use of these colours in a survey may render the survey much more difficult to read for such respondents. The other issue in visual difficulty, which may also relate to some aspects of colour blindness, is that of the contrast between the print and the background. The use of strong colours in the background of the survey form may make the printed letters difficult to discern. Likewise, the use of a pale type colour against a light or white background may also produce difficulties.

On the other hand, Armstrong (1993) suggests that the majority of people (70 per cent) understand things better when they are presented visually than when they are simply heard, which would tend to favour the use of written survey forms, either on paper or on a computer screen. Failure to follow much of the guidance provided in Chapters 8 and 9 of this book on layout and wording will add to the physical difficulty of responding to a written survey. Other people may have hearing difficulties, which may make responding to an interviewer-administered survey difficult, while language differences and even accent differences can cause problems of understanding.


Other difficulties that can arise in completing a survey include being unable to find an appropriate writing instrument, having nowhere to rest a survey questionnaire so that it can be filled in (e.g., trying to complete an on-board bus survey while standing in a crowded bus), being unable to find a suitable quiet space in which to respond to the survey, and weather effects when the survey is conducted out of doors. All these issues will increase respondent burden, unless appropriate strategies are implemented to mitigate them. For example, in carrying out self-administered on-vehicle surveys (e.g., surveys on a bus or train), the author has used stiff cardstock for the survey questionnaire, so that finding a place to rest the form to be able to fill it in is much less of a problem; in addition, respondents have been provided with a short pencil with the survey form, so that lack of a writing instrument is not a problem. Such mitigating steps can be introduced to reduce the difficulty of completing the survey.

Intellectual difficulty

Intellectual difficulties may arise in connection with the ability of the person to comprehend the survey, perceptions about the length of the survey, and perceptions about the image conveyed by the survey. In the area of comprehension, there are generally two difficulties that arise. The most common one is the language level used. Because many of those who design surveys are well educated, there is a tendency to use words and phrases that are at a relatively high educational level. This phenomenon has already been addressed in Chapter 9 of this book, but it certainly bears repeating here in the context of burden. If a person is unable to understand the terms and phrases used in a survey, whether in a written or a computer survey, or in an interview over the telephone or face to face, this lack of understanding creates a burden. A second aspect of comprehension has to do with the use of technical terms and phrases, which may be well understood by those creating the survey but not understood by the general public. A classic example occurs in the field of transport planning, in which the profession uses the word ‘trip’ to mean a one-way travel movement that involves no stops, except for travel-related reasons, and represents the movement from an origin to a destination. Unfortunately, among the survey populations of interest to the profession, the word ‘trip’ may be interpreted as a long journey, such as one might do once or twice a year for holiday purposes, or the result of taking hallucinatory drugs. It is almost never interpreted in a manner similar to that of the professional meaning. Even contemporary dictionaries tend to define the travel-related aspect of the word ‘trip’ as an excursion or journey, especially for pleasure. In no commonly used dictionary is it defined in accordance with its usage in the transport profession.

Another aspect of comprehension is the use of long sentences or phrases in questions or in instructions. Most people prefer short sentences, with little or no ‘clutter’. Long questions become burdensome, as the respondent tries to follow the questioning and pick out the salient meanings. This may lead to misunderstanding of what is being asked, or just to significant periods of concentrated effort on the part of the respondent, which is burdensome.


Other issues relating to the survey layout and wording are also dealt with elsewhere in this book. For example, requiring respondents to write in lengthy responses to questions is burdensome. Providing response categories may reduce the burden significantly, although this will be so only if the response categories cover all situations that a respondent may wish to convey, and if the response categories clearly relate to the question in the mind of the respondent. Ampt (2003) suggests another example from transport planning that illustrates a potential mismatch between a question and a response category. She suggests that a question asking about ways in which the respondent’s employer helps with journey to work costs may have a response category of ‘Teleworking’. Although this response category may make complete sense to the transport professional, it may be much less likely to relate to the question in the mind of the respondent.

Another aspect of survey design that affects this aspect of difficulty is repetitive questioning. A survey that asks about behaviours that may be repeated on a daily basis, and that seeks to establish the frequency and nature of these behaviours over a period of time, such as a week, may involve repetitive questions for each day of the subject week. These repetitive questions will create a sense of frustration in the respondent, which is another aspect of burden that falls within this general category of perceptual difficulty. A further element of this aspect of difficulty is a survey that appears to be very thick – i.e., has many pages. Respondents may look at the thickness of the questionnaire and deduce from this the amount of time it will take to complete. The appearance of the layout can also make the respondent feel that the survey will be difficult. This is conveyed especially by a design that is cluttered and leaves little white space, as well as a design in which response categories are not lined up.

A final issue relating to perceptual and understanding difficulty relates to the use of computers in the survey, whether being used by the interviewer or the respondent. There are still a significant number of people who are unable to use a computer themselves, as well as a substantial number who, while able to use a computer for certain rudimentary tasks, are not able to use the internet. The use of a computer in the survey can, therefore, add perceptual difficulty for people who are uncomfortable using computers. Another aspect of the use of computers is included in the final category of difficulty, that of emotional difficulty, as explained in the next subsection.

Emotional difficulty

Ampt (2003) suggests that there are three main areas of emotional difficulty, all of which relate to fear in one form or another. The first of these is that the survey may remind respondents of examinations, which are generally stressful and which some respondents may have always found difficult. The association between a survey questionnaire and an examination results in a fear response that would increase burden significantly.

Second, respondents may be afraid to give the wrong answer to survey questions. As Ampt (2003) points out, respondents do not know implicitly that they will not be exposed to right and wrong answers at the end of the survey, and so may have a fear, during the survey, of providing the wrong answer and being made to look foolish later. The feeling that there are right and wrong answers may increase the perceived difficulty of the survey to a respondent, who may feel it important to try to deduce what the ‘correct’ answers might be. This fear may also lead some respondents to provide much more information than the survey designers intended, which may also add to the burden on the respondent.

Third, the use of computers or other technological devices in the survey may also create an element of fear not just from those who are not comfortable with computers and other aspects of modern technology but also from some who are highly knowledgeable. This fear stems from the ‘Big Brother’ implications of the use of a computer, either as a means of recording or storing the data (Ampt, 2003). This fear may also arise simply from lack of familiarity with the use of computers or other technological devices for what is certainly a decreasing segment of the population.

Reducing difficulty

As has been alluded to a number of times in this subsection, other chapters in this book have already addressed some of the issues that relate to difficulty in responding to a survey. There are many wording, layout, and design issues that are identified as a source of burden, and also, therefore, as a way to reduce burden. Of course, it is always important to ensure that an assessment is carried out as to the extent to which a particular population of interest for a survey may be likely to incur certain problems with the survey. For example, surveying an elderly population will be likely to result in much more frequent encounters with vision and hearing impairment, lack of familiarity with computers and other technological devices, and even language problems, in that the meanings of certain words may have changed over time, or be used differently in modern idiom. Such a survey would need to pay particular attention to issues of difficulty that arise from these particular concerns.

11.7.5 External factors

Figure 11.2 also shows certain external factors that may affect response burden. Principally, these are the attitudes and opinions of others, the ‘feel good’ effect, and the appropriateness of the medium of the survey.

Attitudes and opinions of others

Ampt (2003) suggests that there are two levels at which the influence of other people may arise, either with other people being present when the survey is being answered, or with other people not being present for whom there is a necessity that they should be present. In the first case, other people may be present during an interview, or even when a person is completing a self-administered survey, and the attitudes and opinions of others may affect the respondent, either by making the respondent feel uncomfortable, or by constraining the answers that the respondent is asked to provide. For example, a respondent may not wish to reveal to the person present a particular fact or attitude in answering the survey. In addition, the presence of someone who disapproves of the survey, the particular issues of concern in the survey, or the agency for which the survey is being carried out will result in burden to the potential respondent, and may result in a decision not to participate.

The perceived attitudes and values of people who are not present can also affect respondent burden, in at least three different ways. In the first situation, it may be that certain people need to be present with the respondent, such as for a woman respondent who feels that there must be a man in the house at the time the interviewer is there, or a youth who feels that a responsible adult should be present. In both these cases, the absence of the required person will add to the burden and will be likely to lead to a refusal to complete the survey. The second situation might be when an elderly person relies on someone else’s memory but that person is unavailable at the time of the survey; or when a person does not have good spatial sense, and relies on someone else to remember locations or routes, but that person is not present (Ampt, 2003). The third situation is one in which other persons who are not present may have made the potential respondent aware of their lack of approval for the survey, for whatever reason. Completing the survey, even when these people are not present, will add to the burden perceived by the respondent.

Both these aspects of the attitudes and opinions, and even just presence or absence, of other persons can be mitigated by allowing the respondent a choice as to when and where the survey is to be answered. In addition, the survey researcher needs to be sensitive to the needs of some people to have another person present, such as the elderly, youth, and women.

The ‘feel good’ effect

This effect, contrary to all the others discussed in this chapter, is one that can reduce burden. Ampt (2003) suggests that there is increasing evidence that this is an important issue for many potential survey respondents. Depending on the topic of the survey, efforts should be made to reinforce such feelings throughout the survey, and particularly on completion of the survey, especially if others who are known to the respondent are going to be asked to participate in the survey. As noted earlier, in the section on incentives, one study (Porst and von Briel, 1995) found that altruism – the ‘feel good’ effect – was the motivator for 31 per cent of respondents. Given the age of that study and the increasing presence and awareness of negative factors (e.g., global warming, pollution, terrorism, unrest, security, etc.), it is likely that the percentage of people who are motivated by altruism to respond to a survey that is clearly in the public or community interest will have increased well above 31 per cent nowadays.

This effect may be enhanced in any given survey by reiterating, especially at the end of the survey, how the respondent’s contribution will have assisted the particular purposes of the survey. Commenting throughout the survey that the respondent’s answers are helpful will also contribute to this effect, although, like everything else in survey design, this should not be overdone, or it may lead to suspicion about the true purposes of the survey, or be interpreted as insincere.


Appropriateness of the medium

The final external effect on respondent burden relates to the appropriateness of the medium of the survey – e.g., a self-report questionnaire, telephone interview, face-to-face interview, etc. For example, attempting to undertake a face-to-face interview in an environment that is very noisy, making it difficult for interviewer and respondent to hear one another, will contribute significantly to the burden of the survey. Similarly, distributing self-report questionnaires on a short bus ride, with respondents expected to complete the survey and return it before leaving the bus, will add to the perceived burden. A lengthy telephone interview made to a person while at work is another instance of an inappropriate medium for the survey.

Considerable care needs to be taken to anticipate the situations in which respondents are likely to receive the survey, and to consider how responses can be obtained without placing stress, and therefore a burden, on potential respondents. Decisions about the medium of the survey are, therefore, not dictated solely by considerations of cost and design, but must also take into account issues of respondent burden.

11.7.6 Mitigating respondent burden

The reader is referred back to Chapter 4 of this book on ethics at this point. Many of the ethical requirements of surveys are also aimed at reducing the burdens on respondents, and assume some substantial importance in that regard. A set of guidelines has been suggested by the Medical Outcomes Trust (1995) for reducing burden, and many of these are related to ethical issues as well. The five criteria suggested by the Medical Outcomes Trust are as follows.

(1) No undue physical or emotional strain should be placed on respondents, and particular respondents for whom a survey instrument is not suitable should be identified.

(2) The average time needed to complete the survey and the range of times needed should be estimated and made available to respondents at the outset of the survey.

(3) The reading and comprehension level that has been assumed in designing the survey should be specified.

(4) Any special requirements to consult other materials in order to answer survey questions should be identified.

(5) In reporting on the survey, the level of missing data and refusal rates to undertake the survey should be reported (as has also been advocated elsewhere in this book).

These are useful guidelines that could readily be adapted to most surveys of human populations, and they would certainly mitigate some of the issues of burden identified in this section. However, there are three other strategies that will have a considerable effect on respondent burden. First, there should always be a pilot survey undertaken prior to the main survey, to assess the level of burden in a particular survey. This is the topic of the following chapter of this book, and is not dealt with further here. However, it ought to be stressed that a clear purpose of a pilot survey should be to assess the level of respondent burden.

Second, it cannot be recommended too strongly that the survey designer should also undertake the survey. However, this alone will not be sufficient, because it will not detect problems of comprehension or word use, as the survey researcher is already familiar with the language in the survey. It would be useful for the survey researcher to observe the implementation of the survey, whether an interview or self-administered survey. During observation, particular note should be taken of places where the respondent hesitates, changes a previous answer, or refers back to instructions. These places will usually indicate where there are burdensome problems in the design of the survey. In interview surveys, the survey researcher should also undertake some interviews. Doing so will often reveal problems in the survey that increase the burden on the respondent, whether from language, visual cues, or other issues.

Third, one of the main strategies that agencies have adopted to reduce respondent burden, as is evident on the internet, is that of combining administrative data with survey data. Government agencies in Australia (United Nations Economic and Social Council, 2005), Canada (Michaud et al., 1995), and Denmark (Stetkær, 2004), among others, have embraced the strategy of using administrative data to replace some of the data that would normally be collected through respondent surveys. In each case, the main area in which this has proved possible has been business surveys.

Another interesting approach is documented from Brazil (Rodrigues and Beltrão, 2005), in which the census consisted of a short form asking twenty-three questions per person and a long form with ninety-three questions per person. In this case, the respondent burden was reduced by creating a split design, in which respondents were allocated one of three component sets from a split design using the ninety-three questions. The design consisted of a core set of questions administered to all respondents, and then the remaining questions were split into three sets, with each set being administered to only one group of respondents. The missing questions for each group were then created synthetically by using hot deck imputation (described in detail in Chapter 20). The authors of this study conclude that the split design reduced not just respondent burden but also the cost of the survey. They also note that an alternative way of looking at this was that respondents could be asked more questions without increasing the burden from the original design.
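
As a minimal sketch of how such a split design might be administered, the following Python fragment allocates each respondent a core set of questions plus one of three subsets. The question counts and the random allocation rule are illustrative assumptions only, not details of the Brazilian census design beyond the idea of a core set plus three rotating subsets.

import random

# Hypothetical split of a long form into a core set plus three subsets.
CORE = ['Q%d' % i for i in range(1, 31)]                  # 30 core questions
SPLITS = {'A': ['Q%d' % i for i in range(31, 52)],        # 21 questions each
          'B': ['Q%d' % i for i in range(52, 73)],
          'C': ['Q%d' % i for i in range(73, 94)]}

def assign_questionnaire(respondent_id):
    """Allocate a respondent to the core questions plus one randomly chosen split."""
    split = random.choice(sorted(SPLITS))
    return {'respondent': respondent_id, 'split': split,
            'questions': CORE + SPLITS[split]}

# Each respondent answers 51 of the 93 long-form questions; the unasked items
# would later be filled in synthetically, for example by hot deck imputation.
print(assign_questionnaire(1)['split'])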

In conclusion, respondent burden is a serious issue that needs to be addressed at all stages of the design and execution of a survey. While respondent burden is frequently discussed in the literature, the number of instances in which it is reported that real efforts have been made to reduce respondent burden through specific designs, such as the split survey design in Brazil, is still relatively small. The framework and understanding of response burden documented by Ampt (2003) seems to provide the most effective platform from which to understand and set about reducing respondent burden, pointing to numerous areas in design and execution that may be effective. Ampt (2003) also concludes that the issue of respondent burden clearly calls for more emphasis to be placed on the possibility of using multiple methods of surveying, so that respondents have more choice about and control over how, when, and where they are surveyed.

11.8 Concluding comments

This chapter has addressed a number of design issues for the procedures of data collection. These include the number and type of contacts, who should respond to the survey, how to define what constitutes a complete response, sample replacement, the use of incentives, and respondent burden. These are important issues, which need to be addressed carefully in the design of any human subjects survey, and which are also critical in determining the costs of surveys. They also have broad implications for the success of any human subjects survey. Failure to address these issues in the design phases can lead to serious biases in the resulting survey data, and to major dissatisfaction on the part of the clients of the survey and the respondents in the survey alike.

These topics and the material of this chapter complete the treatment in this book of the design of surveys. The remaining chapters of this book deal with various aspects of the execution of the survey, with the statistical background of surveys and the computation of sampling errors, sample sizes, and population characteristics inferred from a sample, and with the assessment of the quality of a completed survey.


12 Pilot surveys and pretests

12.1 Introduction

Before a survey is committed to the field, the various designs and procedures of the survey should be tested. Despite everything that has been written on the topic of the design of surveys, the ways to word questions, the procedures to use, and so forth, each time a new survey is put in the field, new issues will arise and different interpretations and responses will occur among the respondents. Pilot surveys and pretests are the procedures that are used to test and refine the survey before it is actually fielded (see, for example, Freeman, 2003). Unfortunately, those commissioning surveys are often ignorant of the needs for pilot surveys and pretests, or assume that they are unnecessary in this particular case, and therefore do not budget either the time or money to undertake them. However, it has been proven time after time that surveys that are fielded without testing result in problems that could have been avoided, had such prior testing been undertaken (Brace, 2004).

Pilot surveys and pretests can be used for many purposes. Most obviously, they are used to test aspects of the survey, to make sure that everything in the survey works as intended. However, they can also be used to test alternative approaches to various aspects of the survey design and execution, to assess response rates and completion rates of a survey, and to refine the instruments and protocols to be used.

In the literature, the terms ‘pilot survey’ or ‘pilot test’ and ‘pretest’ are often used interchangeably. However, it is suggested that these are actually two rather different procedures, and that the terms should be used carefully to refer to these different procedures and not used interchangeably. A pilot survey or pilot test should be a complete run-through of the entire survey process – a dress rehearsal for the full survey execution. In the pilot survey, all aspects of the survey should be tested. In contrast, a pretest is a test of one or more elements of the survey, in isolation from the rest of the survey process, and may be used to refine such discrete elements of the survey as individual question wordings, or question ordering, etc.

Referring to questions that should be asked and answered at the stage of planning a survey, Yates (1965: 48–9; emphasis added) says: ‘If prior knowledge in these matters is not available a pilot or exploratory survey will be necessary. Even if there is adequate knowledge of the statistical properties of the material, pilot surveys are frequently advisable in large-scale surveys in order to test and improve field procedures and schedules, and to train field workers.’ Similarly, Cochran (1963: 8) states: ‘It has been found useful to try out the questionnaire and field methods on a small scale. This nearly always results in improvements in the questionnaire and may reveal other troubles that will be serious on a large scale, for example, that the cost will be much greater than expected.’ Kish (1965: 51; emphasis in original) states: ‘To design efficiently a large sample in an unknown field, a pilot study may be conducted prior to the survey, to gain information for designing the survey.’

Although Yates, Cochran, and Kish do not specify that pilot surveys have to be undertaken, these statements do indicate that pilot surveys should be considered to be essential unless there is considerable prior survey research experience with the subject population, and that large-scale surveys need pilot surveys. None of these authors define ‘large-scale’, but it is probably reasonable to assume that this involves any survey with a sample size in excess of a few hundred respondents.

As further support for the notion that pilot surveys should be undertaken for almost all surveys, Dillman (2000: 146–7) states that ‘pilot studies frequently result in substantial revisions being made in the survey design, from adding additional contacts or an incentive to improve response rates, to eliminating or adding survey questions’. Further, the American Association for Public Opinion Research (AAPOR) quality guidelines (quoted by Biemer and Lyberg, 2003) state: ‘Pretest questionnaires and procedures to identify problems prior to the survey. It is always better to identify problems ahead of time rather than in the midst of the survey process.’

In this chapter, several issues relating to pilot surveys and pretests are discussed. Initially, the definitions of these two procedures are elaborated. The purposes that are served by each procedure are described and the times when they should be implemented are discussed. The selection of respondents for use in pilot surveys and pretests is discussed next, together with a discussion of the sample sizes needed for pilot surveys and pretests. Finally, comments are provided on the likely costs and time requirements for pilot surveys and pretests.

12.2 Definitions

It is important to have a clear understanding of what defines a pretest and a pilot survey. Richardson, Ampt, and Meyburg (1995) also use another term in this connection: ‘skirmishing’. As used by them, the term skirmish refers to the preliminary testing, sometimes even of the feasibility of a survey for the intended purpose. In the terms that are used in this book, a skirmish could also include focus groups that are convened for the purposes of designing a survey, as the focus groups may help to define the questions to be asked, or the procedure to be used for the survey, or the type of instrument. In the event that there is no question that a survey is feasible, then the skirmishing may involve preliminary versions of questions, and even trials of the survey method – e.g., postal, face-to-face, telephone, internet, etc. It takes place at a time when the various stages of the survey are still under development, and is used to help narrow down the options.

Following the skirmishing, a series of pretests may take place. A pretest is defined as ‘a test of any element, or sequence of elements of a survey, but comprising less than the full survey execution’ (Stopher et al., 2008a). Biemer and Lyberg (2003) define pretests as ‘small studies using informal qualitative techniques’ that are used to acquire information that helps in the design of the survey. Elements that may be pretested could range from individual question elements through to two or three elements of the final survey, such as the sampling plan, recruitment procedure, and survey questionnaire. Pretesting may be used to refine individual components of the final survey design, especially where something new and different is being attempted or a population is being targeted for the survey that has not been surveyed previously. Pretests may be used to compare different approaches to any element of the survey materials and procedures, as well as testing whether a specific design will work. Thus, pretests serve the role of refining individual elements of the survey, either in isolation or together with one or two other elements.

Pilot surveys, also known as pilot tests, are a full run-through of the entire survey process. Biemer and Lyberg (2003) define pilot surveys as surveys ‘to obtain information that can improve the main survey’. They then define dress rehearsal as ‘a miniature of the main survey, conducted close to the main survey to reveal weaknesses in the survey design’, and generally to perform those functions described in this book for a pilot survey. The pilot survey should include drawing a sample according to the methods to be used in the final survey, the recruitment and administration of the survey, coding and data entry, and the analysis of the results, including archiving procedures. By undertaking this complete run-through of the survey process, it can be determined whether or not all steps of the survey work together as designed and also whether there are any unexpected results. There are usually two types of pilot surveys. The first is one in which a single version of the entire survey procedure is administered. The second is one in which some elements of the survey are offered in more than one version, and the pilot survey has a secondary purpose to test the efficacy of the alternatives.

Yates (1965) also describes some of the roles of pilot surveys, and specifies these as:

(1) providing information on the various components of variability within the subject population;
(2) the development of fieldwork procedures;
(3) testing questionnaires;
(4) training interviewers;
(5) the provision of data for estimating survey costs; and
(6) determining the most effective type and size of sampling unit.

Richardson, Ampt, and Meyburg (1995) list eleven purposes for a pilot survey, namely:

(1) determining the adequacy of the sampling frame;
(2) determining the variability of respondent characteristics within the survey population;
(3) estimating the response rate;
(4) testing the method of data collection;
(5) testing the question wording;
(6) testing the layout of the survey questionnaire (for self-administered and interview surveys);
(7) testing the adequacy of the questionnaire in general;
(8) determining the efficiency of interviewer and supervisor training;
(9) reviewing the data entry, editing, and analysis procedures;
(10) determining the cost and duration of the survey; and
(11) determining the efficiency of the survey organisation.

These two sets of purposes are clearly consistent one with the other and provide a good guide to what should be expected from a pilot survey. Goldenberg, Stecher, and Cervenka (1995) report on a pilot survey (incorrectly called a pretest in their paper) that tested a number of alternatives. It was a pilot survey because it involved a complete run-through of the survey procedures, from sampling through to data entry and preliminary analysis. However, this pilot survey involved the testing of a number of different options for the final survey. These included three alternative incentives (money, a gift, and both a gift and money), a short and a long version of the printed survey (both of which involved the same telephone retrieval interview), a booklet and a leaflet format for the questionnaire, and two methods of retrieving the data from respondents – by post and by telephone. This actually represents a total of twenty-four alternatives to be tested. In their pilot survey, Goldenberg, Stecher, and Cervenka did not test twenty-four different versions of the survey but, rather, used a method of overlapping designs that resulted in twelve combinations of designs. Each of these twelve designs was tested on a sample of 100 respondents, making a total pilot survey sample of 1,200 households. To place this in context, the actual survey for which this was a pilot survey had a desired sample size of 5,000 households. Therefore, the pilot survey, in this case, had a sample size that was 24 per cent of the final sample size. The results of this pilot survey led to the adoption of a monetary (only) incentive, the long version of the survey form, a leaflet format, and retrieval by telephone. However, it was also decided to ask respondents to return their filled-out questionnaires by post, after the data had been retrieved by telephone, which was a retrieval idea that arose only as a result of the pilot survey.

One additional type of preliminary test is a ‘rolling pilot survey’ (Pratt, 2003), which is defined as using the first two or three days of the main survey to assess whether the survey is proceeding as expected. However, such a pilot survey can allow only minor adjustments at this stage, whether to survey instruments, procedures, or other aspects of the survey, and these may not be practical if extensive printing is required for the survey questionnaire. A rolling pilot survey is particularly useful when a full pilot survey cannot be undertaken because both time and money are limited and when the survey has been used previously either in time or location, so that extensive changes are not anticipated. Nevertheless, even if only relatively minor changes are made, the surveys from these first few days may still have to be discarded and the overall sample size adjusted upwards to compensate. Of course, if there is a significant pause between the rolling pilot survey and the balance of the main survey, then this becomes no different from a full pilot survey.

An important element of pilot surveys in particular is that they afford the opportunity to determine more than what can be learnt simply from looking at the data. When the survey is to be administered by interviewers, whether by telephone or face to face, the opportunity should always be taken to debrief the interviewers after the pilot survey, to learn of things that were awkward, things that didn’t work, and feedback provided by respondents about the survey. The supervisors of the interviewers should also be debriefed. Survey designers should also take the opportunity, when possible, to listen in to interviews in order to detect nuances and problems that may not even be evident to the interviewer.

If the survey is a postal or internet survey, then, if time and budget permit, focus groups should be organised after the completion of the pilot survey to gain reactions from those who were selected to do the survey. Ideally, this should include a focus group of respondents and one of non-respondents. There is enormous potential for learning how to improve the design and conduct of a survey by following this recommendation. However, to the author’s knowledge, this is very rarely done in reality.

12.3 Selecting respondents for pretests and pilot surveys

12.3.1 Selecting respondents
There is a wide range of practices with respect to the selection of respondents for any of these preliminary tests of the survey. One major concern that must always be kept in the forefront of consideration is that the selection of respondents for the preliminary testing should not have consequences for the final sample used in the main survey. There are at least two important ways in which the selection of respondents could have significant and important consequences for the main survey sample. The exception to these concerns would be when the survey involves observation and no direct participation.

The first potential impact arises if the survey population is not very large in comparison with the sample sizes to be used for both the preliminary testing and the main survey and the decision is made not to ask those who are recruited for preliminary testing to participate in the main survey. In a case in which the combined sample sizes represent a significant proportion of the total population, it is important that the main survey sample is chosen first, and the preliminary testing samples are chosen from the remainder of the population, none of whom will be needed for the final survey. If this procedure is not followed, then the sampling for the main survey will no longer adhere to strict random sampling (see Chapter 13), because those respondents who were used in the preliminary testing are excluded from the population before the main sample is drawn, thereby violating the principle of having an equal probability of being sampled from the full population.

However, it is necessary to define what is meant, in this case, by ‘small’ and ‘large’. Consider, for example, the situation in Dallas–Fort Worth, for the survey reported on by Goldenberg, Stecher, and Cervenka (1995). In this case, the total population of the survey region was about 1.5 million households. The number of households that it was estimated should be approached for the recruitment for the main survey, to yield 5,000 completed household surveys, was about 9,000 households. For the pilot survey of 1,200 households, this involved an initial selection of about 2,500 households. For the main survey, each household in the region had approximately a 0.006 probability of being selected into the sample. Further, if the pilot sample was drawn first, and then the main sample was still drawn from all households, the probability that a household in the pilot sample would also be drawn in the main sample is still about 0.006, which, across a sample of 9,000 households, with a sample of 2,500 being used for the pilot survey, would lead to the expectation that about fifteen households would be selected for the main sample that had also been approached in the pilot survey. The need to then exclude these fifteen households could potentially bias the final sample. This would suggest that, possibly, only small national samples in fairly large countries might avoid this potential bias.
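The order-of-magnitude argument can be checked with a few lines of arithmetic (a minimal sketch in Python, using the figures quoted above for the Dallas–Fort Worth example):

```python
population = 1_500_000      # households in the survey region
main_attempted = 9_000      # households approached for the main survey
pilot_attempted = 2_500     # households approached for the pilot survey

p_main = main_attempted / population          # ~0.006 chance of selection per household
expected_overlap = pilot_attempted * p_main   # expected pilot households re-drawn: ~15

print(round(p_main, 4), round(expected_overlap))
```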

The risk-averse position that this suggests is that the main survey sample should always be chosen first, and the samples required for testing the survey be selected from the remainder. This is perhaps a little more difficult if the preliminary testing is being done to establish the needed sample size. However, even in this case, it is usually possible to estimate the rough size of the sample that will be needed and to select and set aside the approximate main sample requirements before selecting any of the test respondents. Even if it is then necessary to return to the population to sample a small number of additional respondents, the biases caused by so doing will be very small, especially in comparison with the other sources of bias that are present in all surveys.

The second potential impact arises when participants in the preliminary testing are also included in the main survey. In this case, participation in the preliminary testing may have sensitised participants in some way, so that their responses to the main survey will now be different from what they would have been had they not participated in the preliminary testing. As a general rule, respondents in preliminary testing should not be included in the main sample, except in surveys that involve observation only, unless the observations will also have a tendency to sensitise the participants. At the same time, most preliminary testing should involve using participants who are very similar to those who will participate in the main survey. If this is not done, then the results of the preliminary testing may fail to uncover issues that will affect the main survey. However, there are exceptions to this rule.

In the case of skirmishing, testing of the survey elements may be done quite well by colleagues of the survey designer. Brace (2004) defines testing with colleagues as an informal pilot survey, and suggests that this is the minimum level of testing that should always be done. He also suggests that, when the survey involves interviewing, the informal pilot survey should be conducted by the person who is designing the survey instrument. However, Brace also points out that there are a number of reasons that such informal testing will not identify potential problem areas in the survey. These include the fact that the colleagues may be more familiar with the terminology used in the survey and may, therefore, not pick up on wording that may be difficult for the main survey participant. Moreover, colleagues will be aware that this is a test, and may not, therefore, give as much thought to responses as would the real participant. If colleagues also do not match intended survey participants in terms of such things as the eligibility criteria or demographics that are the focus of the main survey, even asking colleagues to imagine that they are that sort of person will not reveal many potential flaws in the survey design. On the plus side, Brace points out that those who are familiar with survey design issues may also be quicker to spot problems in question design than the target population, so that some of these issues may be identified more efficiently with the use of colleagues. For this reason, it is best to limit such informal testing to the initial skirmishing, in a situation in which a full pilot survey will be held with a sample of the target population. In this case, knowledgeable colleagues are likely to be able to help refine various aspects of the early designs of the survey.

Another option for preliminary testing is to select participants from a population that is similar to, but not the same as, that for the main survey. For example, a survey that is to be conducted on a geographically limited population could be tested in a different geographic area. However, the limits to such testing are the difficulty of finding another population that is sufficiently similar, and the potential that some apparently unrelated characteristic of one area compared to another may have significant effects on the completion of the survey. Richardson, Ampt, and Meyburg (1995) relate an example of this that concerned a household travel survey. The survey had previously been conducted in Brisbane (which was effectively the pilot survey for our purposes), and was then to be conducted a year later in Melbourne. However, due to differences in the proportion of holiday homes in the two metropolitan areas, the proportion of immigrants from overseas, and the incidence of people who reported themselves as ‘other pensioners’, the survey in Melbourne required significant changes to be made. The need for these changes was revealed only by conducting a pilot survey in Melbourne, after making those changes that the designers were aware had to be made in transferring the survey from Brisbane to Melbourne, mostly in relation to transport system differences.

An additional argument for conducting the preliminary tests on samples drawn from the same population as will be used in the main survey arises in considering a full pilot survey, or a pretest that includes the sampling processes. Problems that may arise in the sampling will be revealed only when the sampling is tested on the same population as will be used in the main survey. For example, in a survey that is to be carried out with address-based sampling, problems in the address listings are likely to be specific to the geographic region of the survey. Likewise, a survey that is to be carried out using random digit dialling for specific telephone service areas will also be likely to have problems that are specific to the target areas, and are unlikely to occur in another telephone service area.

12.3.2 Sample size

Pilot surveys
There are no specific rules about the size of a pilot survey sample. Biemer and Lyberg (2003) note that ‘[t]he design and use of pilot studies are sadly neglected in the survey literature’. They also go on to say that ‘[t]he same casual treatment that pilot survey design has received in the literature is also seen in the surveys themselves’. However, having said this, and because Biemer and Lyberg are not writing about survey design per se, they do not suggest what sample sizes might be appropriate.

The pilot survey itself, which is the essential element of the preliminary testing, should clearly be smaller than the main survey sample, although it should be as large as time and money will permit. Certainly, it needs to be large enough to produce statistically reliable results. A pilot survey with a sample of twenty-five respondents will not suffice for this. It also makes sense, as suggested by Richardson, Ampt, and Meyburg (1995), that the size of the pilot sample should be somehow related to the size of the sample for the main survey. In fact, Richardson, Ampt, and Meyburg suggest that about 5 to 10 per cent of the survey budget should be spent on the pilot survey. Because the effort required per completed unit will often be larger in the pilot survey than in the main survey, this would suggest that the sample size for a pilot survey would be in the range of 3 to 7 per cent of the main sample size. Thus, for a survey of 1,000 households, a pilot survey should have a sample size of thirty to seventy households, while a survey of 25,000 households should probably have a pilot survey of 750 to 1,750 households. However, as recommended by Stopher et al. (2008a), the minimum sample size for a pilot survey should be no smaller than thirty units – i.e., thirty people in a survey that is sampling people, and thirty households in a household survey. Samples smaller than thirty units will provide information that is too unreliable for sensible conclusions to be drawn. However, Dillman (2000) suggests that a pilot survey should have a sample size of 100 to 200 respondents in general, and proposes that the size should be larger than this, if resources allow. He goes on to say that ‘entering data from 100–150 respondents allows one to make reasonably precise estimates as to whether respondents are clustering into certain categories of questions’.
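As a rough aid, the 3 to 7 per cent guideline, with the minimum of thirty completed units, can be expressed as a simple rule of thumb (a sketch in Python; the function name and default bounds are illustrative only):

```python
def pilot_sample_range(main_sample_size, low=0.03, high=0.07, minimum=30):
    """Suggested pilot sample size range: 3 to 7 per cent of the main sample,
    but never fewer than thirty completed units."""
    return (max(minimum, round(main_sample_size * low)),
            max(minimum, round(main_sample_size * high)))

print(pilot_sample_range(1_000))    # (30, 70) households
print(pilot_sample_range(25_000))   # (750, 1750) households
```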

It is also important to remember that the size of the pilot survey can be specified in two ways, as can the sample for the main survey. One can define the number of sample units that are to be attempted or recruited, or one can define the number that are to be completed. It is wiser to use the latter of these to define both pilot sample sizes and full sample sizes, because it is the final number of sample units available for analysis that determines the accuracy of the survey. Thus, the specified minimum sample size of thirty households or thirty persons refers to the final completed sample size. In a recent household travel survey, it was decided to aim for a pilot survey sample of fifty households as the number to be attempted for recruitment. However, the pilot survey revealed that only 60 per cent of the households that were attempted could be recruited, which gave thirty households that were successfully recruited. Of those, only 67 per cent actually completed the survey, giving a completed sample of just twenty households. This was an insufficient sample to draw any sound conclusions about any aspect of the survey. For example, this was sufficient to show only that the recruitment rate would have about a 95 per cent probability of being between 45 per cent and 75 per cent, and that the completion rate for recruited households would be between about 47 per cent and 87 per cent. It is probable that these ranges could have been guessed on the basis of experience in other surveys without going to the expense of running the pilot survey at all. Indeed, Kish (1965) says: ‘If the pilot study is too small, its results are useless, because they are less dependable than the expert guesses we can obtain without it.’
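The width of these ranges can be reproduced approximately with a normal-approximation confidence interval for a proportion (a sketch in Python; the exact figures quoted above may rest on a slightly different approximation, so the results here are only roughly the same):

```python
import math

def ci95(successes, trials):
    """Approximate 95 per cent confidence interval for a proportion,
    using the normal approximation with variance p(1 - p)/n."""
    p = successes / trials
    half = 1.96 * math.sqrt(p * (1 - p) / trials)
    return p - half, p + half

print(ci95(30, 50))   # recruitment rate: roughly 0.46 to 0.74
print(ci95(20, 30))   # completion rate:  roughly 0.50 to 0.84
```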

Suppose that one of the reasons for carrying out the pilot survey example of the previous paragraph was to determine how many households should be attempted for recruitment in a main survey with a sample size of 5,000 households. Given the range of completion rates, this would lead to the conclusion that between 5,750 and 10,650 households would have to be recruited, in order to achieve a final sample of 5,000 completed households. In turn, the pilot survey would indicate that anywhere from 7,700 to 23,700 households would have to be attempted to gain the required sample size. If the decision was made to attempt 23,700 households and the actual recruitment rate was found to be, say, 63 per cent, and the completion rate was found to be 72 per cent (both of which are highly likely results based on the small pilot survey), the result would be a final sample of 10,750 households, or more than double what was required and, presumably, budgeted for.

This situation can be contrasted with the result that would have occurred if the final completion for the pilot survey had been specified as 100 households, with the same recruitment and completion rates being found. In this case, this would have meant that 150 households would have had to be recruited, and that 250 households would have to be attempted. The 95 per cent confidence range on the recruitment rate would be, in this case, between 54 per cent and 66 per cent, while the range for the completion rate would be between 59 per cent and 75 per cent. Again, applying this to the same final desired sample size for the main survey would result in the determination that the range of households that should be attempted would be between 10,100 and 15,700. If the survey firm decided to err on the conservative side and attempted 15,500 households and the actual results were as suggested earlier for the main survey, then the resulting sample would be 7,030 completed households, which is much closer to the target than the pilot survey of fifty attempted households would have produced.
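The same calculation, applied to a pilot survey specified as 100 completed households, shows how the narrower confidence ranges translate into a much tighter estimate of the number of households to attempt (again a sketch in Python under the normal approximation):

```python
import math

def ci95(successes, trials):
    """Approximate 95 per cent confidence interval for a proportion."""
    p = successes / trials
    half = 1.96 * math.sqrt(p * (1 - p) / trials)
    return p - half, p + half

rec_lo, rec_hi = ci95(150, 250)   # recruitment rate: roughly 0.54 to 0.66
com_lo, com_hi = ci95(100, 150)   # completion rate:  roughly 0.59 to 0.75

target = 5_000                    # completed households required in the main survey
print(round(target / (rec_lo * com_lo)))   # pessimistic case: roughly 15,700 attempts
print(round(target / (rec_hi * com_hi)))   # optimistic case:  roughly 10,100 attempts
```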

Of course, the costs of this second pilot survey would have been about five times as great as for the first one. If we suppose that the average cost per household of the pilot survey was $250 per completed survey, and that for the main survey was $180 per completed household, then the first pilot survey would have cost a mere $5,000, while the second pilot survey would have cost $25,000. However, the potential savings on the main sample would be $673,200. In reality, unless the survey was a postal survey, the survey firm would be able to cut off further recruitment once the target sample had been reached. However, there are still significant costs that would be incurred in having set up the survey to attempt 23,700 recruitments as opposed to 15,700, and there could be implications of bias in the resulting sample, depending on how the sample was actually contacted, from discarding about 8,000 potential recruitments.
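The cost comparison in the preceding paragraphs can be verified with a short calculation (a sketch in Python; the per-household costs and rates are those assumed above, and the result differs from the quoted $673,200 only through rounding):

```python
recruit_rate, complete_rate = 0.63, 0.72   # rates assumed for the main survey
cost_pilot, cost_main = 250, 180           # dollars per completed household

completed_if_23700 = 23_700 * recruit_rate * complete_rate   # ~10,750 households
completed_if_15500 = 15_500 * recruit_rate * complete_rate   # ~7,030 households

extra_main_cost = (completed_if_23700 - completed_if_15500) * cost_main
extra_pilot_cost = (100 - 20) * cost_pilot   # $25,000 versus $5,000 for the two pilots

print(round(extra_main_cost))   # roughly $670,000 of avoidable main-survey cost
print(extra_pilot_cost)         # $20,000 of additional pilot-survey cost
```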

It should also be borne in mind that, if few changes need to be made to the survey as a result of the pilot, the sample from the pilot survey can often be combined with the main sample, to provide a larger total final sample. Of course, this will not be the case when a pilot survey is used to test different alternative questions, designs, procedures, etc. In such cases, the data from those test versions that are not the same as the ones finally adopted for the main survey will not be useful as part of the final sample.

In general, two cases for determining sample sizes for pretests and pilot surveys can be distinguished. In the first case, the pretest or pilot survey is used only to trial a single version of the survey to confirm that it is working, or to determine refinements that may be needed to improve its functioning. In the second case, two or more different designs of one or more elements of the survey are to be tested, so that a decision can be made on which to adopt for the final survey. These two cases will require different sample sizes. As is discussed in Chapter 13 of this book, there are methods that can be used to estimate required sample sizes, based on the expected variance of the measure that is of key concern and on the accuracy with which one desires to know the answer. Stopher et al. (2008a) provide some suggested ranges of sample sizes to answer different questions about the main survey, on the basis of typical results in household travel surveys. From their work, it can be seen that recommended sample sizes, depending on the accuracy desired, range from twenty-one to over 1,000, although it should be noted that the very small sample sizes in their work incur very large errors, similar to those suggested by the example provided in this chapter.

However, the work of Stopher et al. (2008a) does not address the size of sample needed to test two alternative designs of an element. Again, the sampling error theory provided in Chapter 13 of this book also provides a basis for estimating the required sample sizes based on the variability of what is being measured and the accuracy of the desired result. Using methods from Chapter 13, if, once again, the key criterion were the response rate to two different designs, then a rough determination of the sample size that would be required can be made. To make a comparison with the case in which the pilot survey is being carried out to answer what the response rate would be from a single design, the same figures are used as were used earlier in this chapter. For this, suppose that, on a single design, the desire was to know the number of households to be recruited to within ±25 per cent (which was the result achieved with a pilot survey sample of 100 completed households). This was based on achieving a recruitment rate of 60 per cent and a completion rate of 67 per cent. Suppose now that two versions of the survey are fielded in the pilot survey. The first one of these achieves a recruitment rate of 55 per cent and a completion rate of 60 per cent, while the other one achieves a recruitment rate of 65 per cent and a completion rate of 70 per cent. The question that is asked of the pilot survey is whether or not these rates are significantly different with 95 per cent confidence.

The question that must be addressed is this: what sample sizes are required for the two versions of the pilot survey? For example, one might be tempted to suggest that the sample of 100 that was used for the single design could be divided in half, with fifty households completing one design and fifty completing the other. However, given that one does not know the recruitment and completion rates in advance, what would actually happen is that, if one started out with 250 households to recruit from, and split the sample in half, one would have sixty-nine households agreeing to do the first version and eighty-one agreeing to do the second one. This would then lead to completion numbers of forty-one for the first design and fifty-seven for the second. There would, of course, be an estimated difference of 10 per cent in the completion rates between the two versions. However, the 95 per cent confidence limits on this difference would be ±15.3 per cent, meaning that one would not be sure that there was a difference in fact. Similarly, for the 10 per cent difference in the recruitment rates, the 95 per cent confidence limits in that case would be ±12.1 per cent, so that one would again be unsure if the recruitment rates were, in fact, different.
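The confidence limits quoted for this split pilot sample can be reproduced with the usual normal-approximation formula for the difference between two independent proportions (a minimal sketch in Python):

```python
import math

def diff_halfwidth_95(p1, n1, p2, n2):
    """Half-width of an approximate 95 per cent confidence interval on the
    difference between two independent proportions."""
    return 1.96 * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Recruitment rates: 55% of 125 attempts versus 65% of 125 attempts.
print(round(diff_halfwidth_95(0.55, 125, 0.65, 125), 3))   # ~0.121, i.e. about ±12.1%
# Completion rates: 60% of 69 recruits versus 70% of 81 recruits.
print(round(diff_halfwidth_95(0.60, 69, 0.70, 81), 3))     # ~0.153, i.e. about ±15.3%
```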

To achieve a result in which the 10 per cent differences in each of the recruitment and completion rates would be significantly different would require a sample size of 300 for each of the two versions. With this sample size, the difference between the recruitment rates would have a 95 per cent confidence range of ±7.8 per cent, while that on the difference in completion rates would be ±9.9 per cent, so that one would just be reasonably certain that the two versions had produced different recruitment and completion rates. This would lead one to choose the second design with reasonable confidence. However, it should be noted that this means that the sample size for the pilot survey has increased from 250 households to be attempted to 600 households to be attempted.

The method used to calculate these confidence limits is provided in Chapter 13 and is not detailed here. It should be noted that a simple random sample has been assumed in all cases and that the variance in the response rate is given by p*(1 – p), where p is the proportion either that is recruited or that completes the survey from among the recruited households.
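Using the same p(1 – p) variance, the required sample size per version can be found by searching for the smallest number of attempted households at which both 10-percentage-point differences become detectable (a sketch in Python; with this approximation the search comes out a little below 300, which the discussion above rounds up to 300):

```python
import math

def halfwidth(p1, n1, p2, n2, z=1.96):
    # The variance of an estimated proportion is p * (1 - p) / n.
    return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

def smallest_attempted_per_version(target_diff=0.10):
    """Smallest number of attempted households per version such that both the
    recruitment-rate and completion-rate differences (55%/65% and 60%/70%,
    as in the example above) exceed their 95 per cent confidence half-widths."""
    n = 10
    while True:
        rec_ok = halfwidth(0.55, n, 0.65, n) <= target_diff
        com_ok = halfwidth(0.60, 0.55 * n, 0.70, 0.65 * n) <= target_diff
        if rec_ok and com_ok:
            return n
        n += 1

print(smallest_attempted_per_version())   # roughly 290 to 300 attempts per version
```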

Pretests
It is much more difficult to provide guidance on the sample sizes for pretests. Because a pretest is usually testing a single element of the survey, it is not usually the case that one is looking for a statistically significant result from the pretest but, rather, whether or not something works, or is understood by potential respondents. For pretests, it is likely that much smaller samples will suffice, and that, as noted earlier in this chapter, the samples could even be drawn from among one’s colleagues, family, or other convenient groups. Samples of as few as five to ten respondents may suffice for pretesting, although it is probably better to use ten to twenty-five. Again, as with pilot surveys, the largest samples that can be afforded will be best; it is always true that a larger sample will give more accurate results. However, the value of increasing the sample size must always be considered carefully.

There may be instances in which a statistically determined sample size is required for a pretest. For example, suppose that a pretest was to be done to compare two alternative wordings for an income question, on the basis that income is usually a question on which there is a fairly high nonresponse rate. In this case, the pretest should be carried out with a sample drawn from the same population as the main survey, and the sample size would need to be computed from the difference in nonresponse rates. If the desire is to know, with 95 per cent confidence, if the two question versions differ in nonresponse rate by more than 5 per cent, then this could be used to determine the sample size required. This might be the requirement if the survey designers were interested only in changing the wording of the income question if a new wording could reduce the nonresponse rate by at least 5 per cent, but were not interested in changing the question if the difference were less than 5 per cent. This would actually require a very large sample to achieve, with each version of the question being tested on about 450 respondents. If one wished to have only 90 per cent confidence, rather than 95 per cent, then the sample size would decrease to about 310 respondents for each.
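A rough version of this calculation is sketched below in Python. The underlying nonresponse rates are not stated in the text; rates of about 20 per cent and 15 per cent are assumed here purely for illustration, because they reproduce figures close to the 450 and 310 respondents quoted above.

```python
import math

def n_per_version(p1, p2, z):
    """Approximate sample size per question version so that a difference of
    |p1 - p2| in nonresponse rates is just detectable (normal approximation)."""
    return math.ceil((z / abs(p1 - p2)) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)))

# Hypothetical nonresponse rates of 20% and 15% (a 5-percentage-point difference).
print(n_per_version(0.20, 0.15, 1.960))   # ~442 per version at 95 per cent confidence
print(n_per_version(0.20, 0.15, 1.645))   # ~312 per version at 90 per cent confidence
```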

From this discussion, it should be clear that pretests can range from a handful of selected respondents, who may be colleagues of the survey designer, to a relatively large sample of the final population. The determination of the sample size will be very much a function of the goal of the pretest, so that no specific rules can be offered on choosing the sample size in this case. Rather, the designer needs to determine the goal very clearly so that the decision as to the sample size, and as to the respondents themselves, can be made appropriately to serve that goal.

12.4 Costs and time requirements of pretests and pilot surveys

The preceding discussions on the nature, sample composition, and sample size of pretests and pilot surveys would indicate that the costs and time requirements of these survey activities are both likely to be extremely variable, depending on the nature of the survey to be done and the tests to be accomplished. In addition, it is almost axiomatic that clients for surveys never allow enough time or money for these activities, even though the completion of the pretests and pilot surveys will determine whether the money spent on the main survey is spent well or not. Indeed, it is not too extreme a statement to say that failure to conduct a pilot survey is quite likely to lead to the main survey being of little or no use, and therefore the money and effort spent on it having been wasted.

With respect to the costs of these activities, the guidance provided by both Dillman (2000) and Richardson, Ampt, and Meyburg (1995) seems most appropriate. Dillman recommends that pilot surveys should never be done on fewer than 100 to 200 respondents, while Richardson, Ampt, and Meyburg recommend an expenditure equivalent to 5 to 10 per cent of the budget. Following Dillman’s recommendation, small surveys might require a larger proportion of the budget to be spent on the pilot survey than Richardson, Ampt, and Meyburg would suggest. In such cases, it could be suggested that the budget for the pilot survey should not exceed one-third of the total budget for the survey. However, for moderate to large sample surveys, the guideline level of 5 to 10 per cent of the budget would seem reasonable, with the proviso that less than 5 per cent would always be too little, while more than 10 per cent could be acceptable, especially if the survey that is being designed is the first of its kind.

The level of prior experience with a survey of the type to be undertaken will most certainly impact a reasonable budget for the pilot survey and any pretests and skirmishes to be undertaken. If the survey is one of a type that has been done repeatedly with similar populations, and when the entire survey design is well documented, as are also the results of the survey in terms of both response rates and the quality of the data, then a modest pilot survey budget is entirely appropriate. However, if the survey is a new type of survey for which there is little or no prior experience that can be consulted, or the survey has been widely utilised on some populations but is now to be applied to a quite different population, then a much larger pilot survey budget should be expected. For example, if a type of survey, such as a household travel survey, has been designed and executed numerous times in urban areas in North America, but is now to be used in, say, Africa, then extensive pilot testing should be anticipated. Not only is it likely that language changes are to be made, but in addition the cultural background of respondents is quite different and the ways in which respondents will react to questions will also be very different. It will, therefore, be similar to designing a completely new type of survey.

The matter of the time required will also depend in part on the sample size of the pilot survey. As general guidance, the pilot survey should not be executed at any faster rate than is expected to apply to the main survey. Thus, if a main survey of 5,000 households is to be undertaken over a period of nine months, and it is desired to undertake a pilot survey of 350 households, then the period of time for the fieldwork should be no less than three weeks, which is approximately the time that would be taken for 350 surveys in the main survey. Added to that, time must be allowed for designing the pilot survey and drawing the sample, and for coding, cleaning, and analysing the data after the completion of the fieldwork. Because the analysis will often be more intensive with the pilot survey, if questions of comparison are to be answered, the time allotted to this phase of the work should be significantly longer than what would be expected for the main survey. As a general guideline, the period of time required for undertaking a pilot survey should be on the order of three to six months, and longer in the case when multiple tests of different components of the survey are to be undertaken.

In the author’s experience, it is too often the case that clients for a survey try to compress the pilot survey activity into a few weeks, having already decided that the main survey must be fielded by a certain date, and having left too little time for selecting the survey firm and getting contracts completed and signed. When too little time is allowed for the pilot survey, bad decisions may be made about altering or not altering components of the survey, and the main survey may actually start before the analysis has been completed on the pilot survey. It is equally true that allowing too little time for pilot surveys and pretests will be as damaging to the main survey as not doing the pilot surveys and pretests or not providing an adequate monetary budget for the pretests and pilot surveys.

12.5 Concluding comments

Pilot surveys and pretests are so important to achieving a good outcome of the main survey that it seems to be worthwhile to reiterate the major points that have been made in this chapter.

(1) A pilot survey should always be undertaken prior to undertaking the main survey.
(2) The pilot survey should be undertaken on a sample of respondents who are as similar as possible to the population from which the main sample is to be drawn.
(3) Care should be taken that the drawing of the pilot survey sample (and any pretest samples drawn from the target population) does not bias the sample for the main survey.
(4) The debriefing of interviewers and supervisors should be an essential component of a pilot survey for interview-based surveys, as should focus groups of respondents and non-respondents for postal and internet surveys, in particular.
(5) The sample size should be determined on the basis of the goals of the pilot survey or pretests, but should not be smaller than about 100 respondents for pilot surveys, and probably not smaller than ten to fifteen for pretests.
(6) Adequate time and budget should be set aside for pretesting and undertaking a pilot survey, so that the results of these testing activities can be used to influence the final survey and ensure that a good product is achieved.
(7) The time, effort, and budget allocated to pilot surveys and pretests, and therefore also the sample sizes, should always be kept in proportion both to the main survey effort and to the risks of the main survey not accomplishing its goals.


13 Sample design and sampling

13.1 Introduction

In Chapter 3 of this book, the basic concern of sampling was introduced as being that of achieving a representative sample. Furthermore, that chapter introduced the idea of probability sampling, and put forward the concept of the equal probability of selection method as the basis for representative sample creation. The purpose of this chapter is to explore how to design a sample and how to determine the sample size that is required for a particular survey. Moreover, this chapter elaborates on the notion of sampling error and develops the statistical procedures required to estimate the amount of sample error in a survey.

Prior to embarking on the statistical aspects of sample design, there are a few issues that must be dealt with. First, there is the definition and description of sampling frames. Next, this chapter considers what comprises a sampling procedure, within which is also described a popular process for generating a random telephone sample, namely random digit dialling. Sampling procedures in general are then discussed. Following this, a number of equal and unequal probability methods of sampling are dealt with, in which the statistical procedures for estimating population means and totals, proportions, and ratios, the estimation of sampling errors for each sampling method, and methods to determine the sample sizes required for a pre-specified level of accuracy are all laid out.

Specifically, within this chapter, each of the following sampling methods is discussed:

(1) simple random sampling;
(2) stratified sampling with a uniform sampling rate (proportionate sampling);
(3) stratified sampling with a variable sampling rate (disproportionate sampling);
(4) multistage sampling;
(5) cluster sampling;
(6) systematic sampling;
(7) choice-based sampling;
(8) quota sampling;
(9) expert sampling; and
(10) haphazard sampling.


Sampling on successive occasions, also known as repetitive sampling, is discussed in the following chapter, Chapter 14, with the special case of panel samples included. These two chapters build extensively on the statistical methods that were introduced in Chapter 2 of this book, as well as on the notions of representativeness and randomness described in Chapter 3.

13.2 Sampling frames

A sampling frame is simply a listing of all the members of the target population. It can exist only if the population is finite. If it is available it can be used to draw the sample, and it is essential for some sampling methods, but it is not required for other sampling methods. However, sampling frames are usually rather difficult to obtain for human populations in most countries of the world, either because such listings do not exist or because such listings are assembled and maintained by government departments and are not made available to survey researchers outside the departments that are responsible for the lists.

For any type of sampling that is based on equal or unequal probability sampling and that requires the probabilities of sampling to be established, a sampling frame is essential. It is also absolutely essential that the sampling frame is complete. The reason for the importance of the completeness is that this is the only way that one can be sure of performing equal or unequal probability sampling from a sampling frame. If any unit is not listed in the sampling frame, it has a zero probability of being selected. Any item that is listed more than once will have a higher probability of being selected. Both these situations represent a violation of equal probability sampling. They also compromise unequal probability sampling, in that, if the researcher is unaware of the omissions and duplicate listings, then those units will have different probabilities of selection from what was intended. Typical problems encountered with sampling frames are:

• being out of date (hence, some units are not listed and others are listed that should not be);
• being incomplete (not every unit in the relevant population is included in the frame);
• containing duplicate entries;
• containing errors (meaning that the units are listed, but there are errors that will mean that the units are not contactable in the survey); and
• covering an area that is larger or smaller than the study domain, so that either there are units included in the list that are not within the study domain, or not all units that should be included are in the list.

In surveying human populations, one of the candidates for a sampling frame that is often suggested is a telephone directory. However, telephone directories in all countries of the world suffer from several problems that make them unsuitable as a sampling frame. First, they are always out of date by the time they are published, because of the time it takes to put together the directory contents and publish them, while there are people moving in and moving out of a given locality every day, so that the most recent moves will never be included. Second, they will not include people who do not have a telephone registered to them. This will include households with no telephones. Third, other households will have unlisted or ‘silent’ numbers, and therefore will not appear in the directory. Fourth, yet other households will have more than one entry, perhaps by having two or more telephone numbers, each of which is listed as a separate entry. Fifth, they will include numbers that are no longer in use and numbers that have been reassigned and belong to someone at a different address, or even a different person. Sixth, because they are put together by human means, they will inevitably include errors, which may be errors of spelling, juxtaposition, or errors in the numbers themselves, etc. Clearly, with all these problems, telephone directories do not represent a useful sampling frame. Besides, it is rare that the coverage area of a specific directory will match the study domain for a specific survey.

Some countries maintain voter registration lists that are publicly available. However, these lists are also subject to errors similar to those encountered in telephone directories. They do, of course, cover only those adults who are registered to vote, and therefore exclude people who are not citizens of the country, and those who have not resided in the country long enough or are not old enough to be registered to vote. Depending on national laws regarding voting, they will not include those who are eligible to vote but who have not registered. They will also always be out of date and contain errors, they may include duplicate entries, they will omit other entries, and they rarely cover an area that will match a study domain.

In short, it is actually very difficult to find an adequate sampling frame of a human population, unless the study domain coincides with some administrative unit for which an up-to-date and complete list is maintained. In the United States, the US Census Bureau maintains a master address file (Liu, 2007), which contains every address of which the bureau is aware in the country. This file took a great deal of time, effort, and money to prepare and is continually updated. It is supposed to contain the address of every residential unit in the United States, and it is used as the means to mail out census forms every decade. However, it is a closely guarded file that is not available for release to survey researchers. This is partly to protect the confidentiality of survey and census data, and partly because of the costs associated with building and maintaining the file.

The alternative to using an existing listing is to create one. However, this will again be an expensive and time-consuming process, depending on the size of the study domain. It requires a full enumeration of the population under study and may require physically visiting each and every location where the survey is to be carried out in order to record the address, or other identifying information. It is rare for this to be done outside a national census. When sampling frames do not exist, alternative sampling methods may be required that do not rely on having an available listing.

However, there are many situations in which a listing will exist that can be used as a sampling frame. For example, suppose a university wishes to undertake a survey of existing undergraduate students, or a school wishes to undertake a survey of its existing pupils. A complete listing of all students currently registered will normally always exist, providing the sampling frame for either survey. Likewise, if a survey is to be conducted of employees of a particular company or agency, the company or agency would also usually have a complete listing of the employees and their contact details. Many other similar situations will exist in which special groups will be fully listed, so that a sampling frame can be used quite readily. However, it will always be necessary to question whether the listing is up to date and complete and does not contain erroneous or duplicate entries. Any known errors in the list should be corrected prior to its being used as a sampling frame.

13.3 Random sampling procedures

13.3.1 Initial considerations
There are two principal steps in a sampling procedure. The first step is to determine whether or not a sampling frame exists for the population to be studied. The existence of a sampling frame, or the lack of one, will determine whether or not certain sampling methods can be used. Having determined whether or not a sampling frame exists or can be created, the second step is to decide on the actual sampling method. This involves a decision both on the statistical nature of the method to be used and on the method by which the units of the population will be recruited into the sample. These two decisions have to be taken together, because they are highly interdependent.

Given that the determination of the sampling frame has been completed and the method of sampling decided, there is another issue that needs to be decided at this juncture. There are two ways in which sampling can be carried out that are important to be decided at this point: sampling with replacement or without replacement. Sampling with replacement means that, after any element has been selected for the sample, it is replaced in the population and could be sampled additional times. Sampling without replacement means that, after an element has been selected for the sample, it is removed from the population and cannot be sampled again. Both these methods of sampling are consistent with equal probability sampling. In the case of sampling with replacement, the probability of a member of the population being sampled at each drawing of a new sample member is equal for all members of the population and is also equal on every drawing. For example, suppose a sample of 100 elements is to be drawn with replacement from a population of 100,000 elements. When the first member of the sample is drawn, the probability of choosing any element in the population is 1 / 100,000, or 0.00001. Because that member of the population continues to be a part of the eligible population, when the second element is drawn, the probability of any element in the population being drawn is again 0.00001. This will continue through to the 100th member of the sample.

In the case of sampling without replacement, the probability of a member of the population being sampled at each drawing is still equal throughout the population, but the probability will increase for each subsequent drawing of a member of the sample. For example, with the same sample and population size as beforehand, the probability of sampling any member of the population will still be 0.00001 for the first drawing. However, after the first sample member has been drawn, the remaining population is now 99,999. Therefore, the probability of any member of the remaining population being drawn will now be 1 / 99,999, or 0.0000100001. By the time that the 100th member of the sample is to be drawn, ninety-nine sampled members will have been removed from the population, so that the equal probability with which any member can be drawn at this last drawing will be 1 / 99,901, or 0.000010009. If the desired sample had been 1,000 members, the final equal probability would have been 1 / 99,001, or 0.0000101. Thus, at each drawing, the probability of any member of the population being drawn into the sample is equal for all population members, as it should be. The fact that the probability actually increases over the draw is irrelevant to the concept of equal probability sampling.
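For readers who wish to experiment with the two schemes, the following short Python sketch contrasts sampling with and without replacement; the population of 100,000 numbered members and the sample of 100 are simply the figures used in the example above.

    import random

    population = list(range(1, 100_001))   # 100,000 members, numbered 1 to 100,000
    sample_size = 100

    # With replacement: every draw has probability 1/100,000 for each member.
    with_replacement = [random.choice(population) for _ in range(sample_size)]

    # Without replacement: each selected member is set aside before the next draw,
    # so the probability rises from 1/100,000 on the first draw to 1/99,901 on the 100th.
    without_replacement = random.sample(population, sample_size)

    for draw in (1, 2, sample_size):
        p_without = 1 / (len(population) - draw + 1)
        print(f"Draw {draw}: with replacement p = {1/len(population):.8f}, "
              f"without replacement p = {p_without:.8f}")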

Sampling with replacement is rarely used for participatory surveys of human subjects, because this would require a person who is sampled more than once to undertake the survey more than once. Because most people would object to undertaking the same survey more than once, this is not usually considered an advisable procedure. Therefore, participatory human subjects surveys are usually restricted to sampling without replacement. In a survey of inanimate objects, if the survey method may actually damage or destroy the sampled units (such as would be the case in taking samples of a batch of concrete used in construction), sampling will again have to be without replacement, because each sampled element will either exist no longer, or be irrevocably changed by the survey process.

However, in observational surveys and other non-participatory surveys, sampling with replacement may be the selected method. For example, a survey that involves counting vehicles on a roadway section would usually be done by sampling with replacement. It would probably be far too expensive and of no consequence to the survey results to try to exclude a vehicle simply because it had used this section of roadway at another time in the survey. Counts of people patronising a store, or riding on a public transport vehicle, or entering a museum, etc. would normally be sampled with replacement.

Both these methods of sampling – with and without replacement – are completely valid as methods underlying various types of random and quasi-random sampling, and their selection depends largely on the nature of the survey to be done. However, this is an important element in the design of the sampling procedure, and it should normally be decided at this point in the design of the sampling procedures.

13.3.2 The normal law of error

All random sampling procedures are based on the assumption that the normal law of error applies. This law states that the errors to which estimates of population statistics are subject as a result of sampling from the population are distributed approximately normally around the true parameter value – i.e., the value in the population. One could look at this law in two ways. Suppose one were to draw a series of samples such that the sample size was made larger in each successive sample. If one were to estimate the same statistic from the population for each sample, the differences between the value of the statistic and the true value of the parameter in the population would describe a normal distribution, with the values becoming closer and closer to a zero error as the sample size became increasingly large. Another way to consider the law is to suppose that one were to draw repeated samples of the same size. Assuming that these samples were substantially less than the population in size, then the distribution of the values of a statistic estimated from the samples would follow a normal distribution with a mean that was the true population parameter value. Values close to the true parameter value would be much more likely to occur than values at some distance from the true parameter value.

It is important to note that the normal law of error relates to the distribution of what is subsequently referred to as the sampling error. That is, it is the error that arises simply because one is measuring a sample of the population, not the entire population. To understand an important issue more clearly here, consider a population in which voting intentions are being measured. Suppose further that the election of interest is one in which there are two candidates running, so that the outcome of voting will be a binary result. Quite clearly, the distribution of votes will not be normal. However, the statistic of interest in this case is the number of votes that will be cast for one candidate, or the percentage of votes cast for that candidate. This percentage or number of the sample who will vote for one of the candidates is the statistic that is of interest from the sample, from which one would wish to estimate the value of the parameter in the population, that parameter being the percentage or number of the total population that would vote for this candidate. Suppose now that the electorate consists of 100,000 voters. Suppose further that a sample of 400 voters is selected from which to estimate the percentage of votes likely to be cast for one of the candidates (the statistic). The normal law of error means that, if repeated samples of 400 voters were to be drawn from the population of 100,000 voters, the estimates of the number of votes that would be cast for this candidate would follow a normal distribution, in which the mean of that distribution would be the value of the actual number of votes that would be cast by the entire population. This distribution would arise no matter what the underlying distribution is of the measures in the sample and the population, whose mean, or total, or percentage, or proportion is the one of interest to estimate from the sample.
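The behaviour described in this paragraph is easy to demonstrate by simulation. The sketch below is illustrative only: the 45 per cent level of support for the candidate is an assumed value, and 2,000 repeated samples of 400 voters are drawn from a population of 100,000 to show that the sample estimates cluster approximately normally around the population value.

    import random
    import statistics

    random.seed(1)
    N, n, true_share = 100_000, 400, 0.45        # assumed electorate, sample size and true support
    population = [1] * int(N * true_share) + [0] * (N - int(N * true_share))

    # Draw many independent samples and record the estimated share in each one.
    estimates = []
    for _ in range(2000):
        sample = random.sample(population, n)
        estimates.append(sum(sample) / n)

    print("True share:", true_share)
    print("Mean of the sample estimates:", round(statistics.mean(estimates), 4))
    print("Spread (std dev) of the estimates:", round(statistics.stdev(estimates), 4))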

In the event that the normal law of error does not apply, then estimates of the error from sampling would be underestimates. In other words, error estimates based on the normal law of error will be less than estimates of error based on any other distribution. However, the normal law of error is so universally applicable that there is probably little reason to doubt its application to almost anything that might be measured in a survey.

13.4 Random sampling methods

There are four principal methods of sampling using strictly random procedures, all of which are explored at length in this section of this chapter. There are also at least three useful methods of quasi-random and non-random sampling that are frequently used in practice, which are also discussed at length in this chapter. The four random methods are:

(1) simple random sampling;
(2) proportionate sampling;
(3) disproportionate sampling; and
(4) multistage sampling.

The three quasi-random methods that are discussed in this chapter are:

(1) systematic sampling;
(2) cluster sampling; and
(3) choice-based sampling.

There are also some non-random sampling methods that are discussed, although they are either not recommended for use or are recommended only for special situations. These are:

(1) quota sampling;
(2) expert sampling; and
(3) haphazard sampling.

13.4.1 Simple random sampling

Drawing the sample

SRS is the drawing of elements or units of a population with or without replacement, with an equal probability of being sampled on each draw. SRS usually requires the existence of a sampling frame, so that each member of the population can be assigned a number (consecutively from one to the total of the population), and the random drawing can be done by selecting random numbers and then determining the member of the population to which that number belongs.

As discussed in Chapter 3, a good source of random numbers should be used, especially when the sample is fairly large (say in the thousands), in which case random number generators on computers will often not be sufficiently random. Sampling can be done with or without replacement. Suppose, for example, that a population of 3,500 households is of interest in a study and a complete listing of these households is available. The households are arranged in the list in alphabetical order of the last names of the families living in each household. For the purposes of sampling, the households in the list are numbered from one to 3,500. A portion of this sampling frame is shown in Table 13.1. It is decided to use the RAND random numbers (RAND Corporation, 1955) to select a sample of 100 members of the population. An excerpt of these numbers is shown in Table 13.2.

The random numbers are grouped in blocks of five for convenience in using them. In this case, with a total population of 3,500, groups of four digits will suffice. Had the population been 35,000, say, then groups of five digits would be required, and a population of 3,500,000 would require groups of seven digits to be used. These requirements are in place so that it is possible for all members of the population to be chosen. For example, suppose the population was 350,000 and groups of five digits were used. The selection is found by multiplying the random number, treated as a decimal fraction, by the population size. Suppose, therefore, that the random number selected is 00597. This is treated as 0.00597 and multiplied by 350,000, producing the value 2089.5, which would be used to select the 2,090th member of the population. Now suppose that the number 00598 were chosen. This would produce the value 2093, which would select the 2,093rd member of the population. Since there is no possible number between 00598 and 00597 that could occur in groups of five digits, it is clear that the 2,092nd and 2,091st members of the population could never be drawn. Hence, using groups of five digits for a six-digit population (350,000 in this case) will violate equal probability sampling. In fact, the use of five-digit groups will mean that over two-thirds of the population can never be selected in this case. Therefore, the rule is that the number of digits to be used must be the same as the number of digits in the population. (Of course, if the population were exactly 10,000, say, then four-digit groups would suffice, because values produced between zero and 0.49999 would select number 10,000, as would values between 9999.5 and 9999.99999.)

Returning now to the problem, and assuming that the random numbers beginning on line 30 in Table 13.2 were used, the first group of random digits would be 0449, which will select the 157th member of the population. Similarly, the next group of four digits is 3524, which will select the 1,233rd member of the population. The next group is 9475, which will select the 3,316th member, and so forth. If sampling is without replacement, then, should a member of the population be selected a second time, this selection is rejected, and the next one is taken. The resulting draws for the entire sample are shown in Table 13.3.
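The mechanics of converting four-digit groups into selections from the 3,500 listed households can be scripted directly. The sketch below is illustrative only: the digit string is the beginning of line 00030 of the RAND table as reproduced in Table 13.2, the product is rounded to the nearest whole number (halves rounded up, as in the 2,089.5 example above), and a product that rounds to zero is assumed to select the last member of the list, by analogy with the 10,000-population example.

    # Digits from the start of line 00030 of Table 13.2, run together and cut into four-digit groups.
    digits = "04493524947524633824458625102561962793356533712472"
    population_size = 3500

    groups = [digits[i:i + 4] for i in range(0, len(digits) - 3, 4)]

    selected = []
    for g in groups:
        # Treat the group as a decimal fraction of 10,000 and round half up, using integer arithmetic.
        member = (int(g) * population_size + 5_000) // 10_000
        if member == 0:
            member = population_size          # assumed handling of a zero product (see the text)
        if member not in selected:            # without replacement: a repeated selection would be skipped
            selected.append(member)

    print(selected)   # begins 157, 1233, 3316, 862, ... as in Table 13.3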

Table 13.1 Partial listing of households for a simple random sample

Number Family name Street address Town

1   Adams     12 Wells Street    Mountain View
2   Allen     31 Pepper Drive    Oxton
3   Arthur    27 Sunset Court    Mountain View
4   Atkins    4 Pine Road        Eastworth
5   Baker     61 Station Road    Oxton
6   Barton    24 Redlands Road   Oxton
7   Batty     15 Oak Street      Mountain View
8   Bean      6 Willow Road      Eastworth
9   Bell      19 Marine Parade   Eastworth
10  Bishop    22 Pine Road       Eastworth
11  Billiton  86 High Street     Oxton


In Table 13.3, the top ten rows of the table show the four-digit groups taken from Table 13.2, while the lower ten rows show the resulting calculations of the sample draw. In this case, there are no duplications of draws, so there would be no difference in the selection with and without replacement. However, a larger sample would be likely to produce duplicates. A detailed inspection of the table shows that the lowest-numbered member of the population to be selected is six, while the highest is 3,411. The numbers are therefore well distributed across the population, from one to 3,500. The average of the numbers drawn is 1,788, which is very close to 1,750, which would be the expected average of a large sample. This average suggests that the numbers are well distributed across the range of the population.

Estimating population statistics and sampling errors

The purpose of a survey is usually to provide data from which inferences can be drawn about the population from which the sample was drawn. This being the case, the next thing to be considered is how sample statistics can be estimated and then used to estimate the population values. It is also necessary to know how accurate these estimates can be considered to be.

In this and subsequent sections of this chapter, a notation convention is used to distinguish between population values, sample values, estimates of population values, and true (unobservable) values of the population. Population values are shown with upper-case Roman characters – e.g., Y, S. Corresponding sample values are shown with lower-case Roman characters – e.g., y, s. Estimates of population values from a sample are shown as upper-case Roman characters with a circumflex accent over them – e.g., Ŝ, Ŷ. Finally, the true (but unobservable) values in the population are shown by the Greek equivalent lower-case letters – e.g., σ.

Table 13.2 Excerpt of random numbers from the RAND Million Random Digits

Line    Random digits
00030   04493 52494 75246 33824 45862 51025 61962 79335 65337 12472
00031   00549 97654 64051 88159 96119 63896 54692 82391 23287 29529
00032   35963 15307 26898 09354 33351 35462 77974 50024 90103 39333
00033   59808 08391 45427 26842 83609 49700 13021 24892 78565 20106
00034   46058 85236 01390 92286 77281 44077 93910 83647 70617 42941
00035   32179 00597 87379 25241 05567 07007 86743 17157 85394 11838
00036   69234 61406 20117 45204 15956 60000 18743 92423 97118 96338
00037   19565 41430 01758 75379 40419 21585 66674 36806 84962 85207
00038   45155 14938 19476 07246 43667 94543 59047 90033 20826 69541
00039   94864 31994 36168 10851 34888 81553 01540 35456 05014 51176
00040   98086 24826 45240 28404 44999 08896 39094 73407 35441 31880
00041   33185 16232 41941 50949 89435 48581 88695 41994 37548 73043
00042   80951 00406 96382 70774 20151 23387 25016 25298 94624 61171
00043   79752 49140 71961 28296 69861 02591 74852 20539 00387 59579
00044   18633 32537 98145 06571 31010 24674 05455 61427 77938 91936
00045   74029 43902 77557 32270 97790 17119 52527 58021 80814 51748
00046   54178 45611 80993 37143 05335 12969 56127 19255 36040 90324
00047   11664 49883 52079 84827 59381 71539 09973 33440 88461 23356
00048   48324 77928 31249 64710 02295 36870 32307 57546 15020 09994

In estimating values from a sample, there are generally four measures that may be of interest: the mean, the total, the ratio of two values, and the proportion of units in a given category. For example, from a household survey, one may be interested in the total number of cars owned in the study region, the average number of cars owned per household, the ratio of cars owned to licensed drivers, and the proportion of households that do not own a car.

Because the focus here is on a simple random sample, which can usually be drawn only from a finite population, it is also assumed that the sample size is n, that the size of the population is N, and that the sampling fraction or sampling rate is f, where f = n / N. One can also define g as being the inverse of the sampling fraction, or g = N / n = 1 / f. In a simple random sample, g indicates the number of population units that are represented by each sample unit. Of course, if the population is not finite, then no listing can be made of it, and no sampling fraction can be determined. Furthermore, it will not be possible to estimate a total of any attribute of an infinite population (because any such total would also be infinite).

Table 13.3 Selection of sample of 100 members using four-digit groups from Table 13.2

Source Random numbers and selected sample

Line 30     0449 3524 9475 2463 3824 4586 2510 2561 9627 9335
Lines 30–1  6533 7124 7200 5499 7654 6405 1881 5996 1196 3896
Lines 31–2  5469 2823 9123 2872 9529 3596 3153 0726 8980 9354
Lines 32–3  3335 1354 6277 9745 0024 9010 3393 3359 8080 8391
Line 33     4542 7268 4283 6094 9700 1302 1248 9278 5652 0106
Line 34     4605 8852 3601 3909 2286 7728 1440 7793 9108 3647
Lines 34–5  7061 7429 4132 1790 0597 8737 9252 4105 5670 7007
Lines 35–6  8674 3171 5785 3941 1838 6923 4614 0620 1174 5204
Lines 36–7  1595 6600 0018 7439 2423 9711 8963 3819 5654 1430
Line 37     0175 8753 7940 4192 1585 6667 4368 0684 9628 5207
First ten   157 1233 3316 862 1338 1605 879 896 3369 3267
11–20       2287 2493 2520 1925 2679 2242 658 2099 419 1364
21–30       1914 988 3193 1005 3335 1259 1104 254 3143 3274
31–40       1167 474 2197 3411 8 3154 1188 1176 2828 2937
41–50       1590 2544 1499 2133 3395 456 437 3247 1978 37
51–60       1612 3098 1260 1368 800 2705 504 2728 3188 1276
61–70       2471 2600 1446 627 209 3058 3238 1437 1985 2452
71–80       3036 1110 2025 1379 643 2423 1615 217 411 1821
81–90       558 2310 6 2604 848 3399 3137 1337 1979 501
91–100      61 3064 2779 1467 555 2333 1529 239 3370 1822


The estimated total of an attribute, y, of the population is obtained by summing the values of the attribute over the entire sample and multiplying this total by g, which is also known as the expansion factor, as shown in equation (13.1). The sample values of the attribute y are denoted by y_i, where i denotes a count over the sample. It can also be obtained by estimating the mean value from the sample and multiplying that by the population size, N, also as shown in equation (13.1):

\hat{Y} = g\sum_{i=1}^{n} y_i = N\bar{y}    (13.1)

The estimated value of the population mean is simply the sample mean, as shown in equation (13.2):

\hat{\bar{Y}} = \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i    (13.2)

The reader can prove this by noting that the population total is the population size divided by the sample size multiplied by the sum of the attribute values. The mean is then equal to this total value divided by the population size, as shown in equation (13.3):

\hat{\bar{Y}} = \frac{\hat{Y}}{N} = \frac{1}{N}\cdot\frac{N}{n}\sum_{i=1}^{n} y_i = \frac{1}{n}\sum_{i=1}^{n} y_i    (13.3)

The ratio of two values in the sample is given by taking the ratio of the sums of the values of the two attributes whose ratio is desired. Hence, the estimate of a ratio for the population is the same as the estimate of the ratio for the sample, as shown in equation (13.4):

\hat{R} = r = \frac{\sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} x_i}    (13.4)

Assuming that u members of the sample possess a particular attribute (or attribute value), then the proportion that possesses that attribute is given in the sample by p = u / n. The estimate of the proportion in the population that possesses the attribute (or attribute value) is then equal also to the sample proportion, as shown in equation (13.5):

\hat{P} = p = \frac{u}{n}    (13.5)

Hence, the best estimate of the population mean, the population ratio, and the population proportion is provided in each case by the sample values of the same, while only the population total requires estimation with the expansion factor. It also follows, therefore, that, although a population total cannot be estimated for a non-finite population, the mean, ratio, and proportion can be estimated for any attribute that can be measured in a sample from such a population. An example will illustrate the computation of these statistics.
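These four estimators are straightforward to express in code. The following Python sketch is an illustration only; the function names and the small made-up sample at the end are not part of the text.

    def estimate_total(sample, N):
        """Estimated population total: g * sum(y_i), with g = N / n (equation 13.1)."""
        g = N / len(sample)
        return g * sum(sample)

    def estimate_mean(sample):
        """Estimated population mean: the sample mean (equation 13.2)."""
        return sum(sample) / len(sample)

    def estimate_ratio(sample_y, sample_x):
        """Estimated population ratio: sum(y_i) / sum(x_i) (equation 13.4)."""
        return sum(sample_y) / sum(sample_x)

    def estimate_proportion(sample, has_attribute):
        """Estimated population proportion: u / n (equation 13.5)."""
        u = sum(1 for y in sample if has_attribute(y))
        return u / len(sample)

    # Illustrative use with invented values for a five-household sample from a population of 475:
    persons = [2, 5, 2, 1, 1]
    print(estimate_mean(persons))                          # 2.2 persons per household
    print(estimate_total(persons, 475))                    # 1045.0 persons in the population
    print(estimate_proportion(persons, lambda y: y == 1))  # 0.4 of households are one-person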


Example

Suppose that a sample of twenty households has been drawn from a population of 475 households. In the survey, among other things, the households were asked to report the numbers of adults and children in the household, the number of cars owned, the number of licensed drivers in the household, and the number of bicycles owned. The data from this fictitious survey are shown in Table 13.4. From the sample observations, estimates have been produced of the means of each of the variables, which are shown in the penultimate row of Table 13.4.

Thus, the average household size is 2.3 persons, which would also be the estimate of this value for the whole population of 475 households. Similarly, the average number of children is 0.8 per household and the average number of vehicles owned is 1.25 per household. These two values are the values from the sample, and would again be the best estimates of the values of those statistics for the entire population. The total number of cars in the population would be estimated to be 594 (1.25 × 475 = 593.75). The ratio of licensed drivers to adults in the sample is 1.4 ÷ 1.5, or 0.9333. This would be the estimate of this ratio for the total population also. The proportion of households that own bicycles in the sample is 7 / 20, so that the estimated proportion in the total population would also be 7 / 20, or 0.35.

Table 13.4 Data from twenty respondents in a fictitious survey

Sample number  Number of persons  Number of adults  Number of children  Number of vehicles  Number of licensed drivers  Number of bicycles
1    2  2  0  1  2  0
2    5  2  3  2  2  5
3    2  2  0  1  1  0
4    1  1  0  1  1  0
5    1  1  0  1  1  0
6    2  1  1  1  1  0
7    1  1  0  1  1  0
8    3  2  1  2  2  0
9    1  1  0  1  1  0
10   2  1  1  1  2  0
11   2  2  0  1  2  0
12   1  1  0  1  1  0
13   4  2  2  1  2  2
14   2  1  1  1  1  0
15   2  2  0  1  1  1
16   2  2  0  2  2  1
17   2  1  1  1  1  0
18   4  1  3  1  1  1
19   4  2  2  2  2  4
20   3  2  1  2  1  2

Means      2.3    1.5    0.8    1.25   1.4    0.8
Variances  1.379  0.263  1.011  0.197  0.253  2.063


The next thing that needs to be estimated is the sampling error for each of these statistics. This is also referred to quite often as the standard error (s.e.) of the statistic. To understand how to estimate the standard error of the mean of a sample, it would be useful to assume at the outset that a sample has been drawn consisting of one unit of the population (Yates, 1965). Assuming that the mean of the population is actually Ȳ, then the deviation of the sample estimate of the mean would be z_1, where z_1 = y_1 − Ȳ. Assuming that a number of similar samples were to be drawn, the error in the rth sample would be z_r = y_r − Ȳ. If one continued to draw samples until every unit in the population had been sampled, then the mean and sum of the z_r values would have to be zero, because, by sampling every unit and summing the values of the deviations from the mean, one has to end up with no error. If one stops short of drawing all the units in the population but, instead, draws just a number of random samples, then the mean of the values of z_r must be approximately zero. This means that it is being asserted that the random drawing of samples is unbiased.

The simplest measure of the size of the expected error of a sample statistic, such as the mean, would be obtained by squaring the z_r values, so that sign does not play a part, and taking the mean of those values. Suppose, for example, that a sample of two units, r and s, has been drawn. The sample mean from these two units is then ȳ. The actual error of the estimate of the population mean would be given by equation (13.6):

\bar{y} - \bar{Y} = \frac{1}{2}(z_r + z_s)    (13.6)

In other words, it is the mean of the errors from the two units in the sample. In this case, the squared standard error of the estimate of the mean is obtained by squaring this error, which gives equation (13.7):

\left[s.e.(\bar{y})\right]^2 = \frac{1}{4}(z_r + z_s)^2 = \frac{1}{4}\left(z_r^2 + 2z_r z_s + z_s^2\right)    (13.7)

The average values of z_r² and z_s² are each σ², where σ is the standard deviation of the population (because z_r² = (y_r − Ȳ)²). The average value of z_r z_s is zero, which leaves equation (13.8):

\left[s.e.(\bar{y})\right]^2 = \frac{1}{2}\sigma^2    (13.8)

If the number of units sampled was increased to three, the squared standard error of the mean would be given by equation (13.9):

\left[s.e.(\bar{y})\right]^2 = \frac{1}{3}\sigma^2    (13.9)

In general, if the sample size was increased to n units, then the squared standard error of the mean would be given by equation (13.10), and the standard error of the mean by equation (13.11):


\left[s.e.(\bar{y})\right]^2 = \frac{\sigma^2}{n}    (13.10)

s.e.(\bar{y}) = \frac{\sigma}{\sqrt{n}}    (13.11)

Referring back to Chapter 2, it may be recalled that, if b is a multiplier of an attribute of the population, then the standard error of by is b times the standard error of y, assuming that b is a constant that is itself not subject to sampling error. In this case, the population total can be estimated as the expansion factor for the sample (g = N / n) multiplied by the sample total. The sample total is equal to the sample size multiplied by the mean of the attribute of the sample. Hence, the standard error of the population total, Ŷ, is given by equation (13.12):

s.e.(\hat{Y}) = s.e.(gn\bar{y}) = gn\,s.e.(\bar{y}) = \frac{gn\sigma}{\sqrt{n}} = g\sigma\sqrt{n}    (13.12)

Of course, there is a problem with these formulae. They presuppose that the variance of the population, σ², is known, whereas the usual reason for doing a survey is that the attributes of the population are not known, and it is the purpose of the survey to measure the attributes. Therefore, the sample data must be used to obtain a sample estimate of the value of σ². The deviations of the sample values from the sample mean can be estimated and squared. The sum of these squared deviations from the sample mean will be nearly, but not exactly, equal to the sum of the squared values of z, the deviations from the true mean, and will lead to an approximation to the value of σ². In fact, it can be seen that the sum of the squares of the deviations from the sample mean will always be less than the sum of the squares of the deviations from the true population mean, as shown in equation (13.13):

\sum_{i=1}^{n}(y_i - \bar{y})^2 = \sum_{i=1}^{n}(y_i - \bar{Y})^2 - n(\bar{y} - \bar{Y})^2    (13.13)

The average value of the first term on the right-hand side of equation (13.13) is nσ², and the average value of the second term is σ². Hence, the average value of the left-hand side of equation (13.13) is (n – 1)σ². This means that equation (13.13) can be rearranged to give equation (13.14), where s² is the estimate of the variance, σ²:

s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar{y})^2    (13.14)

For computational purposes, it may sometimes be convenient to note that the sum of the squared deviations, Σ(y_i − ȳ)², can be derived as shown in equation (13.15):

\sum_{i=1}^{n}(y_i - \bar{y})^2 = \sum_{i=1}^{n}y_i^2 - n\bar{y}^2    (13.15)

The variances listed in Table 13.4 were estimated using equation (13.14). Using the variances in the table, together with the estimated means, the standard error of each of the means from this fictitious sample of twenty observations can be estimated. For the number of persons in the household, the mean was estimated as 2.3, and the variance was 1.379. Using equation (13.11), this produces the estimate of the standard error of the mean number of persons per household as 0.263 (= √(1.379 / 20)). Similarly, the standard error of the mean number of adults per household is 0.115, while that of the number of children is 0.225. It can be seen rather readily that, because the sample size is the same in each case, the major determinant of the differences in the standard error is the size of the estimated population variance in each attribute.
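These calculations can be reproduced in a few lines of Python (a sketch only, using the sample variances reported in Table 13.4).

    from math import sqrt

    n = 20
    variances = {"persons": 1.379, "adults": 0.263, "children": 1.011}

    # Standard error of each mean, s.e. = sqrt(s^2 / n), following equation (13.11)
    # with the sample variance standing in for the unknown population variance.
    for attribute, s2 in variances.items():
        print(attribute, round(sqrt(s2 / n), 3))
    # persons 0.263, adults 0.115, children 0.225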

Sampling from a finite population

The above equations need to be modified in the event that one is sampling from a relatively small, finite population, especially when the sample size may represent a relatively large proportion of the total population. A factor needs to be introduced into the equations for the standard error of the sample, to reflect the fact that, were the entire population to be chosen for the sample, then the sampling error (standard error) for the entire population must, by definition, be zero. It can be seen quite readily that a factor equal to (1 – f), where f is the sampling rate (f = n / N), will achieve this. When the entire population is included, then f = 1, so that the sampling variance, if multiplied by the factor (1 – f), will be zero. Thus, for example, equation (13.11) will become equation (13.16), in the case of a small finite population, or a large sample, relative to the population size:

s.e.(\bar{y}) = \sigma\sqrt{\frac{1-f}{n}}    (13.16)

Looking back at the sample data in Table 13.4, which it was supposed was drawn from a population of 475 households, it is appropriate to ask if the finite population correction factor is required. In this case, the sampling fraction is 20 / 475, or 0.042. Therefore, (1 – f) is equal to 0.958. Inserting this into equation (13.16) for the number of persons per household, the standard error of the mean becomes 0.257. Previously, without the correction factor, this was estimated to be 0.263. Therefore, the correction factor has reduced the sampling error slightly. However, if the results were rounded to two decimal places in each case, both estimates would be of a standard error of 0.26, so that the finite population correction factor has made only minimal difference in this case.
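A one-line extension of the same calculation applies the finite population correction (again a sketch, using the figures from this example).

    from math import sqrt

    n, N, s2 = 20, 475, 1.379
    f = n / N                                   # sampling fraction, about 0.042
    se_uncorrected = sqrt(s2 / n)               # 0.263
    se_corrected = sqrt((1 - f) * s2 / n)       # equation (13.16): 0.257
    print(round(se_uncorrected, 3), round(se_corrected, 3))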

Sampling error of ratios and proportions

Returning to the original attributes of the population, the equations for estimating the sampling or standard error of the means and totals of the population from a sample have already been shown. However, it is still necessary to estimate sampling or standard errors for ratios and proportions. Again, referring back to Chapter 2, note that the variance of a proportion is given by pq or p(1 – p). Hence, the standard error of a proportion will be given by equation (13.17):

s.e.(p) = \sqrt{\frac{pq}{n}}    (13.17)


If the population is finite and relatively small, then equation (13.17) would become equation (13.18):

s.e.(p) = \sqrt{\frac{(1-f)\,pq}{n}}    (13.18)

For a ratio, it is necessary to determine the variance and standard deviation for two independent variables, similar to the discussion of variables that consist of a sum or difference in variables, as discussed in Chapter 2. Suppose that a ratio is formed from the means of two variables, y_1 and y_2. The variance of the ratio of these two means is given by equation (13.19):

V\!\left(\frac{\bar{y}_1}{\bar{y}_2}\right) = \frac{\bar{y}_1^2}{\bar{y}_2^2}\left[\frac{V(\bar{y}_1)}{\bar{y}_1^2} + \frac{V(\bar{y}_2)}{\bar{y}_2^2}\right]    (13.19)

The standard error of the ratio can then be estimated from equation (13.19), by noting that the standard error of each of the numerator and denominator of the ratio is the square root of the variance divided by the sample size. Hence, equation (13.20) arises for the standard error of a ratio:

s.e.(r) = \frac{\bar{y}_1}{\bar{y}_2}\sqrt{\frac{s_1^2}{n\bar{y}_1^2} + \frac{s_2^2}{n\bar{y}_2^2}}    (13.20)

Again, this is correct only if the two attributes are independent of one another, and requires adjustment with the covariances of the two attributes if they are not independent. It should also be noted that equation (13.20) can be rewritten as equation (13.21):

s.e.(r) = \sqrt{r^2\left(\frac{s_1^2}{n\bar{y}_1^2} + \frac{s_2^2}{n\bar{y}_2^2}\right)}    (13.21)

Returning again to the data of Table 13.4, and noting that the proportion of households owning bicycles was found previously to be 0.35, the standard error of this proportion can be estimated using equation (13.17). This is estimated to be 0.107 (= √(0.35 × 0.65 / 20)). Similarly, it was noted that the ratio of licensed drivers to adults was 1.4 ÷ 1.5, or 0.9333. Using equation (13.21), the standard error of this ratio can be determined. This is shown in equation (13.22):

s.e.(r) = 0.9333\sqrt{\frac{0.263}{20 \times 1.5^2} + \frac{0.253}{20 \times 1.4^2}} = 0.1035    (13.22)

Hence, the standard error of the ratio of licensed drivers to adults is 0.104.
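Both results can be checked with a short calculation (a sketch using the sample figures above; the variable names are chosen for the example).

    from math import sqrt

    n = 20

    # Standard error of the proportion of households owning bicycles, equation (13.17).
    p = 7 / 20
    se_p = sqrt(p * (1 - p) / n)                                     # 0.107

    # Standard error of the ratio of licensed drivers to adults, equation (13.21).
    mean_drivers, var_drivers = 1.4, 0.253
    mean_adults, var_adults = 1.5, 0.263
    r = mean_drivers / mean_adults                                   # 0.9333
    se_r = sqrt(r ** 2 * (var_drivers / (n * mean_drivers ** 2)
                          + var_adults / (n * mean_adults ** 2)))    # 0.104
    print(round(se_p, 3), round(se_r, 3))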


Defining the sample size

One of the most important questions that needs to be answered in designing a survey is that of the sample size that is needed. There are two ways to answer such a question: by a simple consideration of the available budget for the survey; or through the statistics of sampling error. In the first case, one is assuming that the budget-setting process determines the maximum sample that can be afforded and that adherence to this budget is the only consideration of relevance. However, such a method for the determination of sample size is unlikely to be efficient. Budgeting is not a scientific method of determining sample size, and it is quite possible that the budget would define a sample that is either much too small to be useful or much larger than is needed. In the second approach, the sample size is determined by specifying in advance the maximum permitted sampling error that should be allowed to occur in the sample. An examination of the sampling error formulae provided in the previous subsection of this chapter reveals that the sample size is always a part of the definition of sampling error. Therefore, determining the required sample size is algebraically a matter of manipulating those equations to express the sample size n. The most common case to consider is to focus on the sample estimate of the mean (especially since this is also the best estimate of the population mean from a simple random sample).

Returning to equation (13.11), squaring both sides, and then multiplying both sides of the equation by the sample size n, equation (13.23) is the result (with the sample estimate s² standing in for the population variance σ²):

n\left[s.e.(\bar{y})\right]^2 = s^2    (13.23)

Dividing both sides of this equation by the squared standard error of the mean yields equation (13.24):

n = \frac{s^2}{\left[s.e.(\bar{y})\right]^2}    (13.24)

In words, this equation states that the required minimum sample size is given by the estimated variance of the attribute of interest divided by the squared standard error of the mean of that attribute. In the event that the attribute of interest is a probability, a similar manipulation of equation (13.17) yields equation (13.25):

n = \frac{pq}{\left[s.e.(p)\right]^2}    (13.25)

There are three important issues that need to be addressed in order to determine the sample size. First, it is necessary to specify one or more key attributes, the accuracy of which must determine the sample size. Although it would certainly be possible to estimate the required sample size for every attribute to be measured in a survey, this would be expensive and time-consuming, and would yield a different sample size for each attribute, still leaving the survey researcher with the task of deciding which attribute is key and therefore defines the sample size. Because it will not be known in advance which of perhaps three or four key attributes will generate the largest minimum sample size, it is usual to specify several key attributes and estimate the sample size on that basis, and then choose the attribute that generates the largest minimum sample size.

Second, it is necessary to specify the maximum sampling error that is to be incurred. Usually, this is done by specifying a maximum desired error with a statistical confidence level. From this specification, it is possible to determine the maximum standard error that can be permitted from the sample. This provides the second piece of information for determining the sample size. It is most common that the maximum sampling error will be specified in terms of the 95 per cent confidence limit on the sampling error. Because of the normal law of error, in which it is assumed that errors follow a normal distribution, the 95 per cent confidence limit for an error will occur at 1.96 times the standard error of the estimate of the key attribute. To see how this might work out in reality, suppose that it is desired that the estimate of the mean of a key attribute is to be obtained within ±5 per cent with 95 per cent confidence. It is now necessary to know the expected value of the mean. Suppose the key attribute, from the data in Table 13.4, is the number of vehicles per household. From other surveys in other locations, or earlier in this present location, it is believed that the probable mean value of cars per household is 1.4. Five per cent of this value is 0.07 cars per household. Therefore, the maximum acceptable error with 95 per cent confidence is ±0.07 cars per household. This maximum level of error is equivalent to having a maximum standard error of ±0.0357 (= 0.07 / 1.96). This will be the value that must be substituted into equation (13.24).

Third, the formulae also require a knowledge of the variance of the attribute in the population. This poses a difficulty, because normally this information will not be known in advance of the survey. It is therefore necessary to estimate the variance either on the basis of similar populations that have been surveyed and for which an estimate of the population variance is known, or by conducting a pilot survey (see Chapter 12) to produce a rough estimate of the population variance. An exception to this is provided by the situation in which the key attributes are to be measured as a proportion. In this case, as discussed in Chapter 2, the maximum value of the variance is known, and is 0.25, so this value can be used in the absence of any other information on the likely population proportions to determine the sample size. To illustrate, assume that prior information has suggested that the population variance of car ownership is about 0.25. In this case, the required sample size would be given by dividing 0.25 by the square of the desired maximum sampling error discussed previously, or (0.0357)². This would generate a desired sample of 196 households. Because the value of both the mean and the variance of cars per household were estimated values, it would be usual to round the sample size up to the next nearest multiple of ten for samples of up to 1,000, fifty for samples up to 5,000, 100 for samples up to 10,000, etc. In this case, it would probably be stipulated that a sample size of 200 households should be used.
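The calculation described in the last two paragraphs can be wrapped in a small function (a sketch only; the 1.96 multiplier corresponds to 95 per cent confidence under the normal law of error, and the inputs are the assumed mean of 1.4 cars and variance of 0.25 used in the text).

    def required_sample_size(variance, mean, relative_error, z=1.96):
        """Minimum sample size from equation (13.24): variance / (maximum standard error)^2."""
        max_se = (relative_error * mean) / z      # e.g., (0.05 * 1.4) / 1.96 = 0.0357
        return variance / max_se ** 2

    n = required_sample_size(0.25, 1.4, 0.05)     # +/-5 per cent at 95 per cent confidence
    print(round(n))                               # 196, which would be rounded up to 200 in practice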

A further issue is raised in the event that the population is small, or the required sample size is large compared to the population. In such a case, the finite population correction factor must also be introduced into the equations. However, the value of the finite population correction factor cannot be determined until the sample size is known. Define n′ as the unadjusted sample size. Thus, equation (13.16) could be rewritten as equation (13.26), also substituting the estimated standard deviation, s, for the true population standard deviation, σ:

s.e.(\bar{y}) = s\sqrt{\frac{1-f}{n}}    (13.26)

Recalling that f is given by n / N, this means that n′ is given by equation (13.27):

n' = \frac{n}{1 - n/N}    (13.27)

This can be rearranged to define n in terms of n′, as shown in equation (13.28):

n = \frac{n'}{1 + n'/N}    (13.28)

Rearranging equation (13.26) means that n′ is defined as shown in equation (13.29):

n' = \frac{s^2}{\left[s.e.(\bar{y})\right]^2}    (13.29)

It is clear that, except when the population is very small, or the sample size approaches the size of the population, the sample size is dependent only on the variance of the key attribute in the population and inversely on the square of the maximum acceptable sampling error. Again, it is useful to consider some examples.

Examples

Assume that the population from which the survey is to be carried out is only 475 households and that the figures estimated earlier for the sample size, based on car ownership per household as the key attribute, are the appropriate figures. It is now assumed that the estimate of 196 for the sample size is the estimate of n′. Using the last part of equation (13.28) to estimate the actual sample required, calculate 196 / (1 + 196 / 475), which reduces the sample size to 138.7, or 140 rounded up to the nearest ten. Therefore, with a small population, the required sample size is reduced from 200 to 140.
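The adjustment for a small population is a one-line extension (a sketch reproducing the figures of this example).

    n_unadjusted = 196                                    # n' from the earlier calculation
    N = 475                                               # the small population of households
    n_adjusted = n_unadjusted / (1 + n_unadjusted / N)    # equation (13.28)
    print(round(n_adjusted, 1))                           # 138.7, rounded up to 140 in practice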

Still using the data from Table 13.4, assume now that these are the results of a pilot survey of twenty households that was undertaken preparatory to carrying out a main survey from a population of 150,000 households. Also assume that the key attribute is now the average household size, which was 2.3 from the pilot survey. This mean value had a variance of 1.379 in the pilot survey. The maximum acceptable error is stipulated as being ±2 per cent at 95 per cent confidence. In this case, ±2 per cent translates into ±0.046 persons per household. This means that the maximum sampling error can be no more than 0.0235 (= 0.046 / 1.96). The preliminary estimate of sample size is obtained from equation (13.29), and is equal to 1.379 divided by the square of 0.0235, or 2,504 households. Check first to see if this needs to be adjusted, by calculating n from equation (13.28). This would reduce the sample size to 2,463, which would be rounded to the nearest fifty and give a sample of 2,500 households. Hence, the finite population correction factor has made a small difference only in the sample size, which would otherwise have been set at 2,550, from the estimate of 2,504. In general, when the sampling rate is less than about 5 per cent, the finite population correction factor would be ignored. In this case, the sampling rate from the unadjusted sample size estimate is 1.6 per cent, so, practically, the finite population correction factor would have been ignored and a sample of 2,550 assumed to be appropriate.

An example is appropriate to illustrate the use of a proportion as the key attribute. In this case, assume that a survey is to be undertaken of voting intentions for an election that is slated to occur in six months’ time. In this case, it is assumed that there is no prior knowledge of voting intentions, because no previous recent polls have been undertaken, and various issues have emerged since the previous election, so that there is no good basis for assuming a particular split. Assume that the key attribute is the proportion of voters likely to vote for the main opposition party. Assume also that this is an election for a relatively large electorate, with the number of voters about 300,000. Because there is no prior knowledge on which to base an estimate of the likely proportion of voters who will vote for the opposition, the assumption is made that it will be the ‘worst case’, in which approximately 50 per cent of the voters vote for the opposition party. It is also assumed that the result is desired to be known to within ±2 per cent with 95 per cent accuracy. This means that the maximum sampling error that would be accepted is 0.02 divided by 1.96, or ±0.0102. Using equation (13.25), the minimum sample size is determined to be 0.25 divided by (0.0102)², or 2,402 voters. Because the sampling rate from a population of 300,000 is well below 5 per cent, there is no further check made on the finite population correction factor. Instead, the sample size is rounded and it is stipulated that a sample of 2,450 voters is needed. One thing that should be noted here is that this sample size will, then, be adequate whether the population is 300,000, or 3 million, or even 300 million. Second, if a maximum error of ±5 per cent at 95 per cent confidence had been required, this would have produced an estimated sample size of 384 voters. This number of 384 is well known and often quoted (often rounded up to 400) in sampling literature as the minimum sample size required for a sample from a large population, with a desired accuracy of better than ±5 per cent at 95 per cent confidence, when the key attribute is a proportion or probability and the variance is unknown.
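For a proportion with no prior information, the ‘worst case’ variance pq = 0.25 gives the figures quoted here (a sketch of the calculation; the function name is chosen for the example).

    def sample_size_for_proportion(allowable_error, p=0.5, z=1.96):
        """Minimum sample size from equation (13.25), using pq as the variance of a proportion."""
        max_se = allowable_error / z
        return p * (1 - p) / max_se ** 2

    print(round(sample_size_for_proportion(0.02)))   # about 2,400 voters for +/-2 per cent
    print(round(sample_size_for_proportion(0.05)))   # about 384 voters for +/-5 per cent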

It should also be noted from this that the sample size increases as the square of the reduction in acceptable error. The change from ±5 per cent to ±2 per cent resulted in an increase in sample size from around 400 to around 2,500. This suggests that, if the sample size that is estimated for a given level of accuracy and key attribute is more than the budget permits, then a relatively modest decrease in the acceptable level of accuracy (increase in allowable error) may make a substantial difference in the affordability of the survey.

As a further example, suppose that the client for a survey such as the one used here for a political poll had initially stipulated a maximum allowable error of ±2 per cent, but found the resulting sample size of 2,450 to be too high for the budget. By increasing the allowable error to ±2.5 per cent from ±2 per cent, the sample size would decrease from 2,402 to 1,537. If the budget would permit a survey of 1,600 voters, then the change of 0.5 percentage points in the maximum allowable error would now allow the survey to proceed, whereas the previous result would have entailed an over-expenditure of about 50 per cent of the budget.

Because the variance of key attributes in the population is beyond the control of the survey researcher or his or her clients, this discussion of simple random sampling leads to the conclusion that the only thing that is within the control of these parties is the stipulation of the maximum acceptable error for determining a sample size, or the stipulation of the sample size for estimating the actual sampling error. Moreover, if the interest is in reducing sampling error, this falls only by the square root of increases in the sample size, leading to the fact that reductions in sampling error are expensive to obtain. The following subsections of this chapter introduce some alternative sampling methods to simple random sampling that provide further control of the sampling error or the sample size.

13.4.2 Stratified sampling

Types of stratified samples

Stratified sampling refers to a group of sampling techniques in which the population is divided into strata, or segments, either prior to the drawing of the sample or after the sample has been drawn. There are two principal types of stratified sampling: stratified sampling with a uniform sampling fraction, also known as proportionate sampling; and stratified sampling with a variable sampling fraction, also known as disproportionate sampling. The key to both types of sampling is that, within the strata or segments of the population, the sample is drawn as a simple random sample, just as was described in the previous subsection. The important characteristic of both these types of sampling procedure is that samples are drawn from each segment of the population, and the segments are designed so that they are mutually exclusive and exhaustive – i.e., there is no overlap between segments, but each member of the population can be classified into one, and only one, segment.

There are several reasons to stratify the population for sampling. The primary reason is to reduce the sampling error, which is possible through the properties of variances and the concept of population grouping. The specific property of variance that comes into play here is based on a field of statistical study called analysis of variance (ANOVA). Consider a population that is subdivided into a number of groups. For each group it is possible to estimate a mean, and for each group it is also possible to estimate a sum of squared deviations from that mean, and a variance within the group. Suppose a population is subdivided into H groups. There will then be H means for the groups. There is also an overall mean for the population, and an overall sum of squared deviations from the overall mean. The sources of the means and sums of squares can be shown as in Table 13.5. In words, Table 13.5 indicates that the means within groups are derived from within the groups themselves, and that the overall mean is the weighted mean of the means. Within each group, there is a sum of squared deviations from the group mean. Between the groups, there is a sum of squared deviations of the group means from the overall mean. The total sum of squares is then equal to the sum of all the within-group sums of squares and the between-group sum of squares. The implication of this is that grouping cannot change the total sum of squares, but the way in which the groups are chosen can make significant differences in how much of the total sum of squares is accounted for inside the groups and how much is accounted for between the groups.

The key in grouping can best be shown by considering two extremes. In the first case, assume that the groups comprise identical random groupings of the population. In this case, all the within-group means will be equal, and the between-group sum of squares will be zero; all the variability of the population is preserved within the groups. In the second case, assume that the members of each group are identical to one another, but (necessarily) different from each other group. In this case, the within-group sums of squares will be zero and the between-group sum of squares will be equal to the total sum of squares; there is no variability in a group, and all the variability is between the groups. In reality, the first case could be achieved, but it would have little benefit, while the second case could probably be achieved only if each group contained just one member of the population, which would not be helpful in sampling.

The benefit of grouping from a sampling viewpoint is that, if it is possible to form groups of the population that are as homogeneous as possible within the groups, and as heterogeneous as possible between the groups, then much of the population variance will occur between the groups. In estimating sampling errors from a grouped population, only the within-group variability must be taken into account. Hence, grouping offers a mechanism to reduce the population variance that contributes to sampling error. There is no contribution to sampling error from the between-group sum of squares. This constitutes the primary reason for stratifying a population for sampling purposes.

Table 13.5 Sums of squares for population groups

Source           Mean                                                          Sums of squares
Within groups    \bar{y}_1, \bar{y}_2, \bar{y}_3, \ldots, \bar{y}_h, \ldots, \bar{y}_H      \sum_{i=1}^{I}(y_{i1}-\bar{y}_1)^2, \sum_{i=1}^{I}(y_{i2}-\bar{y}_2)^2, \ldots, \sum_{i=1}^{I}(y_{iH}-\bar{y}_H)^2
Between groups   –                                                             \sum_{h=1}^{H}(\bar{y}_h-\bar{y})^2
Total            \bar{y} = \sum_{h=1}^{H} n_h\bar{y}_h \,/\, \sum_{h=1}^{H} n_h              \sum_{h=1}^{H}\sum_{i=1}^{I}(y_{ih}-\bar{y})^2

However, there are a number of secondary purposes for stratification. First, it may be desirable to use a different sampling method for different subgroups of the population. Second, it may be that some subgroups of the population are sufficiently small that it is expected a priori that a simple random sample of the population will produce too few observations of the subgroup in the final sample for any reliable statements to be made about that subgroup. Third, it may be desired to obtain the same accuracy in each subgroup when either the variances are the same in each subgroup or the variances are quite different. In either case, a simple random sample will quite likely not produce the same sampling error for each subgroup. If it is desired that the statistics for each subgroup are of the same accuracy – i.e., the sampling errors are the same – then stratified sampling offers a means to achieve this. Illustrations of these situations are provided within the following subsections of this chapter.

Study domains and strata

A study domain is a grouping of the population that produces subgroups that are of primary interest to the study being undertaken. For example, in a household survey, study domains might be defined by the number of people living in a household. Thus, one study domain might be one-person households, another would be two-person households, another three-person households, etc. In a household travel survey, the study domains might consist of households with varying numbers of cars available to them, such as non-car-owning households, one-car-owning households, etc. In a political survey, study domains may be voters registered with different parties, or voters who express a desire to vote for particular positions.

Usually, a study domain defines a group of the population that is considered essential to be included in the sample in sufficient numbers to permit the estimation of valid statistics about the group, by itself. The principal way in which obtaining a sufficient sample in each study domain can be achieved is to stratify by study domain. Thus, stratification by the characteristics that define study domains is normally the procedure that would be adopted in a stratified sample.

Weighted means and variances
When a population is divided into subgroups, and statistics are estimated for each subgroup by itself, the population statistics must be obtained by estimating weighted means and variances. In most cases, simply summing the statistics from the groups, producing an unweighted sample mean or an unweighted variance, will give an incorrect estimate for the total population. Assuming that the weight for the hth stratum or subgroup is Wh, then the weighted mean is given by equation (13.29):

$\bar{y}_w = \sum_{h=1}^{H} W_h \bar{y}_h$    (13.29)


Similarly, using the properties of variances discussed in Chapter 2, the weighted variance of the mean is given by equation (13.30):

$\operatorname{var}\left(\bar{y}_w\right) = \sum_{h=1}^{H} W_h^2 \operatorname{var}\left(\bar{y}_h\right)$    (13.30)

Given these expressions for each of the weighted mean and the weighted variance, the remaining item that needs to be defined is the weight itself. The weight should be that number that would permit each subgroup to occur in the population in the correct proportion. The number of population members in each stratum, Nh, must add up to the total population. Thus, equation (13.31) must hold:

$N_1 + N_2 + N_3 + \ldots + N_H = \sum_{h=1}^{H} N_h = N$    (13.31)

If the sample size in each stratum or subgroup is denoted by nh, then equation (13.32) also holds:

$n_1 + n_2 + n_3 + \ldots + n_H = \sum_{h=1}^{H} n_h = n$    (13.32)

In other words, the sum of the samples in each of the strata is the total sample size. The proportion of the sample that should appear in stratum h is clearly Nh / N. If each of the sample members were to be multiplied by this fraction for its stratum, then the sample would have the correct proportions in each stratum. By definition, this implies that the ratio Nh / N is the weighting factor for each stratum – i.e., equation (13.33) is true:

$W_h = \frac{N_h}{N}$    (13.33)

It also follows that the sum of the weights should usually equal one. Clearly, this is true for a weight defined by equation (13.33), because the sum of the Nh is N, so the sum of the weights will be equal to N / N = 1. This means that equations (13.29) and (13.30) can be rewritten as equations (13.34) and (13.35):

$\bar{y}_w = \frac{1}{N}\sum_{h=1}^{H} N_h \bar{y}_h$    (13.34)

$\operatorname{var}\left(\bar{y}_w\right) = \frac{1}{N^2}\sum_{h=1}^{H} N_h^2 \operatorname{var}\left(\bar{y}_h\right)$    (13.35)

In the event that the samples from each stratum are drawn in exactly the same proportion as the population in each stratum – i.e., that nh / n = Nh / N – then it will follow that the weighted mean is simply given by equation (13.36) and the weighted variance by equation (13.37):

$\bar{y}_w = \sum_{h=1}^{H}\frac{n_h}{n}\,\bar{y}_h$    (13.36)

$\operatorname{var}\left(\bar{y}_w\right) = \sum_{h=1}^{H}\left(\frac{n_h}{n}\right)^2 \operatorname{var}\left(\bar{y}_h\right)$    (13.37)

As will be seen shortly, this is a useful fact to keep in mind in stratified sampling. It should also be noted that, if all the strata are sampled at the same rate, then this means that equation (13.38) holds true:

$\frac{n_h}{N_h} = \frac{n}{N}, \quad h = 1, \ldots, H$    (13.38)

In this case, we can replace Nh / nh with N / n, and equation (13.34) becomes equation (13.39):

$\bar{y}_w = \frac{1}{N}\sum_{h=1}^{H} N_h \bar{y}_h = \frac{1}{N}\sum_{h=1}^{H}\frac{N}{n}\, n_h \bar{y}_h = \frac{1}{n}\sum_{h=1}^{H} n_h \bar{y}_h = \frac{1}{n}\sum_{h=1}^{H}\sum_{i=1}^{n_h} y_{ih}$    (13.39)

This shows that weights are required in the computation of the mean and the variance only when the sample is drawn disproportionately from each stratum.
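
To make the use of these formulae concrete, the short sketch below (not from the text; all input values are hypothetical) computes a weighted mean and its standard error from stratum summaries, following equations (13.29), (13.33), and (13.41).

```python
# Illustrative sketch: weighted mean and its standard error from stratum summaries.
# The stratum sizes, sample sizes, means, and variances below are hypothetical.
import math

strata = [
    {"N": 5000, "n": 50, "mean": 2.1, "var": 1.8},
    {"N": 3000, "n": 30, "mean": 3.4, "var": 2.9},
    {"N": 2000, "n": 20, "mean": 4.2, "var": 3.6},
]

N = sum(s["N"] for s in strata)                      # total population
weights = [s["N"] / N for s in strata]               # W_h = N_h / N, equation (13.33)

# Weighted mean, equation (13.29)
y_w = sum(W * s["mean"] for W, s in zip(weights, strata))

# Standard error of the weighted mean, equation (13.41)
se_y_w = math.sqrt(sum((s["N"] ** 2) * s["var"] / s["n"] for s in strata)) / N

print(f"weighted mean = {y_w:.3f}, s.e. = {se_y_w:.3f}")
```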

Stratified sampling with a uniform sampling fraction
Drawing the sample

In stratified sampling with a uniform sampling fraction, the same sampling rate is used in each stratum. Therefore, the method of drawing a random stratified sample with a uniform sampling fraction would be to first identify to which stratum each member of the population belongs, and then to number the population members sequentially within each stratum. A simple random sample is then drawn by the same method as described for simple random samples earlier in this chapter.

A stratified sample with a uniform sampling fraction can be drawn using SRS only if the population is finite and it is possible to list all members of the population. For sampling as described here, it is also necessary to be able to classify each member of the population into the appropriate subgroup or stratum prior to drawing the sample, so that the sample can be drawn by stratum. However, an exception to this requirement is discussed later in this subsection.

This also means that the sample within each stratum can be treated as a simple random sample from the subgroup of the population defined by the stratum. The conditions required for stratified sampling with a uniform sampling fraction to be the same as simple random sampling from each stratum are:

(1) all elements are sampled randomly; and
(2) all elements have an equal probability of being selected at each draw of the sample within each stratum.

Stratified sampling with a uniform sampling fraction means that the identical sampling fraction is used in each stratum – i.e., that equation (13.38) holds true. Equation (13.38), together with the conditions of stratified sampling with a uniform sampling fraction, can also be written as equation (13.40):


$\frac{n_h}{n} = \frac{N_h}{N}$    (13.40)

It is therefore also known as proportionate sampling, because each element is represented in the sample in the same proportion as in the total population. This means that no weighting is required in a proportionate sample, except to the extent that a small random sample will only approximate the correct proportions. This is discussed further later in this subsection.

Estimating population statistics and sampling errors
If the interest is only in means, sampling errors, and other statistics within the stratum itself, then the formulae that apply are exactly the same as those already described for a simple random sample. In other words, equations (13.1), (13.2), (13.4), and (13.5) for population values and equations (13.11), (13.12), (13.16), (13.17), (13.18), and (13.21) for the standard errors will all hold for values within any specific stratum. However, if the interest is in population values, then, based on equation (13.35) and recalling that the variance of the mean in a stratum is equal to $s_h^2 / n_h$, the standard error of the mean is given by equation (13.41):

$\text{s.e.}\left(\bar{y}_w\right) = \left[\frac{1}{N^2}\sum_{h=1}^{H}\frac{N_h^2 s_h^2}{n_h}\right]^{1/2}$    (13.41)

Recalling also that, for a proportionate sample, equation (13.40) holds true, then that equation can be rearranged to replace Nh, as shown in equation (13.42):

$N_h = \frac{N}{n}\, n_h$    (13.42)

Substituting this expression for Nh in equation (13.41) yields equation (13.43):

$\text{s.e.}\left(\bar{y}_w\right) = \left[\frac{1}{N^2}\sum_{h=1}^{H}\frac{N^2 n_h^2 s_h^2}{n^2 n_h}\right]^{1/2} = \left[\frac{1}{n^2}\sum_{h=1}^{H} n_h s_h^2\right]^{1/2}$    (13.43)

Thus, if there are unequal variances in the strata, then the standard error of the weighted mean of an attribute measured by the survey is given by equation (13.44):

$\text{s.e.}\left(\bar{y}_w\right) = \frac{1}{n}\left[\sum_{h=1}^{H} n_h s_h^2\right]^{1/2}$    (13.44)

Equation (13.44) is modified to equation (13.45) if the sample is large relative to the population:

$\text{s.e.}\left(\bar{y}_w\right) = \frac{1}{n}\left[\left(1 - f\right)\sum_{h=1}^{H} n_h s_h^2\right]^{1/2}$    (13.45)


If the variances in each stratum are equal (a relatively unusual situation), then equation (13.44) becomes equation (13.46), where $s_e^2$ is the common variance in each stratum:

$\text{s.e.}\left(\bar{y}_w\right) = \frac{1}{n}\left[s_e^2\sum_{h=1}^{H} n_h\right]^{1/2} = \frac{1}{n}\left[n\, s_e^2\right]^{1/2} = \frac{s_e}{\sqrt{n}}$    (13.46)

If the population is small relative to the sample size, then equation (13.46) becomes equation (13.47):

$\text{s.e.}\left(\bar{y}_w\right) = s_e\sqrt{\frac{1 - f}{n}}$    (13.47)

Comparing equation (13.46) with equation (13.11), it can be seen that the only difference is that se has replaced s. This provides a clue to the effects of stratification. Provided that se is less than s, then the standard error of the mean of the stratified sample will be smaller than the standard error of a simple random sample. Of course, it is not possible, as the analysis of variance earlier in this chapter showed, for se to be greater than s. In the event that the stratification is purely random, and each stratum contains a random selection of the entire population, then se will be the same as s. However, if stratification succeeds in placing population members who are more similar to one another into the same stratum, while each stratum is different from each other stratum, then it follows that se must be less than s, and the more similar population members within a stratum are to each other, and the more different the strata are to each other, the greater will be the difference between se and s. Herein lies the principal benefit of stratification. The more effectively a population can be grouped, the more the standard errors of the sample estimates can be decreased through stratification, without increasing sample size.
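
The following sketch (illustrative only; the data are simulated, not from the text) shows this effect numerically: when strata are internally homogeneous but differ from one another, the pooled within-stratum standard deviation is well below the overall standard deviation, and the stratified standard error shrinks accordingly.

```python
# Illustrative sketch: compare the overall standard deviation s with the pooled
# within-stratum standard deviation s_e for hypothetical grouped data.
import math
import random
import statistics

random.seed(1)
# Three hypothetical strata whose means differ but whose internal spread is small
strata_data = [
    [random.gauss(2.0, 0.5) for _ in range(200)],
    [random.gauss(5.0, 0.5) for _ in range(200)],
    [random.gauss(9.0, 0.5) for _ in range(200)],
]

pooled = [y for stratum in strata_data for y in stratum]
n = len(pooled)

s = statistics.stdev(pooled)                       # overall standard deviation
# Pooled within-stratum standard deviation s_e
within_var = sum(statistics.variance(g) * (len(g) - 1) for g in strata_data) / (n - len(strata_data))
s_e = math.sqrt(within_var)

print(f"s   = {s:.3f}  ->  s.e. of SRS mean        = {s / math.sqrt(n):.4f}")
print(f"s_e = {s_e:.3f}  ->  s.e. of stratified mean = {s_e / math.sqrt(n):.4f}")
```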

Pre- and post-stratification
Proportionate sampling produces what is known as a self-weighted sample, because the population mean is the same as the sample mean and no weighting is required. This has an important implication for proportionate sampling. It means that it is not, in fact, necessary to sort the sample elements into their strata to estimate the population mean. Hence, if the population mean is the only estimate required from the survey, then it is not necessary to know in advance of the survey into which stratum each population member falls. This can be determined as part of the survey itself. Of course, this is true if and only if the sampling is undertaken as simple random sampling, and there are no biases present from any source.

On the other hand, to estimate the standard errors of population estimates, it is necessary to estimate the within-stratum variances. In addition, to ensure that the sample is truly a proportionate sample, it is necessary to know the stratum to which each sampled element belongs at the time that the survey is conducted. What all this means is that, in the special case of proportionate sampling, it is not necessary to know stratum membership in advance of the survey. An illustration of this might be helpful. Suppose


one were to undertake a survey of students at a university. It may be decided to stratify students by the department to which they belong, on the basis that the measures to be surveyed are likely to be more similar among students within a department than between students in different departments. Supposing that the university has a listing of students by home department, then a stratified sample can be drawn easily by numbering the students within each department and sampling from the students in each department. If this sample is drawn at the same rate from each department, then the resulting sample is a proportionate sample, or a stratified sample with uniform sampling fraction.

However, taking advantage of the property just described, it would also be possible to draw a sample from a listing of all the students in the university without knowledge of the department to which each student belongs, and to ascertain their departments as part of the survey. It would be necessary to know the number of students in each department, to make sure that the final sample met the requirements of a proportionate sample. This could be controlled further by estimating the number of students required from each department. Once the number required in a department has been reached, no further students from that department will be sampled.

This can reduce the cost of proportionate sampling significantly. For example, many household-based surveys could be stratified by a particular attribute of households. In many instances, the periodic national censuses may include statistics on the attribute chosen for stratification. However, to protect the confidentiality of the census records, information cannot be obtained from the census to identify which households possess particular values of the stratification attribute. The census will provide totals of households by different values of the attribute, such as by census district, or other census geography. This will provide the control totals that define the number of households needed in the sample from each attribute value. The survey could then proceed by recruiting households for the survey and identifying the value of the stratification attribute as part of the recruitment process. The sample can then be controlled to produce a proportionate sample.

This mechanism leads to the further notion of post-stratification. Suppose that a sample is drawn as a simple random sample from a population. Suppose also that there is other information available, possibly from a census, of the sizes of certain subgroups in the population – e.g., in a student population, the gender and age of each student might be known from university records. The question now is whether having such information can be used to improve the sample estimates.

The answer is that this information can most certainly be used to improve the estimates from the sample. Having completed the SRS survey, the sample cases can now be sorted into the subgroups, for which population information is known – e.g., the age and gender of the students. The number of students by gender and age is known for the sample, giving nh students in the hth stratum, and the total number of students in that stratum, Nh, is also known. That means that weights, Wh = Nh / N, can be computed for each stratum. The sample can now be treated as a stratified sample with a uniform sampling fraction, using the standard error formula in equation (13.35).
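
A minimal sketch of this post-stratification step is given below; the stratum labels, population counts, and sample values are hypothetical and serve only to show the mechanics of sorting an SRS into strata and applying the weights Wh = Nh / N.

```python
# Illustrative sketch: post-stratifying a simple random sample using known
# stratum population counts (e.g., from university records). All values hypothetical.
from collections import defaultdict

# Known population counts per stratum (secondary information)
N_h = {"female_under25": 6000, "male_under25": 5000,
       "female_25plus": 2500, "male_25plus": 1500}
N = sum(N_h.values())

# Simple random sample: (stratum label, observed value) pairs - hypothetical data
sample = [("female_under25", 3.2), ("male_under25", 4.1), ("female_25plus", 2.7),
          ("male_under25", 3.9), ("female_under25", 2.9), ("male_25plus", 5.0)]

# Sort the cases into strata after the survey has been completed
by_stratum = defaultdict(list)
for label, y in sample:
    by_stratum[label].append(y)

# Weighted (post-stratified) mean: sum over strata of W_h * ybar_h
y_w = sum((N_h[h] / N) * (sum(ys) / len(ys)) for h, ys in by_stratum.items())
print(f"post-stratified mean = {y_w:.3f}")
```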


This is an example of using secondary information to improve sample estimates. However, a note of caution is required relating to the size of the strata in post-stratification. There should be no fewer than ten elements in each stratum. As long as this minimum number is observed, post-stratification should work well as a method to improve sample estimates. Nevertheless, it is appropriate to ask the question as to how much it will improve the estimates. Hansen, Hurwitz, and Madow (1953) show that the variance of the weighted mean, in the case of post-stratification, is given by equation (13.48):

$\operatorname{var}\left(\bar{y}_w\right) = \frac{1-f}{n}\sum_{h=1}^{H} W_h s_h^2 + \frac{1-f}{n}\sum_{h=1}^{H}\frac{W_h\left(1 - W_h\right)}{n_h}\, s_h^2$    (13.48)

The difference between this equation and the one normally used for stratified sampling with a uniform sampling fraction is the second term. Irrespective of the values of Wh and sh, if the values of nh are large, then this term will become very small and can be ignored. The term is present because of the fact that, in post-stratification, the sample size in each stratum is not exactly correct. (If the sampling was performed using stratification, then, as explained in drawing the sample, effort would have been made to get the sample sizes exactly proportionate in each stratum. When undertaking post-stratification, the sample sizes will not be exactly correct.)

In the event that the nh are not large, at least for some strata, then the second term may be large, and could be sufficiently large to double the variance of the mean. As with pre-stratification, there is only a gain if sh is smaller, if not considerably smaller, than s. The smaller that sh is in comparison to s, the greater will be the gains of post-stratification, even if the strata sample sizes are relatively small.

To perform post-stratification, the survey researcher must have knowledge of the actual stratum sizes, so as to be able to estimate the weights, Wh, and also of the information that will allow members of the sample to be classified into the strata. However, post-stratification does not require every member of the population to be classified into strata, and also does not require sorting into strata either before or at the time of sampling. Therefore, post-stratification may be a useful procedure to reduce sample error, without increasing the cost of sampling.

Example

Suppose a sample is being drawn from a college with four departments. Suppose that the college has a total of 1,000 students, with 500 from the first department, 300 in the second department, and 100 in each of the remaining two departments. A random sample of 200 students is to be drawn from a list of all students in the college, with department not shown. A random sample is drawn and the students are surveyed. Upon analysis of the data, it is found that the sample contains 110 students from the first department, fifty-five from the second, twenty from the third, and fifteen from the fourth. The total number of students in each department is known from other information.


The weights for the four departments are 0.5, 0.3, 0.1, and 0.1, respectively, and using stratified sampling with uniform sampling rates would have produced samples of 100, sixty, twenty, and twenty, respectively, from the four departments. However, using post-stratification, equation (13.48) is applied to the results. Inserting the weights and the actual sample sizes in equation (13.48) produces equation (13.49):

$\operatorname{var}\left(\bar{y}_w\right) = \frac{1 - 0.2}{200}\left(0.5\, s_1^2 + 0.3\, s_2^2 + 0.1\, s_3^2 + 0.1\, s_4^2\right) + \frac{1 - 0.2}{200}\left(\frac{0.5 \times 0.5}{110}\, s_1^2 + \frac{0.3 \times 0.7}{55}\, s_2^2 + \frac{0.1 \times 0.9}{20}\, s_3^2 + \frac{0.1 \times 0.9}{15}\, s_4^2\right)$    (13.49)

Gathering the terms and multiplying out the numerators in the second part of the expression, equation (13.49) can be simplified to equation (13.50):

$\operatorname{var}\left(\bar{y}_w\right) = \frac{0.8}{200}\left(0.502\, s_1^2 + 0.304\, s_2^2 + 0.105\, s_3^2 + 0.106\, s_4^2\right)$    (13.50)

Comparing this to the first term of equation (13.49), it is clear that the post-stratification will have added only a very small amount to the variance of the mean of the attribute. Thus, in this case, if the within-department variances had been smaller than the total variance, post-stratification would have improved the estimates of the population values, and decreased the standard errors of those values.
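
As a check on the arithmetic, the following sketch recomputes the combined coefficients of the sh² terms for this example, using the form of equation (13.48) shown above; only the weights, realized sample sizes, and sampling fraction from the example are used.

```python
# Illustrative check of the college example, using the form of equation (13.48)
# given above. Weights, realized sample sizes, and the sampling fraction are
# taken from the worked example.
W = [0.5, 0.3, 0.1, 0.1]          # stratum weights
n_h = [110, 55, 20, 15]           # realized sample sizes after post-stratification
n, N = 200, 1000
f = n / N                         # sampling fraction = 0.2

# Coefficient multiplying each s_h^2 once both terms are gathered under (1 - f)/n,
# as in equation (13.50)
coeffs = [w + w * (1 - w) / nh for w, nh in zip(W, n_h)]
print([round(c, 3) for c in coeffs])        # approximately [0.502, 0.304, 0.105, 0.106]
print(f"common factor (1 - f)/n = {(1 - f) / n}")
```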

Equal allocation
Equal allocation is the term applied to stratified sampling when the stratum sample sizes are all identical. In such a case, the stratum sample sizes nh would be replaced by the common sample size, nc. All the weights would be equal, and would be the inverse of the number of strata, so Wh = 1 / H for all h. In this case, the variance of the weighted mean can be rewritten as shown in equation (13.51):

$\operatorname{var}\left(\bar{y}_w\right) = \frac{1-f}{n_c}\sum_{h=1}^{H} W_h^2 s_h^2 = \frac{1-f}{n_c H^2}\sum_{h=1}^{H} s_h^2 = \frac{1-f}{n_c H^2}\sum_{h=1}^{H}\frac{\sum_{i=1}^{n_c}\left(y_{ih} - \bar{y}_h\right)^2}{n_c - 1}$    (13.51)

This can be further manipulated to equation (13.52):

$\operatorname{var}\left(\bar{y}_w\right) = \frac{1-f}{n_c H^2\left(n_c - 1\right)}\sum_{h=1}^{H}\sum_{i=1}^{n_c}\left(y_{ih} - \bar{y}_h\right)^2$    (13.52)

Equal allocation will minimise the standard errors of sample estimates only if the variances in each stratum are also equal. Equal allocation can, of course, occur in proportionate sampling only when the strata are of identical size in the total population. Equal allocation is discussed further under stratified sampling with a variable sampling fraction.


Summary of proportionate sampling
Stratified sampling with uniform sampling rates is also known as proportionate sampling. It is a useful method to reduce sampling error. It can be applied either prior to sampling or after a simple random sample has been used. The actual sample achieved with proportionate sampling should look very similar in composition to a simple random sample, differing only to the extent that a random selection will not provide exactly correct proportions of each stratum in the population. Elements of the sample will require prior classification in pre-stratification designs, but do not require prior classification in post-stratification designs. Post-stratifying a simple random sample is a simple and effective way to use secondary information to improve sample estimates.

Stratified sampling with variable sampling fraction
In contrast to proportionate sampling, stratified sampling with variable sampling fraction means that a different sampling rate is used in drawing the sample from each stratum. It is generally used when the strata are of radically different population sizes, or the variances in each stratum are known to be very different. In addition to allowing the standard errors to be reduced, stratified sampling with a variable sampling fraction may be used to obtain similar standard errors in each stratum, or to ensure that a minimum sample size is achieved in each stratum. Stratified sampling with variable sampling fraction is also known as disproportionate sampling, because the resulting samples will not be in the same proportions as the strata in the population.

Drawing the sample
The first step in drawing the sample is to define the strata to be used, and then to determine the sample size required in each stratum. Normally, it is necessary that the population is pre-classified into the strata, and then a simple random sample is drawn from the population within each stratum, using the appropriate sampling rate.

As with proportionate sampling, drawing a simple random sample within each stratum means that the individual stratum samples can be treated as being the same as a random sample from a sub-population. The same conditions as described for proportionate sampling also apply to disproportionate sampling for being able to treat the individual stratum samples as random samples from the sub-populations, namely:

(1) all elements are sampled independently, and
(2) all elements have an equal probability of being selected at each stage of sampling within a stratum.

However, unlike proportionate sampling, the relationship between the sample sizes in each stratum relative to the total sample and the population sizes of each stratum relative to the total population does not hold, as shown by equation (13.53):

$\frac{n_h}{n} \neq \frac{N_h}{N}$    (13.53)


This means that the weights in each stratum can be determined from the expansion factor for each stratum. Assuming that the expansion factor for stratum h is gh, where gh = 1 / fh and fh is the sampling rate in stratum h, then the weights for each stratum will be equal to gh = Nh / nh.

Estimating population statistics and sampling errors
As for proportionate sampling, if the interest is only in the means, totals, probabilities, and ratios within each stratum and the standard errors of those quantities, then the formulae for simple random samples will still apply within the stratum. However, if the interest is in statistics for the entire population, then the estimation requires application of the weighted values, with the weights defined as indicated in the preceding subsection of this chapter. For example, the weighted mean will be given by equation (13.54), while the weighted probability is given by equation (13.55):

$\bar{y}_w = \frac{1}{N}\sum_{h=1}^{H} g_h n_h \bar{y}_h = \frac{1}{N}\sum_{h=1}^{H} g_h\sum_{i=1}^{n_h} y_{ih}$    (13.54)

$p_w = \frac{1}{N}\sum_{h=1}^{H} g_h u_h$    (13.55)

Other expressions for the population total, and a ratio, will follow the same rules for calculating weighted values. Similarly, the standard error of the mean is given by equation (13.56):

$\text{s.e.}\left(\bar{y}_w\right) = \frac{1}{N}\left[\sum_{h=1}^{H} g_h^2 n_h s_h^2\right]^{1/2}$    (13.56)

This can also be written in terms of Nh and nh, by substituting for gh in equation (13.56), to produce equation (13.57):

$\text{s.e.}\left(\bar{y}_w\right) = \frac{1}{N}\left[\sum_{h=1}^{H}\frac{N_h^2 s_h^2}{n_h}\right]^{1/2}$    (13.57)

As should be clear from these formulae, it is necessary to know the total population within each stratum, as well as the total population across all strata. This also means that this type of sampling can be done only for a finite population, and with secondary data that provide the strata and total population sizes.
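
The sketch below (illustrative only; all numbers are hypothetical) applies equations (13.54) to (13.56) to a disproportionate sample, using the expansion factors gh = Nh / nh.

```python
# Illustrative sketch: estimates from a disproportionate (variable sampling fraction)
# stratified sample, using expansion factors g_h = N_h / n_h. Numbers are hypothetical.
import math

strata = [
    # N_h, n_h, stratum sample mean, stratum sample variance s_h^2, count with attribute u_h
    {"N": 40000, "n": 200, "mean": 2.4, "var": 1.9, "u": 35},
    {"N": 15000, "n": 300, "mean": 3.8, "var": 3.1, "u": 90},
    {"N": 5000,  "n": 250, "mean": 5.1, "var": 4.4, "u": 120},
]

N = sum(s["N"] for s in strata)
for s in strata:
    s["g"] = s["N"] / s["n"]                      # expansion factor g_h

# Weighted mean, equation (13.54)
y_w = sum(s["g"] * s["n"] * s["mean"] for s in strata) / N

# Weighted probability (proportion with the attribute), equation (13.55)
p_w = sum(s["g"] * s["u"] for s in strata) / N

# Standard error of the weighted mean, equation (13.56)
se_y_w = math.sqrt(sum(s["g"] ** 2 * s["n"] * s["var"] for s in strata)) / N

print(f"mean = {y_w:.3f}, proportion = {p_w:.3f}, s.e.(mean) = {se_y_w:.4f}")
```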

Non-coincident study domains and strata
Although it is certainly quite common for the study domains and convenient strata to be the same as one another, this will not always be the case. In proportionate sampling this was not a significant issue. However, because of the different sampling rates in each stratum, this requires special treatment in the case of disproportionate sampling.


To illustrate the notion, an example is drawn from political polling. Suppose that the study domains of interest are the voting intentions with respect to three major parties in an upcoming election. However, these voting intentions will be determined only as part of the survey itself. Therefore, they cannot be used conveniently as a mechanism for stratification. Indeed, it is possible that the voting intention may be the last question asked in the survey, to avoid potential biases to other questions relating to the acceptability of party leaders, or support for or opposition to particular policies. Instead, using data available from secondary sources, the most convenient stratification variable is household income groupings. This will mean that the study domains – voting intentions – and the strata – household income groupings – will not coincide. Members of each study domain may be found scattered across all income strata. If a uniform sampling fraction was being used, there would be no problem from this. However, when a variable sampling fraction is being used, this requires a more complex approach. It will, in fact, be necessary to estimate the sampling error contributed to each study domain from each stratum.

To develop the appropriate formula, some additional notation must be introduced. First, the within-stratum variance of units in the study domain that fall in the hth stratum is denoted $s'^2_h$. Second, the proportion of units in the hth stratum that are not in the study domain of interest is denoted $q'_h$. Using this notation, the variance of the weighted mean, $\bar{y}'_w$, for a study domain is given by equation (13.58).

$\operatorname{var}\left(\bar{y}'_w\right) = \frac{1}{N'^2}\sum_{h=1}^{H} g_h^2\, n'_h\left[s'^2_h + q'_h\,\bar{y}'^2_h\right]$    (13.58)

where $n'_h$ is the number of sampled units of the study domain falling in the hth stratum, $\bar{y}'_h$ is their mean, and $N' = \sum_{h=1}^{H} g_h n'_h$ is the estimated population size of the study domain.

If, in any stratum, there are no members of a study domain, then the value of $n'_h$ will be zero for that value of h, $q'_h$ will be equal to one for that value of h, and $s'^2_h$ will also be zero for that value of h. Hence, that stratum will not contribute to the error, as is appropriate.

Optimum allocation and economic design
Disproportionate sampling can be used to obtain an optimum allocation of the sample among the strata, with optimality defined as achieving the least variance for the overall cost. Thus, an optimum sample would represent the cheapest sample that could be obtained that minimises the sampling error of a key attribute. Because the sampling error in a stratum is defined as being proportional to the standard deviation within the stratum for the key attribute and as being inversely proportional to the square root of the sample size in the stratum, equation (13.59) is true, where the sampling rate is the ratio of the sample size in the stratum to the total population in the stratum, and k is a constant of proportionality:

$f_h = \frac{n_h}{N_h} = k\, s_h$    (13.59)


This means that an optimal allocation of sample will occur if the sampling rate in each stratum is proportional to the standard deviation of the key attribute within each stratum. Determining the actual sample size in each stratum will then depend on the total sample size that is desired. That is, we must know the sum of the nh values for the entire population. Usually, this will be a matter of budget. However, if the total error that results from an optimal design with a fixed budget is less than what is acceptable, then the sample size could be decreased proportionately to achieve a lower overall survey cost. It is also necessary to have information on the expected standard deviations of the key variables in each stratum. These values may be obtained from a pilot survey, or from a previous survey in the same location, or a similar survey conducted elsewhere, but with a population that is expected to have similar values for the standard deviations. An example is useful to illustrate how optimum allocation works.

Example

Suppose that a household travel survey is to be carried out. Based on budgetary considerations, a sample of 2,000 households is considered to be the maximum that can be afforded. For this survey, it is assumed that the costs of obtaining a sample in each stratum are identical – i.e., that there is no difference in average unit survey costs from stratum to stratum. Suppose, further, that it is decided to stratify the sample by household size. Data from a survey carried out ten years previously provide a rough estimate of the standard deviations for the key attribute for each stratum. The key attribute is the number of trips made per day by members of the household. Recalling that the weights for the strata are defined as the ratio of the stratum population to the total population, the known information is shown in Table 13.6.

As a first approximation to obtain the sample sizes, the constant of proportionality k can be assumed to equal one. Noting from equation (13.59) that the sample size in each stratum can be estimated from equation (13.60), a new column is added to the table with the value Nhsh.

$n_h = k\, N_h s_h$    (13.60)

The value Nhsh is proportional to the sample size required, but it is necessary to multiply each of these values by the ratio of the maximum sample size desired to the sum of the values of Nhsh. This gives the estimate of the values of nh, as shown in Table 13.7. The allocation shown in the last column of Table 13.7 represents the optimal allocation of the sample, given this information. (The reader who attempts to reproduce the numbers shown here should use more precise calculations of the value of sh than are shown in the table itself.)
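
The calculation just described can be reproduced directly from the data of Table 13.6, as in the sketch below (an illustration, not part of the text), which scales Nhsh to the budgeted total of 2,000 households.

```python
# Illustrative calculation of the optimal allocation in Table 13.7, using the stratum
# populations and variances of Table 13.6 (n_h proportional to N_h * s_h).
import math

N_h = [150_000, 185_000, 230_000, 172_000, 133_000]   # households by household size
var_h = [2.37, 4.72, 5.99, 6.85, 8.05]                 # s_h^2 from the earlier survey
n_total = 2000                                         # budgeted sample size

s_h = [math.sqrt(v) for v in var_h]
Ns = [N * s for N, s in zip(N_h, s_h)]                 # N_h * s_h, proportional to n_h

n_h = [round(n_total * x / sum(Ns)) for x in Ns]       # scale so the sample sums to ~2,000
print(n_h)            # approximately [228, 397, 556, 445, 373], as in Table 13.7

# Expected standard error of each stratum mean, s_h / sqrt(n_h), as in Table 13.8
print([round(s / math.sqrt(n), 3) for s, n in zip(s_h, n_h)])
```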

This information can also be used to estimate the expected standard error of the estimates of the population mean for each stratum. This is obtained by using the SRS equation. Because the sample size is small relative to the population size, the finite population correction factor can be ignored. The appropriate equation is equation (13.11),


which is repeated here as equation (13.61), with the addition of a subscript, h, to denote the value in a stratum.

$\text{s.e.}\left(\bar{y}_h\right) = \frac{s_h}{\sqrt{n_h}}$    (13.61)

The expected standard error of the weighted mean can also be estimated, by applying equation (13.56). The result of this is shown in the last row of the last column of Table 13.8.

One could compare this result with the alternative of equal allocation. Because the strata are of different sizes, equal allocation can be obtained only by using disproportionate sampling. In this case, with five strata, equal allocation would result in selecting 400 samples in each stratum. The sampling errors within the strata and overall are shown in Table 13.9. While the differences are not large, it can be seen that the overall sampling error is larger for equal allocation than for the optimal allocation. As expected, the individual sampling errors for the strata are smaller or larger, depending on whether the optimal allocation was smaller than 400 or larger than 400.

Of course, in reality, the standard errors would be calculated after the survey has been completed. Computation of the standard errors would use the values of the stratum variances that were measured in the survey, rather than the estimates that are used for estimating the sample sizes.

Table 13.7 Optimal allocation of the 2,000-household sample

Household size   Nh        sh²    sh     Nh sh       nh
1                150,000   2.37   1.54   230,922     228
2                185,000   4.72   2.17   401,923     397
3                230,000   5.99   2.45   562,913     556
4                172,000   6.85   2.62   450,167     445
5 and over       133,000   8.05   2.84   377,355     373
Total            870,000   –      –      2,023,280   1,999

Table 13.6 Data for drawing an optimum household travel survey sample

Household size   Nh        sh²    sh
1                150,000   2.37   1.54
2                185,000   4.72   2.17
3                230,000   5.99   2.45
4                172,000   6.85   2.62
5 and over       133,000   8.05   2.84
Total            870,000   –      –


Survey costs differing by stratum
In the example used here, it has been assumed that there is no difference in the cost of obtaining samples in each stratum. However, it will often be the case that there is a cost difference between the strata. In this case, the optimum allocation can be designed to give the minimum standard error subject to a cost constraint, rather than just obtaining the minimum error for a given overall sample size. As noted by Kish (1965), the difference now is that the sampling rate should be proportional to the standard deviation within the hth stratum, and inversely proportional to the square root of the cost per element, Jh, in the hth stratum, as shown by equation (13.62):

$f_h = \frac{n_h}{N_h} = K\,\frac{s_h}{\sqrt{J_h}}$    (13.62)

As before, K is a constant of proportionality, and it is again possible to estimate the sample sizes by stratum without initially evaluating K. Assume that the maximum budget available for the variable costs of the survey – i.e., the part of the costs that vary with the sample size – is C, where $C = \sum_h J_h n_h$. As in the previous case, this can be solved by setting K initially to one and then computing provisional values of the sample size and the survey cost, then factoring these to produce the sample size for the desired fixed cost.

Table 13.8 Optimal allocation and expected sampling errors by stratum

Household size   Nh        sh²    sh     gh       Nh sh       nh      sh/√nh
1                150,000   2.37   1.54   657.89   230,922     228     0.102
2                185,000   4.72   2.17   465.99   401,923     397     0.109
3                230,000   5.99   2.45   413.67   562,913     556     0.104
4                172,000   6.85   2.62   386.52   450,167     445     0.124
5 and over       133,000   8.05   2.84   356.57   377,355     373     0.147
Total            870,000   –      –      –        2,023,280   1,999   0.052

Table 13.9 Results of equal allocation for the household travel survey

Household size   Nh        sh²    sh     gh      nh      sh/√nh
1                150,000   2.37   1.54   375.0   400     0.077
2                185,000   4.72   2.17   462.5   400     0.109
3                230,000   5.99   2.45   575.0   400     0.123
4                172,000   6.85   2.62   430.0   400     0.131
5 and over       133,000   8.05   2.84   332.5   400     0.142
Total            870,000   –      –      –       2,000   0.054


Example

Using the same example as before, it is assumed now that the variable costs for samples in each of the five strata are $100, $50, $40, $80, and $120, respectively, for the five household size groups. These costs reflect the difficulty of both finding households of each size, and of persuading them to participate in the survey. These variable costs are the new information provided in addition to Table 13.6, and the given information is shown in Table 13.10. The total budget for the survey is $150,000.

Rearranging equation (13.62) gives equation (13.63) as the expression for the sample size in each stratum.

$n_h = \frac{K\, N_h s_h}{\sqrt{J_h}}$    (13.63)

Hence, it is possible to proceed to estimate the preliminary sample size with this formula, and then estimate the cost associated with the preliminary sample size. In this case, rather than choosing an initial value of one for K, for convenience, we choose a value of 1 / 100. This is done to keep the numbers from becoming too large. Table 13.11 shows the results of estimating the preliminary sample sizes and costs.

Because the total cost is above the budgeted cost of $150,000, the sample sizes are now factored by the ratio of the available budget to the total cost estimated in

Table 13.10 Given information for economic design of the optimal allocation

Household size   Nh        sh²    sh     Jh
1                150,000   2.37   1.54   $100
2                185,000   4.72   2.17   $50
3                230,000   5.99   2.45   $40
4                172,000   6.85   2.62   $80
5 and over       133,000   8.05   2.84   $120
Total            870,000   –      –      –

Table 13.11 Preliminary sample sizes and costs for economic design of the optimum allocation

Household size   Nh        sh²    sh     Jh     sh/100√Jh   Nh sh/100√Jh   n′h Jh
1                150,000   2.37   1.54   $100   0.001539    230.92         $23,092
2                185,000   4.72   2.17   $50    0.003072    568.40         $28,420
3                230,000   5.99   2.45   $40    0.003870    890.04         $35,602
4                172,000   6.85   2.62   $80    0.002926    503.30         $40,264
5 and over       133,000   8.05   2.84   $120   0.002590    344.48         $41,337
Total            870,000   –      –      –      –           –              $168,715


Table 13.11, to produce the final sample sizes and a cost that meets the budget. This result is shown in Table 13.12, where the factoring is performed.

As would be expected, the sample size decreased from the preliminary result and gave a final sample of 2,254 samples. Comparing the results of Table 13.12 with those of Table 13.8, it is apparent that the introduction of the costs has increased the sample sizes in the strata in which the costs are lowest, and has decreased the sample sizes in the most expensive strata. Table 13.13 provides a comparison between the optimum allocation result, the equal allocation result, and the economic design result, showing for the first two the comparative costs for those designs, assuming the same stratum costs as used in the economic design. In addition, for the first two designs, the samples have been factored to give the same total cost of $150,000 to allow direct comparison between the three designs. It should be noted that none of the costs are exactly $150,000, because the sample sizes must be an integer number of households. Therefore, after creating each sample as an integer, the costs will vary a little around the target of $150,000.
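
The factoring described above can be reproduced as in the following sketch (an illustration, not part of the text), which starts from the data of Table 13.10 and scales the preliminary allocation to the $150,000 budget.

```python
# Illustrative calculation of the economic design in Tables 13.11 and 13.12:
# preliminary sizes proportional to N_h * s_h / sqrt(J_h), then factored to the budget.
import math

N_h = [150_000, 185_000, 230_000, 172_000, 133_000]
var_h = [2.37, 4.72, 5.99, 6.85, 8.05]
J_h = [100, 50, 40, 80, 120]           # variable cost per completed sample, by stratum
budget = 150_000
K = 1 / 100                            # starting constant of proportionality, as in the text

s_h = [math.sqrt(v) for v in var_h]
n_prelim = [K * N * s / math.sqrt(J) for N, s, J in zip(N_h, s_h, J_h)]
cost_prelim = sum(n * J for n, J in zip(n_prelim, J_h))

factor = budget / cost_prelim          # scale the preliminary allocation to the budget
n_final = [round(n * factor) for n in n_prelim]

print(round(cost_prelim))              # about $168,715 (Table 13.11)
print(n_final)                         # about [205, 505, 791, 447, 306] (Table 13.12)
print(sum(n * J for n, J in zip(n_final, J_h)))   # final cost, about $149,870
```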

As might be expected, the economic design produces the largest overall sample for the available budget of $150,000, and does this by increasing the sample sizes for the

Table 13.12 Estimation of the final sample size and budget

Household size   Nh        sh     Jh     sh/100√Jh   Nh sh/100√Jh   n′h Jh     nh      ch
1                150,000   1.54   $100   0.001539    230.92         $23,092    205     $20,500
2                185,000   2.17   $50    0.003072    568.40         $28,420    505     $25,250
3                230,000   2.45   $40    0.003870    890.04         $35,602    791     $31,640
4                172,000   2.62   $80    0.002926    503.30         $40,264    447     $35,760
5 and over       133,000   2.84   $120   0.002590    344.48         $41,337    306     $36,720
Total            870,000   –      –      –           2,537          $168,715   2,254   $149,870

Table 13.13 Comparison of optimal allocation, equal allocation, and economic design for $150,000 survey

                        Optimal allocation    Equal allocation      Economic design
Household size   Jh     nh      ch            nh      ch            nh      ch
1                $100   235     $23,500       385     $38,500       205     $20,500
2                $50    410     $20,500       385     $19,250       505     $25,250
3                $40    574     $22,960       385     $15,400       791     $31,640
4                $80    460     $36,800       385     $30,800       447     $35,760
5 and over       $120   385     $46,200       385     $46,200       306     $36,720
Total            –      2,064   $149,960      1,925   $150,150      2,254   $149,870


cheapest strata (2 and 3) and decreasing the sample sizes in the most expensive strata (1, 4, and 5 and over). The equal allocation, as also might be expected, produces the smallest overall sample for the budget.

It is also useful to compare the resulting sampling errors for these three designs. Table 13.14 shows the sampling errors within each stratum and then for the overall sample for each of the three designs – optimal allocation, equal allocation, and economic design.

The standard errors show an expected difference between the alternative designs, and show that the economic design produces, in this case, the lowest overall standard error, largely because it permits the largest sample size to be drawn for this budget. On a stratum-by-stratum comparison, as expected, the standard errors show their relationship to the sample size, with the larger stratum samples between the designs producing the smallest standard errors within a stratum, and the smaller sample sizes producing the larger errors.

Practical issues in drawing disproportionate samples
As has been discussed at length in this chapter, disproportionate sampling is a useful strategy when the population is finite and membership in the strata of interest is already defined. For example, in a survey of a student population at a university, various pieces of information are already likely to be known about the students, so that groups can be defined quite readily for stratified sampling – e.g., gender, age, year of study, country of origin, etc. However, this type of situation will often not occur. When prior information does not exist or is not available, the question is whether or not disproportionate sampling can still be used. A method can be used at such times that the author calls 'stratification on the fly'. The method works as follows. First, a simple random sample is drawn. As each sample element is selected for recruitment, questions are asked to ascertain membership of the predetermined strata. Elements are sampled into the strata until each stratum has reached its predetermined required sample size. As additional elements are contacted and their strata membership determined, a record is kept of these, but additional elements

Table 13.14 Comparison of sampling errors from the three sample designs

                        Optimal allocation    Equal allocation      Economic design
Household size   Jh     nh      sh/√nh        nh      sh/√nh        nh      sh/√nh
1                $100   235     0.100         385     0.078         205     0.108
2                $50    410     0.107         385     0.111         505     0.097
3                $40    574     0.102         385     0.125         791     0.087
4                $80    460     0.122         385     0.136         447     0.124
5 and over       $120   385     0.145         385     0.145         306     0.162
Total            –      2,064   0.0512        1,925   0.0549        2,254   0.0501


are not included into strata in which the required sample size has already been met. Provided that elements are sampled independently and with equal probability, then this still meets the requirements of random sampling, and the resulting sample should be unbiased.

Suppose that a household survey is to be carried out for the purposes of studying body weight, especially the occurrence of obesity. It is decided to stratify the sample by household size. Although household size is collected by the census, it cannot be released at the household level, and it is now four years since the most recent census, so that even the strata populations are considered to be unreliable from the census. For the locality in question, a random sample of households is drawn using the method of random digit dialling (see Chapter 6 of this book). As households are phoned to recruit them for the survey, a question is asked at the outset of the survey as to how many people live in the household. The answer to this question is tallied against a desired sample size for each household size group or stratum. As long as fewer households have been recruited into a particular stratum than are required, new households that are contacted continue to be added to the sample, by proceeding on to the recruitment part of the survey. Once a particular household size grouping has reached its required number, no additional households are recruited into that stratum. However, the tally is continued until all strata have been filled. The tally permits an estimate to be made of the proportion of households of each size in the locality in question, thereby providing updated estimates of the sizes of the strata.

An example may help. Suppose, in the above example, a sample of 3,000 households is desired, split between the strata as shown in Table 13.15. Calls are placed to the numbers of households shown in Table 13.15, with the tallies shown in the table. Using the tallies, it is determined that the proportions of the five strata are as shown in the table. Given an estimate of the total population in the region as being 1,076,894 households, the stratum populations are estimated as shown. Two things should be noted from Table 13.15. First, not all calls placed resulted in getting households to answer the required question about household size. Hence, the number of calls placed includes a number of households of unknown size, these being those households that failed to answer the question. Second, due to rounding, the estimated population values exceed slightly the total estimated population of 1,076,894. This is always likely to occur, because of the small size of the tally.

Two assumptions are made in using the above method. First, it must be assumed that the sample selection is fully random. Second, it must also be assumed that the households that refuse to answer any questions and are not classified are randomly distributed across the household sizes. If there is a tendency for households of a particular size to terminate the telephone call prior to answering a question, then the estimates of the strata populations will necessarily be biased. Care must be taken that the stratification variable is not, therefore, one on which responses are likely to be biased. It is probably a good idea to check that the proportions generated by this method are at least similar to the last known proportions for the target population, or are comparable to values from a similar population.
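
The sketch below illustrates the logic of 'stratification on the fly' in simplified form; the quota targets are taken from Table 13.15, but the underlying household size distribution is invented purely to drive the simulation.

```python
# Illustrative simulation of 'stratification on the fly': household size is asked first,
# recruitment continues in a stratum only until its quota is filled, and every answered
# call is tallied so that stratum proportions can be estimated afterwards.
import random

random.seed(42)
quotas = {1: 578, 2: 956, 3: 768, 4: 365, 5: 333}           # desired sample by household size
true_props = {1: 0.18, 2: 0.32, 3: 0.29, 4: 0.12, 5: 0.09}  # hypothetical; unknown in practice

recruited = {h: 0 for h in quotas}
tally = {h: 0 for h in quotas}

while any(recruited[h] < quotas[h] for h in quotas):
    # Simulate the answer to the household-size question on a random contact
    size = random.choices(list(true_props), weights=list(true_props.values()))[0]
    tally[size] += 1                                  # tally every classified contact
    if recruited[size] < quotas[size]:                # recruit only while the quota is unfilled
        recruited[size] += 1

total_tally = sum(tally.values())
est_props = {h: tally[h] / total_tally for h in tally}
print({h: round(p, 4) for h, p in est_props.items()})  # estimated stratum proportions
```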


Concluding comments on disproportionate sampling
Disproportionate sampling offers some substantial flexibility in the design of a sample. In addition to providing a method to reduce sampling errors below those of simple random samples, the method can also be used to ensure a minimum sample size in a population subgroup of interest in the survey, or to produce the same level of error in each stratum, or to provide the lowest overall sampling error, either with or without information on different levels of cost for different strata.

On the other hand, disproportionate sampling requires prior knowledge of the total regional population, although, as shown in the preceding subsection, the actual stratum populations are not required to be known. Moreover, as noted in this subsection, the formulae required to estimate population values and standard errors are more complex than those required for proportionate sampling or simple random sampling. It is also likely that there will be an additional cost incurred for disproportionate sampling as a result of the sampling process itself. These costs should be considered along with the gains in accuracy or decreases in sample size resulting from disproportionate sampling. Finally, it must be remembered that both proportionate and disproportionate sampling provide gains in accuracy only insofar as the grouping chosen reduces the standard deviations within the groups to values that are substantially below the standard deviation of the key attribute over the entire population. Therefore, the choice of attributes on which to group is critical to the value of the process of stratification.

13.4.3 Multistage sampling
Multistage sampling is a method of sampling that allows the survey researcher to avoid the need to create an expensive sampling frame when no sampling frame exists. As its name implies, multistage sampling involves at least two stages in the sampling, and it may involve more than two stages. At each stage, any method of sampling may be used. However, it must also be borne in mind that multistage sampling will normally increase the values of standard errors, compared to a single-stage sample. In this case, the value of the method is to avoid the cost of creating a sampling frame, which in

Table 13.15 Desired stratum sample sizes and results of recruitment calls

Household size   Desired sample   Calls placed   Recruited households   Tally   Proportion of households   Estimated stratum population
1                578              675            578                    675     17.53%                     188,780
2                956              1,243          956                    1,243   32.29%                     347,729
3                768              1,109          768                    1,109   28.81%                     310,253
4                365              448            365                    448     11.64%                     125,350
5 and over       333              375            333                    375     9.74%                      104,889
Unknown          –                362            –                      –       –                          –
Total            3,000            4,212          3,000                  3,850   100.01%                    1,077,001


many cases could be a very substantial cost indeed. The principle behind multistage sampling is that there will often exist aggregate units of the desired sampling units for which listings are readily available, and that sampling from these aggregate units can be carried out readily, leading to the need only to enumerate a relatively small number of sampling units for the final stage of sampling.

Examples of multistage sampling situations are many and varied. Most often, it applies to population surveys when listings of population members (persons or households) for use as a sampling frame are unavailable. In a multistage population sample, administrative units, such as states, counties, towns, postal codes, census geographic units, etc., may be used in the preliminary stage or stages of sampling. Multistage sampling has been used extensively in crop surveys, when listings of the fields devoted to specific crops do not exist, but for which sampling can be undertaken on a farm basis, or even at a level of counties, initially. Another example of a multistage sample could be a public transport rider survey, for which routes, or vehicles, are sampled initially. Many other examples can be found readily in the literature of survey research.

Drawing a multistage sample
As an example, a household survey is considered here. As noted earlier, the basis for multistage sampling in population surveys is usually the use of administrative units of some type, when there are readily available lists of such administrative units, and the units themselves have clearly defined boundaries and are stable over time. These are essential properties for the aggregate units that are used in multistage sampling. Consider a population survey to be undertaken in Australia, with the aim of surveying a sample of households about expenditure patterns. There is no publicly available listing of all household addresses in Australia. Although a small sample is to be drawn, it is desired to have representation from across the entire country, and also from both rural and urban areas.

Australia is divided into postal code areas, designated by four-digit postcodes. There are slightly fewer than 2,900 codes in current use and they are grouped by state, with postcodes beginning with '0' in the Northern Territory, those beginning with '1' and '2' in New South Wales and the Australian Capital Territory (which is entirely within New South Wales), those beginning with '3' and '8' in Victoria, those beginning with '4' and '9' in Queensland, those beginning with '5' in South Australia, those beginning with '6' in Western Australia, and those beginning with '7' in Tasmania. Australia Post provides a database that lists all the localities that are listed in the Australian place name gazetteer with their postcodes. This permits a sample to be drawn initially of postcodes, then of localities within postcodes, and then of streets within localities, and houses within streets. At the first stage – the sample of postcodes – the sample might be drawn as a stratified sample, so as to ensure that postcodes from all states and territories are included in the sample. This would probably need to be a disproportionate sample, because the number of postcodes within a state or territory is largely a function of population. Hence, a jurisdiction such as the Northern Territory, with rather a small population, would need to be sampled more heavily than New South Wales, where


there is a large population, to have sufficient postcodes selected to provide a reasonable statistical reliability at the level of the state or territory.

The listing of postcodes and localities is readily available from the internet. Hence, the first two sample stages can be drawn relatively easily. The second-stage sample may be taken as a simple random sample from all localities in the sampled postcodes, or could again be stratified by state or territory. This would provide a list of sampled localities. There are various commercially available street directories in Australia, any one of which can be used to provide a listing of streets within each locality. From these street directories, a random sample could be drawn of streets within a locality. For the final stage in this multistage sample, specific household addresses would be sampled. These would most probably require an actual in-field enumeration of addresses on the sampled streets, because lists of household addresses do not exist in Australia in a publicly available form. However, the task of in-field enumeration has been reduced in this process to a few streets in each state and territory, rather than a requirement to enumerate every household address in the entire nation. Thus, the multistage sample has reduced the task of identifying all possible sample elements to just a fraction of the possible sample elements.
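A minimal sketch of this four-stage draw in Python follows. The lookup tables (postcode to localities, locality to streets) and the in-field enumeration callback are assumptions introduced for illustration; in practice they would come from the postcode database, a street directory, and the in-field enumeration of the sampled streets only.

```python
import random

def multistage_sample(postcodes_by_state, localities_by_postcode, streets_by_locality,
                      enumerate_addresses, n_postcodes_per_state, n_localities,
                      n_streets, n_addresses):
    """Draw a four-stage sample: postcodes -> localities -> streets -> addresses."""
    sampled_addresses = []
    for state, postcodes in postcodes_by_state.items():
        # Stage 1: stratified (by state/territory) sample of postcodes
        for postcode in random.sample(postcodes, min(n_postcodes_per_state, len(postcodes))):
            # Stage 2: localities within the sampled postcode
            localities = localities_by_postcode[postcode]
            for locality in random.sample(localities, min(n_localities, len(localities))):
                # Stage 3: streets within the sampled locality
                streets = streets_by_locality[locality]
                for street in random.sample(streets, min(n_streets, len(streets))):
                    # Stage 4: household addresses, enumerated in the field
                    # for the sampled streets only
                    addresses = enumerate_addresses(street)
                    sampled_addresses.extend(
                        random.sample(addresses, min(n_addresses, len(addresses))))
    return sampled_addresses
```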

Another example is provided by a survey of air passengers. There is no available listing of all the passengers flying on a particular airline on a given day. However, there is a readily available listing of all flights made by a particular airline on a particular day. At the first stage, a sample could be drawn of airline flights. These could be stratified into domestic and international flights, or could be stratified by distance, or some other attribute that is considered useful for the sampling. Once the sample of flights has been drawn, it is also known what aircraft will be used for each sampled flight, and seat plans are available for each such aircraft. Therefore, the second-stage sample could be a sample of seat numbers on the sampled flights. Survey personnel, or flight attendants, might then request the passenger sitting in each sampled seat to complete a survey. This strategy obviates the need to have a listing of passengers, and also requires only the seating on the selected flights to be detailed for sampling purposes. A further stratification could be used in sampling the seats on the aircraft, by stratifying by class of service – e.g., economy, business, and first class. Alternatively, stratification could be done by the position of the seats in the aircraft, such as the forward one-third of the plane, the centre third, and the rear third.

Another potential advantage of multistage sampling arises when the survey method involves face-to-face surveying. One of the largest cost components of face-to-face surveys is the travel of survey personnel to each sampling element. In a multistage sample, the geographic spread of the sample is controlled, and it may be possible to reduce considerably the geographic area that has to be covered by each interviewer.

Requirements for multistage sampling

For multistage sampling, there are several requirements that must be met by the units that are selected for use at the various stages of the sampling. Some of these have been


mentioned already, but others are also equally important. Overall, the requirements for multistage sampling units are as follows.

(1) The primary units should be large compared to the number of selections required.
(2) Units must have clear boundaries and information available for stratification, if desired.
(3) Units should have some uniformity of size in terms of the final-stage sampling units.
(4) The units should remain stable over time.
(5) The units should provide comparability to other data sources related to the survey.

In relation to these various requirements, census or administrative geography is probably the best choice for population surveys. Other types of surveys, such as resource surveys, vehicle ridership surveys, establishment surveys, etc., will need to select primary units that meet as many of the requirements listed above as possible. An airline passenger survey provides a good example. The flights operated by an airline do not undergo significant change over fairly long periods of time in most countries. Therefore, taking flights as the primary unit is a choice of a fairly stable unit. On commercial jet services, the size of an aircraft is large compared to the final unit of an air passenger, with most aircraft seating between about 100 and 400 passengers. The flight is clearly defined, so its boundaries are well understood. There is also information available, such as the type of aircraft, the time of departure, the length of the flight (in distance or time), the seating plan, the number of seats available, etc., for use in stratification. Hence, airline flights represent primary units that meet most of the requirements listed, when the final sampling unit of interest is the air passenger.

Estimating population values and sampling statistics

There are two basic approaches to estimating population values and sampling statistics from multistage samples: to estimate these values at each stage, or to estimate these values for all stages together. If the same method of sampling is used at each stage, such as with SRS, it is probably easiest to estimate the values for all stages together. However, if different methods of sampling are used at different stages, the only feasible way to estimate the values is probably on a stage-by-stage basis. It must be noted that the standard errors will be built up from the standard errors incurred at each stage of sampling.

In general terms, consider a two-stage sample. In the first stage, aggregate units are sampled at the sampling rate of fa, with a corresponding expansion factor of ga. At the second stage, the sampling rate is fb with a corresponding expansion factor of gb. This means that the overall expansion factor is gagb. This would allow the estimation of a population total from equation (13.64). The population mean will still be the mean from the final stage of the multistage sampling. This assumes that both stages are sampled using simple random sampling.


\hat{Y} = g_a g_b \sum_i y_i \qquad (13.64)

However, the sampling error of the mean would be made up of two components, the first from the first stage, and the second from the second stage, as shown in equation (13.65):

s.e.(\bar{y}) = \sqrt{\frac{(1 - f_a)s_a^2}{n_a} + \frac{(1 - f_b)s_b^2}{n_a n_b}} \qquad (13.65)

In this equation, sa² is the variance of y at the first-stage sampling, and sb² is the variance of y at the second-stage sampling. These would be determined as appropriate for the method of sampling used in each stage. Again, this is restricted to simple random sampling. If the first stage uses stratified sampling, then the formula becomes more complex.

Example

It is useful to consider an example to illustrate the multistage sampling situation. In this example, a sample is to be drawn of students in a university. The university has sixteen faculties and colleges, within which there are 125 departments, and 35,229 students. The departments and students are distributed as shown in Table 13.16.

The cumulative count of the departments is shown for the purposes of sampling. The university does not have a single listing of students, but the individual departments maintain lists of students enrolled in each discipline. As a result, it is decided to undertake a multistage sample. In the first stage, a simple random sample of departments is to be drawn, and then a simple random sample of students within the selected departments will be drawn. This simplifies the process, in that lists of students will be required only from the subset of departments that are sampled, rather than requiring lists from all departments, sorting those lists, and drawing a simple random sample.

It is decided to draw a first-stage sample of thirty-two departments, and then to draw a simple random sample of 10 per cent of the students within each selected department. The result of this draw is shown in Table 13.17. The key variable that is to be measured in the survey is the age of each student in the sample. From this, the average age of the students in the university is to be determined, along with a number of other attributes and measures. In this example, only the age variable is considered.

Table 13.17 shows the sample in order of drawing, with the departments numbered according to the order drawn. The faculty or college of each department is shown, along with the number of students in the department and the number sampled. The overall expansion factor is obtained by multiplying together the expansion factor at each stage. For stage 1, the sampling rate is 32 / 125, giving an expansion factor of 3.906. For stage 2, the sampling rate is 0.1 and the expansion factor is 10. Therefore, the overall expansion factor is 39.0625, as shown in the table. Applying this expansion factor to the sample from each department gives an estimated number of students, but


this is not the number in the department but, rather, the number that this department’s sample contributes to the faculty or college.

To determine the number of students on a faculty/college basis, one adds up the expanded number of students from each department within each faculty/college. As is the nature of simple random samples, it will be noted that one faculty/college did not end up being in the sample. This is faculty/college D.
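As a small check on the expansion arithmetic described above (a sketch only, using the sampling rates from the example), the overall expansion factor and an expanded department count can be computed as follows; the department size of 23 students is taken from the first row of Table 13.17.

```python
# Two-stage expansion factors for the university example
n_departments_sampled, n_departments_total = 32, 125
stage1_g = n_departments_total / n_departments_sampled   # 3.906
stage2_g = 1 / 0.1                                        # 10.0
overall_g = stage1_g * stage2_g                           # 39.0625

# Expanding the 23 students sampled in one department gives that department's
# contribution to its faculty/college total (not the department's own size).
print(round(overall_g, 4), round(23 * overall_g))         # 39.0625  898
```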

It can be noted from Table 13.17 that the total number of students in the university is estimated quite well at 33,438, compared to the correct number of 35,229. This is much as would be expected for a simple random sample of this size. The mean and variance of age are shown next. These are calculated on a department-by-department basis. The last column presents the sum of squares about the department means for each department sampled. These are used to estimate the overall mean square for the sample. To proceed to estimate the mean age of the population, this is the same as the estimated mean age from the sample, which can be obtained either from the weighted means from the individual departments, or by obtaining the overall mean from all sampled students.

The overall mean was found to be 21.26 years. The weighted mean could also be obtained as shown in equation (13.66):

\bar{y}_w = \frac{23}{856}(20.9) + \frac{30}{856}(21.6) + \frac{38}{856}(20.9) + \dots + \frac{24}{856}(20.6) = 21.26 \qquad (13.66)

Table 13.16 Distribution of departments and students

Faculty/college   Number of departments   Cumulative   Number of students
A                   5                        5           1,076
B                  16                       21           5,492
C                   8                       29           1,276
D                  11                       40           4,193
E                   9                       49           2,171
F                   7                       56           2,569
G                  12                       68           3,107
H                   3                       71             653
I                   9                       80           1,584
J                  10                       90           3,488
K                   8                       98           2,110
L                   6                      104           1,647
M                   4                      108           1,262
N                   9                      117           2,388
O                   3                      120             874
P                   5                      125           1,339
Total             125                                    35,229


The next item to determine is the standard error of this mean. This can be done using equation (13.65), in which sa² is the variance of the first-stage unit means about the overall mean, which is found to be 0.5193, and sb² is the variance of the second-stage units about the first-stage unit means. This latter value has to be determined from the sums of squares within each department. The sum of squares is shown in the final column of Table 13.17, and the entry at the bottom of the column is the mean square. This is used as the estimate of sb².

Table 13.17 Two-stage sample of students from the university

Department  Faculty/  Students in   Students   Overall expansion  Estimated number  Variance of age      Mean   Sum of
            college   department    sampled    factor             of students       within department    age    squares
1           B         230           23         39.0625            898               3.48                 20.9   76.5565
2           B         301           30         39.0625            1,172             2.99                 21.6   86.7547
3           C         376           38         39.0625            1,484             3.10                 20.9   114.554
4           M         322           32         39.0625            1,250             2.92                 20.4   90.6172
5           N         97            10         39.0625            391               4.08                 22.7   36.676
6           E         188           19         39.0625            742               1.99                 21.5   35.8379
7           G         280           28         39.0625            1,094             3.06                 22.0   82.6468
8           G         376           38         39.0625            1,484             3.65                 21.2   135.075
9           I         190           19         39.0625            742               2.88                 20.8   51.7653
10          J         45            5          39.0625            195               3.80                 20.6   15.188
11          C         283           28         39.0625            1,094             2.79                 21.7   75.3296
12          G         222           22         39.0625            859               2.99                 20.9   62.6909
13          C         187           19         39.0625            742               3.17                 22.3   57.0642
14          K         358           36         39.0625            1,406             3.24                 21.9   113.57
15          O         121           12         39.0625            469               3.53                 21.0   38.8092
16          B         304           30         39.0625            1,172             3.44                 20.3   99.743
17          E         277           28         39.0625            1,094             2.98                 23.1   80.5296
18          F         198           20         39.0625            781               1.87                 21.6   35.48
19          B         469           47         39.0625            1,836             2.21                 20.5   101.532
20          B         217           22         39.0625            859               3.52                 21.8   73.8532
21          E         83            8          39.0625            313               3.09                 21.8   21.66
22          G         452           45         39.0625            1,758             3.69                 20.5   162.198
23          B         266           27         39.0625            1,055             3.43                 22.1   89.3067
24          P         192           19         39.0625            742               2.76                 20.8   49.6063
25          E         345           35         39.0625            1,367             2.60                 21.5   88.4589
26          E         281           28         39.0625            1,094             3.11                 21.4   83.8496
27          C         337           34         39.0625            1,328             3.89                 22.3   128.351
28          C         264           26         39.0625            1,016             3.46                 19.9   86.5712
29          A         410           41         39.0625            1,602             2.39                 20.8   95.6249
30          L         278           28         39.0625            1,094             3.99                 21.5   107.612
31          B         351           35         39.0625            1,367             4.02                 21.2   136.707
32          G         240           24         39.0625            938               3.57                 20.6   82.1867
Total                               856                           33,438                                 21.3157 3.03673


From these estimates, it is found that the multistage standard error for the mean of age is 0.11. The computation of this is shown here, using equation (13.65), and shown in equation (13.67):

s.e.(\bar{y}) = \sqrt{\frac{(1 - 32/125)(0.519)}{32} + \frac{(1 - 0.1)(3.0368)}{32 \times 856}} = 0.110 \qquad (13.67)

This can be compared to the result if the sample had been a simple random sample without using two stages. Assuming that the same students were chosen with a one-stage simple random sample, then the standard error would be obtained from the variance of the ages of all the students, which is found to be 3.52. Thus, the standard error would be given by equation (13.68):

s.e.(\bar{y})_{SRS} = \sqrt{\frac{3.52}{856}} = 0.064 \qquad (13.68)

As expected, the standard error of the estimate of the population mean is smaller from a simple random sample than from a multistage sample. In this case, the finite population correction factor would have made no detectable difference to the result, because the sampling fraction is 856 / 35,229, which is too small to affect the standard error. Looked at another way, the sample size for the SRS would have had to be only 290 students to produce the same level of error as the multistage sample in this case. Clearly, therefore, in determining whether or not it is worth the savings in the multistage sample, it is necessary to estimate the costs of the additional sample required for the multistage sample so as to equal the error level of a smaller simple random sample. It is doubtful if, in this contrived case, that would have been so. However, in many large population surveys, it is still likely to be much cheaper to use a multistage sample, despite the increase in sample size needed to equal the standard error of a simple random sample.
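A short numerical check of this comparison, following the computation in equation (13.67) and using the variances reported in the worked example (a sketch only):

```python
from math import sqrt

n_a, N_a = 32, 125          # first-stage sample and population of departments
f_a, f_b = n_a / N_a, 0.10  # sampling rates at the two stages
s_a2, s_b2 = 0.519, 3.0368  # first- and second-stage variance estimates
n = 856                     # total number of students sampled

se_multistage = sqrt((1 - f_a) * s_a2 / n_a + (1 - f_b) * s_b2 / (n_a * n))
se_srs = sqrt(3.52 / n)     # one-stage SRS of the same 856 students

# SRS size giving the same standard error as the multistage sample
n_equivalent = 3.52 / se_multistage ** 2

print(f"{se_multistage:.3f} {se_srs:.3f} {n_equivalent:.0f}")  # 0.110 0.064 289 (about 290)
```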

Suppose now that the multistage sample was drawn by using stratified sampling with variable sampling fraction for the first stage, so as to ensure that each faculty/college is represented in the sample (recall that one faculty/college was missed entirely by the two-stage simple random sample) and that the second stage is again a simple random sample of the students from each sampled department. In this case, it is decided to choose two departments from each faculty/college. The result of the sampling is shown in Table 13.18. It can be seen immediately from this table that the estimates of the number of students both by faculty/college and in total are closer to the true values than from the simple random sample, showing one of the obvious advantages of using disproportionate sampling.

The mean age for the population can again be estimated from the mean of the sample. In this case, the weighted mean of the sample is determined as shown in equation (13.69). Comparing to the result in equation (13.66), it can be seen that the result is almost identical.
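A sketch of how this stratified first stage might be drawn, assuming a simple dictionary of department counts per faculty/college (the counts follow Table 13.16; the department identifiers are illustrative only):

```python
import random

# Departments per faculty/college, as in Table 13.16
departments_by_faculty = {
    'A': 5, 'B': 16, 'C': 8, 'D': 11, 'E': 9, 'F': 7, 'G': 12, 'H': 3,
    'I': 9, 'J': 10, 'K': 8, 'L': 6, 'M': 4, 'N': 9, 'O': 3, 'P': 5,
}

first_stage = {}
for faculty, n_depts in departments_by_faculty.items():
    dept_ids = [f'{faculty}{i + 1}' for i in range(n_depts)]
    # Disproportionate first stage: two departments from every faculty/college,
    # so the first-stage sampling rate is 2 / n_depts and varies by stratum.
    first_stage[faculty] = {
        'sampled': random.sample(dept_ids, 2),
        'sampling_rate': 2 / n_depts,
        'expansion_factor': n_depts / 2,
    }

print(first_stage['A']['expansion_factor'])  # 2.5, as in Table 13.18
```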

Table 13.18 Multistage sample using disproportionate sampling at the first stage

Faculty/  Sampling  Expansion  Mean   Students in       Students in sampled  Students    Sampling  Expansion  Overall expansion  Estimated number
college   rate      factor     of y   faculty/college   departments          in sample   rate      factor     factor             of students
A         0.4       2.5        21.0   1,076             531                  53          0.1       10         25                 1,325
B         0.125     8          20.7   5,492             698                  70          0.1       10         80                 5,600
C         0.25      4          21.9   1,276             285                  29          0.1       10         40                 1,160
D         0.18182   5.5        21.5   4,193             656                  66          0.1       10         55                 3,630
E         0.22222   4.5        20.8   2,171             235                  24          0.1       10         45                 1,080
F         0.28571   3.5        21.3   2,569             505                  50          0.1       10         35                 1,750
G         0.16667   6          22.0   3,107             545                  55          0.1       10         60                 3,300
H         0.66667   1.5        20.5   653               425                  42          0.1       10         15                 630
I         0.22222   4.5        22.4   1,584             475                  48          0.1       10         45                 2,160
J         0.2       5          20.9   3,488             686                  69          0.1       10         50                 3,450
K         0.25      4          20.7   2,110             535                  53          0.1       10         40                 2,120
L         0.33333   3          21.6   1,647             458                  46          0.1       10         30                 1,380
M         0.5       2          21.4   1,262             626                  63          0.1       10         20                 1,260
N         0.22222   4.5        21.3   2,388             601                  60          0.1       10         45                 2,700
O         0.66667   1.5        21.1   874               688                  69          0.1       10         15                 1,035
P         0.4       2.5        21.0   1,339             591                  59          0.1       10         25                 1,475
Total                                 35,229            8,540                856                                                 34,055


\bar{y}_w = \frac{53}{856}(21.0) + \frac{70}{856}(20.7) + \frac{29}{856}(21.9) + \dots + \frac{59}{856}(21.0) = 21.25 \qquad (13.69)

To estimate the standard error of the mean, it is now necessary to compute the first-stage standard deviation on the basis of stratified sampling with a variable sampling fraction, as given earlier in this chapter in equation (13.56). This is the value required for sa. Calculations for the standard error of the mean are shown in Table 13.19. The second-stage mean square is also computed for equation (13.65), and these computations are also shown in Table 13.19. The sum of the elements of sa is shown at the bottom of the penultimate column, and the mean square for use in calculating sb is shown at the bottom of the final column of the table.

Using these values, the standard error of the mean age is found to be as shown in equation (13.70), which is quite a bit less than the value from the two-stage simple random sample (equation (13.67)), though still larger than that obtained from the one-stage SRS (equation (13.68)). This shows the advantage of stratifying the first-stage sample, which not only improves the population estimates but also reduces the standard error of the estimates from the sample.

s.e.(\bar{y}) = \sqrt{\frac{2(0.0955)}{32} + \frac{(1 - 0.1)(3.345)}{32 \times 856}} = 0.078 \qquad (13.70)

Concluding comments on multistage sampling

As noted earlier, the aim of using multistage samples is to reduce the cost of the sampling. At the same time, as these examples have shown, this reduction in sampling cost comes at the expense of incurring either larger standard errors on the population estimates from the sample, or larger sample sizes to compensate for the increased standard errors. Therefore, a careful analysis will normally be required to determine if the savings in sampling are worth the increase in sample size or standard errors of estimate. Moreover, any method of sampling may be used at each stage of the multistage sample, and the standard deviations required for the computation will be determined by the sampling method as shown here. More than two stages may also be undertaken, with the standard errors being calculated from an extension of equation (13.65) to as many stages as may be used.

13.5 Quasi-random sampling methods

As noted in the introduction to this chapter, there are three sampling methods that may be considered as quasi-random sampling methods – i.e., they are methods that are close to random but are not actually random. They are usually used because they represent a cheaper or more practical alternative to strict random sampling. They are cluster sampling, systematic sampling, and choice-based sampling. These quasi-random sampling methods are still loosely based on random sampling principles, and approximations are


Table 13.19 Calculations for standard error from sample in Table 13.18

Faculty/  Expansion  Mean   Students in sampled  Students    Expansion  Overall expansion  Variance within  (1 – fh)gh²nhsh²  Sums of
college   factor     of y   departments          in sample   factor     factor             stratum                            squares
A         2.5        21.0   531                  53          10         25                 3.24             24.33             171.941
B         8          20.7   698                  70          10         80                 3.04             340.20            212.623
C         4          21.9   285                  29          10         40                 2.93             70.37             85.03
D         5.5        21.5   656                  66          10         55                 3.48             172.06            229.414
E         4.5        20.8   235                  24          10         45                 2.92             92.00             70.0939
F         3.5        21.3   505                  50          10         35                 2.95             51.56             147.304
G         6          22.0   545                  55          10         60                 3.20             191.95            175.956
H         1.5        20.5   425                  42          10         15                 3.47             5.21              145.776
I         4.5        22.4   475                  48          10         45                 2.99             94.27             143.653
J         5          20.9   686                  69          10         50                 2.92             116.86            201.584
K         4          20.7   535                  53          10         40                 3.73             89.58             197.83
L         3          21.6   458                  46          10         30                 3.47             41.58             159.406
M         2          21.4   626                  63          10         20                 2.78             11.12             175.164
N         4.5        21.3   601                  60          10         45                 5.01             157.95            300.863
O         1.5        21.1   688                  69          10         15                 3.13             4.69              215.831
P         2.5        21.0   591                  59          10         25                 3.86             28.92             227.47
Total                21.25  8,540                856                                                        0.0955            3.34496


available for estimating the population values and standard errors of estimate of those values. However, it must be kept in mind that these are only approximations and that the quasi-random sampling methods will involve some bias.

13.5.1 Cluster sampling

Cluster sampling is a special case of multistage sampling in which, in the final stage of sampling, all the population elements in the final-stage units are selected. An example can best illustrate this. In a population sample from a state, a cluster sample is designed for sampling individual households. Jurisdictions are used to define the stages of the sampling, with the first-stage sample being of towns and the second-stage sample being of blocks within the towns. In a normal multistage sample, the towns could be sampled randomly using simple random sampling or some form of stratified random sampling, and then households would be sampled also by a random sampling procedure in each block. In cluster sampling, all households in each sampled block would be selected.

Usually, the reasons for using cluster sampling are related either to cost or to sampling difficulties. As noted by Kish (1965), comparing cluster sampling to element sampling, the cost per element is always lower in cluster sampling, but the element variance is higher, because of the natural homogeneity that will occur within a cluster, and the complexity of the statistical analysis is greater, leading to increased costs in undertaking the analysis. A number of examples of cluster samples are shown in Table 13.20.

Other examples can easily be thought of. In each case, the sampling units fall into more or less natural clusters, and the clusters may be more easily identified or more economically surveyed than the individual sampling units. However, it is important to note that clusters may be of equal or unequal sizes. In the event that the clusters are of equal size, then the cluster sample is actually a multistage sample and is fully

Table 13.20 Examples of cluster samples

Population                      Attributes to be surveyed    Survey elements       Clusters
Metropolitan area               Household characteristics    Dwelling units        Blocks
Metropolitan area               Medical histories            Persons               Households
Metropolitan area               Travel behaviour             Persons               Households
Air passengers                  Airline preferences          Current passengers    Airline flights
University students             Student characteristics      Students              Departments or classes
Land parcels in an urban area   Land use characteristics     Land parcels          Blocks
Rural residents                 Social attitudes             Adults                Villages
Bus passengers                  Bus use patterns             Bus riders            Bus trips
Voters in a state               Voting intentions            Adults                Dwelling units


random. This is because the members of each cluster have the same probability of being selected into the sample. In this case, it is possible to develop the appropriate formulae for estimating population values and standard errors. However, if the clusters are of unequal size, then the sample is not truly random, and there will be bias present. This results from the fact that, with unequal clusters, the probability of being selected is then unequal across the sample. In addition, as discussed shortly, the sample size of the final sampling units is, itself, a random variable, depending on which clusters are selected. In the first case – equal-size clusters – the calculations are very similar to those involved for multistage sampling. In the second case – unequal-size clusters – the computations are more complex and are only approximations. Additionally, in the case of equal-size clusters, as with multistage samples, the errors from sampling will be larger than those of an equivalent one-stage random sample of the sampling units.

Among the major reasons for undertaking a cluster sample are the lack of a sampling frame for the sample units, lowering the cost of the survey, and the potential to create a limited sampling frame in the field. However, as discussed for multistage samples, cluster samples have higher variance and are more difficult to analyse. With that said, the next subsections of this chapter show how to estimate population values and standard errors for cluster samples, initially dealing with the case of equal-size clusters.

Equal clusters: population values and standard errors

In this subsection, it is assumed that the clusters are all of equal size and that the clusters are selected by a random sampling process. Assume that there are nc clusters that are selected from a total of Nc clusters in the total population. Assume also that each cluster contains Nd sampling units. In this case, the equal probability of sampling any units in the population, or the sampling rate, f, is given by equation (13.71):

f = \frac{n_c N_d}{N_c N_d} = \frac{n_c}{N_c} \qquad (13.71)

The sample mean is given by equation (13.72), and this is also the estimate of the population mean:

\bar{y} = \frac{1}{n}\sum_j y_j = \frac{1}{n_c N_d}\sum_c \sum_d y_{cd} \qquad (13.72)

where yj are the units of the entire sample and n is the sample size, c denotes the clusters, d denotes the sample units within clusters, and ycd denotes the values within cluster c. The mean can also be written in terms of the cluster means, as shown in equation (13.73):

\bar{y} = \frac{1}{n_c}\sum_c \bar{y}_c \qquad (13.73)

If the sampling of the clusters is EPSEM and the clusters are of equal size, then this is an unbiased estimate of the population mean. The variance and standard error of the estimate of the population mean are shown in equations (13.74) and (13.75):


\mathrm{var}(\bar{y}) = \frac{(1 - f)}{n_c}\,\frac{s_c^2}{N_d^2} \qquad (13.74)

s.e.(\bar{y}) = \frac{s_c}{N_d}\sqrt{\frac{1 - f}{n_c}} \qquad (13.75)

The standard deviation of the cluster totals, sc, is given by equation (13.76):

s_c^2 = \frac{\sum_c y_c^2 - \left(\sum_c y_c\right)^2 / n_c}{n_c - 1} \qquad (13.76)

From these equations, as expected, there is no contribution to the standard error from the observations within the cluster, because all the units in the cluster are included.

The preferred method to sample the clusters would be stratified sampling. In this case, the standard error computations would be modified to count only the variance within the strata and not between the strata. Depending on whether the stratified sampling is undertaken with uniform or variable sampling rates, the appropriate formula would be chosen for estimating the within-stratum variance. Otherwise, the estimation of population values and standard errors is much the same for either simple random sampling or stratified sampling.

For proportionate sampling, the variance of the mean is given by equation (13.77):

\mathrm{var}(\bar{y}) = \frac{(1 - f)}{n^2}\sum_{h=1}^{H}\frac{n_h}{n_h - 1}\sum_c \left(y_{hc} - \bar{y}_h\right)^2 \qquad (13.77)

Similarly, for disproportionate sampling, the variance of the mean is given by equation (13.78):

\mathrm{var}(\bar{y}) = \sum_{h=1}^{H} W_h^2\,\frac{(1 - f_h)}{n_{ch} N_d^2}\,\frac{\sum_c \left(y_{ch} - \bar{y}_h\right)^2}{n_{ch} - 1} \qquad (13.78)

Various special cases can be developed from equations (13.77) and (13.78), and the reader is referred to Kish (1965) for additional cases of interest.

Cluster samples may also be selected through a multistage process, in which two or more stages of sampling precede the definition of the clusters. This would again require modification of the estimating equations to take into account the variance at each stage of the sampling. In fact, the equations for standard error would be identical to those for regular multistage sampling, except that there would be no error term from the final stage of the sampling – the cluster.

The standard errors of cluster samples with equal cluster sizes will always be larger than the corresponding standard errors from a random sample (simple or stratified). Although it is clear from the equations presented above that the between-cluster


variance is likely to be smaller than the population variance, the variance is reduced only by the number of clusters, rather than the total sample size. An example is useful to illustrate this method of sampling and its properties.

Example

Suppose a survey is to be carried out on past and present patients of a medical practice. Suppose, further, that the practice has 32,950 patient files that are arranged by number. The numbers on the files correspond to the specific doctor in the practice to which the patient goes, and also the town of residence of the patient. The number of files per doctor varies between 2,500 and 4,800 and there are ten doctors in the practice. The chief purpose of the survey is to find out how satisfied the patients are with the care they have received, although the key variable is whether or not the patient has private health insurance. An interview survey of 400 patients is desired, in clusters of ten patients to save interviewer time and to reduce the costs of sampling.

To undertake a cluster sample, the survey researcher considers the 32,950 files to represent 3,295 clusters of ten files each. Some of the clusters will be split between two doctors. For a sample of 400 patients, forty random numbers are selected from one to 3,295, so that each unique random number represents a cluster of ten patients. Because the files are ordered by doctor and by town of residence for each doctor, it is likely that most patients within each cluster of ten will reside in the same town, although there will be a split in some clusters between towns. As a further refinement of the sampling, the survey researcher constructs the sample as a stratified sample with a variable sampling fraction, so that each doctor in the practice has forty patients sampled, or that approximately four of the clusters are drawn from each doctor’s files. The number of patients for each doctor and the blocks of files drawn for each doctor are shown in Table 13.21, along with the results of the key variable: the number of patients with private health insurance within each cluster of ten patients.

Only two numbers need to be calculated to obtain all the population information and the standard errors for this sample. The first value required is the sum of the numbers in each cluster that have health insurance, shown in the final cell at the bottom right of the table. The other is the sum of the squared values of these for each cluster. These are shown also in equations (13.79) and (13.80), respectively:

y = \sum_j y_j = \sum_c y_c = 185 \qquad (13.79)

\sum_c y_c^2 = 1263 \qquad (13.80)

From these two numbers, the sample mean and the estimate of the population mean can be determined to be 185 / 400, or 0.4625, meaning that 46.25 per cent of the patients have private health insurance. The estimated variance and standard error are given in equations (13.81) and (13.82), using equation (13.76):


\mathrm{var}(\bar{y}) = \frac{(1 - f)}{n_c N_d^2}\,\frac{\sum_c y_c^2 - \left(\sum_c y_c\right)^2/n_c}{n_c - 1} = \left(1 - \frac{400}{32{,}950}\right)\frac{1}{40 \times 10^2}\,\frac{1263 - 185^2/40}{40 - 1} = 0.002580 \qquad (13.81)

s.e.(\bar{y}) = \sqrt{\mathrm{var}(\bar{y})} = 0.0508 \qquad (13.82)

This gives a standard error of ±5.08 per cent for the mean of 46.25 per cent. The total number of patients with private health insurance would be estimated to be 15,329 (this being equal to 0.4625 times 32,950, or 185 multiplied by the expansion factor of 32,950 / 400). The standard error for the total is ±1,674. If a simple random sample had been drawn, instead of a cluster sample, then the variance of the mean would have been determined from pq / n, which is 0.000621, and the standard error of the simple random sample would have been ±0.0249, or ±2.49 per cent. Therefore, the standard error of the cluster sample is more than double that of the equivalent simple random sample. Thus, like multistage sampling, to which it is closely related, the penalty of cluster sampling is that it increases the standard errors of estimate of population values.

Table 13.21 Cluster sample of doctor’s files

Doctor     File numbers   Number of patients   Groups of files drawn        Results of survey
Adams      01001–08271    2,677                01191–01200, 01451–01460    10, 8
                                               04721–04730, 08071–08080    6, 5
Baker      11001–18422    4,000                13311–13320, 15611–15620    9, 8
                                               16271–16280, 19101–19110    8, 5
Devine     21001–28163    2,988                21171–21180, 21421–21430    9, 9
                                               24221–24230, 25121–25130    9, 10
Erickson   31001–38228    3,789                33011–33020, 33071–33080    4, 3
                                               35461–35470, 39161–39170    1, 2
Fisher     41001–48129    2,500                41141–41150, 41511–41520    3, 4
                                               44311–44320, 45281–45290    0, 6
Jackson    51001–58633    3,978                52211–52220, 52281–52290    3, 5
                                               59011–59020, 59321–59330    0, 3
Lamb       61001–68245    3,250                61281–61290, 65311–65320    0, 0
                                               68061–68070, 68161–68170    4, 0
Reader     71001–78332    2,822                71071–71080, 74041–74050    8, 0
                                               76281–76290, 76281–76290    10, 5
Stone      81001–88107    3,146                84041–84050, 86271–86280    6, 1
                                               88091–88100, 88301–88310    3, 3
Webster    91001–98212    3,800                94131–94140, 94171–94180    1, 5
                                               95101–95110, 97181–97190    5, 4
Total                     32,950                                            185
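The calculations in equations (13.79) to (13.82) can be reproduced directly from the forty cluster counts in the final column of Table 13.21; the following is a sketch using only the standard library.

```python
from math import sqrt

# Number of patients with private health insurance in each of the forty clusters of ten
cluster_counts = [
    10, 8, 6, 5,   9, 8, 8, 5,   9, 9, 9, 10,  4, 3, 1, 2,   3, 4, 0, 6,
    3, 5, 0, 3,    0, 0, 4, 0,   8, 0, 10, 5,  6, 1, 3, 3,   1, 5, 5, 4,
]
n_c, N_d, N = len(cluster_counts), 10, 32950
n = n_c * N_d                                  # 400 sampled patients
f = n / N                                      # overall sampling rate

y_total = sum(cluster_counts)                  # 185
mean = y_total / n                             # 0.4625
s_c2 = (sum(y ** 2 for y in cluster_counts) - y_total ** 2 / n_c) / (n_c - 1)
var_mean = (1 - f) * s_c2 / (n_c * N_d ** 2)   # equation (13.74)

print(mean, round(var_mean, 6), round(sqrt(var_mean), 4))  # 0.4625  0.00258  0.0508
```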


The effects of clustering

The example and the derivation of the appropriate formulae for standard errors of cluster samples have already implicitly shown the effect of cluster sampling. Suppose one were to compare two samples, one of which was a simple random sample, the other a cluster sample. In both samples, the same number of elements is chosen. In the cluster sample, there are many fewer independent choices in the population. The simple random sample has n independent choices of elements; the cluster sample has only nc

independent choices, where nc is the number of clusters drawn. The effects of clustering on the variance come from two sources: the variance between the clusters, and the variance within the clusters. The variance within clusters will often be relatively small, but it can be large. This depends on the degree of homogeneity of the clusters themselves. For example, dwellings in a city block will usually be similar in size and price, because they are likely to have been developed at much the same time, and to have been developed to similar plans. If the survey is measuring attributes that may correlate with the size or price of the housing, then one could expect considerable homogeneity in the data from blocks of dwelling units. On the other hand, passengers on an aircraft are not likely to be very similar on almost any attribute, so that there is likely to be much less homogeneity in clusters based on aircraft passengers.

In the example, the variance of the mean was found to be 0.002580. For the equivalent simple random sample, the correct estimate of the variance would be given by equation (13.83):

\mathrm{var}(p) = (1 - f)\,\frac{pq}{n - 1} = (1 - 0.012)\,\frac{(0.4625)(0.5375)}{400 - 1} = 0.000615 \qquad (13.83)

This would usually be approximated by omitting the finite population correction factor and also by replacing (n – 1) in the above equation by n, producing the estimate of the variance of 0.000621. Although this is a very slight overestimate, it is usually acceptable for practical situations. Suppose that the variance of the cluster sample was mistakenly calculated as though the sample were a simple random sample (a commonly made error), then the variance of the mean would be underestimated by a ratio of 25.80 / 6.21 = 4.15, and the standard error would be underestimated by the square root of this value, or 2.04. The ratio of the variance of the actual sample to the equivalent simple random sample, estimated in this case to be 4.15, is the design effect of the cluster sample. The design effect is also often written (Kish, 1965) as deff.

A second useful idea that can be derived from this comparison is the effective sample size. The effective sample size is the size of a simple random sample that would produce the same variance as was produced by the cluster sample. Clearly, since the cluster sample produced a variance that was 4.15 times that of the simple random sample, the effective sample size will be equal to the actual cluster sample size (400 elements in this case) divided by the ratio of the variances (4.15 in this case), which is equal, in this case, to ninety-six. In other words, a simple random sample of ninety-six


elements would have had the same standard error as the cluster sample of 400 elements, drawn as forty clusters of ten elements. This can be proved easily by computing the variance for a sample of ninety-six elements, as 0.4625 times 0.5375 divided by ninety-six, which produces a variance of 0.002590, which is, effectively, exactly the same as the variance obtained from the cluster sample.

This also allows the derivation of a measure of the homogeneity of the elements within the clusters. Assuming that there are Nd elements in each equal-size cluster, then Kish (1965) defines the measure of homogeneity, ρ, as shown in equation (13.84):

\rho = \frac{\mathrm{deff} - 1}{N_d - 1} \qquad (13.84)

In this particular example, the value of ρ would be 0.35. It is clear to see, from equation (13.84), that smaller clusters would tend to lead to greater homogeneity, as might be expected, and that a smaller design effect would produce a lower estimate of homogeneity.
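These quantities are simple to compute once the two variances are available; the following brief sketch uses the figures from the medical practice example.

```python
var_cluster = 0.002580   # variance of the mean from the cluster sample
var_srs = 0.000621       # variance if the 400 elements had been a simple random sample
n, N_d = 400, 10         # sample size and (equal) cluster size

deff = var_cluster / var_srs        # design effect, about 4.15
n_effective = n / deff              # effective sample size, about 96
roh = (deff - 1) / (N_d - 1)        # measure of homogeneity, about 0.35

print(round(deff, 2), round(n_effective), round(roh, 2))  # 4.15  96  0.35
```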

Unequal clusters: population values and standard errors

It is much more common that naturally occurring clusters will be of unequal size. For example, looking at Table 13.20, every one of the clusters shown will be of unequal size. It is usually only when data are not naturally clustered and the survey researcher can cluster the data into groups, as was done in the medical practice example, that equal clusters can be used. Even when the clusters are of equal size, nonresponse may result in unequal-size clusters. For instance, in the previous example of the medical practice, all patients were assumed to respond. The reality is likely to be that each cluster will end up with a different number of patients, and the clusters will be of unequal size.

When the clusters are of unequal size, the sample size, itself, becomes a random variable, in that it cannot be known in advance what the final sample size will be, and, if many different random cluster samples were to be drawn, each would result in a different sample size, even when the goal was to sample the same number of clusters in each sample. Two results arise from the variability in the sample size. The first is that there will now be uncertainty about both the cost and the variance of the sample. Second, the variance and population estimates will no longer be unbiased estimates.

Apart from this difference in the properties and uncertainty about variances, and sample and population values, and the random nature of the sample size, cluster samples with unequal clusters can still be drawn using any random sampling method – simple random sampling, stratified sampling, and multistage sampling. To develop the appropriate formulae for unequal clusters, the number of elements in the population is X, and the number of elements in the sample is x. This notation is used to remind the reader that there is variability in the number of elements in the sample, because of unequal clusters. There are Xd (= Nd) samples in the dth cluster and ∑d Xd = X = N. For the sample, xc = nc, and the total sample is ∑c xc = n. It then follows that Ycd is the value


of the observed attribute Yj of the cth element of the dth cluster. The values of all the elements of the population could then be written as equation (13.85) shows:

X_1:\; Y_{11}, Y_{21}, Y_{31}, \dots, Y_{X_1 1}
X_2:\; Y_{12}, Y_{22}, Y_{32}, \dots, Y_{X_2 2}
X_3:\; Y_{13}, Y_{23}, Y_{33}, \dots, Y_{X_3 3}
\vdots
X_C:\; Y_{1C}, Y_{2C}, Y_{3C}, \dots, Y_{X_C C} \qquad (13.85)

The total Y in the population is given by equation (13.86):

Y = \sum_{c=1}^{C} Y_c = \sum_{c=1}^{C}\sum_{d=1}^{D} Y_{cd} = \sum_{j=1}^{N} Y_j \qquad (13.86)

(13.86)

The mean of Y in the population is given by equation (13.87):

\bar{Y} = \frac{Y}{X} = \frac{1}{X}\sum_{c=1}^{C}\sum_{d=1}^{D} Y_{cd} = \frac{\sum_{c=1}^{C} Y_c}{\sum_{c=1}^{C} X_c} \qquad (13.87)

There are several problems created by cluster sampling with unequal clusters. These include the following.

(1) The size of the sample cannot be fixed prior to sampling, because it is a random variable that depends on the number of smaller and larger clusters selected in the sample, and will vary from sample to sample.

(2) The estimated mean from the sample, which is y / x, is not an unbiased estimate of the true population mean.

(3) The variance estimates obtained from the sample are not unbiased estimates of the population variance.

(4) The formulae for the variance appear complex, as is shown later in this section.

However, provided that the cluster sample is well designed, the biased estimates of the mean and the variance are good approximations to the true values, although it must be stressed that this is dependent on the use of a good design. The biases can be reduced by selecting a large number of clusters, or by stratification in the sampling of the clusters when the number of clusters is fixed. The best method for selecting clusters is a method called paired selection. There are a number of other sampling methods that can be used also. Most will not be covered in this book. However, the methods of random selection of unequal clusters, two-stage sampling of unequal


clusters, and paired selection of unequal clusters are described in the remainder of this subsection.

Random selection of unequal clusters

Clusters can be selected with or without subsampling within the clusters. Without subsampling, a random selection is made of c clusters from the C clusters in the population. In this case, the sampling rate is fp = c / C, which is the probability of selection of any of the N members of the population, and where p denotes the primary sampling units. If the sample selection is without replacement, then the simple variance formula produces an overestimation that is generally negligible if C, the number of clusters in the population, is large. For example, if the clusters are households in a metropolitan region, then C will usually be estimated in the hundreds of thousands or millions, and the condition of a large C is clearly met. If the selection is with replacement, then some clusters may be sampled more than once. The reader is referred to Kish (1965) for further details on this situation.

Random selection can be extended to more than one stage, by selecting primary units at random, and then subsampling secondary units, prior to the final clusters. For this sampling method to remain EPSEM, the sampling rate for the secondary units must be a constant rate, fs, so that the probability of sampling any of the secondary population units is fs, conditional upon the initial sampling probability of fp. This produces an EPSEM sample in which the equal probabilities of selection are f = fp ⋅ fs. If the clusters are of equal size, then the constant rate of sampling of the secondary units provides equal-sized subsamples. However, if the clusters are of unequal size, then two alternatives arise. If the secondary units are selected with equal probabilities, then the combined probability of selecting a unit from the population will be unequal, and will in fact be inversely proportional to the size of the primary units. This would mean that the sample is not EPSEM. Thus, to maintain an EPSEM sample, it would be necessary to sample the secondary units in one of a number of different ways, each of which would produce an EPSEM sample. Kish (1965: 185–6) outlines five different procedures that can be used to achieve the desired result.

Regardless of the method selected to obtain the sample, the population estimates and standard errors can be computed as shown in this subsection. First, the sample mean is estimated as the ratio r, as shown in equation (13.88):

r = \frac{y}{x} = \frac{\sum_c y_c}{\sum_c x_c} \qquad (13.88)

It is important to recall here that the size of the sample is itself a random variable, which will be different from one sample to the next from the same population. Thus, dividing by n, the count of elements, to obtain the mean is shown here as division by x = ∑xc as a reminder that the sample size is a variable. Similarly, in equation (13.88), y is the sample total for the attribute Y and is simply ∑yc. Because y and x are both random variables, the variance of the mean is not the simple division of the element variance,


sy², divided by the sample size of n, but is given by equation (13.89), which is the variance of the ratio of two random variables:

iance of the ratio of two random variables:

var vr var var( ) covx

y r x r) cx r) cr vr vr vrr vr vrr vr vr v y ry r var(var( ) c) cy ry r x rx r) cx r) c) cx r) c y xy x,y x,,y x,r vr vr vr v1

) c2) c) cx r) c2) cx r) c2

22 (13.89)

This is actually an approximation, and is a good approximation provided that the coefficient of variation of x (s.e.(x) / x) is less than about 0.20. Then the standard error of the estimated population mean from the sample is given by equation (13.90):

s.e.(r) = \frac{1}{x}\sqrt{(1 - f)\,c\left(s_y^2 + r^2 s_x^2 - 2r\,s_{yx}\right)} \qquad (13.90)

These formulae hold for simple random sampling of the clusters. More complex formulae apply for stratified sampling, which is discussed following an example.

Example

The example, based on one from Kish (1965), is concerned with a survey of dwelling units in a small city, where there are a total of 300 blocks. A sample of twenty blocks is drawn randomly from the 300 blocks, with the results shown in Table 13.22.

Although this sample is a simple random sample of blocks, if the sampling units of interest are the dwelling units, then this is a cluster sample with unequal cluster sizes, as shown by the number of dwelling units in each block. The survey is to determine the proportion of renter-occupied dwelling units. The number of blocks in the sample is twenty from a population of 300. The sampling fraction for the blocks is 20 / 300 = 0.067. The proportion of rented units is given by applying equation (13.88), as shown in equation (13.91):

r = \frac{y}{x} = \frac{282}{435} = 0.6483 \qquad (13.91)

The variance of this proportion can be calculated by using equation (13.89), as shown in equation (13.92). The variance of the number of dwelling units is calculated to be 369.25, the variance of the number of rented units is 297.46, and the covariance of these two measures is 304.38.

\mathrm{var}(r) = \frac{(1 - f)\,c}{x^2}\left(s_y^2 + r^2 s_x^2 - 2r\,s_{yx}\right) = \frac{(1 - 0.067)(20)}{435^2}\left[297.46 + (0.6483)^2(369.25) - 2(0.6483)(304.38)\right]
= \frac{0.933}{9{,}461.25}\left(297.46 + 155.193 - 394.659\right) = 0.005719 \qquad (13.92)

From this, the standard error of the proportion can be calculated, by taking the square root of the result in equation (13.92), giving s.e.(r) = 0.0756, or 7.56 per cent. Once again, the variance and standard error can be computed for the case in which the dwellings were selected as a simple random sample, using equation (13.93) for the variance:


Table 13.22 Random drawing of blocks of dwelling units

Random draw   Block   Dwelling units   Renters
1             294     67               52
2             74      15               9
3             136     13               7
4             85      57               43
5             135     4                –
6             27      29               21
7             117     15               8
8             220     11               7
9             106     24               4
10            96      20               10
11            100     2                –
12            49      62               59
13            126     19               10
14            153     27               18
15            268     15               12
16            146     25               9
17            266     6                –
18            113     18               12
19            219     3                1
20            243     3                –
Total                 435              282

\mathrm{var}(p) = (1 - f)\,\frac{pq}{n - 1} = (0.933)\,\frac{(0.6483)(0.3517)}{434} = 0.00049 \qquad (13.93)

The standard error for a simple random sample is therefore 0.0221. The design effect is 0.005719 / 0.00049 = 11.81. The standard error would have been underestimated by a factor of 0.0756 / 0.0221 = 3.42.
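The ratio-estimator calculation in equations (13.91) and (13.92) can be checked with a few lines of code, using the summary statistics quoted in the text (a sketch only; small differences in the last digits arise from rounding of the inputs).

```python
from math import sqrt

c, C = 20, 300                 # sampled and total blocks
x, y = 435, 282                # total dwelling units and renter-occupied units in the sample
f = c / C
s_x2, s_y2, s_yx = 369.25, 297.46, 304.38   # variances and covariance of the cluster totals

r = y / x                                                              # 0.6483
var_r = (1 - f) * c / x ** 2 * (s_y2 + r ** 2 * s_x2 - 2 * r * s_yx)   # equation (13.92)

# 0.6483  0.00572  0.0756 (cf. 0.005719 in the text, which uses rounded inputs)
print(round(r, 4), round(var_r, 5), round(sqrt(var_r), 4))
```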

Stratified sampling of unequal clusters

It is more usual that clusters are sampled using stratified sampling rather than simple random sampling, especially because this is a method to reduce the variance and therefore the standard error of the estimates. There are many possible sample designs that could be created for this case. However, the computation of the estimated mean and the variance and standard error of the mean can be calculated in a relatively straightforward way. In this case, the ratio estimate of the mean is given by equation (13.94):

r = \frac{y}{x} = \frac{\sum_{h=1}^{H}\sum_{c=1}^{C_h} y_{hc}}{\sum_{h=1}^{H}\sum_{c=1}^{C_h} x_{hc}} = \frac{\sum_{h=1}^{H} y_h}{\sum_{h=1}^{H} x_h} \qquad (13.94)

where h indicates the stratum, and c the cluster.


The same issues exist for this type of sample as for the random selection of unequal clusters. It will be necessary to devise a sampling method that ensures that there is still an equal probability of selection within each stratum across all clusters. The formula for the variance is only slightly changed from equation (13.89), by the replacement of the overall variances with the variances within the strata, as shown in equation (13.95):

\mathrm{var}(r) = \frac{1}{x^2}\sum_h\left[\mathrm{var}(y_h) + r^2\,\mathrm{var}(x_h) - 2r\,\mathrm{cov}(y_h, x_h)\right] \qquad (13.95)

This equation can also be rewritten in terms of the within-stratum variances, as shown in equation (13.96):

\mathrm{var}(r) = \frac{1}{x^2}\sum_h (1 - f_h)\,c_h\left(s_{yh}^2 + r^2 s_{xh}^2 - 2r\,s_{yxh}\right) \qquad (13.96)

In proportionate sampling of the clusters, all the fh will be the same, so that the term (1 – fh) will become simply (1 – f) and can be taken outside the summations in equation (13.96).

Paired selection of unequal-sized clusters

Often, the best design for sampling clusters is to use a paired selection procedure, in which, in effect, strata are formed by taking pairs of clusters. In doing this, the number of strata, H, will be equal to one-half of the number of sampled clusters – i.e., H = c / 2. This procedure is simple, as well as providing a larger number of strata than many other methods and leading to efficient design.

To sample clusters with this method using a uniform sampling fraction, one would first form strata of 2g clusters, where g is the expansion factor for the cluster sample, g = 1 / fc, and select two clusters at random from each stratum. For example, the city blocks used in the preceding example number 300, and range from one dwelling unit to 153 dwelling units in a block, with an average of twenty-seven dwelling units in a block. A sample of about 540 dwelling units could be selected in c = 20 blocks. Therefore, form ten strata of thirty blocks each and select two blocks from each stratum. The sampling fraction is 2 / 30 = 1 / 15, which is the same as twenty out of 300. Unequal sampling rates could also be used in paired selections, if there are major issues with cost or variance that need to be considered. The reader is again referred to Kish (1965) for further elaboration of this method of sampling.
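A sketch of forming the pairs for this design follows; the block identifiers are illustrative, and any stable ordering of the 300 blocks would serve equally well.

```python
import random

blocks = list(range(1, 301))        # the 300 city blocks
g = 15                              # expansion factor, g = 1 / f_c = 300 / 20

# Form strata of 2g = 30 blocks each, then select two blocks at random from each stratum
strata = [blocks[i:i + 2 * g] for i in range(0, len(blocks), 2 * g)]
paired_selection = [random.sample(stratum, 2) for stratum in strata]

print(len(strata), paired_selection[0])   # 10 strata; e.g. [17, 4] from the first stratum
```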

Using paired selections for stratified sampling, the estimate of the ratio mean is given in equation (13.97):

$$r=\frac{y}{x}=\frac{\sum_{h}\left(y_{h1}+y_{h2}\right)}{\sum_{h}\left(x_{h1}+x_{h2}\right)} \qquad (13.97)$$


where yh1 and yh2 are the totals of the Y attribute for the first and second clusters in the hth stratum, and xh1 and xh2 are the same for the counts of elements in the clusters. The variance of r may also be estimated from equation (13.98):

$$\operatorname{var}(r)=\frac{1-f}{x^{2}}\sum_{h}\left[\Delta y_{h}^{2}+r^{2}\Delta x_{h}^{2}-2r\,\Delta y_{h}\,\Delta x_{h}\right] \qquad (13.98)$$

where

$$\Delta y_{h}=\left(y_{h1}-y_{h2}\right) \quad \text{and} \quad \Delta x_{h}=\left(x_{h1}-x_{h2}\right)$$

If variable sampling fractions are used, then the factors (1 – fh) are brought inside the summation. Otherwise, the equations above will work for paired selection of clusters for all random sampling methods.
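
As a minimal sketch of equations (13.97) and (13.98) as set out above (the function name, data layout, and the example totals are illustrative assumptions), the ratio mean and its variance can be computed from the paired cluster totals as follows:

```python
def paired_ratio_estimate(pairs, f=0.0):
    """Ratio mean and its variance from paired selections of clusters.

    pairs : list with one entry per stratum, each of the form
            ((y_h1, x_h1), (y_h2, x_h2)), where y is the cluster total of the
            attribute and x is the count of elements in the cluster
    f     : overall sampling fraction (0 ignores the finite population correction)
    """
    y = sum(y1 + y2 for (y1, _), (y2, _) in pairs)
    x = sum(x1 + x2 for (_, x1), (_, x2) in pairs)
    r = y / x                                       # equation (13.97)
    var_r = (1.0 - f) / x ** 2 * sum(
        (y1 - y2) ** 2 + r ** 2 * (x1 - x2) ** 2
        - 2.0 * r * (y1 - y2) * (x1 - x2)
        for (y1, x1), (y2, x2) in pairs
    )                                               # equation (13.98)
    return r, var_r

# e.g., three strata of paired clusters (hypothetical totals and element counts)
pairs = [((42, 12), (35, 10)), ((58, 15), (61, 17)), ((25, 8), (30, 9))]
r, var_r = paired_ratio_estimate(pairs, f=1 / 15)
print(round(r, 3), round(var_r ** 0.5, 4))          # ratio mean and its standard error
```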

13.5.2 Systematic sampling

Systematic sampling is a commonly used method of sampling that approximates a random sampling method. The method is to draw a sample consisting of every mth element from a list of elements in the population, or from encounters with elements in the population. For example, the survey researcher may have a list of persons, or home addresses, etc. and choose every mth entry in the list. A traffic survey person may select every mth vehicle passing a particular point on the road, or an interviewer might select every mth person entering a shopping centre. Such samples are not random, because, once the first element has been chosen, the remaining elements in the population have either a probability of one of being chosen (every subsequent mth element) or zero (all the other elements of the population). The choice of the first element to sample can be done in any number of ways, although none of these will change the systematic sample from a non-random to a random method. It will remain a biased method, no matter how the first element is selected.

Some examples of systematic samples are:

(1) selecting from a list of voters on an electoral roll, by choosing every tenth individual from an alphabetised list;

(2) selecting people entering a shopping mall, by stopping every twentieth person as he or she enters the main entrance of the mall; and

(3) selecting vehicles at a border crossing, by stopping every tenth vehicle that approaches the border crossing.
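
A minimal sketch of drawing such a sample from a listing follows (the list contents and function name are illustrative assumptions; a random start is one common way of choosing the first element, although, as noted above, this does not make the sample random):

```python
import random

def systematic_sample(population, m, seed=None):
    """Select every mth element from an ordered listing, after a random start."""
    rng = random.Random(seed)
    start = rng.randrange(m)          # random position within the first interval
    return population[start::m]

# e.g., every tenth name from an alphabetised electoral roll of 1,000 names
roll = [f"voter_{i:04d}" for i in range(1, 1001)]
chosen = systematic_sample(roll, m=10, seed=42)
print(len(chosen))                    # 100 names selected
```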

Population values and standard errors in a systematic sample

There is no completely valid method to estimate population values and standard errors in a systematic sample, because the sample is not a random or EPSEM sample. However, there are a number of methods that can be used to obtain approximate values.


The first of these is to treat the sample as a simple random sample, the second is known as a stratified random model, the third is a paired selection model, and the fourth is a successive difference model. Each of these is discussed and described in the following subsections of this chapter.

Simple random model

The simple random model is based first on the assumption that the initial listing from which elements are selected is in random order. An alphabetised list of people’s names, or arrivals of vehicles at a point on a roadway, or arrivals of shoppers at a shopping centre are generally random processes, unless the purpose of measurement of the survey is related to the alphabetical ordering of last names, the time of arrival at a point on the roadway, or the time of arrival for shopping. In the event that the survey measurement is not related to these aspects, the systematic sample can be assumed to be a fair approximation to a random sample. In this case, the statistics developed early in this chapter for population values and their standard errors are acceptable approximations. However, it should be noted that the assumption of a random sample in such instances will always lead to an underestimation of the standard errors.

Stratified random model

If nearby elements in a population listing or interception are likely to be more similar to one another than those that are not as near, then the variance in the sample will be lower than that indicated by an assumption of a simple random sample, and will lead to further underestimation. An example of such a listing would be a list of home addresses arranged geographically by block. Homes on the same block are likely to be more similar to one another than homes on widely separated blocks. This will introduce a proportionate stratification effect. This effect will be even more pronounced if the systematic sample is used to draw a cluster sample, which is often the case. To handle this situation, the population should be divided into strata, and the sampling units should then be randomly ordered within the strata before drawing the sample. This cannot, of course, be done with a sample that is taken in time, such as shoppers entering a shopping mall or vehicles passing a point in the roadway. It applies only to a listed population.

The major issue that arises in such a case is that there is no clear way in which to set the stratum boundaries, and this will become a subjective process. It is also not a simple process. As a result, when there is the expectation of such stratification effects in the population, it is usually better to use either the paired selection model or the successive differences model.

Paired selection model

This method is similar to the paired selection method of cluster sampling. In this case, it is assumed that each successive pair of selections from the population fall into an implicit stratum.


Thus, for example, the first and second selections in the systematic sample are assumed to belong to one stratum, the third and fourth to another, and so on through the entire sample. It is an underlying assumption of this method that the elements within each stratum are randomly sorted, so that there is no systematic relationship among them that bears on the subject of the survey. The elements are now compared pairwise to estimate the variance of the mean, which is estimated as shown in equation (13.99):

$$\operatorname{var}\left(\bar{y}\right)=\frac{1-f}{n^{2}}\sum_{h=1}^{n/2}\left(y_{ha}-y_{hb}\right)^{2} \qquad (13.99)$$

As before, h designates each stratum, containing two successive systematic samples, and a and b indicate the first and second systematic draws within each stratum. If n is odd, then a slightly different procedure has to be introduced, otherwise one ends up with one stratum containing only one observation. In this event, one element from the sample is selected at random and is used twice. This will provide (n + 1) / 2 = m pairs, and equation (13.99) becomes equation (13.100):

$$\operatorname{var}\left(\bar{y}\right)=\frac{1-f}{\left(2m\right)^{2}}\sum_{h=1}^{m}\left(y_{ha}-y_{hb}\right)^{2} \qquad (13.100)$$

The procedure for estimation is otherwise the same as in the case of n being even.

Successive difference model

The successive difference model is a modification of the paired selection model that works for either odd or even sample sizes. It avoids the need to select one sample element randomly to use twice over. In this case, the paired selection involves taking the first and second elements for the first pair, the second and third for the second pair, etc. In this case, the variance is computed as shown in equation (13.101):

$$\operatorname{var}\left(\bar{y}\right)=\frac{1-f}{2n\left(n-1\right)}\sum_{g=1}^{n-1}\left(y_{g+1}-y_{g}\right)^{2} \qquad (13.101)$$

In the following example, the three methods of random sample, paired selections, and successive differences are all used to estimate the standard error of the mean. It should be noted that the means, ratios, probabilities, and totals are estimated from a systematic sample in the same way as from a simple random sample.

Example

Suppose that the data exhibited in the last column of Table 13.21 had been obtained by a systematic sample, yielding the results repeated here of (10, 8), (6, 5), (9, 8), (8, 5), (9, 9), (9, 10), (4, 3), (1, 2), (3, 4), (0, 6), (3, 5), (0, 3), (0, 0), (4, 0), (8, 0), (10, 5), (6, 1), (3, 3), (1, 5), (5, 4).


These have been grouped into pairs here, to aid in determining the results from the paired selection method. As was found before, the mean of these values is 185 / 40 = 4.625. If the simple random model is used, then the standard error of this sample is obtained by taking the square root of the variance divided by the number of observations. The variance is 10.44551, so the standard error is 0.511 (ignoring the finite population correction factor).

Using the method of paired selection, equation (13.99) applies. Using the method of successive differences, equation (13.101) applies. The results of these are shown in Table 13.23. The sums of squares of the differences are shown at the bottom of the second and third columns of the table. Substituting into equation (13.99), the result for the variance for the paired selection model is given in equation (13.102):

$$\operatorname{var}\left(\bar{y}\right)=\frac{1}{n^{2}}\sum_{h=1}^{n/2}\left(y_{ha}-y_{hb}\right)^{2}=\frac{215}{40^{2}}=0.1344 \qquad (13.102)$$

This provides a standard error of the mean of 0.367.

Similarly, for the successive differences model, substituting into equation (13.101), the result for the variance is given in equation (13.103):

$$\operatorname{var}\left(\bar{y}\right)=\frac{1}{2n\left(n-1\right)}\sum_{g=1}^{n-1}\left(y_{g+1}-y_{g}\right)^{2}=\frac{540}{80\times 39}=0.1731 \qquad (13.103)$$

The standard error of the mean is estimated in this case as 0.416.

As can be seen clearly from this example, the random sampling model gives the largest estimate of error, while the paired selection model gives the smallest estimate. None of these estimates is correct, but any one of them will provide a reasonable approximation of the standard error. As to which of the methods provides the best estimate, this is largely a matter of the nature of the sample itself.
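
The three estimates can be reproduced with a short sketch (the variable names are illustrative; the finite population correction is ignored, as in the text):

```python
import statistics

# the forty systematic observations from the example, in order
y = [10, 8, 6, 5, 9, 8, 8, 5, 9, 9, 9, 10, 4, 3, 1, 2, 3, 4, 0, 6,
     3, 5, 0, 3, 0, 0, 4, 0, 8, 0, 10, 5, 6, 1, 3, 3, 1, 5, 5, 4]
n = len(y)

# simple random model: square root of the sample variance divided by n
se_random = (statistics.variance(y) / n) ** 0.5

# paired selection model, equation (13.99): successive pairs form strata
ss_pairs = sum((y[i] - y[i + 1]) ** 2 for i in range(0, n, 2))
se_paired = (ss_pairs / n ** 2) ** 0.5

# successive differences model, equation (13.101)
ss_diffs = sum((y[i] - y[i + 1]) ** 2 for i in range(n - 1))
se_successive = (ss_diffs / (2 * n * (n - 1))) ** 0.5

print(round(sum(y) / n, 3), round(se_random, 3), round(se_paired, 3), round(se_successive, 3))
# prints: 4.625 0.511 0.367 0.416
```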

There are two possible problems that can arise in the populations from which a systematic sample is drawn. The first of these is that there is a monotonic trend in the population listing or interception occurrence. For example, a list of immigrants to a country that is arranged by date of entry will have a monotonic trend in terms of the duration of time that the immigrant has been in the country. If the survey relates to such issues as assimilation into the population of the country, then a monotonic trend is likely to exist in the listing. The second problem arises if the order of the population list is subject to periodic fluctuations that are relevant to the survey. For example, a survey of bank transactions may be subject to fluctuations resulting from a standard pay period for customers of the bank, or from the payment of pension instalments on a specific day of the week or month. Such situations should be avoided if at all possible in drawing a systematic sample, so as to avoid any biasing of the sample.

There are two procedures that can be used to improve the representativeness of the systematic sample.


Table 13.23 Calculations for paired selections and successive differences

Results of survey   Paired selections   Successive differences
10
 8                   2                    2
 6                                        2
 5                   1                    1
 9                                       −4
 8                   1                    1
 8                                        0
 5                   3                    3
 9                                       −4
 9                   0                    0
 9                                        0
10                  −1                   −1
 4                                        6
 3                   1                    1
 1                                        2
 2                  −1                   −1
 3                                       −1
 4                  −1                   −1
 0                                        4
 6                  −6                   −6
 3                                        3
 5                  −2                   −2
 0                                        5
 3                  −3                   −3
 0                                        3
 0                   0                    0
 4                                       −4
 0                   4                    4
 8                                       −8
 0                   8                    8
10                                      −10
 5                   5                    5
 6                                       −1
 1                   5                    5
 3                                       −2
 3                   0                    0
 1                                        2
 5                  −4                   −4
 5                                        0
 4                   1                    1
Sum: 185            Sum of squares: 215  Sum of squares: 540


The first is to randomise the population list entries before drawing the sample. This can be done only when there is a complete population list, and it is seldom practical because, if the listing can feasibly be randomised, then it would be possible to draw a true random sample, and systematic sampling would be inappropriate. The second method, which can be adopted in larger samples, is to introduce a random interval periodically through the sampling. For example, suppose that a sample is being drawn of vehicles passing a point on a roadway, and that the systematic sample is set as taking every tenth vehicle that passes the point. After the first ten vehicles have been sampled, a random interval between one and ten is chosen; say an interval of six is drawn. This would mean that, after choosing the tenth, twentieth, thirtieth, and so on to the 100th vehicle, the next vehicle chosen after the 100th vehicle is the 106th. Following this, the 116th, 126th, 136th, etc. vehicles are selected, up to the 196th. Then another random interval is chosen, say three, so that the next vehicle chosen is the 199th. This can continue throughout the sampling and reduces the systematic nature of the sample. However, as a field sampling method, it can be difficult to implement.
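
A minimal sketch of this randomised-interval procedure (the function name and parameters are illustrative assumptions) is as follows:

```python
import random

def randomised_systematic_positions(n_population, m=10, run_length=10, seed=None):
    """Positions picked by a systematic sample with a periodic random interval.

    Every mth unit is taken, but after each run of `run_length` selections the
    next selection is taken at a random distance of between 1 and m units,
    which breaks up the strictly periodic pattern.
    """
    rng = random.Random(seed)
    positions = []
    pos = m                                    # first selection: the mth unit
    while pos <= n_population:
        for _ in range(run_length):            # a run of selections spaced m apart
            if pos > n_population:
                break
            positions.append(pos)
            pos += m
        pos = positions[-1] + rng.randint(1, m)    # random interval before the next run
    return positions

# e.g., every tenth vehicle out of 500, with a random interval after each run of ten
print(randomised_systematic_positions(500, seed=7)[:15])
```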

In summary, systematic sampling is a useful quasi-random method of sampling that can be applied either to situations in which a list of population members exists but choosing a random sample is a very onerous task, or in which population members can be intercepted at a point, but random selection is not feasible. The method provides population estimates that are expected to be somewhat biased, but is sufficiently close to random that the differences are usually ignored.

13.5.3 Choice-based sampling

This is a method of sampling that has been introduced in recent years to provide data generally about modelling consumer choices. It involves drawing a sample from people making specific choices in a market situation, such as people choosing to buy a television, or people choosing to travel on a specific day, or people taking a bus on a particular day, or people visiting a shopping centre at a particular time. In choice-based sampling, the sampling is not EPSEM from the general population of an urban area, or other region. This means simply that the data from such a survey cannot be expanded to the total population. Any expansion that is made will be only to the population making that particular choice.

Provided that the sampling from within the choice population is EPSEM, then all the statistics and population values for EPSEM samples can be calculated as usual, but for the population making the choice only. Any method of sampling may be used within the choice population, such as simple random sampling, proportionate or disproportionate sampling, multistage sampling, cluster sampling, or systematic sampling. The same formulae will then apply to the choice population as have been developed for each of these sampling methods, and the same ability is provided to estimate totals, means, ratios, and probabilities, with their associated standard errors, except that these apply only to the choice population and not to the total population.


13.6 Non-random sampling methods

There are a number of non-random sampling methods that may be encountered at various times in survey research. In general, it is recommended that the survey researcher avoid such sampling methods, especially if there is a desire to produce population estimates and standard errors from the data. As a general rule, it can be stated that there are no accepted methods for estimating either population values or standard errors from non-random sampling methods. They are discussed here for two reasons. First, there are situations when such samples are appropriate to use. Second, the survey researcher may come across such methods used by others or suggested for use, and needs to be able to recognise the methods for what they are and to be able to avoid them, when appropriate.

13.6.1 Quota sampling

Quota sampling is often confused in the minds of survey personnel with stratified sampling. However, there are major differences between them. Stratified sampling involves using strict random sampling from the strata into which the population is split, whereas quota sampling usually involves no strict sampling method but, rather, giving survey personnel quotas to apply to field sampling. For example, a survey may be conducted of shoppers arriving at a shopping centre. Interviewers are given a certain quota of men and women to interview. With simply a number of interviews to complete by gender, interviewers are likely to either select the first n shoppers to arrive at the shopping centre, or choose the ones they pre-qualify as being most likely to respond, or choose the ones they are most attracted to trying to interview.

In general, quota sampling proceeds by defining a number of samples to be taken from specific subgroups of the population, but leaves the actual selection of the individual sampling elements to the discretion of the surveyor in the field. Such a sample will not be an equal-probability sample, no matter how carefully interviewers attempt to select sample members. Quota sampling also tends to rely heavily on ‘first calls’ – i.e., accepting all initial refusals and going immediately to replacement members of the population. There are also likely to be a range of other errors in quota sampling.

Quota sampling is often cheaper, because interviewers are provided with considerable discretion in selecting members of the sample. Especially when interviewers are paid on the basis of completed interviews, a quota sample is likely to be cheaper than a random sample, and to involve numerous errors in the representativeness of the sample. Other errors likely to occur with quota sampling include: errors in stratum sizes and assignments, judgemental sample selection within strata, and inaccuracies in the computed size of the strata and in the quota sample size. Simple and avoidable mistakes are often added to the less avoidable problems of quota samples, such as assigning equal quotas to strata of different sizes when the quotas should actually be proportional to stratum size, or when stratum size is unknown and cannot be deduced from the survey or secondary data.


It is, in fact, unknown how well quota samples actually perform. It is highly dependent on the degree of latitude given to interviewers or surveyors, and also on how good the interviewers are in obeying any rules that are given to them for sample selection. Because of the unknowns in quota samples, there is no accepted method to compute population values or standard errors. Therefore, the best advice is to avoid such a sampling method, unless representativeness is not a requirement, and standard errors and population values are not expected or required from the sample.

13.6.2 Intentional, judgemental, or expert samples

Samples that are defined variously as intentional, judgemental, or expert are samples that rely on the survey researcher’s expertise to pick subjects for the survey. This type of sample is highly non-random, even when the ‘expert’ tries to pick a representative sample by including some subjects that are close to the supposed mean of the population and some that are substantially different from the mean. Typically, this is how the expert chooses subjects, and the result is a non-representative sample.

There are no sampling statistics available for intentional samples, and they cannot be expanded readily to the population. They may be useful for developing hypotheses in research, or for developing survey designs. For example, it may be useful to choose some of the youngest and some of the oldest potential participants for a survey, and some of the least well educated and some of the most well educated to test the survey design, but not for the main representative sample. Intentional samples may also be used frequently for focus groups (see Chapter 7). Intentional sampling methods may also be applied to defining the sample population, strata for stratified sampling, and other similar activities, but not for the survey sample itself.

13.6.3 Haphazard samples

Haphazard samples are those samples taken when a survey researcher attempts to replicate a random sample but without using strict random procedures. For example, a survey researcher might select a sample from a telephone directory, by allowing the directory to fall open at haphazard pages, then covering his or her eyes and stabbing at the open pages to locate a particular entry. The entry so picked is taken as a sample member. This process is continued until the sample is selected. Another example is a surveyor at a shopping centre who is interviewing shoppers as they arrive at an entrance to the shopping centre. The interviewer selects arriving shoppers by waiting unspecified but varying amounts of time after the preceding interview was completed.

Close field supervision may be required to prevent strict random sampling from becoming haphazard. Haphazard sampling will, like quota sampling, appear to be much cheaper than random sampling, because of the reduced effort required to draw the sample. However, it is biased, cannot be expanded to the total population, and has unknown errors. As a result, it should be avoided.


13.6.4 Convenience samples

Convenience samples, as the name suggests, are samples that are convenient to the survey researcher. Convenience samples are often used in exploratory research, when there are no requirements to expand the data to the total population, and when estimating standard errors is not important. For example, a Ph.D. student may select a sample of undergraduate or postgraduate students at his or her institution as the sample population for a survey that will provide data for testing certain hypotheses. Such a sample may be perfectly acceptable if the hypotheses are not strictly related to occupational status, and the student population would provide adequate data to test new ideas and hypotheses.

Other convenience samples may include a list of names and addresses obtained in prior research, for which permission has been given by respondents to be contacted again, or may be provided by employees at a particular workplace. Again, the only caution that must be kept in mind is the non-expandability of the sample statistics, and the lack of standard errors.

13.7 Summary

In this chapter, a fairly exhaustive treatment is provided of all the different sampling methods that are in common use and a number that are not commonly used. For those sampling methods that permit such estimates to be made, formulae are provided to permit the survey researcher to estimate population means, totals, ratios, and probabilities, and to estimate standard errors for these population estimates. The chapter describes random sampling methods first, which are the only methods that are able to provide unbiased estimates of population values, and for which standard errors can be estimated readily. The next section of the chapter describes a number of sampling procedures that can be considered as reasonable approximations to random samples, though not being equal probability of selection methods. For these methods, approximations are available for population values and standard errors, and the formulae for these, and methods of computation when necessary, are described. Finally, a number of non-random and non-representative sampling methods are described, for which there are no accepted methods for estimating population values, and standard errors cannot be calculated. Apart from a few exceptional circumstances, in which population values and standard errors are not needed, and for which the representativeness of a population is not important, these methods should normally be avoided at all costs.


14 Repetitive surveys

14.1 Introduction

It is quite common that surveys are designed to be conducted on the same population on repeated occasions. Sometimes such repeated surveys are to be treated simply as an unrelated series of surveys, which provide, on each occasion, more up-to-date information on the subject population. At other times the purpose of the repeated surveys is to measure change from one period of time to another. Especially when change is to be measured from one period to another with the same population, there are four methods by which the samples may be drawn on the repeated occasions.

(1) Samples may be drawn independently of one another, such that the chance of any population member being included in successive surveys is extremely low, or even intentionally avoided.

(2) Samples may be drawn so that there is overlap between the sample on one occasion and the sample on the next occasion. In such a case, part of the sample will be different on each occasion, and part will be included in at least two successive occasions.

(3) The sample on a second and subsequent occasion may be drawn as a subsample of the first occasion. In this case, all the sample members on occasions after the first will have been included in the first survey sample, but not all those included in the first survey sample will be measured in succeeding samples.

(4) The sample on each occasion consists of exactly the same subjects, with no omissions and no additions. This is also known as a true panel survey.

Each of these four methods of repetition has different implications for the calculation of population values and standard errors, especially when there is interest in calculating differences between two or more occasions. The four types of sample are illustrated schematically in Figure 14.1.

In the remainder of this chapter, each of these four cases is discussed, and formulae are presented for estimating differences between the occasions, and estimating the standard errors associated with each of the cases.


For the purposes of this discussion, it is assumed that a major purpose of the survey is to make comparisons between the successive occasions. In the event that comparisons are not of interest, the method of independent samples is the best method to use, because it is straightforward and any overlap offers no advantage in the estimation of statistics from the survey.

14.2 Non-overlapping samples

This is the case that corresponds to case 1 in Figure 14.1. Two samples are drawn from the population independently of one another. Assuming that the population is large and the sample is comparatively small – e.g., a sample of 5,000 households from a population of 1 million households – the probability of any overlap will be inherently extremely low. When the sample is comparatively large, efforts may be made to avoid the inclusion of any of the sampled subjects from the first occasion in the second-occasion sample. This is often important in human subject surveys, because the inclusion of the same persons or households in two successive surveys, when no prior warning was provided that subjects could be asked to participate in a second survey, is likely to lead to increased nonresponse from such repeated sample subjects.

Examples of non-overlapping samples are such surveys as metropolitan region household surveys that are performed every decade or two decades to provide updated information on certain attributes of the population. Another example could be a survey that is performed every two to four years on-board public buses in a metropolitan area to measure the attributes and travel characteristics of bus riders. Another example is the sample within some national censuses that is used to collect much more detailed demography of the sample.

When samples are drawn independently, there is no impact of successive occasions on the estimates from each survey, by itself. Thus, if the survey is conducted by using simple random sampling on each occasion, then the statistics and standard errors from each survey will be computed using the formulae presented in Chapter 13, without any change. The same would apply if proportionate or disproportionate sampling were to be used, or if a different method of sampling was used on each occasion.

However, if change is of interest between the two occasions, then there is a clear impact of the sampling method on the estimation of change. Change between two occasions is represented by estimating the differences between a population value on the two or more occasions of the survey. Errors in differences are governed by the rules of the variances of differences, which were introduced initially in Chapter 2. It may be recalled that the variance in a difference is given by equation (14.1):

$$\operatorname{var}\left(\bar{x}_{2}-\bar{x}_{1}\right)=\operatorname{var}\left(\bar{x}_{2}\right)+\operatorname{var}\left(\bar{x}_{1}\right)-2\operatorname{cov}\left(\bar{x}_{2},\bar{x}_{1}\right) \qquad (14.1)$$

Figure 14.1 Schematic of the four types of repetitive samples (Case 1: no overlap; Case 2: incomplete overlap; Case 3: subsample; Case 4: complete overlap (panel))


When the two samples are drawn independently, it can be assumed that the covariance term in equation (14.1) is zero. In fact, if there are no members of the sample on the first occasion in the second-occasion sample, no value for the covariance could be obtained, and the assumption of a zero value is clearly warranted. This means that the difference between the means on the two occasions would have a variance that is equal to the sum of the variances of the attribute of concern on each of the two occasions. The standard error of the difference in the means is shown in equation (14.2):

$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=\sqrt{\frac{s_{x_{2}}^{2}}{n_{2}}+\frac{s_{x_{1}}^{2}}{n_{1}}} \qquad (14.2)$$

In the event that the variances of the attribute were the same on both occasions, then equation (14.2) could be reduced to equation (14.3):

$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=s_{x}\sqrt{\frac{1}{n_{2}}+\frac{1}{n_{1}}} \qquad (14.3)$$

It is important to note that this is not the same as would be obtained by combining the two samples and estimating the difference from the combined sample. This would result in the formula shown in equation (14.4):

$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=s_{x}\sqrt{\frac{1}{n_{1}+n_{2}}} \qquad (14.4)$$

If, for example, the sample size was 5,000 units on each occasion, then sx would correctly be multiplied by 0.02, or by 0.01 if the incorrect procedure of equation (14.4) were used. This shows the design effect to be equal to two.

14.3 Incomplete overlap

Incomplete or partial overlap occurs when an attempt is made to survey the same elements on the second occasion, but some elements surveyed on the first occasion are no longer available or cannot be found. In this case, additional sample elements are added on the second occasion, usually to keep the sample size the same on this occasion as on the first occasion. In other words, most often in this situation, n1 is equal to n2. Because many elements are common to both the first and second occasions, the covariance term in equation (14.1) will be non-zero. This means that, compared to the first case, with no overlap, the variance in a difference is likely to be reduced by the amount of covariance. Of course, covariance can be either positive or negative, so a decrease in variance is not guaranteed. Indeed, if the covariance between the two occasions is negative, then the variance will actually be increased by the overlap in the sample.


Fortunately, in most practical cases of surveying on successive occasions, covariances are usually positive, so that they will have the effect of reducing the errors. The equation for the standard error of the difference in means between two occasions when there is overlap between the samples on the two occasions is shown in equation (14.5):

$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=\sqrt{\frac{s_{x_{2}}^{2}}{n_{2}}+\frac{s_{x_{1}}^{2}}{n_{1}}-\frac{2\,n_{c}\,s_{x_{2}x_{1}}}{n_{1}n_{2}}} \qquad (14.5)$$

In equation (14.5), nc is the size of the overlap – i.e., it is the sample that is common to both occasions. Equation (14.5) also makes it clear that the larger the overlap, the greater will be the reduction in error from a positive covariance. Equation (14.5) can also be modified to the case in which the sample sizes on the two occasions are the same, so that n1 = n2 = n and nc < n. This modification is shown in equation (14.6):

$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=\sqrt{\frac{1}{n}\left(s_{x_{2}}^{2}+s_{x_{1}}^{2}-\frac{2\,n_{c}\,s_{x_{2}x_{1}}}{n}\right)} \qquad (14.6)$$

The closer that nc is to n, the larger will be the covariance term. Provided that the covariance between the two occasions is positive, this is likely to lead to a substantial reduction in the standard error of the difference. It should be noted that the effect on the standard error of the additional units that are included on only the first occasion or only the second occasion is principally to improve the overall accuracy of the estimation of the means on each occasion, and also, possibly, to reduce the overall size of the variances on each occasion. It will not always be the case that the units included on only one occasion will help the overall estimation of differences between the two occasions: this will depend on the extent to which the variances are decreased by the added sample units. These units contribute nothing to the estimate of the covariance.
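
A minimal sketch of equation (14.5) follows (the function name and argument layout are illustrative assumptions); setting nc to zero recovers equation (14.2) for independent samples, while a complete overlap with a covariance equal to the common variance drives the standard error of the difference towards zero:

```python
def se_difference_in_means(s1, s2, n1, n2, cov12=0.0, nc=0):
    """Standard error of the difference in means between two survey occasions.

    s1, s2 : standard deviations of the attribute on occasions 1 and 2
    n1, n2 : sample sizes on the two occasions
    cov12  : covariance of the attribute over the overlapping units
    nc     : number of units common to both occasions (equation (14.5))
    """
    return (s2 ** 2 / n2 + s1 ** 2 / n1 - 2 * nc * cov12 / (n1 * n2)) ** 0.5

# independent samples of 5,000 with unit variances: s.e. = 0.02
print(round(se_difference_in_means(1.0, 1.0, 5000, 5000), 3))
# complete overlap (a panel) with covariance equal to the variance: s.e. = 0.0
print(round(se_difference_in_means(1.0, 1.0, 5000, 5000, cov12=1.0, nc=5000), 3))
```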

The actual impact of overlapping samples can be illustrated by considering the potential change between two occasions. Suppose that the concern of the survey is to examine the changes in certain demographics of a specific population over a period of time. Suppose that among the demographics of interest are the average age of adults and the average number of children in a family. Suppose, further, that surveys are conducted every five years, that the comparison of interest is from two occasions that are five years apart, and that the survey has been conducted for a specific metropolitan area.

Assuming that the population in the metropolitan area is fairly stable, the overlap between the two occasions should be quite high. Those households that do not participate in the second survey compared to the first are primarily those households that moved away (a small number, assumed in this case) and those households that have dissolved over the period (due to death, divorce, or other causes, also assumed to be a small number in this case). Because people in both surveys will have aged five years between the two occasions, there will be a shift of five years in age, but not much change in the distribution of ages around the mean.


This would imply that, for the characteristic of the average age of adults, the variance on each occasion would be very similar. In addition, the covariance should be similar in value to the variances on each of the two occasions, because there will be a very high correlation between the two occasions. Moreover, if the population is fairly stable, nc will not be much smaller than n. This being the case, the sum of the variances on the two occasions will be only slightly larger than twice the covariance between the two occasions, and, because nc / n will be only slightly less than unity, the entire expression within the square root will be very small. This means that the standard error of the difference in adult average age will be very small.

Suppose that there has been government action over the five years that provided monetary incentives to couples to have more children. As a result of this action, suppose that there are significantly more children in households on the second occasion than there were on the first. In this case, the variance in the number of children in the household may have increased from the first period to the second, and there is a much less strong correlation between the two occasions, meaning that the covariance is quite a bit less than the variance. While the ratio of nc / n is still close to unity, the covariance is much less than either of the variances, so that the sum of the two variances is reduced by perhaps less than half of one of the variances, and the standard error of the difference is quite large.

It should also be apparent that it is quite difficult to conceive of conditions that would give rise to a negative covariance between the two occasions. In effect, this would have to be some characteristic such that large values on one occasion are almost always followed by small values on the second occasion, while small values on the first occasion are followed by large values on the second occasion. This might give rise to a negative covariance, but it is not certain that it will. For this reason, the general statements that overlapping samples should lead to a decrease in standard errors should hold in most practical cases.

14.4 Subsampling on the second and subsequent occasions

In effect, case 3 from Figure 14.1 is a special case of partially overlapping samples. This case usually arises when some members of the initial sample are not able to be included on the second or subsequent occasion, and no attempt is made to add units to the sample to increase the sample size. In this case, n1 > nc and n2 = nc. Writing sx1 as s1, sx2 as s2, and the covariance as s12, equation (14.6) becomes equation (14.7):

$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=\sqrt{\frac{s_{2}^{2}}{n_{2}}+\frac{s_{1}^{2}}{n_{1}}-\frac{2\,s_{12}}{n_{1}}} \qquad (14.7)$$

In surveys of human populations, it is rarely possible to survey all the units from the first occasion on the second occasion. The loss of units from one occasion to another is termed attrition, and this is discussed at more length subsequently in this chapter.


If a survey is designed to be a true panel – i.e., measuring the same units on each occasion – the actual practical result in human populations is almost always a subsample, because some people will move away without leaving forwarding information, some will get tired of taking part and will refuse on the second or subsequent occasions, and others may have died. If no attempt is made to add to the sample to make up for these losses, then the result will always be a subsample on the second and subsequent occasion.

As with the overlapping samples, the additional units in the sample on the prior occasion that are not present on the subsequent occasion may not necessarily improve the standard error of the difference in means. This will depend on whether these units decrease the estimate of variance on the first occasion. This is the only circumstance under which these units will impact the standard error of the difference between the occasions.

14.5 Complete overlap: a panel

Case 4 from Figure 14.1 represents the ideal for situations in which successive samples are taken with the goal of estimating differences between the occasions. In this case, the standard error of the difference in the means between the two occasions is given by equation (14.8):

$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=\sqrt{\frac{1}{n}\left(s_{1}^{2}+s_{2}^{2}-2\,s_{12}\right)} \qquad (14.8)$$

This can also be written in terms of the correlation between the two occasions, as shown in equation (14.9):

$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=\sqrt{\frac{1}{n}\left(s_{1}^{2}+s_{2}^{2}-2\,R_{12}\,s_{1}s_{2}\right)} \qquad (14.9)$$

This makes it clear that, as s1 and s2 become closer in value to one another, and R12 approaches unity, the standard error of the difference will approach zero. On the other hand, the more dissimilar the samples are on the two occasions, the less will be the benefit of a panel, because the correlation, R12, will become very small, and the last term will nearly disappear. Equation (14.9) can also be employed usefully to compare the effects of a panel with non-overlapping samples, given by equation (14.3). Suppose that the variance of the measure of interest is the same on each occasion, and that the sample size is exactly the same on each occasion for the non-overlapping case, as it would be for a panel. In this case, equation (14.3) reduces to equation (14.10):

$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=\sqrt{\frac{2s_{x}^{2}}{n}} \qquad (14.10)$$

With identical variances on both occasions, the panel standard error of equation (14.9) becomes equation (14.11):


$$\text{s.e.}\left(\bar{x}_{2}-\bar{x}_{1}\right)=\sqrt{\frac{2s_{x}^{2}\left(1-R\right)}{n}} \qquad (14.11)$$

It can be seen, therefore, that the standard error of the panel is the square root of (1 – R) times the standard error of independent samples. If R is equal to 0.5, then the reduction in the standard error from a panel is proportional to the square root of 0.5, or about 71 per cent of the independent sample standard error, while, if R is equal to 0.75, then the standard error for the panel is the square root of 0.25 times the standard error from independent samples, which is 50 per cent of the standard error of the independent samples. This demonstrates clearly the advantage of panels when the desire is to measure change between two occasions. It should be noted that the smaller the expected change, and the more such changes are expected to be uniformly distributed through the population, the greater is the advantage of the panel.
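
This reduction factor can be tabulated with a small sketch (the function name is a hypothetical label), giving the ratio of the panel standard error of a difference to the independent-samples standard error, under the equal-variance, equal-sample-size assumptions of equations (14.10) and (14.11):

```python
import math

def panel_se_reduction(R):
    """Ratio of the panel s.e. of a difference to the independent-samples s.e.,
    assuming equal variances and equal sample sizes on the two occasions."""
    return math.sqrt(1.0 - R)

for R in (0.0, 0.5, 0.75, 0.9):
    print(R, round(panel_se_reduction(R), 2))
# prints 1.0, 0.71, 0.5, and 0.32 for R = 0.0, 0.5, 0.75, and 0.9 respectively
```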

14.6 Practical issues in designing and conducting panel surveys

A panel consists of a group of elements that are surveyed on repeated occasions. These elements may be households, people, animals, or inanimate objects. For the purposes of this section, all three cases in which there is overlap – partial overlap, subsampling, or perfect overlap – are considered to be instances of panels. The primary value of a panel is the reduction in the standard error of estimates of differences between the measurement occasions compared to drawing independent samples on each occasion, as shown in the preceding section. Even when a true panel is not achievable, there are usually reductions in the standard error of differences between occasions when either a subsample or a partially overlapped sample is used.

Panel measurement is not possible for a survey that results in a significant change in the survey subjects, nor is it possible if the population is transitory. For example, repeated measurements are not possible when the subject of sampling and measurement is core samples of material that is subjected to permanent deformation or destruction. It also may not be possible when the result of one ‘treatment’ of the panel is to cure a condition that we may be unwilling or unable to inflict on panel members again.

In discussing and describing panel surveys, it is usual to refer to each successive occasion of measurement of the panel as a panel wave. Thus, the first occasion on which the panel is measured would be referred to as the first wave, the second occasion as the second wave, and so forth. The length of time that elapses between one wave and the next is often referred to as the wave interval. It is common in human subjects panels to maintain a wave interval of one year, although some panels will use a shorter interval, and others may use a longer one, depending on the purposes of the panel.

For those cases in which repeated measurement of survey subjects is possible, there are several problems that arise in the conduct of the panel survey. The primary issues that arise are attrition, replacement, conditioning, and contamination.


14.6.1 Attrition

In human panels, in particular, attrition is virtually always a problem. Attrition is the loss of sample from one wave to the next. Usually, there are three principal causes of attrition. The first of these consists of nonresponse, fatigue, and loss of interest. Although survey ethics would normally demand that potential respondents are told at the time of recruitment that they are being recruited to a panel that will entail repeated measurement, some respondents will decide at a certain point in the panel waves that they no longer wish to participate in the panel survey. This may be the result of fatigue at the repeated measurements of the panel, loss of interest in the purposes of the panel, or simply an unwillingness to continue with the original commitment. Panel attrition due to fatigue, loss of interest, and nonresponse can occur as early as the second wave, or may occur only after a number of waves have been conducted.

The second cause of attrition is the death of the subject. Because panel surveys usually take place over several years, for human panels it is inevitable that some panel members will die during the period of the panel. In addition, one can include in this the ‘death’ of a household caused by the dissolution of the household. This may result from the death of a household member, or from divorce, or other causes. However, if the panel survey sampling units are households, this has the same effect as the death of a person when individual people are the panel survey sampling units.

The third cause of attrition is that sample units may become impossible to contact. When the wave interval is a year or longer, it is quite likely that people and households will move, and that forwarding information to the new address has already expired or was never set up. The same may occur when a telephone number changes, for whatever reasons. In some instances, it will be a requirement of the panel survey that panel members reside within a specified region, and some panel members may move outside that region during the period of the panel survey. Such moves, which are rather common in many countries, are a substantial cause of attrition.

In many household and person panel surveys, annual attrition has been found to run as high as 30 per cent, although, typically, attrition is highest between the first and second waves, and decreases through subsequent waves. The reason for this is that, while the incidence of death and moving does not change, the nonresponse component generally reduces rapidly as a subject undertakes additional panel waves. In addition, as panel members remain in the panel for longer periods, it is also more likely that they will voluntarily inform the survey administrators of changes in contact details.

The result of attrition is that the size of the original panel shrinks from wave to wave. If no replacement of panel members is made, then the overall panel will become progressively smaller, and the standard errors of differences between successive waves will become larger. This results in a subsample panel. In some instances, it may be desirable that this form of panel is used, especially if the first wave represents some form of benchmarking measurement, or if some treatment is applied between the first and second waves and it is important for the panel measurement that only those subjects who were measured both before the treatment and in at least one subsequent wave are included in the panel. There are also occasions when a panel has only two waves. In such a case, there will be no advantage to replacing panel members lost through attrition.


In the event that the above conditions do not apply, the more usual procedure is to adopt a process of replacing panel members who have been lost in the second and subsequent waves with new panel members, with the aim of keeping the size of the panel approximately constant. In this case, the panel becomes one with overlapping samples, and this is the correct procedure to use for estimating standard errors of differences between waves. However, there are a number of issues that need to be considered in determining the method of replacement, which is the topic of the next subsection of this chapter.

It should be noted that a true panel is very rare in human populations, as a result of the human condition. Unless the panel is of very short duration and is very fortunate, the death of some panel members (or the dissolution of some households) is virtually inevitable. Moreover, in almost every country around the world, people move from time to time, and the ability to trace people from a previous address or telephone number does not always exist. Hence, even when the survey is such that there is no nonresponse attrition, other factors will almost always result in attrition.

Replacement of panel members lost by attrition

Replacement of panel members who are lost to attrition is more complicated than might be appreciated at first. There are at least three different methods that can be used to replace panel members lost to attrition.

(1) New recruits may be sought who are as similar as possible to the panel members lost by attrition, so that the attributes of the panel are kept as close as possible to being unchanged.

(2) New recruits may be drawn at random from the original population from which the panel was drawn, so that the panel remains representative of the original population as far as possible.

(3) New recruits may be drawn at random from the current population, so that losses through attrition are used to maintain the representativeness of the contemporaneous population. In other words, sampling from the current population will allow the panel to change attributes in a way that is reflective of the changes taking place over time in the population from which the panel is drawn.

The method of replacement will depend mainly on the purposes of the panel survey itself. For example, a medical panel that is intended to determine the effects over time of a certain treatment is most likely to use the first method of replacement – i.e., to seek to maintain the original make-up of the panel. A panel of people or households that is intended to measure changes over time in a specific population is likely to follow the second method of replacement. A panel of people or households that is intended to provide ongoing measurement of a population, including changes occurring from other reasons to the overall population, will probably follow the third method of replacement.

It should be clear that a decision on the method of replacement, assuming that it is decided that replacement for attrition will be undertaken, is a critical decision that should be made as part of the planning process for the panel survey.


This will necessitate clear thinking about the purposes of the panel, in order to be able to determine the most appropriate method of replacement.

Reducing losses due to attrition

Although there is clearly relatively little that can be done to reduce attrition resulting from such natural causes as death and moving away, there are things that can be done to reduce attrition resulting from nonresponse. A good panel design should incorporate steps to minimise the attrition of panel members.

As with any survey of human populations, it may be useful to consider offering an incentive to people to participate. Incentives are dealt with at greater length in Chapter 11. However, in cultures in which incentives may increase response, they are also likely to increase the willingness of people to continue to participate in a panel, if offered for each wave of the panel. As noted in Chapter 11, the most effective incentive is likely to be a small monetary gift that is provided prior to the execution of each panel wave. However, in some cultures, as also noted in Chapter 11, incentives may lead to a reduction in response, so care must be taken to determine whether an incentive is actually useful in each particular instance.

Another method to reduce attrition due to nonresponse is to maintain periodic contact with panel members. A newsletter that is sent to panel members about midway between two waves is a useful method of contact. It reminds panel members of their continued involvement in the panel, and can also be a means of communicating some information about the results of previous waves that may help to emphasise the importance of the panel. It also may provide information that helps to stress the particular value of a person or household remaining in the panel.

A greeting card sent to panel members at the end-of-year holiday season may also be an effective method of maintaining contact with panel members. In surveys in which one of the data items collected is the date of birth of panel members, a birthday greeting may be sent at the appropriate time. There are two or three advantages that accrue from periodic contact in between waves. First, it helps to maintain in the minds of panel members the importance of continuing in the panel, and of the importance of each panel member to the panel survey. Second, for panel members who change contact details, this will often provide a means to obtain new contact information, especially in countries where the forwarding information may remain current only for a restricted period of time. Third, a forwarded item of information may also prompt panel members to contact survey administrators to update their contact information.

A third important element in reducing attrition is to pay attention to the respondent burden inherent in the survey. With modern computing aids in particular, it is relatively easy to design survey forms for second and subsequent waves that provide back to the respondent the information given in the most recent prior wave, and ask the respondent only to correct information that is wrong, and update information that has changed.

Page 374: Collecting managing and assessing data using sample surveys

Practical issues: designing/conducting panel surveys 347

software, provides a major reduction in the burden of subsequent waves. In addition, the fact that such a playback can be provided also helps to convey to the panel mem-bers the importance of each one of them to the survey effort.

Finally, it may be useful to conduct focus groups with a subsample of panel members in between panel waves, with the intention of determining how respondents felt about the tasks required of them in prior waves and offering an opportunity for them to contribute ideas about how to improve the retention of respondents in future waves. The information from such focus groups can be used both to improve retention and to provide interesting information in a periodic newsletter. For those respondents who are invited to or are included in focus groups, this alone is often a strong incentive to continue in the panel. However, since such invitations can usually be extended only to a rather small subsample, the ideas contributed and the newsworthiness of the focus group results are helpful to the entire sample.

14.6.2 Contamination

Panel members become contaminated when the repeated conduct of the survey results in the loss of the innocence or unbiased nature of the responses obtained from respondents. For example, using a panel to test different ways of asking a particular question will result in contamination, because, having once been asked the question in one way, the panel members have already been alerted to what the survey researcher is trying to ascertain, or may have been misled by the previous form of the question. In either case, panel members can no longer provide an unbiased response to subsequent versions of the question wording.

Often, when a panel survey is intended to determine how behaviours change over time, perhaps in response to an external stimulus, contamination will result if panel members become aware of the change that is being sought. For example, suppose a panel survey is proposed about smoking, the effects on smoking of a prohibition of advertisements for tobacco products, and the effects of public service messages that show the harmful effects of smoking. Asking panel members about their smoking habits and also their perceptions of advertisements about tobacco products and anti-smoking public service messages is likely to contaminate panel members, as their smoking behaviour or claims about their smoking behaviour may become contaminated by increased awareness of the publicity connected with smoking.

In general, contamination results from surveys about behaviour, values, attitudes, preferences, and so forth when survey respondents are able to perceive the connections between the questions posed and the results expected by the survey researchers. Such surveys will generally be poor candidates for a panel survey. To prevent contamination, it would be necessary to ‘hide’ either the behavioural questions or the supposed stimuli for behaviour change. This can sometimes be achieved by including a number of questions that seem to be about different, but related, issues and also asking about different behaviours, attitudes, values, etc. In other situations it is simply not possible to avoid contamination, with the result that a panel survey cannot be used.


14.6.3 Conditioning

Conditioning is somewhat similar to contamination in its effects on the survey results, but arises from a different cause. Conditioning may be evidenced in two alternative ways. On the one hand, respondents may become so used to the questions that are posed that they become adept at providing answers that are untrue but reflect what they think the survey researchers want to hear. In general, the issues of affirmation bias, demand characteristics, and social desirability bias are dealt with in Chapter 8 in relation to one-time surveys. These issues are magnified in intensity in panel surveys, in which the repeated nature of the survey can provide a means for respondents to learn more about the intentions of the survey, and as a result provide more incorrect responses. In other words, the repetitive nature of a panel survey can result in a learning process for respondents that, in turn, leads respondents to provide answers that are increasingly less truthful, and increasingly unrepresentative of the population from which the sample was originally drawn.

The second aspect of conditioning occurs when the repetitive nature of the survey actually leads to changes in the behaviour being measured in the survey. In household travel surveys, survey researchers are all too familiar with elderly people who travel very little deciding to reschedule their travel to the day on which travel details are to be completed, so as to have something to report in the survey. Conversely, some respondents who travel a great deal may put off some travel from the travel day to another day, in order to have less to report, thereby reducing the burden of the survey. Such behaviours are not uncommon in these one-off surveys, but may be exaggerated into actual persistent behaviour change by panel respondents that is unrepresentative of the general population. For example, a panel survey about household recycling may induce panel members to be more conscientious in recycling household waste, or even to start recycling when previously they did not. These behaviour changes are more evident in panel surveys, in which the repetitive nature of measurement induces behaviour shifts in the behaviours that are the subject of the panel survey measurement.

When such conditioning effects are likely to occur, the best mechanism to deal with them is to ensure that no panel member remains in the panel for more than a small number of waves. It has been noted in some long-running panels that, if no attempt is made to remove potentially conditioned members, as many as 60 per cent of initial recruits to the panel may still be in the panel after a decade of annual waves. Such a high number of continuing panel members is not a problem if the survey is unlikely to produce conditioning effects, but would result in serious bias in the results for a survey that could have caused extensive conditioning of the respondents.

14.7 Advantages and disadvantages of panels

Successive independent samples provide only limited information about changes over time. For example, successive independent samples of a household attribute such as car ownership may show that there is an increasing level of car ownership over time, but do not provide information about the underlying causes of car ownership increases, nor do they identify whether the increases are occurring in households with previously low car ownership levels, or in households with already high levels. In addition, even to be able to deduce that there have been increases in car ownership requires all successive independent samples to be equally representative of the population from which they are drawn. In the event that there is significant nonresponse in any of the samples, this assumption would have to be called into question.

In contrast, panels reveal the dynamics of change. A good example of this was provided by the Puget Sound Transportation Panel (Murakami and Ulberg, 1997). For some years successive independent surveys had shown little change in the bus ridership of the Seattle region. It was assumed by planners that this meant that there was a core group of the population who were bus riders, while everyone else was a car user. However, several waves of the panel revealed a different story. From the panel survey, it was found that the bus riders were continually changing, consisting primarily of young people who had recently joined the workforce and who remained bus riders for only a few years, until they were able to afford to buy cars, at which point they ceased riding the bus. Thus, the successive independent surveys suggested a constant core group of bus riders, while the panel revealed that there was a constant turnover in riders, most of whom were bus riders for only a short period of time in their lives. The planning implications of such a finding are clearly quite significant. In general, panels are able to reveal changes in the population that are masked by aggregate statistics, and provide in-depth information on the changes in relationships among population attributes over time that successive independent samples cannot.

Panels also offer potential reductions in sample size for given error levels (Stopher and Greaves, 2006). Indeed, Stopher and Greaves show that the reduction in sample size depends very much on the level of correlation between two waves of a panel: sample size can be reduced by as much as 50 per cent from successive independent samples with a 0.5 correlation, rising to as much as 90 per cent if the correlation between the waves is as high as 0.9. Using the Puget Sound Transportation Panel data, Stopher and Greaves show that correlations between successive waves, assuming some external change imposed on part of the population, would probably range from 0.55 to about 0.67, suggesting that panels would result in about a 60 per cent or larger reduction in sample size.
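
The relationship between inter-wave correlation and sample size can be illustrated with a simple calculation. The sketch below assumes the standard result that, for estimating change between two waves with equal variances, the variance of the difference is proportional to 2σ²(1 − ρ) for a panel and to 2σ² for two independent cross-sections, so that, for equal precision, the panel needs only a fraction (1 − ρ) of the independent-sample size; the specific sample sizes used are illustrative only.

    # Minimal sketch: panel sample size needed for the same precision as
    # two successive independent samples, as a function of the correlation
    # (rho) between waves. Assumes equal variances in both waves.
    def panel_sample_size(n_independent, rho):
        """Return the panel size giving the same precision on the estimate of
        change as n_independent units surveyed in each of two independent
        cross-sections."""
        return n_independent * (1.0 - rho)

    for rho in (0.5, 0.67, 0.9):
        n = panel_sample_size(10000, rho)
        print(f"rho = {rho:.2f}: panel needs {n:,.0f} instead of 10,000 "
              f"({rho:.0%} reduction)")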

The disadvantages of panels are primarily in what might be termed the ‘care and feeding’ of the panel members. As explained in the subsection on attrition in this chapter, panels require rather more work to maintain contact, and minimise attrition. It is also likely that panels will require more effort in recruitment than is true of independent samples. The main disadvantages are attrition, contamination, and conditioning, as discussed earlier in this chapter.

14.8 Methods for administering practical panel surveys

A panel survey can be conducted without any major difference in administration from a standard sample survey, other than attention to the issues of attrition and replacement.


If the subject of the panel, or its design, is such as to make contamination improbable, then the panel administration may hold fairly closely to the design of any other sample survey, with the exception that respondents are recruited for a repeated sample. The period over which the panel is to operate and the frequency with which the panel will be measured are decisions that would be made at this point. Other aspects of administration to be considered would be whether incentives are to be offered, and the type of incentive to be used; what contact, if any, is to be made between panel waves; and whether and how replacement is to be undertaken.

There are four types of panels that can be defined, assuming that the panel is drawn from a human population, and that attrition will occur. The first type of panel, as mentioned earlier, is one in which there is no replacement of panel members lost to attrition. This type of panel is called a subsample panel, and will steadily decline in size over time. There is anecdotal evidence from some ongoing panels to suggest that, although attrition in the first year might be as high as 25 to 30 per cent in an annual panel, this may decline successively over subsequent years, to around 12 to 15 per cent between the second and third waves, about 6 to 8 per cent between the third and fourth waves, about 3 to 4 per cent between the fourth and fifth waves, and so forth. This would mean that the final panel size in a ten-year subsample panel held on an annual basis, with no significant attention paid to reducing attrition, would be likely to end at between 40 and 50 per cent of its original size. Any comparisons to be made between the first and the tenth wave would be able to utilise only the 40 to 50 per cent of panel members who remained in the panel for the entire duration of the survey. One other property of the subsample panel that runs over a decade or more is that, although it may remain relatively representative of the population from which it was drawn originally, it is likely to be increasingly unrepresentative of the current population. In some instances this is exactly the result required and expected, while in others it may be contrary to the purposes of the survey.
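
A rough calculation along these lines is sketched below. The attrition rates used are the midpoints of the ranges quoted above; the assumption that attrition settles at about 3 per cent per year after the fifth wave is introduced here purely for illustration and is not a figure from the text.

    # Minimal sketch: size of a ten-wave annual subsample panel, starting from
    # 1,000 members, with wave-to-wave attrition rates taken from the ranges
    # quoted in the text (rates after wave 5 are an assumed 3 per cent).
    attrition = [0.275, 0.135, 0.07, 0.035] + [0.03] * 5   # losses entering waves 2-10

    size = 1000.0
    for wave, rate in enumerate(attrition, start=2):
        size *= (1.0 - rate)
        print(f"Wave {wave:2d}: about {size:,.0f} members remain")
    # Ends at roughly 480 members, i.e. close to half the original panel,
    # consistent with the 40 to 50 per cent range quoted above.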

The second type of panel is one in which the panel members lost to attrition in each wave are to be replaced. This is known as a refreshed panel. Usually, a refreshed panel will have a set sample size, and the replacement samples each year will be intended to bring the panel back to its original size. In practice there is likely to be slight variation in size from wave to wave, simply because the final size may not be able to be determined until after all attempts to remind respondents have been completed and time may have run out to add more members, or the aimed-for sample size may be exceeded slightly. This is the sampling design discussed earlier in this chapter under the terminology of partial overlap or overlapping samples. If the purposes of the panel include estimating panel characteristics from each wave, as well as measuring differences between waves, then replacement samples will be required for each wave after the first. However, if the main purpose of the panel is to measure change, then there is often no value to adding replacement members to the final wave. In fact, in such a case, it is useful to convert the panel from a refreshed panel through the first (m – 2) waves, and then treat the last two waves as a subsample panel, with the replacement sample in the penultimate wave sufficient that the final panel wave can be expected to be of the size of the earlier waves.


An example is useful to illustrate this procedure. A panel is designed to have approximately 650 households in it for a total survey period of ten years. The primary focus of the panel is to measure differences between successive waves and also between waves more than a year apart. In years 2 through 8, replacement members are drawn for each of these waves to maintain the size of approximately 650 households. Over this period it is found that the average annual attrition is 100 households. For the ninth wave, it is desired to oversample, so that, after attrition, there will be 650 households that are common to both the ninth and tenth waves. This means that there need to be 765 households completing the ninth wave successfully, allowing for 115 to be lost between waves 9 and 10, leaving 650 households completing wave 10 that were also included in wave 9. This number is obtained by noting that, if attrition runs at 100 out of 650 households, this is roughly 15 per cent. Thus, the final sample in wave 9 should be 650 divided by 0.85, or 765 households. If the loss from wave 8 is expected to be 100 households, then the additional sample required at wave 9 will be 215 households, meaning that the 650 that responded in wave 8 will be augmented by another 215 households in wave 9. This should yield a final sample in wave 9 of 765 households, and a final sample of 650 in wave 10.
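
The arithmetic of this penultimate-wave oversample is simple enough to set out as a short sketch; the figures below simply restate the example in the text (650 target households, roughly 15 per cent attrition per wave, and 100 households expected to be lost from wave 8).

    # Minimal sketch of the penultimate-wave oversample for a refreshed panel.
    import math

    target_final = 650        # households wanted in both waves 9 and 10
    attrition_rate = 0.15     # roughly 100 out of 650 households per wave
    loss_from_wave_8 = 100    # households expected not to continue into wave 9

    wave_9_required = math.ceil(target_final / (1.0 - attrition_rate))
    continuing_from_wave_8 = 650 - loss_from_wave_8
    additional_recruits = wave_9_required - continuing_from_wave_8

    print(f"Wave 9 completes required: {wave_9_required}")        # about 765
    print(f"Additional recruits at wave 9: {additional_recruits}")  # about 215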

The third type of panel is called a split panel. With a split panel, the main panel survey can be conducted as a subsample or a refreshed panel, but a cross-sectional sample is also undertaken, either in each wave or every two or three waves. The purpose of the cross-sectional sample is to determine changes in the overall population and to benchmark the population against the panel. However, this is a very expensive design, because the cross-sectional survey will require a fairly large sample size in order to provide reliable indications of population attribute shifts. Although the cross-sectional surveys may not need to be used to change anything in the sample, they do allow factors and adjustments to be applied to the results from the panel to correct for deviation from the population.

The fourth type of panel is called a rotating panel. In a rotating panel, the members of the panel are asked to remain in the panel for only a specified period of time, which is shorter than the duration of the entire panel survey. For example, the panel survey may be designed to last for ten years, but each respondent who is recruited is asked to remain in the panel only for three years. This means that one-third of the panel must be replaced each year. Part of the panel loss may comprise normal attrition, while the rest will be made up by retiring panel members. There are two benefits to a rotating panel. The first is that it provides a constant overlap in the overlapping design, while the second is that it reduces the chances of conditioning of panel members.

Suppose, for example, that a panel of 1,000 persons is designed as a rotating panel. The panel survey is to last a total of ten years, and panel members are asked to remain on the panel for four years. Suppose, further, that normal attrition for this panel is about a 20 per cent loss between the first and second years, reducing to 10 per cent between the second and third years, and 5 per cent between the third and fourth years. In the first year of the panel 1,000 individuals are successfully recruited and complete the survey. In the second wave of the panel 20 per cent, or 200 individuals, are lost to attrition. However, to achieve the rotational goals, a further fifty individuals are told that they no longer need continue with the panel, and 250 new recruits are added to the panel. In the third wave 10 per cent of the continuing persons from the first wave are lost to attrition, representing a loss of seventy-five persons. A further 20 per cent of the second-wave recruits are also lost to attrition, representing a further loss of fifty people. This totals 125 lost to attrition. A further 175 individuals are then not continued from the first wave, to achieve the 25 per cent rotation of the original panel members, who are now reduced to 500 in number. This means that the third wave comprises these 500 first-wave members, 200 continuing second-wave participants, and a new 300 who are recruited in the third wave. At the fourth wave twenty-five individuals are lost from the first wave due to attrition, and 225 are rotated out of the panel, so that only 250 individuals from the first wave complete the fourth wave. From the second-wave recruits 10 per cent are lost to attrition, leaving 180 who complete the fourth wave. From the third-wave recruits 20 per cent, or sixty individuals, are lost, leaving 240 from the third wave. This provides a total of 670 individuals, so that a new recruitment of 330 individuals is required in the fourth wave. For the fifth wave the remaining 250 first-wave members are rotated out, having completed their four waves; nine of the recruits from the second wave are lost to attrition, and the 171 individuals who are left complete this wave. From the third-wave recruits twenty-four are lost to attrition, leaving 216 continuing from that wave. From the fourth-wave recruits sixty-six are lost to attrition and 264 continue. This totals 651 continuing participants, requiring the addition of 349 new recruits.

The succeeding waves continue in this manner, so that all the second-wave recruits are rotated out after the fifth wave, and are replaced for the sixth wave, together with attrition replacement for all the newer recruits. Figure 14.2 shows how this rotational panel would evolve over the ten years of the survey.

As can be seen from Figure 14.2, around 20 per cent of the panel remain throughout for four years. This could be adjusted upwards by changing the amount of rotation and adjusting the recruitment in later waves. Because this panel survey was for ten years, all remaining persons are rotated out after the tenth wave. In addition, it should be noted that, if change between successive waves is the major focus of the survey, it would be better to oversample in the ninth wave and add no new members in the tenth wave.
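
The bookkeeping behind Figure 14.2 can be reproduced with a short simulation. The sketch below assumes the attrition rates given above (20, 10, and 5 per cent for a cohort's second, third, and fourth waves), retires every cohort after four waves, tops the panel back up to 1,000 at each wave, and hard-codes the early rotation of first-wave members (50, 175, and 225 in waves 2 to 4) that the example uses to phase the rotation in. Values are rounded, so they may differ by one or two from those in the figure.

    # Minimal sketch of the rotating-panel bookkeeping behind Figure 14.2.
    PANEL_SIZE = 1000
    WAVES = 10
    MAX_STAY = 4                                 # waves a cohort remains in the panel
    ATTRITION = {2: 0.20, 3: 0.10, 4: 0.05}      # loss rate by year of participation
    EARLY_ROTATION = {(1, 2): 50, (1, 3): 175, (1, 4): 225}  # forced exits, cohort 1

    cohorts = {}        # recruitment wave -> current members
    for wave in range(1, WAVES + 1):
        # Age the existing cohorts: attrition, forced rotation, then retirement.
        for recruited in list(cohorts):
            year = wave - recruited + 1
            if year > MAX_STAY:
                del cohorts[recruited]           # rotated out after four waves
                continue
            remaining = round(cohorts[recruited] * (1 - ATTRITION[year]))
            remaining -= EARLY_ROTATION.get((recruited, wave), 0)
            cohorts[recruited] = remaining
        # Recruit a new cohort to bring the panel back to its target size.
        cohorts[wave] = PANEL_SIZE - sum(cohorts.values())
        print(f"Wave {wave:2d}: " +
              ", ".join(f"cohort {r}: {n}" for r, n in sorted(cohorts.items())))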

14.9 Continuous surveys

All the foregoing discussion in this chapter has been concerned with surveys that involve repeated surveying of the same sample. These sampling methods should not be confused with rolling samples, which involve surveying different samples on a continuous basis. Continuous surveys are frequently surveys that take place throughout the year and over a period of several years. An example of a continuous survey is the Sydney Continuous Household Travel Survey (Battellino and Peachman, 2003), which is a survey conducted almost every day of the year and has been ongoing for at least fourteen years at the time of writing. The American Community Survey, a part of the US Census Bureau’s decennial census programme, is another example of a continuous survey.


Frequently, a continuous survey will use a rolling sample. With a rolling sample, the sample for several years of the continuous survey is drawn at one time, and the sample is then apportioned to days, months, and years within the period of two or three years. This design ensures that the same address in address-based sampling, or the same individual in person-based sampling, is not surveyed more than once in a period of several years. It is, in fact, the intent of a rolling sample to avoid surveying the same element repetitively within a defined time period. Because of the normal mobility of human populations, a rolling sample that is defined for a period of three to five years will reduce substantially the chances of duplicate survey measurement on a sampling unit.

Recruitment  Result       Survey wave
wave                           1      2      3      4      5      6      7      8      9     10
1            Panel         1,000    750    500    250
             Attrition            200     75     25
             Rotation              50    175    225    250
2            Panel                250    200    180    171
             Attrition                   50     20      9
             Rotation                     0      0      0    171
3            Panel                       300    240    216    205
             Attrition                          60     24     11
             Rotation                            0      0      0    205
4            Panel                              330    264    238    227
             Attrition                                 66     26     11
             Rotation                                   0      0      0    227
5            Panel                                     349    279    251    238
             Attrition                                        70     28     13
             Rotation                                          0      0      0    238
6            Panel                                            278    223    201    191
             Attrition                                               55     22     10
             Rotation                                                 0      0      0    191
7            Panel                                                   299    239    215    204
             Attrition                                                      60     24     11
             Rotation                                                        0      0      0
8            Panel                                                          322    258    232
             Attrition                                                             64     26
             Rotation                                                               0      0
9            Panel                                                                 336    269
             Attrition                                                                    67
             Rotation                                                                      0
10           Panel                                                                        295

Annual sample size         1,000  1,000  1,000  1,000  1,000  1,000  1,000  1,000  1,000  1,000
Total rotated out              –     50    175    225    250    171    205    227    238    191
Percentage remaining
  for four years               –      –      –     25   17.1   20.5   22.7   23.8   19.1   20.4
Total attrition                0    200    125    105     99    107     94     95     98    104

Figure 14.2 Rotating panel showing recruitment, attrition, and rotation


Continuous surveys can be conducted using any valid method of sampling. The sample may be drawn as a simple random sample, a proportionate or a disproportionate sample, a systematic sample, a cluster sample, a multistage sample, etc. Sampling may become slightly more complex as a result of potential changes in the population over each sampling period – e.g., mobility within human populations means that some selected samples may no longer be available by the time the survey is to be conducted on those samples. However, this requires a standard procedure to be set in place from the outset, to replace any sample members that are no longer available at the time of surveying, in such a way that the underlying method of sampling is not compromised.

There are at least four advantages to continuous surveys. First, in periodic cross-sectional surveys, one of the major problems encountered is the huge budget increase that is required each time the cross-sectional survey is to be repeated. Generally, especially for public agencies, this results in a need to mount a major defence of the need for the survey, and can often leave a survey open to not being approved. A continuous survey, on the other hand, becomes a fixed-budget item that has to be included in the budget every year. As such, it is less subject to intense scrutiny and is easier to maintain over a long period of time. It also typically involves a much lower expenditure than, say, a decennial survey, and is much easier to afford for most agencies.

Second, a continuous survey allows the information base about the population being surveyed to be continually updated. In periodic cross-sectional surveys, the data become increasingly out of date as they age, prior to the collection of a new cross-sectional survey. Again, a decennial survey is a clear case in point. By the eighth or ninth year from when the decennial survey was last undertaken, the data are likely to be quite far out of date, especially if the population of concern is a human population. Indeed, because a large-scale cross-sectional survey may also take two or three years to execute and analyse, the results from the previous survey may be as much as thirteen years old before new data are available to use. In contrast, a continuous survey allows for updating on a continuing basis, so that statistics can be maintained such that they are never more than a year or two old. A continuous survey can also provide rolling means for the population estimates, in which the mean is obtained from, say, the most recent three years of the survey. Such means can then be updated annually, dropping off the oldest year’s data and adding the newest year’s data.
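
A three-year rolling mean of this kind is straightforward to maintain; the short sketch below simply averages the most recent three annual estimates each time a new survey year is completed. The annual figures shown are invented for illustration only.

    # Minimal sketch: three-year rolling mean updated as each survey year arrives.
    annual_estimates = []          # one population estimate per completed survey year

    def add_year(estimate, window=3):
        annual_estimates.append(estimate)
        recent = annual_estimates[-window:]
        return sum(recent) / len(recent)

    for year, estimate in [(2008, 1.52), (2009, 1.55), (2010, 1.61), (2011, 1.66)]:
        print(year, round(add_year(estimate), 3))   # e.g. mean cars per household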

Third, a continuous survey has the benefit of creating a trained survey staff that can be maintained for many years. When undertaking a cross-sectional survey every ten years, it is usually necessary to hire new staff to undertake the survey on each occasion, and significant training must be done to obtain effective field staff and office staff for the implementation of the survey. In contrast, a continuing survey provides continuous employment for survey staff, who will be able to learn from the experience of continuing to conduct the fieldwork and office work. This has major impacts on improving the quality of the survey data. It also provides steady employment for a small cadre of survey staff and may have implications for reducing the unit cost of the survey.

A fourth benefit may arise from being able to reduce the annual sample size. In the case of the Sydney Continuous Household Travel Survey, for example, the target is to interview about 3,300 households per year, so that three-year rolling averages are obtained from about 10,000 households. This also provides, using data from the most recent three years, about 10,000 households for modelling and analysis purposes. In contrast, a periodic survey would probably be required to collect data from 10,000 households in a one-year period.

Continuous surveys are of greatest value when there is a need to maintain up-to-date profiles of a population on a continuing basis. They are not well suited to describing changes in the population, which are more appropriately described by panels. There have been discussions from time to time about the idea of replacing a decennial census by a continuous survey that would cover the entire national population in a period of about ten years. This would entail surveying just one-tenth of the population each year. The gains from such a change in methodology are fairly clear, in the spreading of the costs of the survey and in the currency of the population information. On the other hand, there is no longer a ‘snapshot’ of the population at a specific moment in time, which is the disadvantage of the continuous survey. Moreover, unlike the panel survey, the continuous survey does not provide information on the dynamics of change. Rather, it provides a series of cross-sectional views of the population, but it may mask change, through the extended nature of the sampling and survey measurement.


15 Survey economics

15.1 Introduction

The biggest problem with surveys is their cost. Most clients for surveys operate from limited budgets, such that the budget available for a survey is rarely ever sufficient for the desired accuracy and coverage of the survey (Stopher and Metcalf, 1996). As has been discussed in previous chapters of this book, the art of survey design is one of trade-offs or compromise, and this is certainly the case in the area of survey economics. The specific trade-offs involved are those pertaining to producing the greatest accuracy and coverage for the least cost.

Economics in survey design is a serious issue. There are too many instances of surveys having been commissioned and undertaken with too little attention to the economics of the survey design, resulting in one of two outcomes. Either the data from the survey end up never being used and the money spent is effectively wasted, because the data turn out not to be suited to the task at hand, or the firm undertaking the survey produces data fit for the purpose, but only at the cost of a major loss of money in executing the survey – a situation that has, unfortunately, resulted in the bankruptcy of several survey firms around the world over the years.

There are a number of aspects to a survey, the costs of which may vary substantially depending on the decisions made as to how to undertake the aspects and when trade-offs are presented in terms of coverage, accuracy, and cost. The main areas that are considered in this chapter are: the cost of drawing the sample, including the cost of building or purchasing the sampling frame; the cost of recruiting survey respondents; the cost of surveying respondents; and the cost of data processing, cleaning, and checking. To make the situation a little more complex, the decisions on each of these aspects of the survey are not independent decisions. It is important, therefore, that knowledge is acquired at the outset on the likely costs of different options, so that appropriate decisions can be made on the design in light of available budget and time constraints. There is an entire book (Groves, 2004) that has been written on the topic of costs and the error trade-offs, so the treatment in this chapter will necessarily be somewhat abbreviated.


15.2 Cost elements in survey design

There are a number of components of cost that need to be considered to estimate the total costs of undertaking a survey (Groves, 2004). First, there are the fixed costs that are involved in any survey design. These include the overall management of the survey undertaking, and all those costs that do not vary with the sample size. They may also include the costs of meetings with the survey client, the preparation of the interim and final reports on the survey, progress reporting, and a variety of other tasks that have to be undertaken regardless of method and sample size. However, the fixed costs may vary with the survey method selected.

Second, there are supervision costs, which are always incurred in supervising the overall conduct of the survey. These will vary with the method of surveying the population, and will tend to be lowest for surveys that involve no direct contact between survey staff and the sampled population and be highest for surveys that involve face-to-face contact at various locations, such as homes or workplaces, between the survey staff and the sample members (Schonlau, Fricker, and Elliott, 2002). The supervision costs will depend on the sample size and the method of surveying the population. They consist of the general guidance and direction costs for the survey effort, ensuring that tasks are correctly defined, scheduled, and executed and that all the survey tasks are undertaken in the correct order and at the appropriate time. Supervision costs also include the costs of training the supervisors.

The third element of costs relates to those of monitoring. Monitoring involves the periodic checking of the conduct of the survey by individual survey staff. It usually involves a review of a certain percentage of the work of each interviewer. It also includes the costs of training the monitoring staff. Fourth are the interviewing costs, which cover the actual labour costs of the interviewers themselves and include training time, actual interview costs, the administrative work of interviewers associated with the interviewing, and required breaks and feedback to supervisors. Here again, as Schonlau, Fricker, and Elliott (2002) point out, there are differences between survey methods. Interviewers in face-to-face surveys are often the highest-paid interviewers, followed by telephone interviewers, while those who put together packages for posting to respondents are usually paid the lowest rates. Add to that the fact that the time required for face-to-face interviewing is much greater than that required for telephone interviewing, which is also greater than that required for postal surveys, and the differences between the methods assume much greater significance. However, this is still not the end of the story, because there are also differences in response rates, which determine the number of attempts that need to be made. As discussed in Chapter 6, interview-based surveys generally produce much higher response rates than postal surveys, so that the number of interviews that need to be attempted will be lower for a face-to-face survey than will be the number of postal packages that have to be sent out. On the other hand, face-to-face surveys conducted at homes may require the interviewer to make several attempts to find the appropriate persons at home, incurring travel costs for each attempt, while the additional costs incurred by the telephone interviewer in attempting to call a particular number several times are much smaller, and the cost of sending out postal reminders to those who have not sent back a postal survey will, generally, be even lower.

The fifth element of cost consists of the administrative costs, which include such things as machine maintenance, file manipulation and back-up, and the associated hardware and software costs for the survey. The sixth and final element in most surveys could be described as the devices cost, which includes such things as postage costs, telephone costs, interviewer expenses for kilometres driven and related costs, and also the cost of any special devices required for the conduct of the survey.

Apart from the fixed costs, all the other cost elements described above are a function of the sample size. This leads Groves (2004) to propose a general model of survey costs that is of the form shown in equation (15.1):

C = F + S(.) + M(.) + I(.) + A(.) + D(.) (15.1)

where:

C = total survey costs;
F = fixed costs;
S = supervision costs;
M = monitoring costs;
I = interviewer costs;
A = administrative costs;
D = devices costs; and
(.) = function of sample size.

Each of the above cost components can then be elaborated by breaking down the individual activities involved in the cost. One of the more complex of these is the interviewer costs for an interview survey, which can be considered as including such items as the time spent in training, plus the time spent in making an initial attempt at contacting, the time spent in the initial contact, the time spent questioning the respondent, the time (if appropriate) spent editing the responses of the respondent, the time spent to access initial refusals, the time spent recontacting initial refusals, the time spent questioning initial refusals, the time spent editing responses for initial refusals, the time taken in required breaks, and the time spent in providing feedback to supervisors. All these time elements are then multiplied by the sample size and the hourly rate (including overheads) for the interviewers to produce the total interviewer cost I(.). The costs can be refined further, if there are interviewers assigned to somewhat different tasks and paid at different rates, such that the times are estimated for each type of interviewer and the amount of work to be done by each type of interviewer, and then multiplied by the appropriate rate.

However, the model proposed by Groves (2004) is also incomplete, in that it does not specifically include the costs of drawing the sample. As pointed out by Schonlau, Fricker, and Elliott (2002), among others, there can be major cost trade-offs in designing a sample. Broadly, there are two types of samples that can be drawn: probability samples (which are the type of samples that are dealt with at great length in this book) and convenience samples. The latter are always cheaper to draw than the former. However, as is discussed in Chapter 13, there are also substantial differences in the costs of different probability samples. Depending on the resources available for drawing a sample (such as sampling frames), stratified sampling with variable sampling fraction is probably the most expensive method of sampling, while systematic sampling is usually one of the least expensive. Therefore, Groves’ equation for costs might be extended to equation (15.2), by adding in the cost of sampling, indicated by SM(.):

C = F + SM(.) + S(.) + M(.) + I(.) + A(.) + D(.) (15.2)

It can be seen that each of the cost elements identified in equation (15.2) could be broken down similarly, and the costs of the survey built up from this point. However, to add to the complexity, there are various trade-offs in the decision on the survey design that will affect these costs, and hence the overall cost of the survey. These trade-offs are explored further in the next section of this chapter.
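
To make the structure of equation (15.2) concrete, the sketch below builds total cost up from the sample size, with the interviewer component assembled from per-interview time elements as described above. All the unit costs and times used are invented placeholders rather than figures from the text; only the structure of the model follows equation (15.2).

    # Minimal sketch of the extended cost model in equation (15.2).
    # All rates and times below are illustrative assumptions only.
    def interviewer_cost(n, minutes_per_complete=45, hourly_rate=25.0):
        """I(.): per-interview time elements (attempts, contact, questioning,
        editing, breaks, feedback) rolled into minutes per completed interview."""
        return n * (minutes_per_complete / 60.0) * hourly_rate

    def total_cost(n,
                   fixed=20000.0,           # F: management, reporting, meetings
                   sampling_per_unit=1.5,   # SM(.): drawing or buying the frame
                   supervision_per_unit=3.0,
                   monitoring_per_unit=2.0,
                   admin_per_unit=1.0,
                   devices_per_unit=4.0):   # postage, telephone, mileage, devices
        return (fixed
                + sampling_per_unit * n      # SM(.)
                + supervision_per_unit * n   # S(.)
                + monitoring_per_unit * n    # M(.)
                + interviewer_cost(n)        # I(.)
                + admin_per_unit * n         # A(.)
                + devices_per_unit * n)      # D(.)

    for n in (500, 1000, 2000):
        print(f"n = {n:5d}: total cost of roughly ${total_cost(n):,.0f}")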

15.3 Trade-offs in survey design

As discussed in earlier chapters of this book, in undertaking a human population survey there are a number of options that are available. Households or people can be surveyed by means of a postal survey, in which surveys are sent out and returned by post. They can also be surveyed by means of a telephone recruitment, followed by the posting out of copies of the survey forms. The survey forms may then be retrieved either by post or by a retrieval telephone call, in which the answers to the questions are provided to the interviewer over the telephone, using the posted survey form as a guide or prompt for the answers. Persons may also be surveyed by face-to-face interview at the person’s home, workplace, or other convenient location. Surveys may also be completed by means of the internet, e-mail, fax, or other electronic communication.

To provide a non-exhaustive treatment of these options, some of the major types of survey can be considered. Without pinning down specific costs, which will vary from country to country and from survey to survey, and depend on the actual content and appeal of the survey, certain generalities can still be put forward to describe the trade-offs that are implied. Some costs are provided in these trade-offs, and these represent current costs (as of about 2010) for a survey conducted in a country such as the United States, the United Kingdom, or elsewhere. The costs should be considered more useful as a comparative base between the different trade-offs, rather than as absolute measures. Thus, if one survey is indicated as costing an average of $50 to $100 and another one as costing $150 to $300, the assumption should be made that the second survey costs about three times as much as the first. Costs are given per completed survey (household or person) but do not take into account the response bias due to low or medium response rates. One of the major problems about discussing the costs of surveys is that it is very rare for costs to be reported, and, when they are, it is often unclear what was included in the costing and what was not. Indeed, Groves et al. (2009) point out that relatively few studies report the costs of the surveys in sufficient detail to be useful, and the authors offer only some generalisations, such as the fact that face-to-face interview surveys are usually at least a factor of two more expensive than telephone interview surveys, although they also suggest that the ratio is closer to five or ten to one for national surveys. This is presumably because of the much greater distances that interviewers have to travel between sample respondents in a national survey. A useful table comparing a number of different surveys and showing a portion of the costs incurred in those surveys is provided by the World Health Organization (WHO, 2005). However, it is cautioned there that these are not all comparable costs, because different elements are omitted or included, and the estimates are rough and vary from country to country.

15.3.1 Postal surveys

In postal surveys, one of the larger expenses involved is the purchase or creation of a sampling frame of addresses, especially because such sampling frames are often not available and must be created for each specific survey. A second significant expense will be the creation of the survey instrument. Because a postal survey relies on self-completion, it is important that the design of the instrument be developed with great care and be subjected to a number of tests, to ensure that respondents are able to provide ready answers and clearly understand the questions. This represents a significant expense in the postal survey. However, there are no interviewers in a postal survey, and also, therefore, no monitors of the interviewer work. Instead, there are costs for a staff to address and stuff envelopes, place stamps on them, and put them in the post, and there are very moderate costs for the materials required to post surveys to respondents. Because the surveys are then returned by post, and have been filled out by survey respondents, there will be considerable editing and data entry costs at the return stage.

Although there are exceptions to this rule, it is generally true to say that the response rates to postal surveys are among the lowest that are encountered from any survey method. This means that the achievement of a particular sample size will be more expensive, because more surveys must be printed and posted, more postage must be spent, and a larger sample drawn than might be the case for surveys that produce a higher response rate. In addition, this type of survey is likely to involve the need to undertake a number of reminders, such as sending postcards to remind respondents, and even resending the entire survey package, with the result that additional costs are involved in the reminder process. Moreover, because the response rates are typically quite low for this type of survey (as discussed in the next paragraph), it may be necessary to use incentives to obtain a response from a larger proportion of the population, which entails further expense, both for the incentive itself and for its distribution.

There is also a cost incurred in such a survey that is difficult to calculate, which is the cost of producing a sample that is likely to be compromised in terms of representativeness, because of low response rates. On the other hand, the time spent in producing each survey and sending it out is very small compared to the cost of interviewing respondents. As a general rule, this type of survey is considered to be one of the cheapest. Typical costs may range from as little as $30 per completed survey to as much as $150 per completed survey, with response rates ranging from as low as 10 per cent to as high as 35 or 40 per cent, but rarely higher unless other measures are introduced to boost response rates (Business to Business [B2B] International, 2007).

15.3.2 Telephone recruitment with a postal survey with or without telephone retrieval

Substituting telephone recruitment for an address-based postal survey saves costs on the generation of a sampling frame, because random digit dialling can be used in place of this, but adds significantly to the recruitment costs, because this now involves an interviewer on the telephone, with costs for attempts, contacts, interviewing (albeit a short interview, usually on the order of five minutes), and initial refusal conversion. On the other hand, it may lead to some reduction of nonresponse and more knowledge about those who respond and those who do not, especially if a few demographic questions are part of the recruitment call and can be obtained from households that subsequently refuse to provide an address for the postal survey.

In another variant on the postal survey, the data may be retrieved by telephone, as well as households being recruited by telephone. Again, this assumes a telephone-based sampling procedure, so that phone numbers are available for all members of the sample. In this variant, instead of relying on households to return surveys by post, interviewers call households and ask the same questions over the telephone that were asked in the paper survey sent to the household, usually recording their answers directly into the computer. This adds substantially to the interview cost, especially because some surveys may require more than thirty minutes on the telephone to retrieve the data from a single household. However, this time replaces most of the editing and data entry time of the postal survey, so the total increase in cost is offset somewhat by the reductions in these aspects. Both the quality of the responses and the overall response rate may be improved by this method. In addition, the multiple reminders of the postal survey are replaced by the attempts to contact the household to retrieve the data by telephone.

Overall, the telephone recruitment can be expected to add $10 to $20 to the cost of the postal survey. On the other hand, a telephone recruitment with telephone retrieval, and a posted questionnaire between these two events, can be expected to cost between $90 and $200. Response rates to a telephone recruitment with postal survey to follow are reported as being generally in the range of 20 to 40 per cent, while telephone retrieval boosts these response rates to around 25 to 65 per cent. Therefore, although the costs of the latter survey are significantly higher than a postal survey, the potential response rate is much higher, and therefore the potential sample bias, or lack of representativeness, is much lower.


15.3.3 Face-to-face interview

Replacing the postal survey with a face-to-face interview usually requires that the sampling method reverts to an address-based sample, with the attendant costs of creating a sampling frame, although it is possible to recruit households or persons for a face-to-face interview through telephone contact, or through interception at an appropriate location. In this case, there may be a cost saving associated with the development of the survey questionnaire, which requires development for use by interviewers rather than a self-administered survey form. This may permit some reduction in costs compared to the postal survey. It requires interviewers to travel to the location of the face-to-face interview, typically the homes of respondents, which adds substantial expense, both in terms of interviewer time and kilometrage costs. Interview times are typically longer than for a telephone interview, and the costs of recontacting initial refusals or those who are not available at the time of the initial attempt can be considerable. On the other hand, the costs for reminders are largely removed, and coding and data entry can be reduced substantially if interviewers use a computer-assisted personal interview method.

The response rates from face-to-face interviews are typically the highest of any survey method, and may run as high as 75 to 90 per cent. This reduces substantially the potential for a non-representative sample. The expected cost range for a face-to-face interview is from about $150 to $500 per completed survey, making this also the most expensive of all the survey methods discussed here. In many situations it is the only practical method of surveying a population, while in other situations it may be totally impractical. The quality of the data from face-to-face surveys is usually considered to be the highest.
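
One way to see the trade-off between unit cost and response rate is to compute, for a target number of completed surveys, how many sample units have to be approached and what the total outlay might be under each method. The sketch below uses the rough cost and response-rate ranges quoted in this section (the phone-recruit-with-post-out cost is taken as the postal cost plus the $10 to $20 recruitment add-on); these are indicative figures only, and the calculation ignores reminder, incentive, and refusal-conversion costs.

    # Minimal sketch: contacts needed and rough total cost for 1,000 completed
    # surveys, using the indicative ranges quoted in this section.
    methods = {
        # method: ((cost per complete, $), (response rate))
        "Postal":                         ((30, 150),  (0.10, 0.40)),
        "Phone recruit + post-out":       ((40, 170),  (0.20, 0.40)),
        "Phone recruit + phone retrieve": ((90, 200),  (0.25, 0.65)),
        "Face-to-face":                   ((150, 500), (0.75, 0.90)),
    }

    target_completes = 1000
    for name, ((lo_cost, hi_cost), (lo_rr, hi_rr)) in methods.items():
        contacts = (round(target_completes / hi_rr), round(target_completes / lo_rr))
        print(f"{name:32s} contacts: {contacts[0]:6,d}-{contacts[1]:6,d}  "
              f"cost: ${lo_cost * target_completes:,.0f}-${hi_cost * target_completes:,.0f}")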

15.3.4 More on potential trade-offs

As a general comment, the requirements of the sampling frame offer opportunities for trade-offs, although these are also tied, to a degree, to the sampling method to be used. If a simple random sample is desired, then a complete sampling frame is needed, so that the population can be enumerated for sampling purposes. If a sampling frame is not available, then this may require the use of multistage sampling to draw the sample. Alternatively, a method such as RDD, involving an initial recruitment by telephone, is sometimes a good alternative to the creation of an expensive sampling frame, or the use of multistage sampling. Stratified samples, which can lower significantly the sample size requirements, may impose much greater demands on the sampling frame, so as to achieve the desired stratification. They may also require more supplementary or secondary-source information, which can represent an additional cost, in terms of whether to purchase the secondary data, or simply in processing the secondary data to provide the needed information for the stratification process.

Survey costs also clearly depend on the survey method, with face-to-face interviewing usually being the most expensive, but also the method that usually produces the highest response rate, the most accurate information, and the least sample bias. Telephone, postal, internet, and other survey methods may offer reductions in cost, but usually do so at the expense of representativeness and response rates. Survey methods that allow for the direct entry of the survey responses into a computer, especially when the software for data entry includes the means to check answers for reasonableness and to cross-check answers with those given to earlier questions, can substantially reduce both the data entry costs and the data editing and cleaning costs. Similarly, the ability to train interviewers how to ask questions may be a lower cost than developing a self-administered survey that allows respondents to self-report the same information. However, this will depend greatly, again, on the nature of each specific survey, the extent of the sample, and other factors, such as local salaries and wages and labour availability.

Data cleaning costs are usually lowest for interview-based surveys, especially if computer-assisted, and highest for completely self-administered surveys. Similar relationships exist for data repair, the costs of which can be very high in self-administered surveys if respondents are unsure what the questions mean and provide their interpretation of the questions in their answers. This may require extensive in-depth analysis of survey returns, and may also introduce greater uncertainty into the resulting data.

Supervision and monitoring costs also vary significantly with the survey method. Postal surveys require minimal supervision and monitoring, while face-to-face surveys taking place at respondents’ homes will usually require the highest level of supervision and monitoring. In surveys that are concerned with geographic locations, such as travel surveys, a significant cost is usually added by the need to geocode the data – that is, to change address references to a latitude/longitude coding, or other geographic referencing system. These costs can often be reduced by computer-assisted surveys, especially when a gazetteer can be built into the software that is able to assist the interviewer and respondent to pinpoint geographic locations and simultaneously code them into the desired referencing system.

15.4 Concluding comments

Survey designs offer numerous trade-offs for changing the costs of a survey. It must be stressed that the trade-offs often represent a trade between cost and accuracy or representativeness. In most cases, the less spent on a survey, the poorer will be the quality of the survey, unless the cost reductions are obtained through reducing sample sizes, in which case the generalisability of the findings may be compromised.

It also behoves the survey research firm to have a clear grasp of the costs of undertaking a survey. Failure to grasp some of the cost elements can result in bankruptcy for the survey research firm, or at least a severe loss of money in conducting a survey. Cost models, such as those put forward by Groves (2004), Pettersson and Sisouphanthong (2005), and Yang (2004), among others, are a very useful way to ensure that proper accounting is made of the various costs involved in a survey, and that appropriate trade-offs can be selected to reduce cost without compromising quality. However, it is important to note that most reports on surveys are silent on the costs of the surveys, and therefore the uninitiated need to be careful to put together detailed costings for a survey in the planning stages. There is relatively little material available to guide the estimation of costs, beyond the articles referenced in this chapter.


16 Survey implementation

16.1 Introduction

The preceding chapters have provided guidance on all the aspects of a survey up to the point of actually implementing the survey. In this chapter, a few suggestions and some guidance are provided on the implementation of the survey itself. The chapter begins with a discussion on the selection and training of interviewers and supervisors, then continues with the fielding of the survey, and concludes with discussion about issues that are likely to arise in the execution of many surveys of human populations.

16.2 Interviewer selection and training

16.2.1 Interviewer selection

Somewhat tongue in cheek, it is not uncommon to hear university lecturers talk about using students as interviewers, largely because they are comparatively inexpensive and readily available to most research projects. However, the use of students is often not a good idea for the role of interviewers. All surveys, except fully postal surveys, require interviewers. Even postal surveys require survey staff to pack survey materials and ensure that survey packages are sent out to the correct sample of interviewees. For the next few paragraphs, the focus is on interviewers, rather than other survey staff.

In this day and age of widespread migration and multiracial populations, it is important that the selection of interviewers largely reflects the make-up of the population. In countries in which accents also change from one region to another, interviewers should be selected whose accents match those of the region where the survey is to be performed. The major reason that interviewers should match the racial and regional make-up of the survey region is that this is the most efficient way to create a rapport between the interviewer and the potential respondent. Such a rapport can make the difference between a low response rate and an acceptable one. For example, in a country such as the United States, an African-American interviewer is far more likely to be able to build rapid rapport with an African-American respondent, while a Chinese-American interviewer will be successful in making a Chinese-American respondent comfortable.


Even when the survey is to be conducted by telephone, this racial matching is important, because there are nuances in pronunciation and vocabulary that can readily identify the racial identity of many interviewers to potential respondents. In like manner, regional accents are important, both for the understanding of respondents and interviewers and for building rapport. An interviewer with a British-English accent, for example, may find it difficult to build rapport with a respondent from the deep south of the United States, as the differences in accent between the two will be very marked and may lead to misunderstanding on the part of either the interviewer or the respondent. Of course, in any survey, matching the interviewer to the respondent becomes increasingly difficult as people move into different regions and countries, unless there are naming conventions that are clearly associated with particular racial groups or regional areas. In such cases, it may be best in general to use interviewers who are able to speak clearly and who can readily understand different accents, and use interviewers that match regional or racial characteristics to attempt to convert initial refusals and to deal with other difficult-to-complete interviews.

In some instances, this type of matching may seem to be politically incorrect, but it is a reality of surveys that great care must be taken with the selection of interviewers, so that there is a high probability that rapport can be established from the first contact, rather than that potential respondents become alienated from the outset. In an intercept survey, in situations in which it is safe to do so, greater success may be obtained using female interviewers than male interviewers. The reason for this is that having an unknown male approach potential respondents may be seen by many people as threatening, and lead to avoidance tactics, whereas an approach by an unknown female is more likely to be seen as non-threatening. This is true for male and female respondents alike.

Furthermore, with the increasing globalisation of the population, it may also be desirable that interviewers are multilingual, so that they can readily switch to an alternative language when potential respondents are encountered whose native language is not the same as the prevailing national language. However, care is needed with the use of multilingual interviewers. Interviewers should not be free to translate survey questions into other languages, because there are often nuances in language that are important to be aware of and that can sometimes change the nature of the questions being asked. Therefore, if interviewers are multilingual, in the first instance they may use their language skills to introduce themselves and the survey, and describe the purposes of the survey and the participation sought from the potential respondent. However, the interview itself should be conducted in another language only if a version of the survey has already been developed in the alternative language and the interviewer is fluent in that language and can accept answers in it.

The alternative for multilingual interviewers is that, if the respondent can understand the language in which the survey is prepared, but is more comfortable responding in his or her own language, then a multilingual interviewer can ask the questions in the survey language, but can permit the respondent to give responses in the respondent’s native language. This may often be extremely useful in getting respondents to stay in the sample, when they might otherwise be considered as presenting a language barrier.


For a survey involving face-to-face interviewing, not only is the gender of the interviewer potentially important, but so also is his or her appearance, in terms of dress, personal hygiene, and grooming. In selecting interviewers for face-to-face surveys, the researcher should pay close attention to these aspects of those to be hired to undertake the interviews. For a telephone or other remote interview, the voice quality and vocal presentation are important in the selection of appropriate interviewers. Interviewers also need to be good salespersons, in the sense that they should be outgoing and easily able to engage strangers in conversation, rapidly putting them at ease, and also be able to take rejection without difficulty. Skill in what is known in the marketing industry as ‘cold contacting’ is highly desirable in an interviewer.

As an example of the selection of interviewers, the author was in charge of undertaking an on-board bus survey in Honolulu, Hawaii, in the 1980s. The task of the interviewers in this case was to approach each boarding passenger and attempt to persuade him or her in a few seconds to accept a survey form and a pencil, complete the survey, preferably during the bus trip, and return the survey to boxes provided on the bus before leaving the bus. For those travelling only short distances or boarding crowded buses on which they had to stand for their journey, the option was provided to mail the completed survey back. In Waikiki, the major holiday area of Honolulu, there are individuals who walk the pavements (sidewalks) handing out advertisements to tourists for various attractions, such as restaurants and bars. To find on-board bus interviewers, I observed these individuals and selected those who seemed most successful at getting tourists to accept the brochures they were handing out. These individuals were approached and offered the opportunity to work on the survey. Most of the successful individuals were young females. For the on-board bus survey itself, these female interviewers were asked to dress in a long muu-muu (a long, flowered Hawaiian dress), which, as well as being attractive and practical in the climate of Hawaii, made the interviewers seem to be part of the island culture. These interviewers proved to have a high rate of success in persuading boarding passengers to take and complete survey forms.

Others have reported successful surveys using insurance salespeople, car salespeople, and other similar individuals as the interviewers in both telephone and face-to-face surveys. Such individuals are usually used to rejection and do not take it personally, are persistent, but understand the limits of persistence, and are readily able to engage strangers in conversation from a cold contact.

For postal surveys, there is no need for interviewers, but there is a need to hire people to put together survey packages and ensure that these are correctly addressed and have the appropriate postage applied for being sent out. This requires a willingness to undertake repetitive work, with accuracy and care, and without becoming bored and inattentive as the work proceeds. Again, care is needed in selecting the appropriate people to staff such a survey. This is especially important in a household postal survey, in which each person in the household is to receive a survey, so that packages have to be made up with the correct number of survey forms for each specific household. In such cases, the postage will also vary from one household to another, and due care and attention is essential to execute the survey properly.

16.2.2 Interviewer training

Interviewers and survey staff must always receive training on each new survey, even if they are seasoned survey workers who have already worked on many prior surveys. Training should involve several activities:

• explanation of the survey purpose;
• review of the survey instruments;
• explanation of frequently-asked questions (FAQs) and their answers;
• for interview surveys, role play as both an interviewer and a respondent; and
• response to all questions from survey staff or interviewers.

For surveys requiring survey staff or interviewers, the first thing that should be done in the area of training is to develop a survey manual. The manual should contain all the material that any survey staff person or interviewer might need to know and should duplicate the material that will be covered in the training sessions. It should also include the purposes of the survey, FAQs and the responses to them, and instructions to interviewers about such things as clothing, grooming, appearance, behaviour, responsibility, etc. A good interviewer manual may be a fairly substantial document, if prepared appropriately.

In the role play, each interviewer should interview another interviewer, who plays the role of the potential respondent. The roles should then be reversed. If the interview is to be face to face, then the role play should be done face to face. If the interview is by telephone, then the role play interviews should be conducted by telephone, without eye contact or the ability to see the other interviewer, so that the actual survey can be simulated and the interviewers can get the feel of the way in which the interview will take place. After a role play session, interviewers should provide feedback to the trainers, and modifications may be recommended. Often, it will be appropriate for the role plays to be repeated after further reinforcing instruction has been given to the interviewers.

A very important part of training is provided by the pilot survey and any pretests. In the pilot survey, as many of the final interviewers as possible should be used, so that there is an opportunity for the interviewers to experience the actual fieldwork conditions, albeit without the same pressures as will exist in the main survey. Furthermore, the feedback from the interviewers after the pilot survey will often provide valuable input into needed modifications in the survey procedures, instrument, etc.

Some interviewers may show a particular aptitude for converting potential refusals. When this ability is noted, it is useful to assign such individuals to soft refusal conversion, if this is an appropriate procedure in a survey, and also to have such individuals otherwise available to deal with potential respondents who are more hesitant about getting involved.


16.2.3 Interviewer monitoring

For all surveys that use interviewers, the work of the interviewers should be monitored throughout the survey. Monitoring consists of the quality control checking of the work of each interviewer. As a general rule of thumb, about 10 per cent of the work of each interviewer should normally be monitored directly, with this percentage being increased in cases in which there is some concern as to the performance of a specific interviewer.

Basic monitoring consists of maintaining continuous records of the response rates achieved by interviewers, the time taken per completed interview, the refusal rate, the termination rate, the eligibility rate, the distribution of key variables from the completed interviews, and specific issues and concerns raised on the work of each interviewer, such as the number of errors found in the results of interviews, the number of complaint calls received for individual interviewers, and other indicators of interviewer performance. These statistics should be collected on a continuing basis, throughout the survey. Running averages should also be computed from the statistics, and the work of each interviewer should be compared to the averages. Any interviewer performing at levels that are significantly different from the average should be monitored more carefully. This applies both to interviewers performing at better-than-average and worse-than-average rates.

However, interviewers who consistently perform with below-average response rates and above-average times per interview, or receive above-average complaints, should be monitored much more carefully, to determine if further training is required, or if the interviewer is simply not capable of performing at the levels attained by most interviewers. Detailed monitoring of such interviewers’ work is imperative.
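As an illustration of how such comparisons might be compiled, the short sketch below flags any interviewer whose running response rate or time per completed interview differs from the average across interviewers by more than a stated tolerance. The interviewer numbers, the tallies, and the 20 per cent tolerance are invented for illustration only and are not prescribed values.

from statistics import mean

# Hypothetical per-interviewer tallies, compiled daily from the survey records.
interviewer_stats = {
    "01": {"attempts": 420, "completes": 118, "minutes_per_complete": 22.5},
    "02": {"attempts": 395, "completes": 74,  "minutes_per_complete": 31.0},
    "03": {"attempts": 410, "completes": 109, "minutes_per_complete": 23.8},
}

def flag_for_monitoring(stats, tolerance=0.20):
    """Flag interviewers whose running response rate or time per completed
    interview differs from the average across interviewers by more than the tolerance."""
    rates = {k: v["completes"] / v["attempts"] for k, v in stats.items()}
    times = {k: v["minutes_per_complete"] for k, v in stats.items()}
    flags = {}
    for metric, values in (("response rate", rates), ("minutes per complete", times)):
        average = mean(values.values())
        for interviewer, value in values.items():
            if abs(value - average) / average > tolerance:
                flags.setdefault(interviewer, []).append(
                    f"{metric} {value:.2f} against a running average of {average:.2f}")
    return flags

for interviewer, reasons in flag_for_monitoring(interviewer_stats).items():
    print(f"Monitor interviewer {interviewer} more closely: " + "; ".join(reasons))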

Detailed monitoring for telephone interviews is usually done using special monitoring stations, which allow the supervisor to listen in to the telephone interview and also to see what is being recorded on the computer, if the survey is a computer-assisted telephone survey. For a face-to-face survey, supervisors should sit in on a proportion of interviews by each interviewer and should also conduct follow-up or validation surveys. In both telephone and face-to-face interviewing, it is this detailed sitting in on the interviews that should be undertaken on approximately 10 per cent of the work of each interviewer.

Validation or follow-up surveys are usually undertaken for face-to-face surveys, but can also be carried out for telephone surveys. A small percentage of the total sample should receive such surveys. The validation survey usually consists of recontacting the household or person that was interviewed and asking questions to confirm that the interview was indeed conducted, and then asking for validation of a few selected items of information collected in the original interview. These might consist of questions such as the number of people in the household, the number of children and number of workers present in the household, or certain person demographics. In these validation or follow-up surveys, it is best to avoid asking questions that may be considered more sensitive, such as questions about income.

It is also very important to stress that the assessment of interviewers and other survey staff must be done objectively and according to very strict standards. It is very easy for one interviewer who is not performing appropriately to cause untold damage to a survey. Therefore, survey staff should be warned at the time of hiring that their work will be monitored and that they will be let go immediately if their work is assessed as being below expectations. Survey administrators must then be ruthless in their assessment of survey staff and in making timely decisions to fire poorly performing surveyors. The cost of maintaining a poorly performing survey person is not just in the lost surveys from that individual, but is also likely to be in the spread of information through the target population about problems with the survey staff. News media will take any opportunity to discredit a survey if information becomes available about problems with survey staff and their interaction with the public. Once reports of poorly performing surveyors reach the news media, the entire survey may be discredited, and response rates may fall precipitously.

16.3 Record keeping

One of the most important aspects of any survey is good record keeping. Without good record keeping it is not possible to be sure that the sample is adhered to, and there is a danger, especially in human population surveys, that some sampled units may be missed or that, after interviewing a sampled unit, an attempt at a second interview is made, perhaps by a different interviewer. Record keeping means, first, that a detailed database is kept that shows the disposition of each and every sampled unit, including the date and time of each attempt made to contact and perform the survey, the outcome of each attempt to contact, the interviewer attempting the contact (when relevant), the assigned identity number for the sampling unit, and contact details for the unit. This is important in all surveys, no matter how conducted, except for intercept surveys, in which the sampling units are not identified until the contact attempt is made. In such surveys the record keeping will differ, in that it should show the number of attempts made in a given time period, the total number of population members that could have been attempted (when this is being counted), and the outcomes of the attempts to contact. Again, it should specify the names of interviewers at each location where the intercept is being undertaken.

Essential elements of record keeping are the following:

• which sampling units are scheduled for contact on each day of the survey;
• which sampling units are contacted successfully;
• the date of successful contacts;
• which sampling unit contacts result in hard refusals, soft refusals, terminations, or the establishment of ineligibility;
• the date of unsuccessful contacts;
• which sampling units require to be recontacted, and when recontact should occur;
• which interviewers are used on each contact;
• the time required to complete the survey for each successful contact and the time required to reach an unsuccessful contact completion; and
• any other relevant information on each contact attempted.
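By way of illustration only, the following sketch shows one way in which a single contact-attempt record covering these elements might be structured; the field names and example values are hypothetical and would be adapted to the survey at hand.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ContactAttempt:
    sample_id: str                               # assigned identity number of the sampling unit
    scheduled_date: str                          # day on which the contact is scheduled
    attempt_time: Optional[datetime] = None      # when the attempt was actually made
    interviewer_id: Optional[str] = None         # interviewer making the attempt (when relevant)
    outcome: str = "pending"                     # e.g. complete, hard refusal, soft refusal,
                                                 # termination, ineligible, no answer
    recontact_after: Optional[datetime] = None   # when any recontact should occur
    minutes_taken: Optional[float] = None        # time to complete, or time spent on an unsuccessful attempt
    notes: str = ""                              # any other relevant information

# One row is appended for every attempt, so the full contact history of each
# sampled unit can be reconstructed and response rates computed later.
contact_log = []
contact_log.append(ContactAttempt(
    sample_id="140276",
    scheduled_date="2011-05-03",
    attempt_time=datetime(2011, 5, 3, 18, 40),
    interviewer_id="14",
    outcome="soft refusal",
    recontact_after=datetime(2011, 5, 10, 18, 0),
    minutes_taken=4.5,
))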


Statistics compiled from the database will be needed to calculate response rates, as discussed in Chapter 20, as well as other indicators of the quality of the survey and the assessment of individual interviewers and other survey staff. In postal surveys, it is important to keep records to know on what date packages are mailed to specific addresses, and when reminders are sent, as well as showing the type of reminder. Results from the mailing, such as packages returned by the Post Office, also need to be recorded in the database.

When equipment of some type is used in the survey, it is again necessary to keep accurate records of what equipment is sent out and to whom and when, and when equipment is returned. Any malfunctions of equipment should be noted, as well as some indication of the results of the use of properly functioning equipment. To this end, it is important that, prior to the start of the survey, each piece of equipment is assigned a distinct serial number, so that it is possible to keep track of each item.

Depending on the complexity of the survey and the analyses that are to be undertaken from the database, there are many options in today’s computer environment for establishing the database. It may be developed in a simple spreadsheet programme, in a statistical analysis programme that uses and allows entry into a database, or in dedicated database software. The more complex the analyses to be undertaken, the more should be the inclination to use a dedicated database programme that can support a wide variety of user queries. Records should normally be updated at a specific time on each day of the survey. For interview surveys that may utilise the evening hours for contacting and questioning respondents, the database may be updated on the following morning, by compiling information from all that happened on the previous day. For postal surveys, the database may be updated each day after packages have been delivered to the postal system. For Web surveys, the updating should be done at a specified time each day, perhaps around noon, because it may be that this is the time of least activity on the website; alternatively, it could be done at some other convenient time, when it is found that the number of in-progress surveys is relatively low.

As is clear from the discussion of computing response rates in Chapter 20, it is important that the records permit the determination of the number of ineligible units in the drawn sample, the number of units whose eligibility cannot be determined, and the number of eligible units, with the response, refusal, and termination rates for the eligible units. In addition, the record keeping will permit the appropriate monitoring to be undertaken of the work of interviewers, as outlined in subsection 16.2.3. In postal surveys, the records should also indicate the date on which packages are mailed and the date on which each complete response is received back. For interviewer surveys, the time taken to interview the sampling units should be included in the database.

The database will also find application in writing up a final report on the execution of the survey. Descriptive statistics from the database provide important information to the client for the survey and to the public about the conduct of the survey. In addition, much of the information contained in the database will be of use to survey researchers in improving survey techniques in the future. Thus, a well-constructed and well-maintained database is essential to the reporting and improvement of surveys.

16.4 Survey supervision

All the foregoing discussion of interviewer selection, training, and monitoring, and the issue of record keeping, means that the selection and use of appropriate supervisors is mandatory for a successful survey. In a survey research firm, supervisors will often be chosen from seasoned interviewers who have proved their abilities over many previous surveys as competent interviewers. In other instances, people may be hired specifically as supervisors. In either case, it is as important that the supervisors are trained as it is to train the interviewers. All aspects of training for interviewers should also be given as training to supervisors, with the addition of training in the specific duties of supervision, and the expectations of the outcomes from supervision.

Supervisors will usually require more detailed knowledge about the purposes of the survey and the way in which the survey has been designed. Supervisors are the main point of contact with interviewers and other survey staff and are essential to the achievement of a high-quality outcome for a survey. Supervisors may need to be skilled in undertaking interviews themselves, so that they understand the challenges and difficulties faced by interviewers. They also need to be able to understand the expectations of performance on the part of the interviewers, so that they are able to pinpoint failures and undertake on-the-spot retraining, as may be needed.

It may be useful to have supervisors undertake a proportion of the pilot survey interviews, to provide them with first-hand knowledge of the survey. Supervisors can then be responsible for much of the training of interviewers.

Top-quality supervisors are as important to the success of the survey as are top-quality interviewers/survey staff. If supervisors fail to detect poor performance on the part of survey staff, then the survey is, again, jeopardised. Supervisors will normally each be assigned responsibility for a subgroup of survey staff, and it will be their job to monitor the interviewers in their subgroup and to assist in compiling the daily statistics that are entered into the survey database. Diligent and detail-orientated people are essential for supervisory staff.

Supervisors should, when possible, conduct a daily debriefing with each interviewer or survey staff person in their subgroup, using this as an opportunity to gather some of the statistics needed for the survey database. This debriefing is also a method of detecting problems, either in the performance of specific survey staff or in the design of the survey itself. This should then lead to immediate correction, retraining, or suggestions for design modification.

Survey managers should, in turn, debrief supervisors on at least a weekly basis, and probably more frequently in the pilot survey and in the opening days of the main survey. As with the debriefing by the supervisors, the debriefing of the supervisors will help to identify problems among the survey staff and in the execution of the survey itself, and may lead to decisions to retrain or to let go certain survey staff, or to redesign certain elements of the survey process, especially in the pilot survey.

16.5 Survey publicity

An important part of almost all surveys is publicity regarding the survey. In general, publicity that a survey is being undertaken and about its main purposes is found to increase response rates and recognition of the survey itself. Surveys by and for private firms are not generally publicised, but surveys by and for public agencies are frequently publicised, and should be. The media through which publicity should be given will depend on the population that is to be the subject of the survey. For public agency surveys, publicity can often be provided through free public service announcements in newspapers, on radio, and on television. For national surveys, the appropriate media would be national newspapers, radio stations, or television stations, while, for regional and more local surveys, regional or local media would be the most apposite.

Pre-survey publicity usually involves announcements or news items describing the upcoming survey, its purposes, who is conducting it, and for whom it is being conducted. Other information as appropriate may be added to this basic information. Following the pre-survey publicity, further news items can be provided to the media that say something about how the survey is progressing, that may relate interesting anecdotes about things that have happened in the survey, especially when these are amusing but are also positive in respect to the survey, and that may keep people interested in the survey and convinced that it is worthwhile to respond, if approached. It is recommended that information about incentives or rewards that are offered to those completing the survey is not included in the publicity material, because doing so is likely to open a floodgate of people requesting to take part in the survey. If the sample is a random sample, then this will be completely unproductive. Only in the event that a non-random sample is to be used, and the publicity is seen as a means to recruit the sample, should this be done.

In addition to the media publicity, an important element of publicity is the provision of a toll-free telephone number and, when appropriate, a website, to which people can go to find out more about the survey, or raise specific questions about it. It is not necessary that the toll-free phone number be manned all the time. An alternative is to have the toll-free number connect to a voicemail facility. In such a case, the messages should be retrieved on a daily basis, and all messages should be answered within twenty-four hours on weekdays, and by the next working day for weekend calls.

Other types of publicity may include flyers and other advertisements that can be provided through door-to-door delivery, postal services, or on billboards and other advertising media. However, the publicity should be in keeping with the importance of the survey, especially from the viewpoint of potential respondents. Publicity that is beyond what might normally be expected, given the nature and importance of the survey, may send a message to the public that there is something suspicious about the survey, and have the opposite effect to that desired.


16.5.1 Frequently-asked questions, fact sheet, or brochure

Another important aspect of the interface with the public is the preparation of a list of frequently asked questions, which can be provided on a website of the survey as well as printed and provided with paper survey materials as a fact sheet or survey brochure. In carrying out the design of the survey, researchers should be particularly attuned to things that the public may have questions about, including such questions as how and why the respondent was selected, why it is important for the respondent to provide a complete response, why certain questions are asked in the survey, why the survey is being carried out, for whom the survey is being undertaken, how information from the survey is stored, how information from the survey is published or released, why questions are asked about such things as income and the names of persons in the household, how to get help for answering questions, and so forth. These and other questions that may arise, especially during the pilot survey, should form the backbone of the FAQs, to which detailed answers can be generated.

Although a number of these questions may be answered briefly in the pre-notification letter (see Chapter 11), respondents may have destroyed the letter by the time they receive the actual survey, or may be unable to find it again, and the answers in the letter are usually very brief, whereas the FAQ list allows a more detailed response to be given. Further information on the confidentiality of responses can be provided in the FAQ list, along with many other questions that respondents may raise or that appear likely to offer explanations that may increase response.

16.6 Storage of survey forms

It may seem almost a trivial matter to raise, but it is extremely important to ensure that survey forms are stored correctly. First, there are the printed, but as yet unused, survey forms. These need to be stored in such a way as to be readily accessible for the survey. If forms are numbered sequentially, then the numbering should be apparent in storage, so that appropriate forms can be taken in order. Planning the ‘survey room’ ahead of the survey is an important activity of the survey preparation, so that materials are ready to hand and easily marshalled for the survey activities. Second, and perhaps even more importantly, in surveys in which the product is a completed form or forms, appropriate storage for these forms must be organised prior to the survey, so that forms can be stored temporarily or permanently in a way that makes them easily accessible for data entry, visual checking, cross-checking with electronic data, etc. When the product of the survey is electronic data, stored directly to a computer server or hard drive, or on temporary storage, the specific ways in which the data are to be stored – e.g., specifying the folders in which files are to be stored, specifying file names for the data, etc. – are also extremely important.

An important aid to the correct storage of data is the use of unique identification numbers for each respondent or potential respondent. Defining how the identification number is to be constructed, and ensuring that it is correctly registered on all materials from a specific respondent, are of utmost importance in the survey administration process.

16.6.1 Identification numbers

There are many possible ways in which identification numbers can be constructed and applied (Stopher et al., 2008a). The most important thing is to decide how this is to be done prior to the commencement of the survey and then to make sure that this is done correctly throughout the fieldwork portion of the survey activity. The first issue with identification numbers is whether to assign these to all sampled units or just to completed units. In postal surveys, it is most usual to assign identification numbers to each package that is sent out. This allows easy identification, if needed, as to which surveys have been returned.

The simplest form of identification number is simply a consecutive number, beginning at 1 and rising to the total number of respondents, or the total number of attempted contacts. If, for example, a telephone survey is to be carried out in which it is expected that 5,000 telephone numbers will need to be called, an identification number could be pre-assigned to each telephone number, beginning with 0001 and ending at 5000. Alternatively, if it is found that 3,000 of these telephone contacts result in an eligible contact, then the identification numbers could be assigned from 0001 to 3000 and assigned only to the eligible phone contacts. It is important to note that, if any analysis is to be undertaken with refusals and terminations, these will also generally require identification numbers to be assigned.

The alternative to a simple consecutive numbering system is to include other information in the identification number. Normally, this will be possible only when identification numbers are assigned after the initial contact is attempted or made. A very simple procedure is to use the first two or three digits to indicate the interviewer who made the contact, and then to increase the identification numbers consecutively for the remainder of the number. Suppose that a survey is being executed with twenty-five interviewers. Each interviewer is assigned an interviewer number from 01 to 25. For interviewer 14, making his or her 276th contact, in a survey in which each interviewer is expected to make upwards of 1,000 contacts, but fewer than 10,000, the identification number for this contact by this interviewer would be 140276. This numbering system allows interviewers to assign the identification numbers at the time that contact is established and provides a very easy way of segmenting the survey results by interviewer.

A more complex system can be used if a stratified sample is drawn, in which additional digits indicate the stratum in which the sampled unit falls: the first two or three digits might again indicate the interviewer, and the next two to four digits the stratum to which the response belongs. Suppose a sample is to be stratified by household size (with the range being from one to five or more, in increments of one person) and by number of workers in the household (with a range from zero to four or more). As before, there are twenty-five interviewers being used and we wish to allow for up to 9,999 contacts per interviewer. Now the 276th contact made by interviewer 14, which turns out to be a household with three persons and one worker, receives an identification number of 14310276. Immediately, this tells the survey management that this is the 276th contact made by interviewer 14 and it had a household size of three with one worker present in the household. This provides a very useful method for being able to check very quickly how many samples have been drawn for each stratum in the sampling scheme. However, if this is the primary purpose of including the stratum numbers in the identification number, it is likely to be better to have them appear as the first two digits in the number. In this case, the specific response will then be numbered 31140276. Then, sorting all the identification numbers on the first two digits allows a quick tally to be obtained of the number of contacts made in each stratum of interest.
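The following sketch illustrates the stratum-first variant of this numbering (stratum digits, then interviewer, then a running sequence), together with the quick tally by stratum just described. The digit widths follow the example above, but they are illustrative rather than a fixed standard, and the list of identification numbers is invented.

from collections import Counter

def build_id(hh_size, workers, interviewer, sequence):
    """Stratum-first identification number: e.g. household size 3, one worker,
    interviewer 14, 276th contact -> '31140276'."""
    return f"{hh_size}{workers}{interviewer:02d}{sequence:04d}"

def parse_id(id_number):
    """Recover the components of a stratum-first identification number."""
    return {
        "household_size": int(id_number[0]),
        "workers": int(id_number[1]),
        "interviewer": int(id_number[2:4]),
        "sequence": int(id_number[4:]),
    }

assert build_id(3, 1, 14, 276) == "31140276"

# Quick tally of contacts per stratum from a list of identification numbers:
ids = ["31140276", "31140277", "20050012", "54250931"]
print(Counter(i[:2] for i in ids))   # e.g. Counter({'31': 2, '20': 1, '54': 1})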

Another possibility is to build into each identification number the date on which the survey is completed or is sent out. This can be very helpful for postal surveys, for which identification numbers would normally be pre-assigned. In this case, suppose this is a postal survey in which approximately 300 packages are to be sent out each day, with the beginning date being 9 September and the ending date being 10 October. Using the ddmm date format, the 207th package to be sent on 24 September would have the identification number 2409207. This is very useful for checking how long it takes for respondents to return their surveys. For example, if this particular package is returned on 17 October, it is immediately clear that it took twenty-three days from posting the package to its return. Embedding the date of initial posting in the identification number also makes it much easier to keep track of how long returns take and, if it is desired to send out reminders at specific time intervals, which packages need reminders on a specific day. Of course, this does require a record to be kept of the address that is associated with each identification number. For this same survey, if it is desired to send out a postcard reminder seven days after the initial mailing, then, on 1 October, all those addresses with an ID number beginning with 2409 that have not yet been received back should be sent a postcard.
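A small sketch of how the date prefix can be used to select the packages due a reminder is given below; the dates follow the example above, but the identification numbers and the seven-day lag are invented for illustration.

from datetime import date, timedelta

def reminder_prefix(today, lag_days=7):
    """Return the ddmm prefix of packages mailed lag_days before today."""
    mailed = today - timedelta(days=lag_days)
    return f"{mailed.day:02d}{mailed.month:02d}"

# Packages sent out but not yet returned (identification numbers are invented).
outstanding_ids = ["2409207", "2409051", "2509118"]

prefix = reminder_prefix(date(2011, 10, 1))          # -> '2409'
to_remind = [i for i in outstanding_ids if i.startswith(prefix)]
print(to_remind)   # ['2409207', '2409051'] would be sent a postcard on 1 October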

As can be seen with these illustrations, there are numerous possible ways to construct identification numbers that can provide ongoing management information directly. It is necessary to decide in advance of the commencement of the fieldwork what information it is desired to make readily available through the identification number, and then to set up the construction of identification numbers in that manner. There is one further issue that requires noting. Some computer-assisted survey programmes automatically generate identification numbers. Unfortunately, with some of these, it will be found that a new identification number is generated each time a contact is made, irrespective of whether the number has been contacted previously in the survey. There are two problems with software of this type. First, it imposes a simple consecutive numbering on all contacts and does not permit the survey researcher to construct a customised number. Second, when multiple contacts are made to the same number, a new identification number is created on each occasion, which is generally undesirable. It is suggested that potential users of CATI software become knowledgeable about this aspect of the software they intend to use.


16.7 Issues for surveys using posted materials

Although some of these issues are also discussed elsewhere in this book, it seems useful to review and list here some of the important issues for postal survey materials. The appearance of the material that is sent out by post is very important. To attract potential respondents to take the postal materials seriously, and potentially to fill out the enclosed survey forms, the appearance is critical. Because so much ‘junk mail’ is sent to homes in many countries today, it is important that the appearance of the survey materials does not resemble junk mail. To this end, the following are suggested to provide a good and professional appearance for the survey, based on Dillman (1978, 2000), as well as on the author’s experience (Stopher et al., 2008a).

(1) Use large white envelopes, not coloured envelopes, and have a return address clearly printed on the outside of the envelope.

(2) Print the address of the recipient directly onto the envelope, and do not use address labels. A print face that resembles cursive writing, or some other typeface that is significantly different from standard typefaces used on bulk mailings, will also help.

(3) The use of postage stamps on the envelope, rather than bulk mail permits or pre-printed postage, is very important. If available, commemorative stamps or other stamps that have an appealing look to them will further enhance the appearance of the package.

(4) Identify on the outside of the envelope that this is the survey, giving the survey name that was used in any prior contact, such as a pre-notification letter or telephone recruitment.

(5) For surveys requiring postal return, there should be a stamped addressed envelope enclosed with the survey materials. This envelope might have printed on the back a list of instructions as to what is to be placed in the envelope and what needs to be done with those materials, unless local postal regulations prohibit such matter from being printed on the outside of an envelope.

(6) When postal regulations require it, there should be a statement on the outside of the envelope to indicate that the package is to be returned if undeliverable. If it is important that the package be delivered only to the printed address and no other, then a further statement of ‘Do not forward’ should also be printed on the envelope in the appropriate place.

16.8 Issues for surveys using telephone contact

There are three additional issues that should be taken into account in the implementation of the survey when the telephone is used for any part of respondent contact. The first of these relates to caller identification in countries where the telephone system provides this option. The second relates to encountering answering machines, and the third relates to repeated requests for a call-back at a later time.


16.8.1 Caller ID

In an increasing number of countries, it is possible to have the identification of the caller displayed on the telephone at the time that the call comes through to a person’s telephone. This is also known in some countries as called line identification, or caller display. In a recent paper, Kempf and Remington (2007) note that the prevalence of caller ID on telephones in the United States has been increasing much more rapidly than answering machines have, and that about 50 per cent of US households had caller ID by around 2001. It is common that the telephone lines used by calling centres for surveys do not display the identification. Rather, what may be displayed is the message ‘Out of area’, or ‘Unknown listing’, or ‘Private number’. Increasingly, people are able either to block automatically any incoming call that has no identification attached to it or to decide not to answer such a call, upon noting the display. Therefore, the lack of a displayable identification from the telephone lines used to make survey calls may increase nonresponse to the survey to a significant degree.

It is recommended that an effort be made to ensure that there is a valid display of information on the originating number for the call. It may be as simple as ensuring that the telephone number is displayed, or it may be possible to have a display that shows the origin as a government agency or university, when the survey is being conducted by such. Even when the survey is being conducted by a survey research firm, it is better to have the telephone number or name of that firm displayed than the anonymous ‘Out of Area’ or ‘Unknown listing’. In the event that the survey is being conducted from a location that is overseas, it is also preferable to have the identity shown, if possible, rather than just ‘Overseas’, especially as marketing entities seem to be using overseas calling centres increasingly for their contacts.

16.8.2 Answering machines

Another screening device that many people use today is the answering machine. Answering machines can, in fact, be used in two different ways, and some people may use them in both ways, depending on the circumstances. One use is to avoid missing telephone calls that occur at a time when members of the household are unavailable to answer, while the other is to use the machine as a screening device. In the latter case, people allow the answering machine to pick up the call, and then listen to see if they recognise the person making the call before deciding if they are going to interrupt the answering machine and take the call. In effect, this entails using the answering machine in a similar manner to caller ID. There is no right or wrong method to deal with answering machines. Rather, it is necessary at the outset of the fieldwork to make a decision on how answering machines will be treated. Because the survey personnel cannot distinguish between the two alternative uses that are made of answering machines, a protocol should be established for a telephone contact in a survey that permits either use to be a possibility.

The principal issue is whether or not a message should be left if an answering machine is encountered, and, if so, whether the message should include a telephone number that the recipient of the message can call. A secondary issue is at what point to count an answering machine response as being a nonresponding number. The issue as to whether or not to leave a message really depends on whether doing so is likely to increase the overall response to the survey. In many surveys using some form of telephone contact, there may be two or more points at which respondents are called on the telephone: in the initial recruitment, and in one or more reminders to complete the survey and return the information or arrange a time for retrieval by telephone.

In the National Immunisation Survey in the United States, Kochanek et al. (1995) report that they found conflicting results as to whether leaving messages on answering machines increased or decreased response. However, overall, they conclude that, if the message was designed appropriately, it would increase the response. Xu, Bates, and Schweitzer (1993) find that households that owned an answering machine were more likely to respond to the survey, and that leaving a message was effective in increasing response rate. They also find that the actual content of the message had no detectable effect on response. In fact, Xu, Bates, and Schweitzer suggest that the message works in a similar manner to the pre-notification letter in increasing response. However, the messages they tested indicated only that the interviewer would call back at a later time, and did not include leaving a phone number for recipients of the message to call to do the survey. Kempf and Remington (2007) note that a phone number can be left to encourage higher completion rates, but do not provide any information on how effective this has been.

In the United States, it has been reported consistently (Kempf and Remington, 2007; Oldendick and Link, 1994; Tuckel and O’Neill, 2002; inter alia) that those with answering machines are more likely to be younger and better educated and to have higher incomes. As a result, simply treating an answering machine as a nonresponse is likely to bias the sample. Hence, re-calling the number, leaving a message, or leaving a phone number to be called back are important to do, so as to avoid potential biases. Furthermore, because some studies show that those with answering machines may provide a higher response than those without, there is a further reason to leave a message and recontact such potential respondents.

One remaining issue on answering machines concerns the potential for people to feel as though they are being harassed by the survey interviewers. Leaving repeated messages on an answering machine may lead to the perception that the survey researchers are harassing potential respondents. For this reason, it is suggested that, if the protocol for answering machines is to leave a message indicating a subsequent call will be made, then further messages should be left on only alternate or less frequent subsequent attempts. For example, the Bureau of Transportation Statistics (BTS), in its Omnibus Surveys (such as BTS, 2002), directed interviewers to leave messages on only the seventh, fourteenth, and twentieth times of calling the same number and reaching an answering machine. In the event that the protocol is to leave a phone number to be called by the recipient of the message, then it is advisable not to call back again until sufficient time has elapsed to allow for a call to be made to the number left. This will minimise the risk of alienating the potential respondent through too persistent calling.
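A protocol of this kind is simple to encode in the calling system; the sketch below follows the pattern of the BTS rule just cited, although the particular attempt numbers at which messages are left are a matter for each survey to decide and are shown here purely for illustration.

def leave_message(attempt_number, message_attempts=(7, 14, 20)):
    """Return True if a message should be left on this answering-machine attempt.
    The attempt numbers follow the BTS example in the text but are not a fixed rule."""
    return attempt_number in message_attempts

for attempt in range(1, 21):
    if leave_message(attempt):
        print(f"Attempt {attempt}: leave the scripted message")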


When the calls that are answered by an answering machine are to a household that has already been recruited and the action that is to take place is retrieval of data, or a reminder of an interviewer visit to conduct the survey, then a message should be left. If repeated calls are needed, then discretion should be used as to whether to leave a message on each occasion or only every so often. It should always be remembered that one is seeking the cooperation of the message recipient, and any actions that could be considered to be putting undue pressure on the recipient are ill-advised.

16.8.3 Repeated requests for call-back

In both telephone and face-to-face interviews, one of the possible responses that is given is a request for a call-back by the interviewer, because the time of the telephone call or interviewer visit is not convenient. In many cases this is a genuine response, while in some cases it is a nonresponse mechanism, with the potential respondent assuming that such a request will result in no further attempts to gain his or her cooperation for the survey. There is, of course, no way to distinguish between these two possible uses of the request. In this respect, the use of the request for a call-back is not dissimilar to the use of an answering machine.

The first important issue here is that the survey implementation process should include a clearly defined procedure for complying with call-back requests made by respondents, both those made for a specific time and day and those made without specifying either. This is important, because it is another way of demonstrating to the respondent the importance of the survey and the importance of that respondent’s participation in the survey. In line with the earlier discussion on answering machines, if an answering machine is reached when the call-back is made, then a message should be left, with a number for the potential respondent to call. McGuckin, Liss, and Keyes (2001) report that, in the US National Household Travel Survey in 2000, 24 per cent of those households that requested a call-back eventually provided a complete response to the survey. In this survey, 10 per cent of households were not reached again (presumably the call-back request was a means of refusal by these households) and over 47 per cent requested another call-back.

Based on experience with telephone surveys, Stopher et al. (2008a) recommend that, after a fifth request for a call-back, the potential respondent should be classified as a ‘soft’ refusal. In this case, any protocols that have been set up for converting soft refusals should then be employed with this potential respondent. There is no scientific basis for the recommendation of five requested call-backs as constituting a ‘soft’ refusal. However, it is necessary to draw a line somewhere when a potential participant asks repetitively for a call-back and is never willing to complete the survey when the requested call-back is made. The recommendation of five requests seems to meet this necessity without incurring the undue costs of repetitive unproductive calls, while at the same time not abandoning a potentially productive respondent too soon.


Similar issues arise for requested call-backs in face-to-face surveys. However, in this type of survey, the cost of undertaking a requested call-back is much higher than in a telephone survey. It is still very important to establish a protocol that will have the interviewer return at the requested time to undertake the call-back interview. However, in the case of face-to-face interviews, a request for call-back that is repeated two or three times, without leading to a complete survey, would normally be considered the maximum that could be allowed before classifying the respondent as a ‘soft’ refusal.

16.9 Data on incomplete responses

In many surveys, data on incomplete responses provide very important information both on the current survey and, for the survey researcher, on the design of future surveys. For example, data on terminations may provide a clue as to a specific question or questions that seem to deter respondents from continuing with the survey, and may, therefore, provide new data on sensitive questions. If there is also in existence, for the terminated surveys, information on the sociodemographics of the respondents, then it may be possible to deduce that certain questions are considered sensitive by certain subgroups of the population. Similarly, if it is also known which interviewer was involved in each termination, then it may be possible to detect problems that apply to a specific interviewer that may be correctable with additional training.

It is not uncommon to find that CATI and CAPI software has an automatic deletion of incomplete surveys, largely as a means of being efficient in terms of the use of disk space. When this is found to be the case, a means to override this should be put in place at the outset of the survey. In other situations, either the survey firm or the client for the survey may specify this as a standard procedure. Again, it is recommended that this should not be done and that incomplete data should be retained in a separate file.

For the purposes of post-survey analysis, the data to be retained should be restricted to terminations in which at least one question is answered. Refusals that occur without any questions being asked provide no data for retention other than the interviewer identity. However, even one answer that is followed by termination of the interview may be revealing as to a significant reason for termination, especially if it is found to occur many times. In surveys in which there is both a recruitment step and an interview step, data should be retained from both recruitment terminations (to inform on possible reasons for terminations and refusals to be recruited) and interview terminations. Likewise, in self-administered surveys, data should be retained from partially complete responses, even when these are not useful for the primary survey purpose. They may indicate either a problem question or the point at which respondent burden is perceived as too great for the respondent to be willing to continue completing the survey.

In the case of a self-administered paper survey, there may be instances when the cost of data entry is too high for entering incomplete data into a computer database. When this is so, it is recommended that the paper surveys be retained and filed separately against the possibility of a future opportunity to analyse the results. It is further suggested that these incomplete surveys should be destroyed only when it becomes clear that no useful purpose is served by continuing to retain them.

16.10 Checking survey responses

In all types of surveys, there is a need to institute a procedure for checking responses to the survey promptly. In computer-assisted surveys, whether CAPI, CATI, or CASI, cross-checks should normally be built into the software for the computer-assisted part of the survey. As part of the design of the computer-assisted element of these surveys, a thorough set of cross-checks of data should be programmed. These include checks that each response is within a range that is considered realistic, as well as checking that answers to two related questions are consistent. In surveys that are not computer-assisted, it is necessary to devise a set of checks and cross-checks that should be applied to the responses from the survey within a short time of each return being received.

In a US transport survey a few years ago (Stopher et al., 2008b), it was found that the data showed a substantial number of schoolchildren under the legal minimum age to drive who reported that they drove a car to school alone. This was not discovered until some time after the survey had been completed and the data were being analysed. Unfortunately, by that time, little could be done to correct the problem, because it was too late to re-call respondents and find out what the error was (reporting of the means of travel, reporting of age, etc.). This underlines the necessity of performing checks on the data as the returns come in, so that cases that fail the logical checks can be recontacted and clarification sought on the issue.

Some of the checks that should be part of the list in any survey of human populations are the following:

• children below minimum working age reporting paid employment;
• people who report that they are retired or out of work reporting paid employment;
• the head of the household reporting an age under sixteen (or other cut-off age specific to the country where the survey is being carried out);
• reporting more workers in a household than there are people or adults; and
• reporting more children and adults than the total number of people reported as living in the household.

The specific nature of the survey may also suggest additional checks. For example, in a survey of daily travel habits, the following checks might be added:

• people failing to report a trip back to home at the end of the day;
• people failing to report a trip that links one location to another;
• people who are not employed reporting travel to work;
• people failing to report other family members that accompanied them on travel; and
• people failing to report such activities as transferring between public transport routes and systems, or waiting for a public transport vehicle.

Other checks can be developed following the patterns shown in these examples. In the case of computer-assisted surveys, these checks and similar ones can be built into the programme, while these would be specified as manual checks to be made on other surveys. The checks can most easily be done if data are computerised within hours of the returned data being received. However, when this is not possible to do, then the checks should be performed manually by inspecting the written returns.
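To illustrate, the following is a minimal sketch of how a few of the checks listed above might be programmed. It assumes a simple record layout in which field names such as age, employment_status, and mode_to_school are hypothetical and would need to match the actual survey codebook, and it uses a minimum driving age of sixteen, as in the US example cited earlier.

    def check_person(person, min_working_age=15, min_driving_age=16):
        """Return a list of logical-check failures for one person record."""
        failures = []
        age = person.get('age')
        if age is not None and age < min_working_age and person.get('employment_status') == 'employed':
            failures.append('below working age but reports paid employment')
        if age is not None and age < min_driving_age and person.get('mode_to_school') == 'drove alone':
            failures.append('below driving age but reports driving alone to school')
        if person.get('employment_status') in ('retired', 'not employed') and person.get('reports_work_travel'):
            failures.append('not employed but reports travel to work')
        return failures

    def check_household(household):
        """Cross-check counts reported for the household as a whole."""
        failures = []
        persons = household['persons']
        reported_workers = household.get('reported_workers')
        if reported_workers is not None and reported_workers > len(persons):
            failures.append('more workers reported than household members')
        return failures

Records that fail any such check would be flagged for re-contact while fieldwork is still in progress, rather than being discovered only after the survey has closed.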

16.11 Times to avoid data collection

In many surveys, there will be certain days of the year on which attempts to collect data are either more likely to end in refusal or more likely to provide biased measurements. Data collection on national holidays is often a bad idea, because people will be less likely to be at home, or, if at home, will be less likely to wish to spend the time responding to a survey. For example, in the United States, it is probably ill-advised to attempt to collect survey data on Thanksgiving Day or New Year’s Day. In Victoria, Australia, it is probably ill-advised to try to collect data on Melbourne Cup Day (and perhaps anywhere in Australia on that day, for that matter).

Religious holidays are even more likely to result in poor response in countries where there is a strong majority of people who observe the national religion. In certain Christian countries, days such as Christmas Day, Easter Day, and other major religious holidays would be likely to result in few responses. Where these holidays have also become secularised and are considered to be major family holidays, the same concerns would arise.

The collection of data during weather emergencies is also generally unlikely to be successful, although there are times when surveys about how people respond to weather emergencies may be needed, and these have to be undertaken specifically at these times. With that exception, surveys should probably not be attempted in areas prone to cyclones or hurricanes when landfall by such a storm is imminent, nor in areas subject to wildfires when a fire front is known to be approaching the residential area.

Apart from these rather obvious restrictions, there may be other times that are inappropriate depending on the nature of the survey itself. It is important, therefore, to specify at the outset of the survey implementation the days and weeks of the survey period when fieldwork should not be attempted, because of such reasons, and also to put in place a procedure to handle weather and other emergencies that may arise during the survey period, but that cannot be foreseen at the time that the survey goes into the field. There should be clear reasons why such restrictions are put in place, which should relate either to the probable high refusal rate that would occur at the time, or to the fact that external influences may alter substantially the behaviours that the survey is seeking to measure.
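As a simple illustration, a fieldwork blackout list can be specified as data and consulted before contact attempts are released; the dates shown here are examples only and would be replaced by the holidays and local events relevant to the survey area and year.

    from datetime import date

    # Example blackout dates for a hypothetical survey year (illustrative only)
    BLACKOUT_DATES = {
        date(2011, 1, 1),    # New Year's Day
        date(2011, 11, 1),   # e.g., Melbourne Cup Day in Victoria
        date(2011, 12, 25),  # Christmas Day
    }

    def fieldwork_allowed(day, emergency_in_area=False):
        """Return True if survey contacts may be attempted on this day."""
        return day not in BLACKOUT_DATES and not emergency_in_area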

16.12 Summary comments on survey implementation

This chapter provides a number of pointers and recommendations relating to the implementation of a survey. From the procedures for selecting, training, and monitoring survey personnel, to the record keeping required through the survey implementation, issues relating to the storage of materials required for completion of the survey and completed survey forms, assigning identification numbers to respondents and potential respondents, special issues for postal and telephone survey implementation, to such issues as retaining data on incomplete responses and specifying cross-checks that should be applied to data as they are received from respondents, this chapter has attempted to cover a series of important issues in fielding a survey of a human population.

While the coverage of such issues may not have been exhaustive, it is hoped that sufficient information has been provided in this chapter to alert the survey researcher to many issues that must be dealt with as the survey fieldwork is set up and proceeds. Failure to attend to these issues at the appropriate time will produce threats to the validity and usefulness of the data collected from the survey. Given the costs usually associated with survey fieldwork, such threats can represent serious risks to the continued existence of survey firms, and can result in serious implications for public agencies involved in data collection.


17 Web-based surveys

17.1 Introduction

In the past few years the internet has emerged as an interesting mechanism by which to undertake surveys. Its appeal is both the low cost (there are no interviewers needed, nor survey personnel to stuff envelopes, no postage costs, and data are recorded directly to computer storage, thereby eliminating the requirements for manual data entry by survey staff) and the relative ease by which the survey can be set up (Schonlau, Fricker, and Elliott, 2002). Web surveys, like postal surveys, have the advantage that respondents can choose their own time to fill out the survey, rather than feeling pressured to respond when an interviewer shows up at the door or calls on the telephone. Like computer-assisted surveys, Web surveys also offer the ability to build in various error-checking procedures, so that respondents can be asked to correct potentially conflicting responses or responses that are out of range. Another distinct advantage of Web surveys is that complex skip patterns can be embodied within the survey, but their complexity can be hidden completely from the respondent. In this respect, the Web survey shares an advantage with any interviewer-administered survey, in which a skilled interviewer can follow complex skips without the respondent being aware that answers to certain questions lead to certain other questions or segments of the survey. However, the Web survey is a self-administered survey, which in almost any other form would require guidance on how to skip certain questions and go to certain other questions. Because a specific respondent to a Web survey is never aware of the other possible questions that may be included in the total survey design but that were not relevant to this respondent, the Web survey has the advantage over other self-administered surveys that the survey task may appear much smaller than it would in a paper version.

However, on the negative side, the response rates to internet surveys are reported by several researchers as being lower than those of postal surveys, which is a matter of some concern, because postal surveys generally experience lower response rates than other methods (Jones and Pitt, 1999; Leece et al., 2004; McDonald and Adam, 2003). Jones and Pitt in their study achieved response rates of 19 per cent with e-mail and an internet survey, while achieving 72 per cent with a postal survey, both surveys having three reminders sent. In the survey by Leece et al., the response rate by the internet was 45 per cent, while that of the postal survey was 58 per cent, which was a statistically significant difference. McDonald and Adam achieved a response rate of 46 per cent by post, but 21 per cent by the internet. The postal response rates of these three surveys are at the high end of postal response rates, in general, suggesting that these were well-designed surveys. The response rates to the internet are then quite low, at 19 to 45 per cent, with two studies of the three achieving internet response rates that are only around 20 per cent. It should be noted that these surveys were all conducted more than five years ago and that internet connection speeds, computer capabilities, and penetration of the internet have all increased markedly since then. In current work (2010) by the author, the response rate to an internet survey is running at around 32 per cent, without deducting the number of potential respondents whose e-mail notifications proved to be incorrect and so did not reach the intended respondent. This suggests a slightly higher response rate to an internet survey, but still only in the same range as a postal survey.

However, notwithstanding these advantages, the Web survey remains, at best, one of two or more alternative ways for respondents to complete a survey in random sampling. The primary reason for this is that internet penetration is by no means universal anywhere in the world currently, and, even where internet penetration is high, there are many people who have the internet who are not particularly comfortable using it and who may well find a Web survey too daunting to undertake. There are also a number of design issues that need to be taken into account in designing a Web survey, many of which are frequently neglected. In this chapter, a number of issues relating to Web surveys are discussed, and design issues of importance are noted.

There is also another concern about Web surveys, especially when they are used as one of two or more methods for respondents to use in the same survey. McDonald and Adam (2003) suggest that respondents answer a Web survey differently from how they do a postal survey. It should be kept in mind, of course, that the survey used by McDonald and Adam included a substantial number of attitudinal questions. It is in such a context that the concern arises over differences in response to a survey. When a survey is predominantly factual in nature – e.g., collecting demographic information and data concerning specific activities of a respondent – the concerns with different responses may not be as serious. However, again, the research to date is inconclusive. Coderre, Mathieu, and St-Laurent (2004) report a comparison of telephone, postal, and Web surveys to collect qualitative data to be used to measure corporate image. In their research, they found that the three methods performed comparably for two of the three corporations that were assessed. The one for which a difference was found concerned a firm that was in the business of providing internet service, so the authors argue that this may have resulted in the opinions of those undertaking the Web survey being different from those of the general population – hence the difference in measurement in the survey.

In another comparison of Web and telephone surveys, Fricker et al. (2005) find a number of differences between the two, partly because of differences in the way in which the two surveys were administered. For example, item nonresponse was higher on the telephone survey, because telephone interviewers were willing to accept ‘No opinion’ responses, whereas the Web survey was designed to prompt respondents for an answer rather than allowing the response to be left blank. Consistent with some of the earlier cited studies, Fricker et al. (2005) report finding a lower response to the Web survey than to the telephone interview. In addition, respondents gave less differentiated answers in the Web survey to a battery of attitude questions than the telephone respondents. This could be because telephone interview respondents did not recall prior answers and had to think about each question, whereas Web respondents were presented with the questions in a grid format, and it would have been easier to give similar responses to each question.

Braunsberger, Wybenga, and Gates (2007) report greater reliability in the data obtained from a Web survey than from a comparable telephone interview survey. While also noting the warning about problems with the representativeness of the sample for a Web survey, their research indicates considerable value in using such a survey. Bech and Kristensen (2009) compared a postal survey and a Web survey for older respondents. As with several other studies, they also find a lower response rate by the Web survey than the postal survey. However, they also find significant differences in the characteristics of respondents between the two methods of survey, particularly that the Web survey was more representative with respect to gender, but less with respect to age. Data quality was again assessed as being better in the Web survey, with fewer item nonresponses and ‘Don’t know’ answers than the postal survey. However, because of the lower response rate, they still conclude that the Web survey was more expensive on a cost per response basis.

Before commencing on the main body of this chapter, it is useful to review the extent to which the internet has penetrated into populations of different countries and regions of the world. Internet World Stats (2008) estimates that North America has the highest rate of penetration, with 73.6 per cent of the population having access to the internet either at home, at work, or both. Next in order after North America are Oceania/Australia, with 59.5 per cent, and Europe, with 48.1 per cent.

Madden and Jones (2008) report that there is some polarisation in the use of the internet in the United States. In their study, they found that, of those people with internet access, 60 per cent used it every day at work, while 28 per cent never used it at work. They also found that 62 per cent of internet users used it every day at home, while 6 per cent never used it at home. At least occasional internet usage was reported to have increased in the United States, from 63 per cent who used it at least occasionally in 2004 to 73 per cent in 2008. The point of all these statistics is that internet penetration is still well below telephone penetration, which exceeds 95 per cent in areas of the world such as the United States, Canada, much of Europe, and Australasia. Hence, given that questions are often raised of bias in sampling using the telephone, even in countries such as the United States, there must be considerable bias in considering use of the internet alone for a random survey of the population. This is not to say that some surveys cannot be carried out via the internet alone, especially when it is known that the majority of the target population have internet access. However, population surveys in general should not limit the response method to the internet, because of the significant biases present in doing so.

Some examples of the biases that may arise are given by Madden and Jones (2008). They find that 60 per cent of employed seniors (people over sixty-five years of age) were online, while only 38 per cent of all seniors were online. Only 40 per cent of US citizens without a high school education were online, although, of those with less than a high school education but who were employed, 62 per cent were online. In contrast, 63 per cent of all high school graduates were online, while 75 per cent of employed high school graduates were. Clearly, then, any internet survey will tend to be biased against older people, those not employed, and those with lower education levels.

Another issue relates to the type of internet connection – dial-up or broadband. In the United States, Madden and Jones (2008) report that 55 per cent of US citizens had high-speed internet access at home, which is a significant increase from the 47 per cent reported in early 2007. However, this also implies that there is still a significant proportion of internet users with slow-speed dial-up service at home – a factor that must be taken into account in the design of any survey on the internet.

Finally, as Dillman (2000) notes, having a computer or access to one does not necessarily imply that a person has the skill to use it for completing a survey. Indeed, Dillman points out that many people may have a computer and even access to the internet but skill only to use a single programme, or to use it in the pursuit of a particular hobby. For such people, the skills needed to fill out a Web survey may involve learning a new set of skills, which may be sufficiently daunting as to deter them from trying. Hence, computer and internet penetration themselves are no guide to the actual possession of the skills needed to answer a Web-based survey.

17.2 The internet as an optional response mechanism

As should be clear from the statistics reported in the introduction to this chapter, a random sample of the population cannot be achieved using an internet survey alone (Bech and Kristensen, 2009; Dillman, 2000; McDonald and Adam, 2003; Schonlau, Fricker, and Elliott, 2002; inter alia). The sample obtained will tend to be of younger population members, those who work, and those with higher education, based on US statistics, which seem likely to apply in general to other parts of the world. Hence, the ideal use of the internet in a survey that is intended to produce a representative sample of the general population would be to recruit survey respondents by more conventional means, such as the telephone or door knocking, and then to offer the internet as one of two or more methods for respondents to provide their responses to the survey. However, again, this has to be recommended with the caution that, if the survey involves a number of attitudinal questions, then the responses from the internet survey may not be comparable to the responses obtained from other methods.

Convenience samples for the research and exploration of issues, and samples that are intended to be of specific population subgroups, such as university academics (Jones and Pitt, 1999), may be able to rely on the internet as the sole method of surveying.


However, these are not surveys of the general population, and the biases to which a Web survey would be subject are unlikely to be of serious consequence.

As an illustration of the potential use of a Web-based survey, consider a survey that is being undertaken of a representative sample of the general population in an area with high telephone penetration. The survey is designed to have a recruitment call by telephone, followed by the sending out of a survey package by post. Respondents are offered three alternative ways to complete the survey. They may complete the paper forms sent by post and post them back in a prepaid envelope; or they may complete the paper forms and then provide the data over the telephone to an interviewer; or they may go to a Web address (URL) that is provided to each potential respondent and complete the survey online.

In cases such as this example, the main issues of concern remain the potential biases in the representative sample that is sought and the compatibility of the data from different sources. In other words, would a person provide the same answers to each of the three methods of survey that are offered – the self-administered form by post, the telephone retrieval of data, and the Web survey? This is a research issue that is as yet unresolved.

17.3 Some design issues for Web surveys

There are a number of design issues that are specific to Web surveys and that need some elaboration. In this section of this chapter, these issues are introduced and discussed, with recommendations made as to the way in which they should be handled.

17.3.1 Differences between paper and internet surveys

Probably the single biggest difference between paper surveys and Web-based surveys relates to the appearance of the survey (Dillman, 2000). In paper surveys, the designer controls completely the way in which the survey will appear to each and every respondent. In drafting the survey materials, using word-processing or other publication software, the survey designer has complete control over the appearance, the layout, and all other physical aspects of the survey. Further, the designer then knows exactly what each respondent will see when the survey form is received.

In Web surveys, this is not the case. The survey designer uses specific software and hardware to design the survey and will see a particular display of the finished survey design. However, this design is then transmitted via the internet to the respondent’s software and hardware, by which the survey is displayed. Because of various restrictions on the respondent’s hardware and software, the appearance of the survey can be quite significantly different. In working with Web survey designs, the author has experienced situations in which the programmer has created various screens for the survey such that each page of the survey displays in its entirety on his or her screen. However, when sharing the survey with other workers in the same office, other displays cut off the bottom of the page, or display only part of the page width, because of different settings, different software capabilities, and different hardware. Various local settings, such as the browser used by each of the designer and the respondents, the screen setting for display, the operating system of the computers, as well as many other aspects, can change how the respondent sees the survey quite significantly compared with how the designer intended that the survey should be seen.

A particular problem here is either cutting off content, or misaligning content, so that alignments that were intended to guide the respondent or help comprehension may become garbled, or even be missing altogether on particular respondents’ computer displays. The Web survey designer must, therefore, attempt to keep in mind the potential limitations of the respondents’ computers in putting together the survey design. This becomes even more difficult as it is likely that the programmer designing the Web survey has current or very recent software and hardware, while many respondents may have significantly outdated software and hardware. It can be quite challenging to the Web designer to recall and implement the restrictions of such outdated software and hardware in designing the survey.

17.3.2 Question and response

In Chapter 9, the issue of question wording and appropriate response categories is discussed at some length relating to all surveys. However, this issue is of particular importance in Web surveys, especially when the Web survey is designed so that the respondent cannot proceed to the next page or question until an answer is given to the current question. Considerable care is required in Web surveys to ensure that every respondent will be able to find an appropriate answer category for a question, and that questions are so worded that the respondent is in no doubt as to the intent of the question.

In a recent questionnaire sent to this author, one of the questions asked concerned areas of scientific expertise. The questionnaire required the respondent to fill in a minimum of four areas of expertise. However, in the listing provided, there were none that even remotely matched this author’s areas of expertise. Because the selection of areas of expertise could lead to future contacts for reviewing papers, it was not appropriate to fill in the areas of expertise falsely. However, there was no provision for filling in a category of ‘Other’, and the respondent could not proceed without listing a minimum of four areas. This prevented the author from completing the questionnaire. The remedy for this would have been to remove the restriction that four areas had to be provided, or to provide a category of ‘Other’ and require the respondent to type in four areas of expertise of the respondent’s choice, or sufficient to bring the total (between the built-in list and the write-ins) to four areas. Unfortunately, there are online services for developing Web surveys that do not prevent survey researchers from creating such blocks to the completion of a survey.

Another problem encountered recently illustrates another issue with question and answer design. In this case, a questionnaire was sent out that asked respondents a number of questions, to which the initial response was either ‘Yes’ or ‘No’. The survey designer then wanted to know the reasons behind the affirmative or negative responses, so added a third answer category of ‘Why/why not?’. Unfortunately, these three responses were set as mutually exclusive responses to the question. If the respondent desired to answer ‘Yes’ and then respond to ‘Why/why not?’, the ‘Yes’ response was un-ticked by the survey, and ‘Why/why not?’ was ticked. This also happened if the respondent simply started to enter a reason into the text box available for the reason to be typed in. This is unfortunate from the viewpoint of the respondent, who would probably be frustrated that his or her original ‘Yes’/‘No’ answer was negated, and would probably lead many respondents not to attempt to give their reasons for a ‘Yes’ or ‘No’ response. It would also be frustrating to the survey researcher, who will end up with many responses that just indicate a negative or affirmative response and no reasons, and others for which the researcher will have to try to decide from the reasons stated whether the respondent was indicating agreement or disagreement with the question. This could and should have been remedied by splitting the questions. The initial response to each question should have been a ‘Yes/no’ question, followed by a secondary question in each case asking for the reasons for an affirmative or negative response. Indeed, ‘Why/why not?’ should not have been a response category, but should have been a question, with a text box available for the response.

The designer of Web surveys must bear in mind that it is easy for a respondent to become frustrated when the survey does not permit him or her either to proceed without providing an answer to a question, if the responses provided are not exhaustive, or to provide the answer that matches the respondent’s situation. In such cases, it is extremely easy for the respondent simply to terminate completion of the survey and proceed no further. Survey designers need to remember that the survey must be designed for the convenience of the respondent, not the survey researcher.

The principle espoused in Chapter 9, of always adding a category named ‘Other’ to the response set of any question when it is not possible to be certain that every alternative has been covered in the list of responses, is even more important in Web surveys, especially when continuing on to subsequent questions is conditional on completing the current question. Even when the researcher may think that the only possible answers are ‘Yes’ or ‘No’, there are possibilities that a respondent may be unsure or may genuinely not know the answer. Therefore, options of ‘Maybe’ and ‘Don’t know’ almost certainly should be added even to these apparently simple answers. Again, it is noteworthy that several of the papers cited earlier on comparing Web surveys with other types of surveys mentioned a lower item nonresponse rate because of the ability of the Web survey to prompt for an answer repeatedly. However, a serious issue that was not often addressed in the research is whether the Web answer was actually a correct or realistic one. In some instances, it is bound to be the case that respondents will give a false answer when compelled to give an answer, or will just cease to fill out the survey, thereby generating the lower response rate frequently noted for Web surveys.

Looking at a typical interviewer survey, it will be noted that there are often answer categories that are not read out or shown to respondents, such as ‘Don’t know’, ‘Refuse to answer’, or ‘Not sure’. The reason for not showing or reading these categories is usually that it is too easy for respondents to pick one of these without really thinking about the question. This is clearly undesirable, if it results in respondents treating the survey very superficially. In an interview survey, it is possible to keep these responses hidden, yet also record them when they are the genuine and only response a respondent is willing to give. Unfortunately, in a Web survey, these response categories have to be offered, if the respondent is not to be turned off or frustrated by the survey. It is possible to hide these responses initially by providing the categories that the survey researcher would hope that the respondent would usually use, but also to provide a category ‘Other’ that, when clicked on, produces a drop-down list that allows the choice of ‘Don’t know/unsure’, ‘Refuse to answer’, and possibly a further option to write in the response. Omitting these options entirely is likely to increase premature terminations and respondent burden and frustration.

17.3.3 Ability to fill in the Web survey in multiple sittings

One of the clear advantages of self-administered surveys relates to when they can be carried out: not only can they be filled out when the respondent wishes, but the respondent can also leave off filling out the survey and return to it at a more convenient time, if something arises to interrupt him or her or if it is taking longer to fill out than expected. This advantage needs to carry over to the Web survey. Thus, Web surveys that compel the respondent to complete the survey at one sitting and require the respondent to start again if something interrupts the completion of the survey lose one of the major advantages of this type of survey. It is not difficult to design into the Web survey retention of the answers already provided, so that a respondent can stop and start completing the survey as the need arises. This is also potentially important because computers are known to encounter problems, such as ‘freezing’ or ‘rebooting’ in the midst of a session, and internet connections may go down in the middle of a survey completion. Such incidents, in addition to external reasons that require a person to stop completing the survey before it is finished and return to it later, require that there is a resume capability, not a restart.

Probably the only exception to this situation is a survey that has certain time-critical elements to it, or that involves some type of learning process being simulated through the survey. In these situations, it may be unavoidable that, for a valid response, the respondent has to complete the survey at one sitting. It may also be necessary, in such special cases, that the time taken for the response is recorded as part of the survey result. However, with these exceptions, the survey should always be designed to store the answers given so far, so that the respondent may start and stop filling out the survey at will, and does not have to restart, answering questions already answered once before. Failure to provide this resumption capability is likely to increase the refusal rate for the survey. It also makes it impossible for a Web survey to provide data from terminations.

To enable the retention of data on terminations, the data should be transmitted to the survey server from each session in which the respondent fills out the responses to any questions on the Web survey. In other words, the stored answers should not be stored solely on the user’s computer, but on the server as well, so that an incomplete survey response can be retrieved should a particular respondent fail to complete all of the survey, or should an error occur on the server or the respondent’s computer that prohibits proceeding further.
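A minimal sketch of this idea is shown below, using a small SQLite table on the survey server to store each answer as soon as it is submitted; the table and field names are illustrative assumptions only, and a production system would sit behind whatever web framework and access controls the survey actually uses.

    import sqlite3

    conn = sqlite3.connect('responses.db')
    conn.execute("""CREATE TABLE IF NOT EXISTS partial_answers (
        respondent_id TEXT, question_id TEXT, answer TEXT,
        PRIMARY KEY (respondent_id, question_id))""")

    def save_answer(respondent_id, question_id, answer):
        """Store (or overwrite) a single answer as soon as it is received."""
        conn.execute("INSERT OR REPLACE INTO partial_answers VALUES (?, ?, ?)",
                     (respondent_id, question_id, answer))
        conn.commit()

    def resume_answers(respondent_id):
        """Retrieve everything answered so far, so that the survey can resume."""
        rows = conn.execute("SELECT question_id, answer FROM partial_answers WHERE respondent_id = ?",
                            (respondent_id,))
        return dict(rows.fetchall())

Because each answer is committed as it arrives, a termination still leaves a usable partial record on the server, as discussed above.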

It is also important, especially in light of this issue, to ensure that respondents are told truthfully at the outset of the survey how long it is expected to take to complete the survey. Respondents should also be informed if they are required to complete the survey in one sitting, or if their answers to questions will be recorded as they go, so that stopping and returning to the survey will be possible, without having to restart the survey.

If respondents may need to refer to other information during completion of the survey, they should also be informed about this at the outset, and a suggestion should be made that respondents may wish to make sure that such information is at hand before they commence completing the survey. For example, a survey might require the respondent to report the odometer readings of all cars currently owned by the household. If some cars owned by the household are not currently at home, the respondent may wish to defer responding until all cars are at home. The respondent may wish to go to the cars and note down the odometer readings of each car before starting the survey.

17.3.4 Progress tracking

Another feature that is important in the design of a Web survey is to provide ongoing information to the respondent as to how far he or she has proceeded through the survey. In an interviewer survey, the respondent can always interrupt the interviewer to ask how much longer the survey will take, while, with a self-administered paper survey, the respondent can look ahead to see how much more there is to be done. It is important to retain this feature in a Web survey. This can be done by displaying on each new page the percentage of the survey that has been completed so far. This will assist the respondent in several ways. It ensures that the survey is not seen as a ‘black hole’ that will absorb untold amounts of the respondent’s time. It also assists the respondent in making a decision as to whether time permits him or her to complete the survey at the present sitting, or if it will be necessary to come back to the survey at a later time. It also allows the respondent to judge better how long it will take him or her to complete the survey, and may prevent terminations that result from a respondent believing incorrectly that a significant further amount of time is yet required to complete the survey.

A better method than displaying the percentage of the survey that has been completed is to have a clock that shows the amount of time spent so far and the estimated amount of time required to complete the remainder of the survey. Of course, this is more demanding, because different respondents will require different amounts of time to complete the survey. However, it is actually not much more difficult than the percentage of time. If the designer can work out how long each question takes to answer and can convert this into percentages, then these percentages can be re-estimated as times, based on the actual time taken by each respondent. Thus, for example, a particular survey might consist of ten questions that the designer estimates should take fifteen seconds, twenty-five seconds, forty-five seconds, ten seconds, twenty seconds, fifty-five seconds, thirty seconds, fifteen seconds, ten seconds, and ten seconds to complete, for a total of 235 seconds or approximately four minutes. Converted to percentages, these are, respectively, 6.4, 10.6, 19.1, 4.3, 8.5, 23.4, 12.8, 6.4, 4.3, and 4.3 per cent. If a particular respondent takes 120 seconds to answer the first four questions, which represent 40.4 per cent of the time required by the designer, then the respondent could be informed that there are approximately 177 seconds, or about three minutes, still left to go. This amount would be readjusted as each question is answered.
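A sketch of the calculation just described follows; the designer's per-question estimates are those of the example above, and the function simply rescales the time still expected by the respondent's observed pace so far.

    # Designer's estimates (seconds) for the ten questions in the example above.
    DESIGN_ESTIMATES = [15, 25, 45, 10, 20, 55, 30, 15, 10, 10]

    def estimated_seconds_remaining(questions_answered, elapsed_seconds):
        """Rescale the designer's remaining-time estimate by the respondent's actual pace."""
        done = sum(DESIGN_ESTIMATES[:questions_answered])
        remaining = sum(DESIGN_ESTIMATES[questions_answered:])
        pace = elapsed_seconds / done if done else 1.0
        return round(remaining * pace)

    # e.g., 120 seconds spent on the first four questions gives roughly 177 seconds still to go.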

Most software for building Web surveys includes the capability to use a progress bar or indicator. However, there appears to be little or no research on the effects of a progress bar. Those texts that discuss designing Web surveys are more or less silent on the topic, and research papers on the topic are not readily in evidence. However, Dillman (2000) advocates using some type of indicator of progress through the survey, either by using a progress bar or by using text to convey how far the respondent has progressed. This author would recommend that the presence of a progress bar should be tested in a pilot survey. In most instances, it seems likely that it will be considered helpful and useful by respondents. However, if the survey is rather long, it may be daunting to the respondent to see how slowly the bar indicates progress is being made. Then again, this may indicate that the survey is too long, and that the researcher should consider ways to shorten the survey.

17.3.5 Pre-filled responses

Normally, it would be very dangerous for there to be a pre-filled response to any question in the survey, although this is encountered in actual Web surveys from time to time. For example, the country for an address may be defaulted to the United States, with an address format that asks for a US state and a five-digit zip code as part of the address. With the question arranged to fill out the street address first, then the town or city, then the state, and zip code, and then the country, a person completing the survey from another country is faced with the task of being unable to specify the address correctly until the country is changed, at which time the Web survey may offer a different format for the address, perhaps with different states or provinces, and with a postal code that consists of a different number of letters and/or numbers. Of course, some respondents will recognise this problem and will know to click on the country first and select the correct country before attempting to complete the rest of the address question. However, it is better in the design to request the country information first and then offer a question design that matches the requirements for the address in that country. Of course, this is not a problem in a survey that is restricted to one country. However, the point about not pre-filling a response is an important one that should not be lost. At the same time, one should be aware that, unless there is a strict restriction of the survey to one country only, then it may well be possible that persons from other countries may fill out the survey, and this may require different formatting for some questions in the survey.

In other instances, such as an attitudinal survey on the Web, responses in the middle or neutral scale point may be pre-filled in the survey. This would be unfortunate, in that the survey researcher would not know, in the case that such answers are received from a respondent, whether the respondent actually chose that scale point, or simply skipped the questions. This will apply to any other type of question to which there is a pre-filled response. This is never a good idea, therefore, and should be avoided at all costs.

17.3.6 Confidentiality in Web-based surveys

The confidentiality of the responses to a survey is just as important in Web-based surveys as it is in interviewer and other self-administered surveys. However, many computer users will be aware of the potential dangers of filling out information on the Web. Therefore, it is critical that survey designers ensure that responses to Web-based surveys are fully protected and that data entered into the survey cannot be accessed by unauthorised persons. First and foremost, this requires that each respondent has unique access to the survey. This can be provided by giving the potential respondent a unique user name and password for accessing the survey. Alternatively, each respondent may be given a unique URL to access the survey, whereby the URL embodies an effective password that limits access to that individual respondent only.

Second, especially if the data that are to be collected by the survey include any personal or other information that may be considered by the respondent to be sensitive, the data recorded should be encrypted in such a way that only the survey researcher can determine the meaning of the responses. There are various data encryption procedures available, and these should be checked carefully and an appropriate one chosen when needed. Again, it is important to inform respondents if data encryption is being used, to reassure them of the confidentiality and privacy of the data being provided.
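The following sketch illustrates both ideas in combination: generating an effectively unguessable access token to embed in each respondent's unique URL, and encrypting a sensitive answer before it is written to storage. It uses Python's standard secrets module and the Fernet cipher from the third-party cryptography package; the field being encrypted and the way the key is managed are assumptions for illustration only.

    import secrets
    from cryptography.fernet import Fernet

    def make_access_token():
        """Create a hard-to-guess token to embed in a respondent's unique survey URL."""
        return secrets.token_urlsafe(16)

    # A single key held only by the survey organisation (store it securely, apart from the data).
    key = Fernet.generate_key()
    cipher = Fernet(key)

    def store_sensitive_answer(answer_text):
        """Encrypt a sensitive response before saving it."""
        return cipher.encrypt(answer_text.encode('utf-8'))

    def read_sensitive_answer(token):
        """Decrypt a stored response for authorised analysis."""
        return cipher.decrypt(token).decode('utf-8')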

17.3.7 Pictures, maps, etc. on Web surveys

Bearing in mind that, in many countries, many, if not a majority, of internet users may still be using dial-up or slow-speed connections, it is important not to overload the Web survey with high-resolution pictures or maps that may slow the loading of the survey to a significant degree. Pictures and maps may sometimes be an essential element of a Web survey. However, great care must be taken that these are not so large in size as to take significant time to view on the local computer. A Web survey that takes minutes to load may result in substantial nonresponse by those who do not wish to take the time or who assume that, because of the time being taken, the survey is malfunctioning. When maps or pictures are required and some delay may be associated with a page of the survey becoming available to view, messages should be displayed to assure the respondent that the survey is still functioning and, if possible, to provide an estimate of the time remaining to load the page. This will help to ensure that the respondent will not assume a malfunction or that the survey contains some malicious software or spyware, and consequently prematurely terminate the loading of the survey.

It is also important to use graphics only to the extent that they are necessary to the survey, so that excessive download times are not experienced. In addition, in many countries, charging for internet access may be based on the size of files downloaded, in which case it is essential that the survey designer is especially sensitive to the size of the files that need to be accessed to complete the survey. Anything that is not essential to the survey should be removed.

Animation in survey pictures and maps

A major advantage that the Web survey offers over interview and other self-administered surveys is the potential to include some type of animation that may assist the respondent to provide the desired information. For example, the author has worked with surveys that involve having people carry a GPS device with them to record where they travel, subsequently putting the traces onto maps, and asking the respondents for other information about the travel being undertaken. Because one of the things that cannot be determined with certainty from the GPS traces is the location of each stop made by the respondent, a useful form of animation is to allow the traces to be drawn onto the map in sequence and at a speed that allows the user to stop the trace each time a recognised stopping point in the travel is reached on the map. However, again, the time and the download requirements that may be needed to allow such animation may result in substantial costs to the user and in significant amounts of delay in loading the survey. Thus, while animation can be very helpful, it must be designed with great care and attention to the delays and other costs imposed on the respondent.

17.3.8 Browser software

In addition to the concerns that were raised earlier, relating to the differences between what the designer sees and what the respondent sees, a specific issue relates to the software in which the Web survey is designed. Most Web surveys are developed in JavaScript®, which is widely available. However, not every personal computer with the internet will be able to run it. Indeed, some computers may use a browser that does not support JavaScript, while others may have intentionally disabled it for fear of security violations on their computers, or may use a personal data assistant that cannot run JavaScript, and so forth. It is therefore recommended that the survey be structured to check to see if JavaScript is available in the respondent’s browser. In the event that it is not, then the survey should still be able to run without JavaScript.

User interface design

A key element of the Web survey is the user interface design (Schober et al., 2003). As pointed out by Schober et al., there is a significant difference in the user interface for Web surveys and any Web application for which the user is providing information to the server, compared to situations in which the user is obtaining information from the server. A primary reason for this is that the consequences of misunderstandings are quite different between these two situations. There is also, according to Schober et al., a potential difference in the interpretation of words on the Web between the two situations. In a Web survey, respondents will usually assume that the words displayed on the screen mean exactly what they appear to mean, whereas, when undertaking a search for information on the Web, the user will often assume that the words on the screen do not necessarily mean what the user means, recognising, for example, that Web searches are not tailor-made to the requirements of the user.

These differences tend to lead to an important difference between the behaviour of Web survey respondents compared to Web searchers. Schober et al. (2003) find that respondents to Web surveys sought clarification much less often than Web users who were using the Web for a search. In general, those who complete well-designed Web surveys often find them easy to complete and not burdensome, which is a major advantage of such surveys. However, it places a very major demand on Web survey designers to be sure that the words used are clear and unambiguous, perhaps to an even greater extent than on a paper or interviewer survey, because of the attitudes and perceptions of those who complete Web surveys.

However, another aspect of interface that is important relates to the situation in which the Web survey is just one of two or more alternative ways of responding to the survey. If a respondent receives a paper survey or is offered an interview survey, with the option of also responding to the survey on the internet, there should be a consistent interface between the paper or interview survey and the online version. In looking at paper surveys, some of the areas of consistency include the fonts used for questions, answers, and instructions, the order of questions, the colours of the backgrounds, the physical locations of the questions on the page, the colours of fonts, and the locations and amount of white space.

In a paper and internet survey, in which the respondent has limited ability to ask for clarification and definition, the internet version should replicate the paper version as closely as possible. In this respect, the recommendations of Schober et al. (2003) to include capabilities that allow the respondent to click for clarification would violate the need to match the paper survey. On the other hand, when the Web survey is an alternative to an interview survey, including the capability in the Web survey of a click for clarification is comparable to the ability to ask the interviewer questions of clarification, and it should definitely be included in the design.

Creating mock-ups

In developing a Web survey, one issue of importance is to create a mock-up of the design of the survey. This mock-up should then be subjected to various reviews and be thoroughly scrutinised, before proceeding to undertake the programming. The reason that this is important is that changes to the survey interface, once programming is well advanced, will be time-consuming, and often can be quite difficult. The mock-ups will often serve to reveal potential problems in the survey design, which are best corrected prior to programming the Web pages.

Page loading time

Another issue that is frequently encountered in Web surveys is the lengthy times required to load Web pages. When a user is searching for information on the Web, he or she will usually be willing to wait for pages to load. However, when the user is a survey respondent, his or her patience to wait for a page to load will be much less, and even short time delays will be perceived as being lengthy. As a result, it is advisable, when a Web page is going to be slow to load, to provide a pop-up that indicates the progress being made in loading the page, both to assure the respondent that the process is occurring correctly and also to provide an indicator of the time still needed to complete the page loading. While this will not satisfy every potential respondent, it is likely to reduce the number of potential respondents who terminate the survey when a page is slow to load.

17.4 Some design principles for Web surveys

As a general rule, most of the principles for designing paper surveys, explored at length elsewhere in this book, apply equally well to Web surveys. However, although paper surveys are concerned with the way in which people read and then mark their responses, Web surveys have to deal with how a person navigates around a computer screen and provides his or her responses, as well as the way in which the material is read and understood. One particular issue that should be considered here is how the respondent is able to move his or her cursor on the display and access the data entry points. This needs to be made as easy and obvious as possible. For example, it may be wise to make sure that arrow keys, mouse movements, the tab key, and the enter key all allow the respondent to move from one response to the next. In addition, the means by which one page is ended and a new page is displayed should be as simple as possible.

Dillman (2000) outlines a number of design principles, which seem worth repeating here.

(1) Start the survey with a welcome screen that provides clear information about what the respondent is to do, stresses the ease of completing the survey, and tells the respondent how to proceed through the survey.

(2) As with all surveys, start with a question that will be likely to be interesting to the respondent, as well as focusing his or her attention on the topic of the survey.

(3) Use formats for the questions and answers that are very similar to those that would be used in a self-administered paper survey. This should include numbering the questions and also ensuring that questions and their associated response sets can all be displayed on one page.

(4) Recall the advice and warnings given about the use of colour in survey designs. In Web surveys, just as in paper surveys, make sure that colour helps the flow of the survey, guides the respondent, and does not impede or interfere with the clarity of the survey. Because the use of colour is so much easier, and greater variety is possible in Web surveys than on paper surveys, some designers overuse colour, so that it interferes with the survey design and implementation.

(5) Give instructions when they are needed and be sure to provide instructions on how to complete the survey. It is even more important in Web surveys, in which accessing another page of the survey is not necessarily straightforward, that instructions appear at the point where they are needed.

(6) Drop-down boxes are a useful capability offered in Web surveys that is not possible on paper surveys. However, they should be used sparingly and should always be indicated by a ‘Click here’ instruction, so that the respondent is made aware that a drop-down box will appear at this point. Further, make sure that the entire contents of the drop-down box will display on the respondent’s screen. If a drop-down box is partially cut off, the respondent may never be aware of some options that display below the bottom of his or her screen.

(7) Dillman (2000) indicates a preference for Web surveys that scroll from page to page, rather than ones that require the respondent to click on a ‘Next’ or ‘Continue’ button. There are probably arguments to be made for either design. However, the repeated requirement to click on a button to proceed to the next page can become annoying to respondents, especially in a long questionnaire, or one that displays only a single question on each page. With the increasing availability of scroll wheels on a computer mouse, the scrolling option will normally be the better one.

These guidelines, along with other matters explored in prior sections of this chapter, should help in the design of better Web surveys. Moreover, because this is an area of rapid development, the reader is encouraged to seek out the most recent literature on Web survey design for further guidance. However, if design information is found that is in direct contradiction to the basic design principles outlined in this book and by other analysts, such as Dillman (2000), the reader is advised to exercise considerable care and judgement in making any decision to adopt a different procedure or design.

17.5 Concluding comments

Web surveys have become increasingly common in the past decade, especially in more developed countries, where computer access and internet access have become more widespread in the population. However, until virtually every household possesses a computer and a fast internet connection (such as broadband), the Web survey is not likely to be a stand-alone method of surveying in any situation in which a representative sample of the population is required. The exception to this would be when the population is defined as people with internet access.

In this chapter, a number of design issues are discussed that are specific to Web-based surveys and that are of considerable importance in achieving a high-quality survey instrument. Many existing Web surveys ignore some of these issues, with serious consequences to the quality and meaning of the data collected. Web surveys are likely to be an increasingly useful option, if designed carefully, for respondents who are computer-literate and have ready access to the internet. They share with self-administered paper surveys the asset of being able to be filled out at the convenience of the respondent, but also offer options for more correct and error-free responses, resulting from the interactive capabilities offered through the computer interface. As computers penetrate the population further, and internet access also penetrates further, together with improvements in internet download and upload speeds, the Web survey is likely to become an increasingly popular tool for survey researchers. Whether or not penetration of the population by computers and the internet will ever reach the levels of the telephone, so as to make Web surveys a method for obtaining representative samples, is clearly a matter of speculation at this time. Given recent trends in increasing computer penetration and connection to the internet, it may be some years in the future before this occurs in any area of the world.


18 Coding and data entry

18.1 Introduction

In years gone by, it was not uncommon to see on survey forms a dedicated column on each page for coding the responses. As noted elsewhere in this book, this should never be present on a self-administered survey, and it should often be avoided, if possible, on interviewer-administered surveys. In computer-assisted surveys of any type, whether by telephone, self-administered, or internet, it is completely unnecessary, because these surveys will record the computer-coded responses directly to an appropriate computer storage medium.

Coding is a step in which responses provided by the survey sample are converted into codes that permit computer-based analysis to be undertaken. Data entry is then the process of transferring these computer codes from the survey form or other intermediate medium to the computer. Coding is necessary for any responses on the survey form that are provided in a form that is not amenable to analysis. This will include questions that require the respondent to write in a response in words to a particular question, or to a question that offers multiple choices of response, with the possible responses being descriptors of some sort. For example, the set of unordered responses shown in Figure 18.1 (which is the same as Figure 9.10) requires some type of numeric coding to be useful for the analyst.

Even simple responses, such as ‘Yes’/‘No’, require coding. Indeed, any response that is not naturally a numeric scale will require some level of coding prior to data entry.

In this chapter, a number of issues are discussed concerning coding, following which some issues relating to data entry are also discussed. Poor coding or poor data entry can so compromise a survey that the results provided in computer files can be completely meaningless, thus destroying the value of the survey. It is therefore of some considerable importance that these issues are dealt with appropriately and that plans are put in place at the outset of the survey for how data coding and data entry are to be accomplished.


7. Which one of the following do you feel best describes this unit of study?

☐ Very useful to my degree
☐ Very relevant to my degree
☐ Both very useful and very relevant to my degree
☐ Neither very useful nor very relevant to my degree

Figure 18.1 An unordered set of responses requiring coding

18.2 Coding

The first area to be considered is that of data coding. Several issues should be considered under this heading:

(1) the coding of missing values;
(2) the use of zeros and blanks in coding;
(3) coding consistency;
(4) the coding of complex variables;
(5) geocoding – the coding of geographic locations to a reference system, such as latitude and longitude; and
(6) methods for creating codes.

18.2.1 Coding of missing values

If one were to review many different surveys over the past few years, one would find a wide variation in how missing data are coded, and often even variability within the same survey. In many cases missing data items have simply been left blank, while in other cases some numeric code may be assigned. One of the issues here is that missing data arise from at least three different situations. First, a respondent may not be asked a particular question because it is not relevant and there is a skip pattern in the survey that avoids the question. Second, there may be missing data because the respondent did not know the answer to the question. For example, a teenager in a household might be asked what the household’s after-tax income is, and may genuinely respond that he or she does not know. If a specific ‘Don’t know’ category is not provided, this will usually result in a missing value, which may again be left blank in many instances. Third, the respondent may have refused to answer the question, or, in a self-administered survey, may have overlooked the question and failed to provide a response. As is discussed further in Chapter 21, these different situations have very different implications for the quality of the data. As a result, any coding of missing data should be responsive to these different situations.

There are several possibilities when it comes to coding missing data. First, one may establish certain codes that are used throughout the entire survey consistently to indicate each of the three situations in the preceding paragraph. A common method that is used is to assign the value ‘97’ for not applicable answers – i.e., for responses to questions that were legitimately skipped by the respondent; ‘98’ is then assigned to indicate that the respondent does not know the answer; and ‘99’ is assigned to indicate a refusal or unintentional skip of the question. These values generally work quite well, except in instances when there may be a question with a numerical answer, for which one or more of these values may be a legitimate response. For example, the street number in an address may have a field of its own, and may legitimately include values of ‘97’, ‘98’, or ‘99’. In surveys in which these could be legitimate values for one or more attributes, a second option is to make a simple change of assigning negative values for the missing value codes – i.e., ‘–97’, ‘–98’, and ‘–99’ – such that these now provide codes that are appropriate and cannot be confused with any legitimate code in the survey. A third option is to define the missing codes on a question-by-question basis. In this case, it could be possible that an attribute that has six or fewer legitimate codes could use the values ‘7’, ‘8’, and ‘9’ to indicate the three missing conditions. An attribute that has legitimate values up to ninety-six or below could use the values ‘97’, ‘98’, and ‘99’, while an attribute that has legitimate values up to 996 or below could use ‘997’, ‘998’, and ‘999’ to signify the three missing conditions. However, a word of warning is appropriate: this use of individual question-by-question missing value codes is likely to prove cumbersome when processing and analysing data, so it is probably the least preferred method.

When values of ‘97’, ‘98’, and ‘99’, or their negative counterparts, are used, it is then a simple matter to obtain a frequency count of each of these values. The number of instances of ‘97’ will provide a check on the skip patterns and may also suggest future modifications to the survey, if these values seem to arise too frequently. A count of the number of instances of ‘98’ will provide useful information on the frequency with which survey respondents are unable to answer a question. Finally, a count of the number of instances of ‘99’ will provide a quick indication of the extent of item nonresponse in the survey (see Chapter 20).
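As a minimal illustration of how such frequency counts might be produced, the following sketch tallies the three missing-value codes for each field of a small, hypothetical data set; the variable names and the use of the pandas library are assumptions made for the example, not part of any particular survey.

```python
# A minimal sketch, assuming the coded data sit in a pandas DataFrame and
# that -97/-98/-99 are the missing-value codes adopted for the survey.
import pandas as pd

MISSING_CODES = {
    -97: "legitimate skip (not applicable)",
    -98: "'Don't know'",
    -99: "refusal or unintentional skip",
}

# Hypothetical coded responses for three questions.
data = pd.DataFrame({
    "income_code": [5, 10, -98, 25, -99, 15],
    "num_children": [0, 2, 1, -97, 3, -99],
    "uses_internet": [1, 2, 1, -98, 2, 1],
})

for column in data.columns:
    counts = data[column].value_counts()
    for code, meaning in MISSING_CODES.items():
        n = int(counts.get(code, 0))
        print(f"{column}: {n} instance(s) of {code} ({meaning})")
```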

It is recommended strongly that missing values, for whatever reason, should never be left blank. The reasons for this are discussed in more detail in the next subsection of this chapter.

18.2.2 Use of zeros and blanks in coding

It is still the case that a number of computer procedures do not distinguish between a zero and a blank in a field. For this reason alone, it is strongly recommended that blanks are never used as a legitimate code in any field of a survey, and that zeros are used only to indicate a response of ‘None’ or ‘Nothing’. In this way, every data field in the record will contain information, either in numeric or alphabetic form, depending on other coding conventions and requirements that are adopted. Even when blank and zero codes are distinguished by analysis software, a blank is not a good idea, because it can too easily be confused with a situation in which a code has accidentally been omitted. By ensuring that a blank is never used as a legitimate code, it becomes possible to search for blanks in the data and use these to flag places where there is a problem in the coding, either by a code having been skipped, or by data having been shifted from the appropriate fields. In the final data, there should be no possibility of finding a blank.
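By way of illustration, the following sketch scans a small, hypothetical coded data set for blank fields; reading the codes as text strings is an assumption made here so that empty cells can be detected directly.

```python
# A minimal sketch of a blank check on hypothetical coded data held as
# text strings; any empty cell indicates a skipped code or shifted data.
import pandas as pd

data = pd.DataFrame({
    "gender": ["1", "2", "", "1"],
    "num_cars": ["0", "2", "1", "3"],
})

blanks = data.apply(lambda col: col.str.strip() == "")
print(blanks.sum())                      # number of blanks in each field
print(data[blanks.any(axis=1)])          # records that need investigation
```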

In like manner, zero should be used only to indicate a response of ‘None’ or ‘Nothing’. It is not uncommon to find that certain binary fields are coded as ‘1’ or ‘0’ – e.g., when the answer is either ‘Yes’ or ‘No’, such that ‘1’ may be used to indicate ‘Yes’, and ‘0’ indicates ‘No’. Again, it is suggested that this is not a good coding procedure. The primary reason for this is that, when a zero and a blank are seen as the same, the analyst will count incorrectly any blanks in the responses as ‘No’, even though it may be that no response was given, or that the coding or data entry missed the value. A similar result would occur if ‘Male’ is coded as ‘1’ and ‘Female’ as ‘0’, or vice versa. By strictly limiting the code of ‘0’ to mean ‘None’ or ‘Nothing’, the interpretation of the coded results is made much more foolproof.

18.2.3 Coding consistency

Coding consistency has to do with using the same codes throughout a survey to indicate the same responses. The most common type of response requiring consistent coding is a response of ‘Yes’ or ‘No’ to questions. If there is inconsistency in the codes assigned from one question to another, when the response is either ‘Yes’ or ‘No’, then future analysis of the data is seriously jeopardised. Thus, for example, if the code ‘1’ is assigned to ‘Yes’ and the code ‘2’ to ‘No’, this should be established as the correct coding for these responses throughout the entire survey, irrespective of the question.

This is similar to the point made earlier regarding the use of consistent values to indicate a skipped question, ‘Don’t know’, and a refused question throughout the survey. Consistent values make it much easier to produce simple frequency and cross-tabulation reports from the data, and to filter the data, as necessary, for certain values. This also lends itself to the mass recoding of missing values to an internal missing value in some statistical analysis packages, as well as simplifying the command syntax to search for certain values or recode values for subsequent analysis. Labelling is also made much simpler.

Binary variables

It is recommended that binary responses be coded using the values of ‘1’ and ‘2’ consistently throughout the survey. Thus, for example, questions with ‘Yes/no’ answers and questions such as gender would all take these values of ‘1’ and ‘2’ for the potential responses. Skipped answers would be signified by ‘97’ or ‘–97’, ‘Don’t know’ by ‘98’ or ‘–98’, and refused answers by ‘99’ or ‘–99’. It is not particularly important whether ‘Yes’ is ‘1’ or ‘2’, but it should be established that ‘Yes’ is always denoted by the same value throughout the survey.

Numeric variables

When the answer to a question should be a number, the coded value should be the same as the number in the answer. For example, if the question is about the number of children in the household, then the valid codes should be ‘0’, ‘1’, ‘2’, etc. such that these indicate the actual response value. In past US Census Bureau data sets, numerical data have been coded such that ‘0’ indicates missing, ‘1’ indicates zero, ‘2’ indicates one, etc. This can result in serious issues in analysis if the analyst forgets about this mismatch between the coded value and the actual numeric value of the attribute. It is recommended that this should be avoided. Again, by using the dedicated values of ‘97’, ‘98’, and ‘99’ (or their negatives) to indicate the missing values, there is no need to mismatch the numeric value with the code.

In this way, for any variable that represents a count, values of ‘0’ will indicate zero, while non-zero values indicate the actual value of the count of this attribute, such as numbers of children, adults, workers, cars, driving licences, etc. Analysis of these variables is simplified by this coding convention, and the potential for mistakes in the analysis is largely avoided.

18.2.4 Coding complex variables

At issue here are situations in which a naturally continuous variable may be categorised in different ways in different surveys, especially when there may be different possible levels of aggregation that may be desirable for a categorised variable. A good example of this is provided by the variable of household or personal income (Stopher et al., 2008a). Different levels of detail can be achieved within one coding scheme by using multi-digit codes, in which the first digit represents the most aggregated level of reporting, the second digit offers some level of disaggregation, the third digit offers yet further disaggregation, etc.

Considering income as a useful case to illustrate, it is usual that income is best collected as a categorised variable. This is found to reduce the tendency of people to refuse to answer the question. Suppose that income is collected in $5,000 increments, but that the analyst may wish to aggregate these to $10,000 increments for comparison to a secondary data source. Suppose, further, that income categories start from under $5,000 and are collected up to a value of $160,000 and over. Table 18.1 shows a potential coding scheme for this situation. The usefulness of this coding scheme is shown, first, by the fact that the numeric codes are closely related to the lower values of each category and are therefore quite easily understood without the need for a coding guide to explain what they mean. Second, it is shown by the fact that aggregation to increments of $10,000 for each category can be carried out simply by dropping the third digit of the code. It should be noted that the choice was made to use negative values for the missing value codes in this case, because the coding scheme could produce potentially legitimate numbers that could be confused with positive missing value codes. The alternative would be to use ‘997’, ‘998’, and ‘999’ for the missing values.
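The aggregation step can be illustrated with a small sketch: dropping the third digit of the codes in Table 18.1 amounts to integer division by ten, with the negative missing-value codes passed through unchanged. The function name is hypothetical and the sketch simply assumes the coding scheme of Table 18.1.

```python
# A minimal sketch of aggregating the three-digit income codes of Table 18.1
# from $5,000 to $10,000 increments by dropping the rightmost digit.
def aggregate_income_code(code: int) -> int:
    if code < 0:           # missing-value codes (-97, -98, -99) pass through
        return code
    return code // 10      # e.g., 005 -> 0, 015 -> 1, 155 -> 15

for code in [0, 5, 10, 15, 95, 100, 155, 160, -98]:
    print(f"{code:4d} -> {aggregate_income_code(code):3d}")
```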

Similar principles can be applied to complex variables that are not numerically based, such as occupation, type of industry, type of land use at a location, etc. Again, a multi-digit code can be assigned, so that successive levels of aggregation can be achieved simply by dropping the rightmost digits of the code.


Another possible variable that provides a useful example of this type of coding is a question about whether or not an individual uses a mobile phone and/or the internet. As suggested by Stopher et al. (2008b), the coding scheme could follow the example shown in Table 18.2.

Other variables can be coded in this same manner, whenever there is a desire to collect data at a fine level of detail and potentially report it at a much coarser level of detail subsequently, while retaining the detailed information within the data file.

Table 18.1 Potential complex codes for income categories

Income category          Coded value
Under $5,000             000
$5,000–$9,999            005
$10,000–$14,999          010
$15,000–$19,999          015
$20,000–$24,999          020
$25,000–$29,999          025
$30,000–$34,999          030
$35,000–$39,999          035
$40,000–$44,999          040
$45,000–$49,999          045
$50,000–$54,999          050
$55,000–$59,999          055
$60,000–$64,999          060
$65,000–$69,999          065
$70,000–$74,999          070
$75,000–$79,999          075
$80,000–$84,999          080
$85,000–$89,999          085
$90,000–$94,999          090
$95,000–$99,999          095
$100,000–$104,999        100
$105,000–$109,999        105
…                        …
$150,000–$154,999        150
$155,000–$159,999        155
$160,000 and over        160
Legitimate skip          –97
‘Don’t know’             –98
Missing                  –99

18.2.5 Geocoding

Geocoding is the process of assigning geographic referencing information to locations reported in a survey. Usually, this is a coding procedure that is applied to address information, or information about a specific building or place. It is commonly required in transport surveys, but may also be required in a variety of other surveys, including time use, land use, and other surveys. Most commonly, the geographic referencing system to be used is latitude and longitude, but it may also be to jurisdictions, to an x/y grid that is created for a specific study, or to other areal units that may be created as part of a study. In transport applications, urban areas are often divided into small areas called traffic analysis zones, which may be the geographical referencing system that is being used, or it may be census geography in other cases.

Table 18.2 Example codes for use of the internet and mobile phones

Primary category          Code    Secondary category                          Code
No                        1                                                   10
Yes (both)                2       Internet shopping                           21
                                  Internet banking                            22
                                  Internet work-related                       23
                                  Internet research                           24
                                  Internet general surfing                    25
                                  Internet chat room/communication/e-mail     26
                                  Mobile phone work-related                   27
                                  Mobile phone non-work-related               28
                                  Other                                       29
Yes (internet only)       3       Shopping                                    31
                                  Banking                                     32
                                  Work-related                                33
                                  Research                                    34
                                  General surfing                             35
                                  Chat room/communication/e-mail              36
                                  Other                                       39
Yes (mobile phone only)   4       Work-related                                41
                                  Non-work-related                            42
                                  Other                                       49
Not applicable            97                                                  97
‘Don’t know’              98                                                  98
Refused/missing           99                                                  99

Readers should be warned that geocoding can represent a major expenditure in the execution of a survey and, if not correctly budgeted, can result in significant and substantial economic losses to the survey firm. Probably the major problem that arises in most surveys is the inability of people to provide accurate addresses for geocoding purposes. From many years of conducting surveys in which addresses are asked, the author has found that most people know their home address accurately, although some will not. Some people know the address of their workplace, but, if their occupation is such that they do not send out letters or other items that require a return postal address, then they may not know it. In addition, if a firm uses a Post Office box for return post, or an address that does not specify a street address, then the actual street location may not be known by many workers. Further, workers in major building complexes may know the name of the building in which they work, but may not know its street address. A good example is the author’s university, for which addresses are usually given by discipline, building name and code, the university name, the state, and the postal code of the university. No street address is used, and the Post Office has designated the entire campus as a suburb with the name of the university, so that there is no other suburb name in the address.

Schools are an even more difficult issue for address reporting. Most parents know the name of the school that their child or children attend and may know the street on which the main entrance of the school is located. However, it is very unlikely that they will know the street number for the school. The addresses of shops are also a major issue, especially when shops are located within a shopping centre, or even along a strip development. Many shops do not display a street number, and shops within a shopping centre or mall will usually not have a recognisable street for their address. Even the suburb in which the shop is located may not be known with any accuracy by most shoppers. The shop is probably known by the name of the shop and, possibly, the shopping centre name. Little else is likely to be known with any certainty. Similar issues arise with respect to medical offices, other professional offices (such as lawyers, accountants, etc.), post offices, and various other locations that people customarily visit in the course of their normal daily activities. This means that the addresses that people can report in a survey are, to a large extent, known only incompletely or not at all.

There are a number of software packages available that can assist with geocoding. However, most of these packages require a complete street address, consisting of the street number, street name, suburb name, state or province (if applicable), and postal code. Similarly, in undertaking manual geocoding – that is, by locating the address of the location on a map and then reading off the coordinates or other geographic referencing of the location – a complete street address is usually required. However, there are a number of steps that can be used to assist with these problems. Before discussing them, it is important to consider how addresses should be requested, especially given the difficulty that people have in knowing the addresses that may be requested.

Requesting address details for other places than home

In surveys in which respondents are asked to report addresses to which they may travel or at which they may undertake various activities, there are important guidelines for asking for the address information. While the inexperienced survey designer may assume that all that is necessary is to ask for the street number and name, the suburb or town, the postal code (zip code), and possibly a state or province, this is actually likely to become a threatening question to many respondents, because they do not know how to answer the question. At best, it will lead to missing address information for any place for which the respondent does not have those details, and, at worst, it may result in termination of the response.


The first line of the address information should ask for the business name, or building name, or place name to which the person travelled or which the person visited. The next line may ask for the street address, but with the alternative option to record the street and nearest cross-street. The next line of the address should be the name of the town, suburb, or other designation that may be appropriate to the country where the survey is being carried out. Finally, the question should ask for the postcode and state or province (where applicable). The question should be preceded by an instruction to the respondent to provide as much of the information as possible, but not to worry if he or she has only part of this information. An example of a possible question of this type is shown in Figure 18.2. Other formats are possible, but this shows an appropriate one that covers most of the issues with addresses.

Appropriate adaptations should be made to this question format for different countries, where address information may be provided differently. It is also usually necessary to remind respondents that a Post Office box number is not satisfactory, because the purpose of the address is to locate where the place is. Whatever the question format that is used, it is important to undertake a pretest or pilot study to ensure that respondents understand the question and are able, through the question format, to provide sufficient information to allow a high level of geocoding. There should also be minimal refusals to provide the requested information, and there should be no terminations of the survey as a result of the request for address information.

Pre-coding of buildings

The first step that can be undertaken to increase the potential capability to geocode addresses is to prepare a list of buildings by name or establishment and to use a telephone directory or an internet mapping tool to locate each such establishment and record for it the appropriate geocodes. This list is known as a gazetteer, or geographical list. With such a list in hand, it is then necessary only to be able to match the building or establishment name from a survey response to the pre-coded list to be able to determine the correct geocodes. In the event that there are many establishments with the same name, then the suburb or some other specific identifier will be needed to make a match.

5. Where is this? (please provide as much of this information as you are able to)

   Name of location: ______________________
   Type of place or business: ______________________
   Street address: ______________________
   Nearest cross-streets: ______________________
   City, state, zip code: ______________________

Figure 18.2 A possible format for asking for an address


This method can be applied, for example, to schools, supermarkets, shopping centres, major office buildings, fast food outlets, and so forth. In these cases, it is necessary for respondents only to be able to name the school, or supermarket, or office building, etc. and possibly identify the suburb in which the establishment is located, or provide the postal code, and the pre-coded list will provide a coding aid.

Interactive gazetteers

A gazetteer is a geographical dictionary that provides street, suburb, and postcode information for each building or business in an area. Preparing a gazetteer prior to the outset of a survey can be a very valuable asset, especially if the sample is fairly large (in the thousands) and if accurate geocoding is required.

If the survey is being done using computer assistance in an interview or self-administered situation, then the gazetteer can be made interactive. In this case, the gazetteer is attached to the location fields in the computer-assisted version of the survey, or in the internet survey. After providing some initial information on the location, the interviewer or respondent would be provided with a drop-down list of full addresses that can then be used to choose the correct location. For example, a respondent may indicate that he or she had lunch at a particular brand of fast food outlet. Having typed in the name of the brand as the name of the establishment, a drop-down list of suburbs might now appear for the respondent or interviewer to choose the correct one. Having clicked on the suburb, the computer then automatically inserts the correct geocodes for this location. Depending on the scale of the survey and the available budget, the gazetteer may be able to contain a complete listing of all known addresses in the survey region. In this case, as soon as part of a recognisable address is keyed into the computer, the balance of the identification of the location may be offered for confirmation, or alternatives may be offered, if there is more than one partial address match, so that the respondent or interviewer can then select the correct one, and a complete geocode can be entered into the survey file.
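The basic lookup logic behind such an interactive gazetteer can be sketched as follows; the establishment names, suburbs, and coordinates used here are entirely hypothetical, and a production gazetteer would of course be far larger and attached to the survey software itself.

```python
# A minimal sketch of a gazetteer lookup keyed by establishment name, with
# the suburb used to disambiguate establishments that share a name.
gazetteer = {
    "speedy burger": [
        {"suburb": "Newtown", "lat": -33.8960, "lon": 151.1790},
        {"suburb": "Parramatta", "lat": -33.8150, "lon": 151.0010},
    ],
    "central library": [
        {"suburb": "Sydney", "lat": -33.8690, "lon": 151.2100},
    ],
}

def lookup(establishment, suburb=None):
    """Return candidate geocodes; a suburb, if given, narrows the choice."""
    candidates = gazetteer.get(establishment.strip().lower(), [])
    if suburb is not None:
        candidates = [c for c in candidates
                      if c["suburb"].lower() == suburb.strip().lower()]
    return candidates

print(lookup("Speedy Burger"))             # two candidates: offer a drop-down
print(lookup("Speedy Burger", "Newtown"))  # one candidate: geocode directly
```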

Other forms of geocoding assistance

There are a number of other potential ways in which geocoding can be assisted. One that is quite useful is to employ as geocoders people who have an extensive geographic knowledge of the region in question. For example, taxi drivers may be helpful in undertaking geocoding, because in their jobs they may often be given very incomplete information by persons seeking to hire their taxis, and they have become used to finding the correct address from that incomplete information.

Part-time or retired delivery personnel are also, potentially, good geocoders, because their jobs may have required them to learn the region well, and especially to know where various roads and streets are and the location of many buildings that are normally known only by name. Any others whose present or previous jobs have required an extensive knowledge of address locations will potentially be very valuable as geocoders.


Locating by mapping software

There is an increasing availability of mapping software on the internet, most of it free, which has the capability to locate addresses on a map. Even when only a partial address is available, or a building name or business name is all that is available, these various internet mapping capabilities are often still able to locate the place correctly on a map, or provide a location that is close enough to be able to determine where the building is located. For example, it is not uncommon for surveys asking for address information to request respondents to provide the nearest cross-streets, if they do not know the actual address. These are locatable with most internet mapping services and can then be used to get close enough for most geocoding purposes. With such software, it is then possible to locate the address by pointing on a geographic information system (GIS), which would allow the latitude and longitude to be read off for inclusion in the necessary coded file.

Similarly, some surveys will also ask for a building name as part of the address. If a respondent provides only the building name, and possibly the town in which it is located, there are software packages that have the potential to assist. Again, many of the software packages available for geocoding have the capability of providing latitude and longitude for a building name.

The biggest problem that arises with mapping software is the level of accuracy required in the spelling of any of the elements of the address to be geocoded. Some software is very unforgiving and requires that building names, street names, towns, cities and suburbs are all spelt exactly correctly. This can be a significant problem, because many respondents do not know correct spellings and provide phonetic, but not accurate, spellings for various parts of the address. Some software is more forgiving and will be able to find address elements that are not more than a few letters wrong, while others may be able to find the address if the spelling is at least phonetically recognisable. The user of such software is advised to check the capabilities of geocoding software. The more flexible the software is to misspellings, the better will be the resulting geocoding. Of course, there can be errors in the other direction, in that there may be more than one building, street, or town that sounds phonetically similar. In such cases, a flexible software package may give incorrect results for geocoding.
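A simple way to tolerate minor misspellings before geocoding is approximate string matching; the sketch below uses the get_close_matches function from Python's standard difflib module against a hypothetical list of suburbs. As the discussion above cautions, such matching can also produce false matches between similar-sounding names, so any automatic correction should still be checked.

```python
# A minimal sketch of tolerating misspelt suburb names before geocoding,
# using the standard-library difflib module; the suburb list is hypothetical.
from difflib import get_close_matches

known_suburbs = ["Parramatta", "Cabramatta", "Woolloomooloo", "Kirribilli"]

def best_suburb_match(reported, cutoff=0.7):
    """Return the closest known suburb, or None if nothing is close enough."""
    matches = get_close_matches(reported.title(), known_suburbs, n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(best_suburb_match("Paramatta"))       # -> 'Parramatta'
print(best_suburb_match("Wooloomooloo"))    # -> 'Woolloomooloo'
print(best_suburb_match("Springfield"))     # -> None
```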

As a result of the various issues with respect to addresses, it is probably prudent always to check the results of geocoding. An example of the type of problem that can arise occurred for the author in a survey in Los Angeles some years ago. During the geocoding, one respondent provided as the address to which he travelled ‘The Naval Base’, and the geocoding identified this as the naval base in Ventura County. The person had walked to the naval base from his home in Long Beach, and had done so in about five or six minutes. The distance between Long Beach and the naval base in Ventura was about seventy miles, which then obviously produced a quite unbelievable speed of walking. The correct geocoding should have been to the Long Beach Naval Station, which was, in fact, less than half a mile from the home address of this respondent. Correct identification of the workplace generated a walking speed of around four miles per hour, a perfectly reasonable speed, compared to the implied speed to the wrong naval base of some 700 miles per hour! Not only did this produce a very odd outlier in the tabulated data, but it was not caught until after some modelling had been attempted, with some very unexpected and counter-intuitive results obtained for coefficients in a model. Hence, it is necessary to make careful checks of the results of geocoding, to ensure that the location is within reason.

18.2.6 Methods for creating codes

The foregoing discussion of various aspects of coding should make it quite clear that the creation of codes for various entries in a survey is not a trivial task. Rather, it is a task that requires considerable care and attention to detail. When the answers to a question are numeric, it is best to use a corresponding numeric code to identify each possible answer. In other cases, specific codes need to be created.

The first step in creating codes is to determine the maximum level of detail that is likely to be required from the coded data. Usually, this should match all the categories in a question with categorical responses, but it is much more difficult to ascertain in an open-ended question. As a general rule, the best method for creating codes for an open-ended question is to await, first, the responses from a pilot survey (see Chapter 12) and then, second, probably to construct multi-level codes, similar to those discussed in subsection 18.2.4 in this chapter. During the actual coding of the responses from the survey, it is usually the case that the multi-level coding will readily allow the insertion of additional sub-codes that can account for responses that were not anticipated in the initial coding scheme.

Alternatively, if a multi-level coding system is not appropriate, then the codes should be set up so as to allow the addition of further codes as may be needed, with these additional codes being determined during the coding activity for the survey. This means that, for example, for a question for which there appear initially to be no more than four or five valid codes required, a single-digit code might be defined, with the potential to add codes up to a total of ten values, plus the missing data codes. However, when initially anticipated responses already require close to ten codes or at least double-digit codes, then the codes should be set up initially as two-digit codes.

When responses are given to questions that have been asked in a prior survey or are asked in the census, it is usually a good idea to consult the coding from the prior survey or the census to provide guidance on the codes to be used. This is especially important when there is the intention later on to make comparisons between the current survey and either the prior survey or the census. It is tedious and a potential source of significant error if codes are inconsistent and conversion is required to match the coded values. Therefore, it is always wise to consult any supplementary information that may be used subsequent to the survey and to create consistent codes with such data.


18.3 Data entry

Compared to coding, data entry has become a relatively minor issue in many surveys. When computer-assisted means are used to conduct the activity, such as CAPI, CATI and Web surveys, there will be no data entry activity as such, but the respondent or the interviewer, in entering responses to the questions in the survey, is automatically also performing data entry. However, one of the potential pitfalls with such direct data entry is the lack of a second record for verification of the entered data. In CAPI, CATI, and Web surveys, the only record of each data item will be the one entered into the computer during the survey. This makes it extremely important that a variety of logic checks are built into the CAPI, CATI, or Web survey programme, to check for potentially unlikely or impossible combinations of codes, as well as the more obvious ones of ensuring that all entries are within the expected range. Beyond this, there is relatively little that can be done to ensure that the data items are entered correctly. Of course, for interviewer-assisted entry, the training of the interviewers in data entry is important, with particular stress on the need to be extremely careful to enter the correct code or to mark the correct response. Ideally, data entry for all three of these procedures should, as far as possible, avoid the requirement to type in numbers, letters, or words, and should use closed-ended questions, in which the requirement is simply to click on the appropriate response option. The more typing that is required for the entry, the greater will be the potential for error in recording the information provided by the respondent.
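The kind of range check referred to here can be sketched very simply; the question names and permitted code sets below are hypothetical, and in practice such checks would be built into the questionnaire logic of the CAPI, CATI, or Web survey software rather than written separately.

```python
# A minimal sketch of an entry-time range check against the set of legal
# codes for each question (question names and code sets are hypothetical).
ALLOWED_CODES = {
    "gender": {1, 2, 97, 98, 99},
    "uses_internet": {1, 2, 3, 4, 97, 98, 99},
    "num_cars": set(range(0, 11)) | {97, 98, 99},
}

def validate(question, code):
    """Return True if the entered code is legal for this question."""
    return code in ALLOWED_CODES[question]

assert validate("gender", 2)
assert not validate("gender", 3)    # out of range: prompt for re-entry
assert validate("num_cars", 0)      # zero legitimately means 'none'
```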

This mandate to avoid, to the greatest extent possible, the need to type in letters, numbers, and words also serves another purpose in each type of survey. For CAPI and CATI surveys, it minimises the time required for the interviewer to enter data from the respondent, thereby promoting a smooth flow from question to question, and avoiding long pauses, while the interviewer types in a response to a question. For Web surveys, it minimises respondent burden in providing answers to the questions. However, there are often unavoidable situations in which a typed answer is necessary. In transport surveys, for example, it is almost always necessary for respondents to provide address information. Such information will always require typing in the address details. For such questions, interviewers need to take care to type in the details with as little error as possible. The use of geographic gazetteers to provide pop-up details of addresses is therefore of great help in avoiding typographical errors that could lead to unrecognisable addresses.

When a survey is conducted using paper forms and written entries, or audio recordings that are to be transcribed later, the advantage is that a record will exist that can be used to cross-check the entered data. In the case of paper forms, increasingly there are three options available for data entry. The first is transcription entry, in which there is a keyboard operator who reads the information from the survey form and types in the data to a programme of some type on the computer. The second is the use of a mark-sensing form, whereby those filling out the survey do so by marking response areas that are designed to be machine-readable. The third, which is still not well developed, is the use of scanning procedures to scan the filled-out paper forms and record the data into the computer.

For survey forms that are filled out by a respondent or interviewer and then transcribed into the computer, it will usually be found helpful to use entry screens for the data that either look exactly like the survey form pages or represent an electronic version of the response possibilities, similar to what would be used in an actual CATI, CAPI, or Web survey. Some software products on the market already offer procedures that allow relatively easy construction of such entry screens, with the data so entered being stored in an appropriate database. This usually offers the potential as well for the data entry programme to provide options to build in checks on a range of data and cross-checks to other entered data. However, in this case, an override to checks may be required, in the event that the paper survey has recorded what is assumed to be an illegal combination of codes. This is quite likely to happen in self-administered surveys.

Mark-sensing forms may be used in some applications, but such forms are often not respondent-friendly and may present a real barrier to respondents. The first problem with many mark-sensing forms is that they require a rather precise marking of the response, usually using a specific grade of pencil, because most mark sensing uses the graphite in the pencil lead to complete an electrical circuit, thereby recording the correct response to the computer. Normally, there are also specific requirements for how the marks are to be made. Partial marks or marks that fall outside the window area provided will often not record correctly. Requiring respondents to follow these rather precise procedures for completing a mark-sensed form may lead to higher nonresponse rates, and to the receipt of a number of forms that will require remediation to be able to be used. Figure 18.3 shows an excerpt from a mark-sensing self-administered survey. As can be seen, the instructions are quite specific as to how to make the required marks; failure to follow these instructions will render the resulting survey unusable for mark sensing.

On the other hand, mark-sensed forms for interviewer use are potentially more valuable. Interviewers can be instructed on how to use the forms correctly and can be provided with the appropriate marking pencils. The use of a mark-sensing form can, in this case, speed up considerably the data entry process and remove one potential source of error: transcription error.

The third option, which is likely to become increasingly possible in the future, is to scan survey forms into a computer using software that is able to produce an editable file and that can also record the information on the survey form directly to a database. Although this is a method that is becoming increasingly feasible, the author is not aware of any applications of it to major surveys to date. It is likely to be a method that will become more usable in the future. It appears that it is used quite extensively to record data from small forms, such as those used by the immigration departments of many countries for controlling the entry and exit of people. In such uses, the precise accuracy of the recording may not be as high as that required in surveys. One comparative study of scanning and manual data entry does appear (Bright and Lenge, 2009), but it also cautions that the costs of this method are quite high. The software and hardware for a single scanner are estimated to cost A$30,000, but, as character recognition improves, this is likely to be an increasingly feasible method of data entry. However, it is noted that the design of the survey form is critical for scanning to work well. It does require that as many questions as possible are answered by marking a box in the appropriate manner, and it is noted that ‘messy writing’ can be hard, if not impossible, to interpret through scanning. Even different ways of writing numbers into prepared boxes can result in difficulty in interpreting the numbers, with ‘1s’ and ‘7s’, and ‘5s’ and ‘8s’ being particularly prone to misinterpretation.

Whatever method is used for data entry, and irrespective of the built-in error-checking procedures, following data entry it is necessary to perform a series of tests and checks on the data. The most useful of these are frequencies of the various values in numeric value codes that represent categories, and the mean, median, maximum, and minimum values for other numeric codes, when meaningful. These will often identify values that may need to be looked at again, because they are unexpected or occur with too high or too low a frequency. Next, various cross-tabulations or pivot tables should be created, in which variables that are related to one another are shown with the frequency of occurrence of various pairings of values. Again, this should help to identify any unexpected combinations of values, as well as any potentially invalid combinations not otherwise flagged and removed.
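These post-entry checks are straightforward to script; the following sketch produces frequency counts, numeric summaries, and a cross-tabulation for a small, hypothetical data set using the pandas library (the variable names are assumptions made for the example).

```python
# A minimal sketch of post-entry checks: frequencies for categorical codes,
# summary statistics for a numeric field, and a cross-tabulation of two
# related variables (all data below are hypothetical).
import pandas as pd

data = pd.DataFrame({
    "age_group":   [1, 2, 2, 3, 3, 3, 4],
    "has_licence": [2, 1, 1, 1, 1, 2, 1],   # 1 = yes, 2 = no
    "num_trips":   [2, 5, 4, 6, 3, 0, 7],
})

print(data["age_group"].value_counts().sort_index())    # frequency check
print(data["num_trips"].describe())                     # mean, min, max, etc.
print(pd.crosstab(data["age_group"], data["has_licence"]))
```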

Whenever possible, original copies of survey materials should be kept, carefully stored, and organised so as to be reasonably readily accessible, so that checks can be made from the computer record against the original fieldwork record. This will apply only to paper-based survey forms, because direct computer entry provides no original source. However, raw data files that are created by computer-assisted interview software and Web survey pages should also be stored for subsequent access, in case questions arise in the future about any attribute values that have been changed as a consequence of the editing and checking procedures.

[Figure content: an excerpt from a mark-sensing self-administered survey form. The instructions tell respondents to use a blue or black biro or a pencil (preferably 2B), not to use red pen or felt-tip pen, to mark responses only in the prescribed manner, to erase mistakes fully, and to make no stray marks. Two example statements – ‘The learning outcomes and expected standards of this unit of study were clear to me’ and ‘The teaching in this unit of study helped me to learn effectively’ – are each rated on a five-point scale from ‘Strongly disagree’ to ‘Strongly agree’, with space below each statement to explain the reasons for the rating.]

Figure 18.3 Excerpt from a mark-sensing survey


18.4 Data repair

In Chapter 20, the broad topic of nonresponse is discussed, part of which deals with item nonresponse – i.e., the failure to obtain an answer to a question that should be eligible for answer. In relation both to item nonresponse and erroneous responses, there may be a need to repair the data, or even impute values, when the missing or erroneous data are key to analysis, and simple repair is not possible. The topics of data imputation and data inference are left to Chapter 20, but as they are germane to the issue of data repair in general the reader is referred to that chapter for further information.

When erroneous data are found during the data-entry-checking process, there are two possibilities that arise. First, one or more items of information requested from the respondent may have been entered incorrectly into the database. In this case, repair may be a matter simply of consulting the original records and correcting the computerised data to match the original responses. However, when the computer record is the only record available, as in CATI, CAPI, and Web surveys, the incorrect data entry cannot be repaired as readily. This also is true of the second case, which is when the respondent has provided incorrect information. This can, of course, occur both in paper surveys and in computer-assisted surveys, and, in the latter, will be indistinguishable from the situation in which the interviewer or respondent entered an erroneous code. Unfortunately, in these cases, there is little that can be done to repair the incorrect data entry, in the sense of replacing it with the correct value. In such cases, the initial procedure that should normally be adopted is to replace the erroneous code with the missing data code (see earlier in this chapter). In some instances, it may also be unclear as to which of two or more values is incorrect, or if several items are incorrect. For example, if someone claims to be under the age at which driver’s licences are issued, but then claims to have a driver’s licence and to have driven a car to a particular location, it may be unclear as to whether it is the age that is wrong, or the possession of a driver’s licence, or of having driven a car to reach a particular destination. In such a case, it may be possible to infer which item is incorrect from other information provided in the survey responses, or even by looking at responses (if possible) from other members of the same household. This situation is also covered in further detail in Chapter 20, in the subsection on inference.
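A contradiction of the kind just described can be flagged automatically before any decision is made about how to repair it; the sketch below does this for a hypothetical licensing age of sixteen and hypothetical variable names, leaving the decision about which item to treat as wrong to the rules discussed in Chapter 20.

```python
# A minimal sketch of a cross-field consistency check: flag records in which
# the reported age is below a (hypothetical) licensing age of 16 although a
# driver's licence and a car-driver trip are also reported.
import pandas as pd

LICENSING_AGE = 16

data = pd.DataFrame({
    "person_id":   [101, 102, 103],
    "age":         [14, 35, 15],
    "has_licence": [2, 1, 1],     # 1 = yes, 2 = no
    "drove_car":   [2, 1, 1],     # 1 = yes, 2 = no
})

conflict = (data["age"] < LICENSING_AGE) & (data["has_licence"] == 1)
print(data.loc[conflict])   # records whose items contradict one another
```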

When data are repaired by correcting an erroneous data entry from an original record, such as a paper survey, no further action is required in the data set. However, when a value that has been recorded in the data set is determined to be wrong and there is no way available to determine the correct response, it is important to flag this fact in the data set, so that future users of the data set are made aware that a value was changed. This may be as simple as including a separate variable in the data set that indicates by a coded value which data items were changed. This allows future users of the data to agree or disagree with the changes made and provides the opportunity to reanalyse the data with a different interpretation of the missing or incorrect items. It also allows a count to be made of the number of such items, which may be used as a measure of the quality of the survey (see Chapter 21).
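One simple way of implementing such a flag is a companion variable alongside each repaired field; the flag codes used below are purely illustrative assumptions, not a standard.

```python
# A minimal sketch of flagging repaired items. Hypothetical flag codes:
# 0 = value as reported, 1 = corrected from the original paper form,
# 2 = replaced by the missing-value code because no correction was possible.
import pandas as pd

data = pd.DataFrame({"age": [14, 35, 42]})
data["age_repair_flag"] = 0

# Record 0 held an impossible value and there was no original to consult,
# so it is replaced by the missing code -99 and flagged accordingly.
data.loc[0, "age"] = -99
data.loc[0, "age_repair_flag"] = 2

print(data)
print("items changed:", int((data["age_repair_flag"] != 0).sum()))
```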


However, a word of caution is extremely important here. Data repair can be a dangerous pitfall. Inappropriate or inconsistent repairs to a data set can seriously damage it and may render the data set useless. It is wise to err on the side of caution about changing any values that have been provided by respondents, unless there is clear, objective evidence that an incorrect response has been provided and that the repair is made in a fully objective manner. Any possible subjectivity in repairing data will result in biases being introduced into the data. These biases may result in sufficient errors in analysis of the data that the results of the analysis cannot be accepted or trusted. Simply omitting, for example, a value that is considered to be an outlier – i.e., an extreme value and an extremely unlikely one – and replacing this value either with what is considered to be a more reasonable or likely value, or replacing it with a missing value, may have the effect of removing data that are actually correct and that would result in different conclusions being drawn from analysis of the data.

The primary defence against such incorrect data repair is to set out rules on what is to be repaired and how it is to be repaired, and then to store this as part of the metadata (see Chapter 23) belonging to the data set. Such rules will ensure objectivity in the repair of the data, and will provide a clear record of what was considered necessary to change and why. They will also allow future analysts to determine the potential effect of such data repairs on analysis results. Further, when data repair is required, two copies (at least) of the data should be kept: one that represents the raw data without any changes and repairs having been made, and the second containing the final data set after any repairs that are required by the rules for data repair have been made. With these two data sets, any future analyst or researcher can establish the potential effects of the repairs that have been made, or even employ a new set of objective rules to repair the raw data, should there be disagreement on the original rules of data repair.


19 Data expansion and weighting

19.1 Introduction

Data expansion and data weighting are often considered as part of the same process, but they should in fact be distinguished from one another. Data expansion is simply the procedure of multiplying each observation in the data by a factor that represents how many members of the population are represented by that observation. Data weighting is the procedure of developing multiplication factors that attempt to correct for biases in the sample design that have been introduced, either intentionally or unintentionally, into the sampling and surveying process. Both data expansion and data weighting have, as their principal goal, the purpose of providing as accurate an estimate as possible of population statistics from a sample.

There are many instances in data collection when there is no interest in producing population statistics from the data. In many instances of research and practice that are aimed at developing statistical models of a process, using data collected from a sample, there may be no need or interest to expand the data to the full population. Indeed, it is a property of a number of modelling procedures that, provided an appropriate specification of the model is utilised, biases in the data will not affect most of the estimated parameters of the model, so that neither weighting nor expansion of the data are necessary. This is true, for example, for choice modelling using the method called multinomial logit analysis (Manski and Lerman, 1977; Stopher and Meyburg, 1979), in which, provided that a full set of alternative-specific constants are specified, the coefficients of all model variables will be unbiased estimators, even if the input data are significantly biased. Further, in this particular case, alternative-specific constants that are incorrectly estimated can be corrected through a simple process (Manski and Lerman, 1977).

However, there are a number of situations in which expansion of the data and correction by weighting are desirable, if not necessary. The main purposes include:

(1) to check the validity of sample data;
(2) to provide for comparisons to census and other data;
(3) to provide for comparisons to prior surveys;
(4) to remove concerns of potential bias or to provide details of potential biases and how they can be overcome;
(5) to provide descriptive data about the population; and
(6) to provide profiles of population groups and subgroups.

In the balance of this chapter, the topic of data expansion is addressed first, followed by data weighting.

19.2 Data expansion

To be able to expand a sample survey to the population from which the sample was drawn, there are two principal requirements. First, the population must be finite and its size must be known. Second, the sample must have been drawn by using a random sampling procedure, or a close approximation to a random sampling procedure. If either or both of these conditions is not met, then data expansion is not possible.

19.2.1 Simple random sampling

Simple random sampling is the simplest procedure by which to illustrate data expansion. SRS can be carried out only when the population is finite and known. Further, SRS itself meets the second criterion for the expansion of data. If data have been sampled at a sampling rate of f (= n / N), then the expansion factor for every observation in the sample is simply the inverse of the sampling rate, usually written as g (= 1 / f = N / n). Thus, suppose that a population of 1 million households has been sampled at a sampling rate of 1 per cent, then the sample consists of 10,000 households. Each household therefore represents 100 households in the total population, and 100 is the expansion factor that would be applied to each and every sample observation.

19.2.2 Stratified sampling

In proportionate sampling (stratified sampling with a uniform sampling fraction), the expansion factor is the same as for simple random sampling, because every sample observation was sampled at the same rate as every other observation, and there is only one sampling fraction and hence one expansion factor. In disproportionate sampling (stratified sampling with a variable sampling fraction), each stratum is sampled at a different sampling rate. Therefore, there will be a sampling rate within each stratum, and each stratum will have its own distinct expansion factor. Thus, each observation in the data set will receive an expansion factor that is determined simply by its membership in a stratum.

For example, suppose a population is stratified into five strata, with sampling rates of 1 per cent, 2 per cent, 0.5 per cent, 1.5 per cent, and 4 per cent. Then, the expansion factors of the five strata will be 100, 50, 200, 66.6667, and 25, respectively. In other words, each sample observation in the first stratum will represent 100 members of the stratum 1 population, each sample observation in the second stratum will represent fifty members of the stratum 2 population, and so forth. When the expansion factors are applied correctly, the total number of members of the expanded sample will equal the total number of members of the population, apart from any rounding errors in the calculation of the expansion factors.
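The calculation can be sketched very simply; the stratum population sizes below are hypothetical, while the sampling rates are those quoted in the example, and the sketch confirms that each stratum expands back to its population total.

```python
# A minimal sketch of stratum-specific expansion factors g = 1/f = N_h/n_h,
# using the five sampling rates quoted above; the stratum population sizes
# are hypothetical.
strata = {
    # stratum: (population size N_h, sampling rate f_h)
    1: (200_000, 0.01),
    2: (150_000, 0.02),
    3: (400_000, 0.005),
    4: (180_000, 0.015),
    5: (70_000, 0.04),
}

for stratum, (N_h, f_h) in strata.items():
    n_h = N_h * f_h            # expected sample size in the stratum
    g_h = 1.0 / f_h            # expansion factor for every observation
    print(f"stratum {stratum}: n = {n_h:.0f}, g = {g_h:.4f}, "
          f"expanded total = {n_h * g_h:.0f}")
```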

19.2.3 Multistage sampling

In multistage sampling, there will be different sampling fractions at the different stages of the sample, and there may also be different sampling methods used, so that some stage or stages may use disproportionate sampling, while others may use proportionate sampling or simple random sampling. In this case, it is possible to calculate a sampling fraction at each stage of the sampling and, therefore, also an expansion factor. The final expansion factor for each observation will be the product of the expansion factors for each stage of the sampling.

For example, suppose a disproportionate sample is drawn at the first stage, using five strata with sampling rates of 2 per cent, 5 per cent, 10 per cent, 20 per cent, and 25 per cent, then the expansion factors for this stage will be 50, 20, 10, 5, and 4, respectively. Suppose that a simple random sample is drawn at a rate of 0.1 per cent at the second stage. At this stage, the expansion factor is therefore 1,000 for all observations. This will mean that the expansion factor for observations that are in stratum 1 of the first-stage sample is 50,000 (= 50 × 1,000), while that of sample units from stratum 2 of the first stage will be 20,000, and so forth.

Similar computations can then be carried out for more stages in a multistage sample. It is a simple matter of multiplying the expansion factors from successive stages, being sure to maintain correct expansion factors for each stage, especially when stratification is used.

19.2.4 Cluster samples

Expansion factors for cluster samples are similar to those for multistage sampling, with the exception of the final stage. Because the final stage in a cluster sample is a census of the units in the clusters, there is no additional expansion that occurs at this stage. Suppose that a national sample of households is to be drawn from a nation that has no listing of household addresses that can be used for sampling, using a cluster sampling technique. It is decided to sample first from the counties of the nation, using a simple random sample of counties. Within counties a stratified sample is to be drawn of townships, and within townships a simple random sample of residential blocks is drawn. All households on each sampled block are to be surveyed. Suppose that the sampling rate for counties is one-tenth, and that the sampling rates for the strata of townships are 2 per cent, 3.3333 per cent, 1 per cent, and 0.5 per cent for the four strata. The sampling rate for blocks is one in 100 or 1 per cent. The sampling rate for households within the blocks is, of course, 100 per cent. For the first stratum of townships, the expansion factor will be 10 × 50 × 100 (× 1) = 50,000. For the second stratum, it is 10 × 30 × 100 (× 1) = 30,000. Similar computations would apply to the other two strata, using expansion factors of 100 and 200 for the second stages of each of those two strata, respectively.


Note that the final-stage expansion factor is 1 in each case, because the entire cluster (the residential block) is sampled.
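
The cluster example above can be sketched in the same way; note that the final (census) stage contributes a factor of 1 (the values below are simply those of the example, and the variable names are invented for the illustration):

    # Cluster-sample sketch: county rate 1/10, four township strata, block rate 1/100,
    # and a census (rate 1.0) of households within each sampled block.
    county_rate = 0.10
    township_rates = {1: 0.02, 2: 0.033333, 3: 0.01, 4: 0.005}
    block_rate = 0.01
    household_rate = 1.0   # all households on a sampled block are surveyed

    for stratum, t_rate in township_rates.items():
        factor = (1 / county_rate) * (1 / t_rate) * (1 / block_rate) * (1 / household_rate)
        print(stratum, round(factor))
    # 50,000; 30,000; 100,000; 200,000 for the four strata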

19.2.5 Other sampling methods

Other sampling methods follow the same principle as described here for the major probability sampling methods. For example, if a systematic sample is drawn at a rate of one in fifty, then the expansion factor is simply 50 for each observation. This will be the case even if different starting points are used throughout the sampling, to introduce some semblance of randomness into the sampling. In the case of a systematic sample of a population that is not already enumerated, such as sampling shoppers arriving at a shopping centre, with, say, every twentieth shopper being approached for the survey, it may be necessary to undertake a count of the total population from which the sample is drawn for the expansion to be carried out correctly.

However, it is important to note that expansion is only part of the story in human population surveys, as well as some other surveys, in which there is any form of non-adherence to the sample design, such as failure to complete a survey with a selected sampling unit. Such non-adherence to the sampling design requires weighting of the data. It is also important to note that data expansion is not possible if the sampling is from an undefined population – i.e., a population that has an unknown size. As noted earlier in this book, only samples from populations that are known can provide sampling rates or fractions, and therefore only such samples can be expanded.

19.3 Data weighting

As noted in the preceding sections of this chapter, data expansion correctly refers simply to the process of estimating population statistics from sample statistics, based on the sample design. Weighting is the process that is required to attempt to correct for biases in the data. The most frequent occasion for weighting is nonresponse, which is by far the most common cause of non-adherence to the sample design in human subject surveys.

When the weighting of a sample is to be carried out, it must be performed for specific characteristics of the population that are key characteristics or criterion variables for which bias is considered to be unacceptable, or that are deemed to be the most useful characteristics against which to check for biases. For example, a common occurrence in human surveys is that people from households of different sizes have different willingness to respond to a survey. Often those in small households (mainly one-person households) and those in large households (those with five or more members) are less likely to respond: the former because of concerns about invasion of privacy and the vulnerability created by the information collected by the survey, and the latter because the survey may represent an increased burden for larger households. Therefore, the first step in weighting is to decide on which characteristics of the population weighting is to be carried out. It is not necessary to restrict the weighting to just one characteristic, although the procedures to weight the sample become increasingly complex as more characteristics are used. Most commonly, the characteristics that are chosen are those that are also available in some secondary data source, such as a population census. This is because comparison between the sample and the secondary source provides an indication of the possibility of bias and, as is explained later in this chapter, provides the easiest means to weight the sample. However, it is by no means the case that the characteristics are always those available in secondary data sources. It may be that the most important characteristics for use in weighting a particular sample are not available from other sources. Therefore, there are two alternative situations in which weighting needs to be carried out for nonresponse: first, actual population totals for the selected characteristics are not known from other sources; and, second, actual population totals for the selected characteristics are known from other data sources. Each of these situations is described in the following sections.

19.3.1 Weighting with unknown population totals

When the population totals on the key characteristics for weighting are not known, it is necessary that a record is kept throughout the survey on all nonrespondents. Nonrespondents fall into two major categories. First are nonrespondents who refuse the survey before any questions are answered and therefore provide no information whatsoever (normally referred to as 'refusals'). A count of these nonrespondents is normally required. Second are nonrespondents who answer some questions, but fail to answer sufficient questions to be considered complete responses (normally referred to as 'terminations'). Not only is a count of these nonrespondents needed, but the answers to those questions to which they do respond must be recorded for use in the weighting. Indeed, in the case in which population totals are not known for the characteristics in question, these terminations are a very valuable source of important information on the biases that may exist in the survey, and are usually the only such source.

Different situations with respect to nonrespondents will arise from different methods of surveying. With self-administered surveys, such as postal and internet surveys, it is most often the case that there are only refusals, although there may be a small number of cases in which respondents have partially completed the survey and still sent it back by post or submitted it online. In interviewer surveys, whether by telephone or face to face, there will usually be a mixture of refusals and terminations. Considering this aspect of the survey methodology, it is appropriate to consider two cases. In the first case, there are only refusals and there are no terminations. In such a case, when supplementary data on the characteristics for weighting do not exist, no weighting is possible: the only assumption that can be made is that the nonrespondents are distributed identically to the respondents.

If there are terminations in the data set, then the possible assumption is that refusals and terminations are identically distributed with respect to the population characteristics to be used for the assessment of bias and potential weighting of the data.


Now the appropriate test is to determine if the terminations have a different distribution from the completed data on the key characteristics. If the distributions are the same, then no bias is present as detected by the key variables. If the distributions are different, then there is prima facie evidence of bias, and weighting should be undertaken.

Assuming that it is found that the terminations have a different distribution on the key characteristics and that weighting is, therefore, necessary, then the procedure to weight the data is as follows. For each characteristic that has been selected to assess bias, determine the distribution of the terminations and of the completed surveys. Combine these two to form a composite distribution and estimate the proportions of the combined terminations and completes for each category combination of the key characteristics. Compute the ratio of the combined proportion to the proportion in just the complete surveys. This ratio is the appropriate weight by which to multiply the observations within the complete survey. An example is helpful to illustrate the procedure.

An example

Suppose that a household survey has been undertaken, using telephone interviews. Assume that the key characteristic to be used to assess bias is that of household size. It is assumed here that the question on household size was asked early in the survey, so that most terminating households did answer this question. Of course, the outright refusals do not provide any data on household size. The survey results are assumed to be distributed as shown in Table 19.1.

It is clear from an examination of this table that the completed surveys are distributed differently from the terminated surveys, with a much larger proportion of terminations for one-person households and a much lower termination rate for three-person households. As a result, correction for bias appears to be required. This is done by calculating weights, as shown in Table 19.2.

In Table 19.2, the top row is the sum of the first and third rows of Table 19.1, and the second row is the percentage that these represent of the total of the completed and terminated surveys. The third row is a repeat of the second row of Table 19.1, and the fourth row is the ratio of the second row to the third row and represents the weight. If the expansion factor for households of size 1 were 100, then the weighted expansion factor would be the product of 100 and 1.22, or 122. This accounts for the lower response rate of single-person households.
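
A short Python sketch of the Table 19.2 calculation, using the counts from Table 19.1, may make the procedure concrete (the dictionary names are invented for the illustration):

    # Weights are the ratio of the combined (completed + terminated) proportion in each
    # category to the proportion among completed surveys only.
    completed  = {'1': 85, '2': 268, '3': 293, '4': 212, '5+': 142}
    terminated = {'1': 39, '2': 57,  '3': 30,  '4': 40,  '5+': 34}

    n_completed = sum(completed.values())                  # 1,000
    n_combined  = n_completed + sum(terminated.values())   # 1,200

    weights = {}
    for size in completed:
        combined_share  = (completed[size] + terminated[size]) / n_combined
        completed_share = completed[size] / n_completed
        weights[size] = round(combined_share / completed_share, 2)

    print(weights)   # {'1': 1.22, '2': 1.01, '3': 0.92, '4': 0.99, '5+': 1.03}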

This same procedure could be applied using more than one characteristic of the population. In that case, it would be necessary to calculate the n-way tables for the n characteristics that are to be used, both from the completed surveys and the terminations, and then to follow the same procedure as described here to find the weights for each cell of the n-way table. An example with two characteristics is provided here, to demonstrate the procedure.


A second example

In this example, suppose once again that a telephone interview survey has been conducted on households and that the key characteristics of the population are thought to be the household size and the number of workers. Table 19.3 shows the two-way distribution for these characteristics from the completed surveys and Table 19.4 shows the same distribution for the terminated surveys.

The weights can be determined by first converting the values in Table 19.3 to percentages of the total sample, as shown in Table 19.5. Next, the values of the corresponding cells from Tables 19.3 and 19.4 are added together, as shown in Table 19.6, and converted to percentages, as shown in Table 19.7.

Table 19.1 Results of an hypothetical household survey

                           Household size
Status                     1      2      3      4      5+     Total
Completed                  85     268    293    212    142    1,000
Percentage completed       8.5    26.8   29.3   21.2   14.2   100.0
Terminated                 39     57     30     40     34     200
Percentage terminated      19.5   28.5   15.0   20.0   17.0   100.0

Table 19.2 Calculation of weights for the hypothetical household survey

                                       Household size
Source                                 1      2      3      4      5+     Total
Completed plus terminated              124    325    323    252    176    1,200
Percentage completed + terminated      10.33  27.08  26.92  21.00  14.67  100.0
Completed percentage                   8.5    26.8   29.3   21.2   14.2   100.0
Weights (ratio of two proportions)     1.22   1.01   0.92   0.99   1.03   –

Table 19.3 Two-way distribution of completed surveys

                 Household size
Workers          1      2      3      4      5+     Total
0                23     93     26     13     2      157
1                62     64     77     85     66     354
2+               0      111    190    114    74     489
Total            85     268    293    212    142    1,000


The percentages in Table 19.7 are then divided by the corresponding percentages in Table 19.5 to create the weights. The results of this are shown in Table 19.8.

The weights shown in Table 19.8 would be applied, together with the expansion factors, to determine the correct population values from Table 19.3.
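
The same calculation extends directly to two (or more) characteristics; the following sketch uses the counts of Tables 19.3 and 19.4 (again, the variable names are invented for the illustration):

    # Two-way version: the same ratio is computed cell by cell (workers x household size).
    completed = {            # Table 19.3
        '0':  [23, 93, 26, 13, 2],
        '1':  [62, 64, 77, 85, 66],
        '2+': [0, 111, 190, 114, 74],
    }
    terminated = {           # Table 19.4
        '0':  [15, 30, 3, 7, 2],
        '1':  [24, 18, 14, 22, 10],
        '2+': [0, 9, 13, 11, 22],
    }

    n_comp = sum(sum(row) for row in completed.values())            # 1,000
    n_comb = n_comp + sum(sum(row) for row in terminated.values())  # 1,200

    weights = {
        w: [round(((c + t) / n_comb) / (c / n_comp), 2) if c else 1.0
            for c, t in zip(completed[w], terminated[w])]
        for w in completed
    }
    print(weights)   # reproduces Table 19.8, e.g. weights['0'][0] == 1.38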

As can be seen in both the one-variable and the two-variable examples here, the weights will usually be values that are around 1.00, varying from somewhat greater than one to somewhat less than one. However, large values of the weights are indicative of serious under- or over-representation of a particular subgroup of the population in the sample, and should be seen as a warning that the extent of bias in the survey may be too large to be corrected reasonably by weighting.

Table 19.4 Two-way distribution of terminated surveys

                 Household size
Workers          1      2      3      4      5+     Total
0                15     30     3      7      2      57
1                24     18     14     22     10     88
2+               0      9      13     11     22     55
Total            39     57     30     40     34     200

Table 19.5 Table 19.3 expressed as percentages

                 Household size
Workers          1      2      3      4      5+     Total
0                2.30   9.30   2.60   1.30   0.20   15.70
1                6.20   6.40   7.70   8.50   6.60   35.40
2+               0.00   11.10  19.00  11.40  7.40   48.90
Total            8.50   26.80  29.30  21.20  14.20  100.00

Table 19.6 Sum of the cells in Tables 19.3 and 19.4

                 Household size
Workers          1      2      3      4      5+     Total
0                38     123    29     20     4      214
1                86     82     91     107    76     442
2+               0      120    203    125    96     544
Total            124    325    323    252    176    1,200


Indeed, for weighting to be an appropriate procedure, it is necessary to make the assumption that those who respond to the survey within a specific population subgroup that is used for weighting are identically distributed, on measures in the survey, to those in the subgroup who did not respond.

19.3.2 Weighting with known populations

When the characteristics of the population are known, the procedure for weighting is normally much simpler. In this case, there is no need to use information on the terminations to the survey. Instead, knowing the proportions in the population of the categories of the selected weighting variable(s), it is simply a matter of using those known population proportions to develop weights. Suppose that the data in the top half of Table 19.1 still represent the results of the survey. However, from a secondary source (say, for example, a national census), the proportions shown in the penultimate row of Table 19.9 are derived. Taking the ratios of the census values to the survey values produces the weights, shown in the last row of the table.

One of the differences here is that these weights will produce a correct distribution of the values, insofar as the census is complete and correct and, again, on the assumption that respondents and nonrespondents in a particular category are alike on other measures of interest in the survey.
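
A sketch of the Table 19.9 calculation, in which the census shares are simply divided by the survey shares (illustrative values only):

    # With known population (census) proportions, the weight for each category is the
    # census share divided by the share observed among completed surveys.
    survey_share = {'1': 8.5,  '2': 26.8, '3': 29.3, '4': 21.2, '5+': 14.2}  # per cent
    census_share = {'1': 10.2, '2': 27.2, '3': 25.3, '4': 21.9, '5+': 15.4}  # per cent

    weights = {size: round(census_share[size] / survey_share[size], 2)
               for size in survey_share}
    print(weights)   # {'1': 1.2, '2': 1.01, '3': 0.86, '4': 1.03, '5+': 1.08}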

The procedure remains simple for the use of two or more variables, provided that the secondary source information provides an n-way cross-tabulation of the variables of interest for weighting.

Table 19.7 Cells of Table 19.6 as percentages

                 Household size
Workers          1      2      3      4      5+     Total
0                3.17   10.25  2.42   1.67   0.33   17.83
1                7.17   6.83   7.58   8.92   6.33   36.83
2+               0.00   10.00  16.92  10.42  8.00   45.33
Total            10.33  27.08  26.92  21.00  14.67  100.00

Table 19.8 Weights derived from Tables 19.7 and 19.5

                 Household size
Workers          1      2      3      4      5+
0                1.38   1.10   0.93   1.28   1.67
1                1.16   1.07   0.98   1.05   0.96
2+               1.00   0.90   0.89   0.91   1.08


In other words, if, say, two variables are used, as in the case described in the previous section, and the secondary source of population data contains a two-way cross-tabulation for the two variables of interest, then the weighting procedure remains simple and straightforward. However, it may be the case that the n-way cross-tabulations of the selected variables are not available and only the row and column totals of the tabulation are known from the secondary source. In this case, it is necessary to apply an iterative proportional fitting procedure to estimate the weights (Stopher and Stecher, 1993). This can be done for as many variables as may be desired in the weighting procedure, although the more variables chosen, the more complex the procedure will be. An example of this procedure is provided here.

An example

Suppose that the survey has been carried out and has yielded the results shown earlier in Table 19.5 and reproduced again here, for convenience, as Table 19.10.

In this case, only the row totals and column totals for workers and household size are known from the secondary data, giving proportions for workers of 16.9 per cent for zero workers, 37.2 per cent for one worker, and 45.9 per cent for two or more workers, and proportions for household size of 10.2 per cent for one-person households, 27.2 per cent for two-person households, 25.3 per cent for three-person households, 21.9 per cent for four-person households, and 15.4 per cent for households with five or more persons.

Table 19.9 Results of an hypothetical household survey compared to secondary source data

                                      Household size
Status                                1      2      3      4      5+     Total
Completed                             85     268    293    212    142    1,000
Percentage completed                  8.5    26.8   29.3   21.2   14.2   100.0
Census proportions (percentage)       10.2   27.2   25.3   21.9   15.4   100.0
Weights                               1.20   1.01   0.86   1.03   1.08

Table 19.10 Two-way distribution of completed surveys by percentage (originally shown in Table 19.5)

                 Household size
Workers          1      2      3      4      5+     Total
0                2.30   9.30   2.60   1.30   0.20   15.70
1                6.20   6.40   7.70   8.50   6.60   35.40
2+               0.00   11.10  19.00  11.40  7.40   48.90
Total            8.50   26.80  29.30  21.20  14.20  100.00


The procedure for iterative proportional fitting (IPF), or the Furness method (Furness, 1965), is demonstrated here for the two-variable case. It proceeds by first estimating the factors for either the rows or the columns that will produce the correct row (or column) totals. In this case, arbitrarily, the rows are used first. The row total for the first row should be 16.9 per cent but was surveyed as 15.7 per cent. Hence, the entries in that row must be factored by the ratio 16.9 / 15.7. Similarly, the second row must be factored by 37.2 / 35.4, and the third row by 45.9 / 48.9. The results of this factoring are shown in Table 19.11.

As is now evident, the factoring of the rows to get the correct proportions in each row has left the columns out of balance. Therefore, the next iteration, shown in Table 19.12, is to factor the columns to get the correct proportions in each column.

As would be expected, this now throws the rows out of balance, although it should be noted that the totals on the rows are closer than in the original data. Hence, the next step is to factor the rows again, using factors that are the desired row totals divided by the newly estimated row totals, as shown in Table 19.13.

After this third iteration, the row proportions are again correct, and the column proportions are only marginally incorrect, with none of them varying by more than 0.2 per cent from the correct proportions.

Table 19.11 Results of factoring the rows of Table 19.10 (percentages)

                      Household size
Workers               1      2      3      4      5+     Total    Correct proportion
0                     2.48   10.01  2.80   1.40   0.22   16.90    16.90
1                     6.52   6.73   8.09   8.93   6.94   37.20    37.20
2+                    0.00   10.42  17.83  10.70  6.95   45.90    45.90
Total                 8.99   27.16  28.72  21.03  14.10  100.00
Correct proportion    10.20  27.20  25.30  21.90  15.40

Table 19.12 Second iteration, in which columns are factored (percentages)

                      Household size
Workers               1      2      3      4      5+     Total    Correct proportion
0                     2.81   10.03  2.47   1.46   0.24   16.99    16.90
1                     7.39   6.74   7.13   9.30   7.58   38.13    37.20
2+                    0.00   10.44  15.71  11.14  7.59   44.87    45.90
Total                 10.20  27.20  25.30  21.90  15.40  100.00
Correct proportion    10.20  27.20  25.30  21.90  15.40


Although another iteration could theoretically be performed, the amount of change it would produce is so small that it is probably not worth doing. Therefore, Table 19.13 now represents the proportions that we believe should be attained in each cell of the table for the original survey. Weights are now obtained by dividing each cell in Table 19.13 by the corresponding cell value in Table 19.10, producing the weights shown in Table 19.14.

These weights are approximate, because the individual cell populations were not known at the outset, and the method of iterative proportional fitting had to be used to approximate these. It is important also to be aware that the starting distribution in the cells is of great importance to the result from the IPF method. Hence, this method can be considered to be only an approximation of the correct weights, albeit one that will provide correct estimates of the distribution of the proportions of the sample in each category for each weighting variable.

More complex designs are relatively easy to handle with modern computers, so that the IPF method can be applied with n criterion variables with only a minor increase in complexity. It simply requires an n-dimensional solution to the IPF method.
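
By way of illustration, a minimal two-dimensional sketch of the procedure in Python follows, reproducing the three iterations shown in Tables 19.11 to 19.13. This is an illustration of the method only, not the software used to produce the tables in this chapter; the function and variable names are invented for the example.

    # Two-dimensional iterative proportional fitting (the Furness method), starting from
    # the surveyed percentages of Table 19.10 and the known marginal totals.
    row_targets = [16.9, 37.2, 45.9]               # workers: 0, 1, 2+
    col_targets = [10.2, 27.2, 25.3, 21.9, 15.4]   # household size: 1, 2, 3, 4, 5+
    cells = [                                      # surveyed percentages (Table 19.10)
        [2.30, 9.30, 2.60, 1.30, 0.20],
        [6.20, 6.40, 7.70, 8.50, 6.60],
        [0.00, 11.10, 19.00, 11.40, 7.40],
    ]

    def factor_rows(m, targets):
        # scale each row so that its total equals the target row total
        return [[v * t / sum(row) for v in row] for row, t in zip(m, targets)]

    def factor_cols(m, targets):
        # scale each column so that its total equals the target column total
        sums = [sum(row[j] for row in m) for j in range(len(targets))]
        return [[v * t / s for v, t, s in zip(row, targets, sums)] for row in m]

    fitted = factor_rows(cells, row_targets)    # first iteration  (Table 19.11)
    fitted = factor_cols(fitted, col_targets)   # second iteration (Table 19.12)
    fitted = factor_rows(fitted, row_targets)   # third iteration  (Table 19.13)

    weights = [[round(f / c, 2) if c else 0.0 for f, c in zip(frow, crow)]
               for frow, crow in zip(fitted, cells)]
    print(weights)   # approximately the weights of Table 19.14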

19.4 Summary

Expansion is the procedure of multiplying each survey observation by the inverse of the sampling rate, to obtain estimates of the population totals from the survey totals.

Table 19.13 Third iteration, in which rows are factored again (percentages)

                      Household size
Workers               1      2      3      4      5+     Total    Correct proportion
0                     2.79   9.97   2.45   1.45   0.23   16.90    16.90
1                     7.21   6.57   6.95   9.07   7.39   37.20    37.20
2+                    0.00   10.67  16.07  11.40  7.76   45.90    45.90
Total                 10.00  27.22  25.47  21.92  15.39  100.00
Correct proportion    10.20  27.20  25.30  21.90  15.40

Table 19.14 Weights derived from the iterative proportional fitting

                 Household size
Workers          1      2      3      4      5+
0                1.21   1.07   0.94   1.11   1.17
1                1.16   1.03   0.90   1.07   1.12
2+               0.00   0.96   0.85   1.00   1.05


The method of expansion is, therefore, related to the sampling method used. Weighting is the procedure of creating factors and applying them to each survey observation to correct for known or suspected biases in the data. Multiplying the expansion factor and the weighting factor for each survey observation yields a composite factor that should provide the best estimate of the unbiased population statistic from the sample.

Expansion factors can, of course, be estimated only for finite populations. Weighting factors may be estimated both in cases in which the population totals for the criterion variables or key characteristics are unknown and in which they are known. Methods for doing this are described in this chapter.

The reason for weighting data is sample non-compliance, which is most frequently a result of nonresponse in human population surveys. In weighting samples, there is an inherent assumption of considerable importance. This assumption is that the respondents in a particular category of the key characteristics are similar to the nonrespondents in the same category on all other variables measured by the survey. If this assumption does not hold, then no amount of weighting will actually result in a reduction in bias from the sample estimates, and, indeed, weighting may potentially make the biases worse.


20 Nonresponse

20.1 Introduction

As has been alluded to in various other chapters of this book, nonresponse is a prevalent problem in all human subject surveys. Nonresponse is generally the dominant cause of non-adherence to the sample design. Nonresponse is also a function of the survey method, as well as the quality of the survey instrument, the training of survey personnel, and the subject of the survey, among other things. Clearly, nonresponse is undesirable. Chapter 19 discussed the issue of weighting as a means to try to correct biases in the data. Here, again, the primary cause of bias is seen as nonresponse.

Nonresponse in human subject surveys may arise in two ways. First, there is survey nonresponse, also known as unit nonresponse. This arises when a sample unit fails to respond to the survey. For example, in a human population survey, unit or survey nonresponse occurs when a sampled person or household refuses to participate in the survey. In a telephone survey, this may arise when a qualified person answers the telephone but refuses to answer any questions and hangs up the telephone without providing any information, other than that there is a qualified individual or household at that telephone number. Similarly, in a face-to-face survey, this will occur when the survey respondent refuses to answer the interviewer's questions and terminates any further discussion of the survey. In a postal survey, this occurs when the sampled person or household does not return the survey questionnaire, or returns a blank or spoiled survey. Similarly, in an internet survey, unit or survey nonresponse occurs when a person refuses to respond to any part of the internet survey, and perhaps even fails to visit the URL for the survey.

The second form of nonresponse is called item nonresponse, and occurs when a respondent to a survey fails to answer one or more questions that should have been answered by that respondent, or responds by stating that he or she does not know the answer to the question, unless such responses are appropriate and provide important information for the survey subject. Item nonresponse occurs most often for questions that are considered by respondents to be of a sensitive nature, or that are perceived as being threatening (see Chapter 9).

In this chapter, unit nonresponse is discussed first. In discussing unit nonresponse, the important issue of calculating response rates is included, as well as methods for reducing unit nonresponse and the potential costs associated with unit nonresponse. Subsequently, item nonresponse is discussed, including a treatment of methods of inference and imputation.

20.2 Unit nonresponse

As discussed above, unit nonresponse is a consequence of respondents refusing to participate in the survey. It results in failure to achieve the designed sample and is always present in surveys of human populations. However, unit nonresponse is a problem only if the nonrespondents are not randomly distributed through the population with characteristics, on all issues of interest in the survey, identical to those of the respondents. Indeed, if one can establish that nonrespondents are randomly distributed through the population and that their characteristics on all issues of interest in the survey are distributed identically to respondents, then unit nonresponse becomes only a minor issue, resulting in some additional cost for the survey to substitute additional respondents for those who refuse to participate.

However, such a situation is unlikely to arise in most practical surveys. On the contrary, it is much more common to find that nonrespondents are concentrated in certain subgroups of the population and behave differently or have different characteristics from respondents. For example, in surveys of travel behaviour, it is common to find that nonrespondents are most likely to be either those who travel very little (such as the elderly, who have limited mobility) or those who travel a great deal (for whom the survey task is more onerous). This means that a survey on travel behaviour is likely to be responded to mainly by those who travel a moderate amount, leading to obvious biases in the results of the survey.

Unit nonresponse generally arises from three main causes: the unwillingness of people to complete the survey, a failure on the part of the survey to include certain units of the population, and the loss of completed surveys. The latter two causes are under the control of the survey researcher, and various safeguards should normally be put in place to remove or reduce their occurrence. The former is not under the direct control of the survey researcher, although there are a number of ways in which even this aspect of unit nonresponse can be reduced.

Before discussing methods to reduce nonresponse, it is appropriate and important to discuss the methods to calculate response rates, which are the direct means to establish the extent of nonresponse. Following a discussion of response rates, various methods for improving response rates and reducing unit nonresponse are discussed.

20.2.1 Calculating response rates

To determine if nonresponse is a problem in a survey, it is necessary to calculate a response rate. There are many other measures of survey quality that may also be appropriate, and the response rate is but one measure of survey quality, but it is an important one in relation to judging sample adherence and the overall potential for bias. There are a number of possible ways to calculate response rates, and there is no agreement among survey professionals as to a single method that is 'correct'. However, there are some methods for calculating response rates that are most definitely incorrect. These are identified in the following discussion.

Classifying responses to a survey

Before calculating response rates, it is necessary to classify the responses that are received. The classification varies by the survey method, although the intent of classification is the same in each case. Survey responses can be classified into two primary groups – eligibility known and eligibility unknown – with eligibility referring to whether a unit that is attempted is or is not eligible to respond to the survey. At the next level, the eligibility known units can be divided into eligible and ineligible units. The ineligible units are of no further interest to the survey, but the eligible units are. These units can be subdivided further into respondents and nonrespondents. To see how sample units can be categorised, telephone interview, face-to-face interview, and postal surveys are each used to show the differences and common elements of these categories.

For a telephone interview survey, there are usually three steps that have to be undertaken: an initial contact, a possible screening and recruitment, and completion of the full interview. Suppose that a telephone survey is being carried out using lists of telephone numbers, which may have been generated from a variety of sources. In the initial contact for a household survey, there are a number of potential outcomes.

• No household contact is made, because:
  (1) the telephone number is that of a business or government unit;
  (2) the telephone number is that of an institution, such as a hospital;
  (3) the telephone number is that of some other non-residential unit;
  (4) the telephone number is verified as a dedicated fax or modem line; or
  (5) the telephone number is not in service.
• The telephone number is answered with a recording that indicates that the number has been changed.
• No answer is obtained after making the prescribed maximum number of attempts to obtain an answer.
• Any other outcome occurs that results in a failure to establish if there is an eligible household assigned to this telephone number.

However, if contact is made and it is established that there is a household at the other end of the telephone line, then there are a number of further possible outcomes.

• Initial contact outcomes are:
  (1) the person hangs up before any questions are asked;
  (2) the person answering refuses to answer any questions; or
  (3) the person answering terminates the initial contact part-way through.


However, if the initial contact is completed, there are four other outcomes that are then possible.

• There are no eligible household members at this number.
• There are eligible household members, but they refuse to be recruited or to complete the survey if recruited.
• There are eligible household members, but they have language difficulties or mental or physical limitations that prevent completion of the survey.
• Eligible household members complete the survey.

For a face-to-face interview, the three steps are the same and the potential outcomes are similar to those of the telephone interview. For the initial contact, the results are slightly different, consisting of the following.

• No household contact is made, because:
  (1) the address is a business or government entity;
  (2) the address is some other non-residential address;
  (3) the household is not accessible or the interviewer is unable to locate the address;
  (4) the address is of a house that is clearly vacant;
  (5) no answer is obtained after making the prescribed maximum number of attempts; or
  (6) some other reason prevents any contact from being made that establishes if there are eligible people present.

Again, if contact is made, then the following are the potential outcomes.

• The initial contact is refused.
• The initial contact is terminated prior to completion.

If the initial contact is completed, then there are the same potential four outcomes as for the telephone interview.

For a postal survey, the outcomes are quite different from those of the interview surveys, although they are similar to those of a Web survey. In neither case are there usually three steps. Rather, there is a single step of making contact with potential respondents by mailing a package of survey materials and requesting a response to them. If the survey is sent out by post, then the following outcomes may occur.

• From the initial mailing:
  (1) the Post Office returns the package as being undeliverable;
  (2) the Post Office returns the package as being not forwardable; or
  (3) the package is not delivered or is not returned.

If the package is delivered, then there are four possible outcomes.

• There are eligible household members who refuse to return the survey package.
• There are no eligible household members.
• There are eligible household members, but they have language difficulties, or mental or physical limitations, that prevent participation.
• There are eligible household members, who complete and return the survey.

Now the task is to classify these various outcomes into the various classes of response category. The outcomes that represent unknown eligibility from a telephone survey are the following.

• No answer is obtained from the prescribed number of attempts (the telephone may just ring without answer, or a busy signal is obtained, or an answering machine picks up but does not reveal whether or not a household is at the other end of the line).
• The line is not verified as a fax or modem line, although this may be suspected.
• The person answering the telephone hangs up without answering any questions.
• A request for a call-back, whether scheduled or not, is not able to be completed successfully.
• A responsible adult is never reached.
• The initial contact is terminated before critical questions on eligibility are answered completely.

In personal interview surveys, the principal outcomes that result in eligibility unknown are those when no one is ever found at home, the house is inaccessible or not found, the initial contact is refused, there is an incomplete initial contact, a request for a call-back is not completed successfully, or there is no responsible adult at home. Similarly, eligibility unknown arises for all survey packages that are sent out and not returned, and also for those returned by the Post Office as either undeliverable or not able to be forwarded, and any that are returned with no annotation from the Post Office and none from the recipient.

Those classified as ineligible include business and government contacts, non-working telephone numbers, institutions, other non-residential units, verified fax/modem lines, and cases in which no eligible household members are present, the residential location is outside the study area, and so forth. From postal and Web surveys, usually there will be no known ineligible units, unless there is an initial eligibility screening that produces reasonable response.

The classification results from a survey are most easily understood with a diagram, showing how each of the classifications relates to the others, as shown in Figure 20.1.

Calculating response rates

The incorrect way to calculate response rates is to calculate the ratio R / E – that is, the number of respondents divided by the number of eligible respondents. Nevertheless, this is often the figure that is reported as the response rate to a survey. Unfortunately, it is biased and will result in presenting a number that is misleadingly high. The reason for this is that it takes no account of the potential for there to have been eligible respondents among the eligibility unknown (U) group. The major market research organisations – the Council of American Survey Research Organizations and the American Association for Public Opinion Research – agree that response rate calculations must take into account some fraction of the eligibility unknown group that may potentially be eligible for the survey (AAPOR, 2008; CASRO, 1982). The differences in calculation of response rates have to do with the assumption as to what proportion of U should be considered as estimated eligible units. Smith (1999) undertook a review of the definition of response rates from various market research organisations around the world. The results of his research show that, at that time, none of the World Association for Public Opinion Research (WAPOR), the National Council of Public Polls (NCPP), the Council for Marketing and Opinion Research (CMOR), the Research Industry Coalition (RIC), the European Society for Opinion and Marketing Research, the American Marketing Association (AMA), the Marketing Research Association, the Association for Consumer Research (ACR), the Advertising Research Foundation (ARF), the American Statistical Association (ASA), the International Statistical Institute (ISI), and the International Association of Survey Statisticians (IASS) proposed specific definitions of nonresponse and completion rates, although most of these organisations call for the accurate reporting of statistics about surveys, and many have a code of ethics that requires the appropriate and honest reporting of statistics about the completion of surveys. However, it appears from this research and from other research (Stopher et al., 2008b) that only CASRO and AAPOR have actually set out specific details of how to compute response rates.

Both CASRO and AAPOR agree on the fundamental formula for calculating a response rate. This can be expressed as shown in equation (20.1):

RR = R / (E + eA U)   (20.1)

where:

RR = response rate;
R = respondents;
E = eligible units;
U = unknown eligibility units; and
eA = eligibility rate of unknown eligibility units.

[Figure 20.1 Illustration of the categorisation of response outcomes: all sample units (S) divide into eligibility known (K) and eligibility unknown (U); the eligibility known units divide into eligible (E) and ineligible (I); and the eligible units divide into respondents (R) and nonrespondents (N).]

The key issue here is the determination of an appropriate value for eA. CASRO (1982) suggests that it should be the eligibility rate determined from the known eligibility units. In other words, they recommend calculating eA as the ratio E / K. This assumes that the eligibility rate of the unknown eligibility units is the same as that of the known eligibility units. This is probably appropriate for many face-to-face surveys, because the majority of unknown eligibility units will be when no one is ever found at home, the house is inaccessible or not found, the initial contact is refused, there is an incomplete initial contact, a request for a call-back is not completed successfully, or there is no responsible adult at home. In such surveys, addresses that are businesses or government offices, or properties that have other non-residential uses, will largely have been excluded by interviewers simply from locating the address. It is unlikely that such ineligible units will remain in the unknown eligibility group.

However, in telephone surveys, the unknown eligibility units will include such situations as when there is no answer from the prescribed number of attempts (the telephone may just ring without answer), a busy signal is obtained, an answering machine picks up but does not reveal whether or not a household is at the other end of the line, the line is not verified as a fax or modem line (although this may be suspected), the person answering the telephone hangs up without answering any questions, a request for a call-back is not able to be completed successfully, a responsible adult is never reached, or the initial contact is terminated before critical questions on eligibility are answered completely. From these situations, it is possible that more of the unknown eligibility units will be ineligible than is found among the known eligibility units. In this case, applying the same eligibility rate may overstate the likely proportion of eligible units among the unknown eligibility units. Furthermore, from a cost viewpoint, face-to-face surveys are likely to be restricted to fewer attempts to resolve the unknown eligibility units than are telephone interview surveys, so that it is also fair to assume that telephone surveys with reasonably high limits on the number of attempts (say, five or more) will have been able to find a higher proportion of eligible units among the eligibility known units than will be the case for a face-to-face survey.

Possibly with this reasoning in mind, AAPOR (2008) recommends that the eligibility rate, eA, be one that is determined by the researcher and for which the rationale is clearly documented. This is also the recommendation of Stopher et al. (2008b). For example, in a telephone survey, the researcher may have reason to believe that the eligibility rate for the unknown eligibility units should be set as about one-half of the eligibility rate of the known eligibility units. Such a rate can be used if the researcher can put forward objective reasons for adopting such a rate. Nonresponse surveys may also provide help in establishing this rate. Another potential source of information on the rate is to graph the change in eligibility rate that occurs as the number of attempts is increased.


AAPOR has put forward a slightly modified equation for estimating the response rate, which helps to provide more specificity in the calculations. It is shown in equation (20.2):

RR3A = SR / (SR + PI + RB + O + eA (UH + UO + NC))   (20.2)

where:

RR3A = response rate 3A of AAPOR;
SR = successful responses – i.e., completed interviews or questionnaires;
PI = partial responses;
RB = refusals and terminations;
O = other response outcomes, not otherwise listed, where eligibility is known;
UH = unknown if household is occupied;
UO = unknown other;
NC = non-contacted sample units; and
eA = estimated proportion of eligible units among the eligibility unknown units.

The same procedure applies for determining the value of eA as before.

Equation (20.2) perhaps makes it a little clearer which units are considered to be eligible or of known eligibility and which are not. It also helps when applying this formula to such surveys as postal surveys or internet surveys. For example, NC and UH could be construed to include those postal surveys that are returned by the Post Office as being undeliverable, whether because of a problem with the address, a lack of forwarding information for the addressee, or another similar situation. The difficulty for postal and internet surveys is that it is usually impossible to differentiate refusals from non-contacts, unless it is assumed that all posted surveys that are not returned by the Post Office and are not returned by respondents represent refusals and terminations. In addition, postal and internet surveys will generally not produce information on ineligible units, so that an eligibility rate, in these cases, may not be able to be calculated. Unless there is secondary information available from which to estimate an eligibility rate, it may be necessary to assume that all addresses (postal or email) are eligible, and use an assumption that eA is 100 per cent. Any assumption of less than 100 per cent requires careful documentation to substantiate.
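
As a simple illustration of the calculation in equation (20.2) as reconstructed above, the following Python sketch may help; the disposition counts are invented for the example, and the value of eA is the kind of assumption that the researcher would need to document:

    # Response rate 3A, following the form of equation (20.2).
    SR, PI, RB, O = 1000, 150, 600, 50   # eligibility known: complete, partial, refused/terminated, other
    UH, UO, NC = 200, 100, 400           # eligibility unknown categories
    e_A = 0.5                            # assumed eligibility rate among the unknowns

    rr3a = SR / (SR + PI + RB + O + e_A * (UH + UO + NC))
    print(round(rr3a, 3))                # 0.465 with these illustrative counts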

AAPOR (2004) also provides a useful set of disposition codes for random digit dialling telephone surveys, which is shown here as Table 20.1, with some minor adaptations and amendments. These disposition codes map quite readily into the categories of equation (20.2). Similar codes can be developed for mail, internet, and personal interview surveys.

In many surveys, especially those accomplished by hybrid means, there may be both a recruitment step and a data completion or retrieval step. Response rates may be desired for each of these two stages.


Table 20.1 Final disposition codes for RDD telephone surveys

Eligible interview (eligibility code 1.0)
    Complete                                                        1.10
    Partial                                                         1.20
Eligible non-interview (eligibility code 2.0)
    Refusal                                                         2.11
    Termination                                                     2.12
    Respondent never available after call-back request              2.21
    Telephone answering device (message confirms residential unit)  2.22
    Miscellaneous                                                   2.90
Unknown eligibility, non-interview (eligibility code 3.0)
    Unknown if housing unit                                         3.11
    Not attempted or worked                                         3.12
    Always busy                                                     3.13
    No answer                                                       3.14
    Telephone answering device (don't know if housing unit)         3.15
    Telecommunication technological barriers (e.g., call blocking)  3.16
    Technical phone problems                                        3.17
    Housing unit, unknown if eligible respondent                    3.21
    No screener completed                                           3.22
    Other                                                           3.90
Not eligible (eligibility code 4.0)
    Out of sample                                                   4.10
    Fax/data line                                                   4.20
    Non-working number                                              4.31
    Disconnected number                                             4.32
    Temporarily out of service                                      4.33
    Special technological circumstances                             4.40
    Number changed                                                  4.41
    Cell/mobile phone¹                                              4.42
    Cell/mobile forwarding                                          4.43
    Business/government/office/other organisation                   4.51
    Institution¹                                                    4.52
    Group quarters¹                                                 4.53
    No eligible respondent                                          4.60
    Quota filled                                                    4.70

Note: ¹ If specified as ineligible in the survey design.
Source: Adapted from AAPOR (2004).


In such a case, the first stage (or recruitment stage) of the survey would produce a response rate as given by equation (20.3):

RR1 = RH / (E + eA U)   (20.3)

In equation (20.3), the symbols have the same meaning as in equation (20.1), with the addition of the subscript to the response rate, indicating that this is for the first stage, and the symbol RH, designating the number of recruited sampling units. For the second stage, the response rate is given by equation (20.4):

RR2 = R / RH   (20.4)

The second-stage response rate is simply the ratio of completed surveys to recruited units, as shown by equation (20.4). Furthermore, it is easy to see that the product of these two response rates produces the same equation as equation (20.1), indicating the consistency of these two formulae for response rate, as shown in equation (20.5):

RR1 × RR2 = [RH / (E + eA U)] × [R / RH] = R / (E + eA U)   (20.5)

Again, it is important to note that these equations result in a quite different estimate of response rate from simpler, but incorrect, formulations such as R / E and R / (E + U), both of which, among others, are found from time to time in reports on surveys.
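
A small numerical illustration of equations (20.3) to (20.5), with invented counts, shows the consistency of the two-stage calculation:

    # Two-stage response rates: recruitment (20.3), retrieval (20.4), and their product (20.5).
    E, U, e_A = 2400, 600, 0.5   # eligible units, unknown eligibility units, assumed eligibility rate
    RH = 1200                    # recruited sample units (first stage)
    R = 900                      # completed surveys (second stage)

    rr1 = RH / (E + e_A * U)     # first-stage (recruitment) response rate, about 0.444
    rr2 = R / RH                 # second-stage (retrieval) response rate, 0.750
    print(round(rr1 * rr2, 3), round(R / (E + e_A * U), 3))
    # both print 0.333, as equation (20.5) requires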

20.2.2 Reducing nonresponse and increasing response rates

From the foregoing, it is clear that a response rate of anything under 100 per cent indicates that some aspect of the original survey design has not been met. There are usually a number of reasons for nonresponse. These often relate to aspects of the survey design, such as the appeal of the survey to the potential respondent, the difficulty for respondents of undertaking the survey, the approach and introduction, and the amount of time and effort required from the respondent (also known as respondent burden). Another source of potential nonresponse is the nature of, or lack of, publicity about the survey. An absence of publicity, or poorly aimed publicity, may leave potential respondents unaware of the apparent value of the survey, unable to identify who is doing the survey and, therefore, its legitimacy, and unaware of the importance of each respondent's complete response.

Design issues affecting nonresponse

There are a number of design issues that are likely to have a profound impact on response levels to a survey. Many of these issues are discussed elsewhere in this book, but it is useful to bring a number of them together here, with a specific focus on how they can be used to reduce nonresponse.


In a postal survey, for example, Dillman (1978, 2000) suggests the following guidelines, which this author has also found to work well for improving the response rate.

• Stamped return envelopes. Ideally, the return envelopes should have actual postage stamps on them. If permitted by postal regulations, it is also useful to print on the back of the envelope instructions as to what materials should be included in the package. The use of stamps, as opposed to postage-paid return envelopes, impresses on potential respondents the seriousness of the survey, because postage-paid envelopes are charged for only if the package is actually posted back, whereas stamps mean that all return envelopes have been paid for, whether the respondent returns the survey or not.

• Large white envelopes. The use of a large white envelope attracts attention and is clearly differentiated from most junk mail.

• Addresses printed directly on the envelope. The use of address labels for the address indicates less care and attention to the survey, whereas printing the address directly on the envelope tends to suggest that this is not a mass mailing that is of little value to either the respondent or the survey organisation.

• Recognisable return addresses and some indication of the contents of the envelope. Envelopes containing the survey materials should have a clearly printed return address on the envelope that can be easily recognised by the addressee as a legitimate address. It indicates that there is value in the envelope, such that the survey organisation would want undeliverable packages returned to them. Moreover, indicating the contents on the outside of the envelope is helpful to let respondents know that these are survey materials, especially if there was a prior recruitment step.

• Postage stamps on the initial envelope. Postage for the mailing of the survey package should also be in the form of postage stamps, preferably attractive commemorative stamps. The use of stamps, as opposed to prepaid envelopes or franking machine imprints, indicates an additional level of care and effort being taken with the survey package, and also adds to the attractiveness of the package and reduces the impression that this is a mass mailing.

In the area of survey design in general, there are a number of design issues that are most likely to impact response levels. These include the following.

• Keep the survey simple, without complicated skip patterns or other impediments to completion.
• Use simple language – perhaps around the level that a nine-year-old would be expected to be able to comprehend.
• Keep instructions to a minimum.
• Make the survey flow according to the way people think, experience, or remember the issues about which the survey is questioning the respondent.
• Use colour as a guide and also to add visual appeal to the survey.
• Make the survey clear and easy to read, with an appearance that invites the respondent to read and respond.
• Leave plenty of empty space in the layout of the survey, so that the appearance is not cluttered, and does not convey the idea of complexity and difficulty.

There are also a number of other steps that can be taken in the design that are likely to improve response rates. Included among these are:

• removing unnecessary questions;
• providing response categories that respondents can tick, instead of requiring them to write out their answers;
• minimising the amount of information that has to be written;
• putting oneself in the shoes of the respondent;
• judiciously using clip art and other aids to navigating the survey;
• judiciously using arrows and other indicators as to where to answer;
• removing ambiguities in question and answer wordings; and
• paying careful attention to the manners and presentation of interviewers.

Survey publicity

Reference was made earlier to publicity about the survey. This is a procedure to enhance response rates. Publicly funded surveys in particular should use the media to publicise the existence of the survey. This would generally mean using radio, TV, and newspapers to publicise the survey, its purpose, and its importance. In addition, to maintain interest in the survey, publishing reports of survey findings (so long as they will not condition or change the responses of subsequent respondents) and relating incidents that occur while the survey is under way are beneficial in terms of increasing interest in and response to a survey.

The provision of a free-call telephone number for people to call to raise questions or express concerns about the survey is also very important. A web page is also a good idea, although it needs to be remembered that, in most countries, internet penetration is still below 50 per cent, so that reliance on a website alone is not advisable. Finally, the survey should include a cover letter that is signed by a respected and well-known public figure.

Use of incentives

As a general rule, providing an incentive of some sort to potential respondents should result in an increase in the response rate. Chapter 11 provides a detailed discussion of the use of incentives. There are a few key issues to keep in mind about the use of incentives and their effect on nonresponse. First, although there is usually little doubt that incentives will increase response rates, these increases may come with the burden of added bias to the survey responses. This is because incentives do not have universal appeal to the population but, rather, will tend to increase responses from certain sectors of the population while perhaps even reducing responses from other sectors. Therefore, it is wise to pretest the use of incentives and to determine what additional biases may be introduced. These biases must then be traded off against the potential increase in total responses to the survey.


Second, as discussed in Chapter 11, the most effective incentive is usually monetary in form, and should be provided in advance of a completed survey, not as a post-survey 'reward'. For some agencies involved in carrying out surveys, monetary incentives may pose problems or be impossible for the agency to offer. In such cases, if an incentive seems necessary, other forms of incentives must be investigated and used. Evidence suggests that incentives do not need to be high in value to elicit the desired response improvement, although this is definitely a function of the size of the survey task to be undertaken. In some cases, when the resources for providing incentives are restricted, the only option available for an incentive may be to use some form of lottery or draw, with just three or four significant awards to be made from the drawing. However, lotteries and draws are generally found to be less effective than monetary incentives provided to all sampled units.

Third, incentives may be offered only to certain hard-to-reach subgroups of the population. Provided that there is no publicity about the provision of incentives, such differential incentives should not have a negative effect on the survey. Incentives may also be used, in this way, to counteract biases that would otherwise be present in the survey.

To determine the effect that an incentive will have, it is useful to undertake a pilot survey in which a proportion of the pilot sample is provided with no incentive, while the rest are offered an incentive. Comparisons between those receiving an incentive and those not receiving one will help to identify the biases that will be introduced by the incentives, and also to determine the amount of increased response likely to be achieved by the incentive.

Use of reminders and repeat contacts

Chapter 11 also discusses at some length the potential use of reminders. These represent another procedure for reducing nonresponse. In this chapter, suffice it to say that not all types of surveys are subject to reminders. However, postal and internet surveys are definitely subject to reminders, and can show substantial increases in response when reminders are given. Some types of telephone survey may also benefit from reminders, while most face-to-face interview surveys will not benefit from a reminder. Reminders should use different types of media for successive reminders, to ensure that respondents receive and take note of at least some of the reminders. For example, alternating phone and post for reminders is often considered a useful strategy.

In this context, it is useful to note the work of Richardson and Meyburg (2003) in discussing the economics of adding to the sample or striving to increase response from sample units that have already been approached. This work is also discussed in Chapter 11, but it is worth repeating here. Richardson and Meyburg (2003) clearly demonstrate that it is much cheaper to repeat reminders to potential respondents who have not yet responded than to drop those units from the sample and replace them with new sample units. Not only is it usually cheaper to use multiple reminders than to add another new sample unit, but it also reduces the biases in the responses obtained from the survey. Further, there is often useful information to be obtained by comparing the responses
of those who respond only after multiple reminders to those who respond without reminders. Among other things, this information can help point to the likely biases in measurement that will arise from the survey as a result of incomplete response.

Too ready an acceptance of refusals can result in the creation of a significant bias in the survey results, especially as it is likely that those respondents who take more time and effort to respond differ in important ways from those who respond immediately. In addition to the use of reminders, classifying refusals as ‘soft’ or ‘hard’ is a useful strategy. Hard refusals are those in which the person answering the interviewer in a face-to-face or telephone contact indicates that he or she is unwilling under any circumstances to participate in the survey. Soft refusals include situations in which the person answering the initial contact seems unwilling to indicate whether or not he or she qualifies to do the survey, or in which the potential respondent simply indicates that he or she does not have time for the survey or that the time of contact is not good. In such cases, the most persuasive interviewers should then be assigned to recontact the soft refusals and attempt to ‘convert’ them to participants.

However, the process of converting soft refusals is still subject to some problems. In a two-stage survey in which it is first necessary to recruit respondents, following which a survey is to be conducted, the conversion may result in an expressed willingness to undertake the more major survey that follows, but the eventual completion rate by converted refusals may be quite low. This is an area deserving of further research, especially in the context of surveys in a particular topic area, or conducted in a specific manner. There is at least some anecdotal information suggesting that the conversion of soft refusals is not highly productive. Nevertheless, the classification of refusals and further attempted follow-up with soft refusals is still a recommended procedure, because it is likely to reduce some of the nonresponse bias in the survey, and it is a relatively cheap procedure compared to adding replacement sample units.

Richardson and Meyburg (2003) find in their research that there are distinct and significant differences in characteristics between those who respond without reminders and those who require one, two, or more reminders, and that each group requiring additional reminders differs in important ways from groups that respond sooner. Their research makes it quite clear that serious biases are potentially incurred if early nonresponders are replaced with new samples, that is, if potential respondents are dropped once they have failed to respond after being sent the survey, or after only one reminder has been offered.

Personalisation
Another, albeit more expensive, procedure that may be found to improve response rates applies to surveys that involve more than one contact by the survey researcher. This would apply to surveys in which there is a recruitment step followed at a later stage by either an interview or a data retrieval step, as well as other multi-step survey designs. The procedure is to assign a specific interviewer to each respondent, making this interviewer the one who contacts the household or person on each occasion and is also the one who would respond to any phoned-in questions about the survey. In this case, the interviewer identifies himself or herself by first name and, at the first
contact, essentially indicates that he or she will be the person who contacts this potential respondent each time and is the person with whom the respondent can make contact to ask questions, obtain clarification, or otherwise make comments or observations during the conduct of the survey.

This may become expensive because of the necessity to allocate interviewers to specific potential respondents and to have them available to talk with those respondents at various times. This process may make the time of each interviewer less productive than in a survey in which interviewers can be assigned to work on any respondent during the hours they have available to work on the survey. However, the increase in response rates may be quite significant, making the process cost-effective in the long run. A procedure of this type was used in the Dutch Mobility Survey, and succeeded in reversing a steady decline in response rates (van Evert, Brög, and Erl 2006). Indeed, in this particular case, the introduction of a number of changes to the design, including this personalisation of the contacts, raised the response rate from around 35 per cent to just over 70 per cent.

Summary
In summary, a number of procedures are available to the survey researcher that can provide the means to increase response rates and reduce the biases associated with nonresponse. These include: careful attention to the design of the survey, reducing respondent burden and increasing the clarity and appeal of the survey materials; careful attention to the materials that are sent by post to potential participants, whether as an introductory letter or a package of materials to be completed by respondents; the offering of incentives to respondents to complete the survey, with the most effective incentive generally being cash that is provided prior to the completion of the survey; the use of reminders, multiple contacts with respondents to encourage them to take part and to complete the survey tasks, and attempts to convert soft refusals; and the personalisation of survey contacts, so as to provide a single person to contact throughout all the survey activities.

Of considerable importance is to be honest about the time commitment required of respondents, and also to attempt to motivate response by making it clear how the respondent will benefit from participating. Unfortunately, there are many surveys in which there is no direct benefit to the respondent for participating, necessitating in such cases an appeal to the altruism of the respondent. In many instances, and in certain cultures, such appeals may be of rather little effect. Nonetheless, they are usually worth making, because nothing is lost by making such an appeal.

20.2.3 Nonresponse surveys
Nonresponse surveys are an important and often overlooked element of survey activity. Given that virtually all surveys experience significant nonresponse, a nonresponse survey should be a necessary component of all human subject surveys. A nonresponse survey is a survey that is conducted on as many of those who failed to respond to the
original survey as can be persuaded to respond. The primary purpose of the nonresponse survey is to determine the characteristics of those who did not respond on certain key measures, so as to be able to determine the likely extent of bias in the survey results. The secondary purpose of the nonresponse survey is to find out reasons why people do not respond.

The nonresponse survey is rarely carried out, probably for two reasons. First, on a per-response basis, it is an expensive survey to undertake, with the result that many agencies that commission surveys are unwilling to spend the significant additional amount required to undertake a nonresponse survey. Second, many agencies commissioning surveys are not strongly motivated to uncover the biases and reasons for failure to respond and often do not see the long-term value in such a survey. However, nonresponse surveys are extremely important, for both these reasons. First, they are probably the only reasonably sure way to determine if the original survey is biased, and in what respect it is biased. Second, they are also a very important source of information for improving survey quality, by revealing the main reasons for potential respondents failing to respond to the survey. In this regard, they will often provide clear directions on how future surveys of a similar type can be improved.

Black and Safir (2000) and Groves and Couper (1998), among others, have offered mathematical formulae to calculate nonresponse bias. Further, given that late responders to a survey after numerous postal reminders have been sent out usually respond like nonresponders (Lahaut et al., 2003; Richardson, 2000), it may be better to reduce the number of postal reminders and adopt nonresponse surveys to correct for nonresponse bias. Nonresponse surveys are important because they enable the researcher to gain some knowledge about nonrespondents and to determine if important behaviours and characteristics differ significantly from respondents. Nonresponse surveys also allow the researcher to understand why these individuals refused to participate in the original study and aid in the development of future surveys.
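The precise formulations in those references are not reproduced here, but, as a point of orientation only, a commonly quoted deterministic expression for the bias in a respondent mean makes clear why a nonresponse survey is so valuable: the bias depends both on the nonresponse rate and on how different nonrespondents are from respondents, and the latter can be estimated only by obtaining data from nonrespondents:

$$\text{bias}(\bar{y}_r) \approx \frac{n_{nr}}{n}\left(\bar{y}_r - \bar{y}_{nr}\right)$$

where $\bar{y}_r$ is the mean among respondents, $\bar{y}_{nr}$ is the mean among nonrespondents (estimable from a nonresponse survey), and $n_{nr}/n$ is the proportion of the sample that did not respond.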

Richardson (2000) points out that calling nonresponding households and reminding them to participate will be of little use if they have discarded or misplaced the survey package. In this regard, Freeland and Furia (1999) report that they recorded no significant increase in response rates to a postal survey when telephone reminders were made, while Whitworth (1999) finds that a second mailing of the survey forms to hard-to-enumerate households did increase response rates.

In a survey conducted with households, there are five principal reasons for nonresponse to a survey.

(1) Households are never contacted, either because of unfulfilled requests for callbacks, or because an answer is never obtained when the interviewer attempts to make contact.
(2) Households are contacted but refuse to participate in the survey.
(3) Households initially agree to participate but actually fail to complete the survey.
(4) Households agree to participate, and complete the survey, but fail to provide the information in the survey forms to the survey researcher.
(5) Households complete the survey, but their answers are clearly not truthful, and may even constitute an intentional trashing of the survey.

A nonresponse survey should provide follow-up with each of these types of households. In a person-based survey, very similar categories of nonresponse occur, and they also require follow-up.

The best method to conduct a nonresponse survey is to design a shortened version of the original survey, in which only key questions are asked that will determine both the demographic profile of nonrespondents and some key measures that were central to the main survey, and on which nonresponse bias needs to be assessed. The survey is most likely to need to be conducted as a mixed-mode survey – that is, a survey that is conducted by multiple different means, such as face to face, telephone, internet, and post. However, in general, unless the original survey was conducted face to face, a nonresponse survey should be a face-to-face survey principally, with other methods being added to assist. The reason for this is that face-to-face surveys are usually the hardest to refuse outright, and the interviewer may be able to observe some attributes of the household or person who has refused to respond, even without actual contact with the respondent. The nonresponse survey should reduce the perception of high respondent burden by asking less complex questions, keeping the survey form shorter, thus taking less time to complete, and giving the visual presentation of the survey a more ‘aesthetically pleasing’ look. The respondent will be likely to notice the level of effort of interviewers to contact him or her and may, therefore, be more inclined to participate. These changes and efforts are an aspect of reciprocity (Kalfs and van Evert, 2003; Kam and Morris, 1999).

For example, suppose the main survey is a travel survey conducted with entire households, using telephone contact, a postal survey form to households that agree to be recruited, and telephone retrieval of the completed survey data. A nonresponse survey could be designed for this survey that would collect a small number of the same demographic attributes that were requested in the main survey, and could then ask for two or three indicators of the amount of travel undertaken by the household on the previous day or during the previous week. In addition, a small number of questions would be asked to seek to establish the principal reasons for the household not responding. Because of the way in which the original survey was conducted, the main method for the nonresponse follow-up should be personal contact and a face-to-face interview, although this should be supplemented by internet and postal surveys for those who cannot be reached by face-to-face contact, and possibly with some further attempt at telephone contact, especially for those who were originally recruited by telephone but then failed to complete the survey.

A number of nonresponse surveys have been conducted, too numerous to describe in any detail here. However, a few such surveys in the travel behaviour arena are worth reporting on, because they provide some clear indications of the potential results that may come from such surveys.

A nonresponse survey was undertaken in the Denver region as part of the 1997 Household Travel Survey. In the nonresponse survey, respondents were offered a $2 incentive, which was found to be very effective in reducing item nonresponse for the income question (Kurth, Coil, and Brown, 2001). This nonresponse survey found that more elderly households were among the non-contacts and quick-refusing households, and, therefore, their trip rates were not accounted for adequately in the original survey.

Another nonresponse survey was conducted in Sydney in 2001 by the Transport Data Centre of the New South Wales Department of Transport, to investigate nonresponse and its effects on data quality, in relation to the Sydney Household Travel Survey, as well as to test the telephone as an alternative data collection method to the costly face-to-face interview (Transport Data Centre, 2002). Households that could not be contacted after at least five visits (non-contacts) and those that still refused after refusal conversion was attempted were moved into the nonresponse study. A full telephone interview was offered first, if the main reason for nonresponse was unavailability for a face-to-face interview. If the non-respondents still declined, a shorter person nonresponse interview was offered. This collected only core demographic and trip information. If the nonrespondent did not want to complete the person nonresponse interview form, a person non-interview form was offered; information was collected by proxy. The results of this nonresponse study provided some useful insights into the characteristics of nonrespondents to a face-to-face interview. Interestingly, the average total number of trips made by nonresponding households was not found to differ significantly from those of responding households. However, nonresponding households were found to use train and walk significantly more than responding households. It was also found that nonresponding households were more likely to live in units or apartments, were more likely to be renters, were more likely to be aged between fifteen and forty-nine, and were more likely to be full-time workers. About 60 per cent of those who answered the nonresponse survey indicated that they had not responded because of a lack of interest in the survey or simply because they did not want to take part.

Finally, an extensive account of a nonresponse survey is given by Stopher et al. (2008b). This nonresponse survey was carried out specifically as a research task to elaborate on the need for and potential results from a nonresponse survey. Two instruments were designed and used in this survey, one for refusers and one for terminators. Both surveys contained a number of questions about why the nonresponder had chosen not to participate, and when he or she would have preferred to have been contacted, followed by some brief questions on demographics and travel patterns. However, the terminators were also asked to respond to a stated choice experiment. A $2 pre-incentive was sent to all the sample members. Both $10 and $20 post-incentives were offered to some of the households. Four methods of surveying were used: post, internet, telephone interview, and face-to-face interview. Because of budget limitations, the face-to-face interview could be used on only a small subset of the total sample. For the terminators, a total response of 26.6 per cent was obtained, with responses from
all four modes of surveying and from all incentive categories. For the refusers, 30.3 per cent responded, also from all survey modes and all incentive levels. There were insufficient responses to provide reliable estimates of differences in response rate by incentive. The majority of responses were obtained from the postal option, for which the original main survey was conducted by CATI for recruitment and retrieval, with a postal survey form that was sent out after recruitment and constituted the basis of the retrieval of data.

Detailed results of this effort are provided by Stopher et al. (2008b), and they are not repeated here. However, it is interesting to note that, among the reasons for the terminators not to complete the survey, being called at a bad time and not having sufficient time to complete the survey were the two most frequently cited reasons, followed closely by problems of getting other family members to take part. For the refusals, the reason cited most often was the fear that this was a marketing deal or scam, the next was that it was a bad time, and the next was not having the time to do the survey (sight unseen, in the case of refusals).

These results point to two principal conclusions that bear out issues raised elsewhere in this book. First, the initial approach is extremely important, especially to reassure potential respondents that the survey is legitimate and is not a marketing deal or some form of scam. Second, any type of survey mode that requires respondents to respond at a specific time – i.e., when called on the telephone or when an interviewer comes to the door – is likely to suffer from a reduced response rate, because of the lack of free time that people perceive that they have and the importance they attach to controlling when they do certain things. Survey modes that allow the respondent to choose a time of his or her convenience or that offer multiple modes for response, including self-selected times, are more likely to obtain a higher response rate.

One other result that has been found in most nonresponse surveys, including those mentioned in this chapter, is that nonrespondents tend to differ from respondents on the behavioural issues that are at the heart of the survey. In travel surveys, it has been found that people who do not respond tend to be more likely to be those who travel a great deal and those who travel very little. This means that these two extreme groups are poorly represented in the data, and the resulting information on how much people travel is likely to be biased.

Finally, it needs to be noted that nonresponse surveys do not solve the nonresponse problem. Rather, they provide information about the nature and severity of the nonresponse problem in a survey and may provide pointers to what can be done to improve survey designs. They are invaluable for determining what biases are likely to exist in the completed survey, and are the only source for this information. It has been found repeatedly that simple comparisons between the demographics of the population and the demographics of the sample are insufficient to identify the real biases in a survey. Moreover, although demographic factoring may help in making the survey more representative, if the behaviours of those who do not respond are sufficiently different from the respondents, no amount of demographic adjustment will correct such biases.

20.3 Item nonresponse

Item nonresponse arises when survey respondents complete most of the survey task but omit to respond to one or more questions. Item nonresponse leads to the creation of missing values in the resulting data, and may require responses containing missing items to be excluded from some parts of the analysis of the resulting data. This effectively reduces the sample size for some of the analysis, and may also lead to larger than anticipated sampling error in some of the results from the survey.

Item nonresponse is likely to arise from three primary situations. The first is when a question is perceived as threatening or invasive of the privacy of the respondent. For example, many people find it unacceptable for a survey to ask questions about personal or household income and may refuse to answer such questions. Second, respondents may find that a question is poorly worded or the response categories do not fit their particular situation, in which case they may refuse to answer the question either because they do not understand it or because they are unable to find a response that they feel matches their answer to the question. Ambiguity in the question wording is often at the root of such problems. Third, respondents may inadvertently skip answering a question, simply because they did not see the question, or thought that the question did not apply to them.

Careful attention to question wording, survey layout, and the categorisation of responses can address many of the foregoing issues and reduce item nonresponse. Question placement is also an effective method of counteracting sensitivity to questions or perception of threats in questions. Much of this is discussed in Chapters 9 and 11. Often, sensitive or potentially threatening questions should be placed near the end of the survey. This is effective in reducing the perception of threat or an invasion of privacy, because the respondent has already provided a great deal of information by this time and may no longer perceive the question as being threatening or invasive, given all that has gone before. It is also effective for gaining improved unit response, in that early sensitive or invasive questions may lead to outright refusal to complete the survey, while late questioning on these issues may lead only to some item nonresponse.

Item nonresponse may also arise when a person provides an obviously incorrect response to a question, either in an effort to damage the survey (survey respondent vandalism), or in an effort to avoid giving information that the respondent either does not know or is unwilling to divulge. In this case, although an apparent response is given, the response will be found, on subsequent checking, to be unreasonable, inappropriate, or inconsistent with other responses in the survey. Such data items also require repair in much the same way as outright missing information.

20.3.1 Data repair
When item nonresponse does occur in a survey, as it often will no matter how well designed the survey is, because of both human nature (inadvertent skips of relevant questions) and the importance of asking some sensitive or invasive questions, the next potential step is to reduce the impact of item nonresponse by repairing the data. Data
repair refers to inserting values in the data for either missed or obviously erroneous data, so as to avoid having missing values and requiring that some observations are dropped from some analyses. However, data repair is a very sensitive issue. There are good ways and bad ways to repair data. Good ways are lacking in bias and alter means and variances in the data in ways that are expected and potentially useful, while bad ways would result in introducing new biases into the results and potentially damaging the data set for future potential use. Bad ways may also cause changes in means and variances that are incorrect or harmful.

While discussing data repair for nonresponse, it is also appropriate to consider data repair for invalid or incorrect responses. Because the topic here is human participatory surveys, there is always a likelihood that people will provide responses that are obviously erroneous, whether because of misunderstanding (or the too rapid reading of a question), mistake, or malicious intent. In the author’s experience in travel surveys, two common errors are reporting that a person is not a worker, yet finding that this same respondent makes one or more trips to work, and reporting being a licensed driver, or driving a vehicle to get somewhere, by a respondent who is legally too young to hold a driving licence. A few years ago a household travel survey was conducted in a Californian city and produced results suggesting that a substantial proportion of children aged between eleven and thirteen were driving to school. This error actually arose because of poor wording in the ‘Means of transport’ question, for which respondents apparently interpreted the phrase ‘driving to school’ as being the same as ‘driven to school’. Because the erroneous question was not discovered until after the survey had been completed, it became necessary to repair the data to correct these entries.

In the subsequent subsections of this chapter, data repair for missing data and repair to correct obviously erroneous or inconsistent information are treated together, because the procedures are generally equally applicable. Indeed, the first step in data cleaning from a survey is usually to look for inconsistent responses and responses that are out of range to particular questions. On finding such responses, the usual initial step is to replace the obviously incorrect response with the missing data code for that variable. The subsequent step would then be to replace the missing value with a repaired value.
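As a minimal sketch of that initial cleaning step, the fragment below recodes out-of-range and logically inconsistent responses to a missing-data code before any repair is attempted. The column names, the consistency rules, and the missing code of -9 are assumptions for illustration only.

```python
# Sketch: recode out-of-range or inconsistent responses to a missing-data code
# before repair. Column names, rules, and the -9 missing code are hypothetical.
import pandas as pd

MISSING = -9

def recode_to_missing(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Out-of-range ages are treated as missing
    df.loc[(df["age"] < 0) | (df["age"] > 110), "age"] = MISSING
    # 'Not a worker' yet reporting work trips: worker status treated as missing
    df.loc[(df["is_worker"] == 0) & (df["work_trips"] > 0), "is_worker"] = MISSING
    return df
```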

Flagging repaired variables
Whenever data repair is undertaken, there should be flags placed in the data that indicate clearly which values have been repaired. For example, for each survey question in which repair is undertaken, there may be added to the data set a new binary variable (0, 1) that indicates if the response to that question was repaired or is the original value provided by the respondent. Such a flag allows future analysts, for example, to convert all repaired values to missing and repeat their analysis without the repaired data, to check whether the data repairs resulted in significant changes to analytical results. Clearly, the desire for data repair is that it increases the number of observations that are available for analysis, thereby reducing the amount of error in the results, but without significantly altering the results of analysis.
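A minimal sketch of such flagging is shown below: each repaired question gains a companion 0/1 variable recording whether the stored value is original or repaired. The data frame and column names are hypothetical.

```python
# Sketch: repair a column and record a 0/1 flag so the repair can be reversed later.
# The column names and the -9 missing code are hypothetical.
import pandas as pd

def repair_with_flag(df: pd.DataFrame, column: str, repaired_values: pd.Series,
                     missing_code: int = -9) -> pd.DataFrame:
    df = df.copy()
    needs_repair = df[column] == missing_code
    df[column + "_repaired"] = needs_repair.astype(int)      # 1 = value was repaired
    df.loc[needs_repair, column] = repaired_values[needs_repair]
    return df
```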

Ideally, there should exist two versions of the data set from any survey. The first version is the raw, unrepaired data, representing exactly what respondents provided as their answers to the survey. The second version is the repaired data set, including flags to indicate where repairs have been made, and with the repaired values replacing the missing or obviously incorrect responses that people provided.

Inference
The first method of repair for data is that of inference. Inference involves using the valid responses from a specific respondent to infer the correct response for an incorrect or missing response to another question. For example, a respondent may inadvertently have missed responding to a question asking about whether or not the respondent holds a valid driving licence. Examining the other responses to the survey may reveal that the person has made a number of trips by driving a car. While certainly not foolproof, it is a reasonable inference to draw that the person does have a valid driving licence, albeit allowing for the fact that some people who do not hold such a licence may still take the risk of driving a car, although they are also relatively unlikely to risk reporting such behaviour in a survey.

Another example of inference might relate to a response to a question about the respondent’s age. Many respondents, particularly female respondents, do not like to respond to questions about their age. Although it is unlikely to be possible to identify age with any precision, it is certainly possible to deduce the probable age range by examining responses to questions such as work/student status, driving licence holding, and relationship to others in the household. This may not allow more than identifying a broad age group, such as under eighteen, eighteen to sixty-five, and over sixty-five, but this may be sufficient to be useful in certain analyses and prevent the observation from having to be dropped for missing data.

Inference may also be used when a survey involves all the members of a household and one or more individuals within the household has not responded to one or more questions. In this case, inference may also be undertaken by looking at the responses of other members of the same household, which may reveal missing information such as income, work status, relationship, age, etc.

The principal caution on inference is that it should proceed with an objective set of rules on how to infer values for missing variables and should pass the test of objectivity, whereby two or more persons, operating independently, arrive at the same inferred value from applying the rules that have been set up. If subjective judgement is involved in inferential repairs, they are suspect and should not be implemented. Indeed, it is recommended that, when inference is used to repair data, the documentation on the data set should contain a clear discussion about the data repair conducted and should relate exactly what rules were used to repair data by inference.
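The fragment below is a sketch of how such an explicit rule might be written down in code for the driving-licence example above, so that the inference is reproducible rather than judgemental. The column names, codes, and the licensable age of seventeen are assumptions for illustration.

```python
# Sketch of a written-down inference rule for a missing driving-licence answer.
# Column names, codes, and the licensable age of 17 are hypothetical.
import pandas as pd

MISSING = -9

def infer_licence(person: pd.Series, car_driver_trips: int) -> int:
    """Return the reported value, an inferred value when a rule fires, or MISSING."""
    if person["has_licence"] != MISSING:
        return person["has_licence"]               # keep the reported answer
    if car_driver_trips > 0 and person["age"] >= 17:
        return 1                                   # rule: drove a car and is of licensable age
    return MISSING                                 # no rule applies; leave missing
```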

Imputation
The other alternative to inference is imputation. The distinction between inference and imputation is that inference uses data from the person who omitted certain
information or reported it erroneously, or uses data from other members of the same household, whereas imputation uses sources outside the individual respondent to repair the data or uses historical data in some instances. There are at least nine recognised methods of imputation, and the following subsections discuss each one of these, describing the method and commenting on how useful it is. The methods discussed in this chapter are historical imputation, average imputation, ratio imputation, regression imputation, cold-deck imputation, hot-deck imputation, expectation maximisation, multiple imputation, and artificial neural network (ANN) imputation.

It is important to consider what the outcome of imputation should be. Imputation should generally result in increases in variance. Most analytical procedures depend on variance to discern relationships and to qualify information on means in the data. Missing data results in a reduction in variance, in general, because fewer observations are available for estimating the variance. Imputation that leaves variance unchanged or even reduces it further is generally not helpful. In the following subsections, each of the methods of imputation is discussed specifically in terms of what might be expected to result from the method in terms of the mean and the variance of the attribute whose values include imputed values.

Historical imputation
This form of imputation is generally applicable only in a longitudinal survey, such as a panel survey, or in a repeated cross-sectional survey. It also applies only to situations in which the variables requiring imputation are quite stable over time. In using this method of imputation, in the case of a longitudinal study, the survey researcher would use a value for the variable from a previous wave of the survey for the same individual or household. For example, if incomes are considered to have been quite stable, perhaps with increases of around 3 to 4 per cent per year, and a household that had reported income on a previous occasion now did not report income, then the income reported on the previous wave, adjusted by an appropriate amount to reflect annual increases, would be substituted for the missing data.
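A minimal sketch of this panel-based repair is given below: missing incomes in the current wave are filled with the previous wave's value for the same household, scaled by an assumed annual growth rate. The data frames, the household identifier, and the 3.5 per cent growth figure are illustrative assumptions.

```python
# Sketch of historical imputation in a panel: fill missing current-wave incomes with
# the previous wave's income for the same household, scaled by an assumed growth rate.
# Data frames, the hh_id key, and the 3.5 per cent growth rate are hypothetical.
import pandas as pd

def impute_income_from_previous_wave(current: pd.DataFrame, previous: pd.DataFrame,
                                     growth: float = 0.035) -> pd.DataFrame:
    merged = current.merge(previous[["hh_id", "income"]], on="hh_id",
                           how="left", suffixes=("", "_prev"))
    missing = merged["income"].isna()
    merged.loc[missing, "income"] = merged.loc[missing, "income_prev"] * (1 + growth)
    return merged.drop(columns="income_prev")
```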

In a repeated cross-section survey, historical data would be used by looking at the averages for the missing data in a previous cross-section of the same population, and applying the average value from the previous cross-section to the current missing values. In this respect, this is not dissimilar to the next method of imputation, namely average imputation.

The issue with the first method of doing historical imputation is that it is limited to situations in which a panel survey is being conducted. In such situations, it represents a reasonable method of imputation, and should cause relatively sound changes in both the means and variances of the repaired variables. In the case of repeated cross-sectional surveys, the problems of using historical imputation will be similar to those discussed in the next subsection for average imputation. Using averages will tend to leave the mean unchanged and to reduce the variance.

Average imputation
Average imputation involves replacing missing or erroneous variables with the average value in the rest of the data set for that attribute value. For example, in a data set, questions may have been asked on each of age and income. Both questions received some number of missing responses. In this method of imputation, the complete data would be used initially to estimate the overall mean of income and the overall mean of age. For each observation that is missing one or both of these variables, the average value is then inserted. This method is, clearly, quite easy to apply, and most statistical software packages would be quite readily able to implement this procedure.

However, it is necessary to consider what this imputation does to the data for analysis. First, in respect of the mean, there will be no change, because replacing missing values with the mean will obviously have no effect on the mean values from the imputed data. Second, in respect of the variance, this will actually be reduced by this process. The variance will be reduced because each imputed value will add zero to the sum of squares of deviations about the mean (because each imputed value is at the mean), while the sum of squared deviations will now be divided by a larger number of observations than before, thereby reducing the value of the variance. This is an undesirable result.

In effect, imputation by using the overall mean will add no new information and will reduce the variances in the data set. Even segmenting the data into classes by other variables that may be reported more completely, and replacing missing values by class means rather than overall means, will not rectify this problem. It will still result in a reduction in the variances within each population class, and it will not lead to increased variance between segments.
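The effect is easy to verify numerically, as in the small sketch below with invented values: the mean of the 'repaired' variable is unchanged, while its variance falls.

```python
# Sketch: mean imputation leaves the mean unchanged but shrinks the variance.
# The values are invented purely to illustrate the effect.
import numpy as np

reported = np.array([20.0, 35.0, 42.0, 58.0, 61.0, 74.0])     # valid responses
filled = np.concatenate([reported, [reported.mean()] * 3])     # three missing values set to the mean

print(reported.mean(), filled.mean())                  # identical means
print(reported.var(ddof=1), filled.var(ddof=1))        # variance is smaller after imputation
```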

Ratio imputation
Ratio imputation uses a secondary or auxiliary variable that is closely related to the variable that requires imputation. A condition of the secondary variable is that it should have few, if any, missing values, so that it will be present in most cases when the primary variable for imputation is missing. In this case, the underlying assumption is that the two variables – the primary variable requiring imputation and the secondary variable used to impute values – are related by a constant ratio. The ratio can be established for the entire sample, or can be estimated separately for different classes of the sample. In general, the imputed value of a variable will be obtained from equation (20.6):

$$y_{ik} = \frac{\bar{y}_k}{\bar{x}_k}\, x_{ik} \qquad (20.6)$$

where:

$y_{ik}$ = the value to be imputed for observation i of class k;
$\bar{y}_k$ = the average value of y in class k;
$\bar{x}_k$ = the average value of the secondary variable x in class k; and
$x_{ik}$ = the value of x for observation i of class k.

If classes of the population are not used, then the subscript k is dropped from equation (20.6). Probably the biggest problem with this method is finding an appropriate variable x that is closely related to the variable to be imputed, y, and that is reported much more completely than y. Suppose that the variable with missing values to be imputed is the number of vehicles owned or available to the household, and the secondary variable is the number of workers in the household. The sample might be divided into classes based on age and level of education. Within each class, the ratio of the average vehicles available to the average number of workers is established. Then, for each missing value of vehicles available, the number of workers is multiplied by the ratio of the average number of vehicles to the average number of workers, to produce an estimate of the number of vehicles available to the household.
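A sketch of this calculation, following equation (20.6) for the vehicles and workers example, is shown below. The column names and class variable are assumptions for illustration.

```python
# Sketch of ratio imputation following equation (20.6): within each class k, a missing
# vehicle count is imputed as workers * (mean vehicles / mean workers) for that class.
# The 'vehicles', 'workers', and 'class' column names are hypothetical.
import pandas as pd

def ratio_impute(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    complete = df.dropna(subset=["vehicles"])
    means = complete.groupby("class")[["vehicles", "workers"]].mean()
    ratios = means["vehicles"] / means["workers"]          # ybar_k / xbar_k for each class
    missing = df["vehicles"].isna()
    df.loc[missing, "vehicles"] = (df.loc[missing, "workers"]
                                   * df.loc[missing, "class"].map(ratios))
    return df
```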

Ratio imputation will generally have the desired effect of changing the mean and the variance, with the potential that the variance may increase with the added imputed variables. The drawback is that two variables are rarely sufficiently closely related to provide a useful ratio imputation, or, if there are two closely related variables, both may be equally poorly reported by respondents. For example, the two variables most frequently missing in household- or person-based surveys are age and income, and neither one of these is sufficiently strongly related to other variables that are well reported to offer the opportunity for ratio imputation.

Regression imputation
This method is closely related to ratio imputation, except that there may now be more than one secondary variable, and there is also room in the regression procedure for there to be a constant effect in the relationship. Regression imputation can take the place of ratio imputation when there is not a single secondary variable that is closely related to the variable whose missing values are to be imputed, as it allows for multiple variables to be used. Unlike the previous methods, regression imputation can also provide either a deterministic result or a stochastic one. A deterministic result would be the situation in which the regression equation is solved for the missing value, so that an identical set of values of the secondary variables will always provide the same value for the imputed variable. A stochastic result would be obtained if a random error term is assumed to exist and is added to the deterministic prediction, so that the value provided will vary from time to time, even when the input values are the same.
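The sketch below illustrates both variants: a regression is fitted on the complete cases, missing values are predicted from the secondary variables, and, in the stochastic version, a random draw from the residual distribution is added to each prediction. The target and predictor column names are assumptions for illustration.

```python
# Sketch of regression imputation: fit on complete cases, predict missing values, and
# (stochastic variant) add a random residual so that imputed values retain variability.
# The target and predictor column names are hypothetical.
import numpy as np
import pandas as pd

def regression_impute(df: pd.DataFrame, target: str, predictors: list,
                      stochastic: bool = True, seed: int = 0) -> pd.DataFrame:
    df = df.copy()
    rng = np.random.default_rng(seed)
    complete = df.dropna(subset=[target] + predictors)
    X = np.column_stack([np.ones(len(complete))] + [complete[p] for p in predictors])
    beta, *_ = np.linalg.lstsq(X, complete[target].to_numpy(), rcond=None)
    resid_sd = np.std(complete[target].to_numpy() - X @ beta, ddof=X.shape[1])
    missing = df[target].isna() & df[predictors].notna().all(axis=1)
    Xm = np.column_stack([np.ones(int(missing.sum()))] + [df.loc[missing, p] for p in predictors])
    pred = Xm @ beta
    if stochastic:
        pred = pred + rng.normal(0.0, resid_sd, size=len(pred))   # random error term
    df.loc[missing, target] = pred
    return df
```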

Regression imputation suffers from similar limitations to ratio imputation. First, regression imputation requires there to be a finite number of other variables that are being measured in the survey that correlate sufficiently strongly with the variable whose values are to be imputed to give a credible regression relationship. Second, apart from some limited possibilities to use non-linear transforms of the secondary variables, it is normally necessary for a linear relationship to exist between the secondary variables and the primary variable that is to be imputed.

Hence, when the assumptions underlying the regression solution are upheld and the secondary variables are reported well, compared to the primary one, regression imputation will be likely to change the mean and variance in expected and desirable
ways. Thus, regression imputation will be likely to add information to the data set and to result in an overall improvement in the analyses that can be made of the data. When the relationships are weak, or highly non-linear, regression imputation is unlikely to produce good results.

Cold-deck imputation
Cold-deck imputation refers to imputation in one data set by using values from another related data set. The terminology of ‘cold deck’ is derived from past days when survey data were recorded on punched cards, with the cards then forming a ‘deck’. In this case, the adjective ‘cold’ refers to the fact that the deck of data cards being used to provide the missing values is from an older, and therefore ‘colder’, survey than the subject survey for which imputation is to be undertaken.

In general, this method proceeds by first identifying certain variables in the two surveys that are common and that have some relationship to the variables for which repair is desired. For example, income, again, may be the variable for which repair is desired. First, income must have been collected in both data sets. Second, the analyst must identify certain other variables that correlate to income. These might include such variables as vehicle ownership, the renting/ownership status of the home, the number of workers in the household, and the level of education completed. These four variables must then be in both data sets. Using these variables, the data in the cold deck are grouped by the value ranges of the four variables, and one of two procedures can be followed. In the less powerful method of imputation, the data from the prior survey would be used to estimate the average values of income for each of the four-way classifications of the other variables. Each observation in the new survey would be similarly classified and the average income, in this case, for the four-way class into which the observation falls would be used to replace the missing value in the new survey. In the more powerful method, a random draw would be made from the prior survey, and the value of income for the randomly drawn observation in a particular four-way class would be assigned to the missing income value in the observation in the new survey.
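The sketch below illustrates the more powerful, random-draw variant: a missing income in the new survey is replaced by a value drawn at random from the donor survey within a cell defined by classification variables present in both data sets. All names are assumptions for illustration.

```python
# Sketch of cold-deck imputation (random-draw variant): a missing income is replaced by
# a value drawn at random from a prior (donor) survey, within a cell defined by
# classification variables common to both data sets. Names are hypothetical; the sketch
# assumes two or more cell variables so that the group keys are tuples.
import numpy as np
import pandas as pd

def cold_deck_impute(new: pd.DataFrame, donor: pd.DataFrame, target: str,
                     cell_vars: list, seed: int = 0) -> pd.DataFrame:
    new = new.copy()
    rng = np.random.default_rng(seed)
    cells = {key: group[target].to_numpy()
             for key, group in donor.dropna(subset=[target]).groupby(cell_vars)}
    for idx, row in new[new[target].isna()].iterrows():
        key = tuple(row[v] for v in cell_vars)
        if key in cells:                                    # only impute when a donor cell exists
            new.loc[idx, target] = rng.choice(cells[key])
    return new
```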

Whichever of the two methods is used, there is likely to be change in the mean and the variance of the repaired survey. However, when means are used from the donor survey (the cold deck), the changes to means and variances in the repaired data are likely to be small, especially if the means and variances in the donor survey are close to those in the survey to be repaired. On the other hand, if the values in the donor survey are not close to those in the survey to be repaired, this may be a sign that the populations from which the two surveys have been drawn are too dissimilar and that the donor survey should not have been used.

As with ratio and regression imputation, probably the greatest difficulty will lie in finding a situation in which cold-deck imputation is appropriate to use. In this case, it is necessary to have a second survey that has been carried out, either in another location or at an earlier time, to use as the source for the imputed values. In the event that it is a cross-sectional survey from an earlier time for the same population, then cold-deck imputation becomes a special case of historic imputation.

Hot-deck imputation
Hot-deck imputation involves drawing the imputation values from the same data set. In this case, a decision is first made on how to segment the data set into classes. Within each class, the data are separated into two further groups: complete or donor observations, and incomplete observations (the ones for which imputation is required). Imputation can then proceed either sequentially or randomly. In sequential imputation, the donor observations are selected in sequence from the first to the last to donate values of the missing variable to observations in the same class that are incomplete. If there are more incomplete observations in a class than there are complete or donor observations, then the donor observations are used a second time in sequence. In random imputation, the donor observations are sampled randomly and used to donate a value to the incomplete observations. Alternatively, the donor observations are shuffled into a random order and then used sequentially.

The advantages of hot-deck imputation over cold-deck imputation are that the donor values are from the same population as the missing observations and that the data are contemporaneous. The advantages of either hot-deck or cold-deck imputation over the other methods are that both these methods can be used to assign values from a distribution of available values to the missing observations, so that means and variances are both affected. However, hot-deck imputation is more likely to provide values that are within the same distributional limits.
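A sketch of the random variant of hot-deck imputation is shown below: within each class, incomplete observations receive a value drawn at random from the complete (donor) observations in the same survey. Column names are assumptions for illustration.

```python
# Sketch of random hot-deck imputation: within each class, a missing value is replaced
# by a value drawn at random (with replacement) from the donor records of that class.
# The target and class column names are hypothetical.
import numpy as np
import pandas as pd

def hot_deck_impute(df: pd.DataFrame, target: str, class_var: str, seed: int = 0) -> pd.DataFrame:
    df = df.copy()
    rng = np.random.default_rng(seed)
    for _, group in df.groupby(class_var):
        donors = group[target].dropna().to_numpy()
        recipients = group.index[group[target].isna()]
        if len(donors) > 0 and len(recipients) > 0:
            df.loc[recipients, target] = rng.choice(donors, size=len(recipients))
    return df
```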

Expectation maximisation
This method was originally explained and named by Dempster, Laird, and Rubin (1977), and is described in an imputation context by McLachlan and Krishnan (1997). It consists of a two-step iterative procedure. In the first step, which is called the expectation step, a value is assigned to the missing observation. This is followed by the maximisation step, in which maximum likelihood is used to estimate the parameters of a model that would be used to estimate the imputation values. These two steps are repeated iteratively, with the second step being used to provide new estimates of the missing values for the repeat of the first step. The process is iterated until the imputed values converge to a stable solution.

For example, one might, at the first step, assign to the missing values the average value from the entire data set, or for a particular class of observations in the data set. In the second step, one estimates the maximum likelihood values of the parameters of a model that could estimate the missing values, using the actual values from step 1 as the initial starting values. From the model that results from step 2, new estimates are made of the missing values, in the repeat of step 1, and then the parameter values are re-estimated by maximising the likelihood function for the model once more. This procedure continues until satisfactory convergence is obtained.

Mathematically, the procedure can be described as follows. Suppose that there is a likelihood function L(θ; x,z), where θ is the vector of parameters, x are the observed data, and z are the missing values. The maximum likelihood estimate is determined by

Page 485: Collecting managing and assessing data using sample surveys

Nonresponse458

the marginal likelihood of the observed data L(θ; x), but this is often intractable. To find the maximum likelihood estimate, the following two steps are applied iteratively.

Step 1 (expectation) is to calculate the expected value of the log of the likelihood function with respect to the conditional distribution of the missing z values, given the observed x values and the current estimate of the parameters $\theta^{(l)}$, as in equation (20.7):

$$Q\left(\theta \mid \theta^{(l)}\right) = E_{z \mid x,\, \theta^{(l)}}\left[\log L(\theta;\, x, z)\right] \qquad (20.7)$$

Step 2 (maximisation) is to find the parameters $\theta^{(l+1)}$ that maximise the value given by equation (20.8):

$$\theta^{(l+1)} = \arg\max_{\theta}\, Q\left(\theta \mid \theta^{(l)}\right) \qquad (20.8)$$

Because the values, in this case, are developed from relationships within the survey data, it can be expected that both the means and the variances will change, and that the changes will be appropriate ones.
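As a highly simplified illustration of these two steps, the sketch below applies EM to one special case: a variable y that is missing for some records, an always-observed variable x, and an assumed linear model with normal errors. The E-step fills in the expected values (and expected squares) of the missing y; the M-step re-estimates the intercept, slope, and error variance; and the two steps are repeated until the parameters stabilise. The setting and all names are illustrative assumptions, not a general-purpose EM routine.

```python
# Highly simplified EM sketch: y is missing for some records, x is fully observed, and
# (x, y) are assumed to follow y = b0 + b1*x + normal error. Illustrative only.
import numpy as np

def em_impute(x: np.ndarray, y: np.ndarray, max_iter: int = 100, tol: float = 1e-8):
    y = y.astype(float).copy()
    obs = ~np.isnan(y)
    b0, b1, sigma2 = y[obs].mean(), 0.0, y[obs].var()        # crude starting values
    X = np.column_stack([np.ones_like(x), x])
    for _ in range(max_iter):
        # E-step: expected y (and expected y^2) for the missing cases
        y_fill = np.where(obs, y, b0 + b1 * x)
        ey2 = y_fill ** 2 + np.where(obs, 0.0, sigma2)        # E[y^2] = prediction^2 + sigma^2
        # M-step: re-estimate parameters from the expected sufficient statistics
        beta, *_ = np.linalg.lstsq(X, y_fill, rcond=None)
        fitted = X @ beta
        sigma2_new = np.mean(ey2 - 2 * y_fill * fitted + fitted ** 2)
        change = abs(beta[0] - b0) + abs(beta[1] - b1) + abs(sigma2_new - sigma2)
        b0, b1, sigma2 = beta[0], beta[1], sigma2_new
        if change < tol:
            break
    return np.where(obs, y, b0 + b1 * x), (b0, b1, sigma2)
```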

Multiple imputation
In multiple imputation, multiple values are imputed for each missing value, thus producing a distribution of values for each. The method is described by Rubin (1987) and is implemented in some statistical software packages (e.g., SAS – Yuan, 2008). Essentially, using one of three possible methods, multiple values are imputed for each missing data value, thereby permitting a mean and variance of the missing data value to be obtained. This procedure reflects the inherent uncertainty in imputing values, and provides a better base for drawing valid statistical inferences from the data. The details of the methods for producing the multiple values are not provided in this book; the reader is referred instead to Rubin (1987) or Yuan (2008). Multiple imputation should provide better estimates of means and variances for the variables subject to imputation than any of the preceding methods. Means should change and variances should increase, with the addition of new but plausible information through the imputation process.
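Without reproducing those methods, the sketch below shows the general shape of a multiple-imputation analysis for a simple quantity such as a mean: m stochastically imputed data sets are created, the quantity is estimated in each, and the estimates are pooled using Rubin's combining rules (within-imputation variance plus an allowance for between-imputation variance). It reuses the hypothetical regression_impute() sketch shown earlier and is illustrative only.

```python
# Sketch of multiple imputation for a mean: impute m times with different random draws,
# estimate the mean in each completed data set, and pool with Rubin's rules.
# Reuses the hypothetical regression_impute() sketch from the regression imputation section.
import numpy as np

def pooled_mean(df, target, predictors, m: int = 5):
    estimates, variances = [], []
    for seed in range(m):
        completed = regression_impute(df, target, predictors, stochastic=True, seed=seed)
        y = completed[target].dropna()
        estimates.append(y.mean())
        variances.append(y.var(ddof=1) / len(y))       # sampling variance of the mean
    q_bar = float(np.mean(estimates))                   # pooled point estimate
    w = float(np.mean(variances))                       # within-imputation variance
    b = float(np.var(estimates, ddof=1))                # between-imputation variance
    total_variance = w + (1 + 1 / m) * b                # Rubin's total variance
    return q_bar, total_variance
```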

Imputation using neural networks
Wilmot and Shivananjappa (2003) report on their experiments with the use of artificial neural networks as an alternative method for the imputation of missing values. ANNs are used in a number of different applications and are a type of artificial intelligence. They are a mathematical representation of the human brain in the form of nodes and links, with the nodes in at least three layers and the links representing the transition or information flows between nodes. The three layers comprise an input layer and an output layer and then at least one processing layer. In the application to data imputation, one processing layer was used. A diagram of the ANN is shown in Figure 20.2.

To use an ANN to impute data, it is necessary first to provide the ANN with the opportunity to ‘learn’ about imputation, and then to use the ANN to process the data that are missing certain values. Hence, the data are first separated into complete data and incomplete data. Within the complete data, a substantial number of observations are chosen to train the neural network. The training is done by allowing the neural network to create weights associated with the links in the neural network, which will produce as nearly as possible the observed values of the variables for which, in this case, it is desired to use the ANN to impute.

To start the ANN, it is necessary to select certain variables in the data that are thought to be related to the variable whose missing values are to be imputed. The input nodes are assigned to the input variables, with one node for each value of a categorical variable, and one node for each continuous variable. In the work by Wilmot and Shivananjappa (2003), the input nodes consisted of the number of vehicles in the household (one node), the median housing unit value in the census tract of the household (one node), the median household income of households in the census tract (one node), the age of the oldest worker in the household (one node), and the education status of the oldest worker in the household (six nodes). Thus, the ANN used ten input nodes. The output layer consisted of one node for each household income level to be imputed, for which there were eighteen nodes (one representing each income level). Whichever node had the highest value was assumed to be the income interval for the household. In the learning phase, the ANN model is provided with the correct output data, and uses this to determine weights on each of the links shown in Figure 20.2.
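A rough sketch of this kind of model, using scikit-learn's MLPClassifier as a stand-in for the ANN described (and not the authors' own implementation), is given below. The predictor columns and the categorical income target are hypothetical, and categorical predictors such as education status would first need to be converted to dummy (one-node-per-category) variables, as described above.

```python
# Sketch of neural-network imputation in the spirit of the approach described above,
# using scikit-learn's MLPClassifier as a stand-in. Column names are hypothetical, and
# categorical predictors are assumed to have been dummy-coded already.
import pandas as pd
from sklearn.neural_network import MLPClassifier

def ann_impute_income(df: pd.DataFrame, predictors: list, target: str = "income_class") -> pd.DataFrame:
    df = df.copy()
    complete = df.dropna(subset=[target] + predictors)            # training ("learning") records
    missing = df[target].isna() & df[predictors].notna().all(axis=1)
    model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(complete[predictors], complete[target])              # learn the link weights
    df.loc[missing, target] = model.predict(df.loc[missing, predictors])
    return df
```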

In application, then, complete data are used first to train the ANN model, following which the data that need repair are provided to the ANN model, which provides imputed values for each of the missing values. The imputed values are based on the weights assigned to the links in the neural network that provided the best results for matching to the complete data. Wilmot and Shivananjappa (2003) propose the ANN model as an improvement on hot-deck imputation, although it is similar to hot-deck imputation in that it uses data from the same survey as the one for which repair is needed.

Figure 20.2 Representation of a neural network model (input layer; hidden or processing layer; output layer)

In comparisons with hot-deck imputation, Wilmot and Shivananjappa found the ANN model to perform equally as well as hot-deck imputation, and, for some variables, better than hot-deck imputation. These comparisons were undertaken by falsely making certain complete observations have missing data and then comparing to see how closely each of hot-deck and ANN imputation matched the correct value. They conclude that the ANN was demonstrated to be a feasible method of imputation, although it may not offer significant advantages over hot-deck imputation.

In terms of the evaluation of imputation methods in this chapter, it would be expected that ANN imputation would change both the mean and the variance, and that these changes would represent new information. Hence, variances are likely to be increased from this method, in a similar manner to any of the last four of the imputation methods discussed here.

Summary of imputation methods
There are a wide variety of imputation methods available to use for repairing missing data. The methods vary considerably, both in the quality of the information likely to be provided through imputation and in the effects of the imputation on the means and variances of the variables that are the subject of imputation. Methods such as average imputation and the ratio and regression methods are among the simpler procedures that can be applied. However, they are also the methods that produce the least helpful results. The more complex methods, such as hot- and cold-deck imputation, expectation maximisation, and multiple imputation, are more complex to apply but will generally provide better results. Of the more complex methods, cold-deck imputation is expected to provide the least accurate results, with multiple imputation or expectation maximisation providing the best results. However, hot-deck imputation combines reasonable accuracy with a somewhat less difficult application, and is probably the preferred method.

20.3.2 A final note on item nonresponse
As with unit nonresponse, any item nonresponse should be looked at first as an indication that there may be faults or flaws in the design of the survey. It is very useful, in pilot surveys and pretests, to count and then examine the number of missing answers in otherwise valid responses (see also the discussion in Chapter 21 relating to missing data items and survey quality). It is very likely that it will be found that nonresponses occur mainly in the answers to only one or two questions. In this case, the reason for the nonresponses is also likely to be evident. For example, this may flag a sensitive question, a question that invades privacy, a question to which respondents do not have ready answers, or a question that offers an inadequate or inappropriate set of response categories. In all these cases, it should usually be possible to make design changes that will reduce the level of missing data in the final survey. Once more, this emphasises the value of pilot surveys and pretests.

When it is not obvious why respondents are failing to answer a question, it may also be necessary to review the physical layout of the survey to see if the loss of responses might be due to poor question placement, or to confusion about who should answer the question. Again, this can be rectified by appropriate design changes. Nevertheless, some level of item nonresponse is probably inevitable, and it is necessary then to decide whether such missing data will be repaired and, if so, what method of repair will be used. Decisions on this may well hinge on whether or not the client is willing to receive data that contain repairs for missing items, and whether the items that are missed are considered key to the use of the data.

However, as is discussed in Chapter 21, the number of data items requiring repair and the number of missing data items are both measures that may reveal the quality of the data from a survey. Therefore, it is incumbent on the survey designer to do the utmost possible to reduce the levels of missing data responses. Nevertheless, anecdotal data suggest that up to 20 per cent of respondents may refuse to answer questions on income, while about 5 per cent may refuse to answer questions about age. In the area of general demographic data, these are the two variables that are most likely to suffer from item nonresponse.

Strategies to obtain age and income
It is useful, given the broad experience about nonresponse to questions of age and income, to include here some suggestions for ways in which to obtain data on these items, even when respondents are reluctant to provide such information. In the remainder of this chapter, I consider first age and then income, and offer some suggestions for ways in which nonresponse to these questions can be reduced.

Age
Asking respondents to report their age in years is probably the most likely method to result in substantial item nonresponse. Many people find such a question to be invasive or embarrassing and will refuse to answer. A better method is to ask people to indicate the age category into which they fall. However, this can again cause some problems. If the age categories are divided up into decades, such as ‘20–29’, ‘30–39’, ‘40–49’, etc., it is likely that those (especially females) who are in the early years of a new decade – e.g., forty or forty-one, fifty or fifty-one – will find these categories to be ones to which they will continue to object to respond, or will lie. Breaking the age categories at mid-decade – e.g., ‘25–34’, ‘35–44’, etc. – helps on this item, but may again encounter reluctance from those who have turned sixty-five, who find themselves having to admit to being in the age group of sixty-five to seventy-four years of age. Nevertheless, it is worthwhile testing categories in a pilot or pretest.

Somewhat surprisingly, in many Western countries today, the best responses come from asking people to write in their year of birth. Perhaps because this is a less direct way of telling one’s age, or perhaps because many application forms of various types ask for this information, it seems that people are less sensitive to this question than to one requiring the reporting of their age. Even though it is obvious to most people that, having provided their year of birth, it is very quick and easy for the survey researcher to estimate the respondent’s age to the nearest year, the year of birth seems to be more acceptable and to be seen as less intrusive.

In the event that the year of birth is not found to improve responses over age categories, two other strategies exist. One is to give decades for the categories of birth year – e.g., ‘1920–1929’, ‘1930–1939’, ‘1940–1949’, etc. The other is to use the bracketing method, which is discussed further under income in the next subsection of this chapter.

Income
As with age, asking people to write down the exact amount of their household or personal income (appropriately defined as ‘gross’, ‘net’, ‘including interest’, ‘investment income’, etc.) will generate a high level of nonresponse. The most usual method for requesting income is to offer categories. In the case of income, unlike age, the issue is not so much that particular ranges are sensitive as it is the size of each category. If income is asked in North America, Australia, or New Zealand, for example, in categories of $5,000 – e.g., ‘0–$4,999’, ‘$5,000–$9,999’, ‘$10,000–$14,999’, ‘$15,000–$19,999’, etc. – then many people will still find this too close to divulging their exact income, with which they are uncomfortable. In these countries, income categories should probably be at least $10,000 wide, and might better be $20,000 or even $25,000 wide. In the United Kingdom, it may be reasonable to equate this to using categories that are £5,000 to £10,000 wide.

However, even relatively wide income categories may not persuade some respondents to answer the income question. In this case, a bracketing method may be suggested. This is usually possible to use only in either an interviewer survey or possibly a Web survey, because it requires the insertion into the survey of an alternative set of questions in the event that the respondent is unwilling to answer the categorical question. The bracketing method might proceed as follows.

Having ascertained that the respondent is unwilling to respond to the question of income with a set of categories, the interviewer then asks whether the respondent is willing to indicate if his or her income is above or below a particular threshold figure, such as $50,000 or £25,000 (or appropriate other currency). If an answer is gained to this question, the interviewer can proceed and divide the appropriate range again and ask if the income is above or below that figure. For example, if the respondent said it was above $50,000, the next question could be to ask if it is above or below $75,000. Alternatively, if the respondent answered that it was below $50,000, the next question could be to ascertain if it is above or below $25,000. Clearly, this procedure can be continued only until the respondent is unwilling to answer further, but it will probably provide at least some information on income. Care is required in choosing the initial threshold and subsequent ones, to ensure that respondents are not immediately put off and unwilling even to answer this question. Again, it is unlikely that even this bracketing method will produce responses from all respondents, but it may well change a 20 per cent item nonresponse to a less than 10 per cent item nonresponse.
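The sequence can be thought of as a short binary search over income thresholds. The following is a minimal sketch of such an unfolding-bracket routine, as it might be scripted in a CATI system; the thresholds, the $100,000 cap, and the ask_above_below() prompt are illustrative assumptions rather than part of any particular survey instrument.

```python
# Minimal sketch of the bracketing (unfolding) sequence for income: keep halving
# the remaining range until the respondent declines to answer further.
def bracket_income(ask_above_below, lower=0, upper=100_000, start=50_000, max_probes=4):
    """Return the narrowest (lower, upper) income bracket the respondent accepts.

    `ask_above_below(threshold)` should return 'above', 'below', or 'refused'.
    """
    threshold = start
    for _ in range(max_probes):
        answer = ask_above_below(threshold)
        if answer == 'refused':
            break                          # keep whatever bracket has been obtained
        if answer == 'above':
            lower = threshold
        else:
            upper = threshold
        threshold = (lower + upper) // 2   # split the remaining range again
    return lower, upper

# Example: a respondent answers 'above 50,000', then 'below 75,000', then refuses.
answers = iter(['above', 'below', 'refused'])
print(bracket_income(lambda t: next(answers)))   # -> (50000, 75000)
```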


A similar bracketing procedure can be applied to other questions, such as the value of the person’s house, or other financial questions that may produce high item nonresponse, as well as to a question such as age, for which the respondent could be asked if his or her age is above or below forty-five, or, better yet, if he or she was born before or after 1965. Subsequent questions could further define the year of birth or the age of the respondent.


21 Measuring data quality

21.1 Introduction

In the past, many surveys have reported the response rate (see Chapter 20) as the only measure of the quality of the data from a survey. Although the response rate could be considered a necessary measure of data quality, it is by no means a sufficient definition of data quality. The response rate, after all, measures only the proportion of the attempted sample from which what are regarded as complete responses were obtained. It is certainly true that one can immediately assume that a low response rate is likely to mean that the data will be subject to substantial nonresponse bias and may be far from a representative sample of the target population. However, it does not follow that a high response rate indicates good-quality data.

As a general rule, there are a few measures of data quality that can be applied to all surveys, and a few that apply only to particular types of survey. In this chapter, I deal first with some general measures of quality, and then discuss some specific measures that may apply to only one type of survey. The latter are provided only as examples, to assist the reader to develop such measures for the specific type of survey with which he or she may be concerned. Following this, the chapter deals with validation surveys, and it concludes by proposing a checklist of measures that would assess the overall quality of the survey effort.

21.2 General measures of data quality

In addition to the response rate, correctly calculated, there are four general measures of data quality that are discussed in this chapter. These are the missing value statistic, the data cleaning statistic, the coverage error, and a measure of sample bias. Apart from these, probably the best method of measuring data quality, but one that can rarely be afforded, is to conduct a ‘mode comparison study’ (Biemer, 1988). Such a study involves carrying out two surveys on the same population (each one using a random sample of the population), preferably simultaneously, and comparing the results between the two surveys, which should differ only in the survey method or mode. Biemer provides a detailed discussion of how to interpret the results of two comparative surveys, and the following three chapters in the book (Groves et al., 1988) provide examples of three such comparative surveys. De Leeuw and Zouwen (1988) and Sykes and Collins (1988) compare a telephone and a face-to-face survey, while Bishop et al. (1988) compare a telephone and a self-administered survey. However, in their book, Groves et al. (1988) do not propose any specific measures of survey quality, such as those outlined in this chapter.

21.2.1 Missing value statistic
Relating to concerns raised in the previous chapter (Chapter 20), the first general data quality measure that could be proposed is that of a missing value statistic (Stopher et al., 2008a, 2008b). It is worthwhile considering two versions of this statistic. In each case, the idea is to estimate the extent of missing values among all the variables in the survey. It is also important, as noted by Stopher et al. (2008a), to identify clearly which are missing values. These should be those values that represent either refusals by the respondent or come from a respondent indicating that he or she does not know the answer to the question. They may also include erroneous responses that have been replaced by the missing value code. They must not include any instances in which respondents correctly skipped the question, or in which the question did not apply to the respondent.

The suggested statistic (Stopher et al., 2008a) is shown in equation (21.1):

$$\text{MVS} = \frac{\sum_{i=1}^{I}\sum_{n=1}^{N} x^{*}_{i,n}}{\sum_{i=1}^{I}\sum_{n=1}^{N} x_{i,n}} \qquad (21.1)$$

where:

MVS = missing value statistic;
x*_{i,n} = 1 if data item i is missing for respondent n, 0 otherwise;
x_{i,n} = 1 if the response to variable i is applicable to respondent n, 0 otherwise;
I = number of variables in the data set; and
N = number of respondents in the data set.

This missing value statistic provides an indication of the amount of missing data in the survey. It will take the form of a decimal fraction, which could be converted to a percentage figure. There are no known standards for this value, so it cannot be stated what values would be indicative of poor quality. A value of zero would indicate that no data were missing, while a value of one would indicate that all values were missing (hopefully, a value that would never be seen in reality). However, it is clear that values on the order of 0.1 and higher would certainly be likely to indicate serious issues in the survey, since such values would indicate that more than 10 per cent of the data were missing. Values of less than 0.02 would indicate that 2 per cent or less of the values were missing, and would probably be considered evidence of a reasonably good-quality survey.


The missing value statistic should be calculated for the raw data, before any work is done on data repair, and then repeated after data repair. This will serve two purposes. It will indicate the extent of the improvement obtained through data repair, and it will also provide an indication of how complete the final data set is. A change of the value from, say, 0.1 to zero would indicate that all missing data items were able to be replaced by valid values. A change from 0.1 to 0.05 would indicate that about half the missing values were able to be repaired.
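As a minimal sketch of how the statistic might be computed, assuming the survey data are held in a table in which legitimately skipped questions carry a ‘not applicable’ code and genuine missing values (refusals, ‘don’t know’ answers, and replaced erroneous responses) are coded as missing, the following Python fragment applies equation (21.1); the column names and codes are illustrative assumptions.

```python
# Minimal sketch of the missing value statistic (equation 21.1).
import numpy as np
import pandas as pd

NOT_APPLICABLE = -99   # code for questions that were correctly skipped

def missing_value_statistic(df):
    applicable = df != NOT_APPLICABLE        # x_{i,n}: the question applied
    missing = df.isna() & applicable         # x*_{i,n}: applicable but missing
    return missing.values.sum() / applicable.values.sum()

raw = pd.DataFrame({'age': [34, np.nan, 52, 61],
                    'income': [np.nan, 48000, np.nan, 75000],
                    'licence': [1, 0, NOT_APPLICABLE, 1]})
print(round(missing_value_statistic(raw), 3))  # 3 missing of 11 applicable items = 0.273
# The same function would be run again on the repaired data to show the improvement.
```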

21.2.2 Data cleaning statistic
As discussed in Chapter 20, data usually require cleaning and repair prior to the final use of the data. The missing value statistic indicates the amount of error in the data as received, while the data cleaning statistic measures how much repair has been accomplished with the data, whether by inference or imputation (Stopher et al., 2008a). The data cleaning statistic measures the number of cases in which repair of data items has occurred, and, ideally, should be determined just on the basis of the key variables in the data. It is proposed that this measure apply only to key variables, partly because these variables are obviously the ones of greatest importance in the survey, and also because variables that are not key may not be cleaned or repaired to the same level of completeness as the key variables.

The data cleaning statistic is then given by equation (21.2):

$$\text{DCS} = \frac{1}{NJ}\sum_{j=1}^{J}\sum_{n=1}^{N} \text{count}(x_{j,n}) \qquad (21.2)$$

where:

DCS = data cleaning statistic;
x_{j,n} = jth data item of respondent n;
count(x_{j,n}) = 1 if the jth data item of respondent n was repaired, 0 otherwise;
N = number of respondents in the survey; and
J = number of key variables.

The DCS will range in value from zero to one. A value of zero indicates that no data repair or correction has been accomplished, while a value of one would indicate that all the values had to be repaired or corrected. A value of 0.1 would indicate that 10 per cent of the values of the key variables had to be corrected. Such a value would probably be considered to indicate data of rather poor quality. A value of 0.01 or less would indicate that only about 1 per cent of key variable values were repaired or corrected. Such a value would probably be considered to indicate relatively good data, especially if the MVS was also low. On the other hand, a relatively high value of the MVS and a low value of the DCS would indicate poor-quality data, in that this would tend to indicate that there were many missing values, but few had been repaired or corrected. Note that the DCS does not correspond to the change in the MVS, because the MVS applies to all the variables in the data, whereas the DCS applies only to key variables. Moreover, the MVS indicates the amount of missing data, while the DCS measures how much of the data on the key variables has been repaired or corrected. If the MVS and DCS were both computed on the same variables, then the DCS could never be larger than the MVS for the raw data.

Again, there are no standards for assessing values of the DCS. However, it would be cause for concern if the value of the DCS were as high as 0.1 or more, because this would indicate that 10 per cent or more of the key variable values had been replaced or repaired, whereas a value of 0.01 or less would indicate that less than 1 per cent of the values had been replaced or repaired. Because of the criticality of the key variables, a value of 0.1 or higher would indicate poor-quality data. However, the DCS is not a sufficient measure by itself, since it will clearly be zero when no attempt is made to repair or correct any data items. Therefore, it is useful only when reported along with the MVS.
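A minimal sketch of the calculation in equation (21.2) follows, assuming that every repaired or corrected value of a key variable has been flagged in a parallel table of booleans (one column per key variable, one row per respondent); the flag structure and figures are illustrative assumptions.

```python
# Minimal sketch of the data cleaning statistic (equation 21.2).
import pandas as pd

def data_cleaning_statistic(repair_flags):
    """repair_flags: DataFrame of booleans, True where a key item was repaired."""
    n_respondents, n_key_vars = repair_flags.shape
    return repair_flags.values.sum() / (n_respondents * n_key_vars)

flags = pd.DataFrame({'age': [False, True, False, False],
                      'income': [True, False, True, False]})
print(data_cleaning_statistic(flags))   # 3 repairs over 8 key items = 0.375
```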

21.2.3 Coverage error
Coverage error (Stopher et al., 2008a) refers to the problem of a mismatch between the units of the target population and the units from which the sample is drawn. This occurs if the sampling frame deviates in some way from the target population. There may be coverage error from including units in the sampling that should not have been included (over-coverage) or from excluding units that should have been included (under-coverage). Obvious examples of both of these types of coverage error occur in a telephone interview based on RDD. Under-coverage will arise if the target population is the entire residential population within a particular geographic area and a telephone sample is used. Such a sample will omit any residential units that do not have a telephone, leading to under-coverage. On the other hand, if no adjustment is made to the sampling for households with multiple telephone lines, then over-coverage will occur, because residential units with more than one telephone line will be more likely to be included in the sample.

Kish (1965) suggests the formula shown in equation (21.3) as an appropriate one for estimating coverage error:

$$CE = \left(1 - \frac{F\hat{x}}{X}\right) \times 100 \qquad (21.3)$$

where:

CE = coverage error;
F\hat{x} = sample population multiplied by the expansion factor; and
X = population estimate from an external (reliable) source.

Clearly, the coverage error as shown in equation (21.3) is only a net measure of coverage error, in that over- and under-coverage can cancel one another out to some extent in this formula. In most surveys, it is likely that under-coverage will be more prevalent than over-coverage, so that a net under-coverage is likely to be reported. In the United States, the Census Bureau reports coverage error for the Current Population Survey (CPS). In 1970 the CPS was estimated to have an under-coverage of 3.7 per cent, while it was estimated at 7.4 per cent in 2000 (US Census Bureau, 2000). Kish suggests that CE values of less than 10 per cent would normally be considered indicative of good coverage. Values greater than this would indicate problems with coverage.
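A minimal worked sketch of equation (21.3) follows; the expansion factor, sample size, and external population figure are illustrative assumptions.

```python
# Minimal sketch of Kish's net coverage error (equation 21.3).
def coverage_error(expanded_sample_total, external_population):
    return (1 - expanded_sample_total / external_population) * 100

# e.g., 3,000 responding households, each carrying an expansion factor of 310,
# compared with an external estimate of 1,000,000 households in the study area.
print(round(coverage_error(3000 * 310, 1_000_000), 1))   # 7.0 per cent net under-coverage
```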

21.2.4 Sample bias
The response rate to a survey may be high, the number of missing values may be very low, and the data may have been repaired completely on the key variables, but the survey may still be seriously biased. Indeed, as already pointed out in Chapter 3, even a census may be biased, if there are systematic measurement errors in the survey or a faulty measurement device is used. It is probably correct to state that every sample survey, in reality, is biased to a greater or lesser extent. Estimation of the amount of sample bias that is present in the survey is, therefore, a further important measure of quality.

Nonresponse is a major potential source of bias in human subject surveys. As mentioned in Chapter 3, nonresponse causes the sample to be a biased representation of the population when respondent behaviour or characteristics are different from those of nonrespondents. Thus, nonresponse as a cause of bias is not directly related to response rate but to the degree to which the sample is representative of the population. There is considerable evidence that nonrespondents are often different from respondents in terms of socio-demographic and other characteristics (Richardson, Ampt, and Meyburg, 1995). For example, in travel surveys in English-speaking countries, nonrespondents are typically more likely to be elderly, physically or mentally challenged, non-English-speaking, limited-literacy, minority, less mobile persons (Ettema, Timmermans, and van Veghel, 1996; Kim et al., 1993; Zimowski et al., 1997).

To estimate sample bias, it is necessary first to choose certain variables on which to measure bias. For continuous variables, either the mean or the median should be selected as the criterion measure. For discrete variables, the categories for measuring bias should be defined. There need then to be reference values for each variable chosen for bias measurement, with these reference values considered to be as unbiased as possible and determined independently from the subject survey. For travel surveys, Stopher et al. (2008a) suggest six variables, five of which are probably suitable for almost all surveys. Their proposed variables are:

• the mean household size;
• vehicle availability, in categories of ‘0’, ‘1’, ‘2’, and ‘3+’ vehicles;
• household income, in categories that correspond to those in the reference data;
• the race of each person in the household, in categories that correspond to those in the reference data;
• the age of each person in the household, in categories of ‘0–5’, ‘6–10’, ‘11–14’, ‘15–17’, ‘18–64’, ‘65–74’, and ‘75 and over’; and
• the gender of each person in the household, in categories of ‘Male’ and ‘Female’.

Kish (1965) suggests that the accuracy of a survey can be expressed as ‘the inverse of total error’. Thus, it would seem appropriate to use a measure of average total error such as the root mean square error (RMSE) as a statistic of data quality. The percentage RMSE is a unitless measure that must be interpreted subjectively, although it has a clear intuitive meaning that is generally well understood. Using these six variables, the measurement of sample bias is obtained by computing a percentage RMSE statistic as shown in equation (21.4):

$$\text{Percent\_RMSE} = 100\sqrt{\frac{1}{n_i}\sum_{i=1}^{n_i}\frac{1}{n_{ji}}\sum_{j=1}^{n_{ji}}\left(\frac{r_{ij}-s_{ij}}{r_{ij}}\right)^{2}} \qquad (21.4)$$

where:

n_i = number of variables;
n_{ji} = number of categories, j, for variable i;
r_{ij} = reference value of variable i in category j; and
s_{ij} = sample value of variable i in category j.

Again, there are no standards established by which to use this sample bias value to determine if a survey is good or not, but it should be clear that large values of this statistic will indicate a biased survey, while small values will indicate a relatively unbiased survey. A key issue here is the source of the reference values, because these must be as unbiased as possible.
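A minimal sketch of the calculation, following equation (21.4) as reconstructed above, is shown below; the reference and sample shares for the two variables are illustrative assumptions.

```python
# Minimal sketch of the percentage RMSE measure of sample bias (equation 21.4).
from math import sqrt

def percent_rmse(reference, sample):
    outer = 0.0
    for var, ref_cats in reference.items():
        inner = sum(((ref_cats[cat] - sample[var][cat]) / ref_cats[cat]) ** 2
                    for cat in ref_cats)
        outer += inner / len(ref_cats)         # average over the categories of variable i
    return 100 * sqrt(outer / len(reference))  # average over variables, then take the root

reference = {'vehicles': {'0': 0.10, '1': 0.40, '2': 0.35, '3+': 0.15},
             'gender':   {'Male': 0.49, 'Female': 0.51}}
sample    = {'vehicles': {'0': 0.08, '1': 0.42, '2': 0.36, '3+': 0.14},
             'gender':   {'Male': 0.47, 'Female': 0.53}}
print(round(percent_rmse(reference, sample), 1))   # about 8.2 per cent with these figures
```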

21.3 Specific measures of data quality

As noted earlier, specific measures of data quality depend on the specific survey to be assessed. Examples of some specific measures are provided here from the area of household travel surveys, based on the work of Stopher et al. (2008a, 2008b), which, it is hoped, will be adequate to suggest other specific measures that could be used for other types of survey.

21.3.1 Non-mobility rates
In a typical household travel survey, questions are asked of respondents about each travel event that took place on a specified day or over a specified period of time. Of course, there will be some number of persons in the sample who did not travel anywhere on the specified day. This may have been because the person was sick and confined to home all day, or was busily engaged all day in activities within the home, or otherwise genuinely did not leave his or her home at all. To accommodate this situation, it is usually necessary to provide respondents with the option to inform the survey researcher (through the interviewer or self-administered survey form) that no travel took place on the specified day. However, a number of respondents will usually deduce that, if they falsely report no travel on the specified day, then the survey task will be significantly reduced in size and the respondent will appear to have been responsive to the survey by completing the other questions that were asked. Although there are various methods that can be used to reduce the occurrence of this stratagem, it is likely to occur in many designs of household travel surveys. Therefore, one of the statistics that can be used to identify the quality of the survey data is the percentage of persons and the percentage of households that reported no travel on the specified day. It is fairly probable that individual persons will not be mobile on a given day, and surveys suggest that this figure may be as high as between 10 and 20 per cent of the population for a weekday. It is much less likely that everyone in the household will be non-mobile on the specified day, and data suggest that this may be only on the order of 1 to 2 per cent of households. Therefore, computing the proportion of persons and of households that report no travel on the specified day will provide an indication of whether the survey data contain a ‘hidden’ form of nonresponse, namely the incorrect reporting of non-mobility.

If a survey reports a figure of greater than 2 per cent for households and greater than 20 per cent for individuals, it should be suspected that there is significant use of the non-mobility report as a means to disguise nonresponse to the survey. Low values of both the person and household non-mobility rates would be indicative that this nonresponse mechanism has not been used extensively by respondents, and that this source of nonresponse bias is largely absent.
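These two proportions are straightforward to compute from a person-level trip file. The following is a minimal sketch, assuming a table with one row per person, a household identifier, and the number of trips reported on the travel day; the column names and the toy figures are illustrative assumptions.

```python
# Minimal sketch of person and household non-mobility rates.
import pandas as pd

persons = pd.DataFrame({'hh_id': [1, 1, 2, 3, 3, 4],
                        'trips': [3, 0, 2, 0, 0, 5]})

person_rate = (persons['trips'] == 0).mean()                            # share of persons with no trips
household_rate = persons.groupby('hh_id')['trips'].sum().eq(0).mean()   # share of households with no trips
print(f'person non-mobility rate:    {person_rate:.0%}')     # 3 of 6 persons in this toy file
print(f'household non-mobility rate: {household_rate:.0%}')  # household 3 only: 1 of 4
```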

21.3.2 Trip rates and activity rates
Household travel surveys involve the reporting of the number of travel events (or trips) undertaken by each member of the household. One problem in using travel events to assess survey quality is that there are a number of different ways of counting such events. For example, if a person walks from home to the bus stop, catches a bus, rides to another bus stop and alights, then walks to a destination, is this one or three travel events? In the parlance of the transport profession, the three travel events would be known as ‘unlinked trips’, while the single travel event from home to the destination would be known as a ‘linked trip’. Another issue that raises some difficulties in setting desired values on the number of travel events is that some surveys define travel events as those in which some form of motorised vehicle is used, whereas others define them as all instances of travel. Others may define a travel event as being an event that lasts more than a specified amount of time, or covers more than a specified distance.

If a clear definition is made of a travel event or trip, then the number of travel events per person or per household can be used as another measure of data quality for a household travel survey. Stopher et al. (2008a) provide some suggested reference values for linked, unweighted, person trips per day. That is, these values include all means of travel and do not set a lower limit on the length of time or the distance covered for the travel to be included as a distinct trip. Such values relate to data quality in that, if the average values produced by a survey are lower than the reference values that are proposed, then this would indicate that there may have been problems in people completing the whole survey. Even though response rates may be high, missing values low, data cleaning completed adequately, coverage error found to be acceptably small, and the non-mobility rate found to be appropriately low, the survey may still have failed to measure fully the number of trips that are actually completed in a day. This would be an indicator that the survey had not achieved its goals and has produced biased results that would not be likely to have been determined from the measurement of sample bias.

In a similar way, the number of distinct activities at places during the day could also be used as a measure of data quality. In travel or activity surveys, the number of activities reported by people in a day is a measure of the completeness of the reporting of behaviour. However, to use this statistic to assess the quality of a survey, there needs to be some standardisation in how activities are defined in a survey and in the classification schemes used for activities. Because this has not occurred in household travel surveys to date, the measure is probably not yet a suitable one. In the future, it is to be hoped that some standardisation will appear in the definition of activities and their categorisation. When that happens, the number of activities engaged in by individuals and by households could provide another specific measure of data quality.

21.3.3 Proxy reporting
It has been found that proxy reporting in travel surveys is a major source of error, as is discussed in Chapter 11. Other household- or person-based surveys may also suffer from problems of the accuracy and completeness of proxy reports. It was also noted in Chapter 11 that some proxy reporting is inevitable, especially for children and others who may not be capable of responding to a survey because of language limitations, physical or mental or emotional difficulties, or other reasons. As a result, there are usually two categories of proxy reports in a survey. The first category consists of proxy reports that are required because of the inability of the subject to report for himself or herself. The second consists of proxy reports that are provided on behalf of those who are capable of reporting for themselves, but for whom the report or survey response was provided by someone else, possibly because of absence or an unwillingness to participate. It is important to distinguish between these two situations in reporting on the level of proxy reports in a survey.

The first step that is necessary is to prepare, in advance of the survey, a clear set of rules as to when proxy reporting is required. There should then be one or more questions included in the survey instrument that ascertain whether or not the rules for proxy reporting apply to this particular response. These questions should probably include information about the age of the respondent (or the person from whom a response is desired) and, for respondents who are deemed old enough to respond for themselves, questions that would identify if the respondent is not, in fact, capable of providing a self-response (such as a mental or emotional disadvantage, a language problem, etc.).

Having established the eligibility of each respondent to respond to the survey for himself or herself, two variables should be included in the survey data that indicate if a proxy response was obtained and whether or not the original respondent was considered able to provide a response, or if a proxy response was necessary. In the event that a survey involves respondents completing survey forms and data being retrieved over the telephone or by a face-to-face interview, a distinction should be made between whether the respondent recorded the original information on paper, and it was reported by someone else from the written record prepared by the intended respondent, or whether another person both prepared the written record and provided the verbal report of the data (or just provided a verbal report with no written information).

The proxy reporting statistic would then relate only to those proxy reports that were not required by the survey design. It would consist of a simple reporting of the percentage of persons eligible to report for themselves, but for whom proxy reports were received. When a proxy report is provided from a record that was written by the intended respondent, the main inaccuracies will arise when the original respondent provided an incomplete written record or a record with errors in it. In such a case, the interviewer would probe for correct responses in a situation in which a proxy report is not being provided. However, when such a proxy report is provided, there is limited or no capability to probe for correct answers or to fill in possibly missing information. When no written record exists, the proxy respondent must provide information to the best of his or her ability. In this case, there is considerable room for erroneous information to be provided, as well as information that is incomplete. There is also no room generally for interviewers to probe. This is the worst proxy situation. As a result, it is recommended that these two situations be distinguished in reporting proxy statistics.

In other words, in cases in which there is both a form to be completed and a report of the contents of the form to be obtained, two statistics should be prepared. The first would be the percentage of respondents who were eligible to respond for themselves who provided a written response that was retrieved from a proxy respondent, and the second would be the percentage of respondents who were eligible to respond for themselves who did not provide a written response and for whom the data from the survey were provided by a proxy respondent. These two statistics provide an important measure of survey quality.

21.4 Validation surveys

Another method of assessing the quality of the data from a survey is to conduct some form of validation survey. Validation surveys can take one of two forms. One form is to validate a proportion of the data collected by contacting the sample units a second time and asking certain questions about their completion of the survey. This form of validation survey is useful primarily for surveys that are undertaken by interview, whether in person or over the telephone. Validation surveys of this type are not useful for self-administered surveys. The second form is to validate, by a separate and independent method of measurement, certain key items of information in the survey. Both forms of validation are discussed further here.

21.4.1 Follow-up questions
When validation is performed by recontacting sample units (persons or households), this will normally be carried out by asking a limited set of questions that determine whether or not the person or household responded to the survey and then asking for information about a small subset of the questions in the survey.

Validation surveys are not popular, because they involve significant time and effort and also require an explanation to each respondent selected for the validation survey as to why they are being recontacted. However, the mere fact that interviewers know that validation surveys will be conducted is often enough to discourage them from being lax in the execution of the survey, or, in extreme cases, from falsifying information (Richardson, Ampt, and Meyburg, 1995). A second advantage is that validation surveys provide information that can be used to assess the quality of the survey data. To use statistics from validation surveys to assess the quality of a survey, variables that are to be used in the statistic must be identified, their combination in a statistic must be formulated, and the ability to interpret the values must be developed.

In a face-to-face or telephone survey, each interview in the validation survey has to be conducted by someone different from the one who conducted the initial interview. Validation surveys should be conducted progressively throughout the survey so that problems can be identified and remedied, and interview standards maintained throughout the study. Whenever possible, the validation survey should be conducted with the initial respondent.

The questions included in the validation survey must not be verbatim quotes from the main survey but, rather, should express the same question in different terms. Additionally, the questions should be phrased as if further information is being sought, rather than suggesting that the purpose is to verify the integrity of the data gathered earlier. The questions should not ask for detail that the respondent has difficulty in recalling, but must nevertheless ask something that would be difficult to guess. For example, in validating a travel event, a feature that is relatively easy to remember but difficult to fabricate is the approximate time spent at the destination of the travel, or the number of accompanying persons on the trip.

Stopher et al. (2008a) suggest that the following core questions should be included in every validation survey conducted.

(1) Did you complete the initial survey? (Yes/No). If ‘Yes’, go to question 3 below. If ‘No’, go to the second question below.

(2) Did someone else in your household complete the survey? (Yes/No). If ‘Yes’, go to question 3 below. If ‘No’, terminate the validation survey.


(3) In a household travel survey, select a travel event that the respondent is likely to remember from among the events reported in the initial survey and note the time spent at the destination. Ask the respondent to recall the travel event in question and to report the approximate time spent at the destination.

If the answers to the first two questions are both negative, then it is likely that the household was not surveyed. This may be due to a lack of knowledge on the part of the person being interviewed, forgetfulness, or a case of genuine falsification of an entire interview. The interviewer conducting the validation survey must discreetly determine which one of these possibilities is the most likely. For those who admit to being interviewed, the third question provides a brief check on the information reported in the survey. Due to the difficulty of recall, only large differences should be considered evidence of possible falsification.

The tolerable limits of falsified information are a matter that must be decided by each agency commissioning a survey. The main purpose of the validation survey is to identify and remedy problems within the survey company. An indirect purpose is to act as a disincentive to interviewers when they know that validation surveys are conducted. The falsification of data by interviewers is likely to be dealt with very severely in survey companies. However, anecdotal evidence suggests that it does exist and that, under pressure to reach certain goals, interviewers will develop very innovative ways in which to introduce such data. Statistical analysis of reported data is often used to detect the lack of randomness, and particularly the change in the relationship between variables that characterises falsified data (Richardson, Ampt, and Meyburg, 1995). Validation surveys may be directed to cases identified through such analysis.

The third question in the above set would be changed for other survey purposes. The key here is to choose some aspect of the survey that would be difficult for an interviewer to fabricate accurately by chance, and to ask selected questions about that aspect that can verify the information in the original survey. This is necessary for two reasons. First, even though the respondent to the validation survey may think that he or she responded to the earlier survey, this could be mistaken. Second, the questioning on the specific aspect of the survey enquiry will provide some degree of information on how accurately the original interview was carried out and whether there may be problems in the survey because of the way in which certain questions are asked.

As a measure of validation, Stopher et al. (2008a) suggest that a statistic could be computed that estimates the percentage of cases in which the first two questions are both answered in the negative. A second statistic could be compiled that measures the percentage of the time that there is a mismatch between the information provided from the third question and the details in the original survey. They also suggest that a rule of thumb for this type of validation might be that there should be no more than 1 per cent of cases in which the first two questions are answered negatively, and that there should be less than 5 per cent of cases with a mismatch on the third question set.


21.4.2 Independent measurement
Independent measurement can be used in any type of survey, whether interviewer-administered or self-administered. The principle of independent measurement is that a second method be used to measure the same attributes, or some of the same attributes, as in the main survey. Usually, such measurement could be done with a subsample of the original sample for the main survey. If the phenomena being measured are variable over time, then it is important that the independent measurement method is applied at the same time as the main survey. If there is no time dependence for the phenomena being measured, then independent measurement may take place at a different time.

Clear examples of this type of validation procedure are provided by some recent household travel surveys. In these examples (Forrest and Pearson, 2005; Stopher and Greaves, 2009; Stopher, Xu, and FitzGerald, 2007; Wolf, 2006; Wolf et al., 2003), a household travel survey was conducted, using conventional methods, together with a validation survey that used small portable GPS devices. The main survey may have been conducted by telephone interview, face-to-face interview, or through a self-administered form that was delivered to households. These interviews and forms asked respondents to report all travel undertaken in a twenty-four-hour period on a specified day of the week, subsequent to the initial contact. In both the interview methods, the normal procedure was to recruit a household to the survey and then leave or deliver a paper diary to the household, with instructions to complete that diary for a specified day a few days in the future. All the conventional methods, therefore, relied upon the self-report of travel events. The validation survey consisted of providing members of the same households a GPS device that each person could carry around with himself or herself for one or more days, including the day for which the self-reported travel was to be provided.

In each of these surveys, the results of the self-report surveys were then compared to the results of the GPS devices. The GPS devices in almost all cases were of a passive type (Stopher, 2008) that simply required respondents to carry them around with them throughout the day. The GPS devices recorded position in terms of latitude and longitude every one or two seconds, and also recorded the time to the nearest second at which the reading occurred. The devices also recorded speed, the heading (direction of travel), and some measures that provided a check of the validity of each reading (satellites in view and horizontal dilution of precision). In effect, provided that the respondent remembered to take the GPS with him or her all the time, and provided that there was sufficient battery charge available, the GPS device provided an objective measurement of all travel.

In these studies, it was found that respondents did not generally report all their travel events, and that it appeared that some travel events were forgotten or simply chosen to be omitted in the reporting. In addition, it was found that the start and end times of many travel events were reported incorrectly (apart from the fact that most people tend to report such times to the nearest five, ten, fifteen, or even thirty minutes), and that estimates of travel time and travel distance were generally rather inaccurate. There was also found to be some evidence of telescoping in the reporting of travel events (see Chapter 8), whereby some travel events were reported as having occurred on the specified day when, in fact, they had occurred on a different day. Many of these results were not considered surprising, because it had been suspected that there were problems in the accuracy of the self-report surveys. However, the use of GPS measurement in these surveys provides a useful independent measurement procedure that provides a mechanism for validating the main survey. In most of the cases reported in the literature, samples for the validation surveys have been relatively small, with targeted samples ranging from around seventy-five households to 820 and actual samples ranging from forty-six to almost 300 households (Wolf, 2006). In most cases, the main survey had a sample of 3,000 households or more. Sample sizes smaller than 100 would generally be considered to provide indicative results only, while sample sizes of 300 and more would generally permit statistically sound estimates to be made of the levels of misreporting. If correction factors are desired, the sample size requirements need to be determined by first establishing whether population segmentation is necessary for factoring. For example, if it is found that misreporting varies significantly by certain socio-demographic characteristics of people or households, then larger validation samples will be required to permit the estimation of statistically significant correction factors for each population segment. Segment samples would generally need to be on the order of seventy-five to 150, to permit the estimation of statistically reliable factors. Larger samples would be better, if they can be afforded. If no segmentation is needed, smaller samples of about 100 to 250 households or persons (depending on whether the survey is focused on households or persons) would be required, although, again, larger samples are desirable if the budget permits.

21.5 Adherence to quality measures and guidance

As suggested by Stopher et al. (2008a), a final method of assessing survey quality is provided by using and reporting the adherence of the survey to established quality principles and measures. Richardson, Ampt, and Meyburg (1995: appendix C) provide an example of a checklist of actions. This checklist, some thirty pages long, encompasses all the aspects of survey design discussed in the book itself. A more extensive list of requirements, covering all aspects of the survey process, including management, quality control, survey design, subcontracting, inspection and testing, and product delivery and storage, has been suggested by Richardson and Pisarski (1997). This extensive list was based on principles promoted by the International Standards Organization (ISO), which Richardson and Pisarski applied to travel surveys. They propose a list of fifty-five aspects of a travel survey that collectively describe adherence to ISO standards (Richardson and Pisarski, 1997: 27–8).

As stated by Stopher et al. (2008b): ‘A comprehensive checklist of activities or standards that each survey should perform or comply with, will help ensure that individual aspects of the survey are not overlooked or neglected. The degree of compliance with these requirements in each survey can serve as an indirect measure of data quality.’


Using the items identified by Richardson and Pisarski (1997), Stopher et al. (2008a) propose a short listing of some ten items that could be used to assess the quality adherence of a survey. The items they suggest are as follows.

(1) Has the survey agency an active quality control programme in operation?
(2) Is a senior, independent staff member responsible for quality control in the organisation?
(3) Have pretests been conducted?
(4) Has a pilot survey (or surveys) been conducted?
(5) Have validation surveys been conducted?
(6) Have data reported by proxy been flagged to indicate that they were obtained by proxy reporting?
(7) Have data values obtained through imputation been flagged to indicate the nature of their origin?
(8) Has the survey report been prepared and submitted to the client?
(9) Has a coding manual and other metadata that accompany the data been prepared and submitted to the client?
(10) Have the survey data been adequately archived in a safe, accessible, and well-recognised data storage location?

To this list an eleventh item should be added, as follows.

(11) Have quality measures, such as response rates, coverage error, data cleaning statistics, data repair statistics, and other appropriate measures, been calculated and reported to the client?

One way in which this checklist might be used is to award a point for each affirmative response and report the final score as a percentage out of the possible eleven points. This score, reported with the results of the survey, would provide an indication of the overall quality of the survey that has been conducted. However, it does not appear that such a quality score has ever been produced for a survey to date.


22 Future directions in survey procedures

22.1 Dangers of forecasting new directions

It is often a dangerous practice to make predictions about changes in established procedures and methods. Existing procedures and methods are habitually deeply entrenched, and change often takes place far more slowly than seems appropriate. On the other hand, issues that frequently seem pressing and in need of radical change turn out in hindsight to be much less serious than assessed at the time, while existing procedures and methods may be more resilient than anticipated. In other situations, it may be found to be easier and less expensive to continue to use the current methods and procedures, albeit with adaptations and minor changes so as to accommodate the emerging issues.

Notwithstanding these cautions, this chapter provides some assessments of areas of surveying human populations that seem to be presenting increasing problems, and then offers some suggestions as to ways in which survey methods and procedures may develop into the future. However, the reader is warned to be cautious in interpreting these possible future directions. One of the purposes of this chapter is to provide the reader with an appreciation for some of the current difficulties, and also to offer some thoughts that may lead to future improvements in survey methods.

22.2 Some current issues

22.2.1 Reliance on telephones
As has been noted elsewhere in this book, there has been an increase in reliance on the telephone as a means of contacting and recruiting people for surveys, if not for collecting all the survey information. There are at least three reasons for this domination of the telephone in surveys in several countries. First, in the United States in particular, cities have evolved such that they often include small areas where it would be unsafe for interviewers to enter and go from door to door to conduct face-to-face interviews. Although this is not a universal issue yet, it has increasingly become a problem in several other countries as well. When the sample is to be drawn as a geographically and demographically representative sample of an urban area, the exclusion of certain suburbs or areas within the city for face-to-face interviewing will result in biases in the survey coverage. Under these circumstances, substituting telephone interviewing for face-to-face interviewing has been seen as a reasonable procedure.

Second, drawing a geographically representative sample of households or people from an urban area requires either a complete address listing for the sample frame, or that the area is small enough that in-field sampling can be achieved reasonably. As cities have increased in size, the potential for carrying out in-field sampling has declined. In addition, as is noted in Chapter 13, complete address listings are rarely found and available in modern times. Census organisations may have and maintain such lists, but they are usually considered strictly confidential. If there is a possibility of obtaining the list, it is likely to be very expensive to do so. As a result, survey designers have turned to random digit dialling as an inexpensive alternative method to draw fully random samples of households or persons within an urban area (see Chapter 6). RDD then requires that at least the initial contact with persons or households is made by telephone.

Third, the cost of face-to-face interviewing is considerable and growing. It is an expensive method of administering a survey, because not only is the time taken for the interview to be paid for, but so also is the time taken by the interviewer to travel to the respondent’s location. In modern cities, the amount of time spent in travelling, even when interviewers are chosen from local residential areas, can rapidly mount up and may easily exceed the time spent in interviewing by a factor of two or three, sometimes even by a factor of ten. Furthermore, if the survey design involves, as it should, multiple attempts to contact each household or person, then the number of times that some addresses may need to be visited can increase significantly. Without question, a telephone interview removes the cost of travel to the address of the household, substituting instead just the time taken to dial the telephone number, and can therefore permit a significantly larger number of attempts to be made without substantial costs. Indeed, in a telephone interview, the time requirements (and therefore costs) of repeat attempts may amount to less than one or two minutes per attempt, compared to perhaps twenty or thirty minutes of travel time, or more, for repeat attempts in a face-to-face survey.

For these reasons, among others, the telephone has become a popular method of recruiting people or households to a survey, and often it is also used for the entire survey. Unfortunately, there are a number of threats to the continued use and effectiveness of telephone-based surveys.

Threats to the use of telephone surveys
Some of the threats to the effectiveness and continued use of telephone surveys are not new, but are probably trending increasingly in the direction of damaging the use of the telephone. First, the growing use of the telephone for unsolicited marketing and sales calls has led to increasing acceptance of several methods whereby people can screen telephone calls. Screening methods include caller ID technology that allows the calling number to be displayed on the telephone handset, so that the potential respondent can decide whether or not to answer the phone. Another screening method is the answering machine, enabling the potential respondent to listen to the initial introduction, if it is offered, and then decide whether or not to answer the call. Yet another screening method is one that has been introduced in a few countries around the world, namely the ‘Do not call’ registry. This is a register of residential numbers that are not to be called by marketing organisations. In most countries, the use of telephone numbers to conduct government-sponsored surveys is not covered by the ‘Do not call’ registry. Nevertheless, the existence of such registers is clearly indicative of the extent to which people are unwilling to receive unsolicited telephone calls. Although there is no study completed to date on this issue, it seems likely that people who have taken the trouble to have their telephone numbers included in such a register will be less likely to agree to participate in a survey, in any event, even when the survey is legitimate and is sponsored by a government or other public agency. Apart from this, the increasing use of telephones for telemarketing is clearly affecting response rates to the telephone survey, resulting in a marked downward trend in the proportion of contacted households that can be recruited successfully into a survey.

Second, although telephone penetration has reached very high levels in a number of countries, in no country in the world is telephone penetration 100 per cent. Therefore, any telephone-based survey will experience non-coverage (unless the population is defined as those persons or households with a residential telephone) and will, as a result, be likely to be biased to a greater or lesser extent. More importantly, there is an increasing shift away from fixed residential telephone lines to mobile or cellphones. It was estimated that, in 2006, about 13 per cent of homes in the United States had mobile phones only and no land lines (Blumberg and Luke, 2007). Some researchers have suggested that mobile phones are likely to be the only telephone service for more than 50 per cent of households in some countries within a relatively few years. In Australia, all mobile phones have the area code 04, so that there is absolutely no geographic information in the mobile telephone number, and numbers are portable across the country and between telephone service providers. In some countries, mobile telephone numbers may be issued first in a particular geographic area, but are transferable elsewhere within the country, so that the geographic information becomes lost. In a few countries, mobile telephone numbers are not so portable, but even then it is not unusual for the area code only to be geographically specific, with the rest of the number being from a block of numbers reserved for mobile telephones. All these issues on the geographic content of mobile telephone numbers, or the lack of it, raise serious problems if a survey is to be carried out only in a specific geographic locality. Prior to the substantial take-up of mobile telephones, considerable geographic specificity was available within the assignment of land line telephone numbers.

A second issue with mobile telephones relates to the charging systems of different telecommunications companies. In some countries, the receiver of a call to a mobile telephone bears part of the cost of the call, although in other countries only the caller pays. In the former case, calling mobile telephone numbers would violate one of the ethical standards of surveying (Chapter 4), which states that participation in the survey should not require monetary expenditure by the respondent. In other countries, calls to mobile telephones may be charged at a higher rate than calls to land line telephones, thereby presenting a significant budgetary impact on the survey.

The third threat to the use of telephone surveys is the increasing use of unlisted or ‘silent’ telephone numbers. This is another stratagem employed by households and individuals to protect themselves from unsolicited and unwanted calls. Of course, random digit dialling is a way around the use of such strategies, but those who have elected to request no listing of their numbers are also then more resistant to responding to a survey when they are contacted through the use of RDD. Dal Grande, Taylor, and Wilson (2005) note that those who have unlisted telephone numbers are more likely to be single-person households in the younger age brackets. This leads to the potential for serious biases if such households are either excluded from the survey or are more likely to decline to respond when reached through RDD.

Conclusions on reliance on telephones
Overall, these threats to the use of the telephone as a recruiting device seem likely to reduce severely the worth of telephones as an initial means of recruitment to a survey. Rather, the telephone may continue to be useful as a means of retrieving data or of conducting a survey following recruitment, when the telephone is chosen as the preferred contact method by the respondent. This also means that the convenience of RDD may be in the process of being lost, especially as mobile telephones become increasingly widespread and as they substitute for, rather than complement, the use of land line telephones.

22.2.2 Language and literacy

Language
Over the past two decades and more there has been considerable globalisation, not only of major companies, but of the population of the world. In substantial proportions, people of various nationalities have left their home countries and settled in other countries, often in countries whose native language is different from their own. The result is that there are, in many countries, increasingly large population subgroups of people who speak a different language at home from the accepted language of the country, and an increasing number of people who are not fluent in the native language of the country in which they live and work. Statistics from the Organisation for Economic Co-operation and Development (OECD) (2004: 19) indicate that the United States, Canada, the European Economic Area, Australia, and Japan all experienced significant immigration during the 1990s, and that this appears to have continued into the twenty-first century. Rates of immigration have, in the past, ranged from 0.1 per 1,000 population in Japan to 5.2 per 1,000 population in Canada. In 2008 Australia’s immigration rate was a staggering 13.8 per 1,000 population, and it appears to be continuing to grow. Population projections for Australia show a massive increase in anticipated population over the next forty years, with most of that increase arising from immigration. One of the effects of this rapid immigration is seen in the changes in language. In 2006, according to the ABS (2008a), between 1 and 5 per cent of the population of the various states and territories of Australia did not speak English well or at all. The same report indicates that over 21 per cent of the population spoke a language other than English at home in Australia in 2006. Similar figures will be found in other countries around the world that have experienced significant immigration over the past decades, especially the United Kingdom, Canada, the United States, and Germany.

This growing internationalisation of the population and the prevalence of foreign languages within any nation’s population clearly have major impacts on efforts to undertake random sample surveys of the population. From the above figures, one could assume that, on average, in Australia, about 3 or 4 per cent of a random population sample would be unable to understand any written or verbal material in English, and a further 20 per cent or so might have significant difficulties in understanding concepts expressed in English in a survey questionnaire. This could well result in a majority of these groups being unwilling (or unable) to respond to a survey, thereby excluding perhaps as much as one-fifth of the population from the sample. It is also highly likely that this one-fifth will have characteristics and behaviours that are quite different from longer-term residents who do speak the native tongue of the country.

Of course, this is not insurmountable. It can be overcome by translating the survey materials into several languages and using interviewers who are multilingual. In many parts of the United States today, it is accepted that most survey materials will need to be translated into Spanish as well as being provided in English, and that interviewers will need to be able to speak both Spanish and English. However, even two languages are increasingly often insufficient. In the United States, other language groups are increasing in numbers, so that Mandarin, Cantonese, Tagalog, Korean, and many other languages are increasingly encountered, apart from English and Spanish.

Two difficulties arise with respect to translation and multilingual interview capabilities. First, English is a complex and highly idiomatic language. Translation into other languages is often not straightforward. For those who speak English, the difficulties of translation are often well illustrated by instruction manuals that are provided in English for products manufactured elsewhere, in which the instructions have had to be translated from some other language to English. The choice of an inappropriate word or two, or an expression that is not correct for English, highlights the difficulty of transferring even relatively straightforward everyday instructions and ideas from one language to another. In translating from English to another language, it is also important to understand the particular local dialect within another language that is likely to be encountered. A good illustration of this was provided some years ago when it was decided to translate a sign on buses in Miami into Spanish. The sign to be translated was one that asked people to give up their seat to an elderly or handicapped person. It was translated into Spanish, but the wrong dialect of Spanish was used for the predominantly Cuban population in Miami. The resulting sign asked that riders provide a certain part of their anatomy to elderly and handicapped persons! This illustrates the many pitfalls that can arise in achieving an accurate translation of a survey instrument.


The second problem with translation is that, if it is done well, it is expensive. This will add substantially to the cost of most surveys. Similarly, the use of bilingual or multilingual interviewers is also likely to add significantly to the cost of the survey. In an interview survey, if a mix of unilingual and bilingual interviewers is used, one may also encounter the problem that an interviewer speaking only one language encounters a household that needs an interviewer speaking the other language. There is then the necessity to discontinue the initial contact and then recontact the household with an interviewer who speaks the appropriate language. The necessity to recontact may also result in a higher likelihood of refusal to continue the survey.

A third, related, issue is that people from different countries will have quite different degrees of willingness to respond to surveys. In some countries, it is considered normal and appropriate to assist authorities who are undertaking surveys, especially if these are legitimate authorities and, possibly, government agencies. However, people from other countries may have a much greater distrust of government authority, or, indeed, any authority, and therefore be unwilling to cooperate in a survey. For example, people who settle in a democratic country after being brought up under a dictatorship may have much less willingness to cooperate with any activity sponsored by government, especially if their presence in the democratic country was by way of being a refugee.

A final issue that compounds language problems in surveys is the presence in many countries of illegal aliens who also do not speak the native language of the country to which they have migrated. Illegal aliens will, of course, be far less likely to respond to a survey, for fear that doing so will reveal their presence to the authorities and lead to deportation or imprisonment. Translation into the language of the illegal alien will not have much effect on this issue. However, the greater the proportion of illegal aliens in a survey population, the greater, again, will be the biases in survey results from non-inclusion of this segment of the population.

It is clear that increasing immigration into various countries will require that a fully random sample survey be translated into a number of languages and that, if interviewers are used, it will be necessary to have interviewers who can converse fluently in each of the languages prevalent in the country where the survey is being conducted. On the other hand, almost by default, English is becoming an increasingly international language. For many years it has been the language of international and national air traffic control. Through the internet, it is becoming increasingly the language of communication over the Web. Further, it is increasingly the predominant language in international trade and marketing. It is possible that, as time goes by and further globalisation of population occurs, English will increasingly become the common language of communication, in which case the issues of multi-language surveys may gradually decrease in importance. However, this is not the present situation.

Literacy
Literacy is potentially more of a threat to surveys than multiple languages and the globalisation of populations. Most countries around the world claim levels of literacy for their populations that are well over 90 per cent. Indeed, the US Central Intelligence Agency (CIA, 2008) estimates that the literacy rate of the entire world is around 82 per cent, with most developed countries having a literacy rate that exceeds 95 per cent. These statistics define ‘literate’ as a person being able to read and write. However, such figures may mask a more serious problem.

Kirsch et al. (2002) find a very different picture in examining literacy in the United States. Their study finds that about 21 to 23 per cent of the adult population in the United States had only the very lowest level of skill in performing tasks requiring literacy. This skill level meant that this segment of the population could perform only the simplest routine tasks involved in reading and writing. Kirsch et al. find that a further 25 per cent of the population functioned at the next level of literacy, which they describe as being ‘able to locate information in text, to make low-level inferences using printed materials, and to integrate easily identifiable pieces of information’. Neither of these skill levels appears to correspond to the literacy requirements of most person and household surveys. Thus, assuming that about 3 per cent of the US population is illiterate and that 46 per cent of the population has a literacy level that is insufficient to be able to read and respond to the average survey questionnaire, it could be assumed that approximately 49 per cent of the US population would find the average printed household or person survey to be beyond their capabilities. This suggests that approximately one-half of the US population would find a postal survey, or probably an internet survey, largely beyond their literacy capabilities to complete.
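The arithmetic behind this estimate can be checked directly. The short Python sketch below is purely illustrative: the three shares are those quoted above (the assumed 3 per cent illiteracy figure and the lower bound of the 21 to 23 per cent range), not new data.

# Illustrative check of the composition used in the text: the share of the US
# adult population likely to find a printed survey beyond their capabilities.
illiterate = 0.03      # assumed share unable to read or write at all
lowest_level = 0.21    # lower bound of the 21-23 per cent at the lowest skill level
second_level = 0.25    # share at the next level of literacy (Kirsch et al., 2002)

below_threshold = illiterate + lowest_level + second_level
print(f"Share unlikely to cope with a written survey: {below_threshold:.0%}")
# Using the upper bound of 23 per cent gives 51 per cent, consistent with the
# 'approximately one-half' conclusion drawn in the text.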

Starting in 2003, a number of countries participated in an Adult Literacy and Life Skills (ALLS) Survey conducted by the OECD and Statistics Canada (ABS, 2008b). In 2003 this survey included the United States, Bermuda, Canada, Italy, Mexico, Norway, and Switzerland. A second wave of this survey was conducted in 2006 covering Australia, Hungary, the Netherlands, New Zealand, and South Korea. The results of these surveys are not dissimilar to the results found by Kirsch et al. (2002). In this survey, literacy is measured on a continuous scale from zero to 500 points, and these scores are grouped into five levels. Level 1 is the lowest measured level of literacy and level 5 is the highest. The developers of the survey regard level 3 as ‘the minimum required for individuals to meet the complex demands of everyday life and work in the emerging knowledge-based economy’ (Statistics Canada, 2005). Tests are conducted over four domains: prose, documents, numeracy, and problem solving. The most relevant domains for survey completion are primarily document literacy and, to a lesser extent, problem solving. Prose literacy would be required to read lengthy instructions in a survey.

The results of the studies show that, in most countries, the percentage of the population in level 1 of document literacy is around 15 per cent, but varies from a low of 8.9 per cent in Norway to a high of 49.2 per cent in Italy. Level 2 averages around 30 per cent and varies from a low of 23.5 per cent in Norway to a high of 34.5 per cent in Switzerland. With the exception of Norway, these percentages, which correspond roughly to the lowest two skill levels of the study by Kirsch et al. (2002), suggest, again, that around 45 per cent of the population in the countries that have conducted the ALLS survey probably have a level of document literacy that is below that required to complete a questionnaire survey with any level of competence. Indeed, in the United States, the percentage of the population in levels 1 and 2 is 52.5 per cent, while in Australia it is 43.5 per cent. Canada stands at 42.6 per cent and Switzerland at 49.0 per cent. In terms of problem solving, the combined percentages in levels 1 and 2 are even higher, with Australia having 67.8 per cent, Canada 68.5 per cent, and Switzerland 66.1 per cent. Switzerland did not collect data from Italian-speaking people for this domain and the United States did not collect information in this domain at all. The percentages for levels 1 and 2 on prose are largely indistinguishable from those for document literacy. From these results, it must be concluded that about half the population in most developed nations do not have a sufficiently high literacy level to be able to handle a survey questionnaire provided as a written document or on the internet.

This could readily account for the declining response rates that are reported by many countries, as well as the typically low response rate that is generally obtained from postal surveys. Simply providing a printed document with questions to be answered would probably be considered as threatening by a majority of those individuals who have only the basic or second-level literacy skills. It would also account for the difficulties often found in getting people to provide correct responses to questions that are asked in a survey, and further explains why many people will not read the instructions on how to fill out a survey.

The author of this book has found (Stopher, 2010) that self-administered postal surveys often require very extensive editing of responses, with some responses clearly being to questions that were not actually asked but were what the respondent thought was being asked. Often there are significant numbers of responses that cannot be repaired, because of the poor level of understanding on the part of the respondents.

Another study in the United States (Jackson, 2003) found that one-third of high school graduates never read another book after graduating from high school and that about 42 per cent of college graduates also never read another book after graduation. This lack of reading will contribute to declining literacy skills as the population ages. This is clearly shown, also, in the results of the ALLS survey in Australia, which shows that the percentage of the population with level 1 or 2 skills declines from age fifteen, when it is about 50 to 55 per cent, to about twenty-five to twenty-nine, when it is around 40 per cent or a little less, and then climbs through the remaining years of life, reaching its highest levels among those over the age of sixty-five, when it peaks at about 65 per cent. This may be partly due to fewer education opportunities having been available when those who are now in their seventies were receiving education as children, but is also likely to be due to the lack of practice in the skills of reading after graduation.

As Stopher (2010) notes, these statistics concerning literacy must call into question the representativeness of any survey that relies on printed survey materials, whether these are posted or displayed on the internet. Modern-day society relies increasingly on information that is provided either aurally or visually, suggesting that surveys also have to adopt more visual and aural presentation formats if representative samples are to be obtained and if the information is to be reliably recorded. Indeed, the situation with respect to literacy and language suggests that more reliance needs to be placed on oral interviewing and on finding means to measure behaviours of concern without the need for self-reporting.

22.2.3 Mixed-mode surveys

In response to the increasing levels of nonresponse to human subject surveys, one of the strategies that is being employed increasingly is that of mixed-mode surveys. That is, potential respondents are offered more than one way to respond to the survey, in the hope that more respondents will be willing to complete the survey given a choice as to how, when, and where to complete it. An increasing complaint from busy people in the early twenty-first century is that they do not have the time to complete a survey when the survey researcher contacts them. Some prefer to complete the survey with the assistance of an interviewer, while others would prefer to complete the survey in their own time and at their own convenience. Not only is this likely to increase response rates, but it is also a method to reduce respondent burden (Ampt, 2003).

The primary issue in mixed-mode surveys is whether information is comparable between different survey modes. In other words, if a person responds to a question on the internet, does that person give the same response that he or she would give to the same question in a postal survey or in a face-to-face or telephone interview? There is evidence suggesting that there are differences in responses to the same question according to the survey mode – i.e., whether the data are collected by interview, post, internet, etc. (Bonnel, 2003; Christian, Dillman, and Smyth, 2006; Dillman, 2000). Ampt and Stopher (2005) conclude from their research that considerable care is required in designing mixed-mode surveys, to reduce the potential for design effects to affect the data collected. They also suggest that research is needed in which surveys are conducted in parallel by multiple methods to ascertain the actual size and extent of design effects across modes of survey administration. Until such research is completed, it remains speculative whether mixed-mode surveys can be considered appropriate and unbiased.
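As a sketch of the kind of parallel-mode comparison suggested above, the following Python fragment compares the distribution of answers to a single question collected under two modes, using a chi-square test of independence. The counts and mode labels are invented placeholders, not results from any actual survey.

# Minimal sketch: comparing the distribution of responses to the same question
# collected under two survey modes, as a crude check for mode (design) effects.
from scipy.stats import chi2_contingency

# Rows: survey mode; columns: response categories for one question (placeholder counts).
counts = [
    [120, 340, 220, 60],   # e.g., telephone interview
    [150, 310, 250, 90],   # e.g., internet self-completion
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value would suggest that responses differ by mode, so the data
# from the two modes should not be pooled without adjustment.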

However, it is clear that the concept of mixed-mode surveys is one that warrants continual consideration, especially as people’s lives become busier and as response rates tend to drop. By offering respondents alternative ways in which to provide a response to a survey, the survey researcher returns a degree of control to the respondent, allowing him or her to decide when, where, and how a response will be provided. Some will opt for the telephone interview, especially when challenged somewhat by the literacy level likely to be required by a written survey, while others who may have considerable time pressures are likely to be favourably inclined to internet or self-administered postal versions that allow them to complete the survey in their own time and in several sessions, if necessary, especially when their literacy level is sufficient for the survey not to be perceived as a threat. This also adds weight to the requirement that internet surveys are designed so that they are not normally required to be completed at a single sitting but, rather, offer the respondent the opportunity to complete part of the survey now and part later on. Of course, there are exceptions to this, when it is necessary, as part of the experiment being conducted or the type of data being collected, that participants complete the entire survey at one sitting, or at least a specific component of it.

A further feature that this mandates for the internet survey is some indicator of the amount of the survey already completed and the amount yet to be done. In this way, the respondent can gauge whether or not he or she has the time to complete the survey in one sitting, or if it will be necessary to return at a later time to complete it.

22.2.4 Use of administrative data

Increasingly, modern technology results in various administrative records being created that may have utility to the survey researcher. For example, every time that a person uses a credit card to purchase a good or service, certain information is automatically recorded about the transaction, such as the product purchased, the vendor, the location, and the type of sale. The use of smart ticket systems on public transport provides an administrative record that will usually show where the person boards and alights from a public transport vehicle, the time at which these activities take place, and details about the length of time that the person spent on the public transport vehicle. The use of a mobile telephone will leave information in the system about when the call was made, to whom, and for how long, and may also be able to provide information on the approximate location of the caller at the time the call was made. Many other instances of data being recorded automatically by modern technology could be cited as further examples.

In most countries, at present, such data are considered confidential and are generally not made available to third parties for research or other potential uses. However, in some countries such data may be readily available for purchase, and in the future the extent to which data of this type may become available for purchase or for public agency use is unknown. Such data obviously offer attractive options to researchers in many fields, and could obviate the need to design and conduct a survey, or may require a much simpler and less burdensome survey to be conducted.

However, such data are often collected without the express knowledge of the persons from whom or about whom the data are recorded. The use of such data in any form of survey research must, therefore, raise serious ethical issues. Ideally, some form of permission should be obtained from the subjects for the data collected about them to be used in certain types of research. Alternatively, in executing a survey that seeks to augment the administrative data, the permission of each respondent to use the data collected through these passive methods can be sought specifically.

It seems likely that, as time goes by and technology continues to advance, there will be increasing opportunities to collect data passively and without even the awareness of the respondent in most cases. This will also increase the requirements for confidentiality to be offered and observed strictly in the use of such data, when it is not restricted either to the commercial undertaking collecting the information or to the public agency that has access to such data. On the other hand, it may offer the survey researcher opportunities to gain more accurate and complete data than have hitherto been possible to collect using normal self-reporting techniques of data collection.

A good example of this is provided in the field of travel surveys. Most travel surveys in the past have relied on self-report techniques, whether conducted by interview (face-to-face or telephone) or through self-administered means (postal, internet, and other methods). It has long been known that such self-report surveys are not fully accurate, because people forget some of the travel that is of interest to the researcher, or dismiss some travel as being of no significance, or omit some travel because of the tedium of providing a complete report. However, the increasing use of mobile telephones, GPS devices for guiding travel, and smart fare cards offer a wealth of passive ways to collect data about people’s travel that would be more complete and accurate than self-reported data. Speed cameras and stop-light cameras also offer further methods for collecting data, and the many proposals now arising for some form of pay-as-you-drive charging of motorists for various aspects of costs (such as insurance, registration, etc.) offer yet further opportunities to gain detailed information about travel by individuals. Other forms of intelligent transport systems are also likely to provide a wealth of data about travel. All these sources could be tapped for information that is now collected rather unreliably by traditional surveys. So far, such data have generally not been made available to the transport planning profession. However, it may only be a matter of time before such data become available to the profession, with as yet untold implications for surveys.

The challenges that such data raise are, first, that of the ethics of using such data and preserving the anonymity and confidentiality of the respondents; second, that of the impact on survey designs of the future; and, third, the appropriate restrictions that need to be placed on the use of such data. These will be both challenges and opportunities for the survey profession.

22.2.5 Proxy reporting

As a direct consequence of the increasing complexity and busyness of modern-day life in the twenty-first century, it is likely that person and household surveys will be increasingly subject to proxy reports. Usually, proxy reporting will be required for children in most surveys in which children are included as potential respondents. Such proxy reporting is usually unavoidable, and it is probably least subject to the customary problems of this type of reporting.

As has been discussed elsewhere in this book, there are various problems with proxy reporting. The fewest problems probably arise when the proxy reporting is done by a member of a household reporting on behalf of another member, but using a written survey that was filled out by the household member for whom the proxy report is being given. The issues here will usually relate only to the ability of the interviewer to probe for further information or to correct obviously erroneous information, neither of which may be possible when a proxy report is being relayed. The most troublesome proxy reporting takes place when a person is reporting on behalf of another person and has no written or other information to use to provide the required data. In such cases, the proxy may know only sketchy information about the behaviour or characteristics of the person on whose behalf he or she is reporting, and may either guess or not know the answers to various questions. Incomplete and inaccurate data are likely to arise in such cases. This is particularly likely when a parent is reporting on behalf of a teenage child, or when a husband or wife is reporting on behalf of his or her spouse.

Whether a survey is being conducted with sampled individuals or entire households, proxy reporting is likely to be a growing phenomenon. Often, the person whose data are to be collected is not currently at home and available when the interviewer calls, and others in the household offer to provide the information. As discussed earlier, rules can be set to limit the amount of such proxy reporting that is permitted to take place. However, it is likely to add significantly to the cost of the survey to permit a low level of or no proxy reporting, and the added costs may lead some clients to specify that some level of proxy reporting is allowable.

There is, therefore, a conflict that is being created. The complexities of modern life and its many demands on people’s time are likely to result in greater difficulty in speaking to the sampled individual in a person-based survey, while issues such as improving the accuracy and completeness of surveys and the increasing data demands in many areas of survey research require that proxy reporting be reduced as much as possible. Eliminating proxy reporting is likely to be expensive in interviewer-based surveys, because it will require multiple contact attempts, and may reach a level at which households perceive the repeated attempts to be a form of harassment. At such levels, proxy reporting will not be eliminated, and damage may be done to the reputation of the agency undertaking the survey. In self-administered surveys there is actually no way to control proxy reporting. It can often be detected in postal surveys by noting that the same handwriting appears for more than one person in the household. In internet surveys it is less likely to be detectable, and detection would probably have to depend on a question being asked as to who had completed the survey and then relying on the honesty of respondents to indicate whether or not a proxy report was provided. This will be an increasing challenge to surveys in the future.

22.3 Some possible future directions

The author of this book has previously offered some suggestions about the way ahead in the specific area of household travel surveys (Stopher, 2010). In this book the interest is rather broader, so some suggestions of future directions are provided for household and person surveys in general, taking as a departure point some of the challenges already outlined in this chapter.

First, the issue of language and literacy seems likely to be one that will have the most far-reaching effects on how household and person surveys are conducted in the future. Both issues contraindicate continued reliance on written media for conducting surveys. Indeed, to the extent that the profession continues to rely on written forms of survey questionnaires to be filled out by respondents, it seems likely that response rates will continue to drop and that surveys will become increasingly less representative of the general population.

To overcome these dual problems, it is likely that surveys will need to rely increasingly on some form of interview or use of technology to provide observation data in place of self-reporting. There may also be ways in which internet surveys can be used to overcome either or both of these problems. For example, if internet surveys can be crafted that offer respondents the option of hearing the survey questions rather than having to read them, then literacy issues may be able to be reduced. Similarly, if improved translation options become available so that internet surveys can be provided in a choice of languages (both for reading and hearing), then language and literacy issues alike can be mitigated. However, these solutions depend on further and extensive penetration of the internet into households, especially in Africa, Asia, Europe, and South America. Another option may become possible through interactive television or through the use of DVDs. In either case, use could potentially be made of the television to display and verbally ask survey questions, providing respondents with a method to provide responses through touch screens, a keyboard option, or other methods, with transmission of the results either to a recordable medium or via telephone lines to a survey server.

When possible, technology may be able to be used to provide some or all of the information required by the survey. An example of this is provided in the next subsection of this chapter in the field of household travel surveys. Instructions for what is required of the respondent in using a technology that is provided to the respondent for the purposes of the survey may be provided on DVD or by going to a specific URL on the Web, or through a recorded procedure that may be accessed by dialling a toll-free number.

The next issue with significant impact on surveys in some countries is the possibility that the telephone will become increasingly less useful as a means of sampling and recruitment. It seems likely that the telephone will continue to be a potentially useful method for response, when chosen by respondents, but that the use of random digit dialling for sampling and the use of the telephone as the first contact method may have to be reconsidered. For sampling, the alternative that can be considered is to use address lists, which are becoming increasingly available, or geographic information systems of land use that identify residential parcels, or some form of multistage sampling to enumerate addresses for sampling purposes. All these methods would allow address-based sampling to be conducted, which should be more complete than traditional RDD sampling, and would not rely on the existence of a working land line at the addresses to be sampled.
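A minimal sketch of how an address-based sample might be drawn from such a frame is given below. The file name, field names, and the choice of postcode as the stratifying variable are assumptions for illustration only; any address register or GIS extract of residential parcels could serve as the frame.

# Minimal sketch of address-based sampling from an address list.
import csv
import random
from collections import defaultdict

def draw_address_sample(frame_path, n_per_stratum, stratum_field="postcode", seed=42):
    # Group frame records by stratum (e.g., postcode or another geographic unit).
    strata = defaultdict(list)
    with open(frame_path, newline="") as f:
        for row in csv.DictReader(f):
            strata[row[stratum_field]].append(row)

    rng = random.Random(seed)
    sample = []
    for stratum, records in strata.items():
        k = min(n_per_stratum, len(records))
        sample.extend(rng.sample(records, k))
    return sample

# Example: draw 25 addresses per postcode for postal or face-to-face recruitment.
# sampled = draw_address_sample("address_frame.csv", 25)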

For recruitment, there are several options. Using an address-based sample, either face-to-face or postal recruitment can be undertaken. Face-to-face recruitment would be significantly more expensive, especially if multiple attempts are to be made at sampled addresses. However, it is also likely to be more effective in obtaining a relatively high recruitment rate. Postal recruitment has been carried out in the past and has often shown a rather low response rate. Indeed, such a procedure has been used by the author in two or three instances and has generally produced a recruitment rate of less than 20 per cent. It is therefore much less desirable, and would be unlikely to generate a representative sample, although it is much cheaper than the face-to-face recruitment method.

If an internet survey is to be carried out, then recruitment could be done by e-mail or directly through the Web. However, such a sampling procedure at this time will not be likely to be representative, even in North America, because internet and e-mail penetration is still substantially below the levels necessary for drawing a representative sample. This would be effective only when the population for the survey is limited to those with internet access, in which case either e-mail or direct Web recruitment would be potential methods.

Once households or persons have been recruited, and assuming that issues relating to design effects on survey responses can be satisfactorily dealt with, then the telephone becomes an optional method of response. Indeed, it would seem that, as a technique both to increase response rates and to decrease respondent burden, and as a means to reduce the levels of undesirable proxy reporting, the survey should be offered through multiple modes, such as the internet, telephone interview, and post, among others. Here, the telephone interview offers a useful way to get around issues of literacy, and, with multilingual interviewers and sufficient translations of the survey questionnaire, a way around language barriers as well. In the case of the Web, versions may be offered in different languages, and there may also be the potential for respondents to turn on an audio option, in which the questions are spoken from the website. The postal version may also be available in more than one language, but there is no easy way around the literacy issue for a postal survey.

A final issue that should be considered at this point is the extent to which it is worthwhile to continue to undertake large-scale cross-sectional surveys. In a number of areas, especially in medical research, use is made of panels of survey respondents (see Chapter 14). The problem with the large-scale cross-sectional survey is that it is within this type of survey that the greatest problems lie in achieving representativeness, and in which the greatest problems are also likely to arise over language, literacy, proxy reporting, and use of the telephone. Much can be gained by designing panel surveys to be used to continually update information bases and provide the data needed for policy determination, modelling, and the like. It is often not necessary to collect large samples in order to gain adequate information about a variety of issues that have typically been the subject of surveys over the years. The issues are primarily to ensure that any such panels are chosen initially so that they are fully representative of the population of concern and also to ensure that they remain representative, even through rotation, attrition, and replacement.

There are several advantages to using a panel, or several panels. First, the number of respondents required, especially if the major concern is to measure change in and the dynamics of behaviour over time, is generally much smaller than what is required from a repeated cross-sectional sample, as was discussed in Chapter 14. The small size of the panel may then permit more intensive and expensive survey modes to be used, in which respondents are provided with personalised help to complete survey materials. Face-to-face interviewing may be much more feasible with a modestly sized panel than it is with a large-scale cross-sectional survey. Focus groups can be held with panel members, in which in-depth issues can be probed, and also through which panel members may be able to be informed more completely about how to use survey materials.

The annual updating of data can also be achieved rather easily with a panel, whereas it would usually be prohibitively expensive with a cross-sectional survey. Panels can also be used to obtain attitude and opinion data relating to contemporary issues in society that may be of relevance to the main survey purpose. Panels also provide an opportunity to determine behavioural reactions to unexpected events that may be of major consequence but of short duration, and that normally could not be covered by an occasional cross-sectional survey. For example, in 2007/8 there were substantial and significant rises in the costs of oil-based products, especially for petrol and diesel fuels. The rises took place rapidly over a few months, and the highest prices remained in place for a relatively short time. The subsequent drop in prices was also very rapid. An ongoing panel survey would have provided an excellent opportunity to find out how people adjusted their household budgets, changed travel decisions, etc. in response to these rapid price changes, whereas there would not have been time to roll out a cross-sectional survey to find out the same information within the time that those price increases and decreases occurred.

Among the advantages that a panel or panels would offer are the following.

(1) The possibility of selecting a carefully designed panel that provides good representation of the target population, including the possibility of using a paid panel to ensure participation by those needed to represent the population.

(2) The ability to train panel members more carefully and in a more personalised way so as to be able to complete surveys of greater complexity or that collect more in-depth information than is normally possible.

(3) The ability to obtain data on contemporary issues of relevance to the overall purposes of the panel.

(4) The ability to monitor change and to explore the dynamics of change within the panel sample.

(5) A more modest outlay of funds for much higher-quality data.

(6) The creation of a permanent budget-line item to continue the panel, as opposed to the requirement to budget a large outlay for a cross-sectional survey on each occasion when a new cross-sectional survey is required, but followed by years of no required budget for major data collection.

When using an ongoing panel in this way, it is likely to be necessary to benchmark the panel from time to time, because attrition and replacement will be inevitable. In addition, it may be necessary for the panel to be representative of a changing population, in which case attrition replacement will need to reflect changes in the underlying population. Benchmarking may be able to be undertaken entirely with secondary data, such as periodic censuses, which may be taking place anyway. Alternatively, it may be necessary to conduct a large cross-sectional sample every few years, albeit with a much-reduced set of questions and a shorter survey than would normally be required, simply to benchmark the panel from time to time. However, although it may sound at first as though this then represents no gain over just collecting a larger-sample cross-sectional survey every few years, there would be large differences in the complexity and cost of the survey required for benchmarking, compared to a survey that takes place when no panel is in existence. Conservatively, it might be expected that the cross-sectional benchmark survey would be so much simpler and briefer than the non-panel situation that its cost may be an order of magnitude lower, and the time required to field it would also be substantially lower.

22.3.1 A GPS survey as a potential substitute for a household travel survey

Over the past two decades there have been rising levels of research and practice in the potential use of GPS devices for conducting household travel surveys (Stopher, 2008; Wolf, 2006; Wolf et al., 2003; inter alia). The appeal of GPS in this area is that it provides very accurate information about movement, in terms of time and location records that can be stored as frequently as every second. In self-report surveys, people are often uncertain of the addresses to which they travel, and will tend to round times of departure and arrival to the nearest five or even fifteen minutes, and often do not remember these accurately in any case. People are also generally unable to report travel distances with any degree of accuracy. Further, the traditional self-report survey of travel, whether by interview, paper, or the internet, involves several questions that must be asked about each travel event in a person’s day and can often become quite burdensome. As a technological alternative to this, people can be provided with a small GPS unit that can be carried on their person throughout the day, and that will record their movements accurately and in detail. With other technological advancements, it is becoming feasible to use software procedures to deduce the means of travel being used at any given time during the travel, and also to deduce the purpose for which the travel is being carried out (Stopher, 2008).
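As a very rough illustration of the kind of software deduction referred to above, the following Python sketch assigns a likely travel mode to a trip from simple speed statistics. The thresholds are assumptions chosen for illustration; the procedures described by Stopher (2008) and others are considerably more sophisticated, drawing on GIS data, dwell times, and trip context.

# Toy sketch of rule-based mode inference from GPS trip summaries.
# The speed thresholds are illustrative assumptions only.
def infer_mode(avg_speed_kmh, max_speed_kmh):
    """Guess the travel mode for one trip from simple speed statistics."""
    if max_speed_kmh < 7:
        return "walk"
    if max_speed_kmh < 25 and avg_speed_kmh < 15:
        return "bicycle"
    if avg_speed_kmh < 40:
        return "bus or car (congested)"
    return "car, train, or other fast mode"

# Example trip summaries (average and maximum speed in km/h) derived from
# second-by-second GPS points.
trips = [(4.5, 6.0), (12.0, 22.0), (33.0, 80.0)]
for avg, mx in trips:
    print(avg, mx, "->", infer_mode(avg, mx))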

The task of carrying around a small GPS device that weighs about fifty grams, is smaller in size than the average mobile phone, and needs to be recharged only every couple of days is not burdensome, requires no special literacy skills, and can be readily explained in a variety of languages. Indeed, although it has not been done to date, it is quite plausible that a recording could be made in multiple languages of how to use the device, which could be made available by having respondents call in to a toll-free telephone number. Alternatively, a short video could be made, also in multiple languages, showing how to use the GPS device. The video could be recorded to a DVD, which would be sent out with each device. If other data are required to be collected from respondents, in addition to the data on the GPS device, then these data could be collected through an internet survey for those with Web access and a preference to use it, or by a telephone interview or even a face-to-face interview. Alternatively, an interactive method of collecting data, such as suggested in the preceding subsection, could be used. Each of these methods could offer the opportunity for multiple languages to be used, and could avoid the necessity for people to read and write their responses as, instead, they would listen and respond verbally, with the potential for explanation when the respondent is unclear as to what is being asked.

In this application, a technological device would be used to collect the data that are the most difficult for people to record, through a combination of not knowing the data with any certainty, the burden of the repetitive nature of the data collection, and potentially a lack of understanding of what is important to the survey researcher to record. The remaining data, and the instructions on how to make use of the technological device, could all be provided through mechanisms that reduce the issues of literacy and language, and that also represent less burdensome methods of data collection. Because of the nature of the instructions for using the device and the remaining questions, potential biases from multiple methods of survey administration are also unlikely to be of any significance, thereby increasing the potential to offer respondents choice and control over the method of response.

Two further major advantages with this technological development relate to the amount of data that can be collected and some of the data content. Although survey researchers in the transport field have long desired to collect detailed information on the routes that people choose in travelling from one place to another, it has never been possible to devise a survey that most people would be capable of completing that can ask for a report on the route taken. However, the GPS device provides a very precise mapping of the route of travel for every travel event recorded on it. This is obtained with no burden placed on the respondent whatsoever. Second, travel survey researchers have also long been aware that there is variability from day to day in people’s travel. However, in most cases, it has been determined that asking people to report their travel for more than two or three days by conventional surveys is too burdensome. The use of GPS, on the other hand, means that collecting data for a week or longer adds little to the burden of the survey. Indeed, in focus groups undertaken in Australia, it was found that respondents preferred to have the devices for about two weeks or so rather than for a shorter period (Swann and Stopher, 2008). Thus, collecting data about travel for a week or even for multiple weeks becomes immediately feasible. This also has implications for reducing required sample sizes, because the added days of data provide information on variability that improves the overall information content.

This last point is an interesting one to pursue a little further. Most of the discussion of sampling error in Chapter 13 was based on the assumption that each observation in the data set represented an independent observation of the phenomenon of interest in the survey. In the case of household-based surveys, this is not strictly correct, because the behaviour of members of a household is not usually independent of other members of the household. In other words, people in a household usually make at least some decisions and engage in some behaviours that are based on joint decisions. This is usually ignored in the computation of sampling errors in household surveys, although, strictly speaking, it should not be. In the case of observations of a subject over multiple days, the assumptions of independence are clearly violated, and it is necessary to take into account the interrelationships between days of observation, if the unit of observation of interest is each individual day from each person in the sample. This issue has been explored by Stopher et al. (2008c), who show that substantial sample size reductions are possible, although these reductions are much less than the increased number of actual observations obtained. The following discussion of the effects of multiple observations of each respondent on sample size is based on the work of Stopher et al. (2008c).

The effect of multiple observations of each respondent on sample size
The potential of using GPS to measure person travel opens up an issue that has not received much coverage in the survey literature to date, apart from within panel surveys, which is the issue of having multiple observations of a respondent within a sample. As noted above, sampling theory and sampling statistics are based largely on the assumption that each observation of a sampling unit in a data set is independent of any other observation. An exception to this assumption is clearly acknowledged in developing sampling statistics for panels, as noted in Chapter 14 of this book. When multiple observations are obtained of each subject within a single period of time in a data set, then the sampling errors that would be calculated using the procedures of Chapter 13 would usually be expected to underestimate the true sampling errors.

When collecting multiple observations – e.g., over multiple days – from the same individual, the observations for one person are clearly not independent of one another. Underestimation of sampling variance appears intuitively obvious, because one person’s behaviour over multiple observations is likely to be more similar than would be the behaviour of a number of people on any given occasion. The true variance needs to be estimated, if the true sampling error is to be estimated and if it is desired to estimate the sample sizes required to achieve a given level of accuracy for the key variables in the survey.

In the following discussion,1 in order to keep the discussion reasonably simple, it is assumed that the multiple observations are observations of the same individual over multiple days, concerning a behaviour that is likely to vary from day to day, but that is also likely to be partially dependent on the behaviours of prior and subsequent days. An individual is sampled for D days, any one of which is designated by d. Further, suppose that N individuals are each sampled for D days and observations about one individual are designated by i. Some behaviour of interest is measured over the multiple days and the discussion focuses on a key variable, $y_{id}$, for individual i on day d. The measure of this key variable of behaviour has two independent and latent components, as shown in equation (22.1):

y_{id} = \mu_i + \varepsilon_{id} = \mu + \delta_i + \varepsilon_{id}    (22.1)

1 The author is indebted to Professor Kara Kockelman of the University of Texas for assistance in developing the theory in this section.


where:

$y_{id}$ = the measure of the key variable for individual i on day d;
$\mu_i$ = person i's true mean value of the key variable;
$\varepsilon_{id}$ = the random error term that accompanies a single day's data point;
$\mu$ = the population mean of the key variable; and
$\delta_i$ = the difference between the mean of the key variable for individual i and that of the population (serving as a fixed effect – e.g., when analysts wish to estimate these particular values directly – or a random effect when sample sizes are large).

With repeat observations of the behaviour of various individuals, the means of the key variable for each individual in the sample can be estimated, as can the difference of each individual mean from the population mean, and the individual variance of the key variable, as shown in equations (22.2), (22.3), and (22.4):

\hat{\mu}_i = \frac{1}{D} \sum_{d=1}^{D} y_{id}    (22.2)

\hat{\delta}_i = \hat{\mu}_i - \hat{\mu} = \hat{\mu}_i - \frac{1}{ND} \sum_{i=1}^{N} \sum_{d=1}^{D} y_{id}    (22.3)

\hat{\sigma}_i^2 = \frac{1}{D-1} \sum_{d=1}^{D} \left( y_{id} - \hat{\mu}_i \right)^2    (22.4)

where $\sigma_i^2$ = the variance of $\varepsilon_{id}$.

The variance in the key variable can be estimated across all persons and all days, which is equivalent to summing the interperson and interday (temporal) variances. In effect, this is an application of analysis of variance, whereby each respondent is a ‘treatment’, and the survey provides data for computing both between and within sums of squares (BSS and WSS). In other words, the total variance in the data is the result of interpersonal plus intertemporal variations, as shown in equation (22.5):

V(y_{id}) \approx \frac{TSS}{ND} = \frac{1}{ND} \sum_{i=1}^{N} \sum_{d=1}^{D} \left( y_{id} - \hat{\mu} \right)^2
= \frac{1}{ND} \sum_{i=1}^{N} \sum_{d=1}^{D} \left( y_{id} - \hat{\mu}_i \right)^2 + \frac{1}{N} \sum_{i=1}^{N} \left( \hat{\mu}_i - \hat{\mu} \right)^2
\approx \frac{1}{N} \sum_{i=1}^{N} \hat{\sigma}_i^2 + \frac{1}{N} \sum_{i=1}^{N} \hat{\delta}_i^2 = \frac{WSS}{ND} + \frac{BSS}{ND}    (22.5)


where:

TSS = the total sum of squares of the key variable; and
$\hat{\mu}$ = the population mean of the key variable for the sample.

This relationship is shown as approximate, because an unbiased estimate of the variance of $y_{id}$ requires accounting for losses in degrees of freedom by subtracting one and N in the appropriate denominator terms. The use of sample estimates in the last line of the equation implies that such adjustments are necessary, and these can become significant when the number of periods is few – e.g., if D is three or four days or fewer.

Equation (22.5) demonstrates that the total variance in the data can be decomposed into the variance around the mean of the key variable for each respondent and the variance of the individual respondent means of the key variable around the population mean. If it were to be assumed that all individuals exhibit the same variance in their day-to-day idiosyncratic effects (i.e., $\sigma_i^2 = \sigma_j^2 = \sigma^2$ for all individuals i and j), and keeping in mind that increases in sample size and sample duration will reduce the variance in the final estimates, then equation (22.6) follows:

V(\hat{\mu}) = \frac{\sigma^2}{ND} + \frac{\sigma^2_{\mu_i|\mu}}{N}    (22.6)

where $\sigma^2_{\mu_i|\mu}$ = the variance of the mean value of the key variable for each respondent around the population’s grand mean.

Equation (22.6) should include some covariance terms if there is heteroscedasticity in the idiosyncratic effects, in other words, if $\sigma_i^2 \neq \sigma_j^2$, and/or if there is correlation between the mean and the variance of the key variable for an individual, such as would be the case if respondents with larger mean values also had higher variability in their day-to-day values of the key variable (equation (22.6) also requires the standard assumption of independence in the $\mu_i$ and $\varepsilon_{id}$ terms). At this point, such correlations are ignored, together with their associated covariance terms.

Given the data from multiple observations of each respondent, all the terms in equation (22.6) can be estimated by computing the sample variance of the key variable across all respondents, averaging the sample variances for the key variable for all individuals, and computing the sample variance across the average values of the key variable for all respondents.

It is useful then to consider two extreme situations. At one extreme, if there is no variation from observation to observation (day to day) within each respondent (in other words, respondents behave exactly the same on each occasion when they are observed), then the first term on the right-hand side of equation (22.6) becomes zero and there is no gain to obtaining multiple observations of each respondent. At the other extreme, if there is no inter-respondent variation in the key variable, but only variability in the observations for each respondent, then the second term of equation (22.6) will be zero and there will be no point in observing more than one individual, although analysts


would still want as many observations as possible on that one individual. Presumably, reality in most cases will lie somewhere between these two extremes.

It is possible to explore further what these relationships imply. For example, suppose that the day-to-day variability is about one-half of the interpersonal variability – i.e., that \sigma^2 = 0.5\,\sigma_{\mu}^2. In addition, a constant 'K' is defined as the sum of these two variances – i.e., K = \sigma^2 + \sigma_{\mu}^2. Under this assumption and using the value of K, the variance of the key variable across all data points can be expressed as shown in equation (22.7):

V(\hat{\mu}) = \frac{K}{3ND} + \frac{2K}{3N} = \frac{K(2D+1)}{3ND}     (22.7)

If a value is now assumed for D, the number of repeat observations on each respondent, as, say, fifteen, then equation (22.7) would yield equation (22.8) for the variance and the standard deviation of the key variable:

V(\hat{\mu}) = \frac{K}{45N} + \frac{2K}{3N} = \frac{31K}{45N} \approx \frac{0.689K}{N}, \qquad sd(\hat{\mu}) \approx 0.830 \sqrt{\frac{K}{N}}     (22.8)

For this case, fifteen repetitions of data (days of observation, for example) for each respondent would reduce the sample size requirements (in providing, for example, confidence intervals of the mean response) by 17 per cent (0.17 = 1 – 0.83) over the situation of one observation on each respondent in the sample. (This result relies on the fact that K is N times the variance of the mean of the sample in cases in which D = 1.) However, if the interday variance equals the interpersonal variance in the mean of the key variable – i.e., \sigma^2 = \sigma_{\mu}^2 – equation (22.7) now yields equation (22.9).

V(\hat{\mu}) = \frac{K}{2ND} + \frac{K}{2N} = \frac{K(D+1)}{2ND}     (22.9)

If, as before, D is again assumed to be fifteen, then (1 + D) / 2D is equal to 0.533 and the square root of this is 0.73, leading to a 27 per cent reduction in sample size, as compared to one-day data.

In the paper by Stopher et al. (2008c), some actual multi-day data were used to demonstrate the effects of these reductions in sample variance. The different means and variances required in these estimations were calculated, and it was found that the ratio of \sigma^2 to \sigma_{\mu}^2 was on the order of three to five, rather than a fraction of 0.5 or equality at 1.0. With this much higher ratio, the sample size reductions from fifteen days of data were calculated to be on the order of 65 to 75 per cent, which shows a much larger gain from the multiple observations. Clearly, the larger the intraperson variance is compared to the interperson variance, the greater will be the gains of using multiple observations on each respondent.
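As a minimal sketch of how such a decomposition might be computed, the fragment below uses made-up numbers (not the Stopher et al. data; all names and values are purely illustrative) to estimate the within-person and between-person components and the implied sample-size reduction, in the spirit of equations (22.2) to (22.8):

# Illustrative only: N respondents observed on D days (values invented for the example).
data = [
    [3, 4, 2, 5, 4],
    [7, 6, 8, 7, 9],
    [1, 2, 1, 3, 2],
    [5, 5, 6, 4, 5],
]
N = len(data)
D = len(data[0])

person_means = [sum(row) / D for row in data]      # equation (22.2)
grand_mean = sum(person_means) / N                 # equation (22.3)

# Within-person and between-person sums of squares (WSS and BSS in equation (22.5))
wss = sum((y - m) ** 2 for row, m in zip(data, person_means) for y in row)
bss = sum((m - grand_mean) ** 2 for m in person_means)

sigma2_within = wss / (N * (D - 1))                 # day-to-day variance
sigma2_between = bss / (N - 1) - sigma2_within / D  # between-person variance

# Variance of the estimated grand mean, equation (22.6), for D days versus one day
var_multi_day = sigma2_within / (N * D) + sigma2_between / N
var_one_day = (sigma2_within + sigma2_between) / N  # K / N, the case in which D = 1

reduction = 1 - (var_multi_day / var_one_day) ** 0.5
print(f"within = {sigma2_within:.2f}, between = {sigma2_between:.2f}, "
      f"sample-size reduction ~ {reduction:.0%}")

With the invented numbers above, the within-person component is much smaller than the between-person component, so the gain from repeat days is modest; with the ratios of three to five reported by Stopher et al. (2008c), the gain is far larger.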


23 Documenting and archiving

23.1 Introduction

Collecting data on a sample of respondents is a far from trivial task, as the preceding chapters of this book should have demonstrated quite clearly. However, for the data to continue to be useful, two things are necessary: there must be adequate documentation of the survey effort, and the data must be appropriately archived, so that future users can access the data and can understand their meaning. Too often, in the past, survey researchers have ignored these requirements, with the result that further uses of expensive data sets have been precluded. Because of the time required to document and archive, it is often tempting to overlook these final activities and to consider that the survey is done when the data have been collected, processed, edited, cleaned, weighted, expanded, and analysed. In addition, unfortunately, there are many instances when a survey research firm has estimated the costs of the survey sufficiently poorly that money has been lost in completing the data collection itself, and the time required to document and archive the data would add to the financial losses already incurred.

In considering these two activities, it makes sense to consider the documentation of the survey first and then to deal with archiving, especially because a significant amount of the information that should be assembled in the documentation should also provide part of the metadata that should be stored with the archived data. In discussing documentation, this book provides suggestions on what should be included, recognising at the same time that contractual relationships may override or replace these suggested requirements.

23.2 Documentation or the creation of metadata

All surveys should be completed by creating appropriate documentation about what was done. Failure to document the data collection effort not only reduces the value of the data for immediate use but also degrades the value of the survey substantially for future use. In Europe, data documentation is often referred to as the creation of metadata, and this terminology is becoming more prevalent in other parts of the world also. Metadata is simply 'data about the data'. The metadata provides information about


the survey data that may be required by users, groups of users, and even by software programmes.

Descriptive metadata or data documentation is descriptive information about statistical data 'that describes specific information about data sets and allows for the understanding of the elements and structure of a given data set' (Gillman et al., 1996; Sprehe, 1997; National Archives of Australia, 1999; McKemmish et al., 2001; Wigan et al., 2002; Sharp, 2003; Stopher et al., 2008b).

According to Axhausen and Wigan (2003), there are four main purposes for descriptive metadata in survey research:

(1) to provide a description of the survey itself and the methodology used to conduct the survey;

(2) to list any supplementary and secondary source data that may have been used, along with any other materials that were used for weighting, validation, or other purposes, including adding to the primary data;

(3) to describe the responsibilities for the survey; and

(4) to provide a critical assessment of the processes that were used to generate or produce the data.

In addition to descriptive metadata, there is also preservation metadata. Preservation metadata are the data that provide long-term accessibility to data sets by providing structured information about the data in the form of technical details required to understand and access the data. In recent years many projects have been undertaken, and a considerable amount of literature has been generated about preservation metadata. The reader will find a useful summary of this information provided by PADI (Preserving Access to Digital Information) (2007), together with an extensive set of references to literature on the topic.

Because the time horizons for using data in various different topic areas may vary considerably and because of the expense usually associated with collecting representative data, both the data collected and all relevant documentation and technical details concerning the data should be preserved for as long as possible. Loss of information about data equates to loss of knowledge. For this reason, many efforts are currently under way to agree on standards for metadata, especially preservation metadata. This is also important as a means to preserve public access to data. Although there are still countries and agencies that attempt to keep data from the public, increasingly around the world it is becoming common, if not a legal obligation, to provide the public with access to data, especially data that have been collected with public money.

23.2.1 Descriptive metadata

It is not uncommon for the personnel who are involved in the collection of survey data to be the only individuals who possess critical information about the data. Either when the data collection project is over or when the personnel involved leave the


organisation, the knowledge about the data often leaves with them, unless careful safeguards are employed to document the data collection project (Axhausen, 2000; Wigan, Grieco, and Hine, 2002). Creating descriptive metadata is, therefore, essential if this knowledge is not to be lost. Descriptive metadata will explain the methods used and the ideas that formed the basis of the process of data collection, and will document the other data used. A failure to create descriptive metadata, incorrect documentation, and the exclusion of major elements of the survey process from the documentation have often resulted in the loss of significant information.

Part of the descriptive metadata will often be a 'code book'. In the social science literature, 'code books' are often referred to as metadata. In engineering circles, including transport planning, code books may often document only variable names and codes, category codes and labels, and missing value codes and labels. In contrast, a social science code book may house all the information included in engineering survey code books, as well as the survey questions asked, skip patterns employed, and response rates (Interuniversity Consortium for Political and Social Research [ICPSR], 2002; Leighton, 2002). A proposal for standard items to include in descriptive metadata is provided by Stopher et al. (2008a) for household travel surveys, but they are general enough to be suitable to most surveys of human populations. They suggest the following items as standard items that should always be included.

(1) Sponsorship of the survey. This would include the name of the organisation that sponsored the survey and would also indicate the name of the fieldwork agency, if the sponsoring agency did not undertake the actual data collection.

(2) The survey purpose and objectives. This should explain why the survey was carried out, what it was hoped the outcome would be, and what other purposes were to be served by the data.

(3) The survey instruments and other survey documents. This should include all the forms that were required for administering the survey, including pre-notification letters, reminder postcards and letters, recruitment scripts, interview scripts, and all other survey documents. This should also include interviewer manuals.

(4) Other survey fieldwork procedures. This includes coding instructions and code books, descriptions of incentives offered and how these were provided, and methods used to validate the results.

(5) Population and sampling frame. This includes a description of the population that the survey was intended to sample, reasons for targeting this population, and the sampling frame used to select the sample, if applicable. If no sampling frame was used, then the method of achieving a representative sample from the population should be described.

(6) Sample design. This is a complete description of the design of the sample, the determination of the sample size, information on the eligibility criteria for screening the sample, and the procedures used to accomplish the screening.


(7) Sample selection procedures. This describes the methods employed to select the sample, details of how the sample was drawn to meet the sample design specifications, what was permitted in terms of proxy reporting, and what defines a completed survey – e.g., what key variables must be answered and, if the survey is a household survey, from how many household members data are required to define the household as complete.

(8) Sample disposition. This is a detailed reporting of the outcome of the sampling, including refusal rates, terminations, ineligibles, completed interviews, and non-contacts. It should also report on the levels of item nonresponse that occurred for individual questions.

(9) Response rates. This should document how ‘Eligibility unknown’ contacts were counted and used in response rate calculation, what response rate formula was used, and the calculation of relevant response rates.

(10) Processing description. This should detail the process of editing, adjusting the data, the inference and imputation procedures applied, etc.

(11) Precision of estimates. This should report the estimation of sampling errors for key variables, the identification of other possible sources of error and bias, and a description of the weighting and expansion methods used.

(12) Basic statistics. This includes a description of all base percentages or estimates on which conclusions are made.

(13) Data collection methods. This is a description of the methods used to collect the data and any procedures not described elsewhere.

(14) Survey period. This should include the dates of interviews for fieldwork and data collection, and reference dates for reporting, for example, the time, day and date when calls or other contacts were made.

(15) Interviewer characteristics. A description of the number of interviewers and other fieldwork personnel used and their backgrounds and qualifications.

(16) Quality indicators. This should document the results of internal validity checks and any other indicators used to assess the quality of the survey. This would also include details of the amount of data inference and imputation required, and how corrected, imputed, and inferred values are flagged.

(17) Contextual information. This would include any other information required to make a reasonable assessment of the findings and the data.

(18) Geographic referencing materials. A description of any procedures used to locate survey respondents geographically and to code geographic information such as addresses collected in the survey.

The documentation should also include such things as the request for proposals or tenders, the successful proposal, the contract for the survey work, and other organisational documentation required in the course of the survey. Modifications to the contract should also be included, along with progress reports, the final report, the results of any key meetings, and any relevant information about events that occurred during the survey that may have affected survey results.
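As an illustration of the kind of code book entry referred to above, the short fragment below tabulates one coded variable and prints its codes, labels, and counts; the variable name, codes, and labels are invented for the example and are not drawn from any actual survey:

from collections import Counter

# Hypothetical coded responses for a single variable in a household file
records = [
    {"hh_vehicles": 0}, {"hh_vehicles": 1}, {"hh_vehicles": 1},
    {"hh_vehicles": 2}, {"hh_vehicles": 99},
]
labels = {0: "no vehicles", 1: "one vehicle", 2: "two or more vehicles",
          99: "missing / refused"}   # 99 is assumed here to be the missing-value code

counts = Counter(r["hh_vehicles"] for r in records)

print("Variable: hh_vehicles - number of household vehicles (hypothetical)")
for code in sorted(counts):
    print(f"  code {code:>2}  {labels.get(code, 'undocumented'):<22}  n = {counts[code]}")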


23.2.2 Preservation metadata

As distinct from descriptive metadata, preservation metadata is the documentation that is developed for data that are archived. Preservation metadata is of benefit to those who will use archived data, because it enables better organisation and discovery of the data, and facilitates better management of the data (Gillman, Appel, and LaPlant, 1996; Sprehe, 1997; Wigan, Grieco, and Hine, 2002). Correctly developed, preservation metadata provides a succinct description of the contents of the archive. This saves time for all users.

Preservation metadata standards have been established in Europe and Australia, such as, for example, the Commonwealth Record-keeping Metadata Standard (National Archives of Australia, 1999), the Metadata Encoding and Transmission Standard Initiative (CEDARS Project, 2002), and the Dublin Core Metadata Initiative (Dublin Core, 2004). The contents of these standards are generally quite similar, although the Metadata Encoding and Transmission Standard Initiative is rather more difficult to understand at first glance. Table 23.1 shows the elements that should be included in preservation metadata so that future users of the archived data will be able to make best use of the data and minimise data retrieval costs, especially when collating data from different sources (McKemmish et al., 2001).

23.2.3 Geospatial metadata

Many surveys are based on some type of geospatial structure – i.e., the surveys relate to a specific geographic area and contain data that reference geographic locations. There are additional standards and specifications that have been developed to apply to such data bases, in particular by the International Standards Organisation (ISO). These standards for the documentation of spatial data were originally developed by the United States' Federal Geographic Data Committee, and have been adopted by ISO as an international standard, embodied in ISO 19115. There are seven major components in this standard (Stopher et al., 2008b):

• identification information containing basic characteristics of the data set – e.g., content description, the spatial domain, and the time period covered by the content;
• data quality information that provides an assessment of the quality of the data set and its suitability for use;
• spatial data organisation information that describes the mechanism used to represent the information within the spatial data set;
• spatial reference information that describes the reference frame used to encode spatial information;
• entity and attribute information that outlines the characteristics of each attribute, including its definition, domain, and unit of measure;
• distribution information that identifies the data distributor and the options for obtaining the data; and
• metadata reference information that describes the date, time, and the person(s) responsible for maintaining the database (Cromley and McGlamery, 2002).
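A minimal sketch of how these seven components might be recorded for a survey data set is shown below; the field names simply mirror the list above rather than the formal ISO 19115 element names, and all of the values are hypothetical:

# Illustrative structure only - not the formal ISO 19115 schema.
geospatial_metadata = {
    "identification": {
        "content_description": "Geocoded trip ends from a household travel survey",
        "spatial_domain": "Sydney Greater Metropolitan Area",
        "time_period": "2010-11",
    },
    "data_quality": {"assessment": "suitability and match rates as documented in the code book"},
    "spatial_data_organisation": {"representation": "vector point features"},
    "spatial_reference": {"reference_frame": "GDA94 / MGA Zone 56"},
    "entity_and_attribute": [
        {"attribute": "trip_purpose", "definition": "purpose at destination",
         "domain": "codes 1-9 (see code book)", "unit": "not applicable"},
    ],
    "distribution": {"distributor": "sponsoring agency", "access_options": "on request"},
    "metadata_reference": {"date": "2011-06-30", "maintainer": "survey data manager"},
}

for component, detail in geospatial_metadata.items():
    print(component, "->", detail)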


Table 23.1 Preservation metadata elements and description

Layers | Element | Repeatable | Description, example
Registration | Record identifier | Yes | Primary key for the metadata record, would be assigned by the computer – e.g., 20011005_MD1
Registration | Date/time created | No | Date/time when database was created
Registration | Location | Yes | E.g., //server2/datawarehouse/file.csv
Terms and conditions | Rights management: security classification | No | E.g., unrestricted, restricted
Terms and conditions | Usage condition | No | E.g., 'Must be a member of Workgroup', or 'Usage upon payment of $74.50', or 'ITS staff only'
Terms and conditions | Disposal: disposal authorisation | No | Person authorising or able to authorise disposal of record
Terms and conditions | Disposal status | No | E.g., not disposed, removed from system, archived in …
Terms and conditions | Reason for disposal | No | E.g., 'Replaced through different data set'
Structural | Type | No | E.g., data base, map
Structural | Aggregation level | No | E.g., tables, series, set
Structural | Format: media format | No | E.g., electronic, printed
Structural | Data format | No | E.g., Microsoft Access database, SPSS, csv
Structural | Medium | No | E.g., hard drive, CD-ROM, DVD
Structural | Size | No | E.g., 100MB, 300 pages
Contextual | Agent: agent type | Yes | E.g., publisher, administrator, user
Contextual | Jurisdiction | Yes | The jurisdiction within which the agent operates
Contextual | Corporate ID | Yes | Identifier assigned to the agent department or agency – e.g., 1234ID
Contextual | Corporate name | Yes | E.g., University of Sydney
Contextual | Person ID | Yes | Identifier assigned to an individual who performs some action – e.g., 1234ID-123
Contextual | Personal name | Yes | E.g., John Doe
Contextual | Section name | Yes | E.g., 'ITS'
Contextual | Position name | Yes | E.g., 'research analyst'
Contextual | Contact details | Yes | E.g., '12 Brown Street, Newtown NSW 2042, Australia'
Contextual | E-mails | Yes | E.g., [email protected]
Contextual | Related item ID | Yes | Unique identifier for the related record or information source – e.g., filename or metadata record
Contextual | Relation type | Yes | Category of relationship – e.g., subset of …
Contextual | Relation description | Yes | Additional description if the previous two do not provide enough information
Content | Title | – | The name given to the record – e.g., 'National Household Travel Survey 2005'
Content | Scheme type | No | Naming convention used to title the records
Content | Scheme name | No | Name of the standard used for naming
Content | Title words | No | The title
Content | Alternative | Yes | Alternative name by which the record is known
Content | Subject | – | Subject or topic that concisely or accurately describes the record's content
Content | Keyword | No | Highest level of a subject-weighted title
Content | Second-level keyword | Yes | Intermediate level of a subject-based title
Content | Third-level keyword | Yes | Third level of a subject-based title
Content | Description | No | Free-text description of the content and purpose of the data set or record
Content | Language | No | The language of the content or the record
Content | Coverage | – | The jurisdictional, spatial, and/or temporal characteristics of the content of the record
Content | Place name | Yes | Locations, regions, or geographical areas covered by and/or discussed in the content of the record
Content | Period name | Yes | Time period covered by and/or discussed in the record
History of use | Management history: event date/time | Yes | E.g., date edited
History of use | Event type | Yes | E.g., update records, add entries
History of use | Event description | Yes | E.g., replacing outliers with data from another source …
History of use | Use history: use date/time | Yes | E.g., access date
History of use | Use type | Yes | E.g., extraction
History of use | Use description | Yes | E.g., extraction of data for paper on …
History of use | Links to other documentation files | Yes | E.g., server2//data_documentation.docx
For databases | General data set characteristics: number of records | No | E.g., 23455
For databases | Data set classification | No | E.g., random sample
For databases | Data set classification description | No | E.g., random sample of 5 per cent of the population
For databases | Field identifiers: table name | Yes | E.g., survey.xlsx
For databases | Field name | Yes | E.g., workers
For databases | Field size | Yes | E.g., single, double
For databases | Field format | Yes | E.g., integer, real, Boolean
For databases | Decimal places | Yes | E.g., 3
For databases | Field description | Yes | E.g., 3
For databases | Primary key | Yes | E.g., 'Yes'/'No'

Source: National Archives of Australia (1999).

23.3 Archiving of data

All the preceding discussion of documentation assumes that data will not be used just once, but will be preserved for potential future use. The purpose of this section is to discuss the methods and procedures for archiving data. According to the Norwegian Social Science Data Services (1999) and McKemmish et al. (2001), archiving preserves data for future use, maintains the value of the data, and allows space to be freed on expensive storage media once the data are not in constant current use (Moore, 2000). However, given the costs normally associated with data collection, these valuable files should be stored on a safe medium, in a form that provides easy access to the data. Thus, data archiving is about the careful storage of data and the incorporation of relevant documentation of the data, as discussed in the previous section of this chapter.

Data archiving has often not been practised in the past, and many expensive data sets have been lost as a result. There has frequently been an issue with public agencies for which data were collected not perceiving it to be part of their role to archive the data and not including budget allocations in the data collection projects sufficient to permit comprehensive archiving to be undertaken. Public agencies have also often been reluctant to make their data readily available to the public, for fear that the public might use the data to generate arguments and positions counter to those of the public agency through use or misuse of the data. To achieve effective data archiving, the responsibility for archiving must be clearly assigned at the outset, and adequate funding must be provided for this task within the data collection project (Axhausen, 2000; Dahlgreen, Turner, and Garcia, 2002; ICPSR, 2002; CODATA, 2003; Sharp, 2003).

In some countries, such as Australia, some public agencies have seen data differently and have spent more effort on archiving from the perception that data should not be made freely available to the public, but that they are a marketable commodity and potential users can be charged for access to them. In Australia, this was a position long held and supported by the Australian Bureau of Statistics, and imitated by a number of public agencies as a consequence. In countries that have traditionally looked on data as a marketable commodity, there have generally been more significant efforts made to archive data.

In the United States, where data collected by public agencies have, in the past, generally been considered unsuitable for public release, and archiving has often not been


accomplished, things have recently changed with the establishment of the Archived Data User Services (ADUS) for data generated from intelligent transport systems (ITSs) (US Department of Transportation, 2004). This enables transport agencies in the United States to preserve data generated from ITSs, as well as making such data available for further study and analysis.

As an example of what has happened in the past, the storage of expensive household travel survey data has been far from adequate. For example, in the United States, some data sets with important travel information have been misplaced or irretrievably lost, in some cases with the data sets simply disappearing and in other cases with the data sets remaining on computer media that were no longer able to be read (such as seven-track or nine-track tape). In the early twenty-first century, freedom of information legislation in the country gave the public legal access to information that had previously been labelled as 'confidential'. As a result, the public now has stronger feelings about the ownership and acquisition of survey data by public agencies (Axhausen, 2000). For example, according to Sprehe (1997), users of census data in the United States said that they wanted the data access and dissemination systems of the Census Bureau to allow them to define their own data products online, access data documentation online via hypertext links, retrieve, display, order, fax, and download pre-packaged products, and be user-friendly and print on demand. This represented a major change in access to census products in the United States, and it is now the norm for access to census data.

There are a number of benefits that stem from providing ongoing access to any data, whether the data are social science, marketing, transport, or other types. These include further secondary analysis not conceived within the original data collection and analysis procedures and the application of new and emerging statistical procedures that can lead to better analysis. As a result, more information may be derived from the data (Axhausen, 2000; ICPSR, 2002). According to the Norwegian Social Science Data Services (1999), archiving preserves the data for future use and adds value to the preserved data by:

• requiring or enabling more intensive checking and cleaning of the data to ensure data integrity;
• eliminating software or system dependency, thereby ensuring that the data can be read at any time in the future;
• avoiding duplication of data collections, thereby reducing costs;
• developing comprehensive preservation metadata;
• developing methods to improve data collection efforts;
• allowing for the integration of data from various sources to produce user-friendly information products, such as CD-ROMs and online databases;
• enabling students to access the information for research training purposes; and
• cataloguing the data so that the data can be accessed through electronic search and retrieval systems.

Figure 23.1 shows an archival system developed by the University of Leeds, in the United Kingdom, for the preservation of digital data.


It is assumed in Figure 23.1 that all information packages are composed of data objects. There are four main information objects in the model.

(1) Content information: this is the information that requires preservation (data and documentation).

(2) Preservation description information (PDI): this comprises any information that enables the user to understand the content of the information over an indefinite period of time – i.e., the documentation. In essence, this is part of the content information.

(3) Packaging information: this is the information that binds all the other components into a specific medium (the data archive format and structure).

(4) Descriptive information – information that helps users locate and access information of potential interest. This is distinct from the PDI. This is the preservation metadata: documentation that describes the contents of the archive (CEDARS Project, 2002).

The model shown in Figure 23.1 is, essentially, a model for social science data. It does not include geospatial data and does not, therefore, deal with the complexities of archiving geospatial data. Moreover, the model assumes that data archiving is conducted by a central agency and not the agency that collected the data. In a number of fields, this will not be the case, and the agency that collected the data will be the one that archives the data. Axhausen (2000) suggests that data sets that are geospatial in nature will need to be more specialised and will also need to be able to support a number of different software products that may be used to analyse the data.

Figure 23.1 Open archival information system model: a producer submits information packages through an ingest function to archival storage and data management, which operate under administration and preservation planning; a consumer retrieves packages through an access function, guided by descriptive information.
Notes: SIP = submitted information package; AIP = archived information package; DIP = disseminated information package.
Source: CEDARS Project (2002).


McKemmish et al. (2001) also make the point that data archiving has become a more dynamic and complex process since the early efforts at describing data archiving systems, which is also believed to be one of the reasons that agencies have shown some reluctance to get into the business of data archiving.

When dealing with databases that include geospatial data, the data may often be structured into a relational database. This may then require more careful interpretation of the data, so that the correct relationships are maintained, which may raise problems of providing open access to potential users (Axhausen and Wigan, 2003). Normalising the database to remove these problems will add further to the cost of establishing the archive, with the result that an agency that did not include this cost in the initial estimate of project costs is likely to be reluctant to archive the data in a manner that makes the data fully and properly accessible to potential users. This will result in the data being effectively 'lost', leading to the conclusion that agencies should consider the costs of data archiving at the outset of any data collection process, to ensure that adequate funds are included to perform appropriate archiving, especially when geospatial data are involved (ICPSR, 2002). The following is a list of things to consider when archiving data (CODATA, 2003).

(1) How to describe the system.
(2) How to describe the property.
(3) A description of the text – how data were generated and analysed, and how variables were created and why.
(4) Descriptions of changes over time.
(5) How to save and store database management systems (size, version, proprietary software, etc.).
(6) How to ensure that all relevant documentation is incorporated in the archive.
(7) How should changes to databases be saved? Should the data be saved at every point in time, or should just the important results be archived?
(8) How should the operating systems, hardware, and storage media be preserved?
(9) Who pays for data preservation and storage?

Following on from these questions, the Interuniversity Consortium for Political and Social Research has proposed guidelines for the deposition of any social science database into an archive. These guidelines suggest the following.

• Databases should be in ASCII format, or they can be in portable SPSS or SAS files. However, it is essential to maintain the privacy of respondents. Therefore, it is recommended that any personal information be removed from the database before it is deposited.
• If the archive contains two or more related files, as is often the case for geospatial databases, then variables, such as a unit identifier, that link the files together should be included in each file.
• If the data were collected by some type of telephone interviewing process, then the archive should include the call history documentation in the DDI format (Extensible Markup Language – XML). Documentation should include a complete code book, as defined in the social sciences.
• The ICPSR also has a data deposit form that must be completed by the data producer. This form is equivalent to, though not as detailed as, the preservation metadata requirements described earlier in this chapter.

Given the guidelines proposed by the ICPSR and those contained in a variety of other literature on the topic of archiving, it is recommended that the following principles should be adopted to archive data from social science, geospatial, and various other data collection activities.

(1) The agency that sponsors the data collection effort should be the one that takes primary responsibility for archiving the data, developing the associated preservation metadata, and any relevant auxiliary data.

(2) Any relevant geospatial information, such as maps, should be included in the archive to aid in the interpretation of geospatial data. All stored data should be in ASCII format, so that problems are not encountered from rapid changes in software products (a brief illustration follows this list).

(3) Adequate documentation of the data should be created and archived with the data. It is particularly important that any changes made to the data are documented and that code books, documentation on sampling, weighting, and expansion procedures are all included as part of the data archive.

(4) All the documentation, preservation metadata, and other archives should use a structured document format with a document type definition (DTD), such as Extensible Markup Language (XML).

(5) The data that should be archived are the raw data. Provided that there is full documentation of the statistical tests on the data and modifications made, it is not necessary to store modified data.

(6) All details from recruitment procedures (such as face-to-face or telephone recruitment), call history files for interview-based data, and other materials describing the dispositions of sampled units should also be archived.
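As a brief illustration of principles (2) and (4), together with the ICPSR point above about including a linking variable in related files, the sketch below writes two related raw files in plain ASCII (CSV) form, linked by a common household identifier, and an accompanying XML documentation stub; all file names, fields, and values are hypothetical examples only:

import csv
import xml.etree.ElementTree as ET

# Hypothetical raw records for two related files, linked by hh_id
households = [{"hh_id": 101, "hh_size": 2}, {"hh_id": 102, "hh_size": 4}]
persons = [{"hh_id": 101, "person_no": 1, "age": 34},
           {"hh_id": 102, "person_no": 1, "age": 51}]

for filename, rows in (("households.csv", households), ("persons.csv", persons)):
    with open(filename, "w", newline="", encoding="ascii") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

# Minimal XML documentation stub recording the files and their linking variable
doc = ET.Element("archive_documentation")
for filename in ("households.csv", "persons.csv"):
    ET.SubElement(doc, "file", name=filename, link_variable="hh_id", format="ascii csv")
ET.ElementTree(doc).write("archive_documentation.xml",
                          encoding="utf-8", xml_declaration=True)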

It cannot be overemphasised that it is critically important to archive data from expensive survey processes. Too frequently, data have been lost in the past, and new surveys have had to be commissioned, because older data that would have been adequate are no longer available or cannot be accessed. At the same time, if archiving is undertaken with insufficient funds and insufficient attention to detail, it is likely to be no better than if the data had not been archived at all.


References

Abraham, M. D., H. L. Kaal, and P. D. A. Cohen (2002). Licit and Illicit Drug Use in the Netherlands, 2001, Amsterdam: Mets & Schilt.

ABS (2005). E-mail communication from Lee Johnson, Census Public Relations, ABS, 10 March. (2008a). ‘2006 Census tables: Australia’, ABS, Canberra (available at www.abs.gov.au; accessed

26 April 2008). (2008b). Adult Literacy and Life Skills Survey: Summary Results, 2006 (Reissue), Canberra, ABS

(available at www.absa.gov.au; accessed 25 September 2009).Alsnih, R. (2006). ‘Characteristics of Web based surveys and applications in travel research’, in

P. R. Stopher and C. C. Stecher (eds.), Travel Survey Methods: Quality and Future Directions, Oxford: Elsevier, 569–92.

Alsnih, R., J. Rose, and P. R. Stopher (2005). ‘Developing a decision-support system for emer-gency evacuation: case study of bush fires’, paper presented at the 84th annual meeting of the Transportation Research Board, Washington, DC, 13 January.

AAPOR (2004). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 3rd edn, Lenexa, KA: AAPOR (available at www.aapor.org/pdfs/standarddefs2004.pdf; accessed 3 December 2004).

(2008). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 5th edn, Lenexa, KA: AAPOR (available at www.aapor.org/uploads/Standard_Definitions_07_08_Final.pdf; accessed 14 August 2009).

Ampt, E. S. (2000). ‘Understanding the people we survey’, in Transportation Research Board, Transport Surveys: Raising the Standard, Washington, DC: Transportation Research Board, II-G/1–II-G/13.

(2003). ‘Respondent burden’, in P. R. Stopher and P. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Elsevier Press, 507–21.

Ampt, E. S. and P. R. Stopher (2005). ‘Mixed method data collection in travel surveys: challenges and opportunities’, paper presented at the 28th Australasian transport research forum, Sydney, 28 September (available at www.patrec.org/atrf/papers/2005/Ampt%20&%20Stopher%20(2005).pdf; accessed 24 January 2006).

ASA (1998). ‘What are focus groups?’, ASA, Alexandria, VA; available at www.surveyguy.com/docs/surveyfocus.pdf (accessed 25 January 2006).

Ames, L. (1998). ‘The view from Peekskill: tending the flame of a motivator’, New York Times, 2 August; available at www.nytimes.com/1998/08/02/nyregion/the-view-from-peekskill-tending-the-flame-of-a-motivator.html?n=Top%2FNews%2FScience%2FTopics%2FResearch (accessed 29 April 2010).

Armstrong, T. (1993). Seven Kinds of Smart: Identifying and Developing Your Many Intelligences, New York: Penguin Books.


Axhausen, K. W. (2000). ‘Presenting and preserving travel data’, in Transportation Research Board, Transport Surveys: Raising the Standard: Proceedings of an International Conference on Transport Survey Quality and Innovation, May 24–30, 1997, Grainau, Germany, Transportation Research Circular no. E-C008, Washington, DC: Transportation Research Board, II-F 1-19.

Axhausen, K. W. and M. R. Wigan (2003). ‘The public use of travel surveys: the metadata perspec-tive’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 605–28.

B2B International (2007). ‘Postal surveys in industrial marketing research’, B2B International, Manchester (available at www.b2binternational.com/library/articles/article07.php; accessed 27 May 2010).

Barbour, R. S. and J. Kitzinger (1998). Developing Focus Group Research, London: Sage.Battellino, H. and J. Peachman (2003). ‘The joys and tribulations of a continuous survey’, in P. R.

Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 49–68.

Bech, M. and M. B. Kristensen (2009). ‘Differential response rates in postal and Web-based surveys among older respondents’, Survey Research Methods, 3 (1), 1–6.

Bennett, D. J. (1998). Randomness, Harvard University Press.Berntson, D. A., R. J. Brown, J. J. Ehlenz, B. J. Friedrichs, S. Hohnstadt, K. K. Lynk, J. W.

Marrson, L. L. Plahn, and P. A. Weimer. (2005). ‘History of statistics and probability’, University of Minnesota Morris, www.mrs.umn.edu/~sungurea/introstat/history/indexhistory.shtml (accessed 4 January 2005).

Biemer, P. P. and M. W. Link (2006). ‘A latent class call-back model for nonresponse’, paper presented at the European conference ‘Quality in survey statistics’, Cardiff, 26 April (available at www.statistics.gov.uk/events/q2006/downloads/WedSession15–21.pdf; accessed 31 August 2006).

Biemer, P. P. and L. E. Lyberg (2003). Introduction to Survey Quality, New York: John Wiley.Biemer, P. P. (1988). ‘Measuring data quality’, in R. M. Groves, P. P. Beimer, L. E. Lyberg, J. T.

Massey, W. L. Nicholls, and J. Waksberg (eds.), Telephone Survey Methodology, New York: John Wiley, 273–82.

Bishop, G. F., H.-J. Hippler, N. Schwarz, and F. Strack (1988). ‘A comparison of response effects in self-administered and telephone surveys’, in R. M. Groves, P. P. Beimer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg (eds.), Telephone Survey Methodology, New York: John Wiley, 321–40.

Black, T. and A. Safir (2000). ‘Assessing nonresponse bias in the national survey of America’s families’, in ASA, 2000 Proceedings of the Section on Survey Research Methods, American Statistical Association, Alexandria, VA: ASA, 816–21 (available at www.amstat.org/sections/srms/Proceedings/papers/2000_139.pdf; accessed 31 August 2006).

Bliemer, M. C. J. and J. M. Rose (2005). ‘Efficiency and sample size requirements for stated choice studies’, Working Paper no. ITLS-WP-05–08, Institute of Transport and Logistics Studies, University of Sydney.

Bloor, M., J. Frankland, M. Thomas and K. Robson (2000). Focus Groups in Social Research, London: Sage.

Blumberg, S. J. and S. V. Luke (2007). ‘Wireless substitution: early release of estimates based on data from the National Health Interview Survey, July–December 2006’, National Center for Health Statistics, Atlanta (available at www.cdc.gov/nchs/nhis.htm; accessed 14 May 2007).

Bonnel, P. (2003). ‘Postal, telephone, and face-to-face surveys: how comparable are they?’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 215–37.

Bonnel, P. and J.-L. Madre (2006). ‘New technology: Web-based’, in P. R. Stopher and C. C. Stecher (eds.), Travel Survey Methods: Quality and Future Directions, Oxford: Elsevier, 593–603.


Brace, I. (2004). Questionnaire Design: How to Plan, Structure, and Write Survey Material for Effective Market Research, London: Kogan Page.

Bradburn, N. M. (1978). ‘Respondent burden’, paper presented at the annual meeting of the ASA, San Diego, 14 August (available at www.amstat.org/sections/SRMS/proceedings/papers/1978_007.pdf; accessed 21 February 2008).

Bradburn, N. M. and S. Sudman (1979). Improving Interview Method and Questionnaire Design, San Francisco: Jossey-Bass.

Braunsberger, K., H. Wybenga, and R. Gates (2007). ‘A comparison of reliability between telephone and Web-based surveys’, Journal of Business Research, 60 (7), 758–64.

Bright, P. and B. Lenge (2009). ‘ “To scan or not to scan” – that is the question’, Automated Data Entry in the Pacific, June, 1–7 (available at www.spc.int/sdp/index.php?option=com_docman&task=doc_view&gid=167; accessed 7 July 2010).

Brög, W. (2000). ‘Raising the standard! Transport survey quality and innovation’, in Transportation Research Board, Transport Surveys: Raising the Standard: Proceedings of an International Conference on Transport Survey Quality and Innovation, May 24–30, 1997, Grainau, Germany, Transportation Research Circular no. E-C008, Washington, DC: Transportation Research Board, I-A 1-9.

BTS (2002). Omnibus Survey: Household Survey Results, August 2002, Washington, DC: BTS, US Department of Transportation (available at www.bts.gov/omnibus/household/2002/july/month_specific_information.pdf; accessed 28 October 2003).

Buttram, J. L. (1990). ‘Focus groups: a starting point for needs assessment’, Evaluation Practice, 11 (3), 207–12.

Cameron, J. (2000). ‘Focussing on the focus group’, in I. Hay (ed.), Qualitative Research Methods in Human Geography, Melbourne: Oxford University Press, 83–102.

Canadian Outcomes Research Institute (2008). ‘Glossary: respondent burden’, www.hmrp.net/canadi-anoutcomesinstitute/glossary/displayresearchterm.asp?ID=85&NAME=Resources&PATH=../ Resources.htm (accessed 8 February 2008).

CASRO (1982). On the Definition of Response Rates: A Special Report of the CASRO Task Force on Completion Rates, Port Jefferson, NY: CASRO (available at www.casro.org: accessed 11 June 2002).

CEDARS Project (2002). ‘Cedars guide to preservation metadata’, University of Leeds (available at www.leeds.ac.uk/cedars/guideto/metadata: accessed 26 August 2003).

CIA (2008). The World Fact Book, Washington, DC: Government Printing Office (available at www.cia.gov/library/publications/the-world-factbook/geos/xx.html: accessed 26 April 2008).

Chaitin, G. J. (1975). ‘Randomness and mathematical proof’, Scientific American, 232 (5), 47–52.Christian, L. M., D. A. Dillman, and J. D. Smyth (2006). ‘The effects of mode and format on

answers to scalar questions in telephone and web surveys’, draft version of paper presented at the 2nd international conference on ‘Telephone survey methodology’, Miami, January 12.

Chung, J. and G. S. Monroe (2003). ‘Exploring social desirability bias’, Journal of Business Ethics, 44 (4), 291–302.

Church, A. H. (1993). ‘Estimating the effect of incentives on mail survey response rates: a meta-analysis’, Public Opinion Quarterly, 57 (1), 62–79.

Cialdini, R. B. (1988). Influence: Science and Practice, Glenview, IL: Scott Foresman.Cialdini, R. B., J. E. Vincent, S. K. Lewis, J. Catalan, D. Wheeler, and B. Darby (1975). ‘Reciprocal

concessions procedure for inducing compliance: the door-in-the-face technique’, Journal of Personality and Social Psychology, 31 (2), 206–15.

Cochran, W. G. (1963). Sampling Techniques, New York: John Wiley.CODATA (2003). ‘Working group on archiving scientific data’, CODATA, Paris (available at www.

codata.org: accessed 18 August 2003).


Coderre, F., A. Mathieu, and N. St-Laurent (2004). ‘Comparison of the quality of qualitative data obtained through telephone, postal, and email surveys’, International Journal of Market Research, 46 (3), 347–57.

Collins, M., W. Sykes, P. Wilson, and N. Blackshaw (1988). ‘Nonresponse: the UK experience’, in R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg (eds.), Telephone Survey Methodology, New York: John Wiley, 213–32.

Cover, T. M. (1974). ‘Universal gambling schemes and complexity measures of Kolmogorov and Chaitin’, Technical Report no. 12, Statistics Department, Stanford University, CA.

Cromley, R. G. and P. McGlamery (2002). ‘Integrating spatial metadata and data dissemination over the internet’, IASSIST Quarterly, 26 (1), 13–16.

Dahlgreen, J., S. Turner, and R. C. Garcia (2002). ‘Collecting, processing, archiving and disseminat-ing traffic data to measure and improve traffic performance’, paper submitted for presentation at the 2002 Transportation Research Board meeting, (available at www.its.berkeley.edu/confer-ences/trb/2002.htm; accessed 30 June 2003).

Dal Grande, E., A. Taylor, and D. Wilson (2005). ‘Is there a difference in health estimates between people with listed and unlisted telephone numbers?’ Australian and New Zealand Journal of Public Health, 29 (5), 448–56.

De Leeuw, E. D. (1992). Data Quality in Mail, Telephone, and Face-to-Face Surveys, Amsterdam: TT Publications.

De Leeuw, E. D. and J. van der Zouwen (1988). ‘Data quality in telephone and face-to-face surveys: a comparative meta-analysis’, in R. M. Groves, P. P. Beimer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg (eds.), Telephone Survey Methodology, New York: John Wiley, 283–99.

De Leeuw, E. D. and W. de Heer (2002). ‘Trends in household survey nonresponse: a longitudinal and international comparison’, in R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little (eds.), Survey Nonresponse, New York: John Wiley, 41–54.

DeMaio, T. J. (1980). ‘Refusals: who, where and why’, Public Opinion Quarterly, 44 (2), 223–33.Dempster, A. P., N. M. Laird, and D. B. Rubin (1977). ‘Maximum likelihood from incomplete data

via the EM algorithm’, Journal of the Royal Statistical Society, Series B (Methodological), 39(1), 1–38.

Dennis, J. M., N. A. Mathiowetz, C. Saulsberry, M. Frankel, K. P. Srinath, A.-S. Rodén, P. J. Smith, and R. A. Wright (1999). ‘Analysis of RDD interviews by the number of call attempts: the National Immunization Survey’, paper presented at the 54th annual conference of AAPOR, St Pete Beach, FL, 16 May.

Dillman, D. A. (1978). Mail and Telephone Surveys: The Total Design Method, New York: John Wiley.

(2000). Mail and Internet Surveys: The Tailored Design Method, New York: John Wiley. (2002). ‘Navigating the rapids of change: some observations on survey methodology in the early

twenty-first century’, Public Opinion Quarterly, 66 (3), 473–94.Dillman, D. A., G. Phelps, R. Tortora, K. Swift, J. Kohrell, and J. Berck (2001). ‘Response rate

and measurement differences in mixed mode surveys using mail, telephone, interactive voice response, and the internet’, paper presented at the 56th annual conference of AAPOR, Montreal, 19 May (available at http://survey.sesrc.wsu.edu/dillman/papers/Mixed%20Mode%20ppr%20_with%20Gallup_%20POQ.pdf; accessed 31 January 2006).

Dillman, D. A., E. Singer, J. R. Clark, and J. B. Treat (1996). ‘Effects of benefits appeals, mandatory appeals and variations in statements of confidentiality on completion rates for census question-naires’, Public Opinion Quarterly, 60 (3), 376–89.

Dillman, D. A. and J. Tarnai (1991). ‘Mode effects of cognitively designed recall questions: a com-parison of answers to telephone and mail surveys’, in P. P. Beimer, R. M. Groves, L. E. Lyberg, N. A. Mathiowetz, and S. Sudman (eds.), Measurement Errors in Surveys, New York: John Wiley, 73–93.


Dohrenwend, B. S. (1970). 'An experimental study of payments to respondents', Public Opinion Quarterly, 34 (4), 621–4.

Drew, J. D., G. H. Choudhry and L. A. Hunter (1988). ‘Nonresponse issues in government tele-phone surveys’, in R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg (eds.), Telephone Survey Methodology, New York: John Wiley, 233–46.

Dublin Core (2004). ‘Dublin Core Metadata Initiative’, http://dublincore.org (accessed 2 July 2004).

Elspett-Koeppen, R., D. Northrup, A. Noack, and K. Moran (2004). ‘Survey methodology, Sample representativeness, and accurate reporting of population health statistics’, Bulletin no. 40, Institute for Social Research, York University, Toronto (available at www.isr.yorku.ca/bulle-tins/b31to40/PopHealthStats.pdf; accessed 31 August 2006).

Ettema, D., H. Timmermans, and L. van Veghel (1996). ‘Effects of data collection methods in travel and activity research’, research report, European Institute of Retailing and Services Studies, Eindhoven University of Technology (available at www.bwk.tue.nl/urb/eirass/report.htm: accessed 22 January 2003).

Ewart, P. J., J. S. Ford, and C.-Y. Lin (1982). Applied Managerial Statistics, Englewood Cliffs, NJ: Prentice Hall.

Fern, E.F. (2001). Advanced Focus Group Research, London: Sage.Fisher, R. J. (1993). ‘Social desirability bias and the validity of indirect questioning’, Journal of

Consumer Research, 20 (2), 303–15.Forrest, T. and D. Pearson (2005). ‘A comparison of trip determination methods in GPS-enhanced

household travel surveys’, paper presented at the 84th annual meeting of the Transportation Research Board, Washington, DC 11 January.

Fowler, F. J., A. M. Roman, and Z. X. Di (1998). ‘Mode effect in a survey of Medicare prostate sur-gery patients’, Public Opinion Quarterly, 62 (1), 29–46.

Frazier, L. M., V. A. Miller, D. V. Horbelt, J. E. Delmore, B. E. Miller, and A. M. Paschal (2010). ‘Comparison of focus groups on cancer and employment conducted face to face or by tele-phone’, Qualitative Health Research, 20 (5), 617–27.

Freedman, D. and P. Diaconis (1981). ‘On the histogram as a density estimator: L2 theory’, Probability Theory and Related Fields, 57 (4), 453–76.

Freeland, E. P. and P. Furia (1999). ‘Telephone reminder calls and mail survey response rates’, paper presented at the international conference on survey nonresponse, Portland, OR, 28 October (available at www.jpsm.umd.edu; accessed 30 January 2003).

Freeman, A. M. (2003). The Measurement of Environmental and Resource Values: Theory and Methods, 2nd edn., Washington, DC: RFF Press.

Fricker, S., M. Galesic, R. Tourangeau, and T. Yan (2005). ‘An experimental comparison of Web and telephone surveys’, Public Opinion Quarterly, 69 (3), 370–92.

FTC (2003). ‘National do not call registry’, FTC, www.ftc.gov/bcp/edu/micrositesdonotcall/index.html (accessed 24 January 2006).

Furness, K. P. (1965). ‘Time Function Interaction’, Traffic Engineering and Control, 7 (7), 458–60.

Gibbs, A. (1997). ‘Focus groups’, Social Research Update, no. 19, Guildford: University of Surrey; available from www.soc.surrey.ac.uk/sru/SRU19.html (accessed 25 January 2006).

Gillman, D. W., M. V. Appel, and W. P. LaPlant (1996). ‘Statistical metadata management: a stan-dards-based approach’, in ASA, Proceedings of the 1996 Joint Statistical Meeting: Section on Survey Research Methods, Alexandria, VA: ASA, 55–64 (available at www.amstat.org/sec-tions/SRMS/proceedings/paper/1996.pdf; accessed 16 October, 2009).

Goldenberg, L., C. Stecher, and K. Cervenka (1995). ‘Choosing a household-based survey method: results of the Dallas–Fort Worth pretest’, paper presented at the 5th national conference ‘Transportation planning methods applications’, Seattle, 20 April.


Goulias, K. G. (2000). ‘Surveys using multiple approaches’, in Transportation Research Board, Transport Surveys: Raising the Standard: Proceedings of an International Conference on Transport Survey Quality and Innovation, May 24–30, 1997, Grainau, Germany, Transportation Research Circular no. E-C008, Washington, DC: Transportation Research Board, II-A 1-9.

Goyder, J. (1987). The Silent Minority: Nonrespondents on Sample Surveys, Boulder, CO: Westview Press.

Greaves, S. P. (2000). ‘Simulating household travel survey data for metropolitan areas’, unpub-lished PhD dissertation, department of civil and environmental engineering, Louisiana State University, Baton Rouge.

Groves, R. M. (2004). Survey Errors and Survey Costs, New York: Wiley Interscience.Groves, R. M., P. P. Beimer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg (eds.) (1988).

Telephone Survey Methodology, New York: John Wiley.Groves, R. M. and M. P. Couper (1998). Nonresponse in Household Interview Surveys, New York:

John Wiley & Sons.Groves, R. M., F. J. Fowler, M. P. Couper, J. M. Lepowski, E. Singer, and R. Tourangeau (2009).

Survey Methodology, Hoboken, NJ: John Wiley.Groves, R. M. and L. E. Lyberg (1988). ‘An overview of nonresponse issues in telephone surveys’, in

R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg (eds.), Telephone Survey Methodology, New York: John Wiley, 191–211.

Hansen, M. H., W. N. Hurwitz, and W. G. Madow (1953). Sample Survey Methods and Theory, Vol. I, New York: John Wiley.

Harpuder, B. E. and J. A. Stec (1999). ‘Achieving an optimum number of call-back attempts: cost sav-ings versus nonresponse error due to non-contacts in RDD surveys’, in ASA, 1999 Proceedings of the Section on Survey Research Methods, American Statistical Association, Alexandria, VA: ASA, 913–18.

Hensher, D. A. (2004). ‘Identifying the influence of stated choice design dimensionality on will-ingness to pay for travel time savings’, Journal of Transport Economics and Policy, 38 (3), 425–46.

(2006). ‘How do respondents process stated choice experiments? Attribute consideration under varying information load’, Journal of Applied Econometrics, 21 (6), 861–78.

Hensher, D. A. and W. H. Greene (2003). ‘Mixed logit models: state of practice’, Transportation, 30 (2), 133–76.

Holbrook, A., M. Green, and J. Krosnick (2003). ‘Telephone versus face-to-face interviewing of national probability samples with long questionnaires’, Public Opinion Quarterly, 67 (1), 79–125.

Hubbard, R. and E. L. Little (1988). ‘Promised contributions to charity and mail survey responses’, Public Opinion Quarterly, 52 (2), 223–30.

Hyndman, R. J. (1995). ‘The problem with Sturges’ rule for constructing histograms’, available at www-personal.buseco.monash.edu.au/~hyndman/papers/sturges.pdf (accessed 25 January 2005).

ICC (2007). ICC/ESOMAR International Code on Market and Social Research, Paris: ICC.

ICPSR (2002). ‘Guide to social science data preservation and archiving’, ICPSR, University of Michigan, Ann Arbor (available at www.icpsr.umich.edu/access/dpm.html; accessed 3 November 2003).

Internet World Stats (2008). ‘Internet usage statistics: the internet big picture’, Internet World Stats, Bogota, www.internetworldstats.com/stats.htm (accessed 19 December 2008).

Israel, G. D. and C. L. Taylor (1990). ‘Can response order bias evaluations?’, Evaluation and Program Planning, 13 (4), 365–71.

Jackson, R. (2003). ‘Some startling statistics’, Erma Bombeck Writers’ Workshop, University of Dayton, www.humorwriters.org/startlingstats.html (accessed 26 April 2008).

Jones, R. and N. Pitt (1999). ‘Health surveys in the workplace: comparison of postal, email and World Wide Web methods’, Occupational Medicine, 49 (8), 556–8.

Kahn, R. L. and C. F. Cannell (1957). The Dynamics of Interviewing: Theory, Technique, and Cases, New York: John Wiley.

Kalfs, N. and H. van Evert (2003). ‘Nonresponse in travel surveys’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Elsevier Press, 567–86.

Kalsbeek, W. D., S. L. Botman, J. T. Massey, and P.-W. Liu (1994). ‘Cost-efficiency and the number of allowable call attempts in the National Health Interview Survey’, Journal of Official Statistics, 10 (2), 133–52.

Kam, B. H. and J. M. Morris (1999). ‘Response patterns in travel surveys: the VATS experience’, in Australian Transport Research Forum, Proceedings of the 23rd Australian Transport Research Forum, vol. II, Canberra: Australian Government Publishing Service, 591–608 (available at www.trc.rmit.edu.au/publications/papers/responsepatterns.pdf: accessed 10 February 2003).

Kaufman, M. T. (2003). ‘Robert K. Merton, versatile sociologist and father of the focus group, dies at 92’, New York Times, 24 February; available at www.nytimes.com/2003/02/24/nyregion/robert-k-merton-versatile-sociologist-and-father-of-the-focus-group-dies-at-92.html (accessed 29 April 2010).

Keeter, S., A. Miller, A. Kohut, R. M. Groves, and S. Presser (2000). ‘Consequences of reducing nonresponse in a national telephone survey’, Public Opinion Quarterly, 64 (2), 125–48.

Kempf, A. M. and P. L. Remington (2007). ‘New challenges for telephone survey research in the twenty-first century’, Annual Review of Public Health, 28, 113–26.

Kim, H., J. Li, S. Roodman, A. Sen, S. Sööt, and E. Christopher (1993). ‘Factoring household travel surveys’, Transportation Research Record, 1412, 17–22.

Kirsch, I. S., A. Jungeblut, L. Jenkins, and A. Kolstad (2002). Adult Literacy in America: A First Look at the Findings of the National Adult Literacy Survey, 3rd edn. Washington, DC: National Center for Education Statistics, US Department of Education.

Kish, L. (1965). Survey Sampling, New York: John Wiley.

Kochanek, K. M., D. P. Camburn, A.-S. Roden, M. Sawyer, C. D. Wolters, J. J. Massey, E. R. Zell, and P. L. Y. H. Ching (1995). ‘Answering machine messages as tools for a random-digit dialing telephone survey’, paper presented at the 50th annual conference of AAPOR, Ft Lauderdale, FL, 18 May.

Kristal, A. R., E. White, J. R. Davis, G. Corycell, T. Raghunathan, S. Kinne, and T. K. Lin (1993). ‘Effects of enhanced calling efforts on response rates, estimates of health behavior, and costs in a telephone health survey using random-digit dialing’, Public Health Reports, 108 (3), 372–9.

Kropf, M., J. Scheib, and J. Blair (1999). ‘The effect of alternative incentives on cooperation and refusal conversion in a telephone survey’, unpublished paper, Survey Research Center, University of Maryland, College Park.

Krueger, R. A. (1994). Focus Groups: A Practical Guide for Applied Research, 2nd edn. Thousand Oaks, CA: Sage.

(2000). Focus Groups: A Practical Guide for Applied Research, 3rd edn., Thousand Oaks, CA: Sage.

Krueger, R. A. and M. A. Casey (2000). Focus Groups: A Practical Guide for Applied Research, 3rd edn., London: Sage.

Kurth, D. L., J. L. Coil, and M. J. Brown (2001). ‘Assessment of quick-refusal and no-contact nonresponse in household travel surveys’, Transportation Research Record, 1768, 114–24.

Lahaut, V. M. C. J., H. A. M. Jansen, D. van de Mheen, H. F. L. Garretsen, J. E. E. Verdurmen, and A. van Dijk (2003). ‘Estimating nonresponse bias in a survey on alcohol consumption: comparison of response waves’, Alcohol and Alcoholism, 38 (2), 128–34.

Lee-Gosselin, M. E. H. (2003). ‘Can you get there from here? A viewpoint on stated response survey innovation and quality’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 331–43.

Leece, P., M. Bhandari, S. Sprague, M. F. Swiontkowski, E. H. Schemitsch, P. Tornetta, P. J. Devereaux, and G. H. Guyatt (2004). ‘Internet versus mailed questionnaires: a randomized comparison’, Journal of Medical Internet Research, 6 (3), e30.

Leighton, V. (2002). ‘Developing a new data archive in a time of maturing standards’, IASSIST Quarterly, 26 (1), 5–9.

Lind, K., T. Johnson, V. Parker, and S. Gillespie (1998). ‘Telephone non-response: a factorial experiment of techniques to improve telephone response rates’, in ASA, 1998 Proceedings of the Section on Survey Research Methods, American Statistical Association, Alexandria, VA: ASA, 848–50 (available at www.amstat.org/sections/srms/Proceedings/papers/1998_145.pdf; accessed 31 August 2006).

Liu, X. (2007). ‘Comparing the quality of the Master Address File and the current demographic household surveys’ multiple frames’, Demographic Statistical Methods Division, US Census Bureau, Washington, DC, www.fcsm.gov/07papers/Liu.II-C.pdf (accessed on 3 April 2008).

Louviere, J. J., D. A. Hensher, and J. D. Swait (2000). Stated Choice Methods: Analysis and Applications, Cambridge University Press.

Lynn, P. and P. Clarke (2001). ‘Separating refusal bias and non-contact bias: evidence from UK national surveys’, Working Paper no. 2001–24, Institute for Social and Economic Research, London.

Madden, M. and S. Jones (2008). Networked Workers, Washington, DC: Pew Research Center (available at www.pewinternet.org/pdfs/PIP_Networked_Workers_FINAL.pdf; accessed 19 December 2008).

Manski, C. F. and S. R. Lerman (1977). ‘The estimation of choice probabilities from choice-based samples’, Econometrica, 45 (8), 1977–88.

McDonald, H. and S. Adam (2003). ‘A comparison of online and postal data collection methods in marketing research’, Journal of Marketing Intelligence and Planning, 21 (2), 85–95.

McGuckin, N., S. Liss, and M. Keyes (2001). ‘Hang-ups: looking at nonresponse in telephone surveys’, paper presented at the international conference on transport survey quality and innovation ‘How to recognize it and how to achieve it’, Kruger Park, South Africa, 10 August.

McLachlan, G. J. and T. Krishnan (1997). The EM Algorithm and Extensions, New York: John Wiley.

McKemmish, S., G. Acland, N. Ward, and B. Reed (2001). ‘Describing records in context in the continuum: the Australian Recordkeeping Metadata Schema’, Monash University, Melbourne (available at www.infotech.monash.edu.au/research/groups/rcrg/publications/archiv01.html; accessed 29 August 2003).

McNamara, C. (1999). ‘Basics of conducting focus groups’, Authenticity Consulting, www.managementhelp.org/evaluatn/focusgrp.htm (accessed 25 January 2006).

Medical Outcomes Trust (1995). ‘SAC instrument review criteria’, Medical Outcomes Trust Bulletin, 3 (4), 1–4.

Michaud, S., D. Dolson, D. Adams, and M. Renaud (1995). ‘Combining administrative and survey data to reduce respondent burden in longitudinal surveys’, in ASA, 1995 Proceedings of the Section on Survey Research Methods, American Statistical Association, Alexandria, VA: ASA, 11–20.

Moore, F. (2000). ‘Enterprise storage: profiling the storage hierarchy – where should data reside?’, Computer Technology Review, 20 (3), 24–5.

Morgan, D. L. (1988). Focus Groups as Qualitative Research, London: Sage.

	(1996). ‘Focus groups’, Annual Review of Sociology, 22, 129–52.

Morgan, D. L. and R. A. Krueger (1993). ‘When to use focus groups and why’, in D. L. Morgan (ed.), Successful Focus Groups: Advancing the State of the Art, Thousand Oaks, CA: Sage, 3–20.

Morris, J. and T. Adler (2003). ‘Mixed mode surveys’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 239–52.

MRA (2007). The Code of Marketing Research Standards, Glastonbury, CT: Marketing Research Association; available at www.mra-net.org/pdf/images/documents/expanded_code.pdf (accessed 20 April 2005).

Murakami, E. and C. Ulberg (1997). ‘The Puget Sound Transportation Panel’, in T. F. Golob, R. Kitamura, and L. Long (eds.), Panels for Transportation Planning: Methods and Applications, Norwell, MA: Kluwer Academic Press, 159–92.

National Archives of Australia (1999). ‘Archives Advice 41: recordkeeping metadata standard for Commonwealth agencies’, National Archives of Australia, Canberra (available at www.naa.gov.au/recordkeeping/rkpubs/advices/index.html; accessed 2 September 2003).

Nederhof, A. J. (1983). ‘The effects of material incentives in mail surveys: two studies’, Public Opinion Quarterly, 47 (1), 103–11.

Norheim, B. (2003). ‘Stated preference surveys: do we have confidence tests of the results?’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 347–63.

Norwegian Social Science Data Services (1999). ‘Providing global access to distributed data through metadata standardization: the parallel stories of Nesstar and the DDI’, paper presented at the Conference of European Statisticians, Neuchâtel, 14 June (available at www.nesstar.org; accessed 20 August 2003).

O’Connor, J. J. and E. F. Robertson (1997). ‘Frank Yates’, www-history.mcs.st-andrews.ac.uk/Mathematicians/Yates.html (accessed 4 January 2005).

Oldendick, R. W. and M. W. Link (1994). ‘The answering machine generation’, Public Opinion Quarterly, 58 (2), 264–73.

OECD (2004). Trends in International Migration, Paris: OECD.

Orne, M. T. (1962). ‘On the social psychology of the psychological experiment: with particular reference to demand characteristics and their implications’, American Psychologist, 17 (11), 776–83.

Ortúzar, J. de D. and M. E. H. Lee-Gosselin (2003). ‘From respondent burden to respondent delight’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 523–8.

Oz, M. E., D. H. Winston, N. Desmond, G. N. Herlitz, G. H. Raniella, and N. L. Gingrich (2006). ‘The Hippocratic oath, V2.0: using focus groups in health care policy’, American Journal of Health Studies, 21 (3/4), 189–98.

Pettersson, H. and B. Sisouphanthong (2005). ‘Cost model for an income and expenditure survey’, in United Nations, Household Sample Surveys in Developing and Transition Countries, New York: United Nations, 267–77.

PADI (2007). ‘Preservation metadata, Preserving Access to Digital Information (PADI)’, National Library of Australia, Canberra (available at www.nla.gov.au/padi/topics/32.html; accessed 16 October 2009).

Porst, R. and C. von Briel (1995). ‘Wären Sie vielleicht bereit, sich gegebenenfalls noch einmal befragen zu lassen? Oder: Die Gründe für die Teilnahme an Panelbefragungen’ [‘Would you perhaps be willing to be surveyed again? Or: the reasons for taking part in panel surveys’], ZUMA-Arbeitsbericht no. 95/04, Mannheim: Zentrum für Umfragen, Methoden und Analysen.

Powell, R. A. and H. M. Single (1996). ‘Focus groups’, International Journal of Quality in Health Care, 8 (5), 499–504.

Pratt, J. H. (2003). ‘Survey instrument design’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 137–50.

Presser, S. (1989). ‘Collection and design issues: discussion’, in D. Kasprzyk, G. Duncan, G. Kalton, and M. P. Singh (eds.), Panel Surveys, New York: John Wiley, 75–59.

RAND Corporation (1955). A Million Random Digits, New York: Free Press.

Rasinski, K. A., D. Mingay, and N. M. Bradburn (1994). ‘Do respondents really “Mark all that apply” on self-administered questions?’, Public Opinion Quarterly, 58 (3), 400–8.

Richardson, A. J. (2000). ‘Behavioural mechanisms of nonresponse in mailback travel surveys’, paper presented at the 79th annual meeting of the Transportation Research Board, Washington, DC, 10 January.

Richardson, A. J. and E. S. Ampt (1993). ‘Southeast Queensland Household Travel Survey: final report 4: all study areas’, Working Paper no. TWP 93/6, Transport Research Centre, Melbourne.

Richardson, A. J., E. S. Ampt, and A. H. Meyburg (1995). Survey Methods for Transport Planning, Melbourne: Eucalyptus Press.

Richardson, A. J. and A. H. Meyburg (2003). ‘Definitions of unit nonresponse in travel surveys’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 587–604.

Richardson, A. J. and A. Pisarski (1997). ‘Guidelines for quality assurance in travel and activity surveys’, paper presented at the international conference on transport survey quality and innovation ‘Transport surveys: raising the standard’, Grainau, Germany, 26 May.

Rodrigues, N. M. C. and K. I. Beltrão (2005). ‘A split questionnaire survey design applied to the Brazilian census’, at http://iussp2005.princeton.edu/download.aspx?submissionId=51631 (accessed 28 February 2005).

Rogers, J. D. and D. Godard (2005). Trust and Confidence in the California Courts 2005: A Survey of the Public and Attorneys, part II, Executive Summary of Methodology with Survey Instruments, San Francisco: Judicial Council of California (available at www.courtinfo.ca.gov/reference/4_37pubtrust.htm).

Rose, J. M., M. C. J. Bliemer, D. A. Hensher, and A. T. Collins (2008). ‘Designing efficient stated choice experiments in the presence of reference alternatives’, Transportation Research Part B: Methodological, 42 (4), 395–406.

Rosnow, R. L. (2002). ‘The nature and role of demand characteristics in scientific enquiry’, Prevention and Treatment, 5, article 37.

Rosnow, R. L. and E. J. Robinson (1967). Experiments in Persuasion, New York: Academic Press.

Rubin, D. B. (1987). Multiple Imputation for Nonresponse in Surveys, New York: John Wiley.

Rushkoff, D. (2005). Get Back in the Box: Innovation from the Inside Out, New York: Collins.

Ryu, E., M. P. Couper, and R. W. Marans (2005). ‘Survey incentives: cash vs. in-kind; face-to-face vs. mail; response rate vs. nonresponse error’, International Journal of Public Opinion Research, 18 (1), 89–106.

Sammer, G. (2003). ‘Ensuring quality in stated response surveys’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 365–75.

Sangster, R. L. and N. J. Meekins (2004). ‘Modeling the likelihood of interviews and refusals: using call history data to improve efficiency of effort in a national RDD survey’, in ASA, Proceedings of the 2004 Joint Statistical Meeting: Section on Survey Research Methods, Alexandria, VA: ASA, 4311–17 (available at www.bls.gov/ore/abstract/st/st040090.htm; accessed 31 August 2006).

Schober, M. F., F. G. Conrad, P. Ehlen, and S. S. Fricker (2003). ‘How Web surveys differ from other kinds of user interfaces’, in ASA, 2003 Proceedings of the Section on Survey Research Methods, American Statistical Association, Alexandria, VA: ASA, 190–5 (available at www.stanford.edu/~ehlen/2003–05_ASA.pdf; accessed 16 January 2009).

Schonlau, M., R. D. Fricker, and M. Elliott (2002). Conducting Research Surveys via E-mail and the Web, Santa Monica, CA: RAND Corporation (available at www.rand.org/pubs/monograph_reports/MR1480; accessed 14 May 2010).

Schwarz, N. and T. Wellens (1994). ‘Cognitive dynamics of proxy responding: the diverging perspectives of actors and observers’, Working Paper in Survey Methodology no. 94–07, Statistical Research Division, US Census Bureau, Washington, DC.

Scott, D. W. (1979). ‘On optimal and data-based histograms’, Biometrika, 66 (3), 605–10.

Sebold, J. (1988). ‘Survey period length, unanswered numbers, and nonresponse in telephone surveys’, in R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg (eds.), Telephone Survey Methodology, New York: John Wiley, 247–56.

Sharp, J. (2003). ‘Data interrogation and management’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 629–34.

Singer, E. (2002). ‘The use of incentives to reduce nonresponse in household surveys’, in R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little (eds.), Survey Nonresponse, New York: John Wiley, 163–78.

Singer, E., R. M. Groves, and A. D. Corning (1999). ‘Differential incentives: beliefs about practices, perceptions of equity, and effects on survey participation’, Public Opinion Quarterly, 63 (2), 251–60.

Smith, T. W. (1999). ‘Standards for final disposition codes and outcome rates for surveys’, Federal Committee on Statistical Methodology, Washington, DC, www.fcsm.gov/99papers/smith.html (accessed 14 August 2009).

Smyth, J. D., L. M. Christian, and D. A. Dillman (2006). ‘Does yes or no on the telephone mean the same as check-all-that-apply on the Web?’, paper presented at the 2nd International Conference on Telephone Survey Methodology, Miami, 12 January.

Sprehe, J. T. (1997). ‘The US Census Bureau’s Data Access and Dissemination System (DADS)’, Government Information Quarterly, 14 (1), 91–100.

Starmer, C. (2000). ‘Developments in non-expected utility theory: the hunt for a descriptive theory of choice under risk’, Journal of Economic Literature, 38 (2), 332–82.

Statistics Canada (2005). Learning a Living: First Results from the Adult Literacy and Life Skills Survey, Ottawa: Statistics Canada (available at www.statcan.gc.ca/pub/89–603-x/2005001/pdf/4200878-eng.pdf; accessed 25 September 2009).

Stetkær, K. (2004). ‘Danish experiences in tackling the respondent burden’, paper presented at workshop ‘Future challenges of services statistics’, Luxembourg city, 29 June.

Stopher, P. R. (1997). ‘A review of separate and joint strategies for the use of data on revealed and stated choices’, in P. Bonnel, R. Chapleau, M. E. H. Lee-Gosselin, and C. Raux (eds.), Urban Travel Survey Methods: Measuring the Present, Simulating the Future, Lyons: Cartier, 15–32.

(1998). ‘A review of separate and joint strategies for the use of data on revealed and stated choices’, Transportation, 25 (2), 187–205.

(2008). ‘Collecting and processing data from mobile technologies’, invited resource paper pre-sented at the 8th International conference on survey methods in transport, Annecy, France, 26 May.

(2010). ‘The travel survey toolkit’, in M. Lee-Gosselin and J. Zmud (eds.), Transport Survey Methods: Keeping Up with a Changing World, Bingley, UK: Emerald Press, 15–46.

Stopher, P. R., R. Alsnih, C. G. Wilmot, C. Stecher, J. Pratt, J. P. Zmud, W. Mix, M. Freedman, K. Axhausen, M. Lee-Gosselin, A. E. Pisarski, and W. Brög (2008a). Standardized Procedures for Personal Travel Surveys, Washington, DC: Transportation Research Board.

(2008b). Technical appendix to Standardized Procedures for Personal Travel Surveys, Web-only Document no. 93, Washington, DC: Transportation Research Board, http://trb.org/news/blurb_detail.asp?id=8858.

Stopher, P. R., P. Bullock, and F. Horst (2002). ‘Exploring the use of Passive GPS devices to measure travel’, in K. C. P. Wang, S. Medanat, S. Nambisan, and G. Spring (eds.), Proceedings of the 7th International Conference on Applications of Advanced Technologies to Transportation, Reston, VA: American Society of Civil Engineers, 959–67.

Stopher, P. R. and S. P. Greaves (2006). ‘Guidelines for samplers: measuring a change in behaviour from before and after surveys’, Transportation, 34 (1), 1–16.

(2009). ‘Missing and inaccurate information from travel surveys: pilot results’, paper presented at the 32nd Australasian Transport Research Forum, Auckland, 1 October.

Stopher, P. R. and D. A. Hensher (2000). ‘Are more profiles better than fewer? Searching for parsimony and relevance in stated choice experiments’, Transportation Research Record, 1719, 165–74.

Stopher, P. R. and P. M. Jones (2003). Transport Survey Quality and Innovation, Oxford: Pergamon Press.

Stopher, P. R., K. Kockelman, S. P. Greaves, and E. Clifford (2008c). ‘Reducing burden and sample sizes in multiday household travel surveys’, Transportation Research Record, 2064, 12–18.

Stopher, P. R. and H. M. A. Metcalf (1996). Methods for Household Travel Surveys, Washington, DC: National Academy Press.

Stopher, P. R. and A. H. Meyburg (1979). Survey Sampling and Multivariate Analysis for Social Scientists and Engineers, Lexington, MA: Lexington Books.

Stopher, P. R., L. Shillito, D. T. Grober, and H. M. A. Stopher (1986). ‘On-board bus surveys: no questions asked’, Transportation Research Record, 1085, 50–7.

Stopher, P. R. and C. Stecher (1993). ‘Blow up: expanding a complex random sample travel survey’, Transportation Research Record, 1412, 10–16.

Stopher, P. R., M. Xu, and C. FitzGerald (2005). ‘Assessing the accuracy of the Sydney Household Travel Survey with GPS’, paper presented at the 28th Australasian Transport Research Forum, Sydney, 30 September.

Stopher, P., M. Xu, and C. FitzGerald (2007). ‘Assessing the accuracy of the Sydney Household Travel Survey with GPS’, Transportation, 34 (6), 723–41.

Sturges, H. (1926). ‘The choice of a class interval’, Journal of the American Statistical Association, 21, 65–6.

Stycis, J. M. (1981). ‘A critique of focus group and survey research: the machismo case’, Studies in Family Planning, 21 (12), 450–6.

Sudman, S. and N. M. Bradburn (1974). Response Effects in Surveys, Chicago: Aldine Press.

	(1982). Asking Questions: A Practical Guide to Questionnaire Design, San Francisco: Jossey-Bass.

Swann, N. and P. R. Stopher (2008). ‘Evaluation of a GPS survey by means of focus groups’, paper presented at the 87th annual meeting of the Transportation Research Board, Washington, DC, 16 January.

Sykes, W. and M. Collins (1988). ‘Effects of mode of interview: experiments in the UK’, in R. M. Groves, P. P. Biemer, L. E. Lyberg, J. T. Massey, W. L. Nicholls, and J. Waksberg (eds.), Telephone Survey Methodology, New York: John Wiley, 301–20.

Talvitie, A. P. (1997). ‘Things planners believe in, and things they deny’, Transportation, 24 (1), 1–31.

Tooley, M. (1996). ‘Incentives and rates of return for travel surveys’, Transportation Research Record, 1551, 67–73.

Tourangeau, R. and T. W. Smith (1996). ‘Asking sensitive questions: the impact of data collection mode, question format, and question context’, Public Opinion Quarterly, 60 (2), 275–304.

Transport Data Centre (2002). ‘Household travel survey nonresponse study report’, internal publication, Transport Data Centre, New South Wales Department of Transport, Sydney.

Transportation Research Board (2000). Transport Surveys: Raising the Standard: Proceedings of an International Conference on Transport Survey Quality and Innovation, May 24–30, 1997, Grainau, Germany, Transportation Research Circular no. E-C008, Washington, DC: Transportation Research Board.

Traugott, M. W., R. M. Groves, and J. Lepkowski (1987). ‘Using dual frame designs to reduce nonresponse in telephone surveys’, Public Opinion Quarterly, 51 (4), 522–39.

Triplett, T. (1998). ‘What is gained from additional call attempts and refusal conversion and what are the cost implications?’, unpublished paper, Survey Research Center, University of Maryland, College Park.

Tuckel, P. and H. O’Neill (2002). ‘The vanishing respondent in telephone surveys’, Journal of Advertising Research, 42 (5), 26–48.

United Nations Economic and Social Council (2005). ‘The experiences of ABS with reducing respondent burden through the use of administrative data and through the use of smarter statistical methodologies’, CES/2005/18, Geneva: United Nations Economic and Social Council.

US Census Bureau (2000). ‘Design and Methodology, Current Population Survey’, Technical Paper no. 63, US Department of Commerce, Washington, DC (available at www.census.gov/prod/2000pubs/tp63.pdf; accessed 9 November 2003).

US Census Bureau (2005). ‘Response to question reference 050308–000073’, available at Question and Answer Center, http://askcensus.gov (accessed 9 March 2005).

US Department of Transportation (2004). ‘Archived Data User Service (ADUS); ITS standards advisory’, US Department of Transportation, Washington, DC (available at www.standards.its.dot.gov/Documents/ADUS_Advisory.pdf; accessed 30 June 2004).

Van der Reis, P. and A. S. Harvey (2006). ‘Survey implementation’, in P. R. Stopher and C. C. Stecher (eds.), Travel Survey Methods: Quality and Future Directions, Oxford: Elsevier, 213–22.

Van der Reis, P., M. Lombard and I. Kriel (2004). ‘A qualitative analysis of factors contributing to inconsistent responses in stated preference surveys’, paper presented at the 7th international conference on travel survey methods, Playa Herradura, Costa Rica, 4 August.

Van Evert, H., W. Brög, and E. Erl (2006). ‘Survey design: the past, the present and the future’, in: P. R. Stopher and C. C. Stecher (eds.), Travel Survey Methods: Quality and Future Directions, Oxford: Elsevier Press, 75–93.

Van Ingen, E., I. Stoop, and K. Breedveld (2008). ‘Nonresponse in the Dutch Time Use Survey: strategies for response enhancement and bias reduction’, Field Methods, 21 (1), 69–90.

Vinten, G. (1995). ‘The art of asking threatening questions’, Management Decision, 33 (7), 35–9.

Wagner, D. P. (1997). Lexington Area Travel Data Collection Test: Global Positioning Systems for Personal Travel Surveys: Final Report, Columbus, OH: Battelle Memorial Institute.

Walvis, T. H. (2003). ‘Avoiding advertising research disaster: advertising and the uncertainty principle’, Journal of Brand Management, 10 (6), 403–9.

Warriner, K., J. Goyder, H. Gjertsen, P. Hohner, and K. McSpurren (1996). ‘Charities, no; lotteries, no; cash, yes: main effects and interactions in a Canadian incentives experiment’, Public Opinion Quarterly, 60 (4), 542–62.

Whitworth, E. R. (1999). ‘Can the use of a second mailing increase mail response rates in historically hard to enumerate census tracts?’, paper presented at the international conference on survey nonresponse, Portland, OR, 31 October (available at www.jpsm.umd.edu; accessed 30 January 2003).

WHO (2005). ‘Activity costs of surveys, studies, including technical assistance, for M&E implementation’, WHO/HIV/SIR version 17/3/05, WHO, Geneva (available at www.global-hivevaluation.org/media/globalaids/Regional2005/Module7-Agency_Specific_Meetings/UNAIDS_Meeting/Matrix_of_Activity_Costs_of_Surveys_Studies.doc; accessed 27 May 2010).

Wigan, M. R., M. Grieco, and J. Hine (2002). ‘Enabling and managing greater access to transport data through metadata’, paper presented at the 81st annual Transportation Research Board meeting, Washington, DC, 15 January.

Willits, F. K. and J. O. Janota (1996). ‘A matter of order: effects of response order on answers to surveys’, paper presented at the annual meeting of the Rural Sociological Society, Des Moines, IA, 16 August.

Wilmot, C. G. and S. Shivananjappa (2003). ‘Comparison of hot-deck and neural network imputation’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 543–54.

Wolf, J. (2006). ‘Application of new technologies in travel surveys’, in P. R. Stopher and C. C. Stecher (eds.), Travel Survey Methods: Quality and Future Directions, Oxford: Elsevier, 531–44.

Wolf, J., M. Loechl, M. Thompson, and C. Arce (2003). ‘Trip rate analysis in GPS-enhanced personal travel surveys’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Pergamon Press, 483–98.

Xu, M., B. J. Bates, and J. C. Schweitzer (1993). ‘The impact of messages on survey participation in answering machine households’, Public Opinion Quarterly, 57 (2), 232–7.

Yang, Y. M. (2004). ‘Survey errors and survey costs: experience from surveys of arrestees’, in ASA, Proceedings of the 2004 Joint Statistical Meeting: Section on Survey Research Methods, Alexandria, VA: ASA, 4656–8.

Yates, F. (1965). Sampling Methods for Censuses and Surveys, 3rd edn., London: Charles Griffin and Co.

Yuan, Y. C. (2008). ‘Multiple imputation for missing data: concepts and new development (version 9.0)’, SAS Institute, Cary, NC (http://support.sas.com/rnd/app/papers/multipleimputation.pdf; accessed 20 August 2009).

Zimowski, M., R. Tourangeau, R. Ghadialy, and S. Pedlow (1997). Nonresponse in Household Travel Surveys, National Opinion Research Center, University of Chicago (available at www.fhwa.dot.gov/ohim/nonrespond.pdf; accessed 19 October 2001).

Zmud, J. P. (2003). ‘Designing instruments to improve response’, in P. R. Stopher and P. M. Jones (eds.), Transport Survey Quality and Innovation, Oxford: Elsevier, 89–108.

(2006). ‘Instrument design: decisions and procedures’, in P. R. Stopher and C. C. Stecher (eds.), Travel Survey Methods: Quality and Future Directions, Oxford: Elsevier, 143–60.

Zmud, J. P. and J. Wolf (2003). ‘Identifying the correlates of trip misreporting: results from the California Statewide Household Travel Survey GPS Study’, paper presented at the 10th international conference on travel behaviour research, Lucerne, 15 August.

Index

AAPOR, 436accuracy, 66, 98, 125, 180, 228, 265, 287, 340, 356,

363, 469, 471, 489, 495address

listing, 257, 479reporting, 407

administrative costs, 358data, 249records, 487

affirmation, 138bias, 149, 348

age, 461nonresponse to, 461

age category, 461analysis of variance, 285, 291, 496animation

in web surveys, 396ANOVA. See analysis of varianceanswer

ready, 180, 360required, 179

answering machine(s), 121, 377, 378appearance

of survey materials, 377of the survey, 161

appropriateness of the medium, 248archive, 499, 503, 506, 508,

509, 510archived data, 503archiving, 4arithmetic mean, 28arrows, 108, 167artificial neural networks, 458attitude surveys, 492attitudes, 199, 247, 347

of others, 246attitudinal questions, 157attribute levels, 207attrition, 341, 344, 350, 491, 492authority principle, 153average(s), 21, 369, 453

rolling, 355average imputation, 454

basic statistics and probability, 2benchmarking, 492bias(es), 71, 74, 78, 123, 220, 233, 316, 379, 417,

418, 421, 430, 431, 444, 468, 512, 515, 516, 517

avoidance of, 78coverage, 231from incentives, 442nonresponse, 444sources of, 74

bimodal, 25blank

in coding, 403booklet, 160

instruction, 173box and whisker plot, 36bracketing method, 462branching, 157burden, 81

call attempts, 219number of, 217

caller ID, 121, 479displays, 110

caller identification, See caller IDCAPI, 106, 362, 381, 413CASRO, 81, 435categories

layout, 184mixed ordered and unordered, 188ordered, 187ranges, 184respondent, 433response, 184, 200, 391, 405, 412, 415, 442, 450, 460unordered, 187

CATI, 109, 119, 143, 381, 413, 449census, 7, 65, 66, 67, 73, 84, 92, 137, 292, 412, 493

advantages of, 66bias, 468comparisons to, 186geography, 292, 306, 308, 407

change, 343character recognition, 415

charitable donation(s), 237, 238, 239checking responses, 382choice situations

number of, 207choice-based sampling, 314, 333classes

equal, 19maximum number of, 18

classification of survey responses, 433

closed questions. See closed-ended questionsclosed-ended questions, 140, 145, 154, 188, 413cluster samples, 3, 420cluster sampling, 314, 316, 321, 323, 420clusters, 316, 323, 420

equal size, 317, 324unequal size, 322, 324, 326, 327variance of, 321

code book, 501, 510coding, 3, 169, 401, 412

complex variables, 405coding consistency, 404coding manual, 477coding the responses, 401coefficient of determination, 51coefficient of variation, 52cold-deck imputation, 456colour, 108, 166

use of, 166comments, 159, 175comparability, 96comparison

head-to-head, 188to census, 418

complete household definition, 224

complete response definition, 225

completeness, 471, 489of a sampling frame, 266

complex variables coding, 405

computer-assisted personal interview. See CAPIcomputer assisted surveys, 363computer-assisted telephone interview. See CATIcomputer-assisted telephone interviewing. See CATI

376conditioning, 132, 156, 348, 351confidence limit

of sampling error, 282confidentiality, 82, 151, 395, 487, 488

threats to, 84conjoint analysis, 199consent form, 86, 114contacts

number and type of, 211, 213contamination, 132, 347content information, 508continuous data, 18continuous scale, 11continuous surveys, 3, 352

advantages, 354continuous variables, 468control group, 97convenience samples, 336conversion

of soft refusals, 444correction factors, 476correlation, 50, 341, 342, 497cost(s) , 66, 73, 356, 363

of data archiving, 509of pilot surveys and pretests, 262of translation, 483

covariance, 50, 68, 280, 339cover

of the survey, 162coverage, 122coverage error, 464, 467, 477creation of codes, 412criterion variables, 421cross-sectional sample, 351cross-sectional surveys, 3, 491cross-tabulation, 426cross-tabulations. See pivot tablescumulative frequencies, 18cursor

movements in web surveys, 398

data, 4, 6, 7, 8, 64administrative, 487archiving, 1, 477, 506central measures of, 21cleaning, 451cleaning costs, 363cleaning statistic(s), 464, 466coding, 91, 402collection, 2, 93, 118, 418

prospective, 143retrospective, 143

collection methods, 83design of, 211

comparability, 96continuous, 62cross-checks, 382description of, 8descriptive, 419discrete, 13documentation, 499electronic, 374encryption, 395entry, 3, 163, 174, 401, 413entry screens, 414expansion, 418, 419incomplete, 381inconsistency, 208inconsistent, 194, 209integrity, 507missing, 226, 228, 402needs, 93, 94, 98

defining, 98quality, 4, 92, 387, 464, 503

indicators, 218

measures of, 464, 469raw, 19, 83, 417, 452, 510repair, 228, 363, 416, 450, 451, 466

incorrect, 417statistics, 477

secondary, 422, 492uses of, 64weighting, 418, 421, 430

database, 91, 169, 370debrief

of interviewers, supervisors, 255decision parameter, 7demand characteristic, 149, 348demographic questions, 158, 361descriptive data, 419descriptive information, 508descriptive metadata

creation of, 501purposes of, 500

design effect(s), 321, 339, 486, 491design issues

for web surveys, 389design of survey instruments, 2devices cost, 358diary

prospective, 225difference in the means, 339differential incentives, 443difficulty(ies) , 243

emotional, 245intellectual, 244physical, 243reducing, 246

discrete data, 18, 21scale, 11variables, 468

disposition codes, 438, 439for random-digit dialling, 438

disproportionate sampling, 295, 297, 303, 318, 419

‘Do not call’ registry, 110documentation, 452, 499, 503

standards, 503documenting, 4don’t know

code, 403donor observations, 457double negatives, 196dress rehearsal. See pilot surveydrop-down boxes, 399drop-down list, 410

effect feel good, 247

elements of a population, 79

eligibility, 257, 433, 472criteria, 501known, 433rate, 218, 369, 437unknown, 371, 433, 435

eligible units, 371, 433EPSEM, 70, 333equal allocation, 294equal probability of selection method. See EPSEMequal probability sample, 223, 334equal probability sampling, 70, 266, 268error, 71, 277, 298, 334, 356

maximum permitted sampling, 281normal law of, 282random, 73sampling, 73, 270systematic. See bias

ESOMAR, 81ethical issues, 2ethics, 2, 3, 81, 131, 248, 480, 487, 488

code of, 82definition of, 81

evaluate by focus groups, 134

evaluation, 127, 134, 460exchange codes, 115expansion, 3, 430expansion factor(s), 79, 275, 296, 419expectation

maximisation, 457step, 457

extreme values, 23, 30, 41

face-to-face interview(s)/interviewing, 104, 105, 106, 213, 219, 220, 359, 362, 434, 443, 478, 479

face-to-face survey. See face-to-face interviewface-to-face surveys. See face-to-face interviewfalsified information, 474FAQs. See frequently asked questions.fatigue, 159, 210, 344feelings, 199field coding, 146field sampling, 334, 479field-coded questions, 145, 146fieldwork, 263, 354

procedures, 501finite population correction factor, 279, 282, 321first contact, 490fixed costs, 357flag(s), 404, 416, 451

data repair, 451focus group(s) , 2, 127, 141, 176, 210, 252, 255, 347,

492analysis of, 131disadvantages of, 131function of, 129number of, 128selection of members, 128size of, 128

follow-up procedures, 213foreign languages, 482fractile, 35frequencies, 17, 415frequently asked questions, 368, 374future directions, 478fuzziness

of locations, 86

gazetteer, 363, 409, 410gazetteers, 413geocoders, 410geocoding, 363, 402, 406, 408, 410geographic bounds, 100geographic coding, 3geographic information

in telephone numbers, 480geographic information systems. See GISsgeographic referencing. See geocodinggeographical list. See gazetteergeometric mean, 25geospatial data, 508, 509geospatial metadata, 503gifts, 239GISs, 411, 490Global Positioning System devices.

See GPS devicesglobalisation, 481GPS, 121, 123, 134, 141, 235, 396, 475,

493, 495GPS devices, 142, 144, 475, 488, 493graphical presentation, 11graphics, 167, 198

use of, 167grid

selection, 222group interview, 128

haphazard, 69, 70haphazard samples, 3, 335harassment, 379hard refusals, 444harmonic mean, 27high-resolution pictures, 395histogram or bar chart, 13historical imputation, 453hot-deck imputation, 457, 459

identification numbers, 374illegal aliens, 483implementation, 365, 377

of survey, 91of the survey, 354, 365

imputation, 228, 452, 466, 477, 502artificial neural network, 458average, 454cold-deck, 456evaluation of, 460expectation maximisation, 457historical, 453hot-deck, 249, 457multiple, 458ratio, 454regression, 455

incentives, 183, 235, 346, 360, 442, 445administration of, 238contingent, 239differential, 239monetary, 443prepaid, 236, 240promised, 237

income nonreponse to, 462

income categories, 462incomplete responses, 381increasing response. See reducing nonresponseindependence

assumptions of, 495independent measurement, 475independent variables

addition of, 49subtraction of, 49

in-depth, 128surveys, 128

ineligible units, 371, 433inference, 228, 416, 452, 466, 502informed consent, 86initial contact, 150, 433, 434, 449innovation, 96instructions, 107, 112, 172, 183, 244, 399, 441, 484,

490coding, 501to interviewers, 368typeface, 166

intentional samples, 335interactive capabilities

of web surveys, 400internet, 83, 104, 120, 124, 220, 385, 388, 490

browser, 113penetration, 386, 387recruitment for, 115survey(s), 111, 220, 443, 490

interquartile range, 35interval scale, 10interview

survey(s), 105, 371telephone, 217

interviewer(s), 3, 78, 83, 105, 255bias, 147characteristics, 502costs of, 357instructions to, 368monitoring, 369multi-lingual, 482selection of, 365training of, 368

interviewing costs, 357interviews

face-to-face, 219item nonresponse, 2, 106, 111, 134, 218, 226, 403,

416, 431, 450, 460, 502iterative proportional fitting procedure, 427

JavaScript®, 396judgmental samples, 3

key attributes, 281key characteristics, 421key data items, 228key variables, 228, 369, 466, 502

accuracy of, 495kurtosis

coefficient of, 54

language, 160, 366, 481, 483, 491, 494barrier, 366, 491level, 107survey, 189translation, 482

leaving messages on answering machines, 379

legitimate skip code, 403

leptokurtic, 55line graph, 15literacy, 226, 483, 491, 494logic checks, 413loss of interest, 344lotteries, 239lottery, 237

malicious software, 396mark-sensing forms, 163, 414maximisation step, 457maximum likelihood estimate, 457mean, 23, 28, 39, 274, 468

absolute deviation of, 37, 39arithmetic, 23geometric, 25group, 285harmonic, 27imputation, 454overall, 286population, 40, 68, 275, 277, 291, 308,

317, 323quadratic, 28rolling, 354sample, 23, 40, 68, 275, 277, 291, 308,

317, 323standard error of, 296variance of, 290, 318weighted, 286, 287, 290, 293, 294, 296, 422

means difference in, 339

measurable, 70measures of dispersion, 34median, 24, 28, 35, 468

of grouped data, 24memory error, 142metadata, 417, 477, 499

descriptive, 500, 501geospatial, 503preservation, 500, 503, 507purposes of descriptive, 500standards, 500

missing data code, 416coding, 402

missing value statistic, 464, 465missing values, 403, 450mitigation

of threatening questions, 139mixed-mode surveys, 120, 123, 125, 486mobile telephones, 480, 487, 488mock-up

of web survey, 397mode, 24, 28

comparison study, 464survey, 120, 124, 236, 449, 464, 486

model, 64ANN, 459open archival system, 508paired selections, 329simple random, 329statistical, 418stratified random, 329successive difference, 330survey costs, 358, 363

moderator, 129moment

appropriate, 242monitoring costs, 357motivation

to answer, 183MRA, 81multi-lingual interviewers, 482multinomial logit analysis, 418multiple imputation, 458multiple observations, 495multistage samples, 3multistage sampling, 305, 362, 420, 490multistage sampling units

requirements, 308

nearest birthday method, 221newsletter, 346nominal scale, 9non-coverage, 79, 480non-overlapping samples, 338, 342nonparticipatory surveys, 269nonrespondents, 422, 433nonresponse, 4, 77, 112, 117, 344, 346, 431, 445

bias, 446incentive effect on, 442item, 226, 431reasons for, 440, 446surveys, 445unit, 431

nonsalient, 148normal distribution, 45, 74, 270normal law of error, 269, 282numeric value

match to code, 405

objective repair, 417observation, 7, 64, 226, 418, 429, 494

donor, 457error, 79missing, 454multiple, 495valid, 23

observational survey, 79, 105, 125, 226, 269observations

valid, 19ogive, 15open-ended question(s), 99, 140, 145, 188, 412

opinion no, 201, 387

opinion surveys, 492opinions, 153, 199, 246, 247optimum allocation, 297, 300order effects, 138ordinal scale, 9outlier, 417overall quality, 477overlapping designs, 254overlapping samples, 3, 345, 350

packaging information, 508page break, 170paired selection, 323, 327paired selections model, 329panel, 3, 134, 139, 342, 343, 453, 491, 495

attrition, 344conditioning, 348disadvantages, 349refreshed, 350rotating, 351rotation, 491sample size of, 349split, 351subsample, 344, 350survey(s), 3, 241, 337, 344, 348, 491wave, 343

paper and pencil interview. See PAPIPAPI, 106parameter, 7, 79, 269partial overlap, 339participatory surveys, 269, 451past experience, 241perceived relevance, 242percentile, 35personal interview surveys, 435pie chart, 13pilot survey(s), 3, 90, 176, 195, 210, 234,

240, 251, 253, 282, 368, 409, 443, 460, 477

definition, 251sample

size of, 258pivot tables, 415platykurtic, 55population, 6, 7, 65, 79, 273, 501

census, 422covariance, 68elements, 79mean, 40, 68, 275, 277, 291, 308, 317, 323proportion, 47, 275representativeness, 65, 68sampling frame, 266statistics, 287, 418survey, 79, 306total, 275, 308totals, 422, 430

unknown, 422value, 38, 50, 79values, 23, 273, 308, 322, 337

variance, 68, 79, 278, 323postal survey(s), 107, 108, 215, 216, 361,

434, 441materials, 360, 371, 377, 443, 485

post-stratification, 292, 293gains of, 293

precoding, 174preferences, 199, 347pre-filled response, 394pre-notification letter, 211, 242, 374, 379pre-recruitment contact, 112preservation description information, 508preservation metadata, 500, 503, 507

standards, 500pretest(s), 3, 90, 176, 210, 251, 253, 261, 368, 409,

460, 477definition, 251of incentives, 442sample sizes, 261

primacy, 138, 174, 187, 193primacy-bound, 148privacy, 81, 82probability

sampling, 70, 265weighted, 296

progress bar, 394proportion(s), 17, 47, 274

standard error, 279variance, 47

proportionate sampling, 290, 295, 318, 419proportionate stratification effect, 329prospective data collection, 143proxy, 477, 489

reporting, 3, 224, 471, 477, 488, 491, 502rules for, 471statistic, 472

reports, 224, 471, 488responsible adult as, 224

publicity, 373, 442purposes

of a panel, 344, 346, 492of a pilot survey, 251, 253of a survey, 92, 152, 212, 368, 501of expansion and weighting, 418of stratification, 287

quadratic mean, 28qualitative, 128, 199

questions, 199research, 127

qualitative surveys, 128quality, 66

adherence, 477control, 101, 102, 369, 477of a survey, 476, 477, 502of data, 263, 354of response, 239of survey, 101, 371of the data, 387, 464, 469, 472, 503measures, 477

quartile, 35

question(s) agree/disagree, 205attitude, 137, 180attitudinal, 157, 201, 203, 386behaviour, 137, 138belief, 180biased, 74branch, 167branching, 157categorical, 186classification, 137, 159closed-ended, 140, 145, 147, 154, 174, 188, 413demographic, 158, 361design, 2double-barrelled, 196field-coded, 145, 146first, 153focus group, 129follow-up, 473format, 122, 145, 398frequently-asked, 368, 374initial, 398introductory, 153layout, 159, 201long, 190, 244numbering, 169, 398open. See question: open-endedopen-ended, 99, 145, 188, 412opening, 153opinion, 137ordering, 150, 153qualitative, 199ready answer to, 180refinement, 133repeated, 171repetitive, 245requiring answer, 178revealing, 182scaling, 200screening, 153sensitive, 186, 227, 381, 431splitting, 170stated response, 206threatening, 138, 157, 226, 408, 450type, 137vague, 192vagueness, 180wording, 2, 110, 130, 182, 254, 390, 450writing, 178, 188

questionnaire, 104design, 159end of, 159format, 160layout, 159, 163self-administered, 117self-report, 248

quota samples, 3quota sampling, 334

random-digit dialling, 115, 229, 361, 362, 467, 479, 481, 490

random error, 73random numbers

sources of, 71random sampling, 69, 78, 268, 269, 270randomness, 69, 71

test for, 71range, 34, 148

interquartile, 35ratio, 274

imputation, 454scales, 10

rationalisation bias, 149raw data, 83, 415, 466RDD. See random-digit diallingrecall, 144, 182recency, 138, 142, 148, 174, 188, 193, 202recency-bound, 148reciprocity, 152, 236, 447record keeping, 370recruitment, 113, 216, 433, 490response rate, 438reference points, 207reference values, 468refreshed panel, 350refusal(s), 422, 444

conversion, 368hard, 230rate, 369soft, 230

refuse, 83code, 403

regression imputation, 455relational data base, 509relationship

between variance and covariance, 50linear, 51

reliability, 180reminders, 214, 371, 443, 446

e-mail, 220repair

of data, 228, 363, 416, 450, 451, 466repeated occasions, 337repetitive questions, 245repetitive surveys, 3replacement

of panel members, 345sample, 229

representative sample, 265, 388representativeness, 65, 68, 363, 485, 490

of focus groups, 128of systematic samples, 331

request(s) for a call back, 377, 380respondent bias, 146respondent burden, 98, 144, 240, 244, 249, 381, 392,

445, 486, 491, 494respondents, 433response(s)

categories, 184, 390codes, 174habitual, 203lexicographic, 208, 209

random, 208, 209rate(s), 4, 110, 120, 123, 213, 369, 371, 373,

379, 385, 432, 435, 464, 477, 485, 486, 490, 491, 502

how to calculate, 432retrospective data collection, 143role play, 368rolling pilot survey, 254rolling samples, 352root mean square, 28

error, 469rotating panel, 351

safety, 83salience, 142, 163, 181salient, 148sample, 7, 64, 65, 68

bias, 464, 468continuous, 354convenience, 336cost, 65costs, 358covariance, 68design, 3, 73, 265, 501

non-adherence, 421disposition, 502distribution, 79expansion, 419expert, 335frame, 479haphazard, 335intentional, 75, 335judgmental, 75, 335mean, 23, 40, 68, 275, 291non-compliance, 430non-overlapping, 338overlapping, 350pilot survey, 253pretest, 261probability, 70, 71proportion, 275purposeful, 69quota, 334replacement, 229, 233, 491representativeness, 65rolling, 352selection procedures, 502self-weighted, 291size(s), 3, 73, 79, 102, 228, 281, 476

definition of, 281for pretests and pilot surveys, 260

statistics, 273survey

uses of, 65systematic, 328value, 79variance, 68, 497weighting, 421

sampling, 3, 257, 265, 269bias, 79, 387cost of, 314

EPSEM, 70error(s), 3, 73, 79, 270, 277,

295, 450, 502fraction, 419frame, 79, 266, 268, 271, 362, 501methodology, 65, 90methods, 5, 270

quasi-random, 314procedure, 314process

testing, 257rate, 274, 419units, 80, 221with replacement, 268without replacement, 268

scale, 8scan, 414scarcity, 152scatter plot, 11, 51screening questions, 153selection

sample for pilot surveys and pretests, 255

selective memory, 144self-administered, 111self-administered questionnaire, 117self-administered surveys, 392, 422self-report, 225, 475, 486, 488, 493self-weighted sample, 291sensitive questions, 381, 431show cards, 174silent numbers, 110silent telephone numbers, 481simple random model, 329simple random sample(s), 3, 362simple random sampling, 70, 271, 419situation

hypothetical, 206skewness

coefficient of, 53skirmish, 252social desirability bias, 149, 348soft refusal(s), 380, 444split design, 249split panel, 351SRS. See simple random samplingstandard deviation, 41, 277

for probability, 47of a proportion, 47of a proportion, maximum, 48properties of, 46

standard error, 277, 337maximum, 282of difference in means, 339of differences, 343of non-random samples, 334of panel, 343of proportion, 279of ratios, 279of the population total, 278

standards for metadata, 500

stated choice, 199stated preference, 199statistic(s), 2, 4, 6, 7, 23, 79, 270, 273

basic, 502biased and unbiased, 39census, 186data cleaning, 466missing value, 465root mean square error, 469sample, 79unbiased, 39

statistical inference, 7, 70statistical models, 418stem and leaf display, 21step chart, 16stratified random model, 329stratified samples, 3, 362stratified sampling, 285, 419

of unequal clusters, 326with uniform sampling fraction, 289with variable sampling fractions, 295

stratum, 80study domain, 287Sturges’ rule, 18, 62

alternatives to, 62subsample, 342subsample panel, 344, 350successive difference model, 329, 330supervision, 109supervision costs, 357supervisors, 372supplementary information, 97, 412survey(s), 8, 65

delivery, 117design, 1, 101, 127

trade-offs in, 101economics, 3ethics, 214general-purpose, 92implementation, 3instrument(s), 90, 360, 501

design of, 137intercept, 219internet, 220layout, 450length, 102manual, 368method, 102methodology, 90nonresponse. See unit nonresponseobjectives, 501postal, 215

with telephone recruitment, 216purpose, 92, 93, 501qualitative and preference, 199quality, 432, 446quality indicators, 502revealed choice, 207room, 374sampling methodology, 65sponsorship, 501stated response, 207

statistics, 4timing, 99workplace, 220

systematic error, 71systematic samples, 3systematic sampling, 314, 328

technical errors in survey form, 195

telephone interview, 479, 491telephone interview survey, 433telephone recruitment, 114, 361telephone retrieval, 361telephone survey(s), 104, 108, 443, 478

threats to, 479telescoping, 140, 142terminate, 83terminated surveys, 381termination rate, 369terminations, 392, 422tertile, 35thoughts, 199threatening question(s), 157, 226, 408, 431, 450

mitigation of, 139tick all that apply, 193time requirements

of censuses and surveys, 67of pretests and pilot surveys, 262

total, 274estimated, 275

training neural network, 459of interviewers, 368

transformation data, 48linear, 48

translation, 482true value, 79typeface, 164typeface enhancements, 166

unbiased estimators, 418unequal probability sampling, 70unit nonresponse, 2, 228, 431

causes of, 432unlisted telephone numbers, 110, 481unordered responses, 401user interface design, 396

vague, 200vague words, 140, 182vagueness, 180validation survey(s), 369, 472, 477validity, 111, 502

of response, 156of sample data, 418

values, 11, 199, 347extreme, 23, 30, 41missing, 402, 403, 450population, 23, 308, 322, 337reference, 468weighted, 296

variables, 8, 49continuous, 468criterion, 421demographic, 218discrete, 468independent, 49, 280key, 228, 369, 466, 502missing, 452random, 79

variance(s), 39, 46, 278, 282, 285, 497equal, 291for probability, 47in cluster samples, 321of a proportion, 47of a proportion, maximum, 48of the mean, 290, 318of weighted mean, 294population, 79, 278properties of, 46sample, 39, 68, 497unequal, 290weighted, 287

within stratum, 291, 297, 327within-group, 285

variation coefficient of, 52

wave interval, 343Web survey(s), 371, 385, 434weight(s), 80, 288, 296, 423, 425weighting, 3, 421

with known populations, 426weighting factors, 430welcome screen, 398wording

of questions, 130words

number of, 190vague, 191

year of birth, 461

zero in coding, 403