
Fourth Edition

Handbook of Parametric and Nonparametric Statistical Procedures

David J. Sheskin

Chapman & Hall/CRC
Taylor & Francis Group
Boca Raton  London  New York

Chapman & Hall/CRC is an imprint of the Taylor & Francis Group

Table of Contents with Summary of Topics

Introduction 1

Descriptive versus inferential statistics 1
Statistic versus parameter 2
Levels of measurement 2
Continuous versus discrete variables 4
Measures of central tendency (mode, median, mean, weighted mean, geometric mean, and the harmonic mean) 4
Measures of variability (range; quantiles, percentiles, deciles, and quartiles; variance and standard deviation; the coefficient of variation) 10
Measures of skewness and kurtosis 16
Visual methods for displaying data (tables and graphs, exploratory data analysis (stem-and-leaf displays and boxplots)) 29
The normal distribution 45
Hypothesis testing 56
A history and critique of the classical hypothesis testing model 68
Estimation in inferential statistics 74
Relevant concepts, issues, and terminology in conducting research (the case study method; the experimental method; the correlational method) 75
Experimental design (pre-experimental designs; quasi-experimental designs; true experimental designs; single-subject designs) 82
Sampling methodologies 98
Basic principles of probability 100
Parametric versus nonparametric inferential statistical tests 108
Univariate versus bivariate versus multivariate statistical procedures 109
Selection of the appropriate statistical procedure 110

Outline of Inferential Statistical Tests and Measures of Correlation/Association 125

Guidelines and Decision Tables for Selecting the Appropriate Statistical Procedure 133

Inferential Statistical Tests Employed with a Single Sample 141

Test 1: The Single-Sample z Test 143

I. Hypothesis Evaluated with Test and Relevant Background Information 143

II. Example 143
III. Null versus Alternative Hypotheses 143
IV. Test Computations 144
V. Interpretation of the Test Results 145

VI. Additional Analytical Procedures for the Single-Sample z Test and/or Related Tests 146

VIII Handbook of Parametric and Nonparametric Statistical Procedures

VII. Additional Discussion of the Single-Sample z Test 146
1. The interpretation of a negative z value 146
2. The standard error of the population mean and graphical representation of the results of the single-sample z test 147
3. Additional examples illustrating the interpretation of a computed z value 152
4. The z test for a population proportion 152

VIII. Additional Examples Illustrating the Use of the Single-Sample z Test 153
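The computation outlined in Sections IV-V above is compact enough to sketch in code. A minimal Python illustration of the single-sample z test (the function name and data are our own invention, not the book's example):

```python
import math

def single_sample_z(sample_mean, pop_mean, pop_sd, n):
    """Single-sample z test: does a sample mean differ from a known
    population mean when the population SD is known?"""
    std_error = pop_sd / math.sqrt(n)          # standard error of the mean
    z = (sample_mean - pop_mean) / std_error   # test statistic
    # two-tailed p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z, p = single_sample_z(sample_mean=105, pop_mean=100, pop_sd=15, n=36)
# z = 5 / (15/6) = 2.0; two-tailed p ≈ 0.0455
```

A negative z (Section VII.1) simply indicates a sample mean below the hypothesized population mean; the two-tailed p-value is unaffected.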

Test 2: The Single-Sample t Test 157

I. Hypothesis Evaluated with Test and Relevant Background Information 157
II. Example 158
III. Null versus Alternative Hypotheses 158
IV. Test Computations 158
V. Interpretation of the Test Results 160

VI. Additional Analytical Procedures for the Single-Sample t Test and/or Related Tests 162
1. Determination of the power of the single-sample t test and the single-sample z test, and the application of Test 2a: Cohen's d index 162
2. Computation of a confidence interval for the mean of the population represented by a sample 173

VII. Additional Discussion of the Single-Sample t Test 183
1. Degrees of freedom 183

VIII. Additional Examples Illustrating the Use of the Single-Sample t Test 184
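When the population SD is unknown, the sample SD is substituted and the statistic is referred to the t distribution with n - 1 degrees of freedom (Section VII.1). A minimal sketch of the test statistic and the Cohen's d effect size of Test 2a (hypothetical data, our own function name):

```python
import math
import statistics

def single_sample_t(data, pop_mean):
    """Single-sample t test statistic, df, and Cohen's d effect size."""
    n = len(data)
    mean = statistics.mean(data)
    sd = statistics.stdev(data)            # unbiased (n - 1) estimator
    t = (mean - pop_mean) / (sd / math.sqrt(n))
    d = (mean - pop_mean) / sd             # Cohen's d (Test 2a)
    return t, n - 1, d

t, df, d = single_sample_t([2, 4, 4, 6, 8, 6], pop_mean=3)
# t ≈ 2.34 with df = 5; d ≈ 0.95
```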

Test 3: The Single-Sample Chi-Square Test for a Population Variance 191

I. Hypothesis Evaluated with Test and Relevant Background Information 191
II. Example 192
III. Null versus Alternative Hypotheses 192
IV. Test Computations 193
V. Interpretation of the Test Results 194

VI. Additional Analytical Procedures for the Single-Sample Chi-Square Test for a Population Variance and/or Related Tests 196
1. Large sample normal approximation of the chi-square distribution 196
2. Computation of a confidence interval for the variance of a population represented by a sample 197
3. Sources for computing the power of the single-sample chi-square test for a population variance 200

VII. Additional Discussion of the Single-Sample Chi-Square Test for a Population Variance 200

VIII. Additional Examples Illustrating the Use of the Single-Sample Chi-Square Test for a Population Variance 200
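The test statistic for Test 3 is just the ratio of the sample sum of squares to the hypothesized population variance. A minimal sketch (invented data; the resulting value is referred to the chi-square distribution with n - 1 df):

```python
import statistics

def chi_square_variance(data, pop_variance):
    """Single-sample chi-square test statistic for a population variance:
    chi2 = (n - 1) * s^2 / sigma0^2, evaluated with df = n - 1."""
    n = len(data)
    sample_var = statistics.variance(data)        # unbiased s^2
    chi_sq = (n - 1) * sample_var / pop_variance
    return chi_sq, n - 1

chi_sq, df = chi_square_variance([4, 6, 8, 10], pop_variance=5)
# sum of squared deviations = 20, so chi-square = 20 / 5 = 4.0 with df = 3
```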

Test 4: The Single-Sample Test for Evaluating Population Skewness 205

I. Hypothesis Evaluated with Test and Relevant Background Information 205

II. Example 206
III. Null versus Alternative Hypotheses 206
IV. Test Computations 207
V. Interpretation of the Test Results 209


VI. Additional Analytical Procedures for the Single-Sample Test for Evaluating Population Skewness and/or Related Tests 210

VII. Additional Discussion of the Single-Sample Test for Evaluating Population Skewness 210
1. Exact tables for the single-sample test for evaluating population skewness 210
2. Note on a nonparametric test for evaluating skewness 210

VIII. Additional Examples Illustrating the Use of the Single-Sample Test for Evaluating Population Skewness 211

Test 5: The Single-Sample Test for Evaluating Population Kurtosis 213

I. Hypothesis Evaluated with Test and Relevant Background Information 213
II. Example 214
III. Null versus Alternative Hypotheses 214
IV. Test Computations 215
V. Interpretation of the Test Results 217

VI. Additional Analytical Procedures for the Single-Sample Test for Evaluating Population Kurtosis and/or Related Tests 218
1. Test 5a: The D'Agostino-Pearson test of normality 218
2. Test 5b: The Jarque-Bera test of normality 219

VII. Additional Discussion of the Single-Sample Test for Evaluating Population Kurtosis 220
1. Exact tables for the single-sample test for evaluating population kurtosis 220
2. Additional comments on tests of normality 220

VIII. Additional Examples Illustrating the Use of the Single-Sample Test for Evaluating Population Kurtosis 221

Test 6: The Wilcoxon Signed-Ranks Test 225

I. Hypothesis Evaluated with Test and Relevant Background Information 225
II. Example 225
III. Null versus Alternative Hypotheses 226
IV. Test Computations 226
V. Interpretation of the Test Results 228

VI. Additional Analytical Procedures for the Wilcoxon Signed-Ranks Test and/or Related Tests 230
1. The normal approximation of the Wilcoxon T statistic for large sample sizes 230
2. The correction for continuity for the normal approximation of the Wilcoxon signed-ranks test 232
3. Tie correction for the normal approximation of the Wilcoxon test statistic 233
4. Computation of a confidence interval for a population median 234

VII. Additional Discussion of the Wilcoxon Signed-Ranks Test 235
1. Power-efficiency of the Wilcoxon signed-ranks test and the concept of asymptotic relative efficiency 235
2. Note on symmetric population concerning hypotheses regarding median and mean 236

VIII. Additional Examples Illustrating the Use of the Wilcoxon Signed-Ranks Test 237
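The T statistic of Test 6 and its large-sample normal approximation (Section VI.1) can be sketched as follows. This is our own minimal implementation with made-up data; the continuity and tie corrections of Sections VI.2-VI.3 are deliberately omitted:

```python
import math

def wilcoxon_signed_ranks(data, median0):
    """Wilcoxon signed-ranks T and its large-sample normal
    approximation (no continuity or tie correction)."""
    diffs = [x - median0 for x in data if x != median0]  # drop zeros
    n = len(diffs)
    # rank absolute differences, averaging ranks across ties
    ordered = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ordered[j + 1]]) == abs(diffs[ordered[i]]):
            j += 1
        avg = (i + j) / 2 + 1            # average of tied rank positions
        for k in range(i, j + 1):
            ranks[ordered[k]] = avg
        i = j + 1
    t_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    t_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    T = min(t_pos, t_neg)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return T, (T - mu) / sigma

T, z = wilcoxon_signed_ranks([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], median0=0)
# all differences positive, so T = 0
```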


Test 7: The Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample 241

I. Hypothesis Evaluated with Test and Relevant Background Information 241
II. Example 242
III. Null versus Alternative Hypotheses 243
IV. Test Computations 244
V. Interpretation of the Test Results 248

VI. Additional Analytical Procedures for the Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample and/or Related Tests 249
1. Computing a confidence interval for the Kolmogorov-Smirnov goodness-of-fit test for a single sample 249
2. The power of the Kolmogorov-Smirnov goodness-of-fit test for a single sample 250
3. Test 7a: The Lilliefors test for normality 250

VII. Additional Discussion of the Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample 252
1. Effect of sample size on the result of a goodness-of-fit test 252
2. The Kolmogorov-Smirnov goodness-of-fit test for a single sample versus the chi-square goodness-of-fit test and alternative goodness-of-fit tests 253

VIII. Additional Examples Illustrating the Use of the Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample 253
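The D statistic at the heart of Test 7 is the largest gap between the empirical CDF of the data and the fully specified theoretical CDF. A minimal sketch (our own function, hypothetical uniform data):

```python
def ks_one_sample(data, cdf):
    """Kolmogorov-Smirnov D: the largest vertical distance between the
    empirical CDF of the data and a fully specified theoretical CDF."""
    xs = sorted(data)
    n = len(xs)
    D = 0.0
    for i, x in enumerate(xs, start=1):
        F = cdf(x)
        # check the gap just after and just before each jump of the ECDF
        D = max(D, i / n - F, F - (i - 1) / n)
    return D

# against a uniform(0, 1) CDF (hypothetical data)
D = ks_one_sample([0.1, 0.2, 0.3, 0.4], lambda x: x)
# D = 0.6, attained at x = 0.4 where the ECDF jumps to 1.0
```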

Test 8: The Chi-Square Goodness-of-Fit Test 257

I. Hypothesis Evaluated with Test and Relevant Background Information 257
II. Examples 258
III. Null versus Alternative Hypotheses 258
IV. Test Computations 259
V. Interpretation of the Test Results 261

VI. Additional Analytical Procedures for the Chi-Square Goodness-of-Fit Test and/or Related Tests 261
1. Comparisons involving individual cells when k > 2 261
2. The analysis of standardized residuals 264
3. The correction for continuity for the chi-square goodness-of-fit test 265
4. Computation of a confidence interval for the chi-square goodness-of-fit test/confidence interval for a population proportion 266
5. Brief discussion of the z test for a population proportion (Test 9a) and the single-sample test for the median (Test 9b) 269
6. Application of the chi-square goodness-of-fit test for assessing goodness-of-fit for a theoretical population distribution 269
7. Sources for computing the power of the chi-square goodness-of-fit test 273
8. Heterogeneity chi-square analysis 273

VII. Additional Discussion of the Chi-Square Goodness-of-Fit Test 277
1. Directionality of the chi-square goodness-of-fit test 277
2. Additional goodness-of-fit tests 279

VIII. Additional Examples Illustrating the Use of the Chi-Square Goodness-of-Fit Test 280
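The statistic for Test 8 sums (observed - expected)^2 / expected over the k cells. A minimal sketch with invented frequencies (df = k - 1 when the expected frequencies are fully specified in advance):

```python
def chi_square_gof(observed, expected):
    """Chi-square goodness-of-fit statistic over k cells."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 60 observations against a hypothesis of equal cell probabilities
chi_sq = chi_square_gof([30, 10, 20], [20, 20, 20])
# (100 + 100 + 0) / 20 = 10.0, evaluated with df = 2
```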


Test 9: The Binomial Sign Test for a Single Sample 289

I. Hypothesis Evaluated with Test and Relevant Background Information 289
II. Examples 290
III. Null versus Alternative Hypotheses 290
IV. Test Computations 291
V. Interpretation of the Test Results 293

VI. Additional Analytical Procedures for the Binomial Sign Test for a Single Sample and/or Related Tests 294
1. Test 9a: The z test for a population proportion (with discussion of correction for continuity; computation of a confidence interval; procedure for computing sample size for test of specified power; additional comments on computation of the power of the binomial sign test for a single sample) 294
2. Extension of the z test for a population proportion to evaluate the performance of m subjects on n trials on a binomially distributed variable 303
3. Test 9b: The single-sample test for the median 305

VII. Additional Discussion of the Binomial Sign Test for a Single Sample 308
1. Evaluating goodness-of-fit for a binomial distribution 308

VIII. Additional Example Illustrating the Use of the Binomial Sign Test for a Single Sample 310

IX. Addendum 311
1. Discussion of additional discrete probability distributions and the exponential distribution 311
The multinomial distribution 311
The negative binomial distribution 313
The hypergeometric distribution 316
The Poisson distribution 319
Computation of a confidence interval for a Poisson parameter 322
Test 9c: Test for evaluating two Poisson counts 323
Evaluating goodness-of-fit for a Poisson distribution 324
The exponential distribution 326
The matching distribution 330
2. Conditional probability, Bayes' theorem, Bayesian statistics and hypothesis testing 332
Conditional probability 332
Bayes' theorem 333
Bayesian hypothesis testing 349
Bayesian analysis of a continuous variable 374
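The exact probability underlying Test 9 comes straight from the binomial distribution. A minimal sketch (our own function and data; the two-tailed value here simply doubles the smaller one-tailed probability, one common convention among several):

```python
import math

def binomial_two_tailed(successes, n, p0=0.5):
    """Exact two-tailed binomial probability for the sign test,
    doubling the smaller one-tailed probability."""
    def pmf(k):
        return math.comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
    upper = sum(pmf(k) for k in range(successes, n + 1))   # P(X >= s)
    lower = sum(pmf(k) for k in range(0, successes + 1))   # P(X <= s)
    return min(1.0, 2 * min(upper, lower))

p = binomial_two_tailed(9, 10)
# one-tailed P(X >= 9) = 11/1024; two-tailed ≈ 0.0215
```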

Test 10: The Single-Sample Runs Test (and Other Tests of Randomness) 389

I. Hypothesis Evaluated with Test and Relevant Background Information 389
II. Example 390
III. Null versus Alternative Hypotheses 391
IV. Test Computations 391
V. Interpretation of the Test Results 391

VI. Additional Analytical Procedures for the Single-Sample Runs Test and/or Related Tests 392
1. The normal approximation of the single-sample runs test for large sample sizes 392
2. The correction for continuity for the normal approximation of the single-sample runs test 393
3. Extension of the runs test to data with more than two categories 394
4. Test 10a: The runs test for serial randomness 395

VII. Additional Discussion of the Single-Sample Runs Test 398
1. Additional discussion of the concept of randomness 398

VIII. Additional Examples Illustrating the Use of the Single-Sample Runs Test 399

IX. Addendum 402
1. The generation of pseudorandom numbers (The midsquare method; the midproduct method; the linear congruential method) 402
2. Alternative tests of randomness
Test 10b: The frequency test 407
Test 10c: The gap test 409
Test 10d: The poker test 413
Test 10e: The maximum test 413
Test 10f: The coupon collector's test 414
Test 10g: The mean square successive difference test (for serial randomness) 417
Additional tests of randomness (Autocorrelation; The serial test; The d2 square test of random numbers; Tests of trend analysis/time series analysis) 419
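Counting runs and applying the large-sample normal approximation of Section VI.1 takes only a few lines. A minimal sketch (our own implementation, toy sequence; the continuity correction of Section VI.2 is omitted):

```python
import math

def runs_test(seq):
    """Single-sample runs test on a two-category sequence, with the
    large-sample normal approximation for the number of runs."""
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    cats = sorted(set(seq))
    n1 = seq.count(cats[0])
    n2 = len(seq) - n1
    N = n1 + n2
    mu = 2 * n1 * n2 / N + 1                       # expected runs
    var = 2 * n1 * n2 * (2 * n1 * n2 - N) / (N ** 2 * (N - 1))
    return runs, (runs - mu) / math.sqrt(var)

runs, z = runs_test("ABABABABAB")
# a perfectly alternating sequence: 10 runs, far more than expected
```

Too many runs (systematic alternation) and too few runs (clustering) are both evidence against randomness, which is why the test is two-tailed.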

Inferential Statistical Tests Employed with Two Independent Samples (and Related Measures of Association/Correlation) 425

Test 11: The t Test for Two Independent Samples 427

I. Hypothesis Evaluated with Test and Relevant Background Information 427
II. Example 427
III. Null versus Alternative Hypotheses 428
IV. Test Computations 428
V. Interpretation of the Test Results 431

VI. Additional Analytical Procedures for the t Test for Two Independent Samples and/or Related Tests 432
1. The equation for the t test for two independent samples when a value for a difference other than zero is stated in the null hypothesis 432
2. Test 11a: Hartley's Fmax test for homogeneity of variance/F test for two population variances: Evaluation of the homogeneity of variance assumption of the t test for two independent samples 434
3. Computation of the power of the t test for two independent samples and the application of Test 11b: Cohen's d index 439
4. Measures of magnitude of treatment effect for the t test for two independent samples: Omega squared (Test 11c) and Eta squared (Test 11d) 446
5. Computation of a confidence interval for the t test for two independent samples 448
6. Test 11e: The z test for two independent samples 450

VII. Additional Discussion of the t Test for Two Independent Samples 452
1. Unequal sample sizes 452
2. Robustness of the t test for two independent samples 453


3. Outliers (Box-and-whisker plot criteria; Standard deviation score criteria; Test 11f: Median absolute deviation test for identifying outliers and Test 11g: Extreme Studentized deviate test for identifying outliers; trimming data; Winsorization) and data transformation 454
4. Missing data 468
5. Clinical trials 471
6. Tests of equivalence (Test 11h: The Westlake-Schuirmann test of equivalence of two independent treatments (and procedure for computing sample size in reference to the power of the test) and Test 11i: Tryon's test of equivalence of two independent treatments) 474
7. Hotelling's T² 498

VIII. Additional Examples Illustrating the Use of the t Test for Two Independent Samples 499
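The pooled-variance form of Test 11 can be sketched in a few lines. A minimal Python illustration with invented data (it assumes the homogeneity of variance evaluated by Test 11a; Welch-type alternatives for unequal variances are not shown):

```python
import math
import statistics

def t_two_independent(x, y):
    """t test for two independent samples with pooled variance;
    df = n1 + n2 - 2."""
    n1, n2 = len(x), len(y)
    v1, v2 = statistics.variance(x), statistics.variance(y)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))     # SE of the mean difference
    t = (statistics.mean(x) - statistics.mean(y)) / se
    return t, n1 + n2 - 2

t, df = t_two_independent([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])
# t = -2.0 with df = 8
```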

Test 12: The Mann-Whitney U Test 513

I. Hypothesis Evaluated with Test and Relevant Background Information 513
II. Example 514
III. Null versus Alternative Hypotheses 514
IV. Test Computations 515
V. Interpretation of the Test Results 518

VI. Additional Analytical Procedures for the Mann-Whitney U Test and/or Related Tests 518
1. The normal approximation of the Mann-Whitney U statistic for large sample sizes 518
2. The correction for continuity for the normal approximation of the Mann-Whitney U test 520
3. Tie correction for the normal approximation of the Mann-Whitney U statistic 521
4. Computation of a confidence interval for a difference between the medians of two independent populations 521

VII. Additional Discussion of the Mann-Whitney U Test 524
1. Power-efficiency of the Mann-Whitney U test 524
2. Equivalency of the normal approximation of the Mann-Whitney U test and the t test for two independent samples with rank-orders 524
3. Alternative nonparametric rank-order procedures for evaluating a design involving two independent samples 524

VIII. Additional Examples Illustrating the Use of the Mann-Whitney U Test 525

IX. Addendum 526
1. Computer-intensive tests (Randomization and permutation tests: Test 12a: The randomization test for two independent samples; Test 12b: The bootstrap; Test 12c: The jackknife; Final comments on computer-intensive procedures) 526
2. Survival analysis (Test 12d: Kaplan-Meier estimate) 540
3. Procedures for evaluating censored data in a design involving two independent samples (Permutation test based on the median for censored data, Gehan's test for censored data (Test 12e), and the log-rank test (Test 12f)) 550
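The U statistic of Test 12 can be obtained by direct pairwise comparison of the two samples, with the normal approximation of Section VI.1 layered on top. A minimal sketch (our own code, toy data; tie and continuity corrections omitted):

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U by pairwise comparison (ties count 0.5), plus
    the large-sample normal approximation (no tie correction)."""
    n1, n2 = len(x), len(y)
    u1 = sum(1.0 if a < b else 0.5 if a == b else 0.0
             for a in x for b in y)
    U = min(u1, n1 * n2 - u1)          # report the smaller of U1, U2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return U, (U - mu) / sigma

U, z = mann_whitney_u([1, 2, 3], [4, 5, 6])
# completely separated samples: U = 0
```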


Test 13: The Kolmogorov-Smirnov Test for Two Independent Samples 577

I. Hypothesis Evaluated with Test and Relevant Background Information 577
II. Example 578
III. Null versus Alternative Hypotheses 578
IV. Test Computations 580
V. Interpretation of the Test Results 582

VI. Additional Analytical Procedures for the Kolmogorov-Smirnov Test for Two Independent Samples and/or Related Tests 583
1. Graphical method for computing the Kolmogorov-Smirnov test statistic 583
2. Computing sample confidence intervals for the Kolmogorov-Smirnov test for two independent samples 584
3. Large sample chi-square approximation for a one-tailed analysis of the Kolmogorov-Smirnov test for two independent samples 584

VII. Additional Discussion of the Kolmogorov-Smirnov Test for Two Independent Samples 585
1. Additional comments on the Kolmogorov-Smirnov test for two independent samples 585

VIII. Additional Examples Illustrating the Use of the Kolmogorov-Smirnov Test for Two Independent Samples 585

Test 14: The Siegel-Tukey Test for Equal Variability 589

I. Hypothesis Evaluated with Test and Relevant Background Information 589
II. Example 590
III. Null versus Alternative Hypotheses 590
IV. Test Computations 591
V. Interpretation of the Test Results 594

VI. Additional Analytical Procedures for the Siegel-Tukey Test for Equal Variability and/or Related Tests 594
1. The normal approximation of the Siegel-Tukey test statistic for large sample sizes 594
2. The correction for continuity for the normal approximation of the Siegel-Tukey test for equal variability 595
3. Tie correction for the normal approximation of the Siegel-Tukey test statistic 596
4. Adjustment of scores for the Siegel-Tukey test for equal variability when 596

VII. Additional Discussion of the Siegel-Tukey Test for Equal Variability 598
1. Analysis of the homogeneity of variance hypothesis for the same set of data with both a parametric and nonparametric test, and the power-efficiency of the Siegel-Tukey test for equal variability 598
2. Alternative nonparametric tests of dispersion 599

VIII. Additional Examples Illustrating the Use of the Siegel-Tukey Test for Equal Variability 600

Test 15: The Moses Test for Equal Variability 605

I. Hypothesis Evaluated with Test and Relevant Background Information 605
II. Example 606


III. Null versus Alternative Hypotheses 606
IV. Test Computations 608
V. Interpretation of the Test Results 610

VI. Additional Analytical Procedures for the Moses Test for Equal Variability and/or Related Tests 611
1. The normal approximation of the Moses test statistic for large sample sizes 611

VII. Additional Discussion of the Moses Test for Equal Variability 612
1. Power-efficiency of the Moses test for equal variability 612
2. Issue of repetitive resampling 613
3. Alternative nonparametric tests of dispersion 613

VIII. Additional Examples Illustrating the Use of the Moses Test for Equal Variability 613

Test 16: The Chi-Square Test for r × c Tables (Test 16a: The Chi-Square Test for Homogeneity; Test 16b: The Chi-Square Test of Independence (employed with a single sample)) 619

I. Hypothesis Evaluated with Test and Relevant Background Information 619
II. Examples 621
III. Null versus Alternative Hypotheses 622
IV. Test Computations 625
V. Interpretation of the Test Results 626

VI. Additional Analytical Procedures for the Chi-Square Test for r × c Tables and/or Related Tests 628
1. Yates' correction for continuity 628
2. Quick computational equation for a 2 × 2 table 629
3. Evaluation of a directional alternative hypothesis in the case of a 2 × 2 contingency table 630
4. Test 16c: The Fisher exact test 631
5. Test 16d: The z test for two independent proportions (and computation of sample size in reference to power) 637
6. Computation of a confidence interval for a difference between two proportions 643
7. Test 16e: The median test for independent samples 645
8. Extension of the chi-square test for r × c tables to contingency tables involving more than two rows and/or columns, and associated comparison procedures 647
9. The analysis of standardized residuals 653
10. Sources for computing the power of the chi-square test for r × c tables 655
11. Measures of association for r × c contingency tables 655
Test 16f: The contingency coefficient 657
Test 16g: The phi coefficient 659
Test 16h: Cramer's phi coefficient 660
Test 16i: Yule's Q 661
Test 16j: The odds ratio (and the concept of relative risk) (and Test 16j-a: Test of significance for an odds ratio and computation of a confidence interval for an odds ratio) 662
Test 16k: Cohen's kappa (and computation of a confidence interval for kappa, Test 16k-a: Test of significance for Cohen's kappa, and Test 16k-b: Test of significance for two independent values of Cohen's kappa) 669
12. Combining the results of multiple 2 × 2 contingency tables: 673
Heterogeneity chi-square analysis for a 2 × 2 contingency table 673
Test 16l: The Mantel-Haenszel analysis (Test 16l-a: Test of homogeneity of odds ratios for Mantel-Haenszel analysis, Test 16l-b: Summary odds ratio for Mantel-Haenszel analysis, and Test 16l-c: Mantel-Haenszel test of association) 676

VII. Additional Discussion of the Chi-Square Test for r × c Tables 688
1. Equivalency of the chi-square test for r × c tables when c = 2 with the t test for two independent samples (when r = 2) and the single-factor between-subjects analysis of variance (when r ≥ 2) 688
2. Test of equivalence for two independent proportions: Test 16m: The Westlake-Schuirmann test of equivalence of two independent proportions (and procedure for computing sample size in reference to the power of the test) 688
3. Test 16n: The log-likelihood ratio 698
4. Simpson's Paradox 700
5. Analysis of multidimensional contingency tables through use of a chi-square analysis 702
6. Test 16o: Analysis of multidimensional contingency tables with log-linear analysis 713

VIII. Additional Examples Illustrating the Use of the Chi-Square Test for r × c Tables 727
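The statistic for Test 16 compares each observed cell frequency with its expected frequency (row total × column total / grand total). A minimal sketch with invented counts (our own function; Yates' correction for 2 × 2 tables is not applied):

```python
def chi_square_rxc(table):
    """Chi-square statistic for an r x c contingency table;
    df = (r - 1)(c - 1)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi_sq = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n   # expected frequency
            chi_sq += (obs - exp) ** 2 / exp
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi_sq, df

chi_sq, df = chi_square_rxc([[30, 10], [10, 30]])
# every expected frequency is 20, so chi-square = 20.0 with df = 1
```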

Inferential Statistical Tests Employed with Two Dependent Samples (and Related Measures of Association/Correlation) 741

Test 17: The t Test for Two Dependent Samples 743

I. Hypothesis Evaluated with Test and Relevant Background Information 743
II. Example 744
III. Null versus Alternative Hypotheses 744
IV. Test Computations 745
V. Interpretation of the Test Results 747

VI. Additional Analytical Procedures for the t Test for Two Dependent Samples and/or Related Tests 748
1. Alternative equation for the t test for two dependent samples 748
2. The equation for the t test for two dependent samples when a value for a difference other than zero is stated in the null hypothesis 752
3. Test 17a: The t test for homogeneity of variance for two dependent samples: Evaluation of the homogeneity of variance assumption of the t test for two dependent samples 752
4. Computation of the power of the t test for two dependent samples and the application of Test 17b: Cohen's d index 755
5. Measure of magnitude of treatment effect for the t test for two dependent samples: Omega squared (Test 17c) 761


6. Computation of a confidence interval for the t test for two dependent samples 763
7. Test 17d: Sandler's A test 764
8. Test 17e: The z test for two dependent samples 765

VII. Additional Discussion of the t Test for Two Dependent Samples 768
1. The use of matched subjects in a dependent samples design 768
2. Relative power of the t test for two dependent samples and the t test for two independent samples 771
3. Counterbalancing and order effects 772
4. Analysis of a one-group pretest-posttest design with the t test for two dependent samples 774
5. Tests of equivalence: Test 17f: The Westlake-Schuirmann test of equivalence of two dependent treatments (and procedure for computing sample size in reference to the power of the test) and Test 17g: Tryon's test of equivalence of two dependent treatments 776

VIII. Additional Example Illustrating the Use of the t Test for Two Dependent Samples 785
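Test 17 reduces to a single-sample t test on the paired difference scores, which is why matching (Section VII.1) buys power. A minimal sketch with hypothetical paired scores:

```python
import math
import statistics

def t_two_dependent(x, y):
    """t test for two dependent samples, computed on the paired
    difference scores; df = n - 1."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

t, df = t_two_dependent([5, 6, 7, 8], [3, 5, 6, 6])
# differences [2, 1, 1, 2]: t ≈ 5.196 with df = 3
```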

Test 18: The Wilcoxon Matched-Pairs Signed-Ranks Test 791

I. Hypothesis Evaluated with Test and Relevant Background Information 791
II. Example 792
III. Null versus Alternative Hypotheses 792
IV. Test Computations 793
V. Interpretation of the Test Results 794

VI. Additional Analytical Procedures for the Wilcoxon Matched-Pairs Signed-Ranks Test and/or Related Tests 796
1. The normal approximation of the Wilcoxon T statistic for large sample sizes 796
2. The correction for continuity for the normal approximation of the Wilcoxon matched-pairs signed-ranks test 797
3. Tie correction for the normal approximation of the Wilcoxon test statistic 798
4. Computation of a confidence interval for a median difference between two dependent populations 799

VII. Additional Discussion of the Wilcoxon Matched-Pairs Signed-Ranks Test 801
1. Power-efficiency of the Wilcoxon matched-pairs signed-ranks test 801
2. Probability of superiority as a measure of effect size 801
3. Alternative nonparametric procedures for evaluating a design involving two dependent samples 801

VIII. Additional Examples Illustrating the Use of the Wilcoxon Matched-Pairs Signed-Ranks Test 802

Test 19: The Binomial Sign Test for Two Dependent Samples 805

I. Hypothesis Evaluated with Test and Relevant Background Information 805
II. Example 806
III. Null versus Alternative Hypotheses 806
IV. Test Computations 807
V. Interpretation of the Test Results 809


VI. Additional Analytical Procedures for the Binomial Sign Test for Two Dependent Samples and/or Related Tests 810
1. The normal approximation of the binomial sign test for two dependent samples with and without a correction for continuity 810
2. Computation of a confidence interval for the binomial sign test for two dependent samples 813
3. Sources for computing the power of the binomial sign test for two dependent samples, and comments on asymptotic relative efficiency of the test 814

VII. Additional Discussion of the Binomial Sign Test for Two Dependent Samples 815
1. The problem of an excessive number of zero difference scores 815
2. Equivalency of the binomial sign test for two dependent samples and the Friedman two-way analysis of variance by ranks when k = 2 815

VIII. Additional Examples Illustrating the Use of the Binomial Sign Test for Two Dependent Samples 815

Test 20: The McNemar Test 817

I. Hypothesis Evaluated with Test and Relevant Background Information 817
II. Examples 818
III. Null versus Alternative Hypotheses 820
IV. Test Computations 822
V. Interpretation of the Test Results 822

VI. Additional Analytical Procedures for the McNemar Test and/or Related Tests 823
1. Alternative equation for the McNemar test statistic based on the normal distribution 823
2. The correction for continuity for the McNemar test 824
3. Computation of the exact binomial probability for the McNemar test model with a small sample size 825
4. Computation of the power of the McNemar test 827
5. Computation of a confidence interval for the McNemar test 828
6. Computation of an odds ratio for the McNemar test 829
7. Additional analytical procedures for the McNemar test 830
8. Test 20a: The Gart test for order effects 830

VII. Additional Discussion of the McNemar Test 838
1. Alternative format for the McNemar test summary table and modified test equation 838
2. The effect of disregarding matching 839
3. Alternative nonparametric procedures for evaluating a design with two dependent samples involving categorical data 840
4. Test of equivalence for two dependent proportions: Test 20b: The Westlake-Schuirmann test of equivalence of two dependent proportions 840

VIII. Additional Examples Illustrating the Use of the McNemar Test 850

IX. Addendum 852
Extension of the McNemar test model beyond 2 × 2 contingency tables 852
1. Test 20c: The Bowker test of internal symmetry 852
2. Test 20d: The Stuart-Maxwell test of marginal homogeneity 856
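The McNemar statistic uses only the two discordant cells of the 2 × 2 table of matched responses. A minimal sketch (hypothetical counts; the continuity correction of Section VI.2 is omitted):

```python
def mcnemar(b, c):
    """McNemar chi-square from the discordant cells b and c of a
    2 x 2 table of matched categorical responses; df = 1."""
    return (b - c) ** 2 / (b + c)

# 15 pairs changed in one direction, 5 in the other
chi_sq = mcnemar(b=15, c=5)
# (15 - 5)^2 / 20 = 5.0
```

The concordant cells drop out entirely, which is the formal expression of the point made in Section VII.2: disregarding the matching discards information.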


Inferential Statistical Tests Employed with Two or More Independent Samples (and Related Measures of Association/Correlation) 865

Test 21: The Single-Factor Between-Subjects Analysis of Variance 867

I. Hypothesis Evaluated with Test and Relevant Background Information 867
II. Example 868
III. Null versus Alternative Hypotheses 868
IV. Test Computations 869
V. Interpretation of the Test Results 873

VI. Additional Analytical Procedures for the Single-Factor Between-Subjects Analysis of Variance and/or Related Tests 874
1. Comparisons following computation of the omnibus F value for the single-factor between-subjects analysis of variance (Planned versus unplanned comparisons (including simple versus complex comparisons); Linear contrasts; Orthogonal comparisons; Test 21a: Multiple t tests/Fisher's LSD test; Test 21b: The Bonferroni-Dunn test; Test 21c: Tukey's HSD test; Test 21d: The Newman-Keuls test; Test 21e: The Scheffé test; Test 21f: The Dunnett test; Additional discussion of comparison procedures and final recommendations; The computation of a confidence interval for a comparison) 874
2. Comparing the means of three or more groups when k ≥ 4 905
3. Evaluation of the homogeneity of variance assumption of the single-factor between-subjects analysis of variance (Test 11a: Hartley's Fmax test for homogeneity of variance, Test 21g: The Levene test for homogeneity of variance, Test 21h: The Brown-Forsythe test for homogeneity of variance) 906
4. Computation of the power of the single-factor between-subjects analysis of variance 913
5. Measures of magnitude of treatment effect for the single-factor between-subjects analysis of variance: Omega squared (Test 21i), Eta squared (Test 21j), and Cohen's f index (Test 21k) 916
6. Computation of a confidence interval for the mean of a treatment population 920

7. Trend analysis 922

VII. Additional Discussion of the Single-Factor Between-Subjects Analysis of Variance 934
1. Theoretical rationale underlying the single-factor between-subjects analysis of variance 934
2. Definitional equations for the single-factor between-subjects analysis of variance 936
3. Equivalency of the single-factor between-subjects analysis of variance and the t test for two independent samples when k = 2 938
4. Robustness of the single-factor between-subjects analysis of variance 939
5. Equivalency of the single-factor between-subjects analysis of variance and the t test for two independent samples with the chi-square test for r × c tables when c = 2 939


6. The general linear model 943
7. Fixed-effects versus random-effects models for the single-factor between-subjects analysis of variance 944
8. Multivariate analysis of variance (MANOVA) 944

VIII. Additional Examples Illustrating the Use of the Single-Factor Between-Subjects Analysis of Variance 945

IX. Addendum 946
1. Test 21l: The Single-Factor Between-Subjects Analysis of Covariance 946
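The definitional equations of Section VII.2 partition the total sum of squares into between- and within-group components. A minimal sketch of the F ratio for Test 21 (our own function and data):

```python
import statistics

def one_way_anova(groups):
    """Single-factor between-subjects ANOVA: F ratio and degrees of
    freedom from between- and within-group sums of squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    df_b, df_w = k - 1, n - k
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, df_b, df_w

F, df_b, df_w = one_way_anova([[1, 2, 3], [2, 3, 4], [4, 5, 6]])
# SS_between = 14, SS_within = 6: F = 7.0 with df = (2, 6)
```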

Test 22: The Kruskal-Wallis One-Way Analysis of Variance by Ranks 981

I. Hypothesis Evaluated with Test and Relevant Background Information 981
II. Example 982
III. Null versus Alternative Hypotheses 983
IV. Test Computations 983
V. Interpretation of the Test Results 985

VI. Additional Analytical Procedures for the Kruskal-Wallis One-WayAnalysis of Variance by Ranks and/or Related Tests 985

1. Tie correction for the Kruskal-Wallis one-way analysis of variance byranks 985

2. Pairwise comparisons following computation of the test statistic forthe Kruskal-Wallis one-way analysis of variance by ranks 986

VII. Additional Discussion of the Kruskal-Wallis One-Way Analysis ofVariance by Ranks 990

1. Exact tables of the Kruskal-Wallis distribution 9902. Equivalency of the Kruskal-Wallis one-way analysis of

variance by ranks and the Mann-Whitney {/test when k = 2 9903. Power-efficiency of the Kruskal-Wallis one-way analysis of

variance by ranks 9914. Alternative nonparametric rank-order procedures for evaluating a

design involving k independent samples 991VIII. Additional Examples Illustrating the Use of the Kruskal-Wallis One-

Way Analysis of Variance by Ranks 992IX. Addendum 993

1. Test 22a: The Jonckheere-Terpstra Test for OrderedAlternatives 993

Test 23: The van der Waerden Normal-Scores Test for k Independent Samples 1007

I. Hypothesis Evaluated with Test and Relevant Background Information 1007
II. Example 1008
III. Null versus Alternative Hypotheses 1008
IV. Test Computations 1009
V. Interpretation of the Test Results 1011
VI. Additional Analytical Procedures for the van der Waerden Normal-Scores Test for k Independent Samples and/or Related Tests 1012
1. Pairwise comparisons following computation of the test statistic for the van der Waerden normal-scores test for k independent samples 1012
VII. Additional Discussion of the van der Waerden Normal-Scores Test for k Independent Samples 1015

1. Alternative normal-scores tests 1015
VIII. Additional Examples Illustrating the Use of the van der Waerden Normal-Scores Test for k Independent Samples 1015

Inferential Statistical Tests Employed with Two or More Dependent Samples (and Related Measures of Association/Correlation) 1021

Test 24: The Single-Factor Within-Subjects Analysis of Variance 1023

I. Hypothesis Evaluated with Test and Relevant Background Information 1023
II. Example 1025
III. Null versus Alternative Hypotheses 1025
IV. Test Computations 1025
V. Interpretation of the Test Results 1030
VI. Additional Analytical Procedures for the Single-Factor Within-Subjects Analysis of Variance and/or Related Tests 1031
1. Comparisons following computation of the omnibus F value for the single-factor within-subjects analysis of variance (Test 24a: Multiple t tests/Fisher's LSD test; Test 24b: The Bonferroni-Dunn test; Test 24c: Tukey's HSD test; Test 24d: The Newman-Keuls test; Test 24e: The Scheffé test; Test 24f: The Dunnett test; The computation of a confidence interval for a comparison; Alternative methodology for computing MSres for a comparison) 1031
2. Comparing the means of three or more conditions when k ≥ 4 1039
3. Evaluation of the sphericity assumption underlying the single-factor within-subjects analysis of variance 1041
4. Computation of the power of the single-factor within-subjects analysis of variance 1047
5. Measures of magnitude of treatment effect for the single-factor within-subjects analysis of variance: Omega squared (Test 24g) and Cohen's f index (Test 24h) 1049
6. Computation of a confidence interval for the mean of a treatment population 1051
7. Test 24i: The intraclass correlation coefficient 1053
VII. Additional Discussion of the Single-Factor Within-Subjects Analysis of Variance 1055
1. Theoretical rationale underlying the single-factor within-subjects analysis of variance 1055
2. Definitional equations for the single-factor within-subjects analysis of variance 1058
3. Relative power of the single-factor within-subjects analysis of variance and the single-factor between-subjects analysis of variance 1061
4. Equivalency of the single-factor within-subjects analysis of variance and the t test for two dependent samples when k = 2 1062
5. The Latin square design 1063
VIII. Additional Examples Illustrating the Use of the Single-Factor Within-Subjects Analysis of Variance 1065

Test 25: The Friedman Two-Way Analysis of Variance by Ranks 1075

I. Hypothesis Evaluated with Test and Relevant Background Information 1075
II. Example 1076
III. Null versus Alternative Hypotheses 1076
IV. Test Computations 1077
V. Interpretation of the Test Results 1078
VI. Additional Analytical Procedures for the Friedman Two-Way Analysis of Variance by Ranks and/or Related Tests 1079
1. Tie correction for the Friedman two-way analysis of variance by ranks 1079
2. Pairwise comparisons following computation of the test statistic for the Friedman two-way analysis of variance by ranks 1080
VII. Additional Discussion of the Friedman Two-Way Analysis of Variance by Ranks 1085
1. Exact tables of the Friedman distribution 1085
2. Equivalency of the Friedman two-way analysis of variance by ranks and the binomial sign test for two dependent samples when k = 2 1085
3. Power-efficiency of the Friedman two-way analysis of variance by ranks 1086
4. Alternative nonparametric rank-order procedures for evaluating a design involving k dependent samples 1086
5. Relationship between the Friedman two-way analysis of variance by ranks and Kendall's coefficient of concordance 1087
VIII. Additional Examples Illustrating the Use of the Friedman Two-Way Analysis of Variance by Ranks 1087
IX. Addendum 1088
1. Test 25a: The Page Test for Ordered Alternatives 1088

Test 26: The Cochran Q Test 1099

I. Hypothesis Evaluated with Test and Relevant Background Information 1099
II. Example 1100
III. Null versus Alternative Hypotheses 1100
IV. Test Computations 1100
V. Interpretation of the Test Results 1102
VI. Additional Analytical Procedures for the Cochran Q Test and/or Related Tests 1102
1. Pairwise comparisons following computation of the test statistic for the Cochran Q test 1102
VII. Additional Discussion of the Cochran Q Test 1106
1. Issues relating to subjects who obtain the same score under all of the experimental conditions 1106
2. Equivalency of the Cochran Q test and the McNemar test when k = 2 1107
3. Alternative nonparametric procedures with categorical data for evaluating a design involving k dependent samples 1109
VIII. Additional Examples Illustrating the Use of the Cochran Q Test 1109

Inferential Statistical Test Employed with a Factorial Design (and Related Measures of Association/Correlation) 1117

Test 27: The Between-Subjects Factorial Analysis of Variance 1119

I. Hypothesis Evaluated with Test and Relevant Background Information 1119
II. Example 1120
III. Null versus Alternative Hypotheses 1120
IV. Test Computations 1122
V. Interpretation of the Test Results 1128
VI. Additional Analytical Procedures for the Between-Subjects Factorial Analysis of Variance and/or Related Tests 1132
1. Comparisons following computation of the F values for the between-subjects factorial analysis of variance (Test 27a: Multiple t tests/Fisher's LSD test; Test 27b: The Bonferroni-Dunn test; Test 27c: Tukey's HSD test; Test 27d: The Newman-Keuls test; Test 27e: The Scheffé test; Test 27f: The Dunnett test; Comparisons between the marginal means; Evaluation of an omnibus hypothesis involving more than two marginal means; Comparisons between specific groups that are a combination of both factors; The computation of a confidence interval for a comparison; Analysis of simple effects) 1132
2. Evaluation of the homogeneity of variance assumption of the between-subjects factorial analysis of variance 1143
3. Computation of the power of the between-subjects factorial analysis of variance 1144
4. Measures of magnitude of treatment effect for the between-subjects factorial analysis of variance: Omega squared (Test 27g) and Cohen's f index (Test 27h) 1146
5. Computation of a confidence interval for the mean of a population represented by a group 1150
VII. Additional Discussion of the Between-Subjects Factorial Analysis of Variance 1150
1. Theoretical rationale underlying the between-subjects factorial analysis of variance 1150
2. Definitional equations for the between-subjects factorial analysis of variance 1151
3. Unequal sample sizes 1153
4. The randomized-blocks design 1154
5. Additional comments on the between-subjects factorial analysis of variance (Fixed-effects versus random-effects versus mixed-effects models; Nested factors/hierarchical designs and designs involving more than two factors; Screening designs) 1158
VIII. Additional Examples Illustrating the Use of the Between-Subjects Factorial Analysis of Variance 1161
IX. Addendum 1162
Discussion of and computational procedures for additional analysis of variance procedures for factorial designs 1162
1. Test 27i: The factorial analysis of variance for a mixed design 1162
Analysis of a crossover design with a factorial analysis of variance for a mixed design 1167
2. Test 27j: Analysis of variance for a Latin square design 1182

3. Test 27k: The within-subjects factorial analysis of variance 1203
4. Analysis of higher order factorial designs 1208

Measures of Association/Correlation 1219

Test 28: The Pearson Product-Moment Correlation Coefficient 1221
I. Hypothesis Evaluated with Test and Relevant Background Information 1221
II. Example 1225
III. Null versus Alternative Hypotheses 1225
IV. Test Computations 1226
V. Interpretation of the Test Results (Test 28a: Test of significance for a Pearson product-moment correlation coefficient; The coefficient of determination) 1228
VI. Additional Analytical Procedures for the Pearson Product-Moment Correlation Coefficient and/or Related Tests 1231
1. Derivation of a regression line 1231
2. The standard error of estimate 1240
3. Computation of a confidence interval for the value of the criterion variable 1241
4. Computation of a confidence interval for a Pearson product-moment correlation coefficient 1243
5. Test 28b: Test for evaluating the hypothesis that the true population correlation is a specific value other than zero 1244
6. Computation of power for the Pearson product-moment correlation coefficient 1245
7. Test 28c: Test for evaluating a hypothesis on whether there is a significant difference between two independent correlations 1247
8. Test 28d: Test for evaluating a hypothesis on whether k independent correlations are homogeneous 1249
9. Test 28e: Test for evaluating the null hypothesis H₀: ρXZ = ρYZ 1250
10. Tests for evaluating a hypothesis regarding one or more regression coefficients (Test 28f: Test for evaluating the null hypothesis H₀: β = 0; Test 28g: Test for evaluating the null hypothesis H₀: β₁ = β₂) 1252
11. Additional correlational procedures 1254
VII. Additional Discussion of the Pearson Product-Moment Correlation Coefficient 1255
1. The definitional equation for the Pearson product-moment correlation coefficient 1255
2. Covariance 1256
3. The homoscedasticity assumption of the Pearson product-moment correlation coefficient 1257
4. Residuals, analysis of variance for regression analysis and regression diagnostics 1258
5. Autocorrelation (and Test 28h: Durbin-Watson test) 1269
6. The phi coefficient as a special case of the Pearson product-moment correlation coefficient 1284
7. Ecological correlation 1285
8. Cross-lagged panel and regression-discontinuity designs 1288
VIII. Additional Examples Illustrating the Use of the Pearson Product-Moment Correlation Coefficient 1295

IX. Addendum 1295
1. Bivariate measures of correlation that are related to the Pearson product-moment correlation coefficient (Test 28i: The point-biserial correlation coefficient (and Test 28i-a: Test of significance for a point-biserial correlation coefficient); Test 28j: The biserial correlation coefficient (and Test 28j-a: Test of significance for a biserial correlation coefficient); Test 28k: The tetrachoric correlation coefficient (and Test 28k-a: Test of significance for a tetrachoric correlation coefficient)) 1296
2. Meta-analysis and related topics (Measures of effect size; Meta-analytic procedures (Test 28l: Procedure for comparing k studies with respect to significance level; Test 28m: The Stouffer procedure for obtaining a combined significance level (p value) for k studies; The file drawer problem; Test 28n: Procedure for comparing k studies with respect to effect size; Test 28o: Procedure for obtaining a combined effect size for k studies); Alternative meta-analytic procedures; Practical implications of magnitude of effect size value; Test 28p: Binomial effect size display; The significance test controversy; The minimum-effect hypothesis testing model) 1306

Test 29: Spearman's Rank-Order Correlation Coefficient 1353

I. Hypothesis Evaluated with Test and Relevant Background Information 1353
II. Example 1355
III. Null versus Alternative Hypotheses 1355
IV. Test Computations 1356
V. Interpretation of the Test Results (Test 29a: Test of significance for Spearman's rank-order correlation coefficient) 1357
VI. Additional Analytical Procedures for Spearman's Rank-Order Correlation Coefficient and/or Related Tests 1359
1. Tie correction for Spearman's rank-order correlation coefficient 1359
2. Spearman's rank-order correlation coefficient as a special case of the Pearson product-moment correlation coefficient 1361
3. Regression analysis and Spearman's rank-order correlation coefficient 1362
4. Partial rank correlation 1363
5. Use of Fisher's zr transformation with Spearman's rank-order correlation coefficient 1364
VII. Additional Discussion of Spearman's Rank-Order Correlation Coefficient 1364
1. The relationship between Spearman's rank-order correlation coefficient, Kendall's coefficient of concordance, and the Friedman two-way analysis of variance by ranks 1364
2. Power efficiency of Spearman's rank-order correlation coefficient 1367
3. Brief discussion of Kendall's tau: An alternative measure of association for two sets of ranks 1367
4. Weighted rank/top-down correlation 1367
VIII. Additional Examples Illustrating the Use of Spearman's Rank-Order Correlation Coefficient 1368

Test 30: Kendall's Tau 1371
I. Hypothesis Evaluated with Test and Relevant Background Information 1371
II. Example 1373
III. Null versus Alternative Hypotheses 1373
IV. Test Computations 1374
V. Interpretation of the Test Results (Test 30a: Test of significance for Kendall's tau) 1376
VI. Additional Analytical Procedures for Kendall's Tau and/or Related Tests 1379
1. Tie correction for Kendall's tau 1379
2. Regression analysis and Kendall's tau 1382
3. Partial rank correlation 1382
4. Sources for computing a confidence interval for Kendall's tau 1382
VII. Additional Discussion of Kendall's Tau 1382
1. Power efficiency of Kendall's tau 1382
2. Kendall's coefficient of agreement 1382

VIII. Additional Examples Illustrating the Use of Kendall's Tau 1382

Test 31: Kendall's Coefficient of Concordance 1387

I. Hypothesis Evaluated with Test and Relevant Background Information 1387
II. Example 1388
III. Null versus Alternative Hypotheses 1388
IV. Test Computations 1389
V. Interpretation of the Test Results (Test 31a: Test of significance for Kendall's coefficient of concordance) 1390
VI. Additional Analytical Procedures for Kendall's Coefficient of Concordance and/or Related Tests 1391
1. Tie correction for Kendall's coefficient of concordance 1391
VII. Additional Discussion of Kendall's Coefficient of Concordance 1393
1. Relationship between Kendall's coefficient of concordance and Spearman's rank-order correlation coefficient 1393
2. Relationship between Kendall's coefficient of concordance and the Friedman two-way analysis of variance by ranks 1394
3. Weighted rank/top-down concordance 1396
4. Kendall's coefficient of concordance versus the intraclass correlation coefficient 1396
VIII. Additional Examples Illustrating the Use of Kendall's Coefficient of Concordance 1398

Test 32: Goodman and Kruskal's Gamma 1403

I. Hypothesis Evaluated with Test and Relevant Background Information 1403
II. Example 1404
III. Null versus Alternative Hypotheses 1405
IV. Test Computations 1406
V. Interpretation of the Test Results (Test 32a: Test of significance for Goodman and Kruskal's gamma) 1409
VI. Additional Analytical Procedures for Goodman and Kruskal's Gamma and/or Related Tests 1410

1. The computation of a confidence interval for the value of Goodman and Kruskal's gamma 1410
2. Test 32b: Test for evaluating the null hypothesis H₀: γ₁ = γ₂ 1411
3. Sources for computing a partial correlation coefficient for Goodman and Kruskal's gamma 1412
VII. Additional Discussion of Goodman and Kruskal's Gamma 1412
1. Relationship between Goodman and Kruskal's gamma and Yule's Q 1412
2. Somers' delta as an alternative measure of association for an ordered contingency table 1412
VIII. Additional Examples Illustrating the Use of Goodman and Kruskal's Gamma 1413

Multivariate Statistical Analysis 1417

Matrix Algebra and Multivariate Analysis 1419

I. Introductory Comments on Multivariate Statistical Analysis 1419
II. Introduction to Matrix Algebra 1420

Test 33: Multivariate Regression 1433

I. Hypothesis Evaluated with Test and Relevant Background Information 1433

II. Examples 1439
III. Null versus Alternative Hypotheses 1441
IV/V. Test Computations and Interpretation of the Test Results
Test computations and interpretation of results for Example 33.1 (Computation of the multiple correlation coefficient; The coefficient of multiple determination; Test of significance for a multiple correlation coefficient; The multiple regression equation; The standard error of multiple estimate; Computation of a confidence interval for Y′; Evaluation of the relative importance of the predictor variables; Evaluating the significance of a regression coefficient; Computation of a confidence interval for a regression coefficient; Analysis of variance for multiple regression; Semipartial and partial correlation (Test 33a: Computation of a semipartial correlation coefficient; Test of significance for a semipartial correlation coefficient; Test 33b: Computation of a partial correlation coefficient; Test of significance for a partial correlation coefficient)) 1442
Test computations and interpretation of results for Example 33.2 with SPSS 1459
VI. Additional Analytical Procedures for Multiple Regression and/or Related Tests 1470
1. Cross-validation of sample data 1470
VII. Additional Discussion of Multivariate Regression 1471
1. Final comments on multiple regression analysis 1471
2. Causal modeling: Path analysis and structural equation modeling 1472
3. Brief note on logistic regression 1473

VIII. Additional Examples Illustrating the Use of Multivariate Regression 1474

Test 34: Hotelling's T² 1483
I. Hypothesis Evaluated with Test and Relevant Background Information 1483
II. Example 1483
III. Null versus Alternative Hypotheses 1484
IV. Test Computations 1485
V. Interpretation of the Test Results 1486
VI. Additional Analytical Procedures for Hotelling's T² and/or Related Tests 1488
1. Additional analyses following the test of the omnibus null hypothesis 1488
2. Test 34a: The single-sample Hotelling's T² 1489
3. Test 34b: The use of the single-sample Hotelling's T² to evaluate a dependent samples design 1491
VII. Additional Discussion of Hotelling's T² 1495
1. Hotelling's T² and Mahalanobis' D² statistic 1495
VIII. Additional Examples Illustrating the Use of Hotelling's T² 1495

Test 35: Multivariate Analysis of Variance 1499

I. Hypothesis Evaluated with Test and Relevant Background Information 1499
II. Example 1500
III. Null versus Alternative Hypotheses 1501
IV. Test Computations 1502
V. Interpretation of the Test Results 1503
VI. Additional Analytical Procedures for the Multivariate Analysis of Variance and/or Related Tests 1510
VII. Additional Discussion of the Multivariate Analysis of Variance 1510
1. Conceptualizing the hypothesis for the multivariate analysis of variance within the context of a linear combination 1510
2. Multicollinearity and the multivariate analysis of variance 1510
VIII. Additional Examples Illustrating the Use of the Multivariate Analysis of Variance 1511

Test 36: Multivariate Analysis of Covariance 1515

I. Hypothesis Evaluated with Test and Relevant Background Information 1515
II. Example 1516
III. Null versus Alternative Hypotheses 1517
IV. Test Computations 1517
V. Interpretation of the Test Results 1518
VI. Additional Analytical Procedures for the Multivariate Analysis of Covariance and/or Related Tests 1521
VII. Additional Discussion of the Multivariate Analysis of Covariance 1521
1. Multiple covariates 1521
VIII. Additional Examples Illustrating the Use of the Multivariate Analysis of Covariance 1522

Test 37: Discriminant Function Analysis 1525

I. Hypothesis Evaluated with Test and Relevant Background Information 1525
II. Examples 1527

III. Null versus Alternative Hypotheses 1528

IV. Test Computations 1529
V. Interpretation of the Test Results 1531
Analysis of Example 37.1 1531
Analysis of Example 37.2 1543
VI. Additional Analytical Procedures for Discriminant Function Analysis and/or Related Tests 1552
VII. Additional Discussion of Discriminant Function Analysis 1552
VIII. Additional Examples Illustrating the Use of Discriminant Function Analysis 1552

Test 38: Canonical Correlation 1557

I. Hypothesis Evaluated with Test and Relevant Background Information 1557
II. Example 1560
III. Null versus Alternative Hypotheses 1560
IV. Test Computations 1560
V. Interpretation of the Test Results 1561
VI. Additional Analytical Procedures for Canonical Correlation and/or Related Tests 1574
VII. Additional Discussion of Canonical Correlation 1574
VIII. Additional Examples Illustrating the Use of Canonical Correlation 1575

Test 39: Logistic Regression 1581

I. Hypothesis Evaluated with Test and Relevant Background Information 1581
II. Example 1584
III. Null versus Alternative Hypotheses 1584
IV. Test Computations 1587
V. Interpretation of the Test Results 1590
Results for a binary logistic regression with one predictor variable 1590
Results for a binary logistic regression with multiple predictor variables 1598
VI. Additional Analytical Procedures for Logistic Regression and/or Related Tests 1605
VII. Additional Discussion of Logistic Regression 1606
VIII. Additional Examples Illustrating the Use of Logistic Regression 1606

Test 40: Principal Components Analysis and Factor Analysis 1615

I. Hypothesis Evaluated with Test and Relevant Background Information 1615
II. Example 1618
III. Null versus Alternative Hypotheses 1618
IV. Test Computations 1618
V. Interpretation of the Test Results 1622
VI. Additional Analytical Procedures for Principal Components Analysis and Factor Analysis and/or Related Tests 1634
1. Principal axis factor analysis of Example 40.1 1634
VII. Additional Discussion of Principal Components Analysis and Factor Analysis 1635
1. Criticisms of factor analytic procedures 1635
2. Cluster analysis 1635
3. Multidimensional scaling 1636

VIII. Additional Examples Illustrating the Use of Principal Components Analysis and Factor Analysis 1637

Appendix: Tables 1647

Table A1. Table of the Normal Distribution 1651
Table A2. Table of Student's t Distribution 1656
Table A3. Power Curves for Student's t Distribution 1657
Table A4. Table of the Chi-Square Distribution 1661
Table A5. Table of Critical T Values for Wilcoxon's Signed-Ranks and Matched-Pairs Signed-Ranks Tests 1662
Table A6. Table of the Binomial Distribution, Individual Probabilities 1663
Table A7. Table of the Binomial Distribution, Cumulative Probabilities 1666
Table A8. Table of Critical Values for the Single-Sample Runs Test 1669
Table A9. Table of the Fmax Distribution 1670
Table A10. Table of the F Distribution 1671
Table A11. Table of Critical Values for Mann-Whitney U Statistic 1679
Table A12. Table of Sandler's A Statistic 1681
Table A13. Table of the Studentized Range Statistic 1682
Table A14. Table of Dunnett's Modified t Statistic for a Control Group Comparison 1684
Table A15. Graphs of the Power Function for the Analysis of Variance 1686
Table A16. Table of Critical Values for Pearson r 1690
Table A17. Table of Fisher's zr Transformation 1691
Table A18. Table of Critical Values for Spearman's Rho 1692
Table A19. Table of Critical Values for Kendall's Tau 1693
Table A20. Table of Critical Values for Kendall's Coefficient of Concordance 1694
Table A21. Table of Critical Values for the Kolmogorov-Smirnov Goodness-of-Fit Test for a Single Sample 1695
Table A22. Table of Critical Values for the Lilliefors Test for Normality 1696
Table A23. Table of Critical Values for the Kolmogorov-Smirnov Test for Two Independent Samples 1697
Table A24. Table of Critical Values for the Jonckheere-Terpstra Test Statistic 1699
Table A25. Table of Critical Values for the Page Test Statistic 1701
Table A26. Table of Extreme Studentized Deviate Outlier Statistic 1703
Table A27. Table of Durbin-Watson Test Statistic 1704

Index 1707