
  • HANDBOOK of PSYCHOLOGY

    VOLUME 2

    RESEARCH METHODS IN PSYCHOLOGY

    John A. Schinka

    Wayne F. Velicer

    Volume Editors

    Irving B. Weiner

    Editor-in-Chief

    John Wiley & Sons, Inc.

    schi_fm.qxd 9/6/02 11:48 AM Page iii


  • This book is printed on acid-free paper.

    Copyright © 2003 by John Wiley & Sons, Inc., Hoboken, New Jersey. All rights reserved.

    Published simultaneously in Canada.

    No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail: [email protected].

    Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

    This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering professional services. If legal, accounting, medical, psychological or any other expert assistance is required, the services of a competent professional person should be sought.

    Designations used by companies to distinguish their products are often claimed as trademarks. In all instances where John Wiley & Sons, Inc. is aware of a claim, the product names appear in initial capital or all capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration.

    For general information on our other products and services please contact our Customer Care Department within the U.S. at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

    Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

    Library of Congress Cataloging-in-Publication Data:

    Handbook of psychology / Irving B. Weiner, editor-in-chief.
        p. cm.

    Includes bibliographical references and indexes.

    Contents: v. 1. History of psychology / edited by Donald K. Freedheim — v. 2. Research methods in psychology / edited by John A. Schinka, Wayne F. Velicer — v. 3. Biological psychology / edited by Michela Gallagher, Randy J. Nelson — v. 4. Experimental psychology / edited by Alice F. Healy, Robert W. Proctor — v. 5. Personality and social psychology / edited by Theodore Millon, Melvin J. Lerner — v. 6. Developmental psychology / edited by Richard M. Lerner, M. Ann Easterbrooks, Jayanthi Mistry — v. 7. Educational psychology / edited by William M. Reynolds, Gloria E. Miller — v. 8. Clinical psychology / edited by George Stricker, Thomas A. Widiger — v. 9. Health psychology / edited by Arthur M. Nezu, Christine Maguth Nezu, Pamela A. Geller — v. 10. Assessment psychology / edited by John R. Graham, Jack A. Naglieri — v. 11. Forensic psychology / edited by Alan M. Goldstein — v. 12. Industrial and organizational psychology / edited by Walter C. Borman, Daniel R. Ilgen, Richard J. Klimoski.

    ISBN 0-471-17669-9 (set) — ISBN 0-471-38320-1 (cloth : alk. paper : v. 1) — ISBN 0-471-38513-1 (cloth : alk. paper : v. 2) — ISBN 0-471-38403-8 (cloth : alk. paper : v. 3) — ISBN 0-471-39262-6 (cloth : alk. paper : v. 4) — ISBN 0-471-38404-6 (cloth : alk. paper : v. 5) — ISBN 0-471-38405-4 (cloth : alk. paper : v. 6) — ISBN 0-471-38406-2 (cloth : alk. paper : v. 7) — ISBN 0-471-39263-4 (cloth : alk. paper : v. 8) — ISBN 0-471-38514-X (cloth : alk. paper : v. 9) — ISBN 0-471-38407-0 (cloth : alk. paper : v. 10) — ISBN 0-471-38321-X (cloth : alk. paper : v. 11) — ISBN 0-471-38408-9 (cloth : alk. paper : v. 12)

    1. Psychology. I. Weiner, Irving B.

    BF121.H1955 2003
    150—dc21

    2002066380

    Printed in the United States of America.

    10 9 8 7 6 5 4 3 2 1


  • Editorial Board

    Volume 1: History of Psychology
    Donald K. Freedheim, PhD, Case Western Reserve University, Cleveland, Ohio

    Volume 2: Research Methods in Psychology
    John A. Schinka, PhD, University of South Florida, Tampa, Florida
    Wayne F. Velicer, PhD, University of Rhode Island, Kingston, Rhode Island

    Volume 3: Biological Psychology
    Michela Gallagher, PhD, Johns Hopkins University, Baltimore, Maryland
    Randy J. Nelson, PhD, Ohio State University, Columbus, Ohio

    Volume 4: Experimental Psychology
    Alice F. Healy, PhD, University of Colorado, Boulder, Colorado
    Robert W. Proctor, PhD, Purdue University, West Lafayette, Indiana

    Volume 5: Personality and Social Psychology
    Theodore Millon, PhD, Institute for Advanced Studies in Personology and Psychopathology, Coral Gables, Florida
    Melvin J. Lerner, PhD, Florida Atlantic University, Boca Raton, Florida

    Volume 6: Developmental Psychology
    Richard M. Lerner, PhD, M. Ann Easterbrooks, PhD, and Jayanthi Mistry, PhD, Tufts University, Medford, Massachusetts

    Volume 7: Educational Psychology
    William M. Reynolds, PhD, Humboldt State University, Arcata, California
    Gloria E. Miller, PhD, University of Denver, Denver, Colorado

    Volume 8: Clinical Psychology
    George Stricker, PhD, Adelphi University, Garden City, New York
    Thomas A. Widiger, PhD, University of Kentucky, Lexington, Kentucky

    Volume 9: Health Psychology
    Arthur M. Nezu, PhD, Christine Maguth Nezu, PhD, and Pamela A. Geller, PhD, Drexel University, Philadelphia, Pennsylvania

    Volume 10: Assessment Psychology
    John R. Graham, PhD, Kent State University, Kent, Ohio
    Jack A. Naglieri, PhD, George Mason University, Fairfax, Virginia

    Volume 11: Forensic Psychology
    Alan M. Goldstein, PhD, John Jay College of Criminal Justice–CUNY, New York, New York

    Volume 12: Industrial and Organizational Psychology
    Walter C. Borman, PhD, University of South Florida, Tampa, Florida
    Daniel R. Ilgen, PhD, Michigan State University, East Lansing, Michigan
    Richard J. Klimoski, PhD, George Mason University, Fairfax, Virginia


  • My efforts in this work are proudly dedicated to Katherine, Christy, and John C. Schinka.

    J. A. S.

    This work is dedicated to Sue, the perfect companion for life’s many journeys and the center of my personal universe.

    W. F. V.


  • Handbook of Psychology Preface

    Psychology at the beginning of the twenty-first century has become a highly diverse field of scientific study and applied technology. Psychologists commonly regard their discipline as the science of behavior, and the American Psychological Association has formally designated 2000 to 2010 as the “Decade of Behavior.” The pursuits of behavioral scientists range from the natural sciences to the social sciences and embrace a wide variety of objects of investigation. Some psychologists have more in common with biologists than with most other psychologists, and some have more in common with sociologists than with most of their psychological colleagues. Some psychologists are interested primarily in the behavior of animals, some in the behavior of people, and others in the behavior of organizations. These and other dimensions of difference among psychological scientists are matched by equal if not greater heterogeneity among psychological practitioners, who currently apply a vast array of methods in many different settings to achieve highly varied purposes.

    Psychology has been rich in comprehensive encyclopedias and in handbooks devoted to specific topics in the field. However, there has not previously been any single handbook designed to cover the broad scope of psychological science and practice. The present 12-volume Handbook of Psychology was conceived to occupy this place in the literature. Leading national and international scholars and practitioners have collaborated to produce 297 authoritative and detailed chapters covering all fundamental facets of the discipline, and the Handbook has been organized to capture the breadth and diversity of psychology and to encompass interests and concerns shared by psychologists in all branches of the field.

    Two unifying threads run through the science of behavior. The first is a common history rooted in conceptual and empirical approaches to understanding the nature of behavior. The specific histories of all specialty areas in psychology trace their origins to the formulations of the classical philosophers and the methodology of the early experimentalists, and appreciation for the historical evolution of psychology in all of its variations transcends individual identities as being one kind of psychologist or another. Accordingly, Volume 1 in the Handbook is devoted to the history of psychology as it emerged in many areas of scientific study and applied technology.

    A second unifying thread in psychology is a commitment to the development and utilization of research methods suitable for collecting and analyzing behavioral data. With attention both to specific procedures and their application in particular settings, Volume 2 addresses research methods in psychology.

    Volumes 3 through 7 of the Handbook present the substantive content of psychological knowledge in five broad areas of study: biological psychology (Volume 3), experimental psychology (Volume 4), personality and social psychology (Volume 5), developmental psychology (Volume 6), and educational psychology (Volume 7). Volumes 8 through 12 address the application of psychological knowledge in five broad areas of professional practice: clinical psychology (Volume 8), health psychology (Volume 9), assessment psychology (Volume 10), forensic psychology (Volume 11), and industrial and organizational psychology (Volume 12). Each of these volumes reviews what is currently known in these areas of study and application and identifies pertinent sources of information in the literature. Each discusses unresolved issues and unanswered questions and proposes future directions in conceptualization, research, and practice. Each of the volumes also reflects the investment of scientific psychologists in practical applications of their findings and the attention of applied psychologists to the scientific basis of their methods.

    The Handbook of Psychology was prepared for the purpose of educating and informing readers about the present state of psychological knowledge and about anticipated advances in behavioral science research and practice. With this purpose in mind, the individual Handbook volumes address the needs and interests of three groups. First, for graduate students in behavioral science, the volumes provide advanced instruction in the basic concepts and methods that define the fields they cover, together with a review of current knowledge, core literature, and likely future developments. Second, in addition to serving as graduate textbooks, the volumes offer professional psychologists an opportunity to read and contemplate the views of distinguished colleagues concerning the central thrusts of research and leading edges of practice in their respective fields. Third, for psychologists seeking to become conversant with fields outside their own specialty and for persons outside of psychology seeking information about psychological matters, the Handbook volumes serve as a reference source for expanding their knowledge and directing them to additional sources in the literature.

    The preparation of this Handbook was made possible by the diligence and scholarly sophistication of the 25 volume editors and co-editors who constituted the Editorial Board. As Editor-in-Chief, I want to thank each of them for the pleasure of their collaboration in this project. I compliment them for having recruited an outstanding cast of contributors to their volumes and then working closely with these authors to achieve chapters that will each stand in their own right as valuable contributions to the literature. I would like finally to express my appreciation to the editorial staff of John Wiley and Sons for the opportunity to share in the development of this project and its pursuit to fruition, most particularly to Jennifer Simon, Senior Editor, and her two assistants, Mary Porterfield and Isabel Pratt. Without Jennifer’s vision of the Handbook and her keen judgment and unflagging support in producing it, the occasion to write this preface would not have arrived.

    IRVING B. WEINER
    Tampa, Florida


  • Volume Preface


    A scientific discipline is defined in many ways by the research methods it employs. These methods can be said to represent the common language of the discipline’s researchers. Consistent with the evolution of a lexicon, new research methods frequently arise from the development of new content areas. By every available measure—number of researchers, number of publications, number of journals, number of new subdisciplines—psychology has undergone a tremendous growth over the last half-century. This growth is reflected in a parallel increase in the number of new research methods available.

    As we were planning and editing this volume, we discussed on many occasions the extent to which psychology and the available research methods have become increasingly complex over the course of our careers. When our generation of researchers began their careers in the late 1960s and early 1970s, experimental design was largely limited to simple between-group designs, and data analysis was dominated by a single method, the analysis of variance. A few other approaches were employed, but by a limited number of researchers. Multivariate statistics had been developed, but multiple regression analysis was the only method that was applied with any frequency. Factor analysis was used almost exclusively as a method in scale development. Classical test theory was the basis of most psychological and educational measures. Analysis of data from studies that did not meet either the design or measurement assumptions required for an analysis of variance was covered for most researchers by a single book on nonparametric statistics by Siegel (1956). As a review of the contents of this volume illustrates, the choice of experimental and analytic methods available to the present-day researcher is much broader. It would be fair to say that the researcher in the 1960s had to formulate research questions to fit the available methods. Currently, there are research methods available to address most research questions.

    In the history of science, an explosion of knowledge is usually the result of an advance in technology, new theoretical models, or unexpected empirical findings. Advances in research methods have occurred as the result of all three factors, typically in an interactive manner. Some of the specific factors include advances in instrumentation and measurement technology, the availability of inexpensive desktop computers to perform complex methods of data analysis, increased computer capacity allowing for more intense analysis of larger datasets, computer simulations that permit the evaluation of procedures across a wide variety of situations, new approaches to data analysis and statistical control, and advances in companion sciences that opened pathways to the exploration of behavior and created new areas of research specialization and collaboration.

    Consider the advances since the publication of the first edition of Kirk’s (1968) text on experimental design. At that time most studies were relatively small N experiments that were conducted in psychology laboratories. Research activity has subsequently exploded in applied and clinical areas, with a proliferation of new journals largely dedicated to quasi-experimental studies and studies in the natural environment (e.g., in neuropsychology and health psychology). Techniques such as polymerase chain reaction allow psychologists to test specific genes as risk candidates for behavioral disorders. These studies rely on statistical procedures that are still largely ignored by many researchers (e.g., logistic regression, structural equation modeling). Brain imaging procedures such as magnetic resonance imaging, magnetoencephalography, and positron-emission tomography provide cognitive psychologists and neuropsychologists the opportunity to study cortical activity on-line. Clinical trials involving behavioral interventions applied to large, representative samples are commonplace in health psychology. Research employing each of these procedures requires not only highly specific and rigorous research methods, but also special methods for handling and analyzing extremely large volumes of data. Even in more traditional areas of research that continue to rely on group experimental designs, issues of measuring practical significance, determination of sample size and power, and procedures for handling nuisance variables are now important concerns. Not surprisingly, the third edition of Kirk’s (1995) text has grown in page length by 60%.

    Our review of these trends leads to several conclusions, which are reflected in the selection of topics covered by the chapters in this volume. Six features appear to characterize the evolution in research methodology in psychology.

    First, there has been a focus on the development of procedures that employ statistical control rather than experimental control. Because most of the recent growth involves research in areas that preclude direct control of independent variables, multivariate statistics and the development of methods such as path analysis and structural equation modeling have been critical developments. The use of statistical control has allowed psychology to move from the carefully controlled confines of the laboratory to the natural environment.

    Second, there has been an increasing focus on construct-driven, or latent-variable, research. A construct is defined by multiple observed variables. Constructs can be viewed as more reliable and more generalizable than a single observed variable. Constructs serve to organize a large set of observed variables, resulting in parsimony. Constructs are also theoretically based. This theory-based approach serves to guide study design, the choice of variables, the data analysis, and the data interpretation.

    Third, there has been an increasing emphasis on the development of new measures and new measurement models. This is not a new trend but an acceleration of an old trend. The behavioral sciences have always placed the most emphasis on the issue of measurement. With the movement of the field out of the laboratory combined with advances in technology, the repertoire of measures, the quality of the measures, and the sophistication of the measurement models have all increased dramatically.

    Fourth, there is increasing recognition of the importance of the temporal dimension in understanding a broad range of psychological phenomena. We have become a more intervention-oriented science, recognizing not only the complexity of treatment effects but also the importance of the change in patterns of the effects over time. The effects of an intervention may be very different at different points in time. New statistical models for modeling temporal data have resulted.

    Fifth, new methods of analysis have been developed that no longer require the assumption of a continuous, equal-interval, normally distributed variable. Previously, researchers had the choice between very simple but limited methods of data analysis that corresponded to the properties of the measure or more complex sophisticated methods of analysis that assumed, often inappropriately, that the measure met very rigid assumptions. New methods have been developed for categorical, ordinal, or simply nonnormal variables that can perform an equally sophisticated analysis.

    Sixth, the importance of individual differences is increasingly emphasized in intervention studies. Psychology has always been interested in individual differences, but methods of data analysis have focused almost entirely on the relationships between variables. Individuals were studied as members of groups, and individual differences served only to inflate the error variance. New techniques permit researchers to focus on the individual and model individual differences. This becomes increasingly important as we recognize that interventions do not affect everyone in exactly the same ways and that interventions become more and more tailored to the individual.

    The text is organized into four parts. The first part, titled “Foundations of Research,” addresses issues that are fundamental to all behavioral science research. The focus is on study design, data management, data reduction, and data synthesis. The first chapter, “Experimental Design” by Roger E. Kirk, provides an overview of the basic considerations that go into the design of a study. Once, a chapter on this topic would have had to devote a great deal of attention to computational procedures. The availability of computers permits a shift in focus to the conceptual rather than the computational issues. The second chapter, “Exploratory Data Analysis” by John T. Behrens and Chong-ho Yu, reminds us of the fundamental importance of looking at data in the most basic ways as a first step in any data analysis. In some ways this represents a “back to the future” chapter. Advances in computer-based graphical methods have brought a great deal of sophistication to this very basic first step.

    The third chapter, “Power: Basics, Practical Problems, and Possible Solutions” by Rand R. Wilcox, reflects the critical change in focus for psychological research. Originally, the central focus of a test of significance was on controlling Type I error rates. The late Jacob Cohen emphasized that researchers should be equally concerned by Type II errors. This resulted in an emphasis on the careful planning of a study and a concern with effect size and selecting the appropriate sample size. Wilcox updates and extends these concepts. Chapter 4, “Methods for Handling Missing Data” by John W. Graham, Patricio E. Cumsille, and Elvira Elek-Fisk, describes the impressive statistical advances in addressing the common practical problem of missing observations. Previously, researchers had relied on a series of ad hoc procedures, often resulting in very inaccurate estimates. The new statistical procedures allow the researcher to articulate the assumptions about the reason the data are missing and make very sophisticated estimates of the missing value based on all the available information. This topic has taken on even more importance with the increasing emphasis on longitudinal studies and the inevitable problem of attrition.
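    As a toy illustration of the general idea of using the available information to estimate a missing value, the sketch below fills in one missing score from a least-squares line fit on the complete cases. This single-imputation sketch is far simpler than the model-based methods the chapter actually covers (e.g., multiple imputation and maximum likelihood estimation), and the data are hypothetical.

```python
def impute_regression(pairs):
    """Fill in missing y values from a least-squares line fit on the
    complete cases; a bare-bones stand-in for model-based imputation."""
    complete = [(x, y) for x, y in pairs if y is not None]
    n = len(complete)
    mean_x = sum(x for x, _ in complete) / n
    mean_y = sum(y for _, y in complete) / n
    # Ordinary least-squares slope and intercept from the complete cases.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in complete)
             / sum((x - mean_x) ** 2 for x, _ in complete))
    intercept = mean_y - slope * mean_x
    # Observed values are kept; missing ones get the predicted value.
    return [(x, y if y is not None else intercept + slope * x)
            for x, y in pairs]

# Hypothetical paired scores; the third case's outcome is missing.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, None), (4.0, 8.1)]
filled = impute_regression(data)
```

    Unlike deleting the incomplete case or substituting the sample mean, the regression-based fill-in respects the relationship between the two variables, which is the intuition behind the more principled methods the chapter describes.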

    The fifth chapter, “Preparatory Data Analysis” by Linda S. Fidell and Barbara G. Tabachnick, describes methods of preprocessing data before the application of other methods of statistical analysis. Extreme values can distort the results of the data analysis if not addressed. Diagnostic methods can preprocess the data so that complex procedures are not unduly affected by a limited number of cases that often are the result of some type of error. The last two chapters in this part, “Factor Analysis” by Richard L. Gorsuch and “Clustering and Classification Methods” by Glenn W. Milligan and Stephen C. Hirtle, describe two widely employed parsimony methods. Factor analysis operates in the variable domain and attempts to reduce a set of p observed variables to a smaller set of m factors. These factors, or latent variables, are more easily interpreted and thus facilitate interpretation. Cluster analysis operates in the person domain and attempts to reduce a set of N individuals to a set of k clusters. Cluster analysis serves to explore the relationships among individuals and organize the set of individuals into a limited number of subtypes that share essential features. These methods are basic to the development of construct-driven methods and the focus on individual differences.
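    The person-domain reduction performed by cluster analysis can be sketched with a naive k-means pass, one of many clustering algorithms in the family the chapter surveys. The six score profiles and the choice of k = 2 below are hypothetical, and the deterministic initialization is a simplification (practical implementations use random restarts).

```python
def kmeans(points, k, iters=100):
    """Naive k-means: reduce N individuals (rows of scores) to k clusters."""
    centers = points[:k]  # naive deterministic start; real code restarts randomly
    for _ in range(iters):
        # Assignment step: attach each individual to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each center to the mean profile of its cluster.
        new_centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # no center moved: converged
            break
        centers = new_centers
    return centers, clusters

# Hypothetical scores for N = 6 individuals on two measures,
# containing two well-separated subtypes.
people = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8),
          (8.0, 8.2), (7.9, 8.1), (8.3, 7.8)]
centers, clusters = kmeans(people, k=2)
```

    On these data the two recovered clusters correspond to the two subtypes; with real profiles, the choice of k and of the distance measure is itself a substantive question the chapter takes up.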

    The second part, “Research Methods in Specific Content Areas,” addresses research methods and issues as they apply to specific content areas. Content areas were chosen in part to parallel the other volumes of the Handbook. More important, however, we attempted to sample content areas from a broad spectrum of specialization with the hope that these chapters would provide insights into methodological concerns and solutions that would generalize to other areas. Chapter 8, “Clinical Forensic Psychology” by Kevin S. Douglas, Randy K. Otto, and Randy Borum, addresses research methods and issues that occur in assessment and treatment contexts. For each task that is unique to clinical forensic psychology research, they provide examples of the clinical challenges confronting the psychologist, identify problems faced when researching the issues or constructs, and describe not only research strategies that have been employed but also their strengths and limitations. In Chapter 9, “Psychotherapy Outcome Research,” Evelyn S. Behar and Thomas D. Borkovec address the methodological issues that need to be considered for investigators to draw the strongest and most specific cause-and-effect conclusions about the active components of treatments, human behavior, and the effectiveness of therapeutic interventions.

    The field of health psychology is largely defined by three topics: the role of behavior (e.g., smoking) in the development and prevention of disease, the role of stress and emotion as psychobiological influences on disease, and psychological aspects of acute and chronic illness and medical care. Insight into the methodological issues and solutions for research in each of these topical areas is provided by Timothy W. Smith in Chapter 10, “Health Psychology.”

    At one time, most behavioral experimentation was conducted by individuals whose training focused heavily on animal research. Now many neuroscientists, trained in various fields, conduct research in animal learning and publish findings that are of interest to psychologists in many fields. The major goal of Chapter 11, “Animal Learning” by Russell M. Church, is to transfer what is fairly common knowledge in experimental animal psychology to investigators with limited exposure to this area of research. In Chapter 12, “Neuropsychology,” Russell M. Bauer, Elizabeth C. Leritz, and Dawn Bowers provide a discussion of neuropsychological inference, an overview of major approaches to neuropsychological research, and a review of newer techniques, including functional neuroimaging, electrophysiology, magnetoencephalography, and reversible lesion methods. In each section, they describe the conceptual basis of the technique, outline its strengths and weaknesses, and cite examples of how it has been used in addressing conceptual problems in neuropsychology.

    Whatever their specialty area, when psychologists evaluate a program or policy, the question of impact is often at center stage. The last chapter in this part, “Program Evaluation” by Melvin M. Mark, focuses on key methods for estimating the effects of policies and programs in the context of evaluation. Additionally, Mark addresses several noncausal forms of program evaluation research that are infrequently addressed in methodological treatises.

    The third part is titled “Measurement Issues.” Advances in measurement typically combine innovation in technology and progress in theory. As our measures become more sophisticated, the areas of application also increase.

    Mood emerged as a seminal concept within psychology during the 1980s, and its prominence has continued unabated ever since. In Chapter 14, “Mood Measurement: Current Status and Future Directions,” David Watson and Jatin Vaidya examine current research regarding the underlying structure of mood, describe and evaluate many of the most important mood measures, and discuss several issues related to the reliability and construct validity of mood measurement. In Chapter 15, “Measuring Personality and Psychopathology,” Leslie C. Morey uses objective self-report methods of measurement to illustrate contemporary procedures for scale development and validation, addressing issues critical to all measurement methods such as theoretical articulation, situational context, and the need for discriminant validity.

    The appeal of circular models lies in the combination of a circle’s aesthetic (organizational) simplicity and its powerful potential to describe data in uniquely compelling substantive and geometric ways, as has been demonstrated in describing interpersonal behavior and occupational interests. In Chapter 16, “The Circumplex Model: Methods and Research Applications,” Michael B. Gurtman and Aaron L. Pincus discuss the application of the circumplex model to the descriptions of individuals, comparisons of groups, and evaluations of constructs and their measures.


    Chapter 17, “Item Response Theory and Measuring Abilities” by Karen M. Schmidt and Susan E. Embretson, describes the types of formal models that have been designed to guide measure development. For many years, most tests of ability and achievement have relied on classical test theory as a framework to guide both measure development and measure evaluation. Item response theory updates this model in many important ways, permitting the development of a new generation of measures of abilities and achievement that are particularly appropriate for a more interactive model of assessment. The last chapter of this part, “Growth Curve Analysis in Contemporary Psychological Research” by John J. McArdle and John R. Nesselroade, describes new quantitative methods for the study of change in developmental psychology. The methods permit the researcher to model a wide variety of different patterns of developmental change over time.
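    A minimal example of the kind of formal measurement model item response theory supplies is the one-parameter (Rasch) model, in which the probability of a correct response depends only on the difference between person ability and item difficulty. This is only the simplest member of the IRT family the chapter covers, and the parameter values below are hypothetical.

```python
import math

def rasch_p_correct(theta, b):
    """One-parameter (Rasch) IRT model: probability that a person with
    ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# On an item of average difficulty (b = 0), a more able person
# has the higher probability of a correct response.
p_low = rasch_p_correct(theta=-1.0, b=0.0)
p_high = rasch_p_correct(theta=1.0, b=0.0)
```

    Placing persons and items on the same latent scale in this way is what allows item banks and the interactive (adaptive) assessment the chapter describes, where each examinee receives items matched to his or her estimated ability.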

The final part, “Data Analysis Methods,” addresses statistical procedures that have been developed recently and are still not widely employed by many researchers. They are typically dependent on the availability of high-speed computers and permit researchers to investigate novel and complex research questions. Chapter 19, “Multiple Linear Regression” by Leona Aiken, Stephen G. West, and Steven C. Pitts, describes the advances in multiple linear regression that permit applications of this very basic method to the analysis of complex data sets and the incorporation of conceptual models to guide the analysis. The testing of theoretical predictions and the identification of implementation problems are the two major foci of this chapter. Chapter 20, “Logistic Regression” by Alfred DeMaris, describes a parallel method to multiple regression analysis for categorical variables. The procedure has been developed primarily outside of psychology and is now being used much more frequently to address psychological questions. Chapter 21, “Meta-Analysis” by Frank L. Schmidt and John E. Hunter, describes procedures that have been developed for the quantitative integration of research findings across multiple studies. Previously, research findings were integrated in narrative form and were subject to the biases of the reviewer. The method also focuses attention on the importance of effect size estimation.

Chapter 22, “Survival Analysis” by Judith D. Singer and John B. Willett, describes a recently developed method for analyzing longitudinal data. One approach is to code whether an event has occurred at a given occasion. By switching the focus to the time of occurrence of the event, a much more powerful and sophisticated analysis can be performed. Again, the development of this procedure has occurred largely outside psychology but is being employed much more frequently. In Chapter 23, “Time Series Analysis,” Wayne Velicer and Joseph L. Fava describe a method for studying the change in a single individual over time. Instead of a single observation on many subjects, this method relies on many observations on a single subject. In many ways, this method is the prime exemplar of longitudinal research methods.

Chapter 24, “Structural Equation Modeling” by Jodie B. Ullman and Peter M. Bentler, describes a very general method that combines three key themes: constructs or latent variables, statistical control, and theory to guide data analysis. First employed as an analytic method little more than 20 years ago, the method is now widely disseminated in the behavioral sciences. Chapter 25, “Ordinal Analysis of Behavioral Data” by Jeffrey D. Long, Du Feng, and Norman Cliff, discusses the assumptions that underlie many of the widely used statistical methods and describes a parallel series of methods of analysis that only assume that the measure provides ordinal information. The last chapter, “Latent Class and Latent Transition Analysis” by Stephanie T. Lanza, Brian P. Flaherty, and Linda M. Collins, describes a new method for analyzing change over time. It is particularly appropriate when the change process can be conceptualized as a series of discrete states.

In completing this project, we realized that we were very fortunate in several ways. Irving Weiner’s performance as editor-in-chief was simply wonderful. He applied just the right mix of obsessive concern and responsive support to keep things on schedule. His comments on issues of emphasis, perspective, and quality were insightful and inevitably on target.

We continue to be impressed with the professionalism of the authors that we were able to recruit into this effort. Consistent with their reputations, these individuals delivered chapters of exceptional quality, making our burden pale in comparison to other editorial experiences. Because of the length of the project, we shared many contributors’ experiences: marriages, births, illnesses, family crises. A definite plus for us has been the formation of new friendships and professional liaisons.

Our editorial tasks were also aided greatly by the generous assistance of our reviewers, most of whom will be quickly recognized by our readers for their own expertise in research methodology. We are pleased to thank James Algina, Phipps Arabie, Patti Barrows, Betsy Jane Becker, Lisa M. Brown, Barbara M. Byrne, William F. Chaplin, Pat Cohen, Patrick J. Curren, Glenn Curtiss, Richard B. Darlington, Susan Duncan, Brian Everitt, Kerry Evers, Ron Gironda, Lisa Harlow, Michael R. Harwell, Don Hedeker, David Charles Howell, Lawrence J. Hubert, Bradley E. Huitema, Beth Jenkins, Herbert W. Marsh, Rosemarie A. Martin, Scott E. Maxwell, Kevin R. Murphy, Gregory Norman, Daniel J. Ozer, Melanie Page, Mark D. Reckase, Charles S. Reichardt, Steven Reise, Joseph L. Rogers, Joseph Rossi, James Rounds, Shlomo S. Sawilowsky, Ian Spence, James H. Steiger, Xiaowu Sun, Randall C. Swaim, David Thissen, Bruce Thompson, Terence J. G. Tracey, Rod Vanderploeg, Paul F. Velleman, Howard Wainer, Douglas Williams, and several anonymous reviewers for their thorough work and good counsel.

We finish this preface with a caveat. Readers will inevitably discover several contradictions or disagreements across the chapter offerings. Inevitably, researchers in different areas solve similar methodological problems in different ways. These differences are reflected in the offerings of this text, and we have not attempted to mediate these differing viewpoints. Rather, we believe that the serious researcher will welcome the opportunity to review solutions suggested or supported by differing approaches. For flaws in the text, however, the usual rule applies: We assume all responsibility.

JOHN A. SCHINKA
WAYNE F. VELICER

    REFERENCES

Kirk, R. E. (1968). Experimental design: Procedures for the behavioral sciences. Pacific Grove, CA: Brooks/Cole.

Kirk, R. E. (1995). Experimental design: Procedures for the behavioral sciences (3rd ed.). Pacific Grove, CA: Brooks/Cole.

Siegel, S. (1956). Nonparametric statistics for the behavioral sciences. New York: McGraw-Hill.



• Handbook of Psychology Preface ix
Irving B. Weiner

Volume Preface xi
John A. Schinka and Wayne F. Velicer

    Contributors xxi

PART ONE
FOUNDATIONS OF RESEARCH ISSUES: STUDY DESIGN, DATA MANAGEMENT, DATA REDUCTION, AND DATA SYNTHESIS

1 EXPERIMENTAL DESIGN 3
Roger E. Kirk

2 EXPLORATORY DATA ANALYSIS 33
John T. Behrens and Chong-ho Yu

3 POWER: BASICS, PRACTICAL PROBLEMS, AND POSSIBLE SOLUTIONS 65
Rand R. Wilcox

4 METHODS FOR HANDLING MISSING DATA 87
John W. Graham, Patricio E. Cumsille, and Elvira Elek-Fisk

5 PREPARATORY DATA ANALYSIS 115
Linda S. Fidell and Barbara G. Tabachnick

6 FACTOR ANALYSIS 143
Richard L. Gorsuch

7 CLUSTERING AND CLASSIFICATION METHODS 165
Glenn W. Milligan and Stephen C. Hirtle

PART TWO
RESEARCH METHODS IN SPECIFIC CONTENT AREAS

8 CLINICAL FORENSIC PSYCHOLOGY 189
Kevin S. Douglas, Randy K. Otto, and Randy Borum

9 PSYCHOTHERAPY OUTCOME RESEARCH 213
Evelyn S. Behar and Thomas D. Borkovec

    Contents

    xvii



10 HEALTH PSYCHOLOGY 241
Timothy W. Smith

11 ANIMAL LEARNING 271
Russell M. Church

12 NEUROPSYCHOLOGY 289
Russell M. Bauer, Elizabeth C. Leritz, and Dawn Bowers

13 PROGRAM EVALUATION 323
Melvin M. Mark

PART THREE
MEASUREMENT ISSUES

14 MOOD MEASUREMENT: CURRENT STATUS AND FUTURE DIRECTIONS 351
David Watson and Jatin Vaidya

15 MEASURING PERSONALITY AND PSYCHOPATHOLOGY 377
Leslie C. Morey

16 THE CIRCUMPLEX MODEL: METHODS AND RESEARCH APPLICATIONS 407
Michael B. Gurtman and Aaron L. Pincus

17 ITEM RESPONSE THEORY AND MEASURING ABILITIES 429
Karen M. Schmidt and Susan E. Embretson

18 GROWTH CURVE ANALYSIS IN CONTEMPORARY PSYCHOLOGICAL RESEARCH 447
John J. McArdle and John R. Nesselroade

PART FOUR
DATA ANALYSIS METHODS

19 MULTIPLE LINEAR REGRESSION 483
Leona S. Aiken, Stephen G. West, and Steven C. Pitts

20 LOGISTIC REGRESSION 509
Alfred DeMaris

21 META-ANALYSIS 533
Frank L. Schmidt and John E. Hunter

22 SURVIVAL ANALYSIS 555
Judith D. Singer and John B. Willett

23 TIME SERIES ANALYSIS 581
Wayne F. Velicer and Joseph L. Fava



24 STRUCTURAL EQUATION MODELING 607
Jodie B. Ullman and Peter M. Bentler

25 ORDINAL ANALYSIS OF BEHAVIORAL DATA 635
Jeffrey D. Long, Du Feng, and Norman Cliff

26 LATENT CLASS AND LATENT TRANSITION ANALYSIS 663
Stephanie T. Lanza, Brian P. Flaherty, and Linda M. Collins

    Author Index 687

    Subject Index 703



• Leona S. Aiken, PhD
Department of Psychology
Arizona State University
Tempe, Arizona

Russell M. Bauer, PhD
Department of Clinical and Health Psychology
University of Florida
Gainesville, Florida

Evelyn S. Behar, MS
Department of Psychology
Pennsylvania State University
University Park, Pennsylvania

John T. Behrens, PhD
Cisco Networking Academy Program
Cisco Systems, Inc.
Phoenix, Arizona

Peter M. Bentler, PhD
Department of Psychology
University of California
Los Angeles, California

Thomas D. Borkovec, PhD
Department of Psychology
Pennsylvania State University
University Park, Pennsylvania

Randy Borum, PsyD
Department of Mental Health Law & Policy
Florida Mental Health Institute
University of South Florida
Tampa, Florida

Dawn Bowers, PhD
Department of Clinical and Health Psychology
University of Florida
Gainesville, Florida

Russell M. Church, PhD
Department of Psychology
Brown University
Providence, Rhode Island

Norman Cliff, PhD
Professor of Psychology Emeritus
University of Southern California
Los Angeles, California

Linda M. Collins, PhD
The Methodology Center
Pennsylvania State University
University Park, Pennsylvania

Patricio E. Cumsille, PhD
Escuela de Psicologia
Universidad Católica de Chile
Santiago, Chile

Alfred DeMaris, PhD
Department of Sociology
Bowling Green State University
Bowling Green, Ohio

Kevin S. Douglas, PhD, LLB
Department of Mental Health Law & Policy
Florida Mental Health Institute
University of South Florida
Tampa, Florida

Du Feng, PhD
Human Development and Family Studies
Texas Tech University
Lubbock, Texas

Elvira Elek-Fisk, PhD
The Methodology Center
Pennsylvania State University
University Park, Pennsylvania

Susan E. Embretson, PhD
Department of Psychology
University of Kansas
Lawrence, Kansas

Joseph L. Fava, PhD
Cancer Prevention Research Center
University of Rhode Island
Kingston, Rhode Island

    Contributors

    xxi



Linda S. Fidell, PhD
Department of Psychology
California State University
Northridge, California

Brian P. Flaherty, MS
The Methodology Center
Pennsylvania State University
University Park, Pennsylvania

Richard L. Gorsuch, PhD
Graduate School of Psychology
Fuller Theological Seminary
Pasadena, California

John W. Graham, PhD
Department of Biobehavioral Health
Pennsylvania State University
University Park, Pennsylvania

Michael B. Gurtman, PhD
Department of Psychology
University of Wisconsin-Parkside
Kenosha, Wisconsin

Stephen C. Hirtle, PhD
School of Information Sciences
University of Pittsburgh
Pittsburgh, Pennsylvania

John E. Hunter, PhD
Department of Psychology
Michigan State University
East Lansing, Michigan

Roger E. Kirk, PhD
Department of Psychology and Neuroscience
Baylor University
Waco, Texas

Stephanie T. Lanza, MS
The Methodology Center
Pennsylvania State University
University Park, Pennsylvania

Elizabeth C. Leritz, MS
Department of Clinical and Health Psychology
University of Florida
Gainesville, Florida

Jeffrey D. Long, PhD
Department of Educational Psychology
University of Minnesota
Minneapolis, Minnesota

Melvin M. Mark, PhD
Department of Psychology
Pennsylvania State University
University Park, Pennsylvania

John J. McArdle, PhD
Department of Psychology
University of Virginia
Charlottesville, Virginia

Glenn W. Milligan, PhD
Department of Management Sciences
Ohio State University
Columbus, Ohio

Leslie C. Morey, PhD
Department of Psychology
Texas A&M University
College Station, Texas

John R. Nesselroade, PhD
Department of Psychology
University of Virginia
Charlottesville, Virginia

Randy K. Otto, PhD
Department of Mental Health Law & Policy
Florida Mental Health Institute
University of South Florida
Tampa, Florida

Aaron L. Pincus, PhD
Department of Psychology
Pennsylvania State University
University Park, Pennsylvania

Steven C. Pitts, PhD
Department of Psychology
University of Maryland, Baltimore County
Baltimore, Maryland

Karen M. Schmidt, PhD
Department of Psychology
University of Virginia
Charlottesville, Virginia

Frank L. Schmidt, PhD
Department of Management and Organization
University of Iowa
Iowa City, Iowa

Judith D. Singer, PhD
Graduate School of Education
Harvard University
Cambridge, Massachusetts



Timothy W. Smith, PhD
Department of Psychology
University of Utah
Salt Lake City, Utah

Barbara G. Tabachnick, PhD
Department of Psychology
California State University
Northridge, California

Jodie B. Ullman, PhD
Department of Psychology
California State University
San Bernardino, California

Jatin Vaidya
Department of Psychology
University of Iowa
Iowa City, Iowa

Wayne F. Velicer, PhD
Cancer Prevention Research Center
University of Rhode Island
Kingston, Rhode Island

David Watson, PhD
Department of Psychology
University of Iowa
Iowa City, Iowa

Stephen G. West, PhD
Department of Psychology
Arizona State University
Tempe, Arizona

Rand R. Wilcox, PhD
Department of Psychology
University of Southern California
Los Angeles, California

John B. Willett, PhD
Graduate School of Education
Harvard University
Cambridge, Massachusetts

Chong-ho Yu, PhD
Cisco Networking Academy Program
Cisco Systems, Inc.
Chandler, Arizona



• PART ONE

FOUNDATIONS OF RESEARCH ISSUES: STUDY DESIGN, DATA MANAGEMENT, DATA REDUCTION, AND DATA SYNTHESIS

    schi_ch01.qxd 8/7/02 12:12 PM Page 1


  • CHAPTER 1

    Experimental Design

    ROGER E. KIRK

    3

SOME BASIC EXPERIMENTAL DESIGN CONCEPTS 3
THREE BUILDING BLOCK DESIGNS 4
Completely Randomized Design 4
Randomized Block Design 6
Latin Square Design 9
CLASSIFICATION OF EXPERIMENTAL DESIGNS 10
FACTORIAL DESIGNS 11
Completely Randomized Factorial Design 11
Alternative Models 14
Randomized Block Factorial Design 19
FACTORIAL DESIGNS WITH CONFOUNDING 21
Split-Plot Factorial Design 21
Confounded Factorial Designs 24
Fractional Factorial Designs 25
HIERARCHICAL DESIGNS 27
Hierarchical Designs With One or Two Nested Treatments 27
Hierarchical Design With Crossed and Nested Treatments 28
EXPERIMENTAL DESIGNS WITH A COVARIATE 29
REFERENCES 31

SOME BASIC EXPERIMENTAL DESIGN CONCEPTS

Experimental design is concerned with the skillful interrogation of nature. Unfortunately, nature is reluctant to reveal her secrets. Joan Fisher Box (1978) observed in her biography of her father, Ronald A. Fisher, “Far from behaving consistently, however, Nature appears vacillating, coy, and ambiguous in her answers” (p. 140). Her most effective tool for confusing researchers is variability—in particular, variability among participants or experimental units. But two can play the variability game. By comparing the variability among participants treated differently to the variability among participants treated alike, researchers can make informed choices between competing hypotheses in science and technology.

We must never underestimate nature—she is a formidable foe. Carefully designed and executed experiments are required to learn her secrets. An experimental design is a plan for assigning participants to experimental conditions and the statistical analysis associated with the plan (Kirk, 1995, p. 1). The design of an experiment involves a number of interrelated activities:

1. Formulation of statistical hypotheses that are germane to the scientific hypothesis. A statistical hypothesis is a statement about (a) one or more parameters of a population or (b) the functional form of a population. Statistical hypotheses are rarely identical to scientific hypotheses—they are testable formulations of scientific hypotheses.

2. Determination of the experimental conditions (independent variable) to be manipulated, the measurement (dependent variable) to be recorded, and the extraneous conditions (nuisance variables) that must be controlled.

3. Specification of the number of participants required and the population from which they will be sampled.

4. Specification of the procedure for assigning the participants to the experimental conditions.

5. Determination of the statistical analysis that will be performed.

In short, an experimental design identifies the independent, dependent, and nuisance variables and indicates the way in which the randomization and statistical aspects of an experiment are to be carried out.

    Analysis of Variance

Analysis of variance (ANOVA) is a useful tool for understanding the variability in designed experiments. The seminal ideas for both ANOVA and experimental design can be traced to Ronald A. Fisher, a statistician who worked at the Rothamsted Experimental Station. According to Box (1978, p. 100), Fisher developed the basic ideas of ANOVA between 1919 and 1925. The first hint of what was to come appeared in a 1918 paper in which Fisher partitioned the total variance of a human attribute into portions attributed to heredity, environment, and other factors. The analysis of variance table for a two-treatment factorial design appeared in a 1923 paper published with M. A. Mackenzie (Fisher & Mackenzie, 1923). Fisher referred to the table as a convenient way of arranging the arithmetic. In 1924 Fisher (1925) introduced the Latin square design in connection with a forest nursery experiment. The publication in 1925 of his classic textbook Statistical Methods for Research Workers and a short paper the following year (Fisher, 1926) presented all the essential ideas of analysis of variance. The textbook (Fisher, 1925, pp. 244–249) included a table of the critical values of the ANOVA test statistic in terms of a function called z, where z = ½(ln σ̂²Treatment − ln σ̂²Error). The statistics σ̂²Treatment and σ̂²Error denote, respectively, treatment and error variance. A more convenient form of Fisher’s z table that did not require looking up log values was developed by George Snedecor (1934). His critical values are expressed in terms of the function F = σ̂²Treatment/σ̂²Error that is obtained directly from the ANOVA calculations. He named it F in honor of Fisher. Fisher’s field of experimentation—agriculture—was a fortunate choice because results had immediate application with assessable economic value, because simplifying assumptions such as normality and independence of errors were usually tenable, and because the cost of conducting experiments was modest.
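The two definitions above imply that F = e^(2z), which is why Snedecor's table removed the need to look up logarithms. A minimal numerical sketch, using made-up variance estimates purely for illustration:

```python
import math

# Illustrative variance estimates (hypothetical numbers, not from any experiment).
var_treatment = 48.0  # between-treatments variance estimate
var_error = 12.0      # within-treatments (error) variance estimate

# Fisher's z statistic: z = (1/2)(ln var_treatment - ln var_error).
z = 0.5 * (math.log(var_treatment) - math.log(var_error))

# Snedecor's F statistic: the ratio of the two variance estimates.
F = var_treatment / var_error

print(F)                # 4.0
print(math.exp(2 * z))  # 4.0, up to floating-point rounding
```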

    Three Principles of Good Experimental Design

The publication of Fisher’s Statistical Methods for Research Workers and his 1935 The Design of Experiments gradually led to the acceptance of what today is considered to be the cornerstone of good experimental design: randomization. It is hard to imagine the hostility that greeted the suggestion that participants or experimental units should be randomly assigned to treatment levels. Before Fisher’s work, most researchers used systematic schemes, not subject to the laws of chance, to assign participants. According to Fisher, random assignment has several purposes. It helps to distribute the idiosyncratic characteristics of participants over the treatment levels so that they do not selectively bias the outcome of the experiment. Also, random assignment permits the computation of an unbiased estimate of error effects—those effects not attributable to the manipulation of the independent variable—and it helps to ensure that the error effects are statistically independent.

Fisher popularized two other principles of good experimentation: replication and local control or blocking. Replication is the observation of two or more participants under identical experimental conditions. Fisher observed that replication enables a researcher to estimate error effects and obtain a more precise estimate of treatment effects. Blocking, on the other hand, is an experimental procedure for isolating variation attributable to a nuisance variable. As the name suggests, nuisance variables are undesired sources of variation that can affect the dependent variable. There are many sources of nuisance variation. Differences among participants comprise one source. Other sources include variation in the presentation of instructions to participants, changes in environmental conditions, and the effects of fatigue and learning when participants are observed several times. Three experimental approaches are used to deal with nuisance variables:

    1. Holding the variable constant.

2. Assigning participants randomly to the treatment levels so that known and unsuspected sources of variation among the participants are distributed over the entire experiment and do not affect just one or a limited number of treatment levels.

3. Including the nuisance variable as one of the factors in the experiment.

The last experimental approach uses local control or blocking to isolate variation attributable to the nuisance variable so that it does not appear in estimates of treatment and error effects. A statistical approach also can be used to deal with nuisance variables. The approach is called analysis of covariance and is described in the last section of this chapter. The three principles that Fisher vigorously championed—randomization, replication, and local control—remain the cornerstones of good experimental design.

    THREE BUILDING BLOCK DESIGNS

    Completely Randomized Design

One of the simplest experimental designs is the randomization and analysis plan that is used with a t statistic for independent samples. Consider an experiment to compare the effectiveness of two diets for obese teenagers. The independent variable is the two kinds of diets; the dependent variable is the amount of weight loss two months after going on a diet. For notational convenience, the two diets are called treatment A. The levels of treatment A corresponding to the specific diets are denoted by the lowercase letter a and a subscript: a1 denotes one diet and a2 denotes the other. A particular but unspecified level of treatment A is denoted by aj, where j ranges over the values 1 and 2. The amount of weight loss in pounds 2 months after participant i went on diet j is denoted by Yij.

The null and alternative hypotheses for the weight-loss experiment are, respectively,

H0: μ1 − μ2 = 0
H1: μ1 − μ2 ≠ 0,

where μ1 and μ2 denote the mean weight loss of the respective populations. Assume that 30 girls who want to lose weight are available to participate in the experiment. The researcher assigns n = 15 girls to each of the p = 2 diets so that each of the (np)!/(n!)^p = 155,117,520 possible assignments has the same probability. This is accomplished by numbering the girls from 1 to 30 and drawing numbers from a random numbers table. The first 15 numbers drawn between 1 and 30 are assigned to treatment level a1; the remaining 15 numbers are assigned to a2. The layout for this experiment is shown in Figure 1.1. The girls who were assigned to treatment level a1 are called Group1; those assigned to treatment level a2 are called Group2. The mean weight losses of the two groups of girls are denoted by Ȳ·1 and Ȳ·2.
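The count of equally likely assignments, (np)!/(n!)^p, can be verified directly; a quick sketch, with Python used only as a convenient calculator:

```python
import math

n, p = 15, 2  # 15 girls per diet, 2 diets

# (np)! / (n!)^p counts the distinct ways to split np participants
# into p labeled groups of n each.
num_assignments = math.factorial(n * p) // math.factorial(n) ** p
print(num_assignments)  # 155117520
```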

The t independent-samples design involves randomly assigning participants to two levels of a treatment. A completely randomized design, which is described next, extends this design strategy to two or more treatment levels. The completely randomized design is denoted by the letters CR-p, where CR stands for “completely randomized” and p is the number of levels of the treatment.

Again, consider the weight-loss experiment and suppose that the researcher wants to evaluate the effectiveness of three diets. The null and alternative hypotheses for the experiment are, respectively,

H0: μ1 = μ2 = μ3
H1: μj ≠ μj′ for some j and j′.

Assume that 45 girls who want to lose weight are available to participate in the experiment. The girls are randomly assigned to the three diets with the restriction that 15 girls are assigned to each diet. The layout for the experiment is shown in Figure 1.2. A comparison of the layout in this figure with that in Figure 1.1 for a t independent-samples design reveals that they are the same except that the completely randomized design has three treatment levels. The t independent-samples design can be thought of as a special case of a completely randomized design. When p is equal to two, the layouts and randomization plans for the designs are identical.
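The randomization step for this CR-3 design can be sketched as follows; the participant IDs and the use of `random.shuffle` are illustrative conveniences (a random numbers table, as in the text, serves the same purpose):

```python
import random

# Hypothetical participant IDs 1..45 for the three-diet experiment.
participants = list(range(1, 46))
random.shuffle(participants)  # every permutation is equally likely

p, n = 3, 15  # three treatment levels, 15 girls per level
groups = {f"a{j + 1}": sorted(participants[j * n:(j + 1) * n])
          for j in range(p)}

# Each of the (np)!/(n!)^p possible assignments of 15 girls to each
# of the 3 diets has the same probability under this procedure.
for level, members in groups.items():
    print(level, members)
```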

Thus far I have identified the null hypothesis that the researcher wants to test, μ1 = μ2 = μ3, and described the manner in which the participants are assigned to the three treatment levels. In the following paragraphs I discuss the composite nature of an observation, describe the classical model equation for a CR-p design, and examine the meaning of the terms treatment effect and error effect.

Figure 1.1 Layout for a t independent-samples design. Thirty girls are randomly assigned to two levels of treatment A with the restriction that 15 girls are assigned to each level. The mean weight loss in pounds for the girls in treatment levels a1 and a2 is denoted by Ȳ·1 and Ȳ·2, respectively.

Figure 1.2 Layout for a completely randomized design (CR-3 design). Forty-five girls are randomly assigned to three levels of treatment A with the restriction that 15 girls are assigned to each level. The mean weight loss in pounds for the girls in treatment levels a1, a2, and a3 is denoted by Ȳ·1, Ȳ·2, and Ȳ·3, respectively.

An observation, which is a measure of the dependent variable, can be thought of as a composite that reflects the effects of the (a) independent variable, (b) individual characteristics of the participant or experimental unit, (c) chance fluctuations in the participant’s performance, (d) measurement and recording errors that occur during data collection, and (e) any other nuisance variables such as environmental conditions that have not been controlled. Consider the weight loss of the fifth participant in treatment level a2. Suppose that two months after beginning the diet this participant has lost 13 pounds (Y52 = 13). What factors have affected the value of Y52? One factor is the effectiveness of the diet. Other factors are her weight prior to starting the diet, the degree to which she stayed on the diet, and the amount she exercised during the two-month trial, to mention only a few. In summary, Y52 is a composite that reflects (a) the effects of treatment level a2, (b) effects unique to the participant, (c) effects attributable to chance fluctuations in the participant’s behavior, (d) errors in measuring and recording the participant’s weight loss, and (e) any other effects that have not been controlled. Our conjectures about Y52 or any of the other 44 observations can be expressed more formally by a model equation. The classical model equation for the weight-loss experiment is

Yij = μ + αj + εi(j) (i = 1, ..., n; j = 1, ..., p),

where

Yij is the weight loss for participant i in treatment level aj.

μ is the grand mean of the three weight-loss population means.

αj is the treatment effect for population j and is equal to μj − μ. It reflects the effects of diet aj.

εi(j) is the within-groups error effect associated with Yij and is equal to Yij − μ − αj. It reflects all effects not attributable to treatment level aj. The notation i(j) indicates that the ith participant appears only in treatment level j. Participant i is said to be nested within the jth treatment level. Nesting is discussed in the section titled “Hierarchical Designs.”

According to the equation for this completely randomized design, each observation is the sum of three parameters: μ, αj, and εi(j). The values of the parameters in the equation are unknown but can be estimated from sample data.
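The composite nature of an observation can be made concrete by simulating the classical model equation Yij = μ + αj + εi(j) and then estimating the parameters from the generated sample. The parameter values and error distribution below are made-up illustrations, not estimates from the weight-loss experiment:

```python
import random
import statistics

random.seed(1)  # assumed seed for reproducibility

mu = 10.0                           # hypothetical grand mean of weight loss
alpha = {1: 2.0, 2: 0.0, 3: -2.0}   # hypothetical treatment effects (sum to zero)
n = 15                              # participants per treatment level

# Generate Y_ij = mu + alpha_j + eps_i(j), with normally distributed
# error effects standing in for all uncontrolled sources of variation.
data = {j: [mu + alpha[j] + random.gauss(0, 2.0) for _ in range(n)]
        for j in alpha}

# Estimate the parameters from the sample data: group means estimate
# mu + alpha_j, and deviations from the grand mean estimate alpha_j.
group_means = {j: statistics.mean(y) for j, y in data.items()}
grand_mean = statistics.mean(group_means.values())
alpha_hat = {j: group_means[j] - grand_mean for j in alpha}
print(grand_mean, alpha_hat)
```

By construction the estimated treatment effects sum to zero, mirroring the definition αj = μj − μ.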

The meanings of the terms grand mean, μ, and treatment effect, αj, in the model equation seem fairly clear; the meaning of error effect, εi(j), requires a bit more explanation. Why do observations, Yijs, in the same treatment level vary from one participant to the next? This variation must be due to differences among the participants and to other uncontrolled variables because the parameters μ and αj in the model equation are constants for all participants in the same treatment level. To put it another way, observations in the same treatment level are different because the error effects, εi(j)s, for the observations are different. Recall that error effects reflect idiosyncratic characteristics of the participants—those characteristics that differ from one participant to another—and any other variables that have not been controlled. Researchers attempt to minimize the size of error effects by holding sources of variation that might contribute to the error effects constant and by the judicious choice of an experimental design. Designs that are described next permit a researcher to isolate and remove some sources of variation that would ordinarily be included in the error effects.

    Randomized Block Design

The two designs just described use independent samples. Two samples are independent if, for example, a researcher randomly samples from two populations or randomly assigns participants to p groups. Dependent samples, on the other hand, can be obtained by any of the following procedures.

1. Observe each participant under each treatment level in the experiment—that is, obtain repeated measures on the participants.

2. Form sets of participants who are similar with respect to a variable that is correlated with the dependent variable. This procedure is called participant matching.

3. Obtain sets of identical twins or littermates in which case the participants have similar genetic characteristics.

4. Obtain participants who are matched by mutual selection, for example, husband and wife pairs or business partners.

In the behavioral and social sciences, the participants are often people whose aptitudes and experiences differ markedly. Individual differences are inevitable, but it is often possible to isolate or partition out a portion of these effects so that they do not appear in estimates of the error effects. One design for accomplishing this is the design used with a t statistic for dependent samples. As the name suggests, the design uses dependent samples. A t dependent-samples design also uses a more complex randomization and analysis plan than does a t independent-samples design. However, the added complexity is often accompanied by greater power—a point that I will develop later in connection with a randomized block design.

Let’s reconsider the weight-loss experiment. It is reasonable to assume that ease of losing weight is related to the amount by which a girl is overweight. The design of the experiment can be improved by isolating this nuisance variable. Suppose that instead of randomly assigning 30 participants to the treatment levels, the researcher formed pairs of participants so that prior to going on a diet the participants in each pair are overweight by about the same amount. The participants in each pair constitute a block or set of matched participants. A simple way to form blocks of matched participants is to rank them from least to most overweight. The participants ranked 1 and 2 are assigned to block one, those ranked 3 and 4 are assigned to block two, and so on. In this example, 15 blocks of dependent samples can be formed from the 30 participants. After all of the blocks have been formed, the two participants in each block are randomly assigned to the two diets. The layout for this experiment is shown in Figure 1.3. If the researcher’s hunch is correct that ease in losing weight is related to the amount by which a girl is overweight, this design should result in a more powerful test of the null hypothesis, μ·1 − μ·2 = 0, than would a t test for independent samples. As we will see, the increased power results from isolating the nuisance variable (the amount by which the girls are overweight) so that it does not appear in the estimate of the error effects.

Figure 1.3 Layout for a t dependent-samples design. Each block contains two girls who are overweight by about the same amount. The two girls in a block are randomly assigned to the treatment levels. The mean weight loss in pounds for the girls in treatment levels a1 and a2 is denoted by Y·1 and Y·2, respectively.
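The blocking procedure just described (rank the participants, slice adjacent ranks into blocks, then randomize within each block) can be sketched in code. This is an illustration, not part of the chapter; the overweight values are invented, and the random seed is fixed only so the sketch is reproducible.

```python
# A minimal sketch of forming 15 blocks of matched pairs and randomly
# assigning the two girls in each block to the two diets.
import random

random.seed(1)
overweight = [random.uniform(5, 40) for _ in range(30)]  # invented pounds overweight
p = 2                                                    # two diets: a1 and a2

# Rank participants from least to most overweight, then slice
# consecutive runs of p participants into blocks.
ranked = sorted(range(len(overweight)), key=lambda i: overweight[i])
blocks = [ranked[k:k + p] for k in range(0, len(ranked), p)]

# Within each block, randomly assign the p members to the p treatment levels.
assignment = {}
for members in blocks:
    shuffled = members[:]
    random.shuffle(shuffled)
    for level, participant in enumerate(shuffled, start=1):
        assignment[participant] = level

print(len(blocks))  # -> 15
```

Each block contains two girls who are adjacent in the ranking, so they are overweight by about the same amount, as the design requires.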

Earlier we saw that the layout and randomization procedures for a t independent-samples design and a completely randomized design are the same except that a completely randomized design can have more than two treatment levels. The same comparison can be drawn between a t dependent-samples design and a randomized block design. A randomized block design is denoted by the letters RB-p, where RB stands for “randomized block” and p is the number of levels of the treatment. The four procedures for obtaining dependent samples that were described earlier can be used to form the blocks in a randomized block design. The procedure that is used does not affect the computation of significance tests, but the procedure does affect the interpretation of the results. The results of an experiment with repeated measures generalize to a population of participants who have been exposed to all of the treatment levels. However, the results of an experiment with matched participants generalize to a population of participants who have been exposed to only one treatment level. Some writers reserve the designation randomized block design for this latter case. They refer to a design with repeated measurements in which the order of administration of the treatment levels is randomized independently for each participant as a subjects-by-treatments design. A design with repeated measurements in which the order of administration of the treatment levels is the same for all participants is referred to as a subjects-by-trials design. I use the designation randomized block design for all three cases.

Of the four ways of obtaining dependent samples, the use of repeated measures on the participants typically results in the greatest homogeneity within the blocks. However, if repeated measures are used, the effects of one treatment level should dissipate before the participant is observed under another treatment level. Otherwise the subsequent observations will reflect the cumulative effects of the preceding treatment levels. There is no such restriction, of course, if carryover effects such as learning or fatigue are the researcher’s principal interest. If blocks are composed of identical twins or littermates, it is assumed that the performance of participants having identical or similar heredities will be more homogeneous than the performance of participants having dissimilar heredities. If blocks are composed of participants who are matched by mutual selection (e.g., husband and wife pairs or business partners), a researcher should ascertain that the participants in a block are in fact more homogeneous with respect to the dependent variable than are unmatched participants. A husband and wife often have similar political attitudes; the couple is less likely to have similar mechanical aptitudes.

Suppose that in the weight-loss experiment the researcher wants to evaluate the effectiveness of three diets, denoted by a1, a2, and a3. The researcher suspects that ease of losing weight is related to the amount by which a girl is overweight. If a sample of 45 girls is available, the blocking procedure described in connection with a t dependent-samples design can be used to form 15 blocks of participants. The three participants in a block are matched with respect to the nuisance variable, the amount by which a girl is overweight. The layout for this experiment is shown in Figure 1.4. A comparison of the layout in this figure with that in Figure 1.3 for a t dependent-samples design reveals that they are the same except that the randomized block design has p = 3 treatment levels. When p = 2, the layouts and randomization plans for the designs are identical. In this and later examples, I assume that all of the treatment levels and blocks of interest are represented in the experiment. In other words, the treatment levels and blocks represent fixed effects. A discussion of the case in which either the treatment levels or blocks or both are randomly sampled from a population of levels, the mixed and random effects cases, is beyond the scope of this chapter. The reader is referred to Kirk (1995, pp. 256–257, 265–268).

A randomized block design enables a researcher to test two null hypotheses.

H0: μ·1 = μ·2 = μ·3 (Treatment population means are equal.)

H0: μ1· = μ2· = · · · = μ15· (Block population means are equal.)

The second hypothesis, which is usually of little interest, states that the population weight-loss means for the 15 levels of the nuisance variable are equal. The researcher expects a test of this null hypothesis to be significant. If the nuisance variable represented by the blocks does not account for an appreciable proportion of the total variation in the experiment, little has been gained by isolating the effects of the variable. Before exploring this point, I describe the model equation for an RB-p design.

The classical model equation for the weight-loss experiment is

Yij = μ + αj + πi + εij    (i = 1, . . . , n; j = 1, . . . , p),

    where

Yij is the weight loss for the participant in Block i and treatment level aj.

μ is the grand mean of the three weight-loss population means.

αj is the treatment effect for population j and is equal to μ·j − μ. It reflects the effect of diet aj.

πi is the block effect for population i and is equal to μi· − μ. It reflects the effect of the nuisance variable in Block i.

εij is the residual error effect associated with Yij and is equal to Yij − μ − αj − πi. It reflects all effects not attributable to treatment level aj and Block i.

According to the model equation for this randomized block design, each observation is the sum of four parameters: μ, αj, πi, and εij. A residual error effect is that portion of an observation that remains after the grand mean, treatment effect, and block effect have been subtracted from it; that is, εij = Yij − μ − αj − πi. The sum of the squared error effects for this randomized block design,

∑∑ ε²ij = ∑∑ (Yij − μ − αj − πi)²,

will be smaller than the sum for the completely randomized design,

∑∑ ε²i(j) = ∑∑ (Yij − μ − αj)²,
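The comparison between these two sums can be checked on a toy data matrix. The sketch below is not from the chapter; the weight-loss numbers are invented. It computes the least-squares estimates of μ, αj, and πi and then forms the two error sums of squares, with and without the block effects removed.

```python
# Hedged sketch: effect estimates and error sums of squares for a small
# randomized block layout (rows = blocks, columns = treatment levels).
data = [  # data[i][j]: invented weight loss for block i under treatment a_(j+1)
    [7.0, 10.0],
    [5.0,  8.0],
    [3.0,  7.0],
    [2.0,  4.0],
]
n, p = len(data), len(data[0])

grand = sum(sum(row) for row in data) / (n * p)            # estimate of mu
treat = [sum(data[i][j] for i in range(n)) / n - grand     # estimates of alpha_j
         for j in range(p)]
block = [sum(data[i]) / p - grand for i in range(n)]       # estimates of pi_i

# Randomized block residuals: e_ij = Y_ij - mu - alpha_j - pi_i
ss_res = sum((data[i][j] - grand - treat[j] - block[i]) ** 2
             for i in range(n) for j in range(p))

# Error effects if the block effects are NOT removed, as in a
# completely randomized design: e_i(j) = Y_ij - mu - alpha_j
ss_wg = sum((data[i][j] - grand - treat[j]) ** 2
            for i in range(n) for j in range(p))

print(ss_res, ss_wg)  # -> 1.0 33.5
```

With these invented numbers most of the within-groups variation comes from the blocks, so removing the block effects shrinks the error sum of squares from 33.5 to 1.0; in general ss_res can never exceed ss_wg.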

if π²i is not equal to zero for one or more blocks. This idea is illustrated in Figure 1.5, where the total sum of squares and degrees of freedom for the two designs are partitioned. The F statistic that is used to test the null hypothesis can be thought of as a ratio of error and treatment effects,

F = [f(error effects) + f(treatment effects)] / f(error effects),

where f( ) denotes a function of the effects in parentheses. It is apparent from an examination of this ratio that the smaller the sum of the squared error effects, the larger the F statistic and, hence, the greater the probability of rejecting a false null hypothesis. Thus, by isolating a nuisance variable that accounts for an appreciable portion of the total variation in a randomized block design, a researcher is rewarded with a more powerful test of a false null hypothesis.

Figure 1.4 Layout for a randomized block design (RB-3 design). Each block contains three girls who are overweight by about the same amount. The three girls in a block are randomly assigned to the treatment levels. The mean weight loss in pounds for the girls in treatment levels a1, a2, and a3 is denoted by Y·1, Y·2, and Y·3, respectively. The mean weight loss for the girls in Block1, Block2, . . . , Block15 is denoted by Y1·, Y2·, . . . , Y15·, respectively.

Figure 1.5 Partition of the total sum of squares (SSTOTAL) and degrees of freedom (np − 1 = 44) for CR-3 and RB-3 designs. The treatment and within-groups sums of squares are denoted by, respectively, SSA and SSWG, with SSWG having p(n − 1) = 42 degrees of freedom. The block and residual sums of squares are denoted by, respectively, SSBL and SSRES, with SSRES having (n − 1)(p − 1) = 28 degrees of freedom. The shaded rectangles indicate the sums of squares that are used to compute the error variance for each design: MSWG = SSWG/[p(n − 1)] and MSRES = SSRES/[(n − 1)(p − 1)]. If the nuisance variable (SSBL) in the randomized block design accounts for an appreciable portion of the total sum of squares, the design will have a smaller error variance and, hence, greater power than the completely randomized design.
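The payoff described above can be illustrated with a small simulation. This sketch is not the chapter’s analysis; the treatment effects, block-effect spread, and error standard deviation are arbitrary assumptions. When block effects are sizable, the same simulated data yield a much larger F under the RB-3 analysis than under the CR-3 analysis.

```python
# Hedged simulation: analyze one synthetic weight-loss data set both as a
# CR-3 design (error = SSWG) and as an RB-3 design (error = SSRES).
import random

random.seed(42)
n, p = 15, 3
treat_effect = [0.0, 1.0, 2.0]                         # alpha_j (a false H0)
block_effect = [random.gauss(0, 3) for _ in range(n)]  # pi_i, a real nuisance source

Y = [[5.0 + treat_effect[j] + block_effect[i] + random.gauss(0, 1)
      for j in range(p)] for i in range(n)]

grand = sum(map(sum, Y)) / (n * p)
col_mean = [sum(Y[i][j] for i in range(n)) / n for j in range(p)]
row_mean = [sum(Y[i]) / p for i in range(n)]

ss_a = n * sum((m - grand) ** 2 for m in col_mean)      # treatment sum of squares
ss_wg = sum((Y[i][j] - col_mean[j]) ** 2
            for i in range(n) for j in range(p))        # CR-3 error: SSWG
ss_bl = p * sum((m - grand) ** 2 for m in row_mean)     # block sum of squares
ss_res = ss_wg - ss_bl                                  # RB-3 error: SSRES = SSWG - SSBL

F_cr = (ss_a / (p - 1)) / (ss_wg / (p * (n - 1)))       # CR-3 analysis
F_rb = (ss_a / (p - 1)) / (ss_res / ((n - 1) * (p - 1)))  # RB-3 analysis
print(F_cr, F_rb)  # F_rb is far larger here because the blocks soak up variance
```

The numerator (treatment mean square) is the same in both ratios; only the error variance changes, which is exactly the mechanism described in the text.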

As we have seen, blocking with respect to the nuisance variable (the amount by which the girls are overweight) enables the researcher to isolate this variable and remove it from the error effects. But what if the nuisance variable doesn’t account for any of the variation in the experiment? In other words, what if all of the block effects in the experiment are equal to zero? In this unlikely case, the sum of the squared error effects for the randomized block and completely randomized designs will be equal, and the randomized block design will be less powerful than the completely randomized design because its error variance, the denominator of the F statistic, has n − 1 fewer degrees of freedom than the error variance for the completely randomized design. It should be obvious that the nuisance variable should be selected with care. The larger the correlation between the nuisance variable and the dependent variable, the more likely it is that the block effects will account for an appreciable proportion of the total variation in the experiment.
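The degrees-of-freedom bookkeeping behind this trade-off, for the chapter’s 45-observation example (p = 3 diets, n = 15 blocks), can be verified directly; this is pure arithmetic with no assumptions beyond the design sizes.

```python
# Degrees-of-freedom partition for CR-3 and RB-3 analyses of the same
# np = 45 observations, matching the values in Figure 1.5.
n, p = 15, 3

df_total = n * p - 1          # 44 total degrees of freedom
df_a = p - 1                  # treatment df, identical in both designs

df_wg = p * (n - 1)           # CR-3 error df: 42
df_bl = n - 1                 # df isolated by the blocks in RB-3: 14
df_res = (n - 1) * (p - 1)    # RB-3 error df: 28

# The RB-3 error term gives up n - 1 = 14 degrees of freedom to the
# blocks; when the block effects are all zero, that loss is the whole
# price of blocking.
print(df_total, df_a + df_wg, df_a + df_bl + df_res)  # -> 44 44 44
```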

    Latin Square Design

The Latin square design described in this section derives its name from an ancient puzzle that was concerned with the number of different ways that Latin letters can be arranged in a square matrix so that each letter appears once in each row and once in each column. An example of a 3 × 3 Latin square is shown in Figure 1.6. In this figure I have used the letter a with subscripts in place of Latin letters. The Latin square design is denoted by the letters LS-p, where LS stands for “Latin square” and p is the number of levels of the treatment. A Latin square design enables a researcher to isolate the effects of not one but two nuisance variables. The levels of one nuisance variable are assigned to the rows of the square; the levels of the other nuisance variable are assigned to the columns. The levels of the treatment are assigned to the cells of the square.

Let’s return to the weight-loss experiment. With a Latin square design the researcher can isolate the effects of the amount by which girls are overweight and the effects of a second nuisance variable, for example, genetic predisposition to be overweight. A rough measure of the second nuisance variable can be obtained by asking a girl’s parents whether they were overweight as teenagers: c1 denotes neither parent overweight, c2 denotes one parent overweight, and c3 denotes both parents overweight. This nuisance variable can be assigned to the columns of the Latin square. Three levels of the amount by which girls are overweight can be assigned to the rows of the Latin square: b1 is less than 15 pounds, b2 is 15 to 25 pounds, and b3 is more than 25 pounds. The advantage of being able to isolate two nuisance variables comes at a price. The randomization procedures for a Latin square design are more complex than those for a randomized block design. Also, the number of rows and columns of a Latin square must each equal the number of treatment levels, which is three in the example. This requirement can be very restrictive. For example, it was necessary to restrict the continuous variable of the amount by which girls are overweight to only three levels. The layout of the LS-3 design is shown in Figure 1.7.

Figure 1.6 Three-by-three Latin square, where aj denotes one of the j = 1, . . . , p levels of treatment A; bk denotes one of the k = 1, . . . , p levels of nuisance variable B; and cl denotes one of the l = 1, . . . , p levels of nuisance variable C. Each level of treatment A appears once in each row and once in each column, as required for a Latin square.

Figure 1.7 Layout for a Latin square design (LS-3 design) that is based on the Latin square in Figure 1.6. Treatment A represents three kinds of diets; nuisance variable B represents the amount by which the girls are overweight; and nuisance variable C represents genetic predisposition to be overweight. The girls in Group1, for example, received diet a1, were less than fifteen pounds overweight (b1), and neither parent had been overweight as a teenager (c1). The mean weight loss in pounds for the girls in the nine groups is denoted by Y·111, Y·123, . . . , Y·331.
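The defining property stated in the Figure 1.6 caption is easy to check in code. The sketch below is an illustration, not part of the chapter: it builds a cyclic 3 × 3 square (one standard starting point before the rows, columns, and letters are randomized) and verifies that each treatment level occurs once per row and once per column.

```python
# Hedged sketch: construct a p x p cyclic Latin square and verify the
# once-per-row, once-per-column property.
p = 3
square = [[(row + col) % p + 1 for col in range(p)] for row in range(p)]
# square[k][l] gives the subscript j of the treatment level a_j that is
# assigned to row b_(k+1) and column c_(l+1).

def is_latin(sq):
    """Return True if every level appears exactly once per row and column."""
    side = len(sq)
    levels = set(range(1, side + 1))
    rows_ok = all(set(r) == levels for r in sq)
    cols_ok = all({sq[r][c] for r in range(side)} == levels
                  for c in range(side))
    return rows_ok and cols_ok

print(square, is_latin(square))  # the cyclic construction is always Latin
```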



The design in Figure 1.7 enables the researcher to test three null hypotheses:

H0: μ1·· = μ2·· = μ3·· (Treatment population means are equal.)

H0: μ·1· = μ·2· = μ·3· (Row population means are equal.)

H0: μ··1 = μ··2 = μ··3 (Column population means are equal.)

The first hypothesis states that the population means for the three diets are equal. The second and third hypotheses make similar assertions about the population means for the two nuisance variables. Tests of these nuisance variables are expected to be significant. As discussed earlier, if the nuisance variables do not account for an appreciable proportion of the total variation in the experiment, little has been gained by isolating the effects of the variables.

The classical model equation for this version of the weight-loss experiment is

Yijkl = μ + αj + βk + γl + εjkl + εi(jkl)    (i = 1, . . . , n; j = 1, . . . , p; k = 1, . . . , p; l = 1, . . . , p),

    where

Yijkl is the weight loss for the ith participant in treatment level aj, row bk, and column cl.

αj is the treatment effect for population j and is equal to μj·· − μ. It reflects the effect of diet aj.

βk is the row effect for population k and is equal to μ·k· − μ. It reflects the effect of nuisance variable bk.

γl is the column effect for population l and is equal to μ··l − μ. It reflects the effect of nuisance variable cl.

εjkl is the residual effect that is equal to μjkl − μj·· − μ·k· − μ··l + 2μ.

εi(jkl) is the within-cell error effect associated with Yijkl and is equal to Yijkl − μ − αj − βk − γl − εjkl.
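The algebra of these definitions can be verified numerically. In the sketch below, the cell means and the particular square are invented for illustration; the point is only that the effects, computed exactly as defined above, sum back to each cell mean.

```python
# Hedged numerical check of the LS-3 effect definitions on invented
# cell means mu_jkl for the nine cells that occur in the square.
p = 3
square = [[1, 2, 3], [2, 3, 1], [3, 1, 2]]    # treatment j in row k, column l

mu = {}                                        # mu[(j, k, l)] = invented cell mean
for k in range(1, p + 1):
    for l in range(1, p + 1):
        j = square[k - 1][l - 1]
        mu[(j, k, l)] = 10.0 + 2 * j - k + 0.5 * l + 0.1 * j * k  # non-additive bit

grand = sum(mu.values()) / len(mu)
mu_j = {j: sum(v for (jj, _, _), v in mu.items() if jj == j) / p for j in range(1, p + 1)}
mu_k = {k: sum(v for (_, kk, _), v in mu.items() if kk == k) / p for k in range(1, p + 1)}
mu_l = {l: sum(v for (_, _, ll), v in mu.items() if ll == l) / p for l in range(1, p + 1)}

alpha = {j: mu_j[j] - grand for j in mu_j}     # treatment effects
beta = {k: mu_k[k] - grand for k in mu_k}      # row effects
gamma = {l: mu_l[l] - grand for l in mu_l}     # column effects

# Residual effect, exactly as defined in the text:
#   eps_jkl = mu_jkl - mu_j.. - mu_.k. - mu_..l + 2 mu
eps = {(j, k, l): v - mu_j[j] - mu_k[k] - mu_l[l] + 2 * grand
       for (j, k, l), v in mu.items()}

# Check: mu_jkl = mu + alpha_j + beta_k + gamma_l + eps_jkl in every cell
ok = all(abs(mu[c] - (grand + alpha[c[0]] + beta[c[1]] + gamma[c[2]] + eps[c])) < 1e-9
         for c in mu)
print(ok)  # -> True
```

Each set of effects also sums to zero over its index, which is what lets the grand mean carry the overall level of the scores.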

According to the model equation for this Latin square design, each observation is the sum of six parameters: μ, αj, βk, γl, εjkl, and εi(jkl). The sum of the squared within-cell error effects for the Latin square design,

∑∑ ε²i(jkl) = ∑∑ (Yijkl − μ − αj − βk − γl − εjkl)²,

will be smaller than the sum for the randomized block design,

∑∑ ε²ij = ∑∑ (Yij − μ − αj − πi)²,

if the combined effects of ∑ β²k, ∑ γ²l, and ∑ ε²jkl are greater than ∑ π²i. The benefits of isolating two nuisance variables are a smaller error variance and increased power.

Thus far I have described three of the simplest experimental designs: the completely randomized design, the randomized block design, and the Latin square design. The three designs are called building block designs because complex experimental designs can be constructed by combining two or more of these simple designs (Kirk, 1995, p. 40). Furthermore, the randomization procedures, data analysis, and model assumptions for complex designs represent extensions of those for the three building block designs. The three designs provide the organizational structure for the design nomenclature and classification scheme that is described next.

    CLASSIFICATION OF EXPERIMENTAL DESIGNS

A classification scheme for experimental designs is given in Table 1.1. The designs in the category systematic designs do not use random assignment of participants or experimental units and are of historical interest only. According to Leonard and Clark (1939), agricultural field research employing systematic designs on a practical scale dates back to 1834. Over the last 80 years systematic designs have fallen into disuse because designs employing random assignment are more likely to provide valid estimates of treatment and error effects and can be analyzed using the powerful tools of statistical inference such as analysis of variance. Experimental designs using random assignment are called randomized designs. The randomized designs in Table 1.1 are subdivided into categories based on (a) the number of treatments, (b) whether participants are assigned to relatively homogeneous blocks prior to random assignment, (c) presence or absence of confounding, (d) use of crossed or nested treatments, and (e) use of a covariate.

The letters p and q in the abbreviated designations denote the number of levels of treatments A and B, respectively. If a design includes a third and fourth treatment, say treatments C and D, the number of their levels is denoted by r and t, respectively. In general, the designation for designs with two or more treatments includes the letters CR, RB, or LS to indicate the building block design. The letter F or H is added to the designation to indicate that the design is, respectively, a factorial design or a hierarchical design. For example, the F in the designation CRF-pq indicates that it is a factorial design; the CR and pq indicate that the design was constructed by combining two completely randomized designs with p and q treatment levels. The letters CF, PF, FF, and AC are added to the designation if the design is, respectively, a confounded factorial design, partially confounded factorial design, fractional factorial design, or analysis of covariance design.
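As a toy illustration of this nomenclature (not from the chapter), a few of the designations can be decoded mechanically. The mapping below is a deliberate simplification: it covers only the CR, RB, and LS building blocks with the F and H modifiers, not the full set of letters used in Table 1.1.

```python
# Hedged sketch: decode a handful of abbreviated design designations.
BUILDING_BLOCKS = {"CR": "completely randomized",
                   "RB": "randomized block",
                   "LS": "Latin square"}
MODIFIERS = {"F": "factorial", "H": "hierarchical"}

def describe(designation: str) -> str:
    """Expand designations such as 'CRF-pq' into a phrase."""
    letters, _, levels = designation.partition("-")
    for prefix in BUILDING_BLOCKS:
        if letters.startswith(prefix):
            parts = [BUILDING_BLOCKS[prefix]]
            parts += [MODIFIERS[ch] for ch in letters[len(prefix):]
                      if ch in MODIFIERS]
            return " ".join(parts) + f" design with treatment levels {levels}"
    return "unrecognized designation"

print(describe("CRF-pq"))
# -> completely randomized factorial design with treatment levels pq
```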



TABLE 1.1 Classification of Experimental Designs

I. Systematic Designs (selected examples).
   1. Beavan’s chessboard design.
   2. Beavan’s half-drill strip design.
   3. Diagonal square design.
   4. Knut Vik square design.

II. Randomized Designs With One Treatment.
   A. Experimental units randomly assigned to treatment levels.
      1. Completely randomized design. CR-p
   B. Experimental units assigned to relatively homogeneous blocks or groups prior to random assignment.
      1. Balanced incomplete block design. BIB-p
      2. Cross-over design. CO-p
      3. Generalized randomized block design. GRB-p
      4. Graeco-Latin square design. GLS-p
      5. Hyper-Graeco-Latin square design. HGLS-p
      6. Latin square design. LS-p
      7. Lattice balanced incomplete block design. LBIB-p
      8. Lattice partially balanced incomplete block design. LPBIB-p
      9. Lattice unbalanced incomplete block design. LUBIB-p
      10. Partially balanced incomplete block design. PBIB-p
      11. Randomized block design. RB-p
      12. Youden square design. YBIB-p

III. Randomized Designs With Two or More Treatments.
   A. Factorial designs: designs in which all treatments are crossed.
      1. Designs without confounding.
         a. Completely randomized factorial design. CRF-pq
         b. Generalized randomized block factorial design. GRBF-pq
         c. Randomized block factorial design. RBF-pq
      2. Design with group-treatment confounding.
         a. Split-plot factorial design. SPF-p·q
      3. Designs with group-interaction confounding.
         a. Latin square confounded factorial design. LSCF-p^k
         b. Randomized block completely confounded factorial design. RBCF-p^k
         c. Randomized block partially confounded factorial design. RBPF-p^k
      4. Designs with treatment-interaction confounding.
         a. Completely randomized fractional factorial design. CRFF-p^(k−i)
         b. Graeco-Latin square fractional factorial design. GLSFF-p^k
         c. Latin square fractional factorial design. LSFF-p^k
         d. Randomized block fractional factorial design. RBFF-p^(k−i)
   B. Hierarchical designs: designs in which one or more treatments are nested.
      1. Designs with complete nesting.
         a. Completely randomized hierarchical design. CRH-pq(A)
         b. Randomized block hierarchical design. RBH-pq(A)
      2. Designs with partial nesting.
         a. Completely randomized partial hierarchical design. CRPH-pq(A)r
         b. Randomized block partial hierarchical design. RBPH-pq(A)r
         c. Split-plot partial hierarchical design. SPH-p·qr(B)

IV. Randomized Designs With One or More Covariates.
   A. Designs that include a covariate have the letters AC added to the abbreviated designation, as in the following examples.
      1. Completely randomized analysis of covariance design. CRAC-p
      2. Completely randomized factorial analysis of covariance design. CRFAC-pq
      3. Latin square analysis of covariance design. LSAC-p
      4. Randomized block analysis of covariance design. RBAC-p
      5. Split-plot factorial analysis of covariance design. SPFAC-p·q

V. Miscellaneous Designs (selected examples).
   1. Solomon four-group design.
   2. Interrupted time-series design.

Note: The abbreviated designations are discussed later in the chapter.

Three of these designs are described later. Because of space limitations, I cannot describe all of the designs in Table 1.1. I will focus on those designs that are potentially the most useful in the behavioral and social sciences.

It is apparent from Table 1.1 that a wide array of designs is available to researchers. Unfortunately, there is no universally accepted designation for the various designs—some designs have as many as five different names. For example, the completely randomized design has been called a one-way classification design, single-factor design, randomized group design, simple randomized design, and single variable experiment. Also, a variety of design classification schemes have been proposed. The classification scheme in Table 1.1 owes much to Cochran and Cox (1957, chaps. 4–13) and Federer (1955, pp. 11–12).

A quick perusal of Table 1.1 reveals why researchers sometimes have difficulty choosing among them. Because of the wide variety of designs available, it is important to identify them clearly in research reports. One often sees statements such as “a two-treatment factorial design was used.” It should be evident that a more precise description is required. This description could refer to 10 of the 11 factorial designs in Table 1.1.

Thus far, the discussion has been limited to designs with one treatment and one or two nuisance variables. In the following sections I describe designs with two or more treatments that are constructed by combining several building block designs.

FACTORIAL DESIGNS

Completely Randomized Factorial Design

Factorial designs differ from those described previously in that two or more treatments can be evaluated simultaneously