

TBM ORIGINAL RESEARCH

Intentional research design in implementation science: implications for the use of nomothetic and idiographic assessment

Aaron R. Lyon,1 Elizabeth Connors,2 Amanda Jensen-Doss,3 Sara J. Landes,4,5 Cara C. Lewis,1,6

Bryce D. McLeod,7 Christopher Rutt,8 Cameo Stanick,9 Bryan J. Weiner10

Abstract

The advancement of implementation science is dependent on identifying assessment strategies that can address implementation and clinical outcome variables in ways that are valid, relevant to stakeholders, and scalable. This paper presents a measurement agenda for implementation science that integrates the previously disparate assessment traditions of idiographic and nomothetic approaches. Although idiographic and nomothetic approaches are both used in implementation science, a review of the literature on this topic suggests that their selection can be indiscriminate, driven by convenience, and not explicitly tied to research study design. As a result, they are not typically combined deliberately or effectively. Thoughtful integration may simultaneously enhance both the rigor and relevance of assessments across multiple levels within health service systems. Background on nomothetic and idiographic assessment is provided, as well as their potential to support research in implementation science. Drawing from an existing framework, seven structures (of various sequencing and weighting options) and five functions (Convergence, Complementarity, Expansion, Development, Sampling) for integrating conceptually distinct research methods are articulated as they apply to the deliberate, design-driven integration of nomothetic and idiographic assessment approaches. Specific examples and practical guidance are provided to inform research consistent with this framework. Selection and integration of idiographic and nomothetic assessments for implementation science research designs can be improved. The current paper argues for the deliberate application of a clear framework to improve the rigor and relevance of contemporary assessment strategies.

Keywords

Implementation science, Assessment, Measurement, Nomothetic, Idiographic, Research design

Research design and measurement in implementation science

Implementation science is focused on improving health services through the evaluation of methods that promote the use of research findings (e.g., evidence-based practices) in routine service delivery settings [1]. Studies in this area are situated at the end of the National Institutes of Health translational research pipeline (i.e., at T3 or T4), which describes how innovations move from basic scientific discovery, to intervention, and to large-scale, sustained delivery in routine community practice [2]. Implementation research is also inherently multilevel, focusing on individuals, groups, organizations, and systems to change professional behavior and enact service quality improvements [3]. Frequently, the objectives of implementation science include the identification of multilevel variables that impact the uptake of evidence-based practices as well as specific strategies for improving adoption, implementation, and sustainment [4, 5].

1 Department of Psychiatry and Behavioral Sciences, University of Washington School of Medicine, 6200 NE 74th Street, Suite 100, Seattle, WA 98115, USA
2 Department of Psychiatry, University of Maryland School of Medicine, 737 West Lombard Street, Office 420, Baltimore, MD 21201, USA
3 Department of Psychology, University of Miami, P.O. Box 248185, Coral Gables, FL 33124, USA
4 Department of Psychiatry, University of Arkansas for Medical Sciences, 4301 W. Markham St, Little Rock, AR 72205, USA
5 VISN 16 Mental Illness Research, Education, and Clinical Center (MIRECC), Central Arkansas Veterans Healthcare System, 2200 Fort Roots Drive, North Little Rock, AR 72114, USA
6 Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St, Bloomington, IN 47405, USA
7 Department of Psychology, Virginia Commonwealth University, PO Box 842018, 806 W. Franklin Street, Richmond, VA 23284, USA
8 Department of Psychology, Harvard University, 33 Kirkland Street, William James Hall, Cambridge, MA 02138, USA
9 Department of Psychology, University of Montana, Skaggs Building Room 143, Missoula, MT 59812, USA
10 University of North Carolina at Chapel Hill, 1102-C McGavran-Greenberg Hall, Chapel Hill, NC 27516, USA

Correspondence to: A. Lyon, [email protected]

Cite this as: TBM 2017;7:567–580
doi: 10.1007/s13142-017-0464-6

© Society of Behavioral Medicine 2017

Implications

Research: Implementation research studies should be designed with the intentional integration of idiographic and nomothetic approaches for specifically stated functional purposes (i.e., Convergence, Complementarity, Expansion, Development, Sampling).

Practice: Implementation intermediaries (e.g., consultants or support personnel) and health care professionals (e.g., administrators or service providers) are encouraged to track (1) employee and organizational factors (e.g., implementation climate), (2) implementation processes and outcomes (e.g., adoption), and (3) individual and aggregate service outcomes using both idiographic (comparing within organizations or individuals) and nomothetic (comparing to standardized benchmarks) approaches when monitoring intervention implementation.

Policy: The intentional collection and integration of idiographic and nomothetic assessment approaches in implementation science is likely to result in data-driven policy decisions that are more comprehensive, pragmatic, and relevant to stakeholder concerns than either approach alone.

TBM page 567 of 580

As research studying the implementation of effective programs and practices in health care has advanced, diverse research designs have emerged to support nuanced evaluation [6, 7]. All implementation studies involve design decisions, such as determinations about whether the identified research questions call for comparisons within-site, between-sites, or both; whether and how to incorporate randomization; as well as how to balance internal and external validity [6, 8]. In implementation science, there is also growing emphasis on practical research designs that remain rigorous while incorporating the local context and stakeholder perspectives, with the goal of accelerating and broadening the impact on policy and practice [9, 10]. Design decisions often have clear implications for the selection of assessment approaches (e.g., identification of instruments, determinations about how resulting data will be compiled and analyzed), which are one of the most concrete manifestations of study design. Nevertheless, implementation research designs – and, by extension, assessment paradigms – remain underdeveloped relative to those for more traditional efficacy or effectiveness studies [6]. The current paper offers strategic guidance to achieve practical research design solutions for complex implementation research by intentionally blending two disparate, but complementary, measurement traditions: idiographic and nomothetic.

Different terms are relevant to the use of idiographic and nomothetic assessment. As described in more detail below, idiographic assessment goes by multiple names, each of which underscores its individualized focus. These terms include "intra-unit," "person/unit-centered," and "single-case," in which "unit" refers to individual clinicians, service teams, or organizations, among others. In contrast to idiographic assessment, nomothetic assessment is also known by terms, such as "inter-unit," "variable-centered," or "group," that communicate its focus on data collected from multiple individuals or organizations to evaluate an underlying construct and classify or predict outcomes. Although we discuss the implications of the other terms and conceptualizations below and sometimes apply them descriptively (e.g., the implications of single-case experimental research for the idiographic approach; nomothetic assessment being variable-centered and producing inter-unit information), we use the terms idiographic and nomothetic to refer directly to these two assessment approaches in this paper.

Intentional measurement decisions

"What gets measured gets done" has long been a rallying cry for health care professionals interested in performance measurement and quality improvement [11]. Simultaneously, cautions that "If everything gets measured, nothing gets done" [12] underscore the need for deliberate and parsimonious measurement. Examples of literature emphasizing effective assessment and outcome surveillance can be found in such diverse fields and sectors as public health [13], medicine [14, 15], education [16, 17], mental/behavioral health [18], athletic training [19], business management [20], agriculture [21], and government [22]. As detailed by Proctor and colleagues [23], "improvements in consumer well-being provide the most important criteria for evaluating both treatment and implementation strategies" (p. 30). Because implementation science may focus on intervention outcomes (e.g., service system improvements, individual functioning) or implementation outcomes (e.g., fidelity, acceptability, cost), research designs rely on utilizing assessment strategies that can effectively identify and address both types of outcomes in ways that are practical, valid, and relevant to a wide range of stakeholders.

Regarding intervention outcomes, problem identification and outcome measurement strategies are receiving international attention [24–26], and are central to studies evaluating the effectiveness of programs, demonstrating the efficiency of such programs to funders, and determining where to target quality improvement efforts [27]. Outcomes of interest may include reductions in symptoms or disease states, improved service recipient functioning, adjustments in services received, or changes to the environmental contexts in which service recipients interact, among others [28].

Implementation science also frequently focuses on the rigorous measurement of implementation outcomes (e.g., feasibility, fidelity, sustainment, cost; [4, 29, 30]) across multiple levels. Indeed, many models of implementation science and practice rely on targeted measurement of key variables, such as the organizational context, intervention acceptability, the quality of practitioner training and post-training supports, or the delivery of key intervention content [18, 31, 32]. Additionally, due to its multilevel nature, the assessment of implementation outcomes carries particular challenges [13]. For example, implementation measurement instruments oftentimes lack established psychometric properties or are impractical for use in community contexts [18, 29, 33].

Despite advances, strategic guidance is necessary to improve intentional research design and associated measurement of intervention and implementation outcomes in a manner that supports rigorous and relevant implementation science. Such a comprehensive design approach necessitates attention to target outcome specification, instrument selection and application, and grounding in the breadth of relevant measurement traditions. Further, there is growing recognition that complex assessment batteries developed for traditional efficacy studies focused on individuals and organizations are unlikely to be appropriate or feasible for use in implementation research [23]. Multiple assessment perspectives, beyond those typically used in efficacy trials, are necessary to effectively address implementation research questions.

Two assessment traditions – nomothetic and idiographic – are relevant to the comprehensive, intentional design of implementation research studies. Nomothetic assessment focuses on inter-unit information that allows for comparison of a specific unit of analysis (e.g., individuals, teams, organizations) to data aggregated from similar assessments across multiple units on an identified construct of interest (e.g., depression symptoms, organizational readiness), typically for the purposes of classification and prediction. In contrast, idiographic assessment focuses on intra-unit variability and is primarily concerned with information that allows for the comparison of a specific unit of analysis to itself across contexts or time on a target variable. Both methods are relevant across a wide variety of system levels (e.g., individual, team, organization) and target assessment domains. As indicated below, the distinction between these two assessment traditions lies more in design-driven assumptions about how to interpret the resulting data than in the data collection process or tool itself. Although both nomothetic and idiographic assessment approaches are used in the contemporary literature, it is rare that these approaches are explicitly defined, differentiated, or justified in the context of the research design, question, or assumptions and implications of findings. Most of the extant literature appears to reflect the nomothetic tradition, with a preponderance of research focused on the development and use of instruments intended to yield nomothetic data [29, 34]. Virtually no guidance exists to support the intentional use of these two assessment approaches in implementation research, which provides opportunities for the strategic integration of nomothetic and idiographic data. This is particularly unfortunate given that deliberate integration of these two methods has the potential to balance and maximize the measurement rigor and practical qualities of existing assessment approaches.

To advance health care implementation research, the current paper reviews information about nomothetic and idiographic approaches to inform their practical selection – and, when appropriate, integration – to match a range of implementation research questions and study designs. As indicated above, effective assessment in implementation science requires attention to both implementation and intervention outcome constructs, identification of feasible yet rigorous methods, and integration of multiple types of data. Below, we review nomothetic and idiographic assessment traditions and detail their theoretical foundations, characteristics, study designs to which they are best suited, and example applications. Finally, strategic guidance is provided to support the intentional integration of nomothetic and idiographic assessments to answer contemporary implementation research questions.

Nomothetic and idiographic approaches

Nomothetic and idiographic assessment approaches may be differentiated by the ways in which the data resulting from measurement instruments are intended to be used and interpreted in the context of a research design. As a result, data measured with any given instrument may be used in either an idiographic or nomothetic manner, depending on the specific research questions of interest. For this reason, the evidence for a particular instrument's idiographic or nomothetic use should be evaluated based on the score (rather than instrument) reliability and validity across the circumstances most relevant to the intended use of the instrument in a particular study.

Nomothetic assessment

Nomothetic assessment is primarily concerned with classification and prediction, and is closely associated with the framework of classical test theory [35]. This assessment approach emphasizes collecting information that allows for comparison of a specific unit of analysis (e.g., individuals, teams, organizations) to other units or to broader normative data. Consistent with this goal, these assessment strategies are based on nomothetic principles focused on discovering general laws that apply to large numbers of individuals, teams, or organizations [36]. The nomothetic approach usually deals with particular constructs or variables (e.g., intelligence, depression, organizational culture), as opposed to the idiographic approach, which is focused on individual people or organizations. As such, assessment instruments intended to produce nomothetic data are typically developed, tested, and interpreted through classical test theory, in which large amounts of assessment data are assumed to provide an estimate of a relatively stable underlying construct that cannot be directly assessed for an entire population. Nomothetic assessment focuses on learning more about the construct (as opposed to the person or individual unit) via correlations with instruments designed to assess other constructs. Within classical test theory, variation in an individual's scores is typically seen as measurement error.

Instruments intended to yield nomothetic data are typically standardized, meaning they have rules for consistent administration and scoring. In their development process, they are ideally subjected to large-scale administration across numerous units. These data can then be used to evaluate an instrument's nomothetic score reliability (defined as whether the instrument will yield consistent scores across administrations) and score validity (correlational approaches to establish whether it is measuring what it purports to assess). The means and standard deviations of the scores from these large-scale administrations are used to generate norms for the instrument, which can be used to classify individual scores in terms of where they fall relative to the larger population. For example, the distribution of scores on an instrument within a specific sample could be used to create a "clinical cutoff" that identifies individuals who score two standard deviations above the population mean.

In some research designs, instruments may be selected for the nomothetic purposes of classification, prediction, or inter-unit comparisons. Relevant work in implementation science focused on classification might seek to place units into meaningful groups. As an implementation example, Glisson and colleagues [37] have created a profile system, the Organizational Social Context (OSC) instrument, to characterize the organizational climate of mental health service organizations. Such classification activities can also facilitate probability-based prediction studies. For instance, Glisson and colleagues [37] found that agencies with more positive OSC profiles sustain new practices for longer than agencies with less favorable organizational climates. Research designs intended to evaluate inter-unit differences may use instruments with established norms to make nomothetic comparisons between providers or agencies exposed to different implementation approaches [38, 39] or to collect and compare benchmarking data from one implementation site against randomized controlled trial data [40].
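The normative logic described above (norms built from means and standard deviations, with a "clinical cutoff" two standard deviations above the mean) can be sketched in a few lines of Python. The normative scores and the unit score below are hypothetical and are not drawn from any instrument discussed in this paper:

```python
from statistics import mean, stdev

def make_classifier(norm_scores, cutoff_sd=2.0):
    """Build a nomothetic classifier from (hypothetical) normative data.

    A unit's score is interpreted relative to the norm group: scoring
    `cutoff_sd` standard deviations above the normative mean places the
    unit beyond the clinical cutoff described in the text.
    """
    m, sd = mean(norm_scores), stdev(norm_scores)

    def classify(score):
        z = (score - m) / sd  # standardized (z) score for one unit
        return {"z": z, "above_cutoff": z >= cutoff_sd}

    return classify

# Hypothetical normative sample of symptom-scale scores
norms = [10, 12, 9, 11, 13, 10, 8, 12, 11, 10]
classify = make_classifier(norms)
print(classify(16))  # a unit well above the normative mean
```

The same `classify` function illustrates why nomothetic interpretation depends on the norm group: changing `norms` changes the meaning of every individual score, even though the raw data collection is unchanged.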

Idiographic assessment

Idiographic assessment is concerned with the individual performance, functioning, or change of a specific unit of analysis (e.g., individual, team, organization) across contexts and/or time. It is therefore appropriate for research designs intended to document intra-unit differences – either naturally occurring or in response to a manipulation or intervention. Unlike the nomothetic approach, in which meaning is derived relative to normative or correlational data, the point of comparison in idiographic assessment is the same identified unit. Indeed, the strength of idiographic assessment is the ability to identify relationships among independent and dependent variables that are unique to a particular unit. For this reason, examples of instruments designed to yield idiographic data may also allow for item content to be tailored to the individual unit (e.g., ratings of service recipients' "top problems" [41]). Idiographic assessment is particularly useful for progress monitoring (e.g., across implementation phases or over the course of an intervention) or to compare a single unit's performance in different scenarios (e.g., an individual's or organization's success using different evidence-based interventions, serially or in parallel) to inform the tailoring of specific supports to promote that unit's success. However, as the point of comparison in idiographic assessment is always a single person, group, or organization, it is less useful than nomothetic assessment for comparisons across units.

The idiographic assessment tradition is closely associated with single-case experimental designs. A requirement of single-case experimental design is that it must be possible to assess the process under study in a reliable way over repeated occasions [42]. The psychometric properties of idiographic assessment thus emphasize situational specificity (processes will vary across contexts) and temporal instability (processes will vary over time). A major strength of idiographic assessment is its ability to detect meaningful intra-unit differences in performance across contexts and time, along with the factors that might account for those differences.

The use of idiographic assessment aligns with generalizability theory [43], which is a statistical framework for developing and evaluating the properties of instruments that yield intra-unit data. In contrast, classical test theory typically emphasizes psychometric properties (e.g., internal consistency, high test-retest reliability) that work against the core tenets of idiographic assessment (e.g., repeated assessment across time and context). As opposed to classical test theory, which considers only one source of error, generalizability theory evaluates the performance of an instrument across facets relevant to different applications of the instrument. Five facets are usually considered: forms, items, observers, time, and situation [43, 44]. The application of generalizability theory to idiographic assessment allows researchers to pinpoint potential sources of variation (facets) that systematically influence scores. If, for example, the context in which an instrument is used (e.g., inpatient versus outpatient service setting within a single organization) accounts for significant variation in scores, then scores would not be considered to be consistent across situations. This variation may be problematic if the instrument is intended to measure a construct hypothesized to remain constant across situations (e.g., practitioner professionalism), but not for constructs where variability is anticipated (e.g., amount of time practitioners devote to each patient). In fact, variation can provide very useful information if understanding differences across contexts or time is relevant to the study (e.g., effectiveness of an intervention for patients in clinic versus home or community settings).

It is important to note that many of the psychometric concepts relevant to nomothetic assessment (e.g., test-retest reliability) do not directly apply to idiographic assessment [45]. In fact, there have been debates about which psychometric categories are relevant when instruments are applied idiographically [44, 45]. Perhaps for this reason, recent efforts to identify important psychometric dimensions have focused primarily upon nomothetic assessment [46] and do not identify important properties of idiographic assessment (although there are exceptions in which idiographic assessment approaches are described in the context of mental health service delivery [47]).

Implementation research has started to investigate the use of single-case experimental designs. These designs differ from observational case studies in that conditions are typically varied within subject over time. Idiographic assessment is best suited to support single-case series research in which information about the performance of a particular unit is studied across contexts and/or time. From a practical standpoint, single-case experimental designs offer cost advantages relative to group designs, especially when research is focused on changes at the unit level (e.g., agency or community level [42]). From an empirical standpoint, some have suggested that single-case experimental designs (i.e., in which a unit is randomized to different conditions sequentially, and assessments compared to a baseline period) are the ideal way to understand the relation between independent and dependent variables and the contextual variables that impact them at the individual, organization, or community level [42, 48]. Single-case experimental designs are thus ideal for theory building [36] related to implementation science that could subsequently be tested in group-based designs [49]. Group-based designs (e.g., randomized controlled trials; RCTs) are good vehicles for evaluating whether principles identified at the idiographic level generalize to groups. However, to achieve these aims, the measurement approaches need to be matched to the design, something that does not consistently occur.
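As an illustrative sketch of the generalizability-theory reasoning above, a one-facet G-study (units crossed with occasions) can decompose observed scores into unit, occasion, and residual variance components using standard random-effects ANOVA estimators. The fidelity-rating scenario and all numbers below are hypothetical:

```python
def variance_components(X):
    """One-facet G-study for a units x occasions score matrix X.

    Decomposes score variance into unit, occasion, and residual
    components (random-effects ANOVA estimators, truncated at zero).
    Large occasion or residual components signal that scores are not
    consistent across time, which is a problem for constructs assumed
    to be stable but expected, and informative, when variability across
    occasions is itself of interest.
    """
    n_u, n_o = len(X), len(X[0])
    grand = sum(sum(row) for row in X) / (n_u * n_o)
    row_means = [sum(row) / n_o for row in X]
    col_means = [sum(X[i][j] for i in range(n_u)) / n_u for j in range(n_o)]

    ms_unit = n_o * sum((m - grand) ** 2 for m in row_means) / (n_u - 1)
    ms_occ = n_u * sum((m - grand) ** 2 for m in col_means) / (n_o - 1)
    ms_res = sum(
        (X[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n_u) for j in range(n_o)
    ) / ((n_u - 1) * (n_o - 1))

    return {
        "unit": max((ms_unit - ms_res) / n_o, 0.0),
        "occasion": max((ms_occ - ms_res) / n_u, 0.0),
        "residual": ms_res,
    }

# Hypothetical fidelity ratings: 3 clinicians each observed on 4 occasions
ratings = [
    [4.0, 4.2, 3.9, 4.1],
    [3.1, 3.0, 3.2, 2.9],
    [4.8, 4.7, 4.9, 4.8],
]
print(variance_components(ratings))
```

A full G-study would extend this to the other facets named in the text (forms, items, observers, situation); the two-way case is shown only to make the idea of attributing variance to facets concrete.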

Opportunities for integrating nomothetic and idiographic assessment methods

The inter-unit vs. intra-unit approaches of nomothetic and idiographic assessment, respectively, may suggest differences in their application as well as a certain level of mutual exclusivity. Though differences between these approaches are apparent, nomothetic and idiographic approaches are complementary. As indicated above, the primary distinctions lie in the manner in which information or scores from instruments are used and interpreted. Consequently, data collected from many instruments may be flexibly used in an idiographic or nomothetic manner given the appropriate research design considerations. Scores from the same instrument may be employed to track a single unit's change over time (idiographically) or by aggregating across units and determining meaningful cutoffs (nomothetically). In addition, some study designs may call for collection of separate idiographic and nomothetic data, but for complementary purposes. Integrated research designs that blend nomothetic and idiographic approaches are a good match for certain implementation studies. For instance, nomothetic data may allow a research team to evaluate how a group of community clinics compare on treatment adherence, attendance, or outcomes. This information can help to generate hypotheses that may be subsequently tested at the unit level. In contrast, idiographic data can help to identify what factors are impacting adherence, attendance, or outcomes in a clinic and/or test causal relations at the unit level. Additional detail – and examples – about the ways research designs may relate to nomothetic and idiographic data are provided in a preliminary taxonomy below.

A growing body of research exemplifies potential ways in which the overlap between nomothetic and idiographic assessment approaches can be leveraged surrounding implementation and intervention outcomes. For instance, studies have shown the benefit of employing instruments initially developed from a nomothetic assessment tradition in idiographic ways, such as conducting single-case studies of one unit over time [50]. Examples also exist of studies that have used instruments developed from an idiographic perspective to collect data that were ultimately evaluated nomothetically. For instance, as part of a large, community-based mental health treatment effectiveness trial for children and adolescents [51], interviews were used to determine individualized (i.e., idiographic) “top problems,” which could serve as intervention targets for participants [41]. Participants were asked

to describe their symptoms in their own words, and then to rank order these problems. The resulting list of each participant's top three problems was then routinely assessed (i.e., the Youth Top Problems assessment) throughout treatment as an indicator of treatment progress (idiographic approach). In addition, the authors applied nomothetic criteria to the Youth Top Problems, evaluating the psychometric properties of the resulting ratings (including score validity via correlations with gold-standard nomothetic instruments) and, ultimately, using aggregate Youth Top Problems ratings to evaluate group outcomes in a clinical trial [51].

The discussion above is intended to highlight some

of the potential benefits of combining nomothetic and idiographic approaches. Nevertheless, each carries its own set of disadvantages as well. Notably, leading nomothetic strategies often demonstrate low feasibility and practicality [52]. Instruments developed from a nomothetic tradition may be subject to proprietary rights and copyright restrictions. As a result, highly specialized or overly complicated assessment instruments may be inaccessible or irrelevant within certain community settings [29]. In addition, less research has focused on the score validity of idiographic methods as compared to nomothetic methods, and very little research has investigated the combined use of both approaches. Thus, to maximize the potential of both nomothetic and idiographic assessment approaches in advancing implementation research, a more complete understanding of the optimal scenarios, research questions, and study designs that are conducive to each approach – or the combination of approaches – is essential.
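The dual use described above – the same scores interpreted idiographically or nomothetically – can be sketched in a few lines. The clinics, ratings, and cutoff below are invented for illustration; only the interpretive logic reflects the distinction drawn in the text.

```python
# Hypothetical monthly fidelity ratings (0-100) for three clinics.
# The same scores support both assessment traditions.
fidelity = {
    "clinic_a": [62, 65, 71, 74],
    "clinic_b": [80, 79, 83, 85],
    "clinic_c": [55, 54, 58, 57],
}

def idiographic_trend(scores):
    """Intra-unit use: change of one unit relative to itself over time."""
    return scores[-1] - scores[0]

def nomothetic_flags(all_scores, cutoff):
    """Inter-unit use: compare each unit's mean to an aggregate cutoff."""
    return {unit: sum(s) / len(s) >= cutoff for unit, s in all_scores.items()}

for clinic, scores in fidelity.items():
    print(clinic, "change over time:", idiographic_trend(scores))

# A cutoff of 70 is an arbitrary illustrative threshold, not a published norm.
print(nomothetic_flags(fidelity, cutoff=70))
```

The point of the sketch is that no new data collection is needed to switch interpretations: the design decision, not the instrument, determines whether the comparison is intra-unit or inter-unit.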

Guidelines for integrating nomothetic and idiographic methods

Guidelines for integrating nomothetic and idiographic measurement approaches are warranted to support their combined use in implementation research studies. We suggest a series of critical steps or decision points to guide the selection and application of nomothetic and idiographic assessment (see Figures 1 and 2). These steps are: (1) clearly articulate study goals, (2) select an optimal study design, and (3) ensure the measurement approach attends to contextual constraints.

First, the research or evaluation objective is of paramount importance (i.e., clear articulation of the goals of the study), which is likely to narrow the scope of variables to those central to the investigation and reveal whether the study questions are best answered by an intra-unit (warranting an idiographic approach) or inter-unit (warranting a nomothetic approach) measurement process. At this step, researchers should ask themselves whether they are most interested in how a unit performs relative to other units (nomothetic) or to itself (idiographic), and whether they want to focus on understanding how a process unfolds

ORIGINAL RESEARCH

TBM page 571 of 580

for a particular unit over time or contexts (idiographic) versus understanding how a unit compares to other units on a particular construct (e.g., comparing settings of interest to other settings; nomothetic). Without a priori articulation of the uses and implications of the data obtained, measurement determinations cannot be specified.

Second, the optimal design to address the research question should be selected. For instance, observational studies are common in implementation science – and often preferred by community stakeholders – given the relatively frequent opportunity to evaluate a naturalistic implementation and the infrequent opportunity to randomly assign providers or sites to different conditions [6, 7]. A naturalistic, observational design may call for idiographic approaches, whereas a randomized trial with design controls may be better suited for

nomothetic approaches. Furthermore, attention to the timing of assessments, weighting of one assessment approach relative to another, and functional purpose (each of these is discussed further below) is critical. Even within observational designs, there are opportunities for a project to be cross-sectional and likely nomothetic (e.g., to determine the presence and level of implementation barriers and facilitators, relative to norms) or longitudinal and likely idiographic (e.g., to track changes in fidelity or other implementation outcomes over time). When combining idiographic and nomothetic approaches in a single study, selection of a design should include consideration of these structural and functional elements of that integration (see Figure 2 and below).

Third, and related to the second factor, given that

the “real world” serves as the laboratory for


Fig. 1 | Steps to Guide Nomothetic and Idiographic Assessment



implementation science and that evaluation measures have the potential to inform practical implementation efforts, feasible, locally relevant, and actionable assessment approaches should be considered a top priority

Step 1: Goals. Objective: Determine how key study questions can be answered by idiographic (intra-unit) and/or nomothetic (inter-unit) comparisons.

Q1a: For each study question (can be primary and secondary, for instance), are you interested in how a unit performs relative to other units or to itself? (select all that apply for each study question)
  Itself = idiographic
  Other units = nomothetic

Q1b: For each study question, do you want to focus on understanding a process in depth and/or testing specific processes? (select all that apply for each study question)
  Process in depth = idiographic
  Specific process = nomothetic

Step 2: Design. Objective: Determine the research design to incorporate both idiographic and nomothetic data with intentional decisions about structure and function.

Q2a: Structure – Timing: Will the two data approaches be obtained sequentially or simultaneously? (select one)
  Sequentially = ➔
  Simultaneously = +

Q2b: Structure – Weighting: Will one data approach be emphasized over the other? (select one)
  Single emphasis or weighting, one approach is “primary” = IDIO/nomo or NOMO/idio
  Dual emphasis or equivalent weighting, both approaches are “primary” = IDIO and NOMO

Q2c: Function: What is the functional purpose of integrating these approaches? (select one)
  Are both approaches used to answer the same question? = Convergence
  Are both approaches used to answer a related question or series of questions? = Complementarity
    Evaluation: when one approach is used to answer questions raised by the other
    Elaboration: when one approach provides an in-depth understanding of the other
  Is one approach used to build on questions raised by the other? = Expansion
  Is one approach used to contribute to or develop new research products (e.g., measures, models, interventions)? = Development
  Is one approach used to define or identify the participant sample for collection and analysis of data representing the other? = Sampling

Step 3: Feasibility and Practicality. Objective: Ensure measurement attends to context.

Q3a: Does your design and its measurement satisfy all of the following questions:
  Is the measure feasible for stakeholders?
  Is the measure relevant to stakeholders?
  Will the data be actionable?

Fig. 2 | Nomothetic and Idiographic Integration Checklist



[29]. These study considerations have clear implications for using nomothetic and/or idiographic approaches. For example, if electronic health records house patient-centered progress data, an idiographic approach may be preferred (compared to implementing a nomothetic approach) in a research study focused on providing feasible, actionable, and real-time feedback to clinicians about patient progress. Unless the goals of the study (per the first factor, above) truly necessitate using data for prediction or classification, using idiographic data already on hand is likely preferable for practical reasons. To offer additional specific direction in this area, we adapt an existing taxonomy below to guide the combination of nomothetic and idiographic approaches.
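The kind of idiographic, record-based feedback described above can be illustrated with a minimal sketch. The weekly scores and the decision rule are hypothetical (a real feedback system would use validated decision rules); the point is that the comparison is to the patient's own baseline, not to an external norm.

```python
# Hypothetical weekly symptom ratings pulled from a patient's record
# (lower = better). The rule is idiographic: the latest score is judged
# against that patient's own earlier mean, not against a normative cutoff.
def progress_feedback(scores, min_sessions=3):
    """Return a clinician-facing status string for one patient."""
    if len(scores) < min_sessions:
        return "insufficient data"
    baseline = sum(scores[:-1]) / len(scores[:-1])
    if scores[-1] <= baseline:
        return "on track relative to own baseline"
    return "review: latest score worse than own baseline"

print(progress_feedback([20, 18, 15, 16]))  # improving overall
print(progress_feedback([14, 15, 19]))      # latest score worse than own mean
```

Because the rule needs only data already in the record, it illustrates the feasibility argument above: actionable feedback without collecting new normative data.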

Use of an existing taxonomy to inform integration

Guidelines for the integration of nomothetic and idiographic assessment approaches, based on study design, can also be informed by established taxonomies for integrating different types of data in implementation science. For instance, designs that mix quantitative and qualitative methodologies have been highlighted as robust research approaches that yield more powerful and higher-quality results than either method alone [53, 54]. Although the current paper is focused on two distinct quantitative assessment approaches, we contend that frameworks for thinking about mixed methods research are useful to the extent that they can guide the intentional and systematic integration of any disparate methodologies that produce unique types of data. Similar to the considerations highlighted above, best practices in mixed methods research involve sound conceptual planning to fit the needs of the research question and study design, with purposeful collaboration among scientists from each methodological tradition and a clear acknowledgment of philosophical differences in each approach [53, 54]. Palinkas et al. [55] proposed a taxonomy for research designs that integrate qualitative and quantitative data in implementation science. They proposed three elements of mixed methods that may also be considered when integrating nomothetic and idiographic assessment approaches: structure, function, and process. Structure refers to the timing and weighting of the two approaches, function to the relationship between the questions each method is intended to answer (i.e., the same question versus different questions), and process to the ways in which different datasets are linked or combined. In all, seven structural arrangements, five functions of mixed methods, and three processes for linking quantitative and qualitative data emerged.

Although nomothetic and idiographic approaches

are quantitative, the elements articulated in the mixed methods taxonomy proposed by Palinkas and colleagues can help guide their integration. Consistent with the objectives of a taxonomy, this approach is intended to organize research designs, provide clear examples, promote their use in the future, offer a

pedagogical tool, and establish a common language for the design variations within implementation science [56]. Once research goals are set and design options considered, mixing nomothetic and idiographic approaches according to structure and function can optimize project evaluation. Because nomothetic and idiographic data are both represented quantitatively, the dimension of process (i.e., the ways datasets are linked or combined) is less relevant. It has therefore been omitted below.

Table 1 provides an overview of a preliminary application of the taxonomy to nomothetic (“NOMO”) and idiographic (“IDIO”) measurement approaches. As outlined by Palinkas et al. [55], the structure of the design decisions illustrated in this table varies based on the timing and weighting (or emphasis) of each approach. Timing is represented by the “➔” and “+” symbols. When the timing of the two approaches is sequential, such that one is conducted prior to the other, this is indicated by the “➔” symbol, often reflecting that the first measurement approach informed the selection, execution, or interpretation of the second. When the timing of the two approaches is concurrent, this is indicated by the “+” symbol. Weighting or emphasis is indicated by capitalization, with the primary method indicated by capital letters (e.g., “IDIO” or “NOMO”). In application, it may be challenging to identify whether IDIO or NOMO is emphasized in studies that purposefully integrate both. Although authors rarely make this distinction explicitly, the weighting or emphasis is most often aligned with whichever approach reflects the “primary” research question. Nevertheless, this decision is frequently an arbitrary one that falls to the investigative team and their interpretation of study priorities and, when authors are unclear, this task then falls to the reader. Indeed, in our review of studies in the next section (Table 1), determinations about weighting were made by our team out of necessity, based on information reported in the published studies. The researcher is also cautioned against conceptualizing this as a distinction between proximal (as “primary”) and distal (as “secondary”) outcomes. A distal outcome, such as patient morbidity, may in fact answer a critical primary research question, such as the effectiveness of an initiative designed to implement an evidence-based health intervention for a particular disease state. Despite some subjectivity, weighting offers a useful common language to communicate and discuss critical aspects of study design.

Examples of idiographic/nomothetic integration

The literature reflecting integrated idiographic and nomothetic measurement approaches in implementation science is limited and focused on implementation and intervention outcomes, so we identified study design examples from both implementation science and health services research that (1) use data for idiographic and nomothetic purposes and (2) appear to represent one or more of the structural and functional



Table 1 | Adapted Taxonomy for Idiographic and Nomothetic Integration

Structure
  IDIO ➔ nomo: Sequential collection and analysis of data for idio and nomo use, beginning with idio use of data for the primary purpose of (IDIO) intra-unit comparison across contexts and time. Example: see text.
  idio ➔ NOMO: Sequential collection and analysis of data for idio and nomo use, beginning with idio use of data for the primary purpose of (NOMO) inter-unit comparison to aggregate data for classification or prediction. Example: [57]
  nomo ➔ IDIO: Sequential collection and analysis of data for idio and nomo use, beginning with nomo use of data for the primary purpose of (IDIO) intra-unit comparison across contexts and time. Example: [58]
  NOMO ➔ idio: Sequential collection and analysis of data for idio and nomo use, beginning with nomo use of data for the primary purpose of (NOMO) inter-unit comparison to aggregate data for classification or prediction. Example: [34]
  idio + NOMO: Simultaneous collection and analysis of data for idio and nomo use, for the primary purpose of (NOMO) inter-unit comparison to aggregate data for classification or prediction. Example: [59]
  IDIO + nomo: Simultaneous collection and analysis of data for the primary purpose of (IDIO) intra-unit comparison across contexts and time. Example: [60]
  NOMO + IDIO: Simultaneous collection and analysis of data for idio and nomo use, giving equal weight to both approaches. Example: [61]

Function
  Convergence: Using both idio and nomo approaches to answer the same question, either through comparison of results to see if they reach the same conclusion (triangulation) or by converting a dataset from one use into another (e.g., data collected for idio purposes being used nomothetically or vice versa). Example: [60]
  Complementarity: Using each approach to answer a related question or series of questions for purposes of evaluation (e.g., using data for nomo use to answer individual idio questions or vice versa) or elaboration (e.g., using idio data to provide depth of understanding and nomo data to provide breadth of understanding). Elaboration example: [59]
  Expansion: Using one approach to answer questions raised by the other (e.g., using a dataset for idio use to explain results from data for nomo use). Example: [60]
  Development: Using one approach to answer questions that will enable use of the other to answer other questions (e.g., develop data collection measures, conceptual models, or interventions). Example: see text.
  Sampling: Using one approach to define or identify the participant sample for collection and analysis of data representing the other (e.g., selecting interview informants based on responses to a survey questionnaire). Example: [58]

Note. The overarching categories and the specific definitions for the taxonomy are directly from Palinkas et al. [55]. Minor adaptations were made to translate the quantitative/qualitative integration to the current idiographic/nomothetic integration. Process was omitted above because all data are quantitative and thus relatively easily integrated.



categories outlined in Table 1. Rather than conduct a comprehensive literature review, we searched the literature for illustrative examples of the seven structures and five functions [55]. Below, each structural and functional category is presented with an example from the literature and/or hypothetical examples relevant to implementation research. The examples demonstrate the value added to study goals, design, and practicality when using an idiographic and nomothetic approach to assessment in an integrated research design.

Structure—The seven structural categories are defined based on the timing and weighting of nomothetic and idiographic data in an integrated design. The first four structures presented represent sequential (“➔”) data collection, and the last three represent simultaneous (“+”) data collection.

IDIO ➔ nomo—The first structural category in Table 1

is IDIO ➔ nomo, in which a study design begins with the primary purpose of intra-unit comparison across contexts or time (IDIO). For instance, a hospital may examine an individual clinic or department performance indicator (e.g., no-show rate, wait times, and/or 30-day readmission rates) across time (e.g., one fiscal year to the next). The primary idiographic (IDIO) goal of using these intra-unit data could be to identify longitudinal patterns of clinic or department performance over time as well as factors that influence performance for the particular clinic or department (e.g., time of year, type of patient). A secondary nomothetic goal may be to compare performance across clinics or departments to hospital norms or national performance standards to help determine if a particular clinic should be more closely examined for its performance strengths or needs.

Idio ➔ NOMO—The next structural category, idio ➔

NOMO, also seeks to collect data for idiographic purposes initially, followed by a nomothetic approach, but the primary purpose is nomothetic classification or prediction (NOMO). Hirsch and colleagues' [57] work offers a possible example of using data both idiographically and nomothetically for patients in primary care experiencing lower back pain. The idiographic data were patient self-reported pain intensity and a visual analog measure of health. Emotional and physical symptoms measured by several standardized symptom checklists were intended to yield data used for nomothetic comparison to other patients. A cluster analysis was performed with these variables, which identified four groups that were used to classify patients and predict health care utilization and costs. However, it could also be argued that, because both approaches were actually used to determine patient groups, a more specific subcategory (e.g., [idio + nomo] ➔ NOMO) might be an even more accurate classification. Importantly, Hirsch and colleagues selected a design and analyses to account for both intra- and inter-client variation to classify patients and predict the outcome of interest for patient groups.

Nomo ➔ IDIO—Lambert and colleagues [58] provide

an example of the nomo ➔ IDIO structure, in which the nomothetic approach precedes an idiographic

approach, for a primarily IDIO purpose. They collected the Outcome Questionnaire-45 (OQ-45) from all patients to classify and predict individual patient progress in psychosocial treatment based on established normative clinical cutoffs. This nomothetic approach was used to identify “signal-alarm cases” (i.e., individual patients at risk of poor treatment response, based on their OQ-45 scores). For these selected at-risk cases, intensive individual progress monitoring over time (IDIO data collection) was initiated to maximize individual patient outcomes. The authors recommend that individual patient data be used for feedback systems to therapists in routine care to support case-level progress monitoring and treatment planning.

NOMO ➔ idio—In a NOMO ➔ idio structure, a

primary nomothetic approach is followed by a secondary idiographic approach. Coomber and Barriball [34] conducted a systematic literature review on a research question that reflects this structure. They sought to understand how specific independent variables designed for nomothetic use (e.g., standardized instruments of job satisfaction, work quality, professional commitment, and work stress) predict subsequent intra-unit dependent variables over time (e.g., individual employee intent to leave or stay, and actual decisions thereof) among hospital-based nurses. The authors concluded that the studies reviewed used primarily nomothetic approaches and were frequently underspecified for the purpose of understanding and supporting individual intentions and decisions at the nurse or nursing-ward level. These findings reflect conclusions from our literature search, in which NOMO ➔ idio studies were rare. However, a study seeking to answer this question could use a design whereby a standardized instrument of work stress is collected across nurses and nursing sites to help predict turnover rates. This primary nomothetic approach would yield cross-site findings about the distribution and norms of work stress within the sample. These primarily “NOMO” findings could also inform an idiographic approach to track individual nurses' work stress levels over time and/or across work contexts based on individually tailored stress management interventions.

Idio + NOMO—This structure of idio + NOMO is the

first of three structures listed in Table 1 that reflect simultaneous (“+”) collection and analysis of data, instead of sequential data collection (“➔” above). This design structure (idio + NOMO) is for the primary purpose of comparing scores to aggregate data for classification or prediction (NOMO). This is illustrated in a study by Fisher, Newman, and Molenaar [59], where the primary focus was on analyzing treatment outcome for the entire sample of patients. However, this was accomplished by aggregating individual patterns over time (e.g., “idio” daily diary ratings) in a dataset to examine treatment outcome for the overall patient sample. Thus, the main emphasis was on nomothetic prediction but derived from an approach originally designed to produce idiographic data (idio + NOMO). Moreover, findings indicated that the contribution of individual (idiographic) pattern variability



of diary ratings significantly moderated the effect of time in treatment on NOMO-measured treatment outcomes.

IDIO + nomo—Simultaneous collection and analysis

of idiographic and nomothetic data for the primary purpose of exploration, hypothesis generation, or intra-unit analysis is represented by the IDIO + nomo structure. Sheu, Chae, and Yang [60] offer an example of how idiographic and nomothetic approaches can be simultaneously combined to understand implementation of an information management system (called “ERP”), wherein the countries (or sites) are regarded as the individual unit. They simultaneously collected secondary data from a literature review on multinational implementation of ERP (nomo) as well as case research from six companies in different countries involved in ERP implementation (IDIO) to understand how individual countries' (IDIO) differences impact implementation. The design included idiographic case research to systematically study individualized strategies based on time since implementation as well as similarities and differences in implementation between different units (countries). Results informed the identification of local, individualized (idiographic) characteristics that specific countries needed to consider (e.g., language, culture, politics, governmental regulations, management style, and labor skills) to support decision making about adoption/implementation of the information system (IDIO + nomo).

NOMO + IDIO—The last structural category, NOMO

+ IDIO, describes a research design whereby data are simultaneously collected to inform intra-unit and inter-unit study goals, which are equally prioritized. For instance, Wandner et al. [61] examined the effect of a perspective-taking intervention on pain assessment and treatment decisions at the intra-unit level across time (IDIO) and the inter-unit level (NOMO). The research team collected participant pain assessment and treatment ratings on a scale of 1 to 10 and the GRAPE questionnaire to assess pain decisions based on sex, race, and age. These data were used for nomothetic and idiographic applications. For instance, aggregate data were compared between perspective-taking and control groups for the purpose of predicting change over time (NOMO goal for inter-unit comparison, prediction). Also, individual participant decision-making policies based on their individual weighting of different contextual cues were examined to understand IDIO patterns and variability of patient decisions. Consistent with the goals of their study, combining these approaches facilitated empirical investigation of idiographic participant variation across contexts as well as classifying nomothetic implementation outcomes of a perspective-taking intervention.

Function—Drawing from Palinkas et al.'s [55] framework, we have identified five functional purposes for designing an implementation research study to incorporate both idiographic and nomothetic approaches to using data. These include Convergence, Complementarity, Expansion, Development, and Sampling. For each function below, one of the aforementioned

studies and/or hypothetical examples is referenced for the function(s) it illustrates.

Convergence—Convergence represents the use of both

approaches to answer the same question. This may be through comparison of results to determine whether the same conclusion is reached (i.e., triangulation) or by converting a dataset from one type into another (e.g., data originally intended for idiographic use being used nomothetically or vice versa). The Sheu, Chae, and Yang [60] article provides an example of triangulating both case research (idio) and a secondary literature review (nomo) to identify specific factors that influence the implementation of information management systems at the national level.

Complementarity—This function is carried out when

each approach is used to answer a related question or series of questions. This may be for evaluation (when one approach is used to answer questions raised by the other) or elaboration (when one approach provides an in-depth understanding of the other). An example of an evaluation function might be a study that initially uses individual clinic or department performance data over time to study idiographic trends. Those trends may raise subsequent questions about how the clinics' data compare to national standards, and the investigator could then use national health care performance standards (nomothetic approach) as benchmarks to evaluate those clinics' performance. As an example of elaboration within the complementarity function, Fisher, Newman, and Molenaar [59] found that intra-unit variability (idio) among patients complemented their understanding of inter-unit (nomo) outcomes for the entire sample. That is, the amount of variance in individual patient patterns over time significantly moderated the effect of time in treatment on the reliable increase of nomothetic (inter-unit) outcomes (Table 1).

Expansion—Similar to complementarity, a design

with the expansion function uses one approach to answer questions raised by the other. However, instead of using idiographic and nomothetic approaches to complement the same research question (i.e., complementarity), these designs build upon one another in a distinct, stepwise fashion. For instance, the Sheu, Chae, and Yang [60] study not only represents the convergence function (noted above) but also illustrates expansion. Their case study approach to understanding implementation challenges in each nation (idio) was selected to build upon and provide new information about the existing literature on multinational implementation across countries (nomo).

Development—Development is another function

whereby one approach is used to answer questions that were raised by another approach. Development is distinguished from complementarity and expansion by an intention to contribute to or develop new research products. General examples include the development of data collection measures, conceptual models, or interventions. In an implementation context, to borrow from the topic studied by Coomber and Barriball [34], idiographic change over time or context (e.g.,

ORIGINAL RESEARCH

TBM page 577 of 580

nurse stress level self-reports across multiple intervalsor nursing units) is likely to inform the development ofstandardized instruments for nomothetic classificationor prediction (e.g., nurse professional wellness) as wellas tailored interventions (e.g., organizational supports,administrative practices).Sampling—The function of an integrated design is

sampling when one approach is used to define or identify the participant sample for collection and analysis of data representing the other. A general example would be selecting interview informants (idiographic) based on responses to survey questionnaires (nomothetic). The function of the Lambert and colleagues [58] study appears to be sampling, given that OQ-45 data were used in nomothetic fashion to identify patients within a clinical sample who are predicted to have poor treatment outcomes. The identification of this sub-sample of at-risk patients was followed by an individual progress monitoring plan over time (idiographic) to serve individuals at risk. Lambert and colleagues [58] is also an example of convergence, as data initially collected for nomothetic purposes were subsequently converted for idiographic use (i.e., intra-unit treatment planning).

Summary and future directions

This paper is intended to advance a practical and rigorous measurement agenda in implementation science by providing guidance on the combination of previously disparate assessment traditions. Specifically, we argue for strategic and intentional integration of idiographic and nomothetic assessment approaches in a manner that addresses project goals and aligns with study design. The benefits of such integration include simultaneously enhancing the rigor and relevance of assessments across multiple levels within service systems.

Importantly, although particular instruments may

have been developed with a nomothetic or idiographic tradition or purpose in mind, we conceptualize the primary distinction between the two as the manner in which the data are interpreted or used. Despite this, the line between nomothetic and idiographic approaches is inherently blurry, and is sometimes influenced by the perspective of the research team in addition to the relations among study variables. To guide the integration of idiographic and nomothetic assessment, we presented a preliminary framework – adapted from the taxonomy proposed by Palinkas et al. [55] – to detail different structural and functional arrangements. In addition, we presented a set of tools (Figures 1 and 2) to facilitate application of the framework.

When available, published studies that exemplify

research designs with an integrated nomothetic and idiographic approach were presented. However, our review of the literature revealed a paucity of such integrated designs. Thus, hypothetical examples were offered as needed to clarify the potential of each category to support integrated implementation research designs. Among those identified, the simultaneous

versus sequential collection of data is not readily apparent, making the purpose of an integrated design underspecified in many published studies. Moreover, four of the seven studies included as illustrations of structural categories contained sufficient detail to also categorize the function of integration. Two of these [52, 60] appeared to use an integrated idiographic/nomothetic design for more than one function. Investigators of future studies designed to incorporate both idiographic and nomothetic use of data are thus encouraged to detail the functional purpose(s) of using both approaches.

Future research may include a systematic review of

implementation research studies that leverage nomothetic and idiographic approaches to evaluate the frequency with which each structural and functional category appears. For instance, it may be that there are certain structures/functions – potentially the idiographic use of nomothetic data (e.g., NOMO ➔ idio) – that occur more commonly. Further, the relationships between each category and other study characteristics should be examined. Relevant study characteristics may include the research context or setting, phase of implementation examined (e.g., Exploration, Adoption, Implementation, Sustainment; [31]), levels at which data were collected (e.g., inner context, outer context, individual), and overall complexity of study design. Consistent with the findings from Palinkas et al. [55] in their examination of mixed methods approaches, it may be that the outer context is also underrepresented in the literature.

Individual studies can contribute to the advancement of an integrated assessment agenda by more explicitly detailing their use of – and rationale for – combined idiographic and nomothetic approaches. Among other things, this would include the identification of whether one approach is primary, one secondary, or whether the approaches were equivalently weighted. In addition, specifying whether sequential or simultaneous data collection occurred within the study design would also support a transparent explanation of the structure (and ultimately, function) of integrating idiographic and nomothetic approaches. This would be pivotal in reducing the ambiguity and subjectivity noted above when classifying published work. Also, for the purpose of advancing scientific knowledge within the implementation literature, more explicit description of assessment use and rationale in the context of the study goals and design would support the replication and expansion of individual study designs and findings.

Finally, an integrated assessment agenda may also

align with increased attention to pragmatic research within implementation science, as it emphasizes incorporation of the local context and stakeholder perspectives into research design with the goal of accelerating and broadening its impact on policy and practice [9, 10]. The collection of stakeholder input about the ways in which different assessment methods align with their needs, expectations, and values is likely to enhance pragmatic assessment [62]. Within this frame, it could be hypothesized that patients and practitioners may place greater value on data from idiographic methods, whereas policymakers may disproportionately value data from nomothetic methods. Indeed, recent work in the area of education sector mental health has supported the notion that clinicians [63, 64] and service recipients [65] prefer the flexibility and personalization associated with the idiographic approach. Despite this, additional work is needed that examines ways to optimize assessments that leverage the best qualities of both approaches. In a prime example of this, members of the Society for Implementation Research Collaboration (SIRC [66]) currently have a large, federally funded project underway in the United States to develop pragmatic instruments of key implementation outcomes (acceptability, feasibility, and appropriateness; [29]). The process includes structured stakeholder input and an explicit objective to create psychometrically strong, low-burden instruments, and may serve as a generalizable example of the integration of idiographic and nomothetic approaches. It is our perspective that such deliberate, pragmatic initiatives represent the future of implementation science as the field strives to balance rigor and relevance and promote large-scale impact in service systems.

Compliance with ethical standards

Information/findings not previously published: Although no original data are presented in the current manuscript, the information contained in the manuscript has not been published previously and is not under review at any other publication.

Statement on any previous reporting of data: None of the information contained in the current manuscript has been previously published.

Statement indicating that the authors have full control of all primary data and that they agree to allow the journal to review their data if requested: Not applicable. No data are presented in the manuscript.

Statement indicating all study funding sources: This publication was supported in part by grant K08 MH095939 (Lyon), awarded from the National Institute of Mental Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Statements indicating any actual or potential conflicts of interest: No conflicts of interest.

Statement on human rights: Not applicable. No data were collected from individuals for the submitted manuscript.

Statement on the welfare of animals: Not applicable. No data were collected from animals for the submitted manuscript.

Informed consent statement: Not applicable.

Helsinki or comparable standard statement: Not applicable.

IRB approval: Not applicable.

1. Eccles, M. P., & Mittman, B. S. (2006). Welcome to Implementation Science. Implement Sci, 1, 1.

2. Khoury, M. J., Gwinn, M., Yoon, P. W., Dowling, N., Moore, C. A., & Bradley, L. (2007). The continuum of translation research in genomic medicine: how can we accelerate the appropriate integration of human genome discoveries into health care and disease prevention? Genet Med, 9, 665–674.

3. Bauer, M. S., Damschroder, L., Hagedorn, H., Smith, J., & Kilbourne, A. M. (2015). An introduction to implementation science for the non-specialist. BMC Psychol, 3, 32.

4. Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., Griffey, R., & Hensley, M. (2011). Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Pol Ment Health, 38, 65–76.

5. Powell, B. J., Waltz, T. J., Chinman, M. J., Damschroder, L. J., Smith, J. L., Matthieu, M. M., Proctor, E. K., & Kirchner, J. E. (2015). A refined compilation of implementation strategies: results from the expert recommendations for implementing change (ERIC) project. Implement Sci, 10, 21.

6. Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, Collins LM, Duan N, Mittman BS, Wallace A, Tabak RG, Ducharme L, Chambers D, Neta G, Wiley T, Landsverk J, Cheung K, Cruden G: An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health 2016, 38.

7. Landsverk, J., Brown, C. H., Reutz, J. R., Palinkas, L., & Horwitz, S. M. (2010). Design elements in implementation research: a structured review of child welfare and child mental health studies. Adm Policy Ment Health Ment Health Serv Res, 38, 54–63.

8. Mercer, S. L., DeVinney, B. J., Fine, L. J., Green, L. W., & Dougherty, D. (2007). Study designs for effectiveness and translation research. Am J Prev Med, 33, 139–154 e2.

9. Glasgow, R. E. (2013). What does it mean to be pragmatic? Pragmatic methods, measures, and models to facilitate research translation. Health Educ Behav, 40, 257–265.

10. Krist, A. H., Glenn, B. A., Glasgow, R. E., Balasubramanian, B. A., Chambers, D. A., Fernandez, M. E., Heurtin-Roberts, S., Kessler, R., Ory, M. G., Phillips, S. M., Ritzwoller, D. P., Roby, D. H., Rodriguez, H. P., Sabo, R. T., Gorin, S. N. S., Stange, K. C., & the MOHR Study Group. (2013). Designing a valid randomized pragmatic primary care implementation trial: the my own health report (MOHR) project. Implement Sci, 8, 1–13.

11. Behn, R. D. (2003). Why measure performance? Different purposes require different measures. Public Adm Rev, 63, 586–606.

12. Lee, R. G., & Dale, B. G. (1998). Business process management: a review and evaluation. Bus Process Manag J, 4, 214–225.

13. McGlynn, E. A. (2004). There is no perfect health system. Health Aff (Millwood), 23, 100–102.

14. Castrucci, B. C., Rhoades, E. K., Leider, J. P., & Hearne, S. (2015). What gets measured gets done: an assessment of local data uses and needs in large urban health departments. J Public Health Manag Pract, 21(Suppl 1), S38–S48.

15. Ijaz, K., Kasowski, E., Arthur, R. R., Angulo, F. J., & Dowell, S. F. (2012). International health regulations–what gets measured gets done. Emerg Infect Dis, 18, 1054–1057.

16. Herman, K. C., Riley-Tillman, T. C., & Reinke, W. M. (2012). The role of assessment in a prevention science framework. Sch Psychol Rev, 41, 306–314.

17. Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: what, why, and how valid is it? Read Res Q, 41, 93–99.

18. McKay, R., Coombs, T., & Duerden, D. (2014). The art and science of using routine outcome measurement in mental health benchmarking. Australas Psychiatry, 22, 13–18.

19. Thacker, S. B. (2007). Public health surveillance and the prevention of injuries in sports: what gets measured gets done. J Athl Train, 42, 171–173.

20. Lefkowith, D. (2001). What gets measured gets done: turning strategic plans into real world results. Manag Q, 42, 20–24.

21. Woodard JR: What gets measured, gets done! Agric Educ Mag 2004.

22. Trivedi, P. (1994). Improving government performance: what gets measured, gets done. Econ Polit Wkly, 29, M109–M114.

23. Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D., Glisson, C., & Mittman, B. (2009). Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health Ment Health Serv Res, 36, 24–34.

24. Carlier, I. V. E., Meuldijk, D., Van Vliet, I. M., Van Fenema, E., Van der Wee, N. J. A., & Zitman, F. G. (2012). Routine outcome monitoring and feedback on physical or mental health status: evidence and theory. J Eval Clin Pract, 18, 104–110.

25. Harding, K. J. K., Rush, A. J., Arbuckle, M., Trivedi, M. H., & Pincus, H. A. (2011). Measurement-based care in psychiatric practice: a policy framework for implementation. J Clin Psychiatry, 72, 1–478.

26. Trivedi, M. H., & Daly, E. J. (2007). Measurement-based care for refractory depression: a clinical decision support model for clinical research and practice. Drug Alcohol Depend, 88, S61–S71.

27. Hollingsworth, B. (2008). The measurement of efficiency and productivity of health care delivery. Health Econ, 17, 1107–1128.

28. Hoagwood, K. E., Jensen, P. S., Acri, M. C., Serene Olin, S., Eric Lewandowski, R., & Herman, R. J. (2012). Outcome domains in child mental health research since 1996: have they changed and why does it matter? J Am Acad Child Adolesc Psychiatry, 51, 1241–1260 e2.

29. Lewis, C. C., Stanick, C. F., Martinez, R. G., Weiner, B. J., Kim, M., Barwick, M., & Comtois, K. A. (2015). The Society for Implementation Research Collaboration Instrument Review Project: a methodology to promote rigorous evaluation. Implement Sci, 10, 1–18.

30. Rabin, B. A., Purcell, P., Naveed, S., Moser, R. P., Henton, M. D., Proctor, E. K., Brownson, R. C., & Glasgow, R. E. (2012). Advancing the application, quality and harmonization of implementation science measures. Implement Sci, 7, 119.

31. Aarons, G. A., Hurlburt, M., & Horwitz, S. M. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health Ment Health Serv Res, 38, 4–23.

32. Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci, 4, 50.

33. Martinez, R. G., Lewis, C. C., & Weiner, B. J. (2014). Instrumentation issues in implementation science. Implement Sci, 9, 118.

34. Coomber, B., & Louise Barriball, K. (2007). Impact of job satisfaction components on intent to leave and turnover for hospital-based nurses: a review of the research literature. Int J Nurs Stud, 44, 297–314.

35. Novick, M. R. (1966). The axioms and principal results of classical test theory. J Math Psychol, 3, 1–18.

36. Cone, J. D. (1986). Idiographic, nomothetic, and related perspectives in behavioral assessment. In Conceptual foundations of behavioral assessment, 111–128.

37. Glisson, C., Landsverk, J., Schoenwald, S., Kelleher, K., Hoagwood, K. E., Mayberg, S., Green, P., & the Research Network on Youth Mental Health (2008). Assessing the organizational social context (OSC) of mental health services: implications for research and practice. Adm Policy Ment Health Ment Health Serv Res, 35, 98–113.

38. Kilbourne, A. M., Almirall, D., Eisenberg, D., Waxmonsky, J., Goodrich, D., Fortney, J., Kirchner, J. E., Solberg, L., Main, D., Bauer, M. S., Kyle, J., Murphy, S., Nord, K., & Thomas, M. (2014). Protocol: adaptive implementation of effective programs trial (ADEPT): cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci, 9.

39. Lochman, J. E., Boxmeyer, C., Powell, N., Qu, L., Wells, K., & Windle, M. (2009). Dissemination of the coping power program: importance of intensity of counselor training. J Consult Clin Psychol, 77, 397–409.

40. Self-Brown, S., Valente, J. R., Wild, R. C., Whitaker, D. J., Galanter, R., Dorsey, S., & Stanley, J. (2012). Utilizing benchmarking to study the effectiveness of parent–child interaction therapy implemented in a community setting. J Child Fam Stud, 21, 1041–1049.

41. Weisz, J. R., Chorpita, B. F., Frye, A., Ng, M. Y., Lau, N., Bearman, S. K., Ugueto, A. M., Langer, D. A., & Hoagwood, K. E. (2011). Youth top problems: using idiographic, consumer-guided assessment to identify treatment needs and to track change during psychotherapy. J Consult Clin Psychol, 79, 369–380.

42. Biglan, A., Ary, D., & Wagenaar, A. C. (2000). The value of interrupted time-series experiments for community intervention research. Prev Sci, 1, 31–49.

43. Cronbach L, Gleser G, Nanda H, Rajaratnam N: The Dependability of Behavioral Measurements: Theory of Generalizability for Scores and Profiles. John Wiley & Sons; 1972.

44. Barrios B, Hartman DP: The contributions of traditional assessment: concepts, issues, and methodologies. In Conceptual foundations of behavioral assessment. New York, NY: Guilford; 1986.

45. Foster, S. L., & Cone, J. D. (1995). Validity issues in clinical assessment. Psychol Assess, 7, 248–260.

46. Mash, E. J., & Hunsley, J. (2008). Commentary: evidence-based assessment—strength in numbers. J Pediatr Psychol, 33, 981–982.

47. McLeod BD, Jensen-Doss A, Ollendick TH: Overview of diagnostic and behavioral assessment. In Diagnostic and behavioral assessment in children and adolescents: A clinical guide. Edited by McLeod BD, Jensen-Doss A, Ollendick TH. New York, NY, US: Guilford Press; 2013:3–33.

48. Biglan A: Changing Cultural Practices: A Contextualist Framework for Intervention Research. Reno, NV, US: Context Press; 1995.

49. Jaccard J, Dittus P: Idiographic and nomothetic perspectives on research methods and data analysis. In Research methods in personality and social psychology. 11th edition; 1990:312–351.

50. Vlaeyen, J. W. S., de Jong, J., Geilen, M., Heuts, P. H. T. G., & van Breukelen, G. (2001). Graded exposure in vivo in the treatment of pain-related fear: a replicated single-case experimental design in four patients with chronic low back pain. Behav Res Ther, 39, 151–166.

51. Weisz JR, Chorpita BF, Palinkas LA, Schoenwald SK, Miranda J, Bearman SK, Daleiden EL, Ugueto AM, Ho A, Martin J, Gray J, Alleyne A, Langer DA, Southam-Gerow MA, Gibbons RD: Testing standard and modular designs for psychotherapy treating depression, anxiety, and conduct problems in youth: a randomized effectiveness trial. Arch Gen Psychiatry 2012.

52. Elliott, R., Wagner, J., Sales, C. M. D., Rodgers, B., Alves, P., & Café, M. J. (2016). Psychometrics of the Personal Questionnaire: a client-generated outcome measure. Psychol Assess, 28, 263–278.

53. Creswell JW, Klassen AC, Clark VLP, Smith KC: Best Practices for Mixed Methods Research in the Health Sciences. Bethesda, MD: National Institutes of Health; 2011.

54. Robins, C. S., Ware, N. C., dosReis, S., Willging, C. E., Chung, J. Y., & Lewis-Fernandez, R. (2008). Dialogues on mixed-methods and mental health services research: anticipating challenges, building solutions. Psychiatr Serv, 59, 727–731.

55. Palinkas, L. A., Horwitz, S. M., Chamberlain, P., Hurlburt, M. S., & Landsverk, J. (2011). Mixed-methods designs in mental health services research: a review. Psychiatr Serv, 62, 255–263.

56. Teddlie C, Tashakkori A: Major issues and controversies in the use of mixed methods in the social and behavioral sciences. In Handbook of mixed methods in social & behavioral research. SAGE; 2003:3–50.

57. Hirsch, O., Strauch, K., Held, H., Redaelli, M., Chenot, J.-F., Leonhardt, C., Keller, S., Baum, E., Pfingsten, M., Hildebrandt, J., Basler, H.-D., Kochen, M. M., Donner-Banzhoff, N., & Becker, A. (2014). Low back pain patient subgroups in primary care: pain characteristics, psychosocial determinants, and health care utilization. Clin J Pain, 30, 1023–1032.

58. Lambert, M. J., Harmon, C., Slade, K., Whipple, J. L., & Hawkins, E. J. (2005). Providing feedback to psychotherapists on their patients' progress: clinical results and practice suggestions. J Clin Psychol, 61, 165–174.

59. Fisher, A. J., Newman, M. G., & Molenaar, P. C. M. (2011). A quantitative method for the analysis of nomothetic relationships between idiographic structures: dynamic patterns create attractor states for sustained posttreatment change. J Consult Clin Psychol, 79, 552–563.

60. Sheu, C., Chae, B., & Yang, C.-L. (2004). National differences and ERP implementation: issues and challenges. Omega, 32, 361–371.

61. Wandner, L. D., Torres, C. A., Bartley, E. J., George, S. Z., & Robinson, M. E. (2015). Effect of a perspective-taking intervention on the consideration of pain assessment and treatment decisions. J Pain Res, 8, 809–818.

62. Glasgow, R. E., & Riley, W. T. (2013). Pragmatic measures: what they are and why we need them. Am J Prev Med, 45, 237–243.

63. Connors, E. H., Arora, P., Curtis, L., & Stephan, S. H. (2015). Evidence-based assessment in school mental health. Cogn Behav Pract, 22, 60–73.

64. Lyon, A. R., Ludwig, K., Wasse, J. K., Bergstrom, A., Hendrix, E., & McCauley, E. (2015). Determinants and functions of standardized assessment use among school mental health clinicians: a mixed methods evaluation. Adm Policy Ment Health Ment Health Serv Res, 43, 122–134.

65. Duong, M. T., Lyon, A. R., Ludwig, K., Wasse, J. K., & McCauley, E. (2016). Student perceptions of the acceptability and utility of standardized and idiographic assessment in school mental health. Int J Ment Health Promot, 18, 49–63.

66. Lewis, C., Darnell, D., Kerns, S., Monroe-De Vita, M., Landes, S. J., Lyon, A. R., Stanick, C., Dorsey, S., Locke, J., Marriott, B., Puspitasari, A., Dorsey, C., Hendricks, K., Pierson, A., Fizur, P., Comtois, K. A., Palinkas, L. A., Chamberlain, P., Aarons, G. A., Green, A. E., Ehrhart, M. G., Trott, E. M., Willging, C. E., Fernandez, M. E., Woolf, N. H., Liang, S. L., Heredia, N. I., Kegler, M., Risendal, B., Dwyer, A., et al. (2016). Proceedings of the 3rd biennial conference of the Society for Implementation Research Collaboration (SIRC) 2015: advancing efficient methodologies through community partnerships and team science. Implement Sci, 11, 1–38.
