
Rigor in Qualitative Social Work Research: A Review of Strategies Used in Published Articles

Amanda Barusch, Christina Gringeri, and Molly George

This study was conducted to describe strategies used by social work researchers to enhance the rigor of their qualitative work. A template was developed and used to review a random sample of 100 articles drawn from social work journals listed in the 2005 Journal Citation Reports: Science and Social Sciences Edition. Results suggest that the most commonly applied strategies were use of a sampling rationale (67%), analyst triangulation (59%), and mention of methodological limitations (56%); the least common were negative or deviant case analysis (8%), external audit (7%), and specification of ontology (6%). Of eight key criteria, researchers used an average of 2.0 (SD = 1.5); however, the number used increased significantly between 2003 and 2008. The authors suggest that for this trend to continue, social work educators, journal editors, and researchers must reinforce the judicious application of strategies for enhancing the rigor of qualitative work.

KEY WORDS: qualitative methods; research methods; rigor; social work research

The social nature of inquiry is an ongoing challenge to the production of good research in social work. As the positivist belief in the potential objectivity of social work research has come into question, researchers using a range of paradigms have recognized the pervasive effects of human limitations and subjectivity. This has motivated some postpositivist researchers to carefully design their studies, using quantitative methods to minimize "bias" or "subjectivity." Over time, these efforts have become standardized as criteria to ensure the rigor of the work. In a postpositivist framework, these would be described as standards for establishing reliability and validity (Padgett, 2004).

As social research using qualitative methods has moved beyond anthropology and into the social sciences, researchers have had to grapple with the meanings of terms such as "objectivity," "reliability," and "validity" (among others) in a completely new context—one that insists on recognition of the interactive dimension of social inquiry. How can social work researchers using qualitative methods produce credible work when objectivity is no longer assumed or even pursued (Kincheloe, 2001; Padgett, 2004; Rolfe, 2004)?

Sometimes referred to as "criteriology," this question has been a conundrum for qualitative researchers for at least three decades. It has given rise to a substantial body of literature on criteria: whether they are needed, what they should be called, how and when they should be implemented, and whether they can be used to evaluate the quality of the work (Caelli, Ray, & Mill, 2003; Davies & Dodd, 2002; Emden & Sandelowski, 1998, 1999; Kincheloe, 2001; Marshall, 1989; Rolfe, 2004; Seale, 1999, 2002; Whittemore, Chase, & Mandle, 2001). We discuss the main points of this literature here as background to the present study.

Dialogue about criteria started in the early 1980s as qualitative methods became more visible in the social sciences (LeCompte & Goetz, 1982; Lincoln, 1995; Lincoln & Guba, 1985). Early discussions about criteria, such as Kirk and Miller's (1986) Reliability and Validity in Qualitative Research, were based on postpositivist research assumptions. Lincoln and Guba proposed criteria based on the terms "credibility," "transferability," "dependability," and "confirmability," which were based on the postpositivist concepts of internal validity, external validity, reliability, and objectivity. Lincoln (1995) rightly calls these early efforts, including her own, "foundationalist": "These criteria . . . rested in assumptions that had been developed for an empiricist philosophy of research, and spoke to the procedural and methodological concerns that characterize empiricist and post-empiricist research" (p. 276).

CCC Code: 1070-5309/11 $3.00 ©2011 National Association of Social Workers

Although some objected to the use of these parallel terms, they did offer a useful vocabulary for qualitative researchers to speak about their work with those unfamiliar with qualitative methods and perspectives. Many writers incorporated these basic concepts as criteria, and they have endured and been further developed in the literature (Golafshani, 2003; Morse, Barrett, Mayan, Olson, & Spiers, 2002; Schwartz-Shea, 2006; Whittemore et al., 2001). However, there is no consensus on what strategies, or how many, should be used to develop strong qualitative research.

Other writers (Denzin, 2002; Fade, 2003; Seale, 1999, 2002; Tobin & Begley, 2004) criticized the parallel concepts proposed by Lincoln and Guba (1985), noting that research paradigms in the qualitative tradition are philosophically based on relativism, which is fundamentally at odds with the purpose of criteria to help establish "truth." The foundationalist critique led Lincoln (1995) to reevaluate the earlier work and discuss emerging criteria in qualitative research, most of which bespoke a "relational" quality. These included the positionality or standpoint of the researcher, the role of community in research, voice, reflexivity, reciprocity and fluidity between researcher and researched, and sharing of the privileges of power. These emerging criteria focus attention on the relational aspects of knowing and of working to produce knowledge, leading us to question more deeply the links between the quality of the work and the relationships that form the context in which the work is developed.

The relational quality of the criteria poses another challenge for researchers in evaluating qualitative work and points to the subjective nature of judging the goodness or strength of such research (Emden & Sandelowski, 1998; Marshall, 1989). Marshall concluded in a public presentation that "evaluating the goodness and value of research requires a judgment call," regardless of the criteria used. Later, Lincoln (1995) recommended that qualitative researchers move beyond standardized criteria, uniformly applied, toward quality decisions that are made "locally" within the context and paradigm of a research project itself. Once we become comfortable with the subjective nature of doing and evaluating research, we can understand the flexible nature and broad range of criteria that may be used in qualitative research. Indeed, as Seale (1999, 2002) suggested, we come to see research more as a craft skill in which practitioners make "local" (or project-specific) decisions to enhance the quality of the end product.

Together, the criteria proposed by Lincoln and Guba (1985) and revised by Lincoln (1995) have become part of the ongoing discussion of evolving criteria. These criteria, and the strategies suggested to meet them, provide a good starting point for evaluating qualitative work.

Credibility involves generating confidence in the truth value of the findings of qualitative research, reminding ourselves that all texts are local, and all researchers write themselves into the text, and, thus, "truth" has a local quality to it (Anfara, Brown, & Mangione, 2002; Kincheloe, 2001; Saukko, 2005). Some strategies for strengthening credibility are prolonged engagement, persistent observation, triangulation, peer debriefing, negative case analysis, and member-checking (Lincoln, 1995). These strategies are important and do contribute to depth and detail; for example, prolonged engagement in fieldwork requires that a researcher spend sufficient time in a setting, developing trust and relationships, understanding a variety of perspectives, and co-constructing meanings with members of that setting.

Prolonged engagement helps a researcher to identify and "bracket" his or her preconceptions, identify and question distortions in the data, and essentially come to see and understand a setting as insiders see and understand it. How do we know prolonged engagement when we see it? Is it by the outcome of a study (that is, prolonged engagement produces rich description)? Is it by length of time? How long is long enough?

The same questions arise with persistent observation, intended as a strategy to deepen understanding. Generally, persistent observation requires a researcher "to identify those characteristics and elements in the situation that are most relevant to the problem or issue being pursued and focusing on them in detail" (Lincoln & Guba, 1985, p. 304). How much observation is enough? We can already see with these two strategies that, as evaluators of research, we are dependent on an author's accountability in articulating his or her decisions regarding methods; accountability becomes an important standard for judging the quality of the work.

Triangulation, as a strategy to establish credibility, may involve multiple data, methods, analysts, or theories; originally, it referred to collecting data from multiple sources—for example, using interviews, observation, documents, and video recordings as data sources in one study. The purpose of triangulation is to deepen understanding by collecting a variety of data on the same topic or problem with the aim of combining multiple views or perspectives and producing a stronger account rather than simply achieving consensus or corroboration.

Social Work Research, Volume 35, Number 1, March 2011

Peer debriefing requires researchers to disclose their personal and methodological processes during the research to a disinterested peer, with the purpose of making explicit aspects of the work that might remain implicit within the researcher's mind (Lincoln & Guba, 1985). Most researchers do this to a certain extent, by discussing their work with peers; this strategy asks researchers to engage in this process of discussion and questioning of their work in a consistent and systematic fashion and to record the process in notes that will be useful in the analysis. This strategy also supports researchers in exploring their perspectives, reactions, and analyses as they go through the research process.

Negative case analysis is used to challenge the emerging patterns or analyses in a study. Rather than accepting without question the dominant patterns that one observes, a negative case is selected with which to compare analytically the cases in the emerging pattern.

Member-checking is another technique for strengthening credibility in qualitative studies, but it is not without controversy. There are different approaches to implementing member-checks: Participants may read transcripts of their interviews and comment on accuracy or omissions; researchers may use individual or group discussions with participants to check representation in the data and analysis. Member-checking has been criticized as a strategy for establishing credibility. Rather than clarifying the meaning of data, participant reviews may confuse an issue by changing accounts from one time to another. Participants may see the data differently than researchers; whose view or account should prevail? Participants may have varying perspectives on the same data; again, whose view prevails? Member-checking, like triangulation, can be used to deepen understanding rather than as a corroborative strategy. By listening deeply and raising questions with participants, researchers have a chance to clarify or develop their thoughts, which will strengthen their findings.

Lincoln and Guba (1985) recommend the use of thick description to strengthen transferability, the parallel term for generalizability. Thick description involves rendering a deeply detailed account of one's work so that readers can judge the work's potential for application to other times, places, people, and contexts. But remember the "local" nature of interpretive work; to what extent is any work applicable to other contexts? Some would argue that transferability is a noncriterion, one that does not make sense within the qualitative paradigm.

An audit trail is one technique for developing confirmability in qualitative research; it is a record of the steps taken in the process of the research project from beginning to end and includes decisions made along the way that help illuminate and detail the entire process. Using the audit trail as a criterion again requires accountability or transparency on the part of the researcher and good record keeping throughout the process (Anfara et al., 2002).

Building on criteria discussed by Lincoln (1995), Creswell (2007) identified eight key strategies for establishing rigor and recommended that qualitative studies use at least two of them. His approach offers a practical synthesis of recommendations from other authors. The key strategies are prolonged engagement and persistent observation, triangulation, peer review or debriefing, negative case analysis, reflexivity (clarification of researcher bias), member-checking, thick description, and external audits.

Although strategies are important in helping researchers strengthen the quality of their work, they are difficult to apply in evaluating research. Not only is it important to know whether certain strategies were applied, it is critical that researchers be open and accountable in describing the choices they made to implement various approaches to establish credibility. Without accountability, the work simply cannot be judged. We approached this assessment of qualitative social work research asking two main questions: (1) To what extent are the eight strategies present in published social work articles? (2) What trends do we observe in the use of the key strategies?

METHOD

Although the irony of using statistical methods to examine qualitative research was not lost on us, such data proved appropriate to address the aims of this study. We see research as a "craft skill" and believe that research questions—rather than philosophy, personal preference, or convenience—should drive methodological decisions. This study used a retrospective descriptive approach, based on review of a random sample of published articles. Data were recorded on a standardized template and analyzed using descriptive and bivariate procedures available in SPSS. In this section we discuss the researchers, the data collection template, issues concerning reliability, the sample, and the data-analysis procedures.

Researchers

Three researchers participated in this study. Two were senior faculty members in colleges of social work. Both were trained in the United States, and both had taught qualitative methods at the doctoral level. Of these two, one was trained in quantitative methods, coming to qualitative work midcareer, whereas the other was trained in and conducted her work exclusively using qualitative methods. The third researcher was a master's student in social anthropology at a New Zealand university who used qualitative interviews and analysis for her bachelor's and master's theses.

Template

Development of the template was a fairly lengthy process. We initially reviewed articles published from 1980 to the present on evaluation of rigor in qualitative research, then we developed and tested multiple drafts of the template. The final version fit (with some squeezing) on one piece of paper. It included a list of 19 strategies, including some that were not particular to qualitative methods but are hallmarks of good research, such as specification of theoretical framework, sampling rationale, analytic procedures, methodological limitations, protection of human subjects, and institutional review board (IRB) approval. We included the eight strategies that Creswell (2007) identified in their entirety. Other strategies were drawn from a range of sources, including our experience. These strategies included analyst and theory triangulation, audit trail, theoretical saturation, and articulation of ontology and/or epistemology. A list of strategies in the template is presented in Table 1.

The body of the document consisted of four columns: (1) strategy name, (2) description, (3) presence, and (4) notes. Several key decisions were made in development of the template. These were recorded in meeting summaries and e-mails, which were retained as an audit trail or record of the research process. Specific decisions included the following:

• Through multiple iterations, the descriptions of strategies were refined to reflect the consensus of the researchers informed by the literature.
• Response options for the presence of a strategy were "yes," "no," and "?," which indicated maybe or unclear.
• Notes were taken to record the reviewer's basis for deciding whether a strategy was used in cases in which there might be some ambiguity.
• In addition to notes related to the strategies, the reviewers recorded their identity; date of the review; the journal, year, and article title; the sample size and data collection method(s); and whether the reviewer thought there was a good fit between the methods used and the author's conclusions.

Table 1: Percentages of Sampled Articles Using Each Strategy (N = 100)

Strategy                                     %
Sampling rationale provided                 67
Analyst triangulation (b)                   59
Problems/limitations specified              56
Analysis detailed                           53
Theory or framework specified               50
Human subjects considerations addressed     44
Data triangulation (b)                      36
Peer debriefing/review (a)                  31
Member-checking (a)                         31
Theory triangulation (b)                    18
Persistent observation (c)                  17
Thick description (a)                       16
Reflexivity or use of self                  14
Prolonged engagement (c)                    13
Audit trail                                  9
Theoretical saturation achieved              8
Negative/deviant case analysis (a)           8
External audit (a)                           7
Ontology/epistemology specified              6

(a) One of Creswell's (2007) eight strategies. (b), (c) These strategies were combined as one in Creswell's formulation.

Reliability

We used analyst triangulation to refine our definitions of the 19 strategies, measuring interrater agreement to test the consistency of our coding. To do this, all three reviewers separately coded two sets of five articles. After each set, we noted areas of disagreement and made any necessary revisions. This process continued until all three reviewers agreed on at least 80% of the codes.
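The agreement check described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it assumes simple percent agreement (the share of codes on which all three reviewers concur) as the way the 80% threshold was operationalized, and the reviewer codes below are invented.

```python
# Hypothetical sketch of an interrater-agreement check for three reviewers.
# Percent agreement = share of items on which every reviewer gave the same code.

def percent_agreement(codes_by_reviewer):
    """codes_by_reviewer: equal-length lists of codes, one list per reviewer."""
    n_items = len(codes_by_reviewer[0])
    agreed = sum(
        1 for i in range(n_items)
        if len({reviewer[i] for reviewer in codes_by_reviewer}) == 1
    )
    return agreed / n_items

# Invented codes for ten strategy judgments ("yes"/"no"/"?") by three reviewers
r1 = ["yes", "no", "yes", "?",  "no", "yes", "yes", "no", "yes", "no"]
r2 = ["yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes", "yes"]
r3 = ["yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes", "no"]

print(percent_agreement([r1, r2, r3]))  # 0.8 → meets the 80% threshold
```

A stricter reconstruction might use a chance-corrected index (e.g., Fleiss' kappa) instead of raw percent agreement; the article does not specify which was used.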

In subsequent data analysis, we computed a chi-square statistic for each of the items in the template by reviewer. This was done to check for consistent reviewer differences, and it resulted in the exclusion of one item, the fit between methods and conclusion, from further statistical analysis. No significant reviewer effects were observed for other items in the template.

Sample

A random sample of 100 articles was drawn from 27 journals listed under "social work" in the 2005 Journal Citation Reports (JCR): Science and Social Science Edition (2006). To be eligible for inclusion in the study, an article must have involved collection and analysis of exclusively qualitative data. The initial pool of 487 articles included those published between 2003 and 2008 that had the word "qualitative" in either the abstract or keywords. This pool of articles was then numbered, and a random number generator selected 100.

As it turned out, 29% of the initial sample could not be used, either because the research did not involve the collection and analysis of data or because the authors used mixed methods. These articles were replaced with articles that met both sampling criteria. The resulting sample was distributed across all years, with concentration in 2005. The sample drew from all but three of the 27 journals listed in the JCR, with the largest percentages coming from the British Journal of Social Work (15%), Health and Social Care in the Community (13%), and Child and Youth Services Review (10%). The percentage in each journal is summarized in Table 2.
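The draw-and-replace procedure described above can be sketched as follows. This is a hedged illustration under assumptions: the pool contents, the eligibility flag, and the screening function are all invented; the article describes only numbering the pool and using a random number generator.

```python
# Illustrative sketch of random sampling with replacement of ineligible draws:
# draw from a numbered pool until 100 eligible articles are selected.
import random

def draw_sample(pool, size, is_eligible, seed=0):
    rng = random.Random(seed)          # fixed seed for a reproducible sketch
    remaining = list(pool)
    rng.shuffle(remaining)
    sample = []
    while remaining and len(sample) < size:
        article = remaining.pop()
        if is_eligible(article):       # e.g., exclusively qualitative data
            sample.append(article)     # ineligible draws are replaced
    return sample

# Invented pool of 487 article records with a hypothetical eligibility flag
pool = [{"id": i, "qualitative_only": i % 3 != 0} for i in range(487)]
sample = draw_sample(pool, 100, lambda a: a["qualitative_only"])
print(len(sample))  # 100
```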

Analysis

Initially, we set out to describe the strategies used in qualitative social work research. For these purposes, we computed simple descriptive measures, counting the number of times each response option (yes, no, maybe) was applied. After some consideration, we concluded that only the "yes" option (not "maybe") represented the unambiguous application of a strategy.

Table 2: Percentages of Sampled Articles, by Year and by Journal (N = 100)

Year                                         %
2003                                        13
2004                                        10
2005                                        25
2006                                        10
2007                                        20
2008                                        22

Journal                                      %
British Journal of Social Work              15
Health and Social Care in the Community     13
Child and Youth Services Review             10
Journal of Community Psychology              7
Social Work                                  6
International Journal of Social Welfare      6
Family Relations                             5
Child & Family Social Work                   5
Health & Social Work                         4
Affilia                                      4
Journal of Social Policy                     3
Child Abuse & Neglect                        3
American Journal of Community Psychology     3
International Social Work                    3
Research on Social Work Practice             2
Journal of Social Work Practice              2
Clinical Social Work Journal                 2
Social Service Review                        1
Journal of Social Work Education             1
Child Welfare                                1
Journal of Social Service Research           1
Administration in Social Work                1
Asia Pacific Journal of Social Work          1
Indian Journal of Social Work                1
Child Maltreatment                           0
Social Work in Health Care                   0
Social Work Research                         0

During the course of data collection and analysis, several questions arose that required the use of bivariate statistics. We wondered, for instance, whether the number of strategies used had changed during the time period under consideration (2003 through 2008). To gauge this trend, we computed the mean number of key strategies used on average in each year, then estimated the correlation between year published and number of strategies used to assess the likelihood that the trend we observed occurred by chance. A Pearson correlation coefficient was used for this purpose.
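A Pearson correlation of this kind can be sketched directly from its definition. The data below are fabricated purely for illustration (the study itself reported r(99) = .264, p < .01, computed in SPSS); only the computation is shown.

```python
# Minimal sketch of a Pearson correlation between publication year and the
# number of key strategies used. Data are invented, not the study's data.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

years      = [2003, 2003, 2004, 2005, 2005, 2006, 2007, 2007, 2008, 2008]
strategies = [1,    1,    1,    2,    2,    2,    3,    2,    3,    3]

print(round(pearson_r(years, strategies), 2))  # 0.93 for this toy data
```

In practice the p value would come from a t test on r with n − 2 degrees of freedom, which SPSS reports automatically.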

Subsequent analysis focused on trends in the use of specific strategies. To describe this, we calculated the percentage of each year's articles that used each strategy. Then we computed a chi-square to gauge whether the association between year and use of the strategy was statistically significant.
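The year-by-use chi-square described above amounts to a test on a 2 × 6 contingency table. The sketch below is illustrative only: the used/not-used counts are invented (the column totals merely echo the sample's year distribution), and the statistic would be compared against a chi-square distribution with (2 − 1)(6 − 1) = 5 degrees of freedom.

```python
# Self-contained sketch of the chi-square statistic for a contingency table:
# expected counts from row and column totals, then sum of (O - E)^2 / E.

def chi_square(table):
    """table: list of rows of observed counts. Returns the chi-square statistic."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Rows: articles using / not using a strategy; columns: years 2003-2008.
# Counts are invented for illustration.
observed = [
    [4, 3, 10, 5, 12, 14],   # used the strategy
    [9, 7, 15, 5,  8,  8],   # did not use it
]
print(round(chi_square(observed), 2))  # 6.81 for this toy table
```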

Once statistical analysis of the data was complete, we undertook a process of peer review. (This process was declared "exempt" by the University of Utah IRB.) Five colleagues—located in New Zealand, France, and the United States—agreed to participate in the review of our preliminary findings. Three of these provided feedback individually, and two met with us one afternoon to discuss and interpret the study results. On the basis of their feedback, we undertook additional analysis and modified our interpretation of the findings.

RESULTS

Popular Strategies

The frequency of use for each strategy is presented in Table 1. Five strategies were used in more than half of the articles reviewed. The most popular strategy, used in 67% of articles, was a sampling rationale. Here, we examined each study, looking for the author's articulation of the sampling rationale used. A negative response on this strategy indicated that authors only mentioned convenience sampling. The next most popular approach, used in 59% of articles, was analyst triangulation. This strategy involved the use of multiple perspectives ("eyes") in interpretation of results. Specification of problems or limitations in a study, although not particular to qualitative research, was used in 56% of articles. Careful representation of analysis procedures was used in 53% of articles. In these, the description of data analysis procedures was sufficiently detailed such that they could be replicated. Finally, 50% of articles presented studies that were clearly informed by a theory or conceptual framework, which we defined as broad ideas that predict or explain the phenomenon under consideration. Typically, this content was found in the introduction and discussion sections, though in some cases it was evident throughout the article. Some studies mentioned the use of grounded theory as a method but failed to offer a theory or conceptual framework based on their results.

Key Strategies

In this phase of the analysis, we examined authors' use of Creswell's (2007) eight strategies: triangulation (analyst, data, theory), peer debriefing, member-checking, persistent observation or prolonged engagement, reflexivity or use of self, thick description, negative case analysis, and external audit. Creswell grouped all three forms of triangulation as one strategy, which appeared in 77% of the articles reviewed. Triangulation typically involved the use of different individuals in analysis of data. This approach, known as "analyst triangulation," was used in over half (59%) of the articles. "Data triangulation," used in just over a third (36%), involved the use of different types of data, most commonly interviews and focus groups. Theory triangulation, use of divergent theoretical perspectives, was used in 18% of the studies. Typically, researchers reported using only one approach to triangulation, though 32% reported using two or more.

Peer debriefing or review involved a disinterested peer in the process of data analysis and interpretation. This approach was used in 31% of articles.

Member-checking, applied in 31% of articles, involved review of transcripts or data interpretation by respondents or people who were related or similar. In these studies, authors typically invited respondents to review transcripts of their interviews or to comment on results of the study. Sometimes this was done in focus groups, and sometimes researchers invited people who were similarly situated to their respondents to review the work.

Considerably less frequently observed strategies were persistent observation or prolonged engagement (treated as one strategy by Creswell, 2007), used in 24% of studies; thick description, used in 16%; reflexivity, used in 14%; negative case analysis, used in 8%; and external audits, used in 7%.

Those studies using persistent observation spent sufficient time in the field in a variety of ways beyond the primary method for data collection, allowing a strong focus on an issue within the studied community, such as spending a year on the streets with homeless youths, participating, observing, and interviewing to address the topic of their resilience. Researchers who used prolonged engagement spent a significant amount of time in the field collecting data from a variety of perspectives. Only 6% of studies used both of these approaches.

Authors who included thick description often used rich, direct quotes with well-developed interpretations to present in depth concepts and constructs that were important to the study. Other times, the authors' own words or descriptive phrases conveyed a sense of the participants, the environment, or the researcher's experience. This sometimes went beyond what might be considered essential to reporting the research but greatly enhanced a piece. For instance, one author mentioned that during an informal interview a homeless respondent offered him a piece of cardboard to sit on as protection from the cold street. Another author participated in a Jewish Orthodox community for a year while interviewing women (the focus of the study) as well as community leaders, male and female, about the role of women.

Seventeen percent of the articles reviewed usednone of these key strategies, and just over half (59%)met Creswell's (2007) recommended level of two ormore.This finding is illustrated in Table 3.

Trends

Since the 1985 publication of Lincoln and Guba's ground-breaking work Naturalistic Inquiry, the use of qualitative methods by social work researchers has expanded considerably. For instance, the decade between 1982 and 1992 saw a dramatic increase in the proportion of social work doctoral dissertations that used qualitative methods (Brun, 1997). Qualitative research methods are listed in the revised Guidelines for Quality in Social Work Doctoral Programs (Group for the Advancement of Doctoral Education, 2003), and there are now dozens of texts available on the topic (Padgett, 2004). The 2002 establishment of the journal Qualitative Social Work confirms the prominence of these methods.

So, we wondered whether the use of key strategies for ensuring rigor had increased during the period under consideration. Results of our analysis are presented in Figure 1. A modest, positive correlation was observed between year and the number of strategies used [r(99) = .264, p < .01]. The magnitude of this effect can be seen in a comparison of the lowest year's

Table 3: Percentages of Sampled Articles Using Different Numbers of Key Strategies (N = 100)

Number of key strategies:   0    1    2    3    4    5    6
Percentage of articles:    17   24   23   17   12    6    1
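The frequency distribution in Table 3 can be checked against the summary statistics reported in the article (a mean of 2.0 key strategies, SD = 1.5). A minimal Python sketch recomputing both from the tabled percentages (because N = 100, the percentages double as article counts):

```python
import math

# Frequency distribution from Table 3: percentage of the 100 sampled
# articles using each number of key strategies (0 through 6).
distribution = {0: 17, 1: 24, 2: 23, 3: 17, 4: 12, 5: 6, 6: 1}

n = sum(distribution.values())  # 100 articles
mean = sum(k * f for k, f in distribution.items()) / n
variance = sum(f * (k - mean) ** 2 for k, f in distribution.items()) / n
sd = math.sqrt(variance)

print(round(mean, 2), round(sd, 2))  # 2.05 1.51
```

Rounded to one decimal place, these values agree with the reported M = 2.0 and SD = 1.5.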

mean (1.4 strategies in 2003) with that obtained in the highest year. By 2007, this had increased to 2.4 strategies, a level that was sustained in 2008.
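The trend test above is a Pearson correlation between year of publication and the number of key strategies each article used. The study's per-article data are not published, so the values below are purely illustrative stand-ins; a minimal pure-Python sketch of the computation:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sy = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical stand-in data (not the study's): publication year and
# number of key strategies for a handful of imaginary articles.
years = [2003, 2003, 2004, 2005, 2005, 2006, 2007, 2007, 2008, 2008]
strategies = [1, 2, 1, 1, 2, 2, 3, 2, 2, 3]

r = pearson_r(years, strategies)  # positive r: strategy use rises with year
```

For the real sample of 100 articles, the authors report r = .264; it is the positive sign of r that indicates increasing use of the strategies over time.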

DISCUSSION

Qualitative research methods seem well-suited to the skills and values of those who select social work as a career. Relationship and communication skills are vital for the collection and interpretation of qualitative data. These methods may also prove a better fit for the "messy" problems that are the focus of social work intervention (Meyer, 1996).

This study identified strategies that qualitative researchers in our field have reported using to enhance the rigor of their work. Interpretations of these findings may be taken from either a "half full" or a "half empty" perspective. For instance, given the importance of sampling, it is reassuring to see that 68% of published articles in this sample offered a clear rationale for the authors' sampling choices. Conversely, nearly a third (32%) did not. Similarly, over half of these articles (59%) met Creswell's (2007) suggested criterion, using at least two of the key strategies; but 17% used none at all. The general trend for the period under consideration was toward use of more strategies for ensuring rigor.

The absence of reflexivity in this sample of recently published social work articles is surprising: in the vast majority (86%) of articles, authors did not provide any information about themselves. Perhaps these authors feared it would be unprofessional or intrusive to disclose their personal characteristics, or perhaps they thought personal disclosure would be inconsistent with editorial demands. Given the acknowledged subjectivity of qualitative methods and the importance of the researcher's lens, we strongly encourage authors to undertake more often the simple task of presenting the researcher's relevant traits.

Like the self, theory was widely neglected in the articles reviewed. Ignoring Kurt Lewin's well-known dictum that "there is nothing as practical as a good theory," nearly half (49%) of the studies reviewed did not specify a theoretical or conceptual framework in any section of the published article. These atheoretical articles seemed to be informed by pragmatic considerations. Sections that might otherwise include theoretical content were devoted to the importance of the social problem or the injustice under consideration. We wonder whether there might be regional differences in the use of

BARUSCH, G R I N G E R I , AND G E O R G E / Rigor in Qualitative Social Work Research: Strategies Used in Published Articles 17


Figure 1: Mean Numbers of Key Strategies Used, by Year

[Line graph of the mean number of key strategies against year of publication, 2003-2008; vertical axis runs from 0.0 to 3.0.]

theory and suggest that future research examine international trends in this area.

Careful attention to rigor is necessary but not sufficient to ensure high-quality research. Rigorous research is not necessarily "good" research. As one of our peer reviewers pointed out, research must also be evaluated on the basis of its relevance to the profession and its potential impact on social justice. But, as Lietz, Langer, and Furman (2006) suggested, judicious application of the key strategies helps to ensure that participant voices are heard and increases the likelihood that the work will meet other standards of quality.

The present work leads us to reflect on both education and research in our field. Use of the key strategies can help correct the perception that qualitative research is not sufficiently rigorous to contribute to the knowledge base of the profession. We suggest that doctoral classes on qualitative methods treat the issue of rigor with depth and that dissertation proposals be reviewed with an eye to the author's methodological awareness. Clearly, published research should set a strong example. Strengthening qualitative research can only benefit social work by offering credible and applicable findings to shape practice, education, and future research.

Given the local character of qualitative research, the application of universal criteria would be counterproductive. However, criteria for strengthening qualitative research cannot be ignored. Instead, we argue for what Seale (2002) called "methodological awareness" (p. 108), that is, the thoughtful application of relevant criteria throughout the research enterprise. We believe that each researcher is empowered to make autonomous choices regarding what strategies are appropriate to the research context and questions and that the act of doing research carries with it an ethical obligation to be accountable and transparent about those choices.

REFERENCES

Anfara, V. A., Jr., Brown, K. M., & Mangione, T. L. (2002). Qualitative analysis on stage: Making the research process more public. Educational Researcher, 31(7). Retrieved from http://edr.sagepub.com/cgi/content/abstract/31/7/28

Brun, C. (1997). The process and implications of doing qualitative research: An analysis of 54 doctoral dissertations. Journal of Sociology and Social Welfare, 24(4), 95-112.

Caelli, K., Ray, L., & Mill, J. (2003, Spring). "Clear as mud": Toward greater clarity in generic qualitative research. International Journal of Qualitative Methods, 2(2), 1-13.

Creswell, J. W. (2007). Qualitative inquiry and research design: Choosing among five traditions. Thousand Oaks, CA: Sage Publications.

Davies, D., & Dodd, J. (2002). Qualitative research and the question of rigor. Qualitative Health Research, 12, 279-289.

Denzin, N. K. (2002). Social work in the seventh moment. Qualitative Social Work, 1, 25-38. Retrieved from http://qsw.sagepub.com/cgi/content/abstract/1/1/25

Emden, C., & Sandelowski, M. (1998). The good, the bad and the relative, part one: Conceptions of goodness in qualitative research. International Journal of Nursing Practice, 4, 206-212.

Emden, C., & Sandelowski, M. (1999). The good, the bad and the relative, part two: Conceptions of goodness in qualitative research. International Journal of Nursing Practice, 5, 2-7.

Fade, S. A. (2003). Communicating and judging the quality of qualitative research: The need for a new language. Journal of Human Nutrition and Dietetics, 16, 139-149.

Golafshani, N. (2003). Understanding reliability and validity in qualitative research. Qualitative Report, 8, 597-607.

Group for the Advancement of Doctoral Education. (2003). Guidelines for quality in social work doctoral programs (Rev. ed.). Retrieved from http://www.gadephd.org/documents/gadeguidelines.pdf

Kincheloe, J. L. (2001). Describing the bricolage: Conceptualizing a new rigor in qualitative research. Qualitative Inquiry, 7, 679-692.

Kirk, J., & Miller, M. L. (1986). Reliability and validity in qualitative research. Beverly Hills, CA: Sage Publications.

LeCompte, M. D., & Goetz, J. P. (1982). Problems of reliability and validity in ethnographic research. Review of Educational Research, 52(2), 31-60.

Lietz, C. A., Langer, C. L., & Furman, R. (2006). Establishing trustworthiness in qualitative research in social work. Qualitative Social Work, 5, 441-458.

Lincoln, Y. S. (1995). Emerging criteria for quality in qualitative and interpretive research. Qualitative Inquiry, 1, 275-289. Retrieved from http://qix.sagepub.com/cgi/content/abstract/1/3/275

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Thousand Oaks, CA: Sage Publications.

Marshall, C. (1989, March). Goodness criteria: Are they objective criteria or judgment calls? Paper presented at the Alternative Paradigms Conference, San Francisco.

Meyer, C. H. (1996). My son the scientist. Social Work Research, 20.

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2). Retrieved from http://www.ualberta.ca/~iiqm/backissues/1_2Final/html/morse.html

Padgett, D. (2004). Introduction: Finding a middle ground in qualitative research. In D. Padgett (Ed.), The qualitative research experience (pp. 1-13). Belmont, CA: Thomson Brooks/Cole.

Rolfe, G. (2004). Validity, trustworthiness and rigour: Quality and the idea of qualitative research. Journal of Advanced Nursing, 3, 304-310.

Saukko, P. (2005). Methodologies for cultural studies: An integrative approach. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (3rd ed., pp. 343-356). Thousand Oaks, CA: Sage Publications.

Schwartz-Shea, P. (2006). Judging quality: Evaluative criteria and epistemic communities. In D. Yanow & P. Schwartz-Shea (Eds.), Interpretation and method: Empirical research methods and the interpretive turn (pp. 89-113). Armonk, NY: M. E. Sharpe.

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5, 465-478. Retrieved from http://qix.sagepub.com/cgi/content/abstract/5/4/465

Seale, C. (2002). Quality issues in qualitative inquiry. Qualitative Social Work, 1, 97-110. Retrieved from http://qsw.sagepub.com/cgi/content/abstract/1/1/97

Tobin, G. A., & Begley, C. M. (2004). Methodological rigour within a qualitative framework. Journal of Advanced Nursing, 4, 388-396.

Journal Citation Reports: Science and social science edition. (2006). New York: Thomson Reuters.

Whittemore, R., Chase, S. K., & Mandle, C. L. (2001). Validity in qualitative research: Pearls, pith and provocation. Qualitative Health Research, 11, 522-537. Retrieved from http://qhr.sagepub.com/cgi/content/abstract/11/4/552

Amanda Barusch, PhD, is professor, College of Social Work, University of Utah, Salt Lake City, and Department of Social Work and Community Development, University of Otago, Dunedin, New Zealand. Christina Gringeri, PhD, is associate professor, College of Social Work, University of Utah, Salt Lake City. Molly George, MA, is a research assistant, Department of Social Work and Community Development, University of Otago. The authors gratefully acknowledge the contributions of Mary Duffy, Deborah Hinton, Emily Keddell, Peregrine Schwartz-Shea, and Stephanie Wahab to their interpretation of the findings presented in this article. Address correspondence to Amanda Barusch, College of Social Work, University of Utah, 395 South 1500 East, Salt Lake City, UT 84112; e-mail: [email protected].

Original manuscript received October 20, 2009; accepted February 12, 2010.
