
Increasing evaluation capacity within community-based HIV prevention programs

Deborah Gibbs a,*, David Napp b, David Jolly c, Bonita Westover d, Gary Uhl e

a Health, Social and Economics Research, RTI, P.O. Box 12194, Research Triangle Park, NC 27709, USA
b Practical Applications of Public Health, 1309 Glendale Avenue, Durham, NC 27707, USA
c Department of Health Education, North Carolina Central University, 1801 Fayetteville Street, Durham, NC 27707, USA
d Wisconsin Tobacco Control Monitoring and Evaluation Program, University of Wisconsin–Cooperative Extension, 45 North Charter Street, Room 141, Madison, WI 53715, USA
e Division of HIV/AIDS Prevention, Centers for Disease Control and Prevention, Mail Stop E-07, Atlanta, GA 30333, USA

* Corresponding author. Tel.: +1-919-541-6942; fax: +1-919-880-8454. Send reprint requests to: National Center for HIV, STD and TB Prevention, Office of Communication, Centers for Disease Control and Prevention, Mail Stop E-07, Atlanta, GA 30333, USA. E-mail address: [email protected] (D. Gibbs).

Received 1 April 2001; received in revised form 1 November 2001; accepted 1 January 2002

Abstract

Funding agencies use technical assistance to strengthen the evaluation capacity of community-based organizations (CBOs). We used qualitative methods to describe beliefs and attitudes related to evaluation and to identify factors influencing evaluation capacity, based on interviews with 61 CBOs, nine health departments, and 28 technical assistance providers. Four factors influencing evaluation behavior among CBOs were identified: funding agency expectations, resources, leadership and staff, and evaluation tools and technology. Using these factors, we developed a model that describes three stages of evaluation capacity: compliance, investment, and advancement. We propose strategies by which funding agencies and technical assistance providers can help strengthen evaluation capacity within CBOs. © 2002 Elsevier Science Ltd. All rights reserved.

Keywords: Evaluation; Technical assistance; HIV/AIDS prevention; Community-based organizations

1. Introduction

Government agencies that fund human immunodeficiency virus (HIV) prevention programs are increasingly emphasizing evaluation to assess the effectiveness of prevention efforts and to increase accountability (Centers for Disease Control and Prevention, 1999; Rugg et al., 1999). The CDC funds HIV prevention programs within community-based organizations (CBOs). The CDC recommends and supports program evaluation. However, for a variety of reasons, many CBOs lack the skills in design, data collection, and analysis needed to scientifically evaluate programs. Because limitations in program evaluation hamper the effectiveness of prevention programming, funding agencies that support these programs have been urged to invest in strengthening local evaluation capacity (Ruiz et al., 2000). Evaluation capacity is defined as the extent to which a CBO has the necessary resources and motivation to conduct, analyze, and use evaluations. The CDC views evaluation technical assistance as a key strategy in this effort. (Throughout this article, technical assistance refers specifically to technical assistance with program evaluation rather than any of the other areas in which HIV/AIDS prevention programs may receive technical assistance.)

While technical assistance is commonly provided to health promotion programs, it seldom addresses program evaluation specifically. To guide CDC's provision of technical assistance, we described beliefs and attitudes regarding evaluation, assessed the existing technical assistance resources and demand, and identified preferences in format and content. We focused on CBOs whose HIV prevention programs are funded by CDC either directly or through cooperative agreements with state and local health departments. Findings from this study, the Program Evaluation Technical Assistance Assessment, suggest a model of CBO evaluation capacity. Technical assistance providers, as well as federal, state, and private funding agencies, can use this model to identify the kind of support needed to improve evaluation by CBOs with varying levels of resources and expertise.

2. Methods

2.1. Study planning

To refine the study design, we reviewed materials from recent CDC-sponsored studies related to evaluation and technical assistance, met with CDC project officers, and spoke with leaders of related studies and grantees attending CDC-sponsored meetings. This preliminary work suggested that CBOs were the most frequent users of evaluation technical assistance and identified several possible influences on behavior related to evaluation and technical assistance. These potential influences included funding agency expectations and resources; beliefs and attitudes regarding evaluation, such as its perceived usefulness and possible negative repercussions; skill and resource limitations; and intervention type and target population. These constructs guided both the site selection process and the development of topic guides for field data collection.

We next selected eight sites in which the CDC funded HIV prevention programs. To ensure variation in site context factors, we selected cities or metropolitan areas that varied in size and AIDS epidemiology. Population size, based on estimated 1996 population, was categorized as medium (500,000 to 1.5 million) or large (1.5 million to 4 million). We excluded metropolitan areas with populations less than 500,000 because they contained few CDC-funded prevention programs; those with populations larger than 4 million were excluded because their size and complexity would make it difficult to capture a comprehensive picture of evaluation and technical assistance activity with the resources available for this study. AIDS case rates, based on 1996 rates, were categorized as either less than or greater than the average case rate for metropolitan areas with populations greater than 500,000, which was 29.3 per 100,000 (Centers for Disease Control and Prevention, 1996). We also limited site selection to those in which a CDC HIV prevention project officer and health department staff were available and willing to facilitate access to CBOs. From those sites that fit our criteria, we selected sites that were geographically diverse and that represented a range of evaluation resources, based on CDC project officer assessments.
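To make the screening criteria concrete, the sketch below applies them to made-up candidate sites. This is an illustration only, not the study's actual selection procedure or data; the population categories and the 29.3 per 100,000 case-rate threshold are those described above, while the site names and figures are hypothetical.

```python
# Illustrative sketch only: site names and figures are invented; the thresholds
# (population categories and the 29.3 per 100,000 average AIDS case rate for
# metropolitan areas over 500,000) are those described in the text.

def classify_site(population: int, aids_case_rate: float):
    """Return (size_category, case_rate_category), or None if the site is excluded."""
    if population < 500_000 or population > 4_000_000:
        return None  # excluded: too few CDC-funded programs, or too large and complex
    size = "medium" if population <= 1_500_000 else "large"
    rate = "below average" if aids_case_rate < 29.3 else "above average"
    return size, rate

# Hypothetical candidate sites: (name, 1996 population estimate, 1996 AIDS case rate)
candidates = [
    ("Site A", 750_000, 18.2),
    ("Site B", 2_300_000, 41.5),
    ("Site C", 350_000, 55.0),   # excluded on population size
]

for name, pop, rate in candidates:
    print(name, classify_site(pop, rate))
```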

We also developed an interview guide that addressed the constructs identified during preliminary research. The interview guide was extensively reviewed by CDC staff, pretested with the director of a CBO not in a study site, and further refined. The interview guide provided a framework to cover topics of interest, while allowing the interviewer latitude to adapt questions as needed to explore related lines of discussion, focus on topics with which the respondent was most conversant, and limit discussion of nonproductive topics.

2.2. Data collection

Between November 1998 and February 1999, we interviewed staff at 61 CBOs in the eight sites. The 90-min onsite interviews were conducted by two members of the study team, one taking notes and one leading the interview. With respondents' permission, all interviews were audiotaped. The study team asked to interview the person within each CBO who was most knowledgeable about evaluation, although in some instances more than one person was interviewed. Regardless of the number of people interviewed, the organization is the unit of analysis.

In addition to CBO staff, staff members from nine state and local health departments and 28 technical assistance providers who work with CDC-funded CBOs were interviewed in person or by telephone. These interviews, which provided additional insights into factors influencing CBOs' evaluation behavior and use of technical assistance, were used to inform interpretation of data from CBO respondents.

In terms of their target populations and intervention types, these CBOs typify the range of AIDS prevention programs. Most were relatively small, with half reporting five or fewer full-time employees.

2.3. Analysis

All interview notes were transcribed. To protect confidentiality, all identifying information was removed from these notes prior to analysis, and respondents were offered the option of having records of their interviews removed from the data file that was delivered to the CDC.

The study team used QSR NUDIST software (Qualitative Solutions and Research Pty, Ltd; Melbourne, Australia, Version 4) to facilitate coding and analysis. The qualitative analysis was based directly on the interview guide topics. The analysis used a hierarchical coding structure that allowed specific dimensions of meaning to be examined separately and new codes to be added as the analysis progressed. To identify any inconsistent interpretations of codes or overlapping codes, team members coded three interviews. Revisions were made to the coding structure and definitions refined until interrater reliability approached 100% for these interviews.
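The article does not specify how interrater reliability was calculated. Simple percent agreement on code assignments is one common check used while refining a codebook; the sketch below illustrates that calculation with invented coders, codes, and interview segments, purely as an example of the kind of comparison described.

```python
# Illustrative sketch only: the reliability statistic, the codes, and the data
# here are assumptions, not the study's actual codebook or results.

def percent_agreement(coder_a, coder_b):
    """Share of text segments to which two coders assigned identical sets of codes."""
    assert len(coder_a) == len(coder_b)
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

# Hypothetical codes applied by two coders to six interview segments.
coder_a = [{"funder/expectations"}, {"beliefs/usefulness"}, {"resources/staff_time"},
           {"ta/format"}, {"beliefs/repercussions"}, {"resources/funding"}]
coder_b = [{"funder/expectations"}, {"beliefs/usefulness"}, {"resources/staff_time"},
           {"ta/format"}, {"beliefs/usefulness"}, {"resources/funding"}]

print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # Agreement: 83%
```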

The study team first conducted top-level coding and prepared descriptive summaries for review and discussion of four major topics: relationships with funding agencies, beliefs about evaluation, evaluation experiences, and technical assistance experiences and preferences. To further explore each of these topics, the study team reviewed the data, developed and tested hypotheses, and used summary matrices and displays to reveal patterns or clusters within the data. As patterns emerged, study team members added additional coding to identify finer distinctions of meaning and partitioned the data to examine data from subsets of respondents. These processes were repeated until distinctions were clear.

CBO respondents' descriptions of their attitudes and beliefs regarding evaluation and their recent evaluation experiences were reviewed to identify factors influencing evaluation capacity. Examination of recurring themes identified four broad factors that affected evaluation behavior: funding agency expectations, resources, leadership and staff, and evaluation tools and technology. These factors are described in Section 3.

Partitioning the data to compare responses from CBOs with varying levels of evaluation experience, we noted variations in the relative importance of these factors and their effects on evaluation. We developed matrix, descriptive, and graphic representations of how these factors were experienced; how they interacted with each other; and how they influenced and were influenced by the CBOs' evaluation experiences. These representations were compared to interview data and refined until three distinct patterns emerged that most closely fit the data. While variations were observed within each group, the attitudes and behaviors described represent the predominant pattern. The resulting model of evaluation capacity is described in Section 4.

3. Findings

Factors influencing evaluation (funding agency expectations, resources, leadership and staff, and evaluation tools and technology) were perceived either as facilitators or barriers to the initiation of evaluations or as benefits or disadvantages resulting from evaluations. These concepts were interrelated and at times mirror images of each other. For example, while one CBO respondent described inadequate staff evaluation skills as a barrier to evaluation, another stated that having these skills facilitated evaluation efforts. Recursive patterns were also noted, so that a benefit of having conducted evaluations, such as learning how to improve a prevention program, might be described as an incentive to conduct more, and more complex, evaluations.

3.1. Funding agency expectations

Funding agency expectations influenced evaluation behavior by setting a baseline for the amount and type of evaluation activity to be performed. These expectations varied considerably among sites. While some respondents acknowledged funding agency expectations as a motivational support, many described them as excessive in proportion to resources provided for conducting evaluation. Some respondents were concerned that evaluation results might affect future funding, e.g. that funding agencies might shift funds from programs unable to demonstrate success to those with positive findings or that they might interpret positive evaluation findings as suggesting that programs were no longer needed.

3.2. Resources

CBO respondents included many whose staff, by virtue of their commitment to evaluation, managed to conduct it with minimal resources. More often, however, the availability of resources such as staff time, access to external consultants, funding for operational costs, and computer hardware and software was a critical determinant of evaluation activity. Resources influenced both the type of evaluation and how extensively (or whether) the resulting data were analyzed. Without sufficient funds or a budget line item designated specifically for evaluation, respondents reported being forced to choose between service delivery and evaluation.

Although technical assistance was an important facilitator for CBOs that might otherwise have had difficulty meeting funding agency expectations, there was no evidence that the availability of technical assistance influenced CBO decisions on what kinds of evaluation to implement or how extensively to invest in evaluation. For those CBOs whose leaders had already decided to increase or improve evaluations, technical assistance was a critical source of support.

3.3. Leadership and staff

Many CBOs went well beyond their funding agency's expectations and conducted more extensive or complex evaluations than were required of them. Typically, these were CBOs whose leaders believed that evaluation could be used to improve program effectiveness and that strategies could be found to overcome the many challenges of evaluation. To promote evaluation, leaders needed to overcome staff beliefs that evaluation detracted from the mission of HIV prevention, reassure staff that unfavorable evaluation results would have no negative repercussions, and engage staff in evaluation planning. Perhaps most importantly, leaders needed to inculcate in their staff a view of evaluation as a process that would improve, rather than detract from, service delivery.

Some CBO leaders also built evaluation capacity by recruiting staff with skills in data collection, computer use, and analysis, or by training existing staff in relevant skills. Respondents noted that workers with the life experiences and skills most needed for outreach and prevention among hard-to-reach populations rarely possessed evaluation skills and that low pay scales within CBOs made it difficult to attract and retain staff with these skills.


3.4. Evaluation tools and technology

Respondents reported needing, in addition to computers and software, evaluation designs and data collection methods tailored to the specific requirements of community-based HIV prevention programs. Data collection issues include the difficulty of collecting follow-up data from transient populations, trust and confidentiality issues among the disenfranchised populations most likely to be at risk for HIV infection, and limited client literacy.

For many HIV prevention programs, the outcome measures of greatest interest are changes in risk behaviors such as drug use and high-risk sexual activity. However, the validity of these measures is difficult to assess when the behavior in question is intensely personal and/or illegal. The common alternative, measuring changes in knowledge and attitudes, is less conclusive when program participants have been exposed to numerous interventions over the course of the epidemic. Validity may be further compromised when clients provide what they perceive to be desirable responses in order to collect research incentives or assure ongoing access to services from the CBO.

4. Evaluation capacity: a model

The model presents the relationships and interactions among the four groups of factors that influence evaluation (funding agency expectations, resources, leadership and staff, and evaluation tools and technology), evaluation activities, and benefits realized from evaluation. The three stages of evaluation capacity represent a developmental continuum along which CBOs could move, given experience and the necessary support. Each stage represents a composite of CBOs participating in the study, although many CBOs would not fit precisely into any of the stages. The three stages of evaluation capacity are as follows.

• Compliance. CBOs conduct evaluation to the extent required by funding sources without necessarily perceiving any benefit to their program except the possibility of continued funding.

• Investment. With strong support from leadership, CBOs commit the resources necessary to conduct, analyze, and use program-specific evaluation to improve interventions and support funding expansion.

• Advancement. CBOs engage staff and external partners in increasingly ambitious evaluations that contribute to a broader understanding of prevention theory and practice.

4.1. Compliance stage

CBOs represented by this model (Fig. 1) carry out required evaluation activities but rarely go beyond funding agency requirements. They use standardized methods, which may be prescribed by the funding source, with little adaptation to the CBO's specific interventions or target population. As CBO staff collect evaluation data, they report it to the funding agency, often without further internal review or analysis. If the funding agency compiles it for the CBO, evaluation data may provide potentially useful information, but CBOs at this stage often lack the ability or motivation to use this information. In the absence of this feedback, evaluation activities serve no function for the CBO other than compliance with the funding agency's requirements.

Fig. 1. Evaluation capacity: compliance stage.

In the compliance stage, funding agency expectation is the predominant motivating force for evaluation. Depending on the level of resources provided, evaluation activities may be perceived by CBO staff as unfunded mandates, because staff time required to implement them is often drawn from program activities or imposed on program staff as an additional responsibility. Evaluation is generally limited to collection of process measures that quantify program activities, e.g. the number of outreach contacts or workshop attendees. Data quality may be poor if staff cut corners in order to focus on prevention work. Even if program leaders and staff have an interest in evaluation, they may be unable to do more than what is required without additional financial support or technical assistance.

4.2. Investment stage

In the investment stage, championship of evaluation by CBO leadership has institutionalized evaluation as a tool for program improvement (Fig. 2) and fostered development of relevant skills within the CBO. Although funding agency expectations may facilitate this enthusiasm, it is primarily driven by program leaders' motivation and staff support. These CBOs commit resources specifically to evaluation, either as a budget line item or by incorporating evaluation into the job responsibilities of program staff. While these resources are primarily used internally, some external evaluation support may be purchased as well. Staff adapt evaluation methods to their clients' literacy levels or specific risk behaviors and often have access to computers for data entry and analysis.

Evaluation activities in the investment stage go beyond process evaluation to include simple outcome (effectiveness) evaluations. Staff also may initiate formative evaluations to shape new interventions or incorporate client satisfaction measures into outcome evaluation tools. Because of leadership support for the evaluation process, staff are conscientious about data quality, although they may find it challenging to maintain. Data analysis may be limited to reviewing completed instruments or hand-tallying data, or staff may use a spreadsheet or EpiInfo to support descriptive analyses. Regardless of data format, CBOs in the investment stage examine results for potentially useful information. These CBOs use evaluation findings to document program achievements and strengths, identify problem areas, and suggest possible modifications. They may also use evaluation findings to support requests for additional funding.

Fig. 2. Evaluation capacity: investment stage.

4.3. Advancement stage

CBOs in the advancement stage have broad institutionalized support for evaluation and the use of increasingly sophisticated designs and methods (Fig. 3). This stage is similar to the investment stage but involves more extensive and complex evaluations, which are integrated into the intervention planning process. The CBO may have contracted with an external evaluator and/or may have one or more skilled staff members for whom evaluation is a specific job responsibility. Design and data collection methods are tested and refined, and computers and analysis software are readily available.

CBOs in the advancement stage focus evaluation designs on client behavior change and may include comparison groups and/or follow-up data collection. They often combine qualitative and quantitative methods and may use complex statistical analyses. Some CBOs in this group participate in university-based, multisite collaborations in which model programs are tested and replicated. CBOs in the advancement stage use evaluation findings to support both ongoing funding and program improvement. They also disseminate them at conferences or through scientific publications. Their evaluation findings serve as the basis for intervention models for other programs.

Fig. 3. Evaluation capacity: advancement stage.
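For readers who want to operationalize the model, for example in a technical assistance planning tool, the sketch below restates the three stages as a simple data structure. The stage names and characteristics are drawn from the descriptions above; the field names and the idea of encoding them this way are illustrative assumptions, not part of the original study.

```python
# Illustrative sketch only: a compact restatement of the three-stage model.
# Stage names and characteristics paraphrase the text; the structure is hypothetical.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    primary_driver: str       # what chiefly motivates evaluation at this stage
    typical_evaluation: str   # evaluation activities usually conducted
    use_of_findings: str      # how results are used

STAGES = [
    Stage("Compliance",
          "funding agency expectations",
          "process measures prescribed by the funder, reported without internal analysis",
          "little beyond meeting funding requirements"),
    Stage("Investment",
          "leadership commitment, with staff support",
          "process plus simple outcome and formative evaluations, adapted to clients",
          "program improvement and requests for additional funding"),
    Stage("Advancement",
          "institutionalized support, skilled staff or external evaluators",
          "behavior-change designs, comparison groups, follow-up, mixed methods",
          "funding, program improvement, and dissemination to the field"),
]

for s in STAGES:
    print(f"{s.name}: driven by {s.primary_driver}; findings used for {s.use_of_findings}")
```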

5. Increasing evaluation capacity: recommendations

Although this study's methods were not designed to classify CBOs according to the stages of this model, we estimate that 5% of the CBOs interviewed would be classified as advancement stage, 55% as investment stage, and 40% as compliance stage. Regardless of where an individual CBO falls on this continuum of evaluation capacity, CBO leaders and funding agencies can support evaluation activities through the strategic deployment of technical assistance and other resources (e.g. funds earmarked for evaluation activities, hardware and software support, and training).

Evaluation support should begin with strengthening the quality of existing evaluations and enhancing their usefulness to ongoing program operations. In addition, the model suggests specific types of technical assistance that will help the CBO in each stage advance to the next stage. Although many of the recommendations that follow involve relatively small adjustments to current activities (such as reporting data back to CBOs), others would require considerable investments of time, funds, or other resources to implement (such as funds earmarked for evaluation). Funding agencies may not find it feasible to implement all or even most of the measures suggested. However, it is worth noting that there are real, albeit unmeasured, costs associated with the lack of support for evaluation. These may include the loss of information that could guide program improvement, the toll on CBO staff who are asked to conduct evaluations without adequate resources, and strain on relationships between funding agencies and CBOs.

The identification of priorities from the broad array of recommendations presented here would depend on local resources and needs. Based on interview data and the study team's observations, we believe that the greatest enhancements in evaluation capacity can be achieved through attention to two principles. First, resources provided to support evaluation should be commensurate with the evaluation activities required. Second, funding agency staff and technical assistance providers (as well as agency leaders) should take advantage of every available opportunity to use existing evaluation data as a resource for program improvement.

5.1. Compliance stage

The goal of technical assistance for CBOs in the compliance stage should be to engage program leadership in the evaluation process, to support the use of data from existing evaluation activities, and to use external expertise as required to complete simple evaluations. Both funding agencies and technical assistance providers can play a role in meeting this goal.

Funding agencies should ensure that any data required from CBOs as a stipulation of funding be compiled and analyzed by the funding agency's own staff or by a technical assistance provider if necessary. Simple analyses can provide the CBO with information on trends over time and can compare the CBO's operating experience with expectations. It is unlikely that CBOs in the compliance stage can be persuaded to engage in any extensive evaluation activity unless additional funding is provided to support staff time and other costs of evaluation without depleting program resources.
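As an illustration of the kind of simple analysis a funding agency or technical assistance provider might return to a CBO, the sketch below tallies hypothetical monthly process data against an assumed target. The figures and the target are invented; only the idea of reporting trends and comparisons with expectations back to the CBO comes from the text above.

```python
# Illustrative sketch only: the process data and target below are made up.
# It shows a trend summary of the kind recommended above, using process
# measures (e.g. outreach contacts) that CBOs already report to funders.

monthly_outreach_contacts = {   # hypothetical process data reported by a CBO
    "Jan": 180, "Feb": 205, "Mar": 160, "Apr": 230, "May": 245, "Jun": 210,
}
monthly_target = 200            # hypothetical expectation agreed with the funder

total = sum(monthly_outreach_contacts.values())
average = total / len(monthly_outreach_contacts)

print(f"Total contacts: {total}; monthly average: {average:.0f} (target {monthly_target})")
for month, contacts in monthly_outreach_contacts.items():
    flag = "below target" if contacts < monthly_target else "met target"
    print(f"{month}: {contacts} contacts ({flag})")
```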

Technical assistance providers can lend skills and expertise to strengthen ongoing evaluation activities by recommending or designing data collection instruments that are appropriate to the intervention and population, training staff in the use of these instruments, and conducting periodic monitoring of the data collection process to ensure that adherence to the protocol is maintained. Because CBOs in the compliance stage are unlikely to have staff with skills in data analysis, it is often appropriate for technical assistance providers to conduct all analyses. Technical assistance providers should share evaluation findings with CBO staff prior to preparation of any written reports, discuss possible interpretation of the data, and collaborate on recommendations. This precludes the scenario in which CBOs feel blindsided by unfavorable findings and promotes the idea of evaluation as supportive of their efforts. Both technical assistance and basic evaluation training should emphasize the potential usefulness of evaluation for identifying program strengths and areas for modification.

To help CBOs move from the compliance stage to the investment stage, technical assistance providers should blend motivational support with skill building. They can teach CBOs how to adapt instruments to specific interventions and populations, use an appropriate mix of qualitative and quantitative data collection, conduct basic data entry and analysis, apply evaluation to program improvement and funding requests, and integrate evaluation into the early stages of program planning.

5.2. Investment stage

For CBOs at the investment stage, in which leaders and staff are convinced of the value of evaluation, funding agencies can support CBO commitment by recognizing achievements, allowing (or requiring) a portion of program funding to be dedicated to evaluation, and providing practical support and skill building. Technical assistance at the investment stage should focus on building a broad base of skills among CBO staff and maintaining scientific integrity, while adapting methods to the prevention program. Training should include as many program staff members as possible, extending evaluation skills beyond those charged with conducting evaluation, and focusing on fundamental concepts as well as practical benefits of evaluation. Technical assistance providers can also provide specific expertise that extends CBO capacity by consulting on evaluation methods, helping staff adapt procedures and instruments to target populations and interventions, and assessing instrument reliability and validity. In this process, technical assistance providers may conduct more sophisticated statistical analyses for the CBO, while concurrently training CBO staff to eventually take over these tasks. When technical assistance providers perform analyses for CBOs, the completed analyses should be reviewed in detail with CBO staff so that technical assistance providers and staff can collaborate on interpretation and application of findings.

To help CBOs move from the investment stage to the advancement stage, the technical assistance provider should teach CBOs how to incorporate behavior change theory into program and evaluation planning, develop more sophisticated methods of data collection and analysis, and disseminate evaluation findings through conferences, workshops, and publications.

5.3. Advancement stage

CBOs in the advancement stage are essentially self-supporting with respect to evaluation and require little support from funding agencies. Indeed, their own evaluation resources, whether internal or contracted, may surpass those available from their funding agencies. Funding agencies can support CBO leadership by providing financial support for the development of new projects to attract external funding or by brokering expert technical assistance when needed. Funding agencies also can create a structure in which staff of CBOs in the advancement stage can serve as an evaluation resource for other CBOs. CBOs at the advancement stage will maintain their involvement in the evaluation process without funding agency support, but they can benefit from external technical assistance for specific expertise in such areas as evaluation design, instrument development and testing, analysis, and reporting. Technical assistance providers can also broker partnerships among these CBOs, universities, and other agencies for large, multisite evaluation research projects to test model programs, and arrange for CBOs to provide evaluation technical assistance to other CBOs.

6. Discussion

The study used a qualitative approach to data collection and analysis in order to explore the interactions of site, organizational, and individual factors that influenced attitudes and behaviors related to evaluation and evaluation technical assistance. Qualitative methods are uniquely suited to capturing rich descriptions and generating explanations without constraint by predetermined categories of analysis (Miles & Huberman, 1994; Patton, 1990).

While qualitative methods are ideally suited to exploratory studies such as this one, their potential limitations should be noted. The sites selected do not constitute a representative sampling of cities. Since CBOs that do not receive CDC funds were excluded from the study, the participating organizations may not represent all HIV prevention programs within any site. In addition, data may not accurately represent the distribution of attitudes toward evaluation among respondents if respondents either tended toward socially desirable responses or took the opportunity to vent frustrations to the interviewer. However, the consistency with which many themes, including some unexpected ones, were heard across sites and program types strengthens our confidence that these would be supported by further investigation.

This exploratory study does not, and was not intended to, support conclusive tests of hypotheses. Qualitative methods can be used, however, to link variables and processes to an outcome (in this case, evaluation behavior). Inferences of causality are strengthened by both the consistency with which these variables and processes were observed and the plausibility of the mechanisms linking them to outcomes (Miles & Huberman, 1994).

7. Applications of this research

Our data affirm the findings of Rossi, Freeman, and Lipsey (1999) and others on two themes: the importance of adequate resources for evaluation, and the importance of accommodating the perspectives of program managers and staff (as well as program sponsors and other stakeholders) to facilitate evaluation use. However, in the highly decentralized world of HIV/AIDS prevention programs, we observed that the role of evaluation professionals was much less prominent than is assumed in the evaluation literature. While sponsors may set expectations, and evaluation professionals (referred to elsewhere in this article as technical assistance providers) provide expertise, CBOs are typically expected to actively participate in, if not direct, evaluations of their own work. The pivotal role of CBOs in the evaluation process underscores the need to provide them with adequate resources for evaluation activities, commensurate with what would be required by an external evaluator. It also lends additional weight to the importance of capitalizing on their deep interest in program improvement to increase their sense of the value of evaluation. Patton cites findings to suggest that the process of valuing evaluation is "a necessary condition for evaluation use" (Patton, 1997). This would seem to apply equally well to facilitating evaluation participation.

Findings from this study suggest that funding agencies and technical assistance providers wishing to enhance CBO evaluation capacity need to be both active and specific in their efforts. Funding agencies must tailor their expectations and support to the current capacity and long-term goals of individual CBOs. Some funding agencies may themselves need assistance in assessing CBO evaluation capacity and identifying technical assistance needs. Funding agencies wishing to enhance CBO evaluation capacity need to create an environment that supports this process by communicating clear messages on the usefulness of evaluation as a tool for program improvement. This message should be reinforced by establishing evaluation expectations that are both reasonable in terms of the CBO's capacity and closely linked to service planning and delivery. In addition, funds should be allocated specifically for evaluation, in order to allay CBO concerns that evaluation diverts resources from program activities.


Technical assistance providers can increase their effectiveness by using the stages of evaluation capacity described in this article as a framework for assessing which skills, motivators, and resources to incorporate into their work with CBOs. In addition to meeting immediate evaluation requirements, technical assistance should strengthen each CBO's current evaluation activities and build capacity for more ambitious efforts. When the technical assistance providers' role is to provide evaluation services rather than training, they should find ways to keep CBO staff engaged in the process. Ideally, technical assistance providers should use a graduated approach to mix evaluation support and capacity building. This might progress from (a) conducting the evaluation for a CBO while showing staff how findings can be used to support their work, to (b) teaching CBO staff to conduct specific evaluation activities, filling in with expertise where needed, to (c) advising CBO staff on design and interpretation of evaluations that they conduct.

The CDC is using the results from this study in two ways. First, the CDC is better targeting evaluation training and technical assistance, thereby increasing the evaluation capacity of CBOs. Specifically, various venues at CDC are incorporating the results and recommendations into systems for technical assistance available to CBOs (Davis et al., 2000). Second, CDC is refining and exploring further the conceptual model of evaluation capacity in an ongoing mixed-methods study of health departments federally funded for HIV prevention. CDC's experience in providing evaluation technical assistance to CBOs, combined with findings from the health department research, will add to existing evaluation theory the meaningful identification, description, and measurement of specific constructs within a general evaluation capacity model.

Acknowledgments

The research described in this article was conducted by RTI with funding from the Centers for Disease Control and Prevention (CDC) under contract 200-96-0511. Debra Bercuvitz, currently at the Donahue Institute for Governmental Affairs at the University of Massachusetts, made inspired contributions to this research while a member of the RTI staff. In addition to the CDC technical monitors who are listed as authors, many staff members from the Program Evaluation Research Branch and the Prevention Program Branch of CDC's Division of HIV/AIDS Prevention provided valuable insights and suggestions at several stages of the study. Finally, we are deeply grateful to the many prevention program representatives, health department staff members, and technical assistance providers who shared with us their experiences with evaluation and technical assistance.

References

Centers for Disease Control and Prevention (1996). HIV/AIDS Surveillance Report, 8(2).

Centers for Disease Control and Prevention (1999). Evaluating CDC-funded HIV prevention programs. Volume 1: Guidance. Atlanta, GA: Centers for Disease Control and Prevention.

Davis, D., Barrington, T., Phoenix, U., Gilliam, A., Collins, C., & Cotton, D. (2000). Evaluation and technical assistance for successful HIV program delivery. AIDS Education and Prevention, 12(Suppl. A), 115–121.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Thousand Oaks, CA: Sage.

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage (esp. 26).

Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1999). Evaluation: A systematic approach (6th ed.). Thousand Oaks, CA: Sage.

Rugg, D., Buehler, J., Renaud, M., Gilliam, A., Heitgerd, J., & Westover, B. (1999). Evaluating HIV prevention: A framework for national, state, and local levels. American Journal of Evaluation, 20(1), 35–56.

Ruiz, M. S., Gable, A. R., Kaplan, E. H., Stoto, M. A., Fineberg, H. V., & Trussell, J. (Eds.) (2000). No time to lose: Getting more from HIV prevention. Washington, DC: National Academy Press.
