Reflective Literature Review of Contribution Analysis
Catherine-Rose Stocks-Rankin, February 2014
ABSTRACT
Contribution analysis (CA) is part of a family of theory-based evaluation methods that trace the pathway from inputs to outcomes. In this review, I reflect on the peer-reviewed literature about contribution analysis and on my experience of using CA to evaluate the PROP (Practitioner-Research: Older People) project.
Introduction
Within social services, an 'outcomes focus' is an increasingly prominent feature of the way we understand, plan and deliver services. By focussing on outcomes for individuals, we shift attention away from the inputs that make up service delivery and towards the individual experience of the user. This conceptualisation is underpinned by an ethical principle of empowerment, which focuses on engaging and enabling individual citizens over and above the institutionalized processes of welfare provision (Miller 2012; Miller & Cook 2012). Evidencing impact has, in turn, become an important aspect of organisational accountability.

Contribution Analysis (CA) is a recent methodological development in the evaluation field that uses a process of "logical argumentation" (Craig 2013; Wimbush et al. 2012) to understand the links between policy and practice activities, external factors and outcomes. CA (Mayne 2001; Mayne 2012) is part of a family of theory-based evaluation methods (Weiss 1998; Funnell & Rogers 2011; White 2010) that trace the pathway from inputs to outcomes, and it seeks to extend and strengthen this family of methods. The following report synthesizes my understanding of the current literature on CA and offers a practice-based account of using CA to evaluate a knowledge exchange project called PROP.
Overview of Contribution Analysis
CA was developed by Mayne (2001; 2012) and has since gathered additional proponents in Canada (Dybdal et al. 2010) and the EU (Delahais & Toulemonde 2012; Leeuw 2012; Lemire et al. 2012). There has been particular interest in Scotland (Morton 2013), notably within the NHS (Craig 2013; Wimbush et al. 2012) and Scottish Government (Scottish Government Social Research 2012). CA aims to "reduce uncertainty about the contribution an intervention is making to observed results through an increased understanding of why results did or did not occur and the roles played by the intervention and the other influencing factors" (Mayne 2012, p.271).

CA began as a framework for using performance management data to measure project outcomes and the pathway to impact generated (Dybdal et al. 2010; Mayne 2001). More recently, CA practitioners have developed the method to measure the impacts of policy initiatives (Wimbush et al. 2012) and of knowledge processes such as research use (Morton 2013) and knowledge exchange (Stocks-Rankin et al. 2013). There is also a range of large-scale uses of the CA approach within Scotland, e.g. an evaluation of the Commonwealth Games legacy (Scottish Government Social Research 2012) and of Scottish alcohol policy (Beeston et al. 2012).

Contribution Analysis is typically conducted in six stages (Mayne 2001):
1. Determine the cause-effect issue to be addressed
2. Develop a theory of change and risks to its success
3. Generate evidence in response to the theory of change
4. Assemble the contribution story, and outline the challenges to it
5. Seek out additional evidence
6. Revise and strengthen the contribution story
These steps articulate a clear process through which an evaluator can determine whether the outcomes observed are the result of the intervention's activities. The production of a simple six-step process is thought to provide definition and added value to theory-based approaches to evaluation (Delahais & Toulemonde 2012, p.282).
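To make the cycle concrete, here is a minimal sketch, in Python, of the six steps as an iterative loop, with steps 5 and 6 repeating until the challenges to the story are addressed. This is purely illustrative: CA prescribes no software, and every function, field and string below is invented for the sketch.

```python
# Purely illustrative sketch of Mayne's six steps as an iterative loop.
# CA prescribes no software; all names and strings here are invented.

def contribution_analysis(cause_effect_issue: str,          # step 1
                          theory_of_change: list[str],      # step 2: causal links
                          evidence: list[str]) -> dict:     # step 3: evidence so far
    # Step 4: assemble the contribution story and outline the challenges to it
    story = {"issue": cause_effect_issue, "claims": [], "challenges": []}
    for link in theory_of_change:
        if any(link in item for item in evidence):
            story["claims"].append(f"evidence supports: {link}")
        else:
            story["challenges"].append(f"not yet evidenced: {link}")

    # Steps 5 and 6: seek out additional evidence, then revise and
    # strengthen the story, repeating until no challenges remain.
    while story["challenges"]:
        challenge = story["challenges"].pop()
        evidence.append(f"additional evidence ({challenge})")
        story["claims"].append(f"revised claim after: {challenge}")
    return story

# Hypothetical usage with stand-in links from a results chain:
print(contribution_analysis(
    "Did the training programme change practice?",
    ["training builds research skills", "research skills change practice"],
    ["survey: training builds research skills"],
))
```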
How does CA work?
Practitioners of the CA approach rely on three key mechanisms to carry out their evaluations:

1. Theory of change
2. Results chain or logic model
3. Contribution story
Reflections from PROP: Evaluating our Impact
The PROP project was a 15-month practitioner-research programme designed to support practitioners in health and social care to design and carry out small-scale research projects which would improve practice. We used the CA approach on the PROP project to evaluate:
a. The impact of a training programme on research skills for practitioners in health and social care
b. The impact of the evidence produced by practitioners on the practice of health and social care in Scotland
The evaluation was carried out by the project manager, Catherine-Rose Stocks-Rankin, with direction from an expert in CA, Sarah Morton. In practice, the evaluation was a secondary priority to the delivery of the project and successful completion of our objectives. We were focused on supporting practitioners to develop their research skills, design and complete their research projects, and support changes to practice based on these findings. When we began to design the evaluation, we hoped that it would tell us what worked about the project and show particular changes to health and social care practice. As the project progressed, our expectations of the evaluation changed and our learning developed. For more detail on PROP: http://blogs.iriss.org.uk/prop/
While these tools are common to theory-based evaluations, CA's explicit focus on context and rival explanations inclines users to ensure that these tools are used to rigorous effect. The following section gives an explanation of each tool and attempts to untangle some of the ambiguity that surrounds the terminology for the non-evaluation audience.
1. Theory of change
A theory of change articulates the pathway to contribution. It should de-mystify the processes leading to change. That can mean showing the bumps along the road as well as anything that supported the impact observed. Theories of change should also give a clear indication of 'why' a project is thought to make a difference. That's the 'theory' bit of it. Most projects are underpinned by a set of assumptions. The theory of change should make these explicit.

Producing a theory of change is a process. It usually requires reflection, discussion and collaboration between the stakeholders in the project and the evaluator. This is particularly important when unpacking the assumptions within a project. These can be implicit to a project's design and delivery. Reflection and discussion can help to make these explicit. Mayne (2012, pp.273–274) suggests that a good theory of change should include the following elements:
1. A results or causal chain showing the logic of the programme
2. Assumptions which underpin each link in the results chain
3. Account of the risks to each of these assumptions
4. Description of the unintended effects
5. Identification of other key explanatory factors (rival explanations)
These five elements can be considered a set of steps. First, one would work out the logic of the project. This might include an account of the inputs, processes and outcomes for the project (a process for doing this is outlined in the next section). A theory of change should also show the mechanisms that will support (or inhibit) a project's success. For example, a risk might be limits to staff time or a lack of clarity in a partnership agreement. A robust theory of change will also show the context which surrounds the project. This wider context might include other organisations that are doing similar work or budget cuts in the sector. These factors might influence the success of the project and/or the realisation of outcomes. For example, the changes we see in the
sector could be influenced by our project as well as by the work others are doing. Perhaps there is a critical mass of influence and attention in this area that we are all contributing to? Most of the writing on CA suggests that this focus on context is where CA really shines. CA assumes a complex system (Patton 2012) and begins with the viewpoint that there are multiple and complex processes at play in the production of any outcome. The information produced through the five steps outlined above constitutes a 'theory of change'. The following outlines our approach to developing a theory of change on the PROP project. This section is followed by some practice-based accounts from the literature.
Reflections from PROP: Theories of change
On the PROP project, our conception of the theory of change developed over the course of the project. At the beginning, we assumed that our theory of change was a simple journey from inputs and activities to changes in practice within the organisations who had partnered with us on the project. As we delivered the programme, it became clear the theory of change was two-fold. We needed to account for the impact of the research training programme, which supported practitioners to learn new skills, as well as the impact of the evidence produced. To show these two different stories, we developed a nested theory of change (see Diagram 1 below). This model highlights that the practitioners themselves are a mechanism for change.

Working to develop a theory of change was an iterative process on the PROP project. This could be due to the simultaneous nature of project delivery and evaluation. Although we had a project plan and a set of outcomes to deliver, the nature of project work, particularly research and knowledge exchange, is that it is emergent. Much was unknown when we set out to plan the PROP project and that may have impacted our ability to define a clear 'theory of change'. In practice, we re-visited our logic model (described below) three times over the course of the project and only refined a clear theory of change when we set out to write our final contribution story.
In their practice-based review of CA, Delahais and Toulemonde highlight the importance of rigorous development of the theory of change, particularly in terms of the alternative explanations. Their experience of using CA suggests that there can be "an unbalanced attention paid to the causal links under test at the expense of other contributing factors and rival explanations" (2012, p.284). Dybdal and colleagues (2010) echo this sentiment and further suggest that, in practice, the clarification of influencing factors and alternative explanations proved difficult, at times blurring together and often lacking in sufficient detail. As the authors suggest, "this is notable when taking into account that the embeddedness of the theory of change is one of the key arguments for using CA compared with other theory-based approaches" (Dybdal et al. 2010, pp.43–44).

Diagram 1: PROP Theory of Change. [The diagram shows PROP's nested theory of change: two linked pathways, one for the research training programme and one for the research evidence it produced, each traced from inputs and activities through engagement and reaction to changes in knowledge, skills and capacity, and on to changes in practice.]
2. Results chains and logic models
Theories of change depend on the use of a results chain or logic model to articulate the pathway to contribution. There is some ambiguity in language here. Results chains are used in performance management, particularly in Canada where CA originated. Logic model is a common term in evaluation science and is more easily understood by those with some background in theory-based evaluations. Some practitioners use the terms logic model and results chain interchangeably to refer to very similar tools and processes. But there does seem to be an implied distinction between the inputs-and-outputs focus of a results chain and the process focus of a logic model. There are some (see Wimbush and colleagues 2012) who avoid jargon terms like results chain in order to facilitate communication. Since results chains/logic models are a communication device, intended to support both the capture and articulation of the project's inputs, processes and outcomes, it's important that they make sense to
!"#$%&%'$()&*(+,-%.&#/&,&0$,-%1%1#2'$34'.',$-"'$&5$,12126&0$#6$,(('
7#26&%'$()&*(+,-%.&#/&8'9&4'.',$-"&:;1<'2-'&12&=',>%"&,2<&!#-1,>&?,$'
*80@5!
!"#$"#%&'()*+'),&,-)./'0)*1,.2&3#*45,$%,'+,$"-6'0*&2.-)$&7'),&,-)./'
),0*)2&8,&,-)./9&-::;'0)-.2"2"*#,)&
<-0-."2;'*('0-)2#,)&'(*)'=#*45,$%,',>./-#%,
A?5*B*5*:!
3#*45,$%,',>./-#%,'-2'2,-+'+,,2"#%&7'&2)-2,%".9+,,2"#%&7'?#":,)&"2;'5,.2?),&7'-#$'#-2"*#-5'+,$"-
:8CAC:D:85
@,):".,'?&,)&A),&,-)./'0-)2"."0-#2&<*55,-%?,&'
B"#,9C-#-%,)&'-#$'@,#"*)'C-#-%,)&D2/,)'),&,-)./,)&'-#$'-.-$,+".&
4:A?5*E8
@?00*)2'(*)'./-#%,&'2*'0*5".;'-#$'0)-.2".,@?00*)2'(*)'.*#2"#?,$'?&,'*('0)-.2".,9E-&,$'
"#2,):,#2"*#'8,F?,&2&'(*)'+*),'=#*45,$%,',>./-#%,'
-.2":"2",&
?=A8C:
G)-.2"2"*#,)&'$,:,5*0'��������������
=#*45,$%,',>./-#%,G)-.2"2"*#,)&'-#$'
0-)2#,)&'$,:,5*0'#,4'&="55&'"#'=#*45,$%,'
,>./-#%,H#2"."0-2,$'./-#%,&'
2*'0)-.2".,
*D0A?5
</-#%,&'2*'0)-.2".,'8,&,-)./'?&,'(?)2/,)',+E,$$,$'"#'0-)2#,)'
*)%-#"&-2"*#&G)-.2"2"*#,)&':-5?,$'(*)'
2/,")'),&,-)./'-#$'=#*45,$%,',>./-#%,'&="55&'
*80@5!
G)-.2"2"*#,)'=#*45,$%,'-#$',>0,)",#.,
<8!8AI8I@@'.-0-."2;-#$',>0,)2"&,C,#2*)&J'&?00*)2K,$".-2,$'),&,-)./'5,-:,'()*+'0-)2#,)'*)%-#"&-2"*#&B,&&*#&'5,-)#,$'
()*+'0),:"*?&'G8G&'0)-.2"2"*#,)9),&,-)./'
0)*%)-++,&
A?5*B*5*:!
8,&,-)./'2)-"#"#%8,&,-)./'$,&"%#'-#$'"+05,+,#2-2"*#
<*#2)"E?2"*#'H#-5;&"&'*('G8DG
:8CAC:D:85
G-)2"."0-2"*#'"#'2/,'),&,-)./'2)-"#"#%L>./-#%,'5,-)#"#%'4"2/'G8DG'2,-+'-#$'
0)-.2"2"*#,)&'L#%-%,+,#2'4"2/'*5$,)'0,*05,7'.-),)&'-#$'
.*55,-%?,&
4:A?5*E8
G)-.2"2"*#,)&':-5?,'2)-"#"#%'-#$'),&,-)./'
0)*.,&&G)-.2"2"*#,)&'$,:,5*0'0)-.2".,9),5,:-#2'
),&,-)./'G)*1,.2'2,-+'-$-02&'2*',#&?),'0)-.2"2"*#,)&'
-),'&?00*)2,$
?=A8C:
G)-.2"2"*#,)&'$,:,5*0'��������������"#'),&,-)./'
G)-.2"2"*#,)&'-#$'0-)2#,)&'$,:,5*0'#,4'.-0-."2;'(*)'),&,-)./'G)*1,.2'2,-+'5,-)#&'#,4'5,&&*#&'-E*?2'0)-.2"2"*#,)9),&,-)./
*D0A?5
K,:,5*0+,#2'*('0)-.2"2"*#,)&'-&'),&,-)./,)&
G)-.2"2"*#,)&'?&,'#,4'),&,-)./',:"$,#.,'2*'./-#%,'0)-.2".,
stakeholders. In my experience, the term 'logic model' is more appealing because it reflects the need to explain 'why' a project is having an effect. This adds an important dimension to the more traditional focus on inputs and outputs.

These tools are used in the beginning of the CA process to articulate the cause-effect issue to be addressed and to map the logic of the project. They are useful for supporting the development of a theory of change and offer a concrete way to detail the evidence that supports this theory and the contribution that is being claimed. In essence, these tools show the journey from inputs to impact. They show this pathway by tracing the inputs, processes and outcomes of a project. On the PROP project we used a six-stage logic model drawn from Montague (2009), adapted from Bennett (1979) and Patton (1977). This model was further refined by Morton (2013), the CA expert on the PROP project. The model includes six stages (an illustrative sketch follows the list):
1. Inputs
2. Activities
3. Engagement
4. Reaction
5. Changes to knowledge, skills or capacity
6. Changes to behaviour or practice
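As an illustration of how such a model holds together, the sketch below records each of the six stages as a small data structure, populated with entries paraphrased from the PROP account in this review. The representation, the field names and the indicator example are our own invention; neither Montague nor Morton proposes a coded format.

```python
# Minimal, illustrative representation of the six-stage logic model.
# Field names are invented; stage entries are paraphrased from the PROP
# account in this review; the risk example comes from the text above.
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    description: str
    indicators: list[str] = field(default_factory=list)  # added as the project progressed
    risks: list[str] = field(default_factory=list)       # risks to this step of the journey

prop_logic_model = [
    Stage("Inputs", "practitioner knowledge and experience; mentors' support"),
    Stage("Activities", "research training; research design and implementation"),
    Stage("Engagement", "participation in the training; engagement with older "
                        "people, carers and colleagues"),
    Stage("Reaction", "practitioners value the training and research process",
          risks=["limits to staff time"]),
    Stage("Changes to knowledge, skills or capacity",
          "practitioners and partners develop new capacity for research"),
    Stage("Changes to behaviour or practice",
          "practitioners use new research evidence to change practice",
          indicators=["documented changes in partner organisations (hypothetical)"]),
]

for stage in prop_logic_model:
    print(f"{stage.name}: {stage.description}")
```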
There are a variety of results chains/logic models that can be adapted to the needs of individual projects. A selection of different models is shown below.

Diagram 2: Results chain
Source: Montague (2011), adapted from Claude Bennett (1979); taken from Michael Quinn Patton (1997, p.235). [The diagram reproduces the seven-step results chain hierarchy, 'Results Strategy and Performance Information', shown as Figure 5 in Wimbush et al. (2012).]
As the diagram shows, there are important distinctions between the levels of control and influence a project can claim.
Diagram 3: Simple results chain. [The diagram shows a simple chain running from activities (consultations and promotions; assessments and delivery of funding, information and grants) through outputs and outcomes (services used by target communities) to impact (community health improved).]
Source: Montague, Porteous and Sridharan (2011)
Here is a logic model from the originator of CA, John Mayne. It includes a focus on the assumptions, supports, risks, and processes which underpin the transition from inputs to outputs to outcomes.

Diagram 4: Logic model. [The diagram reproduces Mayne's (2012) Figure 1, 'Displaying a theory of change': a quasi-linear chain from activities and outputs through immediate, intermediate and final outcomes, with assumptions and risks attached to each link, alongside unintended results and other explanatory factors (rival explanations).]
Source: Mayne (2012)
The key consideration in producing a logic model for a CA evaluation is whether it can capture the logic of the intervention, the risks to its success, assumptions which underpin its implementation and prospects for creating change, as well as the influencing factors and alternative explanations. Terminology is less important than the level of analysis the tool can facilitate.
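One way of picturing that level of analysis is to treat each link in the chain as carrying its own assumptions, risks and rival explanations, as in Mayne's Figure 1. The sketch below is our own illustration of such a link, not a format proposed in the CA literature; the example assumption is hypothetical, while the risk and rival explanations echo examples given earlier in this review.

```python
# Illustrative only: one causal link in a theory of change, annotated with
# the assumptions, risks and rival explanations that Mayne (2012) attaches
# to each link. Field names and example text are our own.
from dataclasses import dataclass, field

@dataclass
class CausalLink:
    from_result: str
    to_result: str
    assumptions: list[str] = field(default_factory=list)        # what must happen for the link to work
    risks: list[str] = field(default_factory=list)               # events that could break the link
    rival_explanations: list[str] = field(default_factory=list)  # other factors that could explain the result

link = CausalLink(
    from_result="immediate outcomes: practitioners gain research skills",
    to_result="intermediate outcomes: changes to practice",
    assumptions=["practitioners have opportunities to apply their skills (hypothetical)"],
    risks=["limits to staff time"],
    rival_explanations=["similar work by other organisations", "budget cuts in the sector"],
)
print(link)
```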
3. Contribution Story
Drafting a contribution story is another key mechanism in the CA process. Delahais and Toulemonde (2012) suggest "this is the core step where CA adds most value" (p.287). The contribution story is the final output of the CA process. As Mayne (2012) suggests, its production should be an iterative process in which the story is shared, verified and further developed (if necessary).

Contribution analysis is appealing because its language gives the sense of a journey that is told through a story. Stories are accessible and easy to follow. This format should make the process of 'logical argumentation', which underpins the CA process, easy to understand. In practice, there is quite a lot of detail underpinning a contribution story. Evaluations are evidence-rich and the contribution story needs to give an account of the theory of change, the details that support those claims as well as all the assumptions, risks, and contextual factors which are implicated in the project delivery.
Reflections from PROP: Logic modelling
The development of a logic model was facilitated by an expert in CA who guided me through the modelling process using a tool that outlines the key stages in impact, from inputs to changes in practice. At each stage, we considered the pathway to impact and the risks to this stage in the journey. As the project progressed I plotted in more detail and added indicators for each aspect of the contribution story.

We used this template to create a prospective theory of change at the beginning of the PROP project (June 2012). This was refined at three different points in the project (August 2012, November 2012 and January 2013) to include the iterative learning which was a result of the project's activities. I used the logic model as a linear map of the project that begins with the resources we brought to the project and the activities we would carry out. Engagement in these activities and the reaction of stakeholders was also mapped. This is followed by changes to capacity, knowledge and skills as well as changes to behaviour and practice.

PROP involved a range of stakeholders. We created four categories to clarify the different groups of people involved in the project: project team, practitioner-researchers, mentors, and organisational partners. Each of these groups had their own pathway to impact and our detailed logic model reflects those different journeys (a copy of our detailed logic model can be accessed here: http://blogs.iriss.org.uk/prop/contribution-analysis/).
Few authors have articulated the use of specific tools to support the generation of a 'contribution' narrative. Delahais and Toulemonde (2012) are an exception. Their article gives a useful insight into the development of a contribution story and the use of CA in practice. From their perspective, a contribution story should be made up of a series of contribution claims. Contribution claims begin with a change statement and articulate that the observed change did (or did not) occur due to specific aspects of the intervention. These statements include a brief acknowledgment of contextual factors that surround the change observed (or not observed) and indicate the strength of the contribution. Each contribution claim is underpinned by causal mechanisms which show the link between processes and change (a sketch of one such claim follows below).

Delahais and Toulemonde's (2012) work is a first attempt to unpack the key elements of a contribution story. Their concern is rigor. As they suggest, "the challenge is to make contribution claims that are based on evidence in a way that is rigorous, traceable, and credible" (Delahais & Toulemonde 2012, p.290). Done well, this refinement offers important insights into the degree of contribution that a project has made through a robust examination of competing explanations.

This detail is a welcome insight into the production of a robust CA evaluation. But mechanisms like 'contribution claims' and 'causal packages' reflect the evaluation expertise of the practitioners and do not necessarily chime with the practicality which users of CA seem to appreciate. As Sridharan and Nakaima argue, "CA achieves its strength by its clear structure consisting of a few common-sense steps" (2011, p.380). This "elegant simplicity", as Sridharan and Nakaima suggest (2011, p.380), could be undermined by the complexity of the approach undertaken by Delahais and Toulemonde (2012). For example, each contribution claim requires the triangulation of evidence to determine the degree of influence. These claims are developed on the back of 'causal claims' which also require rigorous testing of evidence to determine strength. It could be challenging for non-evaluation specialists to use CA if it requires this level of evaluation expertise. The following section outlines our approach to writing the contribution story on the PROP project. This is followed by a discussion of some of the key issues in the use of CA and other theory-based evaluations.
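To show the shape of a contribution claim as Delahais and Toulemonde describe it, the sketch below encodes one hypothetical claim drawn loosely from the PROP stories. The structure (change statement, intervention aspects, contextual factors, causal mechanisms, strength) follows their description above; the field names and example wording are ours, not theirs.

```python
# Illustrative only: one contribution claim structured along the lines
# Delahais and Toulemonde (2012) describe. Field names and example
# wording are invented for this sketch.
from dataclasses import dataclass

@dataclass
class ContributionClaim:
    change_statement: str            # the change that did (or did not) occur
    intervention_aspects: list[str]  # aspects of the intervention claimed to contribute
    contextual_factors: list[str]    # context surrounding the observed change
    causal_mechanisms: list[str]     # links between processes and change
    strength: str                    # indicated strength of the contribution

claim = ContributionClaim(
    change_statement="partner organisations changed aspects of their practice",
    intervention_aspects=["practitioner-led research projects",
                          "knowledge exchange activities"],
    contextual_factors=["similar work by other organisations",
                        "budget cuts in the sector"],
    causal_mechanisms=["practitioners acting as boundary-spanners"],
    strength="plausible contribution rather than sole attribution",
)
print(claim.change_statement)
```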
Reflections on PROP: Writing the contribution story
Writing the contribution story was unexpectedly challenging on the PROP project. We produced two contribution stories to reflect the nested theory of change. The first story showed the contribution of PROP's research training programme and argued that its contribution was practitioner development. Over the course of the research training and practitioner-led research process, practitioners became 'boundary-spanners' and occupied a hybrid position as both a researcher and practitioner. The second story reflected the impact of the new research evidence that was produced as part of the PROP project. It argued that boundary-spanning practitioners were able to generate innovative and on-going knowledge brokerage opportunities and highlighted some of the changes to practice at an organisational level that were occurring.

I noticed a stark difference between my approach to the first contribution story and the second. The first contribution story is a reflection of the practitioners' journey to become boundary-spanners. I was witness to, and one of the facilitators of, that journey. It was challenging to synthesize a process that I had responsibility for designing and which I found to be nuanced and complex. My own role in this process impacted my ability to abstract key elements of the journey into a linear narrative and it took me much longer to write than I expected. It's a very long document and I wonder whether the central thread of our narrative got lost in the detail of the report.

The second contribution story was noticeably easier to tell – I suspect this was due to my lack of direct involvement. I was not a witness to the changes in organisational practice and did not need to unpick my own journey from that of the practitioners. Instead, I was able to fit their narrative of change into the logic model framework.

These stories provide robust detail about the journey of the PROP project and the contributions we made, but they are limited in several ways. They do not test these claims against other competing explanations and there is very little detail about the influencing factors at an organisational level. We were limited in terms of time, and the scope of the CA on this project reflects those limits. Our contribution stories are available here: http://blogs.iriss.org.uk/prop/contribution-analysis/.
Key issues in using contribution analysis
Causation: The use of CA is thought to provide a rigorous alternative to experimental models of evaluation, which would typically use a counterfactual or control case. This is appealing in evaluations of social services, where an ethical case for controlled trials would be difficult to justify or where the phenomenon under evaluation does not lend itself to this approach, for example where the processes are complex and context-specific – as in the case of practitioner research.

There is still some debate about the degree to which a CA approach can account for 'cause and effect'. Definitive claims of attribution are difficult to make when an intervention is understood to occur within complex social systems. In the absence of a control, most theory-based methods like CA opt for strong evidence of contribution rather than direct attribution or cause and effect. Mayne suggests that the focus of a CA evaluation should be directed towards increasing understanding of a programme or intervention and accounting for 'what works'; it rarely "'proves' things in an absolute sense" (Mayne 2001, p.5). In seeking to show attribution, Mayne (2012, p.277) suggests that CA is able to confirm that:
• The expected results of the intervention occurred
• The logic of the intervention, including the necessary supporting elements which facilitate the progress from inputs to outcomes, has occurred
• Potential rival explanations have been accounted for and explained
He goes on to argue that “if one can verify or confirm a theory of change with empirical evidence, and account for major external influencing factors, then it is reasonable to conclude that the intervention in question has made a difference” (Mayne 2012, pp.271–272).
Ambiguous Processes and Terminology
CA suffers from a lack of consensus on the terms and processes that make up the method. A close reading of the limited literature on CA reveals a handful of differing claims and a range of overlapping terms. All this can lead to some confusion about the best term or approach to take. As Mayne suggests, "one result of the widespread interest in theory-based evaluations is that there is no agreement on the terms used and even some of the concepts. Nevertheless, there is consistency on the value of theory-based approaches" (2012, p.270).

An example of this confusion surrounds the process of developing a theory of change, one of the key mechanisms in the CA approach. Dybdal et al. (2010) and Delahais and Toulemonde (2012) have articulated challenges with the development of a theory of change and suggested that the development of alternative explanations, as well as the influencing factors which support the causal claims, proved difficult in practice. Interestingly, they differ on the solutions to this problem. Dybdal et al. (2010) suggest that evidence (step three in CA's six-step process) should be gathered simultaneously with the development of the theory of change. In contrast, Delahais and Toulemonde (2012) suggest that gathering evidence simultaneously, as was their practice in some evaluations, was detrimental to rigorous, critical thinking about plausible alternative explanations and influencing factors.
Another example involves the confusion around the terms implementation theory and programme theory. Weiss (1998), a pioneer of theory-based approaches, uses the terms to reflect the two stages of evaluation: the programme's design and implementation (implementation theory) and its impact (programme theory). In contrast, as Blamey and Mackenzie point out (2007, p.445), other CA practitioners are more likely to use the term programme theory to refer to the design and implementation processes; thus the confusion. Some CA practitioners use a mix of the terms. For example, Montague (2011) uses the term 'implementation theory' to capture the programme's design and 'change theory' to showcase its impact. Other CA practitioners avoid these terms entirely, using the terms 'theory of change' or 'logic model' to reflect the different theories instead. For example, the term 'nested logic model' (see Craig 2013; Wimbush et al. 2012) is sometimes used to encapsulate the range of processes under evaluation. This is akin to Weiss' use of the plural 'theories of change', which encapsulates the implementation and programme theories described above.
What's missing? Gaps in the Practice of Contribution Analysis
CA could be strengthened by a conceptualisation of different kinds of evidence or knowledge and how they might combine to support the CA approach. While Mayne acknowledges (2001, 2012) that CA can be used in combination with a range of methods, there remain some implicit tensions around the question of robustness and which methods produce the strongest results.

CA could also be developed by a fuller articulation of its theoretical and methodological underpinnings. Dybdal and colleagues suggest that CA needs to further elaborate its epistemological roots (2010, p.52). Practitioners are, perhaps, cautioned then to combine CA with an existing epistemological and methodological approach so that the rigors of those disciplines might shore up the weaknesses in CA.

Theory-based evaluations have been criticized for using underdeveloped theories of change. While CA seeks to address this by including an explicit focus on alternative influences and explanations for contribution, there remains a plurality in the design and level of detail included in the theory of change. There is no clear consensus on how detailed a theory of change needs to be in order for it to be robust enough to test (see Sridharan & Nakaima 2011).

Theory-based approaches have also been criticized for focusing on the implementation theory rather than the change theory (or programme theory) (see Dybdal et al. 2010, p.50). As Dybdal and colleagues suggest, CA offers a model in which both theories are embedded within the broader theory of change. However, there are fewer tested examples of robust change theory compared with those which exist for theories of implementation. Indicators and methods for generating evidence are likely to be different for the two theories and further thinking is required to determine how the two kinds of analysis can complement one another.
Challenges in Using Contribution Analysis
There are few peer-reviewed, practice-based examinations of the process of carrying out a CA evaluation. The article by Dybdal et al. (2010) and the special issue of Evaluation (2012) edited by Erica Wimbush are notable exceptions. These papers provide a range of useful insights into the challenges of using CA, but more practice-based accounts of this flexible process are required to fill the gaps identified above.

There may be limited time and scope to carry out the iterative process of testing and re-testing the theory of change which Mayne (2001, 2012) suggests. The practicality of evaluation means that the real-world pressure of presenting to an audience who may 'need' the results can limit the scope of the research (Dybdal et al. 2010).

Critics of the contribution analysis model ask whether the focus on contribution, rather than direct attribution, "is so weak that a finding of no contribution is highly unlikely" (see Patton 2012, p.376). Patton agrees that this is a legitimate concern and offers an eight-step metric for promoting rigor in contribution analysis to supplement his analysis (for more detail, see p.375 in Patton 2012). Patton suggests that the pathway to contribution which is developed as part of the evaluation can be considered sufficiently robust if multiple perspectives are included in the creation of the logic model, alternative explanations for change are thoroughly addressed and accounted for, and the process itself is reflective and iterative so as to be appropriately critical.
Conclusion: What makes Contribution Analysis Different?
Theory-based evaluation methods use a theory of change, which, as Mayne suggests, is "a logical model for an intervention showing a results chain of how outputs are expected to lead to a sequence of outcomes" (2012, p.271). People who use Contribution Analysis describe it as an innovative extension of theory-based evaluation methods. In particular, it is thought to:
• Offer a "more systematic" method for determining causal claims. It offers a structured six-step process which is designed to improve the robustness of theory-based approaches (Mayne 2012, p.271; White 2010)
• Ensure that the insight of key stakeholders is included throughout the modelling and testing of the theory of change
• Create a model for reflective engagement in project design, planning and delivery which in turn ensures better outcomes
• Provide a framework for understanding the complexity of an evaluation context and a pathway which separates the contribution of a project from other competing influences (Lemire et al. 2012; Patton 2012)
• Focus on both the project design and the implementation phases in order to give a thorough account of the processes through which a project gains impact (Dybdal et al. 2010)
• Offer insight into the degree of contribution that a project has made through a robust examination of competing explanations (Delahais & Toulemonde 2012)
References
Beeston, C. et al., 2012. Monitoring and Evaluating Scotland's Alcohol Strategy. 2nd Annual Report. Available at: http://www.understandingglasgow.com/assets/0001/1709/MESAS_2012_SecondAnnualReport.pdf.
Bennett, C., 1979. Analyzing Impacts of Extension Programs, Washington, D.C.: Department of Agriculture. Available at: http://www.robertsevaluation.com.au/2013-09-12-04-14-41/bennett-s-hierarchy.
Blamey, A. & MacKenzie, M., 2007. Theories of Change and Realistic Evaluation. Evaluation, 13(4), pp.439–455.
Craig, N., 2013. Seeing the wood and the trees: using outcomes frameworks to inform planning, monitoring and evaluation in public health. Journal of Public Health, 35(3), pp.467–74.
Delahais, T. & Toulemonde, J., 2012. Applying contribution analysis: Lessons from five years of practice. Evaluation, 18(3), pp.281–293.
Dybdal, L., Nielsen, S.B. & Lemire, S.T., 2010. Contribution Analysis Applied: Reflections on Scope and Methodology. Canadian Journal of Program Evaluation, 25(2), pp.29–57.
Funnell, S.C. & Rogers, P.J., 2011. Purposeful Program Theory: Effective Use of Theories of Change and Logic Models, San Francisco, CA: John Wiley & Sons.
Leeuw, F.L., 2012. Linking theory-based evaluation and contribution analysis: Three problems and a few solutions. Evaluation, 18(3), pp.348–363.
Lemire, S.T., Nielsen, S.B. & Dybdal, L., 2012. Making contribution analysis work: A practical framework for handling influencing factors and alternative explanations. Evaluation, 18(3), pp.294–309.
Mayne, J., 2001. Assessing Attribution through Contribution Analysis: Using Performance Measures Sensibly. Canadian Journal of Program Evaluation, 16(1). Available at: http://old.evaluationcanada.ca/site.cgi?s=4&ss=21&_lang=en&article=16-1-001.
Mayne, J., 2012. Contribution analysis: Coming of age? Evaluation, 18(3), pp.270–280.
Miller, E., 2012. Individual Outcomes: Getting Back to What Matters, Dunedin Academic Press. Available at: http://strathprints.strath.ac.uk/40062/.
Miller, E. & Cook, A., 2012. Talking Points: Personal Outcomes Approach, Joint Improvement Team. Available at: http://www.jitscotland.org.uk/wp-content/uploads/2014/01/Talking-Points-Practical-Guide-21-June-2012.pdf.
Montague, S., 2009. Transforming Evaluation: Strategic Results Management. Available at: http://www.pmn.net/wp-content/uploads/Structured-Contribution-Analysis.pdf.
Morton, S., 2013. Assessing Research Impact: A Case Study in Participatory Research, Edinburgh: Centre for Research on Families and Relationships. Available at: https://www.era.lib.ed.ac.uk/bitstream/handle/1842/6562/Briefing%2066.pdf;jsessionid=C2DD113D24689F4AC59CCCA7E078556C?sequence=1.
Patton, M.Q., 2012. A utilization-focused approach to contribution analysis. Evaluation, 18(3), pp.364–377.
Patton, M.Q., 1977. Utilization-Focused Evaluation, Thousand Oaks, CA: Sage Publications.
Scottish Government Social Research, 2012. An Evaluation of the Commonwealth Games 2014 Legacy for Scotland, Edinburgh: Scottish Government. Available at: http://www.gov.scot/Resource/0040/00408160.pdf.
Sridharan, S. & Nakaima, A., 2011. Ten steps to making evaluation matter. Evaluation and Program Planning, 34(2), pp.135–146.
Stocks-Rankin, C.-R. et al., 2013. Becoming a Boundary Spanner Through Practitioner Research, Edinburgh/Glasgow: CRFR/IRISS. Available at: http://blogs.iriss.org.uk/prop/contribution-analysis/.
Weiss, C.H., 1998. Evaluation: methods for studying programs and policies, Prentice Hall.
White, H., 2010. A Contribution to Current Debates in Impact Evaluation. Evaluation, 16(2), pp.153–164.
Wimbush, E., Montague, S. & Mulherin, T., 2012. Applications of contribution analysis to outcome planning and impact evaluation. Evaluation, 18(3), pp.310–329.