
Osvaldo Feinstein, ‘Use of Evaluations and the Evaluation of their Use’, Evaluation 8(4): 433–439 (2002). DOI: 10.1177/13563890260620621. Published by SAGE Publications (http://www.sagepublications.com) on behalf of The Tavistock Institute. The online version of this article can be found at: http://evi.sagepub.com/cgi/content/abstract/8/4/433


Use of Evaluations and the Evaluation of their Use¹

Osvaldo N. Feinstein, The World Bank, USA

All evaluations have a cost but not necessarily a value. Their value does not depend on their cost but on their use, and this article discusses factors affecting the use of evaluations. These factors could be taken into account in order to increase and improve the use made of evaluations and, consequently, their value. Two key issues (lags and the attribution problem) for the evaluation of the use of evaluations are discussed and a ‘possibilist’ approach to evaluation use is presented.

Keywords: dissemination; knowledge management; learning; use; value

Introduction

The issue that this article addresses is the very limited use made of evaluations. A simple conceptual framework (including a few formulae to make these ideas as clear as possible) will be proposed to deal with this issue. Different types of evaluation uses and users will be discussed, as well as some key issues regarding the evaluation of the uses of evaluations. This article uses concepts from economics and political economy in order to consider key evaluation issues from a perspective that allows for a more rigorous (yet still simple) approach than those often used to address the issue of evaluation use (e.g. see the articles in Caracelli and Preskill, 2000), leading also to a different set of questions and answers on this topic. The explicit way in which this conceptual framework has been articulated may facilitate its further development and also the evaluation of its use.

Types of Evaluation Use

There are different types of evaluation use that have been considered in the evaluation literature (Worthen et al., 1997), such as instrumental and persuasive, which are useful. Another use identified in the literature is ‘enlightenment’ (Marra, 2000; Weiss, 1998), a term which may be replaced by ‘cognitive’, as the former is not very enlightening. Also important is the distinction between evaluation for accountability and evaluation for learning, which sometimes are considered as mutually exclusive options (whereas in this article these last two uses are considered to be complementary, the former creating an incentive framework for the latter). It is also worth noting a crucial distinction between apparent and actual use of evaluations: between what seems to be the use (or the lack of use) of evaluations and the way(s) in which evaluations are actually used.

The ‘authorizing environment’ (AE), a concept introduced by Harvard’s John F. Kennedy School of Government (Moore, 1995), is important within this discussion. In the context of programs or projects, the AE includes those ‘principals’ that make fundamental decisions concerning the approval or cancellation of programs. One key use of evaluations is to persuade the AE, and the public at large, that a program should continue (with a new phase) or be cancelled, ‘legitimizing or de-legitimizing’ it by providing information concerning its performance and results.

This use of evaluations is neither for accountability nor for learning; it is neither instrumental nor necessarily enlightening, but it can play a crucial role in terms of whatever is evaluated if decision makers are persuaded to make a decision on the basis of information provided by the evaluation, which might confirm their own views on the program, project or policy evaluated. Evaluations, like audits (though frequently in a less formal way), provide a ‘seal of approval’. In addition, this use for persuasion can build trust in activities when doubts arise regarding the value of the corresponding activity (be it part of a program, project or policy implementation). It is interesting to note that whereas the use of evaluation for persuasion has only recently been acknowledged in the evaluation literature, there is a school of thought (and practice) in economics that highlights the key role of persuasion (Kirkhart, 2000; McCloskey, 1994).

Another basic distinction that helps in discussing evaluation use (and in finding ways to promote it) is between actual and potential use. What are the ‘barriers to use’? How can potential use of evaluations be transformed into actual use? These are some of the key questions to consider, rather than trying to develop a comprehensive typology of evaluation uses (that may not be used).

Cost, Value and Use of Evaluations

Given that evaluations are not subject to Say’s law, that supply creates its own demand (and only to a limited extent to ‘Yas’ law’ – the inverse of Say’s law – that demand induces supply), the gap mentioned before between potential and actual use might be very significant. An important challenge is to identify crucial factors that affect use and that can be turned into levers to promote it.

Let me start then with two key factors: the relevance of the evaluations and the quality of their dissemination. Relevance has to do with the extent to which an evaluation addresses issues that are considered of importance by the ‘clients’ of the evaluation (using a wide concept of clients, including not only those that have requested the evaluation but also some possible additional audiences). The quality of dissemination is the appropriateness of the means used to facilitate access to the evaluation. One can summarize the relations between these concepts through the following relation:

(1) U = R × D


where U is use, R is relevance and D is dissemination. U, R and D can be considered as variables and they can be rated with values from 3 to 0 with the following categories:

3 = highly satisfactory; 2 = satisfactory; 1 = partially satisfactory; and 0 = unsatisfactory.

Thus, if there is no relevance or no dissemination, there is no use. Figure 1 illustrates the relation between these variables.

[Figure 1. Relationship between Relevance, Dissemination and Use: Use = R × D, shown against axes Relevance (R) and Dissemination (D)]

Relevance and dissemination can also be considered in terms of supply and demand of evaluations: thus, relevance corresponds to demand and dissemination to supply. Relevant evaluations are those for which there is a demand. If there is no dissemination, there is no supply (beneath the heading Incentives and Capacities to Use Evaluations, there are some additional considerations about the use of demand and supply categories in this context).
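As a rough illustration of relation (1), the short sketch below (mine, not from the article; the function and variable names are illustrative assumptions) applies the four-point rating scale and shows that a zero rating on either relevance or dissemination drives use to zero.

    # Illustrative sketch of relation (1): U = R x D, using the four-point
    # rating scale (3 = highly satisfactory ... 0 = unsatisfactory).
    # Names are illustrative, not taken from the original article.
    VALID_RATINGS = {0, 1, 2, 3}

    def use_score(relevance: int, dissemination: int) -> int:
        """Return U = R x D for ratings on the 0-3 scale."""
        if relevance not in VALID_RATINGS or dissemination not in VALID_RATINGS:
            raise ValueError("ratings must be integers between 0 and 3")
        return relevance * dissemination

    print(use_score(relevance=3, dissemination=0))  # 0: no dissemination, no use
    print(use_score(relevance=2, dissemination=2))  # 4: both factors satisfactory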

Now we turn to the factors determining the quality of dissemination and the degree of relevance. For the latter, the choice of evaluation theme and especially the timing of the evaluation are crucial, so that evaluation findings are available when decisions are taken. Involving stakeholders also increases the perceived relevance of evaluations. Finally, the evaluation’s credibility is another crucial factor determining the relevance of the evaluation, and this credibility depends on the methodology used and the perceived quality of the evaluation team. Summing up:

(2) R = T × C

where R is relevance, T is timeliness and C is credibility (and a four-point scale can again be used for the three variables).

In the case of dissemination it is important to consider the way in which evaluations are presented (their user-friendliness) and the mechanisms or channels used for their communication. For the latter, the use of a knowledge management (KM) approach, through help desks and alternative ways of packaging the information, has been found useful. Through KM, evaluations (E) are used as inputs to produce user-friendly evaluation products (E*), facilitating the conversion of information into knowledge; for example, these may include brief notes on lessons learned, self-contained summaries and stories illustrating key points (Ingram and Feinstein, 2001).

Schematically:

E ➝ (KM) ➝ E*

where E is evaluation, E* is evaluation products and KM is knowledge management.

Partnering in evaluation also helps the dissemination process, both to the partners (by their involvement) and through them to others. Thus:

(3) D = P × M

where D is dissemination, P is presentation of the evaluation (user-friendliness) and M is the ‘means’, i.e. the mechanisms or channels used for distribution (including KM, help desks and other e-means).
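Chaining relations (1), (2) and (3) can also be sketched in code (again an illustration of my own, assuming each component is rated on the 0–3 scale; the article does not say how the product of two ratings is mapped back onto that scale, so the raw products are simply reported):

    # Illustrative chaining of relations (1)-(3):
    #   R = T x C  (relevance from timeliness and credibility)
    #   D = P x M  (dissemination from presentation and means/channels)
    #   U = R x D  (use from relevance and dissemination)
    # Component ratings use the 0-3 scale; products are reported unscaled,
    # since the article does not specify a rescaling rule (my assumption).

    def relevance(timeliness: int, credibility: int) -> int:
        return timeliness * credibility

    def dissemination(presentation: int, means: int) -> int:
        return presentation * means

    def use(relevance_score: int, dissemination_score: int) -> int:
        return relevance_score * dissemination_score

    # Example: a timely (3), credible (2) evaluation, well presented (2)
    # but with weak distribution channels (1):
    r = relevance(timeliness=3, credibility=2)    # 6
    d = dissemination(presentation=2, means=1)    # 2
    print(use(r, d))                              # 12; any zero factor gives zero use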

Incentives and Capacities to Use Evaluations

Another way to address the issue of evaluation use is to consider the incentives and capacities to use evaluation, as well as the demand and supply of evaluations. The previous framework assumes an institutional environment with given incentives to use evaluation, both positive and negative (‘carrots and sticks’).

Note that relevance is linked to both demand and the capacity to supply. If an evaluation is not relevant then there will not be any demand for it, and vice versa. However, even if an evaluation is relevant there could be no capacity to produce it (either directly or by contracting it out), or there could be no incentives to use existing capacities to produce this type of evaluation. In other words, if the evaluation is relevant then there will be an incentive to use it; it is therefore important that there is also an incentive to produce it.

On the other hand, the availability of relevant evaluations does not ensure that they will be used if the capacity to use evaluations is very limited. This type of evaluation-use capacity involves a capacity to search for relevant information, i.e. knowing where to search (this will be facilitated if there are good websites and portals such as the emerging global development gateway).

It also requires a capacity to use the evaluations, to highlight findings and lessons that are relevant for the specific issues being addressed (user-friendly evaluations facilitate this, but in the end the use depends on the users). It is important to distinguish between the capacity to produce evaluations and the capacity to use them (in the same way as has been done for household surveys), and to assess the capacity to use evaluations, providing appropriate support for its development when needed.

Therefore, in order to better understand (and to promote) the use of evaluations it is worth focusing on the issues of incentives and capacities. Sometimes the argument is conducted in terms of the supply and demand of evaluations, but the market for evaluations is quite imperfect (among other issues, there is frequently no relevant analogy for market prices) and, in addition, incentives have an influence both on supply and on demand, and the same is true of capacities. Though in some cases it might be useful to discuss use issues in terms of supply and demand, it might be more fruitful (and rigorous) to focus on incentives and capacities to produce, disseminate and use relevant evaluations.

Evaluation of Use

There are some typical pitfalls in the evaluation of the use of evaluations. One of them is due to the existence of lags, to a ‘gestation period’ for the occurrence of use. It might seem that there is no evidence of use and therefore no use. But this may just mean that the process leading from the production of the evaluation to its use takes time, and that the evaluation of evaluation use might have been premature. There are two risks: the first is of waiting sine die, always refraining from passing judgement because it might be that the evaluation will still be used (apocalyptic fallacy); the other risk is ‘killing’ an evaluation, arguing that it has not been used and that therefore it is useless, whereas it might be that it will be used in the future (premature killing).

Another source of pitfalls is the attribution problem: one can find things that have been done after the evaluation was completed in a way consistent with the evaluation’s recommendations. Is this evidence of use? It seems so, but it might be that there were other reasons why things were done in such a way and that this is merely a case of apparent use. The fact that there is consistency between the evaluation findings and recommendations and what was done after the evaluation is not necessarily an indication of use (post hoc fallacy).

However, it is also possible to completely neglect the role a specific evaluation had in the decision-making process and in achieving results. A particular evaluation could have played a contributing role, perhaps helping the decision makers reach the ‘tipping point’ (Gladwell, 2000) through the cumulative effect of evaluations.

Finally, in evaluating the use of evaluation it is worthwhile to refer to the factors mentioned before, such as relevance and dissemination and their determinants (timeliness, credibility, quality of presentation and means of dissemination, as well as incentives and capacities); low levels of these factors, or of incentives and/or capacities, can act as barriers to use. Furthermore, when evaluating use it is important to consider changes in knowledge, attitudes and behavior, bearing in mind lags and the attribution problem.

A ‘Possibilist’ Approach to Evaluation Use

It is generally recognized that evaluation activities generate knowledge that is significantly under-used. The distinction between actual and potential use results in greater focus on possible uses of evaluation that are more intensive and beneficial. One of these possible uses would be to identify what worked and what did not work in specific contexts (applying a sort of ‘realist’ approach to evaluations) in order to identify opportunities for effective interventions and to avoid those that are ineffective.


Codifying evaluation knowledge in this way facilitates learning (‘vicarious learning’) and the ‘mise en valeur’ of evaluations, using them as sources of insights for the design of interventions in similar contexts. Thus, the stock of evaluations could be a source of relevant ideas about effective actions, facilitating the process of learning from positive and negative experiences.

If evaluations are conducted with this aim in mind, they can be readily used in this way. Otherwise, it could still be possible, at least for a subset of these evaluations, to codify them using a ‘realist’ framework (Feinstein, 1998). This implies making the context of application explicit, to facilitate the use of evaluations as a source through which to identify effective interventions in similar contexts.

Conclusions

The key messages of this article are:

1. the value of evaluations depends on their use;
2. the use of evaluations should not be taken for granted; and
3. there are several things that can be done to promote greater and better use of evaluations. One of them is to develop a conceptual framework to guide our analysis of evaluation use and our actions that aim to improve it. The goal of this article has been to provide a simple and practical framework for this purpose.

Note

1. This article is a revised version of a keynote speech delivered at the IVth Annual Meeting of the Italian Evaluation Association. I wish to thank Nicoletta Stame for encouraging me to prepare this article, Luca Meldolesi for triggering the thoughts that led to a new section on the ‘possibilist’ approach, Frans Leeuw for his comment on ‘vicarious learning’ and Mita Marra for our dialogue on these issues.

References

Caracelli, V. J. and H. Preskill (eds) (2000) The Expanding Scope of Evaluation Use, New Directions for Evaluation 88. San Francisco, CA: Jossey-Bass.

Feinstein, O. N. (1998) ‘Review of “Realistic Evaluation”’, Evaluation 4(2): 243–6.

Gladwell, M. (2000) The Tipping Point: How Little Things Can Make a Big Difference. New York and London: Little, Brown and Company.

Ingram, G. K. and O. N. Feinstein (2001) ‘Learning from Evaluation: The World Bank’s Experience’, Evaluation Insights 3(1): 4–6.

Kirkhart, K. E. (2000) ‘Reconceptualizing Evaluation Use: An Integrated Theory of Influence’, in V. J. Caracelli and H. Preskill (eds) The Expanding Scope of Evaluation Use, New Directions for Evaluation 88, pp. 5–23. San Francisco, CA: Jossey-Bass.

McCloskey, D. N. (1994) Knowledge and Persuasion in Economics. Cambridge: Cambridge University Press.

Marra, M. (2000) ‘How Much Does Evaluation Matter?’, Evaluation 6(1): 22–36.

Moore, M. H. (1995) Creating Public Value. Cambridge, MA: Harvard University Press.

Weiss, C. H. (1998) Evaluation. Englewood Cliffs, NJ: Prentice Hall.

Worthen, B. R., J. R. Sanders and J. L. Fitzpatrick (1997) Program Evaluation: Alternative Approaches and Practical Guidelines, 2nd edn. New York: Longman.

OSVALDO N. FEINSTEIN is a manager of the Operations Evaluation Department at the World Bank, and an evaluator and economist with worldwide experience. He designed and supervised the Program for Strengthening Evaluation Capacities in Latin America and the Caribbean (PREVAL), and has worked as a consultant with several international organizations. He has also lectured and published on evaluation, economics and development. Please address correspondence to: 1818 H Street, NW, Washington, DC 20433, USA. [email: [email protected]]
