Meta-Analysis of Development Evaluations in 2006

MINISTRY FOR FOREIGN AFFAIRS OF FINLAND
DEPARTMENT FOR DEVELOPMENT POLICY

Evaluation report 2007:2


Evaluation

Meta-Analysis of Development Evaluations in 2006

Evaluation report 2007:2
ISBN 978-951-724-632-3 (printed)
ISBN 978-951-724-633-1 (pdf)
ISSN 1235-7618

Ministry for Foreign Affairs of Finland
Department for Development Policy

REPORT 2007:1 Finnish Aid to Afghanistan. ISBN: 978-951-724-634-7 (printed), ISBN: 978-951-724-635-4 (pdf), ISSN: 1235-7618

REPORT 2006:3 Review of Finnish Microfinance Cooperation. ISBN: 951-724-569-6 (printed), ISBN: 951-724-570-X (pdf), ISSN: 1235-7618

REPORT 2006:2 Evaluation of CIMO North-South Higher Education Network Programme. ISBN: 951-724-549-1, ISSN: 1235-7618

REPORT 2006:1 Evaluation of Environmental Management in Finland's Development Cooperation. ISBN: 951-724-546-7, ISSN: 1235-7618

REPORT 2005:6 Evaluation of Support Allocated to International Non-Governmental Organisations (INGO). ISBN: 951-724-531-9, ISSN: 1235-7618

REPORT 2005:5 Evaluation of the Service Centre for Development Cooperation in Finland (KEPA). ISBN: 951-724-523-8, ISSN: 1235-7618

REPORT 2005:4 Gender Baseline Study for Finnish Development Cooperation. ISBN: 951-724-521-1, ISSN: 1235-7618

REPORT 2005:3 Evaluation of Finnish Health Sector Development Cooperation 1994–2003. ISBN: 951-724-493-2, ISSN: 1235-7618

REPORT 2005:2 Evaluation of Finnish Humanitarian Assistance 1996–2004. ISBN: 951-724-491-6, ISSN: 1235-7618

REPORT 2005:1 Ex-Ante Evaluation of Finnish Development Cooperation in the Mekong Region. ISBN: 955-742-478-9, ISSN: 1235-7618

REPORT 2004:4 Refocusing Finland's Cooperation with Namibia. ISBN: 955-724-477-0, ISSN: 1235-7618

REPORT 2004:3 Evaluation of the Development Cooperation Activities of Finnish NGOs and Local Cooperation Funds in Tanzania. ISBN: 951-724-449-5, ISSN: 1235-7618

REPORT 2004:2 Evaluation of Finland's Development Cooperation with Bosnia and Herzegovina. ISBN: 951-724-446-0, ISSN: 1235-7618

REPORT 2004:1 Evaluation of Finnish Education Sector Development Cooperation. ISBN: 951-724-440-1, ISSN: 1235-7618

REPORT 2003:3 Label Us Able – A Pro-active Evaluation of Finnish Development Co-operation from the Disability Perspective. ISBN: 951-724-425-8, ISSN: 1235-7618

REPORT 2003:2 Evaluation of Finnish Forest Sector Development Co-operation, Part 2. ISBN: 951-724-416-9, ISSN: 1235-7618

REPORT 2003:2 Evaluation of Finnish Forest Sector Development Co-operation, Part 1. ISBN: 951-724-407-X, ISSN: 1235-7618

REPORT 2003:1 Evaluation of the Finnish Concessional Credit Scheme. ISBN: 951-724-400-2, ISSN: 1235-7618

REPORT 2002:9 Evaluation of the Development Cooperation Activities of Finnish NGOs in Kenya. ISBN: 951-724-392-8, ISSN: 1235-7618

REPORT 2002:8 Synthesis Study of Eight Country Programme Evaluations. ISBN: 951-724-386-3, ISSN: 1235-7618


This evaluation was commissioned by the Ministry for Foreign Affairs of Finland to Finnish Consulting Group Ltd. The consultants bear the sole responsibility for the contents of the report. The report does not necessarily reflect the views of the Ministry for Foreign Affairs of Finland.

Evaluation report 2007:2

Evaluation

Meta-Analysis of Development Evaluations in 2006

Pamela White
Tuija Stenbäck

MINISTRY FOR FOREIGN AFFAIRS OF FINLAND
DEPARTMENT FOR DEVELOPMENT POLICY

ISBN 978-951-724-632-3 (printed)
ISBN 978-951-724-633-1 (pdf)
ISSN 1235-7618
Cover design and layout: Anni Palotie
Printing House: Hakapaino Oy, Helsinki, 2008

Hard copies of the report can be requested from [email protected]

CONTENTS

PREFACE ..... iii
ACRONYMS ..... v
ABSTRACTS ..... 1
    Finnish ..... 1
    Swedish ..... 3
    English ..... 4
SUMMARIES ..... 5
    Finnish ..... 5
    Swedish ..... 8
    English ..... 11
    Spanish ..... 14
    Portuguese ..... 17
FINDINGS, CONCLUSIONS AND RECOMMENDATIONS ..... 21
1 INTRODUCTION ..... 29
2 OBJECTIVE AND PURPOSE ..... 29
3 BACKGROUND ..... 29
4 TASKS AND METHODOLOGY ..... 30
    4.1 Limitations of Meta-Analysis ..... 32
5 OVERALL PICTURE OF THE EXTENT OF EVALUATION ACTIVITY IN THE MFA OF FINLAND IN 2006 ..... 33
    5.1 Overview and Analysis of Development Assistance Related Evaluation Activity ..... 34
    5.2 Consideration of Compliance of the Approach of the Reports against the Overall Development Assistance Policy of the Ministry in 2006 ..... 40
    5.3 Assessment of Treatment of Finnish Policy and Cross-Cutting Issues in Projects and Evaluations ..... 43
    5.4 Assessment of Overall Quality of the Reports against Current Quality Criteria of the EU and the OECD/DAC ..... 45
6 ANALYSIS OF THE EVALUATIONS FOR ENHANCED ORGANISATIONAL LEARNING ..... 52
    6.1 General Assessment of the Quality of the Interventions and/or Evaluations ..... 52
    6.2 Comparative Analysis – Different Aid Modalities ..... 55
    6.3 Comparative Analysis – Africa, Asia and Latin America ..... 55
    6.4 Assessment of the Most Common Problems, Findings and Recommendations ..... 56
        6.4.1 Quality of the TOR ..... 56
        6.4.2 Great Variations in the Quality of the Reports ..... 56
        6.4.3 Evaluation Methodology and Reporting ..... 58
        6.4.4 MFA Evaluation Guidelines Based on Project Cycle Management (PCM) are Difficult to Apply when the Project has not been Planned Using Logical Framework ..... 59
        6.4.5 Time Allocations for the Evaluations (Particularly for the Field Work) have been Insufficient in Many Instances ..... 59
        6.4.6 Too Much Focus on the Past and Not Enough on the Future in Evaluations ..... 60
        6.4.7 Confusion on the Roles of the Evaluators ..... 60
        6.4.8 Discrepancy between Qualifications of the Long-Term TA and the Actual Requirements of the Project Work ..... 61
        6.4.9 Under-Spending of Budgets ..... 61
        6.4.10 Delays in Implementation ..... 61
        6.4.11 Lack of Sustainability and Difficulties with Institutionalisation of the Intervention ..... 62
    6.5 Analysis of Lessons Learned and Identification of Good Practice Procedures ..... 63
    6.6 Assessment of Evaluation Processes and Management – Were the Evaluation Findings Useful for Future Implementation of the Project/Programme? How was the Information Disseminated and Used? ..... 64
        6.6.1 Use of the Findings ..... 64
        6.6.2 Improved Definition of what is an Evaluation ..... 67
        6.6.3 Clarification of Roles and Responsibilities within the Ministry Staff ..... 67
    6.7 Analysis of Other Interesting and Innovative Factors or Procedures Occurring ..... 68
REFERENCES ..... 70
ANNEX 1 TERMS OF REFERENCE ..... 71
ANNEX 2 PERSONS INTERVIEWED OR CONTACTED VIA EMAIL ..... 76
ANNEX 3 LIST OF EVALUATIONS ANALYSED FROM 2006 ..... 77
ANNEX 4 QUALITY ASSESSMENT OF DECENTRALISED EVALUATIONS: METHODOLOGICAL FRAMEWORK ..... 80
ANNEX 5 DEFINITIONS OF EVALUATION TERMINOLOGY ..... 85
ANNEX 6 QUESTIONS FOR MFA STAFF REGARDING EVALUATIONS IN 2006 ..... 87


    PREFACE

In the Ministry for Foreign Affairs of Finland (MFA) the evaluation function is divided between the central evaluation unit of the Department for Development Policy and those units of the regional departments which administer development aid. According to the internal norm of the MFA, the former unit is responsible for wider evaluation concepts of a regional dimension, country programme entities, thematic evaluations, or groups of non-governmental organisations and the like, while the latter units perform the project-, organisation- or programme-specific, frequently mandatory evaluations of the project cycle, such as mid-term reviews or ex-post or ex-ante evaluations.

There has been fairly little horizontal exchange of evaluation results, and few efforts to draw common system-wide lessons from the evaluations performed or to widen the scope of the overall evaluation benefits and the use of results in institutional learning in the MFA. Thus, it was decided in early 2007 to perform a horizontal meta-evaluation of all evaluations performed in the previous year, 2006. The enclosed report, Meta-analysis of development evaluations in 2006, is the fruit of this decision. A total of 26 evaluations emerged from regional units, embassies and other units responsible for the administration of development cooperation; in addition, the Unit for Evaluation and Internal Auditing had performed three wider evaluations in 2006, making a total of 29 evaluations included in this meta-analysis.

After competitive bidding in the summer of 2007, a team of two experts from FCG International Ltd, Ms Pamela White and Ms Tuija Stenbäck, was selected.

Even though some material had already been collected and was ready to be handed over to the evaluators, it still proved a tedious job to collect all the necessary information from embassies around the world and from the respective units in the MFA. One lesson to be learned from this process was therefore that there is room for systematisation of the structure of the evaluation reports, and in particular the annexes, and that some guidance is needed on the structure of the reports themselves. Hopefully, the Evaluation Guidelines, which were finalised in 2007, can rectify part of these problems.

There were two major questions in the meta-analysis:

i) to what extent the evaluated programmes and projects had been in line with Finnish development policy (quality and impact of the development intervention), and

ii) to what extent the evaluation provides the required information on all aspects of Finnish development policy (quality of the evaluation itself). In the quality assessment, the nine evaluation criteria of the European Commission were used.

There are many interesting results and conclusions in this meta-analysis. One of them was that there is insufficient analysis of compliance with Finnish development policy and of the treatment of cross-cutting issues such as gender, environment, human rights and good governance. Yet it is admitted that in sectorally funded programmes, compliance with the country policies of an individual donor cannot be assessed. Relevance, coherence and strategic issues were fairly well dealt with, including within the prevalent socio-economic context. The meta-analysis showed, however, that the format and structure of the reports offered a very variegated picture, which will need attention in the future.

    Helsinki, 14 December 2007

Aira Päivöke
Director
Unit for Evaluation and Internal Auditing


    ACRONYMS

    CIMIC Civil and Military Cooperation Programme

    EAC East African Community

    EC European Commission

    EU European Union

    FiSNDP Finnish Support to the Namibian decentralisation process

FOMEVIDAS Rural Development Strengthening and Poverty Reduction Programme (Nicaragua)

    FSSP Forest Sector Support Programme (Vietnam)

    IOM International Organisation for Migration

    MFA, Ministry Ministry for Foreign Affairs of Finland

    MTR Mid-Term Review

    NGO Non-Governmental Organisation

OECD/DAC Organisation for Economic Cooperation and Development / Development Assistance Committee

    PCM Project Cycle Management

PROGESTION Municipal Management and Local Development Strengthening Programme (Nicaragua)

    PRORURAL Rural Development Sector Programme (Nicaragua)

    QTRDP Quang Tri Rural Development Programme (Vietnam)

    SARED Reproductive Health, Equity and Rights Project (Nicaragua)

    SWAp Sector-Wide Approach

    TA Technical Advisers / Technical Assistance

    TFF Trust Fund for Forests (Vietnam)

    TOR Terms of Reference

    TTHRDP Thua Thien Hue Rural Development Programme (Vietnam)

    UM Ulkoasiainministeriö

    YLE Finnish Broadcasting Company


Meta-Analysis of Development Evaluations in 2006

Pamela White and Tuija Stenbäck

Evaluation report of the Ministry for Foreign Affairs of Finland 2007:2

ISBN 978-951-724-632-3 (printed); 978-951-724-633-1 (pdf); ISSN 1235-7618

The full report is available at http://formin.finland.fi/

ABSTRACT (translated from the Finnish)

The meta-analysis was carried out by assessing the evaluations of Finnish development cooperation projects/programmes from 2006. The key questions were:

i) To what extent have the evaluated programmes and projects been in line with Finnish development policy (quality and impact of the development projects/programmes)?

ii) To what extent do the evaluations provide the required information on all factors related to Finnish development policy (quality of the evaluations)?

In all, 29 reports covering different forms of development cooperation were examined, with bilateral programmes and projects in the majority. A very wide range of sectors was analysed, with rural development most represented in the reports. While the quality of individual reports varied, no significant differences in quality were observed when the reports were grouped by sector, aid modality or geographical region.

The evaluations confirm that the programmes/projects financed by Finland are relevant. The lasting impact and sustainability of the projects were difficult for the evaluators to assess, because most of the evaluations were mid-term reviews or monitoring assignments, and the time allocated to the evaluations was short. Effectiveness was generally better in projects/programmes where external technical assistance was responsible for implementing the activities, although in some cases sustainability was not assured. Non-governmental organisations were generally effective in implementing projects, but their administrative practices often had shortcomings.

The most common problems, irrespective of aid modality, were under-spending of budgets, delays in implementation and a lack of sustainability; addressing them will require better project/programme planning in the future. The recommendations concerning the evaluation system relate to improving the terms of reference, quality control, and the dissemination of the information obtained from evaluations, as well as widening the discussion.

Key words: development cooperation, meta-analysis, evaluation


Meta-Analysis of Evaluations of Finnish Development Assistance in 2006

Pamela White and Tuija Stenbäck

Evaluation report of the Ministry for Foreign Affairs 2007:2

ISBN 978-951-724-632-3 (printed); 978-951-724-633-1 (pdf); ISSN 1235-7618

The full report is available at http://formin.finland.fi/

ABSTRACT (translated from the Swedish)

A meta-analysis was carried out to analyse evaluations of Finnish development assistance in 2006. The key questions were:

i) To what extent have the evaluated programmes and projects been in line with Finnish development policy (the quality and impact of the development programme)?

ii) To what extent do the evaluations provide the required information on all aspects of Finnish development policy (i.e. the quality of the evaluation itself)?

In all, 29 reports covering many different forms of aid were studied, with a focus on bilateral programmes and projects. The programmes represented a broad range of sectors, with the main emphasis on rural development. While the quality of the individual reports varied, there was no significant difference in the quality of the reports when they were examined in groups, by sector, aid modality or region.

The evaluations confirm the relevance of the interventions financed by the Ministry for Foreign Affairs. It was, however, difficult for the evaluators to assess the impact and sustainability of the interventions, since the majority of the evaluations were carried out at the mid-point of the projects or during monitoring visits while the project was still ongoing, with limited time for the evaluation work. Impact was generally better in projects where the technical assistance staff were responsible for implementing the activities, even though sustainability was not guaranteed.

The most common difficulties were under-utilisation of the budget, delays in project implementation and deficiencies in planning. Recommendations for improving the evaluation system included improvements in the preparation of terms of reference, quality assurance, and wider dissemination and discussion of the results.

Key words: development assistance, evaluation, meta-analysis


Meta-Analysis of Development Evaluations in 2006

Pamela White and Tuija Stenbäck

Evaluation report of the Ministry for Foreign Affairs of Finland 2007:2

ISBN 978-951-724-632-3 (printed); 978-951-724-633-1 (pdf); ISSN 1235-7618

The entire report can be accessed at http://formin.finland.fi/

ABSTRACT

A meta-analysis was carried out to analyse Finnish development assistance evaluations in 2006. The key questions were:

i) To what extent have the evaluated programmes and projects been in line with Finnish development policy (quality and impact of the development programme)?

ii) To what extent do the evaluations provide the required information on all aspects of Finnish development policy (quality of the evaluation itself)?

In all, 29 reports covering a range of aid modalities were studied, with bilateral programmes and projects in the majority. The sectors covered a wide spectrum, with rural development most represented. While individual quality varied, there was no significant difference in the quality of reports when grouped by sector, aid modality or region.

The evaluations confirmed the relevance of the interventions. Impact and sustainability were difficult for the evaluators to assess, as the majority of the evaluations were mid-term or monitoring missions, and the time for evaluation was short. Effectiveness was usually better in projects where the TA staff were in control of implementing the activities, though sustainability was then less assured. The projects implemented by Non-Governmental Organisations (NGOs) have usually been effective in carrying out the planned activities, but often had poor administration and financial management. The most common difficulties (irrespective of the aid modality) were under-spending of budgets, delays in implementation, and a lack of sustainability, requiring improvements in planning. The recommendations regarding the evaluation system concern TOR preparation, quality control, and widening the dissemination and discussion of findings.

Key words: development cooperation, meta-analysis, evaluation


SUMMARY (translated from the Finnish)

In the guidelines of the Ministry for Foreign Affairs of Finland (MFA) concerning the planning, monitoring and evaluation of development programmes, evaluation is defined as follows: "Evaluation is the systematic and objective assessment of the design, implementation and results of an ongoing or completed intervention. Its two main aims are i) to improve future aid policy and future interventions through feedback and lessons learned, and ii) to provide a basis for accountability, including the publication of information." The sharing of lessons learned and of information has not been sufficiently effective, and this meta-analysis aims to help improve the situation.

The aim of this assignment was to analyse the development cooperation evaluation reports commissioned by the Ministry for Foreign Affairs in 2006. The objective of the meta-evaluation was to increase the benefit obtained from evaluations and to make use of the results in organisational learning. A meta-analysis identifies and traces common factors across different studies, reports and evaluations. The purpose was to provide the Ministry with information on two main questions:

i) To what extent have the evaluated programmes and projects been in line with Finnish development policy (quality and impact of the development projects/programmes)?

ii) To what extent do the evaluations provide the required information on all factors related to Finnish development policy (quality of the evaluations)?

The meta-analysis first assessed the evaluation processes and then the results of the evaluations in relation to the recommendations made, the lessons learned and best practices. The purpose was to provide MFA staff and decision-makers with information that would help them to

1) start new development projects/programmes;

2) improve ongoing projects/programmes; and

3) improve the evaluation system.

In all, 29 reports were analysed, although in three cases two reports were closely interlinked. The reports varied in both length and complexity, from 5 to 135 pages. All reports were available both electronically and as paper copies, and three of them had been published as books. In most (26) cases the commissioning party was a regional unit of the MFA or an embassy. Only three evaluations had been commissioned and published by the MFA's evaluation unit. Most of the evaluations had been carried out in either Nicaragua or Vietnam; this is probably coincidental, reflecting the year 2006, but may also be related to the scale of activities in these regions. The reports covered many forms of development cooperation, although mostly bilateral projects and programmes. A very wide range of sectors was analysed, with rural development most represented in the reports. When grouped by sector, aid modality or geographical region, no significant differences were observed in the quality of the reports. Nor did the quality of the reports appear to be related to the composition of the team (the gender or nationality of its members), although the teams often seemed to have difficulties understanding the Finnish administrative system.

Obtaining the final versions of the reports, including their annexes, proved difficult. This probably reflects the MFA's difficulty in maintaining an up-to-date database of reports. Since all reports are nowadays available electronically, it would be important to create a user-friendly, centralised archiving system accessible to all staff, through which all the accumulated common knowledge could be shared.

Assessed against the quality guidelines of the European Commission (EC Quality Guidelines), the overall grade of the reports was "good". The weakest scores were in the categories appropriate evaluation design and well-founded conclusions. In some cases an individual evaluation report was rated "good", but the embassy did not consider that the report had met its requirements, perhaps because expectations had been set too high in relation to the resources available to the evaluation team.

Despite the fact that the MFA's guidelines on the planning, monitoring and evaluation of development programmes define a framework for the terms of reference (TOR) and the format of reports, these were often not followed. Most reports did not comply with the guidelines of the MFA, the EU or the OECD/DAC. In evaluations, the quality of the TOR is important: if the team has not been asked to assess something and no report format is given, it is difficult to demand high-quality results. On the other hand, the demands of the TOR should be reflected in the length of the evaluation period and should also be in proportion to the complexity of the project/programme itself.

The reports did not sufficiently analyse the compliance of the projects with Finnish development policy and its cross-cutting principles, such as gender, environment, human rights and good governance. Naturally, the treatment of important cross-cutting issues and of Finnish development cooperation policy can be required of an evaluation only if the project is financed by Finland alone. In joint reviews of sector programmes it is difficult to assess the realisation of an individual donor's development policy separately.

Relevance, coherence and strategic factors were generally dealt with well when analysing the political and institutional framework, the socio-economic situation and the project's links to other sectors. In general, the evaluations confirmed that relevance was good in the projects financed by Finland. The lasting impact and sustainability of the projects were harder for the evaluators to assess, because most of the evaluations were mid-term reviews or monitoring assignments, and the time allocated to them was very short. The project documents often did not define indicators with which impact could have been assessed. Sustainability was questionable in all projects/programmes, as very few reports showed any sign that the government of the recipient country was planning, or was able, to take on the financial responsibility that continuing the activities developed by the project would have required.

Effectiveness was also not dealt with in many reports. It was easier to assess in bilateral projects, whose documentation, logical frameworks, internal monitoring systems and clearer financial monitoring made results easier to observe. Several reports contained only descriptions of activities without any assessment of effectiveness. Indirectly, however, it could be noted that effectiveness was often hindered by the slow decision-making of recipient-country governments, which delayed project implementation (particularly in connection with structural reforms such as decentralisation of administration). Effectiveness was generally better in projects/programmes where external technical assistance was responsible for implementing the activities, although in some cases sustainability was not assured. Non-governmental organisations were generally effective in implementing projects, but their administrative practices often had shortcomings.

In general, the evaluations dealt with efficiency and financial matters rather inadequately, or not at all; in particular, comparative analysis had usually not been done. This may have been influenced by the fact that up-to-date financial information was not easily available for many projects/programmes, and the evaluators were generally not financial experts.

The problems of most projects, irrespective of aid modality, were under-spending of budgets, delays in implementation and a lack of sustainability. These are significant, if not unexpected, problems, and they require improvements in project/programme planning. The recommendations concerning the evaluation function itself relate to improving the terms of reference, quality control, and the dissemination of the information obtained from evaluations, as well as widening the discussion.


SUMMARY (translated from the Swedish)

According to the Finnish Ministry for Foreign Affairs' guidelines for the planning, monitoring and evaluation of projects, evaluation is "a systematic and objective analysis of the planning, implementation and results of an ongoing or completed intervention. The two main aims of carrying out evaluations are i) to draw lessons in order to improve aid policy and future interventions, and ii) to create a basis for accountability, which includes the dissemination of information to the public." This function of sharing lessons and spreading information has not been fully effective, and the meta-analysis is intended as a step in the right direction.

The assignment consisted of analysing the Ministry's evaluations of development assistance in 2006. The aim of the meta-evaluation was to give the results of the evaluations a broader field of application and broader use within institutional learning.

A meta-analysis is understood as a process in which different studies, reports and evaluations are analysed in order to identify and trace common measures. The meta-analysis of the evaluation reports provides the Finnish Ministry for Foreign Affairs with information on two important questions:

i) How consistent have the evaluated programmes and projects been with Finnish development policy (the quality and impact of the programme/project)?

ii) How useful and necessary is the information the evaluations provide about Finnish development policy and all its aspects (the quality of the evaluation)?

First the evaluation processes were analysed, and then the results in the form of recommendations, lessons learned and best practice examples. The purpose was to give the Ministry's staff and decision-makers consolidated information that helps to

1) initiate new development projects and programmes;

2) improve the implementation of ongoing projects and programmes; and

3) improve the evaluation system.

The evaluation covered 29 reports; in three cases, however, there were two reports on interlinked subjects. The reports analysed varied in length and complexity, from 5 to 135 pages. All were available in both electronic and printed form, and three had been published as books. The majority (26) had been commissioned by regional units and embassies. Only three had been commissioned (and published) by the Unit for Evaluation and Internal Auditing. Most of the evaluations were carried out in Vietnam and Nicaragua. This is most likely a random snapshot of events in 2006, but may also be due to the larger number of interventions ongoing in these regions. The analyses covered a range of different aid modalities, with the emphasis on bilateral programmes and projects. The sectoral spectrum was broad, with rural development as the largest sector. Grouping by sector, aid modality or region revealed no substantial differences in quality between the reports. Nor did there appear to be any significant connection between the quality of the reports and the composition of the evaluation teams (gender/nationality), even though the teams in several instances reported difficulties stemming from an insufficient command of Finnish administrative systems.

Substantial difficulties arose in collecting the final versions of the reports together with all their annexes. This reflects a common problem within the Ministry for Foreign Affairs regarding the archiving of data and reports. Since all reports today exist in electronic form, it is important to create a user-friendly central archive system to which the whole staff has access and where all lessons can be shared.

    Väsentliga svårigheter uppstod när det gällde att samla ihop de slutliga versionernaav rapporterna med samtliga tillhörande bilagor. Detta avspeglar en vanligproblematik inom utrikesministeriet när det gäller arkiveringen av data ochrapporter. Eftersom alla rapporter idag finns i elektronisk form är det viktigt attskapa ett användarvänligt, centralt arkivsystem som hela personalen har tillgång tilloch där alla lärdomar kan delas.

    Utvärderingarna analyserades enligt kommissionens kvalitetsriktlinjer och fickmedelvitsordet “bra” på alla områden. Resultaten var svagast inom kategorierna“lämplig utformning” och “säkra slutsatser”. I vissa fall bedömdesutvärderingsrapporten vara av hög kvalitet men ambassadpersonalen upplevdefortfarande att den inte mötte kraven. Detta kan eventuellt bero på attförväntningarna för vad en utvärderingsgrupp kan åstadkomma var orimligt höga. Iandra fall hade utvärderingsarbetet en otillräcklig tidsplan.

    Trots att det finns standardmodeller för uppgiftsbeskrivningar och rapporteri utrikesministeriets riktlinjer för planering, övervakning och utvärdering avprojekt har dessa modeller ofta inte använts. Största delen av rapporterna följerinte inrikesministeriets, EU:s eller OECD/DAC:s riktlinjer. I synnerhetuppgiftsbeskrivningen har stor betydelse för utvärderingen – om en grupp inte får enkonkret uppgift och en klar rapporteringsmodell är det orealistiskt att förvänta sigresultat av hög kvalitet. Dessutom måste uppgiftsbeskrivningen utformas med enkomplexitet som motsvarar utvärderingens längd och själva insatsens komplexitet.

    Det finns otillräckligt med analysdata i fråga om hur den finska utvecklingspolitikenhar observerats och hur ämnesövergripande frågor som t.ex. könsindelning, miljö,mänskliga rättigheter och goda styrelseformer har behandlats. Ämnesövergripandefrågor och mål som är av betydelse för den finska utvecklingspolitiken kan givetvisbara krävas i utvärderingar som gäller insatser där Finland är en direktbiståndsgivare. När det är fråga om gemensamma översyner av sektorsvistfinansierade program är det svårt att tillmötesgå individuella biståndsgivande länderspolitik.

  • 10 META-ANALYSIS OF DEVELOPMENT EVALUATIONS IN 2006

    Utvärderingen av insatsernas relevans och sammanhållning samt av olika strategiskafrågor har i allmänhet gjorts bra och med hänvisningar till det politiska ochinstitutionella ramverket, den socioekonomiska kontexten och projektets/insatsensförhållande till andra sektorer. De insatser som finansieras av det finländskautrikesministeriet bekräftas i allmänhet som relevanta. Insatsernas effekt ochhållbarhet var svåra att bedöma då huvudparten av utvärderingarna utfördes somuppdrag efter halva tiden eller övervakningsuppdrag, och under en begränsad tid.Ofta fanns det inga fastslagna indikatorer i projektdokumentet mot vilka effekternakunde mätas. I samtliga fall är hållbarheten tvivelaktig eftersom rapporterna innehöllmycket få tecken på att den mottagande regeringen planerar eller har möjlighet att taöver finansieringen och fortsätta med verksamheten efter att projektet har slutförts. Imånga av rapporterna finns det inga direkta hänvisningar till effektivitet.Effektiviteten var lättare att analysera när det var fråga om bilaterala projekt eftersomdessa hade bättre logiska och dokumenteringsramar, bättre internaövervakningssystem samt klarare finansiella spår som gör det lättare att kopplasamman resultat och orsaker. I många fall fanns det bara en beskrivning över vad somhar hänt hos institutionerna, utan någon klar effektivitetsbedömning. En indirektslutsats är att effektiviteten i många fall har hämmats av den mottagande regeringenslångsamma beslutsprocesser, som är nödvändiga för att projektet ska gå framåt (isynnerhet när det gäller att genomföra strukturella reformer som t.ex.decentralisering). Effektiviteten var i allmänhet bättre hos projekt där TA-personalenhade kontrollen över förverkligandet, även om det i somliga fall rådde oklarheter ifråga om hållbarheten. Projekt som har genomförts av enskilda organisationer (NGO)har i allmänhet förverkligats effektivt men det har däremot funnits administrativabrister. 
I allmänhet har behandlingen av effektivitetsaspekter och finansiella frågorvarit bristfällig eller obefintlig, i synnerhet när det gäller effektivitetsjämförelser.Detta kan bero på svårigheterna att få ut aktualiserad finansiell information från vissaprojekt/program eller på det faktum att utvärderarna oftast inte är finansiellaexperter.

    När det gäller insatserna själva var de vanligaste problemen underanvända budgetar,förseningar i förverkligandet och bristfällig hållbarhet, oberoende av biståndsmodell.Dessa brister, om även inte oförväntade, är av stor betydelse och kräver attplaneringen förbättras. Rekommendationerna för utvärderingsverksamheten gäller ihuvudsak förbättringar i utformningen av uppgiftsbeskrivningar och ikvalitetskontrollerna samt en större spridning av resultat och en bredare efterföljandediskussion.


    SUMMARY

The Finnish Guidelines for Programme Design, Monitoring and Evaluation state that “evaluation is a systematic and objective assessment of the design, implementation and outcome of an on-going or completed intervention. The two main purposes of evaluation are i) to improve future aid policy and interventions through feedback of lessons learned, and ii) to provide a basis for accountability, including provision of information to the public.” This function of sharing lessons learned and disseminating information has not been entirely effective, and this meta-analysis was a step towards improving it.

The objective of this assignment was to analyse the development assistance evaluations of the Ministry in 2006. The purpose of the meta-evaluation was to widen the scope of the evaluations' benefits and the use of their results in institutional learning.

A meta-analysis refers to a process by which common measures are identified and tracked across various studies, reports and evaluations. The meta-analysis of the evaluation reports provides the MFA of Finland with information on two major questions:

i) To what extent have the evaluated programmes and projects been in line with Finnish development policy (quality and impact of the development project/programme)?

ii) To what extent do the evaluations provide the required information on all aspects of Finnish development policy (quality of the evaluation itself)?

Firstly, the evaluation processes were analysed, and secondly the evaluation results, with regard to the recommendations, lessons learned and best practices. The intention was to provide MFA staff and decision-makers with consolidated information that would help them to

    1) initiate new development projects/programmes;

2) improve the implementation of on-going projects/programmes; and

    3) improve the evaluation system.

There were 29 reports studied, though in three cases there were two reports on linked subject matter. The reports ranged in length and complexity from 5 to 135 pages. All were available in electronic and hard copy, and three were published in book form. Most (26) were commissioned by regional units and embassies, with only three commissioned (and published) by the Unit for Evaluation and Internal Auditing. The majority of the evaluations were carried out in Vietnam and Nicaragua. This is probably coincidental, a snapshot of what occurred in 2006, but may also relate to the greater number of activities underway in these regions. A range of aid modalities was studied, with bilateral programmes and projects in the majority. The sectors studied covered a very wide spectrum, with rural development most represented. There was no significant difference in the quality of the reports when grouped by sector, aid modality or region. Nor did there appear to be a significant link between the quality of a report and the composition of the evaluation team (by gender or nationality), although difficulties were reported several times with teams not having a good grasp of Finnish administrative systems.

There were considerable difficulties in collecting the final versions of the reports along with all their annexes. This reflects a common difficulty within the MFA in maintaining an updated archive of data and reports. As all reports are now available electronically, it would be important to create an easy-to-use central archiving system that all staff could access and through which all lessons could be shared.

Using the European Commission (EC) Quality Guidelines, the average assessments of the reports under the different quality criteria were ‘good’ in all areas, with the weakest results in the categories of ‘appropriate design’ and ‘valid conclusions’. In some cases an evaluation report was of good quality, yet the Embassy staff felt it fell short of requirements, perhaps due to unreasonably high expectations of what an evaluation team can achieve. In other cases the time allowed for the evaluation was excessively short.

Despite the fact that the MFA Guidelines for Programme Design, Monitoring and Evaluation provide standards for TOR and report formats, these were often not used. Most of the reports do not follow the MFA, EU or OECD/DAC guidelines. The quality of the TOR for an evaluation is important: if the team is not asked to review something, and a format for reporting is not provided, it is difficult to expect high-quality results. At the same time, the level of complexity of the TOR should reflect the length of the evaluation and the complexity of the intervention itself.

There is insufficient analysis of compliance with Finnish development policy and of the treatment of cross-cutting issues such as gender, environment, human rights and good governance. Naturally, treatment of the significant cross-cutting issues and goals of Finnish development cooperation policy can only be required in evaluations of interventions funded directly by Finland. In Joint Reviews of a sectorally funded programme, compliance with individual donor countries' policies is difficult to achieve.

Relevance, coherence and strategic issues are usually dealt with well, with discussion of the policy and institutional framework, the socio-economic context, and the relationship of the project/intervention to other sectors. In general the evaluations confirm the relevance of the interventions funded by the MFA. Impact and sustainability were difficult for the evaluators to assess, as the majority of the evaluations were mid-term or monitoring missions and the time for evaluation was short. Often there were no indicators established in the project document against which to measure impact. The sustainability of all the interventions is questionable, because there were very few indications in the reports that the recipient government is planning, or able, to take over the financial burden of the project and continue the activities after the project is over. Effectiveness has not been dealt with directly in many reports. It was easier to track in the bilateral projects, where better documentation and logical frameworks, internal monitoring systems and clearer financial tracking made attribution of outcomes easier. In many cases there was only descriptive information on what has been happening in the institutions, without a clear assessment of effectiveness. Indirectly it can be concluded that effectiveness in many projects has been hampered by the recipient government's slowness in making the decisions necessary for the project to move forward (particularly in implementing structural reforms such as decentralisation). Effectiveness was usually better in projects where the TA staff was in control of implementing the activities, though in some cases sustainability was not evident. The projects implemented by NGOs have usually been effective in carrying out the planned activities, but often had poor administration. In general, the evaluators' treatment of efficiency and financial issues was weak or missing altogether, particularly in relation to comparative efficiency. This may be due to the difficulty of extracting up-to-date financial data from some projects/programmes, or to the fact that evaluators are usually not financial management experts.

The most common difficulties with the interventions themselves, irrespective of the aid modality, were under-spending of budgets, delays in implementation and a lack of sustainability. These are significant difficulties, though not unexpected, and require improvements in planning. The follow-up recommendations regarding the evaluations themselves mostly concern the necessary improvements in TOR preparation and quality control, and wider dissemination and discussion of findings.






    FINDINGS, CONCLUSIONS AND RECOMMENDATIONS

    Findings Conclusions Recommendations

    Recommendations for improved evaluation practices and modalities

    Inadequate archiving of evaluations (and all reports) by the MFA.

    Improved access to reports would improve shared learning across the organisation, and would contribute to institutional memory.

    Improved archiving system for all evaluations of the MFA in electronic format. Archived material should be final versions, including all annexes and TOR, and in all language versions in which it was produced. Clear guidelines and responsibilities for all staff should be noted on this issue.

    Great variations exist in the quality of the reports.

    The MFA staff have not played a sufficiently active role in quality assurance of evaluations. This situation could be rectified by closely monitoring quality and by actively collecting and sharing lessons learned with a wide range of stakeholders. This meta-analysis is a useful step and should be repeated and improved on annually.

    Regular analysis of evaluations to ensure that lessons learned are captured. Establishment of an electronic database for lessons learned, and evaluators to be asked to collect lessons learned into one file for archiving and sharing with other projects. Ensure only true monitoring or evaluation reports are archived there – not short-term consultancies.



    Evaluations and recommendations are often shelved, and no standard process for sharing results and follow-up appears to be in use.

    Important lessons learned and ways to improve aid quality are being missed. Evaluations are not only useful to the immediate intervention stakeholders, but have a wider role for learning in the organisation.

    Follow up on the recommendations of evaluations to see how they have been implemented.

    Usefulness could be improved by establishing and using guidelines for dissemination, discussion and electronic archiving of the results. In addition, recommendations with a timetable for making changes could be posted on the same internal MFA database, and revisited to follow progress.

    Not all evaluation teams have understood the policy and administrative framework of the intervention.

    The advantage of utilising evaluators without experience of the Finnish system could be that fresh ideas are generated. However, this appears to be outweighed by unusable recommendations in some cases.

    Experience with Finnish Development Policy is considered important, and this should be emphasised when preparing TOR and selecting teams.

    TOR quality is variable – at times there are deficiencies (such as not asking for crucial elements of Finnish Development Policy to be studied). At other times, they are too complex in relation to the time provided.

    The evaluation report can only be expected to be as good as the TOR it responds to.

    Provide clear TOR. Training may be needed for the regional unit staff. Provide a format for the report when preparing the TOR – in this way it is more likely that the evaluation will fulfil the requirements. Restrict the scope to what is manageable. Begin the preparation of the TOR earlier, with assistance from the Evaluation Unit and sectoral advisers.



    Inadequate resources have been provided for some evaluations.

    Very short field time provided in several evaluations has led to difficulties in quality data collection and analysis.

    The Evaluation Unit could recommend the breakdown of time required for different phases of evaluations, depending on their level of complexity.

    On some occasions, evaluations have concentrated on either describing or criticising the work done, but not on providing good solutions for the future.

    Some of the evaluations left a negative feeling in the programme under evaluation, without producing clear guidelines for improvement.

    The focus of evaluations should be how to improve things for the future – not simply criticism of the past. If the evaluators consider that there is a better method to do something, they should provide examples for learning from equivalent projects.

    Some evaluations produce long lists of conclusions and recommendations that are confusing and hard to remember.

    Clear presentation of a limited number of recommendations that can be developed into an action plan with due dates and persons responsible will be most likely to produce results.

    Only a limited number of clear recommendations should be included in the report.

    Finnish added value was referred to in the 2004 Development Policy and is reinforced in the 2007 Policy, but at present there is no requirement to evaluate it.

    If development interventions and evaluations are to reflect the Finnish Development Policy, including the focus on Finnish added value and cross-cutting issues, clear guidelines are needed.

    Consider how ‘Finnish added value’ and other cross-cutting issues can be evaluated in the future.



    The existing evaluation guidelines were not applicable to many of the evaluations, as the evaluated interventions were not designed with a logframe and often had multiple funding sources.

    Attribution of results, tracking of finances, and clear reporting and evaluations are much more difficult with the newer aid modalities. Particularly in the case of joint donor reviews, it cannot be expected that MFA Development Policy and Finnish evaluation guidelines will be applied. Discussion is needed on TOR writing and how to carry out evaluations of this type.

    Develop specific recommendations for evaluation procedures of newer aid modalities, e.g. SWAps, budget support, NGO programmes, joint reviews, etc. It may be necessary to include a financial expert in these review teams in order to track the use of the funds.

    Uncertainty exists regarding the role of the sectoral advisers and the central Unit for Evaluation.

    The quality of TOR and evaluation reports will be improved if the necessary guidance can be provided by the sectoral advisers and the Unit for Evaluation – though it is recognised that there is insufficient staff time available for this.

    Clarification of the roles and responsibilities of all staff, including Embassy staff, MFA programme staff, the sectoral advisers and the Evaluation Unit.

    There is inadequate understanding of evaluation and the role of evaluators among both MFA staff and consultants carrying out the evaluations.

    Training is needed for both MFA staff and consultants, in order to improve both the quality of the evaluations and the understanding of each party's role.

    Continuing training in evaluation should be provided for both MFA staff and consultants. The new Monitoring and Evaluation Guidelines will be a useful tool but will also require training in their use. Provision of reporting formats and good briefing of consultants is vital prior to beginning an evaluation.



    Issues, best practices and lessons learnt regarding development cooperation

    Most evaluations report delays in implementation (seen in all aid modalities evaluated).

    This suggests that the interventions may be planned over-ambitiously, and that, particularly in the start-up period, time is needed for fine-tuning of the plan and processes, and for understanding of new approaches.

    Improved planning is needed (for instance, of framework documents). Adequate planning time is vital to ensure that the activities will be carried out effectively – pressure for implementation is pointless in the first year.

    Under-spending is a common finding, whether the intervention modality is a bilateral project, bilateral funding using local processes and funding mechanisms, or basket funding/sectoral modalities.

    It seems that budgeting is over-ambitious at the planning stage, leading to continuing difficulties and pressure to spend committed budgets.

    As above, improved planning and less ambitious budgets are needed.

    Poor management/administration skills are evident in some interventions, both on the part of long-term TA and third-party implementing organisations.

    Discrepancies between the qualifications of the long-term TA and the actual requirements of the project work are leading to problems in management.

    Clearer guidelines and training are also needed for implementing organisations, particularly if they are inexperienced in MFA procedures.

    Evaluation criteria for selection of TA should consider a mix of practical administration skills as well as sectoral qualifications.

    Prepare clear guidelines for administration of Finnish programmes, or consider contracting a consultant to set up the system and provide on-the-job training in the case of interventions without TA.


    Aid modality – Common findings – Lessons learned and recommendations collected from evaluation reports

    Bilateral projects: Good treatment of Finnish policy goals and cross-cutting issues in most cases.

    Good implementation at local level.

    Reasonable administrative process in most cases.

    Some difficulties with administrative skills of TA (versus sectoral professional skills).

    Lack of institutional memory, due to turn-over of TA, had a negative impact in some projects.

    Delays in expenditure.

    Delays in implementation. Problems with sustainability and institutional setting in some cases.

    Political and social uncertainties affected some projects negatively, especially in Kosovo.

    Planned institutional links with Finland have not always eventuated.

    Integration between the Programmes' administrative planning and budgeting process and the local process is often limited due to differences in budget timetables and planning systems.

    Budgets of the line ministries are often limited and hence the public expenditure remains meagre despite pledges.

    Improved planning and less ambitious budgets are needed (for instance, of framework documents). Adequate planning time is vital to ensure that the activities will be carried out effectively – pressure for implementation and expenditure is pointless in the first year.

    Evaluation criteria for selection of TA should consider a mix of practical administration skills as well as sectoral qualifications.

    Institutionalising a programme into a structure that does not have the competence, let alone the mandate, for the administrative tasks will not produce the intended sustainability.

    Institutional linkages should be strengthened with local organisations (and, where appropriate, with relevant Finnish institutions, e.g. higher education).

    Complex project structures have resulted in too ambitious objectives and delays in some projects.

    Focused and simple annual priorities with specific targets and indicators are needed.



    Sectoral/basket funds: Delays in expenditure.

    Delays in implementation, particularly in reaching local levels.

    Little or no focus on cross-cutting issues, nor on Finnish sectoral policy aims in the country.

    Difficulties with reporting and formal accounting discharge for funds.

    Harmonisation is proving difficult, as many donors have their own procedures in parallel to the local implementing institution – decision-making processes are sometimes cumbersome and a cause of significant delays in approving funding.

    Donor support is usually focused on the investment budget (for sound policy reasons), not on administration and staff expenses, and these are often where bottlenecks exist.

    Insufficient operational funding by the recipient government may indicate the low importance of the sectoral aims.

    Insufficient local capacity for implementation and monitoring.

    Finding sufficient consensus among the stakeholders proves to be difficult – e.g. selection of recommendations for implementation.

    Good ownership in some cases, but not consistently.

    Support is needed for institutional development and capacity building, including monitoring systems.

    Need to narrowly define eligible activities that best use available resources to strategically support the objectives of the sector.

    Clarity regarding funding commitments from donors is needed, to allow future-oriented planning.

    Recommendations produced by reviews need follow-up by donors.

    Improved coordination in decision-making by donors is needed.

    Lack of local capacity is hindering progress and donors should focus on addressing this.



    NGO / church / other third-party implementing agencies: Some administrative problems in implementation, due to lack of experience of staff, lack of time for administration, and lack of logical frameworks.

    Some problems with reporting and formal accounting discharge of funds.

    Overwhelming needs demonstrated in calls for project funding could not be adequately responded to.

    Improved monitoring and information sharing is needed.

    Only limited participation by stakeholders, and difficulties in achieving sustainability.

    Cultural traditions are not supportive of financing NGOs in some countries with socialist histories.

    In one case, the uncoordinated situation among the donors and NGOs, with weak links to the governments, made it difficult to evaluate the results/impacts.

    When applications for NGO funding exceed available funds, priority should be given to those sectors in which Finland has something special to offer (added value).

    Logical framework work plans and budgets should be prepared. The MFA should ensure that the standard format is used for monitoring and that the monitoring reports are timely and self-explanatory, including the financial reporting.

    The third-party implementing agency (NGO or other) has in some cases not been knowledgeable regarding (or committed to) the demands of the MFA on sustainability, in the form of transfer of skills/knowledge to the relevant institutions and transfer of financial responsibility to the recipient organisations.

    Professional capacity should be combined with an understanding of the MFA policy of development cooperation. Sustainability is problematic, as transfer of the responsibilities to the government is not as feasible in humanitarian aid as in typical bilateral projects.

    Better planning of interventions and establishment of a joint management and monitoring system with representation from all stakeholders would consolidate the operations.


    1 INTRODUCTION

    This is a draft report from the first meta-analysis carried out under the auspices of the Ministry for Foreign Affairs of Finland. The report begins with a description of the objectives, background and methodology of the study. It then summarises the overall picture of development intervention evaluations during 2006 (by sector, modality, region, etc.) and provides a quality assessment against EU and OECD/DAC quality criteria. The report summarises specific findings, both in terms of the evaluations and of the development interventions being studied, and provides a list of recommendations for future action.

    2 OBJECTIVES AND PURPOSE

    In accordance with the TOR, the objective of this assignment is to analyse the development assistance evaluations of the Ministry in 2006. The purpose of the meta-evaluation is to widen the scope of evaluation benefits and the use of results in institutional learning – in other words, to offer a system-wide opportunity to learn from past experience and to extract good practices for improving the quality of development assistance. The users of the results of the meta-evaluation are the staff and decision-makers of the Ministry and the constituency implementing development assistance.

    3 BACKGROUND

    Evaluations of the Ministry for Foreign Affairs of Finland (MFA) are carried out through two mechanisms:

    a) Central evaluations, carried out by the Unit for Evaluation and Internal Auditing. These are usually thematic or sectoral, such as evaluations of cross-cutting issues, or of specific instruments or modalities.

    b) Decentralised evaluations, carried out under the auspices of the regional units or embassies, such as mid-term reviews, external monitoring reports, final evaluations and assessments, which target specific projects or restricted topics of the development field. Until recently, the usual interventions have been bilateral projects or programmes.

    In this case, the materials subjected to the meta-analysis came from both of the abovementioned sources (3 central evaluations and 26 decentralised). A meta-analysis refers to a process by which common measures are identified and tracked across


    various studies, reports and evaluations. The meta-analysis of the evaluation reports provides information for the MFA Finland on two major questions:

    i) To what extent have the evaluated programmes and projects been in line with Finnish development policy (quality and impact of the development project/programme)?

    ii) To what extent do the evaluations provide the required information on all aspects of Finnish development policy (quality of the evaluation itself)?

    These two broad evaluation questions form the framework for the meta-analysis. Firstly, the evaluation processes were analysed, and secondly the evaluation results, with regard to the recommendations, lessons learnt and best practices. Comparative analyses have been made according to the regional, sectoral and aid-modality differences of the projects or programmes evaluated. The MFA staff and decision-makers need consolidated information that helps them to

    a) initiate new development projects/programmes;

    b) improve the implementation of on-going projects/programmes;

    c) improve the evaluation system.

    4 TASKS AND METHODOLOGY

    The framework for these evaluations was the overall development assistance policy of the Ministry in operation in 2006 (Government Resolution 5.2.2005). A further point to note is that new draft Monitoring and Evaluation Guidelines will be released in late 2007 and put into trial use. However, the reports considered in this meta-analysis were prepared using the previous guidelines (last updated in 2000).

    The MFA Guidelines in use during 2006 defined the content of an evaluation (or external monitoring) report as follows:

    EVALUATION REPORT FORMAT

    1 Executive summary

    2 Subject of the evaluation, including brief history of the intervention, changes in the project environment and their effects on the intervention, etc.

    3 Background of the evaluation: its purpose, methodology used, limitations, etc.

    4 Evaluation issues

    4.1 General evaluation issues:


    4.1.1 Correspondence with the present needs, i.e. relevance

    4.1.2 Impact of the project

    4.1.3 Extent of achievement of project purpose, i.e. effectiveness

    4.1.4 Assessment of the efficiency of the implementation

    4.2 Specific evaluation issues

    5 Compatibility and sustainability

    5.1 Compatibility with the strategic goals

    5.2 Policy environment

    5.3 Economic and financial feasibility

    5.4 Institutional capacity

    5.5 Socio-cultural aspects

    5.6 Participation and ownership

    5.7 Gender

    5.8 Environment

    5.9 Appropriate technology

    6 Conclusions and recommendations: suggestions for operational improvements and developmental lessons learnt

    The methodology followed the TOR (Annex 1). The team made an initial analysis and prepared checklists at the start, and the reports were then divided between the two experts for analysis. Following the initial analysis phase, the experts shared findings and results, then looked at gaps and planned additional interviews as necessary.

    As an initial step of the meta-analysis, the evaluations were sorted according to basic characteristics – country, region, aid modality, sector, type of report and unit responsible.
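This sorting and tallying step can be sketched in a few lines of code (an illustrative sketch only; the field names and sample records below are hypothetical and not drawn from the meta-analysis data):

```python
from collections import Counter

# Hypothetical evaluation records: in the meta-analysis each report was
# classified by characteristics such as region, aid modality and report type.
# The sample data here is illustrative only.
evaluations = [
    {"region": "Asia", "modality": "bilateral project", "type": "mid-term review"},
    {"region": "Latin America", "modality": "bilateral project", "type": "final evaluation"},
    {"region": "Balkans", "modality": "sectoral/basket fund", "type": "joint review"},
    {"region": "Asia", "modality": "NGO programme", "type": "external monitoring"},
]

def tally(records, key):
    """Count how many evaluation reports fall into each category for a given key."""
    return Counter(r[key] for r in records)

by_region = tally(evaluations, "region")
by_modality = tally(evaluations, "modality")

print(by_region)
print(by_modality)
```

A simple cross-tabulation like this makes it easy to see, for instance, how many mid-term reviews of bilateral projects fall within each region before the qualitative analysis begins.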

    The Quality Assessment Guidelines for evaluations of the European Commission were utilised as a framework for the quality assessment of the evaluations (see Annex 4). Initially, the older version of the Quality Assessment checklist was trialled. While useful, it was decided that in practice the newer assessment criteria were more appropriate, as well as being more flexible for the different types of evaluations under consideration. For instance, the earlier version referred to the logical frameworks of the projects. In many cases it appears that the projects/activities under evaluation may not have had a logframe, and therefore different criteria should be used. The OECD/DAC Evaluation Quality Standards (2006) were also consulted, and applied both to the individual evaluations and to the meta-analysis itself. Both quantitative and qualitative assessment of the reports was undertaken. Specific findings from the reports or comments on the process were noted.

    The members of the team met (or held telephone discussions and email exchanges)


    with the staff of the Unit for Evaluation and Internal Auditing, and also with representatives of some regional units – specifically, Vietnam, Nicaragua and the Balkans, as these were the regions/countries where the majority of the evaluations took place. A series of questions was asked regarding the process and usefulness (Annex 6). Additional documents were received from Nicaragua and Vietnam dealing with the responses of the embassies and the follow-up to the evaluations. These were extremely useful in understanding the context of the evaluations and the way the results and recommendations were used. Written questionnaires were distributed to many desk officers, advisers and embassy staff; however, not many responses were received via this method.

    In addition, some feedback was obtained from interviews with representatives of consulting companies implementing the projects under evaluation, and representatives of counterpart organisations in-country.

    4.1 Limitations of the Meta-Analysis

    This was the first meta-analysis of evaluations carried out by the MFA Finland and, as such, is a learning process for all concerned (although an earlier synthesis study, covering evaluations and reviews from 1988–1995, was carried out in 1996).

    As Finnish development circles are quite small, it is difficult to be truly independent from the projects or evaluations carried out. While neither of the team members nor the contracted company has participated in any of the evaluations under analysis, we did have connections with (and insider knowledge of) some of the projects that were evaluated. While this could be a limitation, it also means that having a good knowledge of Finnish