

Accountability and learning
developing monitoring and evaluation in the third sector

Research report

by Jean Ellis with Tracey Gregory


Charities Evaluation Services (CES) was established as an independent charity in 1990. Our mission is to strengthen the voluntary and community sector by developing its use of evaluation and quality systems.

CES supports thousands of voluntary and community organisations of all sizes every year. We also work with funders and government organisations.

We provide training, consultancy, technical support, advice and information, and external evaluations. These services are intended to help organisations become more effective by adopting appropriate quality assurance and monitoring and evaluation systems.

CES produces a range of publications including PQASSO, the practical quality assurance system for small organisations.

Charities Evaluation Services
4 Coldbath Square
London EC1R 5HL

Website: www.ces-vol.org.uk
Tel: 020 7713 5722
Fax: 020 7713 5692
Email: [email protected]

Charity Number: 803602

Charities Evaluation Services is a company limited by guarantee.

Registered in England Number: 2510318

Copyright and trade mark

Unless we say otherwise no part of this publication may be stored in a retrievable system or reproduced in any form unless we first give our written permission. We will consider requests from organisations to adapt this publication for use within a specific sector or organisational context, but we have the right to decide the terms upon which such adaptations may be permitted.

© Charities Evaluation Services, September 2008

ISBN 978-0-9555404-7-9

Published by Charities Evaluation Services

Design by Wave
Photography by Slater King
Edited by Helen Martins, Portfolio Publishing
Printed by Datumpress Ltd


Accountability and learning: developing monitoring and evaluation in the third sector

Research report

Written by Jean Ellis with Tracey Gregory for Charities Evaluation Services

Related publication

Ellis, J (2008) Accountability and Learning: Developing Monitoring and Evaluation in the Third Sector: Research briefing, Charities Evaluation Services, London. ISBN 978-0-9555404-8-6


Acknowledgements

Sally Cupitt, Shehnaaz Latif, Diana Parkinson and Avan Wadia for their contributions to the research.

Deena Chauhan and Josie Turner for their technical support.

Many thanks are given to all the individuals and third sector and funding organisations that participated in the field work for this study.

Thanks also to the following members of the study advisory group and other readers who contributed their comments and suggestions:

Jean Barclay, freelance consultant

Ben Cairns, Institute for Voluntary Action Research

Sioned Churchill, City Parochial Foundation
Sally Cupitt, Charities Evaluation Services
Jake Eliot, NCVO
Fleur Fernando, North Warwickshire Council for Voluntary Service
Maxine Fiander, West Norfolk Voluntary and Community Action
Jenny Field, the City Bridge Trust
Sarah Mistry, Big Lottery Fund
Tania Murrell, Hillingdon Association of Voluntary Services
Colin Nee, Charities Evaluation Services
Lisa Sanfilippo, new economics foundation
Helen Simons, University of Southampton
Austin Taylor-Laybourn, Comic Relief
Alex Whinnom, Greater Manchester Centre for Voluntary Organisation

We are grateful to the City Bridge Trust, the Performance Hub, the City Parochial Foundation and Barclaycard for their financial support for this research, and the Office of the Third Sector for their core support.


Foreword

The City Bridge Trust has a proud record of strengthening the third sector by developing its monitoring and evaluation practice. We were pleased to join with other funders to support this important research project from Charities Evaluation Services, the first national study in 20 years to take stock of both the policy environment which has driven changes in evaluation and the response ‘on the ground’.

Twenty years ago, Mog Ball carried out an overview of evaluation practice in the voluntary sector following national discussions instigated by the Home Office Voluntary Services Unit and the Charitable Trust Administrator’s Group. As this new study shows, the landscape has been transformed in the ensuing decades.

Increased public investment in the sector has coincided with an accelerating drive towards greater transparency and the demonstration of effectiveness. Third sector organisations have become more involved in the design and delivery of statutory services and contract-based commissioning is in the ascendancy. Independent funders, too, have encouraged the groups they support to up their game and provide better evidence, not just of how the funding has been spent but what it has achieved. Meanwhile, there are far more resources available to help those looking to get the most from this new climate.

So it was timely that CES should seek to build an evidence base of current evaluation practice and its benefits. Over 700 third sector organisations and over 100 funders and commissioners contributed to the study – the largest ever carried out in this area.

Huge resources are invested annually by third sector organisations in monitoring for funding purposes. This study helps us look at the efficacy of this activity in practice; it explores the requirements placed on organisations by funders and commissioners and the extent to which the various demands complement each other and reinforce organisational benefit from monitoring and evaluation. Crucially, it looks at these issues not just from the funders’ perspective but also captures the experiences of sector organisations themselves.

The challenge for both funders and for the sector is to ensure that we get the most from our investment in monitoring and evaluation, enabling individual organisations to develop and move forward while fostering a commitment to learning throughout the third sector.

Clare Thomas
Chief Grants Officer
The City Bridge Trust


Contents

Summary research findings v

The research v

A developing third sector v

Key findings vi

Introduction ... putting the spotlight on voluntary sector organisations xi

Background to the research xi

Carrying out the research xii

This report xii

Chapter 1 A changing context ... exploring shifts in policy and practice 1

A developing third sector 1

Investing in the third sector 1

Demonstrating value 2

The third sector and public services 3

The importance of evidence 3

The funding environment 4

Chapter 2 Developing third sector accountability ... balancing external and internal agendas 8

Monitoring and evaluation 8

The drive for accountability 9

Increased monitoring 9

The fallout from commissioning 10

The reporting burden 11

Addressing the issue of multiple reporting 13

Covering the costs of monitoring and evaluation 13

Gaining benefit from reporting requirements 15

Different agendas 15


Chapter 3 Building monitoring and evaluation knowledge and skills ... working towards a common language 17

The information explosion 17

Where organisations get information about monitoring and evaluation 18

Monitoring and evaluation training and support 21

Monitoring and evaluation in the context of supportive funding 22

Management and evaluation consultancy and support providers 23

Capacity-building initiatives 24

Developing a monitoring and evaluation culture 28

Managing data 30

Addressing data management issues 34

Chapter 4 Focus on outcomes ... how to demonstrate change 36

Pioneering outcomes 36

Outcomes and public service delivery 39

Organisation-wide outcomes frameworks 40

An outcomes approach 42

The evidence gap 44

Potential pitfalls of an outcomes approach 45

Chapter 5 Internal evaluation and performance management ... insights into approaches and methodologies 49

Some definitions 49

The development of self-evaluation 50

Third sector self-evaluation and quality assurance 51

Internal evaluation approaches 53

Data collection methods 57

Participatory monitoring and evaluation 59

User involvement in monitoring and evaluation 60

Dual approach 63

Using internal monitoring and evaluation data 63


Chapter 6 External evaluation ... the challenges of demonstrating value 66

External evaluation 66

The quality of external evaluations 67

Evaluating impact 69

The question of attribution 72

Impact across regional and national bodies 74

Global impact reporting 76

Cross-sector impact 76

Assessing social capital 77

Demonstrating ‘value’ 78

Social return on investment 81

Chapter 7 Using monitoring and evaluation findings ... implications for good practice and service delivery 85

Evaluation for a purpose 85

Benefits of monitoring and evaluation 86

Using monitoring and evaluation for internal learning 88

Benefiting service users 90

Using evaluation to influence policy change 90

Evaluation for learning 93

Using data effectively 94

Evaluation and knowledge management 96

Learning from monitoring and evaluation practice 97

New learning initiatives 99

Chapter 8 Recommendations ... towards good practice 100

For funders, commissioners and policy makers 101

For third sector organisations, evaluation support agencies and evaluators 103

For all stakeholders 105


Appendix 1 Methodology 107

Appendix 2 Profile of survey respondents 109

Appendix 3 Research participants 112

References 114

Acronyms and abbreviations used in the report

CES Charities Evaluation Services

CTAC Community Technical Aid Centre

IDeA Improvement and Development Agency

INTRAC International NGO Training and Research Centre

LSC Learning and Skills Council

NAVCA National Association for Voluntary and Community Action

NCVO National Council for Voluntary Organisations

NSPCC National Society for the Prevention of Cruelty to Children

PQASSO Practical Quality Assurance System for Small Organisations

SORP Statement of Recommended Practice

SROI Social return on investment


Summary research findings

The research ... focus and scope

The public spotlight on third sector organisations and their effectiveness has become intense. It has never been more important for third sector organisations to acquire an understanding of monitoring and evaluation and the skills associated with its practice.

This summary reports the key findings of field research carried out by Charities Evaluation Services (CES) between November 2006 and March 2008. The research included 682 responses to an online survey to third sector organisations and 89 responses to an online survey of a wide spectrum of funders. Focusing on England, we carried out face-to-face and telephone interviews with 88 national, regional and local third sector organisations, statutory and independent funders and commissioners, and evaluation support agencies and evaluators. Additionally, we carried out a series of interviews with providers of monitoring and evaluation software systems and their users. The field research was supported by a wide review of relevant literature.

A developing third sector ... shifts in diversity, growth and character

The third sector is now much changed from the voluntary sector of 1990, when CES was established as a strategic support to the development of monitoring and evaluation. It has been a period of rapid growth in the sector, with civil society organisations showing a combined income of £31bn in 2005/06.

The diversity of the sector has also increased, including some 55,000 social enterprises, and ranging from multi-million pound service delivery agencies to hundreds of thousands of small community groups. This development of the sector has included a blurring of distinctions across public services, third sector and business, and a major investment in building the sector’s capacity to deliver public services. There has also been structural change in funding the sector: there is dwindling grant funding and earned income now accounts for more than half the sector’s income, while state funding generates 36 per cent of charities’ income.

The dominance of business models, the increasing language of performance, a focus on outcomes, and the quest for an evidence base for investment of public money are all increasingly shaping the character of third sector monitoring and evaluation.

The sector has welcomed a move away from the previously prevailing ‘bean counting’ culture that equated success with the achievement of outputs, in favour of a focus on benefits for users, and many organisations have welcomed an outcomes approach. Yet value in third sector organisations is increasingly being defined by an organisation’s ability to demonstrate it, and often in ways imposed by external priorities and targets. In an environment of increasing competition, and smarter funding application and tendering procedures, many small organisations with insufficient resources, or those unable to frame their benefits in the language of quantifiable outcomes and impacts, have become increasingly vulnerable.



Key findings ... learning from the research

In this section key findings are presented under the following five headings.

• Growth in monitoring and evaluation support.

• Funding requirements dominating monitoring and evaluation.

• How useful is the data being collected?

• Resources for monitoring and evaluation.

• Meeting accountability and learning need.

Growth in monitoring and evaluation support

Multiple programmes of monitoring and evaluation support

1. The provision of strategic support through CES since 1990, and subsequently the multiplication and diversification of available resources through a wide variety of agencies, have been effective in disseminating a basic understanding of monitoring and evaluation and the application of practical models.

2. This learning has been accelerated through a number of specific monitoring and evaluation capacity building initiatives that have required considerable time and funding inputs. These programmes, and a number of regional and local initiatives, have demonstrated the value of disseminating and cascading knowledge through infrastructure and sub-sector networks.

Huge growth in learning resources

3. The landscape of monitoring and evaluation in the third sector is a changed one from that of 18 years ago, when CES started. Introductory training is available from a wide range of sources, and there has been an explosion of free information and resources. The sector can now access a range of toolkits, soft monitoring and other data collection tools, ranging from simple frameworks to complex economic evaluation methodologies. For those organisations that can pay, there is a pool of skills available through national and regional organisations, consultancy and support agencies, and university departments.

Support for self-evaluation

4. Self-evaluation has increased in currency and legitimacy as third sector organisations have gained internal skills in monitoring and evaluation and developed information management systems. The previously predominant model of an expert carrying out evaluation at project completion is giving way to increased self-reliance and also to partnership between external evaluators and internal staff, with work carried out at the project start, and a combination of internal and external methods.

Still some gaps in understanding

5. Evidence from both funders and support agencies is that, despite the increase in resources and training, there is still a huge constituency of organisations that are struggling with the basics of monitoring and evaluation and to understand outcomes approaches. The evidence from monitoring and evaluation initiatives is that implementing an effective monitoring and evaluation system – particularly one that is outcomes-focused – takes time and support.

Sixty-eight per cent of third sector survey respondents obtained information from training courses – the most frequently cited method.

Funding requirements dominating monitoring and evaluation

Short-term nature of funding and reporting timescales counter-productive

6. Reporting cycles, with their short timescales, acted to increase the reporting focus on targets, outputs and early outcomes. They also led organisations to prioritise monitoring as a performance measurement activity and were a disincentive to a focus on longer-term results and reflective evaluation. Many third sector research participants made no real distinction between monitoring and evaluation – which meant they were not open to the potential of evaluation research to develop analysis and understanding about why an intervention works, for whom and in what conditions.

Unrealistic targets and measurements agreed

7. The research found an increasing demand for impact information. Evaluators reported that, in order to procure funding, organisations have frequently agreed a level of impact with funders that cannot be realised or measured feasibly. A critical issue was reported for many organisations: the difficulty in establishing a fit between the immediate outcomes for users, which are realistic for local organisations to achieve given their scale and remit, and the targets relating to higher-level government or funder outcomes frameworks.

8. Third sector organisations reported that funders and commissioners themselves often had a poor grasp of outcomes, or what might be realistic for an organisation to achieve and measure.

Reporting requirements over-burdensome

9. There is a convincing body of evidence that many third sector organisations are finding the level of monitoring inappropriately burdensome – a result of new and changing funding and regulatory requirements, the multiplicity of funders and regulatory bodies, and the complexity of devolved nations. Seventy-nine per cent of the sample said that different information was required by different funders. Survey respondents reported that the increasing amount of commissioning of third sector organisations for public services had compounded this effect.

10. There was strong evidence from third sector organisations that funders do not link monitoring requirements proportionately to size of organisation or level of funding, despite good practice guidance. Neither do funders routinely consider issues of type of organisation, or the stage of its lifecycle, when requesting monitoring and evaluation data.

11. Paradoxically, in the context of an overall effectiveness agenda, heavy reporting requirements may themselves be acting as a disincentive to performance improvement, either because organisations do not see monitoring and evaluation as valuable internally or because available time is absorbed by monitoring and reporting.

Sixty-seven per cent of third sector respondents said that funders’ requirements had become more demanding over the last five years.

‘Now funders’ requirements have become much more demanding to the point where the monitoring requirements of our funders are dictating how we do all of our work.’
Advice agency


Lack of integrated monitoring and evaluation

12. Monitoring and evaluation has been developed on a project-by-project basis, to respond to project funding requirements, discouraging the development of consistent practice across organisations and leading to the loss of learning about organisational impact. While the research found examples of organisations that had integrated monitoring and evaluation into their organisational strategy, for many third sector organisations much is still required to embed monitoring and evaluation in everyday practice.

How useful is the data being collected?

Variable quality and usefulness of monitoring data reported by funders

13. Despite the range of outcomes initiatives in the sector, funders frequently found the reporting of outcomes inadequate, and often over-simplistic and subjective. More generally, the research found that large amounts of information are gathered to meet the needs of funders and regulators, which is frequently of variable quality, difficult to use in external evaluations, and which cannot be aggregated for funding reports or for programmatic evaluations.

14. Expectations of external evaluation are now much higher, but frequently funders and others commissioning evaluations found them to be disappointing. They reported that evaluators often failed to invest sufficiently in understanding the programme or project, lacked critical analysis, and evaluations were often too academic and presented in inaccessible formats.

Some effective use of monitoring and evaluation findings internally

15. Those organisations that had been able to invest resources in monitoring and evaluation as a management tool had been able to see benefits. We found some good evidence among our survey sample of third sector organisations using monitoring and evaluation most commonly as a communication and marketing tool, but also to develop services and activities.

Funders unable to deal effectively with data

16. Funders themselves reported that they were getting more data than they could deal with, often without the systems necessary to collate, analyse and use it, and thereby losing the potential for learning. Some funders are addressing this issue through the introduction of new monitoring procedures and databases.

Resources for monitoring and evaluation

Computerised monitoring and evaluation systems

17. Organisations using monitoring and evaluation IT systems showed huge time savings and increased effectiveness. Yet there are still relatively few organisations accessing these resources. The research found no over-arching strategic approach to the development of monitoring and evaluation systems software within the sector as a whole, with the result that development has been uncoordinated, often generated from particular sub-sector needs.

‘We are developing a cycle of feeding back issues of under-performance and key performance… As we move to better outcomes recording, the IT system will really help us monitor our progress towards our strategy.’
Homeless charity

Eighty-one per cent of third sector survey respondents reported changing how things were done as a result of monitoring and evaluation. Sixty-five per cent had changed products or services as a result of monitoring and evaluation findings.


Inadequate systems impacting on evaluation practice

18. There is still a huge demand for basic monitoring and evaluation skills learning. However, more focused and targeted support is also required, whether within sub-sectors, or to develop more specific skills for research methodologies and data collection, and for management and analysis to improve the quality of monitoring and evaluation.

Monitoring and evaluation not properly resourced

19. Apart from a few large public programmes, and some specific programmes and projects funded by charitable funders, high expectations of monitoring and evaluation are seldom matched by the resources available, which are frequently provided on a reactive, limited basis. Funders and organisations themselves under-estimate the considerable amount of time and resources required to introduce an integrated results-focused evaluation culture. Evaluation consultants found that most organisations able to introduce a working system only did so successfully with support. Seventy-five per cent of third sector organisations identified time as a main constraint.

20. The competitive commissioning environment means that it is difficult for organisations to cost realistically for monitoring and evaluation, even where core costs are agreed in principle. Addressing the costs of monitoring and evaluation is integrally tied in with the debate on full cost recovery. Both funders and third sector organisations reported that even where full costs of delivering services were agreed in principle, there was a strong incentive to keep the bottom line down, and this included reducing or ignoring the costs of monitoring and evaluation.

Meeting accountability and learning need

Focus on accountability

21. Three-quarters of third sector survey respondents described organisational benefits from monitoring and evaluation, but wider evidence suggests that there is still a predominant belief that monitoring and evaluation is done mainly for the benefit of funders and regulators, and externally derived targets and performance measures reinforce this perception. Some funders recognised that a focus on outcome measurement alone might not encourage learning without an analysis of resources, activities and outputs to make sense of the outcome information. Indeed, further learning might be positively discouraged when both funders and funded wanted to demonstrate success.

Implementation issues deprioritised

22. While some funders expressed frustration at organisations not doing monitoring and evaluation for themselves, but in response to funder demands, many were implicitly or explicitly placing themselves as the main stakeholder for monitoring and evaluation information. Furthermore, although third sector organisations welcomed the focus on benefits to users and ‘making a difference’ as more valuable and essential than a focus on quantified outputs, implementation and practice issues have often become deprioritised.


Thirty per cent of third sector survey respondents received no funding for monitoring and evaluation.


Differing location of monitoring and evaluation responsibilities

23. The research found that restricting monitoring and evaluation responsibilities to management or finance and fundraising functions, rather than integrating them within core functions, could reinforce the priority given to compliance over learning. Specialist posts could provide valuable input but did not in themselves ensure integration of monitoring and evaluation and could discourage it.

Sharing learning about evaluation methods

24. Where user-friendly and participative methodologies had been introduced by external evaluators, we found evidence of these being integrated into organisational life. However, more complex externally applied evaluation methodologies were less likely to obtain buy-in to the process and findings or to leave a legacy of usable and enduring internal tools.

Need for practical models for learning

25. There is an increasing demand for third sector organisations to demonstrate their contribution towards economic priorities, and their investment value or return. As a result, there is a danger of seeing more technical methodologies as of universal application, and devaluing practical methods that can be used for internal learning.

26. Yet experience of working with community organisations and small social enterprises underlines the importance of light analytical models. Many independent funders advocated that third sector organisations should not use complex scientific methods, but rather practical data collection tools and thoughtful questions.

Sharing learning about practice

27. Evaluations are increasingly available on the internet but there is no ready way of knowing what is available, and evaluation findings are rarely structured in formats that would make learning easily accessible. Research participants pointed to the lack of a third sector learning culture. Indeed, the research found limited evidence of monitoring and evaluation used to increase communication among agencies about approaches.

28. Knowledge management is reaching the agenda, and there are major developments with the potential to develop a third sector learning culture. Online knowledge banks, the new third sector research centre and a new journal for third sector research all point towards a major opportunity for learning.


Introduction
Putting the spotlight on voluntary sector organisations

Background to the research ... building blocks for efficiency, effectiveness and value for money

Charities Evaluation Services (CES) was set up in 1990 as a strategic initiative to underpin the development of voluntary sector monitoring and evaluation. Since then, the public spotlight on voluntary sector organisations – and their ability to demonstrate effectiveness – has become intense. Monitoring (collecting information and tracking progress routinely and systematically) and evaluation (gathering research and other data to make judgements and determine the value or worth of something) has become a critical part of organisational life. (See page 8 for more information on the distinctions between monitoring and evaluation.)

In late 2006, CES began a comprehensive research study of developments in voluntary sector monitoring and evaluation practice. There had been no wide-ranging study since the 1988 overview of evaluation practice in the voluntary sector by Ball: that research was one initiative which led to the establishment of CES (Ball 1988).1 The focus of CES’ new research also turned to a wider sector – now commonly defined as the third sector, to include social enterprises as well as voluntary and community organisations (although the research would not include other civil society organisations such as schools and trades unions).

Ball’s study, and the initial support for CES by funders and government, had been driven by the policy changes of the 1980s, mainly associated with the impact on the voluntary sector of the reduction in the role of the state, the promotion of compulsory competitive tendering and the contracting out of local authority services.2 Alongside these changes was an increased emphasis on efficiency, effectiveness and value for money. The 1990 Efficiency Scrutiny of Government Funding of the Voluntary Sector report explicitly called for government departments and voluntary bodies to establish concrete objectives for all grants and to monitor and evaluate their performance accordingly (Home Office 1990).

The Deakin report (1996), the Labour Party Report on the Voluntary Sector (1997) and the Treasury Cross-Cutting Review (2002) successively focused the sector on its role in delivering public services, and on its effectiveness. Voluntary sector performance became a policy issue and, for many organisations, monitoring and evaluation was becoming an accepted part of organisational life, even where its practice remained in its early stages. Other organisations struggled to demonstrate effectiveness, sometimes with negative consequences for their sustainability in the new competitive world.

CES felt that research was needed to build an evidence base about monitoring and evaluation practice and its benefits, if any, and to place that practice within the context of funders’ requirements and evaluation activity. We wanted to look at the available support, and the extent to which organisations had learnt the skills and techniques needed to carry out evaluation. Had they also acquired the skills necessary not only to manage and interpret data, but also to make it useful? How had the emphasis on ‘value’ and on outcomes and impact that were pervading the policy and funding environment affected learning from evaluation and created benefits for users?

Carrying out the research ... throwing light on the main developments and challenges

Field research was carried out from November 2006 to March 2008, supported by funding from the City Bridge Trust and the Performance Hub. The methodology included the following:

• a wide review of relevant literature, including a review of monitoring and evaluation resources

• a UK-wide online survey to third sector organisations, which gained 682 responses

• an online survey to funders and commissioners, which had 89 responses

• three separate focus groups or workshops with a range of funders

• 88 interviews with national, regional and local third sector organisations, independent funders and statutory commissioners, support organisations and evaluators

• 15 interviews with software providers and 14 interviews with systems users

• a review of a limited number of external evaluations providing learning about approach and methodology issues

• development of case examples.

Further information on the research methodology is provided in Appendix 1 and Appendix 2.

The research report describes a strikingly different landscape from that of Ball’s 1988 study, which found that information and support were severely limited and patchy, which showed that most organisations were suspicious or felt threatened by evaluation and its judgements and which indicated that, most commonly, ‘nothing happened as a result of them’.

The current study could never be fully comprehensive, but was sufficiently wide-ranging to throw light on the main developments and challenges in third sector evaluation. Interviews focused on organisations based in England, while the two online surveys drew respondents from across the UK.

Although the online surveys were widely circulated through electronic networks and gave a good response, organisations that responded are likely to have been those for whom monitoring and evaluation is a relatively high priority. Interviews with third sector organisations largely aimed to explore practice, rather than focusing on organisations still grappling with the very language of evaluation. Yet behind the study sample is a constituency for whom monitoring and evaluation has not yet become a priority; the findings and recommendations of this report will be particularly relevant to this group.

This report ... overview of content

Chapter 1 details the policy and funding context which has provided the key triggers and facilitators for changed monitoring and evaluation practice, but also some of the constraints to its development.

Chapter 2 focuses on the effects on third sector organisations of the increased emphasis on accountability. In particular, it discusses the effects of the reporting demands placed on third sector organisations by funders and regulators.

Chapter 3 reports on the significant development of knowledge and understanding about monitoring and evaluation in the third sector, and on the effectiveness of monitoring and evaluation capacity building initiatives. It also examines the limiting effects of inadequate data management systems.

Chapter 4 looks specifically at the advances made in developing outcomes approaches in the third sector and stresses the importance of adequate support for developing relevant skills. It also urges the need for outcomes approaches to be accompanied by sufficient technical capacity to produce reliable data, and by sufficient understanding of implementation and process issues to make sense of the data.

Chapter 5 reports on internal evaluation, both the development of self-evaluation in third sector organisations and the approaches and methods used when working with external consultancy. It raises questions about the quality and utility of the data produced through internal monitoring and evaluation for programme evaluation.

Chapter 6 considers some of the issues surrounding external evaluation in the third sector. It examines the drive for impact assessment and reviews some of the approaches to, and learning from, value for money evaluation.

Chapter 7 reflects on the benefits of monitoring and evaluation reported by third sector organisations, and the current limitations in the use of evaluation findings by both funders and third sector organisations. It looks forward to the potential of new knowledge-sharing initiatives to increase the value of monitoring and evaluation for wider learning and sharing good practice.

Key findings are summarised at the beginning of every chapter.

Recommendations deriving from these findings are provided in Chapter 8.

Notes for Introduction

1 This research was an outcome of discussion begun in 1984 and 1985 in seminars on ‘the need for the theory and practice of evaluation’ in the voluntary sector, convened jointly by the Home Office Voluntary Services Unit and the Charitable Trust Administrator’s Group.

2 The importance of the developing policy context in the development of CES was outlined in Libby Cooper’s keynote speech, as first director of CES, at the International Evaluation Conference, Adelaide. See Cooper, L (1997) ‘Evaluation and the Third Sector’ in Evaluation (Australia) 6 (2), December 1997.


Chapter 1
A changing context ... exploring shifts in policy and practice

Focus of chapter 1

This chapter outlines the important changes in public policy and funding practice, and in the nature of the third sector itself, which provide the defining context against which third sector monitoring and evaluation has developed over the last 20 years. It includes the following:

• the increased size and complexity of the third sector, the growth of social enterprise, and the blurring of distinctions across public services, third sector and business

• the increase in third sector delivery of public services

• the structural changes in the sector’s sources of income

• the increased drive for accountability and transparency, and for the development of an evidence base of third sector effectiveness and value

• the effect of the policy context on funders’ monitoring and evaluation requirements and practices.

A developing third sector ... sheer size and scale

There have been major changes in the voluntary sector since the Centris report Voluntary Action in 1993 (Knight 1993), which provided an analysis of the role of voluntary action in society, and the 1996 Deakin Commission report on the future of the voluntary sector, which identified the key issues facing the sector over the following ten years, including the reform of charity law and the development of quality assurance (Deakin 1996). The sector is now more commonly defined as the third sector, made up of all organisations which are ‘established on the dual basis of being non-profit-distributing, and which are not part of the state’ (Morgan 2008:2).1

The 2008 UK Civil Society Almanac provides statistics on the third sector for 2005/06 and captures its rapid growth. There were 865,000 civil society organisations in 2005/06, some 164,000 registered as charities, with a combined income of £31bn, a 10 per cent increase from the previous year. Charities alone employed 611,000 people in 2005, 26 per cent more than a decade earlier. There were also around 55,000 social enterprises, with an annual turnover of £27bn, and hundreds of thousands of small community groups (Reichardt 2008).

Investing in the third sector ... diversification of roles and funding

This development of the sector has included an increase in public service delivery by third sector organisations, and a major investment in building the sector’s capacity to deliver those services. There has also been a blurring of distinction across public services, the third sector and business, and an increased policy and funding focus on social enterprise. These changes, in turn, have influenced a structural change in funding the sector.


• For the first time, earned income accounts for more than half the sector’s income.

• State funding generates 36 per cent of charities’ income.

• There are indications of a decrease in grant funding, with fewer unrestricted grants, reduction in money available to small charities and greater competition for a pool of dwindling grant funds.2

Local authority funding for the sector is now increasingly available either through contracts for service delivery, or through the Single Community Programme – a single merged funding stream and the main funding vehicle for neighbourhood renewal strategies.

Between 2004 and 2006, two major government investments were committed to build capacity in the third sector. The first was the investment of £215m in the sector through Futurebuilders, running from 2004 to 2011.3 The second initiative in 2006 channelled £150m through Capacitybuilders into increasing support to frontline voluntary organisations through a sustainable infrastructure, with a further commitment of £83m from April 2008 to 2011.

Part of Capacitybuilders’ support included continued funding for the Performance Hub, a national initiative led by CES and the National Council for Voluntary Organisations (NCVO). The Performance Hub began operating in 2005 through a Home Office ChangeUp grant and had monitoring and evaluation as one of its work streams. In April 2008, Capacitybuilders funded CES to lead a new body – the National Performance Programme – which also delivers work on monitoring and evaluation.

Third sector infrastructure has also been strengthened by Big Lottery funding during this period. Its BASIS programme, which was launched in 2006, allocated £100m in its first round of funding to voluntary and community sector infrastructure organisations across England and expects to award up to £50m in round two.

In 2006, the government’s action plan for the third sector included a commitment to learning from the sector and to building its capacity (Cabinet Office 2006). The government’s social enterprise action plan, launched in November 2006, set out the intention to work with regional development agencies to ensure they are better able to support the specific needs of social enterprises (Cabinet Office, Office of the Third Sector 2006).4

Demonstrating value ... drive towards efficiency, effectiveness and accountability

The drive towards more effective public services and accountability for results was influenced by the new public management strategies affecting western Europe in the 1980s. Increasing regulation, and the drive towards third sector transparency and demonstration of value, placed monitoring and evaluation practice higher on the agenda and shifted the focus of regulation from questions of probity to ones of performance (Bolton 2004). However, the election of the New Labour government marked a step up in the government’s parallel drive for engagement with the third sector and demonstration of efficiency and effectiveness.

The Charities Act of 1993, amended several times, brought with it a need for charities to make information publicly available about how they plan their work and account for funds received, and more rigorous reporting requirements were introduced by the SORP Accounting Framework (Statement of Recommended Practice, 1995). With the Summary Information Return introduced in 2005, the Charity Commission for the first time required information on charities’ achievements. Although applying only to charities with an income of over £1m, this increased the focus on outcomes that had emerged in the late 1990s. (The emergence of outcomes approaches is discussed further in Chapter 4.)

The Charity Commission’s Hallmarks of an Effective Charity include the need for charities to spell out their mission with clarity, along with intended benefits and impact, and how these will be achieved. And, from 2008, charities are required to state clearly what they have done for public benefit and relate this to their aims.5

A limited number of initiatives in the sector have focused on improving charity reporting. The ImpACT (Improving Accountability, Clarity and Transparency) Coalition, established in 2005, has over 100 members working to improve ‘reflective’ reporting on successes, achievements and setbacks. The Coalition is currently developing an online toolkit to help members assess and improve their transparency and accountability.6

Shout about Success is a Compass Partnership programme which aims to improve charity performance reporting. New Philanthropy Capital, notably, has called for more independent scrutiny and assessment. It has a team of over 20 charity analysts working with both funders and charities to increase the quantity and effectiveness of funding by helping charities improve the way they demonstrate their results (Brooks et al 2005).

The third sector and public services ... promoting social capital and social cohesion

Government cross-cutting reviews of the sector as part of the 2002 and 2004 Spending Reviews focused on its public service delivery and reform, with a developing emphasis on its role in promoting social capital and social cohesion.7 Commissioning strategies have since acted further to embed the third sector within public service delivery, particularly in areas such as correctional services, employment services, children’s services, education and training, and health and social care services (Department for Children, Schools and Families 2006 and Department of Health 2007).

In 2007, the Office of the Third Sector made an unprecedented commitment to the sector by designating specific third sector ministerial responsibility within the Cabinet Office, and making a commitment to developing an evidence base through a new research centre (HM Treasury 2007).

The importance of evidence ... the move to outcomes-based targets

One of the major New Labour changes in strategy was the emphasis on evidence-based policy and practice, in part reflecting an increasing interest in evaluative activity worldwide.

From the 1990s, there had been increasing moves in the public sector in the United States and other western economies to address outcomes as a key part of performance measurement. In the UK, departmental targets – previously output-based – were increasingly focused on outcomes in spending reviews and, by 2000, 67 per cent of targets were outcome-based (Cabinet Office 2006).8 This shift in emphasis also included a requirement on government agencies delivering public services to translate high-level outcome targets into operational targets on the ground (HM Treasury 2002). The 2006 Local Government White Paper set out a simplification of the performance framework for outcomes secured by local authorities, with 53 outcomes and a single set of about 200 national outcome-based indicators covering national priorities (Department for Communities and Local Government 2006).

More recent government policy documents urged commissioners of services to put outcomes for users at the heart of the strategic planning process, to focus on outcomes in contract management and to use the achievement of outcomes as a key indicator of success in service delivery (Office of Government Commerce and the Active Communities Directorate 2004). This shift was important for third sector organisations increasingly drawn into local authority commissioning processes. It was also significant for the sector that the shifting focus to outcomes remained largely on quantitative data, and often did not incorporate the qualitative changes achieved.

Government policy recognised an important gap in existing local authority commissioning practice. IDeA, the Improvement and Development Agency for local government, had provided some support to local councils on outcomes and, since the introduction of the Every Child Matters framework in 2004, over 40 local authorities had participated in the seminars on adopting the Outcomes-Based Accountability approach, an approach developed in the United States to improve outcomes for children and families. Recognising the importance of developing local authorities’ understanding of an outcomes approach, the government’s 2006 Action Plan for Third Sector Involvement included investing in training 2,000 commissioners, the training being rolled out between January 2008 and March 2009, as well as a commitment to taking action to ensure fair and proportionate contracting (Cabinet Office 2006).

A 2007 Cabinet Office report brought the evidence-based approach directly into the third sector sphere, reflecting a concern to improve measurement, to develop opportunities to learn and disseminate lessons about delivery, and to assess the overall impact of the sector (Cabinet Office 2007). The report emphasised the need for the third sector to provide evidence of its distinctive or ‘comparative’ value, and of the contribution of social enterprise to economic productivity, and social and environmental impacts.

The funding environment ... tracking the changing policy and funding context

The diverse nature of third sector funding makes it difficult to talk about funders as a group. Unwin has pointed to the disappearance of the old distinctions between funder and funded as many organisations apply for funds in order to distribute them to others (Unwin 2006). This research sought to consider the full range of funding and funding streams, and to reflect recent developments at a local level – these have seen a shift in the way many local authorities fund the third sector, with a move away from grant funding towards a commissioning and contract delivery approach. The introduction of Local Area Agreements has also been important – setting the targets for much of the monies available to the third sector. The research also sought to engage with independent funders, both those running large, multi-year grant programmes and smaller, more localised grant givers.9

The effects of policy change

Respondents to our survey of funders reflected this diversity, and showed clear effects of the policy changes. Nearly three-quarters of the funders in our survey had changed their funding criteria over the previous five years, and 84 per cent had changed their monitoring and evaluation criteria. An outcomes focus and the increase in evidence-based practice in certain sectors were significant factors in this change – with nearly 60 per cent of the total sample placing a greater emphasis on outcomes than five years previously.

SORP 2005 had prompted many charitable funders to improve their own monitoring and evaluation, and this has had a knock-on effect on requirements from funded organisations, clarifying, as one reported, that ‘our beneficiaries must be able to fulfil our own charitable objectives’. For statutory funders, this translated into a need to link funding outcomes to the Community Strategy and Local Area Agreements.

The funders’ survey sample indicated that while some short-term funding remained, there was a move towards more three- or five-year funding, and larger and fewer grants and contracts. Two main trends emerged among independent funders:

• There were more focused funding programmes with clearer statements of aims, partly to control the increased demand, and partly to allow a focus on outcomes and impact. For some it was a desire to set the agenda through their funding.

• There was more proactive funding, with independent funders often adopting the contractual approach of statutory funders. Twelve per cent of the funders in the survey said that they were largely proactive, approaching organisations to commission work in areas of interest, often for larger grants, multiple-year funding and contracts. A further 49 per cent reported funding both proactively and through open programmes. For some funders, proactive strategic work explicitly informed their more open grant-making.

Funding as investment

Our research found the language of funding ‘investment’ being used more and more. Unwin (2004) distinguished between three main types of funder: giver; supplier; and investor. She has described some of the features of an investment approach as typically being specific about expectations and intentions of producing long-term outcomes, with a focus on achievements and being interested in tracking the impact of funding. An ‘investment’ approach was expressed by both local authority and independent funders responding to our survey. From our wider review, the approach varies in practice, as demonstrated in the following examples of charitable funding.

• The Impetus Trust, founded in 2003, with its role of bringing venture capital know-how to the voluntary sector and funding a small number of ‘exemplary’ organisations, has identified three areas of specific investment: long-term financing and longer-term relationships, capacity building and management support.

• The King’s Fund has made a strategic decision to become a developmental funder, investing in organisational infrastructure.

• SHINE Trust, an educational trust, makes a small number of large grants to organisations that are able to produce robust data. This is partly in response to its corporate and individual donors who see their funding as an investment.


• Capital Community Foundation, generally a small-grant funder, has reframed its grants away from funding new organisations to provide three-year funding in some cases and repeat grants in others, in order to invest in organisational development.

• Other funders, such as Bridge House Trust, are reserving an element of their annual grant-making for developing capacity, sharing best practice and contributing to expertise.

‘We have greater expectations around evaluation – so organisations must have some capacity to deliver this.’
Funder

The effects on monitoring and evaluation requirements

Our research tracked how the changed policy context, and the changes in funding approach described above, have fed through into current monitoring and evaluation arrangements. The results of a 1996 survey of grant monitoring in 170 UK grant-making charities found that around 28 per cent indicated that they collected no information once they had given a grant, and more than half (54 per cent) did not report formally evaluating past performance when considering subsequent decisions. The study concluded that ‘Of the professional grant-makers, only a few undertake extensive and regular monitoring including information collection, organisation evaluation and control’ (Ashford and Clarke 1996:298).

This study demonstrates the degree of change that occurred in the decade since that report. In our third sector survey, 94 per cent of respondents said that funders required monitoring information or reports and 73 per cent said that funders required evaluation reports. The evidence suggests that government policy has been an important factor in changing funders’ reporting requirements and evaluation practices as they themselves face greater accountability demands – whether against public sector performance targets or against new charity reporting regulations. This has translated into greater information needs from the organisations they fund.

Some funders had specifically reviewed their monitoring and evaluation arrangements. For example, as part of their restructuring in 2003, the Arts Council England set up a small evaluation team, identifying improvements needed in how they handled and used information. Some funders, channelling corporate or significant individual donor investment, were seeking better evidence of a demonstrable return. Although many small funders remained constrained by lack of resources, third party funders, such as Community Foundations, reported being driven by the requirements of their own funders for information on the impact of their investment.

Table 1 shows how respondents to the funders’ survey evaluated funding programmes.

Table 1 How funders monitored and evaluated funding programmes

Activity: Percentage of responses (n = 60)

Analysis and report on grant funding and outputs: 58
Evaluation of the overall impact of grant funding: 53
Assessment and report on organisational/project outcomes against funding aims: 48
Evaluation of the impact of specific grant programmes: 47


Intelligent funding and learning from evaluation

The increased currency given to the notion of investment has also brought with it expectations of a demonstrable return. Focused programmes, with smaller cohorts, have permitted funders to be more specific; we found greater demands being made relating to quality of self-reporting. More coherent strategic programmes – such as the Esmée Fairbairn Foundation’s Rethinking Crime and Punishment programme (2001/05), which aimed to develop the debate and increase public knowledge about prison and its alternatives – have built evaluation into them, with expectations of learning and impact evaluation.

The concept of ‘intelligent funder’ has also brought with it an expectation of synergy between the aims and objectives of those funding and those funded, and the importance of evaluation and learning (Unwin 2006). This is an approach which means talking to organisations about evaluation from the beginning. It is also an approach that takes evaluation beyond the demands of accountability to questions of learning, and implicitly raises a question of whether these two have been equally accommodated in the development of monitoring and evaluation in the UK third sector. This issue will be explored further in the next chapter and is a theme running throughout this report.


Notes for Chapter 1

1 The government defines the third sector as 'non-governmental organisations that are value-driven and which primarily reinvest their surpluses to further social, environmental or cultural objectives. It includes voluntary and community organisations, charities, social enterprises, cooperatives and mutuals'. (Cabinet Office 2007)

2 A study conducted by the Birmingham Race Action Partnership for the ChangeUp Finance Hub suggests a trend of declining funds across the nine English regions. Available information showed that funding distributed by grant aid in 2006/07 was approximately 87 per cent of that of three years before, with more 'restricted' types of funding replacing more flexible, 'unrestricted' forms. (Birmingham Race Action Partnership 2007)

3 The original £125m over three years was supplemented by a further £65m following the Third Sector Review in 2007.

4 Social enterprise has alternatively been seen as an activity – where a trading venture is undertaken primarily with a social aim – rather than a type of organisation. (See Morgan, G (3 April 2008) The Spirit of Charity, Professorial Lecture, Sheffield Hallam University.)

5 On 16 January 2008 the Charity Commission published its guidance on public benefit under the Charities Act 2006. Charities are required to report on details of their aims and objectives, their outputs and their outcomes from 2008.

6 See www.institute-of-fundraising.org.uk/informationaboutfundraising/forfundraisers/hottopics/impactcoalition/index.ht

7 The Treasury Cross Cutting Review of 2002 made 42 recommendations designed to overcome the barriers facing voluntary and community organisations in delivering public services. See HM Treasury (2002) Spending Review 2002: Report and HM Treasury (2002) Spending Review 2004: Report.

8 See also Department for Communities and Local Government (October 2006) Strong and Prosperous Communities: The Local Government White Paper, www.communities.gov.uk

9 This included funding for third sector organisations from central government departments – for example, Children, Schools and Families, Communities and Local Government, International Development – as well as funding delivered through regional and local government, other non-departmental public bodies and third sector agencies themselves. The Big Lottery Fund is the biggest generalist funder outside the government. The Big Lottery Fund, plus Capacitybuilders and Futurebuilders, are third sector organisations acting as third party funders for statutory funding. European Structural Funds, managed through government offices until the end of 2006, and regional development agencies (in London at least) from 2007, were also considered in the research.


Chapter 2
Developing third sector accountability ... balancing external and internal agendas

Focus of chapter 2
This chapter focuses on the effects on third sector organisations of the increased emphasis on accountability. It reports that statutory funding, and commissioning in particular, has developed a high level of monitoring that has not been accompanied by sufficient resources to develop the required organisational monitoring and evaluation capacity to support it. In particular, the research has found:

• There is a convincing body of evidence that many third sector organisations are finding the level of monitoring burdensome.

• The competitive commissioning environment means that it is difficult for organisations to budget realistically for monitoring and evaluation, even where core costs are agreed in principle.

• Externally derived targets and performance measures reinforce a belief that monitoring and evaluation is done for the benefit of funders and regulators.

• Reporting requirements may themselves act as a disincentive to developing monitoring and evaluation for internal learning and improving performance.

'Before we didn't know the difference between monitoring and evaluation and did a mishmash of things.' (Housing association)

Monitoring and evaluation ... understanding its role in today's climate

Some definitions
Monitoring is the process of collecting information routinely, systematically and continuously against a plan, and sometimes against targets. The information might be about activities, or services, users, or outside factors affecting the organisation or project. This information will not in itself explain progress or lack of progress.

CES describes evaluation as an in-depth study, taking place at specific points in the life of an organisation, project or programme.

'Evaluation aims to answer agreed questions and to make a judgement against specific criteria. Like other research, for a good evaluation, data must be collected and analysed systematically, and its interpretation considered carefully. Assessing 'value' – or the worth of something – and then taking action makes evaluation distinctive.' (Ellis 2005:8)

Evaluation theorists differ in their emphasis and different policy contexts influence practice, but most definitions have at their heart Scriven's definition that 'evaluation determines the worth, merit or value of something' (Scriven 1991:139). Patton (1997) focuses on how evaluation is used, emphasising that evaluators may use monitoring and other management information that is not research orientated. This is particularly relevant in a climate that focuses on the use of evaluation methods for performance management and policy delivery.

A changed climate
Funders and evaluation consultants in this study pointed to a growing, more widespread acceptance or understanding of monitoring and evaluation in the third sector. Its implications and requirements are clearer. Sub-sectors once resistant to measurement, such as community development, have accepted monitoring and evaluation as a pragmatic response to the requirements of project funding. One funder said that even in the last five years 'the whole climate has changed an unbelievable amount,' largely due to SORP 2005 and the public accountability agenda, together with the widespread dissemination of information about monitoring and evaluation. (The development of monitoring and evaluation resources is reported in Chapter 3.)

However, it was evident that many respondents made no clear distinction between monitoring and evaluation.1 This theme will be discussed further in Chapter 5.

The drive for accountability ... comparing international and national practice

The third sector survey and the interviews with third sector organisations illustrated a diverse range of monitoring and evaluation practice, ranging from minimal monitoring to complex systems. By and large, organisations have been driven to develop their monitoring and evaluation systems not by internal factors, but by increased regulation and reporting demands, reported in Chapter 1. As a result, much of the development has been project-focused in response to funding requirements and, within regulated sub-sectors, dominant monitoring and evaluation models often come from regulators.

Third sector respondents included a number of organisations working in the international development field, and we interviewed a small number of these for the learning about practice that they offered. Also, a number of our survey respondents had moved from the international sector to the domestic UK third sector and reflected on the extent to which, in their view, the UK third sector is lagging behind. They reported:

• Evaluation of outcomes and impact has been on the international agenda for 20 years whereas 'here it seems like it is a new move'.

• International evaluation has a track record of working with communities on participatory indicators and participatory methods, whereas in the UK it is difficult to involve communities.

• International organisations are further forward in developing learning strategies and knowledge management.

In general the UK third sector is seen as more fragmentary and ad hoc in its approach to monitoring and evaluation and less receptive to new ways than international organisations, which moved towards a more performance-based focus during the 1990s (earlier than domestic voluntary organisations), and which have been influenced by results-based management within multilateral and bilateral aid agencies.

Increased monitoring ... external and internal requirements

Two-thirds of third sector respondents reported that funders' monitoring and evaluation requirements had become more demanding over the last five years.

'Organisations often do evaluation just because they have to report to funders showing what they got for their money.' (Regional infrastructure organisation)

Forty per cent of our funder survey respondents said that they were now asking for more information from funded organisations and one-third said they asked for more standardised information. These trends were seen across funders of different sizes, irrespective of the number of grant officers. Surveyed organisations pointed to a number of dimensions which had increased the reporting burden.

• New and constantly changing statutory funding and regulatory arrangements have produced more onerous monitoring and evaluation requirements.2

• The multiplicity of funders and regulatory bodies is affecting small as well as larger organisations, and effort required is duplicated by the need to report on the same piece of work in different ways.

• The complexity of devolved nations requires separate bids and different monitoring requirements.

If the initial driver of monitoring and evaluation in the UK third sector was external requirements, the perception among survey respondents was that they were also developing their monitoring and evaluation as part of good organisational management. A large majority of respondents (83 per cent) reported that they developed their monitoring and evaluation to meet their own requirements as well as those of their funders. The percentage rose to 93 per cent for organisations set up three years ago or less. One organisation described the development of monitoring systems integrated throughout its services:

Our monitoring has been changed and improved due to our own organisation's wish to do so. The department head is a CES trained 'outcomes champion' and has cascaded the benefits of outcomes monitoring through the organisation. All external facing services have been developing systems over the last 12 months. Internal services plan to come on board soon. (Local infrastructure organisation)

The fallout from commissioning ... prescriptive reporting

Changing requirements
Surveyed organisations talked of the 'eternal frustration' of different and changing reporting schedules and requirements. Over three-quarters said that different information was required by different funders, while 30 per cent said that all their funders wanted different information. Where commissioning had been introduced, this led to further complexity as some commissioning remained output-based and some introduced outcomes reporting. Organisations reported data being required that often did not accurately reflect the core activities of the organisation or which was entirely irrelevant and inappropriate.

The study indicated that the statutory commissioning landscape is a confusing one for the third sector, with every local authority at a different stage in relation to commissioning. A 2006 study noted that third sector organisations found an increasingly competitive environment surrounding the commissioning process and a move towards more prescriptive funding, with more externally set targets and a substantial increase in their need to provide financial and other statistical information, in particular to statutory funders (Cairns et al 2006).

Moving goalposts
Approximately one-third of the third sector survey sample – organisations that had been commissioned to deliver services – described the changes that commissioning had entailed. They reported:

• increased stringency, level and focus of the monitoring

• a disproportionate amount of monitoring for one small part of the organisation's overall services

• a multiplicity of contracts or service-level agreements and co-financing

• lack of consistency between funders, both in monitoring cycles and in the detail (for example profiling categories)

• considerable frustration at lack of clarity and moving goalposts

• more paperwork, more onerous and longer reports, and inappropriate detail required

• poor standards, lack of understanding of services, and over-simplistic or irrelevant targets. Sometimes funders were unaware of how the service might be compromised by the request for sensitive information, for example on drug use, criminal records or refugee status.

'Pre commissioning they required detailed monitoring of users and minimal output information. Post commissioning they require performance indicators, outcome-based monitoring and more detailed client information.' (Women's health organisation)

The Learning and Skills Council (LSC) was mentioned by several respondents as having onerous requirements – inconsistent and changing, highly complex and demanding – with a resultant negative effect on services. One third sector organisation reported 'constant and time-consuming chasing for detail and irrelevant facts undertaken by [our] staff directly responsible for feeding back information'.

The reporting burden ... need for proportionality

In recent years, a number of studies on the third sector have highlighted the negative effects of monitoring that is over-burdensome and inadequately resourced. The need for proportionate requirements and the need to address the administrative burden have been spelt out in successive policy documents and reviews without evident change.

The Gershon Efficiency Review (2004) pointed to the lack of changed behaviour, reiterating previous recommendations about streamlining monitoring and reporting (Gershon 2004). A 2005 National Audit Office report recorded that, in 2004, 61 per cent of third sector organisations said monitoring processes were not proportionate and had not improved since 2002 (National Audit Office 2005). It also emphasised that inappropriate monitoring and evaluation requirements wasted resources and led to poor value for funders.

The 2005 Better Regulation Task Force report recommended that government departments should work together to measure and reduce the administrative burden, but attempts to coordinate requirements through a 'lead funder' approach had not been fruitful. Messages about simplicity, relevance and proportionality, as well as costs to be included within the grant, were contained in both the Home Office Code of Good Practice (Cabinet Office 2000, 2006) and the HM Treasury guidance to funders (Cabinet Office 2006).

An NCVO report in 2007, on the sector's relationships with government funders, found that:

• Short-term funding was still a problem and funders had not implemented the principle of full cost recovery.

• Reporting requirements were inconsistent and disproportionate, with organisations concerned that information collected was not meaningful and would not be used by funders.

• Requirements for information and the methods for demonstrating results were inflexible and there was a need for more joint negotiation of appropriate outcomes and outputs (Bhutta 2005).


'We currently deliver a contract for LSC London East which has a very heavy administrative burden – and has resulted in audits by three levels of funders. The monitoring and evaluation requirements of the project have changed substantially over the life of the project, and is the major reason we will no longer seek co-financing arrangements or contracts.' (National infrastructure organisation)


A 2007 study of commissioning experiences in six London boroughs focusing on health and social care found that work to the principles laid down by the Office of the Third Sector was extremely rare in commissioning practice, and that further development work would be required to make an effective transition to outcome-focused commissioning (Tanner 2007).

There were a number of other reports during this period demonstrating the common experience of over-burdensome and inappropriate levels of reporting across different parts of the sector (Alcock et al 2004; Cairns et al 2006; National Association for Voluntary and Community Action 2006). Our research provides more broad-based evidence, reinforcing the findings of these more focused studies. Together, these findings indicate an increased urgency to address:

• the increased volume of work, particularly to meet the demands of statutory authorities

• the inconsistency of reporting requirements across multiple funders, even between different departments of the same local authority

• the lack of proportionality in treatment of organisations of different sizes and types

• data frequently required that does not accurately reflect the core activities of the organisation or which is entirely irrelevant and inappropriate – often based by contractors on government targets.

Staff and volunteers feel over-inspected and over-stretched with issues for which they are not trained; most significantly, they lack sufficient resources to cover increased monitoring and evaluation demands. Three-quarters of respondents indicated insufficient time as causing difficulties, and this included medium-sized organisations that were unable to finance the training and support required to implement adequate systems. Even for the 13 per cent that had received increased funding for monitoring and evaluation over the previous five years, this was often not proportionate to increased demands. Thirty per cent did not receive any funding to cover monitoring and evaluation costs.

Several respondents in our survey pointed to the need for proportionate requirements, with the same requirements often being made 'whether £5,000 or £50,000 is involved'. This has considerable implications for smaller organisations wishing to compete in the market place, in terms of the 'upscaling' required. One organisation, with no paid managers, reported detailed and disproportionate reports being required for small amounts of funding, such as £1,000, which 'in itself can be threatening to our being able to deliver services as a disproportionate amount of time is spent satisfying these requirements'.

This research indicates that the administrative burden itself has implications for third sector workloads and value for money. In addition, some respondents reported a resulting adverse effect on monitoring and evaluation practice, with some organisations developing negative perceptions of monitoring and evaluation as a burdensome, externally-imposed task. The implication is that externally-driven monitoring has often displaced monitoring and evaluation which might be more useful for managing the service, improving performance and wider learning.

Councils for voluntary service officers, in close touch with frontline organisations, stressed the danger of over-heavy monitoring, with the costs of developing new systems for each funder contributing to a spiralling negative view of monitoring and evaluation. One suggested that 'I suspect that if the organisations felt there was more benefit from the monitoring and evaluation and/or reporting was not so funder specific, they would find both the time and the resources'.

'There is no acknowledgement of the time and money needed to properly monitor and evaluate for different funders when it is only part of a job description for the one and only employee we have who works part time.' (Local community group)

Addressing the issue of multiple reporting ... the cost of compliance

Although some funders interviewed looked positively at the potential to work cooperatively with other funders, their requirements for standardised information about specific outcomes for themed programmes may work against this. New Philanthropy Capital followed up a pilot project in Scotland, Turning the Tables, with an English pilot. Both projects addressed the issue of multiple reporting to multiple funders, and the costs entailed, and investigated the possibility of devising a standard report. Examples in the Scottish pilot showed the costs of complying with monitoring and reporting requirements. On average, the charities in the sample spent 4.5 per cent of each grant on reporting back to funders (Heady and Rowley 2008). The Scottish pilot also found evidence consistent with CES' current wider research: the lack of infrastructure and skills, and the lack of centralised databases, both impacting on the size of the burden, particularly for small organisations and particular sub-sectors.

Covering the costs of monitoring and evaluation ... different funders, different practice

Organisations pointed to the additional resources needed for staff time and for computer hardware and software.

Smaller organisations felt overwhelmed and, even in larger organisations, senior managers with monitoring and evaluation responsibilities were often doing a combined job – covering, for example, quality assurance, evaluation and performance management – often in the face of cuts. Some organisations reported using their own unfunded resources to carry out monitoring and evaluation, while others recognised that lack of staff capacity meant they were losing potential learning from the additional data.

Addressing the cost implications of monitoring and evaluation is closely tied in with the debate on full cost recovery. The Audit Commission's 2007 Hearts and Minds report said that local councils needed to work with the voluntary sector to develop a pragmatic approach to full cost recovery, within budgetary constraints (Audit Commission 2007). More than 40 per cent of all charities surveyed by the Charity Commission that were contracting with councils said they were not paid the full cost of delivering services.3 It was clear from our research that requirements for good quality monitoring information, or increasing requirements, were not routinely accompanied by an awareness of a need to fund the related costs appropriately.

In the funders’ survey, responsesshowed a wide variety of practice inmeeting monitoring and evaluationcosts in funded organisations. Sixty-four per cent of funders reportedspecific funding for monitoring andevaluation. Of these:

• 25 per cent funded monitoring and evaluation as an integral part of project funds.

• 54 per cent encouraged organisations to put money in their project budget.

• 14 per cent funded monitoring and evaluation against a specific budget line.

• 51 per cent gave specific funding if requested.

• 30 per cent said that they would provide specific funding in certain cases.

'The higher your monitoring and evaluation costs, the harder it will be to secure full funding, so one has to be careful not to put in too much for it.' (Local advice and advocacy organisation)

For a minority of funders, covering the costs of monitoring and evaluation was a priority. One funder reported that:

We almost always provide unrestricted funding but encourage a full costs analysis that includes a high level of evaluation and monitoring. We would not consider an application unless the applicant could show how their work is being evaluated.

However, most funders recognised their own resource issues as a constraint. One funder, sympathetic to evaluation, said that organisations were not invited to apply for monitoring and evaluation in the budget, as it was recognised that all were trying to get the bottom line down. However, small amounts for evaluation were built in on an ad hoc basis. For most of the statutory commissioners interviewed, there was no budget for evaluation; the concern was for monitoring rather than for evaluation. Yet our evidence showed that monitoring itself remained under-resourced.

Funders noted that smaller organisations were less likely to budget for monitoring and evaluation resources in their funding applications. Even where third sector survey respondents reported receiving core costs, there was pressure to reduce the amount in order to remain competitive, and funding for monitoring and evaluation suffered. One organisation said that it assumed there would be less evaluation with commissioning because organisations were going 'down to the bone' on bids, and were competing with private sector organisations that would not be thinking about evaluation.

These trends echo those noted in the United States and Canada, where accountability demands have also stretched voluntary sector capacity. This is shown in the box below.

Comparison with non-profit evaluation in the US and Canada

The UK situation echoes that reported by US observers, who have argued that critical issues such as organisational capacity and the allocation of sufficient resources to evaluation have not been addressed.4 Research on monitoring and evaluation activity in US non-profit organisations showed that difficulties in implementing evaluation activities stemmed not just from lack of funding or motivation, but rather from a lack of evaluation capacity – not having staff with the time and expertise to design, conduct and maintain data collection systems well-suited to the types of services that organisations provide (Carman 2008).5

A 2003 voluntary sector research project in Canada found that almost half the respondents reported that funder expectations had increased over the previous three years and increasingly focused on outcomes (Hall et al 2003). Despite this:

• Funders' increased expectations were not accompanied by increased financial support.

• Less than half of funders provided funding for evaluation.

• About six in ten reported offering advice on evaluation and about half provided evaluation tools and resources.

• Fewer than one-fifth offered training.


Gaining benefit from reporting requirements ... useful as in-house practices

Despite the frustrations, many third sector research participants felt increased accountability was appropriate, and found benefits for their organisations in evidencing their outcomes, particularly where there was good support from the commissioning body or grant funder.

One local infrastructure respondent said 'When targets are not being met, monitoring and evaluation allows us to know, and either adapt services to meet targets, or discuss changing targets with funders'. Although it was more time-consuming, several appreciated what they saw as a step up from their previous monitoring practices: they were better placed to record and monitor their work and some were able to roll out new systems – designed to meet specific funder requirements – across the organisation. This acceptance of monitoring and evaluation as useful in-house practices has also been recorded by other studies (Cairns et al 2006; Alcock et al 2004).

For some organisations in the research, the considerable investment in IT was welcomed, together with the related development of client management systems. For example:

• Increased monitoring requirements had led to the adoption of commercial software which for one organisation had 'accidentally given us strong internal management tools'.

• A case management system developed to meet the needs of a primary care trust could be extended to provide monitoring systems to meet Supporting People requirements.

• A management information system was being developed in partnership with a university department in order to meet extensive new monitoring arrangements.

A number of funders are now collecting data electronically. One major funder noted the strategic potential of online monitoring in terms of sharing data, but also the 'vision, time and resources' required to achieve it. However, issues of compatibility and administrative burden were reported by organisations as being greater, rather than lessened, where online monitoring had been introduced, because of the need to enter the same or similar data into multiple systems.

Different agendas ... aligning requirements more closely with internal needs

Among the CES research participants, one critical area of difficulty was the mismatch between the information required by funders and the information needs of the third sector organisations themselves. Organisations with social aims funded through regional development funding, for example, may struggle with a primary reporting focus on economic outputs. Many organisations wishing to move to an outcomes focus have found that statutory funders are lagging behind in their own expectations and requirements. One organisation reported that it monitored outcomes for its own Board, outputs for its national-level employer, and outputs and demographic information for the regional development agency.

'We know evaluation is important not only from a funding point of view, but it genuinely does improve what we provide.' (National charity)

This chapter has focused on third sector monitoring, largely driven by external reporting demands. Sometimes these requirements have brought with them a new organisational commitment to standardising information collection and making it useful. However, monitoring activities often remain underfunded or unfunded, and their external focus has consequences for the utility of monitoring data for evaluation purposes. This will be discussed further in Chapter 4 and Chapter 5.

Notes for Chapter 2

1 CES expects self-evaluation reports to contain a level of analysis and judgement in relation to evaluation questions or criteria, as distinct from a presentation of collated data.

2 For example, the Supporting People Quality Assurance Framework and new Outcomes Framework, and the housing inspection regime; block grants which replace the Department for International Development's Partnership Programme Agreements; and new requirements for the Commission for Social Care Inspectorate.

3 Cited in Cozens, A (2007) in Commissioning News, April/May 2007.

4 For example, Frederickson, HG (2000) 'Measuring performance in theory and practice' in PA Times, 23(8): 8–10, The American Society for Public Administration.

5 This study was based on data gathered from a survey of 305 non-profit organisations and interviews with 14 funders and 40 non-profit organisations.


Chapter 3
Building monitoring and evaluation knowledge and skills ... working towards a common language

Focus of chapter 3
This chapter reports on the range and extent of monitoring and evaluation support to third sector organisations and the provision of monitoring and evaluation resources and their effectiveness. It also describes the constraints placed on monitoring by inadequate data management systems and the early developments of IT systems to meet this need. The research found:

• There has been a significant development of monitoring and evaluation resources and skills in the third sector over recent years.

• Sufficiently resourced initiatives have been effective in developing basic monitoring and evaluation understanding and skills.

• The sector benefits from the diversity of support provision, particularly from sub-sector specific or localised or specialist input.

• Well-resourced and well-targeted initiatives have had a multiplier effect, providing a wide-reaching return to initial investment.

• Both funders and third sector organisations frequently under-estimate the amount of time and resources required to implement monitoring and evaluation, specifically outcomes approaches and data management.

• Inadequate IT systems and data management are holding back monitoring and evaluation in the sector.

The information explosion ... agenda for learning and networking

Evaluation is a relatively new practice. Even if it has not yet developed fully into a discipline, there is now a language of monitoring and evaluation recognised in the third sector. In this chapter we describe how this has been developed by non-profit specialist and other infrastructure organisations, and through funding bodies, university departments and private consultancies engaging in third sector evaluation. The UK Evaluation Society was founded in 1994 and third sector consultancies and evaluation departments are represented at UK events. Organisations can access good practice guidelines on the UK Evaluation Society website and go to workshops and network events.1

Language and concepts
Although mystification continues for many people working in the sector, the research demonstrates that there is a greater common understanding of language and concepts for those engaging with evaluation than, say, ten years ago.

In January 2006, an informal partnership of funders, government departments, regulatory bodies and voluntary sector organisations launched the first issue of Jargonbuster, to increase understanding between funders and providers by building a common language about planning, project management and performance improvement. One of the factors in this development of understanding and practice has been the huge increase in information available. Ball's 1989 report contained only six titles of helpful evaluation publications, a measure of the amount of monitoring and evaluation information targeted to the voluntary sector then available.

Resource guides
There has since been a revolution in information technology. When CES started in the early 1990s, we envisaged a publication print run of 1,000 copies would be the maximum, to be taken up over several years. In 2007 alone, the CES website had 1,637,049 hits overall and over 64,000 unique users, and nearly 18,000 free publications were downloaded, with these figures set to double in 2008. And the source of information to third sector practitioners has diversified, emanating from government bodies, funders, consultants, national bodies, regional and local networks and infrastructure, as well as from specialist third sector evaluation support bodies. As part of the research reported here, we were able to place on CES' website a guide to over 100 sector-specific tools and other resources on monitoring and evaluation, and to other online generic and specialist resource guides, providing immediate access to a broader literature on evaluation.2

Evaluation Support Scotland has similar information on its website. The Big Lottery Fund's guide to using an outcomes approach (Explaining the Difference Your Project Makes) and its evaluation reports are available on its website, making information instantly accessible to its grantees, and other funders have accessible information and resources. (Examples of these are given on page 24.) Barnardo's children's charity has produced learners' and trainers' packs focusing on using research evidence and outcome-focused evaluation. New Philanthropy Capital has a research tools team and is developing a set of research tools for the charities it assesses, and is also launching an online community.

A wealth of resources is potentially available from a wide number and range of sources, although it is clear that some organisations have had difficulty in knowing how to access these tools. Many of these resources reflect the growth of monitoring and self-evaluation within organisations and are targeted at organisations developing internal capacity to evaluate their activities, although some are the products of external evaluation. An overview of these resources is given in the box on pages 19 and 20.

Where organisations get information about monitoring and evaluation ... most frequently used sources and resources

Approximately one-quarter of third sector organisations in our survey described monitoring and evaluation tools and guidance, and their sources. The most frequently cited were as follows:

• CES training, publications, website and other materials were most frequently mentioned – by approximately one-third of respondents.3

• Evaluation Support Scotland, new economics foundation, NCVO and the National Association for Voluntary and Community Action (NAVCA) were other national infrastructure organisations referenced for their material.

• Resources provided by national infrastructure organisations were often used together with other systems or toolkits, frequently relating to specific sub-sectors: for example, toolkits by Sport England, Volunteer England, Age Concern and the National Children's Bureau; and new economics foundation's publication Proving and Improving (targeted at social enterprises), containing a number of different approaches, was mentioned (Sanfilippo 2005).

• Funders were mentioned as a source: these included the Learning and Skills Council, the Big Lottery Fund, Arts Council England and the King's Fund.

Some of the survey responses reflected the development of bespoke tools and tailored systems. There was evidence of a degree of borrowing of good practice and adaptation of existing evaluation tools, and sometimes wider research, as noted by one third sector survey respondent: 'We wrote our own toolkit based on research of public and third sector documents and good practice in the UK, Canada, Australia, New Zealand and the USA'.

Training, publications and the internet were most frequently reported as the way that organisations got information on monitoring and evaluation. This is shown in Table 2 on page 21.

Just over one-third of those defining themselves as community organisations drew on local infrastructure organisations (such as councils for voluntary service) for monitoring and evaluation information. The 23 organisations without paid employees were less likely to use the internet or consultants as a source of information and support. Generally, larger organisations had greater access to all forms of monitoring and evaluation support than smaller ones. The use of consultants was more likely to be found in larger organisations and, not surprisingly, the use of consultants increased proportionate to the size of the organisation. Black and minority ethnic organisations cited training courses more frequently than the mean (79 per cent),5 but use of the internet less frequently (42 per cent). Refugee organisations used the internet and publications least (37 per cent and 43 per cent respectively) but accessed other types of support as well. Rural organisations reported comparable levels of access for all sources of support except consultancy (11 per cent), possibly reflecting difficulties of geographical access.

The development of monitoring and evaluation resources

Publications and web-based resources specific or relevant to the third sector now range from those covering the basic principles of monitoring and evaluation to those explaining and providing guidance on the use of different evaluation approaches and models. General evaluation guidance includes CES' entry-level publication, First Steps in Monitoring and Evaluation, as well as CES' more comprehensive Practical Monitoring and Evaluation: A Guide for Voluntary Organisations and the new economics foundation resource Proving and Improving: A Quality and Impact Toolkit for Social Enterprise.

A number of the general evaluation guidance publications identified through the research were initially developed for use by organisations working on particular programmes, but are applicable to a wide range of organisations and situations. Others are more sector-specific, such as Learning, Evaluation and Planning: A Handbook for Partners in Community Learning and the international development focused Toolkits: A Practical Guide to Monitoring, Evaluation and Impact Assessment, produced by Save the Children.

Resources on outcomes range from introductions to an outcomes approach, such as CES' Your Project and Its Outcomes, through to a CD-Rom-based system such as the Spirit Level programme, which can create and store quality of life outcomes for individuals, and the Care Services Improvement Partnership web-based outcomes network.

The website of the Washington Urban Institute – www.urban.org – provides an outcomes indicator framework for 14 different programme areas. For each programme area there is an outcome sequence chart, a table of candidate programme-specific outcomes and data collection strategies with suggested data sources for each outcome indicator. There is nothing comparable in the UK, although organisations refer to statutory indicators, quality of life indicators and NHS databases, and indicators on the CES and new economics foundation websites.

Organisations and funders from different voluntary sub-sectors have developed a number of tools for measuring 'soft outcomes' – the less tangible, harder to measure outcomes. Some have been funder-led, such as a guide to measuring soft outcomes and distance travelled in European Social Fund projects, developed by the Welsh Funding Office. (You can read more about 'distance travelled' in Chapter 3.) Others are locally-based developments, such as Soul Record, which grew out of a need for voluntary organisations in Norfolk to evidence the progression of their clients in terms of informal learning. Others have been developed by organisations and sub-sectors.

Triangle Consulting has worked with Off the Streets and Into Work and the London Housing Foundation, both organisations working to tackle homelessness, on the design and development of tools and initiatives that also measure soft outcomes and distance travelled. The Employability Map tracks the progress of participants on programmes helping homeless people get employment. The Outcomes Star is another tool for measuring the changes that service users make as a result of homelessness agency support.4

There are a number of resources available that explain different evaluation approaches and models, many of them developed by US-based organisations. The Logic Model Development Guide by the Kellogg Foundation and the Logic Model Handbook by United Way can be downloaded from the organisations' websites. www.theoryofchange.org is a website-based resource developed by ActKnowledge and the Aspen Institute Roundtable on Community Change. The site provides a forum for information exchange, tutorials and tools for evaluating social change. In the UK, the Social Audit Network has produced Social Accounting and Audit: The Manual, which describes a framework for social accounting and what needs to be in place to make it happen. The new economics foundation has developed a guide to measuring social return on investment for organisations interested in measuring their social, economic and environmental impact – Measuring Real Value: A DIY Guide to Social Return on Investment. This tool, and the new economics foundation's Local Multiplier 3 tool, provide guidance for assessing economic impact, but there is little practical guidance on how to conduct impact evaluation generally, apart from more academic material.

Some voluntary and community sub-sectors are better served than others with monitoring and evaluation materials. The research identified several resources aimed at organisations working to combat substance misuse and those working with children and organisations in the arts sector; funders of these sectors have been particularly active in producing resources. The Heritage Lottery Fund and Children's Fund have produced resources for organisations working with children, and Arts Council England and Northern Ireland and Voluntary Arts Wales have produced evaluation material for the arts. The National Evaluation of the Children's Fund has produced a small range of practical tools for collecting evaluation data from children. Young Roots, by QA Research for the Heritage Lottery Fund, contains examples of 125 participatory activities. Tools Together Now: 100 Participatory Tools to Mobilise Communities for HIV/AIDS is one of a large collection of participatory tools produced by the International HIV/AIDS Alliance.

However, there are a number of gaps in the materials available. There is still a lack of resources providing practical advice and tools that make the link between the methodology, the questions the evaluation is seeking to address, and the development of indicators and data collection tools. The Social Exclusion Task Force's guidance Think Research: Using Research Evidence to Inform Social Development for Vulnerable Groups has a chapter on assessing and appraising research evidence. The Evaluation Support Scotland Support Guide series includes a basic guide to analysing information. But there is little targeted to the sector on survey design, a data collection method that seems to be fairly widely used by the sector.

Visit www.ces-vol.org.uk for more information and references for these resources.

Monitoring and evaluation training and support ... sources and uptake

One-third of our surveyed organisations had received no monitoring and evaluation training. However, to the 10,000 organisations attending CES training over the last ten years can be added the training filtering down through CES-trained 'outcomes champions' (see page 39 for information on the National Outcomes Programme and outcomes champions) and the ChangeUp Performance Hub (now replaced by the National Performance Programme).

Added to this is the training and information provided by national, regional and local infrastructure bodies, and the provision of training by other specialist agencies such as the International NGO Training and Research Centre (INTRAC), which provides support to international development and relief agencies, by the new economics foundation, by social enterprise and co-operative agencies, by generalist trainers, and by other private consultancies and public providers. The research also found increasing provision or funding for training from charitable trusts and foundations, and indications of growing local authority support for monitoring and evaluation initiatives.

From the evidence provided, we have identified five different types of focused monitoring and evaluation training:

• bespoke and applied training for individual organisations to put monitoring and evaluation frameworks in place, often supplemented by additional consultancy

• training the trainer, or cascade training, such as that provided by the CES National Outcomes Programme

• training provided to grant-funded organisations to meet the specific needs of funders (often within the remit of a particular programme)

• one-off training events, both open and in-house

• training provided to groups of organisations as part of a time-limited initiative.

Sources of information | Percentage of respondents using this source (n=526)
Training | 69
Publications | 56
Internet | 51
Funders | 42
Consultants | 24
Local infrastructure organisation | 24
Local authority or government agency | 24
National infrastructure organisation | 21
Head office or network | 14

Table 2 How third sector organisations get monitoring and evaluation information

Much of this training is introductory and does not provide technical skills in data collection and analysis, for example. However, the research found evidence of a slow building of in-house competence and resources in key agencies, often assisted by consultancy support from universities or from a growing range of private and not-for-profit consultancies, and drawing on the increasing number of publications and other resources available.

The West Midlands and North East were the English regions that most frequently reported receiving monitoring and evaluation training, and the East of England and South West regions considerably less frequently. Organisations working with volunteers, with no paid staff, were those least likely to have received training (57 per cent as opposed to 31 per cent). Newer organisations and those identifying as projects rather than organisations also less frequently reported having received monitoring and evaluation training.

Monitoring and evaluation in the context of supportive funding ... good practice approaches

Third sector research participants appreciated a more engaged and trusting relationship, experienced most often with some independent funders, and reported support on monitoring and evaluation within this context. The Big Lottery Fund, the largest third sector funder, has developed explicit guidance on outcomes. Other independent funders have also clarified expectations at the application stage. City Parochial Foundation, for example, makes initial, interim and final evaluation report requirements clear, providing support to organisations to develop evaluation frameworks at an early stage, and providing a link to additional support if needed. Northern Rock Foundation provides forms and guidelines to download. Its Evaluation Planning Form includes a question about how organisations will share and promote what they have learnt.

Funders in our survey sample identified the importance of a supportive approach within the funding relationship. Eighty-five per cent of the sample carried out monitoring visits and, of these, over three-quarters identified such visits as a part of establishing a face-to-face relationship and part of a supportive, developmental funding approach. Only one-quarter of funders reported using monitoring visits to check on monitoring systems and their adequacy to provide required information.

Half the sample – both large and smaller funders – reported providing training or ad hoc support on monitoring and evaluation to organisations. From our interviews we found some funding bodies, such as London Councils, developing training for their commissioned organisations. However, those providing consistent support on monitoring and evaluation were in a minority. Most of those providing any support did so in the form of ad hoc support to identify planned outcomes and impacts, and two-thirds reported ad hoc support to set up a monitoring and evaluation framework and develop self-evaluation methods and a monitoring system. Of the 14 funders that provided training:

• ten offered training on planned outcomes and impacts

• nine provided support on developing self-evaluation methods and monitoring systems

• seven helped with setting up a monitoring and evaluation framework.

The City Bridge Trust regards itself as an engaged funder, but recognises practical limitations: 'Very often it is the borderline groups that need a bit of help but this can be very hard work as they are more vulnerable, and as a trust we do not always have time'. For some funders, there was a more fundamental attempt to change the basis of the funding relationship. The London Housing Foundation bases its relationship primarily on whether the organisation learns and reports back, while the Diana, Princess of Wales Memorial Fund, which is developing a new focus on using evaluation and learning to increase advocacy and impact, also sees that the 'focus on the difference being made provides the core of the relationship rather than the budget line'.

Abdy and Mayall (2006), in their study of six relationships between voluntary sector organisations and their funders, cited the helpful response of the Derby City Partnership, which ran a monitoring and evaluation workshop for all Small Change programme grant recipients at the outset of the grant, following early warnings that organisations needed more help with monitoring. The Sandwell Interagency Commissioning Group drew attention to the ACES mentoring programme run in partnership with West Bromwich Albion, as an example of benefits obtained from a systematic approach to involvement with voluntary sector organisations at every stage of the evaluation process, with a shared agenda and jointly-held goals (Sandwell Interagency Commissioning Group 2006).6

In Scotland, the Laidlaw Youth Trust was providing projects with access to free, ongoing support from Evaluation Support Scotland through an Evaluation Support Account, which gives charities a 'bank' of evaluation support time. In England, a number of funders – including City Bridge Trust, City Parochial Foundation, BBC Children in Need and the Nationwide Foundation – were either subsidising their grantees' places on CES' open training or arranging bespoke training for them.

Management and evaluation consultancy and support providers ... main sources

The research identified six main sources of consultancy and technical support to third sector organisations:

• Non-profit organisations providing specialist evaluation support – for example CES, the International NGO Training and Research Centre (INTRAC), new economics foundation, the Evaluation Trust and, in Scotland and Northern Ireland, Evaluation Support Scotland and Community Evaluation Northern Ireland. There are also specific services, such as the Telephone Helplines Association Evaluation Service, which provides evaluations of helplines.

• University departments – such as the Centre for Charity Effectiveness at Cass Business School and the Institute for Voluntary Action Research (IVAR). Many of these have a focus on external evaluation. However, some have been involved in developing specific tools, such as the Centre for Regional Economic and Social Research, Sheffield Hallam University, where Julie Manning has developed a Social Return on Investment methodology for measuring the economic and socio-economic impacts of investments in voluntary and community organisations, and Brighton University, which worked with London Enterprise to develop an evaluation framework for small social enterprises.

• Private consultancy organisations – these frequently work across public and third sectors. They often provide a specialist focus and are increasingly drawn into outcomes-focused evaluation work.

• Specialist evaluation departments within national organisations – these vary in their focus and ability to support frontline organisations.

• Regional or local infrastructure – for example councils for voluntary service, or sub-sector infrastructure or network organisations.

• Independent evaluators – who may work as freelance sole traders or as associates for consultancy organisations, university departments or national organisations.

'Funders who have good relationships, and carry out technical checking with applicant groups prior to funding being awarded have a greater chance of monitoring being provided accurately and in good time.' (Funder)

The research suggests that this diversity of provision is of great value to the third sector, bringing creativity, developing skills and, in practical terms, increasing local provision. However, evidence also suggests that such diversity results in considerable duplication of effort in terms of tools development and lack of coordination. It is also not clear whether the potential for sharing and embedding learning within the wider sector is fully realised from consultant-led work and from local initiatives. These initiatives are further discussed in the next section.

Capacity-building initiatives ... partnerships and approaches

Funder-led initiatives
Our interviews and wider review found a small number of further examples of support provided by funders – in addition to providing or funding training for their grantees – such as localised posts and initiatives. Specific toolkits, such as The Evaluation Game developed by the Scarman Trust and the Paul Hamlyn Foundation's user-friendly Evaluation Resource Pack, developed in 2007, provide an effective method of disseminating basic models and methods for self-evaluation.

The research suggests that more direct support has the potential for more focused capacity building, but may have a restricted remit or be time limited. The Kirklees Shared Evaluation Resource was originally set up to look at the evaluation of regeneration initiatives.7 It is now funded by the Kirklees Primary Care Trust, and has broadened its remit to support evaluation activity across many of the major Neighbourhood Renewal-funded programmes. The resource does not specifically build capacity with third sector organisations unless they are involved in a programme such as Sure Start.

By contrast, the Plymouth Monitoring and Evaluation Development Group was a council-funded initiative from 2004 to 2006. It was formed to increase the quality, knowledge and use of monitoring and evaluation across the city and, although short-lived, was an interesting model. The group included local practitioners, key funding bodies and service providers: it produced a set of 'how to' guides and provided basic monitoring and evaluation training and some one-to-one capacity-building support.

An initiative in the Northumberland Families and Children's Trust aimed to promote a longer-term impact on monitoring and evaluation practice. It was targeted at both public sector officers and voluntary organisations. The Children's Fund funded an evaluation coordinator post within Northumberland County Council, planned initially until March 2008 'to help organisations understand how powerful a tool evaluation can be'. The postholder produced basic worksheets using CES materials and carried out basic training. The aim was to develop 'a community of practice across Northumberland with regard to evaluation'.8 Workshop sessions targeted public and voluntary sector providers, with the theme 'developing Evaluation Champions in Northumberland', and the project aimed to leave a legacy of evaluation champions.

Although, as discussed on page 22, monitoring and evaluation support by funders is frequently ad hoc and reactive, the evidence shows that grants officers are in a prime position to provide such support. Nottingham Children's Fund engaged over 20 projects, primarily voluntary sector-based, in developing and implementing a monitoring, performance management and evaluation framework, based on 'Logics and Results', in order to meet the demands of reporting against Every Child Matters outcomes.9 CES has provided monitoring and evaluation training for funders since 2003. Sandwell Metropolitan Borough Council grants officers have attended CES outcomes-based training, and have been able to support groups practically, for example by helping them to devise questionnaires.

Is capacity building effective?
A study by the University of Hull Business School (Boyd 2002) suggested that there was little impact on participants in single one-day workshops, which were provided as part of a 2000/01 initiative (the Haze Project) to build capacity in statutory and voluntary and community organisations. By contrast, we found that funders that had invested in monitoring and evaluation support to organisations reported a worthwhile return for their investment. For example, the City Bridge Trust was able to make a link between involvement in learning sets required of the organisations funded through its Golden Jubilee Grants programme in 2002 and subsequent project quality.

For over ten years, BBC Children in Need has been providing training through CES to two participants from selected funded organisations across the UK, and reported that using the self-evaluation model covered in the CES training enabled organisations to provide a greatly improved standard of evidence of benefits to children. The organisations involved have been those which have received three-year funding for new salaried posts, and BBC Children in Need is currently considering whether to extend the model to other organisations. The 2007 report on the Voluntary Action Fund Outcome Monitoring project in Scotland found some slow improvements in writing reports and more capturing of soft outcomes, and 46 per cent of the projects had put in place tools such as questionnaires, surveys and exit interviews (Evaluation Support Scotland and Voluntary Action Fund 2007).

More practical support from funders has also had benefits. SHINE Trust provides an electronic monitoring and evaluation spreadsheet for use by all its projects. This has improved monitoring standards by ensuring uniformity of the data collected. Hertfordshire County Council delivered support through training on evaluation for groups working with young people, and provided informal hands-on support, getting some organisations to collect baseline and follow-up statistics on anti-social behaviour. As a result some of the projects developed and placed themselves in a much stronger position to obtain funding.

Leicestershire Children's Fund worked with its funded organisations to develop evaluation frameworks using the CES planning triangle model, and to develop data gathering tools.10 CES consultants later worked with the funded organisations to improve the standard of their reporting against the frameworks and provided training on areas of difficulty. Leicestershire Children's Fund reported that organisations initially started out with no experience of outcomes and data analysis, but that enthusiasm for monitoring and evaluation, initially rated at 20 per cent, rose to 60 per cent following the first evaluation support and reporting process.

The London Housing Foundation IMPACT programme was an example of intensive support over a period of time which has seen significant wider impact as soft outcomes models have been disseminated into the wider housing and homelessness sector. Over the eight years of its programme, London Housing Foundation worked with CES and Triangle Consulting to train over 220 people from a range of organisations, and then supplemented this with technical assistance, finally working with 14 organisations to provide a greater level of support for putting an evaluation framework into practice.11

These initiatives show that support from funders can be effective. The commitment can be considerable, yet the research showed that there is scope for more limited but potentially helpful support without resource implications. For example, funders could provide weblinks to evaluation resources. Yet 45 per cent of our sample of funders neither signposted nor referred to any source of evaluation support. Of those that did, most referred to a local infrastructure organisation, three-quarters referred to a specialist monitoring and evaluation provider, and just over one-third to a national infrastructure organisation.

Monitoring and evaluation support from national and regional organisations and infrastructure
The evidence indicates that many national organisations do not have the resources to provide comprehensive evaluation support to projects. Where an evaluation department exists, it often focuses on training and support to its regional staff. The National Society for the Prevention of Cruelty to Children (NSPCC) evaluation department offers support to individual projects and teams, but has too many local projects to deliver more than a reactive response to requests for support. NSPCC also employs an evaluation officer based in the service delivery team, who is able to offer proactive support, for example in planning internal conferences to address particular issues. NCH Action for Children issues guidance on how local-level services can fit within Supporting People requirements and within the Every Child Matters framework.

The Community Technical Aid Centre (CTAC) was funded in 2002 through its Monitoring and Evaluation Project to develop a system for evaluating impact on delivery of regeneration policies, community plans and local agendas. It evolved an evaluation framework for use in CTAC and in other technical aid centres across the country. CTAC also developed self-evaluation capacity through mentoring, and a framework of social accounting was placed around the monitoring and evaluation process. CTAC delivered seven taster sessions about social accounting to 60 third sector organisations, and delivered five training sessions about monitoring and evaluation to 50 learners on behalf of the infrastructure consortium, now the Greater Manchester Voluntary Sector Support, for which the Greater Manchester Centre for Voluntary Organisation (GMCVO) is the accountable body.12

However, this initiative did not take root, and although GMCVO continues to promote outcomes monitoring, researchers were informed that 'social audit has not taken off locally'.

Sub-sector-specific initiatives
We found that monitoring and evaluation initiatives emerging from, or targeted to, particular sub-sectors can offer several advantages. They can reference common or core areas of work and outcomes. (See the homelessness outcomes work on page 26 and the NAVCA outcomes-focused quality award for infrastructure organisations on page 38.) They can also develop tools that reflect the dominant ethos and respond to reporting needs. Sub-sector initiatives also permit relevant case study material to be developed. This was the case in a joint project between CES and the Refugee Council (2006/08), which included training on monitoring and evaluation and a new publication specific to refugee community organisations (Latif 2008). Faithworks' Impact and Excellence project, funded by Capacitybuilders, included training and the development of a toolkit to help faith-based projects implement quality assurance standards and impact measurement techniques.13

The research found a number of initiatives covering social auditing. From 1995 to 2000, new economics foundation pioneered a social auditing approach and helped to form the Institute of Social and Ethical Accountability to promote professional standards for social accounting. The Social Audit Network has developed a course to train social enterprises in how to implement a reporting framework covering the range of organisational activities. The approach uses a three-step process to plan, gather information and audit the enterprise's social accounts. new economics foundation and the Social Audit Network have worked in partnership on a social accounting framework to meet the needs of smaller social enterprises and community organisations. This work culminated in the foundation's publication Proving and Improving: A Quality and Impact Toolkit for Social Enterprise, which contains a range of tools and an indicator bank, which are accessible online (Sanfilippo 2005).

Co-operativesUK provides an example of sub-sector-focused performance measures. It launched its Key Social and Co-operative Performance Indicators project,14 intended to capture the cooperative, social and environmental performance of a cooperative, offering a series of training events from 2005. In 2006, Co-operativesUK provided members with an electronic reporting framework to enable them to monitor their performance. Co-operativesUK reported that the framework had been introduced to a section of its membership – consumer retail societies. After three years the level of reporting has increased and Co-operativesUK is able to provide the societies with benchmarking data. Most retail societies are also including performance on the indicators in their annual reports.

Some sub-sector infrastructure and networks offer monitoring and evaluation training, largely at an introductory level. For example, Homeless Link offers seminars on outcomes, and Sitra, the umbrella organisation for the housing, care and support sector, offers a limited amount of monitoring and evaluation training. BTCV Yorkshire and Humber (formerly The British Trust for Conservation Volunteers) provides monitoring and evaluation support to its networks by email on an ad hoc basis. From CES' experience, such ad hoc support can provide targeted, responsive input to meet immediate monitoring and evaluation needs.

Consultancies
Research survey participants most frequently referred to non-profit specialist evaluation agencies for their technical support in monitoring and evaluation. But the broader research found a wide range of private consultancies that have developed monitoring and evaluation support to third sector organisations, as well as providing external evaluation services. Other third sector organisations, having developed some evaluation expertise, are also looking to develop monitoring and evaluation support services. Technical support can include helping to develop monitoring and evaluation frameworks, developing a conceptual understanding of a project and helping to develop tools.

Consultancy work was increasingly found to combine technical capacity building and external evaluation, although the relationship between the two differed. Consultancies can bring specific areas of expertise, for example on economic issues and regeneration, health, action research, participatory approaches and service user involvement. Some, such as the Centre for Local Economic Strategies, have well-established courses on evaluating projects and programmes and some, such as Research for Real, based in Edinburgh, have a UK network of associates.

The research shows that these diverse sources of technical expertise and assistance have greatly strengthened the development of monitoring and evaluation in the third sector. However, it also found that issues of internal capacity and organisational culture are key to that development. This is explored in the rest of this chapter.

Developing a monitoring and evaluation culture ... organisational embedding and integration

The location of monitoring and evaluation responsibilities can be a strong indication of their integration into, or isolation from, core management functions. Monitoring and evaluation tasks were reported by the large majority of our survey respondents as located within job responsibilities and functions; however, for 10 per cent of organisations, monitoring and evaluation was not a formal part of any job responsibility. Management posts were most likely to have formal monitoring and evaluation responsibilities, but we often found that the monitoring and evaluation function had been placed in the finance or fundraising department, either by default, or with intent.

While there were some advantages (including strong motivation to meet reporting requirements and to use information as leverage for fundraising), several organisations, funders and consultants reported disadvantages with that location of evaluation functions. Disadvantages included a failure to share understanding about outcomes, or to develop motivation, across the organisation and among frontline staff, and little emphasis on using monitoring and evaluation to review services and outcomes. And for those working within the finance department, access to policy functions could be difficult.

Several respondents pointed to the importance of the responsibility for monitoring and evaluation being shared among all staff. However, 16 per cent of our third sector survey respondents had a specialist monitoring and evaluation post. For Quaker Social Action, a post with dedicated responsibilities for monitoring and evaluation had enabled a concerted effort to embed and integrate monitoring and evaluation across the organisation. Somerset Community Food is a small organisation with a lot of dispersed work. Having a part-time monitoring officer post allowed more creative and broad-based monitoring for a large three-year project with a variety of different aims.

Constraints to introducing a systematic approach to monitoring and evaluation
CES' experience and further research evidence have shown that, even with an organisation fully engaged in the task, developing an outcomes focus in particular is technically complex and may take several years. However, there are a number of issues that make the introduction of monitoring and evaluation more generally a complex and frequently lengthy process:

• Consultancy help and a drive to change from the top may encounter difficulty if monitoring and evaluation is not felt as a priority for the organisation or group as a whole.

• Generally speaking, too few resources are put behind the change effort, and budgets for consultancy are inadequate for the level of support required.

• Larger organisations trying to introduce a new culture may not be able to invest in staff to support the change or may even have to cut those posts able to facilitate change across regional and local offices.

• Organisations often under-estimate the amount of support required to build administration skills and new frontline skills and ways of working.

• Most important, it can be difficult to introduce a culture change – to 'change people's hearts and minds'.

A real fear of additional work and the lack of an open culture were also reported as constraints. Care for the Family worked with an internal evaluation group, drawing on the skills of external consultants and a part-time researcher over three years, resulting in evaluation taking a more central role in its work, 'becoming much more part of our organisation's DNA'. SENSE, working with the Compass Shout about Success programme to improve its reporting and focus on impact, found that it was important to work with a task group made up of motivated people from different parts of the organisation to encourage more integrated development.

Many participants in the research felt that more specific skill building was required to improve the quality of data. However, the research demonstrated the time required to develop research skills and the steps needed to ensure that skills are embedded within organisations, which can be difficult when most staff and volunteers are concerned with frontline work. Some respondents felt that practical time issues meant that they could not always put their skills and knowledge into practice.

Several organisations pointed to specific monitoring and evaluation difficulties, such as the need for language translation, sensitive work areas or difficult-to-assess outcomes. One organisation reported how difficult it was to define outcomes for a client group whose health declined rapidly. Another area reported as problematic was advocacy: this work is highly client centred and it seemed difficult to look at patterns across cases. For some organisations, such challenges became a complete barrier, for example:

We work in a sensitive area and apply full client confidentiality so clients are under no obligation to tell us anything and we try to avoid 'interrogating' them to gather statistics.

Finally, organisational culture was recognised as an important factor. Sometimes this related to the extent to which routine monitoring was felt to be compatible with types of activity, such as street- or community-based work. Other issues concerned lack of management buy-in and non-compliance or lack of understanding of its value by staff. The importance of changing organisational culture was captured by one respondent:

Staff working at service level have a varied understanding of monitoring and evaluation. Most have a commonsense idea of collecting information and why, but others say we are here to help people, not to do the paperwork…there are difficulties developing people's understanding and awareness – this is the real problem. It's hard to spend time selling this when you are just trying to get things done.

The main areas of difficulty expressed by third sector survey respondents are illustrated in the box below.

Constraints to developing a monitoring and evaluation culture

Third sector respondents recorded the following as constraints:

• Time: Three-quarters (76 per cent) of third sector respondents identified insufficient time for monitoring and evaluation as causing difficulties. This rose to 90 per cent of local offices of larger organisations. However, this was a significant issue even for larger organisations.

• Money: Just over one-third (39 per cent) of respondents identified lack of money as a key constraint and for organisations with no staff this rose to 60 per cent.

• Skills: Just over one-third (35 per cent) identified lack of skills. Often, there was a dependence on volunteers, who did not have the necessary skills. Buy-in from volunteers and their involvement in developing evaluation questions can be vital. For example, the Motor Neurone Disease Association's volunteers are involved in a very direct way with the organisation, managing evaluation questionnaire responses locally across the 94 branches and groups and encouraging people to complete them.

• Data collection: There were a number of practical issues identified as constraining monitoring and evaluation. The most important of these was finding appropriate or manageable data collection methods and getting responses back (affecting nearly half of all respondents).

• Outcomes and impacts: Just over one-third (39 per cent) of respondents overall reported identifying outcomes and impacts as a difficulty.15 Related problems were identifying indicators and getting baseline information. Several mentioned difficulties of making a connection between activities and change.

'It's about having the time for staff to spend on doing it or to oversee volunteers doing it for them. It's not obvious to me that all the capacity building in the world does anything to improve this situation for us.'
Local voluntary organisation

'We are not proactive at all and this simply isn't part of our working culture. Our focus is traditionally quantity rather than quality.'
National charity

Managing data ... overview of systems used by respondents

One of the critical issues reported in developing monitoring and evaluation was difficulty with data collection and analysis. (This was identified by 50 per cent of third sector respondents.) Participants reported:

• a general lack of both generic IT skills, and skills in data analysis

• difficulties in getting people sufficiently well-trained throughout more complex organisations both to prioritise data collection and to provide consistent information

• reliance on frontline staff and volunteers who did not commit to data collection or did not have the right skills

• collecting qualitative data in ways that did not permit analysis

• inadequate database systems that were usually unable either to provide the data needed or to successfully deliver outcomes information

• inadequate systems and skills to analyse data generated from 'distance travelled' tools16 and participatory methods

• inability of organisations to shift project-based staff into systems and data work.

In our survey, the great majority of third sector organisations (80 per cent) reported using a combination of electronic and paper-based systems (including three-quarters of those organisations without paid staff), while 12 per cent of respondents reported using IT-only systems. A relatively small number used paper-based systems only. Describing their software:

• 37 per cent used off-the-shelf IT systems

• 36 per cent used customised off-the-shelf systems

• 27 per cent had a system designed and built for them.

A small percentage used alternative quantitative packages, such as SPSS, and very few (2 per cent) used qualitative software packages, such as NVivo. Many organisations used a Microsoft Excel spreadsheet or a simple database in Microsoft Access to store and manage information. Some organisations reported developing comprehensive databases, with customised systems or bespoke systems being geared up in some cases to deal with new demands. For example, BTCV Yorkshire and Humber offices had a custom-built system that could be accessed by project workers through an internet browser, which could generate reports for different funders and provide volunteer profiles against various indicators, such as indices of deprivation, disability and ethnicity.

Some third sector organisations reported being in the process of introducing new IT systems, or mapping existing data collection against new monitoring needs. One explained: 'We are currently mapping all our reports and requirements so as to ensure that the electronic patient notes we are developing will meet monitoring requirements'.

CES' experience, and that reported by others working with frontline organisations, is that most are struggling with inadequate systems. With their funders as their primary stakeholder for monitoring and evaluation, new databases have frequently been acquired to meet reporting requirements, sometimes later found to be less useful for other purposes, such as processing qualitative data and meeting different funder requirements. For many there has been neither the time nor the financial resources to develop an appropriate system, or there is little knowledge of what is available and how it could help. Added to this was the perception that data management was difficult.

For some of the larger organisations in our research, IT had become a critical issue. Without a core database, or with an inadequate one, some reported that they were struggling to introduce reporting against a standardised set of indicators. A number of respondents reported databases, some quite newly acquired, often geared towards the output data required by funders, that were inadequate for assessing outcomes. Organisations have found it difficult to find adequate off-the-shelf systems and relevant information. Additionally, the research indicates that investment in IT expertise to adapt software packages or design databases may not meet the different requirements for multiple funders.

'All clients now have an individual outcomes profile and are tracked through the organisation. The organisation has defined core outcomes and all funding and new projects are designed with measurable and achievable outcomes.'
Drop-in centre for young people

Although there has been an increased focus on ICT development recently through the ChangeUp ICT Hub, the research found little information available on IT systems for monitoring and evaluation.17 A new workbook published by the Performance Hub on using ICT for effective monitoring and evaluation provides guidance on the different options for developing an information system and also on issues such as sharing data, linking with other systems and security (Davey et al 2008). We found a limited amount of training on monitoring and evaluation IT systems available, although net.gain (a programme funded by Capacitybuilders to help third sector organisations tackle ICT planning) is researching needs in this area.

We obtained information from 15 systems providers offering software specifically for third sector monitoring and evaluation and placed information on the CES website.18 (This did not include bespoke systems.) We also interviewed 14 organisations that had adopted these systems. They reported that, as well as helping with the work itself – for example through improved efficiency, improved client work, better access to information and greater ease of outreach work – they were obtaining more consistent monitoring information and they were more able to achieve a system which met all their different reporting needs. A brief summary of the systems reviewed is provided in the box below.

Summary of monitoring and evaluation software systems

Monitoring and evaluation software systems have a number of defining characteristics:

• Some systems are particularly relevant to different sub-sectors (for example AdvicePro for advice work, Link for (larger) homelessness organisations, ContactLINK for infrastructure organisations).

• Some systems are targeted at specific sub-sectors (for example CharityLog for Age Concern offices or Volbase for councils for voluntary service) but spread out from there to similar organisations.

• Some systems attempt to cross the sectors by focusing on supporting the way people work (for example Caseworker Connect).

• Some systems are designed to offer different things to different types of organisation (for example Lamplight, supporting both frontline and infrastructure organisations).

• A few systems provide soft outcomes measurement only (for example SpiritLevel and Soft Skills Toolkit). These may be useful for organisations that are already monitoring outputs and profile but need something in-depth for soft outcomes. However, these systems do not appear to be able to aggregate data or to link to outputs.

• Some systems from the United States are beginning to be used in the UK. These are highly developed systems, but are expensive so more likely to be used by statutory services.

• Some systems are driven by funding streams (for example SP Provider and SPA, both relating to Supporting People contracts, and AdvicePro, developed for advice agencies with Learning and Skills Council contracts).

Visit www.ces-vol.org.uk for more information on these systems.



Some of these systems are recent developments; most are still being used by only tens of users and often information about them is passed via word of mouth. At the time of interviewing, some of the systems were just beginning to develop outcomes tracking and reporting facilities. Seven of the 14 system users interviewed used their system for outcomes information. Some of the systems still required data to be exported to Microsoft Excel for graphs and other functions.

Many systems have focused on case management and event management, and have been slow to address outcomes. One user of AdvicePro – which was required for a Learning and Skills Council contract – said that it was almost too sophisticated for them: 'Change management is the main issue for the culture. My casework team is well advanced in terms of IT but we still struggle to record things – we are busy doing the job'. An advocacy organisation with 11 paid staff, using Case Connect, a Microsoft Windows-based system, had previously been unable to produce the extent and accuracy of reports required, and needed to move forward from handwritten case notes. Staff are now able to record outcomes and undertake case management with the system. One software systems user said:

Generally it has massively improved the way we work, the standard of case recording, statistics, evidencing what we do, which has been very well received. We can produce detailed reports… Yes, it was more expensive, but it is worth the money. We're almost there in terms of getting the most out of it.

The process of introducing a software system and its benefits are demonstrated in the case example below.


'The whole process is a lot more about people and a lot less about IT than people think. You have to take a long-term view and be very committed. It's taken us years but we are beginning to see the benefit.'
Volbase system user

Preston Community Arts Project – trialling soft outcomes software

Background
Preston Community Arts Project employs three permanent staff and nine project-funded staff on arts-based community projects. Four of the project staff work on a Community Radio project (Preston FM). At key points, the project has captured user outcomes relating to improved self-confidence, self-esteem and other soft skills, using one-to-one appraisals, case studies and self-reported scale rating data.

Soft outcomes software
In spring 2007, Preston FM began trialling a new Soft Outcomes Measurement System – software developed as part of the Kirklees ESF EQUAL Project, Common Ground, which aimed to measure soft and intangible outcomes, particularly among socially excluded groups disadvantaged in the labour market. The software is designed to allow participants to fill in a paper or screen-based questionnaire at various stages during their involvement with a project. A quantitative analysis of the 'distance travelled' by participants can then be produced for ten core areas, such as confidence, teamwork, problem solving and self-esteem.
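The report does not set out the scoring rules behind the Soft Outcomes Measurement System, but the kind of 'distance travelled' calculation described above can be illustrated with a minimal sketch. The scale, area names, scores and choice of denominator below are assumptions for illustration only, not details taken from the system itself.

```python
# Minimal illustration only: the scale, area names and scoring rule are assumed,
# not taken from the Soft Outcomes Measurement System described in the text.

MAX_SCORE = 10  # assumed maximum possible questionnaire score per core area

# Hypothetical before/after scores for one participant in four of the ten core areas.
before = {"confidence": 4, "teamwork": 5, "problem solving": 3, "self-esteem": 4}
after = {"confidence": 7, "teamwork": 6, "problem solving": 5, "self-esteem": 8}

def distance_travelled_ratio(pre: int, post: int, maximum: int = MAX_SCORE) -> float:
    """Change in score expressed as a ratio of the maximum possible score."""
    return (post - pre) / maximum

for area in before:
    ratio = distance_travelled_ratio(before[area], after[area])
    print(f"{area}: distance travelled = {ratio:.0%} of the maximum possible score")
```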

Trialling the software
Before and after their seven-week course, which included time spent on air during an FM community radio broadcast, participants trialled both on-screen and paper questionnaires. The software allocates record numbers to each individual, allowing the personal information collected to remain anonymous. In the second trial, participants were given the option to put their name to their responses and get feedback on their progression.


The results show the ratio of distance travelled to the maximum possible score. There were positive results in eight of the ten core areas. They included improvements in motivation and autonomy, which had not been measured before.

Reviewing the soft outcomes software
Feedback on the use of the software was positive, with participants preferring the on-screen version. There were some software design issues: questions were not sufficiently geared towards the context of the project, which made it difficult for participants to make a connection to their own roles and situations. Some found the language difficult, and took a long time to complete the questions. However, there were real benefits. Even people with limited computer skills found it easy to use the system; it was easily networked and the software stood up well to multiple users (up to ten) accessing questionnaires.

The software allowed outcomes to be quantified that Preston FM had previously found difficult to measure and, combined with other evidence, this provided a good demonstration of the project's effectiveness as a stepping stone to further training or employment. Participants liked the anonymity of the process and it was felt that they were encouraged to give genuine answers. Most importantly, Preston FM hoped that future analysis of the data would allow them to adapt delivery methods in low-scoring areas.

Addressing data management issues ... ways to improve reporting capacity and quality

From the research, practical issues of data management appear to compound the effects of the predominant focus on performance reporting, with its heavy demands on available resources, and may be holding back the development of evaluation within the sector. Consultancy that works together with organisations to identify internal information needs and critical evaluation questions (as well as external requirements), and to develop the means of collecting data and reporting effectively, is helpful but does not itself resolve the resource constraints. A study in the United States of 298 projects found similar results (United Way of America 2000). More than half the US respondents agreed that implementing programme outcome measurement had overloaded their record-keeping capacity. Two-thirds reported difficulty identifying manageable data collection methods, relevant outcome indicators and appropriate outcomes. In this study, one consultant urged funders to consider the value of covering the costs of designing software to capture the data. The research strongly suggests that resourcing monitoring and evaluation IT development would significantly increase reporting capacity and quality.

A 2005 CES study for London Housing Foundation's IMPACT Programme looked at the need in the sector for IT-based outcome data management solutions. The report focused on 25 organisations in the homelessness sector but its findings are relevant to the wider third sector. There was a clear sense that organisations want IT-based outcome data management systems (Parkinson 2005:11). The report concluded: 'It clearly does not make sense for the sector to develop a multitude of different individualised systems for outcomes data management'. This research reinforces the conclusions of that study that a strategic approach is required to develop and introduce appropriate monitoring and evaluation software systems to the sector.


Since 1998, when proposals were first formulated to strategically support voluntary sector monitoring and evaluation, there have been substantial developments in resources available, and in third sector monitoring and evaluation knowledge and skills. Yet the encouraging developments reported by those participating in this research do not represent the experiences of those organisations that still find monitoring and evaluation irrelevant or its language inaccessible. Development across the sector is uneven, and those carrying out monitoring and evaluation need more technical skills and better systems. The need for strategic support remains pressing.


Notes for Chapter 3

1 See www.evaluation.org.uk

2 See www.ces-vol.org.uk

3 This effect may be explained by the presence of respondents from the CES database within the much larger third sector survey sample. For information on the survey distribution, see Appendix 1.

4 The London Housing Foundation has set up a dedicated website at www.homelessoutcomes.org.uk with guidance on how to use the Outcomes Star, and the Outcomes Star ladders are available to view online.

5 We found no clear explanation for this, although some targeted dissemination of the survey through networks to those refugee organisations attending monitoring and evaluation training might have affected the figure.

6 This programme, which ran from 2003 to 2007, engaged over 2,000 young people from Sandwell high schools, West Bromwich Albion's Community Programme and a range of local voluntary and community organisations.

7 See [email protected]

8 www.northumberland.gov.uk

9 See Nottingham Children's Fund Strategic Plan, 2005–2008, www.nottinghamcity.gov.uk/nottingham_childrens_fund_strategic_plan_2005-08.pdf

10 The CES planning triangle is discussed further on page 54.

11 Information on this programme is available at www.lhf.org/uk/IMPACTProgramme

12 Information from Community Technical Aid Centre website www.ctac.co.uk, 2.04.06. This website is no longer available as CTAC is no longer trading.

13 See www.faithworks.info

14 Now renamed the Co-operative, Environmental and Social Performance Indicators (CESPIs).

15 This rose to 55 per cent of respondents identifying as local offices of larger organisations.

16 'Distance travelled' refers to the perceived progress that a beneficiary makes towards a final outcome, such as employment or maintaining stable housing.

17 Now NCVO ICT Development Services. See www.icthub.org.uk

18 See www.ces-vol.org.uk and see also www.homelessoutcomes.org.uk and www.itforcharities.co.uk



Chapter 4
Focus on outcomes ... how to demonstrate change

Focus of chapter 4
This chapter reports on the advances made in developing outcomes approaches in the third sector, the challenges that relate to the requirements of government frameworks and targets, and also those connected to data collection and management. Specifically, the research found that:

• Funders frequently found the quality of outcomes reporting unsatisfactory.

• Many organisations found it difficult to frame outcomes so that they allow for statistical analysis and so that they relate to higher-level outcomes defined by statutory agencies.

• A focus on outcomes results has often meant that information about the processes and context required to understand the factors that encourage or inhibit good outcomes is overlooked.

• Developing outcomes approaches is complex, involving time and resources, change of organisational culture, and the development of data collection and analysis skills.

Pioneering outcomes ... developing outcomes frameworks

In Chapter 1, we discussed the shift in emphasis in public sector funding to an outcomes approach. The more advanced development of an outcomes focus in certain sub-sectors can be related to the drive towards outcomes by government departments.

Pilots
Specifically, in 1993 the Department of Health decided to trial an outcomes approach with grant-funded organisations, and Alcohol Concern piloted work with alcohol agencies on how outcomes monitoring could be used; the drugs and alcohol field still remains at the forefront of outcomes work in the sector. Housing and homelessness organisations have been exploring the use of outcomes measurement systems since the late 1990s, driven by the introduction of the Supporting People1 and Every Child Matters national outcomes frameworks, but also by the London Housing Foundation's IMPACT capacity-building programme, and aided by the culture of peer learning within the homelessness sector. (See the report on this programme on page 26.)

Outcomes tools
The dissemination of the Outcomes Star tool has been particularly influential. This is an outcomes tool developed by Triangle Consulting in the context of the IMPACT programme, which has allowed soft outcomes to be assessed and used jointly by key workers and clients. Darlington (2007) found that once the tool was adopted by St Mungo's, one of the largest homelessness organisations in London, dissemination events were held within the city, similar organisations across London began to follow suit, and the outcomes model slowly spread nationwide. Infrastructure organisations, including Homeless Link and the umbrella organisation Sitra, have also promoted the use of outcomes measurement to managers.

The case example below illustrates how organisations have benefited from cross-sector learning to develop outcomes tools.


Street Cred – developing a tool to measure soft outcomes for a micro-credit project

Street Cred is a project run by Quaker Social Action (QSA) since 1999, providing encouragement and practical support to women who are unemployed or on a low income to start up their own small businesses. QSA has 27 staff, half of whom are part-time, working in East London.

Developing a tool to measure soft outcomes
The project found that it was unable to capture the importance of group support and changes in the confidence, motivation and skills that women reported anecdotally. Its funders were also asking for better outcomes information. At first the Street Cred team tried using forms to record personal assessment of skills and progress (PASP forms) – but found them lengthy and intimidating; it was difficult to translate the information into a format that was easy to collate and analyse. So the project team decided to adapt an existing tool, the Off the Streets and Into Work 'employability map', which aimed to capture evidence of how homeless projects develop confidence and other 'soft' skills among their users.

The team worked together to make changes to the employability map; the adapted tool took about a year to develop, and another six months to implement.

Benefits
Street Cred staff describe the benefits of the new tool:

• Evidence and confirmation of progress made by Street Cred users boosts confidence for users and staff, has improved reporting to funders and strengthened new funding applications.

• The one-to-one interview provides an opportunity to assess progress in the key areas of confidence and motivation, basic skills, business skills, and readiness for trading, and to identify remaining difficulties.

• Collated information from individual self-employability maps serves to identify areas of common difficulty for users, such as marketing, cash flow and basic finances.

The self-employability map has been so successful that a version has been developed for outreach and introductory work with new users.

Challenges
One of the main challenges has been taking time to adapt the tool, but the team has also needed to make sure that the questions and the information were relevant for both monitoring purposes and self-assessment, and were time-efficient. It was also a challenge to find a convenient and accurate way to analyse the information from the interviews in context and alongside other monitoring information.

Using the information
The self-employability map is just one of the tools used to monitor the project. Other monitoring information on user profile and outputs is collected, and more qualitative data is also gathered through exit interviews and case studies. The reports are used for:

• Performance management: There are quarterly meetings to review progress against the monitoring and evaluation framework and targets.

• Reporting to funders: The statistics from the self-employability map analysis are used to back up and confirm the quantitative hard outputs provided to funders and the qualitative case studies and narrative reports.

• Improving services: Data from the self-employability map has helped to identify topics for business development workshops.

QSA continues to develop its monitoring and evaluation and keeps the self-employability map under review so that its data can be integrated within the organisation's wider monitoring and evaluation frameworks.

Frameworks
Councils for voluntary service and volunteer bureaux also paid early attention to outcomes. PERFORM (an outcomes-focused framework) was developed to provide an over-arching framework of higher-level outcomes for infrastructure organisations,2 and NAVCA's outcomes-focused quality award has strengthened the outcomes focus for local infrastructure organisations.3

Some developments have been directly sponsored by government. A pilot project set up in 2004/05 in response to a request from the Office of the Deputy Prime Minister proposed four higher-level outcomes for agencies covered by the Homelessness Advice Service (National Homelessness Advice Service 2005). The Community Development Challenge Report recommended that government and community development organisations work together to establish a community development outcomes and evidence base. The steering group recommended that a programme of support for community development practitioners and managers be delivered to increase understanding of the links between evidence and policy and how to use evidence to influence decision makers (Community Development Exchange, Community Development Foundation and Federation for Community Development Learning 2006).

The shift towards outcomes has also been promoted by non-statutory funders. The Big Lottery Fund has been a high-profile leader in this, describing itself as an 'outcomes funder' – an approach developed from earlier initiatives at the Community Fund and the New Opportunities Fund – which requires funded projects to demonstrate how well they achieve intended Big Lottery Fund programme outcomes.

The research found that other independent grant makers now frequently grant fund projects likely to have outcomes that would relate to their own specified grant programme outcomes, and expect to receive monitoring information on their achievement. The third sector has had to shift its focus to meet this challenge. New Philanthropy Capital, from its experience in analysing charities, reports rarely finding an effective outcomes framework or operating outcomes monitoring system, essential for effective reporting of outcomes. NAVCA has instituted a quality award for local infrastructure organisations focusing on outcomes, and echoes the need for strong internal systems if organisations are going to evidence their outcomes.

Outcomes champions
CES' work since 1990 has been one of the key drivers of an outcomes focus in the sector. Much of the training and consultancy by CES is focused on working with organisations to develop outcomes frameworks, and other consultants reported an increase in this work. The National Outcomes Programme, initially funded from 2003 to 2006, worked with local development officers and specialist networks to cascade training on outcomes to frontline voluntary and community organisations. Some 1,400 voluntary and community organisations were reached through 74 'outcomes champions' based within infrastructure organisations. New Big Lottery funding from 2007 to 2009 will enable a further 80 new outcomes champions to be trained, resulting in a cascade of outcomes training to at least another 800 frontline voluntary and community organisations.

An independent programme evaluation of the National Outcomes Programme (Aiken and Paton 2006), carried out by the Open University, found good signs that an outcomes approach had been successfully introduced into several types of organisations by the outcomes champions themselves, and that many intended to deliver further outcomes training. One outcomes champion, who had been carrying out the CES training since May 2005, reported the change that has occurred over a two-year period, and the 'profound effect' of outcomes training and support on the sector: 'Outcomes have become part of the common language. It is not something alien to them'.

Outcomes and public service delivery ... contributing to government and other targets

We noted in Chapter 1 that by 2001 the government was urging the focus for third sector performance to go beyond financial procedure and probity to achievement of aims and objectives. And beyond this, third sector organisations – increasingly drawn into delivering public services and funded by central or local government or by government agencies – should determine their contribution to government targets.

Although the research found that output targets continued to be dominant in some commissioning, successive key guidance and procedural documents (Department of Health 2006) and finally the 2007 Office of the Third Sector Action Plan have all encouraged commissioners to place outcomes at the centre of their strategic planning. The challenge for third sector organisations funded by regional development agencies is to make sure that their aims and objectives fit with funding priorities and broader corporate evaluation frameworks. The Every Child Matters framework emerged around March 2005, requiring local authorities to assess how projects were meeting its high-level outcomes. In local authorities, the Local Area Agreement is providing a focal point for the corporate plan and the community strategy and, when commissioning services, local authorities have to show how the services will meet identified priorities and needs. Even when delivering smaller grants, local authorities are looking for evidence against governmental targets and National Service Frameworks.4


'Everything is now outcome based – we have had to change the organisational mindset as well.'
Local voluntary services organisation


London Councils made an early move towards an outcomes approach in 2002/03, with funding grouped under the three over-arching themes of reducing social exclusion, poverty and disadvantage; promoting equality and reducing discrimination; and increasing London's opportunities. Beneath these they have identified 59 priority service areas, each with its own targeted outcomes, against which funded projects are required to report their achievement. This will, in turn, enable London Councils to obtain a statistical analysis and identify how far their own outcomes have been met by funded services.

Across England, local councils have been moving away from grant funding the third sector, towards a more commissioning-based service delivery approach. The extent to which grant funding remains varies greatly from council to council, as do the structures for commissioning services, target setting and the performance measures applied. For example, in June 2007 Camden Council agreed to move from grants to outcome-based commissioning, through Negotiated Funding Agreements and small grants, allowing the Council to specify more clearly the services its funding would be used for and to obtain greater information about the contribution of the third sector (London Borough of Camden 2008).

Organisation-wide outcomes frameworks ... need for a comprehensive and systematic approach

Chapter 2 discussed the development of monitoring and evaluation at a project level, in response to project funding requirements. While this trend remained broadly true, the research found some evidence of organisation-wide monitoring (as opposed to project-level monitoring) of key indicators in response to the focus on outcomes and impacts and performance targets. Several respondents recognised that a previous emphasis on project-level monitoring had left data collection and reporting weak at a whole organisation level. Collection and analysis of monitoring data, previously driven by the requirements of funding streams, had not resulted in a comprehensive and systematic approach or a core system.

A number of national organisations reported developing outcomes systems and frameworks, cascading down information requirements to local bodies and to local projects and offices. For example:

• NCH Action for Children has worked to develop results-based accountability and more robust evidence on how the organisation as a whole contributes to outcomes for children. It has rolled out an outcomes framework across the UK, with sub-outcomes for each of the five Every Child Matters outcome areas. The aim is to make the data collection IT-based, and for information from local assessments carried out with local parents and children to drop into the framework.

• The NSPCC is developing an outcomes framework to evaluate therapeutic services, located within a new strategic framework for all its services.

• The Motor Neurone Disease Association has a team of volunteer visitors and advisers responsible for making detailed assessments and reports on user outcomes, compiled into detailed management reports. It also carries out regular surveys of over 1,000 individuals to assess the experience of both service users and providers.

• Following requests for outcomes information at a national level by the Department of Trade and Industry and the Office for National Statistics, and at individual bureau level, Citizens Advice initiated a monitoring and evaluation project to refocus data gathering from its 450 bureaux.

• NAVCA has a national focus on outcomes and its annual members survey now assesses performance against indicators introduced in 2005/06.5

Some research participants described establishing a new link between outcome reporting and business planning. However, they reported difficulty in integrating outcomes into key indicators. Key indicators are likely to be established across a number of different areas of organisational life, including financial performance and quality standards, and often focus on outputs. Organisations sometimes struggled to identify organisation-wide single outcomes against which information could be aggregated at a national level.

The case example below illustrates the process carried out by one organisation to introduce an outcomes framework.


Voluntary Action Rotherham – developing an outcomes framework

Voluntary Action Rotherham (VAR) is the lead infrastructure body providing support, development and promotion of the voluntary and community sector in Rotherham. It has 42 staff running a range of services: human resources and legal advice; community accountancy; small group development; procurement advice; network co-ordination; and policy and performance.

Developing an outcomes framework
In autumn 2005, as the voluntary sector began to work more closely with the statutory and private sectors through the Local Strategic Partnership, VAR needed better evidence of the effect of its work. It was also keen to measure its performance, and to feed findings into organisational management and service improvement. The concept of an outcomes framework was agreed by the VAR Board in January 2007 and the Policy and Performance team led the development in consultation with trustees, management and staff, and with external stakeholders – voluntary and community organisations and public sector partners such as the local authority and the primary care trust.

At the first of three stakeholder events, VAR was able to develop an outcomes framework with five high-level outcome areas, consistent with the PERFORM framework, each underpinned by a number of local outcomes, consistent with the NAVCA quality standard. The second event focused on developing performance indicators. The Policy and Performance team used this information to develop a suite of performance indicators to measure the outcomes and included some additional indicators from the Local Area Agreement. The third event focused on identifying an action plan that would help VAR achieve the outcomes.

Measuring achievement
Once the framework and action plan were agreed by the Board in September 2006, the Policy and Performance team focused on developing systems and tools – a new contact management system database, an annual voluntary and community sector survey, and a customer satisfaction survey.

Initial baseline data was collected through analysis of the database and an annual survey. The postal survey – with an option for completion online – was sent to 410 organisations identified as being VAR members or service users. A questionnaire was also sent to a group of voluntary and community organisations not using VAR services. Although this group was not ‘controlled’ for size and type of organisations, or access to other interventions, VAR felt that comparing responses from the two groups enabled them to ‘assess the difference we are making to the organisations we work with’.

All staff are involved in gathering data. At present, most data gathered is quantitative, but more qualitative data is planned. In its recent submission for the NAVCA quality award, VAR also produced a number of case studies to demonstrate evidence against each standard.

The approach has highlighted evidence of where VAR could be more effective. For example, there has been agreement to try to engage more with community groups. VAR sees the outcomes framework as a tool to manage organisational performance, but also as useful for teams and individuals to measure their own performance. The outcomes and action plan are embedded in the staff appraisal process, with staff given clear targets to achieve.

The Prince’s Trust, an organisation with over 650 staff, has well-established monitoring procedures and provides an example of an integrated national-level monitoring and evaluation system. The charity supported over 40,000 young people in 2007/08 through six core programmes, pilots and locally-specific projects. The Evaluation team is part of the Planning and Performance team, located within the Finance Department. The Evaluation team produces monthly reports containing key performance indicators, which cover outcomes but not impact. These link into the organisation’s strategies and priorities in the following ways:

• The team collects standard profile data on all users across local groups.

• It collects outcomes data on each young person who leaves the Trust, as a minimum (although this is a voluntary process), and this is stored and reported on centrally.

• The system enables the team to aggregate at different levels – national, programme and regional statistics.

• Comprehensive systems allow the charity to meet the needs of new funders.

• Specific quantitative and qualitative outcome information may be obtained through ad hoc internal evaluations or externally commissioned evaluation.

The Prince’s Trust’s current outcomes monitoring form is broad, allowing it to be used consistently by all projects and providing standard data across projects. However, changes are being considered to capture more specific details and to meet changing needs for outcomes data for internal policy uses and funders. The charity is also considering collecting more distance-travelled data. User profile monitoring is reviewed monthly; if quotas related to specific target groups are not filled, then programmes are adjusted to target priority groups more effectively. Recommendations from external evaluations are also worked into business plans, and evaluation findings are used by ‘quality groups’, as well as policy, programme management and fundraising colleagues.

An outcomes approach ... the challenges

Data collection
Despite the range of monitoring and evaluation initiatives in the sector, some organisations are still struggling to come to terms with an outcomes approach. Nearly one-third of funders in our survey found outcomes data ‘frequently’ limited or incomplete, and one-third found that it was ‘frequently’ not convincing.

One organisation collected information from local offices through monthly reports on activities and outcomes. This was collated into a monthly report to the trustees, but the data was not used to make changes as ‘we don’t have the manpower’. This evidence points to a huge waste of time and resources. We found difficulties in aggregating data arising from a number of issues – for example, an overload of data from collection methods such as client assessments, and difficulties in collating and analysing data from distance-travelled models and translating it numerically. Quaker Social Action used training from CES to set up frameworks across its projects and staff are using these to assess evidence at quarterly meetings and draw up reports. Yet staff find that the lack of similarity between the projects can militate against pulling this information together. There is ongoing work to improve this situation, using ideas and input from staff across the organisation.

Cultural shift
Commentators in the United States (Hatry and Lampkin 2001) have observed the importance of the cultural shift within organisations when outcomes management is introduced and the importance of integrating data collection on outcomes into other information management systems. They suggest that outcomes assessment must also be seen to be useful to organisations and beneficial to service users, so it is helpful to staff to see how their data is used.

This was illustrated by an initiative in Scotland by Voluntary Action Fund, which explored the barriers to recording and reporting outcomes in voluntary organisations and how best to support funded organisations to improve in this area. Its Outcome Monitoring project provided bespoke training and support, delivered by Evaluation Support Scotland, to 36 funded projects, testing how best to support them. The project found a culture of trust between funder and fundee to be essential to get outcomes properly established for both parties. It also found that organisations established unrealistic outcomes, which funders were not challenging. Some time was required to look into the different levels of change that a project could realistically record and how this related to an overall vision for their project and the high-level outcomes of the funding programme. Voluntary Action Fund viewed time and resourcing as being the main barriers hampering outcome monitoring (Evaluation Support Scotland and Voluntary Action Fund 2007).

Time and resources
Time and resourcing issues were also main themes from this research – for outcomes assessment as well as for wider monitoring and evaluation. Interviews and responses to the third sector survey indicated further difficulties that third sector organisations were experiencing in assessing and reporting outcomes:

• commissioners without a clear understanding of outcomes; one reported: ‘They use the word outcomes but they don’t know what they mean’

• outcome setting by funders without consultation, resulting in ‘ridiculous outcomes being set’

• internal disagreements about the appropriate tool and methodology for assessing outcomes


• large amounts of outcomes data collected, for example through client assessment, without the systems or time to process it

• insufficient staff skills to deal with the extended approaches to client work brought by an outcomes focus

• tension caused by imposing a recording system against predetermined measures in reactive frontline work

• difficulties in finding measures to provide data against funder-led frameworks in some areas of work, such as preventative work.

Focus on outcomes
The monitoring process – with its short timescales – may in itself act as a disincentive to developing an outcomes focus. Evaluators of the Defra (Department for Environment, Food and Rural Affairs) Environmental Action Fund 2002 to 2005 (CAG Consultants with Northumbria University 2005) reported that rigorous project monitoring was set up to track change and projects were encouraged to concentrate especially on outcomes. However, a by-product of quarterly reporting was an unintended concentration on the short term, skewing project reporting to focus on outputs and short-term outcomes. We can also make the following observations:

• It is important for funders and organisations to negotiate on a limited number of outcomes that are relevant to both parties.

• Continued support is required on how to frame questions and obtain data about user benefits, distinguishing this from user satisfaction.

• There are a considerable number of outcome measurement tools available, but organisations are not necessarily aware of them or how to access them.

• Much of the available training focuses on how to monitor outcome information, rather than on evaluation methodologies, design, interpretation and analysis.

• A diverse client group and inconsistency in applying outcomes tools may make it difficult to produce standardised data.

• With some activities – such as arts and culture, and community development – it can be difficult to distinguish between process and result and to satisfy funders’ expectations of quantitative outcomes.

Many organisations indicated that they were having difficulty in establishing a fit between outcomes that were within the remit of organisations to achieve at the local level, given the focus and scale of their activities, and the information they were required to produce relating to higher-level outcomes. These could be set nationally or by funders, and were often determined by narrowly-defined government targets. This is explored further in the next section.

The evidence gap ... areas to consider and negotiate

A report for the Performance Hub (Darlington 2007) on the adoption by homelessness organisations of an outcomes focus describes an example of attempts to balance measurements considered useful for more effective work practice and outcome measures required for aggregation against funder frameworks or government targets – which may be described as lower-level and higher-level outcomes. The London Housing Foundation has done substantial work with the Department for Communities and Local Government and some London local authorities to find a way in which the priorities of frontline organisations, those of individual local authorities and central government outcomes could be defined within the framework of Supporting People. The Supporting People framework sets high-level indicators, enabling data to be aggregated up, but beneath this organisations could implement their own distance-travelled measures.

The evidence from this research reinforces CES’ consultancy experience that this is an area of concern to organisations, requiring considerable support and negotiation with funding bodies. Many individual projects do not serve a sufficiently large target group to affect community-wide statistics, regardless of how successful they are, or they may be working with clients with outcomes at the very beginning of a cycle of change. A 2005 article in Animated Dance indicated that ‘evidence can be supplied to show that art and health projects are addressing health and social participation, and this work can be described. But it is more difficult to provide evidence that these projects have an effect on health, social exclusion and civic participation’ (White 2005).

In the evaluation of the Healthy Living Centres funded by the New Opportunities Fund, in which about one-third of the local projects were led by voluntary organisations, national targets were used as a framework for the evaluation using a broad theory of change approach and showing interlinked changes towards an ultimate outcome. The evaluators collected some 45 evaluations from projects, about two-thirds of which were self-evaluations. It was reported (Hills et al 2007) that there was real difficulty in demonstrating clinical health outcomes from both the local evaluation and the external evaluation, which made it difficult for projects to market health results in an effort to get further funding. One conclusion was that many local evaluations contribute to learning rather than demonstrate outcomes. This is an important point for funders and emphasises the need to set up systems to extract that learning.

CES evaluators have also observed some practical difficulties for organisations in obtaining the level of information required. For some government targets, such as entry into education, training and employment, it may be feasible to do longitudinal studies with sufficiently resourced monitoring and evaluation, but even these studies risk heavy attrition rates and may be subject to bias. For other outcomes, such as improved school attendance, a good level of data sharing is required, and for others, such as decreased recidivism, data may not be accessible or relevant for a project working at a modest level. For a crime reduction project an organisation may be working on the risk factors associated with offending behaviour, and what is relevant to monitor is intermediate outcomes, such as improved skills or reduced alcohol or drug misuse. This points to the need for funded research evidence to establish the predictive links between preventative or intermediate and higher-level outcomes.

Once the link has been shown, the third sector organisation can produce data on intermediate outcomes, pointing to the research evidence and the probability that the final outcome they want will occur.

Potential pitfalls of an outcomes approach ... the danger of exclusion

Early moves by funding bodies towards an outcomes approach brought warnings from some commentators about possible negative effects of an overriding emphasis on outcomes (Wales Funders Forum 2002; Cook 2000; McCurry 2002). Among these were the potential to penalise programmes with hard-to-measure outcomes, presenting outcomes in an unrealistic way in order to secure funding, and measuring within an unrealistically short timescale.

Indeed, concerns about implications for funding choices and for focus of work and target users continue to be expressed. Some funders participating in the research reported excluding funding applications that are less likely to demonstrate outcomes. Some also recognised that a more proactive approach, and one requiring organisations to have a conceptual grasp of the project at the application stage, was resulting in more sophisticated, larger organisations being funded. There was a real danger of excluding organisations capable of delivering good work but less able to express or deliver ‘tidy’ or measurable outcomes.

In parallel, some survey respondents noted the risk of adapting their client group, or their activities, to meet funding success criteria. One reported: ‘We are having to pick and choose the groups we work with in order to improve our evaluation criteria’. Another echoed this: ‘What you review determines very much what you do. If you review targets, you end up doing targets’. A CES evaluation of a black and minority ethnic capacity-building programme in London noted that:

The funding requirements also acted as a constraint. The London Development Agency’s targets set a narrow definition of success, based on a particular model of social and economic regeneration. The monitored output areas did not recognise the importance of developing local networks, building alliances and community cohesion and creating access channels to decision makers and funders. (Ellis and Latif 2006)

Issues were also raised about rigour and methodology. One funder was concerned about the inherent encouragement to ‘claim’ outcomes without real evidence. Another observed a trend towards a more ‘pseudo-scientific’ approach to measuring outcomes and considered attempts at pinning down causal effects as ‘a waste of time’, asking for a simpler approach.

A number of third sector organisations also expressed concerns about funder expectations of proof of causality and their ability to attribute outcomes to specific interventions. For the Motor Neurone Disease Association, the very large number of organisations involved in one person’s care made it difficult to attribute individual change to any one agency, particularly as the charity advocates a multi-disciplinary care approach. SHINE Trust, funding educational projects in schools, recognised a potential danger that demand for scientific methods of evaluation and proof of project success could push projects that did not focus on academic work into doing more academic work in order to deliver quantifiable outcomes.

Measuring process
In general, over-heavy statistical designs on the one hand and simplistic monitoring information on the other can miss important context information. IDeA, the Improvement and Development Agency for local government, has warned that ‘outcomes information which is gathered is only as useful as the level of sophistication of its analysis’ (McAuley and Cleaver 2006:15), and as outcomes approaches become more embedded in the sector, the issue of context is receiving some attention. In the research some concern was expressed by funders about over-simplistic interpretations and measurements that did not recognise the complexity giving rise to outcomes. This is an issue that has already been rehearsed in the context of international development evaluation. Ebrahim (2002), reporting on the information that moved between two international non-governmental organisations and their funders, found that the collation of standardised data, amenable to quantitative analysis, meant that little importance was attached to the different social contexts in which the projects developed or the extent to which development activities were isolated or integrated into more complex social and political structures and initiatives.

In the UK third sector, emphasis on outcomes monitoring for accountability has also shifted the focus of evaluative activity away from context and implementation issues. In Ball’s 1989 report, the evaluations offered as case studies focused on process and service delivery. In one case only did the evaluation gather data on outcomes. However, the current focus on outcomes has shifted attention from process, and self-evaluation frequently prioritises outcomes data as evidence of effectiveness in itself, over understanding issues of implementation. CES’ experience is that even formative external evaluations, ostensibly well-positioned to feed back valuable learning on programme and project set-up, often have an imperative to focus on early outcomes, despite the short time frame and even though outcomes information devoid of context can offer little learning.

However, some funders reported placing greater emphasis on process issues. The King’s Fund, in its Partners for Health programme, had developed an approach founded in realistic evaluation, which places a value on critical reflection and on how and in what conditions a given intervention works (Tilley 2000). The King’s Fund had put considerable resources into training organisations in evaluation through a two-day workshop during the application process, emphasising the importance of establishing research questions and identifying user pathways towards outcomes. It worked with funded organisations to collect information about context together with outcomes, so that evidence could be used by other health service providers.

UnLtd supports individuals to build their social entrepreneurial capacity to generate social and environmental change, with support from a dedicated development manager. The work depends on both the needs and preferences of the award winner and the particular skills and personality of the development manager. UnLtd gathers evidence on these helping relationships, so as to capture the different approaches and to get a better understanding of what works. The research team carries out biannual focus groups with regional teams of development managers, alongside collecting interview data from award winners throughout the year.

Influence on policy and change
One regional infrastructure organisation reported that it might be difficult to demonstrate policy influence, and the process of sitting at a policy table was itself important to record. White (2005) has argued that evaluators of arts in health should embark on a process of discovery, not proof, and that it was the environment, and the conversations about the activity, that provided the intermediate indicators of perceived benefit.

The evidence reported in this chapter has emphasised how important it is for outcomes approaches to provide information, not only for accountability purposes, or to assess how much progress is being made, but to demonstrate how change has been achieved or limited, and how to progress further. This suggests that it is important, as noted by Hughes and Traynor (2000), for organisations to clarify with funders a theory of change, in which outputs, processes and early outcomes can be seen to be leading to the ultimate expected outcome. This will be explored further in the next chapter, as we report monitoring and evaluation methodologies being used by third sector organisations.

‘It’s positive that services focus on outcomes but often they tend to forget the process of how to achieve those outcomes and that is important to capture ... There is an obsession with outcomes now.’
Funder

Notes for Chapter 4

1 Supporting People was launched in April 2003, ensuring that vulnerable people have help and support to live independently. Around two-thirds of the programme is delivered by third sector organisations.

2 The PERFORM framework was developed explicitly for infrastructure agencies by NAVCA, Action with Communities in Rural England (ACRE), Community Matters, NCVO, COGS (consultants in community development), and the Big Lottery Fund among others. The framework outlines four high-level outcomes against which all infrastructure can potentially evaluate themselves.

3 See www.navca.org.uk/services/quality/

4 Such as the mental health National Service Framework, Supporting People Framework and the Common Performance Framework.

5 See www.navca.org.uk/publications/evaluatingnavca08


Chapter 5
Internal evaluation and performance management ... insights into approaches and methodologies

Focus of chapter 5
This chapter reports on the development of internal evaluation. It explores approaches and methods used and the value of internally generated monitoring and evaluation data for performance management and for wider purposes. The research found:

• Self-evaluation had increased in currency and legitimacy, but concerns were raised about the quality of data derived from internal information systems.

• Third sector organisations were using a range of approaches and data collection methods, and monitoring and evaluation is increasingly based on an improved understanding of the relationships between outputs, outcomes and impacts.

• Some third sector organisations surveyed made no distinction between evaluation and quality assurance.

• The perception of evaluation as synonymous with performance management ignored the wider potential purposes of evaluation.

Some definitions … the language of evaluation and performance

In this chapter different concepts jostle against each other. Although the terms are closely related, this report makes some distinctions that are worth capturing as helpful to the discussion. They are listed here.

Internal evaluation
An organisation carries out internal evaluation for its own purposes, but may use an external agency or individual to work with it. This chapter reports that much internal evaluation in the third sector is in practice focused on external reporting requirements, and runs the risk of losing its internal purpose and value.

Self-evaluation
Although often used interchangeably with ‘internal evaluation’, self-evaluation is distinct. CES defines self-evaluation as a reflective process, providing an opportunity for improvement:

‘Self-evaluation is a form of evaluation whereby the project itself seeks to understand and assess the value of its work. The project is the judge’ (Russell 1998:2).

Performance management
Performance management is a process that contributes to the effective management of organisations, programmes, teams and individuals. It is based on effective strategy and has many tools to support it. Performance management essentially relies on measurement, which may include measurement against aims and objectives, return on investment, standards or levels of competency. The government’s outcomes-based approaches relate to a drive for performance management, in which target setting plays a central role. This approach is not naturally amenable to identifying implementation and delivery problems or unanticipated outcomes.

Performance measurement
Performance measurement is the ongoing monitoring and reporting of efficiency and effectiveness, usually tracking progress against what was planned and often within time limits. Performance measurement is typically used as a tool for accountability.

Quality assurance
Quality assurance is essentially about systematically learning what an organisation is doing well and what needs to improve, and using this information to do it better.

This chapter will argue that while monitoring and evaluation are key performance measurement tools, and their potential for improving performance is being embraced by some third sector organisations, the research found two areas of risk. First, a focus on performance measurement for accountability may in practice be detracting from the use of monitoring and evaluation for performance improvement. Second, the priority the third sector gives to evaluation for performance purposes leaves little room for its use for knowledge building and for informing policy.

The development of self-evaluation ... steps towards increased currency and legitimacy

The main focus of CES’ training, technical assistance and publications since 1990 has been on self-evaluation and this has been an area of substantial development in the sector. This focus recognises a shift from evaluation being perceived as an external, expert activity to a process conducted by the organisation itself or in partnership with external support.

The increased currency and legitimacy of self-evaluation can be explained in part by increased understanding and basic organisational skills in monitoring and evaluation. Additionally, the research indicated the influence of limited funds for external evaluation and some doubts about the value of external evaluation by both funders and third sector organisations. A majority of funders in our sample reported that they required self-evaluation reports, and nearly half of our third sector survey sample said that they completed self-evaluation reports, as distinct from monitoring reports. For many of the funders interviewed, self-evaluation was felt to be a more useful option than costly external evaluations. Sandwell Metropolitan Borough Council valued self-evaluation as putting users at the heart of the process and providing a learning process for the organisation. Leicestershire Children’s Fund recognised potential issues concerning the validity of tools used, but considered self-evaluation as the better option.

Yet many funders reported that what they received was monitoring reports, not self-evaluation. Indeed, responses to our third sector survey indicated that there was often no real distinction made between monitoring and evaluation. Asked for details of their self-evaluation, several organisations described the reporting of statistics collected centrally through a management information system, often used for compliance purposes.

This echoes experience in the United States, where a study for the James Irvine Foundation found that even where there was a functioning management information system, data collection was often more geared towards meeting funding arrangements than towards evaluating the programme, and that data was seldom used for analysis and learning (Hasenfeld et al undated). When our surveyed organisations provided more thoughtful reports, they frequently reported lack of feedback from funders and doubts that reports were read.

‘I always put self-evaluation first as long as the organisation is equipped. External evaluations are too intrusive, cost too much, and don’t say anything different to what we know.’
Funder

The Big Lottery Fund encourages funded organisations to self-evaluate. This is seen primarily as an organisational learning tool, although in some cases the information may be used as data for programme evaluations. The Big Lottery Fund (2007) suggests that if funders are requiring self-evaluation data, funding support should ideally be provided to assist access to training or learning sets for grant holders, and networking events to promote self-evaluation techniques.

The case example below illustrates how an infrastructure organisation has benefited from investment in self-evaluation.

Plymouth Community Partnership – investing in self-evaluation

Plymouth Community Partnership (PCP) is the biggest infrastructure organisation in the city, working with 300–400 community and voluntary group members. The project monitoring officer, who has a research background, works with staff to provide evidence of their work and to pull together the data from monitoring and evaluation projects to demonstrate PCP’s outcomes. The post is funded for four days a week by putting the function in every funding bid.

PCP produces several self-evaluation reports every year, using both quantitative and qualitative data. Funders’ information needs are primarily met through monitoring (and their database), not evaluation. Although self-evaluation reports are sent to funders, they are primarily used for internal development and to disseminate findings about the organisation’s work to members and partners.

The monitoring officer specialises in case study methods, and case studies are sent to funders, although Capacitybuilders is the only funder requesting qualitative information.

Third sector self-evaluation and quality assurance ... making the distinction

The picture of self-evaluation is also confused because a significant minority of third sector respondents made no distinction between quality assurance systems and activities and monitoring and evaluation methodologies. Quality assurance has become a key tool in performance management in the sector. Thirty-two organisations mentioned a quality assurance system as the system they were using to monitor. Twenty-four of these were references to CES’ quality assurance system, PQASSO (Practical Quality Assurance System for Small Organisations), and other quality assurance frameworks included ISO 9001, Investors in People (for staff training and development) and QuADS (Quality in Alcohol and Drugs Services).

A 2007 questionnaire sent by Community Development Exchange (CDX) to community development practitioners also found a need for more clarity about the distinctions among, for example, monitoring, evaluation, performance management and quality assurance (Longstaff 2008).

Distinctions and definitions
CES describes monitoring and evaluation and quality assurance as two distinct, if complementary and overlapping processes. Whereas in quality assurance agreed standards form the criteria against which judgements are made, in evaluation the criteria may be set by individual terms of reference, or may emerge through the evaluation process itself.1

Quality assurance systems such as the widely-used PQASSO, which are based on a process of self-assessment, will rely on a functioning monitoring and evaluation system to provide information. One organisation about to implement PQASSO recognised this implication: ‘We definitely need to improve upon how we use/analyse the information we have collated and develop an evidence-based approach to promoting and developing services’.

However, the distinction between self-evaluation and quality assurance becomes less clear when evaluation relates closely to processes because of their link to, or implications for, quality. At Advance, a housing association, a single person heads up both monitoring and evaluation and quality assurance. A tenant survey and 12 focus groups with service users raised the question of safety, which gave rise to a promotion campaign on safety.

It is clear that a quality assurance system can provide an over-arching framework within which monitoring and evaluation is located. Action for Blind People describes annual self-evaluation within the framework of the Excellence Model,2 a process which identified a lack of provision for young people in leisure services and resulted ultimately in the provision of 29 ‘Actionnaires’ clubs.

Increasingly within the third sector, delivery of information about organisational results is an integral part of a quality system, and such information relies on effective monitoring and evaluation data. PQASSO, probably the most commonly-used quality assurance system in the voluntary and community sector, has a results quality area.

NAVCA’s performance standards and quality award, launched in 2006, focuses on organisational outcomes in relation to increased effectiveness of the local voluntary sector, effective networking, effective representation and working relationships, and integration into local planning and policy making. It is designed to provide local infrastructure organisations with the means of demonstrating that they are providing high-quality services and to provide evidence against 21 key outcomes of their work. Approximately a quarter of NAVCA’s membership has so far signed up to the award. Co-operatives UK has developed key performance indicators to allow cooperatives to monitor their performance. Half of these indicators are about results – the key outcome areas, such as focusing on investment in the community and in other co-operatives (The National Centre for Business and Sustainability 2006).

Topics for debate
The question may be raised as to whether distinctions between monitoring and evaluation and quality assurance are of any consequence, or if both might simply be interchangeably regarded as performance management techniques. Our observation is that while evaluation can and should be used as a performance management tool, not making this distinction is potentially problematic as it de-emphasises the functional relationship between monitoring and evaluation and quality assurance – that effective monitoring and evaluation is required to provide information for quality assurance and that monitoring and evaluation findings should be used to improve performance. In addition, while locating monitoring and evaluation within an over-arching quality assurance framework may have benefits by reinforcing its effective functioning for performance improvement, by the same token it may devalue the role of evaluation for wider learning purposes. This theme is explored further in Chapter 7, where we report on the benefits and use of monitoring and evaluation.


Internal evaluation approaches ... theories, frameworks and methodologies

In Chapter 1 we reported that many funders commissioning evaluations in the third sector are looking for causal links to be made between their investment and demonstrable change, and that the sector is often unable to provide this information convincingly. From our research we can also trace a related distinction between two trends – one towards quantified methods, the other towards methods such as storytelling, seen as capable of demonstrating outcomes more powerfully than quantifiable linear methods. Realistic evaluation approaches stress the need to understand specific contexts and to establish and explore theories of change or programme logic which identify each step in an expected chain of change, leading to a better understanding of how to increase effectiveness (Connell and Kubisch 1998).

The theory of change, sometimes described as programme logic, is an initial planning tool that describes a project or programme so as to show all the assumptions, intentions and links within it that are often not made explicit. It can be used to decide what must be done to achieve aims and key intermediate outcomes that will lead to ultimate long-term outcomes, and finally to impacts, and helps to focus evaluation on measuring each set of events in the model.

CES’ experience is that many third sector organisations are unlikely to conceptualise evaluation in this way. However, many of the monitoring and evaluation approaches taking root in third sector organisations are framed by an understanding of linked indicators of change. Asked whether they used the approaches to monitoring and evaluation listed in Table 3, only 40 per cent responded. Of these, the largest number, 69 per cent of third sector respondents, identified monitoring and evaluation of achievement of aims and objectives as an approach they used. The CES planning triangle, the ABCD approach (Achieving Better Community Development), the logical framework and social accounting were all methods that were relevant to this approach. These are explained below.

Some organisations were using a mix of these approaches, and indeed they are not mutually exclusive. For a significant minority of those responding (and possibly for many of those not responding), these were not meaningful categories. Ten per cent were unable to identify any listed approaches and 20 per cent were not sure what approaches were used, reporting that they did not recognise the technical language, although they might use the methods. Follow-up interviews also indicated that some organisations were not clear about approaches they had indicated they were using. With that qualification, the following is an analysis of the frameworks within which evaluation was reportedly carried out.

Table 3 Third sector evaluation approaches

Type of approach          Percentage of responses (n=275)
Aims and objectives       69
Case study enquiry        27
Logical framework         11
Social accounting         9
Balanced scorecard        9
Theory of change          7
Appreciative enquiry      7

Aims and objectives, or goal-oriented evaluation
This approach focuses on monitoring and evaluating an organisation’s progress and effectiveness in delivering planned outputs, outcomes and impacts and information gathered against selected indicators. The focus is on establishing a planning framework.

The CES planning triangle
Many organisations recognised or used the CES planning triangle. The triangle is a simple theory of change, which clarifies the separate links of change among outputs, outcomes and impacts and their derivation from planned aims and objectives. An initiative in Scotland by Voluntary Action Fund (Evaluation Support Scotland and Voluntary Action Fund 2007) found the CES planning triangle was helpful in brokering understanding of the levels of expected change. In 2002, the Community Technical Aid Centre (CTAC) Monitoring and Evaluation Project, which had as its remit to develop an evaluation framework for technical aid centres across the country, used the CES planning triangle to explain the logic process. CTAC commented:

The introduction of the planning triangle used by Charities Evaluation Services gave a more identifiable structure and justification for the development of [the] governing principles of the organisation. This technique also made the whole process more easy to explain to others.3

Achieving Better Community Development (ABCD)
A small number of survey respondents had used the Scottish Community Development Foundation ABCD approach, which provides a project planning framework on which monitoring and evaluation can be built. For example, Preston Community Arts Project used this approach for its community radio project. St Martin’s College in Lancaster provided the project with a training afternoon and also provided some support to community radio staff to be clear about what they needed to evaluate, thinking about a baseline and what information needed to be collected. They found ABCD useful, not as a tool, but as a way of thinking through and setting up a framework for monitoring and evaluation.

Logical framework analysis
The logical framework analysis4 also makes a link between the project design and evaluation and was used mainly by those organisations working in an international context. It links evaluation directly with project management through a framework or matrix showing the project’s aims and objectives, the means required to achieve them and how their achievement could be verified.

Social accounting
Fifty organisations (9 per cent of those describing approaches and methods) reported using social accounting as a framework for their monitoring and evaluation, being useful internally and providing credibility for funders. The study found a number of social accounting systems in the UK.5 Organisations most frequently referred to those disseminated by the new economics foundation and the Social Accounting Network. Social accounting links a systematic reporting of organisational activities to the issue of social impact and the ethical behaviour of an organisation. The social accounting process can be quite flexible and may look like, or indeed incorporate, other evaluation processes. However, there is a greater emphasis on public accountability: a social audit panel reviews the report, or social accounts, and only then is the report made available to a wider public. The new economics foundation social accounting process, which takes into account the potential difficulties of deadweight and attribution, was reported as more complex than that of the Social Accounting Network.6

‘I have found the CES triangle method an extremely useful and adaptable tool. It has been the backbone of both my ‘monitoring and evaluation’ and IMPACT training sessions. I am also using the framework in a pilot where I am supporting individuals on a one-to-one basis to learn about outcomes.’
Local government evaluation officer

The following case example shows how a Leicester-based cooperative introduced social accounting.

Soft Touch – implementing social accounting

Soft Touch Arts Ltd is a Leicester-based organisation, set up in 1986 as a not-for-profit company with a cooperative management structure. It has nine staff working in partnership with a range of organisations and individuals to provide opportunities for people from socially or economically disadvantaged backgrounds to express themselves and gain skills and confidence through involvement in creative activities.

In 2004, Soft Touch felt vulnerable in the face of voluntary sector funding cuts as it had no systematic way to assess and present its value. It applied to the Social Audit Programme run by Social Enterprise East Midlands (SEEM) which provided a £3,000 grant, and workshop places and mentoring for organisations interested in adopting a social accounting approach. The grant would pay for some staff time and for brochures summarising the completed accounts, with case studies and photos.

We felt that, not only would the social accounting process allow us to demonstrate our value, but it would also be a good tool to help us look at and improve our internal practices and consult with and get feedback from different stakeholders, which we weren’t really doing.

As part of the social accounting process, Soft Touch consulted with staff, funders, key partners and programme participants to assess how they were delivering against their mission, values and objectives. They also carried out a review of their compliance with statutory requirements and other company policies, and implemented a new objective for the organisation to work towards good environmental practice. In December 2005 the first set of Soft Touch social accounts, for 2004/05, were accepted by an independent audit panel.

Challenges
The SEEM programme and peer support network were felt to be essential to going forward with the social accounting. But it was sometimes difficult to find time to implement the process on top of already heavy workloads. Soft Touch reformulated its mission statement, defined organisational values, devised clear aims and objectives, and set up a staff training policy and appraisal system.

Action plan
The 2004/05 accounts listed a number of recommendations for improving on different aspects of Soft Touch’s work. An action plan was developed to implement these recommendations, which included expanding the range of techniques used to consult participants, endeavouring to increase the depth of feedback and recording, and quantifying the value of support given to emerging creative organisations.

The organisation has completed a second set of accounts – for 2006/07 – this time without external funding or support.

Reviewing the process
Soft Touch has learnt what data gathering methods work with the young people it works with, most of whom are disaffected and often under-confident, many struggling with literacy. For the second set of accounts it rejected survey methods, and carried out ‘green room’ style videoed interviews with performers on a music project. It has also used video diaries, audio interviews and ‘brainstorm’ type activities with Post-Its and chalkboards. Soft Touch feels that the process has given it added credibility and the evidence has been useful for a range of different purposes.

The great thing about all this evaluation is that along with writing it into the social accounts we are able to use the evidence in case studies and mapping exercises, for fundraising applications and towards our efforts to get commissioned to deliver public sector services.


Soft Touch has adopted the Social Audit Network’s social accounting framework as a practical programme planning document and an overall monitoring and evaluation framework. However, although valuable, Soft Touch found the process of social accounting time-consuming and staff said that they did not need to produce another set of accounts for three to five years.

The research found a number of projects that are introducing social accounting methodologies to smaller organisations, looking to adapt the methodology to a lighter model. The Social Enterprise People, Cambridge, had a pilot project from July 2006 with funding from the East of England Development Agency through the Greater Cambridge Partnership as part of its Investing in Communities Initiative. Working with four organisations, it developed its own social accounting tool, Assessing Effectiveness, as a more realistic social accounting option for small, socially driven programmes and organisations. It found that both the Social Accounting Network and the new economics foundation processes required more time than most small organisations could commit and honed down the process, while retaining social accounts that would be useful, accurate and credible. Even with this reduced version, a large part of the process was carried out by the Social Enterprise People. It has put basic information on the model on its website, including case studies and some of the worksheets developed, and is delivering social accounting training sessions.7

In the south-west region the social enterprise support network RISE has championed social accounting, linking up with Business Links and gaining resources and status, working closely with sub-regional organisations such as Co-active, Exeter Community Initiatives and Social Enterprise Works (Evaluation Trust 2007).

The Social IMPact measurement for Local Economies 2007 project (SIMPLE) was funded by the South East England Development Agency and the European Regional Development Fund to find a suitable method for impact assessment, targeting 40 small and medium-sized organisations with three-day training. As part of the project it developed a performance management framework and in 2007 Social Enterprise London developed and piloted an electronic Performance Measurement Toolkit to serve as a simple, do-it-yourself method to measure and manage performance and track outputs and outcomes.8

Balanced scorecard
This approach tracks key elements of an organisation’s strategy through a coherent set of performance measures, which may include finance, user satisfaction, and key outputs and outcomes. This was found to be used by some regional and national organisations. However, some difficulty was expressed about finding single or unitary outcomes measures which could be incorporated within the framework, and often outcome evaluation was taking place outside this framework.

Case study enquiry
Case study evaluation allows examination of a particular individual, event, activity or time period in greater detail and emphasises the complexity of situations rather than trying to generalise. One organisation reported compiling case studies into a 12-page newsletter about three times a year: ‘As well as being a useful evaluation tool, it is great marketing material and also helps to energise and inspire existing and potential users of our services’. In general, organisations are more skilled in using case examples as illustrative material rather than as part of an analytical approach to identify what works well and in what circumstances. Several independent funders would like to see more case study information, yet case study approaches do not fit well with the demand for quantified data.

Theory of change
The research found concrete examples of the theory of change most commonly used by evaluators in mapping large-scale evaluations. CES used the approach in its evaluation of Fairbridge to develop a hypothesis of change and to devise appropriate measures (Astbury and Knight 2003). The new economics foundation calls this theory of change ‘impact mapping’ – systematically laying out how the organisation intends to change things – encouraging impact mapping as a useful tool for self-evaluation in its publication Proving and Improving: A Quality and Impact Toolkit for Social Enterprise (Sanfilippo 2005).

Appreciative enquiry
Appreciative enquiry is a qualitative approach that a small number of organisations recognised; it focuses on what has worked well, often through telling stories. Another approach used – almost entirely by organisations working internationally – is the Most Significant Change method.9 This approach focuses on gathering and listening to stories about change within communities through different levels across a programme and provides a means to identify, analyse and conceptualise complex outcomes. Practical Action was trialling this method for self-evaluation as one that had the advantage of a narrative process that ‘gets under the skin’. Tearfund was also beginning to use this process, not as a replacement for indicators and the logical framework, but to complement it, get a deeper understanding of change and to identify unexpected change. For both organisations, such methods were seen as a way to be accountable to beneficiaries rather than meeting donor demands.

Data collection methods ... tools of the trade

The most common data collection methods reported by the third sector survey respondents were routine output monitoring and the use of user feedback forms and surveys. Ninety-five per cent of respondents indicated that they were carrying out output monitoring, and 81 per cent and 83 per cent respectively reported the use of participant feedback forms and questionnaires or surveys. Both funders and evaluators in the research found that despite the strong leaning towards survey methods, a lack of technical skills in survey methods often detracted from the value of the resulting data.

Outcomes monitoring tools
Case files and other record-keeping and individual assessments were used by approximately two-thirds of respondents, and interviews and focus groups by just over half. A small number of respondents provided further information about outcomes monitoring tools, as in the following example:

We survey all young people who take part in one of our programmes/awards 3 months (or 12 months in the case of Business Start-Up) after the programme has concluded. This helps us to collect a range of information such as outcomes (entry into employment, education/training, or alternative progression routes) and the impact of support on skills/personal development and life satisfaction. (National youth charity)

Outcomes were monitored in a number of ways, such as through individual weekly ‘scores’ for participants against key development criteria, monitoring individual development or health plans, or through progression routes. An infrastructure organisation monitored the financial health of organisations it worked with against a baseline. Several organisations used consultants to develop their own assessment tools or to customise existing ones. Specific outcomes tools mentioned in the survey included the Outcomes Star, the Off the Streets and Into Work Employability Map and the SOUL tool. These and other outcomes tools are described briefly in the box.10 Third sector organisations frequently use the term ‘soft outcomes’ to describe outcomes that are difficult to quantify and often describe the progress made towards a final outcome – although the distinction between ‘soft’ and ‘hard’ outcomes is not universally made or applied by evaluators.

‘We have a Feedback Fortnight every year where every single thing we do we ask for user feedback. We advertise this broadly so any service user, professional, stakeholder can comment on our service.’
Local carers’ organisation

Web-based methods for internal evaluation
Twelve organisations gave information about web-based tools or methods for monitoring and self-evaluation, including website comments, feedback forms and blogs, online forums, social networking sites, online case monitoring systems and web-based management information systems driven by service delivery data. There was some evidence that online surveys were increasingly being used, through tools such as SurveyMonkey and Poll Manager – survey software that can be installed on a website to enable regular online surveys.

Several organisations emphasised the need to explore the use of technology, particularly for evaluation with young people, such as the use of blogs and social networking sites. UnLtd, the foundation for social entrepreneurs, is researching and developing a simple software tool for its social entrepreneurs. The tool is designed to help a wide range of users measure the impact of their work.

Soft outcomes tools

Soft outcomes tools are most in evidence in the housing and social care sectors. However, there are current developments in other sectors and most tools have the potential to be used more widely.

The Outcomes Star, initially developed by Triangle Consulting for St Mungo’s, is in use widely within the homelessness sector. Variations of the Star have been developed for women’s refuges and projects for young people, and continue to be developed for other sub-sectors, such as drugs and alcohol projects, domestic violence, parenting and mental health.

The Employability Map was designed by Triangle Consulting for Off the Streets and Into Work. This tool encourages clients and support workers to plot progress towards work and further training, is currently in use across 54 organisations and has been adopted by others not funded by Off the Streets and Into Work.

The SOUL soft outcomes evaluation tool measures against the five key areas of Every Child Matters. It was developed by The Research Centre, City College, Norwich and providers across Norfolk to use within the Norfolk voluntary sector.

The Rochdale Client Centre Dial, with ten outcomes domains, was developed by Rochdale Supporting People and local providers for use across a range of services.

The Regional Outcomes System for Yorkshire and the Humber (ROSYH) has 14 outcome domains. It was developed by Sheffield Supporting People and local providers and is being rolled out across the Yorkshire and Humber region.

Adapted from Darlington, I et al (2007) Use of Outcomes Measurement Systems within Housing and Homelessness Organisations, Homeless Link, London.

Participatory monitoring and evaluation ... creative approaches

Several survey respondents reported developing creative tools to gather data on their work and feeding this back in a variety of ways to reinvigorate discussion and provoke analysis internally. The wider review indicated that international non-governmental organisations in particular are struggling to promote a co-existence of participatory, community-based approaches and more formal monitoring approaches and performance frameworks, such as the logical framework.

INTRAC (The International NGO Training and Research Centre), working in the international development and relief sector, has recently written of the preoccupation in the last five years with a clash between new managerial techniques and participatory approaches (INTRAC 2007).

Practical Action, for example, required by donors to work to the inflexible logical framework, has also developed some examples of basic record-keeping for communities to record the basic minimum themselves and to develop their own approaches. In the UK, some of the clearest indications of participatory methods were in arts and culture organisations, work with young people and other sectors where the process and the outcome were closely allied. Methods included feedback on 'graffiti walls', facilitation, drama, puppets and games, community action meetings, and narrative and diaries to tell stories. Self-reporting could be done through the use of 'Big Brother' style feedback rooms, for example, or combined with making videos.

Play and participatory approaches are described in Views of Health 2000, a consultation published by Save the Children, and In our View, a consultation with young people available on the National Children's Bureau website. Less than half of the local Children's Fund evaluations cited by the National Evaluation of the Children's Fund (University of Birmingham 2006) involved children and young people as questionnaire respondents or participants in interviews and/or focus groups; however, of these, many indicated that they had used 'participatory' methods of engaging children and young people, tailored to ensure they were more engaged and willing to express their views. Methods used included a portable diary room, photo, drawing, and art and crafts work.

Audio-visual methods were reported by 18 per cent of third sector respondents, often as a response to a low return from more formal methods. A number of organisations described using a variety of ways of getting feedback, such as radio clips, e-groups, drama and video, texting and professional photography that could be disseminated to other organisations. Bristol Children's Fund had a skilled facilitator working with children to use cameras to interview themselves: 'Video interviews of children and young people at different stages give them an opportunity to self-reflect. We collect photos and quotes and stories about individual users of our projects, like mini-case studies or interviews, but not as daunting or time consuming for project participants'.

The wider review indicated some consultancy organisations specialising in participatory methods.11 However, much of the participatory work found lies in research and needs analysis, rather than in programme and project evaluation. One consultant using participatory methods with communities explained that she uses them for needs assessment only because of the difficulty of integrating the information into other data. Indeed, some organisations reported using creative methods, but were unclear how to analyse or use the data they obtain from them. Both funders and organisations raised the issue of how to collate participatory data, and how to use it as credible evidence. What this shows is that it is important for participatory work and creative methods to be placed within a broader framework of aims and objectives and for the analysis to be informed by an understanding of key evaluation questions.

User involvement in monitoring and evaluation ... a theme for the third sector

Involving users

There have been wider pressures for user involvement in research and evaluation, and this has been a theme for third sector evaluation. Beresford (2002) urged a thoughtful approach, and a need to be clear about whether what was being proposed was an empowerment approach or part of a consultative or managerial approach. Our research looked for examples of how users were involved in evaluation – acknowledging the range of activities this might include, from being asked their views, to carrying out the research and collecting data, analysing data and producing findings, and writing reports.

Only 7 per cent of third sector survey respondents said that users were not involved in any way in monitoring and evaluation. On the whole, responses reflected that the main involvement was in the initial processes and in providing feedback. Most organisations collected feedback from users and nearly half reported consulting them on findings; 40 per cent involved users in defining outcomes and indicators, and 20 per cent in helping to decide what to monitor and evaluate.

However, there was limited qualitative evidence from our survey respondents of any dynamic relationship through carrying out the monitoring and evaluation process. Nineteen per cent said that users were involved in collecting information and 8 per cent reported users as leading the evaluation. Both these figures appear high, as we could find little information to substantiate them. Only 11 organisations gave illustrations of any user involvement apart from providing feedback on services and outcomes. These included:

• input into an evaluation advisory or steering group

• audience or user surveys carried out by project participants

• young people putting together a questionnaire and analysing it with the help of a university consultant

• attendance at committee meetings

• service user and member forums

• contributions to newsletters and reflective journals.

Only one organisation specified that it used a user-led evaluation team for one project.

Prescap Community Radio reported a steering group of ten people – half users and half funder representatives. The steering group had considerable input into what was evaluated, formulating the questions that frame much of the evaluation. For many people it was the first time that they had been involved in anything like a steering group and it had taken a while for them to think about what they needed to ask, but Prescap reported some thoughtful questions emerging from the group.

'We have developed a service user group to evaluate services and shape future developments.'
Alcohol support agency

Involving young people

Our third sector survey and wider research suggest that involvement of users as evaluators was most frequently found in work with young people, although even here involvement was limited. A 2002 study found that more evaluations now involved children directly, but more frequently children were the subject of data collection rather than data gatherers (Alderson and Schaumberg 2002). Our wider review found some examples of young people as evaluators. For example, the Big Lottery Fund's Young People's Fund has a role for young people gathering information peer to peer, and the Fitzrovia Youth in Action project, funded by Big Lottery Fund,12 involves an evaluation by young people of clinic-based and outreach services on drugs issues and feedback into future services. Part of the Camden Young People Substance Misuse Plan 2006/07 included involving young people in planning, commissioning and monitoring through Fitzrovia Youth in Action. The monitoring activities included taking part in mystery shopping of commissioned services.

Giving a voice to multiple perspectives

It has been argued that part of the impetus for participatory approaches is to give voice to multiple perspectives and learning and new ideas.13 Indeed, there appears to be evidence that involvement by service users as evaluators has positive effects on service delivery. The Evaluation Trust reports on a user evaluation of an independent living unit, which resulted in major changes and empowerment of the people who were part of the work. A wider evaluation of Adoption UK's parent support programme, A Piece of Cake – a training programme delivered by trainers who are adopters themselves – involved a combined team from the Evaluation Trust and the Hadley Research Centre together with 12 volunteers and staff members; the volunteers and staff members carried out interviews with 66 programme participants. The evaluation reported that, because of the highly participatory approach, Adoption UK had been effectively engaged in a continuous piece of reflection since the evaluation work began, and learning was reflected in positive changes in the programme's staffing, structure, content and delivery over the two years (Evaluation Trust and the Hadley Centre for Adoption and Foster Care Studies 2007).

As part of the Supporting People Health Pilot evaluation, the Doncaster On-Track pilot decided from the outset that user representatives would play a prominent role in developing the service (Cameron et al 2006). The original bid included plans for an evaluation to be undertaken by a local service users group. A representative of this group became a member of the steering group. As the service user evaluation progressed, the evaluators made regular presentations to the steering group and their findings informed the subsequent development of the service. Not only did this approach improve the credibility of the service among service users but it was felt that it might indirectly have contributed to the high levels of engagement with the service.

Constraints to user participation

However, there are contexts that make user evaluation more or less realistic. The few examples we found of users as evaluators illustrated the amount of time that was involved and other constraints. A 2000 pilot evaluation of an empowerment project carried out by a team of four users from the Refugee Education and Training Advisory Service and Redbridge Sign-Posting Centre involved five days of preliminary training for the user-evaluators. In their report on the pilot, the user-evaluators recommended eight days of training and a more limited focus for any future replication evaluation project (Mukooba-Isabirye et al 2000). They also stressed that evaluators needed a high level of self-organisation and independence. The research programme at UnLtd makes use of participatory methods wherever possible. As part of this, a group of Big Boost award winners from the Young Social Entrepreneurship programme were given extensive training in sampling, evaluation design and interviewing methods. The participants subsequently went on to design their own survey instrument and to gather quantitative and qualitative data.

'Volunteers and service users attend strategic planning and review awaydays ... to comment on where we have been, where we are going, what we need to get there, how we know we have arrived and what difference did the journey make.'
Local community organisation

As well as the practicalities of time, training and resources in user evaluation, the examples found also raised questions about how evaluation requirements of neutrality, independence, rigour and quality can be met. An example of both the benefits and the issues to be addressed in user-led evaluation is illustrated in the case example below.

Case example: National evaluation of the Children's Fund – using 'Young Reporters'

Strand B of the national evaluation of the Children's Fund (University of Birmingham 2006),14 led by Professor Edwards and Dr Marian Barnes (based at the University of Birmingham), focused on models of preventing social exclusion. The research team primarily worked with a sample of local Children's Fund partnerships to explore the processes and outcomes of their activity.

A number of key messages emerging from early work with all 149 Children's Fund partnerships concerning participation were outlined in the Early Messages from Developing Practice report (NECF 2003):

• There was a concern to find ways of working towards meaningful participation, and to avoid tokenism and misrepresentation.

• Representative participation, involving diverse communities of children and young people, proved challenging.

• Limited time scales clashed with the need for careful preparation and confidence building.

• Participation can require considerable local cultural change and this was not an easy process for some partnerships.

Although children were involved in the design and delivery of prevention services, only a minority of local evaluators involved children more fully – through evaluation design, participation in fieldwork, analysis of data and presenting findings. 'Young Reporters' were seen to bring an informality to the research process and an ability to elicit appropriate data from staff and other young people. However, a lack of resources available to evaluators, difficulties establishing representative samples of those accessing services and the low priority given by the partnership to this type of evaluation activity were all challenges faced by local evaluators trying to engage children, young people and the wider community in the evaluation process (University of Birmingham 2006).

These learning points were drawn from University of Birmingham (2006) Local Evaluation of Children's Services: Learning from the Children's Fund, Research report 783, www.ne-cf.org


Dual approach ... combining internal and external evaluation

CES consultants and other consultants in this study reported an increasing move towards combining internal and external evaluation approaches. Evaluation consultants increasingly work with organisations to develop evaluation frameworks and tools, working with internally generated data either to support internal evaluation reports or to write a report themselves. This has also shifted the work towards starting at the outset of a project and continuing throughout, rather than being carried out only at the end. Rather than the consultant working with the organisation as an expert, there is a closer partnership between the consultant and the organisation, which may continue over a number of years. However, evidence indicates that, for this to work, it is important that each party is clear about the relationship and their respective responsibilities.

Using internal monitoring and evaluation data ... potential difficulties

Although self-evaluation can be an effective management tool, building skills and providing opportunities for internal learning, the study indicated that there are often problems in terms of using monitoring data for evaluative purposes. Many organisations have information systems geared to meeting minimum reporting requirements but may not be able to provide effective data for evaluation. Even those organisations with bespoke or customised IT systems frequently reported them as not meeting their needs. This corresponds to CES' experience of the difficulties of using the products of self-evaluation as data within larger programme evaluations. There is a further difficulty inherent in bringing together data of different quality and from different project types to evaluate under an over-arching theme.

One evaluator commented on the quality of self-evaluation data: 'Often they are not asking questions at all, or in a very patchy, threadbare kind of way. The vast amount of organisations just don't have the capacity. Much is to do with asking questions that don't give you the insight and information, and asking questions in postal survey formats that lead to fairly low responses and no consideration to biases in those responses'. This sentiment was echoed by many funders in the research. At least 70 per cent of funders found the quality of outcomes data, financial reporting, and client and output data frequently or sometimes disappointing.

This lack of comparable data drawn from internal evaluation is also a problem at programme level for funders that are trying to evaluate their own outcomes and impacts. There is an increasing emphasis by funding bodies on receiving information that can be aggregated, yet data is often being collected in ways that do not permit aggregation or synthesis. In the Big Lottery Fund's evaluation of its Healthy Living Programme, it was difficult to synthesise the information from self-evaluations. In the evaluation of its Rethinking Crime and Punishment programme, the Esmée Fairbairn Foundation pulled together self-evaluations in a pack, but then did a top-level evaluation rather than attempt to synthesise the reports. Self-evaluation data worked better and provided key information for the Big Lottery Fund's evaluation of the New Opportunities for PE and Sports Initiative (NOPES), where key indicators could be found more easily and more simply reported against across all the projects because a common approach was agreed at the outset.

'Many groups set up systems at the start but struggle to collect the information systematically throughout the grant … It is often the qualitative information which is of poor quality – mainly because of lack of time and skills.'
Funder

Case examples: Using self-evaluation in national-level evaluations

National Evaluation of the Children's Fund

The national Children's Fund evaluation – evaluating what works in prevention of social exclusion for children and young people – raised the importance of the quality of local-level data for national evaluations.15

A number of local evaluators worked to raise the capacity of Children's Fund projects to self-evaluate their work. Some evaluators worked alongside project staff in developing appropriate self-evaluation tools during the life of the evaluation, with others providing more formal training in workshops. Concerns over future resources in some cases positively drove the development of self-evaluation tools. Other evaluators were restricted in their efforts to build capacity in self-evaluation by limited resources. As well as problems with quantitative data, the differences across local programmes made synthesis difficult, and evaluators ultimately had to show the difference made by each individual service.

Supporting People Health Pilot Evaluation

The Supporting People Health Pilot evaluation illustrates the importance of developing measurable aims at local level and the need to agree on sharing data.

The six Supporting People Health Pilots represented a range of agencies from the statutory, independent and voluntary sectors across England. They were designed to explore the extent to which the Supporting People framework for policy, planning and commissioning could be used to benefit the physical and mental health of the community.

The evaluation research team worked collaboratively with each pilot to ensure that the outcomes they specified in their original submissions were appropriate and measurable, and would provide a baseline against which progress could be assessed on key aspects of the problems they were seeking to tackle. One of the tasks was to translate broad aims into more easily measurable ones. Each pilot was asked to set quarterly performance targets and to report activity against the targets in the subsequent quarter. Where possible, baseline data was identified. Reflective diaries were used to inform the development of interview schedules. Data was drawn from semi-structured interviews with a range of stakeholders and from project evaluation reports.

Quarterly project evaluation reports had a standard format and served to collect some of the data concerning process and implementation, including issues relating to joint working at strategic and operational levels. The reflective diaries captured experiential data that could otherwise be difficult to access. They formed the basis of some of the developmental work with the pilots. In some cases health professionals had concerns that sharing data with colleagues in the local authority might compromise the confidentiality of individual patients. The experience of the pilots illustrated the importance of establishing processes through which to share information at a strategic and operational level and across organisational boundaries.

These learning points were drawn from University of Birmingham (2006) Local Evaluation of Children's Services: Learning from the Children's Fund, Research report 783, www.ne-cf.org and Cameron, A, Macdonald, G, Turner, W and Lloyd, L (2006) An Evaluation of the Supporting People Health Pilots, Notes and extracts, Department for Communities and Local Government.


Reports of the National Evaluation of the Children's Fund and the Supporting People Health Pilot evaluation both show how national-level evaluations have driven the development of self-evaluation and also capture some of the limitations of the self-evaluation data produced. This is shown in the case examples on page 64.

This chapter has looked at the development of internal evaluation in third sector organisations and the approaches and methods used. It has raised issues of the quality of self-evaluation data when used for wider impact evaluation. The issues of the quality and utility of internal monitoring and evaluation data for programme evaluation will be considered further in the next chapter, which looks at some of the challenges for the third sector in external evaluation.


Notes for Chapter 5

1 'Evaluation determines the merit, worth or value of things. The evaluation process identifies relevant values or standards that apply to what is being evaluated, performs empirical investigation using techniques from the social sciences, and then integrates conclusions with the standards into an overall evaluation or set of evaluations.' Scriven, M (1991) Evaluation Thesaurus (4th edition), Sage Publications, Newbury Park, CA.

2 The European Foundation for Quality Management (EFQM) Excellence Model is an over-arching framework for continuous improvement.

3 Information from Community Technical Aid Centre website www.ctac.co.uk 2.04.06. This website is no longer available as CTAC is no longer trading.

4 The logical framework analysis links evaluation directly with project management through a framework or matrix showing the project's aims and objectives, the means required to achieve them and how their achievement can be verified.

5 See the systems developed through the National Centre for Business and Sustainability, Social Enterprise Partnership, new economics foundation and the Institute of Social and Ethical Accountability.

6 These are related concepts. 'Deadweight' relates to the extent to which the benefits of an intervention might have happened anyway. This 'deadweight' needs to be deducted from the assessed change in order to assess the real amount of impact. 'Attribution' is claiming that a particular result is because of a particular intervention. This is an issue for both outcomes and impact evaluation, particularly when the effects of other agencies or societal influences are likely to be felt.

7 See www.thesocialenterprisepeople.co.uk

8 See www.sel.org.uk The Social Enterprise Performance Measurement toolkit follows the same basic logic as the CES planning triangle described on page 54.

9 This approach was originally proposed by Rick Davies, an independent monitoring and evaluation consultant for development aid programmes in Africa and Asia, who has been a leading proponent of participatory approaches. See www.mande.co.uk

10 See www.ces-vol.org.uk for further information on these tools.

11 For example, Community Involvement Group, www.communityinvolvement.org.uk

12 See www.biglotteryfund.org.uk, Wednesday 25 October 2006 12:48, Tolerance in Adversity Backing from Big Good Cause.

13 See the work of SOLAR, Social and Organisational Learning as Action Research, at the University of West England. See also Bennett, F with Roberts, M (2004) From Input to Influence: Participatory Approaches to Research and Inquiry into Poverty, Joseph Rowntree Foundation, York. Also Percy-Smith, B (2006) 'From consultation to social learning in community participation with young people' in Children, Youth and Environments, 16(2): University of Colorado, Boulder, CO.

14 See www.ne-cf.org The National Evaluation of the Children's Fund has been a research programme run by the University of Birmingham, in partnership with the University of London Institute of Education and ICT specialists Nemysis.

15 Ibid


Chapter 6

External evaluation ... the challenges of demonstrating value

Focus of chapter 6

This chapter reports on external evaluation in the third sector, and discusses questions concerning expectations, quality issues and different agendas. It looks specifically at the drivers for evaluating impact, and the learning from impact assessment methods. The chapter concludes by reporting on the growing interest in assessing the monetary value of third sector delivery. The research found:

• Funders and others commissioning evaluation frequently found external evaluations disappointing.

• External evaluations could be improved through a number of key elements, including being clear about purpose, developing more critical analysis, and building in time for organisational learning and disseminating findings.

• Third sector organisations are increasingly asked to demonstrate their contribution towards economic priorities and their investment value or return, and organisations often agree to a level of impact that is not realistic or feasible to measure.

• Large third sector organisations are developing their ability to assess their overall performance and impact.

• A focus on complex, technical methodologies may reinforce the emphasis on external information requirements and be at the expense of developing lighter, practical models that build internal capacity and learning.

External evaluation ... how third sector organisations are involved

Over one-third of surveyed third sector organisations had had an external evaluation in the past, and altogether 45 per cent had either had an external evaluation, were currently having one or planned to have one.

External evaluation is evaluation that is carried out by an agency or individual not directly responsible for the programme or activities evaluated. As such, it may have concerns and criteria that do not derive from the organisation itself. In practice, hard distinctions between internal and external evaluation have become less clear as external evaluators work with participatory methods and engage with stakeholders to design the evaluation and work with internally-generated criteria.

Among survey respondents, smaller organisations, those identifying themselves as projects and more recently established organisations were less likely to have had an external evaluation in the past. Larger organisations were more likely to report having had an external evaluation, probably reflecting the greater complexity of the organisation and greater availability of resources.

More generally, third sector organisations may be involved in external evaluations in a number of ways – through, for example, project-level evaluations commissioned either internally or by a funder, as part of programmatic evaluations commissioned by national organisations, or as part of a programmatic evaluation based regionally or nationally. How organisations are involved in external evaluation, and the extent to which external evaluation is internally driven, will combine to affect how organisations learn both from the findings and the process itself. National organisations in this study had a variety of criteria operating to trigger external evaluations. Tearfund, for example, commissions external evaluations of any project over £250,000. But criteria are often more flexible, as shown in these examples.

• Help the Aged reported that it evaluates time-limited work, if funding bids allow, and has commissioned evaluations in most of its areas of support, advice, regional and national development, and volunteering, some of which are available as published reports.

• Refugee Action reported that it has flexible criteria, but the trigger for an external evaluation is often related to whether a project is innovative, impacts on quality or whether it is likely to demonstrate good practice.

• NCH Action for Children reported that its external evaluations were mostly triggered by safer care issues.

For some third sector organisations, their experience of evaluation is as part of a national programmatic evaluation and this may expose them to more rigorous methods than purely local-level evaluation.

For example, Moving People is an £18m programme over four years (2007/11) which includes 28 community projects. It is a diverse programme of national and local activity to reduce stigma and discrimination linked to mental ill health, and improve the physical and mental wellbeing of people who have experienced mental health problems and those who have not. It is led by four mental health organisations – Mental Health Media, MIND, Rethink, and the Institute of Psychiatry, King's College London. The evaluation is being done by the Institute of Psychiatry with a £2m budget. It includes pre- and post-testing of attitudes, viewpoint survey, a Department of Health survey and research on employer attitudes. The evaluation is also using the new economics foundation's wellbeing portfolios, for which it has developed a common assessment tool.1

The quality of external evaluations ... issues about scoping, methodology and evidence

Funding for external evaluations

The research evidence suggests some disenchantment by funders with external evaluations. While half of the funders' survey respondents sometimes required external evaluations, only six funders required external evaluations frequently. Less than half of the sample funded external evaluations and the triggers for external evaluation varied as follows:

• when an organisation or project offered specific learning (53 per cent)

• as part of a themed grant programme (47 per cent)

• in response to a request (37 per cent)

• for high-risk projects (33 per cent)

• for grants over a certain sum (20 per cent).


Forty-two per cent of funders found less than half of externally commissioned evaluations useful. The chief problem reported lay in the quality of the evaluation specification and research methodologies; also important was the adequacy of the evidence. Funders described evaluations as often being unclear about the intended audience, too wordy, not sufficiently analytical and not addressing key questions.

Expectations of external evaluations

CES' experience is that expectations by those commissioning external evaluations of what they should deliver are now higher than 20 years ago. Increased technological capacity – both the ability of software to deliver data and the overall presentation of reports – has had a strong influence on this. However, funders in the research also expressed a sense of disappointment about the quality of external evaluations that goes beyond the quality of the data. A number of key elements emerged in the study as being important to different stakeholder groups commissioning evaluations. These include:

• having clarity about the key purposes of and audiences for the evaluation and the appropriate formats for presenting information and where and how it is needed

• developing a better grasp of the programme and committing sufficient time at the brief and design stage

• having a theory of change, so that variables likely to affect the performance of the project or programme can be identified and assessed

• moving away from 'good news' stories and delivering what people want to hear, and developing more critical analysis and learning

• ensuring that the evaluator's own personal research or other agendas do not impinge.

These issues are discussed further here.

Evaluators of choice

University departments are often the evaluators of choice for the specific subject area expertise required for sub-sector large-scale evaluations (such as health, families and regeneration) and can bring a technical rigour to evaluation studies. Additionally, we found examples of university departments linking with consultancy organisations to obtain complementary expertise. However, research participants often criticised evaluations as being over-academic and jargon-ridden. Lambeth Crime Prevention Trust made an initial false start to an external review, with the trustees and staff unhappy with the work carried out by university-based consultants, but were pleased with the work of a consultant with a background in the voluntary sector. 'You don't want an evaluator to be just repeating back what you have told them. You want some analysis of the situation.'

Commissioning skills

Evaluators, national organisations and funders all pointed to inadequate commissioning skills. CES' more general experience is that those commissioning evaluations frequently do not adequately specify the work and often have unrealistic expectations of what can be done with available resources. An examination of what works in what circumstances will require sufficient funds to look at the context and process information and to support the relevant analysis – rather than just providing outcome data.

Some evaluation commissioners expressed dissatisfaction that external evaluators had not understood the project sufficiently. Yet, except in large programmatic evaluations, we found that time was rarely factored in for preliminary scoping work at the commissioning and evaluation design stage – squeezed by both resource issues and time constraints.

'Good evaluation is very hard to do and there do not seem to be the skills in the sector to do it. We are not impressed generally about the quality of external evaluators out there.'
Funder

Some funders accepted that where evaluation primarily served accountability purposes it was difficult for the organisation evaluated to be open about areas of difficulty, or issues concerned with management and implementation – even where there was an apparent learning agenda. This indicates that if there is to be a learning function, evaluation proposals and designs need to establish frameworks that can receive and process feedback.

Learning opportunities

There is also a danger – well documented in evaluation literature – of role confusion where a programme or project is moving from one phase to another, reconsidering refunding or developing strategy. Here, evaluators may be expected implicitly or explicitly to make management decisions, and there is disappointment when recommendations do not do that but rather point out the implications of findings.

The evidence suggests that if the main purpose of evaluation is to help funded organisations become learning organisations – to identify weaknesses and correct them – funding technical support for self-evaluation may be a better option than external evaluation in many cases. Given the specific nature of this information, such self-evaluation may not provide information that is needed for the funder's own accountability purposes and to help them make funding decisions, and these monitoring needs may need to be specified separately.

Evaluating impact ... need for an agreed definition and understanding

Within the third sector there is a degree of confusion in the use of the term 'impact'. Impact is sometimes used to describe all the results achieved by an organisation, or is sometimes used interchangeably with 'outcomes'. It is also used to imply the net effects of an intervention – the changes brought about by the project or programme above and beyond those resulting from other processes and events. Research participants were most likely to recognise the following distinction between outcomes and impacts:

• Outcomes: the more direct benefits or effects of a project, within the control of the project, although there may be unexpected or unwanted outcomes.

• Impacts: the wider and often more long-term consequences, for example for the social, economic or political environment. These may be positive or negative and are more likely to be influenced by the interplay of the context with the intervention over time. The 'net effect' was also a concern for some assessing impact.

This distinction is common to many of the definitions found in evaluation literature and practice (Astbury 2005) and was found within the context of the large-scale programmatic or national-level evaluations reviewed.

Common understandings of the terminology are important to the extent that they help clarity about the level of change on which information is required, and what information is feasible or useful to collect. Most of the funders in our survey did make a distinction between monitoring and assessing outcomes and impact, pointing to the difficulties of demonstrating impact within the short timescale of many grants. The City Bridge Trust, for example, required organisations to provide information commensurate with the level of funding awarded, and this affected whether impact information would be expected.

Setting measurable aims

CES' experience and the assessment of some funders in this research is that impact evaluations are frequently asked for when it is not realistic, and that organisations often collude in this when applying for funding by declaring unrealistic expectations of impact. In the Health Pilots evaluation one of the first tasks for the evaluation team was to help pilots develop measurable aims (Cameron et al 2006).

For example, the original aim in the Doncaster pilot was to reduce suicide. The report pointed out that, leaving to one side the issue of causal attribution, this impact would be difficult to demonstrate, and the pilot developed a number of proxy indicators such as engagement with services and sustainment of tenancy. In Salford, the pilot's aims originally included reducing the incidence of death caused by accidents, and promoting health and active life for all older people. Again, the difficulties of identifying reliable indicators for these ambitious aims meant they had to be put aside. This was partly because of the challenge of attributing causal links between these outcomes and the project, but also because of the difficulty in demonstrating changes in low-frequency events such as death by accidents.

(Chapters 4 and 5 described the difficulty of using local outcomes data for aggregation.)

The research interviews and review of evaluations also underlined the difficulty of using local data for programme-wide evaluation, whether generated internally or by independent evaluators:

• When different evaluation designs are applied at local level, there are real difficulties in collating and synthesising data.

• Collecting and aggregating data against centrally identified outcomes may miss the learning from local implementation and important differences in service delivery.

One evaluator experienced in third sector programme evaluations reported that the variable capacity of organisations to assess and report had created problems not only for aggregation, but in obtaining an over-arching view and when making comparisons.

Potter (2005) describes a methodology to address these difficulties by providing an over-arching framework at the beginning of the evaluation that groups individual projects by theme and formulates evaluation questions that transcend the issue of the performance of a single local project, rather than synthesising summatively. At the same time, allowances are made for the individualities of each project.

Practical examples

The Yorkshire Forward programme evaluation provides a challenge in terms of evaluating projects brought together within a programme, but with little commonality. Evaluators at the Centre for Regional Social and Economic Research (Sheffield Hallam University) have drawn up a framework with six simplified themes, each with a dedicated package of research. Also important is the establishment of a logic chain for each of the six funded elements, with at least 17 identified theories of change underpinning the programme. Sheffield Hallam evaluators are also delivering a capacity building component to the evaluation in partnership with consultants. More generally, from CES' experience, the amount of resources available for such capacity building is critical to how far it can affect the quality and quantity of locally-produced data.

Case examples: Evaluating the impact of grant programmes

Evaluation of the Phoenix Development Fund

An evaluation of the Phoenix Development Fund (Freiss 2005) demonstrated the need for funders to encourage good data collection in funded organisations in order to have good quality data for their own programme evaluation.2 Evaluators asked highly relevant questions such as:

• Did the Fund encourage fresh thinking?

• How effective were specific project approaches?

• To what extent did projects successfully help particular sections of the community?

• To what extent did the Fund help to engage mainstream providers?

• Did funding help to build capacity?

The evaluation was able to present data on aggregated hard outcomes, such as the number of businesses and social enterprises created and the numbers leaving benefits and entering education. Evaluators also dug down to look at particular case studies. However, they found that few projects systematically collected robust client information. In general, output and outcome information was collected only in order to report to outside agencies, particularly funders.

This research has found the issue of variable data quality and of project diversity particularly important in impact assessment. The City Parochial Foundation, the Equitable Charitable Trust and the City Bridge Trust combined their funding in a Funders' Alliance in order to tackle the problem of school exclusion in the London Borough of Merton, funding two voluntary sector organisations and three projects working with children and young people, each offering different sorts of intervention. They found difficulty in making comparisons across them to see the overall impact of the three together because of the differences in timetable and differences in projects (Unwin and Field 2003).

The evaluation of the Defra Environmental Action Fund 2002 (CAG Consultants with Northumbria University 2005) reported that evaluating impact was made problematic because of the diversity of projects across a wide range of policy areas and delivery approaches. More focused funding would have led to a greater combined impact within chosen policy areas and a greater focus for evaluation and impact analysis. The quarterly reporting structures and the inclination of the voluntary and community organisations had the effect of focusing monitoring towards outputs and short-term outcomes. As a result, it was difficult to draw conclusive lessons about longer-term outcomes and impact.

Funders are increasingly moving from evaluation of individual projects and programmes to evaluating the impact of their funding programmes, and the quality of data produced by their funded projects becomes a critical issue, as shown in the accompanying case examples.

The question of attribution ... disentangling the effects of interventions

Emphasis on evaluating impact has raised the question of attribution and net effect – how to isolate the effects of a particular intervention from other external factors that may influence outcomes. As one funder expressed it: 'How do you know whether it was your £5,000 that made a difference?'

Many public sector evaluators consider that experimental and quasi-experimental designs and standardised measurement favoured by government departments3 are expensive, can take a long time to get results, and are rarely conclusive or widely applicable at programme level. In general, those expecting rigorous designs are often looking for a cause-effect relationship, rather than a 'good enough' picture to put value on an intervention. In both the public and third sectors, evaluations are carried out over the short term and real impact, even with large programmes, may not be seen for several years. In CES' experience, before/after comparison studies are the most common quasi-experimental approaches in third sector evaluation, and these still present difficulties of disentangling the effects of interventions from other fundamental changes, such as central government or local authority interventions.

Using a control group (a group that does not experience the intervention) is rare in third sector evaluation. CES set out to use a control group in its four-year evaluation of the Fairbridge foundation course (2001-2003) – then working in a context of increasing interest in hard evidence of success from the Social Exclusion Unit and HM Inspectorate of Probation. However, finding people who were legitimately comparable to the group going to Fairbridge, but who had not attended the foundation course, was not without its difficulties (Astbury 2002). A control group was useful in the Every Child a Reader evaluation described in the case example on page 73.
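To make the attribution problem concrete, the 'net effect' that a before/after design with a comparison group tries to estimate can be sketched as follows. This sketch is purely illustrative and is not drawn from the research reported here; the notation and figures are hypothetical:

\[
\text{net effect} \approx \left(\bar{Y}^{\text{after}}_{\text{participants}} - \bar{Y}^{\text{before}}_{\text{participants}}\right) - \left(\bar{Y}^{\text{after}}_{\text{comparison}} - \bar{Y}^{\text{before}}_{\text{comparison}}\right)
\]

For example, if participants' average scores on some outcome measure rise from 40 to 60 over a year while the comparison group's rise from 40 to 50, the change plausibly attributable to the intervention is (60 − 40) − (50 − 40) = 10 points, not the full 20; the remaining 10 points is the 'deadweight' that would have happened anyway (see note 6 to chapter 5). Even this simple arithmetic rests on the assumption that the two groups would otherwise have progressed at the same rate – precisely the assumption that is hardest to defend when a well-matched control group cannot be found.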

Case example: Evaluation of the impact of Greater Bristol Foundation's grants programme

Greater Bristol Foundation has made grants to locally-based charities and community groups in the former Avon county area since 1987. It was expected that grants totalling over £2m would be made in the financial year 2003/04. An evaluation was carried out between January and March 2004 to determine the impact of the Foundation's grant making during 1998 to 2003. Impact was looked for in relation to individual organisations, their ability to collaborate within the sector and on the area as a whole.

The evaluation methods included a review of grant applications and other data, a questionnaire sent to 300 grant recipients, and a small number of interviews with a sample of grant recipients and local strategic agencies. The evaluation highlighted the difficulties that many small organisations face in thinking about and identifying outcomes and impacts for their users and the wider community, as many beneficiary organisations struggled to answer questions in these areas.

The final report drew attention to the need to have data on the monitoring forms about benefits to users and the difference made to the wider community. It also concluded that, whatever data collection methods were used in the future, studies would be dependent on groups' ability to understand impact evaluation and that there might be difficulties with groups trying to give the answers they thought funders needed to hear.

These learning points were drawn from Freiss, PR (2005) Evaluation: The Phoenix Development Fund: Final Report and Whitfield, L (2004) Evaluation of the Impact of the Greater Bristol Foundation's Grant Programme.


In the Health Pilots evaluation (discussed on page 64), in most cases health pilots came to the conclusion that it was unlikely that they could generate evidence that outcomes were directly and solely attributable to their work. What they could do was gather information about the likely contribution of the pilot. Even with this, pilots experienced difficulties in obtaining information that required long-term tracking and collection of data by other agencies.

Case example: Every Child a Reader – using a control group to evaluate outcomes

Background

The Every Child a Reader programme is a collaboration among charitable trusts, the business sector and government. The three-year, £10m initiative has funded highly-skilled Reading Recovery teachers in inner-city schools to provide intensive help to children most in need. It has the longer-term aim to secure sustainable investment in early literacy intervention, and to explore how intensive support in reading can be provided most cost-effectively in a national context.

Annual evaluations were a vital part of measuring the effectiveness of the programme and its potential for further roll-out. They were funded by the partners from the overall £10m raised for the project, half of which came from the DfES (Department for Education and Skills), the other half from a partnership of trusts and businesses. The year one evaluation, carried out by the Institute of Education, cost £70,000.

Methods for control group comparison

The first year of the evaluation (2005/06) compared the literacy progress of the lowest achieving six-year-olds who had access to Reading Recovery with that of children at similarly low achievement levels in schools with no access to the programme. Two hundred and ninety-two children were identified in the Every Child a Reader Year 1 class – 145 in 21 Every Child a Reader schools and 147 in the 21 comparison schools. These 292 children were diagnostically assessed at the beginning and end of the 2005/06 school year.

Information was also collected on which literacy interventions these children had accessed during the year. In the Every Child a Reader schools, 87 children had received Reading Recovery. This group of 87 children was the sample compared with the 147 control group sample of lowest achieving children from comparison schools. The detailed assessment carried out at the beginning and end of the children's Reading Recovery programme provided information on the range of different skills they had developed.

A further assessment of achievement was planned for the end of the 2006/07 school year by collecting key stage results and a whole class measure of spelling and reading administered to monitor the sustainability of earlier gains. Qualitative data was also gathered through termly progress reports submitted by schools.

Although this evaluation is highly regarded, setting up the control group was not without issues. There was an incentive for control group schools to take part as they were given a guarantee of full participation in the programme the next year. Although the work with the control group was straightforward, the risk of external factors playing a role in outcomes remained. As one funder in the partnership said: 'Every time you do an evaluation without a control group you have to qualify it by taking into consideration other effects in their lives. When you add in a control group, you have to have double the amount of health warnings'.

Research participants recognised that it may be particularly hard for preventative programmes to establish impact, which may only become clear after a long period of time and with resources directed into a population with definable boundaries, and sufficient to make a change of the required magnitude.

For example, a programme funded from the Single Regeneration Budget aiming at homelessness prevention in London identified risk factors in young people as young as ten, on issues such as parenting and school exclusion, and intervened at this level. In general, it takes many years for the impact of such early intervention to emerge, and with relatively small project interventions across a wide area it is difficult to dissociate effects of the programme from other influences. One funder said that when statutory services pointed to what they considered to be good impact evidence, it was often unachievable for small organisations, because the organisations they were working with did not work with the numbers that would demonstrate those changes – on educational achievement, for example.

Some organisations in our third sector survey raised the question of whether a demonstration of impact was reasonable, given the size, scope and remit of their work, and given the difficulties of quantification and attribution, although they were being required to provide such evidence. Preston Community Radio pointed out that its funders were regeneration bodies looking at improvements against targets in the area and community, but that there were difficulties in linking the real benefits from involvement with community radio to crime reduction targets, for example.

The difficulties and ethical dilemmas of identifying and integrating control groups are problems faced by, and familiar to, the wider evaluation community. Such problems are even seen in public policy evaluation, where human and financial resources are not necessarily a constraint (Martin and Sanderson 1999). It is therefore hardly surprising that the third sector faces difficulties associated with measurement, attribution, causality and validation of outcomes and can experience difficulty when using experimental and quasi-experimental methods.

Impact across regional and national bodies ... the new challenge

Third sector organisations are increasingly being required by regional development bodies and government funders to produce impact information at a regional and national level. For example, among our research participants:

• A key challenge for Regional Youth North East is demonstrating impact as a regional organisation.

• BTCV, the environmental volunteering charity, recently employed NFPSynergy consultants to look at an impact analysis process across the organisation and to understand the impacts of its work in environmental, social and economic terms, drawing from best practice elsewhere.

• Action for Blind People is trying to translate outcomes information – in terms of jobs, numbers into employment and numbers into new homes – into impact measures across the organisation.

For some national organisations this is a completely new challenge. Early impact reports for public consumption, providing headline information about annual achievements, showed considerable confusion about impact. The literature review showed that this reporting has developed in recent years.


Groundwork UK’s 2006/07 report provides a good example of a synthesis, orpresentation of cumulative ‘impact’ of its outputs and outcomes, shown inthe case example below.

International non-governmentalorganisations have institutionalisedimpact evaluation since the mid-1990s, but there have been warningsfrom international developmentagencies about the difficulty of large-

scale statistical studies, particularlywhen the catchment area is fluid orunclear. Plan introduced a corporatemonitoring and evaluationprogramme across the organisation inthe late 1990s, and reported the

6External evaluation...

Groundwork UK – reporting on impact

Groundwork UK is a national federation of charitable trusts whose aim is to buildsustainable communities in areas of need through joint environmental action. In 2006/07,Groundwork generated and invested around £127m in practical activities to supportregeneration and promote sustainable development.

During 2006/07, Groundwork developed a new approach to evaluating the impact of itsactivities. Each evaluation is undertaken in partnership with the Centre for Local EconomicStrategies (CLES) using a team of external assessors. The evaluation methodology has beendeveloped with support from a number of organisations, including the Home Office, theTreasury, the Department for Communities and Local Government, new economicsfoundation, the Audit Commission, Greenspace, the Community Development Foundation,University of London and the University of Cambridge.

Groundwork’s reportMaking a Difference: Impact Report 2006–07 has clear sections onthe following:

• its overall mission and scope and remit of its work

• how it worked in key areas of improving the physical and natural environment, building sustainable communities (outputs)

• the difference it made to the way people involved and affected think, feel and act (qualitative outcomes)

• the extent to which it helped businesses to reduce waste and improve resource efficiency, and helped to improve green spaces (quantified outcomes)

• how it delivered its approach to finding ways of tackling social and environmental problems – through partnership, innovation and sustainable solutions.

The report includes a learning section, which reports on some of the findings from external evaluations and provides more qualitative data. Another section – ‘Sharing our experience – influencing others’ – reports on achievements in relation to policy change. The report also provides illustrative case studies.

Groundwork uses a common set of output and outcome measures and a range of performance indicators, sets targets as part of its business planning process and reviews performance to monitor progress. During 2006/07, the organisation redeveloped its intranet to support the sharing of good practice and collated over 120 project case studies.

These learning points were drawn from Making a Difference: Impact Report 2006–07, www.groundwork.org.uk



Plan introduced a corporate monitoring and evaluation programme across the organisation in the late 1990s, and reported the difficulties of retaining meaning and accuracy when information was aggregated at national, regional and global levels. Furthermore, there was a real danger of corporate indicators driving strategy at programme level.

Debate among international non-governmental organisations raises concerns about the methodological problems of measuring the effectiveness and impact of complicated development programmes (Hailey and Sorgenfrei 2005). A greater importance attached to contextualisation necessarily led away from aggregation. To some extent this led to a de-emphasis on formal evaluation and a greater emphasis on integrating evaluation into programme management (Mansfield 1997). Roche (1999) has urged an approach that places intervention within the wider context and enables understanding of the relative importance of interventions in relation to other changes going on.

Global impact reporting ... an increasing trend

The research indicated an increasing trend for international development organisations to report on their impact across programmes, or worldwide. Oxfam has been producing annual impact reports since 1999, drawing together field-level participatory evaluations, ongoing monitoring reports and regional impact reports against its overall aims and strategic objectives (Oxfam 2005). Plan produced a global impact report for the first time in 2007. Based on an extensive review of internal and external documentation across all the 46 countries in which Plan works, it identifies strengths and weaknesses and is designed to be the basis for planning future work (Betts 2007). Tearfund synthesises local information at a regional level, and also produces an evaluation synthesis report, reviewing some 20 evaluations commissioned across the year.

Giffen (2005) expresses concern that such global impact assessments are primarily a tool for marketing. She identifies the risk of the search for impact becoming linked to the need to demonstrate achievement of strategic targets, and suggests that evaluation developed locally for learning – to understand change – risks being misused to demonstrate impact.

Cross-sector impact ... the value of a strong evidence base

A 2006 Treasury report emphasised the need to demonstrate the third sector’s impact as a whole more persuasively through a stronger evidence base (HM Treasury 2006), and the issue of evidencing the effectiveness of particular sub-sectors has also emerged.

A Rapid Evidence Assessment of the benefits of voluntary and community sector infrastructure in 2006 found it difficult to bring evidence together in any cumulative sense to gain an impression of its overall or aggregate impact – finding the evidence base somewhat ‘fragmented and disparate and not particularly substantial’. The Rapid Evidence Assessment found very few longitudinal or comparative research designs, or value for money or cost-benefit studies (Macmillan 2006). Sub-sector specific initiatives to build an evidence base include community development, housing and welfare rights. However, even within sub-sectors, the diversity of provision is an obstacle to finding unitary measures of impact.

A 2006 study reviewed the literature assessing the impact of welfare rights advice. This is shown in the case example below.


Assessing social capital ... focus on community capacity building

In Northern Ireland the impact of new public management has been tempered by an emphasis on collaboration and negotiation in inspection and review. The concepts of social capital and community capacity building have been central to the notion of community cohesion, particularly the focus of voluntary sector funding (McNamara et al 2007). Community Evaluation Northern Ireland (CENI) has built a framework for indicators relevant to public funding for the voluntary and community sector, using the concept of social capital to capture the notion of community process (McDonnell 2002).

Social capital evaluation has been less prominent in England, except in the community development sphere. A 2007 survey by Community Development Exchange identified CENI indicators on monitoring and evaluation used by community development practitioners, showing how the community demonstrates internal cohesion, engages with other communities and links with other agencies and policy makers (Longstaff 2008).

UnLtd, the foundation for social entrepreneurs, aims to help social entrepreneurs (some 5,700 award winners) by offering a tailored package of support and financial assistance. UnLtd is developing an assessment of social capital, in terms of networks of support and achievements of its award winners. It is looking at the potential to use mapping of relationships, and assessing the skills and relationships developing in the sector. An online social network, recently launched, will itself provide a potentially rich source of data.


Case example: Assessing the impact of welfare rights advice

A 2006 review of the literature on local authority and voluntary sector welfare rights advice and its benefits found local economy gains from improved take-up of benefits as a result of welfare rights advice, because of a multiplier effect (Wiggan and Talbot 2006).

Examples included an impact assessment of the Neighbourhood Renewal scheme Poverty and Income Maximisation in Newham, which drew on national and local data covering a range of measures, such as take-up of pension credit, local recipients of housing benefit and council tax benefit, to set a baseline. It reported delivering an increase in income of £9,065,200 (London Borough of Newham Social Regeneration Unit 2005).

A review of a three-year (2000/03) welfare rights take-up project in Yorkshire and Humberside, run by the Royal National Institute of Blind People (RNIB) to improve take-up of benefits among visually impaired people, reported a substantial monetary impact (RNIB 2003). RNIB estimated that around £916,000 per year of additional income (April 2000 to January 2003) was received by people with sight difficulties and their carers.

A 2003 study calculated that Brighton and Hove Citizens Advice Bureau raised an additional income for its clients of £676,000 (Ambrose and Stone 2003). The true economic benefit was, however, much higher because of the multiplier effect in the local economy. Drawing on the local multiplier toolkit (LM3) developed by the new economics foundation,4 the study concluded that the economic impact of the initial £676,000 raised should therefore be multiplied by 1.7, giving a total financial gain to the local economy of £1,149,000.

This learning was drawn from Wiggan, J and Talbot, C (2006) The Benefits of Welfare Rights Advice: A Review of the Literature, Manchester Business School, University of Manchester, commissioned by the National Association of Welfare Rights Advisors.
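The multiplier arithmetic behind figures such as these is straightforward. The sketch below, written in Python purely for illustration, reproduces the Brighton and Hove calculation; the 1.7 multiplier is the value estimated in that study and would need to be derived separately (for example, through LM3 surveys) for any other service.

```python
# Illustrative sketch only: a simplified local-multiplier calculation using the
# figures quoted in the Brighton and Hove example above. The 1.7 multiplier is
# the value estimated in that study, not a general-purpose constant.

def local_economic_impact(direct_gain: float, multiplier: float) -> float:
    """Estimate the total gain to the local economy from a direct income gain."""
    return direct_gain * multiplier

direct_gain = 676_000   # additional benefit income raised for clients (GBP)
multiplier = 1.7        # local multiplier estimated in the study

total = local_economic_impact(direct_gain, multiplier)
print(f"Estimated total gain to the local economy: £{total:,.0f}")
# Prints approximately £1,149,200, which the study rounds to £1,149,000.
```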




bassac, the national network of community-based organisations, has focused in recent years on introducing an impact focus to community organisations. bassac first introduced concepts of wellbeing, quality of life and sustainable development to community anchor organisations – organisations that provide support to self-help action (Church 2005). This is described in the case example below.

Demonstrating ‘value’ ... distinctive value, added value and value for money

Beyond the question of outcomes and impact, third sector organisations are also being asked to assess the ‘value’ they bring to public service delivery – their distinctive value, their added value, their strategic added value and their value for money.5 The notion of value is central to evaluation. However, there is considerable lack of clarity about what these different concepts mean, how they are related or distinct, and how they can be assessed and demonstrated.

Added value
The Audit Commission itself introduced the notion of added value when it explained value for money concepts, referring to intangible benefits such as greater flexibility, innovation, knowledge and expertise (Audit Commission 2007).


Case example: bassac – developing an impact focus with community anchors

The feedback and experiences of bassac’s members in 2000/01 showed bassac that it needed to shift the focus of community anchors away from the individual project-level information gathering required by funders, to focus on the effectiveness of the organisation as a whole – or impact.

With funding from the Community Alliance, bassac’s Changing Places, Changing Lives, printed in 2005, aimed to introduce the concept of impact to community-based groups, targeted primarily at bassac members. With the help of the Baring Foundation, the methodology was published in mid-2007 as a toolkit, ChangeCheck, aimed at community anchors and accompanied by workshops and training for development workers, to cascade the knowledge. Recognising that measuring impact was not likely to be within the grasp of small organisations, the methods focus on how organisational impact is experienced by different audiences, based around perception of the Audit Commission’s eight categories of community wellbeing.

bassac found that, although a more positive response has built gradually, the take-up of consultancy support was slow with those for whom impact was new; feedback on the usefulness of the process has been positive and enthusiastic. However, it remains challenging for community anchors to resource the exercise, and only a small number of organisations had gone through the process: ‘There has been a striking mismatch between what we put on the table and what people could pick up’. bassac’s response has been to revisit the process and method in order to engage community anchors more effectively. bassac has maintained the emphasis on whole organisational and strategic impact, ‘rather than the narrower and more common focus on the outcomes of individual projects’.


Somerset Community Food explained how it expressed added value:

We are offering a lot of support to marginalised people. A lot of added value, for example, young mums groups – it’s not just about coming and getting health food, it’s also a chance for them to meet up and get their babies weighed. That’s what I mean by added value. You’ve got to argue everything from first principles. We want to tell social services that this is a great way to work with their clients and that it will have lots of knock-on effects that would save them money in the long term.

However, Blackmore (2006:11) has argued that government recognition of the added value that third sector organisations can bring to public service delivery has to an extent ‘muddied the waters’, as it has led many to try to quantify it, whereas the real point was that commissioners needed to be clear about ‘what value it is they are seeking for a particular service, and which potential providers are best able to provide that value’. A 2007 Performance Hub publication argues that the concept of added value is unhelpful and that a more meaningful way forward is to think in terms of ‘full value’, which includes an organisation’s outcomes and the satisfaction of its users (Eliot and Piper 2007).

‘Strategic added value’, however, has become an element within evaluation briefs, although there is no uniform approach. It has been included in the regional development agencies’ impact framework, and a measure of strategic added value must be demonstrated to the Department of Trade and Industry (DTI) and the Department of the Environment, Transport and the Regions (DETR), making the link into regional and economic strategies. Strategic added value may encompass issues such as strategic influence, partnership working and leverage. It has been described as a subjective measure, often based on opinion, assessed through qualitative methods.6

Value for money
The 2007 Audit Commission Hearts and Minds report emphasised the importance for local authorities of shifting their emphasis to value for money information from voluntary sector providers, requiring consideration of outputs and outcomes as well as inputs. Government has defined value for money as ensuring that funds are spent in a way that minimises costs, maximises outputs and achieves intended outcomes (Cabinet Office 2000, 2006), or as getting the best possible outcome from any given level of input (HM Treasury 2006). The Audit Commission report stated that local authority staff and voluntary sector providers attached different meanings to value for money and neither assessed it well (Audit Commission 2007).

The report focused on local authorities and made little connection to the skills or methodologies that might be needed by third sector providers to demonstrate value for money. A number of third sector organisations in our survey sample reported that funders required value for money information. Although, for comparison of value for money, much depends on the consistency of the measurement, there were substantial variations in how value for money was interpreted by funders: in some cases it was assessed through achievement against organisational purpose, or cost-effectiveness, whereas others looked at efficiency targets, which meant either reducing management costs or increasing the number of users.

In general, the third sector has little track record in carrying out value for money studies. Where it exists, the value for money component is often tacked on the end of an evaluation study and carried out independently of other parts of the evaluation.


In Chapter 4 we reported that internal outcome monitoring is rarely related to implementation, design or efficiency issues. But those studies reviewed have shown that value for money evaluations are not simple exercises. The 2001 evaluation of the Crossing the Housing and Social Care Divide programme demonstrated how projects were vulnerable and failed to get renewed contracts because they were unable to produce evidence on costs and benefits to feed into value for money assessments:

Few projects were able to calculate outputs and to link these with inputs, let alone specify how the production of outputs might contribute to the achievement of longer-term outcomes. (Cameron et al 2001:5)

The value for money study that formed part of the Defra Environmental Action Fund evaluation was carried out through a review of application forms, and of processes and management of delivery, and a quantitative assessment based on achievement of targets and outputs. Judgement on value for money included issues such as wide geographical extent, engagement with a range of communities and work covering a range of themes rather than single issues. Value for money criteria were also contained within an evaluation of ‘strategic value’. These criteria were providing innovative features; a model that could be replicated; and whether the project worked with an umbrella organisation (thus being more cost-effective than funding individual organisations). Overall, evaluators found it a challenge to provide a ‘quantifiable’ value for money assessment, as a number of assumptions were made about the appropriateness of planned performance levels (CAG Consultants with Northumbria University 2005).

Case example: Women’s resource centres – capturing monetary values

A study of four women’s resource centres in 2006 presented problems in capturing the whole cost of providing the service, because of the difficulties of including ‘non-funded’ indirect or in-kind inputs (such as the use of donated premises and volunteers). The cost consequence analysis used allowed the relevant costs and effects of the intervention to be measured, but did not produce one single effectiveness measure to allow clear comparison of the relative cost-effectiveness of alternative interventions. Comparing different organisations was also problematic, as different organisations had different financial management systems, so the cost analysis might just be comparing the variations in financial management rather than the variations in the real costs of delivering services.

The total economic cost of a specific project was calculated for each of the case study organisations. Ideally, the researchers would have liked to have identified each project’s own economic value for the outcomes and to balance the costs of each against the benefits. The limited time and resources available meant they used ‘figures from existing research about the cost of whatever harm the projects were tackling’ to suggest how successful the projects had been at breaking even.

Although they produced a figure at which the centres could provide a saving to society, there appeared to be no way of measuring the level of prevention afforded by the service, so more descriptive findings were presented. The outreach support service was felt to be reducing the risk of domestic violence by enabling women to reach safe places and by improving the services available to them.

Adapted from Matrix Research and Consultancy (June 2006) Economic Impact Study of Women’s Organisations, Women’s Resource Centre and the Association of London Government.


From our review, one of the main difficulties of value for money studies lies in putting a monetary value on social benefits. But there are also problems in costing inputs, as demonstrated by the 2006 study of four women’s voluntary and community organisations in London described in the case example above.7

Cost-benefit analysis faces the same challenges as impact in relation to attribution and generalising findings. Additionally, the very local and particular nature of communities and the context in which interventions take place make it difficult to generalise from results (Sefton 2000).

Social return on investment ... measures of value

New indicators
Third sector organisations are increasingly being drawn into new indicators or measures of value. An article in Third Sector in July 2002 cited the move by some funders towards an investor approach, seeking to identify what social ‘return’ they are getting on their grants (McCurry 2002). Of greater likely impact is the government drive to find standardised processes to measure the impact and ‘value’ of investment in third sector infrastructure and their delivery of public services.

One theory of state funding that government is increasingly applying to public services and to third sector service delivery is that of ‘investing to save’, with its logic of better returns and reduced costs.

The Invest to Save Budget is a joint HM Treasury/Cabinet Office initiative which aims to create sustainable improvements in the capacity to deliver public services in a more joined-up manner. The London Borough of Camden obtained funding in 2006 for a three-year partnership project – Third Sector Delivery in Camden – to develop and pilot a new commissioning model that was outcome-focused, valued both service-level and wider community-level outcomes (economic, environmental and social), and tracked savings to the service, council and wider public sector from the achievement of these outcomes (London Borough of Camden 2008). Camden commissioned the new economics foundation to develop a model that could be used both in the commissioning process and by providers to demonstrate the ‘triple bottom line’ impact – economic, environmental and social – of its activity. The sustainable commissioning model was developed and successfully piloted in tendering for a Housing and Adult Social Care contract: Provision of Mental Health Day Care Services.

Social return on investment
The social return on investment (SROI) methodology was pioneered in the United States by REDF, a venture philanthropy fund. It has been developed in Europe by members of the European SROI Network and in the UK initially by the new economics foundation and, since first pilots in 2004/05, within a network of SROI practitioners (Nicholls 2007). Sheffield Hallam University is now carrying out an evaluation of the Futurebuilders Fund.8 The agreed model of measurement is to assess the SROI from Futurebuilders investments. The Futurebuilders Advisory Panel noted in its report to the Minister in December 2006 that measurement and reporting of SROI had implications for other government-sponsored programmes and action plans that were investing in third sector and public service delivery.


The 2008 Interim Report summary described the methodology as an innovative and experimental part of the Futurebuilders evaluation (Centre for Regional Economic and Social Research 2008). It found clear evidence of potential savings to the public purse which investments could bring in eight of the case studies, but in four cases it was not possible to make an estimation with a significant level of certainty, either because the evidence did not exist or because the returns were likely to be in the very long term, although other positive social outcomes were likely to be achieved.

The Sheffield Hallam report urged some caution in interpreting results because the calculations were forecasts of likely financial and social returns; some social returns, such as community cohesion, were difficult to place a monetary value on; some initiatives would achieve other social returns beyond financial savings to the public purse; and some successful projects were likely to lead to increased costs to the public purse.

Originally for social enterprises, and developed from traditional cost-benefit analysis, the social return on investment methodology attempts to put a monetary value on the social and environmental benefits of an organisation relative to a given amount of investment. The process involves an analysis of inputs, outputs, outcomes and impacts, leading to the calculation of a monetary value for those impacts and finally to an SROI ratio or rating. SROI studies have now been applied to organisations producing social returns, such as helping ex-offenders into employment, where clients cease to receive benefits and start to pay taxes, all of which results in savings to the public purse. The new economics foundation has produced a do-it-yourself guide (now in its second edition), but this remains relatively complicated analysis. The new economics foundation is also publishing a number of SROI-based research studies.
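To illustrate the shape of such a calculation, the sketch below shows a much-simplified SROI-style ratio, assuming that outcomes have already been monetised and attributed to the intervention. All figures and names are hypothetical, and the treatment is deliberately stripped down; published SROI guidance also adjusts for deadweight, displacement and drop-off, and discounts future values.

```python
# Illustrative sketch of an SROI-style ratio, assuming outcomes have already
# been monetised and attributed to the intervention. Real SROI studies also
# adjust for deadweight, displacement and drop-off, and discount future values.

def sroi_ratio(monetised_outcomes: dict, investment: float) -> float:
    """Return total monetised social value divided by the investment made."""
    total_value = sum(monetised_outcomes.values())
    return total_value / investment

# Hypothetical figures for a project helping ex-offenders into employment.
outcomes = {
    "benefits no longer claimed": 120_000,   # GBP, hypothetical
    "additional tax paid": 45_000,           # GBP, hypothetical
    "reduced reoffending costs": 80_000,     # GBP, hypothetical
}
investment = 150_000  # GBP invested in the project, hypothetical

print(f"SROI ratio: {sroi_ratio(outcomes, investment):.2f} : 1")
# With these figures the ratio is about 1.63 : 1.
```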

The SROI UK network, launched in 2008, is dedicated to the consistent and effective use of SROI. It has developed an online tool for calculating SROI, free to its members. Corporate Citizenship Partnership has been funded by the London Benchmarking Forum (corporate funders) to test out SROI studies in a few charities to see if they are likely to be useful for corporate funders, and was developing a toolkit for publication in 2008.

The case example below describes a return on investment study that provided effective evidence to encourage government funding.

This emphasis on return on investment is likely to increase. From 2008/09, the government is investing approximately £350,000 over three years in its Monitoring Social Value project, to develop a standard for measuring SROI that will bring the different methodologies together, increase the accessibility of SROI and increase the evidence base of the impact of the third sector.

Evaluators currently exploring the methodology emphasise its complexity, its sensitivity and the assumptions about social benefits and cost savings that will need to be made explicit. The fear is that the results of social return on investment studies may prove more powerful than they are worth, given the difficulties of extracting real operating costs and putting a monetary value on social value.


Case example: Every Child a Reader – Social Return on Investment study

KPMG Foundation and its partners set up the Every Child a Reader programme with the expressed aim of obtaining sustainable funding. The research into the benefits of the Every Child a Reader programme provides an example of how evaluation might be placed together with a return on investment study to good effect. There was already a lot of evidence of its success, but the main barrier to getting the government to support Reading Recovery was cost. The KPMG Foundation felt it would be useful to commission a separate Social Return on Investment study to complement the evaluation. The KPMG Foundation funded this itself, at a cost of £20,000. The research brief was to review the research on the long-term consequences of literacy difficulties for individuals and for society, to estimate the costs to the public purse that result, and the return on investment of early intervention to address literacy difficulties (Gross 2006).

The report identified five different types of costs for children who have not learnt to read by the age of seven:

• health costs

• cost of crime

• educational costs – special needs support

• education costs – behaviour, exclusions, truancy

• costs of unemployment and low wage.

The report noted that estimation and full quantification of social costs depended on four critical pieces of data: population numbers, prevalence rates, typical frequency and/or duration of the problem (for example, exclusion from school) and unit cost information.

Unit cost information was taken from other cost-benefit studies and from national sources for health and social care services, criminal justice and benefit recipients. Only hard costs were included in the calculations – those having a direct monetary effect and usually falling on the public, private or voluntary sectors. Soft costs – those borne by family and friends – were excluded because information on them was too scarce.

The cost savings were set out as lower and upper estimates at age 37 (the last point at which reliable survey data was available). Based on the evidence from previous studies that 79 per cent of children aged six currently leaving school with very low literacy could be ‘lifted out’ through Reading Recovery, the return on investment for every pound spent on the Every Child a Reader programme was estimated to range between £14.81 and £17.56. The long-term return on investment from the £10m spent on the programme over 2006/08 and 2007/09 could therefore be estimated at between £148.1m and £175.6m, realised by the time the children then accessing the programme reach the age of 37.

While this report provided convincing evidence to government to invest in the programme, it should be noted that the researchers were able to access pre-existing unit cost information, an agreed unified outcome measure (reading level) could be identified, and data on this was readily available. The study was nevertheless limited in that existing studies had data only to age 37, and it was not possible to calculate and include soft costs – those borne by friends and family.

Adapted from Gross, J (December 2006) Every Child a Reader Evaluation Report 2: The Long Term Costs of Literacy Difficulties, KPMG Foundation, London.
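The headline figures in this case example follow from simple scaling of the per-pound estimates; a minimal sketch is given below. Only the £10m spend and the £14.81–£17.56 range come from the report – the function and variable names are illustrative.

```python
# Illustrative sketch of the return-on-investment scaling quoted in the case
# example above. The per-pound range and the £10m spend are taken from the
# report; everything else is illustrative scaffolding.

def long_term_return(spend: float, return_per_pound: float) -> float:
    """Scale a per-pound return estimate up to the total programme spend."""
    return spend * return_per_pound

programme_spend = 10_000_000     # GBP spent on the programme
low, high = 14.81, 17.56         # estimated return for every pound spent

print(f"Lower estimate: £{long_term_return(programme_spend, low):,.0f}")
print(f"Upper estimate: £{long_term_return(programme_spend, high):,.0f}")
# Prints £148,100,000 and £175,600,000, i.e. the £148.1m–£175.6m range above.
```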


Although lessons have been learnt from development work commissioned by the Scottish government, and pilot studies are taking place, the potential danger is that the language of SROI, as of impact, is gaining currency when there is insufficient widespread understanding of what such studies imply and as yet little evidence base concerning the efficacy of SROI as a practical tool.

In the current efforts to produce a user-friendly SROI tool, it is also recognised that it may not be suitable or appropriate for some organisations and that the level of effort may not necessarily be justified. The review of economic evaluation studies, and of impact evaluation more generally, suggests that consideration should also be given to whether SROI can avoid the difficulties associated with cost-benefit analysis relating to monetary values, social outcomes and attribution. There is an argument for such economic studies to be done in particular areas of work (for example, those bearing on government targets), which other organisations can then refer to without having to prove their own investment and value each time.

This chapter on external evaluation has raised questions about its quality and utility. It has focused on the public agendas driving demands for evidence of impact and value, and on the methodological issues arising. The next chapter reports on how evaluation findings are being used. The evidence suggests that more internally driven evaluation has produced the most organisational benefits. It also suggests that a more focused effort will be required to harness wider learning from third sector evaluation.

Notes for Chapter 6

1 See www.movingpeople.org.uk

2 The Phoenix Development Fund, administered by the Small Business Service, is part of the £189m Phoenix Fund, running from 2000 to 2008. It aims to tackle social exclusion by supporting innovative projects providing business support to enterprise in disadvantaged geographical areas and to groups currently under-represented among business owners. Funded projects used a wide variety of approaches to target different types of disadvantaged areas or groups under-represented in enterprise.

3 See the argument for randomised control trials in National Offender Management Service (2005) ‘Understanding research methods and findings’, in ‘What works’ briefing 3/05, www.homeoffice.gov.uk/rds/noms.html

4 new economics foundation has developed its Local Multiplier 3 (LM3) as a practical tool for measuring economic indicators. The measuring process starts with a source of income and follows how it is spent and re-spent within a defined geographic area – demonstrating how money that enters an economy has a multiplier effect. This involves surveying businesses and people that money is spent on to see how they in turn spend their incomes.

5 See HM Treasury (2002) Spending Review 2002: Report and HM Treasury (2004) Spending Review 2004: Report.

6 Presentation by Tim Foster of York Consulting, Assessing Impact and the Importance of Strategic Added Value, undated.

7 The centres involved were the Asian Women’s Resource Centre in Brent, the Women’s Health and Family Service in Tower Hamlets, the Creative and Support Trust in Camden, and the Rape and Sexual Abuse Support Centre in Croydon.

8 The Futurebuilders Fund is a demonstration programme, testing the theory that if third sector organisations can access investment finance at reasonable cost, they can then compete for and win public service delivery contracts, and public service delivery will improve as a result.



Chapter 7
Using monitoring and evaluation findings ... implications for good practice and service delivery

Focus of chapter 7
This chapter reflects on the benefits of monitoring and evaluation reported by third sector survey respondents. It reports on the wider evidence of the extent to which monitoring and evaluation have been used for internal and wider learning and the relationship between monitoring and evaluation and performance management. It also relates this to an emerging emphasis on developing knowledge management and sharing learning. The research found:

• The use of monitoring and evaluation for internal learning may be limited by the perception by third sector organisations that monitoring and evaluation is for the benefit of funders, reinforced by externally derived targets and performance measures.

• Those organisations that have been able to invest resources in monitoring and evaluation as a management tool have been able to see organisational benefits.

• Many funders have had more data than they can deal with, and are limited in the systems and resources necessary to collate, analyse and use it.

• Third sector organisations have not yet developed effective ways of sharing information about the appropriate and successful application of evaluation techniques and methods.

• Early moves towards developing knowledge sharing in the third sector have the potential to help move the sector towards a more learning culture.

Evaluation for a purpose ... integrated into daily work

Much current third sector training and guidance on monitoring and evaluation emphasises the points made by Patton (1997) on utilisation-focused evaluation. It stresses the importance of integrating evaluation into daily work from the beginning of a project, focusing on a small number of indicators and coordinating evaluation reports with decision making. Evaluation trainers and resources also stress the importance and value of monitoring and evaluation for a number of different purposes. These include:

• the communication role – communicating achievements and celebrating success

• ensuring accountability and transparency – monitoring and tracking outputs and outcomes systematically

• the performance management role – feeding back into management and organisational planning

• the wider learning role – learning from failures as well as successes

• the policy role – generating information that can be used to strengthen advocacy campaigns and influence organisational and public policy.

This list shows that monitoring and evaluation, in theory at least, has multiple and wide-ranging purposes.


In practice, however, observations from research participants in our wider study strongly suggest that accountability – primarily to funders and regulators – is the main driver for monitoring and evaluation in the sector. This is somewhat at odds with the reported importance attached by a majority of third sector organisations in our survey to monitoring and evaluation for internal purposes, at least in principle. Indeed, among our survey respondents we found evaluation valued as a performance management tool, for day-to-day management, and allowing changes to be made in organisational direction or in practice.

We also found evidence of monitoring and evaluation used as a communication and marketing tool. However, we must recognise that our sample represents organisations with relatively high levels of engagement in monitoring and evaluation, and from our wider research this limited evidence represents good rather than widespread practice. Even among our respondents, a small number were explicit that they were unable to use monitoring information internally. One respondent said: ‘We don’t use it, just pass it on to funders. Terrible, I know’. And our data suggests that using evaluation for wider learning purposes and as a policy tool is as yet under-developed.

Benefits of monitoring and evaluation ... differences in perception

There were some differences in how monitoring and evaluation findings were used in practice and how respondents viewed the benefits of monitoring and evaluation. Although the research found reporting to funders was a main driver, respondents rated this as important, but less so than potential internal benefits and benefits for beneficiaries. Among our survey respondents, the top eight perceived benefits of monitoring and evaluation reported were, in the following order:

• being clear about the benefits of their work

• learning about what is working well/effective practice

• improving the end result for beneficiaries

• better services/strategic planning

• improving the way they worked

• telling others about results

• competing for funding and resources

• improving reporting to funders.

These results are very similar to those of the US-based evidence about perceived benefits of outcomes assessment by non-profit organisations (The Advisory Board Foundation 1999; United Way 2000; The Urban Institute and The Center for What Works 2006).

Seventy per cent of our survey respondents described the areas where change had been introduced as a result of monitoring and evaluation. Most reported that change lay in developing organisational services and activities. But an important related – if less frequent – area of change was in clarifying organisational purpose and the organisation’s target group. These changes are shown in the box below.

Ninety-nine organisations (15 per cent of the total sample) provided further descriptive information about how they used monitoring and evaluation to introduce significant changes and for marketing and income generation. This is summarised here.

Marketing and income generation
For several organisations, a key purpose of monitoring and evaluation was profile raising, promotion and funding.

‘It provides ongoing enhancement to my advice as an independent adviser in wildlife conservation, especially by providing real life examples which can be used in narrative form. Value from evaluation experience is most effectively ‘fixed’ by the preparation of papers, tables, presentations etc.’

Conservation charity


Organisations described how they had used information to obtain funding, to rebrand, to develop promotional material or to market work differently – for example, using user feedback to rewrite leaflets and redesign websites. BTCV Yorkshire and Humber, an environmental volunteer organisation, had been able to use its developed management information system to improve the information to potential customers via its website, so that a potential volunteer could click on a map and see what projects were available in a particular area.

Preston Community Radio used case studies demonstrating outcomes for marketing in its six-monthly magazine. Case profiles were built up through depth interviews and helped to provide evidence of the impact of the programme for strategic agencies. One survey respondent felt that an increased profile was one of the main benefits of an evaluation:

We are two and a half years into the project and I’ve used monitoring and evaluation primarily to please the funders, ie, to get third year money, but also to let others in the project know what is happening… I have now got two years of project information and I am using that to reach the media, the NHS, local authorities and schools to tell them about our successes so that they would want partnerships with us in the future.

Managing performance
Of those 99 survey respondents providing qualitative information on changes made, 12 organisations talked about the benefits of feeding information back into review and strategy. This included:

• refocusing the organisation

• clearer targeting of services

• being more realistic about what could be achieved

• being clearer about costs involved in achieving desired results.

For one organisation, ‘the whole direction of the project has been changed over the past four years, as we have monitored and evaluated our work more effectively’.


How third sector survey respondents used monitoring and evaluation

• Redefining the user group, including changing geographical boundaries, targeting under-represented groups and prioritising in the light of discovered need.

• Identifying and acting on gaps, including developing worker training, providing more targeted support, altering roles and restructuring the organisation.

• Improving systems, for example, monitoring requirements leading to a move to electronic communication.

• Developing services, for example, using needs assessment to obtain increased funding, shedding services in the light of information about capacity to sustain them and tailoring activities to respond to access issues.

• Changing work practices, which included:

− working more closely with beneficiaries

− becoming more service user-led, including involving user groups in project planning and working together to define aims and objectives

− changing approaches to campaigns and public awareness

− developing more effective partnership working.



Using monitoring and evaluation for internal learning ... potential to check and challenge organisational direction

Performance measurement
Our review of the political and funding context in Chapter 1 proposed that performance measurement in the third sector has been very largely developed as a response to central government reforms and the policy context. The wider literature, CES’ experience and the research interviews suggest that – despite the encouraging evidence from the third sector survey respondents – all too often performance measurement is used purely as a compliance mechanism. It is used less often to check and challenge the direction of the organisation. Respondents from our different stakeholder groups reported that organisations were often doing evaluation ‘just because they have to report to funders showing what they have got for their money’.

The case example below illustrates how an organisation in the London Borough of Lambeth used evaluation to refocus its services.

Case example: Lambeth Flipside – using evaluation to change strategy

The Lambeth Crime Prevention Trust (LCPT), which was set up in 1997, refocused on youth-oriented work following an external review in 2006, changing its operating name to the more positive ‘Flipside’.

The external review was prompted because the organisation was facing a challenge to its sustainability. LCPT had developed in a project-based way, largely in response to available statutory funding, and lacked a coherent organisational ethos and mission. The review focused on the structure needed to ensure sustainability, the roles of the director, staff and other stakeholders, and the options for fundraising, and these were explored in interviews with staff, trustees and a number of funders. The tight timescale – over four months – meant that managing the interview schedule was challenging, and consultation was not extended to the young people with whom LCPT was working.

The director found the process helpful in highlighting the challenges for the organisation. One of the main report findings was that the organisation was too small to carry the range of project work, and needed a distinct focus and role. The director subsequently proposed to the trustees that this focus should be youth work, and the new name of Flipside was adopted as one that would attract the potential user group.

The review helped the organisation define its work more clearly – as work with young people aged 8 to 25 years, their parents and carers. Flipside is now clear about not pursuing funding for projects falling outside this remit. Staff have started implementing more effective monitoring and evaluation systems, hoping in the future to be able to assess the longer-term impact of its work with young people.

The process has made the organisation more conscious of the potential role of evaluation and, with a single focused area of work, staff now find it easier to collect and present meaningful data.



Campbell and McClintock (2002) commented on the trend in non-profit organisations in the United States towards a stronger emphasis on performance measurement – relevant to the current situation in the UK third sector and its drive for demonstrating comparative advantage in a competitive funding situation, and for measuring performance against strategic outcomes and impacts. They suggest that the use of evaluation predominantly for public relations and compliance is not easily compatible with exploring real problems. Furthermore, they argue that assessment against performance targets may benefit from rigorous evaluation designs and standardised measurement, but that such methods may not permit the localised detail useful for organisational development.

The high number of respondents who reported using quality assurance systems as their monitoring and evaluation tool suggests that for many organisations there is little distinction made between evaluation and performance measurement. Yet, although both performance measurement and evaluation are interested in judging the performance and success of a project or programme, and are complementary processes, evaluation is quite specific in terms of coverage and time, and may have a number of different purposes expressed in an evaluation brief or terms of reference.

Policy and advocacy role
We have also seen that evaluation can have a wider learning role, which may mean focusing on processes as well as results (discussed in Chapter 3) and must mean being explicit about failures as well as successes. It may also have a wider policy and advocacy role, not necessarily incompatible with results-based monitoring and evaluation, but which may be de-emphasised in a performance measurement focus.

Evaluation for improvement
This discussion does not lose sight of the importance of using evaluation for improvement. A broader debate is taking place among evaluators, and a strong argument can be made for making evaluation more useful and relevant by incorporating it within performance management, and for strengthening performance measurement by using evaluation. To do this, we would argue that organisations must be more skilled in drawing the best possible evidence from a number of different sources, and be more concerned with analysing and interpreting data, and with establishing how and why something works and in what conditions. CES’ experience is that all too often monitoring data is collected across organisations and aggregated without looking for the meaning in detail and variance. Bonar Blalock, writing about government accountability trends in the 1990s, cautioned against the tendency to report results on performance ‘as if they represented the direct and exclusive effects of program interventions, when these outcomes may be only weakly correlated with them,’ and without regard to the complexity of interaction with other interventions. Understanding performance management as a planning and managerial tool and evaluation research as a research tool should enable a more complementary relationship to develop between the two (Bonar Blalock 1999:134).

Sanderson (2001) suggests that building the capacity for evaluation and learning and embedding monitoring and evaluation as part of the performance culture require two elements: first, structuring the content and process of evaluations so that they become useful for internal management; and second, refocusing evaluation so that evaluators incorporate and analyse processes and implementation issues to better interpret outcomes.


‘By sharing the numerical part of our information with others (for example, the Council) we have raised awareness and brought tangible, practical change for the client group.’

Local advice and advocacy organisation


Benefiting service users ... difficulties of ascribing changes to evaluation

Most organisations participating in the research were unable to demonstrate how improved services – a result of monitoring and evaluation – had resulted in demonstrable user outcomes. Benefits tended to be described as greater access to particular user groups, increased activity, volunteers or beneficiaries, or an improved service – outcomes for the organisation that could be expected, but not demonstrated, to lead to user benefits. For example, one organisation had developed ‘a more flexible respite service and wider range of support and increased numbers of young carers supported’. Following an evaluation, the Volunteer Centre in Liverpool improved access to, and participation in, a youth volunteering residential training, resulting in greater attendance. There were also reported knock-on benefits to users by sharing information with other agencies working with the same client group and making them more aware of issues.

This lack of evidence does not mean that evaluation did not contribute to client benefits, but that it is difficult to determine without individual organisational impact assessments. We would also expect that establishing ‘net effect’ (see page 69) – directly attributing a level of improved user outcomes to monitoring and evaluation – would be problematic. Participants sometimes reported that it was difficult to ascribe changes directly to evaluation, but change was ‘in the air’ and evaluation contributed to the process of change; indeed, evaluation could be an expression of the desire to change.

Using evaluation to influence policy change ... a subject for debate

The influencing factors
Twenty per cent of third sector survey respondents identified changes obtained in the policy and practice of other organisations as a benefit of evaluation. However, there was little concrete evidence of the use of evaluation to influence public policy. This is not surprising; the government’s evidence agenda has contained an implicit notion that evaluations can influence public policy, but this assumption has been the subject of debate.

Lessons have been learnt from the extensive evaluations of Sure Start and the New Deal for Communities, social programmes that are notoriously hard to evaluate (Coote 2004). Coote has argued that inherent within the notion of ‘what works’ is the assumption that cause and effect can be traced, yet this has been extremely difficult to do in complex local interventions. Government has pointed to initiatives such as Defra’s Evidence and Innovation Strategy as commitment to overcoming the research–policy communication problem. However, commentators have argued – as we report below – that the degree to which evidence attracts and retains influence depends on a variety of factors, including its degree of political contentiousness and whether the data produced is compatible with the ideology, aspirations and resource commitments of those who have the power to decide. It will also be affected by how effectively the results of research are presented to policy makers and how far they have been involved in its generation.



More optimistically, it has been argued that evaluation may affect policy makers’ thinking in a diffused rather than a specific way, and that evaluation can be expected to raise questions rather than to explain, and has a cumulative effect rather than an immediate and direct influence on public policy (Palfrey 2000). CES found interest among policy and practice leaders in the findings of the CES Fairbridge evaluation, which found that the Fairbridge foundation course had a significant effect on personal and social skills – countering the ‘nothing works’ approach found in some of the literature (Astbury 2002). For its Rethinking Crime and Punishment funding programme, the Esmée Fairbairn Foundation attempted a dissemination approach from the start and tried to ensure it took professional communications advice – including some media experience – on the advisory board. On the other hand, the Foundation reflected that it was necessary to be realistic about how possible it is to pin down precisely how much change is due to a programme of funding.

Examples
The Big Lottery Fund publication Answering BIG Questions reflects on the extent to which a number of its evaluations have affected public policy. The Big Lottery Healthy Living evaluation showed that regular attendance at healthy living centres had a protective effect on physical and mental health for regular users compared with non-users, but the evaluation was carried out in the context of a changing policy environment in which it was difficult for findings to be heard. Evaluations of the Out of School Hours Learning programme (which influenced the extended school model) and the Out of School Hours Childcare programme both affected government policy. The Big Lottery Fund’s Out of School Hours Childcare evaluation produced good practice guidelines and also provided a crucial template to the government on specific issues about development and sustainability at the right time for statutory policy change. Answering BIG Questions suggests that:

Influencing policy development at the right moment is as much a matter of fortuitous timing as it is of weight of evidence. By capitalising on these policy opportunities, and by ensuring that policy-makers are involved in the design and implementation of our programmes and our evaluations, we have been able to foster links that help to ensure wider take-up of learning from our funding (Big Lottery Fund 2007:46).

An example frequently cited of how evaluation has informed policy is the evaluation of the Dundee Families Project, which provided a wide range of support services to families facing eviction (Dillane et al 2001). The project was run by NCH Action for Children in partnership with Dundee City Council. During 2003, six local authorities, working closely with housing associations and charities, established a number of dedicated anti-social behaviour Intensive Family Support Projects, based on the work undertaken by the Dundee Families Project. These interventions were in turn evaluated, and the study findings relating to outcomes at the point at which families left the service indicated that, for the vast majority of families, the projects had helped them achieve remarkable changes (Nixon et al 2006; Pawson 2007).

NCH argued the case against eviction in written evidence to the Select Committee on Home Affairs on Anti-Social Behaviour, 2004/05, using the evaluation evidence, which had found that two-thirds of the families referred to the project had been successfully re-housed and put back on secure tenancies.1 The findings were taken up by the government’s Respect Agenda, a cross-government strategy to tackle anti-social behaviour.


While some organisations have been able to use structured channels to feed back into government, for national organisations in the study there was evidence of a number of constraints:

• Dependence on public funds and donations, and on a good public image, meant that there were difficulties in being open, and evaluations were not always published.

• It was reported that evaluations may not deliver the desired results; even a balanced report might be suppressed if there were some critical or unwanted findings.

• Systems were not always established by national organisations to capture policy change achieved at a local level, and might not be considered worth investing in.

The research found some evidence to suggest that third sector organisations might be more likely to be effective at a local, rather than a national, policy level. Preston Community Radio collated information across projects, feeding evidence from them into the Local Strategic Partnership and the community cohesion steering group. It was able to highlight the need for services and feed into, for example, the local Arts and Events strategy.

The use of evidence in decision making was recorded in Research Report 783 of the National Evaluation of the Children’s Fund. This is captured in the case example below.

Case example: Local Children’s Fund evaluations – using local evidence in decision making

Evidence derived from local Children’s Fund evaluation reports and interviews with local evaluators suggested that the majority of evaluations appeared to pragmatically gather and analyse data on impact or processes (such as children’s participation and partnership working) with a view to affecting partnerships instrumentally, such as through changing practices and deciding to re-commission specific projects (University of Birmingham 2006). Many Children’s Fund programme managers suggested that local evaluation had helped them to identify successful and less successful projects and how they worked well or less well, as well as to make decisions about continuing to fund projects or which projects to promote for mainstreaming.

Nearly half the programme managers indicated that local evaluation had helped them to broaden their partnership's thinking about prevention and partnership working. Local evaluators pointed to a number of factors that had helped them to influence partnerships' work. These included:

• providing relevant and useful information

• delivering evaluation outputs at appropriate and relevant times

• making results widely available and presenting them in appropriate, accessible formats

• making realistic recommendations to which the partnerships could respond.

Local Children's Fund evaluators stressed that the rapidly changing national context meant that changes in partnerships' directions were often politically motivated rather than based on local evaluation – that is, the move towards integrating Children's Services and Children's Trusts was driving the agenda. The commitment of partnerships to their own priorities and the climate of rapid change meant that they tended to have insufficient time to act on evaluation findings.

These learning points were drawn from University of Birmingham (2006) Local Evaluation of Children's Services: Learning from the Children's Fund, Research report 783, www.ne-cf.org


Making evaluation useful for influencing public policy

Julia Mountford of the Office of the Third Sector offered tips for making evaluations more useful at a Big Lottery Fund conference in November 2007. These included an emphasis on making reports accessible and understandable, admitting what does not work as well as what does work, keeping the report real by writing in plain language and using real case studies, and by tailoring the presentation to different audiences (Big Lottery Fund 2007).

The KPMG Foundation's social return on investment (SROI) study (discussed on page 83) was carried out together with an evaluation of the Every Child a Reader programme, with the expressed aim of obtaining sustainable funding. The Every Child a Reader programme is now well known, as is the SROI study, for its success in securing government commitment and mainstream funding. The format of the report was carefully considered, as it was aimed at a wide audience from teachers to government ministers. Both the Every Child a Reader evaluation and the SROI report made quantitative data accessible by good use of illustrations, tables and graphs, reducing descriptive text to a minimum and creating a more visually interesting report. Once published, the KPMG Foundation distributed it widely to all those involved, including the teachers, Department for Education and Skills, patrons and others including the Home Office, with the main aim of getting it to the Treasury – and in this the Foundation succeeded. The government is now committed to funding and rolling out the Reading Recovery programme after the end of partnership funding in July 2008.

Evaluation for learning ... creating a culture of learning and improvement

In Chapter 3 we reported that the London Housing Foundation's IMPACT programme worked with 200 members of staff from over 50 homelessness agencies to support them in taking an outcomes approach. In its response to a consultation on strategy for the Supporting People Programme, the London Housing Foundation emphasised the benefits that had been obtained for keyworkers by providing greater clarity and focus for their work (Triangle Consulting for the London Housing Foundation 2006). Overall, they found that an outcomes focus had helped to produce a culture of learning and improvement. Save the Children (2005) also reported on how monitoring and evaluating outcomes and impact, including evidencing unexpected or unwelcome results, had improved learning. It reports that 'being aware that they will have to answer the impact question' has the effect of helping staff realise that they need to be clear from the beginning about the kinds of changes they hope to see, and be sure that the activities they plan will lead to these changes (Scott and Molteno 2005).

Wider learning

Leat (2003:72) has written about the lack of investment in evaluation which would produce the knowledge required for replication of successful initiatives and asks whether the fundamental question should be 'how do we encourage learning and sharing of learning to create cumulative knowledge building?' Some funders have become explicit in their encouragement to funded organisations to use learning from evaluation. The Big Lottery Fund encourages self-evaluation for organisations' own learning, and the Arts Council explicitly encourages a more reflective culture. The City Parochial Foundation makes the concern for how data will be used explicit in its funding guidelines by asking 'How will you know you have achieved what you have set out to do? What will you do with the findings? If relevant, how will you share your learning?'

'Evaluation should be the start of something. The report is not the end of the matter. It is a conversation which is needed that says OK that didn't work. What is needed next?'
Funder

Evaluations are increasingly available on the internet but, on the whole, our research found only limited evidence of monitoring and evaluation being used to increase communication among agencies about approaches. The research emphasised the difficulty for organisations reliant on a strong public profile and a positive press to be open, and that there is little culture of bringing evaluation findings into the public arena unless there is a strong, positive public message.

There are some positive examples. Practical Action focuses on promoting appropriate technologies so, in pushing for change in practice, it is important to have internal monitoring that is integral to the management of change. Practical Action provided information about technology that had been mainstreamed in certain countries as a result of monitoring and evaluation. For example, learning from evaluation of consensus building for managing water resources in Bangladesh was rolled out and extended to other disaster management projects.

The research found evaluations available on websites, but often not in a digestible form. Less frequently, evaluations had been presented to make learning accessible. The Joseph Rowntree Foundation has an established format for its brief research Findings papers, and we found specific learning initiatives. In 2006, Community Development Foundation employed associates to review the Civic Pioneers' Project – 18 projects in 13 local authority areas – intending that a wider audience would benefit. Learning was presented in two ways: case studies and learning points (Evison 2006). NCH Action for Children did a review of child protection assessments and produced a practical resource pack as a result of its learning. Enterprising Communities (now Social Enterprise and Cooperative Development Ltd) used external consultants to write up case studies and developed a lessons learnt website.

In 2006, Oxfam used a meta-analysis drawing on findings and conclusions in more than 60 evaluation reports and assessment reports produced over the previous three years. (Oxfam's global impact reports are discussed on page 76.) The exercise had the expressed purpose of identifying the main factors influencing the quality of Oxfam's programmes and whether Oxfam's ways of working were firmly based on its principles and stated programme approach. It also explored using the findings as a basis for learning lessons by regional directors and heads of departments and to encourage discussion by teams (Oxfam 2006).

Using data effectively ... overcoming barriers to learning

However, the evidence discussed in Chapter 2 indicated that a focus on accountability requirements may mean, in practical terms, less time for improving performance and for wider learning, and that accountability and learning purposes may work against each other (Hailey and Sorgenfrei 2005). Many third sector organisations, large and small, reported having a lot of data which they were unable to interpret and use. This was particularly an issue for 'soft' outcomes data. One of the key findings from the substantial IMPACT programme carried out by the London Housing Foundation was that outcomes feedback was being used as a keyworking tool but organisations were finding it difficult to aggregate and use the data. Some organisations had recently become aware of the need for a staff member with a specific remit for analysing information.

'There is an understandable reluctance to talk about failure and mistakes but much learning occurs when this happens.'
Funder

The barriers to third sector learning from monitoring and evaluation emerging from the overall research data (particularly relating to the evidence reported in Chapters 2 and 4) are summarised in the box below.

The barriers to third sector learning from monitoring and evaluation

Beyond the lack of a learning agenda, a number of specific constraints on learning were reported.

• Funders often place themselves and are seen as the primary stakeholders for evaluation, and the heavy demands of compliance and performance reporting frequently act as a disincentive to learning and performance improvement. This is because monitoring and evaluation is associated with external priorities and needs, and because further time for analysis and reflection may be an unaffordable luxury.

• Learning from work means joint approaches between funders and funded organisations which may not be sufficiently developed.

• Organisations often see monitoring as a less formal approach to evaluation and there is little development of sufficiently rigorous or creative evaluation approaches or critical analysis of the data.

• The focus on outcome measurement has often not been accompanied by an analysis of resources, activities and outputs to make sense of the information.

• Learning is costly and many funders are not willing to resource these costs, and there is also a lack of time.

• Learning may be limited when both funders and funded want to demonstrate success.

'The problem for us is the lack of a learning agenda in the voluntary sector and also possibly the public sector.'
Funder

Many funders in the research reported that they were getting more data than they could digest, and that they were often limited in the systems and staff necessary to collate, analyse and use it. They commented that the sheer volume of reports made it difficult to distil lessons learnt from beneficiary reports, particularly when they were in narrative form and were held in files and folders. With a move to an outcomes focus, old monitoring systems were serving less well: funders were struggling with the question of how to put data together in a meaningful way, and some had installed or were looking for a database which would allow outcomes to be systematically recorded and learning to be extracted. In 2007, County Durham Community Foundation, for example, questioned a small sample of 20 community foundations and five other funders and found that most of the systems were inadequate for evaluation, reporting 'The majority never actually evaluate. Everyone is going around monitoring'.

Several funders in our survey sample admitted that they 'don't actually do much with evaluation reports' and that they needed to develop greater clarity about what they should do with them. However, a number of others were already taking steps to use evaluation learning. For example, the City Bridge Trust had started The Knowledge, short publications containing key learning points from funding programmes. (See also page 97.) Hertfordshire County Council reported using information in a number of ways: to inform its annual plans; to share information about success factors in diversionary activity with young people with other organisations; to disseminate good practice; and to put others in touch with organisations doing well.

The Big Lottery Fund makes the multiple purposes of evaluation clear in its strategy. 'The Big Lottery Fund undertakes evaluation and research to enable it to improve funding impacts and processes; to promote wider sharing of such learning in order to improve practice and influence policy; and to support public accountability.' (Big Lottery Fund 2005)2 It uses the data to help in funding decisions but also to build up accumulated activity which will help to meet higher-level outcomes; to provide aggregated data to show patterns of change, and higher-level impacts above project level; and to provide building blocks for more detailed analysis and evaluation.

Table 4 shows how respondents to the funders' survey used data.

Many funders participating in this research commented on the lack of a learning agenda in the third sector, most particularly the lack of learning deriving from evaluation. Several also reported a lack of examples of exciting or insightful learning reports to inform practice or policy. Some funders expressed a concern that compliance needs were conflicting with learning, creativity and honesty. Learning from work also meant developing joint approaches and adjusting the power balance, and a move away from 'good news stories'.

We can draw a number of implications from these findings. As a first step, funders' approaches to learning must essentially link to having clarity about the purpose of funding – whether it is about developing services, building third sector capacity, or bringing about social or policy change in specific areas. Following from this, both internal and external evaluation need to develop better ways of drawing out key learning points from evaluation studies without losing valuable insights about complexity, and to develop a more open culture in which organisations are prepared to make use of the technology available to disseminate studies widely.

Table 4 Funders' use of monitoring and evaluation data

How data is used  Percentage of responses (n=62)
To make decisions on further/continued funding  81
To track progress  76
To inform funding strategy  66
To identify learning points for future funding  65
As data in monitoring own effectiveness and impact  65
As data for evaluation of themed programmes  37
As part of discussion with other funders  36
To inform policy debates  31

Evaluation and knowledge management ... using and sharing learning

Learning and knowledge management
The research found an increasing emphasis on using evaluation data for learning and knowledge management among some independent funders. Kumar and Leat's 2006 review of the Big Lottery Fund's grant making suggested that learning was less likely when the purpose of evaluation was unclear, when there were inadequate planning and resources in relation to dissemination, and when there was over-reliance on written reports (Kumar and Leat 2006). The review also stressed the importance of focusing evaluation and dissemination of findings around intended purposes and audiences, and making findings comprehensible and accessible to users. The Diana, Princess of Wales Memorial Fund is an example of a funder which has in the past used evaluation primarily to inform its funding priorities. It is now developing an exit strategy focusing on its impact and is looking at the information systems it requires to learn from evaluation and put information in the public arena.

Intelligent funder
The Big Lottery Fund has found evidence that its good practice guides emerging from evaluation findings have been widely read; feedback from grant holders and evidence of partnerships sustained beyond the funding period offered an indication of changes in the way that people went about their work (Big Lottery Fund 2007). The Big Lottery Fund's commitment to promoting a wider sharing of learning includes an emphasis on external dissemination, providing findings in diverse formats and facilitating discussion of lessons. The research and evaluation team also stresses that being an 'intelligent funder' will include mining information better and developing evaluation activity by theme (Big Lottery Fund 2007). ('Intelligent funding' is discussed on page 7.) The Big Lottery Fund's learning document Answering BIG Questions brings together 12 years of findings and lessons learnt across different areas, such as health and wellbeing, strengthening communities, young people and the environment. The authors have drawn out broad learning from the evaluations as well as specific findings. The Big Lottery Fund is also committed to developing a knowledge bank – an electronic means of collecting formal reports and making them accessible. It has appointed a knowledge management officer, and is prioritising adequate databases and institutional memory.3

In 2005, Northern Rock Foundation launched a learning study of its Money and Jobs programme, drawing out implications for funders, statutory agencies, and national and regional decision makers (Wilkinson 2007). While recognising that its funding was going into a mix of resources, the Foundation nevertheless wanted to know if people were getting richer as a result of the programme, what worked and what was the unique role of the Foundation. The most significant insights came from analysis of the case study material, which led to the hypothesis that effective community-based organisations were more able to engage with people at the beginning of the pathways from more to less desirable circumstances. The City Bridge Trust has produced the first three of a series of learning documents. These provide illustrations of common issues, drawing out common problems and providing tips to other organisations.4

Learning from monitoring and evaluation practice ... capacity building potential

Within the sector generally, there has been little evidence that evaluation findings have been fed back into a pool of knowledge or that external evaluation has brought substantial transfer of learning about methodology. We found some external evaluations that had a capacity building element built in, but also found that external evaluation all too often does not bring to the evaluated organisation any development of internal monitoring and evaluation skills. It was suggested by some research participants that many evaluators engaged in evaluation of third sector activity have been more attuned to academic research and policy audiences than to the third sector, and there was a limited understanding of any role beyond that, in terms of sharing knowledge and building capacity.

Evaluation departments embedded within third sector organisations are potentially well placed to build monitoring and evaluation capacity, but we have seen that they work within resource constraints and their functions are often linked to other areas such as quality assurance. (See page 26.)

CES' series of evaluation discussion papers, first published in 1998, aim to develop learning about evaluation purposes and methods, based on practical experience. A number of websites provide resources based on third sector monitoring and evaluation experience.5 One area suggested for increased investment is indicator development across the sector, an issue explored in a 2008 study by the new economics foundation. Half the sample of 18 charities and social enterprises in the study shared indicators with other organisations externally, making use of shared indicators from small benchmarking groups set up in the sector, or participated in a group of organisations meeting every few months to discuss outcomes, key performance indicators and performance management (Sanfilippo and Chambers 2008). For the most part, however, the research found organisations that had been isolated in the more detailed work of developing appropriate frameworks and methods, and a certain amount of duplication of effort.6

Within the UK third sector generally, learning opportunities are building in other ways. For example:

• CES' National Outcomes Programme offers opportunities for sharing practice through its networks and conferences.

• The Performance Hub, and now the National Performance Programme, provides an online forum for infrastructure organisations supporting performance improvement in frontline organisations.

• The London Housing Foundation's IMPACT programme, and its continuing work on homeless outcomes, provide an opportunity to develop outcomes tools.

• The development of outcomes tools in other sub-sectors by Triangle Consulting, learning from the successful Outcomes Star, has demonstrated a capacity to learn across sectors.

• The social enterprise sector has seen a rich development of tools and methodologies.

• Conferences, such as the NCVO annual impact conference, UK Evaluation Society annual conference, London Housing Foundation conferences and the Big Lottery Fund 2007 evaluation conference, have provided opportunities to exchange experience and learning.

• When Evaluation Support Scotland was set up in 2005, it was able to learn from and build on existing CES materials.

International development
International development and relief agencies provide a particular example of cross-learning within a sub-sector. BOND provides a network of voluntary organisations working in international development and opportunities for learning and sharing. INTRAC provides training, consultancy and publications, focusing debate on evaluation approaches and methodologies. The MandE newsletter and website also provide a news service focusing on developments in monitoring and evaluation methods relevant to development projects and programmes. REMAPP is a UK-based group of networking professionals concerned with planning, appraisal, monitoring, evaluation, research and policy. The group includes people working at organisations such as ACORD, Action Aid, the British Red Cross and CAFOD (Catholic Agency for Overseas Development). Since 1992 the group has met three or four times a year, reflecting on experiences of organisational learning and good practice.

Concern for knowledge management is also growing among international non-governmental organisations. Plan, for example, is among those attempting to increase the value it gets from evaluation reports. It is rolling out SharePoint – Microsoft software – a portal which will make information more accessible globally to support the development of a comprehensive knowledge management strategy. Tearfund has a knowledge management manager and a programme development team, but pointed to the difficulty of extracting and using the learning from evaluations.

New learning initiatives ... exciting developments for disseminating and sharing

There are a number of new initiatives that indicate a move towards bringing learning in the UK third sector into focus. For example:

• As part of its Third Sector Action Plan, the Office of the Third Sector is investing, with partners, in a £10.25m independent third sector research centre.

• There is a projected new online learning portal for the third sector – the e-Learning Net, funded by the Big Lottery Fund BASIS programme through a grant to the Cass Business School.7

• The Drug and Alcohol Findings project has aimed to provide a bridge between research and practitioners. Its main output has been the Drug and Alcohol Findings magazine, replaced from 2007 by the Effectiveness Bank, which highlights research and evaluation studies on interventions intended to reduce the use of alcohol and other drugs or to reduce harm related to that use.8 A customised search facility managed by DrugScope will afford one-stop access to the 'what works' literature including Findings documents and source research papers and reviews.

• The Journal of Voluntary Sector Research is a new quarterly research journal aimed at practitioners as well as academics, and aims to help meet the need for research evidence to inform and underpin good practice and performance improvement.

• Community Campus is a new knowledge exchange site for third sector organisations hosted by Northumbria University, providing a place where organisations working in the areas of ageing, disability, community development and social enterprise can share knowledge. This will include sharing knowledge about the purpose and application of evaluation and bringing together information, training and applied research in practical outcomes.


These are exciting developments. As well as providing potential vehicles for disseminating evaluation findings and learning about research and evaluation methods, they could meet the needs outlined on pages 42 and 45 for research evidence on which third sector organisations could draw in order to make the link between outcomes achievable within their own remit and the higher-level outcomes relating to funders' and government frameworks and targets. Third sector organisations also need to be able to link into other learning, such as research carried out by local authority commissioners, which is currently being retained at a local level, or the learning about health improvement practices disseminated by the Evaluation Programme at the University of Edinburgh.

For the third sector to benefit from these initiatives, the learning needs to be appropriate and presented in accessible formats. There also needs to be good coordination between the separate initiatives, and sufficient promotion to third sector organisations themselves of the relevance of the data available and how they can access it.

This chapter has covered the benefits of monitoring and evaluation reported by research participants. It has also looked at some of the barriers to using learning from monitoring and evaluation and at the wider evidence of the potential for using evaluation findings for service development to better effect, and to inform policy and build knowledge.

The research has been mindful throughout of those organisations that are not yet sufficiently resourced or skilled to integrate monitoring and evaluation into their work practices. The recommendations in Chapter 8 have been developed from the research findings. They focus on how this gap can be addressed, and how the resources invested by third sector organisations in monitoring and evaluation can bring greater value to themselves, their funders and the individuals and communities they support.

Notes for Chapter 7

1 See www.publications.parliament.uk/pa/cm200405/cmselect/cmhaff/80ii/80we35.htm

2 Big Lottery Fund (2005) Evaluation, Research and Learning Strategy. Summary for Period 2005–09, www.biglotteryfund.org.uk

3 Institutional memory can be defined as holding information and experiences so that they are available for the organisation and its current and future stakeholders.

4 The first edition of The Knowledge was based on learning from the City Bridge Trust's programme in London, Greening the Third Sector; the second on material based on an evaluation of its programme on Access to Buildings; and the third disseminates the findings of an evaluation of the London Advice Services Alliance's Circuit Riding project, which developed the ICT skills of some organisations funded by the Trust.

5 For example, www.ces-vol.org.uk; www.evaluationsupportscotland.org.uk; www.nef.org.uk; www.homelessoutcomes.org.uk

6 At the time of publication there were no known concrete developments arising from this research.

7 The project, which will be called the e-Learning Net, funded by the Big Lottery Fund, will have three zones: one for information on existing learning resources; one for problem solving, which will include online interaction; and another for study, which will include more formal learning such as e-learning and workshops. The site should become fully operational in 2009.

8 See www.findings.org.uk


Chapter 8
Recommendations ... towards good practice

Focus of chapter 8
This chapter provides recommendations in three sections.

1. For funders, commissioners and policy makers.

2. For third sector organisations, evaluation support agencies and evaluators.

3. For all stakeholders.

We have developed recommendations under five headings.

• Building monitoring and evaluation capacity.

• Developing more appropriate reporting.

• Developing third sector monitoring and self-evaluation practice.

• Making evaluation more relevant.

• Building and sharing knowledge.

1. For funders, commissioners and policy makers

Building monitoring and evaluation capacity

1.1 Funders should work together with monitoring and evaluation support agencies to develop a more strategic approach to building monitoring and evaluation capacity.

Elements of a strategic monitoring and evaluation support approach include:

• signposting or referral to sources of evaluation support as standard

• a focus on infrastructure and sub-sectors as an effective means of cascading learning

• developing technical skills that go beyond a basic understanding of monitoring and evaluation

• ensuring that short-term monitoring and evaluation support initiatives build sustainability into their design

• developing the use of IT in third sector organisations to store, process and manage data

• disseminating good practice

• storing and sharing knowledge and learning.

'We need more training, particularly in developing outcome measures and in IT.'
Women's community group


1.2 There should be greater investment in training commissioners and grants officers to understand outcomes, evaluation frameworks, and good practice in questionnaire design and other practical data collection methods. In particular, an extension to the IDeA (Improvement and Development Agency for local government) training programme should include commissioners, finance and legal personnel involved in commissioning. It should address:

• tendering outcomes-based contracts

• assessing the credibility of outcomes information included in the tenders from would-be suppliers

• specification of outcomes and outcomes indicators in contracts

• the appropriate outcomes monitoring evidence to be expected from chosen suppliers.

Developing more appropriate reporting

1.3 Funders and commissioners should develop a monitoring and evaluation policy that applies the principle of proportionality to reporting requirements, taking into consideration the level of funding, and the size and resources of the funded organisation.

1.4 Funders and commissioners should acknowledge within their monitoring and evaluation policy that different types and levels of monitoring and evaluative activity are appropriate to different organisations. This depends not only on size, but also on the type and status of the funded organisation – how it works, what user group it works with, and which stage it is at in its organisational lifecycle.

1.5 Funders and commissioners should work within the principles of full cost recovery, providing sufficient funding to realistically cover monitoring and evaluation activities to meet reporting requirements. This means recognising the cost implications of:

• externally provided technical assistance

• improving IT capacities

• developing skills in data collection and analysis

• allocating time to collect and analyse data.

1.6 Funders and commissioners should work more closely with third sector organisations to ensure a closer match between information needs for learning and development, and those required for compliance and accountability.

Developing third sector monitoring and self-evaluation practice

1.7 Funders and commissioners should develop self-evaluation standards by applying appropriate elements of good practice in supporting monitoring and evaluation.

'Funders aren't usually prepared to pay for a charity's monitoring and evaluation and the charity doesn't have the resources to pay to make a good job of it themselves… Funders need to be more realistic and invest more in monitoring and evaluation of the charities they fund.'
Funder


Building and sharing knowledge

1.8 For the third sector to benefit from a number of recent or planned knowledge-building initiatives, including the planned third sector research centre, those government and academic bodies responsible for their development should ensure that there is good coordination across the separate initiatives, that the learning is presented in accessible formats, and that sufficient information is provided to third sector organisations themselves on the relevance of the data available and how they can access them. Good communication about sharing and disseminating learning with third sector organisations will be enhanced through coordination with third sector evaluation support agencies.

2. For third sector organisations, evaluation support agencies and evaluators

Building monitoring and evaluation capacity

2.1 Evaluation support agencies, including national and regional bodies, networks and local infrastructure organisations, should continue to provide basic training and support. However, more focused and targeted support should also be developed to increase the quality of data – focusing on research methodologies and data collection, and management and analysis.

2.2 Evaluation support agencies should develop new training approaches to address the practical application of monitoring and evaluation methods. This should include exploring the potential of online training and consultancy methods as well as interactive technologies.

2.3 Third sector organisations and evaluation support agencies should address the development and application of IT and monitoring and evaluation systems software, as a way to save time, improve the quality of monitoring and evaluation, and increase organisational effectiveness. This should include developing pilot and demonstration projects and developing appropriate training and support.

Good practice in supporting self-evaluation includes:

• making monitoring and evaluation terminology clear, providing evaluation pro-formas, and making expectations about evaluation and learning explicit

• enabling or providing information, training and support at the beginning of the funding relationship

• helping organisations design information collection as an integral and integrated part of their core services

• checking on the adequacy of IT systems for managing and processing data during initial visits

• asking only for information that can be used

• encouraging greater analysis and interpretation of data

• reading and giving feedback on reports.

'Many charities do need to professionalise how they collect data, but need help to do so, and funders need to acknowledge and address this. Measuring outcomes is notoriously difficult.'
Funder

Developing more appropriate reporting

2.4 Third sector organisations should work more closely with funders and commissioners to ensure a closer match between their own information needs for learning and development, and those required for compliance and accountability.

Developing third sector monitoring and self-evaluation practice

2.5 Evaluation consultants should reinforce internal evaluation activity through their technical assistance or external evaluation by using processes and developing tools that address internal as well as external reporting needs, and that transfer evaluative learning to the organisation.

2.6 Third sector organisations should improve the utility of the data they collect.

Third sector organisations can make their data collection more useful by:

• translating broad aims into more measurable ones

• collecting baseline data

• learning how to collect and analyse their data so that it can be aggregated both for learning and reporting purposes

• addressing implementation and process issues

• making connections across data to see which resources or other factors are most important to achieve quality services, customer satisfaction and desired outcomes

• establishing systems for systematically feeding back and acting on monitoring and evaluation findings.

2.7 Third sector organisations should ensure that monitoring and evaluation responsibilities are written comprehensively into job descriptions in order to build an integrated monitoring and evaluation function and establish an evaluative culture.

2.8 Third sector organisations and evaluation support agencies should develop a greater understanding of the differences as well as the complementarities of monitoring, evaluation, quality assurance and performance measurement.

'Using the IT system has made accessing information a lot easier and a lot quicker, especially for fundraisers. Simply having it has raised the profile of outcome monitoring across the organisation.'
LINK IT system user


Building and sharing knowledge

2.9 Third sector organisations, their evaluators and evaluation support agencies should develop ways to share successful evaluation techniques as they have applied them, so that choices can be made about appropriate methodologies for types of organisation and scale of intervention.

3. For all stakeholders

Making evaluation more relevant

3.1 There should be sharper evaluation commissioning processes to clarify the key purposes and audiences of evaluation.

3.2 Third sector organisations, evaluation commissioners and external evaluators should find opportunities within the evaluation process to increase participation and joint learning, which should be appropriately budgeted for and resourced. Practical application of evaluation learning should also be encouraged by:

• including utilisation strategies in evaluation plans

• placing the focus on implementation and short-term outcomes where appropriate

• producing key findings and learning points in accessible formats

• developing effective systems for using the data collected.

3.3 Third sector organisations and their funders and commissioners should attach greater value to negative findings, and discover how these can be used for learning purposes.

'The main problem is the power relationship between funder and grantee. There is an understandable reluctance to talk about failure and mistakes, but much learning occurs when this happens.'
Funder

Clarification of evaluation purpose will include whether evaluation is for:

• acquiring data to meet higher-level targets

• gathering data to contribute to evidence of programme outcomes or overall grant impacts

• research purposes and replication

• advocacy or policy change

• individual organisation learning

• sector-wide learning.


3.4 Policy makers, funders and evaluation commissioners should recognise that assessing impact may have varied relevance and feasibility across different sectors and types of interventions, and that economic analysis and tools are not universally relevant. The development of, and focus on, technical methodologies for value for money and return on investment studies is welcomed. However, care should be taken that more complex, externally-driven methods supplement, rather than replace, more internally-driven methods.

3.5 Evaluation commissioners and third sector organisations should recognise that third sector evaluation is better able to demonstrate contributions to change rather than to establish causal links. Instead of looking for cause and effect, organisations should be encouraged to ask questions about which resources were most important for providing high-quality services and which activities and services were most important for achieving desired outcomes.

3.6 Funders and commissioners should encourage third sector organisations to draw on a body of accessible research to demonstrate a link between their achievable outcomes and higher-level outcomes or impacts. The development of appropriate research and accessible knowledge banks will be essential for this to be viable.

3.7 When drawing up an evaluation brief, those commissioning evaluations should take into account the limited organisational learning that may be provided by outcome and impact evaluation unless it includes a) attention to an understanding of implementation processes, and b) an understanding of how and why an intervention works, for whom and in what conditions.


Appendix 1 Methodology

Overview

Field research for the study was carried out from November 2006 to March 2008. The methodology included the following.

• A wide review of relevant literature, in order to place the practice of research participants in a broader context of third sector and public sector monitoring and evaluation in the UK and beyond. It included a review of monitoring and evaluation resources. As part of this exercise, researchers placed on the CES website a guide to over 100 sector-specific tools and other resources on monitoring and evaluation, and links to other online generic and specialist resource guides.

• A UK-wide online survey to third sector organisations, sent out widely through electronic networks in May and June 2007, receiving 682 responses.

• An online survey to funders and commissioners, circulated in July and August 2007, receiving 89 responses.

• One hundred and seventeen face-to-face and telephone interviews with national, regional and local third sector organisations, independent funders and statutory commissioners, support organisations, evaluation consultants and software providers, including:

− thirty interviews with funders and statutory commissioners

− fifty-eight interviews with third sector organisations, including 14 interviews with third sector systems users

− fourteen interviews with evaluation consultants

− fifteen interviews with software providers.

• A review of a limited number of external evaluations providing learning on issues of approach and methodology.

• Case example development. This allowed the research to explore the issues raised in greater detail and to illustrate learning points.

Third sector interviews

The researchers developed a matrix to cover different types of third sector organisation and areas of work and used this as a guide when selecting organisations for interview. Seventeen third sector organisations were followed up from the third sector survey responses, as offering potential learning about evaluation practice and the use of evaluation findings. Further third sector interviews were held to explore a range of perspectives, such as national or regional support to evaluation; developed monitoring and evaluation systems and known good practice; the development of new initiatives; and the use of monitoring and evaluation software.

Funder and statutory commissioner interviews

The researchers interviewed a range of funders and statutory commissioners, in order to bring within the research the broad spectrum of funding and funding streams. Interviews were designed to cover funding delivered from central government departments, as well as that delivered through regional and local government, other non-departmental public bodies and through third sector agencies themselves. It meant interviewing independent funders – those running large, multi-year grant programmes as well as smaller, more localised grant givers.

The researchers were also able to gather data and test out findings through meeting with three groups of funders throughout the period of the research:

• a focus group of London Funders members, 26 September 2006

• a presentation of early findings and discussion at the London Funders Research and Evaluation Group meeting, 8 March 2007

• a workshop with members of the Association of Charitable Foundations, 28 February 2008.

Online surveys

Information about the distribution of the two online surveys and the profile of survey respondents is given in Appendix 2.


Appendix 2 Profile of survey respondents

This appendix gives details of how the two online surveys (one to third sector organisations and the second to funders) were distributed. It also describes the profile of the survey respondents.

Survey to third sector organisations

Distribution of third sector survey

Researchers distributed the survey through electronic bulletins and newsletters, ranging from those with a wide circulation across the sector to those issued by networks with a targeted geographical or sub-sector distribution, and to those local infrastructure organisations and others able to reach out to local networks.

Distribution through the more evaluation-focused networks of CES and the Performance Hub was more than balanced by the reach of the Directory of Social Change (40,000) and those of other organisations, such as the NCVO Sustainable Funding Project, NAVCA and the London Voluntary Service Council.

Sub-sectors were reached through organisations such as the National Council of Voluntary Childcare Organisations, the National Council for Voluntary Youth Services, Sitra (the umbrella body for housing and social care organisations), the London Drug and Alcohol Network, BOND (the network for voluntary organisations working in international development), the Refugee Council, Council for Ethnic Minority Voluntary Organisations, and the Social Enterprise Coalition.

Circulation through organisations such as the Wales Council for Voluntary Action and Yorkshire, Humber Regional Forum for Voluntary and Community Organisations and many other regional organisations helped to ensure that the survey had a good geographical distribution.

Profile of third sector respondents

The survey had 682 respondents.

Geographical distribution
Over half the organisations (53 per cent) had a local remit, 43 per cent a regional or national remit, and 4 per cent worked internationally. The respondents were located in the following countries and English regions:

Table 5 Geographical distribution of respondents

Country/region  Percentage of responses
London  33.6
North West  10.7
Yorkshire and the Humber  8.1
East of England  7.7
South West  7.5
South East  8.3
East Midlands  4.9
West Midlands  3.7
North East  3.3
Wales  5.9
Scotland  5.1
Northern Ireland  1.2

Funding
Organisations had a wide mix of funding: 12 per cent had only one funder, but most organisations had more than one funding source; 48 per cent had been commissioned, with a contract to deliver services, by a public sector agency.

Table 6 Funding profile of third sector organisations

Type of funding received  Percentage of responses
Local authority  52.1
Trust or foundation  50.1
Central government  37.1
Lottery funders  34.8
Income from fees, charges and sales  32.1
Individual donors  26.4
Health authority/primary care trust  22.7
European funding  19.1
Regional body/London Councils  13.8
Corporate funders  12.7
Regeneration funds  10.8
Other  9.4

Organisational characteristics
The survey established the size, age, type of organisation and area of work.

• Organisations with 6–25 employees were the largest single group of respondents (42 per cent). Nearly one-third of respondents were small organisations, either with no employees (6 per cent) or with 1–5 employees (26 per cent). The remaining 26 per cent were from larger organisations.

• Forty per cent of the responding organisations were established over 20 years ago, 26 per cent 4–10 years ago, 19 per cent 11–20 years ago and 14 per cent three years ago or less.

• Forty-four per cent of organisations defined themselves as community organisations, 11 per cent as black and minority ethnic organisations, 9 per cent as refugee and 6 per cent as faith organisations. Eleven per cent classified themselves as rural.

• Over half (59 per cent) were recorded as providing services, such as care or counselling, and almost as many (57 per cent) as providing advice, information or advocacy. Twenty-nine per cent of the organisations were local infrastructure organisations and 8 per cent acted as a national infrastructure organisation. Twenty-one per cent were campaigning organisations. Respondents placed themselves within the following fields of work shown in Table 7.

Table 7 Field of work of third sector respondents

Field of work  Percentage of responses
Education/training  39.8
Social care and development  34.3
General charitable purposes  27.8
Disability  24.7
Medical/health/sickness  21.2
Economic/community  20.2
Relief of poverty  17.6
Accommodation/housing  10.8
Environment/conservation/heritage  8.6
Arts/culture  8.2
Sports/recreation  7.8
Religious activity  3.7
Overseas aid/famine  3.1
Animals  1.0
Other  30.6

Survey to funders

Distribution of survey to funders

The survey to funders was distributed through the CES database, by direct email, website information and by electronic communication through the Association of Charitable Foundations, London Funders, the Directory of Social Change and Charities Aid Foundation.

There were 89 responses in total from third sector funding agencies, although response rates to different questions varied considerably. The funding bodies were profiled on the following dimensions:

• source of income

• size of yearly grants budget

• main recipients of funding

• basis on which funding was given.

Table 8 Funders' sources of income

Source of income  Percentage of responses
Endowment  41
Earned income  25
Central government  24
Local government  24
Private individual or family  16
Own fundraising  11
Other sources of funding  25

Note: The total deriving income from a government source (central or local) was 26 per cent.

Table 9 Funders' yearly grant budget

Size of budget  Percentage of responses
<£300K  19
£300K–£1m  25
£1m–£5m  42
£5m–£15m  6
>£15m  6

Table 10 Main recipients of funding

Type of project/organisation  Percentage of responses
Service delivery projects  72
Core funding organisations  56
Research  11
Policy and campaigns  10
Individuals  10

Basis on which funding was given

Fourteen per cent of funders distributed funds to a third party funder to allocate. Few agencies funded in a single way; for example, grants were likely to be both one-year revenue and multiple-year revenue grants, and the same proportion of both (62 per cent) was found in our respondents. In addition:

• Under half (42 per cent) funded capital grants. A minority of respondents worked in the context of service-level agreements (15 per cent), commissioned grants (24 per cent) and contracts (10 per cent). These were rarely found in agencies with less than £1m annual grant budget. (Thirty-four per cent of funders worked in the context of either service-level agreements, commissioned grants or contracts.)

• Commissioned grant giving was most likely to be found in funders over £1m, with all four of those over £15m giving commissioned grants.

• Sixty-five per cent of the sample gave grants of over £60K. Just over two-thirds of these gave grants of this size and over to 'lots' of organisations, and one-third of these gave large grants to 'a few' organisations.

• Only five funders restricted their giving to grants of over £60K, with nearly all funders (94 per cent) giving grants of less than £60K.


Appendix 3 Research participants

Interviewees

Funders
Kevin Ashby, Big Lottery Fund
Katharine Berber, Capital Community Foundation
Steve Buckerfield, London Borough of Kensington and Chelsea, Children and Family Services
Melanie Caldwell, County Durham Community Foundation
Heather Chinner, Sandwell Metropolitan Borough Council
Jo Clunie, KPMG Foundation
Andrew Cooper, The Diana, Princess of Wales Memorial Fund
Angella Daley, London Borough of Lambeth, Children and Young People Services
Lindsay Edwards, Hertfordshire County Council, Young People's Substance Misuse and Crime Reduction Services
Lea Esterhuizen, UnLtd
Jenny Field, The City Bridge Trust
Kathryn Hinds, The King's Fund
Kevin Ireland, London Housing Foundation
Jain Lemom, London Councils
Ellie Lowe, Leicestershire Children's Fund
Andrew Matheson, London Borough of Southwark, Advice and Legal Services
Sarah Mistry, Big Lottery Fund
Lin O'Hara, Futurebuilders England
Sheila-Jane O'Malley, BBC Children in Need
Janine Parkin, Rotherham Metropolitan Borough Council, Neighbourhood and Adult Services
Gwenda Perry, Bristol Children's Fund
Ian Redding, London Councils
Stephen Shields, SHINE Trust
Adrienne Skelton, Arts Council England
Lesley Talbot, Community Foundation Network
Joy Wakenshaw, Performance Improvement Team, Northumberland County Council
Jill Walsh, Capacitybuilders
James Wragg, Esmée Fairbairn Foundation
Simon Wyke, London Development Agency

Third sector organisations
Andrea Allez, National Association for Voluntary and Community Organisations
Kahiye Alim, Asylum Aid
David Bamber, Action for Blind People
Martin Brockwell, Depaul Trust
Jean Carpenter, Flipside – Lambeth Crime Prevention Trust
Andy Clark, Derby City Mission
Annie Clarke, National Children's Bureau
Richard Cotmore, National Society for the Prevention of Cruelty to Children
Lydia Daniels, National Autistic Society
Chris Dayson, Voluntary Action Rotherham
Simon Early, Plan
Jake Eliot, Performance Hub
Judy Elmont, Friend Community Mental Health Resource Centre, Weston-Super-Mare
Ron Fern, BTCV Yorkshire and Humber (formerly British Trust for Conservation Volunteers)
Charlotte Fielder, The Prince's Trust
Sarah Flood, NCVO Sustainable Funding Project
Alice Hainsworth, Centrepoint
Jo Hambly, Plymouth Community Partnership
Penelope Harrington, Hammersmith and Fulham Voluntary Resource Centre
Robin Harris, Social Enterprise London
Carol Hicks, SKILL (National Bureau for Students with Disabilities)
Sarah Hind, Quaker Social Action
Jayne Humm, Community Development Foundation
Graeme Hunter, Care for the Family
Nicole Johnson, City YMCA
Mike Kershaw, Macclesfield Care and Concern
Richard Lace, Preston Community Arts Project/FM Community Radio
Tamsin Lawson, Citizens Advice
Viv Lewis, Social Enterprise and Co-operative Development Ltd, Cumbria
Beth Longstaff, Community Development Exchange (CDX)
Tris Lumley, New Philanthropy Capital
Malcolm Matthews, SENSE
Lyn Maytum, People First, Slough


Brian McCausland, Advance Housing and Support Ltd
Colette McKeaveney, Age Concern Luton
Leon Mexter, Regional Youth Work Unit, North East
Val Neasham, Broadacres Housing Association
Ben Nicholson, Tearfund
Heather Noble, Newham University Hospital NHS Trust
Tom Owen, Help the Aged
Mark Parker, bassac
Barnaby Peacocke, Practical Action
David Pearson, Help the Aged
Helen Pearson, Soft Touch Arts, Leicester
Dennis Perkins, Age Concern Ledbury
Sally Pickering, Gloucester Association for Voluntary and Community Action
Tamara Pollard, Somerset Community Food
Helen Rice, Directory of Social Change
Jill Roberts, Refugee Action
Jay Sharma, NCVO Sustainable Funding Project
Lucy Tran, Chinese National Healthy Living Centre
Valerie Tulloch, NCH Action for Children
Nicola Wilkinson, Motor Neurone Disease Association
Tessa Willow, Volunteer Centre, Liverpool
Alex Whinnom, Greater Manchester Centre for Voluntary Organisation
Sharon Worthington, Blackpool Advocacy
Irko Zuurmond, Plan

Evaluators and consultants
Rowan Astbury, Charities Evaluation Services
Jean Barclay, freelance consultant
Michael Bell, Michael Bell Associates
Mark Bitel, Partners in Evaluation
Sara Burns, Triangle Consulting
Sally Cupitt, Charities Evaluation Services
Martin Farrell, Get2thePoint
Shehnaaz Latif, Charities Evaluation Services
Karl Leathem, Lodestar
Steven Marwick, Evaluation Support Scotland
Rob McMillan, Centre for Regional Economic and Social Research, Sheffield Hallam University
Victoria Pagan, ERS
Giovanna Speciale, freelance community research consultant
Melanie Windle, MW Associates

Monitoring and evaluation systemsprovidersIan Balm, Charity Log

John Boyle, Oxford Computer Consultants

Tim Copsey, mptye

Phil Dew, AdvicePro

Jane Finucane, Resource Information Service

Massimo Giannuzzi, ITSorted

Chris Hill, Blue Door Software Ltd

Jonathan Humfrey, Mozaic Network Inc

Mark Kemp, Gallery Partnership Ltd

Tim Kendall, twmk Ltd

Chris Leicester, Volbase Business Solutions

Peter McCafferty, Cúnamh ICT

Sarah Parker, Lamplight Database Systems Ltd

Richard Thornton, Fairbridge

The following also contributed to the study.

Andrea Butcher, Telephone Helplines Association

Helen Carlin, Shared Evaluation Resource, Kirklees Partnership

Maria Clarke, Evaluation Trust

Libby Cooper, Amber Consulting

John Goodman, Co-operatives UK

Bea Jefferson, Yorkshire Forward

Kate O’Brien and Marika Rose, The Social Enterprise People

Karl Wilding, National Council for Voluntary Organisations

References

Abdy, M and Mayall, H (2006) Funding Better Performance, Performance Hub, Charities Evaluation Services, London.

Aiken, M and Paton, R (2006) Evaluation of the National Outcomes Dissemination Programme, 2003–2006, The Open University, Milton Keynes.

Alcock, P, Brannelly, T and Ross, L (2004) Formality or Flexibility? Voluntary Sector Contracting in Social Care and Health, University of Birmingham.

Alderson, P and Schaumberg, H (2002) Working with Children and Young People on Evaluation: The Case of the Office of Children’s Rights Commissioner for London, Social Science Research Unit, Institute of Education, University of London.

Ambrose, P and Stone, J (2003) Eleven Plus to One: The Value of the Brighton and Hove Citizens Advice Bureau to the Local Economy, Health and Social Policy Research Centre, University of Brighton.

Ashford, K and Clarke, J (September 1996) ‘Grant monitoring by charities: the process of grant-making and evaluation’ in Voluntas: International Journal of Voluntary and Nonprofit Organisations, 7(3): 279–299, Springer, Netherlands.

Astbury, R (2002) Participatory and Innovative Models: An Application of the ‘Theory of Change’ in a New Context, paper to 2002 UKES Conference, 12–13 December 2002.

Astbury, R and Knight, B (2003) Fairbridge Research Project: Final Report, Charities Evaluation Services.

Astbury, R (2005) Assessing Impact: Discussion Paper 9, Charities Evaluation Services, London.

Audit Commission (2007) Hearts and Minds: Commissioning from the Voluntary Sector, Audit Commission, London.

Ball, M (1988) Evaluation in the Voluntary Sector, The Forbes Trust, London.

Beresford, P (2002) ‘User involvement in research and evaluation: liberation or regulation’ in Social Policy and Society, 1: 95–105, Cambridge University Press.

Betts, J (2007) The Effectiveness of Plan’s Child-centred Community Development: Plan Programme Review (2003–2006), Plan Ltd, Woking.

Bhutta, M (2005) Shared Aspirations: The Role of the Voluntary and Community Sector in Improving the Funding Relationship with Government, National Council for Voluntary Organisations, London.

Big Lottery Fund (2005) Evaluation, Research and Learning Strategy. Summary for Period 2005–09, www.biglotteryfund.org.uk

Big Lottery Fund (2007) Answering BIG Questions: Impacts and Lessons Learned from our Evaluation and Research, London.

Big Lottery Fund, Big Evaluation Conference: Tidy Findings in an Untidy World?, 1 November 2007, Summary Report, www.biglotteryfund.org.uk

Blackmore, A (2006) How Voluntary and Community Organisations Can Help Transform Public Services, National Council for Voluntary Organisations, London.

Bolton, M (2004) The Impact of Regulation on Voluntary Organisations, National Council for Voluntary Organisations, London.

Bonar Blalock, A (1999) ‘Evaluation research and the performance management movement: from estrangement to useful integration’ in Evaluation, 5(2): 117–149, SAGE, London.

Boyd, A (2002) Capacity-Building for Evaluation: Follow up to the HAZE Project: Research Report, Centre for Systems Studies, The Business School, University of Hull.

Brooks, M, Langerman, C and Lumley, T (2005) Funding Success, New Philanthropy Capital, London.

Cabinet Office (2007) The Future Role of the Third Sector in Social and Economic Regeneration: Final Report, HM Treasury, London.

Cabinet Office, Charity and Third Sector Unit (2006) A Summary Guide: Guidance to Funders: Improving Funding for Voluntary and Community Organisations, HM Treasury, London.

Cabinet Office, Office of the Third Sector (2000, revised 2006) Funding and Procurement: Compact Code of Good Practice, HM Government, London.

Cabinet Office, Office of the Third Sector (2006) Partnership in Public Services: An Action Plan for Third Sector Involvement, HM Government, London.

Cabinet Office, Office of the Third Sector (2006) Social Enterprise Action Plan: Scaling New Heights, HMSO, London, www.cabinetoffice.gov.uk/thirdsector

CAG Consultants with Northumbria University (2005) Defra Environmental Action Fund 2002–2005: Summative Evaluation – Executive Summary.

Cairns, B, Harris, M and Hutchison, R (2006) Servants of the Community or Agents of Government? The Role of Community-based Organisations and their Contribution to Public Services Delivery and Civil Renewal, Institute for Voluntary Action Research and bassac, London.

Campbell, M and McClintock, C (2002) ‘Shall we dance? Program evaluation meets OD in the nonprofit sector’ in OD Practitioner, 34(4): 3–7, Organization Development Network.

Cameron, A, Harrison, L, Burton, P and Marsh, A (2001) Crossing the Housing and Social Care Divide, Joseph Rowntree Foundation, York, www.jrf.org.uk/knowledge/findings/foundations/111.asp

Cameron, A, Macdonald, G, Turner, W and Lloyd, L (2006) An Evaluation of the Supporting People Health Pilots, Notes and extracts, Department for Communities and Local Government.

Carman, JG (June 2008) ‘Nonprofits, funders, and evaluation: accountability in action’ in American Review of Public Administration, SAGE Publications.

Centre for Regional Economic and Social Research (2008) Evaluation of Futurebuilders: Interim Report Summary, Office of the Third Sector, Cabinet Office, London.

Church, C (2005) Changing Places, Changing Lives, Understanding and Developing the Impact of your Organisation, bassac, London.

Community Development Exchange, Community Development Foundation and Federation for Community Development Learning (2006) The Community Development Challenge, Department of Communities and Local Government, London, http://www.communities.gov.uk/documents/communities/pdf/153241.pdf

Connell, JP and Kubisch, AC (1998) ‘Applying a Theory of Change Approach’ in Fulbright Anderson, K, Kubisch, AC and Connell, JP (eds), New Approaches to Evaluating Community Initiatives Volume 2: Theory, Measurement and Analysis, The Aspen Institute, Washington DC.

Cook, T (2000) ‘What is a good outcome?’ in Reflections on Good Grant-making, Association of Charitable Foundations, London.

Coote, A (2004) Finding Out What Works: Building Knowledge about Complex, Community-based Initiatives, King’s Fund, London.

Darlington, I et al. (2007) Use of Outcomes Measurement Systems within Housing and Homelessness Organisations, Homeless Link, London.

Davey, S, Parkinson, D and Wadia, A (2008) Using IT to Improve your Monitoring and Evaluation, Performance Hub, Charities Evaluation Services, London.

Deakin, N (1996) Meeting the Challenge of Change: Voluntary Action into the 21st Century, National Council for Voluntary Organisations, London.

Department for Children, Schools and Families (2006) Joint Planning and Commissioning Framework for Children, Young People and Maternity Services.

Department for Communities and Local Government (October 2006) Strong and Prosperous Communities: The Local Government White Paper, www.communities.gov.uk

Department of Health (6 July 2006) No Excuses. Embrace Partnership Now. Step Towards Change! Report of the Third Sector Commissioning Task Force, www.dh.gov.uk

Department of Health (2007) Commissioning Framework for Health and Well-Being, www.dh.gov.uk

Dillane, J, Hill, M, Bannister, J and Scott, S (2001) An Evaluation of the Dundee Families Project, Research Findings No 121, Scottish Executive Central Research Unit, Development Department Research Programme, Edinburgh.

Ebrahim, A (2002) ‘The role of information in the reproduction of NGO-funder relationships’ in Nonprofit and Voluntary Sector Quarterly, 31(1): 84–114, Association for Research on Nonprofit Organizations and Voluntary Action.

Eliot, J and Piper, R (2007) True Colours: Uncovering the Full Value of Your Organisation, Performance Hub, National Council for Voluntary Organisations, London.

Ellis, J (2005) Practical Monitoring and Evaluation: A Guide for Voluntary Organisations, Charities Evaluation Services, London.

Ellis, J and Latif, S (2006) Capacity Building: Lessons from a Pilot Programme with Black and Minority Ethnic Voluntary and Community Organisations, Findings September 2006: Informing Change, Joseph Rowntree Foundation, York.

Evaluation Support Scotland and Voluntary Action Fund (2007) Investing in Outcomes, Supporting Funding Projects to Record and Report on Outcomes: Pilot Report, Voluntary Action Fund, Dunfermline.

Evaluation Trust (2007) Mapping Performance Resources in the South West, paper prepared for the Performance Hub.

Evaluation Trust and the Hadley Centre for Adoption and Foster Care Studies (June 2007) A Summary of the Evaluation of Adoption UK’s Parent Support Programme ‘It’s a Piece of Cake’, www.evaluationtrust.org

Evison, I (2006) Civic Pioneers Case Study Review, Department of Communities and Local Government, London, www.communities.gov.uk/documents

Freiss, PR (2005) Evaluation: The Phoenix Development Fund: Final Report.

Gershon, P (2004) Releasing Resources to the Front Line: Independent Review of Public Sector Efficiency, HM Treasury, London.

Giffen, J (11 May 2005) Impact Assessment: A Tool for Evidence-based Programming or for Self-Marketing, INTRAC, www.intrac.org/docs/Impact%20assessment.doc

Gross, J (2006) Every Child a Reader Evaluation Report 2: The Long Term Costs of Literacy Difficulties, KPMG Foundation, London.

Hailey, J and Sorgenfrei, M (2005) Measuring Success: Issues in Performance Measurement, Occasional Papers Series No 44, INTRAC, Oxford.

Hall, M, Phillips, S, Meillat, C and Pickering, D (2003) Assessing Performance: Evaluation Practices and Perspectives in Canada’s Voluntary Sector, Canadian Centre for Philanthropy, Ontario, Canada.

Hasenfeld, YZ, Hill, K and Weaver, D (undated) A Participatory Model for Evaluating Social Problems, The James Irvine Foundation, San Francisco, California.

Hatry, H and Lampkin, L (eds) (2001) Outcome Management in Nonprofit Organisations, Urban Institute, Washington DC.

Heady, L and Rowley, S (2008) Turning the Tables: Putting Scottish Charities in Control of Reporting, New Philanthropy Capital, London.

Hills, D (2007) The Evaluation of the Big Lottery Fund Healthy Living Centres Final Report, Big Lottery Fund, London.

HM Treasury (2002) Outcome Focused Management in the United Kingdom, HM Treasury, London, www.hm-treasury.gov.uk/media/

HM Treasury (May 2006) Improving Financial Relationships with the Third Sector: Guidance to Funders and Purchasers, www.hm-treasury.gov.uk/media/9/4/guidancefunders

HM Treasury (December 2006) The Future Role of the Third Sector in Social and Economic Regeneration: Interim Report, Cabinet Office, www.hm-treasury.gov.uk/media53E/94/pbr06_3rd_sector_428.pdf

HM Treasury, Cabinet Office (July 2007) The Future Role of the Third Sector in Social and Economic Regeneration: Final Report, HM Treasury, London.

Home Office (1990) Efficiency Scrutiny of Government Funding of the Voluntary Sector: Profiting from Partnership, HMSO, London.

Hughes, M and Traynor, T (2000) ‘Reconciling process and outcome in evaluating community initiatives’ in Evaluation, 6(1): 37–49, SAGE Publications, London.

Knight, B (1993) Voluntary Action, Centris, London.

Kumar, S and Leat, D (2006) Investing in our Programmes: Maximising the Impact of Grant Making, Big Lottery Fund, London.

Latif, S (2008) Becoming More Effective: An Introduction to Monitoring and Evaluation for Refugee Organisations, Charities Evaluation Services and the Refugee Council, London.

Leat, D (2003) Replicating Successful Voluntary Sector Projects, Association of Charitable Foundations, London.

London Borough of Camden (1 April 2008) VCS Funding Implementation Update, Report to the Culture and Environment Scrutiny Committee.

London Borough of Newham Social Regeneration Unit (2005) Making a Difference for Newham People: Newham Neighbourhood Renewal Poverty and Income Maximisation Programme, Year 1 Impact Assessment 2004/05, London.

Longstaff, B (2008) The Community Development Challenge: Evaluation, Community Development Foundation, London.

Macmillan, R (2006) A Rapid Evidence Assessment of the Benefits of the Voluntary and Community Sector Infrastructure, Centre for Regional Economic and Social Research, Sheffield Hallam University.

Mansfield, D (1997) Evaluation: Tried and Tested? A Review of Save the Children Evaluation Reports, Working Paper 17, Save the Children, London.

Martin, S and Sanderson, I (1999) ‘Evaluating public policy experiments: measuring outcomes, monitoring processes or managing pilots’ in Evaluation, 5(3), July 1999, SAGE Publications, London.

Matrix Research and Consultancy (2006) Economic Impact Study of Women’s Organisations, Women’s Resource Centre and the Association of London Government, www.londoncouncils.gov.uk/grants/publications/economicandsocialimpact.htm

McAuley, C and Cleaver, D (2006) Improving Service Delivery: Introducing Outcomes-based Accountability, Improvement and Development Agency for local government, London.

McCurry, P (2002) ‘Tools to assess outcomes’ in Third Sector, 3 July 2002.

McDonnell, B (2002) A Social Capital Framework – Evaluating Community and Voluntary Sector Activity, Briefing Paper No 1, Community Evaluation Northern Ireland, Belfast.

McNamara, G, O’Hara, J and Boyle, R (2007) ‘Influences shaping national evaluation policies: the case of Ireland’ in The Evaluator, Spring 2007, UK Evaluation Society.

Morgan, GG (2008) The Spirit of Charity, Professorial Lecture, delivered at Sheffield Hallam University, 3 April 2008, www.shu.ac.uk/research/

Mukooba-Isabirye, A, Gnamatsi, K, Katunguka, S and Mashahadiagha, AT (2000) What Worked for Us? A Joint Evaluation on Empowerment in Two Integra Projects, Refugee Education and Training Advisory Service, Redbridge Sign-Posting Centre and Charities Evaluation Services, London.

National Association for Voluntary and Community Action (2006) For Good Measure: Contract, without Squeezing, Voluntary and Community Groups, National Association for Voluntary and Community Action, Sheffield.

National Audit Office (2005) Working with the Third Sector, National Audit Office, London.

National Homelessness Advice Service (2005) Homelessness and Housing Advice Outcome Monitoring Pilot, National Homelessness Advice Service, London.

Nicholls, J (November 2007) Why Measuring and Communicating Social Value Can Help Social Enterprise Become More Competitive: A Social Enterprise Think Piece for the Office of the Third Sector, Cabinet Office, Office of the Third Sector, London.

Nixon, J, Hunter, C and Parr, S (2006) Anti-social Behaviour Intensive Family Support Projects: An Evaluation of Six Pioneering Projects, Department for Communities and Local Government, London, www.communities.gov.uk

Office of Government Commerce and the Active Communities Directorate (2004) Think Smart. Think Voluntary Sector: Good Practice Guidance on Procurement of Services from the Voluntary and Community Sector, www.commercial.homeoffice.gov.uk/documents/

Ontrac (September 2007) Issue 37, INTRAC, Oxford.

Oxfam (July 2005) Programme Impact Report: Oxfam GB’s Work with Partners and Allies Around the World, www.oxfam.org.uk/resources/accounts/impact report/

Oxfam Global Learning Report (2006), www.Oxfam.org.uk/resources/evaluations/programme_learning_report06.html

Palfrey, C (2000) Key Concepts in Health Care Policy and Planning: An Introductory Text, MacMillan, London.

Parkinson, D (2005) Putting an Outcomes Focus into Practice: The Need for IT-based Outcome Data Management Systems, Charities Evaluation Services and London Housing Foundation, London, http://www.lhf.org.uk/Publications/OcITMarketresearchforwebsite.pdf

Patton, MQ (1997) Utilization-focused Evaluation: The New Century Text, third edition, SAGE Publications, Thousand Oaks, CA.

Pawson, H, Davidson, E and Netto, G (March 2007) Evaluation of Homelessness Prevention Activities in Scotland, Scottish Executive, www.scotland.gov.uk/Publications/2007/03/26095144/4

Potter, P (2005) ‘Facilitating transferable learning through cluster evaluation: new opportunities in the development partnerships of the EU ‘EQUAL’ programme’, Evaluation, 11(2): 189–205, SAGE Publications, London.

Reichardt, O, Kane, D, Pratten, B and Wilding, K (2008) The UK Civil Society Almanac 2008, National Council for Voluntary Organisations, London.

Roche, C (1999) Impact Assessment for Development Agencies, Oxfam Publishing, Oxford.

Royal National Institute of Blind People (2003) Welfare Rights Take-up Project in Yorkshire and Humberside April 2000–March 2003, RNIB, London.

Russell, J (1998) Self-evaluation: Discussion Paper 3, Charities Evaluation Services, London.

Sanderson, I (2001) ‘Performance management, evaluation and learning in ‘modern’ local government’ in Public Administration, 79(2): 297–313, Blackwell Publishing.

Sandwell Interagency Commissioning Group (March 2006) Evaluation of Commissioning the Voluntary Sector with a Capacity Building Model of Commissioning, Sandwell Children’s Fund.

Sanfilippo, L (2005) Proving and Improving: A Quality and Impact Toolkit for Social Enterprise, new economics foundation, London.

Sanfilippo, L and Chambers, T (February 2008) Banking on Outcomes for the Third Sector: Useful? Possible? Feasible?, Charities Evaluation Services and new economics foundation, London.

Scott, C and Molteno, M (October 2005) In the Right Direction: Examples of the Impact of Save the Children’s Work, Save the Children, London.

Scriven, M (1991) Evaluation Thesaurus, SAGE Publications, Newbury Park, CA.

Sefton, T (November 2000) Getting Less for More: Economic Evaluation in the Social Welfare Field, CASE paper 44, Centre for Analysis of Social Exclusion, London School of Economics.

Tanner, S (May 2007) Common Themes on Commissioning the VCS in Selected Local Authorities in Greater London, London Councils, London.

The Advisory Board Foundation (1999) Fifteen Conclusions on Evaluating Nonprofit Performance, Washington DC.

The National Centre for Business and Sustainability (2006) Guidance Document: Demonstrating Co-operative Difference: Key Social and Co-operative Performance Indicators, Co-operatives UK Ltd, Manchester.

The Urban Institute and The Center for What Works (2006) Building a Common Outcome Framework to Measure Nonprofit Performance, The Urban Institute, Washington DC.

Tilley, N (September 2000) Realistic Evaluation: An Overview, paper presented at the Founding Conference of the Danish Evaluation Society.

Triangle Consulting for the London Housing Foundation (2006) London Housing Foundation Response Focussing on the use of Outcomes within Supporting People – Response to Consultation on a Strategy for the Supporting People Programme, London Housing Foundation, London, www.homelessnessoutcomes.org.uk

United Way of America (2000) Agency Experiences with Outcome Measurement: Survey Findings, United Way, Alexandria, Virginia, http://national.unitedway.org/files/pdf/outcomes/agencyOM.pdf

University of Birmingham (2006) Local Evaluation of Children’s Services: Learning from the Children’s Fund, Research report 783, www.ne-cf.org

Unwin, J (2004) The Grant-making Tango: Issues for Funders, Baring Foundation, London.

Unwin, J (2006) The Intelligent Funder: A Discussion Paper.

Unwin, J and Field, J (2003) ‘Can collaborative funding partnerships work?’ in Trust and Foundation News, Autumn 2003.

Wales Funders Forum (2002) Which Investor Approach: Not a Way Forward for the Community Fund? Or, the Emperor’s New Clothes, Investing in Change Briefing Paper No 3, 8 August 2002.

White, M (2005) ‘Well being or well meaning’ in Animated Dance, Spring 2005, www.communitydance.org.uk/

Wiggan, J and Talbot, C (2006) The Benefits of Welfare Rights Advice: A Review of the Literature, Manchester Business School, University of Manchester, commissioned by the National Association of Welfare Rights Advisors, www.nawra.org/nawra/docs_pdf/Benefitsofwelfarerightsadvicelitreview.pdf

Wilkinson, D (2007) Adding Value? Being Richer? Summary Report: Lessons from Northern Rock Foundation’s Money and Jobs Programme, Northern Rock Foundation, Newcastle upon Tyne.

About the authors

Dr Jean Ellis has been a senior consultant with Charities Evaluation Services since 1995. She has worked both in the international development sector and with UK-based third sector organisations, carrying out external evaluations and supporting the development of monitoring and evaluation and quality systems. Jean is the author of CES’ Practical Monitoring and Evaluation: A Guide for Voluntary Organisations (2005) and Performance Improvement: A Handbook for Mentors (2006).

Tracey Gregory is an independent research and evaluation consultant. She has worked with third sector organisations since 1995, initially on fundraising and business planning and then programme delivery. More recently she has focused on research and evaluation to inform policy, strategy and programme planning. Tracey has a Masters degree in Research and Evaluation.

Charities Evaluation Services
4 Coldbath Square
London EC1R 5HL

Website: www.ces-vol.org.uk
Tel: 020 7713 5722
Fax: 020 7713 5692
Email: [email protected]

funded by